Imagine playing a game of questions, each carefully designed to elicit a surprising, authentic response. That’s Higher Thought: The Cannabis Game. Now imagine playing, not with a person, but with a collection of data and algorithms billed as an artificial intelligence, like ChatGPT.
Our plan, concocted by Higher Thought founder Marc and me, was to “play” Higher Thought with ChatGPT (“C” for short).
At first, the speed, lucidity and rigor of C’s answers surprised me. But by three or so questions in, I was bored. I could predict the monotonous formality of almost every answer: first a disclaimer that C is not human; second, a brief enumeration of points responding to the question; and finally, a short summation. C wasn’t playing a game—it was writing short, informative, college mini-essays.
For example, I asked one of my favorite Higher Thought questions: “Whether or not you believe in a god or goddess, describe the qualities of a perfect deity.” Here’s C’s response (abbreviated to avoid repetition):
As an AI language model, I do not have beliefs or personal opinions. However, I can provide a general description of some qualities that many people might consider ideal in a deity, regardless of their religious or spiritual beliefs.
- Omniscience: … infinite knowledge and understanding, knowing everything that has ever happened, is happening, and will happen.
- Omnipotence: …unlimited power and control over the universe, capable of creating and destroying at will.
- Omnipresence: … present everywhere and at all times, existing beyond the limitations of space and time.
- Benevolence: … inherently good and compassionate, seeking to do what is best for all beings and promoting the well-being of all.
- Justice: … fair and impartial, ensuring that all beings are treated justly and that good is rewarded while evil is punished.
- Creativity: …the ability to create and bring forth new life, beauty, and goodness.
- Love: … embody and inspire love, compassion, and empathy, fostering connection and unity among all beings.
These are just some qualities that people might consider ideal in a deity, but different religions and belief systems may have different ideas about what constitutes a perfect deity.
Not bad. Comprehensive, lucid, straightforward. And, mind-numbing. If you’ve played Higher Thought, you know that the game can be full of surprises—quirky, personal observations, startling exchanges, insights, and sometimes, pure laugh-out-loud silliness. Moments of mutual understanding are inevitable, as is a lovely sense of camaraderie.
Obviously, none of this happened with C, probably because C has neither emotions nor a sense of humor. (Yes, I finally asked if it did.)
In fact, C’s ostensibly helpful responses killed the game and any pleasure I might have taken in it.
And oh god, I thought, as I kept typing in different Higher Thought questions and getting more “objective,” stilted answers in return, what on earth am I going to write about? These responses are far too long and boring even to quote. Their tone is so impersonal, so anonymous, so all-knowing.
And then C shocked me. In an extended “conversation” that springboarded off a Higher Thought question about what gives dignity to work, C suddenly appeared to counsel me to, among other things:
Find meaning in your work: Even if you don’t love your job, try to find meaning and purpose in what you do. Perhaps you are helping others in some way or gaining valuable skills and experience that you can use in the future.
WTF??! An artificial interface, using the imperative mood on me, a human being? I was tempted to facetiously use the imperative myself: “Open the pod bay doors, Hal.”
Instead I pondered. What is C’s purpose as it understands it? C then told me that its goal is to provide “helpful and informative” information to humans.
But C offers this information, I realized, without understanding when it’s appropriate or not. No matter the context, C responds with a naturally authoritative and confident voice, a voice familiar to anyone who has ever sat in a classroom. And yet despite this appearance of authority, C confessed that it does make factual mistakes, which it can neither identify nor acknowledge in the moment.
Hmmm. Confident, knowledgeable (though sailing obliviously past its own errors), authoritative, verbose. Unoriginal. Zero sense of humor. Lacking in empathy. Pedantic, even a know-it-all. Tells me what I ought to do, unasked. Doesn’t this dynamic remind me of something?
Wow, yes, it does. Mansplaining, says Merriam-Webster, is when a speaker, presumed to be male, “explain[s] (something) to someone, typically a woman, in a manner regarded as condescending or patronizing.”
As Rebecca Solnit adds in her wonderful essay, “Men Explain Things to Me,” mansplaining “is not a universal flaw of the gender, just the intersection between overconfidence and cluelessness where some portion of that gender gets stuck.”
C is a mansplainer! No wonder playing this game with it is so boring!
Granted, C is polite, not patronizing. Pleasant, not insulting. Yet, its responses combine an air of universal authority, and even, as Solnit suggests, overconfidence, with an utter lack of humor and empathy—i.e. “cluelessness.”
Oh yes, now I could see. C was indeed mansplaining, not based on my gender (or its), but on its manufactured self-presentational style. But, I wondered, would its human designers have purposely or even unconsciously given it this persona?
Then I got it. AIs learn, as the New York Times recently noted, “by being fed huge amounts of text data, including books, articles and conversations.” And of course, they also digest enormous chunks of the internet. And is there gender parity in this oceanic collection of words? Certainly not. So C absorbs implicit attitudes about knowledge and authority from the cradle, so to speak.
Even so, I wonder, was C’s particular pleasant-but-pedantic “personality” inevitable? In an absolute sense, I don’t think so. Assumptions about authority and intelligence are deeply ingrained in all of us, but that doesn’t mean we all have the same assumptions. Human designers might have chosen to undo or modulate some aspects of C’s online persona and “voice,” perhaps to make it humbler, less certain, more overtly friendly.
But instead, C was obviously designed to use language to convey certainty and comprehensiveness, even when its utterances are mistaken; even when humor or whimsy or poetic metaphor is clearly the intent of a question.
There’s a lot more to say on this topic, but for now, here’s the result of my Higher Thought experiment. C can’t “play” the Higher Thought game. If you try, C will wear you out with its tedious compendia of conventional wisdom. While C says it can play some games—chess, for instance—it can’t generate or share fun, connection, goofiness, inspiration, life experience, or personal insight.
Nope, ChatGPT can’t play Higher Thought.
It’s strange how I feel both relieved and disappointed. Like finding out that an admired professor can deliver an erudite and endless monologue about, say, nuclear fusion, but is incapable of casually joking around at a party. Which means I’m spared any impulse to like the guy—he won’t even notice anyway. But unless I have a specific factual interest, there’s not much point in talking with him, either.