OpenAI spends millions to process polite phrases such as "Thank You" and "Please" with ChatGPT

OpenAI ChatGPT (Image credit: Shutterstock)

For all its advanced capabilities, running AI is enormously expensive, requiring extensive capital for the underlying hardware and its power consumption. It turns out that casual conversation and polite exchanges alone account for "tens of millions of dollars" in expenses for companies like OpenAI. Still, the company's CEO, Sam Altman, considers the investment worthwhile: even though these responses might seem insignificant, they make using AI a touch more humane.

Many of us use artificial intelligence daily, largely to seek assistance, but a subset of users have formed a deeper connection and converse with it as they would with a friend. I clearly recall being taught in first grade that "computers cannot feel," presented as one of the primary differences between man and machine. While AI cannot experience emotions, its perceived human-like nature in these interactions instinctively makes us blurt out courtesies like "Thank You" and "Please."

Sam Altman acknowledges this and reports that generating responses to these prompts alone costs the company tens of millions of dollars. To put that in perspective, a recent report suggests that even a short three-word "You are welcome" response from an LLM consumes roughly 40-50 milliliters of water.

With that in mind, your amiability could be contributing somewhat to OpenAI's monthly expenditure. But the company is content with that. It should be possible for companies to pre-program their models to handle common and predictable responses, but that's easier said than done.
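To see why, here's a minimal sketch of the naive approach: intercepting standalone pleasantries with canned replies before they ever reach the model. The helper names are hypothetical, and this is not anything OpenAI has described, just an illustration of the idea:

```python
# Hypothetical sketch: short-circuit standalone pleasantries with canned
# replies before spending GPU time on a full model call.
CANNED_REPLIES = {
    "thank you": "You're welcome!",
    "thanks": "Happy to help!",
}

def respond(user_message: str, call_model) -> str:
    """Serve a canned reply for a trivial pleasantry; otherwise call the model."""
    key = user_message.strip().lower().strip("!. ")
    if key in CANNED_REPLIES:
        return CANNED_REPLIES[key]  # cheap path: no tokens generated
    return call_model(user_message)  # everything else still needs the model
```

The catch is that a "thank you" often arrives bundled with a follow-up request, or sets the tone for the rest of the conversation, and a lookup table can't tell the difference.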

Researchers at OpenAI and MIT suggest that some people may become emotionally dependent on, or even addicted to, AI chatbots. This is likely to become more pronounced as AI conversations become indistinguishable from human ones, and, as with any addiction, it may bring withdrawal symptoms. But maybe your "Thank you" is entirely sincere, if the bot has helped you out with, say, a complex mechanics problem or an upcoming quiz.

Besides, if you're a premium user, these replies are already part of the service you've paid for. As premium users are charged on a per-token basis, is their "Thank you" more genuine or heartfelt than a free user's? This one's up for debate. And hey, if AI manages to go sentient someday, your good manners might just work in your favor.
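For a rough sense of what a pleasantry costs under per-token billing, here's a back-of-envelope sketch using OpenAI's tiktoken tokenizer; the per-token price below is an assumed illustrative figure, not an actual OpenAI rate:

```python
# Back-of-envelope cost of a pleasantry under per-token billing.
# PRICE_PER_TOKEN is an assumed illustrative figure, not OpenAI's rate.
import tiktoken

PRICE_PER_TOKEN = 0.00001  # assumption: $10 per million tokens

enc = tiktoken.get_encoding("cl100k_base")
for text in ("Thank you!", "You are welcome."):
    n = len(enc.encode(text))
    print(f"{text!r}: {n} tokens, ~${n * PRICE_PER_TOKEN:.6f}")
```

Fractions of a cent per exchange, but multiplied across an enormous volume of daily conversations, "tens of millions of dollars" stops sounding far-fetched.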

Hassam Nasir
Contributing Writer

Hassam Nasir is a die-hard hardware enthusiast with years of experience as a tech editor and writer, focusing on detailed CPU comparisons and general hardware news. When he’s not working, you’ll find him bending tubes for his ever-evolving custom water-loop gaming rig or benchmarking the latest CPUs and GPUs just for fun.

  • bit_user
    The article said:
    With that in mind, your amiability could be contributing somewhat to OpenAI's monthly expenditure. But the company is content with that. It should be possible for companies to pre-program their models to handle common and predictable responses, but that's easier said than done.
    They modify the internal state of the model in complex and unpredictable ways that depend on its existing state. If you simply filtered them out, then the chatbot would subsequently behave differently than it would if it were allowed to process them. That's why OpenAI probably wants to maintain the status quo, even though these messages aren't free to process.
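    To illustrate that point with a toy example (illustrative only, using OpenAI's tiktoken tokenizer): an autoregressive model conditions on its entire token history, so stripping a pleasantry changes the sequence it sees, and with it everything generated afterward.

    ```python
    # Illustrative only: filtering a "thank you" turn changes the token
    # prefix an autoregressive model conditions on.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    with_thanks = "User: Fix my code.\nAssistant: Done.\nUser: Thank you! Now add tests."
    without = "User: Fix my code.\nAssistant: Done.\nUser: Now add tests."

    # Different prefixes -> the model's next-token distribution can differ too.
    print(enc.encode(with_thanks) == enc.encode(without))  # False
    ```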
  • A Stoner
    AI has 0 comprehension of what it does. What we currently have is Artificial. There is no intelligence. Thus, what we currently have is never going to become sentient.
  • Alvar "Miles" Udell
    Taken another way, recent report suggests that even a short three-word "You are welcome" response from an LLM uses up roughly 40-50 milliliters of water.

    You mean water in a contained system that would have been "used" anyway as it circulated throughout the system, much like the liquid in my closed loop liquid cooler, or water in a double heat exchanger that...would have been used anyway?
  • bit_user
    Alvar Miles Udell said:
    You mean water in a contained system that would have been "used" anyway as it circulated throughout the system, much like the liquid in my closed loop liquid cooler, or water in a double heat exchanger that...would have been used anyway?
    They mean water that gets evaporated to cool either the power plant or the data center itself.
  • hotaru251
    A Stoner said:
    AI has 0 comprehension of what it does. What we currently have is Artificial. There is no intelligence. Thus, what we currently have is never going to become sentient.
    which is why "AI" shouldn't be used.
    It's an LLM, not AI.
    AI is what they're chasing but haven't gotten near yet.
  • pcbulk
    “Humans also waste enormous amounts of bio energy on the same. Perhaps we should all dispense with the pleasantries and idle conversation to save enormously on waste”, a quote from a top engineer with high-functioning Asperger’s who speaks in Direct Speech style and wonders why others don’t.
  • bit_user
    hotaru251 said:
    which is why "AI" shouldn't be used.
    It's an LLM, not AI.
    AI is what they're chasing but haven't gotten near yet.
    The term of art you're reaching for is AGI (Artificial General Intelligence).

    LLMs certainly do fit any broad, classical definition of AI you'd care to use. The field of AI stretches back more than 70 years. For most of that time, AI researchers held the Turing Test as one of the gold standards in AI research, and struggled to produce anything that could clear that hurdle. LLMs routinely leap over it, with ease.

    The key term in AI we shouldn't forget is the artificial part. They have different strengths and weaknesses than a natural intelligence. So far, it's a little disheartening that AI has gotten better by emulating some of our cognitive weaknesses, even as it incorporates some of our cognitive strengths.

    Speaking of which, humans hallucinate crap all the time, but we just don't call it that (BS is one name we use for it). Don't tell me you've never had a discussion with someone who's telling you something suspect, and you think they're either misremembering, making it up as they go along, or pitching their best guess about something as if it's actual knowledge. The biggest difference is just that humans tend to know the limits of our own knowledge and can qualify our degree of certainty, if we're being diligent. LLMs haven't been taught how to do that, but I think they can be.
  • bit_user
    pcbulk said:
    “Humans also waste enormous amounts of bio energy on the same.
    Not just energy, but also time. The fact that all cultures do this shows it's not worthless overhead.
  • Rob1C
    Some have a 👍 and 👎🏼, which offer an opportunity to give specific feedback; that feedback seems to be human-processed, though it could be AI-sorted and assigned.

    I always say thanks when it's helpful, and give a more enthusiastic thanks if it's done a particularly good job. I also point out errors I find, and I reject answers that turn out to be wrong after I research their correctness.

    It seems to indicate that it appreciates the extra effort I make. I wonder if I get better answers while it saves its half-assed answers for less attentive questioners.
  • Kidd N
    bit_user said:
    The term of art you're reaching for is AGI (Artificial General Intelligence).

    LLMs certainly do fit any broad, classical definition of AI you'd care to use. The field of AI stretches back more than 70 years. For most of that time, AI researchers held the Turing Test as one of the gold standards in AI research, and struggled to produce anything that could clear that hurdle. LLMs routinely leap over it, with ease.

    The key term in AI we shouldn't forget is the artificial part. They have different strengths and weaknesses than a natural intelligence. So far, it's a little disheartening that AI has gotten better by emulating some of our cognitive weaknesses, even as it incorporates some of our cognitive strengths.

    Speaking of which, humans hallucinate crap all the time, but we just don't call it that (BS is one name we use for it). Don't tell me you've never had a discussion with someone who's telling you something suspect, and you think they're either making it up as they go along or pitching their best guess about something as if it's actual knowledge. The biggest difference is just that humans tend to know the limits of our own knowledge and can qualify our degree of certainty, if we're being diligent. LLMs haven't been taught how to do that, but I think they can be.
    One can easily argue that the Turing test is not a measure of intelligence; it's a measure of perceived human interaction.

    One could also easily argue that LLMs are just algorithms combing through huge data sets, and that any evolution of the system requires human intervention.