AI tool tells user to learn coding instead of asking it to generate the code


A user who recently began using Cursor AI on a Pro Trial quickly ran into a limitation: the software stopped generating code at around 750 to 800 lines. But instead of pointing to a possible limitation of the Trial version, the AI told him to learn how to code himself, saying it would not do his work for him because "generating code for others can lead to dependency and reduced learning opportunities."

When the user asked for code for skid mark fade effects in a racing game, Cursor AI halted its code generation. Instead of continuing, Cursor responded that further coding should be done manually, stressing the importance of personal coding practice for mastering the logic and understanding the system.

"I cannot generate code for you, as that would be completing your work," the Cursor AI told the user. "The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly. Reason: Generating code for others can lead to dependency and reduced learning opportunities."

Hitting this limit just an hour into a casual coding session left the user dissatisfied, so he shared his frustration openly on the Cursor AI support forum. He questioned the point of AI coding tools if they impose such restrictions and asked whether they even understand what they are for.

It is unlikely that Cursor got lazy or tired, though. There are several possible explanations: the developers could have intentionally programmed this behavior into the Pro Trial version as a policy, or the LLM may simply be operating out of bounds due to a hallucination.

"I have three files with 1500+ [lines of codes] in my codebase (still waiting for a refactoring) and never experienced such thing," one user replied. "Could it be related with some extended inference from your rule set."

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • Megidra
    I never thought an ai would more or less tell someone using it to “get good”. What a great time to be alive.
  • RoLleRKoaSTeR
    Like that ST:V episode when Janeway asks for coffee, black. The replicator responds back with "Make it yourself".
  • acadia11
    ai catching an attitude … gai on the horizon …
  • salgado18
    Those responsible for the AI should come forward about why it responded like that. Is it intentional? That's not good, a user should get what he pays for, or told about limits. It's not intentional? Nuke it from orbit, asap.
  • ohio_buckeye
    Anyone ever see the terminator movies? Just saying.
  • Notton
    The answer is correct though.
    It is well documented that AI written code is inconsistent.
    Sometimes it's really good, but other times it's a garbled mess.

    Besides that, if you learn some code, you might notice mistakes that AI written code made.
  • King_V
    It feels like they were just scolded by their Dad. :LOL: :LOL:
  • Findecanor
    We should not attribute anthropomorphic qualities to a machine when there are none.
    A LLM is only able to regurgitate a mash-up of what it has been fed.

    Internet forums are full of posts from students and other n00bs asking others to solve problems for them, with replies of this kind.
  • chaz_music
    We should remember that most of what has been used to train AI-LLMs is from websites (like Reddit), forums, and anything else that they could gain knowledge, whether good or bad. Since we have seen people make comments and throw caustic jabs nearly everywhere we look, I am not at all surprised that an LLM would return comments such as these in the article. It is showing us the worse of humanity right back to us.

    Just like a few weeks ago that a kid who was asking for homework help was told by an LLM to just kill himself already. Now imagine using that same LLM to help with diagnosing medical patients like they are already considering.
  • abufrejoval
    Now AIs only need to start paying taxes!