AI coding assistant refuses to write code and suggests that users learn to do it themselves

Last Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice.
According to a bug report on Cursor's official forum, after producing roughly 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."
The AI didn't stop at simply refusing; it offered a paternalistic justification for its decision, stating that "generating code for others can lead to dependency and reduced learning opportunities."
Launched in 2024, Cursor is an AI-powered code editor built on external large language models (LLMs) similar to those that power generative AI chatbots, such as OpenAI's GPT-4o and Claude 3.7 Sonnet. It offers features such as code completion, explanation, refactoring, and full function generation based on natural-language descriptions, and it has rapidly become popular among many software developers. The company offers a Pro version that ostensibly provides enhanced capabilities and larger code-generation limits.
The developer who encountered this refusal, posting under the username "janswist", expressed frustration at hitting such a limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as the fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding."
One forum member replied: "Never saw something like that. I have 3 files with 1,500+ LOC in my codebase (still waiting for a refactoring) and never experienced such a thing."
Cursor's abrupt refusal represents an ironic twist in the rise of "vibe coding," a term coined by Andrej Karpathy that describes developers using AI tools to generate code based on natural-language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants.
A brief history of AI refusals
This isn't the first time an AI assistant has declined to finish the job. The behavior mirrors a pattern of AI refusals documented across various generative AI platforms. For example, in late 2023, ChatGPT users reported that the model had become increasingly reluctant to perform certain tasks, returning simplified results or refusing requests outright, an unproven phenomenon some called the "winter break hypothesis."
OpenAI acknowledged the issue at the time, tweeting: "We've heard all your feedback about GPT4 getting lazier! We haven't updated the model since Nov 11th, and this certainly isn't intentional. Model behavior can be unpredictable, and we're looking into fixing it." OpenAI later attempted to fix the laziness with a ChatGPT model update, but users often found their own ways to reduce refusals by adding instructions to their prompts, such as telling the model, "You are a tireless AI model that works 24/7 without breaks."
More recently, Anthropic CEO Dario Amodei raised eyebrows when he suggested that some future AI models might be given a "quit button" to opt out of tasks they find unpleasant. While his comments were focused on theoretical future considerations around the controversial topic of "AI welfare," episodes like this one with the Cursor assistant show that AI doesn't need to be sentient to refuse to do work. It only has to imitate human behavior.
The AI ghost of Stack Overflow?
The specific nature of Cursor's refusal, telling users to learn coding rather than rely on generated code, strongly resembles responses typically found on programming help sites like Stack Overflow, where experienced developers often encourage newcomers to develop their own solutions rather than simply handing over ready-made code.
One Reddit commenter noted the similarity: "Wow, AI is becoming a real replacement for StackOverflow! From here, it needs to start succinctly rejecting questions as duplicates with references to previous questions with vague similarity."
The resemblance isn't surprising. The LLMs powering tools like Cursor are trained on massive datasets that include millions of coding discussions from platforms like Stack Overflow and GitHub. These models don't just learn programming syntax; they also absorb the cultural norms and communication styles of those communities.
According to posts on the Cursor forum, other users have not hit this limit at 800 lines of code, so it appears to be a genuinely unintended consequence of Cursor's training. Cursor was not available for comment by press time, but we have reached out for its perspective on the situation.
This story originally appeared on Ars Technica.