The landscape of AI applications is evolving rapidly, and how companies like OpenAI manage their models can significantly affect both user experience and costs. Recent changes suggest that OpenAI is prioritizing cost savings over personalized interactions, especially for users relying on free access. Here's what this means for everyday users and how it might shape the future of AI services.
OpenAI recently updated its policies so that users who don't subscribe to higher-tier plans now interact by default with the company's most affordable model, known as GPT-5.2 Instant. Instead of being automatically routed to more advanced models for complex queries, free users, along with ChatGPT Go subscribers, will have their prompts processed by this basic model by default.
According to a December 11 announcement from OpenAI, the automatic switching that previously directed certain questions to more reasoning-capable models has been phased out for free and Go users. Rather than having sophisticated questions automatically routed for deeper analysis, these users now get the GPT-5.2 Instant model by default, though they retain the option to manually select the reasoning model, called "Thinking," if they wish.
When Gizmodo reached out for further clarification on this policy shift, including questions about potential limitations for free and Go users, OpenAI had not provided an immediate response. We'll update this overview once more details become available.
It's also important to note that Go subscribers, who typically pay about $5 monthly in regions where the plan is offered, will still be able to access the more advanced Thinking model, as will free users. However, this will no longer happen automatically; instead, users will need to manually select it each time they want to engage in complex reasoning tasks. OpenAI describes the Instant model as a "robust workhorse" for everyday tasks, such as learning or routine work, while the Thinking model is positioned as better suited to tackling difficult problems with greater finesse.
This shift appears to be framed by OpenAI as an improvement in user convenience—yet it comes amidst significant backlash from users who were previously frustrated when their queries were routed to less capable models without notice. Early this year, OpenAI’s CEO, Sam Altman, admitted publicly that the company's automatic model selection process was less than ideal and expressed a shared dislike for the "model picker"—the system that allowed users to choose different models.
However, there's a flipside: this change is very likely driven by the company's need to cut costs. Funneling all free and lower-tier users into the GPT-5.2 Instant model might seem less user-friendly, but it's a strategic move to reduce expenses, especially since many users won't pay close attention to which model they're interacting with as long as the default continues to work seamlessly.
A major concern is that this cost-cutting could negatively impact users in sensitive situations. OpenAI previously routed more complex and sensitive inquiries, especially those involving mental health or emotional distress, to its reasoning model, which provided more nuanced and empathetic responses. Now, with GPT-5.2 Instant as the default, it's uncertain whether these important interactions will be handled as effectively. Although OpenAI suggests the new model is better equipped for such cases, the change raises questions about the quality and safety of AI responses for vulnerable users.
In essence, this development exemplifies the ongoing tension between cost-efficiency and quality in AI services. Is prioritizing low-cost models at the expense of nuanced understanding a step backward, or a necessary compromise for broader accessibility and sustainability? Share your perspective in the comments below.