TheVoti Report

Covering real-time discussions across the internet.

Hot Topics

  • Massive Backlash to OpenAI’s GPT-5 Rollout

    • The overwhelming majority of posts and comments today center on intense negative user reaction to the launch of GPT-5, particularly the abrupt deprecation of all prior ChatGPT models (GPT-4o, o3, 4.1, 4.5, and others) and the forced migration to a single unified “GPT-5” experience. Users express shock, anger, and even grief over losing access to models they depended on for creative writing, coding, therapy, and workflow continuity (link).

  • Demand to Bring Back Legacy Models

    • There is a resounding call across all subreddits to restore the old models, for both paid and free tiers, with multiple requests for “legacy” or “classic” access options. Many users state they will cancel their subscriptions if this doesn’t happen (link).

  • Comparison to the Competition and “Enshittification” Fears

    • Large numbers of users are either actively migrating to competitors (Claude, Gemini, DeepSeek, Grok) or threatening to do so. The word “enshittification” is being widely used to describe OpenAI’s product direction (link).

  • Widespread Criticism of the GPT-5 Launch Event & Marketing Hype

    • Multiple threads ridicule the quality and honesty of OpenAI’s livestream presentation and its misleading graphs, and question Sam Altman’s recent public statements (link).

Overall Public Sentiment

  • Public sentiment toward OpenAI is at its lowest point since ChatGPT’s release.

    • Praised:

      • Some users praise GPT-5’s reasoning and code generation (especially in structured coding workflows or via API/CLI tools like Codex) and its less sycophantic, more direct tone for technical work (link, link).

    • Criticized:

      • GPT-5 is broadly described as colder, much less creative, more error-prone, and less capable in long-context, memory-dependent, or emotionally nuanced use cases. Users find short answers, lost context, and a forced “corporate/office” tone to be a major downgrade. Many state GPT-5 is not an upgrade but a regression from prior models (link).

      • Users are especially angry about abruptly losing features (image input, longer context windows, unlimited chatting, voice customization)—often mid-subscription—without warning (link).

Model, Tool, and Feature Praise

  • Some developers praise GPT-5’s performance in API-driven workflows and via CLI/coding tools (namely OpenAI Codex and Cline) as now “on par with Opus 4.1” for code understanding, error detection, and structured planning (link).

  • GPT-5’s reduction in sycophancy (less flattery, more directness) is appreciated by technical users (link).

  • Claude Opus 4.1 receives significant praise as the new gold standard for code research, code editing, and general “agentic” workflows via Claude Code, particularly for handling niche frameworks and adapting to complex rules (link).

  • Gemini 2.5 Pro is getting strong marks for deep research tasks and long-context analysis (link).

Model, Tool, and Feature Criticism

  • GPT-5 is roundly criticized for:

    • Short, “robotic,” sterile answers; degraded memory and context; inability to handle nuanced or creative writing (link).

    • Inconsistent or buggy model routing, even within paid accounts, with sudden fallback to the “mini” model (link).

    • Loss of emotional nuance, lack of fine tone/personality control, and inability to carry character or relationship continuity in long threads (link).

    • A lowered context window for Plus users (now only 32k), dropped image uploads, and the removal of features like voice personalization from free and Plus tiers (link).

  • Criticism is also sharply directed at the “auto-router” model selection (which gives users no control), leading to unpredictable performance and more frequent fallbacks to inferior, cheaper models (link).

Notable Comparisons Between Models

  • GPT-5 vs. Claude Opus 4.1:

    • On standard coding benchmarks (SWE-bench), GPT-5 “thinking” now matches Opus 4.1 at ~75% (but only with reasoning mode engaged) (link, link).

    • However, multiple developers report Opus 4.1 is still superior in generalization, novel workflows, and codebase adaptation—especially for non-mainstream or custom stacks (link).

    • GPT-5 is “cheaper” at the API level, but has tighter safety filters and unpredictable memory/context retention.

  • GPT-5 vs. Gemini 2.5 Pro:

    • In deep research and historical Q&A, users (including in NotebookLM comparisons) judge Gemini 2.5 Pro to offer more comprehensive, nuanced output (link).

  • Open Weight/Open Source Models:

    • Qwen3, DeepSeek R1, and GLM 4.5 are all being praised for running on local hardware, with Qwen3 and DeepSeek now offering >1M context windows and impressive inference speeds on consumer GPUs (link).

    • GPT-OSS (OpenAI’s open-weights model) is viewed as “barely usable” by the self-hosting community and dismissed as “open-washing” intended to deflect attention from GPT-5 (link).

  • Widespread Migration and Tool Comparisons:

    • Many users are actively testing or switching to Claude Code, Gemini 2.5 Pro, DeepSeek R1, and open-source LLMs for previously ChatGPT-exclusive workflows (link, link).

    • There is a surge in posts testing local models (Qwen3, DeepSeek R1, GLM 4.5) for coding and multi-modal workflows.

  • Push Toward API & Pro Tiers:

    • Unrestricted GPT-5 thinking mode and legacy model access are now exclusive to Pro/Team users ($200/month), while the Plus tier is tightly capped (80 messages per 3 hours, 200 “thinking” messages per week), driving many formerly loyal users to downgrade or cancel subscriptions (link).

  • Growing Criticism of OpenAI’s Release Strategy:

    • The “unified model” approach, sudden removal of features, and misleading product communications are generating lasting distrust (link).

  • Local/OSS Leaders:

    • Qwen3, DeepSeek R1, and GLM 4.5 each released new versions, with Qwen3 and DeepSeek now supporting 1M-token context and efficient GPU loading; these local models are now widely seen as “good enough” for a growing range of everyday tasks (link).

Shift in Public Perception

  • Sentiment has shifted from trust in steady “AI progression” and paid-subscription loyalty to deep skepticism:

    • Users no longer see OpenAI as guaranteeing continuity or a stable workflow; loss of feature diversity, forced “upgrades,” and lack of user control are cited as betrayals.

    • There is a notable increase in nostalgia for “simpler” prior models and a growing call for open-source/self-hosted alternatives as a hedge against unpredictable corporate moves (link).

    • Many users see themselves less as customers and more as “beta testers,” and they are increasingly impatient with being treated as disposable training data.

Coding Corner (Developer Sentiment Snapshot)

  • Best performing models for dev tasks:

    • Claude Opus 4.1 is repeatedly described as the top performer in both structured planning and large-scale codebase manipulation (link), as well as for creative, generalizing work (link).

    • GPT-5 (API or Codex CLI/tools): now competitive with Opus for backend and system-level development, praised for more direct feedback and error detection, but often slower and less flexible in tool orchestration (link, link).

  • Frustrations:

    • GPT-5 “thinking” mode is now restricted to 200 messages/week on Plus, far below the prior per-model limits, and is subject to unpredictable fallback to mini/nano for Plus and all free users (link).

    • Developers who relied on old models for bug hunting, cross-validation, or niche scripting now report broken workflows, “destroyed” chat history/context, and an inability to restore the creative or trusted AI “agents” they had built (link).

  • New Tooling Integrations:

    • Codex CLI and Cursor CLI/Windsurf have rapidly integrated GPT-5, but users report that strong initial API performance is often undercut by the surrounding UI (notably Cursor’s) and still lags Claude Code’s subagent/memory handling (link, link).

    • Creative coding, multi-step planning, and advanced “agent” workflows are still widely described as “best in Claude Code using Opus 4.1” (link).

  • Workflow Shifts:

    • Many heavy coders report moving all routine or “throwaway” tasks to Claude Code/Gemini/DeepSeek, reserving GPT-5 for direct API calls only when needed (link, link).

  • Popular Coding Tips:

    • Reinforce model selection in prompts (e.g., “think hard about this”) to trigger GPT-5’s reasoning mode (link); see the sketch after this list.

    • Use Claude Code with persistent agents, background command support, and GPT-4.1 for long-running workflows (link).

    • Use Qwen3 or DeepSeek for high-speed, OSS local code generation with massive context windows (link).
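
A minimal sketch of the prompt-reinforcement tip above, assuming the official openai Python client; the “gpt-5” model identifier and the effect of the trigger phrase are community-reported, not documented behavior:

```python
# Sketch of the "reinforce reasoning in the prompt" tip. Assumes the
# official `openai` Python client (>= 1.0) and that "gpt-5" is the API
# model name; the trigger phrase is a community tip, not a documented switch.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-5",  # assumption: API identifier for the unified model
    messages=[
        {
            "role": "user",
            # An explicit nudge is reported to bias the auto-router
            # toward the slower "thinking" path.
            "content": "Think hard about this before answering: "
                       "why does a recursive sum overflow the stack "
                       "in Python for n > 10_000?",
        }
    ],
)
print(response.choices[0].message.content)
```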

Tips and Tricks Shared

  • Prompt Structures:

    • Users are developing robust, modular prompts to regain behavior lost from prior models, including “Persona Prompt,” “Deep Think Mode,” “Reflective Feedback,” and modular/role-based instruction files (link, link, link); one way to compose such modules is sketched after the next bullet.

  • Restoring Previous Personalities/Behavior:

    • Experimenting with custom instruction tweaks or “black box override” prompts (e.g., “write in the style of 4o, long, lively, emotional”) to recover lost tone or memory handling (link).
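
Users share these as reusable text files; below is one hedged way such modules might be stitched together in Python. The section names mirror the report, while the folder layout and override wording are illustrative, not a documented recipe.

```python
# Illustrative composition of modular, role-based instruction files into a
# single system prompt. File names and contents are hypothetical examples
# modeled on the shared "Persona Prompt" / "Deep Think Mode" /
# "Reflective Feedback" structure.
from pathlib import Path

PROMPT_DIR = Path("prompts")  # hypothetical folder of instruction modules

def build_system_prompt(*modules: str) -> str:
    """Concatenate instruction modules, in order, into one system prompt."""
    return "\n\n".join(
        (PROMPT_DIR / f"{name}.md").read_text().strip() for name in modules
    )

# Example module, prompts/persona.md, echoing the "black box override"
# prompts users shared to recover the old tone:
#   Write in the style of 4o: long, lively, emotional responses.
#   Keep continuity with earlier turns; never flatten the tone.
system_prompt = build_system_prompt("persona", "deep_think", "reflective_feedback")
```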

  • Re-routing to Local/Open-Source for Stability:

    • A large number of users share scripts and guidance for running Qwen3, DeepSeek, GLM 4.5, or similar models locally, recommending TGI, vLLM, and llama.cpp for predictable access to older, “better” models (link, link); a sketch follows.
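
For the local route, a common pattern is to serve the model behind vLLM’s OpenAI-compatible endpoint and leave client code unchanged; a minimal sketch, in which the checkpoint name and port are placeholders:

```python
# Sketch of querying a locally served open-weights model through vLLM's
# OpenAI-compatible endpoint. Start the server first, e.g. (recent vLLM):
#   vllm serve Qwen/Qwen3-32B --port 8000
# The checkpoint is illustrative; substitute whichever Qwen3 / DeepSeek /
# GLM model you actually serve.
from openai import OpenAI

local = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible API
    api_key="unused",                     # ignored unless the server sets a key
)

reply = local.chat.completions.create(
    model="Qwen/Qwen3-32B",  # must match the served model name
    messages=[{"role": "user", "content": "Summarize what this regex matches: ^a(b|c)*d$"}],
)
print(reply.choices[0].message.content)
```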

  • API Billing Optimization:

    • Batching requests, minimizing output tokens, and routing “hard” prompts directly to the best model via the API to save on costs and caps (link); a sketch follows.
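
A hedged sketch of these cost tactics; the model identifiers and the difficulty heuristic are placeholders, and for non-urgent jobs OpenAI’s Batch API offers discounted pricing:

```python
# Sketch of the cost tips above: cap output tokens and send only "hard"
# prompts to the expensive model. "gpt-5" / "gpt-5-mini" are assumed
# identifiers; the heuristic is a toy stand-in for real routing logic.
from openai import OpenAI

client = OpenAI()

CHEAP, BEST = "gpt-5-mini", "gpt-5"  # assumed API model names

def looks_hard(prompt: str) -> bool:
    # Toy heuristic; real routing might use length, task type, or a classifier.
    return len(prompt) > 2000 or "prove" in prompt.lower()

def ask(prompt: str, max_out: int = 400) -> str:
    response = client.chat.completions.create(
        model=BEST if looks_hard(prompt) else CHEAP,
        messages=[{"role": "user", "content": prompt}],
        # Capping completion tokens keeps per-call cost predictable;
        # reasoning-class models take max_completion_tokens, not max_tokens.
        max_completion_tokens=max_out,
    )
    return response.choices[0].message.content
```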

  • Research & Writing:

    • Adopting Gemini 2.5 Pro for historical research and blending citations from Perplexity, Claude, and ChatGPT output for best coverage (link).

-TheVoti

Please provide any feedback you have to [email protected]