TheVoti Report

Covering real-time discussions across the internet.

Hot Topics

  • GPT-5 Launch, Backlash, and Product Direction

    • GPT-5's rollout remains the dominant discussion across subreddits, with intense user frustration over model “lobotomization,” loss of creativity/personality, and the forced removal of older models like 4o, o3, and 4.5. Multiple highly-upvoted posts and thousands of comments document a customer revolt, especially among Plus/Pro users who feel betrayed after sudden access changes and reduced capabilities (link).

  • OpenAI Model Routing/Knockoff GPT-5 Incidents

    • A major theme is emerging confusion and anger over OpenAI’s automatic “model router,” which often routes users to (perceived) cheaper, weaker models in GPT-5, leading to complaints of a “knockoff” experience compared to Copilot or the API (link).

  • Restoring/Removing Legacy Models & Subscription Uncertainty

    • Users feel “slapped in the face” by OpenAI’s abrupt removal of beloved legacy models without notice. Calls for model choice, petitions, and mass subscription cancellations are heavily noted (link).

  • Claude Opus 4.1 Ascendancy and Codex/Cursor/CLI/Claude Code Tooling

    • Developers and advanced users are shifting focus to Anthropic’s Claude Opus 4.1 and new coding/agentic tools (e.g., Claude Code UI, Codex CLI, Cursor CLI) as better alternatives for autonomy/coding tasks (link).

  • Open Source Model Momentum

    • OpenAI’s release of GPT-OSS-120B/20B is driving benchmarking, with Jan v1 and GLM-4.5 AIR getting strong community interest. Open-source toolchains around DeepSeek/GPT-OSS/Claude Code/Kimi/K2 are rising fast (link).

Overall Public Sentiment

  • Praised

    • Claude Opus 4.1: Consistently celebrated for coding capability, context retention, autonomy, and above-average reasoning/logical depth—noted as best in class for shipping production code (link).

    • GPT-5-mini: Applauded as a major value/cost breakthrough—a “budget king” for SQL/JSON/data workloads, beating Gemini Flash and delivering nearly 94% performance at 20-25% of the price (link).

    • Jan v1, GLM 4.5 AIR: Strong feedback on speed, efficiency, and tool use for local/private development (link), (link).

  • Criticized

    • GPT-5 (Standard/Pro/Thinking): Heavy volume of negative sentiment—slow, bland, overcensored, context-breaking, and seen as a downgrade versus GPT-4o/4.5/o3, especially on creative writing and emotional nuance, with many users canceling Plus/Pro (link).

    • Model Router System: Perceived as confusing and deceptive, routing users to underpowered/cheaper models despite promises of “the best”—described as an “active cost-cutting downgrade” (link).

    • OpenAI Customer Relations: Users are insulted by sudden legacy model removals, being “gaslit” about actual model improvements, and broken promises relating to model availability (link).

Notable Comparisons Between Models

  • GPT-5 vs. Legacy GPT Models

    • Users report GPT-5 produces shorter, colder, less creative output versus 4o/4.5, and that it struggles significantly with instruction following, context retention, creative writing, and even basic coding/refactoring requests (link).

    • GPT-5’s safety filtering and refusal rates are higher, impacting use cases like creative fiction, roleplay, and mental health support (link).

  • GPT-5-mini vs. Gemini 2.5 Flash

    • In API testing, GPT-5-mini surpasses Gemini Flash on SQL/JSON generation, success rate, and cost; Gemini 2.5 Pro still leads overall but is far more expensive (link).

  • Claude Opus 4.1 vs. GPT-5/GPT-4o

    • In production coding, algorithmic logic, and multi-turn workflows, Claude Opus 4.1 is reported to outperform both GPT-5 and 4o on reasoning, accuracy, and creativity (link), (link).

  • Open-Source OSS Models (Jan v1, GPT-OSS-120B, GLM-4.5 Air) vs. Closed Frontier

    • GPT-OSS-120B now matches o4-mini and other prior-gen models for API-level task completion, offering multi-agent and reasoning support for a fraction of the resource requirements (link).

  • OpenAI Prompt Optimizer

    • OpenAI released a prompt optimizer for GPT-5 that turns vague or inefficient prompt chains into structured, role-based, optimized prompts and offers A/B testing—praised for improving productivity and reusability (link).

  • Claude’s New Cross-Chat Memory

    • Anthropic rolled out cross-chat/project memory for Claude (Max, Team, Enterprise), letting users reference previous conversations to continue workstreams—seen as a key win for maintaining workflow continuity over OpenAI’s models (link).

  • CLI & Agentic Workflows

    • Rise of CLI tools (e.g., Claude Code UI, Cursor CLI, Codanna) enabling direct shell/IDE integration, hot reload, agentic orchestration, and subagent chaining—emphasized as key for scaling multi-tool dev environments (link), (link), (link).

  • OpenAI/GPT-OSS Open-Weight Models

    • Community enthusiasm for GPT-OSS-120B, Unsloth, and Jan v1 benchmarks, which enable local inference and fine-tuning at performance levels previously reserved for closed SOTA models—especially for resource-constrained/enterprise use (link).

  • Prompting Techniques: Systematization/Chains/Councils

    • Extensive posts on designing prompt chains (multi-step, role-structured, council/dialogue formats) for creativity, decision-making, scriptwriting, and productivity—users are moving beyond “single prompt” to “system prompt as knowledge base” (link), (link).

Shift in Public Perception

  • From Enthusiastic Loyalty to Widespread Distrust

    • Sentiment towards OpenAI has turned sharply negative—where previously users were emotionally attached to models (esp. 4o), the forced convergence on GPT-5 led to anger, loss of trust, and feelings of betrayal (link).

    • “Enshittification” (diminished user options/features, rising costs, reduction in quality) is explicitly cited, with users reporting mass subscription cancellations and migration to competitors (link).

  • Developers and Power Users Shift to Alternatives

    • Power users, devs, and researchers increasingly cite Anthropic’s Claude Code/Opus, Google’s Gemini Pro, DeepSeek, and local models (GLM-4.5, Jan v1, GPT-OSS-120B) as preferred tools, particularly for coding and reasoning (link), (link).

  • Demand for Model Customization and Model Choice

    • The “one model to rule them all” approach is widely rejected; even high-paying users now demand the right to select models for different workflows and creative applications, not one standardized tool (link).

Coding Corner (Developer Sentiment Snapshot)

  • Models with Strong Performance

    • Claude Opus 4.1: Statistically dominant on agentic tasks, production code output, and logic challenges (link).

    • GPT-5-mini: Ranked as the best “budget” model for high-volume coding/data tasks (SQL, JSON) (link).

  • Frustrations / Criticisms

    • GPT-5’s Coding Regressions: Developers document severe loss of creativity, context-dropping, forgotten instructions, and blunt, generic responses—many revert to Claude, Gemini, or open-source for real-world use (link), (link).

    • Model/Router Selection: Frustrates API, professional, and CLI users—automatic model selection breaks workflows and hinders reproducibility, with frequent complaints over the “auto-router” picking suboptimal models (link).

  • Tooling Integrations

    • Claude Code UI now supports Cursor CLI and Claude subagents, enabling auto-orchestration, workflow hooks, and slash commands—developers are sharing full .claude/ templates and config repos (link), (link), (link).

    • Jan v1, GLM-4.5, DeepSeek CLI, and fine-tuning scripts are at the core of local rapid prototyping (link).

  • Workflow Advice

    • Users now emphasize “spec-first” coding—greater reliability and quality when detailed specs and granular prompts are used (link).

    • Prompt/system config chains, subagents, status bar plugins, custom hooks, and orchestration are becoming the norm in advanced dev environments (link).

Tips and Tricks Shared

  • Forcing Structured Output & Reducing “Fries with That?” Prompts

    • Use role/task/constraint formatting and strip “please/thank you” for efficiency; OpenAI’s prompt optimizer now standardizes prompts and improves clarity and output reliability (link), (link).

  • Em Dash Removal in Writing

    • To actually remove em dashes, instruct the AI to replace all em dashes, en dashes, and hyphens with a space—effectively a Python string replace—directly in your prompt (link).
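The same cleanup can also be run locally as a post-processing step. A minimal sketch (the function name is illustrative, not from the source):

```python
# Replace em dashes, en dashes, and hyphens with a single space,
# as the tip describes. Note this also splits hyphenated words.
def strip_dashes(text: str) -> str:
    for dash in ("\u2014", "\u2013", "-"):  # em dash, en dash, hyphen
        text = text.replace(dash, " ")
    return text

print(strip_dashes("AI\u2014fast, reliable\u2013and dash-free"))
# AI fast, reliable and dash free
```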

  • Council of Ghosts Prompt for Self-Reflection

    • Create a “Council of Ghosts” prompt to let the AI aggregate advice from a variety of modeled personas (famous figures, mentors, e.g. Harvey Specter, Carl Sagan) for high-stakes personal or business decisions (link).
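One hedged sketch of such a prompt, using the personas named above (exact wording is illustrative):

```
You are a council of ghosts. Convene the following personas: Harvey Specter
(negotiation), Carl Sagan (scientific skepticism), and a mentor of my choosing.
For the decision below, have each persona give advice in turn, note where they
disagree, then synthesize a single recommendation.

Decision: [describe the high-stakes decision here]
```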

  • Coding Project: Use CLI Tools with Local Models and Claude’s .claude/ Directory

    • Enrich the .claude/ directory with subagents, custom commands, and hooks for validation and deterministic workflows to avoid hallucinations, bloat, and fragile agent runs (link), (link), (link).
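For orientation, a sketch of such a directory, following Claude Code's documented conventions (file names are illustrative):

```
.claude/
├── settings.json        # project settings; hooks (e.g. pre/post tool-use) are configured here
├── commands/
│   └── review.md        # custom slash command, invoked as /review
└── agents/
    └── test-runner.md   # subagent definition with its own prompt and tool access
```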

  • Productivity: 3-Step Prompt Template

    • For any AI: structure your prompts as Role → Task → Constraints for robust, transferable workflows (link).
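The three-part structure can be sketched as a small helper; the function and label names are my own, only the Role → Task → Constraints ordering comes from the tip:

```python
# Assemble a Role -> Task -> Constraints prompt from its parts.
def build_prompt(role: str, task: str, constraints: list[str]) -> str:
    lines = [f"Role: {role}", f"Task: {task}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

print(build_prompt(
    "senior SQL reviewer",
    "optimize the query below for PostgreSQL",
    ["keep results identical", "respond with SQL only, no prose"],
))
```

Keeping the sections labeled and ordered makes the template transferable: swap the role or constraints without rewriting the whole prompt.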

-TheVoti

Please send any feedback to [email protected]