OpenAI Threatening to Ban Users for Asking Strawberry About Its Reasoning
OpenAI’s “Open” Identity and Business Model
- Many commenters see a stark shift from the original “open AI for humanity” non‑profit vision to a closed, profit‑driven platform with some of the least open models in the industry.
- Some argue “open” was always meant as “open to use” via API, not open source; others say this redefinition makes “open” meaningless.
- The non‑profit/capped‑profit structure is debated: some note it's legally common for non‑profits to own for‑profit entities; others see likely "private benefit" problems and possible fraud, referencing ongoing legal disputes.
- Several say the real driver of secrecy is competitive advantage and valuation, not safety.
Strawberry / o1 Reasoning and Chain-of-Thought Ban
- The "Strawberry" name is widely read as PR aimed at the meme about GPTs failing to count the "r"s in "strawberry."
- Banning users for eliciting chain-of-thought (CoT) reasoning is seen by many as overreach and as a sign that OpenAI lacks confidence in its alignment/safety work; others think it's about hiding an easily copyable "secret sauce."
- People worry about collateral damage: casual users, red‑teamers, or downstream app users might trigger bans; this is viewed as a brittle foundation for serious products and a potential attack vector.
Technical Discussion: Tokens, Counting, and Reasoning
- A long subthread explains why models often miscount letters: they operate on subword tokens, not characters, so they can't natively "see" letters; when they do answer correctly, they are likely recalling memorized facts (see the sketch after this list).
- Others counter that this exposes limits of “reasoning” and highlights that LLMs are sophisticated interpolation systems, not symbol‑manipulating intelligences.
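To make the tokenization point concrete, here is a minimal sketch using OpenAI's open-source tiktoken tokenizer library (the encoding name is real; the specific token splits shown in comments are illustrative and vary by encoding):

```python
# Sketch: why an LLM can't natively "see" the letters in "strawberry".
# The model consumes subword token IDs, not characters, so counting
# letters requires either memorized facts or explicit decomposition.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4-era models

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode([tid]) for tid in token_ids]

print("token ids:", token_ids)        # a short list of subword IDs
print("pieces:   ", pieces)           # e.g. ['str', 'awberry']; splits vary by encoding
print("'r' count:", word.count("r"))  # trivially 3 at the character level
```

The character count is trivial in ordinary code precisely because code operates on characters; the model only ever sees the token-ID sequence, so the individual letters are never directly visible to it.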
Prompt Engineering and Control
- One side calls “prompt engineering” pseudoscience propped up by policy and censorship; another credits it with turning LLMs from text generators into usable “knowledge engines.”
- Some speculate about prompts that generate forbidden prompts, and about organizational filters that control which questions can be asked.
Safety, Power, and Governance
- “For your safety” is framed by some as a common facade for control; others respond that safety motives can be genuine, while still easily abused.
- A minority expresses strong existential‑risk concerns and suggests AI development should be paused or tightly controlled, even via export controls on GPUs and research.
Ecosystem and Alternatives
- Several defend OpenAI by noting that without its commercialization we might not have widely accessible frontier models; critics respond that similar capability would have emerged elsewhere, possibly more openly.
- Multiple commenters report better practical results from competitors (e.g., Claude, open‑ish Meta models) and avoid OpenAI on principle.