Claude is a space to think

Business models & ads

  • Many see Anthropic’s “no ads” pledge as a deliberate contrast with OpenAI’s ad plans and heavily subsidized free usage, which some view as economically unsustainable loss-leading.
  • Several argue ads inherently distort incentives: to support margins, models must get cheaper/dumber or push more ad inventory, echoing Google Search’s decline.
  • Others counter that ads are the only way to fund global free access; advertisers will always pay more per user than consumers, so ad-funded players may win in a competitive market.
  • Some say Anthropic’s stance is easier to hold because its focus is enterprise/B2B and paid dev/coding use, not massive consumer scale.

“Good guys”, values & corporate trust

  • Posters hope Anthropic is a net positive, citing its no-ads pledge, its positions on some regulatory issues, and its limits on lethal military uses.
  • Concerns: Palantir and defense partnerships, lobbying for chip controls, courting authoritarian-linked money, shifting positions as competition grows.
  • Strong debate on whether companies can have “values” at all vs pure profit motives; many expect any idealistic stance to erode under investor pressure, comparing to Google’s “don’t be evil” and OpenAI’s trajectory.
  • Anthropic’s PBC status and AI-safety culture are noted, but skeptics still treat all commitments as marketing until backed by structural constraints.

Openness, lock‑in & ecosystem control

  • Anthropic is criticized as more closed than OpenAI: no open weights, Claude Code kept proprietary, and blocking third‑party tools like Opencode from using paid subscriptions.
  • Some see this as classic walled-garden, lock-in behavior and a bad signal of future “enshittification,” pushing them back toward “best model wins” loyalty rather than “values” loyalty.
  • Others attempt to steelman anti–open‑weights arguments: open models can’t be monitored, can be fine‑tuned for harm, and lower the barrier to scaled abuse.

Military, politics & ethics

  • Work with the US military and Palantir is a major fault line: some view it as inherently unethical; others frame it as ordinary defense work or unavoidable at scale.
  • A few posters provocatively argue Chinese labs might be “better” ethically; others reject this as naïve given those labs’ entanglement with state interests.

Product experience & “space to think”

  • Users often prefer Claude for coding, deep work, and brainstorming, describing its “thinking” as richer, while using ChatGPT more like a search engine.
  • Complaints include strict safety filters (especially on cybersecurity topics) and tight usage limits compared with ChatGPT’s generous quotas.
  • Several praise LLMs as genuinely helpful thinking partners; others liken them to TV: outsourcing thought rather than enabling it.

Long‑term outlook & trust

  • Many appreciate the current ad‑free, conversational ethos but assume it’s temporary and expect future backsliding once growth or IPO pressures mount.
  • There is broad agreement that trust in any proprietary AI is fragile, and that only running open models locally meaningfully addresses the deeper privacy and control concerns.