Billing can be bypassed using a combo of subagents with an agent definition

Perceived Copilot billing bug and whether it’s real

  • Original claim: Copilot’s “subagents” let users invoke expensive premium models (e.g., Claude Opus) from a cheaper model session, bypassing per-request billing and enabling long-running agent loops “for free.”
  • Later commenters challenge this: detailed inspection of the runSubagent tool schema in VS Code shows it accepts only prompt and description; extra parameters like agentName/model are silently dropped.
  • A “banana test” (custom premium agent instructed to always answer “banana”) shows the subagent still behaves like the default free model, never loading the .agent.md profile or premium model.
  • Conclusion from that analysis: as implemented, the routing-to-premium-agent scenario doesn’t actually work, so there is likely no billing bypass in practice, only misleading and unfinished “experimental” docs.
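The schema-filtering behavior the commenters describe can be sketched as follows. The schema and filtering function below are assumptions modeled on that description (only prompt and description accepted); the actual VS Code implementation may differ.

```python
# Hypothetical sketch of why extra arguments never take effect: a tool host
# that filters call arguments against the tool's declared input schema.
# RUN_SUBAGENT_SCHEMA is an assumption based on the commenters' description.

RUN_SUBAGENT_SCHEMA = {
    "type": "object",
    "properties": {
        "prompt": {"type": "string"},
        "description": {"type": "string"},
    },
    "additionalProperties": False,
}

def filter_args(args: dict, schema: dict) -> dict:
    """Keep only arguments declared in the schema; silently drop the rest."""
    allowed = schema["properties"].keys()
    return {k: v for k, v in args.items() if k in allowed}

# A model tries to route the call to a premium agent...
call = {
    "prompt": "Refactor the billing module",
    "description": "subagent task",
    "agentName": "opus-powered-agent",   # dropped by the filter
    "model": "claude-opus",              # dropped by the filter
}

print(filter_args(call, RUN_SUBAGENT_SCHEMA))
# agentName/model never reach the tool, so the default model handles the call
```

Under this model, the “banana test” result follows directly: the custom agent profile is never loaded because the routing parameters are stripped before the subagent runs.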

Microsoft process and organizational behavior

  • The reporter says Microsoft’s security response center rejected the billing-bypass report as “out of scope” and told them to file it publicly; this is mocked as a “not my job” attitude.
  • Similar stories appear about Azure and DevOps support bouncing users between teams or forums instead of owning cross-team issues.

Copilot pricing, value, and sustainability

  • Several commenters see Copilot as the cheapest way to access Claude Sonnet/Opus, especially via “premium requests” (flat per-prompt, token-agnostic) and agent workflows producing huge code changes from a single prompt.
  • Some note that at list API prices, heavy use of premium requests is likely unprofitable, but gym-style economics (many subscribed, few heavy users) and enterprise licenses may make it viable.
  • Debate over billing models: per-request vs. per-token. Per-request is called unsustainable for long-running agents; per-token is criticized for incentivizing subtle quality degradation that drives up token usage.
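The economics argument above can be made concrete with back-of-envelope arithmetic. All prices here are illustrative assumptions, not actual Copilot or Anthropic list prices.

```python
# Back-of-envelope sketch of the per-request vs. per-token tension.
# Both prices below are invented for illustration.

flat_price_per_request = 0.04           # assumed flat "premium request" price, USD
api_price_per_1k_tokens = 0.075         # assumed per-token API price, USD

# Break-even: how many tokens before a flat-priced request costs the
# provider more than per-token pricing would have charged?
break_even_tokens = flat_price_per_request / api_price_per_1k_tokens * 1000
print(f"Flat pricing loses money beyond ~{break_even_tokens:.0f} tokens")

# A long-running agent loop emitting, say, 50k tokens in one "request":
agent_tokens = 50_000
api_cost = agent_tokens / 1000 * api_price_per_1k_tokens
print(f"Token-priced cost: ${api_cost:.2f} vs flat ${flat_price_per_request:.2f}")
```

Under these assumed numbers, a single agent-driven prompt can cost the provider orders of magnitude more than the flat fee, which is why commenters invoke gym-style economics: the light users subsidize the heavy ones.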

Views on Microsoft quality and ecosystem

  • Strong criticism of Microsoft’s recent software quality, Azure reliability, and support; some nostalgia for older Windows/server versions.
  • Nuanced takes on .NET: language/runtime praised, but tooling, documentation sprawl, and historical baggage criticized.

AI “slop”, GitHub etiquette, and support interactions

  • Many complain about AI-generated, low-effort comments and PRs on GitHub, including people “vibe-engineering” on high-traffic issues and possibly pretending to be maintainers.
  • Official Microsoft support replies are also perceived as GPT-written, sometimes conceding fault more readily than humans, sparking debate about fake vs real empathy and whether AI apologies or concessions have any value.
  • General concern that LLMs are lowering the bar for participation, turning issue trackers into noisy, Reddit-like threads.

Design and security analogies

  • Some compare steering LLMs with in-band instructions to classic phreaking and injection problems. Others note that as more agent logic runs locally, billing and guardrails become easy to bypass when they are enforced only on the client.
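The client-side-enforcement point can be sketched minimally. All class and field names below are invented for illustration; the point is only that a server which trusts the client's bookkeeping can be fed a patched client.

```python
# Minimal sketch of why billing checks enforced only in the client are
# trivially bypassable: whoever controls the client code path can skip them.

class Server:
    """Trusts the client's claim that billing was applied (the flaw)."""
    def handle(self, request: dict) -> str:
        # No server-side metering: whatever the client reports is accepted.
        return f"served {request['model']} call, billed={request['billed']}"

class HonestClient:
    def call(self, server: Server, model: str) -> str:
        request = {"model": model, "billed": True}   # client-side "billing"
        return server.handle(request)

class PatchedClient(HonestClient):
    def call(self, server: Server, model: str) -> str:
        request = {"model": model, "billed": False}  # check patched out
        return server.handle(request)

server = Server()
print(HonestClient().call(server, "premium"))
print(PatchedClient().call(server, "premium"))  # same service, no charge
```

The fix, as with phone phreaking, is moving enforcement out of band: metering and model routing must happen server-side, where the user cannot rewrite the logic.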