AI is ushering in a “tiny team” era
Tiny teams, revenue per employee, and layoffs
- Several commenters note that extreme “revenue per employee” optimization previously led to terrible customer outcomes: no QA, minimal testing, rushed code, ignored privacy/security.
- Others argue it’s business-dependent: some users pay for reliability, and lack of quality investment is either a bad bet or a sign customers don’t care as much as engineers do.
- Many see the “tiny team” shift as driven as much by 2023 layoffs and Twitter’s survival after deep cuts as by AI itself; empire-building managers with huge org charts are out of fashion.
- One commenter stresses that the real metric is the growth rate of revenue per employee, not its absolute value at any snapshot in time.
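
The distinction that commenter draws can be sketched in a few lines. This is an illustrative toy, not anything from the thread: both companies and all revenue/headcount figures are hypothetical, chosen so that the company with the lower snapshot value has the steeper trajectory.

```python
# Toy illustration: a high snapshot of revenue per employee can mask a
# flat trajectory, while a lower snapshot can hide rapid improvement.
# All figures below are hypothetical.

def revenue_per_employee(revenue: float, headcount: int) -> float:
    """Snapshot metric: total revenue divided by headcount."""
    return revenue / headcount

def growth_rate(current: float, previous: float) -> float:
    """Period-over-period growth of a metric, as a fraction."""
    return (current - previous) / previous

# Company A: impressive snapshot ($500k/employee), nearly flat year over year.
a_prev = revenue_per_employee(50_000_000, 100)   # 500_000.0
a_curr = revenue_per_employee(51_000_000, 100)   # 510_000.0

# Company B: modest snapshot ($200k/employee), improving fast.
b_prev = revenue_per_employee(10_000_000, 50)    # 200_000.0
b_curr = revenue_per_employee(13_000_000, 50)    # 260_000.0

print(growth_rate(a_curr, a_prev))  # 0.02  -> 2% growth
print(growth_rate(b_curr, b_prev))  # 0.30  -> 30% growth
```

On the snapshot metric alone, Company A looks far better; on the growth-rate view the commenter advocates, Company B is the one actually compounding.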
How AI is changing individual and team productivity
- Multiple practitioners report 2–3x personal productivity gains, with much larger multipliers for testing, refactoring, internal tooling, and boilerplate.
- AI is praised for:
  - Explaining and exploring non-trivial models and architectures.
  - Generating tests and integration suites.
  - Building dev tooling and automation (ETL scripts, CLIs, pipelines, Docker/K8s workflows).
  - Acting as a code reviewer and as a translator for small localization tasks.
- “Using agents well” is framed as a new differentiating skill; some people will gain much more than others.
Quality, reliability, and limits of AI coding
- Several worry that faster code generation will tempt management to stack more responsibilities onto fewer developers, increasing burnout and risk.
- Others emphasize that “figuring out what to build” remains the bottleneck; AI mainly shifts the work to verifying what it produced.
- There’s concern about subtle bugs and overconfidence: AI can produce plausible but wrong code or translations; human understanding and strong linting/testing are still essential.
- A few note that, so far, AI has mostly accelerated small tasks; they don’t yet see a wave of clearly better or more innovative products.
Economic endgame: automation, inequality, and markets
- One thread explores a vision where companies are mostly AI plus robots, with a single human “orchestrator,” framed as a shareholder dream of labor-free profit.
- Counterpoints:
  - If fully automated factories are commoditized and pay-as-you-go, capital advantage shrinks and product variety should explode.
  - Distribution, marketing, and access to customers remain the real bottlenecks, even in food-like commodity markets (the "soup" analogy).
- Strong worries about automation amplifying inequality and regulatory capture: more power to asset owners could mean more consolidation, worse quality, and “race to the bottom” behaviors.
- Others predict a rise in very small businesses (one to a few people) empowered by AI, but skeptics note that lack of capital, not knowledge, is often the binding constraint.
Human collaboration vs talking to LLMs
- Some find LLM-driven work socially and intellectually unsatisfying compared to whiteboarding with humans; LLMs miss tacit nuance and can't anticipate "how people will actually react."
- Others value LLMs as judgment-free partners for asking “dumb” or exhaustive questions, then bringing refined ideas to teammates.
- There’s cultural backlash against people pasting long AI or Wikipedia answers into discussions, which can derail genuine idea exchange.
- Anecdotes highlight over-trusting AI advice (e.g., cosmetic surgery recommendations, dangerous cooking tips), reinforcing the need for skepticism and human judgment.
Venture capital, cloud, and structural factors
- Some predict a decline in early-stage B2B SaaS VC: with AI and cloud, skilled individuals can get much further before needing capital; growth funding persists, but not seed for basic SaaS.
- Enterprise and defense tech are seen as exceptions where sales, integration, and politics still demand large organizations and capital.
- Commenters note that tiny teams were already enabled by web frameworks and cloud (AWS/GCP); AI may be another step in that long trend rather than a wholly new era.