Will AIs take all our jobs and end human history, or not? (2023)
AI deciding “what’s worth doing”
- Some argue we’ll still argue about goals even with powerful AI, because “worth doing” is subjective and context‑dependent.
- Others think people will happily offload more and more real decision‑making to AI out of laziness, competitive pressure, or trust in “superintelligence.”
- Skeptics ask “better for who?” and doubt we’ll—or should—let machines define human values.
Human work, meaning, and uniquely human roles
- Many think AI will not end work but continually shift it; human wants are endless and new tasks and services emerge.
- Examples of likely resilient work: hands‑on care (nurses, childcare, elder care), trades (plumbers, construction), intimate/embodied services (sex work, personal training), and “authentic” experiences (human chefs and bars, as opposed to Star Trek‑style replicators).
- Others warn that if AI can do most economically valuable tasks, human capabilities may lose external value even if we still value each other intrinsically.
Economic impacts, inequality, and social stability
- A common view: AI is deflationary, driving down prices and wages; new employment eventually emerges, but the transition is painful and many are left behind.
- Worries center on:
  - Collapse of labor as the main wealth‑distribution mechanism.
  - Rising inequality and potential civil unrest before policy catches up.
  - A “Baumol effect”: as automated sectors get cheaper, the relative prices of hard‑to‑automate services (care, education) rise.
- Some foresee nationalization or heavy state control of key AI infrastructure; others think concentrated corporate/elite power will block that.
Global and cultural attitudes toward AI
- Several comments frame intense AI job anxiety as especially American, tied to employer‑linked healthcare, weak safety nets, and identity fused with work.
- Others push back, saying AI worries are present in Europe, India, Vietnam, etc., even if hype levels and immediate exposure differ.
AGI behavior, survival drives, and control
- Debate over whether AGI would “want” to run the planet or even to survive:
  - One side invokes instrumental convergence and selection pressure.
  - The other notes current AIs are tools without inherent drives, and survival instincts aren’t automatic.
- Some see serious x‑risk and political capture by AI‑empowered elites; others think AGI takeover is as speculative as aliens or mythic beings.
Capabilities, limits, and creativity of today’s LLMs
- Mixed experiences on reliability: some say newer models rarely output “obviously wrong” answers; others report 10–15% subtle errors and dangerous hallucinations.
- Consensus that current systems can’t be safely unsupervised in high‑stakes domains yet.
- On creativity, several argue randomness alone isn’t “originality”; true creativity involves intention and a point of view, which LLMs may lack.