The Future of Everything Is Lies, I Guess: New Jobs
Terminology and “meat shields”
- Some object to calling people “meat” or “meat shields,” seeing it as dehumanizing with “sociopathic” undertones.
- Others argue the term is intentionally harsh to reflect how large employers already treat workers as disposable, non-human resources.
- “Meat shield” is used to describe humans hired primarily to absorb legal and public blame for AI-driven decisions.
Accountability, liability, and AI-era roles
- Strong consensus that machines cannot be held legally accountable; humans will remain on the hook for mistakes, losses, and crimes.
- Several commenters argue that jobs carrying statutory or de facto liability (licensed professions, executives) will be among the last to be automated.
- The “moral crumple zone” idea is cited: humans positioned to absorb blame even when complex systems actually made or constrained the decisions.
UK blocking, archives, and Online Safety Act
- The blog is geo-blocked in the UK, reportedly as a self-imposed response to new safety/age-verification laws and adult/NSFW-adjacent content.
- Some see this as reasonable legal risk management; others see it as paranoid overreach or mainly a political statement.
- Some view the recurring comments about UK blocking and archive links as noise that crops up in every thread.
Future AI jobs and the article’s taxonomy
- Some like the outlined roles (incanters, process/statistical engineers, trainers, etc.) as plausible near-term specialization.
- Others call this “magical thinking”: many of these roles will themselves be automated, or will be performed once, centrally, inside foundation-model companies.
- A contrasting view is that all these skills may collapse into a single broad role centered on critical thinking and statistical literacy.
Will AI replace software engineers?
- One camp finds LLMs already impressive and expects them to surpass most developers; another sees output as mediocre and limited to narrow tasks.
- Skeptics of human job security ask why a business wouldn’t eventually prompt an AI “senior engineer” directly instead of hiring engineers.
- Defenders say engineers are still needed for architecture, context, risk tradeoffs, and especially accountability; code has long been the easy part at senior levels.
- Many expect substantial displacement even without full replacement (e.g., 10–90% staff reductions), which is still economically and socially disruptive.
Engineer experiences and career anxiety
- Some engineers report being more productive and more excited than ever: LLMs handle boilerplate and tests, letting them focus on design and intent.
- Others fear the job will shrink to overseeing “idiot savant chatbots” and that teams will be cut drastically (e.g., 5 people replaced by 1).
- Past automation experiences are cited where “this will free you to focus on what matters” actually led to large layoffs and manager rewards.
Broader societal and economic concerns
- Several note we are effectively building an intelligence to replace humans, driven by competitive and game-theoretic pressures rather than collective consent.
- Some argue that only a small minority is pushing this while most people would prefer a slowdown, yet as a species “we” remain responsible for the trajectory.
- Commenters debate whether executives and boards will ever replace CEOs with AI; legal requirements for human officers may delay but not permanently prevent this.
- There is concern about rising “AI-slop” content (blogs, LinkedIn), weakening traditional signals of competence and contributing to a “dead internet” feel.