Stop overhyping AI, scientists tell von der Leyen
AI capabilities, Turing test, and “intelligence”
- One camp claims we effectively “blew past” the Turing test years ago and that denial of AI’s capabilities remains widespread despite strong task performance (exams, research assistance, reasoning benchmarks).
- Others push back that this misstates Turing’s paper: the imitation game was a thought experiment about how to talk about machine thinking, not a hard AGI threshold.
- Several note that no proper modern Turing test (extended, adversarial judging of hidden human vs. AI witnesses) has actually been run with top LLMs; casual “I couldn’t tell” anecdotes don’t count (see the sketch after this list).
- Many say LLMs still sound distinctly non‑human: formulaic politeness, RLHF “assistant” tone, poor handling of weird or provocative interactions. Supporters reply that this is mostly a prompting/style issue, not a hard capability limit.
- There’s disagreement over whether LLMs are “approaching human reasoning” or merely very good pattern matchers whose fluent, apparently knowledgeable output leads users to overestimate their intelligence.
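For readers unsure what commenters mean by an “extended, adversarial” Turing test, here is a minimal sketch of such a protocol in Python. Everything in it is a hypothetical illustration, not an existing benchmark or anyone’s actual methodology: the `ask_human`/`ask_model` placeholders, the `judge_verdict` stub, and the turn and trial counts are all assumptions. The idea is simply that a judge interrogates a hidden witness who is randomly either a human or an LLM, and accuracy near 50% across many trials would mean judges cannot tell them apart.

```python
# Hypothetical sketch of a blinded, adversarial Turing-test protocol.
# Placeholder functions stand in for real human participants and a real LLM API.
import random
from dataclasses import dataclass


@dataclass
class TrialResult:
    witness_was_ai: bool
    judge_said_ai: bool


def ask_human(question: str) -> str:
    """Placeholder: relay the judge's question to a hidden human witness."""
    return "a human-typed reply"


def ask_model(question: str) -> str:
    """Placeholder: query the LLM under test (a real API call would go here)."""
    return "a model-generated reply"


def judge_verdict(transcript: list[tuple[str, str]]) -> bool:
    """Placeholder: the human judge reads the full transcript and answers
    'was this an AI?'. A random coin flip stands in for that decision here."""
    return random.random() < 0.5


def run_trial(num_turns: int = 20) -> TrialResult:
    # Blind assignment: the judge does not know whether the witness is human or AI.
    witness_is_ai = random.random() < 0.5
    witness = ask_model if witness_is_ai else ask_human
    transcript = []
    # Extended interrogation: many turns, with the judge free to probe adversarially.
    for turn in range(num_turns):
        question = f"adversarial probe #{turn}"  # in reality, chosen by the judge
        transcript.append((question, witness(question)))
    return TrialResult(witness_is_ai, judge_verdict(transcript))


def run_study(num_trials: int = 200) -> float:
    """Return judge accuracy; roughly 50% means judges cannot tell human from AI."""
    results = [run_trial() for _ in range(num_trials)]
    correct = sum(r.witness_was_ai == r.judge_said_ai for r in results)
    return correct / num_trials


if __name__ == "__main__":
    print(f"judge accuracy: {run_study():.2%}")
```

The point of the sketch is the structure, not the stubs: blinding, extended adversarial questioning, and many trials are what distinguish this from the casual “I couldn’t tell” anecdotes dismissed above.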
Risk, dependence, and “doomsday” scenarios
- Beyond job-loss fears, some worry about fast-growing dependence on systems with limited accuracy, transparency, and accountability.
- Two broad failure modes are discussed:
  - A fast, dramatic scenario (AI with military control, classic sci‑fi takeover).
  - A slow one where humans offload skills, then find the AI plateauing or degrading, leaving society less capable.
- Improved accuracy would ease some concerns but not remove these structural risks.
AI hype, markets, and Europe’s position
- Some see AI hype and extreme valuations (including non‑AI surveillance firms marketed as “AI”) as further proof that markets are now “vibes-based” rather than rational.
- Others argue that riding hype cycles is economically necessary; trying to suppress hype has never worked and only leaves regions like the EU further behind.
- Counterview: LLMs aren’t much closer to real intelligence than the systems of a decade ago; investors and politicians are being “duped” by fluent language.
The scientists’ letter and expertise
- Critics of the letter point out that many signatories come from social sciences, critical studies, and “decolonial/critical AI” circles, questioning their technical authority and noting ideological framing.
- Defenders respond that numerous computer science, AI, and cognitive science researchers also signed, and that non‑CS fields remain relevant to evaluating societal impact and hype.
- Dispute remains over whether the letter reflects “impartial scientific advice” or repackages familiar AI-skeptic rhetoric.
EU politics, lobbying, and governance
- Von der Leyen is heavily criticized as unelected, lobbyist‑like and credulous of corporate AI narratives; others note it’s normal for politicians to rely on expert input.
- Lobbying is described both as necessary feedback to avoid harmful regulation and as a mechanism that privileges corporate interests over citizens.
- Broader EU debates emerge: calls for deep institutional reform or even sortition versus reminders that, despite flaws, the EU still delivers high living standards and stability.