Data manipulations alleged in study that paved way for Microsoft's quantum chip
Academic fraud, incentives, and punishment
- Many see the alleged data manipulation as part of a broader pattern of misconduct in this subfield, with serious collateral damage: wasted money, derailed careers, and follow‑on work built on bad results.
- Strong views that proven fabrication or plagiarism should be career‑ending, including revoked degrees and loss of future positions and grants.
- Others warn that a “career death penalty” can create perverse incentives: once someone has crossed the line, they may double down on fraud because they feel they have nothing left to lose.
- Commenters blame structural pressures: “publish or perish,” too many researchers chasing too few genuinely new problems, politicized internal misconduct committees, and prestige incentives at top journals.
- Some argue fraud should be handled by independent national bodies or even courts; others note that whistleblowers and investigative bloggers have been sued and judges often seem indifferent.
Specific concerns about the Microsoft‑related paper
- Key issues discussed: cherry‑picking 5 out of 21 devices without disclosure; averaging and other subtle tweaks that nudge the data toward theoretical predictions; multiple “small” manipulations whose cumulative effect is large.
- Some note low device yield can be normal at the bleeding edge, so having 5/21 work isn’t itself suspicious—what is problematic is failing to report the non‑working devices.
- The pattern looks to some like “desperate PhD needs a high‑profile paper,” shifting a result from “maybe there’s something here” to a much stronger and unjustified claim.
- Debate over harm: a few say “only Microsoft loses” if the work is vaporware; others stress the opportunity costs and misdirected public and private funds.
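The selection‑bias concern above can be illustrated with a minimal simulation (a toy model, not a reconstruction of the actual paper: the devices, noise level, and "true effect" here are invented for illustration). Even with no fabricated numbers at all, averaging only the best 5 of 21 noisy devices systematically inflates the reported signal:

```python
import random
import statistics

random.seed(0)

def measure_device():
    """Simulate one device: true effect is 0, plus measurement noise."""
    return random.gauss(0.0, 1.0)

# Every number below is a "real" measurement; nothing is fabricated outright.
devices = [measure_device() for _ in range(21)]

# Honest report: average over all 21 devices stays near the true value (0).
honest_mean = statistics.mean(devices)

# Cherry-picked report: keep only the 5 devices that best "match theory"
# (here, the 5 largest readings) and average those.
picked_mean = statistics.mean(sorted(devices, reverse=True)[:5])

print(f"all 21 devices: {honest_mean:+.2f}")
print(f"best 5 of 21:   {picked_mean:+.2f}")
```

This is why commenters distinguish low yield (normal) from undisclosed selection (not): the bias comes entirely from which devices are reported, not from any single altered data point.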
Quantum computing: hype vs. reality
- A sizable faction calls current quantum computing “smoke and mirrors” or even an outright scam: decades of effort, huge spend, but no clearly useful, uncontroversial computation yet.
- They point to tiny factoring demos (15, 21) often relying on prior knowledge, IBM’s cloud qubits yielding papers but no applications, and quantum annealers lacking clear scaling advantage.
- Others push back: quantum mechanics itself is extremely well‑tested; the question is engineering, not fundamental physics. They liken the situation to fusion or early computing—hard, slow, but not obviously impossible.
- Some note that even a demonstrated failure to scale (e.g., gravity or decoherence fundamentally blocking large systems) would be a major scientific result.
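The “prior knowledge” criticism of the tiny factoring demos can be made concrete. Shor's algorithm uses the quantum computer only to find the period r of a^x mod N; everything after that is trivial classical arithmetic. The sketch below (assumed standard textbook post‑processing, written here for illustration) shows that once r is known in advance, as critics argue it effectively was in the N = 15 and N = 21 demonstrations, no quantum hardware is needed at all:

```python
from math import gcd

def factors_from_period(a, r, N):
    """Given the period r of a^x mod N (with r even), recover factors of N
    via gcd(a^(r/2) ± 1, N) — the classical tail of Shor's algorithm."""
    assert pow(a, r, N) == 1 and r % 2 == 0
    p = gcd(pow(a, r // 2) - 1, N)
    q = gcd(pow(a, r // 2) + 1, N)
    return p, q

# For N = 15, a = 7: 7^1..7^4 mod 15 = 7, 4, 13, 1, so the period r = 4.
print(factors_from_period(7, 4, 15))   # → (3, 5)
# For N = 21, a = 2: 2^1..2^6 mod 21 ends at 1, so r = 6.
print(factors_from_period(2, 6, 21))   # → (7, 3)
```

The hard part, finding r for a cryptographically sized N, is exactly the step the demos did not perform at scale.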
Industry, networking, and broader culture
- Big‑tech involvement is seen as driven by FOMO, executive image, and the need for “something new” at conferences, not just realistic timelines.
- One technical thread explains quantum networking use cases (quantum key distribution and linking small chips into larger machines), but other commenters challenge claims of “100% security,” arguing that real implementations and hardware assumptions undermine such absolutes.
- Several connect this case to a wider “spectacle” culture: “fake it till you make it” sliding into “fake it,” metrics and image prioritized over substance, and the erosion of trust in both science and tech.