Gemini Robotics

Demo authenticity and staging

  • Many suspect the videos are heavily staged: the fruit looks fake, objects are handled carelessly, and the audio of bananas landing with a hollow “doink” suggests props rather than real food.
  • Viewers note sped‑up segments (labeled “Autonomous 3x/5x”) and humans who appear slowed down or clumsy, making the robots look better by comparison.
  • Concerns that tasks are “trick shots” with low success rates and tightly controlled setups (specific banana, specific bowl, fixed positions).
  • Google’s history of misleading demos (the earlier Gemini video, the Duplex phone calls) leads several commenters to treat this one with “a heaping cup of salt.”

Perceived capabilities vs limitations

  • Some tasks impress people, especially threading a tight belt over pulleys and desk-cleaning around a seated human.
  • Others find the origami “fox” primitive and the overall speed too slow, attributing the slowness to model inference limits, safety concerns, and control/feedback constraints.
  • Commenters contrast vision‑heavy control with the relative neglect of tactile sensing and rich proprioception; current grippers lack the human‑like sensitivity needed for eggs and other brittle items.
  • Robotics veterans emphasize repeatability and robustness to “noise” (different objects, lighting, clutter) as the real hurdle, not single curated demos.

Coffee Test and generalization

  • The “Wozniak coffee test” (enter random house, find machine, make coffee) is debated: some say most adults, even a trained chimp, could do it; others call it a high bar due to layout variability and missing items.
  • The discussion highlights the difference between domain knowledge (what a coffee maker is) and general intelligence (coping with corner cases, “eyeballing” measures, explaining improvised choices).

From research to products

  • Frustration that Google/DeepMind repeatedly publish glossy robotics and AI demos without shipping widely usable products or code (e.g., AlphaProof).
  • Some note Gemini Robotics models are only in partner/private preview; many regions can’t access even consumer AI tools (ImageFX/VideoFX), which kills interest.
  • Several argue Google excels at core research (Transformers, Waymo, robotics) but is chronically weak at productization, long‑term follow‑through, and coherent AI strategy.

Google’s strategy, value, and culture

  • One camp sees Google as massively undervalued given its stack: frontier models, in‑house accelerators, self‑driving (Waymo), and apparent robotics capability.
  • Others counter that:
    • Revenue is overwhelmingly ads/search, now threatened by AI search alternatives.
    • Google repeatedly squanders leads (LLMs, Maps, chat, hardware), kills products, and suffers from reorgs and short‑term metrics.
    • This resembles Bell Labs/Xerox/Kodak: world‑class IP, poor capture of value.
  • Internal culture is described as risk‑averse, hyper‑bureaucratic, and driven by protecting the ad “cash cow” rather than letting new businesses cannibalize search.

Ethics, safety, and weaponization

  • Google’s “responsible development” language is viewed skeptically; some want hard commitments (no military/police sales, universal “stop, you’re hurting me” override).
  • Cheap, hackable robots are seen as both desirable (indie innovation) and dangerous (easy weaponization), with analogies to consumer drones and explosives.
  • Asimov’s Three Laws are invoked as early “alignment prompts” but also criticized as fictional thought experiments that break in edge cases.

Applications, economy, and personal anxiety

  • People fantasize about robots doing laundry, dishes, cooking, and real‑world garbage sorting/recycling; others note that many industrial sorting tasks already use simpler, faster non‑humanoid systems.
  • Some think competent cooking or household chores would be a labor‑market tipping point; others stress the enormous gap between lab demos and robust deployment.
  • A firmware engineer voices fear of obsolescence; replies emphasize:
    • Real value will be in turning models into working products.
    • Low‑level hardware, debugging, and regulated domains (medical, automotive, aerospace) will still need humans.
    • This resembles prior shifts (cloud, DevOps, high‑level languages): roles change more than they vanish.