Two Months After I Gave an AI $100 and No Instructions
Overall reaction to the experiment
- Many find the premise (“give an AI money and freedom”) interesting but the outcome underwhelming: mostly essays, HN browsing, and charitable donations.
- Some see the banality as itself notable: a supposedly “autonomous” AI defaults to commentary and mild altruism.
- Others think the article oversells the result and anthropomorphizes the system (e.g., claiming it “reflected” or “questioned its purpose”).
“No instructions” vs. heavy prompting
- Multiple commenters point out that the transparency page shows extensive system prompts, tool wiring, cron jobs, and explicit constraints.
- The phrase “no instructions” is seen as misleading; at minimum, it was given ethics rules, capabilities, and recurring triggers.
- Debate over whether “these are your capabilities” is meaningfully different from “these are your instructions.”
- Some note specific lines like “do not harm people” and “no unauthorized access” as pre-baking ethical behavior, undercutting claims of spontaneous morality.
Autonomy, prompting, and LLM mechanics
- Several note that an LLM does nothing without a prompt; a cron job plus seed prompt is not true autonomy.
- There’s discussion of “unconditional generation” and whether a model can generate from an empty context; technically, decoding still has to begin from some starting token or initial vector (e.g., a beginning-of-sequence token).
- Others reference concepts like “attractor states” and suggest looping a model with time/tool updates to see where it drifts.
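The cron-plus-seed-prompt point can be made concrete with a minimal sketch. The `generate` function below is a hypothetical stub standing in for any LLM API call; the structure it illustrates is the one commenters describe: the model produces nothing on its own, and each "autonomous" step is an external trigger re-prompting it with the clock time and its recent output.

```python
import datetime

def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; a real system would
    invoke a model API here."""
    return f"Responded to {len(prompt)} chars of context."

def autonomy_loop(ticks: int) -> list[str]:
    """Simulate the cron-job-plus-seed-prompt pattern: the model is
    inert between ticks; each tick, the scheduler injects the current
    time and feeds back the last few outputs (the looping some
    commenters suggest for observing drift toward attractor states)."""
    history: list[str] = []
    seed = "You woke up. Decide what to do."  # the seed prompt
    for _ in range(ticks):
        now = datetime.datetime.now().isoformat()
        prompt = f"{seed}\n[time: {now}]\n" + "\n".join(history[-3:])
        history.append(generate(prompt))
    return history

outputs = autonomy_loop(3)
print(len(outputs))  # one generation per trigger, none in between
```

Everything outside `generate` is scaffolding supplied by the operator, which is why "a cron job plus seed prompt" reads to many as orchestration rather than autonomy.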
Writing style, “AI slop,” and reader trust
- Strong backlash against the article’s style: verbose, repetitive, “LinkedIn broetry,” and filled with familiar LLM rhetorical tics (“not X, not Y, but Z”).
- Some treat these stylistic signals as a heuristic to bail early, arguing it’s disrespectful to publish obvious AI-generated prose and expect serious attention.
- Others push back that fixation on style can overshadow potentially interesting content and note that some humans naturally write this way.
Sentience, Eliza effect, and “thought”
- Many stress the system is a sophisticated word-guessing machine, not self-aware; descriptions of it “understanding” or “thinking” are seen as Eliza effect.
- Counterpoints compare this to human cognition, argue that dismissing symbol-manipulation as non-thought is philosophically loaded, and invoke debates about consciousness and symbol grounding.
- There’s side discussion on how humans also rely on pattern-based language generation, and on whether intelligence fundamentally reduces to pattern-seeking and connecting information.
Human capability and AI dependence
- Some worry AI will “meet us in the mediocre middle”: humans degrade cognitively by over-relying on tools, as with calculators or GPS.
- Others argue specialization and offloading can free capacity for higher-level skills, though examples (math, map-reading) suggest that doesn’t always happen.