Show HN: I used Claude Code to discover connections between 100 books

Perception of LLM Output

  • Many commenters immediately recognized a “distinct LLM voice” in the trail titles and blurbs, and some felt even the announcement post read as AI-generated.
  • Assessments of quality diverge sharply: some call the connections “LLM slop,” random or trivial word associations; others argue the trails are surprisingly good and require a more literary sensibility to appreciate.
  • Several note Claude’s tendency to drift into themes of secrecy, conspiracy, and hidden systems, interpreting this either as an intrinsic bias or as a reflection of the prompt/task.

Meaning and Value of the Connections

  • A central criticism is that the connections are often tenuous: the system may grab a single paragraph from thousands and elevate it to a “theme” not representative of the whole book.
  • Critics say this becomes a Rorschach test: broad, generic links onto which humans then project meaning. They ask for concrete examples of genuinely new insights and often remain unconvinced.
  • Defenders argue the trails offer another lens for reflection, especially around systems-thinking topics, and that even loose connections can prompt useful lines of thought (e.g., “father wound,” tempo/OODA loops, pacemaker-like bottlenecks).
  • Some debate whether 100 books is too small a corpus; others counter that depth of reading matters more than scale.

UX and Visualization

  • The UI and animations draw widespread praise: “fun,” “beautiful,” “inspiring,” and inviting to explore.
  • However, many say the word-level linking lines look meaningful but often connect phrases with “zero connection,” undermining trust in the visualization.

LLMs, Reading, and the Humanities

  • One cluster sees this as a concrete example of “distant reading” and digital humanities, where computational methods surface patterns across many texts.
  • Another group worries it hollows out reading: outsourcing the very interpretive work that is the point of engaging with books, turning active insight into passive consumption.
  • Some see value mainly as a window into how recommender systems might work, rather than as a genuine aid to readers.

Related Experiments and Techniques

  • Commenters share similar projects: using Claude/ChatGPT to “read” complex GitHub repos, classify movies by narrative structure, cluster personal PDF libraries with embeddings, explore Shakespeare with ANN search, and build knowledge trees/Syntopicons.
  • There’s technical discussion (GraphRAG, embeddings, clustering, rerankers) and a meta-pattern: iteratively asking LLMs what tools they need, then encoding that into docs/scripts to improve future interactions.
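The embedding-plus-similarity approach mentioned in these experiments can be sketched in a few lines. This is a minimal illustration under stated assumptions, not any commenter's actual pipeline: the vectors are invented stand-ins for real embedding-model output, and the titles and threshold are hypothetical.

```python
from itertools import combinations
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy stand-ins for real passage embeddings (invented 3-d values;
# real embeddings would be hundreds of dimensions).
library = {
    "Book A": [0.9, 0.1, 0.2],
    "Book B": [0.85, 0.15, 0.25],
    "Book C": [0.1, 0.9, 0.3],
}

# Propose a "connection" for every pair above a similarity threshold.
# The threshold is arbitrary; tuning it is roughly where the thread's
# "slop vs. insight" debate lives.
THRESHOLD = 0.95
links = [
    (t1, t2, round(cosine(v1, v2), 3))
    for (t1, v1), (t2, v2) in combinations(library.items(), 2)
    if cosine(v1, v2) >= THRESHOLD
]
# links now holds only the A–B pair, the one high-similarity match.
```

At real scale, brute-force pairwise comparison is replaced by an approximate-nearest-neighbor index, and each book contributes many passage-level embeddings rather than one vector, which is exactly why a single outlier paragraph can end up defining a "theme."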