Show HN: Now I Get It – Translate scientific papers into interactive webpages
Concept & Intended Use
- Service converts scientific PDFs into interactive, layperson‑friendly single‑page sites.
- Users see it as helpful for:
  - Quickly triaging more papers than they can deeply read.
  - Explaining work to non‑experts (friends, family, lab websites).
  - Summarizing internal documents (e.g., architecture docs) and, potentially, company documentation.
- Creator emphasizes it as a complement to papers, not a replacement.
Quality, Hallucinations & Evaluation
- Mixed feedback on accuracy:
  - Some users report it “worked” for them or for authors of processed papers.
  - Others find serious conceptual mistakes (e.g., in “Attention Is All You Need”) and “plausible nonsense” on their own papers.
  - The LLM sometimes fabricates illustrative charts not present in the original; in at least one case this was acknowledged as a “conceptual” visualization, not extracted data.
- No formal evaluation of whether users actually understand papers better; the tool is still experimental.
- Concern raised that the output is far from the quality of hand‑crafted interactive explainers (Distill, redblobgames, NYT).
Technical Approach & Prompting
- Pipeline is fully automated: PDF in → HTML out, using a frontier LLM.
- Backend: S3 + CloudFront, DynamoDB for metadata, AWS Lambda functions.
- Strict system prompt covers:
  - Treating PDFs as untrusted data.
  - Blocking dangerous JS / external calls.
  - Producing metadata, then a “really freaking cool‑looking” interactive page.
- PDF parsing is acknowledged as brittle; no chunking yet; a hard 100‑page limit applies.
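The “block dangerous JS / external calls” guardrail could also be enforced after generation, not just in the prompt. The service's actual checks are not public, so the patterns below (external script tags, runtime network calls, embedded frames) are a minimal, assumed illustration:

```python
import re

# Hypothetical post-generation guardrail for LLM-produced HTML.
# The real service's checks are not public; these patterns only
# illustrate the "block dangerous JS / external calls" idea.
BLOCKED_PATTERNS = [
    re.compile(r"<script[^>]*\bsrc\s*=", re.I),        # externally sourced scripts
    re.compile(r"\bfetch\s*\(|XMLHttpRequest", re.I),  # runtime network calls
    re.compile(r"<(iframe|object|embed)\b", re.I),     # embedded external content
]

def violations(html: str) -> list[str]:
    """Return the patterns the generated page triggers (empty list = pass)."""
    return [p.pattern for p in BLOCKED_PATTERNS if p.search(html)]
```

A page that fails the check could be rejected or regenerated; inline, self-contained scripts would still pass, which matches the goal of interactive but sandboxed output.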
Costs, Limits & Monetization
- Current cap: ~100 papers/day; average cost ≈ $0.65 per paper, dominated by LLM spend.
- Users frequently hit the “daily processing limit reached” message.
- Monetization ideas discussed:
  - Simple cost‑plus per‑paper pricing.
  - Donations tied to the number of papers funded.
  - Charging for repository access instead of subscriptions.
  - Letting people sponsor specific papers.
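The per‑paper economics above can be sketched numerically. The 30% margin below is an invented placeholder to illustrate cost‑plus pricing, not the creator's actual figure:

```python
# Back-of-envelope economics using figures from the thread:
# ~$0.65 average LLM cost per paper, ~100 papers/day cap.
COST_PER_PAPER = 0.65   # USD, dominated by LLM spend
DAILY_CAP = 100         # papers/day
MARGIN = 0.30           # hypothetical markup for cost-plus pricing

daily_spend = COST_PER_PAPER * DAILY_CAP         # worst-case daily LLM bill, ~$65
price_per_paper = COST_PER_PAPER * (1 + MARGIN)  # ~$0.85 cost-plus price
```

At roughly $65/day in worst‑case spend, the cap and the per‑paper pricing ideas in the thread are two sides of the same constraint: either limit volume or pass the marginal cost to users.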
Feature & UX Requests
- Light mode toggle; anchor links for headings; social preview meta tags.
- Better gallery organization and more examples across subfields.
- Possible integrations with citation managers (e.g., Zotero), deep‑reference “graph” exploration, and support for Wikipedia/topic pages.
- Interest in self‑hosting; some would pay for code access and cover their own API usage.
Broader Reflections
- Some see this as another thin wrapper over foundation models and worry about value capture by a few big providers.
- Others argue LLMs enable a “Cambrian explosion” of short‑lived, creative software, where tools like this are early examples.