Show HN: DeepSeek My User Agent
Project behavior
- The site collects browser-reported data (user agent, referrer, approximate location, basic device info) and sends it to DeepSeek R1, which generates a three-sentence roast.
- Prompt is visible and can be reused with other models; responses often include visible chain‑of‑thought reasoning.
- Many users paste their own roasts, noting how the model picks a few “unusual” fields (location, CPU cores, resolution, referrer) and builds jokes around them.
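The flow described above can be sketched roughly as follows; the field names and prompt wording here are illustrative guesses, not the project's actual code:

```python
# Hypothetical sketch: flatten browser-reported fields into a roast prompt
# that would then be sent to DeepSeek R1. Field names are assumptions.

def build_roast_prompt(info: dict) -> str:
    """Turn collected browser fields into a single prompt string."""
    lines = [f"- {key}: {value}" for key, value in sorted(info.items())]
    return (
        "Roast this visitor in exactly three sentences, based only on "
        "their browser fingerprint:\n" + "\n".join(lines)
    )

prompt = build_roast_prompt({
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) ...",
    "referrer": "https://news.ycombinator.com/",
    "cpu_cores": 8,
    "screen": "1512x982",
})
print(prompt)
```

Since the prompt is plain text, it can be pasted into any other model's chat interface, which matches how users reported reusing it.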
Humor quality and reasoning
- Many find the roasts “shockingly” funny, varied, and specific, sometimes the first time an LLM has made them genuinely laugh.
- Others see it as wordy or generic insult comedy, with reasoning text that feels like verbose self‑talk rather than deep analysis.
- Users note occasional mismatches between the reasoning and the final roast (e.g., it re-selects features midway through or repeats itself).
- Some propose comedy as an LLM benchmark and read this demo as evidence that good humor may require careful prompt engineering.
Technical and pricing discussion (DeepSeek vs others)
- DeepSeek’s API is reported as much cheaper than OpenAI’s o1; some wonder how it can be so low.
- Explanations offered: a mixture‑of‑experts architecture with only ~37B parameters active per token at inference, highly optimized serving on H800s, large batch sizes, and speculative decoding.
- There is skepticism that US providers can easily match the price even with open weights because of engineering effort and hardware mismatch (H100 vs H800).
- Comparisons are made to Groq and other hosts; DeepSeek’s own hosting is said to be cheaper than third‑party runners.
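The mixture‑of‑experts argument can be put in back‑of‑envelope terms. This uses the commonly cited figures for DeepSeek R1 (671B total, ~37B active per token); real serving costs also depend on batching, memory bandwidth, and overhead, so treat it as a rough sketch:

```python
# Rough arithmetic behind the "MoE is cheap to serve" claim.
# Figures are the commonly cited ones for DeepSeek R1, not official pricing data.

total_params_b = 671   # total parameters, in billions
active_params_b = 37   # parameters activated per token, in billions

active_fraction = active_params_b / total_params_b
print(f"Active per token: {active_fraction:.1%} of total parameters")

# Compute per token scales roughly with active parameters, so per-token
# FLOPs are closer to a ~37B dense model than to a ~671B one, even though
# all 671B parameters must still sit in (and be paged through) GPU memory.
```

This also hints at the hardware-mismatch point: the full parameter set still has to be held in memory, so cheap serving depends on the whole stack, not just the active-parameter count.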
User agent quirks, accuracy, and privacy
- Multiple comments explain that Chrome, Safari, and Firefox now freeze the reported macOS version at “10.15” in the UA string, causing the model to conclude Catalina is still in use.
- iPad Safari identifies itself as macOS; Chrome on Android reports reduced strings like “Linux; Android 10; K”; and some browsers cap reported memory or core counts as an anti‑fingerprinting measure.
- As a result, many roasts get OS, device model, location, or core counts wrong; some users spoof ancient browsers for fun.
- Privacy tools (Tor, VPNs, privacy extensions, GrapheneOS, DDG Browser) further confuse detection, sometimes to users’ satisfaction.
Reliability and deployment issues
- Some users see only partial reasoning or timeouts. The author later attributes this to default Vercel function timeouts and to DeepSeek API flakiness.
- When DeepSeek’s platform has outages, the page is adjusted to immediately show the prompt so users can paste it into DeepSeek’s chat manually.
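The timeout issue the author mentions is typically addressed through Vercel's per‑function configuration; the file path and limit below are illustrative assumptions, not the project's actual config:

```json
{
  "functions": {
    "api/roast.js": {
      "maxDuration": 60
    }
  }
}
```

Raising `maxDuration` in `vercel.json` gives a slow upstream API (here, a reasoning model that streams lengthy chain‑of‑thought) more time before the platform kills the function.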
Broader LLM reflections
- One subthread argues that this kind of demo is fun but trivial compared to potential “internet buffer” tools: AI layers that block ads, filter clickbait, and curate content.
- A commenter says they’re already building such a system and using it as a personal interface to the web; others express strong interest.
- There’s debate over whether LLMs will kill targeted ads or become even more powerful advertising and manipulation channels.
- Some lament that big AI advances are funded by ad-driven companies and foresee trade‑offs if ad effectiveness declines.