Show HN: Open-source real-time talk-to-AI wearable device for few $
Interaction modality (voice vs text)
- Some find voice interaction more mentally draining than typing, likening the difference to phone calls versus instant messaging.
- Others say recent real‑time voice models make spoken conversations feel natural and productive, especially for architecture/design discussions, language learning, and conversations while commuting.
- A few prefer speaking their input but reading the output as text, finding that combination fastest overall.
Local vs cloud and self‑hosting
- Multiple comments ask about running everything locally.
- Community suggestions include pointing the device at OpenAI‑compatible local servers (LM Studio, llama.cpp, mistral.rs, Ollama) as backends (see the sketch after this list).
- Project maintainers say local LLM, STT, and TTS support is planned and that the backend can already be self‑hosted via Docker, with the subscription mainly covering hosted inference costs.
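As a rough illustration of the self‑hosting suggestion above, here is a minimal sketch of pointing the standard OpenAI Python client at a local OpenAI‑compatible server; the port (Ollama's default) and model name are assumptions and will differ depending on which server is actually running.

```python
# Minimal sketch: point the standard OpenAI client at a local
# OpenAI-compatible server (LM Studio, llama.cpp server, Ollama, etc.).
# The base_url, port, and model name below are assumptions -- use
# whatever your local server actually exposes.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # e.g. Ollama's OpenAI-compatible endpoint
    api_key="not-needed",                  # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="llama3",  # hypothetical local model name
    messages=[{"role": "user", "content": "Say hello from my wearable."}],
)
print(response.choices[0].message.content)
```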
Dedicated hardware vs smartphone apps
- Skeptics question why new hardware is needed when phones already have mics, screens, and connectivity; they’d prefer an app.
- Counterpoints:
  - Target users may include children without smartphones.
  - iOS/Android heavily restrict always‑on background listening; a dedicated device avoids OS limitations.
  - There are concerns about large platforms blocking data capture; custom hardware gives more control.
- Clarification that “always listening” usually means low‑power wake‑word detection, not full streaming, though some argue continuous STT is already feasible with more powerful boards.
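To make that distinction concrete, here is a minimal sketch of wake‑word‑gated listening; every name below is a hypothetical stand‑in rather than the project's actual API, and the "wake word" check is reduced to a crude loudness test purely to show the gating structure.

```python
# Sketch of wake-word-gated listening: only a cheap local check runs all the
# time; audio is captured and sent to full STT only after the trigger fires.
# Every name below is a hypothetical stand-in, not the project's actual API.

def detect_wake_word(frame: bytes) -> bool:
    # Stand-in for a low-power on-device keyword-spotting model.
    # A crude loudness test on raw PCM bytes, purely to show the structure.
    return bool(frame) and max(frame) > 200

def stream_to_stt(frames: list[bytes]) -> str:
    # Stand-in for the expensive path: ship buffered audio to an STT backend.
    raise NotImplementedError("wire this up to your speech-to-text service")

def listen_loop(mic_frames) -> str | None:
    buffer: list[bytes] = []
    triggered = False
    for frame in mic_frames:
        if not triggered:
            triggered = detect_wake_word(frame)   # cheap, always-on check
        else:
            buffer.append(frame)                  # capture only after the trigger
            if len(buffer) >= 100:                # roughly a few seconds of audio
                return stream_to_stt(buffer)      # full STT runs only here
    return None
```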
Pricing, lock‑in, and architecture
- Concerns that the low hardware price may be offset by ongoing subscription fees and dependence on the company’s servers.
- The team pegs the premium tier at under $9/month and emphasizes that self‑hosting is possible.
- One commenter notes that the current stack depends on several third‑party cloud services, making full isolation/self‑hosting non‑trivial.
Use with children, therapy, and ethics
- Marketing claims around emotional support, “safe for all ages,” and complementary caregiving draw strong criticism.
- Critics argue LLMs are unpredictable, unvetted for therapeutic use, and could cause harm, especially to vulnerable children; comparisons are made to medical devices that require trials.
- Others counter that people already use chatbots for late‑night emotional support and that, while not replacements for humans, they can be helpful supplements.
- The team reframes the device as non‑medical, more like a comforting/educational tool (e.g., explaining procedures to pediatric patients), and acknowledges the need for domain experts and more careful wording.
- Some worry about the social message of substituting machines for human attention; others note many children already lack adequate human care and might still benefit.
- Safety concerns extend to content filtering (OpenAI’s refusals on taboo topics) and past examples of AI giving dangerous advice.
Other use cases and ideas
- Interest in a simple, hackable device that streams mic audio to arbitrary HTTP endpoints and plays back responses; example ESP32 code is shared in the thread (a rough sketch of the flow follows this list).
- Ideas include inspection workflows (spoken notes → structured templates) and LLM‑powered “Teddy Ruxpin”‑style toys, met with both excitement and surveillance fears.
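The ESP32 code shared in the thread is not reproduced here; below is a rough desktop‑Python sketch of the same mic → HTTP endpoint → playback loop, with the endpoint URL, sample rate, and wire format all assumed rather than taken from the project.

```python
# Rough sketch of the "hackable device" loop: record a few seconds of mic
# audio, POST the raw PCM to an arbitrary HTTP endpoint, and play back the
# audio bytes it returns. The URL, sample rate, and wire format are
# assumptions, not the project's actual protocol.
import numpy as np
import requests
import sounddevice as sd

ENDPOINT = "http://localhost:8000/talk"  # hypothetical backend endpoint
RATE = 16_000                            # 16 kHz mono, 16-bit PCM

def talk_once(seconds: float = 4.0) -> None:
    # Record from the default microphone.
    recording = sd.rec(int(seconds * RATE), samplerate=RATE, channels=1, dtype="int16")
    sd.wait()

    # Ship raw PCM to the endpoint; assume it replies with PCM in the same format.
    reply = requests.post(
        ENDPOINT,
        data=recording.tobytes(),
        headers={"Content-Type": "application/octet-stream"},
        timeout=60,
    )
    reply.raise_for_status()

    # Play the response audio.
    audio = np.frombuffer(reply.content, dtype=np.int16)
    sd.play(audio, samplerate=RATE)
    sd.wait()

if __name__ == "__main__":
    talk_once()
```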