I built an open source AI tool to find my autoimmune disease
Patient Empowerment and Positive AI Use‑Cases
- Several commenters describe using ChatGPT/Claude to interpret oncology reports, ER notes, lab panels and imaging, then using that understanding to ask better questions and advocate for themselves or relatives.
- LLMs are praised for: translating jargon to lay terms, summarizing scattered records, surfacing possible conditions or tests, and preparing patients for specialist visits or hospital rounds.
- Some say AI helped them or relatives move toward correct diagnoses (e.g., MS, ankylosing spondylitis, Crohn’s‑related arthritis, dyshidrotic eczema) faster than their doctors did, or in parallel with them.
Skepticism About the Tool and AI Diagnosis Claims
- Multiple commenters argue the showcased tool is mainly re-packaging information created by doctors; AI’s “diagnosis” came only after ~$100k of tests and specialist notes.
- Critics say the story is vague about the exact autoimmune disease, the timeline, and what the model actually output (likely a broad “possible autoimmune condition” among others).
- Some suspect the post is more about promoting an AI side project than demonstrating a genuine diagnostic breakthrough.
Fragmented Healthcare Systems and Missed Diagnoses
- Many describe years of being bounced between specialists, each focused on one organ system, with no one owning the “big picture” (common for autoimmune, EDS/hEDS, ME/CFS, endometriosis, chronic pain).
- Structural issues cited: underfunded public systems, private‑equity and hospital consolidation, short appointments, heavy administrative overhead, and limited incentives to pursue low‑probability “zebras.”
- Commenters emphasize the need for self‑advocacy or a dedicated case manager; some think AI could partly fill that coordination role.
Risks of Self‑Diagnosis, Hallucinations, and Hype
- Several doctors and technically literate users worry about AI reinforcing hypochondria, over‑testing, and doctor‑shopping — an amplified version of the familiar WebMD or TikTok self‑diagnosis problem.
- LLMs are described as performing like a “recent graduate”: useful for checklists and differentials but prone to confident errors and missing nuance in lab variability.
- Concern that desperate patients might over‑trust vague AI suggestions like “you may have an autoimmune disease” and pressure doctors into unnecessary or harmful procedures.
Privacy, Regulation, and Data Ownership
- Debate over whether privacy rules (e.g., HIPAA) truly prevent doctors from using cloud LLMs with patient consent, or whether institutions are simply risk‑averse.
- Many insist PHI should not be sent to commercial models; they favor self‑hosted or fully local tools and formats like FHIR to integrate personal records safely.
- Some argue current regulation and institutional conservatism already impede potential benefits, while others stress that cautious rollout is appropriate.
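The local‑tooling argument above hinges on the fact that FHIR resources are plain JSON, so personal records can be parsed and summarized without any cloud service. As a minimal sketch (the sample Observation below is hand‑written for illustration, not from any real record system, and the helper `summarize_observation` is a hypothetical name):

```python
import json

# Hypothetical FHIR R4 Observation resource for a lab result.
observation_json = """
{
  "resourceType": "Observation",
  "status": "final",
  "code": {
    "coding": [{"system": "http://loinc.org", "code": "4548-4",
                "display": "Hemoglobin A1c"}],
    "text": "Hemoglobin A1c"
  },
  "valueQuantity": {"value": 5.9, "unit": "%"},
  "effectiveDateTime": "2024-03-01"
}
"""

def summarize_observation(raw: str) -> str:
    """Return a one-line lay summary of a FHIR Observation lab result,
    reading only fields defined by the R4 Observation resource."""
    obs = json.loads(raw)
    name = obs["code"].get("text") or obs["code"]["coding"][0]["display"]
    qty = obs["valueQuantity"]
    return f'{name}: {qty["value"]} {qty["unit"]} ({obs["effectiveDateTime"]})'

print(summarize_observation(observation_json))
```

Everything here runs with the standard library alone, which is the point commenters make: the data format itself does not require sending PHI to a commercial model.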
Lifestyle, Genetics, and the Limits of Medicine
- The thread includes long side discussions about diet (keto/carnivore, gluten sensitivity), micronutrients, MTHFR and other SNPs, and whether AI can help individuals navigate these complex, poorly understood areas.
- Others push back that much online gene/diet discourse is alternative‑medicine hype with weak evidence, and that over‑medicalization and overtreatment are already serious problems.
Broader Debate on AI’s Role in Society
- A large subthread veers into AI in the arts: some see it as just another tool in a long history of tool changes; others see it as theft and deliberate devaluation of creative labor.
- Similar anxieties are projected onto medicine: will AI be used to augment diagnosis and care, or primarily to cut costs and deny treatments?
- Overall, commenters see AI in healthcare as both promising and fraught: potentially transformative for organizing information and widening access, but dangerous if oversold or deployed under perverse economic incentives.