AISLE Discovers 38 CVEs in OpenEMR Healthcare Software
Nature of the OpenEMR Vulnerabilities
- Most of the 38 CVEs are well-known vulnerability classes: SQL injection, XSS, path traversal, and insecure direct object reference (IDOR).
- Many see these as “low-hanging fruit” that should be caught by competent teams and tools.
- OpenEMR is a roughly 25-year-old PHP application; several commenters note that legacy PHP apps of this vintage are typically messy and insecure.
- Some argue this reflects poorly on OpenEMR as a viable, safe EMR, especially given comments that parts of it date to PHP 3 and prior warnings not to expose it publicly.
- There is skepticism about the claimed adoption figures (100,000 providers, 200 million patients); some healthcare engineers say they have never encountered it in practice.
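The bug classes listed above are textbook patterns. As an illustration (not code from OpenEMR, which is PHP; this is a minimal Python/sqlite3 sketch with made-up table and function names), the difference between an injectable query and a parameterized one:

```python
import sqlite3

def setup():
    # Hypothetical patient table for demonstration only.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO patients VALUES (1, 'Alice'), (2, 'Bob')")
    return conn

def find_patient_vulnerable(conn, name):
    # UNSAFE: attacker-controlled input is concatenated into the SQL text.
    return conn.execute(
        "SELECT id, name FROM patients WHERE name = '%s'" % name
    ).fetchall()

def find_patient_safe(conn, name):
    # SAFE: a placeholder lets the driver bind the value as data, not SQL.
    return conn.execute(
        "SELECT id, name FROM patients WHERE name = ?", (name,)
    ).fetchall()

conn = setup()
payload = "x' OR '1'='1"
# The concatenated query becomes: ... WHERE name = 'x' OR '1'='1'
# and leaks every row; the parameterized query matches nothing.
leaked = find_patient_vulnerable(conn, payload)
safe = find_patient_safe(conn, payload)
```

The same parameterization fix applies in PHP via prepared statements; the point is that this class of flaw is mechanical to prevent.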
AI Security Scanners vs Existing Tools
- Many argue traditional static analyzers, SAST tools, and linters (e.g., SonarQube, Psalm) could have found these bugs years ago.
- Others see value in AI as an “extra eye” that can cheaply scan for common patterns and low-hanging security flaws.
- Debate over whether AI is doing anything fundamentally new, or just automating grep/static analysis with better UX.
- Some warn against “delegating” security to AI, distinguishing it from using tools to augment human review and strong engineering discipline.
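The "automating grep" framing can be made concrete. The sketch below is a deliberately minimal pattern scanner: it flags surface syntax only, which is roughly what critics mean when they say the findings were grep-able. Real SAST tools (and, arguably, AI scanners) also model data flow. The pattern list and the PHP snippet are illustrative inventions, not taken from OpenEMR:

```python
import re

# Hypothetical surface-level patterns for common PHP sinks.
RISKY_PATTERNS = {
    "possible SQL injection (string built from request input)":
        re.compile(r'"\s*\.\s*\$_(GET|POST|REQUEST)'),
    "possible XSS (request input echoed unescaped)":
        re.compile(r'echo\s+\$_(GET|POST|REQUEST)'),
    "possible path traversal (request input in file function)":
        re.compile(r'(include|require|fopen)\s*\(\s*\$_(GET|POST)'),
}

def scan(source: str):
    """Return (line_number, label) pairs for lines matching a risky pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for label, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings

# Invented PHP fragment containing one instance of each flaw class.
php_snippet = '''
$sql = "SELECT * FROM patients WHERE id = " . $_GET["id"];
mysqli_query($db, $sql);
echo $_GET["name"];
include($_GET["page"]);
'''
findings = scan(php_snippet)
```

A scanner this naive is trivially evaded by one level of indirection (assigning `$_GET["id"]` to a variable first), which is where taint tracking in real tools, and plausibly LLM-based review, earns its keep.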
Security Culture, Training, and Checklists
- Several advocate code review checklists (e.g., OWASP top issues) and security-focused review culture as the primary defense.
- Others note that even when checklists and tools exist, teams often don’t consistently use them; AI can help enforce a baseline.
- Discussion on whether AI explanations actually teach deep security concepts versus encouraging superficial fixes.
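One recurring checklist item, "verify the requester is authorized for the specific object it names," is exactly the check whose absence produces IDOR. A minimal sketch, with hypothetical record data and function names:

```python
# Hypothetical in-memory record store; names are illustrative,
# not drawn from OpenEMR's schema.
RECORDS = {
    101: {"owner": "alice", "note": "lab results"},
    102: {"owner": "bob", "note": "prescription"},
}

def get_record_idor(user: str, record_id: int) -> dict:
    # IDOR: any authenticated user can fetch any record id they guess;
    # authentication happened, authorization for THIS object did not.
    return RECORDS[record_id]

def get_record_checked(user: str, record_id: int) -> dict:
    record = RECORDS.get(record_id)
    # Checklist item applied: confirm the requester owns the object.
    if record is None or record["owner"] != user:
        raise PermissionError("not authorized for this record")
    return record
```

The fix is a one-line ownership check, which is why reviewers treat its absence as a process failure rather than a subtle bug.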
Disclosure and Marketing Concerns
- Initial concerns about responsible disclosure were resolved: the article states the issues were disclosed and patched, but some feel that detail was "buried."
- Some view the writeup as partly marketing-driven and would like comparisons to standard SAST/DAST results.
- Questions remain about how autonomous the AI analysis was and how prompts/workflows were structured (unclear from the thread).
Broader Implications
- Recognition that similar or worse vulnerabilities likely exist in closed-source EMRs and other critical systems (e.g., voting machines), but can’t be audited publicly.
- Concern that attackers also use AI, making defensive AI scanning more necessary.
- Worry about lone maintainers and “vibe-coded” apps producing insecure systems, and whether AI will raise the security floor or just scale insecure code.