Copilot broke audit logs, but Microsoft won't tell customers
Scope and Severity of the Issue
- Many see this as a serious security/compliance bug: an AI-assisted feature could expose document contents without a corresponding, expected audit trail.
- Others downplay it as a regular defect that was reported and fixed, arguing it doesn’t automatically imply catastrophic HIPAA or regulatory failure.
- There is concern that customers weren’t proactively notified, despite clear implications for audits and incident investigations.
CVE and Vulnerability Classification
- Strong disagreement over whether this deserves a CVE:
  - Some argue CVEs are just standardized IDs for specific vulnerabilities and should apply even to cloud services and single-vendor systems.
  - Others claim CVEs are for broadly distributed software or issues requiring customer action; since Copilot is auto-patched, they say a CVE is unnecessary.
- Several commenters suspect Microsoft’s interpretation of CVE scope is influenced by PR concerns rather than technical criteria.
How Copilot Likely Interacts with Data and Logs
- Many infer that Copilot is not directly opening files; it’s using an indexed or RAG-based search layer over M365 data.
- The consensus guess: audit events are emitted by the surrounding “scaffolding” or search/index services, and instrumentation was placed in the wrong spot (e.g., only when content is surfaced, not when it is retrieved).
- Some stress that logging should be deterministic and tied to data access at the storage/search layer, not to LLM prompts or behavior (see the sketch below).
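A minimal sketch of the pattern these commenters describe, in Python: the audit event is emitted inside the retrieval call itself, so it fires whenever document content is read on a user's behalf, regardless of whether the LLM later cites, quotes, or merely summarizes it. All names here (`SearchIndex`, `AuditLog`, `Document`) are hypothetical stand-ins, not Microsoft's actual services or APIs.

```python
import datetime
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    content: str


class AuditLog:
    """Append-only audit sink (stand-in for a real audit service)."""

    def record(self, user_id: str, doc_id: str, action: str) -> None:
        ts = datetime.datetime.now(datetime.timezone.utc).isoformat()
        print(f"{ts} user={user_id} doc={doc_id} action={action}")


class SearchIndex:
    """Hypothetical search/RAG layer sitting over the document store."""

    def __init__(self, docs: dict[str, Document], audit: AuditLog):
        self._docs = docs
        self._audit = audit

    def retrieve(self, user_id: str, query: str) -> list[Document]:
        # Naive keyword match stands in for vector or keyword search.
        hits = [d for d in self._docs.values() if query.lower() in d.content.lower()]
        # Log at the point of access, deterministically, for every hit handed
        # to the LLM, not only when the model later surfaces a link.
        for doc in hits:
            self._audit.record(user_id, doc.doc_id, "content_retrieved")
        return hits


# Usage: the LLM layer calls retrieve(); whatever it does with the results,
# the access has already been recorded.
index = SearchIndex(
    {"hr-1": Document("hr-1", "Salary review notes for Q3")},
    AuditLog(),
)
context = index.retrieve(user_id="alice", query="salary")
```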
Compliance, HIPAA, and Audit Implications
- Commenters familiar with compliance note:
  - HIPAA does not literally require every access be logged, but regulators strongly encourage detailed auditing and “reasonable and appropriate” controls.
  - Any path where sensitive info can be surfaced without a reliable audit trail undermines SOC 2 / HIPAA / ISO-style assurances.
- Several note this is especially dangerous where users can ask about medical, HR, or other regulated data via Copilot and have no corresponding record of access.
Microsoft Security Culture and AI Push
- Many see this as fitting a pattern: “insecure by default,” product sprawl, rushed AI integrations, and competing internal KPIs (security vs growth/engagement).
- References are made to prior Microsoft security criticisms and marketing claims about “security above all else,” contrasted with behavior in this case.
- Strong frustration at Copilot being “crammed into everything” (VS Code, M365, Excel, etc.), sometimes re-enabling itself or being hard to disable.
Technical Debate: Secure RAG, Indexing, and Permissions
- Long subthread on how to do access-controlled AI search:
  - Some argue this is a well-known, solved problem in enterprise search: store ACLs as metadata, pre-filter candidates by permissions, then pass only allowed documents to the LLM (see the sketch after this list).
  - Others counter that real environments have complex, changing rights across multiple systems, making per-user or per-query filtering and reindexing hard, race-prone, and potentially leaky.
- Concerns that separate search indexes (or vector stores) can become effectively a second, under-audited copy of sensitive data.
- Debate over embeddings: some say vectors are like irreversible hashes; others note that embeddings can leak semantic information if the model is known.
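A sketch of the pre-filtering approach argued for above, assuming a hypothetical index where each chunk carries its source ACL as metadata. Candidates are filtered against the querying user's principals before ranking, so the LLM can only ever see documents the user could open directly. The names (`Chunk`, `retrieve_for_user`, `allowed_principals`) and the permission model are illustrative, not any specific product's API.

```python
from dataclasses import dataclass, field


@dataclass
class Chunk:
    doc_id: str
    text: str
    embedding: list[float]
    allowed_principals: set[str] = field(default_factory=set)  # ACL stored as metadata


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


def retrieve_for_user(
    query_embedding: list[float],
    user_principals: set[str],  # e.g. user id plus group memberships
    index: list[Chunk],
    k: int = 5,
) -> list[Chunk]:
    # 1. Permission pre-filter: drop anything the user cannot read.
    #    Doing this before ranking means unauthorized text never reaches the LLM.
    visible = [c for c in index if c.allowed_principals & user_principals]
    # 2. Rank only the visible candidates by similarity.
    visible.sort(key=lambda c: cosine(query_embedding, c.embedding), reverse=True)
    return visible[:k]
```

The counterargument in the thread still applies to this sketch: it is only as trustworthy as the speed at which ACL changes in the source systems propagate into the index metadata, which is exactly where the staleness and race-condition concerns come from.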
Trust, Governance, and Responsibility
- Repeated theme: customers’ trust in Microsoft for security and compliance is eroding; some organizations are actively trying to move off the stack.
- Several argue executives often prefer “vibes” and short-term AI wins over deeply understanding risks; “the AI did it” is seen as a future accountability shield.
- For internal AI chatbot projects, commenters warn that unless authorization is enforced at every data access point, sensitive leaks are inevitable, and that raising this with leadership is often met with resistance; a minimal sketch of the choke-point pattern follows below.
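To make the "authorization at every data access point" warning concrete, here is a minimal, hypothetical choke-point pattern: every tool the assistant can invoke reads data only through one gateway that checks permissions and writes an audit record before releasing anything, so no prompt path can skip either step. The class, function, and parameter names are placeholders, not an existing framework.

```python
from typing import Callable


class AccessDenied(Exception):
    pass


class DataGateway:
    """Single choke point: every data read the assistant performs goes through fetch()."""

    def __init__(
        self,
        can_read: Callable[[str, str], bool],   # (user_id, resource_id) -> allowed?
        audit: Callable[[str, str], None],      # (user_id, resource_id) -> record event
    ):
        self._can_read = can_read
        self._audit = audit

    def fetch(self, user_id: str, resource_id: str, loader: Callable[[str], str]) -> str:
        if not self._can_read(user_id, resource_id):
            raise AccessDenied(f"{user_id} may not read {resource_id}")
        self._audit(user_id, resource_id)  # logged before any data is released
        return loader(resource_id)


# Usage sketch: tools registered with the assistant receive the gateway, never
# raw storage handles, so authorization and auditing cannot be bypassed.
gateway = DataGateway(
    can_read=lambda user, res: (user, res) in {("alice", "doc-42")},
    audit=lambda user, res: print(f"AUDIT read user={user} resource={res}"),
)
print(gateway.fetch("alice", "doc-42", loader=lambda res: f"contents of {res}"))
```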