Artificial intelligence is now deeply integrated into enterprise software. From drafting emails to summarizing long conversations, tools like Microsoft 365 Copilot promise to save time and improve productivity. Many organizations are paying significant subscription fees to bring these AI features into their workflows. However, a newly discovered bug has triggered serious privacy concerns for enterprise customers.
The issue was first reported by Office365ITPros, which spotted an advisory published on the Microsoft admin portal. According to the advisory, Microsoft 365 Copilot was reading confidential emails while generating summaries for users.
The problem was linked to Copilot Chat’s summarization feature. Due to a programming bug, emails from the Sent Items and Drafts folders were being sent for summarization, even when data protection policies should have restricted such access.
This is a major concern because many organizations rely on Data Loss Prevention (DLP) policies and sensitivity labels to prevent confidential data from being exposed or processed in unintended ways. The fact that Copilot could reach such emails anyway raised alarms across IT departments.
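Conceptually, the control that failed here is a pre-filter: before any message reaches a summarizer, it should be checked against folder exclusions and label restrictions, and dropped if either check fails. The sketch below illustrates that idea only; the folder and label names, and the `Message` record, are illustrative assumptions, not Microsoft's actual policy schema or Copilot's implementation.

```python
from dataclasses import dataclass

# Hypothetical message record. In a real tenant these fields would come
# from Microsoft Graph or an eDiscovery export, not be hand-built.
@dataclass
class Message:
    subject: str
    folder: str             # e.g. "Inbox", "SentItems", "Drafts"
    sensitivity_label: str  # e.g. "General", "Confidential"

# Illustrative policy sets -- assumptions for this sketch, not a real schema.
EXCLUDED_FOLDERS = {"SentItems", "Drafts"}
RESTRICTED_LABELS = {"Confidential", "Highly Confidential"}

def eligible_for_summarization(msg: Message) -> bool:
    """A message may feed the summarizer only if it passes BOTH checks."""
    if msg.folder in EXCLUDED_FOLDERS:
        return False
    if msg.sensitivity_label in RESTRICTED_LABELS:
        return False
    return True

mailbox = [
    Message("Q3 forecast", "Inbox", "General"),
    Message("Offer letter draft", "Drafts", "Confidential"),
    Message("Board minutes", "SentItems", "Highly Confidential"),
]

# Only "Q3 forecast" passes; the bug described above behaves as if the
# folder check were silently skipped for Sent Items and Drafts.
summarizable = [m for m in mailbox if eligible_for_summarization(m)]
```

In this framing, the reported defect is equivalent to the folder check not being applied, so labeled drafts and sent mail flowed into summaries despite policy.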
The incident is being tracked under advisory ID CW1226324. According to Microsoft, customers first noticed the issue on January 21, 2026. It was not an external security researcher who found the flaw; enterprise users themselves flagged the behavior, which prompted Microsoft to publish the advisory.
Microsoft has acknowledged that the behavior was unintended and attributed it to a programming bug.
The fix began rolling out in a staggered manner starting February 10. However, Microsoft has confirmed that the update has not yet reached all affected customers.
The company is currently informing impacted organizations and testing the remediation measures to ensure the fix works properly before full deployment.
For enterprises, trust is everything. AI tools are being integrated into core business processes. If privacy controls can be bypassed because of a software bug, it creates serious compliance and security risks. Organizations that handle legal documents, financial data, healthcare records, or confidential contracts cannot afford accidental exposure, even if it happens due to unintended behavior.
The incident also comes at a time when many companies are still evaluating the risks and benefits of generative AI in the workplace. News like this may make some decision makers reconsider the pace of AI adoption.