Microsoft Copilot Cowork is positioned as a powerful agentic workspace for knowledge workers, but some meaningful gaps around data protection and reliability deserve attention before organizations roll it out widely.
Custom Skills Are Rough Around the Edges
Getting started with Cowork is straightforward enough, but the custom skills experience still has friction. Ask the Cowork frontier agent to create a skill, and it can confidently generate a polished HTML report scoring the skill’s quality and declaring it ready, without ever actually creating the skill file or placing it in the correct OneDrive directory. That kind of confident hallucination about its own actions is a meaningful usability problem, not a minor quirk.
Even after a skill is manually provisioned into OneDrive, the agent recognizes it inconsistently across chat sessions. A skill visible in one session can be completely unknown to the next, despite no changes to the underlying file. The output artifact experience compounds the frustration: session files land in GUID-named folders buried under a Cowork/sessions/ path in OneDrive, and the built-in option to open artifacts directly in OneDrive does not reliably work. Navigating to outputs means manual file browsing.
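If your OneDrive is synced locally, a short script can take some of the sting out of that browsing. The sketch below lists artifacts from the most recently modified GUID-named session folders; the Cowork/sessions/ path reflects the behavior observed above, while the OneDrive root folder name is a hypothetical placeholder you would adjust for your tenant.

```python
from pathlib import Path

# Hypothetical local OneDrive sync root; adjust to your tenant's naming.
ONEDRIVE_ROOT = Path.home() / "OneDrive - Contoso"
SESSIONS_DIR = ONEDRIVE_ROOT / "Cowork" / "sessions"

def latest_session_artifacts(limit: int = 5) -> None:
    """Print the files inside the most recently modified session folders."""
    if not SESSIONS_DIR.is_dir():
        raise SystemExit(f"No session directory found at {SESSIONS_DIR}")
    folders = [p for p in SESSIONS_DIR.iterdir() if p.is_dir()]
    # GUID folder names carry no useful ordering, so sort by modification time.
    folders.sort(key=lambda p: p.stat().st_mtime, reverse=True)
    for folder in folders[:limit]:
        print(f"Session {folder.name}:")
        for artifact in sorted(folder.rglob("*")):
            if artifact.is_file():
                print(f"  {artifact.relative_to(SESSIONS_DIR)}")

if __name__ == "__main__":
    latest_session_artifacts()
```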
The Anthropic API Dependency Has Real Consequences
Cowork routes requests through the Anthropic API rather than keeping processing within Microsoft’s cloud boundary. This has several practical implications that are easy to overlook until something breaks. Microsoft’s own FAQ confirms that Cowork connects to external models for processing, and a recent Anthropic API outage demonstrated what that means in practice: active sessions crashed, and the chat history for those sessions became inaccessible. The session files in OneDrive survived, but the conversational context did not.
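One small mitigation is to check upstream availability before kicking off a long agentic session. Anthropic’s status page appears to be hosted on Atlassian Statuspage, which would expose the standard summary endpoint used below; treat both the URL and the response shape as assumptions to verify in your own environment.

```python
import json
import urllib.request

# Assumption: status.anthropic.com follows the standard Statuspage API.
STATUS_URL = "https://status.anthropic.com/api/v2/status.json"

def upstream_model_status() -> str:
    """Return the current status indicator reported for the Anthropic API."""
    with urllib.request.urlopen(STATUS_URL, timeout=10) as resp:
        payload = json.load(resp)
    # Statuspage indicators are "none", "minor", "major", or "critical".
    return payload["status"]["indicator"]

if __name__ == "__main__":
    indicator = upstream_model_status()
    if indicator == "none":
        print("Anthropic API reports all systems operational.")
    else:
        print(f"Anthropic API degraded ({indicator}); "
              "consider deferring long Cowork sessions.")
```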
This external routing also explains why Cowork cannot open documents encrypted with Microsoft Purview Information Protection. Even when a user holds the appropriate credentials and permissions, the encrypted content cannot be passed to an external API. Standard Copilot chat sessions using GPT models handle Purview-protected files without issue because processing stays within Microsoft’s infrastructure. For organizations with significant volumes of protected content, this is a meaningful workflow limitation.
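Teams that want to gauge their exposure can triage files before handing them to Cowork. One well-known heuristic: an unencrypted OOXML file (.docx, .xlsx, .pptx) is a ZIP archive, while an IRM/MIP-protected one is repackaged as an OLE compound file. The sketch below applies that check; the OneDrive path is hypothetical, and the heuristic deliberately skips legacy .doc/.xls/.ppt files, which are compound files by design.

```python
from pathlib import Path

OOXML_EXTS = {".docx", ".xlsx", ".pptx"}
ZIP_MAGIC = b"PK\x03\x04"                        # plain OOXML is a ZIP archive
CFB_MAGIC = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"  # OLE Compound File header

def looks_mip_protected(path: Path) -> bool:
    """Heuristic: a modern Office file stored as an OLE compound file is
    very likely IRM/MIP-encrypted. Only meaningful for OOXML extensions,
    since legacy binary Office formats are compound files anyway."""
    if path.suffix.lower() not in OOXML_EXTS:
        return False
    with path.open("rb") as f:
        header = f.read(8)
    return header.startswith(CFB_MAGIC)

def inventory(root: Path) -> None:
    """Flag files Cowork will likely refuse to open."""
    for path in root.rglob("*"):
        if path.is_file() and looks_mip_protected(path):
            print(f"Likely Purview-encrypted: {path}")

if __name__ == "__main__":
    inventory(Path.home() / "OneDrive - Contoso")  # hypothetical sync root
```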
Organizations subject to GDPR or other data-residency requirements should also note that certain countries and regions may not have access to Cowork at all, precisely because of this external data routing, which creates uneven capability across global teams.
The DLP Policy Gap Is the Most Pressing Issue
The most significant finding is that Cowork does not appear to respect Microsoft Purview Data Loss Prevention policies configured for Copilot. A document with a DLP policy explicitly blocking Copilot access returns the expected blocked response in standard Copilot chat. The same document, queried through Cowork, returns the content without restriction.
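Anyone reproducing this test should capture the results in a form auditors can act on. A minimal sketch of such a test log follows; the document path and the recorded observations are hypothetical placeholders for what a compliance team would fill in by hand after running each query.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccessProbe:
    document: str    # path or URL of the DLP-scoped test document
    surface: str     # e.g. "Copilot chat" or "Cowork"
    expected: str    # "blocked", per the configured DLP policy
    observed: str    # what the surface actually returned
    checked_at: str  # UTC timestamp of the manual test

def record(document: str, surface: str, observed: str) -> AccessProbe:
    return AccessProbe(
        document=document,
        surface=surface,
        expected="blocked",
        observed=observed,
        checked_at=datetime.now(timezone.utc).isoformat(),
    )

if __name__ == "__main__":
    # Hypothetical results mirroring the behavior described above.
    probes = [
        record("Finance/q3-forecast.docx", "Copilot chat", "blocked"),
        record("Finance/q3-forecast.docx", "Cowork", "content returned"),
    ]
    for p in probes:
        flag = "OK" if p.observed == p.expected else "POLICY GAP"
        print(f"[{flag}] {p.surface}: expected {p.expected}, observed {p.observed}")
```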
Sensitivity labels for Microsoft 365 Copilot are documented as applying to Copilot and Copilot Chat, but where those protections end across Cowork, declarative agents, and Azure AI Foundry agents is not clearly defined. Purview-encrypted files at least present a hard technical barrier. Sensitivity labels applied for classification without encryption, however, do not appear to block Cowork from reading the underlying content.
Microsoft’s data handling documentation for Cowork covers tenant isolation, file storage in OneDrive and SharePoint, and data subject rights, but is less specific about how DLP policies and sensitivity labels are enforced inside Cowork sessions. Until that gap is closed, organizations should verify enforcement in their own tenant before rolling Cowork out widely.