New Microsoft whitepaper shares how to prepare your data for secure AI adoption

The era of AI brings many opportunities to companies, from boosts in productivity to new generative AI applications and more. As organizations continue to harness the power of machine learning, these AI innovations are poised to have an enormous impact on industries and society at large. A recent study by PwC estimates that AI could increase global gross domestic product by up to 14% by 2030, adding $15.7 trillion to the global economy.1 But along with that tremendous value, AI also brings new data risks. In this blog, we summarize the key points of our new whitepaper, Data security as a foundation for secure AI adoption, which details strategies and a step-by-step guide to help organizations address the new data challenges and data security needs of the era of AI.


Data security as a foundation for secure AI adoption

Learn the four steps organizations can take to prepare their data for AI.

Preparing data for AI adoption

In a recent survey on the state of generative AI, business leaders expressed optimism about the potential of AI but shared their struggle to gain full visibility into their AI programs, which creates data security and compliance risks.2 Of the organizations surveyed, 58% expressed concern about the unsanctioned use of generative AI at their companies and the general lack of visibility into it, and 93% of leaders report heightened concern about shadow AI, meaning unsanctioned or undetected AI usage by employees.3 Our whitepaper walks through four key steps organizations can take to prepare their data for AI, with a detailed checklist at each stage: know your data, govern your data, protect your data, and prevent data loss. Taking these steps and learning how to prepare your data properly for AI tools can help address these concerns and decrease data risk.

Choosing which AI to deploy

Once you secure your data and prepare to deploy AI, how do you decide which generative AI application is best for your organization? For many customers, choosing AI that integrates with their existing Microsoft 365 apps helps maintain security and maximize their current technology investments.

Copilot for Microsoft 365 is integrated into Microsoft 365 apps so that it understands a user’s work context, is grounded in Microsoft Graph to provide more personalized and relevant responses, and can connect to business data sources to reason over all of a user’s enterprise data. Copilot inherits Microsoft 365 controls and commitments, such as access permissions and the data commitments and controls of the European Union Data Boundary, providing customers with comprehensive enterprise data protection. And with Microsoft Purview, Copilot customers receive real-time data security and compliance controls seamlessly integrated into their organization’s Microsoft 365 deployment.
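
A key part of that protection is that grounding over Microsoft Graph is security trimmed: retrieval only returns content the signed-in user already has permission to see. Purely as a hedged illustration of that permission model, and not Copilot’s internal implementation, the Python sketch below calls the Microsoft Graph search endpoint with a delegated access token; the placeholder token, the example query string, and the result handling are assumptions made for this example.

```python
# Illustrative sketch only: a delegated Microsoft Graph search query returns
# results trimmed to the signed-in user's permissions. This is NOT Copilot's
# internal code; the access token and query below are placeholders.
import requests

GRAPH_SEARCH_URL = "https://graph.microsoft.com/v1.0/search/query"
ACCESS_TOKEN = "<delegated-access-token>"  # assumed: acquired via MSAL for the signed-in user


def search_my_content(query_string: str, size: int = 10) -> list[dict]:
    """Search SharePoint/OneDrive content the current user is permitted to see."""
    body = {
        "requests": [
            {
                "entityTypes": ["driveItem", "listItem"],
                "query": {"queryString": query_string},
                "from": 0,
                "size": size,
            }
        ]
    }
    resp = requests.post(
        GRAPH_SEARCH_URL,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json=body,
        timeout=30,
    )
    resp.raise_for_status()
    hits = []
    for response in resp.json().get("value", []):
        for container in response.get("hitsContainers", []):
            hits.extend(container.get("hits", []))
    return hits


if __name__ == "__main__":
    # Documents the user cannot open never appear in these results.
    for hit in search_my_content("quarterly revenue projections"):
        print(hit.get("hitId"), hit.get("summary"))
```

Because the token represents the signed-in user, anything that user cannot access is simply absent from the results. That is the same principle that keeps Copilot’s responses within existing access permissions.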

Secure and govern usage of Copilot for Microsoft 365

As organizations deploy Copilot and other generative AI applications, they want to get ahead of the inherent risks of sharing data with generative AI, including data oversharing, data leakage, and non-compliant use of AI apps. In the whitepaper, we walk through the steps you can take to discover and protect your organization’s data as it interacts with AI, and then how to govern usage of Copilot once it is deployed. Many organizations also choose to add Microsoft Purview, which provides capabilities such as Microsoft Purview AI Hub to help you gain visibility into how your organization is already using AI, including insights into sensitive data being shared with AI applications. The whitepaper shares more detail on the AI Hub interface, its capabilities, and insights into the risks identified by Microsoft Purview. It also shows how you can protect sensitive data throughout its AI journey, with information on sensitivity labeling, data security controls, and data loss prevention capabilities.
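
To make the data loss prevention idea concrete, here is a deliberately simplified Python sketch of the kind of check a DLP policy performs before content reaches a generative AI app: screening outbound text for sensitive-looking patterns and blocking or flagging the interaction. The patterns and the block action are illustrative assumptions only; Microsoft Purview’s classifiers and policy engine are far more sophisticated and are enforced natively rather than in custom code.

```python
# Conceptual illustration of a DLP-style check on text bound for an AI app.
# This is a simplified sketch, not Microsoft Purview's detection logic; real
# classifiers use checksums, context, and trained models, not bare regexes.
import re

# Assumed example patterns: US SSN-like strings and 16-digit card-like numbers.
SENSITIVE_PATTERNS = {
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_like": re.compile(r"\b(?:\d[ -]?){16}\b"),
}


def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_pattern_names) for a prompt before it leaves the tenant."""
    matches = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    return (len(matches) == 0, matches)


if __name__ == "__main__":
    allowed, findings = screen_prompt("Customer SSN is 123-45-6789, please summarize the case.")
    if not allowed:
        # A real policy engine would block, warn, or audit the interaction here.
        print(f"Blocked: prompt contains sensitive patterns: {findings}")
    else:
        print("Prompt allowed.")
```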

The whitepaper also details how your organization can prioritize compliance obligations with Microsoft Purview, assess your compliance with existing AI regulations, and conduct legal investigations into incidents involving AI interactions.

Gain the confidence to innovate with AI, securely

Implementing the strategies described in our whitepaper, Data security as a foundation for secure AI adoption, can give your organization the confidence to explore new opportunities with AI while protecting and governing your data, minimizing security risks, and staying ahead of compliance obligations.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1PwC AI Analysis—Sizing the Prize, PwC.

2The 2023 State of Generative AI Survey, Portal26.

3As Companies Eye Generative AI to Improve Productivity and Growth, Two-thirds Admit to GenAI-related Security or Misuse Incident in the Last Year, Yahoo.