
Cyber defence in the age of AI

As every business strives to become hyper-connected in this age of digital disruption, cybercrime grows ever more disruptive, with far-reaching effects on individuals, businesses, and society. Defenders are fighting an asymmetric battle: attackers are often better skilled, resourced, and organised than many security teams, and they do not have to play by the rules defenders must. Compounding this, in most organisations the incident response team receives far more security alerts than it can realistically manage.

Automated detection and response systems can help tip the scales in favour of defenders by using risk-based algorithms and anomalous activity detection to flag the events that require human expertise to investigate further. These systems surface patterns and behaviours that are not obvious to the human eye, with more precision and speed than human defenders can achieve alone.
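Conceptually, this kind of risk-based flagging can be sketched with a simple statistical threshold. The sketch below is an illustrative toy, not Microsoft's algorithm: the host names, counts, and the z-score rule are all assumptions made for the example.

```python
# Illustrative sketch only: flag events whose activity count deviates
# sharply from a historical baseline, so that only high-risk anomalies
# reach a human analyst. Real products use far richer risk models.
from statistics import mean, stdev

def flag_anomalies(baseline, current, threshold=3.0):
    """Return names whose current count is more than `threshold`
    standard deviations above the mean of the baseline counts."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [name for name, count in current.items()
            if sigma > 0 and (count - mu) / sigma > threshold]

# Hourly failed-login counts observed during a normal period:
baseline = [12, 9, 15, 11, 14, 10, 13]
# Counts in the last hour, per source host (hypothetical names):
current = {"web-01": 14, "web-02": 11, "vpn-gw": 480}

print(flag_anomalies(baseline, current))  # only vpn-gw is flagged
```

Here the spike on the hypothetical `vpn-gw` host stands far outside the baseline, so it alone is escalated; the two web hosts stay within normal variation and never reach the analyst's queue.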

The background to “cognitive cyber”

As advances in dynamic and adaptive cyber defence systems become reality, what do organisations need to do to become ready for cognitive cyber, and what exactly is it?

Cognition refers to the mental processes involved in gaining knowledge and comprehension. Cognitive cyber attempts to simulate that process by applying self-learning algorithms, natural language processing, and big-data mining techniques to the cybersecurity domain. It overlays cognitive systems on traditional artificial intelligence (AI)/machine learning (ML) models to achieve something greater than the sum of the parts.

To recap:

  • Classic/traditional AI and ML detect and classify, and can work on vast amounts of data for use in real-time applications and automation of capabilities. Traditional AI is strong when it comes to looking at a large field of data and finding patterns or continuations (like making recommendations).
  • Generative AI (GAI), often powered by generative pre-trained transformers (GPT), effectively understands and creates content. It works on relatively small chunks of data – text, images, sounds, videos. Large language models (LLMs) are a kind of GAI that work on text. LLMs are good at understanding language, summarising, and translating concepts, for example from natural language to code or vice versa.

Clearly, linking these models makes for a much more powerful capability. And by using the compute power, scalability, and richness of the cloud, we can build entire systems of intelligence that can reason over vast amounts of information, both structured and unstructured.

Our name for this intelligence-based cognitive capability? Microsoft Copilots. These are experiences that use generative AI to help humans with complex cognitive tasks.

Introducing Microsoft Security Copilot

Built specifically to augment human security expertise, Microsoft Security Copilot combines the most advanced GPT-4 model from OpenAI with a Microsoft expert-driven, security-specific LLM.

Most LLMs are trained on corpora of written human language. Security Copilot is trained on security logs, attack telemetry, and threat intelligence; the outcome is the first AI/ML model trained specifically for security.

But the capability is much more than just the large language model. Built into the product are specific cyber skills and promptbooks informed by our global threat intelligence, and the service runs on Azure’s hyperscale infrastructure. This means the models inherit Microsoft’s comprehensive approach to security, compliance, and privacy. And when it comes to the data Copilot reasons over, your data remains your data.

Security Copilot democratises defender skills by letting analysts query in natural language rather than having to learn complex query languages such as Kusto Query Language (KQL). This lowers the barrier to entry for new analysts, which helps address the cybersecurity skills shortage. We’ve launched an Early Access program for qualified candidates to explore the capabilities of Security Copilot; reach out to your sales representative for more details.
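To illustrate the gap that natural-language querying closes, here is a hypothetical side-by-side: the KQL an analyst might otherwise hand-write, against the equivalent plain-English question. The table and column names follow the typical Microsoft Sentinel sign-in log schema and are shown here purely as an assumption for illustration.

```python
# Hypothetical comparison: the same question asked two ways.
# The KQL below uses common Microsoft Sentinel schema names
# (SigninLogs, ResultType, UserPrincipalName) as an illustrative
# assumption, not a guaranteed match for any given workspace.
kql_query = """
SigninLogs
| where TimeGenerated > ago(24h)
| where ResultType != 0            // failed sign-ins only
| summarize Failures = count() by UserPrincipalName
| top 5 by Failures
"""

natural_language_prompt = (
    "Which five users had the most failed sign-ins in the last 24 hours?"
)

print(kql_query)
print(natural_language_prompt)
```

The hand-written version requires knowing table names, operators, and aggregation syntax; the natural-language version requires only knowing what you want to find out.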

Use cases for Microsoft Security Copilot

Human ingenuity and expertise will always be an irreplaceable component of defence, so we need technology that augments these unique capabilities and improves the overall analyst experience. For this reason, we are initially focusing on security operations centre (SOC) use cases.

The three primary use cases are security posture management, incident response, and security reporting.

  • Security posture management: Security Copilot delivers information on anything that might expose an organisation to a known threat. It then gives prescriptive guidance on how to protect against those potential vulnerabilities. A query such as: ‘How can I improve my security posture?’ will return evidence-based recommendations.
  • Incident response: Security Copilot can quickly surface an incident, enrich it with context from other data sources, assess its scale and impact, and provide information on what the source might be. Again, it will support the analyst through the response and remediation steps with guided recommendations.
  • Security reporting: Security Copilot can deliver customisable reports that are ready to share and easy to consume to keep managers and other stakeholders in the loop. What this means tactically is you can ask Security Copilot in natural language: ‘Summarise this incident in a single PowerPoint slide’, and it will do just that.

Preparing for cognitive cyber defence: 3 steps

In the future, our vision with Security Copilot is to support use cases across security, identity, management, compliance and more, leveraging skillsets across Microsoft and third-party products. In the meantime, and whilst Security Copilot is not yet publicly available, there are things organisations can do to prepare for these cognitive cyber defence capabilities:

Step 1: Secure your identities – especially privileged identities and the accounts of SOC members. Attackers frequently target these individuals to gain access to critical information and systems and to elevate the impact of a successful compromise.

Step 2: The age of AI is also referred to as the age of platforms. Integrating your security signals into an observability platform brings significant security gains in visibility and automation.

Step 3: Initially, Security Copilot integrates with Microsoft Defender for Endpoint; for an even better experience, deploy Microsoft Sentinel and Intune. Going forward, Security Copilot will integrate with third-party products.

Finally, prepare for the risks. As with any new technology, there are both risks and rewards. To help organisations navigate the risk/reward balance, we’ve released guidance, frameworks, and tooling. 

More information, including links to the risk assessment framework, the Counterfit tool, and the Adversarial Threat Matrix (MITRE ATLAS), can be found in our Security blog post Best practices for AI security risk management.

For information on our commitment to build trustworthy and responsible AI, please read Responsible and trusted AI and Building AI responsibly from research to practice.

Cognitive and AI/ML technologies are here to stay. While they bring immense potential for improving our defenders’ experience, securing our organisations, and protecting society, we must also be mindful of potential vulnerabilities on an equally large scale and defend against that risk.

Find out more

Introducing Microsoft Security Copilot

Microsoft Security Copilot Early Access Program

News Center: Microsoft brings the power of AI to cyberdefense

Microsoft Security Copilot: Empowering defenders at the speed of AI

About the author

Lesley Kipling, Chief Cybersecurity Advisor, Microsoft EMEA

Previously lead investigator for Microsoft’s detection and response team (DART), Lesley Kipling has spent more than 17 years responding to our customers’ largest and most impactful cybersecurity incidents. As Chief Cybersecurity Advisor, she now provides customers, partners and agencies around the globe with deep insights into how and why security incidents happen, how to harden defences and more importantly, how to automate response and contain attacks with the power of the cloud and machine learning. She holds a Master of Science in Forensic Computing from Cranfield University in the United Kingdom.