What it is
Funding for collaborative research in which Microsoft and universities work together to advance artificial intelligence for solving computing security problems.
About
Microsoft is committed to pushing the boundaries of technology to empower every person and every organization on the planet to achieve more. The cornerstone of this commitment is building systems that are secure and providing tools that enable customers to manage security, legal, and regulatory standards.
The goal of this request for proposals (RFP) is to spark new AI research in different areas of phish protection that will expand our understanding of the communication graph, email and web content, and the economics of phishing, and of how to secure our customers’ assets in the face of increasingly sophisticated attacks while providing fairness and privacy guarantees.
As our cyber defense systems grow more complex in the face of ever-evolving and sophisticated attackers, the human element remains the weakest link, with few effective protections. Humans are targeted through various modern communication channels and tricked into disclosing sensitive information that may include credentials, financial details, PII, and certificates. According to the FBI’s 2020 IC3 Report, social engineering attacks such as phishing, vishing, and smishing rose by 110% from 2019 alone.
The Microsoft Security AI Academic Program is launching an academic grants program. We will fund one or more projects (up to $150K in total funding for this RFP) as new collaborative research efforts with university partners so that we can invent the future of security together.
Timeline
- April 30, 2021: RFP published.
- June 6, 2021: Proposals due.
- June 18, 2021: Winners announced.
- Summer 2021: Awards made, and planning begins with regularly scheduled meetings, calls, and visit(s) by Microsoft to the winning university or universities.
- Spring 2022: Review of progress for potential second round of funding (pending progress and availability of funds).
- Fall 2022: Report back.
Research Goals
Research is an integral part of the innovation loop. Most of the exciting research is happening in universities around the world. The goal of the Microsoft Security AI (MSecAI) RFP is to develop new knowledge and capabilities that can provide a robust defense against future attacks. Through our grants program, we hope not only to support academic research, but also to develop long-term collaborations with researchers around the world who share the same goal of protecting private data from unauthorized access.
Proposals are invited on all areas of computing related to phish protection and AI, particularly in the following areas of interest:
Understanding the communication graph
A communication graph is a collection of entities, including user accounts, applications, websites, and shared infrastructure, together with the relationships between those entities, such as emails, P2P messages, and login attempts. How do we leverage this dynamic graph at scale to extract key insights while providing privacy guarantees? Can we understand user interaction profiles over time and identify deviations to detect compromised accounts, phish emails from spoofed domains, bulk emails, etc.?
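To make the abstraction concrete, here is a minimal sketch (not part of the RFP; the entity names, edge types, and threshold are hypothetical) of how a communication graph could be represented as typed, timestamped edges and mined for a simple deviation in an account’s interaction profile:

```python
# Minimal sketch (not part of the RFP): one way to represent a communication
# graph as typed entities and timestamped, typed edges, and to flag a simple
# deviation from an account's historical interaction profile. All entity names
# and thresholds below are hypothetical.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    source: str       # e.g. a user account or application
    target: str       # e.g. another account, a website, shared infrastructure
    kind: str         # "email", "p2p_message", "login_attempt", ...
    timestamp: float  # seconds since epoch

def recipients_by_sender(edges):
    """Build a per-sender profile: the set of targets each source has contacted."""
    profile = defaultdict(set)
    for e in edges:
        profile[e.source].add(e.target)
    return profile

def flag_new_recipient_burst(history, recent, min_new=10):
    """Flag senders whose recent traffic reaches many targets never seen in history,
    a crude proxy for a compromised account or a bulk/phish campaign."""
    past = recipients_by_sender(history)
    current = recipients_by_sender(recent)
    flagged = {}
    for sender, targets in current.items():
        new_targets = targets - past.get(sender, set())
        if len(new_targets) >= min_new:
            flagged[sender] = new_targets
    return flagged

# Hypothetical usage: 'history' and 'recent' would be Edge records extracted
# from email, messaging, and sign-in telemetry.
history = [Edge("alice@contoso.com", "bob@contoso.com", "email", 1.0)]
recent = [Edge("alice@contoso.com", f"user{i}@example.com", "email", 2.0) for i in range(12)]
print(flag_new_recipient_burst(history, recent))
```

A production system would of course work over far richer edge types and privacy-preserving aggregates; the point here is only the shape of the data structure.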
Understanding the content
Ninety percent of large enterprise customer breaches start from an email that tricks users into revealing sensitive information. Most of these emails rely on some form of psychological manipulation: a sense of authority, urgency to take immediate action, a threat, an opportunity for monetary gain or loss, and so on. Assuming clear-text email data is available, what are some approaches that help machines understand the high-level intent of a given email while providing privacy guarantees? How can we effectively group known phish emails into high-level campaigns based on content topics and exploitation techniques?
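As one purely illustrative starting point (not a prescribed approach), the sketch below groups phish email bodies into coarse campaigns by a crude keyword signature of lure topic and exploitation technique; real systems would more likely use topic models, embeddings, or clustering, and every keyword list here is hypothetical:

```python
# Simplistic sketch only: grouping known phish emails into coarse "campaigns"
# by a crude content signature (lure topic + exploitation technique keywords).
# The keyword lists are hypothetical placeholders.
from collections import defaultdict

TOPIC_KEYWORDS = {
    "payroll": ["salary", "direct deposit", "payroll"],
    "account_security": ["password", "verify your account", "unusual sign-in"],
    "invoice": ["invoice", "payment overdue", "wire transfer"],
}
TECHNIQUE_KEYWORDS = {
    "urgency": ["immediately", "within 24 hours", "act now"],
    "authority": ["it department", "ceo", "compliance team"],
}

def _match(text, keyword_map):
    hits = [name for name, kws in keyword_map.items() if any(k in text for k in kws)]
    return tuple(sorted(hits)) or ("unknown",)

def group_into_campaigns(emails):
    """Group email bodies by their (topics, techniques) signature."""
    campaigns = defaultdict(list)
    for body in emails:
        text = body.lower()
        signature = (_match(text, TOPIC_KEYWORDS), _match(text, TECHNIQUE_KEYWORDS))
        campaigns[signature].append(body)
    return campaigns

emails = [
    "Verify your account immediately or it will be locked.",
    "Unusual sign-in detected, act now to verify your account.",
    "Invoice attached, payment overdue, wire transfer required.",
]
for signature, members in group_into_campaigns(emails).items():
    print(signature, len(members))
```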
Fairness and accountability for security
As ML is used in more security-sensitive applications, the ability of these systems to generalize globally without disrupting end users, especially any specific segment of the user population, becomes critical. How do we define fairness in security and identify related issues when developing AI systems? Can we develop offline and online experimentation tools to test that our ML models are not biased with respect to attributes such as geographic location, language, and industry vertical? How do these test cases help us validate the fairness of ML models?
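As a hedged illustration of what such offline experimentation tooling might check (the grouping attribute, records, and disparity measure below are hypothetical, not part of the RFP), one simple test compares a phish classifier’s false positive rate across user segments:

```python
# Illustrative sketch only: comparing per-segment false positive rates for a
# phish classifier as a basic offline fairness check. The attribute ("geo")
# and the evaluation records are hypothetical placeholders.
from collections import defaultdict

def false_positive_rate_by_group(records, group_key="geo"):
    """records: dicts with 'label' (1 = phish), 'pred' (1 = flagged), and a group attribute."""
    fp = defaultdict(int)   # legitimate mail incorrectly flagged, per group
    neg = defaultdict(int)  # all legitimate mail, per group
    for r in records:
        if r["label"] == 0:
            neg[r[group_key]] += 1
            if r["pred"] == 1:
                fp[r[group_key]] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

def max_fpr_gap(rates):
    """A crude disparity measure: the spread between the best- and worst-served group."""
    values = list(rates.values())
    return max(values) - min(values) if values else 0.0

# Hypothetical evaluation data drawn from offline experimentation.
records = [
    {"label": 0, "pred": 0, "geo": "EU"}, {"label": 0, "pred": 1, "geo": "EU"},
    {"label": 0, "pred": 0, "geo": "APAC"}, {"label": 0, "pred": 0, "geo": "APAC"},
]
rates = false_positive_rate_by_group(records)
print(rates, "gap:", max_fpr_gap(rates))
```

A disparity measure like this could gate model promotion in offline or online experiments, though defining an appropriate tolerance and the right set of attributes is itself part of the proposed research.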
When it comes to accountability, how can we identify and assign responsibility for a decision made by an AI system? What steps can an incident responder take to address business disruptions caused by misclassifications from an AI system? How can we validate that the same misclassifications do not recur as ML systems are retrained? In addition, some ML systems may work with complex, obfuscated data sources that might not yield human-understandable explanations. How do we justify the decisions made by AI systems in such cases?
Verifying the authenticity of modern communication channels
While industry phishing attempts are predominantly carried out through email, many of these attacks have migrated to modern communication channels such as professional networks, P2P messaging, search, and ads. Phish attempts are becoming increasingly convincing to end users with the advancement of techniques like deep fakes for audio and video generation, content morphing, and fake replies. How do we leverage AI systems to verify the authenticity of such content? Moreover, how do we differentiate legitimate user accounts from adversarial or tester accounts set up to test defense systems or pollute backend telemetry?
Protecting patient zero
Based on this paper, an average phishing attack spans 21 hours between the first and last victim, and detection of each attack occurs an average of 9 hours after the first victim. This gives attackers a window of opportunity during which most of the damage is done. How do we leverage AI systems to adapt to adversarial temporal drift and prevent the first victim (patient zero) from being compromised? How can we use human-in-the-loop AI systems to enable experts to update defenses automatically? How can AI systems be leveraged to identify and learn from the discovery of new attack campaigns? How can we augment supervised ML approaches with unlabeled, noisy data to ensure good feature-distribution coverage when training our ML models?
Economics of phishing
Phishing can be seen as an economic problem. Attackers operate like businesses by making investments in campaign inputs to generate returns by selling stolen credentials, using stolen credentials to gain network access, or committing direct fraud. Firms and users invest hundreds of billions of dollars annually in security protection and expect returns on those investments through reduced cyber risk or increased productivity gains. These markets are rich in common economic complications like externalities, asymmetric information, and uncertainty. However, they remain poorly understood. Can we categorize the attacker ecosystem by business model? What are the returns to firms’ security investments? How do security investments impact the attacker ecosystem and vice versa?
Microsoft funding
Microsoft will fund one or more projects (up to $150K in total funding for this RFP). A second round of funding, pending initial progress and outcomes (see Timeline above), may be considered at some point during this collaboration. All funding decisions will be at the sole discretion of Microsoft. Proposals for this RFP should provide an initial budget and workplan for the research based on the Timeline section above.
Microsoft encourages potential university partners to consider using resources outlined in the RFP in the following manner:
- PhD scholarship stipends.
- Post-doctoral researcher funding.
- Software and hardware research engineer funding.
- Limited but essential hardware and software needed to conduct the research.
Proposal plans should include any of these, or other items, that directly support the proposed research.
Microsoft research collaborators, at no cost to the winning teams, may visit the university partners one or more times to foster collaborative planning and research. These visits will be agreed upon and scheduled after an award decision is made. Likewise, a cadence of meetings will be mutually agreed upon at the start of the collaboration. Proposals are welcome to include other suggestions about how to foster an effective collaborative research engagement.
Eligibility
This RFP is not restricted to any one discipline or tailored to any methodology. Universities are welcome to submit cross-disciplinary proposals if that contributes to answering the proposed research question(s).
To be eligible for this RFP, your institution and proposal must meet the following requirements:
- Institutions must have access to the knowledge, resources, and skills necessary to carry out the proposed research.
- Institutions must be either an accredited or otherwise degree-granting university with non-profit status, or a research organization with non-profit status.
- Proposals that are incomplete or that request funds exceeding the maximum award will be excluded from the selection process.
- The proposal budget must reflect your university’s policies toward receiving unrestricted gifts and should emphasize allocation of funds toward completing the research proposed.
Additionally:
- Proposals should include a timeline (approximately 12-18 months) or workplan that begins in summer 2021 and ends in fall of 2022.
- To optimize the chances of receiving an award, we encourage researchers from the same university to consider submitting a single, joint proposal (rather than multiple individual proposals) that leverages their various skills and interests to create the strongest possible proposal.
- Multiple universities can submit a joint/single proposal together. Please clearly indicate in the budget section how the budget, not to exceed $150K USD, will be shared.
Selection process and criteria
All proposals received by the submission deadline and in compliance with the eligibility criteria will be evaluated by a panel of subject-matter experts chosen by Microsoft. Drawing from evaluations by the review panel, Microsoft will select which proposals will receive the awards. Microsoft reserves the right to fund the winning proposal at an amount greater or lower than the amount requested, up to the stated maximum amount. Note: Microsoft will not provide individual feedback on proposals that are not funded.
All proposals will be evaluated based on the following criteria:
- Addresses an important research area identified above whose questions, if answered, have the potential for significant impact on that domain.
- Expected value and potential impact of the research on relevant information security fields.
- Potential for wide dissemination and use of knowledge, including specific plans for scholarly publications, public presentations, and white papers.
- Ability to complete the project based upon adequate available resources, reasonable timelines, and the identified contributors’ qualifications.
- Qualifications of the research team, including previous history of work in the area, successful completion of previous projects, research or teaching awards, and scholarly publications.
- Diversity is highly valued; research teams should strive to reflect a diversity of backgrounds, experiences, and talent.
- Evidence of university support contributed in-kind to directly support and supplement the research efforts.
- Budget is strategic to maximize impact of research.
- Possible additional information as requested by the review panel, which might be requested via a conference call.
Conditions
- As a condition of accepting an award, principal investigators agree that Microsoft may use their name and likeness to publicize their proposals (including all proposal content except detailed budget information) in connection with the promotion of the research awards in all media now known or later developed.
- Researchers must be willing to engage with Microsoft about their project and experience and to provide updates via monthly or quarterly calls.
- The review process is internal, and no review feedback will be given to submitters.
- Microsoft encourages researchers to publish their work in scholarly venues such as journals and conferences. Researchers must provide Microsoft a copy of any work prior to publication. So long as accurate, such publications are not subject to Microsoft’s approval, except that, at Microsoft’s request, researchers will delete any Microsoft Confidential Information identified or delay publication to enable Microsoft to file for appropriate intellectual property (IP) protection for any project IP disclosed in such work.
- All data sets and any new IP resulting from this effort will be made publicly available for any researcher, developer, or interested party to access, helping further the goals of this initiative: providing higher quality and better access to technology services that empower people and organizations to be more productive.
- Funded researchers must seek approval of their institution’s review board for any work that involves human subjects.
- At the completion of the project, the funded researchers will be required to submit to Microsoft a report describing project learnings.
- Any security issues in Microsoft products or services discovered during this research must be reported to the Microsoft Security Response Center.