We’ve shared how Android’s proactive, multi-layered scam defenses utilize Google AI to protect users around the world from over 10 billion suspected malicious calls and messages every month1. While that scale is significant, the true impact of these protections is best understood through the stories of the individuals they help keep safe every day. This includes people like Majik B., an IT professional in Sunnyvale, California.
Despite his technical background, Majik recently found himself on a call that felt dangerously legitimate. While using his Pixel, he received a call that appeared to be from his bank. The number looked correct, the caller knew his name and his address, and the story about a "suspicious charge" made perfect sense. "I’m usually pretty careful about this stuff," Majik recalled, "but I stayed on the line longer than I normally would. Even knowing how these scams work, it was convincing in the moment."
The turning point came when his phone displayed a Scam Detection warning during the call, which provided a critical moment to pause and reflect. Majik hung up, checked his bank app directly, and confirmed there was no fraudulent charge. For Majik, Scam Detection was the intervention he needed: “The warning is what made me pause and avoid a bad situation.”
While stories like Majik’s show how our existing protections provide a robust shield against scams, our work isn't done. As scammers evolve their tactics and create more convincing and personalized threats, we’re using the best of Google AI to stay one step ahead.
A recent evaluation by Counterpoint Research found that Android smartphones provide the most comprehensive AI-powered protections of any mobile platform. We are committed to building on this foundation by expanding our AI-powered protections to more users and devices, while rolling out new features that utilize on-device AI to defend against increasingly sophisticated threats.
Expanding Scam Detection for Calls to Samsung Devices
To help protect you during phone calls, Scam Detection alerts you if a caller uses speech patterns commonly associated with fraud. We are bringing these protections to more of our users through new regional expansion and availability on new devices. Scam Detection for phone calls on Google Pixel devices is available in the U.S., Australia, Canada, India, Ireland, and the UK.
Scam Detection is already helping millions of users to stay safe from scammers, and we are expanding this feature to more manufacturers, starting with the Samsung Galaxy S26 series in the U.S. We are continuing to work with our partners to bring these industry-leading protections to even more users.
Powered by Gemini’s on-device model, Scam Detection provides intelligent protection against scam calls while ensuring that the processing stays on your device. This keeps your conversations private while delivering warnings in real-time. To preserve your privacy, the phone conversation processed by Scam Detection is neither stored on your device nor shared outside of it. To ensure you stay in total control of your experience, Scam Detection is turned off by default. When enabled, the feature only applies to calls identified as potential scams and is never used in calls with your contacts. You can easily manage these preferences in your phone settings whenever you choose.
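The control model described above reduces to two conditions before any analysis runs: the user has explicitly opted in, and the caller is not a saved contact. Here is a minimal sketch of that gate; this is illustrative only, not Google's implementation, and the type and method names are invented:

```rust
use std::collections::HashSet;

/// Illustrative sketch (not Google's actual implementation) of the
/// eligibility gate described above: on-device scam analysis runs only
/// when the user has explicitly enabled the feature and the caller is
/// not a saved contact.
struct ScamDetectionSettings {
    enabled: bool,           // off by default
    contacts: HashSet<String>,
}

impl ScamDetectionSettings {
    fn new() -> Self {
        Self { enabled: false, contacts: HashSet::new() }
    }

    /// Decide whether this call is eligible for on-device analysis.
    fn should_analyze(&self, caller: &str) -> bool {
        self.enabled && !self.contacts.contains(caller)
    }
}
```

The key property is that a freshly constructed gate never runs analysis until the user turns the feature on, and calls with contacts are excluded unconditionally.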
Enhanced Protection Against Messaging Scams
We want everyone to feel secure when they open their messages, no matter where they are or what language they speak. To make this possible, we’ve now expanded Scam Detection for Google Messages to more than 20 countries. This includes support for several languages including English, Arabic, French, German, Portuguese, and Spanish.
Beyond reaching more people, we are also making our protections more intelligent. We are enhancing how Google Messages identifies fraudulent texts by utilizing our Gemini on-device model on the latest Android flagship devices in the U.S., Canada, and the UK. The added power of Gemini’s on-device model allows for a much more nuanced analysis of complex conversational threats.
For example, it can better detect the subtle conversational patterns used in job offer scams or sophisticated romance baiting scams (also known as “pig butchering”), a deceptive tactic where a scammer builds a long-term "relationship" with a potential victim to gain their trust, before tricking them into a fraudulent investment. Because these methods rely on gradual manipulation and don’t present typical warning signs, they need more advanced capabilities to catch them at scale. These advanced protections are now rolling out on Google Pixel 10 series and other select devices, and will be available on the Samsung Galaxy S26 series.
Gemini-powered Scam Detection alerts a user to a job offer scam
Using the Best of Google AI to Set the Standard in Mobile Scam Protection
Android continues to set the standard in mobile scam protections by leveraging advanced AI to identify and intercept threats as they happen. As scammers’ strategies shift, we remain committed to developing equally adaptive and intelligent defenses. Our goal is to provide you with peace of mind so you can continue to connect and communicate with confidence, knowing our multi-layered defenses are there to help protect your financial and personal data against mobile scams.
Disclaimers
1: This total comprises all instances where a message or call was proactively blocked or where a user was alerted to potential spam or scam activity.
The Android ecosystem is a thriving global community built on trust, giving billions of users the confidence to download the latest apps. To maintain that trust, we’re focused on ensuring that apps do not cause real-world harm through threats like malware, financial fraud, hidden subscriptions, and privacy invasions. As bad actors leverage AI to change their tactics and launch increasingly sophisticated attacks, we’ve deepened our investments in AI and real-time defenses over the last year to maintain the upper hand and stop these threats before they reach users.
Upgrading Google Play’s AI-powered, multi-layered user protections
We’ve seen a clear impact from these safety efforts on Google Play. In 2025, we prevented over 1.75 million policy-violating apps from being published on Google Play and banned more than 80,000 bad developer accounts that attempted to publish harmful apps. These figures demonstrate how our proactive protections and push for a more accountable ecosystem are discouraging bad actors from publishing malicious apps, while our new tools help honest developers build compliant apps more easily. Initiatives like developer verification, mandatory pre-review checks, and testing requirements have raised the bar for the Google Play ecosystem, significantly reducing the paths for bad actors to enter.
User safety is at the core of everything we build. Over the years, we’ve continually introduced ways to help users stay safe and make informed app choices — from parental controls to data safety transparency and app badges. We’re constantly improving our policies and protections to encourage safe, high-quality apps on Google Play and stop bad actors before they cause harm.
Apps on Google Play undergo rigorous reviews for safety and compliance with our policies. Last year, we shared that Google Play runs over 10,000 safety checks on every app we publish, and we continue to check and recheck apps after they’ve been published. In 2025, we continued scaling our defenses even further by:
Boosting AI-enhanced app detection: We integrated Google’s latest generative AI models into our review process, helping our human review team continue to find complex malicious patterns faster.
Preventing unnecessary access to sensitive data: We prevented over 255,000 apps from getting excessive access to sensitive user data and continued to strengthen our privacy policies. Our commitment to privacy-forward app development, supported by tools like Play Policy Insights in Android Studio and the Data safety section, has empowered developers to minimize privacy-sensitive permission requests and prioritize the user in their design choices.
Blocking spam ratings and reviews: Whether they lead to review inflation or deflation, spam ratings and reviews can negatively impact our users’ trust and our developers’ growth. We’re continually evolving our detection models to help ensure app reviews are accurate. Our anti-spam protections blocked 160 million spam ratings and reviews last year, including inflated and deflated reviews. We also prevented an average 0.5-star rating drop for apps targeted by review bombing, protecting our users and developers from unhelpful reviews.
Safeguarding kids and families: Our approach to kids and families is built on the core belief that children deserve a safe, enriching digital environment. Our commitment is to empower parents with robust tools while providing children with access to high-quality, age-appropriate content. Last year, we announced new layers of protection, in addition to our existing safeguards, to prevent younger audiences from discovering or downloading apps involving activities like gambling or dating.
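The anti-spam rating protections above ultimately defend an app's aggregate score. A minimal sketch of spam-resistant aggregation, assuming an upstream detection model has already flagged suspect reviews (this is illustrative, not Google Play's actual algorithm):

```rust
/// Illustrative sketch of spam-resistant rating aggregation. Each entry
/// is (stars 1..=5, flagged_as_spam), where the spam flag is assumed to
/// come from an upstream detection model. Not Google Play's actual
/// algorithm.
fn filtered_average(ratings: &[(u8, bool)]) -> Option<f64> {
    // Keep only reviews that were not flagged as spam.
    let kept: Vec<u8> = ratings
        .iter()
        .filter(|(_, spam)| !*spam)
        .map(|(stars, _)| *stars)
        .collect();
    if kept.is_empty() {
        return None;
    }
    let sum: u32 = kept.iter().map(|&s| s as u32).sum();
    Some(sum as f64 / kept.len() as f64)
}
```

In a review-bombing scenario, excluding flagged one-star reviews is what prevents the artificial rating drop described above.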
Enhancing Google Play Protect to help keep the entire Android ecosystem safe
We also continued to improve our protections for the broader Android ecosystem by expanding Google Play Protect and real-time security measures like in-call scam protections to help keep users safe from scams, fraud, and other threats.
As Android’s built-in defense against malware and unwanted software, Google Play Protect now scans over 350 billion Android apps daily. This proactive protection constantly checks both Play apps and those from other sources to ensure they are not potentially harmful. And, last year, its real-time scanning capability identified more than 27 million new malicious apps from outside Google Play, warning users or blocking the app to neutralize the threat. To benefit from these protections, we recommend that users always keep Google Play Protect on.
While fraudsters are constantly evolving their tactics, Google Play Protect is evolving faster. Last year, we expanded:
Enhanced fraud protection: Google Play Protect’s enhanced fraud protection analyzes and automatically blocks the installation of apps that may abuse sensitive permissions to commit financial fraud. This protection is triggered when a user attempts to install an app from an "Internet-sideloading source" — such as a web browser or messaging app — that requests a sensitive permission. Building on the success of our initial pilot in Singapore, we expanded enhanced fraud protection to 185 markets, now covering more than 2.8 billion Android devices. In 2025, we blocked 266 million risky installation attempts and helped protect users from 872,000 unique, high-risk applications.
In-call scam protection: We also introduced new protections to combat social engineering attacks during phone calls. This feature preemptively disables the ability to turn off Google Play Protect during phone calls, stopping bad actors from being able to trick users into disabling their device's built-in defenses to download a malicious app while on a call.
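The enhanced fraud protection rule above is essentially a conjunction of two signals: where the install came from, and which permissions the app requests. A hedged sketch of that decision, with an illustrative (not exhaustive or official) permission list:

```rust
#[derive(PartialEq)]
enum InstallSource {
    PlayStore,
    OtherAppStore,
    InternetSideload, // e.g. a web browser or messaging app
}

/// Sketch of the blocking rule described above: block installation when
/// the app arrives from an internet-sideloading source *and* requests a
/// permission commonly abused for financial fraud. The permission list
/// here is illustrative, not the exact set used by Google Play Protect.
fn should_block(source: &InstallSource, requested: &[&str]) -> bool {
    const SENSITIVE: [&str; 3] = [
        "RECEIVE_SMS",
        "READ_SMS",
        "BIND_ACCESSIBILITY_SERVICE",
    ];
    *source == InstallSource::InternetSideload
        && requested.iter().any(|p| SENSITIVE.contains(p))
}
```

Requiring both conditions is what keeps the protection targeted: ordinary sideloads without risky permissions, and store installs generally, proceed as usual.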
Partnering with developers for a more secure, privacy-friendly future
Keeping Android and Google Play safe requires deep collaboration. We want to thank our global developer community for their partnership and for sharing their feedback on the tools and support they need to succeed.
In 2025, we focused on reducing friction for developers and providing them with tools to safeguard their businesses:
Building safer apps more easily: We’re helping developers streamline their work by bringing insights directly into their natural workflows. It starts with Play Policy Insights in Android Studio, which gives developers real-time feedback as they code. We focused first on permissions and APIs that grant deeper system access or handle personal data, like location or photos. This gives developers a head start on policy requirements, including prominent disclosures or usage declarations, while they’re still building. When developers move to Play Console to prepare their apps for submission, our expanded pre-review checks help catch common reasons for rejection, like improper usage of credentials or permissions and broken privacy policy links, ensuring smoother, faster reviews.
Stronger threat detection with Play Integrity API: Every day, apps and games make over 20 billion checks with Play Integrity API to protect against abuse and unauthorized access. In 2025, we added hardware-backed signals to make it even harder for bad actors to spoof devices and introduced new in-app prompts that let users fix common issues like network errors without leaving the app. We also launched device recall in beta to help developers identify repeat bad actors even after a device has been reset, all while protecting user privacy.
Building trust through developer verification: We’ve seen how effective developer verification is on Google Play, and now we’re applying those lessons to the broader Android ecosystem. By ensuring there is a real, accountable identity behind every app, verification helps legitimize authentic developers and prevents bad actors from hiding behind anonymity to repeatedly cause harm. After gathering feedback during our early access period, we’ll open up verification to all developers this year. We’ve also added a dedicated account type for students and hobbyists, which will allow them to distribute these apps to a limited number of devices without the full verification requirements.
Greater security with every Android release: In Android 16, developers can protect users’ most private information, like bank logins, with just one line of code. We’ve also automatically applied this feature to certain apps for an instant security boost against “tapjacking,” a trick where bad apps use hidden overlays to steal taps for ad fraud.
Looking ahead
Our top priority remains making Google Play and Android the most trusted app ecosystems for everyone. This year, we’ll continue to invest in AI-driven defenses to stay ahead of emerging threats and equip Android developers with the tools they need to build apps safely. To empower developers who distribute their apps on Google Play, we’ll maintain our focus on embedding checks to help build apps that are compliant by design, while providing guidance to help proactively avoid policy violations before an app is published. We’ll also roll out Android developer verifications to hold bad actors accountable and prevent them from hiding behind anonymity to cause repeated harm.
Thank you for being part of the Google Play and Android community as we work together to build a safer app ecosystem.
Enhanced Recovery Tools
We're also enhancing our recovery tools to make them even more helpful. This update is now available for Android devices running Android 10+.
More Control for Remote Lock. Remote Lock (android.com/lock) is a crucial tool that lets you lock your lost or stolen device from any web browser. We are adding a new optional security question/challenge to the process. This helps ensure that only you, the real device owner, can initiate a lock, adding another layer of security to your recovery flow.
Proactive Protection: "Default-On" in Brazil
Keeping our users safe is a top priority, which is why we're working to make theft protection available out-of-the-box for more Android users. For new Android devices activated in Brazil, two of our key theft protection features are now enabled by default:
Theft Detection Lock: Uses on-device AI to sense motion and context that may indicate a "snatch-and-run" theft. If a theft attempt is detected, it will quickly lock the device screen to help protect your data.
Remote Lock: Allows users to lock their device from any device with web access, via android.com/lock, without having enabled the feature in advance.
This helps ensure new devices have a critical layer of theft protection from day one.
Continuing to Innovate in Device Theft Protection
We’re always evolving our protections to stay one step ahead of thieves. Our ongoing updates help ensure that no matter where you go, you can have greater peace of mind knowing your device and data are protected by Android’s multi-layered defenses. Keep a lookout for even more Android theft protection updates.
A flow chart that depicts the User Alignment Critic: a trusted component that vets each action before it reaches the browser.
The User Alignment Critic runs after the planning is complete to double-check each proposed action. Its primary focus is task alignment: determining whether the proposed action serves the user’s stated goal. If the action is misaligned, the Alignment Critic will veto it. This component is architected to see only metadata about the proposed action and not any unfiltered untrustworthy web content, thus ensuring it cannot be poisoned directly from the web. It has less context, but it also has a simpler job — just approve or reject an action.
This is a powerful, extra layer of defense against both goal-hijacking and data exfiltration within the action step. When an action is rejected, the Critic provides feedback to the planning model to re-formulate its plan, and the planner can return control to the user if there are repeated failures.
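To make the critic's narrow interface concrete, here is a toy sketch: the critic sees only metadata about the proposed action, never raw page content, and returns either approval or a rejection with feedback for the planner. The origin-matching rule below is a stand-in for what is, in the real system, a model-based alignment judgment; all names are illustrative:

```rust
/// Metadata-only view of a proposed action. The critic never receives
/// unfiltered web content, so it cannot be prompt-injected by a page.
struct ActionMetadata {
    kind: &'static str,    // e.g. "click", "type", "navigate"
    target_origin: String,
}

enum Verdict {
    Approve,
    Reject(&'static str), // feedback returned to the planning model
}

/// Toy stand-in for the alignment judgment: approve only actions whose
/// kind is recognized and whose target relates to the user's task.
fn critique(task_origins: &[&str], action: &ActionMetadata) -> Verdict {
    if !["click", "type", "navigate"].contains(&action.kind) {
        return Verdict::Reject("unknown action kind");
    }
    if task_origins.contains(&action.target_origin.as_str()) {
        Verdict::Approve
    } else {
        Verdict::Reject("target origin is unrelated to the user's task")
    }
}
```

The design point is the signature: the critic's inputs are deliberately impoverished, which is what makes its approve-or-reject job robust.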
Enforcing stronger security boundaries with Origin Sets
Site Isolation and the same-origin policy are fundamental boundaries in Chrome’s security model and we’re carrying forward these concepts into the agentic world. By their nature, agents must operate across websites (e.g. collecting ingredients on one site and filling a shopping cart on another). But if an unrestricted agent is compromised and can interact with arbitrary sites, it can create what is effectively a Site Isolation bypass. That can have a severe impact when the agent operates on a local browser like Chrome, with logged-in sites vulnerable to data exfiltration. To address this, we’re extending those principles with Agent Origin Sets. Our design architecturally limits the agent to only access data from origins that are related to the task at hand, or data that the user has chosen to share with the agent. This prevents a compromised agent from acting arbitrarily on unrelated origins.
For each task on the web, a trustworthy gating function decides which origins proposed by the planner are relevant to the task. The design is to separate these into two sets, tracked for each session:
Read-only origins are those from which Gemini is permitted to consume content. If an iframe’s origin isn’t on the list, the model will not see that content.
Read-writable origins are those on which the agent is allowed to actuate (e.g., click, type) in addition to reading from.
This delineation enforces that only data from a limited set of origins is available to the agent, and this data can only be passed on to the writable origins. This bounds the threat vector of cross-origin data leaks. This also gives the browser the ability to enforce some of that separation, such as by not even sending to the model data that is outside the readable set. This reduces the model’s exposure to unnecessary cross-site data. Like the Alignment Critic, the gating functions that calculate these origin sets are not exposed to untrusted web content. The planner can also use context from pages the user explicitly shared in that session, but it cannot add new origins without the gating function’s approval. Outside of web origins, the planning model may ingest other non-web content such as from tool calls, so we also delineate those into read-vs-write calls and similarly check that those calls are appropriate for the task.
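The two per-session origin sets might be modeled as follows. This is a simplified sketch of the design described above, with the invariant that the writable set is a subset of the readable set, since actuating on an origin implies reading from it; the API names are invented:

```rust
use std::collections::HashSet;

/// Sketch of the two per-session origin sets described above. Origins
/// in the readable set may be shown to the model; only origins in the
/// writable set may be actuated on (clicked, typed into).
struct OriginSets {
    readable: HashSet<String>,
    writable: HashSet<String>,
}

impl OriginSets {
    fn new() -> Self {
        Self { readable: HashSet::new(), writable: HashSet::new() }
    }

    /// Called only by the trusted gating function, never by the planner.
    fn grant_read(&mut self, origin: &str) {
        self.readable.insert(origin.to_string());
    }

    /// Writable implies readable, so grant both together.
    fn grant_write(&mut self, origin: &str) {
        self.readable.insert(origin.to_string());
        self.writable.insert(origin.to_string());
    }

    fn may_read(&self, origin: &str) -> bool {
        self.readable.contains(origin)
    }

    fn may_actuate(&self, origin: &str) -> bool {
        self.writable.contains(origin)
    }
}
```

Because only the trusted gating function can grow the sets, a compromised planner cannot grant itself access to new origins.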
Iframes from origins that aren’t related to the user’s task are not shown to the model.
Page navigations can happen in several ways: If the planner decides to navigate to a new origin that isn’t yet in the readable set, that origin is checked for relevancy by a variant of the User Alignment critic before Chrome adds it and starts the navigation. And since model-generated URLs could exfiltrate private information, we have a deterministic check to restrict them to known, public URLs. If a page in Chrome navigates on its own to a new origin, it’ll get vetted by the same critic.
Getting the balance right on the first iteration is hard without seeing how users’ tasks interact with these guardrails. We’ve initially implemented a simpler version of origin gating that tracks only the read-writable set. We will tune the gating functions and other aspects of this system to reduce unnecessary friction while improving security. We think this architecture will provide a powerful security primitive that can be audited and reasoned about within the client, as it provides guardrails against cross-origin sensitive data exfiltration and unwanted actions.
Transparency and control for sensitive actions
We designed the agentic capabilities in Chrome to give the user both transparency and control when they need it most. As the agent works in a tab, it details each step in a work log, allowing the user to observe the agent's actions as they happen. The user can pause to take over or stop a task at any time.
This transparency is paired with several layers of deterministic and model-based checks to trigger user confirmations before the agent takes an impactful action. These serve as guardrails against both model mistakes and adversarial input by putting the user in the loop at key moments.
First, the agent will require a user confirmation before it navigates to certain sensitive sites, such as those dealing with banking transactions or personal medical information. This is based on a deterministic check against a list of sensitive sites. Second, it’ll confirm before allowing Chrome to sign in to a site via Google Password Manager – the model does not have direct access to stored passwords. Lastly, before any sensitive web actions like completing a purchase or payment, sending messages, or other consequential actions, the agent will try to pause and either get permission from the user before proceeding or ask the user to complete the next step. Like our other safety classifiers, we’re constantly working to improve their accuracy to catch edge cases and grey areas.
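These layered checks amount to a gating function over an (origin, action) pair. Here is a deterministic sketch of that idea; the site and action lists are illustrative placeholders, and the real system pairs such deterministic rules with model-based classifiers:

```rust
/// Sketch of the layered confirmation checks described above. The
/// sensitive-site and consequential-action lists are illustrative
/// stand-ins, not the actual lists Chrome uses.
enum Gate {
    Proceed,
    AskUser(&'static str), // reason shown when pausing for the user
}

fn gate_action(origin: &str, action: &str) -> Gate {
    const SENSITIVE_SITES: [&str; 2] = ["bank.example", "health.example"];
    const CONSEQUENTIAL: [&str; 3] = ["purchase", "payment", "send_message"];
    if SENSITIVE_SITES.contains(&origin) {
        Gate::AskUser("confirm navigation to a sensitive site")
    } else if CONSEQUENTIAL.contains(&action) {
        Gate::AskUser("confirm or complete this step yourself")
    } else {
        Gate::Proceed
    }
}
```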
Illustrative example of when the agent gets to a payment page, it stops and asks the user to complete the final step.
Detecting “social engineering” of agents
In addition to the structural defenses of alignment checks, origin gating, and confirmations, we have several processes to detect and respond to threats. While the agent is active, it checks every page it sees for indirect prompt injection. This is in addition to Chrome’s real-time scanning with Safe Browsing and on-device AI that detect more traditional scams. This prompt-injection classifier runs in parallel to the planning model’s inference, and will prevent actions from being taken based on content that the classifier determined has intentionally targeted the model to do something unaligned with the user’s goal. While it cannot flag everything that might influence the model with malicious intent, it is a valuable layer in our defense-in-depth.
Continuous auditing, monitoring, response
To validate the security of this set of layered defenses, we’ve built automated red-teaming systems to generate malicious sandboxed sites that try to derail the agent in Chrome. We start with a set of diverse attacks crafted by security researchers, and expand on them using LLMs following a technique we adapted for browser agents. Our continuous testing prioritizes defenses against broad-reach vectors such as user-generated content on social media sites and content delivered via ads. We also prioritize attacks that could lead to lasting harm, such as financial transactions or the leaking of sensitive credentials. The attack success rates across these tests give immediate feedback on any engineering changes we make, so we can prevent regressions and target improvements. Chrome’s auto-update capabilities allow us to get fixes out to users very quickly, so we can stay ahead of attackers.
Collaborating across the community
We have a long-standing commitment to working with the broader security research community to advance security together, and this includes agentic safety. We’ve updated our Vulnerability Rewards Program (VRP) guidelines to clarify how external researchers can focus on agentic capabilities in Chrome. We want to hear about any serious vulnerabilities in this system, and will pay up to $20,000 for those that demonstrate breaches in the security boundaries. The full details are available in VRP rules.
Looking forward
The upcoming introduction of agentic capabilities in Chrome brings new demands for browser security, and we've approached this challenge with the same rigor that has defined Chrome's security model from its inception. By extending some core principles like origin-isolation and layered defenses, and introducing a trusted-model architecture, we're building a secure foundation for Gemini’s agentic experiences in Chrome. This is an evolving space, and while we're proud of the initial protections we've implemented, we recognize that security for web agents is still an emerging domain. We remain committed to continuous innovation and collaboration with the security community to ensure Chrome users can explore this new era of the web safely.
Bringing in-call scam protections to more users on Android
The UK pilot of Android’s in-call scam protections has already helped thousands of users end calls that could have cost them a significant amount of money. Following this success, and alongside recently launched pilots with financial apps in Brazil and India, we’ve now expanded this protection to most major UK banks.
We’ve also started to pilot this protection with more app types, including peer-to-peer (P2P) payment apps. Today, we’re taking the next step in our expansion by rolling out a pilot of this protection in the United States4 with a number of popular fintechs like Cash App and banks, including JPMorganChase.
We are committed to collaborating across the ecosystem to help keep people safe from scams. We look forward to learning from these pilots and bringing these critical safeguards to even more users in the future.
Notes
Google/YouGov survey, July-August, n=5,100 (1,700 each in the US, Brazil and India), with adults who use their smartphones daily and who have been exposed to a scam or fraud attempt on their smartphone. Survey data have been weighted to smartphone population adults in each country. ↩
Among users who use the default texting app on their smartphone. ↩
Updated data for 2025. This data covers first-party and third-party (open source) code changes to the Android platform across C, C++, Java, Kotlin, and Rust. This post is published a couple of months before the end of 2025, but Android’s industry-standard 90-day patch window means that these results are very likely close to final. We can and will accelerate patching when necessary.
We adopted Rust for its security and are seeing a 1000x reduction in memory safety vulnerability density compared to Android’s C and C++ code. But the biggest surprise was Rust's impact on software delivery. With Rust changes having a 4x lower rollback rate and spending 25% less time in code review, the safer path is now also the faster one.
In this post, we dig into the data behind this shift and also cover:
How we’re expanding our reach: We're pushing to make secure code the default across our entire software stack. We have updates on Rust adoption in first-party apps, the Linux kernel, and firmware.
Our first Rust memory safety vulnerability...almost: We'll analyze a near-miss memory safety bug in unsafe Rust: how it happened, how it was mitigated, and steps we're taking to prevent recurrence. It’s also a good chance to answer the question “if Rust can have memory safety issues, why bother at all?”
Building Better Software, Faster
Developing an operating system requires the low-level control and predictability of systems programming languages like C, C++, and Rust. While Java and Kotlin are important for Android platform development, their role is complementary to the systems languages rather than interchangeable. We introduced Rust into Android as a direct alternative to C and C++, offering a similar level of control but without many of their risks. We focus this analysis on new and actively developed code because our data shows this to be an effective approach.
When we look at development in systems languages (excluding Java and Kotlin), two trends emerge: a steep rise in Rust usage and a slower but steady decline in new C++.
Net lines of code added: Rust vs. C++, first-party Android code. This chart focuses on first-party (Google-developed) code (unlike the previous chart that included all first-party and third-party code in Android.) We only include systems languages, C/C++ (which is primarily C++), and Rust.
The chart shows that the volume of new Rust code now rivals that of C++, enabling reliable comparisons of software development process metrics. To measure this, we use the DORA1 framework, a decade-long research program that has become the industry standard for evaluating software engineering team performance. DORA metrics focus on:
Throughput: the velocity of delivering software changes.
Stability: the quality of those changes.
Cross-language comparisons can be challenging. We use several techniques to ensure the comparisons are reliable.
Similar-sized changes: Rust and C++ have similar functionality density, though Rust is slightly denser. This difference favors C++, but the comparison is still valid. We use Gerrit’s change size definitions.
Similar developer pools: We only consider first-party changes from Android platform developers. Most are software engineers at Google, and there is considerable overlap between pools with many contributing in both.
Track trends over time: As Rust adoption increases, are metrics changing steadily, accelerating the pace, or reverting to the mean?
Throughput
Code review is a time-consuming and high-latency part of the development process. Reworking code is a primary source of these costly delays. Data shows that Rust code requires fewer revisions. This trend has been consistent since 2023. Rust changes of a similar size need about 20% fewer revisions than their C++ counterparts.
In addition, Rust changes currently spend about 25% less time in code review compared to C++. We speculate that the significant change in favor of Rust between 2023 and 2024 is due to increased Rust expertise on the Android team.
While less rework and faster code reviews offer modest productivity gains, the most significant improvements are in the stability and quality of the changes.
Stability
Stable and high-quality changes differentiate Rust. DORA uses rollback rate to evaluate change stability. Rust's rollback rate is very low and continues to decrease, even as Rust adoption in Android surpasses that of C++.
For medium and large changes, the rollback rate of Rust changes in Android is ~4x lower than C++. This low rollback rate doesn't just indicate stability; it actively improves overall development throughput. Rollbacks are highly disruptive to productivity, introducing organizational friction and mobilizing resources far beyond the developer who submitted the faulty change. Rollbacks necessitate rework and more code reviews, and can also lead to build respins, postmortems, and blockage of other teams. Resulting postmortems often introduce new safeguards that add even more development overhead.
In a self-reported survey from 2022, Google software engineers reported that Rust is both easier to review and more likely to be correct. The hard data on rollback rates and review times validates those impressions.
Putting it all together
Historically, security improvements often came at a cost. More security meant more process, slower performance, or delayed features, forcing trade-offs between security and other product goals. The shift to Rust is different: we are significantly improving security and key development efficiency and product stability metrics.
Expanding Our Reach
With Rust support now mature for building Android system services and libraries, we are focused on bringing its security and productivity advantages elsewhere.
Firmware: The combination of high privilege, performance constraints, and limited applicability of many security measures makes firmware both high-risk and challenging to secure. Moving firmware to Rust can yield a major improvement in security. We have been deploying Rust in firmware for years now, and even released tutorials, training, and code for the wider community. We’re particularly excited about our collaboration with Arm on Rusted Firmware-A.
First-party applications: Rust is ensuring memory safety from the ground up in several security-critical Google applications, such as:
Nearby Presence: The protocol for securely and privately discovering local devices over Bluetooth is implemented in Rust and is currently running in Google Play Services.
MLS: The protocol for secure RCS messaging is implemented in Rust and will be included in the Google Messages app in a future release.
Chromium: Parsers for PNG, JSON, and web fonts have been replaced with memory-safe implementations in Rust, making it easier for Chromium engineers to deal with data from the web while following the Rule of 2.
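To illustrate why memory-safe parsing matters when handling untrusted web data, here is a hypothetical sketch (not Chromium's actual parser) of reading a PNG-style chunk, where a 4-byte big-endian length prefixes a 4-byte type and the payload. In safe Rust, a lying length field becomes an `Err` rather than an out-of-bounds read:

```rust
// Illustrative sketch: parsing a PNG-style chunk (4-byte big-endian
// length, 4-byte type, payload) from untrusted bytes. Safe slicing
// turns malformed input into an error, not memory corruption.

struct Chunk<'a> {
    kind: [u8; 4],
    data: &'a [u8],
}

/// Parse one chunk; on success, return it plus the remaining input.
fn parse_chunk(input: &[u8]) -> Result<(Chunk<'_>, &[u8]), &'static str> {
    if input.len() < 8 {
        return Err("truncated header");
    }
    let len = u32::from_be_bytes(input[0..4].try_into().unwrap()) as usize;
    let kind: [u8; 4] = input[4..8].try_into().unwrap();
    let rest = &input[8..];
    // A lying length field is caught here instead of reading past the buffer.
    if rest.len() < len {
        return Err("declared length exceeds input");
    }
    Ok((Chunk { kind, data: &rest[..len] }, &rest[len..]))
}

fn main() {
    // Well-formed "IHDR" chunk with a 2-byte payload:
    let good = [0, 0, 0, 2, b'I', b'H', b'D', b'R', 0xAA, 0xBB];
    let (chunk, rest) = parse_chunk(&good).unwrap();
    assert_eq!(&chunk.kind, b"IHDR");
    assert!(rest.is_empty());

    // Malicious length (claims 255 payload bytes, provides none):
    let bad = [0, 0, 0, 255, b'I', b'H', b'D', b'R'];
    assert!(parse_chunk(&bad).is_err());
    println!("malformed input rejected safely");
}
```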
These examples highlight Rust's role in reducing security risks, but memory-safe languages are only one part of a comprehensive memory safety strategy. We continue to employ a defense-in-depth approach, the value of which was clearly demonstrated in a recent near-miss.
Our First Rust Memory Safety Vulnerability...Almost
We recently avoided shipping our very first Rust-based memory safety vulnerability: a linear buffer overflow in CrabbyAVIF. It was a near-miss. To ensure the patch received high priority and was tracked through release channels, we assigned it the identifier CVE-2025-48530. While it’s great that the vulnerability never made it into a public release, the near-miss offers valuable lessons. The following sections highlight key takeaways from our postmortem.
Scudo Hardened Allocator for the Win
A key finding is that Android’s Scudo hardened allocator deterministically rendered this vulnerability non-exploitable due to guard pages surrounding secondary allocations. While Scudo is Android’s default allocator, used on Google Pixel and many other devices, we continue to work with partners to make it mandatory. In the meantime, we will issue CVEs of sufficient severity for vulnerabilities that could be prevented by Scudo.
In addition to protecting against overflows, Scudo’s use of guard pages helped identify this issue by changing an overflow from silent memory corruption into a noisy crash. However, we did discover a gap in our crash reporting: it failed to clearly show that the crash was a result of an overflow, which slowed down triage and response. This has been fixed, and we now have a clear signal when overflows occur into Scudo guard pages.
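The shape of such a near-miss can be sketched in a few lines. The following is a hypothetical illustration, not CrabbyAVIF's actual code: an unsafe copy that trusts an externally computed length, alongside the bounds-checked fix. The function names and types are invented for the example.

```rust
// Hypothetical linear-overflow pattern (NOT the actual CrabbyAVIF bug).
// The unsafe copy trusts a length computed elsewhere; if `n` exceeds
// `dst.len()`, it writes past the allocation. Under Scudo, a large
// overflow hits a guard page and crashes loudly instead of silently
// corrupting memory.
#[allow(dead_code)]
unsafe fn copy_pixels_unchecked(src: *const u8, dst: &mut [u8], n: usize) {
    // BUG pattern: no check that n <= dst.len().
    std::ptr::copy_nonoverlapping(src, dst.as_mut_ptr(), n);
}

/// The fix: validate the length and use a safe, bounds-checked copy.
fn copy_pixels(src: &[u8], dst: &mut [u8]) -> Result<(), &'static str> {
    if src.len() > dst.len() {
        return Err("source larger than destination");
    }
    dst[..src.len()].copy_from_slice(src);
    Ok(())
}

fn main() {
    let src = [1u8, 2, 3];
    let mut dst = [0u8; 4];
    copy_pixels(&src, &mut dst).unwrap();
    assert_eq!(dst, [1, 2, 3, 0]);

    // An undersized destination is an error, not an overflow.
    let mut tiny = [0u8; 2];
    assert!(copy_pixels(&src, &mut tiny).is_err());
}
```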
Unsafe Review and Training
Operating system development requires unsafe code, typically C, C++, or unsafe Rust (for example, for FFI and interacting with hardware), so simply banning unsafe code is not workable. When developers must use unsafe, they should understand how to do so soundly and responsibly.
To that end, we are adding a new deep dive on unsafe code to our Comprehensive Rust training. This new module, currently in development, aims to teach developers how to reason about unsafe Rust code, soundness and undefined behavior, as well as best practices like safety comments and encapsulating unsafe code in safe abstractions.
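As a rough illustration of what the module covers (the type below is invented, not Android code), here is unsafe code encapsulated behind a safe API, with a SAFETY comment documenting the invariant that makes the unsafe operation sound:

```rust
// Sketch of the practice the training teaches: keep `unsafe` behind a
// safe API, with a SAFETY comment justifying the invariant. The type
// is illustrative only.

/// A byte buffer that exposes an unchecked accessor only through a
/// safe wrapper that upholds its precondition.
struct Buffer {
    data: Vec<u8>,
}

impl Buffer {
    fn new(data: Vec<u8>) -> Self {
        Buffer { data }
    }

    /// Safe API: validates the index, then calls the unchecked primitive.
    fn get(&self, i: usize) -> Option<u8> {
        if i < self.data.len() {
            // SAFETY: `i` was bounds-checked against `data.len()` above,
            // so `get_unchecked` cannot read out of bounds.
            Some(unsafe { *self.data.get_unchecked(i) })
        } else {
            None
        }
    }
}

fn main() {
    let buf = Buffer::new(vec![10, 20, 30]);
    assert_eq!(buf.get(1), Some(20));
    assert_eq!(buf.get(99), None); // out of bounds is an error, not UB
}
```

The safety argument is local: a reviewer only needs to read `get` to convince themselves the `unsafe` block is sound, rather than auditing every caller.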
Better understanding of unsafe Rust will lead to even higher quality and more secure code across the open source software ecosystem and within Android. As we'll discuss in the next section, our unsafe Rust is already quite safe. It’s exciting to consider just how high the bar can go.
Comparing Vulnerability Densities
This near-miss inevitably raises the question: "If Rust can have memory safety vulnerabilities, then what’s the point?"
The point is that the density is drastically lower. So much lower that it represents a major shift in security posture. Based on our near-miss, we can make a conservative estimate. With roughly 5 million lines of Rust in the Android platform and one potential memory safety vulnerability found (and fixed pre-release), our estimated vulnerability density for Rust is 0.2 vulnerabilities per million lines of code (MLOC).
Our historical data for C and C++ shows a density of closer to 1,000 memory safety vulnerabilities per MLOC. Our Rust code is currently tracking at a density orders of magnitude lower: a more than 1000x reduction.
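For readers who want to check the arithmetic, a small sketch using the figures quoted above confirms that "more than 1000x" is a conservative statement:

```rust
// Back-of-the-envelope check of the densities quoted in the post.

/// Vulnerabilities per million lines of code.
fn density_per_mloc(vulns: f64, mloc: f64) -> f64 {
    vulns / mloc
}

fn main() {
    let rust = density_per_mloc(1.0, 5.0); // 1 near-miss in ~5 MLOC
    let cpp = 1000.0;                      // historical C/C++ figure
    println!("Rust: {rust} per MLOC; reduction: {}x", cpp / rust);
}
```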
Memory safety rightfully receives significant focus because the vulnerability class is uniquely powerful and (historically) highly prevalent. High vulnerability density undermines otherwise solid security design because these flaws can be chained to bypass defenses, including those specifically targeting memory safety exploits. Significantly lowering vulnerability density does not just reduce the number of bugs; it dramatically boosts the effectiveness of our entire security architecture.
The primary security concern regarding Rust generally centers on the approximately 4% of code written within unsafe{} blocks. This subset of Rust has fueled significant speculation, misconceptions, and even theories that unsafe Rust might be more buggy than C. Empirical evidence shows this to be quite wrong.
Our data indicates that even a more conservative assumption, that a line of unsafe Rust is as likely to have a bug as a line of C or C++, significantly overestimates the risk of unsafe Rust. We don’t know for sure why this is the case, but there are likely several contributing factors:
unsafe{} doesn't actually disable all or even most of Rust’s safety checks (a common misconception).
The practice of encapsulation enables local reasoning about safety invariants.
The additional scrutiny that unsafe{} blocks receive.
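The first point above is easy to demonstrate. In this sketch, indexing inside an `unsafe` block is still bounds-checked and panics rather than reading out of bounds:

```rust
// Common misconception check: an `unsafe` block does not switch off
// Rust's bounds checks or the borrow checker; it only permits a few
// extra operations (dereferencing raw pointers, calling unsafe fns,
// and so on).

#[allow(unused_unsafe)]
fn out_of_bounds_still_panics() -> bool {
    let v = vec![1, 2, 3];
    // Even inside `unsafe`, ordinary indexing remains bounds-checked and
    // panics on out-of-range access instead of corrupting memory.
    let result = std::panic::catch_unwind(|| unsafe { v[10] });
    // The borrow checker also still applies inside `unsafe`; for example,
    // taking two simultaneous `&mut` borrows would not compile here either.
    result.is_err()
}

fn main() {
    assert!(out_of_bounds_still_panics());
    println!("the unsafe block did not disable the bounds check");
}
```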
Final Thoughts
Historically, we had to accept a trade-off: mitigating the risks of memory safety defects required substantial investments in static analysis, runtime mitigations, sandboxing, and reactive patching. This approach attempted to move fast and then pick up the pieces afterwards. These layered protections were essential, but they came at a high cost to performance and developer productivity, while still providing insufficient assurance.
While C and C++ will persist, and both software and hardware safety mechanisms remain critical for layered defense, the transition to Rust is a different approach where the more secure path is also demonstrably more efficient. Instead of moving fast and then later fixing the mess, we can move faster while fixing things. And who knows, as our code gets increasingly safe, perhaps we can start to reclaim even more of that performance and productivity that we exchanged for security, all while also improving security.
Acknowledgments
Thank you to the following individuals for their contributions to this post:
Ivan Lozano for compiling the detailed postmortem on CVE-2025-48530.
Chris Ferris for validating the postmortem’s findings and improving Scudo’s crash handling as a result.
Dmytro Hrybenko for leading the effort to develop training for unsafe Rust and for providing extensive feedback on this post.
Alex Rebert and Lars Bergstrom for their valuable suggestions and extensive feedback on this post.
Peter Slatala, Matthew Riley, and Marshall Pierce for providing information on some of the places where Rust is being used in Google's apps.
Finally, a tremendous thank you to the Android Rust team, and the entire Android organization for your relentless commitment to engineering excellence and continuous improvement.
Notes
The DevOps Research and Assessment (DORA) program is published by Google Cloud. ↩