steveklabnik
This is a very interesting post! One takeaway is that you don't need to rewrite the world: transitioning new development to a memory-safe language can bring meaningful improvements. That is much easier (and cheaper) than needing to port everything over in order to get an effect.
infogulch
I'd like to acknowledge that the charts in this article are remarkably clear and concise. A great demonstration of how careful data selection and labeling can communicate the intended ideas so effortlessly that they virtually disappear into the prose.

So the upshot of the fact that vulnerabilities decay exponentially is that the focus should be on net-new code, and that spending effort on vast, indiscriminate RiiR projects is a poor use of resources, even for advancing the goal of maximal memory safety. That the easiest strategy, the one pragmatic Rust experts have recommended all along, is also the best strategy for minimizing memory vulnerabilities according to the data is a remarkable, even fortuitous, convergence.
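
To make the half-life intuition concrete, here's a minimal back-of-the-envelope sketch (all parameters invented for illustration, not taken from the article): if each year's new code ships a fixed number of vulnerabilities and existing ones are found and fixed with a roughly constant half-life, the surviving population is dominated by the youngest code.

    fn main() {
        // Illustrative assumptions -- not the article's actual numbers.
        let half_life_years = 2.5_f64; // assumed vulnerability half-life
        let new_vulns_per_year = 100.0_f64; // vulns shipped by each year's new code
        let survival = 0.5_f64.powf(1.0 / half_life_years); // per-year survival factor

        // Expected vulnerabilities still alive from each age cohort of code.
        let cohorts: Vec<f64> = (0..10)
            .map(|age| new_vulns_per_year * survival.powi(age))
            .collect();
        let total: f64 = cohorts.iter().sum();
        for (age, v) in cohorts.iter().enumerate() {
            println!("code age {age}y: {v:6.1} vulns ({:4.1}% of total)", 100.0 * v / total);
        }
    }

With those made-up numbers, most of the surviving vulnerabilities sit in code only a few years old, which is the whole case for pointing net-new development at a memory-safe language first.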

> The Android team has observed that the rollback rate of Rust changes is less than half that of C++.

Wow!

Wowfunhappy
> The answer lies in an important observation: vulnerabilities decay exponentially. They have a half-life. [...] A large-scale study of vulnerability lifetimes [2] published in 2022 in Usenix Security confirmed this phenomenon. Researchers found that the vast majority of vulnerabilities reside in new or recently modified code.

It stands to reason, then, that it would be even better for security to stop adding new features when they aren't absolutely necessary. Windows LTSC is presumably the most secure version of Windows.

gortok
There is a correlation between new code and memory vulnerabilities (a possible explanation is given in the blog post: that vulnerabilities decay rapidly, with a half-life), but why does the blog post present the relationship between the two factors as causation?

There is more than one possible and reasonable explanation for this correlation:

1. New code often relates to new features, and folks focus on new features when looking for vulnerabilities.
2. Older code has been through more real-life usage, which can exercise the edge cases where memory vulnerabilities reside.

I'm just not comfortable saying that new code causes memory vulnerabilities and that vulnerabilities decay rapidly with a half-life. That may be true in sheer number count, but it doesn't seem to be true in impact, thinking back to high-impact vulnerabilities in OSS like the Heartbleed bug and the cache-invalidation bugs in CPUs.

benwilber0
> Increasing productivity: Safe Coding improves code correctness and developer productivity by shifting bug finding further left, before the code is even checked in. We see this shift showing up in important metrics such as rollback rates (emergency code revert due to an unanticipated bug).

> The Android team has observed that the rollback rate of Rust changes is less than half that of C++.

I've been writing high-scale production code in one language or another for 20 years. But when I found Rust in 2016, I knew that this was the one, and I was going to double down on it. I got Klabnik and Carol's book literally the same day. Still have my dead-tree copy.

It's honestly reinvigorated my love for programming.

SkyMarshal
They talk about "memory safe languages (MSL)" in the plural, as if there is more than one, but Rust is the only MSL they explicitly name as the language they're transitioning to and improving interoperability with. They also mention Kotlin in the context of improving Rust<>Kotlin interop; Kotlin has some memory-safety guarantees too, though maybe not to the same extent as Rust. Are those the only two Google uses, or are there others they could be referring to?
ievans
So the argument is that because vulnerability lifetimes are exponentially distributed, focusing on secure defaults like memory safety in new code is disproportionately valuable -- both in theory and, now, in the evidence from six years on the Android codebase.

Amazing. I've never seen this argument used to support shift-left security guardrails, but it's great, especially for those with larger legacy codebases who might otherwise say "why bother, we're never going to benefit from memory safety on our 100M lines of C++."

I think it also implies that any lightweight vulnerability detection has disproportionate benefit -- even if it only looks at new code and dependencies rather than the backlog.

naming_the_user
I'm a little uneasy about the conclusions being drawn here, as the obvious counterpoint isn't being raised: what if older code simply isn't being looked at as hard, and therefore its vulnerabilities aren't being discovered?

It's far more common to look at recent commit logs than it is to look at some library that hasn't changed for 20 years.

daft_pink
I'm curious how this applies to Mac vs Windows, where most newer Mac code is written in memory-safe Swift, while Windows still primarily uses C or C++.
0xDEAFBEAD
Trying to think through the endgame here -- as vulnerabilities become rarer, they get more valuable. The remaining vulnerabilities will be jealously hoarded by state actors and used sparingly on high-value targets.

So if this blog post describes the 4th generation, perhaps the 5th generation looks something like Lockdown Mode for iOS. Let users who are concerned with security check a box that improves their security, in exchange for decreased performance. The ideal checkbox detects and captures any attack, perhaps through some sort of virtualization, then sends it to the security team for analysis. This creates deterrence for the attacker. They don't want to burn a scarce vulnerability if the user happens to have that security box checked. And many high-value targets will check the box.

Herd immunity, but for software vulnerabilities instead of biological pathogens.

Security-aware users will also tend to be privacy-aware. So instead of passively phoning home for all user activity, give the user an alert if an attack was detected. Show them a few KB of anomalous network activity or whatever, which should be sufficient for a security team to reconstruct the attack. Get the user to sign off before that data gets shared.

musicale
"The net result is that a PL/I programmer would have to work very hard to program a buffer overflow error, while a C programmer has to work very hard to avoid programming a buffer overflow error."

https://www.acsac.org/2002/papers/classic-multics.pdf

kernal
>Note that the data for 2024 is extrapolated to the full year (represented as 36, but currently at 27 after the September security bulletin).

The reduction of memory safety bugs to a projected 36 in 2024 for Android is extremely impressive. (27 through the September bulletin, i.e. nine months of the year, extrapolates to 27 × 12/9 = 36 for the full year.)

cakoose
> What happens if we gradually transition to memory-safe languages for new features, while leaving existing code mostly untouched except for bug fixes?

> [...]

> In the final year of our simulation, despite the growth in memory-unsafe code, the number of memory safety vulnerabilities drops significantly, a seemingly counterintuitive result [...]

Why would this be counterintuitive? If you're only touching the memory-unsafe code to fix bugs, it seems obvious that the number of memory-safety bugs will go down.
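
A toy model (numbers invented, just my reading of the article's thought experiment) makes that intuition concrete: the existing backlog of vulnerabilities decays as bugs get found and fixed, and the only new memory-unsafe code is bug-fix churn, so the total falls even though the unsafe codebase never shrinks.

    fn main() {
        // Invented parameters -- not the article's actual simulation inputs.
        let survival = 0.5_f64.powf(1.0 / 2.5); // yearly survival with a ~2.5-year half-life
        let mut status_quo = 500.0_f64;  // keep shipping new features in unsafe code
        let mut msl_for_new = 500.0_f64; // new features go to a memory-safe language

        for year in 1..=8 {
            // Both backlogs decay as old vulnerabilities are found and fixed...
            status_quo = status_quo * survival + 100.0; // ...but unsafe feature work adds ~100/yr
            msl_for_new = msl_for_new * survival + 5.0; // ...vs ~5/yr of churn from bug fixes
            println!("year {year}: status quo ~{status_quo:.0}, MSL for new code ~{msl_for_new:.0}");
        }
    }

With those numbers the MSL line falls from 500 toward a few dozen while the status-quo line barely moves, so the result only looks counterintuitive if you expect vulnerability counts to track the amount of unsafe code rather than its age.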

Am I missing something?

Stem0037
The idea of "Safe Coding" as a fundamental shift in security approach is intriguing. I'd be interested in hearing more about how this is implemented in practice.
Animats
Half a century since Pascal. Forty years since Ada. Twenty-eight years since Java. Fifteen years since Go. Ten years since Rust. And still unsafe code is in the majority.
0xbadcafebee
So there's a C program. There's a bunch of sub-par programmers who don't use the old, well-documented, stable, memory-safe functions and techniques. And they write code with memory safety bugs.

They are eventually forced to transition to a new language, which makes the memory safety bugs moot -- without addressing the fact that they're still sub-par, why they didn't use the memory-safe functions, or why we let them ship code in the first place.

They go on to write more sub-par code, with more avoidable security errors; they're just not memory-safety related anymore. And the hackers shift their focus to attack a different way.

Meanwhile, nobody talks about the elephant in the room: that we were, and still are, completely fine with people writing shitty code. That we allow people to keep using the wrong methods, which leads to completely avoidable security holes -- holes like injection attacks, which now make up 40% of all CVEs, while memory safety makes up only 25%.

Could we have focused on a default solution for the bigger class of security holes? Yes. Did we? No. Why? Because none of this is about security. Programmers just like new toys to play with. Security is a red herring used to justify letting people keep writing shitty code while playing with new toys.

Security will continue to be bad, because we are not addressing the way we write software. Rather than this one big class of bugs, we will just have a million smaller ones to deal with. And it'll actually get harder to deal with it all, because we won't have the "memory safety" bogeyman to point at anymore.