
Lech Sandecki
on 27 March 2026

The “scanner report has to be green” trap 


Stability, backports, and hidden risks of the bleeding edge

In the modern DevSecOps world, CISOs are constantly looking for signals in the noise, and the outputs of security scanners often carry a lot of weight. A security scan that returns a “zero CVE” report often unlocks promotion to production; a single red flag can block a release.

This binary view of security has birthed two diametrically opposed philosophies. On one side, we have the long-term support (LTS) approach: stay on a battle-tested version and backport specific security fixes. On the other, we have the push-the-latest approach: stay ahead of the CVEs by constantly moving to the latest upstream version.

In this article, I will compare the pros and cons of the “push the latest” approach and the “LTS approach”, and argue that while “rolling forward” makes your scanners happy, it might be making your infrastructure more fragile.

The case for the backport: stability is the security pillar

The LTS model is built on the principle of minimal change. For instance, when a vulnerability is found, Canonical security engineers extract the minimal set of changes required to fix the issue and apply it to the older versions.

The benefit of this model is stability: you get the security fix without the “feature churn.” Your APIs don’t change, your configuration files don’t break, and your application behavior remains predictable. Stability isn’t just about APIs. It’s about resource consumption. An LTS backport doesn’t suddenly double your memory usage or add new background telemetry – something the ‘latest’ version might do without warning.

However, this approach can be a challenge for vulnerability management, because many security scanners rely primarily on version strings. Since these tools may not have immediate visibility into the specific, surgical patches backported to an older version, they may flag a package as vulnerable based on its version number alone. This creates “CVE noise,” and security teams then have to spend hours writing exceptions and ignore lists to justify why the scanner’s report is a non-issue.
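To make the false-positive mechanics concrete, here is a minimal sketch of a purely version-string-based check misjudging a Debian/Ubuntu-style backport. The package versions and the CVE fix version are hypothetical and the comparison is deliberately naive, not a real scanner: the point is that the upstream base version stays the same after a backport, so a version-only comparison flags the package even though the packaging suffix carries the fix.

```python
# Sketch: why version-string matching flags backported fixes.
# All versions here are illustrative, not real advisory data.

def upstream_base(version: str) -> str:
    """Strip a Debian/Ubuntu packaging suffix, e.g. '1.1.1f-1ubuntu2.20' -> '1.1.1f'."""
    return version.split("-")[0]

def naive_scanner_flags(installed: str, fixed_upstream: str) -> bool:
    """Naive check: vulnerable if the upstream base predates the fix release.
    (Lexicographic comparison is good enough for this toy data.)"""
    return upstream_base(installed) < fixed_upstream

# Hypothetical CVE fixed upstream in 1.1.1k; the distribution backported the
# patch into 1.1.1f-1ubuntu2.20 without changing the upstream base version.
installed = "1.1.1f-1ubuntu2.20"
print(naive_scanner_flags(installed, "1.1.1k"))  # True: flagged despite the backported fix
```

A scanner that instead consumed distribution security data (changelogs, OVAL/USN feeds) would see that this exact package revision already contains the fix, which is the gap the data-sharing described below closes.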

To bridge this gap, we collaborate with the top security scanner partners to share deep-level vulnerability data. By providing this visibility, we ensure their tools can accurately recognize backported fixes, significantly reducing the manual burden on security teams to investigate and justify these reports.

The flaws of the “push the latest” approach

Some vendors solve the problem by simply using the latest version of everything. As long as you are on the latest version, the scanner sees no known vulnerabilities. It feels like magic: you get a clean report and your auditors are happy. 

However, the “push the latest” approach operates on a dangerous assumption: that “newer” is always “safer.” Let’s examine some of the risks of this approach.

1. The “Unknown Unknown” risk

A CVE is a known vulnerability. When you use an older, widely-deployed version of a package (like those in Ubuntu LTS), you are likely using code that has been “seasoned” by years of global production use. 

In contrast, when you pull the latest version of a package that was released 48 hours ago, you are the first line of testing for bleeding-edge code. You have potentially traded a known vulnerability (which might already be patched and backported) for an unknown, undiscovered, and unfixed vulnerability in the latest version.

2. The XZ Utils cautionary tale

The XZ Utils backdoor (CVE-2024-3094) is the ultimate rebuttal to the “always latest” philosophy. In this infamous example, the malicious code was injected into the latest “bleeding edge” versions of the tool. Every new line of code is a potential new vulnerability.

Users of distributions like Debian Stable or Ubuntu LTS were protected by default, not because they were smarter, but because their reliance on proven, distribution-vetted code acted as a mandatory cooling-off period for the global supply chain. In comparison, the “push the latest” model of rapid ingestion creates a highway for these types of sophisticated attacks to reach production.

3. Introducing breaking changes inadvertently

If you’re using the “push the latest” model, the stability of your environment also gets hit, because your dependencies are constantly shifting. A minor version update upstream might include a “fix” that changes how a library handles memory or network timeouts.

If your image rebuilds every morning with the “latest” packages, you may find your application failing in production due to a regression that was never caught in your CI/CD pipeline, simply because the upstream developers changed a default setting you relied on. Furthermore, if you miss even a single update cycle, this fragile house of cards collapses, leaving you to wonder if your environment was ever truly stable to begin with.
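The drift described above can be sketched in a few lines. This is a toy resolver with hypothetical version lists, not the behavior of any particular package manager (apt, pip, and npm all differ in detail), but it shows the core problem: a “latest” request resolves differently on consecutive days, while a pin resolves identically.

```python
# Sketch: how "latest" drifts between nightly image rebuilds while a pin stays put.
# Version lists are hypothetical; real resolvers differ in detail.

def resolve(requested: str, available: list[str]) -> str:
    """Resolve 'latest' to the newest available version, or honor an exact pin."""
    if requested == "latest":
        return available[-1]  # assume the list is sorted oldest -> newest
    if requested in available:
        return requested
    raise LookupError(f"pinned version {requested} not available")

monday = ["2.3.0", "2.3.1"]
tuesday = ["2.3.0", "2.3.1", "2.4.0"]  # upstream released 2.4.0 overnight

# A nightly "latest" rebuild silently picks up 2.4.0, and with it any changed
# default your application relied on; a pinned build resolves the same both days.
print(resolve("latest", monday), resolve("latest", tuesday))  # 2.3.1 2.4.0
print(resolve("2.3.1", monday), resolve("2.3.1", tuesday))    # 2.3.1 2.3.1
```

Pinning alone does not fix vulnerabilities, of course; it only makes change deliberate, which is exactly what the LTS backport model provides at the distribution level.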

Security versus compliance

When you consider the debate of the “LTS approach” vs. “push the latest,” it becomes clear where the source of the tension really lies: in the balance between security and naive compliance.

Vendors who champion the “push the latest” model haven’t necessarily fixed the problem; they’ve just shifted the risk. By building custom, rolling-release operating systems stripped of historical stability, they promise instant patching at the expense of the LTS promise of guaranteed compatibility.

In the container world, they argue that breaking changes don’t matter because containers are ephemeral. However, while containers may be ephemeral, software contracts are not. Your application relies on stable APIs, ABIs, and library behaviors. When you blindly target the latest, you are constantly shifting the ground beneath your application. It doesn’t matter how fast a container can restart if the new upstream package it just pulled fundamentally breaks your app’s dependencies. A fast crash is still an outage.

Conclusion

The choice comes down to what you value more: a quiet scanner or a quiet night on-call. While chasing upstream versions offers the instant gratification of a green dashboard, it does so by offloading the vetting process to your production environment.

True security requires the vital work of backporting – fixing vulnerabilities without introducing volatility. In a world where supply chain attacks are the new frontier, stable, battle-tested code isn’t just a convenience. It is your most critical defensive layer.

Are we actually more secure, or are we just tired of looking at red dots on a dashboard? By backporting CVE fixes, Ubuntu Pro acknowledges that code needs time to be trusted. Stable, tested code might just be the most secure feature you have.
