Many of us in security have been anticipating that AI would reshape vulnerability discovery. In January 2026, we got the clearest evidence yet.

AI security firm AISLE disclosed 12 previously unknown zero-day vulnerabilities in OpenSSL. For those unfamiliar, OpenSSL is the cryptographic library embedded across most encrypted communications infrastructure and throughout industrial software stacks. Three of those bugs had been present since 1998-2000. One predates OpenSSL itself. They survived 25 years of expert review and millions of CPU-hours of automated fuzzing.

AISLE’s system had already found 13 of 14 OpenSSL CVEs assigned across all of 2025. For five of the 12 new findings, the AI proposed patches that were accepted into the official release. Not a proof of concept. Operational output.

Both attackers and defenders benefit

Worth being clear: this is good news and a hard problem at the same time.

Defenders gain real capability from this. AISLE found bugs that decades of human review couldn’t. Their system wrote patches that maintainers accepted. Open-source security has needed that kind of deep, repeatable analysis for years. Other projects are moving in the same direction. Daniel Stenberg ended the curl bug bounty in favour of AI-assisted review, for instance.

But the same capability will reach attackers too. It always does. What a well-resourced team can do today becomes widely available as tooling matures. State-affiliated groups like UNC6201 have exploited critical zero-days for as long as 18 months before public disclosure. We should assume AI-assisted discovery is already part of sophisticated actor toolsets, and will become more broadly available over the next 12-24 months.

So vulnerability discovery speeds up for everyone. For IT environments, that’s broadly positive: faster discovery paired with faster patching improves your overall position. OT is where it gets harder, because faster discovery creates a growing gap between ‘known’ and ‘fixed’.

The IT-OT split in manufacturing

For IT teams in manufacturing, AI-driven vulnerability discovery is mostly good news. Libraries get audited more thoroughly, patches come faster, and upstream code quality improves as a result. IT patch cycles are short enough that accelerated discovery is manageable. You find it faster, you fix it faster.

OT is a different story. OpenSSL is embedded in industrial historians, SCADA platforms, HMIs, and remote access gateways, often without appearing in product documentation. Most OT asset inventories track hardware and network connections. They don’t track software composition.

That gap has always existed, but it mattered less when new vulnerabilities in well-audited libraries appeared slowly. With AI accelerating the pace of discovery, the time between ‘new vulnerability published’ and ‘exploit available’ is compressing. CVE-2025-15467 (CVSS 9.8) had public exploits within days.

For OT assets, where patching requires maintenance windows, vendor certification, and operational impact assessment, the response needs to be different. Network segmentation limits lateral movement. Monitoring encrypted channel behaviour can flag anomalies. Direct vendor engagement gets you interim guidance. These compensating controls become the primary response path while patching catches up.
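
As one illustration of the monitoring point, here is a minimal sketch of the idea, assuming TLS session metadata is already being exported from a network monitor. The field names, asset names, and baseline structure are illustrative placeholders, not taken from any particular product.

```python
# Minimal sketch: flag OT assets whose encrypted sessions drift from a known baseline.
# Assumes TLS session metadata is already exported from a network monitor; field names,
# asset names, and the baseline structure are illustrative, not from any specific tool.

from dataclasses import dataclass

@dataclass
class TlsSession:
    src_host: str      # OT asset initiating the connection
    tls_version: str   # e.g. "TLSv1.2"
    server_name: str   # destination the asset contacted (SNI)

# Expected behaviour per asset, captured during normal operation.
BASELINE = {
    "historian-01": {"versions": {"TLSv1.2"}, "servers": {"mes.internal.example"}},
    "hmi-03":       {"versions": {"TLSv1.2"}, "servers": {"scada-core.internal.example"}},
}

def flag_anomalies(sessions: list[TlsSession]) -> list[str]:
    """Return alerts for sessions that fall outside each asset's baseline."""
    alerts = []
    for s in sessions:
        expected = BASELINE.get(s.src_host)
        if expected is None:
            alerts.append(f"{s.src_host}: encrypted traffic from asset with no baseline")
        elif s.tls_version not in expected["versions"]:
            alerts.append(f"{s.src_host}: unexpected TLS version {s.tls_version}")
        elif s.server_name not in expected["servers"]:
            alerts.append(f"{s.src_host}: unexpected destination {s.server_name}")
    return alerts
```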

And OpenSSL isn’t special here. The same logic applies to any embedded cryptographic library or legacy protocol implementation in the OT stack. AI has shown that bugs in well-reviewed code are findable at scale. More disclosures will follow.

What manufacturing security teams should be looking at

The first priority is software composition visibility. If you don’t know where OpenSSL sits in your OT environment, you can’t assess your exposure when the next advisory drops. The EU Cyber Resilience Act (CRA) is already pushing SBOM requirements for products with digital elements sold in the EU, so manufacturers will need to provide and maintain software composition data as a regulatory baseline anyway. But AISLE’s findings show why this matters beyond compliance: when vulnerabilities in widely-used libraries are being found at this pace, the organisations that can respond quickly are the ones that already know what’s running where. CRA provides the regulatory push; these findings make it an operational priority.
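
As a sketch of what that visibility can look like in practice, assuming you collect one CycloneDX SBOM per asset (the directory layout and file naming here are assumptions, not a prescribed format):

```python
# Minimal sketch: answer "where does OpenSSL sit in our environment?" from collected SBOMs.
# Assumes one CycloneDX JSON SBOM per asset, stored as <asset>.json in a directory; the
# layout and naming are assumptions, the "components" field is standard CycloneDX.

import json
from pathlib import Path

def find_component(sbom_dir: str, component_name: str) -> list[tuple[str, str]]:
    """Return (asset, version) pairs for every SBOM containing the named component."""
    hits = []
    for sbom_file in Path(sbom_dir).glob("*.json"):
        sbom = json.loads(sbom_file.read_text())
        for comp in sbom.get("components", []):
            if comp.get("name", "").lower() == component_name:
                hits.append((sbom_file.stem, comp.get("version", "unknown")))
    return hits

# When the next advisory lands, exposure assessment starts as a query, not a survey:
for asset, version in find_component("sboms/", "openssl"):
    print(f"{asset}: openssl {version}")
```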

Response planning also needs to catch up. Traditional 30/60/90-day patch cycles were designed for a slower world. They still make sense as a framework, but the compensating controls for the gap between ‘disclosure’ and ‘patch deployed’ need to be planned in advance, not improvised per incident. Segmentation plans, detection rules, vendor communication channels: these should be ready before the next critical CVE lands, not assembled after it. This becomes even more pressing as CRA vulnerability notification requirements take effect in September 2026, requiring manufacturers to report actively exploited vulnerabilities within 24 hours.
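
One way to keep that planning from living in people’s heads is to hold it as data. A minimal sketch, with placeholder rule IDs, zones, and contacts standing in for whatever your environment actually uses:

```python
# Minimal sketch: compensating controls held as a pre-agreed playbook, keyed by component,
# so the first hours after an advisory are a lookup, not a scramble. Every entry here is a
# placeholder (rule IDs, zones, contacts), not a real control or address.

PLAYBOOKS = {
    "openssl": {
        "segmentation": "tighten conduit policy OT-FW-TLS between cell zones and DMZ",
        "detection_rules": ["ids-tls-anomaly-01", "netflow-baseline-historian"],
        "vendor_contacts": {"historian-vendor": "security@historian-vendor.example"},
        "interim_guidance": "disable remote administration over TLS until a certified patch ships",
    },
}

def response_plan(component: str) -> dict:
    """Look up pre-agreed compensating controls for a vulnerable component."""
    return PLAYBOOKS.get(component, {"action": "escalate: no pre-built plan for this component"})

print(response_plan("openssl"))
```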

And risk models need recalibrating. Quantitative frameworks like FAIR use threat event frequency inputs based on historical vulnerability discovery rates. Those rates have changed. Any risk assessment, board report, or insurance discussion that assumes ‘finding new vulnerabilities in mature code is hard’ needs a refresh. The inputs have shifted and the models should reflect that.
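
For illustration, a minimal FAIR-style sketch of how the frequency input alone moves the answer; the frequencies and loss magnitudes below are placeholders, not calibrated estimates:

```python
# Minimal sketch: how the threat event frequency input alone moves a FAIR-style annualised
# loss estimate. Frequencies and loss magnitudes are placeholders, not calibrated data.

import numpy as np

def annualised_loss(events_per_year: float, loss_min: float, loss_max: float,
                    trials: int = 10_000, seed: int = 0) -> float:
    """Monte Carlo estimate of expected annual loss for one loss scenario."""
    rng = np.random.default_rng(seed)
    event_counts = rng.poisson(events_per_year, trials)   # threat events per simulated year
    losses = [rng.uniform(loss_min, loss_max, n).sum() for n in event_counts]
    return float(np.mean(losses))

# Same scenario, same loss magnitudes, only the discovery-driven frequency assumption changes:
print("legacy frequency assumption:", annualised_loss(0.5, 50_000, 500_000))
print("recalibrated frequency:     ", annualised_loss(2.0, 50_000, 500_000))
```

The point is not the exact numbers; it is that the frequency input can no longer be carried forward unchanged from last year’s assessment.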

A balanced view forward

On balance, AI-driven vulnerability discovery improves security. AISLE’s OpenSSL work shows that clearly. IT teams in manufacturing benefit directly through better upstream code quality and faster identification of issues.

The challenge is OT, where discovery speed now outpaces remediation speed. The manufacturers who come through this well will have software composition visibility across both IT and OT. If you know what you’re running, you can respond, whether that’s a quick patch or a compensating control while you wait for the next maintenance window.

