December 15, 2025

What Infosec Doesn’t Understand about Cyber Insurance

Robert "RSnake" Hansen


After a run of conversations with some of the largest players in cyber insurance, I walked away with a set of observations I didn’t expect. Some were counterintuitive, some were alarming, and all of them revealed a gap between what infosec believes about this market and how the market actually operates. The consequences of that disconnect shape everything from product design to breach response. Even if parts of this might seem obvious to some readers, on the whole our industry lacks an understanding of how cyber insurance sees our world, and that is a big problem. This post aims to clear some of that up, if you care to know how they see us.

Infosec practitioners working for vendors often start from the idea that security products drive down premiums. I often hear, “If they use my product, maybe they’ll drive down premiums.” It feels reasonable: install more tools, prove diligence, enjoy lower costs. However, the carriers I spoke with have no inherent interest in lowering premiums. From their perspective, those premiums are revenue. They reduce them only when competitive pressure forces the issue, or when a particular control shows a measurable impact on expected losses, meaning they will accept less to keep the book of business they already have. But the changes can’t be theoretical improvements, and they can’t be marketing fluff; the security products must directly change loss behavior. As long as carriers have to pay out at the current rates, premiums stay roughly where they are.

Infosec also trusts that these same tools make companies safer. The carriers are far less convinced. They’ve had a front-row seat to real incident data for about half a decade now, adding to their actuarial knowledge. They can name the tools that show up again and again as initial access vectors. Some vendors are practically household names in breach reports. Ivanti and Fortinet edge devices, for instance, increase, not decrease, the chances of being compromised, due to the number of exploited CVEs associated with those devices. Infosec has long believed firewalls and VPNs create a strong perimeter. The carriers, in many cases, see them as loss predictors.

Another common belief is that insurers try to avoid paying out. That isn’t how the system works. A market built on the idea of risk transfer needs to prove the transfer is real, so claims do get paid. But the carriers have learned that they can claw back losses through subrogation when vendors fail to deliver the security they promised the policyholder. The era of insurance quietly absolving vendors is over. Carriers will recover their costs from anyone whose product caused avoidable exposure, and they will do it without hesitating, unless there are very strong disclaimer and indemnification agreements between the vendor and the policyholder.

The infosec industry often thinks carriers want more data. The carriers want less data with higher predictive power. Most of what infosec offers has no actuarial signal. For instance, if someone has a slightly weak SSL certificate, we lose our collective minds, while that appears to have zero correlation with loss. The insurance industry wants external posture data, breach frequency modeling, and technographic snapshots that correlate with actual losses. They don’t care whether a company filled out an artificially long questionnaire; the specific questions they choose, and the fact that the questionnaire is short, are not accidents. They care about whether those questions tell them anything useful about the probability of a claim. And in fact, there are very few signals that indicate future loss, and almost all of them currently appear to relate to the size and complexity of infrastructure, that is, the external attack surface. For example, roughly speaking, IP count and the number of cloud computing environments a company uses are both good proxies for the likelihood of future loss.
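To make the "attack surface as a proxy" idea concrete, here is a toy sketch of how such a signal could be scored. Everything here is an assumption for illustration: the coefficients, the intercept, and the logistic form are invented, not anything the carriers shared.

```python
import math

# Toy illustration (invented numbers, NOT real actuarial data): external
# attack surface as a proxy for loss likelihood.
def loss_probability(ip_count: int, cloud_env_count: int) -> float:
    """Logistic score over two hypothetical attack-surface features."""
    # Log-scale the IP count so a jump from 10 to 100 IPs matters more
    # than a jump from 1,000 to 1,090.
    x = 0.6 * math.log10(max(ip_count, 1)) + 0.4 * cloud_env_count
    intercept = -4.0  # baseline: a tiny footprint scores a low probability
    return 1.0 / (1.0 + math.exp(-(intercept + x)))

small = loss_probability(ip_count=10, cloud_env_count=1)
large = loss_probability(ip_count=5000, cloud_env_count=4)
assert small < large  # bigger external footprint -> higher modeled risk
```

The point is not the specific math; it is that two externally observable counts can feed a monotonic risk score, which is exactly the kind of thin, predictive signal carriers prefer over long questionnaires.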

Infosec leans heavily on the idea that MFA, EDR, and backups serve as universal requirements. In reality, carriers adopt any control that has statistical weight behind it; they aren’t dogmatic about which tools or processes get them there. If some outlier method produced a reliable reduction in severity across a broad enough pool, they would adopt it overnight. They have no philosophical allegiance to any product category, but they do care about loss, so any way to get there works. That said, MFA and EDR do appear to reduce loss, so for now, they are above the line.

There’s also a persistent idea that controls exist to prevent breaches. In practice, there are two ways to reduce loss: reduce the chance of a breach, or reduce dwell time. Virtually everything carriers ask about helps them identify whether you have solutions that fall into those two categories, and most security products fall into neither. Carriers look for controls that reduce severity after a breach has already occurred: faster containment, faster and more reliable restoration, and less chance of paying extortion and the subsequent fines and fees. Most interesting of all, when I spoke to the carriers, they all indicated that almost no security products reduce loss or reduce dwell time. Either the products are so thinly deployed that they cannot disprove the null hypothesis because they simply don’t move the needle, or adversaries find them sufficiently effective and have since moved on to attack weaker access points. In any case, very few security categories seem to reduce claims.
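The two levers can be sketched as simple expected-loss arithmetic. The dollar figures and probabilities below are invented for the example; the only claim is the structure: if severity scales with dwell time, then halving breach probability and halving dwell time cut expected loss by the same amount.

```python
# Toy model (assumed numbers, for illustration only): the two levers a
# control can pull are breach probability and dwell time, and dwell time
# drives severity.
def expected_loss(p_breach: float, dwell_days: float,
                  cost_per_day: float = 50_000.0) -> float:
    """Expected annual loss when severity grows linearly with dwell time."""
    severity = dwell_days * cost_per_day
    return p_breach * severity

baseline = expected_loss(p_breach=0.10, dwell_days=30)     # ~ $1.5M x 10%
prevention = expected_loss(p_breach=0.05, dwell_days=30)   # halves probability
containment = expected_loss(p_breach=0.10, dwell_days=15)  # halves dwell time
assert prevention == containment  # both levers cut expected loss equally here
```

This is why carriers care as much about containment and restoration speed as about prevention: in the expected-loss equation, dwell time sits right next to breach probability.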

Incident response tends to be misunderstood in a similar way. Infosec expects top-tier forensic work, but carriers primarily want to contain losses quickly, and they resist anything that drags the process into long legal battles or slow forensic investigations. They don’t necessarily prioritize deep technical introspection beyond understanding the initial access vector, which may help them predict future losses; they primarily prioritize cost control. However, there is a tension between the actuarial side, which appreciates additional data, and the claims teams, who want to shrink any single claim and move it off their desks. So, I expect changes in this area as cyber insurance evolves.

Infosec expects that good posture leads directly to better premiums. It leads to better relative pricing, not always better absolute pricing. Market cycles, reinsurance conditions, whether it is a “soft” or “hard” market, and systemic risk concerns affect everyone regardless of posture. Even a spotless company lives inside the market’s shared risk environment. That said, carriers have quite a few levers at their disposal: they can deny or allow coverage, raise or lower premiums, increase or decrease the size of the deductible, and increase or decrease the size of the policy. And most importantly, in “soft” markets, which is where we find ourselves now, there is a lot of competition for premiums, so they tend to be priced quite competitively.
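The deductible and policy-size levers interact in a way that is easy to show with the standard payout formula (loss above the deductible, capped at the limit). The dollar amounts here are invented for the example.

```python
# Sketch of the carrier's payout levers using standard deductible/limit
# math; the specific dollar figures are hypothetical.
def carrier_payout(loss: float, deductible: float, limit: float) -> float:
    """What the carrier pays: loss above the deductible, capped at the limit."""
    return min(max(loss - deductible, 0.0), limit)

# A $2M incident under a $250K deductible and a $1M policy limit:
payout = carrier_payout(loss=2_000_000, deductible=250_000, limit=1_000_000)
assert payout == 1_000_000  # capped at the policy limit
```

Raising the deductible shifts small losses back onto the insured, while shrinking the limit caps the carrier's exposure on large ones, which is why both knobs can move independently of the premium itself.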

Infosec tends to frame cyber insurance around ransomware. Carriers view ransomware as only one piece of a broader set of losses. Regulatory penalties, business interruption, destructive events, vendor collapse, and systemic cloud outages trouble them as well. Also, I have heard quite a bit of concern around large cloud computing and SaaS platforms and the risks they represent. If one falls, how many companies are at risk?

People also say that the insurers don’t have good data, or that they don’t understand infosec. I have also heard the opposite: that they understand it extremely well, better than anyone in our industry. The truth is in the middle. They are hamstrung by only being able to see what is on the outside of a company, plus whatever garbage comes in from the policy questionnaire. They are also limited to whatever data accompanies the claim via the lowest-cost-bidder DFIR firm. So while they have good data, it’s nowhere near what it could be.

Finally, infosec treats the insurance process as an annual event. Carriers are moving toward continuous monitoring. They tune their view of an organization as conditions change, and those changes shape renewal terms without waiting for another calendar cycle. This shift will surprise teams who think of insurance as a static line item to complete annually. There is a greater move to understanding ongoing portfolio risk, rather than just a point-in-time snapshot of risk.

I hope this has been somewhat helpful for those who are starting to see what I see: the cyber insurance industry is going to be a huge force in infosec, so the sooner we understand their world, the better off we will be. After all, you don’t want to skate to where the puck is, you want to skate to where the puck is going!

Stay Tuned For More
