The evolution of the cyber threat landscape highlights the growing need for organizations to strengthen their ability to identify, analyze, and evaluate cyber risks before they escalate into security incidents. Although the terms "patch management" and "vulnerability management" are often used as if they were interchangeable, they are not the same thing. The confusion is understandable, because applying patches is just one of the many tools available in our arsenal for mitigating cyber risk.
Benefits and Risks of Patching (and Patch Management)
Before deciding whether to install a patch or not, it is important that we understand the associated benefits and risks of doing so. Is patching worth the effort?
The most obvious reason for patching, and the one organizations usually think of first, is the need to fix security flaws in either the operating system or the applications running on it. However, this is not the only benefit you gain from patching in a timely and correct manner. Many vendors release patches to improve application stability. These improvements provide a strong case for rolling out patches in the ICS environment, because the stability and uptime of critical devices are of the utmost importance. Patches can also resolve specific functional bugs in certain applications. This, too, strengthens the business case for why organizations should patch.
However, alongside the benefits, there are equally important risks to patching that must be weighed. How those risks are perceived differs between the IT side of the business and the OT side. Within the IT side of the organization, the benefits generally outweigh the risks, as loss of data is considered a bigger concern than the downtime of a network. On the other hand, for the OT side, system uptime is the overriding priority, so the risk that a patch disrupts a critical process can weigh more heavily than the vulnerability the patch is meant to fix.