Tier your unstructured data to lower storage costs and cyber risk


Unstructured data has quietly become a whale-like burden on IT infrastructure, storage, and security resources. Large enterprises commonly house 20, 30 or 40 petabytes (PB) of unstructured data, which is primarily generated by applications and users and ends up locked in file data storage systems that consume a disproportionate share of IT budgets. This file data quagmire is also a significant risk, particularly in the face of growing ransomware threats.

For IT leaders, the rapid growth of file data is no longer a background issue—it’s front and center. Cost optimization and risk management have become top priorities in enterprise data strategies. File-level unstructured data tiering (a form of online archiving) has emerged as an innovative and effective solution.

Why File Data Now Commands CIO Attention

According to the 2024 Komprise State of Unstructured Data Management survey, more than half of CIOs—56%—identify cost optimization as their top data management priority. This makes sense when considering the nature of file data. Unlike structured databases, which consist of rows and columns, file data often comprises documents, images, videos, and logs that can be retained for decades, typically without clear data lifecycle policies. In addition, files can take up large swaths of prime storage space.

This long-term accumulation leads to multiple redundant copies: a primary version, a backup, and a disaster recovery (DR) version. This, of course, can triple storage requirements and associated costs. Ironically, much of this data is rarely accessed or “cold,” yet may be sitting on expensive storage devices in the data center or in high-performance file storage in the cloud.
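The tripling effect described above is easy to quantify with a back-of-envelope model. The sketch below uses hypothetical per-terabyte prices and an assumed 80% cold-data share (not figures from this article) to show how three copies multiply costs, and how tiering the cold portion changes the picture.

```python
# Back-of-envelope storage cost model. All prices and the 80% cold-data
# share are illustrative assumptions, not vendor quotes or survey figures.
PRIMARY_NAS_COST_PER_TB_YEAR = 300.0   # hypothetical high-performance NAS
OBJECT_STORE_COST_PER_TB_YEAR = 60.0   # hypothetical cloud object tier

def annual_cost(tb: float, copies: int, cost_per_tb: float) -> float:
    """Yearly cost when every terabyte exists in `copies` versions
    (e.g., primary + backup + DR = 3 copies)."""
    return tb * copies * cost_per_tb

# 1 PB (1,000 TB) kept entirely on primary storage, with 3 copies:
all_hot = annual_cost(1000, 3, PRIMARY_NAS_COST_PER_TB_YEAR)

# Same 1 PB, but 80% tiered to object storage (assumed cold share):
tiered = (annual_cost(200, 3, PRIMARY_NAS_COST_PER_TB_YEAR)
          + annual_cost(800, 3, OBJECT_STORE_COST_PER_TB_YEAR))
```

Under these assumed prices, `all_hot` comes to $900,000 per year versus $324,000 with tiering—the same triple-copy protection, at a fraction of the cost.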

At the same time, business leaders are realizing that file data is an untapped asset for analytics and artificial intelligence. When managed intelligently, this data can improve customer relationships, support product innovation and inform strategic decisions. But doing so affordably and securely requires a shift in how file data is stored and protected.

The Ransomware Threat to File Data

File data is particularly vulnerable to ransomware attacks because it is widely accessed across users, groups, and systems. This broad exposure means that even one compromised user account can lead to a widespread infection. Since file systems are often interconnected, ransomware can silently spread through the network before being detected.

Given the complexity and distributed nature of file data, it’s no surprise that it represents one of the largest surfaces for potential ransomware damage. Ignoring this exposure is no longer an option. A comprehensive ransomware strategy must include file data protection, and that’s where data tiering becomes crucial.

What Is File-Level Data Tiering?

File-level data tiering is a method of reducing the cost and risk of storing cold file data while ensuring a non-disruptive experience for data access and ongoing mobility. An unstructured data management system first scans files across storage and then identifies files that are no longer active. The timeframe for labeling or “tagging” data as “cold” can vary, ranging from three months for medical images and closed legal cases to one year for user documents.

The data management system then moves the tagged cold data from expensive primary storage (i.e., a network-attached storage, or NAS, system) to an economical secondary location, such as cloud object storage, which has higher latency but a fraction of the cost per terabyte (TB). Unlike block-level tiering, which occurs behind the scenes within storage systems, file-level tiering operates on entire files and should deliver a transparent experience for end-users and applications.
