The National Cybersecurity Alliance is celebrating the third annual Data Privacy Week with a series of educational events aimed at helping people protect their data and their privacy in an era where everything is interconnected and emerging technologies like artificial intelligence are poised to reshape cybersecurity. The week-long event is filled with free webinars and talks from cybersecurity experts, covering everything from new privacy laws to ways to keep children safe online and the threat that AI poses to both organizations and individuals.
Nextgov/FCW spoke with NCA Executive Director Lisa Plaggemier about the key concerns and the most active threats people face today when trying to keep their data safe.
Nextgov/FCW: Congratulations on hosting another Data Privacy Week. Can you tell us about the history and goals of the event?
Plaggemier: Data Privacy Week, now in its third year, is an annual campaign hosted by the National Cybersecurity Alliance that focuses on promoting awareness and education about data privacy for both consumers and businesses. It builds on the success of Data Privacy Day, which began in the United States and Canada in January 2008 as an extension of Data Protection Day in Europe. Data Protection Day commemorates the January 28, 1981, signing of Convention 108, the first legally binding international treaty dealing with privacy and data protection.
To raise awareness during Data Privacy Week, NCA will host a series of webinars throughout the week, including discussions on data sharing permissions, strategies to protect personal data from brokers, safeguarding children’s online privacy, addressing privacy challenges in the AI era and exploring the legal aspects of data privacy. These themes aim to provide a comprehensive understanding of data privacy and protection across different contexts.
Nextgov/FCW: You mention the rise of artificial intelligence and AI tools. What special challenges do they bring regarding cybersecurity and privacy?
Plaggemier: The widespread availability of AI tools — such as ChatGPT — for public use raises important questions about privacy and cybersecurity. As these tools become more prevalent, there is an increased risk of malicious actors exploiting them for unauthorized access or manipulation. For instance, AI-generated content could be used for sophisticated phishing attacks or deepfakes, undermining the trustworthiness of online information.
Developers and organizations must implement stringent security measures to prevent misuse, including robust encryption, access controls and ongoing monitoring for any signs of malicious activity. Additionally, there’s a need for ethical guidelines and regulations to ensure responsible AI development and usage, striking a balance between innovation and security.
Nextgov/FCW: And as someone who studies this issue in great detail, what are some of the top cybersecurity and privacy concerns we should keep in mind as we celebrate Data Privacy Week this year?
Plaggemier: In the evolving landscape of cybersecurity, several key concerns are on the rise. Ransomware attacks have become more sophisticated and prevalent, targeting individuals, businesses and even critical infrastructure. Phishing and social engineering tactics continue to exploit human vulnerabilities, requiring increased awareness and education. Vulnerabilities in IoT devices pose a significant risk, as they can be exploited to gain unauthorized access to networks.
Additionally, recent incidents, like the Microsoft attack, underscore the importance of addressing poor basic IT hygiene. Attacks often exploit weaknesses in access controls, passwords and the absence of multi-factor authentication. Basic actions, such as improving access controls, strengthening passwords and implementing MFA, are essential steps that don’t necessarily require significant costs but can substantially enhance cybersecurity defenses.
Supply chain attacks, where adversaries compromise the security of products or services through the supply chain, are also a growing threat. Lastly, the integration of AI in cyber threats adds complexity, with the potential for automated attacks and manipulation of AI-generated content. To address these concerns, individuals should adopt a proactive approach, implementing strong security practices, staying informed about emerging threats and supporting initiatives that promote a secure digital environment.
CISA looks to shift responsibility for vulnerabilities
While the efforts of the NCA are aimed at helping people protect themselves from the many vulnerabilities found in the devices, software and applications that they use every day, the Cybersecurity and Infrastructure Security Agency is instead looking to transform the cybersecurity landscape by placing the blame for poor cybersecurity squarely back on the developers and manufacturers of vulnerable technology.
While this plan to shift the responsibility for vulnerabilities was hinted at in CISA’s 2023 to 2025 Strategic Plan, it has now been much more precisely defined in the agency’s recently released Secure By Design program.
Secure By Design calls for a new model of cybersecurity responsibility in which gaps in security are fixed long before products reach the public, and people can trust the safety and integrity of the technology they use every day. It’s a tall order, but one that CISA Director Jen Easterly says is long overdue. She explained her frustration with the current state of cybersecurity during a recent speaking engagement at Carnegie Mellon University.
“We’ve normalized the fact that technology products are released to market with dozens, hundreds or thousands of defects, when such poor construction would be unacceptable in any other critical field,” said Easterly. “We’ve normalized the fact that the cybersecurity burden is placed disproportionately on the shoulders of consumers and small organizations, who are often least aware of the threat and least capable of protecting themselves.”
In many ways, Secure By Design follows up on the National Cybersecurity Strategy, which states that “poor software security greatly increases systemic risk across the digital ecosystem.” The new CISA program would first encourage developers and manufacturers to voluntarily take more responsibility for the vulnerabilities in their code, but it also hints that shipping insecure software may one day carry legal consequences.
For example, the CISA strategic plan states that the agency will use “all available levers to influence the risk decisions of organizational leaders.” It also suggests that laws like the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA), which currently governs the reporting of cyber incidents among critical infrastructure providers, could serve as a model for eventually making compliance with guidelines like Secure By Design mandatory for all software and device manufacturers rather than voluntary.
According to the Secure By Design website, many manufacturers and software developers are already getting on board with the new program. Hopefully, many more will follow. Until then, programs like the NCA’s Data Privacy Week or CISA’s Cybersecurity Awareness Month can help people continue to protect themselves from their own devices and software, now with a little more hope for a brighter, more secure future.
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys