
Dechert Cyber Bits – Issue 43 | Key Developments in Privacy & Cybersecurity | Dechert LLP


Articles in this issue

  • NSA and CISA Release Report on “Top Ten” Cybersecurity Misconfigurations; CISA Calls for Software Manufacturers to Implement Best Practices
  • Blackbaud Agrees to Pay US$49.5 million to Settle Claims Arising from Investigation of 2020 Data Breach
  • UK ICO Publishes Methodology for Issuing Fines
  • FTC Documents Consumer Concerns About AI
  • EDPB Guidelines on Transfers of Personal Data by Law Enforcement Authorities

NSA and CISA Release Report on “Top Ten” Cybersecurity Misconfigurations; CISA Calls for Software Manufacturers to Implement Best Practices

On October 5, 2023, the United States National Security Agency (NSA) and Cybersecurity and Infrastructure Security Agency (CISA) released a joint cybersecurity advisory (CSA) titled “NSA and CISA Red and Blue Teams Share Top Ten Cybersecurity Misconfigurations.” The CSA is available on the CISA website. Its purpose is to “highlight the most common cybersecurity misconfigurations in large organizations and detail the tactics, techniques, and procedures [threat] actors use to exploit these misconfigurations.” The ten most common network misconfigurations identified are:

  1. Default configurations of software and applications;
  2. Improper separation of user/administrator privileges;
  3. Insufficient internal network monitoring;
  4. Lack of network segmentation;
  5. Poor patch management;
  6. Bypass of system access controls;
  7. Weak or misconfigured multifactor authentication methods;
  8. Insufficient access control lists on network shares and services;
  9. Poor credential hygiene; and
  10. Unrestricted code execution.

The CSA was a result of a years-long operation by the NSA and CISA to “assess organizations to identify how a malicious actor could gain access, move laterally, and target sensitive systems or information.” In announcing the CSA, CISA explained that “[w]hile enterprises can and must take steps to identify and address these misconfigurations” there is also a burden on software manufacturers to provide appropriate tools. CISA’s statement included a list of best practices it said “[e]very software manufacturer should urgently adopt . . . to reduce the prevalence of common misconfigurations by design.”

Takeaway: The list coincides with the top ways we see threat actors gain access to clients’ systems in our breach response matters. While at first blush some may appear obvious or “blocking and tackling” issues, it is important that all organizations, large and small, take the time to carefully go through the list with an eye toward vulnerabilities that may still be lurking in the information security program at the company. Of course, software manufacturers will want to implement the best practices identified by the agencies in the CSA. With significant vendor breaches being so prevalent this year (MOVEit as just one example), the identification of vulnerabilities will also be of use to companies assessing vendors and, in particular, software providers.

 

Blackbaud Agrees to Pay US$49.5 million to Settle Claims Arising from Investigation of 2020 Data Breach

In early October, Blackbaud, Inc., a leading cloud software company, entered into settlement agreements with the Attorneys General of 49 states and the District of Columbia to resolve claims that its data security practices and response to a 2020 ransomware attack violated state and federal law. Blackbaud did not admit wrongdoing as part of the settlement.

Blackbaud primarily serves nonprofits, foundations, educational institutions, and healthcare providers. The settlement relates to claims arising from a 2020 data breach that affected data belonging to over 13,000 Blackbaud business customers. The impacted data related to millions of individuals and contained contact and demographic information, Social Security numbers, driver’s license numbers, financial information, and protected health information covered by HIPAA.

The states alleged that Blackbaud had failed to implement reasonable data protection measures and to remediate known security gaps. The states also alleged that Blackbaud failed to provide customers with timely, complete, and accurate information regarding the 2020 breach. For example, in July 2020 Blackbaud announced that the threat actors had not gained access to donor financial information or Social Security numbers; that statement was incorrect, and Blackbaud failed to correct it to customers even after discovering the error. The states alleged that these actions led to consumer notifications being significantly delayed or, in some cases, not occurring at all. The states claimed that Blackbaud’s actions violated HIPAA as well as state consumer protection, data privacy, and data breach notification laws.

Under the settlement, Blackbaud will pay a total of US$49.5 million and has also agreed to reforms to its security and breach notification practices. Required reforms will include implementing and maintaining written incident response plans to prepare for and respond to similar incidents in the future.

Takeaway: Of course, Blackbaud’s settlement serves as yet another cautionary example of the importance of implementing strong security practices and, in particular, of addressing known vulnerabilities promptly. Those of us who handled the Blackbaud breach from the standpoint of its customers recall Blackbaud’s efforts to be very forthcoming with information related to the breach. The states’ stance drives home that if information is provided up front, it is important for a company to correct that information should the facts take a turn. More importantly, it is critical not to go out with information until those facts have been forensically verified. This also avoids the loss of credibility with customers when “facts” about the incident later turn out to be different than originally conveyed.

 

UK ICO Publishes Methodology for Issuing Fines

The UK Information Commissioner’s Office (ICO) has published draft guidance on its approach to issuing fines under the UK GDPR. The ICO’s draft guidance is open to public consultation until November 27, 2023.

Under the draft guidance, the ICO, when deciding whether to issue a fine and at what level, would form a view on the seriousness of the infringement taking into account the gravity and duration of the infringement, whether the infringement was intentional or negligent, and the categories of personal data affected.

The ICO also would take into account various aggravating and mitigating circumstances. For instance:

  • A history of non-compliance would be an aggravating factor, in particular where the ICO has previously taken enforcement action;
  • Pro-active steps to minimize damage to data subjects can be a mitigating factor, but steps taken only after the ICO commences an investigation would be less influential;
  • Cooperation with the ICO is to be expected and would often not be a mitigating factor, but failing to properly engage with the ICO could aggravate matters; and
  • Financial benefits, including costs saved from failure to invest in appropriate measures, could also be an aggravating factor.

With respect to penalty amounts, the ICO is permitted to issue fines up to the higher of £17.5 million or 4% of total worldwide annual turnover. In practice, however, the ICO would consider turnover when setting fines for all businesses. The draft guidelines suggest that the ICO would include not only the turnover of the particular legal entity that carried out the infringement, but that of all entities forming part of a “single economic unit.” The guidelines on amounts also include these considerations:

  1. The starting point for less serious cases would be 0-10% of the statutory maximum fine for that type of infringement; for cases of a medium degree of seriousness, it would be 10-20% of the maximum; and for the most serious cases, 20-100% of the maximum.
  2. The ICO would reduce that amount for businesses with turnover less than £435 million to a percentage proportionate to the size of the business.
  3. A further adjustment would be made to account for aggravating and mitigating circumstances.
  4. The ICO would then check that the amount is “effective, proportionate and dissuasive” and does not exceed the statutory maximum. In exceptional circumstances, financial hardship could be relied upon to reduce the fine further, but the ICO indicates that this would only be likely to apply to businesses where the proposed fine would “irretrievably jeopardise an organisation’s economic viability.”
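The four steps above amount to a simple calculation. The sketch below illustrates the mechanics under stated assumptions: all turnover figures, seriousness percentages, and adjustment values are hypothetical examples chosen for illustration, not values published by the ICO, and the real methodology involves case-by-case judgment rather than a fixed formula.

```python
# Illustrative sketch of the ICO's draft fining methodology.
# All inputs below are hypothetical; the ICO's actual assessment
# is discretionary and fact-dependent.

STATUTORY_MAX = 17_500_000        # £17.5m, or 4% of worldwide turnover if higher
TURNOVER_THRESHOLD = 435_000_000  # £435m; smaller businesses scale down


def draft_fine(turnover: float, seriousness_pct: float,
               adjustment_pct: float = 0.0) -> float:
    """seriousness_pct is a fraction of the statutory maximum drawn from
    the band for the case: 0-10% (less serious), 10-20% (medium), or
    20-100% (most serious). adjustment_pct reflects aggravating (+) or
    mitigating (-) circumstances as an assumed percentage tweak."""
    # Statutory cap: higher of £17.5m or 4% of worldwide annual turnover.
    cap = max(STATUTORY_MAX, 0.04 * turnover)
    # Step 1: starting point as a share of the cap.
    fine = cap * seriousness_pct
    # Step 2: reduce proportionately for businesses under £435m turnover.
    if turnover < TURNOVER_THRESHOLD:
        fine *= turnover / TURNOVER_THRESHOLD
    # Step 3: adjust for aggravating and mitigating circumstances.
    fine *= 1 + adjustment_pct
    # Step 4: never exceed the statutory maximum.
    return min(fine, cap)


# Hypothetical: £100m-turnover business, medium-seriousness case at 15%
# of the maximum, with a 10% mitigation for proactive remediation.
print(f"£{draft_fine(100_000_000, 0.15, -0.10):,.0f}")  # prints £543,103
```

Note how the £435 million threshold does the heavy lifting for smaller businesses: a medium-seriousness case that starts at over £2.6 million is scaled to roughly a fifth of that before mitigation is applied.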

Where an ICO investigation identifies multiple infringements, a single cap would apply if the infringements related to the same or linked processing operations. Where infringements involve separate processing activities, the ICO plans to apply a separate cap to each infringing activity.

Takeaway: The ICO’s draft guidance provides a helpful methodology for organizations to assess the level of potential liability. Under it, only the most serious infringements by the largest organizations could give rise to fines close to the maximum. Fining decisions will be highly fact-dependent and the ICO’s fining determinations are not bound by precedent. However, the guidelines indicate that the ICO intends to ensure a broad level of consistency. The ICO places significant weight on an organization’s turnover when determining the level of a fine. In the context of M&A, when reviewing a target’s UK GDPR compliance, organizations should be conscious that the level of risk associated with ongoing non-compliance may be amplified by bringing the target into a group with higher turnover. These draft guidelines reinforce the need for international corporations to have sophisticated systems in place to mitigate risk and to document remedial steps taken. Such processes are not just best practice, but can reduce the risk of significant fines and attendant bad publicity.

 

FTC Documents Consumer Concerns About AI

In an October 3, 2023, blog post by the Federal Trade Commission’s (FTC) Office of Technology, the FTC “summarize[d] a few key areas of harm” it has reviewed relating to consumers’ concerns about artificial intelligence (AI). The blog post is available on the FTC’s website.

The first major consumer concern the FTC identified relates to “how AI is built” and, in particular, the fact that AI tools must be trained, a process that requires vast quantities of data. The FTC identified further concerns related to (i) copyright and intellectual property, including concerns stemming from AI developers’ alleged scraping of copyrighted material, or material protected by other intellectual property rights, to train AI; and (ii) biometric and personal data, including concerns about the use of biometric data, especially voice recordings, to train and develop AI.

The FTC also identified major consumer concerns related to “how AI works and interacts with users,” issues arising from bias and inaccuracies in AI, and issues that may arise from “how AI is applied in the real world,” particularly in relation to its potential for misuse in fraud and scams.

The FTC concluded by stating that it is “keeping a close watch on the marketplace and company conduct as more AI products emerge.”

Takeaway: Companies that develop, train and use AI will want to review the FTC’s blog post to better understand the concerns consumers have about AI and to get a sense of areas the FTC may focus on when determining areas and issues for future regulation and/or enforcement.

 

EDPB Guidelines on Transfers of Personal Data by Law Enforcement Authorities

The European Data Protection Board (EDPB) adopted draft guidelines on law enforcement authorities transferring personal data out of the EU. The draft guidelines are open for consultation until November 8, 2023.

The Law Enforcement Directive (LED) limits the circumstances in which personal data can be transferred from law enforcement authorities in the EU to authorities outside the EU or to international organizations. The draft guidelines specify that the transferred data must benefit from a level of protection that is “essentially equivalent” to the level of protection in the EU. Subject to certain exceptions, the LED requires “appropriate safeguards” to be in place for the transfer unless an adequacy decision under the LED has been issued by the European Commission. Currently, only the UK benefits from an LED adequacy decision.

The LED provides two options for what constitute “appropriate safeguards.” First, the safeguards may be provided by a legally binding instrument, such as a bilateral agreement between the EU member state and the destination country relating to data sharing for law enforcement. The second option allows the relevant EU enforcement authority to conduct its own assessment of the circumstances of the transfer to determine that appropriate safeguards are in place. In these situations, the draft guidelines specify that law enforcement authorities must perform a detailed assessment of the circumstances surrounding the data transfers and put in place additional safeguards where needed to supplement the existing laws and practices in place in the third country.

Takeaway: Because the European Commission at this point has issued only one adequacy decision for the purposes of law enforcement data transfers, law enforcement bodies would be required under the draft guidelines to largely rely on “appropriate safeguards” as the basis for international data sharing. Suspects, victims, witnesses and others involved in criminal investigations should be alert to their rights and enforcement authorities’ obligations under the LED.
