
Dechert Cyber Bits – Issue 14 | Dechert LLP

Clearview AI Settles Biometric Data Privacy Suit with ACLU

On May 9, 2022, Clearview AI, Inc. (“Clearview”) and the American Civil Liberties Union (“ACLU”) announced an agreement to settle a lawsuit involving Clearview’s database of human facial recognition data. Clearview sells the data primarily as a subscription service to government and law enforcement agencies across the world—including U.S. Immigration and Customs Enforcement and the Department of Defense—for the purposes of identifying people and solving crimes. The ACLU filed the suit against Clearview in Illinois state court in March 2020, alleging that it violated the Illinois Biometric Information Privacy Act (“BIPA”) by scraping billions of individuals’ faceprints and using the data without their knowledge or consent. BIPA requires companies operating in Illinois to obtain explicit consent from individuals to collect their biometric data. The ACLU filed suit “on behalf of groups representing survivors of domestic violence and sexual assault, undocumented immigrants, current and former sex workers, and other vulnerable communities uniquely harmed by face recognition surveillance.”

According to the ACLU, as part of the settlement filed in Illinois federal court, Clearview has agreed to implement certain processes to bring its business into “alignment with BIPA.” Some of the terms include:

  • a restriction on the sale of its full facial recognition database across the United States;
  • a permanent ban nationwide against making its faceprint database available to most businesses and other private entities;
  • an agreement to stop selling access to its database to any entity in Illinois, including state and local police, for five years;
  • ending its practice of offering free trial accounts to individual police officers;
  • maintaining an opt-out request form on its website; and
  • for the next five years filtering out photographs that were taken or uploaded in Illinois. (Because this may not capture all images that affect Illinois residents protected by BIPA, Illinois residents can upload their image with a photo ID and Clearview will block its software from finding matches for their faces. The terms of the settlement bar Clearview from using that facial image for anything other than removal.)

The settlement, which is still being finalized, does not prevent Clearview from continuing to do business with government entities outside of Illinois; BIPA only limits the company from selling or giving away access to its facial recognition database to private companies and individuals. Neither the settlement nor BIPA precludes Clearview from continuing to sell its algorithms, without the faceprint database, to commercial customers.

Clearview continues to defend itself against privacy claims by others, especially individuals, in a U.S. multi-district litigation, and faces legal complaints filed by privacy watchdogs in France, Austria, Italy, Greece and the UK. In the past year, Clearview has also been accused of violating privacy laws in Canada, France, Australia, Italy, and the UK and has faced various fines and orders to destroy the biometric data it collected on citizens in these jurisdictions. On May 23, 2022, Clearview was ordered by the UK’s Information Commissioner’s Office to delete facial recognition data belonging to UK residents and pay a fine of £7.5 million ($9.4 million), and in March, Italy’s data protection authority (the Garante) fined Clearview €20 million ($22 million) for collecting biometric data without the consent of Italian residents.

Takeaway: The Clearview case highlights how, in the absence of unifying federal regulation, state law is playing an increasing role in regulating data privacy rights in the United States, and especially individuals’ rights to control their biometric data. The settlement aligns with the FTC’s and UK ICO’s use of algorithmic disgorgement as a remedial measure, including in the context of facial recognition tools. Aggressive class action litigation under BIPA is unlikely to abate, given the potential for significant damages awards and the continued refinement of legal theories by enterprising plaintiffs’ lawyers with a seemingly unlimited pool of targets.


International Cybersecurity Authorities Warn Managed Service Providers About Cyber Threats

On May 11, 2022, CISA, the NSA and the FBI joined the cybersecurity authorities of the United Kingdom, Australia, Canada, and New Zealand in issuing a joint alert warning of an increase in malicious cyber activity targeting Managed Service Providers (“MSPs”). The alert warned that “cybersecurity authorities expect malicious cyber actors—including state-sponsored advanced persistent threat groups—to step up their targeting of MSPs in their efforts to exploit provider-customer network trust relationships.” Malicious actors that successfully compromise an MSP could then deploy ransomware and cyber espionage tools against the MSP as well as against the MSP’s customer base.

The alert provides guidance to both MSPs and their customers, recommending that both take steps to implement baseline security measures and operational controls to protect against cyber-attacks. These include specific recommendations, such as requiring multifactor authentication, implementing monitoring and logging, segregating internal networks, applying updates in a timely manner, regularly backing up systems and data, and developing and exercising incident response and recovery plans. The guidance also recommends that customers contractually require their MSPs to implement these measures and controls.

Takeaway: Businesses should ensure that the MSPs they use are taking the appropriate precautions to prevent cyber-attacks and should take steps to protect their own systems in case their MSP is compromised.


Senate Confirms Bedoya to FTC, Establishing Democratic Majority

On May 16, 2022, Alvaro Bedoya was sworn in as the fifth commissioner of the Federal Trade Commission (“FTC”). Vice President Kamala Harris voted to break a 50-50 tie on the Senate floor to confirm Bedoya as the FTC’s third Democrat on May 11, after eight months of hearings, nominations, and delays. The five-member commission had been deadlocked with two commissioners from each party since former FTC Commissioner Rohit Chopra was confirmed last October to a new role leading the Consumer Financial Protection Bureau.

Previously, Bedoya served as founding director of the Center on Privacy & Technology at Georgetown Law, where his privacy work drew attention for studying the disproportionate impact of surveillance technologies, such as facial recognition, on historically marginalized communities.

The FTC—under the leadership of Chair Lina Khan—has recently heightened its scrutiny over the tech industry. Khan’s stated vision for the agency is to break down silos within the FTC that separate competition and consumer protection investigations, and to focus on rulemaking regarding digital privacy and mitigating harms from next-generation technologies, innovations, and nascent industries across sectors. Some recent high-profile agenda items include an increased focus on potential antitrust violations by tech companies, “right to repair” restrictions, and enforcement actions over allegedly unlawful data collection.

Takeaway: Bedoya’s confirmation breaks a 2-2 partisan tie at the FTC at a time when FTC Chair Lina Khan is seeking to have the Commission engage in more aggressive enforcement and rulemaking. Bedoya is likely to play a leading role in privacy decisions as a third Democratic vote in Chair Khan’s favor. Expect an uptick in enforcement and rulemaking. Now is the time to review company policies—ensuring that they say what you do and that you do what they say—and to confirm that data practices align with FTC guidance and enforcement priorities.


Biden Administration Warns That Use of AI and Algorithms for Employment Decisions May Violate the ADA

Employers increasingly rely on algorithmic software to aid in employment decision-making, including resume scanners, virtual assistants and chatbots, video interviewing software, and testing software. But on May 12, 2022, the Department of Justice and the Equal Employment Opportunity Commission (“EEOC”) warned that such software may violate the Americans with Disabilities Act (“ADA”) by screening out individuals with disabilities. Under the ADA, an unlawful screen out occurs when a disability prevents a job applicant who could perform the essential functions of the job with a reasonable accommodation from meeting a selection criterion, and the applicant or employee loses a job opportunity as a result.

The new guidance provides suggested steps for employers to avoid violating the ADA, such as:

  • informing applicants of the steps included in the evaluation process and asking whether they will need reasonable accommodations to complete it;
  • developing and selecting tools that measure only the abilities or qualifications that are truly necessary for the job; and
  • avoiding algorithmic tools that make inferences about abilities and qualifications based on characteristics that are only indirectly correlated with those criteria, instead of directly measuring the applicant’s abilities and qualifications.

Takeaway: Businesses should audit algorithms and AI used for employment decisions to ensure they are not inadvertently violating the ADA. For a more detailed discussion of the EEOC’s guidance and practical advice on steps employers can take to comply with it, see the Dechert OnPoint on this issue from our Labor and Employment Group.


EDPB Adopts Guidelines on Harmonized Calculation of Fines

In Issue 13 of Cyber Bits, Dechert reported that the European Data Protection Board (the “EDPB”) outlined its intention to improve cooperation among local Data Protection Authorities (“DPAs”) regarding enforcement of the GDPR. As part of this, on May 12, 2022, the EDPB adopted new guidelines relating to methodologies for calculating fines under the GDPR. EDPB Chair Andrea Jelinek said in a press release: “From now on, DPAs across the EEA will follow the same methodology to calculate fines. This will boost further harmonization and transparency of the fining practice of DPAs.”

The new guidelines provide DPAs with a five-step approach when calculating fines but emphasize that the aim is to create harmonized starting points where the final amount will depend on all of the circumstances of the case. The five steps are:

  1. The DPA must determine whether the case raises one or more instances of sanctionable conduct and, within that, whether there is one or more instances of infringement. This is to clarify whether all or only some of the infringements can be fined.
  2. The harmonized starting point should take into account the maximum fine level under the GDPR, the seriousness of the infringement, and the turnover of the relevant controller or processor.
  3. The DPAs must consider whether there are any aggravating or mitigating factors that may increase or decrease the amount of the fine, such as actions taken to mitigate damage (mitigating) or previous infringements (aggravating).
  4. The DPAs must look at the maximum limits set by the GDPR and not exceed those levels. In the case of a turnover-based fine, that limit will be dynamic and require consideration by the DPA.
  5. Finally, the DPAs must determine whether the final calculated amount of the fine is effective, proportionate and dissuasive, or whether the level of the fine requires further adjustment. This adjustment might be downwards (such as to account for specific social or economic factors) or upwards (such as to add a deterrence multiplier).
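
To make the arithmetic concrete, the five steps can be sketched in code. This is an illustrative model only: the guidelines set harmonized starting points and a sequence of considerations, not a mechanical formula, and every threshold, rate, and parameter name below is a hypothetical assumption rather than anything prescribed by the EDPB.

```python
# Simplified, hypothetical model of steps 2-5 of the EDPB methodology.
# The 4%/EUR 20M caps reflect GDPR Art. 83(5); everything else is
# an illustrative assumption.

GDPR_STATIC_CAP = 20_000_000   # EUR, static maximum for serious infringements
TURNOVER_CAP_RATE = 0.04       # 4% of worldwide annual turnover

def calculate_fine(starting_point: float,
                   turnover: float,
                   aggravating_factor: float = 1.0,
                   mitigating_factor: float = 1.0,
                   adjustment: float = 1.0) -> float:
    """Return a modelled fine in EUR (step 1, identifying the
    sanctionable conduct, is assumed to have happened already)."""
    # Step 2: harmonized starting point (taken here as an input).
    fine = starting_point
    # Step 3: apply aggravating and mitigating circumstances.
    fine *= aggravating_factor * mitigating_factor
    # Step 4: never exceed the GDPR maximum; the turnover-based limit
    # is dynamic, so it must be computed per controller/processor.
    legal_max = max(GDPR_STATIC_CAP, TURNOVER_CAP_RATE * turnover)
    fine = min(fine, legal_max)
    # Step 5: check the amount is effective, proportionate and
    # dissuasive; modelled as a final multiplier, re-capped.
    return min(fine * adjustment, legal_max)
```

For example, a EUR 10 million starting point with a 1.5x aggravating factor yields EUR 15 million for a controller with EUR 100 million turnover, since that sits below the EUR 20 million static cap.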

The guidelines are now in a public consultation period until June 27, 2022, after which the EDPB will prepare a final version taking into account any feedback received. The EDPB press release states that the final version will include a reference table with a range of starting points for the calculation of a fine, correlating the seriousness of an infringement with the turnover of a controller or processor.

Takeaway: These guidelines signal the EDPB’s ongoing commitment to build its strategic framework for effective cooperation among DPAs throughout the EEA. The guidelines will also provide greater clarity for DPAs and businesses alike in their interpretation of GDPR enforcement, which may assist in harmonization and redressing a perceived imbalance in the fines seen to date between authorities such as the French CNIL and the Irish DPC.


Changes to Sanctions Regime of Proposed AI Act

Euractiv reports that the French Presidency (the “Presidency”) of the Council of the European Union has made a number of sanctions proposals in its latest French-language Compromise Text on the European Union’s proposed and highly anticipated Artificial Intelligence (“AI”) Act (the “Act”). The Compromise Text presents some important changes to the Act, as it seeks to amend the sanctions regime, the implementation timeline, confidentiality considerations for supervisory bodies, and the delegated powers of the European Commission (the “Commission”).

The Compromise Text proposes considering a company’s size and the severity of the violation when deciding the size of a fine. The highest fines would be limited to the unlawful use of prohibited practices, including manipulative algorithms.

Where small and medium enterprises (“SMEs”) and start-ups commit severe violations of the Act, they could face a maximum fine of 3% of their annual turnover or €30 million, whichever is higher. For all other violations, the maximum fine is 3% of annual turnover or €20 million, whichever is higher. Larger companies could face a maximum fine of 6% of their annual turnover.
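
The tiered caps described above can be expressed as simple arithmetic. The sketch below is a hypothetical illustration of the Compromise Text as reported, not language from the proposed Act; the function and its parameters are assumptions for clarity only.

```python
# Illustrative model of the reported fine caps in the Compromise Text.
# Tiering logic and names are hypothetical assumptions.

def max_fine_eur(annual_turnover: float, is_sme: bool, severe: bool) -> float:
    """Return the reported maximum fine in EUR for a given company."""
    if is_sme:
        # SMEs/start-ups: 3% of annual turnover or a fixed floor,
        # whichever is higher (EUR 30M for severe violations,
        # EUR 20M otherwise), as reported.
        floor = 30_000_000 if severe else 20_000_000
        return max(0.03 * annual_turnover, floor)
    # Larger companies: up to 6% of annual turnover.
    return 0.06 * annual_turnover
```

On this model, an SME with EUR 100 million turnover committing a severe violation faces a EUR 30 million cap (the floor exceeds 3% of turnover), while a large company with EUR 1 billion turnover faces a EUR 60 million cap.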

The Compromise Text proposes three years, as opposed to the original two years, before application (i.e., enforceability) of the Act, to allow businesses more time to adapt and to give member states more time to prepare for effective implementation. Member states would also have 12 months, rather than three months, to establish the relevant national authorities and notified bodies called for under the Act.

In our AI series of OnPoints, Dechert identified confidentiality as a key concern of businesses in relation to the proposed Act. The Compromise Text extends confidentiality requirements related to the information and/or data received in relation to the application of the Act to also include the Commission, the board of national authorities and all those involved in the application.

The Commission’s powers
The Compromise Text now includes a “sunset clause” enabling the Commission to adopt delegated acts for a limited period of five years only, as opposed to the indefinite period previously proposed.

Takeaway: The proposed amendments to the Act would provide businesses with additional time to prepare for implementation and signal a more proportionate approach to the fines regime.
