Recent State, Federal, and International Cybersecurity Laws


Welcome to this month’s issue of The BR Privacy, Security & AI Download, the digital newsletter of Blank Rome’s Privacy, Security & Data Protection practice.

When AI Takes Notes: Protecting Privilege, Privacy, and Professional Obligations

Blank Rome vice chair of artificial intelligence Sharon R. Klein, partner Alex C. Nisenbaum, and associate Sierra N. Lactaoen authored this alert discussing how AI notetaking tools can potentially undermine attorney‑client privilege, confidentiality, and ethical obligations if used without proper safeguards.

State and Local Laws & Regulations

CalPrivacy Seeks Preliminary Comments on Reducing Friction in Privacy Rights and Opt-Out Preference Signals: The California Privacy Protection Agency (“CalPrivacy”) issued an Invitation for Preliminary Comments exploring whether regulatory changes are needed to reduce friction in consumers’ exercise of privacy rights under the California Consumer Privacy Act (“CCPA”) and challenges faced by businesses in the implementation of opt-out preference signals (“OOPS”). CalPrivacy is soliciting stakeholder input on challenges consumers and businesses face, including difficulties locating privacy rights information, user-interface designs that may impair privacy choices, identity verification, use of authorized agents, and request-submission limits. The invitation also seeks feedback on OOPS, including experiences using signals such as Global Privacy Control; challenges businesses face in applying signals across browsers, devices, and identifiers for known consumers and pseudonymous profiles; and whether additional regulatory clarity is needed. Stakeholders are encouraged to propose specific regulatory language and to identify priorities for reducing friction. Preliminary comments are being accepted through April 6, 2026. If CalPrivacy proceeds with rulemaking, a formal comment period under the Administrative Procedure Act will follow. 
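
For illustration only, the sketch below shows one way a website might detect and honor an opt-out preference signal on the server side. The Global Privacy Control specification conveys the signal through the HTTP request header “Sec-GPC” with a value of “1” (and exposes it to page scripts as navigator.globalPrivacyControl); the function names and the in-memory store are hypothetical, and the sketch does not reflect any CalPrivacy regulatory requirement.

```python
# Illustrative sketch of detecting a Global Privacy Control (GPC) opt-out
# preference signal. Per the GPC specification, the browser sends the HTTP
# request header "Sec-GPC: 1". The store and helper names are hypothetical.

from typing import Mapping

# Hypothetical in-memory record of opted-out consumer identifiers.
_opted_out: set = set()


def gpc_opt_out_asserted(headers: Mapping[str, str]) -> bool:
    """Return True if the request carries a GPC opt-out signal."""
    normalized = {key.lower(): value.strip() for key, value in headers.items()}
    return normalized.get("sec-gpc") == "1"


def handle_request(headers: Mapping[str, str], consumer_id: str) -> None:
    """Record an opt-out of sale/sharing when the GPC signal is present."""
    if gpc_opt_out_asserted(headers):
        _opted_out.add(consumer_id)


if __name__ == "__main__":
    handle_request({"Sec-GPC": "1"}, consumer_id="consumer-123")
    print(_opted_out)  # {'consumer-123'}
```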

Colorado AI Policy Work Group Announces Revised AI Policy Framework: The Colorado AI Policy Work Group, convened by Governor Jared Polis, announced its support for a proposed legislative framework governing the use of artificial intelligence (“AI”) and automated decision-making technology (“ADMT”) in consequential decisions affecting consumers. The proposed bill would replace Colorado’s prior AI legislation with a revised consumer protection framework. The bill defines “covered ADMT” as ADMT used to materially influence consequential decisions in domains such as employment, housing, lending, insurance, healthcare, education, and essential government services. Under the framework, developers must provide deployers with technical documentation, including intended uses, training data categories, known limitations, and instructions for meaningful human review. Deployers must provide consumers with clear notice when ADMT is used in consequential decisions and, following an adverse outcome, must disclose the role of the ADMT, the data used, and how to request human review or data correction. The bill provides no private right of action; enforcement is reserved exclusively to the Colorado Attorney General under the Colorado Consumer Protection Act, and a 90-day cure period applies before penalties may be sought. The bill includes carve-outs for Health Insurance Portability and Accountability Act (“HIPAA”)-covered entities, Gramm-Leach-Bliley Act (“GLBA”)-regulated disclosures, and insurance practices subject to Commissioner of Insurance rules. If enacted, the law would take effect January 1, 2027.

Oklahoma Enacts Comprehensive Consumer Data Privacy Law: Oklahoma Governor Kevin Stitt signed Senate Bill 546 into law, making Oklahoma the twentieth State to enact comprehensive consumer data privacy legislation. The law, which takes effect on January 1, 2027, is modeled largely on Virginia’s Consumer Data Protection Act and applies to businesses that control or process: (i) personal data of at least 100,000 Oklahoma consumers; or (ii) personal data of at least 25,000 consumers while deriving at least 50 percent of gross revenue from data sales. The law establishes standard consumer data rights, including the right to access information about stored personal data, to know where it was transferred and to whom it was sold, and to request deletion. Businesses must also provide consumers with opt-out mechanisms for targeted advertising and the sale of personal data. The law requires data protection assessments for specified processing activities. Notably, the law does not require recognition of universal opt-out mechanisms or include enhanced children’s privacy protections, provisions that have become more common in recently enacted State privacy laws. Enforcement authority rests exclusively with the Oklahoma Attorney General, and the law includes a 30-day cure provision that does not sunset.
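
For illustration only, the minimal sketch below expresses the two applicability prongs summarized above as a simple check; the function name and inputs are hypothetical, and the sketch is not legal guidance.

```python
# Illustrative check of the applicability thresholds described above for
# Oklahoma S.B. 546. Names and structure are hypothetical.

def sb546_applies(ok_consumers_processed: int, revenue_share_from_data_sales: float) -> bool:
    """Return True if either applicability prong summarized above is met.

    revenue_share_from_data_sales is a fraction of gross revenue (0.0 to 1.0).
    """
    prong_one = ok_consumers_processed >= 100_000
    prong_two = ok_consumers_processed >= 25_000 and revenue_share_from_data_sales >= 0.50
    return prong_one or prong_two


if __name__ == "__main__":
    print(sb546_applies(120_000, 0.10))  # True: meets the 100,000-consumer prong
    print(sb546_applies(30_000, 0.60))   # True: 25,000+ consumers plus 50%+ revenue from data sales
    print(sb546_applies(30_000, 0.10))   # False: neither prong met
```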

Washington Enacts Laws on AI Likenesses, AI Content, and Chatbot Protocols to Prevent Harm: Washington Governor Bob Ferguson signed three bills expanding protections against the misuse of digital likenesses and AI-generated content. Senate Bill 5886 (“S.B. 5886”) amends the State’s personality rights law to cover “forged digital likenesses,” defined as AI-manipulated audio, video, or images that misrepresent an individual’s appearance, speech, or conduct and are likely to deceive a reasonable person. The law increases the civil penalty for violations from $1,500 to $3,000, plus actual damages, and introduces noneconomic damages for unauthorized use of a digital likeness, regardless of whether the infringement generated a profit. S.B. 5886 takes effect on June 10, 2026. House Bill 1170 (“H.B. 1170”) requires developers of generative AI systems with more than one million monthly users to embed provenance data, such as watermarks, in AI-generated images, video, and audio, to the extent commercially and technically reasonable. Government agencies using public-facing AI features must also notify consumers of AI interactions. H.B. 1170 takes effect on February 1, 2027. House Bill 2225 (“H.B. 2225”) requires AI chatbot operators to implement protocols for detecting and responding to users expressing suicidal ideation or self-harm, including referrals to crisis hotlines. Chatbots designed for minors must disclose their AI-powered nature and implement safeguards against generating sexually explicit content or employing manipulative engagement techniques. H.B. 2225 takes effect on January 1, 2027.
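
As a purely illustrative sketch of what embedding provenance data might look like in practice, the example below attaches a small provenance record to a PNG image using the Pillow library’s text-chunk metadata. H.B. 1170 does not prescribe this or any particular format, and production systems generally use standardized, tamper-evident provenance schemes and robust watermarks; the field values below are hypothetical.

```python
# Minimal sketch of attaching provenance metadata to a generated image,
# assuming the third-party Pillow library is installed. This is only one
# simple illustration of "provenance data"; it is not the statutory standard.

import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo

provenance = {
    "generator": "example-genai-system",   # hypothetical system name
    "content_type": "ai-generated-image",
    "created": "2026-01-15T00:00:00Z",
}

image = Image.new("RGB", (256, 256), color="gray")  # stand-in for model output
metadata = PngInfo()
metadata.add_text("provenance", json.dumps(provenance))
image.save("output.png", pnginfo=metadata)

# Reading the embedded metadata back from the saved file:
print(Image.open("output.png").text["provenance"])
```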

South Dakota Enacts Genetic Data Privacy Act: South Dakota Governor Rhoden signed Senate Bill 49 (the “Act”), establishing comprehensive protections for consumer genetic data. The Act, which takes effect on July 1, 2026, applies to direct-to-consumer genetic testing companies and entities that analyze data derived from such products and services. The Act requires covered companies to publish detailed privacy policies addressing data processing, retention, and security practices, and to obtain separate express written consent for each distinct use of genetic data, including initial collection, third-party disclosures, uses beyond the primary testing purpose, retention of biological samples, and certain marketing activities. The Act also grants South Dakota residents rights to access and delete their genetic data, delete their accounts, and request destruction of biological samples. The Act exempts HIPAA-covered protected health information, biological samples obtained for medical diagnosis or treatment, higher education institutions, law enforcement forensic labs, and certain research activities. The South Dakota Attorney General may seek civil penalties of up to $5,000 per violation.

Federal Laws & Regulations

White House Releases National AI Legislative Framework Urging Federal Preemption of State AI Laws: The White House released a national AI legislative framework setting forth legislative recommendations for Congress on AI. A central provision directs Congress to preempt State AI laws that “impose undue burdens” in favor of a national standard, while preserving States’ traditional police powers to enforce generally applicable laws protecting children, preventing fraud, and safeguarding consumers. The framework also recommends that States not be permitted to regulate AI development, which the White House characterizes as “an inherently interstate phenomenon with key foreign policy and national security implications.” On intellectual property, the framework supports allowing courts to resolve whether AI training on copyrighted material constitutes fair use and directs Congress not to intervene, while recommending that Congress consider enabling collective licensing frameworks for rights holders to negotiate compensation from AI providers. The framework further calls on Congress to establish commercially reasonable age-assurance requirements for AI platforms likely to be accessed by minors, affirm that existing child privacy protections apply to AI systems, and consider a federal right protecting individuals from unauthorized AI-generated digital replicas of their voice and likeness. 

Republican State Lawmakers Urge White House to Stop Blocking State AI Laws: More than 50 Republican State lawmakers sent a letter to President Trump urging the administration to halt efforts to prevent the passage of State AI legislation. The letter was prompted by White House actions pressuring lawmakers in Utah and other States to abandon AI bills, including a measure that would require AI developers to create and publish public safety and child protection plans. The lawmakers argued that State-led AI regulation is consistent with conservative principles and federalism, noting that States have long served as laboratories of democracy. The letter specifically raised concerns about the exposure of young people to deep-fake content and manipulative design practices, the need for transparency when consumers interact with AI systems, and the impact of data center buildouts on local communities. The signatories criticized the administration’s approach as going beyond coordination to effectively shielding the technology industry from accountability. The Trump Administration has previously sought to block State AI laws in favor of a single federal framework, arguing that a patchwork of State laws could slow innovation as the United States competes with China. After congressional efforts to preempt State AI laws failed, President Trump signed an Executive Order creating an “AI Litigation Task Force” to challenge State AI laws deemed overly burdensome and tying federal broadband funding to such efforts.

White House Releases National Cyber Strategy and Accompanying Executive Order: The White House released President Trump’s National Cyber Strategy, outlining the administration’s approach to cybersecurity across six policy pillars. The strategy emphasizes an aggressive, offense-forward posture, pledging to “deploy the full suite of U.S. government defensive and offensive cyber operations” to erode adversary capabilities, and warns that the administration “will not confine [its] responses to the ‘cyber’ realm.” Other pillars address modernizing federal networks through AI-powered cybersecurity solutions, zero-trust architecture, and post-quantum cryptography; securing critical infrastructure and supply chains; and building cyber workforce capacity. On AI, the strategy directs agencies to adopt AI-enabled tools for network defense and calls for securing the data and infrastructure underpinning U.S. AI leadership. An accompanying Executive Order targets transnational cybercrime, directing the Attorney General to prioritize prosecutions of cyber-enabled fraud and tasking the Secretary of the Department of Homeland Security to work with state and local partners on anti-scam training. 

GSA Issues First Federal Acquisition Regulation Clause for AI Systems: The U.S. General Services Administration (“GSA”) issued GSAR Clause 552.239-7001, the first federal acquisition regulation clause specifically addressing AI systems used in government contracting, as part of its latest multiple award schedule update, Refresh 31. The clause imposes detailed requirements on contractors and their AI service providers, including government ownership of all data inputs, data outputs, and custom developments; restrictions on the use of government data for model training, marketing, or business purposes; mandatory “eyes off” data handling procedures with logged and limited human access; data localization and logical segregation of government data; mandatory use of “American AI Systems” with a prohibition on foreign AI components; 72-hour incident reporting to the Cybersecurity and Infrastructure Security Agency (“CISA”) and the contracting officer; and adherence to “Unbiased AI Principles” requiring truthfulness, neutrality, and nonpartisanship in AI outputs. Following industry pushback, the GSA extended the comment period through April 3, 2026, and announced it would defer the clause to Refresh 32. Government contractors should monitor this clause closely, as it may serve as a template for AI acquisition requirements across other federal agencies.

NIST Publishes Report on Challenges to Post-Deployment Monitoring of AI Systems: The National Institute of Standards and Technology (“NIST”) published NIST AI 800-4, a report examining challenges to monitoring AI systems after deployment. The report, informed by three practitioner workshops and a review of 87 papers, identifies six categories of post-deployment monitoring (functionality, operational, human factors, security, compliance, and large-scale impacts), as well as barriers and open questions across each. Among the challenges highlighted are the absence of trusted standards or guidelines for monitoring methods and tools, an immature ecosystem for sharing incident and performance information across the AI value chain, and difficulties scaling human-driven monitoring alongside rapid system rollouts. The report notes that monitoring itself may create privacy and security risks, particularly when AI systems interact with sensitive personal information, and that organizations face tension between transparency obligations and competitive or proprietary interests. Notably, compliance monitoring remains hindered by a complex and shifting regulatory landscape, with inconsistencies across existing standards and jurisdictions. NIST is soliciting stakeholder input to guide future work in this area.
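
As an illustration of the kind of functionality monitoring the report discusses, the sketch below tracks a rolling error rate on post-deployment predictions and flags the system for human review when the rate drifts past a threshold; the window, threshold, and class design are hypothetical and are not drawn from NIST AI 800-4.

```python
# Illustrative post-deployment "functionality" monitor: compare predictions
# against later-observed ground truth and alert when a rolling error rate
# exceeds a threshold. All parameters here are hypothetical.

from collections import deque


class RollingErrorMonitor:
    def __init__(self, window: int = 500, alert_threshold: float = 0.10):
        self.outcomes: deque = deque(maxlen=window)  # True = erroneous prediction
        self.alert_threshold = alert_threshold

    def record(self, prediction, ground_truth) -> None:
        """Log whether a single prediction matched the observed outcome."""
        self.outcomes.append(prediction != ground_truth)

    @property
    def error_rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    def needs_review(self) -> bool:
        """Flag for human review once the window is full and drift is detected."""
        return len(self.outcomes) == self.outcomes.maxlen and self.error_rate > self.alert_threshold


if __name__ == "__main__":
    monitor = RollingErrorMonitor(window=5, alert_threshold=0.4)
    for prediction, truth in [(1, 1), (0, 1), (1, 0), (1, 1), (0, 1)]:
        monitor.record(prediction, truth)
    print(monitor.error_rate, monitor.needs_review())  # 0.6 True
```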

U.S. Litigation

New Mexico Jury Awards $375 Million Against Meta over Harm to Teen Users: A New Mexico jury returned a $375 million verdict against Meta Platforms, Inc. (“Meta”), finding that the company engaged in unfair and unconscionable practices by concealing the extent of mental health harm its social media platforms caused to underage users. The verdict followed a six-week trial brought by New Mexico Attorney General Raul Torrez, who had sought up to $2 billion in damages. The jury awarded $187.5 million on each of two claims, unfair practices and unconscionable acts, finding 37,500 violations at the statutory maximum civil penalty of $5,000 per violation, a figure tied to estimated ranges of teen Facebook and Instagram users in the State. The State alleged that Meta’s platforms failed to adequately protect minors from sexual predation, bullying, and harmful content related to suicide and self-injury, and that a 2016 algorithmic change to content feeds prioritized engagement over user well-being, contributing to what the Attorney General characterized as addiction. Meta stated it “respectfully disagree[s] with the verdict and will appeal.” 

Jury Awards $6 Million Against Meta and Google in Landmark Social Media Youth Harm Bellwether Trial: A Los Angeles County Superior Court jury found Meta and Google liable for harming the mental health of a plaintiff who alleged she became addicted to Instagram and YouTube as a child, awarding $3 million in compensatory damages and $3 million in punitive damages. The jury found Instagram 70 percent responsible and YouTube 30 percent responsible and determined that both companies were negligent in designing their platforms and failed to warn of their dangers. The punitive damages award was based on a finding that both companies acted with malice, fraud, or oppression. The trial is the first bellwether out of thousands of similar cases consolidated in the Los Angeles County Superior Court and has widespread implications for the tech industry, including for TikTok and Snapchat, which settled with the plaintiff prior to trial but remain defendants in other pending actions. Notably, the trial addressed only platform design features such as algorithms, notifications, autoplay, and infinite scroll, as the Court ruled that content is protected by Section 230 of the Communications Decency Act. Meta and Google executives, including Meta CEO Mark Zuckerberg, testified that social media addiction is not real, but were confronted with internal documents suggesting the companies knew their platforms were harmful to children. Both companies have indicated they intend to challenge the verdict. 

Fifth Circuit Affirms Summary Judgment for Pest Control Company in TCPA Pre-Recorded Call Case: The U.S. Court of Appeals for the Fifth Circuit affirmed summary judgment in favor of Sovereign Pest Control of TX, Inc. (“Sovereign Pest”) in Bradford v. Sovereign Pest Control of TX, Inc., No. 24-20379 (5th Cir. Feb. 25, 2026). The plaintiff alleged that Sovereign Pest violated the Telephone Consumer Protection Act (“TCPA”) by placing pre-recorded calls to his cell phone without obtaining prior express written consent. The Court held that the TCPA’s plain text requires only “prior express consent,” which encompasses both oral and written consent, for any auto-dialed or pre-recorded call to a wireless number, regardless of whether the call constitutes telemarketing. In so holding, the Court declined to defer to the Federal Communications Commission’s (“FCC”) regulation distinguishing between telemarketing and informational calls, which imposes a heightened “prior express written consent” requirement for pre-recorded telemarketing calls, citing the Supreme Court’s 2024 decision in Loper Bright Enterprises v. Raimondo. The Court found that the plaintiff had provided prior express consent by voluntarily furnishing his cell phone number on his service-plan agreement and later confirming Sovereign Pest could call him, noting that he renewed his service plan four times without ever objecting to the calls. The decision is significant for organizations that rely on pre-recorded calls for customer communications, as it narrows the consent standard under the TCPA and calls into question the enforceability of the FCC’s more restrictive written-consent requirement for telemarketing calls.

Virginia Federal Court Blocks State Social Media Time Limits for Minors: U.S. District Judge Patricia Tolliver Giles of the Eastern District of Virginia granted a preliminary injunction blocking enforcement of Virginia Senate Bill 854 (“S.B. 854”), which required social media platforms to limit minors under the age of 16 to one hour of daily use absent verifiable parental consent. The Court found that NetChoice, a trade association whose members include Meta, YouTube, and Reddit, demonstrated a likelihood of success on its First Amendment claim. The Court held that S.B. 854 is a content-based restriction because it exempts platforms consisting primarily of news, sports, entertainment, e-commerce, or provider-preselected content, thereby drawing subject-matter distinctions among categories of protected speech. Applying strict scrutiny, the Court found the law was not narrowly tailored, as it burdened more speech than necessary by requiring all users, including adults, to verify their age before accessing constitutionally protected content. The Court also found the law underinclusive, noting that it exempted interactive gaming despite evidence that digital gaming poses similar addiction risks to minors. The ruling follows a growing line of federal court decisions enjoining state social media age-restriction laws on First Amendment grounds, including similar rulings in Louisiana and Georgia.

Federal Court Denies Motion to Dismiss in Washington Antispam E-mail Class Action: U.S. District Judge Rebecca L. Pennell of the Eastern District of Washington denied a motion to dismiss filed by Ulta Salon, Cosmetics & Fragrance, Inc. (“Ulta”) in a putative class action alleging that Ulta’s promotional e-mails violated the Washington Commercial Electronic Mail Act (“CEMA”) and the Washington Consumer Protection Act (“CPA”). Plaintiffs alleged that Ulta sent commercial e-mails with false or misleading subject lines that created a false sense of urgency by misrepresenting the duration or availability of promotions. The Court rejected Ulta’s argument that CEMA is preempted by the federal CAN-SPAM Act, holding that CEMA’s subject-line provision falls within CAN-SPAM’s express exception permitting State laws that prohibit “falsity or deception” in commercial e-mail. The Court also rejected Ulta’s dormant Commerce Clause challenge, finding that CEMA applies evenhandedly to in-state and out-of-state senders, does not improperly regulate wholly out-of-state conduct, and does not impose a substantial burden on interstate commerce. The ruling is consistent with a similar decision issued weeks earlier in the Western District of Washington and follows a wave of CEMA litigation prompted by the Washington Supreme Court’s 2025 decision in Brown v. Old Navy, LLC, which broadly interpreted CEMA’s subject-line provision to prohibit any false or misleading information in the subject lines of commercial e-mails sent to Washington residents.

Court Denies Motion for Preliminary Injunction Against California AI Training Data Transparency Law: U.S. District Judge Jesus G. Bernal of the Central District of California denied xAI LLC’s (“xAI”) motion for a preliminary injunction seeking to block enforcement of California’s Assembly Bill 2013 (“A.B. 2013”), which requires developers of generative AI systems accessible in California to post documentation on their websites describing the datasets used to train their models. The required disclosures include, among other things, dataset sources, whether datasets include personal information or data protected by intellectual property rights, whether datasets were purchased or licensed, and whether synthetic data was used. xAI argued that A.B. 2013 compelled disclosure of trade secrets in violation of the Fifth Amendment’s Takings Clause, violated the First Amendment by compelling speech based on content and viewpoint, and was unconstitutionally vague. The Court found that xAI failed to demonstrate a likelihood of success on the merits on any claim. On the trade secrets issue, the Court held that xAI’s complaint relied on “frequent abstraction and hypotheticals” and did not identify any dataset or cleaning method sufficiently unique to warrant trade secret protection. On the First Amendment claim, the Court concluded that A.B. 2013 likely regulates commercial speech subject to intermediate scrutiny, as it provides consumers with information necessary to evaluate competing AI models rather than compelling ideological statements. The Court also rejected xAI’s vagueness challenge, noting that the statute’s enumerated disclosure topics provide sufficient notice of compliance obligations. 
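
For illustration, a developer’s internal tooling might capture the disclosure categories summarized above in a structured record like the sketch below; the field names and schema are hypothetical, as A.B. 2013 does not prescribe a particular format.

```python
# Hypothetical structured record for the A.B. 2013 disclosure categories
# summarized above. Field names are illustrative only, not statutory terms.

import json
from dataclasses import asdict, dataclass, field


@dataclass
class TrainingDatasetDisclosure:
    dataset_name: str
    sources: list = field(default_factory=list)
    contains_personal_information: bool = False
    contains_ip_protected_data: bool = False
    purchased_or_licensed: bool = False
    includes_synthetic_data: bool = False


disclosure = TrainingDatasetDisclosure(
    dataset_name="example-web-corpus",          # hypothetical dataset
    sources=["publicly available web pages"],
    contains_personal_information=True,
    contains_ip_protected_data=True,
    purchased_or_licensed=False,
    includes_synthetic_data=False,
)

# Serialize the record for posting with the developer's website documentation.
print(json.dumps(asdict(disclosure), indent=2))
```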

Ninth Circuit Partially Vacates Injunction Against California Age-Appropriate Design Code Act: In NetChoice, LLC v. Bonta, No. 25-2366 (9th Cir. Mar. 12, 2026), the Ninth Circuit affirmed in part and vacated in part the district court’s preliminary injunction blocking enforcement of the California Age-Appropriate Design Code Act (“CAADCA”), which requires businesses providing online services “likely to be accessed by children” to implement heightened privacy and safety protections for users under the age of 18. The panel vacated the injunction as to the CAADCA’s coverage definition and age estimation requirement, holding that NetChoice failed to carry its burden under the Supreme Court’s Moody v. NetChoice standard for facial First Amendment challenges because it did not develop a sufficient record addressing the full range of the statute’s applications. The Court emphasized that the coverage definition does not raise the same First Amendment issues in every application, as the statute applies to a broad array of online services, including ride sharing, ticketing, and financial platforms, regardless of the content they publish. However, the panel affirmed the injunction with respect to the CAADCA’s data use restrictions and dark patterns prohibition, agreeing with the district court that these provisions are unconstitutionally vague because terms such as “materially detrimental” to a child’s “well-being” and “best interests of children” fail to provide covered businesses with adequate notice of proscribed conduct. The case was remanded for further proceedings, including consideration of the severability of the enjoined notice-and-cure provision from the CAADCA’s remaining valid provisions.

U.S. Enforcement

CalPrivacy Issues First Enforcement Decision Addressing Student Privacy Violations: The CalPrivacy Board announced it had issued a decision requiring 2080 Media, Inc., d/b/a PlayOn Sports (“PlayOn”), a youth sports media and technology company, to pay a $1.1 million fine and overhaul its privacy practices following PlayOn’s settlement with CalPrivacy’s Enforcement Division. The decision is CalPrivacy’s first to address privacy violations involving students and California schools. PlayOn operates the GoFan digital ticketing platform, which is used by approximately 1,400 California schools and serves as the official ticketing platform for the California Interscholastic Federation. CalPrivacy found that PlayOn used tracking technologies, including first- and third-party cookies and the Meta Pixel, to collect personal information and deliver targeted advertisements to ticketholders, and forced consumers to click “Agree” to these tracking technologies before they could use their tickets, without providing an effective method to opt out of the sale or sharing of personal information. PlayOn also failed to recognize and honor opt-out preference signals and did not provide adequate notice of consumers’ privacy rights. In addition to the fine, the stipulated order requires PlayOn to conduct risk assessments reviewed by its board of directors, ensure disclosures are easy to read and understandable for the intended audience, including minors attending high school events, and implement proper opt-out methods, including recognition of opt-out preference signals.

CalPrivacy Fines Automotive Company for Adding Unnecessary Friction to Opt-Out Process: CalPrivacy announced it had issued a decision requiring Ford Motor Company (“Ford”) to pay a $375,703 fine and change its business practices following a settlement with CalPrivacy’s Enforcement Division. The action arose from CalPrivacy’s broader review of data privacy practices by connected vehicle manufacturers. CalPrivacy found that Ford violated the California Consumer Privacy Act (“CCPA”) by requiring consumers to complete an e-mail verification step before processing their requests to opt out of the sale and sharing of personal information collected through Ford’s digital properties and connected vehicle services. Under CCPA regulations, businesses may not require consumers to submit verifiable consumer requests to exercise the right to opt out. Ford treated all opt-out requests from consumers who did not complete the verification step as “expired,” resulting in the continued sale and sharing of those consumers’ personal information in violation of the CCPA. In addition to paying the fine, the order requires Ford to provide consumers with easy opt-out methods requiring minimal steps, conduct an audit of tracking technologies on its website, and ensure compliance with OOPS, including the Global Privacy Control. The enforcement action follows a similar CalPrivacy action against Honda Motor Co. in 2025.

FTC Takes Action Against Dating App Provider for Sharing Personal Data with Third Party: The Federal Trade Commission (“FTC”) announced an enforcement action against OkCupid and its affiliate Match Group Americas (“Match”), alleging that OkCupid deceived users of its dating app by sharing their personal information, including photos, location data, and other personal details, with an unauthorized third party in violation of OkCupid’s privacy policies. OkCupid’s privacy policy represented that it would not share personal information except with service providers, business partners, entities within its family of businesses, or when consumers were informed and given the opportunity to opt out. The FTC alleged that OkCupid provided the third party, which had no business relationship with OkCupid but whose investors included OkCupid’s founders, access to nearly three million user photos along with location and other data, without any formal or contractual restrictions on use. The FTC further alleged that OkCupid and Match took extensive steps to conceal and obstruct the agency’s investigation into the data sharing, including issuing misleading public statements denying involvement. Under the proposed settlement filed in the U.S. District Court for the Northern District of Texas, OkCupid and Match are permanently prohibited from misrepresenting the extent to which they collect, use, disclose, or protect personal information; the purposes for which such data is processed; and the function of privacy controls offered to consumers. 

Nebraska Attorney General Files Lawsuit Against Video Game Publisher for Enabling Child Exploitation and Deceptive Safety Practices: Nebraska Attorney General Michael Hilgers filed a complaint against Roblox Corporation (“Roblox”) in Adams County District Court, alleging the gaming platform has knowingly maintained an environment that exposes millions of children to sexual predators, violent content, and illegal activity while falsely marketing itself as safe for minors. The complaint asserts that Roblox, which has over 151 million daily active users and is used by roughly two-thirds of U.S. children ages nine to 12, has long been aware that predators use its platform to groom, extort, and abuse children, yet the company failed to implement basic safety controls such as meaningful age verification, effective parental controls, and adequate content moderation. The lawsuit further alleges that Roblox’s executives made numerous material misrepresentations to parents about the platform’s safety features and moderation capabilities, while internal decisions prioritized user growth and Wall Street metrics over child protection. Nebraska brings claims under the Nebraska Consumer Protection Act, the Uniform Deceptive Trade Practices Act, common law negligence (failure to warn), and fraudulent and negligent misrepresentation, seeking injunctive relief, civil penalties, restitution, and damages. The action joins a growing wave of State Attorney General enforcement actions targeting online platforms’ failures to protect minors.

International Laws & Regulations

ICO Publishes Interactive Guidance Tool on International Transfer Rules: The United Kingdom’s Information Commissioner’s Office (“ICO”) published an interactive guidance tool designed to help organizations determine whether their transfers of personal information outside the UK qualify as “restricted transfers” subject to the transfer rules under the UK GDPR. The tool is modeled on a “three-step test” set out in the ICO’s broader guidance on international transfers and is intended to be completed in approximately ten minutes through a series of up to six questions. Based on the user’s responses, the tool provides tailored guidance on how the legislation is likely to apply to the specific transfer scenario and directs users to detailed ICO guidance where further information is needed. For purposes of the tool, “transfer” encompasses both sending personal information to and making personal information accessible to a separate organization located outside the UK. The ICO notes that the tool applies to all types of organizations that handle personal information, including sole traders and self-employed individuals, but does not apply to processing conducted for law enforcement purposes. 

South Korea Overhauls PIPA, Ties Fines to CEO Accountability: South Korea promulgated the most significant revision to its Personal Information Protection Act (“PIPA”) since the law’s 2023 overhaul, with an effective date of September 11, 2026. The amendment introduces a punitive fine ceiling of up to 10 percent of total turnover, layered on top of the existing three percent baseline, triggered by repeat serious violations involving intent or gross negligence within a three-year window, a single incident affecting 10 million or more data subjects, or failure to comply with a formal Personal Information Protection Commission (“PIPC”) corrective order. The amendment also explicitly assigns supervisory responsibility for data protection compliance to the CEO, who is designated as the ultimate responsible person with a statutory duty to manage and supervise compliance. Chief privacy officer (“CPO”) appointments, reassignments, and dismissals for organizations above a size threshold now require a formal board resolution and must be reported to the PIPC. The amendment shifts breach notification to a probabilistic trigger, requiring notification when a controller becomes aware of a qualifying likelihood of compromise, even before a breach is conclusively verified.

European Parliament Adopts Position on AI Act Simplification Omnibus and Delay of High-Risk AI Rules: The European Parliament announced it had adopted its position on an omnibus proposal amending the EU Artificial Intelligence Act (“AIA”), passing with broad support by a vote of 569 to 45, with 23 abstentions. The amendments would delay the application of certain obligations for high-risk AI systems to allow time for implementing guidance and standards. Members of the European Parliament (“MEPs”) proposed fixed application dates: December 2, 2027, for high-risk AI systems specifically listed in the regulation, including those involving biometrics, critical infrastructure, education, employment, essential services, law enforcement, justice, and border management, and August 2, 2028, for AI systems covered by EU sectoral product safety legislation. The amendments also extend the compliance deadline for watermarking AI-generated audio, image, video, or text content to November 2, 2026. Notably, MEPs introduced a new prohibition on AI “nudifier” systems that generate or manipulate sexually explicit images resembling identifiable real persons without consent, though an exception exists for systems with effective safety measures preventing such outputs. Additional provisions allow personal data processing to detect and correct AI biases under strict necessity safeguards, extend subject matter expert support measures to small mid-cap enterprises, and permit less stringent AIA obligations for products already regulated under sector-specific EU laws, such as medical devices and toy safety. The MEPs’ position sets the stage for negotiations with the European Council on the final text, as part of the broader digital omnibus package proposed by the European Commission in November 2025. 

European Commission Publishes Second Draft of Code of Practice on Marking and Labelling of AI-Generated Content: The European Commission published a second draft of a voluntary Code of Practice designed to help providers and deployers of AI systems meet the marking and labelling requirements for AI-generated content under Article 50 of the AIA. The revised draft, prepared by independent experts, incorporates written feedback from hundreds of participants and observers, including industry, academia, civil society, Member States, and MEPs. The Code is organized into two sections. Section 1 addresses providers of generative AI systems under Article 50(2) and introduces a revised two-layered marking approach involving secured metadata and watermarking, with optional fingerprinting, logging, and detection and verification protocols. Section 2 targets deployers of AI systems under Article 50(4), focusing on labelling deepfakes and text publications concerning matters of public interest. Notably, the revised Section 2 removes the previous taxonomy distinguishing AI-generated content from AI-assisted content and proposes a task force to develop a uniform, interactive EU icon. The Code has been streamlined to provide greater flexibility, reduce compliance burdens, and promote the use of open standards. Feedback on the second draft is being accepted until March 30, 2026, with finalization expected by early June 2026, and the transparency rules becoming applicable on August 2, 2026. 
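
As a toy illustration of two concepts in Section 1’s layered approach, the sketch below derives a content fingerprint (a SHA-256 digest) and an integrity-protected metadata record (an HMAC tag) using only the Python standard library; the draft Code does not prescribe these primitives, and the key handling and field names are hypothetical.

```python
# Toy illustration of a content "fingerprint" (SHA-256 digest) plus "secured"
# metadata (an HMAC tag over the serialized record). Key management and field
# names are hypothetical; this is not the draft Code's specified mechanism.

import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-a-managed-secret"  # hypothetical key material

content = b"...bytes of an AI-generated image, audio clip, or text..."

fingerprint = hashlib.sha256(content).hexdigest()

metadata = {
    "generated_by_ai": True,
    "content_fingerprint": fingerprint,
    "generator": "example-genai-system",  # hypothetical
}
serialized = json.dumps(metadata, sort_keys=True).encode()
tag = hmac.new(SIGNING_KEY, serialized, hashlib.sha256).hexdigest()

# A verifier holding the same key can confirm the metadata was not altered.
recomputed = hmac.new(SIGNING_KEY, serialized, hashlib.sha256).hexdigest()
print(fingerprint[:16], hmac.compare_digest(tag, recomputed))
```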

Recent Publications & Media Coverage
