The Framework Act on the Development of Artificial Intelligence, effectively the world’s first comprehensive AI regulation law, entered force already subject to an enforcement grace period. The act, which aims to promote the domestic AI industry and create a safe and trustworthy environment for AI use, was promulgated on January 21. To give companies time to prepare for compliance and to avoid confusion, the Ministry of Science and ICT is applying a regulatory grace period of at least one year in the initial phase. In addition, on March 25 the government launched a task force to review and improve the Framework Act on the Development of Artificial Intelligence and has begun a further round of amendments and refinements.
Meanwhile, a series of hacking incidents last year at telecom operators, e-commerce platforms, and credit card companies resulted in the leakage of more than 60 million personal data records in aggregate, an unprecedented scale. In response, the government completed legislative amendments to the Act on Promotion of Information and Communications Network Utilization and Information Protection and the Personal Information Protection Act, significantly strengthening ex post sanctions. According to Samsung SDS, AI-based security threats ranked overwhelmingly highest, at 81.2%, among the top five cybersecurity risks companies should watch this year. As AI-driven security threats grow more sophisticated, companies feel increasingly burdened by what they see as excessive and unclear regulations layered on top of these risks.
Need for a reasonable revision of the Framework Act on the Development of Artificial Intelligence
The biggest point of contention in the Framework Act on the Development of Artificial Intelligence is the ambiguity surrounding what constitutes “high-impact AI.” The law defines high-impact AI as AI systems that may have a significant effect on, or pose risks to, people’s lives, physical safety, or fundamental rights.
The problem is that the thresholds for “impact” and “risk” are vague. To qualify as high-impact AI, a system must first fall within one of several areas listed in Article 2(4) of the Framework Act on the Development of Artificial Intelligence, such as energy, drinking water, healthcare, medical devices, nuclear power, biometrics, recruitment, credit screening, transportation, public services, or student assessment. It must then be deemed likely to have a significant effect on, or pose a risk to, the protection of people’s lives, physical safety, or fundamental rights.
For companies, areas such as recruitment, credit screening, and healthcare are closely tied to core business activities. Yet there are no objective criteria for determining at what point an effect becomes “significant.”
A compliance officer at a large corporation said, “We agree with the intent of the Framework Act on the Development of Artificial Intelligence, but the criteria for high-impact AI are excessively abstract,” adding, “If you also require prior government approval, it becomes very difficult for companies to accept the business risks involved.”
Among these criteria, recruitment in particular affects virtually all companies. Fortunately, the government has decided that AI used in hiring will be excluded from the scope of high-impact AI if a human is involved in the final decision-making process.
Deepfake regulation blind spots: need to penalize distributors as well
Under the Framework Act on the Development of Artificial Intelligence, transparency obligations consist of three elements: prior notice of AI use, labeling of outputs generated by generative AI, and notice and labeling obligations for deepfake content. These labeling obligations apply only to AI developers and AI service providers. AI developers are companies that build the underlying AI models, such as those behind ChatGPT or Google Gemini. AI service providers are companies that use such models to develop and offer AI applications.
By contrast, users of AI applications—such as webtoon creators, news organizations, publishers, and YouTubers—are not considered business operators under the law and are therefore excluded from its scope.
The issue is that when these users create or exploit deepfake content, or remove or tamper with watermarks, there are no applicable regulations, making it impossible to adequately protect end users. It is also difficult to hold platforms that distribute deepfake content accountable. To cover these actors, experts argue that the current categories of developers and service providers should be expanded to include a new category of distributors. A representative of a domestic application developer commented, “We recognize the need to regulate deepfakes, but a framework that concentrates responsibility solely on developers and service providers has clear limits,” and added, “Regulation needs to be designed with the actual content distribution stage in mind.”

The timing of enforcement is also critical. While monitoring AI technology trends and global regulatory developments, Korea should avoid being the first to impose the most stringent rules. It is also important for companies to participate directly in the amendment process and present their views.
Tightening both ex ante and ex post regulations on hacking
After a series of large-scale hacking and data leakage incidents last year, the government announced the Inter-Ministerial Comprehensive Information Security Measures in October 2025 and, in March this year, passed amendments to the Act on Promotion of Information and Communications Network Utilization and Information Protection in the National Assembly to implement them. The amended law will take effect in September. One key change is the introduction of an administrative fine regime for repeated security breaches. If a company, through intent or gross negligence, suffers two or more security incidents within a five-year period, it may be fined up to 3% of its revenue. In addition, when investigating a security incident, the authorities may impose a non-compliance penalty if a company fails to comply with corrective orders, does not submit requested materials, or obstructs or evades on-site inspections or investigations.
Such revenue-based fines can be a powerful deterrent. However, because they are generally understood as a means of recouping unjust gains rather than as purely punitive measures, there is a legal question as to whether they can be imposed when no unjust enrichment has occurred. If these fines overlap with those under the Personal Information Protection Act, the cumulative burden on companies could become excessive.
The deadlines and scope for reporting security incidents have also been specified in more detail. Companies must report the time of occurrence and their response status within 24 hours of becoming aware of a security incident. The scope of incident analysis has been expanded from simply identifying the “cause of the incident” to determining both “whether an incident occurred and its cause.” In effect, if the government has obtained indications of a security breach, it can now investigate on-site even without a company’s prior report.
In the early stages of an incident, however, it is often difficult to identify the cause, and the 24-hour deadline may lead to superficial, box-ticking reports. If there are no objective criteria for determining when the authorities may conduct on-site inspections based on preliminary indications, there is also a risk of infringing on corporate management autonomy. A source in the financial sector noted, “The obligation to report within 24 hours of a security incident may, contrary to its intent, result in a flood of purely formal reports.” Another security industry expert warned, “Tougher administrative fines may raise awareness in the short term, but in the long run they could push companies toward cover-ups or overly defensive responses,” and stressed, “Punitive measures must be accompanied by incentives that encourage preventive investment.”
Information security governance and internal control systems, including the authority and status of the Chief Information Security Officer (CISO), have also been strengthened. Information and communications service providers, except for mid-sized companies, must appoint an executive as CISO. The CISO’s responsibilities now explicitly include managing information security personnel and budgets, as well as reporting on information security to the board of directors. Many companies, however, may lack the resources to hire an executive-level CISO, raising concerns that the role will be filled in name only.
Concerns over formalistic reporting and excessive burdens under tougher regulations
The Personal Information Protection Act has also been significantly tightened with respect to personal data breaches. The most substantial change is the sharp increase in the ceiling for administrative fines: the upper limit has been raised from 3% to 10% of total revenue, well above the global benchmark of 4% set by the European Union (EU). For companies in low-margin sectors such as retail and manufacturing, where operating margins are often in the low single digits, a fine of 10% of revenue could exceed several years of operating profit and threaten their very survival. Rather than simply raising the ceiling, experts argue that factors such as the level of security investment, incident response efforts, and damage mitigation measures should be reflected in calculating fines. Industry voices are calling for lower-level regulations to specify reduction mechanisms for companies that have made a certain level of preventive investment or obtained relevant certifications.
A lawyer specializing in personal data commented, “A framework that allows fines of up to 10% of revenue goes far beyond global standards,” and added, “We need sophisticated calculation criteria that reflect the gravity of the violation and the company’s prior efforts to prevent it.”
The latest amendments also explicitly designate business owners or representatives as ultimately responsible for the processing and protection of personal data and require them to take comprehensive management measures, including providing sufficient personnel and budget. For data controllers above a certain threshold, the appointment of a Chief Privacy Officer (CPO) must be approved by the board of directors and reported to the Personal Information Protection Commission (PIPC). The scope of notification obligations has been expanded as well. Previously, notifications were required in cases of loss, theft, or leakage of personal data; the amended law adds forgery, alteration, and destruction. The government is also pushing to introduce a class action system that would apply across all sectors.
Recent government legislation may heighten corporate awareness of security incidents, but there is a strong possibility that it will amount to overregulation. If defensive management focused solely on avoiding penalties becomes entrenched, companies’ global competitiveness could suffer. Beyond strengthening ex post punishment, the government should adopt incentive-based regulations that encourage companies to make substantive security investments.
If administrative fines of up to 10% of revenue and a broad class action regime become a reality, companies hit by security incidents could find themselves beyond recovery. Companies must therefore empower their CISOs and CPOs in practice, establish enterprise-wide governance led by their boards, and make continuous security checks a routine requirement. To reduce these risks, there should also be rewards for companies that proactively implement transparent data management and ethical AI practices.
[Lee Sung-Yeob, Professor at Korea University Professional Graduate School of Technology Management and President of the Korean Association of Information and Communications Law]
