What the Biden administration’s new executive order on AI will mean for cybersecurity

Regulations have been proposed by a presidential administration committed to a responsible rollout of one of the most consequential technologies since the advent of the internet.

The adoption of AI products accelerated rapidly over the past year since OpenAI released its large language model-powered chatbot, ChatGPT. Today, the generative AI platform boasts more than 100 million weekly users worldwide and is used by developers at 9 in 10 Fortune 500 companies.

In that time, academics, researchers, and even the CEOs of the companies producing AI tools have called for guidance and responsible safeguards for AI. Among their concerns are the displacement of workers; violations of copyright law; the worsening of wealth inequality in financial services; the spread of discrimination and misinformation; and national security risks in a world where other global powers have access to AI as well.

Drata reviewed the Biden administration’s 48-page executive order on AI and analyses from law firms and researchers to identify the proposals most likely to affect U.S. cybersecurity.

The order has been described as “sweeping” by academics because it proposes regulations for using AI in the federal government and managing risks to privacy, consumer protections, national security, and civil and human rights in both the public and private sectors.

The Biden administration said it not only wants to prevent harm but also to promote the responsible development of AI tools that will keep the U.S. at the forefront of what’s been dubbed the “AI arms race.” On that front, the order aims to “maximize the benefits of AI” for working Americans, expand grant funding for AI research, attract workers who can build advanced AI systems, and hire AI professionals into federal agencies.
