
With help from Derek Robertson
The excitement — and skepticism — over how artificial intelligence will rock the world of cybersecurity took hold of Las Vegas this past weekend as thousands of elite hackers, government officials and security professionals gathered for the DEF CON hacking conference.
This year’s conference became a meeting of the minds on how to deal with generative AI. Amidst the hacking wargames, tinfoil hats (yes, literally) and light-up badges, policymakers laid out their vision for the future of AI, while hackers found holes in large language models, and industry giants looked for new ways to use AI to improve cybersecurity.
Here are five takeaways from DEF CON 31:
1. The Pentagon is skeptical about generative AI’s accuracy and is looking for ways to validate the technology.
Craig Martell, who heads the Pentagon’s Chief Digital and Artificial Intelligence Office, laid out his misgivings about large language models and whether they can reach “five nines” accuracy (i.e., be 99.999 percent accurate) when needed. “If you’re a soldier in the field, asking a large language model a question about a new technology that you don’t know how to set up, I need five nines of correctness. I cannot have a hallucination that says, ‘Oh yeah, put widget A, connect it to widget B,’ and it blows up,” Martell told a packed room.
Martell told POLITICO afterward that the AI industry must be more rigorous in determining what its models are really capable of, but added that company officials he’s spoken with are amenable to coming up with performance standards for their technology.
And Martell is optimistic that a new Pentagon task force can help the Defense Department understand its own AI needs, including for operational planning, streamlining administrative tasks and defending against adversarial uses of AI. The task force would also help come up with a set of conditions under which an AI tool is acceptable for the Pentagon to use, he said.
And Martell thinks the hacker community can help find those conditions for large language models. “Tell us how they break. Tell us the dangers,” he said. “If we don’t know how it breaks, we can’t get clear on the acceptability conditions. And if we can’t get clear on the acceptability conditions, we can’t push the industry towards building the right thing.”
2. The hacker community shares the Pentagon’s skepticism and is eager to find the limitations of the current models.
Martell’s speech sparked a flurry of concerned questions and comments from the audience on the potential pitfalls of large language models, including how their performance would be evaluated.
Cody Ho, a Stanford student who spent his DEF CON trying to find flaws in large language models in a White House-endorsed red-teaming challenge, said he is happy the government is being proactive in guiding AI development. “Historically, in my opinion, Uncle Sam hasn’t done a great job of adhering and following best practices. And they’ve always lagged behind the state of the art,” he said. But Ho is still waiting to see if the government’s active presence in the hacker community will actually improve the nation’s use of AI for cyber defense.
Ho said he was able to elicit negative behavior from the systems he probed by creating new conditions for the model to follow. Others reportedly found ways to get the AI systems to divulge sensitive information and instructions on how to surveil people without their knowledge.
3. The cybersecurity industry is excited to incorporate this new technology into its workflows.
“We will always remember this as the AI DEF CON,” Heather Adkins, Google’s vice president of security, told POLITICO. After more than two decades in the cybersecurity field, Adkins said the yearly conversations around improving cybersecurity could get monotonous. But the rather sudden arrival of large language models is helping her remember why she does this job. “It reinvigorates your spirit,” she said.
Adkins believes AI-powered assistants will eventually help cybersecurity professionals sort through vast amounts of data to investigate cybersecurity incidents. They could also help draft incident reports or review code, tasks that are often considered tedious within the field, she said.
4. Conference demos aren’t just for the masses — they’re becoming a way to educate policymakers.
During a walk-through of DEF CON’s AI Village, where the red-teaming exercise was held, Sven Cattell, an information security expert who helps run the village, said the large language model demos made for the hacker community have gotten good mileage with lawmakers.
At the South by Southwest conference in March, Cattell said, policymakers like Rep. Michael McCaul (R-Texas) saw a demo of how the guts of a large language model work and asked “well informed questions about what’s going on.” The AI demos at SXSW were also how White House officials — including Arati Prabhakar, director of the White House’s Office of Science and Technology Policy — first became involved in the AI red-teaming challenge at DEF CON.
5. For a community and industry associated with covert actions, it turns out working transparently with the government may be the best way forward.
Multiple conference attendees told me about an old DEF CON game called “Spot the Fed,” where the objective was to identify the federal government officials trying to blend in with the unorthodox crowd at hacking conventions.
But with people like Martell, Prabhakar and U.S. Homeland Security Secretary Alejandro Mayorkas in public attendance at this year’s DEF CON, those days of tacit stand-offs between hackers and feds are gone. “There’s long been a recognition that the community wants to help, and the government clearly needs it and private industry clearly needs it,” Google’s Adkins said.
The industry is keenly aware of the growing attention these security-focused communities are receiving from the federal government when it comes to evaluating new technologies. “DEF CON and Black Hat have morphed pretty significantly over the years — from being something that was fairly fringe and maybe even frowned upon by law enforcement types to something that governments participate in a really significant way,” said Michael Sellitto, head of geopolitics and security policy at Anthropic, whose AI model was part of the red-teaming exercise.
A high-profile newsletter has the latest on the financialization of AI processing power, one of the newest trends in the industry.
Anthropic co-founder Jack Clark writes in his Import AI newsletter today about how cloud company CoreWeave raised $2.3 billion by borrowing against the value of the computer hardware it owns, whose scarcity amid the AI boom has been well documented. Clark writes that this is significant not just as a reflection of how highly computing power is valued amid that scarcity, but also as a sign of the increasing movement of traditional capital into the AI sector.
“That means AI companies (and cloud providers) are going to start doing more complicated and weirder forms of financing, and it also means some of the infrastructure of AI (e.g, chips) and some of the demand signals (e.g, pre-committed customer contracts for allocations to cloud infra) will become turned into financial instruments and further integrated into the rest of the capital economy,” Clark writes. — Derek Robertson
Labor unions and some policymakers are making moves to protect workers from potential displacement by AI.
POLITICO’s Nick Niedzwiadek writes in today’s Morning Shift newsletter about the slate of bills introduced in some states and cities to address AI employment issues. New York City has been working on the topic since 2018, when it launched a task force to investigate potential impacts, and California, New Jersey and Washington, D.C., have recently followed suit.
At the federal level, the Equal Employment Opportunity Commission is making headway as well: Reuters reported that the agency has settled its first lawsuit against a company accused of using AI software to screen out older job applicants.
John Rood, CEO of AI compliance firm Proceptual, told Nick that although there isn’t an industry standard yet for AI auditing, his “guess is that changes” before too long. — Derek Robertson
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); and Steve Heuser ([email protected]). Follow us @DigitalFuture on Twitter.
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.