Biden Makes Bold Moves On AI Regulation, How Hackers Help Israel’s War Effort


This is the published version of Forbes’ CIO newsletter, which offers the latest news for chief innovation officers and other technology-focused leaders. Click here to get it delivered to your inbox every Thursday.

As war continues in the Middle East between Israel and Hamas, the Israeli tech community has used its strengths to continue the fight. Some company leaders and employees are doing behind-the-scenes work to help people who have been impacted by the war. The conflict is giving new meaning to the technology of war: hacking, digital organization and videoconferencing are all being relied on in the fight.

Israeli organizations—including monday.com, Guesty and United Hatzalah—have been using tech professionals and platforms to perform critical tasks: providing emergency medical response for the injured, finding places to stay for the displaced and purchasing floral wreaths for victims’ funerals. Cybersecurity experts are volunteering for a project called the Civic Center for the Coordination of Cyber Activities, which helps evacuees who may have left phones and computers behind regain access to their accounts. We’ll get into more of these efforts below.

In a more defensive move, the Israel Defense Forces have reached out to cybersecurity and surveillance experts—including NSO Group, which developed the controversial iPhone hacking software Pegasus—to hack into phones and computers of people who have been taken hostage or killed by Hamas. This kind of access can not only provide information about the victims and their condition to loved ones, but also data about hostage location and movement for the military.

“The entire [hacker] ecosystem is devoted to the effort to gather any kind of information,” a source told Forbes.

Until next time.

POLICY + REGULATIONS

It’s been a huge week for oversight of AI. President Joe Biden unveiled a broad executive order aimed at curbing the risks and “seizing the promise” of the technology. The order will require government oversight of AI models that could pose a risk to national security. These standards, which govern the data being collected and how it is secured, will be developed by the National Institute of Standards and Technology. The order also mandates guidelines and standards for the federal government’s own use of AI: each department must outline how it plans to bring the technology into its operations, as well as how it will keep that technology secure.

“I’m determined to do everything in my power to promote and demand responsible innovation,” Biden said as he signed the executive order.

Experts quickly weighed in on the meaning of the executive order. In short, the federal government will concentrate on ensuring privacy, equity and civil rights are protected through the use of AI technology, so startups working in that space should concentrate on making sure their tech in those areas is solid.

Several tech CEOs were immediately on board with the executive order, having already been working with governments to proactively develop regulations for the fast-growing technology. But some fear the executive order could stifle innovation: heavy regulations could benefit larger, better-capitalized companies, which have the funding and ability to meet them, while making compliance much harder for newcomers.

“America is built on risk-taking, not red tape,” enterprise search startup Hebbia’s founder George Sivulka wrote in an email to Forbes.

However, the U.S. is not alone in its push to regulate AI technology. Leaders from the G7 nations—Canada, France, Germany, Italy, Japan, the U.K. and the U.S.—also announced on Monday an agreement on a set of international guiding principles on AI, as well as a voluntary code of conduct for AI developers. The EU is also finalizing its own laws governing AI, which could ban or block services deemed harmful.

ARTIFICIAL INTELLIGENCE

The federal government is working to keep AI secure, but so are developers. ChatGPT parent OpenAI announced last week it’s building a team to oversee and evaluate “catastrophic risks” that come from the technology. The Preparedness team will focus on “frontier models”—AI models with advanced capabilities that could threaten public safety and global security through actions like designing advanced weapons, finding vulnerabilities in critical software systems, synthesizing persuasive disinformation or evading human control.

AI currently is capable of doing things that are less threatening to society as a whole, but still harmful. AI-generated images and video can mislead people, like deepfake news segments in which real news anchors appear to report fake stories. Chip maker Qualcomm and camera manufacturer Leica are planning to add new metadata to images that brands them as real or AI-generated. They aren’t the first to do this: in August, Google DeepMind announced a beta version of SynthID, a tool that embeds a digital watermark into the pixels of AI-generated images.
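Google has not published how SynthID actually works, but the general idea of an invisible, machine-readable signal hidden in pixel values can be sketched with a toy least-significant-bit watermark. Everything below (the `embed`/`extract` helpers, the two-byte "AI" tag) is an illustrative assumption, not Google’s technique:

```python
# Toy pixel-level watermark (NOT SynthID, whose method is unpublished):
# hide a short tag in the least significant bit of each pixel value,
# invisible to the eye but trivially recoverable by software.

def embed(pixels, tag):
    # Flatten the tag into bits, LSB-first within each byte.
    bits = [b >> i & 1 for b in tag.encode() for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite the lowest bit only
    return out

def extract(pixels, length):
    # Reassemble `length` bytes from the stored low-order bits.
    data = bytearray()
    for j in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[j * 8 + i] & 1) << i
        data.append(byte)
    return data.decode()

image = [200] * 64                      # stand-in for grayscale pixels
marked = embed(image, "AI")
assert extract(marked, 2) == "AI"
# No pixel moved by more than 1 out of 255 — imperceptible to a viewer.
assert max(abs(a - b) for a, b in zip(image, marked)) <= 1
```

A real scheme like SynthID is far more robust (surviving cropping, compression and resizing, which would destroy this toy’s low-order bits), but the embed/detect split is the same basic shape.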

Those AI-generated images are built via models that are “trained” on human-created ones, which some artists don’t look kindly on. A tool from the University of Chicago called Nightshade adds subtle and invisible changes to images to cause AI algorithms to misinterpret them. The system could make a picture a person would identify as a flower look like something completely different to an AI system, like a truck.
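Nightshade’s actual attack targets the training data of image generators, and its algorithm is more sophisticated than anything shown here. But the core idea — a change too small for a person to notice that flips what a model "sees" — can be illustrated with a toy nearest-centroid classifier. The feature vectors, centroids and 0.02 shift below are all invented for illustration:

```python
# Toy illustration of an imperceptible perturbation flipping a model's
# interpretation (NOT Nightshade's algorithm, which poisons training
# data): a feature vector near the decision boundary is nudged across.

FLOWER = [0.49] * 8   # feature vector just on the "flower" side
CENTROIDS = {"flower": [0.4] * 8, "truck": [0.6] * 8}

def classify(x):
    # Nearest-centroid rule: pick the label whose centroid is closest.
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return min(CENTROIDS, key=lambda k: dist(CENTROIDS[k]))

# Shift each feature by 0.02 — far below what a human would perceive
# as a different image, but enough to cross the decision boundary.
poisoned = [v + 0.02 for v in FLOWER]

print(classify(FLOWER))    # flower
print(classify(poisoned))  # truck
```

The real system crafts perturbations against learned feature extractors rather than hand-picked vectors, so that generators trained on poisoned images learn the wrong associations.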

However, companies using AI technology to improve their efficiency and operations need to do more to protect themselves. A study from risk management software company Riskonnect found only 9% of companies are prepared to deal with the risks posed by AI. And just 17% have conducted trainings or briefings to let their teams know about threats the technology poses.

NOTABLE EARNINGS

Chip maker AMD beat top- and bottom-line estimates, reporting $5.8 billion in revenue. That represents modest 4% year-over-year growth, but the company’s stock price spiked after CEO Lisa Su told investors that AMD expects $2 billion in revenue from its AI chips alone next year. Su said AMD has secured commitments from large tech and cloud computing companies to use its MI300 chips, which are designed for AI and high-performance computing.

DEEP DIVE

The ‘Uber Of Life-Saving’: These Tech Tools Are Helping Israeli Workers Coordinate Volunteers, Support

Even before the war between Israel and Hamas started, Israel had its own nonprofit backup EMT organization called United Hatzalah. The group has volunteer dispatchers who connect medical professionals and EMTs to emergency callers using software to match specialties, equipment, experience and location. Volunteer medics are then directed to the emergency using a custom-made Android device with communication and location apps. But United Hatzalah saw its utility proven—and usage skyrocket—when Hamas terrorists began attacking Israel. In the first days of the war, the group’s medics fielded about 10,000 emergency calls per day, an increase of 400%.

Other tech-enabled Israeli businesses are also helping manage civilian needs during the war. Project management software platform monday.com, which has been used by emergency responders around the globe for initiatives including Covid-19 vaccinations and Ukrainian refugee management, has been put to work coordinating blood donations and needed medical equipment. And vacation property management startup Guesty has helped find housing for Israelis displaced by fighting.

“The tech ecosystem is stepping up, overcompensating and continuing to kick ass,” Israeli tech blogger and adviser Hillel Fuld told Forbes. “It’s been a really beautiful thing to see, especially given the lack of unity that we had just before in this country.”

FACTS + COMMENTS

Google is expanding its vulnerability rewards program for those who find weak spots in its AI systems and services.

$12 million: The amount Google paid hackers in 2022 as part of its “bug bounty” program

9: The number of specific AI bugs and vulnerabilities Google says it may reward, signaling the company won’t take an ‘anything goes’ approach

‘We look forward to continuing our work with the research community’: A Google spokesperson said about the program

VIDEO

Inside The Mind Of A Computer Hacker

QUIZ

Virtual reality company Cosm is partnering with Warner Bros. to build two domed venues with 360-degree screens meant to simulate being at events in person. There are several events that can be broadcast in the dome, according to the current agreement. Which is not included?

A. NBA games

B. The NHL’s Stanley Cup playoffs

C. U.S. women’s national soccer team matches

D. Major League Baseball’s World Series

Check if you got it right here.
