Powerful new AI model raises security risks


NOTE: If you are short on time, watch the video and complete this See, Think, Wonder activity: What did you notice? What did the story make you think about? What would you want to learn more about?

SUMMARY

Anthropic announced that it has started a very limited test of its newest AI model called Mythos. It’s a model deemed so powerful that the company warned it could cause widespread disruption if it were released to the public. Anthropic is giving some companies access to Mythos to test and identify vulnerabilities, a move that is raising concerns. Geoff Bennett discussed more with Gerrit De Vynck.

View the transcript of the story.

News alternative: Check out recent segments from the News Hour, and choose the story you’re most interested in watching. You can make a Google doc copy of discussion questions that work for any of the stories here.

WARM-UP QUESTIONS

  1. What is Mythos, and what company created it?
  2. Why is Mythos not being released to the general public, according to Anthropic?
  3. Who is Gerrit De Vynck, and what is his background?
  4. How does Mythos identify software vulnerabilities?
  5. Where (with what companies) is Anthropic sharing Mythos?

FOCUS QUESTIONS

  • How do you think people in the U.S. can best protect themselves against security risks related to AI?
  • Do you think the federal government (or state governments) should regulate AI by passing laws restricting its use, development or how it is marketed to the public? If so, how?

Media literacy: In this segment, guest Gerrit De Vynck says, “I think we always need to take these big AI companies with a grain of salt. It’s not the first time an AI company has said, oh, my goodness, our new technology is so powerful, we should be afraid of it. You know, it’s great marketing, right, because if something is so powerful that it could change the world or cause chaos, it’s also very powerful for doing other things.”

  • What do you think De Vynck means by this? Why would a company promote its product as being potentially dangerous?

WHAT STUDENTS CAN DO

Examine the infographic below. Then discuss —

  • Is the 70% figure higher or lower than you would expect? Why?
  • What sort of limits for use do you think poll respondents had in mind? How might the government set limits for AI use? Who else might set limits outside of legislation?
  • Why do you think the government might disagree with setting limits on AI use?


Sign up to receive our weekly newsletter with Daily News Lessons and community events.

To provide feedback on News Hour Classroom’s resources, including this lesson, click here.


