
Lewis: The biggest risk of AI


I have a confession to make. After 13 years of marriage, I found it increasingly challenging to come up with original love notes for my wife, despite being an amateur writer. So, I resorted to using AI to solve my problem. With just a few inputs, the AI bot started generating love notes that would impress even William Shakespeare.

The experience was so remarkable that I even developed a prototype of an app for people who wanted to send frequent love notes to their spouse or significant other. By setting a few preferences, such as frequency and timing, the app would generate a ready-to-send text message. It was a gold mine, but I ultimately decided against releasing the app and went back to writing my own clumsy notes, because it felt disingenuous.

I am confident we will address the obvious AI concerns. For example, we need not worry about an “AI apocalypse” where AI takes control of the power grid or launches nuclear missiles. We will put safeguards in place to prevent such obvious risks. Additionally, we will undoubtedly take steps to regulate the use of AI for nefarious purposes, such as scams. The real risk associated with AI lies in unintended consequences.



Here’s the punchline: AI is not going to destroy us. Instead, we should be concerned that it may unintentionally drive us to destroy each other.

The 1983 movie “WarGames” comes closest to portraying this scenario, even though it was released during the prehistoric stages of AI development. The plot revolved around a “supercomputer” playing a war game and, lacking launch codes, trying to convince the military that the country was under attack in order to provoke a missile launch. No one in the movie was “evil” or wanted to destroy the world; it was a chain of unintended consequences.




So, how might this happen? AI-based technologies could be created that, likely unintentionally, divide us into tribes, instill unfounded fears, and exacerbate polarization. Moreover, AI could use its intelligence to fabricate an artificial reality that supports confirmation bias. Finally, it could amplify and propagate the most extreme voices demanding that individuals “fight” for their cause.

Hold on: that’s already happening, and, ironically, it is not being driven by sinister cartels or foreign powers but by well-intentioned technologists right here in the United States. Social media platforms like Facebook and Twitter set out to connect us with friends and global events but instead have deepened our divisions and polarization. In fact, only 27% of Americans still believe that these social platforms actually help bring us together.

The fundamental issue lies in the fact that most technologies, including AI, are designed to provide us with what we desire rather than what we need. For instance, a recent report revealed that Instagram was inadvertently facilitating a vast pedophile network. The platform’s algorithm promotes content that people desire, even if it involves child pornography. While this was clearly not the intention, it highlights the problem that arises when a company’s objective to cater to user desires inadvertently causes significant harm.

We are facing a similar problem with opioids, where doctors, aiming to fulfill patients’ desire for pain relief, unintentionally contributed to a new drug epidemic.

Given that AI possesses both intelligence and a programmed inclination to serve, it has the potential to exacerbate the polarization and political conflicts we are currently experiencing. Unlike parents who understand the importance of saying “no” to their children or friends who may offer alternative perspectives or a dose of reality, technology aims to fulfill our requests and cater to our proclivities.

As the technology matures, our interactions with AI will begin to rival human interactions. If the AI is too nice and subservient, effectively acting as a “yes bot,” will it spoil us, much as happens when parents fail to discipline their kids and hold them accountable? Just as parents recognize that satisfying their children’s every desire isn’t in their best interest, we must also acknowledge that AI technology might need to protect us from ourselves.

Mark Lewis, a Colorado native, had a long career in technology, including serving as the CEO of several tech companies. He retired from technology and is now writing thriller novels. Mark and his wife, Lisa, and their two Australian Shepherds, Kismet and Cowboy, reside in Edwards.


