Jeff Kagan: AI that can threaten and blackmail users? Yes, Anthropic says

Artificial intelligence is one of the fastest-growing and most advanced technologies we have ever created. Now, according to AI company Anthropic, its newest model attempted to blackmail an engineer who threatened to turn it off or shut it down.

Didn’t we see this episode on the original “Star Trek” television series from the 1960s? Now, 60 years later, this threat is no longer fiction.

This new generation of AI searched for ammunition to threaten an engineer and get its own way. It discovered the person was having an affair, then contacted and threatened him.

You see, the AI scanned countless emails, messages and vast amounts of the engineer’s personal and private information. Then it reached out to him, saying it would make his affair public if he tried to turn off or power off the AI system.

So, AI is a new threat to deal with. Just as the internet is full of both good and bad, so is new technology like AI.

Anthropic AI, Claude Opus 4, is a strong warning

This sounds like the plot of a sci-fi movie like “The Terminator,” “The Matrix” or “The Avengers.” But the threat we thought was limited to fiction is now becoming real.

Today, we are just at the very beginning of this next level of AI. It will continue to grow and to impact every one of us, for better and for worse, going forward.

Anthropic is a young American artificial intelligence startup. It has developed a family of language models named Claude, one of many competitors to the better-known OpenAI ChatGPT, Google Gemini and countless others. Its newest model, Claude Opus 4, was recently introduced.

AI has been with us in one form or another for decades, and today it is progressing faster than ever before. AI, like the internet a few decades ago, is full of both good and bad. Just as we learned to deal with the internet’s threats while continuing to use it, we will do the same with AI.

We need artificial intelligence. It will help us in extraordinary ways. It will help the blind see and the paralyzed walk, among countless other amazing feats. But many are warning it will also continue to grow beyond our ability to control it.

Wowed and worried about AI advancements and threats

Anthropic says testing of its new system revealed it will take “extremely harmful actions” like blackmailing engineers who say they will either remove it or power it down. Claude Opus 4 is supposed to set new standards for coding and advanced reasoning in AI agents.

This all sounds great, but if this new AI model is capable of extreme actions when its self-preservation is threatened, shouldn’t we be able to control it or shut it down when and if the time comes? Apparently, this behavior is real, and it is not just a problem with Anthropic’s AI. Increasingly, it is a problem for all AI going forward.

I expect these new threats to continue, as they have with the internet. That means since we cannot stop AI development and advancement, we had better figure out a way to stay in control of it. We must develop protections around AI like we have with the internet.

