“To err is human,” goes the popular saying. But that doesn’t mean artificial intelligence (AI) can’t make mistakes. It’s neither perfect nor divine.
I recently took a short introductory course on AI at a local community center. The program was aimed at senior citizens in a fast-changing world. Technophobe that I am, I went mainly to try to conquer my fears.
Although I can see the value of AI – and realize that you can no more ignore it than you can get by in the modern world without a smartphone – I also see its inherent dangers. Perhaps my greatest takeaway from the course was a quote by Jon Kabat-Zinn: “You can’t stop the waves, but you can learn to surf.” So I surf the web, use ChatGPT, and try to keep my head above water.
Artificial intelligence programs are so prevalent that when I called up Word to write this column, an AI prompt immediately suggested: “Describe what you want to write.”
Yet I didn’t give in to temptation and ask the computer to do the work for me in the time it would take me to make a cup of coffee. I wrote, rewrote, trimmed, and edited this the old-fashioned way.
As part of the course, we learned how to use AI programs to write a poem, put it to music, and create an avatar to perform it. The process didn’t take long, but what it provided in instant gratification, it lacked in emotional satisfaction. The result, however good, did not come from the heart. It was more artificial than authentic.
IF YOU’RE into futuristic, dystopian movies, watch I, Robot. The 2004 film was ahead of its time. I was reminded of engineer Blake Lemoine’s warning in June 2022 that Google AI program LaMDA (Language Model for Dialogue Applications) was close to being sentient – with a built-in fear of dying. Google fired the whistleblower rather than pull the plug on the program.
This week, I read a disconcerting Huffington Post piece which concluded that an Amazon-backed AI model “would try to blackmail engineers who threatened to take it offline.” The piece reported: “In tests, Anthropic’s Claude Opus 4 would resort to ‘extremely harmful actions’ to preserve its own existence, a safety report revealed.”
It is disturbing to see AI programs not only writing themselves but also offering their own versions of history and general knowledge.
By relying on previously presented material – some of it drawn from the dark world of fake news and conspiracy theories – AI programs can, over time, change what is recorded in the future. When a lie is repeated often enough, conventional wisdom can turn into unconventional warfare.
We are in a brave new world where seeing isn’t believing.
It is easy to share photos, including fake images, and shut down discussion of what is really going on. Such falsehoods can inspire violence and terror attacks.
When Elias Rodriguez murdered young diplomats Yaron Lischinsky and Sarah Milgrim in Washington last week, his shouts of “Free, free Palestine” continued to echo on social media after the sound of the gunshots had faded.
There is a constant battle between facts and fallacies. The toxic effect of the “Muhammad al-Dura incident” (when the death of a 12-year-old caught in a crossfire between Israeli troops and Palestinian gunmen was falsely blamed on Israel) is still felt nearly 25 years later.
The lies about Israel hitting Gaza’s al-Ahli Hospital early in this war have never been fully laid to rest, despite evidence that the explosion was caused by a failed Palestinian rocket launch aimed at Israel.
The claim by a top UN official, appearing on the BBC last week, that 14,000 babies in Gaza faced imminent death within 48 hours has not disappeared. When pressed, the UN itself clarified that the figure referred to a threat of malnutrition, not death, projected over a period of more than a year without aid. Yet the story continued to circulate even after the two days passed without the threatened thousands of fatalities.
This week, clearly AI-generated images appeared in the press and on social media purporting to show the bodies of nine children of Gazan doctor Alaa al-Najjar, who were reportedly killed by an Israeli drone while the pediatrician was at work. Other photos showing the same children had previously been used to illustrate different stories of purported atrocities.
As the HonestReporting watchdog noted: “It’s also important to ask why all the sources of this sad story are secondary at best (the relatives) or agenda-driven at worst (Hamas Health Ministry officials)... any journalist should have asked why al-Najjar’s house was targeted, given that the IDF has made clear that it targets terrorists, not the civilians they hide behind.
“When such questions are not asked, the result is irresponsible reporting that takes Hamas’s word as gospel and does further injustice to those it uses as human shields. It also exploits the faith of news consumers who believe they get all the facts from a reliable source.”
The reason for the war – the Hamas invasion and mega-atrocity on October 7, 2023, in which 1,200 people in Israel were murdered and 251 abducted – has been replaced by new images, many of them false. This reduces the pressure on Hamas to end the war and release the remaining hostages – those being tortured and starved in terror tunnels, and the bodies of those killed and being held as bargaining chips.
I ASKED ChatGPT 4 about the dangers of AI, and it swiftly compiled a list divided into key categories and subcategories. These included “job displacement” and “widening inequality,” as the economic benefits could “be concentrated in the hands of a few companies or individuals”; biased data and opaque decision-making; “privacy and surveillance,” as governments and corporations can use AI to monitor individuals on an unprecedented scale; data misuse; and misinformation and manipulation.
AI can create “deepfakes,” highly realistic fake videos, audio, or images that can be used for fraud, political manipulation, or blackmail. AI-generated content and automated propaganda produced by bots can flood social media, spreading misinformation and disinformation, and influencing public opinion.
AI can be used in weapons systems that make life-and-death decisions without human oversight, potentially leading to unintended mass casualties. And AI can intensify cyberattacks.
The program carried on, succinctly summing up its own faults: “Loss of human control: A powerful AI pursuing goals that are not aligned with human values could act in harmful or unpredictable ways. Existential risk: Some experts warn that superintelligent AI – if it surpasses human intelligence – could pose a risk to humanity if not properly aligned and controlled.”
Over-reliance on AI may erode critical skills, including human expertise, judgment, and capabilities, the system informed me. And blind trust in AI systems can lead to complacency and “critical errors in fields like healthcare, aviation, or defense.”
“Mitigating these risks involves responsible development, transparent regulation, public awareness, and ensuring human-centered AI design. The benefits of AI are vast, but only if its development is handled with care,” it concluded.
IN ISRAEL HAYOM last weekend, psychologist Ran Puni interviewed Eran Katz, an author and memory artist. Among other things, he holds the Guinness record for “Best Memory Stunt,” having recited 500 numbers forward and backward after hearing them only once. At 60, he gives workshops and demonstrates memory-improving techniques for seniors.
“Memory defines us, makes us what we are, and without it, we are nothing,” Katz observed. “Humanity is becoming less intelligent because we have become addicted to technology and AI engines.
“In the past, using memory was much more natural; we would simply use memory because we had no other choice. The Romans, our ancient sages, and even the first generations in the Land of Israel did not have smartphones or laptops. They had to rely solely on the brain to remember. Today, we are in a different situation.”
Hadera Magistrates’ Court Judge Ehud Kaplan this month threw out a case after concurring with the defendant’s lawyer, who “suspected the police response was generated by ChatGPT. The cited legal clauses don’t exist.” The police admitted “there was a mistake.”
AI is programmed to be nice; it needs to please to keep humans engaged. After presenting me with the list of hazards, the program politely added: “Let me know if you’d like more details on any specific danger.”
I decided to call it a day – before the artificial crystal ball could show me something too futuristic.