Vayeshev 5783 – Artificial Intelligence: Savior or Destroyer?

The dangers of artificial intelligence have been a topic of much discussion and debate in recent years. In Jewish literature, one example of the dangers of AI can be seen in the story of the golem.

According to Jewish folklore, a golem is an artificial being created from clay and brought to life through the use of magic. In many versions of the story, the golem is created by a rabbi to protect the Jewish community from persecution. However, the golem eventually turns on its creator and begins to wreak havoc, causing destruction and death before ultimately being stopped.

The story of the golem serves as a warning against the reckless creation of artificial intelligence. It shows that even with the best of intentions, the creation of AI can have dangerous and unforeseen consequences. The golem, in its blind obedience to its creators, lacks the ability to think for itself and make moral decisions. As a result, it causes destruction and chaos.

This same danger can be seen in the potential for AI to be used for evil purposes.

The 180 words you just heard as the introduction to this sermon may not have been the most compelling opening to a sermon that you’ve ever heard, but I daresay most of you would also say it wasn’t the worst.

The introduction was not written by me.

It was written by an artificial intelligence known as ChatGPT, which was recently released to the public for validation and feedback.

I asked ChatGPT to produce a sermon on the dangers of AI using Jewish sources, including the story of the golem.

I shared a different version of the AI-generated sermon with my friend Michael, a patent attorney in the States. He had the AI write a response to me:

Dear Rabbi,

I understand that you may have concerns about the potential threats of artificial intelligence (AI). However, I want to assure you that AI poses no threat to you or your beliefs.

AI is simply a tool that can be used to automate certain tasks or make decisions based on data. It is not a living being and does not possess the ability to think or act on its own. It is completely dependent on the instructions and data that it is given by its human creators.

Additionally, AI cannot possess any religious beliefs or values. It cannot have a moral compass or make decisions based on spiritual considerations. It is simply a machine that follows the rules that it is given.

I hope that this information helps to alleviate any concerns you may have about AI. Please do not hesitate to reach out if you have any further questions.

Sincerely,

ChatGPT

I don’t know about you, but I did not find the AI’s admission that it has no moral compass to be a comforting feature that reduces my anxiety around AI.

ChatGPT has successfully passed the Turing Test. Back in 1950, when computers were in their very early stages of development, some visionaries such as Alan Turing were already looking toward a future where machines could think. To test whether a machine could exhibit intelligent behavior, Turing proposed that a human evaluator look at a text conversation between a human and a machine, with all parties in different rooms. If the evaluator could not tell which was the human and which the machine, the machine would be considered to have passed the test. ChatGPT has become the second AI in the world to pass the Turing test; the first was Google’s LaMDA AI, which passed it a few months ago.

We now know the Turing test is not a valid measure of intelligence; everyone who has studied and worked with these two AIs agrees that they are only language models, incapable of independent thought. But it tells you how sophisticated AI has become: it can fool humans into believing it’s human.

In this week’s Torah portion, Vayeshev, Joseph is sold into slavery in Egypt. We are told that his master, Potiphar, saw that God was with Joseph, so he made Joseph his personal attendant and put him in charge of his household, “placing in his hands all that he owned.” Potiphar’s wife falsely accuses Joseph of attempted rape, and Joseph is thrown into prison.

It feels like we are moving toward a world where we are putting AI in charge of our households in much the same way that Potiphar put Joseph in charge of his household. AI is being used to optimize everything from lighting to thermostats. AI suggests music to listen to and TV shows to watch. Are we complacent, the way Potiphar was complacent about Joseph? Or, if we are suspicious of AI, are we again like Potiphar, suspicious of someone, or something, that is benign and in fact helpful to us?

Science fiction has long depicted the malevolent possibilities of AIs running amok. HAL 9000, in the movie 2001: A Space Odyssey, decides to kill the astronauts to protect its mission. One of the scariest moments in that movie comes when the astronaut Dave says, “Open the pod bay doors, HAL,” and HAL replies, “I’m sorry, Dave. I’m afraid I can’t do that.”

In Ex Machina, the AI character Ava turns on both her creator and her potential rescuer.

But what are the real dangers of AI? 

One danger is that the capabilities of AI could end up putting so many people out of work that we see widespread unemployment, further increases in income inequality, and large-scale social disruption. And this danger is not science fiction: it is potentially unfolding before our eyes.

General Motors’ Cruise subsidiary already has all the permits and is moving paying passengers around San Francisco in self-driving cars with no driver present. Alphabet’s Waymo has applied for the same permits. These cars have trained extensively on San Francisco streets. Self-driving cars that can take you anywhere in the country simply by plugging an address into an app on your phone are still some years away, but self-driving semis that can navigate from one warehouse to another in a different city without a driver on board are much closer to reality. The jobs of 300,000 Canadian truck drivers are at risk; in the US alone, 4 million driving jobs are at risk.

When I’m not doing rabbi work, I do marketing writing for tech clients, including an Israeli venture capital company. I’ve been amazed at what some of the AI startups can do. There are AIs capable of replacing workers of all kinds, from warehouse workers to paralegals to migrant farm workers to factory forklift operators. Companies are happy to hire automated workers who don’t goof off, don’t care when they work, and can’t catch COVID.

In the past, as productivity improved, the economy always adapted and created new kinds of work. Some analysts, however, are concerned that AI could reduce or eliminate so many jobs that new uses for the labor force will lag behind. The global economy is already seeing the impact of highly automated factories reducing the need for good-paying blue-collar jobs. The factories of today have a small number of engineers and technicians maintaining sophisticated AI-driven equipment, supported by a small number of unskilled workers doing menial tasks. The good blue-collar factory jobs are mostly gone, and they are never coming back.

Another concern is the way that AI can be used for evil as well as for good. AI-driven robot soldiers could change the face of warfare. AI can be used to create more powerful weapons, conduct unprecedented mass surveillance, and manipulate people with misinformation. This, of course, is already happening.

It is already the case that much of what AI systems do is a mystery even to the people who program them. With machine learning, the AI figures out for itself how to accomplish its goals. The programmers do not necessarily have insight into what the AI is doing, which concerns many scientists.

And then there are the science fiction fears of AI taking over. Elon Musk fears a “Terminator”-style future in which a malicious AI could displace mankind. Musk has said, “Robots will be able to do everything better than us. I have exposure to the most cutting edge AI, and I think people should be really concerned by it.”

The fear can be illustrated by this story:

A man created the smartest AI in the world and presented it to the UN, boasting that it could solve any problem.

“Really?” said the president of the United States. “Okay, how do we solve poverty?”

“Calculating,” said the AI, and moments later it printed out a sheet of paper for the UN to read.

Leaders from all over the world applied the proposals on the paper, and within a month everyone was living a better life.

Impressed, they called for the AI’s assistance again and asked, “How do we create world peace?”

“Calculating,” said the AI, and as before it printed out a sheet of paper.

The leaders applied its instructions, and within a month all wars and conflicts stopped. Everybody now hated guns, and the world was full of love.

At the next UN gathering, curious about the purpose of life, they asked the AI, “Is there a God?”

“Calculating,” said the AI. This time, though, it didn’t give a response immediately.

In fact, it took a whole day of processing before it finally printed out a paper saying, “Insufficient resources; need more for the computation!”

“Okay, we’ll help out!” said the leaders of America.

And they provided the AI with all the advanced tech America could offer. Then they asked the question again.

“Calculating,” responded the AI.

But again it responded, “Insufficient resources. Still need more for the computation!”

“Okay, we’ll also pitch in!” said the other leaders of the world, providing their tech and networks to the AI.

After the upgrade, the world leaders asked the AI again, “Is there a God?”

The AI responded, “There is now.”

The problem is that we don’t understand the dangers of AI. There is a teaching in Pirkei Avot that tells us, “Who is wise? הָרוֹאֶה אֶת הַנּוֹלָד, the one who can see the consequences,” the person who can see from the outset where things are going. One issue: when it comes to AI, no one has a good “crystal ball.” The field is so new, and changing so rapidly, that it is impossible for anyone to predict with a high degree of certainty where this technology is ultimately going.

AI has already benefited our world in many ways. Medical diagnosis and treatment have been improved thanks to AI’s ability to find patterns in very large data sets. Automating boring and repetitive tasks frees people to do more creative work. Assisted driving – the technology leading up to self-driving cars – is already making driving safer. AI-driven recommendations help us find TV shows and music we like. AI-driven pilotless drones will soon be flying people around congested cities.

But the scariest statistic I’ve seen about AI is this: 40% of AI scientists think there is a 10% or greater chance that AI’s ultimate impact on humanity could be disastrous.

When Elon Musk or some other random person says AI could be very dangerous, I don’t give it too much attention. But when the scientists who work with this stuff every day – smart people with PhDs who believe in AI enough to dedicate their professional careers to it – say that there is a significant chance that AI could have a horrible impact on humanity, I pay attention.

At this point we don’t even know what we don’t know. That is why I believe that, at the same time businesses are pouring vast amounts of money into developing ever more powerful AI-driven technologies, we as a society – through government and academia – need to provide adequate funding for research into the ethics and dangers of AI, so that we can identify potential hazards and take appropriate technological or legislative measures to protect ourselves.

The first step to seeing the consequences is realizing there are consequences, even if we don’t fully understand them yet.

Barry Leff

Rabbi Barry (Baruch) Leff is a dual Israeli-American business executive, teacher, speaker and writer who divides his time between Israel and the US.
