YU News

Straus Center Course Spotlight: Ethics in Artificial Intelligence

In the Fall 2022 semester, Rabbi Netanel Wiederblank, Maggid Shiur at RIETS and author of the Illuminating Jewish Thought series (Maggid Books and YU Press, 2018-2020), co-taught a course at Yeshiva College with professional engineer and rabbinic scholar Rabbi Dr. Mois Navon on "Ethics in Artificial Intelligence." YU News sat down with Rabbi Wiederblank to reflect on the course, which was offered in partnership with the Zahava and Moshael Straus Center for Torah and Western Thought.


The course syllabus set out to tackle the ethical issues inherent in the technologies spawned by big data and artificial intelligence, yet you turned to Talmudic sources for answers. How do rabbinic scholars derive modern computer ethics from the Talmud, a nearly 1500-year-old compendium of Jewish law?

We turn to the Talmud in two ways. First, the discussions of the Talmud teach us values that address modern questions. In fact, we were able to see how biblical and Talmudic sources could be invoked to address almost all of the questions we considered in the course. For example, the Talmud's discussion of privacy in the first chapter of Bava Batra sheds light on the question of a company's rights and responsibilities when collecting big data.

Second, and this is most remarkable, Talmudic sources occasionally address modern questions directly. One example is the trolley problem, which is relevant to the programming of self-driving vehicles. Another example is the status of humanoids. For centuries, Torah scholars have discussed the status of humanoids: creatures created through the Sefer Yetzirah (Book of Creation). While most commentaries understand this as a form of applied mysticism, some, like the thirteenth-century Rabbi Menachem Meiri (Sanhedrin 67b), believe it refers to using technology to create a synthetic human-like organism. What emerges from Jewish tradition is a whole literature about the legal status of the humanoid (sometimes called a golem). The Talmud (Sanhedrin 67b) considers whether golems can be killed, and later thinkers debate whether you can harm them, whether they can be counted toward a minyan (quorum for prayers) and whether their creator is liable for damage they cause. Astonishingly, these are the very questions ethicists are currently grappling with regarding AI (well, not the minyan question).

In addition to halakhic responsa, students also read a wide range of non-rabbinic approaches to ethics: everything from Plato to Nietzsche. What did students gain from studying these alternative ethical systems?

While we certainly turn to the Torah for guidance in the moral sphere, the Torah itself teaches that humans can frequently use their God-given intellect to determine the propriety of particular actions. For example, Sodom is destroyed because "she did not strengthen the hand of the poor and needy" (Yechezkeil 16:49). But who told them to give charity? They were not blamed for violating any of the seven Noahide laws. We must conclude that they were accountable even for things they were not specifically instructed to do, because a human being, to some degree, is capable of independently determining what is right and wrong. Accordingly, studying other systems of morality often helps us better understand what is right. Sometimes studying non-Torah thinkers does the opposite: we can better understand the nuance and novelty of the Torah when we contrast its divine perspective with contrary views. Either way, the exercise helps us figure out what to do.

There has been a lot of debate about whether AI (artificial intelligence) systems like Google's LaMDA should be considered sentient. How was this question addressed in your course?

This past summer, Google fired engineer Blake Lemoine shortly after he publicly shared his belief that LaMDA is sentient. Most academics disagree with Lemoine, believing that LaMDA, which stands for Language Model for Dialogue Applications, is not sentient but merely imitates and impersonates humans as it responds to prompts. It sounds human because it is programmed to sound human, but that doesn't mean it is sentient. Even if LaMDA isn't sentient, though, perhaps the next generation of humanoids will be. Indeed, teams of brilliant scientists are working very hard to create a conscious robot.

And here is where the true problem lies: there is no scientific test for sentience. In fact, there isn't even an agreed-upon definition of the term. There is an even bigger problem, however: society doesn't know how to address the mystery of our humanness in the first place. While we may think scientists or programmers can answer this conundrum, that is not the case. The question of what makes us human is fundamentally a non-scientific question. Science can answer all sorts of questions, but it cannot answer all questions. It can't, for example, tell us what love, happiness or goodness means. And it can't define what it means to be human. One can suggest all sorts of answers to that question (sentience, intelligence, consciousness, awareness, reflection, et cetera), but these are not scientific solutions, which leaves us wondering what makes one answer better than any other.

That is why it is so important to turn to Torah for answers. In class we spent a lot of time trying to determine the Torah's approach to this question and discovered that Torah sources present numerous features that define us as human. These include being ensouled, having meaningful free will, the ability to conceptualize, being creative, the ability to handle complexity and contradiction, having emotions and the capacity for altruism.

How do those definitions play out in a practical sense?

For the time being, I don't believe that machines have these qualities, though it's sometimes hard to tell, since the output of programs like ChatGPT is sometimes breathtaking. However, instead of depressing us, this should inspire us to accentuate the parts of ourselves that are uniquely human. Put differently, how many of us actualize our humanity? Do we seek spirituality, or are we drawn after the transient? Do we express our free will, or do we live lives guided by habit, allowing ourselves to be governed by nature and nurture? Do we connect to the corporeal, or do we seek a relationship with God? Do we imitate God by becoming creators and expressing creativity, or do we just rehash the same old things? Do we acknowledge the complexity of life, or do we prefer a black-and-white reality where there are simple answers to complex questions? Has our drive for efficiency succeeded in numbing our emotions, making us dispassionate and mechanistic? And finally, do we truly care about others, or are our acts of kindness merely meant to quiet our conscience, secret gifts to ourselves? That machines can do so many things that seem human forces us to better appreciate what it really means to be human. The incredible progress of AI reminds us that it is our job to actualize our humanity, to become truly human; otherwise, we are no better than machines.

You co-taught this course with Rabbi Dr. Mois Navon, a rabbinic ethicist with professional experience in the field of engineering. What did Rabbi Navon bring to the table?

Rabbi Navon's engineering expertise, especially his experience working at Mobileye, allowed him to show us how these questions actually play out in the real world. This, combined with his mastery of both Jewish and secular ethics, made for an incredible experience.

What do you think will be the next big thing in AI?

I predict that I will be shocked. But I am confident that whatever it is, we will be able to look to the Torah for answers.


The course syllabus is available online here. You can learn more about the Straus Center by signing up for our newsletter here. To learn more about the Straus Scholars program, click here.