Webinar Summary By JoAnne Wadsworth, Communications Consultant, G20 Interfaith Forum
On Wednesday, July 28th, the G20 Interfaith Forum (IF20), together with eleven partner organizations, held the sixth and final installment of its “Ahead of the 2021 Italy G20 Summit” webinar series: “A Digital Revolution? Ethical Implications and Interreligious Engagement.”

Partner organizations included the Fondazione per le scienze religiose (FSCIRE); the Baha’i International Community; the European Evangelical Alliance; the European Platform Against Religious Intolerance and Discrimination; the Fondazione Bruno Kessler; the International Center for Law and Religion Studies at Brigham Young University; the King Hamad Chair for Inter-Faith Dialogue and Peaceful Co-existence at the Sapienza University of Rome; the Study Center and Magazine Confronti; The Church of Jesus Christ of Latter-day Saints – European Union & International Affairs Office; the University of Siena; and the World Faith Development Dialogue.

Panelists included Prof. Noreen Herzfeld, Professor of Science and Religion at Saint John’s University and the College of St. Benedict; Dr. David Zvi Kalman, Scholar in Residence and Director of New Media at the Shalom Hartman Institute of North America; Dr. Biliana Popova, Assistant Professor at the School of Humanities and Social Sciences of Al Akhawayn University, Morocco; Prof. Robert Geraci, Professor of Religious Studies at Manhattan College; Dr. Branka Marijan, Senior Researcher at Project Ploughshares; Prof. Marco Ventura, Director of the FBK Centre for Religious Studies; and Prof. W. Cole Durham, President of the G20 Interfaith Forum Association. Dr. Pasquale Annicchino of the Bruno Kessler Foundation moderated the discussion.
After Annicchino introduced the panelists and the day’s discussion topic, he asked each speaker to offer brief comments on the Innovation and Technology Working Group’s recent policy brief, prepared ahead of the G20 Interfaith Forum in Bologna, Italy, this September.
Prof. Marco Ventura
Ventura offered background on this final episode of the “Ahead of the 2021 Italy G20 Summit” series, emphasizing that the initial vision was to progressively build a foundation around the core of what the G20 Interfaith Forum stands for: a dialogue between religious, private, and political actors.
“Progress in this series was understood as bringing the different actors and the public closer and closer to the actual work of the G20. This episode is closest, in a sense, to a conversation where we all take part—the public, the invited experts, and the organizers—in an integrated, interactive conversation.”
Dr. Branka Marijan
Marijan outlined several key points from the working group brief, including:
Prof. Noreen Herzfeld
Herzfeld reinforced Marijan’s point that AI should serve only as a supplement to and support for human decision-making, drawing on examples from the autonomous weapons sector. She noted that AI can process information at speeds far beyond human capability. With autonomous weapons in particular, the speed at which AI operates will increase the tempo of the battlefield to a level humans may not be able to keep up with, raising the possibility that the processing speed of AI itself may need to be regulated in the future.
Dr. David Zvi Kalman
Kalman focused his comments on religious communities and their influence on the AI issue. He argued that while the current policy brief calls on “established cultural values” to protect society, it isn’t reasonable to expect or assume that values are already established on these issues. Religious communities don’t already know how to react to AI developments and other technological breakthroughs. For them to be relevant, they must do more than simply add their voices to the existing calls for caution.
“AI’s current narrative centers on how these technologies represent a breakthrough for humankind—but this narrative isn’t fully true. Religious communities need to develop their own narratives about how AI fits into the human story.”
Dr. Biliana Popova
Popova centered her comments on the ideological framework of the current policy brief, which appears closest to the principles of humanism as understood in liberal democracies.
“Humanism says that human knowledge is the ultimate judgement, but I foresee two issues with that. One—whenever there is a crisis, the average individual knowledge is brushed off in favor of expert knowledge, giving only experts the legitimacy to make decisions. The other clash is the fact that this policy brief calls for a global dialogue, yet sets a tone that demands that other parties either fully agree or just not participate in the discussion. The thought that average human knowledge is a legitimate judge isn’t an accepted concept in all religions, which could prevent us from achieving a truly meaningful dialogue within the current framework.”
Annicchino then shifted the focus of the discussion to on-the-ground policy considerations regarding AI, asking each of the panelists to offer specific recommendations.
Prof. Noreen Herzfeld
Herzfeld emphasized the need to enforce anti-trust laws, noting that two-thirds of AI research money is currently being spent by only six or seven large corporations, creating a very real danger that a small number of actors will control AI. She also said that, though a complete ban on autonomous weapons would be best, a more realistic safeguard would be to ensure a set minimum level of human involvement in the decision-making loop.
“The way AI is manipulating people needs to be brought to light, and religious communities can play an important role in doing this—re-framing the story around AI and bringing education about its ramifications down the chain to the people.”
Dr. David Zvi Kalman
Kalman focused on the shared principle across faiths that human life has special value, and offered four recommendations to protect that value:
Dr. Biliana Popova
Popova commented on how physical space no longer provides people with the protection it once did.
“Physical space and temporal protections are completely out of the equation in AI development. I would like to see a policy that tries to translate some of these physical and temporal protections we have as human beings into algorithmic reality.”
In addition, she said that education needs to shift from its current focus on the technical how-to of these systems toward teaching ontological principles and civic engagement, in order to ethically protect individuals and communities.
The Q&A section of the discussion covered a variety of points, including:
In conclusion, Annicchino thanked the organizers and acknowledged the cooperation between institutions that made the series possible, then allowed time for final remarks from Geraci and Durham.
Prof. Robert Geraci
Geraci said that society may be at an inflection point in relation to AI, with public awareness growing around the importance of these issues.
“We need to find ways to scale solutions more quickly than we scale problems. The idea that different nations and companies are all trying to beat each other in the AI game is a framework that causes humanity to lose. In these conversations, what we’re trying to do is figure out how to make our degree of cooperation greater than our degree of competition—and to better the world by doing so.”
Prof. W. Cole Durham, Jr.
At the conclusion of the series, Durham recognized the excellent work this discussion represented in the lead-up to the G20 Interfaith Forum in Bologna, emphasizing the impressive scale of the teamwork involved and the crucial nature of these conversations as the world strives to emerge from the COVID-19 crisis and return to normal.