AI Ethicist Addresses Safety and Oversight Concerns
Artificial Intelligence is advancing at a rapid pace, and some top researchers are calling for a pause. An open letter issued by the Future of Life Institute argues for a 6-month pause on the training of AI systems more powerful than GPT-4, or a government-imposed moratorium if the key labs will not comply voluntarily. Their stated goal is to develop shared safety protocols, overseen by independent experts, that would ensure systems are safe beyond a reasonable doubt.

Baobao Zhang is a political science professor at Syracuse University’s Maxwell School of Citizenship and Public Affairs. She is also a Senior Research Associate with the Autonomous Systems Policy Institute, specializing in the ethics and governance of Artificial Intelligence. She answered some questions about the state of AI research and the concerns raised by the open letter.
The letter refers to an “out-of-control race” to develop technology that no one can predict or reliably control. How significant is this concern?
There is a race between major tech companies, and even smaller start-ups, to release generative AI systems, including large language models, text-to-image models, and multimodal models that work with several different types of media. The main concern is that these models are deployed across different settings without sufficient safety audits and guardrails. For example, earlier this year, we saw an early version of Bing’s chatbot powered by ChatGPT threaten, emotionally manipulate, and lie to users. More recently, we have seen many people fooled by synthetic images (e.g., the Pope wearing a stylish puffer jacket) generated by Midjourney, a text-to-image AI system. Because these generative AI systems are relatively general-purpose, it is much harder for those developing or deploying them to know what risks they could pose.
The letter also raises a number of ethical concerns about what we should allow machines to do. Do you feel there is enough ethical oversight at the companies where this technology is being developed?
I don’t think there is sufficient ethical oversight at the companies where these technologies are being developed. Given the economic pressures these companies face, internal AI ethics teams may have limited power to slow or stop the deployment of AI systems. For example, Microsoft just laid off an entire AI ethics and society team that was supposed to make sure its products and services adhere to its AI ethics principles. At this point, I think ethical oversight should come from governmental regulation and public scrutiny. I think the European Union’s Artificial Intelligence Act is a step in the right direction because it scales regulatory scrutiny with risk. Nevertheless, we need to rethink how to classify risk for more general-purpose AI systems, where some applications are high-risk (e.g., generating political news content) and others are low-risk (e.g., generating a joke for a friend).
What could a six-month pause on AI experimentation accomplish, and can we expect that enough governments and researchers would abide by that to make an impact?
I agree that we need to slow down the development and deployment of powerful generative AI systems. Nevertheless, a 6-month pause on AI experimentation is not particularly helpful by itself. We have to consider longer-term technical and governance guardrails for the development of more general-purpose AI systems. Furthermore, how can we ensure that AI developers abide by the 6-month moratorium? At a minimum, we would need to create a scheme to monitor how these AI developers use computing resources, or establish a whistleblower protection program for those who want to disclose that their employer is violating the moratorium.
What should AI researchers consider as they push forward with new technology, and is there anything the general public should keep in mind as they see the headlines?
AI researchers should consider working with social scientists, civil society groups, and journalists as they develop new models. It’s critical that we study and anticipate how powerful AI systems can impact society before we deploy them. It’s a confusing time for the general public because experts disagree about whether we are developing AI systems that pose an existential threat to humanity. But there is expert consensus that generative AI could be hugely impactful, if not disruptive, to how we work and relate to each other now and in the near future. One of the risks the open letter noted is the proliferation of “propaganda and untruth.” Harms from misinformation and disinformation are not new, but generative AI would allow bad actors to greatly scale and personalize their campaigns.
To request interviews or get more information:
Chris Munoz
Media Relations Specialist
cjmunoz@syr.edu