In Conversation with Dr. Anjana Susarla: Ethics, Regulations, and the Future of Responsible AI
Feb 7, 2024
Dr. Anjana Susarla is the Omura-Saxena Professor in Responsible AI at the Broad College of Business. With degrees from IIT Chennai, IIM Calcutta, and a Ph.D. from the University of Texas at Austin, her research focuses on responsible AI, social media analytics, and economics of Artificial Intelligence. We were honoured to have her with us for an interview. The rapid advancement of artificial intelligence, particularly in areas like generative AI and deepfakes, has raised significant ethical and regulatory concerns. In this segment, Dr. Anjana discusses how these challenges are evolving, and the role of different stakeholders in fostering responsible AI.
1. Given your work in responsible AI, how do you see ethical considerations around this technology evolving in the next few years, especially in the context of technologies like deepfakes?
Ethical considerations will need to be addressed through a variety of mechanisms including:
1. Legislative actions
2. Increased awareness, and
3. Formation of industry consortiums and citizen advocacy groups
Different countries have taken different approaches. For example, in the United States, we don't have comprehensive federal legislation on deepfakes, but various states have tried to implement some protections. One initiative proposed by the industry is the creation of a consortium to promote digital watermarking standards. Companies like Adobe have shown a lot of leadership in this area. In India, there have been many initiatives focused on responsibility in AI, but not specifically deepfakes. Ultimately, one key aspect of future regulation will involve codifying and standardizing a set of principles to guide the responsible use of AI.
2. Regarding the misuse of generative AI technologies, are there any global best practices, or any U.S.-specific practices, you would recommend? How do you view regulation in this area in general?
I think everyone understands that some sort of action is necessary. So the question is: what is the best approach to adopt? Can we agree on a common set of principles and minimum guarantees, and can we implement them in a responsible manner?
European Union:
Now, the European Union has unveiled extensive AI regulations, some of which will take effect very gradually. I think in the US, we don't have an appetite for those kinds of measures yet. The European approach may be seen by American companies and content creators as a little too restrictive.
United States:
NIST has developed a set of benchmarks, such as the AI Risk Management Framework, under the leadership of the Biden-Harris administration. These frameworks were created through extensive consultations, including roundtables and AI advisory committees, to establish guiding principles for responsible AI use.
In my mind, we already have a very comprehensive system, but whether the industry will adopt it widely remains to be seen. Responsible AI does not exist in a vacuum. I feel that industry is ahead of academia in understanding some of the dangers of AI deployed without any kind of guardrails.
3. In your work you mention the importance of algorithmic literacy and highlight the impact of AI-generated content on information verification issues. What specific measures should we take to tackle these risks? Can identity compromise, misinformation and similar issues be mitigated by approaches like red teaming?
Red teaming is accepted as a very good practice; OpenAI employs red teaming as well. When it comes to algorithmic literacy, the question is: what is the right way to develop these skills? Some people would say that we need curricula starting as early as middle school. I've seen initiatives where educators have worked with middle school teachers to develop curricula centered on algorithmic literacy and to bring in some of these best practices.
Meta, I think, is prioritizing a lot of AI-generated avatars. So, what does that do? What happens to human engagement, and how do we measure it? Then the question becomes: where are the new sources of value being created? How do we harness these newer ways of working with AI? Those are all open questions.
Stay tuned for part 2, where we delve more into how communication channels and societal structures are going to be reshaped by these emerging technologies.
© 2024 Project Karnā Inc.