Part 2: Dr. Susarla on AI Challenges in Communication, Regulation and Accountability
Feb 11, 2025
In part 2 of our interview with Dr. Susarla, she delves into how communication channels and societal structures are being reshaped by AI, and the importance of collective action to address these changes. To read part 1, where she covers ethical and regulatory concerns, click here.
How do you think communication channels will evolve with the rapid increase in AI-generated content (incl. deepfakes) and associated threats?
Important points:
The big challenge going forward will be building a validation layer on top of current verification methods (e.g., KYC), since it is increasingly hard to differentiate between AI- and human-generated content.
We'll also be running out of training data, as we're all drawing on the same data to build ever-bigger benchmarks and models.
Where does all the value come from? To realize the full value of AI systems, we need much more than we have right now, and that requires deep domain knowledge. We need people building new tools for new processes, and we need ways to audit and measure their effectiveness. It's very unglamorous, but that's where the value comes from.
If anyone can create these digital avatars very easily, that is not necessarily a good thing, and it can amplify bias. Our legal system is simply not designed for the pace at which we are now creating and consuming content.
So I understand it's a cat-and-mouse game, where regulation hasn't caught up with the amount of AI content we're all generating. Do you think the solution needs to come from businesses or governments?
Transparency is crucial. For example, YouTube requires labels for AI-generated content, but voluntary transparency measures can only go so far. If we look at how ESG initiatives have panned out, their success is debatable, which raises the question of whether similar voluntary approaches would work for AI.
I support government mandates, even though they’re often unpopular, because the significant negative externalities of AI have been largely ignored. In just 20 years, technology has evolved so rapidly that society hasn’t fully grasped the unforeseen consequences. Without regulatory catch-up, the unchecked use of algorithmic recommendations in daily life could lead to substantial costs. We need a better understanding of these risks and proactive measures to address them.
What steps can be taken to ensure greater transparency and accountability in the development and deployment of advanced AI systems?
The key to addressing AI-related challenges lies in creating more informed citizens. People need to question their representatives in Congress and the Senate and advocate for greater transparency. Initiatives by platforms like Meta aimed at increasing transparency show promise, but they require coordination to be effective.
The Biden administration has made progress by convening tech CEOs to agree on guiding principles, and international collaborations have led to some shared standards. However, addressing these issues requires a combination of national and international efforts.
(end of interview)
© 2024 Project Karnā Inc.