The Nexus of Artificial Intelligence (AI) and Misinformation

Speakers:

Yalda Aoukar, Managing Partner, Bracket Capital / Bracket Foundation
Michael Crow, President, Arizona State University
Igor Jablokov, CEO & Founder, Pryon
Reggie Townsend, Vice President, Data Ethics Practice, SAS
Misha Zelinsky, War Correspondent, Columnist, Australian Financial Review

“The rise of deep fakes and similar technologies is seen as a significant challenge to maintaining the integrity of information in the digital age.”

– Misha Zelinsky

“It’s just bad business to tell people that I built something, it’s gonna kill you, and there’s nothing you can do about it, right? That just doesn’t make a lot of sense from a business perspective.”

– Reggie Townsend

“The rise of deep fakes and similar tech poses a significant challenge to preserving information integrity in the digital age.”

– Yalda Aoukar

“The web as we know it died last year.”

– Igor Jablokov

“We refuse to look at the negative implications. All technologies have negative implications. All new developments have negative implications.”

– Michael Crow

Key takeaways & next steps:

  • Artificial intelligence (AI), particularly generative AI and deep fakes, poses a significant threat of spreading misinformation. Generative AI can produce realistic-looking content even when it is entirely fabricated, while deep fakes are manipulated media that often convincingly impersonate real people. The concern is that such content is hard for humans to identify as fake, enabling confusion and manipulation of public perception. The resulting proliferation of falsehoods erodes trust in facts and can disrupt democracy.
  • A prevailing fear is that the inability to distinguish authentic from fabricated content could foster widespread cynicism and impede democratic dialogue. Deep fakes and similar technologies therefore represent a substantial obstacle to safeguarding the integrity of information in the digital era.
  • The tech sector, especially AI, remains lightly regulated, and AI's rapid advancement raises doubts about the industry's capacity to self-regulate. The absence of regulatory oversight is seen as a problem, particularly for potentially harmful AI applications. While there are calls for increased regulation, it is acknowledged that regulating emerging technologies like AI is complex and has no straightforward solutions.
  • Advocacy is essential to promote government oversight and regulation of AI technologies. Policymakers should collaborate with experts to formulate comprehensive regulations that effectively address the complexities AI presents.
  • We need to promote ethical AI development practices that prioritize transparency, accountability, and responsible use of AI technologies.