💻 Week 10 - Class Roadmap (90 min)

2024/25 Autumn Term

Author

The DS101A Team

Published

06 December 2024

AI and the information environment

Welcome to our week 10 seminar/lab class for DS101A.

In this class we will look at the effect of AI on our perception of reality.

When AI is used to convey information to the individual, we consider:

  • Identity
  • Rights
  • Control - a.k.a. “mimicry”
  • The Quantified Self - consent, measurement and estimation

There are also considerations for society as a whole:

  • Mediating public discourse
  • Trust & degradation of the information environment
  • Psychology & behaviour

Preparation

To prepare for the class, you can watch a short clip from each of these two videos.

First, one in which Sir David Attenborough reflects on the digital re-purposing of his likeness:

And second, one in which Eric Schmidt – former CEO and Chairman of Google – talks about how it is possible to repurpose your identity to configure your world view:

Watching both should take around 10 minutes of your time.

Step 00 - Identity: personal data 🐾 (15m)

👨‍🏫 Teaching moment (5m)

Your tutor will ground this class in a brief discussion of different sources and forms of data.

  • “What is data?” – revisited

Discussion (10m)

For the individual, they are their data (their “digital footprints”) – but considerations of knowledge, consent and control have become much more acute with the onset of powerful AI.

Generative AI has made it possible to leverage tiny “snapshots” of data to synthesise completely realistic scenes, scenarios and narratives – whole new worlds – to the extent that, within the digital realm, the individual has become a puppet, open and available to whoever holds the rights to “their” data.

“Digital mimicry” becomes possible for whoever holds the rights to personalised data downstream.
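As a rough warm-up for the questions below (in particular “How is personalisation made possible?”), here is a minimal sketch of the basic mechanics in miniature: a user “profile” is built from past clicks, and unseen content is ranked by similarity to it. All of the item names and topic scores are invented for illustration; real systems use far richer signals and learned representations, but the shape is the same.

```python
# Toy personalisation: footprints in, a profile out, a ranked view of content.
# Every number and item name here is made up for illustration.
import numpy as np

# Each item gets hand-made scores on three topics: [politics, sport, celebrity]
catalogue = {
    "election analysis":   np.array([0.9, 0.0, 0.1]),
    "football results":    np.array([0.0, 1.0, 0.0]),
    "royal family gossip": np.array([0.1, 0.0, 0.9]),
    "transfer rumours":    np.array([0.0, 0.7, 0.3]),
}

# The user's "digital footprint": items they engaged with in the past.
clicked = ["football results", "transfer rumours"]

# Profile = the average of the vectors for the clicked items.
profile = np.mean([catalogue[name] for name in clicked], axis=0)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank everything the user has not yet seen by similarity to their profile.
unseen = {name: vec for name, vec in catalogue.items() if name not in clicked}
ranking = sorted(unseen, key=lambda name: cosine(profile, unseen[name]), reverse=True)

print(ranking)
# ['royal family gossip', 'election analysis'] – the celebrity item outranks
# politics because "transfer rumours" gave the profile a small celebrity component.
```

Even this handful of clicks already steers what the user sees next, which is what makes the questions of consent, ownership and control below so pressing.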

  • “Who owns the rights to your personal profile?”

  • “How is personalisation made possible?”

  • “How can personalisation be used for good?”

  • “How can personalisation be used for harm?”

  • “Is mimicry ever justified?”

Background:

Step 01 - Media communications 🎤 📺 (15m)

Where data can be used to create useful illusions, there is, for society, the role of the mediator – and there is also sophisticated targeting.

AI makes it possible to influence the message in powerful new ways. In this class we will revisit the old debate around free speech, propaganda and censorship, to consider whether and how we should upgrade our thinking.

Discussion (15m)

  • “Is this the same debate around censorship vs. free speech?”

  • “What, if anything, has changed?”

Mediation here means control over the message. AI can be used to create scenarios from seeds that don’t exist, so we have to consider the effect on the individual and on society when the world we inhabit is confused with a world that looks like it, but lacks any grounding in actual events.

Background:

🍵 Break (~5 min)

Step 02 - Mimicry & personalised data 🎎 (15m)

In addition to perfect duplication – the digitisation of the real-world substrate – an AI model is able to incorporate a style, a tone, a “mental” world model, or even a whole personality. AI can be considered a new form of medium: data can be copied, it can be translated, and it can also be “coloured”.
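To make the idea of “colouring” data concrete, here is a minimal, purely illustrative sketch of text style transfer with a general-purpose generative model: the same factual content is rewritten in a requested tone. The package, model name and prompt are assumptions for illustration, not part of the course materials, and any instruction-tuned model could play the same role.

```python
# Illustrative sketch only: rewrite a piece of text in a different tone
# ("colouring" the message) while keeping the facts unchanged.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set,
# and the model named below is available to you.
from openai import OpenAI

client = OpenAI()

original = "The council voted 7-2 to close the local library on Sundays from January."
target_style = "an upbeat social-media post aimed at young readers"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute any instruction-tuned model
    messages=[
        {
            "role": "system",
            "content": "Rewrite the user's text in the requested style. "
                       "Keep every factual claim unchanged.",
        },
        {
            "role": "user",
            "content": f"Style: {target_style}\n\nText: {original}",
        },
    ],
)

print(response.choices[0].message.content)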

Discussion (15m)

  • “How close to reality are deepfakes?”
  • Opportunities for the future: “How far can this go?”

Background:

Step 03 - Trust & degradation of the information environment 🎳 🚱 (15m)

AI is being used to filter events from the past, to present news of current events, and to construct new media for mass consumption.

Discussion (15m)

  • “What do you consider to be acceptable use of AI in presenting documentary events?”

Background:

Step 04 - Psychology & behaviour 🎭 (15m)

How will this situation develop, and how will it affect both the psychology of individuals and the behaviour of society?

Discussion (15m)

  • “What happens when your world view is taken over by technology?”

Background:


OpenAI Sora

References

Atwell, Katherine, Sabit Hassan, and Malihe Alikhani. 2022. “APPDIA: A Discourse-Aware Transformer-Based Style Transfer Model for Offensive Social Media Conversations.” https://openreview.net/forum?id=3NII4exfLt&referrer=%5Bthe%20profile%20of%20Katherine%20Atwell%5D(%2Fprofile%3Fid%3D~Katherine_Atwell1).
BBC Click. n.d. “Peter Jackson Colourises World War One Footage.” YouTube. Accessed December 5, 2024. https://www.youtube.com/watch?v=EYIeactlMWo&t=5s.
BBC News. 2024. “Sir David Attenborough Says AI Clone of His Voice Is ‘Disturbing’.” YouTube. https://www.youtube.com/watch?v=-72XVvwZcfI.
Galuppo, Mia. 2023. “Doc Producers Call for Generative AI Guardrails in Open Letter.” The Hollywood Reporter, November 27, 2023. https://www.hollywoodreporter.com/news/general-news/doc-producers-call-for-generative-a-i-guardrails-in-open-letter-exclusive-1235649102/.
Jennifer Baichwal, Mark Bailey, Sam Ball, Amina Bayou, Richard Berge, Ken Burns, Sarah Burns, et al. n.d. “GenAI Initiative.” Archival Producers Alliance. Accessed December 5, 2024. https://www.archivalproducersalliance.com/apa-genai-initiative.
Jenny Kleeman. n.d. “She Was Accused of Faking an Incriminating Video of Teenage Cheerleaders. She Was Arrested, Outcast and Condemned. The Problem? Nothing Was Fake After All.” The Guardian. Accessed December 5, 2024. https://www.theguardian.com/technology/article/2024/may/11/she-was-accused-of-faking-an-incriminating-video-of-teenage-cheerleaders-she-was-arrested-outcast-and-condemned-the-problem-nothing-was-fake-after-all.
Jones, Nicola. 2024. “Who Owns Your Voice in the Age of AI?” Scientific American, May. https://www.scientificamerican.com/article/scarlett-johanssons-openai-dispute-raises-questions-about-persona-rights/.
Kate Berry. 2024. “Scams: ‘I Was Duped by Martin Lewis Deepfake Advert’.” BBC News, November 24, 2024. https://www.bbc.com/news/articles/clyvj754d9lo.
Kolorize. n.d. “Colorize Photo with Next-Gen AI for Free.” Accessed December 5, 2024. https://kolorize.cc/.
Kyle Wiggers. n.d. “OpenAI Built a Voice Cloning Tool, but You Can’t Use It... Yet.” TechCrunch. Accessed December 5, 2024. https://techcrunch.com/2024/03/29/openai-custom-voice-engine-preview/.
Łabuz, Mateusz, and Christopher Nehring. 2024. “On the Way to Deep Fake Democracy? Deep Fakes in Election Campaigns in 2023.” European Political Science 23 (4): 454–73. https://doi.org/10.1057/s41304-024-00482-9.
Luke Taylor. n.d. “Amnesty International Criticised for Using AI-Generated Images.” The Guardian. Accessed December 5, 2024. https://www.theguardian.com/world/2023/may/02/amnesty-international-ai-generated-images-criticism.
Magna AI. 2024. “OpenAI Sora All Example Videos.” YouTube. https://www.youtube.com/watch?v=TU1gMloI0kc.
Manisha Ganguly. n.d. “‘It’s Not Me, It’s Just My Face’: The Models Who Found Their Likenesses Had Been Used in AI Propaganda.” The Guardian. Accessed December 5, 2024. https://www.theguardian.com/technology/2024/oct/16/its-not-me-its-just-my-face-the-models-who-found-their-likenesses-had-been-used-in-ai-propaganda.
Peter Benie. 2018. “The Man Who Helped to Preserve Stephen Hawking’s Iconic Voice.” Department of Engineering, University of Cambridge, December 21, 2018. https://www.eng.cam.ac.uk/news/man-who-helped-preserve-stephen-hawking-s-iconic-voice.
Peter Hoskins. 2024. “Dame Judi Dench and John Cena to Voice Meta AI Chatbot.” BBC News, September 26, 2024. https://www.bbc.com/news/articles/c6258zn1663o.
Ruiter, Dana, Thomas Kleinbauer, Cristina España-Bonet, Josef van Genabith, and Dietrich Klakow. 2022. “Exploiting Social Media Content for Self-Supervised Style Transfer.” In Proceedings of the Tenth International Workshop on Natural Language Processing for Social Media, edited by Lun-Wei Ku, Cheng-Te Li, Yu-Che Tsai, and Wei-Yao Wang, 11–34. Seattle, Washington: Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.socialnlp-1.2.
Wiggers, Kyle. 2024. “Microsoft Will Soon Let You Clone Your Voice for Teams Meetings.” TechCrunch, November 19, 2024. https://techcrunch.com/2024/11/19/soon-microsoft-will-let-teams-meeting-attendees-clone-their-voices/.