🗓️ Week 11
Ethical issues of AI and ethical AI: an overview

DS101 – Fundamentals of Data Science

09 Dec 2024

A few stories

Source: (Reuters 2018)

Source: (Vincent 2021)

For more on Delphi, see (Piper 2021) and (Noor 2021)

A few stories (2)

Source: (Davis 2021)

Source: (UCL 2022)

Cooking time

  • These are actual recipes suggested by Savey Meal-bot, the recipe bot of the New Zealand supermarket chain Pak ‘n’ Save
  • The bot is based on GPT-3.5
  • It included the notice “You must use your own judgement before relying on or making any recipe produced by Savey Meal-bot”
  • A new warning notice has since been appended to the meal planner stating that the recipes are not reviewed by a human being.
  • For more on the topic, see (Loe 2023), (McClure 2023) and (Doyle 2023)

Common AI issues and risks

🗣️ Reading/Discussion:

  • Read (Parsons 2020) or (Bossman 2016) and discuss it briefly within your table (~10 min).
  • What are the main ideas of the article? In particular, what does it say about AI issues?

Common AI issues and risks

…Artificial Intelligence (AI) entails several potential risks, such as opaque decision-making, gender-based or other kinds of discrimination, intrusion in our private lives or being used for criminal purposes.

European Commission white paper, 2020

Source: (Amaro 2021)

Source: (ten Seldam and Brenninkmeijer 2021)

Common AI issues and risks

Source: (Goodfellow et al. 2017)

Source: (Boesch 2023)

Common AI issues and risks

Source: (Richter 2019)

Source: (Beaumont-Thomas 2023)

Common AI issues and risks

  • Misinformation/deepfakes
    • “Snowy Delhi” by Angshuman Choudhury
    • “Trump arrest” by Eliot Higgins

Check out how AI-generated deepfakes played a role in the US 2024 primaries: CNN video

Common AI issues and risks

  • Risks of trauma for content moderators (Rowe 2023)

Source: (Rowe 2023)

Source: (Naughton 2024)

Source: (Vincent 2024)

Source: (Heikkilä 2023)

Where do these issues come from?


The technical response: trying to make AI algorithms fair by design

  • debiasing datasets (e.g. debiasing of ImageNet - see here or here or here)
  • creating more balanced and representative datasets:
    • e.g. IBM’s “Diversity in Faces” (DiF) dataset, created in response to criticism that IBM’s facial recognition software did not recognize faces of people with darker skin
    • DiF is made up of almost a million images of people gathered from the Yahoo! Flickr Creative Commons dataset, assembled specifically to achieve statistical parity among categories of skin tone, facial structure, age, and gender
    • the IBM DiF team also wondered whether age, gender, and skin color were truly sufficient for a dataset that ensures fairness and accuracy, concluded that they were not, and added facial symmetry and skull shapes to build a more complete picture of the face (the appropriateness of such features is questionable given the history of 19th-century craniometry and its links to racial discrimination)
  • defining fairness mathematically and optimizing for it (see the sketch below)
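
As a minimal sketch of what “defining fairness mathematically” can look like in practice, the snippet below computes two common group-fairness metrics, demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates). The predictions, labels, and group memberships are made up for illustration; real workflows typically rely on dedicated libraries such as Fairlearn or AIF360.

```python
import numpy as np

# Hypothetical binary labels, predictions and group memberships (A = 0, B = 1).
# These arrays are illustrative only, not real data.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute difference in true-positive rates between the two groups."""
    tpr_a = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_b = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr_a - tpr_b)

print(demographic_parity_difference(y_pred, group))
print(equal_opportunity_difference(y_true, y_pred, group))
```

A “fair by design” model would then be trained under a constraint (or penalty) that keeps such differences close to zero; note that different fairness definitions can conflict with one another.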

The technical response: making algorithms transparent and explainable

  • need to justify decisions made by “black box” algorithms (especially in sensitive contexts/applications)
  • need to foster trust in decisions made by algorithms
  • need to verify soundness of decisions and/or understand source of errors and biases

AI explainability refers to easy-to-understand information explaining why and how an AI system made its decisions.

Some of the post-hoc model-agnostic explanation methods:

  • LIME (Local Interpretable Model-Agnostic Explanations) (Ribeiro, Singh, and Guestrin 2016)
    • based on the idea of measuring the effect of local perturbations of feature values on the model
    • see this page for more on LIME
  • Shapley values:
    • based on collaborative game theory
    • measure of feature importance
    • see this video or this post for a simple explanation of Shapley values

See this page for more on explainable AI (XAI). For an application of LIME and Shapley values in finance (credit default estimation), see (Gramegna and Giudici 2021). A from-scratch sketch of the Shapley computation is given below.
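
To illustrate the game-theoretic idea behind Shapley values, the sketch below computes exact Shapley values for a tiny, made-up "value function" over three features by averaging each feature's marginal contribution over all coalitions of the other features. The toy function is an assumption for illustration only; real tools such as the SHAP library use the model's predictions and faster approximations.

```python
from itertools import combinations
from math import factorial

# Toy "value function": the payoff when only the features in `coalition`
# are known (purely illustrative, not a real model).
def value(coalition):
    v = 0.0
    if "income" in coalition:
        v += 3.0
    if "age" in coalition:
        v += 1.0
    if "debt" in coalition:
        v -= 2.0
    if "income" in coalition and "debt" in coalition:
        v -= 1.0  # interaction: debt matters more once income is known
    return v

features = ["income", "age", "debt"]

def shapley_value(feature, features, value):
    """Weighted average of `feature`'s marginal contribution over all coalitions."""
    others = [f for f in features if f != feature]
    n = len(features)
    total = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            s = set(subset)
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (value(s | {feature}) - value(s))
    return total

for f in features:
    print(f, round(shapley_value(f, features, value), 3))
# The Shapley values sum to value(all features) - value(empty set), as required.
```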

The technical response: making algorithms transparent and explainable

Example from (Ribeiro, Singh, and Guestrin 2016):

  • A logistic regression classifier is trained on a training set of 20 images of wolves and huskies (10 of each category).
  • The task is to distinguish images of the two categories.
  • Features are extracted from the images with a type of neural net.
  • The model is then tested on a test set of 10 images (5 of each category): it misclassifies one husky as a wolf and one wolf as a husky.
  • LIME explanations show that the misclassified husky was on a snowy background, that the misclassified wolf was not, and that the model was actually detecting background patterns (and not husky/wolf patterns as intended!). A simplified sketch of the LIME procedure is given below.
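
The core LIME recipe behind such explanations can be sketched in a few lines: perturb the instance of interest, weight the perturbed samples by how close they are to it, and fit a simple interpretable surrogate (here a weighted linear regression) to the black-box predictions. This is a simplified tabular version on synthetic data, not the image pipeline used in the paper; the `lime` Python package implements the full method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# A stand-in "black box": a random forest trained on synthetic tabular data
# where only features 0 and 2 actually determine the label.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Instance we want to explain.
x0 = X[0]

# 1. Perturb the instance locally.
Z = x0 + rng.normal(scale=0.5, size=(1000, 4))

# 2. Weight perturbed samples by proximity to x0 (RBF kernel).
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.5)

# 3. Fit an interpretable surrogate to the black-box predictions.
probs = black_box.predict_proba(Z)[:, 1]
surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)

# The surrogate's coefficients are the local explanation:
# features 0 and 2 should dominate, matching how y was generated.
print(surrogate.coef_)
```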

The non-technical response: Issues with regulation

  • Regulation and innovation timelines are different: regulation takes time!

The non-technical response: Issues with regulation

  • Regulations might be outdated by the time they are released, e.g. the shoehorning of generative AI into the EU AI Act:

    • attempts to build substantial flexibility into AI regulations, either through deliberately high-level wording and policies or by leaving future interpretation and application to courts and regulators, in a bid to extend the regulation’s lifespan \(\Longrightarrow\) businesses risk not knowing whether their planned implementations of AI will be lawful in the medium to long term, making it harder to attract long-term AI investment
  • Patchwork of regulatory frameworks (Editorial 2023) (Jones and Scientific American 2024):

    • “AI” means different things in different jurisdictions
    • emerging AI regulations come in different forms and have no consistent legal form
    • emerging AI regulations have different conceptual approaches
    • overlap between AI regulation and other areas of law is complex
  • Regulators might not have the technical expertise to understand the technologies (i.e. AI) they are regulating

  • The recourse to consultants from big industry players leaves regulators open to potential conflicts of interest (Naughton 2023b)

Additional resources on AI ethics

References

Amaro, Silvia. 2021. “Dutch Government Resigns After Childcare Benefits Scandal. CNBC.” January 15, 2021. https://www.cnbc.com/2021/01/15/dutch-government-resigns-after-childcare-benefits-scandal-.html.
Beaumont-Thomas, Ben. 2023. “Édith Piaf’s Voice Re-Created Using AI so She Can Narrate Own Biopic.” The Guardian, November. https://www.theguardian.com/music/2023/nov/14/edith-piaf-voice-recreated-using-ai-so-she-can-narrate-own-biopic.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜.” In, 610–23. Virtual Event Canada: ACM. https://doi.org/10.1145/3442188.3445922.
ten Seldam, Björn, and Alex Brenninkmeijer. 2021. “The Dutch Benefits Scandal: A Cautionary Tale for Algorithmic Enforcement – EU Law Enforcement.” April 30, 2021. https://eulawenforcement.com/?p=7941.
Boesch, Gaudenz. 2023. “What Is Adversarial Machine Learning? Attack Methods in 2024. Viso.ai.” January 1, 2023. https://viso.ai/deep-learning/adversarial-machine-learning/.
Bossman, Julia. 2016. “Top 9 Ethical Issues in Artificial Intelligence. World Economic Forum.” October 21, 2016. https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/.
Davis, Nicola. 2021. “AI Skin Cancer Diagnoses Risk Being Less Accurate for Dark Skin – Study.” The Guardian, November. https://www.theguardian.com/society/2021/nov/09/ai-skin-cancer-diagnoses-risk-being-less-accurate-for-dark-skin-study.
Dergaa, Ismail, Feten Fekih-Romdhane, Souheil Hallit, Alexandre Andrade Loch, Jordan M. Glenn, Mohamed Saifeddin Fessi, Mohamed Ben Aissa, et al. 2024. “ChatGPT Is Not Ready yet for Use in Providing Mental Health Assessment and Interventions.” Frontiers in Psychiatry 14 (January). https://doi.org/10.3389/fpsyt.2023.1277756.
Doyle, Trent. 2023. “Pak’nSave’s AI Meal Bot Suggests Recipes for Toxic Gas and Poisonous Meals.” Newshub, August. https://www.newshub.co.nz/home/new-zealand/2023/08/pak-nsave-s-ai-meal-bot-suggests-recipes-for-toxic-gas-and-poisonous-meals.html.
Editorial. 2023. “The Guardian View on AI Regulation: The Threat Is Too Grave for Sunak’s Light-Touch Approach.” The Guardian, November. https://www.theguardian.com/commentisfree/2023/nov/01/the-guardian-view-on-ai-regulation-the-threat-is-too-grave-for-sunaks-light-touch-approach.
Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, et al. 2018. “AI4People—an Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations.” Minds and Machines (Dordrecht) 28 (4): 689–707.
Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. 2015. “Explaining and Harnessing Adversarial Examples.” https://arxiv.org/abs/1412.6572.
Goodfellow, Ian, Nicolas Papernot, Sandy Huang, Yan Duan, Pieter Abbeel, and Jack Clark. 2017. “Attacking Machine Learning with Adversarial Examples.” February 24, 2017. https://openai.com/research/attacking-machine-learning-with-adversarial-examples.
Gramegna, Alex, and Paolo Giudici. 2021. “SHAP and LIME: An Evaluation of Discriminative Power in Credit Risk.” Frontiers in Artificial Intelligence 4. https://doi.org/10.3389/frai.2021.752558.
Howard, Jacqueline. 2023. “ChatGPT’s Responses to Suicide, Addiction, Sexual Assault Crises Raise Questions in New Study. CNN.” June 7, 2023. https://www.cnn.com/2023/06/07/health/chatgpt-health-crisis-responses-wellness/index.html.
Jones, Nicola, and Scientific American. 2024. “Who Owns Your Voice in the Age of AI?” Scientific American, May. https://www.scientificamerican.com/article/scarlett-johanssons-openai-dispute-raises-questions-about-persona-rights/.
Kafka, Peter. 2023. “The AI Boom Is Here, and so Are the Lawsuits. Vox.” February 1, 2023. https://www.vox.com/recode/23580554/generative-ai-chatgpt-openai-stable-diffusion-legal-battles-napster-copyright-peter-kafka-column.
Loe, Molly. 2023. “AI Recipe Generator Suggests Something Unsavory. TechHQ.” August 16, 2023. https://techhq.com/2023/08/ai-recipe-generator-bleach-sandwich-new-zealand/.
McClure, Tess. 2023. “Supermarket AI Meal Planner App Suggests Recipe That Would Create Chlorine Gas.” The Guardian, August. https://www.theguardian.com/world/2023/aug/10/pak-n-save-savey-meal-bot-ai-app-malfunction-recipes.
Heikkilä, Melissa. 2023. “Making an Image with Generative AI Uses as Much Energy as Charging Your Phone. MIT Technology Review.” December 1, 2023. https://www.technologyreview.com/2023/12/01/1084189/making-an-image-with-generative-ai-uses-as-much-energy-as-charging-your-phone/.
Naughton, John. 2023a. “Can AI-Generated Art Be Copyrighted? A US Judge Says Not, but It’s Just a Matter of Time.” The Observer, August. https://www.theguardian.com/commentisfree/2023/aug/26/ai-generated-art-copyright-law-recent-entrance-paradise-creativity-machine.
———. 2023b. “Europe’s AI Crackdown Looks Doomed to Be Felled by Silicon Valley Lobbying Power.” The Observer, December. https://www.theguardian.com/commentisfree/2023/dec/02/eu-artificial-intelligence-safety-bill-silicon-valley-lobbying.
———. 2024. “AI’s Craving for Data Is Matched Only by a Runaway Thirst for Water and Energy.” The Observer, March. https://www.theguardian.com/commentisfree/2024/mar/02/ais-craving-for-data-is-matched-only-by-a-runaway-thirst-for-water-and-energy.
Noor, Poppy. 2021. “‘Is It OK to …’: The Bot That Gives You an Instant Moral Judgment.” The Guardian, November. https://www.theguardian.com/technology/2021/nov/02/delphi-online-ai-bot-philosophy.
Obermeyer, Ziad, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science 366 (6464): 447–53. https://doi.org/10.1126/science.aax2342.
Parsons, Lian. 2020. “Ethical Concerns Mount as AI Takes Bigger Decision-Making Role. Harvard Gazette.” October 26, 2020. https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/.
Pessach, Dana, and Erez Shmueli. 2022. “A Review on Fairness in Machine Learning.” ACM Comput. Surv. 55 (3). https://doi.org/10.1145/3494672.
Piper, Kelsey. 2021. “How Well Can an AI Mimic Human Ethics? Vox.” October 27, 2021. https://www.vox.com/future-perfect/2021/10/27/22747333/artificial-intelligence-ethics-delphi-ai.
Richter, Goetz. 2019. “Composers Are Under No Threat from AI, If Huawei’s Finished Schubert Symphony Is a Guide. The University of Sydney.” February 18, 2019. https://www.sydney.edu.au/music/news-and-events/2019/02/18/composers-are-under-no-threat-from-ai--if-huawei-s-finished-schu.html.
Reuters. 2018. “Amazon Ditched AI Recruiting Tool That Favored Men for Technical Jobs.” The Guardian, October. https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine.
Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” https://arxiv.org/abs/1602.04938.
Rowe, Niamh. 2023. “‘It’s Destroyed Me Completely’: Kenyan Moderators Decry Toll of Training of AI Models.” The Guardian, August. https://www.theguardian.com/technology/2023/aug/02/ai-chatbot-training-human-toll-content-moderator-meta-openai.
Thorbecke, Catherine. 2023. “National Eating Disorders Association Takes Its AI Chatbot Offline After Complaints of ‘Harmful’ Advice CNN Business. CNN.” June 1, 2023. https://www.cnn.com/2023/06/01/tech/eating-disorder-chatbot/index.html.
UCL. 2022. “Gender Bias Revealed in AI Tools Screening for Liver Disease. UCL News.” July 11, 2022. https://www.ucl.ac.uk/news/2022/jul/gender-bias-revealed-ai-tools-screening-liver-disease.
Verma, Sahil, and Julia Rubin. 2018. “Fairness Definitions Explained.” In Proceedings of the International Workshop on Software Fairness, 1–7. FairWare ’18. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3194770.3194776.
Vincent, James. 2021. “The AI Oracle of Delphi Uses the Problems of Reddit to Offer Dubious Moral Advice. The Verge.” October 20, 2021. https://www.theverge.com/2021/10/20/22734215/ai-ask-delphi-moral-ethical-judgement-demo.
———. 2024. “How Much Electricity Does AI Consume? The Verge.” February 16, 2024. https://www.theverge.com/24066646/ai-electricity-energy-watts-generative-consumption.
Viswanathan, Giri. 2023. “ChatGPT Struggles to Answer Medical Questions, New Research Finds. CNN.” December 10, 2023. https://www.cnn.com/2023/12/10/health/chatgpt-medical-questions/index.html.