Published on September 1st, 2021 | Categories: Media Psychology, School of Psychology
by Jason Ohler, Ph.D.
Fielding Faculty, Media Psychology

It has been an honor to work for an institution that values social justice so highly. History has consistently taught us that if we aren’t vigilant about social justice, a vacuum develops that can be filled all too easily by the forces of misogyny, ethnocentrism, and xenophobia. Recent history has also taught us that technology exacerbates the potential for this to happen. Artificial Intelligence (AI) technology is so powerful, and often so discreet, that it can be used to win elections, set social policy, or dupe us into believing something that is patently false.

Let’s begin with some AI wow

By now, most readers have seen an example of deepfake AI. If not, go to YouTube, search for Obama deepfakes, and stand back. In one example, Jordan Peele controls Obama’s video image so convincingly that when he goes on a rant about the previous administration, you’d swear it was real – a combination of highly convincing visuals and confirmation bias in action.

Yet, it is an event that never happened. When we imagine this technology unchecked, and in the hands of those who seek to thwart the common good, the mind boggles at how public opinion could be swayed and how truth could eventually become an indiscernible distant memory.

AI and Fake News

We unknowingly experienced the new face of injustice and racism when we watched the purveyors of fake news try to hijack (some say successfully) an election, using bots and AI-facilitated targeted marketing built on big data sets, like those maintained by Facebook. (For more about this, read about the Cambridge Analytica debacle.) At issue is not just the ill intent of those who seek power at all costs, but also the technology that allows this to happen so discreetly. From now on we will have to grapple with the fact that we will never know for sure whether the media we consume is real, concocted, or somewhere in between.

Everything we build contains our bias

But the problem goes much deeper. We are best served to remember that behind all the dazzling AI are people, just like you or me, sitting at computers hammering out lines of code. Even the programmers who consciously self-check to eliminate bias can’t help but craft an artificial world into which they project their prejudices, many of which may be unknown to them. Thus, round one of public AI came with reports of a number of racist and misogynist outcomes:

  • In 2015, Amazon realized that its AI hiring algorithm was biased against women.
  • In 2019, researchers determined that a widely used healthcare algorithm discriminated against Black patients. As quoted in Science: “Bias occurs because the algorithm uses health costs as a proxy for health needs. Less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients.” (A minimal sketch after this list illustrates this proxy effect.)
  • In 2019, NIST researchers found racial bias in facial recognition programs. “Among U.S.-developed algorithms, there were similar high rates of false positives in one-to-one matching for Asians, African Americans and native groups… a notable exception was for some algorithms developed in Asian countries (where) there was no such dramatic difference in false positives.”
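
To see how a proxy variable smuggles bias into an otherwise “neutral” model, here is a minimal sketch in Python. It is a hypothetical illustration of the mechanism the Science quote describes, not the actual algorithm studied; the groups, numbers, and variable names are all invented.

```python
# Hypothetical illustration of proxy bias. Both groups have identical
# health needs, but group B historically receives less spending for the
# same need, so a cost-based "risk score" ranks group B as healthier.
import random

random.seed(0)

def simulate_patient(group):
    need = random.gauss(50, 10)                     # true health need, same for both groups
    spending_factor = 1.0 if group == "A" else 0.7  # group B gets 70% of the care dollars
    cost = need * spending_factor + random.gauss(0, 2)
    return {"group": group, "need": need, "cost": cost}

patients = [simulate_patient(g) for g in ("A", "B") for _ in range(5000)]

# The flawed risk score: rank patients by cost. A model trained to predict
# cost would reproduce this ranking, so observed cost stands in for the
# model's output here. Flag the top 20% for a care-management program.
patients.sort(key=lambda p: p["cost"], reverse=True)
flagged = patients[: len(patients) // 5]

share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
print(f"Group B is 50% of patients but {share_b:.0%} of those flagged for extra care.")
```

Equal need goes in, unequal care comes out; the bias lives entirely in the proxy variable, with no explicit mention of race anywhere in the code.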

These are just a few examples. And these are just the headlines.

The massive scale at which AI programmers work, and the indiscernibility of much of what they do, leave us wondering how to move forward in creating just, fair, and transparent social institutions when the problems are ones our critical thinking radar cannot detect.

AI and Ethics

Alas, things get trickier.

Imagine you are driving down the highway in the family SUV, your two children and the dog in the back seat. Suddenly, a deer jumps out in front of your car. You can: 1) jump the curb and hope you don’t hurt anyone in the car, or the two people who are walking their dog on the sidewalk; 2) hit the deer, knowing that doing so would probably injure or maybe even kill you, your passengers, and anyone in the cars behind you who swerves to avoid the accident; or 3) cross into oncoming traffic and take a chance you can outmaneuver all the cars headed straight for you. A decision needs to be made in a split second.

And, oh yes, you aren’t driving. You are in an autonomous SUV, which means that your car will need to decide. Even if your car has an override that allows you to take control of the vehicle, events are happening too fast. You have no choice but to let your car make the decision while you hope for the best.

This is not a contrived situation. Tech ethicists are already trying to unravel quandaries like this as AI permeates daily living.

The Trolley Problem, Updated with AI

This autonomous vehicle (AV) dilemma is not unlike the one described in “The Trolley Problem,” a foundational thought experiment in most college ethics classes that has been debated by numerous moral philosophers. In Dr. Judith Jarvis Thomson’s version, a trolley with failed brakes is hurtling down a hill toward five workmen who are repairing the tracks. There is the very real possibility that the workmen will not see the trolley in time to move. However, you can throw a switch and send the trolley onto another track where it will assuredly kill only one person. Which option is more ethically sound? Or, in more contemporary terms, how would we program an AI machine – like a self-driving car – to respond?
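
To make that question concrete, here is a deliberately crude sketch, in Python, of what answering it in code might look like. Every scenario, number, and weighting rule below is hypothetical; the point is that whoever writes the scoring function is making the ethical choice.

```python
# A toy decision rule for the SUV dilemma above. The options, injury
# estimates, and scoring formula are all invented for illustration.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_injuries: float  # estimated number of people harmed
    certainty: float          # how confident the system is in that estimate

def expected_harm(option: Option) -> float:
    """One possible scoring rule: injuries weighted by certainty.
    Choosing this formula, and these weights, *is* the ethical decision."""
    return option.expected_injuries * option.certainty

options = [
    Option("swerve onto the sidewalk", expected_injuries=2.0, certainty=0.6),
    Option("brake and hit the deer", expected_injuries=1.5, certainty=0.9),
    Option("cross into oncoming traffic", expected_injuries=3.0, certainty=0.5),
]

choice = min(options, key=expected_harm)
print(f"The car chooses to: {choice.name}")
```

Notice what this particular rule quietly does: by discounting harms it is less certain about, it makes riskier maneuvers look safer. That is exactly the kind of buried value judgment a programmer can make without ever framing it as ethics.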

AVs are just the beginning. Most of our new tech will be AI-infused in some way. Our robots and self-aware homes, even the bots we use to answer our email, will also be faced with similar moral dilemmas. As good consumers, we will shop for the smartest AI we can afford. The smarter our tech becomes, the more we will depend on programmers to craft AI that extends us, in McLuhanesque terms, in ways that reflect who we are as moral human beings. Given that each of us might handle the deer and SUV situation differently, what kind of programmers will we turn to?

What to Do

Any institution concerned with social justice needs to be concerned with how AI may undo whatever social progress has been made over the last two centuries. Perhaps we need citizens’ boards that examine AI for prejudicial thinking, and education at an early age for our students, all of whom need help understanding the ethical dimensions of living an AI-assisted lifestyle. We most certainly need to add AI racism to the list of topics to address at Fielding if we are to maintain our leadership position in the area of social justice.

At the very least, we need to listen to AI ethicists like Timnit Gebru and Margaret Mitchell, who were fired from Google for sounding the alarm about the lack of transparency in how Google uses the big data sets it collects on each of us who enjoy its services, usually without our awareness. Gebru and Mitchell’s concerns should become Fielding’s concerns. The year is only 2021, and we are headed into an unforeseeable world in which technology is largely a rollercoaster without a braking system. We currently entrust our concern for the social justice issues implicit in AI to pundits, who are famously, and often, wrong. We need to adopt this concern as our own, bringing the forces of research, as well as student, faculty, and administrative interest, to bear on an issue that may well determine our quality of life going forward.

And we should probably hurry. After all, soon our AI robots will become our neighbors and fellow digital citizens. We will want to make sure they are the kind of intelligent entities we want living in our communities.

If we aren’t involved in steering AI, then the techies and the interests of business will run the show. If that happens, then a pre-tech prophecy that has always been true will come to pass on steroids – if we don’t tell our stories, others will tell them for us. And I doubt any of us would be pleased with the results.

In the prophetic words of Orwell: “Who controls the past controls the future. Who controls the present controls the past.”

Credits

  1. Hislop, I. (2019). How the Obama / Jordan Peele DEEPFAKE actually works | Ian Hislop’s Fake News – BBC. https://www.youtube.com/watch?v=g5wLaJYBAm4
  2. Cambridge Analytica scandal. https://en.wikipedia.org/wiki/Cambridge_Analytica
  3. Amazon’s sexist AI recruiting tool: how did it go so wrong? https://becominghuman.ai/amazons-sexist-ai-recruiting-tool-how-did-it-go-so-wrong-e3d14816d98e
  4. National Institute of Standards and Technology. (2019). NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software. https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software
  5. Obermeyer, Z., et al. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. http://science.sciencemag.org/content/366/6464/447
  6. Thomson, J. J. (1985). The Trolley Problem. The Yale Law Journal, 94(6). https://www.jstor.org/stable/796133?origin=crossref
  7. McLuhan, M. (1964). Understanding Media: The Extensions of Man. https://web.mit.edu/allanmc/www/mcluhan.mediummessage.pdf

About the Author: Jason Ohler

Dr. Jason Ohler is a professor emeritus in educational technology and virtual learning from the University of Alaska. After retiring from the UofA, he went to work in Fielding’s Media Psychology Program, in the School of Psychology. Dr. Ohler has been a teacher, writer, researcher, and international speaker for forty years. He specializes in the psychology of media literacy, social justice in a technological age, digital ethics, and narrative theory and digital storytelling. He is the author of several books, articles, and studies, and has received numerous awards for his work. He is known for his work in the fields of digital citizenship, ‘art, the fourth R’, and “creatical” thinking, which is the combination of critical and creative thinking into a unified approach to problem solving. His motto for the past forty years has been ‘To promote the appropriate, creative, and wise use of technology; personally, socially, and professionally; and, whenever possible, to have fun.’
