Technology

The Algorithm Made Me Do It

By Matthew S.W. Silk
28 Oct 2024

Artificial intelligence promises greater efficiency, greater accuracy, and lower costs in a host of different fields. The development of hiring algorithms, for example, can streamline the hiring process and better identify the best candidates for the job. In healthcare, algorithms can cut down on hours of work by quickly detecting illnesses or broken bones and finding the most efficient use of scarce healthcare resources. In the criminal justice system, AI can speed up the process by identifying potential recidivists.

However, these algorithms are often not transparent in how they work, or even in how they are designed to work. This leaves us – as end-users of these algorithms – in an awkward position: forced to accept the conclusions of an opaque algorithm that could be loaded with faulty assumptions, pseudo-science, and statistical noise. Might this blind acceptance conflict with our moral duties regarding responsible belief? How should we weigh the gains in efficiency and cost against the risks of credulity and gullibility?

While it’s easy to criticize AI’s many applications, it’s important to recognize their potential benefits. For example, while a self-driving car may raise concerns about accountability for accidents, the technology could also improve traffic efficiency, minimize driver fatigue, and deliver significant economic gains.

In the field of affective computing, using AI to identify and categorize emotions can offer significant benefits to people with autism, or help identify people at risk of a stroke or heart attack. It can also support caregiving, powering automated assistants that are more emotionally attuned to the people they are helping, and it can assist with diagnosis and resource management. Similarly, the use of AI in the financial sector for things like loans can lead to better identification of risk, greater profits, and more competitive pricing.

The use of hiring algorithms in the workplace, meanwhile, allows employers to cut down on the time and resources it takes to find a new hire and can potentially take the guesswork out of identifying the most promising candidates. Similar benefits can accrue to workplaces that use algorithms for employee evaluations. Supposing that issues relating to bias can be addressed, algorithms offer the benefit of a more impartial evaluation, one less dependent on the personal feelings of an individual evaluator. Ultimately, there are a great many reasons why taxpayers, job seekers, and home buyers all stand to benefit from AI.

Still, we must be wary of the hidden costs. We may be tempted, either because it’s cheaper or more convenient, to accept unaccountable and unfair systems that we otherwise have good reason to reject.

Consider the case of Tammy Dobbs. A resident of Arkansas, Tammy has cerebral palsy and requires assistance getting into a wheelchair. In 2016, after the state adopted a new algorithm to determine what level of care she should receive, the regular hours of assistant care Tammy received were severely cut, making it difficult for her to do things like go to the bathroom. A government official came to her house, completed a questionnaire, and then relayed the algorithm’s determination. When pressed for an explanation, the official could only say, “Sorry, that’s what the computer is showing me.” The government’s expectation seemed to be that Dobbs would simply accept it. Eventually, a legal team revealed significant flaws in the state’s algorithm – it didn’t even consider whether someone had cerebral palsy.

Similar cases are easy to find. Glenn Rodriguez had to fight to get an explanation for why the recidivism algorithm COMPAS concluded that he was at high risk of reoffending. The corporation that created COMPAS refused to reveal how the assessment was made – even to the parole board – citing trade secrets. If an algorithm can have such a profound impact on our lives, surely we deserve a better explanation than “The algorithm made me do it.”

Many algorithms have prejudicial assumptions baked in. A recidivism algorithm trained mostly on blue-collar or petty crime is unlikely to evaluate every defendant the same way. A hiring algorithm that contains a personality test designed to identify extroverted personality types might also be tracking whether candidates are likely to have a mental illness. Many hiring companies now analyze video recordings of candidates to read their body language, despite research demonstrating that body language cannot predict successful job performance; critics have likened the practice to pseudoscience like phrenology. Unfortunately, candidates have no idea how they are being evaluated and no avenue of appeal if they believe an error has occurred.

In cases like these, particularly where there are financial incentives to sell these products as efficient, no-brainer solutions, developers have reason to stifle doubts and concerns. As the designer of the algorithm in the Dobbs case argued, perfect transparency is overrated: “It’s not simple…My washing machine isn’t simple,” but “you’re going to have to trust me that a bunch of smart people determined this is the smart way to do it.” All of this means that developers and end-users alike have an incentive to put their faith in algorithms that may be quite suspect.

As W.K. Clifford argued in “The Ethics of Belief,” every time we adopt a belief without sufficient evidence, we do something wrong. This is because beliefs dispose us to action: the more we adopt the habit of passively accepting algorithmic conclusions without adequate inquiry, the more risk we expose ourselves to. And the consequences of the beliefs we adopt extend beyond the individual; our beliefs affect our entire community. If customers and taxpayers don’t ask questions – and developers are happier not to answer them – we end up in the situation of that government official in the Dobbs case: no accountability, no justification. Don’t ask questions; just accept the outcome.

Artificial intelligence presents a collective action problem. Individuals acting alone cannot effectively challenge these opaque, unaccountable systems. Resolution requires a collective response – we will need to work together to resist the constant temptation of lower costs, greater efficiency, and passing the buck.

Matt has a PhD in philosophy from the University of Waterloo. His research specializes in philosophy of science and the nature of values. He has also published on the history of pragmatism and the work of John Dewey.