
The Algorithm Made Me Do It


Artificial intelligence promises greater efficiency, greater accuracy, and lower costs in a host of different fields. The development of hiring algorithms, for example, can streamline the hiring process and better identify the best candidates for the job. In healthcare, algorithms can cut down on hours of work by quickly detecting illnesses or broken bones and finding the most efficient use of scarce healthcare resources. In the criminal justice system, AI can speed up the process by identifying potential recidivists.

However, these algorithms are often not transparent in how they work, or even in how they are designed to work. This leaves us – as end-users of these algorithms – in an awkward position: forced to accept the conclusions of an opaque algorithm that could be loaded with faulty assumptions, pseudo-science, and statistical noise. Might this blind acceptance conflict with our moral duties regarding responsible belief? How should we balance the tradeoffs between efficiency and lower cost versus credulity and gullibility?

While it’s easy to criticize AI’s many applications, it’s important to recognize the potential benefits. For example, while a self-driving car may present concerns about accountability for accidents, it is also true that the technology could offer countless benefits in terms of traffic efficiency, minimizing driver fatigue, and significant economic benefits.

In the field of affective computing, using AI to identify and categorize emotions can offer significant benefits to people with autism, or can help potentially identify people likely to have a stroke or a heart attack. It can also help with caregiving, with automated assistants that are more emotionally aware of the people they are helping. AI can also be used for the purposes of diagnosis or resource management. Similarly, the use of AI in the financial sector for things like loans can lead to better identification of risk, greater profits, and more competitive pricing.

The use of hiring algorithms in the workplace, meanwhile, will allow employers to cut down on the amount of time and resources it takes to find a new hire and can potentially take the guesswork out of identifying the most promising candidates. Similar benefits can accrue to workplaces that use algorithms for employee evaluations. Supposing that issues relating to bias can be addressed, algorithms offer the benefit of a more impartial evaluation, less dependent on the personal feelings of an individual evaluator. Ultimately, there are a great many reasons why taxpayers, job seekers, and home buyers all stand to benefit from AI.

Still, we must be wary of the hidden costs. We may be tempted, because it's cheaper or more convenient, to accept unaccountable and unfair systems that we have good reason not to excuse.

Consider the case of Tammy Dobbs. A resident of Arkansas, Tammy has cerebral palsy and requires assistance getting into a wheelchair. In 2016, after the state adopted a new algorithm to determine what level of care she should receive, the regular hours of assistant care that Tammy received were severely cut, making it difficult for her to do things like go to the bathroom. A government official came to her house, completed a questionnaire, and then relayed the algorithm's determination. When pressed for an explanation, the official could only say, “Sorry, that’s what the computer is showing me.” The government’s expectation seemed to be that Dobbs would simply accept it. Eventually, a legal team revealed significant flaws in the state’s algorithm – the algorithm didn’t even consider whether someone had cerebral palsy.

Similar cases are easy to find. Glenn Rodriguez had to fight to get an explanation for why the recidivism algorithm COMPAS concluded that he was at a high risk of reoffending. The corporation that created COMPAS refused to reveal how the assessment was made – even to the parole board – citing trade secrets. If an algorithm can have such a profound impact on your life, surely we deserve a better explanation than “The algorithm made me do it.”

Many algorithms have prejudicial assumptions baked in. A recidivism algorithm trained mostly on blue-collar or petty crime will not likely evaluate everyone the same. A hiring algorithm that contains a personality test designed to identify extroverted personality types might also be tracking whether candidates are likely to have a mental illness. Many hiring companies now make use of video recordings of candidates to detect body language, despite research demonstrating that body language cannot predict successful job performance and likening the practice to pseudoscience like phrenology. Unfortunately, candidates have no idea how they are being evaluated and no avenue of appeal if they believe that an error has occurred.

In cases like this, particularly where there are financial incentives to sell these products as efficient no-brainer solutions, developers will have reason to stifle doubts and concerns. As the designer who created the algorithm in the Dobbs case argued, perfect transparency is overrated. “It’s not simple…My washing machine isn’t simple,” but “you’re going to have to trust me that a bunch of smart people determined this is the smart way to do it.” All of this means that there is an incentive on the part of developers and end-users to put their faith in algorithms that may be quite suspect.

As W.K. Clifford argued in his ethics of belief, every time we adopt beliefs without sufficient evidence, we do something wrong. This is because beliefs dispose us to action; the more we adopt the habit of passively accepting algorithmic conclusions without adequate inquiry, the more we expose ourselves to risk. But the consequences of the beliefs we adopt extend beyond the individual; our beliefs affect our entire community. If customers and taxpayers don’t ask questions – and developers are happier not to answer them – we end up with a situation much like that government official in the Dobbs case. No accountability; no justification. Don’t ask questions, just accept the outcome.

Artificial intelligence presents a collective action problem. Individuals alone cannot challenge these poor answers that lack transparency. Instead, resolution requires a collective response – we will need to work together to resist the constant temptation of lower costs, greater efficiency, and passing the buck.

Smart Mouthguards and the Problem of Choice


Anyone who has played a contact sport like rugby or American football will tell you that it is tough. The physicality of such games — from the speeds at which players must move to the colossal collisions they endure when tackled and tackling — is extreme, and with any sport involving such physical demands comes the risk of injury. Indeed, it is not unheard of for rugby players to experience dislocations, fractures, and, in some of the worst cases, paralysis or even death as a result of game or training activities. It is no surprise that the governing bodies of such sports (like the NFL or World Rugby) are constantly considering methods to reduce the risk to players.

Their motivations can be viewed from several angles. For the compassionate and optimistic amongst you, such bodies are taking an active interest in the well-being of their players. They recognize that these athletes give their all, and the governing bodies want to ensure that the players are as healthy and play as safely as possible because the players are people, and the organizing bodies genuinely care about them. The colder, pessimistic amongst you might think that the bodies make changes not out of care for the players but for themselves. The safer the sport is, the less likely governing bodies are to be subject to financial claims by injured players. Also, it costs a lot to train players to the point where they can play professionally, and safeguarding players’ health also safeguards a significant financial investment.

For my two cents, I suspect there’s a mixture of both. These organizing bodies do care about their players and don’t want them to get hurt, but they also recognize that it is in their financial interest to do what they can (to a degree) to make play as safe as possible.

The methods by which play can be made safer take different forms. One of the most common is changes to the rules. The idea is that by banning dangerous strategies and behaviors, players will be less likely to get injured by reckless collisions. For example, in 2020, World Rugby revised the rules around high tackles (tackles that directly impact the tackled player’s head) to reduce trauma to the neck and brain. While the creation, modification, and enforcement of game rules can help prevent harm, they don’t necessarily help detect or respond to injuries when they occur, especially if match officials don’t see an injury. After all, pitches can be chaotic, and referees only have one pair of eyes. It can sometimes be incredibly difficult to identify, in the rush of play, when someone might have been injured, especially if that player hasn’t noticed themselves.

To that end, this year, World Rugby mandated the use of smart mouthguards in all professional training and games. Unlike traditional mouthguards that only protect players’ teeth, smart mouthguards come with embedded sensors that track the forces and events players’ heads experience during play. This provides a new avenue for understanding and potentially preventing injuries like concussions, which can have devastating long-term health impacts.

Before getting into the weeds of the potential ethical issues that such a form of tracking brings, however, I want to be clear that I am overwhelmingly in favor of this technology. The damage that can be caused during high-contact sports can be terrible, and effective methods of reducing and redressing injury should be welcomed. That being said, though, smart mouthguards come with some ethical concerns, specifically around the nature of the data they capture, that cannot be overlooked.

Now, we can’t jump into all the issues here; there are far too many. So, I’m going to focus on what I think is the key one: player choice. (If you want to read about some more, you can go to this blog post that a co-author and I wrote, or, if you really want to get into it, this article).

As mentioned, this year, World Rugby mandated that all elite players use smart mouthguards when playing in games or when training. If they refuse and do not have a medically justifiable reason for doing so, they are subject to the “recognize and remove” policy. In short, that policy states that if a player gets hit in the head, and there is any suspicion that such a hit might have caused a concussion, that player will have to sit the rest of that game out. This means that they cannot go to team medics to be checked out and assessed and, potentially, come back onto the field if they are given an all-clear, which is what normally happens. For those who refuse to wear a smart mouthguard, that’s just not an option.

So, on the face of it, players do, technically speaking, have a choice when it comes to smart mouthguards and, thus, having their data tracked. They can either wear the mouthguards and have details about their personal health collected and stored (potentially indefinitely), or, if they choose, they can go without.

But this is an oversimplification. Players must sacrifice an incredible amount to reach the level of a professional athlete. Time, money, energy, relocating to chase contracts, not to mention all the health risks (and inevitable injuries and pain) that come with playing sports – all before you even get to a professional level. And while most would say that they couldn’t see themselves doing anything else, this just adds to their work pressure. Investing so much into your dream job means that anything that might jeopardize your ability to play is likely to be seen as a danger, one that should, if possible, be avoided. Here the risk of coercion emerges, and with it a compromise of autonomy and player choice.

If you are at greater risk of being removed from a game – rugby in this instance – because you refuse to wear a smart mouthguard and are thus subject to the recognize-and-remove policy, then you are a less attractive prospect for your manager. After all, why would they pick you for the team when they could instead go with a player who is compliant and doesn’t run the risk of being removed on the mere suspicion of an injury? Given that it’s your dream job, and given everything you will have sacrificed for a shot at playing professionally, you are likely to simply go with the flow. So, there is a huge degree of pressure on players to wear these mouthguards simply to show that they are team players who won’t put their team’s chances on the pitch at risk.

Additionally, this pressure will come at players from different angles depending on their level of security within the team. Star players may feel the pressure to conform from above as management wants to minimize the chance of such valuable players being removed from the field of play. Less experienced players, whose place on a team has only just been secured and is, thus, tenuous, might feel the pressure to conform simply so that management doesn’t replace them with someone else.

Both result in pressure to use a technology that records intimate health data that those players weren’t having recorded before. And, as is so often the case when it comes to health decisions and biometric data collection, it is paramount that we protect and promote individuals’ freedom of choice, especially for those in vulnerable situations.

Now, as I said earlier, I’m not against the use of this technology to help prevent harm to players. But what I think is essential is that we openly recognize that asking players to wear these devices places pressure on them to have their health data collected and monitored in a way that hasn’t been done before. This is something they should be able to decide free from pressures that might coerce or influence their decision-making. If we don’t do that – if we assume that players will invariably be fine with such data collection, or worse, force them into it regardless of how they might feel – we risk not only making the sport less desirable for those who may have a real passion for it but also infringing upon some pretty fundamental freedoms that, in other situations, we might feel very uncomfortable about.

Ultimately, how would you feel if your boss or teacher said that if you wanted to keep coming to work or class, they wanted access to your biological data? It is this scenario we risk if we fail to consider the smart mouthguard question.

The Ethics of Money as Speech


The 2024 American presidential election is on pace to be the most expensive ever. Harris has a decisive advantage on the fundraising front, but both candidates are bringing in hundreds of millions. Other contentious elections, like the Pennsylvania senate race, are similarly awash in money. A new class of political actor has emerged, the mega-donor, who may spend millions supporting a preferred candidate. Though few in number, they account for a vastly outsized proportion of political spending — in 2018 just 10 people accounted for 7% of spending. Corporations are also in on the frenzy, with crypto companies especially emerging as leading donors in the 2024 election.

Fundraising is not destiny and monetary advantage may not translate into electoral victory, but the issue with political fundraising runs deeper than the mere fear of bought elections. It’s becoming impossible to run for political office without a massive war chest, and the political favors that can be purchased by mega-donors threaten to undermine political equality. Nonetheless, the Supreme Court has struck down one campaign finance law after another, arguing that the protection of free speech outweighs concerns about the influence of money in politics. How did we get here?

The most famous precipitating cause is Citizens United v. Federal Election Commission (2010). This controversial 5-4 Supreme Court decision allows for unlimited independent expenditures by non-profits, unions, and, most impactfully, corporations. What counts as an independent expenditure? It would be an independent expenditure if I spent one hundred dollars on a YouTube ad for a preferred candidate but did not coordinate directly with the campaign. This contrasts with an individual contribution, where I would give that one hundred dollars directly to the campaign.

Citizens United raises a whole raft of questions about why abstract legal entities, like corporations, deserve separate free speech rights. However, the treatment of political spending as a free speech issue stems primarily from the notoriously complex 1976 Supreme Court decision, Buckley v. Valeo. The case had two central holdings. First, it held that restrictions on individual contributions are legitimate, maintaining election integrity and preventing corruption or even its mere appearance. Second, Buckley v. Valeo held that limits on independent expenditures violate freedom of speech.

While the Supreme Court did not claim that money is literally speech, they did suggest that money plays an important role in modern political communication. Ultimately, the government’s interest in protecting political spending follows a broader interest in facilitating the ability to meaningfully express oneself publicly. Still, it does not automatically follow that political spending should be unlimited. In fact, the permissiveness of unlimited independent expenditures is a notable contrast from other more restrictive speech rulings.

Generally, rights are restricted to balance them against competing rights or interests. If there are no competing interests, then making the right unlimited could make sense. Individual contributions to campaigns are restricted because of competing worries about political corruption. By contrast, the Court assumed in both Buckley v. Valeo and Citizens United that because independent expenditures are supposed to be, well, independent, corruption should not be at issue. In retrospect, this is incorrect. Especially after Citizens United, coordination is common, bribery is facilitated, and, as previously discussed in the Prindle Post, the public is definitely worried about corruption.

Clearly, there are competing interests to unlimited independent expenditures. But advocates might nonetheless argue that the benefits of unlimited spending outweigh the risks of corruption. One potential defense of unlimited spending, present in both Buckley and Citizens United, is that more speech is simply better for democracy. However, what needs to be defended is that more political spending is better for democracy, for while money facilitates speech, it is not itself speech. By the same token, we could agree that more speech is better for democracy but dispute that more television (a facilitator of expression) is therefore always better.

But is more speech better for democracy? Justice Kennedy, writing for the majority in Citizens United, argued: “The right of citizens to inquire, to hear, to speak, and to use information to reach consensus is a precondition to enlightened self-government and a necessary means to protect it.” This reflects a classic “marketplace of ideas” approach, which holds that the best ideas will prevail in a free exchange of information. As previously discussed in the Prindle Post, the marketplace of ideas is more a nifty ideal than an actual description of public discourse. Regardless, the ideal runs on the diversity of ideas, not the sheer volume of speech. By allowing wealthy actors with more money to take up disproportionate space in the marketplace, less moneyed voices are washed out and the marketplace suffers.

In Buckley, the Court asserted that the idea that the government “may restrict the speech of some elements of our society in order to enhance the relative voice of others is wholly foreign to the First Amendment.” Or, more accurately, that government may restrict the political spending of some elements of our society in order to enhance the relative spending of others is wholly foreign to the First Amendment. However, if money is to be treated as tantamount to speech, then the Court must care about one’s actual ability to express oneself and not merely the formal right. And if they do care about the actual ability, then it makes sense to regulate political spending to protect diversity in the marketplace of ideas.

Alternatively, one could contend that the capacity to express ourselves is fundamental to a life well lived. In that case, making speech as free, as uninhibited, and as effectively enabled as possible could be considered a general good. Still, the absence of legal restrictions should not be conflated with the practical freedom of speech. Consider an extreme example: say, a threat to kill someone if they publish an op-ed about dangerous chemicals in the local water supply. Strictly speaking, if threats are protected by the First Amendment, then more kinds of speech are allowed, and speech is more “free,” legally speaking. But practically speaking, it becomes more restricted – the ability to issue these threats publicly discourages others from speaking. In short, even if one’s goal is simply to make speech more uninhibited, regulations and restrictions may still be sensible.

The Supreme Court was perhaps too quick to blur together money as enabling speech and money as speech. Substituting “political spending” for “speech” in some of the loftier rhetoric can help one stay clear-sighted on the distinction. Moreover, there may be grounds to regulate it regardless. As Benjamin Rossi pointed out in the Post previously, no free speech regime is without costs, and the on-balance effect should be seriously considered. Corruption — even with independent expenditures — appears to be a central cost of unlimited political spending. Perhaps most compellingly, if the concern is that free speech represents a meaningful capacity to express oneself and participate in the marketplace of ideas, then this seems to tell in favor of spending limits.