
A New Kind of Risk?

We usually expect to be held accountable for our actions – for the results we intend and for those we do not. We expect, for example, that a car company will ensure that a vehicle doesn’t have major flaws that could result in serious harm before it sells that vehicle to customers. Failing to consider the risks would be negligent, and this is why recalls often look bad for such companies.

But what about algorithms? Should we similarly expect that a corporation that develops an algorithm – one that detects cancer, say, or detects whether someone is passing off AI-generated content as their own – will make sure there are no significant flaws in its product before selling it? What if there is no way it could reasonably do so? Given that algorithms can generate erroneous results that cause serious harms, what is a reasonable standard when it comes to product testing?

In one of the chapters of my forthcoming book on the ethics of AI, I consider a hypothetical issue involving ChatGPT and a professor who might use an algorithm to accuse a student of passing off ChatGPT-written work as their own. There are a great many ethical issues involved when we don’t understand the algorithm and how it might generate false positive results. This has already become a serious issue as students are now being falsely accused of handing in AI-generated work because an algorithm flagged it. A Bloomberg Businessweek study on the services GPTZero and Copyleaks found a 1-2% false positive rate. While that may not sound like a lot, it can mean that millions of students will be falsely accused of cheating with almost no way of defending themselves or receiving an explanation as to what they did wrong.
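To get a feel for the scale involved, consider a quick back-of-the-envelope calculation – a minimal sketch in Python, where the student and submission counts are illustrative assumptions of mine, not figures from the Bloomberg study:

```python
# Rough scale of false accusations implied by a 1% false positive rate.
# The volumes below are illustrative assumptions, not reported figures.
students = 20_000_000        # assumed number of students whose work is screened
essays_per_student = 10      # assumed screened submissions per student per year
false_positive_rate = 0.01   # low end of the reported 1-2% rate

# Chance that at least one of a student's essays is wrongly flagged.
p_flagged = 1 - (1 - false_positive_rate) ** essays_per_student
print(f"{p_flagged:.1%} of students flagged at least once")       # ~9.6%
print(f"~{students * p_flagged:,.0f} students falsely accused")   # ~1.9 million
```

Even at the low end of the reported error rate, the arithmetic lands in the millions once screening happens at this kind of scale.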

According to Bloomberg, these interactions are already ruining academic relationships between teachers and students. Some students have taken to recording themselves writing their entire papers just to be able to disprove the algorithm. Others now obsess over not sounding “too robotic” lest they be accused as well, a problem that is especially prominent for ESL and neurodivergent students. Should we consider the AI developer whose faulty product generates these kinds of results negligent?

Philosophers of science generally agree that researchers have an obligation to assess inductive risk when accepting a conclusion. In other words, they need to consider the moral consequences of potentially getting it wrong and then consider whether a higher or lower standard of evidence might be appropriate. If, for example, we were testing a chemical to determine how hazardous it is, but the test was only accurate 80% of the time, we would likely demand more evidence. Given the potential harm that can result and the opacity of algorithms, AI developers should be similarly conscientious.

If an algorithm operates according to black box principles, the developer may have a good understanding of how to create an algorithm – they will understand that the model can take in various inputs and translate those into outputs – but they will not be able to retrace the steps the model used to arrive at its conclusion. In other words, we have no idea what evidence an algorithm like GPTZero is relying on when it concludes that a piece of text is generated by AI. If the AI developer doesn’t know how the algorithm is using input data as evidence, they cannot evaluate the inductive risk concerns about how sufficient that evidence is.

Still, despite the opacity, there are ways an AI developer might attempt to address their inductive risk responsibilities. Koray Karaca argues that developers can build in inductive risk considerations by using cost-sensitive machine learning, which assigns different costs to different kinds of errors. In the case of AI detectors, the company Turnitin claims to intentionally “oversample” underrepresented students (especially ESL students). By oversampling in this way, the evidentiary standard by which different forms of writing are judged is fine-tuned.
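To make the idea concrete, here is a minimal sketch of what cost-sensitive training could look like, using class weights in scikit-learn. The synthetic data and the 10:1 cost ratio are illustrative assumptions, not Turnitin’s actual model or the precise setup Karaca describes:

```python
# Minimal sketch of cost-sensitive learning for an AI-text detector.
# The synthetic data and the 10:1 cost ratio are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Stand-in data: label 0 = human-written, label 1 = AI-generated.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Misclassifying a human essay as AI (a false accusation) is treated as
# ten times as costly as failing to catch an AI-generated essay.
detector = LogisticRegression(class_weight={0: 10, 1: 1}, max_iter=1000)
detector.fit(X_train, y_train)

# The confusion matrix shows false positives suppressed at the cost of
# more false negatives.
print(confusion_matrix(y_test, detector.predict(X_test)))
```

Weighting the human-written class more heavily pushes the model to avoid false accusations at the price of letting more AI-generated text slip through – exactly the kind of moral trade-off that inductive risk asks developers to make deliberately.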

Still, there is little accounting for what correlations a model might rely on, making it difficult to explain to students who do get falsely accused why they are being accused in the first place. AI developers have struggled to assess the reliability of their models or evaluate the risks when those correlations are used in error. This issue becomes especially concerning when it comes to things like credit reports. If you don’t know how or why a model compiles a credit report, how can you manage those risks of error? How much must a developer understand about how their algorithm functions before it is put to use? If a developer is aware of the risks of error but also knows that their algorithm is limited in terms of mitigating those risks, at what point do we consider that negligent behavior? If negligence is essentially something we police as a community, we will need to come together quickly to decide what the promise of AI can and can’t excuse.

When Is Fair Use “Fair” for AI (and When Is It “Use”)?

The Internet Archive recently lost a high-profile case. Here’s what happened: the Open Library, a project run by the Internet Archive, uploaded digitized versions of books that it owned, and loaned them out to users online. This practice was found to violate copyright law, however, since the Internet Archive failed to procure the appropriate licenses for distributing e-books online. While the Internet Archive argued that its distribution of digital scans of copyrighted works constituted “fair use,” the judge in the case was not convinced.

While many have lamented the court’s decision, others have wondered about the potential consequences for another set of high-profile fair use cases: those concerning AI models training on copyrighted works. Numerous copyright infringement cases have been brought against AI companies, including a class-action lawsuit brought against Meta for training their chatbot using authors’ books without their permission, and a lawsuit from record labels against AI music-generating programs that train on copyrighted works of music.

Like the Internet Archive, AI companies have also claimed that their use of copyrighted materials constitutes “fair use.” These companies, however, have a potentially novel way to approach their legal challenges. While many fair use cases center around whether the use of copyrighted materials is “fair,” some newer arguments involving AI are more concerned with a different kind of “use.”

“Fair use” is a legal concept that attempts to balance the rights of copyright holders with the ability of others to use those works to create something new. Quintessential cases in which the use of copyrighted material is generally considered “fair” include criticism, satire, and educational purposes, as well as other uses considered “transformative,” such as the creation of art. These conditions have limits, though, and lawsuits are often fought in the gray areas, especially when it is argued that the use of the material will adversely affect the market for the original work.

For example, in the court’s decision against the Internet Archive, the judge argued that uploading digital copies of books failed to be “transformative” in any meaningful sense and that doing so would likely be to the detriment of the original authors – in other words, if someone can just borrow a digital copy, they are less likely to buy a copy of the book. It’s not clear how strong this economic argument is; regardless, some commentators have argued that with libraries in America facing challenges in the form of budget cuts, political censorship, and aggressive licensing agreements from publishers, there is a real need for the existence of projects like the Open Library.

While “fair use” is a legal concept, there is also a moral dimension to the ways that we might think it acceptable to use the work of others. The case of the Internet Archive arguably shows how these concepts can come apart: while the existing law in the U.S. seems to not be on the side of the Open Library, morally speaking there is certainly a case to be made that people are worse off for not having access to its services.

AI companies have been particularly interested in recent fair use lawsuits, as their programs train on large sets of data, much of which is used without permission from, or a licensing agreement with, the creators. While companies have argued that their use of these data constitutes fair use, some plaintiffs have argued that it does not, both because it is not sufficiently transformative and because it competes with the original copyright holders.

For example, some music labels have argued that music-generating AI programs often produce content that is extremely similar, or in some cases identical, to existing music. In one case, an AI music generator reproduced artist Jason Derulo’s signature tag (i.e., the moment when he says his name in his songs so you know it’s by him), a clear indication that the program was copying an existing song.

Again, we can look at the issue of fair use from both a legal and a moral standpoint. Legally, it seems clear that when an AI program reproduces text verbatim from its source, it is not being transformative in any meaningful way. Many have also raised moral concerns about the way AI programs use artistic materials, both because work is used without permission and because it is used in ways that creators specifically object to.

But there is an argument from AI defenders around fair use that has less to do with what is “fair” and how copyrighted information is “used”: namely, that AI programs “use” content they find online in the same way that a person does.

Here is how such an argument might go:

- There is nothing morally or legally impermissible about a person reading a lot of content, watching a lot of videos, or listening to a lot of music online, and then using that information as knowledge or inspiration when creating new works. This is simply how people learn and create new things.

- There is nothing specifically morally or legally significant about a person profiting off of the creations that result from what they’ve learned.

- There is nothing morally or legally significant about the quantity of information one consumes or how fast one consumes it.

- An AI is capable of reading a lot of content, watching a lot of videos, and listening to a lot of music online, and of using that information as knowledge or inspiration when creating new works.

- The only relevant difference between the way an AI and a person use information to create new content is the quantity of information an AI can consume and the speed at which it consumes it.

- However, since neither quantity nor speed is a relevant moral or legal factor, AI companies are not doing anything impermissible by creating programs that use copyrighted materials online when creating new works.

Arguments of this form can be found in many places. For example, in an interview for NPR:

Richard Busch, a lawyer who represents artists who have made copyright claims against other artists, asks: “How is this different than a human brain listening to music and then creating something that is not infringing, but is influenced?”

Similarly, from the blog of AI music creator Udio:

Generative AI models, including our music model, learn from examples. Just as students listen to music and study scores, our model has “listened” to and learned from a large collection of recorded music.

While these arguments also point to the originality of the final creation, a crucial component of their defense lies in how AI programs “use” copyrighted material. Since there’s nothing inherently inappropriate about a person consuming a lot of information, processing it, getting inspired by it, and producing something as a result, the thought goes, we shouldn’t consider it inappropriate for an AI to do the same things.

Many worries have already been raised, however, about the inappropriate personification of AI, from concerns about AI being “conscious” to downplaying errors by referring to them as “hallucinations.” In the above arguments, these personifications are more subtle: AI defenders talk in terms of the programs “listening,” “creating,” “learning,” and “studying.” No one would begrudge a human being for doing these things. Importantly, though, these actions are the actions of human beings – or, at least, of intelligent beings with moral status. Uncritically applying them to computer programs thus masks an important jump in logic that is not warranted by what we know about the current capabilities of AI.

There are many battles still to be fought over what constitutes a truly “transformative” work in lawsuits against AI companies. Regardless, part of the ongoing legal and moral discussion will undoubtedly need to shift its focus to new questions about what “use” means when it comes to AI.

Insects on the Menu: The Class Divide in Sustainable Protein

photograph of insect dishes on buffet

This article contains spoilers for the 2013 film Snowpiercer and some possible spoilers for the series of the same name.

Last night, I attended a talk by the U.K.’s Royal Entomological Society. The talk was titled “Insects as Food and Feed: Delivering Insect Proteins in the UK.” The panel consisted of a lawyer specializing in the regulation of regenerative agriculture, technology, and innovation; a professor in animal health and production; a CEO of an insect-based waste management company; and a poultry nutrition and innovation manager. The general gist of the talk was to consider the scientific, legal, and market factors that might enhance or inhibit the use of insects as food, both for humans and for farm animals.

Now, I should have realized that a talk hosted by an entomological society, with a panel of people interested in commercializing insect consumption for both people and farm animals, would have a certain bias. Or, to put it more delicately, such an event would focus on the areas mentioned above and not consider, as a central point, the ethics of animal consumption. Unfortunately, this didn’t occur to me before attending the talk, and despite my best efforts during the Q&A (I wasn’t called on to ask a question), I could not shift the conversation in a more philosophical direction.

So, I want to abuse my position here just a little and talk about one of the many ethical issues that insect farming raises: the symbolism of insect consumption regarding class division.

The reason this topic came to the forefront of my mind during last night’s talk can be traced back to a single point. About halfway through, the panel were discussing how to make eating insects more appealing to people. Obviously, to many of us, the idea of eating bugs rather than steak, chicken, pork, or any other farmyard animal is less than appealing (I’m a vegetarian, so it’s all bad in my eyes). This, however, isn’t universal. In countries like South Africa, Zimbabwe, Tanzania, and Madagascar, insects form a staple part of the diet. Yet here, in the “Western world,” there is a significant, if not overwhelming, taboo against insect consumption. So, any discussion about getting more people to eat insects, for whatever motivating reason, would naturally turn to methods.

A member of the panel noted that insect consumption is generally considered less troublesome when one isn’t faced with an actual insect to eat. Revulsion at the idea tends to drop if the insects have been crushed, diced, blended, or processed by some other means. This shift shouldn’t be too surprising. It seems only natural that the physical nature of the insect – having to bite into a thorax or carapace – would generate feelings of disgust far more readily than a processed food whose original form one can’t discern. So, the panel talked about protein powders, much like the ones many of us use in our shakes or oats. But rather than being sourced from things like whey, the protein in these powders would come from insects.

Ultimately, their argument was that processing renders insects more palatable and would thus provide a valuable and, importantly, cheap source of protein.

This stance makes sense to me, and I don’t wish to contest it. Nevertheless, the idea of processing insects to make them more likely to be consumed by a target population instantly conjured up memories of Bong Joon Ho’s 2013 post-apocalyptic action thriller Snowpiercer.

Before going any further, I feel compelled to say that if you have not watched Snowpiercer, stop reading this, go watch it, and then come back. The film is a masterpiece. It has a stellar cast (Chris Evans, Tilda Swinton, John Hurt, Ed Harris), fantastic set design, and a storyline and premise that captures you from the first frame to the last. I don’t want to spoil the film for you, so seriously, go now, watch it, and come back.

Welcome back.

Throughout the film, we follow the tailies (those living in the tail end of the train) as they fight their way up towards the train’s engine, the seat of symbolic and actual power. As they move forward, they pass through the train’s class system and witness ever-growing luxury that throws the bleakness of life in the tail into sharp relief. It is one scene from early in their journey, however, that instantly came to my mind last night.

After leaving the area where they had been contained, the tailies take control of a production carriage where the protein bars they had been living off for the past 17 years were made. Chris Evans’s character looks into the machine and sees (you guessed it) that the bars every person living in the train’s tail end had been eating were made from ground-up cockroaches. His reaction makes it instantly clear that neither he nor anyone else in the tail knew the true nature of what they were eating, and he decides to keep it that way. This scene starkly contrasts with later ones where, as the group progresses, we see the splendor and variety of the food the more affluent passengers eat (sushi, for example).

Now, I am not saying that this is what would occur in real life. I don’t think anyone on last night’s panel was suggesting that insects should be smuggled into people’s diets. Yet, there is something here — some sort of overlap — between the real-world possibility of processing insects so that more people would eat them, and the bleakness of Snowpiercer’s bug-based diet. And, after some reflection, I think it comes down to money. Or, put another way, down to the market forces that have opened a window and seemingly necessitate insect consumption.

Currently, buying insects to eat is marginally more expensive than buying more traditional meat products (at least, it is in the U.K.). This, however, shouldn’t surprise anyone. Things get cheaper when production is scaled up, and at the moment, the edible insect market is small. So, the companies that farm them are also small, with relatively high prices. There is also the legal environment to contend with. Insects are only now starting to be entertained as a suitable food, so market opportunities are limited by regulations that are neither straightforward nor accommodating. Should these two constricting factors change – a larger market and a less cautious regulatory and legal environment – the cost of insect production will fall.

Indeed, many believe the cost would fall so significantly that insects would make an ideal, cheaper, protein-rich alternative to pork, beef, chicken, etc. After all, it takes far less space and resources to farm insects than it does cows, pigs, or sheep. Plus, insects can be fed things which farm animals cannot (things like waste).

It is this point, however, that gives me pause for thought. I am not against finding cheaper (and more environmentally friendly) ways of feeding people; far from it. But I don’t think we can get away from the fact that marketing insects as a cheaper alternative to traditional meats means that those with the shallowest pockets are more likely to buy them, even if they don’t necessarily want to. This possibility is something we see all the time when it comes to the purchase of foods that are cheap yet unhealthy. People, generally, don’t want to buy things that will make them ill in the long run, but when that’s all they can afford, and when our economic and food production systems funnel consumers into those purchases, they have little alternative.

The same is true here with insects. Not that they may make people ill, but that those who can’t afford anything else will be forced by their economic reality to have bugs for dinner.

Of course, on the flip side of this, we have the rich. Those with the material resources can purchase what they want, when they want, free from the constraints under which many others labor. For them, insect consumption might be a choice: they might make such sources of protein a staple of their diet, they might eat bugs occasionally or as a one-off just for fun, or they might never do it because the idea gives them the willies. The important thing, though, is that they have a choice. They can decide, and my sneaking suspicion is that many here in the West, given how attached we are to our traditional meats, would be reluctant to give them up for a bug burger, even if the insects are well processed.

And so, what I worry about, what Snowpiercer depicts, and what I think is almost inevitable given our race-to-the-bottom economic system, is that people with low incomes will be forced, through the accident of birth and the whims of a financial system over which they have no meaningful control, to exist on protein bars, shakes, and other processed products. At the same time, those who already possess substantial material and financial resources will find that anything resembling a decent cut of meat is theirs to enjoy without competition.

Ultimately, I worry about bug consumption’s class implications and social justice concerns. I don’t want to live like a tailie, and I certainly don’t want to have the idea sold to me as a benefit when, in reality, it constricts diet options even more than finances already do.

Can Voting Make You a Bad Person?

photograph of line at voting place

We hear it all the time: Every voter has the civic duty to cast a ballot. If you disagree, you’re in a very small minority. Over 90% of Americans agree that voting is important for being a good citizen. Yet for most voters, there is simply no chance that their ballot will swing the race. If you don’t live in a swing state, it is hard to convince yourself that your vote even matters.

But this overlooks a crucial insight: The way you vote may be making you a worse person, and this is especially true when it seems like your vote doesn’t matter. Now, I’m not saying that who you vote for makes you a bad person – what matters is how you arrive at that choice.

Let me explain. As a philosophy professor, my work focuses on how our actions shape our character. The ancient Greek philosopher Aristotle puts the point this way: When your habits are good, they can help you cultivate virtues like courage, wisdom, and generosity. But when your habits are bad, they will eventually make you a worse person – cowardly, foolish, and greedy, amongst other things.

What does this have to do with voting? How we vote becomes a habit. And as this ancient wisdom makes clear, shoddy voting can corrupt you. In particular, bad voting can make you intellectually irresponsible.

In many areas of your life, you have a duty to think well. If you’re a student, you should study for the test. If you’re a parent, you need to plan what your children will eat. Failing to attend to these obligations is careless, and doing so repeatedly will make you increasingly intellectually irresponsible.

The same is true of voting. When casting a ballot, voters have a responsibility to think well. In 1963, John F. Kennedy observed that the “ignorance of one voter in a democracy impairs the security of all,” and the story is no different today. Uninformed voting is a danger not just to our country but to democracy itself.

Are you acting responsibly when you cast a ballot? Almost certainly not. Voters display a shocking amount of political ignorance, struggling to identify even their own representatives. A 2022 Harvard study found that a third of voters cannot identify their U.S. senator or House representative.

Knowledge of state-level representatives is far worse. A 2018 study by Johns Hopkins discovered that only 28% knew their state representative, while just 19% knew their state senator. This data suggests that most of us are not responsible voters.

A confession is in order. I am in need of some ancient wisdom myself. I looked up the answers to all of these questions while writing this piece. Of course I have excuses. Just this semester, I moved states, had my first child, and started a new job. But we all have a lot to do. If I didn’t write this piece, would I have known my representatives by November 5th?

Why are you (and I) ignorant of basic political facts? Researchers have called this phenomenon rational ignorance. When you buy a new car, carefully considering your options will clearly benefit you. But because your vote won’t swing the race, large democratic elections can make us careless and irresponsible. It is rational for you to do something else, practically anything else, rather than become politically educated.

Ironically, emphasizing the duty to vote only makes this worse. Because there is a considerable social cost for not voting, you still feel the pressure to head to the polls, no matter how ignorant you might be, further entrenching bad voting habits. And once careless voting becomes a habit, other forms of intellectual irresponsibility can take hold as well. It will be easier to rationalize away your candidate’s flaws or overlook scandalous behavior.

So how should we vote? The remedy may seem obvious: to be a better voter, you need to become more politically informed. The philosopher John Stuart Mill agreed, advocating that the most educated should receive more votes, with college graduates receiving as many as six votes each. While we might have concerns about giving some people more votes than others, we can agree that the better informed make better voters.

But informed voting isn’t enough. Paradoxically, educated voters can be more biased, using their superior education to rationalize voting for politicians that cater to their private interests. Plato even predicted that democracies would be susceptible to electing tyrants. Without strong intellectual characters, citizens can be swayed by emotional appeals and inflated campaign promises. These tools can then be used by dictators to gain and consolidate power.

For this reason, you should strive to develop not only your mental library but also your intellectual character. Memorizing a lot of facts is not the only ingredient of good thinking. Thinking well also requires open-mindedness, humility, and curiosity – traits that can correct for common flaws in political reasoning.

Open-mindedness can decrease polarization. When you are more willing to consider the viewpoints of others, you can better understand their perspective. Have a conversation with someone you disagree with, and commit to just listening.

Intellectual humility can reveal blind spots. By recognizing your own weaknesses, you will also be more conscious of your biases and limitations. Take a political bias test to see what viewpoints you might be neglecting.

Curiosity can reveal things we’ve missed. If you are curious, then you are more open to learning surprising information, including evidence that runs contrary to your political opinions.

And finally, be patient. Forming new habits takes time, and political issues are quite complicated, so cut yourself some slack. Practice this ancient wisdom one election at a time, and pretty soon you will find yourself not only becoming a better voter, but a better person as well.

Who Is Worthy of Power?

photograph inside Capitol Building dome

The U.S. election is approaching its culmination. The winner, almost overnight, will become one of the most powerful people in the world. This ritual of empowerment repeats for senators, representatives, and even some judges. Let us then step back and reflect on what kind of person we should invest power in.

Ancient thinkers were intensely interested in questions of virtue and what virtues rulers should possess. The Athenian philosopher Plato famously (and perhaps a bit conveniently) argued for “Philosopher Kings” in his Republic — rule by lovers of wisdom. Wisdom for him concerns the ability to discern the best course of action and provide good counsel. Plato’s ideal ruler is connected with his vision of an ideal society, for a ruler should embody their role in the larger society and act for the good of that society rather than themselves.

The most feared ruler for Plato is the tyrant. More than simply a bad or authoritarian ruler, Plato’s tyrant is a kind of psychological monster. Achieving power through low cunning, flattery, and lies, they are ruled by their appetites for food, power, material wealth, and even violence, rather than by reason. The wants of a tyrant are endless — they can never have enough power or enough pleasure — and thus if not checked, they can consume all around them.

Beyond classical Greek and Roman philosophers, ancient Chinese philosophers such as Confucius and Mencius also considered the virtues of the ruler. Especially central for them was benevolence: caring for the well-being of others and acting for the benefit of the people.

Across both the ancient Greek and Chinese philosophical traditions, learning and growth are pivotal. The ideal ruler is not only morally virtuous but actively strives to cultivate their virtues and improve themselves. This relates to humility. For example, because the Philosopher King loves truth and right action rather than simply appearing right, they listen to the wisdom of others and are willing to change their mind. In ancient traditions, the virtuous leader also often serves as a moral exemplar — someone worth emulating for how to live ethically and well. (Both Confucius and Plato emphasized, though, that people should live well within their station, so following a moral exemplar entailed applying the exemplar’s lessons as appropriate to one’s own circumstances.)

For hundreds of years after these philosophers of antiquity, a strong connection remained between individual moral virtue and good leadership. There is, however, a darker undercurrent to this, especially during medieval times: by extolling the virtue of rulers, such philosophy could legitimize hereditary rulers, who ultimately possessed power through accident of birth. The link between individual morality and leadership was weakened by the Renaissance-era Italian philosopher Niccolò Machiavelli, from whose name we derive the adjective Machiavellian, meaning amoral and manipulative.

His famous text The Prince, written as advice for princes (especially a prince just assuming power), advocates scheming and ruthlessness in the pursuit and maintenance of political power. Precisely what Machiavelli intended remains unclear. Was it legitimate practical advice? Was he slyly ridiculing the ruling class? Regardless, the book raises an important consideration about who we want to hold power.

Perhaps, rather than simply having someone virtuous and of excellent moral character, we want someone who will do what is needed even if they have to engage in a bit of skullduggery.

This echoes later political thinking such as Realpolitik, which emphasized practical considerations in political decision-making over ethical ones.

But morals cannot be escaped so easily. Rather, a good ruler, when necessary (and only when necessary!), may engage in unethical actions in service of the greater good. That these unscrupulous actions are done for a defensible higher reason is paramount. Someone who is simply okay with lying, cheating, and stealing for personal gain cannot be trusted to achieve worthwhile political ends. By contrast, someone who understands with due seriousness that personal morals may occasionally need to be compromised for the good of the people and the nation could be more worthy of power.

Put differently, a willingness to compromise ethics alone is clearly not laudable. A willingness to occasionally be strategic or flexible in order to achieve larger goals of merit perhaps is.

Impressionistically, philosophers have been progressively less attentive to the moral character of leadership since Machiavelli. There are several possibilities here. First, with the growth of new approaches to ethics such as utilitarianism (the greatest good for the greatest number), ethics becomes about something one does, rather than a virtue or character trait. Second, with the rise of modern democracies, political power is vindicated by popular mandate and no longer needs to use moral virtues as a justification for holding power. This shifts focus from individual character to political process. Third, especially when all candidates are of acceptable character or personal moral virtue is unclear, elections may center on political ideology and policy.

But none of these developments negates the foundational importance of character in considering who should be invested with power. Whatever your ethical system, you need to rely on a leader having the disposition to actually try to achieve your ethically preferred results. And even if holding power is no longer legitimized through ethics and virtue, that does not make such characteristics irrelevant to the question of whom we want to hold power.

The role of policy and political ideology merits more scrutiny. For it might be objected that rather than electing politicians based on what they are like, we should elect them for what they will do.

However, the predictability of policies relies on personal characteristics such as honesty. If someone cannot be trusted to be truthful about their intended policies, or to earnestly and effectively pursue them, then voting based on policy is pointless. Moreover, even if one can be trusted, the policy process is enormously complicated and uncertain, and depends on the machinery of governance working just so. At best, policy goals are more indicative of what someone stands for than what they will do. Most importantly, policy cannot always account for the unexpected. One significant function of a leader is to react to changes and emergent challenges. Will they act effectively in the interest of the people under conditions of uncertainty, or will they instead act ineffectively, or even exploit the crisis for personal gain? This is not a policy matter.

There is a related concern: modern governance is complicated, involving many people and underlying factors rather than singular leaders. This does not negate the importance of worthiness to hold power, but rather suggests that powerful people may be less individually powerful than we might think. Assuredly, figures such as governors, senators, and presidents are not personally making every decision within their domain, but they still have an impressive capacity to make decisions if so inclined. Moreover, the relevance of other people cuts both ways. If we think someone is morally rotten and unworthy of power, why would we think they surround themselves with people who are worthy of it, or that the worthy would support them?

Elections are an investiture of power. Consider what traits make someone worthy (or unworthy) of power. There is a compelling case that foundational moral considerations, such as integrity, wisdom, and beneficence, should stand above policy and even political ideology. For if we cannot rely on a basic commitment to the well-being of the people, then all else is suspect.