Should You Have the Right to Be Forgotten?
In 2000, nearly 415 million people used the Internet. By July 1, 2016, that number was estimated to have grown to nearly 3.425 billion — about 46% of the world’s population. Moreover, there are now about 1.04 billion websites on the World Wide Web. Maybe one of those websites contains something you would rather keep out of public view, perhaps some evidence of a youthful indiscretion or an embarrassing social media post. Not only do you have to worry about friends and family finding out, but now nearly half of the world’s population has near-instant access to it, if they know how to find it. Wouldn’t it be great if you could just get Google to take those links down?
This question came up in a 2014 court case in the European Union. A man petitioned for the right to have Google remove from its search results a link to an announcement of the forced sale of one of his properties, arising from old social security debts. Since the sale had concluded years before, he believed the information was no longer relevant and asked Google to remove the link. Google refused. Eventually, the court sided with the petitioner, ruling that search engines must consider requests from individuals to remove links to pages that result from a search on their name. The decision recognized for the first time the “right to be forgotten.”
This right, legally speaking, now exists in Europe. Morally speaking, however, the debate is far from over. Many worry that the right to be forgotten threatens the dearly cherished right to free speech. I think, however, that some accommodation of this right is justified by appeal to the protection of individual autonomy.
First, what are rights good for? Human rights matter because their enforcement helps protect the free exercise of agency—something that everyone values if they value anything at all. Alan Gewirth points out that the aim of all human rights is “that each person have rational autonomy in the sense of being a self-controlling, self-developing agent who can relate to other persons on a basis of mutual respect and cooperation.” Now, virtually every life goal we have requires the cooperation of others. We cannot build a successful career, start a family, or be good citizens without other people’s help. And since an exercise of agency that has no chance of success is, in effect, worthless, effectively enforcing human rights means ensuring that our opportunities to cooperate with others are not severely constrained.
Whether people want to cooperate with us depends on what they think of us. Do they think of us as trustworthy, for example? Here is where “the right to be forgotten” comes in. This right promotes personal control over access to personal information that may unfairly influence another person’s estimation of our worthiness for engaging in cooperative activities—say, in being hired for a job or qualifying for a mortgage.
No doubt, you might think, we have a responsibility to ignore irrelevant information about someone’s past when evaluating their worthiness for cooperation. “Forgive and forget” is, after all, a well-worn cliché. But do we need legal interventions? I think so. First, information on the internet is often decontextualized. We find disparate links reporting personal information in a piecemeal way. Rarely do we find sources that link these pieces of information together into a whole picture. Second, people do not generally behave as skeptical consumers of information. Consider the anchoring effect, a widely shared human tendency to attribute more relevance to the first piece of information we encounter than we objectively should. Combine these considerations with the fact that the internet has exponentially increased our access to personal information about others, and you have reason to suspect that we can no longer rely upon the moral integrity of others alone to disregard irrelevant personal information. We need legal protections.
This argument is not intended to be a conversation stopper, but rather an invitation to explore the moral and political questions that the implementation of such a right would raise. What standards should be used to determine if a request should be honored? Should search engines include explicit notices in their search results that a link has been removed, or should it appear as if the link never existed in the first place? Recognizing the right to be forgotten does not entail the rejection of the right to free speech, but it does entail that these rights need to be balanced in a thoughtful and context-sensitive way.
Long Distance Information, Give Me Memphis, Tennessee
This is the second in a series on American History and the Ethics of Memory. This post originally appeared on September 15, 2015.
Warner Madison doesn’t trust the police. He thinks they view all black people with suspicion, harass them on the streets, and arrest them without cause. When police accost his children on their way to school, he can barely contain his anger. He fires off a letter protesting what he calls “one of the most obnoxious and foul and mean” things he has ever witnessed. But he possesses little hope that police treatment of African Americans in his city will change.
What does Warner Madison think of Trayvon Martin, Eric Garner, Michael Brown, Walter Scott, Freddie Gray, Samuel DuBose? There’s no telling. He’s been dead for probably more than a century. Warner Madison’s outrage about race and policing came to a head just after the Civil War, in 1865, when he was a 31-year-old barber living in Memphis, Tennessee.
Memphis isn’t among recent flashpoints—Ferguson, North Charleston, Baltimore—nor does it crop up among the place names associated with racial violence of decades past—South Central, Crown Heights, Watts. In collective memory of American history, Memphis figures as the scene of a lot of great music and the assassination of Martin Luther King. Yet the Memphis of Warner Madison’s time is an essential, though largely forgotten, part of understanding race and policing in America.
A recent New York Times article suggests Americans are living through “a purge moment” in their relationship to history. Especially since the June shooting at Emanuel AME Church in Charleston, icons of slavery and the Confederate States of America have been challenged and, in many cases, removed: the Confederate battle flag from the South Carolina statehouse grounds and, most recently, a statue of Jefferson Davis from the University of Texas at Austin’s Main Mall. At Yale University, students are now debating whether to rename Calhoun College, which honors a Yale alumnus who is best known as the antebellum South’s most ardent defender of slavery.
Skeptics are concerned about “whitewashing” the past or hiding important if controversial aspects of our history in “a moral skeleton closet.” (At the extreme end, one can find online commenters likening such reconsiderations to ISIS’s destruction of pre-Islamic antiquities in Syria.) But there’s little harm and often much good in collective soul-searching, as among the students at Yale, about how best to remember a community’s past and express its shared values—whatever decision that community may finally come to about its memorials. And, as I argued here previously, memory is a scarce resource. Figures like Jefferson Davis and John C. Calhoun shouldn’t be forgotten, but their public memorialization, in statues, street names, and the like, keeps them before our eyes to the exclusion of things we could be remembering instead—things we might find more useful for understanding the world around us.
Take Memphis in the 1860s. A southern city that fell to Union forces only about a year into the Civil War, in June 1862, it became a testing ground for the new order that would follow the destruction of slavery. As word spread across the countryside that the reputed forces of freedom had taken Memphis, African Americans flocked to the city. By the end of the war, Memphis’s black population had grown from 3,000 to 20,000, and surrounding plantations were proportionately emptied—to the dismay of cotton planters who needed laborers in their fields.
White authorities forcibly moved former slaves back onto the plantations. How? Using newly invented laws against “vagrancy.” Memphis blacks who could not prove gainful employment in the city were deemed vagrants, and vagrants were subject to arrest and impressment into the agricultural labor force.
Vagrancy had existed as a word and a phenomenon for centuries. In the post-Civil War South, it became a crime. Vagrancy laws were mainstays of southern states’ “black codes” during the late nineteenth century, because they helped white supremacists restore a social order that resembled slavery. Black men without jobs were guilty of the crime of vagrancy simply by going outside and walking down the street. Once arrested and imprisoned, they could be put on chain gangs—and white southerners could once again exploit their unpaid labor.
It’s not surprising that former Confederates were responsible for criminalizing black unemployment. But so was the Freedmen’s Bureau—the federal agency expressly charged by Congress and Abraham Lincoln with assisting former slaves in their transition to freedom. The bureau’s Memphis superintendent wrote in 1865 that the city had a “surplus population of at least six thousand colored persons [who] are lazy, worthless vagrants” and authorized patrols that were arresting black Memphians indiscriminately.
When some of the leading black citizens of Memphis had seen enough of this—men taken away to plantations at the points of bayonets, children stopped on their way to school and challenged to prove they weren’t “vagrants”—they called on the most literate people among them, including Warner Madison, to compose petitions to high-ranking federal officials. Paramount among their grievances was the harassment of “Children going to School With there arms full of Book[s].” To the African American community, freedom meant access to education. But to whites—even the very officials responsible for protecting black rights—freedom meant that African Americans needed to get to work or be policed.
Clinton Fisk, a Union general during the war, now oversaw Freedmen’s Bureau operations in all of Tennessee and Kentucky. He replied politely to the letters he received, but he never credited or even mentioned the reports of abuse Warner Madison and his compatriots provided. He asked his subordinate in Memphis to investigate, and the unbothered report came back: “I can find no evidence whatever that School children, with Books in their hands have been arrested, except in two or three cases.” Even Clinton Fisk—an abolitionist who so strongly advocated African American education that Fisk University bears his name—failed to affirm that having a book kept a person from being vagrant.
This period of conflict culminated in the Memphis riots of 1866—an episode that ought to be infamous (and is the subject of a few good books) but is generally absent from public consciousness. White Memphians initially assumed the “riots” were a black protest that turned violent, but what actually occurred in Memphis on the first days of May in 1866 was a massacre of black men, women, and children by white mobs, among whom were many police officers.
In failing to remember 1860s Memphis—failing even to know the name of someone like Warner Madison, after whom no highways or elementary schools are named—we fail to remember that the federal government once made it the special province of law enforcement agents to accost African Americans in public places. Without remembering that, we cannot apprehend the complexity and durability of the problems underlying current events.
What we now call “racial profiling,” and even the appallingly frequent uses of lethal force against black citizens, may result less from the implicit bias of police officers than from a historical legacy. Abiding modes of law enforcement and criminal justice, brought to us by nineteenth-century white Americans’ anxieties about the abolition of slavery, were designed to treat black people walking freely on city streets—unless they were being economically productive in ways white people approved—as social threats.
Racism may be only a partial explanation. Some of the people who were arresting “vagrants” in Memphis were African American—they were soldiers in the U.S. Army acting under Freedmen’s Bureau orders—and so are three of the six Baltimore police officers charged with the death of Freddie Gray. Blame may rest, too, with habits of mind upon which few people even frown—like taking gainful employment as a measure of human worth (a pernicious corollary of the belief that markets possess wisdom), or presuming that someone must be up to no good if he has (in Chuck Berry’s words) no particular place to go.
(How) Does Capitalism Incentivize? Part II
This post originally appeared June 16, 2015.
My last post discussed the bifurcated incentive structure of capitalism: owners profit while workers become disempowered by working harder. In this post, I want to address a myth that accompanies the myth that capitalism compensates you better for working harder: the myth that collective ownership divests individuals of the motivation to work.
People say that the problem with collective ownership as an incentive to work is that no one takes responsibility: if you don’t own a thing, you won’t care to maintain it. But the incentive in capitalism isn’t that you work on a thing because you own it; you work because otherwise you will starve. The ideology here is that we are working on our own thing and that we are more invested in it because it is ours. This is true in capitalism for the self-employed and small business owners—the middle class—but the middle class has shrunk considerably. A 2011 Pew Charitable Trusts study shows that a third of those raised in the middle class (between the 30th and 70th percentiles of their state’s income distribution) fall out of it in adulthood. A recent article in The Washington Post on the cost of college shows that it isn’t that college costs have risen but that the purchasing power of the middle class has shrunk.
The second myth—that collective ownership divests individuals of the motivation to work—follows from the failure to think the collective as such. Instead of having the value of your work product go into the pocket of the owner, suppose that you own the means of production collectively with all the other workers—that is, suppose you live in communism. Ending the division between the worker and the owner, and thus the cross-purposes between them, would end the opposition between the work and the benefit of the work. It would incentivize you to work not because you would otherwise die, but because you would in fact reap the rewards of your work—though the “you” here is a collective you, not an individual worker. Capitalism lives and dies on getting the worker to see herself more as an individual than as part of a collective. If the collective is something we have to struggle to conceive in order to recognize, so is the individual; capitalist ideology has just been better at achieving it, by limiting the possibilities for subjectivization to that of the individual, as Jodi Dean argues in Political Theory and as I discuss here.
There are two ways we think that collectivity fails to incentivize: the first is a fear that some will get more than they deserve, and the second is a concern that what is held in common won’t be taken care of. In one of the earliest defenses of private property, Aristotle argues in Politics II.5 that if citizens communally work the land and communally enjoy its profits, there will be resentment when citizens who do less take more (Pol. 1263a9-14), and further that people tend to care for what is their own. Note, though, that even in that situation, “friends share everything in common” (Pol. 1263a29).
Both of these concerns—getting more than one deserves and failing to care for what is held in common—seem to be about desire and motivation. And they assume that the desires and motivations of individuals are at odds with those of the community. How do we make people work? How do we make people care? Those who say that capitalism incentivizes are saying that the way to make people work and care is to threaten them with death. So much for right replacing might.
We think the individual’s desires and motivations are opposed to the collective’s—indeed, we deny that there are any collective desires and motivations at all—because we have already interpellated the subject as an individual. We then deny that the subject became an individual through this process and suppose instead that individuality is natural and given. Finally, we foreclose the possibilities for interpellating the subject as the collective (most of pop culture is an engine for this foreclosure). This foreclosure justifies the anxiety that the individual will try to take advantage of the community, finding her ends at odds with its ends.
My point here is not to contribute to the imaginative work of conceiving the collective, but to argue that capitalism produces the very conditions—the individual as the only conceivable way of thinking the subject, the individual’s desires at odds with the community’s—for which it then presents itself as the only solution.