We like to rate each other. We rate restaurants on Yelp, drivers on Lyft, and movies on Rotten Tomatoes. And these ratings can help us make decisions. Google Reviews can guide me away from the worst coffee shops, and IMDb can tell me that the fourth Jaws movie was a flop (okay, maybe I could have figured that out on my own).
With all of this rating going on, wouldn’t it be helpful if we rated how ethical other people are? We already keep personal track of who treats us well and who doesn’t, and a shared moral score would help us make better decisions. Knowing the moral scruples of others could help us make friends, choose who to date, and avoid getting ripped off.
But even though lots of ratings are useful, I don’t think that giving each other a moral score is a good idea. In fact, I think it might make us even more unethical.
There are many different kinds of rating systems. Yelp, Lyft, and IMDb are all crowd-sourced and numerical. Grades are also ratings of a sort, scoring how well a student did in a particular class, but they are not crowd-sourced; they depend solely on the judgment of the instructor. Other rating systems rely on expert opinion, like the judges in figure skating or the critics on Rotten Tomatoes, while still others, like the MCAT and the LSAT, are produced by paid professionals who write the exams and scoring rubrics.
One rating project that has been gaining traction recently is measuring virtue. Psychologists interested in moral character have been working on how to score virtues like honesty and humility, using strategies ranging from self-report surveys to ratings supplied by close associates. And it would undoubtedly be helpful to know how ethical people are. Not only would this information help us decide who to trust, it could also guide interventions designed to make us even better. But is developing such a measure possible, and would it be a good idea?
With all rating systems comes the worry of “value capture.” Value capture redirects our attention from what we originally cared about to the rating itself. Maybe I enrolled in a class because I was interested in the material, but now all I care about is getting a good grade. Or maybe I got on Instagram to share pictures with my friends, but now I’m just in it for the likes. In these cases, chasing the ratings co-opts and corrupts my original goals and desires.
But just as rating systems come in many varieties, the severity of the problems value capture creates varies as well. In some cases, a rating system can encourage exactly the kinds of behavior we want to see. Restaurants pursuing higher Yelp ratings will likely be more attentive to their customers, and Airbnb ratings can drive down prices while improving the quality of a weekend vacation.
But the problem of value capture makes other kinds of rating systems almost useless. Imagine, for example, a system that rated hotels but accepted only a single rating per hotel, submitted by its owner. Because owners have a strong financial incentive to rate their own hotels highly, there would be little to learn from these scores.
And not only are some rating systems borderline meaningless, but others can make things worse than if no rating system had been introduced in the first place. Social media “likes,” while encouraging further engagement with the platform, may be helping to drive a youth mental health crisis.
So what about measuring virtue? As in our other examples, introducing a rating system can create a value capture problem: instead of aiming at actually being a good person, we may start aiming at just getting a good score.
But it may be even worse than that. When it comes to ethics, we are already primed for value capture. If we know others are watching, we are less likely to cheat, lie, and steal. This suggests that often we don’t want to be ethical so much as to appear ethical. And that makes sense: appearing to be a good person gets us all the benefits of a great reputation with none of the costs of doing the right thing when no one is looking.
Because we already want to appear virtuous, the value capture potential of virtue scores is even more potent. But how seriously should we take this concern? Would value capture here generally improve our lives, as with ratings for Airbnb stays, or would it make things worse, as with likes on social media?
An ethical rating system has the potential not only to sidetrack our values but also to make us less ethical. An important part of being a good person comes down to our motivations. If someone regularly does the right thing because they care about others, that practice can gradually mold them into a better person. On the other hand, if someone regularly does what they think other people want to see, it can actually make them worse, undermining their integrity and making them more deceptive.
In this way, virtue scores could actually make us worse by corrupting our motivations. Those who start out wanting to appear virtuous will only become more duplicitous, never doing the right thing for its own sake. And those who start out wanting to do the right thing will slide, slowly but surely, into doing it because they want to achieve a high score.
Because the concerns of value capture are even more acute in the ethical domain, we should think carefully about whether and how we rate the virtues. It might be possible to measure how ethical we are, but by introducing such a measure, we might also just make things worse.