
Xenotransplantation: Lifesaving Breakthrough or Profit Engine?

The idea of using animal cells, tissues, and organs in human transplantation efforts — today known as xenotransplantation — is an old one. In the 1600s, Jean-Baptiste Denys, a French physician to King Louis XIV, began transferring blood from lambs into people. Remarkably, this didn’t always prove fatal, and we know that a 15-year-old boy survived the procedure. Others, however, were not so lucky. Eventually, Denys was banned from performing further experiments by a French court (with the backing of the Paris Faculty of Medicine), and then by the French Parliament. Following France’s lead, the British government banned such transfusions shortly afterward.

In the 1900s, several attempts were made to cure people’s ills using animal materials, to varying degrees of… well, success isn’t the word; let’s say not failure. Serge Voronoff thought xenotransplantation could cure aging and its associated effects on men’s sex drives. His method involved implanting bits of monkey testicle into the scrotums of willing participants in the hope of restoring those men’s lost vigor. John R. Brinkley tried a similar procedure, but rather than use monkey tissue, he would implant bits of goat testicle. Voronoff and Brinkley made a lot of money from offering these procedures, with the latter going on to have a modestly successful career in radio and politics.

However, not all modern attempts at xenotransplantation involve the restoration of sexual vigor. In 1963-64, surgeon Keith Reemtsma conducted thirteen chimpanzee-to-human kidney transplants. None of these were what we’d probably call successful, with no patient surviving beyond nine months. Most died shortly after the operations, but the fact that a reputable surgeon even tried them was remarkable.

Then there was the case of Baby Fae, who, having been born with hypoplastic left heart syndrome, received a baboon heart in 1984 in a last-ditch attempt to save her life. Fae survived the surgery but died 21 days later. While it failed to save her life, this single operation, conducted at Loma Linda University, changed perceptions of xenotransplantation. No longer was it the preserve of quack scientists or charlatans hoping to find fame and fortune. Instead, it was a theoretically possible solution to one of the biggest problems within medicine, both then and now — the organ shortage crisis.

Now, xenotransplantation is not a new subject here at The Prindle Post, with Goodwine and Arnet having written about it before. But, given the advancements that have been made in the practice over the past couple of years, I wanted to draw the Post back into the xeno-world.

Arguably, the most significant change in the past few years has been the success rate. Gone are the days when fringe scientists (or pretenders to the title) would implant whichever bit of animal they thought would cure whatever problem the patient in front of them had. Today, it is gene-edited pigs that provide the bulk of organs being used by researchers and surgeons. These pigs, raised in clinical conditions to control for variables, have their genetics altered so that, when an organ, like a kidney, is removed from them and put into a person, that person’s immune system doesn’t immediately reject the transplanted organ. And, while this practice is still very experimental and most often performed on compassionate grounds, it has started showing results.

To illustrate the rapid pace of development in this field, let’s look at two cases that, while relatively close in time, are worlds apart in terms of results.

On March 16th, 2024, 62-year-old Richard “Rick” Slayman underwent surgery to receive a kidney from a genetically altered pig. Slayman had been suffering from end-stage kidney disease, which meant that he had to undergo dialysis regularly. In addition, he had Type 2 diabetes and hypertension and had previously received a transplant from a human donor in 2018. However, that organ began to fail five years after implantation. So, in a last-ditch effort to save his life, and knowing that his actions could help future generations of people who may find themselves in similar predicaments, he underwent the experimental procedure. At first, things seemed to go well, with Slayman no longer needing dialysis as the new kidney began to function. Sadly, however, on May 12th, 2024, Slayman died, less than two months after surgery. A recent paper in the New England Journal of Medicine reports that Slayman died of cardiac causes and that there was evidence of organ rejection.

Fast forward to January 26th, 2025, and Towana Looney becomes the world’s longest-surviving xenograft recipient. Like Slayman, Looney needed a new kidney but had exhausted all other options. She, however, was lucky in that others, like Slayman, had gone before her, and the procedures they underwent gave Looney’s surgical team knowledge that improved her chances of survival. As far as I’m aware, Looney is still alive. This represents a remarkable change in outcomes in less than a year. And if the hype is to be believed, similar organs could one day be rolled out to those who need them. Indeed, the US Food and Drug Administration (FDA) has been so swayed by these compassionate applications of xenotransplantation that, earlier this year, it gave the green light for the United Therapeutics Corporation to begin clinical trials. This regulatory approval marks a notable change in circumstances and perceptions from the days of Denys, Voronoff, and Brinkley.

But it is not all sunshine and rainbows with xenotransplantation. Putting aside the significant additional suffering that pigs must endure to be suitable for our needs, substantial risks come with rolling out this biotechnology, much as they do with any other.

For my part, one of the biggest concerns is the perverse incentive businesses will have to maximize profits when producing these genetically altered organisms. Once large corporations start to profit from producing pigs for xenotransplantation, they will be highly resistant to changing their practices for any reason other than securing more profit. Any bad practices that get baked into the system at this early stage will be difficult to modify unless doing so saves the producing companies money. This, in turn, will make it difficult to purge any animal welfare problems present at the sector’s genesis. If this position is a little too pessimistic for you, I would urge you to look at the farming sector and the incredible effort it takes to make even the smallest changes that, while improving animal welfare, harm corporate profits.

Businesses exist to make money, and they resist anything that prevents them from doing so as efficiently as possible. In the context of xenotransplantation, once companies start to make money from rearing pigs and selling their organs, they will resist changes that might hurt their bottom line, even if such changes are better for the pigs or for everyone else. A similar pattern can be seen in other areas of the health sector, where multinational conglomerates lobby lawmakers against policies which, while having broad appeal, would constrain those companies from making as much money as possible. As noted in a paper by Bobier, Merlocco, Rodger, Hurst, and myself:

the pharmaceutical industry spent hundreds of millions of U.S. dollars to try and prevent the passage of the Inflation Reduction Act of 2022, which allows Centers for Medicare and Medicaid Services to negotiate drug prices, regulate insulin prices, and cap out-of-pocket spending for Medicare recipients, all of which would promise to reduce national health spending and help individuals afford health care.

Ultimately, while xenotransplantation would likely prove hugely beneficial for those needing the organs, it could act as a barrier to further, more systemic changes necessary for the betterment of all. After all, while it would be great to meet the organ shortage, it would be even better to identify why so many organs are needed and solve the issue at its source. However, doing so would likely impact the interests of the companies that stand to gain from xenotransplantation’s rollout.

So, then, the question I want to leave you with is this: is it better to use xenotransplantation to meet the organ shortage crisis and save countless lives, or would we be better served by trying to deal with the socio-economic, political, health, and environmental factors that mean so many of us need organs in the first place? The latter may seem like the correct choice, but as the stories of Slayman and Looney illustrate, it’s far easier to change the nature of an organ than it is the churn of our economic system.

No More Patience, No More Books

Confession: I’m a first-year English teacher who still thinks about that viral Atlantic article from October: “The Elite College Students Who Can’t Read Books.” In it, journalist Rose Horowitch sifts through a flurry of interviews with post-secondary instructors and, ultimately, generalizes today’s undergraduates as unpracticed in reading entire literary works from beginning to end.

The piece’s cited professors are not bemoaning the loss of literacy as a skill; they perceive students to be capable of the processes constitutive of “reading,” such as decoding and comprehending language. The worry, rather, is that today’s learners seem averse to reading long-form text. As Horowitch posits, “Students can still read books […] [T]hey’re just choosing not to.”

What gives? More broadly, what’s the value in engaging in a long-form, meditative activity like reading? And how ought schools — and the rest of us — evaluate and respond to its allegedly waning popularity?

First, let’s look at schools.

Educational policy, one of many influences on students, exists to support and raise the standard of academic achievement. Consider, for example, the frequently cited No Child Left Behind (2001) and Common Core (2009) initiatives. Enacted with bipartisan support, and praised and critiqued across party lines ever since, these policies remain steady shapers of 21st-century American schooling, catapulting standardized testing — a projected $1.7 billion industry — into generation-defining prevalence.

These two programs sought to eliminate achievement gaps and synchronize learning goals across public schools in the United States. As measures of progression toward these ends, students take state and federal assessments multiple times in one school year. The tests in language arts, for example, ask students to read short-form passages and demonstrate comprehension by bubbling answers to multiple-choice questions.

Rather than stew over their empirical efficacy, let us mull over a particular implication: a test populated with short-form reading rewards analytical prowess (and test-taking deftness) over the gains of long-form reading of particular literary works. And if tests deprioritize the endurance required for novel-reading, then it is no wonder that a focus on lengthy text becomes academically disincentivized.

But is blame on testing disproportionately dealt? While a large sum, the $1.7 billion spent on testing amounts to roughly 0.3% of American K-12 education’s estimated $600 billion in overall annual spending. Furthermore, these reforms do not actually preclude teachers from assigning whole books. We do not, in fact, know how many primary and secondary teachers do (not) involve novel-reading in coursework — the piece expresses a shared sentiment rather than hard data. And it’s not as if we no longer wish for our students to be held to some kind of standard; perhaps there is room for favoring components of standardization without fully endorsing today’s assessment methods.

And yet, school administrator Mike Szkolka’s words maintain a poignant tug: “There’s no [standardized] testing skill that can be related to […] Can you sit down and read Tolstoy?” But therein lies another query: In what sense do students actually need to?

Because for a number of learners, reading doesn’t hold perceptible value. “Students today are far more concerned about their job prospects than they were in the past,” Horowitch writes. “Every year, they tell [Professor] Howley that, despite enjoying what they learned in Lit[erature] Hum[anities], they plan to instead get a degree in something more useful for their career.”

Career-directedness seems to suggest a revaluation of education. School as a vehicle primarily for job acquisition implies that we most value what others value in us. We are reactive, not active, in seeking knowledge. In other words: it is a norm to prioritize flourishing within one’s outer rather than inner world. Do we want this to be the case? Ought employability be both the means for and end of one’s existence?

At face value, perhaps the asking of such questions appears — or is — inaccessible or elitist; the average person cannot afford to value character-cultivation at the expense of a paycheck. However, this mutual exclusivity might suggest a false dichotomy. While it would be inaccurate to suggest all a student “needs” is a soul-stirring novel, it also seems corrupting to postulate that a human’s only purpose is survival. Isn’t it fishy to suggest that forgoing purposeful reflection is the price of work?

Students deserve more from us than a scarcity mindset. Surely, schooling has failed if both material and immaterial needs go unmet. In fact, books afford “deep reading — sustained immersion in a text — which stimulates a number of valuable mental habits, including critical thinking and self-reflection, in ways that skimming or reading in short bursts does not.” Further, as the Atlantic piece suggests, sticking with one character “through their journey” produces benefits like an enlarged capacity for understanding others’ feelings. Bal and Veltkamp’s 2013 study, for instance, found that high narrative transportation — immersion — in fiction leads to higher empathy.

Even the most eager students, however, face an oft-spotlighted modern obstacle: shrinking attention spans.

In assigning blame, it feels instinctive to point fingers at algorithm-propagated short-form entertainment. But to be charitable to today’s learners — who are often berated for their addiction to the devices adults handed them — technological hyperfixation does not indicate that adolescents are necessarily allergic to literature (they aren’t!). Rather, it seems plausible that perpetual (short-lived) satisfaction simply begets a low tolerance for boredom. For instance, a personalized feed of sub-one-minute videos analyzing a book (with sound, color, and jump cuts) might more quickly deliver gratification than the quiet, meditative act of reading one.

Even still, maybe you’re tired of blaming phones. Perhaps the screens are mere scapegoats for our larger economic sway toward fast-paced, immediate meeting of needs. (Who needs patience when the internet can predict your next delivery?) In truth, the introduction of any new medium seems to prompt alarm — consider the frenzied responses to radio and television.

But a patterned emergence of new-media-induced perturbation doesn’t mean these panics are without reason; after all, it was social media, not the printing press, that’s been dubbed a “dopamine machine.” Exerting self-sufficient, methodical mental activity (e.g., reading) is a hard ask if our phones are already adept at tickling our brains for us. Still, this focus on distractibility might itself distract from the more pervasive force at this conundrum’s root: a cultural devaluation of reading. And if this is the case, then it is worth considering what it says about ourselves — and what we’re losing.

Devaluing reading implies a valuation of something — or things — else. Our values are most saliently reflected in how, and to whom, we give our attention; there is a real worry, it seems, about the constant next-stepping and “checking out” embedded in many of today’s minute-by-minute priorities of focus. Staring at one’s phone, staring at a multiple-choice test, and staring at a resume are all — whether you like it or not — endorsements of those activities. And while such endorsements necessitate neither desire nor aspiration to pursue excellence in the activity, engagement nonetheless indicates and cultivates (unconscious) value and habit.

But the thoughtful engagement required to read or discuss a novel seems nearly as deliberate and life-affirming as laughing or taking a mindful stroll. Perhaps that is the wish for our kids — that they are educated in the contemplative, patient, and gracious habits of (re)grounding themselves in the reality we find ourselves too often evading, devaluing.

So for all of our sakes, throw a paperback in your tote the next time you leave. Read it. Talk about it. Normalize it. Allow the opportunity for a friend, colleague, or even a rogue middle schooler to catch the contagion themselves. The simplest act of resistance against a future we’d like to avoid is to be stewards — not mere pontificators — of the habits we hope the rest of the world might also cultivate.

What Dirty Dishes and the Paris Climate Agreement Have in Common

Back when I shared a home with roommates, dirty dishes were a constant source of strife. I had many roommates who loved to cook, but were largely uninterested in the clean-up that followed. (Of course, I was never part of the problem. I was the model roommate…) But, nevertheless, the dishes would pile up until there wasn’t a single clean plate or cup in the house, and we would inevitably have to ask: who’s going to clean up this mess?

In many ways, the climate crisis is a lot like those stacks of dirty dishes spread across our kitchen counter. Both are problems that result from a series of collective actions (or inactions). Further, both problems – if left unaddressed – lead to serious ramifications down the line (though of vastly differing severities).

By withdrawing the U.S. from the Paris Climate Agreement, President Trump is like the roommate who, upon seeing the stack of dirty dishes, slowly backs out of the kitchen and leaves everyone else to figure out the solution. But is this so bad? Why think the U.S. is under any kind of obligation to help clean up the mess in the first place?

When it comes to addressing collective action problems, there are a number of different approaches we might take. Perhaps the most straightforward solution would have been to split the task of washing the dirty dishes evenly among my roommates and me. This would certainly solve the problem. But some of us would have objections. Why, for example, should the roommate who used only a single plate for each and every meal have to clean as much as the roommate who created an elaborate four-course dinner for a handful of his friends? In other words, we might be skeptical that an even share is a fair share. Why? Because each roommate’s contribution to the problem wasn’t equal.

Perhaps, then, a fair solution needs to take into account how responsible each party is for the problem. If one roommate was responsible for producing half of the dirty dishes, then they should be responsible for cleaning half of the dishes. This is, essentially, the common notion that we have a duty to clean up our own mess. In discussions of environmental ethics, this approach is commonly referred to as the Polluter Pays Principle (PPP). Applied to the climate crisis, the PPP would require those who have historically produced more emissions to shoulder a greater share of the burden of solving the problem (both by reducing future emissions, and contributing financially to the cost of addressing the crisis).

With total historical emissions of just under 400 billion tonnes CO2, the United States is responsible for a full quarter of global cumulative emissions – more than double that produced by China. The PPP, then, would hold that whatever needs to be done to address the climate crisis, the U.S. is on the hook for a quarter share of the burden.

Yet we might have concerns with this approach too. Consider that roommate who cooked the four-course dinner for friends. While they might have been responsible for producing the mess, they were not the only one to receive the benefits of that mess. That roommate might argue that their friends – those who enjoyed the elaborate degustation – should now be held accountable for cleaning some of those dishes. This intuition can be captured by a different approach, commonly referred to as the Beneficiary Pays Principle (BPP). According to this principle, the burden of cleaning up a mess shouldn’t just fall on those who made the mess, but rather on those who benefited from that mess.

In the context of the climate crisis, this shift from the PPP to the BPP makes little difference to the onus placed on the United States. The term “carbon majors” refers to a collection of the world’s largest oil, gas, coal, and cement producing corporations – that is, those businesses that have received the greatest financial benefit from the climate crisis. Of these 122 carbon majors (who are, collectively, responsible for 72% of all historical emissions), more than 27% are housed within the United States. But the benefits extend further than this. The United States economy has flourished as a direct result of its dependence on carbon-intensive industry, creating myriad benefits for those who live here. According to the BPP, then, it is only fair that we in the U.S. take on a substantial portion of fixing the problem we benefited from.

But suppose that, on the morning my roommates and I convene to discuss the dish dilemma, everyone is very busy. Some have class, while others need to get to work. In fact, the only person who has the time and resources to clean is me. In such a circumstance, my roommates might argue that I should be the one to shoulder the greater burden simply because I’m able to. This approach is referred to as the Ability to Pay Principle (APP), and it underpins many intuitive notions. It explains, for example, why we tend to place a greater expectation on wealthier people to donate more to charity. Why? Because they are more able to give up their resources. It also provides the foundation for the kind of progressive tax system we have here in the U.S.

Applied to the climate crisis, the APP would look not to those who contributed to or benefited from the problem, but would instead focus on those who are most able to help. Once again, however, this principle would prioritize the U.S.A. as needing to shoulder a substantial portion of the burden. With the eleventh-highest GDP per capita in the world, the U.S. is among the most able to contribute financially to solving the problem. What’s more, we also have the greatest ability to take on non-financial burdens like reducing our carbon emissions. Why? Because we have world-leading technological expertise available to transition effectively to renewable energy.

The upshot, then, is this: When it comes to the U.S.A.’s climate obligations, it doesn’t really matter whether we think it’s contribution or benefit or ability that determines who should solve the problem. Why? Because the U.S.A. is on the hook regardless of the approach we take. And that’s what makes Trump’s exit from the Paris Climate Agreement so egregious. In doing so, the U.S. is, essentially, being the very worst kind of roommate to have to share a home with.

Should We Worry About a Tech-Industrial Complex?

In his final speech before the end of his term, President Biden warned that “Today, an oligarchy is taking shape in America of extreme wealth, power and influence that literally threatens our entire democracy, our basic rights and freedom, and a fair shot for everyone to get ahead.” He further warned that a “tech-industrial complex” was a threat to American democracy as well. While it isn’t unusual for presidents to express warnings and concerns about the future of the country when they reach the end of their term, it’s worth considering how the rise of technology can and does contribute to concentrations of power and money, and the effect that it might have on democracy. Does the rise of tech companies, especially given the emergence of AI and our potential reliance on it, really represent a threat to democracy, or is this phenomenon nothing new?

First, it is worth clarifying the nature of Biden’s final warnings. His reference to a tech-industrial complex echoes Eisenhower’s final presidential warning about the military-industrial complex. The military and the defense industry have been able to shape public policy because of their great influence and mutual interests. Defense industry contractors’ promises to invest in congressmembers’ states or districts, or to donate to their campaigns, in exchange for greater military funding are powerful tools. The concern, likely similar to Biden’s, is that this arrangement is not in the country’s best interests.

Unlike military production where factories can be built in districts to attract political support, much of the political support for the tech sector stems from the promise of greater efficiency and convenience from their products and services. This in turn creates greater dependence on technology and support for the tech sector. As many of these companies own prominent media outlets, they have enormous opportunity and leeway to advance their interests.

Might this concentration of power in the hands of the wealthy few threaten democracy?

Biden’s worry is not new. Political scientists have been arguing for over a decade that the U.S. is already an oligarchy. Wealth inequality has only gotten worse, and the super wealthy owning the vast majority of total wealth is not a new dynamic either. Nor is the idea that political parties are dependent on the super wealthy for donations.

Indeed, the U.S. has survived previous robber barons. Fortunately, the United States Constitution was designed – with all its checks and balances – to minimize such threats. Thus far, it has remained resilient in the face of these oligarchical forces.

But this time might be different. The particular means by which the tech sector builds influence can be especially problematic. Unlike military production, the tech sector, particularly through the use of AI, is capable of personally affecting many more people on an individual level and across all industries. The promise of efficiency is often incentive enough to significantly change the way a business or institution operates – changes that prove difficult to reverse or abandon.

This increasing dependency can lead to an omni-presence of tech that is hard to opt out of. Are you looking for a job, seeking medical care, or in the market for insurance? Chances are an algorithm will determine the outcome.

But the consequences of the tech-industrial complex’s stranglehold run deeper. Philosopher Thomas Christiano has argued that democracy requires accessible information that can be used to form and advance one’s interests. This is what he calls “informational power.” However, since most information is transmitted (and filtered) through tech platforms like social media, we remain dependent on tech companies for the knowledge we need to advance those interests. As Christiano notes, those who control these tech platforms do not share the same interests as the general public, and yet they play an outsized role in shaping what information is shared.

In other words, while some aspects of the modern world are nothing new, it may be that a tech-industrial complex is far more wide-ranging, unavoidable, and deeply tied to our own personal interests in a way that a military-industrial complex never could be. Northrop Grumman could likely never hope to have the kind of personal influence over you that a corporation developing hiring algorithms, medical technology, or even media platforms could. Tech companies also create a level of personal dependence that seems unavoidable and that leaves us unable to recognize problems and advance our own interests.

Tech Layoffs and Reasonable Expectations for Good Jobs

Meta, Facebook’s parent company, opened the year by announcing likely layoffs for around 5% of its workforce. This continues a broader trend for the tech industry, which jettisoned over 150,000 workers in 2024 (and the year before that, and the year before that). How AI will affect the tech workforce is yet to become fully clear, but many may need to “reskill” to stay competitive.

Current layoffs and uncertainty in tech are an interesting follow-up to that pillar of 2010s career advice: “learn to code.” At one level, it was simple, practical advice. Tech jobs were ascendant, and coding was a great way to get into them. And to be clear, demand for tech workers is still relatively high despite layoffs and shifting priorities.

But the advice also speaks to a particular perspective on career success — one of individual responsibility. Its harsher corollary is that if you aren’t successful, it’s because you studied the wrong thing. “Learn to code” has even been used as a way to harass certain professions, such as journalists, when those fields were losing people. The implication is that such individuals have only themselves to blame for choosing a suboptimal career. However, as layoffs continue to rock the tech job market, it is worth revisiting this simple story.

What are reasonable expectations to secure “good” employment – that is, employment that meets our material needs and provides basic dignity?

Without defining “reasonable expectations” directly, a few considerations may nonetheless help us think about it more clearly.

First, possible is not the same as reasonable. To give an extreme hypothetical, imagine a well-paying job that anyone can get as long as they are willing to stick their bare hand into a campfire for two minutes. Given the availability of this job, would it be reasonable to say that people are to blame if they lack a good job? That depends on whether you think this is a reasonable expectation — hopefully not. The comparable question is how much work (pain, suffering, discipline, etc.) is reasonable to expect from someone to achieve basic dignified work. Should college be a requirement? Should someone have to study something specific (like coding)? Is it reasonable to expect someone to work their way through college taking night classes just to get a good job? How much time researching the labor market should be expected? How much upskilling? Reskilling? Networking? Self-promoting?

Second, what someone should do to advance their personal goals is not the same as what’s reasonable to expect from them. Imagine a high school freshman dealt a rough hand in life, staring down a bleak future of continuous poverty. But they see a potential path to financial success. By avoiding distractions such as a social life, studying exceptionally hard, and teaching themselves at the public library on nights and weekends, they might be able to earn a scholarship to attend a nearby college where they can study something eminently practical like IT, nursing, or engineering. If this person wants to escape their lot, then this is the path that they need to tread. Moreover, it doesn’t matter how unfair they think it is. Poverty has indeed diminished their agency and their resources, but better they invest what agency and resources they do have into improving their situation than blaming society and railing against the injustice of it all. But is this level of self-sacrifice and discipline to achieve basic stability a reasonable societal expectation for a child? For anyone? (16% of American children born into poverty successfully break out.)

Third, our assessments of reasonable societal expectations for the effort required to get a “good” job should depend on how important jobs are in a given society. If we think one of the justifications for forming a society is that it secures certain rights and protections, then an especially troublesome scenario is one in which good jobs are hard to get, especially for those coming from disadvantaged circumstances, yet the basic material ingredients of a satisfactory life — adequate pay, retirement support, health care — are all dependent on having a good job. This seems to be the reality in America: almost half of American full-time workers are earning below a living wage. Services like universal basic income or universal healthcare would shift the calculation drastically, as a job would no longer be a prerequisite for fundamentals such as shelter, food security, and healthcare.

There is no hard and fast rule here. As a society, we will have to navigate together what a “reasonable expectation” for a good job is. Nonetheless, as the present tumult of the tech sector reminds us, current expectations are set extremely high, and the penalties for failing to achieve success are harsh.

Thus far, we have been considering “good” jobs in only the monetary sense. We can turn the “reasonable expectation” question on its head. What is reasonable to expect out of a job: job security, good pay, regular holidays, flexible hours, agency in the workplace, respectful treatment, freedom from workplace surveillance, stimulating work, more?  There is a tendency to dismiss this as mere entitlement. And assuredly, there are practical difficulties. Even if we resolve to demand less of job seekers, and more of jobs, there is no magic button that can be pressed that would give everyone what they need and want.

However, the world transforms all the time, especially in our fast-paced era. We have put a library in our pockets, and developed code that can generate detailed texts, videos, and images from a simple prompt. Is it so unreasonable for workers to expect more?