Ethics can be viewed as something like a painting; meta-ethics represents the boundaries, whereas ethics is the actual substance. While paying attention to the substance is important, we must also keep in mind the boundaries to determine what is in the painting and what is outside of it. We couldn’t have a discussion about the painting itself if we didn’t even know where its edges were.

So where do ethics start and stop?

First, morality isn’t a theory of the good. It’s not one’s personal sense of meaning or private values. Ethics are only normative “should” statements that result from valuing freedom and reason.

Aesthetic, romantic, and religious values have their own conclusions, which lack authority over people who don’t share those values. So our favorite movies, Spotify playlists, and travel destinations aren’t the subject of moral discourse.

We can fault Aristotle for this conflation of “the good life” and “ethics,” but modern philosophers who carry on this convention are just as blameworthy. We need to start keeping “the good” and “the right” in their separate categories.

And ethics and values are separate fields of inquiry. Ethics deals with objective moral facts, whereas value is something that conscious beings impose on the world. Value doesn’t exist independently of conscious imposition and therefore cannot be an external feature of the world for us to analyze and judge. And there is no value if there are no free people to impose it. Ethics, however, like other objective elements of the world, would still exist even without people.

And whereas ethics are the normative products of valuing freedom and reason, values outside of those two aren’t ethical concerns. Questions of ethics can only be answered through public reasons, and questions of value can only be answered through private reasons.

We can see this confusion in the debate between moral realists and relativists. The latter might point out that certain cultural taboos on food, sexuality, and purity fall within the moral domain of many societies. Realists counter that those same cultural rules can include practices like caste hierarchy and spousal abuse, which are plainly immoral. Well, who is right?

The issue is that the two groups are talking past one another. Relativists point to differences that are a matter of preference, whereas realists point to similarities that are a matter of morals. Both are right—moral certainty is compatible with cultural traditions.

We can imagine a circle that includes all moral truths and an even larger circle surrounding it that includes community values and personal expression. The former is within the realm of objective reason, whereas the latter is in the realm of subjective freedom (see my discussion of these concepts here). Moral realists like to focus on the smaller circle, whereas moral relativists like to focus on the larger circle.

This is fine, but when they both use the same word to refer to different things, moral discourse becomes confused. As I argued in my previous post, morality is those principles that free people would reasonably accept based on public reason. I’ll call this the “reason core.” The area outside morality comprises subjective value and personal meaning supported by private reasons, what I’ll call the “freedom residual.”

The two circles overlap as they are both a part of the mental reality of conscious beings. However, the circles only have reason-giving authority in their respective areas, with neither being able to infringe on the other.

The problem arises when relativists make claims about the “reason core” or realists make claims about the “freedom residual.”

This might include fundamentalist Christians making judgments about everything, including our bodily autonomy and sexual preferences, or cultural anthropologists justifying everything, including honor killings and genital mutilation. So we need to strike a balance and recognize what “ought” conclusions can actually be justified through freedom and reason (i.e., the social contract) and what can’t be.

Now it’s just a matter of discovering what belongs in the reason core and the freedom residual. As I stated previously, there are no true moral dilemmas. Either a reasonable principle would be agreed to that resolves the dilemma (a proximal duty to save a drowning child, but not the world at large), or there would be no reasonable agreement, and the decision would be personal (the trolley problem).

Second, morality is not rational self-interest. Instead, morality is about how to treat others despite one’s self-interest.

Meta-ethics is about why you should give regard to other beings, not the benefits of exchanging that regard. And normative ethics is about what that regard practically entails.

Reciprocal altruism, in this case, is an oxymoron. True altruism can’t be motivated by reciprocation. Otherwise, it’s just a self-interested trade with no true regard for the other person.

If self-interest guides your treatment of others, it’s still self-interest, not morality. Just because two different motivations can sometimes lead to the same action doesn’t make those motivations the same.

Many non-moral questions are labeled moral questions, likely because we haven’t yet agreed on morality’s ontology and epistemology. Doing so was the goal of my previous article.

Yet part of defining morality is also excluding what we don’t mean by it. So we need to stop talking past one another and converge on a comprehensible public term, one that should inform not just our ethics but also political authority and legal ontology.

In summary, morality is what reasonable people would agree to, and claims outside of that agreement are matters of preference. Some might call such claims supererogatory, but really that’s just a preference for being nice. Yet many still make moral claims without regard to morality’s “acceptance” condition. Imagine a moral philosophy that no one would agree to. This is basically the repugnant conclusion.

Picture a future dystopia where the state has fully accepted the repugnant conclusion. First, the state calculates the welfare of living people and predicts the welfare of future people. Then, based on these measurements, it assigns certain people “reproduction quotas,” requiring families to produce additional offspring or face imprisonment. All for the sake of maximal total welfare.

Even though the people subject to this regime are fairly unhappy, they are just happy enough to want to continue living. So they’re still better off than non-existence. But, of course, when happiness increases too much, so do reproduction quotas.
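
To make the DWM’s arithmetic concrete, here is a minimal sketch of the total-welfare comparison that drives the repugnant conclusion. The population sizes, welfare levels, and names below are my own illustrative assumptions, not part of the thought experiment:

```python
# Illustrative sketch only: the populations, welfare levels, and names here
# are assumptions for the sake of the example, not the author's model.

def total_welfare(population: int, average_welfare: float) -> float:
    """Total utilitarian value: number of people times average welfare per person."""
    return population * average_welfare

# World A: a modest population with genuinely good lives.
world_a = total_welfare(population=1_000_000, average_welfare=80.0)    # 80,000,000

# World Z: a vastly larger population with lives barely worth living.
world_z = total_welfare(population=100_000_000, average_welfare=1.0)   # 100,000,000

# On a purely total view, World Z comes out "better" (100,000,000 > 80,000,000),
# so whenever average welfare rises above subsistence, the DWM can raise
# reproduction quotas and push the total even higher.
print(world_z > world_a)  # True
```

Because the total view tracks only the product of headcount and average welfare, any rise in average happiness can always be outdone by a larger, less happy population, which is exactly why the quotas keep climbing.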

Everyone hates this regime, even the Department of Welfare Maximization bureaucrats, who are also subject to their own quotas (at least the law is impartial). As soon as a family starts doing well for itself, the DWM sends them a notice that they have to have another kid, leaving everyone at subsistence-level happiness.

Future people may be grateful for their existence, but they wouldn’t have imposed that duty on others, especially knowing that the same duty would be imposed on them in turn. However, since the controlling regime subscribes to pure utilitarianism, the DWM follows as its ethical conclusion.

Is this a moral outcome? Clearly, it’s not. Free people would never agree to be subject to reproduction quotas. So to claim that the dystopia’s “reproduction quotas” are moral represents a deep misunderstanding of morality. Yet committed utilitarians might seriously conclude that fulfilling one’s quota is a moral duty. This is what happens when we don’t include “acceptance” in our definition of morality.

This isn’t to say that current people should give no regard to future people. Instead, people would agree to a just savings principle, where the interests of future people are generally considered, and the current generation preserves some social gain for their sake.

What would be the alternative to treating people how they would reasonably agree to be treated? Could it ever be moral to treat someone how they would not want to be treated?

By definition, no, since that treatment could not arise from valuing freedom and reason. You’d have to value something else entirely, in which case you wouldn’t be doing ethics.

With the definition of morality established, we can conclude that utilitarianism and libertarianism would be excluded as authoritative moral doctrines. I’m picking on these two since they tend to be fairly popular, and therefore I consider them contractualism’s rivals.

First is utilitarianism. There are many problems with utilitarianism, but they all converge on the same conclusion: free, reasonable people wouldn’t agree to it, given its unacceptable implications.

How do we know they wouldn’t? Because there is not a single utilitarian contract in existence. No one has ever entered an agreement with a group of parties stating, “I subject all of my moral rights and freedoms to the collective welfare of the organization.” So how can all of society be forced to accept an agreement that no one would even choose for themselves?

People could, in principle, form a legally binding utilitarian agreement to base all of their free choices on maximizing welfare among themselves. So if someone in the group received greater pleasure from using another member’s money, property, or organs, they would have the legal right under the contract to take them. The fact that no one has made this agreement is a testament to utilitarianism’s unreasonableness.

This pure utilitarian contract might be written up and enforced as far as the law would allow (although some of its applications would undoubtedly be deemed unconscionable). But I doubt even the most committed Effective Altruists would try this. Yet it’s this contract that utilitarians argue should be enforced on the world.

There are, however, plenty of rule utilitarian contracts, where people make agreements that add to collective welfare as well as their own, but the parties still retain certain rights that aren’t subject to a utilitarian calculus.

While workers, shareholders, and managers may be duty-bound to act for a corporation’s welfare, they still have worker rights, shareholder rights, and management rights. And those parties wouldn’t surrender these rights to maximize total corporate good. Why should they even care about total corporate good anyway?

Given the absence of any pure utilitarian contract, and the prevalence of rule utilitarian ones, pure utilitarianism fails to meet the “acceptance” condition of morality. And so, we can safely exclude it from our definition of morality.

Second is libertarianism. Again, there are many problems with libertarianism, including its brute assumption of rights and its adoption of the “magic words” theory of consent. Given the libertarian focus on autonomy, it might seem that it wouldn’t fail the “acceptance” condition the way utilitarianism does. Yet (since I’m discussing the repugnant conclusion in this article) from a population ethics perspective, libertarianism fails to account for the interests of future people and wouldn’t be accepted by them.

Libertarianism takes the individual as the basic unit of social analysis. Individuals accordingly possess unalienable rights and are entitled to exercise maximal agency so long as that agency doesn’t infringe on the rights of others.

However, by taking the individual as the social unit of analysis, libertarianism adopts a person-affecting view, whereby the moral status of an act depends only on the rights and consent of affected living people. Given that future people are not deemed to hold any rights that can be violated, and that the rights of living individuals should be maximized, people in the present are not morally required to have any regard for future people, whom they might view as mere abstractions. To give future people rights under libertarianism would already be to accept a hypothetical social contract, since future people cannot actually consent.

Therefore, so long as the Lockean proviso is met, it is permissible to disregard long-term welfare, maximally exploit current resources, and burden future generations (leading to what Mulgan (2016) refers to as a cross-generation “survival lottery”).

This person-affecting view has clear immoral implications. Where utilitarians give too much regard to future people, libertarians don’t give enough.

Contractualism would take the interests of future people into account, given the reasonable certainty that they will exist. So living generations would accept reasonable restrictions on their freedoms for the sake of future welfare (see the Just Savings Principle).

This is because people of different generations would be included in the social contract without knowing which generation they’ll live in. Duties developed under this arrangement would be reasonable and, most importantly, acceptable.

I hope we now have a clearer sense of what is and isn’t a moral claim. While assertive on its face, this article only seeks to defend a definition of morality that has already been developed. Unfortunately, this definition is hardly used in much of the moral discourse I’ve seen. But hopefully, we can all start adopting it (or, at the very least, some definition).

And given that definition of morality, and the “acceptance” condition it contains, many theories of morality are off the table. However, if anyone has a better conception of morality or finds any issues with the one I present here, I’ll be happy to address it or adjust my priors accordingly.
