One afternoon a while ago I was bored and decided to answer one of the many Quora.com questions on objective morality: namely, how such a thing could be possible. This proved a useful exercise in distilling my moral theory down to a fairly succinct, fairly outsider-friendly sketch, and I thought it might be helpful to reproduce my answer here. There’s a lot more unpacking that needs to be done, and a longer treatment is in the wings, but in the meantime this ought to give you a rough idea of where I’m coming from, metaethically and ethically speaking.
The question: Is there an objective moral standard? If not, what are we doing when we think and talk about morality?
It’s important to make sure at the outset that we’ve got a clear and consistent notion of objectivity to work with. People often conflate moral objectivism with moral absolutism, the latter of which might most succinctly be stated as the view that moral facts would obtain even if humans or other value-having creatures had never existed. Note that there are many non-moral facts that fail the test of absolutism (viz., descriptive facts about humans) yet are nevertheless regarded, with ample good reason, as perfectly objective features of reality. So we must take care not to have a double standard here, not to unduly demand more of objective moral facts than we demand of objective non-moral facts.
“Objective” is often taken to mean “mind-independent,” but there are at least two senses of “mind-(in)dependent” that need to be distinguished. I take the aim of any serious candidate for an objective moral system to be the articulation of some set of principles (or standards, as the OP puts it) to which all persons are beholden and on the basis of which their behavior may justifiably be judged and regulated. Note that to meet this aim, we don’t need a moral system that is independent of the general existence of minds but only one that is independent of any particular mental states. The key idea is that I can’t simply exempt myself from an objective moral obligation by citing some desire or preference of mine that the obligation confounds or contradicts. If an action is morally right, then I ought to do it, whether I want to or not. Sure, there would be no moral obligations in a world with no minds, but so what? That is not our world. The ultimate contingency of these obligations is no barrier to their objectivity.
Now that we have a clearer idea of what’s needed, we can broach the question of how to get there. Different ethical theorists (e.g., consequentialists, deontologists, virtue ethicists, etc.) have given very different answers to this question. My own proposal is deliberately minimalistic. I’m a bit of a metaphysical ascetic—a naturalist, reductionist, and hard determinist—and so I need a moral theory that doesn’t commit me to more than these views can countenance. Of course, if there proves to be more in heaven and earth than my current philosophy allows, there may be more to objective morality than what I offer below. If not, however, I think my minimal account is actually a pretty satisfying and practicable consolation prize.
I follow Peter Railton in providing first an account of non-moral goodness to serve as a sort of conceptual stepping stone to an account of moral goodness. Suppose I am given a choice between eating a slice of apple pie and a nutritious salad. I desire to eat the apple pie, but it seems I really ought to desire to eat the salad; it is, after all, healthier for me. We might say that while eating the apple pie will provide some temporary pleasure, eating the salad is in my better long-term interest (Railton would say that the apple pie constitutes a subjective interest of mine, while the salad constitutes an objective interest).
Now, what, if anything, makes these sorts of judgments—judgments about what’s really best for me, independent of my transient desires—true, or at least justifiable? I submit that it’s the existence of a more general desire—say, a desire to be in good health—that is in several important respects superordinate in my cognitive ecology to my desire for the apple pie. These sorts of superordinate desires, unlike transient desires, tend not to be exhaustible and tend to have as their objects not momentary events or achievements but stable states of the world (you don’t just acquire good health; you cultivate and maintain it—or fail to do so). They are also, I think, more central to one’s identity, in that they are much harder to change or replace than transient desires. Henceforth, I’ll call this special subset of desires values.
On this account, the proper analysis of the claim that I ought to choose the salad over the apple pie would be something like: “Eating the salad will (partially) satisfy the value of being healthy, while eating the pie will not, and satisfying this value is more important for my overall wellbeing than satisfying the transient desire for apple pie.” The various facts about my values, transient desires, and the conditions of their satisfaction are the truthmakers of any claims about what is good for me. Insofar as these claims are justified, then, my values give me reasons to act accordingly. Failure to heed these reasons would constitute a failure of instrumental rationality.
Now, again, the above is only an account of non-moral goodness. However, I think an account of moral goodness can be built upon it. In the details of how this is to be done I henceforth part ways with Railton, and those interested in his particular account of moral goodness should read his paper, “Moral Realism.” One way you might think to found an objective moral system on the above account would be to posit a set of values that are universal, i.e., held by everyone. While I find it plausible that there are such values, I think this is the wrong approach. People might have some values in common, but their particular circumstances may dictate distinct conditions on the satisfaction of those values, and these particular satisfaction conditions may give rise to reasons to act in different, conflicting ways. That is to say, shared values alone do not guarantee moral agreement as to what ought to be done, even when everyone is ideally informed and acting perfectly rationally (that is, in accordance with their most salient reasons).
However, not all value satisfaction conditions are context- and situation-dependent. To satisfy any value, one must be able to act effectively in the world. And to act effectively in the world with any sort of consistency requires certain things: namely, (1) freedom from unnecessary restrictions external to the body (that is, liberty); (2) freedom from unnecessary restrictions internal to the body (that is, healthfulness); and (3) knowledge of how the relevant parts of the world work. These three things, I submit, are genuinely objective moral goods. They are made moral by the fact that every valuer—every person—has a stake in them, and they are made objective by the fact that these stakes exist regardless of the specific values held. People have reasons to act toward the realization of these goods in virtue of their values, but these reasons are independent of the particularities of those values (since all values require these goods for their satisfaction). I therefore can’t escape being bound by these reasons simply by appealing to some other value (much less any transient desire!).
Note that while these goods may be explicitly valued by people (as healthfulness was in our example of non-moral goodness), they needn’t be in order for valuers to have reasons to realize them (the salad, recall, wasn’t explicitly valued by me, but I nevertheless had a reason to eat it).
Suppressing details and formal pieties, we can capture the gist of the above in the form of a simple argument (there is a much more nuanced argument here, but this answer is long enough as it is):
P1. All valuers have, in virtue of their values, reasons to act in ways that facilitate the satisfaction of those values.
P2. Some goods are instrumentally necessary for the satisfaction of any value whatsoever.
C. All valuers have, in virtue of their values, reasons to act toward the realization of such goods.
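For those who like their arguments spelled out, the syllogism can be given a rough first-order sketch. The notation is informal shorthand of my own, not a full formalization: read V(a, v) as “agent a holds value v,” R(a, φ) as “a has a reason to φ,” sat(v) as “the satisfaction of v,” and let g range over the goods (liberty, health, knowledge) claimed to be instrumentally necessary for satisfying any value.

```latex
\begin{align*}
\text{P1:}\quad & \forall a\,\forall v\;\bigl[\,V(a,v) \rightarrow R\bigl(a,\ \text{facilitating } \mathit{sat}(v)\bigr)\,\bigr]\\[2pt]
\text{P2:}\quad & \forall v\;\; N\bigl(g,\ \mathit{sat}(v)\bigr) \quad \text{for each such good } g\\[2pt]
\text{C:}\quad & \forall a\,\forall v\;\bigl[\,V(a,v) \rightarrow R\bigl(a,\ \text{realizing } g\bigr)\,\bigr]
\end{align*}
```

Here N(g, s) abbreviates “g is instrumentally necessary for s.” The inference from P1 and P2 to C assumes only that reasons transmit from ends to their necessary means, a principle of instrumental rationality already in play above.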
But doesn’t this only suggest that I should work toward my own liberty, my own healthfulness, and my own knowledge, and likewise for everyone else? Aren’t we faced with essentially the same problem that frustrated the value universalist?
I don’t think so. I rely constantly on knowledge acquired by others (scientists, philosophers, mechanics, journalists, friends), on liberty protected by others (judges, lawyers, police officers, good Samaritans, even—ugh—legislators), and on healthcare and health-related goods and info provided by others (doctors, pharmacists, nutritionists, food providers, employers, family members). These folks could not provide these resources without sufficient knowledge, liberty, and health of their own, and they depend for their share of these resources on the activities of others, and so on, and so on. I thus have a stake in their access to these goods as well as mine—and likewise for each of them.
The convergence point, if you will, for all these overlapping and complementary reasons would, I think, be something like the following:
All should work toward the realization and maintenance of a stable society that affords and assures optimally equal access to liberty, health, and knowledge.
Pursuant to the OP’s question, this is my candidate for an “objective moral standard.”
Admittedly, the above is all quite broad. What guidance does this moral system offer with respect to some particular contemplated action? On this account, for me to have a moral obligation to do some act, x, the following conditions must be met:
- I have a reason to do x (because x facilitates the satisfaction of some value I hold).
- All other valuers have a reason to want me to do x (per the analysis just given). Note: this doesn’t require that all valuers actually want me to do x, but only that it would be rational for them to want this, since my doing x would plausibly facilitate the satisfaction of their own values.
A claim that I have a moral obligation to do x, then, is simply the claim that these two conditions are satisfied with respect to the x in question. Note that here, as in our analysis of claims about non-moral goods, the truthmakers are simply facts about the values people hold and the material conditions of their satisfaction. Claims about what we morally ought to do are no more mysterious or metaphysically loaded than claims about what we non-morally ought to do; the former are, in fact, a proper subset of the latter.
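The two conditions can also be compressed into a single informal biconditional (again, loose shorthand rather than a strict formalization): O(i, x) is “i is morally obligated to do x,” R is “has a reason to,” and the second conjunct requires only that wanting i to do x be rational for each other valuer, not that they actually want it.

```latex
O(i, x) \;\equiv\; R(i,\, x) \;\wedge\; \forall j \neq i\;\bigl[\,\mathrm{Valuer}(j) \rightarrow R\bigl(j,\ \text{wanting } i \text{ to do } x\bigr)\,\bigr]
```

The right-hand side mentions only valuers, their values, and satisfaction conditions, which is what licenses the claim that moral oughts are a proper subset of non-moral oughts.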
This, at any rate, is one possible (I would argue plausible) path to an objective moral standard. Others may be possible as well, but I offer the above because it is extraordinarily ontologically innocent, requiring of us no commitment to strange sui generis moral properties. One additional perk is that it provides a sound foundation for a novel, and more practicable, kind of cosmopolitanism—one that doesn’t depend upon shared values but only on the common goods necessary for their satisfaction. Let a million values bloom; we can still defensibly hold each other morally accountable.