On Moral Buck-Passing

I want to discuss an issue that’s been looming increasingly ominously for me of late. It presents, I think, an incredibly thorny ethical problem, and I’d like to workshop some possible solutions or constructive approaches.

I’ll be looking at the issue here through the lens of consequentialism, the view that the goodness or badness of an action depends principally upon the consequences that result from that action. Other moral theories, like deontology or virtue ethics, undoubtedly have unique takes on the matter, but my metaethics and metaphysics have led me pretty decisively to the view that any moral properties deserving of realist countenance must be, first and foremost, properties of states of the world, and properties of actions or persons only in a derivative sense.

To begin, let me lob a few scenarios at you:

(1). A man at a zoo, ignoring all posted warnings, scales a fence and enters the lions’ enclosure. He is attacked and killed. Between the man and the lion, who bears greater responsibility for the man’s death?

(2). A mentally ill man with tendencies toward violence is (legally) denied insurance coverage for medications he would otherwise be unable to afford and which are necessary to keep his symptoms under control. Unmedicated, he has a psychotic episode and kills someone. Who is to blame?

(3). A small cult is publicly satirized and shamed. Indignant, its members retaliate and an innocent bystander is killed in the ensuing fracas. On whose hands is the bystander’s blood?

(4). You learn that a company has been engaging in unethical (though by some technicality still legal) activity. You take to your social media platform of choice and spearhead a large-scale boycott. Profits fall precipitously and the company duly ceases the unethical behavior but also, in order to absorb the financial blow, lays off a third of its ground-level work force, most of whom had no participation in or even knowledge of the unethical activity. Who is to blame for the suffering of those now unemployed?

Now, in each of these examples, focus only on the proximate cause of the bad consequence: the lions, the mentally ill man, the aggrieved cult members, and the company (or its executives). However you ultimately came down on each question, I suspect you found a greater willingness to assign blame to the proximate cause as you progressed from (1) to (4), perhaps with a corresponding increase in reluctance to assign blame to the more remote causes (the man climbing the fence, the insurance company, the satirist, and the boycott leader) whose ultimately negative effects are brought about through the actions of the more proximate causes. Regardless, I want you to think about how you reasoned in each case and see if you can pinpoint the factors on which any changes in your judgments about the more deserving focus of blame seemed to depend.

Consequentialists seek to act in the world in order to bring about outcomes that realize or facilitate some moral good—whatever that may be per the particular species of consequentialism in question. In order to do this reliably, they must try to predict the likely consequences of a considered action (as far downstream as feasible) and weigh the costs and benefits of each. Often, a very important element of this calculus consists of the responses other persons are likely to make to the action. In some cases, we might have good reason to think that a person is likely to respond to an action in an immoral way (i.e., with an action of his or her own that brings about some state that would be reckoned a moral harm by the consequentialist). Now, suppose we go ahead with the action and the other person duly brings about the bad consequence we predicted. Let us further stipulate that the harm brought about by the second action is greater in magnitude than the benefit brought about by the first. Is the other person, the proximate cause of the harm, wholly at fault for that consequence or do we share any of the blame? Generalizing the question: Can a consequentialist ever safely pass the moral buck?

It’s tempting to seek to answer this by appealing to juridical notions of culpability. We want to know whether the proximate cause of a bad consequence was of sound mind at the time the action in question was taken, whether that person was capable of understanding that the action was wrong, and so forth. These are important considerations, but they don’t, I think, comprise all the relevant moving parts. Before we dig any deeper, though, a bit of terminological housekeeping is required.

In a consistently consequentialist framework, talk of moral responsibility is, I contend, ultimately a sort of shorthand for talk about where in a causal chain one might most effectively intervene in order to bring about good outcomes or prevent or ameliorate bad ones. We punish agents (i.e., persons of sound mind) who’ve done bad in order to dissuade them from doing bad again or to dissuade others from doing bad in the first place, and we praise and reward those who’ve done good in order to incentivize them and their observers to do good in the future. To determine the most appropriate locus of intervention, we must assess, as best we can, both the ease of bringing an intervention to that locus and the likelihood that such an intervention will have the desired effects. The relevant relationship may be crudely expressed as follows:

Degree of moral responsibility = P(I&S) = P(I) * P(S|I),

where “P(I)” is the probability of bringing the intervention and “P(S|I)” is the probability of success once the intervention is brought.

A more complete calculus would also have us factor in the net cost of the intervention, but this simplified version will do for the present discussion (one could argue that the above two probabilities jointly capture the cost—or at least a considerable amount of it—insofar as costlier interventions will be less likely to be attempted and will have higher thresholds for success). It is also the case that for a given node in a causal chain, there may be several feasible interventions with differing costs and probabilities of success. It’s an interesting question whether the moral responsibility of a node might depend in some way on the number of options available for influencing its behavior, but that’s a topic to be taken up another time. In what follows, then, we’ll only consider for each node the intervention with the highest value of P(I&S).
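
By way of illustration, here is a minimal sketch of this simplified calculus in code; every probability figure below is an invented placeholder, not an estimate of anything real:

```python
# Minimal sketch of the locus-of-intervention calculus sketched above.
# All probability figures are invented placeholders.

def responsibility(p_intervene: float, p_success: float) -> float:
    """Degree of moral responsibility as P(I&S) = P(I) * P(S|I)."""
    return p_intervene * p_success

# For each node in a causal chain, list the feasible interventions as
# (P(I), P(S|I)) pairs and, per the simplification above, score the node
# by its best available intervention.
nodes = {
    "A (remote cause)":    [(0.95, 0.90), (0.60, 0.70)],
    "B (proximate cause)": [(0.40, 0.30), (0.70, 0.10)],
}

for name, interventions in nodes.items():
    best = max(responsibility(p_i, p_s) for p_i, p_s in interventions)
    print(f"{name}: P(I&S) = {best:.2f}")
```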

Another issue we need to bring to the table is what we may call moral justification (I would call it moral rightness, in contrast to moral goodness, but “justification” brings out the essential feature more clearly). In epistemology, the justification of a belief is widely distinguished from its truth value. One could be justified in believing some proposition (e.g., in accordance with solid epistemic principles and all the best presently available evidence) which nevertheless turns out to be false. And one could, of course, believe something that’s in fact true without being justified in doing so, as when one believes something for insufficient or irrelevant reasons. In a similar vein, one may have very good reasons to suspect that an action under consideration will bring about net good when it in fact turns out to do the opposite. It seems that in cases like these, the individual is responsible for the bad outcome but still justified in doing what she did (that is to say, she did not exhibit any marked failure of rationality in arriving at the conclusion that the action was the right thing to do).

Now, how we answer the justification question in a given case affects what sorts of interventions it is reasonable to bring to a responsible moral agent. If the agent had been well-intentioned in the first place, then simply knowing of the unforeseen bad consequence may be more than enough to prompt an appropriate change in her behavior (e.g., attempting to account for a new variable, brought to light by reflection on the present case, in relevantly similar future situations). It is, of course, also possible that the bad outcome was due entirely to lousy moral luck (even highly justified predictions are occasionally wrong), and that nothing gleaned from the episode militates toward any change in the moral calculus used by the agent. In such cases we wouldn’t, I think, wish for her to alter her behavior at all, and so no intervention will be needed.

Now, the above scenario embeds only a very simple causal chain with two nodes: An actor and the (unforeseen) bad consequence she brings about. In cases like the four with which we began this post, there is another actor interposed between the two nodes whose (foreseen) response to the original action serves as the proximate cause of the bad outcome. So, we’ve got something like the following causal structure:

(A) → (a1) → benefit

(B) → (mb) → (a2) → harm,

where (a1) and (a2) are the two actions undertaken by (A) and (B), respectively, (mb) is some mental state of (B) brought about by (a1), and |harm| > |benefit|.
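
To fix ideas, scenario (4) can be poured into this mold; the mapping below is one natural reading of that scenario rather than anything stipulated above:

```python
# Scenario (4) mapped onto the abstract causal structure.
# The glosses are one natural reading of the scenario, nothing more.
chain = {
    "A":       "boycott organizer",
    "a1":      "spearheading the large-scale boycott",
    "benefit": "the unethical practice ceases",
    "B":       "the corporation",
    "mb":      "desire to absorb the financial blow",
    "a2":      "laying off a third of the ground-level workforce",
    "harm":    "the suffering of the newly unemployed",
}
for node, gloss in chain.items():
    print(f"({node}) = {gloss}")
```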

In trying to decide what action to perform, (A) must take into account what (B) is most likely to do, as best she can ascertain, and what (B)’s degree of moral responsibility for that action would be. Now, in each of our four opening examples, the occupiers of the (B) node all had pretty high likelihoods of bringing about bad consequences. These outcomes were highly foreseeable from the (A) node occupiers’ vantage points.

What about responsibility? What makes scenarios with the above causal structure so ethically tricky, as I see it, is that simply in virtue of entertaining these sorts of moral questions with an open mind, (A) will have a higher value for P(I&S) than (B), at least so far as she can judge. That is to say, she knows a moral intervention will be more likely to succeed with her than with (B)—since she is already of the right sort of moral disposition—and will thus always be more justified in holding herself responsible for the bad consequence than in holding (B) responsible. At the point of contemplation, (A) is in a unique position vis-à-vis her moral epistemology, for she is her own surest intervener. And in the mere act of contemplating the decision to initiate the causal chain leading to some net harm, she has shown herself to be a more promising target of moral intervention than anyone to whose mental states she has no direct access (viz., literally anyone else).

This isn’t to say that (B) could not be saddled with some degree of responsibility, perhaps modulo his soundness of mind or reflective capacity. The problem, though, is that whatever the actual values of P(I&S) for (A) and (B), (A) must ultimately make a dichotomous choice: Do the action under consideration or don’t. Now, if she knows that the action is likely to ultimately lead, via the counteraction of (B), to a state of net moral harm, and if she judges herself more responsible for that state than (B), then it would seem she must conclude that she shouldn’t perform the action.
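
To see the bind in miniature, consider a toy expected-value comparison of (A)’s two options; the magnitudes and (A)’s credence in (B)’s counteraction are invented purely for illustration:

```python
# Toy expected-value comparison of (A)'s dichotomous choice.
# All figures are invented for illustration.

benefit = 10.0         # direct good realized by (a1)
harm = -25.0           # harm realized by (a2); note |harm| > |benefit|
p_counter = 0.8        # (A)'s credence that (B) will perform (a2)

ev_act = benefit + p_counter * harm  # initiate the chain and hope
ev_abstain = 0.0                     # leave the world as it is

print(f"E[act]     = {ev_act:+.1f}")      # -10.0
print(f"E[abstain] = {ev_abstain:+.1f}")  #  +0.0

# Whenever p_counter > benefit / |harm| (here, 0.8 > 0.4), abstaining
# wins, and (B)'s standing threat alone suffices to deter (A).
```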

But this seems to lead us into moral absurdity. If this reasoning were followed consistently, the most moral among us would end up perpetually hamstrung and held hostage by the most recalcitrantly immoral. Bullies and terrorists everywhere would be rewarded and incentivized.

Clearly, the ideal situation would be one in which (A) performs the action (for remember, it is beneficial on its own) and (B) abstains from or is prevented from performing his desired counteraction, but it seems it will always be more morally rational for (A) to refrain from initiating the causal chain in the first place than to initiate it and hope (B) doesn’t do what she knows he’s strongly inclined to do. What looks like individual rationality would breed catastrophe in aggregate.

There’s another important dynamic here concerning the relative power (A) and (B) wield over each other. When (A) has the resources necessary to punish (B) and so disincentivize its counteraction, then (A) may feel much more justified in going through with the initial action, despite (B)’s threats. The central dilemma of this essay becomes particularly acute, though, when the power ratio is reversed. To venture briefly out of the realm of the abstract: This is precisely the circumstance in which many of America’s (and indeed the world’s) marginalized find themselves. They face a situation in which any overt attempts to secure fairer treatment for themselves are likely to be met with reactionary assaults by a more powerful social class on whatever measures of justice they had already managed to win. Now, the solutions that leap most readily to mind vis-à-vis cases like this involve (A) seeking sufficient coercive power over (B) to be able to hold it to moral account, but, of course, each move toward securing this power is apt to instantiate the same problematic causal structure. The buck-passing issue is no idle abstraction.

So, what prima facie options are there for a consequentialist to justify passing the moral buck to (B) in situations like these?

A few to consider:

(1). One might, of course, simply take this as a fatal reductio of consequentialism. However, I’m very reluctant to give my intuitions at the level of applied ethics authority over my intuitions at the metaethical level. To do so would seem to tacitly commit me to a view that I already had a coherent, broadly truth-tracking unconscious moral “theory” prior to any attempts to ground it metaphysically in the world, and I’m extremely skeptical of this being the case.

(2). One might, alternatively, seek to retain the broad consequentialist framework but jettison or supplement the particular account of moral responsibility sketched above. I’m open to this move in the abstract, but care will need to be taken to ensure that any proposed alternatives don’t sneak in any incompatible, irreducibly deontic or aretaic intuitions.

(3). One might adopt a sort of proximate consequentialism which would doggedly restrict moral judgment to only the most proximate causes of any outcome. My first thought here is that such a move seems unprincipled and ad hoc, though there are (a small number of) consequentialists who hold this view due to deep skepticism about our ability to predict the future beyond the most immediate effects. It should be noted, however, that this stance will not likely sanction buck-passing in every scenario in which it is desired. If (B)’s counteraction happens to precede in time the intended effects of (A)’s action, then ought the proximate consequentialist regard (B)’s counteraction, rather than those intended effects, as the proximate effect of (A)’s action—a fortiori if proximate consequentialism was adopted on the basis of uncertainty vis-à-vis remote effects? This approach would seem to penalize any sort of long-term moral planning. I am not ready to accept that we can’t do better than this.

(4). Perhaps the most obvious answer is a sort of rule consequentialism that would sanction some buck-passing as a way of keeping us from getting stuck in local minima in what we might call moral error space. Note that this approach, in attempting to see beyond the horizon of the local minimum, is rather straightforwardly antithetical to proximate consequentialism. Now, there are likely cases in which moral buck-passing really should not happen (e.g., when the threatened harms are extreme); a rule can be a very blunt instrument, and I think what’s really wanted here is a more nuanced decision procedure. So, what conditions must be met in order to justify (or prohibit) an instance of buck-passing? Again, I’m skeptical that legal notions of culpability are the right ones to lean on here. It seems fairly obvious to me that an actor of sound mind could, due to personal convictions alone, be much more resistant to moral instruction and rehabilitation than an actor deemed unfit to stand trial for his actions.

The juridical view would have us ask to what extent (B) could have responded to (A)’s actions differently. Alternatively, we might ask to what extent (B) could have realized the harm of its action even if (A) had done nothing. On this view, in order for (A) to be responsible for the harm realized by (B), (A)’s action would have to be a necessary—and not merely contributory or sufficient—cause of (B)’s action. This seems to give us acceptable answers to cases (1) and (4) discussed above. The lion cannot eat the man until the man makes himself available to the lion, so (A) gets the—ahem—lion’s share of responsibility in that case. The corporation, on the other hand, is in a position to lay off employees with or without some profit-harming boycott. So far, so good, but things get murky pretty quickly on closer examination.

The notion of “could have” in the above formulation needs a lot more unpacking. Are we talking here only about something like a general capacity? It seems it can’t just be that, since the lion had the general capacity to kill the fence-scaler all along and had hitherto lacked only the opportunity. The mentally ill man, on the other hand (I am assuming most would not wish for the moral buck to be passed to him), has both capacity and opportunity for the murder he commits. What the lack of medication seemed to bring about for him was a particular desire to kill. But do not the acts of (A) in scenarios (3) and (4) also provide the requisite desires for the harmful counteractions taken by (B)? Perhaps it matters whether the particular desire is character-consistent for (B). The corporation in (4) may have no particular predilection to fire those low-level employees prior to (A)’s boycott, but it has a more general willingness to downsize in order to offset profit loss. So (A)’s boycott only gave the corporation incentive to act on a pre-existing stable disposition.

Sounds…kind of right, but look again at scenario (2) in light of this. Perhaps the man has no particularly strong general proclivity toward violence as long as his symptoms are controlled. But perhaps it is part of even his healthier character that he is willing to respond violently to sufficiently strong perceived threats. And suppose the murder he commits was due to his misperceiving the victim as just such a threat. Could not the desire on which he murderously acts be regarded as character-consistent in the same way as the desire on which the corporation acts? Does it matter that he acts on faulty perception while the corporation does not? Perhaps. If so, then our decision procedure at this juncture looks something like the following:

In situations with the causal structure described above, it is permissible for (A) to pass the moral buck to (B) just in case the following four conditions hold (a schematic sketch follows the list):

  1. (B) has the capacity to perform (a2) irrespective of (A)’s actions,
  2. (B) has the opportunity to perform (a2) irrespective of (A)’s actions,
  3. (mb) is consistent with (B)’s character, and
  4. (B) is under no relevant misapprehensions as a result of (A)’s actions.
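
By way of summary, here is a schematic rendering of this tentative test; each boolean input stands in for a substantive judgment that the discussion above has only begun to unpack:

```python
# Schematic rendering of the four-condition buck-passing test above.
# Each boolean is a stand-in for a substantive judgment, not a thing
# we yet know how to operationalize.

def may_pass_buck(
    b_has_capacity: bool,      # 1. (B) could perform (a2) regardless of (A)
    b_has_opportunity: bool,   # 2. (B) had the opportunity regardless of (A)
    mb_fits_character: bool,   # 3. (mb) is consistent with (B)'s character
    b_not_misled: bool,        # 4. (A) induced no relevant misapprehension
) -> bool:
    """(A) may pass the moral buck to (B) just in case all four hold."""
    return all([b_has_capacity, b_has_opportunity,
                mb_fits_character, b_not_misled])

# Scenario (4): the corporation could downsize anyway, the disposition is
# character-consistent, and it labors under no misapprehension.
print(may_pass_buck(True, True, True, True))   # True: the buck passes

# Scenario (2): the man acts on a misperceived threat, so condition 4
# fails and the buck stays with (A).
print(may_pass_buck(True, True, True, False))  # False
```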

This procedure is, admittedly, a bit ungainly. Suffice it to say, there is still much work to do. A rule consequentialist justification for buck-passing, broadly speaking, seems to me the most promising path forward, but this particular formulation will need to be tested against a much larger battery of examples. Such examples shouldn’t be too hard to come by—the buck-passing problem is terrifyingly ubiquitous—but I leave the exercise, for the moment, to you.