Over on The Electric Agora, a philosophy blog I sometimes haunt, a bit of a spat has recently erupted over Wilfrid Sellars’ “Philosophy and the Scientific Image of Man” and to what extent certain reductionist or eliminativist programs contradict the “stereoscopic vision” (in which the Scientific and Manifest Images overlay and complement, rather than compete with, each other) Sellars argued was necessary for a complete account of persons. Particular focus here was on the eliminativism re: folk psychology espoused by Paul Churchland, one of Sellars’ students. Churchland is sometimes regarded as a “right-wing Sellarsian” (I suppose because his view of science is considered imperialistic), in contrast with left-wing Sellarsians like John McDowell and Bob Brandom. Anyway, most of The Electric Agora’s regulars are pretty vehemently opposed to reductive/eliminative projects, which they see as forbidden incursions of the Scientific into the domain of the Manifest. One recent post described such projects as a kind of philosophical philistinism.
As you might infer from other writings here, I have very little sympathy for these sorts of arguments, and less still for the vitriolic hysteria in which they are often cast. Instead of just cannonballing into the shouting match, though, I wanted to do a bit of conceptual housekeeping in an attempt to clarify why Churchlandian programs are seen to be in such tension with the aforementioned stereoscopic vision and to assess whether this apparent tension holds up to close scrutiny. I thought the effort produced a helpful way of thinking about some of these issues, and I wanted to share it here for anyone still interested in these sorts of debates.
First, a bit of background: The Manifest Image Sellars speaks of is not meant to be the same as our intuitive or commonsense conception of ourselves. It is, in short, the image of ourselves as subjects and objects of normative evaluation. It concerns itself with the realm of reasons, rather than causes. A number of left-wing Sellarsians, however, seem to regard certain commonsense posits—viz., the ontology of our folk psychology—as in some way necessary to support the normative discourse that is the Manifest Image’s bailiwick. It is this assumption that I want to clarify and interrogate.
Now, a central issue muddling this question of whether Churchlandian eliminativism constitutes some sort of betrayal of the Sellarsian project is that the canonical targets of that eliminativism—the propositional attitudes—sit right at the nexus of the descriptive and the normative. Beliefs, desires, and the like are objects of evaluation (epistemological, logical, pragmatic, moral, etc.), but they have also traditionally served as tools for predicting behavior. If Alice desires that x and believes that y-ing will bring it about that x, then, ceteris paribus, we expect Alice to y.
The Churchlands contend that propositional attitudes really aren’t very good tools for predicting behavior and advocate for their replacement by a more neuroscientifically informed ontology. Is this an illicit encroachment of Science into the domain of the Manifest? Only if we assume the replacement ontology can’t itself furnish any suitable targets of normative evaluation. (Side note: we could just as well ask whether the traditional descriptive/predictive uses of the propositional attitudes constituted an illicit encroachment of the Manifest into the domain of Science.)
Paul Churchland pretty clearly doesn’t think this is the case. He’s argued at length against anti-representationalist interpretations of the connectionist and dynamical systems paradigms and has a whole book, Plato’s Camera, devoted to describing how the brain constructs “pictures” (or, more accurately, analogue dynamical models) of the world. Now, a model of this sort is unlike a belief in certain ways. Perhaps most importantly, it doesn’t have a binary truth value; rather, its success condition (in an epistemic context) is something like global isomorphism with its worldly domain. The important bit, though, is that it does have success conditions. Models can be more or less accurate, and they can be accurate in distinct respects (a nuance not available to propositions with binary truth values).
Now, truth/accuracy is fairly easy to recover in a Churchlandian Image. A harder problem, one might think, is something like justification, which traditionally concerns itself not just with beliefs but also with desires, intentions, actions, inferences, etc. Broadly speaking, to be justified in believing, desiring, doing, etc., is to have reasons of sufficient number or strength to believe, desire, or do. Now, a reason is a strange beast, even within the Manifest Image. It, like the propositional attitudes, has traditionally had both normative and descriptive uses.
Let’s say Bob desires to lose weight and accordingly has a reason to refrain from buying fattening foods at the grocery store. If we know that Bob has a basic awareness of the relevant connections between his desired weight loss, his food intake, and his purchasing habits, we might expect to see Bob consistently purchasing fewer fattening foods. So, in this case, we appear to be using a reason as a predictive tool (i.e., as a cause). Yet if Bob, subject to temptation as we all are, fails to consistently refrain from buying fattening foods, we would not simply conclude we were in error about Bob having a reason to refrain from those purchases.
Consider as a more extreme example Carl, who has no desire at all to lose weight—even though he would be more satisfied with his life if he did. On the basis of this counterfactual, we could well say Carl too has a reason to refrain from buying fattening foods. We would certainly not, though, try to predict Carl’s behavior on the basis of this reason, for Carl appears to be either wholly ignorant or insufficiently appreciative of it.
There’s an ambiguity in our use of the term “reason” that I think is needlessly complicating this broader debate. Sometimes, by “reason” we mean something like a set of facts-in-the-world that justify x-ing. In the case of Bob, these might be, inter alia, facts about Bob’s psychology (what he desires, what would make him most satisfied), facts about his metabolism, facts about the contents of the various foods available for his purchase, etc. And sometimes, by “reason” we mean something like a subject’s representation of such facts. It is this latter sort of reason that we give and ask for in the “game of giving and asking for reasons.”
Now, the Bob and Carl examples suggest, I think, that there’s a pretty tidy division of labor here. It is the first sort of reason that does the justificatory work and the second sort, reason-as-represented, that does the predictive and explanatory work (for the sake of convenience, I’m going to henceforth refer to these as j-reasons and r-reasons, respectively). Granted, this is easiest to see when the facts grounding a given j-reason are represented very poorly or not at all by the subject to which the j-reason applies. Things get a little murkier when the j-reason is represented well (i.e., when the representing subject is apt to behave rationally). Here, the j-reason starts to look a lot more indispensable to prediction and the r-reason a lot more indispensable to justification. But as long as the representation falls short of perfect isomorphism (as it always will in the real world), the r-reason will always have more predictive utility than the j-reason, and one will always be more justified in acting in accordance with a j-reason than an r-reason.
Churchlandian eliminativism certainly could threaten to revise our understanding of r-reasons. But if r-reasons are properly tools of prediction rather than sources of justification, then they already belong to the Scientific Image, and their revision should pose no threat to normativity. What might be more concerning, at least at first blush, is that some of the facts constitutive of a j-reason are facts about the mental states of the subject to whom the j-reason applies (his desires, values, etc.), and eliminativism might contend that we have these facts all wrong.
The question, then—again—is whether whatever brain states would replace desires and the like in an eliminativist program could ground j-reasons. I don’t see why they couldn’t. What seems to me most crucial for the normative functions of the propositional attitudes is their (defeasible) connection to motivation and action, and I see no reason to suspect this connection would be severed simply by replacing the traditional representational vehicle—the proposition—with something like the high-dimensional models of the neurocomputational program. Let’s say a desire turns out to be something like a high-dimensional representation of some possible world state + a state of activation in, say, premotor cortex that primes the body to work toward realizing this world state under certain perceived circumstances. Such a brain state would seem to afford all the same sort of “rational purchase” on the individual’s behavior as a traditional desire. We’d still have j-reasons—in this case constituted by facts about these brain states, facts about the circumstances on which certain actions are conditioned, facts about the probable outcomes of those actions, etc.—and we’d still seek normative influence over others by giving r-reasons, which will succeed just to the extent that they put their targets in epistemic contact with the facts constitutive of the j-reasons.
If this all just sounds too descriptive to undergird any sort of real normativity, it may be worth remembering that even on the traditional picture, descriptions can serve clearly normative functions. If we’re camping and I suddenly announce “There’s a bear behind you!” you don’t need me to add “and you should run or you’ll be mauled!” in order to take the appropriate action. I have trusted that you were already acquainted with a certain set of facts—facts about your aversion to bodily harm, facts about a bear’s ability to inflict bodily harm, facts about how to avoid bodily harm in the presence of a bear—and judged that I needed only to make you aware of one additional fact (viz., that you are currently in the presence of a bear) in order to put you in sufficient epistemic contact with the relevant j-reason.
The lesson here, I think, is that normativity is not, except perhaps in a derivative sense, a property of certain languages or conceptual schemes. It is first and foremost a property of intelligent systems: those capable of modeling the world and regulating their behavior so as to bring the world and their models into congruence (whether by changing the model or by changing the world). This general picture seems unaltered by Churchlandian eliminativism, and so I don’t see the assault on the Manifest Image that others apparently see in that program.
By the way, I have a lengthy series on reductionism in the works. It’s going to be foundational for much of what I ultimately want to talk about in this blog. Look for the first part soon.