Apologies for the downtime. Still settling in after a big international move.
I have several major projects in the works (always, always), but I thought that, as a tide-me-over (tide-you-over?), I’d try to briefly outline my core criticisms of Jordan Peterson. There are, frankly, many nits one can pick with his worldview (he gets a great many things wrong about the particulars of human evolution, for instance), but these, I think, are the most fundamental problems:
1. His James-via-Darwin pragmatic notion of truth is deeply conceptually confused. Peterson mistakes what is at best a defeasible indicator of truth—viz., utility—for truth itself. Now, the issue to which Peterson’s responding is legitimate and pressing enough: We don’t have any god’s-eye-view of the relationship between reality and our representations, no perfect vantage point from which we can assess the accuracy of those representations. We therefore have to judge their accuracy via more indirect assessments, like whether they allow us to predict really unexpected things and to reliably intervene in the world in successful ways. Put somewhat inversely, it would be very hard to explain how it is we can do so many astounding things with, e.g., the theory of general relativity without that theory being at least partially true.
So utility has some relevance here. But note that only certain uses can really speak to the question of a representation’s truth. The fact that GR can be used to make an entertaining sci-fi story or to inspire awe toward the universe is not particularly relevant to the question of GR’s truth (though, of course, these are nice features); the fact that GR can be used to predict and model gravitational lensing phenomena, however, is. We can think of it this way: Plenty of false representations might allow us to write an entertaining sci-fi story or feel wonder at the universe, but a task as specific and demanding as modeling gravitational lensing is a much finer sieve; very few contenders for truth can get through.
How fine a sieve is ancestral survivability? Not very—at least not with respect to the sorts of truths with which religions are classically concerned. True, we needed to have at least a crude grasp of certain Middle World facts—what is safe to eat, what predators look like and how they may be avoided, when the wet and dry seasons fall, etc.—but plenty of incompatible falsehoods about, e.g., the fundamental constituents of the universe or its ultimate purpose could be utterly interchangeable as far as natural selection is “concerned.” And within the ethical realm, survival wouldn’t have required us to know what constitutes the most morally optimal life but only what our fellow tribesmen would demand of us and what they’d let us get away with. And even among those mundane Middle World facts, selection often treats false positives and false negatives differently. It was super important, ancestrally, that we not confuse a dangerous snake for a harmless snake-shaped object. It was far less important, however, that we not confuse a harmless snake-shaped object for a dangerous snake. These are both errors of similar magnitude, factually speaking, yet their effects on survivability are radically different. Fleeing from a harmless object costs a little extra energy, but failing to flee from a dangerous snake could well cost everything.
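The false-positive/false-negative asymmetry above can be made concrete with a toy expected-payoff calculation (a minimal sketch; the base rate and costs are invented purely for illustration, not empirical estimates):

```python
# Toy illustration of error-cost asymmetry under selection.
# All numbers are illustrative assumptions, not empirical estimates.

P_SNAKE = 0.1       # assumed probability an ambiguous object is a real snake
COST_FLEE = -1      # small energy cost paid whenever we flee
COST_BITE = -1000   # catastrophic cost of failing to flee from a real snake

def expected_payoff(p_flee_given_snake, p_flee_given_harmless):
    """Expected payoff of a detection policy under the assumed costs."""
    snake_cases = P_SNAKE * (
        p_flee_given_snake * COST_FLEE
        + (1 - p_flee_given_snake) * COST_BITE
    )
    harmless_cases = (1 - P_SNAKE) * (p_flee_given_harmless * COST_FLEE)
    return snake_cases + harmless_cases

# Two policies with mirror-image 20% misclassification rates:
jumpy  = expected_payoff(1.0, 0.2)  # errs only via false alarms
sleepy = expected_payoff(0.8, 0.0)  # errs only via missed snakes

print(jumpy, sleepy)  # jumpy is roughly -0.28; sleepy is roughly -20.08
```

Though the two policies are "equally wrong" in the factual sense of how often they misclassify, the jumpy policy vastly outperforms the sleepy one in expected payoff, which is the only sense selection cares about.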
Contribution to ancestral survivability is neither a necessary nor a sufficient condition for truth (and by the by, it’s pretty rich that someone who complains so much about postmodernism has no apparent qualms about attacking and perverting traditional notions of truth in order to insulate his own beliefs from factual criticism).
2. As suggested above, Peterson seems to invest natural selection with far more power and intentionality than it actually has. Evolution is only pseudo-teleological; its apparent goal-directedness is an artifact of differential extinction. And fitness is always relative to protean environmental context (which, incidentally, renders it woefully inadequate as a foundation for morality).
I don’t know if Peterson actually thinks evolution is purposeful or if he’s simply been led into confusion by the ways in which we (biologists included) often speak heuristically about what genes “want” and so forth. If he does believe it’s properly teleological, then of course that might explain why he thinks it can ground a defensible pragmatic notion of truth. But there’s a problem with this, for evolution can only really be teleological if there’s intelligence behind it. If the proposition that there is such an intelligence is true, then either it is true in a non-pragmatic sense, in which case Peterson owes us a non-pragmatic argument for it, or it is true (or meta-true, as he sometimes says) in precisely the pragmatic sense the proposition is intended to justify. The latter possibility, of course, suggests that Peterson is arguing in a circle, attempting to ground the theological elements of his worldview in a pragmatic conception of truth that itself depends for its justification upon the truth of (a subset of) those same theological elements. You can’t just take a Thomistic metaphysics as a given.
I’ve not really seen this circularity criticism crop up in debates over Peterson’s views, but I think it is foundational and hugely important. So, you know, feel free to deploy it as needed.