In my last post I raised the issue of the specialized, abstracted ways in which philosophers of mind tend to describe and analyze mental life. This will be crucial to some of the arguments I want to develop. There’s a risk here that it will seem as though I’m constructing a straw man, criticizing a caricature that doesn’t faithfully depict the way philosophers actually talk. There’s also a risk that my argument will seem like a simplistic, knee-jerk reaction against complex vocabulary or sentence construction.
This post is an effort to head off those types of objections by giving specific examples of what I mean by abstract, technical, specialized philosophical language. All that matters for the sake of my arguments is that it is markedly different, and in some sense removed, from concrete or empirical data — whether objective (observable phenomena and behavior) or subjective (direct experience).
I’ll give just a couple of examples (the first good ones I found on a random search), but there are countless others throughout the field. My point applies to any discussion that sounds even remotely like what follows, or that is based on conclusions drawn from this sort of analysis.
The following examples are taken from Concepts: Where Cognitive Science Went Wrong, by Jerry Fodor (1998).
"Since content supervenes on purely nomic relations — that is, on certain lawful relations among properties — and since lawful relations can presumably hold among properties that are, de facto, uninstantiated, the metaphysical conditions for content can in principle be met entirely counterfactually: no actual tokens of DOG have actually to be caused by dogs for the counterfactuals that its content supervenes on to be in place."
My point is not that this is complex, rich with philosophical jargon, or difficult for non-philosophers to understand — though of course it is — but that what’s being described here are abstract entities and categories (DOG, tokens, properties) and how those relate to one another. DOG is not a living, panting cocker spaniel sitting at Fodor’s feet; it is a theoretical construct.
Fodor describes such theoretical constructs as interacting or relating to each other in a variety of ways, such as “instantiating,” “supervening on,” or being “constitutive of” one another. Here are two other examples:
"So we have it, by assumption, that 'dog' and DOG mean dog because 'dog' expresses DOG, and DOG tokens fall under a law according to which they reliably are (or would be) among the effects of instantiated doghood."
"[C]oncepts are constituents of mental states. Thus, for example, believing that cats are animals is a paradigmatic mental state, and the concept ANIMAL is a constituent of the belief that cats are animals."
The next quotes are from David Chalmers, in “The Content and Epistemology of Phenomenal Belief” (2003).
“I take concepts to be mental entities on a par with beliefs: they are constituents of beliefs (and other propositional attitudes) in a manner loosely analogous to the way in which words are constituents of sentences. Like beliefs, concepts are tokens rather than types in the first instance. But they also fall under types, some of which I explore in what follows.”
Again, the entities being discussed here are abstract. (Which is not to say that nobody imagines these abstract entities will correspond to physical entities. But so far, we’re operating in pure theory.)
“I look at a red apple, and visually experience its color…” (So far so good! A little oddly phrased, but still pretty well grounded in the reality of people, apples, and colors.) “…This experience instantiates a phenomenal quality R, which we might call phenomenal redness.” (And then *poof!* we’re back in theory-land.)
Note: Lest it still seem as though I am critiquing an outdated, minority viewpoint, please read the post "This Means You Too, Andy Clark."