Tuesday, December 27, 2011
The editors at Review of Philosophy and Psychology invite submissions for a special issue on consciousness attribution in moral cognition. Guest authors include: Kurt Gray (Maryland), Edouard Machery (Pittsburgh) and Justin Sytsma (East Tennessee State), and Anthony I. Jack (Case Western Reserve) and Philip Robbins (Missouri).
Submissions are due March 31, 2011.
The full CFP, including relevant dates and submission details, is available here.
Abbreviated CFP: When people regard other entities as objects of ethical concern whose interests must be taken into account in moral deliberations, does the attribution of consciousness to these entities play an essential role in the process? In recent years, philosophers and psychologists have begun to sketch limited answers to this general question. However, much progress remains to be made. We invite contributions to a special issue of The Review of Philosophy and Psychology on the role of consciousness attribution in moral cognition from researchers working in fields including developmental, evolutionary, perceptual, and social psychology, cognitive neuroscience, and philosophy.
Friday, December 23, 2011
Clark Glymour has an opinion piece urging philosophers to reach out beyond their disciplinary circles and encouraging the pursuit of big-dollar grants. Adam Briggle and Robert Frodeman say much the same thing. (Glymour emphasizes philosophy of science and Briggle & Frodeman applied ethics.) I agree that philosophers as a group should reach out more than they do. But I think the increasing emphasis on grant-getting in academia is a disease to be fought, not a trend to be encouraged.
Academic research scientists spend a lot of time applying for grant money. This is time that they are not spending doing scientific research. I've often heard that applying for an NSF grant takes about as much time as writing a journal article. Now, most scientists need money to do their research and there should be mechanisms to fund worthy projects, so maybe for them passionate summers of grant application are a worthy investment. But do philosophers need to be doing that? I doubt philosophy is best served by encouraging philosophers to spend more time thinking up ways to request money.
Furthermore, for both scientists and philosophers I think a better model would be a hybrid in which it is possible to apply for grants but in which, also, productive researchers could be awarded research money without having to apply for it. Look, V.S. Ramachandran is going to do something interesting with his research money no matter what, right? Philip Kitcher too. Let them spend their time doing what they do best and monitor the funds post facto. Let us all have a certain small amount of money to attend (and sometimes organize) conferences, without our having to manufacture elaborate bureaucratic pleas in advance. The same total funding could go out, with much less time wasted, if grant writing were only for exceptional cases and exceptional expenses.
A very different type of reason to resist the increasing academic focus on grant-getting is this: Grant-driven bureaucracy decreases the power of researchers to set their own research agenda and increases the power of the grant agencies to set the agenda. Maybe that's part of what Glymour and Briggle & Frodeman want, since they seem to distrust philosophers' ability to choose worthy topics of research for themselves. But philosophy in particular has often been advanced by people working outside the mainstream, on projects that might not have been seen as valuable by the well-established old-school researchers and administrators that tend to serve on grant committees. In ancient Greece, the sophists were the ones getting grants, while Socrates was fightin the powa.
If you want to apply for grants, terrific! I have no problem with that. Get some good money to do your good work. Organize an interesting conference; fly across the world to thumb through the archives; get some time away from teaching to write your book. Absolutely! But let's not try to push the discipline as a whole more into the grant-getting game than it already is.
Thursday, December 15, 2011
There's a huge literature in philosophy of language on what's called "Frege's puzzle" about belief reports. Almost all the participants in this literature seem to take for granted something that I reject: that sentences ascribing beliefs must be determinately true or false, at least once those sentences are disambiguated or contextualized in the right way.
Frege's puzzle is this. Lois Lane believes, it seems, that Superman is strong. And Clark Kent is, of course, Superman. So it seems to follow that Lois Lane believes that Clark Kent is strong. But Lois would deny that Clark Kent is strong, and it seems wrong to say that she believes it. So what's going on? There are several standard options, but all lead to trouble of one sort or another. (If you don't like Superman, try Twain/Clemens or Unabomber/Kaczynski.)
On a dispositional approach to belief of the sort I favor, to believe some proposition P -- the proposition, say, that that guy (variously known as "Superman" or "Clark Kent") is strong -- is to be disposed to act and react, both outwardly and inwardly, as though P were true. (On my version of dispositionalism, this means being disposed to act and react in ways that ordinary people would regard as characteristic of belief that P.) Lois has some such dispositions: For example, she's disposed to say "Superman is strong". But she notably lacks others: She's not disposed to say "Clark Kent is strong". She's disposed to ask Superman/Clark Kent to lift her up in the air when he's in costume but not when he's in street clothes.
Personality traits also involve clusters of dispositions, so consider them as an analogy. If someone is disposed to be courageous in some circumstances and not courageous in other circumstances, it might be neither quite right to say that she is courageous nor quite right to say that she isn't. "Courageous" is a vague predicate, and we might have an in-between case, in which neither simple ascription nor simple denial is entirely appropriate (though there may also be contexts in which simple ascription or denial works well enough -- e.g., battlefields vs. faculty meetings if she has battlefield courage but not interpersonal courage). Compare also "Amir is tall", said of a man who is 5'11". Lois's belief about Superman/Clark Kent might similarly be an in-between case in the application of a vague predicate.
You'll probably object that Lois simply and fully believes that Superman is strong, and it's not an in-between case at all. I have two replies. First, that way of putting it -- in terms of Superman rather than Clark Kent -- highlights certain aspects of Lois's dispositional profile over others, thus creating a conversational context that tends to favor believes-strong ascription (like a battlefield context might favor ascription of courage to a person who has battlefield courage but not other sorts of courage). Second, consider a version of the case in which the belief ascriber doesn't have the name "Clark Kent" available, but only the name "Superman". The ascriber and his friend are looking through a window at Superman/Clark Kent in street clothes. The ascriber's friend, who doesn't know that Lois is deceived, asks, "Does Lois believe that Superman is strong?" What should the ascriber reply? He should say, "Well, um, it's a complicated case!" I see no point in insisting that underneath that hedge there needs to be a determinate metaphysical or psychological or (disambiguated [update Dec. 16: e.g., "de re / de dicto"]) linguistic fact that yes-she-really-does (or no-she-really-doesn't), any more than there always has to be a determinate fact about whether someone is tall simpliciter or courageous simpliciter.
Now this is a heck of a mess in philosophy of language, and I haven't thought through all the implications. I'm inclined to think that excessive realism about the identity of propositions is part of the problem too. I don't claim that this is a full or non-problematic solution to Frege's puzzle. But it seems to me that this general type of approach should be more visible among the options than it is.
[HT: Lewis Powell on Kripke's Puzzle.]
Saturday, December 10, 2011
On Nature's website:
“Descartes said that if there's something you can be certain of in this world, it's that your hand is your hand,” says Ehrsson.

Um, whoops! Descartes said that what he couldn't doubt was his own thinking. It was G.E. Moore who famously said it would be absurd to suggest that he didn't know that "here is a hand".
Descartes, G.E. Moore, whatever! It's only philosophy, after all -- not something worth bothering to get right in the flagship journal of the natural sciences.
(If I sound prickly, maybe it's because I'm currently on hold with AT&T, about to talk to my eleventh representative in two months about being double billed for internet service.)
Update, Dec. 15: The author of the piece has now corrected the error. It turns out that philosophy is worth getting right after all!
Posted by Eric Schwitzgebel at 4:11 PM
Thursday, December 08, 2011
A recent paper by Francesca Gino and Dan Ariely suggests that relatively creative people are more likely to be dishonest than are relatively less creative people because they are better at concocting rationalizations for potential dishonesty. I can't say I'm entirely won over by Gino & Ariely's methodology, which measures dishonesty by seeing whether people will give wrong answers in psychology laboratory studies when they are paid to give those wrong answers. (If a psychologist says: "Roll a die; I'm not going to check the outcome, but I'll pay you $1 if you say it's a 1 and $6 if you say it's a 6", how exactly should the participant react to what's going on here?) I'd rather see more naturalistic observations of behavior in real-life situations, or at least better cover stories. Nor do I think Gino & Ariely do a terrific job of establishing that ability to creatively rationalize is the real mediator of the apparent difference in honesty.
Nonetheless, the conclusion is interesting, the mechanism plausible, and the results at least suggestive. And their picture fits nicely with my favorite hypothesis about the apparent fact that professional ethicists behave no morally better than do socially similar non-ethicists. Philosophical moral reflection, I'm inclined to think, rather than being inert, is bivalent: On the one hand, it highlights the moral dimension of things and can help you appreciate moral truths; but on the other hand, people who are skilled at it will also be skilled at finding superficially plausible rationalizations of attractive misconduct which might then allow them to feel freer to engage in that misconduct (e.g., stealing a library book). Professional ethicists develop their creativity in exactly an area in which being creative brings substantial moral hazards.
Tuesday, December 06, 2011
In 2010, I compiled a list of the top 200 most-cited contemporary authors in the Stanford Encyclopedia of Philosophy. (By "contemporary" I mean born in 1900 or later.) One striking feature of this list is the underrepresentation of baby boomers, especially near the top.
Let's compare the representation of people born 1931-1945 (the fifteen years before the baby boom) with those born in 1946-1960 (the bulk of the baby boom), among the top 25.
Among the pre-baby boomers, we find:
David Lewis (#1)
Saul Kripke (#6)
Thomas Nagel (tied #7)
Jerry Fodor (#9)
Daniel Dennett (tied #10)
Frank Jackson (tied #10)
Robert Nozick (tied #13)
John Searle (tied #13)
Gilbert Harman (#16)
Ronald Dworkin (#18)
Joseph Raz (tied #19)
Bas van Fraassen (tied #19)
Fred Dretske (tied #22)
Peter van Inwagen (tied #22)
Alvin Goldman (tied #24).
Among the baby boomers we find:
Martha Nussbaum (tied #19)
Philip Kitcher (tied #24).
These numbers seem to suggest that the depression-era and World War II babies have had a much larger impact than the baby boomers on mainstream Anglophone philosophy.
You might have thought the reverse would be the case. Aren't there more baby boomers? Haven't baby boomers been culturally dominant in other areas of society? So what's going on here?
One possibility is that the boomers haven't yet had time to achieve maximum influence on the field. Someone born in 1940 has had ten more years to write and to influence peers and students than has someone born in 1950. Although I think there is something to this thought, especially for the younger boomers, I suspect it's not the primary explanation. A boomer born in 1950 would be sixty years old by 2010. The large majority of philosophers who have a big impact on the field achieve a substantial proportion of that impact well before the age of sixty. Certainly that's true of the top philosophers on the list above -- Lewis, Kripke, Nagel, and Fodor. Their most influential work was in the 1960s to early 1990s. The boomers have had plenty of time to generate the same kind of influence, if it were simply a matter of catching up from a later start. In fact, contemporary Anglophone philosophers seem to have their average peak influence from about age 55-70, declining thereafter. On average, the baby boomers should be enjoying peak citation rates right now, and the depression babies should be starting to wane.
Here's an alternative diagnosis: College enrollment grew explosively in the 1960s and then flattened out. The pre-baby-boomers were hired in large numbers in the 1960s to teach the baby boomers. The pre-baby boomers rose quickly to prominence in the 1960s and 1970s and set the agenda for philosophy during that period. Through the 1980s and into the 1990s, the pre-baby-boomers remained dominant. During the 1980s, when the baby boomers should have been exploding onto the philosophical scene, they instead struggled to find faculty positions, journal space, and professional attention in a field still dominated by the depression-era and World War II babies.
This started to change, I think, with the retirement of the depression babies and the hiring boom of Gen-Xers in the late 1990s and early 2000s. It remains to be seen if history will repeat itself.
Wednesday, November 30, 2011
I keep bumping into this question. Casey Perin gave a talk on it at UCR; Daniel Greco has a forthcoming paper on it in Phil Review. Benj Hellie launched an extended Facebook conversation about it. Can the radical skeptic live his skepticism? I submit the following for your consideration.
First, a bit about belief. I've argued that to believe some proposition P is nothing more or less than to be disposed to act and react in a broadly belief-that-P-ish way -- that is, to be disposed, circumstances being right, to say things like "P", to build one's plans on the likelihood of P's truth, to feel surprised should P prove false, etc. Among the relevant dispositions is the disposition to consciously judge that P is the case, that is, to momentarily explicitly regard P as true, to endorse P intellectually (though not necessarily in language). Dispositions to judge that P often pull apart from the other dispositions constitutive of belief, for example in self-deception, implicit bias, conceptual confusion, and momentary forgetting. (See here and here.) To believe that P is to steer one's way through the world as though P were the case. One important part of the steering, but not the only part, is being disposed to explicitly judge that P is the case.
Okay, now skepticism. My paradigm radical skeptics are Sextus Empiricus, Montaigne (of the Apology), and Zhuangzi (of Inner Chapter 2). When such radical skeptics say they aim to suspend all belief, I recommend that we interpret them as really endorsing two goals: (a.) suspending all judgment, and (b.) standing openly ready, with equanimity, for alternative possibilities.
Arguments that it's impossible to suspend all belief tend to be, at root, arguments that it's impossible to refrain from action and that action requires belief. Perhaps it is impossible to refrain from all action. No skeptic advises sitting all day in bed (as though that weren't itself an action). Sextus advises acting from habit; Zhuangzi seems to endorse well-trained spontaneity. (Of course, they can't insist dogmatically on this, and Zhuangzi actively undermines himself.) If the runaway carriage is speeding toward the skeptic, the skeptic will leap aside. On my account of belief, such a disposition is partly constitutive of believing that the carriage is heading one's way. So the skeptic will have at least part of the dispositional profile constitutive of that belief. This much I accept.
But it's not clear that the skeptic needs to match the entire dispositional profile constitutive of believing the carriage is coming. In particular, it's not clear that the skeptic needs to consciously judge that the carriage is coming. Maybe most of us would in fact reach such a judgment, but spontaneous skillful action without conscious judgment is sometimes thought to be characteristic of "flow" states of peak performance; and Heidegger seems to have valued them and regarded them as prevalent; and perhaps certain types of meditative practice aim at them. Suspension of judgment seems consistent with action, perhaps even highly skilled action. Though suspension of judgment isn't suspension of the entirety of the dispositional profile characteristic of belief, it's suspension of an important part of the profile -- perhaps enough so that the skeptic achieves what I call a state of in-between believing, in which there's enough deviation from the relevant dispositional profile that it's neither quite right to say he believes nor quite right to say he fails to believe.
The skeptic will also, I suggest, stand openly ready, with equanimity, for alternative possibilities. The skeptic will leap away from the carriage, but she won't be as much surprised as the non-skeptic would be if the carriage suddenly turns into a rooster. The skeptic will utter affirmations -- Zhuangzi compares our utterances to the cheeping of baby birds -- but with an openness to the opposing view. The skeptic will be less perturbed by apparent misfortune (for maybe it's really good fortune in disguise) and thus perhaps achieve a certain tranquility unavailable to dogmatists (as emphasized by both Sextus and Zhuangzi). The skeptic stands humbly aware, before God or the universe, of his flawed, infinitesimal perspective (as expressed by Montaigne).
Judgment is stoppered; action still flows; there's a humility, openness, tranquility, lack of surprise. None of this seems psychologically impossible to me. In certain moods, I even find it an appealing prospect.
Tuesday, November 22, 2011
I conceptualize the history of philosophy as, in part, the source of interesting empirical data about the psychology of philosophy. Nietzsche and Dewey also conceptualized the history of philosophy this way, but I don't think many other philosophers do. There's a lot of untapped potential.
Here are some ways I've put the history of philosophy to empirical use:
* As evidence that it is impossible to construct a detailed, thoroughly commonsense metaphysics of mind-body dualism.
* As evidence for a relationship between culturally available metaphors for visual experience and views of the nature of visual experience.
* As evidence for a relationship between culturally available metaphors for dream experience and views of the nature of dream experience.
* As evidence that philosophical expertise doesn't diminish the likelihood of being swept up in noxious political ideologies.
* As evidence of the diversity of philosophical opinions that can be held by presumably reasonable people (especially on the character of conscious experience and the metaphysics of mind).
I can't resist also mentioning Shaun Nichols's observation of the suspicious lack of historical occupants of one theoretically available position regarding free will.
These analyses are mostly not quantitative, but that doesn't make them less empirical. In all cases, the fact that some philosophers claimed X (or X1... Xn) or did Y is treated as empirical evidence for some different hypothesis Z about the psychology of philosophy.
Maybe empirically oriented philosophers typically don't regard themselves as expert enough in history of philosophy to write about it. But I think we hobble ourselves if we allow ourselves to be intimidated. The standard of expertise for writing about Descartes or Kant in the context of a larger project -- a project that isn't just Descartes or Kant interpretation -- shouldn't be world leadership in Descartes or Kant interpretation. It should be the same standard of expertise as in writing about a contemporary colleague with a large body of influential work, like Dennett or Fodor.
Friday, November 18, 2011
Wednesday, November 16, 2011
Nowadays, most Americans report dreaming in color. In the 1950s, most Americans reported dreaming in black and white. In a series of articles I have argued that the reason for this change is not that people used to dream in black and white and now dream in color. Rather, I argue, people over-analogize dreams to movies. Thus, as movie technology shifts, people's dream reports shift, though their dreams themselves remain the same.
(Two pieces of evidence for this view: (a.) The use of color terms ("brown", "orange", etc.) in dream diaries seems to have been consistent since the 1950s. (b.) Color dream reporting correlates with group history of black-and-white media exposure across socioeconomic groups in China.)
A new study by Hitoshi Okada and colleagues in Japan calls my research into doubt. In 1993, Okada and colleagues had found that young Japanese respondents tended to report colored dreaming while older respondents tended to report not dreaming in color -- a result entirely in accord with my hypothesis, due to respondents' presumably different histories of black-and-white vs. colored media exposure. Now in 2011, Okada et al. find almost exactly the same pattern of responding. Thus, the cohort of respondents that was in their 20s and 30s in 1993, and who reported mostly colored dreaming back then, reports relatively infrequent color dreaming now. Twenty years of (presumably) colored media exposure appears not to have shifted them toward reporting more colored dreaming -- if anything, the opposite.
Maybe these results can be reconciled with my view. For example, maybe older Japanese regard as the archetypal movie the old-fashioned high-art black-and-white movies of Kurosawa and others. But that doesn't seem especially likely.
Another possibility (as always!) is that Okada's research is open to interpretations other than its face-value interpretation.
The following is Okada et al.'s entire description of their questionnaire:
The participants were required to check one of five categories describing the frequency with which color occurred in their dreams during the past year: 1 (always), 2 (sometimes), 3 (occasionally), 4 (seldom), or 5 (never) (p. 216).

In English, I don't know that "sometimes" implies higher frequency than "occasionally", but I trust that this is just an infelicity of translation from the original Japanese.
One worry is that this measure has no denominator. So here's one possible explanation of the Okada et al. results: Older Japanese people report dreaming less in general than do younger Japanese, so they report less frequent colored dreaming too. This would be consistent with their self-reported ratio of black-and-white to colored dreaming being about the same. (In my own work on the issue, I ask some respondents about the absolute frequency of colored dreaming and others about the proportion of colored to black-and-white dreams.)
Another potential concern is non-response bias. Okada et al. state that their participants were "students in Bunkyo University, Jissen Women’s University, and Iwate University, or members of their families" (p. 215). They don't indicate the response rates of the family members, but it's possible that only a minority of family members who heard about the questionnaire chose to respond. If so, those family responders would mostly be people with higher-than-average interest in the issue of black-and-white vs. colored dreaming. And we might reasonably worry that such people would not have views on that question that are representative of the population as a whole. (This is, of course, the notorious problem with online polls.)
I'd be very interested to see a follow-up study addressing these concerns.
Tuesday, November 15, 2011
What happens to your moral behavior and moral attitudes when you reflect philosophically? Philosophers all seem to have opinions about this, but those opinions diverge and there's very little serious research on the issue.
Here are four possibilities:
(1.) The booster view: Philosophical moral reflection leads to the discovery of moral truths – either general moral truths that people tend not to endorse absent such reflection (such as, perhaps, that eating meat is morally bad) or particular moral truths about specific situations that would not otherwise have been properly morally appreciated (such as that some particular behavior would be objectionably sexist). Such discoveries have a significant positive overall impact on moral behavior – though perhaps only on average, to a moderate extent, and in some areas. Furthermore, since it reveals connections between specific instances of moral behavior and general moral principles, philosophical moral reflection tends to increase the overall consistency between one’s broad moral attitudes and one’s practical moral behavior.
(2.) The epiphenomenalist view: Philosophical moral reflection is virtually powerless to change moral behavior or moral attitudes, either for better or for worse – though it may produce decorative linguistic justifications of what we would have thought and done in any case.
(3.) The rationalization view: Philosophical moral reflection tends to increase the consistency between attitudes and behavior, as the booster suggests, but in the opposite causal direction from the one the booster suggests: The ethically reflective person’s attitudes shift to match her behavior rather than her behavior shifting to match her attitudes. The philosophically reflective person’s practical behavior may be unaffected by such rationalizations (the inert rationalization view); or the tendency to rationalize may morally worsen philosophically reflective people by freeing them to act on immoral impulses that are superficially but unsatisfactorily justified by their reflections (the toxic rationalization view). On the inert rationalization view, for example, one will either steal or not steal a library book as a result of psychological processes uninfluenced by one’s philosophical reflections, and then one will shape one’s moral attitudes to justify that incipient or recently past behavior. On the toxic rationalization view, one might feel an inclination to steal the book and act on that inclination as a consequence of a spurious moral justification for the theft.
(4.) The inert discovery view: Philosophical moral reflection tends to lead to the discovery of moral truths (as also suggested by the booster view). However, such discoveries have no material consequences for the practical behavior of the person making those discoveries. Philosophical reflection might lead one to discover, for example, that it is morally wrong to eat the meat of factory-farmed mammals, but on this view one would continue to eat factory-farmed meat at virtually the same rate as one would have done absent any philosophical reflection on the matter.
Monday, November 07, 2011
... a new essay of mine, now in circulating draft. Comments welcome, either on this post or by email.
Crazyism about X is the view that something it would be crazy to believe must be among the core truths about X. In this essay, I argue that crazyism is true of the metaphysics of mind. A position is "crazy" in the intended sense if it is contrary to common sense and we are not epistemically compelled to believe it. Views that are crazy in the relevant sense include that there is no mind-independent material world, that the United States has a stream of conscious experience distinct from the experiences of the individuals composing it, that chimps or the intelligent-seeming aliens of science fiction fantasy entirely lack conscious experience, that mental events are causally inefficacious. This is by no means a complete list. Well developed metaphysical theories will inevitably violate common sense, I argue, because common sense is incoherent in matters of metaphysics. No coherent and detailed view could respect it all. With common sense thus impaired as a ground of choice, we lack the means to justifiably select among several very different metaphysical options concerning mind and body. Something bizarre must be true about the mind, but which bizarre propositions are the true ones, we are in no good position to know.
Monday, October 31, 2011
I wrote a bit about this issue last May, and it's still really bugging me. Let me try another angle in.
It would be bizarre to suppose that the United States has a stream of conscious experience distinct from the streams of conscious experience of the people who compose it. I hope you'll agree. (By "the United States" here, I mean the large, vague-boundaried group of compatriots who sometimes act in a coordinated manner.) Yet it's unclear by what materialist standard the U.S. lacks consciousness. Nations, it would seem, represent and self-represent. They respond (semi-)intelligently and self-protectively, in a coordinated way, to opportunities and threats. They gather, store, and manipulate information. They show skillful attunement to environmental inputs in warring and spying on each other. Their subparts (people and larger subgroups of people) are massively informationally connected and mutually dependent, including in incredibly fancy self-regulating feedback loops. These are the kinds of capacities and structures that materialists typically regard as the heart of mentality. Nations do all these things via the behavior of their subparts, of course; but on materialist views individual people also do what they do via the behavior of their subparts. A planet-sized alien who squints might see individual Americans as so many buzzing pieces of a diffuse body consuming bananas and automobiles, invading Iraq, exuding waste.
Even if the U.S. still lacks a little something needed for consciousness, it seems we ought at least hypothetically to be able to change that thing, and so generate a stream of experience. We presumably needn't go nearly as far as Ned Block does in his famous "Chinese nation" example -- an example in which the country of China implements the exact functional structure of someone's mind for an hour -- unless we suppose, bizarrely, that consciousness is only possible among beings with almost exactly our psychology at the finest level of functional detail. If we are willing to attribute conscious experience to relatively unsophisticated beings (frogs? fish?), well, it seems that the United States can, and does sometimes, act with as much coordination and intelligence, if on a larger scale.
The most plausible materialistic attempt I have seen to confine consciousness within the skull while respecting the broadly functionalist spirit of most materialism is Andy Clark's and Chris Eliasmith's suggestion that consciousness requires the functional achievements possible through high bandwidth neural synchrony. However, it's hard to see why speed per se should matter. Couldn't conscious intelligence be slow-paced, especially in large entities? And it's hard to see why synchrony should matter either, as long as the functional tasks necessary for intelligent responsiveness are successfully executed.
Alternatively, one might insist that specific details of biological implementation are essential to consciousness in any possible being -- for example, specific states of a unified cortex with axons and dendrites and ion channels and all that -- and that broadly mammal-like or human-like functional sophistication alone won't do. However, it seems bizarrely chauvinistic to suppose that consciousness is only possible in beings with internal physical states very similar to our own, regardless of outwardly measurable behavioral similarity. If aliens come visit us tomorrow and behave in every respect like intelligent, conscious beings, must we check for sodium and calcium channels in their heads before admitting that they have conscious experience? Or is there some specific type of behavior that all conscious animals do but that the United States, perhaps slightly reconfigured, could not do, and that is a necessary condition of consciousness? It's hard to see what that could be. Is the United States simply not an "entity" in the relevant sense? Well, why not? What if we all held hands?
In his classic early statement of functionalism, Hilary Putnam (1965) simply rules out, on no principled grounds, that a collection of conscious organisms could be conscious. He doesn't want his theory to result in swarms of bees having collective conscious experience, he says. But why not? Maybe bee swarms are dumber and represent less than do individual bees -- committees collectively act and collectively represent less than do their members as individuals -- but that would seem to be a contingent, empirical question about bees. To rule out swarm consciousness a priori, regardless of swarm behavior and swarm structure, seems mere prejudice against beings of radically different morphology. Shouldn't a well developed materialist view eventually jettison unprincipled folk morphological prejudices? The materialist should probably expect that some entities to which it would seem bizarre to attribute consciousness do in fact have conscious experience. If materialism is true, and if the kinds of broadly functional capacities that most materialists regard as central to consciousness are indeed central, it may be difficult to dodge the conclusion that the United States has its own stream of conscious experience, in addition to the experiences of its individual members.
(Yes, I know this is crazy. That's the point.)
Thursday, October 27, 2011
As I remarked several years ago in my series of posts about applying to PhD programs in philosophy, it seems to be extremely difficult to gain admission to an elite PhD program in philosophy if you're not from an elite undergraduate institution. Inspired by a comment on a recent post, I decided to look at this a bit more systematically.
Here's what I did. First, I looked to see which of the top ten Leiter ranked philosophy PhD programs consistently displayed undergraduate institution information for their graduate students. Two did: Princeton and Berkeley. Of the 121 graduate students listed on their websites, 119 had undergraduate institution information listed. Of these, 25 were from foreign universities -- typically elite universities (especially Oxford). Excluding the foreign students leaves a pool of 94 students with US undergraduate university listed (21 also listed some graduate work, typically an MA). I then looked at the US News and World Report rankings of their undergraduate institutions.
Twenty-seven students (29%) come from just eight universities: The US News top 10 National Universities, excluding MIT and CalTech (Chicago, Columbia, Duke, Harvard, Penn, Princeton, Stanford, and Yale).
Another seventeen (18%) come from the universities ranked 11-25 (Berkeley, Brown, Cornell, Johns Hopkins, Northwestern, Rice, UCLA, USC, and Vanderbilt being represented).
Ten more (11%) come from universities ranked 26-50. And of these ten, seven are from universities with elite graduate programs in philosophy: three from NYU (Leiter ranked #1 in the U.S.), one from Michigan (Leiter ranked #5), two from UNC Chapel Hill (Leiter ranked #9), and one from Tufts (Leiter ranked as the #1 master's program in the U.S.). So, really, these universities are more elite in philosophy than their US News ranking would suggest. Rounding out the mix are Brandeis, UC Santa Barbara, and UW Madison. [Revised 10/28]
Only three universities ranked 51-100 are represented: Two students from Rutgers (whose PhD program is Leiter ranked #2), one from Northeastern (though this student took an MA from Minnesota first), and, strikingly, four students from Colorado (which has a mid-ranked PhD program: Leiter rank #26).
Many of the remaining students are from elite schools in the US News category "National Liberal Arts Colleges". Eight (9%) are from colleges in the top ten (Amherst, Claremont McKenna, Middlebury, Pomona, Swarthmore, and Williams represented), and seven more from those ranked 11-50 (Bates, Franklin & Marshall, Kenyon, Mount Holyoke, Oberlin, and Wesleyan represented).
Only eighteen students (19%) come from all the remaining universities in the United States combined. And even this number overestimates the number of students with genuinely nonelite backgrounds: Three are from Reed College, which though only ranked #57 among liberal arts colleges has a very strong tradition in philosophy; and at least another nine supplemented their undergraduate work with master's degrees or other work at elite schools or places with strong master's programs. Represented are: Arizona State, Biola, Catholic University, Cincinnati, Florida State, Houghton, Indiana-South Bend, Kalamazoo, Nebraska, North Carolina State, Reed, St John's College Santa Fe, St Vincent, and U Mass Boston.
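The percentages above can be recovered directly from the counts reported in the preceding paragraphs. Here is a minimal sketch that tallies them (the category labels are my own shorthand; the counts are transcribed from this post):

```python
# Counts of the 94 US undergraduate institutions, transcribed from the
# paragraphs above.
counts = {
    "US News top 10 national universities": 27,
    "National universities ranked 11-25": 17,
    "National universities ranked 26-50": 10,
    "National universities ranked 51-100": 7,
    "Top 10 liberal arts colleges": 8,
    "Liberal arts colleges ranked 11-50": 7,
    "All remaining US institutions": 18,
}

total = sum(counts.values())
assert total == 94  # matches the pool of US students described above

for category, n in counts.items():
    print(f"{category}: {n} ({n / total:.0%})")
```

Running this reproduces the figures quoted in the text (29%, 18%, 11%, 9%, and 19%), with the remaining students from 51-100 universities and 11-50 liberal arts colleges each rounding to 7%.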
To help give a sense of how thin a representation this is of nonelite schools, consider that there is not a single student on this list from the two biggest public university systems in the country: the Cal State system (412,000 students) and the SUNY system (468,000 students, but that number includes students in two-year colleges and technical institutes). Even the UC system is poorly represented once we exclude the two most elite universities (Berkeley and UCLA): The remaining campuses (Davis, Irvine, Merced, Riverside, San Diego, Santa Barbara, and Santa Cruz) are represented by only a single student from Santa Barbara.
I don't conclude that admissions committees are being unfair, much less explicitly elitist. Maybe students from Harvard and Columbia really are that much better. Or maybe the epistemic task of discerning the genuinely most promising applicants is so difficult that committees need to play the odds and the odds almost always say that the Harvard student is more likely to succeed than the Cal State student. Or maybe so much turns on the credibility of the letter writers that students whose letter writers aren't well known can't really be fully evaluated. Or, or, or, or.
But regardless how innocent the explanation, it's a shame.
Update, 5:44 pm:
Very interesting discussion in the comments! Let me clarify two points:
First: I interpret these results as applying only to the very most elite PhD programs -- roughly the Leiter top ten. There is plenty of evidence that lower-ranked PhD programs (like UCR, ranked #30) admit a substantial proportion of their students from nonelite schools (though I suspect there still is a large pedigree advantage). However, that fact is less consoling than it might seem if it's the case, as I suspect it is, that the top ten PhD programs are vastly more successful than lower-ranked schools in placing their students into the sorts of elite research-oriented jobs and elite liberal-arts-college teaching jobs that many graduate students covet.
Second: I somewhat regret the impression that the title of this post might give that there is simply no chance to be admitted to an elite program from a Cal State or similar. There are a few exceptions, as should be evident from the data included in the post. At least some of the off-list schools are comparable in prestige to the Cal States and SUNYs. Whether these exceptions are frequent enough to constitute any practical chance even for awesome students from such schools, I'm not sure.
Update, October 28:
A reader compiled some data for me from Stanford. This list is not strictly comparable to the Princeton/Berkeley list, since it is a list of the last institution attended prior to Stanford, whether undergraduate or graduate, but it's still probably somewhat comparable.
To my eye, the results look similar, with 28% (out of 40 total US students) from the top ten universities, and another 48% from the top 11-50 universities and top 1-50 liberal arts colleges. Only one student is from a university ranked 51-100, and that university, Pittsburgh, has an elite PhD program (Leiter ranked #4). 23% of the students (9 total) are from all the remaining universities in the US; and at least three of those are from well-regarded MA programs at those universities (to judge from those universities' MA placement lists: Cal State LA, Georgia State, and Texas Tech), while one more student is from a university that although not generally elite has a very strong PhD program in philosophy (Arizona, Leiter ranked #13). The remaining five students are from Illinois Wesleyan, Nevada-Las Vegas, Northern Arizona, Northern Iowa, and South Florida.
19%-23% representation from nonelite universities might not seem very skewed, but I think that would be a false impression. Many more students graduate from nonelite universities than from elite universities. Their low odds of admission are better seen looking up from the bottom than down from the top, as it were. If we take an arbitrary selection of nonelite schools, say all of the dozens of Cal States and SUNYs, we see not a single undergraduate from these schools in any of these three departments. (Caveat: Stanford has a CSLA MA student, and to judge from the comments section and private emails, at least two or three Cal State students have recently cracked other top ten departments; I haven't yet heard good news about any SUNY students.) Also if we look at the very good / marginally elite universities ranked 51-100 on the US News list -- schools which one might think could contribute substantial numbers of students to elite PhD programs -- we still see only very thin representation: Combining Princeton, Berkeley, and Stanford together, only four of those 50 schools are represented; only two if Rutgers and Pitt are reclassed as elite due to their top-ten rankings in philosophy. In contrast, almost all of the top 25 schools are represented, often multiply represented.
Here's another way of thinking about the distribution: In a typical smallish Princeton-Berkeley-Stanford class of six students, four will be from elite undergrad institutions, one will be from a (probably elite) foreign institution, and only one will be from any of the hundreds of good but nonelite US institutions -- and that one student as likely as not spent some time in some capacity either visiting an elite institution or at one of the top MA programs.
Update, August 7, 2013:
See these reflections by David Holiday on his failure to make the jump from a non-prestigious MA program to a PhD program. Starting a few paragraphs in, he makes the case that "the student at the ho-hum department has no way of knowing what she doesn’t know, and what she doesn’t know is evident in her work". I do suspect this is part of the story.
Wednesday, October 19, 2011
I've been told that Kant and Hegel were poor writers whose impenetrable prose style is incidental to their philosophy. I've also been told that their views are so profound as to defy expression in terms comprehensible even to smart, patient, well-educated people who are not specialists in the philosophy of the period. I've heard similar things about Laozi, Heidegger, Plotinus, Derrida. (I won't name any living philosophers.) I don't buy it.
Philosophy is not wordless profound insight. Philosophy is prose. Philosophy happens not in mystical moments, but in the creation of mundane sentences. It happens on the page, in the pen, through the keyboard, in dialogue with students and peers, and to some extent but only secondarily in private inner speech. If what exists on the page is not clear, the philosophy is not clear. Philosophers, like all specialists, profit from a certain amount of jargon, but philosophy need not become a maze of jargon. If private jargon doesn't regularly touch down in comprehensible public meanings, one has produced not philosophy but merely a fog of words of indeterminate content. There are always gaps, confusions, indeterminacies, hidden assumptions, failures of clarity, even in great philosophical prose stylists like Hume, Nietzsche, and David Lewis. Thus, these philosophers present ample interpretative challenges. But the gaps, confusions, indeterminacies, hidden assumptions, and even to some extent the failures of clarity, are right there on the page, available to anyone who looks conscientiously for them, not shrouded in a general fog.
If a philosopher can convince the public to take him seriously -- or her, but let's say him -- being obfuscatory yields three illegitimate benefits: First, he intimidates the reader and by intimidation takes on a mantle of undeserved intellectual authority. Second, he disempowers potential critics by having a view of such indeterminate form that any criticism can be written off as based on a misinterpretation. Third, he exerts a fascination on the kind of reader who enjoys the puzzle-solving aspect of discovering meaning, thus drawing from that reader a level of attention that may not be merited by the quality of his ideas (though this third benefit may be offset by alienating readers with low tolerance for obfuscatory prose). These philosophers exhibit a kind of intellectual authoritarianism, with themselves as the assumed authority whose words we must spend time puzzling out. And simultaneously they lack intellectual courage: the courage to make plain claims that could be proven wrong, supported by plain arguments that could be proven fallacious. These three features synergize: If a critic thinks she has finally located a sound criticism, she can be accused of failing to solve the interpretive puzzle of the philosopher's superior genius.
Few philosophers, I suspect, deliberately set out to be obfuscatory. But I am inclined to believe that some are attuned to its advantages as an effect of their prose style and for that reason make little effort to write comprehensibly. Perhaps they find their prose style shaped by audience responses: When they write clearly, they are dismissed or refuted; when they produce a fog of words that hint of profound meaning underneath, they earn praise. Perhaps thus they are themselves to some extent victims -- victims of a subculture, or circle of friends, or intended audience, that regards incomprehensibility as a sign of brilliance and so demands it in their heroes.
Tuesday, October 18, 2011
Readers interested in graduate school in philosophy might be interested to see my seven-part series on PhD admissions, collected here. It's time to start thinking about the application process, if you're aiming to begin a PhD program in fall 2012.
For advice on applying to Master's programs, see the guest post by Robert Schwartz of University of Wisconsin, Milwaukee.
My impression is that admissions are somewhat more competitive in recessions than in boom times, since there are fewer options outside of academia to draw top students out of the applicant pool. Regarding the job market for newly minted philosophy PhDs, we should probably think of the period from about 1999-2007, bruising and competitive though it was, as boom times unlikely to be replicated in the near future. So don't be misled by departments' placement records from that period. On the other hand, the horrible job market of the past two years is probably also an aberration.
Thursday, October 13, 2011
In his famous 1980 essay, "Mad Pain and Martian Pain", David Lewis tries to thread the needle between a flat-footed functionalism and a flat-footed neural-state identity theory about the mental. Flat-footed neural-state identity theory equates mental states, like being in pain, with possession of particular neural states. Thus, counterintuitively, it implies that beings who are behaviorally similar but internally very different, such as (hypothetically) Martians, can't feel pain. Flat-footed functionalism equates mental states with causal/functional roles. Being in pain, on such a view, is just being in a state that is caused by things like tissue stress and that tends to cause things like wincing, avoidance, and self-ascriptions of pain. This view, counterintuitively, implies the impossibility of "madmen" who feel pain for unusual reasons and have unusual reactions to it.
Lewis's solution is to say that some entity X is in pain if and only if X is in the state that occupies the causal role of pain for the "appropriate population". The "appropriate population", he says, might be (1.) us, since it's our term, (2.) a population that X belongs to, (3.) a population in which X is not exceptional, and (4.) a natural kind such as a species. In the normal case, all four criteria are met. In the Martian case, 2-4 are met though 1 is not, which is good enough. In the mad case 1, 2, and 4 are met though 3 is not, which is also good enough. Since mad Martian pain also seems possible, 2 and 4 alone will be sufficient for pain on Lewis's account.
Now the funny thing about these criteria is that they are all extrinsic or relational, and you might have thought that whether X is in pain or not should depend entirely on what is going on within X; you might have thought that pain would, in today's jargon, "supervene locally". The weirdness can be made vivid with further thought experiments. Criterion (3), for example, can be altered by genocide. Suppose that X is in a state that plays the causal role of pain for most of the population but the causal role of hunger for him and maybe a few others -- a "madman" case. On Lewis's account he will be experiencing pain. Now suppose that X is desperate to end his pain. On Lewis's account he might end his pain by perpetrating genocide upon all the non-mad people of the world. Voila, condition (3) flips, and X's pain has changed to hunger! This is anesthesia by genocide. We could similarly produce anesthesia by reproduction or speciation.
Real advocates of physical-state identity theory are hardly ever as flat-footed as those imagined by Lewis (as Lewis explicitly acknowledges). Like Lewis, they tend to embrace accounts on which to be in pain (or any other mental state) is to be in a state of a certain physical type, where the relevant physical type can vary between different types of being. What type of physical state is identical to what type of mental state, for beings of your type, then depends on facts about the particular causal or functional role of that state in members of your group or on the causal or functional history of that physical state in members of your group and/or in your own evolutionary or developmental past. Such type classifications are extrinsic or relational. Thus, such views have the bizarre consequences that flow from the denial of local supervenience. They allow anesthesia by genocide, or by speciation, or by hypothetical differences in past history that have no neural trace in the present.
We might thus see the mad pain-Martian pain issue as a trilemma in which each horn has bizarre consequences: Either accept the bizarre consequences of a strict functionalism (no mad pain), accept the bizarre consequences of neurobiological chauvinism (no Martian pain), or accept the bizarre consequences of denying local supervenience (anesthesia by genocide or speciation). Can a plausible materialist metaphysics dodge this trilemma? (I set aside hand-waving appeals to yet-to-be-identified intrinsic properties, a la John Searle.) I'd be very interested if you think you can point me to an example!
If all the options are bizarre, as I think, then something bizarre must be true. (Yes, dualism is also bizarre.) The problem is in figuring out which bizarre view to accept! If none of the various bizarre options merits credence, then crazyism follows.
Wednesday, October 05, 2011
Is there a psychology of metaphysics? Yes! And one of its high-profile results seems to be this: Ordinary people (or at least the less scientific among them) are metaphysical substance dualists. They think that material bodies are one thing and immaterial souls quite another. Paul Bloom argues for this on cross-cultural and developmental grounds. Brian Fiala, Adam Arico, and Shaun Nichols propose a cognitive mechanism to explain it. And it certainly seems to fit with widespread belief in God, the afterlife, etc. You are an immortal soul "fastened to a dying animal".
But if you've read your history of philosophy, you might wonder this: If dualism is just common sense, why are dualistic metaphysical systems always so bizarre? Leibniz sees a universe of "monads" that move in pre-established harmony with each other but do not causally interact. Malebranche thinks nothing has any real causal power except for God, who constantly creates the universe anew at every moment. Descartes, whose "causal interactionist" dualism might initially seem a reasonable candidate for common sense, held that nonhuman animals were mere mindless machines, incapable of conscious experience. (It's hopefully not true that Descartes tossed a cat out of a window in Leiden to illustrate his belief in this.) "Common sense" philosopher Thomas Reid attributed immaterial souls to vegetables and denied that material objects had the power even to cohere into shapes without the regular intervention of immaterial souls on their behalf.
Here's my explanation of the bizarreness of dualist metaphysical systems: Commonsense opinion is not straightforwardly substance dualist. Rather, commonsense opinion about the metaphysics of mind is an incoherent mess. Thus, it's impossible to develop a detailed, coherent dualist metaphysics that respects all the inclinations of common sense.
There are at least two broad issues on which dualistic metaphysical systems have repeatedly stumbled against common sense: the causal powers of the immaterial mind and the class of beings with immaterial minds.
The causal powers issue can be posed as a dilemma: Does the immaterial soul have the causal power to affect material entities like the brain? Both yes and no answers lead to trouble. If yes, then physical entities like neurons must be regularly and systematically influenced by immaterial events. A neuron must be caused to fire not just because of the chemical, electrical, and other physical influences on it but also because of immaterial happenings in spiritual substances. Either events in the immaterial realm give it some physical or quasi-physical push that leads it to behave other than it would without that immaterial push -- which seems to violate our commonsense ideas about nonmiraculous causes of physical movement (and a minimal commonsensical deference to mainstream physics and neuroscience) -- or the immaterial causal influence somehow operates on the physical despite the fact that the physical would behave no differently absent that influence, which seems an equally strange view. Suppose, then, the other horn of the dilemma: The immaterial soul has no causal influence on physical events. If immaterial souls do anything, they engage in rational reflection. On a no-influence view, such rational reflection would have no power to causally influence the movements of the body. You can't make a rational decision that has any effect on the flow of the physical world, including the movements of your own body. This again seems bizarre by the standards of ordinary common sense.
The scope-of-mentality issue can be posed as a quadrilemma: Either (a.) among Earthly animals, only human beings have immaterial souls and they have those souls from birth (or maybe conception), or (b.) there are sharp boundaries in phylogeny and development between ensouled and unensouled creatures, or (c.) whether a being has an immaterial soul isn't a simple yes-or-no matter but rather a gradual affair, or (d.) panpsychism is true, that is, every being has, or participates in having, an immaterial soul. Each possibility violates common sense in a different way. Since on a substance dualist metaphysics of mind, the immaterial soul is the locus of mentality and conscious experience, option (a) denies dogs and apes mentality and conscious experience, contrary to what seems to be the clear opinion of most of humankind. Option (b) requires sudden saltations in phylogeny and development, which seems bizarre given the smooth gradation of differences in behavioral capacity, both developmentally and across the range of non-human animals, and given the work the immaterial soul must do if it's not to be otiose. Option (c) appears incomprehensible from a commonsense point of view: What would it mean to sort of, or kind of, or halfway have an immaterial soul? (Would you sort of go to Heaven? Even Dog Heaven, which might be a "sort of" Heaven, seems to require dichotomously either that dogs are materially instantiated there or that they have some immateriality that transcends the grave.) And despite a certain elegance in panpsychism, the idea, in option (d), that even vegetables and bacteria and proteins and thermostats have immaterial souls, or alternatively that they participate in a single grand immaterial soul, seems bizarre on the face of it, by the standards of ordinary common sense.
Any well developed metaphysical substance dualism must make choices on such matters. And all the choices seem weird. If you think otherwise, I suspect philosophy has dulled your sense of what's weird. But weird does not imply false! We have good independent reasons to think, on physical and cosmological grounds, that the world is a pretty weird place, not well matched with our commonsensical intuitions about what must be so.
Tuesday, September 27, 2011
... or at least they tend that direction on personality tests.
There are, I think, some gaps in the Bartels and Pizarro argument -- especially since there might be a pretty loose connection between real consequentialist moral thinking and tending to say "push the fat man!" when given a trolley problem. Quite possibly, undergraduates tending toward psychopathic personality will say the latter even if they aren't very good representatives of genuine consequentialist moral thought.
Josh Rust and I, in our study of the moral behavior of ethics professors, found that ethicists favoring deontology vs. consequentialism vs. virtue ethics all behaved about the same, both by self-report measures and by direct observational measures. To the extent there was a tendency, it was for virtue ethicists to self-report slightly worse behavior.
Update, Sept 28: In the comments I think I more clearly articulated my concern about Bartels and Pizarro than I did above, so I paste it here: "As a cartoon, imagine that you have a group of respondents who don't really think ethically about the dilemmas at all and just think it's funny to tell the prof to push the fat man, and suppose that psychopathic personality types are overrepresented in that group. Then you get the Bartels and Pizarro results, but there's no relationship with consequentialist thinking."
Monday, September 19, 2011
Bizarre views are a hazard endemic to metaphysics. The metaphysician starts, seemingly, with some highly plausible initial commitments or commonsense intuitions -- that there is a prime number between 2 and 5, that I could have had eggs for breakfast, that squeezing the clay statue would destroy the statue but not the lump of clay -- thinks long and hard about what they imply, and then ends up positing a realm of abstract Platonic entities, or the real existence of an infinite number of possible worlds, or a huge population of spatiotemporally coincident things on her mantelpiece. I believe there is not a single detailed exploration of fundamental issues of metaphysics that does not, by the end, entangle its author in seeming absurdities (sometimes advertised as "surprising conclusions"). Rejection of these absurdities then becomes the commonsense starting point of a new round of metaphysics, by other philosophers, which in turn generates a complementary bestiary of metaphysical strangeness. Thus are philosophers happily employed.
I see three possible explanations of why philosophical metaphysics is never thoroughly commonsensical.
One is that a thoroughly commonsensical metaphysics wouldn't sell. It would be too boring. A famous philosopher can't say only obvious things. The problem with this explanation is that there should be at least a small market for a thoroughly commonsensical philosophy. Common sense might not be quite as fun as Nietzsche's eternal recurrence or Leibniz's windowless monads or Hegel's world spirit, but a commonsensical metaphysics ought to serve at least as a foil; it oughtn't be so downmarket as to be entirely invisible. In the 18th century, Thomas Reid helped found the Scottish school of "common sense" philosophy, and today he is the best known representative of that school -- so one might naturally wonder if Reid's metaphysics is thoroughly commonsensical. It's not. See, for example, his thoughts on the immaterial souls of vegetables. Nor is G.E. Moore's, when he develops his positive views in detail, despite his famous "Defence of Common Sense". See, for example, his treatment of sense data.
Another possible explanation is that metaphysics is incredibly hard. There is a thoroughly commonsensical metaphysics out there to be had; we simply haven't pieced it together yet. Maybe someday someone will finally bring it all together, top to bottom, with no serious violence to common sense at any point in the system. I fear this is wishful thinking against the evidence. (In a future post I hope to argue the point in more detail for the metaphysics of mind.)
A third explanation of the bizarreness of metaphysics is this: Common sense is incoherent in matters of metaphysics. Detailed examination of the consequences of our commonsense opinions inevitably leads to contradictions. To develop a coherent metaphysics in detail thus necessarily involves rejecting some aspects of common sense. Although ordinary common sense serves us fairly well in negotiating our everyday social and physical environments, it has not proven a reliable guide in cosmology or microphysics or neuroscience or evolutionary biology or probability theory or structural engineering or medicine or macroeconomics or topology. If metaphysics more closely resembles items in the second class than in the first, as it seems to, we might justifiably be suspicious of the dependability of common sense as a guide to metaphysics. Undependability does not imply incoherence, but it does seem a natural next step in this particular case, especially since it would generate a tidy explanation of the historical fact that detailed metaphysical systems are never thoroughly commonsensical.
Thus, I am endorsing the incoherence of common sense in matters metaphysical as an empirical hypothesis, justified as the best explanation of an empirically observed pattern in the history of philosophy.
Wednesday, September 14, 2011
(collaborative with Alan Moore)
I’m engaged in a certain exercise. You might doubt the value of this exercise, but it’s a traditional philosophical exercise, and I’m a philosopher. The exercise is attempting to prove the existence of an external world beyond my own stream of experience.
As I mentioned in a previous post, the two most historically famous proofs of the external world appear to be unsatisfactory: Descartes’s because it turns on the dubious claim that the idea of perfection could only be caused by a perfect being, and G.E. Moore’s because it starts from the question-begging premise “here is a hand”.
My strategy is this: First, I assume introspective knowledge of my current sensory and other experiences, memory of past experiences back to the beginning of the exercise, and the general concepts and methods of science insofar as those are stripped of any presupposition of a world beyond my stream of experience. Then, I attempt to find experimental evidence sufficient to justify belief in a world beyond my stream of experience, as the best scientific explanation of patterns in my stream of experience.
In Experiment 1, I did something that seemed like programming a spreadsheet to determine whether various four-digit numbers were prime, and I also guessed whether those numbers were prime. As confirmed by subsequent hand calculation, the seeming spreadsheet did a far better job than I did at correctly marking the primes, and the best explanation appeared to be that something exists with calculating abilities exceeding my own – at least insofar as the “I” is conceived solipsistically, as constituted entirely by my stream of experience.
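For concreteness, the spreadsheet's apparent task is easy to sketch in a few lines of Python (my illustration, not part of the exercise itself); trial division is plenty fast for four-digit numbers:

```python
def is_prime(n):
    """Trial division: adequate for four-digit numbers."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# All four-digit primes, as a spreadsheet might mark them:
four_digit_primes = [n for n in range(1000, 10000) if is_prime(n)]
```

There are 1061 four-digit primes, far more than unaided guessing would reliably pick out.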
Experiments 2a and 2b:
Now I will try a second experiment, using an apparent confederate. I will call this apparent confederate “Alan”, though without meaning to presuppose that he exists as anything but a figment of my own stream of experience.
This “Alan”, by what I think of as prearranged instructions, gives me a list of 20 three-letter combinations to memorize (“EMA”, “GLL”, etc.). The list of three-letter strings appears orally, by which I mean that I seem to hear him say the 20 three-letter combinations. He appears to present the list twice. I then attempt to freely recall the letter-combinations from the list, offering up six guesses – the three-letter strings that it seems I can recall without prompting. “Alan” then presents me visually with a forced-choice recognition test: 40 three-letter combinations, half of which he says are new and half of which he says are the original 20, in random order. I select 20 of the 40 as my best guess as to which are the old ones. I feel high confidence about 10 of these guesses. We then repeat the same experiment with a second set of three-letter combinations. This time I offer up eight guesses in free recall and in the 40-item recognition test I again feel high confidence about 10 items.
Now, if nothing exists beyond my own stream of experience, then the forgotten items on these lists should not, it seems, be stable. They exited my mind – either as measured by free recall or by the stricter measure of recognition – and thus they exited the universe. Thus, when Alan seems to present me with these lists a second time, which he will shortly do, nothing endures across time that could anchor the forgotten items in place, ensuring that the new perfectly matches the old. The seemingly re-presented items could be any set of plausible items; my mind has unconstrained liberty to invent any letter combinations that seem reasonable candidates to span the gaps in my memory. On solipsism, the present needn’t preserve the details of the past; it need only preserve, seemingly can only preserve, the details of the past that I remember.
However, Alan now shows me evidence that undercuts this solipsist picture – evidence suggesting that the 20 three-letter combinations that were the “right” answers in each test were in fact anchored in place and stable across time rather than created at liberty anew. He shows me, first, the lyrics of “Take Me Out to the Ball Game”, and he tells me that the first 20 three-letter combinations were the letters, in reverse order, from the end of the chorus of that song (excluding spaces, punctuation, and two repeated letter combinations). As I look through that list of letter combinations (“EMA”, “GLL”, “ABD”, etc.), it is quite evident to me that this is the case: I am forcefully struck by the fact that the letter combinations that I do seem to recall from the original list fit exactly into that pattern, though I was entirely unaware of that pattern at the time they were originally presented. Comparing my free recall and recognition lists with the reversed-ballgame song, I see that 5 of my 6 freely recalled letter combinations appear in that song (one guess, Alan tells me, was wrong) and that 15 of the 20 I seemed to recognize were correct, including all 10 about which I felt confident in the recognition test. The other stimulus, Alan tells me, was similarly derived in backwards chunks from a sentence from George Berkeley: “In short, if there were external bodies, it is impossible we should ever come to know it; and if there were not, we might have the very same reasons to think there were that we have now.” Here, Alan tells me I accurately freely recalled 7 of the 20 combinations (having made one erroneous guess out of eight) and correctly recognized 16 out of 20, including all 10 about which I had felt confident.
Why should I believe that these lists now shown to me by this seeming Alan actually match the original ones presented? Their structure suggests so: Although I am not assuming the existence of ballgames or peanuts or cracker jack, it is one of the background assumptions of this exercise that I permit myself my existing conceptualizations, stripped of any commitment to a really existing external world. And my conceptual structure contains the song “Take Me Out to the Ball Game”, in which these words appear as indicated. Likewise, it’s part of my conceptualization of George Berkeley that this is just the sort of thing he would say and of Alan’s sense of humor that he would use this sentence to generate the stimulus. While it’s remotely possible that from the fragments that I happened to remember such natural-seeming structures could be found post-hoc to fit, that would seem to require enormous happenstance. Suppose that there were as many as a trillion possible strings of text Alan could have used, per the instructions I seem to remember having given him, to generate suitable test material, given present purposes. And suppose conservatively that English allows only about 1000 viable, roughly equiprobable, plausible three-letter combinations (it would be about 18,000 if all 26 letters could appear in any combination). Still there would be only about a one in ten billion chance that I could find two matches from that trillion strings for my two sets of ten confidently recognized three-letter combinations.
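For readers who want to check the arithmetic, here is the estimate above rendered as a few lines of Python. The figures (a trillion candidate strings, roughly 1000 plausible three-letter combinations, 20 items per list, 10 confident recognitions per set) are the ones assumed in the text; the code is just illustration:

```python
# Probability that one random 20-item list (drawn from ~1000 plausible,
# roughly equiprobable three-letter combinations) happens to contain
# all 10 confidently recognized items: roughly (20/1000) ** 10.
p_one_string = (20 / 1000) ** 10  # ~1e-17

# Chance that at least one of a trillion candidate strings matches
# one set of 10 confident recognitions:
p_one_set = 1e12 * p_one_string  # ~1e-5

# Chance of finding such matches for both sets independently:
p_both = p_one_set ** 2  # ~1e-10, about one in ten billion
```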
A better explanation, I venture, than such a boggling chance is that these remembered letter combinations were indeed generated in something like the manner described by this “Alan”, and thus that the lists Alan is now showing me do, as he says, match the original, in both their remembered and their unremembered parts. In other words, this pattern among my remembrances was not constructed anew but was present from the beginning in the original experience of hearing the letter combinations, including in the unremembered parts of that experience, since without those unremembered parts the strings would not fit the stated pattern. But then these lists, or at least the pattern they reflect, must have endured beyond my conscious ken from the time of original experience to the time at which the pattern was revealed to me, sustained by some mechanism running outside my stream of conscious experience.
Maybe my knowledge of the ballgame song or the Berkeley sentence operated nonconsciously in me to generate those letter combinations from the start? But that hypothesis still involves rejecting the most radical form of solipsism, the form of solipsism currently under discussion, which posits that all that exists in the universe is my own stream of experience. I don’t conclude that I know what, exactly, is beyond my experience – that Alan exists or that the physical list exists – only that something is beyond. That something might be my nonconscious mind.
Maybe my conceptualization of song lyrics and great philosophers is, unbeknownst to me, highly labile, so that although it seems that the lyrics are annoyingly familiar and the quote from Berkeley oh-so-apt, in fact this sense of familiarity and aptness arose wholly for the occasion, via some strange law of relating past and present experience, and the combinations are actually new, perhaps even the words themselves new English words? Then there need have been nothing outside my own conscious experience that endured from the first moment of presentation until now. This objection seems ad hoc given the background rules of this exercise, but in any case, it can be avoided with a third experiment.
Experiment 3 is structured much like Experiments 2a and 2b, except that the items to be remembered are 20 three-digit numbers. So: I (and “Alan”) run the experiment. Alan tells me, now, that I have correctly freely recalled eight three-digit combinations (nine guesses with one error) and have correctly recognized 15 items, including all eight combinations about which I felt confident during the recognition test. What suggests the stability of the list in this case? Alan tells me that the 20 three-digit numbers are actually the first 60 decimal places of the long cyclic number that comes from dividing 1 by 1939, the year in which G.E. Moore delivered his famous “Proof of an External World”. This time, instead of trusting in the stability of my conceptualization of song lyrics, I plan to trust the stability of mathematics, confirming the list’s stability (that is, the existence of the pattern Alan alleges to unite the initial and present stimuli) by long division by hand to 60 decimal places.
But, alas! After the first six digits something has gone wrong. Hand division does not confirm the list; the numbers diverge. Have I made a calculation mistake? No, evidently not, unless I am very deeply confused. Is there, perhaps, no external world or stability to mathematics after all? I suppose I am a biased experimentalist; I am reluctant to draw that conclusion. The number is a bit larger in the 7th digit, so on a hunch, I check the decimal expansion of 1/1938. Indeed, there is a perfect match all the way out to the 60th digit, between the list Alan tells me was the original stimulus and the hand-proven results of my calculation. (N.B.: In subsequent conversation “Alan” confirmed the diagnosis, but that small slice of experience is outside the scope of the experiment and thus out of bounds for now.)
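The hand calculation is easy to reproduce: generate the decimal digits of 1/1939 and 1/1938 by schoolbook long division on integers. (A sketch for illustration; the exercise itself, of course, required doing this by hand.)

```python
def decimal_digits(denominator, places):
    """Decimal digits of 1/denominator via schoolbook long division."""
    digits, remainder = [], 1
    for _ in range(places):
        remainder *= 10
        digits.append(remainder // denominator)
        remainder %= denominator
    return digits

d1939 = decimal_digits(1939, 60)
d1938 = decimal_digits(1938, 60)
# The two expansions agree for six decimal places (0.000515...)
# and diverge at the seventh, where 1/1938 is the larger.
```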
I seem to recall having suggested to Alan that he use a long cyclic decimal expansion or some other readily calculated but unpredictable number string to generate the list for this experiment. Supposing that there were a million suitable pseudo-random number strings from which Alan might have chosen, the odds are over a million to one against all eight stably remembered three-digit combinations appearing among the 20 sets of three digits in any of those million candidate number strings. Again, the more likely explanation appears to be that the eight remembered combinations were indeed generated by the discovered pattern, along with the other unremembered combinations, and thus that the present list matches the original list and consequently that something endured over time, preserving that structure outside my conscious ken, contra the radically solipsistic hypothesis. This conclusion appears justified even though it took some searching to find the driving pattern.
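The same style of estimate as before, using the figures assumed in the text (a million candidate number strings, roughly 1000 equiprobable three-digit combinations, 20 items per list, 8 stable recollections):

```python
# Probability that one random 20-item list of three-digit numbers
# contains all eight stably remembered combinations:
p_one_string = (20 / 1000) ** 8  # ~2.6e-14

# Chance that any of a million candidate number strings matches:
p_any = 1e6 * p_one_string  # ~2.6e-8, odds well over a million to one against
```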
The resolute skeptic will, of course, find joints at which to challenge this argument: Maybe the laws of math aren’t stable over time. Maybe there’s unmediated action at a temporal distance between my first oral experience of the list and my subsequent visual experience of the list. Maybe my memory is so poor that I can’t even trust that I did in fact recall several items correctly and this whole business has been an illusion of the past five seconds. In general, anywhere there is an undefended premise, the skeptic can insert a knife; and all arguments must have undefended premises on pain of regress. But recall the aim of this exercise: It is to provide experimental evidence, in the form not of incontestable proof but rather of inference to the best explanation, for the existence of something beyond my stream of experience, while taking for granted knowledge of my stream of experience and my pre-existing scientific standards and conceptualizations except insofar as those standards and conceptualizations presuppose the existence of anything beyond my stream of experience. That, I tentatively believe I have done.
Proposals for a fourth experiment warmly welcomed.
Tuesday, September 13, 2011
is now online here at Philosophical Psychology. If you want to see it and you're stuck behind a pay wall, feel free either to email me for a free copy or download the penultimate draft from my website.
If philosophical moral reflection tends to promote moral behavior, one might think that professional ethicists would behave morally better than do socially comparable non-ethicists. We examined three types of courteous and discourteous behavior at American Philosophical Association conferences: talking audibly while the speaker is talking (vs. remaining silent), allowing the door to slam shut while entering or exiting mid-session (vs. attempting to close the door quietly), and leaving behind clutter at the end of a session (vs. leaving one’s seat tidy). By these three measures, audiences in ethics sessions did not appear, generally speaking, to behave any more courteously than did audiences in non-ethics sessions. However, audiences in environmental ethics sessions did appear to leave behind less trash.
Thursday, September 08, 2011
Mahalo, which will be featuring me in their Meet Authors series. In that series, authors spend about an hour on video addressing selected pre-submitted reader questions.
So... readers of this blog, please feel free to submit. I hope some of you do, so that I have something interesting to talk about!
Recording is currently scheduled for September 26th, and questions (indicating my name as the author receiving them) can be sent to questions[at sign]mahalo[dot]com.
Amazon is quoting $17.75 for Perplexities -- a very good price for an academic hardback. Plenty of time to buy and read before September 26, if you're interested.
Update, Sept 28: The entire Mahalo authors series has been put on ice. Of course, people are still welcome to email me with thoughts or questions if they like.
Friday, September 02, 2011
Eighteenth-century Scottish "common sense" philosopher Thomas Reid never tires of flogging his opponents with their violations of common sense. Thus, it may seem surprising that he would write this:
[B]oth vegetables and Animals are United to something immaterial, by such a Union as we conceive between Soul and Body, which Union continues while the Animal or Vegetable is alive, & is dissolved when it dies (in Wood, ed., 1995, pp. 218-219).

In other words, vegetables have immaterial souls -- or, if not souls exactly, immaterial parts analogous to souls. (Not to worry, there's no Vegetable Heaven [see p. 223].) Although Reid doesn't highlight this point or dwell upon it, it flows very naturally from his general system, on which nothing material can be the source of its own movement (including growth).
Because of Reid's reputation as the great philosopher of common sense, I'm examining his work as a test case of the following empirical hypothesis, which I currently consider to be well supported: No philosopher has ever been able to construct a detailed and coherent metaphysics of mind that entirely respects our commonsense intuitions. Not even Reid could do so. (Shall we try G.E. Moore next?)
The conclusion I'm inclined to draw, partly on its own merits and partly as the best explanation of this fact about the history of philosophy, is that our commonsense metaphysical intuitions are, at root, incoherent. Thus, there is no way to build a detailed and coherent metaphysics of mind that entirely respects common sense, making it unsurprising that none of the brilliant minds in the history of philosophy have ever done so.
Update, October 3:
Several people have reminded me in the comments that in Reid's era it might not have been contrary to common sense to suppose that vegetables have some sort of vital essence not wholly material. Reid's remarks about the immaterial souls of vegetables need to be seen in light of that. Still: It's a leap from some sort of vaguely folk-approved vital essence to an immaterial soul (or vegetative soul), and Reid does use the word "soul"; and he contemplates the question of afterlife for vegetables; and his view on these issues is tangled up with his view that physical objects and events have no causal powers of their own but require the constant intervention of immaterial beings -- a view that he explicitly confesses to be at odds with what most ordinary people would accept.
So: I still hold my view that Reid is not wholly able to preserve common sense on this matter, but the issue is more complex than I made it seem above.
A few days ago I posted a discussion of Nick Bostrom's Simulation Argument, which aims to show that there is a substantial chance -- perhaps about one in three -- that we are living in a computer simulation. I raised three concerns about the argument but ultimately concluded that although simulationism is crazy (in my technical sense of "crazy"), it's a cosmological possibility I don't feel I can dismiss.
Bostrom and I had an email exchange about that post, and he has agreed to let me share it on the blog.
So, first, you'll want to read my discussion, if you haven't already, and maybe also Bostrom's article (though I hope the summary in my discussion does justice enough to the main idea for the casual reader).
Thanks for your thoughtful comments and for posting on your blog.
A few brief remarks. Regarding (A) and deriving an objection from externalism: There are a few things said in the original paper about this (admittedly quickly). For example, what do you think of the point that if we consider a case where we knew that 100% of everybody with our observations are in sims, we could logically deduce that we are in sims; and therefore if we consider a series of cases in which the fraction gradually approaches one, 90%, 98%, 99%, 99.9%, … , it is plausible that our credence should similarly gradually approach one? (There is also the whole area of observation selection theory, which uses stronger principles from which the “bland indifference principle” needed for the simulation argument drops out as an almost trivial special case.)
Regarding (B), I think it’s a somewhat open question how many conscious fellow travelers the average simulated conscious being has. Note that there need be only a few ancestor simulations (big ones with billions of people) to make up for billions of “me-simulations” (simulations with only one conscious person). Another issue is whether it would be feasible to create realistic replicas of human beings without simulating them in enough detail that they are conscious - if not, we have observational evidence against the me-sim hypothesis.
Regarding possible interaction between (B) and (C): no. 4 in http://www.simulation-argument.com/faq.html might be relevant?
With best wishes,
Thanks for the thoughtful reply! ...
[Regarding the response to A] I don’t accept the slippery slope. As long as there is one non-sim, that person’s grounds for believing she is a non-sim might be substantially different than the grounds of all simulated persons no matter how many sims there are, especially if we accept an externalist epistemology. Compare the Napoleon case. No matter how many crazy people think they are Napoleon, Napoleon’s own grounds for thinking he is Napoleon are different, and (arguably) the existence of those crazy people shouldn’t undercut his self-confidence. It would be very controversial, for example, to accept an indifference principle suggesting that if 10,000 crazy people have thought they are Napoleon, and if (hypothetically) Napoleon does or should know this about the world, then Napoleon himself should only believe he is Napoleon with a credence of 1/10,000. Of course, there are important disanalogies between the sims case and the Napoleon case. My point is only that your argument has a larger gap here than you seem to be granting.
[Regarding the response to B] Agreed. It’s an open question. I wouldn’t underplay the likelihood that many future sims might exist in short-term entertainments or single-mind AIs. I like your suggestion, though, that it might be impractical or impossible to have a solo sim with realistic non-conscious replicas as the solo’s apparent interactive partners – but that point will interact at least with the issue of sim duration. I’ve been solo in my office for half an hour. If I’m a short-duration sim, then probably non-conscious AI will have been sufficient to sustain my illusion of a half-hour’s worth of internet interactions with people. How the openness of the duration/solo question plays out argumentatively might depend on one’s argumentative purposes. For purposes of establishing the sims possibility the non-trivial likelihood of the existence of massive ancestor simulations is sufficient. But for purposes of evaluating the practical consequences of accepting the sims possibility, it might be important to bear in mind that many sims may not exist in long-duration, high-population sims. It seems to me that unless you can establish that most sims do live in long-duration, high-population sims, the sims possibility has more skeptical consequences, and perhaps more of a normative impact on practical reflection, than you suggest.
[Regarding the possible interaction between (B) and (C)] I agree with your reasoning in faq 4. Just to be clear, what I was suggesting in my comment on the interaction between (B) and (C) was not intended as an objection of the sort posed in faq 4. In fact, the second of the two connections I remark on seems to increase the probability that I am a sim (and secondarily, to a lesser extent, that we are sims) by reducing the probability of DOOM/PLATEAU.
Bostrom's Response to My Follow-Up:
[On Issue A] That was intended as a continuity argument rather than a slippery slope argument. I’m not saying one point should be regarded as equivalent to another point because there is a smooth slope between them, but rather that credence should vary continuously as the underlying situation varies continuously. Is the alternative that I should assign credence 1 to being in a sim conditional on everybody being in a sim, but credence 0 to being in a sim conditional on there being at least one person like me who is not in a sim? Would an analogous principle also hold if we imagine that a single individual could be taken in and out of a simulation on alternating days without noticing the transfer? Folks willing to bet according to those obstinate odds would then soon deplete their kids’ college funds.
(For more background, see e.g. http://www.anthropic-principle.com/preprints/cos/big2.pdf.)
[On Issue B] I broadly agree with that. Pending further information, it seems the simulation hypothesis should lead us to assign somewhat greater probability than we otherwise would to a range of outlier possibilities (including variations of solipsism, creationism, impending extinction, among others). To go beyond that, I think one needs to start to think specifically about what motives advanced civilizations might have for creating simulations (and one should then be diffident but not necessarily completely despairing about our ability to figure out the motives of such presumably posthuman people). The practical import of the simulation hypothesis is thus perhaps initially relatively slight, but it might well grow as our analysis advances, enabling us to see things more clearly.
Tuesday, August 30, 2011
Nick Bostrom argues, in a 2003 article, that there's a substantial probability that we're living in a computer simulation. One way to characterize Bostrom's argument is as follows:
First, let's define a "post-human civilization" as a civilization with enough computing power to run, extremely cheaply, large-scale simulations containing beings with the same general types of cognitive powers and experiences that we have.
The argument, then, is this: If a non-trivial percentage of civilizations at our current technological stage evolve into post-human civilizations, and if a non-trivial percentage of post-human civilizations have people with the interest and power to run simulations of beings like us (given that it's very cheap to do so), then most of the beings like us in the universe are simulated beings. Therefore, we ourselves are very likely simulated beings. We are basically Sims with very good AI.
Bostrom emphasizes that he doesn't accept the conclusion of this argument (that we are probably sims), but rather a three-way disjunction: Either (1.) only a trivial percentage of civilizations at our current technological stage evolve into post-human civilizations, or (2.) only a trivial percentage of post-human civilizations are interested in running simulations of beings like us, or (3.) we are probably living in a computer simulation. He considers each of these disjuncts about equally likely. (See for example his recent Philosophy Bites interview.)
Bostrom's argument seems a good example of disjunctive metaphysics and perhaps also a kind of crazyism. I applaud it. But let me mention three concerns:
(A.) It's not as straightforward as Bostrom makes it seem to conclude that we are likely living in a computer simulation from the fact (if it is a fact) that most beings like us are living in computer simulations (as Brian Weatherson, for example, argues). One way to get the conclusion about us from the putative fact about beings like us would be to argue that the epistemic situation of simulated and unsimulated beings is very similar, e.g., that unsimulated beings don't have good evidence that they are unsimulated beings, and then to argue that it's irrational to assign low probability to the possibility we are sims, given the epistemic similarity. Compare: Most people who have thought they were Napoleon were not Napoleon. Does it follow that Napoleon didn't know he was Napoleon? Presumably no, because the epistemic situation is not relevantly similar. A little closer to the mark, perhaps: It may be the case that 10% of the time when you think you are awake you are actually dreaming. Does it follow that you should only assign a 90% credence to being awake now? These cases aren't entirely parallel to the sims case, of course; they're only illustrative. Perhaps Bostrom is on firmer ground. My point is that it's tricky epistemic terrain which Bostrom glides too quickly across, especially given "externalism" in epistemology, which holds that there can be important epistemic differences between cases that from the inside seem identical. (See Bostrom's brief discussion of externalism in Section 3 of this essay.)
(B.) Bostrom substantially underplays the skeptical implications of his conclusion, I think. This is evident even in the title of his article, where he uses the first-person plural, "Are We Living in a Computer Simulation?". If I am living in a computer simulation, who is this "we"? Bostrom seems to assume that the normal case is that we would be simulated in groups, as enduring societies. But why assume that? Most simulations in contemporary AI research are simulations of a single being over a short run of time; and most "sims" operating today (presumably not conscious) are instantiated in games whose running time is measured in hours, not years. If we get to disjunct (3), then it seems I might accept it as likely that I will be turned off at any moment, or that Godzilla will suddenly appear in the town in which I am a minor figure, or that all my apparent memories and relationships are pre-installed or otherwise fake.
(C.) Bostrom calls the possibility that only a trivial portion of civilizations make it to post-human stage "DOOM", and he usually characterizes that possibility as involving civilization destroying itself. However, he also gives passing acknowledgement to the possibility that this so-called DOOM hypothesis could be realized merely by technologically stalling out, as it were. By calling the possibility DOOM and emphasizing the self-destructive versions of it, Bostrom seems to me illegitimately to reduce its credibility. After all, a post-human civilization in the defined sense, in which it would be incredibly cheap to run massive simulations of genuinely conscious beings, is an extremely grand achievement! Why not call the possibility that only a trivial percentage of civilizations achieve such technology PLATEAU? It doesn't seem unreasonable -- in fact, it seems fairly commonsensical (which doesn't mean correct) -- to suppose that we are in a computational boomtime right now and that the limits on cheap computation fall short of what is envisioned by transhumanists like Kurzweil. In the middle of the 20th century, science fiction futurists, projecting the then-booming increase in travel speeds indefinitely into the future, saw high-speed space travel as a 21st century commonplace; increases in computational power may similarly flatten in our future. Also: Bostrom accepts as a background assumption that consciousness can be realized in computing machines -- a view that has been challenged by Searle, for example -- and we could build the rejection of this possibility into DOOM/PLATEAU. If we define post-human civilization as civilization with computing power enough to run, extremely cheaply, vast computationally simulated conscious beings, then if Searle is right we will never get there.
Thoughts (B) and (C) aren't entirely independent: The more skeptically we read the simulation possibility, the less credence, it seems, we should give to our projections of the technological future (though whether this increases or decreases the likelihood of DOOM/PLATEAU is an open question). Also, the more we envision the simulation possibility as the simulation of single individuals for brief periods, the less computational power would be necessary to avoid DOOM or to rise above PLATEAU (thus decreasing the likelihood of DOOM/PLATEAU).
All that said: I am a cosmological skeptic and crazyist (at least I am today), and I would count simulationism among the crazy disjuncts that I can't dismiss. Maybe that's enough for me to count as a convinced reader of Bostrom.