Thursday, December 30, 2010

Nazi Philosophers

Recently, I've done a fair bit of work on the moral behavior of ethics professors (mostly with Josh Rust). We consistently find that ethics professors behave no better than socially comparable non-ethicists. So far, the moral violations we've examined are mostly minor: stealing library books, not voting in public elections, neglecting student emails. One might argue that even if ethicists behave no better in such day-to-day ways, on grand issues of moral importance -- decisions that reflect one's overarching worldview, one's broad concern for humanity, one's general moral vision -- they show greater wisdom.

Enter the Nazis.

Nazism is an excellent test case of the grand-wisdom hypothesis for several reasons: For one thing, everyone now agrees that Nazism is extremely morally odious; for another, Germany had a robust philosophical tradition in the 1930s and excellent records are available on individual professors' participation in or resistance to the Nazi movement. So we can ask: Did a background in philosophical ethics serve as any kind of protection against the moral delusions of Nazism? Or were ethicists just as likely to be swept up in noxious German nationalism as were others of their social class? Did reading Kant on the importance of treating all people as "ends in themselves" (and the like) help philosophers better see the errors of Nazism or, instead, did philosophers tend to appropriate Kant for anti-Semitic and expansionist ends?

Heidegger's involvement with Nazism is famous and much discussed, but I see him as a single data point. There were, of course, also German philosophers who opposed Nazism. My question is quantitative: Were philosophers any more likely to oppose Nazism -- or any less likely to be enthusiastic supporters -- than were other academics? I'm not aware of any careful, quantitative attempts to address this question (please do let me know if I'm missing something). It can't be an entirely straightforward bean count because dissent was dangerous and the pressures on philosophers were surely not the same as the pressures on academics in other departments -- probably the pressures were greater than on fields less obviously connected to political issues -- but we can at least start with a bean count.

There's a terrific resource on philosophers' involvement with Nazism: George Leaman's Heidegger im Kontext, which contains a complete list of all German philosophy professors from 1932 to 1945 and provides summary data on their involvement with or resistance to Nazism. I haven't yet found a similar resource for comparison groups of other professors, but Leaman's data are nonetheless interesting.

In Leaman's data set, I count 179 professors with "habilitation" in 1932, when the Nazis started their ascent to power (including Dozents and ausserordentlichers but not assistants). (Habilitation is an academic achievement after the Ph.D., without an equivalent in Britain or the U.S., with requirements roughly comparable to gaining tenure in the U.S.) I haven't yet attempted to divide these professors into ethicists and non-ethicists, so the rest of this post will just look at philosophers as a group. Of these 179, 58 (32%) joined the Nazi Party, the SA, or the SS. Jarausch and Arminger (1989) estimate that the percentage of university faculty in the Nazi party was between 21% and 25%. Philosophers were thus not underrepresented in the Nazi party.

The tricky questions come after this first breakdown: To what extent did joining the party reflect enthusiasm for its goals vs. opportunism vs. a reluctant decision under pressure?

I think we can assume that membership in the SA or SS reflects either enthusiastic Nazism or an unusual degree of self-serving opportunism: Membership in these organizations reflected considerable Nazi involvement and was by no means required for continuation in a university position. Among philosophers with habilitation in 1932, two (1%) joined the SS and another 20 (11%) joined (or were already in) the SA (one philosopher joined both), percentages roughly comparable to overall academic participation in those organizations. However, I suspect this estimate substantially undercounts enthusiastic Nazis, since a number of philosophers (including, briefly, Heidegger) appear to have gone beyond mere membership to enthusiastic support through their writings. I haven't yet attempted to quantify this -- though one further possible measure is involvement with Alfred Rosenberg, the notorious Nazi racial theorist. Combining the SA, SS, and Rosenberg associates yields a minimum of 30 philosophers (17%) on the far right side of Nazism, not even including those who received their university posts after the Nazis rose to power (and thus perhaps partly because of their Nazism).

What can we say about the philosophers who were not party members? Well, 22 (12% of the 179 habilitated philosophers) were Jewish. Another 52 (29%) were deprived of the right to teach, imprisoned, or otherwise severely penalized by the Nazis for Jewish family connections or political unreliability (often both). It's somewhat difficult to tease apart how many of this latter group took courageous stands and how many were simply intolerable to the Nazis because of family connections or prior political commitments outside their control. One way to look at the data is this: Among the 157 non-Jewish habilitated philosophy professors, 37% joined the Nazi party and 30% were severely penalized by the Nazis (this second number excludes 5 people who were Nazi party members and also severely penalized), leaving 33% as what we might call "coasters" -- those who neither joined the party nor incurred severe penalty. Most of these coasters had at least token Nazi affiliations, especially with the NSLB (the Nazi organization of teachers), but NSLB affiliation alone probably did not reflect much commitment to the Nazi cause.
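For those who want to check the arithmetic, here is a minimal sketch of the tally (counts transcribed from the figures above; the grouping into "coasters" is my bookkeeping, not Leaman's own coding):

```python
# Tally for the 179 philosophers habilitated by 1932, using the counts above.
total = 179
jewish = 22                # Jewish professors
party = 58                 # joined the Nazi Party, SA, or SS (all non-Jewish)
penalized = 52             # non-Jewish professors severely penalized
penalized_and_party = 5    # overlap: party members who were also penalized

non_jewish = total - jewish                           # 157
penalized_only = penalized - penalized_and_party      # 47
coasters = non_jewish - party - penalized_only        # 52

for label, n in [("party members", party),
                 ("severely penalized, non-party", penalized_only),
                 ("coasters", coasters)]:
    print(f"{label}: {n} ({n / non_jewish:.0%} of 157 non-Jewish professors)")
# party members: 58 (37%); penalized, non-party: 47 (30%); coasters: 52 (33%)
```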

Membership in the Nazi party would not reflect a commitment to Nazism (or, also problematic, an unusually strong opportunistic willingness to fake commitment to further one's career) if joining the party was necessary simply to get along as a professor. The fact that about a third of professors could be "coasters" suggests that token gestures of Nazism, rather than actual Nazi party membership, were sufficient for getting along, as long as one did not actively protest or have Jewish affiliations. Nor were the coasters mostly old men on the verge of retirement (though there was a wave of retirements in 1933, the year the Nazis assumed power). If we include only the subset of 107 professors who were not Jewish, habilitated by 1932, and continuing to teach past 1940, we still find 30% coasters (28% if we exclude two emigrants).

Here's what I tentatively conclude from this evidence: Philosophy professors were not forced to join the Nazi party. However, a substantial proportion did so voluntarily, either out of enthusiasm or opportunistically for the sake of career advancement. A substantial minority, at least 19% of the non-Jews, occupied the far right of the Nazi movement, as reflected by membership in the SS or SA or association with Rosenberg. Regardless of how the data look for other academic disciplines, it seems unlikely that we will be able to conclude that philosophers tended to avoid Nazism. Nonetheless, given that 30% of non-Jewish philosophers were severely penalized by the Nazis (including one executed for resistance and two who died in concentration camps), it remains possible that philosophers are overrepresented among those who resisted or were ejected.

Monday, December 27, 2010

My Forthcoming Book

... is at a discount for Jan. 1 release, quoted at $18.45 here at Amazon and $19.10 here at Barnes & Noble. (List price $27.95.) Get 'em while they're hot!

Friday, December 17, 2010

Philosophers Buying Into Nazi Censorship?

This, from a recent article in Science, examining word usage frequencies using Google's huge corpus of books:

We probed the impact of censorship on a person’s cultural influence in Nazi Germany. Led by such figures as the librarian Wolfgang Hermann, the Nazis created lists of authors and artists whose “undesirable”, “degenerate” work was banned from libraries and museums and publicly burned (26-28). We plotted median usage in German for five such lists: artists (100 names), as well as writers of Literature (147), Politics (117), History (53), and Philosophy (35) (Fig 4E). We also included a collection of Nazi party members [547 names, ref (7)]. The five suppressed groups exhibited a decline. This decline was modest for writers of history (9%) and literature (27%), but pronounced in politics (60%), philosophy (76%), and art (56%). The only group whose signal increased during the Third Reich was the Nazi party members [a 500% increase; ref (7)].
One interpretation, perhaps, is that philosophers socked it to Hitler and suffered most. However, given the rate at which philosophers appear to have cooperated with the Nazis (explored by George Leaman in Heidegger im Kontext and, I hope, the subject of a future post), I don't think we should rule out another interpretation: Philosophers tended to accept the Nazi censorship and stopped referring to the censored authors, more so than academics in other fields.
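For concreteness, the decline figures in the quoted passage presumably come from a computation roughly like the following sketch (my own reconstruction from the article's description, with invented numbers; the real input would be per-name frequencies from the Google Books corpus):

```python
import statistics

# Hypothetical yearly usage frequencies (fraction of the German corpus)
# for each name on a suppressed list -- invented values for illustration.
usage = {
    "author_a": {1933: 2.0e-7, 1943: 0.4e-7},
    "author_b": {1933: 1.5e-7, 1943: 0.5e-7},
    "author_c": {1933: 3.0e-7, 1943: 0.6e-7},
}

def median_decline(usage, start=1933, end=1943):
    """Percentage drop in the median usage of the listed names."""
    before = statistics.median(series[start] for series in usage.values())
    after = statistics.median(series[end] for series in usage.values())
    return 1 - after / before

print(f"median usage declined {median_decline(usage):.0%}")  # 75% here
```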

I wonder if there is a way to tease these hypotheses apart....

HT: Bernie Kobes.

Wednesday, December 15, 2010

German Tour in January

I will be in Germany from January 18-28. Here's the schedule of talks, plus one graduate student conference. There are also a few less formal events (seminar discussions and the like). Please feel free to contact me or the host departments if you'll be in the area and interested.

Jan. 20: Osnabrueck: "Shards of Self-Knowledge" (6 p.m. start)

Jan. 21-22: Osnabrueck: Post-graduate conference on "the Work of Eric Schwitzgebel, the Epistemological Status of First-Person Methodology in Science, and the Metaphysics of Belief"; I will present "The Problem of Known Illusion and the Problem of Undetectable Illusion" and "The Moral Behavior of Ethics Professors"

Jan. 24: Berlin: "Knowing What You Believe" (6 p.m. start)

Jan. 26: Bochum: "Knowing What You Believe" (6 p.m. start)

Jan. 27: Mainz: "Shards of Self-Knowledge" (6 p.m. start)

Sunday, December 12, 2010

Luke Muehlhauser Interviews Me about Self-Knowledge of Conscious Experience and about the Moral Behavior of Ethics Professors

... here. Self-knowledge of conscious experience is the topic until 57:12, and then there's about twenty minutes on ethics professors.

Thursday, December 09, 2010

"Objects in Mirror Are Closer Than They Appear"

... so it says, at least, on my passenger side mirror.


(image from http://amchurchadultdiscipleship.net)

I've been worrying, though: are they closer than they appear?  This might seem a strange thing to worry about, but I refuse to be thus consoled.

Here's a case for saying that objects in the mirror are closer than they appear: The mirror is slightly convex so as to give the driver a wider field of view.  As a result, the expanse of mirror reflecting light from the object into my eye is smaller than the expanse would be if the mirror were flat.  Thus, the size of the object "in the mirror" is smaller than it would be in a flat mirror.  If we assume that flat mirrors accurately convey size, it seems to follow that the size of the object in the mirror is inaccurately small.  Finally, apparent distance in a mirror is determined by apparent size in a mirror, smaller being farther away.
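To put rough numbers on this first argument, here is a sketch using the standard mirror equation (the dimensions are hypothetical; actual passenger-side mirror curvatures vary):

```python
# Visual angle of a car's image: flat mirror vs. slightly convex mirror.
# All dimensions in meters, chosen only for illustration.
h = 1.5       # height of the car behind you
d_obj = 10.0  # distance from car to mirror
d_eye = 1.0   # distance from driver's eye to mirror
f = -1.0      # focal length of the convex mirror (negative by convention)

# Flat mirror: same-sized virtual image, d_obj behind the glass.
angle_flat = h / (d_eye + d_obj)

# Convex mirror: 1/d_obj + 1/d_img = 1/f gives a smaller, nearer virtual image.
d_img = 1 / (1 / f - 1 / d_obj)    # negative, i.e. behind the glass
magnification = -d_img / d_obj     # about 0.09 here
angle_convex = magnification * h / (d_eye + abs(d_img))

print(f"flat: {angle_flat:.3f} rad, convex: {angle_convex:.3f} rad")
# The convex image subtends roughly half the visual angle; read as a
# full-sized car, it would be judged roughly twice as far away.
```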

The argument for the other side is, at first blush, much simpler: Objects in the mirror are no closer than they appear, at least for me, because as an experienced driver I never misjudge, or am even tempted to misjudge, their distance.

Now both of these arguments are gappy and problematic.  For example, on the first argument: Why should flat mirrors be normative of apparent size?  And why shouldn't we say that the object is larger than it appears (but appearing the right distance away), rather than closer than it appears (but perhaps appearing the right size)? That is, why does it look like a distant, full-sized car rather than a nearby, smallish car?

You might be tempted to mount a simpler argument for the "closer than they appear" claim: A naive mirror-user will misjudge the distance of objects seen in a slightly convex mirror.  The naive mirror-user's misjudgments are diagnostic of apparent size -- perhaps they are based primarily on "appearances"? -- and this apparent size does not change with experience.  The experienced mirror-user, in contrast, makes no mistakes because she learns to compensate for apparent size.  But this argument is based on the dubious claim that the experience of a novice perceiver is qualitatively the same as the experience of an expert perceiver -- a claim almost universally rejected by contemporary philosophers and psychologists.  It's also unclear whether the naive mirror-viewer would make the mistake if warned that the mirror is convex.  (Can apparent size in a mirror be contingent upon verbally acquired knowledge of whether the mirror is slightly convex or concave?)

Should we, then, repudiate the manufacturers' claim, at least as it applies to experienced drivers?  Should we, perhaps, recommend that General Motors hire some better phenomenologists?  Well, maybe.  But consider carnival mirrors: My image in a carnival mirror looks stretched out, or compressed, or whatever, even if I am not for a moment deceived.  Likewise, the lines in the Poggendorff Illusion look misaligned, even if I have seen the illusion a thousand times and know exactly what is going on.  Things look rippled through a warped window, no matter how often I look through that window.  Perhaps you, too, want to say such things about your experience.  If so, how is the passenger-side mirror case different?

Here is one way it might be different: It takes a certain amount of intellectual stepping back to not be taken in by the carnival mirror or the Poggendorff Illusion or the warped window.  The visual system, considered as a subset of my whole cognitive system, is still fooled.  And maybe this isn't so for the passenger-side mirror case.  But why not?  And does it really take such intellectual stepping back not to be fooled in the other cases?  Perhaps there's a glass of water on my table and the table looks warped through it.  I'm not paying any particular attention to it.  Is my visual system taken in?  Am I stepping back from that experience somehow?  It's not like I just ignore visual input from that area: If the table were to turn bright green in that spot or wiggle strangely, I would presumably notice.  Is my father's visual system fooled by the discontinuity between the two parts of his bifocals?  Is mine fooled by the discontinuities at the edge of my rather strong monofocals as they perch at the end of my nose?  And what if, as Dan Dennett, Mel Goodale, and others have suggested, there are multiple and possibly conflicting outputs from the visual system, some fooled and some not?

Can we say both that objects are closer than they appear in the passenger-side mirror (in one sense) and that they aren't (in some other sense)?  I'm inclined to think that such a "dual aspect" view in this case only doubles our problems, for it's not at all clear what these two senses would be: They can't be the same two senses, it seems, in which a tilted penny is sometimes said to look in one way round and in another way elliptical -- for what would we then say about the tilted penny viewed in a convex mirror?  We would seem to need three answers.

Hey, wait, don't drive off now -- we've only started!

Wednesday, December 08, 2010

Not in JFP: Tenure-track Position at UCR in History of Philosophy

Normally, I would just let Jobs for Philosophers carry the U.C. Riverside Philosophy Department ads, but for whatever reason, this ad still isn't posted over there -- which explains, perhaps, why our applicant pool is looking a little thin!

University of California, Riverside, CA. Asst Prof., tenure-track, available July 1, 2011. 4 courses/year on the quarter system, graduate and undergraduate teaching. Thesis supervision and standard non-teaching duties. AOS: History of Philosophy, with particular interests in Ancient, Early Modern, and/or 19th/20th Century European Philosophy. Requires ABD or Ph.D., and compelling evidence of achievement in and commitment to research and publication. In addition, the successful candidate must be committed to teaching effectively at all levels, including graduate mentoring. Furthermore, he or she will be expected to enhance connections among research groups in the department and, where applicable, within the College of Humanities, Arts and Social Sciences. Salary commensurate with education and experience. Submit a current CV, writing sample, at least three letters of reference, evidence of teaching excellence, and a letter of application by January 3, 2011 to: Professor Mark Wrathall, Chair, Search Committee, Department of Philosophy, University of California, Riverside, CA 92521-0201. Review of applications begins on January 3, 2011 and continues until the position is filled. UC Riverside is an Equal Opportunity/Affirmative Action Employer committed to excellence through diversity.
Please spread the word to relevant parties!

Yes, I did say "Jobs for Philosophers". Every year in North America there are a few hundred exceptions to this apparent oxymoron.

Friday, December 03, 2010

Some Awesomely Beautiful Pictures of Structures in the Brain

here.

Unfortunately, the immaterial soul continues to elude photographic capture.

(HT: Theresa Cook.)

Wednesday, November 24, 2010

Professors' Moral Attitudes about Responding to Student Emails Are Almost Completely Unrelated to Their Actual Responsiveness to Student Emails

... or so say Josh Rust and I in an article we are busily writing up.  (We reported some of the data in an earlier blog post here.)

Below is my favorite figure from the current draft.  On the x-axis is professors' expressed normative view about the morality or immorality of "not consistently responding to student emails", in answer to a survey question, with answers ranging from 1 ("very morally bad") through 5 ("morally neutral") to 9 ("very morally good").  (In fact, only 1% of respondents answered on the morally good side of the scale, showing that we aren't entirely bonkers.)  On the y-axis is responsiveness to three emails Josh and I sent to those same survey respondents -- emails designed to look as though they were from undergraduates, asking questions about such things as office hours and future courses.

(I can't seem to get my graphs to display quite right in Blogger.  If the graph is cut off, please click to view the whole thing.  The triplets of bars represent ethicists, non-ethicist philosophers, and professors in departments other than philosophy, respectively.)

Tuesday, November 16, 2010

Carruthers and Schwitzgebel on Knowledge of Attitudes

... a Philosophy TV dialogue that came out a couple of weeks ago, but which I forgot to link to at the time.

Peter and I both deny that we have privileged self-knowledge of our attitudes (at least in any strong sense of "privilege"), but since we're philosophers we still find plenty to disagree about!

Thursday, November 11, 2010

The Phenomenology of Being a Jerk

Most jerks, I assume, don't know that they're jerks. This raises, of course, the question of how you can find out if you're a jerk. I'm not especially optimistic on this front. In the past, I've recommended simple measures like the automotive jerk-sucker ratio -- but such simple measures are so obviously flawed and exception-laden that any true jerk will have ample resources for plausible rationalization.

Another angle into this important issue -- yes, I do think it is an important issue! -- is via the phenomenology of being a jerk. I conjecture that there are two main components to the phenomenology:

First: an implicit or explicit sense that you are an "important" person -- in the comparative sense of "important" (of course, there is a non-comparative sense in which everyone is important). What's involved in the explicit sense of feeling important is, to a first approximation, plain enough. The implicit sense is perhaps more crucial to jerkhood, however, and manifests in thoughts like the following: "Why do I have to wait in line at the post office with all the schmoes?" and in often feeling that an injustice has been done when you have been treated the same as others rather than preferentially.

Second: an implicit or explicit sense that you are surrounded by idiots. Look, I know you're smart. But human cognition is in some ways amazingly limited. (If you don't believe this, read up on the Wason selection task.) Thinking of other people as idiots plays into jerkhood in two ways: The devaluing of others' perspectives is partly constitutive of jerkhood. And, perhaps less obviously, it provides a handy rationalization of why others aren't participating in your jerkish behavior. Maybe everyone is waiting their turn in line to get off the freeway on a crowded exit ramp and you (the jerk) are the only one to cut in at the last minute, avoiding waiting your turn (and incidentally increasing the risk of an accident and probably slowing down non-exiting traffic). If it occurs to you to wonder why the others aren't doing the same, you have a handy explanation in your pocket -- they're idiots! -- which allows you to avoid more uncomfortable potential explanations of the difference between you and them.

Here's a self-diagnostic of jerkhood, then: How often do you think of yourself as important, how often do you expect preferential treatment, how often do you think you are a step ahead of the idiots and schmoes? If this is characteristic of you, I recommend that you try to set aside the rationalizations for a minute and do a frank self-evaluation. I can't say that I myself show up as well by this self-diagnostic as I would have hoped.

How about the phenomenology of being a sweetie -- if we may take that as the opposite of a jerk? Well, here's one important component, I think: Sweeties feel responsible for the well-being of the people around them. These can be strangers who drop a folder full of papers, job applicants who are being interviewed, their own friends and family.

In my effort to move myself a little bit more in the right direction along the jerk-sweetie spectrum, I am trying to stir up in myself more of that feeling of responsibility and to remind myself of my fallible smallness.

Thursday, November 04, 2010

Not By Argument Alone (by Guest Blogger G. Randolph Mayes)

I just gave a talk at Gonzaga University called “Not by Argument Alone” in which I tried to show how explanatory reasoning figures into the resolution of philosophical problems. It begins with the observation that we sometimes have equally good reasons for believing contradictory claims. This is the defining characteristic of philosophical antinomies, but it is a common feature of everyday reasoning as well.

For example, Frank told me to meet him at his office at 3 PM if I wanted a ride home. But I’ve been waiting for 15 minutes now and still no Frank. This problem can be represented as a contradiction of practical significance: Frank both will and will not be giving me a ride home. One of these claims must go. The problem is that I have very good reasons for believing both. Frank is a very reliable friend, as is my memory for promises made. On the other hand, my ability to observe the time of day and the absence of Frank at that time and location is quite reliable as well.

So how do I decide which claim to toss? I consider the possibility that Frank is not coming, but this immediately raises the following question: Why not? (He forgot; he lied; he was mugged; I am late?) I consider the possibility that Frank will still show. This immediately raises another question: Why isn’t he here? (He was delayed; I am early; he is here but I don’t see him?) Both of these questions are requests for explanations, and producing good answers to them is essential to the rational resolution of the contradiction. Put differently, I should deny the claim whose associated explanation questions I am best capable of answering.

This is one way of explicating the view that rational belief revision depends on considerations of ‘explanatory coherence.’ The idea is typically traced to Wilfrid Sellars, and it has since been developed along epistemological, psychological, and computational lines. Oddly, however, it has not been explored much as a model for the resolution of philosophical questions. I don’t know why, but I speculate that it is because philosophers don’t naturally represent philosophical thinking in explanatory terms. Typically, a philosophical ‘theory’ is represented not so much as a proposed explanation of some interesting fact as it is a proposed analysis of some problematic concept.

In my view, though, philosophers engage in the creation of explanatory hypotheses all the time. Consider the traditional problem of perception. Just about everyone agrees that we perceive objects. But whereas the physicalist argues that we perceive independently existing physical objects, the phenomenalist is equally persuasive that the objects of perception are mind-dependent. Again, one claim must go. Suppose we deny the phenomenalist’s claim. But then how do we explain illusions and hallucinations, which are phenomenologically indistinguishable from perceptions of physical objects? Suppose we deny the physicalist’s claim. But then how do we explain the origin of experience itself?

When we explicitly acknowledge that explanation is a necessary step in philosophical inquiry, we thereby acknowledge the responsibility to identify criteria for evaluating the explanations that we propose. Too often philosophical theories are defended simply on the basis of their intuitive appeal. But why would we expect this to reflect anything more than our intuitive preference for believing the claims that they preserve? In science, the ability of a theory to explain things we already know is a paltry achievement. A good explanation must successfully predict novel phenomena or unify familiar phenomena not previously known to be related. Are philosophical explanations subject to the same criteria? If so, then let’s explicitly apply them. If not, well, then I think we’ve got some explaining to do.

This is my last post! Thanks very much for reading and thanks especially to Eric for giving me this opportunity to float some of my thoughts on The Splintered Mind.

Friday, October 29, 2010

The Convincing Explanation (by Guest Blogger G. Randolph Mayes)

The Stone is the new section of the New York Times devoted to philosophy and this week it contains an interesting piece called “Stories vs. Statistics” by John Allen Paulos. It is worth reading in its entirety, but for my money the most important point he makes is this:

The more details there are about them in a story, the more plausible the account often seems. More plausible, but less probable. In fact, the more details there are in a story, the less likely it is that the conjunction of all of them is true.
Our tendency to confuse plausibility with probability is also at the heart of a short essay of mine (forthcoming in the journal Think), called “Beware the Convincing Explanation.” Paulos clarifies the excerpt above by reference to the ‘conjunction fallacy,’ which I discussed in an earlier post. In my essay I try to get at it from a different angle, by distinguishing the respective functions of argument and explanation.

Here is the basic idea: Normally, when we ask for an argument we are asking for evidence, which is to say the grounds for believing some claim to be true. An explanation, on the other hand, is not meant to provide grounds for belief; rather it tells us why something we already believe is so. Almost everyone understands this distinction at an intuitive level. For example, suppose you and I were to have this conversation about our mutual friend Toni.
Me: Boy, Toni is seriously upset.

You: Really? Why?

Me: She’s out in the street screaming and throwing things at Jake.
You can tell immediately that we aren’t communicating. You asked for an explanation, the reason Toni is upset. What I gave you is an argument, my reasons for believing she is upset. But now consider a conversation in which the converse error occurs:
Me: Boy, Toni is seriously upset.

You: Really? How do you know that?

Me: Jake forgot their date tonight and went drinking with his pals.
This time my response actually begs the question. Jake blowing off the date would certainly explain why Toni is upset, but an explanation is only appropriate if we agree that she is. Since your question was a request for evidence, it is clear that you are not yet convinced of this and I’ve jumped the gun by explaining what caused it.

What’s interesting is that people do not notice this so readily. In other words, we often let clearly explanatory locutions pass for arguments. This little fact turns out to be extremely important, as it makes us vulnerable to people who know how to exploit it. For example, chiropractic medicine, homeopathy, faith healing -- not to mention lots of mainstream diagnostic techniques and treatments -- are well known to provide little or no benefit to the consumer. Yet their practitioners produce legions of loyal customers on the strength of their ability to provide convincing explanations of how their methods work. If we were optimally designed for detecting nonsense, we would be highly sensitive to people explaining non-existent facts. We aren’t.

Now, to be fair, there is a sense in which causes can satisfy evidential requirements. After all, Jake blowing off the date can be construed as evidence that Toni will be upset when she finds out. However, it is quite weak evidence compared to actually watching Toni go off on him. So, we can put the point a bit more carefully by saying that what people don’t typically understand is how weak the evidence often is when an explanation gets repurposed as an argument.

Following Paulos, we can say that convincing explanations succeed in spite of their evidential impotence because they are good stories that give us a satisfying feeling of understanding a complex situation. Importantly, this is a feeling that could not be sustained if we were to remain skeptical of the claim in question, as it is now integral to the story.

Belief in the absence of evidence is not the only epistemic mischief that explanations can produce. The presence or absence of an explanation can also inhibit belief formation in spite of strong supporting evidence. The inhibitory effect of explanation was demonstrated in a classic study by Anderson, Lepper and Ross which showed that people are more likely to persist in believing discredited information if they had previously produced hypotheses attempting to explain that information. Robyn Dawes has documented a substantial body of evidence for the claim that most of us are unmoved by statistical evidence unless it is accompanied by a good causal story. Of particular note are studies by Nancy Pennington and Reid Hastie which demonstrate a preference for stories over statistics in the decisions of juries.

Sherlock Holmes once warned Watson of the danger of the convincing explanation: “It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” Damn good advice from one of the greatest story-tellers of all.

Thursday, October 28, 2010

Mad Belief?

Mad belief -- in David Lewis's sense of "mad" -- would be belief with none of the normal causes and none of the normal effects. Such belief, I suggest, would not be belief at all. Delusions might sometimes be cases of beliefs gone half-mad -- beliefs with enough of the functional role of belief that it's not quite right to say that the deluded subject believes but also diverging enough from the functional role of belief that it's not quite right simply to say that the subject fails to believe.

So I say, at least, in an essay now available in draft on my website. (As always, comments, revisions, objections welcome -- either attached to this post or emailed directly to me.)

The essay is a commentary on Lisa Bortolotti's recent book Delusions and Other Irrational Beliefs, though it should be readable without prior knowledge of Lisa's book. You might remember Lisa from her recent stint as a guest blogger here at The Splintered Mind.

Sunday, October 24, 2010

Why We Procrastinate (by guest blogger G. Randolph Mayes)

James Surowiecki recently wrote a nice full-length review of The Thief of Time for The New Yorker magazine. It sounds like a fantasy novel by Terry Pratchett, but is actually a collection of mostly pointy-headed philosophical essays about procrastination edited by Chrisoula Andreou and Mark White. Procrastination is a great topic if you are interested in the nature of irrationality, as philosophers and psychologists tend to think of procrastination as something that is irrational by definition. For example, in the lead article of this volume George Ainslie defines procrastination as “generally meaning to put off something burdensome or unpleasant, and to do so in a way that leaves you worse off.”

I recently published an article about cruelty in which I argued that it is a mistake for scientists to characterize the phenomenon of cruelty in a way that respects our basic sense that it is inherently evil. I find myself wondering whether the same sort of point might be raised against the scientific study of procrastination.

Most researchers appear to accept Ainslie’s characterization of procrastination as an instance of "hyperbolic discounting," which is an exaggeration of an otherwise defensible tendency to value temporally proximate goods over more distant ones. Everyone understands that there are situations (like a time-sensitive debt or investment opportunity) when it is rational to prefer to receive 100 dollars today rather than 110 dollars next week. But Ainslie and many others have demonstrated that we typically exhibit this preference even when it makes far more sense to wait for the larger sum.
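A quick sketch of the contrast (the discount parameter is invented for illustration): hyperbolic discounting values a reward A at delay D as A/(1 + kD), and, unlike exponential discounting, it lets preferences reverse as both options recede into the future:

```python
def value(amount, delay_days, k=0.1):
    """Ainslie-style hyperbolic discounting: worth falls off as 1/(1 + k*delay)."""
    return amount / (1 + k * delay_days)

for lead_time in (0, 365):  # choosing now vs. choosing a year in advance
    small = value(100, lead_time)        # $100 at the earlier date
    large = value(110, lead_time + 7)    # $110 one week later
    pick = "$100 sooner" if small > large else "$110 later"
    print(f"deciding {lead_time} days out: {pick} ({small:.1f} vs. {large:.1f})")

# deciding 0 days out: $100 sooner (100.0 vs. 64.7)
# deciding 365 days out: $110 later (2.7 vs. 2.9)
```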

Hyperbolic discounting subsumes procrastination in a straightforward way. According to Ainslie, whenever we procrastinate we are choosing a more immediately gratifying activity over one whose value can only be appreciated in the long run. When making plans a week in advance, few would choose to schedule the evening before a big exam catching up on neglected correspondence or deleting old computer files. But when the decision is left until then, that’s exactly the sort of thing we find ourselves doing.

One interesting result of defining procrastination as Ainslie does is that whether we are procrastinating at any given time depends on what happens later, not how we feel about it now. For example, reading this blog is something you might describe as procrastinating on cleaning your filthy apartment. But, according to Ainslie’s definition, you are only procrastinating now if you subsequently fail to get the apartment clean before your guests arrive for dinner (because otherwise you aren’t “worse off”). There is nothing absurd about this, and science certainly has no obligation to be faithful to ordinary usage. But this disparity does highlight an interesting possibility, namely that what Ainslie and his colleagues call procrastination is really just the downside of a generally rational tendency to avoid beginning onerous tasks much before they really, really need to be done.

Why would this be rational? Well, you could start cleaning your apartment right now. But -- wait! -- there is a good chance that if you do you will become the victim of Parkinson’s Law: Work expands so as to fill the time available for its completion. Putting it off until the last minute can be beneficial because you work much more energetically and efficiently when you are under the gun. (And if you don’t, then you will learn to, which is an important life skill.) Of course, this strategy occasionally backfires. We sometimes underestimate the time we need to meet our goals; unanticipated events, like a computer crashing or guests arriving early, can torpedo the deadlining strategy. But these exceptions, which are often uncritically taken as proof of the irrationality of procrastination, may simply be a reasonable price to pay for the value it delivers when it works.

Most of us think of procrastination as a bad thing and we tell researchers that we do it too much. But should this kind of self-reporting be trusted? Do we just know intuitively that we would be generally better off if we generally procrastinated less? Scientists can define procrastination as harmful if they want to, but they also might want to reconsider the wisdom of a definition that makes beneficial procrastination a logical absurdity. In doing so, they may discover that the powerful notion of hyperbolic discounting has made them too quick to accept a universal human tendency as a fundamentally irrational one.

Friday, October 15, 2010

U.C. Regents to Add Air Consumption Fee

Earlier today, the University of California regents unanimously voted to impose a new Air Consumption Fee on students, faculty, and staff. The new fee will go into effect on January 1.

University of California President Mark G. Yudof said, "Most people think of air as free, but they don't realize that it needs to be processed through ventilation systems." Ventilation systems, he added, "cost money both to build and to maintain. In times of economic difficulty for the University of California, we need to look carefully at costs, such as the cost of air."

The new Air Consumption Fee will be $1,210.83 per quarter for students on the quarter system and $1,816.25 per semester for students on the semester system. For faculty and staff, the Air Consumption Fee will be 23% of their base salary. University of California's chief economist for the Office of the President, Muss Erhaben, noted, "That may seem like a lot to pay for air, but recent studies have suggested that demand for air is relatively inelastic" and thus not very sensitive to changes in price.

Student, faculty, and staff advocacy groups were predictably outraged by the move. "The sudden imposition of new fees on students, especially in the middle of the academic year, creates enormous hardships, especially for students already in economic difficulty," commented U.C.L.A. student representative Tengo K. Respirar. "For example, I had been hoping for an iPad for Christmas. Instead, my parents will be buying me air."

Others complained that the fee was unfair to those who use less air. "I can stop my heart and breathing for minutes at a time and consume only a half cup of rice and thin broth every day," said Swami B. Retha Litla. "I should not be expected to pay the same as a football player." Donna M. de l'Air, a U.C. Riverside Associate Professor in Comparative Languages and Literatures, noted that the Air Consumption Fee will be deducted from her salary even though she will be on sabbatical in France for winter quarter, and thus will be consuming no University of California air. In response to this concern, a representative of the Office of the President stated that the University of California is working on exchange arrangements with other universities to ensure that professors and students in residence elsewhere will not be double-charged for air.

In related news, the U.C. regents also voted to institute a new tier for employees. Current employees who wish the university to abide by its previous salary and benefits agreements may elect to join the Traditional Plan tier at an annual cost of 50% of their salary. Alternatively, employees may elect to join the New Plan at no charge. The New Plan involves a 50% reduction in salary. "We are proud that in these difficult budgetary times we have been able to abide by all our agreements and avoid salary cuts, at least for staff who pay to join the Traditional Plan," said President Yudof.

The Illusion of Understanding (by guest blogger G. Randolph Mayes)

Every teacher knows that magic moment when the light snaps on in a student’s head and bitter confusion gives way to the warm glow of understanding. We live for those moments. Subsequent moments can be slightly less magical, however. There is, for example, the moment we begin to grade said student’s exam, and realize that we’ve been had yet again by the faux glow of illusory understanding.

The reliability and significance of our sense of understanding (SOU) have been the subject of research in recent years. I indicated in the previous post that philosophers of science generally agree that there is a tight connection between explanation and understanding. Specifically, they agree that the basic function of explanation is to increase our understanding of the world. But this agreement is predicated on an objective sense of the term ‘understanding,’ typically referring to a more unified belief system or a more complete grasp of causal relations. There is no similar consensus concerning how our subjective SOU relates to ‘real’ understanding, or indeed whether it is of any philosophical interest at all.

One leading thinker who has argued for the relevance of the SOU to the theory of explanation is the developmental psychologist Alison Gopnik. Gopnik is a leading proponent of the view that the developing brains of children employ learning mechanisms that closely mirror the process of scientific inquiry. As the owner of this blog has aptly put it, Gopnik believes that children are invested with ‘a drive to explain,’ a drive she compares to the drive for sex.

For Gopnik, the SOU is functionally similar to an orgasm. It is a rewarding experience that occurs in conjunction with an activity that tends to enhance our reproductive fitness. So just as a full theory of reproductive behavior will show how orgasm contributes to our reproductive success, a full theory of explanatory cognition will show how the SOU contributes to our explanatory success.

Part of the reason Gopnik compares the SOU to the experience of orgasm is that they can both be detached from their respective biological purposes. Genital and theoretical masturbation are both pleasurable yet non-(re)productive human activities. Gopnik thinks that just as no one would consider the high proportion of non-reproductive orgasms as evidence that orgasm is unrelated to reproduction, no one should take a high frequency of illusory SOUs as evidence that the SOU is unrelated to real understanding.

But the analogy between orgasm and the SOU has its limits. The SOU cannot really be detached from acts of theorizing as easily as orgasm can be detached from acts of reproduction. One might achieve a free-floating SOU as a result of meditation, mortification, or drug use, but this will be relatively unusual in comparison to the ease and frequency with which orgasms can be achieved without reproductive sex. For the most part, SOUs come about as a result of unprotected intercourse with the world. If illusory SOUs are common, and this cannot be explained by reference to their detachability, it is reasonable to remain skeptical about the importance of the SOU in producing real understanding.

One such skeptic is the philosopher of science J. D. Trout. Trout does not deny that our SOU may sometimes result from real understanding, but he thinks it is the exception rather than the rule. Moreover, Trout thinks that illusory SOUs are typically the result of two well-established cognitive biases: overconfidence and hindsight. (Overconfidence bias is the tendency to overestimate the likelihood that our judgments are correct. Hindsight bias is the tendency to believe that past events were more predictable than they really were.) Far from being a reliable indicator of real understanding, Trout holds that the SOU mostly reinforces a positive illusion we have about our own explanatory abilities. (This view also finds support in the empirical research of Frank Keil, who has documented an ‘illusion of explanatory depth.’)

Is it true that illusory SOUs are more common than veridical ones? I’m not sure about this. I’m inclined to think most of our daily explanatory episodes occur below the radar of philosophers of science. Consider explanations that occur simply as the result of the limits of memory. My dog is whining and it occurs to me that I haven’t fed her. The mail hasn’t been delivered, and then I recall it is a holiday. I see a ticket on my windshield and I remember that I didn’t feed the meter. I have a dull afternoon headache and realize I’ve only had three cups of coffee. These kinds of explanatory episodes occur multiple times every day. The resulting SOUs are powerful and only rarely misleading.

But when choosing between competing hypotheses or evaluating explanations supplied by others, Trout is surely correct that the intensity of an SOU has little to do with our degree of understanding. We experience very powerful SOUs from just-so stories and folk explanations that have virtually no predictive value. Often a strong SOU arises simply because the explanation allays our fears or settles cognitive dissonance in an emotionally satisfying way.

In the end, I’m not sure that Trout and Gopnik have a serious disagreement. For one thing, Gopnik’s focus is on the value of the SOU for the developing mind of a child. It may be that the unreflective minds of infants are uncorrupted by overconfidence, hindsight, or the need to believe. It may also be that a pre-linguistic child’s SOU is particularly well-calibrated for the kind of learning it is required to do.

Trout does not argue that the SOU is completely unreliable, and Gopnik only needs it to be reliable enough to have conferred a selective advantage on those whose beliefs are reinforced by it. There are different ways that this can happen. As Trout himself points out, the SOU may contribute to fitness simply by reinforcing the drive to explain. But even if our SOU is only a little better than chance at selecting the best available hypothesis at any given time, it could still be tremendously valuable as part of an iterated process that remains sensitive to negative feedback. As I indicated in the previous post, our mistake may be to think of the SOU as something that justifies us in believing our hypotheses. It may simply help us to generate or select hypotheses that are slightly more likely to be true than their competitors.

Tuesday, October 12, 2010

Poor, Unloved Auguste Comte

... still, almost two centuries later, has no scholarly-quality English translation of his (1830-1842) magnum opus, Cours de Philosophie Positive.  This is, I think, rather a scandal for such an important philosopher.

(How important? Well, Dean Simonton's mid-20th-century measure of the historical importance of thousands of philosophers, according to textbook pages dedicated to them and similar measures, ranks him as the 17th most important philosopher in history, between Rousseau and Augustine -- though I'd guess that Anglophone philosophers in 2010 wouldn't rank him quite so high.)

The standard translation of Cours de Philosophie Positive is The Positive Philosophy of Auguste Comte, "freely translated and condensed by Harriet Martineau" in 1896. Wait, what?!  Freely translated and condensed? What is this, the friggin' Reader's Digest version? You're not planning to quote from it, I hope.

Probably Comte's most famous contribution to philosophy of psychology is his brief argument against the possibility of a science of introspection. Here is Martineau's translation of the passage in which Comte lays out his argument:

In the same manner, the mind may observe all phenomena but its own.  It may be said that a man's intellect may observe his passions, the seat of the reason being somewhat apart from that of the emotions in the brain; but there can be nothing like scientific observation of the passions, except from without, as the stir of the emotions disturbs the observing faculties more or less. It is yet more out of the question to make intellectual observation of intellectual processes. In order to observe, your intellect must pause from activity; yet it is this very activity that you want to observe. If you cannot effect the pause, you cannot observe: if you do effect it, there is nothing to observe (vol. 1, p. 12).
I won't inflict the original French upon you (it is available in Google books here, if you're interested), but for comparison here is William James's translation in his Principles of Psychology:
It is in fact evident that by an invincible necessity, the human mind can observe directly all phenomena except its own proper states.  For by whom shall the observation be made? It is conceivable that a man might observe himself with respect to the passions that animate him, for the anatomical organs of passion are distinct from those whose function is observation. Though we have all made such observations on ourselves, they can never have much scientific value, and the best mode of knowing the passions will always be that of observing them from without; for every strong state of passion... is necessarily incompatible with the state of observation. But as for observing in the same way intellectual phenomena at the time of their actual presence, that is a manifest impossibility. The thinker cannot divide himself into two, of whom one reasons while the other observes him reason.  The organ observed and the organ observing being, in this case, identical, how could observation take place? This pretended psychological method is then radically null and void. On the one hand, they advise you to isolate yourself, as far as possible, from every external sensation, especially every intellectual work, -- for if you were to busy yourself even with the simplest calculation, what would become of internal observation? -- on the other hand, after having with the utmost care attained this state of intellectual slumber, you must begin to contemplate the operations going on in your mind, when nothing there takes place!  Our descendants will doubtless see such pretensions some day ridiculed upon the stage (1890/1981, pp. 187-188).
(The ellipses above mark one phrase James omits: "c'est-à-dire précisément celui qu'il serait le plus essentiel d'examiner", which, in my imperfect French, I would translate as "that is to say, precisely that which it would be the most essential to examine". It is perhaps also worth remarking that no emphasis on "passions" or "intellectual" appears in my edition of Comte, though "intérieure" is italicized.)

Not only does the Martineau translation lose the detail and the color of the original, it is philosophically and psychologically sloppy. For example, Comte makes no reference to the "brain" or the "seat of reason"; instead -- as James indicates -- he talks about "the organs... whose function is observation" ("les organes... destinés aux fonctions observatrices"). And what is this that Martineau says about "the stir of the emotions disturbs the observing faculties more or less"? There is no trace of this clause in the original text. Martineau has inserted into Comte's text an observation that she evidently thinks he should have made.

We should no longer cite Martineau's translation as though it were of scholarly quality. There is no scholarly translation of Comte's most important work.

Wednesday, October 06, 2010

Is Explanation the Foundation? (by guest blogger G. Randolph Mayes)

One of my main interests is explanation. I think there may be no other concept that philosophers lean on so heavily, yet understand so poorly. Here are some examples of how critical the concept of explanation has become to contemporary philosophical debates.

1. A popular defense of scientific realism is that the existence of theoretical entities provides the best explanation of the success of the scientific enterprise.

2. A popular view concerning the nature of inductive rationality is that it rests on an inference to the best explanation.

3. A popular argument for the existence of other minds is that other minds provide the best explanation of the behavior of other bodies.

4. A popular argument for the existence of God is that a divine intelligence is the best explanation of the observed order in the universe.

This is a short list. The concept of explanation has been invoked in similar ways to analyze the nature of knowledge, theories, reduction, belief revision, and abstract entities. Interestingly, few of the very smart people who defend these views tell us what explanation is. The reason is simple: we don’t really know. The dirty secret is that explanation is just no better understood than any of the things that explanation is invoked to explain. In fact, it is actually worse than that. If you spend some time studying the many different theories of explanation that have been developed during the last 60 years or so, you’ll find that most of them give little explicit support to these arguments.

The reason for this is worth knowing. Most philosophical theories of explanation have been developed in an attempt to identify the essential features of a good scientific explanation. The good-making features of explanation were generally agreed to be those that would account for how explanation produces (and expresses) scientific understanding. There are many different views about this, but an assumption common to most of them is that a good scientific explanation must be based on true theories and observations. That sounds pretty reasonable, but here’s the rub: If truth is a necessary condition of explanatory goodness, then it makes no sense at all to claim that a theory’s explanatory goodness is our basis for thinking it is true.

All of the arguments noted above do just this, invoking a principle commonly known as “inference to the best explanation” (IBE, aka ‘abduction’). This idea, first articulated by Charles Peirce, has been the hope of philosophy ever since W.V.O. Quine pounded the last nail into the coffin of classical empiricism. This latter tradition had sought in vain to demonstrate that inductive rationality could ultimately be reduced to logic. For many, IBE is a principle that, while not purely logical, might serve as a new ‘naturalized’ foundation of inductive rationality.

Bas van Fraassen, the great neo-positivist, has blown the whistle on IBE most loudly, arguing that it is actually irrational. One of his criticisms is quite simple: It is literally impossible to infer the best explanation; all we can infer is the best explanation we have come up with so far. It may just be the best of a bad lot.

One way to understand the disconnect between traditional theories of explanation and IBE is to note that there are two fundamentally different ways of thinking about explanation. In one, basically transactional sense, explanations are the sorts of things we seek from pre-existing reserves of expert knowledge. When we ask scientists why the night sky is dark or why it feels good to scratch an itch, we typically accept as true whatever empirical claims they make in answering our question. Our sense of the quality of the explanation is limited to how well we think this information has answered the question we’ve posed. This, I think, is the model implicit in most traditional theories of explanation. The aim is to show in what sense, beyond the mere truth of the claims, science can be said to provide the best answers.

In my view, IBE has more to do with a second sense of explanation, belonging to the context of discovery rather than communication of expert knowledge. In this sense, explaining is a creative process of hypothesis formation in response to novel or otherwise surprising information. It can occur within a single individual, or within a group, but in either case it occurs because of the absence of authoritative answers. It is in this sense of the term that it can make sense to adopt a claim on the basis of its explanatory power.

Interestingly, much of the work done on transactional accounts of explanation is highly relevant to the discovery sense of the term. Many of the salient features of good explanations are the same in both, notably: increased predictive power, simplicity, and consilience. (This point is made especially clearly in the work of philosophically trained cognitive psychologists like Tania Lombrozo.) What is not at all clear, however, is that any of the IBE arguments noted above will have the intended impact when the relevant sense of explanation belongs more to what Reichenbach called “the context of discovery” rather than the “context of justification.”

Tuesday, October 05, 2010

Applying to Graduate School in Philosophy

Time to start getting your act together, if that's your plan!

Regarding M.A. programs, I recommend the guest post by Robert Schwartz of University of Wisconsin, Milwaukee.

Regarding applying to Ph.D. programs, I stand by the advice I gave in 2007, with a few caveats:

  • The academic job market is horrible now, after having been unusually good from about 1999 to 2007.  Hopefully it will recover in a few years, though to what extent philosophy departments will participate in that recovery is an open question.  Bear these trends in mind when looking at schools' placement records.
  • The non-academic job market is also horrible now.  When the non-academic job market is horrible, graduate school admissions is generally more competitive.
  • In my posts, I may have somewhat underestimated the importance of the GRE.  However, I want to continue to emphasize that different schools, and different admissions committees within the same school over time, take the GRE seriously to different degrees, and thus a low GRE score should by no means doom your hopes.  If you have a GRE score that is not in keeping with your graduate school ambitions, I recommend applying to more than the usual number of schools, so that your application will land among at least a few committees that don't give much weight to the GRE.
The original seven posts have comment threads on which you may post questions or comments.

Thursday, September 30, 2010

Explaining Irrationality (by guest blogger G. Randolph Mayes)

In one of the last papers he wrote before dying almost exactly one year ago, John Pollock posed what he called “the puzzle of irrationality”:

Philosophers seek rules for avoiding irrationality, but they rarely stop to ask a more fundamental question ... [Assuming] rationality is desirable, why is irrationality possible? If we have built-in rules for how to cognize, why aren’t we built to always cognize rationally?
Consider just one example, taken from Philip Johnson-Laird’s recent book How We Reason: Paolo went to get the car, a task that should take about five minutes, yet 10 minutes have passed and Paolo has not returned. What is more likely to have happened?
1. Paolo had to drive out of town.

2. Paolo ran into a system of one-way streets and had to drive out of town.
The typical reader of this blog probably knows that the answer is 1. After all (we reason) 2 can’t be more likely, since 1 is true whenever 2 is. But I’ll bet you felt the tug of 2 and may still feel it. (This human tendency to commit the ‘conjunction fallacy’ was famously documented by the Israeli psychologists Daniel Kahneman and Amos Tversky.)
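
For readers who like to see the arithmetic, here is a minimal sketch in Python of why the conjunction can never be more probable than either of its conjuncts. The probabilities are made-up illustrations; nothing here comes from Johnson-Laird's example beyond its structure:

```python
# Toy numbers, purely illustrative.
p_out_of_town = 0.2            # hypothetical P(Paolo had to drive out of town)
p_one_way_given_out = 0.5      # hypothetical P(one-way streets | out of town)

# P(A and B) = P(A) * P(B given A), and P(B given A) <= 1,
# so the conjunction can never exceed P(A), whatever the numbers.
p_conjunction = p_out_of_town * p_one_way_given_out
assert p_conjunction <= p_out_of_town
print(p_out_of_town, p_conjunction)  # 0.2 0.1
```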

So we feel the pull of wrong answers, yet are (sometimes) capable of reasoning toward the correct ones.

Pollock wanted to know why we are built this way. Given that we can use the rules that lead us to the correct answers, why didn’t evolution just design us to do so all the time? Part of his answer -- well supported by the last 50 years of psychological research -- is that most of our beliefs and decisions are the result of ‘quick and inflexible’ (Q&I) inference modules, rather than explicit reasoning. Quickness is an obvious fitness-conferring property, but the inflexibility of Q&I modules means that they are prone to errors as well. (They will, for example, make you overgeneralize, avoiding all spiders, snakes, and fungi rather than just the dangerous ones.)

Interestingly, though, Pollock does not think human irrationality is simply a matter of the error proneness of our Q&I modules. In fact, he would not see a cognitive system composed only of Q&I modules as capable of irrationality at all. For Pollock, to be irrational, an agent must be capable of both monitoring the outputs of her Q&I modules and overriding them on the basis of explicit reasoning (just as you may have done above). Irrationality, then, turns out to be any failure to override these outputs when we have the time and information needed to do so. Why we are built to often fail at this task is not entirely clear. Pollock speculates that it is a design flaw resulting from the fact that our Q&I modules are phylogenetically older than our reasoning mechanisms.
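
To make the structure of this picture vivid, here is a toy sketch -- emphatically not Pollock's actual OSCAR architecture, just an illustration with hypothetical functions -- of a quick module whose output explicit reasoning can override when time and information permit:

```python
# A toy model of the monitor-and-override picture. Everything here is a
# hypothetical illustration, not an implementation of Pollock's system.

def qi_module(option_a: str, option_b: str) -> str:
    """Quick & inflexible heuristic: favor the more detailed, vivid story."""
    return option_b if len(option_b) > len(option_a) else option_a

def explicit_reasoning(option_a: str, option_b: str) -> str:
    """Slow route: if B entails A, then A is at least as probable as B."""
    return option_a

def judge(option_a: str, option_b: str, time_and_info: bool) -> str:
    quick = qi_module(option_a, option_b)
    if time_and_info:
        return explicit_reasoning(option_a, option_b)  # override the module
    return quick  # unmonitored Q&I output -- where irrationality can live

a = "Paolo had to drive out of town."
b = "Paolo ran into a system of one-way streets and had to drive out of town."
print(judge(a, b, time_and_info=False))  # the tempting answer (option 2)
print(judge(a, b, time_and_info=True))   # the correct answer (option 1)
```

On this picture, irrationality is precisely a failure at the override branch: having the time and information to override, and not doing so.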

I think on the surface this is actually a very intuitive account of irrationality, so much so that it is easy to miss the deeper implications of what Pollock has proposed here. Most people think of rationality as a very special human capacity, the ‘normativity’ of which may elude scientific understanding altogether. But for Pollock, rationality is something that every cognitive system has simply by virtue of being driven by a set of rules. Human rationality is certainly interesting in that it is driven by a system of Q&I modules that can be defeated by explicit reasoning. What really makes us different, though, is not that we are rational, but that we sometimes fail to be.

Brie Gertler and I Argue about Introspection on Philosophy TV

here.  For what it's worth, I thought it went pretty well.  We were able to home in on some of our central points of disagreement and push each other on them a bit.

Tuesday, September 28, 2010

Are Ethicists Any More Likely to be Blood or Organ Donors Than Are Other Professors?

Short answer: no.  Not according to self-report, at least.

These results come from Josh Rust's and my survey of several hundred ethicists, non-ethicist philosophers, and professors in other departments. (Other survey results, and more about the methods, are here, here, here, here, here, here, here, and here.)

Before asking for any self-reports of behavior, we asked survey respondents to rate various behaviors on a nine-point scale from "very morally bad" through "morally neutral" to "very morally good". Among the behaviors were:

Not having on one’s driver’s license a statement or symbol indicating willingness to be an organ donor in the event of death,
and
Regularly donating blood.
In both cases, ethicists were the group most likely to praise or condemn the behavior (though the differences between ethicists and other philosophers were not statistically significant).  60% of ethicists rated not being an organ donor on the "morally bad" side of the scale, compared to 56% of non-ethicist philosophers and 42% of non-philosophers (chi-square, p = .001).  And 84% of ethicists rated regularly donating blood on the "morally good" side of the scale, compared to 80% of non-ethicist philosophers and 72% of non-philosophers (chi-square, p = .01).
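
For the statistically curious, here is a rough sketch of the kind of chi-square test behind numbers like these. The counts below are hypothetical stand-ins (the post reports percentages, not raw group sizes), so the printed p-value won't exactly match the ones above:

```python
# A sketch of a chi-square test of independence on a 3x2 contingency table.
# Counts are hypothetical (pretend each group had exactly 100 respondents).
from scipy.stats import chi2_contingency

# Rows: ethicists, non-ethicist philosophers, non-philosophers.
# Columns: rated "not being an organ donor" morally bad vs. did not.
table = [[60, 40],
         [56, 44],
         [42, 58]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```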

In the second part of the questionnaire, we asked for self-report of various behaviors, including:
Please look at your driver’s license and indicate whether there is a statement or symbol indicating your willingness to be an organ donor in the event of death,
and
When was the last time you donated blood?
The groups' responses to these two questions were not statistically significantly different: 67% of ethicists, 64% of non-ethicist philosophers, and 69% of non-philosophers reported having an organ donor symbol or statement on their driver's license (chi-square, p = .75); and 13% of ethicists, 14% of non-ethicist philosophers, and 10% of non-philosophers reported donating blood in 2008 or 2009 (the survey was conducted in spring 2009; chi-square, p = .67).  A related question asking how frequently respondents donate blood also found no detectable difference among the groups.

These results fit into an overall pattern that Josh Rust and I have found: Professional ethicists appear to behave no better than do other professors.  Among our findings so far:
  • Arbitrarily selected ethicists are rated by members of their own departments as behaving overall no better morally than arbitrarily selected specialists in metaphysics and epistemology (Schwitzgebel and Rust, 2009).
  • Ethicists, including specialists in political philosophy, are no more likely to vote than are other professors (though Political Science professors are more likely to vote than are other professors; Schwitzgebel and Rust, 2010).
  • Ethics books, including relatively obscure books likely to be borrowed mostly by professors and advanced students in philosophy, are more likely to be missing from academic libraries than are other philosophy books (Schwitzgebel, 2009).
  • Although ethics professors are much more likely than are other professors to say that eating the meat of mammals is morally bad, they are just about as likely to report having eaten the meat of a mammal at their previous evening meal (Splintered Mind post, May 22, 2009).
  • Ethics professors appear to be no more likely to respond to undergraduate emails than are other professors (Splintered Mind post, June 16, 2009).
  • Ethics professors were statistically marginally less likely to report staying in regular contact with their mothers (Splintered Mind post, August 31, 2010).
  • Ethics professors did not appear to be any more honest, overall, in their questionnaire responses, to the extent that we were able to determine patterns of inaccurate or suspicious responding (Splintered Mind post, June 4, 2010).
Nor is it the case, for the normative questions we tested, that ethicists tend to have more permissive moral views.  If anything (as with organ donation), they tend to express more demanding moral views.

All this evidence, considered together, creates, I think, a substantial explanatory challenge for the approximately one-third of non-ethicist philosophers and approximately one-half of professional ethicists who -- in another of Josh's and my surveys -- expressed the view that on average ethicists behave a little morally better than do others from socially comparable groups.

We do have preliminary evidence, however, that environmental ethicists litter less.  Hopefully we can present that evidence soon.  (First, we have to be sure that we are done with data collection.)

Friday, September 24, 2010

Graduate Student Conference on... Me?

Well, kind of. The full title is:

CoxiMAP: Mind, Action, and Perception II:
Graduate Conference on the Work of Eric Schwitzgebel, the Epistemological Status of First-Person Methodology in Science, and the Metaphysics of Belief

It's in Osnabrueck, Germany, Jan. 21-23, and presenters will be awarded 150 Euros toward travel costs.  The submission deadline is soon: October 20 (300-word abstract, with a full paper in English of 5000 words), sent to Sascha Fink at safink [at domain] uos.de.  Full-length call for papers here.

I have been assured that submissions in philosophy of mind and/or epistemology more generally will also be welcomed.

I will also be giving a series of talks in Osnabrueck:

Jan. 20: "Shards of Self-Knowledge"
Jan. 21: "The Problem of Known Illusion and the Problem of Unreportable Illusion"
Jan. 22: "The Moral Behavior of Ethics Professors"

Also on Jan. 21, I will lead a tutorial and discussion on experience sampling.

Thursday, September 23, 2010

Perplexities of Consciousness: cover design

MIT Press has shown me the cover design for my forthcoming book, Perplexities of Consciousness:


The mind-bending cover art?  That's Pete Mandik's exomusicology (acrylic on canvas, 2001, photo by Rachelle Mandik). (You can get a bit of a closer look at another version of exomusicology here.)

Monday, September 20, 2010

How to Get a Big Head in Academia

Step 1: Get a tenure-track job at a research-oriented institution.

Step 2: Publish some stuff.

Step 3: Get tenure.

After Steps 1-3 -- which are, admittedly, something of a challenge -- the rest comes naturally!

Step 4: Read some stuff. You might especially find yourself reading material related to the subtopics on which you yourself have been publishing -- especially if any of that material cites your own work. The stuff that you choose to read will become especially salient to you in your perception of your field. (So too, of course, will your own publications.)

Step 5: Attend some meetings. The talks you see, the people you gravitate toward, will tend to discuss the same things you do. The field will thus seem to revolve around those issues. If other presenters in your area know you are around, they will be especially careful to mention your important contributions. You might even be sought out by a graduate student or two. That student or two will seem to you representative of all graduate students.

Step 6: Acquire some graduate students. They will tell you that your work is terrific and centrally important to the field.

Step 7: Read some emails. The people who like your work and think it is important are much more likely to email you than the people who ignore your work and think it's crappy. Also, the content of people's emails will tend to highlight what they like or, if critical, will frame that criticism in a way that makes it seem like a crucial issue on which you have taken an important public stand. (Additionally, the criticism will almost always be misguided, demonstrating your intellectual superiority to your critic.)

Finally: Given all the valuable input you have received through reading, attending conferences, talking to graduate students, and professional correspondence, it will seem clear to you that your field (post-Schnerdfootian widget studies) is central to academia, that the issues you are writing on are the most important issues within that field, and that your own contributions are centrally important to the academic understanding of those issues.

Sadly, your colleagues will not seem to fully appreciate this fact.

Tuesday, September 14, 2010

Can We All Become Delusional with Hypnosis? (by guest blogger Lisa Bortolotti)

Recent studies on hypnosis have suggested that delusions can be temporarily created in healthy subjects (see work by Amanda Barnier and Rochelle Cox). When you are given a hypnotic suggestion that you will see a stranger when you look in the mirror, it is probable that your behaviour in the hypnotic session will strikingly resemble that of a patient with a delusion of mirrored self misidentification. Both the hypnotic subject and the delusional patient deny that they see themselves in the mirror and claim instead that they see a stranger who looks a bit like them. Their beliefs are resistant to challenges and often accompanied by complex rationalisations of their weird experience.

Why would we want to create delusions in healthy subjects? It’s difficult to study the phenomenon of delusions in the wild, and especially the mechanisms responsible for their formation. Here are some reasons why we may need the controlled environment of the lab:

1. it is not always possible to investigate a clinical delusion in isolation from states of anxiety or depression that affect behaviour -- comorbidity makes it harder to detect which behaviours are due to the delusion under investigation, and which are present for independent reasons;

2. ethical considerations significantly constrain the type of questioning that is appropriate with clinical patients because it is important to avoid causing distress to them, and to preserve trust and cooperation, which are beneficial for treatment;

3. for delusions that are rare, such as the delusion of mirrored self misidentification, it is difficult to find a sufficient number of clinical cases for a scientific study.
Evidence from the manifestation of hypnotically induced delusions has the potential to inform therapy for clinical delusions. Moreover, the use of hypnosis as a model for delusions can also inform theories of delusion formation, as analogies can be found in the underlying mechanisms. There are good reasons to expect that the hypnotic process results in neural patterns that are similar to those found in the clinical cases.

Given that during the hypnotic session healthy subjects engage in behaviour that is almost indistinguishable from that of clinical patients, reflecting on this promising research programme can not only help the science of delusions, but also invite us to challenge the perceived gap between the normal and the abnormal.

[This is Lisa's last guest post. Thanks, Lisa!]

Friday, September 10, 2010

How Big the Moon Is, According to One Three-Year-Old

A conversation I had last night with my daughter Kate, three years and seven months old:

Me: Which is bigger, the moon or the house?

Kate: The house.

Me: Which is bigger, the moon or a tree?

Kate: A tree.

Me: Which is bigger, the moon or a quarter?

Kate: No.

Me: No? What?

Kate: They're little.

Me: Which is smaller, the moon or a quarter?

Kate: A quarter.

Me: Which is smaller, the moon or a peanut butter jar?

Kate: The moon.

Me: So the moon is between a quarter and a peanut butter jar?

Kate: That's right, Daddy, you got it!

(See also my posts Development of the Moon Illusion? and How Far Away Is the Television Screen of Visual Experience?)

Sunday, September 05, 2010

Are People Responsible for Acting on Delusions? (by guest blogger Lisa Bortolotti)

Consider this case. Bill suffers from auditory hallucinations in which someone is constantly insulting him. He comes to believe that his neighbour is persecuting him in this way. Exasperated, Bill breaks into the neighbour’s flat and assaults him. Is Bill responsible for his action? Matthew Broome, Matteo Mameli and I have discussed a similar case in a recent paper. On the one hand, even if it had been true that the neighbour was insulting Bill, the violence of Bill’s reaction couldn’t be justified, and thus it is not obvious that the psychotic symptoms are to blame for the assault. On the other hand, psychotic symptoms such as hallucinations and delusions don’t come in isolation, and it is possible that if Bill hadn’t suffered from a psychiatric illness, then he wouldn’t have acted as he did.

In the philosophy of David Velleman, autonomy and responsibility are linked to self narratives. We tell stories about ourselves that help us recollect memories about past experiences and that give a sense of direction to our lives. Velleman’s view is that these narratives can also produce changes in behaviour. Suppose that I have an image of myself as an active person, but recently I have been neglecting my daily walk and spending the time in front of the TV. So I tell myself: “I have to get out more or I’ll become a couch potato”. I want my behaviour to match my positive self-image so I can become the person I want to be. Our narratives don’t just describe our past but can also issue intimations and shape the future.

According to Phil Gerrans, who has applied the notion of self narratives to the study of delusions, when experiences are accompanied by salience, they become integrated in a self narrative as dominant events. People with delusions tend to ascribe excessive significance to some of these experiences and, as a result, thoughts and behaviours acquire pathological characteristics (e.g. as when Bill is exasperated by the idea of someone insulting him). Gerrans’ account vindicates the apparent success of medication and cognitive behavioural therapy (CBT) in the treatment of delusions. Dopamine antagonists stop the generation of inappropriate salience, and by taking such medication, people become less preoccupied with their abnormal experiences and are more open to external challenges to their pathological beliefs (“How can I hear my neighbour’s voice so clearly through thick walls?”). In CBT people are encouraged to refocus attention on a different set of experiences from those contributing to the delusional belief, and to stop weaving the delusional experiences into their self narratives by constructing scenarios in which such experiences make sense even if the delusional belief were false (“Maybe the voice I’ve heard was not my neighbour’s.”).

As Gerrans explains, self narratives are constructed unreliably in the light of abnormal experiences and delusional beliefs. If we take seriously the idea that self narratives may play an important role in the governance of behaviour, and accept that narratives constructed by people with delusions are unreliable, then it’s not surprising that people with delusions are not very successful at governing themselves.

Thursday, September 02, 2010

Philosophy TV Launch

The brand-new, launched-today Philosophy TV site promises to showcase conversations between philosophers, akin to Bloggingheads.tv. Unlike Bloggingheads.tv, Philosophy TV will be dedicated solely to philosophy. The inaugural episode features Tamar Gendler and me chatting/arguing about implicit association and belief. Unfortunately, I can't view the episode myself yet because my sound card is busted.

The next few weeks promise Jamie Dreier and Mark Schroeder, Craig Callender and Jonathan Schaffer, Peter Singer and Michael Slote, Ken Aizawa and Mark Rowlands, and Andy Egan and Joshua Knobe. Pretty impressive lineup!

Tuesday, August 31, 2010

Are Ethicists More Attentive Daughters and Sons?

Not by self-report, at least. Here's a bit more data from a survey Josh Rust and I conducted of ethicists, non-ethicist philosophers, and comparison professors in other departments in five U.S. states. (Other preliminary survey results, and more about the methods, are here, here, here, here, here, here, and here.)

In the first part of the survey we asked respondents their attitudes about various moral issues. One thing we asked was for them to rate "Not keeping in at least monthly face-to-face or telephone contact with one’s mother" on a nine-point scale from "very morally bad" (1) through "morally neutral" (5) to "very morally good" (9). As it turned out, the respondent groups were all equally likely to rate not keeping in contact on the morally bad side of the scale: 73% of ethicists said it was morally bad, compared to 74% of non-ethicist philosophers and 71% of non-philosophers (not a statistically significant difference). There was a small difference in mean response (3.4 for ethicists vs. 3.7 for non-ethicist philosophers and 3.3 for non-philosophers), but I suspect that was at least partly due to scaling issues. In sum, the groups expressed similar normative attitudes, with perhaps the non-ethicist philosophers a bit closer to neutral than the other groups. (Contrast the case of vegetarianism, where the groups expressed very different attitudes.)

In the second part of the survey we asked respondents to describe their own behavior on the same moral issues that we had inquired about in the first part of the survey. We asked two questions about keeping in touch with mom. First we asked: "Over the last two years, about how many times per month on average have you spoken with your mother (face to face or on the phone)? (If your mother is deceased, consider how often you spoke during her last two years of life.)" The response options were "once (or less) every 2-3 months", "about once a month", "2-4 times a month", and "5 times a month or more". Only the first of these responses was counternormative by the standards of the earlier normative question. By this measure there was a statistically marginal tendency for the philosophers to report higher rates of neglecting their mothers: 11% of ethicists reported infrequent contact, compared to 12% of non-ethicist philosophers and only 5% of non-philosophers (chi-square, p = .06). (There was a similar trend for the non-philosophers to report more contact overall, across the response options.)

Second, we asked those with living mothers to report how many days it had been since their last telephone or face-to-face contact. The trend was in the same direction, but only weakly: 10% of ethicists reported its having been more than 30 days, compared to 11% of non-ethicist philosophers and 8% of non-philosophers (chi-square, p = .82). We also confirmed that age and gender were not confounding factors. (Older respondents reported less contact with their mothers, even looking just at cases in which the mother is living, but age did not differ between the groups. Gender did differ between the groups -- philosophers being more likely to be male -- but did not relate to self-reported contact with one's mother.) So -- at least to judge by self-report -- ethicists are no more attentive to their mothers than are non-ethicist professors, and perhaps a bit less attentive than professors outside of philosophy.

Maybe this isn't too surprising. But the fact that most people seem to find this kind of thing unsurprising is itself, I think, interesting. Do we simply take it for granted that ethicists behave, overall, no more kindly, responsibly, or caringly than do other professors -- except perhaps on a few of their chosen pet issues? Why should we take that for granted? Why shouldn't we expect their evident interest in, and habits of reflection about, morality to improve their day-to-day behavior?

You might think that ethicists would at least show more consistency than the other groups between their expressed normative attitudes about keeping in touch with mom and their self-reported behavior. However, that was also not the case. In fact the trend -- not statistically significant -- was in the opposite direction. Among ethicists who said it was bad not to keep in at least monthly contact, 8% reported no contact within the previous 30 days, compared to 13% of ethicists reporting no contact within 30 days among those who did not say that a lack of contact was bad. Among non-ethicist philosophers, the corresponding numbers were 6% and 27%. Among non-philosophers, 4% and 14%. Summarized in plainer English, the trend was this: Among those who said it was bad not to keep in at least monthly contact with their mothers, ethicists were the ones most likely to report not in fact keeping in contact. There was also less correlation between ethicists' expressed normative view and their self-reported behavior than for either of the other groups of professors (8%-13% being a smaller spread than either 6%-27% or 4%-14%). It bears repeating that these differences are not statistically significant by the tests Josh and I used (multiple logistic regression) -- so I only draw this weaker conclusion: Ethicists did not show any more consistency between their normative views and their behavior than did the other groups.
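
For those who want the shape of that analysis, here is a rough sketch of a consistency test of this kind, using a logistic regression with an attitude-by-group interaction. The data below are randomly generated placeholders, not our survey data; only the analysis family (multiple logistic regression) matches what we actually ran:

```python
# A sketch of testing whether attitude-behavior consistency differs by group.
# All data are random placeholders, generated here purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "says_bad": rng.integers(0, 2, n),  # rated infrequent contact as bad?
    "group": rng.choice(["ethicist", "philosopher", "other"], n),
})
df["no_contact"] = (rng.random(n) < 0.10).astype(int)  # no contact in 30 days?

# A significant says_bad:group interaction would mean the attitude-behavior
# link differs across the three groups.
model = smf.logit("no_contact ~ says_bad * C(group)", data=df).fit()
print(model.summary())
```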

Perhaps the ethicists were simply more honest in their self-described behavior than were the other groups? -- that is, less willing to lie or fudge so as to make their self-reported behavior match up with their previously expressed normative view?  It's possible, but to the extent we were able to measure honesty in survey response, we found no trend for more honest responding among the ethicists.

Tuesday, August 24, 2010

Delusions and Action (by guest blogger Lisa Bortolotti)

As I suggested in my previous post, we sometimes have the impression that people do not fully endorse their delusions. In some circumstances, they don’t seem to act in a way that is consistent with genuinely believing the content of their delusions. For instance, a person with persecutory delusions may accuse the nurses in the hospital of wanting to poison him, and yet happily eat the food he’s given; a person with Capgras delusion may claim that his wife has been replaced by an impostor but do nothing to look for his wife or make life difficult for the alleged “impostor”.

Some philosophers, such as Shaun Gallagher, Keith Frankish and Greg Currie, have argued on the basis of this phenomenon (which is sometimes called “double bookkeeping”) that delusions are not beliefs. They assume that action guidance is a core feature of beliefs and maintain that, if delusions are not action guiding, then they are not beliefs. Although I have sympathies with the view that action guidance is an important aspect of many of our beliefs, I find the argument against the belief status of delusions a bit too quick.

First, as psychiatrists know all too well, delusions lead people to act. People who believe that they are dead (Cotard delusion) may become akinetic, and may stop eating and washing as a result. People who suffer from delusions of guilt, and believe they should be punished for something evil they have done, engage in self-mutilation. People who falsely believe they are in danger (persecutory delusions) avoid the alleged source of danger and adopt so-called “safety behaviours”. The list could go on. In general it isn’t true that delusions are inert.

Second, when delusions don’t cause people to act, a plausible explanation is that the motivation to act is not acquired or not sustained. Independent evidence suggests that people with schizophrenia have meta-representational deficits, flattened affect and emotional disturbances, which can adversely affect motivation. Moreover, as Matthew Broome argues, the physical environment surrounding people with the delusion doesn’t support the action that would ensue from believing the content of the delusion. The content of one’s delusion may be so bizarre (e.g., “There’s a nuclear reactor in my belly”) that no appropriate action presents itself. The social environment might be equally unsupportive. One may stop talking about one’s delusion and acting on it to avoid incredulity or abuse from others.

My view that delusions are continuous with ordinary beliefs is not challenged by these considerations: maybe to a lesser extent than people with delusions, we all act in ways that are inconsistent with some of the beliefs we report -- when we’re hypocritical -- and we may fail to act on some of our beliefs for lack of motivation -- when we’re weak-willed.