Tuesday, April 28, 2009

April 29-30: Fullerton Philosophy Conference: Consciousness and the Self

Last-minute notice, I know, but L.A. area folks might be interested in attending all or part of a conference on "Consciousness and the Self" tomorrow and Thursday at Cal State Fullerton.

The speakers are Fred Dretske, Alex Byrne, Sydney Shoemaker, David Chalmers, Jesse Prinz, and me. Info here.

My talk is called "Self-Unconsciousness", posted here.

Monday, April 27, 2009

When Is It Time to Retire?

(by guest blogger Manuel Vargas)

My tenure of guest-blogging here at the Splintered Mind is coming to an end. My thanks to Eric for having me, and to all the commentators for their thoughtful comments and responses. In keeping with my retirement from this bit of guest-blogging, I thought I’d post something about retirement and its norms, since I know so little about it.

Everyone knows at least one professor, whether a colleague at their own institution or some other, of whom it is painfully clear to everyone EXCEPT that person that he or she should retire. So I’ve been told. I don’t actually know such a person myself, but it seems a common enough refrain that I’ve started to think about the phenomenon. In particular, I’m worried that some day I’ll be THAT guy, the guy whom everyone (except me) knows ought to retire. So, in support of my then-colleagues and chagrined students of the future, I’m trying to work out some general principles of retirement far in advance, so that I might apply them to my own circumstances. Will you help me?

In what follows, I offer some initial thoughts about the matter, with the acknowledgment that I will surely retract everything I write in this post at some point in the next 40 years.

First, some caveats about the scope of the ‘ought’ involved:

(1) Let’s suppose we are talking about professors who have no real financial need to teach, and whose psychology would not collapse in some profound way if they were no longer teaching.

(2) Let us also suppose that retirement here does not necessarily mean that the professor emeritus ceases to participate in the life of the profession or to perform research in some guise. We are only concerned with retirement from one’s regular full-time faculty position at the university.

And, (3) let us suppose that surrendering said position does not leave the department dramatically worse off from a long-term staffing or workload standpoint. And to anticipate, no, having to hire a replacement doesn’t count as making a department dramatically worse off in the relevant sense. So, the ought in my usage of the phrase “ought to retire” should be regarded as ranging over a somewhat limited set of circumstances.

Given the aforementioned restrictions of scope, then, I’m inclined to put the sense of ought that concerns me in those circumstances as something like this: when ought a (philosophy) professor to retire, from the combined standpoints of the professor’s dignity and the general well-being, given no powerful or important disincentives to doing so, but given that there are only finitely many jobs in the profession at large and in one’s own department?

Some further caveats and refinements:

(4) I recognize that some professors have no dignity and/or no aspirations of dignity. Indeed, I may be one. But that is the sort of dirty, specific kind of detail that we shall discreetly set to the side. It is better to pretend that all professors (and departments) have aspirations of dignity.

(5) Our considered question is manifestly NOT about age. Or, at any rate, it is not directly about age. Age may or may not be correlated with whether some of the conditions I suggest below are satisfied, but chronological age itself is irrelevant to what follows. There are plenty of philosophers working now who, despite having known Kant personally, are under no “ought of retirement” of the sort under present consideration. And, presumably, there are people who could never have heard a David Lewis talk but who, if they had any good sense, would do themselves and their departments a favor and retire from the profession— if only their university had the good sense to offer them a reasonable retirement package!

(Randy Clarke once called my attention to a principled argument to this effect made by Saul Smilansky in Moral Paradoxes, an argument that concludes that most of us should retire immediately in light of the numbers of people who could do our jobs at least as well as we are doing them. Still, let’s ignore this too for the moment.)

These considerations having been noted, I suggest that a professor should retire when some weighted cluster of the following conditions is satisfied (the weights given by contextual features of the person’s dignity, the department’s aspirations for itself, and what one’s university values in its faculty members):

When, after 7 years or more since tenure . . .

(a) One’s classes are repeatedly cancelled for low enrollment, at a much higher rate than those of other full-time faculty members.
(b) One’s published research has not been cited in a scholarly context in more than seven years.
(c) One has not been invited or induced to participate in an extra-departmental committee in more than seven years.
(d) One has not served the discipline in any notable professional capacity in ten years (e.g., editing a journal, refereeing papers, organizing conferences).

Do these conditions seem about right? Should something be added or deleted? How would you weight the conditions for, say, a teaching institution or a research institution? Is there some other sure-fire indicator for when someone should retire?
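
For the quantitatively inclined, here is a minimal sketch in Python of how such a weighted cluster might be combined into a single verdict. To be clear, the particular weights, condition names, and the 0.5 threshold below are placeholders of my own invention, not anything I am actually proposing:

# Toy sketch: combine conditions (a)-(d) into one weighted verdict.
# The weights and the threshold are made-up placeholders, not recommendations.
CONDITIONS = {
    "classes_repeatedly_cancelled": 0.3,          # condition (a)
    "uncited_for_over_seven_years": 0.3,          # condition (b)
    "no_extra_departmental_committees": 0.2,      # condition (c)
    "no_professional_service_in_ten_years": 0.2,  # condition (d)
}

def ought_to_retire(satisfied, years_since_tenure, threshold=0.5):
    """Return True if the weighted score of the satisfied conditions meets the threshold."""
    if years_since_tenure < 7:
        return False  # per the setup above, the question does not yet arise
    score = sum(weight for name, weight in CONDITIONS.items() if name in satisfied)
    return score >= threshold

# Example: conditions (a) and (b) hold, twelve years past tenure.
print(ought_to_retire({"classes_repeatedly_cancelled", "uncited_for_over_seven_years"}, 12))

Of course, the serious point stands apart from the toy code: the right weights and threshold would vary with context (a teaching institution versus a research institution, the department’s aspirations, and so on).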

Admittedly, some of these conditions and numbers are arbitrary. And this is all way too rough and foolish. That’s okay, so long as the arbitrariness and foolishness don’t preclude a useful discussion. And anyway, we shouldn’t expect more precision than the subject matter permits, which must be true since Aristotle said it.

Please bear in mind that I’m not supposing that retirement means retirement from participating in the life of the profession. I’m simply assuming that one is walking away from a formal position that will be promptly filled by a new philosopher delighted by the prospect of employment in a profession with grotesquely fewer jobs than qualified applicants.

So help me out here . . . how will I know when I should retire from the active, full-time professor gig?

Thursday, April 23, 2009

The Purview of Human Subjects Committees

Federal regulations on human subjects research state that Institutional Review Boards (IRBs) should review activity that involves "systematic investigation... designed to develop or contribute to generalizable knowledge" and which further involves "intervention" or "interaction" with, or the acquisition of "identifiable private information" about, human beings.

Here's what I wonder: Given these definitions, why aren't IRBs evaluating journalism projects? (Of course they aren't. For one thing journalism moves too quickly for IRBs, which typically take weeks if not months to issue approvals.)

Journalists interact with people, obviously (according to the code "communication or interpersonal contact" counts as interaction), so if they don't fall under the Human Subjects code, it must be because they don't do "systematic investigation... designed to develop or contribute to generalizable knowledge". Tell that to an investigative journalist exposing the abuses of factory laborers!

Is investigative journalism on factory workers maybe not "systematic"? Or not "generalizable"? I see no reason why investigative journalism shouldn't be systematic. Indeed, it seems better if it is -- unless one works with a very narrow definition of "systematic" on which much of the research IRBs actually do (and should) review is not systematic. And why should the systematicity of the research matter for the purpose of reviewability anyway? I also don't see why investigative journalism shouldn't be generalizable. The problems with this criterion are the same as with systematicity: Define "generalizable" reasonably broadly and intuitively, so that a conclusion such as "undocumented factory workers in L.A. are often underpaid" is a generalization, and journalism involves generalization; define it very narrowly and much IRB-reviewed research is not "generalizable". And, as with systematicity, why should it matter to reviewability exactly how specific or general the conclusions are that come out in the end?

IRBs were designed in the wake of 20th-century abuses of human subjects both in medicine (as in the Tuskegee syphilis study) and in psychology (such as the Milgram shock study and the Stanford prison experiment). Guidelines were written with medicine and psychology in mind, and traditionally IRBs focused on research in those fields. However, there are plenty of other fields that study people, and the way the guidelines are written, it looks like much research in those fields actually falls under IRBs' purview. So the U.C. Riverside IRB -- of which I'm a member -- has been reviewing more and more proposals in Anthropology, History, Ethnic Studies, and the like. Let's call it IRB mission creep.

We recently got news of a graduate student in Music who interviewed musicians and wanted to use those interviews as the core of his dissertation -- but he didn't think to ask for IRB approval before conducting those interviews. The IRB originally voted to forbid him to use that information, basically torpedoing his whole dissertation project. That decision was only overturned on appeal.

It makes a lot of sense, especially given the history of abuse, for IRBs to examine proposals in medicine and psychology. But do we really want to require university professors and graduate students to go through the rigmarole of an IRB application -- and it is quite a rigmarole, especially if you're not used to it! -- and wait weeks or months every time they want to talk with someone and then write about it?

Here's the core problem, I think: Research needs to be reviewed if there's a power issue that might lead to abuse. If the researcher has a kind of power, whether through social status (e.g., as a professor or a doctor vis-a-vis a student or a patient, or even in the experimenter-subject relationship), or through an informational advantage (e.g., through subjecting people to some sort of deception or experimental intervention whose purposes and hypotheses they don't fully understand), then IRBs need to make sure that that power isn't misused. But no IRB should need to approve a journalism student interviewing the mayor or a music professor interviewing jazz musicians about their careers. In such cases, the power situation is unproblematic. Of course in any human interaction there are opportunities for abuse, but only Big Brother would insist that a regulatory board should govern all human interaction.

Monday, April 20, 2009

Why the Gourmet Report is a Failure

(by guest blogger Manuel Vargas)

I’m a long-time fan of the Gourmet Report.* Nevertheless, I’ve recently started to wonder whether the Report fails to measure faculty quality, even when it is construed in roughly reputational terms, that is, in terms of concrete judgments of faculty quality as seen by the mainstream of research-active elements in the Anglophone portion of the profession.

(Before you start to roll your eyes, let me note that I’m still a fan of the Report, and despite the problem I’m about to describe, I think it is like democratic government— deeply problematic, but better than any of the alternatives. Moreover, it isn’t like my department or my work is at stake in anything the Report does— I’m in a department with no graduate program, and my career, such as it is, is beyond the point at which the reputation of the institution that awarded me a Ph.D. is of much consequence. So there.)

Here’s why I suspect that the Report is a failure at measuring faculty quality: we are bad judges of our own estimates of quality. That is, I suspect that we are unreliable reporters about the work that we regard as best, in something like a stable, all-things-considered sense. (I certainly think students are unreliable judges of what teaching they learn the most from, and I suspect something analogous is true of philosophers.**) I suspect the quality of my quality assessments is a function of lots of different things— what I’ve read recently, what first springs to mind when I see a philosopher’s name, whether I had reason to attend very closely to something of theirs, how much of their work I’ve forgotten, whether I disagreed vehemently or only lightly with it, and so on.

Even bracketing framing effects, though, I suspect that my explicit deliberative judgments of quality fail, in some complex ways, to track my actual regard for the quality of philosophers and their work. Here’s one way my judgments might fail to track my actual regard: X’s work was underappreciated by me simply because the ideas sat in the back of my mind, and later played a role in my own judgments about what would work and what wouldn’t, but I never picked up on the fact that it was X’s arguments about Y that did that for me.

Here’s another way that might happen: I could be aware of X’s work, and think well of that person’s work, but underrate its importance to my own thoughts in the following way: I might not realize how much of that person’s work I cite and respond to in a way that takes it seriously. That is, I could think that work is of very high quality (perhaps worth more of my time than any other work on the subject matter!) but unless I counted up citations or counted up the number of times I focus on responding to that figure, I might simply fail to realize how significant that person’s work really is for me, and so I might fail to accurately assess the quality of work. (Of course: I might also overinflate importance for a related reason—I spent a lot of time criticizing someone’s work because it is easy, but that makes their name loom larger in my mind than my actual regard for it.)

Here’s another way “under-regarding” might happen: I could be subject to implicit bias effects of a peculiar sort. That is, I could unconsciously downgrade (or upgrade) my global assessment of quality on the basis of perceived race/class/gender/age etc., even if, when asked, I sincerely disavow that these things have anything to do with it. On this picture, the relevant test might be closer to something like: what would I think of this work if I had never known anything about the author? A: We’ll never know.

(Relatedly, implicit bias might work in a more targeted way, affecting only my overall assessments of worth, and not my assessments of a particular argument or even a specific paper, even when I am conscious of race/class/gender/age, etc.)

Here’s another way that might happen: I could be less good than I think at blocking halo effects of various sorts. So, knowing that X is at Wonderful Institution Y may inflate my estimate of that person’s work unconsciously. Or, my agreement with X on matter M may lead me to think better of X than someone else when filling out a survey, because we share the same beleaguered position on some matter. Or, knowing X has published many times in some journal I think well of might lead me to cast doubt on my own assessments of the quality of the work.

Suppose you thought people in general are subject to these effects. Are philosophers vulnerable to such effects? I think yes, but I’ve been repeatedly told that philosophers are special, and alone among humans immune to these sorts of effects because of our marginally greater reflectiveness. So, I must be wrong.

Still, there is some evidence that at-a-time global self-assessments are subject to priming and framing effects. There is some literature on the way in which people are good at monitoring their own discriminatory behavior only when they have reason to think it will be observed (so, for example, you probably aren’t very good at monitoring your discrimination against groups whose salience is not raised for you: think age, disability, non-black/non-white racial groups, etc.). There is also the fast-growing literature on implicit bias and the way it operates. And, there is a large body of work in cognitive science and psychology casting doubt on the accuracy and efficacy of conscious, deliberative judgments with respect to evaluative matters (something that Leiter himself, writing with Joshua Knobe, wrote about in the context of Nietzschean moral psychology!).

I don’t know how to correct for any of this, given the Report’s aim of measuring faculty quality in terms of conscious, explicit, global judgments of quality. Keeping track of citation impact corrects for at least one of the possible misalignments I mentioned, but not all of them. And anyway, citation impact rankings are subject to their own difficulties as well. (Although I think it would be a useful supplement to the Report to track this data, too.)

In sum, although I think the Gourmet Report probably fails to accurately capture our actual estimations of faculty quality, it is nevertheless likely the best thing we’ve got going for judging the philosophical reputation of departments and their specializations, as seen by the mainstream of research-active elements in the Anglophone portion of the profession.

*Indeed, I may be one of the longest of the long-time fans of the PGR: somehow I stumbled across an early version of it, back when Mosaic was my browser of choice, using email required some degree of sophistication with UNIX commands, and the Report appeared to be something produced on a typewriter. Anyhow, the Report was a big help when thinking about graduate schools and a nice supplement to local advice about where I should consider applying. In several cases the Report highlighted departments that individual advisors had never mentioned, but when I asked about one of those departments (because it was listed on the Report), the response was invariably something like “Oh yeah— so-and-so is there; that place would be pretty good, too.” I think the Report has improved in numerous ways since those early days, and I think that it continues to be excellent at its ostensive function as one of several tools for those thinking about graduate school in philosophy. Indeed, it is out of a sense of its ongoing utility for graduate students that I’m happy to serve as one of the folks providing specialty rankings in philosophy of action.

** Regarding student unreliability, the matter is complicated. But see Mayer et al., “Increased Interestingness of Extraneous Details in a Multimedia Science Presentation Leads to Decreased Learning,” Journal of Experimental Psychology: Applied (2008), Vol. 14, No. 4, 329–339. And think about research on what teaching evaluations track. One might worry that too often teaching evals track things irrelevant to learning, or even, if the Mayer et al. data prove correct, impediments to learning!

Thursday, April 16, 2009

Where Does It Look Like Your Nose Is?

Following the suggestion of H. Ono et al. in their weird and fascinating 1986 article on "Facial Vision" in Psychological Research, I drew two lines on a piece of cardboard, and you might want to do the same. The lines start at one edge, about 6 cm apart, and converge to a point at the other edge. (A piece of paper held the long way will work fine, as long as you can keep it rigid.) Hold the midpoint of the 6 cm separation at the bridge of your nose and converge your eyes on the intersection point. If you do it right, it should look like there are three or four lines, two on the sides (one going toward each ear) and one or two in the center, headed right for the bridge of your nose.

The weird thing of course is that there are no lines on the cardboard that aim toward your ears or terminate at the bridge of your nose. Ono et al. suggest that the explanation (of the nose part at least) is that from the perspective of each eye the nose appears to be at the location of the other eye, so that the line headed toward your left eye seems to your right eye to be headed toward your nose and the line headed toward your right eye seems to the left eye also to be headed toward the nose.

With that in mind, I remove the cardboard and close one eye. Where does it seem that my nose is? Well, at first I'm inclined to say my perception is veridical: To the open eye, the nose seems closer than does my closed eye (or my bodily map of where my closed eye should be). But now I open and shut each eye in alternation. It does seem that my nose jumps around, maybe an inch or two side to side when I do this. But maybe that's just because my assumed egocentric position changes, relative to my nose?

Ono et al. also suggest trying to locate your phosphenes with one closed eye. (I had a post on this some time ago.) Phosphenes are those little circles you can see when you press on your eye. I find them easiest to see when I press on the corner of a closed eye and attend to the opposite corner of that same eye, looking for a dark or bright circle. (It may take some trial and error to get this right.) As I noted in the old post, for me at least the phosphenes generated by pressing the outside corner of a closed eye, with the other eye open, appear to be spatially located inside or behind the nose. This seems to me to be the case no matter which part of my closed eye I press. At the time of that post, it didn't occur to me that this might be because my nose was subjectively located as co-positional with the closed eye. Holding my nose with two fingers and pressing my closed eye with another finger from the same hand, to throw some tactile feedback into the mix, doesn't seem to change anything.

Tuesday, April 14, 2009

Armchair Sociology of the Profession IV: Splintered Fields

(by guest blogger Manuel Vargas)

UCR’s Peter Graham once mentioned to me that if you go to different departments, what you’ll find is that different figures will be really prominent in the local conception of a field. So, all the graduate students at School A read figure Y and all the grad students at School B read figure Z. What it takes a while to realize, he said, was that half of the time mostly the same views are in play, just filtered through whatever figures have local prominence. So, everyone is getting their dose of externalism, anti-realism, or whatever, but filtered through the concerns of whichever figures loom large in local graduate education. (Peter had a nice example of this, but I have since forgotten what it was. Go ask him yourself and see if he remembers what he had in mind.)

That picture seems mostly right to me. In different departments, different figures are more and less likely to be taught, even if there is widespread professional consensus outside the department about which figures are worth teaching and which issues are important. Local variation can be explained in several ways: partly in terms of who faculty members are reading or responding to in their own work at the time, partly in light of the literatures faculty members were trained in, and (without a doubt) partly by whether any of the big cheeses in a field are members of the department in which one is getting trained. In many (most?) fields, the overlap is substantial enough that if, for example, you study metaphysics at Notre Dame, right out of the gate you are going to be able to have fruitful, meaningful conversations with people who study metaphysics at Princeton.

Still, there are cases where there are vast gulfs in the conception of fields, both in terms of what positions are worth serious engagement and in terms of what the assumptions are that are governing inquiry into the field. Some places take Wittgenstein seriously. Others don’t have more than the vaguest idea of who he is. Some places love them some Davidson. Other places haven’t had him on a syllabus in decades.

This year, I’ve been struck by some surprisingly deep fractures in philosophy of action. I’ve sat in on a couple of seminars in philosophy of action at my host institution this year, and it has been incredibly fascinating to see how different the conception of the field in these courses is from the one in my own graduate training, my own teaching, and my own work in related parts of the field. Even though all these accounts are in some sense concerned with agency, the will, and the relationship of agents to actions (that’s why it counts as philosophy of action), it seems to me that the local differences are manifestly not a case of the same basic positions, substantive concerns, and the like being presented through a different constellation of figures. (For those who are wondering, it seems to me less a divide between causal theorists and non-causalists, and more a divide between those-who-start-with-Davidson and those-who-start-with-Anscombe, where starting with either does not necessarily entail substantial agreement.)

Lest I be misunderstood, I don’t say any of this by way of criticism of anyone’s conception of their field—please, let those flowers bloom. Indeed, I feel fortunate to have gained a sharper sense of my own philosophical presuppositions as a result of the experience. And, I think we all benefit from a variety of conceptions of a field, from a range of philosophical concerns, and from a broad range of philosophical methods and approaches. (I take it that something like this phenomenon is common enough that at least some departments used to resist incestuous hiring precisely out of concern that it would limit the intellectual vision of their local ecosystem.)

Anyway, what I’m wondering is what other fields have gulfs internal to them that make any substantive discussion across the splintered portions of the field challenging. Maybe Nietzsche scholarship is one instance, with the Frenchified Nietzsche interpreters on one side and the broadly “analytic” Nietzsche scholars on the other side. I imagine that there would be lots of head scratching about how to talk to each other, if (unlikely as it is) either group had any substantial interest in doing so. But surely there are other instances of a big divide in presuppositions that significantly hinders intelligibility across camps internal to the same subfield.

Any thoughts about good candidates for other deeply fractured fields? I’ve heard suggestions of something similar internal to ethics, with (broadly) sentimentalists on one side and a priorists (rationalists, contractualists, etc.) on the other, but I’m less confident that we’re at a very significant degree of head-scratching puzzlement about what the other camp(s) are doing internal to ethics. Any of this going on in phil mind? Epistemology? Political phil? Elsewhere?

Why Does the Pacific APA End on Easter?

This year, as usual, the Pacific Division meeting of the American Philosophical Association ended on Easter Sunday. At the beginning of the conference, God is crucified. While He is dead, everyone delivers their grand lectures and stays up late partying. When He rises, we're on our planes out of town.

Monday, April 06, 2009

On Encouraging Children to Reflect about Morality

Consider these two views of moral education:

(1.) The "liberal", inward-out model: Moral education should stress moral reflection, with rules and punishment playing a secondary role. If six-year-old Sally hits her friend Hank, you have to enforce the rules and punish her (probably), but what's really going to help her improve morally is encouraging her to think about things like: Hank's perspective on the situation, how she feels about having hurt Hank, and the best overall norms for behavior. Adults, likewise, make moral progress by thinking carefully about their own standards of right and wrong and whether their behavior lives up to those standards. Thus, mature morality grows from within: It's a natural development of the values people, upon reflection, discover to be already nascent in themselves.

(2.) The "conservative", outward-in model: Moral education should stress rules and punishment, with moral reflection playing a secondary role. You can't understand and apply the rules, of course, without some sort of reflection on them, but reflection should be in the context of received norms. Otherwise, it's likely just to become rationalization of self-serving impulses. Until people are morally well developed, the values that emerge from their independent and free reflection will almost inevitably be inferior to time-tested traditional cultural values. Thus, mature morality is imposed from without: People are forced to obey certain norms until obedience to those norms becomes habitual. Perhaps eventually those norms will be understood and embraced, but that's near the end of the developmental trajectory, not the beginning.

Now academically affiliated researchers on moral development almost universally prefer the first model to the second (examples include rationalists like Piaget and Kohlberg, most of their opponents who stress the importance of sympathy and perspective-taking, as well as people like Damon who endorse a hybrid view). The common idea is that children (and the morally undeveloped in general) improve morally when they are encouraged to think for themselves and given space to discover their own reactions and values.

Now I'm sympathetic to this idea, but here's my thought: Suppose Sally hits Hank and a liberally-minded teacher comes up and asks her how it made her feel to hurt Hank. What child, realistically, would say, "Well, I know he didn't deserve it, but it just felt good pounding him to a pulp!"? The reality is that the child is being asked to reflect in a situation where she knows that the teacher will approve of one answer and condemn another. This isn't free reflection, and the answer the child gives may not reflect her real feelings and values. Instead, it seems, it is a kind of imposition -- and one perhaps all the more effective if the child mistakes the resulting judgment for one that is genuinely her own.

Therefore, maybe, a liberal-seeming style of moral education is effective not because we have in us all an inclination toward the good that only needs encouragement to flower, but rather because reflection in teacher-child, parent-child, and similar social contexts is really an insidious form of imposition -- and thus, perhaps, the conservative's best secret tool.

Friday, April 03, 2009

Armchair Sociology of the Profession, part 3: A Manifesto on Geography and Social Networks

(by guest blogger Manuel Vargas)

I’ve spent most of my philosophical life hanging out in philosophy departments up and down California, partly by luck but also by disposition. This year, however, I’ve been living on the East Coast and I’ve been struck by the difference geography makes to the profession. (Caveat: In what follows, I frame things mostly in terms of differences across coasts, but I expect that many of these factors are at play to lesser and greater degrees in the interior of the U.S., and these issues will certainly be salient to philosophers coming into the US from abroad. But I write in terms of coastal examples since that is what I know firsthand. Also, I'm going to focus on West Coast disadvantages, ignoring some of its clear advantages in non-professional ways.)

Consider the dense network of terrific departments in the Boston and New York areas. This proximity is conducive to a range of interactions and a degree of inter-departmental familiarity that is much harder to reproduce anywhere the geographic clustering of departments is not so tight. MIT, Harvard, BU, BC, and Tufts are all closer to each other than are two schools that are frequently thought of as relatively close, geographically speaking: Berkeley and Stanford. The latter are more than 10 times as far apart from each other as those Boston-area schools are! Although I didn’t bust out Google Maps to check, I’m pretty sure the same is true of the L.A. area schools vs. those Boston schools, too— the distances on the left coast are much larger. So, in places like NYC and Boston, you’ve got a density of philosophers and departments that can’t be matched elsewhere. And, indeed, something like this is true on the North Atlantic coast as a whole, at least in comparison to the West Coast.

This isn’t to say that there is as much interaction in the greater Boston and (I imagine) New York areas as an outsider might expect—professors everywhere are over-extended and can’t participate in everything. Still, there are lots of effects, many indirect and apart from philosophical feedback and interaction. Here are some:

First: financial effects. It is cheap to go to local talks and conferences in at least the North Atlantic states, because the distances are not huge and the transportation options are good and comparatively inexpensive. So, if you’ve got a fixed research account, you can afford to go to comparatively more conferences than your West Coast brethren on the same budget. Similar economies of distance come into play on the interpersonal axis as well. If you have a family, and a partner who is willing to put up with you going away for professional travel without family, it is surely easier to do so when you can be gone for shorter periods of time, which closer geographic proximity permits.

Second: effects on professional esteem. In a previous post, Eric wondered about the curious stability of UCR’s rankings. I had some things to say about it in the comments, but one of the things I floated was the hypothesis that departments will fare less well in reputational rankings if they are not part of a densely networked collection of departments. Since, if I’m right, this is partly driven by geographic proximity, geography ends up having an impact on things like the Gourmet Report, the perceived quality of degrees from a given graduate program, and so on. That is, philosophers will more highly rate departments they are familiar with, but if familiarity is partly a function of geographic relationships, then geographically isolated departments will suffer from a geographic bias among evaluators, and this propagates through the profession in complicated ways.

Third: early careers. A big problem here is the Eastern APA, where everyone goes to look for a job. Pretty much everything about the Eastern is bad, but for West Coasters it is invariably worse. It is more expensive to get to, more time-consuming to attend, and you are less likely to have faculty advisors and supporters present when you get there. It would be interesting to compare how East and West Coast job candidates fared over several iterations of the market if all the East Coast candidates and none of the West Coast candidates had to suffer the effects of jet lag and time zone changes, of having diminished numbers of advisors, committee members, and departmental mentors present during the hiring bloodbath, and so on. My bet is that putting the meeting in San Diego for a few years would help the performance of West Coast folks and hurt the performance of East Coast folks. Anyone want to try?