Tuesday, August 08, 2017

The Ethical Significance of Toddler Tantrums (guest post by Henry Shevlin)

guest post by
Henry Shevlin

As any parent can readily testify, little kids get upset. A lot. Sometimes it’s for broadly comprehensible stuff - because they have to go to bed or to daycare, for example. Sometimes it’s for more bizarre and idiosyncratic reasons – because their banana has broken, perhaps, or because the Velcro on their shoes makes a funny noise.

For most parents, these episodes are regrettable, exasperating, and occasionally, a little funny. We rarely if ever consider them tragic or of serious moral consequence. We certainly feel some empathy for our children’s intense anger, sadness, or frustration, but we generally don’t make a huge deal about these episodes. That’s not because we don’t care about toddlers, of course – if they were sick or in pain we’d be really concerned. But we usually treat these intense emotional outbursts as just a part of growing up.

Nonetheless, I think if we saw an adult undergoing extremes of negative emotion of the kind that toddlers go through on a daily or weekly basis, we’d be pretty affected by it, and regard it as something to be taken seriously. Imagine you’d visited a friend for dinner, and upon announcing you were leaving, he broke down in floods of tears, beating on the ground and begging you not to go. Most of us wouldn’t think twice about sticking around until he felt better. Yet when a toddler pulls the same move (say, when we’re dropping them off with a grandparent), most parents remain, if not unmoved, then at least resolute.

What’s the difference between our reactions in these cases? In large part, I think it’s because we assume that when adults get upset, they have good reasons for it – if an adult friend starts sobbing uncontrollably, then our first thought is going to be that they’re facing real problems. For a toddler, by contrast – well, they can get upset about almost anything.

This makes a fair amount of sense as far as it goes. But it also seems to require that our moral reactions to apparent distress should be sensitive not just to the degree of unhappiness involved, but also to the reasons for it. In other words, we’re not discounting toddler tantrums because we think little kids aren’t genuinely upset, or are faking, but because the tantrums aren’t reflective of any concerns worth taking too seriously.

Interestingly, this idea seems at least prima facie in tension with some major philosophical accounts of happiness and well-being, notably hedonism and desire satisfaction theory. By the lights of these approaches, it’s hard to see why toddler emotions and desires shouldn’t be taken just as seriously as adult ones. These episodes do seem like bona fide intensely negative experiences, so for utilitarians, every toddler could turn out to be a kind of negative utility monster! Similarly, if we adopt a form of consequentialism that aims at maximizing the number of satisfied desires, toddlers might be an outsize presence – as indicated by their tantrums, they have a lot of seemingly big, powerful, intense desires all the time (for, e.g., a Kinder Egg, another episode of Ben and Holly, or that one toy their older sibling is playing with).

One possibility I haven’t so far discussed is the idea that toddlers’ emotional behavior might be deceptive: perhaps the wailing toddler, contrary to appearances, is only mildly peeved that a sticker peeled off his toy. There may be something to this idea: certainly, toddlers have very poor inhibitory control, so we might naturally expect them to be more demonstrative about negative emotions than adults. That said, I find it hard to believe that toddlers really aren’t all that bothered by whatever it is that’s caused their latest tantrum. As much as I may be annoyed at having to leave a party early, for example, it’s almost inconceivable to me that it could ever trigger floods of tears and wailing, no matter how badly my inhibitory control had been impaired by the host’s martinis. (Nonetheless, I’d grant this is an area where psychology or neuroscience could be potentially informative, so that we might gain evidence that toddlers’ apparent distress behavior was misleading).

But if we do grant that toddlers really get very upset all the time, is it a serious moral problem? Or just an argument against theories that take things like emotions and desires to be morally significant in their own right, without being justified by good reasons? As someone sympathetic to both hedonism about well-being and utilitarianism as a normative ethical theory, I’m not sure what to think. Certainly, it’s made me consider whether, as a parent, I should take my son’s tantrums more seriously. For example, if we’re at the park, and I know he’ll have a tantrum if we leave early, should I prioritize his emotions above, e.g., my desire to get home and grade student papers? Perhaps you’ll think that in reacting like this, I’m just being overly sentimental or sappy – come on, what could be more normal than toddler tantrums! – but it’s worth being conscious of the fact that previous societies normalized ways of treating children that we nowadays would regard as brutal.

There’s also, of course, the developmental question: toddlers aren’t stupid, and if they realize that we’ll do anything to avoid them having tantrums, then they’ll exploit that to their own (dis)advantage. Learning that you can’t always get what you want is certainly part of growing up. But thinking about this issue has certainly made me take another look at how I think about and respond to my son’s outbursts, even if I can’t fix his broken bananas.

Note: this blogpost is an extended exploration of ideas I earlier discussed here.

[image: Angelina Koh]

Thursday, August 03, 2017

Top Science Fiction and Fantasy Magazines 2017

In 2014, as a beginning writer of science fiction or speculative fiction, with no idea what magazines were well regarded in the industry, I decided to compile a ranked list of magazines based on awards and "best of" placements in the previous ten years. Since people seemed to find it useful or interesting, I've been updating it annually. Below is my list for 2017.

Method and Caveats:

(1.) Only magazines are included (online or in print), not anthologies or standalones.

(2.) I gave each magazine one point for each story nominated for a Hugo, Nebula, Eugie, or World Fantasy Award in the past ten years; one point for each story appearance in any of the Dozois, Horton, Strahan, Clarke, or Adams "Year's Best" anthologies; and half a point for each story appearing in the short story or novelette category of the annual Locus Recommended list. (A small illustrative sketch of this scoring rule appears just after these caveats.)

(3.) I am not attempting to include the horror / dark fantasy genre, except as it appears incidentally on the list.

(4.) Prose only, not poetry.

(5.) I'm not attempting to correct for frequency of publication or length of table of contents.

(6.) I'm also not correcting for a magazine's only having published during part of the ten-year period. Reputations of defunct magazines slowly fade, and sometimes they are restarted. Reputations of new magazines take time to build.

(7.) Lists of this sort do tend to reinforce the prestige hierarchy. I have mixed feelings about that. But since the prestige hierarchy is socially real, I think it's in people's best interest -- especially the best interest of outsiders and newcomers -- if it is common knowledge.

(8.) I take the list down to 1.5 points.

(9.) I welcome corrections.
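
To make the arithmetic in caveat (2.) concrete, here is a minimal sketch in Python. The point values follow the method described above, but the example counts are invented; this is just an illustration of the scoring rule, not the script actually used to compile the list below.

# Scoring rule from caveat (2.): 1 point per award nomination, 1 point per
# "Year's Best" anthology appearance, 0.5 points per Locus Recommended listing.
AWARD_POINTS = 1.0
YEARS_BEST_POINTS = 1.0
LOCUS_POINTS = 0.5

def magazine_score(award_noms, years_best_appearances, locus_listings):
    return (AWARD_POINTS * award_noms
            + YEARS_BEST_POINTS * years_best_appearances
            + LOCUS_POINTS * locus_listings)

# Invented example: 3 award nominations, 2 "Year's Best" stories,
# and 4 Locus-recommended stories over the ten-year window.
print(magazine_score(3, 2, 4))  # -> 7.0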

Results:

1. Asimov's (244.5 points)
2. Fantasy & Science Fiction (182)
3. Clarkesworld (129.5)
4. Tor.com (120) (started 2008)
5. Lightspeed (83.5) (started 2010)
6. Subterranean (79.5) (ceased 2014)
7. Strange Horizons (48)
8. Analog (47.5)
9. Interzone (45.5)
10. Beneath Ceaseless Skies (30.5) (started 2008)
11. Fantasy Magazine (27.5) (merged into Lightspeed 2012, occasional special issues thereafter)
12. Uncanny (19) (started 2014)
13. Apex (15.5)
14. Jim Baen's Universe (11.5) (ceased 2010)
14. Postscripts (11.5) (ceased short fiction in 2014)
14. Realms of Fantasy (11.5) (ceased 2011)
17. Nightmare (10) (started 2012)
18. The New Yorker (8)
19. Black Static (7)
20. Intergalactic Medicine Show (6)
21. Electric Velocipede (5.5) (ceased 2013)
22. Helix SF (5) (ceased 2008)
22. Tin House (5)
24. McSweeney's (4.5)
24. Sirenia Digest (4.5)
26. Conjunctions (4)
26. The Dark (4) (started 2013)
28. Black Gate (3.5)
28. Flurb (3.5) (ceased 2012)
30. Cosmos (3)
30. GigaNotoSaurus (3) (started 2010)
30. Harper's (3)
30. Shimmer (3)
30. Terraform (3) (started 2014)
35. Lady Churchill's Rosebud Wristlet (2.5)
35. Lone Star Stories (2.5) (ceased 2009)
35. Matter (2.5) (started 2011)
35. Slate (2.5)
35. Weird Tales (2.5) (off and on throughout period)
40. Aeon Speculative Fiction (2) (ceased 2008)
40. Futurismic (2) (ceased 2010)
42. Abyss & Apex (1.5)
42. Beloit Fiction Journal (1.5)
42. Buzzfeed (1.5)
42. Daily Science Fiction (1.5) (started 2010)
42. e-flux journal (1.5) (started 2008)
--------------------------------------------------

Comments:

(1.) The New Yorker, Tin House, McSweeney's, Conjunctions, Harper's, and Beloit Fiction Journal are prominent literary magazines that occasionally publish science fiction or fantasy. Cosmos, Slate, and Buzzfeed are popular magazines that have published a little bit of science fiction on the side. e-flux is a wide-ranging arts journal. The remaining magazines focus on the F/SF genre.

(2.) It's also interesting to consider a three-year window. Here are those results, down to six points:

1. Clarkesworld (66.5)
2. Tor.com (61)
3. Asimov's (59)
4. Lightspeed (49.5)
5. F&SF (37.5)
6. Analog (21)
7. Beneath Ceaseless Skies (20)
8. Uncanny (19)
9. Subterranean (16)
10. Interzone (11.5)
11. Strange Horizons (11)
12. Nightmare (9)

(3.) One important thing left out of these numbers is the rise of good podcast venues such as the Escape Artists' podcasts (Escape Pod, Podcastle, Pseudopod, and Cast of Wonders), Drabblecast, and StarShipSofa. None of these qualify for my list by existing criteria, but podcasts are an increasingly important venue. Some text-based magazines, like Clarkesworld, Lightspeed, and Strange Horizons also regularly podcast their stories.

(4.) Philosophers interested in science fiction might also want to look at Sci Phi Journal, which publishes both science fiction with philosophical discussion notes and philosophical essays about science fiction.

(5.) Other lists: The SFWA qualifying markets list is a list of "pro" science fiction and fantasy venues based on pay rates and track records of strong circulation. Ralan.com is a regularly updated list of markets, divided into categories based on pay rate.

(6.) The "Sad Puppy" kerfuffle threatens to damage the once-sterling reputation of the Hugos, but the Hugos are a small part of my calculation and the results are pretty much the same either way.

[image source; admittedly, it's not the latest issue!]

Wednesday, August 02, 2017

Welcome to the Blogosphere, Nomy Arpaly

One of my favorite living philosophers, Nomy Arpaly, has a new blog, The View from the Owl's Roost!

It's off to a great start, with a fun, insightful post about our excessive confidence in our limited imaginations.

Tuesday, August 01, 2017

Why Was Sci-Fi So Slow to Discover Time Travel? (Guest Post by Henry Shevlin)

guest post by
Henry Shevlin

Time travel is a more or less ubiquitous feature of modern sci-fi. Almost every long-running SF show – Star Trek, Futurama, The X-Files – will have a time travel episode sooner or later, and some, like Doctor Who, use time travel as the main narrative device. The same applies to novels and, of course, to Hollywood – blockbuster SF franchises like the Terminator and Back to the Future employ it, as do quirkier pictures like Midnight in Paris. And of course, there’s no shortage of time travel novels, including old favorites like A Christmas Carol, and perhaps most influentially, HG Wells’s wonderful social sci-fi novella The Time Machine.

I don’t find it particularly surprising that we’re so interested in time travel. We all engage in so-called ‘mental time travel’ (or chronaesthesia) all the time, reviewing past experiences and imagining possible futures, and the psychological capacities involved in doing so are the subject of intense scientific and philosophical interest.

Admittedly, the label “mental time travel” may be a bit misleading here; most of what gets labelled mental time travel is quite different from the SF variant, consisting in episodic recall of the past or projection into the future rather than imagining our present selves thrown back in time. But I think we also do this latter thing quite a lot. To give a commonplace example, we’re all prone to engage in “coulda woulda shoulda” thinking: if only I hadn’t parked the car under that tree branch in a storm, if only I hadn’t forgotten my wedding anniversary, if only I hadn’t fumbled that one interview question. Frequently when we do this, we even elaborate how the present might have been different if we’d just done something a bit differently in the past. This looks a lot like the plots of some famous science fiction stories! Similarly, I’m sure we’ve all pondered what it would be like to experience different historical periods like the American Revolution, the Roman Empire, or the age of dinosaurs (you can even buy a handy t-shirt). More prosaically, I imagine many of us have also reflected on how satisfying it would be to relive some of our earlier life experiences and do things differently the second time round – standing up to high school bullies, or studying harder in high school (again, a staple of light entertainment).

Given the above, I had always assumed that time travel was part of fiction because it was simply part of us. Time travel narratives, in other words, were borrowed from the kind of imaginings we all do all the time. It was with huge surprise, then, that I discovered (while teaching a course on philosophy and science fiction) that time travel doesn’t appear in fiction until the 18th century, in the short novel “Memoirs of the Twentieth Century”. Specifically, this story imagines letters from the future being brought back to 1728. The first story of any kind (as far as I’ve been able to find) that features humans being physically transported back into the past doesn’t come until 1881, in Edward Page Mitchell’s short story “The Clock That Went Backward”.

Maybe this doesn’t seem so surprising – isn’t science fiction in the business of coming up with bizarre, never before seen plot devices? But in fact, it’s pretty rare for genuinely new ideas to show up in science fiction. Long before we had stories about artificial intelligence, we had the tales of Pinocchio and the Golem of Prague. Creatures on other planets? Lucian's True History had beings living on the moon and sun back in the 2nd century AD. For huge spaceships, witness the mind-controlled Vimanas of the Sanskrit epics. And so on. And yet, for all the inventiveness of folklore and mythology, there’s very little in the way of time travel to be found. The best I’ve come up with so far is some time dilation in the stories of Kakudmi in the Ramayanas, and visions of the past in the Book of Enoch. But as far as I can tell, there’s nothing that fits the conventional time travel narratives we’re used to, namely physically travelling to ages past or future, let alone any idea that we might alter history.

What’s going on here? One possibility is that something changed in science or society in the 18th century that paved the way for stories about time travel. But what would that be, and how would it lead to time travel seeming more plausible? For example, if the first time travel literature had accompanied the emergence of general relativity (with all its assorted time related weirdness), then that would offer a satisfying answer. However, Newtonian physics was already in place by the late 17th century, and it’s not clear which of Newton’s principles might pave the way for time travel narratives.

I’m very open to suggestions, but let me throw out one final idea: time travel narratives don’t show up in earlier fiction because they’re weird, unnatural, and counterintuitive. Even weirder than the staples of folklore and mythology, like people being turned into animals. Time travel is just not the kind of thing that naturally occurs to humans to think about at all, and it’s only via a few fateful books in the 18th century and its subsequent canonisation in The Time Machine that it’s become established as a central plot device in science fiction.

But doesn’t that contradict what I said earlier about how we all often naturally think about time travel related scenarios, like changing the past, or witnessing historical events firsthand? Not necessarily. Maybe these kinds of thought patterns are actually inspired by time-travel science fiction. In other words, prior to the emergence of time travel as a trope, maybe people really didn’t daydream about meeting Julius Caesar or going back and changing history. Perhaps the past was seen simply as a closed book, rather than (in the memorable words of L. P. Hartley) just “a foreign country”. That’s not to suggest, of course, that people didn’t experience memories and regrets, but maybe they experienced them a little differently, with the past seeming simply an immutable background to the present.

I’m excited by the idea that a science fiction trope might have birthed a new and widespread form of thinking. Partly that’s because it suggests that science fiction may be more influential than we realize, and partly it’s because, as a philosopher, I’m interested in where patterns of thought come from. However, I’m very happy to be proven wrong in this conjecture – perhaps there are letters from the Middle Ages in which writers engage in precisely this kind of speculation. Or perhaps the emergence of science fiction in the 18th century can be explained in terms of some historical event I’ve missed. Or who knows: maybe there’s an untranslated gnostic manuscript out there where Jesus has a time machine....

[image source]

Thursday, July 27, 2017

How Everyone Might Reasonably Believe They Are Much Better Than Average

In a classic study, Ola Svenson (1981) found that about 80% of U.S. and Swedish college students rated themselves as being both safer and more skilled as drivers than other students in the room answering the same questionnaire. (See also Warner and Aberg 2014.) Similarly, most respondents tend to report being less susceptible to cognitive biases and sexist bias than their peers, as well as more honest and trustworthy -- and so on for a wide variety of positive traits: the "Better-Than-Average Effect".

The standard view is that this is largely irrational. Of course most people can't be justified in thinking that they are better than most people. The average person is just average! (Just between you and me, don't you kind of wish that all those dumb schmoes out there would have a more realistic appreciation of their incompetence and mediocrity? [note 1])

Particularly interesting are explanations of the Better-Than-Average Effect that appeal to people's idiosyncratic standards. What constitutes skillful driving? Person A might think that the best standard of driving skill is getting there quickly and assertively, while still being safe, while Person B might think skillful driving is more a matter of being calm, predictable, and within the law. Each person might then prefer the standard that best reflects their own manner of driving, and in that way justify viewing themselves as above average (e.g., Dunning et al. 1991; Chambers and Windschitl 2004).

In some cases, this seems likely to be just typical self-enhancement bias: Because you want to think well of yourself, in cases where the standards are ambiguous, you choose the standards that make you look good. To change the example: if you want to think of yourself as intelligent and you're good at math, you might choose to think of mathematical skill as central to intelligence, while if you're good at practical know-how in managing people, you might choose to think of intelligence more in terms of social skills.

But in other cases of the Better-Than-Average Effect, the causal story might be much more innocent. There may be no self-flattery or self-enhancement at all, except for the good kind of self-enhancement!

Consider the matter abstractly first. Kevin, Nicholas, and Ana [note 2] all value Trait A. However, as people will, they have different sets of evidence about what is most important to Trait A. Based on this differing evidence, Kevin thinks that Trait A is 70% Property 1, 15% Property 2, and 15% Property 3. Nicholas thinks Trait A is 15% Property 1, 70% Property 2, and 15% Property 3. Ana thinks that Trait A is 15% Property 1, 15% Property 2, and 70% Property 3. In light of these rational conclusions from differing evidence, Kevin, Nicholas, and Ana engage in different self-improvement programs, focused on maximizing, in themselves, Properties 1, 2, and 3 respectively. In this, they succeed. At the end of their training, Kevin has the most Property 1, Nicholas the most Property 2, and Ana the most Property 3. No important new evidence arrives in the meantime that requires them to change their views about what constitutes Trait A.

Now when they are asked which of them has the most of Trait A, all three reasonably conclude that they themselves have the most of Trait A -- all perfectly rationally and with no "self-enhancement" required! All of them can reasonably believe that they are better than average.
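
Here is a tiny numerical sketch of that abstract structure, just to show the arithmetic working out; all of the weights and trait levels below are invented for illustration.

weights = {            # each person's view of what Trait A consists in
    "Kevin":    (0.70, 0.15, 0.15),   # Property 1, Property 2, Property 3
    "Nicholas": (0.15, 0.70, 0.15),
    "Ana":      (0.15, 0.15, 0.70),
}
levels = {             # how much of each property each has after training
    "Kevin":    (0.9, 0.3, 0.3),      # maximized Property 1
    "Nicholas": (0.3, 0.9, 0.3),      # maximized Property 2
    "Ana":      (0.3, 0.3, 0.9),      # maximized Property 3
}

for judge, w in weights.items():
    # Each person rates everyone, themselves included, by their own weighting.
    ratings = {person: sum(wi * li for wi, li in zip(w, lv))
               for person, lv in levels.items()}
    top = max(ratings, key=ratings.get)
    print(judge, "ranks", top, "highest:", ratings)

# Each of the three ranks themselves highest (0.72 versus 0.39 for the others),
# with no self-enhancement anywhere in the calculation.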

Real-life cases won't perfectly match that abstract example, of course, but many skills and traits might show some of that structure. Consider skill as a historian of philosophy. Some people, as a result of their training and experience, might reasonably come to view deep knowledge of the original language of the text as most important, while others might view deep knowledge of the historical context as most important, while others might view deep knowledge of the secondary literature as most important. Of course all three are important and interrelated, but historians reasonably disagree substantially in their comparative weighting of these types of knowledge -- and, I think, not always for self-serving or biased reasons. It's a difficult matter of judgment. Someone committed to the first view might then invest a lot of energy in mastering the details of the language, someone committed to the second view might invest a lot of energy in learning the broader historical context, and someone committed to the third view might invest a lot of energy in mastering a vast secondary literature. Along the way, they might not encounter evidence that requires them to change their visions of what makes for a good historian. Indeed, they might quite reasonably continue to be struck by the interpretative power they are gaining by close examination of language, historical context, or the secondary literature, respectively. Eventually, each of the three might very reasonably regard themselves as a much better historian of philosophy than the other two, without any irrationality, self-flattery, or self-enhancing bias.

I think this might be especially true in ethics. A conservative Christian, for example, might have a very different ethical vision than a liberal atheist. Each might then shape their behavior according to this vision. If both have reasonable ethical starting points, then at the end of the process, each person might reasonably regard themselves as morally better than the other, with no irrational self-enhancing bias. And of course, this generalizes across groups.

I find this to be a very convenient and appealing view of the Better-Than-Average Effect, quite comforting to my self-image. Of course, I would never accept it on those grounds! ;-)

Friday, July 21, 2017

New Journal! The Journal of Science Fiction and Philosophy

This looks very cool:

Call for Papers

General Theme

The Journal of Science Fiction and Philosophy, a peer-reviewed, open access publication, is dedicated to the analysis of philosophical themes present in science fiction stories in all formats, with a view to their use in the discussion, teaching, and narrative modeling of philosophical ideas. It aims at highlighting the role of science fiction as a medium for philosophical reflection.

The Journal is currently accepting papers and paper proposals. Because this is the Journal’s first issue, papers specifically reflecting on the relationship between philosophy and science fiction are especially encouraged, but all areas of philosophy are welcome. Any format of SF story (short story, novel, movie, TV series, interactive) may be addressed.

We welcome papers written with teaching in mind! Have you used an SF story to teach a particular item in your curriculum (e.g., using the movie Gattaca to introduce the ethics of genetic technologies, or The Island of Dr. Moreau to discuss personhood)? Turn that class into a paper!

Yearly Theme

Every year the Journal selects a Yearly Theme. Papers addressing the Yearly Theme are collected in a special section of the Journal. The Yearly Theme for 2017 is All Persons Great and Small: The Notion of Personhood in Science Fiction Stories.

SF stories are in a unique position to help us examine the concept of personhood, by making the human world engage with a bewildering variety of beings with person-like qualities – aliens of bizarre shapes and customs, artificial constructs conflicted about their artificiality, planetary-wide intelligences, collective minds, and the list goes on. Every one of these instances provides the opportunity to reflect on specific aspects of the notion of personhood, such as, for example: What is a person? What are its defining qualities? What is the connection between personhood and morality, identity, rationality, basic (“human?”) rights? What patterns do SF authors identify when describing the oppression of one group of persons by another, and how do they reflect past and present human history?

The Journal accepts papers year-round. The deadline for the first round of reviews, both for its general and yearly theme, is October 1st, 2017.

Contact the Editor at editor.jsfphil@gmail.com with any questions, or visit www.jsfphil.org for more information.

Wednesday, July 19, 2017

Why I Evince No Worry about Super-Spartans

I'm a dispositionalist about belief. To believe that there is beer in the fridge is nothing more or less than to have a particular suite of dispositions. It is to be disposed, ceteris paribus (all else being equal, or normal, or absent countervailing forces), to behave in certain ways, to have certain conscious experiences, and to transition to related mental states. It is to be disposed, ceteris paribus, to go to the fridge if you want a beer, and to say yes if someone asks if there is beer in the fridge; to feel surprise should you open the fridge and find no beer, and to visually imagine your beer-filled fridge when you try to remember the contents of your kitchen; to be ready to infer that your Temperance League grandmother would have been disappointed in you, and to see nothing wrong with plans that will only succeed if there is beer in the fridge. If you have enough dispositions of this sort, you believe that there is beer in the fridge. There's nothing more to believing than that. (Probably some sort of brain is required, but that's implementational detail.)

To some people, this sounds uncomfortably close to logical behaviorism, a view according to which all mental states can be analyzed in terms of behavioral dispositions. On such a view, to be in pain, for example, just is, logically or metaphysically, to be disposed to wince, groan, avoid the stimulus, and say things like "I'm in pain". There's nothing more to pain than that.

It is unclear whether any well-known philosopher was a logical behaviorist in this sense. (Gilbert Ryle, the most cited example, was clearly not a logical behaviorist. In fact, the concluding section of his seminal book The Concept of Mind is a critique of behaviorism.)

Part of the semi-mythical history of philosophy of mind is that in the bad old days of the 1940s and 1950s, some philosophers were logical behaviorists of this sort; and that logical behaviorism was abandoned due to several fatal objections that were advanced in the 1950s and 1960s, including one objection by Hilary Putnam that turned on the idea of super-spartans. Some people have suggested that 21st-century dispositionalism about belief is subject to the same concerns.

Putnam asks us to "engage in a little science fiction":

Imagine a community of 'super-spartans' or 'super-stoics' -- a community in which the adults have the ability to successfully suppress all involuntary pain behavior. They may, on occasion, admit that they feel pain, but always in pleasant well-modulated voices -- even if they are undergoing the agonies of the damned. They do not wince, scream, flinch, sob, grit their teeth, clench their fists, exhibit beads of sweat, or otherwise act like people in pain or people suppressing their unconditioned responses associated with pain. However, they do feel pain, and they dislike it (just as we do) ("Brains and Behavior", 1965, p. 9).

Here is some archival footage I have discovered:

A couple of pages later, Putnam expands the thought experiment:

[L]et us undertake the task of trying to imagine a world in which there are not even pain reports. I will call this world the 'X-world'. In the X-world we have to deal with 'super-super-spartans'. These have been super-spartans for so long, that they have begun to suppress even talk of pain. Of course, each individual X-worlder may have his private way of thinking about pain.... He may think to himself: 'This pain is intolerable. If it goes on one minute longer I shall scream. Oh No! I mustn't do that! That would disgrace my whole family...' But X-worlders do not even admit to having pains (p. 11).

Putnam concludes:

"If this last fantasy is not, in some disguised way, self-contradictory, then logical behaviourism is simply a mistake.... From the statement 'X has a pain' by itself no behavioral statement follows -- not even a behavioural statement with a 'normally' or 'probably' in it. (p. 11)

Putnam's basic idea is pretty simple: If you're a good enough actor, you can behave as though you lack mental state X even if you have mental state X, and therefore any analysis of mental state X that posits a necessary connection between mentality and behavior is doomed.

Now I don't think this objection should have particularly worried any logical behaviorists (if any existed), much less actual philosophers sometimes falsely called behaviorists such as Ryle, and still less 21st-century dispositionalists like me. Its influence, I suspect, has more to do with how it conveniently disposes of what was, even in 1965, only a straw man.

We can see the flaw in the argument by considering parallel cases of other types of properties for which a dispositional analysis is highly plausible, and noting how it seems to apply equally well to them. Consider solubility in water. To say of an object that it is soluble in water is to say that it is apt to dissolve when immersed in water. Being water-soluble is a dispositional property, if anything is.

Imagine now a planet in which there is only one small patch of water. The inhabitants of that planet -- call it PureWater -- guard that patch jealously with the aim of keeping it pure. Toward this end, they have invented technologies so that normally soluble objects like sugar cubes will not dissolve when immersed in the water. Some of these technologies are moderately low-tech membranes which automatically enclose objects as soon as they are immersed; others are higher-tech nano-processes, implemented by beams of radiation, that ensure that stray molecules departing from a soluble object are immediately knocked back to their original location. If Putnam's super-spartans objection is correct, then by parity of reasoning the hypothetical possibility of the planet PureWater would show that no dispositional analysis of solubility could be correct, even here on Earth. But that's the wrong conclusion.

The problem with Putnam's argument is that, as any good dispositionalist will admit, dispositions only manifest ceteris paribus -- that is, under normal conditions, absent countervailing forces. (This has been especially clear since Nancy Cartwright's influential 1983 book on the centrality of ceteris paribus conditions to scientific generalizations, but Ryle knew it too.) Putnam quickly mentions "a behavioural statement with a 'normally' or 'probably' in it", but he does not give the matter sufficient attention. Super-super-spartans' intense desire not to reveal pain is a countervailing force, a defeater of the normality condition, like the technological efforts of the scientists of PureWater. To use hypothetical super-super-spartans against a dispositional approach to pain is like saying that water-solubility isn't a dispositional property because there's a possible planet where soluble objects reliably fail to dissolve when immersed in water.

Most generalizations admit of exceptions. Nerds wear glasses. Dogs have four legs. Extraverts like parties. Dropped objects accelerate at 9.8 m/sec^2. Predators eat prey. Dispositional generalizations are no different. This does not hinder their use in defining mental states, even if we imagine exceptional cases where the property is present but something dependably interferes with its manifesting in the standard way.

Of course, if some of the relevant dispositions are dispositions to have certain types of related conscious experiences (e.g., inner speech) and to transition to related mental states (e.g., in jumping to related conclusions), as both Ryle and I think, then the super-spartan objection is even less apt, because super-super-spartans do, by hypothesis, have those dispositions. They manifest such internal dispositions when appropriate, and if they fail to manifest their pain in outward behavior that's because manifestation is prevented by an opposing force.

(PS: Just to be clear, I don't myself accept a dispositional account of pain, only of belief and other attitudes.)

Thursday, July 13, 2017

THE TURING MACHINES OF BABEL

[first published in Apex Magazine, July 2017]

In most respects, the universe (which some call the Library) is everywhere the same, and we at the summit are like the rest of you below.  Like you, we dwell in a string of hexagonal library chambers connected by hallways that run infinitely east and west.  Like you, we revere the indecipherable books that fill each chamber wall, ceiling to floor.  Like you, we wander the connecting hallways, gathering fruits and lettuces from the north wall, then cast our rinds and waste down the consuming vine holes.  Also like you, we sometimes turn our backs to the vines and gaze south through the indestructible glass toward sun and void, considering the nature of the world.  Our finite lives, guided by our finite imaginations, repeat infinitely east, west, and down.
But unlike you, we at the summit can watch the rabbits.
The rabbits!  Without knowing the rabbits, how could one hope to understand the world?

#

The rabbit had entered my family's chamber casually, on a crooked, sniffing path.  We stood back, stopping mid-sentence to stare, as it hopped to a bookcase.  My brother ran to inform the nearest chambers, then swiftly returned.  Word spread, and soon most of the several hundred people who lived within a hundred chambers of us had come to witness the visitation -- Master Gardener Ferdinand in his long green gown, Divine Chanter Guinart with his quirky smile.  Why hadn't our neighbors above warned us that a rabbit was coming?  Had they wished to watch the rabbit, and lift it, and stroke its fur, in selfish solitude?
The rabbit grabbed the lowest bookshelf with its pink fingers and pulled itself up one shelf at a time to the fifth or sixth level; then it scooted sideways, sniffing along the chosen shelf, fingers gripping the shelf-rim, hind feet down upon the shelf below.  Finding the book it sought, it hooked one finger under the book's spine and let it fall.
The rabbit jumped lightly down, then nudged the book across the floor with its nose until it reached the reading chair in the middle of the room.  It was of course taboo for anyone to touch the reading chair or the small round reading table, except under the guidance of a chanter.  Chanter Guinart pressed his palms together and began a quiet song -- the same incomprehensible chant he had taught us all as children, a phonetic interpretation of the symbols in our sacred books.
The rabbit lifted the book with its fingers to the seat of the chair, then paused to release some waste gas that smelled of fruit and lettuce.  It hopped up onto the chair, lifted the book from chair to reading table, and hopped onto the table.  Its off-white fur brightened as it crossed into the eternal sunbeam that angled through the small southern window.  Beneath the chant, I heard the barefoot sound of people clustering behind me, their breath and quick whispers.
The rabbit centered the book in the sunbeam.  It opened the book and ran its nose sequentially along the pages.  When it reached maybe the 80th page, it erased one letter with the pink side of its tongue, and then with the black side of its tongue it wrote a new letter in its place.
Its task evidently completed, the rabbit nosed the book off the table, letting it fall roughly to the floor.  The rabbit leaped down to chair then floor, then smoothed and licked and patiently cleaned the book with tongue and fingers and fur.  Neighbors continued to gather, clogging room and doorways and both halls.  When the book-grooming was complete, the rabbit raised the book one shelf at a time with nose and fingers, returning it to its proper spot.  It leaped down again and hopped toward the east door.  People stepped aside to give it a clear path.  The rabbit exited our chamber and began to eat lettuces in the hall.
With firm voice, my father broke the general hush: "Children, you may gently pet the rabbit.  One child at a time."  He looked at me, but I no longer considered myself a child.  I waited for the neighbor children to have their fill of touching.  We lived about a hundred thousand levels from the summit, but even so impossibly near the top of our infinite world, one might reach old age only ever having seen a couple of dozen visitations.  By the time the last child left, the rabbit had long since finished eating.
The rabbit hopped toward where I sat, about twenty paces down the hall, near the spiral glass stairs.  I intercepted it, lifting it up and gazing into its eyes.
It gazed silently back, revealing no secrets.

[continued here]

[author interview]

-----------------------------------------

Related:

What Is the Likelihood That Your Mind Is Constituted by a Rabbit Reading and Writing on Long Strips of Turing Tape? (Jul 5, 2017)

Nietzsche's Eternal Recurrence, Scrambled Sideways (Oct 31, 2012)

Wednesday, July 05, 2017

What's the Likelihood That Your Mind Is Constituted by a Rabbit Reading and Writing on Long Strips of Turing Tape?

Your first guess is probably not very likely.

But consider this argument:

(1) A computationalist-functionalist philosophy of mind is correct. That is, mentality is just a matter of transitioning between computationally definable states in response to environmental inputs, in a way that hypothetically could be implemented by a computer.

(2) As Alan Turing famously showed, it's possible to implement any finitely computable function on a strip of tape containing alphanumeric characters, given a read-write head that implements simple rules for writing and erasing characters and moving itself back and forth along the tape. (A toy code illustration of this appears just after the premises.)

(3) Given 1 and 2, one way to implement a mind is by means of a rabbit reading and writing characters on a long strip of tape that is properly responsive, in an organized way, to its environment. (The rabbit will need to adhere to simple rules and may need to live a very long time, so it won't be exactly a normal rabbit. Environmental influence could be implemented by alteration of the characters on segments of the tape.)

(4) The universe is infinite.

(5) Given 3 and 4, the cardinality of "normally" implemented minds is the same as the cardinality of minds implemented by rabbits reading and writing on Turing tape. (Given that such Turing-rabbit minds are finitely probable, we can create a one-to-one mapping or bijection between Turing-rabbit minds and normally implemented minds, for example by starting at an arbitrary point in space and then pairing the closest normal mind with the closest Turing-rabbit mind, then pairing the second-closest of each, then pairing the third-closest....)

(6) Given 5, you cannot justifiably assume that most minds in the universe are "normal" minds rather than Turing-rabbit implemented minds. (This might seem unintuitive, but comparing infinities often yields such unintuitive results. ETA: One way out of this would be to look at the ratios in limits of sequences. But then we need to figure out a non-question-begging way to construct those sequences. See the helpful comments by Eric Steinhart on my public FB feed.)

(7) Given 6, you cannot justifiably assume that you yourself are very likely to be a normal mind rather than a Turing-rabbit mind. (If 1-3 are true, Turing-rabbit minds can be perfectly similar to normally implemented minds.)
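
As a footnote to premise 2, here is a minimal Turing machine sketch in Python: a finite rule table plus a read-write head moving along a tape. The rule table below, a toy binary-increment machine, is my own invented example; in premise 3, the rabbit would in effect play the part of the read-write head.

# A minimal Turing machine simulator: a finite rule table plus a read/write
# head moving over a tape. The example rules increment a binary number.
def run_turing_machine(tape, rules, state="start", head=0, blank="_"):
    tape = list(tape)
    while state != "halt":
        # Extend the tape with blanks if the head walks off either end.
        if head < 0:
            tape.insert(0, blank)
            head = 0
        if head >= len(tape):
            tape.append(blank)
        symbol = tape[head]
        state, new_symbol, move = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Toy rule table: move to the rightmost digit, then carry.
rules = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "L"),
    ("carry", "_"): ("halt", "1", "L"),
}

print(run_turing_machine("1011", rules))  # -> "1100"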

I explore this possibility in "THE TURING MACHINES OF BABEL", a story in this month's issue of Apex Magazine. I'll link to the story once it's available online, but also consider supporting Apex by purchasing the issue now.

The conclusion is of course "crazy" in my technical sense of the term: It's highly contrary to common sense and we aren't epistemically compelled to believe it.

Among the ways out: You could reject the computational-functional theory of mind, or you could reject the infinitude of the universe (though these are both fairly common positions in philosophy and cosmology these days). Or you could reject my hypothesized rabbit implementation (maybe slowness is a problem even with perfect computational similarity). Or you could hold a view which allows a low ratio of Turing rabbits to normal minds despite the infinitude of both. Or you could insist that we (?) normally implemented minds have some epistemic access to our normality even if Turing-rabbit minds are perfectly similar and no less abundant. But none of those moves is entirely cost-free, philosophically.

Notice that this argument, though skeptical in a way, does not include any prima facie highly unlikely claims among its premises (such as that aliens envatted your brain last night or that there is a demon bent upon deceiving you). The premises are contentious, and there are various ways to resist my combination of them to draw the conclusion, but I hope that each element and move, considered individually, is broadly plausible on a fairly standard 21st-century academic worldview.

The basic idea is this: If minds can be implemented in strange ways, and if the universe is infinite, then there will be infinitely many strangely implemented minds alongside infinitely many "normally" implemented minds; and given standard rules for comparing infinities, it seems likely that these infinities will be of the same cardinality. In an infinite universe that contains infinitely many strangely implemented minds, it's unclear how you could know you are not among the strange ones.

Monday, June 26, 2017

Icelandic Thoughts

Here's where I'm sitting this minute: next to a creek on a steep flowery hill, overlooking the town of Neskaupstathur, Iceland, and its fjord, with snowy peaks and waterfalls in the distance.

[The creek and flowers are visible on the middle left, the snow as barely visible white flecks on the mountains across the bay, the buildings of the town as white smears by the water. As usual, an amateur photo hardly captures the immersive scene.]

I try to write at least one substantive post a week, even while traveling, but I'm finding it hard here -- partly because of the demands of travel, but also partly because my thoughts aren't very bloggish. My mind does often wander to philosophy, psychology, and speculative fiction while hiking (I'm considering a fairy story), but the thoughts seem softer and larger than my usual blogging style. The thoughts that come to me tend to be vague, drifting, uncertain thoughts about value and a meaningful life. I could imagine not needing to do academic philosophy again, if a different environment, like this one, brought different thoughts and values out of me.

Sitting by this creek in Iceland (and expecting internet connectivity!), is that a terrible wasteful indulgence in a world with so much poverty and need? Or is it a fine thing that I can reasonably let the world give me?

Tuesday, June 20, 2017

The Dauphin's Metaphysics, read by Tatiana Grey at Podcastle

My alternative-history story about love and low-tech body switching through hypnosis has just been released in audio at PodCastle. Terrific reading by Tatiana Grey!

PodCastle 475: The Dauphin's Metaphysics

This has been my best-received story so far, recommended by Locus Online, translated into Chinese and Hungarian for leading SF magazines in those languages, and assigned as required reading in at least two philosophy classes in the US.

The setting is Beijing circa 1700, post-European invasion and collapse, resulting in a mashup of European and Chinese institutions. Dauphin Jisun Fei takes a metaphysics class with the Academy's star woman professor and conceives a plan for radical life extension.

Story originally published in Unlikely Story, fall 2015.

Thursday, June 15, 2017

On Not Distinguishing Too Finely Among One's Motivations

I'm working through Daniel Batson's latest book, What's Wrong with Morality?

Batson distinguishes between four different types of motives for seemingly moral behavior, each with a different type of ultimate goal. Batson's taxonomy is helpful -- but I want to push back against distinguishing as finely as he does among people's motives for doing good.

Suppose I offer a visiting speaker a ride to the airport. That seems like a nice thing to do. According to Batson, I might have one (or more) of the following types of motivation:

(1.) I might be egoistically motivated -- acting in my own perceived self-interest. Maybe the speaker is the editor of a prestigious journal and I think I'll have a better shot publishing and advancing my career if the speaker thinks well of me.

(2.) I might be altruistically motivated -- aiming primarily to benefit the speaker herself. I just want her to have a good visit, a good experience at UC Riverside, and giving her a ride is a way of advancing that goal I have.

(3.) I might be collectivistically motivated -- aiming primarily to benefit a group. I want UC Riverside's Philosophy Department to flourish, and giving the speaker a ride is a way of advancing that thing I care about.

(4.) I might be motivated by principle -- acting according to a moral standard, principle, or ideal. Maybe I think driving the speaker to the airport will maximize global utility, or that it is ethically required given my social role and past promises.

Batson characterizes his view of motivation as "Galilean" -- focused on the underlying forces that drive behavior (p. 25-26). The idea seems to be that when I make that offer to the visiting speaker, that action must have been induced by some particular motivational force inside me that is egoistic, altruistic, collectivist, or principled, or some specific combination of those. On this view, we don't understand why I am offering the ride until we know which of these interior forces is the one that caused me to offer the ride. Principled morality is rare, Batson argues, because it requires being caused to act by the fourth type of motivation, and people are more normally driven by the first three.

I'm nervous about appeals to internal causes of this sort. My best guess is that these sorts of simple, familiar folk (or quasi-folk) categories don't map neatly onto the real causal processes generating our behavior, which are likely to be much more complicated, and also misaligned with categories that come naturally to us. (Compare connectionist structures and deep learning.)

Rather than try to articulate an alternative positive account, which would be too much to add to this post, let me just suggest the following. It's plausible that our motivations are often a tangled mess, and when they are a tangled mess, attempting to distinguish finely among them is usually a mistake.

For example, there are probably hypothetical conditions under which I would decline to drive the speaker because it conflicted with my self-interest, and there are probably other hypothetical conditions under which I would set aside my self-interest and choose to drive the speaker anyway. I doubt these hypothetical conditions line up neatly, so that I decline to drive the speaker if and only if it would require sacrificing X amount or more of self-interest. Some situations might just channel me into driving her, even at substantial personal cost, while others might more easily invite the temptation to wiggle out.

The same is likely true for the other motivations. Hypothetically, if the situation were different so that it was less in the collective interest of the department, or less in the speaker's interest, or less compelled by my favorite moral principles, I might drive or not drive the speaker depending partly on each of these but also partly on other factors of situation and internal psychology, habits, scripts, potential embarrassment -- probably in no tidy pattern.

Furthermore, egoistic, altruistic, collectivist, and principled aims come in many varieties, difficult to disentangle. I might be egoistically invested in the collective flourishing of the department as a way of enhancing my own stature in the profession. I might be drawn to different, conflicting moral principles. I might altruistically desire both that the speaker get to her flight on time and that she enjoy the company of the cleverest conversationalist in the department (me!). I might enjoy showing off the sights of the L.A. basin through the windows of my car, with a feeling of civic pride. Etc.

Among all of these possible motivations -- indefinitely many possible motivations, perhaps, if we decide to slice finely among them -- does it make sense to try to determine which one or few are the real motivations that are genuinely causally responsible for my choosing to drive the speaker?

Now if my actual and hypothetical choices were all neatly aligned with my perceived self-interest, then of course self-interest would be my real motive. Similarly, if my pattern of actual and hypothetical choices were all neatly aligned with one particular moral principle, then we could say I was mainly moved by that principle. But if my patterns of choice are not so neatly explained, if my choices arise from a tangle of factors far more complex than Batson's four, then each of Batson's factors is only a simplified label for a pattern that I don't very closely match, rather than a deep Galilean cause of my choice.

The four factors might, then, not compete with each other as starkly as Batson seems to suppose. Each of them might, to a first approximation, capture my motivation reasonably well, in those fortunate cases where self-interest, other-interest, collective interest, and moral principle all tend to align. I have lots of reasons for driving the speaker! This might be so even if, in hypothetical cases, I diverge from the predicted patterns, probably in different and complex ways. My motivations might be described, with approximately equal accuracy, as egoistic, altruistic, collectivist, and principled, when these four factors tend to align across the relevant range of situations -- not because each type of motivation contributes equal causal juice to my behavior but rather because each attribution captures well enough the pattern of choices I would make in the types of cases we care about.

Wednesday, June 07, 2017

Academic Pyramids, Academic Tubes

Greetings from Cambridge! Traveling around Europe and the UK, I am struck by the extent to which different countries have relatively pyramid-like vs relatively tube-like academic systems. This has moved me to think, also, about the extent to which US academia has recently been becoming more pyramidal.

Please forgive my ugly sketch of a pyramid and a tube:

The German system is quite pyramidal: There is a small group of professors at the top, and many stages between undergraduate and professor, at any one of which you might suddenly find yourself ejected from the system: undergraduate, then masters, then PhD, then one or more postdocs and/or assistantships before moving up or out; and at each stage one needs to actively seek a position and typically move locations if successful.

In contrast, the US system, as it stood about twenty years ago, was more tubular: fewer transition stages requiring application and moving, with much sharper cutdowns between each stage. To a first approximation, undergraduates applied to PhD programs, very few got in, and then, for those who finished, there was one more transition from completing the PhD to gaining a tenure-track job (and typically, though of course not always, tenure after 6-7 years on the tenure track).

Philosophy in the US is becoming more pyramidal, I believe, with more people pursuing terminal Master's degrees before applying to PhD programs, and with the increasing number of adjunct positions and postdoctoral positions for newly-minted PhDs. Instead of approximately three phases (undergrad, grad/PhD, tenure-track/tenured professor), we are moving closer to a five-phase system (undergrad, MA, PhD, adjunct/post-doc, tenure-track/tenured).

This more pyramidal system has some important advantages. One advantage is that it provides more opportunities for people from nonelite backgrounds to advance through the system. It has always been difficult for students from nonelite undergraduate universities to gain acceptance to elite PhD programs (and it still is); similarly for students who struggled a bit in their undergraduate careers before finding philosophy. With the increasing willingness of PhD programs to accept students with Master's degrees, a broader range of students can earn a shot at academia: They can compete to get into a Master's program (typically easier to do for people with nonelite backgrounds than being admitted to a comparably-ranked PhD program) and then possibly shine there, gaining admittance to a range of PhD programs that would otherwise have been closed to them. A similar pattern sometimes occurs with postdocs.

The other advantage of the pyramid is that exposure to a variety of institutions, advisors, and academic subcultures provides a broader range of perspectives and introduces you to more people in the academic community. A Master's program or a postdoctoral fellowship can be a rewarding experience.

But I am also struck by the downside of pyramidal structures. In Europe, I met many excellent philosophers in their 30s or 40s, post-PhD, unsure whether they would make the next jump up the pyramid or not, unable to settle down securely into their careers. This used to be relatively uncommon in the US, though it has become more common. It is hard on marriages and families; and it's hard to face the prospects of a major career change in mid-life after devoting a dozen or more years to academia.

The sciences in the US have tended to be more pyramidal than philosophy, with one or more postdocs often expected before the tenure-track job. This is partly, I suspect, just due to the money available in science. There are lots of post-docs to be had, and it's easier to compete for professor positions with that extra postdoctoral experience. One possibly unintended consequence of the increased flow of money into philosophical research projects, through the Templeton Foundation and government research funding organizations, is to increase the number of postdocs, and thus the pyramidality of the discipline.

Of course, the rise of inexpensive adjunct labor is a big part of this -- bigger, probably, than the rise of terminal Master's programs as a gateway to the PhD and the rise of the philosophy post-doc -- but all of these contribute in different ways to making our discipline more pyramidal than it was a few decades ago.

Thursday, June 01, 2017

The Social-Role Defense of Robot Rights

Daniel Estrada's Made of Robots has launched a Hangout on Air series in philosophy of technology. The first episode is terrific!

Robot rights cheap yo.

Cheap: Estrada's argument for robot rights doesn't require that robots have any conscious experiences, any feelings, any reinforcement learning, or (maybe) any cognitive processing at all. Most other defenses of the moral status of robots assume, implicitly or explicitly, that robots who are proper targets of moral concern will exist only in the future, once they have cognitive features similar to humans or at least similar to non-human vertebrate animals.

In contrast, Estrada argues that robots already deserve rights -- actual robots that currently exist, even simple robots.

His core argument is this:

1. Some robots are already "social participants" deeply incorporated into our social order.

2. Such deeply incorporated social participants deserve social respect and substantial protections -- "rights" -- regardless of whether they are capable of interior mental states like joy and suffering.

Let's start with some comparison cases. Estrada mentions corpses and teddy bears. We normally treat corpses with a certain type of respect, even though we think they themselves aren't capable of states like joy and suffering. And there's something that seems at least a little creepy about abusing a teddy bear, even though it can't feel pain.

You could explain these reactions without thinking that corpses and teddy bears deserve rights. Maybe it's the person who existed in the past, whose corpse is now here, who has rights not to be mishandled after death. Or maybe the corpse's relatives and friends have the rights. Maybe what's creepy about abusing a teddy bear is what it says about the abuser, or maybe abusing a teddy harms the child whose bear it is.

All that is plausible, but another way of thinking emphasizes the social roles that corpses and teddy bears play and the importance to our social fabric (arguably) of our treating them in certain ways and not in other ways. Other comparisons might be: flags, classrooms, websites, parks, and historic buildings. Destroying or abusing such things is not morally neutral. Arguably, mistreating flags, classrooms, websites, parks, or historic buildings is a harm to society -- a harm that does not reduce to the harm of one or a few specific property owners who bear the relevant rights.

Arguably, the destruction of hitchBOT was like that. HitchBOT was a cute ride-hitching robot, who made it across the length of Canada but who was destroyed by pranksters in Philadelphia when its creators sent it to repeat the feat in the U.S. Its destruction not only harmed its creators and owners, but also the social networks of hitchBOT enthusiasts who were following it and cheering it on.

It might seem overblown to say that a flag or a historic building deserves rights, even if it's true that flags and historic buildings in some sense deserve respect. If this is all there is to "robot rights", then we have a very thin notion of rights. Estrada isn't entirely explicit about it, but I think he wants more than that.

Here's the thing that makes the robot case different: Unlike flags, buildings, teddy bears, and the rest, robots can act. I don't mean anything too fancy here by "act". Maybe all I mean or need to mean is that it's reasonable to take the "intentional stance" toward them. It's reasonable to treat them as though they had beliefs, desires, intentions, goals -- and that adds a new richer dimension, maybe different in kind, to their role as nodes in our social network.

Maybe that new dimension is enough to warrant using the term "rights". Or maybe not. I'm inclined to think that whatever rights existing (non-conscious, not cognitively sophisticated) robots deserve remain derivative on us -- like the "rights" of flags and historic buildings. Unlike human beings and apes, such robots have no intrinsic moral status, independent of their role in our social practices. To conclude otherwise would require more argument or a different argument than Estrada gives.

Robot rights cheap! That's good. I like cheap. Discount knock-off rights! If you want luxury rights, though, you'll have to look somewhere else (for now).

[image source] Update: I changed "have rights" to "deserve rights" in a few places above.

Thursday, May 25, 2017

Lynching, the Milgram Experiments, and the Question of Whether "Human Nature Is Good"

At The Deviant Philosopher, Wayne Riggs, Amy Olberding, Kelly Epley, and Seth Robertson are collecting suggestions for teaching units, exercises, and primers that incorporate philosophical approaches and philosophers not currently well represented in the formal institutional structures of the discipline. The idea is to help philosophers who want suggestions for diversifying their curriculum. It looks like a useful resource!

I contributed the following to their site, and I hope that others who are interested in diversifying the philosophical curriculum will also contribute something to their project.

Lynching, the Milgram Experiments, and the Question of Whether "Human Nature Is Good"

Primary Texts

  • Allen, James, Hilton Als, John Lewis, and Leon F. Litwack (2000). Without sanctuary: Lynching photography in America. Santa Fe: Twin Palms. Pp. 8-16, 173-176, 178-180, 184-185, 187-190, 194-196, 198, 201 (text only), and plates #20, 25, 31, 37-38, 54, 57, 62-65, 74, and 97.
  • Wells-Barnett, Ida B. (1892/2002). On lynchings. Ed. P.H. Collins. Amherst, NY: Humanity. Pp. 42-46.
  • Mengzi (3rd c. BCE/2008). Mengzi. Trans. B.W. Van Norden. Indianapolis: Hackett. 1A7, 1B5, 1B11, 2A2 (pp. 35-41 only), 2A6, 2B9, 3A5, 4B12, 6A1 through 6A15, 6B1, 7A7, 7A15, 7A21, 7B24, 7B31.
  • Rousseau, Jean-Jacques (1755/1995). Discourse on the origin of inequality. Trans. F. Philip. Ed. P. Coleman. Oxford: Oxford. Pp. 45-48.
  • Xunzi (3rd c. BCE/2014). Xunzi: The complete text. Trans. E. Hutton. Princeton, NJ: Princeton. Pp. 1-8, 248-257.
  • Hobbes, Thomas (1651/1996). Leviathan. Ed. R. Tuck. Cambridge: Cambridge. Pp. 86-90.
  • Doris, John M. (2002). Lack of character. Cambridge: Cambridge. Pp. 28-61.
  • The Milgram video on Obedience to Authority.
Secondary Texts for Instructor
  • Dray, Philip (2002). At the hands of persons unknown. New York: Modern Library.
  • Ivanhoe, Philip J. (2000). Confucian moral self cultivation, 2nd ed. Indianapolis: Hackett. 
  • Schwitzgebel, Eric (2007). Human nature and moral education in Mencius, Xunzi, Hobbes, and Rousseau. History of Philosophy Quarterly, 24, 147-168.
Suggested Courses
  • Introduction to Ethics
  • Ethics
  • Introduction to Philosophy
  • Evil
  • Philosophy of Psychology
  • Political Philosophy
Overview

This is a two-week unit. Day one is on the history of lynching in the United States, featuring lynching photography and Ida B. Wells. Day two is Mengzi on human nature (with Rousseau as secondary reading). Day three is Xunzi on human nature (with Hobbes as secondary reading). Days four and five are the Milgram video and John Doris on situationism.

The central question concerns the psychology of lynching perpetrators and Milgram participants. On a “human nature is good” view, we all have some natural sympathies or an innate moral compass that would be revolted by our participation in such activities, if we were not somehow swept along by bad influences (Mengzi, Rousseau). On a “human nature is bad” view, our natural inclinations are mostly self-serving and morality is an artificial human construction; so if one’s culture says “this is the thing to do,” there is no inner source of resistance unless one has already been properly trained (Xunzi, Hobbes). Situationism (which is not inconsistent with either of these alternatives) suggests that most people can commit great evil or good depending on what seem to be fairly moderate situational pressures (Doris, Milgram).

Students should be alerted in advance about the possibly upsetting photographs and encouraged to look closely at the faces of the perpetrators rather than focusing too much on the bodies of the victims (which may be edited out, if desired, for classroom presentation). You might even consider offering alternative readings to students who find the lynching material too difficult (such as an uplifting chapter from Colby & Damon 1992).

On Day One, a point of emphasis should be that most of the victims were not even accused of capital crimes, and focus can be both on the history of lynching in general and on the emotional reactions of the perpetrators as revealed by their behavior described in the texts and by their faces in the photos.

On Day Two, the main emphasis should be on Mengzi’s view that human nature is good. King Xuan and the ox (1A7), the child at the well (2A6), and the beggar refusing food insultingly given (6A10) are the most vivid examples. The metaphor of cultivating sprouts is well worth extended attention (as discussed in the Ivanhoe and Schwitzgebel readings for the instructor). If the lynchers had paused to reflect in the right way, would they have found in themselves a natural revulsion against what they were doing, as Mengzi would predict? Rousseau’s view is similar (especially as developed in Emile) but puts more emphasis on the capacity of philosophical thinking to produce rationalizations of bad behavior.

On Day Three, the main emphasis should be on Xunzi’s view that human nature is bad. His metaphor of straightening a board is fruitfully contrasted with Mengzi’s of cultivating sprouts. For example, in straightening a board, the shape (the moral structure) is imposed by force from outside. In cultivating a sprout, the shape grows naturally from within, given a supportive, nutritive, non-damaging environment. Students can be invited to consider cartoon versions of “conservative” moral education (“here are the rules, like it or not, follow them or you’ll be punished!”) versus “liberal” moral education (“don’t you feel bad that you hurt Ana’s feelings?”).

On Day Four, you might just show the Milgram video.

On Day Five, the focus should be on articulating situationism vs. dispositionism (or whatever you want to call the view that broad, stable, enduring character traits explain most of our moral behavior). I recommend highlighting the elements of truth in both views, and then showing how there are both situationist and dispositionist elements in both Mengzi and Xunzi (e.g., Mengzi says that young men are mostly cruel in times of famine, but he also recommends cultivating stable dispositions). Students can be encouraged to discuss how well or poorly the three different types of approach explain the lynchings and the Milgram results.

If desired, Day Six and beyond can cover material on the Holocaust. Hannah Arendt’s Eichmann in Jerusalem and Daniel Goldhagen’s Hitler’s Willing Executioners make a good contrast (with Mengzian elements in Arendt and Xunzian elements in Goldhagen). (If you do use Goldhagen, be sure you are aware of the legitimate criticisms of some aspects of his view by Browning and others.)

Discussion Questions
  • What emotions are the lynchers feeling in the photographs?
  • If the lynchers had stopped to reflect on their actions, would they have been able to realize that what they were doing was morally wrong?
  • Mengzi lived in a time of great chaos and evil. Although he thought human nature was good, he never denied that people actually commit great evil. What resources are available in his view to explain actions like those of the lynch mobs, or other types of evil actions?
  • Is morality an artificial cultural invention? Or do we all have natural moral tendencies that only need to be cultivated in a nurturing environment?
  • In elementary school moral education, is it better to focus on enforcing rules that might not initially make sense to the children, or is it better to try to appeal to their sympathies and concerns for other people?
  • How effectively do you think people can predict what they themselves would do in a situation like the Milgram experiment or a lynch mob?
  • Are there people who are morally steadfast enough to resist even strong situational pressures? If so, how do they become like that?
Activities (optional)

On the first day, an in-class assignment might be for students to spend 5-7 minutes writing down their opinion on whether human nature is good or evil (or in between, or alternatively that the question doesn’t even make sense as formulated). They can then trade their written notes with a neighbor or two and compare answers. On the last day, they can review what they wrote on the first day and discuss whether their opinions have changed.
[Greetings from Graz, Austria, by the way!]

Friday, May 19, 2017

Pre-Excuse

I'm heading off to Europe tomorrow for a series of talks and workshops. Nijmegen, Vienna, Graz, Lille, Leuven, Antwerp, Oxford, Cambridge -- whee! Then back to Riverside for a week and off to Iceland with the family to celebrate my son's high school graduation. Whee again! I return to sanity July 5.

I've sketched out a few ideas for blog posts, but nothing polished.

If I descend into incoherence, I have my pre-excuse ready! Jetlag and hotel insomnia.

[image source]

Thursday, May 18, 2017

Hint, Confirm, Remind

You can't say anything only once -- not when you're writing, not if you want the reader to remember. People won't read the words exactly as you intend them, or they will breeze over them; and often your words will admit of more interpretations than you realize, which you rule out by clarifying, angling in, repeating, filling out with examples, adding qualifiers, showing how what you say is different from some other thing it might be mistaken for.

I have long known this about academic writing. Some undergraduates struggle to fill their 1500-word papers because they think that every idea gets one sentence. How do you have eighty ideas?! It becomes much easier to fill the pages -- indeed the challenge shifts from filling the pages to staying concise -- once you recognize that every idea in an academic paper deserves a full academic-sized paragraph. Throw in an intro and conclusion and you've got, what, five ideas in a 1500-word paper? Background, a main point, one elaboration or application, one objection, a response -- done.

It took a while for me to learn that this is also true in writing fiction. You can't just say something once. My first stories were too dense. (They are now either trunked or substantially expanded.) I guess I implicitly figured that you say something, maybe in a clever oblique way, the reader gets it, and you're done with that thing. Who wants boring repetition and didacticism in fiction?

Without being didactically tiresome, there are lots of ways to slow things down so that the reader can relish your idea, your plot turn, your character's emotion or reaction, rather than having the thing over and done in a sentence. You can break it into phases; you can explicitly set it up, then deliver; you can repeat in different words (especially if the phrasings are lovely); you can show different aspects of the scene, relevant sensory detail, inner monologue, other characters' reactions, a symbolic event in the environment.

But one of my favorite techniques is hint, confirm, remind. You can do this in a compact way (as in the example I'm about to give), but writers more commonly spread HCRs throughout the story. Some early detail hints or foreshadows -- gives the reader a basis for guessing. Then later, when you hit it directly, the earlier hint is remembered (or if not, no biggie, not all readers are super careful), and the alert reader will enjoy seeing how the pieces come together. Still later, you remind the reader -- more quickly, like a final little hammer tap (and also so that the least alert readers finally get it).

Neil Gaiman is a master of the art. As I was preparing some thoughts for a fiction-writing workshop for philosophers I'm co-leading next month, I noticed this passage about "imposter syndrome", recently going around. Here's Gaiman:

Some years ago, I was lucky enough to be invited to a gathering of great and good people: artists and scientists, writers and discoverers of things. And I felt that at any moment they would realise that I didn’t qualify to be there, among these people who had really done things.

On my second or third night there, I was standing at the back of the hall, while a musical entertainment happened, and I started talking to a very nice, polite, elderly gentleman about several things, including our shared first name. And then he pointed to the hall of people, and said words to the effect of, "I just look at all these people, and I think, what the heck am I doing here? They’ve made amazing things. I just went where I was sent."

And I said, "Yes. But you were the first man on the moon. I think that counts for something."

And I felt a bit better. Because if Neil Armstrong felt like an imposter, maybe everyone did.

Hint: an elderly gentleman, same first name as Gaiman, famous enough to be backstage among well known artists and scientists. Went where he was sent.

Confirm: "You were the first man on the moon".

Remind: "... if Neil Armstrong..."

The hints set up the puzzle. It's unfolding fast before you, if you're reading at a normal pace. You could slow way down and treat it as a riddle, but few of us would do that.

The confirm gives you the answer. Now it all fits together. Bonus points to Gaiman for making it natural dialogue rather than flat-footed exposition.

The remind here is too soon after the confirm to really be a reminder, as it would be if it appeared a couple of pages later in a longer piece of writing. But the basic structure is the same: The remind hammer-taps the thing that should already be obvious, to make sure the reader really has it -- but quickly, with a light touch.

If you want the reader to remember, you can't just say it only once.

[image source]

Thursday, May 11, 2017

The Sucky and the Awesome

Here are some things that "suck":

  • bad sports teams;
  • bad popular music groups;
  • getting a flat tire, which you try to change in the rain because you're late to catch a plane for that vacation trip you've been planning all year, but the replacement tire is also flat, and you get covered in mud, miss the plane, miss the vacation, and catch a cold;
  • me, at playing Sonic the Hedgehog.
It's tempting to say that all bad things "suck". There probably is a legitimate usage of the term on which you can say of anything bad that it sucks; and yet I'm inclined to think that this broad usage is an extension from a narrower range of cases that are more central to the term's meaning.

Here are some bad things that it doesn't seem quite as natural to describe as sucking:

  • a broken leg (though it might suck to break your leg and be laid up at home in pain);
  • lying about important things (though it might suck to have a boyfriend/girlfriend who regularly lies);
  • inferring not-Q from (i) P implies Q and (ii) not-P (though you might suck at logic problems);
  • the Holocaust.
The most paradigmatic examples of suckiness combine aesthetic failure with failure of skill or functioning. The sports team or the rock band, instead of showing awesome skill and thereby creating an awesome audience experience of musical or athletic splendor, can be counted on to drop the ball, hit the wrong note, make a jaw-droppingly stupid pass, choose a trite chord and tacky lyric. Things that happen to you can suck in a similar way to the way it sucks to be stuck at a truly horrible concert: Instead of having the awesome experience you might have hoped for, you have a lousy experience (getting splashed while trying to fix your tire, then missing your plane). There's a sense of waste, lost opportunity, distaste, displeasure, and things going badly. You're forced to experience one stupid, rotten thing after the next.

Something sucks if (and only if) it should deliver good, worthwhile experiences or results, but it doesn't, instead wasting people's time, effort, and resources in an unpleasant and aesthetically distasteful way.

The opposite of sucking is being awesome. Notice the etymological idea of "awe" in the "awesome": Something is awesome if it does or should produce awe and wonder at its greatness -- its great beauty, its great skill, the way everything fits elegantly together. The most truly sucky of sucky things instead produces wonder at its badness. Wow, how could something be that pointless and awful! It's amazing!

    That "sucking" focuses our attention on the aesthetic and experiential is what makes it sound not quite right to say that the Holocaust sucked. In a sense, of course, the Holocaust did suck. But the phrasing trivializes it -- as though what is most worth comment is not the moral horror and the millions of deaths but rather the unpleasant experiences it produced.

    Similarly for other non-sucky bad things. What's central to their badness isn't aesthetic or experiential. To find nearby things that more paradigmatically suck, you have to shift to the experiential or to a lack of (awesome) skill or functioning.

All of this is very important to understand as a philosopher, of course, because... because...

Well, look. We wouldn't be using the word "sucks" so much if it wasn't important to us whether or not things suck, right? Why is it so important? What does it say about us, that we think so much in terms of what sucks and what is awesome?

Here's a Google Ngram of "that sucks, this sucks, that's awesome". Notice the sharp rise that starts in the mid-1980s and appears to be continuing through the end of the available data.

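If you want to poke at the underlying numbers yourself, here is a minimal Python sketch that queries the Ngram Viewer's unofficial JSON endpoint for the same three phrases. The endpoint is undocumented, so the corpus id and response keys below are assumptions that may need adjusting; treat this as a sketch, not a stable recipe.

    # Minimal sketch: fetch relative frequencies for "that sucks", "this sucks",
    # and "that's awesome" from the Google Books Ngram Viewer.
    # Assumptions: the undocumented JSON endpoint at books.google.com/ngrams/json
    # (it may change or rate-limit), and corpus 15 as the English (2012) corpus.
    import requests

    params = {
        "content": "that sucks,this sucks,that's awesome",
        "year_start": 1960,
        "year_end": 2008,
        "corpus": 15,
        "smoothing": 3,
    }

    resp = requests.get("https://books.google.com/ngrams/json", params=params)
    resp.raise_for_status()

    for series in resp.json():
        # Each entry is assumed to carry the phrase ("ngram") and a list of
        # yearly relative frequencies ("timeseries"); print the last few years.
        recent = series["timeseries"][-5:]
        print(series["ngram"], ["{:.2e}".format(v) for v in recent])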

We seem to be more inclined than ever to divide the world into the sucky and the awesome.

To see the world through the lens of sucking and awesomeness is to evaluate the world as one would evaluate a music video: in terms of its ability to entertain, and generate positive experiences, and wow with its beauty, magnificence, and amazing displays of skill.

It's to think like Beavis and Butthead, or like the characters in the Lego Movie.

That sounds like a superficial perspective on the world, but there's also something glorious about it. It's glorious that we have come so far -- that our lives are so secure that we expect them to be full of positive aesthetic experiences and maestro performances, so that we can dismissively say "that sucks!" when those high expectations aren't met.

--------------------------------------

For a quite different (but still awesome!) analysis of the sucky and the awesome, check out Nick Riggle's essay "How Being Awesome Became the Great Imperative of Our Time".

Many thanks to my Facebook friends and followers for the awesome comments and examples on my public post about this last week.