Wednesday, June 29, 2016

Short Story Competition: Philosophy Through Fiction

[cross-posted, with bracketed comments, from the APA Blog]

We are inviting submissions for the short story competition “Philosophy Through Fiction”, organized by Helen De Cruz (Oxford Brookes University), with editorial board members Eric Schwitzgebel (UC Riverside), Meghan Sullivan (University of Notre Dame), and Mark Silcox (University of Central Oklahoma). The winner of the competition will receive a cash prize of US$500 (funded by the Berry Fund of the APA) and their story will be published in Sci Phi Journal.

Rationale

As philosophers, we frequently tell stories in the form of brief thought experiments. In the past and today, philosophers have also written longer, richer stories. Famous examples include Simone de Beauvoir, Iris Murdoch, and Jean-Paul Sartre. Fiction allows us to explore ideas that cannot be easily dealt with in the format of a journal article or monograph, and helps us to reach a broader audience, as the enduring popularity of philosophical novels shows. The aim of this competition is to encourage philosophers to use fiction to explore philosophical ideas, thereby broadening our scope and toolkit.

Eligibility

Short stories that are eligible for this competition must be some form of speculative fiction (this includes, but is not limited to, science fiction, fantasy, horror, alternative history, or magical realism), and must explore one or more philosophical ideas. These can be implicit; there is no restriction on which philosophical ideas you explore.

The story should be unpublished, which means it should not have appeared in a magazine, edited collection, or other venue. It should also not appear on an author’s personal website or similar online venue, at least from the time of submission until the editorial board’s decision or, if the story is published in Sci Phi Journal, until at least six months after that publication. (This is a common publishing norm in speculative fiction.)

The competition is open to everyone, regardless of geographic location, career stage, age, or specialization. In other words, it is also open to, e.g., (graduate) students and philosophers outside of academia. We encourage philosophers who are new to writing fiction to enter. Submissions should be at least 1,000 words and no longer than 7,500 words.

The submission should be accompanied by a brief “Food for Thought” section (maximum word count: 500, not counted toward the story’s word count), in which the author explains the philosophical ideas behind the piece. Examples of such Food for Thought sections appear at the end of these stories: Unalienable Right by Leenna Naidoo and Immortality Serum by Michaele Jordan. [Evaluation of the quality of the Food for Thought sections will be an important part of the process. Please feel free to write something longer and more substantive than these two examples, up to 500 words.]

Dates

The deadline for this competition is February 1, 2017. The winner will be announced by March 31, 2017. The winning story will appear in the following issue of Sci Phi Journal.

Submission requirements

Please submit your story to philosophythroughfiction@gmail.com. You can use the same e-mail address for queries.

Your story should be anonymized, i.e., it should contain no name or other form of identification. It should have a distinct title (not “philosophy story submission” but, e.g., “The Icy Labyrinth”), and it should be set in a clearly legible font of at least 12 points. The file format should be .doc, .docx, or .rtf. Please use the subject line “submission for short story competition” for your e-mail. Attach the story (the filename should be an abbreviated form of your story title, e.g., “labyrinth.rtf”) to the e-mail. The Food for Thought section should be at the bottom of the same document, under a separate header “Food for Thought”. Please include word counts for both the story and the Food for Thought at the top of the document.

Place your full name, institutional affiliation or home address, and the full title of your story in the body of the e-mail. We cannot accept submissions after the deadline of February 1, 2017.

We are planning to publish an edited volume of invited speculative fiction philosophy stories. Strong pieces entered into the competition may be considered for this volume. If you do not want your submission to be considered for this volume, please state this explicitly in the body of your e-mail. In the absence of such a statement, we will assume you agree to have your story considered for both the competition and the volume.

Review process

All stories will first be vetted for basic quality by a team of readers at Sci Phi Journal. Stories that pass this first stage will be sent in anonymized format to a board of reviewers who will select the winning story. The reviewers will examine how effectively the stories explore philosophical ideas. By entering the competition you agree that their decision is final.

Funding

This story competition is supported by a grant from the American Philosophical Association’s Berry Fund for Public Philosophy, and is hosted at Oxford Brookes University.

*

For inspiration, check out recent discussions on the [APA] blog about reading and writing philosophical fiction.

Monday, June 27, 2016

Susan Schneider on How to Prevent a Zombie Dictatorship

Last week I posted "How to Accidentally Become a Zombie Robot", discussing Susan Schneider's recent TEDx-talk proposal for checking whether silicon chips can be conscious. Susan has written the following reply, which she invited me to share on the blog.

---------------------------------------------

Eric,

Greetings from a café in Lisbon! Your new, seriously cool I,Brain case raises an important point about my original test in my TEDx talk and is right out of a cyberpunk novel. A few initial points, for readers, before I respond, as TED talks don’t give much philosophical detail:

1. It may be that the microchips are made of something besides silicon (right now, e.g., carbon nanotubes and graphene are alternative substrates under development). I don’t think this matters – the issues that arise are the same.

2. It will be important that any chip test involve the very same kind of chip substrate and design as that used in the AI in question.

3. Even if a kind of chip works in humans, there is still the issue of whether the AI in question has the right functional organization for consciousness. Since AI could be very different from us, and since these issues are hard to sort out even for biological creatures like the octopus, this may turn out to be very difficult.

4. For the relation of the chip test to the intriguing ideas of Ned Block and Dave Chalmers on this issue, see a paper on my website (a section in “The Future of Philosophy of Mind”, based on an earlier op-ed of mine).

5. As Eric knows, it is probably a mistake to assume that brain chips will be functional isomorphs. I’m concerned with the development of real, emerging technologies, because I want to find a solution to the problem of AI consciousness based on an actual test. Brain chips, already under development at DARPA, may eventually be faster and more efficient information processors, may enhance human consciousness, or may be low-fidelity copies of what a given minicolumn does. This depends upon how medicine progresses...

Back to my original “Chip Test” (in the TEDx talk). It’s 2045. You are ready to upgrade your aging brain. You go to I,Brain. They can gradually replace parts of your biological brain with microchips. You are awake during the surgery, suppose, and they replace a part of your biological brain that is responsible for some aspect of consciousness with a microchip. Do you lose consciousness of something (e.g., do you lose part of your visual field)? If so, you will probably notice. This would be a sign that the microchip is the wrong stuff. Science could try and try to engineer a better chip, but if, after years of trying, it never could get it right, perhaps we should conclude that that kind of substrate (e.g., silicon) does not give rise to consciousness.

On the other hand, if the chips work, that kind of substrate is in principle the right stuff (it can, in the right mental environment, give rise to qualia), although there is a further issue of whether a particular AI that has such chips has the right organization to be conscious (e.g., maybe it has nothing like a global workspace, like a Rodney Brooks-style robot, or maybe it is superintelligent and has mastered everything already and eliminated consciousness because it is too slow and inefficient).

Eric, your test is different, and I agree that someone should not trust that test. This would involve a systematic deception. What kind of society would do this? A zombie dictatorship, of course, which seeks to secretly eliminate conscious life from the planet. :-)

But I think you want to apply your larger point to the original test. Is the idea: couldn’t a chip be devised that would falsely indicate consciousness to the person? (Let’s call this a “sham qualia chip.”) I think it is, so here’s a reply: God yes, in a dystopian world. We had better watch out! That would be horrible medicine…and luckily, it would involve a good deal of expense and effort (systematically fooling someone about, say, their visual experience would be a major undertaking), so science would likely first seek a genuine chip substitute that preserved consciousness. (Would a sham qualia chip even clear the FDA? :-) Maybe only if microchips were not the right stuff and it was the best science could do. After all, people would always be missing lost visual qualia, and it is best that they not suffer like this....) But crucially, since this would involve a deliberate effort on the part of medical researchers, we would know this, and so we would know that the chip is not a true substitute. Unless, that is, we are inhabitants of a zombie dictatorship.

The upshot: It would involve a lot of extra engineering effort to produce a sham qualia chip, and we would hopefully know that the sham chip was really a device designed to fool us. If this was done because the genuine chip substitute could not be developed, this would probably indicate that chips aren’t the right stuff, or that science needs to go back to the drawing board.

I propose a global ban on sham qualia chips in the interest of preserving democracy.

---------------------------------------------

I (Eric) have some thoughts in response. I'm not sure it would be harder to make a sham qualia chip than a genuine qualia chip. Rather than going into detail on that now, I'll let it brew for a future post. Meanwhile, others' reactions are welcome too!

Thursday, June 23, 2016

How to Accidentally Become a Zombie Robot

Susan Schneider's beautifully clear TEDx talk on the future of robot consciousness has me thinking about the possibility of accidentally turning oneself into a zombie. (I mean "zombie" in the philosopher's sense: a being who outwardly resembles us but who has no stream of conscious experience.)

Suppose that AI continues to rely on silicon chips and that -- as Schneider thinks is possible -- silicon chips just aren't the right kind of material to host consciousness. (I'll weaken these assumptions below.) It's 2045 and you walk into the iBrain store, thinking about having your degenerating biological brain replaced with more durable silicon chips. Lots of people have done it already, and now the internet is full of programmed entities that claim to be happily uploaded people who have left their biological brains behind. Some of these uploaded entities control robotic or partly organic bodies; others exist entirely in virtual environments inside of computers. If Schneider is right that none of these silicon-chip-instantiated beings is actually conscious, then what has really happened is that all of the biological people who "uploaded" have, in effect, committed suicide, and what exist are only non-conscious simulacra of them.

You've read some philosophy. You're worried about exactly that possibility. Maybe that's why you've been so slow to visit the local iBrain store. Fortunately, the iBrain company has discovered a way to upload you temporarily, so you can try it out -- so that you can determine introspectively for yourself whether the uploaded "you" really would be conscious. Federal regulations prohibit running an uploaded iBrain at the same time that the original source person is conscious, but the company can scan your brain non-destructively while you are sedated, run the iBrain for a while, then pause your iBrain and update your biological brain with memories of what you experienced. A trial run!

From the outside, it looks like this: You walk into the iBrain store, you are put to sleep, a virtual you wakes up in a robotic body and says "Yes, I really am conscious! Interesting how this feels!" and then does some jogging and jumping jacks to test out the body. The robotic body then goes to sleep and the biological you wakes up and says, "Yes, I was conscious even in the robot. My philosophical doubts were misplaced. Upload me into iBrain!"

Here's the catch: After you wake, how do you know those memories are accurate memories of having actually been conscious? When the iBrain company tweaks your biological neurons to install the memories of what "you" did in the robotic body, it's hard to see how you could be sure that those memories aren't merely presently conscious seeming-memories of past events that weren't actually consciously experienced at the time they occurred. Maybe the robot "you" really was a zombie, though you don't realize that now.

You might have thought of this possibility in advance, and so you might remain skeptical. But it would take a lot of philosophical fortitude to sustain that skepticism across many "trial runs". If biological you has lots of seeming-memories of consciousness as a machine, and repeatedly notices no big disruptive change when the switch is flipped from iBrain to biological brain, it's going to be hard to resist the impression that you really are conscious as a machine, even if that impression is false -- and thus you might decide to go ahead and do the upload permanently, unintentionally transforming yourself into an experienceless zombie.

But maybe if a silicon-chip brain could really duplicate your cognitive processes well enough to drive a robot that acts just as you would act, then the silicon-chip brain really would have to be conscious? That's a plausible (though disputable) philosophical position. So let's weaken the philosophical and technological assumptions a little. We can still get a skeptical zombie scenario going.

Suppose that the iBrain company tires of all the "trial runs" that buyers foolishly insist on, so the company decides to save money by not actually having the robot bodies do any of those things that the trial-run users think they do. Instead, when you walk in for a trial they sedate you and, based on what they know about your just-scanned biological brain, they predict what you would do if you were "uploaded" into a robotic body. They then give you false memories of having done those things. You never actually do any of those things or have any of those thoughts during the time your biological body is sedated, but there is no way to know that introspectively after waking. It would seem to you that the uploading worked and preserved your consciousness.

There can be less malicious versions of this mistake. Behavior and cognition during the trial might be insufficient for consciousness, or for full consciousness, while memory is nonetheless vivid enough to lead to retrospective attributions of full consciousness.

In her talk, Schneider suggests that we could tell whether silicon chips can really host consciousness by trying them out and then checking whether consciousness disappears when we do so; but I'm not sure this test would work. If nonconscious systems (whether silicon chip or otherwise) can produce both (a.) outwardly plausible behavior, and (b.) false memories of having really experienced consciousness, then we might falsely conclude in retrospect that consciousness is preserved. (This could be so whether we are replacing the whole brain at once or only one subsystem at a time, as long as "outward" means "outside of the subsystem, in terms of its influence on the rest of the brain".) We might then choose to replace conscious systems with nonconscious ones, accidentally transforming ourselves into zombies.

[image source]

----------------------------------------------

Update June 27:

Susan Schneider replies!

----------------------------------------------

Tuesday, June 14, 2016

Possible Architectures of Group Minds: Memory

by Eric Schwitzgebel and Rotem Herrmann

Suppose you have 200 bodies. "You"? Well, maybe not exactly you! Some hypothetical science fictional group intelligence.

How might memory work?

For concreteness, let's assume a broadly Ann Leckie "ancillary" setup: two hundred humanoid bodies on a planet's surface, each with an AI brain remotely connected to a central processor on an orbiting starship.

(For related reflections on the architecture of group perception, see this earlier post.)

Central vs. Distributed Storage

For simplicity, we will start by assuming a storage-and-retrieval representational architecture for memory.

A very centralized memory architecture might have the entire memory store in the orbiting ship, which the humanoid bodies access any time they need to retrieve a memory. A humanoid body, for example, might lean down to inspect a flower which it wants to classify, simultaneously sending a request for taxonomic information to the central unit. In contrast, a very distributed memory architecture might have all of the memory storage distributed in the humanoid bodies, so that if the humanoid doesn't have classification information in its own local brain it will have to send a request around to other humanoids to see if they have that information stored.

A bit of thought suggests that a completely centralized memory architecture probably wouldn't succeed if the humanoid bodies are to have any local computation (as opposed to being merely dumb limbs). Local computation presumably requires some sort of working memory: If the local humanoid is reasoning from P and (P -> Q) to Q, it will presumably have to retain P in some way while it processes (P -> Q). And if the local humanoid is reaching its arm forward to pluck the flower, it will presumably have to remember its intention over the course of the movement if it is to behave coherently.

It's natural, then, to think that there will be at least a short-term store in each local humanoid, where it retains information relevant to its immediate projects, available for fast and flexible access. There needn't be a single short-term store: There could be one or more ultra-fast working memory modules for quick inference and action, and a somewhat slower short-term or medium-term store for contextually relevant information that might or might not prove useful in the tasks that the humanoid expects to confront in the near future.

Conversely, although substantial long-term information, not relevant to immediate tasks, might be stored in each local humanoid, if there is a lot of potential information that the group mind wants to be able to access -- say, snapshots of the entire internet plus recorded high-resolution video feeds from each of its bodies -- it seems that the most efficient solution would be to store that information in the central unit rather than carrying around 200 redundant copies in each humanoid. Alternatively, if the central unit is limited in size, different pieces could be distributed among the humanoids, accessible each to the other upon request.

Procedural memories or skills might also be transferred between long-term and short-term stores, as needed for the particular tasks the humanoids might carry out. Situation-specific skills, for example -- piloting, butterfly catching, Antarean opera singing -- might be stored centrally and downloaded only when necessary, while basic skills such as walking, running, and speaking Galactic Common Tongue might be kept in the humanoid rather than "relearned" or "re-downloaded" for every assignment.

Individual humanoids might also locally acquire skills, or bodily modifications, or body-modifications-blurring-into-skills that are or are not uploaded to the center or shared with other humanoids.
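
To make this concrete, here is a minimal Python sketch -- our own toy illustration, with invented class and method names, not a proposal for a real design -- of one humanoid whose small local short-term store falls back to the central long-term store in orbit whenever it lacks an item, caching what it fetches for near-future use:

class CentralStore:
    """Long-term store on the orbiting ship: bulk knowledge, archives, rarely used skills."""
    def __init__(self, data):
        self.data = data                      # e.g., {"taxonomy/orchid": ..., "skill/opera": ...}
    def fetch(self, key):
        return self.data.get(key)             # None if even the central unit lacks it

class Humanoid:
    """One body: a fast local store, with the slower central store consulted on a miss."""
    def __init__(self, name, central, local_capacity=100):
        self.name = name
        self.central = central
        self.short_term = {}                  # working / contextually relevant items
        self.local_capacity = local_capacity
    def recall(self, key):
        if key in self.short_term:            # fast local hit
            return self.short_term[key]
        value = self.central.fetch(key)       # slower round trip to orbit
        if value is not None:
            self.cache_locally(key, value)    # keep it handy for near-future tasks
        return value
    def cache_locally(self, key, value):
        if len(self.short_term) >= self.local_capacity:
            self.short_term.pop(next(iter(self.short_term)))   # evict the oldest entry
        self.short_term[key] = value

central = CentralStore({"taxonomy/orchid": "Orchidaceae", "skill/antarean-opera": "..."})
scout = Humanoid("Humanoid-17", central)
scout.recall("taxonomy/orchid")               # local miss: fetched from orbit, then cached
scout.recall("taxonomy/orchid")               # second request is a fast local hit

On this toy picture, deciding what to pull into the local store ahead of time is exactly the "calling" problem taken up in the next subsection.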

Central vs. Distributed Calling

One of the humanoids walks into a field of flowers. What should it download into the local short-term store? Possibilities might include: a giant lump of botanical information, a giant history of everything known to have happened in that location, detailed algorithms for detecting the presence of landmines and other military hazards, information on soil and wildlife, a language module for the local tribe whose border the humanoid has just crossed, or of course some combination of all these different types of information.

We can imagine the calling decision being reached entirely by the central unit, which downloads information into particular humanoids based on its overview of the whole situation. One advantage of this top-down approach would be that the calling decision would easily reflect information from the other humanoids -- for example, if another one of the humanoids notices a band of locals hiding in the bushes.

Alternatively, the calling decision could be reached entirely by the local unit, based upon the results of local processing. One advantage of this bottom-up approach would be that it avoids delays arising from the transmission of local information to the central unit for possibly computationally heavy comparison with other sources of information. For example, if the local humanoid detects a shape that might be part of a predator, it might be useful to prioritize a fast call of information on common predators without having to wait for a call-up decision from orbit.

A third option would allow a local representation in one humanoid A to trigger a download into another humanoid B, either directly from the first humanoid or via the central unit. Humanoid A might message Humanoid B "Look out, B, a bear!" along with a download of recently stored sensory input from A and an instruction to the central unit to dump bear-related information into B's short term store.
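
The three strategies can be put side by side in a short, purely illustrative Python sketch; the "library" entries and detection conditions are invented stand-ins, not a real design.

CENTRAL_LIBRARY = {
    "botany": "local flora database",
    "predators": "common predator profiles",
    "language/local-tribe": "phrasebook and grammar",
    "bears": "bear behavior and avoidance tactics",
}

def central_call(humanoid_stores, situation_reports):
    """Top-down: the central unit picks downloads for each body from its global overview."""
    for body, report in situation_reports.items():
        if "locals in bushes" in report:
            humanoid_stores[body]["language/local-tribe"] = CENTRAL_LIBRARY["language/local-tribe"]

def local_call(local_store, local_percepts):
    """Bottom-up: a body triggers a fast download based on its own processing."""
    if "possible predator shape" in local_percepts:
        local_store["predators"] = CENTRAL_LIBRARY["predators"]

def peer_call(sender_store, receiver_store):
    """Peer-triggered: body A's local representation triggers a download into body B."""
    if "bear" in sender_store.get("percepts", ""):
        receiver_store["bears"] = CENTRAL_LIBRARY["bears"]
        receiver_store["warning"] = "Look out, B, a bear!"

stores = {"A": {"percepts": "bear at the tree line"}, "B": {}}
central_call(stores, {"B": "locals in bushes reported near B"})
local_call(stores["A"], "possible predator shape ahead")
peer_call(stores["A"], stores["B"])
print(stores["B"])    # B now holds the language module, the bear briefing, and A's warning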

A well-engineered group mind might, of course, allow all three calling strategies. There will still be decisions about how much weight and priority to give to each strategy, especially in cases of...

Conflict

Suppose the central unit has P stored in its memory, while a local unit has not-P. What to do? Here are some possibilities:

Central dictatorship. Once the conflict is detected, the central unit wins, correcting the humanoid unit. This might make especially good sense if the information in the humanoid unit was originally downloaded from the central unit through a noisy process with room for error or if the central unit has access to a larger or more reliable set of information relevant to P.

Central subordination. Once the conflict is detected, the local might overwrite the central. This might make especially good sense if the central store is mostly a repository of constantly updated local information, for example if humanoid A is uploading a stream of sensory information from its short term store into the central unit's long term store.

Voting. If more than one local humanoid has relevant information about P, there might be a winner-take-all vote, resulting in the rewriting of P or not-P across all the relevant subsystems, depending on which representation wins the vote.

Compromise. In cases of conflict there might be compromise instead of dominance. For example, if the central unit has P and one peripheral unit has not-P, they might both write something like "50% likely that P"; analogously if the peripheral units disagree.

Retain the conflict. Another possibility is to simply retain the conflict, rather than changing either representation. The system would presumably want to be careful to avoid deriving conclusions from the contradiction or pursuing self-defeating or contradictory goals. Perhaps contradictory representations could be somehow flagged.

And of course there might be different strategies on different occasions, and the strategies can be weighted, so that if Humanoid A is in a better position than Humanoid B, the compromise might be weighted 80/20 in favor of Humanoid A rather than split equally.
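
These options are easy to express as a toy Python sketch; the particular weights, and the idea of writing a credence rather than a flat truth value in the compromise case, are our own illustrative simplifications.

def central_dictatorship(central_value, local_values):
    return central_value                                    # the central unit always wins

def central_subordination(central_value, local_values):
    return local_values[0]                                  # a designated local overwrites the central store

def vote(central_value, local_values):
    ballots = [central_value] + local_values                # winner-take-all across all units
    return max(set(ballots), key=ballots.count)

def weighted_compromise(central_value, local_values, weights):
    """Write a credence in P rather than a flat True/False."""
    ballots = [central_value] + local_values
    return sum(w for b, w in zip(ballots, weights) if b) / sum(weights)

def retain_conflict(central_value, local_values):
    """Keep both representations, flagged, deriving nothing from the contradiction."""
    return {"flagged_conflict": True, "central": central_value, "local": local_values}

# Central unit holds P; Humanoid A holds not-P but is better placed (weight 0.8 vs. 0.2):
print(weighted_compromise(True, [False], weights=[0.2, 0.8]))   # credence 0.2 in P
print(vote(True, [False, False, False]))                        # not-P wins the vote, 3 to 1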

Similar possibilities arise for conflicts in memory calling -- for example, if the local processors in Humanoid A represent bear-information download as the highest priority, the local processors in Humanoid B represent language-information download as urgent for Humanoid A, and the central unit represents mine detection as the highest priority.

Reconstructive Memory

So far we've been working with a storage-and-retrieval model of memory. But human memory is, we think, better modeled as partly reconstructive: When we "remember" information (especially complex information like narratives) we are typically partly rebuilding, figuring out what must have been the case in a way that brings together stored traces with other more recent sources of information and also with general knowledge. For example, as Bartlett found, narratives retold over time tend to simplify and move toward incorporating stereotypical elements even if those elements weren't originally present; and as Loftus has emphasized, new information can be incorporated into seemingly old memories without the subject being aware of the change (for example, memories of shattered glass when a car accident is later described as having occurred at high speed).

If the group entity's memory is reconstructive, all of the architectural choices we've described become more complicated, assuming that in reconstructing memories the local units and the central units are doing different sorts of processing, drawing on different pools of information. Conflict between memories might even become the norm rather than the exception. And if we assume that reconstructing a memory often involves calling up other related memories in the process, decisions about calling become mixed in with the reconstruction process itself.
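
As a rough contrast with simple storage and retrieval, here is a toy Python sketch of reconstructive recall, in which a sparse stored trace is filled in from schema defaults and then silently overwritten by later information (roughly the Loftus-style effect mentioned above); the schema and episode contents are invented for illustration.

SCHEMA_DEFAULTS = {"car accident": {"glass": "none reported", "speed": "moderate"}}

def reconstruct(trace, schema_key, recent_info):
    episode = dict(SCHEMA_DEFAULTS.get(schema_key, {}))    # start from stereotypical elements
    episode.update(trace)                                   # overlay what was actually stored
    episode.update(recent_info)                             # later information silently overwrites details
    return episode

stored_trace = {"location": "intersection", "glass": "none reported"}
later_description = {"speed": "high velocity", "glass": "shattered"}    # the suggestive retelling
print(reconstruct(stored_trace, "car accident", later_description))    # the "memory" now includes shattered glass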

Memory Filling in Perception

Another layer of complexity: An earlier post discussed perception as though memory were irrelevant, but an accurate and efficient perceptual process would presumably involve memory retrieval along the way. As our humanoid bends down to perceive the flower, it might draw exemplars or templates of other flowers of that species from the long-term store, and this might (as in the human case) influence what it represents as the flower's structure. For example, in the first few instants of looking, it might tentatively represent the flower as a typical member of its species and only slowly correct its representation as it gathers specific detail over time.

Extended Memory

In the human case, we typically imagine memories as stored in the brain, with a sharp division between what is remembered and what is perceived. Andy Clark and others have pushed back against this view. In AI cases, the issue arises vividly. We can imagine a range of cases from what is clearly outward perception to what is clearly retrieval of internally stored information, with a variety of intermediate, difficult-to-classify cases in between. For example: on one end, the group has Humanoid A walk into a newly discovered library and read a new book. We can then create a slippery slope in which the book is digitized and stored increasingly close to the cognitive center of the humanoid (shelf, pocket, USB port, internal atrium...), with increasing permanence.

Also, procedural memory might be partly stored in the limbs themselves with varying degrees of independence from the central processing systems of the humanoid, which in turn can have varying degrees of independence from the processing systems of the orbiting ship. Limbs themselves might be detachable, blurring the border between body parts and outside objects. There need be no sharp boundary between brain, body, and environment.

[image source]

Monday, June 06, 2016

If You/I/We Live in a Sim, It Might Well Be a Short-Lived One

Last week, the famous Tesla and SpaceX CEO and PayPal cofounder Elon Musk said that he is almost certain that we are living in a sim -- that is, that we are basically just artificial intelligences living in a fictional environment in someone else's computer.

The basic argument, adapted from philosopher Nick Bostrom, is this:

1. Probably the universe contains vastly many more artificially intelligent conscious beings, living in simulated environments inside of computers ("sims"), than flesh-and-blood beings living at the "base level of reality" ("non-sims", i.e., not living inside anyone else's computer).

2. If so, we are much more likely to be sims than non-sims.
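
The move from premise 1 to premise 2 is just indifference reasoning over conscious beings. A toy calculation, with purely made-up counts (not estimates from Bostrom or Musk), shows the shape of the inference:

n_sims = 10**12       # hypothetical number of conscious simulated beings
n_non_sims = 10**9    # hypothetical number of conscious beings at the base level of reality
p_sim = n_sims / (n_sims + n_non_sims)
print(f"Credence that a given conscious being is a sim: {p_sim:.4f}")   # about 0.999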

One might object in a variety of ways: Can AIs really be conscious? Even if so, how many conscious sims would there likely be? Even if there are lots, maybe somehow we can tell we're not them, etc. Even Bostrom thinks it only about 1/3 likely that we're sims. But let's run with the argument. One natural next question is: Why think we are in a large, stable sim?

Advocates of versions of the Sim Argument (e.g., Bostrom, Chalmers, Steinhart) tend to downplay the skeptical consequences: The reader is implicitly or explicitly invited to think or assume that the whole planet Earth (at least) is (probably) all in the same giant sim, and that the sim has (probably) endured for a long time and will endure for a long time to come. But if the Sim Argument relies on some version of Premise 1 above, it's not clear that we can help ourselves to such a non-skeptical view. We need to ask what proportion of the conscious AIs (at least the ones relevantly epistemically indistinguishable from us) live in large, stable sims, and what proportion live in small or unstable sims.

I see no reason here for high levels of optimism. Maybe the best way for the beings at the base level of reality to create a sim is to evolve up billions or quadrillions of conscious entities in giant stable universes. But maybe it's just as easy, just as scientifically useful or fun, to cut and paste, splice and spawn, to run tiny sims of people in little offices reading and writing philosophy for thirty minutes, to run little sims of individual cities for a couple of hours before surprising everyone with Godzilla. It's highly speculative either way, of course! That speculativeness should undermine our confidence about which way it might be.

If we're in a sim, we probably can't know a whole lot about the motivations and computational constraints of the gods at the base level of reality. (Yes, "gods".) Maybe we should guess 50/50 large vs. small? 90/10? 99/1? (One reason to skew toward 99/1 is that if there are very large simulated universes, it will only take a few of them to have the sims inside them vastly outnumber the ones in billions of small universes. On the other hand, they might be very much more expensive to run!)
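
The parenthetical point about skewing toward 99/1 is simple arithmetic: a handful of very large sims can swamp billions of small ones. With numbers chosen purely for illustration:

large_sims, beings_per_large = 10, 10**15      # a few huge, stable simulated universes
small_sims, beings_per_small = 10**9, 10**3    # billions of small, short-lived sims
in_large = large_sims * beings_per_large
in_small = small_sims * beings_per_small
print(in_large / (in_large + in_small))        # about 0.9999: nearly all sims live in the large ones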

If you/I/we are in a small sim, then some version of radical skepticism seems to be warranted. The world might be only ten minutes old. The world might end in ten minutes. Only you and your city might exist, or only you in your room.

Musk and others who think we might be in a simulated universe should take their reasoning to the natural next step, and assign some non-trivial credence to the radically skeptical possibility that this is a small or unstable sim.

-----------------------------------------

Related:

"Skepticism, Godzilla, and the Artificial Computerized Many-Branching You" (Nov 15, 2013).

"Our Possible Imminent Divinity" (Jan 2, 2014).

"1% Skepticism" (forthcoming, Nous).

[image source]