Tuesday, September 01, 2015

A Defense of the Rights of Artificial Intelligences

... a new essay in draft, which I've written collaboratively with a student named Mara (whose last name is currently in flux).

This essay draws together ideas from several past blog posts including:
  • Our Possible Imminent Divinity (Jan. 2, 2014)
  • Our Moral Duties to Artificial Intelligences (Jan. 14, 2015)
  • Two Arguments for AI (or Robot) Rights (Jan. 16, 2015)
  • How Robots and Monsters Might Break Human Moral Systems (Feb. 3, 2015)
  • Cute AI and the ASIMO Problem (July 24, 2015)
  • How Weird Minds Might Destabilize Ethics (Aug. 3, 2015)
------------------------------------------

    Abstract:

    There are possible artificially intelligent beings who do not differ in any morally relevant respect from human beings. Such possible beings would deserve moral consideration similar to that of human beings. Our duties to them would not be appreciably reduced by the fact that they are non-human, nor by the fact that they owe their existence to us. Indeed, if they owe their existence to us, we would likely have additional moral obligations to them that we don’t ordinarily owe to human strangers – obligations similar to those of parent to child or god to creature. Given our moral obligations to such AIs, two principles for ethical AI design recommend themselves: (1) design AIs that tend to provoke reactions from users that accurately reflect the AIs’ real moral status, and (2) avoid designing AIs whose moral status is unclear. Since human moral intuition and moral theory evolved and developed in contexts without AI, those intuitions and theories might break down or become destabilized when confronted with the wide range of weird minds that AI design might make possible.

    Full version available here.

As always, comments warmly welcomed -- either by email or on this blog post. We're submitting it to a special issue of Midwest Studies in Philosophy with a hard deadline of September 15, so comments before that deadline would be especially useful.


    10 comments:

    Lukasz Stafiniak said...

    Do you have recommendations on which side of the moral disambiguation ought to be pursued? Should we try to design very capable AIs as tools, or as persons?

    Eric Schwitzgebel said...

Persons seems to me the riskier course, but with more potential benefit down the road. We would want to be very careful about it, both out of concern for those persons themselves and because *perhaps* AIs as persons pose more of the kind of risk recently emphasized by Bostrom (depending on what distinguishes persons from non-persons, among AIs). So I'd say keep them as tools unless we're sure we can make a very well thought-out leap with a good assessment of the risks.

    Callan S. said...

Can't say I get how this applies - currently the cultural gravitation seems to be toward building AIs for, what one might call, the purpose of slavery.

Is the idea of rights for them supposed to head off their construction (for the purposes of slavery) at the pass, so to speak? That makes sense.

    Arnold said...

Do inanimate existences occur in the universe...
This direction, when narrowly focused on moral AI designs, subjects the "rights" of AI to a place in a horizontal hierarchy and limits its being to the biological processes so far found in life...
As indicated in the abstract, our hierarchical place, in part, is a being obligation vertically before or above AI...

    Callan S. said...

What do you mean by inanimate, Unknown?

    Does your brain move around?

    chinaphil said...

    Nice essay - lovely example of one of your multi-pronged arguments.
I wonder, though, about the dignity of the AIs. Once they achieve moral parity with humans, we have a duty to allow them to mostly self-determine. Messing with their parameters isn't less morally sticky just because we do it pre-birth. I worry that your recommendations here are a kind of eugenics.
I'm not absolutely against eugenics, but it's a worry. Obviously we'd have to be constantly on guard against, say, declining to give our AIs black skin in order to make them more relatable. I wonder if we might not end up in the same place as we are with eugenics, just avoiding it completely because it's a minefield.

    Eric Schwitzgebel said...

    Thanks for the kind words, chinaphil!

The eugenics comparison is troubling. I'm inclined to think you're right that we should mention it, and that it might turn out to be such a minefield that people avoid tweaking parameters pre-birth just to avoid that minefield. However, if AIs prove to be *very* useful when properly tweaked, it might be hard for the culture to resist, or it might even turn out that cultures/subcultures that don't have eugenic qualms outcompete the ones that do....

    Arnold said...

Callan S. ...that we are ourselves involved, in part, as animate existences in observation...
...then morality can be involved through the influence of observation...

googled "Does your brain move around?" Hmmmm...

    Callan S. said...

The whole animate/inanimate distinction thing is nice to make, but to actually base anything around it...it's like trying to base a philosophy on not liking pickles or something else equally subjective.

    Anyway, you say animate - ignoring the body, how animate is your brain by itself? Not very at all, maybe?

    Arnold said...

Callan S., keeping the ideas that may influence Eric's essay simple and partial is my focus...
Then the 'animate/inanimate distinction' can also include distinction from observation...in this format it becomes a description exercise that suggests everything stands in relationships...like AI morality to place, related in the universe... 'Anyway-maybe' Hmmmm