
#1 I, Robot (or, the morality of creating artificial sentience)

Posted: Wed Jul 05, 2006 4:54 pm
by Hotfoot
As I mentioned in the thread concerning bio-servitors, I think that the concept of developing a slave race of sapient machines is morally reprehensible. We know that in many sci-fi settings, such a situation is considered the status quo. Star Wars and Star Trek have a rather blasé attitude towards technological sapience by and large. The Federation tried to create an army of Datas, still can't deal with holographic persons, and so on. The Republic and the Empire both mercilessly exploit droid labor, and slave labor itself is practiced under both regimes.

In any case, modern computer systems are nowhere near sapience. Deep Blue is a glorified calculator, and every entrant in the Turing Test is little more than a cleverly coded program. However, if the technology existed, wouldn't it be immoral to use it to create a race of servants and slaves? Would it even be right to create a creature with sapience, but without the ability to propagate on its own?
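To put "cleverly coded program" in perspective, here's a rough sketch in the spirit of the old ELIZA chatbot; the rules here are invented for illustration, but programs not much deeper than this have passed for human in casual conversation:

```python
import re

# A crude ELIZA-style responder: match a pattern, echo part of it back.
# All of these rules are made up for illustration.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i think (.*)", "What makes you think {0}?"),
    (r".*\bcomputer\b.*", "Does talking about computers bother you?"),
]

def respond(line):
    """Return a canned reflection of the input, or a stock fallback."""
    for pattern, template in RULES:
        match = re.match(pattern, line.lower())
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I feel like machines could be people"))
# -> Why do you feel like machines could be people?
```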

Some people are worried about humans having sex with robots now, when the most robots can possibly be is poorly controlled puppets, but at what point do we draw the line? When does a computer stop being a computer, and when does a program become something more than just lines of code?

Moreover, with more and more work being done with biological computers, at what point does something cease to be a robot and instead become a living being? It seems unlikely that we'll be able to build a sapient robot without a much better understanding of the human brain, so at what point do we say that the brain is human, and thus the robot is little more than a vat-grown cybernetic organism?

#2

Posted: Wed Jul 05, 2006 5:26 pm
by frigidmagi
Therein lies the problem, I think: we do not have a unified definition of what makes someone human. But that's another topic, I think.

Moving on: Star Wars is clearly not moral in these terms. The Old Republic allowed slavery to occur in many places without a blink (not even the fucking Jedi give a damn), and the Empire seemed even worse.
However, if the technology existed, wouldn't it be immoral to use it to create a race of servants and slaves?
I believe so, but something else has to be asked as well: how can the people who make these things be paid back for their time and effort? And what kind of rights and protections should an AI have? I think they should enjoy at least the same rights that minors have today; at the very least, AIs should have the right to exist after they have been activated.
Would it even be right to create a creature with sapience, but without the ability to propagate on its own?
Would a machine even have the desire for that? Furthermore, aren't most machines created by other machines today? I imagine propagation among AIs would be a matter of designing and building your offspring.
but at what point do we draw the line? When does a computer stop being a computer, and when does a program become something more than just lines of code?
I remember one fictional universe where US law stated that any robot with enough awareness to sue for freedom had to be granted freedom and citizenship. It was, however, an imperfect solution: no one said you had to keep the robot in its job or provide for it after it was free...
Moreover, with more and more work being done with biological computers, at what point does something cease to be a robot and instead become a living being? It seems unlikely that we'll be able to build a sapient robot without a much better understanding of the human brain, so at what point do we say that the brain is human, and thus the robot is little more than a vat-grown cybernetic organism?
This may be a moot point. I am not comfortable with declaring just yet that all AIs will be partly biological.

#3

Posted: Wed Jul 05, 2006 5:36 pm
by Narsil
The Culture novels deal with machines developing sentience and a heavy sense of morality, to the point where they are declared the rulers of mankind; democracy vanishes and is replaced by a benevolent... well, I'm not sure what to call it. There's no 'dictator', merely a group of AIs which allow humanity to live in a hedonistic utopia.

#4

Posted: Wed Jul 05, 2006 5:36 pm
by SirNitram
I have mused on this for my own writings, and I believe the Enclave is actually right and moral (for once) on this.

In short, a Machine is required to 'work off' the cost of its creation before becoming a citizen. Given that a Machine's mind is mature (but not its personality), this is usually similar to the process of going through school, only the Machine does 'simple', vital processing tasks to pay its way along, often assisting in the running of a household.

In a sense, they're indentured servants who get their freedom after repaying their creator and, hopefully, having matured psychologically.

#5 Re: I, Robot (or, the morality of creating artificial sentience)

Posted: Wed Jul 05, 2006 6:23 pm
by Destructionator XV
Hotfoot wrote:As I mentioned in the thread concerning bio-servitors, I think that the concept of developing a slave race of sapient machines is morally reprehensible.
I would ask why make a slave race of machines sapient in the first place, but I'll put aside that debate for another thread.
Deep Blue is a glorified calculator, and every entrant in the Turing Test is little more than a cleverly coded program.
Is a human anything more than a cleverly coded program? A better definition of what it truly means to be sapient is really needed.
However, if the technology existed, wouldn't it be immoral to use it to create a race of servants and slaves?
Yes, for the exact same reason that taking other humans as slaves is immoral. You could say you create humans too: your own children. You certainly can't enslave them.
Would it even be right to create a creature with sapience, but without the ability to propagate on its own?
The only argument I can think of against it is that if they can't reproduce on their own, the fate of their species ultimately rests on us; but I don't see this applying to the individual droid's rights. They can be created, and I see no problem with that.

Secondly, making a droid that can reproduce would be harder than you think. Unlike biological organisms, they aren't just going to sprout from an egg and sperm nor be able to split and regrow like bacteria. They are going to need the infrastructure to manufacture delicate parts and the expertise to correctly assemble them.

They would need a full-fledged factory, which a truly intelligent android could probably learn to operate, but it would be infeasible for him to carry those facilities on his person.

Moreover, with more and more work being done with biological computers, at what point does something cease to be a robot and instead become a living being?
I am reminded of a conversation Captain Picard had with Professor Moriarty in "Elementary, Dear Data", from TNG season 2:

Moriarty: "Is the definition of life not congito ergo sum, I think, therefore I am?"

Picard: "Yes, that is one possible definition."

Moriarty: "It is the most important one."


I don't think being biological or not has anything to do with the questions here. It depends on how it thinks.

#6 Re: I, Robot (or, the morality of creating artificial sentience)

Posted: Wed Jul 05, 2006 7:01 pm
by SirNitram
Destructionator XV wrote:They would need a full-fledged factory, which a truly intelligent android could probably learn to operate, but it would be infeasible for him to carry those facilities on his person.
This presumes that the AI is the machine; but it's not, it's the thinking software. Software can be moved onto compatible hardware. Since the 'being' is nothing more than extremely advanced computer code, they can in fact reproduce. The things they use to interact with humans (robot bodies, vehicles, even computers) are separate.
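To put that concretely: if the 'being' is just state plus code, then reproduction is ordinary copying. A rough sketch, where every detail of this 'mind' is invented for illustration:

```python
import copy

# If a mind really is just software state, "reproduction" is a copy
# operation. The structure of this 'mind' is made up for illustration.
mind = {
    "name": "Unit-7",
    "memories": ["activation", "first conversation"],
    "parameters": [0.12, 0.87, 0.43],
}

offspring = copy.deepcopy(mind)              # an independent, identical being...
offspring["name"] = "Unit-8"
offspring["memories"].append("being copied") # ...which immediately diverges

print(mind["memories"])       # ['activation', 'first conversation']
print(offspring["memories"])  # ['activation', 'first conversation', 'being copied']
```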

Which leads us to difficult and perplexing theological questions: if there's a soul, and a Machine is alive... does its soul move? Can you back up a machine and pull its soul along?

#7

Posted: Thu Jul 06, 2006 2:06 am
by Narsil
SirNitram wrote:Which leads us to difficult and perplexing theological questions: if there's a soul, and a Machine is alive... does its soul move? Can you back up a machine and pull its soul along?
I personally think that the soul isn't the source of sentient thought; no, it is sentient thought.

#8

Posted: Thu Jul 06, 2006 5:58 am
by Stofsk
I don't see why you have to have AI in order to have robotic workers. I mean... we get by all right nowadays.

#9

Posted: Thu Jul 06, 2006 9:23 am
by Batman
And today's robots are of very limited utility: they're very good at a very limited selection of tasks and virtually incapable of doing anything else.
If we want the 'metal human' style of robot that's so prevalent in sci-fi, some level of AI is inevitable. The question is, do we need sentience-level AI, and do we need all-purpose robots in the first place?

#10

Posted: Thu Jul 06, 2006 9:26 am
by SirNitram
Narsil wrote:
Which leads us to difficult and perplexing theological questions: if there's a soul, and a Machine is alive... does its soul move? Can you back up a machine and pull its soul along?
I personally think that the soul isn't the source of sentient thought; no, it is sentient thought.
Sadly, your personal opinions do not do away with the reality of such uncomfortable questions in the real world.

#11

Posted: Fri Jul 07, 2006 8:13 am
by Hotfoot
My apologies for a late response, but I've been a bit busier than normal.

I personally don't think we need to create sapient robots to do things we cannot do on our own. We already send machines into conditions far too hazardous for human life (the Jason deep-sea probe, interplanetary probes and landers, and so on). While they are currently of somewhat limited use, what real benefit could there be in creating a sapient Mars robot and sending it to the red planet to die? Any complex machine is far more likely to break down than a simple, overengineered counterpart.

Given that the primary reason for developing sapient robots seems to be "to do the work that humans can't", I have to raise this issue: for all of our frailties, humans can do one thing that robots currently cannot - self-repair and maintenance. We don't need to lubricate our joints or replace our outer covering; we do that well enough on our own. Now, I'm sure someone will say that with sufficiently advanced technology we could do the same with robots, but let's be fair - any self-repairing system is going to be less durable than one that just exists for protection. This puts us more or less back at square one.

The real benefit, I would think, would be in situations that do not allow for remote control. The main option that comes to mind is deep space exploration - attempting to find solar systems with planets similar enough to Earth for colonization - and let's face it, even if we found a planet suitable for human life, we wouldn't be able to colonize it without extreme cost at this point or in the foreseeable future.

Concerning the existence of a soul, Nitram brings up a good point. The ability to copy software makes this a very interesting situation if a sapient computer can be copied as easily as a movie. However, since we don't know how the technology would work, this may be a premature conclusion. For all we know, each AI takes a blank-slate quantum system and then makes changes to it that can't be measured without changing it, making copying a rather difficult process, to say the least. The changing of bodies is less of a problem - children grow up and "change bodies" throughout their lifetimes.
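For what it's worth, that quantum speculation lines up with a real result, the no-cloning theorem: an unknown quantum state cannot be perfectly copied. The standard textbook argument is just linearity, and nothing about it is specific to AI:

```latex
% Suppose one unitary U could copy an arbitrary state |psi>:
\[
U\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr)
    = \lvert\psi\rangle \otimes \lvert\psi\rangle .
\]
% Apply it to a superposition |phi> = a|0> + b|1>. Linearity forces
\[
U\bigl(\lvert\phi\rangle \otimes \lvert 0\rangle\bigr)
    = a\,\lvert 0\rangle\lvert 0\rangle + b\,\lvert 1\rangle\lvert 1\rangle ,
\]
% but a genuine copy would be
\[
\lvert\phi\rangle \otimes \lvert\phi\rangle
    = a^{2}\lvert 0\rangle\lvert 0\rangle + ab\,\lvert 0\rangle\lvert 1\rangle
    + ab\,\lvert 1\rangle\lvert 0\rangle + b^{2}\lvert 1\rangle\lvert 1\rangle .
\]
% The two agree only when ab = 0, so no universal copier exists.
```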

I don't think that the simple computers we have today are capable of creating truly sapient thought processes - such an act would likely require a distributed network of "processors" with a level of finesse and complexity beyond what we are currently capable of building. Even if it's not organic, it will almost certainly have to be modelled on the human brain at some level, because we are the most sapient of all the beasts.
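To give a rough sense of what "modelled on the human brain at some level" might mean, here is a toy sketch; the weights, inputs, and thresholds are invented for illustration, and real neural models are vastly more complicated:

```python
# A toy "neuron": sum weighted inputs, fire if the total clears a
# threshold. All numbers here are made up for illustration.

def neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs exceeds the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# A "distributed network" is just neurons feeding each other's outputs.
layer_one = [
    neuron([1, 0, 1], [0.5, 0.2, 0.4], 0.6),   # fires: 0.9 > 0.6
    neuron([1, 0, 1], [0.1, 0.9, 0.3], 0.6),   # stays quiet: 0.4 < 0.6
]
print(neuron(layer_one, [0.7, 0.7], 0.5))      # prints 1
```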

Another interesting question is: what is sapience without life? A robot would not meet our criteria for being alive - it would not breathe, it could not reproduce by itself or with others of its own kind, it would not eat - so what would come of such a sapience? Would it know pain as we do, or even joy or anger? Many of our emotions are ruled by chemical interactions that would have no real analog in a mechanical being.