I, Robot (or, the morality of creating artificial sentience)
Moderator: Charon
#1 I, Robot (or, the morality of creating artificial sentience)
As I mentioned in the thread concerning bio-servitors, I think that the concept of developing a slave race of sapient machines is morally reprehensible. We know that in many sci-fi settings, such a situation is considered the status quo. Star Wars and Star Trek have a rather blasé attitude towards technological sapience by and large. The Federation tried to create an army of Datas and still can't deal with holographic persons, and so on. The Republic and the Empire both mercilessly exploit droid labor, and slave labor itself is practiced under both regimes.
In any case, modern computer systems are nowhere near sapience. Deep Blue is a glorified calculator, and every contender for the Turing Test is little more than a cleverly coded program. However, if the technology existed, wouldn't it be immoral to use it to create a race of servants and slaves? Would it even be right to create a creature with sapience, but without the ability to propagate on its own?
Some people are worried about humans having sex with robots now, when robots are still little more than poorly controlled puppets, but at what point do we draw the line? When does a computer stop being a computer, and when does a program become something more than just lines of code?
Moreover, with more and more work being done with biological computers, at what point does something cease to be a robot and instead become a living being? It seems unlikely that we'll be able to build a sapient robot without a much better understanding of the human brain, so at what point do we say that the brain is human, and thus the robot is little more than a vat-grown cybernetic organism?
- frigidmagi
- Dragon Death-Marine General
- Posts: 14757
- Joined: Wed Jun 08, 2005 11:03 am
- Location: Alone and unafraid
#2
Therein lies the problem, I think: we do not have a unified definition of what makes someone human. That's another topic, though.
Moving on, Star Wars is clearly not moral in these terms: the Old Republic allowed slavery to occur in many places without a blink (not even the fucking Jedi gave a damn), and the Empire seemed even worse.
Hotfoot wrote: However, if the technology existed, wouldn't it be immoral to use it to create a race of servants and slaves?
I believe so, but what also has to be asked is... well, how can the people who make these things be paid back for their time and effort? Also, what kind of rights and protections should an AI have? I think at the very least they should enjoy the same rights that minors have today. At the very least, AIs should have the right to exist after they have been activated.
Hotfoot wrote: Would it even be right to create a creature with sapience, but without the ability to propagate on its own?
Would a machine even have the desire for that? Furthermore, aren't most machines created by other machines today? I imagine propagation among AIs would be a matter of designing and building your offspring.
Hotfoot wrote: ...but at what point do we draw the line? When does a computer stop being a computer, and when does a program become something more than just lines of code?
I remember one fictional universe where US law stated that any robot with enough awareness to sue for freedom had to be granted freedom and citizenship. It was, however, an imperfect solution: no one said you had to keep the robot in its job or provide for it after it was free.
Hotfoot wrote: Moreover, with more and more work being done with biological computers, at what point does something cease to be a robot and instead become a living being?
This may be a moot point. I am not comfortable with declaring that all AIs will be partly biological just yet.
"It takes two sides to end a war but only one to start one. And those who do not have swords may still die upon them." Tolkien
- Narsil
- Lord of Time
- Posts: 1883
- Joined: Fri Aug 19, 2005 3:26 am
- Location: A Scot in England
#3
The Culture novels deal with machines developing sentience and a heavy sense of morality, to the point where they are declared the rulers of mankind; democracy vanishes and is replaced by a benevolent... well, I'm not sure what to call it. There's no 'dictator', merely a group of AIs which allow humanity to live in a hedonistic utopia.
- SirNitram
- The All-Seeing Eye
- Posts: 5178
- Joined: Thu Jun 30, 2005 7:13 pm
- Location: Behind you, duh!
#4
I have mused on this for my own writings, and I believe the Enclave is actually right and moral (for once) on this.
In short, a Machine is required to 'work off' the cost of its creation before becoming a citizen. Given that a Machine's mind is mature (but not its personality), this is usually similar to the process of going through school, only the Machine does 'simple', vital processing tasks to pay its way along, often assisting in the running of a household.
In a sense, they're indentured servants who get their freedom after repaying their creator and, hopefully, having matured psychologically.
Half-Damned, All Hero.
Tev: You're happy. You're Plotting. You're Evil.
Me: Evil is so inappropriate. I'm ruthless.
Tev: You're turning me on.
I Am Rage. You Will Know My Fury.
- Destructionator XV
- Lead Programmer
- Posts: 2352
- Joined: Sun Jun 12, 2005 10:12 am
- Location: Watertown, New York
#5 Re: I, Robot (or, the morality of creating artificial sentience)
Hotfoot wrote: As I mentioned in the thread concerning bio-servitors, I think that the concept of developing a slave race of sapient machines is morally reprehensible.
I would ask why make a slave race of machines sapient in the first place, but I'll put aside that debate for another thread.
Hotfoot wrote: Deep Blue is a glorified calculator, and every contender for the Turing Test is little more than a cleverly coded program.
Is a human anything more than a cleverly coded program? A better definition of what it truly means to be sapient is really needed.
Hotfoot wrote: However, if the technology existed, wouldn't it be immoral to use it to create a race of servants and slaves?
Yes, for the exact same reason taking other humans as slaves is immoral. You could say you created humans: your own children. You certainly can't enslave them.
Hotfoot wrote: Would it even be right to create a creature with sapience, but without the ability to propagate on its own?
The only argument I can think of against it is that if they can't reproduce on their own, the fate of their species ultimately rests on us, but I don't see this applying to the individual droid's rights. They can be created, and I see no problem with that.
Secondly, making a droid that can reproduce would be harder than you think. Unlike biological organisms, they aren't just going to sprout from an egg and sperm, nor split and regrow like bacteria. They are going to need the infrastructure to manufacture delicate parts and the expertise to correctly assemble them.
They would need a full-fledged factory, which a truly intelligent android could probably learn to operate, but it would be infeasible to carry those facilities on his person.
Hotfoot wrote: Moreover, with more and more work being done with biological computers, at what point does something cease to be a robot and instead become a living being?
I am reminded of a conversation Captain Picard had with Professor Moriarty in "Elementary, Dear Data", from TNG season 2:
Moriarty: "Is the definition of life not cogito ergo sum: I think, therefore I am?"
Picard: "Yes, that is one possible definition."
Moriarty: "It is the most important one."
I don't think being biological or not has anything to do with the questions here. It depends on how it thinks.
- SirNitram
- The All-Seeing Eye
- Posts: 5178
- Joined: Thu Jun 30, 2005 7:13 pm
- Location: Behind you, duh!
#6 Re: I, Robot (or, the morality of creating artificial sentience)
Destructionator XV wrote: They would need a full-fledged factory, which a truly intelligent android could probably learn to operate, but it would be infeasible to carry those facilities on his person.
This presumes that the AI is the machine; but it's not, it's the thinking software. Software can be moved onto compatible hardware. Since the 'being' is nothing more than extremely advanced computer code, they can in fact reproduce. The things they use to interact with humans (robot bodies, vehicles, even computers) are separate.
Which leads us to difficult and perplexing theological questions. If there is a soul, and a Machine is alive, does its soul move? Can you back up a machine and pull its soul along?
Half-Damned, All Hero.
Tev: You're happy. You're Plotting. You're Evil.
Me: Evil is so inappropriate. I'm ruthless.
Tev: You're turning me on.
I Am Rage. You Will Know My Fury.
- Narsil
- Lord of Time
- Posts: 1883
- Joined: Fri Aug 19, 2005 3:26 am
- Location: A Scot in England
#7
SirNitram wrote: Which leads us to difficult and perplexing theological questions. If there is a soul, and a Machine is alive, does its soul move? Can you back up a machine and pull its soul along?
I personally think that the soul itself isn't the source of sentient thought; no, it is sentient thought.
- Batman
- The Dark Knight
- Posts: 4357
- Joined: Mon Feb 06, 2006 4:47 am
- Location: The Timmverse, the only place where DC Comics still make a modicum of sense
#9
And today's robots are of very limited utility. They're very good at a very limited selection of tasks and virtually incapable of doing anything else.
If we want the 'metal human' style of robot that's so prominent in sci-fi, some level of AI is inevitable. The question is: do we need sentience-level AI, and do we need all-purpose robots in the first place?
'I wonder how far the barometer sunk.'-'All der way. Trust me on dis.'
'Go ahead. Bake my quiche'.
'Undead or alive, you're coming with me.'
'Detritus?'-'Yessir?'-'Never go to Klatch'.-'Yessir.'
'Many fine old manuscripts in that place, I believe. Without price, I'm told.'-'Yes, sir. Certainly worthless, sir.'-'Is it possible you misunderstood what I just said, Commander?'
'Can't sing, can't dance, can handle a sword a little'
'Run away, and live to run away another day'-The Rincewind principle
'Hello, inner child. I'm the inner babysitter.'
- SirNitram
- The All-Seeing Eye
- Posts: 5178
- Joined: Thu Jun 30, 2005 7:13 pm
- Location: Behind you, duh!
#10
Narsil wrote: I personally think that the soul itself isn't the source of sentient thought; no, it is sentient thought.
Sadly, your personal opinions do not do away with the reality of such uncomfortable questions in the real world.
Half-Damned, All Hero.
Tev: You're happy. You're Plotting. You're Evil.
Me: Evil is so inappropriate. I'm ruthless.
Tev: You're turning me on.
I Am Rage. You Will Know My Fury.
#11
My apologies for a late response, but I've been a bit busier than normal.
I personally don't think we need to create sapient robots to do things we cannot do on our own. We already send machines into conditions far too hazardous for human life (the Jason deep-sea probe, interplanetary probes and landers, and so on). While they are currently of somewhat limited use, what real benefit could there be to sending a sapient robot to the red planet to die? Any complex machine is far more likely to break down than a simple, overengineered counterpart. Given that the primary reason for developing sapient robots seems to be "to do the work that humans can't", I have to raise this issue: for all of our frailties, humans can do one thing that robots currently cannot - self-repair and maintenance. We don't need to lubricate our joints or replace our outer covering; we do that well enough on our own. Now, I'm sure someone will say that with sufficiently advanced technology we could do the same with robots, but let's be fair - any self-repairing system is going to be less durable than a simple one built purely for protection. This puts us more or less back at square one.
The real benefit, I would think, would be in situations that do not allow for remote control. The only one that comes to mind is deep-space exploration: attempting to find solar systems with planets similar enough to Earth for colonization. And let's face it, even if we found a planet suitable for human life, we wouldn't be able to colonize it without extreme cost, at this point or in the foreseeable future.
Concerning the existence of a soul, Nitram brings up a good point. The ability to copy software makes for a very interesting situation if a sapient program can be copied as easily as a movie. However, since we don't know how the technology would work, this may be a premature conclusion. For all we know, each AI takes a blank-slate quantum system and then makes changes to it that can't be measured without disturbing it, making copying a rather difficult process, to say the least. The changing of bodies is less of a problem - children grow up and "change bodies" throughout their lifetimes.
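A toy sketch may make the copying point concrete. Assuming, purely for illustration, that an AI's mind were ordinary digital data rather than unmeasurable quantum state, duplicating it would be no harder than duplicating any other data structure, after which the two copies would immediately begin to diverge:

```python
import copy

# Hypothetical stand-in for a digital mind's state: just ordinary data.
original = {"name": "unit-1", "memories": ["activation", "first task"]}

# A deep copy duplicates every nested structure, not just the outer dict,
# so the clone shares no state with the original.
clone = copy.deepcopy(original)
clone["name"] = "unit-2"
clone["memories"].append("met a human")

print(original["memories"])  # ['activation', 'first task']
print(clone["memories"])     # ['activation', 'first task', 'met a human']
```

If minds instead depended on quantum state, this kind of perfect duplication would not be possible, which is exactly the caveat about unmeasurable changes above.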
I don't think the simple computers we have today are capable of producing truly sapient thought processes - such an act would likely require a distributed network of "processors" with a level of finesse and complexity beyond what we are currently capable of creating. Even if it's not organic, it will almost certainly have to be modelled on the human brain at some level, because we are the most sapient of all the beasts.
Another interesting question: what is sapience without life? A robot does not meet our criteria for being alive - it would not breathe, it could not reproduce by itself or with others of its own kind, it would not eat - so what would come from such a sapience? Would it know pain as we do, or even joy or anger? Many of our emotions are ruled by chemical interactions that would have no real analog in a mechanical being.