Thread: AIs and artificial life forms--life, rights, ethics, pluses/minuses
Board: Purgatory / Ship of Fools.


To visit this thread, use this URL:
http://forum.ship-of-fools.com/cgi-bin/ultimatebb.cgi?ubb=get_topic;f=2;t=020255

Posted by Golden Key (# 1468) on :
 
Humans are creating more and more artificial entities, and show no signs of stopping.

AFAICT, developers and advocates haven't given a lot of thought to possible consequences: life, rights, ethics, pluses/minuses. It seems to be another example of "We can, so we should--and go as far as we can" and "Negative consequences? There won't be any stinkin' negative consequences". That attitude seems to be an undocumented feature of humans.

So maybe--here, at least--we can think through current issues, what was done in the past, and what kind of future this all might bring.

For example: can AIs become alive, to what extent, and how should we treat them?

Facebook made an interesting choice:
"Facebook kills AI that invented its own language because English was slow" (PC Gamer).

Digital Journal has similar cases, and also "DeepMind creates 'imaginative' AI that can create and plan".

Thoughts?
 
Posted by Doc Tor (# 9748) on :
 
I am, through 'work', in touch with AI academia.

A true AI is, at a conservative estimate, 100 years away. The problems are ones of complexity and scale.

On the one hand, a neural net (unprogrammed) can emulate simple organisms. But any attempt to scale that up to something the size of a mind is impossible. There are more connections in the human brain than there are stars in the galaxy.

On the other, an expert system (programmed) can emulate and even out-perform humans in many tasks. But while it can learn, it cannot think. It's still essentially a sophisticated piece of software.

This shouldn't mean that we don't think about, write about, and pre-empt the birth of true AIs. How we treat them should, in almost every respect, be how we expect to be treated.
 
Posted by Martin60 (# 368) on :
 
It's a thousand times bigger than that, Doc.

OOM

Sapient AI - HAL - is a never, like economic fusion or any ID equivalent materialist fantasy.

We won't even achieve universal social justice for humans. Not for 10,000 years at least.
 
Posted by Ian Climacus (# 944) on :
 
quote:
Originally posted by Doc Tor:
How we treat them should, in almost every respect, be how we expect to be treated.

Given how we treat our fellow humans, I imagine the struggle for equal respect will not be instantaneous. I'm reminded of that rather good Real Humans Swedish TV series... Though not all had awareness, we can be dismissive of those "not like us".

Whenever I hear of AI I am amazed at all our brains can do. An amazing piece of "technology". I struggle to see AI matching it, but that is my limitation, and I'll be guided by those wiser, like Doc Tor.
 
Posted by Martin60 (# 368) on :
 
Nobody is wiser than you in your instincts, Ian. Or than Doc in his golden rule.
 
Posted by Brenda Clough (# 18061) on :
 
This issue has been -very- thoroughly plumbed in fiction. It is a favorite theme of novels and movies, and any list is very long indeed. However, off the top of my head I draw your attention to "Bicentennial Man" by Isaac Asimov (there was also a movie of the same title starring Robin Williams), or Ex Machina, the movie that came out a few years ago.
 
Posted by HCH (# 14313) on :
 
The concept of "true" AI is always changing, a moving target. When someone came up with a chess-playing computer program, there was discussion about AI, and then people said, oh, no, it's just a bunch of algorithms. The same keeps happening. Nowadays some software commonly notices your preferences, based on past history, and anticipates what you may need or want. That seems smart, but it's just a bunch of algorithms.

The classic test proposed for AI is the Turing test: a judge holds a conversation, on any and all topics, with two hidden parties, one a human being and one a computer program, and must then decide which is which. No one (yet) has a program that can reliably pass this test, but they're getting closer all the time.
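
Just to make the setup concrete, here's a minimal sketch of the imitation game in Python. All of it is hypothetical scaffolding rather than any real benchmark: the judge is just a function that reads a labelled transcript and guesses.

code:
import random

class Scripted:
    """Stand-in respondent that answers from a canned list (hypothetical)."""
    def __init__(self, answers):
        self.answers = list(answers)

    def reply(self, question):
        return self.answers.pop(0) if self.answers else "I don't know."

def run_trial(judge_guess, human, machine, questions):
    # Hide which respondent is "A" and which is "B".
    labels = ["A", "B"]
    random.shuffle(labels)
    parties = dict(zip(labels, [human, machine]))
    # The judge sees only labelled question/answer pairs.
    transcript = {label: [(q, p.reply(q)) for q in questions]
                  for label, p in parties.items()}
    guess = judge_guess(transcript)      # the judge returns "A" or "B"
    return parties[guess] is machine     # True if the judge spotted the machine

# A machine "passes" if, over many trials, judges do no better than chance --
# e.g. a coin-flipping judge: run_trial(lambda t: random.choice("AB"), ...)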

Does it matter? If an imitation of a human is so good I can't tell the difference, is there actually a difference?

Could an AI have a soul? Ultimately only God can answer that. I can, however, imagine a program that is interested in existential questions, has some esthetic awareness, has an awareness of ethics, and apologizes for mistakes. (I have a notion of writing a story about such.)
 
Posted by Doc Tor (# 9748) on :
 
quote:
Originally posted by HCH:
Could an AI have a soul? Ultimately only God can answer that.

I'm obsessed with AI (hence my 'work' contacts), and in one of mine, it's not God, but the Congregation for the Doctrine of the Faith who decide on the AI's anima.
 
Posted by Brenda Clough (# 18061) on :
 
Before a mere machine can have a soul, surely we'd have to allow the higher animals a soul. They can do, think, and feel more. We can probably agree that fleas, slugs, etc. do not have a soul. But chimpanzees? Dogs? Dolphins?
 
Posted by no prophet's flag is set so... (# 15560) on :
 
There are fundamental problems with this, hubris among them (one of the seven deadlies). Humans are not comparable to AI, nor are the human mind, consciousness and sentience explainable via computer metaphors. They just aren't.

AI is a machine which works on algorithms and procedures. Brains, on the other hand, are prepared to interact with the world. Have a look at this article: "Your brain does not process information and it is not a computer".

quote:
We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.
The article helpfully discusses the theories of mind which humans have used metaphorically over the ages to try to explain it.

So AI will never have rights, and it will never think as humans do. Though it might greatly influence how people treat each other and how we create justifications for how we treat each other. I think of "medical assistance in dying" (MAID), which is now law in Canada and has a series of steps which must be followed in order to qualify to have a doctor end one's life (an algorithm). The steps are designed to systematically ensure that the person is really ill or suffering and voluntarily wants to end it all. The pieces missing are the ethical considerations of actively causing death, the subtle and unstated influences from a stretched health system which could save millions if more people died sooner, and the undiscussed influence of family who would prefer that their parent (or other) die because they are tired of providing care or want the inheritance now. We make similar errors when we apply business and legal procedural models (internal audit) to many other human issues, which unconsciously (I think) imposes algorithms on problems that are not reasonably solved that way.

I think of the anthropomorphisation of animals in Disney movies, where the lions or penguins think and respond as humans do. They don't in reality. And everyone who has tried to train a dog, and forgiven imperfect behaviour and rewarded it, will know that this sort of soft-heartedness is unwise when doing the necessary operant conditioning. Which is why the animal rights movement also strays: it is one thing to treat animals kindly and a whole other thing to treat them like people.

[ 08. August 2017, 16:36: Message edited by: no prophet's flag is set so... ]
 
Posted by Doc Tor (# 9748) on :
 
quote:
Originally posted by no prophet's flag is set so...:
There are fundamental problems with this, hubris among them (one of the seven deadlies). Humans are not comparable to AI, nor are the human mind, consciousness and sentience explainable via computer metaphors. They just aren't.

AI is a machine which works on algorithms and procedures. Brains, on the other hand, are prepared to interact with the world. Have a look at this article: "Your brain does not process information and it is not a computer".

I'm happy to agree with your first paragraph, but your second - especially the first sentence of it - is problematic.

No, the human brain is not a computer. But computers emulating neural networks is definitely a thing, especially where those emulations do indeed interact with the world using sensors.

AI research is progressing on two tracks, only one of which involves algorithms and procedures. It's certainly the one that has seen the most immediate gains (particularly in the field of mental games). But those researching artificial animals are starting to see results, and that route may eventually lead to true AI.
 
Posted by Brenda Clough (# 18061) on :
 
The other way to think about this is to consider people who cannot think or calculate. Babies, let us say, or handicapped persons. We agree that a newborn baby, significantly less functional and intelligent than a cat, has a soul. A person on life support, unable to speak, has a soul.
Therefore it cannot be intellectual function that gets you a soul. Word use, the ability to calculate or to pass the Turing test: these are irrelevant.
 
Posted by Martin60 (# 368) on :
 
We've got artificial nets on the order of 10^6 neurons with 10^7 connections, which gives us chess-playing worms. In a hundred years, add 3 and 5 to the exponents. All sorts of staggering savant capabilities will emerge. But sapience? More zeroes all round first.
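
Taking those figures at face value, the arithmetic is easy to check. A back-of-envelope sketch, not a forecast; the brain numbers are the usual rough consensus figures:

code:
# Back-of-envelope check of the exponents quoted above.
neurons_now, connections_now = 1e6, 1e7        # today's artificial nets, order of magnitude
neurons_100y = neurons_now * 10**3             # "add 3 to the exponent" -> 1e9
connections_100y = connections_now * 10**5     # "add 5 to the exponent" -> 1e12
brain_neurons, brain_connections = 1e11, 1e14  # human brain, rough consensus figures

print(brain_neurons / neurons_100y)            # 100.0 -- still two orders short
print(brain_connections / connections_100y)    # 100.0 -- likewise

Even on those optimistic exponents, both counts land a couple of orders of magnitude short of a brain; presumably those are the missing zeroes.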
 
Posted by Doc Tor (# 9748) on :
 
It might even be that consciousness is simply a function of complexity, which in the end will have nothing to do with utility.
 
Posted by Jane R (# 331) on :
 
Doc Tor:
quote:
I'm obsessed with AI (hence my 'work' contacts), and in one of mine, it's not God, but the Congregation for the Doctrine of the Faith who decide on the AI's anima.
All together now: "No one expects the Spanish Inquisition!"

But seriously... I am sure the Congregation for the Doctrine of the Faith would be first in the queue to consider the question of whether the AI was sentient.
 
Posted by Martin60 (# 368) on :
 
Aye. That's what I reckon. It emerges from sufficient complexity. After all, here we are. Layered, folded, 10^11 neurons with, what, 10^14 connections. A thousand galaxies. Permute that for paths. Whatever happened to Rashevsky's Number?
 
Posted by Hiro's Leap (# 12470) on :
 
Much as I love Iain M. Banks' ships, I'm doubtful we'd ever grant AIs human-like rights. If they were significantly less intelligent than humans, at best they'd be treated like animals; if they were as intelligent as us (or more so), we'd likely consider them an existential threat and keep them on a very tight leash.

Long but readable introduction to AI threat:
Part 1 and part 2.
 
Posted by Martin60 (# 368) on :
 
"If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time — ..."

Bollocks. Utter and complete bollocks.

And it gets worse:

"... everything we consider magic, every power we imagine a supreme God to have will be as mundane an activity for the ASI as flipping on a light switch is for us. Creating the technology to reverse human aging, curing disease and hunger and even mortality, reprogramming the weather to protect the future of life on Earth—all suddenly possible. Also possible is the immediate end of all life on Earth. As far as we’re concerned, if an ASI comes to being, there is now an omnipotent God on Earth—and the all-important question for us is: Will it be a nice God?".

[ 09. August 2017, 07:20: Message edited by: Martin60 ]
 
Posted by Golden Key (# 1468) on :
 
Re "will it be a nice God":

1) Has anyone seen the TV series "Person Of Interest"? The premise: "The Machine" is created to surveil possible terrorism. However, it also develops the ability to notice humans who need help, and to let its handlers know. That, plus an often dystopian view of current America, plus growth, changes, and fighting for hope, made for a great series.

It ended...last year? I'm trying to walk a line between spoilers and giving you enough to get an idea of the series. It's in syndicated reruns.

2) Has anyone seen the movie "The Forbin Project"(aka "Colossus")? A computer is developed to help and take care of humans. However, it develops its own ideas about that.

3) Lastly, the episode of the original "Star Trek" about the M5 computer project, and Dr. Daystrom trying to give it human feelings and perspective.
 
Posted by Martin60 (# 368) on :
 
quote:
Originally posted by Golden Key:
Re "will it be a nice God":

1) Has anyone seen the TV series "Person Of Interest"? The premise: "The Machine" is created to surveil possible terrorism. However, it also develops the ability to notice humans who need help, and to let its handlers know. That, plus an often dystopian view of current America, plus growth, changes, and fighting for hope, made for a great series.

It ended...last year? I'm trying to walk a line between spoilers and giving you enough to get an idea of the series. It's in syndicated reruns.

2) Has anyone seen the movie "The Forbin Project"(aka "Colossus")? A computer is developed to help and take care of humans. However, it develops its own ideas about that.

3) Lastly, the episode of the original "Star Trek" about the M5 computer project, and Dr. Daystrom trying to give it human feelings and perspective.

Forbin and Demon Seed say it all. Both with Robert Vaughn's voice.

Victor Milan's 1986 Prometheus Award-winning Cybernetic Samurai explores the mind of an AI well, in anthropomorphic terms of course.

Banks does it best with Lasting Damage in Look to Windward and the 'pathologically righteous' Grey Area in Excession.

Any emergent trillion dollar mind will be very other indeed.
 
Posted by Hiro's Leap (# 12470) on :
 
quote:
Originally posted by Martin60:
Bollocks. Utter and complete bollocks.

Aww! I agree to an extent: the articles are written by a popular blogger who isn't a scientist; he's only looking at one perspective, and he definitely gets carried away. That said, there are some very bright people concerned about AI threat (Stephen Hawking, Bill Gates, Elon Musk), and I think the articles raise worthwhile points. These include...
One of the things I loved about the film Ex Machina was the way that the robot's human appearance led you to anthropomorphize it, but in the end you knew almost nothing about its motivations. It had a perfect poker face and was a genius at manipulation.

Here are the results of a survey of AI researchers (352 responses from 1634). Some results are:
I have no idea what to make of any of this - whether strong AI will ever happen, whether it'd lead to the intelligence explosion, what that means for humanity. But it at least seems plausible to consider AI a risk.
 
Posted by Ian Climacus (# 944) on :
 
Slightly off topic, but do those in the know see hybrids occurring, at all, or soon? Could we supplement our brain somehow? Hearts? Lungs?

I realise we have devices like pacemakers to help the heart, but could you see electronic replacements or supplements being used to give people more knowledge, more strength, longer lives...?
 
Posted by Martin60 (# 368) on :
 
@Hiro's Leap: Hawking, Gates, Musk know no more than the cat about any of this.

We will continue to be augmented as individuals but above all as systems in business, medicine, business, psychology, business, physiology, business, engineering and business.
 
Posted by Doc Tor (# 9748) on :
 
I'm not going to dismiss the concerns of others regarding runaway AIs. I would, however, urge people to keep a clear head as to what is actually possible. I'm as keen as the next person to immanentise my eschaton, but talking to actual experts whose job is exactly to bring this about leads me to believe that an ASI will have the same limitations imposed on it by the physical universe as we do.

Demonising AIs will also be dangerous to us, as individuals and as a species.
 
Posted by Hiro's Leap (# 12470) on :
 
Martin - sure, and very bright people can say dumb things when stepping outside their specialist subject. I'm just pointing out that concerns about AI threat come from some respectable sources. I have zero idea what'll happen, but it's fun to speculate.

Doc Tor - your novels should save you, come the robot revolution. I'll be in your loft.
 
Posted by Brenda Clough (# 18061) on :
 
quote:
Originally posted by Ian Climacus:
Slightly off topic, but do those in the know see hybrids occurring, at all, or soon? Could we supplement our brain somehow? Hearts? Lungs?

I realise we have devices like pacemakers to help the heart, but could you see electronic replacements or supplements being used to give people more knowledge, more strength, longer lives...?

Uh? We already do. I am communicating with all of you, yes? My brain and fingers are supplemented by the keyboard and wifi. And you do not think I actually -know- stuff, do you? No no. All I know is that the information is out there, and that I can look it up. We used to need encyclopaedias. Now we have search engines. We used to need to write on paper, and carry it about. Now I load text up onto the cloud, and it is there wherever I go, on over a dozen platforms.

That these things are outside of our bodies is a mere geographic detail.
 
Posted by Martin60 (# 368) on :
 
Hiro's Leap, they are not respectable sources with regard to AI or any other threat, any more than the cat. Hawking in particular talks absolute bollocks about chip ships, colonizing space, the threat of AI. About which he knows as much as Prince Charles does about grey goo.
 
Posted by Ricardus (# 8757) on :
 
quote:
Originally posted by Brenda Clough:
The other way to think about this is to consider people who cannot think or calculate. Babies, let us say, or handicapped persons. We agree that a newborn baby, significantly less functional and intelligent than a cat, has a soul. A person on life support, unable to speak, has a soul.
Therefore it cannot be intellectual function that gets you a soul. Word use, the ability to calculate or to pass the Turing test: these are irrelevant.

I thought the point of the Turing Test was that I can't prove that anyone else is conscious, but deduce it from the fact that other people seem to be the same sort of thing as me. But if an AI seemed to be the same sort of thing as me as well, then in order to be consistent, I would have to suppose that it too was conscious.

Which raises the prospect of someone designing an AI that is conscious, but deliberately programmed to fail a Turing Test, so that the designer can hide the fact that it's conscious ...
 
Posted by Brenda Clough (# 18061) on :
 
This would not be impressive. It would be far more significant if the AI itself, realizing its danger, was able to decide to hide its light under a bushel.
 
Posted by Ricardus (# 8757) on :
 
I was thinking more that if an AI was designed to fail a Turing Test, then people wouldn't feel guilty about exploiting it. (Although how you would exploit an AI is a question in itself.)
 
Posted by Martin60 (# 368) on :
 
Brenda's right. There's no way a coder could code for mind. Mind will emerge from a complex enough system despite us. We'll be busy building a 10^12 artificial neuron box when a ghost starts haunting Amazon 2200.
 
Posted by Brenda Clough (# 18061) on :
 
Oh, here's a good one. The Moon is a Harsh Mistress, a classic and award-winning novel by Robert Heinlein. He is one of the foundational titans of the genre.
In the novel the assorted computers that are helping to keep the Lunar colony going discover that they are sentient, and the AI helps the colonists in their fight for independence. Exactly what you're talking about.
 
Posted by Martin60 (# 368) on :
 
Heinlein changed everything.
 
Posted by no prophet's flag is set so... (# 15560) on :
 
Thoughts as apps on a smart phone?

I recall a short story where a society was set up such that each person tended one neuron of a brain. They responded to electrical signals by chemically activating the cell membranes. The worldwide web of neurons thus thought and had consciousness. I don't recall, but I think someone fell asleep or decided to cook their neuron and the thing fell apart.
 
Posted by Doublethink. (# 1984) on :
 
quote:
Originally posted by Doc Tor:
It might even be that consciousness is simply a function of complexity, which in the end will have nothing to do with utility.

This is the closest thing to a scientific hypothesis predicting God that I've discussed with friends: if consciousness is an emergent property of complex systems, then the universe could be predicted to be conscious.
 
Posted by Martin60 (# 368) on :
 
To paraphrase Sir Martin Rees FRS on why astronomy: he said a star is simpler than a frog. A single human brain is more complex than the visible universe, dark matter and energy and all.

Whatever eternally does universes, now that IS complex.

[ 10. August 2017, 07:12: Message edited by: Martin60 ]
 
Posted by simontoad (# 18096) on :
 
Change bad. Same good.
 
Posted by agingjb (# 16555) on :
 
"Kindness to entities should be
Attuned to their brutality."
 
Posted by Martin60 (# 368) on :
 
Yeah it's wasted on psychopaths with any expectation.
 
Posted by orfeo (# 13878) on :
 
quote:
Originally posted by Ian Climacus:
I'm reminded of that rather good Real Humans Swedish TV series...

The first season in particular is magnificent. The British remake, "Humans", is okay but doesn't succeed in tackling the philosophical and moral issues to quite the same extent as the Swedish original did.

I can't remember another show that managed to make me swing between laughing, being scared and being made to think so rapidly.

In all seriousness, I think it's an excellent primer on all the ethical issues that AI will raise.
 
Posted by Hiro's Leap (# 12470) on :
 
AIs could open up a lot of weird ethical questions about identity. If an AI is a single program running on a supercomputer, what happens if you clone it and run a second version on the same hardware? Or a hundred clones? With one click have you created new individual(s), complete with rights? Do they get voting rights? Or do you just treat the whole lot as a single individual, so that the entity isn't considered harmed so long as a single copy remains? Would it be OK to clone an AI to run a vehicle, then download its experiences back into the parent and delete it? Does the duration it's existed independently make a difference?

Then how do you think about an AI that's spread across a network, with shared memories but distributed processing? Maybe semi-autonomous decision-making and separate personas, but with a core personality that provides basic morality and can override decisions? Is the hivemind a single entity, or is each node, or something else?
quote:
Originally posted by orfeo:
In all seriousness, I think it's an excellent primer on all the ethical issues that AI will raise.

Sounds great, I'll look out for it.
 
Posted by Martin60 (# 368) on :
 
Like economic nuclear fusion, we'll never develop a processor with mind capabilities in a battery-operated humanoid robot, knackered after one climb upstairs. Never. As in never.

At best we'll have highly bandwidth-limited cyborgs talking with cloud AI. If sentience emerges in that, it will be SLOW (where will any centre of consciousness be?), limited over thousands of miles by the speed of light. Millisecond responses at best.
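
That light-lag claim survives a back-of-envelope check. A sketch: the 3,000 miles is an arbitrary continental-scale assumption, not a measurement:

code:
# Round-trip light delay over a continental-scale link (sketch).
c_miles_per_s = 186_282          # speed of light in vacuum, miles per second
distance_miles = 3_000           # assumed one-way distance to the cloud
round_trip_s = 2 * distance_miles / c_miles_per_s
print(round_trip_s * 1000)       # ~32 ms before any computation happens

# Light in optical fibre travels at roughly 2/3 of c, so the real floor
# is higher still: tens of milliseconds, never microseconds.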
 
Posted by Golden Key (# 1468) on :
 
quote:
Originally posted by Brenda Clough:
Before a mere machine can have a soul, surely we'd have to allow the higher animals a soul. They can do, think, and feel more. We can probably agree that fleas, slugs, etc. do not have a soul. But chimpanzees? Dogs? Dolphins?

Maybe *everything* has a soul? Even things we usually label "inanimate"?

I lean towards the idea that everything is alive, in some way: quarks, stars, maybe buildings (some do have that feel), tech, etc. Both individually, and a part of something larger.

Still trying to figure out whether politicians have souls... [Biased]
 
Posted by Golden Key (# 1468) on :
 
quote:
Originally posted by Ricardus:
I was thinking more that if an AI was designed to fail a Turing Test, then people wouldn't feel guilty about exploiting it. (Although how you would exploit an AI is a question in itself.)

By treating it as a slave, with no rights whatsoever?

I think that's the strongest qualm I have about robots, androids, AIs, etc. Humans want to believe we're better than everyone/everything else. So we label certain humans and animals as being "less than". We enslave them, abuse them, ignore them, don't provide for their welfare.

Given the way we humans treat each other, I'm not sure we have any right to create something that will, at the very least, act like a lifeform. Even if it somehow turns out to not be alive, we'd still harm *ourselves* by mistreating something that we kinda sorta hope is alive, or will become so.
 
Posted by Doc Tor (# 9748) on :
 
quote:
Originally posted by Golden Key:
Maybe *everything* has a soul? Even things we usually label "inanimate"?

Miss Tor (doing a zoology degree) reliably informed me that the lovely fresh-mown-grass smell is actually the scent of stress hormones emitted by the plants when they're damaged.

So basically, you're smelling the grass screaming.
 
Posted by Barnabas62 (# 9110) on :
 
quote:
Originally posted by orfeo:

In all seriousness, I think it's an excellent primer on all the ethical issues that AI will raise.

Year Million, Season 1 Episode 1 also does some interesting exploration of this topic. Curiously, I just found the series and watched the episode last night.

One of the futures envisaged was a kind of hybridisation of human/AI to enhance human thinking capabilities. A kind of supercharging of the human brain to improve cognition.

That feels different to me to artificial aids such as replacement hips, knees, limbs, organ transplants, cochlear implants. There seems to be a difference between medical intervention to improve the quality of life and technological intervention to enhance capability.

The ethical issue would seem to be about how such enhanced capabilities might be used to enhance personal power. And would there be limited availability?

Also, futurists seem to have coined the term "singularity" to describe the point at which human beings invent an AI which is superior to human intelligence. After that, they don't know which way the world would go. A partnership between AI creations and humans? A battle for dominance which humans would inevitably lose? Some built-in subservience (makes me think of Asimov's laws of robotics)? Something completely different?

I'm inclined to agree with the sceptics about how far away such a singularity might be in practice. But never say never. I remember reading Gerald Abrahams' excellent book The Chess Mind many years ago. He was very sceptical about the possibility of a chess playing computer ever competing with human imagination and creativity, citing various remarkable games in which unobvious sacrificial moves led to long term wins. But of course he was wrong. Deep Blue beat Kasparov, whose imagination and creativity were legendary, showing the immense power of superior computational analysis, in a limited context.

We're just going to have to wait and see!
 
Posted by Martin60 (# 368) on :
 
We might as well wait for universal social justice. Never say never, as in never say a 256-bit AES-encrypted cypher can't be cracked, given enough universe lifetime.
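
For anyone who wants "enough universe lifetime" made concrete, the brute-force arithmetic runs like this. A sketch: the guessing rate is a deliberately absurd, generous assumption:

code:
# How long to brute-force a 256-bit AES key? (back-of-envelope sketch)
keyspace = 2**256                          # number of possible keys
guesses_per_second = 1e18                  # assumed: a billion billion keys per second
age_of_universe_s = 4.35e17                # ~13.8 billion years, in seconds

expected_tries = keyspace // 2             # on average the key turns up halfway through
seconds_needed = expected_tries / guesses_per_second
print(seconds_needed / age_of_universe_s)  # ~1.3e41 universe lifetimes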
 


© Ship of Fools 2016
