Thread: Hell for atheists is a supercomputer
Board: Oblivion / Ship of Fools.
To visit this thread, use this URL:
http://forum.ship-of-fools.com/cgi-bin/ultimatebb.cgi?ubb=get_topic;f=70;t=027696
Posted by seekingsister (# 17707) on
:
I came across an article about this idea called Roko's basilisk (warning: if you are susceptible to undue worry, stop reading now):
Roko's basilisk wiki
The summary is:
- in the future an all-intelligent artificial intelligence comes into being
- it will use its power to retroactively ensure that it is developed by humans
- it will also ensure that the human developers program the system in such a way that it will punish/torture everyone who hinders or fails to assist in its creation
- the AI will create an avatar of its enemies and torture it for all eternity
So - the fact that you now know about it supposedly means you are on the list for eternal future torture unless you start coding ASAP. Or something.
Apparently a lot of people who post on philosophy forums have started to go slightly crazy thinking about this.
Now, is it just me, or is this religion for atheists? Because if they believe an omnipotent, omniscient force requires them to do certain things to help it and avoid eternal punishment, that's not far off of what many people who call themselves Christians or Muslims call religious faith.
Or, put another way, it's Pascal's Wager for atheists, replacing God with a supercomputer.
Pascal's wager
It makes me wonder if it is the case that religion serves some important social function - so much so that people who reject it, or claim not to have it, create something that has similar rewards and losses to a religious system.
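The structure of the wager can be sketched as a toy expected-utility calculation. All the probability and utility numbers below are illustrative assumptions, not part of the original argument:

```python
# Pascal's Wager as a toy expected-utility calculation.
# Every number here is an illustrative assumption.

def expected_utilities(p_god, u_believe_god, u_believe_none,
                       u_reject_god, u_reject_none):
    """Return (EU of believing, EU of rejecting) given P(God exists) = p_god."""
    believe = p_god * u_believe_god + (1 - p_god) * u_believe_none
    reject = p_god * u_reject_god + (1 - p_god) * u_reject_none
    return believe, reject

# Pascal's move: an infinite payoff swamps any nonzero probability.
believe, reject = expected_utilities(
    p_god=0.001,                  # even a tiny probability...
    u_believe_god=float("inf"),   # ...times an infinite reward
    u_believe_none=-10,           # finite cost of wasted piety
    u_reject_god=float("-inf"),   # infinite punishment
    u_reject_none=10,             # finite worldly gain
)
print(believe > reject)  # belief dominates for any p_god > 0
```

The basilisk swaps the supercomputer in for God, but the arithmetic is identical - and it "works" equally well for any imagined punisher you care to plug in.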
Fixed URL. -Gwai
[ 25. July 2014, 16:06: Message edited by: Gwai ]
Posted by LeRoc (# 3216) on
:
quote:
seekingsister: - it will use its power to retroactively ensure that it is developed by humans
Does that mean that it can time travel?
Posted by Lord Jestocost (# 12909) on
:
quote:
Originally posted by LeRoc:
quote:
seekingsister: - it will use its power to retroactively ensure that it is developed by humans
Does that mean that it can time travel?
I was going to say. Surely that's one up even on God?
If this is getting philosophers in a twist then I'd say they're easily alarmed ...
Posted by IngoB (# 8700) on
:
quote:
Originally posted by LeRoc:
quote:
seekingsister: - it will use its power to retroactively ensure that it is developed by humans
Does that mean that it can time travel?
Apparently it's going to punish a simulation of you, and you are supposed to care about that. Or something like that.
Chesterton for the win: "When people stop believing in God, they don't believe in nothing — they believe in anything."
Posted by seekingsister (# 17707) on
:
For some reason the links aren't posting correctly - sorry!
[url= http://rationalwiki.org/wiki/Roko%27s_basilisk]Roko's basilisk[/url]
[url= http://rationalwiki.org/wiki/Pascal%27s_wager]Pascal's wager[/url]
[ 25. July 2014, 14:48: Message edited by: seekingsister ]
Posted by mousethief (# 953) on
:
Of all the bizarre conspiracy theories, this one takes the biscuit.
Posted by seekingsister (# 17707) on
:
quote:
Originally posted by Lord Jestocost:
If this is getting philosophers in a twist then I'd say they're easily alarmed ...
It's based partially on a feedback loop - so any computer programmers who are aware of Roko's basilisk will start to develop AI systems that incorporate the "kill all unhelpful humans" command to protect themselves.
Apparently the original post positing the theory was deleted by the website's owner for fear of its impact.
Slate - Roko's basilisk is the most terrifying thought experiment of all time
Posted by Marvin the Martian (# 4360) on
:
quote:
Originally posted by IngoB:
Apparently it's going to punish a simulation of you, and you are supposed to care about that. Or something like that.
That's only one of the faulty assumptions that go into the basilisk, but it is the biggest. Why would I care what some putative future AI does to a simulation of me that it has created?
Posted by LeRoc (# 3216) on
:
But perhaps we are the avatars?
Posted by lilBuddha (# 14333) on
:
From Rational Wiki on LessWrong, the progenitors of the Basilisk.
quote:
sometimes these ideas might benefit from a better grounding in reality
quote:
Originally posted by IngoB:
Chesterton for the win: "When people stop believing in God, they don't believe in nothing — they believe in anything."
Hmmmm, no. I mean it is cute and all, and one can certainly find examples to illustrate it. But one can also find the same empty, spinning hamster wheels within religion, so....
Posted by TheAlethiophile (# 16870) on
:
quote:
Originally posted by seekingsister:
- the AI will create an avatar of its enemies and torture it for all eternity
Does that avatar resemble CH Spurgeon?
Posted by Marvin the Martian (# 4360) on
:
No.
And even if we are, then flat-out stating that the whole thing is a load of bollocks which wouldn't have any effect on us if we weren't the simulation still wouldn't result in our (simulation's) torture, because the AI would know that we wouldn't help its developers either way.
It seems to me that Roko's Basilisk is just a taken-to-extremes version of some bullshit transhumanist metareligious dogma designed primarily to make people give money to an organisation founded and run by some maniac who is so obsessed with the idea that everything is numbers that he has openly stated that it's better for one person to be tortured than for several billion to get specks of dust in their eyes, because the former represents a smaller overall amount of pain.
Posted by Marvin the Martian (# 4360) on
:
quote:
Originally posted by lilBuddha:
But one can also find the same empty, spinning hamster wheels within religion, so....
As the OP said, it's Pascal's Wager for atheists (or the transhumanist subset thereof, at any rate). Up to and including the part where you have to dedicate your entire life - and give all your money - to those who represent "god" in order to "win".
Posted by Gwai (# 11076) on
:
quote:
Originally posted by seekingsister:
For some reason the links aren't posting correctly - sorry!
Wiki links often do that. It's probably the characters in the URL. Put a tinyurl link into the OP that redirects to the intended site. For the links in the quoted post, try:
Roko's basilisk
Pascal's wager
Posted by Adeodatus (# 4992) on
:
As Davros once said, "A fascinating idea...".
quote:
Originally posted by LeRoc:
quote:
seekingsister: - it will use its power to retroactively ensure that it is developed by humans
Does that mean that it can time travel?
It doesn't have to. All that's required is for its future existence to be imagined by sufficient people who are willing either to be scared by the possibility, or excited by it - assuming, of course, that the other bits of its spec are physically possible - running simulations of people, and suchlike.
For me, as for others, the main problem is the assumption that simulated person = person. But that's often an assumption made in the philosophy of identity, especially if there's some kind of continuity between the existence of the person and the existence of the simulation. ("We are the sum of our memories" and all that - about which I'm less convinced the more I think about it, which admittedly isn't much.)
Posted by LeRoc (# 3216) on
:
quote:
Adeodatus: It doesn't have to. All that's required is for its future existence to be imagined by sufficient people who are willing either to be scared by the possibility, or excited by it - assuming, of course, that the other bits of its spec are physically possible - running simulations of people, and suchlike.
Thanks for the explanation. I've read the Slate article now, and I understand a bit more about it.
Posted by Marvin the Martian (# 4360) on
:
quote:
Originally posted by Adeodatus:
For me, as for others, the main problem is the assumption that simulated person = person. But that's often an assumption made in the philosophy of identity, especially if there's some kind of continuity between the existence of the person and the existence of the simulation. ("We are the sum of our memories" and all that - about which I'm less convinced the more I think about it, which admittedly isn't much.)
I can accept that a theoretical perfect simulation of me would be as convinced that it was me as I am right now. But it still would not be this me. I wouldn't feel its pain (or any other emotion it felt), and thus from the second it was created it would become a completely separate (though obviously very similar) entity.
Posted by que sais-je (# 17185) on
:
quote:
Originally posted by seekingsister:
It's based partially on a feedback loop - so any computer programmers who are aware of Roko's basilisk will start to develop AI systems that incorporate the "kill all unhelpful humans" command to protect themselves.
As a programmer for quite a few years and a teacher of programming for more, this is not a psychology of programmers that I recognise.
You'd need a scoring system, really good graphics, amazing sound ... and amid the coffees and/or beers, somehow, well you know how it is, it wouldn't quite, well ...
But if I'm wrong I hope there'll be an app where you can Hell and un-Hell people.
Posted by Amika (# 15785) on
:
I don't think most atheists would fall for this any more than for one of the many quasi-religions that are springing up all over the place. I suspect this, along with positive thinking/cosmic ordering/the secret might be the recourse of those who are 'spiritual but not religious'. They might identify as non-believers in organised religion, but they're seeking answers elsewhere all the same.
[ 25. July 2014, 16:43: Message edited by: Amika ]
Posted by Martin PC not & Ship's Biohazard (# 368) on
:
It's good to know that atheist humanists are as useless at actually doing anything along the arc of the moral universe as the theist.
Posted by Adeodatus (# 4992) on
:
quote:
Originally posted by Marvin the Martian:
quote:
Originally posted by Adeodatus:
For me, as for others, the main problem is the assumption that simulated person = person. But that's often an assumption made in the philosophy of identity, especially if there's some kind of continuity between the existence of the person and the existence of the simulation. ("We are the sum of our memories" and all that - about which I'm less convinced the more I think about it, which admittedly isn't much.)
I can accept that a theoretical perfect simulation of me would be as convinced that it was me as I am right now. But it still would not be this me. I wouldn't feel its pain (or any other emotion it felt), and thus from the second it was created it would become a completely separate (though obviously very similar) entity.
This is interesting. It removes the suffering from your future self, but instead replaces it with the suffering of a simulation that might possibly be regarded as a(nother) person. The interesting bit is, in the thought-experiment, the suffering of that might-be-a-person is entirely dependent on your present-day actions.
So the question becomes, would you change the direction of your life for the sake of a future might-be-a-person?
Posted by lilBuddha (# 14333) on
:
quote:
Originally posted by Marvin the Martian:
quote:
Originally posted by lilBuddha:
But one can also find the same empty, spinning hamster wheels within religion, so....
As the OP said, it's Pascal's Wager for atheists (or the transhumanist subset thereof, at any rate). Up to and including the part where you have to dedicate your entire life - and give all your money - to those who represent "god" in order to "win".
No, I get that. But that is not what I was addressing. Chesterton implies a superior logic to theism, the quote I responded to was one of several in a similar vein.
But really, Roko's Basilisk is building a boat with a magnetite mast and crowing that one is only sailing in the direction the compass points.
My point to IngoB was that many religious do the same and that Chesterton was wrong here.
Posted by Stetson (# 9597) on
:
Adeodatus wrote:
quote:
This is interesting. It removes the suffering from your future self, but instead replaces it with the suffering of a simulation that might possibly be regarded as a(nother) person. The interesting bit is, in the thought-experiment, the suffering of that might-be-a-person is entirely dependent on your present-day actions.
So the question becomes, would you change the direction of your life for the sake of a future might-be-a-person?
If some lurker PMed me and said "If you use the letter 'Q' in any of your posts from here on in, I will kill the child that I have tied up in my cellar", then I might stop using the letter 'Q' in my posts, until the sender of the PM had been apprehended.
On the other hand, if someone sent me a similar ultimatum, only with the threat changed to "...I will leave instructions in my will for my great great grandson to abduct and torture children", eh, not quite the same degree of moral urgency. I'd probably still try to get the guy apprehended, or at least checked out, by the authorities.
[ 25. July 2014, 17:14: Message edited by: Stetson ]
Posted by deano (# 12063) on
:
quote:
Originally posted by que sais-je:
quote:
Originally posted by seekingsister:
It's based partially on a feedback loop - so any computer programmers who are aware of Roko's basilisk will start to develop AI systems that incorporate the "kill all unhelpful humans" command to protect themselves.
As a programmer for quite a few years and a teacher of programming for more, this is not a psychology of programmers that I recognise.
You'd need a scoring system, really good graphics, amazing sound ... and amid the coffees and/or beers, somehow, well you know how it is, it wouldn't quite, well ...
But if I'm wrong I hope there'll be an app where you can Hell and un-Hell people.
Of course it would need a BA to capture the requirement in the first place (and they will probably not write it properly anyway), and then a peer review of the use case, then testing. So I'm confident that this will probably be descoped in the first playback with the business.
Posted by Crœsos (# 238) on
:
quote:
Originally posted by Adeodatus:
This is interesting. It removes the suffering from your future self, but instead replaces it with the suffering of a simulation that might possibly be regarded as a(nother) person. The interesting bit is, in the thought-experiment, the suffering of that might-be-a-person is entirely dependent on your present-day actions.
So the question becomes, would you change the direction of your life for the sake of a future might-be-a-person?
There are plenty of real world examples demonstrating fairly clearly that the suffering of other people, particularly people you've never met, is not a very strong motivator for most.
Posted by IngoB (# 8700) on
:
quote:
Originally posted by Adeodatus:
quote:
Originally posted by Marvin the Martian:
I can accept that a theoretical perfect simulation of me would be as convinced that it was me as I am right now. But it still would not be this me. I wouldn't feel its pain (or any other emotion it felt), and thus from the second it was created it would become a completely separate (though obviously very similar) entity.
This is interesting. It removes the suffering from your future self, but instead replaces it with the suffering of a simulation that might possibly be regarded as a(nother) person. The interesting bit is, in the thought-experiment, the suffering of that might-be-a-person is entirely dependent on your present-day actions.
So the question becomes, would you change the direction of your life for the sake of a future might-be-a-person?
Incidentally, exactly the same problem arises in Buddhism. Buddhism proposes rebirth, not reincarnation. The person that gets reborn is karmically linked to you - caused by you - but it is not you. There is no "soul" of whatever form in Buddhism that somehow transmits "you" to a new body. That would be reincarnation, which is the general Hindu system that Buddhism developed over and against.
I know at least one person (from reading his book) who stopped being a Buddhist basically because of this, to caricature his position: "Who gives a damn what happens to future persons causally related to myself? Who cares if they get enlightened? I will be dead, it matters not to me."
Posted by Ricardus (# 8757) on
:
quote:
Originally posted by seekingsister:
Slate - Roko's basilisk is the most terrifying thought experiment of all time
That sounds like someone has come across one of those "ergo 1=2" paradoxes and taken it seriously.
Also, apologies for those who haven't read the article, but isn't Newcomb's Paradox referred to therein a completely circular nonsense? In effect you are asked to bet on whether a supercomputer can predict your answers, in a thought-experiment where 'a supercomputer can predict your response' is given as one of the fundamental premises ...
[ 25. July 2014, 18:35: Message edited by: Ricardus ]
Posted by Justinian (# 5357) on
:
The missing piece of the puzzle here is Eliezer Yudkowsky and the nonprofit Singularity Institute for Artificial Intelligence (SIAI) that he founded in 2000. The stated purpose of the SIAI is to create a benevolent AI as the first self-improving AI - an attempt to prevent this scenario. SIAI naturally hosts and maintains LessWrong. (SIAI were at one point also claiming that they would save eight lives per dollar spent due to minimising the risk of malevolent AIs - and invited GiveWell to cross-check them; GiveWell rated them as worse than not donating at all.)
Posted by que sais-je (# 17185) on
:
quote:
Originally posted by IngoB:
I know at least one person (from reading his book) who stopped being a Buddhist basically because of this, to caricature his position: "Who gives a damn what happens to future persons causally related to myself? Who cares if they get enlightened? I will be dead, it matters not to me."
Some years ago I suggested (on another forum) that some Buddhists will spend their lives trying to save the future suffering/illness/ageing/death of someone unknown to them. But then, many people of all faiths (and none) have given up large parts of their lives to relieve the suffering of those they don't know.
To my surprise the most extreme response came from someone describing themselves as a Christian and who saw my comments as an argument for the superiority of his religion. Christians, he said, are saved and so don't have to concern themselves with the fate of sinners. To do so would be blasphemy since it usurps the role of God as judge.
I would say that the future 'might-be-person' is still someone you could have some concern for.
Posted by LeRoc (# 3216) on
:
Of course, the question 'who gives a fuck about future people' is becoming more and more pressing in a very practical sense.
Posted by Stetson (# 9597) on
:
quote:
Originally posted by LeRoc:
Of course, the question 'who gives a fuck about future people' is becoming more and more pressing in a very practical sense.
Except in the case of the basilisk, it's not so much about "Who gives a **** about future people?", as it is "Who gives a **** about future people being menaced by future demons of a highly speculative nature?"
Posted by HCH (# 14313) on
:
No, I don't think this is religion for atheists. It is more like a paranoid nightmare devised by people who have too much time on their hands and apparently no understanding at all of computer programming, etc. It is even sillier than Asimov's robots.
Posted by LeRoc (# 3216) on
:
quote:
Stetson: Except in the case of the basilisk, it's not so much about "Who gives a **** about future people?", as it is "Who gives a **** about future people being menaced by future demons of a highly speculative nature?"
I agree. I don't claim to understand all of their speculations after only a quick read, but they seem bollocks to me. Better to concentrate on real threats for the future.
Posted by Stetson (# 9597) on
:
quote:
I don't claim to understand all of their speculations after only a quick read
It's basically the equivalent of starting a thread on the Ship, in which the OP states that the existence of the thread itself is to be taken as prophecy that the antichrist will be born 500 years from the date of the OP. And that any Christian living at that time(ie. 500 years in the future) needs to murder every [DEMOGRAPHICS LEFT UNSPECIFIED] child in order to stop the antichrist from being born.
And then you sit back and argue about whether the mods should delete the thread, on the grounds that some nutbars 500 years in the future might see it and follow its recommendations.
Posted by Stetson (# 9597) on
:
^ Oh, and just to make it completely parallel...
If donations totalling $100 000 have been wired to the bank account of Stetson's Stop The Antichrist Society by the end of August 2014, the antichrist will be stillborn, and thus no need for Christians in 2514 AD to kill all those babies.
(To match Roko's argument that fundraising for AI will appease the basilisk.)
Posted by Alisdair (# 15837) on
:
quote:
Originally posted by lilBuddha:
quote:
Originally posted by Marvin the Martian:
quote:
Originally posted by lilBuddha:
But one can also find the same empty, spinning hamster wheels within religion, so....
As the OP said, it's Pascal's Wager for atheists (or the transhumanist subset thereof, at any rate). Up to and including the part where you have to dedicate your entire life - and gove all your money - to those who represent "god" in order to "win".
No, I get that. But that is not what I was addressing. Chesterton implies a superior logic to theism, the quote I responded to was one of several in a similar vein.
But really, Roko's Basilisk is building a boat with a magnetite mast and crowing that one is only sailing in the direction the compass points.
My point to IngoB was that many religious do the same and that Chesterton was wrong here.
Chesterton for the win: "When people stop believing in God, they don't believe in nothing — they believe in anything."
OTOH Chesterton is only wrong if someone's idea of 'God' is less than God, assuming God is. To put it another way, if we take James' little maxim: 'God is love' and plug that into Chesterton's we get:
"When people stop believing in Love, they don't believe in nothing — they believe in anything."
God/Love can be understood as intrinsic to the existence of everything that is, so to deny that which all else depends upon is to leave a hole which MUST be filled, and we will fill it with whatever we foolishly believe will do, but of course nothing else will do.
Posted by Alisdair (# 15837) on
:
'doh', it's John, of course, not James!
Posted by lilBuddha (# 14333) on
:
quote:
Originally posted by Alisdair:
God/Love can be understood as intrinsic to the existence of everything that is, so to deny that which all else depends upon is to leave a hole which MUST be filled, and we will fill it with whatever we foolishly believe will do, but of course nothing else will do.
And this is an example of the boat building I spoke of.
Posted by Alisdair (# 15837) on
:
Maybe it is, but we ALL do it, don't we? We have to, because otherwise we would be like bacteria: no imagination, no creativity, no wonder, no exploration, no understanding, no hope, AND no love.
The 'boat building', as you call it, is what we make it, but we all do do it. I suppose the only question that really needs to be answered is, 'what foundation, what basis, what understanding do we build on, and then, what are the results?'
Forget 'religion', or any other pat category we use to define 'them' and 'us', except perhaps two categories: 'life' and 'death'---which are we contributing to?
lilBuddha - your comment about other religions doing the same seems based on the idea that 'God' is merely a human construct, with no actual reality, except what we decide. Which is fine, and may even be true, except that no one knows that, or has any genuine evidence. So, it remains entirely possible that 'God' is, regardless of any human constructs, or refusal to accept the possibility of 'God'. And if God actually is, then presumably we are talking about something/someone worth talking about and trying to get a handle on---there are absolutes to be faced, and face them we will (and do), if God is. Pascal's Wager, maybe, that's certainly an approach.
[ 26. July 2014, 08:36: Message edited by: Alisdair ]
Posted by Ricardus (# 8757) on
:
quote:
Originally posted by Adeodatus:
This is interesting. It removes the suffering from your future self, but instead replaces it with the suffering of a simulation that might possibly be regarded as a(nother) person. The interesting bit is, in the thought-experiment, the suffering of that might-be-a-person is entirely dependent on your present-day actions.
So the question becomes, would you change the direction of your life for the sake of a future might-be-a-person?
But I think the moral decision to torture would have been made by the AI, not by me.
(FWIW I feel more or less the same about Sophie's Choice.)
Posted by Martin PC not & Ship's Biohazard (# 368) on
:
Shouldn't this idle tangent be in Heaven?
Posted by Gwai (# 11076) on
:
quote:
Originally posted by Martin PC not & Ship's Biohazard:
Shouldn't this idle tangent be in Heaven?
If you mean that the whole thread is a tangent: No. Roko's Basilisk is a valid discussion topic whether or not you think it a reasonable issue or a serious problem.
Gwai,
Purgatory Host
Posted by lilBuddha (# 14333) on
:
quote:
Originally posted by Alisdair:
Maybe it is, but we ALL do it, don't we? We have to, because otherwise we would be like bacteria: no imagination, no creativity, no wonder, no exploration, no understanding, no hope, AND no love.
I disagree. There is a difference between
What is?
And
This is.
It is in the former that exploration begins, the latter in which it ends, and understanding the two which allows it to continue.
'If x is true, then ...' is an acceptable premise as long as one does not forget the if. Your "substitute Love for God" appears to forget the if.
quote:
Originally posted by Alisdair:
lilBuddha - your comment about other religions doing the same seems based on the idea that 'God' is merely a human construct,
No. Simply that belief is belief.
quote:
Originally posted by Alisdair:
with no actual reality, except what we decide. Which is fine, and may even be true, except that no one knows that, or has any genuine evidence.
But it does not logically follow that a person might as well believe in God.
Pascal's Wager is a cute little thought experiment but, even as such, it has a major flaw: which god?
Posted by Alisdair (# 15837) on
:
lilBuddha - I think to a large degree I accept your point of view, except the point at which it implies a lifetime spent sitting on the fence on the basis that 'because I don't know for sure I won't make a decision'.
There are some things where remaining firmly on the fence, even with feet planted firmly on either side, is useful, but for others maybe not so much.
The issue of 'God' in a sense is one where we all make a decision: there are no fence sitters if God is, because we are all fundamentally 'of God', it's merely a matter then of whether we receive what God is about, or reject it. If God truly is Love then we are free to choose, to explore, to receive or reject.
I guess the simplest way of approaching that is to ask, 'If God is what should God be like?' A dangerous question to be sure, but our personal answer will be telling.
Perhaps in the light of that the assertion that 'God is Love' may start to have some worthwhile meaning, because we can start to think about what love is and what the practice of it might imply for the living of life.
The Basilisk implies a deadly predestination which is the antithesis of any meaningful understanding of 'love' that I have come across. By that reckoning the Basilisk idea immediately falls flat on its face---a thought experiment probably best indulged in by nerdy teenagers looking for a good argument. :-)
Posted by lilBuddha (# 14333) on
:
quote:
Originally posted by Alisdair:
lilBuddha - I think to a large degree I accept your point of view, except the point at which it implies a lifetime spent sitting on the fence on the basis that 'because I don't know for sure I won't make a decision'.
Not what I am implying at all.
quote:
Originally posted by Alisdair:
The issue of 'God' in a sense is one where we all make a decision: there are no fence sitters if God is, because we are all fundamentally 'of God', it's merely a matter then of whether we receive what God is about, or reject it.
This statement is only true if one accepts the premise that God* exists and that S/he/it/they care.
Posted by Alisdair (# 15837) on
:
quote:
Originally posted by lilBuddha:
quote:
Originally posted by Alisdair:
lilBuddha - I think to a large degree I accept your point of view, except the point at which it implies a lifetime spent sitting on the fence on the basis that 'because I don't know for sure I won't make a decision'.
Not what I am implying at all.
Yes, I can appreciate that it isn't what you meant, but to me at least that is what is implied, and that implication is expressed again by your comment below (I'm not having a go at you, BTW. I think it's vital we are all able to work at these kinds of life issues according to our own lights. One of 'religion's' problems is that too often it places conformity and doctrinal purity above humanity and practice).
quote:
Originally posted by Alisdair:
The issue of 'God' in a sense is one where we all make a decision: there are no fence sitters if God is, because we are all fundamentally 'of God', it's merely a matter then of whether we receive what God is about, or reject it.
This statement is only true if one accepts the premise that God* exists and that S/he/it/they care.
The thing is, neither of us, nor anyone else, seems likely to be able to confirm this one way or the other in this form of existence we currently experience. Hence my point that if God does 'exist' then regardless of our view/belief we would be in some form of relationship with the one who is intrinsically the source of our being. No 'belief' or 'religion' is required, it would simply be in the nature of things. At that point I find the proposition 'God is Love' to be useful---'God' is beyond us, but 'love' is clearly something we can know about, experience, explore, and take action over. For and against.
Posted by Martin PC not & Ship's Biohazard (# 368) on
:
Madam Host. Ma'am.
Posted by que sais-je (# 17185) on
:
quote:
Originally posted by Alisdair:
Hence my point that if God does 'exist' then regardless of our view/belief we would be in some form of relationship with the one who is intrinsically the source of our being. No 'belief' or 'religion' is required, it would simply be in the nature of things.
If God handed over universe creation to some kind of demi-urge (like giving a kid a Lego set), or created the universe for the enjoyment of such a being (like a toy), or has decided he's had enough of this universe and left it to its own devices, or isn't immortal and has died, or made the universe from himself (ceasing to exist as himself in the process), or made the universe as a kind of art work, or ...
One can go on for ever. In every case there is, of necessity, some relation between us and God and it would be "in the nature of things". But there is nothing which follows necessarily from this relationship.
quote:
Originally posted by Alisdair:
At that point I find the proposition 'God is Love' to be useful---'God' is beyond us, but 'love' is clearly something we can know about, experience, explore, and take action over.
Which does give an indication of the sort of relationship which may exist. But that such a sort of relationship exists needs to be justified.
Posted by Marvin the Martian (# 4360) on
:
quote:
Originally posted by Adeodatus:
So the question becomes, would you change the direction of your life for the sake of a future might-be-a-person?
Apparently not.
Posted by Marvin the Martian (# 4360) on
:
quote:
Originally posted by Stetson:
quote:
Originally posted by LeRoc:
Of course, the question 'who gives a fuck about future people' is becoming more and more pressing in a very practical sense.
Except in the case of the basilisk, it's not so much about "Who gives a **** about future people?", as it is "Who gives a **** about future people being menaced by future demons of a highly speculative nature?"
To be precise, it's about "Who gives a **** about future computer simulations of people being menaced by future demons of a highly speculative nature?"
Even if all other presuppositions are 100% correct (which they aren't), it relies on the assumption that such simulations are utterly indistinguishable from the people being simulated. It's not far removed from saying that anyone who kills another player in World of Warcraft is an actual murderer.
Posted by mstevens (# 15437) on
:
I thought I'd say hi! as an occasional lesswrong poster. I'll defend us (but only a little) by saying the whole basilisk thing represents the site at its worst.
Posted by Jay-Emm (# 11411) on
:
quote:
Originally posted by mstevens:
I thought I'd say hi! as an occasional lesswrong poster. I'll defend us (but only a little) by saying the whole basilisk thing represents the site at its worst.
I've had a little niggling worry caused by the thought, in that I don't have much problem saying a simulation isn't a hostage I care much for... but then there seem to be many situations that I'm not sure how to clearly distinguish from this, and yet some clearly have consequences I'd want to disown.
Firstly a religious heaven/hell or reincarnation.
Second an appropriately distant descendant.
Thirdly an appropriately distant current relative.
Fourthly (just to take it to the absurd, and to reassure myself that the conclusion is not "slavery is A-OK so long as I'm the owner") a slightly older me.
Posted by Stetson (# 9597) on
:
quote:
Originally posted by Marvin the Martian:
quote:
Originally posted by Stetson:
quote:
Originally posted by LeRoc:
Of course, the question 'who gives a fuck about future people' is becoming more and more pressing in a very practical sense.
Except in the case of the basilisk, it's not so much about "Who gives a **** about future people?", as it is "Who gives a **** about future people being menaced by future demons of a highly speculative nature?"
To be precise, it's about "Who gives a **** about future computer simulations of people being menaced by future demons of a highly speculative nature?"
Even if all other presuppositions are 100% correct (which they aren't), it relies on the assumption that such simulations are utterly indistinguishable from the people being simulated. It's not far removed from saying that anyone who kills another player in World of Warcraft is an actual murderer.
Well, I thought it was postulated somewhere along the way that the simulations ARE sentient and cognizant beings, which would make them morally equivalent to people.
Posted by Marvin the Martian (# 4360) on
:
quote:
Originally posted by Stetson:
Well, I thought it was postulated somewhere along the way that the simulations ARE sentient and cognizant beings, which would make them morally equivalent to people.
Yes, that's another one of the (many) presuppositions that are incorrect.
Posted by Drewthealexander (# 16660) on
:
I wonder if the technology in question is being prototyped in the new Universal Credit computer system.....
Posted by IngoB (# 8700) on
:
quote:
Originally posted by Marvin the Martian:
To be precise, it's about "Who gives a **** about future computer simulations of people being menaced by future demons of a highly speculative nature?"
As it happens, the idea here is not that a demonic AI will (virtually) punish those who failed to support it. Rather it is a supremely good AI that does so, in recognition of the moral imperative that it must be established as soon as possible in order to extend its paradisiacal rule over humanity.
Basically, and unsurprisingly, these people are just regurgitating traditional Christianity in an absurd techno mode. This is simply God, punishing with hell the wicked who fail to bring about His kingdom. Souls become simulations, etc. The only real question here is whether they (or at least the main authors of this fantasy) realise that they are stressing out about a cyberpunk version of traditional Christianity, or not. It would be hilarious if they didn't...
quote:
Originally posted by Marvin the Martian:
Even if all other presuppositions are 100% correct (which they aren't), it relies on the assumption that such simulations are utterly indistinguishable from the people being simulated. It's not far removed from saying that anyone who kills another player in World of Warcraft is an actual murderer.
Hmm. While I think these people are nuts, that comparison is about as complimentary to their madness as possible. Unintentionally so, I guess. If we consider someone playing a WoW character, intensely, then we can say that they do project their person into the virtual WoW universe. Psychologically, having a long-established character in such a game killed is certainly more than just "losing at a game". It really is a bit like getting killed oneself (I guess one cannot experience having been murdered, but if one could, then this would feel somewhat like it...). That may be unhealthy, but it is the case. In some sense then, a future highly accurate simulation of yourself confronted with a hellish virtual environment can be considered as a similar projection of yourself. Except that you would care about that not by the force of emotional habit (established by many hours of game play), but by a kind of intellectual decision (established by following a certain philosophy). That almost makes all this sound close to some kind of sanity...
Posted by Stetson (# 9597) on
:
IngoB wrote:
quote:
Psychologically, having a long-established character in such a game killed is certainly more than just "losing at a game". It really is a bit like getting killed oneself (I guess one cannot experience having been murdered, but if one could, then this would feel somewhat like it...). That may be unhealthy, but it is the case.
A Rape In Cyberspace
Some users reported post-traumatic stress from the hacker's actions. Personally, I think the worst part would be worrying that the actions described in the game would be a possible prelude to a real-world assault, an issue not relevant to a posthumous replica being tortured by a basilisk.
Posted by IngoB (# 8700) on
:
quote:
Originally posted by Stetson:
A Rape In Cyberspace
Some users reported post-traumatic stress from the hacker's actions. Personally, I think the worst part would be worrying that the actions described in the game would be a possible prelude to a real-world assault, an issue not relevant to a posthumous replica being tortured by a basilisk.
Well, having played a MUD fairly intensely over a decade ago, I would disagree. The main concern is not the potential for a real life attack. This would require serious hacker skills to breach the avatar anonymity, and to then trace back the player to a real world location via his or her IP or online usage pattern. That's not a likely scenario, unless one has freely revealed actual private details (which players generally do not do).
However, if one has spent many, many hours constructing a virtual persona in an online world, then an extraordinary attack on that persona (*) would be taken personally. It is difficult to suddenly detach from this personal creation. It is perhaps a bit more like seeing your child attacked than being attacked yourself, since the personal identification is usually (and consciously) not perfect. But it is almost impossible to seriously role-play a character over a long time and have no emotional investment in that character.
(*) FWIW, many of these places do allow PVP (player vs. player), making "in game murder" a rather common matter. This then usually does not become a major psychological issue. It is more considered a difficulty of the gaming environment, which one has accepted in playing on a PVP (rather than PVE, player vs environment) server. In such a PVP setting, a virtual murder would not necessarily be taken that seriously by anyone, including the victim, whereas a vendetta (continuous hunting down and killing of a particular player) would be. However, I have never encountered a place that has "virtual rape" as part of the gaming environment. Consequently, such a (basically narrated) virtual attack would be a problem. I think these virtual places do show that we are quite flexible in our evaluation (what is bad, what is not) but not in our emotions (what do I feel if bad things happen to me).
© Ship of Fools 2016
UBB.classic™ 6.5.0