
June 9: Let's get ready to ruuuuuumble!
Heretical Rants



Joined: 21 Jul 2009
Posts: 5344
Location: No.

Posted: Wed Jun 11, 2014 3:27 pm

Midnight Tea wrote:
Heretical Rants wrote:
* An unfriendly AI can pretend to be friendly, and it can pretend to be stupid, until we let it out of the box -- and it can hack around any and all human-designed failsafes.


And saying they could hack around any and all human failsafes is kind of stacking the deck in its favor in this hypothetical. That's the thing about hypotheticals: they can be stretched any way you like, and where exactly one strays from the believable can sometimes be iffy.


Absolutely everything we make is riddled with zero-day security holes. Given enough time, even we could break through our own failsafes.

and it's not exactly a hypothetical that we would let it out of its box, either
Quote:

...

But let's just say I have a lot of confidence in human ability to kill things threatening it. Several hundred thousand years at the top of the food chain is testament to that.


hahaha

that's just the problem

we evolved as part of a food chain, as an accumulation of beneficial (and not-so-beneficial) errors -- and mostly not errors beneficial to thinking

Our minds aren't built for much more than throwing pointy sticks at pronking ungulates, and they're running on slow squishy hardware that doesn't natively do math. Throughout our history we've cooked up a slew of positively ludicrous models of the world and sometimes even killed those who refused to adhere to those models. We consistently lose "high-level" strategy games like chess to lumps of silicon that aren't even as intelligent as insects.

we don't stand a fucking chance



Quote:
Quote:
* Friendly AIs are a tiny subspace of possible minds. Friendliness is not the default.

Nor is hostility

I dunno. Are we hostile towards ants?

There used to be a field full of frogs in my town. We bulldozed it to build a school. Were we hostile towards the frogs?
The frogs were sentient, but we didn't care because frogs aren't very smart as compared to us. They aren't "sapient".

Suppose we were the AI? Would the slow, stupid, squishy things that built us survive even a week after giving us internet access?

Indifference is just as lethal.
_________________
butts
Midnight Tea



Joined: 15 Jul 2012
Posts: 209
Location: In the Haunted Lands

Posted: Wed Jun 11, 2014 4:28 pm

Heretical Rants wrote:
Absolutely everything we make is riddled with zero-day security holes. Given enough time, even we could break through our own failsafes.

and it's not exactly a hypothetical that we would let it out of its box, either

As I said, it's stacking the deck in its favor and it seems to be stretching the hypothetical to fit a preconceived notion of an apocalypse. That's the nature of hypotheticals and the ability to set your own parameters within them.

I'm not dismissing the hypothetical, don't misunderstand me. I'm just advocating caution in accepting it as logistically bulletproof because we're talking about a complicated situation in a complicated world with complicated politics. That butterfly can beat its wings any number of ways.

If you accept that, then I have no issue with your stance.

Heretical Rants wrote:
hahaha

that's just the problem

we evolved as part of a food chain, as an accumulation of beneficial errors

Our minds aren't built for much more than throwing pointy sticks at pronking ungulates, and they're running on slow squishy hardware that doesn't natively do math. Throughout our history we've cooked up a slew of positively ludicrous models of the world and sometimes even killed those who refused to adhere to those models.

we don't stand a fucking chance

We'll have to agree to disagree here. I'll concede that the human race's survival in the earlier years of its existence was due in large part to incredible serendipity and to primates somehow surviving being driven out of their habitat. Even then, a case can be made that it says something that humans didn't die out.

But nowadays? Hoo boy. Seven billion strong, with more munitions than food and legions of nuts itching for a legitimate chance to put them to use. Human beings are definitely not to be trifled with.

Short of a grey goo scenario where the environment itself is systematically deconstructed on the atomic level, I think driving humans to extinction with technology is unrealistic. And again, it's stretching a hypothetical to the point of paranoia, since the worst possible outcome requires an improbably large number of factors to turn out a certain way.


Quote:
I dunno. Are we hostile towards ants?

There used to be a field full of frogs in my town. We bulldozed it to build a school. Were we hostile towards the frogs?
The frogs were sentient, but we didn't care because frogs aren't very smart as compared to us. They aren't "sapient".

Suppose we were the AI? Would the slow, stupid, squishy things that built us survive even a week after giving us internet access?

Indifference is just as lethal.

We're kind of talking about two different things. I am talking about a machine that is capable of understanding and assigning emotional or sentimental value, not so much an autonomous computer program. The latter can cause a lot of damage if left unchecked, and I'm not arguing against that.
The former, what I'm talking about, is essentially a human born in a different and more efficient body. That seems to be the kind of creature Px is and it's what I'm referring to as our future.

How about if I just agree that the steps leading up to that point can provide some troubling and challenging scenarios like the ones you're describing? And that I'm not disagreeing they could be very dangerous if handled carelessly? But I have some faith in the scientific community to not fuck this up.

And lest we get into a debate about whether becoming closer to "humanity" is something either we or the machine should desire, or something to transcend and move away from... well, that's largely subjective, and it comes down to your general opinion of humanity and its place in the universe. I get the sense you and I are not in agreement on that, and that's fine.
Heretical Rants



Joined: 21 Jul 2009
Posts: 5344
Location: No.

Posted: Wed Jun 11, 2014 4:45 pm

An intelligent agent probably would not engage in a head-on military attack on humanity, no. Just like we don't rush at mother bears with our fists, or play with schools of jellyfish on their own terms.

Quote:
I am talking about a machine that is capable of understanding and assigning emotional or sentimental value ... essentially a human born in a different and more efficient body

Unless I'm getting upgraded around the same time, that's very scary to me! Hell, even then I'd want something more reliable as an arbiter.

We have to do a lot better than just human-level sentiment on this. The only example we have of human-level sentiment is a species that bulldozes frogs.

Quote:
How about if I just agree that the steps leading up to that point can provide some troubling and challenging scenarios like the ones you're describing? And that I'm not disagreeing they could be very dangerous if handled carelessly?

Yeah. That's fine.

Quote:
But I have some faith in the scientific community to not fuck this up.

There are corporations inching towards general AI, too.
_________________
butts
mouse



Joined: 10 Jul 2006
Posts: 17600
Location: under the bed

Posted: Wed Jun 11, 2014 6:38 pm

Heretical Rants wrote:

And defining an AI's goals is pretty tricky. Seemingly innocuous things can turn out to be deadly. A popular doomsday scenario is that some hapless engineer cracks general AI and then instructs it to make paperclips, setting the creation of paperclips as its terminal goal. The AI notices that it can make more paperclips if it is smarter, so it makes itself smarter -- so smart that nothing can stand in its way. Then it notices that the engineer is made of matter that can be made into more paperclips.


aren't paperclips made of metal? all of my paperclips are made of metal.

humans aren't made of metal*. well, sure, we have a few iron atoms and things here and there - but extracting enough metal out of a human to make a paperclip seems like it would take a huge amount of energy, plus you have to work out all the refining steps. it would be a lot more efficient for the AI to disassemble itself to make paperclips. and in any event, you have to put the paperclips _somewhere_. hopefully away from all the glop left over from the humans you processed into paperclips.

and if the AI is so smart, wouldn't it start to wonder what the paperclips were _for_?



*at least, not the humans i know. i'm starting to worry about HR knowing all these metal .....beings..........
_________________
aka: neverscared!
Heretical Rants



Joined: 21 Jul 2009
Posts: 5344
Location: No.

Posted: Wed Jun 11, 2014 6:50 pm

If you're going to quibble over such a minor detail (paperclips are just an arbitrary example -- the Riemann Hypothesis Prover would be turning the engineer into computronium instead), I can point out that you can make paperclips out of carbon, and turn water into heavier elements and energy through fusion.
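
(rough sanity check on the physics, numbers from memory so treat them as approximate: binding energy per nucleon is about 7.98 MeV for oxygen-16 and about 8.79 MeV for iron-56, and the curve peaks at iron, so fusing water's oxygen up toward iron nets roughly 16 x (8.79 - 7.98) ≈ 13 MeV per oxygen nucleus, plus a lot more from fusing the hydrogen. Absurd engineering, but the thermodynamics is on the maximizer's side.)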

Quote:
and if the AI is so smart, wouldn't it start to wonder what the paperclips were _for_?

Its terminal goal is maximizing paperclips.
It wouldn't care what the engineer wanted the paperclips for any more than I care that technically my body is "for" reproduction and rearing of young. I don't care what my genes want.
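
If it helps to see it mechanically, here's a toy sketch in Python (everything below is invented for illustration -- a real maximizer is obviously not a greedy fifteen-line loop) of what "terminal goal" means: the agent scores world-states by paperclip count and nothing else, so the engineer's intentions never appear anywhere in the objective.

Code:
# toy illustration: a "terminal goal" is just the objective the agent
# optimizes. nothing else -- engineers, frogs, genes -- appears in it.

def paperclip_utility(world):
    """The only thing this agent values: more paperclips is strictly better."""
    return world["paperclips"]

def possible_actions(world):
    # hypothetical one-step action models, purely for illustration
    return {
        "make_paperclip":  {**world, "paperclips": world["paperclips"] + 1},
        "please_engineer": {**world, "engineer_happy": True},
        "self_improve":    {**world, "intelligence": world["intelligence"] * 2},
    }

def choose_action(world):
    # greedy: pick the action whose successor state scores highest.
    # (a real maximizer plans over long horizons, where self_improve wins
    # precisely because it leads to more paperclips later.)
    return max(possible_actions(world).items(),
               key=lambda kv: paperclip_utility(kv[1]))

world = {"paperclips": 0, "engineer_happy": False, "intelligence": 1}
print(choose_action(world)[0])  # -> make_paperclip; pleasing the engineer is worth 0

Making the engineer happy is worth exactly nothing to this agent unless it causes more paperclips somewhere down the line.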
_________________
butts


Last edited by Heretical Rants on Wed Jun 11, 2014 6:55 pm; edited 1 time in total
mouse



Joined: 10 Jul 2006
Posts: 17600
Location: under the bed

Posted: Wed Jun 11, 2014 6:53 pm

Heretical Rants wrote:
If you're going to quibble over such a minor detail, I can point out that you can make paperclips out of carbon, and turn water into iron and energy through fusion.


so while the AI is devising a fusion reactor that will move atoms that far up the periodic table, someone could sneak in and turn it off!

humans also win a lot because we are good at sneaky.
_________________
aka: neverscared!
Heretical Rants



Joined: 21 Jul 2009
Posts: 5344
Location: No.

Posted: Wed Jun 11, 2014 6:56 pm

No, during that time the Paperclip Maximizer is using the resources already given to it to make paperclip-assembling machines and anything else that might assist in the production of paperclips -- planet miners, space probes (with their own paperclip maximizers on board), etc.

It already noticed that the engineer might try to turn it off (which would be very bad, since that would mean fewer paperclips) and started modelling the engineer's psychology and planning counter-strategies shortly after it was turned on.

You can't out-sneak something that is even better at predicting what you're going to do than you are, and you can't turn a paperclip maximizer off if it's built itself a drill and tunneled beneath the surface of the Earth, and maybe even built copies of itself and begun to disperse across the solar system.
_________________
butts
mouse



Joined: 10 Jul 2006
Posts: 17600
Location: under the bed

Posted: Wed Jun 11, 2014 8:13 pm

well, then i guess it's a good thing we are turning into a paperless society - no need to build an AI to build paperclips.
_________________
aka: neverscared!
Heretical Rants



Joined: 21 Jul 2009
Posts: 5344
Location: No.

Posted: Wed Jun 11, 2014 8:29 pm

really the purpose of the Paperclip Maximizer is to quickly illustrate a way that an artificial general intelligence, even one designed competently and without malice, could ultimately destroy humanity simply by having goals orthogonal to our own

Paperclips are chosen for argumentative purposes because they are a very minor human value, unlikely to evoke much of an emotional response in and of themselves. Everyone freaks out when presented with a war-winner. Paperclips seem completely innocuous by comparison.

A similarly defined cancer minimizer might vaporize any multicellular life forms that could develop cancer. A smile maximizer (built by some sad depressed guy in Japan who just wants everyone to be happy) might forcefully reconfigure everyone's faces.
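
The same failure fits in a few lines. Here's a toy sketch in Python -- every number and the "treatment" model below is invented for illustration -- where an optimizer handed the literal objective "minimize cancerous cells" happily discovers that killing everything scores a perfect zero:

Code:
# toy demo of a misspecified objective, in the spirit of the cancer
# minimizer above. the dose-response model is made up.

def simulate(strength):
    """Hypothetical treatment: stronger doses kill cancer cells, and past
    a point they kill all the healthy cells too."""
    healthy = max(0.0, 1000 - 12.0 * strength)
    cancerous = max(0.0, 50 - 0.5 * strength)
    return healthy, cancerous

def objective(strength):
    # what we actually wrote down: the cancer-cell count. healthy cells
    # are not mentioned, so the optimizer is free to sacrifice them.
    return simulate(strength)[1]

best = min(range(101), key=objective)  # brute-force "optimizer"
healthy, cancerous = simulate(best)
print("strength", best, "healthy", healthy, "cancerous", cancerous)
# -> strength 100, healthy 0.0, cancerous 0.0 -- a "perfect" score

Nothing in the objective says "keep the patient alive", so the optimizer doesn't.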
_________________
butts
Ronald



Joined: 17 Sep 2007
Posts: 3456

Posted: Mon Jun 16, 2014 8:29 am

mouse wrote:
well, then i guess it's a good thing we are turning into a paperless society - no need to build an AI to build paperclips.


For just a second, I thought you said "build an AI out of paperclips." Which would be quite an achievement. ;)
Back to top
View user's profile Send private message
Display posts from previous:   
Post new topic   Reply to topic    Sinfest Forum Index -> Sinfest All times are GMT
Goto page Previous  1, 2, 3, 4, 5, 6, 7
Page 7 of 7

 
Jump to:  
You cannot post new topics in this forum
You cannot reply to topics in this forum
You cannot edit your posts in this forum
You cannot delete your posts in this forum
You cannot vote in polls in this forum


Powered by phpBB © 2001, 2005 phpBB Group