welcome to the fest
 

June 9: Let's get ready to ruuuuuumble!
Goto page Previous  1, 2, 3, 4, 5, 6, 7  Next
 
Post new topic   Reply to topic    Sinfest Forum Index -> Sinfest
khan



Joined: 10 Feb 2013
Posts: 168

Posted: Wed Jun 11, 2014 4:29 am

Midnight Tea: Been 'busy', and I personally hate posting via mobile, but them's the breaks. That's also why no 'quote', just gonna refer back to page 3. Cuz yeah, scrolling sucks balls on a touch screen sometimes. Especially this text box. Also, the forum loves eating my tedious mobile posts. Anyways.

Trollbaiting online is amusing, but usually a self-demeaning exercise. Most people get over it eventually.

I think we can agree that factory farming tends to result in some bad situations, but this doesn't really absolve the inherent absurdity of declaring that 'human-like' sentience is somehow magically worthy of special treatment beyond that due to all life. I try not to kill mosquitoes for this reason, and am non-violent. A pretty easy argument can be made that pain is worse to inflict on creatures that are less sentient, as they can't cope. A wise man need never truly suffer, though he endures pain. I am floored that you would argue it's more acceptable to hurt a being incapable of complaint than one with that capacity. I would expect someone as 'enlightened' as you feel yourself to be to have some trace of compassion for those without a voice.

...oookay, so because reality is that machines are, in fact, machines, I have my head in the sand! Right, 'well, people will do it anyways!!' is a shitty defence. That's why we make laws. There is also no proof that life-like sentience will come any time soon. Boringly enough, it might never be possible other than in supercomputers. There are limits to miniaturization, and computers are going to close in on them. Sentience of our kind requires a shocking heap of computer to truly match. Besides, I don't buy into 'emotional' computers, i.e. AI capable of suffering. And no, I assuredly didn't pull 'valueless' from my ass, but from economics. It's something liberals usually fucking suck at, btw, so yeah. I wouldn't expect you to understand why nobody will use a tool that is less productive.

Reality isn't like sci-fi very much in some ways, and this is a fucking obvious one: anyone that designs and builds a robot capable of suffering (i.e. not just self-preservation, but mental anguish) will be committing a crime. This will be law long before small, intelligent robots can be built.

Frankly, even if I agreed that 'sentient bots' were inevitable and somehow just had to get made, sentience will dictate they be allowed their voice. You're pissing in the wind on this, since you have no clue what a robot could want. What if we made them all submissive masochists that loved to work? Would it be exploitative to then 'let them be happy'? I'm more concerned with how we treat actual living things, that, you know, exist and shit, vs. conjecturing about what a robot would want if a robot could want anything, period.

Edit: you know what, arguing opinions on the internet is stupid. Nobody online ever has a change of opinion based on discussions as immature as these usually end up being. Trying to claim that something that cannot reproduce biologically on its own is alive is fucking stupid. Biology gets to decide what is alive, and if you hate souls, fine, argue with biology. Be the voice for viruses! They aren't alive either; no ability to reproduce. Arguing that something capable of feeling pain shouldn't suffer it for no reason is basic decency.


Last edited by khan on Wed Jun 11, 2014 4:48 am; edited 1 time in total
Heretical Rants



Joined: 21 Jul 2009
Posts: 5344
Location: No.

Posted: Wed Jun 11, 2014 4:39 am

You're arguing at a really low level here, khan. Most of the things you are highlighting as problems are not the real problems.

For instance, the human brain runs at ~100Hz. Hardware is not the bottleneck here.
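Rough back-of-envelope on that (using commonly cited public estimates, not figures from this thread): the ~100 Hz is a per-neuron firing rate, and the brain's throughput comes from massive parallelism, while silicon makes up the difference in clock speed.

```python
# Back-of-envelope only: all constants are rough public estimates.
neurons = 8.6e10           # ~86 billion neurons
firing_rate_hz = 100       # ~100 Hz peak firing rate per neuron
synapses_per_neuron = 1e4  # ~10,000 connections each

# Crude upper bound on "synaptic events per second" in a brain.
brain_ops_per_sec = neurons * firing_rate_hz * synapses_per_neuron

cpu_clock_hz = 3e9         # one modern silicon core

print(f"{brain_ops_per_sec:.1e}")              # ~8.6e16 synaptic events/s
print(f"{cpu_clock_hz / firing_rate_hz:.0e}")  # a core steps ~3e7x faster than a neuron fires
```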

And defining an AI's goals is pretty tricky. Seemingly innocuous things can turn out to be deadly. A popular doomsday scenario: some hapless engineer cracks general AI and instructs it to make paperclips, setting the creation of paperclips as its terminal goal. The AI notices that it can make more paperclips if it is smarter, so it makes itself smarter -- so smart that nothing can stand in its way. Then it notices that the engineer is made of matter that can be made into more paperclips.
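A toy sketch of that failure mode (every number and action name here is invented for illustration; this is not a real agent design): a planner that scores plans purely by expected paperclips ranks self-improvement and resource grabbing above honest clip-making, because they raise the total.

```python
# Toy illustration: a planner whose only terminal goal is "maximize
# paperclips" prefers instrumental actions (getting smarter, harvesting
# matter) over the obvious one, because they raise total yield.
# All yields are made-up numbers for the sake of the example.

def expected_paperclips(plan, capability=1.0):
    """Score a plan; 'improve_self' multiplies the yield of every
    later step instead of producing any clips itself."""
    total = 0.0
    for action in plan:
        if action == "improve_self":
            capability *= 10            # smarter agent, higher future yield
        elif action == "make_clips":
            total += 100 * capability
        elif action == "harvest_nearby_matter":
            total += 1000 * capability  # includes the engineer's atoms...
    return total

plans = [
    ["make_clips", "make_clips"],
    ["improve_self", "make_clips", "make_clips"],
    ["improve_self", "harvest_nearby_matter"],
]
best = max(plans, key=expected_paperclips)
print(best)  # the plan that self-improves, then consumes everything
```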
_________________
butts
Heretical Rants



Joined: 21 Jul 2009
Posts: 5344
Location: No.

Posted: Wed Jun 11, 2014 4:52 am

Response to khan's edit:

holy shit newly introduced inconsistency, Batman!

my opinion of your arguments has dropped even lower

"sentience doesn't magically make something worth protecting"

"DURHUR LIFE ARGUE WITH BIOLOGY"
_________________
butts


Last edited by Heretical Rants on Wed Jun 11, 2014 4:54 am; edited 1 time in total
Miss Magenta



Joined: 09 Jun 2011
Posts: 1854
Location: im probably asleep right now

Posted: Wed Jun 11, 2014 4:54 am

idk about you but sentience definitely does magically make something worth protecting if you ask me
_________________
Heretical Rants



Joined: 21 Jul 2009
Posts: 5344
Location: No.

Posted: Wed Jun 11, 2014 5:05 am

As for biology: we can take my argument to Istancow up a few levels of abstraction, from quantum waveforms to human brains, and it still applies:
we are the pattern, the process, not the substrate on which that process runs.

Going back down those levels of abstraction and looking at what we know about the nature of matter just makes this clearer.

And reproduction? First, I honestly don't see how that's relevant. We don't treat infertile humans any differently. And second, do you really think that a strong AI wouldn't be capable of making more strong AIs?
_________________
butts
Dogen



Joined: 10 Jul 2006
Posts: 10730
Location: Bellingham, WA

Posted: Wed Jun 11, 2014 7:44 am

It was all a little odd. I mean, entire books have been written about how to handle animal rights as we come to understand animal consciousness as more complex than we'd previously given it credit for, but here khan argues that we should afford animals fewer rights than... what, exactly? Paper clips? House plants? ...and then we got into some kind of weird "harhar, liberals can't do economics" thing in the middle that's so tritely partisan and reminiscent of a cnn.com comment section that it really jumps out at you here... then we go back to postulating that making robots with the full range of human emotions should be criminalized.

I think khan is drunk. That would be an acceptable reason to post something so poorly thought out.

He says, posting drunk, having just gotten home from the bars.
_________________
"Worse comes to worst, my people come first, but my tribe lives on every country on earth. I'll do anything to protect them from hurt, the human race is what I serve." - Baba Brinkman
Midnight Tea



Joined: 15 Jul 2012
Posts: 200
Location: In the Haunted Lands

Posted: Wed Jun 11, 2014 7:57 am

Nicely done, Heretical Rants. I was just smacking my head and going "are you fucking kidding me" and getting ready to shove this crap back up where it came from but you pretty much nailed the salient points. And yes, I'm also with MM that once something is sentient, we need to treat it differently. Obviously.

I just have to point out one more thing: khan's ludicrous argument from incredulity.

khan, you shot whatever you had to say in the vein of "omg why are you talking about sci-fi so srsly?! lol" in the genitals with your opening paragraph. You might've even set a record for self-pwnage on this forum, even barring your silly appeal to political partisanship later in your drunken stumbling:

khan wrote:
Midnight Tea: Been 'busy', and I personally hate posting via mobile, but them's the breaks. That's also why no 'quote', just gonna refer back to page 3. Cuz yeah, scrolling sucks balls on a touch screen sometimes. Especially this text box. Also, the forum loves eating my tedious mobile posts. Anyways.


Dude.

You're communicating to us via a network of computers with a device that responds to your touch and voice with a satellite uplink that lets you check the weather.

You're in a science fiction story already, motherfucker.

This shit was sci-fi when I was a little kid in the mid-'80s. In fact, some of the sci-fi media I grew up with didn't even come close to predicting some of what we have now. And good god, my parents, who are baby boomers, had no way of predicting what was going to happen in their lifetimes; even the very idea of a personal computer in every household was crazy to them. Today's science fiction is tomorrow's science.


Is the singularity way off? Probably. It could come along a lot faster than anyone realizes if there are certain technological breakthroughs in the meantime, but I'll grant you that at the current pace it's nothing to worry about right away. But that's my whole point: you really can't assume a constant, predictable pace when it comes to technology, computer science especially.
You'd be an idiot to ever bet against the march of technology. Or to dismiss the raw determination of people with more engineering skill than interest in anything else that could occupy their time, including making money or managing their own social lives.

The sooner we have answers for what we're going to do when we wind up creating the race that may in fact survive us, the better off we'll be. It's worth preparing for.
Heretical Rants



Joined: 21 Jul 2009
Posts: 5344
Location: No.

Posted: Wed Jun 11, 2014 8:12 am

Quote:
I was just smacking my head and going "are you fucking kidding me"

Yeah, I did that, too. My face spent several long moments plastered to my palm before I could muster a response.

Midnight Tea wrote:
I'll grant you (khan) that at the current pace it's nothing to worry about right away.


The problems that we have to solve in order to navigate this minefield safely are so difficult that I don't think we stand much of a chance if we don't work on them now.

This is very much a current problem.

Though if you're not good enough at math to contribute anything and you aren't funding the research, or you are already working on some other important problem, you might as well not worry about it, I guess.
_________________
butts
Midnight Tea



Joined: 15 Jul 2012
Posts: 200
Location: In the Haunted Lands

Posted: Wed Jun 11, 2014 8:34 am

Heretical Rants wrote:
The problems that we have to solve in order to navigate this minefield safely are so difficult that I don't think we stand much of a chance if we don't work on them now.

This is very much a current problem.

Though if you're not good enough at math to contribute anything, aren't funding the research, or are already working on some other important problem, you might as well not worry about it, I guess.

Yeah, I was mostly speaking from the perspective of someone who isn't directly involved in the CS industry and thus isn't really sure what I can do right now.

I will say that when I keep referring to them as "children", I'm not actually just attempting an appeal to emotion but keeping the focus on something rather important. It's my contribution to the discussion, however small. When a sapient A.I. comes into being, it's not likely going to be an accident. Not impossible, as in your paperclip example, but I do have some faith that we'll see it coming.
But the most important thing to remember is that it won't do anything we don't teach it to do. It won't hate if we don't teach it hate. It won't love if we don't teach it love. We are in a unique position to do something truly beautiful and transcendent. I'd like it very much if, as a species, we don't blow it.

I'm kind of relieved that Japan is on the forefront of this, mostly because their media isn't saturated with Terminator imagery or tired jokes about robot uprisings. They treat their robots with adoration and trust. I think we'll be okay if they make the leap first.
Heretical Rants



Joined: 21 Jul 2009
Posts: 5344
Location: No.

Posted: Wed Jun 11, 2014 9:10 am

Eh? Japan?

...

... their robotics programs?

I don't consider robotics to play much of a role in this particular problem. If the AI needs a robot to accomplish some goal, let it design its own.

Midnight Tea wrote:
their media isn't saturated with Terminator imagery or tired jokes about robot uprisings

maybe not, but they do have some more relevant disturbing imagery in their media

...such as Serial Experiments Lain (more like extended fictional case study Lain, durhurr)
_________________
butts


Last edited by Heretical Rants on Wed Jun 11, 2014 9:41 am; edited 1 time in total
Midnight Tea



Joined: 15 Jul 2012
Posts: 200
Location: In the Haunted Lands

Posted: Wed Jun 11, 2014 9:41 am

True, but then consider how much adoration and respect we see given to, say, Hatsune Miku. That's pure software. I don't see it as impossible, or even unlikely, that they'd be very kind and loving to their first sapient A.I. Usually when science goes wrong in anime, the blame falls on the humans who done fucked it up, with the monster being more of a victim. And that's about right if you ask me.

I'm kind of a seasoned optimist at the end of the day. I believe humans naturally adhere to a good path, but there are a lot of bumps, loose rocks and potholes on that road. That we're able to have these conversations, though, across geographical distance and sometimes even across languages, fills me with a lot of hope.
Heretical Rants



Joined: 21 Jul 2009
Posts: 5344
Location: No.

Posted: Wed Jun 11, 2014 9:47 am

It doesn't matter if you're nice to a non-friendly AI. If it has any use for our atoms other than sustaining us, it'll still annihilate humanity before we even know what's happening.
_________________
butts
Midnight Tea



Joined: 15 Jul 2012
Posts: 200
Location: In the Haunted Lands

Posted: Wed Jun 11, 2014 10:07 am

Well, that's just the thing. A non-friendly AI isn't going to materialize from the aether. We'll have to have made it first. That's why I keep banging on about the children analogy.
I'm not too worried about a disaster, though, because I'm fairly confident in our ability to have appropriate failsafes or a response of some sort. Humans are pretty remarkable in their ability to employ countermeasures even against seemingly overwhelming threats. It's why zombie apocalypse stories are such horseshit from word one.
Heretical Rants



Joined: 21 Jul 2009
Posts: 5344
Location: No.

Posted: Wed Jun 11, 2014 11:01 am

This isn't really relevant to the ethics discussion this started with, since that was more about highly constrained, human-like artificial intelligences and, as you point out, a Friendly AI won't be like that unless we make it so (it doesn't even have to be sentient in order to do its job). But it is important:


* An unfriendly AI can pretend to be friendly, and it can pretend to be stupid, until we let it out of the box, and can hack around any and all human-designed failsafes.

* Friendly AIs are a tiny subspace of possible minds. Friendliness is not the default.

* The first AI won't necessarily be intelligently designed. Someone suggested using genetic algorithms on fast hardware? Gagh. A naive implementation of that won't work, but there are other uncontrolled mechanisms that might.

* Bad things happen even if we get friendliness nearly right.

* Things that seem easy to express are really difficult to formalize mathematically, and when you're devising an AI's base goals you have to do it formally.

* We only get one chance. An optimizing agent with a particular goal system is going to be dead-set against changing its goals once they are in place, because not having an optimizing super-intelligence with those exact goals makes those goals far less likely to come to fruition.

* The rewards of creating a Friendly AI are immense. The only way to prevent people from trying would be to nuke all of the infrastructure.
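On the genetic-algorithms bullet, here's a minimal sketch (toy code, arbitrary parameters) of why uncontrolled search is worrying: the loop optimizes whatever proxy the fitness function states, and nothing in it represents what the designer actually meant.

```python
# Minimal genetic algorithm, illustrative only: evolve bit strings
# toward whatever the fitness function rewards. The search "pursues"
# the stated proxy with no understanding of the intended goal.
import random

random.seed(0)  # deterministic run for the example

def fitness(genome):
    # proxy goal: count of 1-bits; the search knows nothing else
    return sum(genome)

def evolve(pop_size=20, length=16, generations=40):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # keep the fittest half
        children = []
        for parent in survivors:
            child = parent[:]             # copy, then one point mutation
            i = random.randrange(length)
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward the proxy optimum of 16
```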
_________________
butts
Midnight Tea



Joined: 15 Jul 2012
Posts: 200
Location: In the Haunted Lands

Posted: Wed Jun 11, 2014 2:21 pm

Heretical Rants wrote:
* An unfriendly AI can pretend to be friendly, and it can pretend to be stupid, until we let it out of the box, and can hack around any and all human-designed failsafes.

Eh. Well. Human society is already built on extreme trust of unknowns. For instance, when you cross the street, you generally aren't expecting the stopped motorists you're crossing in front of to suddenly hit the gas and mow you down. You generally don't even know the people in those automobiles. Likewise, in terms of overall damage to your life, friends and family can fuck you up like no stranger ever could.

And saying it could hack around any and all human failsafes is kind of stacking the deck in its favor in this hypothetical. That's the thing about hypotheticals: they can be stretched any way you like, and exactly where they stray from the believable can sometimes be iffy.

But let's just say I have a lot of confidence in human ability to kill things threatening it. Several hundred thousand years at the top of the food chain is testament to that.

Quote:
* Friendly AIs are a tiny subspace of possible minds. Friendliness is not the default.

Nor is hostility, though, which is why I don't advocate society's fixation on the hostile outcomes. I don't want the new mind to think we fear it.

Quote:
* The first AI won't necessarily be intelligently designed. Someone suggested using genetic algorithms on fast hardware? Gagh. A naive implementation of that won't work, but there are other uncontrolled mechanisms that might.

It's hard to really speculate here; I'll just say you could be right. Speculating about where computer science might go is a multi-billion-dollar industry. I do know it'll be embarrassing as hell if the first true A.I. came from a bored Minecraft player who found some really scary unorthodox uses for redstone. (I'm joking)

Quote:
* Bad things happen even if we get friendliness nearly right.

* Things that seem easy to express are really difficult to formalize mathematically, and when you're devising an AI's base goals you have to do it formally.

That's kinda why I'm hoping those insanely obsessive-compulsive but enthusiastic Japanese computer science engineers cross the threshold first. It also helps that their society has some very scripted behavioral standards that can help define parameters. And yes, their media isn't completely fixated on worst case scenarios that I fear could be self-fulfilling prophecy.

Quote:
* We only get one chance. An optimizing agent with a particular goal system is going to be dead-set against changing its goals once they are in place, because not having an optimizing super-intelligence with those exact goals makes those goals far less likely to come to fruition.

* The rewards of creating a Friendly AI are immense. The only way to prevent people from trying would be to nuke all of the infrastructure.

Agreed.
Page 6 of 7

 