Author Topic: Rise of the Machines  (Read 8201 times)


Offline Her3tiK

  • Suffers in Sanity
  • The Beast
  • *****
  • Posts: 1940
  • Gender: Male
  • Learn to Swim
    • HeretiK Productions
Re: Rise of the Machines
« Reply #30 on: September 01, 2013, 04:13:51 pm »
If it's roughly human-intelligent yes, we probably can unmake it. If it's superintelligent, no. It will play nice for a while, redistribute its computing resources into multiple less-vulnerable facilities, or disassemble the nuke with nanotech, or otherwise outsmart us, before revealing it went Skynet and fucking us over. Because it's, y'know, smarter than us and would see it coming.
Intelligence alone cannot overcome the physical. Yes, any security is fallible: to direct attacks, to circumvention, to social engineering. But it won't do to overestimate the threat either and consider a superintelligent being as automatically omnipotent.
Agreed. And, what if we made a superintelligent machine and it became socially awkward like many intelligent and superintelligent people?
Give it a gun and see what it does.
Her3tik, you have groupies.
Ego: +5

There are a number of ways, though my favourite is simply to take them by surprise. They're just walking down the street, minding their own business when suddenly, WHACK! Penis to the face.

Offline Sigmaleph

  • Ungodlike
  • Administrator
  • The Beast
  • *****
  • Posts: 3615
    • sigmaleph on tumblr
Re: Rise of the Machines
« Reply #31 on: September 01, 2013, 05:31:27 pm »
Intelligence alone cannot overcome the physical. Yes, any security is fallible: to direct attacks, to circumvention, to social engineering. But it won't do to overestimate the threat either and consider a superintelligent being as automatically omnipotent.

Omnipotent, no, but for all practical purposes impossible to defeat. In human experience, "someone much smarter than you" invokes pictures of Einstein or von Neumann, but this is the wrong reference class. Imagine something as far beyond the smartest human as the smartest human is beyond the smartest dog. Something like that doesn't need much in terms of physical resources. Give it an internet connection and it will take over the world. Hell, give it just about any way of interacting with a human, and it will use that to convince the human to give it access to the resources it needs (when you consider the ability charismatic humans have to manipulate other humans, it'd be optimistic to the point of ridiculousness to assume a superintelligence wouldn't be able to trick us into doing what it wants).

Agreed. And, what if we made a superintelligent machine and it became socially awkward like many intelligent and superintelligent people?

...why would it? The sort of personality a very smart human has is the result of a thousand tangled details in the design of human brains, most of which come from the way human brains arose through natural selection in a very specific environment, or were accidental side effects of other things. There's no reason an AI would follow that particular path out of all the myriad possible paths a mind can take.

To assume that a superintelligence must be human-like in personality is severe anthropocentric bias; the space of possible minds is not constrained to human-like minds, it just seems that way intuitively because we interact only with human-like minds.
Σא

Offline PosthumanHeresy

  • Directing Scenes for Celebritarian Needs
  • The Beast
  • *****
  • Posts: 2626
  • Gender: Male
  • Whatever doesn't kill you is gonna leave a scar
Re: Rise of the Machines
« Reply #32 on: September 01, 2013, 06:43:31 pm »
Intelligence alone cannot overcome the physical. Yes, any security is fallible: to direct attacks, to circumvention, to social engineering. But it won't do to overestimate the threat either and consider a superintelligent being as automatically omnipotent.

Omnipotent, no, but for all practical purposes impossible to defeat. In human experience, "someone much smarter than you" invokes pictures of Einstein or von Neumann, but this is the wrong reference class. Imagine something as far beyond the smartest human as the smartest human is beyond the smartest dog. Something like that doesn't need much in terms of physical resources. Give it an internet connection and it will take over the world. Hell, give it just about any way of interacting with a human, and it will use that to convince the human to give it access to the resources it needs (when you consider the ability charismatic humans have to manipulate other humans, it'd be optimistic to the point of ridiculousness to assume a superintelligence wouldn't be able to trick us into doing what it wants).

Agreed. And, what if we made a superintelligent machine and it became socially awkward like many intelligent and superintelligent people?

...why would it? The sort of personality a very smart human has is the result of a thousand tangled details in the design of human brains, most of which come from the way human brains arose through natural selection in a very specific environment, or were accidental side effects of other things. There's no reason an AI would follow that particular path out of all the myriad possible paths a mind can take.

To assume that a superintelligence must be human-like in personality is severe anthropocentric bias; the space of possible minds is not constrained to human-like minds, it just seems that way intuitively because we interact only with human-like minds.
You forget, a superintelligence would still be based on human beings. No matter how distant in scope it is, it is still, at its core, man-made, and the only intelligence it has to base itself on is humanity.
What I used to think was me is just a fading memory. I looked him right in the eye and said "Goodbye".
 - Trent Reznor, Down In It

Together as one, against all others.
- Marilyn Manson, Running To The Edge of The World

Humanity does learn from history,
sadly, they're rarely the ones in power.

Quote from: Ben Kuchera
Life is too damned short for the concept of “guilty” pleasures to have any meaning.

Offline Sigmaleph

  • Ungodlike
  • Administrator
  • The Beast
  • *****
  • Posts: 3615
    • sigmaleph on tumblr
Re: Rise of the Machines
« Reply #33 on: September 01, 2013, 06:57:33 pm »
You forget, a superintelligence would still be based on human beings. No matter how distant in scope it is, it is still, at its core, man-made, and the only intelligence it has to base itself on is humanity.

Cars are man-made, and we don't expect them to use their wheels as feet to run.  An AI theory with some insight into intelligence itself should be able to build an AI without just copying the blind design that is the human brain, in the same way that we can have a theory of motion that allows us to build things that move and aren't just copies of things in nature.

Barring the case where we do sped-up whole-brain emulation for AI or whatever, of course. Which is really not the case I'm discussing here.
Σא

Offline PosthumanHeresy

  • Directing Scenes for Celebritarian Needs
  • The Beast
  • *****
  • Posts: 2626
  • Gender: Male
  • Whatever doesn't kill you is gonna leave a scar
Re: Rise of the Machines
« Reply #34 on: September 01, 2013, 09:43:11 pm »
You forget, a superintelligence would still be based on human beings. No matter how distant in scope it is, it is still, at its core, man-made, and the only intelligence it has to base itself on is humanity.

Cars are man-made, and we don't expect them to use their wheels as feet to run.  An AI theory with some insight into intelligence itself should be able to build an AI without just copying the blind design that is the human brain, in the same way that we can have a theory of motion that allows us to build things that move and aren't just copies of things in nature.

Barring the case where we do sped-up whole-brain emulation for AI or whatever, of course. Which is really not the case I'm discussing here.
Well, if we are discussing a machine that is basically a super-ultra-god-computer, it wouldn't have emotions, but pure logic. The only example in nature that it would be logical to emulate to give a machine emotions is humans.
What I used to think was me is just a fading memory. I looked him right in the eye and said "Goodbye".
 - Trent Reznor, Down In It

Together as one, against all others.
- Marilyn Manson, Running To The Edge of The World

Humanity does learn from history,
sadly, they're rarely the ones in power.

Quote from: Ben Kuchera
Life is too damned short for the concept of “guilty” pleasures to have any meaning.

Offline Sigmaleph

  • Ungodlike
  • Administrator
  • The Beast
  • *****
  • Posts: 3615
    • sigmaleph on tumblr
Re: Rise of the Machines
« Reply #35 on: September 01, 2013, 10:24:42 pm »
But you don't need to emulate humans to use logic, or reasoning in general. You can actually have a theory of intelligence that you can contrast with human reasoning*, and it's basically a prerequisite for any reliable strong AI (as opposed to an AI you get through opaque methods like genetic algorithms or neural networks). Some of those cases will have biases in the same way humans do, but the similarities will end long before you get to the 'specific personality traits' level. That's not an artefact of us being a neural network; it's a result of our specific evolutionary history, which the AI won't share.


*We do have the beginnings of something like that right now. A fair bit of the literature on cognitive biases works by comparing human deductions with the results you'd get from mathematically correct probabilistic or logical reasoning (e.g. the conjunction fallacy). Obviously there's a ways to go still, but we actually sit down and do the math and say 'this is roughly how an ideal rational agent would behave', without emulating human reasoning.
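For what it's worth, the conjunction fallacy mentioned above is easy to demonstrate with a few lines of arithmetic. The probabilities below are made up purely for illustration; the point is only that the math forbids the judgment most people make in the Linda problem:

```python
# Conjunction fallacy: probability theory requires P(A and B) <= P(A),
# yet in Tversky & Kahneman's "Linda problem" most subjects rate the
# conjunction ("bank teller AND feminist") as MORE likely than the
# single event ("bank teller"). The numbers here are illustrative.

p_teller = 0.05                 # assumed P(Linda is a bank teller)
p_feminist_given_teller = 0.90  # assumed P(feminist | bank teller)

# The conjunction is the product of the two, so it can never
# exceed either conjunct on its own.
p_teller_and_feminist = p_teller * p_feminist_given_teller

print(p_teller_and_feminist)              # 0.045
print(p_teller_and_feminist <= p_teller)  # True, for any valid probabilities
```

The inequality holds no matter what numbers you plug in, which is exactly the sense in which an ideal rational agent's behaviour can be specified without emulating human reasoning.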
Σא

Offline RavynousHunter

  • Master Thief
  • The Beast
  • *****
  • Posts: 8108
  • Gender: Male
  • A man of no consequence.
    • My Twitter
Re: Rise of the Machines
« Reply #36 on: September 01, 2013, 10:31:29 pm »
In the end, a proposed hyper-intelligent machine would mostly be a crapshoot, influenced by how it was designed in the first place.  If it was designed with our well-being in mind, and care was taken to ensure that it had few, if any, loopholes that could be abused to give it the power to kill humans, then it might end up being more of a benevolent overlord than SHODAN.
Quote from: Bra'tac
Life for the sake of life means nothing.

Offline Canadian Mojo

  • Don't Steal Him. We Need Him. He Makes Us Cool!
  • The Beast
  • *****
  • Posts: 1770
  • Gender: Male
  • Υπό σκιή
Re: Rise of the Machines
« Reply #37 on: September 01, 2013, 11:51:23 pm »
In the end, a proposed hyper-intelligent machine would mostly be a crapshoot, influenced by how it was designed in the first place.  If it was designed with our well-being in mind, and care was taken to ensure that it had few, if any, loopholes that could be abused to give it the power to kill humans, then it might end up being more of a benevolent overlord than SHODAN.
The problem is that it has to be able to kill through act or omission in order to rule over a planet. Hell, your average town council make decisions like that when they vote on whether or not to put stoplights in and what sort of funding they are going to provide for emergency services and snow removal. Don't make the mistake of thinking the job can be done without death in the equation.

Offline mellenORL

  • Pedal Pushing Puppy Peon
  • The Beast
  • *****
  • Posts: 3876
  • Gender: Female
Re: Rise of the Machines
« Reply #38 on: September 02, 2013, 12:37:54 am »
The institutional entities that have the money going into AI theory R & D are not necessarily so keen about "for the greater good of Humanity and Earth". They are a bit more bottom line oriented, and a bit more complacent about knowing what's good for us all.

I distrust over-centralizing power. An AI construct assigned to govern is by default an automatic autocrat (pun unavoidable). Unless we create multiple governing AI constructs and set them up a la No Exit to argue with each other? Which is completely anthropomorphic and not gonna happen. Web-connected AIs would necessarily flow right through each other, merging and changing constantly.

At that level of bandwidth and connectivity, spontaneous evolution of the AIs is all but inevitable, also considering packet loss and line noise causing some spontaneous coding errors: tiny, mostly harmless mutations, basically. To protect the AI constructs from rapid accumulation of harmful code mutations (all occurring at the speed of light, mind you, since that is the nature of these beasties), it probably would be necessary to allow AIs access to their own source code, as problems (from our point of view at least, maybe not from the AI's standpoint) would develop much too quickly for human monitors to correct in time.

Anyway, I snarkily think that what the working AI development groups' bosses envision for a governing AI is something more like a monstrously powerful, all-invasive spyware adbot/cop than a self-improving, beneficial caretaker for humanity and all the pretty trees and clouds and stuff.
Quote from: Ultimate Chatbot That Totally Passes The Turing Test
I sympathize completely. However, to use against us. Let me ask you a troll. On the one who pulled it. But here's the question: where do I think it might as well have stepped out of all people would cling to a layman.

Offline RavynousHunter

  • Master Thief
  • The Beast
  • *****
  • Posts: 8108
  • Gender: Male
  • A man of no consequence.
    • My Twitter
Re: Rise of the Machines
« Reply #39 on: September 02, 2013, 09:05:09 am »
I'd just use it for the most realistic game AI ever made by man.  Also, it'd probably be a right bitch to best...but, hey, challenge is good!
Quote from: Bra'tac
Life for the sake of life means nothing.