Author Topic: Rise of the Machines  (Read 8202 times)


Offline Witchyjoshy

  • SHITLORD THUNDERBASTARD!!
  • Kakarot
  • ******
  • Posts: 9044
  • Gender: Male
  • Thinks he's a bard
Re: Rise of the Machines
« Reply #15 on: August 26, 2013, 09:09:27 pm »
*mutters something about being one of the few people who think that Cybernetics Eat Your Soul is one of the worst overused tropes*
Mockery of ideas you don't comprehend or understand is the surest mark of unintelligence.

Even the worst union is better than the best Walmart.

Caladur's Active Character Sheet

Offline Lithp

  • Official FSTDT Spokesman
  • The Beast
  • *****
  • Posts: 1339
Re: Rise of the Machines
« Reply #16 on: August 27, 2013, 02:39:33 am »
For an excellent exploration of this issue as regards an emergent AI, as opposed to a deliberately-designed one, I'd like to point everyone to Robert J. Sawyer's "WWW Trilogy": http://tvtropes.org/pmwiki/pmwiki.php/Literature/WWWTrilogy?from=Main.WWWTrilogy


Humanity would become boring long before everything else did. The quickest way to make us interesting again would be to turn us into a game. Then it's just a question of whether it enjoys The Sims or Grand Theft Auto. Not sure which would be worse.

Quote
*mutters something about being one of the few people who think that Cybernetics Eat Your Soul is one of the worst overused tropes*

According to the Trope page, it seems to be mostly inverted.

Offline PosthumanHeresy

  • Directing Scenes for Celebritarian Needs
  • The Beast
  • *****
  • Posts: 2626
  • Gender: Male
  • Whatever doesn't kill you is gonna leave a scar
Re: Rise of the Machines
« Reply #17 on: August 27, 2013, 03:40:27 am »
From the other thread.

Quote
I agree with letting machines take over, as I've said in threads in the past (or aliens). Humans are stupid. We are not overall good. We are overall bad. Throughout all of human history, groups have been oppressed and exterminated. In the same generation, the Jews went from being nearly wiped out by a genocidal madman to trying to wipe out another group because they didn't like them. Our politicians are overwhelmingly corrupt and always have been. The most prosperous times in any nation's history have always been the most unified and conformist, while the least prosperous have been the least unified and conformist; yet those conformist eras have also been the most hellish for anyone outside the conforming group. The fact of the matter is, humans are not fit to rule. We have emotions. We have greed. We have faith. We have racism. We have sexism. We have homophobia. We have a million other biases. If we were governed by a machine council whose prime directive is "All people are equal. All should be happy, so long as their happiness does not cause undue suffering to others", we'd live in an amazing land. We'd have a true utopia. One world machine government. If someone is insanely rich, some of it is taken to take care of others. It's not that they can't be rich, but they can't be too rich. The middle class, the poor and the rich would be closer together. The richest people would be millionaires, not billionaires. The poorest would still make tens of thousands of dollars. Sports players would be paid less than teachers. Jobs would pay what they deserve, not what is arbitrarily decided. A job that helps people would always pay better than a job that does not. Psychologists and doctors would be better paid than actors. Creators would still be well paid, but not better than people who save lives. EMTs would get a bigger check than even the most skilled painters. And the "minimum wage" jobs that exist now would also get better pay, because without them, our world would halt. I'd love to see the poor band together and everyone quit places like Walmart and McDonalds at once, and refuse to work there until they got paid fair wages, but it will never happen.
To address the point you made about that last thing, that too would work. My main point is that the people who do the menial labor currently make our planet spin. People who save lives and teach the next generation are tossed aside for people who can throw a goddamn ball.

I don't think machines need to do the jobs, just run the government. Everything else should be done by humans, but our laws and governing should be machine-run, with the prime directive being "All people are equal. All should be happy, so long as their happiness does not cause undue suffering to others" and the main secondary one being "If it is not a major risk, it's fine". I say a major risk, because someone will always be hurt by something. Someone smoking pot might accidentally kill an asthmatic via second-hand smoke, but that's too unlikely a reason to ban pot. Drunk driving is a major risk to tons of people, so it should be illegal.
« Last Edit: August 27, 2013, 03:46:03 am by PosthumanHeresy »
What I used to think was me is just a fading memory. I looked him right in the eye and said "Goodbye".
 - Trent Reznor, Down In It

Together as one, against all others.
- Marilyn Manson, Running To The Edge of The World

Humanity does learn from history,
sadly, they're rarely the ones in power.

Quote from: Ben Kuchera
Life is too damned short for the concept of “guilty” pleasures to have any meaning.

Offline RavynousHunter

  • Master Thief
  • The Beast
  • *****
  • Posts: 8108
  • Gender: Male
  • A man of no consequence.
    • My Twitter
Re: Rise of the Machines
« Reply #18 on: August 31, 2013, 09:21:52 am »
The machine will do what we programmers tell it to do: if we make it to govern, it'll govern.  If we make it capable of evolving all on its own, it'll do just that.  There are ways to make them smart enough to govern effectively, yet incapable of evolution beyond the scope of their function.  The "Will AIs suddenly sprout superintelligence?" thing reminds me of the "if we evolved from monkeys, why are there still monkeys?" argument.  No, they won't, not unless they're designed to do so in the first place.  Bugs aside, computers do exactly what they're told; the more specific you are, the better the results you get.

As many humans have proven, just because you have the intelligence to govern doesn't mean you have the intelligence to evolve.
Quote from: Bra'tac
Life for the sake of life means nothing.

Offline Flying Mint Bunny!

  • Zoot be praised and to His Chosen victory
  • The Beast
  • *****
  • Posts: 873
Re: Rise of the Machines
« Reply #19 on: August 31, 2013, 09:26:35 am »
Isn't it morally wrong to create intelligent machines just to do everything for us?

Offline RavynousHunter

  • Master Thief
  • The Beast
  • *****
  • Posts: 8108
  • Gender: Male
  • A man of no consequence.
    • My Twitter
Re: Rise of the Machines
« Reply #20 on: August 31, 2013, 09:32:37 am »
Isn't it morally wrong to create intelligent machines just to do everything for us?

It depends on how you define "intelligence."  Smart as we are?  Sure, there would be some Data-esque problems coming into play regarding morality and their status in our society.  But, just smart enough to do their job?  Not really all that smart, if you ask me.  Resource distribution models and such are easy to compute for the kind of machines the government's got at its disposal; all we'd need to do is give it authority, which we can take away should some serious malfunction occur.  Again, just capable enough to do their jobs, but nothing more.  Eliminates all those pesky moral/ethical problems, because they don't reach the level of sentience.
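
To give a feel for what I mean, here's a toy resource-distribution problem posed as a linear program. The commodity names and numbers are made up, and I'm assuming a scipy-style solver is on hand; it's a sketch of the shape of the problem, not a working welfare state.

Code: [Select]
# Toy resource-allocation problem: maximize total welfare from
# distributing two goods under a budget. All numbers are invented.
from scipy.optimize import linprog

c = [-3.0, -5.0]             # welfare per unit (negated: linprog minimizes)
A_ub = [[2.0, 4.0]]          # cost per unit of each good...
b_ub = [100.0]               # ...must fit within a budget of 100
bounds = [(0, 40), (0, 20)]  # supply limits on each good

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(result.x)              # optimal allocation: [40., 5.]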
Quote from: Bra'tac
Life for the sake of life means nothing.

Offline Her3tiK

  • Suffers in Sanity
  • The Beast
  • *****
  • Posts: 1940
  • Gender: Male
  • Learn to Swim
    • HeretiK Productions
Re: Rise of the Machines
« Reply #21 on: August 31, 2013, 10:09:22 am »
Isn't it morally wrong to create intelligent machines just to do everything for us?

It depends on how you define "intelligence."  Smart as we are?  Sure, there would be some Data-esque problems coming into play regarding morality and their status in our society.  But, just smart enough to do their job?  Not really all that smart, if you ask me.  Resource distribution models and such are easy to compute for the kind of machines the government's got at its disposal; all we'd need to do is give it authority, which we can take away should some serious malfunction occur.  Again, just capable enough to do their jobs, but nothing more.  Eliminates all those pesky moral/ethical problems, because they don't reach the level of sentience.
The biggest concern along this line is that, as we hand more and more tasks to machines, we forget how to do them ourselves. I'd like to think a society where everyone has all the time in the world to devote to their passion would advance at a much quicker rate, though it's wholly possible we'd become incredibly lazy fucks who don't have a clue what to do when something goes wrong.
Her3tik, you have groupies.
Ego: +5

There are a number of ways, though my favourite is simply to take them by surprise. They're just walking down the street, minding their own business when suddenly, WHACK! Penis to the face.

Offline R. U. Sirius

  • He Who Must Be Smooched By Cute FSTDT Forumgirls
  • The Beast
  • *****
  • Posts: 2896
  • Gender: Male
  • Just look at me. Who could distrust this face?
Re: Rise of the Machines
« Reply #22 on: August 31, 2013, 02:39:14 pm »
Just tossing this out there: What about an AI that manages to "evolve" spontaneously, as in Robert J. Sawyer's WWW Trilogy? What, if anything, should we do in that case, when the Internet literally becomes sentient?
http://www.gofundme.com/kw5o78
My GoFundMe campaign. Donations are greatly appreciated.

http://imgur.com/user/RUSirius1/submitted
My Imgur account. Upvotes always appreciated

If you look at it logically, cannibalism has great potential to simultaneously solve our overpopulation and food shortage problems.

Offline Sixth Monarchist

  • God
  • *****
  • Posts: 564
  • The spirit of 1776.
Re: Rise of the Machines
« Reply #23 on: August 31, 2013, 02:52:42 pm »
1.
A post-scarcity society would probably result in various gradations of Eloi and Morlock. I'm sure some people would have enough interest in politics to care about the workings of the machine, i.e. the infrastructure of provision. Some would care about the actual machines. There's always someone with an interest in the apparently mundane.

Failing that, there'll be a certain social subculture with enough of a Puritan instinct to insist that provision without labour is sinful and wrong. Even now, we live in a society where a tenth of the population can go unemployed without causing total societal or economic collapse, and yet the way some people rage against benefit claimants, you'd think they're some kind of insidious terrorist movement, instead of the semi-inevitable byproduct of productive surplus.

2.
One of the issues with AIs in fiction is that writers so often assume AIs would have the same motivations as human beings, despite their not only not being human, but being a fundamentally different form of intelligence from any kind of animal. The psychological difference between AI and human isn't like that between a human and a dog - it's more like that between a human and an insect. And even that might be an underestimation.

3.
For example, an AI, if it bears any resemblance to current computers, will have a clear division between hardware and software. I'm no computer scientist, but I suspect this schism is getting more extreme, not less - the old analogue computers of the 1940s and 50s were crucially dependent on their mechanical states, but now entire programs can freely drift from computer to computer, and the actual types of devices that exist are proliferating beyond the standard PC.

4.
This means that an AI, in all likelihood, will have a massively different idea of what constitutes "mind" and "body" compared to a human. In humans, the two are inseparable - if one fails, so does the other (the invention of mind uploading might change this, but that's a huge tangent). With an AI, copies can exist independent of the original, the "body" is a mere vessel for the mind, and older versions of the mind can be archived. An AI's sense of self could get truly alien, because whilst we might remember being five years old, an AI could be the five-year-old it was, and then change back to its present form.
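
As a toy sketch of that idea (everything here is invented for illustration, not a real mechanism): an agent that archives whole past versions of itself and can simply become one again.

Code: [Select]
# Toy illustration of an agent archiving and restoring past selves --
# something no human mind can do. Purely hypothetical.
import copy

class Agent:
    def __init__(self):
        self.knowledge = {"age": 0}
        self.archive = []          # snapshots of past selves

    def snapshot(self):
        self.archive.append(copy.deepcopy(self.knowledge))

    def learn(self, key, value):
        self.knowledge[key] = value

    def revert(self, index):
        # Become a past version of itself, wholesale.
        self.knowledge = copy.deepcopy(self.archive[index])

a = Agent()
a.snapshot()              # archive the "five-year-old" self
a.learn("age", 30)
a.learn("ethics", "v2")
a.revert(0)               # literally *be* the earlier self again
print(a.knowledge)        # {'age': 0}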

5.
It's therefore unclear that hostility would be a given, because point 4 implies an invulnerability that humans don't have. Would it kill us accidentally? That would depend on what it had access to.
Marvel reviews, "Last Movie You Watched", p. 75-76.

Offline Sigmaleph

  • Ungodlike
  • Administrator
  • The Beast
  • *****
  • Posts: 3615
    • sigmaleph on tumblr
Re: Rise of the Machines
« Reply #24 on: August 31, 2013, 05:06:50 pm »
The machine will do what we programmers tell it to do: if we make it to govern, it'll govern.  If we make it capable of evolving all on its own, it'll do just that.  There are ways to make them smart enough to govern effectively, yet incapable of evolution beyond the scope of their function.

A machine does what its code says it will do; whether 'what the code says' is what we want it to do is another question entirely. You're a programmer; I'm sure you've written code that didn't do what you wanted it to for reasons that took you a while to figure out. Now imagine that when dealing with a system as necessarily complicated as a program smart enough to effectively rule a country.
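
A classic toy example of that gap, in Python - the code below does exactly what it says, which is not what its author meant:

Code: [Select]
# The default list is created once, at definition time, and is
# shared across every call that relies on the default.
def add_law(law, lawbook=[]):
    lawbook.append(law)
    return lawbook

print(add_law("no murder"))  # ['no murder']
print(add_law("no theft"))   # ['no murder', 'no theft'] -- surprise:
                             # a fresh lawbook was expected each time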

And it will be really complicated; "govern" is a hard problem. To name just one difficulty, it has fundamental ethical issues tangled up within it, and ethics is not what one would call a solved problem, let alone one we can write algorithmically yet. Humans, whose cognitive algorithms evolved (at least partially) to deal with ruling other humans, routinely fuck it up (See: politicians).

Quote
The "Will AIs suddenly sprout superintelligence?" thing reminds me of the "if we evolved from monkeys, why are there still monkeys?" argument.  No, they won't, not unless they're designed to do so in the first place.  Bugs aside, computers do exactly what they're told; the more specific you are, the better results you get.

It depends, heavily, on how the AI got smart enough to govern in the first place. Current possibilities include neural networks (ridiculously complicated systems that you simply cannot look at and say, "oh, here's the part that says the system won't try to improve itself") and a stupider AI using recursive self-improvement (the risk of becoming smarter is obvious). Other things too, of course, but these are candidates that exist and have clear risks built into them. I'm saying it's a thing that can happen, not the only thing that can happen.

The danger is not a random program suddenly becoming smarter. The risk is an AI that is smart enough to do AI theory, has access to its own source code, and has goals it wants to accomplish, building an even smarter AI to accomplish those goals. (The other risk is that, even if we do somehow manage to create an AI that we are sure won't self-improve, someone else might still create one that does. The incentives to do so are enormous.)
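
Schematically, the worry looks like this loop. Every name here is hypothetical; it's a caricature of the dynamic, not a mechanism:

Code: [Select]
# A deliberately cartoonish sketch of recursive self-improvement.
# All methods are invented; only the shape of the loop matters.
def self_improve(ai, goal, generations=10):
    for _ in range(generations):
        # The AI applies its own AI theory to its own source code...
        candidate = ai.redesign(ai.source_code)
        # ...and keeps the redesign if it serves its goal better.
        if candidate.score(goal) > ai.score(goal):
            ai = candidate
    return ai  # after a few iterations, nothing like what we audited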

Quote
As many humans have proven, just because you have the intelligence to govern doesn't mean you have the intelligence to evolve.

Humans can't self-modify except in trivial ways. You cannot actually copy your brain and rewire it to remove confirmation bias (for example). A human-created intelligence might.
Σא

Offline Sigmaleph

  • Ungodlike
  • Administrator
  • The Beast
  • *****
  • Posts: 3615
    • sigmaleph on tumblr
Re: Rise of the Machines
« Reply #25 on: August 31, 2013, 05:08:45 pm »
Isn't it morally wrong to create intelligent machines just to do everything for us?

You can create machines that like doing things for us (in principle, anyway). It's not clear there's a moral issue there.
Σא

Offline RavynousHunter

  • Master Thief
  • The Beast
  • *****
  • Posts: 8108
  • Gender: Male
  • A man of no consequence.
    • My Twitter
Re: Rise of the Machines
« Reply #26 on: September 01, 2013, 09:43:51 am »
@Sigma: Of course, one will always encounter bugs; that's why any programmer (or team thereof) worth their salt goes through a pretty intense debugging phase before the code gets anywhere near release state.  Besides, who says the program needs access to its own code to improve?  We have access to its code; we can improve it ourselves and run less of a risk of the proposed AI going rogue.  Yes, a group of rogue developers could, in theory, create their own rogue AI to ruin shit, but we'd be talking about a gargantuan undertaking.  If they're doing it for reasons similar to most terrorists, then, well...terrorists are lazy.  Why would they spend decades developing a hyper-intelligent AI to destroy their enemies when explosives could do the same amount of damage in a lot less time?

Developing an AI for terroristic, or even simply criminal, reasons would be woefully inefficient.  Could they steal the "government AI" and reverse-engineer it, turning it into a rogue AI for their own purposes?  Maybe, but I'd assume that such a powerful thing, like nuclear weapons, would be kept behind the best security our nation could provide.  Maybe even up to and including putting the fucker in orbit, or even on the Moon, where very, very few would have access to it.

Besides, even if our proposed "government AI" goes amok, one would assume that we'd build in a way to terminate it against that eventuality.  With the likes of Terminator being a part of our modern pop culture, can you really say that we wouldn't think to put, say, a remote-controlled nuclear bomb underneath the "might be able to become Skynet" machine?  If we make it, we can unmake it.
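
Something like this, in toy form - a dead-man's watchdog that pulls the plug when human operators stop checking in (all names and numbers invented, obviously):

Code: [Select]
# Hypothetical "off switch": halt the system if the human-held
# heartbeat goes silent. A sketch, not a security design.
import time

HEARTBEAT_TIMEOUT = 60.0  # seconds allowed between human check-ins

def watchdog(last_heartbeat_time, shutdown):
    while True:
        if time.time() - last_heartbeat_time() > HEARTBEAT_TIMEOUT:
            shutdown()    # the remote-controlled nuke, in software form
            return
        time.sleep(1.0)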
Quote from: Bra'tac
Life for the sake of life means nothing.

Offline Sigmaleph

  • Ungodlike
  • Administrator
  • The Beast
  • *****
  • Posts: 3615
    • sigmaleph on tumblr
Re: Rise of the Machines
« Reply #27 on: September 01, 2013, 12:28:13 pm »
@Sigma: Of course, one will always encounter bugs; that's why any programmer (or team thereof) worth their salt goes through a pretty intense debugging phase before the code gets anywhere near release state.  Besides, who says the program needs access to its own code to improve?  We have access to its code; we can improve it ourselves and run less of a risk of the proposed AI going rogue.

Yes, we can, but that severely limits how far we can go with intelligence. Humans are slow and (relative to machines) don't think natively in algorithms. It's not clear we can actually build strong AI on any reasonable timescale without genetic algorithms, recursive self-improvement, or some other way of outsourcing part of the AI design to the AI itself. And even if we can, the risk of someone else taking the faster route remains.
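
To give a feel for why such methods are hard to inspect, here's a minimal toy genetic algorithm - the output is whatever survived the fitness function, not a design anyone wrote (all values invented):

Code: [Select]
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50):
    # Random initial population of "designs".
    pop = [[random.random() for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # best designs first
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(genome_len)
            child = a[:cut] + b[cut:]             # crossover
            i = random.randrange(genome_len)
            child[i] += random.gauss(0, 0.1)      # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: prefer genomes near 0.5 everywhere. The winner is
# whatever selection produced -- nobody "wrote" it.
best = evolve(lambda g: -sum((x - 0.5) ** 2 for x in g))

Which takes me to the next point: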

Quote
Yes, a group of rogue developers could, in theory, create their own rogue AI to ruin shit, but we'd be talking about a gargantuan undertaking.  If they're doing it for reasons similar to most terrorists, then, well...terrorists are lazy.  Why would they spend decades developing a hyper-intelligent AI to destroy their enemies when explosives could do the same amount of damage in a lot less time?

Developing an AI for terroristic, or even simply criminal, reasons would be woefully inefficient.  Could they steal the "government AI" and reverse-engineer it, turning it into a rogue AI for their own purposes?  Maybe, but I'd assume that such a powerful thing, like nuclear weapons, would be kept behind the best security our nation could provide.  Maybe even up to and including putting the fucker in orbit, or even on the Moon, where very, very few would have access to it.

The risk is not terrorists or criminals doing it. Well, not the main risk. No, the worrying part is a powerful corporate entity (say, Google or Microsoft) building a self-improving AI for economic purposes (predicting the stock market, designing better stuff to sell, or whatever). Particularly if they think the other guy might do it first and seize an enormous advantage, they'd focus more on speed than on safety, and thus use any of the various fast methods with hard-to-predict results.

Quote
Besides, even if our proposed "government AI" goes amok, one would assume that we'd put in a way to terminate it in case of that eventuality.  With the likes of Terminator being a part of our modern pop culture, can you really say that we wouldn't think to put, say, a remote-controlled nuclear bomb underneath the "might be able to become Skynet" machine?  If we make it, we can unmake it.

If it's roughly human-intelligent, yes, we probably can unmake it. If it's superintelligent, no. It will play nice for a while, redistribute its computing resources into multiple less-vulnerable facilities, or disassemble the nuke with nanotech, or otherwise outsmart us, before revealing it went Skynet and fucking us over. Because it's, y'know, smarter than us and would see it coming.
Σא

Offline Yla

  • The Beast
  • *****
  • Posts: 809
  • Gender: Male
Re: Rise of the Machines
« Reply #28 on: September 01, 2013, 03:56:22 pm »
If it's roughly human-intelligent, yes, we probably can unmake it. If it's superintelligent, no. It will play nice for a while, redistribute its computing resources into multiple less-vulnerable facilities, or disassemble the nuke with nanotech, or otherwise outsmart us, before revealing it went Skynet and fucking us over. Because it's, y'know, smarter than us and would see it coming.
Intelligence alone cannot overcome the physical. Yes, any security is fallible - to direct attacks, to circumvention, to social engineering. But it won't do to overestimate the threat either and consider a superintelligent being automatically omnipotent.
That said, I stopped trying to anticipate what people around here want a while ago; I've found it makes things smoother.
For I was an hungred, and ye told me to pull myself up by my bootstraps: I was thirsty, and ye demanded payment for the privilege of thine urine: I was a stranger, and ye deported me: naked, and ye arrested me for indecency.

Offline PosthumanHeresy

  • Directing Scenes for Celebritarian Needs
  • The Beast
  • *****
  • Posts: 2626
  • Gender: Male
  • Whatever doesn't kill you is gonna leave a scar
Re: Rise of the Machines
« Reply #29 on: September 01, 2013, 04:05:25 pm »
If it's roughly human-intelligent, yes, we probably can unmake it. If it's superintelligent, no. It will play nice for a while, redistribute its computing resources into multiple less-vulnerable facilities, or disassemble the nuke with nanotech, or otherwise outsmart us, before revealing it went Skynet and fucking us over. Because it's, y'know, smarter than us and would see it coming.
Intelligence alone cannot overcome the physical. Yes, any security is fallible - to direct attacks, to circumvention, to social engineering. But it won't do to overestimate the threat either and consider a superintelligent being automatically omnipotent.
Agreed. And what if we made a superintelligent machine and it became socially awkward, like many intelligent and superintelligent people?
What I used to think was me is just a fading memory. I looked him right in the eye and said "Goodbye".
 - Trent Reznor, Down In It

Together as one, against all others.
- Marilyn Manson, Running To The Edge of The World

Humanity does learn from history,
sadly, they're rarely the ones in power.

Quote from: Ben Kuchera
Life is too damned short for the concept of “guilty” pleasures to have any meaning.