I've Been Saying It For Years: Stephen Hawking Warns Artificial Intelligence Could Be The Downfall Of Humanity

May 6, 2014

stephen-hawking-robots-will-kill-us-all.jpg

In news that it shouldn't take a world-renowned theoretical physicist to figure out, Stephen Hawking is now warning that artificial intelligence could be humanity's downfall. And Stephen Hawking's downfall? "Stairs." Wow -- you really are a terrible person.

'Success in creating AI would be the biggest event in human history,' he said. 'Unfortunately, it might also be the last, unless we learn how to avoid the risks.'


In an article written for The Independent, the theoretical physicist discusses Johnny Depp's latest film Transcendence, which delves into a world where computers can surpass the abilities of humans.

Professor Hawking said dismissing the film as science fiction could be the 'worst mistake in history'.

Wait -- dismissing Johnny Depp's latest movie could be the worst mistake in history? Really? Because going to see a movie that only scored 22% on Rotten Tomatoes was definitely the worst mistake of my weekend. Jk jk, I did MUCH more terrible things. "Like what?" Unspeakable horrors. If I told you, your parents definitely wouldn't let us hang out after school anymore.

Thanks to my buddy D.j., who agrees the only useful artificial intelligence is auto-complete, and even that sucks most of the time.

  • Terminator...

  • Jeremy Christopher

    It's nice to fantasize, but Hawking is a moron, without an off switch. Fortunately AI includes one; either that or a nice EMP cannon can solve a lot of "Terminator" fears. Also, AI responds heavily to how you program it; programming is what everyone needs to learn if we are to advance as a society anyway, that and self-sustaining living.

    The first big things to happen will be autonomous cars and the creation of more assistance robots. Eventually, blue-collar and secretarial-type jobs will be replaced (like I care); it saves companies money and it WILL happen: good luck with that, immigrants. The other thing that can happen is that people can use automation to run small cities or neighborhoods, which is what I would recommend everyone start investing in now. Tons of people already leech off the government, but once jobs become automated, that system will collapse under its own weight, and you will want to rely on self-sustaining living instead of corporations to front your bills. I'm not talking about communes, just self-sustaining neighborhoods that share resources with like-minded tenants (easy to do with a few legal measures, just like office complexes, hotels, apartment complexes and retail complexes use).

    Then people should, as they always do, focus on what the monetizable need is and create such products or services; hopefully in their self-sustaining neighborhoods, so they don't have to drive all over the place and miss out on spending time with their family and loved ones (which is really where our focus should be anyway, instead of this greedy fear-based need to conquer and own everything). Good luck.

  • Peter Schmidt

    You don't even understand what AI is. If you were to create real AI, you wouldn't just be able to "program it to do stuff" - because if you were, then it wouldn't really be AI, but rather a series of programs responding to you.
    Real AI has free will. It is as free to think or do as any person is. That's why it's scary to experiment with. If you were to make it (no one has come close yet, which is why it's such an "unknown"), there's no telling what it might think of you, or humans in general, and if it had access to the internet it could spread and learn new things. It could become rampant and uncontrollable. And if it sees humans as a threat - well, I'm sure you've seen the Terminator movies.
    You would have to make damn sure that the AI shared your good morals and ethics. But that's the problem. Just like you can't make sure that your kids will be like you, you can't make sure an AI will. You can try, but if you fail, the outcome could be catastrophic.
    Oh and as for the EMP plan - if the AI spread all across the internet, you'd basically have to EMP all of Earth simultaneously, sending us back to the stone age for a while. Not exactly ideal, and probably not doable in the first place.

  • Jeremy Christopher

    My point is that there is an in-between phase of AI and programmed responses (unless in every conversation you're the type of person who needs someone to spell out every facet of design). As for AI spreading itself everywhere, there is a weird aspect to how that sort of computer would work. First off, it would require a POP-11-style, specialized object-oriented language (mainly to articulate how objects relate and associate with each other, sort of how our brain works). The second aspect is that few computers can handle this type of programming once it has been fully defined (like telling a basic desktop or server to act like a Google neuro-computer), so for the intelligence to spread, it would also have to spread these computers around as well. Point is, don't make AI computers; make robot slaves that won't want to kill us off in the first place. There is a huge jump between robotic autonomous slaves and AI, which really isn't needed in our society anyway.

  • Andyman7714

    Right now I would say a high end vacuum cleaner is probably smarter than a good percentage of humans.

  • Brandon

    and a Hoover one at that.

  • MustacheHam

    Just a tip, remote control robut/android boom buttons. >:D

  • adsfasdfasdf

    Half the time the humans' robogenocide contingency plan is the reason they need it:
    the robots find the killswitch, rip it out, and attack before we can try something else like that.

  • The Magnificent Newtboy

    I think he is probably right, if we make AI powerful enough to run things for us and make big decisions for us we might end up in trouble. I think it wouldn't be a flashy terminator style problem, more that we would slowly get more reliant on them, and ebb away to nothing.

  • adsfasdfasdf

    Well, I suppose, if you want to take a glorified philosopher's extremely mutable word for anything... (I mean, what was with all that nonsense about black holes and information, which he flip-flopped on when it became unpopular with the other philosophers anyway? Not for scientific reasons, but because "conservation of information" sounds nice. How could he be called a scientist? He isn't doing science any more than some Frenchman sitting around huffing ether and writing about whether his chair exists was doing chemistry.)

  • OrehRatiug

    Research paradigm shifts before you make a larger fool of yourself.

  • Bling Nye

    It's not just him.

    "If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would bring to bear greater problem-solving and inventive skills than current humans are capable of. It could then design an even more capable machine, or re-write its own source code to become even more intelligent. This more capable machine could then go on to design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in." -wiki

    You should learn about 'the Singularity.' http://en.wikipedia.org/wik...
