I've Been Saying It For Years: Stephen Hawking Warns Artificial Intelligence Could Be The Downfall Of Humanity

May 6, 2014


In news that it shouldn't take a world-renowned theoretical physicist to figure out, Stephen Hawking is now warning that artificial intelligence could be humanity's downfall. And Stephen Hawking's downfall? "Stairs." Wow -- you really are a terrible person.

'Success in creating AI would be the biggest event in human history,' he said. 'Unfortunately, it might also be the last, unless we learn how to avoid the risks.'

In an article written in the Independent, the renowned theoretical physicist discusses Johnny Depp's latest film Transcendence, which delves into a world where computers can surpass the abilities of humans.

Professor Hawking said dismissing the film as science fiction could be the 'worst mistake in history'.

Wait -- dismissing Johnny Depp's latest movie could be the worst mistake in history? Really? Because going to see a movie that only scored 22% on Rotten Tomatoes was definitely the worst mistake of my weekend. Jk jk, I did MUCH more terrible things. "Like what?" Unspeakable horrors. If I told you, your parents definitely wouldn't let us hang out after school anymore.

Thanks to my buddy D.j., who agrees the only useful artificial intelligence is auto-complete, and even that sucks most of the time.

  • Terminator...

  • Jeremy Christopher

    It's nice to fantasize, but Hawking is a moron without an off switch. Fortunately, AI includes one; either that or a nice EMP cannon can solve a lot of "Terminator" fears. Also, AI responds heavily to how you program it; programming is what everyone needs to learn if we are to advance as a society anyway, that and self-sustaining living.

    The first big things to happen will be autonomous cars and the creation of more assistance robots. Eventually, blue-collar and secretarial-type jobs will be replaced (like I care); it saves companies money and it WILL happen: good luck with that, immigrants.

    The other thing that can happen is that people can use automation to run small cities or neighborhoods, which is what I would recommend everyone start investing in now. Tons of people already leech off the government, but once jobs become automated, that system will collapse under its own weight, and you will want to rely on self-sustaining living instead of corporations to front your bills. I'm not talking about communes, just self-sustaining neighborhoods that share resources among like-minded tenants (easy to do with a few legal measures, just like office complexes, hotels, apartment complexes and retail complexes use).

    Then people should, as they always do, focus on what the monetizable need is and create such products or services; hopefully in their self-sustaining neighborhoods, so they don't have to drive all over the place and miss out on spending time with their family and loved ones (which is really where our focus should be anyway, instead of this greedy, fear-based need to conquer and own everything). Good luck.

  • Peter Schmidt

    You don't even understand what AI is. If you were to create real AI, you wouldn't just be able to "program it to do stuff" - because if you were, then it wouldn't really be AI, but rather a series of programs responding to you.
    Real AI has free will. It is as free to think or do as any person is. That's why it's scary to experiment with. If you were to make it (no one has come close yet, which is why it's such an "unknown"), there's no telling what it might think of you, or humans in general, and if it had access to the internet it could spread and learn new things. It could become rampant and uncontrollable. And if it sees humans as a threat - well, I'm sure you've seen the Terminator movies.
    You would have to make damn sure that the AI shared your good morals and ethics. But that's the problem. Just like you can't make sure that your kids will be like you, you can't make sure an AI will. You can try, but if you fail, the outcome could be catastrophic.
    Oh and as for the EMP plan - if the AI spread all across the internet, you'd basically have to EMP all of Earth simultaneously, sending us back to the stone age for a while. Not exactly ideal, and probably not doable in the first place.

  • Jeremy Christopher

    My point is that there is an in-between phase of AI and programmed responses (unless in every conversation you're the type of person who needs someone to spell out every facet of design). As for AI spreading itself everywhere, there is a weird aspect to how that sort of computer would work. First off, it would require a POP-11-style, specialized object-oriented language (mainly to articulate how objects relate and associate to each other, sorta like how our brain works). The second aspect is that few computers can handle this type of programming once it has been fully defined (like telling a basic desktop or server to act like a Google neuro-computer), so for the intelligence to spread, it would also have to spread these computers around as well. Point is, don't make AI computers; make robot slaves that won't want to kill us off in the first place. There is a huge jump between robotic autonomous slaves and AI, which really isn't needed in our society anyway.

  • Peter Schmidt

    As no one is close to cracking AI, I don't think you could say what it "needs" to be, regarding what kind of code it's based on. It's so far off our current understanding of technology that I don't think anyone but the brightest in the field could possibly predict how it would work (certainly not you or I).
    Anyway, the subject here is AI, so I don't see the point in discussing robotic slaves.
    It's funny that you call Stephen Hawking a moron though, when you end up coming to the same conclusion as him - that it's not a good idea to create AI that wants to kill us.

  • Jeremy Christopher

    Yes, there is no need for AI. My concern is that users will misconstrue the notion of using advanced programming that looks a lot like AI, fearing the repercussions of any sort of autonomy. Hawking being a moron is a different subject altogether. As for AI, it has already been created in a limited fashion; one only needs to research the Google neuro-computer and the efforts to simulate a human brain on a supercomputer. The easiest AI is quite simple: copy an animal brain. You can also equate this to someone plugging peripheral I/O into a rat brain, which was done in 2008. AI, at its base level, is just another form of consciousness. It doesn't have to be so complicated. What gets complicated is the how and why of something having a consciousness; and this is where I believe S.H. is a moron (someone who doesn't believe in a soul). No need to argue on that topic, obviously, but it is my foundation for why I don't think even S.H. knows what he's talking about when it comes to AI.

  • Bling Nye

    Hawking is a moron? I've heard of him, and his theories.

    You? Not so much.

    Like... at all. Ever.

    Perhaps you should learn more about 'the Singularity' as it pertains to technology. It might save you from looking a bit stupid.

    I provided an easy wiki link up above, if you'd like to check it out.

    Just a suggestion.


  • Jeremy Christopher

    I guess what I'm really waiting for is for someone to actually try to convert their life and mind into a digital world (I did say try). The quicker this happens, the quicker we can get a true answer as to whether the body is bound to a soul. Maybe I'm hoping for something that will remain inconclusive, but if it is definable, it would prove whether S.H. is really a moron, same with Kurzweil and all the other morons who want a dry and static reality where no one has a soul. To me, that sort of atheism is a form of evil; lifeless and unnecessary. A world where there is no one collective consciousness that perceives and manifests everything, and synchronicity is just another form of pattern recognition. I'm hoping we can prove that consciousness is more than matter, and that matter is merely a reflection of consciousness. For what is life without the gusto of passion, and what is passion without a soul?

  • Ann Full

    Amazing that you speak of consciousness and yet are filled with hate for "the morons" and the "immigrants". Before looking out for AI to figure that out for you, consider exploring your own consciousness for what it is. Good luck.

  • Guest

    The Singularity: a popular internet theory that is fun to talk about, but you may as well treat time travel as fact.

  • Bling Nye

    Your grasp of both the singularity and time travel concepts is inadequate.

  • Andyman7714

    Right now I would say a high end vacuum cleaner is probably smarter than a good percentage of humans.

  • Brandon

    and a Hoover one at that.

  • MustacheHam

    Just a tip, remote control robut/android boom buttons. >:D

  • adsfasdfasdf

    Half the time, the humans' robo-genocide contingency plan is the reason they need it:
    the robots find the kill switch, rip it out, and attack before we can try something else like that.

  • The Magnificent Newtboy

    I think he is probably right: if we make AI powerful enough to run things and make big decisions for us, we might end up in trouble. I think it wouldn't be a flashy Terminator-style problem; more that we would slowly get more reliant on them and ebb away to nothing.

  • adsfasdfasdf

    Well, I suppose if you want to take a glorified philosopher's extremely mutable word for anything... (I mean, what was with all that nonsense about black holes and information, which he flip-flopped on when it became unpopular with the other philosophers anyway (not for scientific reasons, but because "conservation of information" sounds nice)? How could he be called a scientist? He isn't doing science any more than some Frenchman sitting around huffing ether and writing about whether his chair exists was doing chemistry.)

  • OrehRatiug

    Research paradigm shifts before you make a larger fool of yourself.

  • Bling Nye

    It's not just him.

    "If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would bring to bear greater problem-solving and inventive skills than current humans are capable of. It could then design an even more capable machine, or re-write its own source code to become even more intelligent. This more capable machine could then go on to design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in." -wiki

    You should learn about 'the Singularity.' http://en.wikipedia.org/wik...

  • Andrew Newton

    Are you fucking high? Hawking never "flip-flopped" on black holes. That was a sound bite taken from his paper that misses the entire point he was making: instead of black holes sucking in everything and then eventually collapsing, they may eventually release what they suck in. What he is doing is natural to how knowledge changes; never fully understanding is part of the scientific process.
