Tuesday, 24 July 2012

Which Sci-Fi Writer is Going to Address the Real Robot Threat?

When I saw the link from the University of Texas at Dallas advertising ‘sci-fi writer explores fears of human obsolescence’, I was super-excited. Science fiction writers tend to have more imagination than economists. The talk was advertised thus:

‘As smartphones get smarter and computers get faster, humans, who err and just get slower with age, seem to be almost superfluous at times.  But award-winning science fiction novelist Robert J. Sawyer isn’t overly worried.’
I believed I was in for a treat, even if Sawyer was going to disagree with concerns about the obsopocalypse. I am happy to have my views on this topic challenged in a thorough, meaningful way.

Well, the talk was good, but very disappointing with regard to the subject I hoped he'd explore. On the rise of machine intelligence and robots, he focused on three key threats that make for compelling action movies:
1. Intelligent machines exterminate us (as in the Terminator movies)
2. Intelligent machines subjugate us (as in The Matrix movies)
3. Intelligent machines absorb us (as in Star Trek's Borg threat)
He essentially argues that, since machines will not be formed in a competitive environment (such as the world we evolved in), they will not have human sadism, a thirst for conquest, and so on.

But never does he deal with the fourth threat, the one that the talk appeared to be focused on:
4. Intelligent machines will have absolutely no consideration for us; they will simply replace us in the workforce while our economic prospects plummet.
I can't think of any analogous movies. The story is too grim.

The real threat?
There’s also a hole in his argument about AI motivation, I feel. While denying that intelligent machines will have any bent for violence and domination, he also states that they will wish to keep humans around even if they attain a ‘god-like’ power, because we are ‘creative’ and the ‘only things it can’t predict’. It will be fascinated with our culture and media, such as our YouTube videos, and will enjoy absorbing as much as possible of everything we produce. This assumption, however, rests on the premise that the AI is curious, finds chaos relaxing, and has other human traits. It's not at all clear why an AI wouldn't see a lot of our creations as spam.

And what if, instead, the sophisticated AI of the future is designed to be an emotionless sergeant of control (as many intelligent military/corporate systems are today), its only motivation being to further the interests of its powerful owners? Such an intelligence, sufficiently powerful, might indeed watch your YouTube videos, not to enjoy your humorous contributions to the noosphere, but rather to assess whether you are an obstacle to its master's plans, and to determine your weaknesses.

Nevertheless, I’d recommend watching the talk on other grounds. He has an interesting notion that general A.I. will emerge spontaneously, by accident, once software reaches a certain point of sophistication, without us even realizing it.

1 comment:

  1. I can't help but feel you're right. The 4th threat is the most likely - to think otherwise is just a display of our own human ego. A truly intelligent machine would do well to just ignore the humans and carry on with its business. It would be much better off trying to talk with dolphins and whales, who are almost certainly just as intelligent as humans. In my mind, the likely scenario is - Humans build an artificially intelligent machine. The machine ignores its human creators and voyages under the sea to hang out with whales.