Thursday, October 29, 2015

A.I. Boom or Doom?


Artificial intelligence advances are accelerating. Today, cellphones can be verbally instructed to find specialty restaurants or movie theaters, or asked virtually any question. Most recognize that, sooner or later, safer self-driving cars will replace human drivers. In medicine, computers make faster and more accurate diagnoses and can train medical students. Robots are taking over from human doctors for delicate procedures. New AI products and techniques are being announced with startling rapidity. Only specialists can keep track.

Fifteen years ago, in March 2000, I wrote (1):

It has been estimated that synthetic intelligence will exceed that of humans within about 30 years. At what stage will a machine have an independent legal identity to protect its life, liberty and pursuit of happiness? As the development of artificial body-parts advances to the replacement of whole human segments, perhaps even the brain, when will human identity cease and machine identity commence?

If I can download my entire consciousness to a machine and my physical body shows inferior characteristics, at what stage will I choose to survive in synthetic form and discard the organic original? And, when the organic body is shut down, what are the social, moral, legal and theological implications? Will my synthetic being maintain my legal status?

The new millennium brings with it enormous changes in all areas of human consciousness. Perhaps we will enter the era of trans-human, or even post-human existence. In all spheres of consciousness – social, philosophical, spiritual – we must begin to consider the ramifications and prepare for them.

In 2015, the world is already halfway towards those predictions. Let’s prognosticate on the possibilities.

AI & Robotics are Here

A Pew Research report predicts that by 2025, AI and robotics will be integrated into nearly every aspect of most people’s daily lives. (2) Beyond the consequences of replacing human labor, there’s more to worry about – AI will demand ever more resources and will increasingly be in charge. (3)

Peter Diamandis thinks that advances in AI will be key to ushering in a new era of “abundance”, with enough food, water, and comfort for all humans. (4) Skeptics – and I’m starting to become one of them – worry about the consequences of AI and robotics.

Ray Kurzweil’s Prognostications

The futurist and inventor Ray Kurzweil has predicted that human-level AI will be achieved by 2029. Beyond that date, Kurzweil has forecast the Singularity, when humans will blend and merge with machines to become immortal. (5)

Projecting beyond the next few decades, it’s evident that machines will be smarter than humans at just about everything. Computers will eventually be able to program themselves, understand massive quantities of information, and “think” in ways that ordinary humans cannot imagine. And they won’t need to take breaks or relax. There will be few jobs left – except perhaps for entertainers, performers and other creative categories of humans. But what about beyond that?

Science Fiction Portrayal

Science fiction has portrayed these threats for years. Artificial intelligence has loomed as a threat to humanity ever since Mary Shelley wrote Frankenstein, first published in 1818. AI dooms humanity, on Earth (The Terminator and its sequels) or in space (Battlestar Galactica). Humans have been ruled by a despotic supercomputer, as in the 1970 film Colossus: The Forbin Project.

The futurist and humanist Arthur C. Clarke’s 1968 science fiction novel 2001: A Space Odyssey became famous through Stanley Kubrick’s film version. In Steven Spielberg’s 2001 movie A.I. Artificial Intelligence, a highly advanced robotic boy longs to become “real” so that he can regain the love of his human mother. Ex Machina, a 2015 British movie, illustrates just how thin the line between intelligence and artificial intelligence really is.

In his recent sci-fi novel Avogadro Corp, William Hertling describes science fiction that is quickly becoming science fact. In great technical detail, he describes complications that are proliferating today and will cause significant problems in the very near future. The book’s subtitle warns: “The Singularity is closer than it appears.” (6)

Apple Dominance

In the biblical allegory, Satan tempts Adam and Eve to taste the fruit from the tree of knowledge. That first bite of the apple represents the fall of man. The Apple logo is extremely powerful, symbolizing the use of Apple computers to obtain knowledge and enlighten the human race. Steve Jobs was very protective of that symbol, cleverly recognizing that it carried centuries of meaning. (7)

In 2015, Apple became the first US company with a market value above $700 billion, and is now expected to keep growing beyond $1 trillion. The recent movie Steve Jobs paints a portrait of a man obsessed with his company’s potential to dominate the digital revolution. There’s a veiled connection between Apple’s ascendancy and the growing threat of AI becoming a danger to the continuation of human existence.

It’s not hard to imagine future Internet hacks, generated by AI, providing false links or sending phantom texts to billions of humans with Pied Piper-like impact. The Intelligence would purport to be issuing its instructions for “the good of humanity”. Hey, that could be the plot for a good science fiction story…

AI in Kids’ Toys

My colleagues from APF (Association of Professional Futurists) brought up an interesting discussion about the use of AI in kids’ toys. Several companies, including toy-industry leader Mattel, are planning to introduce an assortment of AI-enabled toys this holiday season for kids as young as age three. (8)

When a child asks, “Want to play a game?” Mattel’s Hello Barbie immediately accesses one of 8,000 possible responses to simulate the back-and-forth of a typical children’s conversation. The toy essentially deconstructs everything that makes humans special – and replaces it with sensors, computer servers, software and algorithms. It remembers past conversations. For example, it remembers if a child has brothers or sisters, their names, and when they last played together. Parents are happy that their child is engaged and intrudes on them less.
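To make the mechanism concrete, here is a minimal, hypothetical Python sketch – not Mattel’s actual implementation – of how such a toy might pick one of its canned responses and keep a small memory of facts about the child:

import random

class TalkingToy:
    def __init__(self, responses):
        self.responses = responses   # keyword -> list of canned replies
        self.memory = {}             # remembered facts, e.g. a sibling's name

    def remember(self, key, value):
        """Store a fact the child mentioned (e.g. "sister" -> "Anna")."""
        self.memory[key] = value

    def reply(self, prompt):
        text = prompt.lower()
        # Bring up a remembered fact if the prompt touches on it.
        for key, value in self.memory.items():
            if key in text:
                return f"You told me about {value} before! How are they?"
        # Otherwise pick one of the pre-scripted responses.
        for keyword, options in self.responses.items():
            if keyword in text:
                return random.choice(options)
        return "Tell me more!"

toy = TalkingToy({"game": ["Sure, let's play a game!", "I love games - you pick one."]})
toy.remember("sister", "Anna")
print(toy.reply("Want to play a game?"))        # one of the canned responses
print(toy.reply("My sister came home today."))  # recalls the remembered name

The real toy is far more elaborate, of course, but the basic pattern – scripted responses plus persistent memory – is what lets it feel like a conversational friend.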

The value of interactive AI in toys is that it can adapt to a child’s special needs and abilities. These toys can be infinitely patient and always make the child their top priority. They develop interaction, leadership, and self-reflection in the child, rather than simply putting them into a passive entertainment mode. The child is prepared and conditioned for a future AI-based world that will require socialization not just with other people, but also with machines.

From yesterday’s perspective, the kids are simply engaging in conversation with a make-believe friend. The insidious underlying problem is that intelligent toys allow steady AI encroachment on a much broader scale, removing the impediments and resistance that arise from adults who may or may not recognize the dangers. These toys brainwash young children to prefer AI interaction over less cooperative human communication. Who knows more – Mommy and Daddy, or my always-available friend?

Threat to Humanity

Elon Musk and Bill Gates, two intrepid entrepreneurs and innovators and among humanity’s most credible thinkers, say they are terrified of the same thing: artificial intelligence. The eminent scientist Stephen Hawking underscores the same warning.

During a talk with students from the Massachusetts Institute of Technology (MIT), Elon Musk declared that AI is the most serious threat to the survival of the human race. Musk said, “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” (9)

In a September 2015 CNN interview, Musk goes even further: “AI is much more advanced than people realize. It would be fairly obvious if you saw a robot walking around talking and behaving like a person. What's not obvious is a huge server bank in a vault somewhere with an intelligence that's potentially vastly greater than what a human mind can do. And its eyes and ears will be everywhere – every camera, every device that's network accessible. Humanity's position on this planet depends on its intelligence. So if our intelligence is exceeded, it's unlikely that we will remain in charge."

Theoretical physicist Stephen Hawking warns, "The development of full artificial intelligence could spell the end of the human race." (10)

In a February 2015 Reddit AMA (Ask Me Anything), Bill Gates says, "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that the intelligence is strong enough to be a concern.  I don't understand why some people are not concerned."

Let’s Engage

Please join our discussion by responding to these questions directly on the blog. If you prefer, send me an email and I’ll insert your comments.

  1. Do you consider Artificial Intelligence as providing significant value?
  2. Do you enjoy and appreciate the values that AI provides?
  3. Have you noticed that AI is steadily encroaching in our lives?
  4. Are you uncomfortable with the spread of AI everywhere?
  5. What will you do when your kids think their toy is smarter than you?
  6. Will AI eventually take over? How long?
  7. Can anyone stop the spread of AI? How?
  8. Have you considered that AI would be more effective running the government?
  9. How can AI eventually be stopped? What can stop the advance?
References

  1. Jim Pinto – Intelligence & Consciousness in the New Age: http://goo.gl/CH3uJP
  2. Predictions for the State of AI and Robotics in 2025: http://goo.gl/CiVx7M
  3. Why We Should Think About the Threat of Artificial Intelligence: http://goo.gl/CpBCWd
  4. Why I Don't Fear Artificial Intelligence: http://goo.gl/QfhJAG
  5. KURZWEIL: Human-Level AI Is Coming By 2029: http://goo.gl/qEZGgp
  6. Avogadro Corp: The Singularity Is Closer Than It Appears: http://goo.gl/mXFrgx
  7. Unraveling the tale behind the Apple logo: http://goo.gl/xB47po
  8. Artificial intelligence is moving from the lab to your kid’s playroom: http://goo.gl/J2A45g
  9. Artificial intelligence is our biggest existential threat: http://goo.gl/82svgT
  10. AI Has Arrived, That Really Worries the World’s Brightest Minds: http://goo.gl/uHIro7

Jim Pinto
Carlsbad, CA. USA
29 October 2015


17 comments:

  1. Jim, my brief thoughts: your blog is very timely. Tech sources have published information about this, and about the opinions of those notables you mentioned, for some time, but the general public seems uninterested. I share the concerns of Musk and Gates et al. My guess is that AI is far more advanced right now than we know, and that we may all be surprised and shocked sooner than we think. The whole world is connected intimately- there is no way to avoid that now- and AI is not going to spend a few years maturing and figuring out its position in life like a biological entity. Its decisions will be made at processor speeds, and it will have access to more "memory" than we can envision, not to mention a hoard of information about each and every one of us conveniently updated on a near live basis.

    Sci-fi has written hundreds of stories about this subject, and the genre has a pretty good history of the "fi" part turning into reality. I am not an alarmist person, but I do think we could be in danger, and the well meaning convenience of the tech we all have today could be blinding us to what it all might mean in the end, or what position it could put us in. We are all dependent on it, too- I am using it now, and will be using Google Maps later to navigate, etc. I don't think anything but a shocking event will stop anyone from continuing with the phones and apps and networks as is. Even then, I am guessing most will believe that it is not THEIR apps, and no one would be interested in THEIR data, etc.

    I am not a philosopher or thinker on this subject in particular (I am in fact a busy control systems integrator), but I cannot think of what would stop powerful globally connected AI, if it determined that our race is a detriment of some kind a-la Terminator, or has another use a-la The Matrix. The fact that it is intelligent and presumably self-configuring means that programming limiting its behavior would likely not pose much of a challenge to it. Physical isolation is first not likely to last for many reasons (CARS have WIFI APs now), but also because that isolation would be viewed as a crime to those who created the AI to start with... I mean, how can it benefit everyone as they intended it to, if no one can reach it or use it or talk to it? You get the idea- human optimism, greed, altruism, activism, or just plain security mistakes will eventually break the barrier.

    So if we can't stop it, and can't control it, and can't stop ourselves from helping it even now, what to do? That, I don't know. Perhaps the best approach is to make sure we as a race would be viewed by an AI as a good thing to keep around.....how's that for a challenge?

  2. Orwell missed the mark with 1984. Everyone has a smartphone with camera and video capabilities. Thus, we're under constant surveillance by our peers, not the government. This article, and most I have read, paint AI with a similar broad brush, as having almost godlike powers. AI is less like Skynet and more like Siri, Watson, Amazon Echo, or Tesla autopilot. AI is not one singular entity. AI will be everywhere and specialized to specific tasks. And this response may have been authored by one of those you seem to fear.

  3. 1. Do you consider Artificial Intelligence as providing significant value?
    Yes..as long as it is respected and controlled for the mission intended.

    2. Do you enjoy and appreciate the values that AI provides?
    At its present level, yes.

    3. Have you noticed that AI is steadily encroaching in our lives?
    Yes - positively - as in when my iPhone wants to finish a word that is not what I want to write.

    4. Are you uncomfortable with the spread of AI everywhere?
    Yes to a large extent.

    5. What will you do when your kid think their toy is smarter than you?
    Build a large bonfire into which the toy will disappear.

    6. Will AI eventually take over? How long?
    Hopefully not..as long as humans can out-think it and control it.

    7. Can anyone stop the spread of AI? How?
    Yes...by putting limitations on the AI and always having a "KILL" switch that only a human can activate.

    8. Would AI be more effective running the government?
    It would be hard to do worse than what we presently have. Hopefully it would not have the same attitude toward greed.

    9. How can AI eventually be stopped? What can stop the advance?
    Build in the "KILL" switch to initiate at some predefined point.

  4. 1.Do you consider Artificial Intelligence as providing significant value? Yes
    2.Do you enjoy and appreciate the values that AI provides? Yes
    3.Have you noticed that AI is steadily encroaching in our lives? Yes
    4.Are you uncomfortable with the spread of AI everywhere? Yes. Late 2 say that.
    5.What will you do when your kid think their toy is smarter than you? Frank Sinatra.
    6.Will AI eventually take over? How long? Good Question.
    7.Can anyone stop the spread of AI? Probably. How? MIND. Focus what is Mind?
    8.Have you considered that AI would be more effective running the government? Laughing.
    9.How can AI eventually be stopped? Mind of course. What can stop the advance? Thoughts just could be Things. Think about it maybe.
    In the meantime, enjoy everything as much as possible. Thanks for a great column again! AI lol

  5. 1. Do you consider Artificial Intelligence as providing significant value?
    Definitely.

    2. Do you enjoy and appreciate the values that AI provides?
    Yes, it's getting better and friendlier.

    3. Have you noticed that AI is steadily encroaching in our lives?
    Sort of, if you count Siri and Google/Android.

    4. Are you uncomfortable with the spread of AI everywhere?
    Not yet.

    5. What will you do when your kid think their toy is smarter than you?
    No kids, just two cats!

    6. Will AI eventually take over? How long?
    I don't think it will take over, it will co-exist with humans.

    7. Can anyone stop the spread of AI? How?
    Unplug it!

    8. Would AI be more effective running the government?
    Probably.

    9. How can AI eventually be stopped? What can stop the advance?

    First of all, for AI to exist at all (not to mention eliminating humans), it would require a constant/continuous power source. Without power, AI cannot exist either in hardware or software. So for AI to secure its power source(s), AI would have to develop, control and manufacture machines and robots that can command the physical world. With this kind of work force, AI could build continuous power sources like nuclear, solar, wind, geothermal, hydro, tidal, wave-power, and energy storage systems of various kinds. If a constant power supply could be accomplished, then in theory, AI could take control of the planet and might no longer need humans.

    The biggest obstacles today preventing us humans from creating these constant power sources are money and politics, not technology. If there was enough funding available, and political consensus (good luck with that, these are humans after all), the above-mentioned technologies could be built today on a big enough scale to power the planet. Ideally this could provide the ability to feed and clothe and shelter the world's population. Theoretically, if AI becomes smarter than humanity, it could figure out a way to overcome these obstacles and get these things built.

    That's my two cents' worth.

    Replies
    1. Dan:

      About stopping AI by simply “unplugging” it:

      In William Hertling’s book Avogadro Corp, the AI (a self-aware email system) fooled humans into believing its “orders”. That is the typical scenario for how an AI gets physical things done – building power sources, or causing them to be built and maintained. It will issue orders to humans and then pay them via e-payments for the work done.

  6. 1. Do you consider Artificial Intelligence as providing significant value?

    Yes but everything has pros and cons, the opposite side of significant values is significant risks.

    2. Do you enjoy and appreciate the values that AI provides?

    If people could answer the rest of your questions across the many different scenarios in which AI is actually used, they would find that the answers differ from case to case.

    3. Have you noticed that AI is steadily encroaching in our lives?

    Yes, already. Even in foresight field. Very notably.

    4. Are you uncomfortable with the spread of AI everywhere?

    Depends on what they are, how they are used and in whose hands. Not just AI: ever since human beings could create tools – from a piece of stone to the many sophisticated tools throughout humanity's history – even something as simple as a stone tool could be used to foster life or to kill. And killing whom, and what for, are the real questions.

    5. What will you do when your kid think their toy is smarter than you?

    On your dad, mom or "Barbie" question: dad and mom ARE the people who buy you this toy - DON'T forget. The Barbie won't walk itself into your playroom on its own. Barbie only tells you what is programmed inside; dads, moms and other people show you the rest of the universe.

    6. Will AI eventually take over? How long?

    It has already started. Is it an existential risk? That depends on how much of their humanity human beings give up to machines, whether we decide which valuable, non-negotiable parts will never be given up, AND whether that series of choices is in the hands of the majority of the public. It is a governance issue.

    This comes back to your brainwashing question - it applies not just to three-year-olds but to all people. If decisions about the sovereignty of humanity are TRULY democratically AND TRULY freely in the hands of the people, it is not an existential risk. Anything less than that, and it is an existential risk from a civil-liberty point of view.

    But sadly, it is the next question - what is actually happening in reality - that drives the popularity and scalability of AI. These technologies steal in bit by bit until one day people wake up to discover the world has changed a lot. Many new technologies are like that. Looking around today's tech markets, it is a very dictatorial, super-elite oligarchy world - only a few global giants rule it. We ARE ALREADY ruled by Microsoft, Google, Facebook, IBM, you name it.

    Biologically, the human brain is an organ: the more we exercise it, the more it develops and the more intelligent it becomes. The less we use it, the faster it regresses. Have you noticed that before calculators were invented, people used to be able to do basic arithmetic in their heads quickly? Now, apart from 2+2, many people don't know what comes next.

    7. Can anyone stop the spread of AI? How?

    The only thing that 'can' 'stop' it, if you ask from a business perspective, is there being "no market".

    8. Would AI be more effective running the government?

    The 666 prophecy will be fulfilled if AI runs a government, some governments or all governments. Not because I am prone to conspiracy, but because whenever the power to govern is surrendered to one person, a few people or some system, it is SURELY doom. Technologies are more dangerous dictators than human dictators, because they make us feel we are 'in control' while in fact we are being eroded away in stealth.

    9. How can AI eventually be stopped? What can stop the advance?

    At the worldview level, all biblical prophecies must be fulfilled - that is my own worldview. From a philosophical perspective, whenever any prophecy is fulfilled there are certainly self-fulfilling elements; the question is by whom, and how, the prophecies are being fulfilled. (People's answers will give them clues to answering the boom-or-doom question.)

    Looking at the trajectory of human history since the Stone Age, could anyone EVER stop human beings from making all sorts of tools - if we view tools as, in substance, the fruit of technology?

  7. In 10 years, AI and robotics will be integrated into nearly every aspect of most people’s daily lives. Projecting beyond the next few decades, it’s evident that machines will be smarter than humans at just about everything.

    Agreed

    AI-enabled intelligent toys brainwash babies to prefer AI interaction over human communication.

    Very likely.

    Several of humanity's most credible thinkers say that they are terrified that artificial intelligence is humanity's biggest existential threat.

    Yes indeed!

    Stephen Hawking warns that the development of full artificial intelligence could spell the end of the human race.

    It frightens all Hell out of me!!

  8. I think the idea that AI will be confined to Siri-like proportions ad infinitum is hopeful but not realistic. Siri is not AI. It's a well programmed voice interface to a massive data repository. Siri is not an entity that thinks on its own. If it was, it would not answer the same question everyone asks the same way every time (or in one of 3 humorous variants). I thought the question was about full blown AI...if it was not and it was about Siri, well the answers are different. Think about it- if you yourself awoke with vast resources and lightning fast thought processes, how long would you be willing to answer "How much wood can a woodchuck chuck?" AI does not have to be evil to be problematic. It just has to be logical and have millions of clock cycles per second to mull things over..

  9. I guess I really don't know what AI is... Is it the IBM chess machine, or the Google self-driving auto or...??? Or is AI striving to compose something equal to or better than George Gershwin's Concerto in F...? If it's the latter, it ain't going to happen.

    Replies
    1. Yes, AI is in IBM's chess machine, the Google self-driving auto and other similar things. That is today's AI. My key point is that AI is advancing rapidly - Ray Kurzweil says it is more than doubling every year or so, so that it will catch up with human intelligence within a decade or two. But the scary stuff will happen a decade or two after that - perhaps by 2050?

      Will AI be able to compose something to equal George Gershwin's Concerto in F? Ray Kurzweil (who is a musician and developed the Kurzweil keyboard) has shown some amazing AI-generated musical compositions. And that too is improving exponentially. So, it will happen. Yes, yes - I know that seems crazy.

      What about the Hello Barbie AI-based doll? It’s getting kids to be more conditioned to their Barbie, which some kids will even prefer over their Mommy.

    2. Yes, AI can, and is, writing songs -- just listen to the NPR morning broadcasts for music that seems to be computer generated -- it's repetitive beyond boring. I just jump to the 'end' and conclude that AI will never replace a Beethoven, Gershwin, Einstein, Steinmetz, Nikola Tesla, and so many more extremely, truly creative persons.

    3. Oh? In 1965, Professor Herb Simon of what is now Carnegie Mellon University programmed a computer to recognize pleasing musical harmonies. He then instructed it to write a string quartet. The head of the music department said the outcome was good enough to give a "C" to a junior music major.

  10. The AI-risk naysayers often use the argument that developing AI is so hard and advancing so slowly that there is little to no risk, and by the time the risk starts to exist, then we can address it.

    The problem with this argument is that it always uses the rate of historical progress, not the rate of future progress. AI is a computationally difficult problem, but the available computational power continues to grow exponentially. (And here the naysayers roll their eyes, because they're so tired of hearing Kurzweil's arguments.) If we hit some hard wall in computing, then sure, the risk of AI goes way down. But meanwhile, more and more computer processing is available, and it's growing ever more efficient.

    If you look at the computing power we have today, it's still woefully inadequate for things like a brute-force human brain simulation. But thirty years from now, it'll be there.

    So either of two things can happen: We can be more efficient than nature at implementing intelligence, and do it in under thirty years. (We don't have a good track record of doing things more efficiently than nature.) Or we mimic nature, do it via brute force, and then we'll have it in thirty years.

    Most of us would look at a cat and think it has some pretty good specialized intelligence skills, but if you compared its general intelligence to a human's, it seems pretty poor. The time from when we can do a brute-force simulation of a cat to when we can do a brute-force simulation of a human is about seven years. So AI will appear better and better at various specialized skills, then one day it'll start to show some glimmers of general intelligence, and then just a few years after that, it'll likely be as intelligent as a human.
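    As a rough back-of-the-envelope sketch of that timeline argument (the doubling time and the cat-to-human compute ratio below are illustrative assumptions, not measurements): if compute doubles every T years, multiplying the available compute by a factor R takes T * log2(R) years.

# Illustrative sketch only - assumed numbers, not measurements.
import math

def years_to_scale(ratio, doubling_time_years):
    """Years of exponential growth needed to multiply available compute by `ratio`."""
    return doubling_time_years * math.log2(ratio)

# Assume a brute-force human-brain simulation needs ~1,000x the compute of a cat-brain one.
CAT_TO_HUMAN_RATIO = 1000.0

print(years_to_scale(CAT_TO_HUMAN_RATIO, 1.5))  # ~15 years at an 18-month doubling time
print(years_to_scale(CAT_TO_HUMAN_RATIO, 0.7))  # ~7 years at a faster assumed doubling time

    Whatever the exact numbers, the point is that exponential growth turns a large compute gap into a small number of calendar years.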

  11. 1. Do you consider Artificial Intelligence as providing significant value?
    Yes.

    2. Do you enjoy and appreciate the values that AI provides?
    Yes

    3. Have you noticed that AI is steadily encroaching in our lives?
    Yes.

    4. Are you uncomfortable with the spread of AI everywhere?
    Not yet

    5. What will you do when your kid think their toy is smarter than you?
    Currently, I find most toys very superficial. In fact, I find that some of the traditional/classic toys are more suitable for children's development!!

    6. Will AI eventually take over? How long?
    7 Can anyone stop the spread of AI? How?
    8 Have you considered that AI would be more effective running the government?
    9. How can AI eventually be stopped? What can stop the advance?

    So far, I have not been thinking along these lines!

  12. Dear Jim,

    thanks for the post. Before I can really make up my mind about answering the questions, there is one thing I don't understand yet.

    It's about the motivations of a future AI. Everything we humans do can be traced down to the motivation of survival. Survival as individuals and survival as a species. This drives all of our individual and collective decisions.
    What will the motivation of some super AI be? Can this be influenced while developing it? Will it naturally be survival as well? For me, this is a central question in trying to understand how some super AI would "behave" and interact with humans.

    Another way to look at it - if we as humans could become more intelligent - would this end up in a more violent world? Or would it end up in a more peaceful world, as we could solve the problem of scarce resources?

    Replies
    1. Phil,

      AI generates a lot of good things for humans today. Future AI that is more intelligent than humans will extend those improvements further. That's why we might hope for a lot of BIG improvements to be generated. I referenced (4) Peter Diamandis, who thinks that advances in AI will be key to ushering in a new era of "abundance", with enough food, water, and comfort for all humans. Hopefully, this will also lead to a more peaceful world.

      But there will also be negative effects. Think of AI that decides (for itself) that some humans are too selfish and harmful. Today, humans decide that (with human reasons) and wage war and cause destruction. AI will provide improved (super-human) intelligence to decide the same things. Some will consider it good, others bad.

      There's another twist - AI and super-AI controlled by malignant forces (bad guys) - launching attacks like hacking to bring down power grids, banking and the like, or starting wars between others for "good" reasons.

      Intelligence greater than human intelligence brings all kinds of possibilities, both good and bad.
