Apr 19 2016

Brint Paris

Developer

Food For Thought: Artificial Intelligence

We've seen some interesting MSM coverage on AI lately, from stories on the Tay debacle to AlphaGo's victory and more. Here, our resident AI enthusiast, Brint Paris, dives into the possibilities (probabilities?) of artificial intelligence and how it may impact our future lives. At the end of the post we ask Brint to respond to some recent articles that offer a more skeptical vision for the future of AI.

Artificial Intelligence is probably going to be the most important revolution our world has ever seen, but it's mostly happening behind the scenes. The average person doesn't understand the potential of AI - and in some respects, neither do the experts working on it. We've created AIs so sophisticated and powerful that we don't actually understand how they work, and the smartest scientists in the world are trying to reverse-engineer them to figure it out. These AIs aren't being programmed - they're being trained. And unlike humans, they can learn faster, and they can copy/paste their abilities to others. As a result, they're rapidly becoming uncontested masters of their domains, far surpassing the abilities of humans.
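To make that last distinction concrete, here's a minimal sketch of "trained, not programmed" (this is my own illustration in Python with scikit-learn, not code from any of the systems mentioned above). Instead of writing the rules by hand, we show the system labeled examples and let it infer the rules - and once learned, the ability really can be copy/pasted.

```python
# A programmed system would encode rules by hand ("if rain > 5 inches...").
# A trained system is only shown examples and outcomes; it infers the rules.
import pickle
from sklearn.tree import DecisionTreeClassifier

# Toy data: [hours of sun, inches of rain] -> did the crop thrive? (1 = yes)
examples = [[10, 1], [9, 2], [3, 6], [2, 8], [8, 1], [4, 7]]
outcomes = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier().fit(examples, outcomes)  # the "training" step
print(model.predict([[7, 2]]))  # applied to an input it has never seen

# The "copy/paste their abilities" point is literal: a trained model can be
# serialized and duplicated endlessly - no human expert can do that.
clone = pickle.loads(pickle.dumps(model))
print(clone.predict([[7, 2]]))  # the copy has the exact same learned skill
```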

AIs are learning to perform facial recognition, language and context comprehension, spatial and audio awareness, cognitive intuition, and nearly every other human-like ability. In fact, we're approaching a day when it will be difficult to find a human ability that some AI can't match. For the most part, these systems are currently isolated and working alone. But take a minute to consider the bigger picture when they're all put together.

It's not just gaining knowledge - it's learning to comprehend it

Humanity may not realize it, but we are in the process of creating a new species. For decades, we've been adding the collective knowledge of the entire human race to the internet. We've built a giant brain that wakes up every day faster than it was the day before, with new knowledge and more impressive skills. And now, with our heavy investments in AI, it's not just gaining knowledge - it's learning to comprehend it. We're a few years from witnessing the birth of a child that we ourselves can't fully understand.

As this new species learns to harness its power, it will rapidly surpass us in every way. Either that, or we'll have merged with it to gain abilities of our own. Nobody really knows what will happen, because as others before me have said, humans can't begin to fathom what an IQ of 10,000 looks like - much less one connected to a store of knowledge orders of magnitude beyond what we can comprehend. What does something with the collective brainpower of all humanity even do?

The impact of autopiloted vehicles alone could mean that nearly 10% of our work force no longer has a job

And how long do you think it will take for AI to reach the level of humans? 200 years? 100 years? If you're ready to believe some of the greatest experts in these fields, your answer might be closer to 10 or 20. And that doesn't even cover the world-shifting changes you can expect to see between now and then. The impact of autopiloted vehicles alone could mean that nearly 10% of our work force no longer has a job within a few years - a disruption of work unlike anything we've seen in history. And that's to say nothing of the countless other jobs about to be automated with software, at rates even faster than we've seen already.

And it wouldn't be a proper post about the future of AI without mentioning Ray Kurzweil, a director of engineering at Google and one of the most respected scientists in the world. Kurzweil has said that, because of the way technology is advancing, immortality will be achievable within the lifetimes of the Baby Boomers - and he's got the data to support his claims. We're already having global discussions on the ethical implications of modifying the human genome. Why? Because we've acquired the technology to do it. Kurzweil has an incredible track record of being right, but even if his estimate were wildly off by 30 years, that would still mean people age 30 or younger right now could one day be contemplating this very day from the year 3016.

I'm no director of engineering at Google, but if you pay attention to the trends, the potential of the AI revolution is mind-boggling. Artificial intelligence has a lot to offer us, and we haven't even begun to scratch the surface of its potential. One thing is for sure - we've got an interesting few years ahead of us.

 

Q&A with the skeptics

Q. You argue for a much briefer time horizon for AI dominance (10-20 yrs), while in his piece for the NYT, J. Markoff (http://goo.gl/geS1SD) seems to believe we are many lifetimes (perhaps more) away from it. Why do you fundamentally differ in your expectations for how quickly AI will advance?

In the article, Markoff effectively makes two claims: that no scientific evidence has been provided for a technological explosion, and that Moore's Law is evidence against it. To quote the article directly: "At this stage Moore’s Law seems to be on the verge of stalling. Transistors will soon reach fundamental physical limits when they are made from just handfuls of atoms. It’s further evidence that there will be no quick path to thinking machines." This is a piece in the New York Times, so it's undoubtedly an opinion shared by many people.

Markoff correctly asserts that Moore's law addresses transistor size and that some aspects of it are slowing, but he incorrectly concludes that our technological curve depends on it. Aside from the fact that the paradigm behind Moore's law can continue through things like 3D transistors, technology has never been solely dependent on one thing. Imagine if someone told you that the only way to judge a person's health was their pulse, or the only way to improve a company's profits was to create more sales brochures. Are these pieces relevant? Sure. But they're a tiny piece of a much larger picture.

A lot of people - and I would argue that Markoff is among them - frame this debate as though we're the ones imagining a far-fetched scenario where something miraculous needs to happen for it to occur. But the technological curve we're referring to has been consistent throughout the entirety of mankind. Let me say that again: for the entirety of human history, technology has ALWAYS been growing at an exponential pace. And within that exponential growth, many minor aspects of technology come and go in their own exponential curves.
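If you want to feel the difference between those two assumptions, the arithmetic is simple. Here's a back-of-envelope sketch (the starting point and growth rates are illustrative assumptions, not measurements of anything):

```python
# Illustrative only: start a "capability index" at 1.0 and project it forward
# two ways - a linear forecaster adds a fixed amount per year, an exponential
# forecaster assumes a two-year doubling (a Moore's-law-like rate, chosen
# purely for illustration).
for years in (10, 20, 50):
    linear = 1.0 + 0.5 * years      # +0.5 per year
    exponential = 2 ** (years / 2)  # doubles every 2 years
    print(f"{years:>2} years: linear ~{linear:.0f}x, exponential ~{exponential:,.0f}x")
```

After 10 years the two forecasts look comparable (6x versus 32x); after 50 years the exponential one is over a million times the linear one (26x versus roughly 33.5 million x). That gap is exactly why linear intuitions fail so badly when the underlying process is exponential.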

In order to prevent an evolution of intelligence on the scale we're describing, something would have to happen that ENDS this consistent technological trend - a trend that is thousands of years in the making, and which has never in its existence shown signs of changing. That, to me, seems like the implausible conclusion. It's literally saying, "Well, sure, the evidence throughout all of our history has said it will happen... but this time it totally won't."

Okay, why not? Why is this time different? Moore's law is just one of many pieces that fit into the overall trend of technology, assisting in the areas where it was relevant. Is every other area of technological advancement going to simultaneously decay as well? What miraculous event do you perceive will change a trend that has never been altered in all of our history?

Not only has plentiful evidence on the subject been provided by countless experts in the field, but Markoff is suggesting the entire trend of technology would be capsized by an incorrect assumption - one that actual experts like Ray Kurzweil address in great detail.

To specifically rebut his statement about Moore's law, what about quantum computing? To quote an article by MIT Technology Review, "In a carefully designed test, the superconducting chip inside D-Wave’s computer—known as a quantum annealer—had performed 100 million times faster than a conventional processor."

We know almost nothing about quantum computing yet. Results like "a carefully designed test" are exactly how the next steps happen while a field is in its infancy. Major contenders like Google are investing a lot of money to understand quantum computing, hoping to connect these computers to the cloud with incredible results. And remember when computers were the size of a room? So are these. At the time, nobody believed we'd ever hold something in our hand with more power than the technology used to put humans on the moon. But now we do. And quantum computing is just one more aspect of technological growth we can expect to arrive.

Markoff also writes about a lot of people who were wrong in their predictions, but that goes both ways. Look at almost any movie from 10+ years ago that tried to depict the future: we consistently underestimated our technological growth. We made those mistakes for the same reason every time - everyone kept assuming technology would advance linearly rather than exponentially. Looking back, we think the mistakes are laughable, but at the time, those movies earned our suspension of disbelief so we could enjoy them. If we want to play the "you were wrong about the future" game, he's got a few scientists he can point to - I've got 99.9% of the world I can point to.

In order for Markoff's belief to be true, technology would suddenly have to stop being exponential and start being linear. Is that a possibility? Sure. But Einstein is often quoted as saying "Insanity is doing the same thing over and over again and expecting different results." Assuming that technology will suddenly end its exponential trend doesn't just seem unlikely to me, it seems insane.

 

Q. In his NYT piece on the Go match, G. Johnson (http://goo.gl/ibHsTt) states that "… artificial intelligence is far from rivaling the fluidity of the human mind." How would you define the "fluidity" of human intelligence, and why do you think AI is so close to replicating it when most current examples of AI success are single-purpose?

Johnson is exactly right: we're nowhere near rivaling the fluidity of the human mind. We're nowhere near AI matching our capacity for pattern recognition or the speed of our ingenuity, and we don't yet have the hardware to match the power of the human brain. And that's one of the greatest things about all of this. Frankly, I think the proponents of the singularity should be the ones emphasizing that point. What we've already accomplished with single-purpose AI is incredible - even Johnson acknowledges that he was wrong about just how powerful AI would become, and that experts in the field were saying this level of technology shouldn't have existed for a hundred years.

It's precisely the fact that AI is so "dumb" right now that makes this future so amazing. Would we call a newborn child dumb just because it can't understand how to play chess, or even what it would mean to play chess? We wouldn't rate the intelligence of a newborn the same way we would rate a two-year-old, because their minds differ dramatically in capability. The newborn, while developing, can barely grasp its own motor functions. A two-year-old, however, can contemplate and act upon concepts far beyond the realm of a newborn; and even a two-year-old isn't remotely close to the capacity of an adult. Those stages of growth are profound - and in the species we have observed throughout history, inevitable.

Artificial Intelligence is still in its infancy. And despite the obvious limitations of that infancy, it's already defying our expectations and doing things that humans themselves cannot do. There is no other species on the planet that, in its infancy, could remotely compare to any of the capacities of an adult human. Yet here we are with this newborn AI that's been labeled "stupid" and repeatedly crushes the most talented minds our world has to offer.

So I'm not deterred by any of this. We keep hitting stages where people say, "Oh, that won't happen," and then when it happens they backpedal: "Okay, well, it did happen, but THIS definitely won't. Because THIS type of intelligence is much harder." As of right now, there is no human on this planet who can match the level of an artificial intelligence that we've collectively decided is unintelligent. AIs can drive our cars better than we can. They handle our machines with precision we can't dream of. They can answer harder questions on Jeopardy. They can beat our reaction time, our strategies, our ability to gather information... They're starting to take over our jobs (and will be doing so far more quickly in the future) because we can't compete with how capable they are.

And we call that dumb.

And maybe it is. But I feel like it's only "dumb" in the way that a newborn is "dumb." It just doesn't have the fluidity that a two-year-old has. And yes, that's quite a large leap in development, but a two-year-old had to start with all of the same pieces the newborn had. Once it developed the ability to piece those things together, something far more powerful emerged - something so powerful that even the newborn could not have predicted its new abilities.

So if our goal is a human brain, let's investigate one. We've got access to many different regions that handle different abilities: language, memory, vision, hearing, reading, motor control, facial recognition, and so on. Well, guess what we're doing with AI right now? We're building software for language comprehension, visual and facial recognition, audio recognition, motor control - you name it. We're building all of the same pieces that exist in a brain, but with software. The quality is constantly improving, and the tools are becoming more widely available. It's like we're assembling all the right components of a brain and saying, "Yeah, but we'll never actually use them together." Yes we will. Why is that even being contested? Maybe it's a harder problem to solve, but we've been taking every step that leads to the development of that future.
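As a sketch of what "using them together" might look like, here's a toy orchestrator in Python (the component names and interfaces are hypothetical stand-ins I made up for illustration; real systems would plug trained models into each slot):

```python
# Hypothetical sketch: each single-purpose AI becomes a module, and a thin
# orchestrator routes inputs between them - the "pieces together" step.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Percept:
    kind: str   # "image", "audio", or "text"
    data: bytes

# Stand-ins for real single-purpose systems (vision, speech, language).
def recognize_faces(p: Percept) -> str:
    return "vision: no known face found"

def transcribe_audio(p: Percept) -> str:
    return "speech: <transcript>"

def comprehend_text(p: Percept) -> str:
    return "language: <parsed intent>"

ROUTES: Dict[str, Callable[[Percept], str]] = {
    "image": recognize_faces,
    "audio": transcribe_audio,
    "text": comprehend_text,
}

def orchestrate(p: Percept) -> str:
    """Route each percept to the specialist module that handles its kind."""
    return ROUTES[p.kind](p)

print(orchestrate(Percept("text", b"hello world")))
```

The modules don't get any smarter by being wired together in that sketch; the point is only that nothing architectural stops single-purpose systems from being composed, which is the very step the skeptics treat as implausible.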

 

Q. Both you and J. Markoff refer to a majority of experts who back up your positions on the status of AI relative to human intellect. While you specifically mention Ray Kurzweil, Markoff is less forthcoming. How is a reader to know or believe where most AI scientists fall in their predictions for how rapidly AI will advance?

For starters, the tendency for nearly everyone throughout history (researchers included) is to ignore the trend of history and believe progress will be linear. We've been *really* wrong in the past, and that has always been because we've ignored the exponential trend. If nearly everyone has consistently been wrong about the future, it's not about counting how many people believe something - it's about identifying the rare gem you CAN believe.

Beyond that, my advice would be nearly the same as for anything else that relies on understanding: figure out who is ignoring the evidence, and who isn't. Learn to tell the difference between sham debunking and actual debunking. The method is easy, but a lot of people get caught up in pessimism. Here's why:

Learning about stuff - really, truly learning about stuff - takes a long time. Understanding the finer points of someone's arguments isn't something we all have time to do. But we WILL read the broad concepts when an opinion has been summarized for us. So when a news article shows up about a really smart guy who says AI is coming soon, we'll read it. And now that we've read it, we've clearly got that side of the story, right?

Well, no - what we read was an opinion with a sliver of explanation, not the countless years of dedicated research that went into it. So along comes another writer who says, "Haha, yeah right," and makes a flashy-sounding point that doesn't even begin to respond to those years of research. That response is a lot easier to swallow than hundreds of pages of research, most of which we'd have no context to follow anyway. It doesn't even matter if the skeptic's points crumble under the weight of actual evidence, because the reader never sees that evidence. The writer can single out one thing that sounds really informative, and most readers won't have enough background to contest it. So to them, it sounds like valid debunking. And we accept it.

It's unfortunate that this happens, but it happens ALL THE TIME. People are easily duped without sufficient research, and many refuse to accept that fact. But we believe things because we assume we've seen both sides of the story, when really we were never even exposed to the first side of it. All we've seen is the broad stroke of an opinion, followed by someone's pessimism about it. And sadly, it's easy for people to accept pessimism. So when something comes along that sounds way too good or impressive to be true, it's easy to casually dismiss it as the improbable one.

To learn the truth, you can't just look at the first two takes - you need to watch the responses between the two sides until you can tell when one side is ignoring the points made by the other, or whose arguments are crumbling beneath the weight of the other's. The argument about AI is littered with examples of this.

Everywhere I look, skeptics aren't responding to the points being made; and those who do respond don't really reject the possibilities - they just feel the possibilities are more likely to be dangerous (which is a fair argument, but outside the scope of this piece). Ultimately, the number of people supporting one side of this argument or the other isn't the relevant way to identify what's more likely; it's finding the people educated enough in their field to have something worth saying, and making sure they're not the ones hiding from the evidence.

This is one of the reasons Ray Kurzweil has gained the traction that he has. His responses to his critics are fantastic. He doesn't hold back when it comes to demanding that they respond to his individual points, and he responds to EVERY point they make. I've never seen a single point made by a critic that Kurzweil didn't expertly answer, and he goes on to support every one of his own claims. It doesn't hurt that Kurzweil has an incredible track record of being right, and that people like Bill Gates have called him "the best person I know at predicting the future of artificial intelligence."

We only get closer to the truth when we discover something we were wrong about. So if I were a reader who wanted to find the truth, I'd start by searching for the people with the experience and evidence to break down my preconceptions most effectively. On this topic, I have yet to see a skeptic whose points are stronger than the evidence arrayed against them. That, to me, strengthens the case for AI immensely.

