AI Is Not Magic

One of the highlights of attending the GPU Technology Conference in San Jose back in March was the keynote address by NVIDIA CEO Jensen Huang. Of course, there was the predictable push to sell more hardware, but notwithstanding that, the technology demonstrations were truly impressive.

There was an autonomous car in the parking lot. There was a driver in the conference room. Then, on the big screen, was the driver in the holodeck – the simulated environment for the real car out back. The real driver, in the virtual holodeck, seeing real-time data and imagery, drove the virtual car, which in turn drove the physical car safely into a parking spot. It was impressive. The processing power required to do all of this in real time is barely imaginable.

But the biggest insight for me came from NVIDIA’s Director of Developer Programs, William Ramey.

AI is not magic.

With all the mystery around Machine Learning, and Deep Learning in particular, it was refreshing to hear such a statement. The problem, of course, is that the hype is so great. It’s easy to assume that because this problem is hard, AI must be the answer.

His statement boiled down to this: If X then Y. Can the problem be defined so simply? Can a human understand, at least conceptually, the process? Is there a defined outcome? If all are true, then AI is likely a viable candidate. If a human can’t articulate the process, then don’t expect AI to provide an answer.
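That checklist can be phrased as three yes/no questions. Here is a minimal sketch in Python; the function name, parameter names, and example answers are my own framing of Ramey's point, not anything NVIDIA published:

```python
# A sketch of the "If X then Y" test as a three-question checklist.
# The names and examples here are illustrative, not NVIDIA's.

def ai_is_a_viable_candidate(problem_is_simply_definable: bool,
                             human_understands_process: bool,
                             outcome_is_defined: bool) -> bool:
    """AI is a likely candidate only if all three questions are answered yes."""
    return all([problem_is_simply_definable,
                human_understands_process,
                outcome_is_defined])

# E.g., recognizing handwritten digits: definable, conceptually
# understandable by a human, with a defined outcome.
print(ai_is_a_viable_candidate(True, True, True))   # True

# A problem no human can articulate even conceptually fails the test.
print(ai_is_a_viable_candidate(True, False, True))  # False
```

The point of the sketch is the `all(...)`: a "no" to any one question is enough to doubt that AI will provide an answer.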

Ultimately it boils down to this: Could a human, with a lot of data and a lot of time, solve this same problem? If not, don’t expect magic from AI.
