It's not necessarily the answer to AI, but it works remarkably well. Is it Skynet or the Terminator yet? No.
There are different ideas for what constitutes AI. Expert systems and knowledge-based reasoners? Pattern-recognizer black boxes? Chatbots? AGI?
Over the years the concept of AI has shifted. Until recently, "AI" mostly referred to things like A* search, algorithms for playing turn-based board games (see the Russell-Norvig book), symbolic manipulation, ontologies, and so on. Only in the last few years has it come to refer to machine learning, like neural networks, again.
Neural networks are good at what they are designed for. Whether they will lead to human-like artificial intelligence is a speculative question. But symbolic manipulation alone certainly won't be able to handle the messiness of sensory data. I think neural nets are much better suited for open-ended development than the hand-engineered pipelines that were state of the art until recently (like extracting corner points, describing them with something like SIFT, clustering the descriptors, and running an SVM over bag-of-words histograms). Hand engineering seems too restrictive.
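For anyone who hasn't seen that classic pipeline, here's a minimal sketch of the bag-of-visual-words approach described above, using scikit-learn. Real systems would extract SIFT descriptors from images (e.g. with OpenCV); here synthetic random vectors stand in for descriptors so the sketch is self-contained and runnable. The class labels, vocabulary size, and descriptor counts are all made up for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Pretend each "image" yields ~50 local 128-d descriptors (SIFT-sized).
# The per-class mean shift gives the classifier something to learn.
def fake_descriptors(label):
    return rng.normal(loc=label, size=(50, 128))

train = [(fake_descriptors(y), y) for y in [0, 1] * 20]

# 1. Cluster all training descriptors into a "visual vocabulary".
vocab_size = 16
all_desc = np.vstack([d for d, _ in train])
kmeans = KMeans(n_clusters=vocab_size, n_init=10, random_state=0).fit(all_desc)

# 2. Represent each image as a normalized histogram over visual words.
def bow_histogram(desc):
    words = kmeans.predict(desc)
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(d) for d, _ in train])
y = np.array([label for _, label in train])

# 3. Train an SVM on the bag-of-words histograms.
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))  # training accuracy
```

Every stage here (keypoint detector, descriptor, clustering, kernel) had to be chosen and tuned by hand, which is exactly the restrictiveness being pointed out; an end-to-end neural net learns the intermediate representation instead.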
Quibble with the timeline: AI as a field has included a big chunk devoted to machine learning research pretty much continuously, especially since the '80s or so. The specific methods in vogue do change: decision trees, neural networks, SVMs, boosting, association-rule learning, genetic algorithms, Bayesian networks, etc. go through periods of waxing and waning in popularity. A few years ago boosting/bagging and other ensemble methods were very hot and neural networks were out of fashion; now neural networks are hot and the boosting hype has quieted down a bit. But ML is pretty much always there in some form, since learning from data is an important component of AI.
ML was there, but at least when I started learning about these things around 8 years ago, the label "AI" was mostly used for symbolic stuff. Courses named "AI" taught from the Russell-Norvig book: things like resolution, planning in the blocks world, heuristic graph search, minimax trees, etc. ML existed, but it wasn't really under the label of "AI" as far as I can remember. I think it's something of a marketing term that big companies like Google and Facebook reintroduced due to the sci-fi connotations. But that's just my guess.
I can see that for intro courses, especially because of the book, though it varies a lot by school and instructor. On the research side it's been a big part of the field, though. The proceedings of a big conference like AAAI [1] are a decent proxy for what researchers consider "AI", and ML has been pretty well represented there for a while.
i get the impression that terminology bifurcated into "AI" and "cognitive science" around the time Marr published Vision in the '80s.
quibbles and q-bits aside, i was glad to see the announcement from the perspective of a if-not-free-then-at-least-probably-open-source-ish software appreciator.