Ray Kurzweil On Future AI Project At Google

Here is a good 11-minute interview with Ray Kurzweil.

In the past Google has been fairly open about publishing the details of how its infrastructure works (e.g., MapReduce, the Google File System), so I am hopeful that the work of Ray Kurzweil, Peter Norvig, and their colleagues will be published sooner rather than later.

Kurzweil talks in the video about how the neocortex builds hierarchical models of the world through experience, and about how he pioneered the use of hierarchical hidden Markov models (HHMMs). It is beyond my own ability to judge whether HHMMs are better than the kind of hierarchical models formed in deep neural networks, discussed at length by Geoffrey Hinton in his class "Neural Networks for Machine Learning." In this video, and in his recent Authors at Google talk, Kurzweil also discusses IBM's Watson project and how it is capable of capturing semantic information from the articles it reads. Humans do a better job of extracting information from a single article but, as Kurzweil points out, Watson can read every Wikipedia article, something no person can do.
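For readers who have not bumped into HMMs before, here is a toy Python sketch of the forward algorithm for a plain (non-hierarchical) HMM; the two part-of-speech states and every probability in it are invented purely for illustration. Roughly speaking, the hierarchical variant nests models like this one inside the states of a parent model.

```python
# Toy forward algorithm for a plain (non-hierarchical) HMM.
# The two part-of-speech states and all probabilities are made up.
states = ["noun", "verb"]
start_p = {"noun": 0.6, "verb": 0.4}
trans_p = {"noun": {"noun": 0.3, "verb": 0.7},
           "verb": {"noun": 0.8, "verb": 0.2}}
emit_p = {"noun": {"dogs": 0.5, "run": 0.1, "fast": 0.4},
          "verb": {"dogs": 0.1, "run": 0.6, "fast": 0.3}}

def forward(words):
    """Return P(words): sum the probability of every hidden state path."""
    alpha = {s: start_p[s] * emit_p[s][words[0]] for s in states}
    for w in words[1:]:
        alpha = {s: emit_p[s][w] *
                    sum(alpha[prev] * trans_p[prev][s] for prev in states)
                 for s in states}
    return sum(alpha.values())

print(forward(["dogs", "run", "fast"]))  # likelihood under this toy model
```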

As an old Lisp hacker, I find it fascinating that Google does not use Lisp languages for AI, since languages like Common Lisp and Clojure are my go-to languages for coding "difficult problems" (otherwise, just use Java <grin>). I first met Peter Norvig at the Lisp Users & Vendors Conference (LUV) in San Diego in 1992. His fantastic book "Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp" had just been published in 1991, as had my much more modest Springer-Verlag book "Common LISP Modules: Artificial Intelligence in the Era of Neural Networks and Chaos Theory." Anyway, it is not for me to tell companies what programming languages to use :-)

I thought one of the most interesting parts of the linked video was Kurzweil's discussion of how he sees real AI (i.e., software that can understand natural language) fitting into Google's products.

While individuals like me and small companies don't have the infrastructure and data resources that Google has, if you are interested in "real AI", deep neural networks, and the like, I believe it is still possible to perform useful (or at least interesting) experiments with smaller data sets. I usually use a dump of all Wikipedia articles, without the comments and edit history. Last year I processed Wikipedia with both my KBSPortal NLP software (for which, incidentally, I hope to ship a major new release in about two months) and the excellent OpenCalais web services. These experiments took only some patience and a leased Hetzner quad-core i7 server with 32 GB of RAM. As I have time, I would also like to experiment with a deep neural network language model, as discussed by Geoffrey Hinton in his class.
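If you want to try something similar, a reasonable first step is streaming articles out of a pages-articles dump one page at a time, so the whole file never has to fit in memory. Here is a minimal Python sketch; the file name and the XML namespace URI are assumptions, so check them against the snapshot you actually download from dumps.wikimedia.org.

```python
import bz2
import xml.etree.ElementTree as ET

# Both values are assumptions: adjust them to the dump you download from
# dumps.wikimedia.org (the namespace URI changes between dump versions).
DUMP = "enwiki-latest-pages-articles.xml.bz2"
NS = "{http://www.mediawiki.org/xml/export-0.10/}"

def iter_articles(path):
    """Yield (title, wikitext) pairs one page at a time."""
    with bz2.open(path, "rb") as f:
        for _, elem in ET.iterparse(f):          # fires on each closing tag
            if elem.tag == NS + "page":
                title = elem.findtext(NS + "title")
                text = elem.findtext(NS + "revision/" + NS + "text") or ""
                yield title, text
                elem.clear()                     # release the parsed subtree

if __name__ == "__main__":
    for title, text in iter_articles(DUMP):
        print(title)   # replace with a call into your own NLP pipeline
```

Reading straight from the compressed .bz2 file trades some CPU time for a lot of disk space, which suits a modest leased server just fine.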

Comments

  1. Interesting! As I was reading his latest book during the holidays, I kept asking myself two things: (1) even though HHMMs look like they do a good job of implementing the hierarchical aspects of intelligence (i.e., the central argument of the book), why not spend more time describing other approaches based on (more or less) the same premise (e.g., deep learning), and (2) is he (and his company) really the sole origin of the huge success and ubiquity of HMMs for certain NLP tasks throughout the '90s and '00s? If that is the case, I can imagine that he isn't as excited by the recent success stories of DL-based speech recognition as we are.

  2. Hello Christian, thanks for the comments. I haven't bought Ray Kurzweil's latest book yet. I am interested, but I have a deep stack of books (in the Kindle sense :-) to read. I will get to it, though.

    Perhaps it is just because I finished Geoffrey Hinton's class a couple of months ago, but my money right now is on deep neural nets.

    -Mark

  3. Christian, BTW, I just started reading your blog: good stuff!

  4. Hi Mark. I too have completed the excellent NN class recently taught by Hinton: challenging, but very rewarding. In fact, I also did some work on the original NN language model when I was doing my master's in Yoshua Bengio's lab a couple of years ago.

    I have mixed feelings about Ray Kurzweil: on one hand, I find he's a somewhat strange figure in the world of AI (ref. "Transcendent Man"!), but on the other, I also find him intriguing, in part because of his unrelenting enthusiasm and strong ideas.

    That said, his latest book is quite an easy read (almost no technical material, as expected; even the more "philosophical" parts are quite light, I found), but I wanted to read it because of its central insight about the hierarchical nature of intelligence, which of course is also at the heart of the new deep learning paradigm (although from a much more algorithmic perspective, I would say, rather than the neurophysiological one heavily promoted by Kurzweil).

    Thank you for your kind comments about my blog; I appreciate it! I've been following yours for a while, and I'll be very interested to see what happens with the new directions you've recently taken.


