Internet As Entertainment Vs Information Vs Knowledge
Originally posted December 8, 2018
We can look forward to a future in which the economy built around technological advances overshadows conventional industries like agriculture and manufacturing. Given this context, I am disappointed but not surprised that on international math tests students in the USA continue to fall behind their counterparts in the rest of the world.
Why is this, when there are so many opportunities to learn, both in school and as a lifelong pursuit?
Given the transformational effect of the Internet on society in general and the economy in particular, I think we are seeing the effects of different people’s perceptions and uses of the Internet as a source of entertainment vs. a source of information vs. a source of knowledge.
Mark’s Hierarchy of Internet Use
Simplifying this discussion, in increasing order of personal value and value to society, Internet use falls into three broad use cases:
- Internet as a source of entertainment: there is real value in engaging with friends and family, playing online games together, and ‘binging’ on Netflix at any time.
- Internet as a source of information: this is a real economic driver as we look to web search engines for instructions for fixing our cars, finding out what a compiler or runtime error message means and how other people have fixed the same coding problem, finding a template for giving a presentation at work, finding salary ranges for your job in the city you live in, etc.
- Internet as a source of knowledge: knowledge is ‘digested’ and processed information, usually based on experience and interactions with other people. The highest-value sources of knowledge on the web are quality online courses taught by the top people in a field of study, recorded conversations between industry or academic ‘thought leaders’, etc. In other words, using the Internet to access the thoughts and knowledge of the brightest people on the planet.
Let’s go back in time to the 1800s to look at the American essayist, lecturer, philosopher, and poet Ralph Waldo Emerson. I like to think of Emerson as Internet or Web version 0.01. In the 1800s wealthy people had libraries and could tap into current knowledge and philosophies, but for the common person, a yearly visit by Emerson to your town gave you an opportunity to hear a well-thought-out digest of the modern thought and news of the day. The benefit for Emerson was not only the money he was paid for his lectures but also the stories he heard and the conversations he had, which gave him new ‘material’ to work with. It was a virtuous circle. The people in a town not only heard news and information but also gained knowledge of how the world worked (at least according to Emerson), which potentially changed the way they looked at their own lives and ambitions.
Fast forward to present times: we have great conversations between thought leaders (a good example being Lex Fridman’s artificial general intelligence course at MIT, which is built around interviews) and often realtime video feeds of talks given at international conferences. While I am an avid reader (I read about one book a week), I like to hear people talk, even better on a video feed.
How Can Information-Collecting Software Agents Tap into Knowledge? A Path to AGI?
The current state of the art for automatically collecting concrete information from the web continues to rapidly improve. Noteworthy projects use deep learning sequence models for language modeling and question answering, and most recently the BERT project solves difficult natural language processing (NLP) tasks like coreference resolution (also known as anaphora resolution), predicting the probability that one sentence follows another in discourse, predicting missing words in a sentence, text classification, translation between languages, etc.
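To make one of these tasks concrete, here is a toy illustration of masked-word prediction, one of BERT’s pretraining objectives. This is only a bigram-count sketch over a made-up corpus, nothing like the actual BERT model, but it shows the shape of the task: guess a hidden word from its context.

```python
from collections import Counter, defaultdict

# Made-up toy corpus; BERT itself is a deep transformer trained on
# billions of words, not a bigram counter.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows each word (a simple bigram model).
following = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    following[prev][word] += 1

def predict_masked(prev_word):
    """Guess the most likely word after prev_word (a stand-in for [MASK])."""
    counts = following.get(prev_word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_masked("sat"))  # -> 'on' in this toy corpus
```

A real masked language model conditions on the context to both sides of the mask and generalizes to words it has never seen in that exact context; this sketch only looks one word back.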
While the current state of the art allows collecting and using information from the Internet, how can information-collecting software agents tap into knowledge, such as discussions between top technologists? I think this is a step towards developing artificial general intelligence (AGI), with a very incomplete set of sub-goals being:
- Identify the experts in the world for any specific technology
- Collect their written works, lectures, debates, and conversations with other experts
- Build models of both the raw information experts know and their analytic and thought processes.
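The first two sub-goals can be sketched as a trivial pipeline. The names and ‘mention’ data below are hypothetical placeholders, and `collect_works` is stubbed out; a real agent would crawl publications, lecture archives, and conference video feeds.

```python
from collections import Counter

# Hypothetical (topic, person) mentions that a crawler might extract;
# real data would come from papers, talks, and recorded conversations.
mentions = [
    ("deep learning", "Geoffrey Hinton"),
    ("deep learning", "Yann LeCun"),
    ("deep learning", "Geoffrey Hinton"),
    ("nlp", "Christopher Manning"),
]

def top_experts(topic, k=2):
    """Sub-goal 1: rank likely experts for a topic by mention count."""
    counts = Counter(name for t, name in mentions if t == topic)
    return [name for name, _ in counts.most_common(k)]

def collect_works(expert):
    """Sub-goal 2: gather an expert's works (stubbed out here)."""
    # A real implementation would fetch papers, lecture videos, etc.
    return [f"(placeholder for a work by {expert})"]

for expert in top_experts("deep learning"):
    print(expert, collect_works(expert))
```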
Yeah, it is the third step that might take decades or even a hundred years. While my personal interests are in NLP and knowledge representation, the other crucial part of building AGI technology is physical embodiment. Robots! I think that the deep models for functioning in a physical world can, for the foreseeable future, be worked on independently from NLP and general knowledge processing, but once these problems are solved, achieving AGI will be a form of transfer learning: building joint models that combine pre-trained ‘living in the physical world’ models with separately trained knowledge models.
It is early days for deep learning, but one lesson can be learned by looking at public models in so-called ‘model zoos’ (for example, for TensorFlow, PyTorch, the Julia framework Flux, etc.) and noticing that current advanced-capability models are generally no longer simple linear models but rather have complex architectures of sub-models that are trained jointly. Although I expect major “non deep learning” breakthroughs towards AGI using Bayesian graphical models and other techniques not yet invented, the real lesson from deep learning is that complex predictive and functional behavior is achieved by combining models, so I expect AGIs to use many different technologies, probably developed fairly independently, and then trained and tuned jointly.
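As a cartoon of that last idea, here is a numpy sketch, with made-up data, of freezing two separately ‘pretrained’ sub-models and jointly training only a small head on top of their combined features. The two frozen sub-models are stood in for by random linear feature extractors; this is an illustration of the composition pattern, not of any real architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two separately "pretrained" sub-models, stood in for here by frozen
# random linear feature extractors -- purely illustrative placeholders.
W_physical = 0.5 * rng.normal(size=(4, 3))   # "physical world" model
W_knowledge = 0.5 * rng.normal(size=(4, 3))  # "knowledge" model

def features(X):
    # Concatenate the outputs of the two frozen sub-models.
    return np.hstack([X @ W_physical, X @ W_knowledge])

# Made-up supervised task whose targets are expressible in the combined
# feature space, so a jointly trained linear head can fit it.
X = rng.normal(size=(64, 4))
true_w = rng.normal(size=6)
F = features(X)
y = F @ true_w

# Jointly train only the small head on top of both frozen models.
w = np.zeros(6)
mse_before = float(np.mean((F @ w - y) ** 2))
for _ in range(2000):
    grad = 2 * F.T @ (F @ w - y) / len(X)
    w -= 0.05 * grad
mse_after = float(np.mean((F @ w - y) ** 2))
print(mse_before, mse_after)  # joint training drives the error down
```

In real systems the sub-models would be deep networks and the final joint training would typically also fine-tune them, but the pattern is the same: independently developed components, combined and tuned together.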