The InfoStride Forum

TECHNOLOGY => Computing and Internet => Topic started by: ReadWrite on Oct 24, 2013, 01:31 PM

Title: How Researchers Map The Future Of Innovation
Post by: ReadWrite on Oct 24, 2013, 01:31 PM
  (http://readwrite.com/files/readwrite_future_tech.jpg)  

This article is part of ReadWrite Future Tech (http://readwrite.com/series/future-tech#awesm=~ol7lwqUqb8wGqI), an annual series where we explore how technologies that will shape our lives in the years and decades to come are grounded in the innovation and research of today.

In 1991, the United States Defense Advanced Research Projects Agency (DARPA), the research arm of the Pentagon, reorganized its priorities. As part of that reorganization, much of the funding the agency had provided for fields like artificial intelligence and deep neural networks was pulled.

It may have been the best and worst thing to ever happen to the field of artificial intelligence.

Peter Lee, the corporate vice president for Microsoft Research, describes the time after DARPA's 1991 reorganization as the "AI Winter."

Future Tech: The Future Is Built By People Exploring The Art Of The Possible ... And Pushing Past It (http://readwrite.com/2013/10/21/future-tech-cocktail-party-question-art-of-the-possible#awesm=~ol7jOQPcZihEek)
 "A lot of hope and optimism went out of the sails of AI then," Lee said in an interview with ReadWrite. "I was at Carnegie Mellon and I remember in the late 80s and early 90s how a lot of the enthusiasm around AI kind of died. It was overhyped, oversold. Expert systems and neural nets and things like that were interesting, but people tried to use them in the real world and they didn't work well. They were really brittle."

DARPA's decision set the field of artificial intelligence back by more than a decade. Yet, as with many setbacks in technological innovation, opportunity was born. Instead of focusing artificial intelligence on the concept of neural networks, researchers began cultivating the fields of machine learning and statistical modeling that now define artificial intelligence research.
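To make "statistical modeling" of language concrete: the approaches that displaced brittle expert systems estimate probabilities from data rather than encode hand-written rules. A toy bigram language model (my illustration, not any specific system described in the article) looks like this:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word-pair frequencies, then turn them into conditional probabilities."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, curr in zip(tokens, tokens[1:]):
            counts[prev][curr] += 1
    # P(curr | prev) = count(prev, curr) / count(prev, anything)
    return {prev: {w: c / sum(nxt.values()) for w, c in nxt.items()}
            for prev, nxt in counts.items()}

model = train_bigram(["the cat sat", "the cat ran", "the dog sat"])
print(model["cat"]["sat"])  # 0.5: "cat" is followed by "sat" in half of its uses
```

Nothing here is hand-coded knowledge about cats or sitting; every number comes from counting the training data, which is exactly the shift the field made after the AI Winter.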

How Researchers Build The Future
Researchers like Lee have a pretty good idea of what the future is going to look like. He was the chair of the Computer Science department at Carnegie Mellon and the head of the Transformational Convergence Technology Office at DARPA before heading to Microsoft.

When Lee came to Microsoft, he brought with him the Four Quadrant model of technological research from DARPA. Organizations that employ the model are able to think big thoughts about innovation, scientific research and even beauty while justifying their investments in both dollars and time spent.

(http://readwrite.com/files/four_quadrant_research_model.jpg)  

Lee described to ReadWrite the purpose and examples of research being done in each quadrant.

Blue Sky
This is unconstrained by any business imperative or time scale. You can refer to this as "Blue Sky," or that is how I refer to it. That doesn't mean only theoretical research; there is applied work. At Microsoft Research we did all the foundational research on white spaces, Wi-Fi and dynamic spectrum use. No connection to any Microsoft business. Just exploring and deploying new modes of wireless research. We kind of view this as our mandate to have a deep understanding of what we are doing. This is also how we maintain our recruiting center, where we get a lot of great people.
Disruptive (Also Known As "Moon Shots")
Here is what we can do that is disruptive: expand world views and understandings of what is possible with computing. I call this our Disruptive quadrant. This is our mandate to invent. And so there is a lot of focus on this right now. Doors are wide open all over the company for us to invent things. There is a lot going on right here. These are often imagination-expanding demonstrations. About a year and a half ago we did this demo where my boss at the time [Microsoft Research founder] Rick Rashid was able to give a speech to a bunch of students in China, speaking in Mandarin in his own voice. That sort of demonstration is, as you mentioned, sort of like Watson. It doesn't necessarily break new ground in the science, but it is a demonstration that changes people's views of what computing can do today.
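The demo Lee describes chains three components: speech recognition, machine translation, and speech synthesis trained on the speaker's own voice. A stub sketch of that pipeline shape (every function name and the word table here are invented stand-ins for illustration, not Microsoft's actual API):

```python
def recognize_speech(audio):
    # Stage 1: speech-to-text. A real system runs a deep-neural-network
    # acoustic model over the audio; here we pretend it is already transcribed.
    return audio["transcript"]

def translate(text):
    # Stage 2: machine translation, stubbed with a tiny English-to-Mandarin
    # lookup table instead of a learned translation model.
    lexicon = {"hello": "你好", "world": "世界"}
    return " ".join(lexicon.get(word, word) for word in text.split())

def synthesize(text, voice_model):
    # Stage 3: text-to-speech using a model of the speaker's own voice;
    # stubbed as a labeled string rather than actual audio.
    return f"[{voice_model}] {text}"

audio = {"transcript": "hello world"}
print(synthesize(translate(recognize_speech(audio)), voice_model="speaker-voice"))
# [speaker-voice] 你好 世界
```

The point of the sketch is the composition: errors in stage 1 propagate into stages 2 and 3, which is why doing all three in real time, in a cloned voice, counted as an imagination-expanding demo.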
Rashid's demonstration of speech translation from English to Mandarin in real time, in his own voice, can be seen in this video.
Mission Focused
This is what I call our Problem Solving quadrant. A lot of what we do is solving hard problems. I mentioned the hospital before, that is one thing that we actually worked on with two major hospitals but a lot of what goes on here is really helping product teams in the company. This is very, very much like the Google model where researchers and product developers are really intensively working together.
Sustaining
Here is what I call our Sustaining quadrant, and this is kind of continuous improvement of what we do. There is a lot of focus right now on language technologies. So to pick one language technology, machine learning of language: we are searching for big breakthroughs there, but while we are doing that, every day we make English-to-Japanese translation work a little bit better, Russian-to-Croatian a little better. That is the kind of sustaining operation that requires real research minds.
Research In Language Technology & Artificial Intelligence  
Quote: The Babel Fish, from Douglas Adams' seminal The Hitchhiker's Guide To The Galaxy: (http://www.amazon.com/Hitchhikers-Guide-Galaxy-Douglas-Adams/dp/0345391802)

"The Babel fish," said The Hitchhiker's Guide to the Galaxy quietly, "is small, yellow and leech-like, and probably the oddest thing in the Universe. It feeds on brainwave energy received not from its own carrier but from those around it. It absorbs all unconscious mental frequencies from this brainwave energy to nourish itself with. It then excretes into the mind of its carrier a telepathic matrix formed by combining the conscious thought frequencies with nerve signals picked up from the speech centres of the brain which has supplied them. The practical upshot of all this is that if you stick a Babel fish in your ear you can instantly understand anything in any form of language. The speech patterns you actually hear decode the brainwave matrix which has been fed into your mind by your Babel fish.

Now it is such a bizarrely improbable coincidence that anything so mindbogglingly useful could have evolved purely by chance that some thinkers have chosen to see it as the final and clinching proof of the non-existence of God.
One of Microsoft's most important research initiatives right now has to do with language technology. Its ultimate goal is to create the universal translator, kind of like the Babel fish from The Hitchhiker's Guide To The Galaxy (see quote).

The maturation of language technology is the type of project that can inhabit all of the quadrants at once at this point in its evolution. The concept of the universal translator can be described as a Moon Shot, while continuing research into machine learning of human speech can be part of the navel-gazing Blue Sky quadrant. At the same time, Microsoft has a translator product on the market, so it needs to be reactive to what its competitors are doing (Mission Focused) while constantly improving upon it (Sustaining).

"In that upper-right quadrant there are some things that are really important right now. One is that there has been, there is right now for us, a resurgence of hope and optimism in being able to solve some of the longest-standing problems in core artificial intelligence. To get machines that see and hear and understand and reason at levels that approach or match human capabilities," Lee said.

"I think we are seeing that first in dealing with language. I think language is coming first because it is a little bit of a simpler problem but one that has commercial implications. So, that is moving really fast."

One irony of current language technology research and products is that they are based on deep neural networks. The leaders in speech technology, like Nuance (with its Dragon NaturallySpeaking products and personal assistants), Google and Microsoft, all base their language learning on the neural network approach, where audio recognition, matching, contextualization and understanding are done in the cloud.

So, the concept that was once abandoned by DARPA is now fueling the future of artificial intelligence. And the lessons that have been learned with language technology research can now be applied to other aspects of machine learning, pushing the field even further.

"The application of those ideas to computer vision, to finding patterns in signals from things you wear all day, to all the instrumentation and logging coming out of factories, to all the electronic health records that hospitals are working with. The applications for deep learning from all of that are pretty impressive," Lee said.

Lead Image: Microsoft Research Managing Director Peter Lee by Dan Rowinski

Source: How Researchers Map The Future Of Innovation (http://readwrite.com/2013/10/23/researchers-create-the-building-blocks-of-the-future)