Working on MIVOR got us thinking more broadly about Natural Language Processing and machine learning: teaching a computer how to learn by itself. Doing so is effectively engineering intelligence, making computers more like humans. The applications of such programs are vast, and interest in this field is growing rapidly.
These are enormously complex programs, written by some of the cleverest engineers, yet the tasks they tackle are ones that human brains find effortless. Where computers excel is scale: they can process volumes of data that no human ever could.
Siri uses NLP to comprehend your language and control your iPhone. Google uses machine learning to decide if a web page is relevant to a search query. Spotify uses it to provide you music recommendations, and Amazon for product recommendations. Facebook uses it to automatically recognize and tag friends’ faces in uploaded photos. There are countless more examples.
Computers are rapidly becoming more like brains.
So, why are we interested in making computers more like brains? Let’s first consider how we’ve long thought about computers. They were originally tools, with a promise to make us more productive and effective humans. They do this by automating tasks for us: we build and use programs to create, organize and communicate more effectively. We have collectively created millions of programs to help us perform various tasks.
Computers and the internet have become the fundamental infrastructure of the 21st century, and likely centuries to come. We rely on enormously complex systems to run our modern economy. These programs continue to become bigger, faster, more complex, and smarter.
Even so, computers don’t come close to being truly intelligent. The secret lies in the brain. And it’s mysterious.
Cognitive psychology compares the human brain to a computer. The comparison is useful because computers are systems we understand, and brains are not. Brains compute very complex things very quickly, and we still don’t really understand how they work, but the results of brain research will help us reverse-engineer them into computer systems.
There are various things that brains do really well, and that computers don’t. As we discovered with MIVOR, Natural Language Processing is one of them.
To give an example: when creating MIVOR, we needed to consider how to best interpret any spoken phrase and respond appropriately. At first we tried to keep it simple by asking questions that implied a specific range of responses, like yes or no, or variations of good and bad. This worked great until someone spoke a word that wasn’t in our list of predefined expected responses, or a more complicated phrasing like “I’m not so great.” We know “great” is positive, but this is not a positive phrase.
MIVOR now uses a fairly simple sentiment analysis algorithm that looks up sentiment values for thousands of words, and tries to break down the phrase composition to understand negation, all in the split second after you speak to it. In doing so, we know generally whether you said something positive or negative, and we can continue the conversation appropriately. But like many others in the field, we are merely mimicking intelligence. This sentiment analysis task is one of millions that an intelligent being can perform.
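The approach described above can be sketched in a few lines. This is a minimal illustration, not MIVOR’s actual code: the tiny lexicon and negator list here are placeholders for the thousands of scored words a real system would load from a published sentiment lexicon.

```python
# Toy lexicon-based sentiment scoring with simple negation handling.
# LEXICON and NEGATORS are illustrative stand-ins for real word lists.
LEXICON = {"great": 2, "good": 1, "happy": 2, "bad": -1, "terrible": -2}
NEGATORS = {"not", "no", "never"}

def sentiment(phrase: str) -> int:
    """Sum word scores; a negator flips the sign of the next scored word."""
    score = 0
    negate = False
    for token in phrase.lower().split():
        word = token.strip(".,!?")
        if word in NEGATORS:
            negate = True
            continue
        value = LEXICON.get(word, 0)
        if value and negate:
            value = -value
            negate = False
        score += value
    return score

print(sentiment("that was great"))    # positive score
print(sentiment("I'm not so great"))  # negative score
```

Even this crude sketch handles the phrase that broke our keyword approach: “I’m not so great” scores negative because “not” flips the positive value of “great.”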
Our new interest in this field took us to the recent EMNLP conference in Seattle. Among many great presentations, a few excited us in particular:
Margaret Mitchell from the Microsoft Research Lab spoke about her work on sentiment analysis, particularly understanding the sentiment around particular named entities (say, “Seahawks”) through data collected from networks like Twitter. The applications are many: we can map the sentiment about a person or brand over time, helping us gauge the success of public appearances and advertisements, or even informing stock market decisions. It’s very likely that autonomous stock-trading robots have been doing this for years, sometimes with amusing results: reportedly, whenever Anne Hathaway is in the headlines, Berkshire Hathaway’s stock price goes up.
Richard Socher spoke about his recent work with colleagues at Stanford on a new technique for sentiment analysis, which uses machine learning to push the state of the art in this field by over five percentage points in accuracy.
Andrew Ng, a Computer Science professor at Stanford and co-creator of the popular Coursera online classroom, gave an invited keynote. He spoke about Coursera and how they are using machine learning and NLP to make computers more like real teachers. With tens of thousands of students per course, Coursera automatically grades submissions, from multiple choice questions to math and programming problems. It remembers when you mess up, and makes sure you understand before moving forward. There’s a peer-review system for more complicated grading tasks, but Andrew proposed that eventually they’ll be able to automatically grade, and give useful feedback on, submissions even as complex as essays.
We listened to talks on topics like noisy language normalization (e.g. turning “wanna say happy bday 2 ma boi” into “Want to say Happy Birthday to my boy”); measuring ideological proportions in political speeches; teaching a computer how to cohesively put together a rap verse; techniques for improving discourse analysis (e.g. what “it”, “this issue” or who “he” refers back to in a text); and many more talks on comprehension and text/speech generation techniques.
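The simplest version of that normalization task is just a lookup table mapping noisy tokens to their standard forms. This toy sketch is purely illustrative; real normalization systems combine such tables with statistical models trained on large corpora.

```python
# Toy dictionary-based noisy-text normalization.
# The NORMALIZE table is an illustrative stand-in for a learned mapping.
NORMALIZE = {
    "wanna": "want to",
    "bday": "birthday",
    "2": "to",
    "ma": "my",
    "boi": "boy",
}

def normalize(text: str) -> str:
    """Replace each known noisy token with its standard form."""
    return " ".join(NORMALIZE.get(token, token) for token in text.split())

print(normalize("wanna say happy bday 2 ma boi"))
# want to say happy birthday to my boy
```

A lookup table obviously breaks down on tokens it hasn’t seen, and on ambiguous ones like “2”, which is exactly why the research presented at EMNLP goes well beyond this approach.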
These computer scientists build ‘models’: programs that interpret and comprehend information and remember the results of the algorithms they apply; in other words, programs that learn. It’s clear that we’re very far away from the “singularity” of a self-aware, sentient artificial intelligence, but each of these models represents a small step towards it.
Machine learning models have countless real-world applications, as we’ve seen already. Particularly within digital platforms and services, discovering new ways to apply machine learning and NLP will make these interactions seem gradually more human.
Technologically, we’re already at a point where we can improve interactions with apps, making the content and actions presented to the user more informed by context and by what we’ve learned from the user’s previous interactions.
Google Now is a good example of this. It tries to give you the information you likely want ‘now’: the weather when you’re waking up in the morning; traffic information for your route to work; the score of your favorite football team’s match. This line of thinking should become the norm. I want apps to know more about me and my habits, so they can automate the things I do repetitively. Computers should work with me rather than as lifeless tools.
We as product creators need to be actively thinking about how to apply machine learning to our products, designing the models that will learn how to serve users smarter content. We need to ask certain questions in our process. What type of contextual information is relevant to particular interactions, and what can we learn from them? How can our app collect and learn from data to improve the interaction for this user next time? Humans naturally and unconsciously do this all day, every day.
This is an interesting design process, a straight-up introspection into our own human techniques for perceiving and learning. We are applying the subtle ways we think (perceive, comprehend, reason, rationalize) to interaction design. It’s something we already do to make assumptions about users in every design task, but we can take this a step further. Consider: if your application were human, what would she learn from her interactions with the user, and how would she respond? We are not limited by technology in making this a reality; we simply need to make smarter design choices.
Filed under: Perspective