Intel Israel President Mooly Eden tells Julie Mandoyan how perceptual computing will enable machines to communicate like humans, a vision first seen in futuristic novels and films like Star Wars. Eden says we won’t have to learn computer language any more. Machines will have to understand us.
As well as being Intel Israel President, you are Intel’s General Manager of Perceptual Computing. What does that mean?
Perceptual computing is the full set of technologies that will eventually enable us to communicate with machines, just as humans communicate with each other. At Intel we are adding senses to the computer’s brain. Providing eyes, ears, voice, touch, emotion and context makes for a life-like experience. In simpler language, we complement the microprocessor – the computer’s brain – with sensors so we are able to communicate with it. I don’t want you to have to learn the language of the computer. I want the computer to learn your language.
How did you come up with the idea of perceptual computing?
Science fiction writers invented perceptual computing. The novelist Isaac Asimov wrote about Man-Machine Interfaces and in films such as Star Trek, Avatar and Star Wars, when a human makes a gesture the machine knows what it means. When you see what we are developing you will say, “I’ve seen it before”, and you’d be right. It was in the movies. People have always wanted natural interactions with computers, but there was no way to achieve it. Finally, we have enough computing power. All the things we dreamed about can be done. We are taking the fiction out of science fiction and making it science.
When did you start research?
Three years ago marked a breakthrough for perceptual computing. We made a video of what we wanted it to look like. We did not start with the technology, we started with a dream. We knew what we wanted to do and then we developed the technology. Intel’s CEO allocated us a budget of US$50 million and 50 people to work on the project. We showed some prototypes and the rest is history. Now, we are at the research and development stage.
By the end of 2014, Intel will bring out its first perceptual computing consumer product. The Intel RealSense 3D Camera will be integrated into laptops.
Yes, if we can manage to drive prices down and develop enough compelling applications, we believe such technologies will become pervasive. Around 67% of our brain power is dedicated to analyzing what we see. If we are used to seeing in 3D, then computers should be, too. Other examples of compelling applications we are working on include 3D printing and augmented reality. These are the ones people want. How do I know? Because I do a lot of market testing. We believe we have game-changing products and the market will prove it.
How do you see the future of computers?
I say to my teams, “The very difficult things we should be able to do immediately; the impossible might take us a while”. We are on the verge of developing disruptive technology and, in a few years, you will look back and be surprised at how we used computers today. Let me illustrate my point. Today, keyboards look normal to us, but they are not intuitive or simple to use. Have you ever worked with the old DOS systems of the eighties and nineties? You needed to be a magician to use a computer. Even today, it is not simple. If you try to connect to WiFi at an airport, it might still be complicated. Today, we say computers are user-friendly, but that’s only because you need a lot of friends to use them. I want computers to really be user-friendly. By the way, we say it is about the computer now. Tomorrow, it will be about everything.
Everything? Is that possible?
Yes, every device. We should be able to communicate with machines like our stereo equipment, coffee machine, car, security system and TV. In theory, it is simple: you need to learn three letters. It took us two years to figure it out. We call it NII – Natural, Intuitive and Immersive.
Natural means that when you interact with a computer, you communicate the way 4.5 million years of human evolution prepared you to. When you communicate with me, I do not see any controller or keyboard. You use voice content and tone, eyes and gestures. I want the computer to look at your facial expressions and understand them. This means you communicate with a computer the same way you do with me. Why do you need a keyboard? It’s only there because there was no automatic speech recognition (ASR) to translate spoken words into text. We are solving this problem at Intel.
The second factor is Intuitive. When you communicate with a device, I want you to do it intuitively, without instructions. Before you interviewed me, you probably did some background research. But you do not need instructions on how to speak with Mooly. You just speak.
The third element is Immersive. Soon, you will not know the boundary between the real and the virtual worlds. It will start with something that is almost trivial: memory. Our memories suck. People greet me and I cannot recall where I met them. If I could get a memory extension for a few dollars, I could remember everything about them. It will be so simple and will be integrated into the glasses I wear. In our labs, this is already being developed. Immersive technology will be an extension of one’s body and will help us live better lives.
Intel’s camera integrates speech recognition, close-range hand and finger tracking, face analysis, augmented reality, and background subtraction into its perceptual computing devices. Which stage of perceptual computing are we at?
As Churchill put it “It is not the end, it is not the beginning of the end, it is the end of the beginning”. In terms of speech recognition, we have made huge progress. Obviously, it is not yet at the level of you understanding me. Face recognition is going to evolve tremendously in the next 10 years. I believe a first big step will happen this year. Many companies are experimenting in this area.
What will reality look like?
We see huge changes in the way we are working with machines. It will not take a long time. Let’s take the case of coffee machines. Whenever I bring my wife coffee, her response is “you are incapable of making good coffee”. Why can’t the computer make her coffee? It will know the recipe. I’ll just tell it to make Rita’s coffee. This is a simple example of what reality will look like. The rest is complicated technology involving such things as gestures, eyes and augmented reality.
How long before we interact naturally with all objects?
Part of it you will see this year and next. It is a dream, but I believe in the next 10 years, we will see holograms – the optimal solution for what I want from perceptual. I do not want to fly for a meeting in the US. I want my hologram to be there. As I said, you have already seen it in science fiction movies. Once we have machines doing things for us, such as making our coffee, we will live our lives much more efficiently and have more time for other things. You don’t want to waste time on trivial things.
How will business operations and the lives of CEOs be affected?
With any revolution, we cannot predict the full extent. But I believe perceptual computing will revolutionize the way we interact with computers and with each other. It will revolutionize social networking and the education system. It will also revolutionize the way we do conferences. If perceptual computing becomes what we expect, it will affect all areas of our lives. You will be able to talk to your car, which will know your mood and where you are. I am not sure I know what the end result is.
Which partners do you work with?
Intel can provide the infrastructure, but we can’t deliver everything. We must work with an ecosystem of partners. Notably, we are working with the biggest 3D company in the world: 3D Systems. We are also working with American scholars to develop educational uses of perceptual computing. And we are working with Microsoft to deliver better video conferencing.
Which technologies does perceptual computing rely on?
Each technology, such as speech recognition or depth cameras, has been researched by industry and academics for 30 or 40 years. For depth cameras, we use two cameras and we correlate the pictures – just as our brain does. There are also things machines can use that humans cannot, such as infrared and other invisible light. We can have sensors to detect all that, and the computer can then figure out the depth. Perceptual computing is one of the most complicated projects of all. The technology is state-of-the-art. But our challenge is to remove the complexity for the user.
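Eden’s description of correlating the pictures from two cameras is classic stereo triangulation. The sketch below illustrates the idea on a single synthetic scanline: a brute-force block matcher finds each pixel’s horizontal shift (disparity) between the two views, and the pinhole relation depth = f·B/d converts disparity into distance. All function names and parameter values here are illustrative, not Intel’s implementation.

```python
import numpy as np

def match_scanline(left, right, window=2, max_disp=8):
    """For each pixel in the left scanline, try horizontal shifts of the
    right scanline and keep the shift with the lowest sum of absolute
    differences (SAD) over a small window -- the disparity."""
    n = len(left)
    disp = np.zeros(n, dtype=int)
    for x in range(n):
        lo, hi = max(0, x - window), min(n, x + window + 1)
        patch = left[lo:hi]
        best_cost, best_d = np.inf, 0
        for d in range(min(max_disp, lo) + 1):  # shift right view by d pixels
            cost = np.abs(patch - right[lo - d:hi - d]).sum()
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp

def depth_from_disparity(disparity, focal_length_px, baseline_m):
    """Pinhole stereo relation: depth = f * B / d (infinite for d = 0)."""
    disparity = np.asarray(disparity, dtype=float)
    return np.where(disparity > 0,
                    focal_length_px * baseline_m / np.maximum(disparity, 1e-9),
                    np.inf)

# Synthetic test scene: the left view sees everything shifted 3 px
# relative to the right view, i.e. a true disparity of 3.
rng = np.random.default_rng(0)
right = rng.integers(0, 255, 64).astype(float)
left = np.empty_like(right)
left[3:] = right[:-3]
left[:3] = right[:3]  # pad the image border
```

Away from the border, the matcher should recover a disparity of 3 almost everywhere; with an assumed focal length of 100 px and a 6 cm baseline, that disparity would map to a depth of 2 m. Real systems like the RealSense camera add subpixel refinement, smoothness constraints and infrared projection on top of this basic principle.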