Think of them as psychic computers: computers that know what you want to do before you start using them. No, I am not talking about wiring your brain up to a computer, but about simple interfaces that can predict user actions based on sensory patterns.
When you hold a TV remote control, what do you generally do? You point it towards the TV and start pressing buttons. Similarly, when you take pictures, you hold the camera horizontally with your finger on top. What if we had physical devices, resembling a mouse, that understand the way we grasp them? Then the computer could predict our actions and adapt to them.
Now imagine a different example: if you have ever used Google Maps, Google Earth, or Microsoft Virtual Earth, you know how counterintuitive it is to simply move the map around. What if you had a device that simulates a hand which can hold and drag the map? In other words, what if you held a device in your hand while an avatar of your hand appeared in the computer, and when you applied pressure to the device, the map reacted to it?
Sounds like science fiction? It shouldn’t! We already have the technology to build interfaces like that. In fact, there are two specific projects with exactly the abilities I described above. The first one is called Tango, and the second one is named Graspable.
So what is Tango?
Tango was created by Professor Paul G. Kry and Professor D. K. Pai. It is a user interface designed for 3D environments. The physical device looks like a ball and can measure contact pressures and acceleration. One can hold the ball, apply pressure, or move it in different directions, and the pressure and movements are translated to the computer as input.
Furthermore, because it senses contact pressures, it knows how it is being grasped. In other words, it knows whether you are holding the ball with one, two, or all of your fingers. Similarly, it knows when you drop it. This ability gives users a very special way to interact with computers.
Returning to the map example above, we could use this device to interact with maps. Since it detects hand movements and grasp, these can be translated into movements on the map. For example, you could hold the ball with two fingers and move it to the side, and in response the map on the screen would move accordingly. Similarly, if you rotated the ball, the map would rotate.
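To make the idea concrete, here is a minimal sketch of how grasp and motion readings might be turned into map commands. The sensor layout, function names, and thresholds are all invented for illustration; they are not Tango's actual API.

```python
# Hypothetical sketch: classify a grasp from pressure readings and
# map ball motion to a map command. All names and thresholds are
# assumptions for illustration, not Tango's real interface.

def count_fingers(pressures, threshold=0.1):
    """Count contact points whose pressure exceeds a threshold."""
    return sum(1 for p in pressures if p > threshold)

def map_action(pressures, dx, dy, rotation):
    """Translate a grasp plus motion into a map command."""
    fingers = count_fingers(pressures)
    if fingers == 0:
        return ("idle",)             # ball released: do nothing
    if abs(rotation) > 0.05:
        return ("rotate", rotation)  # twisting the ball rotates the map
    return ("pan", dx, dy)           # sliding the ball pans the map

# Example: two fingers pressing, ball slid to the right
print(map_action([0.4, 0.3, 0.0, 0.0], dx=5, dy=0, rotation=0.0))
```

The point of the sketch is the dispatch logic: the same physical motion means different things depending on how the ball is grasped, which is exactly what a 2D mouse cannot express.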
Perhaps one could argue that devices such as Microsoft Surface already provide very intuitive touch interfaces for maps, so what would be the advantage of this device? Its main advantage is its ability to react to pressure and grasping gestures. Also, I think it is very hard to manipulate 3D objects with a 2D interface. Tango provides a 3D interface; hence, it would be more intuitive.
Watch the device in action, and pass your own judgment:
Taking the idea of grasping objects further, MIT student Michael Bove created a device that understands how it is grasped and reacts to it. In other words, the device becomes a camera when you hold it horizontally and a remote control when you hold it upright, just like the example mentioned above.
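The mode switch described above can be sketched with a few lines of code. The axis convention and thresholds here are my own assumptions, not a description of Bove's actual design; the idea is simply that gravity, read from an accelerometer, reveals the device's orientation.

```python
# Hypothetical sketch: pick a device mode from accelerometer readings.
# Axis convention and thresholds are assumptions for illustration.

def device_mode(ax, ay, az):
    """Return 'camera' when held flat, 'remote' when held upright.

    (ax, ay, az) is gravity measured along the device's axes, in g.
    """
    # When the device lies flat, gravity falls mostly on the z axis;
    # when it is held upright, gravity falls mostly on the y axis.
    if abs(az) > abs(ay):
        return "camera"
    return "remote"

print(device_mode(0.0, 0.1, 0.98))  # lying flat
print(device_mode(0.0, 0.95, 0.2))  # held upright
```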
You can see the device in action here:
In addition to Tango and Graspable, we previously discussed another interface called Siftables. Siftables are small computers that can interact with each other; they are like blocks that can talk to one another (read more on Siftables).
These devices have great potential to change our interactions with computers. However, the challenge will be to combine them in a seamless manner. I think there is a need for one device with the functionality of Tango, Graspable, and Siftables. In other words, if we can combine the awareness of Siftables with the grasp detection of Graspable and Tango, we can create an interface that is truly intuitive and transformative. A device like that holds great promise, and I think the world is ready for it.
Tango images are courtesy of: