Consider the tongue. It’s sensitive yet muscular, packed with taste buds and nerves, and without its acrobatic ability humans wouldn’t be able to eat or talk. It’s also our most versatile sense organ, and some computer engineers say it’s underused. Wicab, a Middleton (Wis.)-based company, has designed a small, square array of electrodes for the blind. When placed on the tongue like a lollipop, it turns the feed from a video camera into a pointillist pattern of tactile stimulation. The sensation is like sparkling water, or Pop Rocks candy, but after time and practice, blind users report the paradoxical sensation of seeing with their tongues.
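In rough outline, the translation Wicab performs can be sketched in a few lines of code: downsample each video frame to the resolution of the electrode array and drive each electrode’s pulse strength from the local brightness. The 20-by-20 grid and the intensity scale below are illustrative assumptions, not Wicab’s published specifications.

```python
import numpy as np

GRID = 20        # assumed electrode grid dimension (illustrative)
MAX_LEVEL = 255  # assumed maximum stimulation strength (illustrative)

def frame_to_electrodes(frame: np.ndarray) -> np.ndarray:
    """Downsample a grayscale video frame to one intensity per electrode."""
    h, w = frame.shape
    bh, bw = h // GRID, w // GRID
    # Crop so the frame divides evenly, then average brightness per block.
    blocks = frame[: bh * GRID, : bw * GRID].reshape(GRID, bh, GRID, bw)
    levels = blocks.mean(axis=(1, 3))
    # Brighter regions of the scene become stronger pulses on the tongue.
    return (levels / max(levels.max(), 1e-6) * MAX_LEVEL).astype(np.uint8)

# A synthetic 480x640 camera frame becomes a 20x20 stimulation pattern.
pattern = frame_to_electrodes(np.random.randint(0, 256, (480, 640)))
```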
Massachusetts Institute of Technology graduate student Gershon Dublon is trying to broaden Wicab’s idea. He’s made a cheap ($10 to $40, he estimates), bare-bones version of the device that can be connected to any set of sensors. He encourages fellow engineers to hook their tongues up to other inputs—microphones, pressure sensors, even a magnetometer, which would give a person a migratory bird’s unerring sense of direction. The device, which Dublon calls the Tongueduino, wouldn’t be just for the blind, or the deaf, but for anyone looking to augment his senses. “It would be useful,” he says, “for anyone who wants a better sense of direction.”
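A magnetometer hookup of the kind Dublon imagines could work along these lines: compute a compass heading from the sensor’s horizontal field components, then pulse the electrode column that corresponds to magnetic north. The eight-column grid and the axis conventions here are hypothetical, not the Tongueduino’s actual interface.

```python
import math

COLUMNS = 8  # hypothetical number of electrode columns on the array

def heading_to_column(mag_x: float, mag_y: float) -> int:
    """Map a magnetometer reading to the electrode column facing north."""
    # Axis and sign conventions depend on how the sensor is mounted.
    heading = math.degrees(math.atan2(mag_y, mag_x)) % 360
    return int(heading / 360 * COLUMNS) % COLUMNS

# A field pointing 45 degrees off-axis activates the second of eight columns.
print(heading_to_column(0.5, 0.5))  # -> 1
```

Pulse that column a few times a second and, the idea goes, the wearer eventually stops decoding the pattern and simply feels which way is north.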
Since the invention of personal computing three decades ago, how we interact with computers has remained about the same: monitor, keyboard, mouse. Monitors have gotten a bit bigger, keyboards are smaller, and mice are wireless, but today’s PCs at Best Buy (BBY) would still be familiar to a computer user from 1984. That’s begun to change, and today there’s an explosion of innovation in interface design, driven by huge strides in processing power, memory, and bandwidth. It started with the iPhone’s touchscreen and swipe controls. It picked up speed in 2010 with Microsoft’s (MSFT) Kinect, a camera and sensor array that lets Xbox players control their video game systems with gestures. Some of the most promising tech startups aren’t building social networks or e-commerce sites, but interfaces.
San Francisco startup Leap Motion has developed a small Kinect-like device to replace the mouse, so that pointing, dragging, dropping, and shaping forms on a screen will be something you do with your hands in the space before your monitor, like a sorcerer. A Canadian company called Thalmic Labs has developed a gesture control cuff, due out late this year, that fits around the forearm like a sweatband and directly senses the firing of the muscles, obviating the need for cameras. Jinha Lee, an MIT graduate student, is creating buzz with a 3D virtual reality display that users reach into with their hands to manipulate files like actual objects. And the much-anticipated Google (GOOG) Glass packs a smartphone’s basic communication, search, and navigation functions into a display that rests in its wearer’s peripheral vision.
Some of these new technologies are intuitive, others are bizarre, but as computers find their way into everything from car dashboards to kitchen appliances, there’s greater need to control them more easily—and to make sense of their data without drowning in it.
Among the visions of computing’s future, Leap Motion’s is likely closest to market. On Feb. 27 the company announced a price ($79.99) and ship date (May 13) for its cigarette-lighter-size sensor. The Leap, as its creators call it, contains three tiny cameras that track the movements of a user’s fingers and is meant to sit just in front of a computer monitor. Late last year the company sent sample devices to 12,000 developers who applied to write software for it, everything from spaceship video games to a “super-secret handshake” security app that recognizes a computer user by the shape of his hand, rendering passwords unnecessary. “With the Leap it goes far beyond novelty,” says Nicholas Vidovich, a software engineer at Columbus (Ohio)-based technology researcher Battelle who helped design the handshake app. Battelle is also working on hospital applications for the Leap, including software that would allow a surgeon to control a computer mid-operation without having to touch it with her bloody gloves.
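Battelle hasn’t published how the handshake app works, but the underlying idea reduces to template matching: derive a feature vector from the tracked hand, say relative finger lengths, and compare it against a template enrolled ahead of time. The features, numbers, and tolerance below are invented for illustration, and the sketch doesn’t use the actual Leap software kit.

```python
def normalize(lengths):
    """Scale finger lengths by overall hand size so features are size-invariant."""
    total = sum(lengths)
    return [l / total for l in lengths]

def hand_matches(template, candidate, tolerance=0.02):
    """Accept the hand if every length ratio is close to the enrolled template."""
    return all(abs(t - c) <= tolerance
               for t, c in zip(normalize(template), normalize(candidate)))

# Enroll once, then unlock by holding the same hand over the sensor.
enrolled = [55.0, 70.0, 78.0, 72.0, 58.0]  # thumb-to-pinky lengths, in mm
print(hand_matches(enrolled, [54.0, 71.0, 77.5, 72.5, 58.5]))  # -> True
```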
Much of the work on interface design is, like Leap Motion’s, concerned with the input problem: the realization that a keyboard and mouse aren’t the best ways to control a computer if you’re working in three dimensions, or if the computer is minuscule. A few researchers and companies, though, are also looking at the output end. As they point out, the wealth of data computers can provide is useless if we can’t absorb it, or if it’s so distracting that we can’t do anything else.
One of the leaders in this field is Ambient Devices. The Cambridge (Mass.)-based company’s simple technologies rely on pre-attentive processing, the ability to pick up on things beyond the margins of conscious attention. One of Ambient’s signature products, the Energy Orb, glows red, yellow, or green as electricity prices rise or fall throughout the day, telling consumers when they might want to reset the thermostat. The orb is meant to be sensed more than read, the way that, sitting at a window, we’re unconsciously aware of whether it’s cloudy out or clear. Google Glass adopts a similar principle: The information display in the high-tech spectacles is just at the edge of the user’s field of view. “The trajectory of available information will grow exponentially over the next couple years,” says Ambient Chief Executive Officer Pritesh Gandhi. “Having it be harnessed in a way that allows you to make the right decision at the right time and place, that’s the key.”
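The orb’s internal logic is deliberately trivial, something on the order of the threshold mapping below; the price cutoffs are invented for illustration, since Ambient keys the real device to live electricity prices.

```python
def orb_color(price_cents_per_kwh: float) -> str:
    """Reduce a live electricity price to a single glanceable color."""
    if price_cents_per_kwh < 10:   # illustrative cutoffs, not Ambient's
        return "green"             # cheap: run the dishwasher now
    if price_cents_per_kwh < 20:
        return "yellow"            # typical: nothing to do
    return "red"                   # peak pricing: ease off the thermostat

print(orb_color(23.5))  # -> "red" on a peak-demand afternoon
```

The point is not the code but the compression: a stream of numbers becomes a single bit of color that the eye absorbs without interrupting whatever else its owner is doing.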
The bottom line: After decades of stagnation there’s a wave of new computer interfaces in development, driven partly by strides in processing power.