What We Need For Wearable Computing…
One of the most critical challenges the wearable computing industry will face is user interface design, as we’ve already explored in our ongoing post chronicling this emerging field. How does wearing something on your body change the nature of your relationship to the device? What new opportunities does it afford to shape the way we communicate with these new gadgets and with one another? How do we engineer more subtle input and output methods for passing information back and forth between the networked device, the cloud, and the physical world we’re navigating?
Some designers believe that, in an effort to make the technology disappear, the obvious next step is to use the human body itself as an interface, positing that as wearables evolve, they’ll need to go beyond the screen and move into more gestural, more “natural” modes of interaction. Yet design writer John Pavlus argues that your body does not want to be an interface:
The assumption driving these kinds of design speculations is that if you embed the interface–the control surface for a technology–into our own bodily envelope, that interface will “disappear”: the technology will cease to be a separate “thing” and simply become part of that envelope. The trouble is that unlike technology, your body isn’t something you “interface” with in the first place. You’re not a little homunculus “in” your body, “driving” it around, looking out Terminator-style “through” your eyes. Your body isn’t a tool for delivering your experience: it is your experience. Merging the body with a technological control surface doesn’t magically transform the act of manipulating that surface into bodily experience. I’m not a cyborg (yet) so I can’t be sure, but I suspect the effect is more the opposite: alienating you from the direct bodily experiences you already have by turning them into technological interfaces to be manipulated.
He makes a compelling argument, especially if you consider how the bodily gestures designed to control our technology might interfere with or be confused with, you know, our actual bodily gestures. So what does that mean for wearable UI? Is there a way to strike a middle ground that makes technology more seamless as we integrate it ever deeper into our daily lives, yet doesn’t alienate us from our bodies? We asked Hans Gerwitz, an Amsterdam-based design director and former design strategist at Frog Design, to weigh in:
What do you think the future of the screen is? Is it here to stay or is its time limited?
I take as a given that the glowing rectangle must be dissolved. Floating it in our field of vision, as with Glass, reduces the cost of accessing it but doesn't solve the problem of attention, as it still uses the narrow and critical resource of our visual focus. The industry is in love with transparent displays and has lost sight of the research. The concept of “glancing” is valid but also a slippery slope back into GUI and its virtual object presentation. To get real-life efficient, we need to get really ambient. We will have to experiment with other ways of communicating simple state via temperature, pressure, vibration, sound, or peripheral light without distracting. Not just for status communication or notification, but also as feedback for modes and quasimodes, which is already a challenge with today's GUI. Input will also have to evolve. The metaphor of object manipulation will only get us so far when "worn." We can add controls to the surface of our body, virtually or otherwise, but that doesn't go very far.
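To give a rough sense of what "getting ambient" could look like in practice, here is a minimal sketch, in Python, of routing simple state to non-visual channels by urgency rather than putting it on a screen. The channel names and thresholds are hypothetical and not tied to any real device API; it's an illustration of the idea, not an implementation of it.

```python
# Sketch: route simple state to ambient channels instead of a display.
# Channel names and urgency thresholds are hypothetical, for illustration only.

def notify(event, urgency):
    """Map an event to an ambient output channel based on urgency (0-1)."""
    if urgency < 0.3:
        return ("peripheral_light", "soft pulse")      # barely-there cue
    elif urgency < 0.7:
        return ("vibration", "short double buzz")      # noticeable, not demanding
    else:
        return ("sound", f"chime: {event}")            # interrupts attention

print(notify("message from a friend", 0.2))
print(notify("calendar: meeting in 5 minutes", 0.8))
```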
What about the oft-cited (and disputed) bodily interaction models outlined by Fjord’s Andy Goodman and Marco Righetto?
Andy and Marco's micro-gestures will be important for staying subtle, and aligning them with social expressions will be a fun challenge. Having a large enough library to work with will demand developing sensors that detect what I call "first-person gestures," like Myo and Mycestro. My dream smartwatch would even let me air-type on a virtual (chorded) keyboard.
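To make the chorded-keyboard idea concrete, here is a minimal sketch in Python of how simultaneous finger presses might be decoded into characters. The chord map is invented for illustration; it is not the layout of any real device.

```python
# Minimal sketch of chorded text entry: a combination of finger presses
# maps to a single character. The chord map is invented for illustration
# and is not based on any real device or layout.

CHORD_MAP = {
    frozenset(["index"]): "e",
    frozenset(["middle"]): "t",
    frozenset(["index", "middle"]): "a",
    frozenset(["index", "ring"]): "o",
    frozenset(["index", "middle", "ring"]): " ",
}

def decode_chord(fingers_down):
    """Return the character for a set of simultaneously pressed fingers,
    or None if the chord isn't in the map."""
    return CHORD_MAP.get(frozenset(fingers_down))

# Example: a stream of detected chords becomes text.
chords = [["index"], ["index", "middle"], ["middle"]]
print("".join(decode_chord(c) or "?" for c in chords))  # -> "eat"
```

The lookup, of course, is the easy part; the hard parts are sensing which fingers moved from a wrist- or finger-worn device, and the social acceptability of the gestures themselves.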
So is the idea here to develop more natural user interfaces? What are the steps towards making that possible?
NUI is a misnomer. There's little that's "natural" about today's interfaces, even if they are easy to learn. What is natural is our social interactions with other people. The progression of our interface with technology lies not simply in more direct wiring to our bodies (or nervous systems) but in building relationships with systems, like friends with intellectual superpowers.
Regarding your points about input above: who is doing interesting research around that?
I'm concerned that I don't have an answer to that question. The HCI community, for example SIGCHI, tends to focus on usability and making the techniques we already use more powerful or easier to learn. UIST is one of the few reliable sources of new interaction approaches, but even there you'll find that most input explorations are in reaction to emerging display approaches such as projection.
What is the biggest hurdle to achieving the kind of UI you feel is necessary for wearables? Is it a tech issue? Or a user adoption issue?
I think user adoption is easily influenced, and complaining that people "aren't ready" is akin to the U.S. GOP complaining that they lost an election due to "demographics." We design for the humans we have, and if they aren't willing to try, then it's probably too expensive or not as useful as we've convinced ourselves it is. It may sometimes happen that a social norm holds a technology back, but fashion is quite malleable and we live in technophilic times. You can't consider wearable tech without running into the energy issue: we definitely need advances in battery technology to realize the dreams we have for power or subtlety. But I don't think we've come close to exhausting the possibilities of today's technology, and there are very few devices we're already using that we just wish we didn't have to charge so often. So, really, I think we have a failure of imagination. There's a gulf between our staring-at-rectangles-while-pointing-or-typing reality and the unrealistic fiction of implant-something-in-my-brain. At least Google is pushing us into that gap!
I posted this on www.fastcolabs.com in May 2013 during week 2044.
For more, you should follow me on the fediverse.