Publication Name: EFY Times
Date: 10th September, 2013
“Gesture Interfaces would act as a catalyst to other modes of interaction, rather than being the sole mode of interaction with the product.”
Satish Patil is Chief Designer – User Experience, Tata Elxsi. With over 16 years of experience in creative design, he is responsible for crafting consumer insights and strategies for many global clients in the technology and consumer domains for new product development, as well as for developing UX solutions across entertainment/media, communication and automotive platforms. He speaks to Janani Gopalakrishnan Vikram about the importance of the user interface in making communication with devices more natural, in an era that promises a tech-filled environment that can sometimes be intimidating.
We are surrounded by technology, and indeed it would be overwhelming if we had to consciously interact with all the devices around us. The ‘Internet of Things’ conceptualizes an environment where all objects are connected to the Internet, and are visibly or invisibly collecting information, watching what we do and helping us. Many of these devices might, of course, expect some kind of input or response from us. But I, for one, would definitely not like to keep typing or swiping on devices all the time! How does gesture-enabled technology help realize the IoT while still being user-friendly? Could you explain with an example?
The last 50-60 years have seen rapid changes in the way we interact with devices and systems. We have come a long way from the days of DOS computers, thanks to developments in hardware and software technologies. The way we interact with devices itself is changing due to advances in hardware. Input technologies have evolved from keyboards to touchscreen devices to voice, gestures and literally mind-controlled devices. Technology is in a constant state of flux, creating new use-case scenarios and experiences that map human actions and thought processes ever more closely, making devices “more natural”.
For instance, the “text entry strings” used as commands earlier were much like the rudimentary languages early humans learnt in order to communicate with each other. The language evolved further with the development of the GUI (graphical user interface), becoming much richer with its own iconography that visually mimicked the real world, be it the icon of a home or a dustbin, making it even easier to relate to. Touchscreen interfaces took it to the next level, wherein digital interfaces mimicked physical actions like pressing a button. It continues to evolve, and currently voice and gesture interfaces are being explored, which is making interaction with devices almost similar to interacting with humans. For example, asking your phone to dial a friend.
Gesture interfaces that are being explored currently are taking the experience to the next level in terms of these behaviors, making interactions with products even more ‘human’. Earlier, you could switch off a bulb by just waving your hands, or automatically switch on a basin tap. Now, you can browse channels on a TV by simply waving your remote.
Much like the theory of evolution, products are evolving to embrace gestures as a way to interact within their own spheres of activity. Just look at ‘shake’ as a gesture: it has been used in cell phones to effectively handle call functions, or to select the next song from the playlist.
While generic consumer electronics will find their own usages and applications, gestures as a mode of interaction will be used far more effectively in certain special circumstances. Automotive interfaces, for instance, where you need to constantly look ahead at the road, are one area where gestures can effectively reduce the cognitive load on the driver. Similarly, gesture interfaces will increasingly play a vital role in making products universally usable by differently abled people.
Do you have a gesture experience ‘core’ that you customize for clients? If so, could you explain the basic components of this core technology? How is it different from the other gesture-enabled UIs available/ being proposed these days?
As an organization engaged in product design and development, we have all-round experience of building interfaces, hardware and software. However, each project is based on its own domain, form factor and use-case scenarios, which might require different “core technology” elements. We have worked on many projects designing gesture-based interfaces; however, the technology is currently being explored only in niche products. I feel gesture technology will be used widely across products in the near future.
As far as the underlying technology and the basic components are concerned, it really depends on the product. The form factors and use-case scenarios vary from product to product. In mobile phones, for instance, the use of accelerometers has become common. Accelerometers can detect the movement and acceleration of a device, which can then be mapped to particular inputs.
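As a rough illustration of how accelerometer readings can be mapped to an input, the sketch below detects a ‘shake’ by thresholding the magnitude of acceleration. This is a hypothetical example, not Tata Elxsi’s implementation; the threshold and peak-count values are illustrative assumptions.

```python
# Hypothetical sketch: mapping raw accelerometer samples to a "shake"
# gesture by thresholding acceleration magnitude. The threshold and
# debounce values below are illustrative assumptions, not a product spec.
import math

SHAKE_THRESHOLD = 18.0   # m/s^2; well above gravity (~9.8) to avoid false triggers
MIN_PEAKS = 3            # distinct high-magnitude peaks required to count as a shake

def detect_shake(samples):
    """samples: list of (x, y, z) accelerometer readings in m/s^2.
    Returns True if enough high-magnitude peaks occur."""
    peaks = 0
    above = False
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude > SHAKE_THRESHOLD and not above:
            peaks += 1          # count each threshold crossing once (simple debounce)
            above = True
        elif magnitude <= SHAKE_THRESHOLD:
            above = False
    return peaks >= MIN_PEAKS

# A phone at rest reads roughly (0, 0, 9.8); vigorous shaking produces
# alternating high-magnitude spikes.
rest = [(0.0, 0.1, 9.8)] * 20
shake = rest + [(25.0, 0, 9.8), (0, 0, 9.8), (-24.0, 0, 9.8),
                (0, 0, 9.8), (26.0, 0, 9.8)]
print(detect_shake(rest))   # False
print(detect_shake(shake))  # True
```

Once a shake is detected, the application layer is free to bind it to any action, such as skipping to the next song.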
On the other hand, in cars, TVs or, for that matter, large interactive displays, depth-aware cameras and stereo cameras are deployed to detect gestures and translate them into meaningful interactions. Each of these hardware options has its own constraints for now, e.g., the visual range of the cameras; however, as the hardware improves, the accuracy, efficacy and reach of these interfaces will get much better.
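To make the camera case concrete: in a real product, the camera SDK tracks the hand per frame; the application then only has to classify the trajectory. The sketch below is a minimal, assumed illustration of that last step, classifying a swipe from the net horizontal displacement of synthetic hand centroids; the pixel thresholds are made up for illustration.

```python
# Illustrative sketch: once a depth-aware or stereo camera has tracked a
# hand's position per frame (done by the camera SDK in real products, and
# replaced here by synthetic (x, y) centroids in pixels), a swipe can be
# classified from net displacement. Thresholds are assumptions.

SWIPE_MIN_DX = 120   # pixels of net horizontal travel required
MAX_DY = 60          # reject mostly-diagonal or vertical motion

def classify_swipe(track):
    """track: list of (x, y) hand centroids, one per frame.
    Returns 'swipe_left', 'swipe_right', or None."""
    if len(track) < 2:
        return None
    dx = track[-1][0] - track[0][0]
    dy = abs(track[-1][1] - track[0][1])
    if dy > MAX_DY or abs(dx) < SWIPE_MIN_DX:
        return None
    return "swipe_right" if dx > 0 else "swipe_left"

# Hand moving left to right across the frame, e.g. to change a TV channel.
frames = [(100, 240), (160, 238), (230, 242), (310, 241)]
print(classify_swipe(frames))        # swipe_right
print(classify_swipe(frames[::-1]))  # swipe_left
```

A production system would of course smooth the track, handle multiple hands and gate on depth, but the mapping from trajectory to interaction follows the same shape.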
Can you tell us about the gesture-based automotive HMI recently developed by you for a client?
We have been supporting multiple clients in realizing gesture-based HMIs built on their technologies/hardware. Typical automotive HMIs are always “multi-modal”, employing multiple modes of interaction such as touch, voice and gestures in tandem with physical controls.
For automotive clients, we have helped define gesture interfaces by devising a meaningful repository of gestures to accomplish the functionalities in their cars. For instance, we have developed a gesture-based HMI that allows a driver to browse through a playlist or address book and play a song or make a call.
Additionally, creating the visual design for an application based on gesture interfaces has its own challenges. It is important to design the screens and controls, in terms of their layout and visual design elements, to be “in sync” with the gestures and their output. Also, in dual interaction modes, wherein functionalities can be handled by both touch and gestures, it is challenging to design the interfaces so that users are aware of the interaction modes and the design itself supports touch and gesture behavior separately, each in its own way.
What are the other exciting applications of gesture interaction?
Gesture interaction can be deployed in various walks of daily life, be it filling a glass of water through a water filter or controlling your kitchen appliances. In large public spaces such as malls, airports and museums, gesture interfaces are being used to let people navigate large display systems.
At the other end, while gestures have been used for gaming through consoles for a while now, there are increasingly many applications and products that can be used for fitness and exercise routines.
Many products are being developed for the elderly as well as differently abled people so that they can use products effectively. Gesture-based technology is also increasingly finding application in healthcare products and services for training and rehabilitation.
The main aim of gesture-based input is to make interaction with a device more natural. However, I guess it would not be so natural if the output always came as text on a screen. So, what do you think are the possible means of making the response from the device equally natural?
Yes, the output of gesture interaction need not be limited to a change of status or a confirmation of action on the screen. As I mentioned above, in many cases the gestures deployed will lead to direct “actions”, with or without a notification on screen. In mobile phones, for example, simply shaking the phone can change or play the next song. In cars, a simple hand gesture near the AC panel can change the temperature settings. And some console-based applications can virtually act as a “gym” for you, wherein gesture-based interactions enable you to do extensive physical activity through multiple stimuli like sports activities, games and so on.
Having said this, it is also important to note that, like some of the other interface techniques, gesture-based interfaces are themselves evolving to provide a complete user experience. They would act as a catalyst to other modes of interaction, rather than being the sole mode of interaction with the product.