Natural User Interface


In computing, a natural user interface (NUI) is one that is effectively invisible to its users, or that becomes invisible through successive learned interactions, and that is based on nature or natural elements (e.g. physics, historically known as natural philosophy).

The word natural is used because most computer interfaces rely on artificial control devices whose operation has to be learned. A NUI depends on a user being able to quickly transition from novice to expert. While the interface still requires learning, that learning is eased through design that gives the user the feeling of being instantly and continuously successful. Thus, ‘natural’ refers to a goal in the user experience: that interacting with the technology comes to feel natural.

One strategy to achieve natural interaction with technology is the use of a ‘reality user interface’ (RUI), also known as ‘reality-based interface’ (RBI) methods. One example of an RUI strategy is to use a wearable computer to render real-world objects ‘clickable,’ so that the wearer can ‘click’ on any everyday object to make it function as a hyperlink, thus merging cyberspace and the real world. One example of a strategy for designing a NUI not based in RBI is the strict limiting of functionality and customization, so that users have very little to learn in the operation of a device (e.g. ATMs). Provided that the default capabilities match the user’s goals, the interface is effortless to use. This is an overarching design strategy in Apple’s iOS. Because this design coincides with a direct-touch display, non-designers commonly misattribute the effortlessness of interacting with the device to the multi-touch display, rather than to the design of the software, where it actually resides.

In the 1970s, ’80s and ’90s, Steve Mann at the University of Toronto developed a number of user-interface strategies using natural interaction with the real world as an alternative to a command-line interface (CLI) or graphical user interface (GUI). Mann referred to this work as ‘Natural User Interfaces,’ ‘Direct User Interfaces,’ and ‘Metaphor-Free Computing.’ Mann’s EyeTap (an early wearable camera and heads-up display) embodies an example of a natural user interface. Mann’s use of the word ‘natural’ refers both to action that comes naturally to human users and to the use of nature itself. A good example of a NUI in both these senses is the hydraulophone (a musical instrument played by direct physical contact with water), especially when it is used as an input device, in which touching a natural element (water) becomes a way of inputting data. More generally, a class of musical instruments called ‘physiphones,’ so named from the Greek words ‘physika’ (‘nature’) and ‘phone’ (‘sound’), has also been proposed as ‘nature-based user interfaces.’

While the NUI has been called the next evolutionary phase following the shift from the CLI to the GUI, this is an over-simplification; NUIs necessarily include visual elements, and thus graphical user interfaces. A more accurate description of this concept is a transition from WIMP (windows, icons, menus, pointer) to NUI. In a CLI, users had to learn an artificial means of input (the keyboard) and a series of codified commands with strict syntax and a limited range of responses. Then, when the mouse enabled the GUI, users could more easily learn the mouse movements and actions, and were able to explore the interface much more freely. The GUI relied on metaphors for interacting with on-screen content or objects. The ‘desktop’ and ‘drag,’ for example, are metaphors for a visual interface that was ultimately translated back into the strict codified language of the computer.

In 2010, Microsoft’s Bill Buxton, a pioneer in the field of human–computer interaction, reiterated the importance of the NUI within Microsoft Corporation with a video discussing technologies that could be used in creating a NUI, and its future potential. When Buxton was asked about the iPhone’s interface, he responded, ‘Multi-touch technologies have a long history. To put it in perspective, the original work undertaken by my team was done in 1984, the same year that the first Macintosh computer was released, and we were not the first.’ Indeed, the iPhone interface retains many elements of WIMP, so only limited parts of its design can be classified as achieving a NUI. Multi-touch is a technology that could enable a natural user interface; however, most UI toolkits used to construct interfaces for such technology are traditional GUI toolkits.

One example of a NUI is the work done by Jefferson Han at NYU on multi-touch interfaces. In a demonstration at TED in 2006, he showed a variety of means of interacting with on-screen content using both direct manipulations and gestures. For example, to shape an on-screen glutinous mass, Han literally ‘pinches,’ prods, and pokes it with his fingers. In a GUI design application, by contrast, a user would work through the metaphor of ‘tools’: selecting a prod tool, for example, or selecting the two parts of the mass to which a ‘pinch’ action should apply. Han showed that user interaction could be much more intuitive by doing away with the interaction devices we are used to and replacing them with a screen capable of detecting a much wider range of human actions and gestures. Of course, this allows only for a limited set of interactions that map neatly onto physical manipulation (RBI). Extending the capabilities of the software beyond physical actions requires significantly more design work.
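The direct-manipulation ‘pinch’ described above reduces, at the software level, to tracking the distance between two touch points over time. The following is a minimal sketch of that idea; the function name and the raw-coordinate interface are invented for illustration (real toolkits, such as iOS’s pinch gesture recognizers, report the scale factor directly).

```python
import math

def pinch_scale(prev_touches, curr_touches):
    """Return the scale factor implied by a two-finger pinch.

    Each argument is a pair of (x, y) touch points; the result is the
    ratio of the current finger spread to the previous spread.
    Hypothetical helper -- a real gesture recognizer would also smooth
    the input and reject accidental touches.
    """
    def spread(touches):
        (x1, y1), (x2, y2) = touches
        return math.hypot(x2 - x1, y2 - y1)

    return spread(curr_touches) / spread(prev_touches)

# Fingers spreading from 100 px apart to 150 px apart scales the
# on-screen mass by a factor of 1.5.
scale = pinch_scale([(0, 0), (100, 0)], [(0, 0), (150, 0)])
```

The point of the sketch is that the gesture maps physical motion directly onto a content transformation, with no intermediate ‘tool’ metaphor.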

Microsoft PixelSense (an interactive surface computing platform that allows one or more people to use touch and real-world objects, and to share digital content, at the same time) takes similar ideas about how users interact with content, but adds the ability for the device to optically recognize objects placed on top of it. In this way, users can trigger actions on the computer through the same gestures and motions as Han’s touchscreen allowed, but physical objects also become part of the control mechanism. For example, when you place a wine glass on the table, the computer recognizes it as such and displays content associated with that wine glass. Placing a wine glass on a table maps well onto everyday actions taken with wine glasses and tables, and thus maps well onto reality-based interfaces. It could therefore be seen as an entrée to a NUI experience.
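Architecturally, using recognized objects as controls amounts to a dispatch from object identity to application behaviour. The sketch below assumes a hypothetical recognition layer that reports a tag string and a position for each object it sees; the tag names and handler functions are invented for illustration.

```python
# Hypothetical handlers -- each returns a description of the content
# the application would display next to the recognized object.
def show_wine_details(pos):
    return f"wine menu at {pos}"

def show_camera_photos(pos):
    return f"photo gallery at {pos}"

# Dispatch table from recognized-object tag to behaviour.
OBJECT_ACTIONS = {
    "wine_glass": show_wine_details,
    "camera": show_camera_photos,
}

def on_object_recognized(tag, position):
    """Invoked by the (assumed) recognition layer for each object."""
    handler = OBJECT_ACTIONS.get(tag)
    if handler is None:
        return None  # unknown objects are simply ignored
    return handler(position)
```

This keeps the reality-based mapping explicit: each physical object carries its own meaning, and the software only decides what that meaning triggers.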

3D Immersive Touch is defined as the direct manipulation of 3D virtual-environment objects using single- or multi-touch surface hardware in multi-user 3D virtual environments. The term was coined in 2007 to describe and define the 3D natural user interface learning principles associated with Edusim (an immersive virtual reality environment that can be physically interacted with). The immersive-touch natural user interface now appears to be taking on a broader meaning with the adoption of surface- and touch-driven hardware such as smartphones and tablets.

Kinect is a motion-sensing device for the Xbox that uses spatial gestures for interaction instead of a game controller. Because Kinect allows the sensing of the physical world, it shows potential for RBI designs, and thus potentially also for a NUI. That Kinect has been described as a ‘NUI’ is yet another example of a misunderstanding of the term, or at least a co-option of the term for marketing purposes.
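The spatial gestures Kinect senses are, in software terms, classifications over tracked joint positions. The following is a deliberately simplified sketch of one such classifier (a horizontal hand swipe); the function, thresholds, and input format are all assumptions for illustration, and a real skeletal-tracking pipeline would also smooth the joint data and gate on velocity.

```python
def detect_swipe(hand_xs, min_travel=0.4):
    """Classify a horizontal swipe from a sequence of hand x-positions
    (in metres, sensor space). Returns 'left', 'right', or None.

    Hypothetical sketch: compares only net travel between the first
    and last samples against a fixed threshold.
    """
    if len(hand_xs) < 2:
        return None
    travel = hand_xs[-1] - hand_xs[0]
    if travel >= min_travel:
        return "right"
    if travel <= -min_travel:
        return "left"
    return None
```

Even this toy version shows why such systems lean on RBI: the gesture vocabulary works only insofar as it maps onto motions users already make in the physical world.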
