From Natural, it's now Humanized User Interface

NEW YORK-- New year, new name. Last February 22, the NUI (Natural User Interface) Meetup, now in its fourth year, became the HUI (Humanized User Interface) Meetup. The new name is said to be a more accurate description because “the advancements today enable apps and devices to interact with us like we do with other people,” organizers Ken Lonyai and Debra Benkler said.


http://www.meetup.com/HUI-Central-NY/events/226386935/


As hosts, the organizers talked about where HUI is headed, how best to use it in projects and products, and how to develop HUI-based user experiences, as well as how to use the plethora of APIs available right now.


Differentiating NUI from HUI, Benkler said NUI, as coined by Steve Mann, refers to actions that come naturally to human users--the use of nature itself as an interface. For many, the definition has since come to mean any interface that feels natural to the user.


“Natural” is not without its issues, while HUI is said to unify the human-like experience, reducing barriers to human-machine interaction, extending the benefits of technology and engaging greater segments of the population.


“HUI is multi-sensory and bi-directional. It mimics real world interactions. It’s immersive. It can make devices effectively invisible,” the hosts said.


The hosts discussed HUI technologies including touch, gesture, voice, eye tracking and object/facial recognition, among others.


On touch, supposedly the most underdeveloped HUI technology, Lonyai talked about trends in haptics. “Future haptics will simulate temperature and viscosity,” he said. At that point, touch will be considered an HUI beyond the screen--for telepresence, in-air haptics, conductive fabric and real-world objects.


On gesture, Lonyai said there will be more use of body movements to interact with systems. Typically, it requires specialized equipment: a 3D depth-sensing camera (Kinect), an electromyography device (Myo) or ultrasound transducers. 3D depth cameras are largely peripherals today, but that is said to change in 2016.


How do 3D depth-sensing cameras work? They project a field of infrared points, which are read by the camera to determine depth.
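The projection-and-read step can be sketched with simple triangulation. This is a minimal illustration, not the Kinect's actual pipeline; the focal length, baseline and disparity values below are made-up numbers.

```python
# Sketch: how a structured-light depth camera turns projected points into
# depth. The IR projector and camera sit a fixed baseline apart, so each
# dot shifts in the image (its "disparity") in proportion to 1/depth.
# All constants here are hypothetical, not real calibration values.

FOCAL_LENGTH_PX = 580.0   # assumed focal length, in pixels
BASELINE_M = 0.075        # assumed projector-to-camera baseline, in metres

def depth_from_disparity(disparity_px: float) -> float:
    """Triangulate a point's depth (metres) from its pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return (FOCAL_LENGTH_PX * BASELINE_M) / disparity_px

# A toy "field of points": disparities observed for a 2x3 patch of IR dots.
disparities = [
    [58.0, 58.0, 29.0],
    [58.0, 29.0, 29.0],
]

# Larger disparity = nearer surface; the result is a primitive depth image.
depth_image = [[depth_from_disparity(d) for d in row] for row in disparities]
```

Real devices refine this with calibration and interpolation between dots, but the inverse relationship between disparity and depth is the core idea.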


The point data is processed and a primitive image is created. With algorithms, it can also be used for skeletal tracking. Changes in position can be measured and correlated to mean--or trigger--almost anything. Depth-sensing cameras can even track heart rates.
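As a toy example of correlating a change in position to a meaning, here is a hypothetical swipe detector. The joint, the normalized coordinates and the threshold are all assumptions for illustration, not any real SDK's API.

```python
# Hypothetical sketch: mapping a tracked joint's change in position to a
# gesture. A real skeletal-tracking SDK would supply the per-frame joint
# data; the threshold and coordinate convention below are made up.

def detect_swipe(wrist_x_positions, threshold=0.3):
    """Return 'left' or 'right' if the wrist's normalized x-coordinate
    (0..1) moved past the threshold over the frame window, else None."""
    if len(wrist_x_positions) < 2:
        return None
    delta = wrist_x_positions[-1] - wrist_x_positions[0]
    if delta > threshold:
        return "right"
    if delta < -threshold:
        return "left"
    return None

frames = [0.20, 0.35, 0.50, 0.62]   # wrist x moving rightward frame by frame
print(detect_swipe(frames))          # prints "right"
```

Swapping the joint, axis or threshold turns the same measurement into a different command, which is what “correlated to mean almost anything” amounts to in practice.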


For Lonyai, UX best practices for gesture include knowing system limitations; designing large interaction areas; minimizing gorilla arms; avoiding custom gestures; avoiding occlusions; using context-based affordances; and considering cultural issues.


By gorilla arms, he was referring to how you can’t keep your arms raised for long periods, pointing out that even Tom Cruise got tired holding his arms up during the filming of “Minority Report.”


What about object/facial recognition? Since humans can distinguish over 30,000 visual objects within a few hundred milliseconds, object/facial recognition is definitely interesting to explore. He cited how 2D and 3D APIs can determine facial “landmarks,” meaning the minute details of your face can be captured: the distance between the eyes, the width of the nose, the depth of the eye sockets, the shape of your cheekbones.
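Once an API has returned such landmarks as keyed points, those “minute details” reduce to simple geometry. The landmark names and pixel coordinates below are invented for this sketch, not any real API's output.

```python
import math

# Hypothetical landmark points, roughly as a facial-recognition API might
# return them (names and coordinates are made up for illustration).
landmarks = {
    "left_eye":  (112.0, 140.0),
    "right_eye": (176.0, 140.0),
    "nose_tip":  (144.0, 180.0),
}

def dist(a, b):
    """Euclidean distance between two 2D landmark points, in pixels."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# One of the "minute details" mentioned above: the distance between the eyes.
eye_distance = dist(landmarks["left_eye"], landmarks["right_eye"])
print(f"inter-eye distance: {eye_distance:.1f} px")   # prints "inter-eye distance: 64.0 px"
```

Comparing a vector of such measurements across faces is, roughly, what both identification and authentication build on.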


“It’s going to be all about ‘authentication vs identification’,” he said.