September 16, 2013: NUI Central

http://www.meetup.com/NUI-Central-NY/events/121867762/

On Monday, September 16, 2013, OLC attended NUI Central’s event, Microsoft Kinect: The Present and Future of Natural User Experience, featuring Oscar E. Murillo, Principal User Experience Architect, and Chris White, Senior Program Manager.

Oscar Murillo presented first. He said that Kinect for Windows grew out of Kinect for the Xbox. Microsoft [the business side] was interested in scenarios that went beyond gaming: healthcare, etc. “Kinect was released into gaming because gaming is fun,” Murillo said. “NUI is still in its infancy. But we’re still trying to come up with the next best thing. In order to do that, we need to understand what’s natural.” Murillo and his team are interested in how users interact with the system. They work with first and third parties to understand touchless scenarios in order to iterate.

At this point, Murillo showed the audience a video that depicted what Microsoft is attempting to do with retail. “The Kinect is optimized for a large screen,” Murillo explained. “With a large screen experience—the Kinect was built for 0.4 meters and farther out—it will excel in the context of retail.” The goal is to make retail shopping more immersive and “more fun.”

And because Microsoft is entering the retail space, Murillo revealed that the language they are using [at least on the consumer-facing side] will reflect common terms like target, manipulate, swipe, hover, and so on. The team is focused on developing an interaction grammar that is familiar and approachable, not alienating.

Microsoft is also working out how to build models for frictionless interaction, an effort that has involved some 10,000 hours of research and data analysis. One thing the team learned is that swipes are different for everyone.
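
To make that variability concrete, here is a minimal sketch of a threshold-based swipe detector in Python. The `HandSample` type, the units, and the threshold values are all hypothetical, not Microsoft’s; the point is that any fixed `min_distance` and `max_duration` tuned to one person’s swipe will reject another’s.

```python
from dataclasses import dataclass

@dataclass
class HandSample:
    t: float  # time in seconds (hypothetical sensor timestamps)
    x: float  # horizontal hand position in meters (hypothetical units)

def detect_swipe(samples, min_distance=0.25, max_duration=0.6):
    """Classify a horizontal swipe from a short window of hand samples.

    The thresholds are the interesting part: fixed values that accept one
    person's quick, broad swipe will miss another's slower, shorter one,
    which is the per-user variability Murillo described. The numbers here
    are illustrative only.
    """
    if len(samples) < 2:
        return None
    dx = samples[-1].x - samples[0].x
    dt = samples[-1].t - samples[0].t
    if dt <= 0 or dt > max_duration:
        return None  # too slow for this threshold
    if abs(dx) < min_distance:
        return None  # too short for this threshold
    return "swipe_right" if dx > 0 else "swipe_left"

# Example: a slower, shorter swipe that these fixed thresholds reject.
samples = [HandSample(0.0, 0.00), HandSample(0.3, 0.10), HandSample(0.7, 0.18)]
print(detect_swipe(samples))  # None -- thresholds tuned for a different user
```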

Chris White took over to talk about the technical side of what Microsoft is doing.

“Analog signals are hard to work with,” White said. “For the Xbox, that was okay; people were building systems to understand the motions.” But this time, with real-time motion sensing, it became more difficult.

White demoed what he and his team were working on at Microsoft. A picture slideshow enticed potential users to walk into the view of the Kinect sensor. “The attract loop is the enticer,” White said. “As you walk towards the Kinect sensor, it projects a silhouette onto the screen.” To interact, the user must communicate intent to the system. “You raise your hand to signal to the system that you want to interact and engage with it. The system traces and tracks both hands at the same time, and it shows feedback as you push to activate.” Users can grab and drag, and point and scroll. A second user can even join in, as long as the first user drops their hand to let the second user take control of the system.
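
The engagement flow White described maps naturally onto a small state machine: raise a hand to take control, drop it to hand control off. Below is a minimal sketch, assuming a hypothetical per-frame dict of skeleton joint heights standing in for whatever the Kinect SDK actually delivers; the hand-above-shoulder heuristic is illustrative, not Microsoft’s implementation.

```python
def hand_raised(skeleton):
    """True if either hand is above the shoulders (a common, simple heuristic;
    not Microsoft's actual engagement test)."""
    return (skeleton["hand_left_y"] > skeleton["shoulder_y"] or
            skeleton["hand_right_y"] > skeleton["shoulder_y"])

class EngagementTracker:
    def __init__(self):
        self.engaged_id = None  # tracking id of the user in control

    def update(self, skeletons):
        """skeletons: dict mapping tracking id -> joint heights (hypothetical)."""
        # Disengage if the controlling user dropped their hand or left view.
        if self.engaged_id is not None:
            current = skeletons.get(self.engaged_id)
            if current is None or not hand_raised(current):
                self.engaged_id = None
        # Otherwise, the first user to raise a hand takes control.
        if self.engaged_id is None:
            for sid, skel in skeletons.items():
                if hand_raised(skel):
                    self.engaged_id = sid
                    break
        return self.engaged_id

tracker = EngagementTracker()
frame = {1: {"hand_left_y": 1.6, "hand_right_y": 1.0, "shoulder_y": 1.4},
         2: {"hand_left_y": 1.0, "hand_right_y": 1.0, "shoulder_y": 1.4}}
print(tracker.update(frame))   # 1 -- first user raised a hand
frame[1]["hand_left_y"] = 1.0  # user 1 drops their hand...
frame[2]["hand_right_y"] = 1.7 # ...and user 2 raises theirs
print(tracker.update(frame))   # 2 -- control hands off to the second user
```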

White revealed that zooming in is proving more difficult than first thought. “We’re still exploring that function,” he said.

With the demo over, White explained that it’s always imperative to provide a fallback. “For example, voice is not always reliable.” He went on to say that thousands of hours of machine learning are what helped them out.
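
As a sketch of that fallback principle, the snippet below prefers a voice command only when a recognizer reports high confidence, and otherwise degrades to gesture input. The recognizer result shape and the threshold are assumptions for illustration, not anything specified in the talk.

```python
CONFIDENCE_THRESHOLD = 0.7  # illustrative cutoff, not a Microsoft value

def handle_input(voice_result, gesture_result):
    """Prefer voice when the (hypothetical) recognizer is confident;
    otherwise fall back to the gesture channel."""
    if voice_result and voice_result["confidence"] >= CONFIDENCE_THRESHOLD:
        return ("voice", voice_result["command"])
    if gesture_result:
        return ("gesture", gesture_result)
    return ("none", None)

# Noisy audio yields a low-confidence voice result, so the gesture wins.
print(handle_input({"command": "next page", "confidence": 0.4}, "swipe_left"))
# -> ('gesture', 'swipe_left')
```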