Engineers in Sweden have developed a virtual sphere interface they believe is the key to popularising touch-free computing.

Örs-Barna Blénessy and Holger Andersson met at Lund University, where the idea for the sphere first took shape. The pair had been looking at popular-culture depictions of our touch-free future, where sweeping gestures and complex instructions need to be learned in order to operate computing systems. It looked, to them, like a lot of needless hard work and awkward gesticulating. They wanted to design a new approach that would feel intuitive and be less exhausting on the arms.

Virtual sphere is new "touch-free keyboard" (Lund University)

"While these were often very impressive to look at, they seemed neither comfortable to use nor easy to learn," Andersson, now CTO of their five-employee-strong company Erghis, told WIRED.co.uk. "We quickly realised that a touchless interface designed for everyday use would have to be built around small, comfortable gestures; preferably ones that required moving only your hands. In order to be truly useful it would also have to allow complex interaction without being difficult to learn."

The sphere seemed to solve that riddle: a simple system that would allow for "hundreds of commands with what to the user would seem like only a handful of gestures". There are just three core types of motion that are tracked using a Kinect or Leap Motion Controller, and a fourth that is limited to breaking the imaginary orb apart. "By tying the interaction to a virtual object the user can associate gestures to the object's behaviour," explained Andersson. "This way the number of apparent gestures they have to learn is greatly reduced. To them, twisting and tapping the sphere can be thought of as two gestures, but to the computer each finger represents a different command that can be further changed by how the sphere has been twisted."
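To make that combinatorial trick concrete, here is a minimal, hypothetical sketch in Python. It is not Erghis's code: the finger names, twist states and command strings are illustrative assumptions, and the real system presumably has more modifiers to reach "hundreds of commands". The point is simply how two perceived gestures, tap and twist, fan out into many distinct commands for the computer.

```python
# Hypothetical sketch only: how a handful of apparent gestures can map to
# many commands when each finger, hand and twist state is distinguished.
from dataclasses import dataclass

FINGERS = ["thumb", "index", "middle", "ring", "pinky"]
TWIST_STATES = ["neutral", "twisted_left", "twisted_right"]  # assumed states


@dataclass(frozen=True)
class SphereState:
    twist: str  # which way the virtual sphere is currently rotated


def command_for_tap(hand: str, finger: str, state: SphereState) -> str:
    """One tap becomes one command, further modified by the sphere's twist."""
    return f"{hand}:{finger}:{state.twist}"


# 2 hands x 5 fingers x 3 twist states = 30 distinct commands,
# even though the user only perceives "tap" and "twist".
ALL_COMMANDS = {
    command_for_tap(hand, finger, SphereState(twist))
    for hand in ("left", "right")
    for finger in FINGERS
    for twist in TWIST_STATES
}
print(len(ALL_COMMANDS))  # -> 30
print(command_for_tap("right", "index", SphereState("twisted_left")))
```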

"It also helps that the most natural way to hold the sphere -- with your hands apart and fingers slightly splayed out -- tends to make it easier for sensors to reliably track your hands."

The algorithms that form the very foundations of the virtual sphere are still in their early stages, but the system has already been configured, and trialled with some success, for typing and for controlling a presentation. "Most people who try it for the first time are able to write their name in minutes," Andersson told us, explaining that the team is building in an intro that helps users naturally play around with the functions and test the system's limits. "The most exciting application, however, has to be a collaboration with the robotics lab at Lund University where we were able to use the sphere to control an industrial robot from ABB."

Despite the sphere being in its infancy -- though the company already has a patent pending for how the algorithm handles multiple users, for touch-free control in public spaces -- Erghis is apparently in talks with companies in a number of industries where a reliable interface could make a dramatic impact. Healthcare, for instance, where hygiene standards could rocket. Andersson also flagged up the possibilities open to shops -- "it is possible to use the sphere to control screens that normally can't be made interactive, such as screens placed out of reach or behind storefront windows".

By perfecting the algorithm behind this design solution, Erghis hopes to build an interface that will only improve and upgrade as our motion sensors do. Current sensors obviously have their limits and error margins, but the sphere can be adapted as and when next-gen versions crop up.
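One plausible way to achieve that sensor independence, sketched below purely as an assumption about the architecture rather than a description of Erghis's implementation, is to put each tracker behind a common adapter interface, so supporting a next-generation sensor only means writing a new adapter while the sphere logic stays untouched. The class and method names here are invented for illustration.

```python
# Hypothetical sketch of a sensor-agnostic design: the sphere logic consumes a
# generic hand-pose structure, so a new tracker only needs a new adapter.
from abc import ABC, abstractmethod
from typing import List, Tuple

Point3D = Tuple[float, float, float]


class HandTracker(ABC):
    """Adapter interface; concrete classes would wrap Kinect, Leap Motion, etc."""

    @abstractmethod
    def fingertip_positions(self) -> List[Point3D]:
        """Return fingertip positions in a common coordinate frame."""


class DummyTracker(HandTracker):
    """Stand-in adapter; a real one would call the sensor's SDK."""

    def fingertip_positions(self) -> List[Point3D]:
        return [(0.0, 0.0, 0.0)] * 10  # placeholder data for ten fingertips


def update_sphere(tracker: HandTracker) -> None:
    tips = tracker.fingertip_positions()
    # ...fit the virtual sphere to the tracked fingertips and emit commands...
    print(f"tracking {len(tips)} fingertips")


update_sphere(DummyTracker())
```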

"Part of our vision is that users should never need to learn than our few gestures, but still be able to quickly adjust to new touch-free enabled technologies," said Andersson. To be ready for that potential revolution, the team is currently running a seed round to add to the angel investment Erghis has already received. After that, the only thing standing in the company's path, they believe, is a dated version of the future we've grown too accustomed to, thanks to films like Minority Report.

"In a sense, our main competitor isn't so much another company as it is a different mindset: that the swiping and poking gestures that work well on touch surfaces would work equally well as a touchless interface. While these gestures might be familiar, this is probably also the reason they just don't 'feel right' away from a touch screen. They were never designed for touchless interaction in a three-dimensional space, and their limitations soon become apparent to anyone who tries to do more than swipe through a menu."

Arguably, though, others have already entered this space. Mycestro, a wearable "fingermouse" launched last year, was specifically designed to allow touch-free control in a narrow space -- so, no giant arm gestures necessary. Again, this focuses on using touchscreens and desktops in the same ways we always have. Whether Erghis has nailed the future populist user experience remains to be seen.

Technology | 17 December 2014 | By Liat Clark
