Haptic feedback: When a touchscreen touches you
June 08, 2015
Some revolutions you simply have to witness for yourself – or, more accurately in this case, feel for yourself. I’d first heard of this exciting technology in passing conversation, and prior to the demonstration I had mentally prepared for the natural disappointment that often follows any apparently “too good to be true” claim – though the reality was quite the opposite.
Marketed as “bringing surfaces to life”, the Redux ST SurfaceSensation technology provides haptic feedback by cleverly utilising multiple actuators affixed to the underside of a surface, each generating an individual, contrasting waveform. The oscillatory output of each transducer is dictated by complex algorithms; the outputs combine to produce specific sensations at precise positions on the surface via a patented bending-wave technique.
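Redux’s actual algorithms are proprietary, but the basic principle of combining waveforms to localise a sensation can be illustrated with classic delay-and-sum focusing: each actuator emits the same waveform, delayed so that all contributions arrive in phase at the target point and superpose constructively there. A minimal sketch, assuming a simple uniform wave speed and hypothetical panel dimensions of my own choosing:

```python
import numpy as np

def focus_delays(actuators, target, wave_speed):
    """Per-actuator firing delays so that waves from every actuator
    arrive at `target` in phase (delay-and-sum focusing).
    Positions in mm, wave_speed in mm/s."""
    dists = [np.hypot(ax - target[0], ay - target[1]) for ax, ay in actuators]
    max_d = max(dists)
    # The farthest actuator fires immediately; nearer ones wait,
    # so all wavefronts reach the target at the same instant.
    return [(max_d - d) / wave_speed for d in dists]

def actuator_signals(actuators, target, wave_speed, freq, t):
    """Drive each actuator with the same sine, shifted by its focusing
    delay; at the target the contributions sum coherently, while
    elsewhere on the surface they largely cancel."""
    delays = focus_delays(actuators, target, wave_speed)
    return [np.sin(2 * np.pi * freq * (t - d)) for d in delays]

# Four actuators on the corners of a hypothetical 100 mm panel,
# focusing a 200 Hz sensation at an off-centre touch point.
acts = [(0, 0), (100, 0), (0, 100), (100, 100)]
t = np.linspace(0, 0.01, 1000)
signals = actuator_signals(acts, target=(30, 70),
                           wave_speed=300_000, freq=200, t=t)
```

This is only a toy model – real bending waves in a plate are dispersive (wave speed varies with frequency), which is presumably part of what Redux’s patented technique accounts for – but it shows why contrasting per-actuator waveforms can produce a sensation at one precise spot.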
The resulting vibrations are interpreted by the brain as a 3D surface with a configurable texture and, intriguingly, can be either a protrusion or a depression, with multiple depths; I experienced a pseudo-shutter button of the kind found on cameras, which felt identical to its mechanical cousin but in reality was (and remained) an entirely flat surface. The technology provides a rich repertoire of signals, truly replicating the feeling of dials, switches, sliders and buttons. Incidentally, the accuracy of that replication isn’t down to luck; Redux developed an artificial finger with an abundance of sensors to interrogate real-world mechanics and thus determine the haptic variable configuration.
Equally alluring: with SurfaceSound, those same actuators can be used in parallel to generate rich, dispersive audio from any surface, eliminating the need for discrete speaker cones altogether. The icing on the cake is that, thanks to the inherent frequency-range separation of the two functions, they can be employed simultaneously!
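That frequency-range separation is worth unpacking: fingertips are most sensitive to vibration well below a few hundred hertz, while most audible content sits above that, so one combined drive signal can serve both functions without interference. A minimal sketch (my own illustration, not Redux’s implementation), using an idealised FFT brick-wall crossover at an assumed 500 Hz boundary to show that the two bands remain cleanly separable:

```python
import numpy as np

def split_bands(drive, fs, crossover_hz):
    """Split a combined actuator drive signal into a low band (haptics)
    and a high band (audio) with an idealised FFT brick-wall crossover.
    A real-time system would use causal crossover filters instead."""
    spectrum = np.fft.rfft(drive)
    freqs = np.fft.rfftfreq(len(drive), 1 / fs)
    low = np.fft.irfft(spectrum * (freqs < crossover_hz), len(drive))
    high = np.fft.irfft(spectrum * (freqs >= crossover_hz), len(drive))
    return low, high

fs = 48_000                                   # sample rate, Hz
t = np.arange(fs) / fs                        # one second of samples
haptic = np.sin(2 * np.pi * 200 * t)          # 200 Hz: felt, barely heard
audio = 0.5 * np.sin(2 * np.pi * 2_000 * t)   # 2 kHz: heard, not felt
combined = haptic + audio                     # one drive signal, two jobs
low, high = split_bands(combined, fs, crossover_hz=500)
```

Because the haptic content lives entirely below the crossover and the audio entirely above it, `low` and `high` recover the original components essentially exactly – which is why the same actuators can vibrate a button under your finger while playing music from the very same panel.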
Following the demonstration, I found myself pondering the innumerable applications that would benefit from such tactile feedback, as I’m sure my readers are now. For this report, I’ll focus on the potential applications that excited me most, though Redux ST has considered the full range of applications in depth.
It continually perplexes me that whilst strict laws forbid me from even touching my smartphone whilst at the helm of a road vehicle, I’m free to spend as much time as I desire interacting with my vehicle’s integrated touchscreen multimedia system – at the expense, of course, of interacting with the road ahead, which is arguably far more important. Given the ever-escalating importance of safety in new vehicle design, the ubiquitous deployment of non-tactile touchscreens is somewhat of an enigma. Employing haptic feedback techniques enables the driver to feel touchscreen buttons, thus keeping their eyes fixed firmly on the road ahead. Such an approach scales to military vehicles and avionics, improving safety in extreme environments.
Another bugbear of mine is that the drive to create ever-thinner display devices, principally televisions and smartphones, has come at the expense of audio volume and quality, as conical speakers rely on available depth, which is decreasingly on offer. My television can only address this with a discrete sound bar, and my smartphone by directing audio at my feet! SurfaceSound not only resolves these issues but enables even thinner devices by removing the conical speaker requirement altogether.
Historically in the embedded industry, implementing any kind of local audio capability and achieving a worthwhile IP rating were mutually exclusive, due to the speaker vents required. SurfaceSound permits the most stringent IP ratings for embedded devices and is equally transferable to white goods applications, where resistance to liquid ingress, heat, and vibration are just as critical as design considerations.
I’ve little doubt we’ll see either or both implementations rise exponentially in popularity in the near future. Phase 1 will be driven by need, addressing flaws in existing safety critical applications, whilst Phase 2 will be driven by desire, by an end user hungry for multi-sensory interactivity with their devices. From augmented reality systems with spatialized voice overlays to virtual switches and buttons that talk directly to you and, critically, touchscreens you never need to look at – the future of fallible mechanical human interfaces is looking very shaky indeed.