As a communications industry, we tend to employ one sense above all others: sight. Even with the introduction of voice assistants, when it comes to interacting with our devices the Graphical User Interface still dominates – for now.
New platforms and tools may well challenge the visual default. ‘Hands-free’ devices produced primarily for the home, such as the Amazon Echo, give a hint of what might be to come. Yes, they have screens, but interaction with such devices can involve much more than sight and touch. Most have VUIs (Voice User Interfaces) as well as GUIs. But they also introduce the opportunity for interaction using or responding to gesture and proximity.
If the machine knows how far away a user is, type size, resolution, and volume could all be adjusted for maximum usability. Similarly, if the device knew that its user did not currently have the screen in view – if the user were on the other side of the room and behind it, for example – could it switch to voice in addition to the visual? Integrated cameras offer the possibility of facial recognition for log-in, with personalised content perhaps triggered by the machine’s knowledge of who is looking at it. Beyond the operation of the device itself, how does a brand exist on a device like that?
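A minimal sketch of how such proximity-based adaptation might work. The function name, thresholds, and scaling factors here are all illustrative assumptions, not drawn from any real device API:

```python
# Hypothetical sketch of proximity-aware presentation, as described above.
# All names, thresholds, and scaling factors are illustrative assumptions.

def adapt_presentation(distance_m: float, screen_in_view: bool) -> dict:
    """Return presentation settings for a user distance_m metres away."""
    # Scale type size up with distance: base 16pt at 0.5 m, capped at 72pt.
    type_size = min(72, max(16, round(16 * distance_m / 0.5)))
    # Raise volume with distance, on a 0-100 scale.
    volume = min(100, round(30 + 20 * distance_m))
    # If the user can't see the screen, supplement the visual with voice.
    use_voice = not screen_in_view
    return {"type_size": type_size, "volume": volume, "voice": use_voice}


# A user close by, looking at the screen, gets the visual defaults;
# a user across the room with the screen out of view gets larger type,
# higher volume, and spoken output as well.
near = adapt_presentation(0.5, screen_in_view=True)
far = adapt_presentation(3.0, screen_in_view=False)
```

In practice the distance and gaze signals would come from the device's own sensors (camera, microphone array, proximity sensor); the point of the sketch is simply that one input, distance, can drive several presentation decisions at once.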