HMI in the machine-learning age
Real-time stream HMI
How will machine learning help to define the next generation of screen interfaces? Moving from static, siloed experiences to a constant, curated feed of the user’s most relevant content.
Rob Pitt, Director of Product
October 2021
You could argue that HMIs have looked much the same for more than 30 years. Since the design of Windows 1.0 in 1985, the rules for positioning elements inside a software product’s UI have been more or less static. There are even golden rules of UX for the positioning of icons and components, Shneiderman’s Eight Golden Rules amongst them. UX researchers and designers spend their lives analysing how customers react to changes in feature positioning, using A/B testing and modern analytics tools such as Hotjar.
The quest for the next paradigm of user interface continues.
In 2007, Apple’s iOS introduced the idea of ‘apps’, reducing what were often complex website-based software user journeys into singular, portable, hyper-focused services. This led to a revolution in UX aimed at making tasks achievable in simple, small-screen environments, which has since propagated back up to larger-screen UX design such as TVs and, more recently, in-vehicle head units.
[1]
Microsoft’s Metro interface allowed multi-service dialogs to be clustered into one screen, with the visual loading simplified to key information. It was revolutionary in that it added ‘consumerised content’ to a previously largely corporate operating system and application set. It was fun, and it directly influenced smart TV interfaces and streaming services such as Netflix.
[2]
For service-related tasks, chatbots are increasingly taking over as the default interface between service provider and customer. These bots can be partially or totally programmed to solve the customer’s problems, through logic- or tree-based decision-making or more complex AI tools, in many cases absolving the user of any complicated learning or searching through settings and product choices.
[3]
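To make that tree-based decision-making concrete, here is a minimal sketch of how such a bot can route a conversation. The questions, node names and resolutions are purely illustrative, not any particular vendor’s flow.

```python
# Minimal sketch of tree-based chatbot decision-making: each node asks a
# question and routes on the answer until a leaf resolution is reached.
# All nodes, questions and resolutions here are illustrative assumptions.

DIALOG_TREE = {
    "start": {
        "question": "Is your issue with billing or with the product?",
        "answers": {"billing": "billing", "product": "product"},
    },
    "billing": {
        "question": "Was your card charged twice?",
        "answers": {"yes": "refund", "no": "invoice"},
    },
    "product": {
        "question": "Does the device power on?",
        "answers": {"yes": "diagnostics", "no": "replacement"},
    },
    # Leaf nodes carry a resolution instead of a question.
    "refund": {"resolution": "A refund for the duplicate charge has been requested."},
    "invoice": {"resolution": "Your latest invoice has been emailed to you."},
    "diagnostics": {"resolution": "Please run the in-app diagnostics test."},
    "replacement": {"resolution": "We will ship a replacement unit."},
}

def run_chatbot(answer_fn, node="start"):
    """Walk the tree, asking questions until a resolution (leaf) is reached."""
    while "resolution" not in DIALOG_TREE[node]:
        step = DIALOG_TREE[node]
        node = step["answers"][answer_fn(step["question"])]
    return DIALOG_TREE[node]["resolution"]

# Example: a scripted user who answers "billing", then "yes".
answers = iter(["billing", "yes"])
print(run_chatbot(lambda question: next(answers)))
```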
Another approach is to move away from screens completely and augment the world around you with UI. The augmented world, or metaverse, is becoming a major new battleground for the main tech giants and new startups, where AR/XR designers and developers are looking to move the interface to a virtual layer with which you will interact via eye-tracking, gestures and some form of controller.
[4]
Then, of course, there is voice. Tech giants around the world are working to remove the visual interface completely and replace it with a highly cognitive, context-aware digital voice assistant that solves the user’s needs. CloudMade is actively involved in this too, providing driver and vehicle behaviour data and context to make the assistant more relevant and useful. As cybernetics and general robotics advance, we will see more helper robots, such as Amazon’s Astro, in the consumer space, interacting with their owners and carrying out everyday tasks for us, controlled mainly through voice commands but with screen interfaces for some tasks.
[5]
In vehicles, leading OEMs have moved from physical UIs to digital screens, and the disappearance of physical knobs and levers means the interface has lost its last anchors in the physical world. The car’s HMI has turned into a somewhat chaotic experience, where you no longer know what to expect from the screen next to your hand. It can be a map; a sideways swipe brings up the phone book or a media player, and a vertical swipe a system control. To use this interface, ordinary common sense is no longer enough: now you need to understand which of the elements is a button, and know many types of gesture.
In an attempt to overcome this chaos, automakers have decided to use predictive features to collect the functions relevant at a given moment in one place: learning user behaviour by applying machine learning to personal data, then automating often-used functions, thereby helping the driver to interact with the interface as little as possible and concentrate on the road.
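As a rough illustration of the ‘learn and automate often-used functions’ idea, the sketch below simply counts which function a driver uses in each context bucket and surfaces the most frequent one. The context features and function names are assumptions for the example; a production system would use far richer signals and models.

```python
# A minimal frequency-based predictor for often-used vehicle functions.
# Context buckets (e.g. time of day) and function names are illustrative.

from collections import Counter, defaultdict

class FunctionPredictor:
    def __init__(self):
        # context bucket -> counts of the functions used in that context
        self.usage = defaultdict(Counter)

    def record(self, context, function):
        """Log one observed interaction."""
        self.usage[context][function] += 1

    def predict(self, context):
        """Return the function most often used in this context, if any."""
        counts = self.usage.get(context)
        return counts.most_common(1)[0][0] if counts else None

predictor = FunctionPredictor()
for _ in range(5):
    predictor.record("weekday_morning", "navigate_to_work")
predictor.record("weekday_morning", "call_home")

print(predictor.predict("weekday_morning"))  # -> 'navigate_to_work'
```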
The market leaders are now starting to add predictive services, under names such as Zero Layer or Magic Moments; this new type of interface proactively provides the user with the right information at the right time, in a non-invasive and safe way.
These next-generation UIs will reduce user reliance on predominantly screen-based ‘hunt and peck’ interfaces by harnessing machine learning from the driver and from crowd-sourced behaviour to ‘predict’ what the user will do or require next, and to automate the content delivered to them.
[6]
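One way to combine the personal and crowd-sourced signals mentioned above is to weight a cohort prior against the driver’s own history, trusting personal data more as it accumulates. The weighting scheme below is a simplified illustration, not a production algorithm.

```python
# Sketch: blend a driver's personal action probabilities with cohort-level
# (crowd-sourced) ones. The pseudo-count weighting is an illustrative choice.

def blended_scores(personal, cohort, n_personal, k=20):
    """personal / cohort: dicts mapping action -> probability.
    n_personal: number of observations behind the personal model.
    k: pseudo-count controlling how quickly trust shifts to personal data."""
    w = n_personal / (n_personal + k)  # ~0 on cold start, -> 1 with data
    actions = set(personal) | set(cohort)
    return {a: w * personal.get(a, 0.0) + (1 - w) * cohort.get(a, 0.0)
            for a in actions}

personal = {"navigate_to_work": 0.9, "play_podcast": 0.1}
cohort = {"navigate_to_work": 0.4, "play_news": 0.5, "play_podcast": 0.1}

# A new driver (5 observations) is still driven mostly by the cohort prior.
scores = blended_scores(personal, cohort, n_personal=5)
print(max(scores, key=scores.get))  # -> 'navigate_to_work'
```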
The main pain points of switching to this new paradigm of UX are:
- Moving from physical UI (PUI) to GUI in the car nests the controls inside multiple layers, as it is not possible to fit all apps on one screen, causing spatial confusion
- Current systems are a confusion of static tools, widgets and IoT notifications
- Users are familiar with multi-tasking between apps on a smartphone, but this is dangerous in the car as it leads to driver distraction
- Therefore the right ‘tool/app’ must be provided at the right moment, by predicting what the user needs at that particular moment
At CloudMade we’ve been studying user behaviour in the vehicle, and we believe the next-generation interface for automotive will replace traditional static interfaces with a constantly updating supply of predictive, proactive micro-services.
The consumer behaviour of digital natives will drive this faster, as users’ ‘software moments’ are increasingly cut into bite-size interactions.
We call this a real-time stream, and the fundamental aspect of it is that the interface is not static: simply put, the UI elements may change in size, priority and position according to the importance of the predicted user need.
[7]
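A minimal sketch of that non-static behaviour might look like the following, where each card’s predicted relevance decides both its position in the stream and its size tier. The thresholds and tier names are illustrative assumptions.

```python
# Sketch: lay out a real-time stream by predicted relevance. Cards are
# ranked, and the score also picks a size tier (names/thresholds assumed).

def layout_stream(cards):
    """cards: list of (card_name, relevance in [0, 1]) tuples."""
    ranked = sorted(cards, key=lambda c: c[1], reverse=True)
    layout = []
    for position, (name, score) in enumerate(ranked):
        size = "hero" if score > 0.8 else "standard" if score > 0.4 else "compact"
        layout.append({"position": position, "card": name, "size": size})
    return layout

cards = [("media_player", 0.35), ("low_fuel_warning", 0.92), ("parking_nearby", 0.60)]
for slot in layout_stream(cards):
    print(slot)
# The fuel warning is predicted most relevant, so it leads the stream as a
# full-width 'hero' card, while the media player shrinks to a compact card.
```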
This real-time stream will bring huge benefits to the user. There will be no need to look inside menus and multiple tabs to find content, as it will be predicted for you (mostly by observing your driving behaviour and your content demand patterns, or by looking at cohort behaviour) and surfaced at the right moment.
Interfaces may be significantly different at different moments, but this will be better for the user, as the minimum ‘right content’ will be provided at the right time, reducing user interaction, visual clutter and cognitive load.
Using machine learning to proactively manage software UIs brings many benefits, but there are also drawbacks to consider, and this is where CloudMade’s expertise comes in: bringing business logic to create simplicity.
CloudMade’s approach to this ‘real-time stream’ involves a time-based card feed which provides the user with all the actions they need, often without them knowing those actions were available; a simple data-model sketch of such a feed follows the list below. This includes:
- Navigation
- Communication
- Entertainment
- Parking & charging
- Safety
- Maintenance
- Introduction of new features
- and many more
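Purely as an illustration of how such a time-based card feed could be modelled, assuming simple validity windows and a predicted relevance score per card (the field names are simplifications, not our production schema):

```python
# A minimal data model for a time-based card feed: cards carry a category,
# a validity window and a predicted relevance. All fields are illustrative.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Card:
    category: str          # e.g. "navigation", "safety", "maintenance"
    title: str             # short label shown to the driver
    relevance: float       # predicted relevance in [0, 1]
    valid_from: datetime   # when the card becomes useful
    valid_until: datetime  # after this, the card drops out of the feed

    def active(self, now: datetime) -> bool:
        return self.valid_from <= now <= self.valid_until

def current_feed(cards, now):
    """Time-based feed: only cards valid right now, most relevant first."""
    return sorted((c for c in cards if c.active(now)),
                  key=lambda c: c.relevance, reverse=True)

now = datetime.now()
feed = current_feed(
    [Card("maintenance", "Oil change due", 0.7,
          now - timedelta(hours=1), now + timedelta(days=7)),
     Card("entertainment", "Resume podcast", 0.4,
          now, now + timedelta(hours=2))],
    now,
)
print([c.title for c in feed])  # -> ['Oil change due', 'Resume podcast']
```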
In our interface, the common system controls are still easily accessible to the user should they need to carry out a less frequent interaction, or for passengers to interact with the HMI.
We’ve tested this new user interface in our own Experience Car environment and on the bench, with clear results: users prefer this type of experience, respond better, are less stressed whilst driving, and levels of driver distraction are reduced.
We’re now helping major OEM clients to develop and deploy what will be a paradigm shift in user experiences in the car and further into the mobility user journey, connecting all elements of the internet of things so that this advanced learning and proactivity can assist the user wherever they choose to interact.
[1] https://www.phonearena.com/phones/Apple-iPhone_id1886#media-9090
[2] https://dougseven.files.wordpress.com/2011/09/screenshot_startscreen_web.jpg
[3] https://icdn.digitaltrends.com/image/digitaltrends/yourmd-iphone.jpg
[4] https://developer-contentimages.magicleap.com/2RKn2FEzAI4EqeOKkq0QaS/e9993a7cb8bf1fc0a0356dc8b4854b0b/GettingStarted_UsingLuminOS_LaunchingAnApp__4_.png
[5] https://thegadgetflow.com/wp-content/uploads/2021/09/Amazon-Astro-Alexa-Enabled-Household-Monitoring-Robot-01-1200×900.jpeg
[6] https://api.media.mercedes-benz.com/v2/thumbor/v1/format/1800x/https%3A%2F%2Fapi.media.mercedes-benz.com%2Fv2%2Fstorage%2Fv1%2Fpublic%2F8c0452c8-1245-4c94-bcd9-6819810cc91c%2Fthumbor_original%2F21c0016-008.jpg
[7] https://vimeo.com/embed-redirect/234304690?embedded=true&source=video_title&owner=1380994