Jony Ive said in an interview that the iPhone X was both the realization of a long-held ambition and the beginning of a new chapter in the future development of Apple’s smartphones.

That much was predictable. Apple always needs to be working towards the next generation of devices, so of course the iPhone X is going to be just the first of a new line of iPhones.

But it was something else he said that I think could suggest a new direction for Apple’s user interfaces …

Google Translate rather mangled the exact words he used, so let’s start with the word-for-word translation before I present my own reading of it.

The word-for-word version comes out like this:

Yes, we have found a way to make the user aware, not the physical touch of touching the button to read fingerprints. Face ID which recognizes and maps the face of the user in a non-contact manner. By adopting that technology, it became possible to use it somewhat pure without touching the iPhone. The thing that is annoying as a designer is that there is a paradox that the physically existing form is important for understanding its function, although it is designed to eliminate shapes.

My reading of it is something like this: with Touch ID, the user still had to physically touch a button to have their fingerprint read. Face ID recognizes and maps the user’s face without any contact, so the iPhone can be used in a somewhat ‘pure’ way, without touching it at all. The frustration for a designer is the paradox that physical form is important for understanding function, even as the design strives to eliminate form.

It was the American architect Louis Sullivan who coined the phrase ‘form follows function,’ arguing that in order to design anything, you need to first understand its function – and then design accordingly. There should, he suggested, be nothing superfluous, only those elements needed for the thing to do its job.

Ive effectively takes this idea one stage further, suggesting that good design means that the user is barely aware of the form. He has long expressed his goal for the iPhone as making it ‘a single slab of glass,’ and it’s clear that, in the design of the iPhone X, he believes this goal has essentially been achieved.

Just as he sees the physical form of the iPhone X as the first example of the ‘single slab of glass,’ it also seems clear that he sees Face ID as the first example of ‘pure UI.’ A user interface that, to borrow a phrase, Just Works.

It doesn’t take too much imagination, I think, to extrapolate from this. With Animoji, Apple demonstrated that the phone can recognise expressions. So when the phone asks us whether or not we want it to do something, how about letting us answer with a nod – or reject an action with a shake of the head?

Or perhaps it shouldn’t require even that much effort. Maybe we could instead approve something by smiling, and deny it by frowning?
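To make that a little more concrete: ARKit’s face tracking already exposes per-expression ‘blend shape’ coefficients to developers, so a rough sketch of smile-to-approve / frown-to-decline could look something like this. To be clear, this is just an illustration – the PromptResponder protocol and the 0.6 thresholds are my own invention, not anything Apple has shipped.

```swift
import ARKit

// Rough sketch: read ARKit's face-tracking blend shapes and treat a clear
// smile as "yes" and a clear frown as "no". PromptResponder and the 0.6
// thresholds are hypothetical, purely for illustration.
protocol PromptResponder: AnyObject {
    func userApproved()
    func userDeclined()
}

final class ExpressionConsentReader: NSObject, ARSessionDelegate {
    private let session = ARSession()
    weak var responder: PromptResponder?

    func start() {
        // Face tracking needs the TrueDepth camera (iPhone X and later).
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }
        let shapes = face.blendShapes

        // Blend-shape coefficients run from 0.0 (neutral) to 1.0 (fully
        // expressed); average the left and right sides of the mouth.
        let smile = ((shapes[.mouthSmileLeft]?.floatValue ?? 0) +
                     (shapes[.mouthSmileRight]?.floatValue ?? 0)) / 2
        let frown = ((shapes[.mouthFrownLeft]?.floatValue ?? 0) +
                     (shapes[.mouthFrownRight]?.floatValue ?? 0)) / 2

        // A real implementation would debounce over several frames rather
        // than firing on a single reading.
        if smile > 0.6 {
            responder?.userApproved()
        } else if frown > 0.6 {
            responder?.userDeclined()
        }
    }
}
```

Detecting a nod or a head-shake would be a similar exercise, watching how the pitch and yaw of the face anchor’s transform change over a second or so.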

As AI plays a larger role, we could perhaps even imagine fully automated photo edits, with facial expression recognition used to guide the phone. We take a photo and the iPhone uses Portrait Lighting to present what it thinks is the best version of the photo. We smile, and it decides we like that version. It then carries out further tweaks, watching our expression. It deepens the contrast somewhat, and notices that we don’t seem to like that, so it reverses the change. It tries boosting the saturation instead, and sees that we appear to approve of that. It continues boosting it until we look a little puzzled, then backs off to the point we seemed to like most.
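The logic of that feedback loop is simple enough to sketch. In the snippet below, applySaturation and currentReaction are stand-ins for the real photo-editing and expression-recognition work – the point is just the nudge, watch, keep-or-back-off pattern:

```swift
// A purely hypothetical sketch of the nudge-watch-react loop described above.
// Reaction, applySaturation and currentReaction are stand-ins, not real APIs.
enum Reaction { case pleased, neutral, displeased }

struct ExpressionGuidedTuner {
    var saturation: Float = 1.0   // current edit parameter
    let step: Float = 0.05        // size of each nudge

    mutating func tune(applySaturation: (Float) -> Void,
                       currentReaction: () -> Reaction) {
        var lastLiked = saturation

        // Keep boosting while the user seems happy (capped so the loop ends),
        // then settle on the last value they appeared to like.
        probing: while saturation < 2.0 {
            saturation += step
            applySaturation(saturation)

            switch currentReaction() {
            case .pleased:
                lastLiked = saturation   // remember this as the favourite so far
            case .neutral:
                continue probing         // no clear signal, keep nudging
            case .displeased:
                break probing            // that went too far, stop
            }
        }

        saturation = lastLiked
        applySaturation(saturation)
    }
}
```

The same shape of loop would work for contrast, Portrait Lighting styles or anything else; the hard part is the expression recognition feeding currentReaction, not the loop itself.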

That same kind of approach could be taken with almost any app we could name. Let’s say we’re reviewing our email. Gestures or facial expressions could be used to guide the Mail app on everything from deleting spam to snoozing an email to deal with later.

Perhaps the Home app could notice that we look a little cold, and turn up the heating?

Maybe the Health app sees from our heart rate that we seem a little stressed when we get home, and tries out a few soothing music tracks and mood-lighting combinations until it finds one that calms us? Shazam sees that we turn our head as if trying to remember the artist and track, and pops up the information without us needing to tap a button to ask for it.

The Podcasts app could spot that we look puzzled and automatically back up 20 seconds so we can listen again to the bit we didn’t quite seem to catch.

Twitter could spot a raised eyebrow when we read a tweet, and automatically open the link for us to learn more – then immediately close it again when it sees we look annoyed at its assumption!

I could go on, but you get the idea. Combine recognition of facial expressions and gestures with AI, and you have a really powerful approach to designing an iPhone and a version of iOS that understand what we want and need without us having to ask for it. This is, I suspect, the kind of thing Ive is hinting at.

Is this a direction you’d like Apple to take, or is it all a bit too Big Brother for your liking? Would it be helpful, or annoying? As always, please take our poll and let us know your thoughts and ideas in the comments.