In continuation of last Thursday’s interview with Golden Krishna, here we’re discussing the problem with screens, observation as a principle and prototyping with Justinmind!
If solving problems with screens is not the answer, what is? In your own words: “Reframe the design context to our natural course of actions”. Can you expand a little?
I often argue that companies should be focused on solving their customers' problems, not making more screens. But for the past few decades the processes taught in schools, the jobs posted, employee evaluations, and even company evaluations have been largely tied to the same way of thinking. And here I am asking all of those things to change to something better. You might be thinking, “Can you stop ranting and tell us how the fuck that’s supposed to happen?”
If so, good point and question.
The first of three principles outlined in the book is to “Embrace Typical Processes Instead of Screens.”
For the last few decades in making digital products, we’ve been drawing rectangles (screen representations) and wondering how we can solve problems inside these lazy rectangles. Instead, a more interesting sandbox is to start with the typical process of our customers.
This starts with observation. You see someone doing something and you create a system that works around that. If that sounds straightforward to you, that’s a good thing.
For example, recently, a few Ford designers observed that when some drivers returned to their car trunks with their hands full — with things like grocery bags or coolers — they stuck out a leg to try to open the trunk. An impossible task. Well, until Ford designers embraced their customers' typical process and put a set of sensors under the bumper to detect a shin and then a leg kick, which opened the trunk. When customers first tried that experience it felt like magic, and when it eventually shipped, the cheap sensor set became one of the most requested features of certain Ford models.
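The kick-to-open flow described above can be sketched as a simple trigger check. This is a hypothetical illustration of the pattern — sensor names, inputs, and the key-fob check are assumptions, not Ford's actual implementation:

```python
# Hypothetical sketch of a kick-to-open trunk trigger.
# The inputs and the key-fob authorization check are illustrative
# assumptions, not Ford's real system.

def should_open_trunk(shin_detected: bool, kick_detected: bool,
                      key_fob_in_range: bool) -> bool:
    """Open only when an authorized owner kicks under the bumper."""
    return key_fob_in_range and shin_detected and kick_detected

# A driver with full hands kicks under the bumper while carrying the fob:
assert should_open_trunk(True, True, True) is True
# Something trips the shin sensor but no fob is nearby: stay closed.
assert should_open_trunk(True, True, False) is False
```

The point of the pattern is that the "interface" is the customer's existing gesture; the software's job is only to recognize it and act.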
A trunk that drove in more sales. Impressive. And much better than another app with swipes and taps to open your trunk.
In addition to the book, I also wrote about how you can execute on this kind of thinking over at UX Booth.
Observation as a principle. It’s fascinating how we’ve lost sight of this prerequisite to all sorts of projects. Can you tell us a bit more?
Sure. Technology today has a tendency to take us away from what we’re doing. Interrupt our lives. Reduce the time we can spend doing things beyond screens like hanging out with friends.
In the book, I introduced the idea of Digital Chores, a term I coined to describe the chores we take on to maintain our digital lives. Things like updating operating systems, sorting computer files and folders, archiving email, checking in for flights, and other chores we now have to do on top of traditional chores.
By embracing our typical processes instead of screens, I hope to guide makers of technology away from these kinds of things and towards the notion that technology, like Mark Weiser once said, should fade into the background. You should be able to do what you normally do and have technology work around that.
What do you mean by “leveraging computers”? Are we talking about the Internet of things and big data?
The second principle of “The Best Interface is No Interface” is to “Leverage Computers Instead of Serving Them”.
There are things machines do much better than us. And we should be focused on taking advantage of those wonderful things.
The incredible speed of modern processors, the low cost of sensors, the growth of big data, and advances in predictive analytics and machine learning have given our machines powerful methods for solving problems. Yet a good chunk of software today mostly taxes the graphics card. We’re missing out. What we’re looking at hasn’t changed that much since 1978, but the magic that can happen in the background has changed dramatically.
I do love beautiful typography, and relevant information conveyed clearly, but I think we need to look beyond the screen to really embrace these powerful things in our pockets that are vastly more capable than room-sized supercomputers from not long ago.
Eliminate buttons by utilizing the sensors already baked into most smartphones. Get rid of location drop-downs by using radios that are already prolific, like GPS. Bring data scientists onto design teams to enable smart defaults rather than more form fields.
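As a minimal sketch of the "smart defaults rather than form fields" idea, a sign-up flow could prefill the user's country from device location instead of presenting a drop-down. The reverse-geocoding function here is a hypothetical stand-in for a real location service:

```python
# Hypothetical sketch: prefill a country field from GPS coordinates
# instead of asking the user to scroll a drop-down.
# country_from_coordinates is a toy stand-in for a real
# reverse-geocoding service.

def country_from_coordinates(lat: float, lon: float) -> str:
    """Toy lookup for illustration only: rough bounding box for the US."""
    if 24.0 < lat < 49.0 and -125.0 < lon < -66.0:
        return "United States"
    return "Unknown"

def default_form_values(lat: float, lon: float) -> dict:
    """Return smart defaults so the user edits a field only if it's wrong."""
    return {"country": country_from_coordinates(lat, lon)}

# A device reporting San Francisco coordinates gets the right default:
assert default_form_values(37.77, -122.42)["country"] == "United States"
```

The design choice is the same one Krishna argues for: the sensor does the work in the background, and the screen element only appears as a backup when the guess is wrong.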
I’ve actually assembled a free worksheet about how you can implement this kind of thinking with plenty of examples of how sensors are being used today. You can download that PDF supplement to the book for free here.
As you’ve mentioned, we’ve spent the past decade trying to go paperless. Do you think the next one will be spent trying to go screen-less? If so, what are the challenges going to be?
The move to go paperless started long ago. As far back as the late 1960s, I’ve been able to find publications discussing the idea of ridding the office of all paper. Office workers found their lives filled with paper, and they dreamed of a paperless world. And yes, as you’ve mentioned, that took until the most recent decade to really start happening.
Today, instead, our lives are filled with screens. And I think it’s time to dream of a screenless world. Although I may be the one that’s shouting it most loudly today, there have been many others who have written about similar thinking in the past. People like Alan Cooper, Mark Weiser, and Donald Norman. In modern times, others like Amber Case and Josh Clark.
I think it’s an important conversation today because now it’s actually possible with contemporary technology. Even a simple sensor, like an accelerometer, can empower entire experiences.
That said, there are going to be some enormous challenges. In the book, I discuss how companies will need to bring roles like Data Scientist onto design teams, change the way they recruit and evaluate roles, and adopt new processes. Schools will have to teach new ways of thinking. Policy makers will have to find the right balance of privacy and data collection that enables some of these experiences without being intrusive.
In the end, we could have a world with very little screen time, and that would be a wonderful thing.
How can designers think in a non-UI way? Do you think conversational/invisible apps are the future?
If you read Bloomberg Business, you may have seen that I predicted the rise of conversational apps way back in 2012. I like conversational apps that operate through simple channels like text messaging because they reduce the burden of using technology. You don’t have to install anything, you don’t have to learn a new digital routine, it just works within a system you likely use every day.
But I do think conversational apps are just a mid-step to a completely invisible future.
When electric cars were first introduced they promised something awesome. But it was scary to buy the first electric cars because they were so different. There are a number of reasons why fully electric cars didn’t succeed right away, but one was consumer confusion. There was no infrastructure to support them yet and no education for people to understand them. Many consumers understood gas mileage, but not electric range. They understood gas stations, but weren’t familiar with where and how to recharge their car. Some knew how to fix their gas engine if it broke down (or at least had a mechanic who did), but had no idea how to fix an electric motor.
It was too much of a leap for the everyday person. So, instead, not surprisingly, hybrid cars (part gas, part electric) grew in popularity. Consumers appeared to be comfortable with a mid-step.
That’s what conversational apps are. Just a hybrid, a more comfortable step towards a fully screenless world to come.
The book and its three principles are a guidebook for designers to figure out how to build this coming paradigm. We’re going to be slowly ridding ourselves of UI. From heavy apps to light apps to conversational apps to UI just as a backup to eventually #NoUI.
UI as a backup: what does this mean for designers?
Ha! You know, once I gave a lecture in Portland, and afterwards a young designer told me that he enjoyed my lecture but hoped that I ultimately fail. Because if I succeed, it means he’s out of work. I actually had a lot of fun recounting this story on the Boagworld design podcast.
Yes, I do think UI has a place as a backup, as a place to go under the hood in these automatic solutions and see what’s happening.
But what I told that designer is that if you want to be working in technology you always have to be ready for change. The tools we use today are completely different than what we used 10 years ago. Many of the processes have radically changed. And the mediums that we design are new.
Designers should have empathy for customers. They should aim to understand common, everyday problems and then solve them. That purpose will never change.
In one part of the book, I describe what I think we’re here to do, which has been tweeted a number of times and even made into a poster. Here’s one of those tweets.
Have you ever used a prototyping tool? Do you think they can help in the design phase (i.e. detecting when there are too many screens)?
I love prototyping. I’ve prototyped with sheets of paper, with sticky notes, with cardboard, by acting out experiences, with video, and yes, with software too.
Among many benefits, there are two big reasons why I love prototyping:
a) Shared conversation: It’s easy to have a conversation about a real thing, and sometimes very hard to evaluate an idea in the abstract.
b) Validate ideas: I use prototyping to validate ideas. There’s a lot of arrogance and overconfidence in tech that leads some to believe their ideas are correct, and that user research is only necessary to figure out tiny things like button placement. Prototypes can help you validate an idea before you’ve done much real work at all.
Have you tried out Justinmind? How’d you find it?
Yes. I love prototyping tools! And I think the fact that people can put prototypes together without even knowing code on Justinmind is fantastic.
In the years to come, as some of the brightest people in tech continue to explore screenless solutions, I wonder how triggers, like the state of certain sensors or radios, could be built into prototyping tools. Could someone make a Justinmind prototype that launches a certain widget when you walk into a grocery store? Communicate with something via Bluetooth?
Right now, accessing those powerful tools is a bit too complex and out of reach for the everyday designer, especially in prototype form. But as the field shifts from software that’s all about UI (user interface) to the larger UX (user experience), I wonder if we’ll see more and more prototypes that are less and less about buttons.
Here’s to hoping.
Thanks so much for your time, Golden!
Alright, thanks for the fun chat! This was a good cup of coffee. Can’t believe I said so many words in 30 minutes. Shouldn’t have ordered a Grande. I need to go pick up my kids now. Ciao!