Publication: Is that a supercomputer in your pocket or are you just happy to see me?
January 8, 2016
Note: This article was originally published on Medium. To see the original with working images and links, the direct link is here.
This February 14th it will be exactly 70 years to the day since the ENIAC, the first electronic Turing-complete computer, was presented to the public. The machine, which incorporated 17,468 vacuum tubes (somewhat like the ones you sometimes still find in guitar amplifiers), consumed 150 kW of electricity and occupied 167 m² of floor space. Programming it took a team of six operators several weeks, using an interface containing hundreds of knobs, switches, connectors and cables. The input data was provided through cardboard punch cards (and yes, those holes had to be punched manually as well).
Obviously, much has changed and improved since 1946. While Moore’s Law had the speed of computers roughly doubling every two years, the complexity of computer interfaces has decreased significantly. Nowadays, interfaces themselves are becoming more and more invisible: the complex machines and network connections they mediate no longer just sit on our desktops, hide in our pockets or rest on our wrists. They are starting to take the form of household appliances, cars and toothbrushes. At first glance this all seems fine and dandy. Let’s face it: technological innovation has become completely natural to us and of course, we always need a new iPhone.
But there is a catch. If those interfaces no longer make us fully aware that we are still dealing with complex devices packing weapons-grade computing power, devices that run on numerical and networked reasoning, on algorithms and datasets that have no built-in ethics, we might be in trouble…
…but I still love interfaces.
Interface design has been a large part of my professional life for the last nineteen years. I’ve worked on prototypes for industrial touch screens, websites, applications and mobile apps. During those nineteen years, I’ve seen the internet become a vital part of society, even of humanity. Although roughly just half the global population is online now –which is still three billion people, mind you– it is hard to imagine a world without networked computing and connectivity.
Interfaces have become vital for humans to use all that connected technology: they provide the means of input, output and feedback. In many ways our world has become a networked society, and using interfaces to navigate that network has become part of the basic human skill set.
What does this button do?
In January last year, I started researching the effects that the use of computer interfaces has on us. I wondered whether design conventions that were originally conceived for screen-based computer interfaces would become usable for interaction design in the real (analog) world. The use of tools has changed and shaped human physiology and behaviour since the dawn of man, so the technological revolution surely must have had a similar effect.
Design conventions represent the language that we use to make certain functions clear to people, be it visual, auditory, tactile or kinetic. A great example is the desktop metaphor. Since Xerox PARC introduced the first GUI on the Alto in the 1970s, screen-based interfaces have followed conventions we recognise. Just look at the windows, desktops, waste bins, buttons and tabs: they were all derived from analog office stationery.
But things are moving in the opposite direction now; people are starting to interpret their analog environment in a virtual way. Their behaviour is changing because they have become used to computers and interfaces. An amusing example is the well-known video of a toddler interpreting a printed magazine as a broken iPad.
This is what you might expect when screen interfaces are introduced to young children, but let’s be honest: how many phone numbers do you still know by heart? How many birthdays do you still remember?
Even our language changes. To (re)search something online became ‘Googling’, to send a direct message turned into “(w)apping”. But it is more subtle too: have you ever felt your heart skip a beat at the sound of an incoming email or app notification?
Next Nature
We have grown to have an intimate relationship with technology. For example, we search online for information about medical conditions based on symptoms we have (and mostly find exactly the information that convinces us we will soon die in agonizing pain).
We track our heartbeat with a GPS sports device, we brush our teeth with a Bluetooth toothbrush and we ‘Tinder’ dates for casual sex.
Koert van Mensvoort states that we have now entered an era in which we need to declare this relationship with technology a new manifestation of nature. In short, he thinks that anything that is present on the day you are born can be considered natural. Most of us can’t imagine life without cars or telephones. They have always been there.
In his Next Nature manifesto, Van Mensvoort proposes a ‘Technosphere’, next to the Biosphere and the Troposphere. Why not view traffic jams as a new natural phenomenon? Why not regard a computer virus that runs out of control as a natural disaster? Ponder on it for a while and it makes perfect sense. Things we experience as normal become natural to us. Just like a suburban kid who thinks milk is made in a factory (which it isn’t, I assure you).
With his Nano Supermarket, Van Mensvoort proposes concepts for products that incorporate integrated and assimilated technology, using nanotechnology and biotechnology. Through these concepts he asks questions: would you eat meat from your own body if it could be painlessly reproduced? Would you like a room to automatically change the color of its walls to adapt to your mood? If we choose to have technology take the same place as biology does –and accept that technology will be uncontrollable in some way– how do we prevent human activity from becoming obsolete? And if humans did become obsolete in terms of efficiency and accuracy, would that be a bad thing?
Affirmative, Dave. I read you.
The way Van Mensvoort predicts the future is not entirely new. Actually, Mark Weiser, a computer scientist at Xerox PARC (the lab that built the first graphical user interface, remember?), predicted an era of omnipresent computing long before most of us had even bought a desktop computer.
“Ubiquitous computing names the third wave in computing, just now beginning. First were mainframes, each shared by lots of people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives.”
During one of his talks, Weiser stated some principles for Ubiquitous Computing:
- The purpose of a computer is to help you do something else.
- The best computer is a quiet, invisible servant.
- The more you can do by intuition the smarter you are; the computer should extend your unconscious.
- Technology should create calm.
In many ways, Mark Weiser was a visionary. In 1988, he already predicted we’d be using ‘pads’ and small mobile devices. In 1995, when the internet still contained ‘just’ 100,000 pages, he predicted that every single garage rock band would be online within the next ten years. At that point, it would still be eight years before MySpace launched.
Weiser foresaw computers as an omnipresent part of our lives, where they would play the role of a quiet, invisible servant. A kind of distributed butler. In order to create such ‘Calm Technology’, operating these computers would need to become an intuitive task.
Designing human-machine interfaces
In order to function, a machine needs input of some kind. In order to make a machine suitable for operation by a human being, the interface for operating that machine must be practical and understandable for a human being. It needs to be designed. Industrial designer Egmont Arens was very clear about what –in his view– good interface design was when he stated in 1944:
“The Job isn’t done when you’ve engineered a machine for mechanical perfection. You’ve got to ‘Humaneer’ the machine so it won’t frazzle the nerves of the operator”
The very fact that Arens coined the term Humaneering reveals that he regarded product engineering as a human-centered exercise. In order to create a good human-machine interaction, the interface must be understandable and usable for a human and provide usable data for a computer.
One of the standard ‘manuals’ on interaction design for the web was written by Steve Krug: ‘Don’t Make Me Think’. Although this book was first published back in 2000, it still has a powerful title. Personally, I can’t think of a better way of summarizing the approach most interaction/user experience designers take towards designing interfaces. Because that is what interaction should be, according to Krug: an experience, not a thought process.
By designing an interface that is clear, intuitive, even coercive, you can nudge a user towards the places you, as the designer, want him to go. You don’t want him to have to think about a certain option for longer than a few seconds. Otherwise you run the risk that the disgruntled user abandons your product altogether or chooses something you don’t want him to.
Reduce the friction
At a lecture in July 2015, sociologist Ruben Jacobs pointed me in the direction of Henry Dreyfuss, and suddenly the common ground between product design and computer science became clear to me. Dreyfuss, a renowned product designer in the early fifties, summarized the true goal of good product design in his work “Designing for People”:
“When the point of contact between the product and the people becomes a point of friction, then the designer has failed. On the other hand if people are made safer, more comfortable, more eager to purchase, more efficient — or just plain happier — by contact with the product then the designer has succeeded.”
The similarities between the ideals of Mark Weiser and the statements of Henry Dreyfuss are easy to see. Krug fits in perfectly as well. They all seek a certain design, a certain concept that removes most effort and friction from the interaction with a product. Just enough interaction to have a pleasant experience. The experience should be nice, easy and effortless. As icing on the cake, Dreyfuss adds a commercial incentive: there is money to be made with frictionless product design.
Friendly user
There we have it. Make interfaces easy and enjoyable to use and you will get happy users, and happy users can be customers that are willing to spend money. Dieter Rams, for instance, followed roughly the same principles when he worked for Braun, and it is no secret that Apple products have much in common with Rams’s principles.
Take a look at a random Apple product from the past ten to fifteen years and you will see where this is going. An iPhone does not even come with a manual. It is so intuitively built, you can start using it straight away. Its input is not only provided by gesture, but is based on the pictures we take, the locations we visit, the people we communicate with, the things we read, write and buy, our heartbeat, and so on. Interaction has become a sort of relationship with the device. Yet, when defining interaction between human and interface, we still refer to the human as a ‘user’: user stories, user journeys, user input… The computers we use often no longer feel like machines, so why treat the operators as users?
Here lies the strange paradox between the way computer engineers and interaction designers see their public and the way Egmont Arens talked about Humaneering in 1944. We think user-centered in order to humaneer an interface. When designing an interface, we like to see these humans as users. This implies that only the actual point of contact between the human and the product is relevant. Not the person, but the role this person plays, seems to be the main focus of the design process.
Calm technology?
So, the approach in interface design is mainly functionalist. It is focused on the goal of the product itself and the point of contact between the human and the product. The effect the product has in the long run and the possible risks of its use are mostly kept out of the equation. The product should do what it is designed to do. That’s it.
I think this way of approaching interface design is becoming somewhat crooked. If we keep regarding the people we design interfaces for as users, we are missing the point Mark Weiser tried to make: accepting computers as an integral part of human life, society and the world demands a more holistic approach. Treating humans as users reduces those humans to a problem between the chair and the keyboard (or touchscreen).
I’m sorry, Dave. I’m afraid I can’t do that.
And this is exactly where the danger lies, according to Nicholas Carr. In one of his books, The Shallows, Carr tries to convince us that the internet is meddling with our brains. In the book, he describes the complexity and ambiguity of the human brain and the influence that logically reasoned computing has on it. By surrendering to the logic of computers, we are losing the finesse of basic human skills.
It is exactly this point that Richard Sennett tried to make during his 2011 Premsela Lecture “Out of Touch”:
“By working in a skillful way with resistance rather than fighting against the presence of the impediment, the artist or scientist can turn outward rather than inward, connecting with the world in all its roughness, hardness and difficulty.
There’s a strong contrast here with certain everyday experiences of resistance.
The more ordinary impulse is to reduce resistance by making it disappear from consciousness. ‘User-friendly’ computer programmes, for instance, do not correspond to a musician’s earned, learned ease, nor are they designed to promote the skill of deploying minimum force.”
Sennett thinks we have lost subtle skills and senses through user-friendly products. We become less educated, less aware of our surroundings.
And we would become less happy too, according to psychologist Mihaly Csikszentmihalyi.
In his theory of flow, Csikszentmihalyi states that in order to be happy from time to time, we need to work our asses off, to invest time, attention and effort in the things we love to do. We need to be pushed right up to the boundaries of our skills in order to truly appreciate the effort of work. Work makes us happy, work makes us feel useful. Would that still be possible if we even leave the choice of our future spouse to an algorithm? Reasoning from the ‘user’ paradigm, that would be perfectly alright. Reasoning from the ‘human’ perspective though…
Side effects
This September, I decided to run an experiment to see if people could be coerced into taking “the long way round”. I wanted to see if I could have them choose a less easy way of doing things and, while they were at it, make them feel good about it.
As an experiment, I wanted to see if people were prepared to take their bikes and shop for groceries that are locally produced, instead of taking their cars to the supermarket. Together with a group of multimedia design students, I devised a mobile app that would plot local sellers of groceries on a map and generate a cycling route based on the ingredients of a certain recipe. While using the app, people would get a workout, they would consume local products, which is presumably better for the environment, and they would get to know people from their neighbourhood.
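To give an idea of what such an app boils down to, here is a minimal sketch of the core logic in Python. The seller data, the names and the simple nearest-neighbour routing are hypothetical stand-ins for illustration; the actual student-built app differed in its details.

```python
# A minimal sketch of the app's core logic: match each recipe ingredient to a
# nearby local seller and order the stops into a simple cycling route.
# All sellers, names and coordinates below are invented for illustration.
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

# Hypothetical local sellers: ingredient -> list of (name, (lat, lon))
SELLERS = {
    "eggs":     [("Farm Jansen", (52.091, 5.122))],
    "tomatoes": [("Garden de Vries", (52.085, 5.110)), ("Plot 12", (52.098, 5.131))],
    "cheese":   [("Cheese shed", (52.080, 5.140))],
}

def plan_route(ingredients, home):
    """Pick the closest seller per ingredient, then order the stops greedily."""
    stops = []
    for item in ingredients:
        options = SELLERS.get(item, [])
        if options:
            stops.append(min(options, key=lambda s: haversine_km(home, s[1])))
    route, current, remaining = [], home, stops[:]
    while remaining:
        nxt = min(remaining, key=lambda s: haversine_km(current, s[1]))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt[1]
    return route

if __name__ == "__main__":
    for name, pos in plan_route(["eggs", "tomatoes", "cheese"], home=(52.090, 5.120)):
        print(name, pos)
```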
Although the reactions to the app itself were mainly positive, we were confronted with a very unpleasant side effect. The locations of the sellers would reveal the fact that those people were making money selling things. As the Netherlands has quite strict rules on running a business, this would give the Dutch tax authority a very handy tool for collecting some extra VAT and income taxes.
Product recoil
That’s what I’d like to call the recoil of a product. One could argue that it is up to individual people to choose such a service, just as it is up to individual people in most US states to choose to buy a gun. But the real effect of such a choice mostly becomes clear after the trigger has been pulled for the first time.
The pastor who committed suicide after the Ashley Madison hack probably did not take into consideration the severe consequences such a hack might have when he signed up (in a really user-friendly way, I might add).
I’m sure most of you did not have an Ashley Madison account, but most of you do have online data that you wouldn’t like to end up in the public domain. I’m not just talking about medical data; even simple stuff like a search history can turn out quite embarrassing from time to time, I can tell you.
But it is not just leaks and hacks that are problematic. Because the storage and transfer of data have become ridiculously cheap, a new kind of economy has emerged: data brokerage. Companies harvest and trade data for use in advertising, often entirely without our knowledge or consent.
A few years ago, there was public outcry over a list of names and addresses of rape victims being offered for sale. Another striking example is the father who was in for a nasty surprise because Wal-Mart’s marketing algorithms had concluded his teen daughter was pregnant: based on her purchasing profile, Wal-Mart sent discount coupons for maternity items to his address. This illustrates the real power (and danger) of automated data profiling. What would happen if those algorithms got their ‘hands’ on the data from your Bluetooth toothbrush and your medical insurance company decided to deny you a dental plan because, according to the data, you didn’t brush your teeth properly?
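To make concrete how crude such profiling can be, here is a toy sketch. The product categories, weights and threshold are invented for this example and are not any retailer’s actual model; they merely show how little it takes to infer something intimate from a shopping history.

```python
# A toy illustration of purchase-based profiling. Categories, weights and the
# threshold are invented for this example; no retailer's real model is shown.
PREGNANCY_SIGNALS = {
    "unscented lotion": 0.30,
    "calcium supplements": 0.25,
    "cotton balls": 0.15,
    "large tote bag": 0.10,
}

def profile_score(purchase_history):
    """Sum the weights of every 'signal' product the customer bought."""
    return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in purchase_history)

history = ["bread", "unscented lotion", "calcium supplements", "cotton balls"]
if profile_score(history) > 0.5:
    print("send maternity coupons")   # the nasty surprise in the mailbox
```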
This is what privacy is all about: being sure personal, intimate data (be it digital or analog) remains hidden from others unless you explicitly choose otherwise. It is partly naivety and partly ignorance that makes most people shrug and move on like nothing happened. And most interface designers and developers do not seem to really care either. It is not in the interest of their clients and stakeholders to worry much about the risks that the huge amounts of data they store and push through their networks pose for individual users. Most social media services are free because we are actually the product: Facebook is not so much a social media tool as an advertising machine. But still, 1.5 billion people do not seem to care.
Now what?
So it actually boils down to this: while being confronted with interfaces on a daily basis, most humans cannot grasp the leverage and risks of the complex systems that well-designed interfaces have camouflaged. Some software developers are taking advantage of this fact and making a lot of money while they are at it. Software criminals and security agencies make good use of it too. Data is worth a lot in terms of money and power.
Most interface designers, on the other hand, don’t really know or don’t really care about the long-term effects of their designs. The risks and side effects of “user-friendliness” are not an issue, as making user-friendly interfaces is a fun job and can provide you with a decent income. So who’s your daddy?
Algorithm and Blues
Here is the dilemma: if you want to make a living designing interfaces, you will need to build the best interfaces you possibly can. There will be enough work for you, as most products are becoming increasingly complex because they are ‘smart’ and ‘connected’ now. If you consider the interface to be a part of that product (which it is), it should reduce friction between the human that uses it and the product itself. But this very act will hide its complexity, leverage and possible side effects. If the true power and risks of a product are no longer completely clear, the user becomes a kind of Sorcerer’s Apprentice. So what are we going to do? There seems to be no best choice here.
But before we all get really depressed, let’s be reasonable first. I’ve not given up on the internet yet. We can still fix this, I think. Only, it would be unfair to leave it all up to designers.
As far as I can see now, we need an integrated four-way approach.
1. Legislation
Although it took some time and effort to introduce them, road markings, signs and speed limits made traffic a lot safer. Trying to fit this analogy onto the internet is very hard, though. Mark Weiser acknowledged the privacy risks networked computing could pose and advised discussing them with the public and policy makers. But legislation is slow and national, while the internet is fast and global. Speaking of which, we haven’t even managed to drive on the same side of the road on this planet yet…
The European cookie law is a good example of why legislation will not provide a solution by itself. The law states that people need to be made aware of the fact that tracking cookies can be present on a site. They have to give their consent via an annoying pop-up window. If you don’t agree, you can’t use the site fully. The same goes for accepting privacy statements and the terms and conditions of software: mostly they are lengthy pieces of legal mumbo-jumbo. You have to accept them to use certain pieces of software. If you don’t, you can’t use the software. That, in short, is a kind of blackmail. There is no middle ground.
Eventually, I expect legislation to catch up, but it is going to take time. It is important to keep that debate going.
2. Education
We need to educate ourselves, our students and especially our children. Teaching digital literacy at a very early age helps just as much as teaching children how to cross a street. However well designed future products and their interfaces may become, it remains important to show and teach what real power and networks these devices harness.
Organisations like Bits of Freedom are trying to keep the discussion on privacy issues and net neutrality going, while pushing and lobbying for better legislation. Another great example is the Critical Engineering Manifesto, which tries to educate engineers and designers in a more ethical and critical way.
3. Architecture
Some of the problems stated above have much to do with how the internet has been constructed. Some choices that were made when the World Wide Web was conceived still have direct consequences for problems we face today. In an interesting lecture, Vinay Gupta explains that because the internet’s predecessor (ARPANET) was put together by the US military, data encryption did not come as standard. He uses the term ‘army surplus hardware’. The US did not want data encryption to end up in the wrong hands. While encryption is available now, it still is not an integrated part of most people’s lives. And it should be, if we value our privacy. But there is hope.
With the introduction of Bitcoin, blockchain technology became more mainstream. A blockchain stores data in a distributed way and uses cryptography to make ownership of that data verifiable. That would be a step forward when it comes to online ownership and identity. It could make ownership of data safer and easier to control and, with that, it might provide possibilities for a better data-based identity.
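For the non-technical reader, here is a deliberately minimal sketch of the core idea: each block carries the hash of its predecessor, so tampering with any record breaks the chain and can be detected. Real systems such as Bitcoin add distribution, consensus and digital signatures, none of which is shown here.

```python
# A minimal sketch of the hash-chain idea behind a blockchain.
# Distribution, consensus and signatures are deliberately left out.
import hashlib, json, time

def block_hash(block):
    """Hash the block's contents (everything except the stored hash itself)."""
    body = {k: block[k] for k in ("timestamp", "data", "prev_hash")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify(chain):
    """A chain is valid if every block is unmodified and points to its predecessor."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                      # the record itself was tampered with
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # the link to the previous block is broken
    return True

chain = [make_block("genesis", prev_hash="0")]
chain.append(make_block({"owner": "alice", "record": "toothbrush log"}, chain[-1]["hash"]))
chain.append(make_block({"owner": "alice", "record": "heart-rate data"}, chain[-1]["hash"]))
print(verify(chain))                 # True
chain[1]["data"] = {"owner": "eve", "record": "toothbrush log"}   # tamper with a record
print(verify(chain))                 # False: the tampering is detectable
```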
4. Truly human-friendly interfaces: real calm technology
Here is where you, the designers, will have to make the difference.
Bring in the cavalry. To give you an idea: some designers and artists have already done interesting experiments using tactile and kinetic installations that reveal the inner workings and effects of virtual products. In 1995, Natalie Jeremijenko created the installation ‘Live Wire’ at the Xerox PARC building. Based on the amount of data going through a certain network, a stepper motor would tug at a piece of string hanging from the ceiling. The string would oscillate and reveal the network activity. Mark Weiser called this installation a great example of calm technology.
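The mapping behind such an installation can be surprisingly simple. The sketch below uses a simulated traffic reader instead of real interface counters and a printout instead of an actual motor, but it shows the kind of translation from network throughput to physical movement that Live Wire performed.

```python
# A sketch of the Live Wire idea: sample network throughput and translate it
# into physical movement. The traffic reader is simulated and the 'motor' is
# just a printout; a real installation would read interface counters and
# drive an actual stepper motor.
import random
import time

def read_bytes_per_second():
    # Stand-in for reading real interface counters (e.g. /proc/net/dev on Linux).
    return random.randint(0, 1_000_000)

def traffic_to_twitch_hz(bps, max_bps=1_000_000, max_hz=20.0):
    """Map throughput linearly onto a 0-20 Hz twitch rate for the string."""
    return max_hz * min(bps, max_bps) / max_bps

for _ in range(5):
    bps = read_bytes_per_second()
    print(f"{bps:>9} B/s -> twitch string at {traffic_to_twitch_hz(bps):.1f} Hz")
    time.sleep(0.2)
```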
Another interesting project is the Urban WiFi project by Immaterials. Using a WiFi receiver, a pole with LEDs and a camera with a long exposure time, the strength of WiFi signals in an urban area is revealed. Richard Vijgen goes even further with his Architecture of Radio, an app that shows all ambient radio signals at a certain location.
Space Caviar produced the RAM House to propose new ways of approaching design in the age of wireless networked technology. The house is de facto a Faraday cage that blocks all radio signals. To make a call or send an e-mail, you will need to open a window or a door. It proposes an extremely physical and tactile way of negotiating networked and wireless technology.
These examples can help us visualise the way we could approach interface design for the next five to ten years. What we have to do is design interfaces in a truly human-centered way. If we accept powerful technology as a natural phenomenon, it means that when designing for interaction with this technology, we should not only focus on the point of contact, but on the complete role a product will play in our lives. For better ánd worse. We need to grasp the half-life of the products we make and we need to take into account everything that might happen in terms of data safety and human behaviour. Next, we need to make interfaces that reveal the leverage of the product, without frazzling the nerves of the human that uses it.
Only that would be real calm technology.
This is an open publication; I will keep updating and editing it over the next few weeks. The current version, 0.6, is an open document on Google Docs. Feedback, corrections and additions are highly appreciated.