Each week David Souter comments on an important issue for APC members and others concerned about the Information Society. This week’s blog is about our changing relationship with mobile phones.
The way people use mobiles and other ICT devices is the most dramatic change in life and lifestyle in my lifetime. Mobility is at the heart of that. Fifty years back, you needed a big office to hold a mainframe computer with any kind of power. Thirty years ago, accessing data tied us to our desks and desktops. Today, we walk the streets with handheld devices that have far more computing power than those office-bound mainframes of old.
My theme this week is how our attitudes to mobile phones have changed. I’ll suggest four phases, and then ask how comfortable we are with likely next directions.
Of course, we should bear in mind that the experience of intensive mobile users isn’t universal. Those who’ve lived through these four phases have mostly lived in developed countries, or been part of wealthy social groups. But the changes I’ll describe have shaped the way mobiles are used by everyone who has one now, and will shape how people use them in the future.
Phase one – brick-like mobile phones as status symbols
In the early days of mobiles in developed countries, the late ‘80s and the early ‘90s, they were signifiers of the self-important and the self-employed. They were bulky and expensive, complements to rather than substitutes for fixed telephony, used mainly by those, such as freelance workers, who needed them for specific tasks. Often they were tied to cars.
I worked back then for a trade union, and at conferences I carried the union’s mobile phone around with me. This gave me power and status. I was the gateway for other union officers who wanted to make calls. And I was seen by other conference-goers to be trusted by my union with this important new resource. That brick-like mobile phone carried real cachet. I carried it with pride.
Phase two – feature phones as aids to living
That’s how it was until mobile phones became smaller, cheaper and more widespread, and gained more features – became feature phones, in fact. Then they became more useful and more varied aids to living: more than just phones, but also means of access to information resources, radios and personal organisers.
My first memory of someone using their mobile to access the Internet was at the World Summit on the Information Society in Geneva, 2003. When a friend and I couldn’t find the restaurant we were looking for, she searched for it on her new upmarket mobile. (Later that evening, we discussed the then-predicted ‘death of the book’, which I’ll come back to in a later post.)
I like a fictional analogy. Our use of feature phones, I thought then and still think now, was like that of ‘tricorders’ in the TV series Star Trek. For those who’re unfamiliar: when our intrepid heroes explored new planets in that series, they carried handheld devices called tricorders which they used to find whatever information they needed at the time – whether the planet was radioactive, for example, or whether other lifeforms were present. Tricorders were really plot devices: ways of explaining what was happening without having to go into detail. That made them the ultimate handheld personal assistants, and that’s what feature phones became.
Phase three – smartphone dependency
We’re now in the age of smartphones, and our behaviour’s different again. Smartphones are much, much more than phones, with apps that emulate (and sometimes replace) devices that were previously entirely separate. Smartphones are not just radios, but also televisions and pocket cinemas. Not just access routes to information, but also cameras, torches, calculators, games consoles, music players, book readers, and many, many other things. Not just one-to-one communicators, but indispensable intermediaries with the interactive Internet, especially the social networks that have become central to their users’ lives.
We used feature phones when we needed something specific. We use smartphones by default. If I sit on London’s underground railway, half of my fellow passengers will be absorbed in mobiles. And there’s no connectivity on most of London’s underground, so they’re not using them as phones but as companions.
My fictional analogy for this behaviour comes from Philip Pullman’s His Dark Materials novels. In the universes of those novels, people have ‘daemons’, animal-shaped companions which are physically separate but psychologically part of them. They’re in constant communion with their daemons, which are, in practice, something like their souls. Separation from them, even for the shortest distance or the shortest time, causes mental anguish and physical pain. Remember how you felt last time you couldn’t find your smartphone?
We’ve moved, in short, beyond mobiles as information aids towards mobiles as companions. We use them constantly as interfaces with the wider world – with our friends on social media, with our work colleagues, as information resources and entertainment platforms. We have become dependent on them.
Phase four – so what comes next?
It’s time, therefore, to wonder what comes next: what might the fourth phase of our mobile lives look like? One possibility is that we integrate devices more closely with our physical as well as our mental selves. We’ve already seen some signs of this.
Google Glass, for example, was a very public attempt to integrate computing with our physiognomy. Too clumsy to appeal to many people, yes; too obvious, too nerdy. (To Star Trek fans, it made its users look too like the Borg.) But the implication of its failure is that the next attempt to bring together head and hard drive will be more subtle, more unobtrusive, more integrated with our bodies and our senses. Contact lenses, maybe? Samsung’s apparently already on the case.
Apple’s attempt to drive its customers towards Bluetooth-enabled earphones looks similar. How long before we’re offered earphones invisibly located in the ear itself, like today’s superior hearing aids?
Wearable RFID tags are widely used to monitor our whereabouts and give us access at conferences and concerts. People have accepted this with scarcely a demur. But there’s an obvious next step. At least one company, in Sweden, has tried out inserting chips under the skin to give employees access to company premises and facilities. That raises questions about ethics and employment rights, but can be seen, too, as just another step along a road already travelled – by spectacles and contact lenses, hearing aids, heart monitors, prostheses, digital watches and other wearable computers.
How far do we want to go?
My point, in this post, is that, over thirty years, we’ve seen extremely rapid change in the technology of mobile phones (from 2G to 4G and beyond), their character (from bricks to smartphones) and our attitudes towards them (from status symbols to information aids to indispensable companions).
For many people now, mobiles are integral to most aspects of life. Their users have become dependent on them, and are likely to become more dependent as mobile devices become more powerful and more capable, delivering even more of what we need to live our lives. Technology will continue to improve, and businesses will continue to innovate in pursuit of richer customer experience and corporate gain.
So my question is: how far do we want this to go? How much more dependent do we want to be? And how much say will we actually have, as users? Will we insist on keeping our devices physically separate, tethered to us by cables or by Bluetooth? Or will we accept the convenience of physical integration – of being online ourselves rather than just carrying online devices? How much more dependent would that make us on those who supply and monitor technology? What would it do to privacy, or to our relationships? How would we deal with never knowing if what we say and do is being digitally observed/recorded/analysed, not just by governments or tech companies but by the chips in other people’s eyes and handshakes?
It’s often said that today’s science fiction is tomorrow’s innovation. Those steps towards greater integration between our physical and digital selves that we’ve already seen – like Google Glass – should make us think about the ethical and regulatory frameworks that will be needed in the future.
Next week, I’ll look at another aspect of our changing attitudes in the digital age – to data, data management and privacy.
Image used under Creative Commons licence, available here.