I’m inspired to write this week by the experience of a friend, who fell prey last week to cybercriminals. I’ll come to her experience, and the thoughts it prompts, below. But let me start with three contextual issues that frame responses to what happened to her and happens to others.
The scale of cybercrime
The first is that cybercrime’s a big issue. Most close observers see it, and its parent theme, cybersecurity, as among the biggest problems now in digital policy development.
Fraud’s now the commonest crime in my country, the UK, and most of it’s online, causing economic harm and individual misery. Its impact reaches deep into social welfare and breeds mistrust in online services. The level of concern about it’s justified.
Online and offline equivalence
Much is rightly made of the UN’s declaration that rights are equivalent online and offline. That becomes more important as the online world gets more pervasive. The same principle of equivalence logically applies in other areas, not least in law.
But equivalence in principle does not readily translate into equivalence in practice. The mechanisms of online fraud differ from those of fraud offline, and so do the protections and investigative methods needed to tackle it. This poses challenges for law enforcement and raises issues for regulation that affect other protective rights, including privacy.
Public policymaking
My third point’s concerned with the ways in which these issues are discussed. I’ve been to many internet governance fora where cybercrime’s a theme, not least the IGF. I’ve heard from government and intelligence agencies, police officers and lawyers, technologists, economists and sociologists. But rarely in those fora, almost never, have I heard the voice of victims.
It’s an axiom in other areas of policymaking that we should listen to and learn from the experience of those who have been, or will be, directly affected by the problem concerned and by the policy prescriptions that might then result.
We need to understand much more than the technology behind criminal entrapment, or generalised statistics. To prevent, investigate and deal with the incidence and impact of cybercrime, we need to understand why individuals succumb to it and the effect it has on their and others’ lives. We need to listen more to victims’ voices.
A victim’s tale
So let me turn to one.
My friend’s not someone I would expect to fall for a scam. She’s sharp-witted and cautious by instinct. So why did she?
Well, it wasn’t any simple scam. It was highly sophisticated. Her scammers claimed to be from her Internet service provider and then her bank. They convinced her that her internet and finances were compromised. They knew a great deal about her, and knew how to find out more from her without her realising. And when they did, they showed her fake information about her internet and finances on fake websites that were extremely credible. One of these faked the police’s own reporting site for online fraud.
This isn’t petty crime, of course, but organised, large-scale and international. Cybercriminals are as innovative as any other digital technologists. Their success depends on their being ahead of those policing them, so it’s no surprise that they’re far more digitally aware than individuals who aren’t tech specialists or especially tech savvy: the vast majority of people.
What convinced my friend her predators were genuine matters if we’re thinking about how to protect people from fraud. In her case, much of it was the fear, instilled by them, that she’d already been attacked, and the credibility of the “evidence” of this that she was shown.
The extent to which financial systems are capable of intervening to protect victims matters greatly too. And so does the impact of the experience of falling victim – impact that’s psychological as well as economic, that impinges on trust in online services as well as trust in other people, that affects individuals but also (because these crimes are common) social interaction, social welfare and social stability.
My friend was lucky, in some ways. She lost some money, but it could have been much worse; and for many people losing the amount she lost would indeed have been much worse.
Six points
The tale I’ve told, of course, is not uncommon. But my friend’s experience has made me think again about the ways in which our public policy responds to cybercrime. Six points, therefore, arise from it.
Listening to victims
Cybercrime’s not just a cyber crime: it’s linked to wider criminality, and it impacts on individual welfare, social coherence, economic opportunity and even trust in governance. Policymakers need to understand the economics that underpin criminality, and the deficiencies in other cyber policies – for instance, data privacy and online security – that do much to enable it.
To do so they need to understand victims’ experience. What makes individuals and businesses more or less vulnerable? Why did they respond as they did, and what might have made a difference? What have been the effects on victims’ lives and what needs to be done to tackle those effects? What impact has there been on victims’ long-term trust in digital technologies and services?
Listening to victims doesn’t only matter, obviously, where fraud’s concerned. I’m using fraud here as a lens. It’s just as critical, and just as under-represented, in public policy in areas of harassment and bullying, of gender and identity-based violence, and of disinformation (of which much fraud’s a kind). Policymakers can’t understand the experience of falling victim from statistics only; they need to hear the victims' voices.
A responsibility of all stakeholders
This isn’t just a matter for some stakeholders, but all; nor is it just for stakeholders within the digital environment.
Clearly it’s a matter for legislators, for policing, for the legal system. Fraud’s obviously important for financial services, particularly banks (and their customers and shareholders), who’ll bear the brunt of losses that aren’t left with victims.
It’s a responsibility, too, for online businesses who should address the risks posed by and through the services from which they profit. They shouldn't merely shrug their shoulders, and in the long run can't afford to.
And it’s a responsibility for wider areas of governance. Victims experience financial loss and loss of confidence. Fraud affects mental health and family coherence (as blame’s attributed within the family). These outcomes touch non-digital policy areas, including health and social welfare. Policymaking about the impacts of fraud needs to incorporate and address them, not focus solely on technology and money.
Digital literacy is not enough
Policymakers put a lot of emphasis on digital literacy as a way of reducing fraud. If people were more savvy, it’s suggested, they’d be at lower risk. In effect, this places the responsibility on users, on (real and potential) victims.
Of course, digital literacy’s important. It should be part of school curricula, and it should be promoted by online service providers far more than it is. But it’s not enough. Reliance on it misunderstands how people lead their lives online.
Digital literacy campaigns don’t reach most people – they’re too busy, they have other priorities (work, childcare, coping), they don’t think it’s going to happen to them. Campaigns especially don’t reach those who are most vulnerable: the digitally inexperienced, the elderly whose lives have been non-digital, those with other problems that are more pressing and require their focus.
And even those who are quite digi-savvy, like my friend, can be caught out by smart people using smart technology to do smart crime.
This is an issue for inclusion and equality
Protection against cybercrime (whether fraud or risks to welfare such as harassment) should be part of access strategies.
Ensuring universal access to the internet and online services has great advantages. It enables people to do many things that they could not previously do or to do things more efficiently. But it also exposes new users (in particular) to threats that differ from their previous experience.
This has two impacts. We know that some potential users on the margins are deterred by the risks that they perceive. And such potential users are also, in practice, likely to be more vulnerable than others because they lack experience.
The need to protect vulnerable users becomes more acute where governments and businesses are seeking to make existing and new services digital-by-preference or digital-by-default, including services directed at the vulnerable. My friend’s experience with the fake websites generated by her predators shows why.
Moving services online is often seen by governments and businesses as saving costs. In practice, to address vulnerability, it requires investment in user education, consumer protection, helplines for the anxious and the inexperienced, and strong measures of redress.
Prevention is better than cure
Technologists and management consultants often talk about “solutions” – new tech that seeks to solve old problems, or new problems that arise from tech itself. Tech companies often offer such “solutions” to problems like those described above.
New technology encourages experimentation. This is generally a virtue, but not all who experiment with it have the general public’s interest at heart. Cybercriminals identify and exploit opportunities in technical innovations more quickly than service providers respond to them; the latter are running to catch up. More attention to prediction, prevention and consumer protection’s needed before services go live.
And it is multifaceted
And, finally, service design by governments and businesses in this context needs to be multifaceted, as does policymaking by governments and government agencies.
They should begin by recognising vulnerability, its causes and effects. They should anticipate what might go wrong; explore and build in deterrence, prevention and protection when developing their services. They should alert users to potential risks in ways that cut through to them (many believe that others are at risk, not them) and offer easy access to help and guidance for those who’re targeted or think they’re being targeted.
They should enable redress, including the recovery of stolen assets; and they should support personal recovery, from the psychological harm and loss of confidence arising from victims’ experience. I’ve used fraud here as an example, but the responsibility to do this arises just as much in cases of harm from harassment, abuse and gender or identity-based violence.
In conclusion
I’ve used an individual instance in this blog to highlight broader issues. The benefits arising from a digital society are obviously substantial, but so are risks that need to be alleviated if those benefits are to accrue. I’ll end with four suggested points of principle that could underlie more thinking in this area.
First, internet businesses and governments should recognise the complexity of harms, and especially the way that innovations that are generally beneficial can also be turned against consumers by those who wish to do them harm. Consumers’ losses should not be treated as collateral damage but as a challenge of complexity that needs to be resolved.
Second, businesses and governments should recognise that making rights and laws equivalent online and offline requires different strategic approaches to enabling and enforcing them online and offline. This raises complex issues concerning governance and rights, which require sophisticated assessment and policy development, not denial or kneejerk reactions. And it requires cooperation between technologists, businesses, judicial authorities and governments, within and beyond national boundaries.
Third, listening to victims is an essential part of understanding what is happening and why, what is going right and wrong, and what might be done about it through service design, business practice and public policy.
And fourth, internet businesses and governments need to keep the changing nature of technology, its benefits and risks, in mind as those who wish to exploit the digital society to the disbenefit of others get smarter and adapt to each new opportunity.
Image: Online Fraud by Don Hankins via Flickr (CC BY 2.0)