Power and publication: an interview with Onora O’Neill

The Editors
October 28, 2013
KR Interviews

“…one of the things you’ve got to ask about these  technologies is: what can they do to the weakest people, and what  advantages do they give the strongest institutions? We’ve spent twenty  years thinking romantically about the internet – in effect “oh what a  lovely playground” – and I think we’re beginning to see now that it’s being colonised, it’s becoming an arena of power.”

Last week King’s Review talked to Baroness Onora O’Neill, chair of the Equality and Human Rights Commission, crossbench member of the House  of Lords, and Professor Emeritus of Philosophy at Cambridge. Our  discussion began with O’Neill’s evidence to the Leveson Inquiry into the  culture, practices and ethics of the British press, and moved on to  consider the ethics of publication, privacy, anonymity and  data-gathering online in light of the recent GCHQ/NSA scandal brought to  light by the Guardian’s investigative journalism. We hope you enjoy the conversation.


The government recently proposed a Royal Charter, which  would oversee a new press regulator. This step has been described by  some journalists as a shackling of the press, as a regressive step. Do  you think this description is accurate?

Baroness O’Neill:

No, I think that the press has  been accustomed to a particular form of self-regulation, which I would  call self-interested regulation, and the bodies we have had, the PCC  [Press Complaints Commission] and its predecessors, have been  demonstrably ineffective. I don’t say this because there were criminal  acts – there was (by definition) legislation against those acts – but  because their capacity to sanction their members turned out to be too  slight, because the culture ran away. Even their complaints procedure  was ineffective. It’s extraordinary, because up and down the country we  have businesses that have quite effective complaints procedures for good  reputational reasons. The PCC didn’t. They established a very narrow  gateway for what could come in as a complaint and then they did what  they called “reconciliation” – basically they were nice to people until  they went away – and they hardly ever printed any substantive  corrections or apologies, so even that most elementary function wasn’t  being handled well. So I think you have to say that it might be ideal to  have a competent, self-regulating body. But it’s pretty clear, after  decades during which everybody’s agreed that this is what you want, that  you can’t get it just by letting the press regulate themselves.

As for the question of shackles, that is being said by a lot of  people in the media today, and if it were proposed that there should be a  state-controlled body that could censor content, the metaphor might be  appropriate. Nobody has proposed that.

In your 2011 Reuters Memorial lecture you cited Tom Paine’s  1806 essay, in which he writes that “Nothing is more common with  printers, especially of newspapers, than the continual cry of the  Liberty of the Press, as if because they are printers they are to have  more privileges than other people”. Do you think that part of the  problem thus far has been the monopoly on the debate about press freedom  held by the press itself, which has led to a biased conversation?

I think there is quite a complicated set of reasons why the debate is so essentially trivial within the media. One reason is that it’s obviously quite difficult even for editors who are very worried about what’s happened to side with Leveson. It’s also clearly very, very difficult for politicians to take a stand, especially if you think about the process that’s being used: the Royal Charter proposals. As I see the process, first, Cameron, rightly, seeing that things had got out of hand, set up an inquiry that was to be both retrospective and prospective. The retrospective task could not be done properly by Leveson because so much of it was sub judice – and as you know the trials are only beginning in some cases. On the prospective task he was criticised before he’d said a word, and a very typical comment was that there should be no press regulation; the press should be free to say what they want within the law. Now that is in my view a completely inappropriate intervention in a debate about policy, because the point is to debate what the law should be. We all agree that action has to be “within the law”. The question is what those laws should be. And Leveson obviously takes to heart all the classical arguments since the seventeenth century about censorship and prior restraint, and he’s looking for a way in which you take those arguments seriously but don’t have Mickey Mouse regulation.

Given that the debate has been set back by the self-interest  of both the press and politicians, what role do you think there is for  academics in this debate?

Well, of course Leveson did take a lot of evidence from academics and  he actually asked to hear some political philosophers. I also persuaded  a group of colleagues to give evidence and I’m aware of a lot of  academics, in a number of disciplines, who are thinking about media  freedom in a way that they were not a few years ago. I started writing  on it in, I think, 2001, and it was a curiously dusty, old-fashioned  topic at the time, as though all that was done and dusted – “we don’t  need to think about it: of course we want a free press and it’s obvious  what it is”. And it was only when I started looking more carefully at  the classical arguments that I began to see that we in fact have some  radically different arguments for press freedom jostling out there.

My perception has been that politicians are frightened. Why do we  have a cross-party agreement and not a parliamentary vote? I think it’s  like having three small children on a diving board where jumping into  the deep water is quite frightening. So they hold hands. When I first  heard the proposal to use the Privy Council I thought “that’s really  odd”. I’ve come to think that doing so has a certain merit, in that it  puts a supermajority in the way of subsequent parliamentary tampering  with the system once it’s established. So while I wouldn’t generally  think we should do things through the Privy Council, there may be a  point here.

One could say that the phone hacking scandal created an  opportunity for politicians to look at this issue. What do you think  gave academics the opportunity around 2001 to start reconsidering press  freedom? Was it considerations about the Internet?

I think a lot of people thought that press standards had been on the slide in the UK, and more on the slide than in some other countries. I think of serious journalists like John Lloyd and Andrew Marr writing about it, and you can see that they’re extremely worried at the increasing dominance in the media of scandal and innuendo; the blurring of any distinction between news and comment had become much more commonplace. So I think there were routine worries that had been growing for some years. What’s interesting is that those worries preceded the widespread use of the Internet. There were worries about the print media, and in this country about the contrast between the print media and the broadcast media, where we do have forms of process regulation that have generally produced more reliable standards of journalism. Now these standards are never going to be 100% effective, but the trend in the print media had got a lot of serious journalists and other people worried.

There was also one other political event: in 2003 the Communications  Act lowered the threshold for anti-monopoly provisions so it was  possible to concentrate ownership more. Now, the Leveson Inquiry, and  what’s gone on since, has not actually addressed this question. But I  think it’s a very serious issue. We already have print media ownership  that (1) is highly concentrated, (2) very largely consists of rich  individual proprietors who in many cases are not citizens and not  taxpayers, and I think this is probably an unhealthy situation. We may  have thought that the domination of press barons 80 years ago was an  unhealthy situation, but I worry about where we’ve got to now.

One of the by-products of the phone-hacking scandal and the  Leveson Inquiry has been the crumbling of Murdoch’s media empire. But is  there not something to be said for having substantial financial backing  behind some news organisations in order to counter the huge resources  of, for instance, the state and security agencies? This brings us to the  NSA leaks that the Guardian has published. Is it not important to have news organisations that are able to effectively counterbalance state power?

Well that’s essentially the argument for the importance of  investigative journalism. And I accept that argument. But I think it’s  worth noting that very little of what the media do is genuine  investigative journalism. So it’s not a very good argument for the media  as they are. It’s quite common for people to claim that what they’re  doing is investigative journalism, but when you look at the standards  they are bringing to it, you become a bit doubtful. Nevertheless, there  is a serious argument, and that’s why I would think that we do need a  public interest defence for keeping your sources secret where it is  genuine investigative journalism.

Let me go first to the general case on sources: basically good  journalists, of course, declare their sources. It’s the way they make  what they write credible. And so it is the default for good journalists  to tell their readers “these are my sources, here is the evidence,  here’s the photograph, here’s the quotation”. But there is the case  where you would need, as an exception, to keep your sources hidden. And a  very standard way of doing that would be not to publish your source or  your evidence but for example to tell your editor, “I reckon this is a  reliable source”, and you just have some limited trail that shows that  you weren’t just inventing it. I first became interested in this when a  journalist I met in Cambridge seven or eight years ago told me something  that I happened to know to be false. So I asked her “would you like to  tell me your source on that?” And she said, in the most po-faced way,  “you wouldn’t want me to reveal my sources”. I thought, well either you  have no source or someone’s been having you on. And it was a very  trivial matter, but she hadn’t got an accurate source and was putting up  this screen as an automatic reflex. So we do need to do something to  protect genuine investigative journalism. But I would say that what it  needs most is to regain its reputation – and no longer to be  identifiable with intrusive tittle-tattle.

That’s rather separate from the issue of breaching confidentiality,  be it for commercial, professional or security reasons. That’s a further  matter, but I think we do need protection for investigative journalism.

A recent prominent example of investigative journalism is the Guardian’s  publication of information about the NSA’s Prism surveillance  programme. This has raised slightly different questions about the ethics  of journalism and the public interest than those resulting from phone  hacking. Do you think that, in addition to regulating journalistic  process, there’s also an argument for, if not censoring, then having  some kind of control over the content that’s published where it may not  be in the public interest?

Here there are some really difficult matters, because as we were just saying, you don’t want the public interest defence to be a trump card that anyone can play, regardless of what they’re putting out there. I’ve noted that some criticism of the publication of the Wikileaks material, or of the Edward Snowden leaks, has been simply “who are they to judge that this is risk-free to third parties?” I think that that’s a pretty serious issue. If you publish other people’s confidential material it is not enough to assert that this is in the public interest. You have at the very least to ascertain whether you are endangering other people; or you’ve got to warn them; or both. So there can’t be a blanket assumption that if you happen to stumble across somebody else’s confidential material you’re doing a good thing by putting it into the public domain. There’s a volume question too: if you put scads of stuff into the public domain, most of it won’t be found, read, interpreted, or used; but some of it may enable some people to put two and two together and infer something that should probably not have been in the public domain. And I have a bit more of a sense of this in the case of the Wikileaks material, because much of it was not high-security stuff. To my mind it was extraordinary that the US government had a website to which I think three million people had legitimate access – not just Mr Manning. What I’ve read from it so far, such as reports from diplomats, sparked the thought that “ah, they’ve got quite a good grasp on this; they’re writing rather good prose, this is not dumb!” And of course, it was probably extremely embarrassing, particularly to the Saudi Government when it became public knowledge that they had been asking the US to bomb Iran – they were no doubt extremely cross – but on the whole I think my respect for US officials rather grew on reading the Wikileaks stuff.

Though it was a straightforward breach of security, if I were the US  government I would not be making a martyr of Mr Manning. But I would  prosecute him, because any employee in that situation has clearly  undertaken obligations, so I don’t have any difficulty with that. But  was it fundamentally in or against the public interest? Well, there may  have been things that were enormously in the public interest in there,  but I think it was more embarrassment. I don’t really know about Mr  Snowden, and some people have said some quite dramatic things and other  people less so. However, I think it’s clear now that leaks cannot be  plugged retrospectively. That’s very different from traditional  technologies, and it makes it very much harder for us to judge how the  argument should now go. I wouldn’t argue for prior restraint, but I  expect some people think that’s the only thing that could be effective  if content that is leaked or just inadvertently communicated can be  round the globe and in the headlines almost instantly. Now there are  those who think that this is terrific because it embarrasses the  powerful. But to my mind it shows a great lack of imagination to think  that this is the only likely outcome.

Are you hinting at the use of information by terrorist organisations, for instance?

I think I’m very much with people who try to keep recipes for making  ghastly weapons from being posted online. I think that it might reach  lunatics, it might reach terrorists, it might reach the armed forces of  countries that don’t have adequate control over their armed forces – it  might lead to any number of things, from lone lunatics upwards – and  there are probably pretty good reasons to try not to distribute that  material wholly freely. And we see all the time instances where someone  with really not much technical know-how has found out how to cook up  something lethal.

So are there good arguments for prior censorship of content,  aside from those of process that you focus on in the Leveson evidence?

I would suppose that yes, distributing instructions for making lethal  cocktails is probably something that you want to keep a bit of a grip  on. Nobody can keep a grip on it except by a combination of state and  civil society organisations. But I think there’d be pretty widespread  agreement on that – as there is pretty widespread agreement on child  pornography.

You invoke something of an appeal there to broader civil  society and the kind of opinions this generates – including those  generated by political institutions. When you have a media organisation  such as the Guardian threatening to bypass the laws of this  jurisdiction in order to publish from the United States, for instance,  do you think this is an important strategy for investigative journalism,  or do you think it is tantamount to escaping from what you call our  “collective geopolitical fate”?

It’s exactly the same thing as extra-territorial tax evasion, and I  think that a world in which some people are confined within the bounds  of a certain jurisdiction and are bound by its laws, pay its taxes,  while other people put different aspects of their lives into different  jurisdictions is a risky and divisive one. It produces a class of people  who can have their money here, their holiday home there, and their  children at school somewhere else, and they don’t carry the burdens of  any society, although they may enjoy the benefits of several. I’m very  interested in the development of extra-territoriality with respect to  taxation. And I think a close parallel can arise when people publish in  other jurisdictions in order to avoid prosecution – whether for  defamation, for breach of privacy or of confidentiality or of security.

We have perhaps made a little progress on online publishing in this  country since the episode where Lord McAlpine was falsely accused of  paedophilia and when upon seeing a photograph of McAlpine the accuser  said, “oh that’s not him”. It was very categorical, very clear,  immediately. But although it had only happened hours before, there were  lots of people who had already blogged and tweeted and sent this  information elsewhere. I gather that, within this jurisdiction at least,  his solicitors are going around requiring donations to charity in  proportion to the size of the readership or followership – and that  seems to me right: if you’ve published, you’ve published, and the laws  of defamation apply to you regardless of medium. But of course that’s  where extra-territorial publication is going to face you with exactly  the problems we now have with non-doms and taxation.

So is there a need for some kind of international organisation to regulate the Internet?

We have devised a remedy for dealing with taxation in multiple  jurisdictions by using tax treaties, for example, the US-UK tax  treaties. So I think this is a model that actually does work, and has  worked for a long time. It doesn’t work so well in jurisdictions that  are simply not in the business of taxing, because they have other ways  of amassing money for the state. I think the taxation issue is in some  ways even more urgent than the informational issue, but there might be  analogous partial remedies where states agree to prosecute breaches of  confidence, privacy, and the like, although the publishing is not  confined to any one jurisdiction.

But within the informational issues, I would take a rather different view of how the law might view anonymity and how it might view privacy. Anonymity is publishing something and ensuring others don’t know and cannot know who the source is. Now that seems to me a pretty bad thing to do. Of course there can be cases where it’s all right; if a company publishes its accounts, there’s no need for anyone to know the accountant’s name because they have the company’s address and it’s traceable. But there is a lot of untraceable material that readers cannot check or challenge. I did a TED talk this summer on trustworthiness and trust, and somebody else talking that day told a simple story about an episode in Dublin, of a family whose lives were torn apart because somebody who evidently knew a lot about them was putting the most scandalous rumours about them in emails to them – not even online, just in emails. And they, like most families, didn’t know how to find out who it was. It turned out to be their twelve-year-old son’s friend down the road. At the case conference the child was in tears and said, “I just did it because I could and I didn’t know it would lead to anything”. And it just tells you that when a child of twelve can do that and the ordinary person cannot see where it comes from, that is very alarming. So I think that we do have to have some way of making it easier for people to find out where a claim is coming from. Which of course you can do, unless people are very clever, through the service provider (although it took this family some time to find this out). But when people are very clever you can’t actually find out who it’s from. So anonymity and anonymous posting are the first things we should think about here. You’ve probably read the tragic story of this youngster in Fife who threw himself off the Forth Bridge. Cyber-bullying, though it does happen in situations where the perpetrator’s name is known, is typically an activity where perpetrators rely on anonymity.

There are arguments that the Internet has a democratic potential that print doesn’t have…

I think that involves a pretty trivialised conception of democracy.  Democracy in the end is not merely advertising and counting noses.  Democracy is also about understanding the arguments for and against  policies. If you can’t engage in conversation with someone it’s not  adding to democratic potential.

But one could say that the broadened access that the Internet  provides to the means of producing content, gives a greater possibility  for engagement and for countering misleading content than the print  media. And this may be used to argue that less regulation is needed on  the Internet. Is there nothing in that argument?

First of all, that’s very different from the argument that we should  permit anonymous posting. You can bring in a bit of democratic potential  when people can debate online and discuss a topic, but this assumes  that the other parties are not wholly anonymous (even if they may in  some contexts be using a pseudonym) and are traceable. Where this is the  case people can be held to account for defamation, and that seems to me  quite fundamental, because I’m sure you’ve seen the kind of comments  people feel free to post when they think they’re anonymous – the sort of  vicious nastiness that sometimes happens then. So that’s why I picked  out anonymous posting as something that I think would have to be  controlled.

Might one not fight anonymity with anonymity? Especially if  we have at least some faith in the idea of reasoned debate overcoming  all else.

This is not about reasoned debate, full stop. It really isn’t. If  you’re into reasoned debate you don’t need and you don’t use anonymity.  Citizens just have to have a soupçon of civic courage. And that means  that when I speak as a citizen I speak in my voice and I listen, and  people can come back and can say, “I don’t think that argument works” or  “you have forgotten about” or I can say “I want to refine, rephrase,”  whatever. That’s what debate is. Debate is interactive and consequently  has elements of corrigibility. When you don’t know where the purported  voice on the other side of the debate is coming from, even whether it is  one voice, when you don’t know whether your remarks are being edited  and fed in certain ways into some channels and not others, when you  don’t know how what you say is being spread around – I think that is  really likely to prove utterly destructive of democracy in the end.  Also, you don’t know what interests are being represented in comments  that you can’t source. Or who’s paying for what. These are very basic  matters in considering what the media can contribute to democracy. My  perception of it is that anonymity neat and pure is probably something  that won’t be acceptable, and it certainly won’t contribute to  democracy. So anonymity is probably the easy case.

Now the only serious argument for anonymity that I’ve heard was from Belle du Jour – I was on a panel with her at the Hay Festival – and she pointed out that she probably could not have written a blog about the subject she was writing about had she not been anonymous. That makes one think that there may be acceptable uses of anonymity. But anonymity deprives the audience, small or large, of all protection and all possibility of response. So I can’t really build an argument for anonymity being generally acceptable.

I would have thought anonymity is something we will have to deal  with. And there are various ways of doing it. One is where people are  actually interested in debate and discussion and perfectly willing to do  it in their own names. Or you could have a moderated discussion where  people are to some extent protected – not of course by the state but by  the service provider – which, if it degenerates to menace or defamation,  will lead to identification and liability.

What about the related issue of filtering? Many people  receive their news diet now through search engines. And Google, for  instance, filters its search results through algorithms designed by  computer engineers. This means that there’s no editor who curates news  directly and who accepts responsibility for the content presented to  users – not a problem of anonymity so much as one of attributing  responsibility for decisions presented as purely technical. How does the  use of algorithms in this way change our discussion of responsibility?

It’s a difficult one, isn’t it? And of course the bigger example,  even bigger than Google, is the Chinese state building algorithms into  the most popular computers so you will find that your news doesn’t cover  certain things. I also believe Google isn’t using that motto quite so  much now – “don’t be evil”…

In general, one of the things you’ve got to ask about these  technologies is: what can they do to the weakest people, and what  advantages do they give the strongest institutions? We’ve spent twenty  years thinking romantically of the internet – in effect “oh what a  lovely playground” – and I think we’re beginning to see now that it’s  being colonised, it’s becoming an arena of power.

Well I suppose the deeper issue is that many organisations  that are dominant online are using supposedly neutral means of curating  and filtering content, behind which there is very little trace of  decision-making involving moral deliberation. How can we possibly  regulate that? Many of the issues that you bring up in your evidence to  the Leveson Inquiry and your Reuters lecture are to do with the ethics  of process. But when there’s ostensibly very little process in terms of  ethical thinking involved, what kind of regulation is possible?

I think this is an enormous topic, and it’s not one that I can give  answers to – partly I think I’m technically too ignorant – but it does  seem to me that the identifiability of the service provider is pretty  important, and that’s anonymity but at another level. The takedown  notice is obviously a quick but ineffective way of reacting to the  posting of incitement or defamation, or other things that we clearly  think of as “speech wrongs”. But it’s very, very clunky to go after  service providers to do this. The other sort of answer I’ve heard is  “well, won’t the market sort it out, because their reputations will  plunge”. Well, I’m not sure, I’m just not sure. Because, as you say,  these algorithms are often invisible to end-users, so they are unaware  of what is being filtered. Leveson was dealing predominantly with the  standards, ethics, and culture of the print media because the broadcast  media are in fact quite regulated in this country and publishing beyond  the print media is, again, quite well regulated – publishing houses  don’t often do disastrous things because the law can catch up with them.  But the question for the Internet is “can the law catch up with them,  or is it creating a privileged class” – and I suspect it is tending  towards creating a privileged class.

And that privileged class would be a class of very technically able or knowledgeable individuals, would it?

I wonder, because it’s enabled by the people who can write the  algorithms. But when you think about search engine algorithms, I’m not  certain that the engineers really see themselves as serving any interest  or policy. Sometimes of course it’s about getting certain sorts of  sites to the top of the list, and it is an extension of an arms race of  advertising. That doesn’t sound as though it’s seriously dangerous to  the public at large but it could be, ultimately.

This may be very similar to the case of publishing in the  print media, where some editors seem to have a genuine belief that the  publication of information is, in itself, a good thing.

And I have been pretty critical of that view because I think that in  the last decade people have been quite fetishistic about transparency. I  can see good arguments for transparency in certain cases. For example I  can see very good arguments for requiring people to declare their  interests (and by the way not only people in public life and in  business, but also those in the media and in charities). In that case  there is a good case for transparency. But the way transparency has  often been interpreted is that you’re doing something meritorious by  shovelling stuff into the public domain. Most of what is shovelled into  the public domain will not be of interest to, or even found by other  people, except possibly some other institutions. But ultimately I think  that transparency is simply a remedy for secrecy. It doesn’t contribute  in itself to communication, and it doesn’t contribute to democracy. The  present coalition government, when they started even if they weren’t  quite transparency fetishists, were very much down that road. But I  think it has become clearer across the last two or three years that  transparency in and of itself is not a sovereign remedy for all sorts of  things that go wrong. Nevertheless, you still hear people in public  life say, “oh we must be more transparent”. Well, yes, but if you just  mean we must gather all this information and post it somewhere, that is  too little… So I think that the argument can’t stop with saying “this is  transparent”. Of course it got embedded during the late 90s into  certain bits of public life, for example the standards for better  regulation, and the Nolan Principles [KR: the code of ethics for those  in public office in the UK]. And now I listen to people referring to it  and I think it’s claptrap – it just comes out as “more transparency” as  if this were an omnipurpose remedy. 
The thing I would go for is – I’ve  used the slogan and managed to get it into a few public documents –  “intelligent openness”. Meaning that you’re open, but you try to meet  standards that enable the other party to find, follow and assess what  you say or write. That’s where assessability comes in. Mere openness is  audience-indifferent.

Let me explain why I think you can’t make these kinds of decisions  about public debate unless you’ve got a grip on the content. I can  illustrate this best by talking about privacy. In Europe as you know we  have a data protection approach to privacy, which I think is  conceptually defective because it tries to impose extremely strong  protection on any content that is personal information. But when you  start wondering “what is personal information?”, you realise there is no  clear criterion for demarcating it. When there is a breach of privacy  it is very often done by an inference from information that people  didn’t think was personal. Most breaches of privacy happen because  people have access to a range of information and can make certain  inferences. So how can anybody think you can protect people’s privacy by  demarcating personal and non-personal information? Yet that’s the way  our legislation has gone, and of course it leads people to be  hyper-cautious, very understandably. But in other cases it leads people  to think “that’s not private information” because on the surface it  doesn’t say who it is.

As I mentioned, the present government were initially very keen on transparency, including transparency of research data. The first sign of change that I noticed was probably July 2012, when there was a Cabinet Office white paper on open data. It had become clear that with biomedical data, patient data, you have to have proper information governance. And insofar as I'm involved, which I am a bit, in various debates about data governance, it's quite clear that nobody thinks any longer that there will be a way in which you can anonymise or pseudonymise patient data, for example, and put it out there in the public domain for researchers – it's too easy to identify persons. So privacy is a moving frontier. But conceptually there seems to have been quite a big shift on this issue in the last two or three years; there's been a shift away from thinking you can have on the one hand complete transparency for this domain and on the other hand data protection for that domain, because inferential lines cross the boundaries between these domains at many points.

What do you think about the issue of data collection by GCHQ with regard to this question of privacy? Do you not think that arguments from the privacy standpoint are very important when it comes to protecting our data from the state, and also provide good arguments for anonymity?

They can be. Insofar as GCHQ and the NSA collect communications data, not content, I'm not worried. Telephone companies do that for billing purposes. Insofar as they collect content, I might be more worried, but by the same token I would worry equally about Facebook, who collect content, and in particular a lot of personal content. I heard a Canadian lawyer talk about the fact that the Canadian data protection commissioners hadn't a clue about how younger people were using the Internet, so they organised some focus groups with youngsters. One girl said: "look, you don't understand – I go online in order to be private". Now that is glorious, isn't it? And I think I understand what she means: if I'm sitting and talking on my mobile, my mum and dad can overhear; if I'm online, nobody can hear me, it's private. But of course it is a complete illusion. And by the time people are undergraduates, at least the more savvy realise that at their first job interview the interviewers will have looked at their Facebook page. But a lot of other people don't. And I think that whatever legislation we end up with has to protect those people too.

Some people will say that in the case of Facebook we consent to the data being there, but in the case of the various security forces we do not. I think that line of argument is thin, because the standards for consent in commercial contexts are very minimal – you tick and you click and it counts as consent. But the standards for consent in democratic societies are more robust – of course imperfect – but continuing and repetitive and with options for dissent to change things. I think that as things are, sensible people assume that there are a lot of organisations with access to information about them and take what care they can. They do not assume that the greatest risks come from their own state's security systems. I often console myself by remembering that nearly all of what nearly everyone does is not of great interest to anybody else…

I constantly hear people saying that the solution is cultural – that we must educate people better on how to use the Internet – and of course I accept that it would be a good thing. But culture doesn't operate in a legal vacuum, and the forces of the powers that be, whether it be Google or GCHQ – and much more ordinary businesses operating online – are so powerful compared to the rest of us that I think we're probably going to have to have a legal framework. It'll be horribly contested because there are all sorts of romantics out there who think cyberspace is a wide-open frontier of freedom for which we should all die. I don't think that will survive. I think what turned me to that view was reading a certain number of the cyber-bullying cases, and I don't think people will in the long run stand for it being feasible to bully youngsters – or vulnerable people who are older – or others who are not savvy. Blackmail, including online blackmail, is quite easy, and of course that in a sense is one of the things that gives politicians pause. It's not that they fear that their party would lose an election if they speak out, but that they fear the personal attacks and rumour-spreading to which they and their families might be subjected.

Is it possible to regulate so widely as to control such attacks?

I think we start by making it clear to people that you are not in a law-free zone when you defame people online. It will become a bit different as people with a following realise that putting content out into the public domain without thought can be a dangerous pastime if you get it wrong, because what you're publishing can actually be brought under the law for defamation.

What about the resources required for this?

Well, the law of defamation – which of course has just been changed, and I hope it's better, because it was just too expensive for anybody but the rich to use it – basically works not by bringing lots of cases but by people being at least as cautious about defamation as they might be about assault or petty theft. I'm not saying you will always get public compliance or that we need lots of apparatus, but people will come to know that you can't do this without making yourself liable. Deterrence is the main way that the law works, and then the cultural remedies very often build on there being a deterrent. Nick Ross's new book, Crime: How To Solve It, and Why So Much of What We're Told Is Wrong (Biteback Publishing, 2013), is very good; he's good about evidence, statistics, science and so on. And he starts out from the fascinating conundrum that we have had the most amazing drop in crime rates in this country, and indeed in many western countries, over the last decade. Meanwhile we have media who push moral panic about crime all the time. Why this discrepancy? He has a lot of interesting things to say about what has driven the decline in crime, and some of it is just technological – which gives one perhaps some hope.
