Blog


May 24, 2017 | Jim Killock

The Manchester attack

Open Rights Group wishes to express its sympathy for the victims of the vile and brutal attack in Manchester. We condemn these violent attacks, which seem even more abhorrent when deliberately targeted at children and young people.

We hope that law enforcement and intelligence agencies will help to bring those involved in these attacks to justice and we support their work combating terrorism. We believe that these agencies need powers of surveillance to do this.

However, we also believe that there must be limits to these powers in order to preserve the democratic values of freedom and liberty - the same values that terrorists want to undermine. This is the central challenge of the moment, in our view.

There are many emotions and reactions that flow from this event: solidarity, the need to comfort as best we can, the value we place in our communities, and the help people have given to those directly affected. But there is also fear, hatred and a desire to do anything that could prevent such an attack from happening again.

The political response to this attack is complicated by the fact that it has taken place in the middle of an election. Campaigning has been put on hold, but politicians cannot help but be aware that their response will affect the outcome of the election - and this could see policies that exploit public fears.

The traditional response in the UK is first to recommit to British values and insist that terrorists will never remove them, and then to try to reassert a sense of security and control by showing that security measures will be stepped up.

Often these attempts are highly misleading. Security measures can be helpful, but building a security state will never be enough to stop terrorism. Terrorism needs to be dealt with at source, through changes in politics and society. As long as we have failed states in Libya, Syria and elsewhere, we will not be safe. We do not wish to gloss over the complexity and difficulty of tackling these issues, but changes there are the first step to reducing the threats of terrorism.

Meanwhile, surveillance, including mass surveillance, appears to be producing more information than can be effectively processed, with known individuals escaping investigation because they are too numerous for the authorities to pursue them all. Here, even human resources face limits, as expanding staff numbers can lead to bureaucratisation and new bottlenecks. Terrorists can also adapt their behaviour to avoid surveillance technologies, by changing their tech, avoiding it altogether, or simplifying their operations to make them less visible.

This does not mean we should give up, nor does it mean that technology can play no role in surveillance. It does, however, mean that we should not assume that demands for more resources and powers will necessarily result in security.

ORG is concerned that the investigatory powers the Government uses, ostensibly to keep us safe, can themselves be exploited by criminals and terrorists.

It is worrying to hear that in the wake of these attacks, the Home Office wants to push ahead with proposals to force companies to weaken the security of their products and services through “Technical Capability Notices” (TCNs). These are notices that can be issued to a company to force them to modify their products and services so that the security agencies can use them to access a target’s communications.

The Government already has these powers on the statute book, as they were outlined in the Investigatory Powers Act, passed last December. To make the powers active, the Government must pass a regulation that gives more detail about how TCNs could be used.

Recently, the Home Office held a ‘targeted’ consultation about the new regulations. The draft was only sent to a few companies for their response, even though these powers could affect the digital security of people in the UK and beyond.

As a result, ORG leaked the proposals so that affected businesses and individuals could raise their concerns with the Home Office. Over 1,400 ORG supporters sent their comments to the Home Office and ORG also submitted a response that we published here.

Our core concern is that using TCNs to force companies to limit or bypass encryption or otherwise weaken the security of their products will put all of us at greater risk. Criminals could exploit the same weaknesses. Changes to technology at companies merely need to be ‘feasible’ rather than ‘safe’ or ‘sensible’ for users or providers.

The recent #WannaCry attack demonstrated how a vulnerability discovered by the National Security Agency (NSA) and used to access its targets’ communications was then exploited by criminals. These powers involve different technologies, but the principle remains the same: Governments should be doing all they can to protect our digital security.

Another concern is that TCNs may be served on companies overseas, including WhatsApp, which is owned by Facebook. Facebook has assets in the UK and can easily be targeted for compliance. Others, such as WhisperSystems, who produce Signal, have no UK assets. The UK appears to be deliberately walking into an international dispute, where much of the legal debate will be entirely hidden from view, as the notices are served in secret, and it is not clear what appeal routes to public courts really exist. Other governments, from Turkey to China, will take note.

Powers must be proportionate, and agencies should not be given a blank cheque. Justification for and oversight of the use of TCNs and vulnerabilities is inadequate, so the risks cannot be properly assessed in the current legal frameworks. There is no regime for assessing the use of vulnerabilities including ‘zero days’.

We urge politicians to take a detailed and considered look at TCNs and the use of vulnerabilities, to ensure that the consequences of their use can be properly evaluated and challenged.

These will seem like narrow issues compared with Monday’s events, and so they are. The wider issue, however, is that we as a society do not react to these events by emulating our enemies, by treating all citizens as a threat, and by gradually removing British values such as the rule of law, due process and personal privacy.



May 22, 2017 | Jim Killock

Facebook censorship complaints could award government immense control

Facebook censorship complaints run both ways—and we should be careful when politicians press for more controls

The leaked Facebook Files, the social media company’s internal policies for content regulation published by the Guardian, show that, like a relationship status on Facebook, content moderation is complicated.

It is complicated because Facebook is a near-monopoly in the social media market, making them both a power player and a target for regulation. It is complicated because there is an uneasy balance to strike between what is law, what is code, and what is community decency.

It is complicated because Facebook finds itself in a media landscape determined to label them as either a publisher or a platform, when neither is a suitable title. And ultimately, it is complicated because we are talking about human interaction and regulation of speech at a scale never seen before.

Big player. Big target

Facebook are a monopoly, and that is a big problem. With almost 2 billion users on the site, operating in almost every country in the world, they hoard the data generated by a community of a size never before seen. The leaks show that even they seem unclear how best to police it.

It could be argued that, as a private company, they can create their terms and conditions as they see fit, but their global domination means that their decisions have a global impact on free speech. This impact creates obligations for them to uphold standards of free expression that are not normally expected of a private company.

Operating in so many countries also means that Facebook are an easy target for criticism from many different governments and media, who will blame them for things that go wrong because of their sheer scale. Governments see an easy way to impose control by targeting them through the media or regulation, most recently in the Home Affairs Committee report, where social media companies were accused of behaving irresponsibly in failing to police their platforms.

World Policing in the Community

Facebook’s business model is premised on users being on their site and sharing as much information as possible, so that they can use personal data to sell highly targeted advertising. Facebook do not want to lose customers who are offended, which means their threshold for removing content is offence, a much lower bar than illegality.

Facebook is not unregulated. The company has to comply with court orders when served, but as the leaked files show, making judgments about content that is probably legal but offensive or graphic is much more difficult.

Being the community police force for the world is a deeply complicated position, even more so if your platform is often seen as being the Internet.

Law versus community standards

Facebook will take down material reported to them that is illegal. However, the material highlighted by the Guardian as inappropriate for publication falls into a category of offensiveness, such as graphic material or sick jokes, rather than illegality.

Where people object to illegal material appearing and not being removed fast enough, we should also bear in mind the actual impacts. For instance, how widely has it actually circulated? In social media, longevity and contacts are what tend to produce visibility for your content. We suspect a lot of ‘extremist’ postings are not widely seen, as the accounts will be swiftly deleted.

In both cases, there is a serious argument that it is society, not Facebook, generating unwanted material. While Facebook can be targeted to remove it, this won’t stop its existence. At best, it might move off its platform and arrive in a less censored, probably less responsible environment, even one that caters to and encourages bad behaviour. 4Chan is a good example of this, in that its uncensored message boards attract abuse, sick jokes and co-ordination of attacks.

Ultimately, behaviour such as abuse, bullying and harassment needs to be dealt with by law enforcement. Only law enforcement can deliver protection and prosecutions, and work with individuals to correct their behaviour and reduce actual offending. Failing to take serious action against real offenders encourages bad behaviour.

Publisher v Platform

What happens when your idea of bringing the world together suddenly puts you in the position of a publisher? When people are no longer just sharing their holiday pictures, but organising protests, running campaigns, even publishing breaking news.

Some areas of the media have long delighted in the awkward positioning of Facebook as a publisher (subject to editorial controls and speech regulation) rather than a platform (a service where users can express ideas that are not representative of the service). It might be worth those media remembering that they too rely on “safe harbour” regulations designed to protect platforms, for all those comments that their readers post below their articles. Placing regulatory burdens that create new legal liabilities for user-generated content would be onerous and likely to limit free expression, which no one should want to see.

Safe harbour arrangements typically allow user content to be published without liability, and place a duty on platforms to take down material when it is shown to be illegal. Such arrangements are only truly fair when courts are involved. Where an individual, or the police, can notify without a court, platforms are forced to become risk averse. Under the DMCA copyright arrangements, for instance, a user can contest their right to have material re-published after takedown, but must also accept being taken to court. All of this places the burden of risk on the defendant rather than the accuser. Only a few of those accused will opt to take the legal risk, whereas normally accusers would be the ones who have to be careful about who they take to court over their content.

Money to burn. People to hire

Facebook have enough money that they should be able to go further in their hiring of humans to do this job better. They appear to be doing that and should be trying to involve more human judgement in speech regulation, not less.   

Considering the other options on the market, more human involvement would seem the most reasonable approach: Facebook have tried, and failed miserably, to moderate content by algorithm.

However, moderating content across so many different cultures and countries, reportedly leaving human moderators only 10 seconds to decide whether to take down content, is a massive task that will only grow as Facebook expands.

We need to understand that moderation is rules-based, not principle-based. Moderators strictly match content against Facebook’s “rules” rather than judging from principles whether something is reasonable. The result is that decisions will often seem arbitrary or just bad. The need for rules rather than principles stems from making judgements at scale, and seems unavoidable.

Algorithms, to be clear, can only make rules-based approaches less likely to be sane, and more likely to miss human, cultural and contextual nuances. Judgement is an exclusively human capability; machine learning only simulates it. When a technologist embodies their or their employer’s view of what’s fair into a technology, any potential for the exercise of discretion is turned from a scale to a step and humanity is quantised. That quantisation of discretion is always in the interest of the person controlling the technology.
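
To make the point about quantised discretion concrete, here is a minimal illustrative sketch (the rule list and function name are entirely hypothetical, not Facebook’s actual system) of how a rules-based check behaves: the only possible outputs are hard remove-or-keep steps, with no scale of contextual judgement in between.

```python
# Illustrative sketch only: hypothetical rules and names, not any real moderation system.
# Rules-based moderation turns discretion into a hard yes/no step.

BANNED_PHRASES = {"example banned phrase", "another banned phrase"}  # assumed, hand-written rules

def moderate(post_text: str) -> str:
    """Return 'remove' or 'keep' by strict rule matching, with no contextual judgement."""
    text = post_text.lower()
    for phrase in BANNED_PHRASES:
        if phrase in text:
            return "remove"  # a rule matched, so the post goes, whatever its context
    return "keep"            # no rule matched, so the post stays, however harmful in context

# A quote criticising a banned phrase is removed just the same as the phrase used in earnest.
print(moderate("This is an example banned phrase and here is why it is wrong."))  # -> remove
```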

One possible solution to the rigidity of rules-based moderation is to create more formal flexibility, such as appeals mechanisms. However, Facebook are most likely to prefer to deal with exceptional cases as they come to light through public attention, rather than impose costs on themselves.

Manifesto pledges

Any push for more regulation, such as that suggested by the Conservative manifesto, is highly likely to encourage automation of judgements to reduce costs—and it validates the same demand being made by every other government. The Conservative pledges here seem to us to be a route straight to the Computer saying No.

Thus, if you are concerned about the seemingly arbitrary, opaque rules that Facebook sets for its content moderators, then you should be doubly concerned by the Conservatives’ manifesto pledge to bring further state regulation to the Internet.

Governments have been the home of opaque and arbitrary rules for years, and the Conservatives, if elected, would deliver an internet where the state creates incentives to remove anything potentially objectionable (anything that could create adverse publicity, perhaps) and decides what level of security citizens should be able to enjoy from the platforms they use every day. That is not the future we want to see.

So we have a private monopoly whose immense power to decide what content people view is concerning, but also a proposal for too much state involvement in that decision, which is concerning too. It is a situation where you want to see better rules in place, but not rules that turn platforms into publishers, and a problem so vast that simply hiring more people would not solve it alone. Like we said, it’s complicated.

What is simple, however, is that Facebook present a great opportunity for media stories, and for complaints followed by government power grabs to police the acceptability of speech that they would never dare make illegal. We may regret it if these political openings translate into legislation.

 

 



May 16, 2017 | Mike Morel

The UK Government should protect encryption not threaten it

It is difficult to overstate the importance of encryption. A cornerstone of the modern digital economy, it is something we rely on whenever we use our digital devices or make transactions online. Physical infrastructure like power stations and transport systems depends on it too.


 Encryption also strengthens democracy by underpinning digital press freedom. Whistleblowers can’t safely reveal official corruption to journalists without it.

Laws restricting encrypted communications have generally been associated with more authoritarian governments, but lately proposals to circumvent encryption have been creeping into western democracies. Former Prime Minister David Cameron attacked encryption after the Paris attacks in 2015, and Home Secretary Amber Rudd MP recently said that there should be a way around end-to-end encryption on devices like WhatsApp.

As it happens, Amber Rudd already has legislation that claims to give her the power to tell WhatsApp to remove “electronic protection” (read: “encryption”). She can issue a technical capability notice (TCN), which instructs commercial software developers to produce work-arounds in their software without outlawing or limiting encryption itself. Just over a week ago, ORG leaked a secret Home Office consultation on the draft TCN regulation, which gives more detail about how this power can be used.

To be clear, this goes way beyond WhatsApp. The Government wants access to all UK telecommunications, encompassing a wide variety of services. Any organisation that facilitates communications among 10,000 or more users could be issued a TCN, including email account providers, data storage services, games companies and (they claim) even overseas operators with enough UK users.

The current ransomware outbreak shows how software vulnerabilities used by security agencies can fall into the wrong hands. There is no reason to think backdoors intentionally created for Government access could not be exploited as well. Why start a digital arms race when we may be releasing new weapons to criminals and hostile governments?

The lack of transparency surrounding TCNs is another problem. The regulation makes no mention of oversight or risk assessment mechanisms, and the consultation’s secrecy reduces accountability even further. Sometimes the Government has good reason for secrecy, but this is not one of those times. When digital services are compromised, people must know, because it affects their privacy and security, and everyone has a right to protect themselves.

Business owners should be concerned because their products and customers could be seriously affected, and the process by which they might appeal a TCN is unclear. The only real ground for complaint appears to be “feasibility” — and many things may be ‘feasible’ but still a very bad idea.

From securing the economy to underpinning press freedom, the need for strong encryption is vital. We alter it at our own peril, especially if we do so in secret. Tell the Home Office yourself before the secret consultation ends on 19 May.

See ORG’s detailed breakdown of the TCN regulation here.



May 13, 2017 | Jim Killock

NHS ransom shows GCHQ putting us at risk

The NHS ransom shows the problems with GCHQ’s approach to hacking and vulnerabilities, and this must be made clear to MPs who have given them sweeping powers in the IP Act that could result in the same problems recurring in the future.

Here are four points that stand out to us. These issues of oversight relating to hacking capabilities are barely examined in the Investigatory Powers Act, which concentrates oversight and warrantry on the balance to be struck in targeting a particular person or group, rather than the risks surrounding the capabilities being developed.

GCHQ and the NSA knew about the problem years ago

Vulnerabilities, as we know from the Snowden documents, are shared between the NSA and GCHQ, as are the tools built to exploit them. These tools are then used to hack into computer equipment, as a stepping stone to getting at other data. These break-ins target all kinds of companies, sites and groups, who may be entirely innocent but useful to the security agencies in getting closer to their actual targets.

In this case, the exploit, called ETERNALBLUE, was leaked after a break-in or leak from the NSA’s partners this April. It affects unpatched versions of Windows, including Windows XP. It has now been exploited by criminals to ransom organisations still running vulnerable software.

While GCHQ cannot be blamed for the NHS’s reliance on out-of-date software, the NSA and GCHQ’s decision to keep this vulnerability secret, rather than trying to get it fixed, means they bear a significant share of the blame for the current NHS ransom.

GCHQ are in charge of hacking us and protecting us from hackers

GCHQ are normally responsible for ‘offensive’ operations, or hacking and breaking into other networks. They also have a ‘defensive’ role, at the National Cyber Security Centre, which is meant to help organisations like the NHS keep their systems safe from these kinds of breakdown.

GCHQ are therefore forced to trade off their use of secret hacking exploits against the risks these exploits pose to organisations like the NHS.

They have a tremendous conflict of interest, which in ORG’s view, ought to be resolved by moving the UK defensive role out of GCHQ’s hands.

Government also needs to have a robust means of assessing the risks that GCHQ’s use of vulnerabilities might pose to the rest of us. At the moment, ministers can only turn to GCHQ to ask about the risks, and we assume the same is true in practice of oversight bodies and future Surveillance Commissioners. The obvious way to improve this and get more independent advice is to split National Cyber Security Centre from GCHQ.

GCHQ’s National Cyber Security Centre had no back up plan

We also need to condemn the lack of action from the NCSC and others once the exploit was known to be “lost” this April. Some remedial action was taken in the US by informing Microsoft, who created a patch in March; it was not, however, issued freely until today.

Hoarding vulnerabilities is of course inherently dangerous, but apparently having no adequate US plan, or any UK-wide plan, to execute when they are lost is inexcusable. This is especially true given that this vulnerability is obviously capable of being used by self-spreading malware.

GCHQ are not getting the balance between offence and defence right

The bulk of GCHQ’s resources go into offensive capabilities, including hoarding data, analytics and developing hacking methods. There needs to be serious analysis of whether this is really producing the right results. The imbalance is likely to remain while GCHQ, which will always prioritise offence, is in charge of both offence and defence. Offence has also been emphasised by politicians who feel pressure to defend against terrorism, whatever the cost. Defence — such as ensuring critical national infrastructure like the NHS is protected — is the poor relation of offensive capabilities. Perhaps the NHS ransom is the result.




May 08, 2017 | Mike Morel

A brief chance for better UK data protection law

The EU’s General Data Protection Regulation (GDPR) comes into force next year, updating a number of digital rights for UK citizens in the age of Big Data. Individuals stand to gain more control over their information and improve their awareness of consent, profiling, and automated decision making.

However, the GDPR gives member states considerable flexibility in how it is implemented. Of the many options within the law, one particularly crucial rule hangs in the balance: Article 80(2).

This rule permits privacy groups like ORG to independently represent the public in complaints about data protection law. Without it, privacy watchdogs like ourselves, Liberty or Privacy International would instead have to rely on individuals to bring a complaint.

But individuals do not always have the knowledge, expertise or time to identify and dispute faults in arcane terms and conditions. By ensuring Article 80(2) is enforced, privacy advocates will be free to directly address the Information Commissioner when corporations exploit your data.

The good news is there is something we can do about it. The Department of Culture, Media & Sport (DCMS) is currently holding a public consultation on the GDPR. The poor quality of this hurried consultation suggests that Article 80(2) could easily be overlooked and forgotten. That means we need your help to get Article 80(2) brought into UK law.

Time is short. The brief consultation ends Wednesday May 10. We have until then to make our voices heard. Click here to tell the DCMS to enforce Article 80(2).

 

 



May 04, 2017 | Jim Killock

DCMS consultation on data privacy fails to explain why it matters

New data privacy rights under the General Data Protection Regulation depend on a UK consultation which tells readers nothing about its implications

The General Data Protection Regulation (GDPR) sets out many new rights for UK citizens, including better notions of consent, the right to obtain and download your information, and the right to have a company delete it. You can also find out more about profiling and automated decision-making. There are big fines available when companies don’t comply after it comes into force in mid-2018.

However, many of the new rights will depend on enforcement. One of the better ideas in the regulation is to allow privacy groups to represent citizens in complaints, without having to find specific people who have been directly affected. The GDPR leaves it to member states to choose whether to allow this, in Article 80(2). We of course very much believe this should be legislated for.

There is a consultation being run by DCMS until Wednesday 10 May on all the different options allowed under the GDPR—and there are quite a few.

However, this consultation is another very disappointing piece of work. Shoddy, even, because it calls for evidence and views but sets out no background at all, so only experts can realistically respond. It merely states:

Theme 9 - Rights and Remedies

Rights and Remedies

The derogations related to Rights and Remedies include articles:

Article 17 - Right to erasure ('right to be forgotten')

Article 22 - Automated individual decision-making, including profiling

Article 26 - Joint controllers

Article 80 - representation of data subjects

Government would welcome your views on the derogations contained in the articles above. Please ensure that you refer to specific articles/derogations.

There is no way that an average reader could understand the implications of this consultation. Just like the recent Home Office consultation on the IP Act Codes of Practice, it appears to breach Cabinet Office guidelines, which state that consultations should:

Give enough information to ensure that those consulted understand the issues and can give informed responses.

This consultation provides exactly no background information whatsoever. You wouldn’t begin to understand that they want to know if you are in favour of privacy organisations being able to make complaints to the ICO under Article 80, or not.

We feel sympathy for the staff at DCMS who have been asked to set out this consultation, and presumably have been prevented from spending time developing background documents due to capacity constraints. This should serve as a warning to us.

Once Brexit kicks in, DCMS staff will need not just to recycle existing policy advice from the EU and other organisations on legislation prepared elsewhere, but also to have the expertise to evaluate it and recommend changes. Under the Great Repeal Bill, they may have to advise ministers about things to remove, with little Parliamentary involvement — potentially including aspects of the GDPR, of course.

Right now, however, DCMS officials appear to lack the capacity to even produce decent consultation documents for key privacy laws like the GDPR. Ministers should be demanding more resources, or we will start to see serious policy mistakes being made.

 



May 03, 2017 | Mike Morel

ORG delivers anti-Espionage Act petition to the Law Commission

Today marks the end of the Law Commission’s public consultation on their proposals to create a new Espionage Act that would jail whistleblowers and journalists who handle official data. Open Rights Group gave them exactly what they asked for―the voices of 23,385 members of the public, delivered right to their offices at the Ministry of Justice.

ORG’s petition broadly rejects the Law Commission’s proposals and demands they be dropped. The threat of up to 14 years in prison would have a chilling effect on whistleblowers and the reporters they contact, weakening free speech and the integrity of UK democracy.

Thank you to all the ORG supporters who signed the petition or emailed the Commission: they now know that thousands of citizens refuse to live in a country where journalists and government staffers are afraid to expose corruption.

We urge the Law Commission to take your requests seriously. That would be a huge improvement over the sham “consultation” that barely took place while the initial report was developed. Contrary to the Commission’s statements, they worked closely with government officials and lawyers while organisations like ORG, Liberty and the Guardian were given short shrift.

Whether the Commission’s final recommendations will take the public consultation into account remains to be seen. Meanwhile, ORG supporters have ensured that they cannot claim public support for a new Espionage Act.

ORG also submitted a comprehensive report along with the petition detailing concerns about the Commission’s proposals. Highlights include:

  • The Law Commission is not being upfront about their aims. Their proposals are obviously a response to the Snowden leaks, but they do not mention this or other major cases related to the disclosure of official data. It is blatantly disingenuous to overlook such important cases and not consider how the powers in a new Espionage Act could have been used in them.

  • Their proposals go against the very essence of whistleblowing by requiring concerns about corruption or malpractice be reported to an internal ombudsman. Whistleblowers have often tried to raise concerns internally and got nowhere. Whistleblowing is a last resort to expose hidden injustices that are not being dealt with within organisations.

  • Their proposals take away far too many rights from the accused. The Government would only have to show that a defendant was aware of the damage that could be caused by disclosing information - even if no actual damage was caused. So even if journalists expose wrongdoing, like the MPs’ expenses scandal, they could not use a statutory public interest defence.

  • The proposals threaten free speech. Editors, journalists and whistleblowers would be intimidated by the risk of up to 14 years in prison just for handling data.

  • The UK Government recently enacted the most extreme surveillance law of any democracy, the Investigatory Powers Act. At a time when these powers should be scrutinised, these proposals would criminalise whistleblowers and journalists acting in the public interest.



May 01, 2017 | Jim Killock

Automated censorship is not the answer to extremism

Unbalanced Home Affairs Committee recommendations would threaten free expression

Today’s report by the Home Affairs Select Committee brands social media companies as behaving irresponsibly in failing to remove extremist material.

It takes the view that the job of removing illegal extremist videos and postings is entirely the responsibility of the companies, and does not envisage a role for courts to adjudicate what is in fact legal or not.

This is a complex issue, where the companies have to take responsibility for content on their platforms from many perspectives, including public expectation. There are legitimate concerns.

The approach the committee advocates is, however, extremely unbalanced and could provoke a regime of automated censorship that would affect legal content, including material opposing extremism.

We deal below with two of the recommendations in the report to give some indication of how problematic it is.

Government should consult on stronger law and system of fines for companies that fail to remove illegal content

Platforms receive reports from people about content; the committee assume this content can be regarded as illegal. Sometimes it may be obvious. However, not every video or graphic will be “obviously” illegal. Who then decides that there is a duty to remove material? Is it the complainant, the platform, or the original publisher? Or an independent third party such as a court?

The comparison with copyright is enlightening here. Copyright owners must identify material and assert their rights: even when automatic content matching is used, a human must assert the owner’s claim before a YouTube video is taken down. Of course, the video’s author can object. Meanwhile, this system is prone to all kinds of errors.

However, for all its faults, there is a clear line of accountability. The copyright owner is responsible for asserting a breach of copyright; the author is responsible for defending their right to publish; and both accept that a court must decide in the event of a dispute.

With child abuse material, there is a similar expectation that material is reviewed by the IWF, who make a decision about its legality or otherwise. It is not up to the public to report directly to companies.

None of this need for accountability and process is reflected in the HASC report, which merely asserts that reports of terrorist content by persons with no direct interest should create a liability on the platform.

Ultimately, fines for failure to remove content as suggested by the committee could only be reasonable if the reports had been made through a robust process and it was clear that the material was in fact in breach of the law.  

Social media companies that fail to proactively search for and remove illegal material should pay towards costs of the police doing so instead

There is always a case for general taxation that could be used for the police. However, hypothecated resources in cases like this are liable to generate more and more calls for specific “Internet taxes” to deal with problems that can be blamed on companies, even when they have little to do with the activity in reality.

We should ask: is the posting of terrorist content a problem generated by the platforms, or by wider social problems? It is not entirely obvious that this problem has in some way been produced by social media companies. It is clear that extremists use these platforms, just as they use transport, mail and phones. It appears to be the visibility of extremists’ activities that attracts attention and blame to platforms, rather than any objective link between the aims of Twitter and Facebook and those of terrorists.

We might also ask: despite the apparent volumes of content that is posted and reposted, how much attention does it really get? This is important to know if we are trying to assess how to deal with the problem.

Proactive searching by companies is something HASC ought to be cautious about. It is inevitably error-prone and can only lead one way: to over-zealous matching, for fear of failing to remove content. In the case of extremist content, it is perfectly reasonable to assume that content that opposes extremism while quoting or reusing propagandist material would be identified and removed.

The incentives that HASC propose would lead to censorship of legal material by machines. HASC’s report fails to mention or examine this, assuming instead that technology will provide the answers.

 
