Blog


September 02, 2019 | Matthew Rice

Out for the count: the £9 million white elephant in London's next election

Electronic counting in London – the subject of criticism from the Electoral Commission and Open Rights Group for many years – is now spiralling in cost. The cost of the election in 2020, for which the contract was tendered in 2018, will be over twice that of the last tendering process in 2010. Worse still, the contractual arrangements are highly obscure and problems with transparency are yet to be addressed.

At the end of July, the Greater London Authority (GLA) Oversight Committee met to review the activities of the Greater London Returning Officer (GLRO) for the 2020 GLA elections. Election results for the London Mayoral and London Assembly elections have always been counted by machines, a process known as ‘e-counting’. This time round, the GLRO and politicians are keen to avoid the problems they had with software and machines in 2016, which led to long delays in announcing the results.

E-counting has been criticised in successive reports by the Electoral Commission for being untransparent, as independent observers are unable to effectively monitor the count; inaccurate, as clearly marked ballots are frequently counted as unmarked; and expensive, as this service cost £4.1 million for the 2012 elections.

In particular, the Electoral Commission advised that the GLRO carry out a ‘cost benefit analysis’ of e-counting, which should include a cost benefit analysis of manual counting. The GLRO has ignored this advice, twice.

In 2018, the tendering process for the e-counting contract for the 2020 elections began. The contract was eventually split between two companies: CGI, the lead contractor, and Smartmatic, the subcontractor. Between the last tendering process in 2010 and the most recent in 2018, the cost of the contract has more than doubled – from £4.1 million to more than £8.9 million.

The increase in cost partly reflects the fact that the tendering process was uncompetitive – the market for e-counting providers is small, and they tend to operate in consortiums. In this case very few expressions of interest were received, and the GLRO itself was worried that no company would be found able to meet the bid criteria, which included new measures to avoid the problems experienced in 2016.

As part of the approval process for commencing e-counting procurement, the GLRO stated that:

“Should no tender prove satisfactory either quality-wise or price-wise, there will be no commitment to award a contract and consideration will be given to counting manually”.

Nevertheless, even though the cost has spiralled to almost £9 million, the GLRO have decided to press ahead.

We are concerned that the methodology employed to assess the various bids for the contract is not being made publicly available. Without this information, it is impossible to know why £9 million of public money is being spent.

There are serious questions about the sub-contractor, Smartmatic. Smartmatic’s “election solutions” have been the subject of criticism in several countries where they have run electronic voting systems. Smartmatic says the criticisms are unjustified.

For example, there were serious technical issues with the machines used in the 2012 Belgian elections in Flanders, with second preference votes incorrectly allocated. Additionally, there is no definitive account of why the Committee on Foreign Investment in the United States was investigating Smartmatic. For now, Assembly members and the GLRO can only read the reports from the countries concerned, or view the hearings.

Before the meeting of the Oversight Committee, the GLRO reported that many of the functions of e-counting had been subcontracted out further. The GLRO have stated that Smartmatic and CGI are:

“Working with Rathmhor, who delivered user training for the programme in Scotland in 2017. Hamilton Rentals will supply hardware and audio-visual services at the count centres, using the latest generation of Fujitsu scanners. FDM will print the ballot papers.” 


In this long and unwieldy chain of command, it is hard to see what value Smartmatic brings, and where accountability lies if something goes wrong. Only full information about the contracts and subcontracts will make it clear what Smartmatic’s role is, and whether that is appropriate.

The GLRO previously asserted that CGI’s and Smartmatic’s performance in Scotland – where e-counting at local elections is also the norm – warranted the awarding of the contract: that they are tried and tested. In its report to the GLA Oversight Committee the GLRO stated:

“CGI and Smartmatic have previously delivered successful electronic vote counting systems for Scottish elections.”


This was false. The Scottish Government have confirmed to us that Smartmatic have never been awarded an e-counting contract for Scotland. The GLRO have subsequently corrected the record and explained to Assembly members that Smartmatic have not in fact had any involvement with the Scottish e-counting systems.

The other delivery partner, CGI, have also not been without controversy. The company went to court in Scotland after an alleged repudiatory breach involving a subcontractor on an IT contract for Scottish councils. Agilisys and CGI tussled over who was responsible for the contract not being properly delivered. The judge described CGI’s expert witness as “one sided”, “not balanced” and its submissions as “unconvincing”.

The GLRO were unable to answer many of the questions put to them by the GLA Oversight Committee. On costs, the GLRO at one point attempted to attribute much of the £4.8 million increase in cost to ‘inflation’.

Eventually, they made it clear that much of the cost was due to new safeguards against error, with a further large proportion of the price hike put down simply to unexpected costs in the market. This may simply reflect the cost of an uncompetitive bidding process, where the two participants do not have to constrain their estimates as they cannot be replaced.

The GLRO did not explain why they have still not done a cost benefit analysis of the whole exercise, instead preferring “stakeholder consultation” and “soft market testing”. Nor were the GLRO able to explain why they went ahead with such an expensive contract despite previously saying that they would not. Thankfully, the GLRO have now committed to a cost benefit analysis of manual counting.

However, scrutiny has been avoided for too long. The GLRO has failed to provide answers. We look forward to gaining clarity on the facts in a future meeting with them. We believe that London Assembly members urgently need answers to the following questions:

  • Given the expense and the non-competitive nature of the market for e-counting, why did the GLRO press ahead despite saying “Should no tender prove satisfactory either quality-wise or price-wise, there will be no commitment to award a contract and consideration will be given to counting manually”?

  • Will the GLRO release the details of the methodology used to award the contract and how individual companies scored? 

  • Was Smartmatic’s controversial reputation and poor performance record taken into consideration in awarding this contract? What role are they actually performing?

Until Assembly members have answers, neither they nor the public should be confident that next year’s elections will be problem-free, or value for money. The GLRO should have stuck to its promise, given the difficulties they have had getting competitive bids, and given “consideration to counting manually”. Voting systems need to be robust, trustworthy and inspire confidence. It is vital that this is done right.



July 08, 2019 | Daniel Markuson, a digital privacy expert at NordVPN

European Net Neutrality is Under Attack

European internet users may not have noticed, but their net neutrality and online freedom are at risk. At least 186 Internet Service Providers (ISPs) in the EU are using Deep Packet Inspection (DPI) to read their users’ traffic. That means they get to decide how much internet freedom you get.

Network neutrality explained

Net neutrality is the idea that ISPs must ensure an equal internet connection to each and every user. Although there’s an EU law that regulates this, some ISPs discriminate against their users by filtering or charging for the content they try to access or the devices they use to connect.

Among other means, such as using DPI to examine our communications, telecom companies apply extra charges for data packages or specific content users connect to. This contradicts the guidelines of the Body of European Regulators for Electronic Communications (BEREC), which state that all internet users must be able to use the web without any restrictions.

How net neutrality is being abused

Any internet service plan that applies conditions based on the websites or services you visit and use requires your provider to identify your internet traffic. If your provider offers a plan where you can use certain apps without consuming your data, that’s more than just a fundamental violation of the principles behind net neutrality. It’s also a breach of your privacy, as one way to distinguish between various traffic types is by using DPI.
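
To make that concrete, here is a toy sketch of the simplest form of such traffic identification: mapping the hostnames a provider can observe (in plain-text DNS queries, or in the unencrypted SNI field of a TLS handshake) to billing or shaping decisions. All hostnames and classes below are invented for illustration; real DPI equipment does this inside the network at line rate, this only shows the classification logic.

```python
# Toy illustration of hostname-based traffic classification.
# All hostnames and classes below are invented for the example.

ZERO_RATED = {"music.example.com", "partner-video.example.net"}
THROTTLED_SUFFIXES = (".video-cdn.example.org",)

def classify(hostname: str) -> str:
    """Return a billing/shaping class for one observed hostname."""
    host = hostname.lower().rstrip(".")
    if host in ZERO_RATED:
        return "zero-rated"   # doesn't count against the data cap
    if host.endswith(THROTTLED_SUFFIXES):
        return "throttled"    # shaped to a lower bandwidth tier
    return "normal"           # carried and billed as usual

for name in ["music.example.com", "cdn1.video-cdn.example.org", "openrightsgroup.org"]:
    print(name, "->", classify(name))
```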

Once DPI is in place, the ISP can use it in many different ways: throttling connections, censoring content, and tracking users’ traffic in greater detail than before. Legal protection against these intrusive actions exists in the EU, so how can users still be at risk?

Net neutrality protection in Europe

Although net neutrality is already dead in the US, Europe is still fighting for open internet access and transparency. In the EU, the right to equal network access is protected by Article 3 of EU Regulation 2015/2120. However, the European Digital Rights (EDRi) association, which for many years has been advocating in favor of strong net neutrality, is warning against the widespread use of privacy-invasive DPI technology in the EU.

EDRi is negotiating for new European net neutrality rules even as some telecom organizations seem to be pushing for DPI to be legalised. These companies claim net neutrality requirements reduce competition in the market and that a neutral internet could obstruct growth, flexibility, and investment in new infrastructure.

This gives some room for thought: will UK citizens be stripped of net neutrality after the country leaves the EU?

The loss of net neutrality

Without net neutrality in Europe, ISPs could limit what you can and can’t see online and how you experience the internet in general. When it’s gone, certain content and services may be completely blocked by some ISPs. They could also force some websites to pay or suffer slow traffic, which might drive many smaller online services out of business. Supporters of net neutrality fear the loss of consumer protections, privacy, and security while ISPs profit.

With the help of BEREC and EDRi, it’s easier to stand up for our rights, quality of service, fair competition, and transparency. If it weren’t for net neutrality, today we might not have popular streaming services like YouTube or Netflix, and we couldn’t freely listen to music via Spotify.

What can we do to fight back?

Although using a virtual private network (VPN) service is a good way to shield your traffic from your ISP, you’ll only protect yourself and your family. It’s also of crucial importance to make your voice heard and let the EU know that its internet users want net neutrality to be protected and enforced. For now, you can follow and support the great efforts of EDRi and BEREC to prevent the abuse of internet freedom.




June 29, 2019 | Ed Johnson-Williams and Amy Shepherd

Online Harms: Blocking websites doesn't work - use a rights-based approach instead

Blocking websites isn't working. It's not keeping children safe and it's stopping vulnerable people from accessing information they need. It's not the right approach to take on "Online Harms".

This is the finding from our recent research into website blocking by mobile and broadband Internet providers. And yet, as part of its Internet regulation agenda, the UK Government wants to roll out even more blocking.

The Government’s Online Harms White Paper is focused on making online companies fulfil a “duty of care” to protect users from “harmful content” – two terms that remain troublingly ill-defined. [1]

The paper proposes giving a regulator various punitive measures to use against companies that fail to fulfil this duty, including powers to block websites.

If this scheme comes into effect, it could lead to widespread automated blocking of legal content for people in the UK.

The Government is accepting public feedback on their plan until Monday 1 July. Send a message to their consultation using our tool before the end of Monday!

Mobile and broadband Internet providers have been blocking websites with parental control filters for five years. But through our Blocked project – which detects incorrect website blocking – we know that these systems are still blocking far too many sites and far too many types of sites by mistake.

Thanks to website blocking, vulnerable people and under-18s are losing access to crucial information and support from websites including counselling, charity, school, and sexual health websites. Small businesses are losing customers. And website owners often don't know this is happening.

We've seen with parental control filters that blocking websites doesn't have the intended outcomes. It restricts access to legal, useful, and sometimes crucial information. It also does nothing to stop people who are determined to access material on blocked websites; they often simply use VPNs to get around the filters. Other solutions, like filters applied by a parent to a child's account on a device, are more appropriate.

Unfortunately, instead of noting these problems inherent to website blocking by Internet providers and rolling back, the Government is pressing ahead with website blocking in other areas.

Blocking by Internet providers may not work for long, either. We are seeing a technical shift towards encrypted website address requests that will make this kind of blocking much more difficult.

When I type a human-friendly web address such as openrightsgroup.org into a web browser and hit enter, my computer asks a Domain Name System (DNS) resolver for that website's computer-friendly IP address – which will look something like 46.43.36.233. My web browser can then use that computer-friendly address to load the website.
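
For illustration, the same lookup can be done in a few lines of Python using the operating system's configured resolver (a minimal sketch; the address returned today may differ from the example above):

```python
import socket

# Ask the system's configured DNS resolver for the IP address
# behind a human-friendly name.
ip = socket.gethostbyname("openrightsgroup.org")
print(ip)  # e.g. 46.43.36.233 - the exact address may change over time
```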

At the moment, most DNS requests are unencrypted. This allows mobile and broadband Internet providers to see which website I want to visit. If a website is on a blocklist, the system won't return the actual IP address to my computer. Instead, it will tell me that the site is blocked, or will tell my computer that the site doesn't exist. That stops me visiting the website and makes the block effective.
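
Both behaviours can be observed from the answer the resolver gives back. A minimal sketch, assuming a filtering resolver (the blocked hostname below is a placeholder, so the second lookup simply fails to resolve):

```python
import socket

def lookup(hostname: str) -> str:
    """Query the system's DNS resolver and describe the outcome."""
    try:
        ip = socket.gethostbyname(hostname)
    except socket.gaierror:
        # The resolver claims the name doesn't exist - one way a
        # filtering resolver makes a block effective.
        return "no such domain (possibly blocked)"
    # A filtering resolver may instead answer with the address of its
    # own block page; comparing the answer against an independent
    # resolver would reveal that substitution.
    return f"resolved to {ip}"

print(lookup("openrightsgroup.org"))
print(lookup("blocked-site.example"))  # placeholder for a blocked name
```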

Increasingly, though, DNS requests are being encrypted. This provides much greater security for ordinary Internet users. It also makes website blocking by Internet providers incredibly difficult. Encrypted DNS is becoming widely available through Google's Android devices, on Mozilla's Firefox web browser and through Cloudflare’s mobile application for Android and iOS. Other encrypted DNS services are also available.
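
For example, Cloudflare's public resolver exposes a DNS-over-HTTPS endpoint that answers the same question inside an ordinary encrypted HTTPS request, so the Internet provider's own resolver never sees it. A minimal sketch using that endpoint's JSON interface (requires the third-party requests library):

```python
import requests

# Resolve a name over HTTPS instead of plain-text DNS on port 53.
# The Internet provider sees only an encrypted connection to the
# DoH server, not which website is being looked up.
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "openrightsgroup.org", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=10,
)
for answer in resp.json().get("Answer", []):
    print(answer["data"])  # the IP address(es) for the name
```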

Our report DNS Security - Getting it Right discusses issues around encrypted DNS in more detail.

Blocking websites may be the Government's preferred tool for dealing with social problems on the Internet, but it doesn't work – in policy terms, and increasingly at a technical level as well.

The Government must accept that website blocking by mobile and broadband Internet providers is not the answer. They should concentrate instead on a rights-based approach to Internet regulation and on educational and social approaches that address the roots of complex societal issues.


[1] See ORG's response to the Government's Online Harms White Paper: https://www.openrightsgroup.org/about/reports/org-policy-responses-to-online-harms-white-paper



June 28, 2019 | Pascal Crowe

Hunting for a solution? If it ain’t broke, don’t fix it

It seems apt that it was at this week’s ‘digital hustings’ for the Conservative Party leadership that Jeremy Hunt unilaterally came out in favour of online voting. Or at least, that is what elements of the press and Twitterati reported. What Hunt actually said was (slightly) more nuanced than a straightforward endorsement:

“The big innovation that we need is to introduce online voting...if we can book out holidays online, surely we can find a way that is fool proof to have online voting, and that is the way the world is going, and I think that would encourage much more participation in our democracy.”

There are four things to unpack in that statement that illustrate some core concerns about using the internet to help run elections.

1) Voting should be more like booking a holiday

Voting should not be like booking a holiday. Booking a holiday online requires you to input sensitive, traceable, personal information (like your card details, name and address) online. The company must be able to identify you in order to follow up on your payment. This is not, however, desirable for an election. Whilst electoral register data is available (although the type and accessibility of the data it contains changes reasonably frequently), digitising the voting process risks undermining the principle of the secret ballot. This opens the door to electoral fraud.

We should also ask ourselves if we really want private companies administering our elections. How would they be held to account? What happens if people decide they don’t trust the company? If there’s a problem with your hotel room, you can get a refund. How would you refund an election? For private companies, elections are a consumer product. Some are offering Democracy-as-iPod, with e-voting machines that come in “classic” or “premium” models. Should we have to choose between classic or premium democracy?

2) We can have a ‘fool proof’ online voting system

Existing statutory e-voting systems, most notably in Estonia, have been criticised for being insecure. The Netherlands stopped using e-voting machines in 2007 because they could be hacked within 30 seconds of entering a polling booth. Norway discontinued its i-voting trials in 2014, citing security concerns. The UK Electoral Commission has made serious criticisms of the e-counting hardware and software used in the London Assembly elections in every assessment it has made of them.

Put simply, the technology isn’t there. We shouldn’t turn our democracy into a user testing exercise for private companies. The risk is electoral outcomes that are decided by glitches in code or server security, rather than voters.

3) Online voting would encourage political participation

Enhancing political participation, particularly amongst people for whom the act of voting can be problematic (for example, people with disabilities), is a net positive for our democracy and a noble goal in itself. Further research about how to do this should be encouraged, and it seems likely that modern technology has a part to play.

Currently, though, there is no academic consensus on whether e-voting encourages turnout; the limited number of studies makes it difficult to draw firm conclusions. However, a recent Norwegian trial found no increase in aggregate turnout. In addition, it found that young people (an oft-cited target demographic for e-voting) actually preferred walking to the polling station, as it felt like a rite of passage into adulthood.

Just because your child might like playing Xbox doesn’t mean they want to ‘play democracy’ on an Xbox.

4) “That is the way the world is going”

This statement assumes that there is a public appetite for elections to go digital.

The level of support for online elections depends on who you ask and what question you ask them (for example, conducting an *online poll* about *online elections* is likely to suggest an unrepresentative level of support, unless you do some reasonably complex sample weighting). The Electoral Commission, however, in its assessment following the 2017 general election, found strong support for the way that the election had been administered. For example:

- 79% of respondents thought the election was well run (down from 91% in 2015).
- 98% of voters thought that the ballot paper was easy to complete.
- 84% of polling station voters were satisfied with the process of voting.
- 80% of postal voters were satisfied with the process of voting.
- 89% of candidates were satisfied with the administration of the election in their constituency.

This is not to say that elements of election administration could not be improved (for example, electoral registration is an issue). But let’s put these statistics into context. This was a national statutory election for which administrators had less than two months to prepare. If a commercial organisation had these levels of customer satisfaction after such an event, they would be cracking open the champagne.

So yes, let’s work out how to make it easier for all members of the electorate to vote. But we also need to encourage a political landscape that allows for a more equitable relationship between citizens and government. There are plenty of initiatives to encourage meaningful political engagement from a civic public: from citizens’ juries, to more localised economic and municipal models, through to proportional representation and an independent Yorkshire (although some of these are more likely to be realised than others). In the meantime, the medium of our electoral system – people, pencils, and paper – should be left well alone.

In a world of uncertainty, electoral interference, and waning confidence in our democratic institutions, electronic voting is a surefire way to add to our problems. So please Mr Hunt - if it ain’t broke, don’t fix it.



June 11, 2019 | Jim Killock

EFF and Open Rights Group Defend the Right to Publish Open Source Software to the UK Government

EFF and Open Rights Group today submitted formal comments to the British Treasury, urging restraint in applying anti-money-laundering regulations to the publication of open-source software.

https://www.eff.org/deeplinks/2019/06/eff-and-open-rights-group-defend-right-publish-open-source-software-uk-government



May 09, 2019 | Pascal Crowe

More than money - How to tame online political ads

The Electoral Commission’s Director of Regulation, Louise Edwards, recently put out a call for new laws to regulate online political adverts. She argued that the adverts need to show clearly and directly who has paid for them. [1] Whilst knowing who has paid for online ads is important, it’s only part of the picture. The whole process of online political advertising needs to be more tightly regulated.

Political parties target ads online by using personal data to include or exclude potential voters. This drives down spending by targeting only a narrow slice of the population. In addition, automated messaging is becoming both cheaper and more sophisticated. Both of these practices will significantly reduce the amount of money needed by campaigns.

To regulate online political advertising effectively, we need to look beyond campaign spending. It’s equally crucial to have greater transparency over parties’ use of personal data. Consequently, we should be looking at organisations beyond the Electoral Commission, for example the Information Commissioner’s Office.

Transparency is critical. Both political actors that use online advertising, and the platforms that facilitate them, should be forced to come clean on their sources of personal data and how their targeting works. Public reporting can be supported, to a degree, by initiatives such as Facebook’s online ad library. The limited data that Facebook provides, however, allows shady individuals who pay for ads online to conduct ‘astroturf’ campaigns hidden behind shell companies.

Open Rights Group is concerned that narrowly targeted online political advertising is contributing to the polarisation of democratic discourse. When parties’ messaging is designed only to be seen by the people already most likely to vote for them, it becomes less about consensus and increasingly geared towards riling up supporters in order to drive them to the ballot box.

Britain’s political discourse has never been totally impartial. But rarely has it been more fractured. Properly regulating online political ads would be a first step towards repairing it.

[1] https://www.bbc.co.uk/news/business-48174817




April 08, 2019 | Jim Killock and Amy Shepherd

The DCMS Online Harms Strategy must “design in” fundamental rights

After months of waiting and speculation, the Department for Digital, Culture, Media and Sport (DCMS) has finally published its White Paper on Online Harms - now appearing as a joint publication with the Home Office. The expected duty of care proposal is present, but substantive detail on what this actually means remains sparse: it would perhaps be more accurate to describe this paper as pasty green.

Read the White Paper here.

Increasingly over the past year, DCMS has become fixated on the idea of imposing a duty of care on social media platforms, seeing this as a flexible and de-politicised way to emphasise the dangers of exposing children and young people to certain online content and make Facebook in particular liable for the uglier and darker side of its user-generated material.

DCMS talks a lot about the ‘harm’ that social media causes. But its proposals fail to explain how harmful impacts on free expression would be avoided.

On the positive side, the paper lists free expression online as a core value to be protected and addressed by the regulator. However, despite the apparent prominence of this value, the mechanisms to deliver this protection and the issues at play are not explored in any detail at all.

In many cases, online platforms already act as though they have a duty of care towards their users. Though the efficacy of such measures in practice is open to debate, terms and conditions, active moderation of posts and algorithmic choices about what content is pushed or downgraded are all geared towards rooting out illegal activity and creating open and welcoming shared spaces. DCMS hasn’t elaborated in the White Paper on what its proposed duty would entail. If it’s drawn narrowly so that it only bites when there is clear evidence of real, tangible harm and a reason to intervene, nothing much will change. However, if it’s drawn widely, sweeping up too much content, it will start to act as a justification for widespread internet censorship.

If platforms are required to prevent potentially harmful content from being posted, this incentivises widespread prior restraint. Platforms can’t always know in advance the real-world harm that online content might cause, nor can they accurately predict what people will say or do when on their platform. The only way to avoid liability is to impose wide-sweeping upload filters. Scaled implementation of this relies on automated decision-making and algorithms, which risks even greater speech restrictions given that machines are incapable of making nuanced distinctions or recognising parody or sarcasm.

DCMS’s policy is underpinned by societally-positive intentions, but in its drive to make the internet “safe”, the government seems not to recognise that ultimately its proposals don’t regulate social media companies, they regulate social media users. The duty of care is ostensibly aimed at shielding children from danger and harm but it will in practice bite on adults too, wrapping society in cotton wool and curtailing a whole host of legal expression.

Although the scheme will have a statutory footing, its detail will depend on codes of practice drafted by the regulator. This makes it difficult to assess how the duty of care framework will ultimately play out.

The duty of care seems to be broadly about whether systemic interventions reduce overall “risk”. But must the risk be always to an identifiable individual, or can it be broader - to identifiable vulnerable groups? To society as a whole? What evidence of harm will be required before platforms should intervene? These are all questions that presently remain unanswered.

DCMS’s approach appears to be that it will be up to the regulator to answer these questions. But whilst a sensible regulator could take a minimalist view of the extent to which commercial decisions made by platforms should be interfered with, allowing government to distance itself from taking full responsibility over the fine detailing of this proposed scheme is a dangerous principle. It takes conversations about how to police the internet out of public view and democratic forums. It enables the government to opt not to create a transparent, judicially reviewable legislative framework. And it permits DCMS to light the touch-paper on a deeply problematic policy idea without having to wrestle with the practical reality of how that scheme will affect UK citizens’ free speech, both in the immediate future and for years to come.

How the government decides to legislate and regulate in this instance will set a global norm.

The UK government is clearly keen to lead international efforts to regulate online content. It knows that if the outcome of the duty of care is to change the way social media platforms work, that change will apply worldwide. But to be a global leader, DCMS needs to stop basing policy on isolated issues and anecdotes and engage with a broader conversation around how we as a society want the internet to look. Otherwise, governments both repressive and democratic are likely to use the policy and regulatory model that emerges from this process as a blueprint for more widespread internet censorship.

The House of Lords report on the future of the internet, published in early March 2019, set out ten principles it considered should underpin digital policy-making, including the importance of protecting free expression. The consultation that this White Paper introduces offers a positive opportunity to collectively reflect, across industry, civil society, academia and government, on how the negative aspects of social media can be addressed and risks mitigated. If the government were to use this process to emphasise its support for the fundamental right to freedom of expression - and in a way that goes beyond mere expression of principle - this would also reverberate around the world, particularly at a time when press and journalistic freedom is under attack.

The White Paper expresses a clear desire for tech companies to “design in safety”. As the process of consultation now begins, we call on DCMS to “design in fundamental rights”. Freedom of expression is itself a framework, and must not be lightly glossed over. We welcome the opportunity to engage with DCMS further on this topic: before policy ideas become entrenched, the government should consider deeply whether these will truly achieve outcomes that are good for everyone.



March 19, 2019 | Jim Killock

Jeremy Wright needs to act to avert disasters from porn age checks

Age Verification for porn websites is supposed to be introduced in April 2019. Age Verification comes with significant privacy risks, and the potential for widespread censorship of legal material.

The government rejected Parliamentary attempts to include privacy powers over age verification tools, so DCMS have limited possibilities right now. Last summer, the BBFC consulted on its draft advice to website operators, called Guidance on Age Verification Arrangements. That consultation threw up all the privacy concerns yet again. In response, the BBFC and DCMS agreed to include a voluntary privacy certification scheme.

Unfortunately, there are two problems with this. Firstly, it is voluntary. It won’t apply to all operators, so consumers will sometimes benefit from the scheme, and sometimes they won’t. It is unclear why it is acceptable to government and the BBFC that some consumers should be put at greater risk by unregulated products.

There is nothing to stop an operator from leaving the voluntary scheme so that it can make its data less private, more shareable, or more monetisable. It’s voluntary, after all.

Secondly, the scheme is being drawn up hastily, without public consultation. It is a very risky business for a regulator to produce a complex and pivotal security and privacy standard with a limited field of view. It is talking to vendors, but not the public who are going to be using these products. Security experts, of whom there are many who might help, are unable to engage.

This haste to create a privacy scheme seems to be due to the desire of government to commence age verification as fast as possible. That risks the privacy standard being substandard, and effectively misleading to consumers, who will assume that it provides a robust and permanent level of protection.

DCMS and Jeremy Wright could solve this right now

They need to do two things:

  1. Tell industry that government will legislate to make the Privacy Certification scheme compulsory;

  2. Announce a public consultation on BBFC's Privacy Certification scheme.

That may involve a short delay to this already delayed scheme. But that is better, surely, than risking damage to the privacy, personal lives and careers of millions of UK people regularly visiting these websites.
