
Blog


May 15, 2014 | Javier Ruiz

Landmark ruling by European Court on Google and the "Right to be Forgotten"

The European Court of Justice has concluded that Google has to delete search results linking to outdated but lawful content in order to protect the data protection rights of individuals.

The European Court of Justice has published a landmark ruling forcing Google to remove some search results related to Mr Costeja González, a Spanish national, after he claimed the linked information was outdated and irrelevant, giving a wrong impression of him. The links pointed to an archived newspaper page containing a public notice by a tax authority for the auction of Mr Costeja's home to cover debts related to a business.

The ruling has far-reaching implications and has generated conflicting opinions among digital rights advocates. It introduces some very positive - and some quite dangerous - developments.

It is good news that internet companies such as Google, which operate in Europe but are headquartered elsewhere, will now have to comply with data protection laws and take responsibility for the data they process.

The court has also upheld that the "right to be forgotten" exists in European privacy law. But it has not fully considered the need to balance this "right" with the right to freedom of expression. This creates the potential for abuse by individuals who wish to hide damaging information. The court may have created a weak spot for censorship: where the bar for getting websites removed is too high in terms of proving libel or other harms, it may be easier simply to ask Google to remove the search results under data protection laws.

It is worth noting that the ruling will have an immediate impact in Spain, where hundreds of similar requests are awaiting resolution, but it will take some time to spread across other European legal systems. For now the ruling creates a precedent, and an incentive for Google to agree to requests for the removal of personal information without full consideration of freedom of expression.

1. Google has to fully comply with European data protection

Google has long claimed that it did not have data protection obligations in Europe because Google Inc is the US company that holds the data, while local subsidiaries in Spain or the UK only run commercial activities. For example, Google offers you the option to download some personal data, such as emails, via their automated tools, but not to request all the information they hold on you, as they would have to do under EU law. The ruling demolishes this position and makes Google responsible if they "advertise and sell" in a member state.

2. Search engines are data controllers

Search engines have generally been seen as simply reproducing existing information. Most discussions on Google and personal data have looked at services such as email, location, etc. but not search results. The ruling defines the activities of search engines in relation to webpages with personal information - indexing, storing and making available - as "processing" under the terms of EU data protection law. Furthermore, Google is the "controller" that "determines the purposes and means of the processing".

3. The "right to be forgotten" already exists in European law

The "right to be forgotten" is based on the premise that outdated and irrelevant information can give a distorted picture of an individual, for example, preventing them from getting a job. This is a very real concern, but there is also a need to preserve a record of social history. Archivists are concerned that this right could mean rewriting history.

There have been fierce debates about the introduction of this right in new legislation, but the court has cut through the knot and found that no new legislation is needed. Existing laws, requiring the personal information that companies hold on people to be relevant and accurate, can be used to enforce this right.

The court did not fully engage with all the problematic wider implications of this right, as it simply considered search results and not the actual deletion of records. But many feel this is a cop out. Such a major ruling on the "right to be forgotten" should lay out some criteria for when and how obsolete or distorting personal information should be removed from the public record, or at least made less accessible. This could mean, for example, stopping the indexing of public records and online archives by search engines and other processors. But it could also mean taking whole digital archives offline.

4. Being a data controller has far reaching implications

Labelling Google a data controller for search results goes beyond the right to delete information. It creates a seismic shift in the responsibilities that Google has to the people whose information appears in its search results. Given that Google indexes pretty much the whole public internet, this could affect anyone who is named on any website. For example, EU law places constraints on controllers around the export of data to certain countries with lower privacy protections. Data controllers have to give "data subjects" a copy of all the information they hold on them. There is even speculation that this could mean people can now object to receiving adverts when they use the platform.

It remains to be seen how this can work in practice with millions of results involving potentially hundreds of people sharing the same name.

5. Publicly available information is subject to data protection

This ruling is a reminder that many internet intermediaries are not exempt from data protection responsibilities. Just because personal information is publicly available does not mean that you can do as you wish with it. This issue was considered in relation to open data by the Article 29 Working Party.

According to the court, the right to data protection of individuals trumps the "mere economic interest of the manager of the search engine" unless there is an explicit public interest. This will be important for many online projects processing personal data.

The court also mentions the need to consider the right to access the information of internet users, particularly if the person affected has a role in public life. But this balancing of privacy and freedom of expression is not really explored in the ruling.

What is clear from this and previous rulings is that public figures will have to accept a lower expectation of privacy, and generally should not be able to get their information deleted so easily.

6. Search engines have a separate responsibility from publishers

It appears surprising that the ruling supports the Spanish data authorities in allowing the original offending article to remain in place, while forcing Google to delete references to it. The ECJ makes a clear distinction between search engines and the publication of the information. This is consistent with the application of the principles above, but it creates a potential weak spot for online censorship.

A key aspect of this ruling is that it doesn't relate to libellous or defamatory information. It censors lawful content that contains personal information, on the grounds that such content may still cause detriment to individuals when processed by search engines, because:

  • search engines can combine lawful information to generate a completely new insight. The court sees the search results relating to a person as a personal profile. This is not a neutral list of links because the information is organised (e.g. ranking, possibly removal of duplicate results, etc.)

  • search engines provide access to outdated information that before would simply disappear into dusty archives nobody visits, but now lingers on in accessible webpages. Without search engines you would need to know what you were looking for and make a special visit.


7. Google will now be tempted to remove links rather than contest requests

It is hard to evaluate the balance of competing rights involved in these cases. The ruling does not help Google decide in future cases. How old do websites have to be to become irrelevant? How public should a person be? How do you judge the public interest?

Should Google decide on this balance of rights? It is very unclear how the rights of the publisher will be safeguarded in an internal process by a private company. As a general principle, removal of websites, or search links, should be decided by a legal authority, not a business.

We are particularly concerned that the path of least resistance for Google will be to automate the removals. For Google it will be cheaper to delete links automatically and let others complain later on, than to consider the balance of rights in every request.

If any content has to be censored, it should be done with due process and consideration for the right to freedom of expression, applied consistently across the board.

It may not be the intention, but the ruling appears to create a lower barrier for censoring search results than for censoring hosting. Freedom of expression in the 21st century is not just about the right to publish, but also about being found online.

[Read more]


May 14, 2014 | Ed Paton-Williams

ORG hands in petition saying no to HMRC's tax data sell off

We handed in our tax data sell-off petition to HMRC earlier today, along with ORG Advisory Council member Julian Huppert MP and campaign groups 38 Degrees and SumofUs. The Guardian's just put a story up covering the petition hand-in.

Over 300,000 people signed petitions, which were started by ORG, 38 Degrees and SumofUs after we found out that HMRC was considering sharing anonymised tax data for commercial research. We're concerned that under these plans it is very difficult to give or withdraw consent about what happens to our tax data. It is also not at all clear that selling tax data to companies is truly in the public interest.

ORG is currently engaged with HMRC and the Cabinet Office in discussions around the sharing of personal data held by the Government. We'll keep you updated with how that's going.

Thanks to everyone who signed the petition.

Handing in the HMRC petition


[Read more]


May 13, 2014 | Jason Kitcat

Guest blog: Estonia and the risks of internet voting

In my capacity as an ORG Advisory Council member I've been working with an independent team of election observers researching the Internet voting systems used by Estonia. Why should anyone in the UK be interested in this?

Two reasons: Firstly Estonia is regularly held up as a model of e-government and e-voting that many countries, including the UK, wish to emulate. Secondly, after years of e-voting being off the UK agenda (thanks in part to ORG's previous work in this area), the chair of the Electoral Commission recently put the idea of e-voting for British elections back in play.

Before our or any other government leaps to copy the Estonian model, our team wanted to better understand the strengths and weaknesses of the Estonian system. So several of us monitored the internet voting in operation for Estonia's October 2013 municipal elections as official observers accredited by the Estonian National Election Committee. Subsequently the team used the openly published source code and procedures for the Estonian system to build a replica in a lab environment at the University of Michigan. This enabled detailed analysis and research to be undertaken on the replica of the real system.

Despite the system being built on Estonia's impressive national ID smartcard infrastructure, we were able to find very significant flaws in the Estonian internet voting system, which they call "I-voting". Several serious problems were identified:

Obsolete threat model

The Estonian system uses a security architecture that may have been adequate when the system was introduced a decade ago, but it is now dangerously out of date. Since the time the system was designed, state-level cyberattacks have become a very real threat. Recent attacks by China against U.S. companies, by the U.S. against Iran, and by the U.K. against European telecoms demonstrate the proliferation and sophistication of state-level attackers. Estonia itself suffered massive denial-of-service attacks in 2007 attributed to Russia.

Estonia’s system places extreme trust in election servers and voters’ computers — all easy targets for a foreign power. The report demonstrates multiple ways that today’s state-level attackers could exploit the Estonian system to change votes, compromise the secret ballot, disrupt elections, or cast doubt on the fairness of results.

Abundant lapses in operational security and procedures

Observation of the way the I-voting system was operated by election staff highlighted a lack of adequate procedures for both daily operations and handling anomalies. This creates opportunities for attacks and errors to occur and makes it difficult for auditors to determine whether correct actions were taken.

Close inspection of videos published by election officials reveals numerous lapses in the most basic security practices. They appear to show the workers downloading essential software over unsecured Internet connections, typing secret passwords and PINs in full view of the camera, and preparing election software for distribution to the public on insecure personal computers, among other examples. These actions indicate a dangerously inadequate level of professionalism in security administration that leaves the whole system open to attack and manipulation.

Serious vulnerabilities demonstrated

The authors reproduced the e-voting system in their laboratory using the published source code and client software. They then attempted to attack it, playing the role of a foreign power (or a well-resourced candidate willing to pay a criminal organization to ensure they win). The team found that the Estonian I-voting system is vulnerable to a range of attacks that could undetectably alter election results. They constructed detailed demonstration attacks for two such examples:

Server-side attacks: Malware that rigs the vote count

The e-voting system places complete trust in the server that counts the votes at the end of the election process. Votes are decrypted and counted entirely within the unobservable “black box” of the counting server. This creates an opportunity for an attacker who compromises this server to modify the results of the vote counting.

The researchers demonstrated that they can infect the counting server with vote-stealing malware. In this attack, a state-level attacker or a dishonest election official inserts a stealthy form of infectious code onto a computer used in the pre-election setup process. The infection spreads via the software DVDs used to install the operating systems on all the election servers. The code ensures that the basic checks on the integrity of the software still appear to pass, despite the software having been modified. The attack’s modifications would replace the results of the vote decryption process with the attacker’s preferred set of votes, thus silently changing the results of the election to their preferred outcome.
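
To make the report's point about why such integrity checks give weak assurance, here is a minimal Python sketch of hash-based software verification. This is our own illustration, with assumed file names and manifest values, not the actual Estonian verification code:

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_install(install_dir, manifest):
    """Check every installed file against a published reference hash.

    manifest maps relative file names to expected hex digests.
    Returns True only if every file is present and matches.
    """
    for name, expected in manifest.items():
        candidate = Path(install_dir) / name
        if not candidate.is_file() or sha256_of(candidate) != expected:
            return False
    return True

# The catch, and the report's point: this check executes ON the machine
# being verified. Malware that already controls the operating system can
# intercept the file reads and hand back the original, unmodified bytes,
# so a passing result proves nothing about a compromised host.
```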

Client-side attacks: A bot that overwrites your vote

Client-side attacks have been proposed in the past, but the team found that constructing fully functional client-side attacks is alarmingly straightforward. Although Estonia uses many security safeguards — including encrypted web sites, security chips in national ID cards, and smartphone-based vote confirmation — all of these checks can be bypassed by a realistic attacker.

A voter’s home or work computer is attacked by infecting it with malware, as millions of computers are every year. This malicious software could be delivered by pre-existing infections (botnets) or by compromising the voting client before it is downloaded by voters, exploiting the operational security lapses described above. The attacker’s software would be able to observe a citizen voting and then silently steal the PIN codes required to use the voter’s ID card. The next time the citizen inserts the ID card — say, to access their bank account — the malware can use the stolen PINs to cast a replacement vote for the attacker’s preferred candidate. This attack could be replicated across tens of thousands of computers. Preparation could begin well in advance of the election by using a replica of the I-voting system, as the team did for their tests.

Insufficient transparency to establish trust in election outcomes

Despite positive gestures towards transparency — such as releasing portions of the software as open source and posting many hours of videos documenting the configuration and tabulation steps — Estonia’s system fails to provide compelling proof that election outcomes are correct. Critical steps occur off camera, and potentially vulnerable portions of the software are not available for public inspection. (Though making source code openly available is not sufficient to protect the software from flaws and attacks.) Many potential vulnerabilities and forms of attack would be impossible to detect based on the information provided to the public. So while the researchers applaud attempts at transparency, ultimately too much of how the I-voting system operates is invisible for it to convince skeptical voters or candidates of the outcomes.

To illustrate this point, the team filmed themselves carrying out exactly the same procedural steps that real election officials show in nearly 24 hours of videos from the 2013 elections. However, due to the presence of malware injected by the team before the recordings started, their count produces a dishonest result.

Recommendation: E-voting should be withdrawn

After studying other e-voting systems around the world, the team was particularly alarmed by the Estonian I-voting system. It has serious design weaknesses that are exacerbated by weak operational management. It has been built on assumptions which are outdated and do not reflect the contemporary reality of state-level attacks and sophisticated cybercrime. These problems stem from fundamental architectural problems that cannot be resolved with quick fixes or interim steps.

While we believe e-government has many promising uses, the Estonian I-voting system carries grave risks — elections could be stolen, disrupted, or cast into disrepute. In light of these problems, our urgent recommendation is that to maintain the integrity of the Estonian electoral process, use of the Estonian I-voting system should be immediately discontinued.

Our work shows that, despite a decade of experience and an advanced e-government infrastructure, Estonia is unable to provide a secure e-voting system. So we believe other countries, including the UK, should learn from this that voting is uniquely challenging to provide online whilst maintaining the fundamental requirements of fair elections: secrecy of the vote, security and accuracy. The significant costs of attempting to build such a system would be better directed at other forms of e-government, which can provide greater and more reliable benefits for citizens without risking the sanctity of elections.

Read and watch more about this work at https://estoniaevoting.org


[Read more] (1 comment)


May 08, 2014 | Jim Killock

Lobby tries to kill private copying with demand for iPod tax

For well over ten years we have been arguing about a private copying exception, to legalise everyday consumer behaviour of copying music to computer disks. Despite the fact that copyright industry groups have always said they'd never sue anyone, they claim that an exception would cause substantial damage that requires compensation.

Right now, both the private copying exception and parody appear to be delayed. The draft Statutory Instruments are now being discussed by a joint committee and the government in a rather opaque process.

The argument from publisher lobby groups is that European law requires compensation for economic harm arising from copyright exceptions. The UK government has so far, reasonably, argued that any harm would be minimal. Negligible might be more accurate. The change to the law would have little impact on people's behaviour. It would merely legalise what many people already do, copy the music they have legally bought from one device to another.

So what would the damage be? How many people will stop buying second copies of music if an exception is introduced? Probably nearly nobody, we imagine.

To put it another way, how much should you have to pay for a private copy of your own music and films? The BPI says that with a private copying exception “fair compensation must be granted to rights holders”. UK Music says that “the exception cannot lawfully be made without fair compensation”.

The British Copyright Council says that “The private copying exception does not include a fair compensation mechanism as required by EU law (Article 5(2)(b) Information Society Directive); the harm by private copying is neither minimal nor priced in [to existing sales] … The BCC supports the introduction of a private copying exception for protected works in the UK, but any such exception should provide for fair compensation to rights owners which is limited to copying from physical products.”

What could compensation look like? In Spain, from 2008 to 2011, any “non-excluded” hard disk paid a €12 levy, a mobile phone €1.10, and a 70ppm photocopier €227. Multifunction printers paid from €7.95 to €10. Disks used to boot computers were excluded.

It is hard to see charges like this as anything except a tax on innovation and investment. It could easily affect mobile phones, tablets and portable hard disks, hitting the cheaper end of the market and poorer customers especially hard.

The Spanish law was killed in 2011 after massive pressure. Over 3 million Spaniards signed a petition to kill it. We're certain the UK doesn't want that fight. But will they bow to lobby pressure, and kill the private copying exception to avoid a fight over an iPod tax?

No politician is likely to agree to a levy for damage that barely exists, in return for a change in the law that merely reflects real behaviour that nobody is going to be prosecuted for. The real victim will be the legitimacy of copyright law: yet again, the copyright lobby groups are resisting change that could improve the perception of their industry and the laws that support it.

[Read more]


April 16, 2014 | Jim Killock

Quiz your MEP candidates on digital rights

Europe makes many of the laws that are shaping privacy and restricting surveillance. Data Protection, for instance, should guarantee that interception is lawful, rather than arbitrary.

Last week, the European Court of Justice declared the Data Retention Directive invalid, which has huge implications for our claim that UK law supervising surveillance is inadequate.

The European Parliament also investigated the Snowden allegations, and took evidence from Edward Snowden himself.

After investigations, the Parliament agreed that data protection “safe harbor” agreements with the USA should be suspended and said that the activities of GCHQ and the NSA “appear illegal”.

It was the Parliament, too, that struck down the ACTA treaty, and recently voted to protect net neutrality.

Europe matters for digital rights and our campaign to end mass surveillance in the UK. That's why we are taking part in the wepromise.eu campaign, asking you and candidates to pledge to support digital rights; and why we are asking you to come to the nearest digital rights hustings for EU Parliamentary candidates in May. With the election coming, we can put pressure on candidates to tell us what they will do to protect the right to privacy and free speech if they are elected.

Digital Rights European elections debates

Manchester

When: Tuesday 6th May, 6:30 - 8:30 pm
Where: The Main Hall, The Friends Meeting House, Mount Street
http://www.meetup.com/ORG-Manchester/events/176592492/

Sheffield

When: Thursday 8th May, 6:30 - 8:30 pm
Where: St Mary's, Bramall Lane, S2 4QZ
http://www.meetup.com/ORG-Sheffield/events/176593712/

Bristol

When: Friday 9th May, 6:30 - 8:30 pm
Where: St Werburghs Community Centre
http://www.meetup.com/ORG-Bristol/events/176594452/

Norwich

When: Monday 12th May, 6:30 - 8:30 pm
Where: Norwich Quaker Meeting House, NR2 1EW
http://www.meetup.com/open-rights-group-norwich/events/176603832/

London

When: Thursday 15th May, 6:30 - 9:30pm
Where: Shoreditch Village Hall, 33 Hoxton Square, N1 6NN
http://www.meetup.com/ORG-London/events/176612572/

Brighton

When: Friday 16th May, 6:30 - 8:30 pm
Where: BMEP Centre, 10A Fleet Street, Brighton, BN1 4ZE
http://www.meetup.com/ORG-Brighton/events/177466782/

[Read more]


April 15, 2014 | Jim Killock

Help us to re-start the debate about internet filters

At times the campaign to prevent internet filters has bordered on the surreal, such as when the Deputy Children’s Commissioner Sue Berelowitz said, ‘no one should be panicking – but why should there not be a moral panic?’ Or the time when Helen Goodman MP thought parents weren’t capable of switching on filters themselves because, ‘the minute you talk about downloading software, my brain goes bzzzz’. And who can forget Claire Perry MP dismissing overblocking as, ‘a load of cock’?

Against this background of moral outrage and technological incompetence, ORG has been trying to make people aware that filters don’t work, are dangerous for internet freedom and could give parents a false sense of security when it comes to their children’s use of the internet.

But now it looks like Claire Perry has won. Every major internet service provider in the UK is promoting filters that block websites containing material that isn’t appropriate for children. This means that your internet service provider gets to decide what you can or can’t see online, regardless of how old you are.

No laws were passed for this to happen. There was no debate in parliament, just a series of closed meetings, following a report by Claire Perry MP. A report that was sponsored by Christian charity Safermedia and radio broadcaster Premier Christian Media.

This has been done in the name of keeping children safe from pornography, although the filters include a whole load of other categories, including web forums, alcohol, smoking, suicide and anorexia. No one knows exactly which sites are on the list. Recently, the government asked to add secret extremist website lists to the blacklist as well, so we can only expect that this list will grow and grow. Then there’s the problem that a whole load of sites get blocked by mistake - from churches (they mention wine!) to political blogs that have been miscategorised as hate speech. And a lot of sites that children should have access to - such as sites on sexual health - are also blocked. Once your website is on a blocked list, there’s no easy way to get off it.

Let’s be honest, no one wants their kids seeing porn or stuff that might upset them but David Cameron’s suggestion of, "one click to protect your whole home and keep your children safe," is deeply irresponsible. It may come as a surprise to Cameron but parents might need to act like grown ups when it comes to adult content. Talking about porn, extremism or self-harming sites might not come naturally to most of us. But we have a responsibility to equip our children with the skills they need to navigate their way in the digital world - just as we do in the non-digital world. Filters don’t do that.

If parents want to switch on filters, that is their choice. But it should be an informed choice and there are alternatives to blanket filters, such as device-level filters, which are more effective.

If parents don’t want filters, they shouldn’t be made to feel ashamed or that they are failing as a parent because they’ve decided to take responsibility for how their kids use the internet. If you don’t have kids, then there is absolutely no reason you should feel pressurised into switching them on. Filters are harmful for people who are browsing for information about domestic violence, safe sex or drugs, but they are not going to stop a tech-savvy teenager who is determined to find adult content.

If it turns out the public don’t want filters to censor what they see online, then politicians will start asking for blocks that are even harder to switch off. They will continue to claim that filters can solve every social ill. We have to discredit this ridiculous idea. We don’t have to put up with censorship just to make their lives easier.

Indiegogo

To get this message across we want to produce a high-quality, funny film that will re-start the debate about why filters are a bad idea. It will cost us £12,000 to get this campaign off the ground.

We have launched a campaign on Indiegogo to help raise the money we need and we have less than four weeks to raise it.

Support this film so we can show exactly how stupid filters are.

Update: In a couple of instances, the word "default" was used in this article. These have now been removed. April 29th, 2014.

[Read more] (2 comments)


April 14, 2014 | Richard King

Making progress on monitoring censorship

ORG is running a project to end the imposition of web blocking by ISPs and the Government. Here's how we're getting on and how you can get involved.

Since the start of the year ORG's community of technical volunteers have been turning blocked.org.uk into an automated platform for censorship detection, reporting and research. I joined ORG's staff at around the same time to help support both the project and the community bringing it to life. We are now at a very exciting stage of the project - however there is still a lot of work to be done.

Here's a quick overview of the system we're building, the progress we've made to date, and the many ways in which you can help us finish the job.

Upgrading the website

First of all, we're giving the website itself (www.blocked.org.uk) a facelift, with a new responsive template and graphical design. The form for submitting a URL to check or report as blocked will still be the main feature. New features will include overview statistics, historical data on individual URLs, and a space for user-submitted stories on how censorship affects them.

Blocked.org.uk website facelift

Behind the Scenes

We're also building a benevolent botnet of "probes", each connected to a different company's broadband line.

When given a URL to test, these probes will check whether it can be reached via their ISPs and report the results to our database.
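
As a rough illustration of the probe logic, here is a minimal Python sketch. The reporting endpoint, result fields and block-page markers are all assumptions for the sake of the example - the real probes and blocked.org.uk API may differ considerably:

```python
import requests

# Hypothetical endpoint for illustration only -- the real blocked.org.uk
# API may look quite different.
API_ENDPOINT = "https://api.blocked.org.uk/submit"

# Strings that often appear on ISP block pages. A real probe would use
# per-ISP fingerprints and compare against a fetch over an unfiltered line.
BLOCK_PAGE_MARKERS = ["site blocked", "parental controls", "access denied"]

def probe(url, isp):
    """Fetch a URL over this probe's broadband line and classify the result."""
    try:
        response = requests.get(url, timeout=10, allow_redirects=True)
        body = response.text.lower()
        blocked = any(marker in body for marker in BLOCK_PAGE_MARKERS)
        return {"url": url, "isp": isp,
                "status": "blocked" if blocked else "ok",
                "http_code": response.status_code}
    except requests.RequestException as exc:
        # Timeouts and connection resets are also common blocking symptoms.
        return {"url": url, "isp": isp, "status": "error", "detail": str(exc)}

def report(result):
    """Send one test result back to the central database."""
    requests.post(API_ENDPOINT, json=result, timeout=10)

if __name__ == "__main__":
    report(probe("http://www.example.com/", "ExampleNet"))
```

A production probe would also need to compare responses against a control fetch over an unfiltered connection, since block pages vary by ISP and some blocks surface only as resets or timeouts.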

Visitors to blocked.org.uk will be able to ask the database whether a particular URL is being censored and by which networks. They will also be able to see the blocking history of the URL if it has already been registered, request that the site be checked again, and tell us why this particular site is important to them.

We'll be releasing all this data, and our code, under permissive licenses that let others reuse and build on what we're creating.

What are the next steps?

The new website, the probe software, the databases and the Application Programming Interfaces (APIs - mechanisms that let each part talk to the others) are all at advanced stages of development. We have ordered broadband subscriptions from all the major UK ISPs and these are being commissioned right now. Our next challenge is to link all these components together into a working system.

Achieving this first milestone will make web censorship in the UK more transparent - but we won't be stopping there.

We want to improve the system by experimenting with different sets of URLs to keep an eye on, adjusting retest frequencies, iterating our methods for detecting that a site has been blocked, and generating reports and statistics on filtering methods, behaviour and effectiveness.
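
To give a flavour of what adjusting retest frequencies might mean in practice - purely a sketch of the idea, not our actual implementation - a scheduler could retest URLs whose blocking status changed recently more often than long-stable ones:

```python
from datetime import datetime, timedelta

def next_test_due(last_tested, last_changed, base_interval=timedelta(days=7)):
    """Pick the next retest time for a URL.

    URLs whose blocked/unblocked status changed recently are retested
    sooner, because their state is evidently in flux; URLs that have
    been stable for a long time are checked less often.
    """
    stable_for = last_tested - last_changed
    if stable_for < timedelta(days=1):
        interval = timedelta(hours=6)   # status is actively changing
    elif stable_for < timedelta(days=30):
        interval = base_interval        # settle into a weekly check
    else:
        interval = base_interval * 4    # long-stable: monthly is enough
    return last_tested + interval

# Example: a URL whose status flipped yesterday gets retested in 6 hours.
print(next_test_due(datetime(2014, 4, 14), datetime(2014, 4, 13, 12)))
```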

How can I help?

All this work is being done by our amazing community of technical volunteers - which you are welcome to join! There's plenty to do, from writing copy to writing software, and you can find out how to get involved on our project website: http://www.blocked.org.uk/help. See you on the mailing list!

[Read more]


April 10, 2014 | Jim Killock

Back to the coalition agreement: data retention laws should not be revived

In 2010, the coalition announced that they would roll back the surveillance state including the “Ending of storage of internet and email records without good reason”. The coalition is on the threshold of fulfilling that pledge - at least in relation to data held by ISPs. ISPs meanwhile need to clarify what they are doing now that the law is gone.

No doubt, once the coalition settled down, ministers were briefed that the retention of user data was required by European law: so they could easily forget about this pledge. The European Court of Justice has helped the matter along by deleting the law. We sincerely hope that the coalition sticks by its agreement, and does not try to re-legislate data retention back into UK statutes.

As a result of the law’s death, some ISPs are starting to delete their data - in Sweden, for instance, where this law caused very significant controversy, the authorities are letting ISPs do this. It is extremely important that we know what actions ISPs are taking. For this reason, ORG has today written to BT, Sky, TalkTalk and Virgin to ask them to explain how they will be treating user data now that the Directive no longer exists:

… these regulations no longer have a valid basis in UK law.  It is our understanding that ISPs therefore should not be retaining user data unless there is some other legal basis for doing so.

We understand that you should only retain personal data such as IP logs and email communications data for legitimate business reasons or specific legal requirements.

In the interests of your customers, please can you:

(1) Confirm that you are not continuing to abide by the now defunct Data Retention Directive and regulations;
(2) Publish a description of the data you will be continuing to collect for business purposes (and how the data assists you), and what time period you will be holding the data for.


[Read more]

