
Digital Privacy
Profiling by Proxy: How Meta’s Data-Driven Ads Fuel Discrimination
Data profiling, or behavioural profiling, is one of the most powerful tools in digital advertising, and generates billions of dollars of revenue for the sites that host ads. In 2024, Meta earned over $160 billion from advertising revenue, over 95% of its total global revenue. Its net income was over $62 billion in the same year. Platforms such as Meta’s Facebook and Instagram collect, store and analyse thousands of data points about each individual user, gleaned from the things we do while on their platforms and elsewhere on the Internet. Our individual profiles can contain demographic, behavioural and geographical data as well as the kinds of devices we use to access social media.
Executive Summary
Profiling is the recording and categorisation of these thousands of data points into a digital profile. Algorithms look for patterns in our data which provide information about our shopping habits, topics we are interested in, places we like to visit and much more. These characteristics enable advertisers to find audiences that are most likely to respond to their adverts; they can select from extensive lists of interests, behaviours and other characteristics which have been automatically assigned to an individual user by Meta’s AI. This is ad targeting.
Much of the content of a data profile is not information that a user has directly disclosed, but that has been inferred by their online behaviours, and collated with other data into a composite picture. It is this combination that underpins much consumer unease, and creates the potential for ad targeting to cause discrimination and harm.
The combining of multiple data sources and subsequent analysis creates what are called proxies: where the platform or advertiser does not hold data that directly relates to a characteristic they wish to target, other characteristics can stand in for the desired one. Proxies can be created by training an algorithm with historical data about user activity, such as clicking on an advert for a particular product. The algorithm can then ‘learn’ what other characteristics are associated with that user action.
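The proxy mechanism described above can be illustrated with a toy simulation. Everything here is synthetic and hypothetical: the feature names, the probabilities and the structure are illustrative assumptions, not a description of Meta’s actual systems. The point is simply that a model trained only on a ‘neutral’ behavioural feature and a click outcome can end up targeting a hidden sensitive group:

```python
# Hypothetical sketch: how a "neutral" behavioural feature can act as a
# proxy for a sensitive attribute. All data here is synthetic.
import random

random.seed(0)

# Synthetic users: the sensitive attribute is never shown to the
# targeting model, but a behavioural signal (e.g. liking a certain
# page) correlates strongly, though not perfectly, with it.
users = []
for _ in range(1000):
    sensitive = random.random() < 0.5                      # hidden attribute
    likes_page = random.random() < (0.9 if sensitive else 0.1)
    # historical outcome: clicks are driven by the hidden attribute
    clicked = random.random() < (0.6 if sensitive else 0.1)
    users.append((sensitive, likes_page, clicked))

def click_rate(feature_value):
    """Observed click rate among users with the given feature value."""
    group = [clicked for (_, feat, clicked) in users if feat == feature_value]
    return sum(group) / len(group)

# A model "trained" on likes_page vs clicked learns the proxy: targeting
# by the behavioural feature effectively targets the sensitive group,
# even though that attribute was never directly processed.
print(f"click rate when likes_page=True:  {click_rate(True):.2f}")
print(f"click rate when likes_page=False: {click_rate(False):.2f}")
```

The gap between the two click rates is exactly what an optimising ad system would learn and exploit, which is why removing the sensitive field from the data does not remove the discrimination risk.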
Proxies are powerful tools for advertisers, but can also reveal sensitive information and data which is protected by anti-discrimination and data processing laws. These laws mean that companies like Meta are not allowed to process certain types of data, or make decisions based on that data which may discriminate against certain users. This can happen accidentally, or intentionally by unscrupulous advertisers or platforms, but either way, proxies mean that restrictions on data processing and ad targeting which are meant to prevent discrimination or harm can never be fully effective.
Profiling and ad targeting create huge revenues for the platforms that host adverts, and many advertisers believe that they improve audience engagement and conversion into sales. There is some evidence to support this, though it is not conclusive, and there appears to be increasing dissatisfaction and unease among advertisers with the business and social impacts of ad targeting. At the moment, however, they have few other options.
Many consumers are concerned about being profiled, and feel uncomfortable about ‘creepy’ adverts following them around the Internet: fewer than a fifth of UK Internet users, for example, said they are happy with their data being used in exchange for a free or personalised service. But at the same time, consumers are faced with a lack of other options, either having to accept profiling and targeting or stay off the platforms that have become an essential part of many of our lives.
Many examples exist of discrimination and harm caused by profiling and ad targeting. They include gender discrimination in who sees job ads; racial discrimination in housing and education ads; predatory data collection and targeting by online gambling sites; inappropriate products being promoted to under 18s including alcohol and pharmaceutical drugs; discriminatory and predatory advertising of credit; sharing of sensitive health information between NHS websites and Meta; and the targeting of scams to more vulnerable users.
With the introduction of Generative AI to Meta’s ad tools it is likely that existing problems with opacity and lack of accountability will worsen, and discriminatory targeting could increase.
It is likely that many users of Meta platforms do not fully understand how data profiling works, the volume of information collected and inferred about them, or the uses to which it is put, because so much of the process is opaque. There are some options within the platforms for users to reduce the amount of data collected and processed, but these do not switch off profiling or targeting completely.
Outside of Meta’s own settings, as consumers and users of sites like Facebook we are limited in our ability to opt out of data processing and ad targeting. Under the UK GDPR we have a right to object to our data being processed, which applies absolutely in the case of direct marketing. In March 2025 Meta settled in legal action brought by human rights campaigner Tanya O’Carroll, and said that it would no longer process her personal data for targeted advertising. Since then thousands of people in the UK have requested that Meta stop profiling them for advertising.
In theory we should be able to use sites without giving up our personal information for advertising purposes, but in reality Meta is not respecting our rights, and as consumers we have no way of forcing them to do so. Other models of digital advertising are emerging, and there has been some progress on ad transparency, but there is a long way to go before users can truly choose how much they share and how it is used by platforms and advertisers. These next steps are recommended to begin the journey:
Respect people’s right to consent to targeted advertising
Every user of a site or platform which uses profiling and ad targeting should only see targeted ads if they have consented for their data to be processed for this purpose. People should also be able to exercise their right to opt out simply and effectively at any time. It should not be a paid-for privilege, but a universally available right. Opting users out of data profiling and targeting should be the default for sites like Facebook, with users who prefer targeted ads able to opt in if they wish.
Improve ad transparency
The transparency introduced by the Meta Ad Library should be built on and strengthened, with all ads subject to a stronger minimum level of transparency, and with access to the Library freely available without logging into a Meta account.
Develop and support new models of adtech
Ad targeting doesn’t have to be done through data profiling: contextual advertising can achieve similar results without collecting personal data and violating user privacy. This and other models of privacy-preserving online advertising should be developed and supported by advertisers and platforms.
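The contextual alternative mentioned above can be sketched in a few lines. This is a deliberately minimal illustration, with a made-up ad inventory and keyword sets: the ad is chosen from the content of the page being viewed, and no data about the user is collected, stored or profiled:

```python
# Minimal sketch of contextual ad matching: the ad is selected from the
# page's content alone; no user data is involved. The inventory and
# keywords below are illustrative, not a real advertising API.

ADS = {
    "running shoes": {"run", "marathon", "training"},
    "garden tools":  {"garden", "plants", "lawn"},
    "cookware":      {"recipe", "kitchen", "baking"},
}

def pick_ad(page_text: str) -> str:
    """Return the ad whose keywords best overlap the page's words."""
    words = set(page_text.lower().split())
    return max(ADS, key=lambda ad: len(ADS[ad] & words))

print(pick_ad("A beginner's guide to marathon training"))  # → running shoes
```

Real contextual systems use far richer content analysis than keyword overlap, but the privacy property is the same: the targeting input is the page, not a profile of the person reading it.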
Enable user switching and interoperability
While the market is incentivised to maximise attention, and losing user trust has little impact on retention, the same harms are likely to keep emerging. For a better ad market to emerge, users need to be able to disengage from platforms with ease. Interoperability and user switching can help markets become more responsive, as users can move to better user experiences, including more truthful and less exploitative advertising environments, without losing their friends and contacts.
Introduction
Platforms like Facebook make billions of pounds every year from showing adverts to their users. In fact, for Meta, Facebook’s parent company, online advertising is the primary source of revenue.
As Internet users we’ve probably all experienced the creepy feeling of looking at a product on one website, only for it to follow us around every other site we visit, or when an ad appears on our Instagram feed which is weirdly appropriate to our life or interests.
The reason that Meta makes so much from advertising is the same reason that online ads seem to know us better than we know ourselves: the multi-billion dollar adtech industry. It involves harvesting thousands of data points on everyone who spends time online to create detailed profiles of who we are, what we enjoy doing, where we live, who we’re friends with and crucially, what we like spending money on.
Some people are willing to accept the creepy feeling, preferring to see ads more relevant to them rather than a random selection, but many are unhappy with the trade-off between using a platform and the capture and use of personal data. If you want to opt out, your rights and choices are seriously limited, and if you’re part of a community that experiences discrimination or is vulnerable in some way, the data profiling and ad targeting that are at the heart of Meta’s ad offer can cause serious harm.
Targeting does not need to be achieved through the use of sensitive or restricted personal data; it can be done using contextual browsing information. Behavioural profiling, however, is deeply entrenched in the mainstream adtech model.
Meta knows far more about us than we consciously or proactively disclose. By bringing together data from across its own platforms and everywhere else we spend time online, and analysing it using powerful AI, it has the ability to ‘learn’ who we are and which ads we are most likely to respond to. In creating these complex user profiles, seemingly innocuous data can reveal deeply personal information about us, or allow ads to be shown in ways that unlawfully exclude or target people.
Outside of a handful of investigations by civil society and reporters, most of this happens without our knowledge, with no straightforward way to challenge it. Our rights on paper are not enforceable in reality because Meta does not respect them. There needs to be a rebalancing of rights between platforms and users: users must be able to enforce their data rights without having to resort to legal action, or giving up the platforms and sites that have become central to our digital lives.