
Data protection – the US FTC condemns social platforms’ AI practices


(© peterhowell – Canva.com)

As the AI Spring continues, the US Federal Trade Commission has slammed the data protection record of social media platforms and video streaming services.

In an 84-page report published on 11 September – with a further 31 pages of appendices – the FTC says that the platforms’ data gathering practices pose a serious threat to privacy and security, with children most at risk from what it termed “vast surveillance”.

According to the FTC, users lack “any meaningful control over how personal information [is] used for AI-fuelled systems”.

The report focuses on nine dominant platforms: Amazon, Facebook, YouTube, Twitter (now X), Snap, ByteDance (developer of TikTok), Discord, Reddit, and WhatsApp (also owned by Meta, alongside Facebook).

Last month, X came under hostile media scrutiny when it was revealed that, by default, account holders’ data was being used to train the Grok AI without their knowledge or opt-in consent.

The status quo that allows such behaviour – or tolerates it, even where it breaches regulations in other territories, such as the EU’s GDPR – is “unacceptable”, notes the FTC:

The amount of data collected by large tech companies is simply staggering. They track what we read, what websites we visit, whether we are married and have children, our educational level and income bracket, our location, our purchasing habits, our personal interests, and in some cases even our health conditions and religious faith. 

They track what we do on and off their platforms, often combining their own information with enormous data sets purchased through the largely unregulated consumer data market. And large firms are increasingly relying on hidden pixels and similar technologies – embedded on other websites – to track our behavior down to each click. 

In fact, the companies collected so much data that in response to the Commission’s questions, they often could not even identify all the data points they collected, or all of the third parties they shared that data with.
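To make the “hidden pixels” mentioned above concrete: a tracking pixel is typically a one-by-one transparent image, served from a tracker’s domain and embedded invisibly in third-party pages. Each time a browser loads such a page, the image request itself delivers the visitor’s IP address, cookies, and referring page to the tracker. Below is a minimal sketch in Python, using only the standard library; the endpoint name, port, and logged fields are illustrative assumptions, not any real platform’s implementation.

from http.server import BaseHTTPRequestHandler, HTTPServer

# A 1x1 transparent GIF (43 bytes) – small enough to embed invisibly
# in any web page via <img src="https://tracker.example/pixel.gif">.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00"
         b",\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The image is irrelevant; the request *is* the data point.
        # Every page that embeds the pixel reports each visit here.
        print({
            "ip": self.client_address[0],                  # who
            "page": self.headers.get("Referer"),           # which page they were on
            "cookie": self.headers.get("Cookie"),          # ties visits to one browser
            "user_agent": self.headers.get("User-Agent"),  # device/browser details
        })
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.send_header("Cache-Control", "no-store")  # force a fresh hit per page view
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), PixelHandler).serve_forever()

Because the logging happens server-side, the moment the page loads, there is nothing for the visitor to click, decline, or opt out of – which is precisely the FTC’s point.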

Writing in the preface to the FTC’s report, Samuel Levine, Director, Bureau of Consumer Protection, said:

The report leaves no doubt that without significant action, the commercial surveillance ecosystem will only get worse. Our privacy cannot be the price we pay to accomplish ordinary basic daily activities, and responsible data practices should not put a business at a competitive disadvantage.

Self-regulation is not the answer.

But the FTC’s criticisms are not confined to the platforms; the Commission notes that the federal government is far from blameless:

The absence of legislation has given firms nearly free rein in how much they can collect from users. Two decades ago, some believed that large tech companies could be trusted to establish adequate privacy standards and practices. This report makes clear that self-regulation has been a failure.

The rise of generative AI, Large Language Models, and other artificial intelligence and machine learning systems provides the context for the FTC’s fear of a sector-wide free-for-all. Levine writes:

Predicting, shaping, and monetizing human behavior through commercial surveillance is extremely profitable – it’s made these companies some of the most valuable on the planet – and putting industry in charge has had predictable results.

America’s hands-off approach has produced an enormous ecosystem of data extraction and targeting that takes place largely out of view to consumers.

Indeed, data gathering affects not just the nine platforms’ own users, notes the report, but everyone else too. Levine continues:

As many of these firms pivot to developing and deploying AI, while continuing to shroud their practices in secrecy and implementing minimal safeguards to protect users, we must not continue to let the foxes guard the henhouse. Protecting users – especially children and teens – requires clear baseline protections that apply across the board.

So, what is the solution? To fix the system, fix the incentives, urges the Commission:

[Problems] stem from a business model that varies little across the nine firms – harvesting data for targeted advertising, algorithm design, and sales to third parties. With few meaningful guardrails, companies are incentivized to develop ever-more invasive methods of collection.

This incentive structure is especially concerning given the dominant positioning enjoyed by the largest firms, which exert vast power over our economy, our democracy, and our society.

The rewards from data harvesting raise serious risks that firms will seek unfair advantages through a host of anti-competitive behaviors – from pressuring smaller websites to embed their tracking technologies, to leveraging their massive collection efforts to identify and prevent newcomers who want to enter the market, to creating vast ‘walled gardens’ that do much to depress competition and little to protect consumers’ data.

As noted in my previous reports on data protection this month, the FTC’s investigation takes place in a Web and social-media environment that feels increasingly aggressive towards consumers. We are forced to see more and more ‘Suggested Content’, force-fed more and more adverts, or told to pay up to escape these intrusions.

It no longer feels welcoming and collaborative; we are being battery-farmed for clicks.

With the UK now actively pursuing a policy of asking regulators to prioritize enabling growth over protecting citizens’ rights, such behaviour is likely to become more, not less, prevalent as the AI Spring continues.

Regarding AI, the FTC notes that the technology is already being applied widely to both users’ and non-users’ personal information: it powers everything from content recommendation to search and advertising, and is used to infer personal details, leaving users with no meaningful control over how their lives and behaviour are analyzed:

This was especially true for personal information that these systems infer, that was purchased from third parties, or was derived from users’ and non-users’ activities off the platform. This also held true for non-users who did not have an account and who may have never used the relevant service.

Nor were users and non-users empowered to review the information used by these systems or their outcomes, to correct incorrect data or determinations, or to understand how decisions were made, raising the potential of further harms when systems may be unreliable or infer sensitive information about individuals.

The FTC concludes:

Overall, there was a lack of access, choice, control, transparency, explainability, and interpretability relating to the companies’ use of automated systems. There also were differing, inconsistent, and inadequate approaches relating to monitoring and testing the use of automated systems.

In particular, the US government is alarmed at the potential harm to children and young people, not just from data being gathered about them covertly – a serious concern in itself – but also from the content being shown to them as a result of that data, including content targeted by their age, gender, and ethnicity.

In some cases, algorithms prioritize harmful content for certain groups; in others, they treat all teenagers as though they were adults. In most cases, there are no meaningful restrictions on users aged 13 to 17 opening accounts with these platforms, says the FTC.

The report concludes:

Children and teens are a uniquely vulnerable population, but the companies’ policies have failed to adequately protect them – this is especially true of teens, who are not covered by the COPPA [Children’s Online Privacy Protection Rule].

My take

Strong words from a government – and an organization – that has long favoured a hands-off approach to let innovation thrive. So, even as the UK wants to capitalize on the promised growth from these new technologies, the US is waking up to the realization that these companies have already gone too far.
