A Reminder of Who Is In Charge

Spoiler Alert: when it comes to your online privacy, it’s not you, it’s shareholders.

In this post, I’m going to discuss Twitter’s recent decision to remove a privacy feature from their service, including what they did, why they did it, and what this incident can teach us about online privacy and advertising in the present day.

Twitter’s Unexpected Pop-up Notification

Around April 7, 2020, Twitter users began receiving a strange notification (see screenshot) when they visited twitter.com or opened the Twitter app. The notice informed people that a privacy setting related to advertising measurements had been removed from Twitter, leaving people with no choice but to click the “OK” button and agree. Twitter’s decision to summarily remove a privacy feature with no prior notice, explanation, or ability to object left a bad taste in people’s mouths (mine included).

Bennett Cyphers from the EFF posted a detailed technical breakdown of what Twitter’s privacy settings used to allow, and how the new change impacts data sharing. To summarize: Twitter used to offer a setting (enabled by default) that allowed them to share two kinds of data with third parties. Previously, Twitter users were free to uncheck this box and disable the data sharing (supposedly, more on this below). The two types of data are:

  1. Conversion Tracking: This data is related to Twitter’s “Mobile Application Promotion” (MAP) advertising product. This product allows app developers to advertise their apps on Twitter, and in turn Twitter reports conversions back to the developer, i.e., Twitter users who viewed the ads, clicked them, and ultimately installed the apps. This style of advertisement is sometimes derogatorily referred to as “pay-per-install”.
  2. Third-party Analytics: This data is related to Twitter’s use of third-party analytics libraries on its website and app (notably libraries developed by Google and Facebook). In this case, Twitter uses the third-party libraries to gauge the effectiveness of their own ads on third-party platforms (e.g., in Google Search and the Facebook news feed). A rough sketch of what both kinds of records might contain follows this list.
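
To make these two categories a bit more concrete, here is a rough, purely illustrative sketch (in Python) of the kinds of records involved. The field names and values are my own guesses for illustration; they are not Twitter’s actual schema.

```python
# Purely illustrative: field names and values are guesses, not Twitter's schema.

# 1. Conversion tracking (MAP): the kind of event Twitter might report back
#    to an app developer after one of their promoted apps is installed.
conversion_event = {
    "advertiser_app_id": "com.example.game",        # hypothetical advertiser
    "campaign_id": "map-campaign-1234",
    "device_advertising_id": "38400000-8cf0-11bd-b23e-10b96e40000d",
    "event_type": "app_install",                    # viewed / clicked / installed
    "timestamp": "2020-04-07T12:34:56Z",
}

# 2. Third-party analytics: the kind of data an embedded analytics library
#    (e.g., one from Google or Facebook) might collect from Twitter's own
#    site or app, used to measure Twitter's ads on those platforms.
analytics_event = {
    "analytics_provider": "analytics.example.com",  # hypothetical endpoint
    "cookie_id": "abc123",                          # cross-site identifier
    "page_visited": "twitter.com/home",
    "referring_campaign": "twitter-self-promo-42",
}
```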
Screenshot of Twitter's new information sharing option, as shown to US users. Note that this setting is enabled by default, but I have disabled it.

As of April 7, the situation on Twitter has gotten much more complicated.

  • For people in most of the world, the original privacy setting is gone: people can no longer opt out of Twitter sharing conversion data. In its place is a new setting (shown in the screenshot above) that allows people to opt out of some data sharing with business partners. This refers to the second type of data, i.e., analytics for Twitter’s own advertising purposes, but given the vague language it’s not entirely clear what this setting actually does in practice.
  • In Europe, the original setting remains and is unchecked by default. People must opt in to conversion sharing because of the GDPR.

Why Did Twitter Make This Change?

Given that this was not a user-friendly change, and that it prompted unfavorable press coverage and commentary directed at Twitter, the obvious question is: why did they make this change? The change was motivated by two related problems.

The story starts on August 5, 2019, when Twitter posted an obscure disclosure that the privacy setting in question didn’t actually work. Even if a person unchecked the box to opt out, Twitter still shared conversion data with third parties (between May 2018 and August 2019) and analytics cookies with third parties (between September 2018 and August 2019). Twitter’s post is apologetic but also somewhat defiant – they acknowledge their mistake but reassure everyone that data was only being shared with “trusted” third parties anyway. Regardless, Twitter fixed the bug so that the privacy setting functioned as intended.

After Twitter fixed the bug and started honoring people’s privacy choices, they ran into a second problem: their advertising revenue declined precipitously, causing them to miss their Q3 2019 revenue targets. Twitter revealed that fixing the bug was what caused advertising revenue to drop, i.e., mobile app developers were spending less money placing MAP ads because they could no longer target their ads or track conversions as effectively.

Ultimately, Twitter decided that they could not afford to keep losing the advertising revenue from MAP ads, so they eliminated most people’s ability to opt out of conversion tracking.

Exercises in Power

I find this whole incident to be quite illuminating in a number of respects.

When trying to defend their privacy practices against encroaching regulators, tech companies often draw on the metaphor of “control”. In this narrative, tech companies are bending over backward to responsibly self-regulate and give people control over their data. The tech companies point to their privacy settings, opt-outs, and data dashboards as reasons why actual privacy regulation is unnecessary – people already have all the tools they need, apparently.

This Twitter incident reveals a fundamental flaw in this narrative: “control” is meaningless when it’s a discretionary capability that can be taken away at any time. In a contest between (A) growing revenue and appeasing shareholders versus (B) user control and privacy, Twitter chose the former. In other words, Twitter is willing to offer people privacy controls so long as they don’t impact the prime directive: profit. In the US, as in most of the world, the power to collect, share, and profit from data, as well as to decide the (limited) privacy affordances offered to users, rests squarely with industry.

Yet, as I noted above, people in Europe were the exception to Twitter’s shifting stance on conversion data. In Europe, the GDPR forces companies to obtain affirmative, opt-in consent before sharing most types of data with third parties. This prevented Twitter from removing the conversion data setting and forced Twitter to make the setting opt-in rather than opt-out. The GDPR is not perfect, but it has undoubtedly shifted the power to dictate online privacy norms away from industry and toward democratic governments.

As the EFF notes, this Twitter incident is one more reminder of why we need comprehensive federal privacy regulation in the US.

The Need for Auditing

Another point concerns the original privacy bug that Twitter fixed in 2019. Prior to the bug fix, people thought they had control over the data Twitter shared with third parties, but they didn’t.

We all rely on privacy settings in software and services that we use. Yet, as this incident demonstrates, we often have no idea whether these settings actually work as intended, and whether our choices are being honored. Whether through malice or neglect, it’s possible for our privacy to be compromised even when we have taken the time to engage with services and navigate their (often byzantine) privacy settings.

In a world where “control” is our primary tool for protecting online privacy, privacy settings need to be critically examined by independent auditors. Twitter didn’t notice that their service was leaking conversion data for 15 months – we can’t rely on tech companies to self-regulate, let alone implement robust software. In much the same way that good cybersecurity requires independent “red team” reviews, we should demand that tech companies undergo privacy compliance audits, and hold Chief Information Security Officers accountable for any failings that are uncovered.
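
To give a sense of what such an audit might look like in practice, here is a minimal, hypothetical sketch using Playwright: it loads a page, records every outbound network request, and flags any that go to a watch list of measurement domains. The watch list and URL are placeholders I made up; a real audit would run against a logged-in session with the relevant privacy setting disabled, and would expect zero hits.

```python
# A minimal sketch of the kind of check an independent auditor might run.
# Assumes Playwright is installed (pip install playwright && playwright install).
# The domain list below is illustrative, not a real tracker list.

from urllib.parse import urlparse
from playwright.sync_api import sync_playwright

# Hypothetical third-party measurement endpoints to watch for.
TRACKER_DOMAINS = {"ads-measurement.example.com", "analytics.example.net"}

def audit_page(url: str) -> list[str]:
    """Return URLs of requests made to watched tracker domains while loading `url`."""
    flagged = []

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()

        # Record every outbound request the page makes.
        def on_request(request):
            host = urlparse(request.url).hostname or ""
            if any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS):
                flagged.append(request.url)

        page.on("request", on_request)
        page.goto(url, wait_until="networkidle")
        browser.close()

    return flagged

if __name__ == "__main__":
    # In a real audit, the expectation after opting out would be zero hits.
    for hit in audit_page("https://example.com"):
        print("unexpected third-party request:", hit)
```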

Aside: thus far, I am not aware of any repercussions against Twitter for their misrepresentations regarding their privacy settings, although this certainly seems like an instance where the FTC could (and should) investigate Twitter’s potentially deceptive trade practices.

Taking Power Back

One final point: this incident reminds us that people’s privacy choices matter. In this case, it appears that the decision by a minority of Twitter’s users to opt out was sufficient to significantly reduce Twitter’s advertising revenue, so much so that Twitter was forced to eradicate the opt-out choice altogether. Does this suggest that the privacy-conscious people who opted out were a particularly lucrative audience for ads, or does it simply imply that strength in numbers is sufficient to push back against surveillance capitalism?

For the scientifically minded, this whole incident is potentially a natural experiment. We know the rough window of time during which Twitter was improperly sharing conversion data within their MAP advertising product, as well as the date on which that flow of data was curtailed. Further, the impact on Twitter’s revenue is known. Does this offer an opportunity to measure the actual value of this user data? Given that only Twitter knows how many people actually opted out of sharing conversion data, we may never know the answers to these questions.

Regardless, the whole incident makes me hopeful that if more people were mobilized to opt out of invasive data collection and sharing, it might cause sufficient pain to the industry to force a reckoning with these practices.


Update 04/19/2020: Hat tip to Arvind Narayanan for suggesting a more measured title for this post.