If Customer Data is So Precious, Why Are There So Many Leaks?

[This article is cross-posted on my Medium blog as well]

Data insecurity popped up in the news again last week, and these frustrating stories highlight a sadly perennial pattern for companies relying heavily on technology to power their businesses. At the core of these problems is a failure to embed values, and real, considered empathy, into the teams building the software and into the code they write.

Bad Company Culture Enables Bad Data Decisions

Photo by Victoria Heath

The first type of problem concerns the people shaping the code and its use, and those people using that code internally to access customer records and information.

Lyft made the news with customer data leaking from employees using their “God-view,” which essentially means that certain employees at a company are granted full access to all data within a system. This is not nefarious in itself, and almost every tech-driven company has a version of this view. It’s necessary for everything from reproducing bugs to processing refunds, and may be relied on by teams across tech, customer support, and marketing. What does vary from company to company is: which employees have access, what level of access they have, what that access is for, and how that access is monitored and enforced.
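To make those variables concrete, here is a minimal sketch of what scoping internal access might look like in code. Every name here (the roles, the fields, the `AccessRequest` shape) is hypothetical; real systems enforce this in database views or API middleware, not a dict, but the idea is the same: access is granted per role, per field, with a stated reason.

```python
from dataclasses import dataclass

# Hypothetical mapping of internal roles to the customer-data fields
# each role is allowed to see.
ROLE_FIELDS = {
    "support": {"name", "ride_history", "refund_status"},
    "engineering": {"ride_history", "device_logs"},
    "marketing": {"signup_date", "city"},
}

@dataclass
class AccessRequest:
    employee: str
    role: str
    field: str
    reason: str  # require a stated reason for every lookup

def can_access(req: AccessRequest) -> bool:
    """Allow a lookup only if the employee's role is scoped to that field."""
    return req.field in ROLE_FIELDS.get(req.role, set())
```

The point is not the implementation but the defaults: an unknown role gets an empty set, so access must be explicitly granted, never assumed.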


Our source says that the data insights tool logs all usage, so staffers were warned by their peers to be careful when accessing it surreptitiously. For example, some thought that repeatedly searching for the same person might get noticed. But despite Lyft logging the access, enforcement was weak, so team members still abused it…staffers could use Lyft’s backend software to view unmasked personally identifiable information. This was said to be used to look up ex-lovers, check where their significant others were riding and to stalk people they found attractive who shared a Lyft Line with them. Staffers also could see who had bad ratings from drivers, or even look up the phone numbers of celebrities. One staffer apparently bragged about obtaining Facebook CEO Mark Zuckerberg’s phone number. [Source: TechCrunch]

There are numerous examples of this type of failure to treat customer data as sacred, including Lyft’s competitor Uber, which was found to be stalking journalists through user data; that data was also available to drivers (and easily hackable). Uber’s federal lawsuit over this specific issue was finally settled in 2017, and the company is now required to undergo outside audits of its privacy practices for the next two decades. I’m assuming Twitter changed its user data access and internal policies after one of its employees temporarily deactivated Trump’s Twitter account.

THE SOLUTION: This lax privacy and security around private, personal customer data is troubling. While all companies have a need for these “God-view” abilities, it’s the company’s values that determine how they are used, and personal accountability that drives monitoring and course correction. I imagine that, if it was discussed at all, this issue became a “we’ll fix it when we get caught” problem, which treats incidents or blowups as forcing mechanisms, when instead, values should be embedded in the infrastructure of the company culture.
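Logging access, as Lyft reportedly did, is only half the job; values have to show up in enforcement. A hedged sketch of the kind of automated check that turns an audit trail into accountability, flagging employees who repeatedly look up the same person (the log format here is invented for illustration):

```python
from collections import Counter

def flag_repeat_lookups(access_log, threshold=3):
    """Return (employee, subject) pairs that appear suspiciously often.

    access_log is an iterable of (employee_id, subject_id) tuples,
    e.g. parsed from an internal tool's audit trail.
    """
    counts = Counter(access_log)
    return {pair for pair, n in counts.items() if n >= threshold}
```

A check this simple would have surfaced exactly the behavior staffers warned each other about: repeatedly searching for the same person.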

Particularly relevant to these ideas are Jocelyn Goldfein’s thoughts.

New hires don’t walk into your company already knowing your culture. They walk in anxious — hoping for success and fearing failure. They look around them to figure out how they are supposed to behave. They see who’s succeeding, and they imitate what they’re doing as best they can. They figure out who’s failing, and they try to avoid being like them.

Failure To Identify Damaging Outcomes

But honestly, why fret about “God-view” when companies may simply expose your data outright, having failed to anticipate the negative outcomes?

WHAT HAPPENED: Enter fitness app Strava and its global heat map of 3 billion GPS points from its users. News broke this week that this data is also useful for pinpointing military bases, including supposedly secret ones. The Strava story unravels further: not only does the app’s heat map reveal the location and rough layouts of military installations, but the data can be de-anonymized to see who ran where, with The Guardian reporting that “it is possible to see the names of more than 50 US service members at a base in Afghanistan.”

This is problematic, and not new. As early as 2008, Netflix’s database was de-anonymized by two academics, Arvind Narayanan and Vitaly Shmatikov, at the University of Texas at Austin. As the use of social media and connected devices grows, the ability to connect your IRL and online activities is causing alarm, especially with web-browsing histories.
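The Netflix and Strava incidents share a mechanism: “anonymized” records can be re-identified by joining them against an outside dataset on quasi-identifiers, attributes like location and date that appear in both. A toy illustration with entirely fabricated data, assuming only a coarse location grid and a date are shared between the two datasets:

```python
# Fabricated "anonymous" activity records, stripped of names.
anonymous_runs = [
    {"run_id": 1, "base_grid": "42N-69E", "date": "2018-01-12"},
    {"run_id": 2, "base_grid": "34N-70E", "date": "2018-01-13"},
]

# Fabricated public dataset (e.g. social media posts) with names attached.
public_profiles = [
    {"name": "J. Doe", "base_grid": "42N-69E", "date": "2018-01-12"},
]

def link(runs, profiles):
    """Re-identify anonymous records by matching on shared quasi-identifiers."""
    matches = []
    for run in runs:
        for p in profiles:
            if (run["base_grid"], run["date"]) == (p["base_grid"], p["date"]):
                matches.append((run["run_id"], p["name"]))
    return matches
```

No individual field here identifies anyone; the combination does, which is why removing names alone is not anonymization.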

Alternatively, Gabriel Weinberg, the founder of DuckDuckGo, writes:

The complete loss of personal privacy in the Internet age is not inevitable. Through thoughtful regulation and increased consumer choice, we can choose a brighter path. I hope to look back at 2018 as a turning point in data privacy, where we awoke to the unacceptable implications of [two] companies controlling so much of our digital future. [Source: CNBC]

THE SOLUTION: Creating technology within the product development lifecycle requires anticipating and addressing negative outcomes and possibilities. Many companies work to ensure that projects are business- and money-aligned (revenue-generating in some fashion). What’s often missing is a review of the data model, threat modeling and assumed risk, and user research, plus an internal team that actively weighs relative harms and takes on the burden of proof for accountability (even pulling in outside domain experts to validate). As features and products are being strategized, there must be an accounting for negative outcomes, a rating of risk, and a description of how equipped the company is to handle each outcome.

Protection Against Hacking

Photo by Wesson Wang

It’s worth noting that all of these data concerns swim in the same pool as regular old data hacking concerns.

WHAT HAPPENED: The most recent large-scale hack, the Equifax data breach of 2017, shows that “the credit-reporting giant had more than two months to take precautions that would have defended the personal data of 143 million people from being exposed. It didn’t.” Equifax bungled the public response, and overall, the company is viewed as negligent with people’s most personal of information. They’ve just released a credit report app that fails to work, further diminishing trust and confidence in their stewardship.

Another truly wild ride through data breaches and hacking is spotlighted on Reply All’s “The Russian Passenger” (thanks Emma Story for reminding me that this exists!). I guarantee that after listening, you will immediately head to Have I Been Pwned to see which (public) data breaches your accounts have been involved in.

THE SOLUTION: Oversight and vetting for keeping private data private are necessary. Self-regulation has proven insufficient: Equifax clearly had no active emergency plan, nor did it address the security vulnerabilities with speed and transparency. Having a clear security strategy, and fundamentally treating people’s data security as a real, human priority, are key. We demand fire escape plans (and we have drills!) to ensure everyone knows what to do in an emergency, so why would we not apply similar emergency plans to managing sensitive data that, if released, can have devastating effects on people’s livelihoods, wealth, relationships, and more?

The Case For More Equitable (and Trustworthy) Tech

Photo by Viktor Forgacs

I’ve just finished reading Cathy O’Neil’s Weapons of Math Destruction, which should be required reading at all tech companies, everywhere. A fundamental case she makes, through example throughout the book and explicitly in the closing chapter, is the moral imperative that social justice MUST be a core part of the decision-making process in building technology (specifically, the algorithms powering the tech). It was so uplifting and validating to read her words as she told tough stories, and to feel less crazy by the chapter.

A standout excerpt from the end of the book:

…Numbers can never express true value. And the same is often true of fairness and the common good in mathematical models. They’re concepts that reside only in the human mind, and they resist quantification. And since humans are in charge of making the models, they rarely go the extra mile or two to even try. It’s just considered too difficult. But we need to impose human values on these systems, even at the cost of efficiency…Mathematical models should be our tools, not our masters.

In every single role I’ve held in the tech industry, I’ve advocated for putting the user (people) first and taking responsibility for their experience and security (even when they are not your direct customers, as in Equifax’s debacle). We can build better. It requires reinforcing a values- and ethics-led company culture, incorporating threat modeling and outside domain experts, and having clear security protocols and strategies for swift, transparent responses.

[UPDATE / ADDITION: There’s also worthwhile discussion by Hagit Katzenelson, also taking on the Strava failure, about transparently communicating with users about how to protect themselves.]

Thanks for reading: Got thoughts or links to swap about tech, ethics, and company culture? Hit me up on Twitter! I’m also happy to talk animatedly about: product /community / user focus / user research / human centered design / design thinking, lifecycle planning as codebase ages, remote teams, civic engagement, and the decentralized web.