An interview with Dave King, CEO of Digitalis Reputation
With all eyes on Facebook and Cambridge Analytica following the outbreak of one of the most politically charged data controversies of our time, data is being recognized as a more valuable and powerful commodity than ever before. Last week, we sat down in London with Dave King, CEO of Digitalis Reputation, a Concordia Patron Member and leading online reputation and digital intelligence firm, to get his perspective on the debate and his insight into the future of data privacy.
Well, it all started with an app. Most of the time, Facebook uses the data it has on consumers to serve them effective ads based on their needs and interests. But it also has a platform that allows third-party app developers to take data through an API and use that data or, in this case, arguably abuse it. Here, a third-party developer put together an app with an apparent social purpose, which 305,000 people installed, and that app collected data about them, their friends, and potentially the friends of their friends. The net result was that 87 million users had their data collected and then used by the third-party company to whom that data was sold: Cambridge Analytica.
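The gap between those two figures is worth pausing on. A back-of-the-envelope calculation (using the numbers quoted above, not any official breakdown) shows how friend-graph access multiplies an app's reach:

```python
# Rough illustration of the scale described above: ~305,000 installs
# ultimately exposed ~87 million profiles, because each install also
# pulled in the installer's friends. The implied average reach per
# installer is simply the ratio of the two figures.
installers = 305_000
affected = 87_000_000

reach_per_install = affected / installers
print(round(reach_per_install))  # roughly 285 profiles exposed per install
```

In other words, every single install quietly implicated a few hundred people who never touched the app.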
The allegation here is that that data was used in contravention of Facebook’s terms and conditions—therefore arguably illegally—in political campaigning. But really there are two reasons why news of this particular incident escalated so significantly. The first is its sheer scale—the fact that 87 million users’ data was potentially compromised and abused. The second is its proximity to the Trump election, which has caused the media and now lawmakers to really get hold of this.
Well, it’s interesting. There was a lot of speculation first and foremost as to whether Zuckerberg was going to appear in front of Congress, and in front of the Commons in the UK, as well. What’s interesting is that Facebook has spent months and years avoiding putting him in the limelight and being subjected to this level of scrutiny. So it is an interesting turnaround to see him appear. The biggest takeaway, though, is that it’s become clear that both Zuckerberg and Facebook need to take responsibility. I’m confident that Zuckerberg’s appearance confirms that he realizes regulation is on the way, and that it is now time for Facebook to start engaging with—rather than infuriating—policymakers.
Like every other interested consumer on the planet, I was looking to see how nervous he was, how deep the scrutiny was, how aggressive it became, and I thought he did a brilliant job. However, I was also looking for misunderstanding. I was looking for questions from politicians that demonstrated they didn’t fully understand the technology. And I was looking for answers from Zuckerberg that suggested he didn’t really understand why this was such a big deal. Now, we definitely saw evidence of the former, but we didn’t see evidence of the latter. So it’s clear that Facebook is taking this extremely seriously. But we saw a number of senators ask questions demonstrating that they didn’t understand what they were asking. And that’s fascinating.
It tells us that this is a really, really difficult area around which to legislate, because policymakers and politicians don’t understand the technology as well as the tech companies do.
There are differences in data security and privacy protection in the US and the UK—and the UK within the context of the EU. I’d say that the US has led in data security, whereas the EU has led in privacy protection, and there’s a subtle difference between the two.
The differences between US and EU regulation are about to become more significant when the General Data Protection Regulation (GDPR) comes into force in the EU on May 25. The GDPR will tighten privacy protections in the EU and will challenge the business models of social media companies operating in Europe. Under the GDPR, social media platforms can process data for services that users request, but any other service will require additional permission from each and every consumer in the form of an express opt-in. And, of course, what we know with opt-ins is that a lot of consumers will not tick the box to opt in. Interestingly, we think that under the GDPR Facebook will simply be able to offer an opt-out for news feed ads and Instagram ads. And, because most consumers don't bother opting out of anything, that business model will likely persist. However, for highly personalized ads and other services, they will need an opt-in for each and every service. So it is going to make it tougher for social media platforms.
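The three-tier consent model described above can be sketched in a few lines of code. This is a hypothetical illustration of the logic, not Facebook's actual implementation or API; the service names and the `may_process` function are invented for the example:

```python
# Hypothetical sketch of the GDPR consent model described above:
# - services the user requested need no extra permission;
# - some ad services may be offered on an opt-out basis (on by default);
# - everything else requires an express, recorded opt-in per service.

REQUESTED_SERVICES = {"messaging", "news_feed"}   # what the user signed up for
OPT_OUT_SERVICES = {"news_feed_ads"}              # on by default, user may opt out

def may_process(service: str, user_consents: dict) -> bool:
    """Return True if the platform may process data for `service`."""
    if service in REQUESTED_SERVICES:
        return True                                # user requested this service
    if service in OPT_OUT_SERVICES:
        return user_consents.get(service, True)    # default on until opted out
    return user_consents.get(service, False)       # express opt-in required

# A user who never touched a consent checkbox:
consents = {}
print(may_process("messaging", consents))           # True  (requested service)
print(may_process("news_feed_ads", consents))       # True  (opt-out default)
print(may_process("targeted_profiling", consents))  # False (no opt-in given)
```

The commercial point falls out of the defaults: opt-out services keep running for the inert majority, while opt-in services go dark unless users actively say yes.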
That’s in the EU. We have yet to see the kind of regulatory framework that will develop following the renewed and growing interest of US regulators in this area. We envisage, though, that in the longer term the US will inevitably lead in the overall regulation of US tech companies. But right now there are differences, and regulation is more stringent in Europe, or will be as of May 25.
Interestingly, I’d say that’s a philosophical rather than legal question for the most part today, because, of course, regulation is nascent in this space. The particular story in relation to Facebook and Cambridge Analytica is a very real case of data allegedly having been misused and used in contradiction of the terms and conditions that governed its supposed usage. So that’s a legal question. But, more broadly, whether the tech giants are using data appropriately or not is a philosophical question. And I think in answering that you’ve got to look at what consumers think and want in terms of their data. It could be argued that most consumers don’t mind how their data is used and therefore that social media giants are not misusing their data. That said, there’s a big education shortfall in terms of awareness around data risks.
We spend a lot of time talking to people about the fact they’ve spent the last 10-15 years documenting their lives online, and that brings very real risks to physical security, cyber security, and reputation, as well as the risk of identity theft, and people need to think more about that. But actually, the majority of people continue to put their information online—information about their children, their whereabouts, and other private information—and those people haven’t thought about the fact that criminals, including burglars, pedophiles, hackers, identity thieves, and others can misuse the data that they’re willingly putting in the public domain. So, those people are probably not particularly worried about a political campaign group using much of the same information. And in that sense you might argue that social media companies aren’t using their data in a way that they would have an issue with.
I hope and suspect that as awareness of the risks grows, individuals will start to care more about their data and, as a result, that tech giants will come under more scrutiny and face tougher legislation.
It’s a good question. And I suppose the background context here in terms of social media companies now trying to rebuild trust is that people are slowly starting to recognize the risk of their data being used and abused. We’re actually seeing evidence of certain groups of users becoming more private and therefore increasingly using the likes of WhatsApp for messaging between closed groups rather than public forums. So, the tide is hopefully turning in terms of consumers understanding some of those risks. I suppose the platforms need to really push security and privacy, particularly the efforts they’re taking to protect consumers’ data and the tools they’re providing to consumers that allow them to individually govern their data. That’s what Zuckerberg really focused on in lots of his answers in front of Congress and the Senate. He was saying “Look, we give customers the opportunity to decide who their message goes to and who sees it” and what have you, and I think that’s in an effort to rebuild trust.
Clearly, government is taking on a new role here, and regulating the private sector’s handling of data, secure or otherwise, is a new challenge. The role of government both in the EU and in the US has grown, and regulation will doubtless follow. I think it’s even more important in this sector than in any other, though, to recognize the likelihood of misunderstanding in terms of the technologies involved. Clearly, regulation could be imposed that will restrict competitiveness, entrepreneurialism, and the ability to develop new technologies and platforms. But, at the same time, we know that, unregulated, these companies can abuse and exploit private individuals’ data. So there’s a delicate balance to be achieved.
As for the challenges in enforcing the regulation of data collection, we will see how vigorously the GDPR is enforced in the EU, but I’d say the main difficulty here, and in due course in the US, will be one of sheer scale: so many companies are now collecting so much data that enforcing new regulation across the board may prove practically difficult.
As for opportunities that could come out of new regulation, consulting firms are helping organizations understand their obligations under the new data protection regulation, and they’ll continue to do so. Beyond that, I am sure there will be opportunities for new products to help consumers protect their data and automatically remove their data from platforms. I’m sure new businesses will grow out of the concern currently expressed by government and the traditional media, concern that is slowly filtering through to consumers, who will increasingly realize the risks involved in their data becoming open to all.
So far, the consumer voice has been pretty irrelevant in demanding change. It wasn’t consumers who had a problem with this data breach, but rather the traditional media, and the proximity of this particular breach to the election of President Trump meant this made for big news. I hope and am sure that as consumers become more aware of the risks involved, their voice will become more important.
We spend so much time talking to people about the risks that result from information that they, their friends, and their family members post online for the world to see. And irrespective of whether a third-party app manages to acquire that data from Facebook, for most of the data we’re talking about, people have willingly put it online for anyone to abuse. And there are risks to children, to personal security, to reputation, and so on, but people don’t think enough about it. As they start to think more about it, there’s no doubt the consumer voice will become more important. There’s no doubt this particular scrutiny has been led by the traditional media and regulators rather than consumers, who—for the most part—remain relatively naive to some of the risks of their data being used.
None of us know what the future of data privacy looks like exactly, and we don’t know how far regulation will go, especially in the US. Questions from a number of senators to Zuckerberg confirmed that they don’t really understand the problem and they certainly didn’t understand the technologies. So it is absolutely critical for tech firms big and small, entrepreneurs, and government to work together collaboratively on these huge, unprecedented philosophical and legal challenges.
Never before, I think, has the risk of misunderstanding on one side or the other been so great, and some of the problems raised by this issue are genuinely new: people have different views on privacy versus transparency, and on the risks associated with everyone’s data being shared.
So it’s crucial that all the people involved discuss these things collaboratively and openly with a view to developing a new framework as to how our data will be governed and managed.