Lawmakers miss the point on Facebook


I am increasingly baffled and disappointed by the scandal and congressional fury surrounding Facebook. Instead of lashing out at Mark Zuckerberg or worrying about who owns our personal data, lawmakers should focus on the real issue: how our data is used.

Let’s start with a few basic truths that seem to be getting lost:

– Cambridge Analytica, the company that scooped up a bunch of Facebook user data, isn’t much of a threat. Yes, it’s super sleazy, but it’s probably not even very good at manipulating voters.

– Many other companies – maybe hundreds! – and “malicious actors” also collect our data. They are much more likely to sell our personal information to fraudsters.

– We should not expect Zuckerberg to keep his promises. He has made nice before, with little lasting effect. He is deeply conflicted, and he’s kind of a naive robot.

– Even if Zuckerberg were a saint and didn’t care about profit at all, chances are social media is just plain bad for democracy.

Politicians don’t want to admit they don’t understand technology well enough to come up with sensible regulations. Now that democracy itself might be at stake, they need someone to blame. Enter Zuckerberg, the perfect punching bag. The problem is that Facebook probably didn’t do anything illegal, and it has been relatively open and forthcoming about its questionable business practices. For the most part, no one really cared until now. (If that sounds cynical, I’ll add: Democrats didn’t care until it looked as if Republican campaigns were catching up with them, or even overtaking them, with big-data techniques.)

What America really needs is a smarter conversation about how data is used. It starts with a recognition: our data is already out there. Even if we never disclosed our personal information ourselves, someone did. We are all exposed. Companies have the data and the techniques they need to predict all sorts of things about us: our voting behavior, our consumer behavior, our health, our financial future. That’s a lot of power wielded by people who shouldn’t be trusted.

If politicians want to create rules, they should start by coming to grips with the worst possible uses of our personal information – the ways it can be used to deny job opportunities, limit access to health insurance, set interest rates on loans and decide who gets out of jail. Essentially, any bureaucratic decision can now be made by algorithm, and those algorithms need to be questioned far more than Zuckerberg.

To that end, I propose a statement of data rights. It should have two components: the first would specify the degree of control we can exercise over how our individual information is used for important decisions, and the second would introduce federally enforced rules on how algorithms more generally should be monitored.

Individual rights could be loosely based on the Fair Credit Reporting Act, which allows us to access the data used to generate our credit scores. Most scoring algorithms work similarly, so this would be a reasonable model. Across the board, we should have the right to know what information the algorithms use to make decisions about us. We should be able to correct the record if it’s wrong and contest the scores if we think they’re unfair. We should be entitled to know how the algorithms work: How, for example, will my score change if I miss an electricity bill? That’s a bit more than what the FCRA provides now.
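To make that last right concrete, here is a minimal sketch of the kind of “what-if” query it would guarantee. The scoring function, feature names and weights below are hypothetical stand-ins, not any real credit-scoring model:

```python
# A minimal sketch of the "what-if" transparency right described above.
# The scoring function, feature names and weights are hypothetical;
# real credit-scoring models are far more complex.

def score(record: dict) -> float:
    """Toy scoring function: higher is better."""
    points = 700.0
    points -= 40.0 * record["missed_utility_payments"]
    points -= 0.5 * record["debt_to_income_pct"]
    points += 2.0 * record["years_of_credit_history"]
    return points

def what_if(record: dict, feature: str, new_value) -> float:
    """How would the score change if one input changed?"""
    changed = {**record, feature: new_value}
    return score(changed) - score(record)

me = {
    "missed_utility_payments": 0,
    "debt_to_income_pct": 30.0,
    "years_of_credit_history": 8,
}

# "How will my score change if I miss an electricity bill?"
delta = what_if(me, "missed_utility_payments", 1)
print(f"Missing one utility bill changes my score by {delta:+.1f} points.")
```

The point is not the particular numbers; it is that anyone scored by an algorithm should be able to ask this kind of question and get a straight answer.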

Additionally, Congress should create a new regulator – modeled on the Food and Drug Administration – to ensure that every large-scale algorithm can pass three basic tests (disclosure: I run a company that offers such algorithm-auditing services):

– It’s at least as good as the human process it replaces (this will force companies to admit how they define an algorithm’s “success,” which all too often simply means profit);

– It does not disproportionately fail for members of protected classes, as facial recognition software does (a sketch of this check follows the list);

– It doesn’t produce crazy negative externalities, such as destroying people’s trust in facts or undermining their self-esteem. Companies using algorithms that could have long-term negative effects should be monitored by third parties who aren’t beholden to shareholders.
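For the second test, an auditor might compare an algorithm’s error rates across groups and flag large gaps. Here is a minimal sketch; the group labels, sample data and tolerance ratio are all illustrative assumptions, not a standard from any regulator:

```python
# A minimal sketch of the disparate-failure test above. Group labels,
# sample data and the tolerance ratio are illustrative assumptions.

from collections import defaultdict

def error_rates_by_group(records):
    """Error rate of the algorithm's decisions, computed per group."""
    errors, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["predicted"] != r["actual"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

def passes_disparity_test(records, max_ratio=1.25):
    """Fail the audit if one group's error rate far exceeds another's."""
    rates = error_rates_by_group(records)
    worst, best = max(rates.values()), min(rates.values())
    if worst == 0:  # no errors anywhere: trivially passes
        return True
    return worst / max(best, 1e-9) <= max_ratio

audit_sample = [
    {"group": "A", "predicted": 1, "actual": 1},
    {"group": "A", "predicted": 0, "actual": 0},
    {"group": "B", "predicted": 1, "actual": 0},  # a failure for group B
    {"group": "B", "predicted": 1, "actual": 1},
]

print(error_rates_by_group(audit_sample))   # {'A': 0.0, 'B': 0.5}
print(passes_disparity_test(audit_sample))  # False: group B fails far more often
```

A real audit would look at false positives and false negatives separately, since the harms differ, but the principle is the same: measure who the algorithm fails, not just how often it fails overall.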

I am no policy wonk, and I recognize that it is not easy to grasp the magnitude and complexity of the mess in which we find ourselves. A few simple rules, however, could go a long way toward limiting the damage.

© 2018 Bloomberg L.P.
