Why tech platforms should give users more control — and how they can do it

Dan Gillmor
Mar 27, 2018

For some time now, it’s been clear that one essential response to the flood of misinformation and other deceptive Internet tactics must be empowering users — people like you and me at the edges of the network of networks — to take at least some control of our own information flows and data. Yet the major Internet platform companies have, for the most part, been less than eager to help us.

There are, from their perspectives, good reasons for this reluctance. But I believe it’s time for them to do it anyway.

Facebook, in particular, is facing a nearly perfect storm of anger and frustration from users and governments. One cause is the collection and use of users’ data by the company and the third parties Facebook has invited into its data/financial ecosystem. Another, related cause is its centralized control of what its users can see in their “news” timelines and advertising displays — and the abuse of the platform by third parties that have taken advantage of what look like lax controls.

These are related for several reasons, not least the fact that Facebook has become the dominant conversation space online, and one of the dominant online advertising companies of our times. How Facebook manages its ecosystem has therefore become a more-than-legitimate issue for its users, and society as a whole.

But the same issues afflict platforms like Google, including its YouTube subsidiary, and Twitter, though in different ways. Google search and YouTube recommendations, like Facebook’s News Feed, are black boxes from the outside: programmer-designed algorithms that create filter bubbles and are frequently gamed by malicious actors. Twitter, likewise, has demonstrated an ongoing inability to police its ecosystem to filter out the garbage (or worse) that so often makes the experience untenable for some users.

That’s also the rub. The platforms should not be — and have said they don’t want to be — the Internet’s content police.

But they have enormous, even unprecedented, power. Why are so many people calling for them to be what amount to the editors of the Internet, or at least their increasing share of it? Why are people assuming that the solution lies in the corporate policies, and programmers’ decisions, inside exceedingly centralized organizations? If you want censorship to be the rule, not the exception, that’s one way to get it.

Who should be making the decisions about what we see online? We should. But we need better tools to do it.

This is a mostly separate issue from the question of what to do about privacy and personal data. I’ve long believed that the platforms — and all companies that create marketing-oriented databases about us, from whatever sources — should be required to:

1. Let users remove everything companies have collected about them via their use of online services.

2. Make all data, including conversations, “portable” in ways that would enable competitors to woo people onto other services (especially ones that made privacy a feature). A sketch of what a portable export might look like follows this list.

3. Limit what they can do with the data they do collect.

4. Offer easy-to-use “dashboards” giving users granular control over privacy and data-sharing settings.

5. Disclose absolutely everything they do, in simple language that even a U.S. president could understand.
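Here is that sketch of a “portable” export, in Python. The schema is entirely invented for this example; the point is only that a rival service could parse such a file and rebuild a user’s profile, social graph, and conversation history.

```python
import json
from datetime import datetime, timezone

# Hypothetical export schema -- no platform ships this format today.
# "Portable" means a competing service could ingest this file and
# rebuild the user's profile, contacts, and conversation history.
export = {
    "format_version": "1.0",
    "exported_at": datetime.now(timezone.utc).isoformat(),
    "profile": {"handle": "example_user", "display_name": "Example User"},
    "contacts": [
        {"handle": "friend_a", "relationship": "follows"},
        {"handle": "friend_b", "relationship": "mutual"},
    ],
    "conversations": [
        {
            "with": "friend_a",
            "messages": [
                {"sent": "2018-03-01T12:00:00Z", "text": "See you Friday?"},
            ],
        },
    ],
}

with open("my_data_export.json", "w") as f:
    json.dump(export, f, indent=2)
```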

The complications in doing these things are enormous. Maybe it’s impractical, or even impossible. But it seems inevitable that lawmakers will come under pressure to do something.

When it comes to helping us address misinformation on the demand side, not just the supply side, a dashboard would be a good place to start.

Twitter and Facebook allow their users a tiny bit of control now. For example, there’s a Mute function on Twitter and a temporary Mute on Facebook. I use the former liberally — it gives me a certain pleasure to imagine trolls tweeting into infinity — but the latter appears only to work on people, pages, and groups you already follow. Both services let you sort people and organizations you follow into lists that you can look at in a granular way. On Twitter, the norm is to view an unfiltered feed of the people you follow, whereas Facebook’s algorithms decide what you’ll see in that feed unless you set it to show the “Most Recent” material (and I frequently have to reset it, which is annoying).
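Mechanically, a mute is the simplest of these controls: the platform keeps receiving the muted account’s posts but drops them before rendering your timeline. Here is a toy sketch, with invented names and data shapes, not any platform’s real implementation:

```python
# Toy illustration of client-side muting; the names and structures
# here are invented, not any platform's actual code.
muted_accounts = {"troll_account", "spam_bot_9000"}

def apply_mutes(timeline: list[dict], muted: set[str]) -> list[dict]:
    """Drop posts from muted authors before display. The muted author
    never knows: the posts still exist, you just never see them."""
    return [post for post in timeline if post["author"] not in muted]

timeline = [
    {"author": "troll_account", "text": "Shouting into the void..."},
    {"author": "a_friend", "text": "Lunch on Thursday?"},
]
print(apply_mutes(timeline, muted_accounts))  # only the friend's post remains
```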

So what else should go into a dashboard? Here are just a few of the items I’d suggest to improve the actual news and community-information content (a rough sketch of how such settings might look in code follows the list):

1. A filter or no-filter button on Facebook. Give me the feed of whom and what I follow, in reverse chronological order (newest first), or a curated feed. Google should offer a non-curated search that doesn’t also require me to sign out.

2. A selection of ways to alter the curated feed. The platforms should give me the ability to prioritize not according to how they interpret what I do, but how I tell them to prioritize. (One alternative, of course, should be to just take their suggestions — and as we all know, the default usually wins. That’s a bug, not a feature.)

3. Community-driven filtering. Give me a way to organize with others to create streams of information that we decide are useful. Yes, this could end up creating even worse filter bubbles, but that way of seeing the world — however shallow and narrow-minded — should be an option.

4. Filter bubble fixers. Give me a setting that will deliberately put items in my feed that I know I’ll disagree with, or that come from world views different from my own. On Twitter, I specifically follow people who make my blood boil or see the world differently. Search and social algorithms need to include these capabilities.
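To make the list concrete, here is a minimal sketch in Python of what user-controlled feed settings might look like. Everything in it is invented for illustration: the structure, field names, and ranking rules correspond to no real platform’s interface. Item 3 would amount to a shared filter list maintained by a community, omitted here for brevity.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Everything below is hypothetical: no platform exposes settings or
# fields like these today. It only illustrates who could hold the knobs.

@dataclass
class FeedPreferences:
    use_platform_ranking: bool = False          # item 1: curated vs. raw feed
    prioritized_sources: set[str] = field(default_factory=set)  # item 2
    opposing_views_ratio: float = 0.0           # item 4: share of the feed
                                                # reserved for views I reject

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    platform_score: float    # the platform's opaque relevance score
    viewpoint: str           # e.g. "agrees" / "disagrees" -- hypothetical tag

def build_feed(posts: list[Post], prefs: FeedPreferences) -> list[Post]:
    """Order a feed by the user's stated preferences, not the platform's."""
    if prefs.use_platform_ranking:
        feed = sorted(posts, key=lambda p: p.platform_score, reverse=True)
    else:
        # Item 1: plain reverse-chronological order, newest first.
        feed = sorted(posts, key=lambda p: p.posted_at, reverse=True)

    # Item 2: float sources the user explicitly prioritizes (stable sort,
    # so the ordering above is preserved within each group).
    feed.sort(key=lambda p: p.author not in prefs.prioritized_sources)

    # Item 4: guarantee some items near the top that challenge the user.
    quota = int(len(feed) * prefs.opposing_views_ratio)
    challengers = [p for p in feed if p.viewpoint == "disagrees"][:quota]
    keepers = [p for p in feed if p not in challengers]
    return challengers + keepers
```

The specific ranking rules matter far less than where they live: in this sketch, every knob sits in FeedPreferences, an object the user controls rather than the platform.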

Those are a few of many items I’d have in a platform dashboard, and I’ll bet you have some you’d add, too. Put them in a comment and I’ll update this.

Platform-provided dashboards, helpful as they would be, can’t fully solve the problem, any more than we can fix things by improving supply while ignoring demand. Even more, we need tools that the platforms don’t create themselves but enable others to build by being more open.

The Twitter ecosystem emerged at a time when the company offered third-party developers hooks into its back-end software that let them create add-ons users craved. These hooks, known collectively as an application programming interface (API), were essentially rules of the road for developers, governing what they could do with the data and how they could do it. The result was an explosion of useful add-ons, notably dashboards that made Twitter much easier to use and much more functional. But Twitter, in a supremely arrogant and ugly move, ultimately shut down most of that third-party access. This was mostly a business decision: Twitter wanted firm control over the ecosystem in its pursuit of profits, and if that hurt the ecosystem in the short and long run, so be it.
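For readers who have never worked with one, an API is simply a documented way for outside software to request data and take actions on a user’s behalf. A third-party dashboard of that era would have made calls shaped roughly like the sketch below; the host, endpoint, and authentication details are illustrative rather than an exact reproduction of Twitter’s old API.

```python
import requests  # third-party HTTP library: pip install requests

# Illustrative only: this endpoint is modeled loosely on the REST APIs
# of that era, not copied from Twitter's actual documentation.
API_BASE = "https://api.example-platform.com/1.1"

def fetch_home_timeline(auth_token: str, count: int = 50) -> list[dict]:
    """Request the user's raw timeline, as an early third-party
    Twitter dashboard would have done."""
    resp = requests.get(
        f"{API_BASE}/statuses/home_timeline.json",
        params={"count": count},
        headers={"Authorization": f"Bearer {auth_token}"},
        timeout=10,
    )
    resp.raise_for_status()  # surface HTTP errors instead of bad data
    return resp.json()
```

A client built on hooks like these can re-sort, filter, or annotate the timeline however its users ask, which is exactly the add-on layer the platforms should re-enable.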

I’m convinced that the Twitter developer community could have gone a long way toward solving the spam and trolling woes that persist on the platform today. I also believe the company should reverse course on its control-freakery and give third parties broader permission once again to help users solve the problems on the demand side.

And I’m doubly convinced that Facebook (including its Instagram and WhatsApp subsidiaries), Google, and other platforms should do likewise. Unfortunately, they’re moving fast — in the wrong direction. A Facebook executive suggested last week that this was the wrong time — given misbehavior by Cambridge Analytica and others — to be asking the company to open up the API. But it’s one thing to open it up to help third parties solve problems like misinformation. It’s another to open it up in ways that let third parties abuse personal data. I recognize that encouraging one while preventing the other is not a trivial task. Yet with its resources, Facebook could do a lot to ensure that abuses didn’t take place, and to shut them down if they did.
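One standard mechanism for encouraging the first while preventing the second is scoped access: API tokens that can read and re-rank public content but cannot harvest personal data. The sketch below uses invented scope names; the pattern mirrors OAuth-style permissions rather than any actual Facebook API.

```python
# Invented scope names; the pattern mirrors OAuth-style scoped tokens.
FILTERING_SCOPES = {"timeline.read", "content.rank"}      # demand-side tools
SENSITIVE_SCOPES = {"profile.private", "friends.export"}  # harvesting vectors

def authorize(granted_scopes: set[str], requested: str) -> bool:
    """Allow misinformation-filtering tools to read and re-rank content
    while refusing the scopes that enable bulk personal-data harvesting."""
    if requested in SENSITIVE_SCOPES:
        return False  # never granted to third-party filtering tools
    return requested in granted_scopes and requested in FILTERING_SCOPES

# A filtering add-on can read the timeline...
assert authorize({"timeline.read"}, "timeline.read")
# ...but can never export a user's friend list, whatever it asks for.
assert not authorize({"friends.export"}, "friends.export")
```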

Google had the opportunity to be a genuine competitor to Facebook in social media when it launched Google+ a few years ago. But Google steadfastly refused even to consider providing an API, and I’m certain this is one reason Google+ failed to become a serious alternative (even though it’s still there and, in my case, still a source of some interesting conversations and content).

I’m grateful that Facebook was an early backer of our News Co/Lab at Arizona State University, in part because I do believe the company is serious about helping improve demand for better information, which is the lab’s major focus. Our work grew out of my (and others’) call for scale — widespread adoption — of the variety of literacies (media, news, information, etc.) we all need in the digital age. In our initial projects, we’re working to help those literacies scale in education and with news-industry partners.

But the place we can achieve the greatest scale, as I’ve said many times before, is at the platform level.

Facebook, Google, Twitter and the rest keep insisting that they don’t want to be forced to decide what is true and what isn’t. I’m asking that they do more to help users be smart arbiters for themselves.

What I’m asking for here is not trivial from a product standpoint, and therefore not trivial from the business side, either. But I’m certain that it’s needed, and soon.

A Facebook employee chided me (on Twitter) for pushing these ideas, saying his company will be attacked no matter what it does at this point. Fair enough, and probably true.

But I’d respond this way: If you’re going to be criticized, why not be criticized for doing the right thing?
