Facebook’s dark design: It’s not just the algorithms

By Daniel Jackson

In the midst of our current debate about Facebook, have we ignored a core issue? Public scrutiny has focused almost entirely on the company and its practices. Congressional testimony of a whistle-blower earlier this fall—and the Wall Street Journal’s continuing exposé—have revealed the extent to which its employees knew, through their own research, about the damage that their product causes. And yet the product itself has been strangely absent from much of the discussion.

There has been talk of algorithms, notably how Facebook determines which posts users see, and how rankings favor sensational content, feeding extremism and aiding the spread of disinformation. And user interface experts have long noted the myriad small and subtle ways in which sites like Facebook entice the user into more frequent and impetuous interactions.

These things matter, and their pernicious effects are well known, if not always acknowledged. But the essence of a software product (such as the Facebook app) is not found in the buttons and colors that appear on the screen, nor in the algorithms that prioritize one data item over another. Instead, it lies in the concepts of an app—the behavioral building blocks we interact with—that shape how we use and understand it, and that determine the impacts of our actions.

The concepts of “newsfeed,” “likes,” “friends,” “tagging” and so on—these are the core of Facebook, and scrutinizing them reveals the ways in which Facebook’s design often serves the interests not of users but of Facebook itself. These concepts, in other words, are the drivers behind Facebook’s wider societal impacts, and the damage they cause is not accidental but is by design.

The purpose of the newsfeed is, according to Facebook, “connecting people to the stories that matter most to them.” If that were true, you should be able to filter and sort posts as you would items in an online store. And yet Facebook’s newsfeed not only lacks the most elementary controls but is not even stable: a refresh of your browser window will show you a new selection of posts, not only changing their order but even dropping top posts that you might have wanted to read.
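
To make the contrast concrete, here is a minimal sketch of a stable, user-controlled feed in Python. The field names and structure are hypothetical, offered only to show how ordinary filtering and sorting would behave; nothing here reflects Facebook’s actual code.

```python
# A minimal, deterministic feed: the same inputs always yield the same
# selection and order. All names here are hypothetical.

posts = [
    {"author": "ana", "topic": "news",   "time": 3},
    {"author": "bo",  "topic": "sports", "time": 1},
    {"author": "ana", "topic": "news",   "time": 2},
]

def feed(posts, topic=None, newest_first=True):
    """Filter by topic and sort by time: the elementary controls a store offers."""
    selected = [p for p in posts if topic is None or p["topic"] == topic]
    return sorted(selected, key=lambda p: p["time"], reverse=newest_first)

# Refreshing simply re-runs the same query and shows the same posts.
print(feed(posts, topic="news"))
```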

We’re so familiar with this concept that we fail to notice how strange it is. The newsfeed concept has conditioned us to accept what appears to be a near-random selection of posts, opening the void into which Facebook can insert the algorithms that supplant our own choices.

Just imagine how many books Amazon would sell if it connected us to “the books that matter most” by showing us ever-changing, endless lists of titles. Now you might counter that these practical concerns are not what Facebook’s designers have in mind. It might surprise you, then, to read in their own manifesto of seven guiding principles the one entitled “useful,” which begins: “Our product is more utility than entertainment, meant for repeated daily use, providing value efficiently.”

Sometimes the problem is not an individual concept but the way in which multiple concepts are overlaid. We are all familiar with the concept of upvoting, in which users’ approvals or disapprovals of items (such as comments on a newspaper article) are aggregated to rank them by popularity. We’ve also seen (in Slack, for example) the concept of emotional reaction, in which readers can respond to a post with a smiley face or a heart. Facebook’s “like” concept ingeniously fuses these two together; reacting to a post with a heart implicitly upvotes it. What not all users realize is that an angry reaction counts as an upvote too, and according to a recent report, any emotional reaction counts for more than a simple like. A design that separated these concepts would empower users to make independent decisions: to express anger, for example, without contributing to a post’s promotion. It would not, however, serve the interests of Facebook.
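
Here is a minimal sketch, in Python, of what such separated concepts might look like. The class and method names are hypothetical illustrations of the design point, not Facebook’s actual implementation.

```python
# Two independent concepts: reacting never affects ranking.
# All names here are hypothetical, for illustration only.

class Upvote:
    """Aggregates approvals so items can be ranked by popularity."""
    def __init__(self):
        self.votes = {}                        # item -> vote count

    def upvote(self, item):
        self.votes[item] = self.votes.get(item, 0) + 1

    def rank(self, items):
        return sorted(items, key=lambda i: self.votes.get(i, 0), reverse=True)

class Reaction:
    """Records emotional responses, with no effect on ranking."""
    def __init__(self):
        self.reactions = {}                    # item -> list of reactions

    def react(self, item, emotion):
        self.reactions.setdefault(item, []).append(emotion)

upvotes, reactions = Upvote(), Reaction()
reactions.react("post-1", "angry")             # express anger...
print(upvotes.rank(["post-1", "post-2"]))      # ...without promoting the post

# Facebook's fused design, by contrast, in effect makes every react()
# also call upvote(), so an angry reaction boosts the post it condemns.
```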

Problems can also arise in the way in which concepts are synchronized together. Some degree of automation, in which actions in one concept can trigger actions in another, is often desirable; if you decline an invitation in your calendar, for example, you expect the associated event to be removed. But such linkages are not always what the user wants. When you tag someone in a photo in Facebook, their name becomes attached to the image. In addition, however, the visibility of the photo changes: now all the friends of the person being tagged can see it. In effect, this means that someone can share your photo not only with their friends but also with yours. You can turn this behavior off, but unfortunately it’s the default and many users aren’t even aware of it. Worse, anyone can tag you, even if not a friend, and it’s not clear what control you have in that case: Facebook’s help page warns ominously that “tags from people you’re not friends with may appear in your timeline review.”
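
The synchronization itself can be sketched in a few lines of Python. Everything here, from the Photo class to the expand_audience flag, is a hypothetical rendering of the behavior described above, not Facebook’s code.

```python
# Hypothetical sketch of concept synchronization: tagging a photo
# also triggers a change in who can see it.

class Photo:
    def __init__(self, owner):
        self.owner = owner
        self.tags = set()
        self.audience = {owner}           # who can see the photo

def friends_of(user):
    # Stand-in for a real social-graph lookup.
    return {f"{user}-friend-{i}" for i in range(2)}

def tag(photo, person, expand_audience=True):
    """Tagging attaches a name; the sync rule below also widens visibility."""
    photo.tags.add(person)
    if expand_audience:                   # the contested synchronization:
        photo.audience |= friends_of(person)   # tagged person's friends now see it

photo = Photo(owner="alice")
tag(photo, "bob")                         # default: Bob's friends see Alice's photo
print(photo.audience)

# A design that decoupled the two concepts would make expand_audience=False
# the default, leaving visibility under the photo owner's control.
```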

In all these cases, Facebook’s design is intricate and carefully considered. The problem is not an egregious design flaw that subverts the concept’s purpose. Rather, it is that the actual purpose may not be what we users had in mind: it might be Facebook’s, and not our own.

Consumers today have a greater awareness of design than ever before. We expect our appliances and products to be easy to use, with features that are aligned with our needs. As software products become increasingly pervasive in all aspects of our lives, we must balance an appreciation of the benefits they bring with a cool assessment of the risks they pose. Such an assessment must begin with a product’s core concepts, and by posing a simple question: whose needs are they designed to serve?


Daniel Jackson is professor of computer science at MIT and the author of The Essence of Software: Why Concepts Matter for Great Design.