Zucked: How Users Got Used and What We Can Do About It. Roger McNamee
five hundred million users, half of whom visited the site every day. Average daily usage was thirty-four minutes. Users who joined Facebook to stay in touch with family soon found new functions to enjoy. They spent more time on the site, shared more posts, and saw more ads.
October saw the release of The Social Network, a feature film about the early days of Facebook. The film was a critical and commercial success, winning three Academy Awards and four Golden Globes. The plot focused on Zuck’s relationship with the Winklevoss twins and the lawsuit that resulted from it. The portrayal of Zuck was unflattering. Zuck complained that the film did not accurately tell the story, but hardly anyone besides him seemed to care. I chose not to watch the film, preferring the Zuck I knew to a version crafted in Hollywood.
Just before the end of 2010, Facebook improved its user interface again, edging closer to the look and feel we know today. The company finished 2010 with 608 million monthly users. The rate of user growth remained exceptionally high, and minutes of use per user per day continued to rise. Early in 2011, Facebook received an investment of five hundred million dollars for 1 percent of the company, pushing the valuation up to fifty billion dollars. Unlike the Microsoft deal, this transaction reflected a financial investor’s assessment of Facebook’s value. At this point, even Microsoft was making money on its investment. Facebook was not only the most exciting company since Google, it showed every indication that it would become one of the greatest tech companies of all time. New investors were clamoring to buy shares. In June 2011, DoubleClick announced that Facebook was the most visited site on the web, with more than one trillion visits. Nielsen disagreed, saying Facebook still trailed Google, but it appeared to be only a matter of time before the two companies would agree that Facebook was #1.
In March 2011, I saw a presentation that introduced the first seed of doubt into my rosy view of Facebook. The occasion was the annual TED Conference in Long Beach, the global launch pad for TED Talks. The eighteen-minute Talks are thematically organized over four days, providing brain candy to millions far beyond the conference. That year, the highlight for me was a nine-minute talk by Eli Pariser, the board president of MoveOn.org. Eli had noticed that his Facebook and Google feeds had stopped being neutral. Even though his Facebook friend list included a balance of liberals and conservatives, his tendency to click more often on liberal links had led the algorithms to prioritize such content, eventually crowding out conservative content entirely. He worked with friends to demonstrate that the change was universal on both Facebook and Google. The platforms were pretending to be neutral, but they were filtering content in ways that were invisible to users. Having argued that the open web offered an improvement on the biases of traditional content editors, the platforms were surreptitiously implementing algorithmic filters that lacked the value system of human editors. Algorithms would not act in a socially responsible way on their own. Users would think they were seeing a balance of content when in fact they were trapped in what Eli called a “filter bubble” created and enforced by algorithms. He hypothesized that giving algorithms gatekeeping power without also requiring civic responsibility would lead to unexpected, negative consequences. Other publishers were jumping on board the personalization bandwagon. There might be no way for users to escape from filter bubbles.
Eli’s conclusion? If platforms are going to be gatekeepers, they need to program a sense of civic responsibility into their algorithms. They need to be transparent about the rules that determine what gets through the filter. And they need to give users control of their bubble.
I was gobsmacked. It was one of the most insightful talks I had ever heard. Its import was obvious. When Eli finished, I jumped out of my seat and made a beeline for the stage door so that I could introduce myself. If you view the talk on TED.com today, you will immediately appreciate its importance. At the time I did not see a way for me to act on Eli’s insight at Facebook. I no longer had regular contact with Zuck, much less inside information. I was not up to speed on the engineering priorities that had created filter bubbles or on plans for monetizing them. But Eli’s talk percolated in my mind. There was no good way to spin filter bubbles. All I could do was hope that Zuck and Sheryl would have the sense not to use them in ways that would harm users. (You can listen to Eli Pariser’s “Beware Online ‘Filter Bubbles’” talk for yourself on TED.com.)
Meanwhile, Facebook marched on. Google introduced its own social network, Google+, in June 2011, with considerable fanfare. By the time Google+ came to market, Google had become a gatekeeper between content vendors and users, forcing content vendors who wanted to reach their own audience to accept Google’s business terms. Facebook took a different path to a similar place. Where most of Google’s products delivered a single function that gained power from being bundled, Facebook had created an integrated platform, what is known in the industry as a walled garden, that delivered many forms of value. Some of the functions on the platform had so much value that Facebook spun them off as stand-alone products. One example: Messenger.
Thanks to its near monopoly of search and the AdWords advertising platform that monetized it, Google knew more about purchase intentions than any other company on earth. A user looking to buy a hammer would begin with a search on Google, getting a set of results along with three AdWords ads from vendors looking to sell hammers. The search took milliseconds. The user bought a hammer, the advertiser sold one, and Google got paid for the ad. Everyone got what they wanted. But Google was not satisfied. It did not know the consumer’s identity. Google realized that its data set of purchase intent would have greater value if it could be tied to customer identity. I call this McNamee’s 7th Law: data sets become geometrically more valuable when you combine them. That is where Gmail changed the game. Users got value in the form of a good email system, but Google received something far more valuable. By tying purchase intent to identity, Google laid the foundation for new business opportunities. It then created Google Maps, enabling it to tie location to purchase intent and identity. The integrated data set rivaled Amazon’s, but without warehouses and inventory it generated much greater profits for Google. Best of all, combined data sets often reveal insights and business opportunities that could not have been imagined previously. The new products were free to use, but each one contributed data that transformed the value of Google’s advertising products. Facebook did something analogous with each function it added to the platform. Photo tagging expanded the social graph. News Feed enriched it further. The Like button delivered data on emotional triggers. Connect tracked users as they went around the web. The value is not really in the photos and links posted by users. 
The real value resides in metadata—data about data—the record of where users were when they posted, what they were doing, with whom they were doing it, which alternatives they considered, and more. Broadcast media like television, radio, and newspapers lack the real-time interactivity necessary to create valuable metadata. Thanks to metadata, Facebook and Google can create a picture of each user that can be monetized far more effectively than anything traditional media can match. When collected on the scale of Google and Facebook, metadata has unimaginable value. When people say, “In advertising businesses, users are not the customer; they are the product,” this is what they are talking about. But in the process, Facebook in particular changed the nature of advertising. Traditional advertising seeks to persuade, but in a one-size-fits-most kind of way. The metadata that Facebook and others collected enabled them to find unexpected patterns, such as “four men who collect baseball cards, like novels by Charles Dickens, and check Facebook after midnight bought a certain model of Toyota,” creating an opportunity to package male night owls who collect baseball cards and like Dickens for car ads. Facebook allows advertisers to identify each user’s biases and appeal to them individually. Insights gathered this way changed the nature of ad targeting. More important, though, all that data goes into Facebook’s (or Google’s) artificial intelligence and can be used by advertisers to exploit the emotions of users in ways that increase the likelihood that they purchase a specific model of car or vote in a certain way. As the technology futurist Jaron Lanier has noted, advertising on social media platforms has evolved into a form of manipulation.
Google+ was Google’s fourth foray into social networking. Why did Google try so many times? Why did it keep failing? By 2011, it must have been obvious to Google that Facebook had the key to a new and especially valuable online advertising business. Unlike traditional media or even search, social networking provided signals about each user’s emotional state and triggers. Relative to the monochrome of search, social network advertising