No Mercy No Malice

How Should We Rein in Facebook?

Let’s consider the tools we have available

Scott Galloway · Published in Marker · Nov 8, 2021

Don Draper suggested that when you don’t like what’s being said, you should change the conversation. Facebook is trying to change the conversation to the Metaverse. But we should keep our eyes on the prize, and not look stage left so the illusionist can continue to depress teens and pervert democracy stage right. In sum, nothing has changed. And something (likely many things) needs to be done to stop the most dangerous person on the planet. I first wrote the preceding sentence three years ago; it seems less novel or incendiary today.

So what would change things? We the people are not without tools, so long as the nation remains a democracy. We need only find the will to use them.

Toolbox

  • Break Up. Forty percent of the venture capital invested in startups ultimately flows to Google and Facebook, largely as spending on customer acquisition. The real genius of these companies is the egalitarian nature of their platforms: Everybody has to be there, yet nobody can develop a competitive advantage there, as Nike did with TV and Williams-Sonoma did with catalogs. Their advertising duopoly is not a service or product that provides differentiation, but a tax levied on the entire ecosystem. The greatest tax cut in the history of small and midsize businesses would be to create more competition, lowering the rent for the companies that create two-thirds of the jobs in America. That would also create space for platforms that offer trust and security as a value proposition, vs. rage and fear.
  • Perp Walk. We need to restore the algebra of deterrence. The profits of wrongful activity must be outweighed by the punishment multiplied by the chance of getting caught (see the arithmetic after this list). For Big Tech, the math isn’t even close. Sure, they get caught … but it doesn’t matter. Record fines amount to weeks of cash flow. Nothing will change until someone is arrested and charged.
  • Identity. Identity is a potent curative for bad behavior. Anonymous handles are shots of Jägermeister for someone who’s a bad drunk. Few things have inspired more cowardice and poorer character than online anonymity. The counterexample is LinkedIn: Its users post under their own names, with their photos and work histories public. Yes, some people need to be anonymous. The platforms should, and will, figure this out. Making a platform-wide policy on the edge case of a Gulf journalist documenting human rights violations is tantamount to setting Los Angeles on fire to awaken the dormant seeds of pyrophilic plants in Runyon Canyon.
  • Age Gating. My friend Brent is the strong silent type. But he said something — while we were at a Rüfüs Du Sol concert, no less — that rattled me, as it was so incisive: “Imagine facing your full self at 15.” He went on to say he’d rather give his teenage daughter a bottle of Jack and a bag of marijuana than Instagram and Snap accounts. We age-gate alcohol, driving, pornography, drugs, tobacco. But Mark and Sheryl think we should have Instagram for Kids. We will look back on this era with numerous regrets. Our biggest? How did we let this happen to our kids …
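To put numbers on the algebra of deterrence: wrongdoing is deterred only when the expected penalty exceeds the profit from the conduct. A minimal formulation (the notation is mine, not from any statute or filing):

```latex
% Deterrence holds only when the expected penalty exceeds the profit.
%   p   = probability of getting caught
%   F   = penalty if caught
%   \Pi = profit from the wrongful conduct
\[
  p \cdot F \;>\; \Pi
\]
```

Plugging in rough numbers: even with p near 1 (Big Tech does get caught), the FTC’s record $5 billion Facebook fine in 2019 was about seven weeks of the company’s roughly $36 billion in annual operating cash flow. The left side of the inequality never comes close to the right.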

The Sword and Shield of Liability

In most industries, the most robust regulator is not a government agency, but a plaintiff’s attorney. If your factory dumps toxic chemicals in the river, you get sued. If the tires you make explode at highway speed, you get sued. Yes, it’s inefficient, but ultimately the threat of lawsuits reduces the need for regulation; it’s a cop that covers a broad beat. Liability encourages businesses to make risk/reward calculations in ways that one-size-fits-all regulations don’t. It creates an algebra of deterrence.

Social media, however, is largely immunized from such suits. A 1996 law, known as “Section 230,” erects a fence around content that is online and provided by someone else. It means I’m not liable for the content of comments on the No Mercy website, Yelp isn’t liable for the content of its user reviews, and Facebook, well, Facebook can pretty much do whatever it wants.

There are increasing calls to repeal or reform 230. It’s instructive to understand this law, and why it remains valuable. When Congress passed it — again, in 1996 — it reasoned that online companies were like bookstores or old-fashioned bulletin boards: mere distribution channels for other people’s content, which shouldn’t be liable for it.

In 1996, 16% of Americans had access to the Internet, via a computer tethered to a phone cord. There was no Wi-Fi. No Google, Facebook, Twitter, Reddit, or YouTube — not even Friendster or MySpace had been birthed. Amazon sold only books. Section 230 was a fence protecting a garden plot of green shoots and untilled soil.

Today those green shoots have grown into the Amazon jungle. Social media, a category that didn’t exist in 1996, is now worth roughly $2 trillion. Facebook has almost 3 billion users on its platform. Fifty-seven percent of the world’s population uses social media. If the speed and scale of consumer adoption are the metric for success, then Facebook, Instagram, and TikTok are the most successful things in history.

This expansion has produced enormous stakeholder value. People can connect across borders and other traditional barriers. Once-marginalized people are forming communities. New voices speak truth to power.


However, the externalities have grown as fast as these businesses’ revenues. Largely because of Section 230, society has borne the costs, economic and noneconomic. In sum, behind the law’s liability shield, tech platforms have morphed from Model UN members to Syria and North Korea. Except these Hermit Kingdoms have more warheads and submarines than all other nations combined.

Social media now has the resources and reach to play by the same rules as other powerful media. We need a new fence.

Unleash the Lawyers

With Section 230, the devil is very much in the details. I’ve gone through an evolution in my own thinking — I once favored outright repeal, but I’ve been schooled by people more knowledgeable than me. One of the best things about having a public profile is that when I say something foolish, I’m immediately corrected by world-class experts (and others). This is also one of the worst things about having a public profile.

I’ve struggled with Section 230, trying to parse the various reform proposals and pick through the arguments. Then, last week, I had lunch with Jeff Bewkes. He ran HBO in the 1990s and 2000s, then ascended the corporate hierarchy to become CEO of Time Warner, overseeing not just HBO, but CNN, Warner Bros., AOL, and Time Warner Cable. In sum, Jeff understands media and stakeholder value well. Really well.

Traditional media was never perfect, but most media companies, Jeff pointed out, are held responsible for the harms they cause: They are liable to those they injure. Like factories that produce toxic waste or tire manufacturers whose tires explode, CNN, HBO, and the New York Times can all be sued. The First Amendment offers media companies broad protection, but the rights of people and businesses featured on their channels are also recognized.

“It made us better,” Jeff said. “It made us more responsible and the ecosystem healthier.”

But in online media, Section 230 creates an imbalance between protection and liability; the ecosystem is no longer healthy or proportionate. How do we redraw Section 230’s outdated boundary in a way that protects society from the harms of social media companies while maintaining their economic vitality?

“Facebook and other social media companies describe themselves as neutral platforms,” Jeff said, “but their product is not a neutral presentation of user-provided content. It’s an actively managed feed, personalized for each user, boosting some pieces of content exponentially more than others.”

And that’s where Jeff struck a chord. “It’s the algorithmic amplification that’s new since Section 230 passed — these personalized feeds. That’s the source of the harm, and that’s what should be exposed to liability.”

Personalized content feeds are not mere bulletin boards. Any single video promoting extreme dieting might not pose serious risk to a viewer. But when YouTube draws a teenage girl into a never-ending spiral of ever more extreme dieting videos, the result is a loss of autonomy and an increased risk of self-harm. The personalized feed, churning day and night, linked and echoing, is a new thing altogether, a new threat beyond anything we’ve witnessed. There’s carpet bombing with traditional ordnance, and there’s the mushroom cloud above the Trinity test site.

This is algorithmic amplification, and it is what makes social media so powerful, so rich, and so dangerous. Section 230 was never intended to encompass this type of weaponry. It couldn’t have been, because none of this existed in 1996.

Redrawing the lines of this fence will require a deft hand. Merely retracting the protections of Section 230 is surgery with a chainsaw. And it likely won’t be accomplished in a single revision — this goes way beyond defamation, and the law will need to evolve to account for the novel means of harm levied by social media. That evolution has thus far been thwarted by Section 230’s overbroad ambit.

Supporters of the law correctly highlight that it draws a bright line, easy for courts to interpret. A reformed 230 may not be able to achieve the current level of surgical clarity, but it should narrow the gray areas of factual dispute. There are a number of bills in Congress attempting to address this, which is encouraging.

New Laws for New Media

Merely declaring “algorithms” outside the scope of Section 230 is not a realistic solution. All online content is delivered using algorithms, including this newsletter. Even a purely chronological feed is still based on an algorithm. One approach is to carve out simple algorithms, such as chronological ranking, from more sophisticated (and potentially more manipulative) schemes (the distinction is sketched in code below). The Protecting Americans from Dangerous Algorithms Act eliminates Section 230 protection for feeds generated by means that are not “obvious, understandable, and transparent to a reasonable user.” Alternatively, the “circuit-breaker” approach would punch a hole in 230 for posts that are amplified past some specified level. Platforms could focus their moderation efforts on the fraction of posts that are amplified into millions of feeds.
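To make the line these bills are trying to draw concrete, here is a minimal sketch. Every identifier and the threshold below is invented for illustration; none of it comes from any bill’s text or any platform’s actual systems.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical threshold for the "circuit-breaker" approach: once the
# platform has pushed a post into this many feeds, 230 protection lapses.
CIRCUIT_BREAKER_THRESHOLD = 1_000_000

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    engagement_score: float = 0.0  # predicted likes/shares/watch time
    feed_insertions: int = 0       # feeds the platform has pushed it into

def chronological_feed(posts: list[Post]) -> list[Post]:
    """The 'simple algorithm' a carve-out would protect: newest first,
    identical for every user -- obvious, understandable, transparent."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def personalized_feed(posts: list[Post], affinity: dict[str, float]) -> list[Post]:
    """The 'sophisticated scheme': ranking is the platform's own function
    of predicted engagement, so each user sees a platform-authored ordering."""
    return sorted(
        posts,
        key=lambda p: p.engagement_score * affinity.get(p.author, 0.1),
        reverse=True,
    )

def circuit_breaker_tripped(post: Post) -> bool:
    """Has the platform amplified this post past the specified level?"""
    return post.feed_insertions >= CIRCUIT_BREAKER_THRESHOLD
```

The policy distinction maps onto the sketch: chronological_feed is the same function of public inputs for every user, while personalized_feed is an ordering the platform itself authors for each user, and circuit_breaker_tripped flags the small fraction of posts whose amplification would strip the shield.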

But the most dangerous content isn’t necessarily widely distributed; rather, it is funneled alongside other dangerous content to create, in essence, new content — the feed. The Justice Against Malicious Algorithms Act targets the personalization of content specifically. That gets at what makes social media unique, and uniquely dangerous. Personalization is conduct by the social media platform; if that conduct is harmful, it should be subject to liability.

I’m not a lawyer, so I’m not interested in debating the legal niceties of standing or scienter requirements. Law serves policy, not the other way around, and I trust that in addition to taking on the technical details, our friends in the First Amendment legal community will join in a good faith effort at reform. That hasn’t always been the case. Anyone who begins an argument by suggesting that Facebook has anything in common with a bulletin board isn’t making a serious argument.

Some opponents of Section 230 reform would put the burden on users: Give us privacy controls, make platforms publish their algorithms, and caveat emptor. It. Won’t. Work. People couldn’t be bothered to put on seat belts until we passed laws requiring it. And there’s no multibillion-dollar profit motive driving car companies to make seat belts as uncomfortable and inconvenient as possible. The feed is the ever-improving product of a global experiment running 24/7. Shoshana Zuboff said it best: “Unequal knowledge about us produces unequal power over us.” What’s required is the will to take collective action — for the commonwealth to act through force of law.

In 1996, when Section 230 was passed, it provided prudent protection for saplings, but that was a different age. In 1996, Jeff was CEO of HBO, the premier cable channel, with 30 million subscribers. Its corporate parent, Time Warner, was a 90,000-employee global colossus. His boss, Gerald Levin, was regarded as “perhaps the most powerful media executive in the world.” Meanwhile, on the Internet, the biggest brand was America Online — which had a mere 6 million subscribers and 6,000 employees. Emerging businesses, including AOL, needed the protections of Section 230, and their potential justified it.

At the end of our conversation, I asked Jeff if he’d been concerned about the implications of Section 230 or online media in 1996 — back when he was running HBO. He shook his head. “Not at all,” he said. “Not until 2000.” What happened in 2000? I asked. “AOL bought us.”

Breakups, perp walks, age gating, identity, and liability. We have the tools. Do we have the will?

Life is so rich,

P.S. Last December, I predicted Bitcoin would hit $50K — and I was right. We’ll see what I’m right about next year. My Predictions 2022 event is coming December 7 at 5pm ET. Register now.
