Forrest Kim asked • Aug 10th, 2021
Should there be more regulation of social networks and if so what should that regulation look like?
Ben replied • Sep 8th, 2021
In brief, my answer is that the traditional “regulation” dimension of this question has limited scope to confront the distinctive political dimensions of online social networks. Fundamentally new legislation, less blunt and more modern than Section 230 of the Communications Decency Act, is necessary to align platforms’ incentives with social welfare.
The question of how, and how much, online social networks should be regulated is an urgent one. Recently, the FTC amended and refiled its antitrust complaint against Facebook, the largest social network. There is considerable evidence that social media platforms such as Facebook and Twitter play a substantial role in shaping public discourse and even election outcomes. In the last decade, major world events, from the Arab Spring uprisings [1] to violence in Myanmar [2] to the 2016 US election, have been shaped significantly by activity on social media.
On the other hand, much of the everyday value such platforms deliver has little to do with momentous world events: people tracking down old friends, coordinating during natural disasters [3], building and finding niche support communities [4], video calling grandparents, and sharing jokes. Facebook CEO Mark Zuckerberg recently announced that Facebook would pivot its efforts toward facilitating more private interaction, rather than broad sharing.
This points to two dimensions of the regulation question. On the one hand, insofar as platforms play a role comparable to that of television networks in the mid-twentieth century in shaping politics and culture, there are distinctive questions about their social impact and political power. On the other hand, insofar as they are a communication technology -- in the same broad category as telephones -- the questions are more traditional ones of privacy, consumer protection, and antitrust.
The basic questions that must be resolved are these. On the news and content front, to what extent will platforms be accountable to a broader public for their decisions concerning content, and to what extent should they be liable for editorial decisions, human or automated? On the more traditional side, what requirements will be imposed on platforms concerning privacy, security, and monopoly power? Ultimately, in terms of governing frameworks, what balance will be struck between self-governance, independent oversight, and government regulation?
US regulators’ concerns have mainly focused on the latter prong of this dichotomy. While I think this is hugely important, I will dwell on it less, because I view it as less intrinsically related to social networks, which are what your question is about. For example, consumer protection issues (e.g., data privacy and data security) show up in a huge variety of markets, from health insurance to air travel. Firms with market power must be held to more stringent standards than they currently are. Institutions like the Federal Trade Commission and the Consumer Financial Protection Bureau are acting aggressively on these questions, and I think that is great. One subtlety here is that forceful regulation of data handling and the like probably benefits huge companies on the whole, because they are the only ones that can afford the large fixed costs of compliance. This may explain why Facebook’s CEO has actually asked for more regulation [5]. Thus, policymakers would be wise to protect entrepreneurship while they impose more stringent rules: consumers who want to take risks on small companies should be allowed to do so, with adequate warnings; once those companies grow big enough, they should face oversight.
But I think the biggest questions about social media concern cases where traditional consumer protection is not the right paradigm at all. Social platforms are perhaps the first large-scale technology to not only facilitate social interaction, but reshape our connections, and some aspects of the fabric of society, as they do so. It seems sort of comical to think that antitrust or privacy law is the right tool with which to confront this sort of change. The legal frameworks we currently have for these questions are coarse; one example is Section 230 of the Communications Decency Act in the US, which holds that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This law has protected platforms from legal liability for allegedly harmful content and made them fundamentally different from other publishers in the eyes of the law.
My feeling about these tools is that they are almost comically antiquated and blunt relative to the tasks they confront. For example, is it obvious that all platforms that publish third-party speech, regardless of their size and behavior, should automatically enjoy the protections of Section 230? I believe a much more sophisticated version of this law is necessary, one that balances protection of online discussion against other social interests. For example, it seems like a good thing that a platform such as Yelp is not responsible for individual false statements about food at a restaurant, but it is not obvious that Yelp should retain those protections if it facilitates systematic deception of consumers. More broadly, anonymous libel, revenge porn, and other abuses of platforms routinely destroy individual lives, and again it is not clear that blanket immunity laws for platforms are a well-tailored instrument for aligning incentives with social welfare. In some domains we probably need more effective mechanisms to hold individual posters liable for content that would be criminal if posted in the town square.

Currently, platforms exercise immense voluntary discretion most of the time. Facebook instituted what is, in effect, a Supreme Court to decide its most high-profile cases; when we see something like that, there is clearly an awkward gap in mechanisms that have both the authority and the legitimacy to rule on the hardest issues. Here, legislatures must undertake deep debates about how to maintain the vibrancy of the Internet while avoiding a harmful lawlessness. Immunity of the kind offered by Section 230 could be a reward for stewardship of online spaces that meet defined standards, and quantitative benchmarks of transparency, responsiveness, mechanisms for appeal, and so on can play an important role, as the sketch below illustrates. Again, protection of entrepreneurship and innovation is important and must be weighed in these developments.
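To make the idea of conditional immunity concrete, here is a minimal sketch, in Python, of how a safe-harbor test keyed to quantitative benchmarks might be structured. The metric names, the thresholds, and the small-platform exemption are all illustrative assumptions of mine, not features of any existing or proposed statute.

```python
from dataclasses import dataclass

# Hypothetical stewardship benchmarks; names and thresholds are
# illustrative assumptions, not drawn from any actual legislation.
@dataclass
class PlatformMetrics:
    transparency_reports_per_year: int  # published moderation reports
    median_response_days: float         # time to act on flagged content
    appeals_process_audited: bool       # appeals independently audited
    monthly_active_users: int

def qualifies_for_immunity(m: PlatformMetrics) -> bool:
    """Sketch of a conditional, Section 230-style safe harbor:
    small platforms qualify automatically, while large ones must
    meet defined stewardship benchmarks."""
    SMALL_PLATFORM_CUTOFF = 1_000_000  # illustrative size threshold
    if m.monthly_active_users < SMALL_PLATFORM_CUTOFF:
        return True
    return (
        m.transparency_reports_per_year >= 2
        and m.median_response_days <= 7.0
        and m.appeals_process_audited
    )
```

The automatic qualification for small platforms reflects the earlier point about protecting entrepreneurship: stringent standards kick in only once a platform is large enough to bear the fixed costs of compliance.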
The above discussion is rather US-centric. One important fact is that social networks are inherently global and transcend geographic boundaries, so the normative discussion, and certainly any practical one, must take account of local laws. Other governments have taken very different paths: see, for instance, Singapore’s fake-news law [6]; the Indian government’s Information Technology Rules, 2021 [7], which effectively require messaging intermediaries such as WhatsApp to be able to decrypt messages in order to comply; and Germany’s NetzDG [8]. These regulations are aimed at countering the spread of misinformation and protecting the countries’ sovereignty. My own normative commitments are to the muscular protection of public speech and individual privacy rights typified by the US, but one silver lining of much less liberal regimes is that they force platforms to evolve their technological capacity to analyze content. If people prefer a platform that enforces certain standards of content quality or user accountability, such a platform may thrive in an informationally and commercially free market even if it was developed to serve authoritarian aims. But my fear is that no such happy accident will occur, and that the growth of online social networks will pose fundamental challenges to the liberal order and legal regimes of the US and the countries of Western Europe.
I thank Keval Shah and Kweku Opoku-Agyemang for their input into this post; all errors are mine and I take sole responsibility for the views expressed.
[1] https://www.pewresearch.org/journalism/2012/11/28/role-social-media-arab-uprisings/
[2] https://about.fb.com/news/2018/11/myanmar-hria/
[3] https://www.dhs.gov/sites/default/files/publications/Social-Media-EM_0913-508_0.pdf
[4] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5792702/
[5] https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504-521a-11e9-a3f7-78b7525a8d5f_story.html
[6] https://sso.agc.gov.sg/Act/POFMA2019?TransactionDate=20191001235959
[7] https://wilmap.stanford.edu/entries/information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021
[8] https://en.wikipedia.org/wiki/Network_Enforcement_Act