
Below are several examples of abstracts submitted to PLSC and accepted by the PPC. We provide them here to give everyone a taste of what constitutes a good abstract. Note that these samples may not be perfect, but we believe they can be instructive as you build your own submission.
Sample Abstract 1
Theoretical accounts of power in networked digital environments typically do not give systematic attention to the phenomenon of oligarchy—to extreme concentrations of material wealth deployed to obtain and protect durable personal advantage (Winters, 2011). The biggest technology platform companies are dominated to a singular extent by a small group of extraordinarily powerful and wealthy men who have played uniquely influential roles in structuring technological development in ways that align with their personal beliefs, and who now wield unprecedented informational, sociotechnical, and political power. Developing an account of oligarchy and, more specifically, of tech oligarchy within contemporary political economy has therefore become a project of considerable urgency.
As I will show, tech oligarchs’ power derives partly from legal entrepreneurship related to corporate governance and partly from the character of the functions the largest technology platform firms (and, through them, their oligarchic leaders) now perform. Of the dominant platform firms, all except Microsoft have a dual-class stock ownership structure that has allowed their founders to retain more than 50% voting control of the publicly traded entity. Nearly a century old, the dual-class structure now functions principally as a mechanism for ensuring continuing control of tech startups by founding innovators and (in some cases) venture investors (Fisch & Solomon, 2023; Aggarwal et al., 2022). At the same time, the architectures and services provided by dominant tech platform firms have become essential infrastructures (Anand, Gupta, & Appel, 2018; Cohen, 2023) for an increasingly wide variety of computing, communication, and control-based functions that, in turn, now infuse every conceivable domain of human and social activity.
This account of tech oligarchy has important implications for three large categories of hotly debated issues. First, it sheds new light on the much-remarked inability of nation states to govern giant technology platform firms effectively (Bradford, 2023; Cohen, 2019; Schaake, 2024). Because of the way the dual-class structure intersects with other doctrines relating to the accountability of (ordinary) corporate leaders, legal constraints on technology oligarchs’ choices are few and far between.
Second, it explains why efforts to rebalance the scales by recoding networked digital environments for decentralization—using means such as cryptocurrencies, decentralized social media protocols, and so-called decentralized autonomous organizations designed to devolve decision-making authority—have not produced and will not produce the utopian results their backers promise. Advocates of an updated, hard-coded libertarianism envision an end to state control of money and finance (Andreessen, 2014; Baldwin, 2018; Swartz, 2018). Advocates of decentralized democracy see in hard-coded arrangements for decentralized governance of data and digital protocols the potential to reinvent political and social organization (Allen et al., 2023; Decenter Report, 2024; De Filippi & Wright, 2018; Masnick, 2019). In particular, they argue that automated data cooperatives can be used to solve privacy governance problems by controlling access to and processing of their members’ personal data, while federated networks of such cooperatives can ensure that AI models are trained on the right data sets (Ligett & Shadmy, forthcoming). As many able scholars have demonstrated, hard-coded decentralization projects turn out to require governance institutions and to fail when those institutions do (Allen, 2021; Allen et al., 2023; De Filippi & Wright, 2018; Werbach, 2018). But that is only part of the problem. Governance institutions create points of entry for oligarchy, and decentralization of power is not really what oligarchs want.
Far more attractive, from the oligarchic perspective, are other technological ventures that promise durable exit from the social compact while preserving centralization in digital supply chains. Third, therefore, the essay counsels more careful attention to an array of oligarchic projects—from dreams of space colonization to genome editing to the quest to develop artificial general intelligence—that have struck many observers as fantastical. Through these projects, tech oligarchs are working to hollow out a wide range of existing institutions and reinvent them around privately controlled, data-driven logics and hyper-rationalist, longtermist ideologies (Andreessen, 2023; Hammond, 2023; Kurzweil, 2024; Srinivasan, 2022; Taplin, 2023; Torres, 2022). Put simply, technology oligarchs are working to define a human future that they alone control, and in which privacy and data protection play no part.
Sample Abstract 2
Tech workers’ evolving discomfort with the political actions and alignments of industry — part of a prolonged ‘techlash’ (Wheeler 2023) — has led to new action. Some workers are choosing to engage in social and labor activism in their workplaces (Charitsis & Laamenen 2022; Boag et al. 2022; Tan et al. 2024; Schubiner & Dharmaraj 2024), while others opt to leave the private sector entirely in favor of work that more explicitly signals support of the public interest (Rider 2022; Chambers forthcoming). As this latter category of tech talent pivots, they seek out roles newly described as ‘public interest technology’, and they grapple with questions of funders’ influence. In the U.S., the public interest technology organizations that such tech workers aim to join are often based in the civil society sector, which occupies an important and complex role in relation to both the government and the private sector (Dvoskin 2022; Waldman 2024). However, these organizations are frequently funded by the very same Big Tech companies from which they intend to distance themselves (Goldenfein & Mann 2023) or by philanthropies whose influence, from an outsider’s perspective, is unclear. Furthermore, while more and more organizations begin to operate under the banner of public interest technology — a field that itself emerged from philanthropic spending — many more create and use technology in the public interest outside this circumscribed circle, as organizations funded through alternative models. For example, scholars and community leaders have identified modern initiatives such as the Spanish workers’ cooperative Mondragón (Wright 2010), the San Diego taxi union (Irani et al. 2021), and public data trusts (Chan et al. 2023) that demonstrate socioeconomic organizational structures aiming to reject typical corporate and philanthropic forms and extractivism.
Of course, the choices an organization makes regarding its funding sources have profound consequences for organizational remits, goals, incentives, networks, methods, and more (INCITE! 2017).
This study seeks to illuminate the funding landscape of public interest technology work across U.S. civil society, spanning both mainstream and alternative funding models. Inspired by empirical work investigating funding of public interest law organizations (Albiston & Nielsen 2014), we will employ a mixed-methods approach that combines analysis of public financial documents and data, interviews with tech experts in civil society, and systematic analysis of existing sociotechnical research exploring a diversity of financial models in civic and public interest tech. The goals of this study are twofold. First, we will provide descriptive insight into funding arrangements, strategies, and networks—both those which are commonly adopted and those presented as non-normative alternatives by public interest technology organizations. Second, we will provide analytic insight identifying patterns, traps, and norms that result from such funding structures and arrangements.
In doing so, our contributions to PLSC and the broader research community will be as follows: (1) We will build upon the nascent body of research investigating the adoption of technologies and technologists within civil society and nonprofit organizations, as well as the ways such groups shape sociotechnical processes of policy-making, governance, social reproduction, and community-building (e.g., Voida et al. 2011; Erete et al. 2016; Bopp & Voida 2022; Darian et al. 2023a; Darian et al. 2023b; Lin et al. 2024). (2) We will provide two kinds of insights for trailblazing tech workers and civil society leaders who are navigating the public interest tech landscape. First, we will offer empirical evidence of the realities of civil society technology funding in order to inform strategic decision-making. Second, we seek to describe the opportunities and implications of alternative funding structures, which nonprofits might take into consideration as they evaluate the effects of funding on their own practices and goals. Rigorously surveying the flow of finances across public interest technology, as we aim to do in this work, is an important first step for both sociotechnical researchers and for advocates who seek to understand evolving roles for technology in modern social change-making.
Sample Abstract 3
Foundation Models have become a new frontier of value creation in the digital economy. Whereas digital platforms monetise control over continuous flows of personal data, Foundation Models and Generative AI ingest and process different types of datasets that are monetised through consumer-facing AI products. These include massive “pre-training” datasets refined from web-scrapes by entities like Common Crawl, as well as curated “fine-tuning” datasets produced by a variety of actors and distributed on platforms like Amazon Web Services and Hugging Face. Access to and management of datasets largely determine Foundation Model performance (Orr, Goel and Ré, 2022), creating new incentives and pathways for translating data access into economic capital.
Legal regimes play a critical role in shaping the commercial conditions of data access. Platform business models have benefited from the construction, by data protection, consumer protection, and related regulations, of personal data as a type of individually managed commodity exchanged in a data market (Cohen, 2019). However, those legal ideas are now being deployed to disrupt the production and circulation of datasets critical to AI supply chains. Technology firms that leveraged data markets for platform business models now posit the existence of a data commons available for commercial exploitation as pre-training datasets. New legal settlements to determine data relations between individuals, content industries, and information-processing businesses are in the making.
Through the lenses of ‘Regulatory Capitalism’ (Braithwaite, 2008; Levi-Faur, 2005) and ‘Economization’ (Çalışkan and Callon, 2009), this paper analyses how arguments made by scholars, businesses, judges, and regulators in relation to AI data governance participate in strategic disputes among stakeholders to differentially commodify and decommodify datasets at different points in the Foundation Model and Generative AI supply chain. The paper outlines two examples of this dynamic. In relation to fine-tuning datasets, scholars have argued for dataset transparency and provenance tools (Gebru et al., 2018; Pushkarna et al., 2022; Jernite, 2023; Longpre, 2023), as well as certifications and assurances around labour and individual consent (Matus and Veale, 2021), to prevent discrimination, exploitation, and other harms from machine learning. Following the regulatory studies literature on supply chains (e.g., Parker et al., 2017), this paper articulates how such transparency efforts, in combination with light-touch regulatory approaches, also function to define and qualify datasets as a market commodity and input for ‘Safe and Responsible AI’, able to circulate through new platformed markets.
On the other hand, the paper describes endeavours to decommodify massive pre-training datasets by framing the “open web” as a type of data commons available for commercial appropriation. New relationships and forms of coordination are developing between technology firms and non-profits like Common Crawl and LAION that work to democratize data, supported by scholarly efforts to frame web-data as a type of open data commons (Huang and Siddarth, 2023; Weyl, 2021). These moves are being resisted by companies like Twitter, Reddit, and Stack Overflow, which are taking steps to price their data holdings. Similarly, media organisations like Getty Images and the New York Times, as well as private individuals, are taking legal action over technology firms’ use of their professionally produced content and personal information, deploying various arguments to recommodify data and to stage its processing into datasets as unfair appropriation (Andersen v. Stability AI Ltd; Authors Guild v. OpenAI, Inc.; Getty Images (US), Inc. v. Stability AI, Inc.; Leovy v. Google LLC; New York Times Co. v. Microsoft). Scholarship is highlighting how these contests are unfolding against a background of doctrinal instability (Samuelson, 2023; Henderson et al., 2023; Novelli et al., 2024; Luchi, 2023). The paper draws out the political-economic stakes of those normative positions as legal regimes that have long managed markets for personal data are reconceptualized and redeployed in efforts to commodify, decommodify, economize, and platformize datasets and AI supply chains.