Geometries of Data: From Literacy to Numeracy in American Governance
My book project uses archival methods to examine computational media from a literary perspective, providing a genealogy of big data, predictive analytics, and other sites of entanglement between computation, legibility, and legislation. It pays particular attention to the grammar of data collection, processing, storage, and transmission. That grammar in large part constitutes the problematic space in which power is exercised on the body of the citizenry. The project examines the processes by which the state and the public are formalized in order to become legible, a formalization that both delimits the contours of legislation and affords its execution.
In one sense then, this project is a historical chronicling of code as the bedrock of contemporary literacy, legibility, and law. In another sense, it is a history of the various technological instantiations of the statistical concept of ‘the long run,’ in which individual anomalies become legible commonalities at large enough scales. It is thus also a history of the tendency towards the ubiquitous collection, processing, transmission, and storage of data (and metadata) for future operations of governance. And lastly, this is a history of the spatial orientation of big data, and the subsequent biases that spatiality introduces into computation.
This paper argues that the process behind Google's Knowledge Graph constitutes a machinic rhetoric, by which increasingly autonomous machines are capable of producing their own discursive knowledge-formations, which have aesthetic, ethical, and political implications. In particular, the Knowledge Graph results in what I term the n-arization of thought, which delimits the space of invention and knowledge-production to that which can be made to fit the pattern of its data structure. Graph data is most often structured by what is termed a “triple”: a relation between two entities, stored in a database and most commonly understood as a subject-predicate-object statement. Here, the existence of entities and the relations between them are quite literally dependent upon their indexability. The first part of this paper works to adapt rhetorical theory to the critique of new media, and, in particular, posits the existence of a machinic rhetoric. Parts two through five examine the technologies behind Google’s Knowledge Graph, moving from the semantic web, to Pregel, to Information Extraction web crawlers, to Google’s TextRunner in particular. The sixth and final part extends an initial critique of Google’s Knowledge Graph and articulates some of its potential implications.
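To make the "triple" concrete for readers unfamiliar with graph databases, the subject-predicate-object structure can be sketched as follows. This is an illustrative toy model only, not Google's implementation; the entities and predicate names are hypothetical. Note how a query can only return what has already been indexed as a triple, which is the limitation the paper critiques.

```python
# A minimal in-memory "triple store": each fact is a
# (subject, predicate, object) statement. All names are hypothetical.
triples = [
    ("Ada Lovelace", "occupation", "mathematician"),
    ("Ada Lovelace", "collaborated_with", "Charles Babbage"),
    ("Charles Babbage", "designed", "Analytical Engine"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Everything "known" about an entity is exactly the set of indexed triples:
ada_facts = query(subject="Ada Lovelace")
```

Any knowledge that cannot be flattened into this subject-predicate-object pattern simply has no place in the store, which is the sense in which existence here depends on indexability.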
This article offers a critical examination of contemporary graph databases, such as Google’s Knowledge Graph, from the perspective of media theory, philosophy of difference, and epistemology. It argues that the fundamental data structure of the ‘triple,’ in essence a subject-predicate-object statement, constitutes a problem immanent to the database itself. The article begins with a brief meditation on numerical mediation before examining the emergence of Knowledge Graph through Google’s research publications. It then demonstrates that a logic of representation underlies all graph databases, one that operates similarly to Aristotle’s theory of perception and categorization. Drawing on Gilles Deleuze’s criticism of Aristotle, it argues that graph databases fall into similar traps of identity and representation and are unable to understand difference in itself. In closing, it offers an initial diagnosis of the limitations of graph databases, including an unbridgeable distance from the discovery and invention of the new.
This article offers theoretical and methodological demarcations for media genealogy, which operates in the work of each scholar interviewed for a special issue of The International Journal of Communication. We first examine the limitations of the media archaeological method in the work of Friedrich Kittler, Wolfgang Ernst, and Siegfried Zielinski. We later provide an outline of what media genealogy might look like, drawing on the work of our interviewees.
In 2014, the anonymous group Uncertain Commons published Speculate This!, a critical analysis of big data and predictive analytics that operates as a manifesto and a call to envision a new future. This review essay argues that the future they ask us to envision will be limited – specifically by their articulation of possibility and potentiality – and that Gilles Deleuze’s concept of difference in itself is better equipped to critique, resist, and escape capitalist forms of speculation. Furthermore, we warn against any critique that seeks to expose the totality of speculation as if it were a homogenous global mechanism. Instead, we conclude, further critique of speculation must continue to develop concepts, theories, and methods to examine individual speculative apparatuses in their conjunctural emergence and contextual embeddedness.
In this conversation between Dr. Paul N. Edwards, Professor in the School of Information and the Department of History at the University of Michigan, and Alexander Monea, Doctoral Candidate in the Communication, Rhetoric, & Digital Media program at North Carolina State University, Professor Edwards addresses numerous concerns related to the history and critical analysis of media and technology. In particular, Professor Edwards addresses archival methodology and interdisciplinarity in media studies, theories of technological momentum and infrastructural innovation, the political stakes of historiographic inquiry in terms of media and technology, the importance of the work of Michel Foucault, the production of the self or subjectivization, as well as the contemporary implications of his earlier work on the history of computation and more recent work on climate science.
This article examines the limitations of critical code studies and critical making. In particular, it argues that both seek methodological rigor in the practice of manipulating technology (be it hardware, software, code, or any other material instantiation of computation). In so doing, they tether critique to the transparency of its objects and risk evacuating the humanities of their capacity to speak to the most important technological conjunctures. This article subsequently outlines a methodology for producing anexact, yet rigorous analyses of blackboxed computational technologies. It looks to discursive analysis to facilitate interdisciplinary translation, and to archival techniques for constituting heterogeneous archives of materials and texts relevant for speculation. In closing, it articulates a vision for how this work might be extended and calls upon other scholars to help flesh out more adequate methods for extending the humanities tradition of critique into the future.