Who uses the computational medium? - Academics
Artists & Academics
Dramaturgical Theorizing, by Jonathan H. Turner, is an extensive definition of the dramatic encounter. As a taxonomy of human-human interaction, it is historically grounded rather than demonstrative (in the manner of a script or novel): an academic argument about the arousal of emotion and its subsequent regulation by culturally prescribed rituals and labor, which implies a space for their strategic deployment.
Yet by presenting the rules of interaction rituals, the chapter invites an artistic response: to test narrative fiction against these rules, or to act out what has been written. It is tempting to give a computer these rules and to write against them, as Italo Calvino once proposed in an Oulipian take on the murder mystery (Calvino, 1986).
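What “giving a computer these rules” might look like, as a minimal sketch: the beat vocabulary and the two rules below are toy inventions of mine, not items from Turner’s taxonomy or Calvino’s algorithm.

```python
# A few hypothetical interaction-ritual rules encoded as predicates over a
# sequence of narrative beats. The violations a draft produces are exactly
# the places where one might choose to "write against" the rules.

BEATS = ["arrival", "insult", "departure"]  # a hypothetical draft scene

def opens_with_greeting(beats):
    """Hypothetical rule: an encounter opens with a greeting ritual."""
    return bool(beats) and beats[0] in {"greeting", "arrival"}

def arousal_is_repaired(beats):
    """Hypothetical rule: arousal (an insult, here) must eventually be
    followed by a culturally prescribed repair ritual (an apology)."""
    return all("apology" in beats[i + 1:]
               for i, beat in enumerate(beats) if beat == "insult")

RULES = [opens_with_greeting, arousal_is_repaired]
violations = [rule.__name__ for rule in RULES if not rule(BEATS)]
print(violations)  # ['arousal_is_repaired']: a site of dramatic tension
```

Each unrepaired violation marks a spot where the fiction departs from the ritual grammar, which is one way to read what writing against the rules means.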
Jonathan H. Turner. “Dramaturgical Theorizing”. Ch. 7 in Theoretical Sociology: A Concise Introduction to Twelve Sociological Theories. SAGE Publications, 2013.
Italo Calvino. “Prose and Anticombinatorics”. In Oulipo: A Primer of Potential Literature, ed. Warren F. Motte, 152. 1986.
Meshworks, Hierarchies, and Interfaces, by Manuel De Landa, is a philosophical perspective on the user experience of computer interfaces, as they reflect and are reflected by organizational structure. It is a warning directed toward academics creating or deploying human-computer interfaces, who may be tempted to embrace domain-agnostic AIs that filter and transform information based on the past behavior of their users.
In describing the remarkable homogenization of post-industrial skills, languages, and other technologies, De Landa invokes the historical conditions that produced such inflexible workplaces, whose processes were never agent-based (i.e. assigned to relatively autonomous artisans) in the first place. This reads as a critique of the professional working environment, though it offers no easy way out.
It appears to me that crafting tools for other workers to use is an artisanal task, and so De Landa’s argument should fall into this quadrant. Although it grants the composition of the underlying network primacy over its center of control, ultimately this piece is concerned with the fate of institutions, and must therefore be aimed at individuals with the capacity to nudge them.
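To see why composition can matter more than the center of control, here is a toy contrast of my own construction, assuming nothing from De Landa’s text beyond the two shapes it names:

```python
# The same handful of nodes wired as a hierarchy (a tree routed through a
# hub) and as a meshwork (a ring with no center). Removing the hub
# disconnects the hierarchy, while the meshwork's composition lets it
# route around any single loss.

def connected(graph, removed):
    """Breadth-first check that the surviving nodes all reach each other."""
    nodes = set(graph) - {removed}
    seen, frontier = set(), [next(iter(nodes))]
    while frontier:
        node = frontier.pop()
        seen.add(node)
        frontier += [n for n in graph[node] if n in nodes and n not in seen]
    return seen == nodes

hierarchy = {"hub": ["a", "b"], "a": ["hub", "c"], "b": ["hub", "d"],
             "c": ["a"], "d": ["b"]}
meshwork = {"a": ["b", "e"], "b": ["a", "c"], "c": ["b", "d"],
            "d": ["c", "e"], "e": ["d", "a"]}

print(connected(hierarchy, removed="hub"))  # False: the tree falls apart
print(connected(meshwork, removed="a"))     # True: the ring routes around
```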
Manuel De Landa. “Meshworks, Hierarchies, and Interfaces”. 1998.
(I wonder if it’s a problem that these references are all trapped in the academic literature?)
Academics & Professionals
Trending: The Promises and Challenges of Big Social Data, by Lev Manovich, is a perspective on data science from a part of academia whose methods have been slower to adapt to quantification.
The first challenge raised by this piece is that only professionals, not academics in general, have access to the full range (or nearly the full range) of data collected by social media companies. A fundamental property of our landscape of stakeholders is that professionals have more social and financial capital than (grant-funded) academics, who in turn have more than artists. Autodidacts can fall anywhere, depending on their day job, partner, or savings.
The second challenge is that people project different versions of themselves on social media. Different networks have different characters, ranging from the artists of Tumblr and Twitter (including not a few indie game developers) to the professionals of LinkedIn and Twitter (including games journalists), to name a few.
The third challenge is that data doesn’t come with context, and has often been abducted from the concerns of, and the stakes for, the communities generating it.

The fourth challenge is that the skill to converse with cultural artifacts and the skill to collect them en masse don’t often come together. Connecting with the needs of stakeholders is a task that visualization researchers grapple with often, but rarely are the stakeholders also the objects of the data, except in the subfield of data art (Lupi, 2018) and other approaches to narrativization.
A Nested Model for Visualization Design and Validation, by Tamara Munzner, is a taxonomy of validation methods, intended to disambiguate ‘inner-loop’ validation from ‘outer-loop’ validation for interactive artifacts. Because a lot of inner-loop work has to be redone if any of the downstream outer-loop validations fails, not many projects in the literature reach widespread adoption, or even sustained external use.
The paper is essentially an internal guide for members of an institution on how to recognize, by internal or external review, to what extent a project (yielding an interactive artifact) has been validated so far. For institutions experiencing worker turnover, this amounts to a data structure representing collective memory.
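A minimal sketch of that reading, under loud assumptions: the four nested levels paraphrase the paper, while the record layout and the invalidation rule are hypothetical glue of mine, not Munzner’s.

```python
from dataclasses import dataclass, field

LEVELS = ["domain characterization", "data/task abstraction",
          "encoding/interaction design", "algorithm design"]  # outer -> inner

@dataclass
class ValidationRecord:
    immediate: dict = field(default_factory=dict)   # inner-loop checks, made while building
    downstream: dict = field(default_factory=dict)  # outer-loop checks, made after release

    def at_risk(self, failed_level):
        """Work at every level nested inside a failed outer-loop
        validation is put back into question."""
        return LEVELS[LEVELS.index(failed_level) + 1:]

memory = ValidationRecord()
memory.immediate["encoding/interaction design"] = "notes from expert review"
memory.downstream["domain characterization"] = "FAILED: adoption study"
print(memory.at_risk("domain characterization"))
# -> ['data/task abstraction', 'encoding/interaction design', 'algorithm design']
```

Whatever the encoding, the point survives turnover: a newcomer can query the record instead of a departed colleague.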
For communities of interest, this isn’t necessary, because people have enough bandwidth to seek out the original artifact and mess around with it. (Which implies a different constraint: how easy the artifact is to deploy.) Moreover, the evaluation is not concerned with the narrative of each artifact (including its potential cultural impact), because it abstracts over a vast possibility space of stakeholders.