Author Blogs

what is digital professional communication? (the video)

Digital Digs (Alex Reid) - 20 April, 2018 - 10:42

Having asked students in my classes to experiment with video, I took on the task myself to make a video where I think about this question and the course I’m teaching in the fall. At this point I could offer various caveats regarding first attempts and such but to be honest I had quite a bit of fun making this and getting back to digital composing (though you can’t really go back, especially not with digital media).

Categories: Author Blogs

what makes professional digital communication interesting?

Digital Digs (Alex Reid) - 18 April, 2018 - 12:28

First, a tangent. Over the last week I’ve been part of a listserv conversation that reprises the now familiar question about how English Studies majors should change (or not). As I noted there, this has become a familiar genre of academic clickbait, like this recent buffoonery from the Chronicle of Higher Ed. Among other things I pointed out that, from a national perspective, the number of people earning communications degrees (which was negligible in the heyday of English majors 50-60 years ago) surpassed the number getting English degrees around 20 years ago. Since then Communications has held a fairly steady share of graduates as the college population grew, while English has lost its share and in recent years even shrank in total number, as this NCES table records. In short, students voted with their feet and, for the most part, they aren’t interested in the curricular experience English has to offer (i.e., read books, talk about books, write essays about books). Anyway, in such conversations the prospect of teaching professional writing, technical communication, and/or digital composing is often raised. The predictable response is a rejection of such curricula on the grounds that it is instrumentalist, anti-intellectual, and generally contrary to the values of English Studies, both in terms of literary studies and rhetoric/composition. Though I have no interest in defending the kind of work I do, there’s really a more important response. First, nothing is going to “save” English. It’s over. Second, by over, I mean it will just be small, serving 1-2% of majors. It will probably remain larger than Math or Philosophy, and no one would say those fields aren’t valuable. English can be small and valuable.

That said, I do find amusing the evidence such conversations (and clickbait articles) offer of the narrow utility of the intellectual capacity afforded by disciplinary thinking in English Studies. For example, I’m teaching a course called professional digital communication this semester and look to do so again in the fall in an online format. What is it? I suppose you could imagine it to be an instrumental tour of how-to’s for various business-related digital genres: how to make a brochure website, how to design a PowerPoint slide, how to use desktop publishing to write a report, how to make a professional web portfolio, how to write a professional email, and so on. But think about these three words. Professional. Digital. Communication. What are they? Asking what communication is opens the entire field of rhetorical study. And digital? What does that comprise, from technical answers to histories and cultural values/associations? How does “digital” modify “communication”?

However, I actually think it is “Professional” that is the most confounding. At first glance, it should be simple. It should just mean the kind of communication (i.e., the genres, I guess) that “professionals” use (and, in this case, are “digital” somehow). Well… basically all workplace genres are “digital somehow,” even if that only means they are composed in MS Word. I suppose the implication is that professional also indicates some technical facility with digital tools beyond the typical office suite. These can, somewhat clumsily, be divided into two categories. The first contains genres that have been softwarized (e.g., reports, technical manuals, printed materials like brochures) but existed as genres 40 years ago. The second contains born-digital genres and genres that have been significantly transformed by their softwarization (e.g., the way an instruction manual might become a how-to video/screencast or a brochure becomes a brochure website). So the first category might get one thinking about the role of XML/DITA in organizing content in large technical databases or visual communication principles deployed in InDesign or similar desktop publishing software. The second category is more elusive as the genres are fluid: video, podcast, social media, game, infographic, mobile app, website. Of those, maybe only the last is a stable genre. Furthermore, the changing nature of the work carried out by these professionals, the “adhocracies” in which they often work (to use Clay Spinuzzi’s term), and the continual churn of the technology make it very difficult to define “professional.” In short, there’s a lot to investigate in those three words, which is why there is extensive scholarship on these subjects from faculty in rhetoric, professional-technical communication, and communications.

But, in my experience, the most challenging part of running a course like this is the pedagogical shift into learning through cycles of experimentation and reflection. Part of what I do is say “We will read scholarship from these fields for the purpose of understanding how it might inform the development of our own practices.” So we do read and talk. But mostly we are engaged in experimental composition. Through our experimentation with various digital tools and genres we aim to understand what “professional digital communication” might be, with the hope that our readings provide some useful terminology and apparatus for doing so.

And for me that’s what makes professional digital communication interesting. It’s not so much reading the scholarship and writing scholarly responses. (Though that interests me too; I am an academic after all.) Instead, it’s the doing of it, the composing, and the insight those experiences give me into the research we do and vice versa.

Categories: Author Blogs

The empty space of the academic presentation

Digital Digs (Alex Reid) - 9 April, 2018 - 17:02

So there’s a fairly good chance you know more about Casey Neistat than I do. He’s something of a YouTube sensation with over 9 million subscribers. He also had an HBO series (I guess you’d call it). In my “copious spare time,” I’ve been hunting around, trying to catch up on the world of digital composing that passed me by while I was sentenced to several years as a WPA, and Neistat is someone I’ve only recently (and belatedly) encountered. Below is today’s episode in Neistat’s newly revived daily vlog on his efforts to build something (not quite sure what) in a space at 368 Broadway in Manhattan.

But this post isn’t really about that. It’s about this particular episode from an angle to which many academics could relate (is that the right preposition, “to”?).
This episode sees Casey traveling to Montreal to give a presentation. We see nothing of the presentation. Instead we see the antics of travel. In particular we see his struggles with his motorized scooter-luggage combo.

This relates to my earlier post on “meatspace meetspace.” (BTW, I love how my browser complains that meetspace isn’t a word but is mum about meatspace.) Whether you’re headed to a conference where you’re one of 100s giving presentations or giving an invited talk somewhere, your primary experience is not about the presentation; it’s about all the miscellanea around it: the travel, the hotel, the food, socializing, etc. That’s what this particular episode captures.

I may regret saying this, but I think I’d love to see thousands of 10-minute videos of my colleagues’ travails going back and forth to some BS conference. I’m thinking those would be far more compelling, far more likely to convince me to take interest in their work, than 15-20 minutes of their reading a paper with bullet-point slides.

Categories: Author Blogs

On the importance of deep mixture density networks and speech synthesis for composition studies

Digital Digs (Alex Reid) - 28 March, 2018 - 08:11

Eh? What’s that?

I’m talking about AI approaches to the synthesis of speech on your smartphone and related devices. I.e., how does Siri figure out how to pronounce the words it’s saying? OK. But what does that have to do with us?

Another necessary detour around the aporias of disciplinary thought… This is really about recognizing the value of computer simulation in articulating the possibility spaces from which reality emerges. Put in the most general terms, if you want to know why one thing is composed rather than another, why there is something rather than nothing, you need a way of describing how one particular thing emerges from the virtual space of possibilities where many other things were more or less probable. Simulation is a longstanding concept in the humanities, mostly through Baudrillard, but the intensification in computing power changes its significance. As DeLanda writes, in Philosophy and Simulation, 

Simulations are partly responsible for the restoration of the legitimacy of the concept of emergence because they can stage interactions between virtual entities from which properties, tendencies, and capacities actually emerge. Since this emergence is reproducible in many computers it can be probed and studied by different scientists as if it were a laboratory phenomenon. In other words, simulations can play the role of laboratory experiments in the study of emergence complementing the role of mathematics in deciphering the structure of possibility spaces. And philosophy can be the mechanism through which these insights can be synthesized into an emergent materialist world view that finally does justice to the creative powers of matter and energy.

So this post relies at minimum on your willingness to at least play along with that premise. It is, as DeLanda remarks elsewhere, an ontological commitment. You may typically have different commitments, which of course is fine. Within a realist philosophy like DeLanda’s, the value proposition for an ontological commitment is its significance rather than its signification, by which he means it’s less about its capacity to represent/signify Truths and more about its capacity to create capacities that make a difference (its significance).

In this case, we have the simulation of speech. Basically what happens (and basic is the best I can muster here) is that Siri’s voice is constructed from recorded human speech. That speech is divided up into constitutive sounds, and the purpose of speech synthesis is to figure out how to recombine those sounds to make natural-sounding speech. [n.b. A common error in this conversation is to identify the semblance between a computer and a human at the wrong level: to assert that the human brain is like a computer. However, I don’t think anyone would suggest that humans operate by having a database of sounds that they then have to probabilistically assemble in order to speak.] While humans don’t form speech this way, we do obviously have a cognitive function for speaking that is generally non-conscious (exceptions being when we are sounding out an unfamiliar word, learning a new language, etc.). Generally we don’t even “hear” the words we read in our minds (though I bet you’re doing it right now, just like you can’t not think of a pink elephant).
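To make that recombination problem concrete, here is a toy sketch of concatenative unit selection. Every phone, feature, and number below is invented for illustration; real systems model far richer acoustic features and, as in the research discussed here, learn the costs with deep neural networks.

```python
# Pretend unit database: several recorded variants of each sound, each
# reduced to a single number standing in for pitch/duration/spectrum.
UNITS = {
    "h": [0.2, 0.5],
    "e": [0.1, 0.4, 0.9],
    "l": [0.3, 0.6],
    "o": [0.5, 0.8],
}

def synthesize(phones, targets):
    """Pick one recorded variant per phone so the sequence stays close to
    the predicted targets (target cost) while adjacent units splice
    smoothly (concatenation cost). Solved exactly by dynamic programming."""
    # best maps each candidate unit for the current phone to the cheapest
    # (total_cost, path) that ends on that unit.
    best = {f: (abs(f - targets[0]), [f]) for f in UNITS[phones[0]]}
    for phone, target in zip(phones[1:], targets[1:]):
        nxt = {}
        for f in UNITS[phone]:
            candidates = []
            for prev_f, (cost, path) in best.items():
                join_cost = abs(f - prev_f)      # smoothness of the splice
                target_cost = abs(f - target)    # fit to predicted prosody
                candidates.append((cost + join_cost + target_cost, path + [f]))
            nxt[f] = min(candidates)
        best = nxt
    total_cost, path = min(best.values())
    return path

print(synthesize(["h", "e", "l", "o"], [0.2, 0.3, 0.4, 0.5]))
```

Notice that nothing in the procedure consults the meaning of the utterance; it optimizes acoustic fit alone.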

One thing that is clear in speech synthesis is that the process that seeks to approximate the sounds of “natural speech” does not know the meaning of the words being spoken or need to know that the sounds being made are connected to meaning or even that meaning exists. It is a particular technological articulation of Derrida’s deconstruction of logo-phonocentrism, whose heritage he describes as the “absolute proximity of voice and being, of voice and the meaning of being, of voice and the ideality of meaning” (Of Grammatology). Diane Davis takes this up as well, writing “it is not only that each time ‘I’ opens its mouth, language speaks in its place; it is also that each time language speaks, it immediately ‘echos,’ as Claire Nouvet puts it, diffracting or laterally sliding into an endless proliferation of ‘alternative meanings that no consciousness can pretend to comprehend’” (Inessential Solidarity). None of that is to suggest that meaning does not exist or even that the words Siri speaks are meaningless. No, instead it leads one toward a new task: describing how the mechanisms (or assemblages, to stick with DeLanda’s terms) for signification and significance are separate from, though certainly capable of relating to, the assemblages by which speech is composed.

But getting back to speech synthesis. I’ve been clawing my way through a couple pieces on this subject like this one from Apple’s Machine Learning journal and this one coming out of Google research. This is highly disciplinary stuff and at this point my understanding of it is only on a loose conceptual level. However, I’m trying to take seriously DeLanda’s assertion regarding “the role of mathematics in deciphering the structure of possibility spaces,” as well as his claim that “philosophy can be the mechanism through which these insights can be synthesized into an emergent materialist world view that finally does justice to the creative powers of matter and energy.” It is that last part that I am pursuing and which, at least for me, is integral to rhetoric and composition.

Here however is my hypothesis. Despite the arrival (and digestion) of poststructuralism in English Studies in the last century, rhetoric and composition remains a logo-phonocentric field. The digital age (or software culture as Manovich terms it) has put serious pressures on those ontological commitments (and that’s what logo-phonocentrism ultimately is, an ontological commitment). The mathematical description of the possibility spaces of speech synthesis and the subsequent simulation of speech are just one small part of those pressures, a part so esoteric as to be difficult for us to wrap our minds around.

But what happens when we start disambiguating (decentering) the elements of composition that we habitually unify in the idea of the speaking subject? To return to DeLanda here as I conclude:

The original examples of irreducible wholes were entities like “Life,” “Mind,” or even “Deity.” But these entities cannot be considered legitimate inhabitants of objective reality because they are nothing but reified generalities. And even if one does not have a problem with an ontological commitment to entities like these it is hard to see how we could specify mechanisms of emergence for life or mind in general, as opposed to accounting for the emergent properties and capacities of concrete wholes like a metabolic circuit or an assembly of neurons. The only problem with focusing on concrete wholes is that this would seem to make philosophers redundant since they do not play any role in the elucidation of the series of events that produce emergent effects. This fear of redundancy may explain the attachment of philosophers to vague entities as a way of carving out a niche for themselves in this enterprise. But realist philosophers need not fear irrelevance because they have plenty of work creating an ontology free of reified generalities within which the concept of emergence can be correctly deployed. (Philosophy and Simulation)

I would suggest an analogous situation for rhetoricians. Perhaps we fear irrelevance in the face of “reified generalities” that form our disciplinary paradigms. What happens when not just “voice” or “speech” is distributed but expression itself becomes described as emerging within a distributed cognitive media ecology?

In any case, that’s where my work is drifting these days and it was useful for me to glance back toward the discipline here to get my bearings vis-a-vis some future audience I hope to address.

Categories: Author Blogs

distributed deliberation and Cambridge Analytica

Digital Digs (Alex Reid) - 19 March, 2018 - 10:11

One of the major stories of the weekend has surrounded the interview with Christopher Wylie, former employee turned whistleblower of Cambridge Analytica. Here’s that interview if you haven’t seen it.

It’s good to see this story getting attention, but it’s also something we’ve basically known for a while, right? For example, here’s a NY Times op-ed from right after the election talking about how the Trump campaign used the data from Cambridge Analytica to target voters. Or you can watch this BBC news interview with Theresa Hong, who was Trump’s Digital Content Director, which is from last August. In the interview, she gives a tour of the office where she worked, right next to where the Cambridge Analytica folks were working. If you watch that interview, right before the 3-minute mark she explains how people from Facebook, Google, and YouTube would come to their office and help them. They were, in her words, their “hands-on partners.” Unless she’s straight up lying about that, which would seem pretty weird in the context of a video where she otherwise gleefully recounts her role in an information warfare campaign, then it’s essentially impossible to believe that Fb didn’t know what Cambridge Analytica was doing.

The funniest part of the recent news cycle is when the newscaster turns to the expert and says “do you think this affected the outcome of the election?” Hmmmm… do you think the Trump Campaign spent $100M+ ($85M on Fb advertising alone) in order to not affect the outcome?

So that’s the news. Now, here’s my part. Let’s be good humanists and start with a straight dose of Derrida and the notion of the pharmakon. In considering the pharmacological operation of media, beginning with writing, one might investigate the cognitive effects emerging from technologies. As I’ve written about before, Mark Hansen picks up on this in Feed Forward, and it is really the thesis of that book:

Like writing— the originary media technology— twenty-first-century media involve the simultaneous amputation of a hitherto internal faculty (interior memory) and its supplementation by an external technology (artifactual memory). And yet, in contrast to writing and all other media technologies up to the present, twenty-first-century media— and specifically the reengineering of sensibility they facilitate— mark the culmination of a certain human mastery over media. In one sense, this constitutes the specificity of twenty-first-century media. For if twenty-first-century media open up an expanded domain of sensibility that can enhance human experience, they also impose a new form of resolutely non-prosthetic technical mediation: simply put, to access this domain of sensibility, humans must rely on technologies to perform operations to which they have absolutely no direct access whatsoever and that correlate to no already existent human faculty or capacity.

If you’re wondering what a “non-prosthetic technical mediation” might be, well one example is the underlying technical operations that drive this Cambridge Analytica story. Non-prosthetic suggests, contra McLuhan, media that are not “extensions of man” (sic).

Think of it this way (and this is a little slapdash but should get the idea across). There’s always a price to be paid to gain access to new capacities. As Hansen suggests, with writing you give up interior memory for artifactual memory. With photography and then film we extend artifactual memory into the visual but at the cost of access to unmediated experiences. Think of selfies or Don DeLillo’s “most photographed barn in America,” or Walter Benjamin’s remark that film introduces us to unconscious optics. Hansen’s 21st century media, what I would think of as networked, mobile digital media, offers a range of capacities (I won’t attempt to enumerate them) but at what cost? Basically everything. We give the network everything that we know how to give.

It strikes me that you can think of the development of media technologies as an incremental distribution of human cognition. This only works because cognition is always already relational and distributed. I.e., the biological capacity for thought emerges from an existing environmental/ecological platform–we think only because there are things to think about and with. I don’t want to go down the rabbit hole right now, but the ultimate conclusion is that we have become, are becoming, interwoven with our digital selves: politically, psychologically, affectively, cognitively and so on in just about any dimension you can imagine.

And as the saying goes, with these Cambridge Analytica revelations we aren’t seeing the beginning of the end of this story but rather the end of the beginning. We can make laws, create tools, and try to educate people, but, without falling for techno-determinism, human cognitive capacities are shifting and nothing short of turning off history is going to change that. I don’t think the nature of the shift is pre-determined, but we cannot go backward and we cannot stand still.

Distributed deliberation is all over this story. It’s in the mechanisms of the apps in which FB users answered questions to figure out which Hogwarts house they belonged in (or whatever). It’s in the processes by which that data was collected and transformed into psychographic profiles. It’s in the way those profiles were organized and targeted with specific messages by the Trump campaign. It’s in how those messages were promoted and made visible to those users by social media platforms like Facebook. It’s in the way those messages were then further spread through those users’ networks of friends. Fundamentally, distributed deliberation, particularly as it applies to digital media ecologies, is the way nonhuman information technologies–from bots to server farms to algorithms–participate in the evaluation of information (making judgments for us) and in the making and circulating of deliberative arguments. They don’t do it alone. Humans are part of the system. Sometimes, certainly as in the case of the figures in these news stories, they participate in very intentional ways. However, that’s just dozens of people. Tens or hundreds of millions more participated by sharing data that was harvested unbeknownst to them; millions were targeted by these messages; thousands worked for companies doing innocent jobs, helping with customer service or maintaining servers, without whom none of this was possible (you could keep “thanking” people until the orchestra at the Oscars passed out).

We are not helpless in the face of this, but we do need new rhetorical practices. And that may sound hopelessly academic and disciplinarily solipsistic, but I would argue that it is rhetoric that gave us some modicum of agency over speech and writing and other media over time. And that’s what we need now: to invent new rhetorical tools and capacities.

Categories: Author Blogs

4C’s and the rhet/comp slasher

Digital Digs (Alex Reid) - 17 March, 2018 - 12:00

This is not the working title for my academic murder mystery, so feel free to take it if you like.

No. It’s about a growing conversation over the role of rhetoric in composition studies, and the emergence–at least as perceived by some–of 4Cs as a composition studies conference and not (or at least less so) a rhetoric conference. In phrasing it that way, I hope that I am emphasizing that the trends at the conference are an effect of a larger disciplinary (or is it now inter-disciplinary?) evolution.

Undoubtedly it’s been a long time coming. Some might say that it is a division that has been baked into composition from its formation more than a century ago. Or we might look at the 1980s, when the solidification of rhet/comp in the form of PhD programs brought with it debates over the historiography of the field and the “cultural turn.” [I’d also point to the arrival of PCs, the internet, and what Manovich terms the softwarization of culture, but that’s a subject for a different day.] And since then we’ve had a proliferation of specializations, which I think you could find in the language of job ads, the networks of journal article citations, the birth of journals, book series, and conferences, and so on. My department’s new certificate in Professional Writing and Digital Communication is one small example of that. It’s certainly not composition studies or pedagogy. It’s not rhetoric. It’s not cultural studies. It’s not even technical communication exactly. Sure, it touches all those things–all these fields abut one another to some degree–but it’s something else.

The current conversation, as I’ve encountered it, is that rhetoric (e.g., history of rhetoric, rhetorical theory, etc.) has slowly disappeared from CCCC. Meanwhile RSA membership has expanded. I’m not sure if the conference is growing, but I do think there’s a sense that some scholars who might have viewed CCCC as their home conference and organization a couple decades ago now look to RSA instead. RSA now has about the same number of panels as Cs.

I honestly don’t know if that’s a good thing or a bad thing. You might compare this with MLA. The MLA conference is clearly interdisciplinary. It doesn’t include much rhetoric, but it does include language departments along with English literary studies. These days the conference is about half the size it was when I was first going on the market. I think that may be because there are fewer jobs and fewer institutions interviewing at MLA rather than there being fewer panels. But with attendance in the 5-6K range, it’s about 50% larger than 4Cs, which I think is in the 3-4K range. MLA is one day longer. It has over 800 sessions. 4Cs this year has over 500, so again MLA is roughly 50% larger. I guess if you tack on ATTW, which always runs the day before 4Cs, then you get closer in size. I suppose my point is that, hypothetically, the conference could grow to MLA size and perhaps address this trend if it wanted. That’s hypothetical though. I’m sure there would be many logistical challenges to doing so.

More important though is the question of whether or not we need to be all in the same space.

Just thinking about this from my own scholarly perspective (and figuring many have analogous situations), there’s one part of my work that is really outside of rhet/comp or even English studies that draws on media study, digital humanities, new materialist philosophy, and tangentially a bunch of other stuff like cognitive science, engineering, etc. And then inside of rhet/comp there are dozens of people whose work is very close to mine and hundreds more that are nearby or coming into digital rhetoric as graduate students. It’s probably impractical to keep close track of all the scholarship that self-identifies as either “digital rhetoric” or “computers and writing.” It certainly is for me when I’m also following these other extra-disciplinary conversations. So really in doing my scholarly work, my focus necessarily has to start here.

If one thinks the primary purpose of a conference is to learn about what other people who are doing scholarship like yours are up to and then do some networking with them, then, at least for me, 4Cs is pretty inefficient. There may be nearly 4K attendees at Cs, but of the ~100 people I’d want to catch up with–people whose research impacts mine directly–fewer than 20 were at the conference. If you think about a conference like Cs as a place to get some slice of the broader picture of composition studies, then maybe it works well for that. IDK. I mean, do people do that? I know many people say they always try to go to at least one panel that’s sort of random. I do that too, including this year. And that’s fine, but really only as a supplement to that primary task of connecting to people who aren’t random.

Again I want to reiterate that this isn’t a criticism of Cs. I certainly don’t want the job of organizing that beast. I have zero interest in making an argument along the lines of there should be more people like me at Cs! I think that’s an unworkable argument at scale for everyone who identifies as rhet/comp.

I actually think these trends point to a more interesting problem. I think we may be in a paradigmatic crisis of sorts in the sense that I don’t know what we share as scholars–methods, objects of study, foundational assumptions, research questions? And we don’t need to share those things, but if we don’t then in what sense are we connected? By a shared history, I guess, but I don’t know if that’s enough, especially as those shared moments are receding from living memory.
Categories: Author Blogs

Composing in and with data-driven media #4C18

Digital Digs (Alex Reid) - 16 March, 2018 - 07:04
[Image: china social credit.jpg. Credit: Kevin Wong, Wired]

[A text version of my presentation from yesterday]

At the beginning of the century, Lev Manovich identified five principles of new media, the last of which he termed “transcoding.” Transcoding describes the interface between cultural and computational layers. In part, transcoding observes that while digital media objects appear to human eyes as versions of their analog counterparts–images, texts, sounds, videos–they are also data and as such subject to all the transformations available to calculation. Such familiar transformations include everything from cleaning up family photos to making memes to share online or even changing the font in a word document. As the quality of video and the computational power available to average users increase, such transformations have also come to include altering videos to make people appear to say things they haven’t said or even putting someone’s head on another person’s body.
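That first half of transcoding can be shown in a few lines. The “image” below is an invented 4x4 grid of brightness values, and “cleaning up a photo” turns out to be nothing but arithmetic on those values (a sketch, not any particular imaging library):

```python
# A media object to human eyes, an array of numbers to the machine.
# The 4x4 grayscale "image" below is invented for illustration.
image = [
    [  0,  64, 128, 255],
    [ 64, 128, 255, 128],
    [128, 255, 128,  64],
    [255, 128,  64,   0],
]

def invert(img):
    """Produce the photographic negative: pure calculation on the data."""
    return [[255 - px for px in row] for row in img]

def brighten(img, amount):
    """Raise brightness, clamping to the displayable 0-255 range."""
    return [[min(255, px + amount) for px in row] for row in img]

# Any "edit" to the picture is a transformation of the numbers.
negative = invert(image)
brighter = brighten(image, 50)
```

The same principle, scaled up to millions of pixels per frame and driven by learned models rather than fixed arithmetic, is what makes the video manipulations described here possible.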

Perhaps unsurprisingly, much of the early exploration with this video technology has been with fake celebrity porn. That’s certainly NSFW, so I’ll leave it to you to investigate at your own discretion. The point is that the kinds of media we are able to compose are driven by our capacity to gather, analyze, and manipulate data.

The other part of transcoding, however, points to the way in which data-driven, algorithmic analysis of user interactions shapes our experience of the web, from Netflix recommendations to trending Twitter hashtags. In recent months, the intersection of these two data-driven capacities–the ability to create convincing fake media and then spread it across online communities–has become the subject of national security concerns and national political debate. I’m not here to talk specifically about unfolding current events, but they do offer an undeniable backdrop and shape the situation in which these rhetorical processes can be studied.

Certainly there will be technical efforts to address the exploited weaknesses in these platforms. However, computers are by definition machines that manipulate data, and, as long as these machines operate by gathering data about users and using that data to create compelling, some might say addictive, virtual environments, there will be ways to exploit those systems. After all, one might say these digital products are designed to exploit vulnerabilities in the cognitive capacities of humans, even as they also expand them.

Within this broad conversation, my specific interest is with deliberation. Classically, deliberative rhetoric deals specifically with efforts to persuade an audience to take some future action and, as Marilyn Cooper observes, “individual agency is necessary for the possibility of rhetoric, and especially for deliberative rhetoric” (“Rhetorical Agency” 426). This is not a controversial claim. Essentially, in order for deliberative rhetoric to work, one’s audience must have the agency to take an action. More generally, deliberation requires a cognitive capacity to access and weigh information and arguments. Regardless of whether those arguments come in the form of logical deductions or emotional appeals, the audience still requires the capacity to hear, evaluate, and act on them. However, in emerging digital media ecologies the opportunities for conscious, human deliberation are increasingly displaced by information technologies. That is, machines make decisions for us about the ways in which we will encounter media. In some respects, one can view this trend as inevitable and benign if not beneficial. Without Google and other search engines, for example, how could any human find information on the web? One might even look at recommendations from a subscription media streaming service like Netflix or an online store like Amazon as genuine, well-intentioned efforts to improve user experience, though clearly such designs also serve corporate interests. Similarly, changes to social media experiences such as Facebook’s massaging of what appears on one’s feed or the automatic playing of videos might improve the value of the site for users or might be deliberate acts designed to sway future user actions. 
Ultimately though, the increasing capacity of media ecologies to record and process our searches, writing, various clicks, and other online interactions (to say nothing of our willingness to have our bodies monitored, from biometrics to our geographic movements) produces virtual profiles of users, which are then fed back to them and reinforced.

To address these concerns, drawing upon a new materialist digital rhetoric, I will describe a process of “distributed deliberation.” This process references the concept of distributed cognition. Distributed cognition is not meant to suggest machines doing “our” thinking for us but rather to describe the observable phenomenon in which humans work collectively, along with a variety of mediating tools, to perform cognitive tasks no individual human could accomplish alone. Distributed deliberation works in the same way. It is useful to think about this in Latourian terms. That is, through the networks in which we participate we are “made to act.” That is not to say that we are necessarily forced to act but rather that we become constructed in such a way that we gain the capacity to act in new ways. For example, through their participation in a jury room or a voting booth citizens are made to deliberate in ways that would not otherwise be possible. However, that is a little simplistic. While we may only be able to vote in that booth, there are many agents pushing and pulling on us as we deliberate. Typically it is far more difficult to discern the direction in which agency flows. As Latour observes, “to receive the Nobel Prize, it is indeed the scientist herself who has acted; but for her to deserve the prize, facts had to have been what made her act, and not just the personal initiative of an individual scientist whose private opinions don’t interest anyone. How can we not oscillate between these two positions?” (An Inquiry into Modes of Existence, 158-9). That is the oscillation between facts demanding certain actions and the agency of the scientist. For Latour, the resolution of this oscillation lies ultimately in the quality of the resulting construction, which, of course, is just another deliberation, and one that requires an empirical investigation, the following of experience.
To put it in the context of my concern, as a Facebook user hovering the mouse over the buttons to like and then share a news story placed into her feed, how do the mechanisms of deliberation swarm together and make the user act? Is the decision whether to share or not a good one? Furthermore, while we can and must pay attention to the experience of the human user, so much of the work of deliberation occurs beyond the capacity of any human to experience directly. As such, in charting distributed deliberation we must also investigate the experience of nonhumans, which will require different methods, and that’s where I will turn now.

Understanding the specific operation of those nonhuman capacities is a task well suited to Ian Bogost’s procedural rhetoric, which he describes as “the art of persuasion through rule-based representations and interactions rather than the spoken word, writing, images, or moving pictures. This type of persuasion is tied to the core affordances of the computer: computers run processes, they execute calculations and rule-based symbolic manipulations” (Persuasive Games, ix). Though Bogost focuses on the operation of persuasive games as they seek to achieve their rhetorical goals through programming procedures, he recognizes that procedural rhetoric has broader implications. Selecting a movie or picking a route home with the help of Fandango or Google Maps may be minor deliberative acts, but they offer fairly obvious examples of how deliberation can be distributed.

Yelp, for example, combines location data with a ratings system and other “social” features such as uploading reviews and photos, “checking in” at a location, and providing map directions. These computational processes compose a media hybrid and expression with the capacity to persuade users. Certainly one might be persuaded by the text of a review or a particularly pleasing photo; text and image play a role here as they might in a video game. But the particular text and photos the user encounters are the product of a preceding procedural rhetoric that decides which businesses to display. It is not only restaurants and other businesses that are reviewed but the reviews and reviewers as well; that evaluation is part of a process that determines which of the dozens or hundreds of reviews a business might receive one sees first. In the case of Yelp, users write reviews and rate businesses on a 5-star scale. Yelp then employs recommendation software to analyze those reviews and weigh them. Does it matter that they claim the recommendation software is designed to improve user experience and the overall reliability of the reviews on the site? Maybe. What is key here, however, is that such invisible procedures undertake deliberations for us. The fact that it would be practically impossible for users to undertake this analysis of reviews independently or that users are still presented with a range of viable options when looking for a local restaurant, for example, does not alter the role that such procedures perform in our decision-making process.
In Yelp one finds a digital media ecology that includes juxtaposed multimedia (e.g., photos, icons, text, maps), computational capacities (e.g., linking, searches, location data, data entry for writing one’s own reviews), algorithms or procedures (e.g., ranking businesses, evaluating and sorting reviews), and media hybrids (e.g., combining with mapping applications to provide directions or linking with your phone to call a business). Indeed one might look at Yelp itself as a media hybrid with its own compositional processes, rhetorical procedures, and genres.
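Yelp’s actual recommendation software is proprietary, so the sketch below is purely hypothetical: every field name and weight is invented. But it illustrates the kind of procedural deliberation at issue, in which a few lines of weighting logic quietly decide which review a user sees first:

```python
from dataclasses import dataclass

@dataclass
class Review:
    stars: int           # 1-5 rating
    reviewer_count: int  # how many reviews this reviewer has posted
    has_photo: bool      # did the review include a photo?
    text: str

def credibility(review: Review) -> float:
    """Toy weighting (invented): established reviewers and richer reviews rank higher."""
    score = min(review.reviewer_count, 50) / 50          # cap credit for reviewer history
    score += 0.25 if review.has_photo else 0.0           # small bonus for a photo
    score += min(len(review.text), 400) / 400 * 0.5      # longer reviews, up to a point
    return score

def sort_reviews(reviews: list[Review]) -> list[Review]:
    # The review a user encounters first is a procedural decision, not a human one.
    return sorted(reviews, key=credibility, reverse=True)
```

The user never sees these weights, only their output: a detailed review from a prolific reviewer surfaces above a one-line review from a newcomer, and that ordering is the deliberation performed on the user’s behalf.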

The softwarization of media did not take off fully until personal computing hardware was powerful enough to run it. Social and mobile media obviously rely on the various species of smartphones and tablets. They require the hardware of mobile phone and Internet networks and server farms. Whole new industries and massive corporations have emerged as part of this ecology, and this means people: HVAC technicians keeping server farms cool, customer service representatives at the Apple Genius Bar, engineers of all stripes, factory workers, miners digging for precious metals in Africa, executives, investors, and so on. It also involves a shifting higher education industry with faculty and curriculum to produce research and a newly educated workforce, an infrastructure that relies upon these products to operate, and students, faculty, and staff who feed back into the media ecology. In short, a media ecology cannot be only media just as rhetoric cannot only be symbolic. While digital media ecologies create species with unique digital characteristics, they cannot exist in a purely digital space any more than printed texts can exist in a purely textual space.

As such, whatever rhetorical power the algorithmic procedures of software might have, their most powerful rhetorical effect might lie in the belief users have in the seeming magic of a Google search or similar tools. However, as Bogost observes, algorithms are little more than legerdemain, drawing one’s attention away from the operation of a more complicated set of actors:
If algorithms aren’t gods, what are they instead? Like metaphors, algorithms are simplifications, or distortions. They are caricatures. They take a complex system from the world and abstract it into processes that capture some of that system’s logic and discard others. And they couple to other processes, machines, and materials that carry out the extra-computational part of their work…SimCity isn’t an urban planning tool, it’s a cartoon of urban planning. Imagine the folly of thinking otherwise! Yet, that’s precisely the belief people hold of Google and Facebook and the like. (“The Cathedral of Computation”)
Indeed, in some comic bastardization of Voltaire one might say that if algorithms didn’t exist we would have to invent them as a means of making sense of media ecologies and our role in them. That is, in the face of a vast, unmappable monstrosity of data, machines, people, institutions, and so on intermingling in media ecologies, the procedural operations of software produce answers to questions, build communities, facilitate communication, and generally offer responses to our requests, even as they shape those questions, communities, communications, and requests. In other words, the distribution of deliberation and other rhetorical capacities among the human and nonhuman actors of digital media ecologies is necessary and inevitable. Describing and understanding the complexities of these relations as they participate in our deliberations, rather than simply celebrating or bemoaning the apparent magical abilities of the tools we employ, becomes the first step toward building new tools, practices, and communities that expand our rhetorical capacities.

It’s worth noting that powerful entities are already intentionally at work on these goals. A year ago, when the concerns about Facebook and fake news were really taking off, Mark Zuckerberg published a manifesto declaring that “In times like these, the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us.” In a related vein, as has been widely reported, the Chinese government has a different idea of the role distributed deliberation can play in its creation of a social credit system. I don’t know about you, but I’m not especially sanguine about the notion of Facebook engineers building the social infrastructure for a global community. I’m even less enthused about the possibility of other nations importing these Chinese practices, as if we do not live in a thoroughly monitored environment as it is.

I won’t pretend there are any easy answers, any simple things one can slip into a lesson plan. The task begins with recognizing the distributed nature of deliberation and describing such processes to the extent that we can. This includes paying attention to the devices we keep closest to us and understanding the particular roles they play. And it means inviting those nonhumans into our disciplinary and classroom communities. Just as universities, departments, and faculty are capable of creating structures that encourage conformity to existing rhetorical and literate traditions, they might conversely create structures that are more open to these investigations. This might mean finding ways to use rather than restrict access to digital devices in the classroom and creating assignments that push students to creative uses of the collaborative and distributed cognitive potential of digital networks rather than insisting on insular and individualized labor. It might mean asking questions that cannot be answered by close reading or setting communication tasks that cannot be accomplished by one person writing a text. From there the classroom has to proceed to create solutions to these tasks rather than assuming the answers already exist, which is not to suggest that many answers might not be readily available, but the emergent quality of digital media means, in part, that new capacities can always be considered.

 

Categories: Author Blogs

academic nostalgia for meatspace meetspace

Digital Digs (Alex Reid) - 14 March, 2018 - 10:19

Here we are once again, another national conference. I’m waiting out a three hour delay on my first flight with the second rescheduled, so fingers crossed. And let’s not even get started about the return trip. I don’t want to curse my luck that badly!

Tell me again why we do this? I mean I understand it’s a professional obligation. It’s part of my job to go to conferences. So I suppose the short answer is that it’s just another onerous, unproductive part of the academic bureaucracy, another holdover from a past century. I guess that’s enough of an answer. After all, we all have stupid things we need to do as part of our jobs. So as I sit in this terminal at least I can console myself with the fact that I’m getting paid.

But let’s try to summon up the fantasy of academic freedom one last time and imagine that we could actually set the terms of what we considered valuable in terms of scholarly work.

There are two obvious alternatives to the national conference meatspace meetspace. The first is the fully online conference. We stay home. We do videoconferencing to discuss papers/presentations that are posted online. The second option really just adds the dimension of a regional meet-up (i.e. something you could comfortably drive to and maybe not even need a hotel). E.g., maybe there’s a 3-4 day conference with 1 or 2 days where people get together.

What would be the point of adding that part? I don’t know. What’s the point of a meatspace meetspace? The short answer is socializing. It’s certainly not the presentations or the discussion following the presentations, which, for whatever value you can attribute to them, can easily be replicated online. So the value all lies in the informal dimensions of the conference: the serendipity of meeting a stranger who shares your research interests and becomes a new colleague/collaborator; catching up with colleagues; and maybe some esprit de corps of being surrounded by so many people in your discipline. It’s catching a drink or meal with friends you only otherwise see on Fb.

In other words, people enjoy socializing. I also enjoy socializing at conferences (for certain values of enjoyment). But if that’s what it’s about, then maybe we could just look to get group rates on cruises or something.

Setting aside the minor irritations of air travel and hotel stays (which compare to 21st century teleconferencing in the same way that air travel compares to 19th century train travel), there is an expanding list of drawbacks to national conferences:

  • The direct costs, especially for graduate students and contingent faculty;
  • The indirect costs of time taken away from other work;
  • The political and material concerns that now arise regularly with each convention location;
  • The carbon footprint.

I don’t know. I’m sure I’ve been writing roughly this same blog post for at least a decade. I like to go to conferences. I’m looking forward to enjoying my time in Kansas City. Maybe someone will have something interesting to say about my presentation… probably not but whatever. Hopefully I will catch up with friends.

But seriously… To me, the practicality of national conferences will eventually wane. It’s a when not if scenario. It’s really only a matter of tweaking some technical matters and figuring out the social mechanics.

 

 

Categories: Author Blogs

rhetoric of podcasts, podcasts of rhetoric

Digital Digs (Alex Reid) - 5 March, 2018 - 11:39

earbuds.jpg

One of the very best things about no longer running the composition program is having the time and mental space to get back to digital rhetoric in a more practical and compositional way. This has got me thinking, in this post, about podcasting in terms of its various rhetorical structures but mostly about the kinds of podcasts that are out there in my field.

Before I started at Buffalo, I regularly taught classes on digital production and I did a fair amount of it myself. My 2010 Enculturation article, made the year before I started being composition director, included a video. And in 2008, right before I left Cortland, I’d published an article in Kairos about teaching podcasting in my professional writing courses there. All that dropped away for me when I took on the WPA job here at Buffalo. That’s a story for a different time, but that’s part of the context for where I am now. The other part is that we’ve started a new graduate certificate in professional writing and digital communication, and this has provided some real exigency for me to get my hands dirty again with production.

You can do your own Google search or take my word for it that podcasts have become increasingly popular. A decade ago, when I was teaching this stuff, we didn’t have the smartphone and mobile data networks we have now that make following podcasts so easy. (OK, here’s one quick stat from this article: these days 42 million Americans listen to a podcast every week.) Personally I like podcasts. I also like audiobooks. I use them for entertainment purposes. (Mostly I listen to podcasts about soccer.)

Given this, I think there are several good reasons to teach students in a professional writing curriculum how to podcast, including:

  1. Though they may never podcast professionally, this is a significant genre in our media ecosystem about which professional communicators need to have an understanding.
  2. Creating an audio recording is a fairly simple task, at least at a basic level. But it also opens a path for becoming more sophisticated. (I.e., minutes to learn, a lifetime to master.) In this sense it’s a practical entry point into a larger field of sound rhetorics.
  3. Many students I encounter have a fairly narrow usage of media and even less experience as composers. So podcasting becomes one in a series of experiments with media production that begins to alter our relationship to composing from one that says “I’m a writer” meaning I put words in a row on a paper to something more capacious.

My challenge though is that I always want to do the things I ask our students to do. And so I come to podcasting. The real challenges with podcasting are rhetorical and compositional. How does one create a compositional space in which one produces hopefully interesting podcasts on a semi-regular basis?

So what kind of podcasting is there in rhetoric and composition? Well, this FB page tracks several of the more prominent ones. Among the active ones on that page there’s Rhetoricity produced by Eric Detweiler, Rhetorical Questions produced by Brian Amsden, Eloquentia Perfecta Ex Machina produced by the Saint Louis University composition program, and the CCC Podcasts produced by NCTE. I’m sure there are more. I’ve listened to a few episodes of each, and they all follow an interview format with some interviews more formal than others. So really you have someone new in each episode. That’s an entirely familiar and sensible format.

Another common format I encounter in listening to soccer podcasts is basically punditry/fan banter. There you have two or more regulars who discuss the events of the last week. I haven’t really seen that in rhetoric/composition, perhaps because we don’t really have “events” to discuss. Obviously you could discuss the rhetorical angles of current events, which would be a kind of application of rhetorical scholarship. It might be a good kind of podcast for someone to make, but that’s probably not an angle I’d want to take.

Part of the issue then is the periodicity of scholarship or at least the periodicity of the communication of scholarship. It should perhaps be noted that the latter is not “natural,” of course. The pattern of article publication, the length of articles, the length of conferences, the pattern of the academic conference schedule: these are as much a by-product of the material affordances of mid-20th century communication technologies as they are anything to do with the qualities of the objects and practices we study or our methods. Clearly there’s some feedback in that loop and cybernetic/homeostatic impulses are at work.

So I’m wondering if it is possible to shift that periodization. I still think that blogging is an opportunity to do that, to have a more ongoing and less precious conversation about ideas and discoveries than what publishing allows (not as a replacement but as an enrichment). It never really became that. Maybe podcasting is a better medium for that. A few people get together and talk about their work, what they are reading, what’s happening in their classes. Is that interesting? I don’t know. I think almost anything has the potential to be interesting or boring depending on the audience, the situation, its production/performance/composition.

Like this blog, for example… to quote Mitch Hedberg, “I played in a death metal band. People either loved us or they hated us… or they thought we were OK.”

Categories: Author Blogs

the affects of gun control

Digital Digs (Alex Reid) - 26 February, 2018 - 13:15

Conversations in America about gun control, public space, and safety–which are related but not equivalent–are grounded in affect, cultural/ideological identity, and ontology. I’ll swing around to the ontological element later, as that’s what is most relevant for my work, but I’ll stick with the more familiar elements first. Most strong opposition to gun control begins with the following:

  • Affect: an affection or love for guns; a pleasure found in gun ownership and/or use.
  • Cultural/ideological identity: being a gun owner is part of who you are. Criticism of gun ownership or use is an attack on your identity, your personhood, as well as the culture in which you participate.

Conversely, most strong support for gun control reflects little or no affection for guns (perhaps even antipathy toward them) and no personal or cultural sense of identity tied to gun ownership or use (and perhaps even identification with the rejection of guns). However, I think it is fair to say, those individuals tend not to define themselves by their opposition to guns to the degree that others identify with guns. Instead, their support of gun control is just a part of a larger bundle of cultural identifiers. Because affects are a matter of intensity, there are, of course, people who feel less strongly in either direction, though I would hesitate to put them “in the middle” because that suggests they feel strongly about some third position that is somehow a mixture of the extremes. If we are going to insist on some abstract political geometric model, I wouldn’t suggest a line but rather some multi-dimensional space with others at a tangent to these two opposed positions.

To make any kind of deliberative argument, there needs to be some common sense of a future. And put quite simply, we don’t have one. On the one side you have a vision of an America where everyone carries a gun. Public spaces are secured by individuals. And the only laws are natural/divine ones. The core of this ontological position is straightforward. My abilities to think and act are divinely granted qualities for which I bear an obligation to g-d. If I have a gun available to me, then I have more power and agency than I have without a gun. Even if I am, in some respects, in greater danger, that’s always the case with power. As an individual I am always responsible for my exercise of power, and the degree of power I have doesn’t change that. I want a future society founded on powerful individuals acting responsibly and being held accountable for their actions.

The opposing ontology understands humans not as individuals with divinely granted powers but as social and historical animals. Here capacities for thinking and acting emerge from collective action through the assemblages, networks, and institutions we construct and maintain. Because I see my agency as relational rather than inherent, I would view any decision about gun control as an act of the state. That is, where the gun advocate would see restricting guns as a state action but not restricting guns as permitting a natural/divine condition to exist unregulated, I would view either decision as the operation of social-historical structures. Either course results in the creation of new conditions for agency but there would be no way to imagine a lack of gun control as the assertion of a more “free” condition in absolute terms because there are no absolute terms.

Hypothetically, from this second ontological perspective, one could argue in favor of arming citizens though I am having a hard time formulating one. You’d have to argue that increasing citizens’ ability to harm and kill one another directly produced a desirable set of social conditions. Or maybe you could argue that this negative effect of arming citizens was worth the cost in order to address some other problem or fulfill some other desire. I don’t think those are easy arguments to make if one is starting from an otherwise neutral position. You’re either arguing that 30K people dying every year is a good thing or that 30K people dying every year is a price worth paying to achieve _____. And it’s not easy to make those latter arguments in terms of safety or hobbies. If you’re worried about protecting citizens from crime, then you would seek to mitigate the social conditions that lead to criminal activity rather than arming individuals. If you wanted to allow for gun hobbies like target shooting or hunting then you could create carefully regulated spaces for such actions where guns are stored and ammo handed out more parsimoniously than opioids are today.

As I point out above though, from this perspective the gun control issue is bundled with a larger constellation of matters that begins with the premise that agency is created and destroyed through collective social action. Basic income, health care, education, environmental protection, equality before the law: through such collective actions agency can be increased. Of course the devil is in the details. But from this perspective collective inaction is just another form of collective action. The key from this ontological perspective is that political action is about working collectively to make things better, even though sometimes we fail to do so.

With that in mind, the key lies in the collective, democratic restructuring of social assemblages which results in a shift in affect. In our democracy, it’s hardly about changing the minds of your political opponents. It’s almost entirely about activating the undecided and non-participating votes. E.g., If you’re in favor of gun control how many of those folks can you convince to come out and vote against anyone who has an “A” grade from the NRA? In how many current red districts can support for the NRA become a political liability?

 

Categories: Author Blogs

“How Hard Do Professors Work?” Why do you want to know?

Digital Digs (Alex Reid) - 8 February, 2018 - 14:08

Variations of this question have become a genre unto themselves, as this recent article in The Atlantic exemplifies. The article takes the occasion of some “Twitter battle” to revisit this topic. But really, why do you want to know? Are you just curious? Is my job so very mysterious?

  • maybe the high cost of college has led you to turn your focus on how I spend my time and how much I get paid;
  • maybe you’re an administrator or a general hobbyist with an enthusiasm for spreading Taylorist efficiency in the workplace;
  • or maybe you just hate academics for one reason or another.

SUNY salaries are public data, so you could track mine down if the topic really interested you. It’s pretty close to the average mentioned in the article, which is ~$80K/yr. I’m not sure how there’s a complaint about that. It’s not a ridiculous amount of money. It’s market-driven, like everyone else’s salary. If you want to change the nature of our economy, we all want to hear the plan (to quote John Lennon), but otherwise… As for reducing the cost of college, well if you’re a NYS resident in a household earning less than $125K then you’re not paying tuition at a SUNY school like mine. If you’re paying full in-state tuition, then it’s less than $7K/yr. So four years of tuition is less than the average price of a new car. Do you go into the car dealership and tell everyone in the auto industry they should be making less money so that you can buy your car for less?

In many ways, I don’t think being a professor is that different from many professions that require creative thinking and address open-ended problems. That is, in theory you could spend every waking hour working, especially if you consider the time you spend pondering a problem while in the shower or walking the dog or driving to the grocery store as “work.” That’s one of the issues noted in the article, how do you define work? This can be particularly true for professors who are not only occupied by their jobs (i.e., it’s their occupation) but are also pre-occupied (e.g., they can’t stop sharing work stuff on Facebook… or blogging about it). Early on in my academic career, when my kids were young, I made a decision to draw some firm boundaries on this, because I grew up pretty much without a dad and I didn’t want that “cat’s in the cradle” bs for my kids. (I know not every academic is in a situation where they feel empowered/able to do that, but that’s my story.)

But for simplicity’s sake, let’s take the most limited, commonplace notion of work and restrict it to time spent working directly on accomplishing a clearly defined task. 40-50% of my time (2-3 days a week) is teaching, grading, class prep, office hours, student emails, etc. I try to spend about 30-40% (1-2 days per week) on research, reading, writing for publication. That leaves me with a few hours per day that are devoted to the service aspects of my job (committee work and so on). That’s during the academic year. In June and July, I spend more time on research. I give myself a chance to learn new things I might want/need to incorporate into my teaching or scholarship. It’s looser for sure. Then I take a vacation. And then in mid-August it starts over. During the school year, I spend about 60% of my work time on campus. Otherwise I’m working from home.

Pretty boring, huh? Certainly not mysterious or really all that weird. There are plenty of people who have jobs where they work from home part of the time, that are busier in certain times of the year than in others, or where the nature of their work is seasonal for some reason.

Still the drive to answer this question moves on. As this article suggests, it must be researched!

The research could also help paint a clearer picture of how academics divvy up their time—how many hours are spent teaching students, doing research, attending conferences, frittering away in meetings. That information could prove especially useful at universities that are rethinking the demands they place on professors and striving to enable faculty to spend more time in the classroom.

This week’s viral Twitter battle over the workload of professors was a fun, insider debate, but it also opened up serious questions about the purpose of college.

I don’t get that. In part because it’s obvious. I’m at a research institution where I teach two courses per semester. Other colleges that are less research intensive (who expect their faculty to produce less scholarship) ask professors to teach more courses. So if one wants faculty to spend more time in the classroom, then one could start hiring faculty with more teaching-intensive workloads–and many universities do that.

I understand that some folks may not see the value in the time I spend on research or professional development. That’s fine. Come by my office and we’ll talk if it really matters to you. On the other hand, I may not really understand the value of the work you do either. But that doesn’t matter because your employers do and they’re the ones paying you to do it.

See how that works?

Categories: Author Blogs

tolerance, forbearance, and campus culture

Digital Digs (Alex Reid) - 3 February, 2018 - 07:55

Last week I read what I think is an excellent articulation of the current struggles of our republic. Steven Levitsky and Daniel Ziblatt’s “How Wobbly is our Democracy?” examines the underlying principles of tolerance and forbearance they suggest make our democracy function. Tolerance and forbearance are not especially complicated concepts. As Levitsky and Ziblatt write, “When mutual toleration exists, we recognize that our partisan rivals are loyal citizens who love our country just as we do… Forbearance is the act of not exercising a legal right. In politics, it means not deploying one’s institutional prerogatives to the hilt, even if it’s legal to do so.” It’s not too hard to see that the US has become increasingly partisan since the 1960s. Conservative movements such as Gingrich’s Contract with America, then the Tea Party, and now Trumpism have all arisen to roll back liberal policies from Johnson’s Great Society (or even FDR’s social security) through Obamacare and DACA. During elections candidates and parties have long said nasty things about one another. They’ve been partisan and hostile. Historically, however, after the voting was over, the tide generally turned back to toleration and forbearance.

Credit: Michael George Haddad

Of course, that’s not what happened in the Civil War, and it’s not what has happened since the 90s, though the pace is increasing. I think we’ve reached a point where we no longer necessarily view the opposition as “loyal citizens,” and really just about any notion of forbearance has disappeared, probably with the GOP blocking of Obama’s Supreme Court nomination. As was said at the time, this was within the legal power of the Senate to do. It is within the President’s power to fire people he thinks are personally disloyal, and since only Congress has the power to put him on trial, as long as Congress remains loyal to a president, there is no stopping him (or her) from committing any crime while in office.

It is only toleration and forbearance that prevent things like that from happening. It ultimately relies upon the notion that American voters, in the end, value the future of the nation over their partisan political aims. Think about it on a more personal level. As an adult you can do what you want and say what you want within the limits of the law. If you’re married and act without toleration and forbearance with respect to your spouse, then your marriage probably won’t last very long. That’s where we are right now, acting as if we no longer want to be part of this union.

A similar set of principles underlies the open exchange of views on college campuses. Such conversations implicitly begin with a shared value regarding the continuation of higher education… even though we do not always agree with what students, faculty, staff, administrators, and invited speakers say. So there’s toleration. But there’s also forbearance in the sense that while one has a legal right to speak freely, one will respect the rhetorical expectations and practices of the community in terms of how one expresses one’s views. Without that forbearance, the capacity for toleration weakens.

So what we sometimes see today (and these occurrences are still rare given the thousands of speakers invited to campuses every week) are campus events that 1) intend to weaken or undermine higher education (so are intolerant of the entire premise), 2) express intolerance of people who are part of the campus community, and 3) lack any forbearance in terms of a moderation of speech or willingness to engage in academic debate (which cannot occur in the context of intolerance anyway). Basically, I think it’s fair to say that these conservatives view higher education as an ideological foe that they will not tolerate, and thus they refuse to accept the implicit rhetorical values and practices of the community.

Why conservatives oppose higher education will have to be a subject for another day. The point is that given the larger political culture’s lack of tolerance and forbearance, there is almost no chance of insulating campuses from those changing conditions. Without those underlying values at work, the academic habit of inviting a free and open exchange of views becomes impossible. And obviously these conservatives know this, and their principal intention is to disrupt and destroy campuses through this practice.

So what should campuses do? First, I think they need to make their implicit values explicit. Then they can establish requirements for the structure of presentations and debate. They can articulate expectations regarding tolerance of others and moderation in speech, which would at least allow them to label particular events and the organizations that support them. Within these weakened conditions of forbearance, I think this would be within their rights. If we cannot expect speakers to respect implicit values of toleration and forbearance, then we need to make them explicit, and we have to admit to ourselves that we cannot make these values hold up on our own. One person cannot make a marriage work if the other is intent on destroying it. I realize that conservatives view practices such as the scientific method, mathematical calculation, and the attempt to study and account for the entirety of humanity rather than just select racial/ethnic subgroups as a threat to their white nationalist Christian ideology. I don’t know how one gets around this.

I’m not saying that such moves would be without political consequences. But the thing is that the political right has already declared colleges and universities its enemies, has already asserted its desire to tear down higher education, and is already putting as much political will behind that objective as it can. I know higher education imagines itself occupying some more distant moral high ground, but what I think we must learn from this historical moment is that such high ground is not something that colleges and universities can create for themselves or sustain on their own. It is something that relies upon a broader social contract that is now unraveling.

Categories: Author Blogs

Digital video, the index, and your own eyes

Digital Digs (Alex Reid) - 26 January, 2018 - 11:20

Perhaps you have seen recent stories about the developing technical ability to map one person’s face onto another person’s body in video. This article details the unsurprising result of people mapping celebrity faces onto porn videos. Apparently the capital of the porn industry is moving from the San Fernando Valley to the uncanny valley. I’m hoping you don’t need me to tell you that it’s wrong to do that to women–actresses, ex-girlfriends, anyone–without their consent, but that’s the world we live in right now. These technologies also constitute an emerging format of fake news video. That’s another obvious nasty application. You might remember this related story from last year detailing the technical ability to put words into someone else’s mouth on video.

Maybe there are appropriate artistic and political purposes for creating videos of this type, things that would be something like SNL impersonations of public figures, as long as the product is clearly marked as fake. And there could be other consensual uses. I just finished reading The Circle, which I’m teaching this semester, and there’s a brief scene in the novel about a technology that maps your face onto that of a movie protagonist. Maybe you’d like to see that, or play as yourself in a video game. I don’t know.

As the linked article discusses, and as you can see in the definitely NSFW subreddit it mentions if you care to, the current technology, at least as it’s available to the average user, is not so seamless that you can’t tell it’s faked. Yet. Of course, look at Grand Moff Tarkin in Rogue One, as this NY Times article details. In the case of the film, a body double similar in size/shape to Peter Cushing imitated Tarkin while wearing motion capture materials. In the fake porn scenarios, there are similar technical demands if one is seeking verisimilitude (though I suppose that isn’t necessary and all kinds of desires might be addressed with this technology). Certainly an enemy state intelligence agency would have access to the technologies capable of making very convincing fake videos.

My interests in this begin with our strong faith in the indexical character of video, that is, that video is an objective record of real events. You can think of security cameras or the role of video replay in sports. But film and video have never quite been that reliable. Since the early days of film we’ve had special effects, including faking people. A scene was shot with half the lens covered, then the film rewound and the cover switched before the same scene was shot again, thus creating the appearance of the actor encountering himself/herself. Those are obvious examples. And I suppose this new faking technology is part of this tradition of special effects that runs from slow motion and montage through the “bullet-time photography” of The Matrix and on to the virtual world of Avatar and beyond. However, we should equally keep in mind that even documentary and recording/surveillance video produced with the best possible intentions of recording “what really happened” always has technological limits. If nothing else, video is always limited by what the lens is able to see and the machine is able to process or capture. What happens before or next? Or behind the door or behind the camera lens?

The knee-jerk reaction is to once again throw up our hands in the face of another source of unreliability turned out by digital media. But if you’re paying attention then you know that film/video have always been problematic presenters of the truth, as have audio recordings, as have writing and speech. No rhetorical act can be taken as unvarnished truth. This fake porn is deplorable. But equally deplorable things have been accomplished in any media you might pick. It’s entirely likely that this composing technology could lead to the composition of valuable knowledge, just as you can also find in any media you might pick.

The question has always been how we assess the value of a composition. Those standards shift from genre to genre and community to community. That relativism does not mean that there is no truth. It doesn’t mean that one group gets to call a video evidence of a real event and another gets to say that it is faked, and that both claims are equally true. It seems that it has become increasingly difficult for Americans to discern the difference between recognizing that different audiences/people will be more or less persuaded by any given argument (and that’s ok) and disagreeing over the basic establishment of facts and information. That is, there’s a difference between disagreeing over the beauty of a sunset and disagreeing over whether or not the sun set at all.

Many times, of course, we rely on others to convey those facts, and we need to ask them how they came to know such things. Researchers describe their methods and results and cite sources. Journalists and other nonfiction writers cite their sources too. Both are reviewed and fact-checked by their own professional systems. To function as a society we need to be able to rely on these institutional processes, and yet we live in a political moment when powerful people (primarily on the political right, I have to say) have enacted years of attacks against institutions in furtherance of their political goals–science, schools, universities, news media, and lately the FBI. Each has been accused of being nothing more than a political actor, as having no other basis or purpose for knowledge production than the pursuit of personal ideology. I’m not suggesting we need to swing to some other extreme and give these institutions blind faith; we know there are humans working there too. Instead it means developing our own understanding, our own literacy if you like, about how such knowledge and media are composed and tested, and putting such works to our own tests.

As we all know, these tests are most important when it comes to knowledge (whether research or current events) that confirms our existing beliefs and biases. Given the current political climate surrounding US immigration policy, we’re likely to encounter all kinds of media: media, including video, that is entirely fake; ideologically curated news that is designed to appeal to one’s existing biases; news stories with some factual basis that are spun one way or another by pundits and politicians appearing online, on TV, in newspapers, and so on. The underlying question always has to be–how has this been composed? How can I test the strength of this composition?

To go back to the Marx Brothers, today you can believe neither me nor your own eyes. Instead one must understand the available affordances of knowledge and media composition, recognize the institutional processes, genres, and expectations by which those compositions are produced, and then test them against that knowledge. Whew! What a lot of work. Who knew being in a democracy meant having such responsibilities?!?

Categories: Author Blogs