Digital Digs (Alex Reid)

digital rhetoric and professional communication

Blackboard and the arse end of the internet

23 May, 2018 - 08:59

There’s no dearth of cesspools on the web, and I wouldn’t want to get into a debate about which is the worst. But Blackboard is its own special circle of internet hell.

As I’ve mentioned a few times here, after ending my stint as WPA, I’m back to teaching a regular load this year. So I decided to use UB’s course management system for at least part of what I was doing. There were really two basic reasons I did this. First, the students use UBlearns (as we call our version of Blackboard) for many of their classes and just expect to see things there. Second, it had been a long time since I’d even considered using a CMS. As WPA, I was teaching grad classes which were small, so there wasn’t really a need for it. Before that, I had sought out all manner of alternatives to using a university CMS, because the things were so awful 15 years ago.

Apparently they are still awful. In some respects they are even worse as the capacities of the web around them have left them in the dust. Think about WordPress, YouTube, Twitter, Facebook, Instagram, Google Docs, or Reddit. Consider how easy they are to use, how flexible, how fast, how mobile. Think about how easy it is to create, edit, and share content. UBneverlearns, as I’ve now decided to call it, like any CMS, is basically a graveyard of content and conversation. Or maybe it’s more accurate to call it a morgue, where the instructors do their version of CSI before pronouncing a grade.

Of course these other sites present their own pedagogical problems. There are privacy concerns, not only in terms of the data these sites collect but also in terms of how, as faculty, one will communicate grades and such to students. There’s the problem of having to ask students to create multiple accounts (e.g., we’ll have discussion on WordPress but upload your videos on YouTube, then let’s use Google Docs to work collaboratively on a document, etc.). And the reality is that a fair segment of students will struggle with the digital literacy demands of using multiple sites, even though there may be a legitimate argument for saying that they should learn how to do that.

From the faculty perspective, one can either take the default route of using Blackboard and following its path of least resistance, or one can devote a non-trivial amount of time to rolling one’s own learning environment. At least for me, as a digital rhetorician, there’s some overlap between figuring this stuff out for pedagogical purposes and the research that I do. For 99% of faculty this isn’t the case.

This is why I get a sardonic chuckle out of views like that offered by the Horizon Report, a document produced by experts in educational technology, who steadfastly claim that teaching digital literacy is a “solvable challenge,” by which they mean one that they understand and know how to solve. Show me evidence that a significant portion of faculty are digitally literate. Products like Blackboard do little to convince me that even educational technologists are digitally literate. I mean, higher education can’t even manage to produce a platform where one could begin to teach digital literacy.

The more I think about this, the sicker it makes me. Eighteen-year-olds entering college in the fall would typically have started kindergarten in 2005. Still, we’ve spent the last decade teaching them to sit quietly in rows, take notes, read textbooks, complete worksheets, and pass standardized exams. Pretty much like I did in the 70s and 80s. While they may get the majority of their entertainment from the web, they’re barely better prepared to learn, communicate, collaborate, or work in a digital environment than I was at their age. And, obviously, faculty, overall, are barely better prepared to teach them such things, and universities are barely better prepared to support such teaching and learning. Instead they give us products like Blackboard, as if their sincerest wish is to persuade faculty to keep learning in meatspace. That’s the oddest thing about this, since we all know that universities desire those online students.

So one of my goals for this summer will be figuring out some constellation of applications that I can integrate to teach my classes. I’m sure I will use UBneverlearns in a minimal way since the students will look there first: probably as a syllabus and a gradebook but nothing beyond that.

Categories: Author Blogs

A different direction for asking why you heard what you did

18 May, 2018 - 10:41


So the basic story of the recent “Laurel or Yanny” phenomenon, as near as I can figure, is this. You’re listening to a degraded digital recording of a voice, one that has distorted some of the low-frequency sounds a human voice makes. So depending on a number of factors–some physiological, some psychological, some technological (e.g., the speakers or headphones you’re using), and some environmental–you might hear one or the other.
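The frequency-dependence is easy to demonstrate with a toy sketch. This is an illustration with synthetic sine tones, not the actual recording; the sample rate, tone frequencies, and filter constant are all arbitrary choices of mine. A simple one-pole low-pass filter–roughly what cheap speakers do to a signal–passes a low tone while attenuating a high one:

```python
import math

SAMPLE_RATE = 8000  # Hz, arbitrary for this illustration

def tone(freq, n):
    """A pure sine tone at the given frequency, n samples long."""
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def mix(a, b):
    return [x + y for x, y in zip(a, b)]

def low_pass(signal, alpha=0.1):
    """One-pole low-pass: y[i] = y[i-1] + alpha * (x[i] - y[i-1])."""
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

def rms(signal):
    """Root-mean-square amplitude, a crude measure of loudness."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

n = SAMPLE_RATE  # one second of audio
low = tone(300, n)    # low-frequency energy ("Laurel"-leaning)
high = tone(2500, n)  # high-frequency energy ("Yanny"-leaning)
muddy = low_pass(mix(low, high))

# After filtering, the low tone survives far better than the high one.
print(rms(low_pass(low)) > rms(low_pass(high)))  # → True
```

Swap in a high-pass filter and the balance tips the other way, which is roughly why different playback hardware can push listeners toward different percepts.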

So what does this mean for us? Adam Rogers puts it this way in Wired:

There is a world that exists—an uncountable number of differently-flavored quarks bouncing up against each other. There is a world that we perceive—a hallucination generated by about a pound and a half of electrified meat encased by our skulls. Connecting the two, or conveying accurately our own personal hallucination to someone else, is the central problem of being human. Everyone’s brain makes a little world out of sensory input, and everyone’s world is just a little bit different.

As he goes on to opine? lament? “It’s hard to imagine a more rube-goldbergian way of connecting with another person. Their thoughts to their mouth to pulsations of air molecules to a vibrating membrane inside a hole in your skull to bones going clickety-click to waves of electrical activity to thoughts.”

In many respects this is a familiar story. It’s the modern divide Latour describes in our efforts to patrol the border between nature and culture. It’s the divide between empiricism and idealism in which different stripes of academics argue over the possibility of knowing something about the “world that exists” besides a “hallucination” of it.

And it’s also the founding problem of rhetoric, the one Rogers terms “the central problem of being human.” In the contemporary secular world where there is no recourse to divinely secured presence (aka souls) to secure intentions, rhetoricians turn to solutions that sound similar to Rogers, like Thomas Kent’s practice of “hermeneutic guessing.” On the other hand, Rogers’ neuroscientific turn is also a matter of concern for rhetoricians who are skeptical of the tendency/hope that brain science can explain away these problems with an fMRI or something. (It’s worth noting that neuroscientists are also often skeptical of what the mainstream takes from their research.)

For me, this odd viral story is an opportune moment to consider the usefulness of Latour’s “second empiricism,” a method he builds upon William James. Here’s Latour from An Inquiry into Modes of Existence:


The first empiricism, the one that imposed a bifurcation between primary and secondary qualities, had the strange particularity of removing all relations from experience! What remained? A dust-cloud of “sensory data” that the “human mind” had to organize by “adding” to it the relations of which all concrete situations had been deprived in advance. We can understand that the Moderns, with such a definition of the “concrete,” had some difficulty “learning from experience”—not to mention the vast historical experimentation in which they engaged the rest of the globe.

What might be called the second empiricism (James calls it radical) can become faithful to experience again, because it sets out to follow the veins, the conduits, the expectations, of relations and of prepositions —these major providers of direction. And these relations are indeed in the world, provided that this world is finally sketched out for them—and for them all. Which presupposes that there are beings that bear these relations, but beings about which we no longer have to ask whether they exist or not in the manner of the philosophy of being-as-being. But this still does not mean that we have to “bracket” the reality of these beings, which would in any case “only” be representations produced “by the mental apparatus of human subjects.” The being-as-other has enough declensions so that we need not limit ourselves to the single alternative that so obsessed the Prince of Denmark. “To be or not to be” is no longer the question!

I realize that’s a long quote, so if you just skipped over it, here’s the summary. Classically, empiricism divides observations into primary and secondary qualities, where primary qualities are objective (e.g., length, width) and secondary qualities (e.g., color, smell) are subjective. This is the worldview Rogers implies when he suggests that brains hallucinate about reality: that somewhere between things in the world and the electrochemical pulses of the brain, all the actual relations that hold things together become lost and inaccessible to us, and our brains just have to guess at what those things are based on a slush of sensory data.

So the weird thing about that is that our bodies and brains are just more things in the world. And our thoughts are just more events/actions in the world. If the lamp sits in relation to the table, then don’t my perceptions and thoughts about the lamp and the table also sit in relation to them as just more things? Why would one imagine that somewhere along the way, one passes into a parallel universe with a different ontology? I actually think the answer to that is fairly simple. One thinks that because one imagines oneself to be divine in some special way.

Rogers ends his essay this way: “Telling the person next to you what’s going on in your head, what your hallucination is like—I think that’s what we mean by ‘finding connection,’ by making meaning with each other. Maybe it’s impossible, in the end. Maybe we’re all alone in our heads. But that doesn’t mean we can’t work on being alone together.” And I don’t really mean to pick on this guy. I just think he neatly expresses a commonly held perception. The title of Rogers’ article is “The Fundamental Nihilism of Yanny vs. Laurel,” which captures the feeling that this story reminds us that we don’t really share the world in common and that we are all alone in the end. However, I think this is more a romantic fantasy than a moment of nihilistic, existential angst.

We aren’t alone in our heads. We aren’t even only in our heads in the sense that this meat is designed to operate in an environment. There’s stuff coming into our heads all the time–through our senses, through the blood brain barrier, through electromagnetic waves. And there’s stuff coming out of our heads too.

Sure. It can be hard to communicate. It’s also hard to hit a curve ball or play the guitar. But this is not a story that begins with us being fundamentally alone and romantically struggling to understand one another. That’s backwards. It’s not that you’re alone in the world and thus hear Yanny rather than Laurel. Instead, hearing one rather than the other contributes to your individuation. But that individuation, in my thinking, is just an iteration of a possibility space that one largely shares with the human population. Thinking about individuality as the output of a relational, environmental process rather than as a starting point that never really quite gets going because it can’t manage to connect to the world strikes me as a much more productive approach to investigating these experiences.


the late age of late ages

16 May, 2018 - 18:47

It’s undeniably a quizzical situation. For the middle-aged rhetorician it’s the comically late age of the humanities/English Studies and the tragically late age of humans (cf. climate change) in the midst of a still spry rhetorical universe that will go on without us. I can only imagine a generation of mid-century factory workers punching clocks in steel mills and auto plants looking upon those industrial edifices with their cornerstones proclaiming “Look upon my Works, ye Mighty and despair!” and imagining an indefinite unchanging future for themselves and their children, just as tenured faculty still do (or at least used to until recently).

If you’re in the humanities, have you ever wondered why tenure appears to be the one thing in all of human history, if not the entire universe, that is seemingly immune from the critical apparatuses of the humanist gaze? Is not tenure also a mechanism of colonialism? patriarchy? capitalism? neoliberalism? etc.? If not, what is tenure’s magic? Surely tenure has always operated inside hegemony. I can go on, but it isn’t necessary because anyone with a PhD in the humanities should be able to play this game. And in defending tenure, one simultaneously reveals the rhetorical playbook for turning back that critical gaze in any context. (Un)fortunately for us, no one was paying attention in the first place.

In the end, building and working in disciplines is not unlike building/working in factories or pyramids. For my particular journey through the discipline of rhetoric, the seeming hypocrisy in critiquing anything/everything except tenure isn’t much of a problem. Maybe you think I’m confessing that I’m full of crap. Maybe. But then maybe you should have read the ToS before you agreed to it. The seeming hypocrisy isn’t a problem, but the fading efficacy of such arguments is.

In the late age of late ages, we’re all too tired for this, aren’t we? (I will concede that it might just be me getting old.)

That said, nihilism is as immature a response to life as belief. One must imagine the professor teaching an ever-shrinking number of students as happy. After all, these things were plain to see decades ago. Playfully, let’s suggest this is Camus, not Sartre: there is an exit. The late age of “humans” presented by anthropogenic climate change and digital life need not mean the late age of “us”: to offer a Latourian malapropism, we have never been human. But what does the “humanities” have to offer to such beings, their cultures and economies? That’s a good question. I’m not sure. But I’m fairly certain that it begins with rejecting the immunity we have granted ourselves.

I know that sounds crazy. It’s the last, great counter-intuitive move for a disciplinary tradition that has laid all its other bets on counter-intuitive thinking.

In the late age of late ages, it’s time to go all in. If not now, then when?


when AIs start vlogging

10 May, 2018 - 12:48

Right now I have two scholarly/professional interests, and I’m wondering how they intersect. On a general thematic level they appear to share a lot as they are both about digital technologies and communication/rhetoric. However, they also represent two very different segments of digital culture. I’ve been writing/speaking about both recently on this blog. The first has to do with the role of artificial intelligences as rhetorical agents. From speech synthesis to natural language processing to negotiating deals, AIs occupy rhetorical spaces. Their rhetorical behaviors are interesting for two reasons. The more immediate one is that we humans are increasingly interacting with AIs, having rhetorical encounters with them. The other reason is that the rhetorical actions of AIs might tell us something about how rhetoric functions outside human domains. If we think of rhetoric as a thing or process or capacity that is in itself not human but with which humans interact, then understanding how rhetoric interacts with other nonhumans might give us some broader insight into rhetoric itself.

The second thing I’ve been musing about is the emergence of digital genres, particularly over the last five years or so. Sure, vlogging, podcasting, infographics, and so on have longer histories than that, but these are all things that had limited cultural roles a decade ago compared to where we are now. It’s hard to imagine that much of that wasn’t driven by the adoption of smartphones with the capacity to both deliver and produce video and audio content. Anyway, as I’ve been saying, even though these formats have been around for 15 years (and build upon decades of video, film, radio, and so on), they are still relatively immature in terms of the genres built on them. I think this is especially true as one looks at professional and academic genres. I find it hard to imagine that we can go another decade without these genres becoming more prevalent.

So how do these things intersect? The first answer that comes to mind is that AIs will become increasingly able to help people make and access media, starting with the technical qualities of image and sound. With natural language and image processing, one has the potential of creating indexical access to audio and video, as in “Show the part where there are monkeys” or “Go to where they talk about monkeys.” Then there are all the possibilities for using AI to help identify fake news and other bad actors. In short, there are a range of procedural-rhetorical ways in which AI will shape the composition, circulation, and consumption of video and audio.
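As a minimal sketch of what that indexical access might look like under the hood: given a timestamped transcript of the kind speech-to-text services typically produce, keyword search already yields jump points into the video. (The transcript, timestamps, and function here are all hypothetical illustrations, not any particular service’s API.)

```python
# Hypothetical timestamped transcript: (start time in seconds, spoken text)
transcript = [
    (0.0, "welcome to the channel"),
    (12.5, "today we visit the zoo"),
    (47.2, "look at the monkeys climbing"),
    (95.8, "the monkeys are eating now"),
    (130.0, "thanks for watching"),
]

def jump_points(transcript, query):
    """Return the start times of segments that mention the query term."""
    q = query.lower()
    return [start for start, text in transcript if q in text.lower()]

# "Go to where they talk about monkeys"
print(jump_points(transcript, "monkeys"))  # → [47.2, 95.8]
```

A real system would layer speech recognition (and, for “show the part where there are monkeys,” image recognition) underneath, but the indexical move itself is this simple: media becomes searchable once machines can transcribe and label it.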

In short, there are a number of places to start, and these are all viable. However, it’s not quite what I’m thinking about. For me, the interesting intersection is at once both more abstract and more material. Video/audio capture the environment in which one composes, starting with one’s body. Even with staging, editing, and such, that environment is always there in a way that it is masked in text. (BTW, that doesn’t mean the environment doesn’t shape textual composition, only that it’s less visible/harder to trace.) My interest in AI rhetorical actors is similarly in what they can tell us about our shared rhetorical environment. If we think about this from a “cognitive ecology” (or a cognitive media ecology) perspective, then sensors (cameras, mics, IoT, etc.), data storage, composing/editing applications, AIs, humans, networks, mobile tech, etc. form an expansive environment. All the media composition in which humans get actively involved–as vast as that is–is a thin veneer on the massive amount of expressive data of machines reporting and circulating their sensations. Similarly, the rhetorical negotiations involving humans will soon become a minor part of the larger conversation. We represent a small population of moving parts in this rhetorical environment.

Of course, the “always already” argument is always already available to us. We’ve always been immersed in a denser and richer rhetorical environment. It’s just that we’ve been too anthropocentric (perhaps unavoidably) and too sure of our privileged, exceptional ontological condition in the universe (less unavoidably) to recognize that immersion. While that observation is valuable (for purposes of humility if nothing else), it’s also necessary to recognize the shift that we are experiencing (without falling into the hype of that either).

Recognizing that our rhetorical capacities emerge from our participation in populations of assemblages within a cognitive media ecology is a fruitful starting point for describing the particular capacities that arise among us. And that’s fine for a broad research agenda that has legs. But it’s not the kind of thing I can teach to undergraduate students or even to graduate students–at least not without a larger curricular structure to support it.

One answer is re-separating the chunks. I.e., teach some media production in one place and some media theory some place else. Of course it’s all Eternal September stuff with no curricular follow-up or follow-through. I mean there aren’t many English departments where students can systematically develop digital rhetorical/compositional knowledge, skills… let’s call it phronesis, in, say, the way they can march through a series of literary-historical periods. That said, if there were some follow-through structure, then at some point you could start to think about how the construction of emerging genres is a structural-environmental conversation we’re having with these digital nonhumans. In other words, even for those who weren’t going in a scholarly direction in relation to these questions, there is usable knowledge to be gained here.


30 years on…

2 May, 2018 - 19:10

30 years ago, I was an undergrad and just starting a job working for a start-up family business in the nascent IBM PC clone market. We assembled computers and sold them on to retailers. We distributed hard drives and other components. We consulted with small businesses to provide them with IT solutions for point-of-sale, inventory control, accounting, etc. Those of you who were there know the drill. Monochrome monitors, C prompts, no mouse, no windows, 8088 CPUs, RAM measured in kilobytes, 10MB hard drives the size of a kid’s lunch box. Could I have envisioned 2018? Sure, sort of. I mean, I read Neuromancer.

I was also an English and History double-major at Rutgers. They were the two largest majors in Rutgers College at the time (the Arts and Sciences college of the university in New Brunswick). Shakespeare, Detective Fiction, Arthurian Romance, The Crusades, World War 1, America in Vietnam: these were all English and History classes with hundreds of students enrolled in giant lecture halls: semester after semester, year after year. Maybe it’s still that way at Rutgers. If so, they’re a bit of an outlier.

Imagining the future/present of computers would have been easier, I think, than imagining the demise of the humanities. In the end the two were bound together. Not because computers are necessarily anti-humanistic but rather because the humanities, and especially English, were born in the early 20th century with an expiration date. Despite the hypothetical possibility that English Studies need not be bound to print technology and culture, in the end, that’s what has happened. And there’s nothing wrong with the humanistic study of print culture. I’m sure it will carry on, in some fashion, for a very long time. It just isn’t going to be central to how we understand communication as it operates in our living culture.

So it makes me think about 30 years forward. Assuming we still have something like today’s tenured professors at universities (a future that is far from guaranteed in America), I am confident there will be faculty who research the contemporary cultural, political, and professional practices of communication, in whatever media prevail. In short, there will be professors who extend the rhetorical tradition into the media ecologies in which they and others live and work. As for the rest of English Studies, who knows? I would guess some smaller version–akin to classics, art history, or philosophy today–will exist.

And it probably won’t take 30 years to get there.

In part, I’ve been thinking about this because I’ve been idly following the vlogger Casey Neistat recently in his efforts to imagine this new business of his called 368. Basically, it’s a kind of school… sort of. It’s a place where vloggers and podcasters might come, gain access to tools and support, learn how to up their production game, and get business advice in terms of marketing and finding an audience. In a recent vlog, Neistat made the astute analogy between contemporary vloggers and Buster Keaton in that they are both working in an emergent medium/genre and trying to figure out what is possible and how to achieve it. His business model seems essentially based on this observation and the opportunity that lies in figuring out how to become the MGM of YouTube (or maybe more accurately United Artists).

I suppose in part I’ve thought of my own work as a less entrepreneurial/commercial and more scholarly version of this objective: to invent/discover/study rhetorical practices and genres for an emerging media ecology. That’s me as a member of the Florida school who was raised by wolves and a copy of Ulmer’s Heuretics read by moonlight.

I will admit that I’ve never been the best professor for the student who wants to be told what to do. Maybe that sounds like a fake criticism, like saying in an interview that your greatest weakness is that you work too hard or that you’re too honest, but I really don’t mean it that way. There’s a place and time for direct instruction, and I try to give it, but I can admit that’s not my strong suit. I’ve always come more from the perspective of saying “Here are some tools. Find something interesting about them and give it a go. If the whole thing falls apart, I don’t really care as long as you learned from it.” It’s a fine pedagogy, in its way, but it really only works with students who have some intrinsic motivation related to their work. I’m capable of enough reflection to know that’s my own version of a mini-me pedagogy, because that’s how I learn. I find something I care about and then I bang and grind away until I get somewhere.

I think it’s premised on the notion that while we think it is unwise to try to reinvent the wheel, trying to figure out how one invents something like that, taking your own journey through invention… well that’s what learning is to me.

What does this have to do with looking 30 years back and forward? Good question. I suppose it has to do with my basic plan for moving forward–claiming an interest, banging, grinding, experimenting, inventing. It’s the opposite of institutional/disciplinary humanistic methods, which are fundamentally homeostatic.


what is digital professional communication? (the video)

20 April, 2018 - 10:42

Having asked students in my classes to experiment with video, I took on the task myself to make a video where I think about this question and the course I’m teaching in the fall. At this point I could offer various caveats regarding first attempts and such but to be honest I had quite a bit of fun making this and getting back to digital composing (though you can’t really go back, especially not with digital media).


what makes professional digital communication interesting?

18 April, 2018 - 12:28

First, a tangent. Over the last week I’ve been part of a listserv conversation that reprises the now familiar question about how English Studies majors should change (or not). As I noted there, this has become a familiar genre of academic clickbait, like this recent buffoonery from the Chronicle of Higher Ed. Among other things, I pointed out that, from a national perspective, the number of people earning communications degrees (which was negligible in the heyday of English majors 50-60 years ago) surpassed the number getting English degrees around 20 years ago. Since then Communications has held a fairly steady share of graduates as the college population grew, while English has lost its share and in recent years even shrank in total number, as this NCES table records. In short, students voted with their feet and, for the most part, they aren’t interested in the curricular experience English has to offer (i.e., read books, talk about books, write essays about books). Anyway, in such conversations the prospect of teaching professional writing, technical communication, and/or digital composing is often raised. The predictable response is a rejection of such curricula on the grounds that it is instrumentalist, anti-intellectual, and generally contrary to the values of English Studies, both in terms of literary studies and rhetoric/composition. Though I have no interest in defending the kind of work I do, there’s really a more important response. First, nothing is going to “save” English. It’s over. Second, by over, I mean it will just be small, serving 1-2% of majors. It will probably remain larger than Math or Philosophy, and no one is saying those fields aren’t valuable. English can be small and valuable.

That said, I do find amusing the evidence such conversations (and clickbait articles) offer of the narrow utility of the intellectual capacity afforded by disciplinary thinking in English Studies. For example, I’m teaching a course called professional digital communication this semester and look to do so again in the fall in an online format. What is it? I suppose you could imagine it to be an instrumental tour of how-to’s for various business-related digital genres: how to make a brochure website, how to design a powerpoint slide, how to use desktop publishing to write a report, how to make a professional web portfolio, how to write a professional email, etc. But think about these three words. Professional. Digital. Communication. What are they? Asking what communication is opens the entire field of rhetorical study. And digital? What does that comprise, from technical answers to histories and cultural values/associations? How does “digital” modify “communication”?

However, I actually think it is “Professional” that is the most confounding. At first glance, it should be simple. It should just mean the kind of communication (i.e., the genres, I guess) that “professionals” use (and, in this case, are “digital” somehow). Well… basically all workplace genres are “digital somehow,” even if that only means they are composed in MS-Word. I suppose the implication is that professional also indicates some technical facility with digital tools beyond the typical office suite. These can, somewhat clumsily, be divided into two categories. The first contains genres that have been softwarized (e.g., reports, technical manuals, printed materials like brochures) but that existed as genres 40 years ago. The second contains born-digital genres and genres that have been significantly transformed by their softwarization (e.g., the way an instruction manual might become a how-to video/screencast or a brochure becomes a brochure website). So the first category might get one thinking about the role of XML/DITA in organizing content in large technical databases or visual communication principles deployed in InDesign or similar desktop publishing software. The second category is more elusive as the genres are fluid: video, podcast, social media, game, infographic, mobile app, website. Of those maybe the last has a stable genre. Furthermore, the changing nature of the work carried out by these professionals, the “adhocracies” in which they often work (to use Clay Spinuzzi’s term), and the continual churn of the technology make it very difficult to define “professional.” In short, there’s a lot to investigate in those three words, which is why there is extensive scholarship across rhetoric, professional-technical communication, and communications on these subjects.

But, in my experience, the most challenging part of running a course like this is the pedagogical shift into learning through cycles of experimentation and reflection. Part of what I do is say “We will read scholarship from these fields for the purpose of understanding how it might inform the development of our own practices.” So we do read and talk. But mostly we are engaged in experimental composition. Through our experimentation with various digital tools and genres we aim to understand what “professional digital communication” might be, with the hope that our readings provide some useful terminology and apparatus for doing so.

And for me that’s what makes professional digital communication interesting. It’s not so much reading the scholarship and writing scholarly responses. (Though that interests me too; I am an academic after all.) Instead, it’s the doing of it, the composing, and the insight those experiences give me into the research we do and vice versa.

Categories: Author Blogs

The empty space of the academic presentation

9 April, 2018 - 17:02

So there’s a fairly good chance you know more about Casey Neistat than I do. He’s something of a YouTube sensation with over 9 million subscribers. He also had an HBO series (I guess you’d call it). In my “copious spare time,” I’ve been hunting around, trying to catch up on the world of digital composing that passed me by while I was sentenced to several years as a WPA, and Neistat is someone I’ve only recently (and belatedly) encountered. Below is today’s episode in Neistat’s newly revived daily vlog on his efforts to build something (not quite sure what) in a space at 368 Broadway in Manhattan.

But this post isn’t really about that. It’s about this particular episode from an angle to which many academics could relate (is that the right preposition, “to”?).


This episode sees Casey traveling to Montreal to give a presentation. We see nothing of the presentation. Instead we see the antics of travel. In particular we see his struggles with his motorized scooter-luggage combo.

This relates to my earlier post on “meatspace meetspace.” (BTW, I love how my browser complains that meetspace isn’t a word but is mum about meatspace.) Whether you’re headed to a conference where you’re one of 100s giving presentations or giving an invited talk somewhere, your primary experience is not about the presentation; it’s about all the miscellanea around it: the travel, the hotel, the food, socializing, etc. That’s what this particular episode captures.

I may regret saying this, but I think I’d love to see thousands of 10-minute videos of my colleagues’ travails going back and forth to some BS conference. I’m thinking those would be far more compelling, far more likely to convince me to take interest in their work, than 15-20 minutes of their reading a paper with bulletpoint slides.

Categories: Author Blogs

On the importance of deep mixture density networks and speech synthesis for composition studies

28 March, 2018 - 08:11

Eh? What’s that?

I’m talking about AI approaches to the synthesis of speech on your smartphone and related devices. I.e., how does Siri figure out how to pronounce the words it’s saying? OK. But what does that have to do with us?

Another necessary detour around the aporias of disciplinary thought… This is really about recognizing the value of computer simulation in articulating the possibility spaces from which reality emerges. Put in the most general terms, if you want to know why one thing is composed rather than another, why there is something rather than nothing, you need a way of describing how one particular thing emerges from the virtual space of possibilities where many other things were more or less probable. Simulation is a longstanding concept in the humanities, mostly through Baudrillard, but the intensification in computing power changes its significance. As DeLanda writes, in Philosophy and Simulation, 

Simulations are partly responsible for the restoration of the legitimacy of the concept of emergence because they can stage interactions between virtual entities from which properties, tendencies, and capacities actually emerge. Since this emergence is reproducible in many computers it can be probed and studied by different scientists as if it were a laboratory phenomenon. In other words, simulations can play the role of laboratory experiments in the study of emergence complementing the role of mathematics in deciphering the structure of possibility spaces. And philosophy can be the mechanism through which these insights can be synthesized into an emergent materialist world view that finally does justice to the creative powers of matter and energy.

So this post relies at minimum on your willingness to at least play along with that premise. It is, as DeLanda remarks elsewhere, an ontological commitment. You may typically have different commitments which of course is fine. Within a realist philosophy like DeLanda’s the value proposition for an ontological commitment is its significance rather than its signification, by which he means it’s less about its capacity to represent/signify Truths and more about its capacity to create capacities that make a difference (its significance).

In this case, we have the simulation of speech. Basically what happens (and basic is the best I can muster here) is this: Siri’s voice is constructed from recorded human speech. That speech is divided up into constitutive sounds, and then the purpose of speech synthesis is to figure out how to recombine those sounds to make natural-sounding speech. [n.b. A common error in this conversation is to identify the semblance between a computer and a human at the wrong level: to assert that the human brain is like a computer. However, I don’t think anyone would suggest that humans operate by having a database of sounds that they then have to probabilistically assemble in order to speak.] While humans don’t form speech this way, we do obviously have a cognitive function for speaking that is generally non-conscious (exceptions being when we are sounding out an unfamiliar word, learning a new language, etc.). Generally we don’t even “hear” the words we read in our minds (though I bet you’re doing it right now, just like you can’t not think of a pink elephant).
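To make that probabilistic recombination a little more concrete, here is a toy sketch of my own. It is purely illustrative: real systems (including the hybrid unit selection guided by deep mixture density networks that Apple describes) operate on acoustic feature vectors, not the single made-up numbers below.

```python
# Toy sketch of concatenative speech synthesis as probabilistic selection.
# All numbers and names here are invented for illustration.

# A tiny "database" of recorded units: each sound has candidate recordings,
# each summarized by one made-up acoustic feature value.
unit_db = {
    "h": [0.2, 0.5],
    "e": [0.4, 0.9],
    "l": [0.3, 0.6],
    "o": [0.1, 0.8],
}

def synthesize(phonemes, db):
    """For each sound, greedily pick the candidate recording whose feature
    is closest to the previously chosen unit's feature (a stand-in for
    minimizing 'join cost' so the splices sound smooth)."""
    chosen = []
    prev = 0.0
    for p in phonemes:
        best = min(db[p], key=lambda feat: abs(feat - prev))
        chosen.append((p, best))
        prev = best
    return chosen

print(synthesize(list("hello"), unit_db))
# → [('h', 0.2), ('e', 0.4), ('l', 0.3), ('l', 0.3), ('o', 0.1)]
```

The thing to notice is that nothing in this selection process knows, or needs to know, what “hello” means; it only minimizes acoustic mismatch.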

One thing that is clear in speech synthesis is that the process that seeks to approximate the sounds of “natural speech” does not know the meaning of the words being spoken or need to know that the sounds being made are connected to meaning or even that meaning exists. It is a particular technological articulation of Derrida’s deconstruction of logo-phonocentrism, whose heritage he describes as the “absolute proximity of voice and being, of voice and the meaning of being, of voice and the ideality of meaning” (Of Grammatology). Diane Davis takes this up as well, writing “it is not only that each time ‘I’ opens its mouth, language speaks in its place; it is also that each time language speaks, it immediately ‘echos,’ as Claire Nouvet puts it, diffracting or laterally sliding into an endless proliferation of ‘alternative meanings that no consciousness can pretend to comprehend’” (Inessential Solidarity). None of that is to suggest that meaning does not exist or even that the words Siri speaks are meaningless. No, instead it leads one toward a new task: describing how the mechanisms (or assemblages, to stick with DeLanda’s terms) for signification and significance are separate from–though certainly capable of relating to–the assemblages by which speech is composed.

But getting back to speech synthesis. I’ve been clawing my way through a couple pieces on this subject like this one from Apple’s Machine Learning journal and this one coming out of Google research. This is highly disciplinary stuff and at this point my understanding of it is only on a loose conceptual level. However, I’m trying to take seriously DeLanda’s assertion regarding “the role of mathematics in deciphering the structure of possibility spaces,” as well as his claim that “philosophy can be the mechanism through which these insights can be synthesized into an emergent materialist world view that finally does justice to the creative powers of matter and energy.” It is that last part that I am pursuing and which, at least for me, is integral to rhetoric and composition.

Here however is my hypothesis. Despite the arrival (and digestion) of poststructuralism in English Studies in the last century, rhetoric and composition remains a logo-phonocentric field. The digital age (or software culture as Manovich terms it) has put serious pressures on those ontological commitments (and that’s what logo-phonocentrism ultimately is, an ontological commitment). The mathematical description of the possibility spaces of speech synthesis and the subsequent simulation of speech are just one small part of those pressures, a part so esoteric as to be difficult for us to wrap our minds around.

But what happens when we start disambiguating (decentering) the elements of composition that we habitually unify in the idea of the speaking subject? To return to DeLanda here as I conclude:

The original examples of irreducible wholes were entities like “Life,” “Mind,” or even “Deity.” But these entities cannot be considered legitimate inhabitants of objective reality because they are nothing but reified generalities. And even if one does not have a problem with an ontological commitment to entities like these it is hard to see how we could specify mechanisms of emergence for life or mind in general, as opposed to accounting for the emergent properties and capacities of concrete wholes like a metabolic circuit or an assembly of neurons. The only problem with focusing on concrete wholes is that this would seem to make philosophers redundant since they do not play any role in the elucidation of the series of events that produce emergent effects. This fear of redundancy may explain the attachment of philosophers to vague entities as a way of carving out a niche for themselves in this enterprise. But realist philosophers need not fear irrelevance because they have plenty of work creating an ontology free of reified generalities within which the concept of emergence can be correctly deployed. (Philosophy and Simulation)

I would suggest an analogous situation for rhetoricians. Perhaps we fear irrelevance in the face of “reified generalities” that form our disciplinary paradigms. What happens when not just “voice” or “speech” is distributed but expression itself becomes described as emerging within a distributed cognitive media ecology?

In any case, that’s where my work is drifting these days and it was useful for me to glance back toward the discipline here to get my bearings vis-a-vis some future audience I hope to address.

Categories: Author Blogs

distributed deliberation and Cambridge Analytica

19 March, 2018 - 10:11

One of the major stories of the weekend has surrounded the interview with Christopher Wylie, former employee turned whistleblower of Cambridge Analytica. Here’s that interview if you haven’t seen it.

It’s good to see this story getting attention, but it’s also something we’ve basically known for a while, right? For example, here’s a NY Times op-ed from right after the election talking about how the Trump campaign used the data from Cambridge Analytica to target voters. Or you can watch this BBC news interview with Theresa Hong, who was Trump’s Digital Content Director, which is from last August. In the interview, she gives a tour of the office where she worked–in an office right next to where the Cambridge Analytica folks were working. If you watch that interview, right before the 3-minute mark she explains how people from Facebook, Google, and YouTube would come to their office and help them. They were, in her words, their “hands-on partners.” Unless she’s straight up lying about that, which would seem pretty weird in the context of this video where she otherwise gleefully recounts her role in an information warfare campaign, then it’s essentially impossible to believe that Fb didn’t know what Cambridge Analytica was doing.

The funniest part of the recent news cycle is when the newscaster turns to the expert and says “do you think this affected the outcome of the election?” Hmmmm…. do you think the Trump Campaign spent $100M+ ($85M on Fb advertising alone) in order to not affect the outcome?

So that’s the news. Now, here’s my part. Let’s be good humanists and start with a straight dose of Derrida and the notion of the pharmakon. In considering the pharmacological operation of media, beginning with writing, one might investigate the cognitive effects emerging from technologies. As I’ve written about before, Mark Hansen picks up on this in Feed Forward; indeed, it is really the thesis of that book:

Like writing— the originary media technology— twenty-first-century media involve the simultaneous amputation of a hitherto internal faculty (interior memory) and its supplementation by an external technology (artifactual memory). And yet, in contrast to writing and all other media technologies up to the present, twenty-first-century media— and specifically the reengineering of sensibility they facilitate— mark the culmination of a certain human mastery over media. In one sense, this constitutes the specificity of twenty-first-century media. For if twenty-first-century media open up an expanded domain of sensibility that can enhance human experience, they also impose a new form of resolutely non-prosthetic technical mediation: simply put, to access this domain of sensibility, humans must rely on technologies to perform operations to which they have absolutely no direct access whatsoever and that correlate to no already existent human faculty or capacity.

If you’re wondering what a “non-prosthetic technical mediation” might be, well one example is the underlying technical operations that drive this Cambridge Analytica story. Non-prosthetic suggests, contra McLuhan, media that are not “extensions of man” (sic).

Think of it this way (and this is a little slapdash but should get the idea across). There’s always a price to be paid to gain access to new capacities. As Hansen suggests, with writing you give up interior memory for artifactual memory. With photography and then film we extend artifactual memory into the visual but at the cost of access to unmediated experiences. Think of selfies or Don DeLillo’s “most photographed barn in America,” or Walter Benjamin’s remark that film introduces us to unconscious optics. Hansen’s 21st century media, what I would think of as networked, mobile digital media, offers a range of capacities (I won’t attempt to enumerate them) but at what cost? Basically everything. We give the network everything that we know how to give.

It strikes me that you can think of the development of media technologies as an incremental distribution of human cognition. This only works because cognition is always already relational and distributed. I.e., the biological capacity for thought emerges from an existing environmental/ecological platform–we think only because there are things to think about and with. I don’t want to go down the rabbit hole right now, but the ultimate conclusion is that we have become, are becoming, interwoven with our digital selves: politically, psychologically, affectively, cognitively and so on in just about any dimension you can imagine.

And as the saying goes, with these Cambridge Analytica revelations we aren’t seeing the beginning of the end of this story but rather the end of the beginning. We can make laws, create tools, and try to educate people, but, without falling for techno determinism, human cognitive capacities are shifting and nothing short of turning off history is going to change that. I don’t think the nature of the shift is pre-determined, but we cannot go backward and we cannot stand still.

Distributed deliberation is all over this story. It’s in the mechanisms of the apps in which FB users answered questions to figure out which Hogwarts school they belonged in (or whatever). It’s in the processes by which that data was collected and transformed into psychographic profiles. It’s in the way those profiles were organized and targeted with specific messages by the Trump campaign. It’s in how those messages were promoted and made visible to those users by social media platforms like Facebook. It’s in the way those messages were then further spread through those users’ networks of friends. Fundamentally distributed deliberation, particularly as it applies to digital media ecologies, is the way nonhuman information technologies–from bots to server farms to algorithms–participate in the evaluation of information (making judgments for us) and in the making and circulating of deliberative arguments. They don’t do it alone. Humans are part of the system. Sometimes, certainly as in the case of the figures in these news stories, in very intentional ways. However, that’s just dozens of people. Tens or hundreds of millions more participated by sharing data that was harvested unbeknownst to them; millions were targeted by these messages; and thousands worked for companies doing innocent jobs helping with customer service or maintaining servers, without whom none of this was possible (you could keep “thanking” people until the orchestra at the Oscars passed out).
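For what it’s worth, the first links in that chain can be caricatured in a few lines of code. Everything below (the data, the names, the crude “profile” model, the messages) is invented for illustration; real psychographic modeling is vastly more elaborate. The point is only to show how deliberative judgments get delegated to procedures.

```python
# Toy pipeline: quiz answers are harvested, reduced to a crude "profile,"
# and a message is selected to target that profile. All data invented.

users = {
    "alice": {"quiz_answers": [1, 1, 0, 1]},   # 1 = agreed with a prompt
    "bob":   {"quiz_answers": [0, 0, 1, 0]},
}

messages = {
    "fearful": "They are coming for what's yours.",
    "hopeful": "Together we can fix this.",
}

def profile(answers):
    # Stand-in for a psychographic model: fraction of agreements.
    return "hopeful" if sum(answers) / len(answers) > 0.5 else "fearful"

def target(users, messages):
    # No human ever deliberates over which user sees which message.
    return {name: messages[profile(u["quiz_answers"])]
            for name, u in users.items()}

print(target(users, messages))
```

Trivial as it is, the sketch shows the shape of the thing: the “deliberation” over which appeal each person receives happens entirely inside the procedure.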

We are not helpless in the face of this, but we do need new rhetorical practices. And that may sound hopelessly academic and disciplinarily solipsistic, but I would argue that it is rhetoric that gave us some modicum of agency over speech and writing and other media over time. And that’s what we need now: to invent new rhetorical tools and capacities.

Categories: Author Blogs

4C’s and the rhet/comp slasher

17 March, 2018 - 12:00

This is not the working title for my academic murder mystery, so feel free to take it if you like.

No. It’s about a growing conversation over the role of rhetoric in composition studies, and the emergence–at least as perceived by some–of 4Cs as a composition studies conference and not (or at least less so) a rhetoric conference. In phrasing it that way, I hope that I am emphasizing that the trends at the conference are an effect of a larger disciplinary (or is it now inter-disciplinary?) evolution.

Undoubtedly it’s been a long time coming. Some might say that it is a division that has been baked into composition from its formation more than a century ago. Or, we might look at the 1980s when the solidification of rhet/comp in the form of phd programs brought with it debates over the historiography of the field and the “cultural turn.” [I’d also point to the arrival of PCs, the internet, and what Manovich terms the softwarization of culture, but that’s a subject for a different day.] And since then we’ve had a proliferation of specializations, which I think you could find in the language of job ads, the networks of journal article citations, the birth of journals, book series, and conferences, and so on. My department’s new certificate in Professional Writing and Digital Communication is one small example of that. It’s certainly not composition studies or pedagogy. It’s not rhetoric. It’s not cultural studies. It’s not even technical communication exactly. Sure, it touches all those things–all these fields abut one another to some degree–but it’s something else.

The current conversation, as I’ve encountered it, is that rhetoric (e.g., history of rhetoric, rhetorical theory, etc.) has slowly disappeared from CCCC. Meanwhile RSA membership has expanded. I’m not sure if the conference is growing but I do think there’s a sense that some scholars who might have viewed CCCC as their home conference and organization a couple decades ago now look at RSA instead. RSA now has about the same number of panels as Cs.

I honestly don’t know if that’s a good thing or a bad thing. You might compare this with MLA. The MLA conference is clearly interdisciplinary. It doesn’t include much rhetoric but it does include language departments along with English literary studies. These days the conference is about half the size it was when I was first going on the market. I think that may be because there are fewer jobs and fewer institutions interviewing at MLA rather than there being fewer panels. But with attendance in the 5-6K range, it’s about 50% larger than 4Cs, which I think is in the 3-4K range. MLA is one day longer. It has over 800 sessions. 4Cs this year is over 500, so again MLA is roughly 50% larger. I guess if you tack on ATTW, which always runs the day before 4Cs, then you get closer in size. I suppose my point is that, hypothetically, the conference could grow to MLA size and perhaps address this trend if it wanted. That’s hypothetical though. I’m sure there would be many logistical challenges to doing so.

More important though is the question of whether or not we need to be all in the same space.

Just thinking about this from my own scholarly perspective (and figuring many have analogous situations), there’s one part of my work that is really outside of rhet/comp or even English studies that draws on media study, digital humanities, new materialist philosophy, and tangentially a bunch of other stuff like cognitive science, engineering, etc. And then inside of rhet/comp there are dozens of people whose work is very close to mine and hundreds more that are nearby or coming into digital rhetoric as graduate students. It’s probably impractical to keep close track of all the scholarship that self-identifies as either “digital rhetoric” or “computers and writing.” It certainly is for me when I’m also following these other extra-disciplinary conversations. So really in doing my scholarly work, my focus necessarily has to start here.

If one thinks the primary purpose of a conference is to learn about what other people who are doing scholarship like yours are up to and then do some networking with them, then, at least for me, 4Cs is pretty inefficient. There may be nearly 4K attendees at Cs but if there are ~100 people I’d want to catch up with–people whose research impacts mine directly–then fewer than 20 of them were at the conference. If you think about a conference like Cs as a place to get some slice of the broader picture of composition studies, then maybe it works well for that. IDK. I mean, do people do that? I know many people say they always try to go to at least one panel that’s sort of random. I do that too, including this year. And that’s fine, but really only as a supplement to that primary task of connecting to people who aren’t random.

Again I want to reiterate that this isn’t a criticism of Cs. I certainly don’t want the job of organizing that beast. I have zero interest in making an argument along the lines of there should be more people like me at Cs! I think that’s an unworkable argument at scale for everyone who identifies as rhet/comp.

I actually think these trends point to a more interesting problem. I think we may be in a paradigmatic crisis of sorts in the sense that I don’t know what we share as scholars–methods, objects of study, foundational assumptions, research questions? And we don’t need to share those things, but if we don’t then in what sense are we connected? By a shared history, I guess, but I don’t know if that’s enough, especially as those shared moments are receding from living memory.


Categories: Author Blogs

Composing in and with data-driven media #4C18

16 March, 2018 - 07:04
[Image: China social credit. Credit: Kevin Wong, Wired]

[A text version of my presentation from yesterday]

At the beginning of the century, Lev Manovich identified five principles of new media, the last of which he termed “transcoding.” Transcoding describes the interface between cultural and computational layers. In part, transcoding observes that while digital media objects appear to human eyes as versions of their analog counterparts–images, texts, sounds, videos–they are also data and as such subject to all the transformations available to calculation. Such familiar transformations include everything from cleaning up family photos to making memes to share online or even changing the font in a word document. As the quality of video and the computational power available to average users increases, such transformations have also come to include altering videos to make people appear to say things they haven’t said or even putting someone’s head on another person’s body.
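A trivial sketch of what transcoding means at the computational layer (the image and transformation here are my own toy example, not anything from Manovich): what we see as a picture is, to the machine, an array of numbers open to calculation.

```python
# A 2x3 grayscale "photo" as the machine sees it: just numbers (0-255).
image = [
    [10, 200, 30],
    [40, 250, 60],
]

def invert(img, max_val=255):
    """Invert pixel values: one of the endless transformations
    available once media exist as data."""
    return [[max_val - px for px in row] for row in img]

inverted = invert(image)
print(inverted)  # → [[245, 55, 225], [215, 5, 195]]
```

The same arithmetic that inverts a photo scales up, with enough data and computing power, to swapping one face for another.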

Perhaps unsurprisingly much of the early exploration with this video technology has been with fake celebrity porn. That’s certainly NSFW so I’ll leave it to you to investigate that at your own discretion. The point is that the kinds of media we are able to compose are driven by our capacity to gather, analyze, and manipulate data.

The other part of transcoding, however, points to the way in which data-driven, algorithmic analysis of user interactions shapes our experience of the web from Netflix recommendations to trending Twitter hashtags. In recent months, the intersection of these two data-driven capacities–the ability to create convincing fake media and then spread it across online communities–has become the subject of national security concerns and national political debate. I’m not here to talk specifically about unfolding current events but they do offer an undeniable backdrop and shape the situation in which these rhetorical processes can be studied.

Certainly there will be technical efforts to address the exploited weaknesses in these platforms. However, computers are by definition machines that manipulate data, and, as long as these machines operate by gathering data about users and using that data to create compelling, some might say addictive, virtual environments, there will be ways to exploit those systems. After all, one might say these digital products are designed to exploit vulnerabilities in the cognitive capacities of humans, even as they also expand them.

Within this broad conversation, my specific interest is with deliberation. Classically, deliberative rhetoric deals specifically with efforts to persuade an audience to take some future action and, as Marilyn Cooper observes, “individual agency is necessary for the possibility of rhetoric, and especially for deliberative rhetoric” (“Rhetorical Agency” 426). This is not a controversial claim. Essentially, in order for deliberative rhetoric to work, one’s audience must have the agency to take an action. More generally, deliberation requires a cognitive capacity to access and weigh information and arguments. Regardless of whether those arguments come in the form of logical deductions or emotional appeals, the audience still requires the capacity to hear, evaluate, and act on them. However, in emerging digital media ecologies the opportunities for conscious, human deliberation are increasingly displaced by information technologies. That is, machines make decisions for us about the ways in which we will encounter media. In some respects, one can view this trend as inevitable and benign if not beneficial. Without Google and other search engines, for example, how could any human find information on the web? One might even look at recommendations from a subscription media streaming service like Netflix or an online store like Amazon as genuine, well-intentioned efforts to improve user experience, though clearly such designs also serve corporate interests. Similarly, changes to social media experiences such as Facebook’s massaging of what appears on one’s feed or the automatic playing of videos might improve the value of the site for users or might be deliberate acts designed to sway future user actions. 
Ultimately though, the increasing capacity of media ecologies to record and process our searches, writing, various clicks and other online interactions–to say nothing of our willingness to have our bodies monitored from biometrics to our geographic movements–produces virtual profiles of users which are then fed back to them and reinforced.

To address these concerns, drawing upon a new materialist digital rhetoric, I will describe a process of “distributed deliberation.” This process references the concept of distributed cognition. Distributed cognition is not meant to suggest machines doing “our” thinking for us but rather to describe the observable phenomenon in which humans work collectively, along with a variety of mediating tools, to perform cognitive tasks no individual human could accomplish alone. Distributed deliberation works in the same way. It is useful to think about this in Latourian terms. That is, through the networks in which we participate we are “made to act.” That is not to say that we are necessarily forced to act but rather that we become constructed in such a way that we gain the capacity to act in new ways. For example, through their participation in a jury room or a voting booth citizens are made to deliberate in ways that would not otherwise be possible. However, that is a little simplistic. While we may only be able to vote in that booth, there are many agents pushing and pulling on us as we deliberate. Typically it is far more difficult to discern the direction in which agency flows. As Latour observes, “to receive the Nobel Prize, it is indeed the scientist herself who has acted; but for her to deserve the prize, facts had to have been what made her act, and not just the personal initiative of an individual scientist whose private opinions don’t interest anyone. How can we not oscillate between these two positions?” (An Inquiry into Modes of Existence, 158-9). That is the oscillation between facts demanding certain actions and the agency of the scientist. For Latour, the resolution of this oscillation lies ultimately in the quality of the resulting construction, which, of course, is just another deliberation and one that requires an empirical investigation, the following of experience.
To put it in the context of my concern, as a Facebook user hovering the mouse over the buttons to like and then share a news story placed into her feed, how do the mechanisms of deliberation swarm together and make the user act? Is the decision whether to share or not a good one? Furthermore, while we can and must pay attention to the experience of the human user, so much of the work of deliberation occurs beyond the capacity of any human to experience directly. As such, in charting distributed deliberation we must also investigate the experience of nonhumans, which will require different methods, and that’s where I will turn now.

Understanding the specific operation of those nonhuman capacities is a task well suited to Ian Bogost’s procedural rhetoric, which he describes as “the art of persuasion through rule-based representations and interactions rather than the spoken word, writing, images, or moving pictures. This type of persuasion is tied to the core affordances of the computer: computers run processes, they execute calculations and rule-based symbolic manipulations” (Persuasive Games, ix). Though Bogost focuses on the operation of persuasive games as they seek to achieve their rhetorical goals through programming procedures, he recognizes that procedural rhetoric has broader implications. Selecting a movie or picking a route home with the help of Fandango or Google Maps may be minor deliberative acts, but they offer fairly obvious examples of how deliberation can be distributed.

Yelp, for example, combines location data with a ratings system and other “social” features such as uploading reviews and photos, “checking in” at a location, and providing map directions. These computational processes compose a media hybrid and expression with the capacity to persuade users. Certainly one might be persuaded by the text of a review or a particularly pleasing photo; text and image play a role here as they might in a video game. But the particular text and photos the user encounters are the product of a preceding procedural rhetoric that decides which businesses to display. It is not only restaurants and other businesses that are reviewed but the reviews and reviewers as well, which serve as part of a process that determines which among the dozens or hundreds of reviews a business might receive one is first to see. In the case of Yelp, users write reviews and rate businesses on a 5-star scale. Yelp then employs recommendation software to analyze those reviews and weigh them. Does it matter that they claim the recommendation software is designed to improve user experience and the overall reliability of the reviews on the site? Maybe. What is key here, however, is that such invisible procedures undertake deliberations for us. The fact that it would be practically impossible for users to undertake this analysis of reviews independently, or that users are still presented with a range of viable options when looking for a local restaurant, for example, does not alter the role that such procedures perform in our decision-making process.
In Yelp one finds a digital media ecology that includes juxtaposed multimedia (e.g., photos, icons, text, maps), computational capacities (e.g., linking, searches, location data, data entry for writing one’s own reviews), algorithms or procedures (e.g., ranking businesses, evaluating and sorting reviews), and media hybrids (e.g., combining with mapping applications to provide directions or linking with your phone to call a business). Indeed one might look at Yelp itself as a media hybrid with its own compositional processes, rhetorical procedures, and genres.
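To make the abstraction concrete, here is a toy sketch of the kind of review-weighting procedure described above. This is not Yelp's actual algorithm, which is proprietary; the signals (`reviewer_history`, `helpful_votes`) and their weights are invented purely for illustration of how a sort order, rather than the user, can decide which review is encountered first.

```python
from dataclasses import dataclass

@dataclass
class Review:
    stars: int             # 1-5 rating
    reviewer_history: int  # prior reviews by this user (invented trust signal)
    helpful_votes: int     # endorsements from other users (invented signal)

def weight(review: Review) -> float:
    """Score a review by hypothetical trust signals; higher scores surface first."""
    trust = min(review.reviewer_history, 50) / 50        # cap the history signal
    endorsement = review.helpful_votes / (review.helpful_votes + 5)
    return trust + endorsement

def rank_reviews(reviews: list[Review]) -> list[Review]:
    # The "deliberation" happens in this sort: the procedure, not the reader,
    # determines which review appears at the top of the page.
    return sorted(reviews, key=weight, reverse=True)

reviews = [
    Review(stars=5, reviewer_history=1, helpful_votes=0),    # new account
    Review(stars=2, reviewer_history=40, helpful_votes=12),  # established reviewer
]
ranked = rank_reviews(reviews)
```

Note that under these invented weights the established reviewer's 2-star review outranks the new account's 5-star review, which is exactly the kind of judgment the procedure performs on the user's behalf.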

The softwarization of media did not take off fully until personal computing hardware was powerful enough to run it. Social and mobile media obviously rely on the various species of smartphones and tablets. They require the hardware of mobile phone and Internet networks and server farms. Whole new industries and massive corporations have emerged as part of this ecology, and this means people: HVAC technicians keeping server farms cool, customer service representatives at the Apple Genius bar, engineers of all stripes, factory workers, miners digging for precious metals in Africa, executives, investors, and so on. It also involves a shifting higher education industry with faculty and curriculum to produce research and a newly-educated workforce, an infrastructure that relies upon these products to operate, and students, faculty, and staff who feed back into the media ecology. In short, a media ecology cannot be only media just as rhetoric cannot be only symbolic. While digital media ecologies create species with unique digital characteristics, they cannot exist in a purely digital space any more than printed texts can exist in a purely textual space.

As such, whatever rhetorical power the algorithmic procedures of software might have, their most powerful rhetorical effect might lie in the belief users have in the seeming magic of a Google search or similar tools. However, as Bogost observes, algorithms are little more than legerdemain, drawing one’s attention away from the operation of a more complicated set of actors:
If algorithms aren’t gods, what are they instead? Like metaphors, algorithms are simplifications, or distortions. They are caricatures. They take a complex system from the world and abstract it into processes that capture some of that system’s logic and discard others. And they couple to other processes, machines, and materials that carry out the extra-computational part of their work…SimCity isn’t an urban planning tool, it’s a cartoon of urban planning. Imagine the folly of thinking otherwise! Yet, that’s precisely the belief people hold of Google and Facebook and the like. (“The Cathedral of Computation”)
Indeed, in some comic bastardization of Voltaire one might say that if algorithms didn’t exist we would have to invent them as a means of making sense of media ecologies and our role in them. That is, in the face of a vast, unmappable monstrosity of data, machines, people, institutions, and so on intermingling in media ecologies, the procedural operations of software produce answers to questions, build communities, facilitate communication, and generally offer responses to our requests, even as they shape those questions, communities, communications, and requests. In other words, the distribution of deliberation and other rhetorical capacities among the human and nonhuman actors of digital media ecologies is necessary and inevitable. Describing and understanding the complexities of these relations as they participate in our deliberations, rather than simply celebrating or bemoaning the apparent magical abilities of the tools we employ, becomes the first step toward building new tools, practices, and communities that expand our rhetorical capacities.

It’s worth noting that powerful entities are already intentionally at work on these goals. A year ago, when the concerns about Facebook and fake news were really taking off, Mark Zuckerberg published a manifesto declaring that “In times like these, the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us.” In a related vein, as has been widely reported, the Chinese government has a different idea of the role distributed deliberation can play in its creation of a social credit system. I don’t know about you, but I’m not especially sanguine about the notion of Facebook engineers building the social infrastructure for a global community. I’m even less enthused about the possibility of other nations importing these Chinese practices, as if we do not live in a thoroughly monitored environment as it is.

I won’t pretend there are any easy answers, any simple things one can slip into a lesson plan. The task begins with recognizing the distributed nature of deliberation and describing such processes to the extent that we can. This includes paying attention to the devices we keep closest to us and understanding the particular roles they play. And it means inviting those nonhumans into our disciplinary and classroom communities. Just as universities, departments, and faculty are capable of creating structures that encourage conformity to existing rhetorical and literate traditions, they might conversely create structures that are more open to these investigations. This might mean finding ways to use rather than restrict access to digital devices in the classroom and creating assignments that push students toward creative uses of the collaborative and distributed cognitive potential of digital networks rather than insisting on insular and individualized labor. It might mean asking questions that cannot be answered by close reading or setting communication tasks that cannot be accomplished by one person writing a text. From there the classroom has to proceed to create solutions to these tasks rather than assuming the answers already exist, which is not to suggest that many answers are not readily available, but the emergent quality of digital media means, in part, that new capacities can always be considered.


Categories: Author Blogs

academic nostalgia for meatspace meetspace

14 March, 2018 - 10:19

Here we are once again, another national conference. I’m waiting out a three hour delay on my first flight with the second rescheduled, so fingers crossed. And let’s not even get started about the return trip. I don’t want to curse my luck that badly!

Tell me again why we do this? I mean I understand it’s a professional obligation. It’s part of my job to go to conferences. So I suppose the short answer is that it’s just another onerous, unproductive part of the academic bureaucracy, another holdover from a past century. I guess that’s enough of an answer. After all, we all have stupid things we need to do as part of our jobs. So as I sit in this terminal at least I can console myself with the fact that I’m getting paid.

But let’s try to summon up the fantasy of academic freedom one last time and imagine that we could actually set the terms of what we considered valuable in terms of scholarly work.

There are two obvious alternatives to the national conference meatspace meetspace. The first is the fully online conference. We stay home. We do videoconferencing to discuss papers/presentations that are posted online. The second option really just adds the dimension of a regional meet-up (i.e., something you could comfortably drive to and maybe not even need a hotel for). E.g., maybe there’s a 3-4 day conference with 1 or 2 days where people get together.

What would be the point of adding that part? I don’t know. What’s the point of a meatspace meetspace? The short answer is socializing. It’s certainly not the presentations or the discussion following the presentations, which, for whatever value you can attribute to them, can easily be replicated online. So the value all lies in the informal dimensions of the conference: the serendipity of meeting a stranger who shares your research interests and becomes a new colleague/collaborator; catching up with colleagues; and maybe some esprit de corps of being surrounded by so many people in your discipline. It’s catching a drink or meal with friends you only otherwise see on Fb.

In other words, people enjoy socializing. I also enjoy socializing at conferences (for certain values of enjoyment). But if that’s what it’s about, then maybe we could just look to get group rates on cruises or something.

Setting aside the minor irritations of air travel and hotel stays (which compare to 21st century teleconferencing in the same way that air travel compares to 19th century train travel), there is an expanding list of drawbacks to national conferences:

  • The direct costs, especially for graduate students and contingent faculty;
  • The indirect costs of time taken away from other work;
  • The political and material concerns that now arise regularly with each convention location;
  • The carbon footprint.

I don’t know. I’m sure I’ve been writing roughly this same blog post for at least a decade. I like to go to conferences. I’m looking forward to enjoying my time in Kansas City. Maybe someone will have something interesting to say about my presentation… probably not but whatever. Hopefully I will catch up with friends.

But seriously… To me, the practicality of national conferences will eventually wane. It’s a when not if scenario. It’s really only a matter of tweaking some technical matters and figuring out the social mechanics.



Categories: Author Blogs

rhetoric of podcasts, podcasts of rhetoric

5 March, 2018 - 11:39


One of the very best things about no longer running the composition program is having the time and mental space to get back to digital rhetoric in a more practical and compositional way. This has got me thinking, in this post, about podcasting in terms of its various rhetorical structures but mostly about the kinds of podcasts that are out there in my field.

Before I started at Buffalo, I regularly taught classes on digital production and did a fair amount of it myself. My 2010 Enculturation article, made the year before I became composition director, included a video. And in 2008, right before I left Cortland, I’d published an article in Kairos about teaching podcasting in my professional writing courses there. All that dropped away for me when I took on the WPA job here at Buffalo. That’s a story for a different time, but it’s part of the context for where I am now. The other part is that we’ve started a new graduate certificate in professional writing and digital communication, which has provided some real exigency for me to get my hands dirty again with production.

You can do your own Google search or take my word for it that podcasts have become increasingly popular. A decade ago when I was teaching this stuff, we didn’t have the smartphones and mobile data networks we have now that make following podcasts so easy. (OK, here’s one quick stat from this article: these days 42 million Americans listen to a podcast every week.) Personally I like podcasts. I also like audiobooks. I use them for entertainment purposes. (Mostly I listen to podcasts about soccer.)

Given this, I think there are several good reasons to teach students in a professional writing curriculum how to podcast, including:

  1. Though they may never podcast professionally, this is a significant genre in our media ecosystem about which professional communicators need to have an understanding.
  2. Creating an audio recording is a fairly simple task, at least at a basic level. But it also opens a path for becoming more sophisticated. (I.e., minutes to learn, a lifetime to master.) In this sense it’s a practical entry point into a larger field of sound rhetorics.
  3. Many students I encounter have a fairly narrow usage of media and an even narrower experience as composers. So podcasting becomes one in a series of experiments with media production that begins to alter our relationship to composing from one that says “I’m a writer,” meaning I put words in a row on a paper, to something more capacious.

My challenge though is that I always want to do the things I ask our students to do. And so I come to podcasting. The real challenges with podcasting are rhetorical and compositional. How does one create a compositional space in which one produces hopefully interesting podcasts on a semi regular basis?

So what kind of podcasting is there in rhetoric and composition? Well, this FB page tracks several of the more prominent ones. Among the active ones on that page there’s Rhetoricity produced by Eric Detweiler, Rhetorical Questions produced by Brian Amsden, Eloquentia Perfecta Ex Machina produced by the St Louis University composition program, and the CCC Podcasts produced by NCTE. I’m sure there are more. I’ve listened to a few episodes of each, and they all follow an interview format, with some interviews more formal than others. So really you have someone new in each episode. That’s an entirely familiar and sensible format.

Another common format I encounter in listening to soccer podcasts is basically punditry/fan banter. There you have two or more regulars who discuss the events of the last week. I haven’t really seen that in rhetoric/composition, perhaps because we don’t really have “events” to discuss. Obviously you could discuss the rhetorical angles of current events, which would be a kind of application of rhetorical scholarship. It might be a good kind of podcast for someone to make, but that’s probably not an angle I’d want to take.

Part of the issue then is the periodicity of scholarship or at least the periodicity of the communication of scholarship. It maybe should be noted that the latter is not “natural,” of course. The pattern of article publication, the length of articles, the length of conferences, the pattern of the academic conference schedule: these are as much a by-product of the material affordances of mid-20th century communication technologies as they are anything to do with the qualities of the objects and practices we study or our methods. Clearly there’s some feedback in that loop and cybernetic/homeostatic impulses are at work.

So I’m wondering if it is possible to shift that periodization. I still think that blogging is an opportunity to do that, to have a more ongoing and less precious conversation about ideas and discoveries than what publishing allows (not as a replacement but as an enrichment). It never really became that. Maybe podcasting is a better medium for that. A few people get together and talk about their work, what they are reading, what’s happening in their classes. Is that interesting? I don’t know. I think almost anything has the potential to be interesting or boring depending on the audience, the situation, its production/performance/composition.

Like this blog for example… to quote Mitch Hedberg, “I played in a death metal band. People either loved us or they hated us… or they thought we were OK. “

Categories: Author Blogs

the affects of gun control

26 February, 2018 - 13:15

Conversations in America about gun control, public space, and safety–which are related but not equivalent–are grounded in affect, cultural/ideological identity, and ontology. I’ll swing around to the ontological element later, as that’s what is most relevant for my work, but I’ll stick with the more familiar elements first. Most strong opposition to gun control begins with the following:

  • Affect: an affection or love for guns; a pleasure found in gun ownership and/or use.
  • Cultural/ideological identity: being a gun owner is part of who you are. Criticism of gun ownership or use is an attack on your identity, your personhood, as well as the culture in which you participate.

Conversely, most strong support for gun control reflects little or no affection for guns (perhaps even antipathy toward them) and no personal or cultural sense of identity tied to gun ownership or use (and perhaps even identification with the rejection of guns). However, I think it is fair to say, those individuals tend not to define themselves by their opposition to guns to the degree that others identify with guns. Instead, their support of gun control is just a part of a larger bundle of cultural identifiers. Because affects are a matter of intensity, there are, of course, people who feel less strongly in either direction, though I would hesitate to put them “in the middle” because that suggests they feel strongly about some third position that is somehow a mixture of the extremes. If we are going to insist on some abstract political geometric model, I wouldn’t suggest a line but rather some multi-dimensional space with others at a tangent to these two opposed positions.

To make any kind of deliberative argument, there needs to be some common sense of a future. And put quite simply, we don’t have one. On the one side you have a vision of an America where everyone carries a gun. Public spaces are secured by individuals. And the only laws are natural/divine ones. The core of this ontological position is straightforward. My abilities to think and act are divinely granted qualities for which I bear an obligation to g-d. If I have a gun available to me, then I have more power and agency than I have without a gun. Even if I am, in some respects, in greater danger, that’s always the case with power. As an individual I am always responsible for my exercise of power, and the degree of power I have doesn’t change that. I want a future society founded on powerful individuals acting responsibly and being held accountable for their actions.

The opposing ontology understands humans not as individuals with divinely granted powers but as social and historical animals. Here capacities for thinking and acting emerge from collective action through the assemblages, networks, and institutions we construct and maintain. Because I see my agency as relational rather than inherent, I would view any decision about gun control as an act of the state. That is, where the gun advocate would see restricting guns as a state action but not restricting guns as permitting a natural/divine condition to exist unregulated, I would view either decision as the operation of social-historical structures. Either course results in the creation of new conditions for agency but there would be no way to imagine a lack of gun control as the assertion of a more “free” condition in absolute terms because there are no absolute terms.

Hypothetically, from this second ontological perspective, one could argue in favor of arming citizens, though I am having a hard time formulating such an argument. You’d have to argue that increasing citizens’ ability to harm and kill one another directly produced a desirable set of social conditions. Or maybe you could argue that this negative effect of arming citizens was worth the cost in order to address some other problem or fulfill some other desire. I don’t think those are easy arguments to make if one is starting from an otherwise neutral position. You’re either arguing that 30K people dying every year is a good thing or that 30K people dying every year is a price worth paying to achieve _____. And it’s not easy to make those latter arguments in terms of safety or hobbies. If you’re worried about protecting citizens from crime, then you would seek to mitigate the social conditions that lead to criminal activity rather than arming individuals. If you wanted to allow for gun hobbies like target shooting or hunting, then you could create carefully regulated spaces for such actions where guns are stored and ammo handed out more parsimoniously than opioids are today.

As I point out above though, from this perspective the gun control issue is bundled with a larger constellation of matters that begins with the premise that agency is created and destroyed through collective social action. Basic income, health care, education, environmental protection, equality before the law: through such collective actions agency can be increased. Of course the devil is in the details. But from this perspective collective inaction is just another form of collective action. The key from this ontological perspective is that political action is about working collectively to make things better, even though sometimes we fail to do so.

With that in mind, the key lies in the collective, democratic restructuring of social assemblages which results in a shift in affect. In our democracy, it’s hardly about changing the minds of your political opponents. It’s almost entirely about activating the undecided and non-participating voters. E.g., if you’re in favor of gun control, how many of those folks can you convince to come out and vote against anyone who has an “A” grade from the NRA? In how many current red districts can support for the NRA become a political liability?


Categories: Author Blogs

“How Hard Do Professors Work?” Why do you want to know?

8 February, 2018 - 14:08

Variations of this question have become a genre unto themselves, as this recent article in The Atlantic exemplifies. The article takes the occasion of some “Twitter battle” to revisit this topic. But really, why do you want to know? Are you just curious? Is my job so very mysterious?

  • maybe the high cost of college has led you to turn your focus on how I spend my time and how much I get paid
  • maybe you’re an administrator or a general hobbyist with an enthusiasm for spreading Taylorist efficiency in the workplace
  • or maybe you just hate academics for one reason or another.

SUNY salaries are public data, so you could track mine down if the topic really interested you. It’s pretty close to the average mentioned in the article, which is ~$80K/yr. I’m not sure how there’s a complaint about that. It’s not a ridiculous amount of money. It’s market-driven, like everyone else’s salary. If you want to change the nature of our economy, we all want to hear the plan (to quote John Lennon), but otherwise… As for reducing the cost of college, well if you’re a NYS resident in a household earning less than $125K then you’re not paying tuition at a SUNY school like mine. If you’re paying full in-state tuition, then it’s less than $7K/yr. So 4 years tuition is less than the average price of a new car. Do you go into the car dealership and tell everyone in the auto industry they should be making less money so that you can buy your car for less?

In many ways, I don’t think being a professor is that different from many professions that require creative thinking and address open-ended problems. That is, in theory you could spend every waking hour working, especially if you consider the time you spend pondering a problem while in the shower or walking the dog or driving to the grocery store as “work.” That’s one of the issues noted in the article, how do you define work? This can be particularly true for professors who are not only occupied by their jobs (i.e., it’s their occupation) but are also pre-occupied (e.g., they can’t stop sharing work stuff on Facebook… or blogging about it). Early on in my academic career, when my kids were young, I made a decision to draw some firm boundaries on this, because I grew up pretty much without a dad and I didn’t want that “cat’s in the cradle” bs for my kids. (I know not every academic is in a situation where they feel empowered/able to do that, but that’s my story.)

But for simplicity’s sake, let’s take the most limited, commonplace notion of work and restrict it to time spent working directly on accomplishing a clearly defined task. 40-50% of my time (2-3 days a week) is teaching, grading, class prep, office hours, student emails, etc. I try to spend about 30-40% (1-2 days per week) on research, reading, and writing for publication. That leaves me with a few hours per day that are devoted to the service aspects of my job (committee work and so on). That’s during the academic year. In June and July, I spend more time on research. I give myself a chance to learn new things I might want/need to incorporate into my teaching or scholarship. It’s looser for sure. Then I take a vacation. And then in mid-August it starts over. During the school year, I spend about 60% of my work time on campus. Otherwise I’m working from home.

Pretty boring, huh? Certainly not mysterious or really all that weird. There are plenty of people who have jobs where they work from home part of the time, that are busier in certain times of the year than in others, or where the nature of their work is seasonal for some reason.

Still the drive to answer this question moves on. As this article suggests, it must be researched!

The research could also help paint a clearer picture of how academics divvy up their time—how many hours are spent teaching students, doing research, attending conferences, frittering away in meetings. That information could prove especially useful at universities that are rethinking the demands they place on professors and striving to enable faculty to spend more time in the classroom.

This week’s viral Twitter battle over the workload of professors was a fun, insider debate, but it also opened up serious questions about the purpose of college.

I don’t get that. In part because it’s obvious. I’m at a research institution where I teach two courses per semester. Other colleges that are less research intensive (which expect their faculty to produce less scholarship) ask professors to teach more courses. So if one wants faculty to spend more time in the classroom, then one could start hiring faculty with more teaching-intensive workloads, and many universities do that.

I understand that some folks may not see the value in the time I spend on research or professional development. That’s fine. Come by my office and we’ll talk if it really matters to you. On the other hand, I may not really understand the value of the work you do either. But that doesn’t matter because your employers do and they’re the ones paying you to do it.

See how that works?

Categories: Author Blogs