Author Blogs

The post WPA life: one year on

Digital Digs (Alex Reid) - 15 August, 2018 - 09:41

From the summer of 2010 through the summer of 2017, I served as the director of composition at UB. In the last year I’ve gone back to being a rank and file professor in the department. I’ve been thinking about writing this reflection for a little while but wanted to mark the occasion of a full year on. In my seven years on the job, I certainly learned a lot, and I’d like to think we accomplished a great deal (not that there isn’t always more to do). I’m not going to go into that here as this is not an epideictic post.

I will say that on a daily basis the job was basically an exercise in pissing into the wind. It was labor-intensive, stressful, anxiety-inducing, frustrating, and infuriating in equal measure. Though WPA jobs certainly vary from school to school I’d strongly warn against pursuing one if what makes you happy is accruing plaudits.

I think the easiest way to measure the professional impact–at least on me–of WPA work is in relation to scholarly production. As a WPA I had summer administrative duties (for which I received a stipend) and normal academic year duties (for which I received course release), but basically the expectations for scholarly production for me were no different from those of my other colleagues. I can compare my production over those seven years with my productivity at Cortland from 2002-2009. It's worth noting that while at Cortland I worked a 4-3 and then a 3-3 teaching load. During that time at Cortland I published 7 articles and a book. At UB, I published 12 articles. I also completed a book manuscript, but that's still in the works. So the way I look at it, I was a little less productive as a WPA than I was at Cortland, where I was a more junior scholar (which makes publishing more challenging) with a higher teaching load. In short, being a WPA had a significant impact on my scholarly productivity, which I don't think is surprising to anyone who's done the job, especially if your scholarship is not related to WPA work or even composition studies (as mine is not).

The real effects though were more personal/mental. I don’t mean to suggest that it was continually miserable being a WPA. It really wasn’t. But over time I just got used to the regular role in my life of addressing complaints, solving logistical problems, strategizing, arguing/persuading/wheedling, and so on. I got used to planning to spend my day doing one thing only to wake up to some BS email that sent me down one rabbit hole or another. But beyond that was the constant awareness that:

  • the adjuncts and others I employed were getting screwed over
  • the TAs in my care were too
  • the administration was always giving me the runaround
  • the students weren’t getting what they needed or at least what they might have gotten if things were better, and
  • there was always more that could be done.

It was like a filter of unhappiness over the lens through which I viewed the world, one that I’d gotten used to and forgotten was there until one day a few months ago I realized that it was gone.

But anyway, the good news. Thanks to the aptitude of my successor, I was able to extricate myself smoothly from the daily operations of WPA life. But it took my mind more than a semester to go through decompression. I think that, combined with the fact that I had completed my manuscript around the same time as I was ending my WPA stint, left me with an open space that has taken some time for me to figure out. So it's really just been in the last four or five months that I've started moving down some new paths: doing new research, teaching new courses, picking up some new technical skills and renewing some old ones. Not for nothing, I also took off 50 pounds this summer, so I feel like a new man with literal and figurative weights lifted from me.

I’m looking forward to seeing what the next year brings.


Categories: Author Blogs

the future of the English major, or “Hey Mister, where do you want these deck chairs?”

Digital Digs (Alex Reid) - 18 July, 2018 - 16:53

The Association of Departments of English (ADE) released a report today on the changing English major. As those who tuned into the last episode (or the last 1000 episodes) of this program will remember, there has been a steady decline in English majors (going back to the early 90s when measured as a share of the total degrees granted) and a sharp decline in recent years (in both share and raw number). So that’s the impetus for the report, which mostly focuses on how departments have responded to this through curricular changes.

Before I get to that though, there's an important topic to address. Many will argue, and I believe with merit, that the plight of English majors and departments is largely a product of the rise of neoliberalism over the last 40 years. Higher education overall has been adversely affected by a sharp decline in public support (which is why college costs students so much more than it used to), but also English and the other humanities have been devalued in that context. College has been redefined as a means to an end in a career, and majors oriented in very specific ways to specific careers have proliferated. There are far more majors now than 30 years ago. The extent of these neoliberal effects varies from institution to institution, as it's partly a reflection of local institutional choices, state governments, etc.

What that means to me is that the historical contexts that benefited English and attracted majors in the 50s-70s changed. Some of those changes are very specific neoliberal strategies. Some are the changing demographics of students and their changing attitudes regarding college education. As I always argue here when this topic comes up, a significant part is the shift in literacy practices. It is not mere coincidence that these declines follow developments in digital media and culture. Whatever argument anyone wants to make about the continuing value of the traditional literacy practices at the heart of English curricula, it is self-evident that students do not agree. And of course the fact that most students don't agree isn't really an argument for changing our discipline… But if that's where we're headed (and it is always where we are headed when this topic is raised) then really what was the point in doing this study?

But anyway, to the report…

So I’ll just focus on the recommendations. Here’s a snippet:

Media studies, including digital work, has begun to find its way into many English programs, although to a degree less than one might expect. We recommend that, where appropriate, departmental curricular discussion expand its attention to media and digital studies.

Rhetoric and composition continues to be an important component of the English major, and enhanced opportunities for advanced study in writing and areas such as technical and professional writing are becoming well established. We recommend that departments give continued attention to writing studies and to its connection to other parts of the major.

There are a couple other paragraphs in that report about these fields (which tend to make up nearly 50% of the tenure-track jobs every year), which is roughly the same amount of space as the report spends wringing its hands over the "Place of Shakespeare." Most of the report is about literary history. Why is that? Well, not to confuse the deck chair metaphor in the title, but it's because literary history is the anchor dragging English departments down, and the literary scholars driving departments seemingly have no choice but to hold onto it.

Now I’m not saying there is no role for literary history, but the report clues one into the problem. For example “One trend among departments is to think of the English program not just as a program in literature but more expansively as one in English studies, a term intended to show self-aware hospitability to media, composition, rhetoric, film, cultural studies, and other interests that reside in English departments at all types of institutions” (my emphasis). WTF is hospitability? What kind of relationship does this presume among faculty within a department? Is this meant to suggest that there’s a breathable atmosphere? Or that someone’s going to offer me a drink while I’m visiting my own department?

But don’t worry. It gets worse. In fostering this “hospitability,” “Accordingly, tracks and concentrations within the major are becoming increasingly common… The track and concentration models have appeal, since they respond to the shape of the profession as it exists. Yet these structures are not without risks. Specialization may be forced prematurely on students, or the major may balkanize and conflicts may harden between the fields.” Conflicts may harden? If the hardening doesn’t subside after four hours, should we consult a physician? Imagine being in a department where there’s simply no role for you in the major. Ultimately the report recommends the consideration of tracks, but it remains worried.

I don’t really mean to blame the report writers. I think they’re accurately describing English departments. The recommendations they make strike me as sound but unimaginative and probably 20 years too late. They cannot get outside of the idea that literary studies is the foundation of English. As they write at one point, “The literary (creative works, authors, periods, movements, genres, tropes and figures, and the like) remains a defining feature of the English curriculum that distinguishes the discipline from other textually oriented fields in the humanities, such as history or philosophy.” They can’t get outside the literary, print-centric condition. Either they can’t get there in their minds or they are just helplessly constrained by what is. I can’t tell.

Anyway, the real thing that has English screwed isn’t this lit/writing division. It’s the discipline’s total failure to address digital media and culture 20 years ago. And I’ll end on the funniest line in the whole report. “For digital studies, skepticism still lingers in some quarters about the field’s usefulness.” OK. Meanwhile on the topic of the “usefulness” of literary studies… I think I feel some hardening.


Categories: Author Blogs

Manuel Delanda in rhet/comp?

Digital Digs (Alex Reid) - 16 July, 2018 - 10:52

Being somewhat in between projects right now, I’ve started working on an article that, at least at this point, begins with exploring the value of DeLanda’s assemblage theory for rhetoric and composition. DeLanda often comes up on this blog and has been an important thinker for me for 10-15 years at least. His earlier works were interesting to me but it was Intensive Science and Virtual Philosophy (2002) that really demanded my attention as intersecting with and helping me develop my own thoughts at that time related to a Deleuzian posthumanism and new media.

However, aside from my own work, I find little treatment of DeLanda in rhetoric and composition. There are a couple passing references here and there. Those are typically to A New Philosophy of Society, which is a useful text and also one that is likely more accessible to humanists than Intensive Science, Philosophy and Simulation, Assemblage Theory, and some of the other texts that are more heavily mathematical and scientific in their subject matter. As far as I can tell, you won’t find more than a handful of citations of DeLanda across all the traditional major print rhet/comp journals combined. Most have none. His name starts to come up here and there in relation to the recent interest in new materialism, though he’s not often recognized as a central figure there, even though he coined the term in the mid-90s (as did Rosi Braidotti separately at roughly the same time).

One issue I’m not going to get into in the article that instead I’ll take up here is a speculation about why this is the case. I.e., why overlook DeLanda?

  1. The mathematical-scientific content. I already mentioned this, but it bears repeating. It’s not just that DeLanda talks about science. Science studies does that, and rhetoricians can obviously handle science studies. It’s that DeLanda isn’t really doing the critique of science thing. He’s developing a realist philosophy that is consistent with (though expands beyond) empirical science. I will not claim to understand all the science he references. It’s complicated, disciplinary stuff. Usually I get the gist of it, especially with some googling. But it’s hard to digest. Plus it’s a challenge to some entrenched disciplinary views to take up science that way. Maybe that’s a turn-off.
  2. Dense philosophical content. To some extent rhet/comp has taken on the Derridas et al. of the world. At least there’s been a place for such work. But that’s not everyone’s cup of tea. DeLanda is clearly building on Deleuze (and Guattari), though since Intensive Science he’s developed his own version of assemblage theory. I think you combine that with the math-science stuff and it’s a kind of double-whammy. DeLanda’s aware of this himself, as he notes at the beginning of Intensive Science about the audience dangers of his work: “such a danger is evident in a book like this, which attempts to present the work of the philosopher Gilles Deleuze to an audience of analytical philosophers of science, and of scientists interested in philosophical questions.” Who knows if he ever reached that audience, but one can see how rhetoricians might not see themselves here.
  3. The “realist philosophy” controversy. The notion of proclaiming not only oneself but also Gilles Deleuze to be realist philosophers in the mid-90s might seem to be the height of absurdity. But that’s what DeLanda did. In that same introduction he describes realist philosophers as those “who grant reality full autonomy from the human mind, disregarding the difference between the observable and the unobservable, and the anthropocentrism this distinction implies.” Much of his work since then has gone on to articulate how a realist philosophy might operate without a reliance upon essentialism. To anyone who grew up intellectually bathed in postmodernism the idea of a contemporary realist philosophy seems simply out of bounds.
  4. The historical-material focus. Ours is a text-based field. Our lingering postmodern conceit is to focus on representation and ideology. DeLanda’s work has always been about energy and matter, whether the topic is bonds within atoms or the formation of trans-Atlantic trade routes. Though our discipline has no problem talking about material consequences when making critiques, we’re almost always beginning and ending with language in a kind of immaterial way, or at least in a way in which its materiality isn’t especially significant. I suppose that’s a consequence of being rhetoricians, but it’s also a way that makes DeLanda foreign.
  5. The nonhuman. And by this I mean that DeLanda’s work just isn’t anthropocentric. Of course it is in some respects. He is, after all, a human and likely has a central role in his own work. However, he tends to treat humans as just more assemblages, more individual singularities (i.e. actual, real, material, historical individuals), rather than as the central actors of history. I know one of the common concerns about new materialism is that it doesn’t grant humans enough agency or that it takes agency away or something along those lines. Personally I think it’s very strange that scholars imagine theories have this grand agency-granting power one way or another. This wouldn’t be a realist philosopher’s view. However, I would say this. One way that you can test if a realist philosophy has any value is by constructing practices based on its principles. That is, one way we can know the value of science is that its concepts inform technologies that actually work. That’s not to say that science is “true” but that it’s “true enough.” Still, rhetoric is such an anthropocentric discipline that it could be hard to take up theories that are less so, like DeLanda’s. However, this is the angle I’m taking up in my article–that the value in DeLanda’s realist-assemblage theory might be weighed by how its concepts lead to building things that work.

So those are some hypotheses as to why you don’t see much of DeLanda in rhetorical scholarship. Obviously I think that’s too bad, but one of my pet peeves about scholars is when they insist that others should be citing this or that person more. So I don’t mean to come off that way here. I will say one other thing by way of conclusion. I know that one other complaint about new materialism is that it’s a white male thing. There’s some merit to that (though really there’s a disproportionate number of white men all over the academy, so it’s not surprising to find it here too to some degree). I think maybe that complaint does a disservice to the many excellent non-male rhetoricians doing work in new materialism and posthumanism. I’d note, fwiw, that Manuel DeLanda–who I will maintain is a central founding figure of this area of inquiry–was born in Mexico in 1952 and lived there until arriving in NYC as a young man in 1975 (as near as I can tell from his biography). According to his Wikipedia page he did recently get a PhD through the European Graduate School (where he’s the Gilles Deleuze chair), but otherwise he has a BFA and began his professional life as an experimental filmmaker in NYC. I don’t think he’s ever been a tenured professor anywhere. In short, though he has unarguably become one of the most prominent philosophers of our time, he has done so by taking a singular path.

Categories: Author Blogs

the most stupid superintelligence possible

Digital Digs (Alex Reid) - 12 July, 2018 - 13:07

I’m reading Nick Bostrom’s Superintelligence as a kind of light reading tangentially related to my scholarly interests. If you’re unfamiliar, it’s basically a warning about the dangers of artificial intelligences spinning out of control. There are plenty of respectable folks out there who have echoed these worries—Hawking, Gates, Musk, etc. And Bostrom himself is certainly an accomplished, serious scholar. Superintelligence is not a hack-job, but more of a popular audience, entertaining treatment of a philosophical problem. There are plenty of places to read counter-arguments, suggesting the unlikely, even nonsensical, elements of this book.

For the sake of argument, let’s agree with the premise that IF AI superintelligence developed, we mere humans wouldn’t understand how that intelligence worked. Our relationship to that AI would be like the relationship between a bug and a human. I think that’s a safe assumption since we don’t understand how our own cognition functions (or even really a bug’s cognition for that matter). We wouldn’t understand how the AI thought, what it thought, or what capacities for action might arise from that thinking.

To get to this, I need to mention what Bostrom terms the “wise-singleton sustainability threshold,” which he defines as “A capability set exceeds the wise-singleton threshold if and only if a patient and existential risk-savvy system with that capability set would, if it faced no intelligent opposition or competition, be able to colonize and re-engineer a large part of the accessible universe.” He notes “One could even argue that Homo sapiens passed the wise-singleton sustainability threshold soon after the species first evolved. Twenty thousand years ago, say, with equipment no fancier than stone axes, bone tools, atlatls, and fire, the human species was perhaps already in a position from which it had an excellent chance of surviving to the present era.” The human analogy is an interesting one. In part it suggests that a superintelligence has already emerged (i.e. us) and one can follow out Bostrom’s alarmism regarding AI to think about how the human superintelligence might destroy the planet before an AI one gets the chance.

Indeed, right now it makes more sense to be concerned about humans than AIs, which is not to say there isn’t time in the day to worry about both I guess.

But let’s think about motivation—what would such an AI do/want to do? Bostrom admits to the difficulty of knowing this. It’s akin to a bug guessing at human motivations. Still he argues that there are a few intermediary goals that we can suspect: self-preservation, goal-content integrity (i.e. seeking to pursue goals, though they might change), cognitive enhancement, technological perfection, and resource acquisition. OK. Those all seem like rational/reasonable things to do. But if humans are the current, extant example of superintelligence then we can hardly say these things explain our motives or actions, either collectively or individually. I mean sometimes they do, but how would they explain my writing this blog post? It’s not self-preservation. It’s not the pursuit of a goal (except in a tautological sense of “I want to write a blog post so I’m writing one”). And I don’t see how it could be about any of the other three either. Why am I doing this? IDK. Ask my therapist. I want to. I enjoy it. I like the way thinking through a blog post feels.

Sure, an AI superintelligence would be far more intelligent than us, so maybe it would behave as Bostrom describes, but this strikes me more as being along the lines of what Bostrom thinks we would do if we were smarter. And I doubt that. Bostrom also writes of humans “Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.” Presumably the same might be said of an AI superintelligence. It might seem omnipotent to us, but it wouldn’t be. It would just be the stupidest thing that could outsmart humans.

One of the curious aspects of Bostrom’s imaginings is that for the most part there is only one superintelligence. Even when he considers the notion of a “collective superintelligence,” they’re all pieces of a single entity. I’m not really sure how a notion of motivation or even intelligence works in the absence of some relation. Now if the premise is that a superintelligent AI would be completely inscrutable to us, then I suppose it might do anything. But you can’t really spin that out into a book manuscript, can you? And I don’t think it would be wholly inscrutable.

Instead, I think it’s necessary to conceive of cognition and agency (i.e. motivation, will, goals, etc.) as emergent in relations to others. A “single” superintelligent being would have little or no cause to think or act. It would end up being no smarter than an amoeba. And really that’s what Bostrom’s monstrous AIs sound like. They are beings driven by a single arbitrary goal (calculating Pi or making paper clips) and end up destroying the world and sucking up all the resources in the available universe to achieve this end. So they grow tremendously in power and sophistication, adding all these new capacities, but never manage to escape from the inane task given to them at the outset by some lazy, unthinking human programmer.

Oddly enough, the problem Bostrom settles on—as near as I can tell—is the same one we have right now as humans. It’s not simply that we don’t know what core value should drive our actions but that language is finally insufficient to articulate such a value.

As you probably already know, it basically comes down to “try to do the right thing, even though I can’t tell you what the right thing will be and most of the time you won’t be certain either. Oh, and when you feel certain it’s probably because the action is fairly rote and inconsequential. If it isn’t rote and you’re certain, then you’ve probably missed something important.”

You don’t have to be superintelligent to know that, and if you fantasize that superintelligent beings will be more certain than us, well then we have very different notions of intelligence. On the other hand if you want to say that we shouldn’t build stupid but powerful killing machines capable of destroying the planet (that is, other than ourselves), then I agree.

Categories: Author Blogs

civility: “You keep using that word…”

Digital Digs (Alex Reid) - 2 July, 2018 - 12:23

I don’t think it means what you think it means.

The interesting thing about civility, even in its staid dictionary definition, is that there is this seemingly narrow crack that has become a gulf in American politics. One definition has to do with maintaining civil order and the other is the more familiar one having to do with politeness. Often they could go together, but not necessarily.

Let’s take this back to Aristotle for a moment. His virtue ethics are often brought up in the context of civility, though I don’t know that he specifically addressed that concept. Aristotle’s moral virtue is fundamentally defined as a kind of moderation, “a mean between two vices, the one involving excess, the other deficiency, and that it is such because its character is to aim at what is intermediate in passions and in actions.” And civility, particularly the second definition dealing with politeness, is often associated with a moderation of behavior. And sometimes, maybe in most cases, being polite and being moderate appear to be the same thing, but, again, not necessarily.

So now you’ve got these three terms: civility, moderation, and politeness. Of the three, politeness is clearly the most stylized and “rhetorical” (in a pejorative sense). Etymologically, politeness is about polish; it’s about appearance. Politeness moderates behavior inasmuch as it constrains actions to a certain range. However, the assertion that politeness is a virtue is harder to maintain. That seems far more situational/contextual, as politeness might provide a cover for all kinds of immoderate behavior. The most cliché example is probably cowardice, as when someone asserts that the reason they did not stand up for what they believed was right was because it would have been impolite to do so.

And of course this is exactly the kind of argument that could be made, and has been made, by those who have been accused of incivility/impoliteness. Indeed, to the contrary, one can easily argue that protesting and confronting politicians in a nonviolent manner are clear examples of moderate, civil behavior: moderate in the sense that clearly there are more extreme possibilities and civil in the sense of abiding by one’s responsibilities as a citizen toward the maintenance of a civil order under threat.

Unsurprisingly, the social media angle is of most interest to me as a scholar. I think it’s quite clear that social media continues to intensify political divisions, and I suppose the underlying question is how irreconcilable those divisions are. I wouldn’t plan on social media being a place where those divisions might heal. I am concerned (I think that’s the right word) that more obvious incivility is about to arise. I think there was a time, not that long ago under Obama, when the notion of the left was one that sought to expand civil rights but also continue to make space for conservative views (not that many on the right saw it that way). Now that the right is quite clearly intent on squashing civil liberties, there is an element on the left that is no longer willing to accept the more tempered views of the past. In other words, since at least the beginnings of the Tea Party movement, the right has had it in mind that the values (and often people) of the left have no place in their future vision of America. And now increasingly those on the left have a similar view of the right; and I’m not sure what the “civil” response is to a politics that is aimed at destroying you. The key question is to what degree these remain extremist views. And then what happens if/when they are not?

To end with a return to Aristotle: “the man who flies from and fears everything and does not stand his ground against anything becomes a coward, and the man who fears nothing at all but goes to meet every danger becomes rash.” It is courage that defines the moderate virtue.

Courage, of course, is not always civil/polite, but it can have everything to do with the defense of civility, of the civil order of a nation.

Categories: Author Blogs

ambient cybernetics of the noise floor

Digital Digs (Alex Reid) - 26 June, 2018 - 10:52

Sometimes you come across a term commonly used in other professions and it just strikes you as thought-provoking… or at least that happens to me, and it happened recently with “noise floor.” Maybe that’s a term you’ve heard many times before (if you’re an engineer or work with sound or maybe as an audiophile). If not, then basically the noise floor is the measurement of all the unwanted signals within a system/environment. Some of these are what we would typically call natural, ranging from background cosmic noise to wind, weather and so on. Others are human/cultural such as voices, body movements, breathing, etc. And there are artificial/technological noises including things like the hum of machines or electrical interference. Often the goal is to reduce the noise floor in engineering contexts, laboratories, and recording studios. Basically who wants more noise?

Of course we also talk about noise in cybernetic contexts in relation to signals and information. There one encounters the somewhat counter-intuitive notion that noise is an integral part of information and not simply an obstruction to communication. The first way to understand that is to think about information as relative to the receiver. If I write “George Washington was the first president of the United States,” then that’s probably not information for you. If I write that Denmark and France are currently scoreless in the eighth minute of their World Cup group stage match, then that might be news (if you were reading this in real time), but you aren’t, so it probably won’t be information when you read this (if it is, it’s because you don’t follow the World Cup and you probably don’t care). Since information is something you don’t know, it could easily be recognized as noise. Words spoken in an unfamiliar language are basically noise. Words in a familiar language but on a highly technical subject might also be noise.

But there’s another way of thinking about it which is maybe reminiscent of Heidegger (a la present-at-hand/ready-to-hand). E.g., when you’re driving your car it’s making noise that you probably ignore, but you know when your car starts to make a funny sound. Or you can think of that cliche horror movie line, “It’s quiet around here, toooo quiet.” Or, since I’m watching soccer right now, imagine watching sports without being able to hear the fans. So both of these ways of thinking about noise vs. information put me in mind of Rickert’s ambient rhetoric in considering the ecological/environmental conditions of rhetoric.

As such one might think of rhetorical practices as composing with/from noise. In conventional terms this is mostly done through sensory deprivation: e.g., read a book in a quiet place. In some digital media–video games and simulations–it’s done through immersion. I suppose the same may be said of traditional cinema, but now with streaming videos on different devices it’s less true, though I’m not really aware of anyone composing with that in mind. YouTube videos are done that way though. They are part of the now familiar digital marketplace of the attention economy.

At what point do the various social media streams, smartphone notifications, email alerts and so on become part of the noise floor? The digital analog of the hum of conversations in a mall food court? Perhaps at some point–not yet I think–we can listen to it like animals around a watering hole listening to the chatter of their community, as something comforting that can be safely ignored unless there’s a sudden change. Much as with sound engineering, the rhetorical engineering of the noise floor of social media ecologies begins with making judgments about which sounds are desirable and which are not and then proceeds through technical solutions.

This starts with thinking about what machines and software can do, but much as with sound engineering, one has to put the machines and software into the context of larger designed/built environments. E.g., do you carry your phone around with you in your house or do you minimize its potential to distract by setting it down somewhere? Once one gets to that level, whether the environments are private, workplaces, or public spaces, less obviously technological cultural processes also come into view. What laws, regulations, policies, habits, and expectations will shape these environments?

These are familiar questions. The potentially interesting thing in coming at them through the notion of the noise floor is that we don't have to think of this simply as an on/off question. The conspicuous silence of the smartphone turned off is "toooo quiet." But can we design that reassuring background hum? Fundamentally, technology companies want our eyeballs, so they may not have much incentive to design this way. IDK, it may be one of those killer apps waiting to be designed, one whose path to monetization will be as initially uncertain as Facebook's was a decade or so ago.

In the end though it isn’t just a design problem. It’s a human attitude problem. We keep thinking that these streams and notifications are information rather than noise, that we are missing out. Oddly, we don’t feel that way about the content of the hundreds of cable channels or millions of websites we never view. Managing to shift that perspective slightly to view social media more as background noise might help.

Categories: Author Blogs

the ends of digital rhetoric

Digital Digs (Alex Reid) - 22 June, 2018 - 12:16

Two personal data points: a meandering FB thread about the future of the Computers and Writing conference; another conference conversation over the implications of asserting that "these days" everything is digital rhetoric. It's a related observation, taken one step further, that leads Casey Boyle, Steph Ceraso, and James J. Brown Jr. to conclude in a recent article, "It is perhaps too late to single out the digital as being a thing we can point at and whose fate we can easily determine. Instead, the digital portends to be a momentary specialization that falls away and becomes eventually known as the conditions through which rhetorical studies finds itself endlessly transducing." So there are multiple ends (or senses of the word end) for digital rhetoric, both as a disciplinary specialization and as a discrete phenomenon (with the two being related, since as long as the latter exists there's reason for the former to exist). We can ask about the purposes/ends of digital rhetoric (as a field of study but perhaps also as a phenomenon). We can seek out the boundaries between digital and non-digital rhetoric (i.e. where digital rhetoric ends and something else begins). Finally we can think about those boundaries temporally, as Boyle et al. suggest with "momentary specialization."

I guess my first observation is that the fact that something might become pervasive doesn't necessarily mean that it ceases to be a subject of study/specialization. Just think about the ways we study race, gender, class, or culture–obviously these are all pervasive elements of human life. We identify many meaningful distinctions there, and one imagines significant objection to the idea that the transduction of rhetoric passing through such bodies portends the erasure of their study as discrete phenomena. But I don't think that's Boyle et al.'s point. Instead, the exact opposite is what might be suggested. If I may use this analogy, we have seen intersectionality result in a vast proliferation in the ways human rhetorical practices have been studied over the last couple decades. This results not only from the proliferation of identities but from their combination as well. The same is true of digital technologies and rhetorical practices. In this respect it becomes increasingly problematic to speak of a discrete, cohesive digital rhetoric, but no more problematic than it is to speak of rhetoric or writing in general terms.

As Boyle, Ceraso, and Brown also write, "The digital is no longer conditional on particular devices but has become a multisensory, embodied condition through which most of our basic processes operate," though they move from that description to "offering the concept of transduction to understand how rhetorical theory and practice might engage 'the digital' as an ambient condition" (a la Rickert). So these things are all true. We can describe (and study) digital media ecologies as ambient conditions. This involves first understanding rhetoric as a not-necessarily human phenomenon that can itself be ecological and ambient, which can in turn encounter and interact with digital media in a variety of ways, resulting in a digital-rhetorical ecological-ambient condition. At the same time, one can study and describe particular devices and/or practices as they operate in specific communities or for certain purposes.

I think that either of those approaches–studying ambient digital rhetorical phenomena or specific devices/practices–promises a sustainable area of specialization within rhet/comp (inasmuch as any of this is sustainable). So a journal like Kairos or a conference like Computers and Writing has a sustainable path forward, at least in terms of disciplinary logics (it's usually material support and logistical problems that are the main challenges to sustainability anyway). I think starting a phd specializing in digital rhetoric in 2018 makes at least as much sense as starting any other kind of English Studies phd in 2018. And, on a more practical level, the differences between vlogging and blogging, between web design and podcasting, between mobile apps and infographics (i.e. among all these transducing digital rhetorical practices) are where the curriculum will happen. So again, you can't just study "the digital." The field is proliferating.

I suppose one could say that’s where the field ends (though literary studies hasn’t ended as a field because there are dozens of specializations within it), but even then we’re a long way off. If/when there are a half-dozen once and future “digital rhetoricians” in your department, then we can talk about how each is really in a sub-specialization. I’m not holding my breath on that one.

Indeed I’d think the opposite future is just as likely to be true. Rather than “the digital” becoming pervasive and being subsumed again into a rhetoric that is implicitly but no longer explicitly digital rhetoric, the field of rhet/comp, along with the rest of English Studies, becomes subsumed into a humanistic study of the digital that, from our perspective, might be described as interdisciplinary. That is we become explicitly digital and implicitly rhetorical. And from there one will see new sub-specializations. But honestly I doubt either is probable. Almost all the internal energy and resources across English Studies, including in rhet/comp, are powerfully conservative. Almost all the external energy and resources are indifferent to English Studies at best; there’s nothing about us that screams good investment to them.

The upshot is that I think for at least the next decade we'll continue to see rhetoricians who specialize in the study of digital media while the vast majority continue their research with little thought to the implications of digital media for it. And on some level that's not only fine but totally reasonable; you can't study everything. And I suppose in the end that's the potential problem with digital rhetoric–if you start seeing it as "everything" (or at least a lot of things). But I wouldn't (don't) panic about it. It's always already intermezzo; you're always beginning and ending in the middle of something else.

Categories: Author Blogs

collapse porn: MLA edition

Digital Digs (Alex Reid) - 7 June, 2018 - 16:25

So two articles, one from MLA’s Profession helpfully titled “The Sky is Falling” and another in The Atlantic that suggests “Here’s How Higher Education Dies,” perhaps from some kind of sky impact. Put together one might wonder if the MLA and its disciplines might manage to hold on long enough to die with the rest of higher education.

This is what academics call optimism.

I don’t think higher education is dying. I still think Americans (and people around the world) view getting a college degree as their best strategy for getting and maintain economic prosperity. The problem is that that is a long-term view and the short-term risks are increasingly high, particularly the risks of student loan debt. Our general economic strategy has been to emphasize profiting in the short-term directly from students learning through making them pay tuition (and loan interest) rather than emphasizing profiting indirectly in the long term from the increased capacities of a better educated workforce. This kind of short-term profit-making version of capitalism combines with a willingness on the right to feed into anti-intellectual populism among their voters to cast higher education as a cultural foe. It’s really just incredibly stupid. It’s basically eating a bunch of your seed corn and then leaving the rest to rot. The result has been considerable damage to higher education, particularly at smaller and the more accessible public institutions (community colleges, comprehensive colleges, small liberal arts, etc.). However, I don’t think this means higher education dies.

Nevertheless this context doesn't bode well for the humanities, as we know all too well. I find it curious that there continues to be such a struggle to understand why students do not choose to major in English. It's because they don't want to. That should be obvious, but somehow we tend to reduce wanting to some rational decision-making process. Sure, many students attempt degrees they hope will lead more directly to specific well-paying careers, but many more do not. If you look at the NCES data, roughly the same percentage of students are getting engineering and business degrees now as 25-30 years ago. It's communications and psychology to which English has lost students. In 1991 the three fields were all roughly the same size, with ~50K degrees awarded apiece. Since then those two have doubled their numbers while English has shrunk. And no one's choosing those fields because of jobs or salary; in fact, their graduates earn less on average than English majors.

So the thing is, imho, while the general cultural-economic situation of higher education certainly doesn’t help us, that’s not our problem. Our problem is that students are rejecting the experience of English and/or the particular disciplinary knowledge we offer. Eric Hayot makes a similar observation in the Profession article when he writes “my guess is that the humanities are going to survive by expanding and extending their general interdisciplinarity, by realizing that the separation of disciplines produces appeals to certain kinds of expertise that at this point may not be enough to retain our traditional audiences. Our market has changed; we probably need to change with it.”

I think that’s fine as far as it goes, and maybe it goes far enough to attract some students into an elective or to a particular course in a gen ed curriculum. But I think he gets at the key issue, maybe unintentionally, further on: “The problem, that is, is not disciplinarity in general (economics, as I’ve said, is doing fine); the problem is humanistic disciplinarity, in this particular socioeconomic situation.” Now by “particular socioeconomic situation” I’m guessing he means to suggest something he believes can be reversed and probably will be reversed. However for me, I would define that particular situation as “not the 20th century,” which is, as far as I know, an irreversible situation.

But here is the good news(?). The problem isn't STEM or business. And the problem isn't that students can't get good jobs with English degrees. And the problem isn't disciplinarity itself. The problem/challenge is a paradigm shift within English Studies, and the complication is that the broader context of higher education likely means undertaking it with few resources. It may come as a surprise, but in this context the often separately-discussed challenges of graduate and undergraduate education come together. If we can shift the paradigm such that people with English phds have clearer value in roles beyond replicating the discipline, then we will simultaneously create a discipline that makes more sense to undergraduates, who will also seek to use their education for purposes other than replicating the discipline.

How should that paradigm shift? Maybe start by discovering how it formed in the first place. Then ask: how did psychology and communications beat English over the last 30 years? Was it something they did? Was it some larger cultural shift? How might we shift our pedagogical paradigms as well?

There’s no real option to collapse. People will require tertiary education to be successful in first-world economies. They will need to learn how to communicate in increasingly specialized/technical ways across a variety of media; they will require a cross-cultural understanding of aesthetic and rhetorical experience; they will need tools to address ethical concerns with others who might not share their cultural, let alone personal-embodied, context. There will be many professionals who make careers out of addressing these matters.

I think that’s what we were always doing, just in a rather ham-fisted way. I would suggest psych and comm beat us by going more directly at those matters than we have been willing to do. But I know we can offer something different/complementary to those social scientific approaches, especially when it comes to know-how.



Categories: Author Blogs

podcasting: basic rhetorical questions

Digital Digs (Alex Reid) - 4 June, 2018 - 07:22

As you may have seen, I posted a podcast yesterday. I'm finding that starting up a podcast poses rhetorical challenges similar to those of starting this blog many a year ago. Back then I was asking myself some basic rhetorical questions about audience, genre, and purpose. These are all one question in a sense, or at least different parameters of the same assemblage. Podcasts, like blogs, aren't really genres, or maybe they are genres that contain many genres, like books or essays. I'm not sure what the right term is for them, maybe platforms or media types. Whatever. The point is that podcasts do set up certain kinds of capacities, some of which are related to the particular composing technologies in use (e.g. what kinds of audio recording, production, and editing technologies one has available) and proficiency with those technologies, as well as the technologies of delivery/circulation. Then there are some general cultural-discursive practices and values that podcasts largely share. Listeners tend to use a limited range of technologies for consuming podcasts. Often the mobile capacities of those technologies (e.g. smartphones) are in play, as people listen while driving, jogging, walking the dog, etc. These practices put some general limits on the length of a podcast. There's probably also some argument to be made about the general attentional habits/capacities of people as well. There's a reason most tv shows, podcasts, and so on tend to be no longer than an hour.

Still, even within that loosely described field, there’s a near infinite number of possibilities. That might sound great, but it’s really vertiginous. Those possibilities are quickly and starkly reduced and organized through remediation–where podcasts connect back to the formats of tv and radio talkshows, documentaries, journalism, and in some cases to fiction and radio drama. Is a podcast a solo voice? Is it a dialogue or roundtable? Is it an interview? Are there field recordings? Diegetic ambient sounds? Non-diegetic sounds (e.g., music)?

With all that in mind as ways to speak in general terms about podcasting, one still has to decide on topic, audience, and the particular format from all the choices above. The typical (and I think quite sound) advice is to do something that interests you and that you are able to replicate. I think this starts with topic. There are things I’m interested in as a hobbyist–soccer, science fiction, fitness, etc.–and then there are my professional interests, which you know. With the latter, I could speak in a very academic way about my research or in a more pedagogical way, as I would to students. That would be similar to my blog.

However, I’m thinking about addressing the topics of digital rhetoric in a more journalistic/public discursive way. That too is a familiar podcasting genre and one can already see other academics doing that kind of thing. I’m starting with the solo podcast approach this summer because that reduces one layer of logistical complexity at the start, but I don’t think that’s desirable in the long term… if it turns out there is a long term.

Much like with blogging, though, a big part of my interest in podcasting is to gain insight into a subject of scholarly interest to me–digital composing–from the inside, by actually doing it. As I've been saying for a very long time (and I'm hardly alone on this one), rhetoricians–and the humanities in general–need to evolve their scholarly genres. We are so deeply fixed in the gravity well of text/print that we almost unquestioningly believe that rigorous academic thought must take the form of text, or at least that media which cannot replicate the "rigor" we associate with writing and reading texts are inherently lesser. In fact, for the most part, I don't think we even recognize that as a belief we hold; it operates more like a matter of fact. It's as if we've forgotten that publishing monographs is a relatively recent phenomenon in academic work (c. mid-20th century), one tied to a particular set of economic and technological conditions that have long since passed.

Categories: Author Blogs

Ep 101: Siri’s Voice

Digital Digs (Alex Reid) - 3 June, 2018 - 15:05

This is my first attempt at this, so I’d love your feedback.

And here’s a link I mention in the podcast: Dennis Klatt’s History of Speech Synthesis.

Categories: Author Blogs

distributed deliberation #rsa50

Digital Digs (Alex Reid) - 31 May, 2018 - 14:30

In the wake of 2016 US presidential election, many questions were raised about the role of Facebook in disseminating fake news. Mark Zuckerberg’s response, posted to Facebook, points to the challenge fake news presents.

Identifying the “truth” is complicated. While some hoaxes can be completely debunked, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted. An even greater volume of stories express an opinion that many will disagree with and flag as incorrect even when factual. I am confident we can find ways for our community to tell us what content is most meaningful, but I believe we must be extremely cautious about becoming arbiters of truth ourselves. (“I want”)

Perhaps unwittingly, Zuckerberg reveals the fundamental tension between content that users find "meaningful" and content that is factual. In addition, what we might traditionally call deliberative rhetoric, essays that employ evidence to make arguments about what should be done, falls into the nebulous space of "opinion," a space made even more uncertain because the facticity of the evidence supporting those arguments is often uncertain. Certainly one might reasonably argue that whatever precession of deliberation digital media accomplish, as readers we remain capable of consciously analyzing and evaluating the credibility of the media we encounter. We have not yet been brainwashed to accept whatever Facebook tells us: the ability of users to reject, almost reflexively, content that appears at odds with their own ideological convictions demonstrates that. However, Facebook's mechanisms seek to discern what each individual user might find "meaningful" and foreground that media, reinforcing the echo chambers users might deliberately create by hiding or unfriending those who disagree with them. Rather than mitigating our cultural and ideological biases, applications like Facebook work to show users a world they will "like," with the primary goal of keeping those users' attention on the website.

But the deeper problem is this. Even if we imagine all this as a primarily technological problem that will eventually be fixed by engineers (and I agree that requires a prodigious feat of imagination), we are still left in a situation in which machines have taken responsibility for a significant amount of the work that we have termed deliberation. And perhaps we are forced to acknowledge that given the sheer speed and volume of information, there is no other option for us.

This is the rhetorical condition that I term distributed deliberation, a term that nods toward Edwin Hutchins’ concept of distributed cognition. And what I want to focus on today is how one might go about inventing proleptic or anticipatory responses to distributed deliberation that ultimately result in an expansion rather than contraction of our rhetorical and cognitive capacities. And I’m going to take up a new materialist method and consider what it might offer us.

A new materialist digital rhetoric might describe a user-based, distributed-deliberative prolepsis in terms of population thinking. I'm going to discuss this as three steps. The first step is to reorient one's concerns away from the individual human subject and conceive of this challenge as an ecological one involving a constellation of human and nonhuman actors. Casey Boyle observes this in his discussion of smart cities: "Where former understandings of democratic organization relied largely on the techniques of communicating language for deliberating civic activity, the [smart city] looks to sensor technologies and big data methodologies to track and nudge realtime movements and conditions" ("Pervasive" 270-1). That is, civic concerns are not worked out through dialogue among individuals but rather through an expansive, data-intensive, collective process. In some sense, humans have always been in populations: as a species and as family units, tribes, nations, etc. As Boyle recalls, even in Aristotle one finds a discussion of limiting the size of a democracy's population to one that can all hear a single speaker and be seen by that speaker, recognizing that "civic organization depends not only on persuasive debate but also on the means for circulating information" (270). As such, the shift here is not that we have suddenly become a population but rather that we have become part of a new population. In the simplest terms, one might call this a network-user population.

Once one begins thinking in terms of a network-user population, the second step is to describe that population's functioning. The populations of social assemblages always include heterogeneous elements, both human and nonhuman. However, once formed, an assemblage begins to act as a means for shaping and homogenizing the agential capacities of its components, as processes of parameterization influence the degree of territorialization or deterritorialization, coding or decoding, present in the population. For example, the heterogeneity of humans on a college campus is homogenized into populations of faculty, students, and staff who have different capacities in relation to the campus and who might each be addressed as a separate population and modified according to certain parameters: when an institution changes general education requirements, those changes likely affect all three groups but in different ways as members of those distinct populations. A college campus is a comparatively heavily territorialized and coded social assemblage: a military base would likely be more so, while a local farmers' market would likely be less so.

At first glance, social media networks would appear to be deterritorialized and coded. They are global in operation, allowing virtually any person to join as a user, and they are obviously coded, not only in the literal sense of being made from programming code, but in the conceptual sense as well through the algorithmic processes that I have been discussing here. However, assemblages are rarely just one way. Instead they have tendencies moving in all directions along these axes, so while social media may be deterritorialized and coded, they also have tendencies toward territorialization and decoding. For example, one might understand the strong territorial boundedness of such sites in terms of their cybersecurity. While social media users represent a heterogeneous group of humans in conventional macrosocial terms, these sites homogenize those humans as users and give each such user equal capacities on the site. These tendencies to create secure borders and homogenize users represent forces of territorialization. In addition, while social media sites are certainly coded, conventionally coding in assemblages operates through written rules and procedures governing all manner of behavior. Despite the terms of service, as is self-evident to any social media user, there is little regulation over the behavior or expression of users. Almost anyone can join Facebook, those users can form almost any kind of internal community or friend network, and they can share almost any kind of media and write whatever, whenever, and wherever they want. Compare this, for example, with the restrictions on expression in the traditional classroom or workplace environment. In this respect one might say that the programming code behind Facebook and similar sites has a decoding effect because of its capacity to process a wide range of user actions and inputs and feed them forward into the production of a customized user experience.
In the same way, one might think of social media populations as emerging from a series of decoded, microsocial actions rather than top-down, coded limitations like laws. The end result is a population that is both territorialized and decoded. That is, as is directly observable by almost any Facebook user, one finds oneself in an assemblage with few restrictions on expression but is nonetheless homogeneous, especially if, as a user, one goes about creating an “echo chamber,” as many users do.

If we can understand ourselves as populations in an assemblage of network-users with certain tendencies toward territorialization-deterritorialization and coding-decoding, then the third step is describing the agential and rhetorical capacities that become available to us through those collective bodies, even as other capacities we have had historically become either less accessible or less effective. There is an array of possible tactical resistances available to network-user populations, from hacking to culture jamming. One might also produce alternative technologies and applications and effectively create new user populations. In DeLanda's terms, these would be efforts toward deterritorializing and decoding the population to create greater heterogeneity. In some respects, though, the homogenizing power of digital media—its fundamental capacity to reduce user interactions to computations—is unavoidable. Billions of people are networked together and will, as a result, produce a wealth of information. One might create temporary autonomous zones through non-participation or misinformation, or even hope to instill some critical understanding among users, but digital media ecologies will continue to function and adapt to such strategies. Put differently, adaptive processes of de- and re-territorialization are ongoing. Alternately, through various legal means one might establish codes restricting not only the participation of network-user populations but also the ways in which social media corporations collect, analyze, and employ user data. Codes need not only take the form of laws, though, and it is possible that network-user populations could devise their own discursive codes and genres, as one finds in some self-policing user communities like Wikipedia. In short, one might try to grab hold of DeLanda's parameterizing knobs and turn them in a different direction.

In developing such strategies, one key element is recognizing that assemblages are not monolithic and are invariably composed of heterogeneous elements, even if they have tendencies to homogenize them. Corporations like Google and Facebook have become powerful cultural forces but only in the last decade. While there are immediate concerns related to these corporations, the deeper issue lies more generally with the role intelligent machines perform in media ecologies. In building the smart, decision-making capacities of these machines, we create an agency for machines that can only arise alongside a deliberative capacity to take the right action. Simply put, there's little use in creating a self-driving automobile that cannot make decisions that keep humans safe (or at least as safe as they are when other humans are behind the wheel). In describing these distributed deliberative processes, it is important to recognize that algorithms aren't magical but rather part of a larger, heterogeneous system. As such, the solution does not lie in creating machines that work independently of humans but rather in concert with us. This presentation has described some parts of this larger system, but a more extended description is needed, one that follows experience "all the way to the end," as Latour puts it. Though here I have focused on illuminating the role of social media agents in deliberation, any strategy for shifting the operation of distributed deliberation would need a fuller account of the other human and nonhuman participants in the ecosystem.

Ultimately such descriptions become the foundation for instaurations that seek to foster new capacities experimentally. There is no going back to some earlier media ecology. Instead, invention is required. In developing proleptic tools for deliberation in a digital media ecology, one is already acting in the dark, unable to account for all the data streaming through and unable to predict how algorithms will act or machines will learn. Though it is understandable that as humans our focus might be on our individual deliberative agency, our ability to evaluate rhetorical acts and make decisions about our individual actions, that agency is necessarily linked to larger deliberative networks. An individual user might seek means for anticipating and responding to the social media world presented to her, but those individual responses only have value in the context of a larger network, which, of course, has always been the case. As such, it is not as if we have suddenly lost our legacy capacities to deliberate upon a text or other piece of media placed before us, either as individuals or collectively through discourse; it is rather that those capacities have been displaced by the new capacities of data collection and analysis. This is Boyle’s point as well in his description of an emerging “pervasive citizenship” and rhetoric where every action we take (or at least every digitally recorded action) becomes an action of citizenship and an opportunity for persuasion. As we become assembled as a population of networked users, we become accessible by and gain access to a new sensory-media experience and our deliberative capacities in this context remain undiscovered or at least under-developed.

That said, a new materialist approach to building mechanisms for deliberation understands the task quite differently from the way, for example, that Mark Zuckerberg describes the challenge Facebook has in dealing with fake news. He articulates the problem as one of determining what is or isn’t true. However one might say that deliberation operates precisely in such contexts, where the truth cannot be finally resolved. Though one can view deliberation as an interpretive, hermeneutic process that goes in search of truth or seeks consensus around truth, as in a jury’s deliberation, a new materialist rhetoric views deliberation as an inventive, heuristic process where the measure of the resulting instauration isn’t against a standard of truth but one of significance, of its ability to create knowledge that is durable and useful. Facebook’s machines do not need to know if the content they promote is true; they need to know what it does—how it is made to act and how it makes others act (faire faire): a far more empirically achievable task one imagines than discerning truth. In making this observation, I do not mean to suggest a total disconnect between truth and agency or that there is never a need to separate truth from lies and misinformation. To the contrary, one might say new materialism connects truth to agency by investigating the actions that construct knowledge and observing the capacities that knowledge engenders.

On February 16th 2017, in an open letter to the Facebook community, Zuckerberg describes his corporation’s mission in the following way: “In times like these, the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us” (“Building”). This proposed social infrastructure is also a rhetorical infrastructure and a deliberative infrastructure. I am not prepared to cede the responsibility for building the global community to Zuckerberg and the engineers at Facebook. If we want to participate in building a better ecology of distributed deliberation then we need to begin with describing the rhetorical operation of these extensive assemblages of human and nonhuman populations.



Categories: Author Blogs

Blackboard and the arse end of the internet

Digital Digs (Alex Reid) - 23 May, 2018 - 08:59

There’s no doubt there is no dearth of cesspools on the web, and I wouldn’t want to get into a debate about which is the worst. But Blackboard is it’s own special circle of internet hell.

As I’ve mentioned a few times here, after ending my stint as WPA, I’m back to teaching a regular load this year. So I decided to use UB’s course management system for at least part of what I was doing. There were really two basic reasons I did this. First, the students use UBlearns (as well call our version of Blackboard) for many of their classes and just expect to see things there. Second, it had been a long time since I’d even considered using a CMS. As WPA, I was teaching grad classes which were small, so there’s wasn’t really a need for it. Before that, I had sought out all manner of alternatives to using a university CMS, because the things were so awful 15 years ago.

Apparently they are still awful. In some respects they are even worse as the capacities of the web around them have left them in the dust. Think about WordPress, YouTube, Twitter, Facebook, Instagram, Google Docs, or Reddit. Consider how easy they are to use, how flexible, how fast, how mobile. Think about how easy it is to create, edit, and share content. UBneverlearns, as I’ve now decided to call it, like any CMS, is basically a graveyard of content and conversation. Or maybe it’s more accurate to call it a morgue, where the instructors do their version of CSI before pronouncing a grade.

Of course these other sites present their own pedagogical problems. There are privacy concerns, not only in terms of the data these sites collect but also in terms of how, as faculty, one will communicate grades and such to students. There's the problem of having to ask students to create multiple accounts (e.g., we'll have discussion on WordPress but upload your videos on YouTube, then let's use Google Docs to work collaboratively on a document, etc.). And the reality is that a fair segment of students will struggle with the digital literacy demands of using multiple sites, even though there may be a legitimate argument that they should learn how to do that.

From the faculty perspective, one can either take the default route of using Blackboard and following its path of least resistance, or one can devote a non-trivial amount of time to rolling one’s own learning environment. At least for me, as a digital rhetorician, there’s some overlap between figuring this stuff out for pedagogical purposes and the research that I do. For 99% of faculty this isn’t the case.

This is why I get a sardonic chuckle out of views like that offered by the Horizon Report, a document produced by experts in educational technology, who steadfastly claim that teaching digital literacy is a "solvable challenge," by which they mean one that they understand and know how to solve. Show me evidence that a significant portion of faculty are digitally literate. Products like Blackboard do little to convince me that even educational technologists are digitally literate. I mean, higher education can't even manage to produce a platform where one could start to teach digital literacy.

The more I think about this, the more sick it makes me. 18-year-olds entering college in the fall would typically have started kindergarten in 2005. Still, we've spent the last decade teaching them to sit quietly in rows, take notes, read textbooks, complete worksheets, and pass standardized exams. Pretty much like I did in the 70s and 80s. While they may get the majority of their entertainment from the web, they're barely better prepared to learn, communicate, collaborate, or work in a digital environment than I was at their age. And, obviously, faculty overall are barely better prepared to teach them such things, and universities are barely better prepared to support such teaching and learning. Instead they give us products like Blackboard, as if their sincerest wish were to persuade faculty to keep learning in meatspace. That's the oddest thing about this, since we all know that universities desire those online students.

So one of my goals for this summer will be figuring out some constellation of applications that I can integrate to teach my classes. I’m sure I will use UBneverlearns in a minimal way since the students will look there first: probably as a syllabus and a gradebook but nothing beyond that.

Categories: Author Blogs

A different direction for asking why you heard what you did

Digital Digs (Alex Reid) - 18 May, 2018 - 10:41


So the basic story of the recent “Laurel or Yanny” story, as near as I can figure, is this. You’re listening to a degraded digital recording of a voice that has distorted some of the low frequency sounds a human voice makes. So depending on a number of factors–some physiological, some psychological, some technological (e.g. the speakers or headphones you’re using), and some environmental–you might hear one or the other.

So what does this mean for us? Adam Rogers puts it this way in Wired:

There is a world that exists—an uncountable number of differently-flavored quarks bouncing up against each other. There is a world that we perceive—a hallucination generated by about a pound and a half of electrified meat encased by our skulls. Connecting the two, or conveying accurately our own personal hallucination to someone else, is the central problem of being human. Everyone’s brain makes a little world out of sensory input, and everyone’s world is just a little bit different.

As he goes on to opine? lament? “It’s hard to imagine a more rube-goldbergian way of connecting with another person. Their thoughts to their mouth to pulsations of air molecules to a vibrating membrane inside a hole in your skull to bones going clickety-click to waves of electrical activity to thoughts.”

In many respects this is a familiar story. It’s the modern divide Latour describes in our efforts to patrol the border between nature and culture. It’s the divide between empiricism and idealism in which different stripes of academics argue over the possibility of knowing something about the “world that exists” besides a “hallucination” of it.

And it’s also the founding problem of rhetoric, the one Rogers terms “the central problem of being human.” In the contemporary secular world where there is no recourse to divinely secured presence (aka souls) to secure intentions, rhetoricians turn to solutions that sound similar to Rogers, like Thomas Kent’s practice of “hermeneutic guessing.” On the other hand, Rogers’ neuroscientific turn is also a matter of concern for rhetoricians who are skeptical of the tendency/hope that brain science can explain away these problems with an fMRI or something. (It’s worth noting that neuroscientists are also often skeptical of what the mainstream takes from their research.)

For me, this odd viral story is an opportune moment to consider the usefulness of Latour's "second empirical" method, one which he builds upon the work of William James. Here's Latour from An Inquiry into Modes of Existence:


The first empiricism, the one that imposed a bifurcation between primary and secondary qualities, had the strange particularity of removing all relations from experience! What remained? A dust-cloud of “sensory data” that the “human mind” had to organize by “adding” to it the relations of which all concrete situations had been deprived in advance. We can understand that the Moderns, with such a definition of the “concrete,” had some difficulty “learning from experience”—not to mention the vast historical experimentation in which they engaged the rest of the globe.

What might be called the second empiricism (James calls it radical) can become faithful to experience again, because it sets out to follow the veins, the conduits, the expectations, of relations and of prepositions —these major providers of direction. And these relations are indeed in the world, provided that this world is finally sketched out for them—and for them all. Which presupposes that there are beings that bear these relations, but beings about which we no longer have to ask whether they exist or not in the manner of the philosophy of being-as-being. But this still does not mean that we have to “bracket” the reality of these beings, which would in any case “only” be representations produced “by the mental apparatus of human subjects.” The being-as-other has enough declensions so that we need not limit ourselves to the single alternative that so obsessed the Prince of Denmark. “To be or not to be” is no longer the question!

I realize that’s a long quote, so if you just skipped over it, here’s the summary. Classically empiricism divides observations into primary and secondary qualities, where primary qualities are objective (e.g., length, width) and secondary qualities (e.g. color, smell) are subjective. This is the worldview Rogers implies when he suggests that brains hallucinate about reality. That somewhere between things in the world and the electrochemical pulses of the brain, all the actual relations that hold things together (their relations) become lost and inaccessible to us and our brains just have to guess at what those things are based on a slush of sensory data.

So the weird thing about that is that our bodies and brains are just more things in the world. And our thoughts are just more events/actions in the world. If the lamp sits in relation to the table, then don’t my perceptions and thoughts about the lamp and the table also sit in relation to them as just more things? Why would one imagine that somewhere along the way, one passes into a parallel universe with a different ontology? I actually think the answer to that is fairly simple. One thinks that because one imagines oneself to be divine in some special way.

Rogers ends his essay this way: "Telling the person next to you what's going on in your head, what your hallucination is like—I think that's what we mean by 'finding connection,' by making meaning with each other. Maybe it's impossible, in the end. Maybe we're all alone in our heads. But that doesn't mean we can't work on being alone together." And I don't really mean to pick on this guy. I just think he neatly expresses a commonly held perception. The title of his article is "The Fundamental Nihilism of Yanny vs. Laurel," which captures this feeling that the story reminds us that we don't really share the world in common and that we are all alone in the end. However, I think this is more a romantic fantasy than a moment of nihilistic, existential angst.

We aren’t alone in our heads. We aren’t even only in our heads in the sense that this meat is designed to operate in an environment. There’s stuff coming into our heads all the time–through our senses, through the blood brain barrier, through electromagnetic waves. And there’s stuff coming out of our heads too.

Sure, it can be hard to communicate. It's also hard to hit a curve ball or play the guitar. But this is not a story that begins with us being fundamentally alone and romantically struggling to understand one another. That's backwards. You don't hear Yanny rather than Laurel because you are alone in the world. Instead, hearing one rather than the other contributes to your individuation. But that individuation, in my thinking, is just an iteration of a possibility space that one largely shares with the human population. Thinking about individuality as the output of a relational, environmental process rather than as a starting point that never really gets going because it can't manage to connect to the world strikes me as a much more productive approach to investigating these experiences.

Categories: Author Blogs

the late age of late ages

Digital Digs (Alex Reid) - 16 May, 2018 - 18:47

It’s undeniably a quizzical situation. For the middle-aged rhetorician it’s the comically late age of the humanities/English Studies and the tragically late age of humans (cf. climate change) in the midst of a still spry rhetorical universe that will go on without us. I can only imagine a generation of mid-century factory workers punching clocks in steel mills and auto plants looking upon those industrial edifices with their cornerstones proclaiming “Look upon my Works, ye Mighty, and despair!” and imagining an indefinite unchanging future for themselves and their children, just as tenured faculty still do (or at least used to until recently).

If you’re in the humanities, have you ever wondered why tenure appears to be the one thing in all of human history, if not the entire universe, that is seemingly immune from the critical apparatuses of the humanist gaze? Is not tenure also a mechanism of colonialism? patriarchy? capitalism? neoliberalism? etc.? If not, what is tenure’s magic? Surely tenure has always operated inside hegemony. I can go on, but it isn’t necessary because anyone with a Phd in the humanities should be able to play this game. And in defending tenure, one simultaneously reveals the rhetorical playbook for turning back that critical gaze in any context. (Un)fortunately for us, no one was paying attention in the first place.

In the end building and working in disciplines is not unlike building/working in factories or pyramids. For my particular journey through the discipline of rhetoric, the seeming hypocrisy in critiquing anything/everything except tenure isn’t much of a problem. Maybe you think I’m confessing that I’m full of crap. Maybe. But then maybe you should have read the ToS before you agreed to it. The seeming hypocrisy isn’t a problem but the fading efficacy of such arguments is.

In the late age of late ages, we’re all too tired for this, aren’t we? (I will concede that it might just be me getting old.)

That said, nihilism is as immature a response to life as belief. One must imagine the professor teaching an ever-shrinking number of students as happy. After all, these things were plain to see decades ago. Playfully, let’s suggest this is Camus, not Sartre: there is an exit. The late age of “humans” presented by anthropogenic climate change and digital life need not mean the late age of “us”: to offer a Latourian malapropism, we have never been human. But what does the “humanities” have to offer to such beings, their cultures and economies? That’s a good question. I’m not sure. But I’m fairly certain that it begins with rejecting the immunity we have granted ourselves.

I know that sounds crazy. It’s the last, great counter-intuitive move for a disciplinary tradition that has laid all its other bets on counter-intuitive thinking.

In the late age of late ages, it’s time to go all in. If not now, then when?

Categories: Author Blogs

when AIs start vlogging

Digital Digs (Alex Reid) - 10 May, 2018 - 12:48

Right now I have two scholarly/professional interests, and I’m wondering how they intersect. On a general thematic level they appear to share a lot as they are both about digital technologies and communication/rhetoric. However, they also represent two very different segments of digital culture. I’ve been writing/speaking about both recently on this blog. The first has to do with the role of artificial intelligences as rhetorical agents. From speech synthesis to natural language processing to negotiating deals, AIs occupy rhetorical spaces. Their rhetorical behaviors are interesting for two reasons. The more immediate one is that we humans are increasingly interacting with AIs, having rhetorical encounters with them. The other reason is that the rhetorical actions of AIs might tell us something about how rhetoric functions outside human domains. If we think of rhetoric as a thing or process or capacity that is in itself not human but with which humans interact, then understanding how rhetoric interacts with other nonhumans might give us some broader insight into rhetoric itself.

The second thing I’ve been musing about is the emergence of digital genres, particularly over the last five years or so. Sure, vlogging, podcasting, infographics, and so on have longer histories than that, but these are all things that had limited cultural roles a decade ago compared to where we are now. It’s hard to imagine that much of that wasn’t driven by the adoption of smartphones with the capacity to both deliver and produce video and audio content. Anyway, as I’ve been saying, even though these formats have been around for 15 years (and build upon decades of video, film, radio, and so on), they are still relatively immature in terms of the genres built on them. I think this is especially true as one looks at professional and academic genres. I find it hard to imagine that we can go another decade without these genres becoming more prevalent.

So how do these things intersect? The first answer that comes to mind is that AIs will become increasingly able to help people make and access media, starting with the technical qualities of image and sound. With natural language and image processing, one has the potential of creating indexical access to audio and video, as in “Show the part where there are monkeys” or “Go to where they talk about monkeys.” Then there are all the possibilities for using AI to help identify fake news and other bad actors. In short, there are a range of procedural-rhetorical ways in which AI will shape the composition, circulation, and consumption of video and audio.

In short, there are a number of places to start, and these are all viable. However, it’s not quite what I’m thinking about. For me, the interesting intersection is at once both more abstract and more material. Video/audio capture the environment in which one composes, starting with one’s body. Even with staging, editing, and such, that environment is always there in a way that it is masked in text. (BTW, that doesn’t mean the environment doesn’t shape textual composition but only that it’s less visible/harder to trace.) My interest in AI rhetorical actors is similarly in what they can tell us about our shared rhetorical environment. If we think about this from a “cognitive ecology” (or a cognitive media ecology) perspective, then sensors (cameras, mics, IoT, etc.), data storage, composing/editing applications, AIs, humans, networks, mobile tech, etc. form an expansive environment. All the media compositions in which humans get actively involved–vast as they are–are a thin veneer on the massive amount of expressive data of machines reporting and circulating their sensations. Similarly, the rhetorical negotiations involving humans will soon become a minor part of the larger conversation. We represent a small population of moving parts in this rhetorical environment.

Of course, the “always already” argument is always already available to us. We’ve always been immersed in a denser and richer rhetorical environment. It’s just that we’ve been too anthropocentric (perhaps unavoidably) and too sure of our privileged, exceptional ontological condition in the universe (less unavoidably) to recognize that immersion. While that observation is valuable (for purposes of humility if nothing else), it’s also necessary to recognize the shift that we are experiencing (without falling into the hype of that either).

Recognizing that our rhetorical capacities emerge from our participation in populations of assemblages within a cognitive media ecology is a fruitful starting point for describing the particular capacities that arise among us. And that’s fine for a broad research agenda that has legs. But it’s not the kind of thing I can teach to undergraduate students or even to graduate students–at least not without a larger curricular structure to support it.

One answer is re-separating the chunks, i.e., teach some media production in one place and some media theory somewhere else. Of course it’s all Eternal September stuff with no curricular follow-up or follow-through. I mean there aren’t many English departments where students can systematically develop digital rhetorical/compositional knowledge, skills… let’s call it phronesis, in, say, the way they can march through a series of literary-historical periods. That said, if there were some follow-through structure, then at some point you could start to think about how the construction of emerging genres is a structural-environmental conversation we’re having with these digital nonhumans. In other words, even for those who weren’t going in a scholarly direction in relation to these questions, there is usable knowledge to be gained here.

Categories: Author Blogs

30 years on…

Digital Digs (Alex Reid) - 2 May, 2018 - 19:10

30 years ago, I was an undergrad just starting a job working for a start-up family business in the nascent IBM PC clone market. We assembled computers and sold them to retailers. We distributed hard drives and other components. We consulted with small businesses to provide them with IT solutions for point-of-sale, inventory control, accounting, etc. Those of you who were there know the drill. Monochrome monitors, C prompts, no mouse, no windows, 8088 CPUs, RAM measured in kilobytes, 10MB hard drives the size of a kid’s lunch box. Could I have envisioned 2018? Sure, sort of. I mean, I read Neuromancer.

I was also an English and History double-major at Rutgers. They were the two largest majors in Rutgers College at the time (the Arts and Sciences college of the university in New Brunswick). Shakespeare, Detective Fiction, Arthurian Romance, The Crusades, World War 1, America in Vietnam: these were all English and History classes with hundreds of students enrolled in giant lecture halls: semester after semester, year after year. Maybe it’s still that way at Rutgers. If so, they’re a bit of an outlier.

Imagining the future/present of computers would have been easier, I think, than imagining the demise of the humanities. In the end the two were bound together. Not because computers are necessarily anti-humanistic but rather because the humanities, and especially English, were born in the early 20th century with an expiration date. Despite the hypothetical possibility that English Studies need not be bound to print technology and culture, in the end, that’s what has happened. And there’s nothing wrong with the humanistic study of print culture. I’m sure it will carry on, in some fashion, for a very long time. It just isn’t going to be central to how we understand communication as it operates in our living culture.

So it makes me think about 30 years forward. Assuming we still have something like today’s tenured professors at universities (a future that is far from guaranteed in America), I am confident there will be faculty who research the contemporary cultural, political, and professional practices of communication, in whatever media prevail. In short, there will be professors who extend the rhetorical tradition into the media ecologies in which they and others live and work. Of the rest of English Studies, who knows? I would guess some smaller version–akin to classics, art history, or philosophy today–will exist.

And it probably won’t take 30 years to get there.

In part, I’ve been thinking about this because I’ve been idly following the vlogger Casey Neistat recently in his efforts to imagine this new business of his called 368. Basically, it’s a kind of school… sort of. It’s a place where vloggers and podcasters might come, gain access to tools and support, learn how to up their production game, and get business advice in terms of marketing and finding an audience. In a recent vlog, Neistat made the astute analogy between contemporary vloggers and Buster Keaton in that they are both working in an emergent medium/genre and trying to figure what is possible and how to achieve it. His business model seems essentially based on this observation and the opportunity that lies in figuring out how to become the MGM of YouTube (or maybe more accurately United Artists).

I suppose in part I’ve thought of my own work as a less entrepreneurial/commercial and more scholarly version of this objective: to invent/discover/study rhetorical practices and genres for an emerging media ecology. That’s me as a member of the Florida school who was raised by wolves and a copy of Ulmer’s Heuretics read by moonlight.

I will admit that I’ve never been the best professor for the student who wants to be told what to do. Maybe that sounds like a fake criticism, like saying in an interview that your greatest weakness is that you work too hard or that you’re too honest, but I really don’t mean it that way. There’s a place and time for direct instruction, and I try to give it, but I can admit that’s not my strong suit. I’ve always come more from the perspective of saying “Here are some tools. Find something interesting about them and give it a go. If the whole thing falls apart, I don’t really care as long as you learned from it.” It’s a fine pedagogy, in its way, but it really only works with students who have some intrinsic motivation related to their work. I’m capable of enough reflection to know that’s my own version of a mini-me pedagogy, because that’s how I learn. I find something I care about and then I bang and grind away until I get somewhere.

I think it’s premised on the notion that while we think it is unwise to try to reinvent the wheel, trying to figure out how one invents something like that, taking your own journey through invention… well that’s what learning is to me.

What does this have to do with looking 30 years back and forward? Good question. I suppose it has to do with my basic plan for moving forward–claiming an interest, banging, grinding, experimenting, inventing. It’s the opposite of institutional/disciplinary humanistic methods, which are fundamentally homeostatic.

Categories: Author Blogs