Digital Digs (Alex Reid)

digital rhetoric and professional communication

the ends of digital rhetoric

22 June, 2018 - 12:16

Two personal data points: a meandering FB thread about the future of the Computers and Writing conference; another conference conversation over the implications of asserting that “these days” everything is digital rhetoric. It’s a related observation, taken one step further, that leads Casey Boyle, Steph Ceraso, and James J. Brown Jr. to conclude in a recent article, “It is perhaps too late to single out the digital as being a thing we can point at and whose fate we can easily determine. Instead, the digital portends to be a momentary specialization that falls away and becomes eventually known as the conditions through which rhetorical studies finds itself endlessly transducing.” So there are multiple ends (or senses of the word “end”) for digital rhetoric, both as a disciplinary specialization and as a discrete phenomenon (with the two being related, since as long as the latter exists there’s reason for the former to exist). We can ask about the purposes/ends of digital rhetoric (as a field of study but perhaps also as a phenomenon). We can seek out the boundaries between digital and non-digital rhetoric (i.e. where digital rhetoric ends and something else begins). Finally we can think about those boundaries temporally, as Boyle et al. suggest by “momentary specialization.”

I guess my first observation is that the fact that something might become pervasive doesn’t necessarily mean that it ceases to be a subject of study/specialization. Just think about the ways we study race, gender, class, or culture–obviously these are all pervasive elements of human life. We identify many meaningful distinctions here, and one imagines significant objection to the idea that the transduction of rhetoric passing through such bodies portends the erasure of their study as discrete phenomena. But I don’t think that’s Boyle et al.’s point. Instead, the exact opposite is what might be suggested. If I may use this analogy, we have seen intersectionality result in a vast proliferation in the way human rhetorical practices are studied over the last couple of decades. This results not only from the proliferation of identities but from their combination as well. The same is true of digital technologies and rhetorical practices. In this respect it becomes increasingly problematic to speak of a discrete, cohesive digital rhetoric, but no more problematic than it is to speak of rhetoric or writing in general terms.

As Boyle, Ceraso, and Brown also write, “The digital is no longer conditional on particular devices but has become a multisensory, embodied condition through which most of our basic processes operate,” though they move from that description to “offering the concept of transduction to understand how rhetorical theory and practice might engage ‘the digital’ as an ambient condition” (a la Rickert). So these things are all true. We can describe (and study) digital media ecologies as ambient conditions. This involves first understanding rhetoric as a not-necessarily human phenomenon that can itself be ecological and ambient, which can in turn encounter and interact with digital media in a variety of ways resulting in a digital-rhetorical ecological-ambient condition. At the same time, one can study and describe particular devices and/or practices as they operate in specific communities or for certain purposes.

I think that either of those approaches–studying ambient digital rhetorical phenomena or specific devices/practices–promises sustainable areas of specialization within rhet/comp (inasmuch as any of this is sustainable). So a journal like Kairos or a conference like Computers and Writing has a sustainable path forward, at least in terms of disciplinary logics (it’s usually material support and logistical problems that are the main challenges to sustainability anyway). I think starting a phd specializing in digital rhetoric in 2018 makes at least as much sense as starting any other kind of English Studies phd in 2018. And, on a more practical level, the differences between vlogging and blogging, between web design and podcasting, between mobile apps and infographics (i.e. among all these transducing digital rhetorical practices) are where the curriculum will happen. So again, you can’t just study “the digital.” The field is proliferating.

I suppose one could say that’s where the field ends (though literary studies hasn’t ended as a field because there are dozens of specializations within it), but even then we’re a long way off. If/when there are a half-dozen once and future “digital rhetoricians” in your department, then we can talk about how each is really in a sub-specialization. I’m not holding my breath on that one.

Indeed I’d think the opposite future is just as likely to be true. Rather than “the digital” becoming pervasive and being subsumed again into a rhetoric that is implicitly but no longer explicitly digital rhetoric, the field of rhet/comp, along with the rest of English Studies, becomes subsumed into a humanistic study of the digital that, from our perspective, might be described as interdisciplinary. That is we become explicitly digital and implicitly rhetorical. And from there one will see new sub-specializations. But honestly I doubt either is probable. Almost all the internal energy and resources across English Studies, including in rhet/comp, are powerfully conservative. Almost all the external energy and resources are indifferent to English Studies at best; there’s nothing about us that screams good investment to them.

The upshot is that I think for at least the next decade we’ll continue to see rhetoricians who specialize in the study of digital media while the vast majority continue their research with little thought to the implications of digital media for it. And on some level that’s not only fine but totally reasonable; you can’t study everything. And I suppose in the end that’s the potential problem with digital rhetoric–if you start seeing it as “everything” (or at least a lot of things). But I wouldn’t (don’t) panic about it. It’s always already intermezzo; you’re always beginning and ending in the middle of something else.

Categories: Author Blogs

collapse porn: MLA edition

7 June, 2018 - 16:25

So two articles, one from MLA’s Profession helpfully titled “The Sky is Falling” and another in The Atlantic that suggests “Here’s How Higher Education Dies,” perhaps from some kind of sky impact. Put together one might wonder if the MLA and its disciplines might manage to hold on long enough to die with the rest of higher education.

This is what academics call optimism.

I don’t think higher education is dying. I still think Americans (and people around the world) view getting a college degree as their best strategy for getting and maintaining economic prosperity. The problem is that that is a long-term view and the short-term risks are increasingly high, particularly the risks of student loan debt. Our general economic strategy has been to emphasize profiting in the short term directly from students learning by making them pay tuition (and loan interest) rather than emphasizing profiting indirectly in the long term from the increased capacities of a better educated workforce. This kind of short-term profit-making version of capitalism combines with a willingness on the right to feed into anti-intellectual populism among their voters to cast higher education as a cultural foe. It’s really just incredibly stupid. It’s basically eating a bunch of your seed corn and then leaving the rest to rot. The result has been considerable damage to higher education, particularly at smaller and more accessible public institutions (community colleges, comprehensive colleges, small liberal arts, etc.). However, I don’t think this means higher education dies.

Nevertheless this context doesn’t bode well for the humanities, as we know all too well. I find it curious that there continues to be such a struggle to understand why students do not choose to major in English. It’s because they don’t want to. That should be obvious, but somehow we tend to reduce want to some rational decision-making process. Sure, many students attempt degrees they hope will lead more directly to specific well-paying careers, but many more do not. If you look at the NCES data, roughly the same percentage of students are getting engineering and business degrees now as 25-30 years ago. It’s communications and psychology to which English has lost students. In 1991 we were all roughly the same size, each awarding ~50K degrees. Now those two have doubled their numbers while English has shrunk. And no one’s choosing those fields because of jobs or salary. In fact, their average salaries are lower than English majors’.

So the thing is, imho, while the general cultural-economic situation of higher education certainly doesn’t help us, that’s not our problem. Our problem is that students are rejecting the experience of English and/or the particular disciplinary knowledge we offer. Eric Hayot makes a similar observation in the Profession article when he writes “my guess is that the humanities are going to survive by expanding and extending their general interdisciplinarity, by realizing that the separation of disciplines produces appeals to certain kinds of expertise that at this point may not be enough to retain our traditional audiences. Our market has changed; we probably need to change with it.”

I think that’s fine as far as it goes, and maybe it goes far enough to attract some students into an elective or to a particular course in a gen ed curriculum. But I think he gets at the key issue, maybe unintentionally, further on: “The problem, that is, is not disciplinarity in general (economics, as I’ve said, is doing fine); the problem is humanistic disciplinarity, in this particular socioeconomic situation.” Now by “particular socioeconomic situation” I’m guessing he means to suggest something he believes can be reversed and probably will be reversed. For me, however, I would define that particular situation as “not the 20th century,” which is, as far as I know, an irreversible situation.

But here is the good news(?). The problem isn’t STEM or business. And the problem isn’t that students can’t get good jobs with English degrees. And the problem isn’t disciplinarity itself. The problem/challenge is a paradigm shift within English Studies, and the complication is that the broader context of higher education likely means doing it with few resources. It may come as a surprise, but in this context the often separately discussed challenges of graduate and undergraduate education come together. If we can shift the paradigm such that people with English phds have clearer value in roles beyond replicating the discipline, then we will simultaneously create a discipline that makes more sense to undergraduates, who will also seek to use their education for purposes other than replicating the discipline.

How should that paradigm shift? Maybe you can start by discovering how it formed in the first place. Then you might ask how did psychology and communications beat English over the last 30 years? Was it something they did? Was it some larger cultural shift? How might we shift our pedagogical paradigms as well?

There’s no real option to collapse. People will require tertiary education to be successful in first-world economies. They will need to learn how to communicate in increasingly specialized/technical ways across a variety of media; they will require a cross-cultural understanding of aesthetic and rhetorical experience; they will need tools to address ethical concerns with others who might not share their cultural, let alone personal-embodied, context. There will be many professionals who make careers out of addressing these matters.

I think that’s what we were always doing, just in a rather ham-fisted way. I would suggest psych and comm beat us by going more directly at those matters than we have been willing to do. But I know we can offer something different/complementary to those social scientific approaches, especially when it comes to know-how.


Categories: Author Blogs

podcasting: basic rhetorical questions

4 June, 2018 - 07:22

As you may have seen, I posted a podcast yesterday. I think I am finding starting up a podcast to be similar in its rhetorical challenges to starting this blog many a year ago. Back then I was asking myself some basic rhetorical questions about audience, genre, and purpose. These are all one question in a sense, or at least different parameters of the same assemblage. Podcasts, like blogs, aren’t really genres, or maybe they are genres that contain many genres, like books or essays. I’m not sure what the right term is for them, maybe platforms or media types. Whatever. The point is that podcasts do set up certain kinds of capacities, some of which are related to the particular composing technologies in use (e.g. what kinds of audio recording, production, and editing technologies one has available) and proficiency with those technologies, as well as the technologies of delivery/circulation. Then there are some general cultural-discursive practices and values that podcasts largely share. Listeners tend to use a limited range of technologies for consuming podcasts. Often the mobile capacities of those technologies (e.g. smartphones) are in play as people listen while driving, jogging, walking the dog, etc. These practices put some general limits on the length of a podcast. There’s probably also some argument to be made about the general attentional habits/capacities of people as well. There’s a reason most tv shows, podcasts, and so on tend to be no longer than an hour.

Still, even within that loosely described field, there’s a near infinite number of possibilities. That might sound great, but it’s really vertiginous. Those possibilities are quickly and starkly reduced and organized through remediation–where podcasts connect back to the formats of tv and radio talkshows, documentaries, journalism, and in some cases to fiction and radio drama. Is a podcast a solo voice? Is it a dialogue or roundtable? Is it an interview? Are there field recordings? Diegetic ambient sounds? Non-diegetic sounds (e.g., music)?

With all that in mind as ways to speak in general terms about podcasting, one still has to decide on topic, audience, and the particular format from all the choices above. The typical (and I think quite sound) advice is to do something that interests you and that you are able to replicate. I think this starts with topic. There are things I’m interested in as a hobbyist–soccer, science fiction, fitness, etc.–and then there are my professional interests, which you know. With the latter, I could speak in a very academic way about my research or in a more pedagogical way, as I would to students. That would be similar to my blog.

However, I’m thinking about addressing the topics of digital rhetoric in a more journalistic/public discursive way. That too is a familiar podcasting genre and one can already see other academics doing that kind of thing. I’m starting with the solo podcast approach this summer because that reduces one layer of logistical complexity at the start, but I don’t think that’s desirable in the long term… if it turns out there is a long term.

Much like with blogging, though, a big part of my interest in podcasting is to gain insight into a subject of scholarly interest to me–digital composing–from the inside of actually doing it. As I’ve been saying for a very long time (and I’m hardly alone on this one either), rhetoricians–and the humanities in general–need to evolve in their scholarly genres. We are so deeply fixed in the gravity well of text/print that we near unquestionably believe that rigorous academic thought must take the form of text, or at least that media that cannot replicate the “rigor” we associate with writing and reading texts are inherently lesser. In fact, for the most part, I don’t even think we recognize that as a belief we hold but rather something more like a matter of fact. It’s like we’ve forgotten that publishing monographs is a relatively recent phenomenon in academic work (c. mid-20th century) and one tied to a particular set of economic and technological conditions that are now long past.

Categories: Author Blogs

Ep 101: Siri’s Voice

3 June, 2018 - 15:05
Audio: https://profalexreid403136382.files.wordpress.com/2018/06/digital-digs-ep101.mp3

This is my first attempt at this, so I’d love your feedback.

And here’s a link I mention in the podcast: Dennis Klatt’s History of Speech Synthesis.

Categories: Author Blogs

distributed deliberation #rsa50

31 May, 2018 - 14:30

In the wake of 2016 US presidential election, many questions were raised about the role of Facebook in disseminating fake news. Mark Zuckerberg’s response, posted to Facebook, points to the challenge fake news presents.

Identifying the “truth” is complicated. While some hoaxes can be completely debunked, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted. An even greater volume of stories express an opinion that many will disagree with and flag as incorrect even when factual. I am confident we can find ways for our community to tell us what content is most meaningful, but I believe we must be extremely cautious about becoming arbiters of truth ourselves. (“I want”)

Perhaps unwittingly, Zuckerberg reveals the fundamental tension between content that users find “meaningful” and content that is factual. In addition, what we might traditionally call deliberative rhetoric, essays that employ evidence to make arguments about what should be done, falls into the nebulous space of “opinion,” a space made even more uncertain as the facticity of the evidence supporting those arguments is often uncertain. Certainly one might reasonably argue that whatever precession of deliberation digital media accomplish, as readers we remain capable of consciously analyzing and evaluating the credibility of media we encounter. We have not yet been brainwashed to accept whatever Facebook tells us: the ability of users to reject, almost reflexively, content that appears at odds with their own ideological convictions demonstrates that. However, Facebook’s mechanisms seek to discern what each individual user might find “meaningful” and foreground that media, reinforcing the echo chambers users might deliberately create by hiding or unfriending users who disagree with them. Rather than mitigating our cultural and ideological biases, applications like Facebook work to show users a world they will “like” with the primary goal of keeping those users’ attention on the website.
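The mechanism at work here can be sketched in a few lines of code. This is a hypothetical toy model, not Facebook’s actual algorithm (the function name, scoring formula, and data are all invented for illustration): a feed ranker that weights each post’s engagement by the user’s past “likes” for its topic, so content the user already agrees with floats to the top regardless of whether it is factual.

```python
# Toy sketch of engagement-based feed ranking (hypothetical; not
# Facebook's actual algorithm). Posts on topics the user has
# previously "liked" get boosted, so the feed drifts toward an
# echo chamber without any top-down decision about "truth."

def rank_feed(posts, liked_topics):
    """Order posts by predicted 'meaningfulness' to this user.

    posts: list of dicts with 'topic' and 'base_engagement' keys
    liked_topics: dict mapping topic -> count of the user's past likes
    """
    def score(post):
        affinity = liked_topics.get(post["topic"], 0)
        return post["base_engagement"] * (1 + affinity)
    return sorted(posts, key=score, reverse=True)

posts = [
    {"topic": "politics_left", "base_engagement": 2},
    {"topic": "politics_right", "base_engagement": 3},
    {"topic": "cats", "base_engagement": 5},
]
user_history = {"politics_left": 4}  # user has liked left-leaning posts

feed = rank_feed(posts, user_history)
# The left-leaning post (2 * (1+4) = 10) now outranks the cat post
# (5 * 1 = 5), despite lower base engagement.
print([p["topic"] for p in feed])  # → ['politics_left', 'cats', 'politics_right']
```

Note that nothing in the scoring function references facticity at all; the only signal is the feedback loop between what the user has engaged with and what gets shown next, which is the tension Zuckerberg’s statement gestures toward.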

But the deeper problem is this. Even if we imagine all this as a primarily technological problem that will eventually be fixed by engineers (and I agree that requires a prodigious feat of imagination), we are still left in a situation in which machines have taken responsibility for a significant amount of the work that we have termed deliberation. And perhaps we are forced to acknowledge that given the sheer speed and volume of information, there is no other option for us.

This is the rhetorical condition that I term distributed deliberation, a term that nods toward Edwin Hutchins’ concept of distributed cognition. And what I want to focus on today is how one might go about inventing proleptic or anticipatory responses to distributed deliberation that ultimately result in an expansion rather than contraction of our rhetorical and cognitive capacities. And I’m going to take up a new materialist method and consider what it might offer us.

A new materialist digital rhetoric might describe a user-based, distributed-deliberative prolepsis in terms of population thinking. I’m going to discuss this in three steps. The first step is to reorient one’s concerns away from the individual human subject and conceive of this challenge as an ecological one involving a constellation of human and nonhuman actors. Casey Boyle observes this in his discussion of smart cities: “Where former understandings of democratic organization relied largely on the techniques of communicating language for deliberating civic activity, the [smart city] looks to sensor technologies and big data methodologies to track and nudge realtime movements and conditions” (“Pervasive” 270-1). That is, civic concerns are not worked out through dialogue among individuals but rather through an expansive, data-intensive, collective process. In some sense, humans have always been in populations: as a species and as family units, tribes, nations, etc. As Boyle recalls, even in Aristotle, one finds a discussion on limiting the size of a democracy’s population to one that can all hear a single speaker and be seen by that speaker, recognizing that “civic organization depends not only on persuasive debate but also on the means for circulating information” (270). As such, the shift here is not that we have suddenly become a population but rather that we have become part of a new population. In the simplest terms, one might call this a network-user population.

Once one begins thinking in terms of a network-user population, the second step is to describe that population’s functioning. The population of social assemblages always includes heterogeneous elements, both human and nonhuman. However, once formed, an assemblage begins to act as a means for shaping and homogenizing the agential capacities of its components as processes of parameterization influence the degree of territorialization or deterritorialization, coding or decoding, present in the population. For example, the heterogeneity of humans on a college campus is homogenized into populations of faculty, students, and staff who have different capacities in relation to the campus and who might each be addressed as a separate population and modified according to certain parameters: when an institution changes general education requirements, those changes likely affect all three groups but in different ways as members of those distinct populations. A college campus is a comparatively heavily territorialized and coded social assemblage: a military base would likely be more so while a local farmers’ market would likely be less so.

At first glance, social media networks would appear to be deterritorialized and coded. They are global in operation, allowing virtually any person to join as a user, and they are obviously coded, not only in the literal sense of being made from programming code, but in the conceptual sense as well through the algorithmic processes that I have been discussing here. However, assemblages are rarely just one way. Instead they have tendencies moving in all directions along these axes, so while social media may be deterritorialized and coded, they also have tendencies toward territorialization and decoding. For example, one might understand the strong territorial boundedness of such sites in terms of their cybersecurity. While social media users represent a heterogeneous group of humans in conventional macrosocial terms, these sites homogenize those humans as users and give each such user equal capacities on the site. These tendencies to create secure borders and homogenize users represent forces of territorialization. In addition, while social media sites are certainly coded, conventionally coding in assemblages operates through written rules and procedures governing all manner of behavior. Despite the terms of service, as is self-evident to any social media user, there is little regulation over the behavior or expression of users. Almost anyone can join Facebook, those users can form almost any kind of internal community or friend network, and they can share almost any kind of media and write whatever, whenever, and wherever they want. Compare this, for example, with the restrictions on expression in the traditional classroom or workplace environment. In this respect one might say that the programming code behind Facebook and similar sites has a decoding effect because of its capacity to process a wide range of user actions and inputs and feed them forward into the production of a customized user experience.
In the same way, one might think of social media populations as emerging from a series of decoded, microsocial actions rather than top-down, coded limitations like laws. The end result is a population that is both territorialized and decoded. That is, as is directly observable by almost any Facebook user, one finds oneself in an assemblage with few restrictions on expression but is nonetheless homogeneous, especially if, as a user, one goes about creating an “echo chamber,” as many users do.

If we can understand ourselves as populations in an assemblage of network-users with certain tendencies toward territorialization-deterritorialization and coding-decoding, then the third step is describing the agential and rhetorical capacities that become available to us through those collective bodies, even as other capacities we have had historically either become less accessible or less effective. There is an array of possible tactical resistances available to network-user populations, from hacking to culture jamming. One might also produce alternative technologies and applications and effectively create new user populations. In DeLanda’s terms, these would be efforts toward deterritorializing and decoding the population to create greater heterogeneity. In some respects though, the homogenizing power of digital media—its fundamental capacity to reduce user interactions to computations—is unavoidable. Billions of people are networked together and will, as a result, produce a wealth of information. One might create temporary autonomous zones through non-participation or misinformation or even hope to instill some critical understanding among users, but digital media ecologies will continue to function and adapt to such strategies. Put differently, adaptive processes of de- and re-territorialization are ongoing. Alternately, through various legal means one might establish codes restricting not only the participation of network-user populations but the ways in which social media corporations collect, analyze, and employ user data. Codes need not only take the form of laws, though, and it is possible that network-user populations could devise their own discursive codes and genres, as one finds in some self-policing user communities like Wikipedia. In short, one might try to grab ahold of DeLanda’s parameterizing knobs and turn them in a different direction.

In developing such strategies, one key element is recognizing that assemblages are not monolithic and are invariably composed of heterogeneous elements, even if they have tendencies to homogenize them. Corporations like Google and Facebook have become powerful cultural forces but only in the last decade. While there are immediate concerns related to these corporations, the deeper issue lies more generally with the role intelligent machines perform in media ecologies. The agency created for smart, decision-making machines can only arise alongside a deliberative capacity to take the right action. Simply put, there’s little use in creating a self-driving automobile that cannot make decisions that keep humans safe (or at least as safe as they are when other humans are behind the wheel). In describing these distributed deliberative processes, it is important to recognize that algorithms aren’t magical but rather part of a larger, heterogeneous system. As such, the solution does not lie in creating machines that work independently of humans but rather in concert with us. This presentation has described some parts of this larger system, but a more extended description is needed, one that follows experience “all the way to the end,” as Latour puts it. Though here I have focused on illuminating the role of social media agents in deliberation, any strategy for shifting the operation of distributed deliberation would need a fuller account of the other human and nonhuman participants in the ecosystem.

Ultimately such descriptions become the foundation for instaurations that seek to foster new capacities experimentally. There is no going back to some earlier media ecology. Instead, invention is required. In developing proleptic tools for deliberation in a digital media ecology, one is already acting in the dark, unable to account for all the data streaming through and unable to predict how algorithms will act or machines will learn. Though it is understandable that as humans our focus might be on our individual deliberative agency, our ability to evaluate rhetorical acts and make decisions about our individual actions, that agency is necessarily linked to larger deliberative networks. An individual user might seek means for anticipating and responding to the social media world presented to her, but those individual responses only have value in the context of a larger network, which, of course, has always been the case. As such, it is not as if we have suddenly lost our legacy capacities to deliberate upon a text or other piece of media placed before us, either as individuals or collectively through discourse; it is rather that those capacities have been displaced by the new capacities of data collection and analysis. This is Boyle’s point as well in his description of an emerging “pervasive citizenship” and rhetoric where every action we take (or at least every digitally recorded action) becomes an action of citizenship and an opportunity for persuasion. As we become assembled as a population of networked users, we become accessible by and gain access to a new sensory-media experience and our deliberative capacities in this context remain undiscovered or at least under-developed.

That said, a new materialist approach to building mechanisms for deliberation understands the task quite differently from the way, for example, that Mark Zuckerberg describes the challenge Facebook has in dealing with fake news. He articulates the problem as one of determining what is or isn’t true. However one might say that deliberation operates precisely in such contexts, where the truth cannot be finally resolved. Though one can view deliberation as an interpretive, hermeneutic process that goes in search of truth or seeks consensus around truth, as in a jury’s deliberation, a new materialist rhetoric views deliberation as an inventive, heuristic process where the measure of the resulting instauration isn’t against a standard of truth but one of significance, of its ability to create knowledge that is durable and useful. Facebook’s machines do not need to know if the content they promote is true; they need to know what it does—how it is made to act and how it makes others act (faire faire): a far more empirically achievable task one imagines than discerning truth. In making this observation, I do not mean to suggest a total disconnect between truth and agency or that there is never a need to separate truth from lies and misinformation. To the contrary, one might say new materialism connects truth to agency by investigating the actions that construct knowledge and observing the capacities that knowledge engenders.

On February 16th 2017, in an open letter to the Facebook community, Zuckerberg describes his corporation’s mission in the following way: “In times like these, the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us” (“Building”). This proposed social infrastructure is also a rhetorical infrastructure and a deliberative infrastructure. I am not prepared to cede the responsibility for building the global community to Zuckerberg and the engineers at Facebook. If we want to participate in building a better ecology of distributed deliberation then we need to begin with describing the rhetorical operation of these extensive assemblages of human and nonhuman populations.

Categories: Author Blogs

Blackboard and the arse end of the internet

23 May, 2018 - 08:59

There’s no dearth of cesspools on the web, and I wouldn’t want to get into a debate about which is the worst. But Blackboard is its own special circle of internet hell.

As I’ve mentioned a few times here, after ending my stint as WPA, I’m back to teaching a regular load this year. So I decided to use UB’s course management system for at least part of what I was doing. There were really two basic reasons I did this. First, the students use UBlearns (as we call our version of Blackboard) for many of their classes and just expect to see things there. Second, it had been a long time since I’d even considered using a CMS. As WPA, I was teaching grad classes which were small, so there wasn’t really a need for it. Before that, I had sought out all manner of alternatives to using a university CMS, because the things were so awful 15 years ago.

Apparently they are still awful. In some respects they are even worse, as the web around them has left them in the dust. Think about WordPress, YouTube, Twitter, Facebook, Instagram, Google Docs, or Reddit. Consider how easy they are to use, how flexible, how fast, how mobile. Think about how easy it is to create, edit, and share content. UBneverlearns, as I’ve now decided to call it, like any CMS, is basically a graveyard of content and conversation. Or maybe it’s more accurate to call it a morgue, where the instructors do their version of CSI before pronouncing a grade.

Of course these other sites present their own pedagogical problems. There are privacy concerns, not only in terms of the data these sites collect but also in terms of how, as faculty, one will communicate grades and such to students. There’s the problem of having to ask students to create multiple accounts (e.g., we’ll have discussion on WordPress but upload your videos on YouTube, then let’s use Google Docs to work collaboratively on a document, etc.). And the reality is that a fair segment of students will struggle with the digital literacy demands of using multiple sites, though there may be a legitimate argument for saying that they should learn how to do that.

From the faculty perspective, one can either take the default route of using Blackboard and following its path of least resistance, or one can devote a non-trivial amount of time to rolling one’s own learning environment. At least for me, as a digital rhetorician, there’s some overlap between figuring this stuff out for pedagogical purposes and the research that I do. For 99% of faculty this isn’t the case.

This is why I get a sardonic chuckle out of views like that offered by the Horizon Report, a document produced by experts in educational technology, who steadfastly claim that teaching digital literacy is a “solvable challenge,” by which they mean one that they understand and know how to solve. Where is the evidence that a significant portion of faculty are digitally literate? Products like Blackboard do little to convince me that even educational technologists are digitally literate. I mean, higher education can’t even manage to produce a platform where one could start to teach digital literacy.

The more I think about this, the sicker it makes me. Eighteen-year-olds entering college in the fall would have typically started kindergarten in 2005. Still, we’ve spent the last decade teaching them to sit quietly in rows, take notes, read textbooks, complete worksheets, and pass standardized exams. Pretty much like I did in the 70s and 80s. While they may get the majority of their entertainment from the web, they’re barely better prepared to learn, communicate, collaborate, or work in a digital environment than I was at their age. And, obviously, faculty, overall, are barely better prepared to teach them such things, and universities are barely better prepared to support such teaching and learning. Instead they give us products like Blackboard as if their sincerest wish is to persuade faculty to keep learning in meatspace. That’s the oddest part, since we all know that universities desire those online students.

So one of my goals for this summer will be figuring out some constellation of applications that I can integrate to teach my classes. I’m sure I will use UBneverlearns in a minimal way since the students will look there first: probably for a syllabus and a gradebook but nothing beyond that.

Categories: Author Blogs

A different direction for asking why you heard what you did

18 May, 2018 - 10:41

So the basic story of the recent “Laurel or Yanny” phenomenon, as near as I can figure, is this. You’re listening to a degraded digital recording of a voice that has distorted some of the low-frequency sounds a human voice makes. So depending on a number of factors–some physiological, some psychological, some technological (e.g. the speakers or headphones you’re using), and some environmental–you might hear one or the other.

So what does this mean for us? Adam Rogers puts it this way in Wired:

There is a world that exists—an uncountable number of differently-flavored quarks bouncing up against each other. There is a world that we perceive—a hallucination generated by about a pound and a half of electrified meat encased by our skulls. Connecting the two, or conveying accurately our own personal hallucination to someone else, is the central problem of being human. Everyone’s brain makes a little world out of sensory input, and everyone’s world is just a little bit different.

As he goes on to opine? lament? “It’s hard to imagine a more rube-goldbergian way of connecting with another person. Their thoughts to their mouth to pulsations of air molecules to a vibrating membrane inside a hole in your skull to bones going clickety-click to waves of electrical activity to thoughts.”

In many respects this is a familiar story. It’s the modern divide Latour describes in our efforts to patrol the border between nature and culture. It’s the divide between empiricism and idealism in which different stripes of academics argue over the possibility of knowing something about the “world that exists” besides a “hallucination” of it.

And it’s also the founding problem of rhetoric, the one Rogers terms “the central problem of being human.” In the contemporary secular world where there is no recourse to divinely secured presence (aka souls) to secure intentions, rhetoricians turn to solutions that sound similar to Rogers’s, like Thomas Kent’s practice of “hermeneutic guessing.” On the other hand, Rogers’ neuroscientific turn is also a matter of concern for rhetoricians who are skeptical of the tendency/hope that brain science can explain away these problems with an fMRI or something. (It’s worth noting that neuroscientists are also often skeptical of what the mainstream takes from their research.)

For me, this odd viral story is an opportune moment to consider the usefulness of Latour’s “second empirical” method, one which he builds upon William James’s radical empiricism. Here’s Latour from An Inquiry into Modes of Existence:

The first empiricism, the one that imposed a bifurcation between primary and secondary qualities, had the strange particularity of removing all relations from experience! What remained? A dust-cloud of “sensory data” that the “human mind” had to organize by “adding” to it the relations of which all concrete situations had been deprived in advance. We can understand that the Moderns, with such a definition of the “concrete,” had some difficulty “learning from experience”—not to mention the vast historical experimentation in which they engaged the rest of the globe.

What might be called the second empiricism (James calls it radical) can become faithful to experience again, because it sets out to follow the veins, the conduits, the expectations, of relations and of prepositions —these major providers of direction. And these relations are indeed in the world, provided that this world is finally sketched out for them—and for them all. Which presupposes that there are beings that bear these relations, but beings about which we no longer have to ask whether they exist or not in the manner of the philosophy of being-as-being. But this still does not mean that we have to “bracket” the reality of these beings, which would in any case “only” be representations produced “by the mental apparatus of human subjects.” The being-as-other has enough declensions so that we need not limit ourselves to the single alternative that so obsessed the Prince of Denmark. “To be or not to be” is no longer the question!

I realize that’s a long quote, so if you just skipped over it, here’s the summary. Classically, empiricism divides observations into primary and secondary qualities, where primary qualities (e.g., length, width) are objective and secondary qualities (e.g., color, smell) are subjective. This is the worldview Rogers implies when he suggests that brains hallucinate reality: somewhere between things in the world and the electrochemical pulses of the brain, all the actual relations that hold things together become lost and inaccessible to us, and our brains just have to guess at what those things are based on a slush of sensory data.

So the weird thing about that is that our bodies and brains are just more things in the world. And our thoughts are just more events/actions in the world. If the lamp sits in relation to the table, then don’t my perceptions and thoughts about the lamp and the table also sit in relation to them as just more things? Why would one imagine that somewhere along the way, one passes into a parallel universe with a different ontology? I actually think the answer to that is fairly simple. One thinks that because one imagines oneself to be divine in some special way.

Rogers ends his essay this way. “Telling the person next to you what’s going on in your head, what your hallucination is like—I think that’s what we mean by ‘finding connection,’ by making meaning with each other. Maybe it’s impossible, in the end. Maybe we’re all alone in our heads. But that doesn’t mean we can’t work on being alone together.” And I don’t really mean to pick on this guy. I just think he neatly expresses a commonly held perception. The title of his article is “The Fundamental Nihilism of Yanny vs. Laurel,” which captures the feeling that the story reminds us that we don’t really share the world in common and that we are all alone in the end. However, I think this is more a romantic fantasy than a moment of nihilistic, existential angst.

We aren’t alone in our heads. We aren’t even only in our heads, in the sense that this meat is designed to operate in an environment. There’s stuff coming into our heads all the time–through our senses, through the blood-brain barrier, through electromagnetic waves. And there’s stuff coming out of our heads too.

Sure. It can be hard to communicate. It’s also hard to hit a curve ball or play the guitar. But this is not a story that begins with us being fundamentally alone and romantically struggling to understand one another. That’s backwards. It isn’t that you’re alone in the world and thus hear Yanny rather than Laurel. Instead, hearing one rather than the other contributes to your individuation. But that individuation, in my thinking, is just an iteration of a possibility space that one largely shares with the human population. Thinking about individuality as the output of a relational, environmental process rather than as a starting point that never really quite gets going because it can’t manage to connect to the world strikes me as a much more productive approach to investigating these experiences.

Categories: Author Blogs

the late age of late ages

16 May, 2018 - 18:47

It’s undeniably a quizzical situation. For the middle-aged rhetorician it’s the comically late age of the humanities/English Studies and the tragically late age of humans (cf. climate change) in the midst of a still spry rhetorical universe that will go on without us. I can only imagine a generation of mid-century factory workers punching clocks in steel mills and auto plants looking upon those industrial edifices with their cornerstones proclaiming “Look on my Works, ye Mighty, and despair!” and imagining an indefinite, unchanging future for themselves and their children, just as tenured faculty still do (or at least used to until recently).

If you’re in the humanities, have you ever wondered why tenure appears to be the one thing in all of human history, if not the entire universe, that is seemingly immune from the critical apparatuses of the humanist gaze? Is not tenure also a mechanism of colonialism? patriarchy? capitalism? neoliberalism? etc.? If not, what is tenure’s magic? Surely tenure has always operated inside hegemony. I can go on, but it isn’t necessary because anyone with a PhD in the humanities should be able to play this game. And in defending tenure, one simultaneously reveals the rhetorical playbook for turning back that critical gaze in any context. (Un)fortunately for us, no one was paying attention in the first place.

In the end, building and working in disciplines is not unlike building/working in factories or pyramids. For my particular journey through the discipline of rhetoric, the seeming hypocrisy in critiquing anything/everything except tenure isn’t much of a problem. Maybe you think I’m confessing that I’m full of crap. Maybe. But then maybe you should have read the ToS before you agreed to it. The seeming hypocrisy isn’t a problem, but the fading efficacy of such arguments is.

In the late age of late ages, we’re all too tired for this, aren’t we? (I will concede that it might just be me getting old.)

That said, nihilism is as immature a response to life as belief. One must imagine the professor teaching an ever-shrinking number of students as happy. After all, these things were plain to see decades ago. Playfully, let’s suggest this is Camus, not Sartre: there is an exit. The late age of “humans” presented by anthropogenic climate change and digital life need not mean the late age of “us”: to offer a Latourian malapropism, we have never been human. But what does the “humanities” have to offer to such beings, their cultures and economies? That’s a good question. I’m not sure. But I’m fairly certain that it begins with rejecting the immunity we have granted ourselves.

I know that sounds crazy. It’s the last, great counter-intuitive move for a disciplinary tradition that has laid all its other bets on counter-intuitive thinking.

In the late age of late ages, it’s time to go all in. If not now, then when?

Categories: Author Blogs

when AIs start vlogging

10 May, 2018 - 12:48

Right now I have two scholarly/professional interests, and I’m wondering how they intersect. On a general thematic level they appear to share a lot as they are both about digital technologies and communication/rhetoric. However, they also represent two very different segments of digital culture. I’ve been writing/speaking about both recently on this blog. The first has to do with the role of artificial intelligences as rhetorical agents. From speech synthesis to natural language processing to negotiating deals, AIs occupy rhetorical spaces. Their rhetorical behaviors are interesting for two reasons. The more immediate one is that we humans are increasingly interacting with AIs, having rhetorical encounters with them. The other reason is that the rhetorical actions of AIs might tell us something about how rhetoric functions outside human domains. If we think of rhetoric as a thing or process or capacity that is in itself not human but with which humans interact, then understanding how rhetoric interacts with other nonhumans might give us some broader insight into rhetoric itself.

The second thing I’ve been musing about is the emergence of digital genres, particularly over the last five years or so. Sure, vlogging, podcasting, infographics and so on have longer histories than that, but these are all things that had limited cultural roles a decade ago compared to where we are now. It’s hard to imagine that much of that wasn’t driven by the adoption of smartphones with the capacity to both deliver and produce video and audio content. Anyway, as I’ve been saying, even though these formats have been around for 15 years (and build upon decades of video, film, radio, and so on), they are still relatively immature in terms of the genres built on them. I think this is especially true as one looks at professional and academic genres. I find it hard to imagine that we can go another decade without these genres becoming more prevalent.

So how do these things intersect? The first answer that comes to mind is that AIs will become increasingly able to help people make and access media, starting with the technical qualities of image and sound. With natural language and image processing, one has the potential of creating indexical access to audio and video, as in “Show the part where there are monkeys” or “Go to where they talk about monkeys.” Then there are all the possibilities for using AI to help identify fake news and other bad actors. In short, there are a range of procedural-rhetorical ways in which AI will shape the composition, circulation, and consumption of video and audio.
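The "indexical access" idea above can be made concrete with a small sketch. This is entirely hypothetical (no particular product's API): it assumes a speech-recognition pass has already produced a timestamped transcript, and simply shows how a query like "go to where they talk about monkeys" could resolve to jump points in a video.

```python
def find_mentions(transcript, term):
    """Return start times (in seconds) of transcript segments that
    mention the term, so a player could jump straight to them.

    transcript: list of (start_seconds, text) pairs, the kind of
    output a speech-to-text pass over a video might produce.
    """
    term = term.lower()
    return [start for start, text in transcript if term in text.lower()]

# A toy transcript of a nature video (invented for illustration).
transcript = [
    (0.0, "Welcome to the jungle tour."),
    (42.5, "Here the monkeys gather at the river."),
    (90.0, "The monkeys groom one another."),
]

print(find_mentions(transcript, "monkeys"))  # → [42.5, 90.0]
```

The hard part, of course, is the upstream recognition, not this lookup; the point is only that once machine perception renders media as searchable data, audio and video gain the kind of random access that text has always had.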

In short, there are a number of places to start, and these are all viable. However, it’s not quite what I’m thinking about. For me, the interesting intersection is at once both more abstract and more material. Video/audio capture the environment in which one composes, starting with one’s body. Even with staging, editing, and such, that environment is always there in a way that it is masked in text. (BTW, that doesn’t mean the environment doesn’t shape textual composition but only that it’s less visible/harder to trace.) My interest in AI rhetorical actors is similarly in what they can tell us about our shared rhetorical environment. If we think about this from a “cognitive ecology” (or a cognitive media ecology) perspective then sensors (cameras, mics, IoT, etc.), data storage, composing/editing applications, AIs, humans, networks, mobile tech, etc. form an expansive environment. All the media compositions in which humans get actively involved–as vast as that is–are a thin veneer on the massive amount of expressive data of machines reporting and circulating their sensations. Similarly, the rhetorical negotiations involving humans will soon become a minor part of the larger conversation. We represent a small population of moving parts in this rhetorical environment.

Of course, the “always already” argument is always already available to us. We’ve always been immersed in a denser and richer rhetorical environment. It’s just that we’ve been too anthropocentric (perhaps unavoidably) and too sure of our privileged, exceptional ontological condition in the universe (less unavoidably) to recognize that immersion. While that observation is valuable (for purposes of humility if nothing else), it’s also necessary to recognize the shift that we are experiencing (without falling into the hype of that either).

Recognizing that our rhetorical capacities emerge from our participation in populations of assemblages within a cognitive media ecology is a fruitful starting point for describing the particular capacities that arise among us. And that’s fine for a broad research agenda that has legs. But it’s not the kind of thing I can teach to undergraduate students or even to graduate students–at least not without a larger curricular structure to support it.

One answer is re-separating the chunks, i.e., teach some media production in one place and some media theory someplace else. Of course it’s all Eternal September stuff with no curricular follow-up or follow-through. I mean there aren’t many English departments where students can systematically develop digital rhetorical/compositional knowledge, skills… let’s call it phronesis, in, say, the way they can march through a series of literary-historical periods. That said, if there were some follow-through structure then at some point you could start to think about how the construction of emerging genres is a structural-environmental conversation we’re having with these digital nonhumans. In other words, even for those who aren’t going in a scholarly direction in relation to these questions, there is usable knowledge to be gained here.

Categories: Author Blogs

30 years on…

2 May, 2018 - 19:10

30 years ago, I was an undergrad and just starting a job working for a start-up family business in the nascent IBM PC clone market. We assembled computers and sold them on to retailers. We distributed hard drives and other components. We consulted with small businesses to provide them with IT solutions for point-of-sale, inventory control, accounting, etc. Those of you who were there know the drill. Monochrome monitors, C prompts, no mouse, no windows, 8088 CPUs, RAM measured in kilobytes, 10MB hard drives the size of a kid’s lunch box. Could I have envisioned 2018? Sure, sort of. I mean, I read Neuromancer.

I was also an English and History double-major at Rutgers. They were the two largest majors in Rutgers College at the time (the Arts and Sciences college of the university in New Brunswick). Shakespeare, Detective Fiction, Arthurian Romance, The Crusades, World War 1, America in Vietnam: these were all English and History classes with hundreds of students enrolled in giant lecture halls: semester after semester, year after year. Maybe it’s still that way at Rutgers. If so, they’re a bit of an outlier.

Imagining the future/present of computers would have been easier, I think, than imagining the demise of the humanities. In the end the two were bound together. Not because computers are necessarily anti-humanistic but rather because the humanities, especially English, was born in the early 20th century with an expiration date. Despite the hypothetical possibility that English Studies need not be bound to print technology and culture, in the end, that’s what has happened. And there’s nothing wrong with the humanistic study of print culture. I’m sure it will carry on, in some fashion, for a very long time. It just isn’t going to be central to how we understand communication as it operates in our living culture.

So it makes me think about 30 years forward. Assuming we still have something like today’s tenured professors at universities (a future that is far from guaranteed in America), I am confident there will be faculty who research the contemporary cultural, political, and professional practices of communication, in whatever media prevail. In short, there will be professors who extend the rhetorical tradition into the media ecologies in which they and others live and work. Of the rest of English Studies, who knows? I would guess some smaller version–akin to classics, art history, or philosophy today–will exist.

And it probably won’t take 30 years to get there.

In part, I’ve been thinking about this because I’ve been idly following the vlogger Casey Neistat recently in his efforts to imagine this new business of his called 368. Basically, it’s a kind of school… sort of. It’s a place where vloggers and podcasters might come, gain access to tools and support, learn how to up their production game, and get business advice in terms of marketing and finding an audience. In a recent vlog, Neistat made the astute analogy between contemporary vloggers and Buster Keaton in that they are both working in an emergent medium/genre and trying to figure out what is possible and how to achieve it. His business model seems essentially based on this observation and the opportunity that lies in figuring out how to become the MGM of YouTube (or maybe more accurately United Artists).

I suppose in part I’ve thought of my own work as a less entrepreneurial/commercial and more scholarly version of this objective: to invent/discover/study rhetorical practices and genres for an emerging media ecology. That’s me as a member of the Florida school who was raised by wolves and a copy of Ulmer’s Heuretics read by moonlight.

I will admit that I’ve never been the best professor for the student who wants to be told what to do. Maybe that sounds like a fake criticism, like saying in an interview that your greatest weakness is that you work too hard or that you’re too honest, but I really don’t mean it that way. There’s a place and time for direct instruction, and I try to give it, but I can admit that’s not my strong suit. I’ve always come more from the perspective of saying “Here are some tools. Find something interesting about them and give it a go. If the whole thing falls apart, I don’t really care as long as you learned from it.” It’s a fine pedagogy, in its way, but it really only works with students who have some intrinsic motivation related to their work. I’m capable of enough reflection to know that’s my own version of a mini-me pedagogy, because that’s how I learn. I find something I care about and then I bang and grind away until I get somewhere.

I think it’s premised on the notion that while we think it is unwise to try to reinvent the wheel, trying to figure out how one invents something like that, taking your own journey through invention… well that’s what learning is to me.

What does this have to do with looking 30 years back and forward? Good question. I suppose it has to do with my basic plan for moving forward–claiming an interest, banging, grinding, experimenting, inventing. It’s the opposite of institutional/disciplinary humanistic methods, which are fundamentally homeostatic.

Categories: Author Blogs

what is digital professional communication? (the video)

20 April, 2018 - 10:42

Having asked students in my classes to experiment with video, I took on the task myself to make a video where I think about this question and the course I’m teaching in the fall. At this point I could offer various caveats regarding first attempts and such but to be honest I had quite a bit of fun making this and getting back to digital composing (though you can’t really go back, especially not with digital media).

Categories: Author Blogs

what makes professional digital communication interesting?

18 April, 2018 - 12:28

First, a tangent. Over the last week I’ve been part of a listserv conversation that reprises the now familiar question about how English Studies majors should change (or not). As I noted there, this has become a familiar genre of academic clickbait, like this recent buffoonery from the Chronicle of Higher Ed. Among other things I pointed out that, from a national perspective, the number of people earning communications degrees (which was negligible in the heyday of English majors 50-60 years ago) surpassed the number getting English degrees around 20 years ago. Since then Communications has held a fairly steady share of graduates as the college population grew, while English has lost its share and in recent years even shrank in total number, as this NCES table records. In short, students voted with their feet and, for the most part, they aren’t interested in the curricular experience English has to offer (i.e. read books, talk about books, write essays about books). Anyway, in such conversations the prospect of teaching professional writing, technical communication, and/or digital composing is often raised. The predictable response is a rejection of such curricula on the grounds that it is instrumentalist, anti-intellectual, and generally contrary to the values of English Studies, both in terms of literary studies and rhetoric/composition. Though I have no interest in defending the kind of work I do, there’s really a more important response. First, nothing is going to “save” English. It’s over. Second, by over, I mean it will just be small, serving 1-2% of majors. It will probably remain larger than Math or Philosophy, and no one is saying those fields aren’t valuable. English can be small and valuable.

That said, I do find amusing the evidence such conversations (and clickbait articles) offer of the narrow utility of the intellectual capacity afforded by disciplinary thinking in English Studies. For example, I’m teaching a course called professional digital communication this semester and look to do so again in the fall in an online format. What is it? I suppose you could imagine it to be an instrumental tour of how-to’s for various business-related digital genres: how to make a brochure website, how to design a PowerPoint slide, how to use desktop publishing to write a report, how to make a professional web portfolio, how to write a professional email, etc. But think about these three words. Professional. Digital. Communication. What are they? Asking what communication is opens the entire field of rhetorical study. And digital? What does that comprise, from technical answers to histories and cultural values/associations? How does “digital” modify “communication”?

However, I actually think it is “Professional” that is the most confounding. At first glance, it should be simple. It should just mean the kind of communication (i.e., the genres I guess) that “professionals” use (and, in this case, are “digital” somehow). Well… basically all workplace genres are “digital somehow,” even if that only means they are composed in MS-Word. I suppose the implication is that professional also indicates some technical facility with digital tools beyond the typical office suite. These can, somewhat clumsily, be divided into two categories. The first contains genres that existed as genres 40 years ago and have since been softwarized (e.g. reports, technical manuals, printed materials like brochures). The second contains born-digital genres and genres that have been significantly transformed by their softwarization (e.g., the way an instruction manual might become a how-to video/screencast or a brochure becomes a brochure website). So the first category might get one thinking about the role of XML/DITA in organizing content in large technical databases or visual communication principles deployed in InDesign or similar desktop publishing software. The second category is more elusive as the genres are fluid: video, podcast, social media, game, infographic, mobile app, website. Of those maybe the last has a stable genre. Furthermore, the changing nature of the work carried out by these professionals, the “adhocracies” in which they often work (to use Clay Spinuzzi’s term), and the continual churn of the technology make it very difficult to define “professional.” In short, there’s a lot to investigate in those three words, which is why there is extensive scholarship across rhetoric, professional-technical communication, and communications faculty on these subjects.

But, in my experience, the most challenging part of running a course like this is the pedagogical shift into learning through cycles of experimentation and reflection.  Part of what I do is say “We will read scholarship from these fields for the purpose of understanding how it might inform the development of our own practices.” So we do read and talk. But mostly we are engaged in experimental composition. Through our experimentation with various digital tools and genres we aim to understand what “professional digital communication” might be, with the hope that our readings provide some useful terminology and apparatus for doing so.

And for me that’s what makes professional digital communication interesting. It’s not so much reading the scholarship and writing scholarly responses. (Though that interests me too; I am an academic after all.) Instead, it’s the doing of it, the composing, and the insight those experiences give me into the research we do and vice versa.

Categories: Author Blogs

The empty space of the academic presentation

9 April, 2018 - 17:02

So there’s a fairly good chance you know more about Casey Neistat than I do. He’s something of a YouTube sensation with over 9 million subscribers. He also had an HBO series (I guess you’d call it that). In my “copious spare time,” I’ve been hunting around, trying to catch up on the world of digital composing that passed me by while I was sentenced to several years as a WPA, and Neistat is someone I’ve only recently (and belatedly) encountered. Below is today’s episode in Neistat’s newly revived daily vlog on his efforts to build something (not quite sure what) in a space at 368 Broadway in Manhattan.

But this post isn’t really about that. It’s about this particular episode from an angle to which many academics could relate (is that the right preposition, “to”?).

 

This episode sees Casey traveling to Montreal to give a presentation. We see nothing of the presentation. Instead we see the antics of travel. In particular we see his struggles with his motorized scooter-luggage combo.

This relates to my earlier post on “meatspace meetspace.” (BTW, I love how my browser complains that meetspace isn’t a word but is mum about meatspace.) Whether you’re headed to a conference where you’re one of 100s giving presentations or giving an invited talk somewhere, your primary experience is not about the presentation; it’s about all the miscellanea around it: the travel, the hotel, the food, socializing, etc. That’s what this particular episode captures.

I may regret saying this, but I think I’d love to see thousands of 10-minute videos of my colleagues’ travails going back and forth to some BS conference. I’m thinking those would be far more compelling, far more likely to convince me to take interest in their work, than 15-20 minutes of their reading a paper with bulletpoint slides.

Categories: Author Blogs

On the importance of deep mixture density networks and speech synthesis for composition studies

28 March, 2018 - 08:11

Eh? What’s that?

I’m talking about AI approaches to the synthesis of speech on your smartphone and related devices. I.e., how does Siri figure out how to pronounce the words it’s saying? OK. But what does that have to do with us?

Another necessary detour around the aporias of disciplinary thought… This is really about recognizing the value of computer simulation in articulating the possibility spaces from which reality emerges. Put in the most general terms, if you want to know why one thing is composed rather than another, why there is something rather than nothing, you need a way of describing how one particular thing emerges from the virtual space of possibilities where many other things were more or less probable. Simulation is a longstanding concept in the humanities, mostly through Baudrillard, but the intensification in computing power changes its significance. As DeLanda writes, in Philosophy and Simulation, 

Simulations are partly responsible for the restoration of the legitimacy of the concept of emergence because they can stage interactions between virtual entities from which properties, tendencies, and capacities actually emerge. Since this emergence is reproducible in many computers it can be probed and studied by different scientists as if it were a laboratory phenomenon. In other words, simulations can play the role of laboratory experiments in the study of emergence complementing the role of mathematics in deciphering the structure of possibility spaces. And philosophy can be the mechanism through which these insights can be synthesized into an emergent materialist world view that finally does justice to the creative powers of matter and energy.

So this post relies at minimum on your willingness to at least play along with that premise. It is, as DeLanda remarks elsewhere, an ontological commitment. You may typically have different commitments which of course is fine. Within a realist philosophy like DeLanda’s the value proposition for an ontological commitment is its significance rather than its signification, by which he means it’s less about its capacity to represent/signify Truths and more about its capacity to create capacities that make a difference (its significance).

In this case, we have the simulation of speech. Basically what happens (and basic is the best I can muster here) is this: Siri’s voice is constructed from recorded human speech. That speech is divided up into constitutive sounds, and the purpose of speech synthesis is then to figure out how to recombine those sounds to make natural-sounding speech. [n.b. A common error in this conversation is to identify the semblance between a computer and a human at the wrong level: to assert that the human brain is like a computer. However, I don’t think anyone would suggest that humans operate by having a database of sounds that they then have to probabilistically assemble in order to speak.] While humans don’t form speech this way, we do obviously have a cognitive function for speaking that is generally non-conscious (exceptions being when we are sounding out an unfamiliar word, learning a new language, etc.). Generally we don’t even “hear” the words we read in our minds (though I bet you’re doing it right now, just like you can’t not think of a pink elephant).
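
To make that recombination concrete, here is a toy sketch in Python. It is a deliberately crude stand-in: the phoneme units, pitch numbers, and greedy “join cost” are all invented for illustration, and real systems (including the deep mixture density networks this post’s title alludes to) model such costs statistically over far richer acoustic features.

```python
# Toy unit-selection synthesis: assemble a target phoneme sequence from a
# tiny "database" of recorded units, choosing at each step the candidate
# whose acoustic features best join with the previous unit. Each unit is
# (phoneme, start_pitch, end_pitch); pitch stands in for the many features
# a real synthesizer would track.
DATABASE = {
    "HH": [("HH", 120, 125), ("HH", 180, 190)],
    "AH": [("AH", 128, 122), ("AH", 200, 195)],
    "L":  [("L", 121, 119), ("L", 150, 160)],
    "OW": [("OW", 118, 110), ("OW", 170, 165)],
}

def join_cost(prev_unit, cand_unit):
    """Acoustic mismatch between the end of one unit and the start of the next."""
    return abs(prev_unit[2] - cand_unit[1])

def synthesize(phonemes):
    """Greedy unit selection: pick the smoothest-joining candidate at each step."""
    chosen = [DATABASE[phonemes[0]][0]]
    for ph in phonemes[1:]:
        best = min(DATABASE[ph], key=lambda u: join_cost(chosen[-1], u))
        chosen.append(best)
    return chosen

# "Synthesizing" the word "hello": the low-pitched units chain together
# because their joins are smoother, even though no meaning is consulted.
units = synthesize(["HH", "AH", "L", "OW"])
print(units)
```

Note that nothing in this procedure knows (or needs to know) what “hello” means, which is precisely the point taken up below.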

One thing that is clear in speech synthesis is that the process that seeks to approximate the sounds of “natural speech” does not know the meaning of the words being spoken, does not need to know that the sounds being made are connected to meaning, and does not even need to know that meaning exists. It is a particular technological articulation of Derrida’s deconstruction of logo-phonocentrism, whose heritage he describes as the “absolute proximity of voice and being, of voice and the meaning of being, of voice and the ideality of meaning” (Of Grammatology). Diane Davis takes this up as well, writing “it is not only that each time ‘I’ opens its mouth, language speaks in its place; it is also that each time language speaks, it immediately ‘echos,’ as Claire Nouvet puts it, diffracting or laterally sliding into an endless proliferation of ‘alternative meanings that no consciousness can pretend to comprehend’” (Inessential Solidarity). None of that is to suggest that meaning does not exist or even that the words Siri speaks are meaningless. No, instead it leads one toward a new task: describing how the mechanisms (or assemblages, to stick with DeLanda’s terms) for signification and significance are separate from–though certainly capable of relating to–the assemblages by which speech is composed.

But getting back to speech synthesis. I’ve been clawing my way through a couple pieces on this subject like this one from Apple’s Machine Learning journal and this one coming out of Google research. This is highly disciplinary stuff and at this point my understanding of it is only on a loose conceptual level. However, I’m trying to take seriously DeLanda’s assertion regarding “the role of mathematics in deciphering the structure of possibility spaces,” as well as his claim that “philosophy can be the mechanism through which these insights can be synthesized into an emergent materialist world view that finally does justice to the creative powers of matter and energy.” It is that last part that I am pursuing and which, at least for me, is integral to rhetoric and composition.

Here however is my hypothesis. Despite the arrival (and digestion) of poststructuralism in English Studies in the last century, rhetoric and composition remains a logo-phonocentric field. The digital age (or software culture as Manovich terms it) has put serious pressures on those ontological commitments (and that’s what logo-phonocentrism ultimately is, an ontological commitment). The mathematical description of the possibility spaces of speech synthesis and the subsequent simulation of speech are just one small part of those pressures, a part so esoteric as to be difficult for us to wrap our minds around.

But what happens when we start disambiguating (decentering) the elements of composition that we habitually unify in the idea of the speaking subject? To return to DeLanda here as I conclude:

The original examples of irreducible wholes were entities like “Life,” “Mind,” or even “Deity.” But these entities cannot be considered legitimate inhabitants of objective reality because they are nothing but reified generalities. And even if one does not have a problem with an ontological commitment to entities like these it is hard to see how we could specify mechanisms of emergence for life or mind in general, as opposed to accounting for the emergent properties and capacities of concrete wholes like a metabolic circuit or an assembly of neurons. The only problem with focusing on concrete wholes is that this would seem to make philosophers redundant since they do not play any role in the elucidation of the series of events that produce emergent effects. This fear of redundancy may explain the attachment of philosophers to vague entities as a way of carving out a niche for themselves in this enterprise. But realist philosophers need not fear irrelevance because they have plenty of work creating an ontology free of reified generalities within which the concept of emergence can be correctly deployed. (Philosophy and Simulation)

I would suggest an analogous situation for rhetoricians. Perhaps we fear irrelevance in the face of “reified generalities” that form our disciplinary paradigms. What happens when not just “voice” or “speech” is distributed but expression itself becomes described as emerging within a distributed cognitive media ecology?

In any case, that’s where my work is drifting these days and it was useful for me to glance back toward the discipline here to get my bearings vis-a-vis some future audience I hope to address.

Categories: Author Blogs

distributed deliberation and Cambridge Analytica

19 March, 2018 - 10:11

One of the major stories of the weekend has surrounded the interview with Christopher Wylie, former employee turned whistleblower of Cambridge Analytica. Here’s that interview if you haven’t seen it.

It’s good to see this story getting attention, but it’s also something we’ve basically known for a while, right? For example, here’s a NY Times op-ed from right after the election talking about how the Trump campaign used the data from Cambridge Analytica to target voters. Or you can watch this BBC news interview with Theresa Hong, who was Trump’s Digital Content Director, which is from last August. In the interview, she gives a tour of the office where she worked–right next to where the Cambridge Analytica folks were working. If you watch that interview, right before the 3-minute mark she explains how people from Facebook, Google, and YouTube would come to their office and help them. They were, in her words, their “hands-on partners.” Unless she’s straight up lying about that, which would seem pretty weird in the context of a video where she otherwise gleefully recounts her role in an information warfare campaign, it’s essentially impossible to believe that Fb didn’t know what Cambridge Analytica was doing.

The funniest part of the recent news cycle is when the newscaster turns to the expert and says, “do you think this affected the outcome of the election?” Hmmmm…. do you think the Trump Campaign spent $100M+ ($85M on Fb advertising alone) in order to not affect the outcome?

So that’s the news. Now, here’s my part. Let’s be good humanists and start with a straight dose of Derrida and the notion of the pharmakon. In considering the pharmacological operation of media, beginning with writing, one might investigate the cognitive effects emerging from technologies. As I’ve written about before, Mark Hansen picks up on this in Feed Forward; indeed, it is really the thesis of that book:

Like writing— the originary media technology— twenty-first-century media involve the simultaneous amputation of a hitherto internal faculty (interior memory) and its supplementation by an external technology (artifactual memory). And yet, in contrast to writing and all other media technologies up to the present, twenty-first-century media— and specifically the reengineering of sensibility they facilitate— mark the culmination of a certain human mastery over media. In one sense, this constitutes the specificity of twenty-first-century media. For if twenty-first-century media open up an expanded domain of sensibility that can enhance human experience, they also impose a new form of resolutely non-prosthetic technical mediation: simply put, to access this domain of sensibility, humans must rely on technologies to perform operations to which they have absolutely no direct access whatsoever and that correlate to no already existent human faculty or capacity.

If you’re wondering what a “non-prosthetic technical mediation” might be, well one example is the underlying technical operations that drive this Cambridge Analytica story. Non-prosthetic suggests, contra McLuhan, media that are not “extensions of man” (sic).

Think of it this way (and this is a little slapdash but should get the idea across). There’s always a price to be paid to gain access to new capacities. As Hansen suggests, with writing you give up interior memory for artifactual memory. With photography and then film we extend artifactual memory into the visual but at the cost of access to unmediated experiences. Think of selfies or Don DeLillo’s “most photographed barn in America,” or Walter Benjamin’s remark that film introduces us to unconscious optics. Hansen’s 21st century media, what I would think of as networked, mobile digital media, offers a range of capacities (I won’t attempt to enumerate them) but at what cost? Basically everything. We give the network everything that we know how to give.

It strikes me that you can think of the development of media technologies as an incremental distribution of human cognition. This only works because cognition is always already relational and distributed. I.e., the biological capacity for thought emerges from an existing environmental/ecological platform–we think only because there are things to think about and with. I don’t want to go down the rabbit hole right now, but the ultimate conclusion is that we have become, are becoming, interwoven with our digital selves: politically, psychologically, affectively, cognitively and so on in just about any dimension you can imagine.

And as the saying goes, with these Cambridge Analytica revelations we aren’t seeing the beginning of the end of this story but rather the end of the beginning. We can make laws, create tools, and try to educate people, but, without falling for techno-determinism, human cognitive capacities are shifting and nothing short of turning off history is going to change that. I don’t think the nature of the shift is pre-determined, but we cannot go backward and we cannot stand still.

Distributed deliberation is all over this story. It’s in the mechanisms of the apps in which FB users answered questions to figure out which Hogwarts house they belonged in (or whatever). It’s in the processes by which that data was collected and transformed into psychographic profiles. It’s in the way those profiles were organized and targeted with specific messages by the Trump campaign. It’s in how those messages were promoted and made visible to those users by social media platforms like Facebook. It’s in the way those messages were then further spread through those users’ networks of friends. Fundamentally, distributed deliberation, particularly as it applies to digital media ecologies, is the way nonhuman information technologies–from bots to server farms to algorithms–participate in the evaluation of information (making judgments for us) and in the making and circulating of deliberative arguments. They don’t do it alone. Humans are part of the system. Sometimes, certainly as in the case of the figures in these news stories, in very intentional ways. However, that’s just dozens of people. Tens or hundreds of millions more participated by sharing data that was harvested unbeknownst to them; millions were targeted by these messages; thousands worked for companies doing innocent jobs, helping with customer service or maintaining servers, without whom none of this was possible (you could keep “thanking” people until the orchestra at the Oscars passed out).
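
The harvest-profile-target loop described above can be sketched in miniature. To be clear, every signal name, trait, and ad message in this toy is invented for illustration; the actual operation ran on vastly larger models and ad inventories, but the basic movement–harvested signals scored into a crude profile that then selects a message–is the same.

```python
# Toy psychographic targeting: "likes" and quiz answers are scored into
# trait weights, and the message variant for the dominant trait is served.
# All names here are hypothetical.
TRAIT_SIGNALS = {
    "likes_gun_pages":       {"fearful": 2},
    "likes_travel_pages":    {"open": 2},
    "took_personality_quiz": {"open": 1, "fearful": 1},
}

ADS = {
    "fearful": "They are coming for what's yours.",
    "open": "Imagine a freer future.",
}

def profile(user_signals):
    """Sum harvested signals into a crude trait-score profile."""
    scores = {}
    for signal in user_signals:
        for trait, weight in TRAIT_SIGNALS.get(signal, {}).items():
            scores[trait] = scores.get(trait, 0) + weight
    return scores

def target(user_signals):
    """Deliberate for the user: pick the ad matching the dominant trait."""
    scores = profile(user_signals)
    dominant = max(scores, key=scores.get)
    return ADS[dominant]

print(target(["likes_gun_pages", "took_personality_quiz"]))
```

The point of the sketch is that the “deliberation” over which message a user encounters happens entirely in these few nonhuman steps, before any human judgment is exercised.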

We are not helpless in the face of this, but we do need new rhetorical practices. And that may sound hopelessly academic and disciplinarily solipsistic, but I would argue that it is rhetoric that gave us some modicum of agency over speech and writing and other media over time. And that’s what we need now: to invent new rhetorical tools and capacities.

Categories: Author Blogs

4C’s and the rhet/comp slasher

17 March, 2018 - 12:00

This is not the working title for my academic murder mystery, so feel free to take it if you like.

No. It’s about a growing conversation over the role of rhetoric in composition studies, and the emergence–at least as perceived by some–of 4Cs as a composition studies conference and not (or at least less so) a rhetoric conference. In phrasing it that way, I hope that I am emphasizing that the trends at the conference are an effect of a larger disciplinary (or is it now inter-disciplinary?) evolution.

Undoubtedly it’s been a long time coming. Some might say that it is a division that has been baked into composition from its formation more than a century ago. Or we might look at the 1980s, when the solidification of rhet/comp in the form of PhD programs brought with it debates over the historiography of the field and the “cultural turn.” [I’d also point to the arrival of PCs, the internet, and what Manovich terms the softwarization of culture, but that’s a subject for a different day.] And since then we’ve had a proliferation of specializations, which I think you could find in the language of job ads, the networks of journal article citations, the birth of journals, book series, and conferences, and so on. My department’s new certificate in Professional Writing and Digital Communication is one small example of that. It’s certainly not composition studies or pedagogy. It’s not rhetoric. It’s not cultural studies. It’s not even technical communication exactly. Sure, it touches all those things–all these fields abut one another to some degree–but it’s something else.

The current conversation, as I’ve encountered it, is that rhetoric (e.g., history of rhetoric, rhetorical theory, etc.) has slowly disappeared from CCCC. Meanwhile RSA membership has expanded. I’m not sure if the conference is growing, but I do think there’s a sense that some scholars who might have viewed CCCC as their home conference and organization a couple decades ago now look to RSA instead. RSA now has about the same number of panels as Cs.

I honestly don’t know if that’s a good thing or a bad thing. You might compare this with MLA. The MLA conference is clearly interdisciplinary. It doesn’t include much rhetoric, but it does include language departments along with English literary studies. These days the conference is about half the size it was when I was first going on the market. I think that may be because there are fewer jobs and fewer institutions interviewing at MLA rather than there being fewer panels. But with attendance in the 5-6K range, it’s about 50% larger than 4Cs, which I think is in the 3-4K range. MLA is one day longer. It has over 800 sessions. 4Cs this year is over 500, so again MLA is roughly 50% larger. I guess if you tack on ATTW, which always runs the day before 4Cs, then you get closer in size. I suppose my point is that, hypothetically, the conference could grow to MLA size and perhaps address this trend if it wanted. That’s hypothetical though. I’m sure there would be many logistical challenges to doing so.

More important though is the question of whether or not we need to be all in the same space.

Just thinking about this from my own scholarly perspective (and figuring many have analogous situations), there’s one part of my work that is really outside of rhet/comp or even English studies, drawing on media studies, digital humanities, new materialist philosophy, and tangentially a bunch of other stuff like cognitive science, engineering, etc. And then inside of rhet/comp there are dozens of people whose work is very close to mine and hundreds more who are nearby or coming into digital rhetoric as graduate students. It’s probably impractical to keep close track of all the scholarship that self-identifies as either “digital rhetoric” or “computers and writing.” It certainly is for me when I’m also following these other extra-disciplinary conversations. So really, in doing my scholarly work, my focus necessarily has to start here.

If one thinks the primary purpose of a conference is to learn what other people doing scholarship like yours are up to and then do some networking with them, then, at least for me, 4Cs is pretty inefficient. There may be nearly 4K attendees at Cs, but of the ~100 people I’d want to catch up with–people whose research impacts mine directly–fewer than 20 were at the conference. If you think about a conference like Cs as a place to get some slice of the broader picture of composition studies, then maybe it works well for that. IDK. I mean, do people do that? I know many people say they always try to go to at least one panel that’s sort of random. I do that too, including this year. And that’s fine, but really only as a supplement to that primary task of connecting to people who aren’t random.

Again I want to reiterate that this isn’t a criticism of Cs. I certainly don’t want the job of organizing that beast. I have zero interest in making an argument along the lines of there should be more people like me at Cs! I think that’s an unworkable argument at scale for everyone who identifies as rhet/comp.

I actually think these trends point to a more interesting problem. I think we may be in a paradigmatic crisis of sorts in the sense that I don’t know what we share as scholars–methods, objects of study, foundational assumptions, research questions? And we don’t need to share those things, but if we don’t then in what sense are we connected? By a shared history, I guess, but I don’t know if that’s enough, especially as those shared moments are receding from living memory.

 

Categories: Author Blogs

Composing in and with data-driven media #4C18

16 March, 2018 - 07:04
[Image: China social credit system. Credit: Kevin Wong, Wired]

[A text version of my presentation from yesterday]

At the beginning of the century, Lev Manovich identified five principles of new media, the last of which he termed “transcoding.” Transcoding describes the interface between cultural and computational layers. In part, transcoding observes that while digital media objects appear to human eyes as versions of their analog counterparts–images, texts, sounds, videos–they are also data and as such subject to all the transformations available to calculation. Such familiar transformations include everything from cleaning up family photos to making memes to share online or even changing the font in a word document. As the quality of video and the computational power available to average users increase, such transformations have also come to include altering videos to make people appear to say things they haven’t said or even putting someone’s head on another person’s body.
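
A minimal illustration of that computational layer: to the machine, the “same” media object is just an array of numbers, so “cleaning up a family photo” is arithmetic. (The tiny grayscale “image” below is invented for illustration.)

```python
# Transcoding in miniature: a picture on the cultural layer is an array of
# numbers on the computational layer, so any arithmetic transformation is
# also a media transformation.
image = [
    [  0,  64, 128],
    [ 64, 128, 192],
    [128, 192, 255],
]  # a tiny grayscale "photo": 0 = black, 255 = white

# Two familiar photo edits as pure calculation:
inverted   = [[255 - px for px in row] for row in image]          # negative
brightened = [[min(255, px + 40) for px in row] for row in image]  # lighten

print(inverted[0])    # top row of the negative
print(brightened[0])  # top row of the lightened copy
```

The deepfake examples that follow are, at bottom, this same move scaled up: statistical transformations of pixel data rather than a flat brightness shift.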

Perhaps unsurprisingly, much of the early exploration with this video technology has been with fake celebrity porn. That’s certainly NSFW, so I’ll leave it to you to investigate at your own discretion. The point is that the kinds of media we are able to compose are driven by our capacity to gather, analyze, and manipulate data.

The other part of transcoding, however, points to the way in which data-driven, algorithmic analysis of user interactions shapes our experience of the web, from Netflix recommendations to trending Twitter hashtags. In recent months, the intersection of these two data-driven capacities–the ability to create convincing fake media and then spread it across online communities–has become the subject of national security concerns and national political debate. I’m not here to talk specifically about unfolding current events, but they do offer an undeniable backdrop and shape the situation in which these rhetorical processes can be studied.

Certainly there will be technical efforts to address the exploited weaknesses in these platforms. However, computers are by definition machines that manipulate data, and, as long as these machines operate by gathering data about users and using that data to create compelling, some might say addictive, virtual environments, there will be ways to exploit those systems. After all, one might say these digital products are designed to exploit vulnerabilities in the cognitive capacities of humans, even as they also expand them.

Within this broad conversation, my specific interest is with deliberation. Classically, deliberative rhetoric deals specifically with efforts to persuade an audience to take some future action and, as Marilyn Cooper observes, “individual agency is necessary for the possibility of rhetoric, and especially for deliberative rhetoric” (“Rhetorical Agency” 426). This is not a controversial claim. Essentially, in order for deliberative rhetoric to work, one’s audience must have the agency to take an action. More generally, deliberation requires a cognitive capacity to access and weigh information and arguments. Regardless of whether those arguments come in the form of logical deductions or emotional appeals, the audience still requires the capacity to hear, evaluate, and act on them. However, in emerging digital media ecologies the opportunities for conscious, human deliberation are increasingly displaced by information technologies. That is, machines make decisions for us about the ways in which we will encounter media. In some respects, one can view this trend as inevitable and benign if not beneficial. Without Google and other search engines, for example, how could any human find information on the web? One might even look at recommendations from a subscription media streaming service like Netflix or an online store like Amazon as genuine, well-intentioned efforts to improve user experience, though clearly such designs also serve corporate interests. Similarly, changes to social media experiences such as Facebook’s massaging of what appears on one’s feed or the automatic playing of videos might improve the value of the site for users or might be deliberate acts designed to sway future user actions. 
Ultimately, though, the increasing capacity of media ecologies to record and process our searches, writing, various clicks, and other online interactions–to say nothing of our willingness to have our bodies monitored, from biometrics to our geographic movements–produces virtual profiles of users which are then fed back to them and reinforced.

To address these concerns, drawing upon a new materialist digital rhetoric, I will describe a process of “distributed deliberation.” This process references the concept of distributed cognition. Distributed cognition is not meant to suggest machines doing “our” thinking for us but rather to describe the observable phenomenon in which humans work collectively, along with a variety of mediating tools, to perform cognitive tasks no individual human could accomplish alone. Distributed deliberation works in the same way. It is useful to think about this in Latourian terms. That is, through the networks in which we participate we are “made to act.” That is not to say that we are necessarily forced to act but rather that we become constructed in such a way that we gain the capacity to act in new ways. For example, through their participation in a jury room or a voting booth, citizens are made to deliberate in ways that would not otherwise be possible. However, that is a little simplistic. While we may only be able to vote in that booth, there are many agents pushing and pulling on us as we deliberate. Typically it is far more difficult to discern the direction in which agency flows. As Latour observes, “to receive the Nobel Prize, it is indeed the scientist herself who has acted; but for her to deserve the prize, facts had to have been what made her act, and not just the personal initiative of an individual scientist whose private opinions don’t interest anyone. How can we not oscillate between these two positions?” (An Inquiry into Modes of Existence, 158-9). That is the oscillation between facts demanding certain actions and the agency of the scientist. For Latour, the resolution of this oscillation lies ultimately in the quality of the resulting construction, which, of course, is just another deliberation, and one that requires an empirical investigation, the following of experience.
To put it in the context of my concern: as a Facebook user hovers the mouse over the buttons to like and then share a news story placed into her feed, how do the mechanisms of deliberation swarm together and make the user act? Is the decision whether to share or not a good one? Furthermore, while we can and must pay attention to the experience of the human user, so much of the work of deliberation occurs beyond the capacity of any human to experience directly. As such, in charting distributed deliberation we must also investigate the experience of nonhumans, which will require different methods, and that’s where I will turn now.

Understanding the specific operation of those nonhuman capacities is a task well suited to Ian Bogost’s procedural rhetoric, which he describes as “the art of persuasion through rule-based representations and interactions rather than the spoken word, writing, images, or moving pictures. This type of persuasion is tied to the core affordances of the computer: computers run processes, they execute calculations and rule-based symbolic manipulations” (Persuasive Games, ix). Though Bogost focuses on the operation of persuasive games as they seek to achieve their rhetorical goals through programming procedures, he recognizes that procedural rhetoric has broader implications. Selecting a movie or picking a route home with the help of Fandango or Google Maps may be minor deliberative acts, but they offer fairly obvious examples of how deliberation can be distributed.

Yelp, for example, combines location data with a ratings system and other “social” features such as uploading reviews and photos, “checking in” at a location, and providing map directions. These computational processes compose a media hybrid, an expression with the capacity to persuade users. Certainly one might be persuaded by the text of a review or a particularly pleasing photo; text and image play a role here as they might in a video game. But the particular text and photos the user encounters are the product of a preceding procedural rhetoric that decides which businesses to display. It is not only restaurants and other businesses that are reviewed but the reviews and reviewers as well, which serve as part of a process that determines which of the dozens or hundreds of reviews a business might receive the user sees first. In the case of Yelp, users write reviews and rate businesses on a 5-star scale. Yelp then employs recommendation software to analyze those reviews and weigh them. Does it matter that the company claims the recommendation software is designed to improve user experience and the overall reliability of the reviews on the site? Maybe. What is key here, however, is that such invisible procedures undertake deliberations for us. The fact that it would be practically impossible for users to undertake this analysis of reviews independently, or that users are still presented with a range of viable options when looking for a local restaurant, does not alter the role that such procedures perform in our decision-making process.
In Yelp one finds a digital media ecology that includes juxtaposed multimedia (e.g., photos, icons, text, maps), computational capacities (e.g., linking, searches, location data, data entry for writing one’s own reviews), algorithms or procedures (e.g., ranking businesses, evaluating and sorting reviews), and media hybrids (e.g., combining with mapping applications to provide directions or linking with your phone to call a business). Indeed one might look at Yelp itself as a media hybrid with its own compositional processes, rhetorical procedures, and genres.
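The kind of procedural deliberation described above can be sketched in miniature. Yelp’s actual recommendation software is proprietary, so the sketch below is purely illustrative: the weighting heuristic, the field names, and every numeric constant are invented assumptions, not Yelp’s method. The point is only to make visible how a few lines of procedure can pre-decide which reviews a user sees first:

```python
from dataclasses import dataclass

@dataclass
class Review:
    stars: int             # 1-5 rating
    reviewer_reviews: int  # how many reviews this reviewer has posted
    age_days: int          # how old this review is

def weight(review: Review) -> float:
    """Invented heuristic: trust active reviewers, favor recent reviews."""
    activity = min(review.reviewer_reviews, 50) / 50  # capped at 50 reviews
    recency = 1 / (1 + review.age_days / 365)         # decays over a year
    return 0.6 * activity + 0.4 * recency

def ranked_reviews(reviews):
    """Order reviews by weight: this decides which ones a user sees first."""
    return sorted(reviews, key=weight, reverse=True)

def weighted_rating(reviews) -> float:
    """A star rating that is itself the product of the prior weighting."""
    total = sum(weight(r) for r in reviews)
    return sum(weight(r) * r.stars for r in reviews) / total
```

Even in this cartoon version, the deliberative work is done before the user arrives: which review appears at the top, and what the aggregate rating is, are both outputs of a procedure the user never sees.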

The softwarization of media did not take off fully until personal computing hardware was powerful enough to run it. Social and mobile media obviously rely on the various species of smartphones and tablets. They require the hardware of mobile phone and Internet networks and server farms. Whole new industries and massive corporations have emerged as part of this ecology, and this means people: HVAC technicians keeping server farms cool, customer service representatives at the Apple Genius bar, engineers of all stripes, factory workers, miners digging for precious metals in Africa, executives, investors, and so on. It also involves a shifting higher education industry with faculty and curriculum to produce research and a newly-educated workforce, an infrastructure that relies upon these products to operate, and students, faculty, and staff who feed back into the media ecology. In short, a media ecology cannot be only media just as rhetoric cannot be only symbolic. While digital media ecologies create species with unique digital characteristics, they cannot exist in a purely digital space any more than printed texts can exist in a purely textual space.

As such, whatever rhetorical power the algorithmic procedures of software might have, their most powerful rhetorical effect might lie in the belief users have in the seeming magic of a Google search or similar tools. However, as Bogost observes, algorithms are little more than legerdemain, drawing one’s attention away from the operation of a more complicated set of actors:
If algorithms aren’t gods, what are they instead? Like metaphors, algorithms are simplifications, or distortions. They are caricatures. They take a complex system from the world and abstract it into processes that capture some of that system’s logic and discard others. And they couple to other processes, machines, and materials that carry out the extra-computational part of their work…SimCity isn’t an urban planning tool, it’s a cartoon of urban planning. Imagine the folly of thinking otherwise! Yet, that’s precisely the belief people hold of Google and Facebook and the like. (“The Cathedral of Computation”)
Indeed, in some comic bastardization of Voltaire one might say that if algorithms didn’t exist we would have to invent them as a means of making sense of media ecologies and our role in them. That is, in the face of a vast, unmappable monstrosity of data, machines, people, institutions, and so on intermingling in media ecologies, the procedural operations of software produce answers to questions, build communities, facilitate communication, and generally offer responses to our requests, even as they shape those questions, communities, communications, and requests. In other words, the distribution of deliberation and other rhetorical capacities among the human and nonhuman actors of digital media ecologies is necessary and inevitable. Describing and understanding the complexities of these relations as they participate in our deliberations, rather than simply celebrating or bemoaning the apparent magical abilities of the tools we employ, becomes the first step toward building new tools, practices, and communities that expand our rhetorical capacities.

It’s worth noting that powerful entities are already intentionally at work on these goals. A year ago, when the concerns about Facebook and fake news were really taking off, Mark Zuckerberg published a manifesto declaring that “In times like these, the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us.” In a related vein, as has been widely reported, the Chinese government has a different idea of the role distributed deliberation can play in its creation of a social credit system. I don’t know about you, but I’m not especially sanguine about the notion of Facebook engineers building the social infrastructure for a global community. I’m even less enthused about the possibility of other nations importing these Chinese practices, as if we do not live in a thoroughly monitored environment as it is.

I won’t pretend there are any easy answers, any simple things one can slip into a lesson plan. The task begins with recognizing the distributed nature of deliberation and describing such processes to the extent that we can. This includes paying attention to the devices we keep closest to us and understanding the particular roles they play. And it means inviting those nonhumans into our disciplinary and classroom communities. Just as universities, departments, and faculty are capable of creating structures that encourage conformity to existing rhetorical and literate traditions, they might conversely create structures that are more open to these investigations. This might mean finding ways to use rather than restrict access to digital devices in the classroom and creating assignments that push students toward creative uses of the collaborative and distributed cognitive potential of digital networks rather than insisting on insular and individualized labor. It might mean asking questions that cannot be answered by close reading or setting communication tasks that cannot be accomplished by one person writing a text. From there the classroom has to proceed to create solutions to these tasks rather than assuming the answers already exist, which is not to suggest that answers are never readily available, but the emergent quality of digital media means, in part, that new capacities can always be considered.

Categories: Author Blogs

academic nostalgia for meatspace meetspace

14 March, 2018 - 10:19

Here we are once again, another national conference. I’m waiting out a three hour delay on my first flight with the second rescheduled, so fingers crossed. And let’s not even get started about the return trip. I don’t want to curse my luck that badly!

Tell me again why we do this? I mean I understand it’s a professional obligation. It’s part of my job to go to conferences. So I suppose the short answer is that it’s just another onerous, unproductive part of the academic bureaucracy, another holdover from a past century. I guess that’s enough of an answer. After all, we all have stupid things we need to do as part of our jobs. So as I sit in this terminal at least I can console myself with the fact that I’m getting paid.

But let’s try to summon up the fantasy of academic freedom one last time and imagine that we could actually set the terms of what we considered valuable in terms of scholarly work.

There are two obvious alternatives to the national conference meatspace meetspace. The first is the fully online conference. We stay home. We do videoconferencing to discuss papers/presentations that are posted online. The second option really just adds the dimension of a regional meet-up (i.e. something you could comfortably drive to and maybe not even need a hotel). E.g., maybe there’s a 3-4 day conference with 1 or 2 days where people get together.

What would be the point of adding that part? I don’t know. What’s the point of a meatspace meetspace? The short answer is socializing. It’s certainly not the presentations or the discussion following the presentations, which, for whatever value you can attribute to them, can easily be replicated online. So the value all lies in the informal dimensions of the conference: the serendipity of meeting a stranger who shares your research interests and becomes a new colleague/collaborator; catching up with colleagues; and maybe some esprit de corps of being surrounded by so many people in your discipline. It’s catching a drink or meal with friends you only otherwise see on Fb.

In other words, people enjoy socializing. I also enjoy socializing at conferences (for certain values of enjoyment). But if that’s what it’s about, then maybe we could just look to get group rates on cruises or something.

Setting aside the minor irritations of air travel and hotel stays (which compare to 21st century teleconferencing in the same way that air travel compares to 19th century train travel), there is an expanding list of drawbacks to national conferences:

  • The direct costs, especially for graduate students and contingent faculty;
  • The indirect costs of time taken away from other work;
  • The political and material concerns that now arise regularly with each convention location;
  • The carbon footprint.

I don’t know. I’m sure I’ve been writing roughly this same blog post for at least a decade. I like to go to conferences. I’m looking forward to enjoying my time in Kansas City. Maybe someone will have something interesting to say about my presentation… probably not but whatever. Hopefully I will catch up with friends.

But seriously… To me, the practicality of national conferences will eventually wane. It’s a when not if scenario. It’s really only a matter of tweaking some technical matters and figuring out the social mechanics.

Categories: Author Blogs

rhetoric of podcasts, podcasts of rhetoric

5 March, 2018 - 11:39


One of the very best things about no longer running the composition program is having the time and mental space to get back to digital rhetoric in a more practical and compositional way. This has got me thinking, in this post, about podcasting in terms of its various rhetorical structures but mostly about the kinds of podcasts that are out there in my field.

Before I started at Buffalo, I regularly taught classes on digital production and I did a fair amount of it myself. My 2010 Enculturation article, written the year before I became composition director, included a video. And in 2008, right before I left Cortland, I’d published an article in Kairos about teaching podcasting in my professional writing courses there. All that dropped away for me when I took on the WPA job here at Buffalo. That’s a story for a different time, but that’s part of the context for where I am now. The other part is that we’ve started a new graduate certificate in professional writing and digital communication, and this has provided some real exigency for me to get my hands dirty again with production.

You can do your own Google search or take my word for it that podcasts have become increasingly popular. A decade ago when I was teaching this stuff, we didn’t have the smartphones and mobile data networks we have now that make following podcasts so easy. (OK, here’s one quick stat from this article: these days 42 million Americans listen to a podcast every week.) Personally I like podcasts. I also like audiobooks. I use them for entertainment purposes. (Mostly I listen to podcasts about soccer.)

Given this, I think there are several good reasons to teach students in a professional writing curriculum how to podcast, including:

  1. Though they may never podcast professionally, this is a significant genre in our media ecosystem about which professional communicators need to have an understanding.
  2. Creating an audio recording is a fairly simple task, at least at a basic level. But it also opens a path for becoming more sophisticated. (I.e., minutes to learn, lifetime to master.) In this sense it’s a practical entry point into a larger field of sound rhetorics.
  3. Many students I encounter have a fairly narrow usage of media and even less experience as composers. So podcasting becomes one in a series of experiments with media production that begins to alter our relationship to composing from one that says “I’m a writer,” meaning I put words in a row on a page, to something more capacious.

My challenge though is that I always want to do the things I ask our students to do. And so I come to podcasting. The real challenges with podcasting are rhetorical and compositional. How does one create a compositional space in which one produces hopefully interesting podcasts on a semi-regular basis?

So what kind of podcasting is there in rhetoric and composition? Well, this FB page tracks several of the more prominent ones. Among the active ones on that page there’s Rhetoricity produced by Eric Detweiler, Rhetorical Questions produced by Brian Amsden, Eloquentia Perfecta Ex Machina produced by the St Louis University composition program, and the CCC Podcasts produced by NCTE. I’m sure there are more. I’ve listened to a few episodes of each, and they all follow an interview format, with some interviews more formal than others. So really you have someone new in each episode. That’s an entirely familiar and sensible format.

Another common format I encounter in listening to soccer podcasts is basically punditry/fan banter. There you have two or more regulars who discuss the events of the last week. I haven’t really seen that in rhetoric/composition, perhaps because we don’t really have “events” to discuss. Obviously you could discuss the rhetorical angles of current events, which would be a kind of application of rhetorical scholarship. It might be a good kind of podcast for someone to make, but that’s probably not an angle I’d want to take.

Part of the issue then is the periodicity of scholarship, or at least the periodicity of the communication of scholarship. It maybe should be noted that the latter is not “natural,” of course. The pattern of article publication, the length of articles, the length of conferences, the pattern of the academic conference schedule: these are as much a by-product of the material affordances of mid-20th century communication technologies as they are anything to do with the qualities of the objects and practices we study or our methods. Clearly there’s some feedback in that loop, and cybernetic/homeostatic impulses are at work.

So I’m wondering if it is possible to shift that periodization. I still think that blogging is an opportunity to do that, to have a more ongoing and less precious conversation about ideas and discoveries than what publishing allows (not as a replacement but as an enrichment). It never really became that. Maybe podcasting is a better medium for that. A few people get together and talk about their work, what they are reading, what’s happening in their classes. Is that interesting? I don’t know. I think almost anything has the potential to be interesting or boring depending on the audience, the situation, its production/performance/composition.

Like this blog for example… to quote Mitch Hedberg, “I played in a death metal band. People either loved us or they hated us… or they thought we were OK.”

Categories: Author Blogs