Digital Digs (Alex Reid)

an archeology of the future

speaking truth to Twitter

1 July, 2015 - 11:08

To be clear, Twitter has many possible uses, its primary one probably being making money, but, of course, its users, including me, put it to work in a variety of ways. It seems in the last year or two many academics have discovered Twitter (in much the same way that Columbus discovered America). And among academics one can also find Twitter being put to a wide range of uses, both personal and professional. Much of this is benign, but increasingly the public face of academics in social media is being defined around a fairly narrow class of tweets.

Perhaps it would be useful for someone to do a current analysis of the academic uses of Twitter and maybe even identify some subgenres among tweets. I haven’t done that analysis, so this is more like a sketch, but I am writing here about a particular tweet subgenre. In this subgenre, one is essentially making an appeal to pathos that energizes those who agree and incites those who do not. The emotion that is expressed is something like the righteous indignation that arises from an absolute certainty in the justness of one’s view and cause. It would appear that such tweets are often posted in anger, though one can only guess at the mind of another. Though such utterances can occur across media, Twitter is an excellent place to see them because the 140-character limit serves to focus the message. And clearly academics are far from the only people who engage in such expressions, but academics are an interesting case because of the relationship of these expressions to academic freedom and tenure protections.

I am not interested in adjudicating the righteousness of any particular academic’s cause, let alone weighing in on their job status. I am interested though in the rhetorical decisions behind these compositions.

It’s reasonable to propose that some of these tweets are simply posted in anger. People get angry all the time. Typically, when they are in public or professional settings, they manage to control their anger. However, this phenomenon is not simply about users who post without thinking, as a kind of spontaneous outburst. It is also about a perceived obligation to anger, a way of inhabiting online spaces, which makes these tweets a more deliberate act.

As James Poulos notes,

On Twitter, we’re not screaming at each other because we want to put different identities on cyber-display. We’re doing it because we’re all succumbing to what philosophers call “comprehensive doctrines.” Translated into plain language, comprehensive doctrines are grandiose, all-inclusive accounts of how the world is and should be.

But it’s more than that. Often the rhetorical strategy employed here is one of ad hominem attack. When it isn’t a personal attack, it is often an emotional appeal. I suppose there’s no space for evidence in a tweet. One can only express a viewpoint. Combined with this tendency toward “comprehensive doctrines,” we get a series of increasingly extreme, divergent, and irreconcilable views.

I understand, in some respects, why everyday people get involved in such rhetorical warfare. I’ve written about this quite a bit recently. Academics, of course, are everyday people, so maybe that’s explanation enough for why they do what they do. However, as professionals communicating in a professional capacity, I find this rhetorical strategy simply odd. To raise this question is typically to get one of two responses. First, “I have academic freedom; I can do whatever I want.” Or second, “Are you trying to silence me? Then you must be (insert ad hominem attack here).”

All of this has made me realize that I have been mistaken about the underlying ethics of academia on two crucial counts.

1. I thought that academia was based on a fundamental pluralism, where we are obligated to be open to multiple possibilities and viewpoints. This doesn’t mean that we cannot hold a particular view or argue for it, but, at least in my view, it would obligate us to participate in forums where different views are heard and considered. Twitter can work that way, but it isn’t easy.

2. I thought that we couldn’t be “true believers” in relation to our subjects. Even in a first-year composition class, a typical piece of advice on a research paper assignment is to say “don’t ask a research question that you think you already know the answer to.” As scholars, if we are not open to changing our minds and views on the subject we study, then what’s the point?

But, as I said, I was mistaken about this. Academia is often about espousing a single viewpoint with little or no consideration for alternatives, except for the purposes of developing strategies to attack them. Social media did not create this condition. You can blame postmodernism or cultural studies for creating conditions where we look at all scholarship as ideologically overdetermined, but I don’t think that’s what’s going on here. If anything, such methods should create greater skepticism and uncertainty. Maybe academia has always been this way, only ever pretending to the open consideration of alternative viewpoints that we insist on from our students. But I don’t think that’s true. I think, at least in the humanities where I mostly dwell, we have become increasingly entrenched in our views. Maybe that’s in response to perceived threats to our disciplines; maybe it’s evidence of disciplinary fossilization. I don’t know. However, it is fair to say that social media has intensified this condition.

Regardless, this practice of speaking truth to Twitter, which would almost seem to require revising the old refrain, “The people, retweeted, can never be defeated” (see, it even rhymes better now), points once again to our continuing struggles to develop digital scholarly practices. Is the future of digital scholarship really going to be clickbait and sloganeering?

Categories: Author Blogs

digital ethics in a jobless future

25 June, 2015 - 13:22

What would/will the world look like when people don’t need to work or at least need to work far less? Derek Thompson explores this question in a recent Atlantic article, “The World Without Work.” It’s an interesting read, so I recommend it to you. Obviously it’s a complex question, and I’m only taking up a small part of it here. Really my interest here is not on the politics or economics of how this would happen, but on the shift in values that it would require.

As Thompson points out, to be jobless in America today is as psychologically damaging as it is economically painful. Our culture, more so than that of other industrialized nations, is built on the value of hard work. We tend to define ourselves by our work and our careers. We have this work hard/play hard image of ourselves, but we actually have a hard time with leisure, spending much of our time surfing the web, watching tv, or sleeping. If joblessness leads to depression then that makes sense, I suppose. In a jobless or less-job future, we will need to modify that ethos somehow. Thompson explores some of the extant manifestations of joblessness: makerspaces, the part-time work of Uber drivers and such, and the possibility of a digital age Works Progress Administration. As he remarks, in some respects it’s a return to pre-industrial, 19th-century values of community, artisanal work, and occasional paid labor. And it also means recognizing the value of other unpaid work such as caring for children or elders. In each case, not “working” is not equated with not being productive or valuable.

It’s easy to wax utopian about such a world, and it’s just as easy to spin a dystopian tale. Both have been done many times over. There is certainly a fear that the increasing precarization of work will only serve to further exacerbate social inequality. Industrialization required unions and laws to protect workers.  How do we imagine a world where most of the work is done by robots and computers, but people are still able to live their lives? I won’t pretend to be able to answer that question. However, I do know that it starts with valuing people and our communities for more than their capacity to work.

I suppose we can look to socialism or religion or gift economies or something else from the past as providing a replacement set of values. I would be concerned though that these would pose problems similar to those our current values present in adapting to a less-job future.

Oddly enough, academia offers a curious possibility. In the purest sense, the tenured academic as a scholar is expected to pursue his/her intellectual interests and be productive. S/he is free to define those interests as s/he might, but the products of those pursuits are freely accessible to the community. In the less-job future I wonder if we might create a more general analog of that arrangement, where there is an expectation of contribution but freedom to define that contribution.

Of course it could all go horribly wrong and probably will.

On the other hand, if we are unwilling to accept a horrible fate, then we might try to begin understanding and inventing possibilities for organizing ourselves differently. Once again, one might say that rhetoricians and other humanists might be helpful in this regard. Not because we are more “ethical,” but because we have good tools and methods for thinking through these matters.


Categories: Author Blogs

hanging on in quiet desperation is the English way

23 June, 2015 - 20:10

The song refers to the nation, of course, and I’m thinking of a discipline where perhaps we are not so quiet.

Here are two tangentially related articles, both also tangentially related to English, so many tangents here. First, an article in Inside Higher Ed about UC Irvine’s rethinking of how they will fund their humanities phd programs: a 5+2 model where the last two years are a postdoctoral teaching fellowship. Irvine’s English department hasn’t adopted it (maybe they will in the future), but it is an effort to address generally the challenges of humanities graduate education that many disciplines, including our own, face. In the second, really an editorial in The Chronicle, Eric Johnson argues against the perception (and reality) that college should be a site of workforce training. It is, in other words, an argument for the liberal arts, but it is also an argument for more foundational (i.e. less applied, commercial) scientific research.

These concerns interlock over the demand for more liberal arts education and the resulting job market it creates to relieve some of the pressure on humanities graduate programs.

Here’s a kind of third argument. Let’s accept the argument that specialized professionalizing undergraduate degrees are unfair to students. They place all the risk on the students who have to hope that their particular niche is in demand when they graduate, and, in fact, that it stays in demand. In this regard I think Johnson makes an argument that everyone (except perhaps the corporations that are profiting) should agree with: that corporations should bear some of the risk/cost of specialized on-the-job-training, since they too are clearly profiting.

Maybe we can apply some of that logic to humanities graduate programs and academic job markets. I realize there’s a difference between undergraduate and graduate degrees, and that the latter are intended to professionalize. But does that professionalization have to be so hyper-specialized to meet the requirements of the job market? I realize that from the job search side, it makes it easier to narrow the field of applicants that way. And since there are so many job seekers out there, it makes sense to demand specific skills. That’s why corporations do it. I suppose you can assume it’s a meritocratic system, but we don’t really think that, do we? If we reimagined what a humanities doctoral degree looked like, students could easily finish one in 3 or 4 years. No, they wouldn’t be hyper-specialized, and yes, they would require on-the-job-training. But didn’t we just finish saying that employers should take on some of that burden?

Here’s the other piece… even if one accepts the argument (and I do) that undergrads should not be compelled to pursue specialized professionalizing degrees, it does not logically follow that they should instead pursue a liberal arts education that remains entrenched in the last century.

In my view, rather than creating more hyper-specialized humanities phds, all with the hope that their special brand of specialness will be hot at the right time so that they can get tenure-track jobs where they are primed to research and teach in their narrow areas of expertise, we should produce more flexible intellectuals: not “generalists” mind you, but adaptive thinkers and actors. Certainly we already know that professors often teach outside of their specializations, in introductory courses and other service courses in a department. All of that is still designed to produce a disciplinary identity. This new version of doctoral students wouldn’t have been fashioned by a mini-me pedagogy; they wouldn’t identify with a discipline that demands its own reproduction.

So what kind of curriculum would such faculty produce? It’s hard to say exactly. But hopefully one that would make more sense to more students than what is currently on offer. One that would offer more direct preparation for a professional life after college without narrowly preparing students for a single job title. In turn, doctoral education could shift to prepare future faculty for this work rather than the 20th-century labors it currently addresses. I can imagine that many humanists might find such a shift anti-intellectual, because, when it comes down to it, they might imagine they have cornered the market on being intellectual. Perhaps they’re right. On the other hand, if being intellectual leaves one cognitively hamstrung and incapable of change, a hyper-specialized hothouse flower, then in the end it’s no more desirable than the other forms of professionalization that we are criticizing.

Categories: Author Blogs

It turns out that the Internet is a big place

16 June, 2015 - 08:56

I suppose this is coincidentally a follow-up of sorts on my last post. It might also be “a web-based argument for the humanities” of a sort. We’ll see.

On The Daily Beast, Ben Collins asks the musical question “How Long Can the Internet Run on Hate?” One might first be inclined to answer, “I don’t know, but we’re likely to find out.” However, on reflection, one might take pause: hold on, does the Internet run on hate? I don’t think I need to summarize Collins’ argument, as we all know what he’s on about here. If one wasn’t sure, then the comments following the article would at least give one a taste.

So a couple observations.

1. The Internet cannot be separated all that easily from the rest of culture. One might as well ask how long can civilization run on hate (the answer? apparently a good long while). Obviously the Internet did not invent hate. Does it make us hate more? Or does it simply shine a light in the dark corners of humanity’s hatred? Probably both.

2. The affordances of social media facilitate particular online genres and affects. Specifically, the comment. If I may be allowed to generalize somewhat here, the comment as a genre refers not only to what follows various articles online but also to the acts of commenting in discussion forums, on Facebook, and on Twitter (though obviously Twitter’s 140-character limit changes things). By now, we are familiar with the observation that the immediacy of the comment and the relative ease of commenting result in a lot of reactionary feedback. I would analogize it to the barroom brawl in an old Western movie. It starts with two people shoving. One gets shoved backwards into a third person. That person throws a punch, misses, and hits a fourth party. Before you know it, everyone in the bar is fighting.

3. The Internet is fueled by a number of other desires too: shopping, pornography, the idle curiosity of the Google search, etc. In other words, it’s not just hate; it’s also lust! And it’s not just Clay Shirky’s “cognitive surplus,” it’s also idle hands doing the Devil’s work. We shouldn’t judge ourselves harshly for having desires.

4. The Internet runs on exposure. This ties into my Enculturation article from a few years back. Even though we commonly say that people tend to live in cultural-ideological echo chambers online, those chambers are not nearly as sealed as the comparatively parochial lives that we lived even in the days of mass media. The simple exposure to expression is enough to generate intense affective responses. Of course it doesn’t have to be hate, and obviously it isn’t only hate. Sometimes that affectivity can be directed through assemblages that result in fairly even-minded academic blog posts. If you think about it though, when one goes on Facebook, for example, one is unlikely to be doing something purposeful. One is just looking for stimulation, like channel-surfing (in the “old days”).

Imagine a kind of P.K. Dick-esque sci-fi world where instead of social media, you were more directly plugged into the affective responses of others online. You’re exposed to various media and you not only feel your own responses but those of others. You’re excited and so are some others, but then others are offended or disgusted or angered or bored. This generates a secondary set of affects. Maybe you’d imagine that it could all turn to love (or at least lust) as easily as it could all turn to hate. But maybe what you’d discover is that it’s all just stimulation, exposure, and the particular name you give the feeling doesn’t really matter. At some level, it becomes an ineffable clicking. In that imagined world, language is entirely bypassed. In fact, all conscious, deliberative thought is bypassed. It’s not even sharing “feelings,” because, at least as I’m using the word, a feeling or emotion would require some kind of reflection, some judgment and identification/representation. This is just responsiveness. It’s a deterritorialization of language. As I mentioned in the previous post, it’s not about communication or information. It’s expression.

Obviously we don’t quite live in that world. We’re still mediating with “human languages” rather than machine languages measuring our embodied responses and communicating them across the web. But the distinction is more subtle than you might at first imagine. As such, one might argue that the Internet is not fueled by hate because the phenomena Collins discusses are not sophisticated enough to be hate. That’s not to suggest that there isn’t plenty of hate out there to go around. It’s just that what we see on the web when we see these things is a kind of stimulus response to exposure.

Of course part of the point here is that the Internet is not all like that. It’s a big place, as it turns out. Even in the spaces of social media commentary it’s not all like that. Everyone knows that the Internet provides access to an extensive body of cultural knowledge. So much so that it’s kind of a running joke that goes something like “imagine trying to explain the Internet to an alien. Here we have access to many of the great works of human endeavor and cutting edge research about our lives, the world, and the universe, and what do we use it for?” Typically though, this comes off as some kind of moralizing.

I wonder if it’s possible to take the morality out of it though. It’s worth noting that building knowledge, information, and communication out of exposure and expression requires effort. I’d like to think of that statement as being closer to physics or biology or cybernetics than morality. Energy must be expended. So here’s my brief bit about the humanities: maybe we can have a role in this.

If you read the comments on the article above (or really any similarly-themed article), the humanities are often characterized as a kind of prescriptive, moralizing, leftist thought police. In some respects it’s an understandable characterization. That’s a subject for another time. However, the humanities can also be a mechanism for understanding how expression operates through media ecologies. It’s hard to know exactly how the theological moralities of agrarian cultures or the rational, deliberative discourses of the modern bicameral scientific-democratic print-industrial culture will translate into digital media ecologies. As I often say here: some assembly is required. While the humanities cannot (at least not productively) tell us what to do/think, they perhaps can explore the capacities that become available to us in the context of the capacities the past provided.

Ultimately though it always comes back to expending the energy to build something that goes beyond that initial stimulus response.

Categories: Author Blogs

blogging, academics, and the case of The Witcher

14 June, 2015 - 11:30

As most anyone can tell you, academic blogging died off a long time ago. I’m not exactly sure when it was supposed to be popular. I’m guessing it was at a time before most academics had much of an online existence, before they all hopped on Facebook and started sharing articles with one another. As I look at it, blogging has been largely replaced with various web news/article sites, ranging from the familiar NY Times to Medium; there’s no end to opportunities for analysis, critique, and opinion. Only a slice of this content is related to academic issues or to issues treated in an academic manner. I’ve written a fair amount here about the academic clickbait related to tenure, teaching, the crisis in the humanities, and so on, so I’m not going back over that territory.

Instead, I want to go in a different direction. Given my own (and, apparently, my friends’) interests in science fiction, video games, comic books, digital media, and such, I find a fair number of articles from these kinds of sites shared in my timeline. Here’s one I came across incidentally. This one is from Kotaku, which is a blog about video games. What makes it a “blog” aside from the fact that it calls itself a blog and looks like it uses some blogging platform? I don’t know. Who knows what these terms mean anymore?

Anyway, the article in question is “The Complicated Women of The Witcher 3,” which, if you don’t know, is a popular video game right now. The article is a thoughtful, one might even say academic, treatment of the representation of women in the video game. I agree with the author, Nathan Grayson, that the subject is complicated and that many of the female characters are themselves complex in that they have enough depth to be available for the kind of analysis typically reserved for characters in novels or films. None of that is to say that the representation is beyond critique. Many women are scantily clad (sometimes almost comically so) and there’s an infamous scene involving a large stuffed unicorn (ahem). There’s also a fair amount of sexual violence, on the order of what has generated a broader conversation in relation to HBO’s Game of Thrones series.

It’s difficult to call these conversations “academic.” It’s difficult to call anything in social media or blogs academic. Not because it isn’t thoughtful or well-researched, but simply because “academic” is still tied closely to specific genres, and blogging/social media aren’t quite among them. However, I have many academic colleagues who read such material, treat it seriously, and share it. So, while it is not necessarily academics who are writing these articles, there is an academic conversation on social media around such topics.

I’m not interested in wading into the debate over these specific issues. Instead, what I do find noteworthy is the rhetorical shift that intellectual conversation makes toward judgment. That is to say, the move to say a certain practice or object is “good” or “bad.” That, for example, The Witcher 3 is good or bad for this or that reason for the way it represents women, or perhaps in a slightly more complicated fashion, these parts are good and those parts are bad. Or, as in the example of Game of Thrones, a certain rape scene should or shouldn’t have happened, or if it was going to happen should have been depicted differently.

In some respects these conversations are a familiar part of mainstream conversation about popular culture, where we often say, “I thought the movie was good but it should have had a different ending.” It’s been a long, long time since I taught literature, but I can still recall the desire of students to talk about how they believed characters should have behaved differently or some other plot twist should have happened. We don’t tend to make these kinds of aesthetic judgments about “literature,” however. I suppose we find such judgments appropriate for pop culture because we believe the forces behind pop cultural production to be of a different order than those of art. That is, whether it’s The Avengers, The Witcher 3, or Game of Thrones, we’re talking about commercial products. So we can just say, for example, that in The Witcher 3 the female characters didn’t all need to have so much cleavage. I’m just noting that we never used to talk about literature in the same fashion. Again, maybe this is why such writing isn’t “academic,” though again it is a conversation in which many academics have serious investment and participation.

So again, let me reiterate that I am not saying here that the judgments made in such discourses are wrong or inappropriate. There’s no reason why we can’t talk about what art or media should be like. I would note that these conversations are a marketplace unto themselves, and passing judgment on media is going to draw more eyeballs than something more… what is the word… “academic”(?). I wonder though if this is where cultural studies leads, to an activist criticism that seeks to shape media to reflect a certain set of values.

Maybe so. I’m not going to make a judgment about whether or not such a project is admirable.

However it does strike me that it suggests a space for other forms of humanities online writing, perhaps even blogging, in which there are other rhetorical gestures to make toward popular culture than judgment. For instance, this article starts to make some interesting comparisons with the way gender and sexuality are handled in certain Bioware games (e.g. Mass Effect, Dragon Age: Inquisition), though I might also point toward Skyrim. There’s clearly a difference between games that allow one to customize a character’s appearance (including gender and race) and one like The Witcher, which does not. Now, it must be said that academic analysis of video games obviously goes on in more traditional academic genres and perhaps even on (supposedly dead) academic blogs.

I just wonder if there is a way to bridge the gap between the difficult discourse of academic genres and that of more popular websites. I assume that there is, and that such translations are possible for rhetorical gestures other than judgment.


Categories: Author Blogs

expression is not communication

8 June, 2015 - 08:13

I’ve been struck with a patch of Internet curmudgeon syndrome of late: spending too much time on Facebook probably. One of the ongoing themes of my work as a digital rhetorician is the observation that we do not know how to behave in digital media ecologies. That observation is not a starting point for a lesson on manners (though we certainly get enough of those too!). Instead, it’s a recognition of the struggle we face in developing digital-rhetorical practices.

Those of us who were online in the 90s (or earlier) certainly remember flame wars on message boards and email lists. This was the start of trolling, a behavior familiar to us all, which in some respects I think has mutated and become monetized as clickbait. Of course trolls are just looking to get a rise out of you. It may be hard to tell the difference from the outside, but some of these incendiary conversations were genuine disagreements. I know I was part of some very heated exchanges as a grad student on our department email list. Eventually you realize that you’re not in a communication situation but instead you’re part of a performance, where the purpose is not to convince the person you’re putatively speaking to but to make that person look foolish in front of a silent audience who has been subjected to your crap by being trapped on the same email list with you. That changes one’s entire rhetorical approach, especially when you realize that the bulk of that captive audience isn’t captive at all but simply deleting emails.

In some respects that practice lives on. I am still on a department email list, and sometimes it gets heated. It’s not very productive but at least it’s limited in scope.

In the early days of this blog, I wrote some fairly strident stuff. These days I still offer views with which many would disagree, but the tone has mellowed; perhaps it’s middle age. However, I see around me, mostly through Facebook, the continuing intensification of flaming rhetoric. In the good-old, bad-old days, I used to think that flaming happened because people were at a distance from one another. Because there was never any danger of physical violence, a certain limit on the riskiness of invective was removed. Today though we have the long-tail, echo-chamber strengthening of that feeling. Not only can I be as agonistic as I please without physical threat but I can find others who will agree with me and double-down on the whole business. Needless to say this happens across the political spectrum. Add in the clickbait capacity of Facebook, and one gets rhetorical wildfires.

Consider an academic example of this. Perhaps you saw the recent piece about the liberal professor afraid of his liberal students, or the following piece about the liberal professor who is not afraid of her liberal students. All of this business is driven by serious challenges in higher education. There is the declining authority and power of faculty. This comes in a lot of forms. Most notably it is the disappearance of tenure and the disempowerment of tenure where it still exists. In more general terms though it is also the dilution of authority into mere opinion, especially in anything that is not empirical, though obviously even science is challenged in certain areas.

There is also this conversation about “triggering,” which I won’t go into here, except to say that in all this rhetoric it often seems difficult to differentiate between someone who has a mental health issue related to a traumatic experience and someone who is unhappy, uncomfortable, or offended. Given that current digital rhetoric practices seem to allow for only two positions, “like” or “I’m offended,” it’s quite hard to avoid expressions of the latter, even though cases of genuine trauma deserve real consideration.

Anyway, I’m not interested in getting into that conversation in substance here. My point is simply to wonder aloud what the rhetorical purpose of such “communications” might be. I use the scare quotes because I’m not sure they are communications. They are expressions. Deleuze and Guattari make this point in A Thousand Plateaus. In their discussion of order-words, collective assemblages of enunciation, incorporeal transformations and such, we encounter the autonomous quality of expression, which is to say that expression obeys its own laws, independent of both the speaker and the listener, as well as whatever other larger network or cultural situation might be in effect.

It is clearly possible to get symbolic expressions to do work. In print culture we created elaborate institutions and genres to do so. The university is one of the best and most successful examples. That’s not to say that it was perfect. Far from it! But it is a good example of how one instaurates (to use one of Latour’s terms) a communicational assemblage from a media ecology.

We really need to build new genres, which means new communities, networks, assemblages, activity systems, however you want to think of it. On some level I imagine that’s what we’re trying to do here, but I’m not very satisfied with the results so far. This strikes me as some of the central work of digital rhetoric. Not to be prescriptive about future genres, but to facilitate rhetorical understanding of current genres, to investigate alternate rhetorical capacities, and perhaps to experiment.

Categories: Author Blogs

the failure to understand digital rhetoric

28 May, 2015 - 12:30

A brief round-up of a few articles circulating in my social media gaze: pieces by Weiner, Jones, and McWhorter, each taken up below.

I don’t want to paint these all with the same brush, but there is a fundamental conceptual problem here, at least from my perspective. The emerging digital media ecology is opening/will open indeterminate capacities for thought and action that will shift (again, in a non-determining way) practices of rhetoric/communication, social institutions, the production of knowledge, and our sense of what it means to be human. In other words, it’s roughly analogous to what happened when we became literate and moved beyond oral cultures. I understand that’s a lot to take in. It’s likely to cause (and does cause) all kinds of reactionary commentary, and clearly we regularly struggle with figuring out how to behave in relation to digital media. So that’s the general point, but let me work through these individually in reverse order.

The Weiner piece is about the enduring value of paper. It’s now a familiar cyber-thriller trope to note that paper is good because it can’t be hacked. I think we still attribute a kind of privacy and intimacy to writing on paper (Kittler discusses this). Weiner discusses recent research about how students who handwrite notes do better on tests than those who use laptops. The key point though seems to be that handwriting forced students to synthesize information more, whereas laptop users were able to write more quickly, which meant more recording and less synthesizing. So it strikes me that what we’re saying is that we need to learn how to use our laptops in a more effective way… just as we learned, once upon a time, how to take handwritten notes, right?

Jones’ piece is more reactionary. He writes of emojis “After millennia of painful improvement, from illiteracy to Shakespeare and beyond, humanity is rushing to throw it all away. We’re heading back to ancient Egyptian times, next stop the stone age, with a big yellow smiley grin on our faces.” I suppose it’s the kind of copy that gets attention, so job well done there. But comparing emojis to hieroglyphs makes little sense beyond their superficial commonalities. I actually don’t know who should be insulted more by this comparison, the ancient Egyptians or the contemporary emoji user. Ultimately though this argument imagines symbolic systems as independent from the larger media ecologies in which they operate. Binary code is just ones and zeros without the ecology in which it operates.

The McWhorter article though is maybe the most interesting for its basic assertion that we are undergoing a return to orality. He suggests, “Let’s consider that we are seeing a natural movement towards a society in which language is more oral—or in the case of texting, oral-style—where written prose occupies a much smaller space than it used to.” He seems equanimous about the prospect, but I doubt he is. It’s more like he is resigned to living in a world where the art of essay writing is passing. And I agree with that. Essays are a genre of print media ecologies. Today we have something else, remediated essays on their way to being something else. When McWhorter observes Kardashian’s problematic tweet, what he should be seeing is our continuing struggle to figure out the rhetoric of digital spaces. It may be the case (it certainly appears to be the case) that Kardashian lacks a certain print-literate sophistication, that maybe all emoji users do, that the speed of digital communication cuts out the deliberative, synthesizing cognition of print media.

I know that through Ong and McLuhan we can get this idea of a kind of secondary orality in the conversational style of Facebook, Tweets, and blogging. But it’s wrong to deduce from this some kind of devolution, as if we are going back to an oral culture or a culture of hieroglyphs. Similarly it would be misguided to infer progress from change. Instead, we should recognize an opportunity, perhaps even an obligation, for invention.

Categories: Author Blogs

arduino heuretics

25 May, 2015 - 09:23

As those of you who are involved in the maker end of the digital humanities or digital rhetoric know, Arduino combines a relatively simple microcontroller, open source software, and other electronics to create a platform for developing a range of devices. I seem to recall encountering Arduino-based projects at CCCC several years ago. In other words, folks have been playing around with this stuff in our discipline for a few years. (Arduino itself has been around for about a decade.) My own exigency for writing this post is that I purchased a starter kit last week, partly for my own curiosity and partly for my kids’ growing interest in engineering and computer science. In short, it looks like a fun way to spend part of the summer.

David Gruber writes about Arduino in his chapter in Rhetoric and the Digital Humanities where he observes “digital tools can contribute to a practice-centered, activity-based digital humanities in the rhetoric of science, one that moves scholars away from a logic of representation and toward the logic informing ‘New Materialism’ or the rejection of metaphysics in favor of ontologies made and remade from material processes, practices, and organizations, human and nonhuman, visible and invisible, turning together all the time” (296-7). In particular, he describes a project at North Carolina State which employed an Arduino device attached to a chair that would send messages to Twitter based upon the sensor’s registering of movement in the chair. Where Gruber prefers the term “New Materialism,” I prefer realism: realist philosophy, realist ontology, and realist rhetoric. I think we may mean the same thing. For me the term materialism is harder to redeem, harder to extricate from the “logic of representation” he references, which has discussed materialism and materiality for decades while eschewing the word realism. I would suggest that the logic informing realism or new materialism, the logic juxtaposed to representation, is heuretics, the logic of invention. As I am putting the finishing touches on my book, these are the pieces I’m putting together: Ulmer’s heuretics, Bogost’s carpentry, Latour’s instauration, and DeLanda’s know-how.
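
For a concrete sense of how small these projects can be, here is a minimal sketch in the spirit of that chair project. To be clear, this is my own illustration, not the NC State team’s actual code: it assumes a tilt or vibration switch wired between digital pin 2 and ground, and it only reports movement events over the USB serial line, leaving the actual posting to Twitter to a script on an attached computer. The pin number, debounce window, and message format are all placeholder choices.

```cpp
// Hypothetical sketch: report chair movement over serial.
// Assumes a tilt/vibration switch between digital pin 2 and ground,
// using the board's internal pull-up resistor. A host-side script
// (not shown) would read these lines and post them to Twitter.

const int SENSOR_PIN = 2;             // switch reads LOW when the chair moves
const unsigned long QUIET_MS = 5000;  // ignore repeat triggers within 5 seconds

unsigned long lastReport = 0;

void setup() {
  pinMode(SENSOR_PIN, INPUT_PULLUP);  // idle HIGH; movement pulls the pin LOW
  Serial.begin(9600);
}

void loop() {
  if (digitalRead(SENSOR_PIN) == LOW && millis() - lastReport > QUIET_MS) {
    lastReport = millis();
    Serial.println("MOVEMENT");       // one line per detected movement event
  }
}
```

Splitting the work this way is also a practical necessity: a starter-kit board like the Uno has no network connection of its own, so the networked part of the assemblage has to live on the computer.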

As I started to learn more about Arduino, I discovered the split that has occurred in the community/company. It is a reminder of the economic processes behind any technology. Perhaps by serendipity, I have also been involved in a few recent conversations about the environmental impact of these devices (e.g. the rare earth minerals) and the economic-political impact of those (e.g. the issue of cobalt as a conflict mineral). These are all serious issues to consider, ones that are part of a critical awareness of technology that critics of DH say gets overlooked. Of course, sometimes it seems this argument is made as if our legacy veneration of print culture has not been built upon a willful ignorance of slavery, labor exploitation, political strong-arming, environmental destruction, and capitalist avarice that made it possible, from cotton production to star chambers to lumber mills. Somehow though no one suggests that we should stop reading or writing in print. That’s not an argument for turning a blind eye now but only to point out that the problems, in principle, are not new. Technologies are part of the world, and the world has problems. While the world’s problems can hardly be construed as good news, they are sites for invention and agency. As Ulmer says at one point of his EmerAgency, “Problems B Us.”

I’m expecting I’ll post a few more times about Arduino this summer as I mess around with it. Given that I’m just starting to poke around, I really don’t have any practical insights yet. It’s always a little challenging to take on a new area beyond one’s realm of expertise. We live in a world where there’s so much hyper-specialization that it’s hard to justify moving into an area where you know you’ll almost certainly never rival the expertise of those who really know what they’re doing. This is a kind of general challenge in the digital humanities and rhetoric, where you might realize that the STEM folks or the art folks will always outpace you, where we seem squeezed out of the market of making. Perhaps that’s why we’re so insistent on the logic of representation, as Gruber terms it. Articulating our ventures into this area as play, while objectionable to many, is one way around this. For me, framing this as fun, as something I do with my kids, as a hobby, is part of what makes it doable, part of the way that I can extricate myself from the disciplinary pressures to remain textual. I’ll let you know how it goes.

Categories: Author Blogs

the humanities’ dead letter office

13 May, 2015 - 10:15

Adeline Koh writes “a letter to the humanities” reminding them that DH will not save the humanities (a subject I’ve touched on at least once). Of course I agree, as I agree with her assertion that we “not limit the history of the digital humanities to humanities computing as a single origin point.” Even the most broadly articulated “DH” will not save the humanities, because saving is not the activity that the humanities require: ending maybe, but more generously changing, evolving, mutating, etc.

Koh’s essay echoes earlier arguments made about the lack of critical theory in DH projects (narrowly defined). As Koh writes:

throughout the majority of Humanities Computing projects, the social, political and economic underpinnings, effects and consequences of methodology are rarely examined. Too many in this field prize method without excavating the theoretical underpinnings and social consequences of method. In other words, Humanities Computing has focused on using computational tools to further humanities research, and not to study the effects of computation as a humanities question.

But “digital humanities” in the guise of “humanities computing,” “big data,” “topic modelling,” (sic) “object oriented ontology” is not going to save the humanities from the chopping block. It’s only going to push the humanities further over the precipice. Because these methods alone make up a field which is simply a handmaiden to STEM.

I have no idea what object oriented ontology is doing in that list. Maybe she’s referring to object oriented programming? I’m not sure, but the philosophical OOO is not a version of DH. However, its inclusion in the list might be taken as instructive in a different way. That is to say that I was maybe lying when I said I had no idea what OOO is doing on this list alongside a couple DH tropes. It is potentially a critical theorist’s list of enemies (though presumably any such list would be incomplete without first listing other competing critical theories at the top). And this really brings one to the core of Koh’s argument:

So this is what I want to say. If you want to save humanities departments, champion the new wave of digital humanities: one which has humanistic questions at its core. Because the humanities, centrally, is the study of how people process and document human cultures and ideas, and is fundamentally about asking critical questions of the methods used to document and process. (emphasis added)

So “humanistic questions” are “critical questions.” As I read it, part of what is going on in these arguments is an argument over method. As Koh notes, DH is a method (or collection of methods, really, even in its most narrow configuration). But “critical theory” is also a collection of methods. As the argument goes, if the humanities is centrally defined by critical-theoretical methods then any method that challenges or bypasses those methods would be deemed “anti-humanistic.”

I’ve spent the bulk of my career failing the critical theory loyalty litmus test, so I suppose that’s why I am unsympathetic to this argument. Not because my work isn’t theoretical enough! One can always play the theory one-upmanship game and say “my work is too theoretical. It asks the ‘critical questions’ of critical theory.” But actually I don’t think there’s a hierarchy of critical questions, though there clearly is a disciplinary paradigm that prioritizes certain methods over others, and from within that paradigm DH (and apparently OOO as well, while we’re at it) might be viewed as a threat. The rhetorical convention is to accuse such threats of being “anti-theoretical,” of being complicit with the dominant ideology (like STEM), or, perhaps worse, of being ignorant dupes of that ideology.

I can certainly account for my view that critical-theoretical methods are insufficient for the purposes of my research. That said, I have no issue with others undertaking such research. The only thing I really object to is the claim that a critical-theoretical humanities serves as the ethical conscience of the university.  If the argument is that scholars who use methods different from one’s own are “devaluing the humanities” then I question the underlying ethics of such a position.

I’m not sure if the humanities need saving or if the critical-theoretical paradigm of the humanities needs saving or if it’s not possible to distinguish between these two. I’m not part of the narrow DH community that is under critique in this letter. I’m not part of the critical-theoretical digital studies community that Koh is arguing for. And I’m not part of the other humanities community that is tied to these central critical-humanistic questions.

I suppose in my view, digital media offers an opportunity (or perhaps creates a necessity) for the humanities to undergo a paradigm shift. I would expect that paradigm shift to be at least as extensive as the one that ushered in critical theory 30-40 years ago and more likely will be as extensive as the one that invented the modern instantiation of these disciplines in the wake of the second industrial revolution. I’m not sure if the effect of such a shift can be characterized as “saving.” But as I said, I don’t think the humanities needs saving, which doesn’t necessarily mean that it will continue to survive, but only that it doesn’t need to be saved.

Categories: Author Blogs

writing in the post-disciplines

3 May, 2015 - 13:40

Or, the disorientation of rhetoric toward English Studies…

In her 2014 PMLA article “Composition, English, and the University,” Jean Ferguson Carr makes a strong argument for the value of rhetoric and composition for literary studies in building the future of English Studies. She pays particular attention to composition’s interests in “reading and revising student writing,” “public writing,” “making or doing,” and using “literacy as a frame.” As I discussed in a recent post, there’s a long history of making these arguments for the value of composition in English, an argument whose proponents one assumes would welcome MLA’s recent gestures toward inclusiveness. Of course the necessity of these arguments, including Carr’s, stems from the fact that the question “what is the value of composition to English?” has mostly been answered with “nothing” or “not much,” at least beyond the pragmatic value of providing TAships for literary studies PhD students.

I’m more interested in the opposite question though, “what’s the use of literary studies to rhetoric/composition?” It’s not a question Carr really concerns herself with, mentioning only in passing that “a more intentional and articulated relationship between composition and English is still mutually beneficial,” though she doesn’t offer much evidence for this. Presumably she (rightly) identifies her audience as literary scholars for whom this question would likely never arise. However, I think the answer might be similar: nothing, or not much, at least beyond the pragmatic value that the institutional security of an established English department might provide. And with that security wavering, well…

What does this have to do with “writing in the post-disciplines” (whatever that is)? As it turns out, a fair bit. With a little bit of historical fudging that I’ll call fair play in the broad brushstrokes of a blog post, we can see that

  1. We start off with a belletristic, humanistic, essay-writing, product-oriented and literary-focused form of writing instruction.
  2. Then we move to process-oriented, less literary but still humanistic and essayistic composition studies.
  3. Over time, writing instruction becomes more varied both within rhet/comp (e.g. technical-professional writing) and beyond in the growing popularity of WAC and WID programs.

So where are we now? Few would contest the general principles that 1) writing is a useful tool for learning in many contexts and 2) it is a good idea for as many disciplines as possible to be involved in teaching students in their fields/professions how to write and communicate. That is, we still hold to the principles of WAC and WID. However, the longstanding view that faculty in English Studies are not well-equipped to teach students in other fields (especially STEM) how to communicate has always been founded on a particular expectation of what English faculty are like. What happens if/when that changes?

For example, let’s say I have a cadre of college sophomores who want to major in chemistry, and we want to develop their communication skills in connection with chemistry as a field and with professions they might enter. And let’s say that I give you a blank slate to create a graduate program for the faculty who will take on that job. Would you want them to get chemistry degrees and then provide a little extra professional development? Or would you imagine some kind of science studies/science rhetoric-communication curriculum? I’m thinking the latter makes more sense, not as a replacement for faculty teaching writing in their own curriculum but as a way of delivering courses where instruction in writing/communication is the primary focus.

Let me take this a little further afield. Of course we know that only a fraction of undergrads go on to get graduate degrees and even fewer end up really communicating as experts in any discipline. Mostly they go on to careers in corporations or small businesses or with the government. This is more true in the humanities or social sciences, but even in the sciences, students find themselves in careers where communication is more business than scientific. There are many inter-disciplinary niches, and when it comes down to it, the argument that there are no “general writing skills,” which casts doubt on composition classes, can also cast doubt on the utility of writing in the disciplines.

Are there general chemistry writing skills? No, of course not.  Maybe one could argue for some common genres among chemistry professors, but chemistry itself is far more varied. So instead (and I think this is the direction WID and “writing studies” approaches go) one might imagine a rhetoric/communication curriculum that 1) teaches students an introduction to rhetorical principles, 2) puts those principles to work in the study of genres at work broadly in their fields/professions, 3) pays attention throughout a disciplinary curriculum to communication practices, and 4) offers a proliferating range of possibilities for writing and communication.

Writing in the post-disciplines moves beyond the historical either/or presumption that writing instruction is either general/introductory, or writing is discipline-specific and tied to the content expertise of faculty. Instead it suggests writing as a vast field of inquiry tied to an expansive set of academic methods that can be given many names and descriptions: empirical, social scientific, data-driven, rhetorical, humanistic, philosophical, theoretical, digital, historical, pedagogical, cultural, etc. Such investigations are post-disciplinary in themselves, though this does not mean that they cannot be coordinated or organized within an institution.

Of course there is writing in the disciplines. There’s writing in most places one finds humans. But writing is always necessarily post-disciplinary as it operates to facilitate relations among disciplines and across varied institutions. Most subjects we study in the university are slippery in this way. Whether it is biology or society or art, our objects always act and connect in ways that push beyond our disciplinary paradigms. Writing is no different in this regard.

So, to bring this full circle, why fall prey to the gravity well of literary studies when there is a vast universe of writing to investigate?

Categories: Author Blogs

“this will revolutionize education”

29 April, 2015 - 09:47

I picked up on this from Nick Carbone here. It’s a video by physics educator Derek Muller (who I think I’ve written about before here, but I can’t seem to find it if I did). Here are actually two videos.

They share a common theme. The first deals with the long history of claims that various technologies will “revolutionize education.” In debunking those claims, Muller argues for the important role of the teacher as a facilitator of the social experience of education and an understanding of learning as a dialogic experience, though he doesn’t quite put it in those terms. The second video discusses research he has done on using video to instruct students in physics (he has a YouTube channel now with around 1M subscribers). Similar to the first video, he finds that a video that enacts a dialogue and works through common misconceptions, while being more confusing and demanding more attention of the viewer, ultimately results in more learning.

As he points out, students have a lot of direct experience with the physical world, but it turns out the knowledge they derive from those experiences is an obstacle rather than an aid in their understanding of physics. As such, a dialogic approach that works through those misconceptions and leads to an understanding of physical laws proves most effective. I would suggest composition encounters an analogous challenge in that students have a lot of experience with writing and language before entering the classroom, but the understanding of writing that comes from those experiences can be counter-productive.

That said, I don’t entirely agree with Muller’s characterization of the role of technology in education. (It’s probably just a simplification that is the inevitable result of a short video.) Yes, technology is not revolutionary in the way people claim it to be. I agree that education is a social, even institutional process (as opposed to learning, which is an activity that need not be social or even human). I even agree that it makes sense to characterize the role of technology in learning as evolutionary rather than revolutionary. However, if societies themselves can undergo technological revolutions, and if education is a social process, then education can be subject to technological revolutions, right?

For example, can we compare US education in 1800 with US education in 1900? During that century, the country underwent two industrial revolutions. We saw a rapid expansion of the number of public schools and colleges. Industrialization transformed our capacity to build schools and educational materials. It demanded entirely new literacies and skills from the workforce in a standardized way that schools would now need to provide. The marks of industrialization on education are obvious. Could you really argue that education was not revolutionized during this period?

Muller’s emphasis on social dialogue though would point to a far more ancient pedagogical method, that of Socrates. Despite the fact that Socrates didn’t write, it would be hard to deny that the Socratic method, and then later Plato’s academy, were products of alphabetic writing technologies. Wasn’t that a technological-educational revolution? Literacy has shaped what we imagine learning to be.

Perhaps these are just semantic differences over revolution and evolution. I certainly agree that education is a socially mediated process that involves human interaction. Since we can imagine fantastical technologies like Data, the all-too-human android in Star Trek: The Next Generation, I’m sure we can imagine machines that could replace teachers, but it’s little more than imagination at this point.

I think, in part, our problem is confusing learning with education. I can learn a lot from watching YouTube videos or reading books. I can also forget a lot. And, just as I can learn things about the physical motion of objects from life experience that works perfectly well in aiding my interaction with the world but does me a disservice in physics class, many of the things I learn from videos or books or whatever might not coincide with some educational project. So learning has been revolutionized by books, movies, radio, TV, videodiscs (love that example), video games, and now the Internet and YouTube. But that’s not the same as education.

Education relies on learning (of course) and it relies on mediation, even if it’s “only” the mediation of speech. It also relies more broadly on the social structures of which it is a part. Revolutions in media (which we certainly have had) can lead to revolutions in learning (which I would argue we are encountering with digital media), but all that might only register as an evolutionary change in education (which we have also seen). What would revolutionize education, what has revolutionized it in the past, are broader socio-technological revolutions (e.g. the Industrial Revolution). That’s the “this” in “this will revolutionize education.” So the question is, is “this” happening now?

Categories: Author Blogs

what to do when a professional organization tries to embrace you

28 April, 2015 - 09:06

Yesterday, at least in my disciplinary corner of the online world, there was a fair amount of discussion about the Chronicle of Higher Education report of the Modern Language Association’s upcoming officer elections, which will ultimately result in someone from the field of rhetoric becoming MLA president. I was interviewed and briefly quoted for the article, so I thought I’d be a little more expansive here.

In the most cynical-pragmatic terms, one imagines that MLA can see that rhetoric faculty are underrepresented among its members, so they are an obvious potential market. One can hardly blame an organization for trying to grow its membership, so what does it have to do to appeal to those potential consumers?  In more generous terms, MLA might view itself as having some professional-ethical obligation to better represent all of the faculty it lays claim to when it asserts itself as representing “English.” It specifically names “writing studies” in its mission, though not rhetoric or composition. Of course it is always a happy coincidence when the pragmatic and the ethical are in harmony.

Here are the main problems I think MLA faces. First is its own track record. It’s been around for over 130 years. As far as I can tell it’s marginalized rhetoric for that entire period. Even in the recent history of my 20 years as a rhetorician, there’s been very little to indicate that rhetoric belongs in the MLA. Many in rhet/comp also have strongly held positions regarding adjunctification and are unhappy with MLA’s engagement on that issue. These are some serious issues, but maybe ones that could be resolved with a decade of good will.  MLA just has to hope that there are enough rhetoricians out there like the ones standing for office who are willing to work toward that end. I think in particular those who believe MLA might still play an important role in addressing adjuncts will be interested in working with them.

But those issues are minor compared to the second set of problems, which are not directly MLA’s fault but have to do with the relations between literary studies and rhetoric/writing studies/composition. Here’s the easiest way to understand this. As was reported in the CHE article, there were some 300 rhet/comp jobs in the MLA job list. Rhet/comp and technical writing jobs routinely make up ~40% of the jobs in English. We also know that virtually every English department relies upon teaching first-year composition for its economic survival. Those courses fund the TAs in literary studies graduate programs and make up the bulk of what English departments do on campus. So we know there are a lot of faculty in English departments with rhetoric specializations and that writing instruction forms the foundation of most of these departments. So now go and look at the undergraduate majors of these departments. Perhaps you will find a writing major of some kind or maybe a concentration in writing as an option for students. Undoubtedly there are a growing number of these, but I want you to ignore those for a moment. Look at departments that don’t have those things. Look at the “English BA” itself. Do you see any required courses in writing/rhetoric?

Yesterday I was writing in passing about an article bemoaning the disappearance of the Shakespeare requirement in English majors. Rest assured, however, if you peruse some English majors you’ll find plenty of literary requirements–in different historical periods of British and American literature, in different ethnic literatures, and so on. I doubt you’ll find many such majors with a single required course in rhetoric. What that should tell you quite plainly is that the literary studies faculty, who, by majority rule, design these curricula, do not believe that some exposure to rhetoric is an important part of getting a disciplinary education in “English.” Sure, we can have some cordoned-off area where students can study rhetoric and writing, and we might even allow writing electives as part of the English major, but we’re not going to make rhetoric/writing integral to English. The most amusing part of that, of course, is that while English majors minimize rhetoric out of one side of their mouth, out of the other they claim to their students that they will help them become “good writers.”

As I said, there’s not much or probably anything MLA can do about that situation, but what it means is that literary scholars, as a group, do not view rhetoric/writing studies/composition as an integral part of English. So why would rhetoricians want to be part of an organization that has devoted itself to literary studies for over a century? Maybe it would be in MLA’s interest to try to shift the view of its literary studies members on this matter. There was a long period of time, particularly in the early days of my career, when it seemed that people in my field were demanding some respect from their literary colleagues. There was a time when there were a lot of hard feelings, departments splitting apart and so on. Maybe that’s still the case in some places. Today though I think we’re in a very different situation. I’m not sure that an alliance with literary studies is in the best long term interest of rhetoric.

Categories: Author Blogs

when students get their “money’s worth” and other academic clickbait

27 April, 2015 - 09:11

Without laying this all at the feet of social media, in today’s fast-paced modern world (ahem), the competition in the attention economy appears to push more extreme positions. There’s nothing really new there, as the sensationalism of tabloids attests, but that seemed more avoidable in the past. The modern instantiation of clickbait is far more pervasive, and unlike spam, we pass it around willingly. Indeed we have reached a moment when it is becoming increasingly difficult to differentiate among actual news, genuine concerns, and clickbait, largely because effective clickbait draws on the other two. There’s a nice article in the Atlantic by Megan Garber called “The Argument Economy” which takes on some of this.

But my point is that this is not just social media. Perhaps you’ve seen (on Facebook, of course) news of the recent bill in Iowa whereby professors whose student evaluations fall below a certain level would be automatically fired. Even more amusing (or at least it would be amusing if it were fiction) is the suggestion that the five worst professors above the minimum line would face being voted off the campus by students in some reality game show fashion. The general sense is that this bill will not become law. As such it might be fair to call this clickbait legislation. And if NPR reports on the matter, is that clickbait?

Similarly, when the Chronicle, the Telegraph, and the National Review all want to report on the American Council of Trustees and Alumni’s report of an apparent decline in the requirement of Shakespeare in English majors, do we call that news or clickbait? Is this clickbait curriculum? The promulgation of academic clickbait does not preclude the possibility of more serious conversations about teaching or curriculum. In fact, those conversations are certainly happening, but I imagine the clickbait has an effect on them, especially when those participating in the conversation might get most of their information from such outlets.

I see these clickbait offerings as presenting two familiar commonplaces about college education, neither of which is especially helpful. As the NPR article reports, the emphasis on teacher evaluations is ostensibly about ensuring students get their “money’s worth.” This refers, of course, to our concerns about student debt and the cost of college but also to the economic valuation of college degrees as investments in human capital. On the flipside, the cultural conservatism of a group like ACTA and its plea for Shakespeare reflects a competing but equally unhelpful vision of education as the transmission of traditional cultural values.

To be clear, I don’t have any investment in conversations about how to structure a degree in literary studies. And I don’t think there’s anything wrong with students viewing their college education in terms of how it might connect to their professional life after college. Indeed these are really both discussions about how to value a college education. Unfortunately the commonplaces of academic clickbait don’t appear to provide much affordance for trying to address this challenge. In their defense, though, that’s not their purpose, so I guess they’re OK as long as we understand we won’t get anything productive out of this kind of rhetoric.

In all fairness, there should be some standard of expectations to which faculty are held as teachers, even beyond tenure, with the possibility of losing one’s job as a kind of measure of last resort. However, to get there, we’d really have to create a culture of teaching that doesn’t exist. Graduate students in most disciplines receive little or no training as educators. At best, we tend to rely on mentoring. Furthermore, we know teaching is only part of the job, and research productivity is often the primary measure of tenure. We’d have to shift that (at least at some institutions). So, could you imagine your department offering a series of professional development workshops for faculty in the area of teaching and the faculty showing up on a regular basis? If they did, we’d probably have to have some serious conversations about what constitutes good teaching. That would be weird, if not horrifying. That’s how far we are from valuing teaching as a university culture.

If we did have such conversations in an English department, we would probably want to talk about what we teach and why we teach it. Maybe there would be faculty in such a department who would share ACTA’s view of Shakespeare. This commonplace seems to set up a battle among pragmatic pandering to student interests in pop culture to attract numbers, some version of the culture wars over the canon, and a commitment to the traditions of literary studies. Not surprisingly, as a rhetorician in an English department, I think staging a conversation about an English major in terms of literary studies is missing the boat. What would it mean to establish a purpose for an English degree that didn’t mention literature at all? Then one might articulate how literary studies might serve that purpose. Of course, there’s likely a disciplinary issue there, as that would require establishing the study of literature as useful to some end other than its own, as designed to do something other than reproduce its own disciplinary paradigm.

As impossible/comical as it is to imagine sitting in a series of teaching workshops with faculty, it’s even more absurd to imagine English departments entertaining the possibility of an English major that was not at least 75% literary studies. Sure, there could be some separate majors or concentrations, but can anyone imagine an English department with a single major where only 50% of the courses addressed literature? It sounds absurd, even though 50% of the jobs in English every year are not in literary studies. They’re rhet/comp, technical writing, creative writing, and so on. It sounds absurd until one remembers that most of what most English departments do, in terms of raw numbers of students served, is teach writing through first-year composition. It’s like having a department that taught BIO 101, but then was otherwise a Chemistry department. Of course we now have biochemistry departments.

In any case, academic clickbait isn’t doing us any favors in terms of opening some productive dialogue about the values driving higher education. All it likely does is create reactionary positions by espousing extreme views.

Categories: Author Blogs

the humanities’ nonhuman electrate future

23 April, 2015 - 13:31

Earlier this week, Gregory Ulmer spoke on campus. I was happy for the opportunity to see him speak, as I hadn’t met him before and his work, especially Heuretics, has been important to my own since my first semester in my doctoral program. His talk focused on his work with the Florida Research Ensemble creating artistic interventions, which he terms Konsults, into Superfund sites. However, more broadly, Ulmer’s work continues to address the challenge of imagining electracy (n.b. for those who don’t know, electracy is to the digital world what literacy is/was to the print world). I’ve discussed Ulmer’s work many times here, so today my interest is in discussing it in terms of the Bérubé talk I saw last week.

In the Bérubé talk (see my last post), the humanities’ focus emerged from dealing with the promises and challenges of modernity and Enlightenment. Freedom, justice, equality, rationality: they all offer tremendous promise as universals and yet also seem unreachable and treacherous. So the humanities must play this role in the indeterminable pursuit of judgment. In this discourse of right/wrong it supplants religion, though obviously religion continues on; the humanities are perhaps less willing to settle on an answer than religion often seems to be.

Ulmer offers a different perspective. To the binaries of right/wrong (religion) and true/false (science), he offers pain/pleasure (aesthetics). As he notes, this third segment comes from Kant as well but is only realizable as an analog to the first two in an electrate society, the first being the product of oral cultures and the second the product of literate ones. He makes an interesting point in relation to Superfund sites and climate change more generally: we are largely able to recognize that destroying our climate is wrong, and we are able to establish the scientific truth of climate change, but we appear to need to feel it as well.

In a fairly obvious sense, pain/pleasure seems a more basic segment, and one that is available to a wide range of animals, at least. What we get in a control society (Deleuze) and perhaps more so in a feed-forward culture (Hansen) are technologies that operate on this aesthetic level to modulate subjectivity and thought in a way that the symbolic behaviors of oral and print societies did not. That is, we’ve always been able to seduce, persuade, entice, repel, frighten, hurt, and so on with words, but at least there was some opportunity for conscious engagement there.

Many of the challenges Bérubé identified with judgment have to do with the orientation of the individual to societies: e.g. how we view people with disabilities or differing sexual orientations. However, one thing we might take from Ulmer’s argument is the realization that the “self” is a product of literate culture. If we see the self as a mythology, perhaps as the way we might view an oral culture’s notion of spirit, then perhaps the challenges of judgment that arise from Enlightenment become irrelevant, much like the challenge of appeasing gods is to moderns. In some respects we still want the same ends in terms of material-lived experience–we still want a good crop–we just stop appeasing gods or pursuing justice to get it. No doubt such notions seem absurd. Ulmer would suggest that they seem absurd because we are only beginning to grasp at them. He reminds us of Plato’s initial definition of humans as “featherless bipeds.”

In the place of the self Ulmer suggests an avatar as an “existential positioning system,” an analog to GPS. He didn’t get too far into this matter in the talk, but I am intrigued. Of course GPS is a technological, networked identification. The self is also a technological identification, a product of literacy. For Ulmer the EPS is likely image-based. I am interested, though, in its “new aesthetic,” alien phenomenological qualities as a kind of machine perception. While I argue that language is nonhuman, so both oral and print cultures had nonhuman foundations, electracy might so decenter the human as to allow us to feel the nonhumans in a new way. In this respect, an EPS might be a tool that shows us a very different way of inhabiting or orienting toward the world. Arguably, that’s what writing did.

In any case, trying to figure that out seems like a really interesting project for the humanities, one that would produce an outcome, even if the implications of that outcome may take decades to realize.

Categories: Author Blogs

gravity’s rhetoric and the value of the humanities

17 April, 2015 - 14:35

I attended a talk today at UB by Michael Bérubé on “The value and values of the humanities.” Without rehearsing the entirety of his argument, the main theme regarded how the notion of the human gets defined and the struggles in the humanities over universal values. So while we largely critique the idea of universalism, we also seek to expand notions of humans and human rights in universal ways (in particular the talk focused on queer theory and disability studies, but one could go many ways with that), though even that encounters some limits (as when people raise concerns over whether Western values about equality should be fostered in non-Western cultures). The talk is part of a larger conference on the role of the humanities in the university, and part of Bérubé’s point is that the intellectual project of the humanities, which he characterized as this ongoing, perhaps never-ending, struggle over humanness, continues to be a vibrant project and should not be confused with whatever economic, institutional, bureaucratic, political crisis is happening with the humanities in higher education.

I don’t disagree with him on these points, but my concerns run at a tangent to his claims. I think we can accept the enduring value of the humanities project as this ongoing struggle with Enlightenment and modernity. (I.e. we value justice, freedom, equality, rationality, etc. but we can’t really manage to figure those things out.) But, for me, this has little to do with valuing the particular ways that this project is undertaken or the scope of the project. That is, one can completely share in this project and still argue that many of the disciplines that comprise the humanities are unnecessary or at least do not require as many faculty as they currently have. So in the 19th century we didn’t really have literary studies. We had it in spades in the 20th century (literature departments were almost always the largest departments in the humanities and perhaps across the campus). In the 21st century? Well, we’ll see I guess. But those ups and downs would really have nothing to do with the value of this general humanities project. Because, in the end, the argument for or against the importance of literary study in the pursuit of this project has to be made separately. And the same would be true of any humanities discipline.

In fact, it’s not only true of every discipline, it is also true of every theory, method, genre, course, pedagogy, and so on. It does not necessarily mean that we as humanists should continue writing what we write, teaching what we teach, or studying what we study or that such practices should be propagated to a new generation of students and scholars. It doesn’t mean that they shouldn’t, either.

In the discussion following, Bérubé made an observation that anything with which humans interact could be fair game for humanistic study. I think his point of reference was fracking, but I started thinking about gravity, which obviously we all interact with. I also sometimes think about gravity when I think about nonhuman rhetoric as a force and how far it extends. If Timothy Morton is willing to argue that the “aesthetic dimension is the causal dimension,” then might one substitute rhetoric for aesthetic? That is, are all forces rhetorical? Or barring that, might any force have a rhetorical capacity? So, gravity.

Here’s the argument I came up with for saying gravity is rhetorical. Every living thing on Earth evolved under a common and consistent gravitational force. Obviously we didn’t all end up the same because gravity was just one of many pressures on evolution. But clearly our skeletal and muscular structures are partly an expression of our encounter with gravity. This is true not only in evolutionary or species terms but individual ones as well. If I grew up on the Moon then I would look different than I do (as any reader of sci fi knows). It was Michael Jordan’s relationship to gravity that made him so amazing, and we might say the same of dancers, acrobats, and so on. One might proceed to speak about architecture’s aesthetics in gravitational terms. Anyway, I think you get the idea. It might be possible to speak of gravity as an expression, not simply as a constant predictable force, but as an indeterminate force productive of a wide range of capacities that cannot be codified in a law. So while I don’t think I would want to argue that gravity is inherently rhetorical, that the Moon’s orbit of the Earth is rhetorical, I might argue that rhetorical capacities can emerge in gravitational relations.

Maybe you don’t want to accept that argument. Most humanists would not because the humanities, in the end, are more defined by their objects of study, their methods, and their genres than by these larger, more abstract questions of value. That is, no history or English department is going to organize itself in terms of curriculum or faculty around these questions of value. They organize around geographic regions and historical periods. We don’t hire people to study questions of value, we hire them to study particular literary periods or apply specific methods.  We place highly constrained expectations on the results of those studies as well in the production of highly technical genres–the article, the monograph, etc.

So perhaps these broader questions act as a kind of gravitational force on the humanities, both drawing the disciplines together and shaping the particular expressions and capacities they reflect, but if so then that only points to the contingent qualities of those disciplines. In addition, clearly other forces shaped the particular forms humanities study has taken in the US–from massive shifts like nationalism and industrialization to policies regarding the building of universities (e.g. the Morrill Act or the GI Bill) or demographic shifts in US population. And, of course, I shouldn’t forget technologies.

I don’t think Bérubé would disagree with any of that, so in the end I guess I’m left thinking that the value of the humanities really tells us very little of its future.

Categories: Author Blogs

against close reading

13 April, 2015 - 16:49

Close reading is often touted as the offering sacrificed at the altars of both short attention spans and the digital humanities (though probably for different reasons). Take for example this piece in The New Rambler by Jonathan Freedman, which is ostensibly a review of Moretti’s Distant Reading but manages to hit many of the commonplaces on the subject of digital literacy, including laments about declining numbers of English majors: “fed on a diet of instant messages and twitter feeds, [students today] seem to be worldlier than students past—than I and my generation were—but to find nuance, complexity, or just plain length of literary texts less to their liking than we did.” But it’s not just students, it’s colleagues as well: the distant and surface readers, for example.

In the end though, Freedman’s argument is less against distant reading than it is for close reading: “distant reading doesn’t just have a guilty, complicitous secret-sharer relation to soi-disant close reading: it depends on it.  Drawing on the techniques of intrinsic analysis of literary texts becomes all the more necessary if we are to keep from drowning in the sea of undifferentiated and undifferentiable data.” And as far as I can tell, the distant and surface readers do not really make arguments against close reading in principle. They may critique particular close reading methods in order to argue for the value of their own methods, but that’s a different matter.

So I’ll take up the task of arguing against close reading, just so there’s actually an argument that defenders of close reading can push up against if they want.

I don’t want to make this a specifically literary argument. Yes, “close reading” is a term that we get from New Criticism, so it has terminological roots in literary studies, but it’s come a long way from then. The symptomatic readings of poststructuralism, cultural studies, and so on are all close reading practices, even though they are quite unlike the intrinsic interpretive methods of New Criticism (relying on the text itself). As Katherine Hayles argues in How We Think,

close reading justifies the discipline’s continued existence in the academy, as well as the monies spent to support literature faculty and departments. More broadly, close reading in this view constitutes the major part of the cultural capital that literary studies relies on to prove its worth to society

To borrow an old cattle industry slogan, close reading is “what’s for dinner” in English Studies. And we have made a meal of it. Whether we’ve made a dog’s dinner of it is another matter. Regardless, in the contemporary moment, and certainly for the 2 decades or so I’ve been in the discipline, close reading has also been a central feature of rhetoric. All one has to do is think of the attention to student writing to see that, but it is also characteristic of the way many rhetoricians go about their own scholarship. So what I say here about close reading applies across English Studies.

Now, while I have just said that close reading is a wide-ranging practice, it is still one that is specific to print texts and culture. And, of course it is not just a reading practice, because if it were, how would we know we did it? It’s also a writing/communicating practice. That is, I’d think of close reading as a set of genres of print textual analysis.

The key question, from my perspective, is how these genres operate in a digital media ecology. I wouldn’t want to say that they don’t operate, because people still produce close readings, and I wouldn’t want to gainsay their claim that they do so. Instead, my point is that close reading can no longer operate as it once did. From the early days of the web, across computers and writing research and beyond, it was already clear that multimedia and hyperlinks shifted rhetorical/reading experience. But it has become much clearer in the era of high speed internet, mobile media, and big data, that text just isn’t what it once was. It doesn’t produce meaning or other rhetorical effects in the same way.

Besides that, reading and writing are so much more obviously and immediately two-way streets. As you read, you are being read. As you write, you are being written. Is that an “always already” condition? Maybe, but it certainly has specific implications for digital media ecologies. What does it mean to read your Facebook status feed closely when what is being offered to you has been produced by algorithmic procedures that take account of your own activities in ways of which you are not consciously aware? Even if you’re going to read some pre-Internet text (as we often do), you’re still reading it in a digital media ecology. Again, it’s not that one can’t do close reading. It’s that close reading can’t work the same way. Maybe close reading just comes to mean that we study something, that we pay attention to it, rather than indicating any particular method or strategy for studying, but that would seem to miss the point. For me, close reading rests on a particular set of assumptions about how text is produced and how it connects with readers, not only in terms of one particular text and one particular reader, but also the whole constellation of texts and readers: i.e., a print media ecology.

Arguing “against close reading” then is not an argument to say that we should stop paying close attention to texts. If anything, it’s an argument that we should pay closer attention to the ways in which the operation of text is shifting.

Categories: Author Blogs

writing epilogues on the 20th-century university

8 April, 2015 - 08:15

Terry Eagleton’s recent Chronicle op-ed is making the rounds. It’s a piece with some clever flourishes but with largely familiar arguments. What I think is curious is that the nostalgia for the good old, bad old days describes a university that we would no longer find acceptable. Looking back at the end of the 20th century, the greatest accomplishment of higher education might be the way that we managed to greatly expand access in the last two decades. We have clearly not found a sustainable way to afford the post-secondary education of this growing portion of the population, and many of our problems revolve around that challenge. However, many of the other changes that Eagleton laments are a result of other aspects of this shift. Students show up on campus with different values, goals, and expectations for higher education than they once did. Governments, businesses, and other “stakeholders” also have shifting views to which universities are increasingly accountable as the role of higher ed becomes further embedded in the economy with more and more jobs requiring it. As I mentioned in my last post, I’m doing some campus visits with my daughter. It may be that the “highly/most selective” colleges and universities still get to select students who fit their educational values, but that’s not the case at public universities.

When I read articles like this one, I tend to have three general reactions. First, I agree that there’s a lot wrong with the way higher education is moving (increased bureaucracy, decreased public support, etc.). Second, I find it odd and a little worrisome how “technology” is scapegoated, as if higher education hasn’t always been technological. Third, I find the nostalgia understandable but ultimately unhelpful. As much as we may not like where we are or where we appear to be going, trying to go back is not a viable or even desirable option.

One of the amusing parts of Eagleton’s essay is his description of how it used to be, when faculty didn’t finish dissertations or write books because such things suggested “ungentlemanly labor.” I don’t think we have many colleagues who still share those values, but we still object to notions of “utility.” Maybe it’s the lower middle-class upbringing, or maybe it’s the rhetorician in me, but I’m not insulted when someone finds something I’ve written or a class I’ve taught to be useful. To the contrary, I actually prefer to do work that other people value and makes their lives easier or better, even though that might make me “ungentlemanly.”

In a couple recent conversations I’ve had around this topic, I have heard repeated the value of writing a book that maybe only a handful of people might read. I was struck by the widespread appeal of this value, at least among the audiences of humanities faculty and grad students who were present. I think I understand why they feel that way. They want to pursue their own interests without having any obligation to an audience. If Eagleton’s old colleagues found writing itself to be ungentlemanly then many contemporary humanists find the idea of writing for an audience (or writing something that would be useful) to be an anti-intellectual constraint.

Given that perhaps as a set of disciplines we are not particularly inclined to rhetorical strategies, here’s some fairly straightforward advice. It’s not an especially effective argument to say that everything about the contemporary university is going to hell and that we need to change everything so that we can create conditions where I can pursue my own interests regardless of whether they result in anything useful or even produce something that anyone else would bother reading, because the humanities are inherently good and must be preserved. Perhaps that seems like a hyperbolic version of this position, but if so, only barely. A better rhetorical strategy would be one that said something along the lines of “here’s how we believe higher education should be adapting to the changing demands of society, and here’s what we in the humanities would do/change to respond to those challenges.” I see a lot, A LOT, of digital ink spilt on the humanities crisis. I almost never see an argument from within the humanities about how the humanities itself should change. It’s almost always about how everyone else should change (students, parents, politicians, administrators, employers, etc.) so that we don’t have to.

Why is that?


Categories: Author Blogs

Does it matter where you go to college?

3 April, 2015 - 09:38

By now this is a familiar commonplace in our discussions about the crisis of higher education.  Here’s one recent example by Derek Thompson from the Atlantic that essentially argues that it’s less important that you get accepted into a great college than that you be the kind of person who might get accepted. However, as is painfully evident, the whole upper-middle class desperation of “helicopter parents” and “tiger moms” and whatnot to get their kids into elite schools and away from the state university systems that they’ve helped to defund through their voting patterns creates a great deal of ugliness. I’m assuming there’s no news for you there.

Personally I am in the midst of this situation. My daughter is a junior and we’ll be headed to some campus visits next week during her spring break. Her SATs put her in the 99.7th percentile of test takers, and the rest of her academic record reflects that as she looks to pursue some combination of math, physics, and possibly computer science. We live in a school system with a significant community of ambitious students (and families), where the top of the high school class regularly heads out to the Ivies. I’m sure it’s not as intense as the environment of elite private schools in NYC, but it’s palpable. This also has me thinking back to when I was headed out to college, as a smart kid (“class bookworm” as my yearbook will evidence) in an unremarkable high school, a first-generation college grad going to a state university, coming out of a family that had its financial struggles until my mom remarried when I was a teenager. I don’t mean to offer that as a sob story (because it isn’t) but only to say that my own background gives me a lot of misgivings about the value and faith we put in this race to get into elite colleges.

I think it’s easy to see the ideological investments underlying the way we try to answer this question. Part of the American Dream is believing that education is the great democratizer, that it is meritocratic, and that in the end, overall, the brightest and best students are rewarded. Part of that is also believing that intelligence is not really a genetic trait and that socio-economic contexts are not a roadblock; almost anyone can succeed if they put their mind to it. For those on the Left (i.e. the circles I mostly travel in), there is a clear recognition of socio-economics as largely driving opportunities for academic success, and that’s hard to deny when one looks at the big picture. So that tells you that on a societal level education on its own does not solve economic disparity. However, it doesn’t tell you much on the individual level where ultimately what you want is some sense of agency rather than having your agency taken away by socioeconomics or admissions boards.

Derek Thompson describes the situation he is investigating as affecting the top “3%” of high schoolers, though it’s probably more like 1% if one is thinking of the top 20-25 schools in the country. Here’s what I think about those kids, including my daughter… They’re going to do OK if they manage to avoid having a nervous breakdown trying to get into college. Maybe an Ivy League degree is a surer route to being CEO or senator; it almost certainly is. But you’re probably as likely to be a professional athlete or movie star as become one of those, particularly if you aren’t already a senator’s son (cue the Creedence). Even though we’re still talking about tens of thousands of families, focusing on the top 1 or even 3 percent seems fairly odd. In all honesty it probably is a little beyond the scope of pure individual will to get into a top, top college. You probably do need some natural-born smarts and some socio-economic advantages to have a decent shot.

If we’re going to have a conversation about the importance of where you go to college, it makes sense to me to talk instead about the students in the middle of the college bell curve. What’s the difference between the university ranked #50 and the one ranked #150? Is there a big difference between Florida, Buffalo, Tennessee, and West Virginia? Setting aside the Ivy bias, what’s at stake in going to Emory or Virginia (a top 20 school) rather than Wisconsin or Illinois or RPI (a top 50 school)? From school #20 to school #150 we’re still talking about students in the top 5-20% of SAT test-takers. I’m thinking all those students are reasonably well-positioned to get a good education that leads to a rewarding career. And what’s the difference between the student in the 80th percentile of college applicants who goes to a big public university and one in the 50th or 60th who goes to a regional state college? And how do these differ from the ones coming up through community colleges?

It seems to me that those are much more interesting questions than the ones about the top 1 or 2%, even if that’s where my own kid is drawing my personal attention.

Categories: Author Blogs

academic capitalism: futures of humanities graduate education

31 March, 2015 - 09:10

Yesterday I attended a roundtable on this topic on my campus. These things interest me both because I have the same concerns about these issues as most of us do and because I am interested in the ways faculty in the humanities discuss these matters. So here are a few observations, starting with things that were said that I agree with:

  1. The larger forces of neoliberal capitalism cause problems for higher education and the humanities in particular.
  2. There is a perception of humanistic education as lacking value which needs to be corrected.
  3. We need to take care with any changes we make.

Certainly it’s the case that broader cultural and economic conditions shape, though do not determine, what is possible in higher education and the humanities. This has always been the case. When we invented the dissertation, the monograph, and tenure as we experience them today (which was roughly in the early-mid 20th century), there were cultural-economic conditions that framed that. It’s important to recognize that graduate education is part of a larger network and ecology of relations, that you probably can’t just change it without changing other things.

In terms of actual graduate education issues, our discussion focused on two key points, I think: the possibility of revising the dissertation and concerns about the job market. These are two of the common themes that come out of the MLA report. Here are my basic thoughts on these two matters.

  1. It’s very difficult to change what the dissertation is like without also changing the scholarship one does after completing the degree.
  2. The casualization/adjunctification of the job market is tied to the operation of graduate education and the work of tenured faculty. You can’t change it without changing those other things.

The upshot, from my perspective, is that while I completely agree that we need to fix the way higher education is funded, to reaffirm our understanding of it as a social good and, if necessary, as a strategic national interest, AND that we need to intervene in the popular discourse about humanities education to make clear the value of the things we can do, none of that will be enough on its own. It will also be necessary for us to change what we do as well.

Unfortunately that’s the part I hear the least about and also the part that produces the most resistance. It’s unfortunate because it’s the element over which we have the most direct control. Mostly what I hear are defenses of the value of the work that we do and how people who want us to work differently don’t really understand us. Both of those things might be true. There is value in the work of the humanities, and probably at least some of the people who want humanities to change may not understand the work very well or appreciate that value. But ultimately I don’t think that’s the point either.

So I would pose graduate education reform as the following question: what would it take for us to dethrone the monograph as the central measuring stick of scholarly work in the humanities? You would think that the answer should be “not much.” After all, it’s got to be less than 10% of four-year institutions that are effectively “book for tenure, two books for full professor” kinds of places. Even if we just switched to journal articles and chapters in essay collections (i.e. to other well-established genres), that would be enough. The problem, I would say, is that humanities professors want to write books, or at least have a love/hate relationship with the prospect.

No doubt it is true that one can accomplish certain scholarly and intellectual goals in book-length texts that cannot be achieved in other genres. That’s the case with most genres: they do things other genres do not. How did we become so paradigmatically tied to this genre? So tied that many might feel that the humanities cannot be done without monographs.

If our scholarship worked differently then our graduate curriculum could as well. Not just the dissertations, but the coursework, which in many cases is a reflection of a faculty member’s active book project. Without the extended proto-book dissertation, maybe there would be more coursework, more pedagogical training, more digital literacy (to name some of the goals in the MLA report). If there were more coursework then maybe you’d need fewer graduate students to take up seats in grad courses and make the courses run. If you had three years of coursework instead of two, then you’d need to enroll 1/3 fewer students each year to fill the same number of classes. If you didn’t have dissertations to oversee, then you could free faculty from what can be a significant amount of work, especially for popular professors.

I’m not sure if that would impact adjunctification much, but at least it would reduce the number of students going through the pipeline, which is probably about as much as one could ask graduate education reform to accomplish on its own in this matter.

Now I don’t think any of these things will happen. I am very skeptical of the capacity of the humanities to evolve. Other disciplines across the campus have been more successful at adapting to these changes but they are not as deeply wedded to print literacy as much of the humanities are. However, until we can recognize that it is our commitment to the monograph that drives the shape of graduate education, I don’t think we can do more than make cosmetic changes.


Categories: Author Blogs

regarding “invidious distinctions between critique and production”

24 March, 2015 - 11:03

I’m working at a tangent from my book manuscript today, preparing a presentation for a local conference on “Structures of Digital Feeling.” If you have the (mis)fortune to be in Buffalo in March, I invite you to come by. Anyway, my 15 minutes of fame here involve wresting Williams’ “structure of feeling” concept from its idealist ontological anchors, imagining what real structures of feeling might be, and then putting that to work in discussing “debates” around the digital humanities.

Fortunately, Richard Grusin offers the perfect opening for this conversation as he is already discussing “structures of academic feeling” at the MLA conference in his juxtaposition of panels about the “crisis” in the humanities with the more positive outlook of DH panels. (I haven’t been to MLA in a few years so I wonder if this distinction still holds.) The quoted phrase in the title of this post comes from his Differences article on the “Dark Side of the Digital Humanities” (a reformulation of the panel presentation of the same title). It’s a response to the familiar DH refrain of “less yack, more hack.”

As he argues:

Specifically, because digital humanities can teach students how to design, develop, and produce digital artifacts that are of value to society, they are seen to offer students marketable skills quite different from those gained by analyzing literature or developing critiques of culture. This divide between teachers and scholars interested in critique and those interested in production has been central to the selling of digital humanities. My concern is that this divide threatens both to increase tensions within the mla community and to intensify the precarity running through the academic humanities writ large.

His objection to this is twofold. First, he objects to the suggestion that he doesn’t make things too (“tell that to anyone who has labored for an hour or more over a single sentence”). And second, he suggests that making things in the absence of “critique” echoes “the instrumentalism of neoliberal administrators and politicians in devaluing critique (or by extension any other humanistic inquiry that doesn’t make things) for being an end in itself as opposed to the more valuable and useful act ‘of making stuff work.’” So the net result is something along the lines of: all humanists make things, but in case we don’t, making things is bad. So while Grusin wants DHers to stop making the “invidious distinction between critique and production,” he still wants to make it himself in order to critique DH.

In my view, this is an argument between methods and an argument for the primacy and necessity of “critique.” It is an argument that says the humanities are essentially defined by critique. What else can critique be expected to argue? I am reminded of Vitanza’s “Three Countertheses” essay where he playfully asks if we can imagine CCCC having as its conference theme the question “Should Writing Be Taught?” We might similarly ask MLA to have as its conference theme “Should We Be Doing Critique?”

Given this connection (in my head, at least) when I read about “invidious distinctions between critique and production,” I don’t think about DH. I think about rhetoric. I think about how literary studies established these distinctions in order to make critique a master term and devalue production as “skills.” I guess it’s not so funny now that the shoe is on the other foot.

What is funny though is the sudden concern with the precarity of labor. Here Grusin is rehearsing his earlier argument about the role that DH plays in creating alt-ac, non-tenure academic work. It’s a legitimate concern, but it’s a little like focusing on recycling your beer cans while driving a Hummer. If there’s a responsible party for adjunctification in English Studies, it’s got to be the literary critics who turned composition into a mill for graduate student TAs who then turn into adjuncts. I will not ignore rhet/comp’s complicity in this, but it is the “invidious distinctions between critique and production” that allowed writing instruction to become a place where this kind of labor practice could evolve.

But let me end on a point where I agree with Grusin because really I find much of his work valuable even though I disagree with him here. Near the end he writes:

Digital media can help to transform our understanding of the canon and history of the humanities by foregrounding and investigating the complex entanglements of humans and nonhumans, of humanities and technology, which have too often been minimized or ignored in conventional narratives of the Western humanistic tradition.

Grusin may not think of himself as a digital humanist, and by some narrow definition of the term he isn’t. But he’s as much a digital humanist as I am. This is at least partly the way he sees his own work, and it’s a fair description of my approach to digital media as well. And I suppose that given my deep investment in the “theory” of DeLanda, Latour, Deleuze, and so on, one might think I’m hip deep if not neck deep in critique as well. But I don’t look at it that way. I don’t look at it that way because, as I see it, critique only exists by invidiously distinguishing itself from production. However that distinction is unstable. Production can be uncritical, but criticism cannot exist without being produced. It’s the idealism of critique that prevents it from seeing this, that prevents it from seeing that being tied to books, articles, genres, word processors, offices, tenure, etc., etc. instrumentalizes critique as much as computers and networks instrumentalize DH. The project Grusin describes addresses the division between critique and production, but critique doesn’t really survive that. Critique needs to be the pure private thought of the critic in order to be what it claims to be. Once critique becomes a kind of production, a kind of rhetoric and composition, it loses its hold as the master discourse of the humanities.

Categories: Author Blogs