Digital Digs (Alex Reid)

an archeology of the future

pedagogy, computers and writing, and the digital humanities #cwdhped

17 July, 2014 - 10:44

Over the past couple of days there’s been a Twitter conversation (#cwdhped) and an evolving open Google doc that explores the idea of some summit or FTF discussion among scholars in the digital humanities and those in computers and writing on shared interests in pedagogy. For those who don’t know, “computers and writing” is a subfield of rhetoric and composition that focuses on technological developments. I’ll reserve my comments about the weirdness of such a subfield in 2014 for another day. Let’s just say that it exists, has existed since the early 80s, and that there’s a lot of research there on pedagogical issues. Digital humanities, on the other hand, is an amorphous collection of methods and subjects across many disciplines, potentially including computers and writing and possibly including people and disciplines that are not strictly in the humanities (e.g., education or communications or the arts). So, for example, when I think of the very small DH community on my campus, I’m meeting with people in Linguistics, Classics, Theater/Dance, Anthropology, Education, Media Study, Architecture… Some of these people are teaching students how to use particular media creation tools. Some are teaching programming. Some are doing data analysis. Some are teaching pedagogy. Most of the digital-type instruction is happening at the graduate level. And none of it is happening in what we’d commonly think of as the core humanities departments (i.e., the ones with the largest faculty, grad programs, and majors). Of course that’s just one campus, one example, which raises the following questions:

  • What % of 4-year US colleges have a specific digital outcome for their required composition curriculum?
  • What % of those campuses have a self-described “digital humanities” undergraduate curriculum that extends beyond a single course?

I would guess there are ~1000 faculty loosely associated with computers and writing, maybe fewer. I’m sure they are doing digital stuff individually in their classrooms, but is there something programmatic going on at their respective campuses? There are 100s(?) of professional writing majors now, most of which have some digital component, but sometimes it is still just one class. And if we stick to the MLA end of the DH world, how many English and/or language departments have a specific DH curriculum? How many have any kind of DH or digital literacy outcomes for their majors?

This leads me to the following question/provocation: setting aside composition courses, how many different courses does the average US English department offer each year with an established digital learning outcome or digital topic in its formal catalog course description? I think that if I set the over/under at 2.5 you’d be crazy to take the over.

My point is that when we are talking about DH pedagogy, we are talking about something that barely exists in a formal way. If you want to think about 1000s of professors and TAs doing “something digital” in their courses here and there, then yes, it’s all over the place. And yes, we are using Blackboard, teaching online, and so on. And maybe we could come up with a list of 25 universities that are delivering a ton of DH content; maybe the 100s of institutions with professional writing majors are offering an above-average amount of digital content; and maybe the English departments that deliver secondary education certification are providing the required digital literacy content for those degrees. But put all of that into the context of 3,000 4-year colleges and what do you see?

I think the same is true on the graduate level. We can point to some programs and to individual faculty, but nationally, how many doctoral programs have specific expectations in relation to DH or digital literacy for their graduates? I would bet that even at the biggest DH universities in the nation, you can get a PhD in English without having any more digital literacy than a BA at the same school. Rhet/Comp has a higher expectation than literary studies, but only because of the pedagogical focus and the expectation that one can teach with technology. This doesn’t mean that students can’t choose to pursue DH expertise at many institutions, at either the undergrad or grad level. It’s just not integral to the curriculum.

So my first questions to the MLA end of the DH community (just to start there) are:

  • What role do you see for yourselves in the undergraduate curriculum?
  • Is DH only a specialized, elective topic or should there be some digital outcome for an English major?
  • Should there be some digital component of a humanities general education?
  • What role should DH play in institutional goals around digital literacy?

The same questions apply at the graduate level. Is DH only an area of specialization or does it also represent a body of knowledge that every PhD student should know on some level?

If a humanities education should prepare students to research, understand and communicate with diverse cultures and peoples, then how that preparation is not integrally and fundamentally digital is beyond me. We really don’t need to say “digital literacy” anymore, because there is no postsecondary literacy that is not digital. Why is it that virtually every English major is required to take an entire course on Shakespeare but hardly any are required to have a disciplinary understanding of the media culture in which they are actually living and participating? (That’s a rhetorical question; we all know why.)

From my viewpoint, that’s the conversation to have. Tell me what it is that we want to achieve and what kind of curricular structures you want to develop to achieve those goals. The pedagogic piece is really quite simple. How do you teach those courses? You hire people who have the expertise. Sure there’s some research there, best practices, and various nuances, but that’s about optimizing a practice that right now barely exists.

 


rhetoric’s default mode

8 July, 2014 - 11:03

Following on my previous post, a continuation of a discussion of “neurorhetoric.” Generally speaking, rhetoricians, like other humanists, approach science with a high degree of skepticism, especially a science that might potentially explain away our disciplinary territory. As Jordynn Jack and others have pointed out, there is a strong interest in the prefix neuro-, and those in our field might benefit from looking bi-directionally at both the rhetoric of neuroscience (how neuroscience operates rhetorically as a field) and the neuroscience of rhetoric (what neuroscience can tell us about rhetorical practices). In an article she co-authored with Gregory Applebaum (a neuroscientist), they point to the broader lessons from the rhetoric of science in approaching neuroscientific research, particularly the need to resist engaging in “neurorealism, neuroessentialism, or neuropolicy,” which are all variants of interpreting research as making certain kinds of truth claims. Similarly, I tend to turn toward Latour here to think about the constructedness of science.

With that in mind, I was following my nose from my last post’s discussion of an article in Science to this article, “Rest Is Not Idleness: Implications of the Brain’s Default Mode for Human Development and Education,” by Mary Helen Immordino-Yang, Joanna A. Christodoulou and Vanessa Singh. The “default mode” describes a relatively new theory of the brain/mind that identifies two general networks. One is “task positive,” which is a goal-oriented, outward-directed kind of thinking, and the other is “task negative,” which is inward-directed. The latter is the default mode and is concerned with “self-awareness and reflection, recalling personal memories, imagining the future, feeling emotions about the psychological impact of social situations on other people, and constructing moral judgments.” As they continue:

Studies examining individual differences in the brain’s DM connectivity, essentially measures of how coherently the areas of the network coordinate during rest and decouple during outward attention, find that people with stronger DM connectivity at rest score higher on measures of cognitive abilities like divergent thinking, reading comprehension, and memory (Li et al., 2009; Song et al., 2009; van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009; Wig et al., 2008). Taken together, these findings lead to a new neuroscientific conception of the brain’s functioning “at rest,” namely, that neural processing during lapses in outward attention may be related to self and social processing and to thought that transcends concrete, semantic representations and that the brain’s efficient monitoring and control of task-directed and non-task-directed states (or of outwardly and inwardly directed attention) may underlie important dimensions of psychological functioning. These findings also suggest the possibility that inadequate opportunity for children to play and for adolescents to quietly reflect and to daydream may have negative consequences—both for social-emotional well-being and for their ability to attend well to tasks.

As I’ll discuss in a moment, the article goes on to make some interesting claims and recommendations about social media, but let’s just deal with this. Let’s call it unsurprising to discover that the brain is doing different things when one is looking outward and focused on a specific task than when one is daydreaming, speculating, fantasizing, remembering or otherwise being introspective. How “real” those two networks are versus their being products of our perspective on our brains I cannot say. Certainly these are notions that reflect our mundane experience with thinking. I am certainly not going to argue against the wisdom of having down time, taking opportunities for reflection, or developing a meditative practice for children, teens, or adults. I also don’t need a multimillion dollar machine-that-goes-bing to know that.

Here is what might be interesting, though, as one investigates the ontological dimensions of a rhetoric not restricted to symbolic behavior. Without falling into neuroessentialism, it is not radical, I think, to imagine that rhetorical strategies, such as audience awareness, develop from the way we are able to think and conceive of others, a task attributed here to the “default mode.” It is only speculation, as far as I am concerned, but the capacity to conceive of a self is dependent on the capacity to conceive of a non-self. Following upon that, the ability to imagine that others have similar capacities, that there are other “selves” out there, develops when? Prior to symbolic behavior? In concert with symbolic behavior? Following symbolic behavior? Who knows? I do, however, think that such neurorhetorical work opens a space for the investigation of a naturalcultural, material, nonsymbolic rhetoric.

That said, it certainly does not resolve such matters. And the discussion of social media in this article is an excellent example of this. To be fair, they conclude that “In the end, the question will not be as much about what the technology does to people as it will be about how best to use the technology in a responsible, beneficial way that promotes rather than hinders social development.” Thanks for that insight. Indeed they do admonish us that “the preliminary findings described here should not be taken as de facto evidence that access to technology is necessarily bad for development or weakens morality.” Of course the only reason that such caveats must appear in the article is that much of what they discuss suggests exactly the opposite of these backpedaling sentences: that “if youths are habitually pulled into the outside world by distracting media snippets, or if their primary mode of socially interacting is via brief, digitally transmitted communications, they may be systematically undermining opportunities to reflect on the moral, social, emotional, and longer term implications of social situations and personal values.” How do they get to this implication? Basically by arguing that effective use of the default mode is necessary for moral behavior and that social media interferes with entering this default mode through its continual demand for attention.

I’ll just toss out a different hypothesis for you, one that doesn’t have to fall into the trap of claiming that technology makes us more or less moral, stupid, etc., or retreat to some version of the “guns don’t kill people; people kill people” commonplace. Here’s my premise: we don’t know how to live in a digitally-mediated, networked world. It’s a struggle. We are trying, unsuccessfully, to import paradigms from an industrial, print culture about what life should be like (and to be fair those are the only paradigms we have to work with). Addressing this struggle is not simply about some rational process of using technology in a beneficial way. It’s a more recursive and mutative process where the notion of benefit shifts as well. It’s unlikely that we will evolve out of our need for “down time” in the near future or develop some scifi wetware implants to do the job for us. So we will need to understand the ontological basis for rhetorical action, in the brain and elsewhere. But we also need to recognize that what constitutes “moral” behavior is a moving target. What are our moral obligations to our Facebook friends or Twitter followers? How do they intersect with and alter our responsibilities to family or neighbors or other citizens? These are all concepts that we learned through rhetorical activity, concepts that shift with rhetorical activity. And though the authors of this article are careful to hedge their claims, it is also clear that they want to raise some concern about social media that rests upon a certain faith about how we should behave, a faith that they seek to confirm through science.

In the end, I am interested in their argument and largely inclined to share their sense of the value of down time and reflection. I worry about the time my son spends staring at his iPhone. Not because I think it’s making him a bad person; it just seems like a diminished life experience to me. I’m also interested in this idea of the default mode. However, I am inclined to be a little wary about these claims regarding social media. I am sure these technologies are shaping our cognitive experience, and I am sure that we struggle with these digital shifts, both individually and collectively. But I’d like to avoid falling into these rhetorical commonplaces about emerging media and morality or stupidity.


it hurts when I think

6 July, 2014 - 07:43

Perhaps you have seen this recent Science article (the paywalled article itself or a Guardian piece on it). If you haven’t, this is a psychological study where participants are left alone with their thoughts for 6-15 minutes and then asked questions about the experience. The conclusion? Generally people do not enjoy being alone with their thoughts. The article got attention, though, because the researchers gave participants the option of shocking themselves, and a good number of them, especially men, chose to do so. As Wilson et al. note, “what is striking is that simply being alone with their own thoughts for 15 min was apparently so aversive that it drove many participants to self-administer an electric shock that they had earlier said they would pay to avoid.”

I will not pretend expertise, but having engaged in zazen meditation over the years, it doesn’t really surprise me that people don’t enjoy being alone with their thoughts. In this kind of meditation the objective is not to not think, which isn’t really possible, but rather to not hold onto thoughts. In my experience (and I imagine yours), the unpleasantness of thinking comes from holding on to thoughts (or perhaps their holding on to you). As I understand it, this kind of mindfulness training is fundamentally about letting go. The researchers arrived at a similar conclusion, writing:

There is no doubt that people are sometimes absorbed by interesting ideas, exciting fantasies, and pleasant daydreams. Research has shown that minds are difficult to control, however, and it may be particularly hard to steer our thoughts in pleasant directions and keep them there. This may be why many people seek to gain better control of their thoughts with meditation and other techniques, with clear benefits. Without such training, people prefer doing to thinking, even if what they are doing is so unpleasant that they would normally pay to avoid it. The untutored mind does not like to be alone with itself.

One might argue that the mind is never “alone with itself.” There’s only more or less stimulation. In this study, the participant is sitting in a chair, for instance. One might mention air or gravity, but language is the key outsider from my perspective. My inclination would not be to characterize the participants’ minds as untrained or untutored but, to the contrary, as specifically trained to “prefer doing to thinking,” where “thinking” is narrowly defined as mental activity that is detached from any apparent stimulation/sensation or a particular immediate objective.

In the disciplinary terms of cognitive science and psychology, what we are talking about here is the brain’s “default network,” which is sometimes described as the brain idling or as mind-wandering but is also suggested as the means by which the brain considers the past and future or imagines other people’s mental states. It is, perhaps, our internal self-reflection: the internal mental state that we imagine others similarly have. And really what this study is suggesting is that this internal world is generally not all that pleasant. Perhaps it’s a good thing that navel-gazing doesn’t feel that good. Even though we value self-reflection and mindfulness, we wouldn’t want to find ourselves drawn inward as toward a delicious treat.

An article like this attracts attention in part because of the details of participants shocking themselves but also because of our increasing moralizing over media and attention. It feeds our supposition that we have become so dependent on media stimulation that we are losing ourselves. Actually, I don’t think the article is making any kind of cultural-historical argument. There are some cultural assumptions here, specifically that those who are tutored to be alone with their thoughts would get different results, but there isn’t a value judgment suggesting that there is something wrong with not enjoying this experience. We just bring that morality to the findings.

Whether we are talking about deep breathing exercises, some more developed meditative practice, language, or an iPhone, these are all technologies. Even when we are in that default mental mode, we are still in a hybridized, nature/culture, technological, distributed network of thinking. The condition of being “alone” is relative, not absolute.

 


when the future isn’t like the past

26 June, 2014 - 09:14

A group of scholars responds to MLA’s proposal regarding doctoral education in Inside Higher Ed; another group proposes to replace MLA’s executive director with a triumvirate who will focus on the problems of adjunctification; on Huffington Post, a university president writes in defense of a liberal arts education: these are all different slices of a larger issue. On this blog, there are a few recurring topics:

  • emerging digital media and their aesthetic, rhetorical, and cultural effects;
  • teaching first-year composition;
  • practices in scholarly communication;
  • technologies and higher education teaching;
  • the digital humanities and its impact on the humanities at large;
  • the academic job market, including the issue of part-time labor;
  • doctoral education in English Studies;
  • undergraduate curriculum, including both general education and English majors.

There’s also a fair amount of “theory” talk, though, at least in my mind, it’s always about developing conceptual tools for investigating one or more of these topics. So perhaps it is not surprising that from my perspective these things are all part of a common situation, not one that is caused by technological change in some deterministic way, but one in which the development of digital media and information technologies has played a significant role. And obviously it’s not just about technology, but when we remark on the changing nature of work in the global economy, the resulting growing demand for postsecondary education, the shift in government support and public perception of higher education, and the impacts of these on academia, it’s clear that technological change has played its role there as well. In other words, the challenges we face today were not necessary and the future has not already been written, but there was and is no chance that the future will be like the past.

And this is where I see the biggest contradictions in our efforts to address these problems, contradictions which are rehearsed again in the pieces referenced above. Who can doubt that the way we approach doctoral education, university hiring practices in relation to adjuncts, and our valuing of a liberal arts curriculum are all tied together? The obvious answer is for there to be greater public investment in higher education. Maybe states should think about incarcerating fewer citizens and educating more of them. Maybe the federal government doesn’t need more aircraft carriers than the rest of the world combined. Maybe we need to close some corporate tax loopholes. Maybe.  Maybe. But even if there were more money flowing into the system, would that mean that things would stay as they are/were?

In his Huffington Post piece, Michael Roth, president of Wesleyan, points to a tradition in American higher education dating back to Franklin and Jefferson that emphasizes the value of a liberal education for lifelong learning over specific vocational training, as he concludes:

Since the founding of this country, education has been closely tied to individual freedom, and to the ability to think for oneself and to contribute to society by unleashing one’s creative potential. The pace of change has never been faster, and the ability to shape change and seek opportunity has never been more valuable than it is today. If we want to push back against inequality and enhance the vitality of our culture and economy, we need to support greater access to a broad, pragmatic liberal education.

OK, but what should that “broad, pragmatic liberal education” look like? Does this ability to “shape change and seek opportunity” also apply to higher education itself? The “10 Humanities Scholars” writing in response to MLA’s proposal object to the suggestion that graduate education should be different and instead contend, “As long as departments continue to be structured by literary-historical fields and tenure continues to be tied to monographs, a non-traditional dissertation seems likely to do a great disservice to students on the job market and the tenure track.” That’s my emphasis. In short, as long as things remain the same, they should remain the same. (I should note, btw, that with possibly a few exceptions at elite private liberal arts colleges, tenure is only tied to monographs at research universities, which make up less than 10% of American universities. So that claim is not true and has never been true.) But that’s just a side note.

Here’s the point. We want students to receive a liberal arts education in that most medieval of senses: the skills and knowledge needed to succeed as a free individual. And we want to deliver that education without exploitative employment practices. But these movements also want to hold on to the curricular and disciplinary structures of the 20th century. And in the end, the latter are valued over the former. And while the MLA report is obviously focused on MLA fields, this issue extends beyond those departments.

If the solution to our challenges includes changing the curricular and disciplinary paradigms of the arts and humanities, are we still committed to finding that solution? Or are we more inclined to stay on this ride until it ends?

What is this future like? One where literary-historical fields are a minor part of the humanities, where the focus turns to digital media and the contemporary global context, where the curriculum focuses on the soft skills of communicating, collaborating, and researching rather than traditional content, where faculty research efforts, including the genres of scholarly communication, reflect this shift in emphasis, where the elimination of adjunct positions changes both the curriculum offered and the technological means of its delivery, where the focus on graduate programs that train future professors is greatly diminished. In short, what if the solution to our problems is to create a future where the job of the humanities professor looks nothing like what it is today?

I’m not saying it has to be that way. My point is only that our conversations about finding solutions to these problems always seem predicated on returning to some imaginary historical moment rather than really trying to shape a future. Didn’t we all receive that “pragmatic liberal education” of which Roth speaks? If we can’t use it to find such solutions, then maybe it isn’t worth saving in the first place.

 

 


language, programming, and procedure

23 June, 2014 - 10:52

Following on my last post, by coincidence I picked up a copy of Max Barry’s Lexicon, which is in the sci-fi supernatural genre, light reading but well-reviewed. Its basic premise is that language triggers neurochemical responses in the brain and that there are underlying operating languages that can compel and program humans. The result is something that is part spellcraft, part cognitive science and sociolinguistics, and part rhetoric, with the identification of different audiences who respond to different forms of persuasion. In this aspect it reminds me somewhat of Stephenson’s Snow Crash or even Reed’s Mumbo Jumbo, in a far more literary vein.

Conceptually what’s most interesting about Lexicon for me is the role of big data and surveillance. Compelling people requires identifying their psychographic segmentation, which is a practice in marketing research; think of it as demographics on steroids. This is the information produced from tracking your “likes” on Facebook, text mining in your Gmail and Google searches, data collected from your shopper card. Perhaps you remember the story from a few years ago about Target identifying a shopper as pregnant. Maybe this happened, maybe not. But that’s the kind of thing we are talking about.

Where does this get us?

  1. The better you know your audience, the more likely you will be able to persuade them. I don’t think anyone would disagree with this.
  2. Through big data collection and analysis, one can gain a better understanding of audiences not just in broad demographic terms but in surprisingly narrow segments. How narrow, I’m not exactly sure.
  3. The result is the Deleuzian control society version of propaganda where we are not disciplined to occupy macrosocial categories but are modulated across a spectrum of desires.

Certainly there are legitimate, real-world concerns underlying Lexicon, as one would hope to find in any decent scifi novel. It’s also a paranoid, dystopian fantasy that gets even more fantastical when one gets down to the plot itself (but no spoilers here). I suppose my reaction in part is to say that I don’t think we are smart, competent, or organized enough to make this dystopia real. But for me the more interesting question is: are we really this way? To what extent are we programmable by language or other means? This is where one might return to thinking about procedural rhetoric.

I suppose the short answer is that we are very programmable and that our plasticity is one of our primary evolutionary advantages, starting with the ability to learn a language as an infant. One might say that our openness to programming is what allows us to form social bonds, have thoughts and desires, cooperate with others for mutual benefit, and so on. If we think about it in Deleuzian terms, the paranoid fear of programming (tinfoil hat, etc.) is a suicidal-fascist desire for absolute purity, but ultimately there’s no there there, just nothingness. If we view thought, action, desire, identity and so on as the products of relation, of assemblage, then “we” do not exist without the interconnection of programming.

Of course it’s one thing to say that we emerge from relations with others. It’s another to investigate deliberate strategies to sway or control one’s thinking by some corporation or government. It’s Latour’s sleeping policeman (or speed bump as we call it) or the layout of the supermarket. Imagine the virtual supermarket that is customized for your tastes. You don’t need to imagine it, of course, because that’s what Amazon is. Not all of these things are evil. Generally speaking I think we imagine speed bumps are a good way to stop people from speeding in front of an elementary school, more effective than a speed limit sign alone. There is an argument for the benefit of recommendation engines. We require the help of technologies to organize our relations with the world. This has been true at least since the invention of writing. Maybe we’d prefer more privacy around that; actually there’s no maybe about it. It’s one thing to have some technological assistance to find things that interest us, it’s another to have some third party use that information for their own purposes.

I also wonder to what extent we are permanently and unavoidably susceptible to such forms of persuasion. Clearly the idea of most advertising and other persuasive genres is not to convince you on a conscious level but to shape your worldview of possibilities, not to send you racing to McDonald’s right away but for McDonald’s to figure prominently in your mind the next time you ask yourself “what should I have for lunch?” And even then, when fast food enters our minds as a possibility, we might consciously recognize that the idea is spurred by a commercial, but do we really care? Do we really care where our ideas come from? Are our stories about our thoughts and actions ever anything more than post-hoc rationalizations?

Returning to my discussion of Bogost, Davis, and DeLanda in the last post, I think there is something useful in exploring symbolic action as a mechanism/procedure. As a book like Lexicon imagines, we’ve been programming each other as long as there has been history, perhaps longer. Maybe we are getting “better” at it, more fine-tuned. Maybe it’s a dangerous knowledge that we shouldn’t have, though we’ve been using ideas to propel one group of humans to slaughter, enslave, and oppress another group of humans for millennia. That’s nothing new. If anything though, for me it points to the importance of a multidisciplinary understanding of how information, media, technologies, thoughts, and actions intertwine as the contemporary rhetorical condition of humans.

 


alien languages and rhetorical procedures

20 June, 2014 - 09:05


Ian Bogost writes about Star Trek: The Next Generation and the unique language of the Tamarians, an alien race encountered in one episode. Picard and the crew eventually figure out how to speak with the Tamarians by interpreting their language as a series of metaphors. Bogost, however, suggests that metaphor is the wrong concept,

Calling Tamarian language “metaphor” preserves our familiar denotative speech methods and sets the more curious Tamarian moves off against them. But if we take the show’s science fictional aspirations seriously and to their logical conclusion, then the Children of Tama possess no method of denotative communication whatsoever. Their language simply prevents them from distinguishing between an object or event and what we would call its figurative representation.

Bogost then proceeds to put the Tamarian language in the context of computers where, from our perspective, when we look at the computer we perceive descriptions, appearance, or narrative, but what is actually happening are logics and procedures. Picard may think the Tamarians are speaking in metaphors, but they are in fact speaking in procedural logic. There is some insight there for us, Bogost observes,

 To represent the world as systems of interdependent logics we need not elevate those logics to the level of myth, nor focus on the logics of our myths. Instead, we would have to meditate on the logics in everything, to see the world as one built of weird, rusty machines whose gears squeal as they grind against one another, rather than as stories into which we might write ourselves as possible characters.

It’s an understandable mistake, but one that rings louder when heard from the vantage point of the 24th century. For even then, stories and images take center stage, and logics and processes wait in the wings as curiosities, accessories. Perhaps one day we will learn this lesson of the Tamarians: that understanding how the world works is a more promising approach to intervention within it than mere description or depiction. Until then, well: Shaka, when the walls fell.

Perhaps not surprisingly, this episode has received some treatment in rhetorical theory. Both Steven Mailloux and Diane Davis (paywall) have written about it as an opportunity to investigate the challenges of communication with otherness. As Davis points out, the episode ends without any real understanding being achieved between the Enterprise crew and the Tamarians. They do not establish diplomatic relations. The best they can achieve is peace without understanding, which, as Davis argues, “suggests that understanding is not a prerequisite for peace, that a radically hospitable opening to alterity precedes cogitation and volition.” From this she concludes:

the challenge is to compare without completely effacing the incomparableness of the “we” that is exposed in the simple fact of the address; that is, the challenge is to refuse to reduce the saying to the said, to keep hermeneutic interpretation from absorbing the strictly rhetorical gesture of the approach, which interrupts the movement of appropriation and busts any illusion of having understood.

In this moment, Bogost and Davis appear like Picard and the Tamarians: two non-communicating entities. However, they both recognize the partial-at-best success of Picard’s ability to communicate here and the limits of the hermeneutic gesture. Davis points to a rhetorical gesture that precedes communication. I wonder if that gesture might be procedural, or, to put it in more Deleuzian terms, the operation of an assemblage.

Let’s see where that takes us by bringing in two other sci-fi stories.

  1. The ST:TNG “Darmok” episode is often compared to the original Star Trek episode “Arena,” where some omnipotent space race called the Metrons forces Kirk to fight an alien captain from a reptilian race called the Gorn. In the end, Kirk manages to create a makeshift weapon (anticipating every episode of MacGyver) and defeats his enemy. However, he chooses not to kill the Gorn, and he is rewarded for this decision by the Metrons. It has many of the classic tropes of the original series: Olympian-styled super aliens, violent bestial aliens, and scrappy, can-do American know-how with the perfect mix of brains and brawn, judgment and courage, etc., etc. One way of comparing the episodes is in terms of the shift from Golden Age to New Wave sci fi, where in the former the heroes are cowboy engineers and in the latter they are anthropologists. In “Arena” there is no hope for communication and apparently no attempt.
  2. Stepping out of the Star Trek universe, China Miéville’s Embassytown focuses on an alien race called the Ariekei. They are two-headed creatures, and the only way humans can communicate with them is through genetically-engineered twins called Ambassadors who can speak with two mouths and one mind. Like the Tamarians, the Ariekei appear to speak through metaphorical concepts, but more importantly they cannot create fictions or lies. As such, humans are called upon to stage various actions in order to create concepts for communication. There is a Derridean pharmacological aspect as well, as the Ariekei find themselves intoxicated by a new Ambassador’s speech. And then, when they figure out how to lie… Following Bogost, we might also call the Ariekei’s language procedural. I see the pharmakon as fitting into a procedural understanding of rhetoric and communication: language is a machine.

It’s tempting to see language, or more generally symbolic behavior, as the proto-machine of modern humans. Today, when we look at technologies, they all are preceded by language, by descriptions, images, narratives, and metaphors. When we think about remediation or just McLuhan’s contention that all media take prior media as their content, that’s what we see. The origins of symbolic behavior are as murky as efforts to define it in the first place, but I think we acknowledge that there are technologies prior to language. Technologies always bridge the modern nature-culture divide, responding to physical laws but also shaped by cultural processes. Language is certainly that way, partly in our nature in evolutionary developments of the mouth and brain but also cultural. From Bogost’s view, as well as Deleuze’s (though the two are quite different in other ways), language is machinic because being is machinic. The machine precedes language. For Davis, rhetoric also precedes language and communication as this opening of a relation to Otherness.

Might we say that rhetoric is also a machine? I don’t think Davis would agree to that, but this is precisely Bogost’s point when he discusses procedural rhetoric. Persuasive Games, where Bogost introduces us to procedural rhetoric, focuses on the contemporary scene of videogames, especially games with a social-political agenda. However, if we say that procedural rhetoric is not only a way to understand how software persuades but more broadly a way of seeing rhetoric as a machine, prior to symbolic behavior, then we move toward a different understanding of these science fiction situations.

Human and alien assemblages grind their gears into one another. (Mis)understanding is one output. Violence, heat, entropy are others. Dis/order is produced as assemblages mutate. One inclination is to say there are no aliens here, just stories written in English. Let’s interpret them with our various hermeneutic methods. But there are aliens here, albeit not extra-terrestrial ones, just nonhumans. What happens if we take Bogost’s advice and not see the “Darmok” episode as description, image, and narrative but rather as a process?


speculative politics, academic life and the “legacy” of postmodernism

16 June, 2014 - 13:12

Alex Galloway wrote an interesting post a couple weeks ago that sparked a long conversation (100+ comments), including a more recent post by David Golumbia that makes reference to a post I wrote two years ago. In a nutshell this is a conversation about the politics surrounding speculative realism, object-oriented ontology, and such. It mostly focuses on Graham Harman, less so on other OOO-related folks like Bryant and Bogost, and extends to Latour, DeLanda, and others. The questions of “what is?” (ontology) and “what should be?” (politics) are clearly interrelated. I don’t think anyone believes that some version of Stalinist science is a good idea (where the search for understanding is censored up front by a political agenda). On the other hand, no one in this conversation believes that any search for understanding by humans is not shaped by ideology, politics, culture, and so on.

I agree with Galloway when he writes, “The political means *justice* first and foremost, not liberation. Justice and liberation may, of course, coincide during certain socio-historical situations, but politics does not and should not mean liberation exclusively. Political theory is full of examples where people must in fact *curb* their own liberty for the sake of justice.” As far as I can tell, justice isn’t built into the structure of the world. It’s not gravity. Justice is a claim about how the world should be. As Galloway points out, there are plenty of political theories that instruct people on what they should do. Of course there’s also a lot of disagreement over what justice is, as well as how it can be achieved. Much of it is tied to theories of ontology (e.g., do you believe the Genesis story accurately describes how the universe was formed?). If I understand Galloway’s criticism, it is that OOO separates politics from ontology and fails to see how its ontology is informed by politics. He then goes on to demonstrate that the politics that informs OOO is capitalism. Maybe. Ultimately the proof is in the putting, and for me that means not only saying but doing.

From my perspective this conversation focuses on academic life. Galloway’s post takes up Harman’s references to the political situation in Egypt. He also talks about the Occupy movement, Wikileaks and so on. But this is an academic argument happening between academics. We can say that academic life and work is political in the way that all human life and work is political. Write an article, teach a class, attend a committee meeting: all are political acts. But they are not political in the sense of Occupy or Wikileaks. If they are efforts to make the lived experiences of other humans more just then they are quite circuitous in their tactics. Certainly there are some activist academics who are more explicitly political in their research. There are some who are active with unions or with faculty oversight of institutions. But such things do not characterize academic life in general. Let’s say there are two monographs on Moby Dick. One invokes Zizek as a primary theoretical inspiration. The other one invokes Harman. From Galloway’s perspective the former is preferable on political grounds, but I am having a hard time seeing either as doing much for justice.

To put my own research on digital media technologies, higher education, rhetoric, and teaching composition in similar terms, I suppose I would say DeLanda and Latour are my primary inspirations. Put simply, my work examines the premise in my discipline that symbolic behavior is a uniquely human trait. In my view it is a premise that tends to obscure the way that symbolic behavior (and the broader realm of thought and action) relies upon a broader network of actors. In particular, I see our continuing struggles over what to do with digital media as stemming from this premise. Is it sufficiently political? I’m not sure. Who makes that determination? Does “being political” by humanistic academic standards require choosing an argument from among a set of prescribed acceptable positions? I would hope not. Does it require offering some prescription, some strategy or tactic, for increasing justice in the world? Maybe. I would like to think that my work strives to make life better. That is, if I offer to you my very best understanding of how digital rhetoric and composition works and what it might mean for teaching and higher education in general, I think that I am trying to make life better. Does it make the world a more just place? How is one even supposed to measure that? If a butterfly flaps its wings…

Meanwhile, David Golumbia, in responding to Galloway, takes issue with a phrase in that earlier post of mine, where I say that “there is potentially less relativism in a flat ontology than there is in our legacy postmodern views.” The word “potentially” there has to do with point of view. In my view, it is almost tautological to say that a flat ontology has less relativism. This is, in some respect, Galloway’s complaint: that a flat ontology does not pursue a “superimposition of a new asymmetry.” But that’s not Golumbia’s concern. His concern is with the phrase “legacy postmodern views.” As near as I can figure, though, he is not asserting that there is no such thing as “legacy postmodern views,” but rather that there shouldn’t be. As he writes:

the major lights of theory have been presented by many of us to students as a bloc, as doctrine, or even as dogma: as a way of thinking or even “legacy view” that we professors of today mean to “educate” our students about. But we should not and cannot be “educating” or “indoctrinating” our students “into” theory. To the contrary: because that work is a diverse set of responses to several bodies of work, more and less traditional and/or orthodox, it can only be understood well when embedded in that tradition.

I don’t have a problem with his argument that theory should be taught a different way. In the end he makes a fairly disciplinary-conservative argument that students need to read the philosophical tradition. He complains that SR plays into this with its “sweeping dismissal” of prior philosophy and argues that their object orientation isn’t all that new anyway. He blames technology for short attention spans, a devaluing of proper education, and an unwillingness to give due consideration to the philosophical tradition. Keep in mind that these are professors complaining that other professors don’t take education seriously and don’t read enough. Actually, though, these are familiar rhetorical moves. What could be more familiar than saying persons A, B, and C have misread or failed to read persons X, Y, and Z?

I do want to respond briefly to where Golumbia remarks, “That phrase ‘legacy postmodern views’ really strikes me wrong, and rings in harmony with the ‘leftist faculty cabal’ mentioned by Galloway. Among other things, both phrases sound much like the major buzzwords used by the political right to attack all of theory during its heyday in the 1980s and 1990s.” I think he means to suggest that I am taking up some right-wing attack on theory. And I’m not sure why, as we seem to agree that “legacy postmodern views” exist and are taught, even though neither of us believes such things are worthwhile.

If I decide, for example, to focus on Latour and DeLanda rather than Badiou and Zizek, and some other digital rhetorician decides the opposite then… I’ve got nothing. I mean, I’m not sure what the stakes are. We write two different kinds of articles and books. Maybe our classes are a little different but not that different. Is one of us making the world a more just place than the other? According to whom? Either way, we’re both stuck on this treadmill of writing articles and monographs for tenure and promotion. How is it that I am evil and the other scholar some avatar of justice? When there’s maybe a couple thousand people on the planet at best who could tell the difference between us and less than 100 who would bother to. That’s the stuff that I don’t get.


what happens when I don’t disagree with you?

6 June, 2014 - 12:43

We are all familiar with the echo chamber that can be the interwebs: pick your own news source, pick your friends, and mute the rest. Despite this familiar complaint, we are all equally familiar with Twitter wars, trolling comments, cyberbullying, and all varieties of textual assault. These things have their academic varieties as well. We hear about the digital humanities and its “niceness” (as well as complaints that niceness is created by erasing difference within the echo chamber). And we can also witness the latest flame war over Rebecca Schuman’s Slate article on Zizek.

How does this work in more formal academic discourses? I don’t know about your graduate training, but my coursework was essentially an exercise in critical-rhetorical knife work. Class time was about critiquing this and critiquing that. The graduate student listserv was mostly theory wars. Writing a dissertation was an extension of that, where the first task was really to find or make a hole in the current research. Every argument can be deconstructed. Every viewpoint has a blindspot. Arguments about capitalism give little attention to patriarchy that don’t account for hegemony that cover over the slipperiness of language games etc., etc.

We do the same thing in teaching academic discourse to first-year students. What’s your thesis? What are you trying to argue? A thesis can’t be a statement of fact. It has to be something with which your audience might disagree. I’ve taught this myself. You can tell your students to take their theses and say the opposite thing. If the opposite statement isn’t one that you can imagine people believing, then your original statement isn’t really a thesis. In other words, your audience are people who hold a view different from yours. That said, their views can’t be too different from yours. Clearly they are people who care about the same issue as you, who would be willing to discuss it in the same terms as you, and who are open to the possibility of being persuaded by the kind of argument and evidence that you will provide. In other words, they are the kind of people who are part of a fairly limited discourse community: your discourse community.

In a way, it’s all a performance and not just for the author but the readers as well. Being an academic reader requires one’s willingness to adopt a very specific position. It’s almost like participating in a child’s magic trick. It must be carefully constructed. And here I suppose I should invoke Latour as a kind of ward against our inclination to take that to mean that the scholarship doesn’t have value. That isn’t what I mean at all. All knowledge of any value must be carefully constructed. We all know that critique is interminable and that any text can be critiqued ad infinitum. Going there breaks the performance. But one equally breaks the performance by simply agreeing to what the author says. As a reader, you must disagree (or at least express skepticism and doubt) with the thesis but only within the scope of the discourse community. You must play by the rules and accept the genre of evidence and argument that is provided. That doesn’t mean that you need to agree in the end of course but only play by the established rules for disagreement.

In short, you must begin with skepticism and allow yourself to be open to persuasion.

It’s an interesting experience to try reading these works from a different position. I’m not talking about major philosophical works, where you’re mostly just trying to figure out what’s being said in the first place. I’m referring to the typical humanistic article or monograph. There’s a clumsiness that results, like a couple dancing together but to different songs.  Is this a criticism? No it’s not, not really. Every genre has conventions that establish roles for authors and readers. I will admit that I sometimes get tired of playing the same role though, as if it were the only way to read, as if serious academic thought required one to adopt this readerly role. Where I end up is with a “what if” game. What if persuasion and argument were not the primary rhetorical functions of academic writing? We could play the believing game, but that’s just the flipside of the same coin. It’s difficult for us, especially us rhetoricians who are inclined to assert that “everything is an argument.” It’s difficult because we really do believe in the value of an agonistic approach to testing and strengthening knowledge.

I suppose I wonder if it is possible to play more than one game.


role your own (post)disciplinary future

4 June, 2014 - 14:51

That’s a pun, not a misspelling. The question is, what role do you see for yourself as an academic in 2025? Why 2025? Partly because we like numbers that end in 0 or 5, and partly because by then our entering doctoral students, with their 7-10 year journey toward a PhD ahead of them, should have had a few good whacks at the job market. In other words, we should be thinking about 2025 or thereabouts as we think through the reform of doctoral programs, especially since any reform will take a few years, at best, to take hold. Besides, it’s easy, too easy really, to criticize the MLA. It’s a lot harder to find alternatives, other than shutting down or dramatically shrinking the current enterprise.

About a decade ago, Ann Green and I co-wrote a chapter in a collection called Culture Shock and the Practice of Profession: Training the Next Wave in Rhetoric and Composition about our time as doctoral students in the experimental PhD program at SUNY Albany in the mid-nineties. Berlin wrote about our program in Rhetorics, Poetics, and Cultures, and Steve North later wrote Refiguring the PhD in English Studies, which was largely about our program in “Writing, Teaching, and Criticism.” While the program produced some fine graduates (ahem), it imploded because, in my view, it demanded inter- or intra-disciplinary (depending on how you think of the various parts of an English department) collaboration on the part of faculty who were simply not capable of pulling it off. In short, there was too much personal and professional antagonism in the department for it to work. I’m not sure if the department was unusual in its antagonism. I just think that in most English departments working together is unnecessary. Ten years ago, when we wrote that article, the main point as I recall was that if you want to “reform” a discipline, you shouldn’t really make graduate students pay the price for that reform. That is, as long as the available jobs are traditionally defined and expect traditional training, then that’s what doctoral programs should provide.

So you need to begin by reforming the job market. That is, let’s hire different people. If we want the kinds of scholars that the MLA reformers describe, then let’s hire them, and while we’re at it, let’s change the way we tenure them too. And if we are going to do that, then we are going to change the kinds of courses we ask them to teach, which means reforming undergraduate curricula. Those things probably go hand-in-hand. Propose a new curriculum in your department and create a hiring plan to deliver it. The MLA report suggests that doctoral programs should encourage a diversity of outcomes for jobs beyond the academy, but maybe we need that diversity within departments as well. In my mind all of these things are part of a single puzzle:

  • reform and diversify the major to attract more students and expand the idea of what expertise in “English” might mean in terms of professions.
  • more students is the best argument we have toward sustaining and maybe building the job market
  • a strengthened undergraduate major will increase the value and viability of the MA
  • those things together create better conditions for doctoral programs which will need to be reformed in terms of content to meet the needs of this new disciplinary paradigm and might also be reformed in terms of some of the pragmatic concerns raised by MLA (time to degree, technology training, etc.)

If you think about it that way, the question is: where do we get the students from? Think about this. In the last decade, according to NCES, the total number of 4-year English grads has remained fixed (around 52K a year nationally), while the number of Communication majors has grown nearly 20% and Psychology majors has grown more than 30%, which is to say that psychology has roughly kept pace with the overall increase in the number of graduates. My point is that a four-year psychology or communication degree, while more professional sounding maybe, doesn’t exactly provide a qualification for a specific career. English used to be a place where students could learn valuable communication skills, get to know something about different cultures around the world and through time, and get some insight into how people tick. We used to think writing literary interpretive essays would give you the first. And we thought reading literature would give you the other two. But we don’t see it that way anymore. So instead students go to communications and psychology for the same thing.

In theory, by curtailing our disciplinary focus on literariness, we might be able to shift the perception once more. If we can move toward the digital then we can regain our claim to teaching communication skills for the average, entry-level professional career. If we are willing to expand greatly the media we study, I think we can still offer some of the most interesting content in terms of aesthetic experience and insight into other cultures/times. And I still think that rhetoric offers us an excellent pragmatic approach to understanding how people tick. I’m not saying that we can do what psychology, communications, or business do, or that we would want to. I’m just saying we could offer a comparable curriculum that might bring back some of those majors. In 90-91, when I got my English BA, English and Communications each represented 4.6% of the total grads. Today, Communications still represents 4.6%, while English is at 2.9%. (As it happens, at UB, communications is five times the size of English in terms of degrees conferred, so part of this is local as well.)

Here’s the thing. I’ve been a tenure-line professor for 15 years. I’ve taught first-year comp, technology for teachers, grammar for teachers, creative writing, technical writing, business writing, literary theory, intro to literature, digital writing, poetics, media theory, videogame studies, TA teaching practicum, speculative realism, digital humanities, and those are just the ones that come to mind. There is no book or article that I have taught more than 2 or 3 times in my academic career. I’ve gone from teaching HTML to Dreamweaver to Flash back to HTML/CSS, then to video and audio, then to social media, and likely in the future on to some other coding. So I don’t have a problem imagining that my English department contemporaries should also be called upon to learn, grow, and shift their areas of expertise. That’s what this is going to require.

Once we do that, this more capacious doctoral program that the MLA is proposing will more closely resemble what departments are actually doing.

 

Categories: Author Blogs

why five years for a PhD is both too short and too long

29 May, 2014 - 14:24

It seems much of the attention on the MLA report has gone toward the proposal to shorten the time to degree. Inside Higher Ed wrote about this and Steve Krause has a blog post on the issue. Here’s my question: what is the problem that we are trying to solve here? Here’s what the report says:

we consider 9.0 years unacceptable, in great part because of the social, economic, and personal costs associated with such a lengthy time to degree. Long periods of study delay full-fledged entry into the workforce, with associated financial sacrifices. For many there is increased indebtedness; for some, delayed family planning. For some students a long time to degree may not be especially disturbing if funding from their universities—through fellowships or research and teaching assistantships—is available. Here, however, there is also a cost at the level of the university itself. Just as colleges and universities are being urged to steward their resources and encourage undergraduates to complete their degrees in a timely fashion, so should they be urged to apply this policy at the graduate level.

None of that argument is surprising. However, one might say that five years is still too long to spend if “full-fledged entry into the workforce” remains unlikely. That is, if we are graduating 1000 PhDs for 600 jobs now and doctoral programs were successful in shortening time to degree, we’d likely see an increase in the number of graduates. As it is, doctoral students intentionally delay completing their degrees so that they can acquire additional credentials (e.g. published journal articles) so as to be competitive on the job market. It’s market expectations, as much as anything else, that drive time to degree. In that respect, five years isn’t enough time.

Let’s return to the question: what is the problem we are trying to solve? One of the most misguided arguments in the MLA report is this:

Doctoral education is not exclusively for the production of future tenure-track faculty members. Reducing cohort size is tantamount to reducing accessibility. The modern languages and literatures are vital to our culture, to the research university and to higher education, and to the qualified students who have the dedication for graduate work and who ought to be afforded the opportunity to pursue advanced study in their field of choice. Instead of contraction, we argue for a more capacious understanding of our fields and their benefits to society, including the range of career outcomes.

I appreciate the rhetorical move here. It’s quite savvy. I even think it is sincere. We cannot reduce the size of our programs because that would cheat our students out of access. Plus, this stuff is important, and we think that society should value us more highly. I’m sure you do, MLA. I’m sure you do. But we already make economic calculations about cohort size and access. Presumably there is a reason why PhD production and tenure-track job openings moved proportionally with one another, until the bottom fell out in 2008.

It’s fine, in theory, if we want to make the PhD into a degree that opens doors to a wider range of careers. If so, we have to convince those employers that the degree has some value. Clearly the first thing we’d have to do is radically reformulate the dissertation. The dissertation is the main thing that reveals the fraudulent claim about doctoral education not being connected with the tenure track. As a general rule, the dissertation as it is currently composed only makes sense in relation to the research activity of faculty.

But I don’t think that’s what this is really about. There’s a slippery slope in making doctoral programs smaller. What’s the right size? Fewer graduate students means less need for faculty, which means fewer jobs, which means fewer grad students, etc. At some point it evens out, but where? That’s a scary prospect for departments and the MLA. If you reduced your program by half, your graduate faculty would be teaching a grad course once every two years instead of once a year. Not a popular decision. And do you have anything else for them to teach given the declining undergrad major? Yikes. Then maybe the dean wants to revisit our hiring plan, eh? So maybe five years down the road, due to retirement and such, we end up 20-30% smaller in terms of faculty, our grad program is half the size, and we can’t deliver on the idealized version of the literary studies graduate curriculum because we just don’t have enough faculty in all the different areas to run dissertations in every field. What we’d be talking about is the end of English departments at research universities as we know them within the next decade. Don’t get me wrong. Those departments would still exist. They might even thrive on a new set of terms. They just wouldn’t look the way they do now.

Arguing for a larger range of career outcomes, though, essentially leads one in the same direction, unless one is being wholly disingenuous about it. What are these other careers going to be? Publishing? Higher ed administration? Public humanities (museums and such)? Technical or professional communication? K-12 teaching? Tell me how that dissertation in whatever literary period makes sense for such a career path. Maybe indirectly the research and writing experience could be helpful for some of these jobs. But if you’re going to spend 2+ years researching and writing, doesn’t it make sense to choose a topic that directly relates to the profession you want to enter? Isn’t that why students write a dissertation in a particular period, so that they can qualify for a faculty job in that period? Furthermore, wouldn’t one say the same thing about the coursework?

In the end, it seems to me that a both/and approach is needed here.

  • Transform the curriculum, which means faculty rethinking their graduate courses and mentoring for purposes other than reproducing the discipline;
  • Shrink doctoral programs but maybe grow MA programs by making them more valuable workplace credentials (which also requires transforming the curriculum);
  • Reduce time to degree by making the curriculum more pragmatic and fitting it better to the outcomes we are imagining for our students.

The humanities have insisted for decades that their curricula are not practical, that they are not meant to be career-oriented. We have even insisted on that for graduate education, which is barely practical preparation even for a faculty job. I’m not entirely sure, but that stance doesn’t seem to have much viability left in it.

Categories: Author Blogs

MLA, doctoral education, and the benefits of hindsight

28 May, 2014 - 12:29

The MLA has released a task force report on “Doctoral Study in Modern Language and Literature.” Primarily it recommends

  • engaging with technology
  • reducing time to degree
  • rethinking the dissertation (see the bullet point above)
  • emphasizing teaching
  • validating “diverse career outcomes” (my personal favorite)

Not coincidentally, my own department has decided that next year we will have extended conversations about our own doctoral program. So I suppose you could say the report is timely, and that its recommendations are interesting and provocative, even to those who might not agree with them. Personally, I think they would have been more interesting 15 years ago, when the Internet was still in its cultural infancy; then they would have been forward-thinking, prescient even. Ten years ago, when the web was everywhere and social media were starting to take off, they would have been smart and strategic. Today it’s more a case of 20-20 hindsight. If we had started down this path a decade ago, today we would have doctoral programs like the ones imagined in the report. Of course, ten years ago most doctoral programs didn’t have the faculty with the expertise or will to deliver such a curriculum. As far as that goes, I doubt many doctoral programs have such faculty today. The good news (not really) is that there isn’t really any point spending the next 3-5 years rebuilding a graduate program for students who will graduate in 2025 if the goal is to prepare them for the demands of teaching today.

The prospects for doctoral study in modern language and literature are far, far worse than that. I am not going to get out my tarot cards and try to predict the future, but I don’t think it’s a big stretch to imagine that continuing increases in network speed, processing speed, data storage, and mobility will transform literacy as radically over the next decade as they have done in the past decade. What continues to mystify me, and this report is no better, is that literary studies continues to demonstrate what I can only call a willful ignorance of the fact that it is forever tied to print culture. Will the humanities continue to study print culture as a historical phenomenon? Sure, I guess. But the difference is that 50 years ago, we were in a print culture and this study was connected to the literacy of everyday life. And now it’s not. And, as the report notes, we’ve gone from 1000 new assistant professor ads to 600 such ads a year (n.b. according to the Rhet Map project, 167 of those ads in 12-13 were in rhet/comp). And yes, it was precipitated by the recession, but what’s the explanation now? The explanation, as best as I can understand it, is that the entire institution of higher education is being forced to rethink itself and question all of its practices.

That’s a good explanation, but I’ll offer another one that focuses more on our own disciplinary actions. Here is what the report says about embracing technology:

Some doctoral students will benefit from in-depth technological training that builds their capacity to design and develop research software. Some will require familiarity with database structures or with digitization standards to facilitate the representation and critical editing of documents and cultural artifacts online. Still others will need to add statistical literacy to their portfolios. Still others will need to understand the opportunities and implications of methods like distant reading and text mining. Programs should therefore link technology training to student research questions, supporting this training as they would language learning or archival research and partnering where appropriate outside the department to match students with relevant mentors or practicum experiences. Because all doctoral students will need to learn to compose in multimodal online platforms, to evaluate new technologies independently, and to navigate and construct digital research archives, mastery of basic digital humanities tools and techniques should be a goal of the methodological training offered by every department.

This is not solely a matter of the application of new methods to research and writing. At stake is also increasingly sophisticated thinking about the use of technology in teaching. Future undergraduates will bring new technological expectations and levels of social media fluency to the classroom, and their teachers—today’s doctoral students—must be prepared to meet them with versatility and confidence. Students who understand the workings of analytic tools and the means of production of scholarly communication in the twenty-first century will be better able to engage technology critically and use it to its fullest scholarly and pedagogical potential.

Fine. I can agree with that, but I don’t think this will do us much good because I don’t think it will go far enough. In the end we will still be left with doctoral students who essentially want to do the same kind of work as their professors, and what I think we are seeing is that there will be very little future in that kind of work. This reads to me as the “DH will save us” kind of argument. As the report states elsewhere, “The traditional hermeneutics of the individual work is not endangered; rather, it is augmented by digital technologies. But the collaborative, interdisciplinary, and interprofessional aspects of much digital scholarship do suggest critical transitions ahead for literary fields.” This is still an argument for saying we need to augment our programs, i.e. make sure students can learn about technology by taking courses in other departments. Maybe that’s a pragmatic and necessary step. But that’s not what this is about.

It’s not the methods of literary study that are the problem. Unfortunately it’s literary study itself. The reality of the job market and the undergraduate English major is that we don’t need hundreds of new professors each year to study and teach literature, regardless of the method or degree of technological savvy they bring with them. Instead we hire these graduates as adjuncts into writing curricula that we have spent decades deprofessionalizing and devaluing precisely so that we could give those jobs to TAs and fill our graduate programs. And now that pyramid scheme has finally come to an end.

Arguably, someone in 2025 will have the disciplinary expertise to study and teach digital culture in the way literary scholars studied and taught print culture in the 20th century. Possibly those digital scholars would be able to build curricula and student interest that would sustain legitimate academic careers (maybe even tenure, if such a thing still exists then). But the reforms described here won’t produce such scholars. Well, they could, but they won’t in any systematic way.

Categories: Author Blogs

why teaching college composition is getting harder

22 May, 2014 - 11:15

I realize that’s a provocative claim. I wasn’t teaching in the 70s or 80s (or earlier), and I don’t mean to suggest that teaching composition was ever easy. I also don’t want to make the declension argument, the “kids today” argument about texting, video games, or whatever. This isn’t even a point-the-finger argument, as in blame the parents, blame their high school teachers, blame No Child Left Behind, blame capitalism, and so on, even though, when I look broadly at the common challenges we face at UB with our typical in-state students, it does seem reasonable to consider the role that their increasingly common high school curriculum plays (both helping and hurting us; after all, every cure is also a poison). But that’s not the subject of this post.

  1. A more diverse population: there’s a larger percentage of Americans attending college today than ever before. That means a greater cultural and ethnic diversity. It also means a wider range of preparation and history of academic success. There are also more international students. At UB our international student population continues to grow. It’s around 15% of our undergrad population, and most of those students are English language learners. Just in demographic terms, the composition classroom of students I see at UB is quite different from the classroom of students I joined when I was a freshman at Rutgers back in the 80s. There are many potential benefits to this richness of cultural differences. I don’t mean to present it as a problem, but it is a challenge for the instructor.
  2. The diversity of academic genres: I think it has long been the case that students wrote in different genres around the campus in their majors. However, it is only more recently that composition has become attentive to that fact and been self-reflective about the connection between the traditional, humanistic essay-writing it has taught and the composing activities for which it claimed to prepare students. Partly this is because of shifts in how students view their education. Again, when I was in Rutgers College (which was the arts and sciences college of the university), English and history were the most popular majors. Not so much anymore. As an instructor one cannot possibly be an expert in the many genres of the university. Nor can a single composition classroom provide the context for learning to compose in those genres, even if one were magically such an expert. Nevertheless, one is left facing this challenge of preparing students somehow.
  3. The rise of digital literacy: “digital native” talk aside, our students face real challenges in learning how to communicate academically and professionally via digital means. This includes everything from collaborating online and communicating via various networks to composing with image, sound, video, graphics, and so on. This is a new set of technical and rhetorical demands placed on instructors that did not exist a decade ago AND a new set of curricular expectations for a composition classroom. As it turned out, I began to learn my digital literacy skills doing my college job working for a family friend’s computer business, but it certainly wasn’t happening on campus. Though the typical composition instructor has experience as a successful academic writer in his/her field (usually English literary studies), it’s a real stretch to acquire digital literacy skills to a level of expertise where one can begin to teach them effectively. Obviously instructors can and do acquire this expertise, but it is one more challenge.
  4. Delivering information literacy: again, when I was a student, information literacy meant learning to use the card catalog, printed bibliographies (like the MLA Bibliography), and the microfiche machine (and I’m not even that old!). Today, college students are awash in the firehose of digital information. The web presents a whole new set of information literacy challenges, starting with crap detection, as Howard Rheingold calls it, but quickly expanding to thinking about how to archive, organize, and curate one’s own compositions and connect them to the larger community of data.

One could say all of these things are part of a single phenomenon: a globalizing digital-information revolution that has expanded literate practices to new populations while raising the literacy stakes for everyone by proliferating genres, inventing new media types, and exploding information production and access. Write whatever history you want of this phenomenon, but it has fallen like a ton of bricks on first-year composition in the last decade. We can order students to close their laptops, forbid “Internet sources” (whatever that means anymore), assign “3-5 page papers,” and require students to turn in printed documents with a staple in the corner. That is, we can try to ignore these challenges, but that still doesn’t make the job any easier, because the challenges remain even when we try to ignore them. There’s no doubt that we still want students to be able to read closely and carefully, to analyze and evaluate text/media, to put sources in conversation with one another, and to have a point to make, a purpose to achieve, in relation to that conversation.

On an abstract level, in that respect, what we want our students to do is not so different from what I was asked to do. But it turns out not to be that helpful to think about this in abstract terms. Here’s an analogy. The other day, during our program assessment, I was listening to a slidecast where a student asked, “What’s the difference between a slave and a hammer?” He was trying to make the point that slaves were treated like tools while en route to some argument about capitalism and labor. Overall it was a decent if overly ambitious slidecast. My first thought was, remind me not to send that kid to the hardware store to buy me a hammer! But I understood the point he was trying to make. He was speaking in abstractions. But as with that argument, here thinking only about the abstract critical-rhetorical goals gets us in trouble. If all that were required was some general, abstract ability, then our instructors wouldn’t face challenges shifting from print to digital literacy. In fact, as far as that goes, our students wouldn’t either, as they have their own critical, rhetorical savvy when it comes to the communities where they have matured.

But that’s not how it works. Instead, it’s all about those “missing masses” of actors and objects that comprise our compositional, rhetorical, informational, and cognitive networks and how those networks are shifting dramatically. And I think it’s only going to get harder.

Categories: Author Blogs

the missing masses of educational software

20 May, 2014 - 17:31

It’s the time of year, at the end of the term, that one might have a chance to move from the minutiae of daily email requests to thinking bigger, and in this case thinking about educational software. The past couple of years have been preoccupied with MOOCs, but this post isn’t about MOOCs, or at least not exactly about them. Right now, I am more focused on a different edutech trend: eportfolios. That is, we are thinking about piloting some eportfolio application in our composition program next year. We have a current portfolio requirement, but most of our instructors collect paper portfolios.

Here are the problems I have. I can have portfolio software, but it doesn’t really lend itself to community discussion or providing feedback. I can have a CMS like Blackboard, but it cordons off all the classes from one another (and it’s not very good for providing feedback on student compositions). I can use Google Docs for feedback, though not for anything else. And though I can use Google Docs, I’m not sure my instructors can manage the various permission structures, like sharing a folder among a group of student users so they can upload and comment on each other’s work. And I’m still left to figure out the rest of it. WordPress might offer a similar kind of solution, where a tech-savvy instructor could get a community discussion, have students get feedback, and set up individual portfolios. But I’m not sure how it works as a program-wide solution, and I haven’t begun to talk about assessment issues.

I used to feel somewhat befuddled by the absence of what seemed like a fairly straightforward constellation of features for a piece of educational software. After all, they were all features that existed in one place or another already, for example:

  • collaborative composing of text documents
  • short, realtime discussion
  • a journal and portfolio
  • a forum where you can follow users and threads
  • photo and video sharing

All of that is easy to do, and it’s not hard for me to create a class using a few applications in conjunction with one another, though sometimes it’s confusing for the students. It’s another matter entirely to get 2500 students and 80 instructors also doing it, and doing it in a way that allows all those students and instructors to form a community. As such, it would be useful to have all this functionality in one place where one could create classes, track students for evaluation purposes, and collect data for grading, as well as create meta-communities among the classes. The composition student community is a short-lived one, but I still think there could be merit to this experience. It would be even more useful if one thought about, for example, 1000 students majoring in the humanities. (I’m not sure if that’s more or less than we have at UB, but it’s a big enough number to get the point.) Now you have a group of students taking multiple classes over several years who share common issues, interests, opportunities, and challenges.
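
To be clear about how modest the technical ask is, here is a minimal sketch of the data model such an application implies: classes, portfolios, posts, feedback, and the cross-class meta-community. All of the names are hypothetical; this is an illustration of the feature constellation above, not a spec for any existing product.

```python
# Hypothetical minimal data model for the feature constellation described above.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    role: str                                           # "student" or "instructor"

@dataclass
class Post:                                             # a draft, journal entry, or media upload
    author: User
    title: str
    body: str
    comments: list[str] = field(default_factory=list)   # feedback from peers and instructors

@dataclass
class Portfolio:                                        # a student's running collection of work
    owner: User
    posts: list[Post] = field(default_factory=list)

@dataclass
class Course:                                           # one composition section
    title: str
    instructor: User
    portfolios: list[Portfolio] = field(default_factory=list)

@dataclass
class Community:                                        # the meta-community spanning sections
    name: str
    courses: list[Course] = field(default_factory=list)

    def feed(self) -> list[Post]:
        """All posts across every section, so students are not cordoned off by class."""
        return [p for c in self.courses for folio in c.portfolios for p in folio.posts]
```

The hard part is not this data model; it is getting 2500 students and 80 instructors onto it and rethinking assessment and community around it.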

It used to seem so painfully obvious to me that such an application would be possible to build and would have tremendous pedagogical value. It would allow students to integrate their learning experiences across courses and disciplines. It would foster collaboration and the development of independent interests among students (affinity groups, as some call them). We could create on campus a version of the communities we see online, but now those communities would be supported and enriched by faculty, curriculum, and all the intellectual resources of a university.

Over time, though, I’ve come to recognize that that’s not what we call learning at universities, and, more importantly, it’s not what we call teaching. How do we characterize teaching and learning? I won’t go into that, except to say that it is clearly more individualized, more segmented, and more transactional. In that traditional, banking-model approach, all that really matters is whether or not individual students can demonstrate that they have received the prescribed message from the curriculum. In such a model all that is needed is a delivery mechanism (a textbook, a lecture) and a testing mechanism. Not only is everything else extraneous, it is potentially damaging: a distraction at best and a means of cheating at worst. To be more charitable, maybe one would say “study groups” are ok, but that’s student business, not ours. When one looks at the popularity of a CMS like Blackboard, one can see the continuing dominance of this pedagogical approach. Blackboard is clearly designed to facilitate a banking model of pedagogy. Even its “discussion” tool is hard to imagine as anything better than surveillance technology. While one could twist Bb to other purposes, its continuing popularity on campuses suggests that it conforms largely to faculty need.

Perhaps the indifference or enmity toward the digital mediation of learning stems from a perception of 20th-century pedagogies as unmediated (or far less mediated) and thus as somehow purer means of learning. In that case, it is a matter of the “missing masses” of traditional pedagogy, which in some respects is hard to believe since some of those missing masses are quite massive (like the campus itself). The campus not only mediates learning with its lecture halls, seminar rooms, libraries, and so on. It also mediates disciplinarity with its placement of departments in different buildings by general area (humanities, sciences, etc.). In fact, one might wonder if disciplines are paradigmatically tied to these traditional forms of learning, but I don’t think so. Certainly there is tremendous inertia, but that’s a different matter. Though we may not give them much attention, we are all reliant on Latour’s nonhuman hybrids (his missing masses) to do our work as scholars and teachers. Turning toward digital pedagogy means neither more nor less mediation than in the past, just different mediation, different networks, with different activities.

Maybe it’s just as simple as saying that taking up these tools would require learning to teach in a new way, just as once upon a time we learned how to lecture or design a test. Maybe it just seems like wasted effort… because what we are doing now is working so well, right?

Categories: Author Blogs

Net Neutrality and the Public Sphere

19 May, 2014 - 21:02

This is not an argument for or against “net neutrality,” but rather an examination of the discourses and the beliefs that undergird our identification of this policy as integral to our sense of the nation in which we want to live. Fundamentally, as best as I understand it, net neutrality is about the technical rules the federal government establishes for how internet service providers manage the flow of data along their networks. That is, should some parties be able to pay for preferential treatment of their traffic on the network or not? Whether or not one wants to characterize these claims as alarmist, there are many who argue that the failure of net neutrality will create “fast lanes” and “slow lanes” on the web, which will do harm to many individual and public interests to the benefit of large corporations and profit, resulting in an Internet that looks more like the mall and television than what we have now. These policy decisions, though specific to the US, would likely have an impact on the global content of the web. While some argue for government regulation, others argue against increased government intrusion into and management of the Internet marketplace.

Tim Wu, the Columbia media law professor who coined the term net neutrality a decade ago, wrote a recent piece for the New Yorker where he observes “the mythology of the Internet is not dissimilar to that of America, or any open country—as a place where anyone with passion or foolish optimism might speak his or her piece or open a business and see what happens. No success is guaranteed, but anyone gets to take a shot. That’s what free speech and a free market look like in practice rather than in theory.” The key word here is mythology. I don’t think anyone would assert that public policies should be founded on myths (although some might contend that that’s all that policies are ever founded on). However, given the growing economic inequalities in the US, internet inequality might seem like the last straw. As Wu continues, “It may be one thing for the rich to drive better cars; it would be another to divide public roads between rich and poor, ostensibly to avoid ‘congestion.’” That is, of course, except for the super rich with their helicopters and private jets, but I digress.

Though the typical Internet activist is probably not citing Habermas, it is his articulation of the public sphere that appears to drive this argument. Wu may be right that there is a kind of frontier, libertarian, Wild West opportunism associated with the Internet, but the more nuanced arguments for net neutrality are grounded on the premise of the importance of an open web for democracy, debate, and the public sharing of information. One may legitimately wonder if such proponents actually spend much time online. Ian Bogost makes this point in a recent Atlantic article, where he argues

The Internet is a thing we do. It might be righteous to hope to save it. Yet, righteousness is an oil that leaks from fundamentalist engines, machines oblivious to the flesh their gears butcher. Common carriage is sensical and reasonable. But there’s also something profoundly terrible about the status quo. And while it’s possible that limitations of network neutrality will only make that status quo worse, it’s also possible that some kind of calamity is necessary to remedy the ills of life online. 

It’s a worthwhile point. What exactly are we saving when we strive to save the status quo? Is it about democracy or is it about this strange affect and habit we have developed in the last decade with our smartphones, Facebook friends, Twitter stream, and so on? That said, I suppose I wouldn’t say the Internet is “a thing we do” so much as “a thing in which we participate.” If the question of net neutrality is, as Wu puts it, a debate over the kind of country we want to live in, then it’s a debate that assumes that the Internet is for us. I would suggest instead that the Internet is as indifferent to us as any environment in which we persist.

That’s not to say that we cannot make decisions about how we relate to the web or make policies about our government’s role in regulating ISPs. We can; we should; we must. But when we imagine the Internet as this utopian public sphere or even as something that might aspire to become this imaginary space, we are operating from a deeply misguided conception of how media operate. We are all familiar with the operation of propaganda or mass media advertising as rather unsubtle efforts to shape our minds and desires. We might also acknowledge the way mass media entertainment and news more generally operate as an ideological force. When we do this, we typically imagine people sitting in a boardroom somewhere, devising plans, and using media to achieve those plans. Or we might imagine ideology as a more spectral, elusive force whose origins are harder to identify. Either way, the media technologies are given the role of mute servants translating an ideological message from somewhere else. Such a view presumes that media are fundamentally neutral until humans come along and employ them for partisan purposes (e.g. guns don’t kill people…). If you believe that, then maybe you think that if we all have equal access to the tools, then we get a fair fight or the Wild West or perhaps some more civilized Habermasian public sphere. On the other end of the spectrum, the McLuhan strain of media theory offers a media-deterministic view where it is the medium itself that is acting upon us and determining our situation, to use Kittler’s phrase. From this perspective, the net is anything but neutral. One who subscribed to this media theory would presumably argue that policies around the internet should be based upon an understanding of media’s determining effects.

Obviously there is a large range of possible third views, where the internet, to the extent that we can call it a single thing, has agency (is not neutral) but is also not determinist. The strength and effects of the internet’s agency would be variable, depending on the relationships it establishes with others. There would appear to be a lot of space in this third alternative, but it turns out to be a difficult position to hold because it requires recognizing our own agency as something that emerges in our relations with others rather than something that is inherently ours. Otherwise, all we are saying is that the Internet inhibits, dominates, or extends our natural human rights, and I think that to advocate policy, either for or against net neutrality, based on a misunderstanding of the relations between media and humans is unwise.

Categories: Author Blogs

university presses and scholarly networks

14 May, 2014 - 11:20

The Nation recently published a thoughtful piece by Scott Sherman on the plight of university presses. It’s a familiar story by now. For some time now, these presses have only been able to count on sales of 300-400 copies of new scholarly monographs. That, combined with the larger economic pressures that many universities are facing, has created conditions where their future is in doubt. At the same time, higher education heavily relies on the mechanisms of university presses to subsidize the faculty research that is expected for tenure and promotion in many fields, especially in the humanities (i.e. book for tenure). As Sherman notes, “If the University of Colorado Press publishes a monograph by a young professor at Dartmouth that enables that scholar to obtain tenure, then the University of Colorado Press, with its very modest budget, is in effect subsidizing Dartmouth, which has an endowment of $3.7 billion as well as its own small press.”

In a way this post follows on the end of the last one, on Bauerlein’s observation about the 30,000 pieces of scholarship published on Shakespeare in the last 30 years. Of course this isn’t about Shakespeare or even literary studies. It’s about the proliferation of scholarship and the motives that drive it. Here are two key quotes from The Nation article:

“It is one of the noblest duties of a university to advance knowledge, and to diffuse it not merely among those who can attend the daily lectures—but far and wide.” So wrote Daniel Coit Gilman, the founder of Johns Hopkins University and its university press, which, established in 1878, is the oldest in the country. 

Says Peter Dougherty of Princeton University Press, “I know what an art it is to publish scholarly books: to attract them, to build a list, to get them edited, to present them to an editorial board, to design them, to get them reviewed—to do all the things you need to do to offer them to the world so they can do what they are supposed to do, which is to help inform a discussion and a conversation.”

There’s a consistent message going back 130+ years. University presses serve to diffuse scholarly knowledge “far and wide” so that the knowledge can “help inform a discussion and a conversation.” Clearly presses perform important professional and intellectual work in bringing a monograph to press, and it would be naive to believe those functions can be removed from the process of scholarly publication or taken over magically by free labor on the Internet. At the same time, who can believe that the years spent by scholarly authors, combined with the long hours spent by editors, reviewers, and the rest of a press, for the result of 300 copies sold, is a sensible way to go about achieving the mission that Gilman and Dougherty avow?

[Point of correction, thanks to Doug Armato. The Nation article points to 300-400 copies sold, but that is library sales not total sales. For a better picture of total sales, see his work here, which at least provides a picture of how this works at the University of Minnesota Press.]

Put more pointedly, who really believes that most monographs are published for the purpose of informing a conversation? We don’t have a marketplace of ideas. We have a marketplace of reputation. I am not going to make the argument for digital scholarship. I’m tired of making that argument. It’s like making the argument for using the telephone at this point. Throw your scholarship down a deep dark hole if you want, or open your front door and start yelling it across the neighborhood. Whatever. The argument I am interested in making is against monographs, against 100,000+ word doorstops.

I wrote about this a while back, pondering how many dissertations are ever read by more than five people, including the author. In theory though, the dissertation becomes the primary material for that first book. So think about this. The average humanities doctoral student spends 4 years writing a dissertation. Let’s imagine this person is working reasonably hard at this, say 800 hours per year (15-16 hours per week). That’s 3200 hours. Then our particularly fortunate example gets a tenure-track job at a research university and spends the next five years getting the monograph published, including one full semester of research leave. We have all heard how very hard academics work, so let’s say that’s another 4000 hours over five years. So 7200 hours on this book. That works out to 3.5 years of full-time, 40 hrs/week labor. The university presses will argue that if universities want to use the book for the tenure process, then they should subsidize the publication of books. But universities DO subsidize those books. Taking just the 4000 hours spent researching and writing the book (apparently), the university pays the author two full years of salary to write it. These monographs are incredibly expensive to produce.
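
For what it’s worth, here is the same back-of-the-envelope arithmetic laid out explicitly. It uses only the assumptions already stated in this paragraph (800 dissertation hours a year for four years, another 4000 hours on the book, a 40-hour work week), so it is an illustration of the estimate, not new data.

```python
# Reproducing the labor estimate above from the post's own assumptions.
dissertation_hours = 4 * 800     # four years at ~800 hours/year (15-16 hrs/week)
book_hours = 4_000               # five more years revising and publishing the monograph
total_hours = dissertation_hours + book_hours

full_time_year = 40 * 52         # a 40-hour week, every week of the year

print(total_hours)                              # 7200 hours
print(round(total_hours / full_time_year, 1))   # ~3.5 years of full-time labor
print(round(book_hours / (40 * 50), 1))         # ~2 years of salaried time after hire
```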

Of course we can’t make this all about the money. Research faculty are paid to do research in the field in which they are hired. What is of interest here is the network/assemblage of relations among research and scholarly composing and publishing activities. If we tell a professor that she must publish a monograph, can we safely assume that this requirement shapes not only her compositional activities but the shape of her research itself? Maybe this is a case of the tail wagging the dog, or maybe we genuinely believe that the best research activity is the kind that is focused on producing a monograph. I’m not sure how we make the latter argument. Even if we say we want faculty to engage in some sustained and focused research activity, I’m not sure how the monograph becomes the best tool for dissemination. Once upon a time, maybe.

I am happy with the argument that we still need people to perform many of the traditional roles of university presses: reviewing, giving feedback, helping to guide a text into publishable form, etc. etc. And this means devoting resources to those efforts. But we need some intentional shifting of how these activities integrate with the research activities of faculty, the publication of this work, and the fostering of the discussion and conversation that we all claim is the reason for doing it in the first place.

 

Categories: Author Blogs

digital humanities and the “s” word

8 May, 2014 - 13:24

Dear blog, if it weren’t for the slow demise of the humanities and the soap opera that surrounds it, what would there be to discuss? The “s” word, of course, is “save.” And the whole will-DH-save-the-humanities-in-time, tune-in-next-week business should be getting old by now. But it’s part of that larger crisis-in-the-humanities genre of academic journalism (and yes, blogging) that keeps on chugging.

I can’t decide if the humanities crisis is a quantifiable reality or not. The job market does seem not so good (understatement). In local news, there aren’t as many undergrads hanging out in our classes as there used to be. And there seem to be fewer people applying to humanities grad school… though I seem to recall a report from last year that reported the contrary. So who knows. If there is a crisis, it’s the kind of crisis where the people who are supposedly most affected by it (humanities professors and graduate students) go about their lives as if nothing has changed.

In any case, one part of the dynamic is the “the humanities don’t need saving” argument, which is what we get here and is an argument that has to be on offer if you want to be able to make the other argument in defense of the humanities. DH plays a mercurial role in this story, like the rich stranger who comes to town, who is maybe going to save the town by buying the factory or is going to destroy the town by buying the factory or is maybe just a charlatan all along. Feel free to cast the rest of the roles if you want. My point is that we have developed this tendency to romanticize the situation. The humanities are Scarlett O’Hara and DH is Rhett Butler and the university is a plantation (of course). Whatever, you figure it out.

Or, if you want a more positive spin: how about the humanities are Jonathan and Martha Kent and DH is Kal-El?

If you watch that movie trailer and think of DH as Clark Kent, well then, you’ll get the point (you might even laugh). In which case, “It’s not an S. On my world it’s a symbol that means hope.”

That’s not really helpful here.

What would be helpful? It’s fairly obvious that our students are and will be communicating in a world requiring digital literacy in much the same way as the last century required print literacy… a century that produced and/or expanded humanities disciplines to address this need. So things will change in ways we probably don’t fully understand yet, but someone will need to teach this stuff. And we probably won’t keep doing the things we used to do. But who does? It’s actually somewhat impressive that one can spin such a long-running soap opera out of such a fairly straightforward situation.

Anyway, at the end of the IHE article that spurred this post, we get Mark Bauerlein, witness for the defense, saying “We’ve had 30,000 items of scholarship on Shakespeare in the last 30 to 35 years.” It’s kind of an odd non sequitur in the article, but I think the point is that DH is just a drop in the bucket, something to add to this massive corpus of work. That line evoked a different response in me, something like “My tuition dollars paid for what?!?!” But maybe closer to “I think Shakespeare might want to consider a restraining order.” And now I’m not sure if the humanities needs saving or rehab. I guess that’s just another version of the same story.

Tune in next week. 

 

Categories: Author Blogs

Device-ing a humanities pedagogy

7 May, 2014 - 09:04

A recent local study revealed that the typical UB student brings five Wi-Fi-enabled devices to campus. Five devices? What could they be?

  • Smartphone (obviously)
  • Laptop/Tablet (almost certainly)
  • Gaming Console (Xbox, PS, Nintendo DS)
  • iPod/MP3 player
  • Kindle/ebook reader
  • Desktop computer (though I’m thinking that would be hardwired)
  • Roku/streaming media player
  • Television?

Clearly some of these are more interesting to us as faculty than others, or at least they are differently interesting from a pedagogical perspective. We are all already familiar with the phone and the laptop in the classroom and the still prevalent pedagogical response of forbidding their use. On the other hand, not long ago, I would have argued for the importance of building computer classrooms for the purposes of a composition pedagogy. Computer labs are likely still necessary for certain kinds of specialized work, where students may not have the software or hardware capacities on their own devices. And, for some schools, it is still the case that students might not be as technologically flush as UB students appear to be. From our program’s perspective (and I don’t think we are unusual as a state university in this respect), the composition curriculum is really a BYOD environment, as is the typical humanities classroom.

What does it mean to be in an environment where thousands of students and faculty carry devices that connect them to one another across a wireless network (to say nothing of the broader internet connection)? Next fall on Mondays at noon, over 200 students will be taking composition simultaneously in nine different sections across campus. What could this connectivity mean for them? What should it mean? As I’ve asked before on this blog, what should this connectivity mean for the 2500 students enrolled in composition courses next semester? Should we be facilitating ways for them to connect to one another? Do they have something to say to one another? Should they? If our current outcomes and pedagogy have no place or value for such communications, does that mean that there’s something wrong with what we are doing?

The very first classroom in which I taught was a non-networked computer classroom (not even a LAN). That was 1992. I’ve taught in many computer labs in the intervening years, and when I was at Cortland I taught almost exclusively in a Mac lab. Teaching in that lab transformed my view of pedagogy and in many respects put me out of step with the mainstream humanities view of pedagogy that holds the tiny seminar as the ideal learning space. In the computer lab with its 20-25 students, the class meeting was a place where work was done. We might spend a few minutes talking through some issue and getting organized. If there was a common concern, I might even spend a little time lecturing. But mostly students worked on projects. They collaborated, gave feedback, and supported one another. I would move around the classroom engaging in the different projects.

I’m not sure how such a pedagogy translates when one moves from a curriculum focused on procedural knowledge to one that understands its object as declarative knowledge. I just finished teaching a graduate seminar on media theory that mostly falls into the latter category: we learned about media theory. We read media theory, and we discussed it. The students are completing their final projects now, and many of them are involved in doing digital work, but the classroom itself wasn’t a place where that work happened. Perhaps that’s my shortcoming. I’m still trying to figure out how to make a graduate seminar a place where students will come and do work in much the same way as my earlier undergrad classes did.

Undoubtedly, in the pre-BYOD, pre-Wi-Fi era, lecture and discussion were sensible responses to the available technological conditions. The invention of mass printing created textbooks and altered what we might expect students to do outside of class, but typically we haven’t used class time to read textbooks. As such, the lecture/discussion format has remained the same for a very, very long time. Before the BYOD era, a student in a classroom had far fewer options for activity. The lecture/discussion format was the best available option for pedagogy, and the student in a lecture hall might do a crossword puzzle, read a magazine, or just daydream, but really there wasn’t much to do besides listen and maybe speak. At the very least, lecture/discussion seemed purposeful. Today, they seem more contrived.

We all already know that for the most part the lecture can be offloaded to video. On my campus, and I imagine yours, we have these huge spaces devoted to lecture. In a time of austerity, these seem like a horrible waste to me. What was once a mechanism for efficiency has become an emblem of inefficiency. Face-to-face in-class discussion, on the other hand, still seems useful, but really as one of many possible modes of engagement. The problem with the BYOD environment and class-wide discussion is that the unengaged students are in fact engaged elsewhere. The same thing might be true for small group work, though it strikes me as easier to imagine group work activities that make productive use of these devices both as means of doing research and reporting back to the class as a whole.

So here’s my example. In my graduate Teaching Practicum one of the topics we address is “responding to student writing.” I will assign some readings on the subject (e.g., Nancy Sommers). We have a class blog, so students write in response to the readings prior to class discussion. Then we come to class and spend time talking about the reading. Depending on my mood (or whatever) we might break into small groups for a while to look at specific parts or think through a particular question, and then report back. I’m assuming you’ve seen this movie before.  We might even bring in some student papers and talk about how we respond to them. And everyone is happy with this, at least from a structural perspective, because reading something and then sitting down to talk about it is our definition of a classroom learning experience.

And then, lo and behold, we go out and replicate that experience.

Should we be doing something different?

Well, I’m going to guess that there will be several thousand graduate students in the US taking a Teaching Practicum similar to mine next fall. And I will further predict that nearly every one of those practicums will address the concern of “responding to student writing” at one point or another. Is there any point in engaging with this topic as a rich, ongoing, disciplinary concern on a national scale? Instead of us just talking to one another, could we be communicating with the rest of you as well? Could we be in a conversation with TAs and faculty across our own university in other disciplines who we might hope have similar concerns? If so, might we spend our time in the classroom working out how to engage in those conversations? The great thing about the Teaching Practicum, and the reason I’ve come to enjoy teaching it so much, is that it already has a tremendous amount of exigency driving it. Those TAs have to go back and face their students in their classrooms. There are consequences for doing a bad job. But we could still strengthen that exigency.

I might have been able to do the same thing with my Media Theory class but it would have required more substantial changes to my approach. It would have meant saying that the purpose of the course is not to read some things and talk about them. Instead, it would have meant saying the purpose of the course is to answer a specific question or solve a particular problem or reach out to a community. It would have meant saying that the way that we learn is by doing something.

I think maybe, once upon a time, listening to a lecture or having a discussion was a way of doing something. And it still is, I suppose, but it is a very limited kind of doing, and one that we are no longer restricted to in the classroom.

 

Categories: Author Blogs

scholarship, impact measurement, and genre

1 May, 2014 - 07:32

Impact measures are becoming an increasingly pressing issue at UB, just as they are at many institutions. For those who don’t know, in terms of scholarship this typically means citation analysis, or even more bluntly, how many times you get cited. It doesn’t take an advanced degree in critical thinking to imagine some of the problems with this. There are technical issues with tracking citations, and there are disciplinary/genre issues with the rhetorical practices of citation, which can vary not only within a field at large (e.g. literary studies) but within specializations (so medievalists might have different citation practices from Americanists). This practice also seems to stem from the principle that there’s no such thing as bad press. I.e., being cited as a total fool would seem to be the same as being cited as a genius.

However, I think there is a larger question for the humanities. Is “impact” what we are after? The familiar argument is that we value “reputation” rather than impact. Ok, but don’t all fields have a sense of reputation? That is, the scientist with a national reputation accrues value from it in terms of winning grants, attracting good students, landing a better job, etc. I doubt most scientists would know their colleagues’ citation/impact metrics. Clearly there’s a feedback loop here. However, in many fields impact/citation has less duration than reputation because research loses value over time. This isn’t the case in the humanities, where reputation would tend to drive citation. That is, if my next book is very successful, it will probably result in an increase in citation of my earlier work.

But let’s separate impact from the metric of citation and try to think about it more abstractly. I would define impact as having an effect on 1) the work of students and colleagues in my field, 2) broader conversations within my discipline, across disciplines, or possibly across higher education (depending on the nature of the scholarship), and 3) a general public understanding of the topic. When we shape our research projects, to what extent do we define them in relation to the objective of having an impact? In Kuhnian normal-science terms, we might say that the established paradigms of a discipline provide a built-in answer to this question. Furthermore, granting agencies provide some commonality of focus and purpose. In the humanities we are more independent and diffuse. We tend to follow our noses, and I would say we have a built-in allergy to the value of utility that would tend to push us away from asking rhetorical questions about our audience and the purpose of our work. In other words, I don’t think we research/write/publish for the purpose of being read, let alone cited, or at least those motives are not strong enough to shape our activities.

That’s why I ask if impact is what we are after, and if it is, should we consider a different approach to the work we do? As I asked a couple posts ago, if I want impact, should I really be writing a book? What are the contradictions between the institutionalized expectations of tenure and promotion and the new expectation that we have impact? We might even push this question down to the level of graduate training. Is writing a dissertation a good way to have impact? Does it provide training in doing the kind of research that will have impact in the future?

Though I have mixed feelings about the quantitative, metrical measurement of impact, I am probably more sympathetic to the implicit value than most people in the humanities. That is, I’d like to have impact, and I’d like to know if I am having an impact. While citation is one measure, it seems to me that there are a lot of other potential measures, such as pageviews of and links to online articles, and downloads of PDFs from online databases or open-access publications. In theory, it seems possible to know how many times someone has viewed an article of mine in Enculturation or Kairos. I can tell you how many times a given blog post here has been viewed and which one has been the most popular in the last year (you would be surprised, I think). It also seems like we should be able to know how many times my Computers and Composition article has been downloaded from Elsevier’s servers. I would guess that this is the kind of data that publishers would want to know for themselves anyway. It would seem to me that such numbers would be at least as informative as citation.
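
As a sketch of how trivially those numbers could be rolled up if publishers or platforms handed them over, here is a minimal example. The file and column names are hypothetical, invented only for illustration; as far as I know, no publisher currently provides an export in this exact form.

```python
# Hypothetical roll-up of per-article metrics; the CSV and its columns are invented.
import csv
from collections import defaultdict

totals = defaultdict(lambda: {"pageviews": 0, "downloads": 0, "citations": 0})

with open("article_metrics.csv", newline="") as f:   # one row per article per year (hypothetical)
    for row in csv.DictReader(f):
        counts = totals[row["article"]]
        for metric in ("pageviews", "downloads", "citations"):
            counts[metric] += int(row[metric])

# Sort by pageviews to see whether attention and citation tell the same story.
for article, counts in sorted(totals.items(), key=lambda item: -item[1]["pageviews"]):
    print(article, counts)
```

The obstacle isn’t computation; it’s that these counts sit behind publishers’ dashboards and aren’t part of how we currently credit scholarship.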

But what would that imply about the monograph? As a scholar in the humanities, what’s more useful to you? How many times would you say that an article (or some shorter-than-a-monograph genre) would have been of more benefit to you than the monograph you read? I can’t figure out any convincing impact-based argument for writing a monograph. You tell me.

 

Categories: Author Blogs

integration and dissolution in general education

19 April, 2014 - 14:11

We've been discussing the relative merits of integration as a curricular value. The basic premise of integration is fairly straightforward: it asserts that students get more out of their education when they can connect their courses together. We know that students commonly complain that the traditional general education curriculum is meaningless to them. It doesn't connect with their major, and it doesn't connect with "real life." How does this compare with the way that faculty view general education? That's hard to say. Set aside all the economic, resource-driven attachments faculty and departments have to general education (go ahead… I'll wait), and would we say that we disagree with our students' assessment? Maybe. Does a short story or a Western Civilization course connect with a major in engineering or business? Maybe tangentially, though we might as easily say that it provides a counterbalance to the vocational motives of many popular majors. Would we say such general education courses connect with "real life"? Of course, I guess. But we typically don't spend much time in such courses helping students answer these questions. Instead, we focus on covering disciplinary content. Asserting that the goal of a course is to present disciplinary content in a disciplinary way is the opposite of doing integration. Can one do both? Certainly. But to do "integration" is to take time away from disciplinary instruction. It probably also draws the professor away from his or her area of expertise.

Typically the way integration works is around a common thematic issue. As an example, let's say "sustainability." I might be an English professor teaching an environmental literature course, a philosopher teaching environmental ethics, an historian teaching a course on the settling of the US West, a biologist teaching a course on ecosystems, and so on. As a professor I am now teaching a fairly narrow, topical slice of my discipline. Instead of "introduction to ethics," which sounds like a more traditional general education course, now I'm teaching environmental ethics. I might say, "you have to understand ethics first before you can look specifically at 'environmental ethics.'" I might say the same thing as an historian about US history. And so on. That's a reasonable objection: one gives up on disciplinary content and structure to some degree in order to open these interdisciplinary connections. Furthermore, I may have very little to offer in making connections across disciplines. Most faculty can probably make some interdisciplinary connections, and in a learning community these integrations can be carefully orchestrated, but in a looser cluster it's a different matter.

Instead, we affirm a curriculum of dissolution, one in which content is divorced, siloed as we sometimes say. The emphasis isn't on connection or integration; it's on separation and definition. This is undeniably how the academy works: cutting the universe into ever smaller bits. In some ways we might imagine integration as a myth. There is no whole out there. Our students experience the curriculum as disconnected because it is disconnected. Disciplinary knowledge doesn't translate beyond its borders. And I would assert that that claim is largely accurate. Then I would turn toward Latour and say that where one does find translation, one will also find a chain of actors responsible for that mediation. We can't simply tell our students to integrate and expect that courses fit together like Lego bricks, not when integration would be a challenge even for the faculty teaching the courses.

I don’t have any easy answer to this challenge. I can see the value in integration. I do think it is possible to build structures to support integration, and I think it is worth the effort to try. But I can hear the other side of the conversation as well, the side that has placed all its wager on disciplinarity. I’m just not sure if that bet is going to pay off for every discipline.

 

Categories: Author Blogs

digital impacts, scholarship, and general education

16 April, 2014 - 10:17

I'm bringing together a couple of conversations I've been following online and that have also become juxtaposed, at least on my campus: general education, with its attendant adjunct concerns, and impact metrics for research as they intertwine with digital activity.

So, the second one first. Ian O'Byrne, Gideon Burton, Sean Morris, Nate Otto, and Jesse Stommel have a podcast discussing digital scholarship and openness. It's an interesting conversation that addresses many of the common concerns we have about open digital scholarship: how it "counts," rigor, risks, and impact. They also venture into connections with teaching, because clearly our values about how intellectual work should be done are reflected, though often distorted, in our pedagogy. As such, a traditional scholar in the humanities might restrict or forbid "online sources" and require single-author student papers that imitate the style and discourse of the print journal article. On the other hand, someone like myself might more naturally think of students interacting with public online conversations and working collaboratively across a range of genres. My detractors might worry that such work isn't rigorous enough and that it asks students to take unnecessary risks by writing in public. Thus my point that the discussions of digital scholarship and pedagogy mirror one another.

The first conversation was on Facebook and stemmed from this Chronicle article on adjunct unionizing. The conversation, though, quickly turned to the linkage between adjuncts and general education. Jeff Rice, who initiated that Fb thread, followed with this post, where I think he makes a key point:

The adjunct fight is the fight that most people commit to in one oral/textual manner or another (support, empathy, dissent) but that they really don't fight over. It's not really even a fight. Some blog posts. A short film. Discontent. Chronicle of Higher Education first person narratives. Tweets. Some inaccurate coverage in Slate. None of this makes for a fight. These are disconnected and scattered declarations that something sucks. That's all they are.

Jeff's post is an interesting read, as usual. He makes these rhetorical moves that combine the personal and the public and examines how narrative and rhetorical structures move back and forth. One of those moves is saying something sucks. Then we have a tendency to fit the facts to the story. We do the same thing with general education. We say it sucks. As the Fb thread addressed, general education has become a primary mechanism for distributing university resources, especially to certain departments. Only a portion of that money actually goes directly to delivering the gen ed curriculum, of course; by using adjuncts, the gen ed curriculum becomes cheaper, and that money can then float other activities. The obvious example (and the one closest to my work) is the way first-year writing courses float English graduate programs.

We could argue for abolishing general education as a way of addressing the exploitation of adjuncts. It would require some significant reformation of the way university accreditation works. Universities would have to come up with new models for allocating resources if they didn’t want to lose departments and graduate programs. It’s really an extension of the abolition movement in composition from the 90s. Why require courses students don’t want to take and faculty don’t want to teach? Why require a curriculum that only results in the creation of an exploited labor class?

This is where I want to circle back to impact and digital work. We often reference academic freedom, but I think it’s easy to feel constrained as well. It is true that I can research anything I want (in my field, which I chose) as long as I can get it published by a good press in the form of a monograph. Maybe I could do a “digital monograph.” But I think it is very hard to have measurable “impact” that way, especially in proportion to the work involved. Instead of spending 100s of hours over years to publish a monograph that will be purchased by as many individuals as hours I spent writing it, what if I spent that same amount of time here? What if I had as many people (or more) reading my work everyday here as I might hope for purchasing my book in total? What if I was not only reaching people in my field but academics across disciplines, students around the world, and a general public that was carrying my work into journalistic publications?

Maybe it seems natural for me to advocate for that, but I do mean it as an open question. Impact has never been a significant metric in the humanities. We talk reputation instead. And building a significant online identity may or may not contribute positively to reputation. In my view reputation is a more conservative enterprise. It has an advantage of durability where impact can be fleeting. In many ways it is analogous to the print/digital divide. That book will sit on a shelf for a long time.

General education is a similarly conservative enterprise: stuff that every undergraduate should know. And though we've updated the list of stuff over the years, the premise is still the same, and it isn't about impact. It's about reputation in a way, or at least it's about identity. It says that students should embody certain knowledge. What's the impact of that education? That's a little harder to trace, and though we try to make those arguments now, the opportunities for impact are constrained by the conserving action of reputation. In theory it's not hard to imagine a replacement for general education that was entirely about impact, about students and faculty discussing, investigating, and reporting on questions they cared about. Maybe it would be a model that reduced or eliminated the inequities of adjunct hiring. It would not be a model that imagined some ideal plane of knowledge that had to be traversed somehow and that required adjuncts as a means to cover it. Instead it would begin by saying, "We are the people who are here, the faculty and the students; we will pursue what we think is valuable and relevant, we will do what it is possible to do with the resources we have, and we will call that general education."

In practice though these things are almost impossible to imagine. And I certainly do not mean to present them as a panacea. They are more like a pharmakon, replete with their own psychedelic voyages. Who knows where such a course might lead one? I certainly had no idea where blogging might take me a decade ago. And 10 years later I can’t offer it as a solution or a success but only as a different scholarly journey than I might have otherwise had.

 

Categories: Author Blogs