Digital Digs (Alex Reid)
I’m bringing together a couple conversations I’ve been following online and that also have become juxtaposed at least on my campus: general education, with its attendant adjunct concerns, and impact metrics for research as they intertwine with digital activity.
So the second one first. Ian O’Byrne, Gideon Burton, Sean Morris, Nate Otto, and Jesse Stommel have a podcast discussing digital scholarship and openness. It’s an interesting conversation that addresses many of the common concerns we have about open digital scholarship: how it “counts,” rigor, risks, and impact. They also venture into connections with teaching because clearly our values about how intellectual work should be done are reflected, though often distorted, in our pedagogy. As such, a traditional scholar in the humanities might restrict or forbid “online sources” and require single-author student papers that imitate the style and discourse of the print journal article. On the other hand, someone like myself might more natively think of students interacting with public online conversations and working collaboratively across a range of genres. My detractors might be concerned that such work isn’t rigorous enough and that it asks students to take unnecessary risks by writing in public. Thus my point that the discussions of digital scholarship and pedagogy mirror one another.
The first conversation was on Facebook and stemmed from this Chronicle article on adjunct unionizing. The conversation, though, quickly turned to the linkage between adjuncts and general education. Jeff Rice, who initiated that Fb thread, followed with this post, where I think he makes a key point:
The adjunct fight is the fight that most people commit to in one oral/textual manner or another (support, empathy, dissent) but that they really don’t fight over. It’s not really even a fight. Some blog posts. A short film. Discontent. Chronicle of Higher Education first person narratives. Tweets. Some inaccurate coverage in Slate. None of this makes for a fight. These are disconnected and scattered declarations that something sucks. That’s all they are.
Jeff’s post is an interesting read, as usual. He makes these rhetorical moves that combine the personal and public and examines how narrative and rhetorical structures move back and forth. One of those moves is saying something sucks. Then we have a tendency to fit the facts to the story. We do the same thing with general education. We say it sucks. As the Fb thread addressed, general education has become a primary mechanism for distributing university resources, especially to certain departments. Only a portion of that money actually goes directly to delivering the gen ed curriculum, of course, and by using adjuncts the gen ed curriculum becomes cheaper, so that money can then float other activities. The obvious example (and one closest to my work) is the way first-year writing courses float English graduate programs.
We could argue for abolishing general education as a way of addressing the exploitation of adjuncts. It would require some significant reform of the way university accreditation works. Universities would have to come up with new models for allocating resources if they didn’t want to lose departments and graduate programs. It’s really an extension of the abolition movement in composition from the 90s. Why require courses students don’t want to take and faculty don’t want to teach? Why require a curriculum that only results in the creation of an exploited labor class?
This is where I want to circle back to impact and digital work. We often reference academic freedom, but I think it’s easy to feel constrained as well. It is true that I can research anything I want (in my field, which I chose) as long as I can get it published by a good press in the form of a monograph. Maybe I could do a “digital monograph.” But I think it is very hard to have measurable “impact” that way, especially in proportion to the work involved. Instead of spending hundreds of hours over years to publish a monograph that will be purchased by as many individuals as hours I spent writing it, what if I spent that same amount of time here? What if I had as many people (or more) reading my work every day here as I might hope for purchasing my book in total? What if I was not only reaching people in my field but academics across disciplines, students around the world, and a general public that was carrying my work into journalistic publications?
Maybe it seems natural for me to advocate for that, but I do mean it as an open question. Impact has never been a significant metric in the humanities. We talk reputation instead. And building a significant online identity may or may not contribute positively to reputation. In my view reputation is a more conservative enterprise. It has an advantage of durability where impact can be fleeting. In many ways it is analogous to the print/digital divide. That book will sit on a shelf for a long time.
General education is a similarly conservative enterprise: stuff that every undergraduate should know. And though we’ve updated the list of stuff over the years, the premise is still the same, and it isn’t about impact. It’s about reputation in a way, or at least, it’s about identity. It says that students should embody certain knowledge. What’s the impact of that education? That’s a little harder to trace and though we try to make those arguments now, the opportunities for impact are constrained by the conserving action of reputation. In theory it’s not hard to imagine a replacement for general education that was entirely about impact, about students and faculty discussing, investigating, and reporting on questions that they cared about. Maybe it would be a model that reduced or eliminated the inequities of adjunct hiring. It would not be a model that imagined some ideal plane of knowledge that had to be traversed somehow and required adjuncts as a means to cover it. Instead it would begin by saying “We are the people who are here, the faculty and the students, and we will pursue what we think is valuable and relevant, we will do what it is possible to do with the resources we have, and we will call that general education.”
In practice though these things are almost impossible to imagine. And I certainly do not mean to present them as a panacea. They are more like a pharmakon, replete with their own psychedelic voyages. Who knows where such a course might lead one? I certainly had no idea where blogging might take me a decade ago. And 10 years later I can’t offer it as a solution or a success but only as a different scholarly journey than I might have otherwise had.
We’ve been reading Manovich’s Software Takes Command in my media theory course. The driving question of the book is “what happens to media after software?” If that question strikes you as McLuhanesque, then I would say you are on the right track. The book has a historical element. It begins in the 60s and 70s, looking particularly at the work of Alan Kay, and moves into the 80s and 90s where the process of “softwarization,” as Manovich terms it, starts to take hold. Here he gives special attention to Photoshop. The book ends fairly close to the present with some discussion of social media. However, it would be inaccurate to think of the text as media history. It is really more conceptual in its efforts to understand the impact of software on how we understand media. To that end, I am interested in two particular ideas: software evolution and software epistemology, as Manovich calls them.
Manovich’s discussion of evolution comes with the usual caveats that he is using the term metaphorically and that he doesn’t mean that software develops by the same mechanisms as biological entities. Fine. I didn’t really think that Photoshop and InDesign were getting busy on my hard drive anyway. On the other hand, the “metaphor defense” is a little unsatisfying. Isn’t it possible to say that evolution is a more abstract concept and that we can have different kinds of evolution (software and biology, for example) without saying that such claims are any more metaphorical than language inevitably is?
Through the process of softwarization, older media (e.g. a photograph and a watercolor painting) get simulated. As Manovich points out, this is not a simulation of the material but a simulation of the tools and interfaces. So for example, with digital video, software simulates the ways in which we were able to interact with traditional film (playing forwards and backwards, cutting, panning, zooming, etc.), but we don’t get the material film. In creating these simulations for a variety of media, software blends them together through the use of common data structures. (N.b. here Manovich is using the term data structure differently from its use in computer science.) By data structures, Manovich is referring to common file types, so that the photograph and the watercolor painting both become jpegs. That’s all fairly straightforward. Here is where this gets interesting though. Manovich argues:
software simulation substitutes a variety of distinct materials and the tools used to inscribe information (i.e., make marks) on these materials with a new hybrid medium defined by a common data structure. Because of this common structure, multiple techniques that were previously unique to different media can now be used together. At the same time, new previously non-existent techniques can be added as well, so long as they can operate on the same data structure.
So in Photoshop we can use techniques that simulate painting techniques on photographs and vice versa. I’m guessing anyone who has used Photoshop has messed around with all those filters. This is where we begin to see the hybridization of media. Two media that were materially distinct and had techniques that were tied to that distinct materiality are now able to combine to produce new media offspring.
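Manovich’s point about common data structures can be sketched in code. In this toy example (my illustration, not Manovich’s), a “photograph” and a “watercolor” are both reduced to the same structure, a grid of pixel brightness values. Once they share that structure, a technique borrowed from one medium (inverting, a darkroom move) applies equally to the other, and a technique that existed for neither physical medium (pixel-by-pixel blending) becomes possible:

```python
# Once digitized, a photograph and a watercolor share one data
# structure: a grid (list of rows) of pixel brightness values, 0-255.
photo = [[10, 50, 200],
         [30, 120, 240],
         [0, 90, 180]]
watercolor = [[220, 180, 140],
              [200, 160, 100],
              [240, 210, 170]]

def invert(image):
    """A 'darkroom' technique, now applicable to any pixel grid."""
    return [[255 - px for px in row] for row in image]

def blend(a, b):
    """A technique neither physical medium supported: averaging two
    images pixel by pixel into a hybrid."""
    return [[(pa + pb) // 2 for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

# A "new media offspring": an inverted photo blended with a painting.
hybrid = blend(invert(photo), watercolor)
```

The functions operate on the shared grid, not on film or paper, which is exactly why, as the quotation above puts it, new techniques can be added “so long as they can operate on the same data structure.”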
That’s software evolution; now let me turn to software epistemology. Manovich writes, “Turning everything into data, and using algorithms to analyze it changes what it means to know something. It creates new strategies that together make up software epistemology.” Here I am reminded of DH, of course. But I would put it this way: our knowledge of an object isn’t in the object to be discovered but produced through the network of relations by which we encounter that object. This is as true for a novel or photograph as it is for a marmoset or a black hole. There’s no doubt that today that network of relations includes software as a powerful actor. Furthermore the hybridizing process of software evolution does mean that techniques can cross the material boundaries of older media. I.e., we can bring new techniques for creating knowledge about texts, and clearly we have.
My interests are more heuristic than hermeneutic, however. So my curiosity now tends toward wondering how/if Manovich’s insight gives us a way to invent new media hybrids, particularly for the purposes of our academic work, though I am generally interested in the activity. My interests tend toward the mobile and the social, the everyday uses of media available to students and faculty. When I start to think about such matters it’s easier to see that it is not just about software. It’s easy to see how one could combine several commonplace social media applications–a collaborative composing tool like a wiki, realtime communication like twitter or google hangouts, and asynchronous discussion and documentation on a blog–to engage in a very different kind of scholarly and/or pedagogical activity than the one that currently predominates in the humanities. With location-based data and smarter objects we can provide context-sensitive information. And yes, we can find people out there doing those things, but doing it requires more than there being a new media hybrid with the right affordances. It also requires a pedagogical and rhetorical shift. It requires an epistemological shift where I change my mind about what’s worth knowing and what are useful ways to construct that knowledge.
I’m encountering this right now in a conversation on campus about eportfolios. Sure, there are many valid questions to ask about eportfolios, and we can point to institutions where they haven’t been successful, where the students and faculty find them counterproductive and onerous. On the other hand, I can point to more institutions where students and faculty find giant lecture courses and multiple choice tests counterproductive and onerous. Can we get rid of those too? Eportfolios are a media hybrid born of the recognition that our students’ work is largely digitally composed, regardless of discipline, and that we can thus put all that work in one place where we can take advantage of certain media-independent processes. The institution might be most interested in the assessment processes, but I am most interested in the sharing and discussing processes. But this is a clear example of an epistemological shift. Instead of putting effort into creating old media types with their historical epistemological constraints, one decides to do new kinds of work and value a new kind of knowledge.
Tomorrow I’m doing a brief digital composing workshop for FYC instructors as part of a Bedford symposium at Buffalo State. So this is me thinking through that somewhat. I am imagining an audience with limited experience teaching digital composition, as well as limited technical resources and a limited amount of time in a semester in which to do it. It’s really the time constraint that interests me the most because it points to the way we still think about digital composing within FYC, but I also want to give some practical advice. It is a workshop after all.
So we will be looking at two types of assignments: the slidecast and a Storify “essay.” As you probably know, the slidecast is technically easy to produce; you can make one with PowerPoint. It also creates opportunities for talking about visual rhetoric and design as well as a kind of oral presentation skill. I typically point my instructors (and their students) to Garr Reynolds’ site for advice on slide design. I typically do an assignment that is 3-5 minutes long with at least 10 slides and we typically pull this off in about 2-3 weeks of class meetings. The Storify essay is an assignment I haven’t actually done, but I think could be an interesting option, especially for a course that has a research component and wants to pay attention to social media conversations surrounding an issue. This kind of assignment would clearly be more text-based, although you can certainly incorporate image and video. Essentially though, you are collecting data from the web around a conversation and then weaving it together with text.
My plan is to introduce these two briefly and then give the participants some time to pick one to play with. However, I also want to say something about time. Typically, I believe we still look at digital composing as a kind of add-on to composition courses, as secondary to the goal of teaching academic writing. I think our program at UB is guilty of this, and it’s partly my fault. When I introduced the digital composition assignment requirement into our curriculum a few years ago, I presented it as something that could be contained within a short period of the semester. I did this out of a perception that it would be easier for the instructors to accept that way. Unfortunately it can lead to a situation where the digital stuff is in one place, which leaves the rest of the curriculum potentially “non-digital.” That’s not a model that’s going to work for us anymore. We need to recognize that all composing is digital, and by that I don’t simply mean that we are using word processors. We are also using web-based research that comes in a variety of media, and we are employing a range of devices for accessing and composing media. The choice to compose a text-only, word-processed document is simply that: a choice. I mentioned in a workshop for our TAs a couple weeks ago that there’s no reason why one couldn’t teach an entire composition course without producing a single document that looked like the conventional essay from the 1990s. Imagine, for example:
- A blog with updates about an ongoing research project, discussions about class readings, and reflections on one’s writing;
- A class webzine with essays that incorporate image, video, and audio;
- A collaborative wiki-based site organizing important concepts from the course;
- A slidecast as mentioned above.
I could go on, but you get the idea. When I look at the WPA outcomes, I think they could all be achieved through digital assignments. There is little reason to be passing around these MS-Word documents on paper or electronically.
Marc Bousquet has a piece in the Chronicle, trending, academically speaking, on “The Moral Panic in Literary Studies.” It would be easy to read this piece as a rehearsal of what has now become a familiar song, at least in English, about disciplinary turf wars among literary studies, rhetoric, and the other fields within such departments (but especially the first two). It would appear much of the comment stream gravitates toward those commonplaces. But this isn’t the 1980s or 1990s.
Bousquet’s argument begins with the declining numbers problem: fewer tenure line positions, a paucity of hires for new PhDs, and fewer undergraduates in classes. He then suggests that the one place in English that bucks the trend is digital composition. As he writes, “Scholars of composition and rhetoric generally teach graduate and upper-division courses packed with students who are passionate about the digital publication and media composition now inevitable in every walk of academic, professional, creative, and community-engaged communication.” He mentions doctoral programs, such as that at Clemson, with its 100% placement rate, that focus on such matters but do so in an interdisciplinary way.
I think to some degree what Bousquet says is accurate. Once upon a time, before I became buried in administrivia, I used to teach such courses, and they were popular. But it is also difficult to deliver that curriculum. To gain more than a passing familiarity with digital composing requires more than one course. There needs to be something like a vertical curriculum. And that really means having more than one faculty member doing the work. Logistically, you need to offer three sections of your introductory course in order to end up with enough students to run the most advanced course that sits at the end of the sequence. If you had three or four faculty who were relatively free of administrative burdens so that you could offer 10-12 courses per year (at a typical 2-2 research load), then you probably could have a go at a legitimate professional-technical-digital rhetoric/composing curriculum at the undergraduate and graduate levels.
Bousquet describes the resistance he has encountered at Emory toward the idea of such hires. There aren’t other rhetoricians at Emory to resist the notion, but my sense is that the resistance to the digital is not a literary studies-only phenomenon. I think there is wider acceptance in rhetoric and composition, because computers and writing has been going on as a field of study since the 80s, but I think there are a good number of traditional rhetoricians and some compositionists who see digital composing as secondary (or tertiary) to a field that is focused on writing (i.e. print culture) or cultural studies. Like the literary studies faculty Bousquet describes, they too might look askance at technical and professional communication. In my own experience, I am as likely to find kindred spirits in media study or art or across the digital humanities as I am among faculty in rhetoric and composition.
So this is not the same old turf war. I’d like to think that it isn’t a turf war at all. That it is about trying to grow the pie rather than redistribute the crumbs on the plate. However it is a shifting in the identity of English departments. A few decades back, in a different millennium, literary study needed little defense: one studied literature because literature was valued across the culture. Today English departments need to explain why their majors are worthwhile. Often, the first thing they say is that English majors learn to write well. And today, the notion of what writing is has shifted toward the digital, so you would think that this would be a natural shift. But English has never really been about teaching writing; it’s been about teaching literature with improvement in writing being a claimed side effect.
What would be the role of literary study and literary history in a department where the focus had shifted, as Bousquet suggests, toward production and composition? For me the answer is relatively straightforward and comes from my experience working with students who were professional writing majors. Students are interested in such curricula because they want an education that has a career path. However, in my experience, these students are also interested in creative writing, in experimenting with digital media, and with understanding the role that writing and composing play in our culture. And part of that is literary study.
Swimming upstream is a concept I borrow from zazen meditation. Don’t worry, I’m no expert on such things and will not pretend to be. But the basic idea is that zazen is a mental discipline that requires working against the inclinations of the mind. If you’ve ever tried meditating, even in some informal way, you’ve likely experienced how quickly your mind begins to wander, that it’s hard to keep your focus on your breathing and that the harder you try the more easily you lose your grip.
Sometimes, when we think about learning, we think of it as something that is as natural as can be, as easy as falling off of a bicycle. And, indeed, one can learn a lot from falling off a bike, including, eventually, how to not fall off the bike but instead ride it. However, if all learning were “natural,” then we wouldn’t need schools. It might be more accurate to imagine learning as a human capacity that gets taken up by schooling and pedagogy to reach certain goals. Many of those goals, like learning to focus a long time on words on a page, require swimming upstream (if you will permit me some liberty with that metaphor). I don’t know that we can call reading an enjoyable activity. Some of us enjoy reading some things, but we can all identify things we don’t enjoy reading. The same is the case with writing, only more so. It’s not exactly pleasure that I am pursuing here as I write this. However, I also don’t want to overstate the “nature” part of this (as opposed to culture); we’re really looking at natureculture, if you must hold on to these terms.
So all of that is maybe a long-winded way of saying that schooling is a laborious activity that puts the mind to work in ways it wouldn’t work if it weren’t in school. OK. But I’m hoping there’s more to it than that. I’ve never liked the term “critical thinking,” but one understanding of it I might like is learning to swim upstream in one’s mind, against one’s received preferences and intuitions. I don’t think this is something that we are particularly good at, even when we have been quite thoroughly schooled. Our tendency instead is to act as partisans in some agonistic drama where we always argue for our own views and interests with the expectation that everyone else will do the same and that it all will get sorted out through conflict.
Perhaps you are imagining that I am doing some blog version of vaguebooking about some particular conflict. That’s not really the case. Though I must say that some of my recent work on campus has helped me to see how very difficult it can be to communicate across disciplinary boundaries, not only in terms of different academic discourses and methods but the different local cultures that develop around departments, schools, and so forth. It really does require swimming upstream. And not against the flow coming from other people or the bureaucracy but against the flow in your own mind. By this I don’t mean martyring your own interests but rather recognizing that the thoughts flowing through your head and claiming to be “yours” and to be “your interests” aren’t you.
This is one of those ironic spaces in the university. The university is largely built on the idea of academics pursuing their interests. Everyone occupies their myopic research area, teaches in that area, and advocates for resources for their own work (both internally and externally). E.g., if I don’t stand up and argue that we need to hire another person in my field to help do the work of delivering first-year composition, certainly no one else will. If I and my rhet/comp colleague, Arabella Lyon, didn’t work on a proposal for a new writing center, no one else would have (though others did certainly work with us). And to a certain extent that makes sense. Universities operate on a culture of expertise, so the experts should weigh in on matters of concern in their area. But there’s a distinction to be made between my interests and disciplinary commitments to rhetoric or composing and communication as a matter of curricular concern on my campus (to say nothing of the larger project of general education). So I have to swim upstream against the former in order to better address the latter.
And if I realize that, then maybe I also can recognize that even when I am in my alleged myopic research space writing articles and what not, I might still be better served to swim upstream against my interests, at least some of the time.
Ted Underwood has a great post exploring the challenges of “fitting” DH into literature departments. He observes:
Humanities curricula may evolve, but I don’t think the majority of English or History departments are going to embrace rapid structural change — for instance, change of the kind that would be required to support graduate programs in distant reading. These disciplines have already spent a hundred years rejecting rapprochement with social science; why would they change course now? English professors may enjoy reading Moretti, but it’s going to be a long time before they add a course on statistical methods to the major.
And then a little later, “The reluctance of literary studies to become a social science needn’t prevent social scientists from talking about literature.” The connection between digital humanities and social science is interesting here. I think Ted’s point is that the statistical methods at work in DH are more common in the social sciences, which are generally more quantitative than the humanities (what isn’t?). Ultimately though, he makes a convincing claim that it’s ok if DH doesn’t fit into the paradigms of literary study; the two don’t need to fit together.
I have a slightly different take on this. The paradigms of literary study are shifting. DH isn’t causing this. Instead, I’d say that this paradigm shift and DH are products of a more general digital turn. Ted probably sees more closely than I do the frustrations that occur in English departments when they try to fit DH into their curriculum. In my view, while I think DH has a role to play in English departments, those departments misread the situation if they believe that what they need to do is respond to DH. What they need to do is respond to the broader digital turn happening around them, which might mean exploring the computational/statistical methods of literary studies DH, but mostly means building a digital literacy curriculum in place of the print literacy curriculum that currently exists. What literary studies should do in this regard I have no idea, but I wouldn’t equate English departments with literary studies.
And this is where Ted’s post got me thinking. When he noted the connection between DH and social sciences, my first thought was “hmmm… that sounds a lot like rhetoric.” As you know, rhetoric has a humanities side, but it also has a social science side in communications departments. And there is some back and forth. English departments often contain multiple disciplines; why not add DH to the list? We can have print literary scholars, creative writers, journalists, print and digital rhetoricians, media theorists, professional/technical communicators, and digital humanists (and more). To do it, though, one would need a comprehensive mission that was NOT something about studying literature. In doing so, making DH fit would not be about accommodating it to literary studies or vice versa; it would be about building a larger, more relevant departmental vision.
Ian Bogost has an interesting piece in the Atlantic on speed reading. It addresses a particular new technology called Spritz, but Bogost is more concerned with the underlying problem: the sheer deluge of text and information we face online. As he writes,
With so much so-called content, “consuming” it by means of comprehension is becoming impossible. And while we might lament such an outcome along with Dr. Henderson, it stands to reason that the technology and media companies might want to compress more and more interactions with content (let’s not mistake them for reading) into a smaller and smaller amount of time. Think of it as an attentional version of data compression: the faster we can be force fed material, the larger volume of such matter we can attach to our user profiles and accounts as data to be stored, sold, and bartered.
By some accounting, we read (and write) more than ever, though is it really reading? And perhaps by extension we might ask, is it really writing? Certainly the traditionalists among us would be quick to argue that the text, status update, and tweet shouldn’t really count as either. Is writing nothing more than putting words (sort of) into a sentence (sort of)? And is reading nothing more than recognizing the words? What kind of thinking needs to accompany these activities?
And then, as Bogost argues, should we see this from the perspective of the marketplace where what counts is the production and consumption of “content” rather than any comprehension that might accompany it? I imagine we are all familiar with the observation that our brains are doing less while we watch TV than when we are sleeping, and perhaps we can hear our mothers’ voices echoing in our heads about how we will rot our brains watching TV. Maybe this is like that: scanning (or “reading”) your facebook news feed is the equivalent of channel surfing, right? There’s no doubt that there are companies seeking to profit from user-generated content. That’s the longstanding story of social media. Someone creates a great app or website that thousands of people want to use (for free), but the question is how do the creators profit from their creation? The answer has been to find value in the content the users create. And the assumption is that there is value in the content, not in the individual updates but in some proprietary aggregation of the data. So the more we post/produce and the more we click/consume, the more value we produce, regardless of the (lack of) thinking involved.
There are a few interesting observations that strike me about this situation.
1. Our cultural faith in the value of the aggregated data of our online activities. I don’t doubt that we might be scared and impressed by what Google knows about us. And we might benefit from a larger public conversation about corporate data-gathering. At the same time, I am reminded of the way people would be impressed by what I “knew” about them by doing a tarot card reading (something I haven’t done for years but picked up on a lark in my 20s). I imagine that I am not so hard to read either. My inclination is to situate the contemporary connections we make between data networks and identity in the historical context of the ways we have understood ourselves in relation to technologies, especially media/information ones.
2. Along those same lines, it is the current schism/shift with media technologies that generates the question Bogost implicitly asks (at least for me): what is reading supposed to be? We know what 20th century reading was and Bogost offers us a picture of what reading is becoming. Perhaps it is just because we’ve been reading Kittler in my media theory class, but I am thinking about this in the following way. If handwriting offered an imagined near-telepathic connection between author and reader (where the author’s thoughts and identity are conveyed) and the typewriter introduced a different cybernetics where text becomes code to be interpreted in terms of autopoiesis, then the data network situates the human reader as a node in a network of distributed cognition where what we consciously think is less important than our role as an input/output device. After all our computer is “reading” too. It’s reading off of the hard drive or flash memory. Gmail “reads” our email. Those agents understand what they need to in order to produce the required output. So do we.
3. This brings me back to two questions. First, an ontological one, which is what are we or what are we capable of being? And second an ethico-political one, what should we be? Inasmuch as we are intertwined with symbolic behavior, the question of how we produce and consume symbols will be involved in these concerns. As an academic, particularly as one in the humanities, I know quite well the version of this that has one spending years, decades, nose-to-page, in the solitary act of reading books, combined with an equally solitary writing activity. It is a combination of close reading and a deliberative writing practice that understands human thinking as emerging through an intimate, slow encounter with words. Ideas that have value are composed through this activity, not, as should be obvious, through the speedy agglomeration of data.
Media-information networks offer a very different version of symbolic behavior, one that we might characterize as less interior and less intimate. What is it like to be that person? The person awash in thousands of words, images, video and so on? I suppose we all have an answer to that question. But clearly we remain largely unhappy with those answers, and I don’t have much to offer on that today. However I will say this. Learning to read like a 20th century college student takes a fair amount of practice and discipline. It requires learning to focus, to filter out the unwanted noises of the day and to set aside one’s inclination to let the mind wander and instead stay with the words. What is the digital version of this practice? I agree with Ian that it isn’t the speed reading Spritz offers, but what is it?
Sadly illness precluded my attendance. Not to bore you with the details, but this semester has been quite taxing with the work of general education and such. In any case, here is what I would have said. It’s taken in part from my chapter in Invasion of the MOOCs, so I invite you to go there for more.
In the year or so since Steve proposed this roundtable MOOCs have moved fairly swiftly along the hype-adoption curve off the “peak of inflated expectations” and we’re well into the “trough of disillusionment.” We have come to the surprising realization that some videos, a discussion forum, peer evaluations, and a few online quizzes will not simply replace higher education. At the same time, we’ve encountered a proliferation of MOOC-like online learning environments. It’s not all Coursera, Udacity, and EdX out there. That MOOCs failed to live up to their promises is a dog bites man story. Unfortunately for the purposes of entertainment, my overall position on MOOCs isn’t that provocative. As with any emerging educational technology, I think we need to experiment and research, and if we find a technology to be valuable then we need to develop appropriate policies regarding its use. What is more interesting and revealing are the institutional and disciplinary responses to MOOCs. There are many stories there, but in the next five minutes my interest is with one of the central objections to MOOCs within our discipline, specifically that MOOC composition courses are problematic because of the impossibility of providing good feedback to student writers, and part of the subtext of that argument is our opposition to machine-scoring of essays.
While I share our discipline’s general concerns with machine reading, the argument in favor of individual feedback is less convincing and, in my view, is a default position reflecting traditional anthropocentric and humanistic views of communication. In the modern era, most writing was for very small audiences, often a single individual: a friend or lover, an employer or customer, a professor. The tutorial model of instructor feedback reflects this information and media ecology. Needless to say, writing activities today are very different from those of a century ago. We may still write for audiences of one on occasion, but we also write for far larger audiences. In addition, we write for machines. Reaching an online audience means composing a text that is findable and accessible. Attracting an audience via Twitter requires mastering the rhetoric of 140 characters. Learning to write in a MOOC immerses students in this rhetorical situation. It requires students to develop a facility with networked rhetoric that simply cannot be learned in the one-to-one writing environment of the traditional classroom. As has been evidenced by current MOOCs, students struggle with this. Faculty struggle with this, which is all the more evidence that even a highly-developed print literacy does not prepare one very well for the challenges of networked communication. And it is, I would argue, networked communication that our students will most need to practice moving forward.
The real value of traditional feedback in a composition classroom is that it reflects the writing situations students conventionally enter later in their academic career. If students can learn how to seek and use good feedback in the composition classroom, then they are better-positioned to do so again in the future, especially in other college courses. On the other hand such tutorial models do little to prepare writers for understanding the feedback provided through networked environments. In a conventional classroom one knows one is writing for a tiny audience (perhaps an audience of one) and thus individualized feedback, especially from that specific audience, is crucial. However in a MOOC and elsewhere online, one is writing for hundreds or thousands, and the responses of a small number of individuals are perhaps less useful. What is the function of feedback in this environment? It is not so unfamiliar to those with blogs or YouTube channels or large numbers of Twitter followers. As a blogger, the comment offered by a single reader is welcome and helpful, but the evidence of pageviews and reTweets might tell one more about the reception of a post than the single comment. However, interpreting the latter kind of feedback is not straightforward.
This is how one builds a blog readership or Twitter following, by interpreting and responding to network feedback. Not surprisingly, one might discover that the features that make writing valuable in a classroom or in a journal article for that matter are quite different from those that are valued in a MOOC or elsewhere online. This might be an argument in defense of the claim that we shouldn’t use MOOCs as a substitute for composition courses that are designed to prepare students for academic writing. Conversely, it might also be an argument that MOOCs, or some hybrid of the current composition course with the MOOC, are better situated to prepare students for writing in digital media networks. When I think of some future MOOC-like environment, I begin by envisioning the 2500 students who currently take a composition course each semester on my campus. How many of the tens of thousands taking composition in the SUNY system right now are cordoned off in some 25-person online Blackboard environment? More than half I would wager. If we opened those virtual doors, what affinity spaces might develop? What kinds of communities might be built? It is in that direction that I believe we have something to learn from MOOC experiments.
I’m in an essay collection that is now available from Parlor Press, Invasion of the MOOCs: The Promises and Perils of Massive Open Online Courses, edited by Steven Krause and Charlie Lowe.
From the website:
Invasion of the MOOCs: The Promises and Perils of Massive Open Online Courses is one of the first collections of essays about the phenomenon of “Massive Open Online Courses.” Unlike accounts in the mainstream media and educational press, Invasion of the MOOCs is not written from the perspective of removed administrators, would-be education entrepreneurs/venture capitalists, or political pundits. Rather, this collection of essays comes from faculty who developed and taught MOOCs in 2012 and 2013, students who participated in those MOOCs, and academics and observers who have firsthand experience with MOOCs and higher education. These twenty-one essays reflect the complexity of the very definition of what is (and what might in the near future be) a “MOOC,” along with perspectives and opinions that move far beyond the polarizing debate about MOOCs that has occupied the media in previous accounts. Toward that end, Invasion of the MOOCs reflects a wide variety of impressions about MOOCs from the most recent past and projects possibilities about MOOCs for the not so distant future.
Contributors include Aaron Barlow, Siân Bayne, Nick Carbone, Kaitlin Clinnin, Denise K. Comer, Glenna L. Decker, Susan Delagrange, Scott Lloyd DeWitt, Jeffrey T. Grabill, Laura Gibbs, Kay Halasek, Bill Hart-Davidson, Karen Head, Jacqueline Kauza, Jeremy Knox, Steven D. Krause, Alan Levine, Charles Lowe, Hamish Macleod, Ben McCorkle, Jennifer Michaels, James E. Porter, Alexander Reid, Jeff Rice, Jen Ross, Bob Samuels, Cynthia L. Selfe, Christine Sinclair, Melissa Syapin, Edward M. White, Elizabeth D. Woodworth, and Heather Noel Young.
The book is available in PDF format via a Creative Commons license.
I’m participating on a roundtable on MOOCs at the 4Cs conference in a couple weeks. It’s one of those experiences that shows the temporal disconnect between the churn of technological innovation and the stately pace of academic discourse. We proposed this roundtable nearly a year ago, and I think the things we would have wanted to say then probably are less applicable now. Or at least that’s the case for me.
Here’s what I’m thinking now (and I imagine I’ll say something along these lines). A year ago some folks were gleefully pronouncing the end of higher education and proclaiming that MOOCs would revolutionize learning on a global scale. Today that revolution seems very far away. While MOOCs might remind us that education is intertwined with technologies, they might also show us that technologies cannot solve the challenges education presents. Millions of people are not going to log onto a website, interact with some material, and teach each other in any sustained, programmatic way. Perhaps we can imagine some near future AI that will teach us things or even that we could download knowledge into our brains like the Matrix. But those fantasies, like the fantasy of the MOOC, seem to misunderstand how learning, pedagogy, and education work. These fantasies are not so different from the one that imagines that we could just read books and learn that way.
To me, the most interesting thing about the MOOC was watching how institutions, disciplines, and faculty responded with a mixture of entrepreneurial opportunism and trade protectionism. This is particularly the case for composition. The basic argument against the composition MOOC is that instructor feedback is crucial to writing pedagogy and cannot scale to the level of a MOOC. It raises the question of whether one can learn to write without instructor feedback. I’d say the answer is yes. I mean I did, unless you count a couple check marks and “very good, B+” as feedback. But that’s not really the point. It is a more systemic, networked issue. If the goal is to be able to write for academic classroom environments with a single primary reader/professor (and then later to write for professional environments with a single primary reader/boss), then the traditional classroom fits into that system/network. If, on the other hand, one needs to learn how to communicate in a distributed, non-hierarchical digital environment with thousands of members, then a MOOC isn’t a bad place to start writing. However, neither of those is a good model for understanding how communication works today for the average professional, college educated citizen in the US.
The other side of the MOOC equation is not about pedagogy but about the economics of composition instruction. Sure, it’s pretty cheap in terms of adjunct pay. So maybe there’s this fantasy of even cheaper, automated instruction. There certainly is from the student perspective trying to get out of paying for those composition credits. And as a discipline we have an interesting reaction to that. No one is in favor of the exploitative practice of hiring adjuncts, but no one wants to fire their adjuncts either. As such, compositionists find themselves defending adjunctification and the status quo against moocification. In the continuum between the status quo and abolishing first-year composition, where does the composition MOOC stand? Is it better than nothing? Worse than nothing? Do we really prefer adjunctification? Or are we insisting on some idealistic model of well-paid instructor-mentors who will deliver a curriculum that is ultimately anachronistic anyway?
The rapid rise and decline of MOOCs is a familiar tech start-up story. However, the obvious shortcomings of MOOCs have not reaffirmed the continuing value of traditional methods as much as they have shed light on challenges that remain unmet.
My graduate course this spring is on media theory. We’re right between McLuhan and Kittler now, so I suppose I have Kittler’s declaration that media determine our situation on my mind. In our class on Monday, we were looking at Manovich’s selfiecity project, which is an analysis of 3200 selfies taken in five cities around the world and posted to Instagram. The conversation we had got me thinking about the intersection between concepts of media and genre.
I suppose my starting point is to say that genre is an attempt to describe a communicative activity undertaken by a network of humans and nonhumans, which would include media technologies. Genres can suffer from all the familiar effects of generalizing as we see when we try to describe “academic writing” in general. Since I am ultimately not going to come down on the side of arguing that media determine our situation, I’m going to have to figure out how to bring media into conversation with other agents.
The selfie and the selfiecity project are good examples to work with here. How do we talk about selfies in terms of media? In theory there are many devices that could be used to take a selfie, though the most common is a smartphone. Certainly the mobile and instant nature of selfies is typical of the genre. But is the medium the smartphone or is it the internet, where the smartphone is just the sensing organ of the web? What kind of medium is the web? It is easy to see in McLuhanesque terms how the content of the web is prior media, including photography. The genre of the selfie is clearly more than a self-portrait. It is participation in a network. It is communication for any number of potential purposes. In this sense the Instagram selfie is different from the Facebook profile picture, which is also often a self-portrait. Presumably though Instagram and Facebook are just content on the Internet, as are music, television, movies, video games, ebooks, etc. etc.
Rather than going down the avenue of skepticism regarding determinist arguments, I will admit to having interest in the differences among these kinds of content and in the philosophical question of which differences make a difference. Many people will say the sound fidelity of MP3s is a difference that makes a difference. Others would point to the smartphone-mp3 playing device as changing the culture around music, as well as the ease of single song downloads, music piracy, etc. Would we want to make similar arguments about ebook formats vs. print novels? If so, how do we want to make that argument? We can make the fidelity/aesthetic experience argument about print vs. electronic where we’d say the print and ebook versions of the same novel are different in a way that makes a difference. Or we can make a larger shift argument where we’d say that the development of ebooks has changed/is changing/will change the genres of novels (and other books) in some way. Is that media determinism, or is it just media agency?
And what about the genre? Is it an object/actor with agency as well? To be honest, I’m not sure. Clearly the idea of a genre has an effect on humans that write in it. And in some sense genres are emergent phenomena of communication activities within a network and they have some cybernetic operation so that, for example, journal articles keep replicating. So let’s say yes, provisionally. To return to the selfie: the historical genre of the self-portrait presumably goes back to cave painting (though maybe not; maybe we want to say that the idea of self and self-image as a concept emerges at some historical point… a question for another time; regardless, we’ve been doing it for a while). The earliest photographic self-portraits fit into the broader genre of portraits. In fact, if you put a camera on a timer and then take the photo it’s probably hard to tell the difference between that and another person pressing the button. Sure, the selfie is often taken at arm’s length, but the head and shoulders shot that results is familiar to the genre historically. If, for some reason, it wasn’t possible to get a head and shoulders shot from an arm’s-length selfie then I would guess we wouldn’t see that many of them. If you agree with that hypothesis then you’d be suggesting that the historical genre of the self-portrait had an impact on the expectations and requirements for selfies.
This leads to some other questions. If I take a self-portrait in a mirror (another common practice) is this still a selfie? What if I have some kind of remote control or timing device that allows me to set my smartphone at a greater distance? What if I have someone else take a picture of me and it just looks like I took it at arm’s length? Does it matter if the self is pushing the button on the self-portrait? We can try to set some rules or what not, but really the answers are in the networks. Do the images circulate in the same way as part of the same genre doing the same kind of work for the same kinds of communities?
How about this one: if I take a selfie but don’t upload it, is it still a selfie? If I print it out and frame it instead? Or is that a different genre? Is it a different medium? Certainly the answer to that last one has to be yes.
For my own research questions related to teaching digital literacy and practicing digital scholarship, these are worthwhile questions. When we think about the relations between teaching students to compose print essays and preparing them to be digital communicators, when we think about the move from scholarly articles and monographs to online journals, we are thinking about the intersection of genre and media. I would say that these are differences that make a difference.
The #futureEd Coursera course has moved from the history of education onto the future, onto imagining future versions of higher education. As I’ve discussed in my last few posts, there are a slew of problems connected to higher education in the US in terms of access, cost, and outcomes (i.e. good jobs for graduates). Though these problems manifest themselves on campuses, and these institutions can act in response to them, they are really problems caused elsewhere. Higher education does not control the rapidly increasing demand for degrees, nor does it control the job marketplace its graduates enter. As for costs, so much of that is a factor of state/public support, the role of student loans, and the strength of the overall economy.
However, I do think we can change how education functions on campuses, but to do that we need to alter the operation of research. Research drives the activity of faculty and forms the basis for curriculum. While it varies from campus to campus, I doubt there are many four-year colleges that wouldn’t value research activity as being at least equal to teaching in terms of how tenure-track faculty are evaluated. And, of course, faculty have pursued their careers for the purpose of doing research. It’s easy and obvious to see that higher education is built around these disciplinary-research paradigms. For example, examine the upper-division courses in an English department and you can get a sense of the specializations within the discipline (literary-historical periods of Anglo-American literature, various post-colonial and ethnic literatures, creative writing, journalism, rhetoric, and so on, depending on the particular mix of faculty at that institution). Examine the more customized descriptions of graduate seminars offered in a given semester and you can see the particular kinds of research being done within those specializations by the faculty. But the shaping power of research paradigms doesn’t end there. In English, and many other humanities disciplines, the most-valued research product is a single-author book. The most common is the single-author journal article. Humanities research is, at its base, a solitary activity. The result of this is a kind of hyper-specialization, which I’ve discussed many times on this blog. Perhaps hyper-specialization is necessary for the sciences; I can’t speak to that. In English Studies though, it just seems like we are creating hothouse flowers. Though undergraduates don’t become hyper-specialists, their curricular experience is shaped by that hyper-specialization not only in the sense of how their individual courses connect but how their curriculum intersects with other disciplines.
Here’s an example of what I mean. Let’s say that instead of spending a good chunk of 3-5 years writing a monograph on a highly specialized topic for an audience of a few hundred readers, to get tenure at a research institution, an English professor needed to collaborate on an interdisciplinary project that published research (though maybe not a book), communicated with the general public, and developed curriculum. What might that look like? It might look like what Anne Balsamo has described (which I discussed here). There is a whole range of topics, including ecological studies, disability, social justice, literacy, education, cultural preservation, and so on. I am fairly certain that most humanities professors believe their work is culturally relevant, so this would simply be a matter of stepping more fully into that role.
In my own case, I could see being in a community of artists, engineers, architects, business faculty, health professionals, social scientists, education researchers, scientists, and other humanists who were investigating digital media communication. How does digital media operate in the workplace? What are its social and cultural effects? What are its impacts on the human body? How does it interact with physical spaces? How do we learn with it? How do we learn to use it? How do we communicate with it? How do we design better technologies? What are the artistic/aesthetic possibilities? What are the ethical and legal issues? And so on. It’s easy to imagine several dozen faculty on a campus working collaboratively on a group of research questions/projects. I could see myself working on a question such as designing hardware and applications to facilitate digital communication and learning in an educational space.
Doing this wouldn’t eliminate our current departmental structures. Those would remain, especially at the graduate level. You need at least two ways to slice the university to get interdisciplinarity. But a student could major in engineering or business or the humanities but also follow a path through one of these research communities that would connect their disciplinary education with its operation in an interdisciplinary problem.
Right now though, it would be professionally unwise for me to do that kind of work. I need to publish my single-author monograph to get my promotion to full professor. Really any work that I do, including what I am doing right now writing this blog, comes with a financial penalty. The smart, efficient professor is the one who organizes his/her teaching and other activities around publication. Everything the university tells us to do insists that we act myopically, regardless of what the university PR machine says.
Change that and you’ll change the university. You’ll change the daily work of faculty. You’ll change the kind of faculty we hire and promote. You’ll change the way graduate students get trained. And you’ll change undergraduate education. You might even change the cultural role of higher education in our society.
It’s week 3 of Cathy Davidson’s MOOC on the Future of Higher Education. I’m not sure where we are going. Though obviously there are far too many forum topics and posts for any one person to follow, from my vantage there are a couple consistent themes which are echoed in the course video lectures and related materials.
- Higher education is (too) expensive, especially in the US.
- Getting a degree is a pathway to better pay and job security over a lifetime, so it’s a good investment.
- The higher education system cannot handle the vast number of people seeking an education. This is true in the US but even more so on a global scale.
- Higher education is doomed. People don’t want to go to college because it’s too expensive and it doesn’t lead to good paying jobs. They are all going to go online and get badges instead.
How can all these things be true at the same time? Wait, I know. It’s a trick question. It has something to do with the contradictory nature of capitalism, or something? Maybe not. It’s more about point of view. College is more expensive than it used to be, and that expense is a barrier to some people, though there are more college students now in the US than ever before. Though statistically college remains a good investment on average, with more people getting degrees than ever before, the value of the degree is less than it used to be (we can’t all have above average incomes). On the other hand, the demand side is also misleading because the 450,000 students on the wait list for community colleges in California (a statistic to which Davidson often refers) are perhaps not so much seeking an education as they are seeking what correlates to that education: a better-paying job. And who could blame them for wanting that? What that means though is that if there were an alternate route to that job, then the demand would disappear. With a quick Google search, you can find plenty of journalistic reports about “talent shortages” and the need for more, better-educated workers. But how real is this demand? For example, the US Bureau of Labor Statistics lists elementary school teachers as one of the areas with the largest job growth in the next decade, but if you talk to professors who teach education you will discover that the numbers of students entering the field have dropped off. In addition, we hear about layoffs of teachers, states busting unions, and the increasingly poor environment of the teaching profession. Who would want to be a teacher? If we need teachers so desperately then how come the pay isn’t getting better? Of course you could say that about a lot of the jobs on the BLS list like home health aide, maid, childcare worker, and so on.
Maybe there’s a projected need in these areas because these are low-paying jobs that no one really wants if they can avoid them. Teacher isn’t in that category but it shares a common characteristic in that teachers perform a necessary function that no one really wants to pay for.
Now, do you see how that paragraph just spun off like crazy? If “the problem” is something like the neoliberal, transnational capitalistic privatization of society, then in what possible way do we envision that changing the way people get certified for jobs makes a difference? And what, if anything, does this have to do with higher education, except that we are caught up in this snafu the same as everyone else? Is the question here “what should higher education look like in 10 years?” or is it “how do we go about changing the ethics of the planet?” Because as ridiculous as that second question is, I’m not sure we are even asking that so much as we are each asking “how do I get to be on the not-sharp end of the stick?” In short, there are too many stories and too many different problems and trying to see them all as related just produces aporias as far as I can tell.
That’s why I prefer more localized versions of this question of higher ed’s future. For example, how can one university shift its curriculum and priorities to prepare its undergraduates to be effective communicators in a digital media/network and increasingly global context as both professionals and citizens? I’m asking for a friend. Riddle me that one, my fellow MOOCcupants.
I suppose my point is that I recognize the serious problem that many people face trying to make their way and that higher education at least appears to be a solution to the challenges they face. However, in the end, these are not problems caused by higher education and unfortunately I believe that higher education has only a minor role to play in solving them. That said, higher education does have many problems of its own to face. I don’t know that confusing the two is all that helpful.
I’ve taken some time to get involved in Cathy Davidson’s course on the History and Future of (Mostly) Higher Education. I have to say that while I’ve taken some enjoyment out of participating, I’ve witnessed a fair amount of magical reasoning, or as your grandpa might say “If wishes were horses then beggars would ride.” For example, if K-12 schools could educate students to be self-motivated, independent learners… If students had the digital literacy to seek out educational resources, create their own curriculum, and build collaborative communities to foster creative, engaged learning… If universities or tech companies or someone would create these technologies and offer them for free… If we had a valid accreditation system independent of higher education that employers would recognize as valid… If, after all this, those would-be students could get good jobs… well, then I’ll tell you what, IF all those things, then higher education would be in trouble because the INTERNET.
Let me restart from a different point. Regardless of technology, what is it that we want higher education (or whatever might replace it) to do?
- support one’s study of a professional area
- certify one’s preparedness to enter into a field
- develop communication skills (written, digital, oral) to be able to participate in professional (and civic/public?) discourse communities
- develop research skills, critical thinking, and creative problem solving
- learn to do all this work independently, collaboratively, online, FTF, in a global/diverse context
- develop an interdisciplinary understanding of the world so that we are not mono-dimensional
Do we also want to say that higher education should not only be about contributing to one’s personal long-term earnings potential but also about enhancing one’s ability to contribute to the common/social good? Then we also have to consider whether we are taking the students as we find them or if, in our vision of the future, we are offloading a lot of current problems onto K-12 education. That is, is our vision one that begins by saying “If K-12 did x,y,z then higher ed would do a,b,c.” I agree that K-12 education could be better and that we should try to make it better. However, I am not going to plan a future for higher ed that hinges on K-12 doing things it’s never been able to do.
Now we could get rid of all the trappings of modern education. We could make learning more project/activity-based, more student-centered, and more collaborative, but we will still need experts to teach and mentor students and a larger support system. We could shift more of these activities into online environments, but we will still need some FTF because students benefit from it and because that is still how the professional world works. In fact we see less telecommuting now than we did a few years ago.
So here is a semi-idealized version of how our composition program works (which is not so different from other composition programs, I think).
- we meet 3 hours a week in-class where students write, work in groups, and have class discussion
- our pedagogy is student-centered and project-based: the curriculum is organized around a series of writing projects
- students explore professional genres and learn something about digital composing
- we have a writing center and individual faculty conferences with students to mentor/support them
- we have a significant online component for resources, class discussion, student and instructor feedback on writing projects, and so on
- we learn how to do basic library/database research and evaluate sources
- we practice invention strategies to find creative approaches to problems and assignments
In short, we’re contributing to many of the goals we might establish for higher education, and we are meeting the students where they are, not in some fantasy land where they are all autodidacts. What I am saying is that when you look inside the university you can find a lot of pedagogies that are conceptually sound. Even at a big public university like UB, less than 10% of the classes have over 100 students and more than 2/3 are smaller than 30. So I wonder if the problem isn’t so much resources as it is professional/pedagogical development. In a class of less than 30 you can do project-based learning, class discussion, small group projects, and so on. And maybe those things do happen regularly across the campus. I honestly couldn’t tell you how much lecturing goes on in those smaller classes. In conversations I have with colleagues across the campus though, my sense is that their concern is often that they feel obligated to “cover” some large range of material. The curriculum is set up so that really the only thing you can do is info-dump.
The problem is that the digital “solution” isn’t any better. It’s also an info-dump (plus let the students talk to each other in forums and figure the stuff out). The standard info-dump pedagogy is based on the same fantasy that imagines plugging something into your head and learning kung-fu in 30 seconds. There’s no doubt that the info-dump pedagogy can give you a bunch of disconnected pieces of information that you can hold in your mind until the end of the semester. And if you don’t do the info-dump approach then students probably won’t do as well on a test that is designed to see how much you retained from the info-dump.
So what if we established different goals for education? What if a successful education were measured by the work students were able to accomplish, the skills they developed, and the activities they were able to undertake rather than by the information that they were able to retain? Maybe our pedagogies would be different. And maybe MOOCs would make less sense as an alternative. That doesn’t mean that social media and other digital technologies wouldn’t play an important role in learning. In fact I imagine they will. It just means that watching some videos, reading some material, and then taking a test on it wouldn’t seem like learning to us regardless of whether those things happened online or in a lecture hall.
This is a reference to one of my favorite lines in Neal Stephenson’s Snow Crash, which offers a satiric, dystopian view of a future America.
When it gets down to it — talking trade balances here — once we’ve brain-drained all our technology into other countries, once things have evened out, they’re making cars in Bolivia and microwave ovens in Tadzhikistan and selling them here — once our edge in natural resources has been made irrelevant by giant Hong Kong ships and dirigibles that can ship North Dakota all the way to New Zealand for a nickel — once the Invisible Hand has taken away all those historical inequities and smeared them out into a broad global layer of what a Pakistani brickmaker would consider to be prosperity — y’know what? There’s only four things we do better than anyone else:
high-speed pizza delivery
So that’s probably not the future, though I think we definitely have a leg up on the pizza delivery thing. Nevertheless we like to predict the future, and this Coursera course is filled with amateur prognosticators (and who, after all, is really a professional one?). Not surprisingly, when you get thousands of people together in forums you end up with a lot of common tropes. Much of the conversation I’ve seen starts with the complaint/observation that a college education doesn’t lead to a good job (like it is apparently supposed to), that, as a result, it is too expensive, and that these two facts mean that there is a crisis in higher education.
Here is my three-part response.
- The historical relation between a college degree, lifelong earning potential, and job security is a correlational, not a causal, one. A college degree does not equal a good job.
- Once you realize that, the argument about the expense of college doesn’t make sense. I’m not saying that college isn’t expensive, or denying that there might be a social benefit to making it more affordable. I’m just saying that once you realize that college doesn’t equal a job, making a cost-benefit analysis based on the job you did or didn’t get doesn’t make sense.
- If there is a crisis in higher education, it’s that too many people want degrees for the current system to support, in part because they misunderstand what college does.
Think about how colleges work. They are increasingly tuition-driven as we know. They try to get the best students they can, but they also just need students. Then they make an effort to keep those students and ensure they graduate, because those statistics are significant for how their institutions are evaluated, but also because faculty and administrators (who are mostly former faculty) do care about students. We don’t control the economy or the job marketplace, and we don’t control the majors our students select. We could decide not to offer certain majors, but we are in a market with other institutions. We are in business just like everyone else.
We all understand that these are tough economic times and that even in good economic times there are plenty of people who are looking to improve their economic standing, looking for anything that might give them a chance. We can all see that better paying jobs require college degrees. But while not having a degree is a barrier, having a degree is not a magic key. I don’t know if we can change the motivations of people coming to college. We could try to better educate them about what college degrees really mean in relation to a career once they get here. But we can’t seem to do that with English phd students, so I’m not sure what shot we have with teenagers.
In Cathy Davidson’s Coursera course there are plenty of people rehearsing the argument about how various alternate credentialing mechanisms will put an end to higher education forever. I think those efforts could be higher education’s salvation, if they worked, which I am afraid they will not. There are nearly 22 million people enrolled in higher education this year. That’s a 50% increase since 1996. Maybe higher education would work better if half of those people did seek some alternate form of credentialing. To be a little crude, if we have millions of students getting four-year degrees and ending up in crappy low paying jobs anyway, then why not let them collect some badges and MOOC certificates on the way to those same crappy jobs instead? The point is that it doesn’t matter how many students get degrees, the job market is going to be the job market. So the Bureau of Labor Statistics tells us there will be a growing number of jobs for nurses, elementary school teachers, accountants, various kinds of managers and software developers. Those are the growing occupations that require college degrees and pay a decent wage right now (over $50K as a median). There will be something like a million new jobs in these fields between now and 2022. That’s great, except there are 22 million college students right now! How many of them are trying to get nursing, teaching, or business degrees? I’d say a high number of them. Of course a lot of them will wash out and end up with communications or psychology degrees (the two most common degrees in the US right now). And they will probably end up in sales or customer service or something. And you can make good money in sales if you’re good at it. Or you might work your way into management and a better salary. But along the way you’ll probably not be happy about the money you spent getting that degree because getting the degree didn’t lead you to the career you imagined.
I think it would be fantastic if some alternate credentialing mechanism that employers would accept came along. The problem is that if you really want to be a nurse, accountant, or software developer you are going to need some intensive, long-term postsecondary education. Employers who hire nurses, accountants, and software developers pay them a good salary. They want to be assured that the people they are hiring have more than the basic technical training required to do the job; they want people who are smart, good communicators, self-motivated, professional, and so on. They want the best and because the jobs are desirable they can demand the best. Maybe what we need to do is what we used to do 40 years ago: stop people at the gateway to higher education. Maybe we should moderate the number of students accepted into college based upon projections of the jobs available in those fields. Isn’t that the argument we are making now about accepting Phd students in English?
Of course we didn’t have this obsessive link between college and jobs back then. Think about The Graduate or 20 years later, Douglas Coupland’s Generation X. All those college-educated slackers in the late 80s and early 90s (I was one). Where was the furor over jobs and degrees then? Maybe it was there, but we just didn’t have the internet to hype it up.
Maybe it will require some crisis, some shrinkage of colleges or something, but if we got to a future educational system where jobs and degrees were decoupled, or at the very least where the real relationship between a degree and a career was widely understood, then I think we’d be far better off. To me there is a subtle but crucial difference between saying that you are coming to college to study nursing (or engineering or accounting or whatever) and saying you are coming to college to get a job as a nurse, engineer or accountant. It is the subtle difference between reality and fantasy. Because universities invite many people to study these fields, but we don’t hire that many nurses, engineers, or accountants ourselves.
To bring it back to Neal Stephenson, which I should given the title, the future of the American economy is uncertain. Universities don’t control it. The America of Snow Crash is one that is rampant with market economies, corporate enclaves, and deregulated everything. It’s been a while since I read it, but I remember even the FBI becomes a non-government agency or something. That’s the direction we have been heading for a while. I don’t know if things will get as bad as Snow Crash, but we can see the effects of making higher education into a market-driven business over the last 30 years. I’m not saying we need to go backwards or that going backwards is possible. However, even if we did go back to the days when higher education was better subsidized, we still wouldn’t be the magical job and wealth creators that students wish we would be. You’re still going to have to figure out what four things America will do well (pizza delivery or not) and compete like hell, because that’s the country we apparently want to live in. Maybe not.
I’m participating in Cathy Davidson’s Coursera course on the History and Future of Higher Ed. It’s just week one, so we’ll see how it turns out; the introductory material was, well, introductory. She covers a lot of history (beginning with the invention of writing) to establish our current moment as revolutionary in terms of media/information. There was nothing really surprising there, and it sets up the primary task of the course, which is to imagine a future for higher education. In related news, Clay Shirky has a post on “The End of Higher Education’s Golden Age,” which also makes a familiar argument: that public funding of higher education has been in decline since the 1970s (and really takes off after the end of the Cold War). Since then we have had a series of rearguard actions trying to preserve a state that cannot work: “Our current difficulties are not the result of current problems. They are the bill coming due for 40 years of trying to preserve a set of practices that have outlived the economics that made them possible.”
In short, it’s a brave new world out there, which we already knew. One might expect Shirky to applaud technological solutions, but he takes a different rhetorical tack: “The number of high-school graduates underserved or unserved by higher education today dwarfs the number of people for whom that system works well. The reason to bet on the spread of large-scale low-cost education isn’t the increased supply of new technologies. It’s the massive demand for education, which our existing institutions are increasingly unable to handle. That demand will go somewhere.” I would take this back a step further. Why are all these people going to college? Let’s, for the moment, agree with the premise that we have left behind the “golden age” (and there’s plenty of evidence for that claim). Then we also must leave behind “golden age” values and ambitions (which were only ever fantasies anyway, ubi sunt). In particular, we should leave behind the claims that higher education supports democracy, creates citizens, or otherwise strengthens society by educating better humans. First of all, does anyone actually believe that ever happened in any consistent way? And even if it did, it has primarily happened for a very small, white, male, and wealthy portion of America. And we all know what great leaders they’ve been of late.
Obviously most students go to college to get a job. Most people view the function of higher education as preparing students for a career. But higher education isn’t really about career preparation, or at best that’s only a small part of what we do. So maybe the answer is that the future of higher education is the expansion of institutions that are willing to respond to the consumer need for job preparation and the shrinking of institutions that will continue to provide a different kind of education that leads toward graduate and professional degrees. Maybe the same university will contain both kinds of institutions as separate colleges. The technical-vocational degree would probably be a kind of terminal certification. It would give you the basic skills needed to get that entry-level college job. Maybe that kind of training is done without much faculty, but with more modestly trained support staff and tutors, along with some masters level faculty. It would be more like a community college but without the community college’s mission of providing entry into 4-year colleges. These institutions could partner with corporations to provide specific kinds of training needed by those businesses. The corporations would underwrite some of the educational costs. In turn, qualified students could go to work for those corporations upon graduation to pay off some of their student debts, like a kind of indentured servitude, but really not that different from paying off student loans today and there’d be job security (only slaves have better job security than indentured servants).
Maybe that modest proposal isn’t to your liking though. I guess the question one has to ask is whether or not students really want to define their education in terms of the job they will get at the end. Because if they do, then what they are really doing is defining their education in terms of what corporations want. And my sense of what corporations want is that they want students weeded out and sorted. We can say that higher education underserves many American students, as Shirky claims, but if what they want are jobs, if what they want is what corporate America wants for them, then we aren’t underserving them. They are getting exactly what they want; they are just not happy with the result because they ended up on the sharp end of the stick. Shirky talks about higher education’s desire to stretch out a golden age long after it stopped working. Maybe we are doing the same thing with higher education. When 30% of Americans have four-year degrees, then it becomes a pathway to job security and better income. If 50 or 60% of Americans get four-year degrees, do we really think the degree will have the same value? Or will college degrees stop leading to jobs? Or lead to less desirable jobs? The same job you would have had 20 years ago with just a high school diploma.
If you believe that higher education should be a democratizing force that offers opportunity to people, then I agree with you. If you believe that we should offer that opportunity to more people by getting more people into and through college, then I agree with that too. However, I think we have to realize that increasing the number of people who can compete for the opportunities a college education provides will also increase the number of people who lose out on that competition. The future of higher education is that a four-year degree will be less and less valuable every year while simultaneously becoming more expensive because of the increased demand for the degree. I suppose that if you imagine that marketplaces are rational that at some point these things will all even out, though I’m not sure why we need to bring in rationality at this late point.
Shirky believes there is no point in trying to convince governments to increase their funding of higher education, and he points to 40 years of failure in this effort. Maybe he’s right. It does seem to be the case that voters don’t want to make funding higher education an issue. Everyone thinks it’s important, but no one wants to pay for it.
I wish I were more optimistic, as Davidson’s class seems to be. I just can’t get myself there. I see a future with more costly, junk degrees and an increased divide between elite education and what everyone else gets. I think that if you’re in the top 10-20% of first world students, you’ll probably still be able to get a great education at a cost that is reasonable long-term in relation to what you might earn as a future professional. Those are the folks that will go on (for the most part) to do research, run corporations, serve in government, and perform other key professional roles. As for the rest of the population, as long as education is tied to short-term job needs and corporate whims, as long as no one is particularly interested in supporting that education, I don’t see anything great happening. Maybe this Coursera MOOC will change my mind (heh).
I attended the AAC&U eportfolio workshop last weekend in Washington DC. This was my first time at this conference, and it was good to see rhetoricians well-represented among the presenters, including Chris Gallagher, Kathy Yancey, and Darren Cambridge. It was also a conference with a heavy corporate presence, which is not surprising given the potential money to be made in supplying these services to universities (and the general unpreparedness of institutions to think through these issues on their own). However it was not until the very end of the day that I was able to identify a useful “matter of concern,” to use Latour’s phrase, though perhaps in retrospect it could have been predicted.
The closing session was given by Edward Watson, one of the editors of the International Journal of ePortfolios. He offered some analysis of the content of recent articles in the field and made the argument that more quantitative, reproducible research needed to be conducted and published. While he was willing to say that he didn’t mean that the research that was being done wasn’t valuable, he maintained that the field required more “rigorous” research as well. Darren, who was in the audience, spoke up to object to this characterization, which initiated some back and forth around the room. He remarked that portfolios arose from a value placed on epistemological pluralism and complexity that didn’t fit into the model of quantitative, reproducible research that Watson sought.
I’m not so interested in taking sides in a field beyond my own. From a distance, this desire for quantitative certainty to be associated with education is now a familiar theme. Government agencies, accrediting bodies, university administrations and so on want these kinds of assurances. They want to know that if they institute policy A for $XXX then this will result in x, y, and z effects, so that they can evaluate risk and return on investment. That is, they want to know that spending a bunch of money on instituting an ePortfolio system will result in better retention rates, time to degree, job placement, communications skills… something. And the edTech corporations want to make these assertions in their sales pitches too. There are also some education research disciplines that will claim such knowledge can be produced through empirical research. My position on these matters is fairly simple. I’m happy for scholars to pursue their research agendas in their communities. I don’t know that I would call such methods rigorous, unless by rigorous we mean predictable, because I will occasionally read such work and I’ve never found anything surprising about it. If you’ve ever read an empirical, quantitative study about writing pedagogy that produced results you thought were surprising, please let me know. I would honestly be interested in reading it.
Here is what I would assert about eportfolios. They are neither a necessary nor a sufficient cause for improving student learning. Depending on how they are employed within a curriculum and pedagogy, they can be a useful tool for creating opportunities for reflection and integrating learning experiences across courses and semesters. Depending on how they are structured, they can be a means for fostering learning communities. On the flip side, they can be nothing but a bureaucratic hoop. If eportfolios improve retention or time to degree, it is because the way they are structured in an institution helps students make their work more meaningful to themselves and chart their own development: through portfolios students can take ownership of their learning, but only if they see the portfolios as theirs rather than as another requirement. However, this also requires faculty and staff to be on board with the process. Then eportfolios become a tool for connecting faculty with students. But that’s a lot of work.
In short, if an institution invests heavily in eportfolios (though really eportfolios are just a MacGuffin) and faculty are invested in their operations and students are given ownership over their work, then they can become a tool for improving educational experience and results. Is that a claim that we really need to test? If we all care about something, believe that it is important, and invest our time and resources in it, then that thing will be the means by which we succeed. Our “problem” is that we don’t really have that thing. And I don’t think that eportfolios are a magic pill to solve that problem, but they are an opportunity to address it.
All of this became clear to me as I was on board my flight home. The speaker system on the plane was not functioning, so when the attendant spoke into the intercom system, all anyone could hear was static. This fact was as obvious to her as it was to everyone else on the plane. Still she made all her rote announcements through the system. You know the ones: seat trays and seats in upright position, you can use your laptop now, etc. etc. I imagine she is required to make these announcements in this way, even though the system is malfunctioning. This reminds me of higher education. Communication is not really the point. It’s just the wah, wah, wah of the Charlie Brown teacher. So it’s not about eportfolios per se. It’s about opening a new circuit of communication.
There was a fair amount of uproar (at least on my Facebook stream) over Johanna Drucker’s LA Review essay, “Pixel Dust: Illusions of Innovation in Scholarly Publishing.” The uproar was around Drucker’s surprising skepticism regarding digital-scholarly innovations, surprising given her position in digital humanities, and her apparent misunderstanding of some technical concepts, or in the case of bit rot, nontechnical concepts. Here is where I agree with Drucker: developing a digital solution to replace the role of print publication in academia will be neither cheap nor easy. What I think makes her argument difficult to follow, though, is the way she interweaves a defense of the humanities into her claims. In fact she concludes by writing “we can’t design ourselves out of the responsibility for supporting the humanities, or for making clear the importance of their forms of knowledge to our evolving culture.” So is this an argument about digital publishing or is it about the humanities? Yes.
As she acknowledges, the challenges of digital scholarship are not limited to the humanities. In fact, one might say that the humanities are a minor concern here. It’s scientific publishing and the high price (some would say extortion) paid by academic libraries to Elsevier and such for digital journals that are likely the first area of concern. Indeed one might argue that if academic libraries weren’t devoting so much of their budget to these journals then there wouldn’t be a monograph publishing crisis in the humanities. Either way, I suppose Drucker’s point is that this conflation of the humanities and digital publishing is not of her own making, that, to the contrary, she is responding to the claim that digital publishing can save the humanities and absolve others of the responsibility for investing more resources to support humanities research.
With this in mind, here is a key paragraph from her essay:
Lists that focus on literary studies, philosophy, foreign language studies, poetry and poetics have been cut, slaughtered in full public view, sacrificial lambs in the scholarly publishing market, as if they might make clear the dramatic sacrifice required to keep university presses going. Going for what? And how? To what end and for what audiences? As with other discussions of the costs of higher education in current Euro-American culture, the complexities of publishing need to be seen as an ecological system, not a set of discrete decisions about specific practices divorced from the greater political, ideological, and economically justified (at least on the surface) conditions of which they are a part. Or, to put it very simply, cutting humanities lists is like keeping dessert from a stepchild at the family table — it saves very little money and causes lots of distress. But humanities are not a luxury, and to show that they have a substantive contribution to make to the world we live in, we need to demonstrate their relevance to policy, politics, daily life, and business, not just rehash the same old bromides about critical thinking and imaginative life. The vitality of humanities is the lifeblood of culture, its resounding connection to all that is human makes us who and what we are. The preservation of cultural ecologies is akin to preserving ecologies in the natural world, it is, in fact, the human part of them. The humanities are us. Their survival is our survival.
Ok. One might quibble with the analogy here. If desserts were like humanities lists, then America wouldn’t have an obesity problem. Humanities lists are more like deserts than desserts. In this respect it is true that the humanities are not a luxury. Luxuries are desired, even if they are not practical or necessary. The problem with the humanities is not that they are viewed as luxuries; it’s that they are viewed as outmoded and irrelevant. The humanities aren’t viewed as the Lexus of academia; they’re the carriages of academia. They aren’t considered desserts; they are those horrid 1950s dinner recipes involving Jello and Spam. That said, I agree with Drucker about the need to demonstrate relevance, but then I think she goes too far. The humanities are not us. They are a historical-disciplinary paradigm (or set of paradigms if you prefer). Humans existed before the humanities, and they can survive without them. Presumably Drucker is not suggesting that the current methods, forms, and genres of humanities scholarship must remain with us forever, that our “survival” depends upon them. I would not deny that one can boil the humanities down to some basic questions that generate ongoing inquiry: why do we have the values that we have? how did our cultures and communities develop over time? how can we understand individual experience in the context of others? what role do our cultural practices, including artistic practices, have in our lives? how do we communicate with each other in order to resolve disputes, create new ideas, etc? But the methods for investigating these questions do not have to stay the same. In fact, they might change so much over time that we no longer call these investigations “humanities.” The development of the social sciences is a clear example of this.
So how should this conversation intersect with the development of digital publishing? It might seem like the tail wagging the dog for new publishing methods to shape scholarly practices. It would if our current scholarship were not shaped by the media ecologies of the 20th century. Drucker raises a number of good points about the labor and costs associated with digital publishing: “Every aspect of the old-school publishing work cycle — acquisition (highly skilled and highly valued/paid labor), editing (ditto), reviewing, fact-checking, design, production, promotion, and distribution (all ditto) — remains in place in the digital environment. The only change is in the form of the final production.” And then, of course, maintaining digital information (or making it accessible) isn’t free, even if there isn’t such a thing as bit rot.
However, I think of it this way. A car makes a lousy horse. In hindsight we might say that the automobile was a mistake. Pollution, wars over oil and gas, accidents, drunk driving, the suburbs: there are a lot of reasons to complain about cars. But there’s no way we could go back to horse-drawn carriages, not without unmaking our society in fundamental ways (as in some post-apocalyptic sci-fi movie). If we used cars the same way we used horses, then there would be no point in having cars. Or here’s another example: calculators. If we only used calculators to perform the same calculations that we used to do with pencil and paper, there wouldn’t be much purpose to them. Indeed few people would carry a calculator around with them, because the need for making calculations in daily life is not worth the bother. Of course computers allow us to make calculations not possible by humans and open new avenues of knowledge and investigation.
In the same vein, e-books are of limited utility. I have a Kindle and find ebooks convenient for many purposes, primarily the speed of access. PDFs of journal articles are more useful than scholarly ebooks. In the 90s, I had binders full of photocopied articles. Now I have a folder on my cloud. I think these are significant shifts in scholarly practice that we might tend to ignore. However, they are likely minor compared to the shifts we will see, shifts that will shape our methods and the questions we ask as scholars. I agree with Drucker that we should have some reservations about the claims that are sometimes made about the “digital revolution.” However, in the larger picture, my skepticism is more focused on the other end of the technology-adoption spectrum. In the end, I suppose Drucker and I agree that the future of the humanities is interwoven with developments in digital media in much the same way as the humanities past has been interwoven with print media. We just seem to disagree about what that means and what we need to do.
Levi has a recent post on this subject discussing three models, which he terms poststructuralist, contemporary, and Deleuzian. The challenge is figuring out how to create space for a subject with agency who can undertake political change. Thus he ends this way:
When we talk about resistance we want something approximating decision, choice, reflexivity, self-reflexivity, or, in short, agency. Yet when we adopt an ontological perspective on these issues of political emancipation and resistance, we seem to embrace a perspective where things happen of their own accord. Tornadoes don’t choose to come into existence, they just do come into existence when the requisite gradients of barometric pressure are present. This is exactly the sort of thing we don’t want to argue when we discuss subjugation because we acutely sense that resistance might not occur in these circumstances. Decision seems to be required. This is the problem with the idea of a political physics. It somehow misses the dimension of subjective engagement and decision. I don’t know how to get this dimension into the framework I’m trying to think through, but then again I’ve seen no other position that’s able to– though they all assert it –either.
The poststructuralist position, with its ideological overdetermination of the subject, doesn’t offer that. Neither does the contemporary view, in which the subject is always non-identical to itself and to those determinations. As Levi points out, “it’s not at all clear– and maybe I’m just dense –how a void or emptiness can be a seat of agency.” The Deleuzian model is a model of gradients and thermodynamics, but as noted above, it’s hard to see where decision arises there.
For me, subject and agency are separate issues. I tend toward viewing the subject as roughly analogous to the tornado. It’s an emergent phenomenon. We don’t control our experience of subjectivity. I don’t get to decide what I sense or feel. There’s a feedback loop between working memory, what you’re consciously thinking, and what comes next into the mind. But do I get to decide what I think? Not exactly. I can shape it. I can try to focus on one thing rather than another. I can try zazen meditation and let thoughts slip by. The conscious mind, the site of the subject, is part of the mechanism that shapes subjective experience. As for the rest of the mechanism, some of it is part of the body and brain, and some is not. Symbolic behavior is the best example of this. What is human subjectivity without language?
Agency, at least in this conversation, is about acting out of subjective states. Take the banal example of making a selection from a menu: if I have agency, it is the capacity to choose an item from the list, but I certainly don’t have the agency to control my subjective preference for one item over another, whether I feel hungry or not, and so on. Of course, without the restaurant, the menu, and the wait staff, I can’t order lunch at all. As Levi points out, the idea that agency is some magical ability should be viewed with skepticism. I can’t magically order lunch. We all experience what we call decision-making; as subjects, it certainly feels like we are making a selection from the menu. However, since we can only place one order, there’s no evidence that we could have chosen something else. The same goes for politics and other social values. We all have them. Did we choose them? Do you recall sitting down and deciding whether you are a liberal or a conservative? Do you have the freedom to switch views?
What is it that we want agency to be able to do? It seems to me that the desire for agency is the desire to have the potential to act other than the way we do. Assuming that we largely agree with our own actions, even though the results are often imperfect, I would have to guess that our belief in agency is our hope that others have the potential to act differently, so that there is at least some chance that in the future we might persuade them to do so. But here’s the thing. We already do that. It’s called rhetoric, right? I choose from the menu, but the menu also persuades me. So let’s say that agency is a force. It is directed by thinking, an activity over which we have limited control, an activity that is shaped by our relations with objects both within and without the body. Our agency includes our capacity to affect others’ thinking and decision-making. But agency isn’t something that we have as an ontological quality.
At work on the third chapter of my book this break, titled “digital nonhumanities.” Here is a brief excerpt discussing Alan Liu’s 2013 PMLA article “The Meaning of the Digital Humanities,” along with work by Matthew Jockers and Ted Underwood. The point here is fairly straightforward. The mainstream humanities’ objection to digital methods is the belief that they represent a scientistic faith in machines that is incompatible with the humanistic tradition’s “residual yearnings for spirit, humanity, and self—or, as we now say, identity and subjectivity,” as Liu puts it. Critical theory builds on these yearnings, as I discuss below. Either way, though, the argument against the digital is that it places an uncritical faith in the capacities of machines, resting on what Liu calls the fallacy of separate human and machine orders. My point is that the critique of DH relies upon the exact same fallacy. As such, the real “crisis” represented by DH (at least potentially) is not that it values machine over human but that it might actually move beyond the human-machine divide into a new ontological order…
The complaints often lodged against the digital humanities accuse its practitioners of overlooking the lessons gained from critical theory in understanding the dynamics of ideology, power, and other, variously named cultural forces in shaping knowledge. Another, more Latourian approach would investigate the many actors operating in the formation of digital humanities research. Liu notes this as well, writing that a science and technology studies approach would recognize that “any quest for stable method in understanding how knowledge is generated by human beings using machines founders on the initial fallacy that there are immaculately separate human and machinic orders, each with an ontological, epistemological, and pragmatic purity” (416). Instead Liu suggests that digital humanities methods, like those in the sciences, require “repeatedly coadjusting human concepts and machine technologies until (as in Pickering’s thesis about ‘the mangle of practice’) the two stabilize each other in temporary postures of truth that neither by itself could sustain” (416). For Latour this coadjusting is not a flaw; it is precisely the way in which knowledge is constructed. For Liu, however, such processes put the humanities in a crisis, where “humanistic meaning, with its residual yearnings for spirit, humanity, and self—or, as we now say, identity and subjectivity—must compete in the world system with social, economic, science-engineering, workplace, and popular-culture knowledges that do not necessarily value meaning or, even more threatening, value meaning but frame it systemically in ways that alienate or co-opt humanistic meaning” (419). This is a familiar story, at least as old as Matthew Arnold, in its identification of the technoscientific world as a threat to humanity, but it is more interesting in this context because the humanities here would seem to suffer from the same “initial fallacy” as the sciences in seeking a separation from machines.
Perhaps it is not so much the sciences that ignore the role of machines in the “temporary postures of truth” they produce as it is the contemporary humanities, with their faith in the truth revealed through theory. If the fallacy of separate human and machine orders is rejected, then neither the traditional humanistic yearning nor its more progressive, contemporary, theory-driven version appears less mechanistic in its methods than those of the chemistry laboratory. In other words, while at first glance the difference between the digital humanities and its print-based predecessors may appear to be the employment of technologies in the production of knowledge, this appearance relies upon a mistaken belief that humans and machines are ontologically and epistemologically separate. It is possible that the digital humanities might offer a method to move beyond this fallacy and abolish the divide between humans and nonhumans on which the humanities has traditionally been established, though it would be premature to suggest that the field is doing this now.
At the same time, it is reasonable to hypothesize that the sites where the digital humanities is weakening this divide would also be sites of controversy with the print humanities. For literary studies, Ted Underwood argues that a quantitative approach generates controversy primarily “because it opens up new ways of characterizing gradual change, and thereby makes it possible to write a literary history that is no longer bound to a differentiating taxonomy of authors, periods, and movements” (2013: 16). Underwood explains that evolutionary patterns of gradual change contradict the contrastive study of literary periodization that has defined the disciplinary paradigms of literary study. How is periodization connected to the fallacy of separate cultural/human and natural/machinic orders? The analysis of large collections of texts representing decades of literary production, and the resulting detection of patterns of influence, development, or evolution, suggests that literary production operates across networks that exceed the scale of individual human authors. While postmodern literary theory has diminished the role of the author (a role already less prominent in the discipline, given New Criticism’s “intentional fallacy,” than in mainstream culture), there remains, in extrinsic, cultural interpretations of literature, some exceptional, agential role for the authorial subject. Even if the author’s role is overdetermined by culture, it remains on the cultural side, as opposed to the natural/nonhuman side, with its capacity for immanent change as opposed to an obedience to transcendental, natural laws. In this view, periodization is evidence that symbolic action is a uniquely human trait; it is evidence of the ontological divide between humans and others. As Matthew Jockers remarks following his own digital-humanistic investigation, “Evolution is the word I am drawn to, and it is a word that I must ultimately eschew.
Although my little corpus appears to behave in an evolutionary manner, surely it cannot be as flawlessly rule bound and elegant as evolution” (171). As he notes elsewhere, evolution is a limited metaphor for literary production because “books are not organisms; they do not breed” (PAGE?). He turns instead to the more familiar concept of “influence.” However, influence also reasserts the human/nonhuman divide. Certainly there is no reason to expect that books would “breed” in the same way that biological organisms do (even though those organisms reproduce via a rich variety of means). If literary production were imagined to be undertaken through a network of compositional and cognitive agents, then such production would not be limited to the capacity of a human to be influenced. Jockers may be right that “evolution” is not the most felicitous term, primarily because of its connection to biological reproduction, but an evolutionary-type process, a process as “natural” as it is “cultural,” as “nonhuman” as it is “human,” may exist. Regardless of whether one is convinced by such an argument about literary history (and even Jockers and Underwood remain skeptical), it is evidence that the controversy the digital humanities presents lies not in its assertion of an ontological divide between humans and nonhumans, nor, more precisely, in its preference for the measurement of machines over the interpretation of humans, but rather in its erasure of that divide.