Digital Digs (Alex Reid)
Clearly there are majors, primarily in professional schools and in the sciences, that certify students as being capable of doing certain things, of having some requisite know-how. Nurses, teachers, accountants, and engineers are all clear examples. Students with science degrees who go on to work in some capacity in labs might be another set of examples. However, there are a good number of students across the humanities, arts, social sciences, and even some business areas for whom that is not the case. True, every major prepares students for graduate study in its discipline, but for many students that’s not a concern. They aren’t getting degrees in psychology or English to prepare for graduate school, at least not in the way that nursing or chemical engineering students are getting degrees to prepare for jobs in those fields.
Now, if one looks at the learning outcomes for BA degrees from sociology and communications to English, history, and philosophy to media study and visual design, would anyone be surprised if teaching students to write/communicate, think/read critically, and conduct research were heavily featured on every program’s list of outcomes? No doubt there will also be something about the content and methods of the discipline (e.g. knowledge of literary history and literary criticism in the case of English). But my contention is that those outcomes are of limited value to undergraduates in comparison to the “soft skills” that are promised. This does not mean, though, that the degrees are interchangeable in terms of the soft skills they offer, even though those skills might sound alike. Writing, thinking, reading, and researching turn out to be shaped by the disciplinary-activity system in which they operate: literary critics do not write, read, or research like sociologists. These differences turn out to be the basis for an argument for majors: students get a sustained, in-depth experience with a particular (disciplinary) way of thinking. Maybe. All depths are relative. And ultimately what I’m heading toward here is not a free-for-all jumble of courses but simply one that is non-disciplinary.
Clearly there are some colleges, typically small and experimental, where such curricula are commonplace. And virtually every college offers some mechanism for a roll-your-own-degree. But it is even more clear that these are exceptions to the rule. There are any number of disciplinary-institutional-historical and bureaucratic-pragmatic reasons for having majors. As compelling as those reasons are, it is just as interesting to realize that majors are historically and bureaucratically contingent. They aren’t necessary. So the question really should be: what are the costs and benefits of having them?
I do think there is value to a programmatic education, to being able to build from one semester to the next. At the same time, there is something powerful in the idea of a curriculum that does not take its primary obligation as disciplining students, that is, in providing students with some introductory conception of a discipline. This is familiar to many rhetoricians. We don’t tend to teach undergraduate courses as an introduction to our discipline. Where there are majors, they tend to be in technical/professional writing; that is, majors that have been designed by asking the question, how might an undergraduate make use of rhetorical knowledge? We could still ask this question without answering in the form of a 36-credit (or so) major. We could answer those questions in 9-credit segments, for example, or in thematically integrated learning communities. This is not an argument for such a curricular revision, so much as an argument that such structures are possible and worthy of consideration. We have little basis for arguing that such structures would be better or worse than our current curriculum. Undoubtedly, they would produce different results. One wouldn’t make such changes in an effort to do a better job of what we are currently doing; the point would be to pursue different intellectual goals and to design a curriculum better suited to those ends.
The major obstacle to even considering such changes is the mental trap of disciplinarity itself. Disciplines want to live on and members of disciplines want their disciplines to live on. I am not suggesting the erasure of disciplines. No matter what we do, assemblages, paradigms, activity systems, networks, and such will form and operate; disciplines are a part of that. I am only suggesting a shift in the way disciplines interact with undergraduates through the curriculum. I am suggesting that if disciplines are serious about the soft skills they purport to deliver (and often tout as the primary value of their majors), then they might think more creatively about how they offer them to students. Ultimately I think a curriculum like this might be more attractive, and more valuable, for undergraduates than many of the disciplinary silos in which they currently operate.
Anne Balsamo writes in Designing Culture that “Shift work is a fact of life in a 24/7 age. Unlike shifts that start and end with a punch clock, working the paradigm shift is one long now.” Designing Culture is a book about innovation and changing technological literacies; it’s a book about Balsamo’s unusual (for a humanist) experiences at Xerox PARC; and it’s about the future of the university in a digital age. But it is also a book that searches for and insists upon a role for the humanities in technological development. In the end, Balsamo summarizes that role in the following way:
Contribute expertise in the assessment and critique of the ethical, social, and practical affordances of new technologies; provide expertise on the process of meaning-making, which is central to the development of successful new technologies; provide appropriate historical contextualization.
Balsamo also describes roles for artists, social scientists, engineers, computer scientists, and physical scientists, but it is the humanist’s role that interests me here. Briefly put: to historicize, interpret, and critique. It’s a fair description inasmuch as that is what humanists tend to do in any context, so it makes sense that they might serve that same role in technological development. Of course, the challenges are convincing others that such functions are valuable, and then that humanistic methods can provide knowledge that can be put to use in design. The first challenge could be tough, but I think the second is even more daunting, as even the humanists themselves might baulk at the notion of being “useful.” In some ways, though, these two might be the same problem. That is, if one can frame one’s work as useful, as contributing productively to a larger activity, then perhaps it becomes easier to see how the traditional methods of the humanities might be valued.
However, there’s another way to frame the shifting work of the humanities. Understandably, Balsamo’s book talks a lot about the future and the various ways that we try to imagine/describe it. What if the shift of the humanities was from interpreting the past to inventing the future? Here I am thinking of Greg Ulmer’s keen observation in Heuretics of the split between heuristic and hermeneutic uses of theory. The humanities have largely focused on hermeneutics, on interpretation, and have always made use of their theoretical methods (from poststructuralism and cultural studies to the digital humanities) for interpretive ends. Ulmer’s work though demonstrates the inventive potential of those methods. What is visible on the inventive edge of humanities methods is the capacity for speculating about human potential: what might we be? This can be dangerous work of course, and it is work that requires interpretation or historicizing. However it is also the kind of work that has value in design.
It is self-evident that we continue to struggle with figuring out how to live with digital media and networks. Sebastian Thrun’s recent admission of the failure of MOOCs to live up to their hype (stunning, I know) is one recent example. We clearly haven’t figured out how to design for learning on that scale (if it is even possible). But I am thinking more of Ian Bogost’s latest Atlantic piece on our state of “hyperemployment.”
After that daybreak email triage, so many other icons on your phone boast badges silently enumerating their demands. Facebook notifications. Twitter @-messages, direct messages. Tumblr followers, Instagram favorites, Vine comments. Elsewhere too: comments on your blog, on your YouTube channel. The Facebook page you manage for your neighborhood association or your animal rescue charity. New messages in the forums you frequent. Your Kickstarter campaign updates. Your Etsy shop. Your Ebay watch list. And then, of course, more email. Always more email.
Bogost reminds me of Trebor Scholz’s description of “immaterial free labor” in this First Monday article. Scholz writes “People like to be where other people are. They enjoy using these platforms: from entertainment, to staying in touch with friends and family, to chatting, remixing, collaborating, sharing, and gossiping, to getting a job through the mighty power of weak links. It’s a tradeoff. Presence does not produce objects but life as such that is put to work and monetary value is created through the affective labor of users who are either not aware of this fact or do not mind it (yet).” Bogost and Scholz are each offering critiques of our wayward digital lives, the ways that we seem to become chained and addicted to our devices, the work we are continually doing in their name (even as we imagine we are saving labor), and the resulting wealth we are creating for others as prosumers.
If we are going to design culture, if we are going to take up the Xerox PARC refrain that the best way to invent the future is to build it, then we need to invent new ways of living, ways that are not in service to technology or profit but are also not blindly beholden to antiquated notions of human nature. Instead we need to recognize that we are inventing ourselves along with our technologies. No doubt it is a grandiose role to put oneself in: inventing future humans. And who knows to what extent any of us, or all of us, can really shape that future. But declining to make the effort makes little sense either. Nor do I imagine that as solely the role of humanists, as if we somehow know the answers as to what we all should or could be. But it is a place where humanists who take up an inventional approach to their methods could have a productive role.
Discussions of big data and pedagogy typically focus on the relative merits of analytics for assessing and improving curriculum and teaching practices. Michael Feldstein has a good piece on this from a few months back where he argues,
Right now, what we’re trying to do is a little like trying to conduct physics research before somebody has invented calculus. You can do some things around the edges, but you can’t describe the really important hypotheses about causes and effects in learning situations with any precision. And if you can’t describe them with precision, then you can’t test them, and you certainly can’t get a machine to understand them.
In other words, maybe but not yet. I’ve wondered about this with writing assessment. I’m not sure if there is anyone out there using the methods developed in the digital humanities for studying literary corpora to study student writing. Such research is not suited, at least not initially, to determining the quality of writing, and thus whether students are meeting some standard, which is typically what assessment is investigating. However, it could tell us things about linguistic diversity, topic modeling, use of citation, and other textual features. In other words, it could tell us something about how student writing is changing over time. Maybe we could, through some secondary analysis, connect shifting textual features (e.g. paragraph or sentence length to give a basic example) with “good writing.” Maybe. Of course, turning this into a measure of pedagogy is another matter (turning it into a mechanism for machine grading is an even more distant step in my view). As Feldstein says, maybe, but not yet.
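The textual features mentioned here are straightforward to compute, even if interpreting them is not. Here is a minimal sketch in Python, assuming a toy two-essay “corpus”; the sample texts, the feature choices, and the citation-matching pattern are all illustrative stand-ins, not a real assessment instrument:

```python
import re
from statistics import mean

def text_features(text):
    """Compute simple features of the kind mentioned above: average
    sentence length, lexical diversity (type-token ratio), and a rough
    count of parenthetical citations."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    # Very rough pattern for citations like (Smith, 2001) or (Lee 2014).
    citations = re.findall(r"\([A-Z][A-Za-z]+,? \d{4}\)", text)
    return {
        "avg_sentence_len": mean(len(re.findall(r"[A-Za-z']+", s)) for s in sentences),
        "type_token_ratio": len(set(words)) / len(words),
        "citation_count": len(citations),
    }

# Two toy "student essays" standing in for a real corpus.
essay_a = "Research shows this (Smith, 2001). The claim is clear. It holds."
essay_b = "Recent work complicates the picture (Lee 2014). Scholars disagree sharply about it."

for name, essay in [("essay_a", essay_a), ("essay_b", essay_b)]:
    print(name, text_features(essay))
```

Run over thousands of essays and grouped by year, features like these could chart the kind of longitudinal change described above without ever pronouncing on quality.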
However, I have another point that is really about reversing the relationship between big data and the individual student writer. Pedagogy begins (and ends) with the belief that individual students learn and that the learning experience of individuals is ultimately what we want to measure and value. It makes sense. As a student, I pay to learn; I get a degree; and when I leave college, I want to take something with me. At the same time, we recognize that learning is social and environmental; if it weren’t, then we wouldn’t have schools in the first place. So even though it is meant to have predictable effects upon individuals, learning is relational. (Indeed, I would argue that learning is a cognitive activity and all cognition arises from relation; thinking is not “inside.”) The more we start to think about thinking as a relational, networked activity, the more we might also want to shift our focus toward understanding the collective activity rather than the individual one. The point, then, is that rather than being concerned about what big data can tell us regarding individual experience, maybe we should turn toward thinking about pedagogy as something that shapes a massive, collective activity that we are now gaining the ability to see. Think of this in terms of climate change. What would it mean to think of pedagogy as shaping the climate of learning? Individual activities, like changing our consumer habits, can have an impact on the climate, but changing individual behaviors is a means to an end that cannot be seen on the individual scale.
How does this apply to writing pedagogy? Though there are a wide variety of teaching practices out there, I’d argue they all share a focus on changing the individual behaviors of student writers. We may contend that there are social-cultural-ideological factors to discourse communities or activity systems or whatever term we want to use for the context in which we write, but we still end up focusing on the writing processes (or products) of individual students. If a student’s essay doesn’t meet a standard, then the problem is addressed by focusing on that student’s writing process/behaviors. Even when we acknowledge that some of the causes for the problem are systemic, we still identify the problem as manifesting on the individual level. We pose the problem and solution on the individual scale. So what would writing pedagogy look like if it were designed to teach the collective rather than the individual? I know this sounds, well, inhuman, but if, as I have asserted elsewhere, writing is not a strictly human activity, and the goal is better writing, then why focus on individual humans? It seems to me that when students struggle with writing, it is because they are caught up in networked activities or assemblages that perhaps were productive once upon a time for some purpose but are no longer. We commonly recognize as instructors how difficult it is to shift students’ writing practices. In my view this is partly because we are seeking to make changes at the wrong site. This is the recognition of activity theory, though I believe activity theorists continue to put too much focus on the humans in their systems, which is fine if your interest is in studying human activity but is less useful if one’s interest is in the system itself. Writing is a systemic activity, and the logical extension is that we would alter it on that scale.
This doesn’t mean that students don’t “learn to write;” it just changes what that phrase means to something like learn to operate within a compositional network. However, it also means that the performance of students within that network cannot be attributed solely to the students. Understanding that big data allows us to see writing on a new “real” level, in the same way that information technologies have allowed us to see climate, is a significant change. It doesn’t mean that the individual student’s writing isn’t real any more than climate means that the raindrops on your head aren’t real. It just gives us a different (and compelling) explanation for how those local phenomena arise.
This post takes up where the last one ended. In discussions of theory, I often hear poststructuralism described as a Copernican moment. Just as the Earth was displaced from the center of the universe, so with poststructuralism, we say, the human subject is decentered. Maybe. We can recognize, “in theory,” how subjectivity, agency, rationality are treated; it’s all postmodernism 101. So we are familiar with this analogy in the humanities. At the same time, poststructuralism is carried out through the work of individual philosophers and often through close readings. Certainly the work that has been undertaken with these methods has continued in the form of close readings. One of the compelling qualities of the Ptolemaic model of the solar system was that it was predictive. Based upon careful observations and measurements of the visible universe, the geocentric model could anticipate the movement of heavenly objects. In short, it was self-validating within its own metaphysics. Of course, the geocentric universe also depended upon having access to a limited amount of information about the heavens–what could be seen by the naked eye.
As is maybe obvious, close reading is likewise limited to what can be seen of texts by the naked eye. As is perhaps also obvious, the geocentric model rested implicitly on the premise that the universe was made for humans to experience. Close reading also rests on the premise that texts are made for humans to experience. The text/close reading argument seems more plausible because texts are written by people for people… right? Well, at least we can say that texts are the product of cultural/social forces as opposed to the natural forces of the universe… right? Suppose instead we asserted, in a Latourian sense, that texts are a product of human and nonhuman forces, that they are not necessarily made “for humans” any more than heavenly bodies are.
If we made such an assertion, then what would we make of the methodology of close reading? What kind of knowledge would we say that it produced? Certainly it could tell us something, in a kind of ethnographic way, about how humans experience texts. And really that’s all close reading ever aspired to be. No one would claim that an interpretation would tell you what a text “really” is. It’s just that we never really gave much thought to there being anything worthwhile about texts beyond our relationship to them. This is what I see in digital humanities: an investigation into how texts operate in a scope that is outside our direct experience with them. (Set aside for a moment, if you will, the correlationism issue.) In this context, I’m not sure how close reading and macroanalysis (to use Jockers’ term) will play together. I’m sure people will continue to do both. It just seems to me that if we accept the ontological premise of macroanalysis then close reading becomes a strange kind of practice, more like astrology than astronomy, imagining that the stars tell us about ourselves.
To begin with a caveat: I’m not in the literary studies business, let alone in the digital humanities area of literary studies. So my interests in these two recent books (Jockers’s Macroanalysis and Underwood’s Why Literary Periods Mattered) are likely idiosyncratic when viewed from within the context of those fields. [Let me say by way of immediate digression that these are two excellent books, accessible to those without technical DH expertise, and I would think germane to any literary scholar, as well as anyone interested in DH.] Though I don’t do lit crit, I do share an interest in theories/concepts of rhetorical composing, and obviously there is at least some historical association between literary studies and rhetoric that has shaped both fields’ approaches to this issue. As curious as I am about digital methods, and particularly “big data” analysis, what I want to focus on here is the way both Jockers and Underwood encounter and address the concept of evolution.
To be clear, Underwood doesn’t really employ the term evolution; that would be my imposition. Instead, he talks about a tension between a tendency to view history in terms of rupture and contrast and the tendency to see history as a more continuous process of change. Underwood argues that literary studies’ emphasis on rupture helps to explain the enduring role of literary periodization in the field. As he writes,
The introduction of quantitative methods in literary history is controversial for a host of reasons, but I would argue that it matters above all because it opens up new ways of characterizing gradual change, and thereby makes it possible to write a literary history that is no longer bound to a differentiating taxonomy of authors, periods, and movements.
Jockers does write more directly about evolution, though he is careful to make clear that “Evolution is the word I am drawn to, and it is a word that I must ultimately eschew” at least in part because “books are not organisms; they do not breed.” Instead, he suggests that “Information and ideas can and do behave in ways that seem evolutionary” (my emphasis). Both Underwood and Jockers are ultimately cautious about any claims regarding gradual change, just as they are cautious about any claims they make for DH/quantitative methods (only going so far as to suggest these methods can complement traditional literary studies). Perhaps this is their genuine position; maybe it is a rhetorical decision. Generally speaking, being conservative about the claims that come from research is admirable. I won’t fault them for not sharing my own rhetorical faults, which make it difficult for me to adopt caution.
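The gradualist measurement Underwood describes can be illustrated with a toy example: track a single textual feature across publication years and estimate its drift. Everything here is an invented stand-in for a real study (a three-snippet “corpus,” average word length as the feature); the point is only that a smooth slope is the quantitative signature of continuous change, as opposed to rupture between periods:

```python
from statistics import mean

# Toy corpus: (publication year, text). A real study would use thousands of texts.
corpus = [
    (1810, "the moral sentiments of the heart are thus revealed"),
    (1850, "the whale moved through dark water and the men watched"),
    (1890, "she walked to town and bought bread and spoke to no one"),
]

def avg_word_length(text):
    """A deliberately simple stylistic feature."""
    return mean(len(w) for w in text.split())

def linear_trend(points):
    """Ordinary least-squares slope for (x, y) points: how fast the
    feature drifts per year across the corpus."""
    xs, ys = zip(*points)
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in points) / sum((x - mx) ** 2 for x in xs)

points = [(year, avg_word_length(text)) for year, text in corpus]
print(linear_trend(points))  # feature drift in letters per year
```

A period-based account would bin these texts into movements and compare the bins; the regression instead characterizes change as continuous, which is the methodological shift Underwood is pointing at.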
Instead, when it comes to such decisions, my tendency is to burn that bridge when I come to it.
My willingness to push this matter (not in terms of literary studies necessarily, which is not my field, but more broadly in our understanding of rhetorical composing) is fueled by my long-term interest in Deleuzian philosophy. Underwood references Guns, Germs, and Steel as an example of a trend toward gradualist historiography (though he also notes that historians have fewer disciplinary issues over gradual change and that literary periodization can be read as a way that literary scholars have sought to distance themselves from historians). When I was reading this though, I immediately thought of DeLanda’s A Thousand Years of Nonlinear History as well as Deleuze and Guattari’s use of the concepts of onto- and phylogenesis:
We may distinguish in every case a number of very different lines. Some of them, phylogenetic lines, travel long distances between assemblages of various ages and cultures (from the blowgun to the cannon? from the prayer wheel to the propeller? from the pot to the motor?); others, ontogenetic lines, are internal to one assemblage and link up its various elements or else cause one element to pass, often after a delay, into another assemblage of a different nature but of the same culture or age (for example, the horseshoe, which spread through agricultural assemblages).
And as that passage indicates, these concepts are closely connected with assemblage theory. When we start to think about biological evolution and speciation, one argument we might have is whether species are “real” or only epistemological categories, and then, if they are real, what relationship might pertain between a species and one individual member of that species. For example, in A New Philosophy for Society DeLanda argues, “a biological species is an individual entity, as unique and singular as the organisms that compose it, but larger in spatiotemporal scale. In other words, individual organisms are the component parts of a larger individual whole, not the particular members of a general category or natural kind.” Needless to say, there are a lot of sides to that conversation. I don’t want to get into that now, so I’m just going to take DeLanda’s passage and run with it.
We can recognize, following Latour, the Modernist impulse to assert ontological differences between nature and culture. I read this tendency at work in Underwood’s study of literary periodization, a belief that literary production (at least) is not governed by natural laws. That is, Underwood explains how the development and maintenance of literary periods reflects a disciplinary-paradigmatic insistence on a certain ontology for literary production. Of course the Latourian argument is no more that culture must obey natural laws than it is to say that our understanding of natural laws is nothing but culture. What we might get from Deleuze/Guattari, DeLanda, Latour, and others is a different ontology. So, if one were to investigate influences on Melville and Moby Dick or stylistic/thematic similarities across a massive corpus of nineteenth-century novels (two examples from Jockers), how might assemblage theory interpret the “seeming” evolutionary relationship among texts that might suggest that novels are a species of rhetorical composing? I couldn’t give a specific answer to that question, but in principle it would not argue that a species has essential characteristics that it imparts to its members. The point is that “the novel” would not exist in some ethereal plane imparting qualities on texts. The novel “species” is the entity that is made up of all the novels. It doesn’t exist anywhere else any more than I exist somewhere beyond all the cells that make up my body. And yes, my cells come and go and I persist, though at any given moment I am only my cells. Novels come and go too, disappearing out of print, but the novel persists. (OK, there are philosophical issues that require more time than I can devote here.)
Nevertheless, from this perspective, one can study the species of novels just as one could study an individual novel (different methods would be required, but my point is that these entities share an ontological foundation; they are equally real). Sure, one might object that one could never find “all of the novels” and there would be all these boundary issues (is this a novel or not?). There are similar “problems” in biology, but to study species of birds you don’t have to study every individual bird. And there are boundary issues with assemblages, though they aren’t a problem philosophically. To the contrary, assemblage theory is designed to address ontological complexities. Epistemologically, as researchers trying to understand rhetorical composing (literary or otherwise), the construction of knowledge is always about the selective reduction of complexity in the pursuit of usable knowledge.
My point here is that I believe there is a theoretical/philosophical basis in the various strands of speculative realism/nonhuman turn/etc. that meshes well with the kind of work Jockers and Underwood are doing. So let me try to end this post by bringing it back to their two books. I think there is more to what is happening with big data analysis than something that will complement (or compliment) traditional literary studies. I think there is more going on here than literary studies maybe needing to recognize that the discontinuous periodization approach to literature might be balanced by a more gradualist analysis of literary practices (which is where Underwood ends up, I think). I do agree with both Jockers and Underwood that DH is not about computers providing a more authoritative, objective, or scientific account of literature than close reading. Instead, as I think they both recognize, big data analysis introduces a different ontological foundation for rhetorical composing. The result is that close reading can no longer mean what it once did. What it will come to mean for literary studies is anyone’s guess. For my own field of rhetoric and composition, which has its own history of close reading from rhetorical philosophy/analysis to our understanding of student composing and learning, I think big data and assemblage theory must necessarily reorient our approach to the way we understand how texts are produced and the role they play in the world.
It’s a conversation that begins with propriety and manners, moves into legalese and institutional policy, and ends up with moralizing. What should or shouldn’t professors (and other college instructors) say about their students on Facebook (or other social media)? I am less interested in the answers to that question than I am in the ways that our attempts to answer reveal something about what faculty understand the rhetorical space of social media to be. For some reason I’ve seen this conversation in a variety of places of late, including the writing program administrators listserv. There are really three basic positions:
- abstinence: faculty shouldn’t remark about their students
- positive-only: faculty should only say nice things about their students
- in private: faculty who complain should severely limit access to those posts
That last position is really conciliatory (no one really thinks there is such a thing as “private” on facebook) but it recognizes the fourth, unrecommended position: to just fire away on facebook. Clearly there are some things one might say that would violate laws (FERPA, slander, and sexual harassment come to mind), but I’m not sure that these matters apply differently online than they would in any public space (except that maybe you are more likely to get caught when you publicize your violation to the whole world and leave a record of it). But this conversation isn’t about those kinds of communications. It’s mostly about venting, which could take the form of calling out an individual student, I suppose, (e.g. “Johnny is really annoying me with his behavior in class”) but tends to be more general and anonymous (e.g. “I hate it when the students in my class don’t read!” or “Not looking forward to grading this giant stack of student papers”). We’ve all felt these frustrations about bosses, colleagues, customers, clients, students, etc. (as well as spouses, kids, parents, friends, coaches, and people). Venting is a common rhetorical function of social media, as it is a common function of communication. However, we generally recognize that there is a time and place for these things.
So here is where we encounter the matter of propriety. We say that it looks bad to vent about your students. Maybe. I would say, like all rhetorical acts, it comes down to skill. As we would say to our students, who is your audience and what is your purpose? There are rarely good answers to these questions with the verbal diarrhea of venting. I imagine there is some psychological motive here; maybe it feels better after you’ve vented, and getting some confirmation from one’s friends also helps. We don’t intend those statements to get back to our students. I’d guess that it rarely happens (unless you’ve friended your students!), so it is unfortunate when it does. However, I’d also have to say that this propensity we have now for taking offense has reached absurd levels. How many reality TV shows are essentially driven by the plot line of someone feigning injury at something that was said out of earshot? There is no doubt that we are still somewhat at sea in terms of figuring out how to make social media part of our social lives. Furthermore, social gaffes, which remain common in all forms of communication, are just that much more public online, so they’re harder to smooth over.
However, then we get to matters of moralizing. Here we say that faculty shouldn’t even say nice things about students because the other students might feel bad. Really? Is there a university website that doesn’t profile successful students? I’m wondering about that standard to which we are holding faculty. Actually, I believe this is a matter of viewing the classroom as primarily a confidential space between a teacher and a student. Even though the classroom is clearly a kind of limited public space, we tend to think of the interaction between teachers and students, especially in terms of student writing, as private conversation. I believe this is a product of our modeling writing pedagogy on the one-to-one mentoring relationship. In this view, venting is not only a matter of impropriety but an immoral act that reflects upon one’s character. It becomes an action that should inform hiring decisions. Perhaps, some suggest, there should be a policy by which one could lose one’s job for venting. Comparisons are made to lawyer and doctor confidentiality.
It’s a strange twist from those who would elsewhere defend academic freedom. We all know that the typical classroom-based scholarship that has driven much of rhet/comp does not require IRB approval or informed consent from the students. Why? Because it’s not confidential. If it’s not confidential when you publish it in a journal, then it’s not confidential when you publish it on Facebook.
So what happens if we move in the other direction? What happens if student writing becomes public? What if the discussion between teachers and students happened in public? Well, it might look something like this. Scary, huh? I’ve been teaching in public, online spaces for close to a decade. I won’t claim that it is revolutionary. However, if we started to think of classrooms as public conversations, or at least as having a public dimension, then perhaps it would alter our rhetorical orientation toward the class when we were inclined to vent. Instead of moving from one private conversation (with students) to another (with Fb friends), we would see two public spaces that overlap. That’s not to say that there wouldn’t continue to be social gaffes, but it might shift this moralistic response.
Personally I don’t vent on social media about my job. It’s not a moral choice or even a rhetorical one. I guess I’ve never really felt the need. If you’re my Facebook friend, you’ll see I am not a big sharer. You might find me the same way face-to-face. My own preferences aside, I see our job as rhetoricians in this decade as investigating social media so that we’ll be able to help students develop their own digital literacies. Not that we get to be the “deciders” about what will or will not (or should or should not) happen, but we do need to develop an understanding and a way of teaching related to the digital world. I don’t think this starts with establishing a moral code whereby we try to separate writing from public online spaces.
Studies in the past two decades indicate that people often understand and remember text on paper better than on a screen. Screens may inhibit comprehension by preventing people from intuitively navigating and mentally mapping long texts.
In general, screens are also more cognitively and physically taxing than paper. Scrolling demands constant conscious effort, and LCD screens on tablets and laptops can strain the eyes and cause headaches by shining light directly on people’s faces.
Preliminary research suggests that even so-called digital natives are more likely to recall the gist of a story when they read it on paper because enhanced e-books and e-readers themselves are too distracting. Paper’s greatest strength may be its simplicity.
The article does conclude, though, by noting that electronic texts now incorporate interactive and multimedia elements that are simply not possible to achieve in print. (Wow, I did not know that. Thanks, Scientific American.) Jones carefully reviews some of the research Jabr references to reveal some of the problems with the specific claims about reading comprehension and the cognitive demands of e-readers. I recommend that you take a look at it.
I am interested here in the more abstract question of reading itself. Jabr turns to Proust and the Squid to observe that “we are not born with brain circuits dedicated to reading, because we did not invent writing until relatively recently in our evolutionary history, around the fourth millennium B.C. So in childhood, the brain improvises a brand-new circuit for reading by weaving together various ribbons of neural tissue devoted to other abilities, such as speaking, motor coordination and vision.” So reading isn’t “natural,” right? It’s a learned, “social” activity. It’s an activity tied to various constellations of technologies, as Jones points out. So the brain has to develop the visual acuity to recognize letters and words. Depending on whether you’re reading up or down, left to right or right to left, and so on, the brain operates differently. Furthermore, the brain faces different demands and calls on different operations depending on the content and genre of the text.
Perhaps, as Jones suggests, the issue here is related to education and the possibility of students being asked to use e-readers rather than textbooks. Can Johnny learn from his Kindle or not? Of course these matters beg the question. If learning or comprehension is what happens when you read a paper text, then you should probably read a paper text if you want to learn or comprehend. If e-readers need to simulate the cognitive experience of the paper text, they are always going to miss the mark to some extent. That seems fairly obvious to me. However, another discovery from the realm of the painfully obvious is that it would be impossible for me to write this or you to read this on paper. I wouldn’t have had access to John’s piece (or Scientific American). I wouldn’t have found out about either if it weren’t for Facebook. The very first problem, in my mind, with these questions is that they assume that reading, comprehension, and learning are solitary activities. Perhaps, on paper, they seem that way; the networks are so distant and non-responsive. The second problem has to do with scale. It presumes that reading and comprehension should necessarily scale to the cognitive load of the individual human reader. The distant reading practice in the digital humanities is a version of this problem. Anyone who has been a grad student and had to devise reading lists for a PhD qualifying exam also encounters this. Derek Mueller has discussed this as a fundamental “reading problem” in our discipline. And the third problem, as a result, is that we end up pointing the finger in the wrong direction. Our concern shouldn’t be how to make the reading experience of e-readers more like the experience of reading a book. Our concern should be understanding and developing the cognitive potential of emerging technologies.
If we accept this argument that the difference between the cognitive operation of paper reading and that of screen reading is significant, then this is really a problem for those who would want to hold on to the value of print literacy. Suddenly they are providing horseback riding lessons in the age of the automobile.
The Chronicle offers a review of a recent book by Howard Gardner and Katie Davis, The App Generation, which “puts forward a new framework for thinking about young people’s changing experience of intimacy, identity, and creativity. The change that cuts across all those areas is a heightened ‘risk aversion’ among modern kids, who, as Mr. Gardner puts it, tend to ‘see their whole life as a series of apps.’” Some of these observations are likely familiar. For example, “kids today” are more in contact with their parents than a generation ago, but they are likely to mediate their social relationships through texting and social media. The result is that their friend groups have fewer intimate friends and a larger group of weaker social ties. We are all familiar with how Facebook or Twitter enable us to stay in better touch with those colleagues we see once a year, the people we went to high school with, distant relatives, and so on. Some of the claims are more surprising and perhaps debatable:
When judged by changes in things like genre, plot, and setting, teens’ writing was found to have grown more conventional over time. For example, the authors found a decline in what they call “genre play,” meaning tales that deviated from a standard “realist perspective” by incorporating absurdist themes or fantasy elements. In the 1990s, 64 percent of the stories featured genre play. Among later stories, however, 72 percent showed “no sign of genre play at all.”
I haven’t read the book, so obviously I can’t really comment on the research itself, but that conclusion is surprising to me. Given the obvious fascination with “teen paranormal romance” as a genre, it’s difficult to imagine students aren’t interested in “fantasy elements.” If anything, you’d think the argument would be that a realist perspective would be unconventional, especially since this earlier Chronicle story discusses banning zombies from the creative writing classroom. Is it possible that students exhibit less creativity now than a generation ago? Sure, it’s possible. I’m not sure, though, that social media, rather than institutional educational experiences, are the cause.
I am mostly interested in the identification of contemporary students as “risk averse.” This is something that often comes up in our teaching practicum in terms of the ways students approach their compositions. I do think that the habits of our educational testing culture, plus the careerist orientation of college, have shifted the calculus of risk-taking in the classroom. I’m less sure that students today are more risk averse than previous generations when it comes to things like sexual activity or alcohol and drug use. Certainly universities do a lot of student service programming in these areas. Furthermore, given the typical messages students receive about social media (e.g. warning them not to put up party photos or to beware of stalkers), maybe it isn’t surprising that they are worried about the risks of identity formation online.
The other thing I’d point out is the particular historical context that always seems to appear in these conversations (and seems to inform this book as well). That is, “kids today” are compared to the last two generations (Boomers and Gen X). OK. The key for me here is that the notion of “teenager” didn’t really exist in previous generations. Why? I think this has to do with extended schooling and the related prolonged adolescence. The whole teen high school culture and going to college as a significant feature of American life (at least for white, middle class kids) didn’t exist before the late 50s/early 60s. So I’m not sure if the fact that “kids today” don’t have behaviors like those of teens in the 60s or 70s is all that significant. Teens in the 60s or 70s didn’t have much in common with teens from the 20s or 30s either.
So what might another approach to this question of “apps” and intimacy, creativity, and identity formation be? If we begin with the ontological premise that agency and cognition–the actions and thoughts that produce relations, ideas/expression, and subjectivity–are neither intrinsic nor exclusive to humans, then we would recognize that the human experience of intimacy, creativity, and identity have always been products of a network, that they have never been “ours” in the sense of being intrinsic to us, even though we have experienced them as internal. When we ask questions like “What is a healthy level of intimacy?” or “Are we more or less creative than we used to be?” or “How should identities be formed?” perhaps we need a different understanding of how these subjective effects are produced. These are also obviously value-laden questions, moral judgments even. This is also familiar to us, going back to Socrates: the ways in which rhetoric, communication, and media technologies are woven into moral arguments. Let’s imagine some vaguely dystopian (to us) future where relationships are more data-driven (e-harmony for all my friends), where creativity becomes more collective, networked, and procedural (and less the iconoclastic, romantic vision), and where identities are more structured by apps. Do I want to live there? No. But can I say that society would be objectively worse than my own? No, I don’t think so. Our inclination (and here I am not talking about this specific book by Gardner and Davis [which I haven't read] but more generally in our discourses about social media) is to worry that technologies will violate our “human nature.” And that’s possible. Technologically-induced climate change could make our world unlivable; I’d call that a violation. 
On the other hand, the argument that there is a normative/natural state for teens (one we can find 50 years ago) and that social media are interfering with our natural development is strange to me. Are we changing? Yes. Should we study those changes? Absolutely. Are we changing for the better? I don’t know. What’s better?
I was wandering through some recent Inside Higher Ed articles. Here is one on the debate over the merits of the “flipped” classroom and another on the importance of developing students’ “cultural capital.” These seemingly disparate topics are connected for me in the way they both demonstrate some kind of magical reasoning for how learning will happen. In the one article, students are uncultured because they lack some appreciation for fine arts, the humanities, and such. So the argument is that we should be requiring students to go to museums, the opera, etc. Unfortunately it seems that requiring students to go to such places does not have the magical effect of making them “cultured.” In the other article, the complaint was that flipped classes were more work for students and faculty and maybe don’t offer improved outcomes in comparison to traditional classrooms. Unfortunately it seems that just flipping your class does not magically make it better.
What is it that we imagine pedagogy to be? A magic wand?
To put these issues more generally, the problem with the “flipped” classroom conversation is that it begins with the premise that the goals of the curriculum will remain the same. And the problem with the “cultural capital” conversation is that it is staged on the premise of student deficit and weakness-fixing and really fails to recognize that viewing education as weakness-fixing is a choice one makes (and a choice with some significant consequences). Both also represent a kind of magical reasoning, in my view, which holds that by meeting current goals and filling in student gaps, students are transformed from some barbaric, ignorant state into some more fully human creature. Perhaps that seems hyperbolic, but think about it this way. One begins by imagining that the average college student is uncultured, a state that has been reached through 18 years, including 12 years of public education. While no one imagines that attending a few symphonies or taking a sequence of general education courses will fully transform the student, there is some faith in the notion that the semester a student spends taking some course will have some tangible impact. It’s not unlike the belief that a semester spent in a composition class will transform students into good writers (another deficit).
There are plenty of articles about the demise of the humanities and problems with undergraduate education in general, that our students are “academically adrift.” Maybe that is all journalistic hype. Maybe we think there’s nothing wrong with the means and ends of curriculum. However, if one is of the mind that reform is needed, then it has to begin with examining the ends/goals and trying to understand how those goals are connected to methods, which are in turn reflective of (outmoded) activity systems. That is, if I say that my pedagogy is made up of reading texts, writing papers, test-taking, lecturing, and class discussion, then I might recognize these as a constellation of activities borne by a certain material context. But then, I also need to think about how the goals of my course have been derived from those activities. That is, if I say my goal is for students to write in a particular essay genre, how has that goal been conditioned by the context in which it developed? Given that I am now in a different context, how should the goal change? Not wanting to change the goal is a kind of magical reasoning about the power of essays (in my example) and by extension the magical power of lecturing (ha!) that drives the particular kind of essay writing.
If we think about general education, what happens if we don’t plug all those holes in the students’ knowledge? Conversely, what magical thing happens in that class you’ve taken to meet that distribution requirement in art or humanities or science or whatever? If the goals remain the same (“Fill the holes!”) then methodological innovation is severely truncated. That’s not to say that we should innovate for the sake of innovation, but I don’t accept the inverse argument either, which is that things should stay as they are unless we have a good reason to change them. Or maybe my point is that we do have a good reason, which is that the world is changing around us. So the means and the ends of curriculum have to shift together in a dialogical relation, I suppose, and also hopefully with some guard against our tendency to imagine that learning just magically happens without some real attention to how it works.
For a long time it seemed that we talked about “academic discourse,” some common set of discursive practices that characterize writing across the university. That seems less the case today. Today we more commonly refer to constellations of genres and professional-disciplinary practices. Despite this, on an institutional level, it would not be surprising to see some reference to something like academic discourse, a common set of rhetorical-compositional expectations for all students. Furthermore, your university, like mine, might establish a similar set of goals related to digital discourses or literacies. So I wonder what that’s supposed to look like. As I’ve argued here in the past, our expectations for undergraduates as writers, the genres in which we ask them to write, reflect in some fashion the genres in which we write as scholars. E.g., the literary or rhetorical history or analysis undergraduate essay is a reflection of the journal article.
My question in this post follows from this logic. If undergraduate digital literacy is a reflection of scholarly digital literacy, then what, exactly, is it reflecting? What does digital literacy look like in your discipline/profession?
The immediate skeptical/cynical answer in the humanities is that there is no digital literacy among scholars and that this is part of the problem the humanities currently face. However, that’s an oversimplification. Clearly there are some scholars in the humanities who might be described as “digitally literate,” though that literacy takes a variety of forms: programming, web design, multimedia production, database archiving, social media, etc. We have scholars who use computing to study texts/media, scholars who compose born-digital scholarship, scholars who are experts in a range of digital-cultural practices, etc. (these are not mutually exclusive categories). We also have many, many scholars who wouldn’t see themselves in any of these categories but of necessity engage in a number of digitally literate activities: we all use computers, the web, databases, digital documents, and a variety of applications (word processors, email clients, course management systems, etc.) in our daily work. 25-30 years ago that wouldn’t have been the case. I wasn’t a professor back then, but I remember visiting professors’ offices that didn’t have computers in them.
Is the contemporary expectation for a humanities professor’s digital literacy roughly the same as that of the generic entry-level corporate job? I.e., ability to use Office, email, etc., plus maybe some specialized facility with web/database searching. Is that what we mean, institutionally, when we say we want students to be digitally literate? Again, I think the answer to this question needs to be addressed on a disciplinary level, but I would say that for humanities undergraduates, I would add the following criteria:
- communicate visually (i.e. some basic web, slide, document design)
- archive/curate data
- evaluate online sources
- produce and manage an online professional identity across social media platforms
- work collaboratively asynch and synch online
- basic understanding of how hardware, software, and networks operate
Those are the technical/practical parts. To these, I would also add some critical-cultural understanding about
- copyright, privacy, access and related social issues particular to the digital
- a history of computing and culture (something like what you can get from The New Media Reader)
- digital identity formation
- diverse cultural practices emerging from digital technologies on a global scale
- shifts in rhetorical-compositional practices
- changing nature of work, education, politics
Is that enough? Is that too much? Did I miss something? And if we would expect the humanities undergrad to know such things, what would we expect from the humanities professor? (PS and if you’re starting in graduate school in the humanities this year, you might also want to speculate on what the answer to this question will be in 8 years.)
This post comes out of a Facebook thread I was following on scholarly authority when it comes to writing about teaching writing. In short, who gets to say what to whom? In basic rhetorical terms, these matters are always situated. I.e., I get to offer my views on the English Premier League with my fellow soccer dads at our sons’ practices. I could post here about it (though I don’t) or on Facebook or Twitter (though, again, I don’t). I don’t get to write for ESPN or go on TV as a pundit. Similarly, in rhetoric and composition, we are familiar with Stephen North’s discussion of “lore”: the practical knowledge shared by instructors in a department. Social media now offers a broader distribution channel for lore. So, on your blog or Facebook or Twitter, one can discuss one’s experiences, tips, and whatnot with teaching writing. Those discourses do not amount to scholarship. That is, one couldn’t publish such things in a scholarly journal in rhet/comp. And if one is teaching writing outside of the rhet/comp discipline, I imagine it is even less likely that such work would be published as scholarship. However, we now also have middle-state publishing venues such as the blogs on The Chronicle and Inside Higher Ed. While no one would mistake such writing for traditional scholarship, the venue does lend a certain credibility. That is, if The Chronicle is putting its name to something, one imagines that there is some expertise there. Even in the case of editorials where the views are the author’s alone, one doesn’t select just anyone to write an editorial. One turns to an author with the appropriate credentials.
The question of who gets to write about teaching writing is tied to, but different from, the question of who gets to teach it, though both questions deal on some level with qualification (one hopes). And it is toward teaching that I want to turn this post.
One of the things we recognize in composition is that there are many kinds of composing activities in the world. To begin with, for example, my ability to teach composing is limited to English. More specifically though, we recognize many different genres both in the university and in the professional world (to say nothing of other cultural-rhetorical contexts). This is why we talk about teaching writing in the disciplines, because an engineering professor has more expertise in writing in her profession and discipline than I or most other rhetoricians would. The problem is that expertise in writing in a genre is a different skill set from expertise in teaching someone to write in that genre. In theory, one could be in the field of technical communication, with a focus on engineering, and have the expertise to teach writing in that genre. Or one could go about providing the professional development that the engineering professor would need to teach writing in her discipline. And I don’t mean to pick on engineers here, as one could easily say the same thing about humanities professors. Even English professors, who may have received some training in writing pedagogy and taught composition as graduate students (and may continue to do so as faculty), have trouble deploying productive writing pedagogies within their discipline.
It might be useful to think about this challenge in terms of the difference between declarative and procedural knowledge. An academic in any given field has the expertise to write in that field. S/he has probably so thoroughly digested those genre practices that they have become invisible and natural; they are just the way one writes. With some coaxing though, s/he would be able to elucidate the features of the genre and articulate a declarative knowledge of the genre. That is, s/he could describe the genre, including a declarative knowledge of the procedures. But that’s not what the student really needs. To offer an analogy, a tennis pro can give you a detailed description of the mechanics of a good tennis serve, but that will be of limited utility to you if your desire is to serve a tennis ball better. One of the things I learned from my years as a youth soccer coach is that there are some common principles that are shared across any task of teaching procedural knowledge. Learning to play soccer is not so different from learning to write in a genre. In fact learning to play soccer has more in common with learning to write than it does with learning about soccer. In both cases, it’s fundamentally about practice. It’s less about what the teacher/coach says and more about what the student/player does. As coach, I set up an activity that is aimed at developing a particular skill (in basic terms dribbling, passing, shooting, etc.) but then I have to let them do it. And even though the structure of an activity might emphasize one skill over others, they are all generally incorporated. In soccer terms, by making the field of play wider or narrower, by moving the goals/targets, and by determining the number of players involved in a small-sided game (3v3, 4v4, etc.) one can emphasize one activity over others. There are analogies to this in writing pedagogy, activities that emphasize invention or revision or using evidence, etc. 
In both situations, reflection is important following the activity, as is situating the emphasized skill in a real-world environment. That’s a full scrimmage or game in soccer and a formal writing assignment for the class.
Now I got my “E” coaching license (the lowest level you can get) by taking a weekend course. That’s not much, but it’s two more days of training than most university professors have received on teaching writing. Teaching is also a kind of procedural knowledge; it takes practice, but it also takes knowledge. “Know” is a part of know-how. And while I may not know much about the specific genres of many of my colleagues in other disciplines, I do know how to teach writing. Similarly, I don’t know much about the particular skills you need to play baseball (I know about the game, but I don’t have know-how), but I’d be willing to bet that the basic pedagogical principles of soccer training would apply.
Unfortunately the university doesn’t often do a good job of thinking about know-how. It only wants to think about know-that. And that is strange because writing is hardly the only kind of know-how on the campus. There are the obvious examples in the fine arts: music, dance, painting, etc. There’s creative writing of course. But there is also literary interpretation, scientific experiments, architectural design, etc. Obviously, every field has practices; every field has methods courses. Writing is a kind of meta-disciplinary method. So who does get to be the “expert” in writing? After all, the natural world gets cut up into many disciplines, and no one gets to be the expert. Should we think of writing in the same way, as an object of study available to many disciplines and methods? There are plenty of humanities disciplines that study texts and maybe textual practices: historical documents and historiography; literature and literary production; philosophy and philosophical argument.
I suppose my point in this now long, meandering post is that writing calls into question the conventions of disciplinary expertise that organize the university. This makes rhetoric an interesting field of study, the one that doesn’t quite fit because it was always outside of, and preceding/conditioning, the philosophical and natural philosophical traditions from which all the other fields (except math) emerged. Because rhetoric is grounded in a know-how that conditions all these other know-thats, its expertise is also different. And teaching it is even more different.
In composition studies over the years, folks like Lester Faigley, James Berlin, and Richard Fulkerson have attempted to describe the various schools of thought within the field. How composition is actually taught is a different question, though, given the familiar disconnect between scholarship and teaching practice. So I’m brainstorming a list of the common ways I think composition is actually taught in US colleges.
Here’s my list:
1. The writing process: as in teaching a single process approach. This is what happens to the process theory of the 80s when it gets repackaged as a textbook. Even though it isn’t meant to be linear, it ends up imagining these stages of invention, thesis, organization, paragraphing, sentences, editing.
2. Writing and literature: basically a writing-intensive introduction to literature course where there is some revision and discussion of writing practices but all within the context of writing in response to literary texts.
3. Writing and culture: this is the cultural studies composition classroom, but it’s similar to #2 in that we read about various cultural issues and cultural theory and then write in response to them. Like #2, this is a class that argues that learning to read in a particular disciplinary way is a necessary precursor to learning to write.
4. The modes: yes, they are still around, or at least a perusal of textbook catalogs would indicate they are. Write a description, write a summary, write a narrative, etc.
5. Writing and style: the focus is on grammatical and stylistic concerns, primarily on the sentence level. I imagine students mostly write essays.
6. Writing and argument: a critical-thinking approach to composition where students write argumentative essays and learn about logic-based forms of argument. Students write humanities-style essays.
7. The many genres approach: students write in many genres and the focus is on identifying the formal qualities of genres so that they may serve as models.
8. The many purposes approach: students write for a variety of purposes (to argue, evaluate, solve a problem, analyze) and investigate how those purposes manifest in different genres.
9. Rhet theory/writing about writing: classes that introduce the discipline of rhet/comp in some way as a mechanism for students learning about writing.
At UB our history is that we’ve done a fair amount of #3, but, at least in our 101, we are moving toward #8 in our use of Mike Palmquist’s Joining the Conversation. I read Palmquist’s book as being informed by the genre and activity theory approach to composition we see in the work of Bazerman, Russell, and others. In that respect it shares a scholarly foundation with the writing about writing approach, though the pedagogical application is different. It is a challenge for our instructors, who are primarily TAs in the areas of literary studies, cultural theory, and poetics. As such, they haven’t had many encounters with cultural-historical activity theory or North American genre theory, let alone how those areas intersect with composition studies.
My ideal approach would likely be something else, but I have to think about what is sustainable and deliverable across 100+ sections every semester when every year I have a 20-25% turnover in instructors. The advantage to the churn is that any change I make this year will seem like “the way it’s always been” to more than half the instructors in 3 years. The disadvantage is that I do have a case of the eternal September, of never really being able to build a more sophisticated curriculum, as in any given year more than half the instructors have not taught the course they are teaching more than twice. This is, of course, not their fault. It’s just a condition of the program as it is constructed.
As I always tell my new TAs, a composition pedagogy is based, implicitly or explicitly, on one’s theory of how composing happens and one’s theory of how learning/teaching works. The activity theory of composing is probably as close as any mainstream theory gets to my own. I would tend more toward Latour and DeLanda than CHAT, which means, for example, that I’d give a larger role to nonhumans and to the distribution of cognitive processes. I am also more concerned about the shift toward digital technologies and networks and their impact on composing processes than we see in the list above. An FYC class begins with helping students understand the roles they play as actors in the various compositional networks in which they already participate (I would describe those networks in slightly different ways than CHAT would, though maybe the subtleties would be lost at the 101 level). FYC needs to address the importance of a writer’s affective orientation toward those compositional networks; that is, in less jargon, to recognize that how we feel about what we are writing and our motives for writing are a significant part of the process, so strategies for reorienting ourselves are important. Finally (and ideally), an FYC class should help connect students to the compositional networks of university life, but it cannot do that alone, as there needs to be some articulation with a WAC/WID curriculum for that to happen. Here I think those “networks” can be quite literally some digital form of community, an ongoing e-Portfolio, and so on. Those are things we don’t have right now at UB. All of this also indicates a pedagogy where the focus is on student activity and the design of the compositional networks in which students will be asked to participate. The first goal has to be addressing student affect, by which I don’t mean helping students “find their voice” and learn to love writing.
Instead I mean shifting students from viewing writing as a monolithic, top-down activity where they are at the bottom to seeing writing as a varied, multi-directional, and fluid activity that can be connected to many different purposes and goals a student may have both in and out of the university. This may sound like it is all on the students, but it is really all on the curriculum and faculty. It starts with us ceasing to use writing as a pedagogical weapon for ensuring conformity, standardization, and evaluation and beginning to use it for communication, collaboration, invention, and so on.
I think it’s fair to say that the University at Buffalo is a technocratically-oriented institution. I suppose the word “technocrat” rarely has a positive connotation, especially when used by a presumed humanist like me. OK, but for the moment let’s make this word more value neutral. Around 10% of our majors are in the arts and the humanities, around a third are in communications, psychology, and social sciences, and more than half are in STEM or business. So, you get the picture. The UB student population shows its preference for developing technical expertise with its feet. On my campus the statement that “humanities are not for everyone” would appear to be a gross understatement.
Here’s another angle though: are humanities professors also technocrats? Our disciplines are also organized around particular kinds of technical expertise, right? Aren’t literary criticism, rhetorical analysis, historical investigation, and philosophical argument all forms of technical communication once they reach the level of scholarly publication? In some respects we’ve pursued this technocratic moniker in our desire for our scholarship to be valued in the same ways as our colleagues across the quad, in our pursuit of increasing specialization, and so on. We all know that “engineering isn’t for everyone.” We watch students wash out each year. It’s a challenging and technical discipline. Do we stylize the humanities in the same way? That is, does our curriculum seek to mirror STEM in the same way that our research has? So when we say “the humanities aren’t for everyone,” do we mean that the humanities represent a kind of impractical course of study (in comparison with STEM or business) and thus not what many students appear to want? Or do we mean that the humanities are a narrow set of technical practices, analogous in their accessibility to engineering or business, and thus not for everyone? Or maybe both.
In the good old, bad old days, that are still alive in some places, English was a default major, especially for women. Now, nationally it seems the default majors are psychology and communications. Students certainly aren’t entering those majors because they lead to good paying jobs: on average those BAs pay about as well as English. In English departments we may worry about low enrollments but we also wring our hands over the notion of becoming a default major…. English isn’t for everyone.
Isn’t it a little strange that the humanities aren’t for (most) humans? What have we really gained in our pursuit of an identity as an elite technocratic set of disciplines? In my view, there’s no challenge in designing an English class so that it is too difficult for the average student to take. In fact, graduate students do it all the time, out of inexperience or perhaps a desire to demonstrate their own technocratic expertise, to be like their mentors. What would it be like–not to return to the good old, bad old days–if English were a refugee major, a site of intellectual refuge, a place that was for everyone? To the extent that we value the education our broken family of disciplines can offer, it isn’t because of the market, professional, or even cultural value that our technocratic skills (of interpretation, critique, or archival research) possess. Nor is it the cultural capital of what we might know of our beloved books. I would venture that it has something to do with expression. If I were to say what I feel I got out of my education, not only BA, but MA and PhD, it wouldn’t be the ability to deconstruct this or critique that. I can do those things, but so what? It certainly wouldn’t be my ability to do well on a GRE trivia test of literature. It would be the capacity I developed to think through problems and get things done with language. Am I describing a technocratic skill? I don’t think so. It’s not that specialized. So what if English weren’t the replication of our technocratic skill set? What if English really could be for everyone rather than yet another discipline that prided itself on the elite, technocratic abilities of its students? What if it were democratic instead?
I don’t know that I have seen any studies that particularly identify why students don’t choose to study English very often. We can all guess. There are a lot of guesses, or more accurately there are a lot of people who all guess the same things. English doesn’t lead to a job. Students (these days) don’t like to read or write (except of course they do read and write more than my generation did, just not things that count). So let me put that differently: students don’t value the literacy on offer in English departments. I suppose this leads me where I always seem to end up when I start thinking about these matters: maybe students have a good reason to question the literacy we are offering them. Maybe it could do with some updating (maybe!). But maybe we should also rethink the technocratic qualities.
It is not only the students who think that English isn’t for them. It’s the discipline as well.
I haven’t posted here recently because I made a bargain with myself that I would finish Latour’s An Inquiry into Modes of Existence. It was slow-going, but I’ve also been involved in my university’s efforts to reform our 20+ year-old general education program. Those efforts are still ongoing, so I don’t really want to talk about the particulars of that here. However, I do want to think through the end of Latour’s book and the light it sheds on the complexities of connecting disciplinary knowledge to pedagogy and further still to a market conception of the university that attracts and impassions students.
Very briefly, Latour groups most of his “modes” into the following categories:
- Modes that precede and create conditions for the production of quasi-subjects and quasi-objects: Reproduction (REP), Metamorphosis (MET), Habit (HAB)
- Those that produce and link together quasi-objects: Technology (TEC), Fiction (FIC), Reference (REF)
- Those that produce and link together quasi-subjects: Religion (REL), Politics (POL), Law (LAW)
- Those that link quasi-subjects with quasi-objects: Attachment (ATT), Organization (ORG), Morality (MOR)
- Those that govern linking: Network (NET), Preposition (PRE), Double Click (DC)
I know that doesn’t tell you much, except maybe that this is a complex system of modes. However, what you might glean is that Latour is beginning with abolishing the ontological distinction of Subject and Object, developing a different ontological process, and then investigating how subject-like and object-like things emerge and interconnect. Thankfully, I’ll just take up a couple of these things here.
Academic disciplines are primarily interested in the mode of Reference, which is the mode that establishes chains that connect what is known with the quasi-objects that are being known. This is familiar Latourian territory, I think. It’s how the sciences construct knowledge about the natural world. Of course, they also require the modes of technology and fiction; these modes must operate in conjunction with one another. For disciplines that do not study the natural world, the mode of reference does not work so smoothly, and yet I would argue that the Modern world has pushed the humanities and social sciences in this direction.
The university, of course, is an organization, and increasingly we say it must be driven by economic considerations. This is the prevailing complaint from all the “stakeholders.” The politicians talk about the rising cost of education. Students and parents want degrees that lead to jobs. Administrations and boards seek new revenue streams and investments. Professors worry that economic calculations supersede the paradigms of their disciplines. For Latour, the modes of organization, attachment, and morality, which comprise the final section of the book, are an effort to displace the Economy as a kind of “second Nature” established by the Moderns. Briefly, organization refers to the many scripts by which quasi-subjects live their lives. Attachments are the passions that drive those scripts. And morality is the obligation to never be satisfied by the economic calculations we make. Latour is attempting to do away with the pseudo-rationality of economics that tries to operate as though it were a science.
So I want to think through how this all plays out in the movement from discipline/research to pedagogy to brand. The scholar has her methods for establishing chains of reference, her own disciplinary uses of technologies and fictions, organizing scripts, scholarly attachments or passions, and ongoing moral questions about the calculations she makes. But how do these translate to pedagogy? The old (but still active) method says the professor must only express her own passions, her own attachments, and introduce students to her scripts. It is the classic mini-me pedagogy. As the university has expanded its purposes and its students, the mini-me pedagogy has lost whatever credibility it might have ever possessed. Even in the professional schools one knows that the aspiring engineer is not made in the model of the engineering professor, and certainly in the old Arts and Sciences, we know that only the smallest percentage of our students will ever follow in our professional footsteps. Just as the diner’s attachment to the meal is different from the chef’s, the moviegoer’s attachment to the film is different from the director’s, and the gamer’s attachment to the console is different from the designer’s, the student’s attachment to the curriculum and the discipline is different from the professor’s. Of course the immediate complaint with all those analogies is that the student is being cast as a consumer. This is what sickens the academy! This is the making-economic of university life. However, to follow Latour’s argument and dispense with the monolithic second Nature of the Economy, we might instead say that students have different scripts, different passions, and different calculations in their organizational relationship with university and discipline.
Pedagogy is the key mediator here. It remains a candidate to serve as another mode in my view. And here I am reminded of Deleuze and Guattari in What is Philosophy?
The post-Kantians concentrated on a universal encyclopedia of the concept that attributed concept creation to a pure subjectivity rather than taking on the more modest task of a pedagogy of the concept, which would have to analyze the conditions of creation as factors of always singular moments. If the three ages of the concept are the encyclopedia, pedagogy, and commercial professional training, only the second can safeguard us from falling from the heights of the first into the disaster of the third.
To put this in my own terms, “training” would be the wrong way to think about teaching, a concept that would put pedagogy fully within the realm of the Economy. The encyclopedia as the height of reference represents disciplinarity but perhaps also the error that mistakes reference for reproduction. Pedagogy then is about a different set of scripts, attachments, and calculations. And some of us already know this at least. To think rhetorically about the audience of students does not require folding to some spectral economic demand to treat students as consumers. That our students are an audience does not make professors into entertainers.
The more difficult translation is the one from the classroom and the curriculum to the university brand, the way universities communicate their programs to attract students and excite them about the possibilities of enrolling. When education is swallowed by the Economy, it quickly becomes a commodity. The Ivies and elite liberal arts colleges may generate some value-added appeal (or they may just be conspicuous consumption), but in the end there appears an economic calculation about optimal results related to university ranking, programs and career opportunities, and cost of attendance. Latour suggests this is a mistake. Not that we “shouldn’t” make such decisions but that we don’t. University admissions departments understand this, as is revealed in the smiling, multicultural faces on their brochures. Investments in new dorms, student life facilities, and athletics are part of this as well. But it is harder to make this work for curriculum because the discipline is connected on the other end to research.
I have been thinking about this in terms of general education. We know that students don’t like general education because it doesn’t reflect their attachments to a profession and a major as a route into that profession. And most faculty don’t care to teach general education either: partly because the students don’t want to be there, but also because the premise of mini-me pedagogy clearly doesn’t operate there. There is a branding/rhetorical problem here of making general education appealing in both of these directions (and appealing to the administrators who must commit to invest in it). However if we think about general education as building new organizational scripts and attachments then it becomes a mechanism for reorienting the university away from the Modern error of the encyclopedic Natural/Social distinction while avoiding the Economic university of commodified training.
Continuing my increasingly plodding march across An Inquiry into Modes of Existence, I’ve completed Latour’s description of the modes he refers to as “quasi-objects” and “quasi-subjects.” To recall, one of the keys of the argument here is to dispense with the binary of subject and object. However, Latour recognizes that these concepts play a central role in how Moderns define themselves, so one cannot pretend as if they simply don’t exist. As such, we get these “quasi” things. Not only that, but Latour also has modes that are prior to subjects or objects.
Just a brief review (I hope), as I want to get to the matter of quasi-subjects. Reproduction, metamorphosis, and habit belong to the pre-object/subject group. As Latour writes:
To use a linguistic metaphor, if the beings of reproduction define some kinds of syntagmas (lines of force for inert beings, lineages for the living), might we not say that the beings of metamorphosis define paradigms, possible series of transformations, vertiginous trances? We would then be sketching a matrix made of the crossings between horizontal lines—reproductions—and vertical lines—metamorphoses or substitutions. They would form the warp and the woof of which all the rest is woven. If, much later on, humans begin to speak, it is because they slip into these horizontal and vertical series that they could not have invented. If humans act and speak, it is because the worlds are already articulated in at least these two ways: they reproduce, they metamorphose. (287)
To me, this sounds very close to DeLanda’s use of Deleuze and Guattari’s assemblages, but that’s a post for another day. Admittedly, the mapping is not one-to-one, but all the elements are there: lines of force, enunciations, deterritorializations and reterritorializations, and coding/symbolic behaviors.
Latour’s quasi-object modes are technology, reference, and fiction. As Latour continues:
The beings of fiction have lent powers of delegation to the beings of technology, powers that have allowed the sciences, starting from a limited viewpoint that condemned them to blindness, to traverse the whole world and cover it with chains of reference paved from end to end with instruments [tec·ref] and with delegated and domesticated virtual observers [fic·ref]. Hence the idea of grouping these three modes together. (289)
These modes each deal with things: “fabricated things (tec), dispatched things (fic), and known things (ref).” However, it is these things that allow for quasi-subjects to emerge.
So, quasi-subjects: Latour identifies three modes here as well–religion, politics, and law. There is enough of a fascination with trinities in this book as it is, but the religion chapter takes it to a new level with its implicit focus on Catholicism. I struggled with that chapter. I realize that Latour has written extensively about religion elsewhere, and I suppose I could go and read those books. And, of course, I recognize that religion is important for many people in the world and for the “Moderns’” definition of themselves. Latour mixes up religion and love, which I suppose makes sense to a Catholic. I can see religion has something to do with renewal for Latour, but honestly I am at a loss here. Maybe you can help me (is that a prayer?).
Fortunately the politics and law chapters made a little more sense to me. Latour situates politics firmly in a rhetorical context, at least as far as I am concerned. He describes a cyclical process whereby quasi-subjects shift from being multitudes into a unity and back again. He writes:
What the Sophists discovered is that there is a truth of curves, a necessary truth when one has to produce, in the middle of the agora, statements like “We want,” “We can,” “We obey,” whereas we are multitudes, we do not agree about anything, above all we do not want to obey and we do not control either the causes or the consequences of the affairs that are submitted to us. To pass from one situation to another, yes, a miracle is required, a transposition, a translation in comparison with which transubstantiation is only a minor mystery. (346-7)
This points to one of the central linkages among these modes: none are allowed to speak truthfully or falsely. To jump ahead to Law, legal truths are not perceived as Truth (e.g. he was found “not guilty” on a technicality). We can say that religion is true for believers and that politicians are liars but really neither mode has credence within the mode of reference (of science), which is where truth is established for Moderns. But Latour’s point is that these modes accomplish things that science/reference cannot and that misunderstanding this is the great error of philosophy: “As if Socratic dialogue could put an end to the pandemonium of the public” (347). One more long quote (thanks to the online version of AIME).
The third group has a common feature, moreover, that justifies calling it that of quasi subjects: the fact that the felicity and infelicity conditions for the group always depend on the moment, the situation, the tonality, almost on the tone of voice—in any event, on form. (For this reason it would not be a bad idea to reserve the term regimes of enunciation for this third group, for it is definitely a matter of a “manner of speaking.”) It was the very fragility of these conditions that led modernism to declare them irrational, or at least irrelevant to truth and falsity alike. And yet what a loss, if we couldn’t trace once again the differences between truly speaking politically, legally, religiously and falsely speaking politically, legally, religiously. And a still greater loss if we were to mix up these forms of truth, if we were to amalgamate them. (375)
Perhaps it is just me, but I hear rhetoric in this passage: moment, situation, tonality, “manner of speaking.” As I understand it, these modes take up technology, fiction, and reference to enunciate subjective layers, to articulate humans as individuals and groups. I think this is very close to where I am going with understanding rhetoric as an enunciative force (symbolic or otherwise). Rhetoric is not a product of subjects but rather something closer to the other way around (although we never actually get “subjects”).
It is in this context that I wonder if pedagogy or learning might not be another mode of quasi-subjects. We wouldn’t want to mistake it for the educational institution any more than we would mistake religion for the religious institution. Double Click, Latour’s Modern nemesis throughout this book, wants to think of pedagogy as transmission of fact (reference), but the project here is to recover these modes from the influence of Double Click. Pedagogy has its own mode of gathering together fabricated, dispatched, and known things, its own sense of moment, situation, and tonality. Pedagogy links people together in a unique way. It has its own intersection with metamorphosis. On the other hand, if one adds pedagogy/learning, then maybe that opens the door for dozens of other possible modes and the result is unwieldy. On the third hand though, religion, politics, and law seem incomplete to me.
I am continuing my slow march through An Inquiry into Modes of Existence (AIME) and want to do so today in the context of a conversation I had with my Teaching Practicum class. On Monday we read and discussed Maxine Hairston (“Winds of Change”) and Lester Faigley (“Competing Theories of Process”) on the topic of the writing process. These pieces are from ’82 and ’86, respectively. While there are various arguments about when the process approach to teaching writing began, it is around this time that “teaching the process” as we still practice it became widespread. To briefly summarize, Hairston argues that the process represents a kind of Kuhnian paradigm change in “our profession” (though who the “our” is might be a little unclear): is the profession “English” or is it a still nascent (in ’82) “rhetoric and composition”? Four years later, Faigley describes three different process theories: expressive, cognitive, and social. He clearly prefers the third and is already recognizing the cultural turn in composition studies that will, by the 90s, move from being a process theory to a “post-process” theory.
Our class conversation focused on a couple points. Hairston writes:
in many schools, even graduate assistants who are in traditional literary programs rather than rhetoric programs are getting their in-service training from the rhetoric and composition specialists in their departments. They are being trained in process-centered approaches to the teaching of composition, and when they enter the profession and begin teaching lower-division writing courses along with their literary specialities, they are most likely to follow the new paradigm.
The first sentence is still true. I was doing that Monday. However I think it is fair to say, 30 years later, that literary scholars, despite being trained as TAs in process, do not teach the process in a widespread, paradigmatic way outside of the context of composition (if they even happen to teach composition, which I think is increasingly unlikely). I want to put this in the context of Faigley. He begins with Stanley Aronowitz and Henry Giroux, who “see the development of writing programs as part of a more general trend toward an atheoretical and skills-oriented curriculum that regards teachers as civil servants who dispense pre-packaged lessons.” Faigley then quotes Aronowitz and Giroux’s assertion that ”The splitting of composition as a course from the study of literature,[sic] is of course a sign of its technicization and should be resisted both because it is an attack against critical thought and because it results in demoralization of teachers and their alienation from work.” I see these passages as still articulating the operation of composition in English departments, that composition is still often viewed as atheoretical, skills-oriented, and an “attack against critical thought” (though what “critical thought” means here is up for question). And this is part of the reason why the paradigm change Hairston imagines doesn’t quite come to pass.
Instead, we get a bifurcation of paradigms where composition studies separates from literary studies on the question of process. Of course this becomes more complicated than Kuhnian paradigms can manage (at least as Hairston deploys them). For one thing, there remains a philosophical, hermeneutic, belletristic, humanistic branch of rhetoric whose methods parallel those of literary studies (as well as a social scientific branch of rhetoric that is quite distant from English). Then, as composition largely remains within English departments where the paradigms of literary studies remain dominant, there is a fair amount of gravitational pull on composition. So as Faigley’s “social view” becomes the post-process, cultural studies paradigm of the 90s, composition becomes less focused on process and turns back toward a text-oriented hermeneutics based on post-structural/postmodern interpretive methods. Composition moved away from any sense of teaching “skills” and focused on critical thinking (as it would be understood in a neo-Marxist, cultural studies ideological critique sense as opposed to the philosophy class, rational argumentation sense) and the resulting discourse analysis. During the same period however, we also see a rapid expansion of technical communication doctoral programs, followed by the emergence of professional writing programs. So while composition wavers in its relation to literary studies methods (likely because of its dependency on literary studies-trained TAs and adjuncts), elsewhere the broader range of “writing studies” is departing from the literary paradigm.
So what does this have to do with Latour? Well, maybe he offers a better way to understand what is happening. In chapter 10 of AIME, Latour focuses on habit as a mode that tends to make invisible the work that is done to create the impression of immanence:
Whence the feeling, as old as thought, that phenomena are “hiding something from us.” And it is true, they really are hiding something, yet there is no mystery to worry about: continuity is always the effect of a leap across discontinuities; immanence is always obtained by a paving of minuscule transcendences… Through habit, indeed, the discontinuities are not forgotten, but they are temporarily omitted. (267)
We may tend to think of habits in a negative light, especially through the lens of critique, but that’s not Latour’s point. To the contrary, habituation is helpful and necessary. His running example in AIME is the hiker searching for a trail when “he no longer has to choose, he can finally follow, he can finally put himself ‘in the hands’ of others, he knows what to do next, and he knows this without reflecting” (265). Clearly, a lot of a paradigm is about habit: discontinuities are “temporarily omitted” so that we can pay attention to where the trail is taking us rather than the trail itself. The habits of literary study and writing studies are obviously different. We don’t need Latour to tell us that! Not only are they different trails heading to different destinations, they are habits for different beings.
What is most crucial, in my view, for Latour’s habit is the role it serves in addressing the classical Western ontological divide between immanence and transcendence. Latour notes that moderns were right to suspect “appearance” and to search for what was hidden. So here’s an extended passage that I think will make the loop back to composition and literary studies:
We no longer have to confuse making something explicit with imposing a difference between those who don’t know what they’re doing because they have “forgotten” the essence of Beauty, Truth, and Goodness, and those who know these things by way of “formal” knowledge. For habit, making explicit is simply to specify the key to reading that it veils while maintaining its presence through vigilant attention. This doesn’t mean that we have to grasp every course of action according to the mode of reference alone, as Socrates requires of his interlocutors, while unduly exaggerating the empire of that mode. This false dichotomy between practical knowledge and formal knowledge is imposed by the Socratic question itself; this is what empties practice of all explicit knowledge. (274)
Okay. In studying the practice of writing/composing/constructing, instead of the writing product (which is one way of understanding the difference between “writing studies” and “literary studies”), one is not only identifying different habits of “reference” (which is Latour’s mode for the production of constative/formal knowledge) but different approaches to habit itself. Latour’s point? Habit has its own way of knowing. If we want to make a complaint about composition studies, it might be the ways in which it has turned toward reference as a mode in developing a disciplinary paradigm: this is Sirc’s complaint, I would say. The “social turn” is a turn toward a hermeneutics… again. As Latour continues, “when we complain that the Moderns do not know how to account for their own riches, we are not trying to extend the critical question, the Socratic question, to their entire anthropology: we are asking, proposing, suggesting that they no longer raise that question, so that all the other keys can be made explicit, each according to its mode” (274). This is also the difference between genre-activity theory (which would be the contemporary disciplinary method in composition studies with which I have the most sympathy) and the assemblage-network-nonmodern rhetoric that I am trying to develop. Activity theory remains in that Marxian hermeneutic mode. Not that it has to, but I think it does.
However, this is certainly the difference between composition and literary studies. Composition has looked to practice and habit as having their own modes of knowledge; habits and practices that can be understood, taught, and learned on their own terms. I don’t see literary studies as situated in a way that can ask these questions or accept this kind of knowledge. Maybe it has something to do with the odd relationship the field has always had with the authors of literature. What Latour would obviously suggest one should do in studying literature–to follow the actors and give credence to their values, language, and practices–is nearly forbidden in literary studies. And yet this is exactly what composition studies has done since the inception of process research: studied the practices of writers.
There is your paradigmatic gulf.
I’m looking to extend a conversation from recent posts by Stephen Ramsay and Alan Liu, which are in turn parts of a longer conversation that includes Liu’s essay in Debates in the Digital Humanities, the “dark side of DH” business, and the general intersection of DH and cultural critique. The intellectual projects Ramsay and Liu describe run parallel to my own ongoing book project, though I am coming from a digital rhetoric perspective rather than the humanities computing wing of DH. For those unfamiliar with this conversation, it is basically about responding to the cultural critique of the digital humanities, which takes several generic forms.
- Because programming languages and computer science reproduce hegemony (and here feel free to put in patriarchy, capitalism, etc.), making use of these puts one in service to those interests. And serving those interests is bad… just in case you didn’t know, so one should stop doing that and do some other thing, which may or may not be specified in the argument but generally means moving in whatever direction cultural critique points.
- The same argument except insert social media, the web, and consumer grade technologies (e.g. mobile phones, video cameras, video games).
- The same argument but leveled directly at particular DH projects.
The one “nice” thing about cultural critique is that you can always count on it to make the same argument. I think it’s safe to say that Ramsay and Liu have more generosity toward critique than I do. Ramsay writes, “Writing a book contra cultural studies seems to me to be the wrong direction entirely. I would like to make positive statements about what we’re doing, about why it’s different, and about the ethical problems it raises. The insights of cultural criticism are not so easily dismissed.” I understand Ramsay’s position, especially from within the DH world, where speaking against cultural studies is likely to result in a lot of vitriol being sent in one’s direction. Actually, it’s all too easy to critique critique: everything is always already subject to critique, and critique is interminable. Cultural critique is its own disciplinary hegemony, reproducing itself. It’s the thought police of the humanities, posing as skepticism but assailing its detractors as automatically being willing or (perhaps even worse) unwitting servants of hegemony.
But I agree with Ramsay that it is not worth doing that, though maybe for different reasons. For me, critiquing critique just feeds back into the same machine. The point is to move on from critique and do something different.
So here are some ideas in the spirit of moving on.
- Treat critique as a heuristic. Critique isn’t going anywhere and anything you do can be critiqued a half-dozen ways. So critique will also give you something new to do. You just shouldn’t treat critique as if it is telling you some horrible truth.
- Treat critique like your parents. You can love critique because it nurtured you through your intellectual growth. And you can remember how critique used to tower over you, appearing as an absolute authority. Maybe critique made you feel safe. Maybe it would scold or punish you. Maybe critique abused you. I don’t know. But now that we are all adults, we can see critique is just as screwed up and lost as the rest of us.
- Treat critique as a machine. It does certain things. It has inputs and outputs. It’s predictable when it is operated according to the established instructions. It’s true in the way a hammer or telescope or automobile is true. That is to say that it is true to itself.
Admittedly those ideas are presented with some humor and lightness. Critique believes in itself and sees itself locked in a death match with evil; understandably, it can be humorless. Of course injustice and evil and whatnot existed prior to critique. The questions of ethical and moral behavior, the challenges of making lived experience better, and the dangers of confronting those who act unethically all existed before critique. We need to be able to question critique as a method without denying the importance of the objects critique studies. We also need to recognize that the particular modes of attack on all things digital by cultural critique may be fueled by disciplinary paradigms and historical contexts.
For example, Ramsay observes:
At the most basic level, we wonder what it means to use the tools handed down to us by corporations (Twitter, Facebook, mobile devices, etc.) to do something that is supposedly (to quote Google) “not evil.” We might just want a cup of coffee, but we are walking into Starbucks to get it. We also, I think, tend to “track” corporate trends. They get into mobile, we get into mobile. They get into data mining, we get into data mining. So asking whether we “channel, advance, or resist” is a good question. A serious question. A book-length question.
I think this is a good question to ask. It’s a question I would put into the context of how corporate, hegemonic technologies and processes like typewriters (built by arms manufacturers like Remington and Holocaust collaborators like IBM), printing presses, electrification, telephony, and so on (all the trappings of the second industrial revolution) were handed to the humanists of the early 20th century. How journals, monographs, and academic conferences followed corporate trends. And we should think about the reverse trend as well… how corporations make the literacy and cultural criticism we give our students productive. It’s a good question to ask: why do cultural critics want to attack DH methods, while ignoring that the print humanities have always had an equal complicity with the military-industrial complex? After all, it should be fairly obvious that publishing a monograph requires one to become as immersed in marketplace forces and corporate technologies as posting to a blog. Teaching 15 students in a seminar room serves hegemony just as well as teaching 15000 in a MOOC (maybe better since MOOCs are so ineffective pedagogically).
The problem might be that critique is something of a blunt instrument. Maybe I’ve only presented it as such here. No doubt it would take more than 1000 words to pursue this argument with care. I am generally with Latour here. I would no more deny the realities of human suffering and injustice than I would deny climate change. But I would argue that our understanding of these conditions is flawed, because critique stands like a house of cards on a modern ontological worldview that doesn’t work. I see the humanities cultural critique of the digital as driven by a disciplinary methodological paradigm that wants to keep churning. In short, the response to cultural critique should not respond to the critiques themselves (unless one finds them heuristically useful). Instead, we need to investigate a different ontological mode (and the methods it might suggest) and recognize the inertial drag of our print legacy on our disciplines.
So, for example, I think Liu is asking some very important questions at the end of his post:
How can digital methods be used to uncover what I called micro-, meso-, and macro-level identity formations that unpredictably and rhizomatically link between “individuals,” “groups,” “classes,” “nations,” and “globalism”? For example, what is the human meaning–i.e., the affordance for significant human understanding, action, and interaction–of viral biopolitics at the cellular and sub-cellular level; of equally but differently viral contagions of influence at the institutional level (where corporations and governments, for instance, today infect universities through the vectors of MOOC’s, “accountability” measures, “impact” studies, etc.); and of truly global-scale flows of information-cum-capital?
But I wouldn’t feed this investigation back into the language of cultural critique. I am very interested in how the university conceives of digital literacy–the policies, curriculum, pedagogies, and other investments that are made in this vein both in terms of undergraduate teaching and faculty scholarship. In fact, you could say this is the overarching topic of my work. I want to understand how these systems/networks/assemblages/objects operate. I do not want to begin with the answer to the question, as cultural critique would. I want to create understanding, tools, and activities in this arena that try to make the world better, even though I only have a provisional and localized understanding of what “better” might be and how to achieve it. But they will never satisfy critique. So what?
I see many friends on Facebook remarking on MLA’s efforts to reorganize their group structure. Collin has blogged about it. This is definitely one of those “short straw” tasks where no one wins. It’s a fairly trivial matter. I don’t know who uses these groups or what purpose they are meant to achieve. I know several of my rhet/comp and tech comm friends have responded critically, but I think this is mostly because this is yet another rehearsal of a very long history of antagonism between writing studies and literary studies. While some colleagues tell stories of departments that find a common sense of purpose across this divide, that is quite clearly not the general history of English departments in the last twenty years. What message does MLA intend to convey when rhetoric and composition is one of what must be at least 100 different groups? I can think of four possibilities:
- There’s no message. This ended up this way because no time or energy was devoted to how MLA members or potential members might express interests in this general area.
- MLA sees rhet/comp as a limited field, equivalent in scope to a single literary period (each period gets its own group).
- MLA has very few members interested in this subject, so there’s no point in having many groups.
- MLA is actively discouraging interest (and members interested) in rhet/comp.
If I were MLA, I’d go with #3 as my answer. It might be true. There might be very few members who have used whatever passed for this group in the past. Of course the counter-argument might be that probably very few literary studies members would be interested in a group labeled “Literature.” I don’t think it’s #4; that’s just so clearly against the organization’s financial interests. I think the most likely answer is some combination of #1 and #2.
What’s more interesting to me though is what this says about our continuing commitments, in both writing and literary studies, to 20th-century, print-based paradigms. As I’ve said many times on this blog, I think both disciplines are pretty much doomed. By that I mean that I think the future of literary studies is to become art history: a smaller and far less central (to the university) version of what literary studies was in the 20th century. And writing studies will be subsumed by whatever discipline takes on the challenges of digital literacy and practice. Maybe rhet/comp will just become that, but I see no evidence that such a transition is more likely than some other field emerging just as rhet/comp emerged in English departments many decades ago. In other words, matters like this MLA thing are a classic example of rearranging deck chairs on the Titanic. These aren’t discussion groups; they are grave markers. That doesn’t mean that we are dead. My point is that the epistemological paradigm represented in these groups needs to be replaced.
Rhet/Comp has subdivisions too, even if MLA doesn’t know how to spell them. They’re easy to see in the CCCC program, where there are 14 different categories. Be cool or be cast out! In the case of rhet/comp the subdivisions reflect a historical commitment to humanistic/academic essay writing; individualistic, isolated writing processes; vague, spectral visions of culture and empowerment; instrumentalized technologies; and, lest we forget, the heroic pedagogy narrative.
Of course we need to subdivide the world in our efforts to understand it. The problem isn’t with creating categories but rather with the historical processes by which the categories are produced and maintained. So arguments for “big rhetoric,” or claims like my own that do not wish to limit rhetoric to the study of human, symbolic action, are not arguments for eliminating categories but for establishing new paradigms. I am more than tired of arguments about redefining MLA or English Studies to be more expansive when the whole thing has no future. A far more nimble mode of collectivity will be required to meet this challenge than either a professional organization or an academic department is likely to provide.
As I’ve written here before, one passage that has really stuck with me since my early days in graduate school comes from Lester Faigley’s Fragments of Rationality, where he observes that the disagreements in rhetoric and composition can be understood in terms of the subjectivities we want our students to occupy in the classroom. I’ve always seen this as an astute accounting of a disciplinary paradigm; I’ve also always seen it as a matter of concern (to adopt a Latourian phrase). Following the administrivia tornado of the first couple weeks of the semester, I’ve gotten back into Modes of Existence and immediately encountered a couple chapters that I see as speaking directly to this matter.
Chapter 7 deals with questions of the psyche and introduces a mode Latour terms “metamorphosis.” To briefly summarize, one of the defining characteristics of “moderns” is their rejection of external, invisible forces impinging on the psyche (e.g. spirits). Instead moderns attribute their mental states strictly to internal processes. As Latour writes, “the continuity of a self is not ensured by its authentic, and as it were, native core, but by its capacity to let itself be carried along, carried away, by forces capable at every moment of shattering it or, on the contrary, of installing themselves in it. Experience tells us that these forces are external, while the official account asserts that they are only internal” (196). Perhaps one might want to raise the question of ideology here. That is, at least among some moderns (e.g. academics), there is the belief in an external, overdetermining ideological force. Indeed, harkening back to Faigley and the cultural studies/postmodern composition moment, one might say that teaching students to understand their subjectivity as shaped by ideology is a primary goal of instruction. But these really are quite different things. Even the moderns believe they can be “influenced” by the external world, and the critical pedagogue ultimately relies upon fostering an internal process of critical thinking. In teaching composing, though, we mostly focus on internal processes of invention. Following Latour’s lead, it shouldn’t be too difficult to demonstrate how composition follows the rest of the moderns in imagining an internal subject/psyche as the focal point of writing and pedagogy.
Chapter 8 turns to technology and to technical processes. Perhaps I should say “returns to” since technology is clearly Latour’s wheelhouse. Moderns are, of course, the technological creatures par excellence. It is technology and a technological way of thinking that separate them from the nonmoderns. However, moderns tend to make their technologies invisible. We only see them when they break (as Heidegger reminds us). The cliche about necessity and invention reveals our belief that technologies are an extension of will, and we measure them by their effectiveness. Latour writes, “effectiveness is to technology what objectivity is to reference, the way to have your cake and eat it too, the results without the means, that is, without the path of appropriate mediations” (218). Let me throw a few quotes at you from this chapter:
- If nothing in technology goes in a straight line, it is because the logical course–that of the episteme–is always interrupted, deflected, modified, and because in following it one goes from displacement to deviation… This is what we mean, quite banally, when we assert that there is a “technological problem,” an obstacle, a snag, a bug; this is what we are referring to when we say of someone that “he’s the only one with the technical ability” to solve a given problem. (223)
- Through technology, the being-as-other learns that it can still be even more infinitely altered than it thought possible up to that point. (226)
- If we always have to maintain the ambiguity of constructionism without ever believing in the assured existence of a builder, it is because the author learns from what he is doing that he is perhaps its author. In the case of technological beings, this general property is of capital importance, since technologies have preceded and generated humans: subjects, or rather, as we shall soon name them, QUASI SUBJECTS, have sprung up little by little from what they were doing. (230)
- The author, at the outset, is only the effect of the launching from behind, of the equipment ahead. If gunshots entail, as they say, a “recoil effect,” then humanity is above all the recoil of the technological detour. (230)
- all humans are the children of what they have worked on. (231)
I would ask, simply, what does it mean to put “writing process” in this context? Both in the context of “metamorphosis,” of a psyche/subject produced and maintained through external means (if external/internal can still operate) and in this context of the technological. Technology, as noted above, is riddled with glitches. It moves in a zig-zag fashion, as Latour repeatedly notes. Technologies bring animate and inanimate objects together, each lending some capacities, to produce a new technological relation, a metamorphosis that is solidified in some respects though it still must be continually repaired and adjusted (zig-zag). The technological mode precedes humans and it is through our encounter with it that we are altered, made human perhaps. What does it mean to conceive of writing processes in these terms? From my perspective, it fits in quite well with many of the things I’m already writing about, though I think Latour brings some great conceptual clarity to the matter. When we think of language, symbols, and other communication as nonhuman technological others that are not produced by humans but through which humans are produced, are metamorphosed, through this technological mode, then we have a very different way of thinking about writing process that moves beyond the expressivist voice, beyond the ideological subject, and even past the human actors in activity systems, to see a far more open and complex system, one that I think is far better situated to address the task of investigating the shift from print to digital media.
Here is perhaps the key point from these chapters: “now that we are beginning to free ourselves from the scenography of Subject and Object, the question becomes essential: if there are several ways to exist, and not just two, we can no longer define the one simply as the opposite of the other” (201). The “question” being what kinds of alterations are possible? If we can move beyond the mire of subjectivity that Faigley rightly identified, how do we open up the field of writing?
After the very busy administrative time at the beginning of the semester, I’ve been catching up on my reading. Ted Underwood has an interesting post on the relationship between data and genre as he investigates genre through an analysis of large digital collections. He observes:
The biggest problem was even less quantitative, and more fundamental: I needed to think harder about the concept of genre itself. As I model different kinds of genre, and read about similar (traditional and digital) projects by other scholars, I increasingly suspect the elephant in the room is that the word may not actually hold together. Genre may be a box we’ve inherited for a whole lot of basically different things. A bibliography is a genre; so is the novel; so is science fiction; so is the Kailyard school; so is acid house. But formally, socially, and chronologically, those are entities of very different kinds.
This concern has me thinking about an issue I’m raising in an essay I’ve written for a collection on digital rhetoric that my colleagues Bill Hart-Davidson and Jim Ridolfo are editing. My essay considers the role a speculative digital rhetoric might play in relation to this kind of digital humanities work (not Ted’s in particular, but big data more generally), and that role has to do with the kind of ontological questions that Ted is raising here. Specifically what is the ontological status of genre?
This question comes up in biology with respect to species as well, and it has been extensively discussed in speculative realism of various stripes. The underlying question is whether genre and species exist as “real” entities independent of the human discussion of them, or whether they are only epistemological creations that wouldn’t exist without humans to talk/think about them. So, yes, you could say that even in the latter sense genres would still be real, but their reality would clearly be different. This argument is probably easier to swallow for genre than for species, since genre already refers to products of human effort. Indeed this is how one might view the activity-genre theory approach, where genre is a system of human activity. So whether one is talking about poems and novels or lab reports and stock market analyses, one is participating in some human-oriented network, or at least that is how it would be described in activity theory.
The speculative realist question is whether or not a genre is a real object (or machine or assemblage, depending on your SR brand). To think of species as real in this sense is to imagine an object with qualities and operations independent of the activities of the individual animals that are part of that species. An analogous question might arise with genre: are genres independent machines with their own autopoietic function? Do they have qualities and operations that are independent of those attributed to them by the humans participating in their activity systems? I don’t think this is a question that can be answered “in theory.” It is a question for research. That is, even within our concept of genre, there may be some genres that are “real” (in the sense of being independent machines with autopoietic function) and others that are imaginary (to put some term to the other state). However, it is probably more useful to think of research as an attempt to discern the real operations behind a genre.
Here is where I think literary studies, rhetoric, and much of the humanities encounter an impasse. As Ted writes, “Skepticism about foundational concepts has been one of the great strengths of the humanities. The fact that we have a word for something (say genre or the individual) doesn’t necessarily imply that any corresponding entity exists in reality. Humanists call this mistake ‘reification,’ and we should hold onto our skepticism about it.” I would go farther and suggest that our dominant disciplinary paradigm would insist that corresponding entities cannot exist (as such), that genre is inescapably for us. Here is where that great skepticism becomes an article of faith. Of course, this is a large part of what a realist ontology seeks to redress.
So this is where speculative realism becomes conceptually valuable for this kind of DH work. It offers an ontological foundation that holds open the possibility that all of the big data analysis might describe something that is real (whether it is genre or something else). After all, only the most steadfast idealist would deny that literary artifacts are real objects composed through material processes. The question then becomes: can we know something about those real objects and processes? And if we can, might one thing we know about these objects be that they participate in some larger system of activity that imparts common characteristics to them (i.e., a genre)? Perhaps we will want to say (or at least some of us will want to say) that the answers to these questions will always point to “culture.” The Latourian in me is skeptical of the implied distinction here (though this points to another limit to the humanities’ capacity to be skeptical). However, this kind of conceptual approach might offer new avenues of exploration for big data analysis in literary studies.