Digital Digs (Alex Reid)
Earlier this week, Gregory Ulmer spoke on campus. I was happy for the opportunity to see him speak, as I hadn’t met him before and his work, especially Heuretics, has been important to my own since my first semester in my doctoral program. His talk focused on his work with the Florida Research Ensemble creating artistic interventions, which he terms Konsults, into Superfund sites. However, more broadly, Ulmer’s work continues to address the challenge of imagining electracy (n.b. for those who don’t know, electracy is to the digital world what literacy is/was to the print world). I’ve discussed Ulmer’s work many times here, so today my interest is in discussing it in terms of the Bérubé talk I saw last week.
In the Bérubé talk (see my last post), the humanities’ focus emerged from dealing with the promises and challenges of modernity and Enlightenment. Freedom, justice, equality, rationality: they all offer tremendous promise as universals and yet also seem unreachable and treacherous. So the humanities must play this role in the indeterminable pursuit of judgment. In this discourse of right/wrong, the humanities supplants religion (though obviously religion continues on), being perhaps less willing to settle on an answer than religion often seems to be.
Ulmer offers a different perspective. To the binaries of right/wrong (religion) and true/false (science), he offers pain/pleasure (aesthetics). As he notes this third segment comes from Kant as well but is only realizable as an analog to the first two in an electrate society, with the first being the product of oral cultures and the second the product of literate ones. He makes an interesting point in relation to Superfund sites and climate change more generally where we are largely able to recognize that destroying our climate is wrong and we are able to establish the scientific truth of climate change, but we appear to need to feel it as well.
In a fairly obvious sense, pain/pleasure seems a more basic segment, and one that is available to a wide range of animals, at least. What we get in a control society (Deleuze) and perhaps more so in a feed-forward culture (Hansen) are technologies that operate on this aesthetic level to modulate subjectivity and thought in a way that the symbolic behaviors of oral and print societies did not. That is, we’ve always been able to seduce, persuade, entice, repel, frighten, hurt, and so on with words, but at least there was some opportunity for conscious engagement there.
Many of the challenges Bérubé identified with judgment have to do with the orientation of the individual to societies: e.g. how we view people with disabilities or differing sexual orientations. However, one thing we might take from Ulmer’s argument is the realization that the “self” is a product of literate culture. If we see the self as a mythology, perhaps as the way we might view an oral culture’s notion of spirit, then perhaps the challenges of judgment that arise from Enlightenment become irrelevant, much like the challenge of appeasing gods is to moderns. In some respects we still want the same ends in terms of material-lived experience–we still want a good crop–we just stop appeasing gods or pursuing justice to get it. No doubt such notions seem absurd. Ulmer would suggest that they seem absurd because we are only beginning to grasp at them. He reminds us of Plato’s initial definition of humans as “featherless bipeds.”
In the place of the self Ulmer suggests an avatar as an “existential positioning system,” an analog to GPS. He didn’t get too far into this matter in the talk, but I am intrigued. Of course GPS is a technological, networked identification. The self is also a technological identification, a product of literacy. For Ulmer the EPS is likely image-based. I am interested, though, in its “new aesthetic,” alien phenomenological qualities as a kind of machine perception. While I argue that language is nonhuman, so both oral and print cultures had nonhuman foundations, electracy might so decenter the human as to allow us to feel the nonhumans in a new way. In this respect, an EPS might be a tool that shows us a very different way of inhabiting or orienting toward the world. Arguably, that’s what writing did.
In any case, trying to figure that out seems like a really interesting project for the humanities, one that would produce an outcome, even if the implications of that outcome may take decades to realize.
I attended a talk today at UB by Michael Bérubé on “The value and values of the humanities.” Without rehearsing the entirety of his argument, the main theme regarded how the notion of the human gets defined and struggles in the humanities over universal values. So while we largely critique the idea of universalism, we also seek to expand notions of humans and human rights in universal ways (in particular the talk focused on queer theory and disability studies, but one could go many ways with that), though even that encounters some limits (as when people raise concerns over whether Western values about equality should be fostered in non-Western cultures). The talk is part of a larger conference on the role of the humanities in the university and part of Bérubé’s point is that the intellectual project of the humanities, which he characterized as this ongoing, perhaps never-ending, struggle over humanness, continues to be a vibrant project and should not be confused with whatever economic, institutional, bureaucratic, political crisis is happening with the humanities in higher education.
I don’t disagree with him on these points, but my concerns run at a tangent to his claims. I think we can accept the enduring value of the humanities project as this ongoing struggle with Enlightenment and modernity. (I.e. we value justice, freedom, equality, rationality, etc. but we can’t really manage to figure those things out.) But, for me, this has little to do with valuing the particular ways that this project is undertaken or the scope of the project. That is, one can completely share in this project and still argue that many of the disciplines that comprise the humanities are unnecessary or at least do not require as many faculty as they currently have. So in the 19th century we didn’t really have literary studies. We had it in spades in the 20th century (literature departments were almost always the largest departments in the humanities and perhaps across the campus). In the 21st century? Well, we’ll see I guess. But those ups and downs would really have nothing to do with the value of this general humanities project. Because, in the end, the argument for or against the importance of literary study in the pursuit of this project has to be made separately. And the same would be true of any humanities discipline.
In fact, it’s not only true of every discipline, it is also true of every theory, method, genre, course, pedagogy, and so on. It does not necessarily mean that we as humanists should continue writing what we write, teaching what we teach, or studying what we study or that such practices should be propagated to a new generation of students and scholars. It doesn’t mean that they shouldn’t, either.
In the discussion following, Bérubé made an observation that anything with which humans interact could be fair game for humanistic study. I think his point of reference was fracking, but I started thinking about gravity, which obviously we all interact with. I also sometimes think about gravity when I think about nonhuman rhetoric as a force and how far it extends. If Timothy Morton is willing to argue that the “aesthetic dimension is the causal dimension,” then might one substitute rhetoric for aesthetic? That is, are all forces rhetorical? Or barring that, might any force have a rhetorical capacity? So, gravity.
Here’s the argument I came up with for saying gravity is rhetorical. Every living thing on Earth evolved under a common and consistent gravitational force. Obviously we didn’t all end up the same because gravity was just one of many pressures on evolution. But clearly our skeletal and muscular structure are partly an expression of our encounter with gravity. This is true not only in evolutionary or species terms but individual ones as well. If I grew up on the Moon then I would look different than I do (as any reader of sci fi knows). It was Michael Jordan’s relationship to gravity that made him so amazing, and we might say the same of dancers, acrobats, and so on. One might proceed to speak about architecture’s aesthetics in gravitational terms. Anyway, I think you get the idea. It might be possible to speak of gravity as an expression, not simply as a constant predictable force, but as an indeterminate force productive of a wide range of capacities that cannot be codified in a law. So while I wouldn’t want to argue that gravity is inherently rhetorical, that the Moon’s orbit of the Earth is rhetorical, I might argue that rhetorical capacities can emerge in gravitational relations.
Maybe you don’t want to accept that argument. Most humanists would not because the humanities, in the end, are more defined by their objects of study, their methods, and their genres than by these larger, more abstract questions of value. That is, no history or English department is going to organize itself in terms of curriculum or faculty around these questions of value. They organize around geographic regions and historical periods. We don’t hire people to study questions of value, we hire them to study particular literary periods or apply specific methods. We place highly constrained expectations on the results of those studies as well in the production of highly technical genres–the article, the monograph, etc.
So perhaps these broader questions act as a kind of gravitational force on the humanities, both drawing the disciplines together and shaping the particular expressions and capacities they reflect, but if so then that only points to the contingent qualities of those disciplines. In addition, clearly other forces shaped the particular forms humanities study has taken in the US–from massive shifts like nationalism and industrialization to policies regarding the building of universities (e.g. the Morrill Act or the GI Bill) or demographic shifts in US population. And, of course, I shouldn’t forget technologies.
I don’t think Bérubé would disagree with any of that, so in the end I guess I’m left thinking that the value of the humanities really tells us very little about its future.
Close reading is often touted as the offering sacrificed at the altars of both short attention spans and the digital humanities (though probably for different reasons). Take for example this piece in The New Rambler by Jonathan Freedman, which is ostensibly a review of Moretti’s Distant Reading but manages to hit many of the commonplaces on the subject of digital literacy, including laments about declining numbers of English majors: “fed on a diet of instant messages and twitter feeds, [students today] seem to be worldlier than students past—than I and my generation were—but to find nuance, complexity, or just plain length of literary texts less to their liking than we did.” But it’s not just students, it’s colleagues as well: the distant and surface readers, for example.
In the end though, Freedman’s argument is less against distant reading than it is for close reading: “distant reading doesn’t just have a guilty, complicitous secret-sharer relation to soi-disant close reading: it depends on it. Drawing on the techniques of intrinsic analysis of literary texts becomes all the more necessary if we are to keep from drowning in the sea of undifferentiated and undifferentiable data.” And as far as I can tell, the distant and surface readers do not really make arguments against close reading in principle. They may critique particular close reading methods in order to argue for the value of their own methods, but that’s a different matter.
So I’ll take up the task of arguing against close reading, just so there’s actually an argument that defenders of close reading can push up against if they want.
I don’t want to make this a specifically literary argument. Yes, “close reading” is a term that we get from New Criticism, so it has terminological roots in literary studies, but it’s come a long way from then. The symptomatic readings of poststructuralism, cultural studies, and so on are all close reading practices, even though they are quite unlike the intrinsic interpretive methods of New Criticism (relying on the text itself). As Katherine Hayles argues in How We Think,
close reading justifies the discipline’s continued existence in the academy, as well as the monies spent to support literature faculty and departments. More broadly, close reading in this view constitutes the major part of the cultural capital that literary studies relies on to prove its worth to society
To borrow an old cattle industry slogan, close reading is “what’s for dinner” in English Studies. And we have made a meal of it. Whether we’ve made a dog’s dinner of it is another matter. Regardless, in the contemporary moment, and certainly for the two decades or so I’ve been in the discipline, close reading has also been a central feature of rhetoric. All one has to do is think of the attention to student writing to see that, but it is also characteristic of the way many rhetoricians go about their own scholarship. So what I say here about close reading applies across English Studies.
Now, while I have just said that close reading is a wide-ranging practice, it is still one that is specific to print texts and culture. And, of course it is not just a reading practice, because if it were, how would we know we did it? It’s also a writing/communicating practice. That is, I’d think of close reading as a set of genres of print textual analysis.
The key question, from my perspective, is how these genres operate in a digital media ecology. I wouldn’t want to say that they don’t operate, because people still produce close readings, and I wouldn’t want to gainsay their claim that they do so. Instead, my point is that close reading can no longer operate as it once did. From the early days of the web, across computers and writing research and beyond, it was already clear that multimedia and hyperlinks shifted rhetorical/reading experience. But it has become much clearer in the era of high speed internet, mobile media, and big data, that text just isn’t what it once was. It doesn’t produce meaning or other rhetorical effects in the same way.
Besides that, reading and writing are so much more obviously and immediately two-way streets. As you read, you are being read. As you write, you are being written. Is that an “always already” condition? Maybe, but it certainly has specific implications for digital media ecologies. What does it mean to read your Facebook status feed closely when what is being offered to you has been produced by algorithmic procedures that take account of your own activities in ways of which you are not consciously aware? Even if you’re going to read some pre-Internet text (as we often do), you’re still reading it in a digital media ecology. Again, it’s not that one can’t do close reading. It’s that close reading can’t work the same way. Maybe close reading just comes to mean that we study something, that we pay attention to it, rather than indicating any particular method or strategy for studying, but that would seem to miss the point. For me, close reading rests on a particular set of assumptions about how text is produced and how it connects with readers, not only in terms of one particular text and one particular reader, but also the whole constellation of texts and readers: i.e., a print media ecology.
Arguing “against close reading” then is not an argument to say that we should stop paying close attention to texts. If anything, it’s an argument that we should pay closer attention to the ways in which the operation of text is shifting.
Terry Eagleton’s recent Chronicle op-ed is making the rounds. It’s a piece with some clever flourishes but with largely familiar arguments. What I think is curious is that the nostalgia for the good old, bad old days describes a university that we would no longer find acceptable. Looking back at the end of the 20th century, the greatest accomplishment of higher education might be the way that we managed to greatly expand access in the last two decades. We have clearly not found a sustainable way to afford the post-secondary education of this growing portion of the population, and many of our problems revolve around that challenge. However, many of the other changes that Eagleton laments are a result of other aspects of this shift. Students show up on campus with different values, goals, and expectations for higher education than they once did. Governments, businesses, and other “stakeholders” also have shifting views to which universities are increasingly accountable as the role of higher ed becomes further embedded in the economy with more and more jobs requiring it. As I mentioned in my last post, I’m doing some campus visits with my daughter. It may be that the “highly/most selective” colleges and universities still get to select students who fit their educational values, but that’s not the case at public universities.
When I read articles like this one, I tend to have three general reactions. First, I agree that there’s a lot wrong with the way higher education is moving (increased bureaucracy, decreased public support, etc.). Second, I find it odd and a little worrisome how “technology” is scapegoated, as if higher education hasn’t always been technological. Third, I find the nostalgia understandable but ultimately unhelpful. As much as we may not like where we are or where we appear to be going, trying to go back is not a viable or even desirable option.
One of the amusing parts of Eagleton’s essay is his description of how it used to be, when faculty didn’t finish dissertations or write books because such things suggested “ungentlemanly labor.” I don’t think we have many colleagues who still share those values, but we still object to notions of “utility.” Maybe it’s the lower middle-class upbringing, or maybe it’s the rhetorician in me, but I’m not insulted when someone finds something I’ve written or a class I’ve taught to be useful. To the contrary, I actually prefer to do work that other people value and that makes their lives easier or better, even though that might make me “ungentlemanly.”
In a couple recent conversations I’ve had around this topic, I have heard repeated the value of writing a book that maybe only a handful of people might read. I was struck by the widespread appeal of this value, at least among the audiences of humanities faculty and grad students who were present. I think I understand why they feel that way. They want to pursue their own interests without having any obligation to an audience. If Eagleton’s old colleagues found writing itself to be ungentlemanly then many contemporary humanists find the idea of writing for an audience (or writing something that would be useful) to be an anti-intellectual constraint.
Given that perhaps as a set of disciplines we are not particularly inclined to rhetorical strategies, here’s some fairly straightforward advice. It’s not an especially effective argument to say that everything about the contemporary university is going to hell and that we need to change everything so that we can create conditions where I can pursue my own interests regardless of whether they result in anything useful or even produce something that anyone else would bother reading, because the humanities are inherently good and must be preserved. Perhaps that seems like a hyperbolic version of this position, but if so, only barely. A better rhetorical strategy would be one that said something along the lines of “here’s how we believe higher education should be adapting to the changing demands of society, and here’s what we in the humanities would do/change to respond to those challenges.” I see a lot, A LOT, of digital ink spilt on the humanities crisis. I almost never see an argument from within the humanities about how the humanities itself should change. It’s almost always about how everyone else should change (students, parents, politicians, administrators, employers, etc.) so that we don’t have to.
Why is that?
By now this is a familiar commonplace in our discussions about the crisis of higher education. Here’s one recent example by Derek Thompson from the Atlantic that essentially argues that it’s less important that you get accepted into a great college than that you be the kind of person who might get accepted. However, as is painfully evident, the whole upper-middle class desperation of “helicopter parents” and “tiger moms” and whatnot to get their kids into elite schools and away from the state university systems that they’ve helped to defund through their voting patterns creates a great deal of ugliness. I’m assuming there’s no news for you there.
Personally I am in the midst of this situation. My daughter is a junior and we’ll be headed to some campus visits next week during her spring break. Her SATs put her in the 99.7th percentile of test takers, and the rest of her academic record reflects that as she looks to pursue some combination of math, physics, and possibly computer science. We live in a school system with a significant community of ambitious students (and families), where the top of the high school class regularly heads out to the Ivies. I’m sure it’s not as intense as the environment of elite private schools in NYC, but it’s palpable. This also has me thinking back to when I was headed out to college, as a smart kid (“class bookworm” as my yearbook will evidence) in an unremarkable high school, a first-generation college grad going to a state university, coming out of a family that had its financial struggles until my mom remarried when I was a teenager. I don’t mean to offer that as a sob story (because it isn’t) but only to say that my own background gives me a lot of misgivings about the value and faith we put in this race to get into elite colleges.
I think it’s easy to see the ideological investments underlying the way we try to answer this question. Part of the American Dream is believing that education is the great democratizer, that it is meritocratic, and that in the end, overall, the brightest and best students are rewarded. Part of that is also believing that intelligence is not really a genetic trait and that socio-economic contexts are not a roadblock; almost anyone can succeed if they put their mind to it. For those on the Left (i.e. the circles I mostly travel in), there is a clear recognition of socio-economics as largely driving opportunities for academic success, and that’s hard to deny when one looks at the big picture. So that tells you that on a societal level education on its own does not solve economic disparity. However, it doesn’t tell you much on the individual level where ultimately what you want is some sense of agency rather than having your agency taken away by socioeconomics or admissions boards.
Derek Thompson describes the situation he is investigating as affecting the top “3%” of high schoolers, though it’s probably more like 1% if one is thinking of the top 20-25 schools in the country. Here’s what I think about those kids, including my daughter… They’re going to do OK if they manage to avoid having a nervous breakdown trying to get into college. Maybe an Ivy League degree is a surer route to being CEO or senator; it almost certainly is. But you’re probably as likely to be a professional athlete or movie star as become one of those, particularly if you aren’t already a senator’s son (cue the Creedence). Even though we’re still talking about tens of thousands of families, focusing on the top 1 or even 3 percent seems fairly odd. In all honesty it probably is a little beyond the scope of pure individual will to get into a top, top college. You probably do need some natural-born smarts and some socio-economic advantages to have a decent shot.
If we’re going to have a conversation about the importance of where you go to college, it makes sense to me to talk instead about the students in the middle of the college bell curve. What’s the difference between the university ranked #50 and the one ranked #150? Is there a big difference between Florida, Buffalo, Tennessee, and West Virginia? Setting aside the Ivy bias, what’s at stake at going to Emory or Virginia (a top 20 school) rather than Wisconsin or Illinois or RPI (a top 50 school)? From school #20 to school #150 we’re still talking about students in the top 5-20% of SAT test-takers. I’m thinking all those students are reasonably well-positioned to get a good education that leads to a rewarding career. And what’s the difference between the student in the 80th percentile of college applicants who goes to a big public university and one in the 50th or 60th who goes to a regional state college? And how do these differ from the ones coming up through community colleges?
It seems to me that those are much more interesting questions than the ones about the top 1 or 2%, even if that’s where my own kid is drawing my personal attention.
Yesterday I attended a roundtable on this topic on my campus. These things interest me both because I have the same concerns as most of us do about these issues and because I am interested in the ways faculty in the humanities discuss these matters. So here are a few observations, starting with things that were said that I agree with:
- The larger forces of neoliberal capitalism cause problems for higher education and the humanities in particular.
- There is a perception of humanistic education as lacking value which needs to be corrected.
- We need to take care with any changes we make.
Certainly it’s the case that broader cultural and economic conditions shape, though do not determine, what is possible in higher education and the humanities. This has always been the case. When we invented the dissertation, the monograph, and tenure as we experience them today (which was roughly in the early-mid 20th century), there were cultural-economic conditions that framed that. It’s important to recognize that graduate education is part of a larger network and ecology of relations, that you probably can’t just change it without changing other things.
In terms of actual graduate education issues, our discussion focused on two key points, I think: the possibility of revising the dissertation and concerns about the job market. These are two of the common themes that come out of the MLA report. Here are my basic thoughts on these two matters.
- It’s very difficult to change what the dissertation is like without also changing the scholarship one does after completing the degree.
- The casualization/adjunctification of the job market is tied to the operation of graduate education and the work of tenured faculty. You can’t change it without changing those other things.
The upshot, from my perspective, is that while I completely agree that we need to fix the way higher education is funded, to reaffirm our understanding of it as a social good and, if necessary, as a strategic national interest, AND that we need to intervene in the popular discourse about humanities education to make clear the value of the things we can do, none of that will be enough on its own. It will also be necessary for us to change what we do as well.
Unfortunately that’s the part I hear the least about and also the part that produces the most resistance. It’s unfortunate because it’s the element over which we have the most direct control. Mostly what I hear are defenses of the value of the work that we do and how people who want us to work differently don’t really understand us. Both of those things might be true. There is value in the work of the humanities, and probably at least some of the people who want humanities to change may not understand the work very well or appreciate that value. But ultimately I don’t think that’s the point either.
So I would frame graduate education reform with the following question: what would it take for us to dethrone the monograph as the central measuring stick of scholarly work in the humanities? You would think that the answer should be “not much.” After all, it’s got to be less than 10% of four-year institutions that are effectively “book for tenure, two books for full professor” kinds of places. Even if we just switched to journal articles and chapters in essay collections (i.e. to other well-established genres) that would be enough. The problem, I would say, is that humanities professors want to write books, or at least have a love/hate relationship with the prospect.
No doubt it is true that one can accomplish certain scholarly and intellectual goals in book-length texts that cannot be achieved in other genres. That’s the case with most genres: they do things other genres do not. How did we become so paradigmatically tied to this genre? So tied that many might feel that the humanities cannot be done without monographs.
If our scholarship worked differently then our graduate curriculum could as well. Not just the dissertations, but the coursework, which in many cases is a reflection of a faculty member’s active book project. Without the extended proto-book dissertation, maybe there would be more coursework, more pedagogical training, more digital literacy (to name some of the goals in the MLA report). If there was more coursework then maybe you’d need fewer graduate students to take up seats in grad courses and make the courses run. If you had three years of coursework instead of two, then you’d need to enroll 1/3 fewer students each year to fill the same number of classes. If you didn’t have dissertations to oversee, then you could free faculty from what can be a significant amount of work, especially for popular professors.
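The cohort arithmetic above can be checked with a quick back-of-envelope calculation. This is only a sketch with hypothetical numbers (the 60-seat figure is invented for illustration); the point is just the ratio: in steady state, the seats filled each year equal the years of coursework times the size of the incoming cohort.

```python
# Back-of-envelope check of the coursework arithmetic (hypothetical numbers).
# In steady state, seats filled per year = years_of_coursework * cohort_size,
# since each cohort takes courses for that many years.

def cohort_needed(seats_to_fill: int, years_of_coursework: int) -> float:
    """Incoming cohort size needed per year to keep the same seats filled."""
    return seats_to_fill / years_of_coursework

seats = 60  # hypothetical: grad-course seats a program must fill annually

two_year_cohort = cohort_needed(seats, 2)    # 30 students per year
three_year_cohort = cohort_needed(seats, 3)  # 20 students per year

reduction = 1 - three_year_cohort / two_year_cohort
print(f"Cohort shrinks by {reduction:.0%}")  # one third fewer students
```

Whatever the actual seat count, moving from two years of coursework to three cuts the required annual intake by exactly one third, which is the figure in the paragraph above.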
I’m not sure if that would impact adjunctification much, but at least it would reduce the number of students going through the pipeline, which is probably about as much as one could ask graduate education reform to accomplish on its own in this matter.
Now I don’t think any of these things will happen. I am very skeptical of the capacity of the humanities to evolve. Other disciplines across the campus have been more successful at adapting to these changes but they are not as deeply wedded to print literacy as much of the humanities are. However, until we can recognize that it is our commitment to the monograph that drives the shape of graduate education, I don’t think we can do more than make cosmetic changes.
I’m working at a tangent from my book manuscript today, preparing a presentation for a local conference on “Structures of Digital Feeling.” If you have the (mis)fortune to be in Buffalo in March, I invite you to come by. Anyway, my 15 minutes of fame here involve wresting Williams’ “structure of feeling” concept from its idealist ontological anchors, imagining what real structures of feeling might be, and then putting that to work in discussing “debates” around the digital humanities.
Fortunately, Richard Grusin offers the perfect opening for this conversation as he is already discussing “structures of academic feeling” at the MLA conference in his juxtaposition of panels about the “crisis” in the humanities with the more positive outlook of DH panels. (I haven’t been to MLA in a few years so I wonder if this distinction still holds.) The quoted phrase in the title of this post comes from his Differences article on the “Dark Side of the Digital Humanities” (a reformulation of the panel presentation of the same title). It’s a response to the familiar DH refrain of “less yack, more hack.”
As he argues:
Specifically, because digital humanities can teach students how to design, develop, and produce digital artifacts that are of value to society, they are seen to offer students marketable skills quite different from those gained by analyzing literature or developing critiques of culture. This divide between teachers and scholars interested in critique and those interested in production has been central to the selling of digital humanities. My concern is that this divide threatens both to increase tensions within the mla community and to intensify the precarity running through the academic humanities writ large.
His objection to this is twofold. First, he objects to the suggestion that he doesn’t make things too (“tell that to anyone who has labored for an hour or more over a single sentence”). Second, he suggests that making things in the absence of “critique” echoes “the instrumentalism of neoliberal administrators and politicians in devaluing critique (or by extension any other humanistic inquiry that doesn’t make things) for being an end in itself as opposed to the more valuable and useful act ‘of making stuff work.’” So the net argument runs something like this: all humanists make things, but insofar as we don’t, making things is bad. So while Grusin wants DHers to stop making the “invidious distinction between critique and production,” he still wants to make it himself in order to critique DH.
In my view, this is less an argument between methods than an argument for the primacy and necessity of “critique.” It is an argument that says the humanities are essentially defined by critique. What else can critique be expected to argue? I am reminded of Vitanza’s “Three Countertheses” essay where he playfully asks if we can imagine CCCC having as its conference theme the question “Should Writing Be Taught?” We might similarly ask MLA to have as its conference theme “Should We Be Doing Critique?”
Given this connection (in my head, at least) when I read about “invidious distinctions between critique and production,” I don’t think about DH. I think about rhetoric. I think about how literary studies established these distinctions in order to make critique a master term and devalue production as “skills.” I guess it’s not so funny now that the shoe is on the other foot.
What is funny though is the sudden concern with the precarity of labor. Here Grusin is rehearsing his earlier argument about the role that DH plays in creating alt-ac, non-tenure academic work. It’s a legitimate concern, but it’s a little like focusing on recycling your beer cans while driving a Hummer. If there’s a responsible party for adjunctification in English Studies, it’s got to be the literary critics who turned composition into a mill for graduate student TAs who then turn into adjuncts. I will not ignore rhet/comp’s complicity in this, but it is the “invidious distinctions between critique and production” that allowed writing instruction to become a place where this kind of labor practice could evolve.
But let me end on a point where I agree with Grusin because really I find much of his work valuable even though I disagree with him here. Near the end he writes:
Digital media can help to transform our understanding of the canon and history of the humanities by foregrounding and investigating the complex entanglements of humans and nonhumans, of humanities and technology, which have too often been minimized or ignored in conventional narratives of the Western humanistic tradition.
Grusin may not think of himself as a digital humanist, and by some narrow definition of the term he isn’t. But he’s as much a digital humanist as I am. This is at least partly the way he sees his own work, and it’s a fair description of my approach to digital media as well. And I suppose that given my deep investment in the “theory” of DeLanda, Latour, Deleuze, and so on, one might think I’m hip deep if not neck deep in critique as well. But I don’t look at it that way. I don’t look at it that way because, as I see it, critique only exists by invidiously distinguishing itself from production. However, that distinction is unstable. Production can be uncritical, but criticism cannot exist without being produced. It’s the idealism of critique that prevents it from seeing this, that prevents it from seeing that being tied to books, articles, genres, word processors, offices, tenure, etc., etc. instrumentalizes critique as much as computers and networks instrumentalize DH. The project Grusin describes addresses the division between critique and production, but critique doesn’t really survive that. Critique needs to be the pure private thought of the critic in order to be what it claims to be. Once critique becomes a kind of production, a kind of rhetoric and composition, it loses its hold as the master discourse of the humanities.
This is an article that came out last year in Differences (25.1), but my library doesn’t have access to the most recent issues, so I’m catching up. I’m writing here about it in part because it connects with my recent post on reading practices, as well as more generally with interest in digital matters. In the past I’ve certainly taken some issue with some of Galloway’s arguments, though I regularly use his Gaming book in my course on video games. Here, I think my overall reception of his argument is more balanced.
Galloway begins by noting that in the contemporary humanities one finds a wide range of methods: “methodology today is often more a question of appropriateness than existential fit, more a question of personal style than universal context, more a question of pragmatism than unwavering conviction.” He applies this observation equally to quantitative investigation and ethnographic interviews as he does to the “instrumentalized strains of hermeneutics such as the Marxist reading, the feminist reading, or the psychoanalytic reading.” However, “such liberalism nevertheless simultaneously enshrines the law of positivistic efficiency, for what could be more efficient than infinite customization?” I think he has a point here, but it’s a curious one. On the one hand, there’s the defense of academic freedom that insists on allowing for this “liberal ecumenicalism” as he terms it, but then perhaps also the realization that such a position might undermine the critical-oppositional effect one might hope to have. I think Galloway is accurately pinpointing a site of consternation for many humanists here, but let me bookmark that thought for a moment.
The main interest of the article is Galloway’s titular cybernetic hypothesis, which he describes as “a specific epistemological regime in which systems or networks combine both human and nonhuman agents in mutual communication and command.” I find this reasonable though I probably need to think through the particulars of his argument more thoroughly. Presumably, one can examine any cultural-historical moment and find one or more “epistemological regimes” at work. I would certainly argue, and I imagine Galloway would agree, that this cybernetic regime begins in particular places and spreads unevenly, so that not all humans (or nonhumans) are equally invested in this regime. I was particularly interested in his observation that
This has produced a number of contentious debates around the nature and culture of knowledge work. Perhaps the most active conversation concerns the status of hermeneutics and critique, or “what it means to read today.” Some assert that the turn toward computers and media destabilizes the typical way in which texts are read and interpreted.
As I wrote in a recent post, I share this interest in the shift in reading practices (which, I would add, are interwoven with a shift in composing). As it turns out though, the crux of the matter seems to lie in how one values this shift. Following his historical investigation Galloway writes, “The debate over digital humanities is thus properly framed as a debate not simply over this or that research methodology but over a general regime of knowledge going back several decades at least. Given what we have established thus far—that digital methods are at best a benign part of the zeitgeist and at worst a promulgation of late twentieth-century computationalism.” I don’t have much of an issue with this either. Presumably we can say essentially the same thing about the pre-digital or print humanities–that they were at best a benign part of the zeitgeist of the early-mid twentieth century and at worst a promulgation of industrialization and nationalism.
Right? I’m less certain Galloway would agree here. And here is why, and here is also where I disagree. Galloway contends that “the naturalization of technology has reached unprecedented levels with the advent of digital machines,” by which he means that they operate invisibly in our lives. I’m not sure that’s true. Like most middle-aged Americans, I certainly feel like my life is more technological than ever: my smartphone, the Internet, all these media devices, everything has got a computer chip in it (even the dog), etc. But it doesn’t seem “natural” to me, and it certainly isn’t invisible. Technology probably seemed more natural and invisible to me 30 years ago. Are our lives more technological and less natural than those of Native Americans in the 17th century? How about factory workers in New York in the 1880s? For Galloway’s argument it is necessary to be able to answer Yes to those questions. He wants to be able to argue that increased technologicalization means an increased ideological-hegemonic power that we, especially we in the humanities, must resist.
This leads to a second point of disagreement. He writes, “Ever since Kant and Marx inaugurated the modern regime of critical thought, a single notion has united the various discussions around criticality: critique is foe to ideology (or, in Kant’s case, not so much ideology as dogma).” My disagreement here is more subtle. I agree with the history here, and it’s probably also accurate to say that those who engage in critique view it as a “foe to ideology.” However, to return to where we started, if we view “theory” as a toolbox of methods, as Galloway puts it “more a question of pragmatism than unwavering conviction,” then how is it really a foe to ideology? Isn’t it just ideologies all the way down? Like many others, Galloway wants to connect interest in the digital humanities with the effects of neoliberalism on higher education, such as the adjunctification of faculty. However, significant interest in the digital humanities is really just a decade old and those neoliberal effects started in the 80s. If we really wanted to play the historical coincidence game, didn’t the rise of cultural studies and critical theory begin in the 80s? Critique and theory may claim to be a foe to ideology just as technologies may claim to liberate us, but I would suggest skepticism toward both claims. I would hypothesize that the institutional and disciplinary operation of critical theory is just as complicit in the neoliberal transformation of the humanities as digital technology has been, and more so than the fledgling digital humanities.
However, despite these disagreements, in the end, I find myself in agreement with much of Galloway’s project, which he describes as “a multimodal strategy of producing academic writing concurrent with software production, the goal of which being not to quarantine criticality, but rather to unify critical theory and digital media.” I’m sure we have different ideas of what that would look like, but that’s OK too. I have no more invested in promulgating some corporate view of a pseudo-technotopia than I do in preserving some disciplinary vision of a fading print culture, so I am interested in studying the ways emerging technologies shape rhetorical practice without taking as an assumption either that a) those technologies uniformly represent the imposition of some evil hegemonic power or b) that print technologies were better. Nor do I think the only other available position is technophilia. If we want to hold media technologies accountable for the nasty things done by the cultures that use them then… Is it really necessary to finish that sentence?
So, to end with the “reading” issue. Yes, reading practices have changed with the media ecology in which they operate. I suggest that we try to understand those changes, that we invest in exploring, experimenting with, and establishing digital scholarly and pedagogical practices as we did with industrial-print practices a century or so ago. Will we end up with something that can operate in opposition to the dominant ideology? I’m sure we will… at least as much as we did in the past.
I read Steven Johnson’s How We Got to Now this weekend, a book that examines six technological trajectories: glass, cooling, sound recording, clean water, clocks, and lighting. These histories cut across disciplinary and social areas following what Johnson calls the “hummingbird effect” (after the co-evolution of hummingbirds and flowers). These are not technological determinist arguments but rather accounts of how intersections among innovations open up unexpected possibilities. This is the “adjacent possible,” a term Johnson borrows from biologist Stuart Kauffman, though here Johnson is applying it to technological rather than biological evolution. If there is a central thesis to Johnson’s book it is “the march of technology expands the space of possibility around us, but how we explore that space is up to us” (226).
Overall, it’s an interesting book, well-written as you’d expect, with many curious narratives. I was especially interested in the glass chapter. However, I was taken from the start, where Johnson begins with a reference to Manuel DeLanda’s robot historian from War in the Age of Intelligent Machines, where DeLanda suggests that a robot would have a very different perspective on our history than a human. Johnson agrees and takes up this challenge, writing “I have tried to tell the story of these innovations from something like the perspective of DeLanda’s robot historian. If the lightbulb could write a history of the past three hundred years, it too would look very different” (2). In other words, Johnson suggests something that is akin to a kind of alien phenomenological approach. I can’t say that he necessarily delivers on that. I’m not sure that silicon dioxide’s view of its becoming glass through its interactions with humans over the past few thousand years would make much, if any, sense to us. However, the speculation could be interesting.
The glass chapter offers a couple interesting twists. It addresses the development of optics–reading glasses, microscopes, telescopes. It jumps to the industrial development of fiberglass as a building material, and then joins the two in fiber optics. However, Johnson takes a sidestep back to mirrors, where he takes up Lewis Mumford’s argument that the mirror initiated a new conception of the self and self-consciousness among Europeans. Again, the mirror didn’t determine that shift, but it opened an adjacent possibility space.
You can see how all of these innovations come together in social media spaces. No server farms without cooling. No computer chips without super clean water or quartz timing. No Internet without fiber optics and digital audio communication. Throw in the mirror effect and one gets Selfie City, for example.
Johnson’s use of the adjacent possible works well enough for him, but for me it still leaves too many agency questions open. I prefer the more DeLanda-inspired notion of capacities or the Latourian idea of how we are “made to act.” Still Johnson does make a convincing argument for the ways in which seemingly unrelated events conspire to create a new opportunity, where a “slow hunch” (to use a term from one of his earlier works) suddenly becomes realizable because of a discovery somewhere or a change in economic conditions somewhere else. I suppose one might think of it as a nod to kairos.
I want to keep this in mind for the particular questions that concern me around the intersections of digital rhetoric and higher education. Maybe I have a slow hunch too, which does seem strange in the rapid turnover of digital innovation. (And when I say “I have” I don’t mean to suggest others are not seeing something similar, either.) Johnson points out how Edison at first imagined people using the phonograph to record audio letters to send to one another and Bell imagined people using the telephone to listen to orchestras play live music. The reversal seems funny from our perspective, though today, the process of “softwarization” (to use Manovich’s term) means that we have smartphones that combine all of these activities. Watch a video or video chat or watch a live event or record a video and share it with others. What happens when classrooms become softwarized, which they obviously already have? Are there analogous misunderstandings?
The slow hunch relies on a rather subtle misunderstanding about the kind of people that digital technologies mediate. Our expectations about digital learning presume interiorized subjects of the sort that occupied the possibility space of modern life, maybe starting with the mirror. Our dissatisfaction with digital pedagogy fundamentally lies in our awareness that we do not act the same way online as we do in class, and we don’t even act the same way in class anymore because of digital media. We put the lion’s share of our energy into trying to make digital pedagogy conform to its predecessor, in part perhaps because we share in the rather fantastical belief that the virtual world is immaterial and can be made to be like anything.
I share in Johnson’s rejection of techno-determinism, though I have a more complex vision of agency than he is willing to share in his book. It does make sense to me that we need to explore the possibility spaces here. As with many of the stories in this book, what comes about will likely be shaped by economic realities as much as anything else. However, if we start by investigating the different kinds of subjects we might become and then imagining how those subjects would learn, we start to illuminate those capacities, those possibility spaces, in ways that might be taken up more materially and economically.
I’ve been working recently through some concepts on attention and reading: Katherine Hayles on deep attention and hyper-reading, Richard Miller on slow reading, surface reading, Moretti’s distant reading, and so on. It’s part of my larger project taking a “realist rhetorical” approach to media ecologies and, in particular, that part of the ecology that I term “learning assemblies:” institutional assemblages that have explicit pedagogical operation. This has been juxtaposed for me by two recent on-campus conversations about teaching research writing.
At a workshop on Tuesday supporting writing-in-the-disciplines, Deb Rossen-Knill, our colleague from Rochester, was discussing with the faculty the ongoing challenge of supporting students as they seek to synthesize source materials into an argument of their own. Coincidentally, a similar topic was raised in a department meeting. Of course, it is not a surprising topic. In fact, it’s one of the commonplaces of writing instruction. I have certainly had many conversations in our program regarding the research paper in first-year composition.
So here’s the point of intersection. Miller does a good job of explaining the basic situation here, one which I would articulate in terms of media ecologies. Particularly for the undergrad, but really all of us have been impacted, the difference between the pre-1995 or even pre-2000 research paper and the contemporary situation is the availability of information. Again, we all know this. And this data abundance demands a different kind of reading, even just to sift through the results to find what one wants to read more closely. Furthermore, just as we have writing in the disciplines, we might also want to have reading in the disciplines, as we clearly do not all treat texts in the same way.
And I want to take a sideways step here, drawing on Lev Manovich’s concept of “softwarization” (in Software Takes Command). There he discusses how various analog media become translated into digital-software forms and then, out of that ecological shift, begin to proliferate new media species. Again, I think we know this. When texts, photos, films, and audio recordings become digitized, they lose some of the characteristics related to their analog media, some get translated (which implies transformation as well), and then some new characteristics get added. Print text and digital text are not the same things, but this gets even more apparent as new textual species start to emerge and the level of differentiation between the two begins to grow (e.g. reading a novel vs. reading Twitter). To add to that, there really aren’t “print texts” anymore, at least not in the sense that they existed 30 years ago. Not only have they changed in the sense that they exist in a very different media context but they are composed in a different media ecology as well.
So not only do we have disciplinary differences in reading, we also need to recognize that “reading” now refers to our encounters with a wide range of different species in our media ecology.
Back to the intersection with research writing. Student writers can face several obvious challenges in the “research paper” assignment:
- inexperience with the disciplinary genre in which they are being asked to write;
- inexperience with disciplinary practices of conducting research and reading;
- lack of knowledge/context for the academic sources they are asked to cite;
- lack of intrinsic motivation or curiosity in the research task they’ve been assigned;
- any number of other, competing demands on their time and attention.
The truth though is that even as experienced academic writers, we face versions of these challenges: struggling with writing well in the genre, laboring through the research process, dealing with difficult texts, staying motivated and on task.
My (brief) point here is that these matters all shift along with our media ecology. I disagree, somewhat, with Hayles on this matter in her description of “deep attention.” At points (in her 2007 Profession article, for example) she describes deep attention as a generalized cognitive skill, one that applies equally to reading a Victorian novel and solving a complicated math problem. I don’t think it’s that generalizable, and I know plenty of people who can attend to a novel but not a math problem and vice versa. I also know plenty of folks from older generations who cannot do either. Where I do agree is that cognitive-attentional processes emerge from relations among objects (human and otherwise). And I would assert that we are not helpless in the face of these media shifts.
Just as we developed highly specialized disciplinary and professional reading and writing practices in a print culture (that were different from popular-cultural reading and writing practices), presumably we can do the same in a digital culture. I think it’s fair to say we haven’t quite figured those things out, but it strikes me as something worth addressing.
So, if you’re teaching a class with a “research paper” in it and trying to figure out how to articulate your assignment, I suppose the first question I’d ask you to consider is “how is your assignment different from one you might have given (or received) 15-20 years ago?” Because I assure you that even if your assignment isn’t different, everything else about the media ecology in which it is situated is.
If you do not know then Wikipedia will happily tell you that the 1968 photo known as “Earthrise” (unsurprisingly taken by an astronaut) has been called the “most influential environmental photograph ever taken.” Why? Presumably because it presents the Earth as a cohesive yet fragile entity. In any case, “Earthrise” captures something about the ecological turn in the humanities from ecocriticism to ecocomposition. The general ecological/environmental movement asks us to rethink our relationship to the world. The world is not ours to exploit nor is it simply the backdrop for our history. Perhaps it should be obvious by now that we are actors on a global scale in our ecology. Ecohumanities movements take up these environmental concerns but also adopt an ecological view toward their traditional objects of study. E.g., what does it mean to view composing as an ecological process? In short, one decenters the human from traditionally anthropocentric studies of what we have so firmly understood as human activity that we have called them the humanities.
Distributed cognition moves in this direction with thinking. Typically when I see discussions of distributed cognition, they are more along the lines of extended mind, of tools for thought. That is, they illuminate how various technologies allow us to engage in cognitive activities we wouldn’t normally be able to do. Think of a calculator or even, a la Walter Ong, how writing technologies shape our thinking. However one might also conceive of distributed cognition as the way humans and machines interact to undertake cognitive tasks no individual human could accomplish. Edwin Hutchins’ classic example is the docking of a naval vessel, but I’m thinking Wikipedia.
Eco-cognition would seem to be another matter altogether. One might think of the noosphere. Indeed some have compared the ecocritical concept of the anthropocene with the noosphere, as both point to a shift where human cognitive-technical capacities develop to the point of having an impact on the global ecology. The noosphere suggests the emergence of some collective human consciousness, a shared ecology of human thought. The noosphere though does tend to keep the human at the center. Another angle would point toward panpsychism, where all objects are thinking or at least might be thinking.
I have a slightly different interest. If thinking is real, then why would it not join other real things and processes in an ecology? If thinking is distributed then it partly, maybe largely, happens beyond the purview of our conscious experience of it. Just as our subjective experience of ecology in general is incomplete, so too is our subjective experience of cognition.
So perhaps cognition needs a kind of “Earthrise” moment, one that captures the shared yet fragile context in which we think.
Organization is a common topic of discussion in writing instruction. Often, students are asked to produce “well-organized” essays, and organization is a familiar criterion for assessment. Organization generally refers to the rhetorical canon of arrangement, but somehow it makes more sense to say to students that their essays should be well-organized instead of well-arranged. Organization also implies denser connection, stratification, and perhaps even hierarchy than arrangement does.
But that’s what I want to get after here.
Latour brings up organization as one of his “modes of existence.” Combined with attachment and morality, organization represents his effort to displace social explanations founded on a spectral notion of The Economy. (I haven’t given it much thought but it might be interesting to match these three with the rhetorical modes of persuasion: pathos/attachment, ethos/morality, logos/organization.) In my reading, the key point about Latour’s organization is that it is both easy to trace and paradoxical: “Easy, because we are constantly in the process of organizing and being organized; paradoxical, because we always keep on imagining that elsewhere, higher up, lower down, above or below, the experience would be totally different; that there would have to be a break in the planes, in levels, thanks to which other beings, transcendent with respect to the first, would finally come along to organize everything” (389).
This certainly applies to the way we approach organizing writing. We are constantly in the process of doing it. And yet organization is always somewhere else. It’s not here in this word or sentence or paragraph. Where is it? I was just here, organizing. To make that happen I have a script, which I am above and below. Take the example of the book I am working on (or avoiding working on by blogging instead). As Latour would point out, I am the writer of the script I will follow. (That’s not to say I have free will. It is instead to say that I am made to act or in this case, made to script.) I am also the person who must carry out the script: above and below. And yet “Organization never works because of the scripts; and yet, because of the scripts, it works after all, hobbling along through an often exhausting reinjection of acts of (re)organization, or, to use a delicious euphemism from economics, through massive expenditures of ‘transaction costs.'”
This is where I want to think about the glitchy character of real rhetorical relations. There’s always this patchy, hit or miss quality to communication (as there is to all relations). There are these extensive, ecological assemblages with which we are contending as both writers and readers. We have scripts to follow, but we have many competing scripts to follow, so many different ways to be organized. I do not mean to suggest that a text cannot be well or poorly composed, organized or disorganized. Instead, the point is that organization is not some meta-entity, some transcendent being, that comes along to impose itself.
Thinking back to yesterday’s post on digital literacy… One of the many complaints lodged against digital communication is its unsuitability for the tasks of “rigorous” academic work. Yes, I know, you’d think we could get past it. But I think we still struggle with imagining how fully digital scholarship would operate, would be organized (as opposed to the skeuomorphs of the PDF essay, for example). Though we should know better, I think we still imagine something transcendent in the organization of the essay that allows it to be academic. What happens when we discover that the essay turns out to be like the façades on those Hollywood Western movie sets? There’s no transcendent organization there, just more texts and readers, editors, publishers, computers, offices, meetings, reviewers’ letters, libraries, databases, etc., etc. Digital scholarship gets organized in the same way.
In fact, if we want students to produce well-organized essays, we might think in similar terms about the networks, assemblages, and ecologies in which they compose. That’s not to say that student-writers are not actors in this matter, that they are not made to act. They are actors following scripts, scripts they have a hand in authoring.
It’s been a few years since I wrote about the annual Horizon Report, put out by EDUCAUSE and the New Media Consortium, but the 2015 report recently came out. There’s a lot of interesting information in there, but I want to speak to one particular issue, digital literacy. Basically, the report identifies three categories–trends, challenges, and technological developments–and focuses on six items in each category. So there are 18 different items in the report, and I’m talking about one of them here.
The report identifies “improving digital literacy” as a significant but solvable challenge, one “that we understand and know how to solve.” I guess I’m glad to hear that. I suppose this might be a semantic matter. What do we mean by “improving”? And what do we mean by “digital literacy”? In terms of the latter, the Report has an ambitious if vague definition.
Current definitions of literacy only account for the gaining of new knowledge, skills, and attitudes, but do not include the deeper components of intention, reflection, and generativity. The addition of aptitude and creativity to the definition emphasizes that digital literacy is an iterative process that involves students learning about, interacting with, and then demonstrating or sharing their new knowledge.
I do think this recognition that digital literacy is an ongoing process of learning rather than a one-time knowledge dump is a recurring theme in the report. The report also divides strategies for addressing the challenge into areas of policy making, faculty leadership, and practice. So it points to new policies being established by governments and new learning standards built by professional organizations. It recognizes the importance of ongoing professional development for faculty (though this ties into the Report’s “wicked challenge” of figuring out how to reward teaching) and of providing support for students, from coursework to online resources. Undoubtedly there is a lot of energy and effort going into this challenge, far more than there was a decade ago, which is good news. At UB, our revised general education program is very conscious of the task of supporting students’ digital literacy, and that’s a significant step in the direction of “improving digital literacy.”
I remain concerned about the use of the word “literacy.” I am concerned that it leads people to imagine that whatever digital literacy might be is somehow analogous to print literacy (or just plain old literacy). And yet “digital literacy” is the term we’ve settled on. You might ask how much listening/speaking and reading/writing have in common. Something, for sure. I’m sure if we strapped you into an fMRI we’d find some common areas of the brain lighting up for both activities. And maybe overlaps with digitally mediated tasks as well. I find that observation rather unsatisfying though. Reading books and writing essays as a means for becoming digitally literate is analogous to having a first-year composition course where one sits and talks about writing but never actually writes anything. It’s great to talk about writing, and it’s useful to read about digital literacy too (as you are doing now), but at some point you have to do it.
And what is “it”? The report acknowledges that digital literacy is a shifting target (which is why we need Horizon Reports in the first place). We can speak broadly of a few general goals:
- finding and evaluating “good” digital media and information
- using digital media/technologies to communicate and collaborate on an informal and real-time basis
- composing digital media
As we might already argue with our legacy writing instruction challenges, these are not generalizable skills. They are specific to networks, assemblages, communities–however you want to think of that. In fact, if improving digital literacy is a solvable challenge, that would be great news, because it might mean we could leave behind the apparently not-so-solvable challenge of improving print literacy.
Still it’s not so useful to just take the air out of someone’s balloon. Even if improving digital literacy proves to be more intractable, at least these folks are taking a whack at it. And so am I. I know my arguments on this blog (and certainly in my more formal scholarship) can prove to be rather abstract, but I do think our challenge partly lies in our abstractions of rhetorical practice, specifically in our anthropocentric notions of symbolic behavior that imagine that regardless of the technology/ecology in which we are immersed, rhetorical action begins and ends with humans.
So, for example, faculty development is clearly an issue. But teaching professors how to use WordPress or whatever isn’t the issue. If you could magically turn the faculty into highly expert digerati, you’d still be left with sending them back to their disciplines, their curriculum, and their classrooms. You can’t really teach digital literacy in an environment that is ultimately about listening to lectures, taking notes, reading textbooks, writing essays, and passing exams. If faculty can recognize how their curriculum is shaped around certain technological networks/ecologies and the kinds of cognitive/subjective behaviors that emerge and are territorialized within them, then we have a starting point.
Let me put this differently… To what extent are your course and its objectives founded upon the affordances of reading and writing texts on an essentially individual scale (i.e. individual students silently reading or writing texts)? That was the focus of print literacy (though we can certainly contest the notion that such things were ever really “individual”). Adding a WordPress site to such a course isn’t going to improve students’ digital literacy. Sure, the faculty do need the skills, but they need to use them to rethink curriculum and pedagogy at a far deeper level. Not so solvable, though I wish it were.
I am at work on a chapter in my book that deals with cognition as it relates to a realist ontology and rhetoric, and I’m hoping this exercise will help me to crystallize my thoughts. I’m drawing on some familiar concepts (at least to me) from distributed cognition and extended mind to DeLanda’s fascinating and bizarre account of the development of cognition in Philosophy and Simulation. I also work through the research on writing and cognition going on in cognitive science, the neurorhetorical response to that, the sociocultural account of cognition in activity theory, and some of the posthuman accounts drawing on complexity theory in our field (e.g. Hawk, Dobrin, Rickert).
Obviously the question of cognition is central to our field, though the “cultural turn” has changed this into a question of subjectivity or agency. (I appreciate Dobrin’s admonishment that we focus on it too much.) My basic argument should be familiar within a realist ontological framework.
- All objects have the capacity to express and be perturbed by expression (though that capacity is not always realized).
- Those expressions are themselves autonomous. These are the ontological conditions of what I term a “minimal rhetoric.” I’m not interested in drawing boundaries between rhetorical and not rhetorical, except to argue against the boundary that limits rhetoric to human symbolic behavior.
- The relations of expression and perturbation create the capacities for cognition and agency. Again I’m not interested in drawing boundaries regarding which objects have these capacities. Assuming that you subscribe to a theory of evolution, you subscribe to the capacity of thought and agency emerging from nonliving entities. As Latour would say, through interaction we are “made to act,” which would include being made to think.
- I also draw on DeLanda here. The specific development of biological cognitive capacities emerges from their simplest forms through interactions with objects. As those capacities develop, the ability to express and be perturbed expands. We (biological critters) expand our senses into larger spaces, and, with memory, into time as well. That works both backward and forward as we develop the capacity to generate nonsymbolic scripts (expectations of what will happen next). What we can get out of this though is that cognition is an activity that emerges through relations with others and that the increasing capacity of an object to think can be traced in those terms.
- So thinking joins a hypothetically infinite range of capacities available to objects through their interactions with others. It’s as real and material as any other activity. It is not ontologically exceptional, even though we tend to value it. As such there’s really no reason to build a universe around the perceived strengths or limits of thinking. When I consider an apple, I engage in an activity with certain capacities over others. When I eat the apple, I engage in an activity with certain capacities over others.
- Thinking through an interaction with language (symbolic behavior) produces capacities of its own. The whole process might be speculatively explored, as DeLanda does, as emerging from mechanism-independent processes. Of course no one empirically knows how language came about. From my perspective what’s important is understanding symbolic behavior as co-emergent with cognition as real activities that are ecological. By that I don’t mean that they are related to “everything,” but that there is an extensive network of relations, limited only by our capacities for perturbation, that are at play.
I’m not sure if these claims strike you as obvious or absurd. They would suggest that rhetoric cannot be limited to symbolic behavior or to culture (as opposed to nature) or to humans. They would suggest that looking for cognition in the brain or in language or in society will only offer partial pictures. A realist rhetoric can assert that it is not limited to human thought or symbolic behavior, but it does need to be able to account for them in a way that doesn’t lead one back to idealism or empiricism.
For the mainstream, postmodern rhet/comp person, I suppose symbolic behavior is cultural and ideological. It can overdetermine subjectivity and agency. The only possible escapes are through the indeterminacy of language or the chance that critical thinking produces enough resistance to overdetermination, but there’s never really any outside here. I call this the “agent complex,” which is really like the Higgs boson problem for postmodernity. Posthuman rhetoric offers in turn a “complex agent,” one where complexity theory describes how agency can emerge in a non-deterministic way.
To end by circling back to DeLanda and Latour, both idealism and empiricism want to impart thought with special ontological powers: to create a space to act free from relation and/or to create an objectively true model of the world. Realism sees thought as another capacity for action, another means of construction or instauration, where acting outside of relation makes no sense and knowledge is always constructed without necessarily being fictional.
Perhaps you were like me and didn’t catch this Chronicle piece last month when it was published in the run-up to MLA where Jeffrey Williams touts the “New Modesty in Literary Criticism.” What is this new modesty? Williams suggests that
Literary critics have become more subdued, adopting methods with less grand speculation, more empirical study, and more use of statistics or other data. They aim to read, describe, and mine data rather than make “interventions” of world-historical importance. Their methods include “surface reading,” “thin description,” “the new formalism,” “book history,” “distant reading,” “the new sociology.”
No doubt part of this is a gesture toward digital humanities with his mention of data and distant reading in particular. However, much of it is not necessarily DH. Instead, there is an interest in a broader range of maybe-empirical practices (though if you follow through on some of the article links in the piece you’ll see a lot of careful treading around the idea of empiricism). However, there is a fair amount of interest in Latour, which is where my interests come in.
So here’s my question. Whether it’s an article like “Why Has Critique Run Out of Steam?” or books from We Have Never Been Modern to An Inquiry into Modes of Existence, I’m just not sure where the “modesty” starts showing up. There’s not a great deal of modesty in the argument that what the humanities have been doing for the last couple decades is a load of bollocks. Now, it appears that the literary scholars cited in the article want to hold on to the hermeneutics of suspicion, so maybe that’s where the “modesty” lies: they are modest in their criticisms of their predecessors. Maybe, but somehow I don’t think that’s the point here.
Maybe this is modesty in reference to the “modest witness,” that foundation of scientific method. If so, then this wouldn’t make too much sense with Latour, who would want to account for the many hybridized actors that allow for the construction of modest witnessing. This might make sense inasmuch as the main thrust of this article is to report on a constellation of literary methods that foreground description over interpretation. However I think it is too subtle a connection in the end. As “modest” as the witnesses of scientific experimentation may be, the words modest and science are not generally associated.
No, instead, there’s a very strange kind of modesty that is suggested here. As Williams writes, “surface reading and allied approaches seem to return to an older orientation of criticism, one that sees its mission as more scholarly than political.” Those of us in the humanities business understand exactly what this implies. To be “political” is to share not only in a kind of leftist political view but also to assert that humanistic interpretation (e.g. literary criticism) is a direct form of political work with a primary obligation of seeking to achieve some political objective. That is, this isn’t simply a wishy-washy way of saying “everything is political;” it is an insistence that humanities scholars conceive of their work as direct participation in an emancipatory project of some kind. Personally, I would suggest that it is debatable the extent to which all humanists really thought (or think) of their work in these direct-action political terms. That said, there have, in my experience, always been a fair number of true believers out there ready to put anyone to the question if their political commitments appeared in doubt. But the modesty here is not even a suggestion that the scholars do not have these political commitments. Instead, it is a suggestion of one of two possible positions: 1) that literary criticism is an ineffective method for achieving political change (imagine that), or 2) that the focus on interpretation as politics obscures the study of literature.
Either way, Williams ends with the following:
It remains to be seen, though, whether surface reading and allied approaches re-embrace a more cloistered sense of literary studies. I’d like to think that criticism has more to do than accumulate scholarly knowledge, at the least to explain our culture to ourselves, as well as serving as a political watchdog.
Today’s modesty may not bode an academic withdrawal from public life. It may simply register an unsettled moment, as past practices cede and a new generation takes hold. The less-optimistic outlook is that it represents the decline of criticism as a special genre with an important role to investigate our culture. While realism carries less hubris, it leaves behind the utopian impulse of criticism.
Maybe this is good news for realist ontology. Apparently it is no longer arrogant to abandon postmodernity. Apparently there’s no hubris in describing the “modes of existence.” But let me briefly take issue with some of these claims… not in the name of these literary critics but for this more general project of realism or, at least, a Latourian “second empiricism.” In many ways, these approaches are less cloistered. Who is more cloistered than the traditional humanist typing out yet another rehearsal of a critical position, who sits in his/her office with the same old set of books because there’s no point in empirical evidence anyway, no reason to leave the office?
I’m not sure what difference is suggested between accumulating scholarly knowledge and explaining our culture. Of course, in the tradition of postmodernity explaining culture doesn’t require doing scholarship because one already knows what culture is before one begins. It’s deductive reasoning where one already knows what the rules of culture are. Postmodern scholarship never led to an understanding of culture; it just began with one.
Of course what is ultimately at stake here are ideological commitments: scholarship as “political watchdog;” the “utopian impulse of criticism.” These are familiar critiques/attacks from my perspective. In my 20+ years in academia, it’s always been the case that there are scholars who will insist that everyone must do what they do and share in their theoretical-ideological perspective. To do otherwise is to become some horrible, anti-intellectual, capitalist dupe or collaborator. To which I have learned to say “don’t feed the trolls.” At the same time, I think it is unfair to accuse Latour of not having a political project. Maybe you don’t agree with it, but that’s another matter. It’s true that it isn’t “utopian,” but how can any postmodernist be utopian? I suppose if one is modest because one does not believe that one’s scholarly work in the humanities (writing scholarly articles, teaching classes, going to committee meetings, etc.) is taking the world on a direct path to utopia, then I’m a modest guy.
The Chronicle reported today on the abuse of faculty by students in a class via Yik Yak. Steve Krause writes about the event here (it happened at his institution, Eastern Michigan). And, coincidentally, Jeff Rice has a general piece on universities and Yik Yak on Inside Higher Ed.
The basic story in this most recent event is that some unreported number of students in a class of 230 wrote over 100 messages on Yik Yak during a class meeting. Apparently many of the messages were rude, insulting, and abusive. We’ve seen this story before in the form of tweckling: different app (Twitter), same basic rhetorical effect. Of course Yik Yak allows for even greater anonymity than Twitter does. (Although, as we know, in the end, it’s very hard to be truly anonymous.) Nevertheless, student perception of anonymity certainly appears to have loosened social propriety.
Setting aside judgments of students, faculty, institutions, the designers of Yik Yak, “today’s modern, fast-paced society,” or whatever, what might be investigated in this event?
1. I don’t think we would say that anonymity directs people to free, unfettered action. Instead, we might seek to uncover the actors and relations from which these rhetorical practices emerge. In my brief forays into Yik Yak, it appears that anonymity does not dissuade users from wanting attention. Users still want to perform and perform well. They want to interact, and they get taken up by the situation. As Latour would say, they are “made to act,” or maybe yak in this case. This is not in any way an excuse or defense, but simply a suggestion that it would be wrong-headed to take these anonymous remarks as evidence of what the students “really think.”
2. That said, the web clearly bisects the conventional classroom and deterritorializes its operation. To use DeLanda’s appropriation of assemblage, we would observe that the physical aspects of the room–the orientation of the chairs, the lighting, the chalkboard, etc. etc.–all establish a specific territory which is expressed on a non-symbolic level. These territories are coded further by any number of symbolic interventions from the class schedule to university policies about student behavior, as well as systems that establish social relations between faculty and students. All of these items can also serve deterritorializing and decoding functions, as when the lights buzz and flicker in a distracting way, the chairs are uncomfortable, the chalkboard squeaks, class scheduling creates conflicts, or students and faculty decide to start acting in ways beyond their established institutional roles. Similarly the appearance of wifi or cellular data connections in a classroom has the potential to function in a territorializing/coding fashion, when we use the technology toward pedagogical ends, for example. And, in this case, it can deterritorialize the classroom, potentially to such a state that the professor says she cannot proceed.
So what does that tell us about what should be done? The actions available to institutions are fairly obvious. They can geofence campuses to prevent Yik Yak use. They can prohibit use of devices in classrooms. They can track down and punish offenders. You might say all these actions presume that what the students did was wrong. Maybe, but in this context they reflect the operation of an assemblage in reasserting its territoriality. Its desire to continue to persist.
Here’s a relevant part of this for me though. Let’s say the same group of students met after the class and made the same comments to one another verbally in private. Or that they used SMS to text one another the same messages, but not in a public forum. Or that they wrote them all out on a piece of paper. These are all very hypothetical situations, as part of my contention is that they did what they did precisely because of the environment in which they were operating. Compare those examples with them taking that piece of paper with their comments, making a bunch of photocopies, and handing them to their classmates as they left the classroom. Or shouting their comments during the class itself.
Where does this Yik Yak activity fall among these more familiar, mostly “pre-digital,” forms of communication? We can say that it is wrong to say hurtful, sexist things in private, but saying them in public is a different offense, and directing them toward a specific person who is in the audience is yet another. It is likely that the students failed to imagine that their professor would be in their audience. If they had, we could guess they would have behaved differently, even if they still felt protected by anonymity. Of course that’s only speculation.
Perhaps it would be interesting to “peek” into the EMU Yik Yak community and see if any self-correction takes place or not. Because EMU is not the only assemblage at work here. Yik Yak forms its own assemblage, right? Even though each user typically forms his/her own Yik Yak community based on the phone’s specific location, there is a Yik Yak EMU community. Of course it isn’t nearly as solidified as the college, so it’s hard to suggest that it would operate with some collective intent in the way a college could set a policy. Still, I imagine there are many students who would think their peers’ actions here were unwarranted. As it is, I see back-and-forth on Yik Yak when someone makes a really offensive statement.
So it would be unsurprising for professors collectively or an institution to make some move to reterritorialize their assemblage by geofencing Yik Yak or engaging in some similar move. On the other hand, as Jeff Rice points out, stories like this one are not the norm on Yik Yak, which is typically more banal than anything else, even if the anonymity does lead to a degree of crudeness. Ultimately some mechanisms of social interaction arise to regulate behavior. Even primates demonstrate that!
Who can resist a job market post during MLA season? Not Inside Higher Ed, though this one points to some interesting research done by economist David Colander (with Daisy Zhuo) and published in Pedagogy. I suppose it’s a dog-bites-man scenario. Colander samples hiring and job placement at a group of English departments and comes to the conclusion that graduates of top tier doctoral programs (Tier 1 ranked 1-6 and Tier 2 ranked 7-28 as per US News & World Report) are much more likely to get jobs at the top 62 doctoral universities (and the top liberal arts colleges) than graduates of lower ranked programs.
I know, surprising stuff, right? Though the actual numbers are very clear: according to the study, less than 2% of graduates from tier 3 schools land jobs at the top 62 universities. Basically what you see is that the top schools hire their own. 57% of the faculty at the top 6 schools come from the other 5 in tier 1. Nearly 75% of the faculty at tier 2 schools come from the top two tiers.
It’s not hard to imagine how this happens. Some might like to argue that it is a rational process. The best candidates are those who get into and graduate from the best programs. They’ve already been filtered, though the narrowness of the top six hiring one another does seem a little incestuous. (It would be interesting to compare this with other disciplines.) Others are more likely to see this practice as a problem. As onerous and outdated as the current MLA job search practice is, it was implemented to replace a far less fair, old boys network of hiring. One could argue this study reveals that network is still in effect.
But I’m not here to contend with that issue today. Instead I want to address faculty at tier 3 or 4 institutions. More than half of you got your degrees from schools in the top 2 tiers. If Colander’s study is accurate, your students aren’t likely to get jobs in the top 3 tiers or win prestigious post-docs. In the top 2 tiers it’s not unreasonable to train students with the idea that successful grads will go on to positions much like one’s own: research-intensive, with low teaching loads, doctoral programs, and strong undergrads. In tiers 3 and 4, though, this just isn’t the case, but you already knew that, right?
So here’s an extended quote from Colander:
the best explanation of the current job market situation is that English programs are populated with students who love the study of English and want to combine that love of English with some way to make an acceptable living. Students who are not independently wealthy need to have some way to combine their love—the study and teaching of English—with a job that provides sufficient income to live. For many students, even relatively low-paying part-time and adjunct jobs, combined with other part-time, better-paying private-sector jobs ideally using their English skills, are evidently preferable to giving up the study of English. From an economist’s free-choice perspective, if that is what students choose, a program focused solely on actual job training should prepare them for that life as well as possible. Training would be designed, among other things, to prepare students to put together the combination of jobs that is most likely in their future. This is not to argue that the situation they will face is a desirable one, or that the institutional structures governing academic employment should not be changed. But that is a separate issue; job training should focus on preparing students for the institutional reality they will likely face. To my knowledge, no programs do this.
Numerous possibilities exist to address this goal. Most people do not know how to write well, and if more English PhD programs provided training in preparing students to do freelance consulting, analytic writing and composition, rhetoric, copyediting, proofreading, general editing, or tutoring, in addition to the study of literature and literary criticism, their students would have a set of skills that are more marketable than those needed to advance in a research university. The very fact that job placement is thought of primarily in terms of tenure-track academic jobs is suggestive of the problem.
But then, if you’re faculty at one of these institutions, these things have probably crossed your mind. The suggestion that departments should design their programs to prepare students for their future lives as contingent labor is a little shocking (which is not to suggest the situation is desirable, ahem). Though it is easy to respond with anger to Colander’s suggestion, I think what is more to the point, perhaps with the cold, dismal eye of the economist, is what people, both students and faculty, are willing to sacrifice in the name of love: in this case, the love of literature. Personally I don’t think I can go quite where Colander is going and set up a doctoral program that recognizes that many of its students will have no better professional future than the one with which they entered the program. He tosses out the idea of non-academic jobs. Fine. Let’s put a pack of economists to the task of identifying current non-academic jobs for which a PhD in English (or some reasonably modified version of such) is a required or at least preferable qualification.
What is reasonable, at least to me, is thinking about how tier 3 and 4 institutions might revise their curriculum to prepare graduates for the kinds of academic jobs they do land. Again, dog-bites-man I think.
Here’s the abstract to my contribution, “Digital Humanities Now and the Possibilities of a Speculative Digital Rhetoric.”
This chapter examines connections between big data digital humanities projects (the Digital Humanities Now project in particular), digital rhetoric, and the philosophies of speculative realism (focusing on Bruno Latour). It addresses the critique that digital humanities are under-theorized and connects these critiques with those made against speculative realism’s use of scientific and mathematical concepts. Finally it proposes how a speculative digital rhetoric might contribute to a network analysis of informal, online scholarly work.
Keywords: big data, speculative realism, Bruno Latour, middle-state publishing, nonhuman
Some liner notes:
The digital humanities is a rapidly growing field that is transforming humanities research through digital tools and resources. Researchers can now quickly trace every one of Isaac Newton’s annotations, use social media to engage academic and public audiences in the interpretation of cultural texts, and visualize travel via ox cart in third-century Rome or camel caravan in ancient Egypt. Rhetorical scholars are leading the revolution by fully utilizing the digital toolbox, finding themselves at the nexus of digital innovation.
Rhetoric and the Digital Humanities is a timely, multidisciplinary collection that is the first to bridge scholarship in rhetorical studies and the digital humanities. It offers much-needed guidance on how the theories and methodologies of rhetorical studies can enhance all work in digital humanities, and vice versa. Twenty-three essays over three sections delve into connections, research methodology, and future directions in this field. Jim Ridolfo and William Hart-Davidson have assembled a broad group of more than thirty accomplished scholars. Read together, these essays represent the cutting edge of research, offering guidance that will energize and inspire future collaborations.

Stuart A. Selber, author of Multiliteracies for a Digital Age: “Ridolfo and Hart-Davidson have produced a volume that interrogates the most important questions facing both rhetoric scholars and teachers who are interested in the digital humanities and digital humanists who are interested in the rhetorical dimensions of multimodal texts. Avoiding the negative aspects of territorialism and disciplinary politics, the contributors remix theories, practices, and methods in new and exciting ways, mapping productive relationships between rhetorical studies and the digital humanities and illuminating how these areas intersect and interanimate one another. This volume should be required reading for anyone who cares about the future of writing and reading.”

Collin Brooke, Syracuse University: “Rhetoric and the Digital Humanities is a landmark collection for scholars in rhetoric and writing studies. Its attention to procedurality, coding, scholarly communication, archives, and computer-aided methodologies, among other things, maps many of the important changes in disciplinary terrain prompted by the emergence of the digital humanities. It’s also a compelling demonstration of the role that rhetoric and writing studies can and should play in discussions about digital humanities. This book will provide colleagues across the disciplines with a strong sense of the ways that rhetorical studies might intersect with their own work.”

Matthew K. Gold, Debates in the Digital Humanities: “An important and timely exploration of the many ties that bind the digital humanities and composition/rhetoric. Rhetoric and the Digital Humanities is a much-needed book that will stir conversations in both fields.”

The Table of Contents

Introduction
Jim Ridolfo and William Hart-Davidson
PART ONE Interdisciplinary Connections
1 Digital Humanities Now and the Possibilities of a Speculative Digital Rhetoric
2 Crossing State Lines: Rhetoric and Software Studies
JAMES J. BROWN JR.
3 Beyond Territorial Disputes: Toward a “Disciplined Interdisciplinarity” in the Digital Humanities
SHANNON CARTER, JENNIFER JONES, AND SUNCHAI HAMCUMPAI
4 Cultural Rhetorics and the Digital Humanities: Toward Cultural Reflexivity in Digital Making
5 Digital Humanities Scholarship and Electronic Publication
DOUGLAS EYMAN AND CHERYL BALL
6 The Metaphor and Materiality of Layers
DANIEL ANDERSON AND JENTERY SAYERS
7 Modeling Rhetorical Disciplinarity: Mapping the Digital Network
PART TWO Research Methods and Methodology
8 Tactical and Strategic: Qualitative Approaches to the Digital Humanities
BRIAN MCNELY AND CHRISTA TESTON
9 Low Fidelity in High Definition: Speculations on Rhetorical Editions
10 The Trees within the Forest: Extracting, Coding, and Visualizing Subjective Data in Authorship Studies
KRISTA KENNEDY AND SETH LONG
11 Genre and Automated Text Analysis: A Demonstration
RODERICK P. HART
12 At the Digital Frontier of Rhetoric Studies: An Overview of Tools and Methods for Computer-Aided Textual Analysis
DAVID HOFFMAN AND DON WAISANEN
13 Corpus-Assisted Analysis of Internet-Based Discourses: From Patterns to Rhetoric
PART THREE Future Trajectories
14 Digitizing English
JENNIFER GLASER AND LAURA R. MICCICHE
15 In/Between Programs: Forging a Curriculum between Rhetoric and the Digital Humanities
16 Tackling a Fundamental Problem: Using Digital Labs to Build Smarter Computing Cultures
KEVIN BROOKS, CHRIS LINDGREN, AND MATTHEW WARNER
17 In, Through, and About the Archive: What Digitization (Dis)Allows
TAREZ SAMRA GRABAN, ALEXIS RAMSEY-TOBIENNE, AND WHITNEY MYERS
18 Pop-Up Archives
JENNY RICE AND JEFF RICE
19 Archive Experiences: A Vision for User-Centered Design in the Digital Humanities
20 MVC, Materiality, and the Magus: The Rhetoric of Source-Level Production
21 Procedural Literacy and the Future of the Digital Humanities
22 Nowcasting/Futurecasting: Big Data, Prognostication, and the Rhetorics of Scale
23 New Materialism and a Rhetoric of Scientific Practice in the Digital Humanities
In Pandora’s Hope, Latour tells the story of being asked if he “believes in reality.” His response was something to the effect of not realizing that reality was something one needed to believe in. Elsewhere Graham Harman has written of an email exchange with Manuel DeLanda, who wrote, “For decades admitting that one was a realist was equivalent to acknowledging [that] one was a child molester.” Harman’s response? “The past tense may be too optimistic, since it is not clear that those decades lie entirely behind us.” That was 2007. Since then we’ve been up, down, and around the hype cycles of speculative realism, new materialism, the “nonhuman turn,” etc., etc. To be honest, I’m not sure the result has changed the situation Latour, DeLanda, and Harman describe.
Rhetoric is in an odd situation in relation to these matters. On the one hand, rhetoric is classically interested in human symbolic action. Its stereotypical detractors would declare rhetoric to be idealist to a fault, uninterested in “reality” or “truth” and squarely focused only on what people think and what they can be persuaded to think. On the other hand, rhetoric is equally invested in the ideas of the public and the marketplace, of justice, deliberation, and so on. In other words, rhetoric recognizes the very real, material effects of symbolic action. One assumes those effects are occurring in reality. Of course, to be an idealist does not require denying reality. It simply means that one’s access to reality is subjective. As the correlationist would put it, one only sees the world as it relates to oneself.
What does it mean to call rhetoric “real”? To start, there are two interrelated takes on this. To be a realist is to assert the existence of a mind-independent reality that exists beyond empirical observation. As DeLanda notes, this means that the realist’s “first task is to delimit the kinds of entities that it considers legitimate inhabitants of the world.” Some parts of the real world exist only in relation to humans (e.g. my university) while others (e.g. mountains) do not, and still other things may exist only in human minds (e.g. arguably heaven and hell, though some would argue that ideas have mind-independent realities as well, or that these things exist in the same way mountains do). Certainly one could say that there are rhetorical practices that are as dependent upon humans as a university would be. So a realist would be faced with three options for rhetoric:
- Rhetoric exists only in human minds; it is not a legitimate inhabitant of the world.
- Rhetoric is real but dependent upon humans to exist.
- Rhetoric exists independent of humans. If there were suddenly no humans, there would still be rhetoric.
So let’s say I adopt position #3, with the recognition that there are certain rhetorical practices that would fit #2. Such a statement would be speculation. One would have to establish means for investigating the claim, as DeLanda does with his concept of quasi-causal mechanisms. There are other theories out there, of course.
What are some of the implications of this position?
- Rhetoric precedes humans and thus symbolic action. Rather than rhetoric being invented as a way of using language, language emerges as a capacity of rhetorical interaction.
- Rhetoric is not an exclusively human trait. It is not evidence of the ontological exceptionality of humans. It is not evidence of a human-social-cultural world that is ontologically separate from the natural world.
- Human practices of rhetoric emerge, of necessity, in relation to nonhuman rhetoric. There is no purely human or social rhetoric.
- Because human practices of rhetoric rely upon nonhumans (of all kinds), those practices shift along with our nonhuman relations (the obvious example being media technologies).
- Though human practices shift over time and space, there is no inherently human rhetorical practice that can be threatened by these changes.
- That said, human-nonhuman relations (networks, assemblages) shape rhetorical practices, which in turn have other real effects.
In my view, as rhetoricians, and teachers of rhetoric in particular, we proceed every day as if we believe #6. When we ask students to sit in a circle; when we do some freewriting to give students a chance to think through a question or “get the juices flowing”; when we ask students to put away their cell phones; when we require students to write in one genre rather than another; when we write on the chalkboard, use a handout, or show a video; do we not do those things because we believe the nonhumans involved shape our capacities for rhetorical action?
If I add into this something like Andy Clark’s extended mind, then what is asserted here about rhetorical practice might be broadened to all those things that we might conventionally view as the product of human thought. I tend to think of it this way. Thoughts are real. They can be measured empirically, if partially, by fMRI and other technologies. They have real effects, like this blog post. Thoughts may be ephemeral, short-lived, but so is a gust of wind. Is the gust real? Are subatomic interactions occurring in Planck time not real? Thoughts are just things in the world. Some emerge in relation to humans; others do not. At the very least we would say some other animals think. In my view, even if we limited rhetoric to a subset of things that humans think (which I would not), this would not make rhetoric any less real.
Instead one might ask the reverse question. Is rhetoric a kind of thought? Or is thought a kind of rhetorical relation? That is, do rhetorical relations create the conditions for the capacities of thought and agency? If I asserted that the minimal requirements for a rhetorical encounter were an expressive force and an object capable of sensing the expression, would that presuppose thought? I don’t know. I am not particularly interested in studying the rhetorical relations among rocks or quasars or even among a flock of geese or a stand of trees, but I’m also not interested in declaring a priori that such investigations are out of bounds.
I am interested in investigating nonhumans participating in human rhetorical practices, media technologies in particular, though not exclusively. Take, for example, the image attached to this post, which depicts guest workers in Djibouti seeking cell signals from across the sea so that they can phone home. How can we study the ways such nonhumans participate in our rhetorical and cognitive activity? The idealist can only look into the human mind (which I would not term a legitimate inhabitant of the world); perhaps one can say something about capitalism. The empiricist (e.g. the cognitive rhetorician or the activity theorist) is limited to the observable world, to her qualitative methods. So one might observe and interview these guest workers, and I will not deny the usefulness of that work. However, the signature difference with the realist (and DeLanda puts this well) is that one does not view the knowledge one creates as a representation of either a mind-dependent (idealist) or mind-independent (empirical) reality but as a construction, a composition (as Latour says in his manifesto), that has effects. As such, one can go beyond the empirical representation or cultural-critical interpretation of these guest workers to speculate on the networks of relations that produce this event. As Latour observes, when we create scientific knowledge we change the world. Of course we do; why else would we go to all that work?
And this brings me back to the native heart of rhetoric: effecting change, persuading. Though in many ways the study of rhetoric appears distinctly suited to idealism, when we think of rhetoric as practice, as know-how, in a way that philosophy can never be, it has realist roots. If rhetoric isn’t the know-how to interact/compose with objects to have real effects, then what is it?