Digital Digs (Alex Reid)

an archeology of the future

arduino heuretics

25 May, 2015 - 09:23

As those of you who are involved in the maker end of the digital humanities or digital rhetoric know, Arduino combines a relatively simple microcontroller, open source software, and other electronics to create a platform for developing a range of devices. I seem to recall encountering Arduino-based projects at CCCC several years ago. In other words, folks have been playing around with this stuff in our discipline for a few years. (Arduino itself has been around for about a decade.) My own exigency for writing this post is that I purchased a starter kit last week, partly for my own curiosity and partly for my kids’ growing interest in engineering and computer science. In short, it looks like a fun way to spend part of the summer.
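
For those who haven’t seen one, an Arduino program (a “sketch”) is a small C++ file organized around two functions, setup() and loop(); most starter kits open with something close to the canonical example of blinking the board’s built-in LED:

```cpp
// The canonical first Arduino sketch: blink the board's built-in LED.
// setup() runs once at power-on; loop() then runs repeatedly.

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);     // configure the onboard LED pin as an output
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);  // LED on
  delay(1000);                      // wait one second
  digitalWrite(LED_BUILTIN, LOW);   // LED off
  delay(1000);                      // wait one second
}
```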

David Gruber writes about Arduino in his chapter in Rhetoric and the Digital Humanities where he observes “digital tools can contribute to a practice-centered, activity-based digital humanities in the rhetoric of science, one that moves scholars away from a logic of representation and toward the logic informing ‘New Materialism’ or the rejection of metaphysics in favor of ontologies made and remade from material processes, practices, and organizations, human and nonhuman, visible and invisible, turning together all the time” (296-7). In particular, he describes a project at North Carolina State which employed an Arduino device attached to a chair that would send messages to Twitter based upon the sensor’s registering of movement in the chair. Where Gruber prefers the term “New Materialism,” I prefer realism: realist philosophy, realist ontology, and realist rhetoric. I think we may mean the same thing. For me, the term materialism is harder to redeem, harder to extricate from the “logic of representation” he references, a tradition that has discussed materialism and materiality for decades while eschewing the word realism. I would suggest that the logic informing realism or new materialism, the logic juxtaposed to representation, is heuretics, the logic of invention. As I am putting the finishing touches on my book, these are the pieces I’m putting together: Ulmer’s heuretics, Bogost’s carpentry, Latour’s instauration, and DeLanda’s know-how.
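
Gruber doesn’t reproduce the code, and I haven’t seen the NC State implementation, so what follows is only a minimal sketch of what the sensing half of such a device might look like: it assumes a simple tilt or vibration switch on digital pin 2 and a host computer (or WiFi shield) doing the actual posting to Twitter, and the pin number, message string, and cooldown are placeholders of my own.

```cpp
// Hypothetical sketch of a "tweeting chair" sensor (not the NC State project's actual code).
// A tilt/vibration switch is assumed between digital pin 2 and ground; when the chair
// moves, the sketch prints a marker line that a host-side script could turn into a tweet.

const int SENSOR_PIN = 2;                 // assumed wiring: switch closes to ground on movement
const unsigned long COOLDOWN_MS = 60000;  // rate-limit reports to once per minute

unsigned long lastReport = 0;
bool hasReported = false;

void setup() {
  pinMode(SENSOR_PIN, INPUT_PULLUP);      // pin reads LOW when the switch closes
  Serial.begin(9600);                     // the host script listens on this serial port
}

void loop() {
  bool moved = (digitalRead(SENSOR_PIN) == LOW);
  unsigned long now = millis();
  if (moved && (!hasReported || now - lastReport >= COOLDOWN_MS)) {
    Serial.println("CHAIR_MOVED");        // host turns this line into a Twitter post
    lastReport = now;
    hasReported = true;
  }
  delay(50);                              // crude debounce
}
```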

As I started to learn more about Arduino, I discovered the split that has occurred in the community/company. It is a reminder of the economic processes behind any technology. Perhaps by serendipity, I have also been involved in a few recent conversations about the environmental impact of these devices (e.g. the rare earth minerals) and the economic-political impact of those (e.g. the issue of cobalt as a conflict mineral). These are all serious issues to consider, ones that are part of a critical awareness of technology that critics of DH say get overlooked. Of course, sometimes it seems this argument is made as if our legacy veneration of print culture has not been built upon a willful ignorance of slavery, labor exploitation, political strong-arming, environmental destruction, and capitalist avarice that made it possible from cotton production to star chambers to lumber mills. Somehow though no one suggests that we should stop reading or writing in print. That’s not an argument for turning a blind eye now; it’s only to point out that the problems, in principle, are not new. Technologies are part of the world, and the world has problems. While the world’s problems can hardly be construed as good news, they are sites for invention and agency. As Ulmer says at one point of his EmerAgency, “Problems B Us.”

I’m expecting I’ll post a few more times about Arduino this summer as I mess around with it. Given that I’m just starting to poke around, I really don’t have any practical insights yet. It’s always a little challenging to take on a new area beyond one’s realm of expertise. We live in a world where there’s so much hyper-specialization that it’s hard to justify moving into an area where you know you’ll almost certainly never rival the expertise of those who really know what they’re doing. This is a kind of general challenge in the digital humanities and rhetoric, where you might realize that the STEM folks or the art folks will always outpace you, where we seem squeezed out of the market of making. Perhaps that’s why we’re so insistent on the logic of representation, as Gruber terms it. Articulating our ventures into this area as play, while objectionable to many, is one way around this. For me, framing this as fun, as a hobby I share with my kids, is part of what makes it doable, part of the way I can extricate myself from the disciplinary pressures to remain textual. I’ll let you know how it goes.


the humanities’ dead letter office

13 May, 2015 - 10:15

Adeline Koh writes “a letter to the humanities” reminding them that DH will not save the humanities (a subject I’ve touched on at least once). Of course I agree, as I agree with her assertion that we “not limit the history of the digital humanities to humanities computing as a single origin point.” Even the most broadly articulated “DH” will not save the humanities, because saving is not the activity that the humanities require: ending maybe, but more generously changing, evolving, mutating, etc.

Koh’s essay echoes earlier arguments made about the lack of critical theory in DH projects (narrowly defined). As Koh writes:

throughout the majority of Humanities Computing projects, the social, political and economic underpinnings, effects and consequences of methodology are rarely examined. Too many in this field prize method without excavating the theoretical underpinnings and social consequences of method. In other words, Humanities Computing has focused on using computational tools to further humanities research, and not to study the effects of computation as a humanities question.

But “digital humanities” in the guise of “humanities computing,” “big data,” “topic modelling,” (sic) “object oriented ontology” is not going to save the humanities from the chopping block. It’s only going to push the humanities further over the precipice. Because these methods alone make up a field which is simply a handmaiden to STEM.

I have no idea what object oriented ontology is doing in that list. Maybe she’s referring to object oriented programming? I’m not sure, but the philosophical OOO is not a version of DH. However, its inclusion in the list might be taken as instructive in a different way. That is to say that I was maybe lying when I said I had no idea what OOO is doing on this list alongside a couple DH tropes. It is potentially a critical theorist’s list of enemies (though presumably any such list would be incomplete without first listing other competing critical theories at the top). And this really brings one to the core of Koh’s argument:

So this is what I want to say. If you want to save humanities departments, champion the new wave of digital humanities: one which has humanistic questions at its core. Because the humanities, centrally, is the study of how people process and document human cultures and ideas, and is fundamentally about asking critical questions of the methods used to document and process. (emphasis added)

So “humanistic questions” are “critical questions.” As I read it, part of what is going on in these arguments is an argument over method. As Koh notes, DH is a method (or collection of methods, really, even in its most narrow configuration). But “critical theory” is also a collection of methods. As the argument goes, if the humanities is centrally defined by critical-theoretical methods then any method that challenges or bypasses those methods would be deemed “anti-humanistic.”

I’ve spent the bulk of my career failing the critical theory loyalty litmus test, so I suppose that’s why I am unsympathetic to this argument. Not because my work isn’t theoretical enough! One can always play the theory one-upmanship game and say “my work is too theoretical. It asks the ‘critical questions’ of critical theory.” But actually I don’t think there’s a hierarchy of critical questions, though there clearly is a disciplinary paradigm that prioritizes certain methods over others, and from within that paradigm DH (and apparently OOO as well while we’re at it) might be viewed as a threat. The rhetorical convention is to accuse such threats of being “anti-theoretical,” as being complicit with the dominant ideology (like STEM), or, perhaps worse, being ignorant dupes of that ideology.

I can certainly account for my view that critical-theoretical methods are insufficient for the purposes of my research. That said, I have no issue with others undertaking such research. The only thing I really object to is the claim that a critical-theoretical humanities serves as the ethical conscience of the university.  If the argument is that scholars who use methods different from one’s own are “devaluing the humanities” then I question the underlying ethics of such a position.

I’m not sure if the humanities need saving or if the critical-theoretical paradigm of the humanities needs saving or if it’s not possible to distinguish between these two. I’m not part of the narrow DH community that is under critique in this letter. I’m not part of the critical-theoretical digital studies community that Koh is arguing for. And I’m not part of the other humanities community that is tied to these central critical-humanistic questions.

I suppose in my view, digital media offers an opportunity (or perhaps creates a necessity) for the humanities to undergo a paradigm shift. I would expect that paradigm shift to be at least as extensive as the one that ushered in critical theory 30-40 years ago and more likely will be as extensive as the one that invented the modern instantiation of these disciplines in the wake of the second industrial revolution. I’m not sure if the effect of such a shift can be characterized as “saving.” But as I said, I don’t think the humanities needs saving, which doesn’t necessarily mean that it will continue to survive, but only that it doesn’t need to be saved.


writing in the post-disciplines

3 May, 2015 - 13:40

Or, the disorientation of rhetoric toward English Studies…

In her 2014 PMLA article “Composition, English, and the University,” Jean Ferguson Carr makes a strong argument for the value of rhetoric and composition for literary studies in building the future of English Studies. She pays particular attention to composition’s interests in “reading and revising student writing,” “public writing,” “making or doing,” and using “literacy as a frame.” As I discussed in a recent post, there’s a long history of making these arguments for the value of composition in English, an argument whose proponents one assumes would welcome MLA’s recent gestures toward inclusiveness. Of course the necessity of these arguments, including Carr’s, stems from the fact that mostly the answer to the question “what is the value of composition to English?” has been answered as “nothing” or “not much,” at least beyond the pragmatic value of providing TAships for literary studies PhD students.

I’m more interested in the opposite question though, “what’s the use of literary studies to rhetoric/composition?” It’s not a question Carr really concerns herself with, mentioning only in passing that “a more intentional and articulated relationship between composition and English is still mutually beneficial,” though she doesn’t offer much evidence for this. Presumably she (rightly) identifies her audience as literary scholars for whom this question would likely never arise. However, I think the answer might be similar: nothing, or not much, at least beyond the pragmatic value that the institutional security of an established English department might provide. And with that security wavering, well…

What does this have to do with “writing in the post-disciplines” (whatever that is)? As it turns out, a fair bit. With a little bit of historical fudging that I’ll call fair play in the broad brushstrokes of a blog post, we can see that:

  1. We start off with a belletristic, humanistic, essay-writing, product-oriented and literary-focused form of writing instruction.
  2. Then we move to process-oriented, less literary but still humanistic and essayistic composition studies.
  3. Over time, writing instruction becomes more varied both within rhet/comp (e.g. technical-professional writing) and beyond in the growing popularity of WAC and WID programs.

So where are we now? Few would contest the general principles that 1) writing is a useful tool for learning in many contexts and 2) it is a good idea for as many disciplines as possible to be involved in teaching students in their fields/professions how to write and communicate. That is, we still hold to the principles of WAC and WID. However, the longstanding view that faculty in English Studies are not well-equipped to teach students in other fields (especially STEM) how to communicate has always been founded on a particular expectation of what English faculty are like. What happens if/when that changes?

For example, let’s say I have a cadre of college sophomores who want to major in chemistry, and we want to develop their communication skills in connection with chemistry as a field and with professions they might enter. And let’s say that I give you a blank slate to create a graduate program for the faculty who will take on that job. Would you want them to get chemistry degrees and then receive a little extra professional development? Or would you imagine some kind of science studies/science rhetoric-communication curriculum? I’m thinking the latter makes more sense, not as a replacement for faculty to teach writing in their curriculum but as a way of delivering courses where instruction in writing/communication is the primary focus.

Let me take this a little further afield. Of course we know that only a fraction of undergrads go on to get graduate degrees and even fewer end up really communicating as experts in any discipline. Mostly they go on to careers in corporations or small businesses or with the government. This is more true in the humanities or social sciences, but even in the sciences, students find themselves in careers where communication is more business than scientific. There are many inter-disciplinary niches, and when it comes down to it, the argument that there are no “general writing skills,” which casts doubt on composition classes, can also cast doubt on the utility of writing in the disciplines.

Are there general chemistry writing skills? No, of course not.  Maybe one could argue for some common genres among chemistry professors, but chemistry itself is far more varied. So instead (and I think this is the direction WID and “writing studies” approaches go) one might imagine a rhetoric/communication curriculum that 1) teaches students an introduction to rhetorical principles, 2) puts those principles to work in the study of genres at work broadly in their fields/professions, 3) pays attention throughout a disciplinary curriculum to communication practices, and 4) offers a proliferating range of possibilities for writing and communication.

Writing in the post-disciplines moves beyond the historical either/or presumption that writing instruction is either general/introductory, or writing is discipline-specific and tied to the content expertise of faculty. Instead it suggests writing as a vast field of inquiry tied to an expansive set of academic methods that can be given many names and descriptions: empirical, social scientific, data-driven, rhetorical, humanistic, philosophical, theoretical, digital, historical, pedagogical, cultural, etc. Such investigations are post-disciplinary in themselves, though this does not mean that they cannot be coordinated or organized within an institution.

Of course there is writing in the disciplines. There’s writing in most places one finds humans. But writing is always necessarily post-disciplinary as it operates to facilitate relations among disciplines and across varied institutions. Most subjects we study in the university are slippery in this way. Whether it is biology or society or art, our objects always act and connect in ways that push beyond our disciplinary paradigms. Writing is no different in this regard.

So, to bring this full circle, why fall prey to the gravity well of literary studies when there is a vast universe of writing to investigate?


“this will revolutionize education”

29 April, 2015 - 09:47

I picked up on this from Nick Carbone here. It’s a video by physics educator Derek Muller (who I think I’ve written about before here but I can’t seem to find it if I did). Here are actually two videos.

They share a common theme. The first deals with the long history of claims that various technologies will “revolutionize education.” In debunking those claims, Muller argues for the important role of the teacher as a facilitator of the social experience of education and an understanding of learning as a dialogic experience, though he doesn’t quite put it in those terms. The second video discusses research he has done on using video to instruct students in physics (he has a YouTube channel now with around 1M subscribers). Similar to the first video, he finds that a video that enacts a dialogue and works through common misconceptions, while being more confusing and demanding more attention of the viewer, ultimately results in more learning.

As he points out, students have a lot of direct experience with the physical world, but it turns out the knowledge they derive from those experiences is an obstacle rather than an aid to their understanding of physics. As such, a dialogic approach that works through those misconceptions and leads to an understanding of physical laws proves most effective. I would suggest composition encounters an analogous challenge in that students have a lot of experience with writing and language before entering the classroom, but the understanding of writing that comes from those experiences can be counter-productive.

That said, I don’t entirely agree with Muller’s characterization of the role of technology in education. (It’s probably just a simplification that is the inevitable result of a short video.) Yes, technology is not revolutionary in the way people claim it to be. I agree that education is a social, even institutional process (as opposed to learning, which is an activity that need not be social or even human). I even agree that it makes sense to characterize the role of technology in learning as evolutionary rather than revolutionary. However, if societies themselves can undergo technological revolutions, and if education is a social process, then education can be subject to technological revolutions, right?

For example, can we compare US education in 1800 with US education in 1900? During that century, the country underwent two industrial revolutions. We saw a rapid expansion of the number of public schools and colleges. Industrialization transformed our capacity to build schools and educational materials. It demanded entirely new literacies and skills from the workforce in a standardized way that schools would now need to provide. The marks of industrialization on education are obvious. Could you really argue that education was not revolutionized during this period?

Muller’s emphasis on social dialogue though would point to a far more ancient pedagogical method, that of Socrates. Despite the fact that Socrates didn’t write, it would be hard to deny that the Socratic method, and then later Plato’s academy, were products of alphabetic writing technologies. Wasn’t that a technological-educational revolution? Literacy has shaped what we imagine learning to be.

Perhaps these are just semantic differences over revolution and evolution. I certainly agree that education is a socially mediated process that involves human interaction. Since we can imagine fantastical technologies like Data, the all-too-human android in Star Trek: The Next Generation, I’m sure we can imagine machines that could replace teachers, but it’s little more than imagination at this point.

I think, in part, our problem is confusing learning with education. I can learn a lot from watching YouTube videos or reading books. I can also forget a lot. And, just as I can learn things about the physical motion of objects from life experience, knowledge that works perfectly well in aiding my interaction with the world but does me a disservice in physics class, many of the things I learn from videos or books or whatever might not coincide with some educational project. So learning has been revolutionized by books, movies, radio, TV, videodiscs (love that example), video games, and now the Internet and YouTube. But that’s not the same as education.

Education relies on learning (of course) and it relies on mediation, even if it’s “only” the mediation of speech. It also relies more broadly on the social structures of which it is a part. Revolutions in media (which we certainly have had) can lead to revolutions in learning (which I would argue we are encountering with digital media), but all that might only register as an evolutionary change in education (which we have also seen). What would revolutionize education, what has revolutionized it in the past, are broader socio-technological revolutions (e.g. the Industrial Revolution). That’s the “this” in “this will revolutionize education.” So the question is, is “this” happening now?


what to do when a professional organization tries to embrace you

28 April, 2015 - 09:06

Yesterday, at least in my disciplinary corner of the online world, there was a fair amount of discussion about the Chronicle of Higher Education report of the Modern Language Association’s upcoming officer elections, which will ultimately result in someone from the field of rhetoric becoming MLA president. I was interviewed and briefly quoted for the article, so I thought I’d be a little more expansive here.

In the most cynical-pragmatic terms, one imagines that MLA can see that rhetoric faculty are underrepresented among its members, so they are an obvious potential market. One can hardly blame an organization for trying to grow its membership, so what does it have to do to appeal to those potential consumers?  In more generous terms, MLA might view itself as having some professional-ethical obligation to better represent all of the faculty it lays claim to when it asserts itself as representing “English.” It specifically names “writing studies” in its mission, though not rhetoric or composition. Of course it is always a happy coincidence when the pragmatic and the ethical are in harmony.

Here are the main problems I think MLA faces. First is its own track record. It’s been around for over 130 years. As far as I can tell it’s marginalized rhetoric for that entire period. Even in the recent history of my 20 years as a rhetorician, there’s been very little to indicate that rhetoric belongs in the MLA. Many in rhet/comp also have strongly held positions regarding adjunctification and are unhappy with MLA’s engagement on that issue. These are some serious issues, but maybe ones that could be resolved with a decade of good will. MLA just has to hope that there are enough rhetoricians out there like the ones standing for office who are willing to work toward that end. I think in particular those who believe MLA might still play an important role in addressing adjunctification will be interested in working with the organization.

But those issues are minor compared to the second set of problems, which are not directly MLA’s fault but have to do with the relations between literary studies and rhetoric/writing studies/composition. Here’s the easiest way to understand this. As was reported in the CHE article, there were some 300 rhet/comp jobs in the MLA job list. Rhet/comp and technical writing jobs routinely make up ~40% of the jobs in English. We also know that virtually every English department relies upon teaching first-year composition for its economic survival. Those courses fund the TAs in literary studies graduate programs and make up the bulk of what English departments do on campus. So we know there are a lot of faculty in English departments with rhetoric specializations and that writing instruction forms the foundation of most of these departments. So now go and look at the undergraduate majors of these departments. Perhaps you will find a writing major of some kind or maybe a concentration in writing as an option for students. Undoubtedly there are a growing number of these, but I want you to ignore those for a moment. Look at departments that don’t have those things. Look at the “English BA” itself. Do you see any required courses in writing/rhetoric?

Yesterday I was writing in passing about an article bemoaning the disappearance of the Shakespeare requirement in English majors. Rest assured, however, if you peruse some English majors you’ll find plenty of literary requirements–in different historical periods of British and American literature, in different ethnic literatures, and so on. I doubt you’ll find many such majors with a single required course in rhetoric. What that should tell you quite plainly is that the literary studies faculty, who, by majority rule, design these curricula, do not believe that some exposure to rhetoric is an important part of getting a disciplinary education in “English.” Sure, we can have some cordoned off area where students can study rhetoric and writing, and we might even allow writing electives as part of the English major, but we’re not going to make rhetoric/writing integral to English. The most amusing part of that, of course, is that while English majors minimize rhetoric out of one side of the mouth, out of the other they claim to their students that they will help them become “good writers.”

As I said, there’s not much or probably anything MLA can do about that situation, but what it means is that literary scholars, as a group, do not view rhetoric/writing studies/composition as an integral part of English. So why would rhetoricians want to be part of an organization that has devoted itself to literary studies for over a century? Maybe it would be in MLA’s interest to try to shift the view of its literary studies members on this matter. There was a long period of time, particularly in the early days of my career, when it seemed that people in my field were demanding some respect from their literary colleagues. There was a time when there were a lot of hard feelings, departments splitting apart and so on. Maybe that’s still the case in some places. Today though I think we’re in a very different situation. I’m not sure that an alliance with literary studies is in the best long term interest of rhetoric.


when students get their “money’s worth” and other academic clickbait

27 April, 2015 - 09:11

Without laying this all at the feet of social media, in today’s fast-paced modern world (ahem), the competition in the attention economy appears to push more extreme positions. There’s nothing really new there, as the sensationalism of tabloids attests, but that seemed more avoidable in the past. The modern instantiation of clickbait is far more pervasive, and unlike spam, we pass it around willingly. Indeed we have reached a moment when it is becoming increasingly difficult to differentiate among actual news, genuine concerns, and clickbait, largely because effective clickbait draws on the other two. There’s a nice article in the Atlantic by Megan Garber called “The Argument Economy,” which takes on some of this.

But my point is that this is not just social media. Perhaps you’ve seen (on Facebook, of course) news of the recent bill in Iowa whereby professors whose student evaluations fall below a certain level would be automatically fired. Even more amusing (or at least it would be amusing if it were fiction) is the suggestion that the five worst professors above the minimum line would face being voted off the campus by students in some reality game show fashion. The general sense is that this bill will not become law. As such it might be fair to call this clickbait legislation. And if NPR reports on the matter, is that clickbait?

Similarly, when the Chronicle, the Telegraph, and the National Review all want to report on the American Council of Trustees and Alumni’s report of an apparent decline in the requirement of Shakespeare in English majors, do we call that news or clickbait? Is this clickbait curriculum? The promulgation of academic clickbait does not preclude the possibility of more serious conversations about teaching or curriculum. In fact, those conversations are certainly happening, but I imagine these clickbait pieces have an effect on those conversations, especially when those participating in the conversation might get most of their information from such outlets.

I see these clickbait offerings as presenting two familiar commonplaces about college education, neither of which is especially helpful. As the NPR article reports, the emphasis on teacher evaluations is ostensibly about ensuring students get their “money’s worth.” This refers, of course, to our concerns about student debt and the cost of college but also to the economic valuation of college degrees as investments in human capital. On the flipside, the cultural conservatism of a group like ACTA and its plea for Shakespeare reflects a competing but equally unhelpful vision of education as the transmission of traditional cultural values.

To be clear, I don’t have any investment in conversations about how to structure a degree in literary studies. And I don’t think there’s anything wrong with students viewing their college education in terms of how it might connect to their professional life after college. Indeed these are really both discussions about how to value a college education. Unfortunately the commonplaces of academic clickbait don’t appear to provide much affordance for trying to address this challenge. In their defense, though, that’s not their purpose, so I guess they’re OK as long as we understand we won’t get anything productive out of this kind of rhetoric.

In all fairness, there should be some standard of expectations to which faculty are held as teachers, even beyond tenure, with the possibility of losing one’s job as a kind of measure of last resort. However, to get there, we’d really have to create a culture of teaching that doesn’t exist. Graduate students in most disciplines receive little or no training as educators. At best, we tend to rely on mentoring. Furthermore, we know teaching is only part of the job, and research productivity is often the primary measure of tenure. We’d have to shift that (at least at some institutions). So, could you imagine your department offering a series of professional development workshops for faculty in the area of teaching and the faculty showing up on a regular basis? If they did, we’d probably have to have some serious conversations about what constitutes good teaching. That would be weird, if not horrifying. That’s how far we are from valuing teaching as a university culture.

If we did have such conversations in an English department, we would probably want to talk about what we teach and why we teach it. Maybe there would be faculty in such a department who would share ACTA’s view of Shakespeare. This commonplace seems to set up a battle among pragmatic pandering to student interests in pop culture to attract numbers, some version of the culture wars over the canon, and a commitment to the traditions of literary studies. Not surprisingly as a rhetorician in an English department, I think staging a conversation about an English major in terms of literary studies is missing the boat. What would it mean to establish a purpose for an English degree that didn’t mention literature at all? Then one might articulate how literary studies might serve that purpose. Of course, there’s likely a disciplinary issue there, as that would require establishing the study of literature as useful to some end other than its own, as designed to do something other than reproduce its own disciplinary paradigm.

As impossible/comical as it is to imagine sitting in a series of teaching workshops with faculty, it’s even more absurd to imagine English departments entertaining the possibility of an English major that was not at least 75% literary studies. Sure, there could be some separate majors or concentrations, but can anyone imagine an English department with a single major where only 50% of the courses addressed literature? It sounds absurd, even though 50% of the jobs in English every year are not in literary studies. They’re rhet/comp, technical writing, creative writing, and so on. It sounds absurd until one remembers that most of what most English departments do, in terms of raw numbers of students served, is teach writing through first-year composition. It’s like having a department that taught BIO 101, but then was otherwise a Chemistry department. Of course we now have biochemistry departments.

In any case, academic clickbait isn’t doing us any favors in terms of opening some productive dialogue about the values driving higher education. All it likely does is create reactionary positions by espousing extreme views.


the humanities’ nonhuman electrate future

23 April, 2015 - 13:31

Earlier this week, Gregory Ulmer spoke on campus. I was happy for the opportunity to see him speak, as I hadn’t met him before and his work, especially Heuretics, has been important to my own since my first semester in my doctoral program. His talk focused on his work with the Florida Research Ensemble creating artistic interventions, which he terms Konsults, into Superfund sites. However, more broadly, Ulmer’s work continues to address the challenge of imagining electracy (n.b. for those who don’t know, electracy is to the digital world what literacy is/was to the print world). I’ve discussed Ulmer’s work many times here, so today my interest is in discussing it in terms of the Bérubé talk I saw last week.

In the Bérubé talk (see my last post), the humanities’ focus emerged from dealing with the promises and challenges of modernity and Enlightenment. Freedom, justice, equality, rationality: they all offer tremendous promise as universals and yet also seem unreachable and treacherous. So the humanities must play this role in the indeterminable pursuit of judgment. In this discourse of right/wrong it supplants religion (though obviously religion continues on), with the humanities perhaps less willing to settle on an answer than religion often seems to be.

Ulmer offers a different perspective. To the binaries of right/wrong (religion) and true/false (science), he offers pain/pleasure (aesthetics). As he notes, this third segment comes from Kant as well but is only realizable as an analog to the first two in an electrate society, with the first being the product of oral cultures and the second the product of literate ones. He makes an interesting point in relation to Superfund sites and climate change more generally: we are largely able to recognize that destroying our climate is wrong and we are able to establish the scientific truth of climate change, but we appear to need to feel it as well.

In a fairly obvious sense, pain/pleasure seems a more basic segment, and one that is available to a wide range of animals, at least. What we get in a control society (Deleuze) and perhaps more so in a feed-forward culture (Hansen) are technologies that operate on this aesthetic level to modulate subjectivity and thought in a way that the symbolic behaviors of oral and print societies did not. That is, we’ve always been able to seduce, persuade, entice, repel, frighten, hurt, and so on with words, but at least there was some opportunity for conscious engagement there.

Many of the challenges Bérubé identified with judgment have to do with the orientation of the individual to societies: e.g. how we view people with disabilities or differing sexual orientations. However, one thing we might take from Ulmer’s argument is the realization that the “self” is a product of literate culture. If we see the self as a mythology, perhaps as the way we might view an oral culture’s notion of spirit, then perhaps the challenges of judgment that arise from Enlightenment become irrelevant, much like the challenge of appeasing gods is to moderns. In some respects we still want the same ends in terms of material-lived experience–we still want a good crop–we just stop appeasing gods or pursuing justice to get it. No doubt such notions seem absurd. Ulmer would suggest that they are because we are only beginning to grasp at them. He reminds us of Plato’s initial definition of humans as “featherless bipeds.”

In the place of the self Ulmer suggests an avatar as an “existential positioning system,” an analog to GPS. He didn’t get too far into this matter in the talk, but I am intrigued. Of course GPS is a technological, networked identification. The self is also a technological identification, a product of literacy. For Ulmer the EPS is likely image-based. I am interested, though, in its “new aesthetic,” alien-phenomenological qualities as a kind of machine perception. While I argue that language is nonhuman, so both oral and print cultures had nonhuman foundations, electracy might so decenter the human as to allow us to feel the nonhumans in a new way. In this respect, an EPS might be a tool that shows us a very different way of inhabiting or orienting toward the world. Arguably, that’s what writing did.

In any case, trying to figure that out seems like a really interesting project for the humanities, one that would produce an outcome, even if the implications of that outcome may take decades to realize.


gravity’s rhetoric and the value of the humanities

17 April, 2015 - 14:35

I attended a talk today at UB by Michael Bérubé on “The value and values of the humanities.” Without rehearsing the entirety of his argument, the main theme regarded how the notion of the human gets defined and the struggles in the humanities over universal values. So while we largely critique the idea of universalism, we also seek to expand notions of humans and human rights in universal ways (in particular the talk focused on queer theory and disability studies, but one could go many ways with that), though even that encounters some limits (as when people raise concerns over whether Western values about equality should be fostered in non-Western cultures). The talk is part of a larger conference on the role of the humanities in the university and part of Bérubé’s point is that the intellectual project of the humanities, which he characterized as this ongoing, perhaps never-ending, struggle over humanness, continues to be a vibrant project and should not be confused with whatever economic, institutional, bureaucratic, political crisis is happening with the humanities in higher education.

I don’t disagree with him on these points, but my concerns run at a tangent to his claims. I think we can accept the enduring value of the humanities project as this ongoing struggle with Enlightenment and modernity. (I.e. we value justice, freedom, equality, rationality, etc. but we can’t really manage to figure those things out.) But, for me, this has little to do with valuing the particular ways that this project is undertaken or the scope of the project. That is, one can completely share in this project and still argue that many of the disciplines that comprise the humanities are unnecessary or at least do not require as many faculty as they currently have. So in the 19th century we didn’t really have literary studies. We had it in spades in the 20th century (literature departments were almost always the largest departments in the humanities and perhaps across the campus). In the 21st century? Well, we’ll see I guess. But those ups and downs would really have nothing to do with the value of this general humanities project. Because, in the end, the argument for or against the importance of literary study in the pursuit of this project has to be made separately. And the same would be true of any humanities discipline.

In fact, it’s not only true of every discipline, it is also true of every theory, method, genre, course, pedagogy, and so on. It does not necessarily mean that we as humanists should continue writing what we write, teaching what we teach, or studying what we study or that such practices should be propagated to a new generation of students and scholars. It doesn’t mean that they shouldn’t, either.

In the discussion following, Bérubé made an observation that anything with which humans interact could be fair game for humanistic study. I think his point of reference was fracking, but I started thinking about gravity, which obviously we all interact with. I also sometimes think about gravity when I think about nonhuman rhetoric as a force and how far it extends. If Timothy Morton is willing to argue that the “aesthetic dimension is the causal dimension,” then might one substitute rhetoric for aesthetic? That is, are all forces rhetorical? Or barring that, might any force have a rhetorical capacity? So, gravity.

Here’s the argument I came up with for saying gravity is rhetorical. Every living thing on Earth evolved under a common and consistent gravitational force. Obviously we didn’t all end up the same because gravity was just one of many pressures on evolution. But clearly our skeletal and muscular structures are partly an expression of our encounter with gravity. This is true not only in evolutionary or species terms but individual ones as well. If I grew up on the Moon then I would look different than I do (as any reader of sci fi knows). It was Michael Jordan’s relationship to gravity that made him so amazing, and we might say the same of dancers, acrobats, and so on. One might proceed to speak about architecture’s aesthetics in gravitational terms. Anyway, I think you get the idea. It might be possible to speak of gravity as an expression, not simply as a constant predictable force, but as an indeterminate force productive of a wide range of capacities that cannot be codified in a law. So while I don’t think I would want to argue that gravity is inherently rhetorical, that the Moon’s orbit of the Earth is rhetorical, I might argue that rhetorical capacities can emerge in gravitational relations.

Maybe you don’t want to accept that argument. Most humanists would not because the humanities, in the end, are more defined by their objects of study, their methods, and their genres than by these larger, more abstract questions of value. That is, no history or English department is going to organize itself in terms of curriculum or faculty around these questions of value. They organize around geographic regions and historical periods. We don’t hire people to study questions of value, we hire them to study particular literary periods or apply specific methods.  We place highly constrained expectations on the results of those studies as well in the production of highly technical genres–the article, the monograph, etc.

So perhaps these broader questions act as a kind of gravitational force on the humanities both drawing the disciplines together and shaping the particular expressions and capacities they reflect, but if so then that only points to the contingent qualities of those disciplines. In addition clearly other forces shaped the particular forms humanities study has taken in the US–from massive shifts like nationalism and industrialization to policies regarding the building of universities (e.g. the Morrill Act or the GI Bill) or demographic shifts in US population. And, of course, I shouldn’t forget technologies.

I don’t think Bérubé would disagree with any of that, so in the end I guess I’m left thinking that the value of the humanities really tells us very little of its future.


against close reading

13 April, 2015 - 16:49

Close reading is often touted as the offering sacrificed at the altars of both short attention spans and the digital humanities (though probably for different reasons). Take for example this piece in The New Rambler by Jonathan Freedman which is ostensibly a review of Moretti’s Distant Reading but manages to hit many of the commonplaces on the subject of digital literacy, including laments about declining numbers of English majors: “fed on a diet of instant messages and twitter feeds, [students today] seem to be worldlier than students past—than I and my generation were—but to find nuance, complexity, or just plain length of literary texts less to their liking than we did.” But it’s not just students, it’s colleagues as well: the distant and surface readers, for example.

In the end though, Freedman’s argument is less against distant reading than it is for close reading: “distant reading doesn’t just have a guilty, complicitous secret-sharer relation to soi-disant close reading: it depends on it.  Drawing on the techniques of intrinsic analysis of literary texts becomes all the more necessary if we are to keep from drowning in the sea of undifferentiated and undifferentiable data.” And as far as I can tell, the distant and surface readers do not really make arguments against close reading in principle. They may critique particular close reading methods in order to argue for the value of their own methods, but that’s a different matter.

So I’ll take up the task of arguing against close reading, just so there’s actually an argument that defenders of close reading can push up against if they want.

I don’t want to make this a specifically literary argument. Yes, “close reading” is a term that we get from New Criticism, so it has terminological roots in literary studies, but it’s come a long way since then. The symptomatic readings of poststructuralism, cultural studies, and so on are all close reading practices, even though they are quite unlike the intrinsic interpretive methods of New Criticism (relying on the text itself). As Katherine Hayles argues in How We Think,

close reading justifies the discipline’s continued existence in the academy, as well as the monies spent to support literature faculty and departments. More broadly, close reading in this view constitutes the major part of the cultural capital that literary studies relies on to prove its worth to society

To borrow an old cattle industry slogan, close reading is “what’s for dinner” in English Studies. And we have made a meal of it. Whether we’ve made a dog’s dinner of it is another matter. Regardless, in the contemporary moment, and certainly for the 2 decades or so I’ve been in the discipline, close reading has also been a central feature of rhetoric. All one has to do is think of the attention to student writing to see that, but it is also characteristic of the way many rhetoricians go about their own scholarship. So what I say here about close reading applies across English Studies.

Now, while I have just said that close reading is a wide-ranging practice, it is still one that is specific to print texts and culture. And, of course it is not just a reading practice, because if it were, how would we know we did it? It’s also a writing/communicating practice. That is, I’d think of close reading as a set of genres of print textual analysis.

The key question, from my perspective, is how these genres operate in a digital media ecology. I wouldn’t want to say that they don’t operate, because people still produce close readings, and I wouldn’t want to gainsay their claim that they do so. Instead, my point is that close reading can no longer operate as it once did. From the early days of the web, across computers and writing research and beyond, it was already clear that multimedia and hyperlinks shifted rhetorical/reading experience. But it has become much clearer in the era of high speed internet, mobile media, and big data, that text just isn’t what it once was. It doesn’t produce meaning or other rhetorical effects in the same way.

Besides that, reading and writing are so much more obviously and immediately two-way streets. As you read, you are being read. As you write, you are being written. Is that an “always already” condition? Maybe, but it certainly has specific implications for digital media ecologies. What does it mean to read your Facebook status feed closely when what is being offered to you has been produced by algorithmic procedures that take account of your own activities in ways of which you are not consciously aware? Even if you’re going to read some pre-Internet text (as we often do), you’re still reading it in a digital media ecology. Again, it’s not that one can’t do close reading. It’s that close reading can’t work the same way. Maybe close reading just comes to mean that we study something, that we pay attention to it, rather than indicating any particular method or strategy for studying, but that would seem to miss the point. For me, close reading rests on a particular set of assumptions about how text is produced and how it connects with readers, not only in terms of one particular text and one particular reader, but also the whole constellation of texts and readers: i.e., a print media ecology.

Arguing “against close reading” then is not an argument to say that we should stop paying close attention to texts. If anything, it’s an argument that we should pay closer attention to the ways in which the operation of text is shifting.


writing epilogues on the 20th-century university

8 April, 2015 - 08:15

Terry Eagleton’s recent Chronicle op-ed is making the rounds. It’s a piece with some clever flourishes but with largely familiar arguments. What I think is curious is that the nostalgia for the good old, bad old days describes a university that we would no longer find acceptable. Looking back at the end of the 20th century, the greatest accomplishment of higher education might be the way that we managed to greatly expand access in the last two decades. We have clearly not found a sustainable way to afford the post-secondary education of this growing portion of the population, and many of our problems revolve around that challenge. However, many of the other changes that Eagleton laments are a result of other aspects of this shift. Students show up on campus with different values, goals, and expectations for higher education than they once did. Governments, businesses, and other “stakeholders” also have shifting views to which universities are increasingly accountable as the role of higher ed becomes further embedded in the economy with more and more jobs requiring it. As I mentioned in my last post, I’m doing some campus visits with my daughter. It may be that the “highly/most selective” colleges and universities still get to select students who fit their educational values, but that’s not the case at public universities.

When I read articles like this one, I tend to have three general reactions. First, I agree that there’s a lot wrong with the way higher education is moving (increased bureaucracy, decreased public support, etc.). Second, I find it odd and a little worrisome how “technology” is scapegoated, as if higher education has not always been technological. Third, I find the nostalgia understandable but ultimately unhelpful. As much as we may not like where we are or where we appear to be going, trying to go back is not a viable or even desirable option.

One of the amusing parts of Eagleton’s essay is his description of how it used to be, when faculty didn’t finish dissertations or write books because such things suggested “ungentlemanly labor.” I don’t think we have many colleagues who still share those values, but we still object to notions of “utility.” Maybe it’s the lower middle-class upbringing, or maybe it’s the rhetorician in me, but I’m not insulted when someone finds something I’ve written or a class I’ve taught to be useful. To the contrary, I actually prefer to do work that other people value and makes their lives easier or better, even though that might make me “ungentlemanly.”

In a couple of recent conversations I’ve had around this topic, I have heard repeated praise for the value of writing a book that maybe only a handful of people might read. I was struck by the widespread appeal of this value, at least among the audiences of humanities faculty and grad students who were present. I think I understand why they feel that way. They want to pursue their own interests without having any obligation to an audience. If Eagleton’s old colleagues found writing itself to be ungentlemanly then many contemporary humanists find the idea of writing for an audience (or writing something that would be useful) to be an anti-intellectual constraint.

Given that perhaps as a set of disciplines we are not particularly inclined to rhetorical strategies, here’s some fairly straightforward advice. It’s not an especially effective argument to say that everything about the contemporary university is going to hell and that we need to change everything so that we can create conditions where I can pursue my own interests regardless of whether they result in anything useful or even produce something that anyone else would bother reading because the humanities are inherently good and must be preserved. Perhaps that seems like a hyperbolic version of this position, but if so, only barely. A better rhetorical strategy would be one that said something along the lines of “here’s how we believe higher education should be adapting to the changing demands of society, and here’s what we in the humanities would do/change to respond to those challenges.” I see a lot, A LOT, of digital ink spilt on the humanities crisis. I almost never see an argument from within the humanities about how the humanities itself should change. It’s almost always about how everyone else should change (students, parents, politicians, administrators, employers, etc.) so that we don’t have to.

Why is that?

 


Does it matter where you go to college?

3 April, 2015 - 09:38

By now this is a familiar commonplace in our discussions about the crisis of higher education.  Here’s one recent example by Derek Thompson from the Atlantic that essentially argues that it’s less important that you get accepted into a great college than that you be the kind of person who might get accepted. However, as is painfully evident, the whole upper-middle class desperation of “helicopter parents” and “tiger moms” and whatnot to get their kids into elite schools and away from the state university systems that they’ve helped to defund through their voting patterns creates a great deal of ugliness. I’m assuming there’s no news for you there.

Personally I am in the midst of this situation. My daughter is a junior and we’ll be headed to some campus visits next week during her spring break. Her SATs put her in the 99.7th percentile of test takers and the rest of her academic record reflects that as she looks to pursue some combination of math, physics, and possibly computer science. We live in a school system with a significant community of ambitious students (and families), where the top of the high school class regularly heads out to the Ivies. I’m sure it’s not as intense as the environment of elite private schools in NYC but it’s palpable. This also has me thinking back to when I was headed out to college, as a smart kid (“class bookworm” as my yearbook will evidence) in an unremarkable high school, a first-generation college grad going to a state university, coming out of a family that had its financial struggles until my mom remarried when I was a teenager. I don’t mean to offer that as a sob story (because it isn’t) but only that my own background gives me a lot of misgivings about the value and faith we put in this race to get into elite colleges.

I think it’s easy to see the ideological investments underlying the way we try to answer this question. Part of the American Dream is believing that education is the great democratizer, that it is meritocratic, and that in the end, overall, the brightest and best students are rewarded. Part of that is also believing that intelligence is not really a genetic trait and that socio-economic contexts are not a roadblock; almost anyone can succeed if they put their mind to it. For those on the Left (i.e. the circles I mostly travel in), there is a clear recognition of socio-economics as largely driving opportunities for academic success, and that’s hard to deny when one looks at the big picture. So that tells you that on a societal level education on its own does not solve economic disparity. However, it doesn’t tell you much on the individual level where ultimately what you want is some sense of agency rather than having your agency taken away by socioeconomics or admissions boards.

Derek Thompson describes the situation he is investigating as affecting the top "3%" of high schoolers, though it's probably more like 1% if one is thinking of the top 20-25 schools in the country. Here's what I think about those kids, including my daughter… They're going to do OK if they manage to avoid having a nervous breakdown trying to get into college. Maybe an Ivy League degree is a surer route to being a CEO or a senator; it almost certainly is. But you're probably as likely to become a professional athlete or movie star as one of those, particularly if you aren't already a senator's son (cue the Creedence). Even though we're still talking about tens of thousands of families, focusing on the top 1 or even 3 percent seems fairly odd. In all honesty, it probably is a little beyond the scope of pure individual will to get into a top, top college. You probably do need some natural-born smarts and some socio-economic advantages to have a decent shot.

If we're going to have a conversation about the importance of where you go to college, it makes sense to me to talk instead about the students in the middle of the college bell curve. What's the difference between the university ranked #50 and the one ranked #150? Is there a big difference between Florida, Buffalo, Tennessee, and West Virginia? Setting aside the Ivy bias, what's at stake in going to Emory or Virginia (a top 20 school) rather than Wisconsin or Illinois or RPI (a top 50 school)? From school #20 to school #150 we're still talking about students in the top 5-20% of SAT test-takers. I'm thinking all those students are reasonably well-positioned to get a good education that leads to a rewarding career. And what's the difference between the student in the 80th percentile of college applicants who goes to a big public university and one in the 50th or 60th who goes to a regional state college? And how do these differ from the ones coming up through community colleges?

It seems to me that those are much more interesting questions than the ones about the top 1 or 2%, even if that’s where my own kid is drawing my personal attention.

Categories: Author Blogs

academic capitalism: futures of humanities graduate education

31 March, 2015 - 09:10

Yesterday I attended a roundtable on this topic on my campus. These things interest me both because I have the same concerns most of us do about these issues and because I am interested in the ways faculty in the humanities discuss these matters. So here are a few observations, starting with things that were said that I agree with:

  1. The larger forces of neoliberal capitalism cause problems for higher education and the humanities in particular.
  2. There is a perception of humanistic education as lacking value which needs to be corrected.
  3. We need to take care with any changes we make.

Certainly it’s the case that broader cultural and economic conditions shape, though do not determine, what is possible in higher education and the humanities. This has always been the case. When we invented the dissertation, the monograph, and tenure as we experience them today (which was roughly in the early-mid 20th century), there were cultural-economic conditions that framed that. It’s important to recognize that graduate education is part of a larger network and ecology of relations, that you probably can’t just change it without changing other things.

In terms of actual graduate education issues, our discussion focused on two key points, I think: the possibility of revising the dissertation and concerns about the job market. These are two of the common themes that come out of the MLA report. Here are my basic thoughts on these two matters.

  1. It’s very difficult to change what the dissertation is like without also changing the scholarship one does after completing the degree.
  2. The casualization/adjunctification of the job market is tied to the operation of graduate education and the work of tenured faculty. You can’t change it without changing those other things.

The upshot, from my perspective, is that while I completely agree that we need to fix the way higher education is funded, to reaffirm our understanding of it as a social good and, if necessary, as a strategic national interest, AND that we need to intervene in the popular discourse about humanities education to make clear the value of the things we can do, none of that will be enough on its own. It will also be necessary for us to change what we do.

Unfortunately, that's the part I hear the least about and also the part that produces the most resistance. It's unfortunate because it's the element over which we have the most direct control. Mostly what I hear are defenses of the value of the work that we do and claims that the people who want us to work differently don't really understand us. Both of those things might be true. There is value in the work of the humanities, and probably at least some of the people who want the humanities to change may not understand the work very well or appreciate that value. But ultimately I don't think that's the point either.

So I would pose graduate education reform as the following question: what would it take for us to dethrone the monograph as the central measuring stick of scholarly work in the humanities? You would think that the answer should be "not much." After all, it's got to be less than 10% of four-year institutions that are effectively "book-for-tenure, two books for full professor" kinds of places. Even if we just switched to journal articles and chapters in essay collections (i.e., to other well-established genres), that would be enough. The problem, I would say, is that humanities professors want to write books, or at least have a love/hate relationship with the prospect.

No doubt it is true that one can accomplish certain scholarly and intellectual goals in book-length texts that cannot be achieved in other genres. That’s the case with most genres: they do things other genres do not. How did we become so paradigmatically tied to this genre? So tied that many might feel that the humanities cannot be done without monographs.

If our scholarship worked differently, then our graduate curriculum could as well. Not just the dissertations, but the coursework, which in many cases is a reflection of a faculty member's active book project. Without the extended proto-book dissertation, maybe there would be more coursework, more pedagogical training, more digital literacy (to name some of the goals in the MLA report). If there were more coursework, then maybe you'd need fewer new graduate students each year to take up seats in grad courses and make the courses run. If you had three years of coursework instead of two, then you'd need to enroll 1/3 fewer students each year to fill the same number of classes (a quick back-of-the-envelope version is below). If you didn't have dissertations to oversee, then you could free faculty from what can be a significant amount of work, especially for popular professors.
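To make that arithmetic explicit (my own back-of-the-envelope illustration, using hypothetical symbols rather than any program's actual numbers): if a program admits n students per year and each student takes y years of coursework, then the steady-state number of students sitting in grad courses is roughly n times y. Holding that number of seats fixed while extending coursework from two years to three gives

\[ n_{\mathrm{new}} \times 3 = n_{\mathrm{old}} \times 2 \quad \Longrightarrow \quad n_{\mathrm{new}} = \tfrac{2}{3}\, n_{\mathrm{old}} \]

that is, one-third fewer admits each year fill the same number of seats.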

I’m not sure if that would impact adjunctification much, but at least it would reduce the number of students going through the pipeline, which is probably about as much as one could ask graduate education reform to accomplish on its own in this matter.

Now I don’t think any of these things will happen. I am very skeptical of the capacity of the humanities to evolve. Other disciplines across the campus have been more successful at adapting to these changes but they are not as deeply wedded to print literacy as much of the humanities are. However, until we can recognize that it is our commitment to the monograph that drives the shape of graduate education, I don’t think we can do more than make cosmetic changes.

 

Categories: Author Blogs

regarding “invidious distinctions between critique and production”

24 March, 2015 - 11:03

I’m working at a tangent from my book manuscript today, preparing a presentation for a local conference on “Structures of Digital Feeling.” If you have the (mis)fortune to be in Buffalo in March, I invite you to come by. Anyway, my 15 minutes of fame here involve wresting Williams’ “structure of feeling” concept from its idealist ontological anchors, imagining what real structures of feeling might be, and then putting that to work in discussing “debates” around the digital humanities.

Fortunately, Richard Grusin offers the perfect opening for this conversation as he is already discussing “structures of academic feeling” at the MLA conference in his juxtaposition of panels about the “crisis” in the humanities with the more positive outlook of DH panels. (I haven’t been to MLA in a few years so I wonder if this distinction still holds.) The quoted phrase in the title of this post comes from his Differences article on the “Dark Side of the Digital Humanities” (a reformulation of the panel presentation of the same title). It’s a response to the familiar DH refrain of “less yack, more hack.”

As he argues:

Specifically, because digital humanities can teach students how to design, develop, and produce digital artifacts that are of value to society, they are seen to offer students marketable skills quite different from those gained by analyzing literature or developing critiques of culture. This divide between teachers and scholars interested in critique and those interested in production has been central to the selling of digital humanities. My concern is that this divide threatens both to increase tensions within the mla community and to intensify the precarity running through the academic humanities writ large.

His objection to this is twofold. First, he objects to the suggestion that he doesn't make things too ("tell that to anyone who has labored for an hour or more over a single sentence"). Second, he suggests that making things in the absence of "critique" echoes "the instrumentalism of neoliberal administrators and politicians in devaluing critique (or by extension any other humanistic inquiry that doesn't make things) for being an end in itself as opposed to the more valuable and useful act 'of making stuff work.'" So the net is something along the lines of: all humanists make things, but in case we don't, making things is bad. So while Grusin wants DHers to stop making the "invidious distinction between critique and production," he still wants to make it himself in order to critique DH.

In my view, this is an argument between methods, and an argument for the primacy and necessity of "critique." It is an argument that says the humanities are essentially defined by critique. What else can critique be expected to argue? I am reminded of Vitanza's "Three Countertheses" essay where he playfully asks if we can imagine CCCC having as its conference theme the question "Should Writing Be Taught?" We might similarly ask MLA to have as its conference theme "Should We Be Doing Critique?"

Given this connection (in my head, at least) when I read about “invidious distinctions between critique and production,” I don’t think about DH. I think about rhetoric. I think about how literary studies established these distinctions in order to make critique a master term and devalue production as “skills.” I guess it’s not so funny now that the shoe is on the other foot.

What is funny though is the sudden concern with the precarity of labor. Here Grusin is rehearsing his earlier argument about the role that DH plays in creating alt-ac, non-tenure academic work. It’s a legitimate concern, but it’s a little like focusing on recycling your beer cans while driving a Hummer. If there’s a responsible party for adjunctification in English Studies, it’s got to be the literary critics who turned composition into a mill for graduate student TAs who then turn into adjuncts. I will not ignore rhet/comp’s complicity in this, but it is the “invidious distinctions between critique and production” that allowed writing instruction to become a place where this kind of labor practice could evolve.

But let me end on a point where I agree with Grusin because really I find much of his work valuable even though I disagree with him here. Near the end he writes:

Digital media can help to transform our understanding of the canon and history of the humanities by foregrounding and investigating the complex entanglements of humans and nonhumans, of humanities and technology, which have too often been minimized or ignored in conventional narratives of the Western humanistic tradition.

Grusin may not think of himself as a digital humanist, and by some narrow definition of the term he isn't. But he's as much a digital humanist as I am. This is at least partly the way he sees his own work, and it's a fair description of my approach to digital media as well. And I suppose that given my deep investment in the "theory" of DeLanda, Latour, Deleuze, and so on, one might think I'm hip-deep if not neck-deep in critique as well. But I don't look at it that way. I don't look at it that way because, as I see it, critique only exists by invidiously distinguishing itself from production. However, that distinction is unstable. Production can be uncritical, but criticism cannot exist without being produced. It's the idealism of critique that prevents it from seeing this, that prevents it from seeing that being tied to books, articles, genres, word processors, offices, tenure, etc., etc. instrumentalizes critique as much as computers and networks instrumentalize DH. The project Grusin describes addresses the division between critique and production, but critique doesn't really survive that. Critique needs to be the pure private thought of the critic in order to be what it claims to be. Once critique becomes a kind of production, a kind of rhetoric and composition, it loses its hold as the master discourse of the humanities.

Categories: Author Blogs

reading Alex Galloway’s “Cybernetic Hypothesis”

23 March, 2015 - 14:39

This is an article that came out last year in Differences (25.1), but my library doesn’t have access to the most recent issues, so I’m catching up. I’m writing here about it in part because it connects with my recent post on reading practices, as well as more generally with interest in digital matters. In the past I’ve certainly taken some issue with some of Galloway’s arguments, though I regularly use his Gaming book in my course on video games. Here, I think my overall reception of his argument is more balanced.

Galloway begins by noting that in the contemporary humanities one finds a wide range of methods: "methodology today is often more a question of appropriateness than existential fit, more a question of personal style than universal context, more a question of pragmatism than unwavering conviction." He applies this observation equally to quantitative investigation and ethnographic interviews as he does to the "instrumentalized strains of hermeneutics such as the Marxist reading, the feminist reading, or the psychoanalytic reading." However, "such liberalism nevertheless simultaneously enshrines the law of positivistic efficiency, for what could be more efficient than infinite customization?" I think he has a point here, but it's a curious one. On the one hand, there's the defense of academic freedom that insists on allowing for this "liberal ecumenicalism," as he terms it; on the other, perhaps also the realization that such a position might undermine the critical-oppositional effect one might hope to have. I think Galloway is accurately pinpointing a site of consternation for many humanists here, but let me bookmark that thought for a moment.

The main interest of the article is Galloway’s titular cybernetic hypothesis, which he describes as “a specific epistemological regime in which systems or networks combine both human and nonhuman agents in mutual communication and command.” I find this reasonable though I probably need to think through the particulars of his argument more thoroughly. Presumably, one can examine any cultural-historical moment and find one or more “epistemological regimes” at work. I would certainly argue, and I imagine Galloway would agree, that this cybernetic regime begins in particular places and spreads unevenly, so that not all humans (or nonhumans) are equally invested in this regime. I was particularly interested in his observation that

This has produced a number of contentious debates around the nature and culture of knowledge work. Perhaps the most active conversation concerns the status of hermeneutics and critique, or “what it means to read today.” Some assert that the turn toward computers and media destabilizes the typical way in which texts are read and interpreted.

As I wrote in a recent post, I share this interest in the shift in reading practices (which, I would add, are interwoven with a shift in composing). As it turns out, though, the crux of the matter seems to lie in how one values this shift. Following his historical investigation, Galloway writes, "The debate over digital humanities is thus properly framed as a debate not simply over this or that research methodology but over a general regime of knowledge going back several decades at least. Given what we have established thus far—that digital methods are at best a benign part of the zeitgeist and at worst a promulgation of late twentieth-century computationalism." I don't have much of an issue with this either. Presumably we can say essentially the same thing about the pre-digital or print humanities: that they were at best a benign part of the zeitgeist of the early-mid twentieth century and at worst a promulgation of industrialization and nationalism.

Right? I'm less certain Galloway would agree here. And here is why, and here is also where I disagree. Galloway contends that "the naturalization of technology has reached unprecedented levels with the advent of digital machines," by which he means that they operate invisibly in our lives. I'm not sure that's true. Like most middle-aged Americans, I certainly feel like my life is more technological than ever: my smartphone, the Internet, all these media devices, everything has got a computer chip in it (even the dog), etc. But it doesn't seem "natural" to me, and it certainly isn't invisible. Technology probably seemed more natural and invisible to me 30 years ago. Are our lives more technological and less natural than those of Native Americans in the 17th century? How about factory workers in New York in the 1880s? For Galloway's argument it is necessary to be able to answer yes to those questions. He wants to be able to argue that increased technologization means an increased ideological-hegemonic power that we, especially we in the humanities, must resist.

This leads to a second point of disagreement. He writes, "Ever since Kant and Marx inaugurated the modern regime of critical thought, a single notion has united the various discussions around criticality: critique is foe to ideology (or, in Kant's case, not so much ideology as dogma)." My disagreement here is more subtle. I agree with the history here, and it's probably also accurate to say that those who engage in critique view it as a "foe to ideology." However, to return to where we started, if we view "theory" as a toolbox of methods, as Galloway puts it "more a question of pragmatism than unwavering conviction," then how is it really a foe to ideology? Isn't it just ideologies all the way down? Like many others, Galloway wants to connect interest in the digital humanities with the effects of neoliberalism on higher education, such as the adjunctification of faculty. However, significant interest in the digital humanities is really just a decade old, and those neoliberal effects started in the 80s. If we really wanted to play the historical coincidence game, didn't the rise of cultural studies and critical theory begin in the 80s? Critique and theory may claim to be foes to ideology just as technologies may claim to liberate us, but I would suggest skepticism toward both claims. I would hypothesize that the institutional and disciplinary operation of critical theory is just as complicit in the neoliberal transformation of the humanities as digital technology has been, and more so than the fledgling digital humanities.

However, despite these disagreements, in the end I find myself in agreement with much of Galloway's project, which he describes as "a multimodal strategy of producing academic writing concurrent with software production, the goal of which being not to quarantine criticality, but rather to unify critical theory and digital media." I'm sure we have different ideas of what that would look like, but that's OK too. I have no more invested in promulgating some corporate view of a pseudo-technotopia than I do in preserving some disciplinary vision of a fading print culture, so I am interested in studying the ways emerging technologies shape rhetorical practice without taking as an assumption either that a) those technologies uniformly represent the imposition of some evil hegemonic power or b) that print technologies were better. Nor do I think the only other available position is technophilia. If we want to hold media technologies accountable for the nasty things done by the cultures that use them, then… Is it really necessary to finish that sentence?

So, to end with the “reading” issue. Yes, reading practices have changed with the media ecology in which they operate. I suggest that we try to understand those changes, that we invest in exploring, experimenting with, and establishing digital scholarly and pedagogical practices as we did with industrial-print practices a century or so ago. Will we end up with something that can operate in opposition to the dominant ideology? I’m sure we will… at least as much as we did in the past.

Categories: Author Blogs

the “adjacent possible,” capacities, and How We Got to Now

16 March, 2015 - 11:20

I read Steven Johnson's How We Got to Now this weekend, a book that examines six technological trajectories: glass, cooling, sound recording, clean water, clocks, and lighting. These histories cut across disciplinary and social areas, following what Johnson calls the "hummingbird effect" (after the co-evolution of hummingbirds and flowers). These are not technological determinist arguments but rather accounts of how intersections among innovations open up unexpected possibilities. This is the "adjacent possible," a term Johnson borrows from biologist Stuart Kauffman, though here Johnson is applying it to technological rather than biological evolution. If there is a central thesis to Johnson's book, it is that "the march of technology expands the space of possibility around us, but how we explore that space is up to us" (226).

Overall, it's an interesting book, well written as you'd expect, with many curious narratives. I was especially interested in the glass chapter. However, I was taken from the start, where Johnson begins with a reference to Manuel DeLanda's robot historian from War in the Age of Intelligent Machines, where DeLanda suggests that a robot would have a very different perspective on our history than a human. Johnson agrees and takes up this challenge, writing "I have tried to tell the story of these innovations from something like the perspective of DeLanda's robot historian. If the lightbulb could write a history of the past three hundred years, it too would look very different" (2). In other words, Johnson suggests something akin to an alien phenomenological approach. I can't say that he necessarily delivers on that. I'm not sure that silicon dioxide's view of its becoming glass through its interactions with humans over the past few thousand years would make much, if any, sense to us. However, the speculation could be interesting.

The glass chapter offers a couple of interesting twists. It addresses the development of optics: reading glasses, microscopes, telescopes. It jumps to the industrial development of fiberglass as a building material, and then joins the two in fiber optics. However, Johnson takes a sidestep back to mirrors, where he takes up Lewis Mumford's argument that the mirror initiated a new conception of the self and self-consciousness among Europeans. Again, the mirror didn't determine this shift, but it opened an adjacent possibility space.

You can see how all of these innovations come together in social media spaces. No server farms without cooling. No computer chips without super clean water or quartz timing. No Internet without fiber optics and digital audio communication. Throw in the mirror effect and one gets Selfie City, for example.

Johnson's use of the adjacent possible works well enough for him, but for me it still leaves too many agency questions open. I prefer the more DeLanda-inspired notion of capacities or the Latourian idea of how we are "made to act." Still, Johnson does make a convincing argument for the ways in which seemingly unrelated events conspire to create a new opportunity, where a "slow hunch" (to use a term from one of his earlier works) suddenly becomes realizable because of a discovery somewhere or a change in economic conditions somewhere else. I suppose one might think of it as a nod to kairos.

I want to keep this in mind for the particular questions that concern me around the intersections of digital rhetoric and higher education. Maybe I have a slow hunch too, which does seem strange in the rapid turnover of digital innovation. (And when I say "I have" I don't mean to suggest others are not seeing something similar, either.) Johnson points out how Edison at first imagined people using the phonograph to record audio letters to send to one another, and Bell imagined people using the telephone to listen to orchestras play live music. The reversal seems funny from our perspective, though today the process of "softwarization" (to use Manovich's term) means that we have smartphones that combine all of these activities. Watch a video or video chat or watch a live event or record a video and share it with others. What happens when classrooms become softwarized, which they obviously already have? Are there analogous misunderstandings?

The slow hunch relies on a rather subtle misunderstanding about the kind of people that digital technologies mediate. Our expectations about digital learning presume interiorized subjects of the sort that occupied the possibility space of modern life, maybe starting with the mirror.  Our dissatisfaction with digital pedagogy fundamentally lies in our awareness that we do not act the same way online as we do in class, and we don’t even act the same way in class anymore because of digital media. We put the lion’s share of our energy into trying to make digital pedagogy conform to its predecessor, in part perhaps because we share in the rather fantastical belief that the virtual world is immaterial and can be made to be like anything.

I share in Johnson’s rejection of techno-determinism, though I have a more complex vision of agency than he is willing to share in his book. It does make sense to me that we need to explore the possibility spaces here. As with many of the stories in this book, what comes about will likely be shaped by economic realities as much as anything else.  However, if we start by investigating the different kinds of subjects we might become and then imagining how those subjects would learn, we start to illuminate those capacities, those possibility spaces, in ways that might be taken up more materially and economically.

Categories: Author Blogs

teaching research, deep attention, and reading

12 March, 2015 - 08:36

I've been working recently through some concepts on attention and reading: Katherine Hayles on deep attention and hyper-reading, Richard Miller on slow reading, surface reading, Moretti's distant reading, and so on. It's part of my larger project taking a "realist rhetorical" approach to media ecologies and, in particular, to that part of the ecology I term "learning assemblies": institutional assemblages that have an explicit pedagogical operation. This has been juxtaposed, for me, with two recent on-campus conversations about teaching research writing.

At a workshop on Tuesday supporting writing-in-the-disciplines, Deb Rossen-Knill, our colleague from Rochester, was discussing with the faculty the ongoing challenge of supporting students as they seek to synthesize source materials into an argument of their own. Coincidentally, a similar topic was raised in a department meeting. Of course, it is not a surprising topic. In fact, it’s one of the commonplaces of writing instruction. I have certainly had many conversations in our program regarding the research paper in first-year composition.

So here's the point of intersection. Miller does a good job of explaining the basic situation here, one which I would articulate in terms of media ecologies. Particularly for undergrads, though really all of us have been affected, the difference between the pre-1995 or even pre-2000 research paper and the contemporary situation is the availability of information. Again, we all know this. And this data abundance demands a different kind of reading, even just to sift through the results to find what one wants to read more closely. Furthermore, just as we have writing in the disciplines, we might also want to have reading in the disciplines, as we clearly do not all treat texts in the same way.

And I want to take a sideways step here, drawing on Lev Manovich’s concept of “softwarization” (in Software Takes Command). There he discusses how various analog media become translated into digital-software forms and then, out of that ecological shift, begin to proliferate new media species. Again, I think we know this. When texts, photos, films, and audio recordings become digitized, they lose some of the characteristics related to their analog media, some get translated (which implies transformation as well), and then some new characteristics get added. Print text and digital text are not the same things, but this gets even more apparent as new textual species start to emerge and the level of differentiation between the two begins to grow (e.g. reading a novel vs. reading Twitter). To add to that, there really aren’t “print texts” anymore, at least not in the sense that they existed 30 years ago. Not only have they changed in the sense that they exist in a very different media context but they are composed in a different media ecology as well.

So not only do we have disciplinary differences in reading, we also need to recognize that “reading” now refers to our encounters with a wide range of different species in our media ecology.

Back to the intersection with research writing. Student writers can face several obvious challenges in the “research paper” assignment:

  • inexperience with the disciplinary genre in which they are being asked to write;
  • inexperience with disciplinary practices of conducting research and reading;
  • lack of knowledge/context for the academic sources they are asked to cite;
  • lack of intrinsic motivation or curiosity in the research task they’ve been assigned;
  • any number of other, competing demands on their time and attention.

The truth, though, is that even as experienced academic writers we face versions of these challenges: struggling to write well in the genre, laboring through the research process, dealing with difficult texts, staying motivated and on task.

My (brief) point here is that these matters all shift along with our media ecology. I disagree, somewhat, with Hayles on this matter in her description of "deep attention." At points (in her 2007 Profession article, for example) she describes deep attention as a generalized cognitive skill, one that applies equally to reading a Victorian novel and solving a complicated math problem. I don't think it's that generalizable, and I know plenty of people who can attend to a novel but not a math problem and vice versa. I also know plenty of folks from older generations who cannot do either. Where I do agree is that cognitive-attentional processes emerge from relations among objects (human and otherwise). And I would assert that we are not helpless in the face of these media shifts.

Just as we developed highly specialized disciplinary and professional reading and writing practices in a print culture (that were different from popular-cultural reading and writing practices), presumably we can do the same in a digital culture. I think it’s fair to say we haven’t quite figured those things out, but it strikes me as something worth addressing.

So, if you’re teaching a class with a “research paper” in it and trying to figure out how to articulate your assignment, I suppose the first question I’d ask you to consider is “how is your assignment different from one you might have given (or received) 15-20 years ago?” Because I assure you that even if your assignment isn’t different, everything else about the media ecology in which it is situated is.

Categories: Author Blogs

cognition’s earthrise

19 February, 2015 - 12:03

If you do not know, then Wikipedia will happily tell you that the 1968 photo known as "Earthrise" (unsurprisingly taken by an astronaut) has been called the "most influential environmental photograph ever taken." Why? Presumably because it presents the Earth as a cohesive yet fragile entity. In any case, "Earthrise" captures something about the ecological turn in the humanities, from ecocriticism to ecocomposition. The general ecological/environmental movement asks us to rethink our relationship to the world. The world is not ours to exploit, nor is it simply the backdrop for our history. Perhaps it should be obvious by now that we are actors on a global scale in our ecology. Ecohumanities movements take up these environmental concerns but also adopt an ecological view toward their traditional objects of study. E.g., what does it mean to view composing as an ecological process? In short, one decenters the human from traditionally anthropocentric studies of activities we have so firmly understood as human that we have called them the humanities.

Distributed cognition moves in this direction with thinking. Typically when I see discussions of distributed cognition, they are more along the lines of the extended mind, of tools for thought. That is, they illuminate how various technologies allow us to engage in cognitive activities we wouldn't otherwise be able to perform. Think of a calculator or even, a la Walter Ong, how writing technologies shape our thinking. However, one might also conceive of distributed cognition as the way humans and machines interact to undertake cognitive tasks no individual human could accomplish. Edwin Hutchins' classic example is the docking of a naval vessel, but I'm thinking of Wikipedia.

Eco-cognition would seem to be another matter altogether. One might think of the noosphere. Indeed, some have compared the ecocritical concept of the Anthropocene with the noosphere, as both point to a shift where human cognitive-technical capacities develop to the point of having an impact on the global ecology. The noosphere suggests the emergence of some collective human consciousness, a shared ecology of human thought. The noosphere, though, does tend to keep the human at the center. Another angle would point toward panpsychism, where all objects are thinking or at least might be thinking.

I have a slightly different interest. If thinking is real, then why would it not join other real things and processes in an ecology? If thinking is distributed, then it partly, maybe largely, happens beyond the purview of our conscious experience of it. Just as our subjective experience of ecology in general is incomplete, so too is our subjective experience of cognition.

So perhaps cognition needs a kind of “Earthrise” moment, one that captures the shared yet fragile context in which we think.

Categories: Author Blogs

rhetorical organization and Latourian modes of existence

12 February, 2015 - 13:15

Organization is a common topic of discussion in writing instruction. Often, students are asked to produce "well-organized" essays, and organization is a familiar criterion for assessment. Organization generally refers to the rhetorical canon of arrangement, but somehow it makes more sense to say to students that their essays should be well-organized instead of well-arranged. Organization also implies denser connection, stratification, and perhaps even hierarchy than arrangement does.

But that’s what I want to get after here.

Latour brings up organization as one of his "modes of existence." Combined with attachment and morality, organization represents his effort to displace social explanations founded on a spectral notion of The Economy. (I haven't given it much thought, but it might be interesting to match these three with the rhetorical modes of persuasion: pathos/attachment, ethos/morality, logos/organization.) In my reading, the key point about Latour's organization is that it is both easy to trace and paradoxical: "Easy, because we are constantly in the process of organizing and being organized; paradoxical, because we always keep on imagining that elsewhere, higher up, lower down, above or below, the experience would be totally different; that there would have to be a break in the planes, in levels, thanks to which other beings, transcendent with respect to the first, would finally come along to organize everything" (389).

This certainly applies to the way we approach organizing writing. We are constantly in the process of doing it. And yet organization is always somewhere else. It's not here in this word or sentence or paragraph. Where is it? I was just here, organizing. To make that happen I have a script, which I am above and below. Take the example of the book I am working on (or avoiding working on by blogging instead). As Latour would point out, I am the writer of the script I will follow. (That's not to say I have free will. It is instead to say that I am made to act, or in this case, made to script.) I am also the person who must carry out the script: above and below. And yet "Organization never works because of the scripts; and yet, because of the scripts, it works after all, hobbling along through an often exhausting reinjection of acts of (re)organization, or, to use a delicious euphemism from economics, through massive expenditures of 'transaction costs.'"

This is where I want to think about the glitchy character of real rhetorical relations. There’s always this patchy, hit or miss quality to communication (as there is to all relations). There are these extensive, ecological assemblages with which we are contending as both writers and readers. We have scripts to follow, but we have many competing scripts to follow, so many different ways to be organized.  I do not mean to suggest that a text cannot be well or poorly composed, organized or disorganized. Instead, the point is that organization is not some meta-entity, some transcendent being, that comes along to impose itself.

Thinking back to yesterday's post on digital literacy… One of the many complaints lodged against digital communication is its supposed unsuitability for the tasks of "rigorous" academic work. Yes, I know, you'd think we could get past it. But I think we still struggle with imagining how fully digital scholarship would operate, would be organized (as opposed to the skeuomorphs of the PDF essay, for example). Though we should know better, I think we still imagine something transcendent in the organization of the essay that allows it to be academic. What happens when we discover that the essay turns out to be like the façades on those fake Hollywood Western movie sets? There's no transcendent organization there, just more texts and readers, editors, publishers, computers, offices, meetings, reviewers' letters, libraries, databases, etc., etc. Digital scholarship gets organized in the same way.

In fact, if we want students to produce well-organized essays, we might think in similar terms about the networks, assemblages, and ecologies in which they compose. That’s not to say that student-writers are not actors in this matter, that they are not made to act. They are actors following scripts, scripts they have a hand in authoring.

 

Categories: Author Blogs

improving digital literacy: the Horizon Report’s “solvable” challenge

11 February, 2015 - 14:27

It's been a few years since I wrote about the annual Horizon Report, put out by EDUCAUSE and the New Media Consortium, but the 2015 report recently came out. There's a lot of interesting information in there, but I want to speak to one particular issue, digital literacy. Basically, the report identifies three categories–trends, challenges, and technological developments–and focuses on six items in each category. So there are 18 different items in the report, and I'm talking about one of them here.

The report identifies “improving digital literacy” as a significant but solvable challenge, one “that we understand and know how to solve.” I guess I’m glad to hear that. I suppose this might be a semantic matter. What do we mean by “improving”?  And what do we mean by “digital literacy”? In terms of the latter, the Report has an ambitious if vague definition.

Current definitions of literacy only account for the gaining of new knowledge, skills, and attitudes, but do not include the deeper components of intention, reflection, and generativity. The addition of aptitude and creativity to the definition emphasizes that digital literacy is an iterative process that involves students learning about, interacting with, and then demonstrating or sharing their new knowledge.

I do think this recognition that digital literacy is an ongoing process of learning rather than a one-time knowledge dump is a running theme in the report. The report also divides strategies for addressing the challenge into areas of policy making, faculty leadership, and practice. So it points to new policies being established by governments and new learning standards built by professional organizations. It recognizes the importance of ongoing professional development for faculty (though this ties into the Report's "wicked challenge" of figuring out how to reward teaching) and of providing support for students, from coursework to online resources. Undoubtedly there is a lot of energy and effort going into this challenge. Far more than there was a decade ago, which is good news. At UB, our revised general education program is very conscious of the task of supporting students' digital literacy, and that's a significant step in the direction of "improving digital literacy."

I remain concerned about the use of the word "literacy." I am concerned that it leads people to imagine that whatever digital literacy might be is somehow analogous to print literacy (or just plain old literacy). So let's call it digital literacy instead. You might ask how much listening/speaking and reading/writing have in common. Something, for sure. I'm sure if we strapped you into an fMRI we'd find some common areas of the brain lighting up for both activities. And maybe overlaps with digitally mediated tasks as well. I find that observation rather unsatisfying though. Reading books and writing essays as a means for becoming digitally literate is analogous to having a first-year composition course where one sits and talks about writing but never actually writes anything. It's great to talk about writing, and it's useful to read about digital literacy too (as you are doing now), but at some point you have to do it.

And what is “it”? The report acknowledges that digital literacy is a shifting target (which is why we need Horizon Reports in the first place). We can speak broadly of a few general goals:

  • finding and evaluating “good” digital media and information
  • using digital media/technologies to communicate and collaborate on an informal and real-time basis
  • composing digital media

As we might already argue with our legacy writing instruction challenges, these are not generalizable skills. They are specific to networks, assemblages, communities–however you want to think of that. In fact, if improving digital literacy is a solvable challenge, that would be great news, because it might mean we could leave behind the apparently not-so-solvable challenge of improving print literacy.

Still, it's not so useful to just take the air out of someone's balloon. Even if improving digital literacy proves to be more intractable, at least these folks are taking a whack at it. And so am I. I know my arguments on this blog (and certainly in my more formal scholarship) can prove to be rather abstract, but I do think our challenge partly lies in our abstractions of rhetorical practice, specifically in our anthropocentric notions of symbolic behavior that imagine that, regardless of the technology/ecology in which we are immersed, rhetorical action begins and ends with humans.

So, for example, faculty development is clearly an issue. But teaching professors how to use WordPress or whatever isn't the issue. If you could magically turn the faculty into highly expert digerati, you'd still be left with sending them back to their disciplines, their curricula, and their classrooms. You can't really teach digital literacy in an environment that is ultimately about listening to lectures, taking notes, reading textbooks, writing essays, and passing exams. If faculty can recognize how their curriculum is shaped around certain technological networks/ecologies and the kinds of cognitive/subjective behaviors that emerge and are territorialized within them, then we have a starting point.

Let me put this differently… To what extent are your course and its objectives founded upon the affordances of reading and writing texts on an essentially individual scale (i.e., individual students silently reading or writing texts)? That was the focus of print literacy (though we can certainly contest the notion that such things were ever really "individual"). Adding a WordPress site to such a course isn't going to improve students' digital literacy. Sure, the faculty do need the skills, but they need to use them to rethink curriculum and pedagogy at a far deeper level. Not so solvable, though I wish it were.

Categories: Author Blogs

really thinking: rhetoric and cognition

6 February, 2015 - 14:52

I am at work on a chapter in my book that deals with cognition as it relates to a realist ontology and rhetoric, and I'm hoping this exercise will help me to crystallize my thoughts. I'm drawing on some concepts familiar (at least to me), from distributed cognition and the extended mind to DeLanda's fascinating and bizarre account of the development of cognition in Philosophy and Simulation. I also work through the research on writing and cognition going on in cognitive science, the neurorhetorical response to that, the sociocultural account of cognition in activity theory, and some of the posthuman accounts drawing on complexity theory in our field (e.g. Hawk, Dobrin, Rickert).

Obviously the question of cognition is central to our field, though the “cultural turn” has changed this into a question of subjectivity or agency. (I appreciate Dobrin’s admonishment that we focus on it too much.) My basic argument should be familiar within a realist ontological framework.

  1. All objects have the capacity to express and be perturbed by expression (though that capacity is not always realized).
  2. Those expressions are themselves autonomous. These are the ontological conditions of what I term a “minimal rhetoric.” I’m not interested in drawing boundaries between rhetorical and not rhetorical, except to argue against the boundary that limits rhetoric to human symbolic behavior.
  3. The relations of expression and perturbation create the capacities for cognition and agency. Again, I'm not interested in drawing boundaries regarding which objects have these capacities. Assuming you subscribe to a theory of evolution, then you accept that thought and agency can emerge from nonliving entities. As Latour would say, through interaction we are "made to act," which would include being made to think.
  4. I also draw on DeLanda here. Biological cognitive capacities develop from their simplest forms through interactions with objects. As those capacities develop, the ability to express and be perturbed expands. We (biological critters) expand our senses into larger spaces, and, with memory, into time as well. That works both backward and forward as we develop the capacity to generate nonsymbolic scripts (expectations of what will happen next). What we can get out of this, though, is that cognition is an activity that emerges through relations with others and that the increasing capacity of an object to think can be traced in those terms.
  5. So thinking joins a hypothetically infinite range of capacities available to objects through their interactions with others. It’s as real and material as any other activity. It is not ontologically exceptional, even though we tend to value it. As such there’s really no reason to build a universe around the perceived strengths or limits of thinking. When I consider an apple, I engage in an activity with certain capacities over others. When I eat the apple, I engage in an activity with certain capacities over others.
  6. Thinking through an interaction with language (symbolic behavior) produces capacities of its own. The whole process might be speculatively explored, as DeLanda does, as emerging from mechanism-independent processes. Of course, no one empirically knows how language came about. From my perspective, what's important is understanding symbolic behavior as co-emergent with cognition, as real activities that are ecological. By that I don't mean that they are related to "everything," but that there is an extensive network of relations, limited only by our capacities for perturbation, at play.

I'm not sure if these claims strike you as obvious or absurd. They would suggest that rhetoric cannot be limited to symbolic behavior or to culture (as opposed to nature) or to humans. They would suggest that looking for cognition in the brain or in language or in society will only offer partial pictures. A realist rhetoric can assert that it is not limited to human thought or symbolic behavior, but it does need to be able to account for them in a way that doesn't lead one back to idealism or empiricism.

For the mainstream, postmodern rhet/comp person, I suppose symbolic behavior is cultural and ideological. It can overdetermine subjectivity and agency. The only possible escapes are through the indeterminacy of language or the chance that critical thinking produces enough resistance to overdetermination, but there's never really any outside here. I call this the "agent complex," which is really like the Higgs boson problem for postmodernity. Posthuman rhetoric offers in turn a "complex agent," one where complexity theory describes how agency can emerge in a non-deterministic way.

To end by circling back to DeLanda and Latour, both idealism and empiricism want to endow thought with special ontological powers: to create a space to act free from relation and/or to create an objectively true model of the world. Realism sees thought as another capacity for action, another means of construction or instauration, where acting outside of relation makes no sense and knowledge is always constructed without necessarily being fictional.

Categories: Author Blogs