Digital Digs (Alex Reid)

an archeology of the future

academic writing, genre, and clarity

28 September, 2014 - 09:00

Steven Pinker is clearly on a promotional tour for his new book on the subject of style. He’s been taking a couple of recent jabs at academic writing, including this one in The Chronicle, which asks the eternal musical question “Why Academics’ Writing Stinks.” For decades Pinker has enjoyed taking jabs at the humanities, such as this one: “Scholars in the softer fields spout obscure verbiage to hide the fact that they have nothing to say,” though here he backs away from this claim… sort of. As such, it is easy to give in to the temptation to jab back at this kind of troll bait. However, I think it is more interesting to try to answer the question Pinker poses.

To do that, though, we first have to ask the question “why do we think academic writing is poor?” Given the sheer volume of scholarly publication, the most reasonable hypothesis would be some kind of bell curve distribution of excellent, average, and poor writing performance. Of course that all depends on establishing some usable standard. No doubt part of this judgment, a large part, has to do with the use of jargon, and Pinker acknowledges that some use of technical terms is useful. Sometimes academics write for larger audiences, but for the most part they are writing to other experts in their fields. The easiest way I can think of to adjust for this is to focus on the discursive practices within one’s own field. Pinker is well-known for his complaints about postmodernism, not only in terms of its jargon but also its philosophical positions. So when he complains about poor writing in the humanities filled with postmodern jargon, it is a somewhat disingenuous complaint (which is not to say that one cannot find examples of jargon-ridden prose in humanities scholarship). However, the point here is simply that given such a volume of texts, there are going to be a fairly substantial number of substandard examples but also some good ones. Pinker writes, “Helen Sword masochistically analyzed the literary style in a sample of 500 scholarly articles and found that a healthy minority in every field were written with grace and verve.” That would seem to support my hypothesis, even if one doesn’t really know how one establishes a standard for “grace and verve.” Perhaps Sword does a better job of explaining her process. In any case, even if the bell curve is positively skewed, with a shining one percent and a bulge of mediocrity, we still end up with a fairly bell-like shape.

We can’t all be above-average writers.

This raises another point to which Pinker does allude: academics are not selected or rewarded for their writing ability. Yes, one does need to get published and, depending on one’s field, write successful grant applications. However, to the extent that such success is based on writing ability it is certainly relative to the competition. To mangle a cliche, one doesn’t need to outrun the bear of excellent writing. I’m not sure about Pinker, but I would not subscribe to the claim that there is some general writing ability that one can either have or not have. Instead, following many of my colleagues, I would view academic writing as a highly specialized skill not easily translatable from one discipline to another or even from a disciplinary genre to a broader audience genre. Learning to do the latter is a skill in itself. It is not one that is necessary for academic success (you could argue that we should change that, but you’d need to convince me that there is a broader audience out there for much of this work). In any case, most academics don’t acquire that skill. And, as I’ve said above, they may not be the best writers in their given technical genre. This is Pinker’s point, I think. Once you get over the publication hurdle, there’s little incentive to get better as a writer.

So why do we think academic writing is poor? Because some academic articles are worse than others and there’s not much incentive to do better.

While on one level some of Pinker’s specific “Strunk and White-esque” advice on word choice makes sense, in the bigger picture focusing on these sentence-level issues is as misguided here as it is for first-year composition. However, I think he’s quite wrong elsewhere. For example, he writes:

The purpose of writing is presentation, and its motive is disinterested truth. It succeeds when it aligns language with truth, the proof of success being clarity and simplicity. The truth can be known and is not the same as the language that reveals it; prose is a window onto the world. The writer knows the truth before putting it into words; he is not using the occasion of writing to sort out what he thinks.

He attributes this view not to himself but to a “classic style” of writing, though I believe the rest of the article indicates his strong support of this style. Pinker contends that efforts toward clarity in academic writing are stalled by a second style “that Thomas and Turner call self-conscious, relativistic, ironic, or postmodern, in which ‘the writer’s chief, if unstated, concern is to escape being convicted of philosophical naïveté about his own enterprise.'” Mostly he wants to argue that this concern is unwarranted but he does acknowledge that even scientists

recognize that it’s hard to know the truth, that the world doesn’t just reveal itself to us, that we understand the world through our theories and constructs, which are not pictures but abstract propositions, and that our ways of understanding the world must constantly be scrutinized for hidden biases. It’s just that good writers don’t flaunt that anxiety in every passage they write; they artfully conceal it for clarity’s sake.

Pinker is far more confident that he knows what writing is than I am, and I am skeptical of his confidence. As he indicates in this passage, clarity is achieved through artful concealment. This is essentially the hallmark recognition of deconstruction. It is also a recognition that is incompatible with his description of a classic style of writing that has a motive of “disinterested truth.” The disinterested truth would be that the writer doesn’t know the truth but that s/he conceals that fact through the rhetorical performance of clarity.

So how does conventional academic writing fit into this view? Is academic jargon an effort to obscure the fact that the author doesn’t know the truth or is hedging his/her bets, as Pinker seems to be suggesting here? Or is it a kind of intellectual laziness that reflects little concern about communicating (which is perhaps the most generous explanation Pinker offers)? I would say that academic jargon is not just a convenient shorthand for complex ideas. Pinker himself points to the value of “chunking” ideas within academic concepts:

To work around the limitations of short-term memory, the mind can package ideas into bigger and bigger units, which the psychologist George Miller dubbed “chunks.” As we read and learn, we master a vast number of abstractions, and each becomes a mental unit that we can bring to mind in an instant and share with others by uttering its name. An adult mind that is brimming with chunks is a powerful engine of reason, but it comes at a cost: a failure to communicate with other minds that have not mastered the same chunks.

The difficulty is knowing which chunks we share with our audience, but that’s where genre comes in. By developing a facility with the genre shared within a community, we expand our ability to think more complexly while also being able to expect that our audience will understand what we are talking about. However, Pinker’s recognition here also casts doubt on his earlier description of a classic style where “the writer knows the truth before putting it into words.”

In the end Pinker is circumspect on whether or not he ultimately wants to argue that “good” academic writing would adopt the “classic style.” However, all of his more specific complaints and pieces of stylistic advice would imply that he believes not only that a classic style is best but also that it is a good description of how writing works. To be generous we could say that the former is debatable (what is the best style for academic writing?), but the latter is mistaken.

That’s not how writing works. Beyond the writing skills one typically acquires in elementary school, there are few general writing skills. Writing in an academic discipline is a specific technical skill with a specific genre that is not simply “jargon” but a range of cognitive tools that enable scholars to do the work they do. In this respect they are similar to microscopes, bibliographies, mathematical formulas, spreadsheets, and research methods. This doesn’t mean that some scholars aren’t better writers of their genres than others (of course they are). This doesn’t mean that scholars might not also write for audiences beyond the community that shares their technical genre or that, when they do, they might not do a better job of it (of course they could). This doesn’t mean that disciplines might not do a better job–in graduate school curriculum, in the dissertating process, through editorial review of articles or monographs, or with the mentoring of junior faculty–of helping scholars develop as writers.

I think all those things could happen. I just don’t think that Pinker’s stylistic advice is especially helpful in that regard. It’s really just an analog of the old-fashioned red ink on an undergraduate student’s essay.


building a campus culture of writing #sunycow2014

28 September, 2014 - 06:58

Below is the text of my presentation at the SUNY Council on Writing conference, delivered yesterday in Syracuse as part of a panel on the role of student desire in developing writing programs.

Building a Campus Culture of Writing

On a campus like UB with nearly 30,000 students and thousands of faculty and staff spread around the planet studying abroad, conducting research, attending our Singapore campus, and here in western New York, we might say that writing is a non-stop activity. We might say that writing is an integral activity in virtually all the communities on campus, from the football team to the contractors building the medical campus to the admissions office, the various deans’ offices, and of course every academic department, to say nothing of all the media composition at work and play on a campus: emails, text messages, selfies, tweets, and so on.

We might say that. But if we did, what would the word “writing” mean? What, if anything, holds these activities together to create something we might call a “culture of writing”?

The discourses surrounding the notion of “student need” offer one approach to this question. More than 20 years ago in his book Fragments of Rationality, Lester Faigley offered an observation about student desires in his description of the intra-disciplinary conflicts within rhetoric and composition at that time. As he writes, “many of the fault lines in composition studies are disagreements over the subjectivities that teachers of writing want students to occupy” (17). We can read this two ways. First, Faigley was observing that the disagreements among composition scholars could be understood in terms of the different ways they theorized subjectivity and desire. This is perhaps still true, though the theories have changed somewhat. The second more complicated reading focuses on the desires of teachers for students to occupy particular subjective positions. That is, fault lines emerge over our desire for students to think and feel, or at least behave, in certain ways.

I believe this condition is only intensified in our failed attempts to develop campus-wide cultures of writing. All too often we hear ego-driven heroic pedagogy narratives that begin and end with an assertion of authority and expertise that seeks to ground arguments about the subject positions that students should occupy as writers across the campus. That is, what students should be doing, what they need to be doing. Understandably, rhetoric and composition, as a discipline, has much at stake in these narratives. When, in his 1991 essay “Three Countertheses: Or, A Critical In(ter)vention into Composition Theories and Pedagogies,” Victor Vitanza seriocomically wonders if the CCCC could ever have as its conference theme the question “Should writing be taught?” he is pointing to a paradigmatic disciplinary foundation in a particular view of writing, as well as the necessity of teaching it. It’s a view that drives our desire to build a “culture of writing,” a desire that rises as “writing,” whatever that is, becomes increasingly untenable. Indeed I imagine anyone in our discipline can already feel the response welling up within them to say “Yes, exactly. It is because we are losing grip on writing as a culture that we rhetoricians need to hold on that much more tightly.”

But allow me to return to Lester Faigley, this time in his 1996 CCCC address, where he imagined that “If we come back to our annual convention a decade from now and find that the essay is no longer on center stage, it will not mean the end of our discipline. I expect that we will be teaching an increasingly fluid, multimedia literacy” (40-41). Of course that didn’t really happen. Eight years later, in her 2004 address to CCCC, Kathleen Yancey echoed Faigley: “New composition includes the literacy of print: it adds on to it and brings the notions of practice and activity and circulation and media and screen and networking to our conceptions of process. It will require a new expertise of us as it does of our students. And ultimately, new composition may require a new site for learning for all of us” (320). On one level, it would simply be impossible for any academic discipline to remain unchanged in the wake of the Internet revolution. Every academic in every field has seen the ways research is published and accessed shift. However, the essay remains at center stage of our field, as a genre of scholarly production and as a classroom assignment, and if there is a new site for learning, as Yancey describes, few have travelled there and even fewer have stayed.

The essay has remained the standard of humanities scholarly production, as well as the typical genre of the humanities curriculum. But much has happened in the 15-20 years since Faigley offered his prognostication. According to the US Department of Education, around 40% of the majors offered at US colleges today didn’t exist in the early 1990s. The majority of these majors are in professional schools, which in turn points to a proliferation of activities that we might loosely characterize as writing on campus. For anyone who has been an academic during this period, this shift would have been hard to miss. It would be equally hard to fail to recognize that our students, along with many others around the world, are now communicating by a plethora of digital means. Today, we still often refer to the excellent longitudinal study of student writers carried out at Stanford by Andrea Lunsford and her colleagues. That study revealed the extensive amount of self-sponsored and multimedia writing undertaken by students. However, that study ended in 2006. Just to put that in context: there were no iPhones in 2006, and Facebook had 12 million users, as opposed to the more than one billion today. Compared to a decade ago, our communicational environment today is unrecognizable. As is evident across the spectrum of academic discourses, from articles in the Chronicle to journal articles and even curricular and classroom policies, we may want to call those activities writing, so that we can lay claim to them, but we also want to deny their status as writing in the name of some other “culture of writing.” For even Faigley and Yancey, with their unrealized visions of the future of our discipline, still imagine something called “writing” at the center of it. But if so, what does that word mean?

Even if we limit ourselves quite narrowly to the communication practices undertaken in undergraduate courses, we unavoidably recognize that different departments will teach their students different genres. What status do we attribute to those genres in relation to the more nebulous disciplinary abstraction of “writing”? Do we simply rehearse the devaluation carried out historically against rhetoric by suggesting that learning to write in a given genre is a formalistic, stylistic, superficial task, one that is necessary, of course, but not truly intellectual? If we manage to make this argument, then we allow ourselves to maintain some imagined domain over what “writing” really is: “the” writing process, critical thinking, reasoning, logic, argument, audience, expression, voice. All the familiar watchwords of our disciplinary legacy. It also conveniently allows us to assert our enduring essayistic practices as foundational to some generalized notion of writing. We can claim the lasting value of continuing to do what we do, as well as impose our field’s expertise over a larger “culture of writing.” It allows us to argue that the ways we teach writing, the curricular structures we have developed, the conceptions of argument, thinking, audience, rhetoric, and so on should inform writing practice and pedagogy elsewhere.

While clearly I am skeptical of such familiar maneuvers, we are still faced with the task of helping students develop as communicators. We still encounter students, faculty, and departments that are dissatisfied with writing instruction. Theirs and ours. And I certainly think there can be a role for rhetoricians in addressing these challenges. It’s not enough to simply pose problems or offer critiques. Bruno Latour describes an analogous situation in the work of sociologists.

Too often, social scientists—and especially critical sociologists—behave as if they were ‘critical’, ‘reflexive’, and ‘distanced’ enquirers meeting a ‘naive’, ‘uncritical’, and ‘unreflexive’ actor. But what they too often mean is that they translate the many expressions of their informants into their own vocabulary of social forces. The analyst simply repeats what the social world is already made of; actors simply ignore the fact that they have been mentioned in the analyst’s account. (Reassembling the Social 57)

In working with our colleagues across disciplines, rhetoricians have a similar tendency to translate their colleagues’ expressions into their own vocabulary. In doing so, we learn nothing except to confirm what we already know. Latour, of course, would offer us a method that involves following the trails of associations out of a given node in a network of writing actors and listening to the actors’ explanations for their activity, what makes them do what they do, rather than leaping suddenly to spectral social forces for explanations. This is a different notion of student and faculty need and desire. When we speak with students and faculty about their coursework, they will often describe what they need to do. And those needs are generally externally located, as in the familiar faculty explanation that they don’t have time to focus on writing in their classes because they need to do this or that.

However, being made to do something is as much about gaining agency as it is about constricting it. The student in a chemistry lab is constricted in her activities, limited by what she needs to do there, but she also gains the ability to run chemistry experiments and construct disciplinary knowledge. Writing technologies, genres, and practices are all actors in that system. In conjunction with these actors, the chemist and chemistry student are made to write lab reports. In conjunction with these nonhuman actors, student needs, desires, affects, and thoughts emerge. We might say the same for the student sitting on his darkened dorm room bed, surrounded by a half-dozen wi-fi-enabled devices pulsing unpredictably and generating dopamine loops urging him to seek out new information, to check Facebook and email; a sugary venti coffee drink sending caffeine to block adenosine receptors in the brain, keeping him awake; the ambient glow and hum of the laptop with its backlit keys, bevelled architecture, cheerful icons, and rigorously user-tested and focus-grouped interfaces; the pillow that still smells like home resting on otherwise institutional furniture; and his notes and results for the same chemistry experiment that he needs to turn into a report. In this context, we would expect that a familiarity with the genre supports a network of distributed cognition that allows him to write the report he needs to meet his objectives as opposed to writing something else. The genre not only organizes pre-existing thoughts, it also participates in thinking. Through his interaction with the lab report genre, the student is made to think in certain ways rather than others. Of course the genre is not all-powerful or over-determining. It is just one actor among many in a network.

There is the culture of writing, if indeed there is such a thing. In following such trails, we can uncover our degrees of freedom, those sites where we might put our networks together differently. Can we sit in a different room, work at a different time, drink more or less coffee? Does developing an awareness of the actors that participate in our thinking and composing shift our relationship to them? As rhetoricians participating in universities, do we view our objective as supporting students’ ability to be successful authors of specific genres? Maybe sometimes. Do we also establish a broader goal of investigating how and why composing happens and then sharing that knowledge with students and colleagues in a manner that might be of use to them? I would think so.

I’m going to end by going even further back in time. Coincidentally I was teaching our TA practicum students about the history of the process movement this week, and we were reading Maxine Hairston’s well-known essay “The Winds of Change” from 1982. There Hairston insists that

We have to try to understand what goes on during the internal act of writing and we have to intervene during the act of writing if we want to affect its outcome. We have to do the hard thing, examine the intangible process, rather than the easy thing, evaluate the tangible product. (84)

Things have changed some in the last 30 years. We may be less likely to think of writing as an “internal act.” And, as I have been arguing, the concept of writing itself has been attenuated to the very limits of its conceptual utility. And we have developed new and varied methods for studying these activities. However, the core tasks remain: understanding these compositional activities and developing means of intervening in them. This is what “building a culture of writing” means from my perspective: the slow ant-like task of following trails and constructing, brick-by-brick, actor-by-actor, networks of composition.


laptops, pedagogies, and assemblages of attention

24 September, 2014 - 08:53

This is a continuation of this conversation about laptops in classrooms. Clay Shirky, Nick Carr, Dave Parry, and Steve Krause all have recent posts on this issue (that list is almost strange enough to be a Latourian litany). As I said last time, this is the eternal September of the laptop policy. And as I mentioned in that last post, there is clearly a real issue with the disconnect between laptops (and mobile phones and other digital media/network devices) and the legacy practices of college curriculum, classrooms, and pedagogy: these two sets of things don’t seem to mix well. The primary complaint is that the devices distract students. Dave talks about how his students perform better in class discussions with their laptops closed. And Nick Carr makes a good observation:

Computers and software can be designed in many different ways, and the design decisions will always reflect the interests of the designers (or their employers). Beyond the laptops-or-no-laptops-debate lies a broader and more important discussion about how computer technology has come to be designed — and why.

In my view there are a couple of key issues at work here, and they all revolve around the way we understand and value thinking, participation, and attention. If we begin with the premise that thinking is not a purely internal activity, then we realize that the tools and environments in which we find ourselves shape our capacities for thought. Obviously there are also internal (as in beneath the skin) processes at work. I don’t think anyone would really deny this. However, we might commonly assume some intrinsic consciousness that might be enhanced or inhibited by external forces. A different view would assert that we only have consciousness in relation to others. In the first case, one might look at laptops and ask if they make us better or worse thinkers, if they affect our ability to participate or pay attention. And of course they do in many ways, negatively, perhaps, in the classroom, but also expansively (for good or bad) in terms of participating and paying attention to the web. However, that view would seem to presume that the pre-laptop state is a natural or default state and that whatever technologies we employ should be valued in relation to the capacities and terms of that default state. E.g., do laptops make us better versions of default students in a classroom? We can debate this question, and for faculty sitting right now in such a classroom, it is a question worth asking and answering. But it is not the question that interests me.

Carr proposes a different question in relation to the design of computer technology. To put the question in my terms, he is asking about the assemblage of attention that these devices are designed to produce. One answer is to say that these devices are flexible in that regard: obviously they can be shut off. Even then, the powered-down laptop still participates in an attentional-cognitive network; it just doesn’t do much for us. And when they are opened full throttle? Then not only do we have access to the public Internet and our various personal accounts, we are also subject to a wide range of push notifications. Even without the pusher there, the thought might prey on our minds that surely people are emailing, updating, tweeting, etc. And you’ve probably seen articles (like this one) about how addictive email can be. Without suggesting too sinister a motive, it’s unsurprising that companies design products that fuel our desires though not necessarily serve our best interests.

Looking into these questions of design and developing ways to intervene in the design process or otherwise build upon it are important directions to pursue. However they are also only part of the puzzle. We must similarly look at the motives behind classroom design, curriculum, pedagogy and other educational-institutional policies and designs.

Setting aside the immediate challenges facing faculty and students this month, we need to think more broadly and experimentally about how to design the assemblages of attention (and cognition) that will drive future learning. We have a fairly good idea of what the past looks like. Those spaces were designed to focus attention on the teacher and encourage individualized student activity (notetaking, silent reading, worksheets, tests, etc.). They were organized as a series of fairly short and discrete linear tasks: listen to a lecture, take notes, complete a worksheet, take a test, move to the next class. The classes themselves were/are designed to be silos, organized by discipline. This is very clear from middle school through college. This requires relatively short-term single-point attention (e.g. listen to a lecture for 20-30 minutes). For the most part, homework is similarly designed. It’s true that students can do homework and studying in groups, but since everyone has their own book and notes and everyone is ultimately responsible as an individual for demonstrating knowledge, the assemblage encourages individual activity. Over time, we build toward extending the period of single-point attention so that the graduate student or faculty member might spend hours focused on reading a single book or writing an article. (So that’s the story, except that I think the notion of an extended period of single-point attention is a fiction and, I would suggest, would be unproductive if it were true. But that’s for another time.) In any case, we define knowing and expertise as the cognitive effects of these activities: to know is to have engaged effectively in these activities; to be an expert is to have done so repeatedly in a single area of knowledge. We created reasonably efficient feedback loops between educational practices and workplace practices so that workplaces were organized around employees with disciplinary expertise and expanded capacities for single-point attention. And I don’t mean that only in terms of managerial/bureaucratic structures but also the physical spaces of offices and factories, the design of the work day, and so on.

I imagine the future will have a similar feedback loop between education and workplace. (BTW, I find it strange when colleagues complain about universities serving corporate interests, as if we haven’t been doing that for at least a century, as if our current curriculum and practices weren’t constructed in this way, as if that relationship wasn’t integral to what we do.) I don’t think workplaces have any better idea of what this future should look like, though I do think they are more volatile than universities and thus quicker to change, for good or ill. However, we should think about what knowing and expertise look like in a digital networked environment, what work looks like (academic or professional), and then what assemblages of attention we want to build to support those activities and outcomes.

Here’s a brief speculative comparison. I’ve never taught a large lecture course but let’s say I had an introductory class of 200 undergraduates in my field (rhetoric). Conventionally we would have an anthology of rhetorical texts (like this one) or some other kind of textbook (like this one, I guess). I would lecture, respond to student questions, and try to create some other opportunities for student interaction (like clickers maybe). Then we could have a Blackboard site for quizzes and discussion boards (and I could post my PowerPoint slides!). Then a mid-term and final. Maybe some short writing assignments. Maybe more if I had a TA to help me read student writing. In that classroom, laptops probably would be a distraction.

Now let’s imagine a different structure. Still 200 students, but let’s not call it a class. We aren’t going to measure students by having them demonstrate disciplinary knowledge on a test or in an essay. Instead, they are going to engage in rhetorical activities, using rhetorical concepts, methods, and practices to do something: persuade some group to take an action, inform a particular audience about a topic, do research into the rhetorical dimensions of some matter of concern. They will need to work collaboratively. They will need to integrate learning from other parts of their curriculum as well as other experiences. They will need to draw upon my expertise and work with me to define the parameters of their activities. This requires a different kind of assemblage of attention. We probably don’t need the lecture hall with all the seats facing the podium. I could still give lectures, but they would be far less useful as they would no longer tie so neatly into the working definition of what it means “to know.” On the other hand, it would become more important to figure out how to make productive use of those contemporary devices of distraction. Of course they could still distract, still have a negative impact. We would still need to learn how to use them, but now we would have built a structure that supported their use rather than continuing to use a structure designed to support legacy media technologies.

What kind of workforce are we imagining here? One that can work independently in small groups without panoptic supervision. One that works across disciplines and cultures in collaboration to integrate knowledge and experience from different perspectives. One that can use emerging technologies productively to find, evaluate, and manage information as well as communicate and produce knowledge. Something like that. Will every student become that? Of course not. Not every 20th-century student became the ideal “organization man” either. Nor do they need to, nor should they. But inasmuch as our legacy curriculum and assemblage structures pointed toward that organization man, we need to think about building new structures that point elsewhere.


searching for an assistant professor in the rhetoric of science and technology

18 September, 2014 - 13:03

In case you don’t have your eyes peeled to the MLA job list, you might not know that we are searching for an assistant professor in the rhetoric of science and technology (official job copy below). And guess what? I’m hoping we can find a fantastic person to join our faculty, so I’m going to throw in here some unofficial hard sell.

In our department and beyond there are a number of faculty and students doing interesting work in interdisciplinary areas that would be of real value to someone in this field: science studies, ecocriticism, disability studies, media study, and digital humanities all come to mind as possible connections. UB has a strong engineering school and several medical-professional schools (we’re in the process of building a new medical campus in downtown Buffalo), so there’s a lot going on here in the STEM fields.

Of course, when people think about Buffalo, the caricature is of a snow-bound, rust belt city. But Buffalo is a surprisingly international city, or maybe not so surprising since it sits on an international border. At UB, 17% of our students are international, a fact as evident in my department’s graduate seminars as it is on campus, where it’s not unusual to hear students speaking four or five different languages on my way to get coffee. Now I’m not going to compare UB and Buffalo to the cosmopolitan experiences of major US cities (or the cost of living), but compared to the other places where I have lived and worked as an academic, I really appreciate the combination of affordability, quality public school education for my kids, variety of restaurants and things to do, access to nature, etc.

Tenure-track Assistant Professor in rhetoric of science and technology with a PhD in English, rhetoric, technical communication or related field. Preferred secondary fields include but are not limited to environmental rhetoric, health communication, science writing, digital literacy or online and mediated pedagogy.

Application deadline: November 1, 2014. Salary, benefits, and privileges competitive with other Research-1 universities. Faculty are expected to teach at the graduate & undergraduate levels, maintain an active research program, mentor graduate students, and provide service to the department and/or University as required. Submit letter of application, CV, and contact information for three recommenders to http://www.ubjobs.buffalo.edu. For information, contact Graham Hammill, Chair of the English Department (eng-jobsearch@buffalo.edu). University at Buffalo is an affirmative action/equal opportunity employer and in keeping with our commitment, encourages women, minorities, persons with disabilities and veterans to apply.


prestige education in the network age

11 September, 2014 - 10:09

Perhaps you have seen Steven Pinker’s response to William Deresiewicz’s “Don’t Send Your Kid to the Ivy League,” in which they variously decry and defend the Ivy League. I can’t really speak to the conditions of an Ivy League education, nor do they especially interest me. My daughter is a high school junior this year and an exceptional student, good enough to compete for an Ivy League admission. Her friends are similarly talented. I see in them many of the qualities that concern Deresiewicz and Pinker both. It’s not the kids’ fault at all. They are being driven to become caricatures of the ideal college applicant. What does it mean to be the kid with the high test scores, the AP classes, the high GPA, high school sports, science olympiad, club officer roles, conspicuous volunteerism, and so on? I think it’s a reasonable question to ask. I think it’s obvious that students and parents pursue such institutions for the prestige associated with the name, which does seem to translate into better job opportunities. Who knows, maybe the people with Ivy League degrees are better humans than the rest of us… probably not though.

How far does that prestige effect extend beyond the Ivies? To the elite private liberal arts colleges, certainly. To a handful of other elite private and public universities (MIT, Stanford, Berkeley, e.g.), no doubt. For example, consider this list of AAU universities. Do they all get to be “prestigious”? And if so, what does that mean? Does it mean that large numbers of students apply from around the country and the world to attend your university? Not necessarily. I’m not saying these aren’t good schools. They have very good reputations. They are obviously all “highly ranked” by some metrics, which is why they are AAU schools. My point is simply in terms of the marketplace of student admissions. What does prestige actually get you, and who cares about it? I don’t think this is an idle question, particularly for the humanities, which are a prestige-driven reputation economy. The typical humanities departmental strategy is to hire for, support, and promote faculty prestige/reputation, and the metrics are driven by this valuation to some extent as well. However, how many undergraduate students at large public research universities, for instance, are choosing majors based upon national department reputation? Put differently, what is the reputation of reputation?

I want to offer an odd juxtaposition to this. Also recently in the news is Apple’s latest iPhone. See, for example, Wired magazine’s speculation on the impact of the new iPhone on filmmaking. I don’t want to make this about Apple, but more generally about technological churn. We’ve already had widespread consumer filmmaking for at least as long as we’ve had YouTube, but as the phone technology improves the possibilities expand. Now perhaps I should use this juxtaposition as an opportunity to talk about digital literacy, but that’s not exactly my point. Instead, the point I want to make is that technological churn shifts the ways in which reputation is produced and maintained. While I can share in the general academic skepticism about Apple ads and their suggestion of how their technologies enable people to do cool things, at the same time, in a broader sense, human capacities are being altered through their interaction with technologies.

It’s completely obvious in a humanities department, where reputations are built on publishing monographs, that reputation is driven by technology. The principle is not foreign to us, even if we are generally blind to the ways in which our disciplines are tied to technologies. Just like the prestige of getting into an Ivy, reputation hinges on access. This creates a series of feedback loops. Elite humanities departments can support their faculty in monograph production, so they produce more books, so the departments remain elite. It used to be that filmmaking was expensive and technically complex, so only professionals could really make and distribute films. It’s still hard to make a good film, but the barriers are otherwise lower. So I suppose this could be an argument for digital scholarship and/or changing the ways in which we work as humanities scholars, but I just want to focus on reputation/prestige.

If reputation is about our participation in a technological network (of books, e.g.) and we are building an entire disciplinary and departmental infrastructure and strategy to facilitate that participation, then how is that really different from Deresiewicz’s zombie undergrads with their endless activities? Aren’t we both just building reputations within some arbitrary network? If we are spending hours upon years on dissertation research and monograph writing to get jobs and tenure and improve department reputation, so that we can “get into” or stay in categories like AAU, then we can point to the monetary rewards that accrue, just like those aspiring Harvard students. However, just like those students, we are investing a lot of effort and money in those goals as well. If you get into (and out of) your Ivy, then you can probably feel confident that your investment is going to work out. For those of us in the humanities, the future is less certain because the sustainability of the reputation-technology network we employ is more tenuous.

So what will our future reputation network look like? Obviously it won’t be iPhone filmmaking, but it won’t be monographs either. If the 20th-century English department was born from Victorian literary culture, industrial printing, electrification, and the increased demand for a print-literate workforce, then what analogous things might we say of the 21st-century version of the discipline? If 20th-century scholarly labor and reputation in turn hinged on our ability to study and engage with these technologies, then what would be the 21st-century analogy?

The problem that Pinker and Deresiewicz have is that the criteria upon which applicant reputations are built make no sense, which in their view does harm in the end to both the students and the institution. We could say the same thing about the humanities, where the cause of the reputational disconnect seems fairly obvious. What is less obvious is how one goes about shifting those terms.


why does the web need to be “social”?

5 September, 2014 - 08:33

When I was starting out in grad school, I saw Timothy Leary speak about the Internet as “electronic LSD.” It was the early nineties, pre-Netscape if memory serves. The ideas he was offering up were not that different from the argument he made in his essay “The Cyberpunk: The Individual as Reality Pilot,” which was anthologized in Larry McCaffery’s Storming the Reality Studio, that well-known anthology capturing the zeitgeist of 80s cyberpunk. I am not here today to advocate or express nostalgia for this moment. In fact this essay would probably be familiar to you for its romantic, libertarian/anarchist, masculinist, Eurocentric, techno-optimistic sentiments, which seem to strike a familiar but ironic tone in the context of the dystopian worlds cyberpunk literature portrays. For example, Leary writes:

The CYBERPUNKS are the inventors, innovative writers, techno-frontier artists, risk-taking film directors, icon-shifting composers, expressionist artists, free-agent scientists, innovative show-biz entrepreneurs, techno-creatives, computer visionaries, elegant hackers, bit-blitting Prolog adepts, special-effectives, video wizards, neurological test pilots, media-explorers—all of those who boldly package and steer ideas out there where no thoughts have gone before.

CYBERPUNKS are sometimes authorized by the governors. They can, with sweet cynicism and patient humor, interface their singularity with institutions. They often work within “the governing systems” on a temporary basis.

As often as not, they are unauthorized.

Perhaps, in the age of cultural studies (even though Leary does cite Foucault), we might attempt to recoup such views through Haraway’s cyborg manifesto or Deleuze and Guattari’s nomads and rhizomes or maybe even Hakim Bey’s temporary autonomous zones. It’s easy enough to say that these fantasies built the web we have today, or more generously maybe that the contemporary web is what happens after state capture and reterritorialization. So let’s not go there. When I think of the social web (Facebook, Twitter, etc.) I know what I think of: family events, witty remarks, linking something funny or heartwarming, academic politics, current events. The social web manages somehow to demonstrate that the self-aggrandizing “greed is good” ethos of the 80s and the “sharing is caring” mantra of 90s children’s programming are compatible. It’s about as far from the vertiginous romanticism of cyberpunk as one could get. It’s more like a toned-down, less-interesting, pathetic version of Snow Crash or maybe one of Sterling’s novels.

So while I’m not interested in advocating the seemingly individualistic “anti-social” cyberpunk, I am unhappy with the word social.

Maybe it’s the Latour in me that has trained me to raise a skeptical eyebrow at the word “social.” Latour rails against the common view that there is some kind of social stuff. As he makes clear at the start of Reassembling the Social:

The argument of this book can be stated very simply: when social scientists add the adjective ‘social’ to some phenomenon, they designate a stabilized state of affairs, a bundle of ties that, later, may be mobilized to account for some other phenomenon. There is nothing wrong with this use of the word as long as it designates what is already assembled together, without making any superfluous assumption about the nature of what is assembled. Problems arise, however, when ‘social’ begins to mean a type of material, as if the adjective was roughly comparable to other terms like ‘wooden’, ‘steely’, ‘biological’, ‘economical’, ‘mental’, ‘organizational’, or ‘linguistic’. At that point, the meaning of the word breaks down since it now designates two entirely different things: first, a movement during a process of assembling; and second, a specific type of ingredient that is supposed to differ from other materials.

Not surprisingly, people battle for the claim to have invented the term “social media,” though it’s mostly corporate and web entrepreneur types.  But what does the adjective social mean here? I’m thinking it’s supposed to mean media technologies that promote socializing, as in many-to-many rather than one-to-many. Perhaps in a technical sense it does. But the web always did that. If we think of Latour’s view of the social in his conception of a sociology of associations, then I suppose we’d begin by thinking of social media applications as actors that produce new associations: new communities and new genres/discourses. I guess that’s a fairly basic starting point that tells us almost nothing; we are, as always, instructed to follow the actors and their trails. In the end though, through social media we are “made to do” things. Not compelled exactly. It’s just that Facebook or Twitter or whatever activates particular capacities within us over others. Those are not “social” capacities. They are not made of social stuff, nor do they do social things as opposed to other capacities that would be non-social.

Leary’s cyberpunks built much of the underlying technology of the social web, perhaps with “sweet cynicism and patient humor.” And then I suppose they persist in the niches of hacker culture, while the typical user becomes immersed in a new “social.” Of course a platform like Facebook or Twitter with its millions of users is diverse in the experiences it might provide on an individual basis. At the same time, an investigation like Manovich’s Selfiecity offers some insight into how technologies generate commonalities. It’s not really my point to say that we are or aren’t being brainwashed by social media. It’s hard to get outside the binary of the romantic narrative that Leary tells, or that even gets read into something like Deleuze and Guattari’s articulation of nomads and the state.

My point in the end is more pragmatic and less interesting on some romantic, visionary scale. Can we stop calling this media “social”? What else could we call it? And if we called it something different, would we gain a better understanding of it? One that wouldn’t lead us nostalgically back toward Leary, or send us running in fear of a technopoly, like Benjamin’s angel of history, or mind-numbingly toward a corporate mall culture, or whatever other cheesy narrative we construct when we imagine that technologies are social.


the eternal September of the no laptop policy

31 August, 2014 - 08:39

It’s the time of year when academics like to talk about their syllabi, and inevitably the no-laptop policy arises. It is evidence of a recurring theme: we do not know how to live, let alone learn, in a digital networked environment. It’s hard to blame the faculty, though it’s difficult to figure out who else might be responsible. The classes we teach are in the same rooms and buildings, follow the same schedules, and are essentially understood in the same terms as they were 30 years ago. Yes, there’s wi-fi now, as well as 4G LTE signals, permeating the classrooms, and yes, almost everyone has some device that links to those signals. (As I’ve mentioned in prior posts, at UB anyway, students bring an average of 5 wi-fi enabled devices to campus.) However, students don’t know how to use these devices in the classroom to support reaching the learning objectives. And the learning objectives remain based in a pre-digital world, as if what and how we are learning hasn’t been transformed by our new conditions, so faculty don’t know how to use these devices to define and achieve learning objectives either.

The Chronicle (of course) published a piece recently in which one professor, Anne Curzan, offers her explanation of her own no-laptop policy. I appreciate the thorough explanation she offers to her students in her syllabus, including citing research on multitasking, effects on test performance, and so on. It’s a very old story, right? New technology affects our ability to think, nay, remember things. Just like the Phaedrus. Sure, that’s just some old myth about Thoth, while with laptops we’re talking scientific research. Sure, except that people living in an oral society do have memory capacities that are different from ours. What do we imagine that distributed cognition means? It means that we think in conjunction with tools. It means we think differently in the context of digital networks. And that’s scary and difficult. Obviously, because these are the recurring themes in our discussion of educational technologies.

Curzan and the many, many other professors with similar policies have educational objectives and practices that have no place for emerging media. It makes perfect sense that if the purpose of coming to a class is to take notes on a lecture then a laptop is of limited utility. Yes, you can take notes on a laptop but that’s like driving your car at the speed of a horse-drawn carriage. If the purpose of class is to engage in class-wide discussion or group work then maybe those devices have a role to play but that depends on how the professor shapes the activity. For example, a typical pre-wifi class/group activity I did was to ask students to look at a particular passage in the reading, figure out what it means, and discuss what they think about it. Today, depending on the particular reading, there’s probably a good deal of information online about it and that information needs to be found, understood, and evaluated. It’s also possible to be in real time conversation with people outside the classroom, as we know. So that activity isn’t the same as it was 10-15 years ago. It’s possible that the laptops could distract from the activity, but distraction is always a problem with a group activity.

Can we imagine a liberal arts degree where one of the goals is to graduate students who can work collaboratively with information/media technologies and networks? Of course we can. It’s called English. It’s just that the information/media technologies and networks take the form of books and other print media. Is a book a distraction? Of course. Ever try to talk to someone who is reading a book? What would you think of a student sitting in a classroom reading a magazine, doodling in a notebook, or doing a crossword puzzle? However, we insist that students bring their books to class and strongly encourage them to write. We spend years teaching them how to use these technologies in college, and that’s following even more years in K-12. We teach them highly specialized ways of reading and writing so that they are able to do this. But we complain when they walk in, wholly untrained, and fail to make productive use of their laptops? When we give them no teaching on the subject? And we offer little or no opportunity for those laptops to be productive because our pedagogy hinges on pretending they don’t exist?

Certainly it’s not as easy as just substituting one medium for another. (Not that such a substitution is in any way easy, and in fact, the near impossibility of making that substitution will probably doom a number of humanities disciplines, but that’s a subject for another post.) To make it happen, the entire activity network around the curriculum needs to be rethought, beginning with the realization that the network we have is built in conjunction with the legacy media we are seeking to change. We need to change physical structures, policies, curriculum, outcomes, pedagogy…

It is easier to just ban laptops.


academic freedom, social media, and the university without conditions

29 August, 2014 - 10:15

Let’s call this a “Law and Order” style post, as in “inspired by real events.” This is also, I believe, a classic example of a Latourian “matter of concern.”

Without suggesting in any way that the principles of academic freedom ought to be modified or interpreted differently, it should be clear that the material conditions of communication have completely changed since the last time (in 1970) the AAUP “interpreted” the 1940 Statement of Principles of Academic Freedom and Tenure. Though there is a Statement on Professional Ethics that was last revised in 2009, it seems clear to me that it still fails to account for our changing conditions (maybe there wasn’t a critical mass of academics on Twitter yet). The key paragraph in that document is probably:

As members of their community, professors have the rights and obligations of other citizens. Professors measure the urgency of these obligations in the light of their responsibilities to their subject, to their students, to their profession, and to their institution. When they speak or act as private persons, they avoid creating the impression of speaking or acting for their college or university. As citizens engaged in a profession that depends upon freedom for its health and integrity, professors have a particular obligation to promote conditions of free inquiry and to further public understanding of academic freedom.

For me, there are two interesting points here. First, that professors have “rights and obligations” that are no different from those of other citizens. And second, that when they speak or act as private persons they “avoid creating the impression of speaking for their college or university.” Let’s deal with the second point first. Exactly how do you avoid that impression in social media? I suppose you could if you in no way identify yourself as a professor on your profile page and you can’t be googled and identified as such. What is due diligence in terms of “avoiding” here? It’s not like one can invoke the online version of Robert’s Rules to insist that an audience not associate one’s speech with one’s institution. And as for having the same rights and obligations as other citizens, that’s hardly much solace. Do we imagine that high-profile professionals in corporate America are not subject to personal conduct policies? We know that people get in trouble, or don’t get hired for jobs, because of what they post on Facebook and such. We teach our students about this all the time. So the notion that we have the same rights and obligations as any private person is something to think about.

So one argument says that professors should be able to write/say anything that falls within the protections of the First Amendment and not be subject to any professional or institutional consequences. Of course this is not practically possible to ensure because everything we say has consequences, often unintentional ones. This is an inherent risk of communication. I could be writing something right now that angers some reader who will remember and some day be disinclined to publish something I’ve written or promote me or whatever. No one can control that. That’s always been the case. There have always been feuds among faculty in departments, where one professor always opposes anything the other one suggests. Only now, with social media, we have this business on a larger scale. One can say that the acts of an institution are a different matter, and that’s true, but those acts are always actually taken by individual people sitting on a committee or in an office somewhere. That angry letter to the editor you may have written 10 years ago, instead of being buried in some archive where no one can find it, now comes up on the first page of a Google search for your name. And really anyone in America can find it and read it at any time, not just the couple thousand local folks who might have turned to page 53 of the newspaper one night a decade ago. We all know this already. So how can we pretend that our circumstances have not changed?

I know that we want to imagine that all these things are separate, but they were never inherently separate. As Latour would suggest, there were many hybrid technologies at work beneath that old 20th-century system constructing order, like Maxwell’s demon. But those old systems no longer function. The question then becomes what system should we build? In answer to this question, one often hears Derrida’s concept of the “university without conditions” cited:

[t]his university demands and ought to be granted in principle, besides what is called academic freedom, an unconditional freedom to question and to assert, or even, going still further, the right to say publicly all that is required by research, knowledge, and thought concerning the truth.

I would note that the key point here is that this is the university without conditions and not the professor with conditions. Professorial freedom has always been constrained by editors, reviewers, granting agencies, etc. We know what the university of print looked like and how it aspired to, though obviously did not reach, Derrida’s idealized institution. Whatever the digital university will look like, whether it is better or worse, it will clearly be different, because it already is.

Categories: Author Blogs

pedagogy, computers and writing, and the digital humanities #cwdhped

17 July, 2014 - 10:44

Over the past couple days there’s been a Twitter conversation (#cwdhped) and an evolving open Google doc that explores the idea of some summit or face-to-face (FTF) discussion among scholars in the digital humanities and those in computers and writing on shared interests in pedagogy. For those who don’t know, “computers and writing” is a subfield of rhetoric and composition that focuses on technological developments. I’ll reserve my comments about the weirdness of such a subfield in 2014 for another day. Let’s just say that it exists, has existed since the early 80s, and that there’s a lot of research there on pedagogical issues. Digital humanities, on the other hand, is an amorphous collection of methods and subjects across many disciplines, potentially including computers and writing and possibly including people and disciplines that are not strictly in the humanities (e.g. education or communications or the arts). So, for example, when I think of the very small DH community on my campus, I’m meeting with people in Linguistics, Classics, Theater/Dance, Anthropology, Education, Media Study, Architecture… Some of these people are teaching students how to use particular media creation tools. Some are teaching programming. Some are doing data analysis. Some are teaching pedagogy. Most of the digital-type instruction is happening at the graduate level. And none of it is happening in what we’d commonly think of as the core humanities departments (i.e. the ones with the largest faculty, grad programs, and majors). Of course that’s just one campus, one example, which raises the following questions:

  • What % of 4-year US colleges have a specific digital outcome for their required composition curriculum?
  • What % of those campuses have a self-described “digital humanities” undergraduate curriculum that extends beyond a single course?

I would guess there are ~1000 faculty loosely associated with computers and writing, maybe fewer. I’m sure they are doing digital stuff individually in their classrooms, but is there something programmatic going on on their respective campuses? There are 100s(?) of professional writing majors now, most of which have some digital component, but sometimes it is still just one class. And if we stick to the MLA end of the DH world, how many English and/or language departments have a specific DH curriculum? How many have any kind of DH or digital literacy outcomes for their majors?

This leads me to the following question/provocation: setting aside composition courses, how many different courses does the average US English department offer each year with an established digital learning outcome or digital topic in its formal catalog course description? I think that if I set the over/under at 2.5 you’d be crazy to take the over.

My point is that when we are talking about DH pedagogy, we are talking about something that barely exists in a formal way. If you want to think about 1000s of professors and TAs doing “something digital” in their courses here and there, then yes, it’s all over the place. And yes, we are using Blackboard, teaching online, and so on. And maybe we could come up with a list of 25 universities that are delivering a ton of DH content, maybe the 100s of institutions with professional writing majors are offering an above-average amount of digital content, and maybe the English departments delivering secondary education certification are providing the required digital literacy content for those degrees, but put all that into the context of 3000 4-year colleges and what do you see?

I think the same is true on the graduate level. We can point to some programs and to individual faculty, but nationally, how many doctoral programs have specific expectations in relation to DH or digital literacy for their graduates? I would bet that even at the biggest DH universities in the nation, you can get a PhD in English without having any more digital literacy than a BA at the same school. Rhet/Comp has a higher expectation than literary studies, but only because of the pedagogical focus and the expectation that one can teach with technology. This doesn’t mean that students can’t choose to pursue DH expertise at many institutions, at either the undergrad or grad level. It’s just not integral to the curriculum.

So my first questions to the MLA end of the DH community (just to start there) are:

  • What role do you see for yourselves in the undergraduate curriculum?
  • Is DH only a specialized, elective topic or should there be some digital outcome for an English major?
  • Should there be some digital component of a humanities general education?
  • What role should DH play in institutional goals around digital literacy?

The same questions apply at the graduate level. Is DH only an area of specialization or does it also represent a body of knowledge that every PhD student should know on some level?

If a humanities education should prepare students to research, understand and communicate with diverse cultures and peoples, then how that preparation is not integrally and fundamentally digital is beyond me. We really don’t need to say “digital literacy” anymore, because there is no postsecondary literacy that is not digital. Why is it that virtually every English major is required to take an entire course on Shakespeare but hardly any are required to have a disciplinary understanding of the media culture in which they are actually living and participating? (That’s a rhetorical question; we all know why.)

From my viewpoint, that’s the conversation to have. Tell me what it is that we want to achieve and what kind of curricular structures you want to develop to achieve those goals. The pedagogic piece is really quite simple. How do you teach those courses? You hire people who have the expertise. Sure there’s some research there, best practices, and various nuances, but that’s about optimizing a practice that right now barely exists.

 

Categories: Author Blogs

rhetoric’s default mode

8 July, 2014 - 11:03

Following on my previous post, a continuation of a discussion of “neurorhetoric.” Generally speaking, rhetoricians, like other humanists, approach science with a high degree of skepticism, especially a science that might potentially explain away our disciplinary territory. As Jordynn Jack and others have pointed out, there is a strong interest in the prefix neuro-, and those in our field might benefit from looking bi-directionally at both the rhetoric of neuroscience (how neuroscience operates rhetorically as a field) and the neuroscience of rhetoric (what neuroscience can tell us about rhetorical practices). In her article with Gregory Applebaum (a neuroscientist), Jack points to the broader lessons from the rhetoric of science in approaching neuroscientific research, particularly to resist engaging in “neurorealism, neuroessentialism, or neuropolicy,” which are all variants of interpreting research as making certain kinds of truth claims. Similarly, I tend to turn toward Latour here to think about the constructedness of science.

With that in mind, I was following my nose from my last post’s discussion of an article in Science to this article, “Rest Is Not Idleness: Implications of the Brain’s Default Mode for Human Development and Education,” by Mary Helen Immordino-Yang, Joanna A. Christodoulou, and Vanessa Singh. The “default mode” describes a relatively new theory of the brain/mind that identifies two general networks. One is “task positive,” a goal-oriented, outward-directed kind of thinking; the other is “task negative,” which is inward-directed. The latter is the default mode and is concerned with “self-awareness and reflection, recalling personal memories, imagining the future, feeling emotions about the psychological impact of social situations on other people, and constructing moral judgments.” As they continue:

Studies examining individual differences in the brain’s DM connectivity, essentially measures of how coherently the areas of the network coordinate during rest and decouple during outward attention, find that people with stronger DM connectivity at rest score higher on measures of cognitive abilities like divergent thinking, reading comprehension, and memory (Li et al., 2009; Song et al., 2009; van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009; Wig et al., 2008). Taken together, these findings lead to a new neuroscientific conception of the brain’s functioning “at rest,” namely, that neural processing during lapses in outward attention may be related to self and social processing and to thought that transcends concrete, semantic representations and that the brain’s efficient monitoring and control of task-directed and non-task-directed states (or of outwardly and inwardly directed attention) may underlie important dimensions of psychological functioning. These findings also suggest the possibility that inadequate opportunity for children to play and for adolescents to quietly reflect and to daydream may have negative consequences—both for social-emotional well-being and for their ability to attend well to tasks.

As I’ll discuss in a moment, the article goes on to make some interesting claims and recommendations about social media, but let’s just deal with this. Let’s call it unsurprising to discover that the brain is doing different things when one is looking outward and focused on a specific task than when one is daydreaming, speculating, fantasizing, remembering or otherwise being introspective. How “real” those two networks are versus their being products of our perspective on our brains I cannot say. Certainly these are notions that reflect our mundane experience with thinking. I am certainly not going to argue against the wisdom of having down time, taking opportunities for reflection, or developing a meditative practice for children, teens, or adults. I also don’t need a multimillion dollar machine-that-goes-bing to know that.

Here is what might be interesting though as one investigates the ontological dimensions of a rhetoric not restricted to symbolic behavior. Without falling into neuroessentialism, it is not radical, I think, to imagine that rhetorical strategies, such as audience awareness, develop from the way we are able to think and conceive of others, a task attributed here to the “default mode.” It is only speculation, as far as I am concerned, but the capacity to conceive of a self is dependent on the capacity to conceive of a non-self. Following from that, when does the ability to imagine that others have similar capacities, that there are other “selves” out there, develop? Prior to symbolic behavior? In concert with symbolic behavior? Following symbolic behavior? Who knows? I do, however, think that such neurorhetorical work opens a space for the investigation of a naturalcultural, material, nonsymbolic rhetoric.

That said, it certainly does not resolve such matters. And the discussion of social media in this article is an excellent example of this. To be fair, they conclude that “In the end, the question will not be as much about what the technology does to people as it will be about how best to use the technology in a responsible, beneficial way that promotes rather than hinders social development.” Thanks for that insight. Indeed they do admonish us that “the preliminary findings described here should not be taken as de facto evidence that access to technology is necessarily bad for development or weakens morality.” Of course the only reason that such caveats must appear in the article is that much of what they discuss suggests exactly the opposite of these backpedaling sentences: that “if youths are habitually pulled into the outside world by distracting media snippets, or if their primary mode of socially interacting is via brief, digitally transmitted communications, they may be systematically undermining opportunities to reflect on the moral, social, emotional, and longer term implications of social situations and personal values.” How do they get to this implication? Basically by arguing that effective use of the default mode is necessary for moral behavior and that social media interferes with entering this default mode through its continual demand for attention.

I’ll just toss out a different hypothesis for you, one that doesn’t have to fall into the trap of “technology makes us more or less moral, stupid, etc.” or retreat to some version of the “guns don’t kill people; people kill people” commonplace. Here’s my premise: we don’t know how to live in a digitally-mediated, networked world. It’s a struggle. We are trying, unsuccessfully, to import paradigms from an industrial, print culture about what life should be like (and to be fair, those are the only paradigms we have to work with). Addressing this struggle is not simply a matter of some rational process of using technology in a beneficial way. It’s a more recursive and mutative process in which the notion of benefit shifts as well. It’s unlikely that we will evolve out of our need for “down time” in the near future or develop some sci-fi wetware implants to do the job for us. So we will need to understand the ontological basis for rhetorical action, in the brain and elsewhere. But we also need to recognize that what constitutes “moral” behavior is a moving target. What are our moral obligations to our Facebook friends or Twitter followers? How do they intersect with and alter our responsibilities to family or neighbors or other citizens? These are all concepts that we learned through rhetorical activity, concepts that shift with rhetorical activity. And though the authors of this article are careful to hedge their claims, it is also clear that they want to raise some concern about social media that rests upon a certain faith about how we should behave, a faith that they seek to confirm through science.

In the end, I am interested in their argument and largely inclined to share their valuing of down time and reflection. I worry about the time my son spends staring at his iPhone. Not because I think it’s making him a bad person; it just seems like a diminished life experience to me. I’m also interested in this idea of the default mode. However, I am inclined to be a little wary of these claims regarding social media. I am sure these technologies are shaping our cognitive experience, and I am sure that we struggle with these digital shifts, both individually and collectively. But I’d like to avoid falling into these rhetorical commonplaces about emerging media and morality or stupidity.

Categories: Author Blogs

it hurts when I think

6 July, 2014 - 07:43

Perhaps you have seen this recent Science article (the paywalled article itself or a Guardian piece on it). If you haven’t, it is a psychological study in which participants are left alone with their thoughts for 6-15 minutes and then asked questions about the experience. The conclusion? Generally people do not enjoy being alone with their thoughts. The article got attention, though, because the researchers gave participants the option of shocking themselves, and a good number of them, especially men, chose to do so. As Wilson et al. note, “what is striking is that simply being alone with their own thoughts for 15 min was apparently so aversive that it drove many participants to self-administer an electric shock that they had earlier said they would pay to avoid.”

I will not pretend expertise, but having engaged in zazen meditation over the years, it doesn’t really surprise me that people don’t enjoy being alone with their thoughts. In this kind of meditation the objective is not to not think, which isn’t really possible, but rather to not hold onto thoughts. In my experience (and I imagine yours), the unpleasantness of thinking comes from holding on to thoughts (or perhaps their holding on to you). As I understand it, this kind of mindfulness training is fundamentally about letting go. The researchers arrived at a similar conclusion, writing:

There is no doubt that people are sometimes absorbed by interesting ideas, exciting fantasies, and pleasant daydreams. Research has shown that minds are difficult to control, however, and it may be particularly hard to steer our thoughts in pleasant directions and keep them there. This may be why many people seek to gain better control of their thoughts with meditation and other techniques, with clear benefits. Without such training, people prefer doing to thinking, even if what they are doing is so unpleasant that they would normally pay to avoid it. The untutored mind does not like to be alone with itself.

One might argue that the mind is never “alone with itself.” There’s only more or less stimulation. In this study, the participant is sitting on a chair, for instance. One might mention air or gravity, but language is the key outsider from my perspective. My inclination would not be to characterize the participants’ minds as untrained or untutored but, to the contrary, as specifically trained to “prefer doing to thinking,” where “thinking” is narrowly defined as mental activity detached from any apparent stimulation/sensation or a particular immediate objective.

In the disciplinary terms of cognitive science and psychology, what we are talking about here is the brain’s “default network,” which is sometimes described as the brain idling or as mind-wandering but is also suggested as the means by which the brain considers the past and future or imagines other people’s mental states. It is, perhaps, our internal self-reflection: the internal mental state that we imagine others similarly have. And really what this study is suggesting is that this internal world is generally not all that pleasant. Perhaps it’s a good thing that navel-gazing doesn’t feel that good. Even though we value self-reflection and mindfulness, we wouldn’t want to find ourselves drawn inward as toward a delicious treat.

An article like this attracts attention in part because of the details of participants shocking themselves but also because of our increasing moralizing over media and attention. It feeds our supposition that we have become so dependent on media stimulation that we are losing ourselves. Actually, I don’t think the article is making any kind of cultural-historical argument. There are some cultural assumptions here, specifically that those who are tutored to be alone with their thoughts would get different results, but there isn’t a value judgment suggesting that there is something wrong with not enjoying this experience. We just bring that morality to the findings.

Whether we are talking about deep breathing exercises, some more developed meditative practice, language, or an iPhone, these are all technologies. Even when we are in that default mental mode, we are still in a hybridized, nature/culture, technological, distributed network of thinking. The condition of being “alone” is relative, not absolute.

 

Categories: Author Blogs

when the future isn’t like the past

26 June, 2014 - 09:14

A group of scholars responds to MLA’s proposal regarding doctoral education in Inside Higher Ed, another group proposes to replace MLA’s executive director with a triumvirate who will focus on the problems of adjunctification, and on Huffington Post a university president writes in defense of a liberal arts education: these are all different slices of a larger issue. On this blog, there are a few recurring topics:

  • emerging digital media and their aesthetic, rhetorical, and cultural effects;
  • teaching first-year composition;
  • practices in scholarly communication;
  • technologies and higher education teaching;
  • the digital humanities and its impact on the humanities at large;
  • the academic job market, including the issue of part-time labor;
  • doctoral education in English Studies;
  • undergraduate curriculum, including both general education and English majors.

There’s also a fair amount of “theory” talk, though, at least in my mind, it’s always about developing conceptual tools for investigating one or more of these topics. So perhaps it is not surprising that from my perspective these things are all part of a common situation, not one that is caused by technological change in some deterministic way, but one in which the development of digital media and information technologies has played a significant role. And obviously it’s not just about technology, but when we remark on the changing nature of work in the global economy, the resulting growing demand for postsecondary education, the shift in government support and public perception of higher education, and the impacts of these on academia, it’s clear that technological change has played its role there as well. In other words, the challenges we face today were not necessary and the future has not already been written, but there was and is no chance that the future will be like the past.

And this is where I see the biggest contradictions in our efforts to address these problems, contradictions which are rehearsed again in the pieces referenced above. Who can doubt that the way we approach doctoral education, university hiring practices in relation to adjuncts, and our valuing of a liberal arts curriculum are all tied together? The obvious answer is for there to be greater public investment in higher education. Maybe states should think about incarcerating fewer citizens and educating more of them. Maybe the federal government doesn’t need more aircraft carriers than the rest of the world combined. Maybe we need to close some corporate tax loopholes. Maybe.  Maybe. But even if there were more money flowing into the system, would that mean that things would stay as they are/were?

In his Huffington Post piece, Michael Roth, president of Wesleyan, points to a tradition in American higher education dating back to Franklin and Jefferson that emphasizes the value of a liberal education for lifelong learning over specific vocational training, as he concludes:

Since the founding of this country, education has been closely tied to individual freedom, and to the ability to think for oneself and to contribute to society by unleashing one’s creative potential. The pace of change has never been faster, and the ability to shape change and seek opportunity has never been more valuable than it is today. If we want to push back against inequality and enhance the vitality of our culture and economy, we need to support greater access to a broad, pragmatic liberal education.

Ok, but what should that “broad, pragmatic liberal education” look like? Does this ability to “shape change and seek opportunity” also apply to higher education itself? The “10 Humanities Scholars” writing in response to MLA’s proposal object to the suggestion that graduate education should be different and instead contend, “As long as departments continue to be structured by literary-historical fields and tenure continues to be tied to monographs, a non-traditional dissertation seems likely to do a great disservice to students on the job market and the tenure track.” That’s my emphasis. In short, as long as things remain the same, they should remain the same. (I should note, btw, that with possibly a few exceptions at elite private liberal arts colleges, tenure is only tied to monographs at research universities, which make up less than 10% of American universities. So that claim is not true and has never been true.) But that’s just a side note.

Here’s the point. We want students to receive a liberal arts education in that most medieval of senses: the skills and knowledge needed to succeed as a free individual. And we want to deliver that education without exploitative employment practices. But these movements also want to hold on to the curricular and disciplinary structures of the 20th century. And in the end, the latter are valued over the former. And while the MLA report is obviously focused on MLA fields, this issue extends beyond those departments.

If the solution to our challenges includes changing the curricular and disciplinary paradigms of the arts and humanities, are we still committed to finding that solution? Or are we more inclined to stay on this ride until it ends?

What is this future like? One where literary-historical fields are a minor part of the humanities, where the focus turns to digital media and the contemporary global context, where the curriculum focuses on the soft skills of communicating, collaborating, and research rather than traditional content, where faculty research efforts, including the genres of scholarly communication, reflect this shift in emphasis, where the elimination of adjunct positions changes both the curriculum offered and the technological means of its delivery, and where the focus on graduate programs that train future professors is greatly diminished. In short, what if the solution to our problems is to create a future where the job of the humanities professor looks nothing like what it is today?

I’m not saying it has to be that way. My point is only that our conversations about finding solutions to these problems always seem predicated on returning to some imaginary historical moment rather than really trying to shape a future. Didn’t we all receive that “pragmatic liberal education” of which Roth speaks? If we can’t use it to find such solutions, then maybe it isn’t worth saving in the first place.

 

 

Categories: Author Blogs

language, programming, and procedure

23 June, 2014 - 10:52

Following on my last post, by coincidence I picked up a copy of Max Barry’s Lexicon, which is in the sci-fi supernatural genre, light reading but well-reviewed. Its basic premise is that language triggers neurochemical responses in the brain and that there are underlying operating languages that can compel and program humans. The result is something that is part spellcraft, part cognitive science and sociolinguistics, and part rhetoric, with the identification of different audiences who respond to different forms of persuasion. In this aspect it reminds me somewhat of Stephenson’s Snow Crash or even Reed’s Mumbo Jumbo, in a far more literary vein.

Conceptually what’s most interesting about Lexicon for me is the role of big data and surveillance. Compelling people requires identifying their psychographic segmentation, which is a practice in marketing research; think of it as demographics on steroids. This is the information produced from tracking your “likes” on Facebook, text mining in your Gmail and Google searches, data collected from your shopper card. Perhaps you remember the story from a few years ago about Target identifying a shopper as pregnant. Maybe this happened, maybe not. But that’s the kind of thing we are talking about.

Where does this get us?

  1. The better you know your audience, the more likely you will be able to persuade them. I don’t think anyone would disagree with this.
  2. Through big data collection and analysis, one can gain a better understanding of audiences not just in broad demographic terms but in surprisingly narrow segments. How narrow, I’m not exactly sure.
  3. The result is the Deleuzian control society version of propaganda where we are not disciplined to occupy macrosocial categories but are modulated across a spectrum of desires.

Certainly there are legitimate, real-world concerns underlying Lexicon, as one would hope to find in any decent sci-fi novel. It’s also a paranoid, dystopian fantasy that gets even more fantastical when one gets down to the plot itself (but no spoilers here). I suppose my reaction in part is to say that I don’t think we are smart, competent, or organized enough to make this dystopia real. But for me the more interesting question is to ask whether we really are this way. To what extent are we programmable by language or other means? This is where one might return to thinking about procedural rhetoric.

I suppose the short answer is that we are very programmable and that our plasticity is one of our primary evolutionary advantages, starting with the ability to learn a language as an infant. One might say that our openness to programming is what allows us to form social bonds, have thoughts and desires, cooperate with others for mutual benefit, and so on. If we think about it in Deleuzian terms, the paranoid fear of programming (tinfoil hat, etc.) is a suicidal-fascist desire for absolute purity, but ultimately there’s no there there, just nothingness. If we view thought, action, desire, identity and so on as the products of relation, of assemblage, then “we” do not exist without the interconnection of programming.

Of course it’s one thing to say that we emerge from relations with others. It’s another to investigate deliberate strategies by some corporation or government to sway or control one’s thinking. It’s Latour’s sleeping policeman (or speed bump, as we call it) or the layout of the supermarket. Imagine the virtual supermarket that is customized for your tastes. You don’t need to imagine it, of course, because that’s what Amazon is. Not all of these things are evil. Generally speaking, I think we imagine speed bumps are a good way to stop people from speeding in front of an elementary school, more effective than a speed limit sign alone. There is an argument for the benefit of recommendation engines. We require the help of technologies to organize our relations with the world. This has been true at least since the invention of writing. Maybe we’d prefer more privacy around that; actually, there’s no maybe about it. It’s one thing to have some technological assistance to find things that interest us; it’s another to have some third party use that information for their own purposes.

I also wonder to what extent we are permanently and unavoidably susceptible to such forms of persuasion. Clearly the idea of most advertising and other persuasive genres is not to convince you on a conscious level but to shape your worldview of possibilities, not to send you racing to McDonalds right away but for McDonalds to figure prominently in your mind the next time you ask yourself “what should I have for lunch?” And even then, when fast food enters our mind as a possibility, we might consciously recognize that the idea was spurred by a commercial, but do we really care? Do we really care where our ideas come from? Are our stories about our thoughts and actions ever anything more than post-hoc rationalizations?

Returning to my discussion of Bogost, Davis, and DeLanda in the last post, I think there is something useful in exploring symbolic action as a mechanism/procedure. As a book like Lexicon imagines, we’ve been programming each other as long as there has been history, perhaps longer. Maybe we are getting “better” at it, more fine-tuned. Maybe it’s a dangerous knowledge that we shouldn’t have, though we’ve been using ideas to propel one group of humans to slaughter, enslave, and oppress another group of humans for millennia. That’s nothing new. If anything though, for me it points to the importance of a multidisciplinary understanding of how information, media, technologies, thoughts, and actions intertwine as the contemporary rhetorical condition of humans.

 

Categories: Author Blogs

alien languages and rhetorical procedures

20 June, 2014 - 09:05


Ian Bogost writes about Star Trek: The Next Generation and the unique language of the Tamarians, an alien race encountered in one episode. Picard and the crew eventually figure out how to speak with the Tamarians by interpreting their language as a series of metaphors. Bogost, however, suggests that metaphor is the wrong concept,

Calling Tamarian language “metaphor” preserves our familiar denotative speech methods and sets the more curious Tamarian moves off against them. But if we take the show’s science fictional aspirations seriously and to their logical conclusion, then the Children of Tama possess no method of denotative communication whatsoever. Their language simply prevents them from distinguishing between an object or event and what we would call its figurative representation.

Bogost then proceeds to put the Tamarian language in the context of computers, where, from our perspective, we perceive descriptions, appearances, or narratives when we look at the machine, but what is actually happening is logics and procedures. Picard may think the Tamarians are speaking in metaphors, but they are in fact speaking in procedural logic. There is some insight there for us, Bogost observes:

 To represent the world as systems of interdependent logics we need not elevate those logics to the level of myth, nor focus on the logics of our myths. Instead, we would have to meditate on the logics in everything, to see the world as one built of weird, rusty machines whose gears squeal as they grind against one another, rather than as stories into which we might write ourselves as possible characters.

It’s an understandable mistake, but one that rings louder when heard from the vantage point of the 24th century. For even then, stories and images take center stage, and logics and processes wait in the wings as curiosities, accessories. Perhaps one day we will learn this lesson of the Tamarians: that understanding how the world works is a more promising approach to intervention within it than mere description or depiction. Until then, well: Shaka, when the walls fell.

Perhaps not surprisingly, this episode has received some treatment in rhetorical theory. Both Steven Mailloux and Diane Davis (paywall) have written about it as an opportunity to investigate the challenges of communication with otherness. As Davis points out, the episode ends without any real understanding being achieved between the Enterprise crew and the Tamarians. They do not establish diplomatic relations. The best they can achieve is peace without understanding, which, as Davis argues, “suggests that understanding is not a prerequisite for peace, that a radically hospitable opening to alterity precedes cogitation and volition.” From this she concludes:

the challenge is to compare without completely effacing the incomparableness of the “we” that is exposed in the simple fact of the address; that is, the challenge is to refuse to reduce the saying to the said, to keep hermeneutic interpretation from absorbing the strictly rhetorical gesture of the approach, which interrupts the movement of appropriation and busts any illusion of having understood.

In this moment, Bogost and Davis appear like Picard and the Tamarians: two non-communicating entities. However, they both recognize the partial-at-best success of Picard’s ability to communicate here and the limits of the hermeneutic gesture. Davis points to a rhetorical gesture that precedes communication. I wonder if that gesture might be procedural, or, to put it in more Deleuzian terms, the operation of an assemblage.

Let’s see where that takes us by bringing in two other sci-fi stories.

  1. The ST:TNG “Darmok” episode is often compared to the original Star Trek episode “Arena,” where some omnipotent space race called the Metrons forces Kirk to fight an alien captain from a reptilian race called the Gorn. In the end, Kirk manages to create a makeshift weapon (anticipating every episode of MacGyver) and defeats his enemy. However, he chooses not to kill the Gorn, and he is rewarded for this decision by the Metrons. It has many of the classic tropes of the original series: Olympian-styled super aliens, violent bestial aliens, and scrappy, can-do American know-how with the perfect mix of brains and brawn, judgment and courage, etc., etc. One way of comparing the episodes is in terms of the shift from Golden Age to New Wave sci fi, where in the former the heroes are cowboy engineers and in the latter they are anthropologists. In “Arena” there is no hope for communication and apparently no attempt.
  2. Stepping out of the Star Trek universe, China Miéville’s Embassytown focuses on an alien race called the Ariekei. They are two-headed creatures, and the only way humans can communicate with them is through genetically-engineered twins called Ambassadors, who can speak with two mouths and one mind. Like the Tamarians, the Ariekei appear to speak through metaphorical concepts, but more importantly they cannot create fictions or lies. As such, humans are called upon to stage various actions in order to create concepts for communication. There is a Derridean pharmacological aspect as well, as the Ariekei find themselves intoxicated by a new Ambassador’s speech. And then, when they figure out how to lie… Following Bogost, we might also call the Ariekei’s language procedural. I see the pharmakon as fitting into a procedural understanding of rhetoric and communication: language is a machine.

It’s tempting to see language, or more generally symbolic behavior, as the proto-machine of modern humans. Today, when we look at technologies, they are all preceded by language, by descriptions, images, narratives, and metaphors. When we think about remediation, or just McLuhan’s contention that all media take prior media as their content, that’s what we see. The origins of symbolic behavior are as murky as efforts to define it in the first place, but I think we acknowledge that there are technologies prior to language. Technologies always bridge the modern nature-culture divide, responding to physical laws but also shaped by cultural processes. Language is certainly that way, grounded partly in our nature through the evolutionary development of the mouth and brain but also cultural. From Bogost’s view, as well as Deleuze’s (though the two are quite different in other ways), language is machinic because being is machinic. The machine precedes language. For Davis, rhetoric also precedes language and communication as this opening of a relation to Otherness.

Might we say that rhetoric is also a machine? I don’t think Davis would agree to that, but this is precisely Bogost’s point when he discusses procedural rhetoric. Persuasive Games, where Bogost introduces us to procedural rhetoric, focuses on the contemporary scene of videogames, especially games with a social-political agenda. However, if we say that procedural rhetoric is not only a way to understand how software persuades but more broadly a way of seeing rhetoric as a machine, prior to symbolic behavior, then we move toward a different understanding of these science fiction situations.

Human and alien assemblages grind their gears into one another. (Mis)understanding is one output. Violence, heat, entropy are others. Dis/order is produced as assemblages mutate. One inclination is to say there are no aliens here, just stories written in English. Let’s interpret them with our various hermeneutic methods. But there are aliens here, albeit not extra-terrestrial ones, just nonhumans. What happens if we take Bogost’s advice and not see the “Darmok” episode as description, image, and narrative but rather as a process?

Categories: Author Blogs

speculative politics, academic life and the “legacy” of postmodernism

16 June, 2014 - 13:12

Alex Galloway wrote an interesting post a couple weeks ago that sparked a long conversation (100+ comments), including a more recent post by David Golumbia that makes reference to a post I wrote two years ago. In a nutshell this is a conversation about the politics surrounding speculative realism, object-oriented ontology, and such. It mostly focuses on Graham Harman, less so on other OOO-related folks like Bryant and Bogost, and extends to Latour, DeLanda, and others. The questions of “what is?” (ontology) and “what should be?” (politics) are clearly interrelated. I don’t think anyone believes that some version of Stalinist science is a good idea (where the search for understanding is censored up front by a political agenda). On the other hand, no one in this conversation believes that any search for understanding by humans is not shaped by ideology, politics, culture, and so on.

I agree with Galloway when he writes “The political means *justice* first and foremost, not liberation. Justice and liberation may, of course, coincide during certain socio-historical situations, but politics does not and should not mean liberation exclusively. Political theory is full of examples where people must in fact *curb* their own liberty for the sake of justice.” As far as I can tell, justice isn’t built into the structure of the world. It’s not gravity. Justice is a claim about how the world should be. As Galloway points out, there are plenty of political theories that instruct people on what they should do. Of course there’s also a lot of disagreement over what justice is, as well as how it can be achieved. Much of it is tied to theories of ontology (e.g. do you believe the Genesis story accurately describes how the universe was formed?). If I understand Galloway’s criticism, it is that OOO separates politics from ontology and fails to see how its ontology is informed by politics. He then goes on to demonstrate that the politics that informs OOO is capitalism. Maybe. Ultimately the proof is in the putting, and for me that means not only saying but doing.

From my perspective this conversation focuses on academic life. Galloway’s post takes up Harman’s references to the political situation in Egypt. He also talks about the Occupy movement, Wikileaks and so on. But this is an academic argument happening between academics. We can say that academic life and work is political in the way that all human life and work is political. Write an article, teach a class, attend a committee meeting: all are political acts. But they are not political in the sense of Occupy or Wikileaks. If they are efforts to make the lived experiences of other humans more just then they are quite circuitous in their tactics. Certainly there are some activist academics who are more explicitly political in their research. There are some who are active with unions or with faculty oversight of institutions. But such things do not characterize academic life in general. Let’s say there are two monographs on Moby Dick. One invokes Zizek as a primary theoretical inspiration. The other one invokes Harman. From Galloway’s perspective the former is preferable on political grounds, but I am having a hard time seeing either as doing much for justice.

To put my own research on digital media technologies, higher education, rhetoric, and teaching composition in similar terms, I suppose I would say DeLanda and Latour are my primary inspirations. Put simply, my work examines the premise in my discipline that symbolic behavior is a uniquely human trait. In my view it is a premise that tends to obscure the way that symbolic behavior (and the broader realm of thought and action) relies upon a broader network of actors. In particular, I see our continuing struggles over what to do with digital media as stemming from this premise. Is it sufficiently political? I’m not sure. Who makes that determination? Does “being political” by humanistic academic standards require choosing an argument from among a set of prescribed acceptable positions? I would hope not. Does it require offering some prescription, some strategy or tactic, for increasing justice in the world? Maybe. I would like to think that my work strives to make life better. That is, if I offer to you my very best understanding of how digital rhetoric and composition works and what it might mean for teaching and higher education in general, I think that I am trying to make life better. Does it make the world a more just place? How is one even supposed to measure that? If a butterfly flaps its wings…

Meanwhile, David Golumbia, in responding to Galloway, takes issue with a phrase in that earlier post of mine, where I say that “there is potentially less relativism in a flat ontology than there is in our legacy postmodern views.” The word “potentially” there has to do with point of view. In my view, it is almost tautological to say that a flat ontology has less relativism. This is, in some respect, Galloway’s complaint: that a flat ontology does not pursue a “superimposition of a new asymmetry.” But that’s not Golumbia’s concern. His concern is with the phrase “legacy postmodern views.” As near as I can figure though, he is not asserting that there is no such thing as “legacy postmodern views,” but rather that there shouldn’t be. As he writes:

the major lights of theory have been presented by many of us to students as a bloc, as doctrine, or even as dogma: as a way of thinking or even “legacy view” that we professors of today mean to “educate” our students about. But we should not and cannot be “educating” or “indoctrinating” our students “into” theory. To the contrary: because that work is a diverse set of responses to several bodies of work, more and less traditional and/or orthodox, it can only be understood well when embedded in that tradition.

I don’t have a problem with his argument that theory should be taught a different way. In the end he makes a fairly disciplinary-conservative argument that students need to read the philosophical tradition. He complains that SR plays into this with its “sweeping dismissal” of prior philosophy and argues that its object orientation isn’t all that new anyway. He blames technology for short attention spans, a devaluing of proper education, and an unwillingness to give due consideration to the philosophical tradition. Keep in mind that these are professors complaining that other professors don’t take education seriously and don’t read enough. Actually though, these are familiar rhetorical moves. What could be more familiar than saying persons A, B, and C have misread or failed to read persons X, Y, and Z?

I do want to respond briefly to where Golumbia remarks, “That phrase ‘legacy postmodern views’ really strikes me wrong, and rings in harmony with the ‘leftist faculty cabal’ mentioned by Galloway. Among other things, both phrases sound much like the major buzzwords used by the political right to attack all of theory during its heyday in the 1980s and 1990s.” I think he means to suggest that I am taking up some right-wing attack on theory. And I’m not sure why, as we seem to agree that “legacy postmodern views” exist and are taught, even though neither of us believes such things are worthwhile.

If I decide, for example, to focus on Latour and DeLanda rather than Badiou and Zizek, and some other digital rhetorician decides the opposite then… I’ve got nothing. I mean, I’m not sure what the stakes are. We write two different kinds of articles and books. Maybe our classes are a little different but not that different. Is one of us making the world a more just place than the other? According to whom? Either way, we’re both stuck on this treadmill of writing articles and monographs for tenure and promotion. How is it that I am evil and the other scholar some avatar of justice? When there’s maybe a couple thousand people on the planet at best who could tell the difference between us and less than 100 who would bother to. That’s the stuff that I don’t get.

Categories: Author Blogs