Digital Digs (Alex Reid)

an archeology of the future

Writing alone on the social web.

20 June, 2016 - 06:25

In The New York Times, Randall Stross opines on the pending incorporation of LinkedIn into MS-Word. Apparently the idea is to create an opportunity for people on LinkedIn to participate in or assist with whatever you’re doing in Word. As Stross writes:

My version of Word, a relatively recent one, is not that different from the original, born in software’s Pleistocene epoch. It isn’t networked to my friends, family and professional contacts, and that’s the point. Writing on Word may be the only time I spend on my computer in which I can keep the endless distractions in the networked world out of sight.

I have to agree. It’s really difficult to imagine desiring an intrusion from the friendly experts at LinkedIn, or, god forbid, being asked to perform the role of expert yourself for one of your contacts there.

This got me thinking along a tangent though. Since the early days of web 2.0 and social media there has been this idea that writing would become a more social, collaborative experience. We have Wikipedia, of course, and the many more focused wikis one can find. And that’s a useful example of collaborative, networked writing. However, I would imagine that on many wikis individual pages tend to be largely created by a single author, perhaps with some editors coming along to touch things up. The advantage of the wiki structure is that it is plainly subdivided, and thus one might approach a wiki the way students generally approach group writing projects, by dividing the work into parts done individually. When we think of other forms of social media, the level of collaboration is very low: e.g. commenting on someone’s FB status update. There’s collaboration in the sense that an asynchronous online conversation can be collaborative, but FB is more like parallel play than it is a group endeavor.

Nevertheless, we still seem to be left with this idea that writing should be more networked and collaborative. Not only that it could be, but that it should be. Where does this value come from? I’m not sure. I suppose from general valuations of the ideas of crowdsourcing, collective intelligence, and so on. If one approaches this perception from a genre studies perspective, one might say that whether or not multiple people should compose a text together largely depends on the activity system, genre, and objective of the composition. So, for example, if there were going to be a shift in scholarly writing in the humanities such that the common approach was to have a half-dozen or more co-authors, then we’d have to start creating new genres, which would probably mean creating new research paradigms and methods as well. In other words, it’s not the kind of thing one just decides to do on a whim.

But there’s a more pressing point here I believe. I suppose we shouldn’t be surprised by the implicit anthropocentrism in this apparent push by Microsoft toward networked composing. That is, there is this notion that the media ecology is for us and should serve our interests. Of course, you or I might disagree with Microsoft’s notion of what is in our interest. We might critique the ideology of labor that informs their notion of how we should write. Nevertheless, it is still a human-centered notion.

However, the digital media ecology is no more interested in us than the dirt is in the worm. It doesn’t tell us we should collaborate or that we shouldn’t. In a Latourian turn of phrase, in our encounter with the media ecology, we write and are made to write. Like Stross I spend many hours with Word documents, “alone” (i.e. without other humans) and avoiding the temptations of social media and the web. But even if I turn to Facebook, Twitter, email or whatever, I am not suddenly less alone in human terms. I write this blog post for the web. I write a status update. An email message. I am still writing alone. And the digital media ecology in which I participate is largely indifferent to it all, just as the rest of the world I normally inhabit is indifferent: the toaster, the front lawn, the stop sign, the sidewalk, etc.

Of course we are all also interdependent and in that sense we never write alone; we only gain the capacity to write through our relations with others. But that is not an argument for this anthropocentric model of networked composing where “we” should write together. It is only an observation that, in fact, we never write alone. “Should” has nothing to do with it.

I am very skeptical of this LinkedIn/MS-Word model of networked collaborative composing. In fact, it sounds like a nightmare. After all, hell is other people, or rather, other LinkedIn contacts. But it’s not about that. Writing is already a largely nightmarish labor of wringing out and sorting out thoughts, of trying to organize and find something useful to say, of confronting some internalized demand of an imagined external audience.

Despite all that negativity, I am curious about the notion of writing with other humans in a more collaborative way, or at least, writing in a different network of relations than the one in which I participate now.

Probably not LinkedIn though.


the ROI on con/tested #cwcon terms

22 May, 2016 - 06:48

I suppose this follows along on the previous post’s discussion of the name of “computers and writing.” This starts with the second town hall on the subject of professional and technical writing in relation to computers and writing. Bill Hart-Davidson offered this visualization in his brief talk, which you can read here. In his work with Tim Peeples, he depicts the interests/territories of professional writing (in blue) in relation to the interests/territories of rhetoric and composition. He then asked, how would we draw “computers and writing”? Or perhaps, as he also suggested, we might chart out the fields using different terms.

I think the tough question is where C&W has fit in curricular terms. I would say as a topic in FYC, as an elective or maybe a required course within a professional writing major, and, in a few places, as a concentration in a graduate program. I’d say the shape would look roughly like the r/c space, except jacked up on technology and diminished on the curricular side.

I think it’s useful to ask what we are doing, then, and where we want to go next. The implied answer of the Town Hall is “somewhere hand-in-hand with professional-technical communication.”

I then happened into a panel on digital research methods (G1). There Rik Hunter raised the question of ROI in terms of the learning curve in acquiring the technical skills to pursue methods for answering questions about data. There’s no doubt there’s a real challenge in acquiring a significant new set of methodological skills after graduate school and before tenure. After tenure, it becomes a different matter, not easier necessarily but different. Another presenter on that panel (I believe it was Kerry Banazek) discussed the ontological commitments that underlie choices we make about empirical methods and the ethical values at stake. Tim Laquintano’s presentation led to a discussion of why it seems that our discipline (meaning R/C in general) tends not to test the claims made in scholarship through replication of methods. To me these are key observations folding back into the town hall and our questions about names and, in my view, our terminology. Specifically, I’d like to con/test some key terms for their ROI and the ontological commitments they represent.

 

Or you could say that this post is an opportunity for me to complain about the term “multimodality” and its anthropocentric ontological commitments. If we were going to put Computers and Writing/Composition on Bill’s chart and use multimodality as a term, we’d have to say something like, “this one goes to 11.”

While there’s a pre-digital history and variety to multimodality, as someone like Jody Shipka would point out, mostly we use this term to talk about the combination of different media in digital environments. We argue that we are all called upon to communicate in a variety of media, so we should teach our students about such things. And since computers and writing has mostly been an offshoot of composition studies, that instruction has been primarily in two places: in FYC and in the preparation of FYC instructors. Much like Kirschenbaum’s notion of digital humanities as a tactical term, I see multimodality as a tactic to recruit composition studies for the purposes of computers and writing/composition. In some ways the multi- leaves writing alone. It makes writing one of many, part of the multi-, but a secure and stable part. That’s not to say that adding a video or image to a text wouldn’t alter what you’d write, but writing still remains a stable, separate, and identifiable entity.

Multimodality is also human-centered. In fact, this is often specifically announced: it’s people first, not technology. Again, much to the relief of composition studies humanists. And I agree that multimodality, at least as it is deployed in our discipline, is heavily anthropocentric. It is the human perspective that makes the media modes multi-, that brings them into relation with one another in a phenomenological-subjective synthesis.

Is there another way? Many, I’m sure. How about a material, media-ecological approach? For experimental purposes, let’s remove the human entirely and look at a quintessential multimodal webpage (like this one). There are media types (or file formats): html, jpg, etc. There’s a database (this is a blog). There’s the browser and the hardware. One could keep going, down to the circuit boards and out to the server farms. We’ve played this game before, but multimodal doesn’t make much sense of it. It mutes those things that are not easily experienced by the human subject. And because it does that, it is able to stabilize “writing,” preserving it for composition studies. That is to say that the text on this “page” looks like the text on a printed page.

But it ain’t.

Though I taught in and sometimes ran a professional writing major for several years about a decade ago (wow, has it been that long?), I don’t feel well-positioned to talk about the field, especially not about technical communication. However, I am very interested in the pragmatic future orientation of Rik’s discussion of ROI. And I see the conversation about professional-technical communication happening in that vein. I see two, maybe 2.5, possibilities.

  1. First and foremost, we create return through pedagogy and students, whether you want to think about that idealistically or in the crudest cynical terms of tuition dollars. Once one gets beyond FYC (and getting beyond it, I think, is crucial for our field and the reason why we are talking about Prof-Tech Comm), one has to be able to show the value to the students before they take the class. That is, you have to persuade them to become majors. I think this isn’t just about pedagogy or research to improve pedagogy. It has to be about creating scholarship that they can see as valuable to them.
  2. It can be about creating roles for ourselves in larger collaborative efforts on campus. I’m thinking primarily in terms of research but it doesn’t have to be. Our colleagues have problems (or we could call them research questions) and we have expertise that can assist in addressing them. I don’t think of this as “service,” though in some instances it might be. Instead I see it more as growing our research interests in response to problems that others also see and value solving. The .5 part of this answer is that, in response to these problems, one then might build the kinds of entities, like ELI Review, that Jeff Grabill discussed.

I don’t have a roadmap for doing this. For me, contesting terms like computers and writing/composition or multimodality is a way of envisioning alternatives. In a media ecology where the institutional assemblages securing writing are deterritorialized, both risks and opportunities increase. For me, a (new) material, media-ecological approach offers ways to articulate rhetorical practices (across media) that will create more value for students and in scholarship; it might also establish a zone of disciplinary expertise for digital rhetoric moving forward.

 


robots and writing? a #cwcon update

20 May, 2016 - 23:29

After a personal hiatus of a few years, I’ve returned to the Computers and Writing conference, which is taking place just an hour down the road from me at St. John Fisher College. The conference, and really the field, finds itself at a moment of reflection with the retirement of several founding members, including Cindy Selfe, Dickie Selfe, and Gail Hawisher. Following on that moment, the conference began with a presentation of microhistories of the conference (which started in 1984, I believe).

In many ways, these histories were panegyrics, which is fine in itself. There is a time for all things, etc., etc. Many things have been accomplished and should be remembered. At the same time, I think there’s a reasonable question about where that history has led us. One way of thinking about that is to ask whether or not the title of the conference, “Computers and Writing,” or its sister journal, Computers and Composition, really still works. Maybe they do, but if so, it’s because we’ve accepted the significant mutation of computing in our culture over the last 30 years. We all know there’s more computing power in the car you take to the grocery store than there was in the Apollo rockets we sent to the Moon. We probably need to realize that the birth of “Computers and Writing” occurred in an era of computing closer to that moonshot than to your car’s manufacture. Today, computing is so ubiquitous that even discussions of ubiquitous computing seem passé. If, when this business started, computers and writing described students with word processing software or maybe local networks, staring at monochrome monitors, then today, what does it mean? I know Will Hochman will be discussing the matter of the conference title in his presentation on Saturday, so we’ll have to see what conversation comes out of that.

As some will observe, the early battles of computers and writing to get people to recognize the compositional and rhetorical capacities of computers are over. We won in a landslide, though probably not so much as a result of our own efforts as of the inexorable force of technological development. It’s likely that our closest colleagues remain the staunchest opponents of this reality. Needless to say (but said anyway), there are plenty more battles, but maybe the end of those battles means “computers and writing” has run its course. We’ve become a mainstream part of rhetoric and composition. Maybe.

In looking at alternatives, some are content with the “digital rhetoric” appellation. I find it acceptable. Maybe there’s a conference title in there. Of course I never really referred to myself as a “computers and writing/composition” specialist, so I don’t personally feel like I’m losing much. Before digital rhetoric, I would have gone with new media rhetoric. There’s also techno-rhetoric or even cyber-rhetoric I guess. However, I wonder if there’s a reason the field has eschewed “rhetoric” for so long, at least in terms of its title.

Despite the title of this post, I’m not seriously suggesting “robots and writing” as a replacement, though it has a nice alliterative ring to it. Instead it’s suggestive of Jeff Grabill’s keynote this evening. Grabill’s talk focused on the various robots being developed around writing instruction. He pointed to the destructive pedagogical consequences of many of these robots that promise to evaluate student writing at scale… provided of course that teachers shape their classrooms to suit what the robots can do. Grabill quite forcefully called on his audience to get involved in this situation, arguing that it is not sufficient for us to decide that our scholarly interests don’t really coincide with addressing the ways that writing instruction, especially at the K-12 level, is becoming roboticized. Furthermore, critique is not enough. We need solutions, which is what Grabill has been working on with many others in the form of the ELI Review program.

I don’t doubt Grabill’s assertion that these writing robots lead to bad pedagogy that tends to hurt the most vulnerable of our students first and worst. But let me set that aside for a moment.

Or more precisely, before we get to that, there’s a prior argument that I think needs to be made. It starts with recognizing that there are nonhumans (robots or whatever) that are reading and writing and that are shaping human rhetorical capacities. To be clear, they are not just mute extensions of human will. They are doing their own thing.

In the morning session, many of the microhistories pointed to a “pivot” that took place around 1993-4. You don’t have to be much of a historian to figure out what happened then that might have changed the way we looked at computers and writing. As I was discussing with a few friends over lunch, one might see a related pivot about a decade later around the arrival of social media. Somewhere around there it seemed like there was a tipping point in terms of the average rhet/comp tenure track position expecting a level of technological competence that would have made one a specialist in the previous decade (e.g. an ability to “teach with technology”). And while I don’t want to diminish the social media-mobile technology revolution, we are on the brink of, maybe already in the midst of, something far more substantive in the form of spimes, smart objects, robots, whatever you want to call them.

It’s tricky for us digital rhetoricians/computers and writing folks, because these things aren’t really media in any conventional sense, but they are rhetorical devices. And sure, one can say, as I have, in a new materialist vein that all objects might have rhetorical capacities, but this is something a little different. I’m talking about devices that are designed to perform rhetorical actions with us and other nonhumans. There’s nothing especially abstract, speculative, or theoretical about your smartphone’s rhetorical behaviors.

I think you have to understand this general situation before you can get to Grabill’s argument. Whether or not one gets involved directly in the struggle Grabill describes over the shape of writing education, it does seem like we require some new terminology (and concepts) to address an emerging situation with these technologies.

Honestly, I would be surprised if there was a title change any time soon. And I don’t have much at stake in the matter. I do think there’s a growing sense that “computers” are not necessarily what we are talking about and that maybe even “writing” is stretched to its limits. I suppose the danger is that we become so diverse in our interests that there’s nothing really holding us together except a somewhat illusory notion of our being “about computers.” While I feel comfortable with my own scholarly direction, I’m not sure how it fits into a larger picture or what that larger picture is or should be of computers and writing.


looking at college from the other side

19 May, 2016 - 14:26

I’m sure many of my colleagues and gentle readers have been through this experience, but this fall my daughter is headed off to college. Briefly, her college application story goes like this: she’s a National Merit Scholar with a load of AP classes; she was accepted at three Ivies and some other very good privates; they all ended up being too expensive in our eyes; she’s going to a public AAU university (not UB) with a full tuition scholarship where she’ll graduate debt-free (and possibly in 3 years if that’s what she wants). Right now it looks like she’ll major in math and computer science.

But this isn’t really about her. It’s about me looking at a university through the eyes of a parent rather than a professor. So here are my observations/complaints.

  1. Let me echo Gardner Campbell’s criticism of the way we present classes to students in online registration formats. This could be so much better than it is. It’s like shopping in a supermarket where all the aisles are filled with stacks of identical white boxes with generic titles and fine-print lists of meaningless ingredients.
  2. In my daughter’s case (a situation of her own making if she chooses to double major), all the courses she takes as an undergrad will be in fulfillment of either a general education or a major requirement. There’s a lesson I’m sure one is supposed to learn about education there.
  3. Shopping for general education courses through laundry lists of distribution requirements. Would you like to take “Intro to #$%#@” or “Generic Title about my *&(% discipline”? My, my, it’s all so tempting and thought-provoking. Plus, with 100+ classmates, there’ll be so many opportunities to make friends! Fortunately there are some more narrowly focused classes that are smaller in size and meet these requirements, but then they kind of put the lie to general education. I mean, if I can meet my general education requirement in the humanities by taking a class on the science fiction of the Czech Republic…

I’ve spent the last 2-3 years working to reform our general education program at UB, so I really do sympathize. In fact, maybe it is knowing how the sausage gets made that makes me a little cynical (ok, more than a little). It’s also left me wishing that we could do better.

I know that from a faculty perspective it looks like there are thousands of sections of courses on offer each year at a university like UB or the one my daughter is attending. But that’s not how one experiences it as a student or as a parent helping a student. What I see is something more like Tetris. You start with a plan to meet the sequence of course requirements in your major, which means you have to take certain courses in a given semester. Once you put those in your schedule, then you’re looking at gen ed requirements, trying to figure out which titles might be interesting, and looking for ones that will fit into your schedule.

It’s really quite amazing how quickly the possibilities narrow down. In a way, it’s necessary, because how can one really make a decision among 1,000 options? On the other hand, when you’re really deciding among 3 or 4 options… well, let’s just say that as the professor of such a course you should hardly be surprised if the students who show up aren’t as intrinsically motivated as you’d hoped.

There’s little opportunity for intrinsic motivation when the externalities of the curricular bureaucracy are driving the choices students make.

At UB we’ve tried to reform general education by making the courses students take more thematically relevant to one another and to the majors and other interests students have. The other option, which was not available to us (not that we would have taken it), is to abolish general education altogether. I would be in favor of such a move, though it would be like Congress trying to get rid of Social Security.

The thing is I don’t object to the content of these courses. In fact, I actually think it is worthwhile for students to take classes across the disciplines of the university. I just wish we could present them as something other than a legalistic set of requirements. How about using some persuasion instead? Of course that would mean trusting students to make decisions that benefit them intellectually rather than trying to stuff intellectual benefit down their throats.

If we did that, though, it would mean missing out on the classic moment of academic irony where students are required to take a class in which they are exhorted to think critically and take responsibility for their own learning.

It may be that course registration is an unavoidable bottleneck in the learning process: time, faculty, and classrooms are all limited resources. So I suppose I really have two brief suggestions.

  1. To reform the registration process so that students can connect with the courses they choose on a more affective and informed level. This means a fuller description of the course and why they might want to take it. Then we could use that information to link classes together and show other courses the student may want to take in combination with this course or in lieu of it.
  2. Once the students are on the other side of the bottleneck (i.e., they’re in your class), what can we do to reopen the space of intellectual possibilities they had to squeeze through in registration? Here I am thinking especially of general education courses. And by this, I don’t really mean having courses with really broad topics. I think a course on the science fiction of the Czech Republic could be pretty cool. I’m just thinking there’s a potential for a shared ethos here. After all, the professor also had to go through some bottleneck of choices to end up in this course at this time (indeed, possibly a lifetime of such choices). Now that we’re all here, though, and committed to reading these novels (or whatever), what do we do next?

 


laptops, classrooms, and matters of electrate concern

17 May, 2016 - 08:02

Last week, Inside Higher Ed reported on this study (by Susan Payne Carter, Kyle Greenberg, and Michael Walker), which shows, once again, that students who use laptops in classrooms do not perform as well as students without laptops. Steve Krause wrote about the study a few days ago, wondering what might happen in a laptop-mandated classroom as opposed to a laptop-banned one.

I had a similar response to this study and the growing number of such studies. This study, like many of its kind, finds that students who have laptops in their lecture classes do not perform as well on multiple-choice tests at the end of the semester. There are many possible reasons for this, as the study explains:

There are at least a few channels through which computer usage could affect students. First, students who are using their tablet or computer may be surfing the Internet, checking email, messaging with friends, or even completing homework for that class or another class. All of these activities could draw a student’s attention away from the class, resulting in a lower understanding of the material. Second, Mueller and Oppenheimer (2014) find that students required to use computers are not as effective at taking notes as students required to use pen and paper, which could also lower test scores. Third, professors might change their behavior – either teaching differently to the whole class or interacting differently to students who are on their computer or tablet relative to how they would have otherwise. Regardless of the mechanism, our results indicate that students perform worse when personal computing technology is available.

My first response, actually, was to suggest that someone might conduct a different study wherein the students who brought laptops to class were also allowed to use them during the multiple-choice final. It’s only a guess, but I would think that having access to the Internet (and presumably e-book versions of course materials) could substantially improve their performance. But really that’s only a hypothesis, and probably one would see better results if some direct instruction in finding good information were included. Of course such a study would seem counterintuitive, as the presumed objective of a course is for students to “internalize” knowledge, i.e., for them to know it without reliance on notes, books, computers, or whatever. As we know, there are historical but ultimately arbitrary reasons for defining “knowing” in this way.

I know I’ve written about this many times before, and, in my view, this comes down to two interrelated problems.

  1. None of us, students and faculty included, have really figured out how to live, learn, and work in the emerging digital media-cognitive ecology. So it is certainly true that we can struggle to accomplish various purposes with technologies pulling us in different directions.
  2. The courses in these studies, and many, many other courses, want to operate as if the conditions for thinking, learning, and knowing have not changed. The faculty teaching them imagine that these are inherent human qualities. Even those of us who style ourselves as “critical thinkers” and can recognize that such values are historical, cultural, and ideological can still manage to view technologies as either expanding, limiting, or overdetermining some inherent human agency and capacity for thought.

The second problem suggests that as faculty we need to rethink our curriculum and pedagogy, which now operate in a different media-cognitive ecology than they did in the past. The first problem complicates that, as it suggests our understanding of that ecology and how to operate within it remains fairly limited. As such, we must proceed experimentally.

Perhaps the greatest hurdle in all of this is the uncertainty regarding how we should make such judgments. We know that technological development is itself a value-laden and not simply rational process informed by a desire for profit and by any number of other cultural values. As such, we shouldn’t just accept whatever is handed to us. On the other hand, the same critique might be made of pre-digital technologies and the practices that have been built around them, and we shouldn’t just accept those either.

Then, in practical terms, trying to address all these matters in the typical classroom is very hard. If your goal is to teach literature or economics (as in this study) or chemistry, then these technology hurdles are significant detours. You probably just want to give your lectures and grade some exams, or, more generously, you want to deal with the subject matter in which you have expertise. It’s not your job (I think it is fair to say) to rethink the foundations of pedagogical practice in your discipline. And when we do attempt this, often we end up with things like the “TEDification” of lectures, as this Chronicle article reports.

It’s easy to criticize TEDification (no one would use such a word to say nice things), and yet the notion that entertainment should play a role in pedagogy fits well into an electrate apparatus. This is, after all, the classical line about poetry, that it should “delight and instruct,” so such matters are hardly new. It is only that in electracy we develop new institutions around entertainment. When I write “matters of electrate concern” then, I bring together Ulmer and Latour. Matters of concern remind us to listen to the nonhumans, the “missing masses.” If the hybrids of the 17th and 18th centuries fostered the Modern era, then, following Ulmer, the second industrial revolution, specifically the invention of mechanical, then electronic, and then digital media, is ushering in the electrate era.

It’s not the right question to ask “how do I get 200 students with laptops in a lecture hall to learn my course material?” Why are they in a lecture hall for 50 minutes, three days a week for 15 weeks or whatever the schedule is? Why do they need to learn the material in your course? I don’t mean to suggest that we should abandon everything we do. I assume we have good answers for that last question!

Rather than establishing values and answering questions beforehand, I think we need to move forward experimentally. We cannot expect immediate good results. It will take time to develop new institutions. Students in digital media-cognitive ecologies have different capacities than those students who preceded them. Those capacities are not a delimited list; they will shift depending on the particular network of actors in which they operate. We will need to experiment to discover those capacities and create new learning environments that will have a recursive relationship with pedagogy and curriculum. As we might say, pedagogy shapes and is shaped by learning technologies… primarily because those nonhumans have a say. Furthermore, we will need to help students learn how to shape such ecologies for themselves to facilitate their own learning, work, and life.

I think that’s what electracy instruction might look like as an evolution of the literacy instruction that was once, in a past century, primarily the domain of English Studies.


otters’ noses, digital humanities, political progress, and splitters

13 May, 2016 - 09:28

Here’s the thing that confuses me the most about this DH conversation. See, for example, the recent defense of the digital humanities in the LA Review of Books (which, at least from my experience of it, needs to consider a name change) by Juliana Spahr, Richard So, and Andrew Piper, which responds to this other LARB article by Daniel Allington, Sarah Brouillette, and David Golumbia. The defense is as curious as the article that occasioned it, which offers the now-familiar accusation of neoliberalism.

The last time I took on this subject I was inspired by Monty Python’s argument sketch. This time round, though, my first thought was of this scene from Life of Brian.

Of course the whole point of it is to satirize the divisive nature of political progressivism. Apparently it pointed specifically at the leftist politics of England at the time, but really this kind of stuff has a timeless quality to it.

What does it mean to be neoliberal as opposed to liberal or… what? paleoliberal? (And yes, there are paleoliberals, though apparently the meaning of that is quite variable.) Hillary Clinton is neoliberal, or at least might be, as a Google search would suggest. I don’t actually want to go into a lengthy definition of the term here but only to point out its rhetorical use, essentially as a kind of ad hominem attack made by groups on the political left against other groups or individuals that most people would also consider liberal (e.g. Hillary Clinton).

Splitter!

But before getting into this matter, let’s play the believing game. Let’s believe that the digital humanities is neoliberal. That would mean that, for whatever reason, humanities scholars who share neoliberal views gravitate toward DH research, become neoliberal through their DH work, or maybe more obliquely come to have neoliberal effects in aggregate even though no one of them is in fact neoliberal. English Studies and the humanities in general are replete with faculty who view their work as political and as achieving political ends. Almost uniformly those politics are leftist. But even if there were now a group of scholars whose work was neoliberal rather than leftist, would that mean we should call for exerting disciplinary means to silence them?

Apparently so, because that’s what the critique of DH, here and elsewhere, calls for: an end to the scholarly work of these academics on political grounds. Presumably the expectation is that the knife being used here shouldn’t cut both to the left and to the right. No doubt the reality of academic life is that politics come into play in such decisions. I just don’t usually encounter people explicitly arguing that we should employ specific political commitments to evaluate scholarship.

Of course Spahr et al. refute the neoliberal accusation anyway, but then the defense gets interesting. They write, “there is a second more general problem embedded in Allington et al.’s assertion that DH’s ‘institutional success has for the most part involved the displacement of politically progressive humanities scholarship and activism.’ This claim suggests that there has been a dominant politically progressive humanities scholarship to be displaced.” They go on to suggest, using New Criticism as a historical example, that scholarly methods are not bound to political ends, that different people can use different methods for different political goals. In short, DH has no more or less potential to be politically progressive than any other method.

What does it mean to say that scholarship is “politically progressive”? I can see how some scholarly work might be progressive within the context of the discipline by bringing noncanonical authors to the attention of colleagues or expanding the scholarship on such authors. This would seem to be the understanding Spahr et al have as well, as they point to numerous examples of DH work along these lines. But honestly the majority of literary scholarship doesn’t address such authors. Do a cursory database search in your own library and I’m sure you’ll discover, like me, that there were hundreds of peer-reviewed scholarly articles published on Shakespeare last year. That’s just the easiest example. Clearly thousands of articles are written each year on canonical literary figures and texts. And that’s fine with me. I’m just not sure what kind of semantic gymnastics are necessary for that work to become “politically progressive.”

When it comes to literary studies, English departments, and politics, to generalize over my 20ish years of experience, my colleagues, not surprisingly, tend to be liberal by any conventional sense of the political designation. Some of them become involved in politically progressive groups or movements on campus or beyond, but I can’t say that I’ve ever experienced a department culture itself as a hotbed of political activism or progressivism. Nor do I think that it needs to be because there should be no expectation that employees of an academic department share political commitments such that they be asked to carry out explicit political goals as part of their jobs. I’d have to say the same thing of academic conferences. Undoubtedly there are some meetings that are explicitly political and some politically progressive acts come out of such meetings (e.g. various declarations or positions or resolutions). But if you randomly attended various sessions, I don’t think you’d find them any more politically progressive than the content of a random academic journal you might read.

Actually, my experience with the local politics of departments is that they are fairly conservative in the sense that they are quite resistant to change. Certainly there are trends in theory and method that have some impact on course content, maybe even drawing attention to a new set of texts or authors. But for the most part, English departments, their curricular structures, their course content, their pedagogic practices, their shape of faculty specialization, their definitions of research, teaching, and service, and so on have remained unchanged during my academic career. If there’s political progress, or really progress of any kind, wouldn’t there first have to be change? Even if the progress we were hoping for would not be our own progress but other people’s progress or change (which, I would have to say, is pretty arrogant), wouldn’t one still expect that to require us to do something differently? How does one create progress in oneself or others by continuing to do the same things?

Historically I am sure that it would be easy to accuse English Studies of a patriarchal ethnocentrism in support of an industrial-capitalist, nationalist hegemony. Indeed Spahr et al. offer a quick jab at New Criticism along these lines. One could also point to the willingness of departments to “adjunctify” themselves and their graduate students by turning first-year composition into a kind of labor camp in exchange for some maintenance of the status quo in terms of tenure-track faculty work and department structures. Those are the “sins of the fathers,” I suppose. In the contemporary moment we certainly find ourselves in an intractable situation vis-a-vis composition. And we would have to recognize that, in the gentlest terms, we have a long way to go in terms of the diversity of graduate students and faculty.

I have to admit that if the purpose of literary criticism, rhetorical studies, or any other kind of English Studies scholarship is to engender some tangible political change in contemporary America (or anywhere else on the planet) or even on college campuses, it strikes me as a fairly oblique strategy for accomplishing such goals. I would have to assume that the political progress being sought is progress within scholarly communities themselves. Even that doesn’t seem very effective and mostly tends to manifest as a kind of Life of Brian scene. Whatever political progress is being made in scholarship has a fairly subtle impact on what English professors actually do. It must be fairly well hidden within the content of courses whose titles and general areas of investigation (e.g. a literary period) remain unchanged.

 

I would, however, support a more progressive discipline if we wanted to pursue one. I think there are a variety of ways we could be more progressive in terms of a more diverse curriculum (and not just a more diverse literary curriculum), which would of course necessitate a diversity of faculty, scholarly methods, pedagogies, and academic genres. It would, in my view, de-prioritize historical disciplinary commitments and seek to approach the task of investigating literate/electrate culture and practices with an educational mission at its core. In short, I find it hard to envision any kind of progress where we keep doing basically what we’ve been doing for decades. But I’m not sure if making such progress is something we all want to do together. In fact I’m fairly sure it isn’t.

Most importantly, it would mean making progress on the use of adjuncts in our departments. I wouldn’t lay the blame for adjunctification on our departments (let alone on DH), but we must recognize that our use of adjuncts and TAs, especially as composition instructors, has allowed faculty to lead particular kinds of academic careers, including producing the kinds of scholarship we do, politically progressive or not. I would say this is equally true of faculty in other disciplines, so it’s not just about us. It’s really about larger structural issues on campuses.

I think those are the hard political questions in English Studies. How do we grow and change with the rest of academia toward a more diverse, engaged, and sustainable version of ourselves? And to be honest, while I think addressing digital media and culture will be central to that question, the particular methods of DH seem like a really minor part.


the cognitive-media ecologies of graduate curriculum

10 May, 2016 - 08:44

I seem to have developed a recent preference for the term “cognitive-media ecology.” It’s not a term one finds readily bandied about, but it references a familiar concept, or at least an intersection of two familiar concepts: media ecology and cognitive ecology. Though they are separate fields, with the latter drawing on more of a constellation of empirical methods (both are interdisciplinary), both are interested in questions of how environments shape individual and cultural human experience and thought. My own interests in this varied area of investigation are connected to concepts like new materialism, assemblage theory (DeLanda), second empiricism (Latour), and so on, which tend toward the less anthropocentric end of these studies. When it comes down to it, my scholarship follows a new materialist, media-cognitive-ecological-rhetorical approach to understanding how emerging technologies shift our capacities for thought, action, and communication, often within the specific contexts of higher education.

In other words, the topic described in the post title, as odd as it might sound, is right where I like to work.

Here are a few things I will point to but not rehearse:

  1. The transformation of the university since the 1980s, including the overproduction of PhDs, adjunctification, the shift of costs to students, increased administration, etc., and the current abysmal job market for those seeking tenure-track jobs.
  2. What activity theory and genre studies tell us about how genres develop and function in communities, including graduate curriculum genres such as seminar papers, dissertation proposals, and the dissertations themselves.
  3. The emergence of digital technologies in the same 30-40 year period (the first home PCs appeared in 1977, I believe). While this may seem like a tertiary matter to graduate curriculum in the humanities, one has to keep in mind that our 20th-century disciplines were born from an analogous technological revolution (the Second Industrial) at the end of the 19th century: literacy and literate culture as we have understood them make no sense outside of that context.

So all one really needs to do here is hold those points plus the theoretical concepts mentioned above in one’s head for a while and see what thoughts come out. For example:

  • There’s nothing intellectually, ideologically, or ethically “pure” about any of the work we have ever done. It’s always been messy, material, compromised, historical, and so on. Whatever affective commitment (e.g. love for one’s work) one might have won’t change that. Whatever ideological commitment one might have won’t change that either. I don’t mean that as a condemnation, but only to ask that we dispense with the ubi sunt business.
  • Our genres, pedagogies, courses, methods–really all the trappings of our disciplines–form from the cognitive and rhetorical capacities made available to us through our relations to our media ecology. This is not technological determinism but a complex historical process that results in assemblages (genres, for example) that manage to perpetuate themselves. It’s a process in which we academics participate both individually and collectively and thus in which we can intervene. As a result, shifts in the media ecology result in disciplinary-paradigmatic challenges.
  • The cultural and institutional function of our discipline has fundamentally been about its (perceived) relationship to literacy. The demands of literacy/electracy undoubtedly change over time, but, at least since the Second Industrial revolution, there’s been a need for students to develop a literate capacity to function in technologically, professionally, bureaucratically complex discourse communities that are quite unlike the rhetorical practices of their adolescence. That basic fact hasn’t changed; we still talk about our students’ need to learn to communicate. What has changed though is the perceived relationship of English Studies to that task, where we have come to say three things simultaneously:
    • Our discipline is not especially interested in new literacies (or electracies if you prefer); our focus is primarily historical.
    • We are ambivalent about “preparing” students to join a workforce (even though really it’s all we’ve ever done).
    • Whatever work we might do in this area can be accomplished by adjuncts (which is another way of saying it isn’t what “we” do).

What does all this have to do with graduate curriculum? Well, first, graduate curriculum emerges from these same conditions. We typically make the mistake of saying graduate school in English Studies is intended to prepare students to be professors. That’s only half true. It’s true that we generally imagine our students as planning to become professors, and they tell us as much. But the curriculum doesn’t prepare them for the job. It’s true that graduate courses will teach students something about their area of specialization, knowledge which they then might in turn impart to students in classes that they are asked to teach. And the experience of writing seminar papers and then a dissertation teaches students research practices that they will employ as scholars. However, those are really indirect side effects of the curriculum; if they were intended, then we’d be far more explicit about those elements.

So the upshot of this is that we have a discipline of academics with varying, but generally strong, affective and/or ideological commitments to an extant historical practice; a general unwillingness or at least ambivalence about addressing the task of supporting student literacy, which has been the implicit if not explicit cause of English’s centrality to higher education for the last century; and a graduate curriculum that was never designed to prepare students to do anything, even be professors.

So in relation to the situation in which we find ourselves, this results in one of two general options.

  1. The discipline and graduate curriculum make no intentional changes. We simply say, become a student in our program and learn how to do certain disciplinary work. There’s a chance you’ll become a professor, but probably not, and we’re not really going to do much to prepare you for that job or really any job. Just come take the classes and write the dissertation because you want to do those things, not because they represent some investment in a future of any kind.
  2. Do something different than what we’ve done in the past.

I’d say there’s a 99%+ chance that overall as a discipline we will choose door #1. It’s what we have always done. The only difference is that 30-40 years ago, one might have said to an incoming class of graduate students that 50% of you will get degrees and 70% of those folks will eventually get tenure-track jobs (so about a third of an incoming class); today that number might be more like one in five or one in six. But there’s really no need to dwell on such numbers because ultimately the ethos of our discipline is that we, both students and faculty, do the work we do because we love it, not because it has any future.

And indeed it probably doesn’t have much of a future. Eventually the system will implode, but so what? Something will replace it, as there will likely continue to be a need to develop the rhetorical capacities of college students, and there will need to be some way to prepare and certify the faculty who do that work. There will even need to be research to support those activities. All of that will emerge from the capacities of the cognitive-media ecologies we inhabit.

making a graduate seminar pedagogy

3 May, 2016 - 10:12

For the first half of my career, I rarely taught graduate courses, but since I’ve come to UB, it’s become a central part of my job, especially teaching our Teaching Practicum. In the last couple of years I’ve become increasingly dissatisfied with what I’m doing, so I am resolved to change it.

Basically I do what was done to me. Almost uniformly, my graduate courses were class-wide discussions, with 10-25 students in the class. The professor talked 30-70% of the time, depending on the prof, sometimes lecturing but rarely delivering anything specifically prepared in the sense we conventionally think of the lecture. Mostly it was just speaking in response to students, who got more or less air time depending on the prof. As for student participation, it typically followed the 80/20 rule, with 20% of the students doing 80% of the talking. The larger the class, in my recollection, the more this percentage held true. Smaller sections tended to have a more equitable distribution of participation, or at least that was my experience.

So I do the same thing. There’s an assigned reading. I have some notes on things I want to discuss about it, but we generally go where the conversation takes us organically. My goal is to keep my mouth shut as much as possible. My success at that varies. However, I imagine the student experience is largely the same as the one I described for myself.

Why do I want to change?

  • I’m not sure what it accomplishes.
  • Some students don’t get involved and I’m really not sure what they get out of it.
  • It doesn’t model the kind of pedagogy I’d like our TAs to practice in composition.
  • It’s dull. I mean the conversations can be interesting but it’s damned repetitive.

Without offering this as an excuse, the classrooms present a challenge. They are ideal for the kind of pedagogy I’ve just described, but quite limiting for other practices. The rooms have a large table surrounded by chairs: a meeting room essentially. The space is otherwise quite narrow. It’s really impossible for students to work in groups where they can face each other. There’s very little board space.

Basically though, I can’t let that limit me.

So here I am, thinking out loud.

  1. Redefining my role and making it transparent. For any class meeting there’s going to be some stuff I want to say to the students about the readings or topic. I could write something for them to read before or after class (or maybe both). Then my role in class becomes organizing activities and answering questions.
  2. What do the students do? Well, what I encourage my TAs to do in their classes is a rough version of write/pair/share. That is, have them do some in-class writing, discuss it in small groups, and then report back to the class. Along with this basic template there needs to be some objective. So, for example, early on in our practicum, we talk about how to respond to student writing. We read some of the classic scholarship on the subject. So what should we do in class? (n.b. the class is 160 minutes)
    • I give each student a part of the reading and ask them to spend 5-10 minutes writing about what that part tells them regarding the task of responding to student writing: what’s important here.
    • In groups of 3-4 students they cycle through a series of tasks, spending 15-20 minutes on each. The groups move around, so they can always see the work of the preceding groups if they want:
      • There’s a student essay with instructor comments. Their job is to read it, discuss it, and prepare a group evaluation of the instructor’s feedback. Just a paragraph that they write on a piece of paper and leave for the next group to see.
      • A shared google doc where their task is to write a document that explains to students what they should expect to see in their feedback and how to use it.
      • At the whiteboard, they brainstorm a list of follow-up research questions and look for some current scholarship that might address them.
      • A 2-pg student essay to which they have to respond as a group. They write the response and leave it for other groups to see.
    • A 5-minute break.
    • 20 minutes as a class discussion on the google doc and material on the whiteboard, with each group having someone speak for them.
    • That leaves 45 minutes to address what’s going on in their classes, planning for the next assignment, problems with students, etc. Before class I will ask each student to post a comment or question related to their teaching. Then we can organize small groups around the questions and discuss them. I can circulate and step in as needed.
  3. Undoubtedly this approach requires a different kind of lesson planning (maybe more attention to it as well). There will be a challenge of coming up with new activities without it seeming gimmicky. Also one is continually pushing up against cultural expectations that the classroom experience should be one way rather than another (even if that experience is not good). That said, the TAs face similar challenges in taking such approaches in their classrooms, so I don’t see why I shouldn’t step up as well.

de-baits in the digital humanities

1 May, 2016 - 07:48

The LA Review of Books has published four interviews so far in an ongoing series on the digital humanities conducted by Melissa Dinsman. The series promises: “Through conversations with both leading practitioners in the field and vocal critics, this series is a means to explore the intersection of the digital and the humanities, and its impact on research and teaching, American higher education, and the increasingly tenuous connection between the ivory tower of elite institutions and the general public.”

At this point, I am not interested in resolving the following questions:

  • What is the nature of the digital humanities’ relationship with neoliberalism?
  • What does it take to be a real digital humanities scholar?
  • Can/should digital humanities save the rest of the humanities?
  • Is/are the digital humanities anti-intellectual?
  • Is/are the digital humanities racist, sexist, or guilty of some related ethico-political violation?
  • And, of course, what is/are the digital humanities anyway?

I have been interested in the rhetoric of these conversations as they occur in journals, in the press (like LARB), at conferences, and across social media.

While I have no answers to these questions, I feel confident in saying that the rhetorical moves one sees here are commonplaces in the humanities for seeking to delegitimize one’s opponents. I suppose one could say that’s because critique is as critique does, and it’s rhetorically effective regardless of whether the one making the accusation believes it or whether it is true. (Again, not interested in adjudicating here.) To be clear, I’m quite certain that those offering critiques of DH firmly believe in their arguments. That said, believing in critique is a little like believing in a hammer. It’s largely unnecessary for the tool to do its job. Instead, the role of belief probably lies more in deciding that the hammer is the right tool for the job that needs doing.

Now I am tempted to try to understand the arguments that are at work here. For example, I am fairly confident that the underlying objection leveled at DH is an objection to a quantitative, empirical methodology, which is clearly facilitated by computers but in an abstract sense wouldn’t require them. That is to say, in a kind of Philosophy 101 “what if” scenario, if some superhuman genius were able to process massive amounts of text and perform extensive calculations on that data without a computer, the objections to the results would be the same.

The only problem with trying to do that is that it simply will not get you anywhere. In academic disagreements, one never accepts another’s summary of one’s argument. Honestly, I think Socrates was the last person able to get away with that, and that’s probably only because Plato wrote both parts.

As a rhetorician, your next move might be to try to understand the purposes driving these arguments. It’s easy enough to get the basics of the thesis statements (e.g. “DH is some kind of bad” and “No, it isn’t”). Apparently I am arguing in my spare time. But why are they really doing this? If this were a television reenactment of couples therapy, then maybe we’d be trying to get at what each person was really feeling, what their real motivations were. But I am not interested in hermeneutics, as I think I’ve already said.

Instead I’m interested in de-baiting, removing the fuel from the argument rather than arguing. Not because I don’t like arguing (after all, I am a rhetorician) and not because I want everyone to get along. I have no illusions that anything I might say would end such arguments. You’re thinking there of an entirely different narrative. It’s a spin on the one where the two guys have to fight it out before they can become buddies. In the spin, the guys have far too much ego to ever decide to become buddies on their own; someone else has to step in (e.g. the police chief or maybe “the woman with a past”) to insist that they join forces. Those roles are older than Plato.

This in/ter/vention is a less well known, more experimental and heuristic method. The idea, I guess, is to proliferate responses to these questions, not in an effort to get to the truth but only for the purpose of making them productive of something else. It’s not for folks who are already occupied by these matters and have stakes there. It’s for the rest of us, particularly those of us who do “digital work” of some kind in humanities departments and don’t want our work territorialized by these arguments.

The method is a kind of modified electrate approach where argument is processed through the popcycle, image reason, and the punctum. Monty Python’s argument clinic is already a comic intervention into rational-rhetorical argumentation. It would be understandable if one remembered only the central joke of this sketch, where Cleese and Palin contradict one another, but it’s processing the rest of the sketch that can contribute to de-baiting. It’s the situation of argument as contradiction in the midst of abuse, complaint, and finally simply volunteering to have yourself hit on the head. Then of course there’s the conclusion, where an infinite series of cops arrives with the intention of ending the sketch with the imposition of their authority, but (in an abstract sense) it can never happen, because there’s always another cop on the beat.

It’s not necessary to cast our colleagues in these roles, though one might take a cue from Ulmer’s Heuretics and create a tableau vivant where the argument clinic de-baits the digital humanities. The point, though, is to attune oneself in a different, productive way.

Categories: Author Blogs

slow of study and study of slow in academic life

21 April, 2016 - 09:37

The recently published book, The Slow Professor: Challenging the Culture of Speed in the Academy, is probably too easy a target. As comes up in a recent Inside Higher Ed article, few are going to feel any sympathy for tenure-track, let alone tenured, professors, least of all those who work most closely with us: graduate students, adjunct faculty, administrators, and so on. From a greater distance, one might legitimately ask who has greater job security or greater latitude in defining their work than tenured faculty?

The answer is not many.

Pleas for sympathy aside, there’s little doubt that the academy has changed a great deal in this century. The book refers to this change as corporatization. Certainly we’ve become more bureaucratic and more economically driven (both in terms of how students view their majors and how administrations value departments and programs), and we’ve been transformed by digital culture (like the rest of the world). Stereotypes notwithstanding, there is growing empirical evidence that faculty work long hours (61 hours per week on average) and that a significant number experience stress and/or anxiety in their work.

Personally, there’s no doubt I’ve experienced stress, anxiety, and general unhappiness with my work at times. Who hasn’t? The fact that it’s a common experience doesn’t mean we shouldn’t do something about it; indeed, one might say it’s more of an argument for addressing the issue seriously. And I don’t just mean for academics.

As the book’s title suggests, the general condition under question here is “the culture of speed.” This notion is of interest to me, more from a technological than a corporate-bureaucratic perspective (though the two are related). As I’ve often written here (as is perhaps the underlying kairos of my work), we do not yet know how to live in our digital culture. The struggles of academics are just one slice of that general problem. Our connections to media have altered our capacities such that we no longer know what it is that we should do. Institutionally we have new capacities to measure, analyze, communicate, organize, and so on, but I don’t think we know what we should be doing there either. And the related post-industrial shift in the economics of college only exacerbates the problem: we have (or at least feel we have) very little room for error.

That’s not stressful at all, heh. We don’t know what we should be doing, but we had better start doing it fast, and we had better not mess it up.

So here’s my abstract-theoretical disciplinary response to this. We need to develop new rhetorical-cognitive-agential tactics for our relations with media ecologies. That begins with recognizing that rhetorical practice, thought, and agency are not inherent, let alone ontologically exceptional, qualities of humans but rather emergent, relational capacities. Once we recognize that, we can begin to develop those capacities. I’m certainly not going to tell you what you should be doing. If you’re looking for that, I’m sure you can find it elsewhere. I’ll just say that what you should do is logically a subset of what you might do, and what you might do is a product of your capacities, which are themselves fluid.

If I had to guess at this problem, I would imagine that the stress and anxiety arise from a combination of the way academics tend to identify strongly with their work (more so than people in other professions) and the growing disconnection between what academics imagine their work (and hence their identity) would/should be and what it is actually becoming. That is, if one didn’t identify so strongly with a particular image of one’s profession, then changes to that profession probably wouldn’t make one feel quite so miserable. My personal confession on this matter is that over the years I have come to view my work as a less central part of my identity. And I think I’m better off for that and, honestly, no less productive.

I’m not suggesting that academics shouldn’t be involved in shaping the future of the university. To the contrary, I think it is a key part of what we should be doing collectively, though that responsibility is likely one of the key examples of the kind of work most academics don’t really imagine as part of their identification with the profession.

I guess I don’t know how to conclude this post except to say that we need to invent, in a collective fashion, a better way for universities to work and for academics to work within them, and that whatever that is, it is likely not a replication of the past.

Categories: Author Blogs

Technical Writing Lecturer at UB’s School of Engineering.

8 April, 2016 - 12:53

I thought some of you might be interested in this position:

https://www.ubjobs.buffalo.edu/applicants/jsp/shared/position/JobDetails_css.jsp?postingId=214396

Position Summary

The School of Engineering and Applied Sciences (SEAS) seeks candidates for a Lecturer position, beginning with the 2016-2017 academic year. We are particularly looking for candidates who can operate effectively in a team environment and in a diverse community of students and faculty and share our vision of helping all constituents reach their full potential.

The successful candidate is expected to develop a Communication Literacy 2 course (EAS 360), a central component of the new “UB Curriculum” for General Education. The aim of this course is to prepare students to successfully communicate, across a range of professional genres and media, to technical, professional, and public audiences; to produce communications individually and as part of a team; and to produce communications that are consistent with ethical engineering and applied science practice. All engineering and computer science undergraduate majors will participate in the course. The successful candidate would be expected to teach six sessions of the course per academic year. The lecturer will work closely with school and department leadership on course development as well as on accreditation-related assessment, and to coordinate activities with other aspects of the undergraduate engineering experience. The lecturer may also be involved in other professional and scholarly activities including developing proposals for educational funds to aid in pedagogical advancements.  

Position Category:

Faculty  

Minimum Qualifications

An M.A. or M.S. in English, Communications, or a related field, with a focus on professional/technical communication or composition. The Masters degree must be conferred before appointment.  

Preferred Qualifications

Demonstrated experience teaching technical or professional communication at the college level or experience in the practice of technical or professional communications. A Ph.D. in English, Communications, or a related field, with a focus on professional/technical communication or composition.  

Salary Range

$45,000 – $70,000

Categories: Author Blogs

students can’t write and other slow news days

6 April, 2016 - 07:13

Making the Facebook rounds of late is this article that makes the titular observation that “Poor Writing Skills Are Costing Businesses Billions.” Huh. Maybe so. The article, posted a week ago, cites three reports on this situation… from 2004, 2006, and 2011.

Maybe the situation hasn’t improved. Probably not. I doubt anything systematic has been done to address the issue, despite these and many other reports. Besides, “______ can’t write” is a timeless classic. It hardly requires evidence.

Here’s the number from this report that I love. Businesses are spending $3.1B annually to instruct employees in writing. That’s a number from the 2004 report. So I’m not sure what that means, except that for that money (roughly $150 per student) you could easily teach a writing course to every college student (~20M people) in America. But here’s really the one thing you’d want to say about this:

suckers.

Meanwhile, a 2015 Ithaka S+R study indicates that 54% of faculty believe students have “poor skills related to locating and evaluating scholarly information.” The same study, though, indicates that “Approximately two-thirds of faculty members strongly agreed that improving their undergraduate students’ ‘research skills related to locating and evaluating scholarly information’ is an important educational goal for the courses they teach.” So what do we make of that? Two-thirds of us (more in the humanities) say teaching these skills is important, but most of us still believe our students are poor at them.

This is a familiar refrain about writing as well, as I’m sure you know. Yes, we say, it is important that students learn to communicate. Yes, we say (especially in the humanities), teaching students to communicate is an important part of what we do in our classes. No, we say, our students are not good writers/communicators.

Meanwhile, in the corporate world, one spends over $3B trying to help college grads write better… I wonder how that’s working out?

Perhaps one might believe all this leads up to that traditional belief that writing can’t be taught. What does that mean? Obviously people do learn to write. I mean, I’m not able to do this because I picked up a magic frog when I was 5. So are we suggesting that writing is the one thing that people cannot learn in a systematic, socialized way? Sure, we can’t all learn to write like “fill in your favorite author.” Similarly we can learn to play soccer but probably not like Messi. And that might be a limitation of intelligence or some inborn talent, but it’s also about the sheer amount of time we’re willing and able to devote to the task.

So what if we start with a different premise?

Students, college grads, and corporate workers all are able to write and research well. They learn and adapt to the rhetorical-informational practices of the various communities and networks they encounter in reasonable and predictable ways, adopting these practices about as quickly and effectively as they take to other aspects of their community’s culture.

With this premise, we might come to a similar course of action but without finding fault in students. Our problem, I would (probably unsurprisingly) say, is that “we” view writing as an interiorized, rational skill that humans carry around in their brains. No doubt, part of writing happens there. If we viewed writing as a distributed, networked activity that is widely variable from one site to another then we would understand the challenge of helping students and workers link into this new activity differently.

So this kind of stuff drives me a little nuts.

  • Students come to college having never done college research. Imagine that. As it turns out, it takes a couple of years to learn how to do that, even at the level we expect of undergraduates, in part because there’s almost no “academic” research written for undergraduate audiences, so it takes years to acquire the context needed to understand the scholarship. I wonder what it would be like if we wrote some research with them in mind? Not just instructional textbooks, but actual research we are doing, communicated to an undergraduate audience for the purpose of helping them adjust to this new rhetorical practice.
  • Students also enter your major having never written for your discipline before. Shocking. I wonder what rhetorical roles we offer new undergraduate writers in our discourse communities? What rhetorical work can they do? What purposes can they accomplish? If there were something that students could write that served a purpose other than demonstrating that they don’t know how to write or do research, then maybe we would discover some other attributes of their writing ability.
  • What structures exist to assist students in connecting to the rhetorical-compositional structures of an academic community (or later, workers in a corporate one)? I know we say we teach these things and spend billions on them, but given our misunderstandings of how these things work, I am skeptical of the effectiveness of these efforts.

Of course the other way of looking at this is to say that on the whole, college students manage to graduate, get jobs, and keep them (or at least not lose them because they are poor writers). People figure out what they need to figure out. We can undoubtedly help more students be more successful with a better-informed approach to this pedagogical task, but none of that is likely to change the views of professors and corporate officers about their students and employees.

Categories: Author Blogs

paleorhetoric and the media ecology of flint knapping

17 March, 2016 - 08:33

The feature article in Scientific American (subscription required) this month addresses the role that flint knapping (the practice/art of making Stone Age tools by striking one rock against another) might have played in the development of the human brain, language, and even teaching. (And here’s a related article in Nature if you prefer more academic prose.) Here’s the gist.

The basic idea that toolmaking shaped the human brain is not new. It’s at least 70 years old and ascribed to anthropologist Kenneth Oakley in his book Man the Tool-maker, though one might observe that the notion of homo faber is centuries old. It was discredited among behavioral scientists in the 1960s when it was observed that nonhuman species also used and even made tools. As the article recounts, “As paleontologist Louis Leakey put it in his now famous reply in 1960 to Jane Goodall’s historic first report of chimpanzee tool use: ‘Now we must redefine tool, redefine Man, or accept chimpanzees as humans.’” In abandoning tool use, behavioral scientists turned to complex social relations. Without putting too much pressure on a single sentence, one can see some of the difficulties here. First, why would tool use have to provide evidence of human cognitive exceptionalism in order for it to play a role in our cognitive development? That is, why would the fact that other animals use or make tools serve as evidence against the role of tools in forming our brains? In fact, wouldn’t it work the other way if we could see this effect across species? Second, why would tool use and “complex social relations” be exclusive rather than mutually reinforcing parts of an explanation for human cognitive development? That is, the discovery of better tools puts pressure on (and facilitates) the formation of more complex social arrangements in order to use those tools, which leads to further tool development, and so on: and all of this shapes human cognitive development. We certainly see that in the Industrial Age.

The research referenced above takes up a view like this and links the practices of experimental archeology with neuroscience. Experimental archeology is essentially the effort of archeologists to learn ancient methods through trial and error. In this case, it means learning how to flint knap. What’s added here is neuroscientific study of the experimenters’ brains to see how learning to knap affects them. Anyway, to make a long story short:

the toolmaking circuits identified in our PET, MRI and DTI studies were indeed more extensive in humans than in chimps, especially when it came to connections to the right inferior frontal gyrus. This finding became the final link in a chain of inferences from ancient artifacts to behavior, cognition and brain evolution that I had been assembling since my days as a graduate student in the late 1990s. It provides powerful new support for the old idea that Paleolithic toolmaking helped to shape the modern mind.

However, the research has some further suggestions. It’s certainly possible to learn knapping independently, through trial and error, but learning from others through observation and imitation makes it a lot easier. One can imagine a small group of early humans creating tools and learning new techniques simply through observation. Then, at some point, one hits upon the notion that one could intentionally demonstrate a technique to another for the purpose of that other person imitating the first. That’s teaching. Needless to say, some symbolic behavior could come in handy here as well. And that’s the speculation that ends the article:

The results of our own imaging studies on stone toolmaking led us recently to propose that neural circuits, including the inferior frontal gyrus, underwent changes to adapt to the demands of Paleolithic toolmaking and then were co-opted to support primitive forms of communication using gestures and, perhaps, vocalizations. This protolinguistic communication would then have been subjected to selection, ultimately producing the specific adaptations that support modern human language.

I’ve written about the notion of paleorhetoric a couple times here. It also comes up in The Two Virtuals. I think this issue is an important component of a new materialist rhetoric. Why? Though I find value in the speculative investigation of nonhuman rhetorical activity that typifies much current new materialist rhetorical study, I believe it is important for us to be able to see rhetoric as an ecological phenomenon in which humans and nonhumans co-participate. It is possible and often useful to cut the world at its joints and say “here’s human rhetoric” and “here’s nonhuman rhetoric,” but what we see here (possibly) is the emergence of symbolic communication among humans in the rhetorical encounters among people and rocks. It’s not just human rhetoric or human language.

And perhaps most importantly, rhetoric was already at work. The expert knapper learns to address his/her strikes of the stone: the angle and the force. One must conceptualize and plan. This is clearly composition, as we know composition has never been only about writing. However, it is about expression in a medium. It is instauration, as Latour would put it. Perhaps, gentle reader, you are reluctant to call this rhetoric, but we certainly seem willing to call it art, right? The art, and artifice, of flint knapping? Why do we find it easy to imagine art without rhetoric? Yes, no doubt, we could say that rhetoric is a kind of art. And when by rhetoric we mean the specific oratorical practices taught to young Athenians, that would make sense. But when we think of rhetoric ecologically, as the capacity for expression and incorporeal transformation that arises in the relation among humans and nonhumans, it is more than that. Certainly, when a paleo human struck one rock with another, that was hardly an “incorporeal” expression. And you can spend all day turning big rocks into smaller ones, expressing personal frustration or perhaps as punishment. But if you strike one rock against another until something that was a “rock” becomes an “axe,” then you’ve done more than exert simple force. Those repeated blows have also incorporeally transformed that second rock. Its relation to the knapper has changed, and new capacities have emerged. Instead of a person with a rock we now have a person with an axe. In other words, it’s not just the rock that has been incorporeally transformed; the human has as well.

This is what new materialist rhetoric is (at least partly) about: understanding how the relations among humans and nonhumans shape capacities for thought and action.

Categories: Author Blogs