Digital Digs (Alex Reid)

an archeology of the future

prestige education in the network age

11 September, 2014 - 10:09

Perhaps you have seen Steven Pinker’s response to William Deresiewicz’s “Don’t Send Your Kid to the Ivy League,” in which they variously decry and defend the Ivy League. I can’t really speak to the conditions of an Ivy League education, nor do they especially interest me. My daughter is a high school junior this year and an exceptional student, good enough to compete for an Ivy League admission. Her friends are similarly talented. I see in them many of the qualities that concern both Deresiewicz and Pinker. It’s not the kids’ fault at all. They are being driven to become caricatures of the ideal college applicant. What does it mean to be the kid with the high test scores, the AP classes, the high GPA, high school sports, science olympiad, club officer roles, conspicuous volunteerism, and so on? I think it’s a reasonable question to ask. I think it’s obvious that students and parents pursue such institutions for the prestige associated with the name, which does seem to translate into better job opportunities. Who knows, maybe the people with Ivy League degrees are better humans than the rest of us… probably not though.

How far does that prestige effect extend beyond the Ivies? To the elite private liberal arts colleges, certainly. To a handful of other elite private and public universities (MIT, Stanford, Berkeley, e.g.), no doubt. For example, consider this list of AAU universities. Do they all get to be “prestigious”? And if so, what does that mean? Does it mean that large numbers of students apply from around the country and the world to attend your university? Not necessarily. I’m not saying these aren’t good schools. They have very good reputations. They are obviously all “highly ranked” by some metrics, which is why they are AAU schools. My point is simply in terms of the marketplace of student admissions. What does prestige actually get you and who cares about it? I don’t think this is an idle question, particularly for the humanities, which operate as a prestige-driven reputation economy. The typical humanities departmental strategy is to hire for, support, and promote faculty prestige/reputation, and the metrics are driven by this valuation to some extent as well. However, how many undergraduate students at large public research universities, for instance, are choosing majors based upon national department reputation? Put differently, what is the reputation of reputation?

I want to offer an odd juxtaposition to this. Also recently in the news is Apple’s latest iPhone. See, for example, Wired magazine’s speculation on the impact of the new iPhone on filmmaking. I don’t want to make this about Apple, but more generally about technological churn. We’ve already had widespread consumer filmmaking for at least as long as we’ve had YouTube, but as the phone technology improves the possibilities expand. Now perhaps I should use this juxtaposition as an opportunity to talk about digital literacy, but that’s not exactly my point. Instead, the point I want to make is that technological churn shifts the ways in which reputation is produced and maintained. While I can share in the general academic skepticism about Apple ads and their suggestion of how their technologies enable people to do cool things, at the same time, in a broader sense, human capacities are being altered through their interaction with technologies.

It’s completely obvious in a humanities department, where reputations are built on publishing monographs, that reputation is driven by technology. The principle is not foreign to us, even if we are generally blind to the ways in which our disciplines are tied to technologies. Just like the prestige of getting into an Ivy, reputation hinges on access. This creates a series of feedback loops. Elite humanities departments can support their faculty in monograph production, so they produce more books, so the departments remain elite. It used to be that filmmaking was expensive and technically complex, so only professionals could really make and distribute films. It’s still hard to make a good film, but the barriers are otherwise lower. So I suppose this could be an argument for digital scholarship and/or changing the ways in which we work as humanities scholars, but I just want to focus on reputation/prestige.

If reputation is about our participation in a technological network (of books, e.g.) and we are building an entire disciplinary and departmental infrastructure and strategy to facilitate that participation, then how is that really different from Deresiewicz’s zombie undergrads with their endless activities? Aren’t we both just building reputations within some arbitrary network? If we are spending hours upon years on dissertation research and monograph writing to get jobs and tenure and improve department reputation, so that we can “get into” or stay in categories like AAU, then we can point to the monetary rewards that accrue, just like those aspiring Harvard students. However, just like those students we are investing a lot of effort and money in those goals as well. If you get into (and out of) your Ivy, then you can probably feel confident that your investment is going to work out. For those of us in the humanities, the future is less certain because the sustainability of the reputation-technology network we employ is more tenuous.

So what will our future reputation network look like? Obviously it won’t be iPhone filmmaking, but it obviously won’t be monographs either. If the 20th-century English department was born from Victorian literary culture, industrial printing, electrification, and the increased demand for a print literate workforce, then what analogous things might we say of the 21st-century version of the discipline? If 20th-century scholarly labor and reputation in turn hinged on our ability to study and engage with these technologies, then what would be the 21st-century analogy?

The problem that Pinker and Deresiewicz have is that the criteria upon which applicant reputations are built make no sense, which in their view does harm in the end to both the students and the institution. We could say the same thing about the humanities, where the cause of the reputational disconnect seems fairly obvious. What is less obvious is how one goes about shifting those terms.


why does the web need to be “social”?

5 September, 2014 - 08:33

When I was starting out in grad school, I saw Timothy Leary speak about the Internet as “electronic LSD.” It was the early nineties, pre-Netscape if memory serves. The ideas he was offering up were not that different from the argument he made in his essay “The Cyberpunk: The Individual as Reality Pilot,” which was anthologized in Larry McCaffery’s Storming the Reality Studio, that well-known anthology capturing the zeitgeist of 80s cyberpunk. I am not here today to advocate or express nostalgia for this moment. In fact this essay would probably be familiar to you for its romantic, libertarian/anarchist, masculinist, Eurocentric, techno-optimistic sentiments, which seem to strike a familiar but ironic tone in the context of the dystopian worlds cyberpunk literature portrays. For example, Leary writes:

The CYBERPUNKS are the inventors, innovative writers, techno-frontier artists, risk-taking film directors, icon-shifting composers, expressionist artists, free-agent scientists, innovative show-biz entrepreneurs, techno-creatives, computer visionaries, elegant hackers, bit-blitting Prolog adepts, special-effectives, video wizards, neurological test pilots, media-explorers-all of those who boldly package and steer ideas out there where no thoughts have gone before.

CYBERPUNKS are sometimes authorized by the governors. They can, with sweet cynicism and patient humor, interface their singularity with institutions. They often work within “the governing systems” on a temporary basis.

As often as not, they are unauthorized.

Perhaps, in the age of cultural studies (even though Leary does cite Foucault), we might attempt to recoup such views through Haraway’s cyborg manifesto or Deleuze and Guattari’s nomads and rhizomes or maybe even Hakim Bey’s temporary autonomous zones. It’s easy enough to say that these fantasies built the web we have today, or more generously maybe that the contemporary web is what happens after state capture and reterritorialization. So let’s not go there. When I think of the social web (Facebook, Twitter, etc.) I know what I think of: family events, witty remarks, linking something funny or heartwarming, academic politics, current events. The social web manages somehow to demonstrate that the self-aggrandizing “greed is good” ethos of the 80s and the “sharing is caring” mantra of 90s children’s programming are compatible. It’s about as far from the vertiginous romanticism of cyberpunk as one could get. It’s more like a toned-down, less-interesting, pathetic version of Snow Crash or maybe one of Sterling’s novels.

So while I’m not interested in advocating the seemingly individualistic “anti-social” cyberpunk, I am unhappy with the word social.

Maybe it’s the Latour in me that has trained me to raise a skeptical eyebrow at the word “social.” Latour rails against the common view that there is some kind of social stuff. As he makes clear at the start of Reassembling the Social:

The argument of this book can be stated very simply: when social scientists add the adjective ‘social’ to some phenomenon, they designate a stabilized state of affairs, a bundle of ties that, later, may be mobilized to account for some other phenomenon. There is nothing wrong with this use of the word as long as it designates what is already assembled together, without making any superfluous assumption about the nature of what is assembled. Problems arise, however, when ‘social’ begins to mean a type of material, as if the adjective was roughly comparable to other terms like ‘wooden’, ‘steely’, ‘biological’, ‘economical’, ‘mental’, ‘organizational’, or ‘linguistic’. At that point, the meaning of the word breaks down since it now designates two entirely different things: first, a movement during a process of assembling; and second, a specific type of ingredient that is supposed to differ from other materials.

Not surprisingly, people battle for the claim to have invented the term “social media,” though it’s mostly corporate and web entrepreneur types.  But what does the adjective social mean here? I’m thinking it’s supposed to mean media technologies that promote socializing, as in many-to-many rather than one-to-many. Perhaps in a technical sense it does. But the web always did that. If we think of Latour’s view of the social in his conception of a sociology of associations, then I suppose we’d begin by thinking of social media applications as actors that produce new associations: new communities and new genres/discourses. I guess that’s a fairly basic starting point that tells us almost nothing; we are, as always, instructed to follow the actors and their trails. In the end though, through social media we are “made to do” things. Not compelled exactly. It’s just that Facebook or Twitter or whatever activates particular capacities within us over others. Those are not “social” capacities. They are not made of social stuff, nor do they do social things as opposed to other capacities that would be non-social.

Leary’s cyberpunks built much of the underlying technology of the social web, perhaps with “sweet cynicism and patient humor.” And then I suppose they persist in the niches of hacker culture, while the typical user becomes immersed in a new “social.” Of course a platform like Facebook or Twitter with its millions of users is diverse in the experiences it might provide on an individual basis. At the same time, an investigation like Manovich’s Selfiecity offers some insight into how technologies generate commonalities. It’s not really my point to say that we are or aren’t being brainwashed by social media. It’s hard to get outside of the binary of the romantic narrative that Leary tells, or that even gets read into something like Deleuze and Guattari’s articulation of nomads and the state.

My point in the end is more pragmatic and less interesting on some romantic, visionary scale. Can we stop calling this media “social”? What else could we call it? And if we called it something different would we gain a better understanding of it? One that wouldn’t lead us nostalgically toward Leary, send us running in fear of a technopoly like Benjamin’s angel of history, or push us mind-numbingly toward a corporate mall culture, or whatever other cheesy narrative you can construct when we imagine that technologies are social.


the eternal September of the no laptop policy

31 August, 2014 - 08:39

It’s the time of year when academics like to talk about their syllabi and inevitably the no-laptop policy arises. It is evidence of a recurring theme: we do not know how to live, let alone learn, in a digital networked environment. It’s hard to blame the faculty, though it’s difficult to figure out who else might be responsible. The classes we teach are in the same rooms and buildings, follow the same schedules, and are essentially understood in the same terms as they were 30 years ago. Yes, there’s wi-fi now, as well as 4G LTE signals, permeating the classrooms, and yes, almost everyone has some device that links to those signals. (As I’ve mentioned in prior posts, at UB anyway students bring an average of 5 wi-fi enabled devices to campus.) However, students don’t know how to use these devices in the classroom to support the learning objectives, and the learning objectives themselves remain rooted in a pre-digital world, as if what and how we learn hadn’t been transformed by our new conditions. As a result, faculty don’t know how to use these devices to define and achieve learning objectives either.

The Chronicle (of course) published a piece recently in which one professor, Anne Curzan, offers her explanation of her own no-laptop policy. I appreciate the thorough explanation she offers to her students in her syllabus, including citing research on multitasking, effects on test performance, and so on. It’s a very old story, right? New technology affects our ability to think, nay, remember things. Just like the Phaedrus. Sure, that’s just some old myth about Thoth, while with laptops we’re talking scientific research. Sure, except that people living in an oral society do have memory capacities that are different from ours. What do we imagine that distributed cognition means? It means that we think in conjunction with tools. It means we think differently in the context of digital networks. And that’s scary and difficult. Obviously, because these are the recurring themes in our discussion of educational technologies.

Curzan and the many, many other professors with similar policies have educational objectives and practices that have no place for emerging media. It makes perfect sense that if the purpose of coming to a class is to take notes on a lecture then a laptop is of limited utility. Yes, you can take notes on a laptop but that’s like driving your car at the speed of a horse-drawn carriage. If the purpose of class is to engage in class-wide discussion or group work then maybe those devices have a role to play but that depends on how the professor shapes the activity. For example, a typical pre-wifi class/group activity I did was to ask students to look at a particular passage in the reading, figure out what it means, and discuss what they think about it. Today, depending on the particular reading, there’s probably a good deal of information online about it and that information needs to be found, understood, and evaluated. It’s also possible to be in real time conversation with people outside the classroom, as we know. So that activity isn’t the same as it was 10-15 years ago. It’s possible that the laptops could distract from the activity, but distraction is always a problem with a group activity.

Can we imagine a liberal arts degree where one of the goals is to graduate students who can work collaboratively with information/media technologies and networks? Of course we can. It’s called English. It’s just that the information/media technologies and networks take the form of books and other print media. Is a book a distraction? Of course. Ever try to talk to someone who is reading a book? What would you think of a student sitting in a classroom reading a magazine, doodling in a notebook or doing a crossword puzzle? However, we insist that students bring their books to class and strongly encourage them to write. We spend years teaching them how to use these technologies in college, and that’s following even more years in K-12. We teach them highly specialized ways of reading and writing so that they are able to do this. But we complain when they walk in, wholly untrained, and fail to make productive use of their laptops? When we give them no teaching on the subject? And we offer little or no opportunity for those laptops to be productive because our pedagogy hinges on pretending they don’t exist?

Certainly it’s not as easy as just substituting one medium for another. (Not that such a substitution is in any way easy, and in fact, the near impossibility of making that substitution will probably doom a number of humanities disciplines, but that’s a subject for another post.) To make it happen, the entire activity network around the curriculum needs to be rethought, beginning with the realization that the network we have is built in conjunction with the legacy media we are seeking to change. We need to change physical structures, policies, curriculum, outcomes, pedagogy…

It is easier to just ban laptops.

 

 


academic freedom, social media, and the university without conditions

29 August, 2014 - 10:15

Let’s call this a “Law and Order” style post, as in “inspired by real events.” This is also, I believe, a classic example of a Latourian “matter of concern.”

Without suggesting in any way that the principles of academic freedom ought to be modified or interpreted differently, it should be clear that the material conditions of communication have completely changed since the last time (in 1970) the AAUP “interpreted” the 1940 Statement of Principles of Academic Freedom and Tenure. Though there is a Statement on Professional Ethics that was last revised in 2009, it seems clear to me that it still fails to account for our changing conditions (maybe there wasn’t a critical mass of academics on Twitter yet). The key paragraph in that document is probably:

As members of their community, professors have the rights and obligations of other citizens. Professors measure the urgency of these obligations in the light of their responsibilities to their subject, to their students, to their profession, and to their institution. When they speak or act as private persons, they avoid creating the impression of speaking or acting for their college or university. As citizens engaged in a profession that depends upon freedom for its health and integrity, professors have a particular obligation to promote conditions of free inquiry and to further public understanding of academic freedom.

For me, there are two interesting points here. First, that professors have “rights and obligations” that are no different from those of other citizens. And second, that when they speak or act as private persons they “avoid creating the impression of speaking for their college or university.” Let’s deal with the second point first. Exactly how do you avoid that impression in social media? I suppose if you in no way identify yourself as a professor in your profile page and you can’t be googled and identified as such. What is due diligence in terms of “avoiding” here? It’s not like one can invoke the online version of Robert’s Rules to insist that an audience not associate one’s speech with one’s institution. And as for having the same rights and obligations as other citizens, that’s hardly much solace. Do we imagine that high profile professionals in corporate America are not subject to personal conduct policies? We know that people get in trouble, or don’t get hired, because of what they post on Facebook and such. We teach our students about this all the time. So the notion that we have the same rights and obligations as any private person is something to think about.

So one argument says that professors should be able to write/say anything that falls within the protections of the First Amendment and not be subject to any professional or institutional consequences. Of course this is not practically possible to ensure because everything we say has consequences, often unintentional ones. This is an inherent risk of communication. I could be writing something right now that angers some reader who will remember and some day be disinclined to publish something I’ve written or promote me or whatever. No one can control that. That’s always been the case. There have always been feuds among faculty in departments, where one professor always opposes anything the other one suggests. Only now, with social media, this business plays out on a larger scale. One can say that the acts of an institution are a different matter, and that’s true, but those acts are always actually taken by individual people sitting on a committee or in an office somewhere. That angry letter to the editor you may have written 10 years ago, instead of being buried in some archive where no one can find it, now comes up on the first page of a Google search for your name. And really anyone in America can find it and read it at any time, not just the couple thousand local folks who might have turned to page 53 of the newspaper one night a decade ago. We all know this already. So how can we pretend that our circumstances have not changed?

I know that we want to imagine that all these things are separate, but they were never inherently separate. As Latour would suggest, there were many hybrid technologies at work beneath that old 20th-century system constructing order, like Maxwell’s demon. But those old systems no longer function. The question then becomes what system should we build? In answer to this question, one often hears Derrida’s concept of the “university without conditions” cited:

[t]his university demands and ought to be granted in principle, besides what is called academic freedom, an unconditional freedom to question and to assert, or even, going still further, the right to say publicly all that is required by research, knowledge, and thought concerning the truth.

I would note that the key point here is that this is the university without conditions and not the professor with conditions. Professorial freedom has always been constrained by editors, reviewers, granting agencies, etc. We know what the university of print looked like and how it aspired to, though obviously did not reach, Derrida’s idealized institution. Whatever the digital university will look like, whether it is better or worse, it will clearly be different, because it already is.


pedagogy, computers and writing, and the digital humanities #cwdhped

17 July, 2014 - 10:44

Over the past couple of days there’s been a Twitter conversation (#cwdhped) and an evolving open Google doc that explores the idea of some summit or FTF discussion among scholars in the digital humanities and those in computers and writing on shared interests in pedagogy. For those who don’t know, “computers and writing” is a subfield of rhetoric and composition that focuses on technological developments. I’ll reserve my comments about the weirdness of such a subfield in 2014 for another day. Let’s just say that it exists, has existed since the early 80s, and that there’s a lot of research there on pedagogical issues. Digital humanities, on the other hand, is an amorphous collection of methods and subjects across many disciplines, potentially including computers and writing and possibly including people and disciplines that are not strictly in the humanities (e.g. education or communications or the arts). So, for example, when I think of the very small DH community on my campus, I’m meeting with people in Linguistics, Classics, Theater/Dance, Anthropology, Education, Media Study, Architecture… Some of these people are teaching students how to use particular media creation tools. Some are teaching programming. Some are doing data analysis. Some are teaching pedagogy. Most of the digital-type instruction is happening at the graduate level. And none of it is happening in what we’d commonly think of as the core humanities departments (i.e. the ones with the largest faculty, grad programs, and majors). Of course that’s just one campus, one example, which raises the following questions:

  • What % of 4-year US colleges have a specific digital outcome for their required composition curriculum?
  • What % of those campuses have a self-described “digital humanities” undergraduate curriculum that extends beyond a single course?

I would guess there are ~1000 faculty loosely associated with computers and writing, maybe fewer. I’m sure they are doing digital stuff individually in their classrooms, but is there something programmatic going on on their respective campuses? There are 100s(?) of professional writing majors now, most of which have some digital component, but sometimes it is still just one class. And if we stick to the MLA end of the DH world, how many English and/or language departments have a specific DH curriculum? How many have any kind of DH or digital literacy outcomes for their majors?

This leads me to the following question/provocation: setting aside composition courses, how many different courses does the average US English department offer each year with an established digital learning outcome or digital topic in its formal catalog course description? I think that if I set the over/under at 2.5 you’d be crazy to take the over.

My point is that when we are talking about DH pedagogy, we are talking about something that barely exists in a formal way. If you want to think about 1000s of professors and TAs doing “something digital” in their courses here and there, then yes, it’s all over the place. And yes we are using Blackboard, teaching online, and so on. And maybe we could come up with a list of 25 universities that are delivering a ton of DH content, the 100s of institutions with professional writing majors are offering an above average amount of digital content, and the English departments that are delivering secondary education certification might be delivering the required digital literacy content for those degrees, but put into the context of 3000 4-year colleges and what do you see?

I think the same is true on the graduate level. We can point to some programs and to individual faculty, but nationally, how many doctoral programs have specific expectations in relation to DH or digital literacy for their graduates? I would bet that even at the biggest DH universities in the nation, you can get a PhD in English without having any more digital literacy than a BA at the same school. Rhet/Comp has a higher expectation than literary studies, but only because of the pedagogical focus and the expectation that one can teach with technology. This doesn’t mean that students can’t choose to pursue DH expertise at many institutions, at either the undergrad or grad level. It’s just not integral to the curriculum.

So my first questions to the MLA end of the DH community (just to start there) are:

  • What role do you see for yourselves in the undergraduate curriculum?
  • Is DH only a specialized, elective topic or should there be some digital outcome for an English major?
  • Should there be some digital component of a humanities general education?
  • What role should DH play in institutional goals around digital literacy?

The same questions apply at the graduate level. Is DH only an area of specialization or does it also represent a body of knowledge that every PhD student should know on some level?

If a humanities education should prepare students to research, understand and communicate with diverse cultures and peoples, then how that preparation is not integrally and fundamentally digital is beyond me. We really don’t need to say “digital literacy” anymore, because there is no postsecondary literacy that is not digital. Why is it that virtually every English major is required to take an entire course on Shakespeare but hardly any are required to have a disciplinary understanding of the media culture in which they are actually living and participating? (That’s a rhetorical question; we all know why.)

From my viewpoint, that’s the conversation to have. Tell me what it is that we want to achieve and what kind of curricular structures you want to develop to achieve those goals. The pedagogic piece is really quite simple. How do you teach those courses? You hire people who have the expertise. Sure there’s some research there, best practices, and various nuances, but that’s about optimizing a practice that right now barely exists.

 


rhetoric’s default mode

8 July, 2014 - 11:03

Following on my previous post, a continuation of a discussion of “neurorhetoric.” Generally speaking, rhetoricians, like other humanists, approach science with a high degree of skepticism, especially a science that might potentially explain away our disciplinary territory. As Jordynn Jack and others have pointed out, there is a strong interest in the prefix neuro-, and those in our field might benefit from looking bi-directionally at both the rhetoric of neuroscience (how neuroscience operates rhetorically as a field) and the neuroscience of rhetoric (what neuroscience can tell us about rhetorical practices). In her article with Gregory Applebaum (a neuroscientist), they point to the broader lessons from the rhetoric of science in approaching neuroscientific research, particularly to resist engaging in “neurorealism, neuroessentialism, or neuropolicy,” which are all variants of interpreting research as making certain kinds of truth claims. Similarly I tend to turn toward Latour here to think about the constructedness of science.

With that in mind, I was following my nose from my last post’s discussion of an article in Science to this article, “Rest Is Not Idleness: Implications of the Brain’s Default Mode for Human Development and Education,” by Mary Helen Immordino-Yang, Joanna A. Christodoulou, and Vanessa Singh. The “default mode” describes a relatively new theory of the brain/mind that identifies two general networks. One is “task positive,” which is a goal-oriented, outward-directed kind of thinking, and the other is “task negative,” which is inward-directed. The latter is the default mode and is concerned with “self-awareness and reflection, recalling personal memories, imagining the future, feeling emotions about the psychological impact of social situations on other people, and constructing moral judgments.” As they continue:

Studies examining individual differences in the brain’s DM connectivity, essentially measures of how coherently the areas of the network coordinate during rest and decouple during outward attention, find that people with stronger DM connectivity at rest score higher on measures of cognitive abilities like divergent thinking, reading comprehension, and memory (Li et al., 2009; Song et al., 2009; van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009; Wig et al., 2008). Taken together, these findings lead to a new neuroscientific conception of the brain’s functioning “at rest,” namely, that neural processing during lapses in outward attention may be related to self and social processing and to thought that transcends concrete, semantic representations and that the brain’s efficient monitoring and control of task-directed and non-task-directed states (or of outwardly and inwardly directed attention) may underlie important dimensions of psychological functioning. These findings also suggest the possibility that inadequate opportunity for children to play and for adolescents to quietly reflect and to daydream may have negative consequences—both for social-emotional well-being and for their ability to attend well to tasks.

As I’ll discuss in a moment, the article goes on to make some interesting claims and recommendations about social media, but let’s just deal with this. Let’s call it unsurprising to discover that the brain is doing different things when one is looking outward and focused on a specific task than when one is daydreaming, speculating, fantasizing, remembering or otherwise being introspective. How “real” those two networks are versus their being products of our perspective on our brains I cannot say. Certainly these are notions that reflect our mundane experience with thinking. I am certainly not going to argue against the wisdom of having down time, taking opportunities for reflection, or developing a meditative practice for children, teens, or adults. I also don’t need a multimillion dollar machine-that-goes-bing to know that.

Here is what might be interesting though as one investigates the ontological dimensions of a rhetoric not restricted to symbolic behavior. Without falling into neuroessentialism, it is not radical, I think, to imagine that rhetorical strategies, such as audience awareness, develop from the way we are able to think and conceive of others, a task attributed here to the “default mode.” It is only speculation, as far as I am concerned, but the capacity to conceive of a self is dependent on the capacity to conceive of a non-self. Following upon that, the ability to imagine that others have similar capacities, that there are other “selves” out there, develops when? Prior to symbolic behavior? In concert with symbolic behavior? Following symbolic behavior? Who knows? I do, however, think that such neurorhetorical work opens a space for the investigation of a naturalcultural, material, nonsymbolic rhetoric.

That said, it certainly does not resolve such matters. And the discussion of social media in this article is an excellent example of this. To be fair, they conclude that “In the end, the question will not be as much about what the technology does to people as it will be about how best to use the technology in a responsible, beneficial way that promotes rather than hinders social development.” Thanks for that insight. Indeed they do admonish us that “the preliminary findings described here should not be taken as de facto evidence that access to technology is necessarily bad for development or weakens morality.” Of course the only reason that such caveats must appear in the article is that much of what they discuss suggests exactly the opposite of these backpedaling sentences, that “if youths are habitually pulled into the outside world by distracting media snippets, or if their primary mode of socially interacting is via brief, digitally transmitted communications, they may be systematically undermining opportunities to reflect on the moral, social, emotional, and longer term implications of social situations and personal values.” How do they get to this implication? Basically by arguing that effective use of the default mode is necessary for moral behavior and that social media interferes with entering this default mode through its continual demand for attention.

I’ll just toss out a different hypothesis for you, one that doesn’t have to fall into the trap of claiming that technology makes us more or less moral, stupid, etc. Or retreat to some version of the “guns don’t kill people; people kill people” commonplace. Here’s my premise: we don’t know how to live in a digitally-mediated, networked world. It’s a struggle. We are trying, unsuccessfully, to import paradigms from an industrial, print culture about what life should be like (and to be fair those are the only paradigms we have to work with). Addressing this struggle is not simply about some rational process of using technology in a beneficial way. It’s a more recursive and mutative process where the notion of benefit shifts as well. It’s unlikely that we will evolve out of our need for “down time” in the near future or develop some scifi wetware implants to do the job for us. So we will need to understand the ontological basis for rhetorical action, in the brain and elsewhere. But we also need to recognize that what constitutes “moral” behavior is a moving target. What are our moral obligations to our Facebook friends or Twitter followers? How do they intersect with and alter our responsibilities to family or neighbors or other citizens? These are all concepts that we learned through rhetorical activity, concepts that shift with rhetorical activity. And though the authors of this article are careful to hedge their claims, it is also clear that they want to raise some concern about social media that rests upon a certain faith about how we should behave, a faith that they seek to confirm through science.

In the end, I am interested in their argument and largely inclined to share their sense of the value of down time and reflection. I worry about the time my son spends staring at his iPhone. Not because I think it’s making him a bad person; it just seems like a diminished life experience to me. I’m also interested in this idea of the default mode. However, I’m inclined to be a little wary about these claims regarding social media. I am sure these technologies are shaping our cognitive experience, and I am sure that we struggle with these digital shifts, both individually and collectively. But I’d like to avoid falling into these rhetorical commonplaces about emerging media and morality or stupidity.


it hurts when I think

6 July, 2014 - 07:43

Perhaps you have seen this recent Science article (the paywalled article itself or a Guardian piece on it). If you haven’t, this is a psychological study where participants are left alone with their thoughts for 6-15 minutes and then asked questions about the experience. The conclusion? Generally people do not enjoy being alone with their thoughts. The article got attention though because the researchers gave participants the option of shocking themselves, and a good number of them, especially men, chose to do so. As Wilson et al. note, “what is striking is that simply being alone with their own thoughts for 15 min was apparently so aversive that it drove many participants to self-administer an electric shock that they had earlier said they would pay to avoid.”

I will not pretend expertise, but having engaged in zazen meditation over the years, it doesn’t really surprise me that people don’t enjoy being alone with their thoughts. In this kind of meditation the objective is not to not think, which isn’t really possible, but rather to not hold onto thoughts. In my experience (and I imagine yours), the unpleasantness of thinking comes from holding on to thoughts (or perhaps their holding on to you). As I understand it, this kind of mindfulness training is fundamentally about letting go. The researchers arrived at a similar conclusion, writing:

There is no doubt that people are sometimes absorbed by interesting ideas, exciting fantasies, and pleasant daydreams. Research has shown that minds are difficult to control, however, and it may be particularly hard to steer our thoughts in pleasant directions and keep them there. This may be why many people seek to gain better control of their thoughts with meditation and other techniques, with clear benefits. Without such training, people prefer doing to thinking, even if what they are doing is so unpleasant that they would normally pay to avoid it. The untutored mind does not like to be alone with itself.

One might argue that the mind is never “alone with itself.” There’s only more or less stimulation. In this study, the participant is sitting on a chair for instance. One might mention air or gravity, but language is the key outsider from my perspective. My inclination would not be to characterize the participants’ minds as untrained or untutored but to the contrary as specifically trained to “prefer doing to thinking,” where “thinking” is narrowly defined as mental activity that is detached from any apparent stimulation/sensation or a particular immediate objective.

In the disciplinary terms of cognitive science and psychology, what we are talking about here is the brain’s “default network,” which is sometimes described as the brain idling or as mind-wandering but is also suggested as the means by which the brain considers the past and future or imagines other people’s mental states. It is, perhaps, our internal self-reflection: the internal mental state that we imagine others similarly have. And really what this study is suggesting is that this internal world is generally not all that pleasant. Perhaps it’s a good thing that navel-gazing doesn’t feel that good. Even though we value self-reflection and mindfulness, we wouldn’t want to find ourselves drawn inward as toward a delicious treat.

An article like this attracts attention in part because of the details of participants shocking themselves but also because of our increasing moralizing over media and attention. It feeds our supposition that we have become so dependent on media stimulation that we are losing ourselves. Actually I don’t think the article is making any kind of cultural-historical argument. There are some cultural assumptions here, specifically that those who are tutored to be alone with their thoughts would get different results, but there isn’t a value judgment suggesting that there is something wrong with not enjoying this experience. We just bring that morality to the findings.

Whether we are talking about deep breathing exercises, some more developed meditative practice, language, or an iPhone, these are all technologies. Even when we are in that default mental mode, we are still in a hybridized, nature/culture, technological, distributed network of thinking. The condition of being “alone” is relative not absolute.

 


when the future isn’t like the past

26 June, 2014 - 09:14

A group of scholars responds to MLA’s proposal regarding doctoral education in Inside Higher Ed, another group proposes to replace MLA’s executive director with a triumvirate who will focus on the problems of adjunctification, and on Huffington Post a university president writes in defense of a liberal arts education: these are all different slices of a larger issue. On this blog, there are a few recurring topics:

  • emerging digital media and their aesthetic, rhetorical, and cultural effects;
  • teaching first-year composition;
  • practices in scholarly communication;
  • technologies and higher education teaching;
  • the digital humanities and its impact on the humanities at large;
  • the academic job market, including the issue of part-time labor;
  • doctoral education in English Studies;
  • undergraduate curriculum, including both general education and English majors.

There’s also a fair amount of “theory” talk, though, at least in my mind, it’s always about developing conceptual tools for investigating one or more of these topics. So perhaps it is not surprising that from my perspective these things are all part of a common situation, not one that is caused by technological change in some deterministic way, but one in which the development of digital media and information technologies has played a significant role. And obviously it’s not just about technology, but when we remark on the changing nature of work in the global economy, the resulting growing demand for postsecondary education, the shift in government support and public perception of higher education, and the impacts of these on academia, it’s clear that technological change has played its role there as well. In other words, the challenges we face today were not necessary and the future has not already been written, but there was and is no chance that the future will be like the past.

And this is where I see the biggest contradictions in our efforts to address these problems, contradictions which are rehearsed again in the pieces referenced above. Who can doubt that the way we approach doctoral education, university hiring practices in relation to adjuncts, and our valuing of a liberal arts curriculum are all tied together? The obvious answer is for there to be greater public investment in higher education. Maybe states should think about incarcerating fewer citizens and educating more of them. Maybe the federal government doesn’t need more aircraft carriers than the rest of the world combined. Maybe we need to close some corporate tax loopholes. Maybe.  Maybe. But even if there were more money flowing into the system, would that mean that things would stay as they are/were?

In his Huffington Post piece, Michael Roth, president of Wesleyan, points to a tradition in American higher education dating back to Franklin and Jefferson that emphasizes the value of a liberal education for lifelong learning over specific vocational training, as he concludes:

Since the founding of this country, education has been closely tied to individual freedom, and to the ability to think for oneself and to contribute to society by unleashing one’s creative potential. The pace of change has never been faster, and the ability to shape change and seek opportunity has never been more valuable than it is today. If we want to push back against inequality and enhance the vitality of our culture and economy, we need to support greater access to a broad, pragmatic liberal education.

Ok, but what should that “broad, pragmatic liberal education” look like? Does this ability to “shape change and seek opportunity” also apply to higher education itself? The “10 Humanities Scholars” writing in response to MLA’s proposal object to the suggestion that graduate education should be different and instead contend “As long as departments continue to be structured by literary-historical fields and tenure continues to be tied to monographs, a non-traditional dissertation seems likely to do a great disservice to students on the job market and the tenure track.” That’s my emphasis. In short, as long as things remain the same, they should remain the same. (I should note, btw, that with possibly a few exceptions at elite private liberal arts colleges, tenure is only tied to monographs at research universities, which make up less than 10% of American universities. So that claim is not true and has never been true.) But that’s just a side note.

Here’s the point. We want students to receive a liberal arts education in that most medieval of senses: the skills and knowledge needed to succeed as a free individual. And we want to deliver that education without exploitative employment practices. But these movements also want to hold on to the curricular and disciplinary structures of the 20th century. And in the end, the latter are valued over the former. And while the MLA report is obviously focused on MLA fields, this issue extends beyond those departments.

If the solution to our challenges includes changing the curricular and disciplinary paradigms of the arts and humanities are we still committed to finding that solution? Or are we more inclined to stay on this ride until it ends?

What is this future like? Where literary-historical fields are a minor part of the humanities, where the focus turns to digital media and the contemporary global context, where the curriculum focuses on the soft skills of communicating, collaborating and research rather than traditional content, where faculty research efforts, including the genres of scholarly communication, reflect this shift in emphasis, where the elimination of adjunct positions changes both the curriculum offered and the technological means of its delivery, where the focus on graduate programs that train future professors is greatly diminished. In short what if the solution to our problems is to create a future where the job of the humanities professor looks nothing like what it is today?

I’m not saying it has to be that way. My point is only that our conversations about finding solutions to these problems always seem predicated on returning to some imaginary historical moment rather than really trying to shape a future. Didn’t we all receive that “pragmatic liberal education” of which Roth speaks? If we can’t use it to find such solutions, then maybe it isn’t worth saving in the first place.

 

 


language, programming, and procedure

23 June, 2014 - 10:52

Following on my last post, by coincidence I picked up a copy of Max Barry’s Lexicon, which is in the sci-fi supernatural genre, light reading but well-reviewed. Its basic premise is that language triggers neurochemical responses in the brain and that there are underlying operating languages that can compel and program humans. The result is something that is part spellcraft, part cognitive science and sociolinguistics, and part rhetoric, with the identification of different audiences who respond to different forms of persuasion. In this aspect it reminds me somewhat of Stephenson’s Snow Crash or even Reed’s Mumbo Jumbo, in a far more literary vein.

Conceptually what’s most interesting about Lexicon for me is the role of big data and surveillance. Compelling people requires identifying their psychographic segmentation, which is a practice in marketing research; think of it as demographics on steroids. This is the information produced by tracking your “likes” on Facebook, mining the text of your Gmail and Google searches, and collecting data from your shopper card. Perhaps you remember the story from a few years ago about Target identifying a shopper as pregnant. Maybe this happened, maybe not. But that’s the kind of thing we are talking about.

Where does this get us?

  1. The better you know your audience, the more likely you are to be able to persuade them. I don’t think anyone would disagree with this.
  2. Through big data collection and analysis, one can gain a better understanding of audiences not just in broad demographic terms but in surprisingly narrow segments. How narrow, I’m not exactly sure. (A toy sketch of what this kind of segmentation might look like follows this list.)
  3. The result is the Deleuzian control society version of propaganda where we are not disciplined to occupy macrosocial categories but are modulated across a spectrum of desires.
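
To make item 2 a little more concrete, here is a minimal, purely illustrative sketch of what “psychographic segmentation” might look like: it clusters a handful of invented users into segments based on which pages they have liked. The page names, the data, the number of segments, and the choice of k-means are all my assumptions for the example, not anything drawn from Lexicon or from an actual platform.

```python
# Toy "psychographic segmentation": group users by which pages they have liked.
# All names and data here are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

pages = ["organic_food", "gun_club", "yoga", "nascar", "indie_film", "crossfit"]

# Rows are users, columns are pages; 1 means the user liked that page.
likes = np.array([
    [1, 0, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
    [0, 1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0, 1],
    [1, 0, 0, 0, 1, 0],
    [0, 1, 0, 1, 1, 1],
])

# Partition the users into k "segments" -- the marketing version of an audience.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(likes)

for segment in range(2):
    members = np.where(kmeans.labels_ == segment)[0].tolist()
    # The pages weighted most heavily in a cluster centroid are the "interests"
    # that characterize that segment.
    top = np.argsort(kmeans.cluster_centers_[segment])[::-1][:2]
    print(f"segment {segment}: users {members}, marked by {[pages[i] for i in top]}")
```

The only point of the toy is that the segments emerge from the behavioral data rather than from any predefined demographic category, which is what makes the “modulation” reading in item 3 tempting.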

Certainly there are legitimate, real world concerns underlying Lexicon, as one would hope to find in any decent scifi novel. It’s also a paranoid, dystopian fantasy that gets even more fantastical when one gets down to the plot itself (but no spoilers here). I suppose my reaction in part is to say that I don’t think we are smart, competent, or organized enough to make this dystopia real. But for me the more interesting question is to ask: are we really this way? To what extent are we programmable by language or other means? This is where one might return to thinking about procedural rhetoric.

I suppose the short answer is that we are very programmable and that our plasticity is one of our primary evolutionary advantages, starting with the ability to learn a language as an infant. One might say that our openness to programming is what allows us to form social bonds, have thoughts and desires, cooperate with others for mutual benefit, and so on. If we think about it in Deleuzian terms, the paranoid fear of programming (tinfoil hat, etc.) is a suicidal-fascist desire for absolute purity, but ultimately there’s no there there, just nothingness. If we view thought, action, desire, identity and so on as the products of relation, of assemblage, then “we” do not exist without the interconnection of programming.

Of course it’s one thing to say that we emerge from relations with others. It’s another to investigate deliberate strategies to sway or control one’s thinking by some corporation or government. It’s Latour’s sleeping policeman (or speed bump as we call it) or the layout of the supermarket. Imagine the virtual supermarket that is customized for your tastes. You don’t need to imagine it, of course, because that’s what Amazon is. Not all of these things are evil. Generally speaking I think we imagine speed bumps are a good way to stop people from speeding in front of an elementary school, more effective than a speed limit sign alone. There is an argument for the benefit of recommendation engines. We require the help of technologies to organize our relations with the world. This has been true at least since the invention of writing. Maybe we’d prefer more privacy around that; actually there’s no maybe about it. It’s one thing to have some technological assistance to find things that interest us; it’s another to have some third party use that information for their own purposes.

I also wonder to what extent we are permanently and unavoidably susceptible to such forms of persuasion. Clearly the idea of most advertising and other persuasive genres is not to convince you on a conscious level but to shape your worldview of possibilities, not to send you racing to McDonalds right away but for McDonalds to figure prominently in your mind the next time you ask yourself “what should I have for lunch?” And even then when fast food enters into our mind as a possibility we might consciously recognize that the idea is spurred by a commercial, but do we really care?  Do we really care where our ideas come from? Are our stories about our thoughts and actions ever anything more than post-hoc rationalizations?

Returning to my discussion of Bogost, Davis, and DeLanda in the last post, I think there is something useful in exploring symbolic action as a mechanism/procedure. As a book like Lexicon imagines, we’ve been programming each other as long as there has been history, perhaps longer. Maybe we are getting “better” at it, more fine-tuned. Maybe it’s a dangerous knowledge that we shouldn’t have, though we’ve been using ideas to propel one group of humans to slaughter, enslave, and oppress another group of humans for millennia. That’s nothing new. If anything though, for me it points to the importance of a multidisciplinary understanding of how information, media, technologies, thoughts, and actions intertwine as the contemporary rhetorical condition of humans.

 


alien languages and rhetorical procedures

20 June, 2014 - 09:05


Ian Bogost writes about Star Trek: The Next Generation and the unique language of the Tamarians, an alien race encountered in one episode. Picard and the crew eventually figure out how to speak with the Tamarians by interpreting their language as a series of metaphors. Bogost, however, suggests that metaphor is the wrong concept,

Calling Tamarian language “metaphor” preserves our familiar denotative speech methods and sets the more curious Tamarian moves off against them. But if we take the show’s science fictional aspirations seriously and to their logical conclusion, then the Children of Tama possess no method of denotative communication whatsoever. Their language simply prevents them from distinguishing between an object or event and what we would call its figurative representation.

Bogost then proceeds to put the Tamarian language in the context of computers, where, from our perspective, we perceive descriptions, appearances, or narratives when we look at the machine, but what is actually happening are logics and procedures. Picard may think the Tamarians are speaking in metaphors, but they are in fact speaking in procedural logic. There is some insight there for us, Bogost observes,

 To represent the world as systems of interdependent logics we need not elevate those logics to the level of myth, nor focus on the logics of our myths. Instead, we would have to meditate on the logics in everything, to see the world as one built of weird, rusty machines whose gears squeal as they grind against one another, rather than as stories into which we might write ourselves as possible characters.

It’s an understandable mistake, but one that rings louder when heard from the vantage point of the 24th century. For even then, stories and images take center stage, and logics and processes wait in the wings as curiosities, accessories. Perhaps one day we will learn this lesson of the Tamarians: that understanding how the world works is a more promising approach to intervention within it than mere description or depiction. Until then, well: Shaka, when the walls fell.
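
One way to make Bogost’s contrast between denotative and procedural language concrete is a toy sketch like the following, which treats a Tamarian utterance not as a figure to be decoded but as a call to a procedure that operates on the shared situation. The mapping and the “procedures” are invented here for illustration; this is my gloss on the episode, not anything Bogost himself proposes.

```python
# Toy model: a Tamarian phrase is not a description of the world but a
# procedure that changes the state of the encounter. The phrases come from
# the episode; the operations attached to them are invented for illustration.

def cooperate_against_common_threat(state):
    state["allied"] = True
    return state

def mark_failure_to_communicate(state):
    state["understood"] = False
    return state

# The "lexicon" maps each utterance directly to an operation, with no
# intermediate layer of denotation or metaphor to interpret.
lexicon = {
    "Darmok and Jalad at Tanagra": cooperate_against_common_threat,
    "Shaka, when the walls fell": mark_failure_to_communicate,
}

def utter(phrase, state):
    # Speaking is executing: the phrase does something to the situation.
    return lexicon[phrase](state)

situation = {"allied": False, "understood": True}
situation = utter("Darmok and Jalad at Tanagra", situation)
situation = utter("Shaka, when the walls fell", situation)
print(situation)  # {'allied': True, 'understood': False}
```

Read this way, “Shaka, when the walls fell” is less a metaphor for failure than an operation performed on the state of the conversation, which is roughly the shift in emphasis Bogost is after.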

Perhaps not surprisingly, this episode has received some treatment in rhetorical theory. Both Steven Mailloux and Diane Davis (paywall) have written about it as an opportunity to investigate the challenges of communication with otherness. As Davis points out, the episode ends without any real understanding being achieved between the Enterprise crew and the Tamarians. They do not establish diplomatic relations. The best they can achieve is peace without understanding, which, as Davis argues, “suggests that understanding is not a prerequisite for peace, that a radically hospitable opening to alterity precedes cogitation and volition.” From this she concludes

the challenge is to compare without completely effacing the incomparableness of the “we” that is exposed in the simple fact of the address; that is, the challenge is to refuse to reduce the saying to the said, to keep hermeneutic interpretation from absorbing the strictly rhetorical gesture of the approach, which interrupts the movement of appropriation and busts any illusion of having understood.

In this moment, Bogost and Davis appear like Picard and the Tamarians: two non-communicating entities. However, they both recognize the partial-at-best success of Picard's attempt to communicate and the limits of the hermeneutic gesture. Davis points to a rhetorical gesture that precedes communication. I wonder if that gesture might be procedural, or, to put it in more Deleuzian terms, the operation of an assemblage.

Let’s see where that takes us by bringing in two other sci-fi stories.

  1. The ST:TNG “Darmok” episode is often compared to the original Star Trek episode “Arena,” in which an omnipotent space race called the Metrons forces Kirk to fight an alien captain from a reptilian race called the Gorn. In the end, Kirk manages to create a makeshift weapon (anticipating every episode of MacGyver) and defeats his enemy. However, he chooses not to kill the Gorn, and he is rewarded for this decision by the Metrons. It has many of the classic tropes of the original series: Olympian-styled super aliens, violent bestial aliens, and scrappy, can-do American know-how with the perfect mix of brains and brawn, judgment and courage, etc., etc. One way of comparing the episodes is in terms of the shift from Golden Age to New Wave sci fi, where in the former the heroes are cowboy engineers and in the latter they are anthropologists. In “Arena” there is no hope for communication and apparently no attempt.
  2. Stepping out of the Star Trek universe, China Miéville's Embassytown focuses on an alien race called the Ariekei. They are two-headed creatures, and the only way humans can communicate with them is through genetically engineered twins called Ambassadors who can speak with two mouths and one mind. Like the Tamarians, the Ariekei appear to speak through metaphorical concepts, but more importantly they cannot create fictions or lies. As such, humans are called upon to stage various actions in order to create concepts for communication. There is a Derridean pharmacological aspect as well, as the Ariekei find themselves intoxicated by a new Ambassador's speech. And then, when they figure out how to lie… Following Bogost, we might also call the Ariekei's language procedural. I see the pharmakon as fitting into a procedural understanding of rhetoric and communication: language is a machine.

It's tempting to see language, or more generally symbolic behavior, as the proto-machine of modern humans. Today, when we look at technologies, they are all preceded by language, by descriptions, images, narratives, and metaphors. When we think about remediation, or just McLuhan's contention that all media take prior media as their content, that's what we see. The origins of symbolic behavior are as murky as efforts to define it in the first place, but I think we can acknowledge that there are technologies prior to language. Technologies always bridge the modern nature-culture divide, responding to physical laws but also shaped by cultural processes. Language is certainly that way, partly natural in the evolutionary development of the mouth and brain but also cultural. From Bogost's view, as well as Deleuze's (though the two are quite different in other ways), language is machinic because being is machinic. The machine precedes language. For Davis, rhetoric also precedes language and communication as this opening of a relation to Otherness.

Might we say that rhetoric is also a machine? I don't think Davis would agree to that, but this is precisely Bogost's point when he discusses procedural rhetoric. Persuasive Games, where Bogost introduces us to procedural rhetoric, focuses on the contemporary scene of videogames, especially games with a social-political agenda. However, if we say that procedural rhetoric is not only a way to understand how software persuades but more broadly a way of seeing rhetoric as a machine, prior to symbolic behavior, then we move toward a different understanding of these science fiction situations.

Human and alien assemblages grind their gears into one another. (Mis)understanding is one output. Violence, heat, entropy are others. Dis/order is produced as assemblages mutate. One inclination is to say there are no aliens here, just stories written in English. Let’s interpret them with our various hermeneutic methods. But there are aliens here, albeit not extra-terrestrial ones, just nonhumans. What happens if we take Bogost’s advice and not see the “Darmok” episode as description, image, and narrative but rather as a process?

Categories: Author Blogs

speculative politics, academic life and the “legacy” of postmodernism

16 June, 2014 - 13:12

Alex Galloway wrote an interesting post a couple weeks ago that sparked a long conversation (100+ comments), including a more recent post by David Golumbia that makes reference to a post I wrote two years ago. In a nutshell this is a conversation about the politics surrounding speculative realism, object-oriented ontology, and such. It mostly focuses on Graham Harman, less so on other OOO-related folks like Bryant and Bogost, and extends to Latour, DeLanda, and others. The questions of “what is?” (ontology) and “what should be?” (politics) are clearly interrelated. I don’t think anyone believes that some version of Stalinist science is a good idea (where the search for understanding is censored up front by a political agenda). On the other hand, no one in this conversation believes that any search for understanding by humans is not shaped by ideology, politics, culture, and so on.

I agree with Galloway when he writes “The political means *justice* first and foremost, not liberation. Justice and liberation may, of course, coincide during certain socio-historical situations, but politics does not and should not mean liberation exclusively. Political theory is full of examples where people must in fact *curb* their own liberty for the sake of justice.” As far as I can tell, justice isn't built into the structure of the world. It's not gravity. Justice is a claim about how the world should be. As Galloway points out, there are plenty of political theories that instruct people on what they should do. Of course there's also a lot of disagreement over what justice is, as well as how it can be achieved. Much of it is tied to theories of ontology (e.g., do you believe the Genesis story accurately describes how the universe was formed?). If I understand Galloway's criticism, it is that OOO separates politics from ontology and fails to see how its ontology is informed by politics. He then goes on to demonstrate that the politics that informs OOO is capitalism. Maybe. Ultimately the proof is in the putting, and for me that means not only saying but doing.

From my perspective this conversation focuses on academic life. Galloway's post takes up Harman's references to the political situation in Egypt. He also talks about the Occupy movement, Wikileaks, and so on. But this is an academic argument happening between academics. We can say that academic life and work is political in the way that all human life and work is political. Write an article, teach a class, attend a committee meeting: all are political acts. But they are not political in the sense of Occupy or Wikileaks. If they are efforts to make the lived experiences of other humans more just, then they are quite circuitous in their tactics. Certainly there are some activist academics who are more explicitly political in their research. There are some who are active with unions or with faculty oversight of institutions. But such things do not characterize academic life in general. Let's say there are two monographs on Moby Dick. One invokes Zizek as a primary theoretical inspiration. The other one invokes Harman. From Galloway's perspective the former is preferable on political grounds, but I am having a hard time seeing either as doing much for justice.

To put my own research on digital media technologies, higher education, rhetoric, and teaching composition in similar terms, I suppose I would say DeLanda and Latour are my primary inspirations. Put simply, my work examines the premise in my discipline that symbolic behavior is a uniquely human trait. In my view it is a premise that tends to obscure the way that symbolic behavior (and the broader realm of thought and action) relies upon a broader network of actors. In particular, I see our continuing struggles over what to do with digital media as stemming from this premise. Is it sufficiently political? I'm not sure. Who makes that determination? Does “being political” by humanistic academic standards require choosing an argument from among a set of prescribed acceptable positions? I would hope not. Does it require offering some prescription, some strategy or tactic, for increasing justice in the world? Maybe. I would like to think that my work strives to make life better. That is, if I offer to you my very best understanding of how digital rhetoric and composition works and what it might mean for teaching and higher education in general, I think that I am trying to make life better. Does it make the world a more just place? How is one even supposed to measure that? If a butterfly flaps its wings…

Meanwhile, David Golumbia, in responding to Galloway, takes issue with a phrase in that earlier post of mine, where I say that “there is potentially less relativism in a flat ontology than there is in our legacy postmodern views.” The word “potentially” there has to do with point of view. In my view, it is almost tautological to say that a flat ontology has less relativism. This is, in some respect, Galloway's complaint: that a flat ontology does not pursue a “superimposition of a new asymmetry.” But that's not Golumbia's concern. His concern is with the phrase “legacy postmodern views.” As near as I can figure though, he is not asserting that there is no such thing as “legacy postmodern views,” but rather that there shouldn't be. As he writes

the major lights of theory have been presented by many of us to students as a bloc, as doctrine, or even as dogma: as a way of thinking or even “legacy view” that we professors of today mean to “educate” our students about. But we should not and cannot be “educating” or “indoctrinating” our students “into” theory. To the contrary: because that work is a diverse set of responses to several bodies of work, more and less traditional and/or orthodox, it can only be understood well when embedded in that tradition.

I don't have a problem with his argument that theory should be taught a different way. In the end he makes a fairly disciplinary-conservative argument that students need to read the philosophical tradition. He complains that SR plays into this with its “sweeping dismissal” of prior philosophy and argues that their object orientation isn't all that new anyway. He blames technology for short attention spans, a devaluing of proper education, and an unwillingness to give due consideration to the philosophical tradition. Keep in mind that these are professors complaining that other professors don't take education seriously and don't read enough. Actually though, these are familiar rhetorical moves. What could be more familiar than saying persons A, B, and C have misread or failed to read persons X, Y, and Z?

I do want to respond briefly to where Golumbia remarks, “That phrase ‘legacy postmodern views’ really strikes me wrong, and rings in harmony with the ‘leftist faculty cabal’ mentioned by Galloway. Among other things, both phrases sound much like the major buzzwords used by the political right to attack all of theory during its heyday in the 1980s and 1990s.” I think he means to suggest that I am taking up some right-wing attack on theory. And I'm not sure why, as we seem to agree that “legacy postmodern views” exist and are taught, even though neither of us believes such things are worthwhile.

If I decide, for example, to focus on Latour and DeLanda rather than Badiou and Zizek, and some other digital rhetorician decides the opposite, then… I've got nothing. I mean, I'm not sure what the stakes are. We write two different kinds of articles and books. Maybe our classes are a little different, but not that different. Is one of us making the world a more just place than the other? According to whom? Either way, we're both stuck on this treadmill of writing articles and monographs for tenure and promotion. How is it that I am evil and the other scholar some avatar of justice, when there are maybe a couple thousand people on the planet, at best, who could tell the difference between us and fewer than 100 who would bother to? That's the stuff that I don't get.

Categories: Author Blogs

what happens when I don’t disagree with you?

6 June, 2014 - 12:43

We are all familiar with the echo chamber that can be the interwebs: pick your own news source, pick your friends, and mute the rest. Despite this familiar complaint, we are all equally familiar with Twitter wars, trolling comments, cyberbullying, and all varieties of textual assault. These things have their academic varieties as well. We hear about the digital humanities and its “niceness” (as well as complaints that niceness is created by erasing difference within the echo chamber). And we can also witness the latest flame war over Rebecca Schuman’s Slate article on Zizek.

How does this work in more formal academic discourses? I don't know about your graduate training, but my coursework was essentially an exercise in critical-rhetorical knife work. Class time was about critiquing this and critiquing that. The graduate student listserv was mostly theory wars. Writing a dissertation was an extension of that, where the first task was really to find or make a hole in the current research. Every argument can be deconstructed. Every viewpoint has a blindspot. Arguments about capitalism give little attention to patriarchy that don't account for hegemony that cover over the slipperiness of language games, etc., etc.

We do the same thing in teaching academic discourse to first-year students. What’s your thesis? What are you trying to argue? A thesis can’t be a statement of fact. It has to be something with which your audience might disagree. I’ve taught this myself. You can tell your students to take their theses and say the opposite thing. If the opposite statement isn’t one that you can imagine people believing, then your original statement isn’t really a thesis. In other words, your audience are people who hold a view different from yours. That said, their views can’t be too different from yours. Clearly they are people who care about the same issue as you, who would be willing to discuss it in the same terms as you, and who are open to the possibility of being persuaded by the kind of argument and evidence that you will provide. In other words, they are the kind of people who are part of a fairly limited discourse community: your discourse community.

In a way, it's all a performance, and not just for the author but for the readers as well. Being an academic reader requires a willingness to adopt a very specific position. It's almost like participating in a child's magic trick: it must be carefully constructed. And here I suppose I should invoke Latour as a kind of ward against our inclination to take that to mean that the scholarship doesn't have value. That isn't what I mean at all. All knowledge of any value must be carefully constructed. We all know that critique is interminable and that any text can be critiqued ad infinitum. Going there breaks the performance. But one equally breaks the performance by simply agreeing with what the author says. As a reader, you must disagree with the thesis (or at least express skepticism and doubt), but only within the scope of the discourse community. You must play by the rules and accept the genre of evidence and argument that is provided. That doesn't mean you need to agree in the end, of course, only that you play by the established rules for disagreement.

In short, you must begin with skepticism and allow yourself to be open to persuasion.

It's an interesting experience to try reading these works from a different position. I'm not talking about major philosophical works, where you're mostly just trying to figure out what's being said in the first place. I'm referring to the typical humanistic article or monograph. There's a clumsiness that results, like a couple dancing together but to different songs. Is this a criticism? No, it's not, not really. Every genre has conventions that establish roles for authors and readers. I will admit that I sometimes get tired of playing the same role though, as if it were the only way to read, as if serious academic thought required one to adopt this readerly role. Where I end up is with a “what if” game. What if persuasion and argument were not the primary rhetorical functions of academic writing? We could play the believing game, but that's just the flipside of the same coin. It's difficult for us, especially us rhetoricians who are inclined to assert that “everything is an argument.” It's difficult because we really do believe in the value of an agonistic approach to testing and strengthening knowledge.

I suppose I wonder if it is possible to play more than one game.

Categories: Author Blogs

role your own (post)disciplinary future

4 June, 2014 - 14:51

That's a pun, not a misspelling. The question is, what role do you see for yourself as an academic in 2025? Why 2025? Partly because we like numbers that end in 0 or 5, and partly because by then our entering doctoral students, with their 7-10 year journey toward a PhD ahead of them, should have had a few good whacks at the job market. In other words, we should be thinking about 2025 or thereabouts as we think through the reform of doctoral programs, especially since any reform will take a few years, at best, to take hold. Besides, it's easy, too easy really, to criticize the MLA. It's a lot harder to find alternatives, other than shutting down or dramatically shrinking the current enterprise.

About a decade ago, Ann Green and I co-wrote a chapter in a collection called Culture Shock and the Practice of Profession: Training the Next Wave in Rhetoric and Composition about our time as doctoral students in the experimental PhD program at SUNY Albany in the mid-nineties. Berlin wrote about our program in Rhetorics, Poetics, and Culture, and Steve North later wrote Refiguring the PhD in English Studies, which was largely about our program in “Writing, Teaching, and Criticism.” While the program produced some fine graduates (ahem), it imploded because, in my view, it demanded inter- or intra-disciplinary (depending on how you think of the various parts of an English department) collaboration on the part of faculty who were simply not capable of pulling it off. In short, there was too much personal and professional antagonism in the department for it to work. I'm not sure if the department was unusual in its antagonism. I just think that in most English departments working together is unnecessary. Ten years ago, when we wrote that chapter, the main point as I recall was that if you want to “reform” a discipline, you shouldn't really make graduate students pay the price for that reform. That is, as long as the available jobs are traditionally defined and expect traditional training, then that's what doctoral programs should provide.

So you need to begin by reforming the job market. That is, let's hire different people. If we want the kinds of scholars that the MLA reformers describe, then let's hire them, and while we're at it, let's change the way we tenure them too. And if we are going to do that, then we are going to change the kinds of courses we ask them to teach, which means reforming undergraduate curricula. Those things probably go hand-in-hand. Propose a new curriculum in your department and create a hiring plan to deliver it. The MLA report suggests that doctoral programs should encourage a diversity of outcomes for jobs beyond the academy, but maybe we need that diversity within departments as well. In my mind all of these things are part of a single puzzle:

  • reform and diversify the major to attract more students and expand the idea of what expertise in “English” might mean in terms of professions;
  • more students are the best argument we have for sustaining and maybe building the job market;
  • a strengthened undergraduate major will increase the value and viability of the MA;
  • those things together create better conditions for doctoral programs, which will need to be reformed in terms of content to meet the needs of this new disciplinary paradigm and might also be reformed in terms of some of the pragmatic concerns raised by the MLA (time to degree, technology training, etc.).

If you think about it that way, the question is where do we get the students from? Think about this. In the last decade, according to NCES, the number of 4-year English grads has remained essentially fixed (around 52K a year nationally), while the number of Communication majors has grown nearly 20% and the number of Psychology majors has grown more than 30%, which is to say that psychology has roughly kept pace with the overall increase in the number of graduates. My point is that a four-year psychology or communication degree, while perhaps more professional sounding, doesn't exactly provide a qualification for a specific career. English used to be a place where students could learn valuable communication skills, get to know something about different cultures around the world and through time, and get some insight into how people tick. We used to think writing literary interpretive essays would give you the first. And we thought reading literature would give you the other two. But we don't see it that way anymore. So instead students go to communications and psychology for the same thing.

In theory, by curtailing our disciplinary focus on literariness, we might be able to shift the perception once more. If we can move toward the digital then we can regain our claim to teaching communication skills for the average, entry-level professional career. If we are willing to expand greatly the media we study, I think we can still offer some of the most interesting content in terms of aesthetic experience and insight into other cultures/times. And I still think that rhetoric offers us an excellent pragmatic approach to understanding how people tick. I’m not saying that we can do what psychology, communications, or business do, or that we would want to. I’m just saying we could offer a comparable curriculum that might bring back some of those majors. In 90-91, when I got my English BA, English and Communications each represented 4.6% of the total grads. Today, Communications still represents 4.6%, while English is at 2.9%. (As it happens, at UB, communications is five times the size of English in terms of degrees conferred, so part of this is local as well.)

Here's the thing. I've been a tenure-line professor for 15 years. I've taught first-year comp, technology for teachers, grammar for teachers, creative writing, technical writing, business writing, literary theory, intro to literature, digital writing, poetics, media theory, videogame studies, TA teaching practicum, speculative realism, digital humanities, and those are just the ones that come to mind. There is no book or article that I have taught more than 2 or 3 times in my academic career. I've gone from teaching HTML to Dreamweaver to Flash back to HTML/CSS, then to video and audio, then to social media, and likely in the future on to some other coding. So I don't have a problem imagining that my English department contemporaries should also be called upon to learn, grow, and shift their areas of expertise. That's what this is going to require.

Once we do that, this more capacious doctoral program that the MLA is proposing will more closely resemble what departments are actually doing.


Categories: Author Blogs

why five years for a PhD is both too short and too long

29 May, 2014 - 14:24

It seems much of the attention on the MLA report has gone toward the proposal to shorten the time to degree. Inside Higher Ed wrote about this, and Steve Krause has a blog post on the issue. Here's my question: what is the problem that we are trying to solve here? Here's what the report says:

we consider 9.0 years unacceptable, in great part because of the social, economic, and personal costs associated with such a lengthy time to degree. Long periods of study delay full-fledged entry into the workforce, with associated financial sacrifices. For many there is increased indebtedness; for some, delayed family planning. For some students a long time to degree may not be especially disturbing if funding from their universities—through fellowships or research and teaching assistantships—is available. Here, however, there is also a cost at the level of the university itself. Just as colleges and universities are being urged to steward their resources and encourage undergraduates to complete their degrees in a timely fashion, so should they be urged to apply this policy at the graduate level.

None of that argument is surprising. However, one might say that five years is still too long to spend if “full-fledged entry into the workforce” remains unlikely. That is, if we are graduating 1000 PhDs for 600 jobs now, and doctoral programs were successful in shortening time to degree, we'd likely see an increase in the number of graduates. As it is, doctoral students intentionally delay completing their degrees so that they can acquire additional credentials (e.g., published journal articles) so as to be competitive on the job market. It's the market expectations as much as anything else that drive time to degree. In that respect, five years isn't enough time.

Let’s return to the question: what is the problem we are trying to solve? One of the most misguided arguments in the MLA report is

Doctoral education is not exclusively for the production of future tenure-track faculty members. Reducing cohort size is tantamount to reducing accessibility. The modern languages and literatures are vital to our culture, to the research university and to higher education, and to the qualified students who have the dedication for graduate work and who ought to be afforded the opportunity to pursue advanced study in their field of choice. Instead of contraction, we argue for a more capacious understanding of our fields and their benefits to society, including the range of career outcomes.

I appreciate the rhetorical move here. It's quite savvy. I even think it is sincere. We cannot reduce the size of our programs because that would cheat our students out of access. Plus, this stuff is important, and we think that society should value us more highly. I'm sure you do, MLA. I'm sure you do. But we already make economic calculations about cohort size and access. Presumably there is a reason why PhD production and tenure-track job openings moved proportionally with one another until the bottom fell out in 2008.

It's fine, in theory, if we want to make the PhD into a degree that opens doors to a wider range of careers. If so, we have to convince those employers that the degree has some value. Clearly the first thing we'd have to do is radically reformulate the dissertation. The dissertation is the main thing that reveals the fraudulent claim about doctoral education not being connected with the tenure track. As a general rule, the dissertation as it is currently composed only makes sense in relation to the research activity of faculty.

But I don't think that's what this is really about. There's a slippery slope in making doctoral programs smaller. What's the right size? Fewer graduate students means less need for faculty, which means fewer jobs, which means fewer grad students, etc. At some point it evens out, but where? That's a scary prospect for departments and the MLA. If you reduced your program by half, your graduate faculty would be teaching a grad course once every two years instead of once a year. Not a popular decision. And do you have anything else for them to teach, given the declining undergrad major? Yikes. Then maybe the dean wants to revisit our hiring plan, eh? So maybe five years down the road, due to retirement and such, we end up 20-30% smaller in terms of faculty, our grad program is half the size, and we can't deliver on the idealized version of the literary studies graduate curriculum because we just don't have enough faculty in all the different areas to run dissertations in every field. What we'd be talking about is the end of English departments at research universities as we know them within the next decade. Don't get me wrong. Those departments would still exist. They might even thrive on a new set of terms. They just wouldn't look the way they do now.

Arguing for a larger range of career outcomes, though, essentially leads one in the same direction, unless one is being wholly disingenuous about it. What are these other careers going to be? Publishing? Higher ed administration? Public humanities (museums and such)? Technical or professional communication? K-12 teaching? Tell me how a dissertation in whatever literary period makes sense for such a career path. Maybe indirectly the research and writing experience could be helpful for some of these jobs. But if you're going to spend 2+ years researching and writing, doesn't it make sense to choose a topic that directly relates to the profession you want to enter? Isn't that why students write a dissertation in a particular period, so that they can qualify for a faculty job in that period? Furthermore, wouldn't one say the same thing about the coursework?

In the end, it seems to me that a both/and approach is needed here.

  • Transform the curriculum, which means faculty rethinking their graduate courses and mentoring for purposes other than reproducing the discipline;
  • Shrink doctoral programs but maybe grow MA programs by making them more valuable workplace credentials (which requires transforming the curriculum);
  • Reduce time to degree by making the curriculum more pragmatic and fitting it better to the outcomes we are imagining for our students.

The humanities have insisted for decades that their curricula are not practical, that they are not meant to be career-oriented. We have even insisted on that for graduate education, which is barely practical preparation even for a faculty job. I’m not entirely sure, but that stance doesn’t seem to have much viability left in it.

Categories: Author Blogs

MLA, doctoral education, and the benefits of hindsight

28 May, 2014 - 12:29

The MLA has released a task force report on “Doctoral Study in Modern Language and Literature.” Primarily it recommends

  • engaging with technology
  • reducing time to degree
  • rethinking the dissertation (see the bullet point above)
  • emphasizing teaching
  • validating “diverse career outcomes” (my personal favorite)

Not coincidentally, my own department has decided that next year we will have extended conversations about our own doctoral program. So I suppose you could say the report is timely, and that its recommendations are interesting and provocative, even to those who might not agree with them. Personally, I think they would have been more interesting 15 years ago, when the Internet was still in its cultural infancy; then they would have been forward-thinking, prescient even. Ten years ago, when the web was everywhere and social media were starting to take off, they would have been smart and strategic. Today it's more a case of 20-20 hindsight. If we had started down this path a decade ago, today we would have doctoral programs like the ones imagined in the report. Of course, ten years ago most doctoral programs didn't have the faculty with the expertise or will to deliver such a curriculum. As far as that goes, I doubt many doctoral programs have such faculty today. The good news (not really) is that there isn't really any point in spending the next 3-5 years rebuilding a graduate program for students who will graduate in 2025 with the goal of preparing them for the demands of teaching today.

The prospects for doctoral study in modern language and literature are far, far worse than that. I am not going to get out my tarot cards and try to predict the future, but I don't think it's a big stretch to imagine that continuing increases in network speed, processing speed, data storage, and mobility will transform literacy as radically over the next decade as they have over the past decade. What continues to mystify me, and this report is no better, is that literary studies continues to demonstrate what I can only call a willful ignorance of the fact that it is forever tied to print culture. Will the humanities continue to study print culture as a historical phenomenon? Sure, I guess. But the difference is that 50 years ago we were in a print culture and this study was connected to the literacy of everyday life. And now it's not. And, as the report notes, we've gone from 1000 new assistant professor ads a year to 600 (n.b., according to the Rhet Map project, 167 of those ads in 12-13 were in rhet/comp). And yes, it was precipitated by the recession, but what's the explanation now? The explanation, as best as I can understand it, is that the entire institution of higher education is being forced to rethink itself and question all of its practices.

That’s a good explanation, but I’ll offer another one that focuses more on our own disciplinary actions. Here is what the report says about embracing technology:

Some doctoral students will benefit from in-depth technological training that builds their capacity to design and develop research software. Some will require familiarity with database structures or with digitization standards to facilitate the representation and critical editing of documents and cultural artifacts online. Still others will need to add statistical literacy to their portfolios. Still others will need to understand the opportunities and implications of methods like distant reading and text mining. Programs should therefore link technology training to student research questions, supporting this training as they would language learning or archival research and partnering where appropriate outside the department to match students with relevant mentors or practicum experiences. Because all doctoral students will need to learn to compose in multimodal online platforms, to evaluate new technologies independently, and to navigate and construct digital research archives, mastery of basic digital humanities tools and techniques should be a goal of the methodological training offered by every department.

This is not solely a matter of the application of new methods to research and writing. At stake is also increasingly sophisticated thinking about the use of technology in teaching. Future undergraduates will bring new technological expectations and levels of social media fluency to the classroom, and their teachers—today’s doctoral students—must be prepared to meet them with versatility and confidence. Students who understand the workings of analytic tools and the means of production of scholarly communication in the twenty-first century will be better able to engage technology critically and use it to its fullest scholarly and pedagogical potential.

Fine. I can agree with that, but I don't think it will do us much good because I don't think it goes far enough. In the end we will still be left with doctoral students who essentially want to do the same kind of work as their professors, and what I think we are seeing is that there will be very little future in that kind of work. This reads to me as the “DH will save us” kind of argument. As the report states elsewhere, “The traditional hermeneutics of the individual work is not endangered; rather, it is augmented by digital technologies. But the collaborative, interdisciplinary, and interprofessional aspects of much digital scholarship do suggest critical transitions ahead for literary fields.” This is still an argument for augmenting our programs, i.e., making sure students can learn about technology by taking courses in other departments. Maybe that's a pragmatic and necessary step. But that's not what this is about.

It’s not the methods of literary study that are the problem. Unfortunately it’s literary study itself. The reality of the job market and the undergraduate English major is that we don’t need hundreds of new professors each year to study and teach literature, regardless of the method or degree of technological savvy they bring with them. Instead we hire these graduates as adjuncts into writing curricula that we have spent decades deprofessionalizing and devaluing precisely so that we could give those jobs to TAs and fill our graduate programs. And now that pyramid scheme has finally come to an end.

Arguably, someone in 2025 will have the disciplinary expertise to study and teach digital culture in the way literary scholars studied and taught print culture in the 20th century. Possibly those digital scholars would be able to build curricula and student interest that would sustain legitimate academic careers (maybe even tenure, if such a thing still exists then). But the reforms described here won't produce such scholars. Well, they could, but they won't in any systematic way.

Categories: Author Blogs