Digital Digs (Alex Reid)

an archeology of the future

economics and the English job market

9 January, 2015 - 14:32

Who can resist a job market post during MLA season? Not Inside Higher Ed, though this one points to some interesting research done by economist David Colander (with Daisy Zhuo) and published in Pedagogy. I suppose it’s a dog-bites-man scenario. Colander samples hiring and job placement at a group of English departments and comes to the conclusion that graduates of top tier doctoral programs (Tier 1 ranked 1-6 and Tier 2 ranked 7-28 as per US News & World Report) are much more likely to get jobs at the top 62 doctoral universities (and the top liberal arts colleges) than graduates of lower ranked programs.

I know, surprising stuff, right? Though the actual numbers are very clear: according to the study, less than 2% of graduates from tier 3 schools land jobs at the top 62 universities. Basically what you see is that the top schools hire their own. 57% of the faculty at the top 6 schools come from the other 5 in tier 1. Nearly 75% of the faculty at tier 2 schools come from the top two tiers.

It’s not hard to imagine how this happens. Some might like to argue that it is a rational process. The best candidates are those who get into and graduate from the best programs. They’ve already been filtered, though the narrowness of the top six hiring one another does seem a little incestuous. (It would be interesting to compare this with other disciplines.) Others are more likely to see this practice as a problem. As onerous and outdated as the current MLA job search practice is, it was implemented to replace a far less fair old boys’ network of hiring. One could argue this study reveals that network is still in effect.

But I’m not here to contend with that issue today. Instead I want to address faculty at tier 3 or 4 institutions. More than half of you got your degrees from schools in the top 2 tiers. If Colander’s study is accurate, your students aren’t likely to get jobs in the top 3 tiers or win prestigious post-docs. In the top 2 tiers it’s not unreasonable to train students with the idea that successful grads will go on to positions much like one’s own: research-intensive, low-teaching, with doctoral programs and strong undergrads. In tiers 3 and 4, though, this just isn’t the case. But you already knew that, right?

So here’s an extended quote from Colander:

the best explanation of the current job market situation is that English programs are populated with students who love the study of English and want to combine that love of English with some way to make an acceptable living. Students who are not independently wealthy need to have some way to combine their love—the study and teaching of English—with a job that provides sufficient income to live. For many students, even relatively low-paying part-time and adjunct jobs, combined with other part-time, better-paying private-sector jobs ideally using their English skills, are evidently preferable to giving up the study of English. From an economist’s free-choice perspective, if that is what students choose, a program focused solely on actual job training should prepare them for that life as well as possible. Training would be designed, among other things, to prepare students to put together the combination of jobs that is most likely in their future. This is not to argue that the situation they will face is a desirable one, or that the institutional structures governing academic employment should not be changed. But that is a separate issue; job training should focus on preparing students for the institutional reality they will likely face. To my knowledge, no programs do this.

Numerous possibilities exist to address this goal. Most people do not know how to write well, and if more English PhD programs provided training in preparing students to do freelance consulting, analytic writing and composition, rhetoric, copyediting, proofreading, general editing, or tutoring, in addition to the study of literature and literary criticism, their students would have a set of skills that are more marketable than those needed to advance in a research university. The very fact that job placement is thought of primarily in terms of tenure-track academic jobs is suggestive of the problem.

But then, if you’re faculty at one of these institutions, these things have probably crossed your mind. The suggestion that departments should design their programs to prepare students for their future lives as contingent labor is a little shocking (which is not to suggest the situation is desirable, ahem). Though it is easy to respond with anger to Colander’s suggestion, I think what is more to the point, perhaps with the cold, dismal eye of the economist, is what people, both students and faculty, are willing to sacrifice in the name of love: in this case, the love of literature. Personally, I don’t think I can go quite where Colander is going and set up a doctoral program that recognizes that many of its students will have no better professional future than the one with which they entered the program. He tosses out the idea of non-academic jobs. Fine. Let’s put a pack of economists to the task of identifying current non-academic jobs for which a PhD in English (or some reasonably modified version of such) is a required or at least preferable qualification.

What is reasonable, at least to me, is thinking about how tier 3 and 4 institutions might revise their curriculum to prepare graduates for the kinds of academic jobs they do land.  Again, dog-bites-man I think.

Categories: Author Blogs

Rhetoric and the Digital Humanities: a new essay collection

9 January, 2015 - 08:19

Fresh off the presses, Rhetoric and the Digital Humanities, edited by Jim Ridolfo and William Hart-Davidson, from University of Chicago Press (AMZN).

Here’s the abstract to my contribution, “Digital Humanities Now and the Possibilities of a Speculative Digital Rhetoric.”

This chapter examines connections between big data digital humanities projects (the Digital Humanities Now project in particular), digital rhetoric, and the philosophies of speculative realism (focusing on Bruno Latour). It addresses the critique that digital humanities are under-theorized and connects these critiques with those made against speculative realism’s use of scientific and mathematical concepts. Finally it proposes how a speculative digital rhetoric might contribute to a network analysis of informal, online scholarly work.
Keywords: big data, speculative realism, Bruno Latour, middle-state publishing, nonhuman

Some liner notes:

The digital humanities is a rapidly growing field that is transforming humanities research through digital tools and resources. Researchers can now quickly trace every one of Isaac Newton’s annotations, use social media to engage academic and public audiences in the interpretation of cultural texts, and visualize travel via ox cart in third-century Rome or camel caravan in ancient Egypt. Rhetorical scholars are leading the revolution by fully utilizing the digital toolbox, finding themselves at the nexus of digital innovation.

Rhetoric and the Digital Humanities is a timely, multidisciplinary collection that is the first to bridge scholarship in rhetorical studies and the digital humanities. It offers much-needed guidance on how the theories and methodologies of rhetorical studies can enhance all work in digital humanities, and vice versa. Twenty-three essays over three sections delve into connections, research methodology, and future directions in this field. Jim Ridolfo and William Hart-Davidson have assembled a broad group of more than thirty accomplished scholars. Read together, these essays represent the cutting edge of research, offering guidance that will energize and inspire future collaborations.

Stuart A. Selber, author of Multiliteracies for a Digital Age: “Ridolfo and Hart-Davidson have produced a volume that interrogates the most important questions facing both rhetoric scholars and teachers who are interested in the digital humanities and digital humanists who are interested in the rhetorical dimensions of multimodal texts. Avoiding the negative aspects of territorialism and disciplinary politics, the contributors remix theories, practices, and methods in new and exciting ways, mapping productive relationships between rhetorical studies and the digital humanities and illuminating how these areas intersect and interanimate one another. This volume should be required reading for anyone who cares about the future of writing and reading.”

Collin Brooke, Syracuse University: “Rhetoric and the Digital Humanities is a landmark collection for scholars in rhetoric and writing studies. Its attention to procedurality, coding, scholarly communication, archives, and computer-aided methodologies, among other things, maps many of the important changes in disciplinary terrain prompted by the emergence of the digital humanities. It’s also a compelling demonstration of the role that rhetoric and writing studies can and should play in discussions about digital humanities. This book will provide colleagues across the disciplines with a strong sense of the ways that rhetorical studies might intersect with their own work.”

Matthew K. Gold, Debates in the Digital Humanities: “An important and timely exploration of the many ties that bind the digital humanities and composition/rhetoric. Rhetoric and the Digital Humanities is a much-needed book that will stir conversations in both fields.”

The Table of Contents

Introduction
Jim Ridolfo and William Hart-Davidson

PART ONE  Interdisciplinary Connections

1 Digital Humanities Now and the Possibilities of a Speculative Digital Rhetoric
ALEXANDER REID

2 Crossing State Lines: Rhetoric and Software Studies
JAMES J. BROWN JR.

3 Beyond Territorial Disputes: Toward a “Disciplined Interdisciplinarity” in the Digital Humanities
SHANNON CARTER, JENNIFER JONES, AND SUNCHAI HAMCUMPAI

4 Cultural Rhetorics and the Digital Humanities: Toward Cultural Reflexivity in Digital Making
JENNIFER SANO-FRANCHINI

5 Digital Humanities Scholarship and Electronic Publication
DOUGLAS EYMAN AND CHERYL BALL

6 The Metaphor and Materiality of Layers
DANIEL ANDERSON AND JENTERY SAYERS

7 Modeling Rhetorical Disciplinarity: Mapping the Digital Network
NATHAN JOHNSON

PART TWO  Research Methods and Methodology

8 Tactical and Strategic: Qualitative Approaches to the Digital Humanities
BRIAN MCNELY AND CHRISTA TESTON

9 Low Fidelity in High Definition: Speculations on Rhetorical Editions
CASEY BOYLE

10 The Trees within the Forest: Extracting, Coding, and Visualizing Subjective Data in Authorship Studies
KRISTA KENNEDY AND SETH LONG

11 Genre and Automated Text Analysis: A Demonstration
RODERICK P. HART

12 At the Digital Frontier of Rhetoric Studies: An Overview of Tools and Methods for Computer-Aided Textual Analysis
DAVID HOFFMAN AND DON WAISANEN

13 Corpus-Assisted Analysis of Internet-Based Discourses: From Patterns to Rhetoric
NELYA KOTEYKO

PART THREE  Future Trajectories

14 Digitizing English
JENNIFER GLASER AND LAURA R. MICCICHE

15 In/Between Programs: Forging a Curriculum between Rhetoric and the Digital Humanities
DOUGLAS WALLS

16 Tackling a Fundamental Problem: Using Digital Labs to Build Smarter Computing Cultures
KEVIN BROOKS, CHRIS LINDGREN, AND MATTHEW WARNER

17 In, Through, and About the Archive: What Digitization (Dis)Allows
TAREZ SAMRA GRABAN, ALEXIS RAMSEY-TOBIENNE, AND WHITNEY MYERS

18 Pop-Up Archives
JENNY RICE AND JEFF RICE

19 Archive Experiences: A Vision for User-Centered Design in the Digital Humanities
LIZA POTTS

20 MVC, Materiality, and the Magus: The Rhetoric of Source-Level Production
KARL STOLLEY

21 Procedural Literacy and the Future of the Digital Humanities
BRIAN BALLENTINE

22 Nowcasting/Futurecasting: Big Data, Prognostication, and the Rhetorics of Scale
ELIZABETH LOSH

23 New Materialism and a Rhetoric of Scientific Practice in the Digital Humanities
DAVID GRUBER


when rhetoric gets real

8 January, 2015 - 09:39

In Pandora’s Hope, Latour tells the story of being asked if he “believes in reality.” His response was something to the effect of not realizing that reality was something one needed to believe in. Elsewhere Graham Harman has written of an email exchange with Manuel DeLanda, who wrote “For decades admitting that one was a realist was equivalent to acknowledging [that] one was a child molester.” Harman’s response? “The past tense may be too optimistic, since it is not clear that those decades lie entirely behind us.” That was 2007. Since then we’ve been up, down, and around the hype adoption cycles of speculative realism, new materialism, the “nonhuman turn,” etc., etc. To be honest, I’m not sure if the result has changed the situation Latour, DeLanda, and Harman describe.

Rhetoric is in an odd situation in relation to these matters. On the one hand, rhetoric is classically interested in human symbolic action. Its stereotypical detractors would declare rhetoric to be idealist to a fault, uninterested in “reality” or “truth” and squarely focused only on what people think and what they can be persuaded to think. On the other hand, rhetoric is equally invested in the ideas of the public and the marketplace, of justice, deliberation, and so on. In other words, rhetoric recognizes the very real, material effects of symbolic action. One assumes those effects are occurring in reality. Of course, to be an idealist does not require denying reality. It simply means that one’s access to reality is subjective. As the correlationist would put it, one only sees the world as it relates to oneself.

What does it mean to call rhetoric “real”? To start, there are two interrelated takes on this. To be a realist is to assert the existence of a mind-independent reality that exists beyond empirical observation. As DeLanda notes, this means that the realist’s “first task is to delimit the kinds of entities that it considers legitimate inhabitants of the world.” Some parts of the real world exist only in relation to humans (e.g. my university) while others (e.g. mountains) do not, and still other things may exist only in human minds (e.g. arguably heaven and hell, though clearly some may argue that ideas have mind-independent realities as well or think these things exist in the same way mountains do). Certainly one could say that there are rhetorical practices that are as dependent upon humans as a university would be. So a realist would be faced with three options for rhetoric:

  1. Rhetoric exists only in human minds; it is not a legitimate inhabitant of the world.
  2. Rhetoric is real but dependent upon humans to exist.
  3. Rhetoric exists independent of humans. If there suddenly were not humans, there would still be rhetoric.

So let’s say I adopt position #3, with the recognition that there are certain rhetorical practices that would fit #2. Such a statement would be speculation. One would have to establish means for investigating the claim, as DeLanda does with his concept of quasi-causal mechanisms. There are other theories out there, of course.

What are some of the implications of this position?

  1. Rhetoric precedes humans and thus symbolic action. Rather than rhetoric being invented as a way of using language, language emerges as a capacity of rhetorical interaction.
  2. Rhetoric is not an exclusively human trait. It is not evidence of the ontological exceptionality of humans. It is not evidence of a human-social-cultural world that is ontologically separate from the natural world.
  3. Human practices of rhetoric emerge, of necessity, in relation to nonhuman rhetoric. There is no purely human or social rhetoric.
  4. Because human practices of rhetoric rely upon nonhumans (of all kinds), those practices shift along with our nonhuman relations (the obvious example being media technologies).
  5. Though human practices shift over time and space, there is no inherently human rhetorical practice that can be threatened by these changes.
  6. That said, human-nonhuman relations (networks, assemblages) shape rhetorical practices, which in turn have other real effects.

In my view, as rhetoricians, and teachers of rhetoric in particular, we proceed every day as if we believe #6. When we ask students to sit in a circle; when we do some freewriting to give students a chance to think through a question or “get the juices flowing;” when we ask students to put away their cell phones; when we require students to write in one genre rather than another; when we write on the chalk board, use a handout, or show a video; do we not do those things because we believe the nonhumans involved shape our capacities for rhetorical action?

If I add into this something like Andy Clark’s extended mind, then what is asserted here about rhetorical practice might be broadened to all those things that we might conventionally view as the product of human thought. I tend to think of it this way. Thoughts are real. They can be measured empirically, if partially, by fMRI and other technologies. They have real effects, like this blog post. Thoughts may be ephemeral, short-lived, but so is a gust of wind. Is the gust real? Are subatomic interactions occurring in Planck time not real? Thoughts are just things in the world. Some emerge in relation to humans; others do not. At the very least we would say some other animals think. In my view, even if we limited rhetoric to a subset of things that humans think (which I would not), this would not make rhetoric any less real.

Instead one might ask the reverse question. Is rhetoric a kind of thought? Or is thought a kind of rhetorical relation? That is, do rhetorical relations create the conditions for the capacities of thought and agency?  If I asserted that the minimal requirements for a rhetorical encounter were an expressive force and an object capable of sensing the expression, would that presuppose thought? I don’t know. I am not particularly interested in studying the rhetorical relations among rocks or quasars or even among a flock of geese or a stand of trees, but I’m also not interested in declaring a priori that such investigations are out of bounds.

I am interested in investigating nonhumans participating in human rhetorical practices, media technologies in particular, though not exclusively. Take, for example, the image attached to this post, which depicts guest workers in Djibouti seeking cell signals from across the sea so that they can phone home. How can we study the ways such nonhumans participate in our rhetorical and cognitive activity? The idealist can only look into the human mind (which I would not term a legitimate inhabitant of the world); perhaps one can say something about capitalism. The empiricist (e.g. the cognitive rhetorician or the activity theorist) is limited to the observable world, to her qualitative methods. So one might observe and interview these guest workers, and I will not deny the usefulness of that work. However, the signature difference with the realist (and DeLanda puts this well) is that one does not view the knowledge one creates as a representation of either a mind-dependent (idealist) or mind-independent (empirical) reality but as a construction, a composition (as Latour says in his manifesto), that has effects. As such one can go beyond the empirical representation or cultural-critical interpretation of these guest workers to speculate on the networks of relations that produce this event. As Latour observes, when we create scientific knowledge we change the world. Of course we do; why else would we go to all that work?

And this brings me back to the native heart of rhetoric: effecting change, persuading.  Though in many ways the study of rhetoric appears distinctly suited to idealism, when we think of rhetoric as practice, as know-how, in a way that philosophy can never be, it has realist roots. If rhetoric isn’t the know-how to interact/compose with objects to have real effects, then what is it?


Star Wars, empires, WPAs, and postcomposition

8 December, 2014 - 14:11

I’m working on the fourth chapter of my monograph, where the focus will be more on pedagogy, and I’ve been reading Sid Dobrin’s Postcomposition, which is a great book in my view. Basically I agree with Dobrin. Our discipline has defined itself, and the study of writing, in terms of subjectivity, and more specifically in terms of teaching subjects (i.e. students) to write. In doing so, it has developed a resistance to “theory,” that is, a resistance to postmodern theory. Dobrin doesn’t pull many punches in this regard, especially when it comes to his discussion of the WPA organization or wpas (i.e. people serving roles like mine). In some respects, his argument reminds me of Sirc’s in Composition as a Happening: both point to the formation of the discipline some 30-40 years ago and wonder if we might not have gone in a different direction.

Here’s the line from Dobrin that inspires the title of this post:

Empires often form in part by using a rhetoric of safety, espousing protection of individual difference under imperial rule. Unification and standardization are imperial tools for consolidating rule. Disparate locals fighting individual institutional battles are willing to offer votes of confidence to a ruling system if that system stands as an ally (underhanded though it was, this was specifically how Palpatine was able to gain control over the Senate and achieve a vote of nonconfidence in the Republic, leading eventually to the formation of the Empire). Hence, the immediate benefit of empire is the manner in which the homogenizing force is able to counter previous oppressions of individual entities. In this sense standardization and homogenization become better systems than those previously in place. The history of composition studies’ oppression is countered in the building of Empire. (107)

FYC is created based on a perception of student need/deficiency. From the start, it’s about the students. Dobrin’s point is that composition builds its disciplinary empire (such as it is) on meeting this student need and then later on the administration of the vast programs of TAs and adjuncts deployed. In this chapter, Dobrin picks out Richard Miller’s 1999 PMLA article “‘Let’s Do the Numbers’: Comp Droids and the Prophets of Doom,” which makes the argument that rhet/comp doctoral programs should focus on WPA administration, since that’s where the jobs are. By chance, another Star Wars reference? Though here Miller is referring to a Cary Nelson article in which Nelson refers to the attitude of faculty at elite institutions who view teaching writing in terms of

Rhet/Comp Droid assembly lines. These dedicated “droids,” so many literature faculty imagine, will fix comma splices, not spaceship wiring. But why give Rhet/Comp Droids extra leisure time? What are they going to do with time off? They beep and whir and grade, that’s all. They’re not training for research.

Better a droid than a stormtrooper I suppose. We seem to be mixing analogies here. Nevertheless there seem to be several inter-related elements here:

  • the discovery of a widespread and significant student need for writing instruction
  • the invention of a course (FYC) and the invention of a class of instructors to teach the course (TAs/Adjuncts)
  • the invention of a discipline that manages these and validates them through research

Dobrin suggests shifting research away from student-subjects and pedagogy and toward the study of writing itself as an ecological process (a la Guattari): an ecocomposition. That points at both the first and third bullets. Change the conversation. He also argues for the elimination of our reliance on contingent labor to deliver FYC. He doesn’t really call it an abolitionist argument, but abolishing the FYC curriculum is one way to achieve that goal. Though what happens to composition studies if there is no FYC?

So I’m coming at this from a similar angle, as my book basically comes from the position of suggesting that our discipline (and higher education in general) struggles to address digital literacy because of its commitment to viewing symbolic behavior as an exceptional characteristic of human-ness. In other words, when we say “writing” we struggle not to see human writers. If, on the other hand, rhetoric viewed itself as studying the nonhuman activities/objects we call writing (broadly conceived), including the relation of humans with writing, then it would come at these issues differently. Maybe that sounds like a Jedi mind trick. I don’t know. I think it means something like this. Academic-disciplinary writing networks/activity systems/assemblages involve humans. They have various mechanisms for establishing this involvement. As rhetoricians we might study these mechanisms (among many other things). And we might teach people about these mechanisms. But we don’t necessarily need to be the mechanism, and we don’t need to be the managers of the mechanism either.

I am maybe a little more sympathetic to the wpa than Dobrin. I agree that the wpa is not a position from which to launch revolutions. That said, at UB the Senate just approved a revision to our general education curriculum. If it is implemented as proposed, we will be replacing our part-time adjuncts with full-time NTT positions teaching writing in the disciplines across the campus. Fear not, there will still be an army of TAs for a wpa to oversee in the FYC program. Not revolution but decent reform, if seen through to the end. Any venture of this size is going to need administration. I don’t think there’s anything inherently wrong in that. As I see it, the problem FYC has often faced (depending on the campus) is that the instructors are part-time (even the TAs) and not integrated into a discipline. A smaller number of full-time disciplinary instructors could offer a very different curriculum and culture, one that would very much change the role of the wpa. However, it wouldn’t be easy because we don’t really know what we would put in place of FYC. Even the “writing studies” approach, which purports to introduce students to the discipline, is still a discipline defined by the study of FYC.

What would a disciplinary replacement for FYC look like if it wasn’t an introduction to composition studies but an introduction to writing studies otherwise shaped? How would it define itself in relation to the broader goals of general education (assuming it would still be a required general education course)? Who would teach it and by what method? Would we still need small classes? What would we do with the TAs that we displaced? How would we meet their needs?

I imagine I’ll touch on some of those questions in my book, though not really the last ones as those are questions that necessarily have local answers.


changing the ends of scholarship

26 November, 2014 - 09:32

A continuation of the last post on graduate education…

As I think more about it, it’s fairly obvious that this is all part of a larger system with graduate curriculum as the obvious start point. Graduate coursework is also folded into later parts of the process from the perspective of the faculty (who teach their research). I’ll take it as a given that the production of scholarship is a necessary feature of the academy, though it should be clear that the relationship between faculty and research is a historical one that could, in theory, change. Obviously the demands on faculty to produce scholarship vary by institution and over time. That variation comes in terms of content and method, not just in amount. It also varies in terms of genre, which makes sense as genre would be interwoven with content and method. These variations occur natureculturally (to coin an obnoxious adverb).

I’ve written about this matter many times. It’s one of the primary themes of this blog and my scholarship. In English Studies, really across the humanities, our research practices are woven in tight and specific ways with 20th-century print culture and technologies, as well as persistent values about symbolic behavior and cognition stemming from the modern world. Put as a question: why do we write single-author monographs? Answer: cuz. There’s no real reason other than the cybernetic, territorializing forces of institution and history, which admittedly are quite powerful. Lord knows no one really wants to read them. But we believe in the totemic power of the monograph. It is undoubtedly a particular kind of scholarly experience. It has heft, or something. It represents long hours of sustained, focused, introspective and productive thought, and that is what we value I think: a particular model of the “life of the mind.”

It’s not possible to snap one’s fingers and change these things, but things have been shifting for a long time, at least 30 years. Funding for higher education, the availability of tenure-track positions, the popularity of the humanities, the viability of the academic publishing marketplace, the changing demographics of student populations, the emergence of digital media: yes, things have been changing for a while, and that doesn’t even begin to address changes within disciplines. Those are extra-disciplinary drivers.

So let me just offer a hypothetical. Let’s say we worked in small groups and published research collectively. No doubt it would be painful initially. Then in the future we hired faculty on the basis of joining one of these groups and we admitted graduate students to participate in these groups. Those students wouldn’t need to produce single-author dissertations because that’s not the kind of work any of us would be doing. Instead, after being trained, they’d go off and join similar research groups elsewhere as professors. If you don’t like that hypothetical, that’s fine. It isn’t meant as a serious proposal. Instead it’s meant to illustrate the ways in which the problems we have with graduate education are interwoven with the larger activity systems of humanities research. We don’t have to be what we are. We certainly don’t have to be what our academic predecessors were.


What’s worse? Coursework or dissertation?

25 November, 2014 - 20:16

Today, as I regularly do around this time in my Teaching Practicum, we discussed the job market. It’s not much fun as you can imagine. I think (I hope) that it is illuminating. I mostly do it because I want students to see the relationship between the job market and their development as teachers (as well as scholars). Today Inside Higher Ed also published this little number on how time is spent in graduate school. As the story relates, much of the focus on revising doctoral programs (at least in the humanities) has been on shortening the dissertation process, but the study covered in the article indicates that the reason humanities degrees tend to take longer than other doctoral programs is the time devoted to coursework (4 years on average). So that’s 4 years to go through coursework and exams and 3 years, on average, to write the dissertation. And that’s down from where we were a decade ago.

This is another wrinkle in the ongoing humanities project of revising doctoral programs, which might rightly strike one as missing the point when the real problem is the lack of tenure-track jobs.  The lack of jobs is certainly a problem, but to say that it is a problem in relation to doctoral programs would require making the presumption that the objective of doctoral programs is to professionalize students and prepare them to do tenure-track jobs. There’s no doubt we are all happy when our students get jobs, and there’s no doubt that programs are at least partially evaluated for their success at placement. But if the point of doctoral programs is really to prepare students to be professors then they certainly have a funny way of going about it.  That is, since most academic jobs are primarily teaching jobs, most of the doctoral preparation (one would think) would be teacher-training. In reality almost none of it is. Most tenure-track jobs do not require faculty to produce books for tenure, so why all this effort put into the proto-monograph we call the dissertation? Honestly, if doctoral programs were transformed to be job preparation, then very little of what you commonly see would remain.

But that’s not really what doctoral programs are about. Instead, as near as I can figure, the graduate curriculum is a tool for creating a particular kind of intellectual, disciplinary, scholarly community. In that community, professors carry out their research and discuss their research with their students in seminars, and students attend seminars and pursue their own research interests under the guidance of faculty. Let’s just postulate that this is a good thing worth preserving, or at least that the community is worth preserving. Maybe there would be a way of supporting this community while shortening the years of coursework and also making the dissertating process more efficient. But then that really points back to the central disconnect. The reason for making the path to the PhD shorter and more efficient is to reduce the demands placed on students and the risks they take in relation to the job market. But if we want to do that, then we really need to be asking very different questions.

Categories: Author Blogs

writing and the speed of thought

23 November, 2014 - 12:02

When we first learned to write, we focused on holding the pencil and forming the letters. The attention given to the physical task of writing likely interfered with our ability to give attention to what we wanted to say. Later, after mastering writing (or, if you are like me, after giving up on forming legible letters), the ability to write things down made it easier to develop more complex lines of thought. I think I experienced similar pressures on cognitive load when learning to type. And I still have a related experience in my need to focus on the virtual keys on my iPhone.

When my writing experience is working well, the thoughts just seem to flow into sentences. I don’t have to stop and think about how to put a given thought into words. Everything seems to be clicking. I know where I want to go next but not in a fully conscious way. If I start to turn my attention farther into the future, toward the end of the paragraph or the bottom of the page, or, alternately, if I start paying attention to the movement of my fingers on the keys, then the whole mental state starts to collapse, as if it were a delicate wave structure. And I suppose neuroscientists might explain such states, at least partially, in terms of waves. Other times, I stop and plan, my mind reaching out for multiple connections as if I am gathering mental strength to hurl myself forward into the stream of writing. Here my mind converses with itself, point and counterpoint, trying out different rhetorical strategies, poking holes in arguments, persuading itself. I become argumentative. I have, in the past, thought about this as a process of intensification, a kind of boiling over of the mind, where speed, connection, and argument lead to some change, some insight, where something new, with new properties, emerges, like water that becomes steam, leaving the ground to float over a landscape of concepts.

However, over the years, I’ve also come to find that to be an exhausting, unpleasant, and unsustainable process. It is also perhaps not the most productive approach. In composition studies, mostly through CHAT and video game studies, we’ve become familiar with Mihaly Csikszentmihalyi’s concept of flow. In neuroscience, flow states have become associated with transient hypofrontality, a concept that’s been around for about a decade, I believe. What is that? Basically (and I will not pretend to more than a basic understanding), transient refers to a mental state that comes and goes. Hypofrontality references a reduction in the operation of the prefrontal cortex (the part of the brain responsible for all of our higher order cognitive operations, including symbolic behaviors). Transient hypofrontality is commonly associated with certain practices like meditation, hypnosis, runner’s high, and the use of certain drugs (e.g. LSD). It is also associated with flow states. This appears counter-intuitive. Typically we imagine that we are at our most capable when our prefrontal cortex is fully engaged, not when it is operating in a reduced way. Transient hypofrontality suggests a reduction in our attention. As the article linked above suggests, athletic performance causes hypofrontality because physical demands translate into cognitive demands on implicit (i.e. unconscious) mental systems. I think that’s why I enjoy exercise. If you push yourself hard enough, you literally lose your capacity to think.

What does this have to do with writing, and particularly with writing practices that are intertwined with exploration and invention (as opposed to more transactional and mundane writing practices)? Writers have used a variety of strategies, from drug use to automatic writing, to activate hypofrontality. That’s nothing new. And there’s research into writing and flow states. However, I’ve always thought about it as speeding up. Now I’m wondering if it is better to think of this as slowing down. Not as fully activating the brain and putting it all to work, pushing it to its limits, but calming the mind.

Categories: Author Blogs

The empiricist-idealist divide in composition studies (and the role of realists)

10 November, 2014 - 12:12

I’ve been thinking about these big-picture disciplinary issues primarily in terms of my Teaching Practicum, but maybe it is useful to share this here as well. Manuel DeLanda has a helpful brief piece on “Ontological Commitments” (PDF) in which he identifies three familiar categories of philosophical positions on ontology: idealism, empiricism, and realism. I don’t see these as absolute categories but rather as historical ones that work most closely with modern Western traditions. As he summarizes:

For the idealist philosopher there are no entities that exist independently of the human mind; for the empiricist entities that can be directly observed can be said to be mind-independent, but everything else (electrons, viruses, causal capacities etc.) is a mere theoretical construct that is helpful in making sense of that which can be directly observed; for the realist, finally, there are many types of entities that exist autonomously even if they are not directly given to our senses.

The long history of rhetorical philosophy shares with most of the humanities a commitment to an idealist ontology. This is what is sometimes termed the correlationist perspective: the idea that we can only know the world as we relate to it; we can only know what we think, and perhaps not even that entirely. In this framework, rhetoricians study symbolic action and representation. However, composition studies begins with a fair amount of empirical research into cognitive and writing processes. In part this comes out of historical connections with education departments and perhaps the social scientific side of rhetoric (mostly in communications departments). When the post-process moment comes in the late eighties, one ends up with an idealist side (which would primarily include cultural studies pedagogies) and an empiricist side (with a variety of practices including CHAT, technical communication, and cognitive rhetoric). In most respects the former are more comfortable in English departments since literary studies is almost entirely idealist. (One possible exception is digital humanities methods, which at least some might view as empirical.)

From the perspective of first-year composition programs, one ends up mostly with idealist curricula which focus on subjectivity, culture, representation, and ideology and view “the writing process” in those terms (i.e. as something we think rather than something real that we observe). Because things like ideology and culture are not directly observable, empirical approaches to composition tend to focus on a smaller scale, genre for example, which is not to say that they do not also assert theoretical constructs like ideology. Empiricism allows for both, as we see with CHAT, which combines Marxian philosophy with empirical methods.

So what about realism? Realism as DeLanda defines it sits opposed to idealism and empiricism on a fundamental question. Idealists and empiricists both decide a priori what makes up the world. It is either “appearances (entities as they appear to the human mind) or directly observable things and events.” Realists, however, do not know a priori what the contents of the world may be, for the world may contain beings that cannot be observed. As such, realists must speculate. Now from here DeLanda goes into his particular version of speculative realism. As it happens, I think his approach is especially useful for considering the role realism might play in composition studies, specifically through his claim that “knowledge is produced not only by representations but by interventions.”

What does this mean? Basically, in DeLanda’s realist philosophy what is real is not a priori, transcendental, or essential. It is emergent in the relations among things. As such, we can know things not simply by representing some transcendental truth but by intervening, and thus making or constructing knowledge. This to me is very sympathetic with Latour. It also results in an epistemological division between “know that” (representation) and “know how” (making).  This has a clear pedagogical implication:

Unlike know-that, which may be transmitted by books or lectures, know-how is taught by example and learned by doing: the teacher must display the appropriate actions in front of the student and the student must then practice repeatedly until the skill is acquired. The two forms of knowledge are related: we need language to speak about skills and theorize about them. But we need skills to deploy language effectively: to argue coherently, to create appropriate representations, to compare and evaluate models. Indeed, the basic foundation of a literate society is formed by skills taught by example and learned by doing: knowing how to read and how to write.

DeLanda’s argument ultimately suggests that realists are better poised to deal with questions of know-how. We see a similar view in Latour and even in some of the object-oriented ontology interest in carpentry.

Conventionally in composition studies we encounter a problem of translation in trying to shift from idealist knowledge about rhetorical theory (e.g. logos, pathos, ethos) or empirical knowledge about process (e.g. we know that writers have these practices) to teaching how to write. Empiricists end up trying to argue that disciplinary representational knowledge about writing (i.e. knowing that writing works in certain ways) can be a useful tool in learning how to write in a particular situation. This is the writing about writing approach. Idealists similarly suggest that knowing that ideology and culture operate in particular ways in relation to discourse will make us better writers…. somehow. Realists, on the other hand, actually have a different set of ontological-epistemological options available to them. For the realist it isn’t about theory and knowledge on the one hand and practice on the other, with no reliable way to get across the divide. Instead, know-how has its own theories (of intervention), its own ways of determining significance (of what makes a difference). Know-how can be taught just as reliably as know-that.

So the realist compositionist might ask whether, rather than discovering and teaching the “know-that” of our discipline, we can teach the “know-how” of our discipline, which might be where the practice of rhetoric (as opposed to the idealist or empirical study of rhetoric) lies.


Categories: Author Blogs

motivation and attention as matters of concern

29 October, 2014 - 14:19

Motivation and attention are common subjects of discussion in our graduate teaching practicum. Students in composition don’t seem motivated to do the readings or really work on their assignments. They don’t pay attention in class. They are distracted by their devices. They don’t participate as much as we would like. None of these are new problems in the classroom. Of course they have been made new to some degree by their combination with emerging technologies. Fear not though, this is not yet another “laptops in the classroom” post.

In the past I have written here and elsewhere about the concept of “intrinsic motivation.” Essentially the point is that complex creative tasks, such as the kind of writing tasks we often assign in college, are hard to accomplish with, and indeed can often be inhibited by, extrinsic motivations (e.g. grades). To develop as a writer one needs an “intrinsic motivation” to succeed, which requires some sense of autonomy, a task that is within reach (insert zone of proximal development talk here), and some purpose (which could be “becoming a better writer” or could be any goal that could be achieved through writing, e.g. winning a grant). Indeed we rarely want to “become a better writer.” Instead, we might want to be a more successful grant writer, finish and publish our monograph, attract more readers to our blogs, and so on. Are those extrinsic motivations? Hmmm… good question.

I’ll set that question aside though and come at this from a Latourian angle (as the “matters of concern” reference in the title promises). Let’s say that agency is an emergent capacity of relations rather than an internal characteristic of humans. In that case, when one says “intrinsic motivation” (hence all the scare quotes), one must respond by asking “intrinsic to what?” The conventional answer is intrinsic to the self or individual. In Latourian terms though one might try to say intrinsic to the network, but even that doesn’t work so very well, as networks are not the most discretely bounded entities. In networks, actors are “made to do.” As such, they can be “made to be intrinsically motivated,” even though the makers are not necessarily all “inside.” In the end intrinsic motivation has to do with an actor’s affective relationship to a given task. So while I think it is rhetorically useful to suggest to students that they must find some intrinsic motivation for writing, I think it is more accurate to say that we need to compose networks of motivation.

Attention offers a similar problem. We ask students to pay attention and speak of economies of attention. I think we imagine attention as a limited natural resource and as an internal human characteristic. Again though, if we imagine that we are “made to attend” to actors and networks, then attention is an emergent capacity. Furthermore, attention might be too general a concept, including a range of cognitive activities from “single-point” to “multi” to “hyper.” Is introspective attention the same as the attention required for driving in heavy traffic, as that required for listening to a lecture, as that required for making a free throw in a close game? Do we think neuroscience can answer that question? Certainly networks (a lecture hall for example) are designed so that humans are made to attend to the speaker in a particular way (not that they always work).

With matters of concern (see “Why has critique run out of steam? From matters of fact to matters of concern“), Latour seeks to move away from the two-step critical process that either criticizes the person who claims s/he is made to do something by an external object (because objects have no real control over us; that’s all in our minds) or criticizes the person who believes s/he has free will (because we are shaped by cultural-ideological forces). So it is neither acceptable to say that we can’t pay attention because other objects distract us (maybe that’s true, but we are responsible for turning them off then) nor to say I have control over my ability to pay attention (because ideological-cultural forces shape my desire). Do students express free will when seeking motivation to write or are their motivations overdetermined by ideology? Yes and yes. No and no. Science comes along and offers measurements of the plasticity of the brain and the effects of new technologies upon it. We conduct studies on the cognitive limits of attention. We make motivation into a science as well. Human nature, cultural forces, the individual capacity for thought or agency: where does one end and the other begin? And what role do technologies play? They are real objects with their own “natures,” their own science, but also their own cultural contexts.

Ban computers from the classroom. Does that make you a fetishist because you believe these devices make people do things? Does it mean that you naively believe that once the devices are out of the room the students will be free from the overdetermining forces of society? Can we be more or less overdetermined? Of course not because no one can live in those old critical worlds (any longer). We live in a messier world of hybrids where attention and motivation are emergent phenomena that are always up for grabs in any network.

Categories: Author Blogs

building communities in the humanities

23 October, 2014 - 10:22

I was participating in a discussion this morning around a proposal to build a kind of DH-themed interdisciplinary community on our campus. One of the central concerns that came up was that faculty in the humanities don’t tend to collaborate, so was it really feasible to imagine a scholarly community that was centered on such faculty? It’s a good question. We have struggled somewhat with building a DH community on campus because, when it comes down to it, for most conventional humanities scholars, collaboration with one’s local colleagues serves little or no purpose in relation to scholarly production. As we discussed this morning, the digital humanities is something of a counterculture in the humanities then, because it is an example of a methodological shift that invites collaboration in a way that is not typically seen elsewhere in these fields. The meeting was quite early so my mind maybe wasn’t clicking well enough at the time, but later it struck me that in fact the humanities in an abstract sense are as collaborative as any field but that those collaborations are harder to perceive because of the technologies that mediate them and what I would term the ontological/epistemological paradigms at work in research.

Let’s start with the technologies. You don’t see many humanities books or articles with multiple authors, so we clearly do not tend to collaborate to produce scholarship as authors. But the way that humanists engage with the texts they cite is quite different from what we see in most other fields. E.g., how often does one see a block quote in scientific or even social scientific research? It’s almost cliche to say that a humanist’s community sits on her bookshelf. We collaborate with other scholars through the mediating technologies of monographs and articles. And yes, every discipline does this in one way or another, but I think its role in the humanities is worth considering. Of course I also think it’s worth saying (since I’ve said it here many times already) that this disciplinary practice is obviously the product of certain 20th-century technological conditions that no longer pertain. But my point is to say that it’s not really a matter of moving from not being communal to being communal but rather of mediating community in a new way.

The other part of this though is the ontological/epistemological paradigm. By this I mean the way we understand our relation to our work and how that knowledge gets known. Put abstractly, in English Studies, for example, historically we have done our scholarship by reading texts. Even though we recognize the social-communal contexts of reading, the activity of reading itself is individual. Even though two people can huddle around a screen and read this post, each of you would still be reading it “on your own.” Similarly, writing is an individual experience whereby knowledge is not only represented but created. In a different field, where multiple scholars might collaborate on conducting an experiment, gathering data, and interpreting the results, the resulting article might be produced collaboratively as well, based on the research that has been completed. In the humanities though, I think the writing of scholarship is part of the production of the research: researching and writing are not so neatly separated as they appear in other fields. So I read a text, I create an interpretation, and I write about it. Even though we might say there is a community with whom I am collaborating through the mediation of the texts I am citing, in disciplinary terms I still view this as a solitary activity. So how do we move from “I” read, interpret, and write to “we” do these things? Or perhaps even more challengingly, how do we move to viewing these as networked activities?

This is one of the central questions of my current book project (the one I am hoping next semester’s research leave will allow me to complete), as I am arguing that the challenges we face in developing digital scholarly practices and teaching digital literacies really begin with founding ontological assumptions about symbolic behavior as an exceptional human characteristic. When symbolic behaviors become hybridized natureculture objects and practices, what does that do to the humanities and more particularly to rhetoric? For one thing I think it has to change the way we understand our disciplinary methods. It brings our interdependency to the foreground. Community, of course, is such a vexed word in theory-speak. I’m not going to get into that here, but I think it’s important to realize that when we say humanities scholars tend to work alone, we’re hallucinating that. That isn’t to say a shift to a more DH-style kind of collaboration wouldn’t require a real change in the lived experience of humanities scholars, or that it would be easy. I just don’t think we can really say that humanities scholarship can operate without community or collaboration.

Categories: Author Blogs

the relationship of English Studies to emerging media

13 October, 2014 - 09:40

As a digital rhetorician, it is not surprising that I take a professional, scholarly interest in emerging media from social media platforms to the latest devices. I am interested in the rhetorical-compositional practices that develop around them and the communities they form. I am particularly interested in how they are employed, or might be employed, for purposes of pedagogy and scholarly collaboration and communication.

However I don’t love these technologies. I am not a fan of them, and I do not teach the “appreciation” of digital media. I think this is something that is sometimes difficult for colleagues in literary studies to understand. Typically I think they do love their subjects. It’s a cliche after all that one goes to graduate school because one “loves literature.” To be fair, it is probably a broader cliche shared among more traditional disciplines in the arts, humanities, and natural sciences… to have a love of one’s subject. I suppose you could say that I have a love of, or at least an ardent fascination with, rhetorical practices. I can sit in a dull, pedantic faculty meeting and become interested in the particular rhetorical moves that people make, what is allowable, what isn’t, what people find convincing, and so on. I find emerging media interesting in this regard because we continue to struggle to figure out what those rhetorical practices should be. But I don’t love emerging media any more than I love faculty meetings.

I would estimate that I average less than one hour per day on my computer or iPhone for non-work related reasons. Like many of us in academia, the large majority of my work does require these devices. As an administrator, the plurality of this time is spent in email applications. There is no doubt that every aspect of my work–research, teaching, service–has been shaped by emerging media, just as my 20th-century predecessor’s work was shaped by the information-communication technologies of her era. And as an English professor, there’s no doubt that she studied those technologies (books in particular) and was an expert in their use. The 20th-century literature professor may not have realized it, but she was in a unique historical moment where the media objects she studied and the information-communication technologies she employed to do the studying were part of the same media ecology.

This is not a matter of love or hate or critique or ideology. It’s a matter of history. Literary studies emerged as a discipline in the late 19th and early 20th centuries (the MLA formed in the 1880s) in the context of the second industrial revolution. It served to expand print literacy and establish an Anglo-American national identity. It didn’t really matter that Matthew Arnold saw religious poetry as a salve against the depredations of industrialization or that the New Critics supported southern agrarian politics. Literary studies wouldn’t have existed without electrification and the constellation of technologies around it or the economy it enabled. By the time we get to the end of the Vietnam War, we’re already switching to a post-industrial information economy with new literacy practices, and we’re far less interested in that patriarchal Anglo-American literary identity. English tried to adapt, but it never really did.

The End.

Then some new disciplinary paradigm may (or may not) emerge, one that would bear as much resemblance to the industrial age discipline as the industrial age discipline bore to its antebellum antecedents. It isn’t one that loves digital technology any more than the last one loved industrial technology. But it is one that teaches people how to communicate, how to be literate, in the contemporary world. We’ve long ago given up the idea of fostering an Anglo-American national identity, but like our predecessors, we continue to be interested in how identity shapes, and is shaped by, media. We can and should continue to study literature, but just as the dimensions of literary study changed in the last century, they will change again. They’ve already changed.

No doubt we will continue to see anti-technology clickbait jeremiads from English faculty, the sermonizing of the literary clergy (like this one). I will leave it to you to decide for yourselves if these are outliers or more representative of the discipline. On the flipside, there is plenty of techno-love out there in our culture, companies that want us to buy their products, and so on. Such rhetoric is deserving of skepticism and critique. But if we want to attack the love of media then that cuts both ways. In my mind the intellectual misstep of falling for the hype of emerging technology is no worse than the one that leads one to ardent faith in the technologies of the past. If anything, the latter is worse because it seems to suggest that perfection was achieved. What are we really believing there? That we had thousands of years of human civilization leading to the invention of the printing press and then the novel, that the novel is the final apogee of human expression in some absolute, universal sense, and that all that follows, drawing us inexorably away from the print culture that supported the novel, is a fall from civilization and grace and a return to barbarism?

What exactly are they teaching in grad school these days? I’m fairly sure it isn’t this. At least I don’t see this going on around me. I don’t see people in love with digital media either. But I don’t think they need to be or should be. They just need to understand it, use it, and teach with it, just as they did with print media in the past.


Categories: Author Blogs

the ethics of digital media research

12 October, 2014 - 11:49

Dorothy Kim has a piece titled “Social Media and Academic Surveillance: The Ethics of Digital Bodies” on Model View Culture. I have to admit I don’t find her particular argument regarding these concerns to be especially convincing. However, that isn’t to say that ethical issues surrounding research using digital-social media don’t need to be addressed. As I often argue here, I think we continue to struggle to live in digital contexts. We import legal and social concepts like public and private, which don’t really work, or at least don’t work as they once did. Kim also makes an analogy between digital and physical bodies, which I see as even harder to work out thoroughly. Ultimately, socially-legally, as well as academically/professionally, we need to develop some better understanding of what these spaces and practices are and develop ethics that are independent of past contexts (though obviously they will be informed by them).

For example, we know that all the sites/services in question are not public in the sense of a public park or street, in the sense of being publicly owned. Everything one sees on all these sites is owned by someone, whether it be through terms of service, intellectual property, or copyright law. Maybe we should have a “public” Internet, one that is maintained through government and taxpayer money (as if anyone would feel like communicating openly on a government website!). When we think about a public park or street, we think of having certain rights and responsibilities related to our shared ownership of the space. This is a specific definition of public: “Of or relating to the people as a whole; that belongs to, affects, or concerns the community or the nation” (according to the OED). But in social media we don’t have shared ownership of anything. Instead public has a different meaning here, though not one that is surprising or rare.

To quote the OED again, it’s somewhere between:

Open to general observation, view, or knowledge; existing, performed, or carried out without concealment, so that all may see or hear. Of a person: that acts or performs in public.

and

Of a book, piece of writing, etc.: in print, published; esp. in to make public

I suppose I think about it this way. Media that are published on the Internet are public in a way that combines the two definitions above. While it is possible we could redefine informed consent in some way, even then I think it would be very hard to say that publishing something on the Internet is not informed consent. There is a grayer area in the case of Facebook where statements are made to a limited audience of “friends.”

On the other hand, conducting research in a social media space where one is interacting with users, asking them questions or otherwise experimenting with them, seems to me to be a different matter, one that should involve informed consent. For example, Kim discusses the #raceswap experiments on Twitter where users changed their avatars to suggest they were of a different race and examined how other users treated them differently. Certainly the experiment recently done on Facebook (also mentioned by Kim) falls into that category. If that research were undertaken in an academic context for purposes of publication or with grant support, would it be the kind of thing we would expect to have IRB review and involve some kind of informed consent? In my view this is different from observing and studying public statements.

I can understand that as a member of a group of people, one might be unhappy with the way one’s public text is analyzed or believe that some other kind of research could or should be done. As Kim points out, users on Twitter have the right to shout back, protest, or whatever. And one could study that as well.  As academics, we may or may not consider those specific complaints to be legitimate. Any specific research may be done well or poorly. The research might be poorly communicated or represented. Academics have freedom to define and pursue their research but that freedom is always constrained by what other academics will agree to fund or publish. We can decide as an academic community what we value. To me that is all a very different matter from the ethical issues concerned with interacting with human subjects either face-to-face or online.

I don’t see anything unethical, as a general principle, with studying public texts. In specific cases, might that research be done poorly or unethically? Certainly. One could do poor or unethical research with anything. Should experimental interaction with users be done with IRB review and informed consent? Absolutely, but that’s a whole different question.

Categories: Author Blogs