Digital Digs (Alex Reid)

an archeology of the future

Star Wars, empires, WPAs, and postcomposition

8 December, 2014 - 14:11

I'm working on the fourth chapter of my monograph, where the focus will be more on pedagogy, and I've been reading Sid Dobrin's Postcomposition, which is a great book in my view. Basically I agree with Dobrin. Our discipline has defined itself, and the study of writing, in terms of subjectivity, and more specifically in terms of teaching subjects (i.e., students) to write. In doing so, it has developed a resistance to "theory," that is, a resistance to postmodern theory. Dobrin doesn't pull many punches in this regard, especially when it comes to his discussion of the WPA organization and wpas (i.e., people serving roles like mine). In some respects, his argument reminds me of Sirc's in Composition as a Happening, where he points to the formation of the discipline some 30-40 years ago and wonders whether we might have gone in a different direction.

Here’s the line from Dobrin that inspires the title of this post:

Empires often form in part by using a rhetoric of safety, espousing protection of individual difference under imperial rule. Unification and standardization are imperial tools for consolidating rule. Disparate locals fighting individual institutional battles are willing to offer votes of confidence to a ruling system if that system stands as an ally (underhanded though it was, this was specifically how Palpatine was able to gain control over the Senate and achieve a vote of nonconfidence in the Republic, leading eventually to the formation of the Empire). Hence, the immediate benefit of empire is the manner in which the homogenizing force is able to counter previous oppressions of individual entities. In this sense standardization and homogenization become better systems than those previously in place. The history of composition studies' oppression is countered in the building of Empire. (107)

FYC is created based on a perception of student need/deficiency. From the start, it's about the students. Dobrin's point is that composition builds its disciplinary empire (such as it is) on meeting this student need and then, later, on the administration of the vast programs of TAs and adjuncts deployed to meet it. In this chapter, he picks out Richard Miller's 1999 PMLA article "'Let's Do the Numbers': Comp Droids and the Prophets of Doom," which makes the argument that rhet/comp doctoral programs should focus on WPA administration, since that's where the jobs are. Another Star Wars reference, by chance? Though here Miller is referring to a Cary Nelson article in which Nelson describes the attitude of faculty at elite institutions who view teaching writing in terms of

Rhet/Comp Droid assembly lines. These dedicated “droids,” so many literature faculty imagine, will fix comma splices, not spaceship wiring. But why give Rhet/Comp Droids extra leisure time? What are they going to do with time off? They beep and whir and grade, that’s all. They’re not training for research.

Better a droid than a stormtrooper, I suppose. We seem to be mixing analogies here. Nevertheless, there seem to be several interrelated elements at work:

  • the discovery of a widespread and significant student need for writing instruction
  • the invention of a course (FYC) and the invention of a class of instructors to teach the course (TAs/Adjuncts)
  • the invention of a discipline that manages these and validates them through research

Dobrin suggests shifting research away from student-subjects and pedagogy and toward the study of writing itself as an ecological process (a la Guattari): an ecocomposition. That points at both the first and third bullets. Change the conversation. He also argues for the elimination of our reliance on contingent labor to deliver FYC. He doesn't really call it an abolitionist argument, but abolishing the FYC curriculum is one way to achieve that goal. Though what happens to composition studies if there is no FYC?

So I'm coming at this from a similar angle, as my book basically comes from the position of suggesting that our discipline (and higher education in general) struggles to address digital literacy because of its commitment to viewing symbolic behavior as an exceptional characteristic of human-ness. In other words, when we say "writing" we struggle not to see human writers. If, on the other hand, rhetoric viewed itself as studying the nonhuman activities/objects we call writing (broadly conceived), including the relation of humans with writing, then it would come at these issues differently. Maybe that sounds like a Jedi mind trick. I don't know. I think it means something like this. Academic-disciplinary writing networks/activity systems/assemblages involve humans. They have various mechanisms for establishing this involvement. As rhetoricians we might study these mechanisms (among many other things). And we might teach people about these mechanisms. But we don't necessarily need to be the mechanism, and we don't need to be the managers of the mechanism either.

I am maybe a little more sympathetic to the wpa than Dobrin is. I agree that the wpa is not a position from which to launch revolutions. That said, at UB the Senate just approved a revision to our general education curriculum. If it is implemented as proposed, we will be replacing our part-time adjuncts with full-time NTT positions teaching writing in the disciplines across the campus. Fear not, there will still be an army of TAs for a wpa to oversee in the FYC program. Not revolution but decent reform, if seen through to the end. Any venture of this size is going to need administration. I don't think there's anything inherently wrong in that. As I see it, the problem FYC has often faced (depending on the campus) is that the instructors are part-time (even the TAs) and not integrated into a discipline. A smaller number of full-time disciplinary instructors could offer a very different curriculum and culture, one that would very much change the role of the wpa. However, it wouldn't be easy, because we don't really know what we would put in place of FYC. Even the "writing studies" approach, which purports to introduce students to the discipline, still defines that discipline by the study of FYC.

What would a disciplinary replacement for FYC look like if it wasn’t an introduction to composition studies but an introduction to writing studies otherwise shaped? How would it define itself in relation to the broader goals of general education (assuming it would still be a required general education course)? Who would teach it and by what method? Would we still need small classes? What would we do with the TAs that we displaced? How would we meet their needs?

I imagine I’ll touch on some of those questions in my book, though not really the last ones as those are questions that necessarily have local answers.

 

Categories: Author Blogs

changing the ends of scholarship

26 November, 2014 - 09:32

A continuation of the last post on graduate education…

As I think more about it, it's fairly obvious that this is all part of a larger system, with the graduate curriculum as the natural starting point. Graduate coursework is also folded into later parts of the process from the perspective of the faculty (who teach their research). I'll take it as a given that the production of scholarship is a necessary feature of the academy, though it should be clear that the relationship between faculty and research is a historical one that could, in theory, change. Obviously the demands on faculty to produce scholarship vary by institution and over time. That variation comes in terms of content and method, not just in amount. It also varies in terms of genre, which makes sense, as genre would be interwoven with content and method. These variations occur natureculturally (to coin an obnoxious adverb).

I’ve written about this matter many times. It’s one of the primary themes of this blog and my scholarship. In English Studies, really across the humanities, our research practices are woven in tight and specific ways with 20th-century print culture and technologies, as well as persistent values about symbolic behavior and cognition stemming from the modern world. Put as a question: why do we write single-author monographs? Answer: cuz. There’s no real reason other than the cybernetic, territorializing forces of institution and history, which admittedly are quite powerful. Lord knows no one really wants to read them. But we believe in the totemic power of the monograph. It is undoubtedly a particular kind of scholarly experience. It has heft, or something. It represents long hours of sustained, focused, introspective and productive thought, and that is what we value I think: a particular model of the “life of the mind.”

It’s not possible to snap one’s fingers and change these things, but things have been shifting for a long time, at least 30 years. Funding for higher education, the availability of tenure-track positions, the popularity of the humanities, the viability of the academic publishing marketplace, the changing demographics of student populations, the emergence of digital media: yes, things have been changing for a while, and that doesn’t even begin to address changes within disciplines. Those are extra-disciplinary drivers.

So let me just offer a hypothetical. Let’s say we worked in small groups and published research collectively. No doubt it would be painful initially. Then in the future we hired faculty on the basis of joining one of these groups and we admitted graduate students to participate in these groups. Those students wouldn’t need to produce single-author dissertations because that’s not the kind of work any of us would be doing. Instead, after being trained, they’d go off and join similar research groups elsewhere as professors. If you don’t like that hypothetical, that’s fine. It isn’t meant as a serious proposal. Instead it’s meant to illustrate the ways in which the problems we have with graduate education are interwoven with the larger activity systems of humanities research. We don’t have to be what we are. We certainly don’t have to be what our academic predecessors were.

 

 

Categories: Author Blogs

What’s worse? Coursework or dissertation?

25 November, 2014 - 20:16

Today, as I regularly do around this time in my Teaching Practicum, we discussed the job market. It's not much fun, as you can imagine. I think (I hope) that it is illuminating. I mostly do it because I want students to see the relationship between the job market and their development as teachers (as well as scholars). Today Inside Higher Ed also published this little number on how time is spent in graduate school. As the story relates, much of the focus on revising doctoral programs (at least in the humanities) has been on shortening the dissertation process, but the study covered in the article indicates that humanities degrees tend to take longer than other doctorates because of the time devoted to coursework (4 years on average). So that's 4 years to go through coursework and exams and 3 years, on average, to write the dissertation. And that's down from where we were a decade ago.

This is another wrinkle in the ongoing humanities project of revising doctoral programs, which might rightly strike one as missing the point when the real problem is the lack of tenure-track jobs. The lack of jobs is certainly a problem, but to say that it is a problem in relation to doctoral programs would require presuming that the objective of doctoral programs is to professionalize students and prepare them for tenure-track jobs. There's no doubt we are all happy when our students get jobs, and there's no doubt that programs are at least partially evaluated for their success at placement. But if the point of doctoral programs is really to prepare students to be professors, then they certainly have a funny way of going about it. That is, since most academic jobs are primarily teaching jobs, most of the doctoral preparation (one would think) would be teacher training. In reality almost none of it is. Most tenure-track jobs do not require faculty to produce books for tenure, so why all this effort put into the proto-monograph we call the dissertation? Honestly, if doctoral programs were transformed into job preparation, very little of what you commonly see would remain.

But that's not really what doctoral programs are about. Instead, as near as I can figure, the graduate curriculum is a tool for creating a particular kind of intellectual, disciplinary, scholarly community. In that community, professors carry out their research and discuss their research with their students in seminars, and students attend seminars and pursue their own research interests under the guidance of faculty. Let's just postulate that this is a good thing worth preserving, or at least that the community is worth preserving. Maybe there would be a way of supporting this community while shortening the years of coursework and also making the dissertating process more efficient. But then that really points back to the central disconnect. The reason for making the path to the PhD shorter and more efficient is to reduce the demands placed on students and the risks they take in relation to the job market. But if we want to do that, then we really need to be asking very different questions.

Categories: Author Blogs

writing and the speed of thought

23 November, 2014 - 12:02

When we first learned to write, we focused on holding the pencil and forming the letters. The attention given to the physical task of writing likely interfered with our ability to give attention to what we wanted to say. Later, after mastering writing (or, if you are like me, giving up on forming legible letters), the ability to write things down made it easier to develop more complex lines of thought. I think I experienced similar pressures on cognitive load when learning to type. And I still have a related experience in my need to focus on the virtual keys on my iPhone.

When my writing experience is working well, the thoughts just seem to flow into sentences. I don't have to stop and think about how to put a given thought into words. Everything seems to be clicking. I know where I want to go next, but not in a fully conscious way. If I start to turn my attention farther into the future, toward the end of the paragraph or the bottom of the page, or alternately if I start paying attention to the movement of my fingers on the keys, then the whole mental state starts to collapse, as if it were a delicate wave structure. And I suppose neuroscientists might explain such states, at least partially, in terms of waves. Other times, I stop and plan, my mind reaching out for multiple connections as if I am gathering mental strength to hurl myself forward into the stream of writing. Here my mind converses with itself, point and counterpoint, trying out different rhetorical strategies, poking holes in arguments, persuading itself. I become argumentative. I have, in the past, thought about this as a process of intensification, a kind of boiling over of the mind, where speed, connection, and argument lead to some change, some insight, where something new, with new properties, emerges, like water that becomes steam, leaving the ground to float over a landscape of concepts.

However, over the years, I've also come to find that an exhausting, unpleasant, and unsustainable process. It is also perhaps not the most productive approach. In composition studies, mostly through CHAT and video game studies, we've become familiar with Mihaly Csikszentmihalyi's concept of flow. In neuroscience, flow states have become associated with transient hypofrontality, a concept that's been around for about a decade, I believe. What is that? Basically (and I will not pretend to more than a basic understanding), transient refers to a mental state that comes and goes. Hypofrontality references a reduction in the operation of the prefrontal cortex (the part of the brain responsible for all of our higher-order cognitive operations, including symbolic behaviors). Transient hypofrontality is commonly associated with certain practices like meditation, hypnosis, runner's high, and the use of certain drugs (e.g. LSD). It is also associated with flow states. This appears counterintuitive. Typically we imagine that we are at our most capable when our prefrontal cortex is fully engaged, not when it is operating in a reduced way. Transient hypofrontality suggests a reduction in our attention. As the article linked above suggests, athletic performance causes hypofrontality because physical demands translate into cognitive demands on implicit (i.e. unconscious) mental systems. I think that's why I enjoy exercise. If you push yourself hard enough, you literally lose your capacity to think.

What does this have to do with writing, and particularly with writing practices that are intertwined with exploration and invention (as opposed to more transactional and mundane writing practices)? Writers have used a variety of strategies, from drug use to automatic writing, to activate hypofrontality. That's nothing new. And there's research into writing and flow states. However, I've always thought about it as speeding up. Now I'm wondering if it is better to think of this as slowing down. Not as fully activating the brain and putting it all to work, pushing it to its limits, but as calming the mind.

Categories: Author Blogs

The empiricist-idealist divide in composition studies (and the role of realists)

10 November, 2014 - 12:12

I've been thinking about these big-picture disciplinary issues primarily in terms of my Teaching Practicum, but maybe it is useful to share this here as well. Manuel DeLanda has a helpful brief piece on "Ontological Commitments" (PDF) in which he identifies three familiar categories of philosophical positions on ontology: idealism, empiricism, and realism. I don't see these as absolute categories but rather as historical ones that work most closely with modern Western traditions. As he summarizes:

For the idealist philosopher there are no entities that exist independently of the human mind; for the empiricist entities that can be directly observed can be said to be mind-independent, but everything else (electrons, viruses, causal capacities etc.) is a mere theoretical construct that is helpful in making sense of that which can be directly observed; for the realist, finally, there are many types of entities that exist autonomously even if they are not directly given to our senses.

The long history of rhetorical philosophy shares with most of the humanities a commitment to an idealist ontology. This is what is sometimes termed the correlationist perspective: the idea that we can only know the world as we relate to it; we can only know what we think, and perhaps not even that entirely. In this framework, rhetoricians study symbolic action and representation. However, composition studies begins with a fair amount of empirical research into cognitive and writing processes. In part this comes out of historical connections with education departments and perhaps the social scientific side of rhetoric (mostly in communications departments). When the post-process moment comes in the late eighties, one ends up with an idealist side (which would primarily include cultural studies pedagogies) and an empiricist side (with a variety of practices including CHAT, technical communication, and cognitive rhetoric). In most respects the former are more comfortable in English departments, since literary studies is almost entirely idealist. (One possible exception is digital humanities methods, which at least some might view as empirical.)

From the perspective of first-year composition programs, one ends up mostly with idealist curricula, which focus on subjectivity, culture, representation, and ideology and view "the writing process" in those terms (i.e., as something we think rather than something real that we observe). Because things like ideology and culture are not directly observable, empirical approaches to composition tend to focus on a smaller scale, perhaps genre for example, which is not to say that they do not also assert theoretical constructs like ideology. Empiricism allows for both, as we see with CHAT, which combines Marxian philosophy with empirical methods.

So what about realism? Realism as DeLanda defines it sits opposed to idealism and empiricism on a fundamental question. Idealists and empiricists both decide a priori what makes up the world. It is either "appearances (entities as they appear to the human mind) or directly observable things and events." Realists, however, do not know a priori what the contents of the world may be, for the world may contain beings that cannot be observed. As such, realists must speculate. Now from here DeLanda goes into his particular version of speculative realism. As it happens, I think his approach is especially useful for considering the role realism might play in composition studies, specifically through his claim that "knowledge is produced not only by representations but by interventions."

What does this mean? Basically, in DeLanda’s realist philosophy what is real is not a priori, transcendental, or essential. It is emergent in the relations among things. As such, we can know things not simply by representing some transcendental truth but by intervening, and thus making or constructing knowledge. This to me is very sympathetic with Latour. It also results in an epistemological division between “know that” (representation) and “know how” (making).  This has a clear pedagogical implication:

Unlike know-that, which may be transmitted by books or lectures, know-how is taught by example and learned by doing: the teacher must display the appropriate actions in front of the student and the student must then practice repeatedly until the skill is acquired. The two forms of knowledge are related: we need language to speak about skills and theorize about them. But we need skills to deploy language effectively: to argue coherently, to create appropriate representations, to compare and evaluate models. Indeed, the basic foundation of a literate society is formed by skills taught by example and learned by doing: knowing how to read and how to write.

DeLanda’s argument ultimately suggests that realists are better poised to deal with questions of know-how. We see a similar view in Latour and even in some of the object-oriented ontology interest in carpentry.

Conventionally in composition studies we encounter a problem of translation in trying to shift from idealist knowledge about rhetorical theory (e.g. logos, pathos, ethos) or empirical knowledge about process (e.g. we know that writers have these practices) to teaching how to write. Empiricists end up trying to argue that disciplinary representational knowledge about writing (i.e. knowing that writing works in certain ways) can be a useful tool in learning how to write in a particular situation. This is the writing about writing approach. Idealists similarly suggest that knowing that ideology and culture operate in particular ways in relation to discourse will make us better writers…. somehow. Realists, on the other hand, actually have a different set of ontological-epistemological options available to them. For the realist it isn’t about theory and knowledge on the one hand and practice on the other, with no reliable way to get across the divide. Instead, know-how has its own theories (of intervention), its own ways of determining significance (of what makes a difference). Know-how can be taught just as reliably as know-that.

So the realist compositionist might ask whether, rather than discovering and teaching the "know-that" of our discipline, we can teach the "know-how" of our discipline, which might be where the practice of rhetoric (as opposed to the idealist or empirical study of rhetoric) lies.

 

Categories: Author Blogs

motivation and attention as matters of concern

29 October, 2014 - 14:19

Motivation and attention are common subjects of discussion in our graduate teaching practicum. Students in composition don’t seem motivated to do the readings or really work on their assignments. They don’t pay attention in class. They are distracted by their devices. They don’t participate as much as we would like. None of these are new problems in the classroom. Of course they have been made new to some degree by their combination with emerging technologies. Fear not though, this is not yet another “laptops in the classroom” post.

In the past I have written here and elsewhere about the concept of "intrinsic motivation." Essentially the point is that complex creative tasks, such as the kind of writing tasks we often assign in college, are hard to accomplish, and indeed can often be inhibited, when driven by extrinsic motivations (e.g. grades). To develop as a writer one needs an "intrinsic motivation" to succeed, which requires some sense of autonomy, a task that is within reach (insert zone of proximal development talk here), and some purpose (which could be "becoming a better writer" or could be any goal that could be achieved through writing, e.g. winning a grant). Indeed, we rarely want to "become a better writer." Instead, we might want to be a more successful grant writer, finish and publish our monograph, attract more readers to our blogs, and so on. Are those extrinsic motivations? Hmmm… good question.

I'll set that question aside, though, and come at this from a Latourian angle (as the "matters of concern" reference in the title promises). Let's say that agency is an emergent capacity of relations rather than an internal characteristic of humans. In that case, when one says "intrinsic motivation" (hence all the scare quotes), one must respond by asking "intrinsic to what?" The conventional answer is intrinsic to the self or individual. In Latourian terms, though, one might try to say intrinsic to the network, but even that doesn't work very well, as networks are not the most discretely bounded entities. In networks, actors are "made to do." As such, they can be "made to be intrinsically motivated," even though the makers are not necessarily all "inside." In the end, intrinsic motivation has to do with an actor's affective relationship to a given task. So while I think it is rhetorically useful to suggest to students that they must find some intrinsic motivation for writing, I think it is more accurate to say that we need to compose networks of motivation.

Attention offers a similar problem. We ask students to pay attention and speak of economies of attention. I think we imagine attention as a limited natural resource and as an internal human characteristic. Again, though, if we imagine that we are "made to attend" to actors and networks, then attention is an emergent capacity. Furthermore, attention might be too general a concept, including a range of cognitive activities from "single-point" to "multi" to "hyper." Is introspective attention the same as the attention required for driving in heavy traffic, as that of listening to a lecture, as that of making a free throw in a close game? Do we think neuroscience can answer that question? Certainly networks (a lecture hall, for example) are designed so that humans are made to attend to the speaker in a particular way (not that they always work).

With matters of concern (see "Why has critique run out of steam? From matters of fact to matters of concern"), Latour seeks to move away from the two-step critical process that either criticizes the person who claims s/he is made to do something by an external object (because objects have no real control over us; that's all in our minds) or criticizes the person who believes s/he has free will (because we are shaped by cultural-ideological forces). So it is neither acceptable to say that we can't pay attention because other objects distract us (maybe that's true, but we are responsible for turning them off then) nor to say that I have control over my ability to pay attention (because ideological-cultural forces shape my desire). Do students express free will when seeking motivation to write, or are their motivations overdetermined by ideology? Yes and yes. No and no. Science comes along and offers measurements of the plasticity of the brain and the effects of new technologies upon it. We conduct studies on the cognitive limits of attention. We make motivation into a science as well. Human nature, cultural forces, the individual capacity for thought or agency: where does one end and the other begin? And what role do technologies play? They are real objects with their own "natures," their own science, but also their own cultural contexts.

Ban computers from the classroom. Does that make you a fetishist because you believe these devices make people do things? Does it mean that you naively believe that once the devices are out of the room the students will be free from the overdetermining forces of society? Can we be more or less overdetermined? Of course not, because no one can live in those old critical worlds (any longer). We live in a messier world of hybrids, where attention and motivation are emergent phenomena that are always up for grabs in any network.

Categories: Author Blogs

building communities in the humanities

23 October, 2014 - 10:22

I was participating in a discussion this morning around a proposal to build a kind of DH-themed interdisciplinary community on our campus. One of the central concerns that came up was that faculty in the humanities don’t tend to collaborate, so was it really feasible to imagine a scholarly community that was centered on such faculty? It’s a good question. We have struggled somewhat with building a DH community on campus because, when it comes down to it, for most conventional humanities scholars, collaboration with one’s local colleagues serves little or no purpose in relation to scholarly production. As we discussed this morning, the digital humanities is something of a counterculture in the humanities then, because it is an example of a methodological shift that invites collaboration in a way that is not typically seen elsewhere in these fields. The meeting was quite early so my mind maybe wasn’t clicking well enough at the time, but later it struck me that in fact the humanities in an abstract sense are as collaborative as any field but that those collaborations are harder to perceive because of the technologies that mediate them and what I would term the ontological/epistemological paradigms at work in research.

Let's start with the technologies. You don't see many humanities books or articles with multiple authors, so we clearly do not tend to collaborate to produce scholarship as authors. But the way that humanists engage with the texts they cite is quite different from what we see in most other fields. E.g., how often does one see a block quote in scientific or even social scientific research? It's almost cliché to say that a humanist's community sits on her bookshelf. We collaborate with other scholars through the mediating technologies of monographs and articles. And yes, every discipline does this in one way or another, but I think its role in the humanities is worth considering. Of course I also think it's worth saying (since I've said it here many times already) that this disciplinary practice is obviously the product of certain 20th-century technological conditions that no longer pertain. But my point is that it's not really a matter of moving from not being communal to being communal but rather of mediating community in a new way.

The other part of this, though, is the ontological/epistemological paradigm. By this I mean the way we understand our relation to our work and how that knowledge gets known. Less abstractly, in English Studies for example, historically we have done our scholarship by reading texts. Even though we recognize the social-communal contexts of reading, the activity of reading itself is individual. Even though two people can huddle around a screen and read this post, each of you would still be reading it "on your own." Similarly, writing is an individual experience whereby knowledge is not only represented but created. In a different field, where multiple scholars might collaborate on conducting an experiment, gathering data, and interpreting the results, the resulting article might be produced collaboratively as well, based on the research that has been completed. In the humanities, though, I think the writing of scholarship is part of the production of the research: researching and writing are not so neatly separated as they appear in other fields. So I read a text, I create an interpretation, and I write about it. Even though we might say there is a community with whom I am collaborating through the mediation of the texts I am citing, in disciplinary terms I still view this as a solitary activity. So how do we move from "I" read, interpret, and write to "we" do these things? Or perhaps even more challengingly, how do we move to viewing these as networked activities?

This is one of the central questions of my current book project (the one I am hoping next semester's research leave will allow me to complete), as I am arguing that the challenges we face in developing digital scholarly practices and teaching digital literacies really begin with founding ontological assumptions about symbolic behavior as an exceptional human characteristic. When symbolic behaviors become hybridized natureculture objects and practices, what does that do to the humanities and more particularly to rhetoric? For one thing, I think it has to change the way we understand our disciplinary methods. It brings our interdependency to the foreground. Community, of course, is such a vexed word in theory-speak. I'm not going to get into that here, but I think it's important to realize that when we say that humanities scholars tend to work alone, we're hallucinating. That isn't to say a shift to a more DH-style kind of collaboration wouldn't require a real change in the lived experience of humanities scholars, or that it would be easy. I just don't think we can really say that humanities scholarship can operate without community or collaboration.

Categories: Author Blogs

the relationship of English Studies to emerging media

13 October, 2014 - 09:40

As a digital rhetorician, it is not surprising that I take a professional, scholarly interest in emerging media from social media platforms to the latest devices. I am interested in the rhetorical-compositional practices that develop around them and the communities they form. I am particularly interested in how they are employed, or might be employed, for purposes of pedagogy and scholarly collaboration and communication.

However, I don't love these technologies. I am not a fan of them, and I do not teach the "appreciation" of digital media. I think this is something that is sometimes difficult for colleagues in literary studies to understand. Typically, I think they do love their subjects. It's a cliché, after all, that one goes to graduate school because one "loves literature." To be fair, it is probably a broader cliché shared among more traditional disciplines in the arts, humanities, and natural sciences… to have a love of one's subject. I suppose you could say that I have a love of, or at least an ardent fascination with, rhetorical practices. I can sit in a dull, pedantic faculty meeting and become interested in the particular rhetorical moves that people make, what is allowable, what isn't, what people find convincing, and so on. I find emerging media interesting in this regard because we continue to struggle to figure out what those rhetorical practices should be. But I don't love emerging media any more than I love faculty meetings.

I would estimate that I average less than one hour per day on my computer or iPhone for non-work-related reasons. Like many of us in academia, the large majority of my work does require these devices. As an administrator, I spend the plurality of that time in email applications. There is no doubt that every aspect of my work–research, teaching, service–has been shaped by emerging media, just as my 20th-century predecessor's work was shaped by the information-communication technologies of her era. And as an English professor, there's no doubt that she studied those technologies (books in particular) and was an expert in their use. The 20th-century literature professor may not have realized it, but she was in a unique historical moment where the media objects she studied and the information-communication technologies she employed to do the studying were part of the same media ecology.

This is not a matter of love or hate or critique or ideology. It's a matter of history. Literary studies emerged as a discipline in the late 19th and early 20th centuries (the MLA formed in the 1880s) in the context of the second industrial revolution. It served to expand print literacy and establish an Anglo-American national identity. It didn't really matter that Matthew Arnold saw religious poetry as a salve against the depredations of industrialization or that the New Critics supported southern agrarian politics. Literary studies wouldn't have existed without electrification and the constellation of technologies around it or the economy it enabled. By the time we get to the end of the Vietnam War, we're already switching to a post-industrial information economy with new literacy practices, and we're far less interested in that patriarchal Anglo-American literary identity. English tried to adapt, but it never really did.

The End.

Then some new disciplinary paradigm may (or may not) emerge, one that would bear as much resemblance to the industrial-age discipline as the industrial-age discipline bore to its antebellum antecedents. It isn't one that loves digital technology any more than the last one loved industrial technology. But it is one that teaches people how to communicate, how to be literate, in the contemporary world. We long ago gave up the idea of fostering an Anglo-American national identity, but like our predecessors, we continue to be interested in how identity shapes, and is shaped by, media. We can and should continue to study literature, but just as the dimensions of literary study changed in the last century, they will change again. They've already changed.

No doubt we will continue to see anti-technology clickbait jeremiads from English faculty, the sermonizing of the literary clergy (like this one). I will leave it to you to decide for yourselves if these are outliers or more representative of the discipline. On the flipside, there is plenty of techno-love out there in our culture, companies that want us to buy their products, and so on. Such rhetoric is deserving of skepticism and critique. But if we want to attack the love of media then that cuts both ways. In my mind the intellectual misstep of falling for the hype of emerging technology is no worse than the one that leads one to ardent faith in the technologies of the past. If anything, the latter is worse because it seems to suggest that perfection was achieved. What are we really believing there? That we had thousands of years of human civilization leading to the invention of the printing press and then the novel, that the novel is the final apogee of human expression in some absolute, universal sense, and that all that follows, drawing us inexorably away from the print culture that supported the novel, is a fall from civilization and grace and a return to barbarism?

What exactly are they teaching in grad school these days? I’m fairly sure it isn’t this. At least I don’t see this going on around me. I don’t see people in love with digital media either. But I don’t think they need to be or should be. They just need to understand it, use it, and teach with it, just as they did with print media in the past.

 

 

Categories: Author Blogs

the ethics of digital media research

12 October, 2014 - 11:49

Dorothy Kim has a piece titled "Social Media and Academic Surveillance: The Ethics of Digital Bodies" on Model View Culture. I have to admit I don't find her particular argument regarding these concerns to be especially convincing. However, that isn't to say that ethical issues surrounding research using digital-social media don't need to be addressed. As I often argue here, I think we continue to struggle to live in digital contexts. We import legal and social concepts like public and private, which don't really work, or at least don't work as they once did. Kim also makes an analogy between digital and physical bodies, which I see as even harder to work out thoroughly. Ultimately, socially and legally, as well as academically and professionally, we need to develop some better understanding of what these spaces and practices are and develop ethics that are independent of past contexts (though obviously they will be informed by them).

For example, we know that all the sites/services in question are not public in the sense of a public park or street, in the sense of being publicly owned. Everything one sees on all these sites is owned by someone, whether it be through terms of service, intellectual property, or copyright law. Maybe we should have a "public" Internet, one that is maintained through government and taxpayer money (as if anyone would feel like communicating openly on a government website!). When we think about a public park or street, we think of having certain rights and responsibilities related to our shared ownership of the space. This is a specific definition of public: "Of or relating to the people as a whole; that belongs to, affects, or concerns the community or the nation" (according to the OED). But in social media we don't have shared ownership of anything. Instead public has a different meaning here, though not one that is surprising or rare.

To quote the OED again, it’s somewhere between:

Open to general observation, view, or knowledge; existing, performed, or carried out without concealment, so that all may see or hear. Of a person: that acts or performs in public.

and

Of a book, piece of writing, etc.: in print, published; esp. in to make public

I suppose I think about it this way. Media that are published on the Internet are public in a way that combines the two definitions above. While it is possible we could redefine informed consent in some way, even then I think it would be very hard to say that publishing something on the Internet is not informed consent. There is a grayer area in the case of Facebook where statements are made to a limited audience of “friends.”

On the other hand, conducting research in a social media space where one is interacting with users, asking them questions or otherwise experimenting with them, seems to me to be a different matter, one that should involve informed consent. For example, Kim discusses the #raceswap experiments on Twitter where users changed their avatars to suggest they were of a different race and examined how other users treated them differently. Certainly the experiment recently done on Facebook (also mentioned by Kim) falls into that category. If that research were undertaken in an academic context for purposes of publication or with grant support, would it be the kind of thing we would expect to have IRB review and involve some kind of informed consent? In my view this is different from observing and studying public statements.

I can understand that as a member of a group of people, one might be unhappy with the way one’s public text is analyzed or believe that some other kind of research could or should be done. As Kim points out, users on Twitter have the right to shout back, protest, or whatever. And one could study that as well.  As academics, we may or may not consider those specific complaints to be legitimate. Any specific research may be done well or poorly. The research might be poorly communicated or represented. Academics have freedom to define and pursue their research but that freedom is always constrained by what other academics will agree to fund or publish. We can decide as an academic community what we value. To me that is all a very different matter from the ethical issues concerned with interacting with human subjects either face-to-face or online.

I don’t see anything unethical, as a general principle, with studying public texts. In specific cases, might that research be done poorly or unethically? Certainly. One could do poor or unethical research with anything. Should experimental interaction with users be done with IRB review and informed consent? Absolutely, but that’s a whole different question.

 

 

Categories: Author Blogs

academic writing, genre, and clarity

28 September, 2014 - 09:00

Steven Pinker is clearly on a promotional tour for his new book on the subject of style. He's been taking a couple of recent jabs at academic writing, including this one in The Chronicle, which asks the eternal musical question "Why Academics' Writing Stinks." For decades Pinker has enjoyed taking jabs at the humanities, such as this one: "Scholars in the softer fields spout obscure verbiage to hide the fact that they have nothing to say," though here he backs away from this claim… sort of. As such, it is easy to give in to the temptation to jab back at this kind of troll bait. However, I think it is more interesting to try to answer the question Pinker poses.

To do that though we first have to ask the question “why do we think academic writing is poor?” Given the sheer volume of scholarly publication, the most reasonable hypothesis would be some kind of bell curve distribution of excellent, average, and poor writing performance. Of course that all depends on establishing some usable standard. No doubt part of this judgment, a large part, has to do with the use of jargon, and Pinker acknowledges that some use of technical terms is useful. Sometimes academics write for larger audiences, but for the most part they are writing to other experts in their fields. The easiest way I can think to adjust for this is to focus on the discursive practices within one’s own field. Pinker is well-known for his complaints about postmodernism, not only in terms of its jargon but also its philosophical positions. So when he complains about poor writing in the humanities filled with postmodern jargon, it is a somewhat disingenuous complaint (which is not to say that one cannot find examples of jargon-ridden prose in humanities scholarship). However the point here is simply that given such a volume of texts, there are going to be a fairly substantial number of substandard examples but also some good ones. Pinker writes “Helen Sword masochistically analyzed the literary style in a sample of 500 scholarly articles and found that a healthy minority in every field were written with grace and verve.” That would seem to support my hypothesis, even if one doesn’t really know how one establishes a standard for “grace and verve.” Perhaps Sword does a better job of explaining her process. In any case, even if there’s a positive skew distribution to the bell curve with a shining one-percent and a bulge of mediocrity, we still end up with a fairly bell-like shape.

We can’t all be above-average writers.

This raises another point to which Pinker does allude: academics are not selected or rewarded for their writing ability. Yes, one does need to get published and, depending on one's field, write successful grant applications. However, to the extent that such success is based on writing ability, it is certainly relative to the competition. To mangle a cliché, one doesn't need to outrun the bear of excellent writing. I'm not sure about Pinker, but I would not subscribe to the claim that there is some general writing ability that one can either have or not have. Instead, following many of my colleagues, I would view academic writing as a highly specialized skill not easily translatable from one discipline to another or even from a disciplinary genre to a broader-audience genre. Learning to do the latter is a skill in itself. It is not one that is necessary for academic success (you could argue that we should change that, but you'd need to convince me that there is a broader audience out there for much of this work). In any case, most academics don't acquire that skill. And, as I've said above, they may not be the best writers in their given technical genre. This is Pinker's point, I think. Once you get over the publication hurdle, there's little incentive to get better as a writer.

So why do we think academic writing is poor? Because some academic articles are worse than others and there’s not much incentive to do better.

While on one level some of Pinker's specific "Strunk and White-esque" advice on word choice makes sense, in the bigger picture focusing on these sentence-level issues is as misguided here as it is for first-year composition. However, I think he's quite wrong elsewhere. For example, he writes:

The purpose of writing is presentation, and its motive is disinterested truth. It succeeds when it aligns language with truth, the proof of success being clarity and simplicity. The truth can be known and is not the same as the language that reveals it; prose is a window onto the world. The writer knows the truth before putting it into words; he is not using the occasion of writing to sort out what he thinks.

He attributes this view not to himself but to a “classic style” of writing, though I believe the rest of the article indicates his strong support of this style.  Pinker indicates that efforts toward clarity in academic writing are stalled by a second style “that Thomas and Turner call self-conscious, relativistic, ironic, or postmodern, in which ‘the writer’s chief, if unstated, concern is to escape being convicted of philosophical naïveté about his own enterprise.'” Mostly he wants to argue that this concern is unwarranted but he does acknowledge that even scientists 

recognize that it’s hard to know the truth, that the world doesn’t just reveal itself to us, that we understand the world through our theories and constructs, which are not pictures but abstract propositions, and that our ways of understanding the world must constantly be scrutinized for hidden biases. It’s just that good writers don’t flaunt that anxiety in every passage they write; they artfully conceal it for clarity’s sake.

Pinker is far more confident that he knows what writing is than I am, and I am skeptical of his confidence. As he indicates in this passage, clarity is achieved through artful concealment. This is essentially the hallmark recognition of deconstruction. It is also a recognition that is incompatible with his description of a classic style of writing that has a motive of “disinterested truth.” The disinterested truth would be that the writer doesn’t know the truth but that s/he conceals that fact through the rhetorical performance of clarity.

So how does conventional academic writing fit into this view? Is academic jargon an effort to obscure the fact that the author doesn't know the truth or is hedging his/her bets, as Pinker seems to be suggesting here? Or is it a kind of intellectual laziness that reflects little concern about communicating (which is perhaps the most generous explanation Pinker offers)? I would say that academic jargon is not just a convenient shorthand for complex ideas. Pinker himself points to the value of "chunking" ideas within academic concepts:

To work around the limitations of short-term memory, the mind can package ideas into bigger and bigger units, which the psychologist George Miller dubbed “chunks.” As we read and learn, we master a vast number of abstractions, and each becomes a mental unit that we can bring to mind in an instant and share with others by uttering its name. An adult mind that is brimming with chunks is a powerful engine of reason, but it comes at a cost: a failure to communicate with other minds that have not mastered the same chunks.

The difficulty is knowing which chunks we share with our audience, but that’s where genre comes in. By developing a facility with the genre shared within a community we expand our ability to think more complexly while also being able to expect our audience will understand what we are talking about. However Pinker’s recognition here also casts doubt on his earlier description of a classic style where “the writer knows the truth before putting it into words.”

In the end Pinker is circumspect on whether or not he ultimately wants to argue that “good” academic writing would adopt the “classic style.” However all of his more specific complaints and pieces of stylistic advice would imply that he believes not only that a classic style is best but that it also is a good description of how writing works. To be generous we could say that the former is debatable (what is the best style for academic writing), but the latter is mistaken.

That’s not how writing works. Beyond the writing skills one typically acquires in elementary school, there are few general writing skills. Writing in an academic discipline is a specific technical skill with a specific genre that is not simply “jargon” but a range of cognitive tools that enable scholars to do the work they do. In this respect they are similar to microscopes, bibliographies, mathematical formulas, spreadsheets, and research methods. This doesn’t mean that some scholars aren’t better writers of their genres than others (of course they are). This doesn’t mean that scholars might not also write for audiences beyond the community that shares their technical genre or that, when they do, they might not do a better job of it (of course they could). This doesn’t mean that disciplines might not do a better job–in graduate school curriculum, in the dissertating process, through editorial review of articles or monographs, or with the mentoring of junior faculty–of helping scholars develop as writers.

I think all those things could happen. I just don’t think that Pinker’s stylistic advice is especially helpful in that regard. It’s really just an analog of the old-fashioned red ink on an undergraduate student’s essay.

Categories: Author Blogs

building a campus culture of writing #sunycow2014

28 September, 2014 - 06:58

Below is the text of my presentation at the SUNY Council of Writing conference, delivered yesterday in Syracuse as part of a panel on the role of student desire in developing writing programs.

Building a Campus Culture of Writing

On a campus like UB, with nearly 30,000 students and thousands of faculty and staff spread around the planet studying abroad, conducting research, attending our Singapore campus, and here in western New York, we might say that writing is a non-stop activity. We might say that writing is an integral activity in virtually all the communities on campus, from the football team to the contractors building the medical campus to the admissions office, the various deans' offices, and of course every academic department, to say nothing of all the media composition at work and play on a campus: emails, text messages, selfies, tweets, and so on.

We might say that. But if we did, what would the word “writing” mean? What, if anything, holds these activities together to create something we might call a “culture of writing”?

The discourses surrounding the notion of “student need” offer one approach to this question. More than 20 years ago in his book Fragments of Rationality, Lester Faigley offered an observation about student desires in his description of the intra-disciplinary conflicts within rhetoric and composition at that time. As he writes, “many of the fault lines in composition studies are disagreements over the subjectivities that teachers of writing want students to occupy” (17). We can read this two ways. First, Faigley was observing that the disagreements among composition scholars could be understood in terms of the different ways they theorized subjectivity and desire. This is perhaps still true, though the theories have changed somewhat. The second more complicated reading focuses on the desires of teachers for students to occupy particular subjective positions. That is, fault lines emerge over our desire for students to think and feel, or at least behave, in certain ways.

I believe this condition is only intensified in our failed attempts to develop campus-wide cultures of writing. All too often we hear ego-driven heroic pedagogy narratives that begin and end with an assertion of authority and expertise that seeks to ground arguments about the subject positions that students should occupy as writers across the campus. That is, what students should be doing, what they need to be doing. Understandably, rhetoric and composition, as a discipline, has much at stake in these narratives. When, in his 1991 essay "Three Countertheses: Or, A Critical In(ter)vention into Composition Theories and Pedagogies," Victor Vitanza seriocomically wonders if the CCCC could ever have as its conference theme the question "Should writing be taught?" he is pointing to a paradigmatic disciplinary foundation in a particular view of writing, as well as the necessity of teaching it. It's a view that drives our desire to build a "culture of writing," a desire that rises as "writing," whatever that is, becomes increasingly untenable. Indeed I imagine anyone in our discipline can already feel the response welling up within them to say "Yes, exactly. It is because we are losing grip on writing as a culture that we rhetoricians need to hold on that much more tightly."

But allow me to return to Lester Faigley, this time in his 1996 CCCC address where he imagined that “If we come back to our annual convention a decade from now and find that the essay is no longer on center stage, it will not mean the end of our discipline. I expect that we will be teaching an increasingly fluid, multimedia literacy” (40-41). Of course that didn’t really happen. Eight years later, in her 2004 address to CCCC, Kathleen Yancey echoed Faigley: “New composition includes the literacy of print: it adds on to it and brings the notions of practice and activity and circulation and media and screen and networking to our conceptions of process. It will require a new expertise of us as it does of our students. And ultimately, new composition may require a new site for learning for all of us” (320). On one level, it would simply be impossible for any academic discipline to remain unchanged in the wake of the Internet revolution. Every academic in every field has seen the ways research is published and accessed shift. However, the essay remains at center stage of our field, as a genre of scholarly production and as a classroom assignment, and if there is a new site for learning, as Yancey describes, few have travelled there and even fewer have stayed.

The essay has remained the standard of humanities scholarly production, as well as the typical genre of the humanities curriculum. But much has happened in the 15-20 years since Faigley was offering his prognostication. According to the US Department of Education, around 40% of the majors offered at US colleges today didn’t exist in the early 1990s. The majority of these majors are in professional schools, which in turn point to a proliferation of activities that we might loosely characterize as writing on campus. For anyone who has been an academic during this period, this shift would have been hard to miss. It would be equally hard to fail to recognize that our students, along with many others around the world, are now communicating by a plethora of digital means. Today, we still often refer to the excellent longitudinal study of student writers carried out at Stanford by Andrea Lunsford and her colleagues. That study revealed the extensive amount of self-sponsored and multimedia writing undertaken by students. However, that study ended in 2006. Just to put that in context: there were no iPhones in 2006, and Facebook had 12 million users, as opposed to the more than one billion today. Compared to a decade ago, our communicational environment today is unrecognizable. As is evident across the spectrum of academic discourses from articles in the Chronicle to journal articles and even curricular and classroom policies, we may want to call those activities writing, so that we can lay claim to them, but we also want to refute their status as writing in the name of some other “culture of writing.” For even Faigley and Yancey, with their unrealized visions of the future of our discipline, still imagine something called “writing” at the center of it. But if so, what does that word mean?

Even if we limit ourselves quite narrowly to the communication practices undertaken in undergraduate courses, one unavoidably recognizes that different departments will teach their students different genres. What status do we attribute to those genres in relation to the more nebulous disciplinary abstraction of “writing”? Do we simply rehearse the devaluation carried out historically against rhetoric by suggesting that learning to write in a given genre is a formalistic, stylistic, superficial task, one that is necessary, of course, but not truly intellectual? If we manage to make this argument then we allow ourselves to maintain some imagined domain over what “writing” really is: “the” writing process, critical thinking, reasoning, logic, argument, audience, expression, voice. All the familiar watchwords of our disciplinary legacy. It also conveniently allows us to assert our enduring essayistic practices as foundational to some generalized notion of writing. We can claim the lasting value in continuing to do what we do, as well as impose our field’s expertise over a larger “culture of writing.” It allows us to argue that the ways we teach writing, the curricular structures we have developed, the conceptions of argument, thinking, audience, rhetoric and so on should inform writing practice and pedagogy elsewhere.

While clearly I am skeptical of such familiar maneuvers, we are still faced with the task of helping students develop as communicators. We still encounter students, faculty, and departments that are dissatisfied with writing instruction. Theirs and ours. And I certainly think there can be a role for rhetoricians in addressing these challenges. It’s not enough to simply pose problems or offer critiques. Bruno Latour describes an analogous situation in the work of sociologists.

Too often, social scientists—and especially critical sociologists—behave as if they were ‘critical’, ‘reflexive’, and ‘distanced’ enquirers meeting a ‘naive’, ‘un-critical’, and ‘un-reflexive’ actor. But what they too often mean is that they translate the many expressions of their informants into their own vocabulary of social forces. The analyst simply repeats what the social world is already made of; actors simply ignore the fact that they have been mentioned in the analyst’s account. (Reassembling the Social 57)

In working with our colleagues across disciplines, rhetoricians have a similar tendency to translate their colleagues’ expressions into their own vocabulary. In doing so, we learn nothing except to confirm what we already know. Latour, of course, would offer us a method that involves following the trails of associations out of a given node in a network of writing actors and listening to the actors’ explanations for their activity, what makes them do what they do, rather than leaping suddenly to spectral social forces for explanations. This is a different notion of student and faculty need and desire. When we speak with students and faculty about their coursework, they will often describe what they need to do. And those needs are generally externally located, as in the familiar faculty explanation that they don’t have time to focus on writing in their classes because they need to do this or that.

However, being made to do something is as much about gaining agency as it is about constricting it. The student in a chemistry lab is constricted in her activities, limited by what she needs to do there, but she also gains the ability to run chemistry experiments and construct disciplinary knowledge. Writing technologies, genres, and practices are all actors in that system. In conjunction with these actors, the chemist and chemistry student are made to write lab reports. In conjunction with these nonhuman actors, student needs, desires, affects, and thoughts emerge. We might say the same for the student sitting on his darkened dorm room bed, surrounded by a half-dozen wi-fi enabled devices pulsing unpredictably and generating dopamine loops urging him to seek out new information, to check Facebook and email; a sugary venti coffee drink sending caffeine to block adenosine receptors in the brain, making him stay awake; the ambient glow and hum of the laptop with its backlit keys, bevelled architecture, cheerful icons, and rigorously user-tested and focus-grouped interfaces; the pillow that still smells like home resting on otherwise institutional furniture; and his notes and results for the same chemistry experiment that he needs to turn into a report. In this context, we would expect that a familiarity with the genre supports a network of distributed cognition that allows him to write the report he needs to meet his objectives as opposed to writing something else. The genre not only organizes pre-existing thoughts, it also participates in thinking. Through his interaction with the lab report genre, the student is made to think in certain ways rather than others. Of course the genre is not all-powerful or over-determining. It is just one actor among many in a network.

There is the culture of writing, if indeed there is such a thing. In following such trails, we can uncover our degrees of freedom, those sites where we might put our networks together differently. Can we sit in a different room, work at a different time, drink more or less coffee? Does developing a self-awareness of the actors that participate in our thinking and composing shift our relationship to them? As rhetoricians participating in universities do we view our objective as supporting student ability to be successful authors of specific genres? Maybe sometimes. Do we also establish a broader goal of investigating how and why composing happens and then sharing that knowledge with students and colleagues in a manner that might be of use to them? I would think so.

I’m going to end by going even further back in time. Coincidentally I was teaching our TA practicum students about the history of the process movement this week, and we were reading Maxine Hairston’s well-known essay “The Winds of Change” from 1982. There Hairston insists that

We have to try to understand what goes on during the internal act of writing and we have to intervene during the act of writing if we want to affect its outcome. We have to do the hard thing, examine the intangible process, rather than the easy thing, evaluate the tangible product. (84)

Things have changed some in the last 30 years. We may be less likely to think of writing as an “internal act.” And, as I have been arguing, the concept of writing itself has been attenuated to the very limits of its conceptual utility. And, we have developed new and varied methods for studying these activities. However, the core tasks remain: understanding these compositional activities and developing means for intervening in them. This is what “building a culture of writing” means from my perspective: the slow ant-like task of following trails and constructing brick-by-brick, actor-by-actor, networks of composition.

Categories: Author Blogs

laptops, pedagogies, and assemblages of attention

24 September, 2014 - 08:53

This is a continuation of this conversation about laptops in classrooms. Clay Shirky, Nick Carr, Dave Parry, and Steve Krause all have recent posts on this issue (that list is almost strange enough to be a Latourian litany). As I said last time, this is the eternal September of the laptop policy. And as I mentioned in that last post, there is clearly a real issue with the disconnect between laptops (and mobile phones and other digital media/network devices) and the legacy practices of college curriculum, classrooms, and pedagogy: these two sets of things don’t seem to mix well. The primary complaint is that the devices distract students. Dave talks about how his students perform better in class discussions with their laptops closed. And Nick Carr makes a good observation:

Computers and software can be designed in many different ways, and the design decisions will always reflect the interests of the designers (or their employers). Beyond the laptops-or-no-laptops-debate lies a broader and more important discussion about how computer technology has come to be designed — and why.

In my view there are a couple of key issues at work here and they all revolve around the way we understand and value thinking, participation, and attention. If we begin with the premise that thinking is not a purely internal activity then we realize that the tools and environments in which we find ourselves shape our capacities for thought. Obviously there are internal (as in beneath the skin) processes at work as well. I don’t think anyone would really deny this. However, we might commonly assume some intrinsic consciousness that might be enhanced or inhibited by external forces. A different view would assert that we only have consciousness in relation to others. In the first case, one might look at laptops and ask if they make us better or worse thinkers, if they affect our ability to participate or pay attention. And of course they do in many ways, negatively, perhaps, in the classroom, but also expansively (for good or bad) in terms of participating and paying attention to the web. However, that view would seem to presume that the pre-laptop state is a natural or default state and that whatever technologies we employ should be valued in relation to the capacities and terms of that default state. E.g., do laptops make us better versions of default students in a classroom? We can debate this question, and for faculty sitting right now in such a classroom, it is a question worth asking and answering. But it is not the question that interests me.

Carr proposes a different question in relation to the design of computer technology. To put the question in my terms, he is asking about the assemblage of attention that these devices are designed to produce. One answer is to say that these devices are flexible in that regard. Obviously they can be shut off. That’s one answer: the powered-down laptop still participates in an attentional-cognitive network. It just doesn’t do much for us. And when they are opened full throttle? Then not only do we have access to the public Internet and our various personal accounts, we are also subject to a wide range of push notifications. Even without the pusher there, the thought might prey on our minds that certainly people are emailing, updating, tweeting, etc. And you’ve probably seen articles (like this one) about how addictive email can be. Without suggesting too sinister a motive, it’s unsurprising that companies design products that fuel our desires, though not necessarily serve our best interests.

Looking into these questions of design and developing ways to intervene in the design process or otherwise build upon it are important directions to pursue. However they are also only part of the puzzle. We must similarly look at the motives behind classroom design, curriculum, pedagogy and other educational-institutional policies and designs.

Setting aside the immediate challenges facing faculty and students this month, we need to think more broadly and experimentally about how to design the assemblages of attention (and cognition) that will drive future learning. We have a fairly good idea of what the past looks like. Those spaces were designed to focus attention on the teacher and encourage individualized student activity (notetaking, silent reading, worksheets, tests, etc.). They were organized as a series of fairly short and discrete linear tasks: listen to a lecture, take notes, complete a worksheet, take a test, move to the next class. The classes themselves were/are designed to be silos, organized by discipline. This is very clear from middle school through college. This requires relatively short-term single point attention (e.g. listen to a lecture for 20-30 minutes). For the most part, homework is similarly designed. It’s true that students can do homework and studying in groups, but since everyone has their own book and notes and everyone is ultimately responsible as an individual for demonstrating knowledge, the assemblage encourages individual activity. Over time, we build toward extending the period of single-point attention so that the graduate student or faculty member might spend hours focused on reading a single book or writing an article. (So that’s the story, except that I think the notion of an extended period of single-point attention is a fiction and, I would suggest, would be unproductive if it were true. But that’s for another time.) In any case, we define knowing and expertise as the cognitive effects of these activities: to know is to have engaged effectively in these activities; to be an expert is to have done so repeatedly in a single area of knowledge. We created reasonably efficient feedback loops between educational practices and workplace practices so that workplaces were organized around employees with disciplinary expertise and expanded capacities for single-point attention. And I don’t mean that only in terms of managerial/bureaucratic structures but also the physical spaces of offices and factories, the design of the work day, and so on.

I imagine the future will have a similar feedback loop between education and workplace. (BTW, I find it strange when colleagues complain about universities serving corporate interests, as if we haven’t been doing that for at least a century, as if our current curriculum and practices weren’t constructed in this way, as if that relationship wasn’t integral to what we do.) I don’t think workplaces have any better idea of what this future should look like, though I do think they are more volatile than universities and thus quicker to change, for good or ill. However, we should think about what knowing and expertise look like in a digital networked environment, what work looks like (academic or professional), and then what assemblages of attention we want to build to support those activities and outcomes.

Here’s a brief speculative comparison. I’ve never taught a large lecture course but let’s say I had an introductory class of 200 undergraduates in my field (rhetoric). Conventionally we would have an anthology of rhetorical texts (like this one) or some other kind of textbook (like this one, I guess). I would lecture, respond to student questions, and try to create some other opportunities for student interaction (like clickers maybe). Then we could have a Blackboard site for quizzes and discussion boards (and I could post my PowerPoint slides!). Then a mid-term and final. Maybe some short writing assignments. Maybe more if I had a TA to help me read student writing. In that classroom, laptops probably would be a distraction.

Now let’s imagine a different structure. Still 200 students, but let’s not call it a class. We aren’t going to measure students by having them demonstrate disciplinary knowledge on a test or in an essay. Instead, they are going to engage in rhetorical activities, using rhetorical concepts, methods, and practices to do something: persuade some group to take an action, inform a particular audience about a topic, do research into the rhetorical dimensions of some matter of concern. They will need to work collaboratively. They will need to integrate learning from other parts of their curriculum as well as other experiences. They will need to draw upon my expertise and work with me to define the parameters of their activities. This requires a different kind of assemblage of attention. We probably don’t need the lecture hall with all the seats facing the podium. I could still give lectures, but they would be far less useful as they would no longer tie so neatly into the working definition of what it means “to know.” On the other hand, it would become more important to figure out how to make productive use of those contemporary devices of distraction. Of course they could still distract, still have a negative impact. We would still need to learn how to use them, but now we would have built a structure that supported their use rather than continuing to use a structure designed to support legacy media technologies.

What kind of workforce are we imagining here? One that can work independently in small groups without panoptic supervision. One that works across disciplines and cultures in collaboration to integrate knowledge and experience from different perspectives. One that can use emerging technologies productively to find, evaluate and manage information as well as communicate and produce knowledge. Something like that. Will every student become that? Of course not. Not every 20th-century student became the ideal “organization man” either. Nor do they need to, nor should they. But inasmuch as our legacy curriculum and assemblage structures pointed toward that organization man, we need to think about building new structures that point elsewhere.

Categories: Author Blogs

searching for an assistant professor in the rhetoric of science and technology

18 September, 2014 - 13:03

In case you don’t have your eyes peeled to the MLA job list, you might not know that we are searching for an assistant professor in the rhetoric of science and technology (official job copy below). And guess what? I’m hoping we can find a fantastic person to join our faculty, so I’m going to throw in here some unofficial hard sell.

In our department and beyond there are a number of faculty and students doing interesting work in interdisciplinary areas that would be of real interest and value to someone in this field: science studies, ecocriticism, disability studies, media study, and digital humanities all come to mind as possible connections.  UB has a strong engineering school and several medical-professional schools (we’re in the process of building a new medical campus in downtown Buffalo), so there’s a lot going on here in the STEM fields.

Of course, when people think about Buffalo, the caricature is of a snow-bound, rust belt city. But Buffalo is a surprisingly international city, maybe not surprising since it sits on an international border. At UB, 17% of our students are international, a fact as evident in my department’s graduate seminars as it is on my walk across campus to get coffee, where it’s not unusual to hear students speaking four or five different languages. Now I’m not going to compare UB and Buffalo to the cosmopolitan experiences of major US cities (or the cost of living) but compared to the other places where I have lived and worked as an academic I really appreciate the combination of affordability, quality public school education for my kids, variety of restaurants and things to do, access to nature, etc.

Tenure-track Assistant Professor in rhetoric of science and technology with a PhD in English, rhetoric, technical communication or related field. Preferred secondary fields include but are not limited to environmental rhetoric, health communication, science writing, digital literacy or online and mediated pedagogy.

Application deadline: November 1, 2014. Salary, benefits, and privileges competitive with other Research-1 universities. Faculty are expected to teach at the graduate & undergraduate levels, maintain an active research program, mentor graduate students, and provide service to the department and/or University as required. Submit letter of application, CV, and contact information for three recommenders to http://www.ubjobs.buffalo.edu. For information, contact Graham Hammill, Chair of the English Department (eng-jobsearch@buffalo.edu). University at Buffalo is an affirmative action/equal opportunity employer and in keeping with our commitment, encourages women, minorities, persons with disabilities and veterans to apply.

Categories: Author Blogs

prestige education in the network age

11 September, 2014 - 10:09

Perhaps you have seen Steven Pinker’s response to William Deresiewicz’s “Don’t Send Your Kid to the Ivy League” in which they variously decry and defend the Ivy League. I can’t really speak to the conditions of an Ivy League education, nor do they especially interest me. My daughter is a high school junior this year and an exceptional student, good enough to compete for an Ivy League admission. Her friends are similarly talented. I see in them many of the qualities that concern Deresiewicz and Pinker both. It’s not the kids’ fault at all. They are being driven to become caricatures of the ideal college applicant. What does it mean to be the kid with the high test scores, the AP classes, the high GPA, high school sports, science olympiad, club officer roles, conspicuous volunteerism and so on? I think it’s a reasonable question to ask. I think it’s obvious that students and parents pursue such institutions for the prestige associated with the name, which does seem to translate into better job opportunities. Who knows, maybe the people with Ivy League degrees are better humans than the rest of us… probably not, though.

How far does that prestige effect extend beyond the Ivies? To the elite private liberal arts colleges, certainly. To a handful of other elite private and public universities (MIT, Stanford, Berkeley, e.g.), no doubt. For example, consider this list of AAU universities. Do they all get to be “prestigious”? And if so, what does that mean? Does it mean that large numbers of students apply from around the country and the world to attend your university? Not necessarily. I’m not saying these aren’t good schools. They have very good reputations. They are obviously all “highly ranked” by some metrics, which is why they are AAU schools. My point is simply in terms of the marketplace of student admissions. What does prestige actually get you and who cares about it? I don’t think this is an idle question, particularly for the humanities, which operate as a prestige-driven reputation economy. The typical humanities departmental strategy is to hire for, support, and promote faculty prestige/reputation, and the metrics are driven by this valuation to some extent as well. However, how many undergraduate students at large public research universities, for instance, are choosing majors based upon national department reputation? Put differently, what is the reputation of reputation?

I want to offer an odd juxtaposition to this. Also recently in the news is Apple’s latest iPhone. See, for example, Wired magazine’s speculation on the impact of the new iPhone on filmmaking. I don’t want to make this about Apple, but more generally about technological churn. We’ve already had widespread consumer filmmaking for at least as long as we’ve had YouTube, but as the phone technology improves the possibilities expand. Now perhaps I should use this juxtaposition as an opportunity to talk about digital literacy, but that’s not exactly my point. Instead, the point I want to make is that technological churn shifts the ways in which reputation is produced and maintained. While I can share in the general academic skepticism about Apple ads and their suggestion of how their technologies enable people to do cool things, at the same time, in a broader sense, human capacities are being altered through their interaction with technologies.

It’s completely obvious in a humanities department, where reputations are built on publishing monographs, that reputation is driven by technology. The principle is not foreign to us, even if we are generally blind to the ways in which our disciplines are tied to technologies. Just like the prestige of getting into an Ivy, reputation hinges on access. This creates a series of feedback loops. Elite humanities departments can support their faculty in monograph production, so they produce more books, so the departments remain elite. It used to be that filmmaking was expensive and technically complex, so only professionals could really make and distribute films. It’s still hard to make a good film, but the barriers are otherwise lower. So I suppose this could be an argument for digital scholarship and/or changing the ways in which we work as humanities scholars, but I just want to focus on reputation/prestige.

If reputation is about our participation in a technological network (of books, e.g.) and we are building an entire disciplinary and departmental infrastructure and strategy to facilitate that participation, then how is that really different from Deresiewicz’s zombie undergrads with their endless activities? Aren’t we both just building reputations within some arbitrary network? If we are spending hours upon years on dissertation research and monograph writing to get jobs and tenure and improve department reputation, so that we can “get into” or stay in categories like AAU, then we can point to the monetary rewards that accrue, just like those aspiring Harvard students. However, just like those students we are investing a lot of effort and money in those goals as well. If you get into (and out of) your Ivy, then you can probably feel confident that your investment is going to work out. For those of us in the humanities, the future is less certain because the sustainability of the reputation-technology network we employ is more tenuous.

So what will our future reputation network look like? Obviously it won’t be iPhone filmmaking, but it obviously won’t be monographs either. If the 20th-century English department was born from Victorian literary culture, industrial printing, electrification, and the increased demand for a print literate workforce, then what analogous things might we say of the 21st-century version of the discipline? If 20th-century scholarly labor and reputation in turn hinged on our ability to study and engage with these technologies, then what would be the 21st-century analogy?

The problem that Pinker and Deresiewicz have is that the criteria upon which applicant reputations are built make no sense, which in their view does harm in the end to both the students and the institution. We could say the same thing about the humanities, where the cause of the reputational disconnect seems fairly obvious. What is less obvious is how one goes about shifting those terms.

Categories: Author Blogs

why does the web need to be “social”?

5 September, 2014 - 08:33

When I was starting out in grad school, I saw Timothy Leary speak about the Internet as “electronic LSD.” It was the early nineties, pre-Netscape if memory serves. The ideas he was offering up were not that different from the argument he made in his essay “The Cyberpunk: The Individual as Reality Pilot,” which was anthologized in Bruce Sterling’s Storming the Reality Studio, that well-known anthology capturing the zeitgeist of 80s cyberpunk. I am not here today to advocate or express nostalgia for this moment. In fact this essay would probably be familiar to you for its romantic, libertarian/anarchist, masculinist, Eurocentric, techno-optimistic sentiments, which seem to strike a familiar but ironic tone in the context of the dystopian worlds cyberpunk literature portrays. For example, Leary writes:

The CYBERPUNKS are the inventors, innovative writers, techno-frontier artists, risk-taking film directors, icon-shifting composers, expressionist artists, free-agent scientists, innovative show-biz entrepreneurs, techno-creatives, computer visionaries, elegant hackers, bit-blitting Prolog adepts, special-effectives, video wizards, neurological test pilots, media-explorers-all of those who boldly package and steer ideas out there where no thoughts have gone before.

CYBERPUNKS are sometimes authorized by the governors. They can, with sweet cynicism and patient humor, interface their singularity with institutions. They often work within “the governing systems” on a temporary basis.

As often as not, they are unauthorized.

Perhaps, in the age of cultural studies (even though Leary does cite Foucault), we might attempt to recoup such views through Haraway’s cyborg manifesto or Deleuze and Guattari’s nomads and rhizomes or maybe even Hakim Bey’s temporary autonomous zones. It’s easy enough to say that these fantasies built the web we have today, or more generously maybe that the contemporary web is what happens after state capture and reterritorialization. So let’s not go there. When I think of the social web (Facebook, Twitter, etc.) I know what I think of: family events, witty remarks, linking something funny or heartwarming, academic politics, current events. The social web manages somehow to demonstrate that the self-aggrandizing “greed is good” ethos of the 80s and the “sharing is caring” mantra of 90s children’s programming are compatible. It’s about as far from the vertiginous romanticism of cyberpunk as one could get. It’s more like a toned-down, less-interesting, pathetic version of Snow Crash or maybe one of Sterling’s novels.

So while I’m not interested in advocating the seemingly individualistic “anti-social” cyberpunk, I am unhappy with the word social.

Maybe it’s the Latour in me that has trained me to raise a skeptical eyebrow at the word “social.” Latour rails against the common view that there is some kind of social stuff. As he makes clear at the start of Reassembling the Social:

The argument of this book can be stated very simply: when social scientists add the adjective ‘social’ to some phenomenon, they designate a stabilized state of affairs, a bundle of ties that, later, may be mobilized to account for some other phenomenon. There is nothing wrong with this use of the word as long as it designates what is already assembled together, without making any superfluous assumption about the nature of what is assembled. Problems arise, however, when ‘social’ begins to mean a type of material, as if the adjective was roughly comparable to other terms like ‘wooden’, ‘steely’, ‘biological’, ‘economical’, ‘mental’, ‘organizational’, or ‘linguistic’. At that point, the meaning of the word breaks down since it now designates two entirely different things: first, a movement during a process of assembling; and second, a specific type of ingredient that is supposed to differ from other materials.

Not surprisingly, people battle for the claim to have invented the term “social media,” though it’s mostly corporate and web entrepreneur types.  But what does the adjective social mean here? I’m thinking it’s supposed to mean media technologies that promote socializing, as in many-to-many rather than one-to-many. Perhaps in a technical sense it does. But the web always did that. If we think of Latour’s view of the social in his conception of a sociology of associations, then I suppose we’d begin by thinking of social media applications as actors that produce new associations: new communities and new genres/discourses. I guess that’s a fairly basic starting point that tells us almost nothing; we are, as always, instructed to follow the actors and their trails. In the end though, through social media we are “made to do” things. Not compelled exactly. It’s just that Facebook or Twitter or whatever activates particular capacities within us over others. Those are not “social” capacities. They are not made of social stuff, nor do they do social things as opposed to other capacities that would be non-social.

Leary’s cyberpunks built much of the underlying technology of the social web, perhaps with “sweet cynicism and patient humor.” And then I suppose they persist in the niches of hacker culture, while the typical user becomes immersed in a new “social.” Of course a platform like Facebook or Twitter with its millions of users is diverse in the experiences it might provide on an individual basis. At the same time, an investigation like Manovich’s Selfiecity offers some insight into how technologies generate commonalities. It’s not really my point to say that we are or aren’t being brainwashed by social media. It’s hard to get outside the binary of the romantic narrative that Leary tells, or that even gets read into something like Deleuze and Guattari’s articulation of nomads and the state.

My point in the end is more pragmatic and less interesting on some romantic, visionary scale. Can we stop calling this media “social”? What else could we call it? And if we called it something different would we gain a better understanding of it? One that wouldn’t lead us nostalgically toward Leary, or send us running in fear of a technopoly like Benjamin’s angel of history, or push us mind-numbingly toward a corporate mall culture or whatever other cheesy narrative we construct when we imagine that technologies are social.

Categories: Author Blogs

the eternal September of the no laptop policy

31 August, 2014 - 08:39

It’s the time of year when academics like to talk about their syllabi and inevitably the no-laptop policy arises. It is evidence of a recurring theme: we do not know how to live, let alone learn, in a digital networked environment. It’s hard to blame the faculty, though it’s difficult to figure out who else might be responsible. The classes we teach are in the same rooms and buildings, follow the same schedules, and are essentially understood in the same terms as they were 30 years ago. Yes, there’s wi-fi now, as well as 4G LTE signals, permeating the classrooms, and yes, almost everyone has some device that links to those signals. (As I’ve mentioned in prior posts, at UB anyway students bring an average of 5 wi-fi enabled devices to campus.) However, students don’t know how to use these devices in the classroom to support the learning objectives. The learning objectives themselves remain based in a pre-digital world, as if what and how we learn hasn’t been transformed by our new conditions, so faculty don’t know how to use these devices to define and achieve learning objectives either.

The Chronicle (of course) published a piece recently in which one professor, Anne Curzan, offers her explanation of her own no-laptop policy. I appreciate the thorough explanation she offers to her students in her syllabus, including citing research on multitasking, effects on test performance, and so on. It’s a very old story, right? New technology affects our ability to think, nay, remember things. Just like the Phaedrus. Sure, that’s just some old myth about Thoth, while with laptops we’re talking scientific research. Sure, except that people living in an oral society do have memory capacities that are different from ours. What do we imagine that distributed cognition means? It means that we think in conjunction with tools. It means we think differently in the context of digital networks. And that’s scary and difficult. Obviously, because these are the recurring themes in our discussion of educational technologies.

Curzan and the many, many other professors with similar policies have educational objectives and practices that have no place for emerging media. It makes perfect sense that if the purpose of coming to a class is to take notes on a lecture then a laptop is of limited utility. Yes, you can take notes on a laptop but that’s like driving your car at the speed of a horse-drawn carriage. If the purpose of class is to engage in class-wide discussion or group work then maybe those devices have a role to play but that depends on how the professor shapes the activity. For example, a typical pre-wifi class/group activity I did was to ask students to look at a particular passage in the reading, figure out what it means, and discuss what they think about it. Today, depending on the particular reading, there’s probably a good deal of information online about it and that information needs to be found, understood, and evaluated. It’s also possible to be in real time conversation with people outside the classroom, as we know. So that activity isn’t the same as it was 10-15 years ago. It’s possible that the laptops could distract from the activity, but distraction is always a problem with a group activity.

Can we imagine a liberal arts degree where one of the goals is to graduate students who can work collaboratively with information/media technologies and networks? Of course we can. It’s called English. It’s just that the information/media technologies and networks take the form of books and other print media. Is a book a distraction? Of course. Ever try to talk to someone who is reading a book? What would you think of a student sitting in a classroom reading a magazine, doodling in a notebook or doing a crossword puzzle? However, we insist that students bring their books to class and strongly encourage them to write. We spend years teaching them how to use these technologies in college, and that’s following even more years in K-12. We teach them highly specialized ways of reading and writing so that they are able to do this. But we complain when they walk in, wholly untrained, and fail to make productive use of their laptops? When we give them no teaching on the subject? And we offer little or no opportunity for those laptops to be productive because our pedagogy hinges on pretending they don’t exist?

Certainly it’s not as easy as just substituting one medium for another. (Not that such a substitution is in any way easy, and in fact, the near impossibility of making that substitution will probably doom a number of humanities disciplines, but that’s a subject for another post.) To make it happen, the entire activity network around the curriculum needs to be rethought, beginning with the realization that the network we have is built in conjunction with the legacy media we are seeking to change. We need to change physical structures, policies, curriculum, outcomes, pedagogy…

It is easier to just ban laptops.

Categories: Author Blogs

academic freedom, social media, and the university without conditions

29 August, 2014 - 10:15

Let’s call this a “Law and Order” style post, as in “inspired by real events.” This is also, I believe, a classic example of a Latourian “matter of concern.”

Without suggesting in any way that the principles of academic freedom ought to be modified or interpreted differently, it should be clear that the material conditions of communication have completely changed since the last time (in 1970) the AAUP “interpreted” the 1940 Statement of Principles of Academic Freedom and Tenure. Though there is a Statement on Professional Ethics that was last revised in 2009, it seems clear to me that it is still failing to account for our changing conditions (maybe there wasn’t a critical mass of academics on Twitter yet). The key paragraph in that document is probably:

As members of their community, professors have the rights and obligations of other citizens. Professors measure the urgency of these obligations in the light of their responsibilities to their subject, to their students, to their profession, and to their institution. When they speak or act as private persons, they avoid creating the impression of speaking or acting for their college or university. As citizens engaged in a profession that depends upon freedom for its health and integrity, professors have a particular obligation to promote conditions of free inquiry and to further public understanding of academic freedom.

For me, there are two interesting points here. First, that professors have “rights and obligations” that are no different from other citizens. And second, that when they speak or act as private persons they “avoid creating the impression of speaking for their college or university.” Let’s deal with the second point first. Exactly how do you avoid that impression in social media? I suppose you could, if you in no way identify yourself as a professor on your profile page and can’t be googled and identified as such. What is due diligence in terms of “avoiding” here? It’s not like one can invoke the online version of Robert’s Rules to insist that an audience not associate one’s speech with one’s institution. And as for having the same rights and obligations as other citizens, that’s hardly much solace. Do we imagine that high-profile professionals in corporate America are not subject to personal conduct policies? We know that people get in trouble, and don’t get hired for jobs, because of what they post on Facebook and such. We teach our students about this all the time. So the notion that we have the same rights and obligations as any private person is something to think about.

So one argument says that professors should be able to write/say anything that falls within the protections of the First Amendment and not be subject to any professional or institutional consequences. Of course this is not practically possible to ensure because everything we say has consequences, often unintentional ones. This is an inherent risk of communication. I could be writing something right now that angers some reader who will remember and some day be disinclined to publish something I’ve written or promote me or whatever. No one can control that. That’s always been the case. There have always been feuds among faculty in departments, where one professor always opposes anything the other one suggests. Only now, with social media, we have this business on a larger scale. One can say that the acts of an institution are a different matter, and that’s true, but those acts are always actually taken by individual people sitting on a committee or in an office somewhere. That angry letter to the editor you may have written 10 years ago, instead of being buried in some archive where no one can find it, now comes up on the first page of a Google search for your name. And really anyone in America can find it and read it at any time, not just the couple thousand local folks who might have turned to page 53 of the newspaper one night a decade ago. We all know this already. So how can we pretend that our circumstances have not changed?

I know that we want to imagine that all these things are separate, but they were never inherently separate. As Latour would suggest, there were many hybrid technologies at work beneath that old 20th-century system constructing order, like Maxwell’s demon. But those old systems no longer function. The question then becomes what system should we build? In answer to this question, one often hears Derrida’s concept of the “university without conditions” cited:

[t]his university demands and ought to be granted in principle, besides what is called academic freedom, an unconditional freedom to question and to assert, or even, going still further, the right to say publicly all that is required by research, knowledge, and thought concerning the truth.

I would note that the key point here is that this is the university without conditions, not the professor without conditions. Professorial freedom has always been constrained by editors, reviewers, granting agencies, etc. We know what the university of print looked like and how it aspired to, though obviously did not reach, Derrida’s idealized institution. Whatever the digital university will look like, whether it is better or worse, it will clearly be different, because it already is.

Categories: Author Blogs