Digital Digs (Alex Reid)
I'm participating in a roundtable on MOOCs at the 4Cs conference in a couple of weeks. It's one of those experiences that shows the temporal disconnect between the churn of technological innovation and the stately pace of academic discourse. We proposed this roundtable nearly a year ago, and I think the things we would have wanted to say then are probably less applicable now. Or at least that's the case for me.
Here's what I'm thinking now (and I imagine I'll say something along these lines). A year ago some folks were gleefully pronouncing the end of higher education and proclaiming that MOOCs would revolutionize learning on a global scale. Today that revolution seems very far away. While MOOCs might remind us that education is intertwined with technologies, they might also show us that technologies cannot solve the challenges education presents. Millions of people are not going to log onto a website, interact with some material, and teach each other in any sustained, programmatic way. Perhaps we can imagine some near-future AI that will teach us things, or even that we could download knowledge into our brains like the Matrix. But those fantasies, like the fantasy of the MOOC, seem to misunderstand how learning, pedagogy, and education work. These fantasies are not so different from the one that imagines we could just read books and learn that way.
To me, the most interesting thing about the MOOC was watching how institutions, disciplines, and faculty responded with a mixture of entrepreneurial opportunism and trade protectionism. This is particularly the case for composition. The basic argument against the composition MOOC is that instructor feedback is crucial to writing pedagogy and cannot scale to the level of a MOOC. That raises the question of whether one can learn to write without instructor feedback. I'd say the answer is yes. I mean, I did, unless you count a couple of check marks and "very good, B+" as feedback. But that's not really the point. It is a more systemic, networked issue. If the goal is to be able to write for academic classroom environments with a single primary reader/professor (and then later to write for professional environments with a single primary reader/boss), then the traditional classroom fits into that system/network. If, on the other hand, one needs to learn how to communicate in a distributed, non-hierarchical digital environment with thousands of members, then a MOOC isn't a bad place to start writing. However, neither of those is a good model for understanding how communication works today for the average professional, college-educated citizen in the US.
The other side of the MOOC equation is not about pedagogy but about the economics of composition instruction. Sure, it's already pretty cheap in terms of adjunct pay. So maybe there's this fantasy of even cheaper, automated instruction. There certainly is one from the perspective of students trying to get out of paying for those composition credits. And as a discipline we have an interesting reaction to that. No one is in favor of the exploitative practice of hiring adjuncts, but no one wants to fire their adjuncts either. As such, compositionists find themselves defending adjunctification and the status quo against moocification. On the continuum between the status quo and abolishing FYC, where does the composition MOOC stand? Is it better than nothing? Worse than nothing? Do we really prefer adjunctification? Or are we insisting on some idealistic model of well-paid instructor-mentors who will deliver a curriculum that is ultimately anachronistic anyway?
The rapid rise and decline of MOOCs is a familiar tech start-up story. However, the obvious shortcomings of MOOCs have not reaffirmed the continuing value of traditional methods as much as they have shed light on challenges that remain unmet.
My graduate course this spring is on media theory. We're right between McLuhan and Kittler now, so I suppose I have Kittler's declaration that media determine our situation on my mind. In our class on Monday, we were looking at Manovich's selfiecity project, an analysis of 3,200 selfies taken in five cities around the world and posted to Instagram. The conversation we had got me thinking about the intersection between the concepts of media and genre.
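Since selfiecity's method is worth making concrete: the project reduced each photo to machine-estimated features (things like head tilt and degree of smiling) and then compared distributions across cities. Here is a minimal sketch of that style of analysis in Python, assuming a hypothetical selfies.csv of per-image estimates; the file, column names, and threshold are my inventions for illustration, not the project's actual data or code.

```python
# Sketch of selfiecity-style aggregate analysis: given per-selfie
# feature estimates (produced upstream by face-analysis software),
# compare cities on average head tilt and rate of smiling.
# Hypothetical input: selfies.csv with columns city, tilt_degrees, smile_score
import pandas as pd

selfies = pd.read_csv("selfies.csv")

# Mean and spread of head tilt per city
tilt_by_city = selfies.groupby("city")["tilt_degrees"].agg(["mean", "std"])

# Share of "smiling" selfies per city (the 0.5 cutoff is arbitrary)
selfies["smiling"] = selfies["smile_score"] > 0.5
smile_rate = selfies.groupby("city")["smiling"].mean()

print(tilt_by_city)
print(smile_rate.sort_values(ascending=False))
```

The point of such an approach is that the individual image disappears into a distribution: the unit of analysis is the city-level pattern, not any particular selfie.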
I suppose my starting point is to say that genre is an attempt to describe a communicative activity undertaken by a network of humans and nonhumans, which would include media technologies. Genres can suffer from all the familiar effects of generalization, as we see when we try to describe "academic writing" in general. Since I am ultimately not going to come down on the side of arguing that media determine our situation, I'm going to have to figure out how to bring media into conversation with other agents.
The selfie and the selfiecity project are good examples to work with here. How do we talk about selfies in terms of media? In theory there are many devices that could be used to take a selfie, though the most common is a smartphone. Certainly the mobile and instant nature of the selfie is typical of the genre. But is the medium the smartphone, or is it the internet, with the smartphone as just the sensing organ of the web? What kind of medium is the web? It is easy to see in McLuhanesque terms how the content of the web is prior media, including photography. The genre of the selfie is clearly more than a self-portrait. It is participation in a network. It is communication for any number of potential purposes. In this sense the Instagram selfie is different from the Facebook profile picture, which is also often a self-portrait. Presumably, though, Instagram and Facebook are just content on the internet, as are music, television, movies, video games, ebooks, etc., etc.
Rather than going down the avenue of skepticism regarding determinist arguments, I will admit to having an interest in the differences among these kinds of content and in the philosophical question of which differences make a difference. Many people will say the sound fidelity of MP3s is a difference that makes a difference. Others would point to the smartphone as an MP3-playing device changing the culture around music, along with the ease of single-song downloads, music piracy, and so on. Would we want to make similar arguments about ebook formats vs. print novels? If so, how do we want to make that argument? We can make the fidelity/aesthetic-experience argument about print vs. electronic, where we'd say the print and ebook versions of the same novel are different in a way that makes a difference. Or we can make a larger shift argument, where we'd say that the development of ebooks has changed/is changing/will change the genres of novels (and other books) in some way. Is that media determinism, or is it just media agency?
And what about the genre? Is it an object/actor with agency as well? To be honest, I'm not sure. Clearly the idea of a genre has an effect on the humans who write in it. And in some sense genres are emergent phenomena of communication activities within a network, and they have some cybernetic operation so that, for example, journal articles keep replicating. So let's say yes, provisionally. To return to the selfie: the historical genre of the self-portrait presumably goes back to cave painting (though maybe not; maybe we want to say that the self and self-image as a concept emerges at some historical point... a question for another time; regardless, we've been doing it for a while). The earliest photographic self-portraits fit into the broader genre of portraits. In fact, if you put a camera on a timer and then take the photo, it's probably hard to tell the difference between that and another person pressing the button. Sure, the selfie is often taken at arm's length, but the head-and-shoulders shot that results is familiar to the genre historically. If, for some reason, it weren't possible to get a head-and-shoulders shot from an arm's-length selfie, then I would guess we wouldn't see that many of them. If you agree with that hypothesis, then you'd be suggesting that the historical genre of the self-portrait had an impact on the expectations and requirements for selfies.
This leads to some other questions. If I take a self-portrait in a mirror (another common practice), is it still a selfie? What if I have some kind of remote control or timing device that allows me to set my smartphone at a greater distance? What if I have someone else take a picture of me and it just looks like I took it at arm's length? Does it matter if the self is pushing the button on the self-portrait? We can try to set some rules or whatnot, but really the answers are in the networks. Do the images circulate in the same way, as part of the same genre, doing the same kind of work for the same kinds of communities?
How about this one: if I take a selfie but don’t upload it, is it still a selfie? If I print it out and frame it instead? Or is that a different genre? Is it a different medium? Certainly the answer to that last one has to be yes.
For my own research questions related to teaching digital literacy and practicing digital scholarship, these are worthwhile questions. When we think about the relations between teaching students to compose print essays and preparing them to be digital communicators, when we think about the move from scholarly articles and monographs to online journals, we are thinking about the intersection of genre and media. I would say that these are differences that make a difference.
The #futureEd Coursera course has moved from the history of education on to the future, on to imagining future versions of higher education. As I've discussed in my last few posts, there is a slew of problems connected to higher education in the US in terms of access, cost, and outcomes (i.e., good jobs for graduates). Though these problems manifest themselves on campuses, and these institutions can act in response to them, they are really problems caused elsewhere. Higher education does not control the rapidly increasing demand for degrees, nor does it control the job marketplace its graduates enter. As for costs, so much of that is a function of state/public support, the role of student loans, and the strength of the overall economy.
However, I do think we can change how education functions on campuses; to do that, we need to alter the operation of research. Research drives the activity of faculty and forms the basis for curriculum. While it varies from campus to campus, I doubt there are many four-year colleges that wouldn't value research activity as at least equal to teaching in how tenure-track faculty are evaluated. And, of course, faculty have pursued their careers for the purpose of doing research. It's easy to see that higher education is built around these disciplinary-research paradigms. For example, examine the upper-division courses in an English department and you can get a sense of the specializations within the discipline (literary-historical periods of Anglo-American literature, various postcolonial and ethnic literatures, creative writing, journalism, rhetoric, and so on, depending on the particular mix of faculty at that institution). Examine the more customized descriptions of graduate seminars offered in a given semester and you can see the particular kinds of research being done within those specializations by the faculty. But the shaping power of research paradigms doesn't end there. In English, and in many other humanities disciplines, the most valued research product is a single-author book. The most common is the single-author journal article. Humanities research is, at its base, a solitary activity. The result is a kind of hyper-specialization, which I've discussed many times on this blog. Perhaps hyper-specialization is necessary for the sciences; I can't speak to that. In English Studies, though, it just seems like we are creating hothouse flowers. Though undergraduates don't become hyper-specialists, their curricular experience is shaped by that hyper-specialization, not only in how their individual courses connect but also in how their curriculum intersects with other disciplines.
Here's an example of what I mean. Let's say that instead of spending a good chunk of 3-5 years writing a monograph on a highly specialized topic for an audience of a few hundred readers, an English professor seeking tenure at a research institution needed to collaborate on an interdisciplinary project that published research (though maybe not a book), communicated with the general public, and developed curriculum. What might that look like? It might look like what Anne Balsamo has described (which I discussed here). There is a whole range of possible topics: ecological studies, disability, social justice, literacy, education, cultural preservation, and so on. I am fairly certain that most humanities professors believe their work is culturally relevant, so this would simply be a matter of stepping more fully into that role.
In my own case, I could see being in a community of artists, engineers, architects, business faculty, health professionals, social scientists, education researchers, scientists, and other humanists who were investigating digital media communication. How does digital media operate in the workplace? What are its social and cultural effects? What are its impacts on the human body? How does it interact with physical spaces? How do we learn with it? How do we learn to use it? How do we communicate with it? How do we design better technologies? What are the artistic/aesthetic possibilities? What are the ethical and legal issues? And so on. It's easy to imagine several dozen faculty on a campus working collaboratively on a group of research questions/projects. I could see myself working on a question such as how to design hardware and applications to facilitate digital communication and learning in educational spaces.
Doing this wouldn't eliminate our current departmental structures. Those would remain, especially at the graduate level. You need at least two ways to slice the university to get interdisciplinarity. A student could major in engineering or business or the humanities while also following a path through one of these research communities, connecting their disciplinary education with its operation in an interdisciplinary problem.
Right now, though, it would be professionally unwise for me to do that kind of work. I need to publish my single-author monograph to get my promotion to full professor. Really, any other work I do, including writing this blog right now, comes with a financial penalty. The smart, efficient professor is the one who organizes his/her teaching and other activities around publication. Everything the university tells us to do insists that we act myopically, regardless of what the university PR machine says.
Change that and you’ll change the university. You’ll change the daily work of faculty. You’ll change the kind of faculty we hire and promote. You’ll change the way graduate students get trained. And you’ll change undergraduate education. You might even change the cultural role of higher education in our society.
It's week 3 of Cathy Davidson's MOOC on the Future of Higher Education. I'm not sure where we are going. Though obviously there are far too many forum topics and posts for any one person to follow, from my vantage there are a couple of consistent themes, which are echoed in the course video lectures and related materials.
- Higher education is (too) expensive, especially in the US.
- Getting a degree is a pathway to better pay and job security over a lifetime, so it’s a good investment.
- The higher education system cannot handle the vast number of people seeking an education. This is true in the US but even more so on a global scale.
- Higher education is doomed. People don’t want to go to college because it’s too expensive and it doesn’t lead to good paying jobs. They are all going to go online and get badges instead.
How can all these things be true at the same time? Wait, I know. It's a trick question. It has something to do with the contradictory nature of capitalism, or something. Maybe not. It's more about point of view. College is more expensive than it used to be, and that expense is a barrier to some people, though there are more college students now in the US than ever before. Though statistically college remains a good investment on average, with more people getting degrees than ever before, the value of the degree is less than it used to be (we can't all have above-average incomes). On the other hand, the demand side is also misleading, because the 450,000 students on the wait list for community colleges in California (a statistic to which Davidson often refers) are perhaps not so much seeking an education as they are seeking what correlates with that education: a better-paying job. And who could blame them for wanting that? What that means, though, is that if there were an alternate route to that job, then the demand would disappear. With a quick Google search, you can find plenty of journalistic reports about "talent shortages" and the need for more, better-educated workers. But how real is this demand? For example, the US Bureau of Labor Statistics lists elementary school teachers as one of the areas with the largest job growth in the next decade, but if you talk to professors who teach education you will discover that the number of students entering the field has dropped off. In addition, we hear about layoffs of teachers, states busting unions, and the increasingly poor environment of the teaching profession. Who would want to be a teacher? If we need teachers so desperately, then how come the pay isn't getting better? Of course you could say that about a lot of the jobs on the BLS list, like home health aide, maid, childcare worker, and so on. Maybe there's a projected need in these areas because these are low-paying jobs that no one really wants if they can avoid them. Teacher isn't in that category, but it shares a common characteristic in that teachers perform a necessary function that no one really wants to pay for.
Now, do you see how that paragraph just spun off like crazy? If "the problem" is something like the neoliberal, transnational, capitalistic privatization of society, then in what possible way do we envision that changing the way people get certified for jobs makes a difference? And what, if anything, does this have to do with higher education, except that we are caught up in this snafu the same as everyone else? Is the question here "what should higher education look like in 10 years?" or is it "how do we go about changing the ethics of the planet?" Because as ridiculous as that second question is, I'm not sure we are even asking that so much as we are each asking "how do I get to be on the not-sharp end of the stick?" In short, there are too many stories and too many different problems, and trying to see them all as related just produces aporias as far as I can tell.
That’s why I prefer more localized versions of this question of higher ed’s future. For example, how can one university shift its curriculum and priorities to prepare its undergraduates to be effective communicators in a digital media/network and increasingly global context as both professionals and citizens? I’m asking for a friend. Riddle me that one, my fellow MOOCcupants.
I suppose my point is that I recognize the serious problem that many people face trying to make their way and that higher education at least appears to be a solution to the challenges they face. However, in the end, these are not problems caused by higher education and unfortunately I believe that higher education has only a minor role to play in solving them. That said, higher education does have many problems of its own to face. I don’t know that confusing the two is all that helpful.
I’ve taken some time to get involved in Cathy Davidson’s course on the History and Future of (Mostly) Higher Education. I have to say that while I’ve taken some enjoyment out of participating, I’ve witnessed a fair amount of magical reasoning, or as your grandpa might say “If wishes were horses then beggars would ride.” For example, if K-12 schools could educate students to be self-motivated, independent learners… If students had the digital literacy to seek out educational resources, create their own curriculum, and build collaborative communities to foster creative, engaged learning… If universities or tech companies or someone would create these technologies and offer them for free… If we had a valid accreditation system independent of higher education that employers would recognize as valid… If, after all this, those would-be students could get good jobs… well, then I’ll tell you what, IF all those things, then higher education would be in trouble because the INTERNET.
Let me restart from a different point. Regardless of technology, what is it that we want higher education (or whatever might replace it) to do?
- support one’s study of a professional area
- certify one’s preparedness to enter into a field
- develop communication skills (written, digital, oral) to be able to participate in professional (and civic/public?) discourse communities
- develop research skills, critical thinking, and creative problem solving
- learn to do all this work independently, collaboratively, online, FTF, in a global/diverse context
- develop an interdisciplinary understanding of the world so that we are not mono-dimensional
Do we also want to say that higher education should not only be about contributing to one's personal long-term earning potential but also about enhancing one's ability to contribute to the common/social good? Then we also have to consider whether we are taking the students as we find them or if, in our vision of the future, we are offloading a lot of current problems onto K-12 education. That is, is our vision one that begins by saying "If K-12 did x, y, z, then higher ed would do a, b, c"? I agree that K-12 education could be better and that we should try to make it better. However, I am not going to plan a future for higher ed that hinges on K-12 doing things it's never been able to do.
Now we could get rid of all the trappings of modern education. We could make learning more project/activity-based, more student-centered, and more collaborative, but we will still need experts to teach and mentor students and a larger support system. We could shift more of these activities into online environments, but we will still need some FTF because students benefit from it and because that is still how the professional world works. In fact we see less telecommuting now than we did a few years ago.
So here is a semi-idealized version of how our composition program works (which is not so different from other composition programs, I think).
- we meet 3 hours a week in-class where students write, work in groups, and have class discussion
- our pedagogy is student-centered and project-based: the curriculum is organized around a series of writing projects
- students explore professional genres and learn something about digital composing
- we have a writing center and individual faculty conferences with students to mentor/support them
- we have a significant online component for resources, class discussion, student and instructor feedback on writing projects, and so on
- we learn how to do basic library/database research and evaluate sources
- we practice invention strategies to find creative approaches to problems and assignments
In short, we're contributing to many of the goals we might establish for higher education, and we are meeting the students where they are, not in some fantasy land where they are all autodidacts. What I am saying is that when you look inside the university you can find a lot of pedagogies that are conceptually sound. Even at a big public university like UB, fewer than 10% of the classes have over 100 students, and more than two-thirds have fewer than 30. So I wonder if the problem isn't so much resources as it is professional/pedagogical development. In a class of fewer than 30 you can do project-based learning, class discussion, small-group projects, and so on. And maybe those things do happen regularly across the campus; I honestly couldn't tell you how much lecturing goes on in those smaller classes. In the conversations I have with colleagues across the campus, though, my sense is that their concern is often that they feel obligated to "cover" some large range of material. The curriculum is set up so that really the only thing you can do is info-dump.
The problem is that the digital "solution" isn't any better. It's also an info-dump (plus let the students talk to each other in forums and figure the stuff out). The standard info-dump pedagogy is based on the same fantasy that imagines plugging something into your head and learning kung fu in 30 seconds. There's no doubt that the info-dump pedagogy can give you a bunch of disconnected pieces of information that you can hold in your mind until the end of the semester. And if you don't take the info-dump approach, then students probably won't do as well on a test designed to see how much they retained from the info-dump.
So what if we established different goals for education? What if a successful education were measured by the work students were able to accomplish, the skills they developed, and the activities they were able to undertake rather than by the information they were able to retain? Maybe our pedagogies would be different. And maybe MOOCs would make less sense as an alternative. That doesn't mean that social media and other digital technologies wouldn't play an important role in learning. In fact, I imagine they will. It just means that watching some videos, reading some material, and then taking a test on it wouldn't seem like learning to us, regardless of whether those things happened online or in a lecture hall.
This is a reference to one of my favorite lines in Neal Stephenson’s Snow Crash, which offers a satiric, dystopian view of a future America.
When it gets down to it — talking trade balances here — once we’ve brain-drained all our technology into other countries, once things have evened out, they’re making cars in Bolivia and microwave ovens in Tadzhikistan and selling them here — once our edge in natural resources has been made irrelevant by giant Hong Kong ships and dirigibles that can ship North Dakota all the way to New Zealand for a nickel — once the Invisible Hand has taken away all those historical inequities and smeared them out into a broad global layer of what a Pakistani brickmaker would consider to be prosperity — y’know what? There’s only four things we do better than anyone else:
music
movies
microcode (software)
high-speed pizza delivery
So that's probably not the future, though I think we definitely have a leg up on the pizza delivery thing. Nevertheless, we like to predict the future, and this Coursera course is filled with amateur prognosticators (and who, after all, is really a professional one?). Not surprisingly, when you get thousands of people together in forums, you end up with a lot of common tropes. Much of the conversation I've seen starts with the complaint/observation that a college education doesn't lead to a good job (as it is apparently supposed to), that, as a result, it is too expensive, and that these two facts mean there is a crisis in higher education.
Here is my three-part response.
- The historical relation between a college degree, lifelong earning potential, and job security is correlational, not causal. A college degree does not equal a good job.
- Once you realize that, the argument about the expense of college doesn't make sense. I'm not saying that college isn't expensive or that there wouldn't be a social benefit to making it more affordable. I'm just saying that once you realize that college doesn't equal a job, a cost-benefit analysis based on the job you did or didn't get doesn't make sense.
- If there is a crisis in higher education, it's that too many people want degrees for the current system to support, in part because they misunderstand what college does.
Think about how colleges work. They are increasingly tuition-driven, as we know. They try to get the best students they can, but they also just need students. Then they make an effort to keep those students and ensure they graduate, because those statistics matter for how institutions are evaluated, but also because faculty and administrators (who are mostly former faculty) do care about students. We don't control the economy or the job marketplace, and we don't control the majors our students select. We could decide not to offer certain majors, but we are in a market with other institutions. We are in business just like everyone else.
We all understand that these are tough economic times and that even in good economic times there are plenty of people who are looking to improve their economic standing, looking for anything that might give them a chance. We can all see that better-paying jobs require college degrees. But while not having a degree is a barrier, having a degree is not a magic key. I don't know if we can change the motivations of people coming to college. We could try to better educate them about what college degrees really mean in relation to a career once they get here. But we can't seem to do that with English PhD students, so I'm not sure what shot we have with teenagers.
In Cathy Davidson's Coursera course there are plenty of people rehearsing the argument about how various alternate credentialing mechanisms will put an end to higher education forever. I think those efforts could be higher education's salvation, if they worked, which I am afraid they will not. There are nearly 22 million people enrolled in higher education this year. That's a 50% increase since 1996. Maybe higher education would work better if half of those people did seek some alternate form of credentialing. To be a little crude: if we have millions of students getting four-year degrees and ending up in crappy, low-paying jobs anyway, then why not let them collect some badges and MOOC certificates on the way to those same crappy jobs instead? The point is that it doesn't matter how many students get degrees; the job market is going to be the job market. The Bureau of Labor Statistics tells us there will be a growing number of jobs for nurses, elementary school teachers, accountants, various kinds of managers, and software developers. Those are the growing occupations that require college degrees and pay a decent wage right now (a median over $50K). There will be something like a million new jobs in these fields between now and 2022. That's great, except there are 22 million college students right now! How many of them are trying to get nursing, teaching, or business degrees? I'd say a high number. Of course, a lot of them will wash out and end up with communications or psychology degrees (the two most common degrees in the US right now). And they will probably end up in sales or customer service or something. And you can make good money in sales if you're good at it. Or you might work your way into management and a better salary. But along the way you'll probably not be happy about the money you spent getting that degree, because getting the degree didn't lead you to the career you imagined.
I think it would be fantastic if some alternate credentialing mechanism that employers would accept came along. The problem is that if you really want to be a nurse, accountant, or software developer, you are going to need some intensive, long-term postsecondary education. Employers who hire nurses, accountants, and software developers pay them a good salary. They want to be assured that the people they are hiring have more than the basic technical training required to do the job; they want people who are smart, good communicators, self-motivated, professional, and so on. They want the best, and because the jobs are desirable, they can demand the best. Maybe what we need to do is what we used to do 40 years ago: stop people at the gateway to higher education. Maybe we should moderate the number of students accepted into college based upon projections of the jobs available in those fields. Isn't that the argument we are making now about accepting PhD students in English?
Of course we didn't have this obsessive link between college and jobs back then. Think about The Graduate or, 20 years later, Douglas Coupland's Generation X. All those college-educated slackers in the late 80s and early 90s (I was one). Where was the furor over jobs and degrees then? Maybe it was there, but we just didn't have the internet to hype it up.
Maybe it will require some crisis, some shrinkage of colleges or something, but if we got to a future educational system where jobs and degrees were decoupled, or at the very least where the real relationship between a degree and a career was widely understood, then I think we’d be far better off. To me there is a subtle but crucial difference between saying that you are coming to college to study nursing (or engineering or accounting or whatever) and saying you are coming to college to get a job as a nurse, engineer or accountant. It is the subtle difference between reality and fantasy. Because universities invite many people to study these fields, but we don’t hire that many nurses, engineers, or accountants ourselves.
To bring it back to Neal Stephenson, as I should given the title, the future of the American economy is uncertain. Universities don't control it. The America of Snow Crash is one rampant with market economies, corporate enclaves, and deregulated everything. It's been a while since I read it, but I remember that even the FBI becomes a non-government agency or something. That's the direction we have been heading for a while. I don't know if things will get as bad as Snow Crash, but we can see the effects of making higher education into a market-driven business over the last 30 years. I'm not saying we need to go backwards or that going backwards is possible. However, even if we did go back to the days when higher education was better subsidized, we still wouldn't be the magical job and wealth creators that students wish we would be. You're still going to have to figure out what the four things America will do better than anyone else (pizza delivery or not) will be, and compete like hell, because that's the country we apparently want to live in. Maybe not.
I'm participating in Cathy Davidson's Coursera course on the History and Future of Higher Ed. It's just week one, so we'll see how it turns out; the introductory material was, well, introductory. She covers a lot of history (beginning with the invention of writing) to establish our current moment as revolutionary in terms of media/information. There was nothing really surprising there, and it sets up the primary task of the course, which is to imagine a future for higher education. In related news, Clay Shirky has a post, "The End of Higher Education's Golden Age," which also makes a familiar argument: that public funding of higher education has been in decline since the 1970s (a decline that really takes off after the end of the Cold War). Since then we have had a series of rearguard actions trying to preserve a state that cannot work: "Our current difficulties are not the result of current problems. They are the bill coming due for 40 years of trying to preserve a set of practices that have outlived the economics that made them possible."
In short, it's a brave new world out there, which we already knew. One might expect Shirky to applaud technological solutions, but he takes a different rhetorical tack: "The number of high-school graduates underserved or unserved by higher education today dwarfs the number of people for whom that system works well. The reason to bet on the spread of large-scale low-cost education isn't the increased supply of new technologies. It's the massive demand for education, which our existing institutions are increasingly unable to handle. That demand will go somewhere." I would take this back a step further. Why are all these people going to college? Let's, for the moment, agree with the premise that we have left behind the "golden age" (and there's plenty of evidence for that claim). Then we also must leave behind "golden age" values and ambitions (which were only ever fantasies anyway, ubi sunt). In particular, we should leave behind the claims that higher education supports democracy, creates citizens, or otherwise strengthens society by educating better humans. First of all, does anyone actually believe that ever happened in any consistent way? And even if it did, it happened primarily for a very small, white, male, and wealthy portion of America. And we all know what great leaders they've been of late.
Obviously most students go to college to get a job. Most people view the function of higher education as preparing students for a career. But higher education isn't really about career preparation, or at best that's only a small part of what we do. So maybe the answer is that the future of higher education is the expansion of institutions that are willing to respond to the consumer need for job preparation and the shrinking of institutions that will continue to provide a different kind of education, one that leads toward graduate and professional degrees. Maybe the same university will contain both kinds of institutions as separate colleges. The technical-vocational degree would probably be a kind of terminal certification. It would give you the basic skills needed to get that entry-level college job. Maybe that kind of training is done without much faculty, but with more modestly trained support staff and tutors, along with some masters-level faculty. It would be more like a community college, but without the community college's mission of providing entry into four-year colleges. These institutions could partner with corporations to provide specific kinds of training needed by those businesses. The corporations would underwrite some of the educational costs. In turn, qualified students could go to work for those corporations upon graduation to pay off some of their student debts: a kind of indentured servitude, really, but not that different from paying off student loans today, and there would be job security (only slaves have better job security than indentured servants).
Maybe that modest proposal isn't to your liking, though. I guess the question one has to ask is whether or not students really want to define their education in terms of the job they will get at the end. Because if they do, then what they are really doing is defining their education in terms of what corporations want. And my sense of what corporations want is that they want students weeded out and sorted. We can say that higher education underserves many American students, as Shirky claims, but if what they want are jobs, if what they want is what corporate America wants for them, then we aren't underserving them. They are getting exactly what they want; they are just not happy with the result because they ended up on the sharp end of the stick. Shirky talks about higher education's desire to stretch out a golden age long after it stopped working. Maybe we are doing the same thing with the college degree. When 30% of Americans have four-year degrees, the degree is a pathway to job security and better income. If 50 or 60% of Americans get four-year degrees, do we really think the degree will have the same value? Or will college degrees stop leading to jobs, or lead to less desirable jobs: the same job you would have had 20 years ago with just a HS diploma?
If you believe that higher education should be a democratizing force that offers opportunity to people, then I agree with you. If you believe that we should offer that opportunity to more people by getting more people into and through college, then I agree with that too. However, I think we have to realize that increasing the number of people who can compete for the opportunities a college education provides will also increase the number of people who lose out on that competition. The future of higher education is that a four-year degree will be less and less valuable every year while simultaneously becoming more expensive because of the increased demand for the degree. I suppose that if you imagine marketplaces are rational, then at some point these things will all even out, though I'm not sure why we need to bring in rationality at this late point.
Shirky believes there is no point in trying to convince governments to increase their funding of higher education, and he points to 40 years of failure in this effort. Maybe he's right. It does seem to be the case that voters don't want to make funding higher education an issue. Everyone thinks it's important, but no one wants to pay for it.
I wish I were more optimistic, as Davidson's class seems to be. I just can't get myself there. I see a future with more costly, junk degrees and an increased divide between elite education and what everyone else gets. I think that if you're in the top 10-20% of first-world students, you'll probably still be able to get a great education at a cost that is reasonable long-term in relation to what you might earn as a future professional. Those are the folks who will go on (for the most part) to do research, run corporations, serve in government, and perform other key professional roles. As for the rest of the population, as long as education is tied to short-term job needs and corporate whims, and as long as no one is particularly interested in supporting that education, I don't see anything great happening. Maybe this Coursera MOOC will change my mind (heh).
I attended the AAC&U eportfolio workshop last weekend in Washington DC. This was my first time at this conference, and it was good to see rhetoricians well represented among the presenters, including Chris Gallagher, Kathy Yancey, and Darren Cambridge. It was also a conference with a heavy corporate presence, which is not surprising given the potential money to be made in supplying these services to universities (and the general unpreparedness of institutions to think through these issues on their own). However, it was not until the very end of the day that I was able to identify a useful "matter of concern," to use Latour's phrase, though perhaps in retrospect it could have been predicted.
The closing session was given by Edward Watson, one of the editors of the International Journal of ePortfolios. He offered some analysis of the content of recent articles in the field and made the argument that more quantitative, reproducible research needed to be conducted and published. While he was willing to say that he didn’t mean that the research that was being done wasn’t valuable, he maintained that the field required more “rigorous” research as well. Darren, who was in the audience, spoke up to object to this characterization, which initiated some back and forth around the room. He remarked that portfolios arose from a value placed on epistemological pluralism and complexity that didn’t fit into the model of quantitative, reproducible research that Watson sought.
I'm not so interested in taking sides in a field beyond my own. From a distance, this desire to attach quantitative certainty to education is by now a familiar theme. Government agencies, accrediting bodies, university administrations, and so on want these kinds of assurances. They want to know that if they institute policy A for $XXX, then this will result in x, y, and z effects, so that they can evaluate risk and return on investment. That is, they want to know that spending a bunch of money on instituting an ePortfolio system will result in better retention rates, time to degree, job placement, communication skills... something. And the edtech corporations want to make these assertions in their sales pitches too. There are also some education research disciplines that will claim such knowledge can be produced through empirical research. My position on these matters is fairly simple. I'm happy for scholars to pursue their research agendas in their communities. I don't know that I would call such methods rigorous, unless by rigorous we mean predictable, because I will occasionally read such work and I've never found anything surprising in it. If you've ever read an empirical, quantitative study about writing pedagogy that produced results you thought were surprising, please let me know. I would honestly be interested in reading it.
Here is what I would assert about eportfolios. They are neither a necessary nor a sufficient cause for improving student learning. Depending on how they are employed within a curriculum and pedagogy, they can be a useful tool for creating opportunities for reflection and integrating learning experiences across courses and semesters. Depending on how they are structured, they can be a means for fostering learning communities. On the flip side, they can be nothing but a bureaucratic hoop. If eportfolios improve retention or time to degree, it is because the way they are structured in an institution helps students make their work more meaningful to themselves and chart their own development: through portfolios, students can take ownership of their learning, but only if they see the portfolios as theirs rather than as another requirement. However, this also requires faculty and staff to be on board with the process. Then eportfolios become a tool for connecting faculty with students. But that's a lot of work.
In short, if an institution invests heavily in eportfolios (though really eportfolios are just a MacGuffin), and faculty are invested in their operation, and students are given ownership over their work, then they can become a tool for improving educational experiences and results. Is that a claim that we really need to test? If we all care about something, believe that it is important, and invest our time and resources in it, then that thing will be the means by which we succeed. Our "problem" is that we don't really have that thing. I don't think eportfolios are a magic pill to solve that problem, but they are an opportunity to address it.
All of this became clear to me as I was on board my flight home. The speaker system on the plane was not functioning, so when the attendant spoke into the intercom all anyone could hear was static. This fact was as obvious to her as it was to everyone else on the plane. Still she made all her rote announcements through the system. You know the ones: seat trays and seats in the upright position, you can use your laptop now, etc. I imagine she is required to make these announcements in this way, even though the system is malfunctioning. This reminds me of higher education. Communication is not really the point. It's just the wah, wah, wah of the Charlie Brown teacher. So it's not about eportfolios per se. It's about opening a new circuit of communication.
There was a fair amount of uproar (at least in my Facebook stream) over Johanna Drucker's LA Review of Books essay, "Pixel Dust: Illusions of Innovation in Scholarly Publishing." The uproar was over Drucker's surprising skepticism regarding digital-scholarly innovations, surprising given her position in the digital humanities, and her apparent misunderstanding of some technical concepts, or in the case of bit rot, nontechnical concepts. Here is where I agree with Drucker: developing a digital solution to replace the role of print publication in academia will be neither cheap nor easy. What I think makes her argument difficult to follow, though, is the way she interweaves a defense of the humanities into her claims. In fact she concludes by writing, "we can't design ourselves out of the responsibility for supporting the humanities, or for making clear the importance of their forms of knowledge to our evolving culture." So is this an argument about digital publishing or is it about the humanities? Yes.
As she acknowledges, the challenges of digital scholarship are not limited to the humanities. In fact, one might say that the humanities are a minor concern here. It's scientific publishing and the high prices (some would say extortion) paid by academic libraries to Elsevier and the like for digital journals that are likely the first area of concern. Indeed, one might argue that if academic libraries weren't devoting so much of their budgets to these journals, then there wouldn't be a monograph publishing crisis in the humanities. Either way, I suppose Drucker's point is that this conflation of the humanities and digital publishing is not of her own making, that, to the contrary, she is responding to the claim that digital publishing can save the humanities and absolve others of the responsibility for investing more resources to support humanities research.
With this in mind, here is a key paragraph from her essay:
Lists that focus on literary studies, philosophy, foreign language studies, poetry and poetics have been cut, slaughtered in full public view, sacrificial lambs in the scholarly publishing market, as if they might make clear the dramatic sacrifice required to keep university presses going. Going for what? And how? To what end and for what audiences? As with other discussions of the costs of higher education in current Euro-American culture, the complexities of publishing need to be seen as an ecological system, not a set of discrete decisions about specific practices divorced from the greater political, ideological, and economically justified (at least on the surface) conditions of which they are a part. Or, to put it very simply, cutting humanities lists is like keeping dessert from a stepchild at the family table — it saves very little money and causes lots of distress. But humanities are not a luxury, and to show that they have a substantive contribution to make to the world we live in, we need to demonstrate their relevance to policy, politics, daily life, and business, not just rehash the same old bromides about critical thinking and imaginative life. The vitality of humanities is the lifeblood of culture, its resounding connection to all that is human makes us who and what we are. The preservation of cultural ecologies is akin to preserving ecologies in the natural world, it is, in fact, the human part of them. The humanities are us. Their survival is our survival.
Ok. One might quibble with the analogy here. If desserts were like humanities lists, then America wouldn’t have an obesity problem. Humanities lists are more like deserts than desserts. In this respect it is true that the humanities are not a luxury. Luxuries are desired, even if they are not practical or necessary. The problem with the humanities is not that they are viewed as luxuries; it’s that they are viewed as outmoded and irrelevant. The humanities aren’t viewed as the Lexus of academia; they’re the carriages of academia. They aren’t considered desserts; they are those horrid 1950s dinner recipes involving Jello and Spam. That said, I agree with Drucker about the need to demonstrate relevance, but then I think she goes too far. The humanities are not us. They are a historical-disciplinary paradigm (or set of paradigms if you prefer). Humans existed before the humanities, and they can survive without them. Presumably Drucker is not suggesting that the current methods, forms, and genres of humanities scholarship must remain with us forever, that our “survival” depends upon them. I would not deny that one can boil the humanities down to some basic questions that generate ongoing inquiry: why do we have the values that we have? how did our cultures and communities develop over time? how can we understand individual experience in the context of others? what role do our cultural practices, including artistic practices, have in our lives? how do we communicate with each other in order to resolve disputes, create new ideas, etc? But the methods for investigating these questions do not have to stay the same. In fact, they might change so much over time that we no longer call these investigations “humanities.” The development of the social sciences is a clear example of this.
So how should this conversation intersect with the development of digital publishing? It might seem like the tail wagging the dog for new publishing methods to shape scholarly practices. It would, if our current scholarship were not itself shaped by the media ecologies of the 20th century. Drucker raises a number of good points about the labor and costs associated with digital publishing: "Every aspect of the old-school publishing work cycle — acquisition (highly skilled and highly valued/paid labor), editing (ditto), reviewing, fact-checking, design, production, promotion, and distribution (all ditto) — remains in place in the digital environment. The only change is in the form of the final production." And then, of course, maintaining digital information (or keeping it accessible) isn't free, even if there isn't such a thing as bit rot.
However, I think of it this way: a car makes a lousy horse. In hindsight we might say that the automobile was a mistake. Pollution, wars over oil and gas, accidents, drunk driving, the suburbs: there are a lot of reasons to complain about cars. But there's no way we could go back to horse-drawn carriages, not without unmaking our society in fundamental ways (as in some post-apocalyptic sci-fi movie). If we used cars the same way we used horses, then there would be no point in having cars. Or here's another example: calculators. If we only used calculators to perform the same calculations that we used to do with pencil and paper, there wouldn't be much purpose to them. Indeed, few people would carry a calculator around with them, because the need for making calculations in daily life is not worth the bother. Of course, computers allow us to make calculations not possible for humans and open new avenues of knowledge and investigation.
In the same vein, ebooks are of limited utility. I have a Kindle and find ebooks convenient for many purposes, primarily the speed of access. PDFs of journal articles are more useful than scholarly ebooks. In the 90s, I had binders full of photocopied articles. Now I have a folder in the cloud. These are significant shifts in scholarly practice that we might tend to ignore. However, they are likely minor compared to the shifts we will see, shifts that will shape our methods and the questions we ask as scholars. I agree with Drucker that we should have some reservations about the claims that are sometimes made about the "digital revolution." However, in the larger picture, my skepticism is more focused on the other end of the technology-adoption spectrum. In the end, I suppose Drucker and I agree that the future of the humanities is interwoven with developments in digital media in much the same way as the humanities' past has been interwoven with print media. We just seem to disagree about what that means and what we need to do.
Levi Bryant has a recent post on this subject discussing three models of subjectivity, which he terms poststructuralist, contemporary, and Deleuzian. The challenge is figuring out how to create space for a subject with agency who can undertake political change. Thus he ends this way:
When we talk about resistance we want something approximating decision, choice, reflexivity, self-reflexivity, or, in short, agency. Yet when we adopt an ontological perspective on these issues of political emancipation and resistance, we seem to embrace a perspective where things happen of their own accord. Tornadoes don't choose to come into existence, they just do come into existence when the requisite gradients of barometric pressure are present. This is exactly the sort of thing we don't want to argue when we discuss subjugation because we acutely sense that resistance might not occur in these circumstances. Decision seems to be required. This is the problem with the idea of a political physics. It somehow misses the dimension of subjective engagement and decision. I don't know how to get this dimension into the framework I'm trying to think through, but then again I've seen no other position that's able to (though they all assert it) either.
The poststructuralist position, with its ideological overdetermination of the subject, doesn't offer that. Neither does the contemporary view, where the subject is always non-identical to itself and to those determinations; as Levi points out, "it's not at all clear (and maybe I'm just dense) how a void or emptiness can be a seat of agency." The Deleuzian model is a model of gradients and thermodynamics, but as noted above, it's hard to see where decision arises there.
For me subject and agency are separate issues. I tend toward viewing the subject as being roughly analogous to the tornado. It’s an emergent phenomenon. We don’t control our experience of subjectivity. I don’t get to decide what I sense or feel. There’s a feedback loop between working memory, what you’re consciously thinking, and what comes next into the mind. But do I get to decide what I think? Not exactly. I can shape it. I can try to focus on one thing rather than another. I can try zazen meditation and let thoughts slip by. The conscious mind, the site of the subject, is part of the mechanism that shapes subjective experience. As for the rest of the mechanism, some of it is part of the body and brain, and part is not. Symbolic behavior is the best example of this. What is human subjectivity without language?
Agency, at least in this conversation, is about acting out of subjective states. Take the most banal example, making a selection from a menu: if I have agency, then it is the capacity to choose an item from the list, but I certainly don’t have the agency to control my subjective preference for one item over another, whether I feel hungry or not, etc. Of course without the restaurant, the menu, and the wait staff, I can’t order lunch. As Levi points out, the idea that agency is some magical ability should be viewed with skepticism. I can’t magically order lunch. We all experience what we call decision-making. As subjects it certainly feels like we are making a selection from the menu. However, since we can only make one order, there’s no evidence that we could have chosen something else. It’s the same with politics or other social values. We all have them. Did we choose them? Do you recall sitting down and deciding whether you are a liberal or a conservative? Do you have the freedom to switch views?
What is it that we want agency to be able to do? It seems to me that the desire for agency is the desire to have the potential to act other than the way we do. Assuming that we largely agree with our own actions, even though the results are often imperfect, I would have to guess that our belief in agency is our hope that others have the potential to act differently, so that there is at least some chance that in the future we might persuade them to do so. But here’s the thing. We already do that. It’s called rhetoric, right? I choose from the menu, but the menu also persuades me. So let’s say that agency is a force. It is directed by thinking, an activity over which we have limited control, an activity that is shaped by our relation with objects both within and without the body. Our agency includes our capacity to affect others’ thinking and decision-making. But agency isn’t something that we have as an ontological quality.
I’m at work on the third chapter of my book this break, titled “digital nonhumanities.” Here is a brief excerpt discussing Alan Liu’s 2013 PMLA article “The Meaning of the Digital Humanities,” along with Matthew Jockers and Ted Underwood. The point here is fairly straightforward. The mainstream humanities’ objection to digital methods is the belief that they represent a scientistic faith in machines that is incompatible with the humanistic tradition’s “residual yearnings for spirit, humanity, and self—or, as we now say, identity and subjectivity,” as Liu puts it. Critical theory builds on these yearnings, as I discuss below. Either way though, the argument against the digital is that it places an uncritical faith in the capacities of machines, resting on what Liu calls the fallacy of separate human and machine orders. My point is that the critique of DH relies upon the exact same fallacy. As such, the real “crisis” represented by DH (at least potentially) is not that it values machine over human but that it might actually move beyond the human-machine divide into a new ontological order…
The complaints often lodged against the digital humanities accuse its practitioners of overlooking the lessons gained from critical theory in terms of understanding the dynamics of ideology, power, and other, variously named cultural forces in shaping knowledge. Another, more Latourian approach would investigate the many actors operating in the formation of digital humanities research. Liu notes this as well, writing that a science and technology studies approach would recognize that “any quest for stable method in understanding how knowledge is generated by human beings using machines founders on the initial fallacy that there are immaculately separate human and machinic orders, each with an ontological, epistemological, and pragmatic purity” (416). Instead Liu suggests that digital humanities methods, like those in the sciences, require “repeatedly coadjusting human concepts and machine technologies until (as in Pickering’s thesis about ‘the mangle of practice’) the two stabilize each other in temporary postures of truth that neither by itself could sustain” (416). For Latour this coadjusting is not a flaw; instead it is precisely the way in which knowledge is constructed. For Liu, however, such processes put the humanities in a crisis, where “humanistic meaning, with its residual yearnings for spirit, humanity, and self—or, as we now say, identity and subjectivity—must compete in the world system with social, economic, science-engineering, workplace, and popular-culture knowledges that do not necessarily value meaning or, even more threatening, value meaning but frame it systemically in ways that alienate or co-opt humanistic meaning” (419). This is a familiar story, at least as old as Matthew Arnold, in its identification of the technoscientific world as a threat to humanity, but it is more interesting in this context as it would seem that the humanities here suffer from the same “initial fallacy” as the sciences in seeking a separation from machines. Perhaps it is not so much the sciences that ignore the role of machines in the “temporary postures of truth” that they produce as it is the contemporary humanities with their faith in the truth revealed through theory. If the fallacy of separate human and machine orders is rejected, then neither the traditional humanistic yearning nor its more progressive, contemporary, theory-driven version appears any less mechanistic in its methods than those of the chemistry laboratory. In other words, while at first glance the difference between the digital humanities and its print-based predecessors may appear to be the employment of technologies in the production of knowledge, this appearance relies upon a mistaken belief that humans and machines are ontologically and epistemologically separate. It is possible that the digital humanities might offer a method to move beyond this fallacy and abolish the divide between humans and nonhumans on which the humanities has been traditionally established, though it would be premature to suggest that the field is doing this now.
At the same time, it is reasonable to hypothesize that the sites where the digital humanities is weakening this divide would also be sites of controversy with the print humanities. For literary studies, Ted Underwood argues that a quantitative approach generates controversy primarily “because it opens up new ways of characterizing gradual change, and thereby makes it possible to write a literary history that is no longer bound to a differentiating taxonomy of authors, periods, and movements” (2013: 16). Underwood explains that evolutionary patterns of gradual change contradict the contrastive study of literary periodization that has defined the disciplinary paradigms of literary study. How is periodization connected to the fallacy of separate cultural/human and natural/machinic orders? The analysis of large collections of texts representing decades of literary production, and the resulting detection of patterns of influence, development, or evolution, suggests that literary production operates across networks that exceed the scale of individual human authors. While postmodern literary theory has diminished the role of the author (a role already less prominent in the discipline, given the “intentional fallacy” of New Criticism, than it is in mainstream culture), there remains, in the extrinsic, cultural interpretations of literature, some exceptional, agential role to be played by the authorial subject. Even if the author’s role is overdetermined by culture, it remains on the cultural side, as opposed to the natural/nonhuman side, with its capacity for immanent change as opposed to an obedience to transcendental, natural laws. In this view, periodization is evidence that symbolic action is a uniquely human trait; it is evidence of the ontological divide between humans and others. As Matthew Jockers remarks following his own digital-humanistic investigation, “Evolution is the word I am drawn to, and it is a word that I must ultimately eschew. Although my little corpus appears to behave in an evolutionary manner, surely it cannot be as flawlessly rule bound and elegant as evolution” (171). As he notes elsewhere, evolution is a limited metaphor for literary production because “books are not organisms; they do not breed” (PAGE?). He turns instead to the more familiar concept of “influence.” However, influence also reasserts the human/nonhuman divide. Certainly there is no reason to expect that books would “breed” in the same way that biological organisms do (even though those organisms reproduce via a rich variety of means). If literary production were imagined to be undertaken through a network of compositional and cognitive agents, then such productions would not be limited to the capacity of a human to be influenced. Jockers may be right that “evolution” is not the most felicitous term, primarily because of its connection to biological reproduction, but an evolutionary-type process, a process as “natural” as it is “cultural,” as “nonhuman” as it is “human,” may exist. Regardless of whether one is convinced by such an argument about literary history (and even Jockers and Underwood remain skeptical), it is evidence that the controversy the digital humanities presents lies not in its assertion of the ontological divide between humans and nonhumans, or more precisely in its preference for the measurement of machines over the interpretation of humans, but rather in its erasure of that divide.
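To make the kind of measurement Underwood and Jockers describe a bit more concrete, here is a minimal sketch of tracking one word’s relative frequency across decades, the simplest version of “characterizing gradual change.” The corpus layout, file-naming convention, and target word are my own illustrative assumptions, not anything taken from their studies.

```python
# Sketch: chart a word's relative frequency by decade to surface the
# gradual drift Underwood describes. Assumes plain-text files named
# like "1851_moby_dick.txt" in ./corpus (an illustrative convention).
import re
from collections import Counter, defaultdict
from pathlib import Path

counts = defaultdict(Counter)   # decade -> word counts
totals = defaultdict(int)       # decade -> total tokens

for path in Path("corpus").glob("*.txt"):
    year = int(path.name[:4])
    decade = (year // 10) * 10
    tokens = re.findall(r"[a-z']+", path.read_text(encoding="utf-8").lower())
    counts[decade].update(tokens)
    totals[decade] += len(tokens)

word = "perhaps"  # any word whose fortunes you want to trace
for decade in sorted(counts):
    freq = counts[decade][word] / totals[decade]
    print(f"{decade}s: {freq:.6f}")
```

Nothing in a script like this decides between “evolution” and “influence,” of course; it only produces the gradual curves against which periodization’s sharp breaks have to be argued.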
This is probably the last in this series of posts surrounding the MLA silly season. While senior grad students, recent PhDs, and others prepare for their job interviews, another crop of potential graduate students is entering the pipeline. A recent post from The Little Professor, responding to some Facebook comments from Michael Berube, suggests that graduate programs should cover the expenses of their students’ job searches (e.g. going to MLA). It’s not really a practical suggestion, but I think her more general point was that doctoral programs should take more responsibility for the relationship between the size of their programs and the job market.
This raises a different question for me though: who has responsibility to whom and for what when it comes to graduate programs?
As many have quickly pointed out, departments do not typically set their own enrollment targets. Now if one wants to make the argument that it is unethical for there to be English majors or graduate programs because they do not lead to jobs, then I suppose we can make that argument. However, this applies equally to the BA or MA as it does to the PhD. It’s just that there is zero expectation of a specific career coming out of the BA or MA. If we want to make an ethical argument to defy institutional enrollment targets for doctoral programs and accept the consequences, then why are we ok with BA or MA programs? Maybe we should shut the whole thing down on ethical grounds. Of course we don’t, because we believe that the study of English is good unto itself, even if it doesn’t lead to a particular job.
Why does that change when we get to the doctoral program? The answer is that it doesn’t, at least not at first. You get into graduate school on the basis of your success as an English BA: you submit your best undergrad essay, and you write about your interest in some literary topic. Typically you don’t write about your desire to do the job of a professor: working with students, sitting on committees, responding to student writing, etc. In other words, entering graduate school in English isn’t about pursuing a professional goal; it’s about pursuing an intellectual interest. And then the first two years of graduate school are just a super-charged version of undergraduate life (plus teaching if you’re a TA). One takes classes, reads books, sits in seminars, and writes seminar papers. There’s more reading and there are longer papers. Expectations for quality are probably higher, and there’s probably more theory. The content shifts a little, but the practices are much the same. How many graduate students pick a field based upon an analysis of the demands of the job market? How many pick courses based upon some understanding of the expertise valued on the job list? I would say the answer is not many.
My point is that typically there is little professional turn in the first two years of an English doctoral program. Students continue to pursue their intellectual interests without giving much thought to how those connect to a professional life, just as they did as undergraduates, and graduate programs and faculty facilitate this through the curriculum they offer. Then we get to the qualifying examinations, where students really need to decide on an area of specialization. This is clearly a professionalizing decision, as the exams should launch the dissertation project, which will in turn define one’s job qualifications. Again though, how many students look at the market and select a specialization based on job trends? And do we even recommend that they do? I would say that we don’t. Instead, the commonplace wisdom is that one must select a field that one truly loves if one expects to complete the dissertation and do well.
It’s a strange piece of advice despite its common-sense appeal. As this Chronicle piece from last summer reports, only 50% of entering graduate students complete their doctoral degrees (you can also look at this quantitative data from the PhD Completion Project). Furthermore, even looking at the long-term data, only 50% of those PhDs get tenure-track jobs, and then one would have to ask what percentage of those get tenure. So, given the 7-10 years it takes to finish a dissertation and land a tenure-track job, plus the six years before coming up for tenure, we might say of the class of 2014 that by somewhere around 2030, hopefully, 1 in 5 will have tenure. That’s assuming the job market rights itself, undergrads keep majoring in English, tenure doesn’t disappear, etc., etc. Now those chances may not seem promising, but given that completion rates have never been much better than 50%, the chance of an entering grad student getting tenure at some point has probably never been much better than 1 in 3. So that’s the other part of the argument for pursuing what you love: if you’ll be spending 10-15 years on something that has a very good chance of leading nowhere professionally, then you had better love it.
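The arithmetic behind that guess is worth laying out, since the paragraph above leaves one number open: what fraction of tenure-track hires eventually earn tenure. In this sketch that rate is a placeholder assumption; the completion and placement figures are the ones cited above.

```python
# Back-of-the-envelope odds for an entering PhD cohort.
completion_rate = 0.5   # entering students who finish the degree (cited above)
tt_placement = 0.5      # finishers who land a tenure-track job (cited above)
tenure_rate = 0.8       # TT hires who eventually earn tenure (my assumption)

p = completion_rate * tt_placement * tenure_rate
print(f"Chance an entering student ends up tenured: {p:.2f} (~1 in {round(1 / p)})")
# -> 0.20, i.e., roughly 1 in 5
```

Raise the historical placement rate and the product climbs toward the 1-in-3 ceiling mentioned above; no plausible value pushes it much higher.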
It would be interesting to see a survey of incoming graduate students in English to see what they know about their chances on the job market. It’s hard to imagine that they don’t have some sense of the challenges of the job market, but maybe I’m wrong. My guess is that they aren’t making these kinds of economic, cost/benefit decisions. I know I didn’t. I was making a clearly anti-careerist move in going to grad school. I was consciously rejecting the idea of pursuing a corporate job. When I got married (to another grad student), neither of us ever thought we’d be able to buy a home. To that point, we’d both lived slacker, GenX lives as temp employees and students; we weren’t thinking about some other kind of life. Now of course we have that other life. (My wife never completed that program, though she has had a successful academic career and is now in pursuit of a different PhD, so I suppose the two of us reflect the statistics fairly well.)
On the university/department end, though, the decisions are all economic. Admissions decisions reflect the demand of applicants, the enrollment priorities of institutions, and the way universities are ranked. If you really wanted to change the way graduate school functioned, then you’d pressure the Association of American Universities to make retention, completion, and placement rates for graduate programs a significant criterion for membership. I know AAU and Middle States pressures for retention and time-to-degree at the undergrad level have made my university sit up and pay attention. What if, instead of admitting 10, graduating 5, and placing 2 or 3, you had to take those 10, graduate 8, and place 5? You could try admitting fewer students, but only if you really knew which 2 or 3 to cut (which isn’t that easy). Would you change your tactics from pursuing what you love to something more strategic? Would you alter the curriculum to reduce the shock of moving from course-taking to dissertating?
My point is that if there were top-down pressures from the AAU or federal granting agencies to improve performance at the graduate level, then this would eventually result in changes in graduate curriculum and in the culminating activity we call the dissertation. I don’t know if such pressures will ever arise. And I’m not sure how they would affect incoming graduate students, who would enter far more pragmatic programs than they do right now, or whether such pressures would shape undergraduate programs, at least for those who want to pursue graduate degrees.
I even wonder if this is what we really want (and by we I mean both graduate students and faculty). Would we want to create programs where 1 in 2 students ended up getting tenure someday (instead of 1:4 or 1:5) if it meant creating more lock-step programs, restricting the fields and methods students enter, requiring that students develop skills demanded by the job market, and so on? And if we don’t want to do what is necessary to get better results, then should we stop complaining about the results we do get?
Perhaps it’s just the MLA season, but it’s the time of year when the dearth of tenure-track jobs and the exploitation of adjuncts often come up in the same sentence. So what’s the relationship between the two? I offer that as an honest question. I’m not sure if there is a national answer to it, if the answers are unique to kinds of institutions (research, liberal arts, community colleges, etc.), or if they are entirely local. We all know that over the last 25 years or so the number of tenured/tenure-track (TT) faculty has declined and the number of adjunct/non-tenure-track (NTT) faculty has increased. It would seem to make sense that hiring TT faculty would therefore reduce the number of NTT faculty. As the director of a first-year composition program, I work with a lot of NTT faculty. Really, all the NTT faculty in our department teach writing, either composition or journalism, and the latter are primarily full-time professional journalists in the region.
Here are our current stats. A little over 40% of our composition courses are taught by adjuncts; the rest are taught by TAs, which is a different issue. Of those adjunct sections, more than half are taught by former TAs. That is, our TAships last five years, but hardly anyone finishes in that time frame, so they often take on adjunct positions for a year or two before finishing. We have seven other adjuncts and two NTT faculty who serve administrative roles in the composition program.
So here’s my point. My department is making two TT hires this year: one associate and one assistant. How will these hires impact the number of adjuncts working in the department? They will not. In terms of our reliance on adjuncts, it doesn’t really matter how many TT faculty work in my department. I imagine this is true in virtually every department. You tell me. If your department has grown in the last decade, has that reduced the number of adjuncts employed? Maybe if we decided to hire a TT journalism professor that would make a difference on that end, but not for composition, which is where 90% of the adjuncts work. And while these are local numbers, I think this is a fair description of the role of adjuncts in English departments nationally.
This semester we have 44 adjunct composition sections, 32 TT-taught undergrad literature classes, and 14 TT-taught graduate classes. To keep these proportions and eliminate adjuncts, about 50% of TT teaching would need to be composition. If we viewed supporting our former TAs as adjuncts as a worthy cause and only wanted to eliminate the long-term adjuncts (which wouldn’t make them happy, btw), then composition would be 30% of TT teaching, or about 1 course per year on the standard 2-2 load. Of course it would require a significant amount of hiring, probably a 30% increase in faculty. To cover our extra 40 sections a year, we’d need at least 10 TT faculty. It’s an interesting though purely hypothetical question: would the typical R1 English department faculty member agree to teach composition on a regular basis in exchange for more hires? And then there would be the question of hiring and retention. Of course we want the very best hires; we want to compete for hiring with the best departments in the country. How would this teaching requirement affect our competitiveness? Would the labor-intensive work of teaching FYC (outside of one’s disciplinary specialization) affect junior faculty in terms of their research productivity? Who knows? It’s all hypothetical because hiring would never happen that way.
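For what it’s worth, the proportions in that paragraph check out; here is the back-of-the-envelope version, using only the numbers given above (the 20-section figure for long-term adjuncts is my inference from “more than half” of the 44 being taught by former TAs).

```python
# Checking this semester's proportions from the figures above.
adjunct_comp = 44      # adjunct-taught composition sections
tt_lit = 32            # TT-taught undergrad literature classes
tt_grad = 14           # TT-taught graduate classes
tt_total = tt_lit + tt_grad

# If TT faculty absorbed every adjunct section:
print(f"comp share of TT teaching: {adjunct_comp / (adjunct_comp + tt_total):.0%}")  # ~49%

# If only long-term adjunct sections (~20/semester) moved to TT lines:
long_term = 20
print(f"comp share, long-term only: {long_term / (long_term + tt_total):.0%}")  # ~30%

# Covering 40 extra sections a year on a 2-2 load (4 courses/year):
print(f"new TT lines needed: {40 // 4}")  # 10
```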
In addition, there would be a real disciplinary problem, and this has something to do with English or maybe the humanities in general. My sense is that elsewhere in the university it is not so unusual to have classes assigned to you and to be asked to teach a fairly standard curriculum. In English, though, it would be simply impossible to ask TT faculty to teach composition from a standard syllabus. Instead, we would inevitably get some kind of writing-about-literature course. Whatever de/merits we might assign to such a course, it wouldn’t be a composition course. And this is an expanding problem: undergraduates not only can benefit from the conventional academic-writing composition course but also could use courses that address oral presentation, digital literacy, and writing in the disciplines/professions. We’re only spinning further away from the disciplinary expertise of the typical English professor. You could hire a new class/department of TT professors to teach these courses, but now we’re talking about a real explosion in hiring, as you couldn’t have a department of faculty teaching only general education courses. It would mean new majors, new graduate programs, and so on. Again, no one is making that investment to solve this problem.
The realistic alternative, and the one that is implemented in many places, is creating full-time NTT positions that have respectable salaries (though not as respectable as TT positions). As far as I can tell this makes sense for us. SUNY has “clinical” faculty who have their own ranks, right up to clinical full professor. There’s no tenure, but there are multi-year contracts. However, if that is the best idea, then it is only further evidence that TT hires don’t impact adjunct hiring, at least not in English. What it tells us is that adjuncts do work in our departments that is considered non-disciplinary. If that weren’t the case, there’d be TT faculty teaching composition in my department (or yours) this semester, just as there are such faculty teaching introductory literature courses.
I wouldn’t assume that the way things work in English, or locally in the various departments in which I’ve worked, would describe the general adjunct situation in academia. Adjuncts do lots of different things. However, that’s probably just another reason to argue that TT hiring can’t be seen as a general solution to adjunct hiring. Any university will require more faculty to teach introductory writing than it requires to research it (or to teach more advanced writing/rhetoric curricula). The problem right now is that we have so many literary studies job applicants who find themselves in these composition adjunct positions. They don’t want to be there, and they don’t really want full-time NTT comp teaching jobs either. If there is going to be a permanent class of NTT writing faculty as a regular feature of universities, then those positions would have to be filled by people who want those jobs. Assuming the jobs paid well enough, were secure enough, and had some opportunity for advancement, I don’t see why this couldn’t be possible. But it wouldn’t be the same people who are now on the market for TT jobs in literary studies. Those just aren’t the jobs they spent the last decade trying to get.
I suppose where I’m ending up is thinking that we aren’t going to get very far in addressing the inequities of adjunct life by fighting for more TT jobs, at least not in English. Instead, we should focus on making a career of writing instruction into a viable professional life.
From “The Professor is In” and “Blogora” comes the ongoing conversation over the responsibility of tenure-track faculty for the adjunct situation and the job market. The former makes an argument for the privileged position of tenure-track faculty, comparing tenure-track privilege to white privilege. I don’t really care to make an assessment of that argument here. I will note, as I am sure others have, that the end game desired by the former is more tenure-track positions and fewer adjuncts. I’m fairly sure the analogous argument (which would be what? more white people?) isn’t made. The argument for more tenure lines is basically an argument for more money. No one is arguing that we cut tenure-line pay, increase teaching loads, and put everyone on the tenure track (or eliminate tenure and give everyone multi-year contracts). Hell no. After all, what is tenure without the much-disparaged “privilege” that accrues to it? Still, that’s fine. It’s an argument for more investment in the humanities. OK, what do we get in return for the investment?
Maybe the answer is more single-author monographs that sell a couple hundred copies. Maybe, but that assumes there will also be investment to keep those publishers afloat. More realistically, the answer is that humanities faculty hires are tied to student demand for the curriculum they offer. My sense in English is that adjunct faculty are very heavily tied to the teaching of FYC. That is, there aren’t a lot of adjuncts teaching upper-division literature courses or even introductory literature courses. In some places, tenure-line faculty teach FYC; in others they don’t. Regardless, I think it is fair to say that in English the process of adjunctification has been linked in no small way to the curricular separation of FYC from the rest of the department.
My point is that if one wanted to reduce adjuncts in English, the only way to do it would be to have tenure-track faculty teach more composition. That is unless, of course, they managed to offer other courses that students wanted to take. But these are two sides of the same problematic coin. In my 20 or so years of experience in English departments, literature faculty generally don’t view teaching composition as part of their profession. (That’s fine; it probably isn’t.) Instead, they view their teaching responsibility as tied to their field expertise (who wouldn’t?), which means teaching in a particular literary period. They already do that, and the student demand for such courses is what it is. More of the same won’t increase demand. So whether one is teaching composition or some other course beyond what is currently being offered, one is asking faculty to teach courses outside their field. Now one could hire new faculty in a very different field that might attract new students (that would be a risk one could attempt), but that would still mean changing a department’s culture. Even if a given professor isn’t teaching a foreign course, the rise of a foreign curriculum is just as disturbing, maybe more so. I’ve seen that first hand as well.
Besides, that doesn’t do much to help the grad students being trained in the original field.
So there’s the vicious circle in a nutshell. “Privileged” tenure-line faculty are trained in a specialized disciplinary field to teach courses in that field and train grad students in that field. In my experience, the majority of such faculty would rather go down with the ship than change this arrangement. I’m not saying those are the only options, but given the choice, I’d say I hope you have a life vest (or better yet, that you don’t get on the ship). Changing would certainly take some of the shine off of that “privilege.” But, in general terms, the answer to our problems is deceptively simple.
If you want more money, start doing something someone will pay more money for.
Of course it’s not so easy to figure out what that thing is that we should be doing. We always want to say that the humanities shouldn’t be tainted by such market-driven concerns, that they shouldn’t strive to be useful. That’s fine, until the humanities also start clamoring for more money for more faculty. Then the humanities have to make some argument to someone about their worth, and ultimately that’s going to come down to students taking classes. Create demand for curriculum and you create demand for faculty.
A friend of mine contacted me about this matter the other day and asked if I had a blog post about it. I thought, what a good idea. Deciding whether and how students should use their devices in the classroom remains a contentious issue in the academy. My sense is that the prevailing opinion is to outlaw smartphones and similar devices, while the laptop issue is more divided between those who prohibit and those who do not. I did write about laptop policies two and a half years ago. Policies haven’t changed much, and neither has my view: most technology policies are established to recreate the technological conditions of the 1990s, conditions under which our legacy pedagogical practices still made sense.
For me this is not just a pragmatic, pedagogical question. Instead it is a very visible example of our continuing struggle to learn to live with digital media. Students don’t know how to behave in classrooms with smartphones in their hands, and faculty don’t know how to operate in digitally mediated learning environments. As faculty, I think we need to address the second of those problems first. And there are many viable answers, answers that have to take into account the discipline, the size of the class, the course’s curriculum, and the technologies that are available. I don’t think any faculty member wants their students randomly web surfing, on facebook, texting friends, checking email, etc. The obvious, easy, and commonly adopted policy is just to ban all access. But that’s a policy that says figuring out how to live and work with these devices isn’t my problem. It kicks the can down the road. It also says that faculty are not responsible for figuring out how emerging technologies can enrich learning, which I believe is not true. I think we are as responsible for that as we are for selecting texts, preparing lectures, and creating assignments and other course activities.
In a fairly straightforward way, we can divide the affordances of digital media networks into two categories: knowledge/data relevant to the course available over networks, and applications/devices that expand interactivity and pedagogical opportunities. In the case of library databases, scholarly journals, and other websites students might access, I think the use is fairly straightforward: students are using their devices in groups, individually, or in guided discussion to conduct research in the classroom. The second category is more varied, as there are many different disciplinary directions here. But a simple one is to think about a CMS or a course blog and the activity of in-class writing. Now students are writing to a networked space, rather than in their notebooks, and the writing can then be shared in class or after. In larger classes, clicker apps have become popular as a way of getting quick feedback. Some more advanced examples (with a small technical sketch of the course-blog case after the list):
- a prearranged Twitter conversation with an outside expert;
- collaborating on a wiki page or Google doc;
- writing feedback on a video or a book on YouTube, Amazon, or wherever;
- participating in some larger social discussion site.
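To make the course-blog example concrete, here is a minimal sketch of pushing a student’s in-class writing to a WordPress course blog through its REST API. The site URL, username, and application password are placeholders, and the sketch assumes a WordPress install with application passwords enabled; none of it is tied to any particular course described here.

```python
# Sketch: publish an in-class freewrite to a WordPress course blog via
# the WP REST API. URL and credentials below are placeholders.
import requests

SITE = "https://courseblog.example.edu"
AUTH = ("student_username", "xxxx xxxx xxxx xxxx")  # WP application password

def post_freewrite(title: str, body: str) -> str:
    """Create a published post and return its public link."""
    resp = requests.post(
        f"{SITE}/wp-json/wp/v2/posts",
        json={"title": title, "content": body, "status": "publish"},
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["link"]

print(post_freewrite("In-class response, week 3", "Today's prompt asked us to..."))
```

The details matter less than the shift they enable: the writing lands in a shared, networked space where classmates can read and respond, rather than in a notebook only the professor will ever see.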
It is true that none of these examples involve students listening quietly to the professor. I’m not suggesting that should never happen. I would suggest that there is nothing magical or ideal about lecturing. Do most students need to develop the ability to pay attention as listeners and readers of print? Sure. They also need to develop the ability to operate in those multi-attentional spaces in which we now regularly live.
My point is that, as faculty, once we take an open-minded look at the possibilities for digital media networks in our classrooms, we can identify ways that their operation can strengthen our teaching. Based upon that understanding, we can then craft technology use policies. The knee-jerk prohibition of technology is no more thoughtful than the careless, uninterested student who turns to facebook in the classroom. If students are given meaningful activities to perform in the classroom, then they are far less inclined to drift off digitally. Of course some students will slack off. Guess what? There are plenty of students ignoring their professors in tech-free classrooms too. In the end, if the classroom is a space designed to help our students, then we can tell ourselves that binding their hands helps them because it takes away some of the temptations, but an even better approach might be to teach them how to use those hands.
I ended my last post conceiving of journals as a scholarly platform and wondering if another kind of platform, besides the kind invented over 100 years ago, might be better. To raise this question, the first step, as they say, is admitting one has a problem. What kinds of problems might exist with humanities article publication?
- Economic/marketplace problems: who wants to pay for this work?
- Access problems: perhaps a product of economic problems
- Audience/citation problems: even those who have access don’t appear to use it
- Over-production problems: publication is driven by the demands of tenure/promotion rather than rhetorical exigencies related to the content of the research itself
But the fundamental question is whether or not articles (and monographs) remain the best way to share and build research. This question is perhaps most visible in the digital humanities, where much of the scholarly work–building databases, applications, etc.–bears only a secondary relationship to articles (i.e. you can also write articles about the work). For example, in my own nascent field of digital rhetoric (or new media rhetoric sometimes), the ostensible goal (briefly put) is to understand how emerging digital media technologies participate in rhetorical/compositional practices across the spectrum of cultural spaces (my own tendency has been to think about pedagogical/scholarly spaces). Said differently, it seems fairly obvious to me that people struggle regularly with living/communicating in digital spaces. Can humanistic research help to address that problem? No, it ain’t saving-the-world kind of stuff, but it’s a living. Anyway, my point is: does my publishing a couple of 5,000-8,000-word essays on the subject every year really represent the best way I can go about meeting this problem? That’s the kind of question you’d have to ask about your own research.
The most important thing to realize here is the way that the genre in which you are writing shapes the nature and scope of your research. In the end there is no natural way that scholarly work should be done. We can seek out the obvious joints where knowledge might be divided, but those joints have not formed through some transcendent process. That is, if we had something other than the journal article as our primary genre, we’d have different articulations, different joints. At the same time, one cannot, shouldn’t really, ignore the communities that have developed.
So let’s say that I started with the approx. 1,000 faculty and grad students that make up the computers and writing community. (I’m just taking a guess at that number, but you might ask yourself how many people make up your specialization.) Those are the people that make up the primary audience for the journal articles we write. Right now, there are maybe a half-dozen journals primarily focused on this field (e.g. Kairos, Computers and Composition). Then there are a couple dozen journals, I’m guessing, within the larger discipline of rhetoric and composition where one might find computers and writing articles (e.g. CCCC, JAC, Composition Studies, etc.). Finally there are also journals that are more rhetoric than composition oriented (yes, the two are different, most notably in that rhetoric as a discipline tends to be less pedagogically oriented and includes more faculty from Communications departments). That’s another set of journals where one can certainly find digital rhetoric being published. I would say that computers and writing is more composition-oriented as a field, but there are digital rhetoricians, such as myself, who overlap. Of course there are a large number of potentially overlapping fields, but I want to focus on these approx. 1,000 people. Is there a better way for 1,000 people to communicate with one another than across a couple dozen journals? The answer is obviously yes. It is as obvious as the remark that we don’t publish to communicate; we publish to get credit. That’s a problem we can only solve by deciding to solve it, so I’ll set that one aside.
There’s a more important point here, which is that if there were a common platform for every scholar and student in my field, then I think it would become clear that continuing to publish in this article genre doesn’t make a tremendous amount of sense.
If the 1,000 in my field were on the same platform, what might we do better than we do now?
- Collaborate on undergraduate and graduate curriculum
- Serve as a real-time community for those students
- Form research groups to respond to exigent circumstances (e.g. MOOCs)
- Build and study large data sets
- Construct disciplinary tools
- Speak as a collective voice to others beyond our discipline
None of this would mean that we wouldn’t publish or that the publications wouldn’t be peer-reviewed. And if one wants to argue that one should publish outside the silo of one’s field/community, then I agree. It doesn’t happen much now, but I agree it should. A platform like this might actually make those external messages more powerful.
But we can think about this at an even larger scale. Both NCTE and MLA report having 30K+ members. Both, by coincidence, are about the same size as my university (if you count students, faculty, and staff). Also, by coincidence, this is roughly the number of folks working for Google. Obviously in a corporate structure it is easier to get everyone moving in the same direction, and in academia we have a strong history of intellectual independence. But there are really two sides to that independence. On the one hand there is “academic freedom,” which is designed to protect scholars from political backlash and perhaps from the pressures of the marketplace. On the other hand there was an unavoidable independence created by our relative isolation (which is what necessitated journals and conferences in the first place). The second is no longer relevant. And regarding the first, individual academic freedom has always relied on collective action, on the group affirming the protections of the individual, and thus has always been tempered by things like peer review and tenure cases. The purpose of the journal/conference is to create a sense of discipline/community, to foster paradigms: in other words, to get us moving in the same direction as part of the same conversation.
One could imagine MLA Commons as a step in this direction, or CUNY’s Commons-in-a-Box, which is the platform on which it operates. Clearly the technology is there. It’s simply a matter of having the will to move in this direction. But who can imagine a discipline, or a university, of 30K working together in such a fashion? Is that what this is? A failure of imagination? Or do we really believe spending years writing a book that sells a couple hundred copies is a better use of our time? Or spending $1500 to deliver a paper to a dozen people? Or writing essays for collections or journals that never get cited? Which approach seems to make more sense for communicating to the larger public the value of the work we do?
I was catching up on three recent posts by Jason Jackson (here, here, and here) via the DH Now RSS feed that all deal with the subject of scholarly publishing, access, and copyright. Jackson opens up a thoughtful, practical conversation that begins with a consideration of a la carte pricing for digital access to journal articles but then moves into some broader areas as one tries to understand these pricing strategies as more/other than ruthless profiteering. My own take, which is less immediately pragmatic, is to situate these conversations within a larger cultural effort to learn to live and communicate in digital networks.
So we know the historical contexts for copyright, beginning in the US with the Constitution and moving through contemporary laws that seem quite clearly designed to protect Mickey Mouse, though certainly other large media interests also benefit. We should also stipulate that scholarly publishers contribute labor and capital to the publishing process. They add value, as perceived by their consumers, in their vetting, editing, archiving, organizing, and distributing of scholarship. If we value these things, then we shouldn’t mind paying for them. It doesn’t seem that open access advocates have much issue with the book publishing industry. I think it is safe to say that most academics I know want books to thrive. They want publishers to thrive; indeed, they wish there were more publishers. And they don’t seem to have much problem with paying for books. Of course there are some academic books, like textbooks, that are a cost concern. But when we think generally of books at a bookstore or on Amazon, cost and access are not a central concern.
As I think we also know, the central problem with humanities scholarly publication is that the audience is so small. Few people want to read these articles, even when they are freely available. The more hidden side of this equation is with the authors: who wants to write these articles? Perhaps most professors would raise their hands. Let me rephrase: if you weren’t in your job, would you write these articles? That’s harder to answer. My point is that publishing articles is part of your job; it gets you tenure, promotion, and other reputational credits. The similarity to the first-year composition class is surprising. Authors write texts to complete a transaction that is only arbitrarily related to the composition, for an audience that only reads the texts as an obligation of their job. Understandably, no one wants to pay money to do their job. I don’t want to pay to publish my work, and I don’t want to be required to pay to access scholarship for the purpose of doing my job. I might elect to buy a scholarly book, but I don’t want to be required to do so. And I do not face such a requirement. I have not yet had to subvent any publication of mine. And if I am patient, I can get access to any scholarship I need through my university library.
But if I am not paying then somebody has to.
The question is: who benefits from scholarly publication in the humanities? What is the kairotic moment it addresses? What does it hope to achieve? The bottom-line answer appears to be that colleges and universities benefit from publication, not because of any particular knowledge that is disseminated but because publication is tied to reputation. Setting aside questions of reputation and tenure, what would happen to the world if every humanities journal and every humanities monograph series ceased publication for five years? I am not suggesting that faculty would cease doing research, only that we would have to rethink the rhetorical contexts in which we do our work. What purposes would we want to achieve through writing? Who would our audience be? What genres would we select? What modes of delivery might we explore? At the 1884 MLA convention, there was a discussion that resulted in the inception of PMLA. It was decided that “Whatever should be done to bring us nearer together and give us a sense of centralized power, this Journal idea was thought to be of the greatest importance, as through it every man could have a chance to make his views known, and to have them criticized by the body at large” (1884, v). Those still seem like good, basic motives, but I’m not sure that “this Journal idea” is still the right means, either in print or as its digital skeuomorph.
I’m not going to tell you that I have the right answer. I will say that the production cycle of journal publishing, print or digital, seems outdated. The most difficult challenge lies in recognizing how our research paradigms are interwoven with an activity system and technological infrastructure that is outmoded. That is, the way we conceive of a research project is tied to the idea of writing a journal article of an arbitrary length imposed by some printing restrictions that no longer apply. The amount of editorial/publishing effort and cost means a research project must fit some Goldilocks zone. Changing how we publish would change how we research: from the questions we ask to the methods we use and the ways that we collaborate. So here’s the odd thing. I don’t want to pay to read an article, but I would pay to access a communication platform that would connect me with my audience and with other scholars in a way that would facilitate my research, both in terms of readership and collaboration (two states that become increasingly mixed). In fact, I already do pay for that, every month as part of my Verizon FIOS bill and every year when I pay for this website. That’s what a journal is, right? A communication platform? Could we imagine one that was more lively, more interactive, more collaborative?
What this comes down to for me is that the issues with the viability of scholarly publication (in the humanities at least) are heavily tied to the attenuated rhetorical purpose of the activity, a purpose further weakened by the inertia of legacy genres and publishing practices that have struggled to adapt to the digital era. Publishing will always cost money. But let’s pay for something we value.
In January, I’ll be attending the AACU’s eportfolio forum in DC as part of Buffalo’s ongoing reform of general education. There was a lot of interest in the idea of the portfolio as a mechanism for reflection and integration across the curriculum, so they’re sending a couple of us down to this day-long event. Not surprisingly, being in rhet/comp, I’ve been doing portfolios on and off for more than 20 years. We’re doing high-stakes (50% of the grade) portfolios in our composition program right now, and when I was at Cortland we did culminating portfolios for our professional writing majors. That said, I won’t claim to be an expert inasmuch as I haven’t done research or published on portfolios, but I am familiar with the research, at least as it pertains to my field. I’m hoping the forum will begin to fill in a larger picture of how portfolios will work across the campus.
So here’s the question I want to ask at this forum: how do we use eportfolios to foster kairos and purposeful communication among students and faculty?
I think everyone understands that an eportfolio can be an archive and a variable presentation of an individual’s work. The variability allows for the presentation of different documents toward different ends, but it strikes me that those ends are almost always evaluative and transactional. That is, a portfolio is a tool for evaluating one’s performance in a class, in a general education program, or in a major. Alternately, it might be evidence for a grad school or job application. On a less individual level, it might become data in institutional assessment. These are all good and useful ends. However, in some ways they are at odds with the many rhetorical purposes of the elements included in the portfolio, as well as with the pedagogical aim of the portfolio itself, which is to strive for a reflective, integrative experience that brings together the range of activities that produced the elements of the portfolio in the first place.
Developing a sense of purpose in student writing, beyond the transaction of completing an assignment, is very difficult. We’ve all struggled with the “imagine an audience/purpose” assignment. Rebecca Schuman’s latest troll-bait on Slate complaining about student essays is, in my reading, about the failure to create a sense of purpose in writing. This is a failure that falls on students and faculty alike, and it doesn’t end in composition or with general education; these problems persist into majors. And it is not as if we believe that students get a degree and magically become endowed with a sense of purpose in writing. Kairos and purpose can be a challenge professionally or in graduate school. The thing that most changes is that as we progress we become immersed in systems that bring with them their own sets of purposes and objectives, their own timelines and kairotic moments. For example, if I am on a committee reforming general education that produces a report, purpose and kairos are built in to some degree. And you could say that I am sitting on that committee to achieve the transaction of doing my job, or of proceeding in some modest, incremental way toward promotion, and that my role in writing that document is, in that regard, not so different from the student handing in a paper in an FYC class. But really that transactional element is far overshadowed by the other purposes.
So that returns me to my question: how can the portfolio be purposeful beyond the curricular transactions it mediates?
One way of answering that question, it seems to me, is to think of the blog as an analogy to the portfolio. People use blogging platforms for portfolios, so clearly there are some operational similarities. The blog is an archive, a database. It values currency primarily, though you also have categories/tags. You could easily set up a blog to show your “best work” or your best work for a given audience rather than your most recent work. The difference is fundamentally the purpose. I am writing this post here and now because I am interested in thinking through this topic and I’d like to share my thoughts with you in the hopes of some future conversation. I am not posting this here and now to say “Look at me! I should be accredited,” which is the dominant purpose of the portfolio. In part this is a reader/audience problem. I probably couldn’t get an audience to read this blog for accrediting purposes if I wanted to, and the portfolio suffers from the opposite condition. Who will read the student portfolio without being paid to do so? Who will read it for a purpose other than evaluation, certification, or assessment? How do we shift this rhetorical situation?
Given the blog analogy, I think it is clear that the technical means exist. We can deliver portfolios to a public web audience (or some subset thereof). We can use search, tags, and categories to connect portfolio content with readers interested in those contents. The challenge has to do with the rhetorical position of students in their activity systems because, in the end, we don’t value student writing as communication, as purposeful, beyond the grade. And if we don’t, then who will? Furthermore, it isn’t simply a matter of “valuing.” I can’t just put my hand over my heart and pledge to value student writing. I can sit in my office and write a report recommending changes to our general education program, and that writing will have no purpose. I need to be part of that committee to engage in that purpose. Students need to participate in systems where their writing can contribute to objectives before that writing has a chance to be purposeful.
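The technical half of that claim really is the easy half. As a minimal sketch, assuming portfolio entries stored as simple records with tag lists (an illustrative data model, not any actual eportfolio product’s schema), connecting work to interested readers is a few lines of indexing:

```python
# Sketch: index portfolio entries by tag so a reader can find student
# work by topic rather than by course or evaluator. The entry format
# is an illustrative assumption.
from collections import defaultdict

entries = [
    {"student": "A.", "title": "Water policy brief", "tags": ["policy", "environment"]},
    {"student": "B.", "title": "Oral history project", "tags": ["history", "community"]},
    {"student": "C.", "title": "Lake Erie data essay", "tags": ["environment", "data"]},
]

index = defaultdict(list)   # tag -> entries carrying that tag
for entry in entries:
    for tag in entry["tags"]:
        index[tag].append(entry)

# A reader interested in environmental writing, regardless of author:
for entry in index["environment"]:
    print(f'{entry["title"]} ({entry["student"]})')
```

The hard half remains the rhetorical one: getting anyone into the position of that hypothetical reader in the first place.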
So can the eportfolio foster such a system? That’s my question.
I was having a conversation the other day about general education, and we were talking about the desire for students to become responsible and open-minded people. Sure. Why not? Of course, we wouldn’t want to suggest that students arrive as irresponsible and close-minded, so maybe what we mean is some advancement along the line to becoming “more” responsible/open-minded. My response was to wonder when the state of “open-mindedness” is reached and if it then becomes a permanent condition. In my view, open-mindedness is not a characteristic that I particularly associate with academics. If anything, because we are trained to be skeptical, and because of the myopia that results from the single-minded pursuit of some esoteric subject matter, academics tend to be less open-minded. Now perhaps that characterization turns being open-minded into a deficit rather than a benefit. In the context of this general education conversation, being open-minded referred specifically to those issues we associate with diversity, so it means something like being accepting of those who are culturally different from oneself. I suppose we can have a debate over whether or not universities are particularly open-minded in their treatment of cultural differences, as we might discuss the open-mindedness of the faculty and staff in less institutional/bureaucratic terms. I think we’d probably decide that higher education is relatively more open-minded than many other spaces in American culture. Maybe not.
That’s not the conversation I want to have today though. Instead, I am interested in a more capacious definition of open-mindedness and cultural differences. At least since C.P. Snow, we’ve recognized cultural differences that are specific to universities (all institutions have their own cultures). Not only do disciplines/departments have rivalries, but most departments have internal disciplinary differences. I mostly experience these things from my position in rhetoric and in English departments. In these spaces, it is acceptable to be close-minded about STEM or business or other professional schools. It is acceptable for rhetoric and literary studies to be close-minded about one another. It is acceptable to be close-minded about technological innovations and their impact upon teaching and scholarship. Now perhaps the objection is to say “We aren’t being close-minded; we’re being skeptical.” Maybe, but skepticism is about reserving judgment. So, for example, we might say “MOOCs haven’t arrived yet. Maybe teaching 100,000 students is possible. Maybe not. They require further study and experimentation.” That’s a healthy dose of skepticism in my view. Is that what you hear people saying? Sure, in some places, but that’s not the dominant tenor of these conversations. You’ve probably seen the recent NY Times article regarding Sebastian Thrun’s recanting of some of his claims about the future of MOOCs. Such articles might be seen by some as the beginning of the end for MOOCs. No doubt there’s been a ton of hype. However, I don’t think the rejection of MOOCs has ever been about the technology itself.
There is a tension between the value of open-mindedness and the insistent myopia of academic research. It remains to be seen if it can be made a creative tension. As I have said here and elsewhere in regard to MOOCs, but really in regard to all digital media questions about pedagogy: if what one wishes to do is create the experience of sitting in a lecture, reading (text)books, and taking tests/writing essays, then probably there isn’t a better way to have that experience than actually having that experience. If the goals of one’s curriculum are deeply embedded in the affordances and constraints of traditional learning spaces, then one is likely to find that those traditional spaces work best for achieving those goals. Digital pedagogy is ultimately about redefining the experiences and goals of learning. Should we let technology drive the curriculum? Certainly not. And that applies to books, chalk, and word processors as much as it does to MOOCs, mobile phones, and media production software. But here is where that myopia arises. Digital media technologies impact disciplines in different ways. I would argue, and have argued, that they are paradigm-shifting for rhetoric and much of the humanities. I also believe they are paradigm-shifting for pedagogy across the campus, which means they change some of our fundamental ideas about how we learn and what we should learn. This is not easy to respond to as an educator, and there is inevitable blowback into one’s research as well. Certainly in English, where research methods remain print-bound, any shift toward digital pedagogy creates tensions.
For these to be creative tensions, they have to move in two directions. One has to ask what one’s legacy research and pedagogy might contribute to digital pedagogy. That is, if/when one can extricate something of one’s practice from the print culture conditions in which it developed, one can bring that to bear on shaping a new digital pedagogy. And conversely, one has to ask what digital media could contribute to one’s research and pedagogy. It’s not objectionable to take up the tools available in the pursuit of a research question or pedagogical goal. After all, that’s what our predecessors did when they pawed through card catalogs and print indexes, climbed the library stacks, and banged out articles on typewriters. In the midst of such paradigm shifts we may discover that the old cultural boundaries we have maintained, and the value judgments we have assigned to them, no longer pertain. It may not be possible to replicate our traditional disciplinary paradigms in the digital media world. If so, then one must choose between holding on to traditional values and judgments and shaping new communities and paradigms. Perhaps it is not so easy to decide where open-mindedness exists in that context. Does tolerance include being tolerant of intolerance? To be open-minded, must we be accepting of those who are closed-minded? Or is open/tolerant and closed/intolerant really about who agrees with us? I hope not. Building something worthwhile out of digital pedagogy, massive or otherwise, will require some open-mindedness about the kinds of experiences we value and the goals we want to achieve. It might require asking new/different questions and being open to new methods for pursuing those questions.
Maybe that’s the open-mindedness that general education wants to instill in students, but I suggest starting with the faculty.
Clearly there are majors, primarily in professional schools and in the sciences, that certify students as being capable of doing certain things, of having some requisite know-how. Nurses, teachers, accountants, and engineers are all clear examples. Students with science degrees who go on to work in some capacity in labs might be another set of examples. However, there are a good number of students across the humanities, arts, social sciences, and even some business areas for whom that is not the case. True, every major prepares students for graduate study in its discipline, but for many students that’s not a concern. They aren’t getting degrees in psychology or English to prepare for graduate school, at least not in the way that nursing or chemical engineering students are getting degrees to prepare for jobs in those fields.
Now, if one looks at the learning outcomes for BA degrees from sociology and communications to English, history, and philosophy to media study and visual design, would anyone be surprised if teaching students to write/communicate, think/read critically, and conduct research were heavily featured on every program’s list of outcomes? No doubt there will also be something about the content and methods of the discipline (e.g., knowledge of literary history and literary criticism in the case of English). But my contention is that those outcomes are of limited value to undergraduates in comparison to the “soft skills” that are promised. This does not mean that the degrees are interchangeable in terms of the soft skills they offer, though, even if those skills might sound alike. Writing, thinking, reading, and researching turn out to be shaped by the disciplinary activity system in which they operate: literary critics do not write, read, or research like sociologists. These differences turn out to be the basis for an argument for majors: students get a sustained, in-depth experience with a particular (disciplinary) way of thinking. Maybe. All depths are relative. And ultimately what I’m heading toward here is not a free-for-all jumble of courses but simply a curriculum that is non-disciplinary.
Clearly there are some colleges, typically small and experimental, where such curricula are commonplace. And virtually every college offers some mechanism for a roll-your-own degree. But it is even more clear that these are exceptions to the rule. There are any number of disciplinary-institutional-historical and bureaucratic-pragmatic reasons for having majors. As compelling as those reasons are, it is just as interesting to realize that majors are historically and bureaucratically contingent. They aren’t necessary. So the question really should be: what is the cost/benefit of having them?
I do think there is value to a programmatic education, to being able to build from one semester to the next. At the same time, there is something powerful in the idea of a curriculum that does not take as its primary obligation the disciplining of students, that is, providing students with some introductory conception of a discipline. This is familiar to many rhetoricians. We don’t tend to teach undergraduate courses as an introduction to our discipline. Where there are majors, they tend to be in technical/professional writing; that is, majors that have been designed by asking the question, how might an undergraduate make use of rhetorical knowledge? We could still ask this question without answering it in the form of a 36-credit (or so) major. We could answer it in 9-credit segments, for example, or in thematically integrated learning communities. This is not an argument for such a curricular revision so much as an argument that such structures are possible and worthy of consideration. We have little basis for arguing that such structures would be better or worse than our current curriculum. Undoubtedly, they would produce different results. One wouldn’t make such changes in an effort to do a better job of what we are currently doing; the point would be to pursue different intellectual goals and to design a curriculum better suited to those ends.
The major obstacle to even considering such changes is the mental trap of disciplinarity itself. Disciplines want to live on, and members of disciplines want their disciplines to live on. I am not suggesting the erasure of disciplines. No matter what we do, assemblages, paradigms, activity systems, networks, and such will form and operate; disciplines are a part of that. I am only suggesting a shift in the way disciplines interact with undergraduates through the curriculum. I am suggesting that if disciplines are serious about the soft skills they purport to deliver (and often tout as the primary value of their majors), then they might think more creatively about how they offer those skills to students. Ultimately I think a curriculum like this might be more attractive, and more valuable, for undergraduates than many of the disciplinary silos in which they currently operate.