Digital Digs (Alex Reid)
As Steve Krause has noted, and as has been discussed a fair amount recently on the WPA-list, there is reason to be concerned with the growing role of machines in grading writing. There is a new site and petition (humanreaders.org), and I have added my name to that petition. So it should be clear that I fundamentally share the concerns raised there, because I have confidence in the research behind this position. Essentially the point is that current versions of machine grading software are not capable of "reading." What does that mean? It means that machines do not respond to texts in the way that humans do. It is possible to compose a text that humans would identify as nonsensical and still receive a high score from a machine. Machines can be trained to look for certain features of texts that tend to correlate with "good writing" from a human perspective, but those features can easily be produced without producing "good writing." The upshot, given the high-stakes nature of many of these tests, is that students will not be taught to produce "good writing" but rather writing that scores well. The horrors of teaching to the test are a commonplace in our culture, so there's no need to take the argument further.
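To make that gameability concrete, here is a deliberately naive sketch of the feature-counting approach critics describe. Everything in it (the feature list, the weights, the ten-point scale) is invented for illustration; commercial systems' feature sets are proprietary and surely more elaborate. The structural point survives the simplification, though: the score rewards surface features, not sense.

```python
import re

# Invented stand-ins for the surface features such systems are said to
# reward; no actual vendor's feature set is public.
TRANSITIONS = {"however", "moreover", "furthermore", "consequently", "therefore"}

def toy_score(text: str) -> float:
    """Score an essay 0-10 from surface features alone."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if not words:
        return 0.0
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_sent = max(len(sentences), 1)
    avg_sentence_len = len(words) / n_sent          # longer reads as "mature"
    vocab_diversity = len(set(words)) / len(words)  # type-token ratio
    transition_rate = sum(w in TRANSITIONS for w in words) / n_sent
    # Arbitrary weights: each feature pushes the score up regardless of
    # whether the text makes any sense.
    return round(min(avg_sentence_len / 25, 1.0) * 4
                 + vocab_diversity * 3
                 + min(transition_rate, 1.0) * 3, 1)

# A fluent but nonsensical sentence outscores a plain, sensible paragraph.
nonsense = ("However, the perspicacious zebra consequently refutes "
            "epistemology; moreover, turnips deliberate furthermore "
            "beneath quixotic parliaments of unverifiable marmalade.")
sensible = "The essay is short. It makes one clear point. That is enough."
print(toy_score(nonsense))  # high score
print(toy_score(sensible))  # low score
```

Any writer (or test-prep course) who reverse-engineers those weights can produce high-scoring nonsense at will.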
And yet, of course, you would not be reading this if your computer (or phone) didn't read it first. If you have arrived at this page via Google, then there have been several levels of machine reading that brought you here. If it seems that Google and other search engines do a fairly good job of finding reliable texts on the subject in which you are interested, then it is because, by some means, they are good readers. No doubt, part of Google's system is reliant upon human evaluators who link and visit pages, including perhaps your own preferences. The same might be said of human readers. How did we figure out what "good writing" was? Do we not rely upon social networks for this insight? In its crudest form (and closer to assessment), don't we "norm" scorers of writing for assessment purposes?
Anyone who has ever done search engine optimization has written explicitly for machines. One of the things that makes SEO tricky, though, is the secret, proprietary nature of Google's search algorithm. Unlike these machine grading mechanisms, Google's search rankings are not easy to game. Perhaps what is required for machine grading is a more complex, harder-to-predict mechanism. In other words, while machines do not need to read in the same way humans do, they might need to simulate the subjective, unreliable responses of human readers in order to serve our purposes. That last sentence encapsulates two potential errors we encounter in our discussions of machine grading.
1. Because machines don't read the way humans do, they don't understand the meaning of the text. Critics complain that machines can't recognize when a text is nonsense or counterfactual. (One might say the same of humans anyway.) On what basis do we claim that humans are the arbiters of sense? Only on the basis that we care solely about what humans think or, from a correlationist perspective, that we can only understand texts in terms of ourselves anyway. Sometimes we don't understand why machines grade texts the way they do, but we don't say that machines are subjective, which is what we say when human readers disagree. Instead, we say that machines produce error. I say that machines are readers too. Maybe they aren't the readers we want to score our tests, but then we wouldn't want a room full of kindergarteners either. So being human is no guarantee of reliable scoring.
2. Good machines would simulate human readers. This is our basic premise, right? That a machine would give the same score as a human to a given text. That is, we recognize that machines and humans will never read the same way but we need them to provide the same output in terms of scores. This would be like a calculator. A calculator doesn't do math like I do, but it gets the same answer. To make this happen we black box both the human and the calculator: the process is irrelevant; only the answer counts. But that's not really a good analogy for the scoring of human writing.
Unlike calculable equations, there is no right score for a text. What human scoring processes demonstrate is that reading takes place within the context of a complex network of actors that serve to create "interrater reliability" and so on. We begin with the premise that humans typically will not agree on the score for a text, even when you take a fairly similar group of readers (e.g., composition instructors teaching in the same department) and writers (e.g., students in their classes). Those readers are already conditioned to a high degree, but then we add specific conditioning through the norming process and the common conditions in which they read. Add to that various social pressures, such as recognizing the seriousness of the scoring and the pressure to grade like other readers so as to reduce the amount of work (discrepancies in scoring lead to additional readings and scorings).
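For readers who haven't sat in a scoring session, here is a minimal sketch of how that conditioned agreement is typically audited: exact agreement, adjacent agreement (within one point), and the discrepancies that trigger a third reading. The 1-6 holistic scale and the one-point-discrepancy rule are common conventions rather than a description of any particular program, and the scores themselves are invented.

```python
# Two normed raters scoring the same ten essays on a 1-6 holistic scale
# (invented data).
rater_a = [4, 3, 5, 2, 6, 4, 3, 5, 4, 2]
rater_b = [4, 4, 5, 4, 5, 4, 2, 5, 3, 2]

pairs = list(zip(rater_a, rater_b))
exact = sum(a == b for a, b in pairs) / len(pairs)
adjacent = sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)
# Discrepancies of more than one point go to a third reader -- the
# "additional readings and scorings" mentioned above.
third_readings = [i for i, (a, b) in enumerate(pairs) if abs(a - b) > 1]

print(f"exact agreement:    {exact:.0%}")     # 50%
print(f"adjacent agreement: {adjacent:.0%}")  # 90%
print(f"essays sent to a third reader: {third_readings}")  # [3]
```

Note what the arithmetic quietly concedes: "reliability" here is agreement among already-conditioned readers, not a measurement of anything in the text itself.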
Scoring is not an objective, rational process. Once one abandons the flawed concept of intersubjectivity (the consensual hallucination that we share thoughts when we agree), one has to come up with another explanation for why two readers give an essay the same score, and that explanation, in my view, would involve an investigation of the actor/objects and network-assemblages that operate to produce results. We can complain that machines don't recognize meaning, but that's only because meaning isn't in the text. This has always been the flaw in any form of grading. We evaluate students based upon what their texts do to us as readers. The only reason students have any power to predict what our experience will be is that they participate in a shared network of activity: a network over which they have little control.
So to go back to the original problem of machine grading, I would say that we need to ask what it is that we are trying to determine when we are grading these exams. Do we want to know if students can produce texts that have certain definable features in a testing situation? Do we want to know if students will get good grades on writing assignments in college? Or do we want to know, more nebulously, if students are "good writers"? I think we have proceeded as if these are the same questions. That is, good writers get good grades in college because they can produce texts with certain definable features. But that's not how it works at all, and I think we know that.
In case we don't, just briefly... Good texts don't have certain definable features, because the experience of "good" doesn't inhere in the texts we read. This doesn't make the process subjective in the sense of one's reading practice being unpredictable or purely internal. It just makes reading relational. One way of defining rhetorical skill is having the ability to investigate the networks at work and produce texts that respond to those networks. We object to the notion of training students to compose texts that will produce positive responses from machines, but we also object to the notion of training students to compose texts that produce positive responses from normed human scorers.
The real problem, though, is starting with the pedagogical premise that teaching writing means teaching students to reproduce definable textual features without understanding the rhetorical and networked operations underneath. Because what we discover from machine readers is that we can compose texts that have those textual features but are ineffective from our perspective. This is a discovery we have already made a million times, though, as we have all seen many students who diligently replicate the requirements of an assignment and still manage to produce unsatisfactory results. Why? Because they have produced those features without understanding the rhetoricity behind them.
Machines are perfectly good readers. That's not where the problem is. The problem is that we don't understand reading.
The Atlantic Monthly has an article this month, "Anthropology Inc.," that examines the ethnographic work of corporate anthropologists (a contentious term in itself, at least for academic anthropologists). The article focuses on a single company, ReD, and one of its co-founders, Christian Madsbjerg.
Madsbjerg had a list of clients desperate for Heideggerian readings of their businesses. The service he provides sounds even more improbable to a scholar who knows his Heidegger than to a layperson who does not. Many philosophers spend their lives trying and failing to understand what Heidegger was talking about. To interest a typical ReD client—usually a corporate vice president who is, Madsbjerg says, “the least laid-back person you can imagine, with every minute of their day divided into 15-minute blocks”—in the philosopher’s turgid, impenetrable post-structural theory is as unlikely a pitch as could be imagined.
But it’s the pitch Madsbjerg has been making. The fundamental blindness in the sorts of consulting that dominate the market, he says, is that they are Cartesian in their outlook: they view objects as the sum of their performance and physical properties....
To sell the ReD idea—that products and objects are inevitably encrusted with cultural meaning, and that a company that neglects to explore social theory is bound to leave profits on the table—Madsbjerg has evangelized with great success, giving what are surely the only successful corporate sales pitches salted with words like hermeneutics and phenomenology.
I suppose my first response is to chuckle. Does Heidegger really work this way? Or is this more like a familiar touchstone in a sales pitch? In a sense, Madsbjerg is making the anti-object-oriented argument. That is, what he takes from Heidegger is a recognition that human knowledge of objects is highly subjective and that it is this subjective encrusting of meaning and value that corporations need to understand to sell their products. However, it strikes me that Madsbjerg is only seeing half the picture if he focuses solely on the relations from the human side. If we are to believe, as the article suggests, that people develop attachments to vodka based upon the stories they can tell rather than upon "objective" qualities, then do we not need to consider the role of the vodka in the composition of that story? What is it about these objects that creates these affective bonds?
I'm not sure this is as terribly new as the article suggests. I am reminded, somewhat, of coolhunting and characters like William Gibson's Cayce Pollard in Pattern Recognition, who, if I recall correctly, tracks down the first kid to wear his baseball cap backwards. Maybe it is new, though, to employ these ethnographic methods and to think about products from a Heideggerian perspective. Perhaps the alien phenomenology of the latest mobile phone could provide some insight into its capacity to enter into particular assemblages with consumers.
Meanwhile, the skeptical response of academic anthropologists is understandable. This ethnographic research is undertaken without any kind of ethical or professional review to protect participants. Some of it may be harmless, I guess, but certainly there is a potential for harm, particularly since the underlying purpose of any of these studies is to sell things to people like those being studied. As the article indicates, anthropology tends to be a leftist discipline, much like cultural studies or critical theory in that regard. So I imagine it is difficult for scholars in these fields to watch their purportedly revolutionary and emancipatory methods being used for capitalist purposes. And yet, should we be surprised that if a method provides real and useful knowledge about how markets and consumers interact, corporations want to make use of it? That's just another place where arguments about a theory's politics fall short. Any method can be put to just about any political purpose.
An object-oriented marketing research method would presumably put humans and their nonhuman products in a flat ontological space and speculate on their relations and phenomenological responses. Clients may be primarily interested in human agency and choices, but a better ontological understanding of their products, seen through their interactions with humans, might be useful. I'm not looking to start such a firm, but it makes as much sense to me as a Heideggerian one.
I was an undergrad in the pre-Internet days (late 80s, early 90s). At Rutgers even the library catalog was still paper-based in those days (i.e. a card catalog)... if you haven't ever seen a card catalog, I suggest that you google it. I'm sure there were a lot of books in that library, and paper versions of journals: I imagine there was more paper in that library then than there is today.
I remember writing an essay in my composition course in response to having read Plato's allegory of the cave (an excerpt from The Republic). As I recall, I spun a couple pages of BS about the reading in relation to John Lennon's "Imagine," all written on an electronic typewriter (and yes, you can google that too). But let's say I didn't have a good grasp on Plato after half-sleeping through my 8am MWF class. I could go to the instructor/TA or a classmate, I guess. If I went to the library, I could look in the subject catalog for Plato and see what books there were. I could go to the reference area and look up Plato in the Encyclopedia Britannica. I could try finding a journal article on Plato by looking through all the paper-bound annual bibliographies (the paper equivalent of searching JSTOR). Or, my personal favorite, I could look in the paper Periodical Index, which might lead me to some microfiche copies of newspapers or magazines. If I was really desperate, I could try to find a librarian.
Not surprisingly I spent very little time in the library. My English major was almost entirely "New Critical," so it was all about close readings of primary texts. I recall writing one researched paper as an undergrad for English. I believe I wrote two for my History major. But really most of my classes were far too large to permit much writing anyway; it was mostly in-class essay exams. However, setting my experience aside, the point I want to make is that in the 80s, when our contemporary disciplinary notions of "the" writing process were being developed, research was difficult and time-consuming. In the example above, I probably would have found my way to an encyclopedia or maybe some intro to philosophy text to help me understand what I had read, but I doubt I would have found much conversation about it. At the time I doubt I even knew that the allegory of the cave was part of a larger text, let alone what that text was about or its historical-cultural context. As a result, invention was a far more internal process, and this is what we classically see with the writing process: freewriting, brainstorming, and so on are all largely internalized heuristics where writers draw upon their own memories and perhaps the text that is immediately before them.
Today, writing begins with Google or, more broadly, with a socially mediated search of the web. For example, I was working on a book chapter yesterday, and I got to a point where I wanted to write about research into working memory and writing. In the good-old, bad-old days, I would have driven to the library and probably spent at least one full day walking around the stacks, looking through books, photocopying journal articles, and, of course, filling out interlibrary loan forms. It would have taken days, if not weeks, to get my answer. And at this point, I wasn't even sure if this was a direction I wanted or needed to take in my chapter. I did a little Google searching to refine some of my search terms, to see how the conversation operated. I went to CompPile, because I knew that was a good database of research in writing, and I found some articles there that were available online. I also did some searching on cognitive science and its relation to activity theory, which eventually led me to some journal articles that were available through my library's online databases. So I found a dozen decent references, which I downloaded as PDFs and quickly scanned while searching for specific key terms.
In short, I was able to write and research on a very tight recursive loop, where invention and research were tied to an already existing big picture of where the chapter was going. Writing in this way takes a fair amount of experience to do well. It's the thing I have most learned to do through writing this blog. However, inexperienced writers use a version of this process as well. As instructors we already know that students turn quickly to Google when given an assignment. I often hear this fact bemoaned. We bemoan it because we have a different, antiquated notion of how to teach invention in the writing process. We also worry that it leads to plagiarism. And there is no doubt that plagiarism is a concern, but it is partly a concern because we need to help students understand how to use Google for invention.
For me, this is part of a larger picture of recognizing writing as a networked activity of distributed cognition involving technologies, organizations, and other nonhuman objects (as well as other humans). The network in which college students write has obviously changed a great deal in 25 years, but we still teach them the same basic process. I've been reviewing a lot of textbooks recently, and some rhetorics are making a move toward changing their perspective. However, I imagine that many are hamstrung by the expectations of writing instructors (or at least the way publishers imagine those instructors). It's hard to steer the giant aircraft carrier of composition on a national scale. Indeed, it requires a complex network of distributed cognition, as Edwin Hutchins famously explored. And if you don't get that reference, you can google it (or spend a day at the library, where the reference librarian will probably look it up online for you if you get stuck).
Steve Krause writes that the recent EDC-MOOC (in which we both participated) was "meh." I agree. But you know what else was meh for me? School. K-12. Undergrad. Grad. meh. meh. meh. I was never very good at playing the role of the student. Obviously my grades were fine, so I could play the game well enough, but I never found coursework particularly interesting or relevant. I could offer you a litany of the mediocre experiences of my undergraduate education, but I doubt that would be of much interest. However, I will say that if a MOOC amounts to a curated list of readings, some lectures, a largely unsupervised discussion among students, and a testing mechanism, then this isn't really any different from what I had in most of my courses as an undergraduate. The only real difference would be that my unsupervised discussion happened FTF with friends I was taking the course with rather than with 1000s of people online. I learned that I could get decent grades with a very minimal amount of investment. I didn't value school, and hence it wasn't valuable to me. I can see that. But I can also see that I put my investment into other places, other kinds of environments. I learned a great deal working in a small business in the computer industry, composing music and building a recording studio, and writing independently of school.
So from my perspective, so what if MOOCs are meh? The whole enterprise is meh. There's no doubt that there are plenty of students who have been so heavily institutionalized that they can't succeed in a MOOC. They are like those lifers in prison who find themselves paroled and don't know how to live on the outside. And we know there's an economic element to this, as kids from poor school districts get the most regimented, institutionalized education and are typically the most at risk in college (and even more so online or in a MOOC). The students who will be most successful in MOOCs are the ones for whom school was only ever one learning site among many. They will be able to jump through the inane MOOC hoops and get on with their lives, confident that what one learns today in a MOOC doesn't really matter anyway. It's just a tiny part of a much vaster learning ecosystem. Of course that's hardly a ringing endorsement for MOOCs, but they don't have to be great. They're free. I have a daughter in high school now. Do I really want to pay for her to sit in general education courses for two or three semesters? Not really. I'll draw up a reading list and she can read it over the summer. If she has questions, she can find the answers online. And she'll learn much more on her own than she would sitting through some lectures and taking some bubble tests. Maybe she can even take a MOOC.
Clearly, though, universities depend on general education courses to make their economic model work. Either they cram 100s of students into a room, or they pay adjuncts minimal wages to teach the course. Or, as in English, they float their grad programs with TA-ships. None of those strategies is a ringing endorsement from my perspective as the one paying the tuition dollars. And that's not to say that those adjuncts might not be better teachers than some of the professors; few of the latter have ever received any pedagogical training. It's just to point out that universities seem to look to the low costs of general education courses to balance out the high costs of other kinds of courses: courses that are smaller in size, perhaps more technology-intensive, and taught by faculty earning more money, especially in the professional schools. It's even more crucial from the perspective of the humanities. Without general education, humanities departments are tiny and lose the means to offer graduate students TA-ships, which means tiny grad programs as well.
So it's not that MOOCs are great. Better-designed MOOCs might be on the horizon, but the ones we have already shine an uncomfortable light on general education. The basic premises of general education are that 1) every college student should have some baseline introductory knowledge that stretches from science and math to social science and humanities, and 2) a gen ed curriculum is the best way to deliver that knowledge. I don't know if either of those premises still holds. Certainly #2 does not. Today it seems more relevant to teach students how to fish rather than to give them the fish in a 50-minute lecture, especially since the 50-minute lecture is free online anyway.
The upshot of this is that the humanities need to start developing a new economic model, one that will not depend on 1000s of undergraduates taking composition or western civ or whatever. Could we create a better version of general education? Sure. I've talked about that before. But I'm not sure that even that will make a difference. The crucial thing we need to do is reinvent ourselves to connect with the values and concerns of our students and our culture. The MOOC, meh or not, means we can't imagine that we can just plod along as we always have.
Last weekend, we drove down to DC for a soccer tournament in which my son's team was participating. During the 16-hour trip there and back, we listened to an audiobook, Ready Player One by Ernest Cline. It's a sci-fi novel set in a dystopian future where the main characters spend most of their time in a virtual internet/game world called the OASIS (the Ontologically Anthropocentric Sensory Immersive Simulation). The novel's plot is largely an Easter egg hunt, with the prize being ownership of the OASIS itself. The hunt, developed by the OASIS's now-deceased creator, James Halliday, is filled with the nerdy 80s trivia that Halliday loved. I won't say it is a great literary achievement, but it was fun to listen to while driving through central PA. Elsewhere this weekend, I was also revisiting Levi Bryant's The Democracy of Objects, which I've been teaching in my Speculative Realism graduate seminar, and we spent some time on Monday discussing Bryant's concept of regimes of attraction.
This may seem like apples and oranges to you, but these are two sides of the concept of virtuality that I explored in The Two Virtuals. Briefly, as we know, the Deleuzian virtual is a monistic substrate. As Bryant writes, "Deleuze's constant references to the virtual as the pre-individual suggests this reading as well, for it implies a transition from an undifferentiated state to a differenciated individual. If the virtual is pre-individual, then it cannot be composed of discrete individual unities or substances. Here the individual would be an effect of the virtual, not primary being itself." The digital virtual is nothing like this. Perhaps virtual reality appears to be made of some monistic, malleable materiality, but it is, of course, code: ones and zeros, or, even better, voltage intensities across a circuit. Deleuze's real virtual lies even further beneath all that. In The Democracy of Objects, Bryant undertakes a motivated reading of Deleuze, identifying particular passages where he leans in a different direction, toward a more pluralistic, less monistic virtuality, which eventually leads Bryant to his concept of virtual proper being.
This got me wondering if virtual proper being might not be an ontology that is in some respects closer to the way that VR functions, or is imagined to function in a novel like Ready Player One. Bryant writes that
The virtual proper being of an object is what makes an object properly an object. It is that which constitutes an object as a difference engine or generative mechanism. However, no one nor any other thing ever encounters an object qua its virtual proper being, for the substance of an object is perpetually withdrawn or in excess of any of its manifestations.
And then later, he adds, "The virtual proper being of an object is its endo-structure, the manner in which it embodies differential relations and attractors or singularities defining a vector field or field of potentials within a substance." In the move from a common substrate to these virtual/hidden individual "generative mechanisms," this seems to me more like the generative mechanisms behind artificial life. Certainly it would be fair to say that the algorithms that drive such objects are only ever simulations or models of the singularities and attractors that operate virtually. I don't want to take this analogy too far. However, I wonder: if we could hypothesize some extra-dimensional position outside of the virtual-actual ontological circuit Bryant describes, would that position offer us a vantage analogous to that of the human user examining the code and manifestations of software objects in VR?
In any case, the development of virtual spaces offers a useful way to think about how regimes of attraction might operate. Bryant describes these regimes as follows:
Regimes of attraction should thus be thought as interactive networks or, as Timothy Morton has put it, meshes that play an affording and constraining role with respect to the local manifestations of objects. Depending on the sorts of objects or systems being discussed, regimes of attraction can include physical, biological, semiotic, social, and technological components. Within these networks, hierarchies or sub-networks can emerge that constrain the local manifestations available to other nodes or entities within the network.
In short, regimes of attraction place external limits on how the virtual proper being of an object might manifest itself. Every object has an endo-structure that already places limits on what it might be (otherwise, virtual proper being would be the same as monism). However, those manifestations are further limited by exo-relations, which form the regimes of attraction. As a result of this intersection, objects are neither entirely free to mutate in any fashion nor overdetermined by their context. To think about this within the analogy of a virtual reality space, one might say that any object one creates will have an endo-structure that limits its operation and its possibilities for manifesting. For example, Bryant uses the extended example of a blue coffee mug to talk about the mug's power "to blue" and how that power manifests itself differently depending on regimes of attraction such as available light sources and the sensory capacities of the objects perceiving the mug. The same sort of mechanic would operate in a virtual world, just as most virtual worlds have game engines that govern things like "gravity." Those engines are part of the endo-structure of objects. The fact that one (or one's avatar) finds oneself on a large planet with a particular gravitational field (or a virtual version of one) is then part of the regime of attraction that manifests one's feet on the ground and constrains movement in specific ways. Virtual worlds often work this way, even though clearly they don't have to.
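As an analogy only, and with the caveat that no code can capture withdrawal, here is a toy rendering of Bryant's mug in the idiom of a virtual world: the mug's endo-structure (what it can reflect) is kept separate from the regime of attraction (light source and perceiver) that conditions how its power "to blue" locally manifests. All the names and types are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mug:
    # Endo-structure: a capacity the mug carries whether or not
    # anything ever perceives it.
    reflects: frozenset = frozenset({"blue"})

@dataclass(frozen=True)
class Regime:
    # Exo-relations: what the environment and the perceiver afford.
    light_contains: frozenset
    perceiver_sees: frozenset

def local_manifestation(mug: Mug, regime: Regime) -> set:
    """What actually appears: the endo-structure's powers filtered
    through the regime of attraction."""
    return set(mug.reflects & regime.light_contains & regime.perceiver_sees)

mug = Mug()
daylight = Regime(frozenset({"red", "green", "blue"}),
                  frozenset({"red", "green", "blue"}))
sodium_lamp = Regime(frozenset({"yellow"}),
                     frozenset({"red", "green", "blue"}))

print(local_manifestation(mug, daylight))     # {'blue'}
print(local_manifestation(mug, sodium_lamp))  # set(): the power doesn't manifest
```

Under the sodium lamp the mug never manifests "to blue," but the capacity remains in the object, which is roughly the distinction between virtual proper being and local manifestation that the analogy is meant to track.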
Ready Player One offers a peculiar vision of how the endo-structure of a VR world might develop out of a pre-existing set of regimes of attraction. That is, if we were to imagine, for a second, that someone actually created the OASIS, at the outset its limits would be defined by things like available computing technologies, processing power, programming languages, etc.: in other words, the material substrate on which any VR world would be built. The OASIS in the novel is further shaped by its creator's obsession with the 1980s: Dungeons and Dragons, early video games, computers, sci-fi novels, anime... the whole geek lexicon. This obsession becomes built into the mechanics of the OASIS itself, along with those other technological elements. Parzival, the novel's protagonist, navigates the OASIS through his deep research into this culture.
How do we want to think about a VR world as a real object? We've seen this conversation among OOO folks about whether or not Popeye is real. In OOO terms, for an object to be real it must be able to exist independent of external relations. So a statue of Popeye is real. However, there is also something like an idea of Popeye that can be trademarked: is that a real object? Can something that is symbolic or encoded be real, independent of its local manifestations in a book or on a screen? Perhaps. So, for example, if a character in World of Warcraft finds a magical sword, which she can sell or trade, even for US dollars: is that sword a real object?
I would say that one potential error to avoid here is in imagining that virtual worlds are separate from the "real world." They aren't worlds inside of worlds. A digital photograph is as real as a printed photograph. An ebook is as real as a printed book. A digital movie file is as real, as material, as a movie on a reel. Virtual characters have a material manifestation as well. They take up physical space on a hard drive somewhere. They should not be mistaken for their local manifestations on a computer screen, though it is only within the particular regimes of attraction of that software and hardware that they become viewable as characters. Otherwise they are just data files.
Maybe this is a good analogy for a pluralist virtuality, like virtual proper being. Without the proper regime of attraction, virtual beings are unreadable, inaccessible. With the right regime, they become manifest in particular ways: not their true form, of course, but a form that can relate and acquire agency. That would suggest, though, that virtual being can be altered by altering its local manifestation, which is something we already know in OOO, right? Virtual beings cannot directly interact; only their local manifestations (for Bryant) or sensual objects (for Harman) can be encountered. But real objects (their virtual/withdrawn states) can be destroyed through these interactions; there must be feedback. If they can be destroyed, then it makes sense that they can also be altered without being destroyed: not just their local manifestations but their virtual/withdrawn states as well, while the object remains the same object... just different. In other words, we can paint Bryant's blue coffee mug yellow, and it will no longer manifest blueness. It will lose that virtual power. Its virtual proper being will be altered. And yet it is still the same mug. I don't see why that should be a problem. We can have a discussion about how much change is required to make something into a different object, but that's for another time.
What this indicates to me is that if we think about a pluralist rather than a monist virtuality, then we create an ontology where the particulars of an object's virtual dimension are alterable through its local manifestations, which in turn are alterable through the regimes of attraction in which it emerges. This creates a condition in which we can proceed experimentally in devising different regimes for the purpose of shaping virtual being. Our ability to know the results of those experiments might always be limited, but the principle that such mutations are possible seems to make sense.
Life interfered a little last week and I got off-track on this MOOC, so today I will be responding to both weeks 3 and 4 of the eLearning and Digital Culture MOOC. These weeks deal with the topic of posthumanism... something on which I've written a great deal. The instructors also offer up transhumanism. I can see the point of comparison on some surface level, but, at least in my view, these are two very different things, and the comparison might lead to some serious misunderstanding of posthumanism. I'll get to those details in a moment, but I want to arrive there through a consideration of this question: do MOOCs need guilds?
What do I mean by that? While I am not a player of MMORPGs like World of Warcraft, I am aware that much of the activity in these games is carried out by collectives of players known as guilds. Many of the missions in these games require the concerted effort of 20 or more players, so a fair amount of coordination is required, both during the live event of adventuring and in between. I realize I am an unusual participant in this MOOC, as I am operating more as an observer than as someone with specific learning goals. However, I started thinking that serious MOOCers might benefit from being part of a small (20-50 person) group of participants who would commit not only to a single MOOC course but might undertake multiple courses over time. Different participants would offer different skills; for example, some would have more or less time to devote to a particular course and bring more or less expertise to the content. As such, their roles might change over time, sometimes being more like a mentor's and sometimes more like a student's. Members could divide the work of investigating different parts of the course and reporting back to the group.
I suppose one's response to this suggestion might have something to do with how one views the ethical practices of learning. Do we really want students taking courses as a group, collaborating and strategizing independently of the instructor? Or do we ultimately want students to be independent in some fundamental way that would make a guild-type strategy unethical?
In part, this also has to do with how one envisions learning on a MOOC. In an email list discussion I had a few weeks back on the potential of a writing MOOC, I suggested that learning in a MOOC couldn't simply be measured by what individual students had learned, that instead the pedagogical activity of the MOOC needs to recognize what the collective network learns. This is not a spurious suggestion. Part of the premise here is that IF MOOCs are the "future of education" then that is partly because the future of professional labor and citizenship will take place through this kind of collective, networked activity where expertise is not so much about what is inside your head but how well you can connect your head to a larger network of cognition.
Of course this brings us to the matter of trans- and posthumanism. Transhumanism, at least as it is represented for the course in this article, is a political position that advocates for technoscientific experimentation and a legal system that promotes wide freedoms to adopt technological innovations. Posthumanism, though, is not well represented, even though the instructors offer this introduction to a collection titled Posthumanism, which largely conflates posthumanism with poststructuralism. Unlike transhumanism, posthumanism is not about something that will or might be made to happen through technoscience. It is not even something that did happen, as in once upon a time we were humanist-type humans and now we are posthuman. Instead, posthumanism is about reconsidering what humans always already were/are. Basically, even though poststructuralism is certainly a critique of humanism, it does not do what posthumanism seeks to do in its attempt to understand the intersection of humans and nonhumans.
As a posthumanist (at least my version of one), one would look at MOOCs as networks of distributed cognition (which work with varying degrees of success). While the apparent goal of each course is mastering the content, the other, less obvious goal is teaching users to participate in a particular kind of information network, where knowledge is developed through a certain range of techniques. Of course we could say the same thing about the 20th-century industrial classroom. So, for example, 20th-century academic writing (e.g., first-year composition) was about:
- close reading of print texts
- using a print library
- writing a linear text (e.g. thesis statements, paragraph development, etc.)
- working independently.
The 21st-century learning environment and digital composition are perhaps more akin to the wiki page:
- distant reading of digital resources
- accessing information over multiple networks
- producing digital media artifacts with multiple audiences and access points
- working collectively
I don't mean to suggest that 20th-century skills disappear, but I do believe they must operate within the context of the 21st-century environment. For example, we still need to read closely. However, close reading is no longer enough and must be integrated with an ability to handle more information than one person can be expected to read in the time allotted for any given project. There will still be linear texts, but they will not operate as they once did.
MOOCs can teach students to operate in these new environments. However, I do think something like a guild approach would be useful in making that happen.
Surprisingly, English teachers from K-12 through higher education are not a particularly forward-thinking bunch. Shocking, right? While "schoolmarm grammarian" is uncharitable, it's probably closer to the mark than "future-oriented innovator." So when the National Council of Teachers of English publishes a framework for a 21st-century literacy curriculum that is entirely focused on digital matters, one could almost say that one no longer needs to be forward-thinking to recognize digital literacy as a primary site of English education.
I want to combine this with a generally more future-oriented institutional document, the New Media Consortium's Horizon Report. The full report isn't out yet, but they have identified six technologies to watch, each with a time-to-adoption horizon:
- Massively Open Online Courses
- Tablet Computing
- Big Data and Learning Analytics
- Game-Based Learning
- 3D Printing
- Wearable Technology
I think the MOOC and the tablet are fairly obvious, as they should be, given their near-term time-to-adoption horizons. They've been reporting some version of game-based learning and 3D printing for some time, so I'm not sure how those will come about, how broad their impact will be, or what the time frame will be. However, I think big data and wearable technology are good bets.
I don't know if the particular brand of MOOCs we see with Coursera will be around in five years, but I'd be willing to bet that there will be millions of "students" taking open, online courses in 2020. I put students in scare quotes because "students" suggests, for some, college credits, and I'm not sure what the relationship will be between open courses and credits. What I do know is that these massive, networked environments will alter the way we learn (and work and socialize). I know this because they already have, and that trend is only going to intensify.
What NCTE recognizes is that English should be the means by which such literacy is acquired (at least in the US, which is the nation in "National Council"). To that I say, "good luck." Good luck providing this professional development for existing teachers, who are not prepared to do this. Good luck finding university English departments with faculty to provide this literacy to the general population of college students, let alone educate preservice K-12 teachers or graduate students who will become university faculty. Good luck finding English departments who even remotely view digital literacy as a subject that even marginally concerns them, let alone one that would be central to their curriculum in the way that print literacy is now. As I suggested above, I think you'd have better luck selling the average college English department on becoming grammar-centric than you would on becoming digital-centric.
Now, if you think I'm just trolling on my own website, well, you might be right. But the truth is that if this were 2003 and a department recognized that digital literacy was going to become the issue that might make or break its disciplinary future, then by now it might have four or five digital scholars hired and a couple tenured. Maybe it would be in a position to deliver this content today. But few departments did that. This means the transition is likely to be rocky.
Here's my point of comparison. In the mid-19th century, English departments studied oratory and philology: two things contemporary English faculty know little about. Why did English split itself off from speech? Speech still survives, in a way. Most universities have some public speaking course, and speech departments evolved into communication studies. Without wanting to sound techno-determinist, the second industrial revolution had a significant hand in that transformation. I look upon print-centric literary and rhetorical studies in the same way. In hindsight we might say that the 19th-century transition took three or four decades. Things move a little faster now, but the truth is that 2020 will be 25 years after the rise of the modern Internet.
For week two of the eLearning and Digital Culture MOOC, one of the assignments is watching Gardner Campbell speak at Open Ed 12 from last October.
One of Gardner's key points of reference is Gregory Bateson, specifically Steps to an Ecology of Mind. Having been up and down the cybernetics business, Bateson is familiar to me from that angle, but my first encounter was through Deleuze and Guattari, right at the start of A Thousand Plateaus, where they write: "A plateau is always in the middle, not at the beginning or the end. A rhizome is made of plateaus. Gregory Bateson uses the word 'plateau' to designate something very special: a continuous, self-vibrating region of intensities whose development avoids any orientation toward a culmination point or external end."
This is perhaps not so far off from what Gardner is seeking in his use of Bateson: open education as a "continuous, self-vibrating region of intensities." However, there's a problem, as we all know. And it is a problem that Gardner, through Bateson, terms the double-bind. The double-bind arises when one is given two conflicting messages or goals to follow. My favorite example from the talk is the description of a blogging requirement and rubric from a syllabus. There's a long description of rules and procedures to follow and an extensive description of requirements... followed by the injunction to "be creative" and foster a community. This isn't meant to poke fun at the anonymous professor, because we are all in this double-bind. Bateson suggests that responses to the double-bind take the form of paranoia (this is a trick), hebephrenia (screw it, let's get drunk), and catatonia (what?). Certainly these are the typical faculty responses I have seen to the double-bind of a university strategic plan (or online education, for that matter). So it's not just students. But what are we to do? If we have a syllabus, don't we need to have requirements and grading criteria?
The central double-bind for digital literacy education, whether it is FTF, traditionally online, or open and online, is between the demand to reinvent/be creative and the expectation of meeting traditional standards. Gardner relates a story of one academic who responds to the idea of open online education by saying "it may be learning, but it's not academics." If that is true, it's because academics is tied to certification, and certification is tied to reaching specific, predetermined goals. I don't think anyone wants to do away with the practice of certification, especially for certain professions. In fact, the badges movement looks to expand the micro-certifications of academics (e.g., the course credit) into extra-institutional learning experiences. My inclination would be that we need to move in the opposite direction, distancing learning from certifying. But I don't see us doing that, in part because higher education is a research and certifying machine far more than it is a teaching and learning one. Just as importantly, our student-customers want certification more than they want learning. They want jobs, not an education. Are jobs and education mutually exclusive? No, but they overlap in rather impoverished ways, at least in the eyes of our students. As WPA I read through 1000s of FYC course evals each year. There is a running theme in those responses that no one should find surprising: students don't believe that a writing course is relevant to the education they are pursuing. Likely many of these comments come from prospective STEM majors, but also business majors, who make up a fifth of our student population.
I don't wish to romanticize open online education as a Deleuzian plateau. However, the idea of openness might imply eschewing the definition of specific culmination points or external ends: a problem for institutions (and students) who are more interested in external ends than self-vibrating intensities. When it comes down to it, I don't think faculty are all that interested in openness either. Faculty are interested in replicating the closed systems that certify their expertise.
So, for example, a writing or digital composing MOOC is easy to imagine. DS106 digital storytelling is one. A writing MOOC with 40,000 students could work without problem... as long as no certification is required, as long as the particular writing and the particular ends are not important, as long as it doesn't even matter whether the students write or don't write, learn or don't learn. Of course the trick of good design would be to invite participation. But is this what the many students these MOOCs hope to attract actually want? Do they want "open education" or do they want "free certification" (preferably with a minimum of work or learning)?
This is the great double-bind of open education: to reorient prospective students toward an open-ended education and away from the symbolic capital promised by higher ed. It's an interesting project, but I wonder how it works for any of the people involved.
We often see studies of technology adoption by college students, such as those done by Pew. We know from our own classrooms and from walking the campus that Pew's statistics (96% of undergrads have cell phones, 88% have laptops, and 92% have a wireless connection on some device) are reflected in our own observations. The corresponding figures for the general adult population are 82%, 52%, and 57%. I wonder what the stats would be for humanities faculty. Lower than for undergrads, I suspect, but I wonder if they would be lower than for the average adult. I don't think anyone would be surprised to discover that humanities professors spend more time than the average adult or student on traditional cultural/media activities: reading print books; going to museums, public lectures, and libraries; listening to classical music; etc. And obviously one should be able to live one's life as one chooses, including in terms of online connection.
Given the nature of our profession however, particularly our professional freedoms, these personal choices then become professional ones. To a certain degree, all faculty have needed to come online in one way or another: library databases, email, online grading and student information, and to a lesser though still significant extent, course management systems. Clearly all faculty have internet access at least through their workplaces. But to what extent have we collectively embraced networked culture? Certainly not to the extent that we have embraced the modern culture that we continue to celebrate through our curriculum.
Why is this an issue? Let's say, for example, that I didn't really care for reading books. I would assign books for my courses because that was expected, but I didn't live a life in which books were personally valued. How successful do you think I would be at teaching print literacy? Teaching with digital networks likewise requires a kind of literacy derived from a significant level of immersion. This is, I think, a real stumbling block for our profession in facing up to this challenge. No single decision is crucial here, and I would agree that one can become overly and unproductively immersed in the digital world, but here are a few examples:
- Not wanting a smartphone or other mobile device because one wishes to be disconnected
- Not wanting to be involved in social media of any kind because they are trivial
- Not taking seriously conversations that take place primarily via the web, because that's not where serious conversations happen
- Viewing digital cultural activities from gaming to YouTube to any kind of user-generated content as crass commercialism not worthy of serious attention, as a kind of anti-literate, anti-intellectual space
- Viewing connectivity as an intrusion on valuable cultural activity, in the classroom or elsewhere.
Again, there are valid points of concern here and these are all acceptable individual decisions. I have no problem with someone living their life this way, but if one puts this all together then I think one ends up with a faculty member who is not well-suited to meet the challenges of preparing students to live in a world that the faculty member has renounced.
In the eLearning MOOC we've been talking about this in terms of Prensky's digital immigrants and digital natives. In my view this is an unproductive and even damaging perspective. Again, as with the utopian/dystopian discourse, perhaps the point is to move people away from these positions. Reading the course discussion, there are certainly people who have arrived already familiar with these terms and unhappy with them. (I would imagine anyone who knows something about these matters knows this is not a productive kind of thinking. It's akin to starting a composition class with grammar instruction because that's what the students expect, even though as an instructor you know that's not the right direction.)
The immigrant/native business sets up a false dichotomy and reinforces an unnecessary conflict. If you identify the faculty as immigrants, then you are really taking a hostile position toward them. But more dangerously you are setting this up as a generational problem wherein the immigrant faculty can just say let the younger generation, the natives, do this work. But that's not what happens because this isn't about generations. It's about disciplinary culture. I have encountered plenty of mid-20s English doctoral students. Yes, many have cell phones and laptops and such, as you would expect. But very few see digital literacy or practices as part of their teaching or disciplinary work. Instead, they are adopting the cultural practices and values of their discipline, which is print-based.
English departments have always claimed that they are the place where people go to become better writers. I have never believed that. I think English attracts students who are already good writers, and I think a literary and print-based curriculum can teach students to read particular genres in particular ways and to write literary criticism with its specific discourse. Increasingly, though, what is taught is a kind of monastic practice, one that clearly prides itself on its removal from the discourses of the marketplace and the larger culture. There is nothing wrong with monasticism... for monks. However, it doesn't have broad appeal, though there will always be some students who want that experience. We can't expect a generational shift to lead to some eventual change in this situation. "Generations" are broad enough, and academic job needs are small enough, that there will always be potential hires and grad students to replicate these disciplinary values, just as there will always be people willing to live as monks.
And of course it's not just English departments. It's all of the humanities and perhaps beyond; that's just the corner of academia where I live. Ultimately, I think meeting the challenges of digital literacy will require university strategies for hiring and supporting faculty that work outside the disciplinary/departmental will to repetition.