Parlor Press has been an independent publisher of scholarly and trade books and other media in print and digital formats since 2002.
Digital Digs (Alex Reid)
Anne Balsamo writes in Designing Culture that “Shift work is a fact of life in a 24/7 age. Unlike shifts that start and end with a punch clock, working the paradigm shift is one long now.” Designing Culture is a book about innovation and changing technological literacies; it’s a book about Balsamo’s unusual (for a humanist) experiences at Xerox PARC; and it’s about the future of the university in a digital age. But it is also a book that searches for and insists upon a role for the humanities in technological development. In the end, Balsamo summarizes that role in the following way:
Contribute expertise in the assessment and critique of the ethical, social, and practical affordances of new technologies; provide expertise on the process of meaning-making, which is central to the development of successful new technologies; provide appropriate historical contextualization.
Balsamo also describes roles for artists, social scientists, engineers, computer scientists, and physical scientists, but it is the humanist's role that interests me here. Briefly put: to historicize, interpret, and critique. It's a fair description inasmuch as that is what humanists tend to do in any context, so it makes sense that they might serve that same role in technological development. Of course, the challenges are convincing others that such functions are valuable, and then that humanistic methods can provide knowledge that can be put to use in design. The first challenge could be tough, but I think the second is even more daunting, as even the humanists themselves might balk at the notion of being "useful." In some ways, though, these two might be the same problem. That is, if one can frame one's work as useful, as contributing productively to a larger activity, then perhaps it becomes easier to see how the traditional methods of the humanities might be valued.
However, there's another way to frame the shifting work of the humanities. Understandably, Balsamo's book talks a lot about the future and the various ways that we try to imagine and describe it. What if the shift for the humanities was from interpreting the past to inventing the future? Here I am thinking of Greg Ulmer's keen observation in Heuretics of the split between heuristic and hermeneutic uses of theory. The humanities have largely focused on hermeneutics, on interpretation, and have always put their theoretical methods (from poststructuralism and cultural studies to the digital humanities) to interpretive ends. Ulmer's work, though, demonstrates the inventive potential of those methods. What is visible on the inventive edge of humanities methods is the capacity for speculating about human potential: what might we be? This can be dangerous work, of course, and it is work that requires interpretation or historicizing. However, it is also the kind of work that has value in design.
It is self-evident that we continue to struggle with figuring out how to live with digital media and networks. Sebastian Thrun's recent admission that MOOCs have failed to live up to their hype (stunning, I know) is one example. We clearly haven't figured out how to design for learning at that scale (if it is even possible). But I am thinking more of Ian Bogost's latest Atlantic piece on our state of "hyperemployment."
After that daybreak email triage, so many other icons on your phone boast badges silently enumerating their demands. Facebook notifications. Twitter @-messages, direct messages. Tumblr followers, Instagram favorites, Vine comments. Elsewhere too: comments on your blog, on your YouTube channel. The Facebook page you manage for your neighborhood association or your animal rescue charity. New messages in the forums you frequent. Your Kickstarter campaign updates. Your Etsy shop. Your Ebay watch list. And then, of course, more email. Always more email.
Bogost reminds me of Trebor Scholz’s description of “immaterial free labor” in this First Monday article. Scholz writes “People like to be where other people are. They enjoy using these platforms: from entertainment, to staying in touch with friends and family, to chatting, remixing, collaborating, sharing, and gossiping, to getting a job through the mighty power of weak links. It’s a tradeoff. Presence does not produce objects but life as such that is put to work and monetary value is created through the affective labor of users who are either not aware of this fact or do not mind it (yet).” Bogost and Scholz are each offering critiques of our wayward digital lives, the ways that we seem to become chained and addicted to our devices, the work we are continually doing in their name (even as we imagine we are saving labor), and the resulting wealth we are creating for others as prosumers.
If we are going to design culture, if we are going to take up the Xerox PARC refrain that the best way to invent the future is to build it, then we need to invent new ways of living, ways that are not in service to technology or profit but are also not blindly beholden to antiquated notions of human nature. Instead we need to recognize that we are inventing ourselves along with our technologies. No doubt it is a grandiose role to put oneself in: inventing future humans. And who knows to what extent any of us, or all of us, can really shape that future. But certainly not making the effort doesn’t make sense either. Nor do I imagine that as solely the role of humanists, as if we somehow know the answers as to what we all should or could be. But it is a place where humanists who take up an inventional approach to their methods could have a productive role.
Discussions of big data and pedagogy typically focus on the relative merits of analytics for assessing and improving curriculum and teaching practices. Michael Feldstein has a good piece on this from a few months back where he argues,
Right now, what we’re trying to do is a little like trying to conduct physics research before somebody has invented calculus. You can do some things around the edges, but you can’t describe the really important hypotheses about causes and effects in learning situations with any precision. And if you can’t describe them with precision, then you can’t test them, and you certainly can’t get a machine to understand them.
In other words, maybe, but not yet. I've wondered about this with writing assessment. I'm not sure if anyone out there is applying the methods the digital humanities have developed for studying literary corpora to student writing. Such research is not suited, at least not initially, to determining the quality of writing, and thus whether or not students are meeting some standard, which is typically what assessment investigates. However, it could tell us things about linguistic diversity, topic modeling, use of citation, and other textual features. In other words, it could tell us something about how student writing is changing over time. Maybe we could, through some secondary analysis, connect shifting textual features (e.g., paragraph or sentence length, to give a basic example) with "good writing." Maybe. Of course, turning this into a measure of pedagogy is another matter (turning it into a mechanism for machine grading is an even more distant step in my view). As Feldstein says, maybe, but not yet.
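To give a sense of how modest such corpus-scale feature extraction can be, here is a minimal sketch. The corpus (two short stand-in "essays") is hypothetical, and the two features shown, average sentence length and type/token lexical diversity, are only the most basic examples of the textual features mentioned above:

```python
import re
from statistics import mean

def text_features(text):
    """Compute simple surface features of one essay."""
    # Crude sentence split on terminal punctuation; fine for a sketch.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "avg_sentence_length": mean(
            len(re.findall(r"[A-Za-z']+", s)) for s in sentences
        ),
        # Type/token ratio: unique words over total words.
        "lexical_diversity": len(set(words)) / len(words),
    }

# Hypothetical corpus: one string per student essay.
corpus = [
    "Writing is hard. Writing is also rewarding.",
    "The essay examines how networks shape composition over time.",
]
features = [text_features(t) for t in corpus]
```

Tracked across thousands of essays and several years, even features this crude could begin to show whether, say, sentence length in first-year writing is drifting over time, which is the kind of longitudinal question raised above.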
However, I have another point that is really about reversing the relationship between big data and the individual student writer. Pedagogy begins (and ends) with the belief that individual students learn and that the learning experience of individuals is ultimately what we want to measure and value. It makes sense. As a student, I pay to learn; I get a degree; and when I leave college, I want to take something with me. At the same time, we recognize that learning is social and environmental; if it weren't, then we wouldn't have schools in the first place. So even though it is meant to have predictable effects upon individuals, learning is relational. (Indeed, I would argue that learning is a cognitive activity and all cognition arises from relation; thinking is not "inside.") The more we start to think about thinking as a relational, networked activity, the more we might also want to shift our focus toward understanding the collective activity rather than the individual one. As such, the point is that rather than being concerned with what big data can tell us regarding individual experience, maybe we should turn toward thinking about pedagogy as something that shapes a massive, collective activity that we are now becoming better able to see. Think of this in terms of climate change. What would it mean to think of pedagogy as shaping the climate of learning? Individual activities, like changing our consumer habits, can have an impact on the climate, but changing individual behaviors is a means to an end that cannot be seen on the individual scale.
How does this apply to writing pedagogy? Though there are a wide variety of teaching practices out there, I'd argue they all share a focus on changing the individual behaviors of student writers. We may contend that there are social-cultural-ideological factors to discourse communities or activity systems or whatever term we want to use for the context in which we write, but we still end up focusing on the writing processes (or products) of individual students. If a student's essay doesn't meet a standard, then the problem is addressed by focusing on that student's writing process and behaviors. Even when we acknowledge that some of the causes for the problem are systemic, we still identify the problem as manifesting on the individual level. We pose the problem and solution on the individual scale. So what would writing pedagogy look like if it were designed to teach the collective rather than the individual? I know this sounds, well, inhuman, but if, as I have asserted elsewhere, writing is not a strictly human activity, and the goal is better writing, then why focus on individual humans? It seems to me that when students struggle with writing, it is because they are caught up in networked activities or assemblages that perhaps were productive once upon a time for some purpose but are no longer. We commonly recognize as instructors how difficult it is to shift students' writing practices. In my view this is partly because we are seeking to make changes at the wrong site. This is the recognition of activity theory, though I believe activity theorists continue to put too much focus on the humans in their systems, which is fine if your interest is in studying human activity but is less useful if one's interest is in the system itself. Writing is a systemic activity, and the logical extension is that we would alter it on that scale.
This doesn't mean that students don't "learn to write"; it just changes what that phrase means to something like learning to operate within a compositional network. However, it also means that the performance of students within that network cannot be attributed solely to the students. Understanding that big data allows us to see writing on a new "real" level, in the same way that information technologies have allowed us to see climate, is a significant change. It doesn't mean that the individual student's writing isn't real, any more than climate means that the raindrops on your head aren't real. It just gives us a different (and compelling) explanation for how those local phenomena arise.
This post takes up where the last one ended. In discussions of theory, I often hear poststructuralism described as a Copernican moment: just as Copernicus moved the Earth from the center of the universe, poststructuralism, we say, decenters the human subject. Maybe. We can recognize, "in theory," how subjectivity, agency, and rationality are treated; it's all postmodernism 101. So we are familiar with this analogy in the humanities. At the same time, poststructuralism is carried out through the work of individual philosophers and often through close readings, and the work undertaken with these methods has continued in that form. One of the compelling qualities of the Ptolemaic model of the solar system was that it was predictive. Based upon careful observations and measurements of the visible universe, the geocentric model could anticipate the movement of heavenly objects. In short, it was self-validating within its own metaphysics. Of course, the geocentric universe also depended upon having access to a limited amount of information about the heavens: what could be seen by the naked eye.
As is maybe obvious, close reading is likewise limited to what can be seen of texts by the naked eye. As is perhaps also obvious, the geocentric model rested implicitly on the premise that the universe was made for humans to experience. Close reading also rests on the premise that texts are made for humans to experience. The text/close reading argument seems more plausible because texts are written by people for people… right? Well, at least we can say that texts are the product of cultural/social forces as opposed to the natural forces of the universe… right? But what if we asserted instead, in a Latourian sense, that texts are a product of human and nonhuman forces, that they are not necessarily made "for humans" any more than heavenly bodies are?
If we made such an assertion, then what would we make of the methodology of close reading? What kind of knowledge would we say that it produced? Certainly it could tell us something, in a kind of ethnographic way, about how humans experience texts. And really that's all close reading ever aspired to be. No one would claim that an interpretation would tell you what a text "really" is. It's just that we never really gave much thought to there being anything worthwhile about texts beyond our relationship to them. This is what I see in the digital humanities: an investigation into how texts operate at a scope that is outside our direct experience of them. (Set aside for a moment, if you will, the correlationism issue.) In this context, I'm not sure how close reading and macroanalysis (to use Jockers' term) will play together. I'm sure people will continue to do both. It just seems to me that if we accept the ontological premise of macroanalysis, then close reading becomes a strange kind of practice, more like astrology than astronomy, imagining that the stars tell us about ourselves.
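To make the contrast between the two scales concrete, here is a toy sketch of the macroanalytic gesture: rather than interpreting any single text, it tracks a term's relative frequency across a dated corpus. The two-item corpus and its dates are entirely hypothetical stand-ins for the thousands of novels a real study would use:

```python
import re
from collections import Counter

# Hypothetical corpus: (year, full text) pairs standing in for whole novels.
corpus = [
    (1810, "the heart and the home and the heart"),
    (1890, "the machine and the city and the machine"),
]

def relative_frequency(term, text):
    """Share of all word tokens in `text` that are `term`."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)[term] / len(words)

# How often does "heart" appear, per year in the corpus?
trend = {year: relative_frequency("heart", text) for year, text in corpus}
```

The output is a trend line over a corpus, not a reading of a work, which is exactly the shift in scope at issue: the individual text becomes a data point rather than the object of interpretation.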