Digital Digs (Alex Reid)

an archeology of the future

digital rhetoric and the resident web

9 October, 2015 - 09:32
Two lion cubs. Vulnerability is charming until it grows up and stalks you through the savanna.

Donna Lanclos and David White offer some remarks in Hybrid Pedagogy on "The Resident Web and its Impact on the Academy": the "resident web" being that portion of online spaces which "involve the individual being present, or residing, to a certain extent online," i.e. social media. Their argument is, in part, a familiar one, indicating that "New forms of scholarly communication and networking, manifested as digital tools, practices, and places such as blogs and Twitter, create a tension between the struggle to establish one's bona-fides in traditional ways, and taking advantages of the benefits of new modes of credibility, many of which are expressed via the Web." And it's that last part which interests me here, the "new modes of credibility." What are those?

As Lanclos and White describe, "When someone is followed on Twitter, it can be as much for the way they behave — how they project character and a kind of persona — as it is for the information they can provide." And what kind of character/persona is attractive?

Acquiring currency can be about whether a person is perceived to be vulnerable, not just authoritative, alive and sensitive to intersections and landscapes of power and privilege: As Jennifer Ansley explains, "In this context, 'credibility' is not defined by an assertion of authority, but a willingness to recognize difference and the potential for harm that exists in relations across difference." In other words, scholars will gain a form of currency by becoming perceived as "human" (the extent to which 'humanness' must be honest self-expression or could be fabricated is an interesting question here) rather than cloaked by the deliberately de-humanised unemotive academic voice.

My first thought here goes to Foucault's investigation of technologies of confession in The History of Sexuality. Foucault discusses the Christian confessional, but I'm thinking more about his investigation of writing as a confessional technology. My second thought is of Kittler, in Gramophone, Film, Typewriter, where he remarks on the pre-typewriter perception of a connection between the fluidity of handwriting and a kind of honesty of expression. It's hardly news that social media, from LiveJournal blogs through Facebook and YouTube to Instagram or Yik Yak and beyond, has been a site of confessions. These sites have generally offered a feeling of spontaneous utterance that is associated with honesty and confession.

What I think is curious here is Lanclos and White’s assertion of the development of academic status through these rhetorical practices. As they point out, impersonal objectivity has been, and really remains, at the foundation of academic knowledge. Even in discourses where subjectivity is hard to mask, like literary or rhetorical analysis, arguments must be built from textual evidence, scholarly sources, and established methods. So what role can these confessional performances play in building academic reputation?

To be "honest," I am skeptical. There's no doubt that the ability to create and maintain weak social bonds (i.e. networking in the non-technical, social sense) is valuable in almost every professional enterprise, and in academic terms that means building relationships with potential editors, reviewers, collaborators, hiring committee members, and more generally an audience for one's work. In some respects this was more true in the 50s and 60s, when academia was more of an old boys' network, than it is now. Clearly in those days, informal social bonds were largely created and maintained face-to-face, which we still do and which, as far as I can tell, is the primary reason for having conferences. As such, I don't mean to suggest there is no value in building such relationships. And there may even be some prejudice, some semi-conscious subjective preference, to find those with whom we build such bonds to be credible. In effect, the sense that someone has confessed, has bared their soul, has exposed their neck to our teeth, makes us more inclined to believe them. Perhaps it is just the curse of being a rhetorician, or maybe it's some congenital incapacity on my part to trust others (oh look, that was almost a confession), but if you were investigating something that really mattered to you, would these kinds of confessions really sway your judgment?

Lanclos and White end by asserting that

As scholars we need to put aside anachronistic notions of knowledge being produced by epistemologically neutral machines and embrace the new connections between credibility and vulnerable humanity which the Resident Web brings. In tandem with this, as institutions we need to recognise this shift by negotiating the new forms of risk online and supporting increased individual agency without reneging on our responsibility to protect and nurture those in our employ.

I can certainly agree with the first part of the first sentence. There are no epistemologically neutral machines for knowledge production. From a Latourian perspective that would make no sense. If you have a machine for the purpose of producing knowledge, how could it do/produce knowledge and have no effect (i.e. be neutral) on the knowledge? It would be like having a movement-neutral automobile. However, the connections between credibility and vulnerable humanity are not new, though the capacities of the Resident Web do shape this longstanding rhetorical practice in new ways. Furthermore, I'm not sure what is being asked in the imperative that we need to "embrace" these connections. "Embrace" itself is an interesting word choice, as it suggests an affective response as opposed to, say, respect, acknowledge, value, reward, or some other similar verb. And I'm not really sure what that last sentence is asking for. I think it is suggesting that academia needs to protect its students, staff, and faculty from the potential risks of social media (with which we are now all familiar). Of course I'm fairly sure that doesn't apply to "confessions" or honest expressions that we find racist, sexist, or otherwise offensive, because those bastards should clearly be pilloried, right? In other words, I don't see how this happens, at least not in a general way. As their article does point out, these are (rhetorical) performances. Vulnerability here is a genre, just as the speech in a confessional is. Maybe we need to "embrace" this genre. I'm not really sure why. Perhaps it is simply a recognition that academics are increasingly exposed.

I suppose I would push back in the other direction, a direction Lanclos and White only briefly point toward when they note that the resident web "largely takes place in online platforms run by multinational corporations." Foucauldian confessions were part of a disciplinary culture. Digital confessions might be articulated more as part of a Deleuzian control society. They become modulations in an algorithmic, fed-forward subjectivity. Maybe we shouldn't embrace such things. Maybe instead we need to be more cautious and at the same time more experimental in our skepticism over the value of the performance of vulnerability as a rhetorical strategy.



Sherry Turkle and the pharmacology of phones

1 October, 2015 - 07:53

Sherry Turkle's recent piece in The New York Times, "Stop Googling. Let's Talk," appears to take on the key points of her latest book, Reclaiming Conversation (also reviewed in the NYT). Turkle reports on a decline in empathy, particularly among younger people, which she asserts is a result of emerging technologies–social media and especially smartphones. While she cites some research in support of this claim (research which itself only suggests there might be a connection between technology and decreased empathy), Turkle also says "In our hearts, we know this, and now research is catching up with our intuitions." It's an interesting rhetorical appeal, since research so often demonstrates counter-intuitive discoveries.

But here’s a more interesting line from Turkle: “Our phones are not accessories, but psychologically potent devices that change not just what we do but who we are.” Indeed, though the distinction between doing and being is not so easily made or maintained. The point though is that we are changing. We’ve always been changing, though maybe now we are in a period of more rapid change. She writes that “Every technology asks us to confront human values. This is a good thing, because it causes us to reaffirm what they are.” And I wonder at the choice of “reaffirm.” Why re-affirm? Because human values are never changing? Why not discover or construct?

The loss of empathy and general human connection Turkle describes is moving, and the everyday stories of families sitting around the dining room table separated by screens are familiar. At the same time, you'd almost think we'd left behind some idyllic society of close families, friendships, and self-reflection. That is, you might think that if you weren't able to remember the 1990s. If this argument is going to be floated on what we know in our hearts, then what I know is that growing up in the 70s and 80s, I was hardly surrounded by "empathy." Not familial and certainly not from peers. Turkle relates a story of a teen lamenting her dad's googling some fact at the dinner table (I can be guilty of that). When I was a kid, instead of googling we were reaching for the World Book (that's like an old-fashioned print version of Wikipedia, kids). As a teenager, when I was home I spent my evenings in my room listening to music and/or reading. My teens also tend to sit in their rooms, though they're mostly watching videos or playing video games (or reading). I'm not sure how much of a difference that is. However, I don't think I would base an argument on "heart knowledge." I'm sure other teens today spend a lot of time on social media and texting. Just not my kids or their friends… I wonder who's more empathetic: the teen who is really interested in what her friends are doing, thinking, or feeling, or the one sitting silently in a room reading her book?

Turkle certainly wants to argue for the latter, which is perhaps not what we would “know in our hearts,” because for her, empathy begins with solitude.

One start toward reclaiming conversation is to reclaim solitude. Some of the most crucial conversations you will ever have will be with yourself. Slow down sufficiently to make this possible. And make a practice of doing one thing at a time. Think of unitasking as the next big thing. In every domain of life, it will increase performance and decrease stress.

But doing one thing at a time is hard, because it means asserting ourselves over what technology makes easy and what feels productive in the short term. Multitasking comes with its own high, but when we chase after this feeling, we pursue an illusion. Conversation is a human way to practice unitasking.

An argument near and dear to every painfully introverted academic you’ve ever met. And of course we all know there’s no better place to find empathy than in a professor’s office hours!

Meanwhile I’m especially interested in this one sentence: “Multitasking comes with its own high, but when we chase after this feeling, we pursue an illusion.” I am reminded of Plato’s concerns for the pharmacological effects of writing, which was perhaps the first “psychologically potent device,” unless one wants to count language itself (which I would). What is the “illusion” that we are pursuing here? Turkle doesn’t say, but there are certainly commonplace answers. An illusion that we are “connecting” with others via social media, that we are “getting things done” when we are multi-tasking, and perhaps even that we are really “thinking” with so many screens and distractions.

To channel Plato (and Derrida), we might say that our smartphones and digital media are a pharmakon for thought.

However, here are my confessions on these matters. At this moment I am at my computer with several screens open and my phone on the desk. But I don't have Twitter, Facebook, or any other social media apps set to alert me. (I did get an ESPN alert on my phone about the team selection for England's national soccer team.) I might check those sites a couple times a day and scroll through the updates. On occasion I get a little more involved in some conversational thread. My point is that maybe I'm already living the life Turkle recommends. In terms of my scholarly work, I spend a lot of time alone.

Despite all her concerns about the psychological effects of these devices, in the end Turkle seeks a technological solution in the redesign of smartphones. She asks:

What if our phones were not designed to keep us attached, but to do a task and then release us? What if the communications industry began to measure the success of devices not by how much time consumers spend on them but by whether it is time well spent?

In his review, Jonathan Franzen is skeptical of such hopes as they seem to run counter to economic imperatives. I suppose I would say that all media technologies operate this way. They all have pharmacological effects. One can become addicted to books or TV or video games or the Internet or a smartphone. So while I can certainly share in Turkle's general call that we reflect on our use of technology (one could say I've made a career of that), I would end where I started by suggesting that our task is not to "reaffirm" human values but to invent or discover them as emergent from our participation in a media ecology.


Algorithm objects: people are the things they do

25 September, 2015 - 13:05

We do things. It's an interestingly Latourian idiomatic expression, a kind of dancer and the dance moment. And in the moment of that linguistic confusion, we become those things: consumers, workers, believers, lovers, and so on. Not in a permanent sense though, always moving from one thing we do to another. One of the things we do, increasingly and often without much thought, is interact with algorithms. Sadly there's no convenient "-er word" for that, but it is a thing we do nonetheless.

In a recent Atlantic article Adrienne LaFrance reports that "Not even the people who write algorithms really know how they work." What does she mean by that? Basically that no one can tell you exactly why you get the particular results that you get from a Google search or why Facebook shows you one set of status updates rather than another. (I notice I get a very different set of updates on my phone's Facebook app than I do from my web browser.) And of course that goes on and on, into the ads that show up on the websites you visit, the recommendations made to you by Amazon or Netflix and other such sites, etc.
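To make LaFrance's point a bit more concrete, here is a minimal, entirely hypothetical sketch of a personalized feed ranker in Python. The features, the weights, and the idea that a phone client and a web client run with different tunings are my own illustrative assumptions, not a description of how Facebook or Google actually rank anything.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float       # how often this user interacts with the author
    recency: float               # 0..1, newer is higher
    predicted_engagement: float  # output of some upstream model

def score(post: Post, weights: dict) -> float:
    # A weighted sum stands in for whatever a real ranking model does; the point
    # is that the weights differ per user, per device, and per experiment bucket.
    return (weights["affinity"] * post.author_affinity
            + weights["recency"] * post.recency
            + weights["engagement"] * post.predicted_engagement)

post = Post(author_affinity=0.7, recency=0.4, predicted_engagement=0.9)
phone_weights = {"affinity": 0.5, "recency": 0.3, "engagement": 0.2}
web_weights = {"affinity": 0.2, "recency": 0.2, "engagement": 0.6}

# The same post scores differently on each client, so the two feeds diverge.
print(score(post, phone_weights), score(post, web_weights))
```

Even in this toy, the question "why did I see that post?" has no crisp answer: it depends on weights that are tuned continuously and that differ across users, devices, and experiments.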

It would be easy enough to call this capitalism at work. No doubt these corporations are making money from these algorithmic operations (and if they didn't, then they'd change them). But that doesn't mean they understand how they work either. It would also be understandable if one responded with a degree of concern, maybe even paranoia, over the role these inscrutable machines play in forming our social identities. LaFrance points, for example, to the role their data collection might play in decisions about loan applications and such. However, at this point I'd want to recall Ian Bogost's earlier Atlantic article about our tendency to overvalue, whether to demonize or deify, the power of algorithms.

That said, it might be useful to put this conversation in the context of Robert Reich’s New York Times editorial where he argues that “Big tech has become way too powerful.” As he observes

Now information and ideas are the most valuable forms of property. Most of the cost of producing it goes into discovering it or making the first copy. After that, the additional production cost is often zero. Such “intellectual property” is the key building block of the new economy. Without government decisions over what it is, and who can own it and on what terms, the new economy could not exist.

But as has happened before with other forms of property, the most politically influential owners of the new property are doing their utmost to increase their profits by creating monopolies that must eventually be broken up.

Certainly algorithms are among the most valuable of those information ideas. It's another thing we might add to the complex network that makes up an algorithm: not just code and data, but also servers, data networks, server farms, electricity, programmers, technicians, and other human workers. AND they are also legal entities, created by intellectual property law. Reich's point is that when we argue between government control and free markets we miss the point. Without government there cannot be a free market. In a fairly obvious example, without police, courts, and jails, how would the concept of property function? Reich suggests that we may want to structure the market differently in relation to these patents if we want to protect ourselves against this growing monopoly.

In the specific context of algorithms, one might wonder about the social-cultural value of proliferating them, and thus of shifting the rules of the market to encourage proliferation. It may not be possible, as LaFrance decides, to exert intentional control over what algorithms will do. This makes sense because I can't predetermine what an algorithm will show me without already knowing what it is possible for it to find, which, of course, I cannot. However, it is possible to have a variety of algorithms showing us many different slices of the informational world, a variety that would not shift the overall role of algorithms in our lives but would downplay the influence of an increasingly limited algorithm monopoly.

In rhetoric, we often talk about discourse communities, that is, communities that are formed through texts and textual practices. The discourse communities in which we participate, as we typically say, shape our identity. These communities might correspond to our family, where we grew up, our gender and ethnicity, and later our professions, our religious beliefs and politics, etc. Our digital discourse communities, filtered through social media and search engines, are mediated by algorithms. It would be too much to say they are determined by algorithms, but those calculations are a significant shaping force. If, particularly in rhetorical discursive terms, we are readers and writers, then we are the things we read and the things we write in response to what we read and the community we encounter through reading. In the digital world, these are algorithmic productions.

Algorithms are objects persisting in and dependent upon an information-media ecology that is not simply digital but is also material and economic, legal, and living (i.e. it involves humans and is generally part of the Earth). Humans (some) write algorithms, and humans (many) interact with them. We make laws about them, and we try to control them. But we cannot fully understand them or what they do or why they do what they do (or even if asking why makes sense). They are our effort to interact with a mediascape that is vaster and faster than our ability to understand, despite our role in its creation.

How can there be a contemporary rhetoric, even one that wants to focus solely on human symbolic acts, that is not significantly algorithmic?


microaggression, victimhood, and digital culture

20 September, 2015 - 10:09

Take as evidence these two recent articles in The Atlantic by Conor Friedersdorf, “The Rise of Victimhood Culture” and “Is ‘Victimhood Culture’ a Fair Description?” These articles take up research by sociologists Bradley Campbell and Jason Manning (“Microaggression and Moral Cultures“). As Campbell and Manning observe:

In modern Western societies, an ethic of cultural tolerance – and often incompatibly, intolerance of intolerance – has developed in tandem with increasing diversity. Since microaggression offenses normally involve overstratification and underdiversity, intense concern about such offenses occurs at the intersection of the social conditions conducive to the seriousness of each. It is in egalitarian and diverse settings – such as at modern American universities – that equality and diversity are most valued, and it is in these settings that perceived offenses against these values are most deviant.

They also make the fairly obvious observation (which I’d like to explore further in a moment) that

As social media becomes ever more ubiquitous, the ready availability of the court of public opinion may make public disclosure of offenses an increasingly likely course of action. As advertising one’s victimization becomes an increasingly reliable way to attract attention and support, modern conditions may even lead to the emergence of a new moral culture.

However, the part of the article that becomes the focus of Friedersdorf's articles comes at the end, where Campbell and Manning contend that, while historically we have had an "honor culture," in which people typically resolve disputes unilaterally, often through violence (think duels), and a "dignity culture," in which people turn to third parties (e.g. courts) to resolve disputes but tend to ignore microaggressions, today we find ourselves in what they term a "victimhood culture," which is

characterized by concern with status and sensitivity to slight combined with a heavy reliance on third parties. People are intolerant of insults, even if unintentional, and react by bringing them to the attention of authorities or to the public at large. Domination is the main form of deviance, and victimization a way of attracting sympathy, so rather than emphasize either their strength or inner worth, the aggrieved emphasize their oppression and social marginalization.

In a moment of all-too-predictable social media irony, the response to Friedersdorf’s first article is to claim injury over the microaggression in the term “victimhood culture,” a reaction which provokes the second article, which includes an interview with Manning explaining their choice of the term and their awareness of the response it might provoke.

But let me return briefly to the social media point. I imagine there's an argument to be made tracing the intersection of information-media technology with moral codes (e.g. court systems relying on the mechanization and then industrialization of print, typewriters, catalog systems, etc.). So the notion of a new digital morality or ethics might be expected, as would be a new digital rhetoric. As Campbell and Manning discuss, the development of microaggression blogs represents an effort to demonstrate a pattern of what some might consider minor offenses. It's a strategy that takes advantage of the "long tail" and crowdsourcing qualities of the web, as well as the potential for going viral. In both honor and dignity cultures, the scale of offense is always on the level of the individual. One person is aggrieved, and the offenders are judged for their individual actions. Here, while we can still say that individuals offend and are aggrieved, the rhetorical strategy is to seek the support of third parties by representing a systemic pattern of microaggressions involving otherwise unrelated individuals. While I would not go so far as to say that such rhetorical strategies are impossible without social media, it seems clear that digital culture has made such efforts far more visible and effective.

It is interesting, as Campbell and Manning observe, that it is in those cultural spaces that are the most diverse and egalitarian (e.g. campuses) that this rhetorical strategy has taken hold, in part because those institutions are more likely than others to become partisan supporters of the aggrieved in these instances (whether in the form of faculty or institutional policy itself). Already we begin to see some rather complex dynamics around this matter, such as a student refusing to read a book because it offends his Christian sensibility (I’m sure you all remember that one) and this recent issue on my own campus involving an African-American graduate student posting “Whites Only” and “Blacks Only” signs around campus as part of an art project. Examples such as these muddy the distinction Campbell and Manning seek to make that victimhood culture is structured around claims made against dominant cultural groups.

I’m not interested in a meta-moral argument over whether or not this emerging moral code is good or not. And I’m certainly not interested in naming it. But I am curious about what I see as two competing socio-rhetorical tendencies at work here.

On the one hand there is the tendency toward a destratified, diverse, and egalitarian culture, perhaps best typified by college campuses. In this environment, participants have to live and work with others who are quite different from them without recourse to claiming any innate superiority (i.e. no stratification). Of course stratification does occur (because equality is more a mathematical abstraction than a real-world state), but the idea is that it is temporary or ad hoc and consensual (though we can pull out Lyotard here if we like). So, for example, there is stratification in a classroom between the professor and the students, but not necessarily elsewhere on the campus. And that stratification is limited and requires consent, even in the classroom (e.g. trigger warnings protecting students, or faculty protections against Yik Yak attacks). And even in the case of that Yik Yak attack, there's always an opportunity for victimhood to flow in a variety of directions.

On the other hand though, we clearly see something quite different in social media, where participants move toward less diverse social groups. Certainly this is the purpose of microaggression blogs, where users join in common cause, and those blogs are hardly the only example of that. I don’t think one can ascribe a single purpose to a diffuse networked movement like this, but I at least find it difficult to determine if the microaggression movement is one that seeks to accelerate and shape the formation of an increasingly diverse and egalitarian society or if it rather seeks to set limits on diversity, at least as Manning and Campbell use the term.

For example, it is dignity culture that gave us the now familiar aspects of university general education that deal with racism, sexism, and other forms of social inequity, as well as multicultural literature, ethnic studies, gender studies, and so on. Such curricula clearly make sense within a dignity culture as means to foster a more diverse and egalitarian culture. However, I think it is an open question whether such classes will be viewed as moral in a microaggression culture (or whatever you want to call it, apparently not “victimhood culture”). Again, at least in Manning and Campbell’s account, in a dignity culture the slights of microaggression were overlooked, and microaggressions occurred all the time as students were asked to read things that challenged their views (and which they might find offensive) and students often would say things (many times, though not always, out of ignorance) that offended others in the class. It may be that such courses will become nearly impossible to teach (they’ve never been easy). The risks of speaking in such a classroom are far higher now than they’ve ever been for both faculty and students. Beyond that, if we see this microaggression movement spread, I will be curious to know if the result is that students’ social relations become less diverse even as the overall demographics of higher education move in the other direction. Perhaps that sounds like a moral argument against the microaggression movement because it suggests that the result will work against diversity and egalitarianism, but I don’t see it that way. If anything I think it suggests redefining what diversity and egalitarianism might be.

Anyway,  I prefer to think about these matters in rhetorical terms. All rhetorical acts involve risk and thus must also involve some perceived possible reward. Obviously we all say and do things impetuously at times and that is always risky. It’s generally wise to make one’s best effort to restrict such acts to private moments in one’s most intimate and trusted social relations. Microaggression introduces us to a new set of risks for rhetorical acts occurring beyond those intimate social relations.

There is at least some tendency socially to indemnify oneself against these risks by claiming some value in honesty, in telling it how it is, in being a “straight shooter.” The microaggression movement relies on this strategy itself; it does not recognize the validity of those who counterclaim offense in their own claims of microaggression. It is indeed a complicated web of rhetorical performance. And perhaps this is what we need to recognize (and why we continue to mistrust rhetoric): it’s not honesty that we value but the effective rhetorical performance of honesty. That is, one that must protect itself against claims of microaggression as one must protect oneself against all counterclaims to a rhetorical act. In classical terms, claims of microaggression are claims against ethos, against one’s moral authority to say what one has said.

If we think about it this way, microaggression points us to a destabilization of ethos in a digital culture. To reestablish ethos, we have to reconceive identity and the authority it provides in a networked culture. Until then, it will remain quite difficult to determine what is and isn’t permissible to say.


Genre, media formats, and evolution

16 September, 2015 - 09:21

McKenzie Wark has a useful extended discussion of Lev Manovich's Software Takes Command. If you haven't read Manovich's book, Wark's discussion offers some great insights into it. I think Manovich's argument for software studies is important for the future of rhetoric, though admittedly my work has long operated at points of intersection between rhetoric and media study.

But here's one way of thinking about this. How do we explain the persistence of the "essay," not only in first-year composition but as the primary genre of scholarly work in our field and really across the humanities? Indeed we might take this question more broadly and wonder about the persistence of scholarly genres across disciplines and beyond. That is, we might ask why the genres of scientific scholarly articles, or of newspaper articles or novels, have not changed much in the wake of digital media.

Or maybe we should ask about the photograph.

Manovich's media lab image wall.

It's likely that you have some recent family photos hanging somewhere in your house. They were probably taken with a digital camera, maybe even with your smartphone. But sitting in that frame, they probably don't look very different from photos that would have hung there thirty years ago. The photo may not reveal to you the complete transformation of the composition process that led to its production. That transformation has erased some photographic capacities that were available to chemical film and do not exist for digital images. However, as we know, most of those compositional activities are now simulated in software. Additionally, many new capacities have emerged for photographs, most notably, at least for the everyday user, the capacity to share images online.

Is the digital photograph the same as the analog photograph? Of course not. Can we say that the photograph persists? Or should we say that a new species of photography has evolved and come to thrive in the digital media ecology? And what's at stake in how we answer that question?

As Wark observes of Manovich:

What makes all this possible is not just the separation of hardware from software, but also the separation of the media file from the application. The file format allows the user to treat the media artifact on which she or he is working as “a disembodied, abstract and universal dimension of any message separate from its content.” (133) You work on a signal, or basically a set of numbers. The numbers could be anything so long as the file format is the right kind of format for a given software application. Thus the separation of hardware and software, and software application and file, allow an unprecedented kind of abstraction from the particulars of any media artifact.
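As a small illustration of that abstraction (my own example, not Manovich's or Wark's), the sketch below writes one list of numbers to disk twice: once framed as an audio file and once framed as an image file. The numbers and file names are arbitrary; only the format tells an application how to interpret them.

```python
import wave

# 8,000 byte values forming a repeating ramp (values 127-244).
samples = [int(127 + 120 * ((i % 50) / 50)) for i in range(8000)]

# Interpreted as sound: one second of 8 kHz, 8-bit mono audio (a low buzz).
with wave.open("signal.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(1)
    w.setframerate(8000)
    w.writeframes(bytes(samples))

# Interpreted as an image: the same bytes as an 80 x 100 grayscale PGM bitmap (a striped pattern).
with open("signal.pgm", "wb") as f:
    f.write(b"P5 80 100 255\n" + bytes(samples))
```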

The same thing might be said of textual genres. If you print the PDF of a journal article, it might look quite similar to the essays our predecessors tapped out on typewriters and received back in bound journals in the mail several years later. Personally, I find it hard to imagine working that way. Drafting longhand? On note paper? Going to the library and relying on printed bibliographies and card catalogs? What about before photocopiers, when you couldn't even make your own copy of a journal article?

I think it is fair to say that the contemporary digital scholarly essay, despite its resemblance to its analog predecessors, typically:

  • is composed in a digital platform (typically MS Word)
  • relies upon digital access to secondary scholarship (at the very least accessing a university's online library catalog)
  • is digitally circulated for review (perhaps by the author but almost certainly by the journal editors)
  • is prepared for publication in a digital environment
  • exists as a digital object (behind a paywall as a PDF, on an open access website, on the author's personal website, etc.)

In short, there are undoubtedly differences. But to me that leaves open a couple key questions:

  1. Do these differences make a difference? That is, do they alter the kind of knowledge we produce or the role essays play in our discipline? If so, how? and if not, why not?
  2. What are the untapped and/or under-utilized capacities of the digital species of the essay?

We have a growing field of digital rhetoric that investigates these questions and journals like Kairos that offer some direct evidence to inform an answer to question #2. Media study, on the other hand, while not being especially interested in text, offers some compelling responses to these questions as well, or at least it could if we are able to understand textual genres in relation to media formats. The two (genre and format) are clearly not equivalent. To the contrary they are quite distinct, but they act in conversation with one another. Historically we never really paid attention to the fact that essays were in a genre that existed within a specific print-analog media format. Now things are changing very rapidly because that media format has shifted. Now media study is integral to rhetoric, and rhetoric is integral to media study. I’ve been focusing here on the first part of that compound sentence. I’ll have to address the second part some other time. In fact, one might even go further and suggest that the entire constellation of fields within English Studies must turn itself toward media study, but again, that’s for another day.


rhetorical throughput

11 September, 2015 - 13:03

One of the projects I have been regularly pursuing (and I'm certainly not alone in this) is investigating the implications of rhetoric's disciplinary-paradigmatic insistence on a symbolic, anthropocentric scope of study and entertaining the possibilities of rethinking those boundaries. I've been employing a mixture of DeLanda, Latour, and other "new materialist/realist/etc." thinkers, always with the understanding that these theories don't fit neatly together and that I'm not in the business of building a comprehensive theory of the world.

I'm interested in rethinking how rhetoric works in order to find a new way of approaching how we live in a digital world.

So take for example this recent piece of research from Experimental Brain Research, “Using space and time to encode vibrotactile information: toward an estimate of the skin’s achievable throughput” (paywall) by Scott Novich and David Eagleman, or perhaps just watch Eagleman’s TED talk where he asks “Can we create new senses for humans?”

As both the article and the talk discuss, the research here is built on a vest that sends vibrations to the user's skin, vibrations the brain learns to translate into sound. Thus it becomes an adaptive technology for the hearing impaired. The research article in particular explores what the "bandwidth" of skin might be; that is, how much information can you process this way? In part, it's an engineering question, as the answer depends on the way the vest, in this case, is designed. However, it is also a question of biology, as human skin has a certain range of sensitivity.
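For a rough sense of how the engineering half of that question gets framed, here is a back-of-envelope upper bound. The motor count, the number of distinguishable intensity levels, and the update rate are my own illustrative assumptions, not figures from Novich and Eagleman's article.

```python
import math

motors = 32              # assumed number of vibration motors in the vest
levels = 4               # assumed intensity levels a wearer can reliably distinguish
updates_per_second = 10  # assumed rate at which the vibration pattern changes

bits_per_pattern = motors * math.log2(levels)            # ideal information per pattern
bits_per_second = bits_per_pattern * updates_per_second  # ideal ceiling on throughput

print(f"~{bits_per_pattern:.0f} bits per pattern, ~{bits_per_second:.0f} bits/s upper bound")
```

The biological half of the question is what knocks this number down: adjacent motors blur together perceptually, so any achievable throughput sits well below this engineering ceiling.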

One insight in the TED talk that is interesting to me is Eagleman's observation that the brain doesn't know anything about the senses. In a manner analogous, to a degree, to a computer that only sees ones and zeros, the brain receives electrochemical impulses. As he says, if the brain is a general computing device, each of our senses might be understood as a plug-and-play peripheral device. So why not plug in some new devices? Or if not new hardware, then why not new software? One might certainly think of language that way. It doesn't alter the visual spectrum of our eyes, but it alters the capacities that are available to us via sight. Maybe like an app on an iPhone. Ok, enough analogies (in my defense, Eagleman started it).

Some of the suggestions in the video strike me as wildly speculative (which is why I enjoyed watching it). And though they seem highly unlikely, to be fair, the same thing was said not long ago about some of the adaptive technologies that are now available. Ultimately, Eagleman will only say that we do not know what the theoretical limits of the brain's capacity to develop new senses, to expand our umwelt, might be. However, he seems most interested in the possibility of taking in data from across digital networks, data that now requires multiple screens and divided attention for us to track, and giving us a way of simply sensing it.

I find that fascinating for its implications about the role of symbolic action in rhetoric. If you think about it this way, symbolic action is a way of accessing human brains, piggybacking on visual and auditory sensory data. But the throughput is fairly limited. That is, humans can only read a few hundred words per minute, and they can hear even fewer. What if, to give a completely pedantic classroom example, instead of having to read all your students' discussion posts, you could just know, like you know whether you're sitting or standing right now, what their thoughts were about the assigned reading? What if, instead of doing all that research on your next car purchase, you could just know which one to buy? That's the kind of stuff Eagleman is talking about when he suggests, in far larger terms, that one could get a sense of the stock market or know the sentiment of a Twitter hashtag.
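To put an approximate number on that limit, here is a quick estimate of reading throughput. Both figures are assumed ballpark values, not measurements from Eagleman or anyone else.

```python
reading_wpm = 250    # assumed silent-reading speed, words per minute
bits_per_word = 10   # assumed information content of an English word, order of magnitude

reading_bits_per_second = reading_wpm * bits_per_word / 60
print(f"~{reading_bits_per_second:.0f} bits/s while reading")  # roughly 42 bits/s
```

Even a generous estimate keeps reading in the tens of bits per second, a trickle next to the data streams Eagleman imagines routing to the skin.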

Perhaps this sounds like it is verging on telepathy, but it isn't that and could never be that. Telepathy implies immediate communication, without mediation. This is fully mediated by digital technologies. And that might be quite scary. What do you want to plug yourself into?

I wonder if early humans were similarly frightened by language. Once, if you wanted to know something, you had to go see it for yourself. Now someone could tell you. You could become part of a larger collective. You were networked by symbols, given roles and rituals.

This reminds me of another recent scientific discovery of an ancient human-like species in South Africa, as reported by the BBC.

Homo naledi bones. Photo credit: John Hawks.

Ms Elliott and her colleagues believe that they have found a burial chamber. The Homo naledi people appear to have carried individuals deep into the cave system and deposited them in the chamber – possibly over generations.

If that is correct, it suggests naledi was capable of ritual behaviour and possibly symbolic thought – something that until now had only been associated with much later humans within the last 200,000 years.

Prof Berger said: “We are going to have to contemplate some very deep things about what it is to be human. Have we been wrong all along about this kind of behaviour that we thought was unique to modern humans?

“Did we inherit that behaviour from deep time and is it something that (the earliest humans) have always been able to do?”

As I said at the outset, I’m not interested in building a grand theory of everything. I’m interested in developing a theory of rhetoric that works in the digital age. It has to be able to account for the naledi and for the technologies Eagleman is building.  My way of thinking about this is to suggest that rhetoric does not begin inside symbolic action, inside the brain, inside culture. Instead, rhetoric is a kind of encounter with expression. As such it must be sensed. Out of those senses develop increasingly complex capacities for pattern recognition, thought, and, by extension, action.

I might answer the rhetorical question of Eagleman's TED talk by saying that we have already expanded our umwelt. We can already see ultra-violet and infrared rays. I can hear and see things happening on the other side of the planet, hear and see things that happened years ago. And the naledi might teach us that homo sapiens were not the first or only creatures on Earth able to do so. If we are able to recognize that our current definition of our capacities for rhetoric, for symbolic action and thought, does not define our ontology (what we are and must be) but only our history (what we have been for some period of our species' existence), then perhaps our rhetorical futures open up more broadly, if perhaps more dangerously.


academic "quit pieces" and related digital flotsam

10 September, 2015 - 07:49

Before I get into this, I should try to make a few things clear. This post isn’t about the structural problems facing higher education right now (issues of cost and access, the changing cultural-economic role of universities nationally and globally, or shifts in media-information technologies that are reshaping our work). It’s not even about the increasing politicization of those problems as they become bullet points in campaign stump speeches or the subject of legislation. No, this post is really about the rhetorical response to these exigencies among academics and in the higher education press (and as the two become difficult to separate).

So I am willing to accept that things are as bad as they have ever been in higher education… well, at least for a century? Of course, Bill Readings published The University in Ruins in the nineties, detailing the increasing corporatization of the university. In the eighties, when I was an undergrad, students on my campus protested in the hundreds or thousands over a variety of issues related to apartheid, the CIA on campus, and, yes, tenure and rising tuition. Of course, as the song calls us to remember, students in 1970 were shot and killed by the National Guard at Kent State, resulting in a national student strike. Maybe the Golden Age of the American university was in the 50s, when women were English majors, commie professors were pursued by senators, and non-white students had their own colleges. Look, I assume you all know this history at least as well as I do. So what's my point? It's not that "the more things change the more they stay the same." I'm willing to accept as a premise that things are worse now than they have been in the last half century as long as we are all also willing to accept that there is hardly some ideal moment to point back to either.

My interest in this post is in the rhetorical responses to this situation, specifically our near-viral interest in "quit pieces."


For example, one student at one university posts on a Facebook group page that he doesn't want to read the optional summer book because he finds it offends his sensibilities, and somehow that becomes national news. Elsewhere, this or that professor decides that academic life isn't for him or her, quits, and then writes about it. They cite multiple reasons: students don't pay attention or come to college for the wrong reasons, colleagues don't respect their work, but mostly it's about structural issues with institutions and higher education in general. As Oliver Lee Bateman, in a recent performance of this genre, writes:

In a university system like ours, where supply and demand are distorted, many promising young people make rash decisions with an inadequate understanding of their long-term implications. Even for people like me, who succeed despite the odds, it’s possible to look back and realize we’ve worked toward a disappointment, ending up as “winners” of a mess that damages its participants more every day.

Sure enough there's plenty of response to this genre, as can be seen in this Inside Higher Ed article (and the comments that follow) and Ian Bogost's provocation that "No One Cares That You Quit Your Job." I guess it depends on what one means by "care." After all, I'm not writing to Bateman and asking him if he's ok. Bogost is clearly right that people quit their jobs all the time. The argument of quit pieces is that the quitting signals that something is wrong with higher education. No doubt something is wrong. I'm just not sure that the person who is quitting has some special insight to offer about it. That's an ethos question, I suppose. On the other hand, it is apparent that people do care in the sense that they pay attention to these pieces. Perhaps there is some martyr-like quality imparted to quitting professors, as if giving up on their professional aspirations affords them some parting-shot attention, much like the Oscar-winning actor offering a political monologue while the music swells.

I can understand quitting a job you don’t like. Those of us who are professors likely all know people who have been (or feel they have been) treated unfairly by colleagues, administrators, etc. and felt forced to leave (or been forced to leave by the denial of tenure).  We also all know colleagues who have left for (what they hope are) greener pastures. Only the smallest portion of these folks write quit pieces. Why do they attract so much attention?

I think it must be an appeal to pathos. Bateman's piece ends, as many do, with an enumeration of the problems that led to him quitting. While I think the problems he identifies are real, I don't think he has anything particularly insightful to say about them. As such, I think that because other academics are feeling the same frustrations as Bateman, they respond emotionally to his (and others') expressions of that frustration. In that sense we do care and pay attention.

And I get it. Typically students go into graduate school with the goal of becoming professors because they envision a cloistered, scholarly life where they will focus primarily on the research questions they love and teach students who share their curiosities. Most of those students never make it to tenure-track jobs, and those who do arrive only to discover that the jobs are nothing like they imagined. It's understandable. I suppose you could say I became a professor to study and teach what we today call digital rhetoric. And I do publish in that area. But I never really teach that subject, and most of my career outside of publishing has been about trying to solve departmental and institutional challenges. So I can understand why someone might find that frustrating and disillusioning. I can understand quitting. I've left two tenure-track jobs. I just haven't left academia.

Rather than watching quit pieces go viral, it would be interesting to see some vision of a future higher ed that imagines what the arts, humanities, and sciences might look like in a less dysfunctional academia, something that ends up being more than a performance of commonplaces about the present moment or an ubi sunt reflection on some mystical golden age of yore.


faculty at work

24 August, 2015 - 13:40

This is one of those posts where I find myself at a strange intersection among several seemingly unrelated articles.

The first three clearly deal with academic life, while the last two address topics near and dear to faculty but without addressing academia.

The Rees, Scott, and Gilbert pieces each address aspects of the perceived, and perhaps real, changing role of faculty in curriculum. Formalized assessment asks faculty to articulate their teaching practices in fairly standardized ways and offer evidence that, if not directly quantitative, at least meets some established standards for evidence. It doesn't necessarily change what you teach or even how you teach, but it does require you to communicate about your teaching in new ways. (And it might very well put pressure on you to change your teaching.) The Scott piece ties into this with the changing demographics and motives of students and increased institutional attention to matters of retention and time to degree. While most academics likely are in favor of more people getting a chance to go to college and being successful there, Scott fears these goals put undue pressure on the content of college curriculum (i.e. dumb it down). Clearly this is tied to assessment, which is partly how we discover such problems in the first place. It's tough if you want your class to be about x, y, and z, but assessment demonstrates that students struggle with x, y, and z and probably need to focus on a, b, and c first.

Though Rees addresses a different problem, I see it as related. Rees warns faculty that flipping one's classroom by putting lecture content online puts one at risk. As he writes:

When you outsource content provision to the Internet, you put yourself in competition with it—and it is very hard to compete with the Internet. After all, if you aren’t the best lecturer in the world, why shouldn’t your boss replace you with whoever is? And if you aren’t the one providing the content, why did you spend all those years in graduate school anyway? Teaching, you say? Well, administrators can pay graduate students or adjuncts a lot less to do your job. Pretty soon, there might even be a computer program that can do it.

It's quite the pickle. Even if we take Rees's suggestion to heart, those superstar lectures are already out there on the web. If a faculty member's ability as a teacher is no better than an adjunct's or TA's, then why not replace him or her? How do we assert the value added by having an expert tenured faculty member as a teacher? That would take us back to assessment, I fear.

Like many things in universities, we're living in a reenactment of 19th-century life here. If information and expertise are in short supply, then you need to hire these faculty experts. If we measure expertise solely in terms of knowing things (e.g. I know more about rhetoric and composition, and digital rhetoric in particular, than my colleagues at UB), then I have to recognize that my knowledge of the field is partial, that there's easy access to this knowledge online, and that there are many folks who might do as good a job as I do teaching undergraduate courses in these areas (and some who would be willing to work for adjunct pay). I think this is the nature of much work these days, especially knowledge work. Our claims to expertise are always limited. There's fairly easy access to information online, which does diminish the value of the knowledge we embody. And there's always someone somewhere who's willing to do the work for less money.

It might seem like the whole thing should come apart at the seams. The response of faculty, in part, has been to demonstrate how hard they work, how many hours they put in. I don't mean to suggest that faculty are working harder now than they used to; I'm not sure either way. The Gilbert, Scott, and Rees articles would at least indicate that we are working harder in new areas that we do not value so much. Tim Wu explores this phenomenon more generally, finding it across white-collar workplaces from Amazon to law firms. Wu considers that Americans might just have some moral aversion to too much leisure. However, he settles on the idea that technologies have increased our capacity to do work and so we've just risen (or sunk) to meet those demands. Now we really can work virtually every second of the waking day. Unfortunately Wu doesn't have a solution; neither do I. But assessment is certainly a by-product of this phenomenon.

The one piece of possibly good news comes from Steven Johnson, whose analysis reveals that the decline of the music industry (and related creative professions) predicted with the appearance of Napster and other web innovations hasn't happened. Maybe that's a reason to be optimistic about faculty as well. It at least suggests that Rees' worries may be misplaced. After all, faculty weren't replaced by textbooks, so why would they be replaced by rich media textbooks (which is essentially what the content of a flipped classroom would be)? Today people spend less on recorded music but more on live music. Perhaps the analogy in academia is not performance but interaction. That is, the value of faculty, at least in terms of teaching, is in their interaction with students, in their ability to bring their expertise into conversation with them.

Meanwhile we might do a better job of recognizing the expansion of work that Wu describes, work that ultimately adds no value for anyone. Assessment seems like an easy target. Wu describes how law firms combat one another with endless busywork as a legal strategy: i.e. burying one another in paperwork. Perhaps we play similar games of one-upmanship both among universities and across a campus. However, the challenge is to distinguish between these trends and changes in practices that might actually benefit us and our students. We probably do need to understand our roles as faculty differently.


finally, robotic beings rule the world

19 August, 2015 - 12:30

Last week in The Guardian Evan Selinger and Brett Frischmann ask, “Will the internet of things result in predictable people?” As the article concludes,

Alan Turing wondered if machines could be human-like, and recently that topic’s been getting a lot of attention. But perhaps a more important question is a reverse Turing test: can humans become machine-like and pervasively programmable.

This concern reminds me of one I've mentioned a few times here recently, coming from Mark Hansen's Feed Forward, where the capacity of digital devices allows them to intercede in our unconscious processes and feed forward a media infoscape that precedes, shapes, and anticipates our thinking. In doing so, as Hansen points out, it potentially short-circuits any opportunity for deliberation: a point which is likely of interest to most rhetoricians, since rhetoric (in its quintessential modern form anyway) hinges on the human capacity for deliberation. This is also a surprising inversion of the classic concept of cybernetics and the cyborg, where it is feedback, information collected by machines and presented to our consciousness, that defines our interaction with machines.

Put simply, the difference between feed-forward and feedback is the location of agency. If humans become predictable and programmable, does that mean that we lose agency? That we cease to be human?

Cue Flight of the Conchords:

In the distant future (the year 2000), when robot beings rule the world… Is this too tongue in cheek? Maybe, but it strikes me as a more apt pop cultural reference than The Matrix, which is where Selinger and Frischmann turn when they note that "even though we won't become human batteries that literally power machines, we'll still be fueling them as perpetual sources of data that they're programmed to extract, analyse, share, and act upon." Why is "Robots" better? Perhaps unintentionally, it presents robots and humans as one and the same. Humans may be dead, but robots are surprisingly human-like. The humans-turned-robots revolt against their human oppressors who "made us work for too long / For unreasonable hours."

The problem that Hansen, Selinger and Frischmann identify is also the problem Baudrillard terms the “precession of the simulacra” (which, not coincidentally, is the philosophical inspiration for The Matrix). And it suggests, like The Matrix, that the world is created for us, before us, to inhabit.

We might ask, even if it sounds perverse, how awful is it if people are predictable/programmable? Of course, we are (or hope to be) internally predictable. When I walk down the hall, I want to do so predictably. When I see a colleague coming toward me, I want my eyes and brain to identify her. I’d like to wave and say hello. And I’d like my colleague to recognize all of that as a friendly gesture. Deliberation is itself predictable. Rhetoric and persuasion rely upon the predictability of the audience. I suppose that if you knew everything about me that I know about me, then you could predict much of the content of this post. After all, that’s how I am doing it.

That said there’s much of this post that I couldn’t predict. Maybe the “perpetual sources of data” available to digital machines know me better. Maybe they could produce this post faster than me. Maybe they could write a better one, be a better version of me than I am. After all, isn’t that why we use these machines? For the promises they make to realize our dreams?

I think we can all acknowledge legitimate concerns with these information-gathering devices: what corporations know about us, what governments know about us, and what either might do with the information they glean. Furthermore, no doubt we need to learn how to live in a digital world, to not be driven mad by the insistent calls of social media, with its alternating appeals to our desires and to superego judgments of what we should be doing. However, such concerns are all too human; there’s nothing especially robotic about them. While we want to be predictable to ourselves and we want the world to be predictable enough to act in it, we worry about seeming or being predictable to others in a way that casts doubt on our agency.

However in many ways the obverse is true. It is our reliable participation in a network of actors that makes us what we are (human, robot, whatever).

This is a complex situation (of course). It requires collaboration between human and machine. It requires ethics, both human and robotic. It is, in my view, a rhetorical matter in the way that the expressive encounters among actors open possibilities for thought and action to be shaped. I would not worry about humans becoming robots.


Categories: Author Blogs

Neoliberal and new liberal arts

15 August, 2015 - 14:23

In an essay for Harper’s, William Deresiewicz identifies neoliberalism as the primary foe of higher education. I certainly have no interest in defending neoliberalism, though it is a rather amorphous, spectral enemy. It’s not a new argument, either.

Here are a few passages that give you the spirit of the argument:

The purpose of education in a neoliberal age is to produce producers. I published a book last year that said that, by and large, elite American universities no longer provide their students with a real education, one that addresses them as complete human beings rather than as future specialists — that enables them, as I put it, to build a self or (following Keats) to become a soul.

Only the commercial purpose now survives as a recognized value. Even the cognitive purpose, which one would think should be the center of a college education, is tolerated only insofar as it contributes to the commercial.

Now here are two other passages.

it is no wonder that an educational system whose main purpose had been intellectual and spiritual culture directed to social ends has been thrown into confusion and bewilderment and brought sadly out of balance. No wonder, too, that it has caught the spirit of the business and industrial world, its desire for great things-large enrollment, great equipment, puffed advertisement, sensational features, strenuous competition, underbidding.

the men flock into the courses on science, the women affect the courses in literature. The literary courses, indeed, are known in some of these institutions as “sissy” courses. The man who took literature too seriously would be suspected of effeminacy. The really virile thing is to be an electrical engineer. One already sees the time when the typical teacher of literature will be some young dilettante who will interpret Keats and Shelley to a class of girls.

As that last quote probably gives away, these quotes are from a different time. Both are found in Gerald Graff’s Professing Literature: the first comes from Frank Gaylord Hubbard’s 1912 MLA address, and the second from Irving Babbitt’s 1908 book Literature and the American College. Long before neoliberalism was a twinkle in the eyes of Thatcher and Reagan, universities were under threat from business and industry, and the humanities were threatened by engineering. Certainly there are some “yeah, but” arguments to be made, as in “yeah, but now it’s serious.” Nevertheless, these are longstanding tensions. I imagine one could trace them back even further, but a century ago is apt. Back then, American universities were responding to the turmoil of the 1860s and the second industrial revolution of the 1880s and 90s. Today we respond to the turmoil of the 1960s and the information revolution of the 1980s and 90s. There’s an odd symmetry really. Let’s hope we’re not verging on 30 years of global war and depression as our 1915 colleagues were.

Ultimately it’s hard for me to disagree with Deresiewicz’s call for action that we should:

Instead of treating higher education as a commodity, we need to treat it as a right. Instead of seeing it in terms of market purposes, we need to see it once again in terms of intellectual and moral purposes. That means resurrecting one of the great achievements of postwar American society: high-quality, low- or no-cost mass public higher education. An end to the artificial scarcity of educational resources. An end to the idea that students must compete for the privilege of going to a decent college, and that they then must pay for it.

However, even if high-quality, low-cost higher ed were accomplished, I’m not sure that we would get away from the connection between learning and career. Deresiewicz describes the liberal arts as “those fields in which knowledge is pursued for its own sake.” I think this is misleading. I understand his point, that scholarship doesn’t need to have a direct application or lead to profit. At the same time, I am skeptical of the suggestion of purity here. I prefer some version of the original, medieval notion of the liberal arts as the skills required by free people to thrive.

But here’s the most curious line from the article: “business, broadly speaking, does not require you to be as smart as possible or to think as hard as possible. It’s good to be smart, and it’s good to think hard, but you needn’t be extremely smart or think extremely hard. Instead, you need a different set of skills: organizational skills, interpersonal skills — things that professors and their classes are certainly not very good at teaching.” I’m not exactly sure what being “smart” or “thinking hard” mean here (beyond, of course, thinking like Deresiewicz does). But what’s really strange is that last line: why are professors and their classes not good at teaching organizational or interpersonal skills?  Is this even true?  I may be wrong but it seems to me that Deresiewicz is implying these things aren’t worth teaching.  I suppose it’s a stereotype to imagine the professor as disorganized and lacking interpersonal skills. Are we celebrating that here?

I’ll offer a different take on this. When we adopted the German model of higher education we decided that curriculum and teaching would follow research. But that was already a problem a century ago, as this passage from Graff recounts:

All the critics agreed that there was a glaring contradiction between the research fetish and the needs of most students. In his 1904 MLA address, Hohlfield speculated that “thousands upon thousands of teachers must be engaged in presenting to their students elements which, in the nature of things, can have only a rare and remote connection with the sphere of original research,” and he doubted the wisdom of requiring all members of the now-expanding department faculties to engage in such research. To maintain that every college instructor “could or should be an original investigator is either a naive delusion concerning the actual status of our educational system or, what is more dangerous, it is based on a mechanical and superficial interpretation of the terms ‘original scholarship’ or ‘research work.'” (109)

The hyper-specialization of contemporary faculty only intensifies this situation. The “solution” has been adjunctification, but that’s really more like an externality. Changing the way that we fund higher education probably makes a lot of sense to everyone reading this post. Imagining that things will be, or should be, like they used to be in the 1960s, when public higher ed and the liberal arts were in their “heyday,” seems less sensible.

If the neoliberal arts, as Deresiewicz terms them, are untenable, then we are still faced with building a new liberal arts, which is really what our colleagues a century ago did: inventing things like majors (in the late 19th century) and general education (in the early 20th), which we still employ. In the category of “be careful what you wish for,” increased public funding will undoubtedly lead to increased public accountability. I’m not sure whether you like your chances in the state capitol or in the marketplace, or even if you can tell the difference. Whatever the new liberal arts will be, they’ll have to figure that out, just as their 20th century counterparts learned to thrive in a nationalist-industrial economy and culture.

Categories: Author Blogs

What If? Special Higher Education Issue

6 August, 2015 - 13:44

Jesse Stommel and Sean Michael Morris at Hybrid Pedagogy ask, “Imagine that no educational technologies had yet been invented — no chalkboards, no clickers, no textbooks, no Learning Management Systems, no Coursera MOOCs. If we could start from scratch, what would we build?”

As the image here suggests, this reminds me of the What If? Marvel comics. The ones I remember from being a kid were from the original series where the Marvel character “The Watcher,” a kind of panoptic super-being, imagines alternate universes (e.g., what if Spiderman joined the Fantastic Four? Answer: we’d have one crappy film series instead of two).

I appreciate Stommel and Morris’ question as part of a grand tradition of sci-fi speculation. How much history would have to change to get rid of all of those educational technologies?

  • If the Union had lost the Civil War, maybe there would never have been a Morrill Act. Either way, it would have changed the shape of higher education. Similarly, a different outcome in either of the World Wars, or fighting some limited, survivable nuclear conflict in the 50s or 60s, would clearly have changed things.
  • Getting away from wars, a different outcome surrounding the Civil Rights movement or the Women’s movement would have changed access to higher education.
  • In a technoscientific-industrial context, we could ask what if the US had adapted more quickly to the post-industrial information economy, or had never become so dependent on fossil fuels in the mid-20th century?

Of course those are all wide-ranging social changes. For the purposes of this question, I think it’s more reasonable to try to imagine changes that don’t rewrite the entirety of world history or try to eliminate nationalism, capitalism, patriarchy, etc. (even if you’d want to eliminate those things, that just seems like a different thought exercise from this what if game). I suppose you could try messing around with the beginning of the computer edTech industry in the 60s or 70s or maybe intervene in the beginnings of course management systems twenty years ago or so. But I think you’d be misidentifying the problem.

In my view, what really defines American higher education is the 19th-century decision to model ourselves after German universities. It is that decision that shaped the relationship between scholarship and teaching. From there, one could look at curricular technologies like classrooms, semesters, credit hours, general education, and majors, as well as scholarly technologies like journals, laboratories, conferences, monographs, and tenure. Then there are bureaucratic-institutional technologies like departments, deans, and so on. Those are the things that continue to shape higher education, and no reworking of applications or gizmos will change that.

So for example, in my own discipline of English, none of the technologies Stommel and Morris mention make much of a difference. Eliminating textbooks and chalkboards would make some impact, but even then I’m sure most professors in English would be fine sitting around talking about novels or poems without writing on a chalkboard. English curriculum and pedagogy are almost entirely unchanged from their form when I was an undergrad in the 80s, so you’d hardly notice if more recent technologies were gone. I imagine most faculty in English would be relieved rather than upset if the obligation of using a course management system suddenly disappeared.

Here’s a paragraph from Latour’s Inquiry into Modes of Existence:

Instead of situating the origin of an action in a self that would then focus its attention on materials in order to carry out and master an operation of manufacture in view of a goal thought out in advance, it is better to reverse the viewpoint and bring to the surface the encounter with one of those beings that teach you what you are when you are making it one of the future components of subjects (having some competence, knowing how to go about it, possessing a skill). Competence, here again, here as everywhere, follows performance rather than preceding it. In place of Homo faber, we would do better to speak of Homo fabricatus, daughters and sons of their products and their works. The author, at the outset, is only the effect of the launching from behind, of the equipment ahead. If gunshots entail, as they say, a “recoil effect,” then humanity is above all the recoil of the technological detour. (230)

In short, rather than asking what technologies should we build in order to achieve an educational mission “thought out in advance,” we might instead ask what faculty and students might we build from the media ecology that we inhabit?

There’s an interesting “What if?” issue. Of course, the technologies will continue to change. As Latour would say, they are detours, zig-zags, work-arounds. And we (i.e. human subjects) are their products. We can surely ask to take a different detour, to work around a different problem, to build new technologies. But the what if question here is “What if we understood subjectivity and learning as a recoil effect of technology?” How would that shift our orientation toward higher education?

Categories: Author Blogs

speaking truth to Twitter

1 July, 2015 - 11:08

To be clear, Twitter has many possible uses, its primary one probably being making money, but, of course, its users, including me, put it to work in a variety of ways. It seems in the last year or two many academics have discovered Twitter (in much the same way that Columbus discovered America). And among academics one can also find Twitter being put to a wide range of uses, both personal and professional. Much of this is benign, but increasingly the public face of academics in social media is being defined around a fairly narrow class of tweets.

Perhaps it would be useful for someone to do a current analysis of the academic uses of Twitter and maybe even identify some subgenres among tweets. I haven’t done that analysis, so this is more like a sketch, but I am writing here about a particular tweet subgenre. In this subgenre, one essentially is making an appeal to pathos that energizes those who agree and incites those who do not. The emotion that is expressed is something like the righteous indignation that arises from an absolute certainty in the justness of one’s view and cause. It would appear as if it is often tweeted in anger, though one can only guess at the mind of another. Though such utterances can occur across media, Twitter is an excellent place to see it because the 140-character limit serves to focus the message. And clearly academics are far from the only people who engage in such expressions, but academics are an interesting case because of the relationship of these expressions to academic freedom and tenure protections.

I am not interested in adjudicating the righteousness of any particular academic’s cause, let alone weighing in on their job status. I am interested though in the rhetorical decisions behind these compositions.

It’s reasonable to propose that some of these tweets are simply posted in anger. People get angry all the time. Typically, when they are in public or professional settings, they manage to control their anger. However, this phenomenon is not simply about users who post without thinking, as a kind of spontaneous outburst. It is also about a perceived obligation to anger, a way of inhabiting online spaces, which makes these tweets a more deliberative act.

As James Poulos notes,

On Twitter, we’re not screaming at each other because we want to put different identities on cyber-display. We’re doing it because we’re all succumbing to what philosophers call “comprehensive doctrines.” Translated into plain language, comprehensive doctrines are grandiose, all-inclusive accounts of how the world is and should be.

But it’s more than that. Often the rhetorical strategy employed here is one of ad hominem attacks. When it isn’t a personal attack, it is often an emotional appeal. I suppose there’s no space for evidence in a tweet. One can only express a viewpoint. Combined with this tendency toward “comprehensive doctrines,” we get a series of increasingly extreme and divergent irreconcilable views.

I understand, in some respects, why everyday people get involved in such rhetorical warfare. I’ve written about this quite a bit recently. Academics, of course, are everyday people, so maybe that’s explanation enough for why they do what they do. However, as professionals communicating in a professional capacity, I find this rhetorical strategy simply odd. To raise this question is typically to get one of two responses. First, “I have academic freedom; I can do whatever I want.” Or second, “Are you trying to silence me? Then you must be (insert ad hominem attack here).”

All of this has made me realize that I have been mistaken about the underlying ethics of academia on two crucial accounts.

1. I thought that academia was based on a fundamental pluralism, where we are obligated to be open to multiple possibilities and viewpoints. This doesn’t mean that we cannot hold a particular view or argue for it, but, at least in my view, it would obligate us to participate in forums where different views are heard and considered. Twitter can work that way, but it isn’t easy.

2. I thought that we couldn’t be “true believers” in relation to our subjects. Even in a first-year composition class, a typical piece of advice on a research paper assignment is to say “don’t ask a research question that you think you already know the answer to.” As scholars, if we are not open to changing our minds and views on the subject we study, then what’s the point?

But, as I said, I was mistaken about this. Academia is often about espousing a single viewpoint with little or no consideration for alternatives, except for the purposes of developing strategies to attack them. Social media did not create this condition. You can blame postmodernism or cultural studies for creating conditions where we look at all scholarship as ideologically overdetermined, but I don’t think that’s what’s going on here. If anything, such methods should create greater skepticism and uncertainty. Maybe academia has always been this way, only ever pretending to the open consideration of alternative viewpoints that we insist upon from our students. But I don’t think that’s true. I think, at least in the humanities where I mostly dwell, we have become increasingly entrenched in our views. Maybe that’s in response to perceived threats to our disciplines; maybe it’s evidence of disciplinary fossilization. I don’t know. However, it is fair to say that social media has intensified this condition.

Regardless, this practice of speaking truth to Twitter, which would almost seem to require revising the old refrain, “The people, retweeted, can never be defeated” (see, it even rhymes better now), points once again to our continuing struggles to develop digital scholarly practices. Is the future of digital scholarship really going to be clickbait and sloganeering?

Categories: Author Blogs

digital ethics in a jobless future

25 June, 2015 - 13:22

What would/will the world look like when people don’t need to work or at least need to work far less? Derek Thompson explores this question in a recent Atlantic article, “The World Without Work.” It’s an interesting read, so I recommend it to you. Obviously it’s a complex question, and I’m only taking up a small part of it here. Really my interest here is not on the politics or economics of how this would happen, but on the shift in values that it would require.

As Thompson points out, to be jobless in America today is as psychologically damaging as it is economically painful. Our culture, more so than that of other industrialized nations, is built on the value of hard work. We tend to define ourselves by our work and our careers. Though we have this work hard/play hard image of ourselves, we actually have a hard time with leisure, spending much of our time surfing the web, watching tv, or sleeping. If joblessness leads to depression, then that makes sense, I suppose. In a jobless or less-job future, we will need to modify that ethos somehow. Thompson explores some of the extant manifestations of joblessness: makerspaces, the part-time work of Uber drivers and such, and the possibility of a digital-age Works Progress Administration. As he remarks, in some respects it’s a return to pre-industrial, 19th-century values of community, artisanal work, and occasional paid labor. And it also means recognizing the value of other unpaid work, such as caring for children or elders. In each case, not “working” is not equated with not being productive or valuable.

It’s easy to wax utopian about such a world, and it’s just as easy to spin a dystopian tale. Both have been done many times over. There is certainly a fear that the increasing precarization of work will only serve to further exacerbate social inequality. Industrialization required unions and laws to protect workers.  How do we imagine a world where most of the work is done by robots and computers, but people are still able to live their lives? I won’t pretend to be able to answer that question. However, I do know that it starts with valuing people and our communities for more than their capacity to work.

I suppose we can look to socialism or religion or gift economies or something else from the past as providing a replacement set of values. I would be concerned, though, that these would pose problems similar to those our current values present in adapting to a less-job future.

Oddly enough, academia offers a curious possibility. In the purest sense, the tenured academic as a scholar is expected to pursue his/her intellectual interests and be productive. S/he is free to define those interests as s/he might, but the products of those pursuits are freely accessible to the community. In the less-job future I wonder if we might create a more general analog of that arrangement, where there is an expectation of contribution but freedom to define that contribution.

Of course it could all go horribly wrong and probably will.

On the other hand, if we are unwilling to accept a horrible fate, then we might try to begin understanding and inventing possibilities for organizing ourselves differently. Once again, one might say that rhetoricians and other humanists might be helpful in this regard. Not because we are more “ethical,” but because we have good tools and methods for thinking through these matters.



Categories: Author Blogs

hanging on in quiet desperation is the English way

23 June, 2015 - 20:10

The song refers to the nation, of course, and I’m thinking of a discipline where perhaps we are not so quiet.

Here are two tangentially related articles, and both are tangentially related to English, so many tangents here. First, an article in Inside Higher Ed about UC Irvine’s rethinking of how they will fund their humanities phd programs: a 5+2 model where the last two years are a postdoctoral teaching fellowship. Irvine’s English department hasn’t adopted it (maybe they will in the future), but it is an effort to address, more generally, the challenges of humanities graduate education that many disciplines, including our own, face. The second article, really an editorial in The Chronicle, is Eric Johnson’s argument against the perception (and reality) that college should be a site of workforce training. It is, in other words, an argument for the liberal arts, but it is also an argument for more foundational (i.e. less applied, commercial) scientific research.

These concerns interlock over the demand for more liberal arts education and the resulting job market it creates to relieve some of the pressure on humanities graduate programs.

Here’s a kind of third argument. Let’s accept the argument that specialized professionalizing undergraduate degrees are unfair to students. They place all the risk on the students who have to hope that their particular niche is in demand when they graduate, and, in fact, that it stays in demand. In this regard I think Johnson makes an argument that everyone (except perhaps the corporations that are profiting) should agree with: that corporations should bear some of the risk/cost of specialized on-the-job training, since they too are clearly profiting.

Maybe we can apply some of that logic to humanities graduate programs and academic job markets. I realize there’s a difference between undergraduate and graduate degrees, and that the latter are intended to professionalize. But does that professionalization have to be so hyper-specialized to meet the requirements of the job market? I realize that from the job search side, it makes it easier to narrow the field of applicants that way. And since there are so many job seekers out there, it makes sense to demand specific skills. That’s why corporations do it. I suppose you can assume it’s a meritocratic system, but we don’t really think that, do we? If we reimagined what a humanities doctoral degree looked like, students could easily finish one in 3 or 4 years. No, they wouldn’t be hyper-specialized, and yes, they would require on-the-job-training. But didn’t we just finish saying that employers should take on some of that burden?

Here’s the other piece… even if one accepts the argument (and I do) that undergrads should not be compelled to pursue specialized professionalizing degrees, it does not logically follow that they should instead pursue a liberal arts education that remains entrenched in the last century.

In my view, rather than creating more hyper-specialized humanities phds, all with the hope that their special brand of specialness will be hot at the right time so that they can get tenure-track jobs where they are primed to research and teach in their narrow areas of expertise, we should produce more flexible intellectuals: not “generalists” mind you, but adaptive thinkers and actors. Certainly we already know that professors often teach outside of their specializations, in introductory courses and other service courses in a department. All of that is still designed to produce a disciplinary identity. This new version of doctoral students wouldn’t have been fashioned by a mini-me pedagogy; they wouldn’t identify with a discipline that requires reproducing.

So what kind of curriculum would such faculty produce? It’s hard to say exactly. But hopefully one that would make more sense to more students than what is currently on offer. One that would offer more direct preparation for a professional life after college without narrowly preparing students for a single job title. In turn, doctoral education could shift to prepare future faculty for this work rather than the 20th-century labors it currently addresses. I can imagine that many humanists might find such a shift anti-intellectual, because, when it comes down to it, they might imagine they have cornered the market on being intellectual. Perhaps they’re right. On the other hand, if being intellectual leaves one cognitively hamstrung and incapable of change, a hyper-specialized hothouse flower, then in the end it’s no more desirable than the other forms of professionalization that we are criticizing.

Categories: Author Blogs