Digital Digs (Alex Reid)
I recall where I first encountered the word skeuomorph, in Katherine Hayles’ How We Became Posthuman, which she defined as “a design feature that is no longer functional in itself but refers back to a feature that was functional at an earlier time. The dashboard of my Toyota Camry, for example, is covered by a vinyl molded to simulate stitching” (17). A good definition, though the OED offers a more general one: “an object or feature copying the design of a similar artefact in other material.”
Somewhere in there is a good description of conversations on Facebook. As you know, conversation on Facebook is uncommon; not rare, perhaps, but uncommon. If you read your feed, it is basically a series of non sequiturs: people talking past one another, unaware of and incapable of knowing that they appear beside one another in your feed. Conversation happens, when it happens, in the back and forth of replies. In most cases, at least in my experience (I haven’t done a study or anything), replies are written as if the author had not read the other replies in the thread. Most of the time reading those other replies seems unnecessary, as the nature of the reply is an expression of sympathy, laughter, congratulations, or something like that. In other words, the reply isn’t really a gesture to start a conversation.
When we limit our analysis to a series of replies to a status update where it appears that most of the authors have read at least a significant portion of the prior replies and seem to be engaging in some kind of back and forth, I suppose we might call that a conversation. But it might be better to think of it as a skeuomorph of a conversation. It’s not that hard to start building a list of the ways a series of Facebook replies differs from a verbal conversation or a series of handwritten letters or even an exchange of emails on a listserv. We can also see how it differs from conversations on Twitter and other social media. I’m not going to make that list right now.
As near as I can figure, the purpose of Facebook is to collect marketing data on its users. We all know that Facebook is a business in the business of making money. It offers a service to its users in exchange for our sharing/producing this data. If you want to congratulate or console or be congratulated or consoled, Facebook works well. If you want to share something funny or cute or interesting or exciting, Facebook is great for that as well, especially if you can find a group that shares your particular definition of those things. But, as far as I can figure, none of that stuff requires much heavy lifting from conversation.
So what happens when we try something a little more rhetorically challenging in that space?
The recent events in Paris and Facebook’s resulting actions (specifically the Safety Check and the temporary profile pictures, i.e., the flag overlay on the profile pic) are an example of this. There’s a good article in Wired about some of this (which I learned about from Facebook), though maybe the key line in that article is this:
The fight over whether the people who changed their profile photos are sheep and part of a “social experiment” isn’t a worthwhile one (and, if you’re using Facebook, you should already know everything you do is part of a giant social experiment)
Mostly what I’ve seen in my feed are the knock-on effects of these arguments: users talking about ignoring or de-friending others out of the ill will produced here. For my purposes it doesn’t really matter if the arguments are about whether or not to drape one’s profile pic in the French flag, or whether Facebook should expand Safety Check to other situations, or whether the US should accept Syrian refugees, and so on. Facebook takes up these messages the same way it does things like “Hey, I just got a cool new job!” or “Look at this cute puppy!” Those are things we can just Like or answer with a quick reply. Why would we think the same interface would work well for more complicated and contentious matters?
The answer is that we don’t expect it to work. No one really expects persuasion or consensus-building in these “conversations.” Those kinds of discussions require some very specific actor-networks and assemblages. Things like university courses, courtrooms, and laboratories are designed to do that. Certain written and other media genres can work that way too, but only in the context of some larger network. E.g., I can read an academic article on my home computer and change my thinking on an issue because of the way that I am tied to that community.
But Facebook doesn’t work that way, and it’s hard to imagine that it ever could. So even when I read (or possibly participate in) a FB thread with academic friends about an academic issue, it still can’t really work the way academic conversation does elsewhere. Sometimes I feel like I can smell the digital oil burning as the interface creaks against the tensions of trying to have a “real” conversation. It’s a skeuomorph: it looks like a conversation, but…
Is that a criticism of FB? Sure, why not. Is it a condemnation? I don’t think so. Maybe it’s an argument that if what we desire is a forum for working through significant, public, political disagreements then we haven’t built it yet. Personally I’m not convinced we want such a thing, but that’s another matter.
I saw Kathleen Yancey speak last week at RIT about her latest research on teaching for transfer. I find the focus on transfer a little curious but important to discuss. Fundamentally, almost tautologically, the purpose of teaching and learning is to acquire knowledge and skills that have value in contexts beyond the one in which they were first encountered (e.g., the classroom). On some basic level, this is how mammalian memory functions. One might say that all social institutions are built upon the human biological capacity for memory, a capacity that is altered by symbolic behavior, writing, other media, and various data storage, networking, and retrieval processes. And when I say altered I mean that quite literally: the plasticity of the brain means that it is shaped by these technosocial assemblages.
Anyway, schooling is obviously one of these assemblages, one with some specific ideas about how it would like human memory to function and what the successful “transfer” of knowledge or skills from one context to another would look like.
For whatever reason (and one could go into the historical reasons for it), composition studies, among all academic fields, has been particularly wedded to the notion of transfer, specifically to the idea that writing instruction in FYC will transfer to future college courses and make students better writers in those contexts. It has been a troubling promise, and there’s a fair degree of skepticism about the utility of what might be transferred from FYC to other contexts. One thing we can know for sure, however, is that humans definitely bring ideas about writing and writing practices with them from one situation to another. Otherwise, students wouldn’t show up writing five-paragraph themes in our classes.
So there’s no doubt that when students leave FYC and enter some future class that requires writing (or enter a workplace that asks them to write, or write for other reasons) that they will “transfer” memories, concepts, and practices. Yancey talked a fair amount about this, noting both the theories of writing students bring into a class and the theories of writing that might already exist in a given course, discipline, workplace, etc.
I am going to speculate that nothing I’ve said is especially controversial to this point. Let’s see if I can rev it up a bit.
Given all these conditions, in a composition classroom I think one is faced with two basic options.
- You can teach students academic writing as it interests you (and as you have expertise/authority with it). If you’re in English Studies (which you almost certainly are), then that’s probably essayistic writing. Maybe it’s rhetorical analysis, maybe it’s literary or cultural analysis, but you get the point.
- You can teach students how to investigate and adapt to new writing contexts. You could say this is rhetorical analysis and maybe it falls in that category, but there’s plenty of rhetorical analysis that wouldn’t do this.
Not surprisingly, I’m going to explore the second option here, but first I want to give some more attention to option #1. As we know, part of the longstanding problem of FYC is the perception that it has no content. That void has been filled with literary texts, thematically organized essays, cultural theory, and most recently composition scholarship itself. This desire for content has always been more or less at odds with a desire to focus on process. We seem stuck on the treadmill of a fairly generic, recursive set of activities (invent, draft, organize, revise, polish). The curious thing is that the selection of content seems to have almost no impact on that writing process. That is to say, generally speaking, none of the content that we bring into the classroom seems to have any relevance to how we think about the practice of writing itself.
Now let me return to option #2 by way of this slight detour. In her contribution to Thinking with Bruno Latour in Rhetoric and Composition, Marilyn Cooper poses the following questions:
What if writing teachers and their students thought of research as empirical and experimental—as producing new knowledge, not reporting what is known? What if they thought of the facts they discover as provisional, part of a trajectory of knowledge, and not as final truths? What if they thought of the readers of their texts as colleagues who provide necessary validation of their facts, not as editors? What if they thought of their goal in writing as the direct perception of reality, rather than as defending a point of view?
Latour’s “second empiricism,” which he details in An Inquiry into Modes of Existence, is an expansion of a more familiar refrain in his work: an exhortation to listen to actors, to follow them, to seek to describe what they are doing, and not to leap ahead to theorization or explanation or argument. Cooper is following that out here in her essay and envisioning a writing practice that is empirical and experimental.
How does this connect with that second option? Basically, we’d be talking about a composition course where the activity was a (second) empirical investigation of writing and writing practices. This isn’t exactly what Cooper has in mind, and I will admit that it has the same potential to be “boring” as any academically-minded, disciplinary course does from anthropology to zoology. So sure, it could be boring, or not. But the purpose, as noted above, would be to develop a rhetorical-analytical skill specifically designed to assist in adapting to new writing situations.
Is that all rhetorical analysis? I don’t think so. A lot of rhetorical analysis can be formalistic (a kind of rhetorical version of new critical close reading) or cultural-critical or very theoretical/philosophical. Those are all fine intellectual and academic activities (as are literary studies and cultural studies for that matter), but for this particular purpose, one is first and foremost looking for an empirical description of writing and writing practice, perhaps beginning (and ending) with one’s own.
I would hypothesize that when one did that, one would discover a number of actors, human and nonhuman, significantly involved in any writing activity. This might interestingly shift the traditional focus of composition, which has been on individuals and then subjects, into a wider media-ecological perspective. One effect of this shift would be the development of different descriptions of process. That is, one would actually have course content that informs our understanding of how writing happens.
Marc Bousquet has a piece in Inside Higher Ed on the topic of alt-ac careers and the disciplinary-institutional motives of departments and universities in relation to them. I really don’t disagree with him, particularly when he writes:
faculty like having graduate programs and, perhaps more to the point, administrators need them. For faculty, grad programs confer status, provide emotional gratification of several kinds and legitimate the teaching of fewer, smaller classes. Crucially, however, administrators need doctoral programs across fields to maintain the institution’s Carnegie classification.
He suggests this is a cynical explanation for the motives of having doctoral programs even when there are clearly not enough tenure-track jobs for all the students. But I don’t think it is really all that cynical. Faculty enjoy teaching graduate courses and graduate students. On its face, there’s nothing wrong with that. Similarly, I don’t think there’s necessarily anything wrong with administrators seeking to improve the reputation of their institutions by having such programs. And as long as students freely enter those programs without illusions about what they offer, then I’m not sure there’s any malfeasance here.
What would be cynical and deeply problematic is if graduate programs sold themselves like commercials for for-profit online universities that promise entry into the middle class when so many of their students just end up deep in debt. That is, if doctoral programs are promising tenure-track jobs and a chicken in every pot, then that’s unethical. If they can cite real statistics about completion rates and tenure-track job placement at the point when students are choosing to attend, then students can make informed decisions, and I think those programs have done their job.
As Marc points out, PhDs in the humanities have low unemployment rates and generally good jobs and incomes, even if they don’t get tenure-track jobs. That said, in strictly financial terms, it probably doesn’t make a lot of sense to get a PhD in the humanities. Even if you land a tenure-track job, you probably could have done better financially following a different path. But if that were your priority, you probably wouldn’t be considering a humanities PhD anyway.
Thinking back on my own decision a little over 20 years ago to go to a PhD program: certainly I had it in my mind that I would like to become a professor, but if you’d told me there was only a 20-30% chance that I’d find a tenure-track job (which no one did), I don’t think it would have deterred me. After all, I’d just spent two years in New Mexico writing poetry in a creative writing MA. I continued on to a doctoral program because I enjoyed my master’s program, not because I thought it would get me a great job. Somewhere in the middle of the degree program I got married and started thinking about having a career. That pragmatism influenced my decision to specialize in rhetoric. A few years later, when I was a postdoc at Georgia Tech in the late 90s, many of my colleagues were leaving academia for start-ups or technical writing jobs or management consulting gigs. I happened to land a tenure-track job, but it could easily have gone a different direction. So I’m hardly a role model, but judging from the conversations I’ve had with our incoming grad students, my orientation to job prospects as a new grad student was not atypical.
In the end, I am completely on board with Bousquet’s suggestion that we insist on creating more tenure-track teaching-intensive positions or, at minimum, such positions with lengthy contracts. For me, that’s a separate although related concern.
So, the bottom line:
- Spending 8 years getting a PhD in the humanities probably doesn’t make good financial sense. So don’t do it for that reason. (I know, that’s a shocker.)
- If you want to get a PhD for other, non-financial reasons, then, as they say, “it’s a free country.” However, it’s important to have both a national and a program-level understanding of the career prospects of your degree, because at some point you will be looking for a job and you should at least make an informed decision.
- For different reasons, we should make an effort to create better careers for college teachers, though even if we did, point #1 would still apply.
- Part of creating such college positions should be thinking about the alternative-academic careers PhDs pursue on our campuses and ensuring, as well as we can, that those are well-paid and secure positions.
Donna Lanclos and David White offer some remarks in Hybrid Pedagogy on “The Resident Web and its Impact on the Academy,” the “resident web” being that portion of online spaces which “involve the individual being present, or residing, to a certain extent online,” i.e., social media. Their argument is, in part, a familiar one, indicating that “New forms of scholarly communication and networking, manifested as digital tools, practices, and places such as blogs and Twitter, create a tension between the struggle to establish one’s bona-fides in traditional ways, and taking advantages of the benefits of new modes of credibility, many of which are expressed via the Web.” And it’s that last part which interests me here, the “new modes of credibility.” What are those?
As Lanclos and White describe, “When someone is followed on Twitter, it can be as much for the way they behave — how they project character and a kind of persona — as it is for the information they can provide.” And what kind of character/persona is attractive?
Acquiring currency can be about whether a person is perceived to be vulnerable, not just authoritative, alive and sensitive to intersections and landscapes of power and privilege: As Jennifer Ansley explains, “In this context, ‘credibility’ is not defined by an assertion of authority, but a willingness to recognize difference and the potential for harm that exists in relations across difference.” In other words, scholars will gain a form of currency by becoming perceived as “human” (the extent to which ‘humanness’ must be honest self-expression or could be fabricated is an interesting question here) rather than cloaked by the deliberately de-humanised unemotive academic voice.
My first thought here goes to Foucault’s investigation of technologies of confession in The History of Sexuality. Foucault discusses the Christian confessional, but I’m thinking more about his investigation of writing as a confessional technology. My second thought is of Kittler, in Gramophone, Film, Typewriter, where he remarks on the pre-typewriter perception of a connection between the fluidity of handwriting and a kind of honesty of expression. It’s hardly news that social media, from LiveJournal blogs through Facebook and YouTube to Instagram or Yik Yak and beyond, has been a site of confessions. These sites have generally offered a feeling of spontaneous utterance that is associated with honesty and confession.
What I think is curious here is Lanclos and White’s assertion of the development of academic status through these rhetorical practices. As they point out, impersonal objectivity has been, and really remains, at the foundation of academic knowledge. Even in discourses where subjectivity is hard to mask, like literary or rhetorical analysis, arguments must be built from textual evidence, scholarly sources, and established methods. So what role can these confessional performances play in building academic reputation?
To be “honest,” I am skeptical. There’s no doubt that the ability to create and maintain weak social bonds (i.e., networking in the non-technical, social sense) is valuable in almost every professional enterprise, and in academic terms that means building relationships with potential editors, reviewers, collaborators, hiring committee members, and more generally an audience for one’s work. In some respects this was more true in the 50s and 60s, when academia was more of an old boys’ network, than it is now. Clearly in those days informal social bonds were largely created and maintained face-to-face, which we still do and which, as far as I can tell, is the primary reason for having conferences. As such, I don’t mean to suggest there is no value in building such relationships. And there may even be some prejudice, some semi-conscious subjective preference, to find those with whom we build such bonds to be credible. In effect, the sense that someone has confessed, has bared their soul, has exposed their neck to our teeth, makes us more inclined to believe them. Perhaps it’s just the curse of being a rhetorician, or maybe it’s some congenital incapacity on my part to trust others (oh look, that was almost a confession), but if you were investigating something that really mattered to you, would these kinds of confessions really sway your judgment?
Lanclos and White end by asserting that
As scholars we need to put aside anachronistic notions of knowledge being produced by epistemologically neutral machines and embrace the new connections between credibility and vulnerable humanity which the Resident Web brings. In tandem with this, as institutions we need to recognise this shift by negotiating the new forms of risk online and supporting increased individual agency without reneging on our responsibility to protect and nurture those in our employ.
I can certainly agree with the first part of the first sentence. There are no epistemologically neutral machines for knowledge production. From a Latourian perspective that would make no sense. If you have a machine for the purpose of producing knowledge, how could it do/produce knowledge and have no effect on (i.e., be neutral toward) the knowledge? It would be like having a movement-neutral automobile. However, the connections between credibility and vulnerable humanity are not new, though the capacities of the Resident Web do shape this longstanding rhetorical practice in new ways. Furthermore, I’m not sure what is being asked in the imperative that we need to “embrace” these connections. Embrace itself is an interesting word choice, as it suggests an affective response as opposed to, say, respect, acknowledge, value, reward, or some other similar verb. And I’m not really sure what that last sentence is asking for. I think it is suggesting that academia needs to protect its students, staff, and faculty from the potential risks of social media (with which we are now all familiar). Of course I’m fairly sure that that doesn’t apply to “confessions” or honest expressions that we find racist, sexist, or otherwise offensive, because those bastards should clearly be pilloried, right? In other words, I don’t see how this happens, at least not in a general way. As their article does point out, these are (rhetorical) performances. Vulnerability here is a genre, just as the speech in a confessional is. Maybe we need to “embrace” this genre. I’m not really sure why. Perhaps it is simply a recognition that academics are increasingly exposed.
I suppose I would push back in the other direction, a direction Lanclos and White only briefly point toward when they note that the resident web “largely takes place in online platforms run by multinational corporations.” Foucauldian confessions were part of a disciplinary culture. Digital confessions might be articulated more as part of a Deleuzian control society. They become modulations in an algorithmic, fed-forward subjectivity. Maybe we shouldn’t embrace such things. Maybe instead we need to be more cautious and at the same time more experimental in our skepticism over the value of the performance of vulnerability as a rhetorical strategy.
Sherry Turkle’s recent piece in The New York Times, “Stop Googling. Let’s Talk,” appears to take on the key points of her latest book, Reclaiming Conversation (also reviewed in the NYT). Turkle reports on a decline in empathy, particularly among younger people, which she asserts is a result of emerging technologies: social media and especially smartphones. While she cites some research in support of this claim (research which itself only suggests there might be a connection between technology and decreased empathy), Turkle also says, “In our hearts, we know this, and now research is catching up with our intuitions.” An interesting rhetorical appeal, since research so often demonstrates counter-intuitive discoveries.
But here’s a more interesting line from Turkle: “Our phones are not accessories, but psychologically potent devices that change not just what we do but who we are.” Indeed, though the distinction between doing and being is not so easily made or maintained. The point though is that we are changing. We’ve always been changing, though maybe now we are in a period of more rapid change. She writes that “Every technology asks us to confront human values. This is a good thing, because it causes us to reaffirm what they are.” And I wonder at the choice of “reaffirm.” Why re-affirm? Because human values are never changing? Why not discover or construct?
The loss of empathy and general human connection Turkle describes is moving, and the everyday stories of families sitting around the dining room table separated by screens are familiar. At the same time, you’d almost think we’d left behind some idyllic society of close families, friendships, and self-reflection. That is, you might think that if you weren’t able to remember the 1990s. If this argument is going to be floated on what we know in our hearts, then what I know is that, growing up in the 70s and 80s, I was hardly surrounded by “empathy.” Not familial and certainly not from peers. Turkle relates a story of a teen lamenting her dad’s googling some fact at the dinner table (I can be guilty of that). When I was a kid, instead of googling we were reaching for the World Book (that’s like an old-fashioned print version of Wikipedia, kids). As a teenager, when I was home I spent my evenings in my room listening to music and/or reading. My teens also tend to sit in their rooms, though they’re mostly watching videos or playing video games (or reading). I’m not sure how much of a difference that is. However, I don’t think I would base an argument on “heart knowledge.” I’m sure other teens today spend a lot of time on social media and texting. Just not my kids or their friends… I wonder who’s more empathetic? The teen who is really interested in what her friends are doing, thinking, or feeling? Or the one sitting silently in a room reading her book?
Turkle certainly wants to argue for the latter, which is perhaps not what we would “know in our hearts,” because for her, empathy begins with solitude.
One start toward reclaiming conversation is to reclaim solitude. Some of the most crucial conversations you will ever have will be with yourself. Slow down sufficiently to make this possible. And make a practice of doing one thing at a time. Think of unitasking as the next big thing. In every domain of life, it will increase performance and decrease stress.
But doing one thing at a time is hard, because it means asserting ourselves over what technology makes easy and what feels productive in the short term. Multitasking comes with its own high, but when we chase after this feeling, we pursue an illusion. Conversation is a human way to practice unitasking.
An argument near and dear to every painfully introverted academic you’ve ever met. And of course we all know there’s no better place to find empathy than in a professor’s office hours!
Meanwhile I’m especially interested in this one sentence: “Multitasking comes with its own high, but when we chase after this feeling, we pursue an illusion.” I am reminded of Plato’s concerns for the pharmacological effects of writing, which was perhaps the first “psychologically potent device,” unless one wants to count language itself (which I would). What is the “illusion” that we are pursuing here? Turkle doesn’t say, but there are certainly commonplace answers. An illusion that we are “connecting” with others via social media, that we are “getting things done” when we are multi-tasking, and perhaps even that we are really “thinking” with so many screens and distractions.
To channel Plato (and Derrida), we might say that our smartphones and digital media are a pharmakon for thought.
However, here are my confessions on these matters. At this moment I am at my computer with several screens open and my phone on the desk. But I don’t have Twitter, Facebook, or any other social media apps set to alert me. (I did get an update on my phone on the team selection for England’s national soccer team from ESPN.) I might check those sites a couple times a day and scroll through the updates. On occasion I get a little more involved in some conversational thread. My point is that maybe I’m already living the life Turkle recommends. In terms of my scholarly work, I spend a lot of time alone.
Despite all her concerns about the psychological effects of these devices, in the end Turkle seeks a technological solution in the redesign of smartphones. She asks:
What if our phones were not designed to keep us attached, but to do a task and then release us? What if the communications industry began to measure the success of devices not by how much time consumers spend on them but by whether it is time well spent?
In his review, Jonathan Franzen is skeptical of such hopes, as they seem to run counter to economic imperatives. I suppose I would say that all media technologies operate this way. They all have pharmacological effects. One can become addicted to books or TV or video games or the Internet or a smartphone. So while I can certainly share in Turkle’s general call that we reflect on our use of technology (one could say I’ve made a career of that), I would end where I started by suggesting that our task is not to “reaffirm” human values but to invent or discover them as emergent from our participation in a media ecology.
We do things. It’s an interestingly Latourian idiomatic expression, a kind of dancer-and-the-dance moment. And in the moment of that linguistic confusion, we become those things: consumers, workers, believers, lovers, and so on. Not in a permanent sense though; we are always moving from one thing we do to another. One of the things we do, increasingly and often without much thought, is interact with algorithms. Sadly there’s no convenient “-er word” for that, but it is a thing we do nonetheless.
In a recent Atlantic article Adrienne LaFrance reports that “Not even the people who write algorithms really know how they work.” What does she mean by that? Basically that no one can tell you exactly why you get the particular results that you get from a Google search or why Facebook shows you one set of status updates rather than another. (I notice I get a very different set of updates from my phone’s FB app than I do from my web browser.) And of course that goes on and on, into the ads that show up on the websites you visit, the recommendations made to you by Amazon or Netflix and other such sites, etc.
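To make that opacity a little more concrete, here is a minimal sketch in Python, assuming nothing about Facebook’s or Google’s actual systems; the feature names and weights below are invented for illustration. Even in a toy ranker, the ordering you see emerges from weights nobody hand-picked, which is part of why “why did I get these results?” has no tidy answer.

```python
# A toy feed ranker (illustrative only; real platforms use learned
# models with vastly more parameters that no single person inspects).
import random

# Pretend these weights came out of a machine-learning pipeline.
# Nobody chose them, and nobody can say what any one of them "means."
learned_weights = {
    "recency": random.random(),
    "friend_affinity": random.random(),
    "past_clicks_on_similar": random.random(),
    "advertiser_value": random.random(),
}

def score(post):
    """Collapse the opaque weights into a single ranking number."""
    return sum(learned_weights[k] * post[k] for k in learned_weights)

posts = [
    {"id": "cute puppy", "recency": 0.9, "friend_affinity": 0.2,
     "past_clicks_on_similar": 0.7, "advertiser_value": 0.1},
    {"id": "news story", "recency": 0.4, "friend_affinity": 0.8,
     "past_clicks_on_similar": 0.3, "advertiser_value": 0.6},
]

# The feed you see is just the sorted output; the "why" is buried in
# the interaction of data and weights.
for post in sorted(posts, key=score, reverse=True):
    print(post["id"], round(score(post), 3))
```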
It would be easy enough to call this capitalism at work. No doubt these corporations are making money from these algorithmic operations (and if they didn’t, they’d change them). But that doesn’t mean they understand how they work either. It would also be understandable if one responded with a degree of concern, maybe even paranoia, over the role these inscrutable machines play in forming our social identities. LaFrance points, for example, to the role their data collection might play in decisions about loan applications and such. However, at this point I’d want to recall Ian Bogost’s earlier Atlantic article about our tendency to overvalue, whether by demonizing or deifying, the power of algorithms.
That said, it might be useful to put this conversation in the context of Robert Reich’s New York Times editorial, where he argues that “Big tech has become way too powerful.” As he observes:
Now information and ideas are the most valuable forms of property. Most of the cost of producing it goes into discovering it or making the first copy. After that, the additional production cost is often zero. Such “intellectual property” is the key building block of the new economy. Without government decisions over what it is, and who can own it and on what terms, the new economy could not exist.
But as has happened before with other forms of property, the most politically influential owners of the new property are doing their utmost to increase their profits by creating monopolies that must eventually be broken up.
Certainly algorithms are among the most valuable forms of that new property. And their legal status is another thing we might add to the complex network that makes up an algorithm: an algorithm is not just code and data, but also servers, data networks, server farms, electricity, programmers, technicians, and other human workers. And it is also a legal entity, created by intellectual property law. Reich’s point is that when we argue between government control and free markets we miss the point. Without government there cannot be a free market. In a fairly obvious example, without police, courts, and jails, how would the concept of property function? Reich suggests that we may want to structure the market differently in relation to these patents if we want to protect ourselves against this growing monopoly.
In the specific context of algorithms, one might wonder about the social-cultural value of proliferating them, and thus of shifting the rules of the market to encourage proliferation. It may not be possible, as LaFrance concludes, to exert intentional control over what algorithms will do. This makes sense, because I can’t predetermine what an algorithm will show me without already knowing what it is possible for it to find, which, of course, I cannot. However, it is possible to have a variety of algorithms showing us many different slices of the informational world, a variety that would not shift the overall role of algorithms in our lives but would diminish the influence of an increasingly narrow algorithmic monopoly.
In rhetoric, we often talk about discourse communities, that is, communities that are formed through texts and textual practices. The discourse communities in which we participate, as we typically say, shape our identity. These communities might correspond to our family, where we grew up, our gender and ethnicity, and later our professions, our religious beliefs, our politics, etc. Our digital discourse communities, filtered through social media and search engines, are mediated by algorithms. It would be too much to say they are determined by algorithms, but those calculations are a significant shaping force. If, particularly in rhetorical-discursive terms, we are readers and writers, then we are the things we read, the things we write in response to what we read, and the community we encounter through reading. In the digital world, these are algorithmic productions.
Algorithms are objects persisting in and dependent upon an information-media ecology that is not simply digital but is also material and economic, legal, and living (i.e. it involves humans and is generally part of the Earth). Humans (some) write algorithms, and humans (many) interact with them. We make laws about them, and we try to control them. But we cannot fully understand them or what they do or why they do what they do (or even if asking why makes sense). They are our effort to interact with a mediascape that is vaster and faster than our ability to understand, despite our role in its creation.
How can there be a contemporary rhetoric, even one that wants to focus solely on human symbolic acts, that is not significantly algorithmic?
Take as evidence these two recent articles in The Atlantic by Conor Friedersdorf, “The Rise of Victimhood Culture” and “Is ‘Victimhood Culture’ a Fair Description?” These articles take up research by sociologists Bradley Campbell and Jason Manning (“Microaggression and Moral Cultures”). As Campbell and Manning observe:
In modern Western societies, an ethic of cultural tolerance – and often incompatibly, intolerance of intolerance – has developed in tandem with increasing diversity. Since microaggression offenses normally involve overstratification and underdiversity, intense concern about such offenses occurs at the intersection of the social conditions conducive to the seriousness of each. It is in egalitarian and diverse settings – such as at modern American universities – that equality and diversity are most valued, and it is in these settings that perceived offenses against these values are most deviant.
They also make the fairly obvious observation (which I’d like to explore further in a moment) that
As social media becomes ever more ubiquitous, the ready availability of the court of public opinion may make public disclosure of offenses an increasingly likely course of action. As advertising one’s victimization becomes an increasingly reliable way to attract attention and support, modern conditions may even lead to the emergence of a new moral culture.
However, the part of the article that becomes the focus of Friedersdorf’s articles comes at the end, where Campbell and Manning contend that historically we have had an “honor culture,” in which people typically resolved disputes unilaterally, often through violence (think duels), and then a “dignity culture,” in which people turned to third parties (e.g., courts) to resolve disputes but tended to ignore microaggressions. Today we find ourselves in what they term a “victimhood culture,” which is
characterized by concern with status and sensitivity to slight combined with a heavy reliance on third parties. People are intolerant of insults, even if unintentional, and react by bringing them to the attention of authorities or to the public at large. Domination is the main form of deviance, and victimization a way of attracting sympathy, so rather than emphasize either their strength or inner worth, the aggrieved emphasize their oppression and social marginalization.
In a moment of all-too-predictable social media irony, the response to Friedersdorf’s first article is to claim injury over the microaggression in the term “victimhood culture,” a reaction which provokes the second article, which includes an interview with Manning explaining their choice of the term and their awareness of the response it might provoke.
But let me return briefly to the social media point. I imagine there’s an argument to be made tracing the intersection of information-media technology with moral codes (e.g., court systems relying on the mechanization and then industrialization of print, typewriters, catalog systems, etc.). So the notion of a new digital morality or ethics might be expected, as would be a new digital rhetoric. As Campbell and Manning discuss, the development of microaggression blogs represents an effort to demonstrate a pattern of what some might consider minor offenses. It’s a strategy that takes advantage of the “long tail” and crowdsourcing qualities of the web, as well as the potential for going viral. In both honor and dignity cultures, the scale of offense is always on the level of the individual. One person is aggrieved, and the offenders are judged for their individual actions. Here, while we can still say that individuals offend and are aggrieved, the rhetorical strategy is to seek the support of third parties by representing a systemic pattern of microaggressions involving otherwise unrelated individuals. While I would not go so far as to say that such rhetorical strategies are impossible without social media, it seems clear that digital culture has made such efforts far more visible and effective.
It is interesting, as Campbell and Manning observe, that it is in those cultural spaces that are the most diverse and egalitarian (e.g. campuses) that this rhetorical strategy has taken hold, in part because those institutions are more likely than others to become partisan supporters of the aggrieved in these instances (whether in the form of faculty or institutional policy itself). Already we begin to see some rather complex dynamics around this matter, such as a student refusing to read a book because it offends his Christian sensibility (I’m sure you all remember that one) and this recent issue on my own campus involving an African-American graduate student posting “Whites Only” and “Blacks Only” signs around campus as part of an art project. Examples such as these muddy the distinction Campbell and Manning seek to make that victimhood culture is structured around claims made against dominant cultural groups.
I’m not interested in a meta-moral argument over whether or not this emerging moral code is good or not. And I’m certainly not interested in naming it. But I am curious about what I see as two competing socio-rhetorical tendencies at work here.
On the one hand there is the tendency toward a destratified, diverse, and egalitarian culture, perhaps best typified by college campuses. In this environment, participants have to live and work with others who are quite different from them without recourse to claiming any innate superiority (i.e., no stratification). Of course stratification does occur (because equality is more a mathematical abstraction than a real-world state), but the idea is that it is temporary or ad hoc and consensual (though we can pull out Lyotard here if we like). So, for example, there is stratification in a classroom between the professor and the students, but not necessarily elsewhere on the campus. And that stratification is limited and requires consent, even in the classroom (e.g., trigger warnings protecting students, or faculty protections against Yik Yak attacks). And even in the case of that Yik Yak attack, there’s always an opportunity for victimhood to flow in a variety of directions.
On the other hand though, we clearly see something quite different in social media, where participants move toward less diverse social groups. Certainly this is the purpose of microaggression blogs, where users join in common cause, and those blogs are hardly the only example of that. I don’t think one can ascribe a single purpose to a diffuse networked movement like this, but I at least find it difficult to determine if the microaggression movement is one that seeks to accelerate and shape the formation of an increasingly diverse and egalitarian society or if it rather seeks to set limits on diversity, at least as Manning and Campbell use the term.
For example, it is dignity culture that gave us the now familiar aspects of university general education that deal with racism, sexism, and other forms of social inequity, as well as multicultural literature, ethnic studies, gender studies, and so on. Such curricula clearly make sense within a dignity culture as means to foster a more diverse and egalitarian culture. However, I think it is an open question whether such classes will be viewed as moral in a microaggression culture (or whatever you want to call it, apparently not “victimhood culture”). Again, at least in Manning and Campbell’s account, in a dignity culture the slights of microaggression were overlooked, and microaggressions occurred all the time as students were asked to read things that challenged their views (and which they might find offensive) and students often would say things (many times, though not always, out of ignorance) that offended others in the class. It may be that such courses will become nearly impossible to teach (they’ve never been easy). The risks of speaking in such a classroom are far higher now than they’ve ever been for both faculty and students. Beyond that, if we see this microaggression movement spread, I will be curious to know if the result is that students’ social relations become less diverse even as the overall demographics of higher education move in the other direction. Perhaps that sounds like a moral argument against the microaggression movement because it suggests that the result will work against diversity and egalitarianism, but I don’t see it that way. If anything I think it suggests redefining what diversity and egalitarianism might be.
Anyway, I prefer to think about these matters in rhetorical terms. All rhetorical acts involve risk and thus must also involve some perceived possible reward. Obviously we all say and do things impetuously at times and that is always risky. It’s generally wise to make one’s best effort to restrict such acts to private moments in one’s most intimate and trusted social relations. Microaggression introduces us to a new set of risks for rhetorical acts occurring beyond those intimate social relations.
There is at least some tendency socially to indemnify oneself against these risks by claiming some value in honesty, in telling it how it is, in being a “straight shooter.” The microaggression movement relies on this strategy itself; it does not recognize the validity of those who counterclaim offense in their own claims of microaggression. It is indeed a complicated web of rhetorical performance. And perhaps this is what we need to recognize (and why we continue to mistrust rhetoric): it’s not honesty that we value but the effective rhetorical performance of honesty. That is, one that must protect itself against claims of microaggression as one must protect oneself against all counterclaims to a rhetorical act. In classical terms, claims of microaggression are claims against ethos, against one’s moral authority to say what one has said.
If we think about it this way, microaggression points us to a destabilization of ethos in a digital culture. To reestablish ethos, we have to reconceive identity and the authority it provides in a networked culture. Until then, it will remain quite difficult to determine what is and isn’t permissible to say.
McKenzie Wark has a useful extended discussion of Lev Manovich’s Software Takes Command. If you haven’t read Manovich’s book, Wark’s discussion offers some great insights into it. I think Manovich’s argument for software studies is important for the future of rhetoric, though admittedly my work has long operated at points of intersection between rhetoric and media study.
But here’s one way of thinking about this. How do we explain the persistence of the “essay,” not only in first-year composition but as the primary genre of scholarly work in our field and really across the humanities? Indeed we might take this question more broadly and wonder about the persistence of scholarly genres across disciplines and beyond. That is, we might ask why the genres of the scientific scholarly article, the newspaper article, the novel, and so on have not changed much in the wake of digital media.
Or maybe we should ask about the photograph.
It’s likely that you have some recent family photos hanging somewhere in your house. They were probably taken with a digital camera, maybe even with your smartphone. But sitting in that frame, they probably don’t look very different from photos that would have hung there thirty years ago. The photo may not reveal to you the complete transformation of the composition process that led to its production. That transformation has erased some photographic capacities that were available to chemical film but do not exist for digital images. However, as we know, most of those compositional activities are now simulated in software. Additionally, many new capacities have emerged for photographs, most notably, at least for the everyday user, the capacity to share images online.
Is the digital photograph the same as the analog photograph? Of course not. Can we say that the photograph persists? Or should we say that a new species of photography has evolved and come to thrive in the digital media ecology? And what’s at stake in how we answer that question?
As Wark observes of Manovich:
What makes all this possible is not just the separation of hardware from software, but also the separation of the media file from the application. The file format allows the user to treat the media artifact on which she or he is working as “a disembodied, abstract and universal dimension of any message separate from its content.” (133) You work on a signal, or basically a set of numbers. The numbers could be anything so long as the file format is the right kind of format for a given software application. Thus the separation of hardware and software, and software application and file, allow an unprecedented kind of abstraction from the particulars of any media artifact.
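That abstraction is easy to see in miniature. Here is a minimal sketch (the filename is hypothetical): to the file system a photograph is simply a sequence of numbers, and only a format convention, honored by format-aware software, marks those numbers as an image at all.

```python
# A media file is a "disembodied, abstract" set of numbers until
# some application interprets it.
path = "family_photo.jpg"  # hypothetical file

with open(path, "rb") as f:
    data = f.read()

# The same artifact, viewed as content-indifferent numbers...
print(list(data[:12]))

# ...and the only thing marking it as an image is a format signature.
# (JPEG files conventionally begin with the bytes FF D8 FF.)
print(data[:3] == b"\xff\xd8\xff")
```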
The same thing might be said of textual genres. If you print the PDF of a journal article, it might look quite similar to the essays our predecessors tapped out on typewriters and received back in bound journals in the mail several years later. Personally, I find it hard to imagine working that way. Drafting longhand? On note paper? Going to the library and relying on printed bibliographies and card catalogs? What about before photocopiers, when you couldn’t even make your own copy of a journal article?
I think it is fair to say that the contemporary digital scholarly essay, despite its resemblance to its analog predecessors, has the following new qualities. It:
- is composed on a digital platform (typically MS Word)
- relies upon digital access to secondary scholarship (at the very least, accessing a university’s online library catalog)
- is digitally circulated for review (perhaps by the author but almost certainly by the journal editors)
- is prepared for publication in a digital environment
- exists as a digital object (behind a paywall as a PDF, on an open-access website, on the author’s personal website, etc.)
In short, there are undoubtedly differences. But to me that leaves open a couple key questions:
- Do these differences make a difference? That is, do they alter the kind of knowledge we produce or the role essays play in our discipline? If so, how? and if not, why not?
- What are the untapped and/or under-utilized capacities of the digital species of the essay?
We have a growing field of digital rhetoric that investigates these questions, and journals like Kairos offer some direct evidence to inform an answer to question #2. Media study, on the other hand, while not especially interested in text, offers some compelling responses to these questions as well, or at least it could if we were able to understand textual genres in relation to media formats. The two (genre and format) are clearly not equivalent. To the contrary, they are quite distinct, but they act in conversation with one another. Historically we never really paid attention to the fact that the essay was a genre that existed within a specific print-analog media format. Now things are changing very rapidly because that media format has shifted. Now media study is integral to rhetoric, and rhetoric is integral to media study. I’ve been focusing here on the first part of that compound sentence. I’ll have to address the second part some other time. In fact, one might even go further and suggest that the entire constellation of fields within English Studies must turn itself toward media study, but again, that’s for another day.
One of the projects I have been regularly pursuing (and I’m certainly not alone in this) is investigating the implications of rhetoric’s disciplinary-paradigmatic insistence on a symbolic, anthropocentric scope of study and entertaining the possibilities of rethinking those boundaries. I’ve been employing a mixture of DeLanda, Latour, and other “new materialist/realist/etc.” thinkers, always with the understanding that these theories don’t fit neatly together and with the understanding that I’m not in the business of building a comprehensive theory of the world.
I’m interested in rethinking how rhetoric works in order to find a new way of approaching how we live in a digital world.
So take for example this recent piece of research from Experimental Brain Research, “Using space and time to encode vibrotactile information: toward an estimate of the skin’s achievable throughput” (paywall) by Scott Novich and David Eagleman, or perhaps just watch Eagleman’s TED talk where he asks “Can we create new senses for humans?”
As both the article and the talk discuss, the research here is built on a vest that sends vibrations to the user’s skin, vibrations the brain learns to interpret as sound. Thus it becomes an adaptive technology for the hearing impaired. The research article in particular explores what the “bandwidth” of skin might be; that is, how much information can you process this way? In part, it’s an engineering question, as the answer depends on how the vest, in this case, is designed. However, it is also a question of biology, as human skin has a certain range of sensitivity.
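As a rough sketch of the spatial-encoding idea (and not Novich and Eagleman’s actual algorithm), one can imagine splitting the audio spectrum into bands and mapping each band’s energy to the intensity of one vibration motor; the motor count and frame length below are invented parameters, not the vest’s real specifications.

```python
# A toy spectral-to-tactile mapping: one frequency band per motor.
import numpy as np

N_MOTORS = 24           # hypothetical number of vibration motors
SAMPLE_RATE = 44100

def frame_to_motor_intensities(frame):
    """Map a short audio frame to per-motor vibration intensities."""
    spectrum = np.abs(np.fft.rfft(frame))        # frequency content
    bands = np.array_split(spectrum, N_MOTORS)   # one band per motor
    energy = np.array([band.mean() for band in bands])
    return energy / max(energy.max(), 1e-9)      # normalize to 0..1

# A fake 50 ms "audio frame": a 440 Hz tone plus a little noise.
t = np.linspace(0, 0.05, int(SAMPLE_RATE * 0.05), endpoint=False)
frame = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(t.size)

# The pattern the skin would receive in place of the sound.
print(np.round(frame_to_motor_intensities(frame), 2))
```

The engineering question of the skin’s “bandwidth” then becomes: how many motors, how many distinguishable intensity levels, and how many frames per second can the skin actually resolve?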
One insight in the TED talk that interests me is Eagleman’s observation that the brain doesn’t know anything about the senses. In a manner analogous, to a degree, to a computer that only sees ones and zeros, the brain receives electrochemical impulses. As he says, if the brain is a general computing device, each of our senses might be understood as a plug-and-play peripheral device. So why not plug in some new devices? Or if not new hardware, then why not new software? One might certainly think of language that way. It doesn’t alter the visual spectrum of our eyes, but it alters the capacities that are available to us via sight. Maybe like an app on an iPhone. OK, enough analogies (in my defense, Eagleman started it).
Some of the suggestions in the video strike me as wildly speculative (which is why I enjoyed watching it). And though they seem highly unlikely, to be fair, the same thing was said not long ago about some of the adaptive technologies that are now available. Ultimately, Eagleman will only say that we do not know what the theoretical limits of the brain’s capacity to develop new senses, to expand our umwelt, might be. However, he seems most interested in the possibility of taking in data from across digital networks, data that now requires multiple screens and divided attention for us to track, and giving us a way of simply sensing it.
I find that fascinating for its implications about the role of symbolic action in rhetoric. If you think about it this way, symbolic action is a way of accessing human brains, piggybacking on visual and auditory sensory data. But the throughput is fairly limited. That is, humans can only read a few hundred words per minute, and they can hear even fewer. What if, to give a completely pedantic classroom example, instead of having to read all your students’ discussion posts, you could just know, like you know if you’re sitting or standing right now, what their thoughts were about the assigned reading? What if, instead of doing all that research on your next car purchase, you could just know which one to buy? That’s the kind of stuff Eagleman is talking about when he suggests, in far larger terms, that one could get a sense of the stock market or know the sentiment of a Twitter hashtag.
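For a rough sense of how limited that throughput is, here is a back-of-envelope estimate; the figures are commonplace approximations I am supplying, not numbers from Novich and Eagleman’s paper.

```python
# A crude estimate of the information rate of silent reading.
words_per_minute = 250   # a typical adult reading speed (assumed)
chars_per_word = 6       # ~5 letters plus a space (assumed)
bits_per_char = 8        # generous; English text carries far fewer

bits_per_second = words_per_minute * chars_per_word * bits_per_char / 60
print(f"reading: ~{bits_per_second:.0f} bits/s")  # ~200 bits/s
```

Set against the megabits per second flowing through even a modest network connection, the appeal of a higher-bandwidth channel into the brain is easy to see.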
Perhaps this sounds like it is verging on telepathy, but it isn’t that and could never be that. Telepathy implies immediate communication, without mediation. This is fully mediated by digital technologies. And that might be quite scary. What do you want to plug yourself into?
I wonder if early humans were similarly frightened by language. Once if you wanted to know something, you had to go see it for yourself. Now someone could tell you. You could become part of a larger collective. You were networked by symbols, given roles and rituals.
This reminds me of another recent scientific discovery, of an ancient human-like species in South Africa, as reported by the BBC.
Ms Elliott and her colleagues believe that they have found a burial chamber. The Homo naledi people appear to have carried individuals deep into the cave system and deposited them in the chamber – possibly over generations.
If that is correct, it suggests naledi was capable of ritual behaviour and possibly symbolic thought – something that until now had only been associated with much later humans within the last 200,000 years.
Prof Berger said: “We are going to have to contemplate some very deep things about what it is to be human. Have we been wrong all along about this kind of behaviour that we thought was unique to modern humans?
“Did we inherit that behaviour from deep time and is it something that (the earliest humans) have always been able to do?”
As I said at the outset, I’m not interested in building a grand theory of everything. I’m interested in developing a theory of rhetoric that works in the digital age. It has to be able to account for the naledi and for the technologies Eagleman is building. My way of thinking about this is to suggest that rhetoric does not begin inside symbolic action, inside the brain, inside culture. Instead, rhetoric is a kind of encounter with expression. As such it must be sensed. Out of those senses develop increasingly complex capacities for pattern recognition, thought, and, by extension, action.
I might answer Eagleman’s TED talk’s rhetorical question by saying that we have already expanded our umwelt. With our technologies, we can already see ultra-violet and infrared rays. I can hear and see things happening on the other side of the planet, hear and see things that happened years ago. And the naledi might teach us that Homo sapiens were not the first or only creatures on Earth able to do so. If we are able to recognize that our current definition of our capacities for rhetoric, for symbolic action and thought, does not define our ontology (what we are and must be) but only our history (what we have been for some period of our species’ existence), then perhaps our rhetorical futures open up more broadly, if perhaps more dangerously.
Before I get into this, I should try to make a few things clear. This post isn’t about the structural problems facing higher education right now (issues of cost and access, the changing cultural-economic role of universities nationally and globally, or shifts in media-information technologies that are reshaping our work). It’s not even about the increasing politicization of those problems as they become bullet points in campaign stump speeches or the subject of legislation. No, this post is really about the rhetorical response to these exigencies among academics and in the higher education press (and the two are becoming difficult to separate).
So I am willing to accept that things are as bad as they have ever been in higher education… well, at least for a century? Of course, Bill Readings published The University in Ruins in the nineties, detailing the increasing corporatization of the university. In the eighties, when I was an undergrad, students on my campus protested in the hundreds or thousands over a variety of issues related to apartheid, the CIA on campus, and, yes, tenure and rising tuition. Of course, as the song calls us to remember, students in 1970 were shot and killed by the National Guard at Kent State, resulting in a national student strike. Maybe the Golden Age of the American university was in the 50s, when women were English majors, commie professors were pursued by senators, and non-white students had their own colleges. Look, I assume you all know this history at least as well as I do. So what’s my point? It’s not that “the more things change the more they stay the same.” I’m willing to accept as a premise that things are worse now than they have been in the last half century, as long as we are all also willing to accept that there is hardly some ideal moment to point back to either.
My interest in this post is in the rhetorical responses to this situation, specifically our near-viral interest in “quit pieces.”
For example, one student at one university posts on a Facebook group page that he doesn’t want to read the optional summer book because it offends his sensibilities, and somehow that becomes national news. Elsewhere, this or that professor decides that academic life isn’t for him or her, quits, and then writes about it. They cite multiple reasons: students don’t pay attention or come to college for the wrong reasons, colleagues don’t respect their work, but mostly it’s about structural issues with institutions and higher education in general. As Oliver Lee Bateman, in a recent performance of this genre, writes:
In a university system like ours, where supply and demand are distorted, many promising young people make rash decisions with an inadequate understanding of their long-term implications. Even for people like me, who succeed despite the odds, it’s possible to look back and realize we’ve worked toward a disappointment, ending up as “winners” of a mess that damages its participants more every day.
Sure enough, there’s plenty of response to this genre, as can be seen in this Inside Higher Ed article (and the comments that follow) and in Ian Bogost’s provocation that “No One Cares That You Quit Your Job.” I guess it depends on what one means by “care.” After all, I’m not writing to Bateman and asking him if he’s ok. Bogost is clearly right that people quit their jobs all the time. The argument of quit pieces is that the quitting signals that something is wrong with higher education. No doubt something is wrong. I’m just not sure that the person who is quitting has some special insight to offer about it. That’s an ethos question, I suppose. On the other hand, it is apparent that people do care in the sense that they pay attention to these pieces. Perhaps there is some martyr-like quality imparted to quitting professors, as if giving up on their professional aspirations affords them some parting-shot attention, much like the Oscar-winning actor offering a political monologue while the music swells.
I can understand quitting a job you don’t like. Those of us who are professors likely all know people who have been (or feel they have been) treated unfairly by colleagues, administrators, etc. and felt forced to leave (or been forced to leave by the denial of tenure). We also all know colleagues who have left for (what they hope are) greener pastures. Only the smallest portion of these folks write quit pieces. Why do they attract so much attention?
I think it must be an appeal to pathos. Bateman’s piece ends, as many do, with an enumeration of the problems that led to his quitting. While I think the problems he identifies are real, I don’t think he has anything particularly insightful to say about them. Rather, because other academics are feeling the same frustrations as Bateman, they respond emotionally to his (and others’) expressions of that frustration. In that sense we do care and pay attention.
And I get it. Typically students go into graduate school with the goal of becoming professors because they envision a cloistered, scholarly life where they will focus primarily on the research questions they love and teach students who share their curiosities. Most of those students never make it to tenure-track jobs, and those that do arrive only to discover that the jobs are nothing like they imagined. It’s understandable. I suppose you could say I became a professor to study and teach what we today call digital rhetoric. And I do publish in that area. But I never really teach that subject, and most of my career outside of publishing has been about trying to solve departmental and institutional challenges. So I can understand why someone might find that frustrating and disillusioning. I can understand quitting. I’ve left two tenure-track jobs. I just haven’t left academia.
Rather than watching quit pieces go viral, it would be interesting to see some vision of a future higher ed that imagines what the arts, humanities, and sciences might look like in a less dysfunctional academia, something that ends up being more than a performance of commonplaces about the present moment or an ubi sunt reflection on some mystical golden age of yore.
This is one of those posts where I find myself at a strange intersection among several seemingly unrelated articles.
- Jonathan Rees warns us that “The ‘flipped classroom’ is professional suicide.”
- Alister Scott worries that “Universities [are] at risk of dumbing down into secondary schools.”
- Erik Gilbert asks “Does assessment make colleges better? Who knows?”
- Steven Johnson explores “The creative apocalypse that wasn’t.”
- Tim Wu suggests “You really don’t need to work so much.”
The first three clearly deal with academic life, while the last two address topics near and dear to faculty but without addressing academia.
The Rees, Scott, and Gilbert pieces each address aspects of the perceived, and perhaps real, changing role of faculty in curriculum. Formalized assessment asks faculty to articulate their teaching practices in fairly standardized ways and to offer evidence that, if not directly quantitative, at least meets some established standards for evidence. It doesn’t necessarily change what you teach or even how you teach, but it does require you to communicate about your teaching in new ways. (And it might very well put pressure on you to change your teaching.) The Scott piece ties into this with the changing demographics and motives of students and increased institutional attention to matters of retention and time to degree. While most academics are likely in favor of more people getting a chance to go to college and being successful there, Scott fears these goals put undue pressure on the content of college curriculum (i.e., dumb it down). Clearly this is tied to assessment, which is partly how we discover such problems in the first place. It’s tough if you want your class to be about x, y, and z, but assessment demonstrates that students struggle with x, y, and z and probably need to focus on a, b, and c first.
Though Rees sets himself against a different problem, I see it as related. Rees warns faculty that flipping one’s classroom by putting lecture content online puts one at risk. As he writes:
When you outsource content provision to the Internet, you put yourself in competition with it—and it is very hard to compete with the Internet. After all, if you aren’t the best lecturer in the world, why shouldn’t your boss replace you with whoever is? And if you aren’t the one providing the content, why did you spend all those years in graduate school anyway? Teaching, you say? Well, administrators can pay graduate students or adjuncts a lot less to do your job. Pretty soon, there might even be a computer program that can do it.
It’s quite the pickle. Even if we take Rees’s suggestion to heart, those superstar lectures are already out there on the web. If a faculty member’s ability as a teacher is no better than an adjunct’s or TA’s, then why not replace him/her? How do we assert the value added by having an expert tenured faculty member as a teacher? That would take us back to assessment, I fear.
Like many things in universities, we’re living in a reenactment of 19th-century life here. If information and expertise are in short supply, then you need to hire these faculty experts. If we measure expertise solely in terms of knowing things (e.g., I know more about rhetoric and composition, and digital rhetoric in particular, than my colleagues at UB), then I have to recognize that my knowledge of the field is partial, that there’s easy access to this knowledge online, and that there are many folks who might do as good a job as I do with teaching undergraduate courses in these areas (and some who would be willing to work for adjunct pay). I think this is the nature of much work these days, especially knowledge work. Our claims to expertise are always limited. There’s fairly easy access to information online, which does diminish the value of the knowledge we embody. And there’s always someone somewhere who’s willing to do the work for less money.
It might seem like the whole thing should come apart at the seams. The response of faculty, in part, has been to demonstrate how hard they work, how many hours they put in. I don’t mean to suggest that faculty are working harder now than they used to; I’m not sure either way. The Gilbert, Scott, and Rees articles would at least indicate that we are working harder in new areas that we do not value so much. Tim Wu explores this phenomenon more generally, finding it across white-collar workplaces from Amazon to law firms. Wu considers that Americans might just have some moral aversion to too much leisure. However, he settles on the idea that technologies have increased our capacity to do work and so we’ve just risen (or sunk) to meet those demands. Now we really can work virtually every second of the waking day. Unfortunately, Wu doesn’t have a solution; neither do I. But assessment is certainly a by-product of this phenomenon.
The one piece of possibly good news comes from Steven Johnson, whose analysis reveals that the decline of the music industry (and related creative professions), predicted with the appearance of Napster and other web innovations, hasn’t happened. Maybe that’s a reason to be optimistic about faculty as well. It at least suggests that Rees’s worries may be misplaced. After all, faculty weren’t replaced by textbooks, so why would they be replaced by rich-media textbooks (which is essentially what the content of a flipped classroom would be)? Today people spend less on recorded music but more on live music. Perhaps the analogy in academia is not performance but interaction. That is, the value of faculty, at least in terms of teaching, is in their interaction with students, in their ability to bring their expertise into conversation with students.
Meanwhile, we might do a better job of recognizing the expansion of work that Wu describes… work that ultimately adds no value for anyone. Assessment seems like an easy target. Wu describes how law firms combat one another with endless busywork as a legal strategy: i.e., burying one another in paperwork. Perhaps we play similar games of one-upmanship both among universities and across a campus. However, the challenge is to distinguish between these trends and changes in practices that might actually benefit us and our students. We probably do need to understand our roles as faculty differently.
Last week in The Guardian, Evan Selinger and Brett Frischmann ask, “Will the internet of things result in predictable people?” As the article concludes,
Alan Turing wondered if machines could be human-like, and recently that topic’s been getting a lot of attention. But perhaps a more important question is a reverse Turing test: can humans become machine-like and pervasively programmable.
This concern reminds me of one I’ve mentioned a few times here recently, coming from Mark Hansen’s Feed Forward, where the capacity of digital devices allows them to intercede in our unconscious processes and feed forward a media infoscape that precedes, shapes, and anticipates our thinking. In doing so, as Hansen points out, it potentially short-circuits any opportunity for deliberation: a point likely of interest to most rhetoricians, since rhetoric (in its quintessential modern form, anyway) hinges on the human capacity for deliberation. This is also a surprising inversion of the classic concept of cybernetics and the cyborg, where it is feedback, information collected by machines and presented to our consciousness, that defines our interaction with machines.
Put simply, the difference between feed-forward and feedback is the location of agency. If humans become predictable and programmable, does that mean that we lose agency? That we cease to be human?
Cue Flight of the Conchords:
In the distant future (the year 2000), when robot beings rule the world… Is this too tongue-in-cheek? Maybe, but it strikes me as a more apt pop-cultural reference than The Matrix, which is where Selinger and Frischmann turn when they note that “even though we won’t become human batteries that literally power machines, we’ll still be fueling them as perpetual sources of data that they’re programmed to extract, analyse, share, and act upon.” Why is “Robots” better? Perhaps unintentionally, it presents robots and humans as one and the same. Humans may be dead, but robots are surprisingly human-like. The humans-turned-robots revolt against their human oppressors who “made us work for too long / for unreasonable hours.”
The problem that Hansen, Selinger, and Frischmann identify is also the problem Baudrillard terms the “precession of simulacra” (which, not coincidentally, is the philosophical inspiration for The Matrix). And it suggests, like The Matrix, that the world is created for us, before us, to inhabit.
We might ask, even if it sounds perverse, how awful is it if people are predictable/programmable? Of course, we are (or hope to be) internally predictable. When I walk down the hall, I want to do so predictably. When I see a colleague coming toward me, I want my eyes and brain to identify her. I’d like to wave and say hello. And I’d like my colleague to recognize all of that as a friendly gesture. Deliberation is itself predictable. Rhetoric and persuasion rely upon the predictability of the audience. I suppose that if you knew everything about me that I know about me, then you could predict much of the content of this post. After all, that’s how I am doing it.
That said, there’s much of this post that I couldn’t predict. Maybe the “perpetual sources of data” available to digital machines know me better. Maybe they could produce this post faster than I can. Maybe they could write a better one, be a better version of me than I am. After all, isn’t that why we use these machines? For the promises they make to realize our dreams?
I think we can all acknowledge legitimate concerns with these information-gathering devices: what corporations know about us, what governments know about us, and what either might do with the information they glean. Furthermore, no doubt we need to learn how to live in a digital world, to not be driven mad by the insistent calls of social media, with its alternating appeals to our desires and to superego judgments of what we should be doing. However, such concerns are all too human; there’s nothing especially robotic about them. While we want to be predictable to ourselves, and we want the world to be predictable enough to act in it, we worry about seeming or being predictable to others in a way that casts doubt on our agency.
However, in many ways the obverse is true. It is our reliable participation in a network of actors that makes us what we are (human, robot, whatever).
This is a complex situation (of course). It requires collaboration between human and machine. It requires ethics, human and robotic. It is, in my view, a rhetorical matter in the way that the expressive encounters among actors open possibilities for thought and action to be shaped. I would not worry about humans becoming robots.
In an essay for Harper’s, William Deresiewicz identifies neoliberalism as the primary foe of higher education. I certainly have no interest in defending neoliberalism, though it is a rather amorphous, spectral enemy. It’s not a new argument, either.
Here are a few passages that give you the spirit of the argument:
The purpose of education in a neoliberal age is to produce producers. I published a book last year that said that, by and large, elite American universities no longer provide their students with a real education, one that addresses them as complete human beings rather than as future specialists — that enables them, as I put it, to build a self or (following Keats) to become a soul.
Only the commercial purpose now survives as a recognized value. Even the cognitive purpose, which one would think should be the center of a college education, is tolerated only insofar as it contributes to the commercial.
Now here are two other passages.
it is no wonder that an educational system whose main purpose had been intellectual and spiritual culture directed to social ends has been thrown into confusion and bewilderment and brought sadly out of balance. No wonder, too, that it has caught the spirit of the business and industrial world, its desire for great things-large enrollment, great equipment, puffed advertisement, sensational features, strenuous competition, underbidding.
the men flock into the courses on science, the women affect the courses in literature. The literary courses, indeed, are known in some of these institutions as “sissy” courses. The man who took literature too seriously would be suspected of effeminacy. The really virile thing is to be an electrical engineer. One already sees the time when the typical teacher of literature will be some young dilettante who will interpret Keats and Shelley to a class of girls.
As that last quote probably gives away, these passages are from a different time. Both are quotes found in Gerald Graff’s Professing Literature: the first are the words of Frank Gaylord Hubbard from his 1912 MLA address, and the second is Irving Babbitt from his 1908 book Literature and the American College. Long before neoliberalism was a twinkle in the eyes of Thatcher and Reagan, universities were under threat from business and industry, and the humanities were threatened by engineering. Certainly there are some “yeah, but” arguments to be made, as in “yeah, but now it’s serious.” Nevertheless, these are longstanding tensions. I imagine one could trace them back even further, but a century ago is apt. Back then, American universities were responding to the turmoil of the 1860s and the second industrial revolution of the 1880s and 90s. Today we respond to the turmoil of the 1960s and the information revolution of the 1980s and 90s. There’s an odd symmetry, really. Let’s hope we’re not verging on 30 years of global war and depression as our 1915 colleagues were.
Ultimately, it’s hard for me to disagree with Deresiewicz’s call to action, that we should:
Instead of treating higher education as a commodity, we need to treat it as a right. Instead of seeing it in terms of market purposes, we need to see it once again in terms of intellectual and moral purposes. That means resurrecting one of the great achievements of postwar American society: high-quality, low- or no-cost mass public higher education. An end to the artificial scarcity of educational resources. An end to the idea that students must compete for the privilege of going to a decent college, and that they then must pay for it.
However, even if high-quality, low-cost higher ed were accomplished, I’m not sure that we would get away from the connection between learning and career. Deresiewicz describes the liberal arts as “those fields in which knowledge is pursued for its own sake.” I think this is misleading. I understand his point, that scholarship doesn’t need to have a direct application or lead to profit. At the same time, I am skeptical of the suggestion of purity here. I prefer some version of the original, medieval notion of the liberal arts as the skills required by free people to thrive.
But here’s the most curious line from the article: “business, broadly speaking, does not require you to be as smart as possible or to think as hard as possible. It’s good to be smart, and it’s good to think hard, but you needn’t be extremely smart or think extremely hard. Instead, you need a different set of skills: organizational skills, interpersonal skills — things that professors and their classes are certainly not very good at teaching.” I’m not exactly sure what being “smart” or “thinking hard” mean here (beyond, of course, thinking like Deresiewicz does). But what’s really strange is that last line: why are professors and their classes not good at teaching organizational or interpersonal skills? Is this even true? I may be wrong, but it seems to me that Deresiewicz is implying these things aren’t worth teaching. I suppose it’s a stereotype to imagine the professor as disorganized and lacking interpersonal skills. Are we celebrating that here?
I’ll offer a different take on this. When we adopted the German model of higher education we decided that curriculum and teaching would follow research. But that was already a problem a century ago, as this passage from Graff recounts:
All the critics agreed that there was a glaring contradiction between the research fetish and the needs of most students. In his 1904 MLA address, Hohlfield speculated that “thousands upon thousands of teachers must be engaged in presenting to their students elements which, in the nature of things, can have only a rare and remote connection with the sphere of original research,” and he doubted the wisdom of requiring all members of the now-expanding department faculties to engage in such research. To maintain that every college instructor “could or should be an original investigator is either a naive delusion concerning the actual status of our educational system or, what is more dangerous, it is based on a mechanical and superficial interpretation of the terms ‘original scholarship’ or ‘research work.'” (109)
The hyper-specialization of contemporary faculty only intensifies this situation. The “solution” has been adjunctification, but that’s really more like an externality. Changing the way that we fund higher education probably makes a lot of sense to everyone reading this post. Imagining that things will be, or should be, like they used to be in the 1960s, when public higher ed and the liberal arts were in their “heyday,” seems less sensible.
If the neoliberal arts, as Deresiewicz terms them, are untenable, then we are still faced with building a new liberal arts, which is really what our colleagues a century ago did: inventing things like majors (in the late 19th century) and general education (in the early 20th), which we still employ. In the category of “be careful what you wish for,” increased public funding will undoubtedly lead to increased public accountability. I’m not sure whether you like your chances in the state capitol or in the marketplace, or even if you can tell the difference. Whatever the new liberal arts will be, they’ll have to figure that out, just as their 20th century counterparts learned to thrive in a nationalist-industrial economy and culture.