Digital Digs (Alex Reid)

an archeology of the future

identity and pedagogy in first-year composition

5 December, 2016 - 11:57

Two weeks ago I wrote a post about Mark Lilla’s NY Times op-ed, “The End of Identity Liberalism.” As I noted then, I did not imagine many of my colleagues would share his views (and neither do I, as I think that post made clear, though perhaps I had different objections than other academics). Chris Newfield offers a particularly worthwhile response to Lilla, and I want to consider it specifically in relation to teaching FYC.

First though I need to pay some direct attention to Lilla and Newfield and start by differentiating goals from methods. That is, should we imagine their differences lie in having different goals, different visions, of the future of America? Or do they share a vision but differ over the means to get there? In my reading of Newfield’s argument, it’s the former. In writing about the emergence of Clintonism, Newfield writes, “The basic stakes were whether whites were going to demand that post-1960s ethnic groups assimilate to a common culture that whites defined, or, on the other hand, move toward a polycentric society in which fundamental values would be achieved through negotiation within shared legal ground rules.” He contends that for the Clintons, it was the former, what he goes on to call “cultural unionism,” a position he then attaches to Lilla. Where Eurocentrism insisted other cultural groups assimilate to Western, white culture, cultural unionism offered a softer touch: “The crucial compromise of the latter was that it offered flexible tolerance while still rejecting cultural parity or equality, and insisting instead on unity and shared foundations. The unionists trained their fire on calls for cultural autonomy (like Afrocentrism) that seemed to them to reject their kinder, gentler version of assimilation to an implicitly rather than aggressively white common culture.” As almost goes without saying, the Trump ideology assumes the superiority of Euro-American culture.  Perhaps Lilla is a cultural unionist. He certainly wants to argue that an effective liberal politics moving forward is one that emphasizes common interests among Americans rather than what he perceives (or at least what is perceived by many) as the interests of particular groups.

In any case, I share Newfield’s closing observation, quoting Stephen Steinberg that “where there is social, economic, and political parity among the constituent groups, ethnic conflict, when it occurs, tends to be at a low level and rarely spills over into violence.” I would put it this way: we are in the situation in which we find ourselves now because many “constituent groups” view themselves as having been treated unjustly, as lacking that parity. I will leave it to someone else to judge those claims of injustice, rank grievances, and so on.

Instead I want to turn to higher education’s role, especially the role of FYC. I think Newfield does a good job of briefly describing the role higher education has played in the last 20 years as a mechanism of neoliberal capitalism: developing the individual human capital of STEM-trained professionals; creating culturally-tolerant office workers for a global economy; and preparing flexible subjects for a lifetime of retraining, part-time “freelance” work, and geographic relocation. We become always-connected contingent workers whose cultural-subjective differences can be tolerated and even celebrated as long as they don’t amount to anything beyond an expression or style. Whatever higher education’s complicity in such goals as an “industry,” in more local terms colleges and faculty have also resisted and critiqued such efforts, developing curricula along those lines. The rise of cultural studies in the composition classroom is one such development, though I think there’s always been some tension over whether such pedagogies manage to create truly resistant critical thinkers or only strip away some of the hometown, parochial prejudices of students in preparation for their roles as tolerant global workers. It’s probably not an either/or proposition.

It’s inevitable that higher education has a massive, though variable, effect on American culture, so I think generalizations are difficult. Newfield ends with this claim: “The public university can either stand for racial and economic parity as a unified project, or it can continue its decline.” I think this might mean many things. For instance, one could argue that a cultural unionist position seeks racial and economic parity through higher education. It says “come to college and learn to be one of ‘us,’ so you can make a good living.” I’m pretty sure that’s not what Newfield has in mind. So what we have here is a common means, college education, put to presumably different ends. I’m assuming that Newfield’s ends are, roughly speaking, in line with the “polycentric society in which fundamental values would be achieved through negotiation within shared legal ground rules” mentioned above. In that case, I would think the role of FYC would not be to argue for a particular set of “fundamental values” but rather to develop rhetorical capacities for citizens to participate in that negotiation irrespective of the position they would bring to the negotiation table. That’s a different role than the one generally ascribed to the cultural studies composition classroom.


robot empathy and ethics in a jobless future

29 November, 2016 - 15:26

Perhaps this is a departure from concerns of distributed deliberation, fake news, and such. Perhaps not. Here though I begin with the rhetoric of an emerging sub-genre regarding humanity’s slow, dismal apocalypse in the wake of intelligent machines. I offer two examples, one from the New Yorker, “Silicon Valley Has an Empathy Problem” by Om Malik, and the other from The Atlantic, “Watching the World Rot at Europe’s Largest Tech Conference” by Sam Kriss. Viewers of HBO’s Westworld (like myself) should have a good sense of the tone of such work, but to offer a taste:

Malik writes,

when you are a data-driven oligarchy like Facebook, Google, Amazon, or Uber, you can’t really wash your hands of the impact of your algorithms and your ability to shape popular sentiment in our society. We are not just talking about the ability to influence voters with fake news. If you are Amazon, you have to acknowledge that you are slowly corroding the retail sector, which employs many people in this country. If you are Airbnb, no matter how well-meaning your focus on delighting travelers, you are also going to affect hotel-industry employment.

Kriss writes,

In the accounts given by philosophers like Bernard Stiegler, the human stands on the point of vanishing entirely; we become something incidental to a total technological system. As he points out, a human being without any technological prostheses is nothing, an unsteady sac of flesh defined only by what it doesn’t have: no shelter, no protection, no society. We create tools, but technical apparatuses and their milieus advance according to their own logic, and these non-living objects have their own strange form of life. Our brains developed to control our hands; human consciousness itself was only the by-product of a technical evolution that moved from flint-knapping to the hammer to the virtual bartender; its real job isn’t to perform any particular task but to perpetuate itself.  “Robots,” he writes, are “seemingly designed no longer to free humanity from work but to consign it either to poverty or stress.” Whatever illusion of predominance we had is fading: For others, like Benjamin Bratton, the real political subject is no longer a human individual but a “user,” which can be any kind of biological or digital assemblage. With production automated according to algorithmically generated targets, with the vast majority of all written language taking the form of spam and junk code, this system has less and less use for us—even as a moving part—with every passing day.

Web Summit is where humanity rushes towards its extinction.

There’s perhaps some gallows humor in the notion that the qualities supposed to separate us from machines, like intelligence, empathy, and ethics, are qualities we so often fail to display. But let’s press on. One point shared by Malik and Kriss that connects with the election is the way technological developments are making human labor obsolete. In many respects, this is not a new story. Think of the story of John Henry. Indeed many have argued that we are entering a historical period when we will no longer be able to define ourselves by the work we do or place so much moral value on labor. Read this article, for example, where James Livingston suggests that we “Fuck Work.” Or you might consider these articles on the 538 Blog and in The New Yorker on the idea of a universal basic income (i.e., where every citizen gets sent a check each month). Regardless of whether or not you think basic income is a good solution, the problem it seeks to address is plain. We may not need people to work as much as we once did. More poignantly, we don’t require people to do the same kinds of work.

In another example, Malik offers:

Otto, a Bay Area startup that was recently acquired by Uber, wants to automate trucking—and recently wrapped up a hundred-and-twenty-mile driverless delivery of fifty thousand cans of beer between Fort Collins and Colorado Springs. From a technological standpoint it was a jaw-dropping achievement, accompanied by predictions of improved highway safety. From the point of view of a truck driver with a mortgage and a kid in college, it was a devastating “oh, shit” moment. That one technical breakthrough puts nearly two million long-haul trucking jobs at risk.

One of this year’s primary election narratives is of Trump supporters’ hope that he will keep his promise to bring back their lost factory jobs. Those lost jobs are blamed on trade agreements, which leads to other kinds of political-ideological affects, but many have been lost to technological change. Read about it in The Economist, The Washington Post, and Fortune. Citing this report, the Fortune article notes that only about 13% of lost manufacturing jobs are a result of trade agreements. The rest are a result of domestic shifts, primarily the increased productivity per worker of factories as a result of automation (i.e., robots). However, if the truck driver didn’t need the trucking job to keep a home or pay college tuition, then would s/he care about the robot getting behind the wheel? One might believe that an idea like basic income is far too socialist for American tastes, and that’s likely right. But government intervention in technological development to ensure that truck drivers or factory workers aren’t replaced by robots is really no less socialist in the end, right? The only other option is training/education for those displaced workers (and hopefully not just into another industry that is soon to be automated).

So this is where I depart from the spirit/tone of Malik and Kriss’ articles. There’s no doubt that they are describing a real problem. And do we need to think more carefully and empathically about the ethical implications of technological developments? Sure. I mean, who’s going to say “no we don’t” to such a proposition? When I hear such conversations, I think about the Italian Futurists’ fascist aesthetics or Walter Benjamin’s angel of history. There are many challenges and pitfalls here. However, I also think of how Fredric Jameson defined modernity in Postmodernism as “the way ‘modern’ people feel about themselves: the word would seem to have something to do not with the products (either cultural or industrial) but with the producers and the consumers, and how they feel either producing the products or living among them. This modern feeling now seems to consist in the conviction that we ourselves are somehow new, that a new age is beginning… we have to be somehow absolutely, radically modern; which is to say (presumably) that we have to make ourselves modern, too; it’s something we do, not merely something that happens to us” (310). Writing in the eighties and early nineties, Jameson suggested that we no longer felt this modern feeling in the postmodern era. But now I think we might once again. It’s fair enough to point to the insensitivity of developers, tech designers, Silicon Valley investors, and so on. They have a part in this. So too do the engineers, computer scientists, and corporate executives automating one industry after another. But the largest burden falls on all of us to build new subjective relations among these new systems. The question should not be what use the system has for us but rather what use we have for the system. Perhaps like the moderns of a century ago who met the challenges of the second industrial revolution, we have our own technocultural challenges to face.

We can see robots as mechanical, as incapable of empathy or ethical choices beyond those their programming forces them to obey, but if we view empathy and ethics as emergent network effects, then we can recognize that nonhumans have always participated in our capacities for empathic, ethical actions. Moving human labor out of the factory can be an empathic and ethical act enabled by robots as long as the humans affected are not relegated to “poverty or stress” as Stiegler and Kriss would suggest. That’s the political problem that needs solving.


consensual and competing media hallucinations

28 November, 2016 - 12:02

In Neuromancer William Gibson famously described cyberspace as

A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts… A graphic representation of data abstracted from banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data.

In my last post I wrote about “distributed deliberation”: a recognition of the way in which our capacities for deliberation, like our capacities for cognition, rely upon our relations with our media ecology. Understanding the role information media technologies play in deliberative processes is the first step toward addressing problems like fake news and echo chambers. Of course it’s not really that simple. Yes, there are stories like this one in The New York Times about a college student in Georgia (the country, not the US state) and his fake news site. He freely acknowledges that he is producing fake news and doing it to make money. Alongside those who indifferently but intentionally present false information are the propagandists, who also intentionally present false information but for specific political purposes. As tricky as it may or may not prove to be to identify such content, at least I think we can all agree that we do not want to be deliberately deceived. Whether or not we can agree that it is unethical to use deception to further our own political ends against either foreign or domestic political opponents is another matter.

From there the territory gets murkier. The Washington Post published a widely-shared story on fake news a few days ago. It reports on a number of events and refers to a series of sources. One of the sources is an anonymous group of “researchers with foreign policy, military and technology backgrounds” who call themselves PropOrNot. Here’s their website, which includes a list of publications they deemed to be propaganda. This article prompted a number of responses from publications and others who asserted they had been unjustly identified. Two examples are here and here. I would suggest that you take a look at them and make up your own mind about what’s going on there. I just want to talk about one aspect of it. In one of the two articles linked above, Caitlin Johnstone describes the Post‘s interest in fake news as “their latest frantic attempt to claw power back to the neoliberal establishment.” In the other, Norton and Greenwald write,

The Post story served the agendas of many factions: those who want to believe Putin stole the election from Hillary Clinton; those who want to believe that the internet and social media are a grave menace that needs to be controlled, in contrast to the objective truth which reliable old media outlets once issued; those who want a resurrection of the Cold War. So those who saw tweets and Facebook posts promoting this Post story instantly clicked and shared and promoted the story without an iota of critical thought or examination of whether the claims were true, because they wanted the claims to be true. That behavior included countless journalists.

In short, these claims introduce what I would consider a more difficult challenge. As you know, those on the right would name the Post and the NY Times as prime examples of liberal media (just as those on the left would look at Fox or Breitbart as conservative media). Here we have those further on the left seeing the Post and Times as centrist, neoliberal media. For what it’s worth, here’s another article (this one from the right) making the same basic claim about PropOrNot (that it’s a liberal centrist conspiracy). Let’s assume they are all correct and that all news, all sources of information, are always already overdetermined by ideological commitments to the point where no information can be trusted. I would assume one couldn’t even trust oneself, as one would have to be the biggest megalomaniac ever to believe one was the one person on the planet immune to this overdetermination.

What do we do next? What we have done is pick a side and decide that it is the most true. (Or maybe one might say that even the act of picking a side is itself overdetermined.) I’d call this consensual hallucination. News and facts become tools and weapons in the political conflict that ensues. They become intertwined with analysis and polemic in such a way that it becomes very difficult to distinguish among them. The thing about such practices is that the facts employed do not need to be fabricated. One just needs to be selective about facts and then rhetorically skilled enough to build an argument around them. Once one sets aside the profiteering fake news hucksters and shadowy enemy state propagandists, one might still find cynical manipulators of news production at media outlets, but one also finds many people who firmly believe in the work they are producing and view themselves as ethical, even though their opponents do not see them as such. So if we look at an organization like PropOrNot, do we imagine it is a cynical operation designed to mislead readers for specific purposes or an organization genuinely attempting to identify and limit the effects of Russian propaganda? If it’s the latter, then we could still say, “Well, you’re not doing a very good job of it.” As far as that goes, what about The Washington Post or any of these other authors or publications?

In this situation what we have is a fundamental breakdown of institutional function. If news is fundamentally ideological, then you’d be a fool to believe it, even the news that reflects your own views, maybe especially that news. In theory, the news media, as a collective institution, is supposed to do the work for citizens of ensuring that news reporting, while always limited and open to error, is as fair and accurate as we can make it. It does that work, so we don’t have to. If you want to be really gloomy about it, you can add all the other social institutions along with it: government, law enforcement, education, corporations, religion, healthcare, etc. etc. Why believe anyone is out there doing anything other than promoting their individual interests in a free market capitalist environment devoid of any values and/or being suckered into serving someone else’s interests? Perhaps that is the case, though if so we are probably screwed.

Personally I’m not that cynical. However, we do need to rebuild these institutions. I’m not going to talk about how here in a post that is already far too long. The obvious first step though is deciding that we want to rebuild them. We need to recognize that while we will continue to disagree about many things, we would all benefit from a news media whose practices made visible their own limits and made explicit efforts to reduce bias within those limits to produce news that was as trustworthy for citizens as possible. To me that doesn’t mean inviting two or more hopelessly partisan hacks to offer dueling “spins.”


distributed deliberation: beyond echo chambers and fake news

25 November, 2016 - 10:57

If some Americans are slowly rousing to the realization that getting information via social media resulted in a distorted (and sometimes completely false) view of the past election, perhaps they might be able to extend that epiphany to recognize that the distortion is ongoing and not limited to presidential politics. It is also not limited to the specific social media effects of echo chambers and fake news, or even strictly to social media. I will talk about these things as available means of persuasion (they persuaded you, right?). The problem now is what to do. And it is a serious issue; one might say it is a matter of national security as other nation states employ social media and fake news to destabilize our democracy. Indeed, as this Washington Post story reports, the boundaries between fake news and propaganda are blurry.

Before we can move forward though, we’d have to agree on a few things. The first is that we don’t want to be dupes. There are plenty of pieces popping up about how to identify fake news when you encounter it. But the primary hurdle to that is whether or not you care about the accuracy of something you’re reading or sharing. The total fabrications are probably not hard to detect. Then there are things with some truth in them (analogous to political smear campaigns, ahem, I mean ad campaigns) that are selective and slanted. Then there’s discerning the difference between an op-ed piece and a news piece. There’s recognizing that news can only ever tell part of the story and that in necessarily choosing to report on one story another goes unreported. (I.e., we have limits.) Once we get through all of that, then we can start on figuring out interpretive processes for understanding the significance of a particular news report.

Before we get too overwhelmed though, we can recognize that the elements of this challenge are longstanding. Maybe you remember Jon Stewart lampooning CNN’s Crossfire? Or maybe you read Chomsky’s Manufacturing Consent from the 80s. Presumably we all know of the operation of Nazi and Soviet propaganda from the middle of the century (even if we sometimes failed to recognize it among ourselves) or the rise of advertising in the same period (you watched Mad Men, right?). And there’s the yellow journalism that dragged us into the Spanish-American War. You can go further back, but once you start predating mass media, the media ecological conditions are so different that the comparisons are quite strained. The good news is that we already have some very good mechanisms for producing good information, not perfect information, but “good enough,” by which I mean information which, when relied upon for action, holds up. The scientific method and other research methods carried out by experts and professionals are some examples. And we have accessible strategies for critically evaluating media we encounter, as I mentioned above.

However none of that is really the problem I want to talk about. I really just wanted to point out that 1) we probably should want good information, 2) these things have happened before, and 3) we already have some means for producing and evaluating good information. The obvious current problem is not only that “good information” is mixed in with other types (faulty, maliciously misleading, playfully misleading, etc.) but also that there is an overwhelming amount of potentially good information, and some of it exists in genres that are not easily understood by the citizenry as a whole. I say “potentially” good information because even if the information is well-produced it may not be the information you need for the particular question you’re asking. Just looking in English Studies, there’s a ton of good information about literature. Is it useful to you? Maybe.

So that’s where we get what I’m calling “distributed deliberation.” It’s a term that plays off of Edwin Hutchins’s “distributed cognition” (read about it on Wikipedia if you like). Basically the idea is that our capacities for cognition arise through relations with our environment, and we become capable of more and different cognitive tasks as we develop that environment. In this sense, we’ve “always already” had a distributed deliberative environment. We’ve relied on trusted friends, families, community leaders, and institutions to help us: churches and schools, even the media and government. In the contemporary media ecology, we also need to rely more directly and self-evidently upon machines. The processes (or procedural rhetorics, to borrow Ian Bogost’s term) of Google, Facebook, Amazon, or YouTube (just to name the four most visited sites in the US) must make choices for you; they must deliberate. Even some pseudo-random presentation of links, posts, products, or videos would be a form of deliberation, of choosing.
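To make that last point concrete, here is a minimal sketch in Python. The item titles, the “predicted_engagement” scores, and the weighting are entirely invented for illustration; this does not describe any real platform’s ranking system. The point is only that even a “random” feed embeds deliberation: someone still chose the candidate pool, the weighting, and the number of slots.

```python
import random

# Hypothetical candidate items a platform might show a user. The titles
# and "predicted_engagement" scores are invented for illustration.
candidates = [
    {"title": "Local school board vote", "predicted_engagement": 0.2},
    {"title": "Celebrity feud update", "predicted_engagement": 0.9},
    {"title": "Investigative report", "predicted_engagement": 0.4},
]

def build_feed(items, slots=2, seed=None):
    """Pick which items a user sees.

    Even this pseudo-random version deliberates: the candidate pool, the
    score that weights the draw, and the number of slots are all choices
    made in advance, whether or not a human makes them at display time.
    (random.choices samples with replacement, which is fine for a sketch.)
    """
    rng = random.Random(seed)
    weights = [item["predicted_engagement"] for item in items]
    return rng.choices(items, weights=weights, k=slots)

for item in build_feed(candidates, seed=42):
    print(item["title"])
```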

Certainly there are humans deeply involved in those processes. Humans write and manage the algorithms at work here, though that does not mean that they can fully predict how they will work. And sometimes when they do, some humans make calculated decisions that do not benefit the rest of us, as with Facebook’s actions related to fake news. But before we show up at Zuckerberg’s door with pitchforks, we need to understand that deliberation isn’t easy, as this Bloomberg article discusses. How do we teach machines to recognize grey areas when we have trouble doing so ourselves? Perhaps in another post I’ll write, half seriously, about the future of “artisanal information.”

So here’s the bottom line:

  • We need to recognize the role of the media ecology in deliberation and that it’s not inherently bad that deliberation is distributed.
  • We need to develop specific understandings of the deliberative processes of the sites on which we rely the most (e.g. when you look at your FB page or the results of a Google search, do you know why you are seeing the particular information you are seeing?)
  • We need to re-articulate the specific epistemological methods–including their advantages and limits–that we will accept as legitimate constructors of knowledge.
  • We need to understand the rhetorical features of the genres that participate in the construction and communication of that knowledge such that we can know when a particular piece of media has gone out of bounds.
  • We need to build technologies that can allow us to reward and punish media appropriately so that our deliberative acts can shape the automated processes that build the media ecologies we experience.
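As a gesture toward that last bullet, here is a minimal, purely hypothetical sketch of what such a reward-and-punish feedback loop might look like: readers’ explicit verdicts on stories nudge a per-source trust weight that an automated feed could then take into account. The outlet names, weights, and update rule are all invented; no existing platform is being described.

```python
# Hypothetical per-source trust weights; names and values are invented.
source_trust = {"outlet_a": 1.0, "outlet_b": 1.0}

def record_verdict(source, accurate, step=0.1):
    """Reward sources whose stories hold up; penalize ones that don't.

    A feed-ranking process could multiply its scores by these weights,
    letting deliberate acts of evaluation shape what gets surfaced.
    """
    current = source_trust.get(source, 1.0)
    source_trust[source] = max(0.0, current + (step if accurate else -step))

# A reader flags one story as accurate and another as fabricated.
record_verdict("outlet_a", accurate=True)
record_verdict("outlet_b", accurate=False)
print(source_trust)  # {'outlet_a': 1.1, 'outlet_b': 0.9}
```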

But maybe before that, we need to agree on what a “fact” is, how facts are made, and how we judge whether or not a particular fact is well-made.


pluralism and the nonmodern, nonliberal society

20 November, 2016 - 10:04

In a New York Times editorial, “The End of Identity Liberalism,” Mark Lilla, a Columbia history professor, makes an argument that runs against much of the discourse I hear from the academic left. I am curious what others think of it. In part I’m writing this to work through my own thoughts on the matter. All of this stuff here is exploratory and rudimentary.

So Lilla writes the following, specifically targeting teachers and the press:

the fixation on diversity in our schools and in the press has produced a generation of liberals and progressives narcissistically unaware of conditions outside their self-defined groups, and indifferent to the task of reaching out to Americans in every walk of life. At a very young age our children are being encouraged to talk about their individual identities, even before they have them. By the time they reach college many assume that diversity discourse exhausts political discourse, and have shockingly little to say about such perennial questions as class, war, the economy and the common good. In large part this is because of high school history curriculums, which anachronistically project the identity politics of today back onto the past, creating a distorted picture of the major forces and individuals that shaped our country.

And then later I believe this is his core thesis:

We need a post-identity liberalism, and it should draw from the past successes of pre-identity liberalism. Such a liberalism would concentrate on widening its base by appealing to Americans as Americans and emphasizing the issues that affect a vast majority of them. It would speak to the nation as a nation of citizens who are in this together and must help one another. As for narrower issues that are highly charged symbolically and can drive potential allies away, especially those touching on sexuality and religion, such a liberalism would work quietly, sensitively and with a proper sense of scale. (To paraphrase Bernie Sanders, America is sick and tired of hearing about liberals’ damn bathrooms.)

Having attended a panel on campus on Friday on “healing the divide” after the election (the title was created at the beginning of the semester, btw), I don’t think Lilla’s argument would have been widely popular there, or indeed in many of the academic communities in which I’ve worked for really my entire adult life. With only a modest caveat about generalizations, I would say that quite clearly one of the expressed goals of pedagogy and scholarship (broadly speaking, but certainly in the humanities) has been to teach students not to be racist, sexist, homophobic, etc. Really the greatest tension I typically hear around this objective is the simultaneous recognition that one is unavoidably all these things, which complicates teaching the mitigation of these problems.

My non-expert understanding of liberalism is that, in its Classical, 18th century-Enlightenment sense, it was grounded on notions of freedom: free speech, freedom of religion, free markets, secular government, and so on. It’s one of the forms of government that arises with modernity. In its later 19th-20th century forms as Social Liberalism, the concerns turn more toward equality. The notion I float in the title of a “nonmodern, nonliberal society” is an obvious gesture toward Latour. It would suggest that the ontological principles on which liberalism is based are misunderstood. What are the ontological principles? I suppose you could look at the self-evident truths of the Declaration of Independence. To be clear, when I say these truths are misunderstood, I don’t mean that they are untrue but rather that we have a limited and problematic understanding of how these ontological conditions are produced and maintained. (After all the Declaration asserts that they were given to us by a “Creator.”) I’ll circle back to that.

Similarly I am employing a non-expert notion of pluralism to mean accepting that people in a community will have different values and objectives. The one common value/objective a pluralist society requires is that its members will not seek to impose their values or objectives on one another. In part, learning this value might emerge from a practical understanding of the impossibility of common values. It’s a game of whack-a-mole. Differences always arise.

What is necessary however is an ability to agree on common practices. Continuing my headlong tumble into areas in which I have little expertise, I think about this in terms of a non-zero-sum game and cooperative gaming. In such a scenario, I do not need to share values or even objectives with my partners. As long as I agree that our actions are desirable for me (i.e., consonant with my own values and objectives), then our collective agreement to undertake this action is a benefit to me, even if the others are doing it for different reasons. Of course it only works with the basic agreement on pluralism, which I don’t think we have in our society right now, so this post shouldn’t be taken in any way as a suggestion for an immediate course of action (don’t worry, I didn’t really imagine you were reading it that way; it’s more of a disclaimer). In no way do I mean to suggest that this is easy. We all still have values and goals, and when those are very divergent, as they are now, mutually beneficial practices, or even tit-for-tat compromises, can be almost unachievable. In such scenarios we often begin by trying to appeal to common values, but they may not exist or may be common in name only. That’s why I think focusing on practices can be useful, and the more detailed and specific the practices are the better, as the lines back to values and goals become fuzzier. Developing, agreeing on, and abiding by common practices can be a foundation for moving forward with people with whom you disagree.
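To make the game-theory analogy concrete, here is a toy example; the action and the payoff numbers are entirely invented. Two parties value the same joint action for different reasons, and both still come out ahead of doing nothing, which is all the cooperation requires.

```python
# A toy, invented payoff table for the cooperative-game point above.
payoffs = {
    # action: (value to party A, value to party B)
    "do nothing": (0, 0),
    # A wants economic development; B wants access to telemedicine.
    "fund rural broadband": (3, 2),
}

def both_benefit(action):
    """True if each party prefers the action to the status quo,
    regardless of why they value it."""
    a, b = payoffs[action]
    baseline_a, baseline_b = payoffs["do nothing"]
    return a > baseline_a and b > baseline_b

print(both_benefit("fund rural broadband"))  # True: agreement without shared values
```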

Turning back to where I began with Lilla… I think he is arguing that we should minimize identity politics and emphasize our collectivity over our differences. I would look at it a little differently. Perhaps by “Americans” we mean to name a group of actors gathered around a series of matters of concern (as Lilla puts it, “issues that affect a vast majority”). Is it possible to explore those concerns and build practices we can agree will address them without shared values?


The fantasies and limits of experts and elites

19 November, 2016 - 09:33

I went to see Arrival last night. I don’t think it’s a spoiler to say that it fits into the subgenre of science fiction where scientists save the world in the face of aliens, panicked citizens, paranoid politicians, and trigger-happy soldiers. To be sure there are other kinds of contact movies (E.T. for example) where friendly aliens are met by kids or everyday folks, who protect them from scientists and government types, but the idea of scientists, engineers, and related experts coming to the rescue is a common sci-fi theme. One could say Star Trek is founded on that notion both as a setting (in a near utopian society founded on technoscientific solutions) and as a plot device (problems and solutions are articulated in pseudo-technical and scientific ways).

However, I’m not so much interested in talking about science fiction here as one of the aspects of our current political situation: the role of experts and elites, particularly those who might fall within the pejorative moniker of “liberal elites.” I would call “liberal elite” a category, but I don’t think it really hangs together that well. The term is applied to people from many disciplines who have both disciplinary and political differences among them. And, as I increasingly tend to do these days, I am interested in understanding this issue in a kind of Latourian, new materialist rhetorical way, which has the added benefit of pissing off almost everyone. But really what that means here, in this most preliminary gesture, is taking people at their own word when they account for how they were made to act.

So take, for example, this article in Politico, which is basically a series of interviews with folks in rural PA who voted for Trump. Really any cultural critic could make quick work of these folks, demonstrating the racist, sexist, homophobic (etc.) ideological foundations of their views. One could unveil how they misunderstand the nature of the problems they face and how they need to be solved (e.g., why environmental regulations are important for them even though it means they can’t work in the coal mine or why nationalized health care and regulated global markets are a benefit to them). One might even make appeals to history and the Constitution to show why we should be moving toward a more open and diverse society, even though doing so makes permanent cultural changes to the places where they have lived. Fundamentally it would be an argument about how government is best directed by the advice and knowledge of experts.

I’m not doing any of those things here. There are plenty of places to find such arguments. I’m also not saying that I don’t find many of those arguments convincing myself. But that’s not the point. Furthermore, to make this clear: one could equally go into black, Hispanic, Muslim, and LGBT communities and gain a parallel understanding of their own accounts of how they were made to act. And one could also do the same among liberal, college-educated whites. One could undertake the same critical moves, and there’s also plenty of evidence of people doing just that, as the various intersectional tensions of the coalition on the left burst at the seams. And just as with the critiques of Trump voters, I find many of these critiques of Clinton, Bernie, and Green Party supporters equally convincing.

But what do I mean by convincing? Fundamentally I mean that I acknowledge that they conform to the rhetorical and discursive requirements of a genre that I recognize as having value in describing our experience. But what’s the limit of that?

Let me take a slightly less inflammatory part of this: climate change. I don’t want to get into this issue, but for basics, this Wikipedia article cites a number of studies regarding the number of climate scientists who believe the human impact on climate change is “significant.” There are a number of polls out there about the varying attitudes of Americans. I don’t really want to talk about whether or not climate change is real or significant. I want to talk about the shape of the discourse. It’s perhaps understandable that disagreements over the particular policies and actions the government should take regarding climate change would be organized around the primary ideological poles in our nation. What is strange though is that acceptance of the scientific conclusions about climate change should also mirror the same ideological commitments. In part what’s odd about that is that in many other instances my generally liberal-left colleagues in the humanities are skeptical and critical about the claims of science, but not so when it comes to climate change. Similarly, people on the right are happy to endorse scientific methods when they are building our military-technological infrastructure or helping oil companies drill and frack, but reject science as conspiracy when it comes to the climate.

Putting a rhetorical twist on a Latourian insight, part of the challenge for experts is convincing multiple, diverse audiences that the processes of scientific and academic knowledge-making result in constructions that have value. In a new materialist rhetorical perspective, this would include building human-nonhuman spaces that can facilitate this communication. I.e., it’s not just a matter of words. And it can’t just be unidirectional. That is, expert, academic-scientific discourses must contend with other discourses in this space and cannot expect to be able to demand recognition as truth. Clearly that has not worked in these communities, not only in terms of climate change but perhaps more importantly in terms of social policy and justice.

I have little optimism in the prospect of such things happening soon. We are far too divided not only on what the effects of proposed right-wing policies will be and the goals we should be seeking to accomplish but even fundamentally on the nature of the reality in which we are living. We can barely agree on the color of the sky, and we certainly cannot agree on why it is that color. Furthermore, I can’t even argue that we should be trying to overcome our divisions. This may very well be a situation of class-ideological struggle where one side is going to win out rather than there being anything like a compromise. It would probably take some singular event on the order of aliens showing up to unite us.

Really all I want to keep in mind is the limits of the expert discourses whose repudiation is one of the many apparent outcomes of the election.


friendship, encryption, and servers

30 October, 2016 - 11:04

Let’s set aside the partisan politics for a moment and consider these matters more broadly. I believe it was Ben Franklin who said, “Three men can keep a secret if two of them are dead.” It suggests a couple things. Perhaps a lack of self-control comes to mind, but I think of the sociality of information and the interweaving of our consciousness and subjectivity with expression, perhaps what Diane Davis terms response-ability. But it also makes me think about forensics. We can learn many things, CSI-style, from the dead, but we cannot retrieve their memories. However, forensics can retrieve computer memory, even from “dead” hard drives. Matthew Kirschenbaum devotes some time to discussing how difficult it is to ensure that data is finally irretrievable and erased.

In The Two Virtuals I focused more on the opposite side of this concern: how the process of “rip, mix, and burn” as a compositional process is connected to accessibility. The capacities of that burned media object vary in terms of who might read/view/use it and how easily it can be ripped and mixed in future compositions. These are rhetorical decisions.

And sometimes the decision might be “let’s severely limit the accessibility of this data” so that only our “friends” can use it. Now I’m using the term “friends” even more loosely than Facebook does, but Fb is a good example of this, right? We all set our security settings in relation to a group of users we call “friends” and we rely upon FB network security and encryption to prevent others from accessing that data. (Sort of. We realize that anyone can read our posts on our friends’ screens over their shoulders or that our friends can screen capture or whatever and share that publicly.)

In any case, friendship, audience, and encryption/access are all tied to one another, and they’re all connected to a broader understanding of rhetorical acts as involving risk. That is, we never fully know how others will respond to what we say/write. There’s a risk. Furthermore, we can never be 100% sure who will be in that audience, let alone the context in which they will receive our messages. This is what we saw with Climategate as well as what we’ve seen with some of the recent Wikileaks stuff. It’s also at stake with all kinds of private/secret information, from individual finance or health records to corporate secrets or national security or even something like FERPA regulations at a university.

The Ben Franklin approach works as long as no one actually needs to use the information for any purpose. Once it does need to be used, then we move into the “need to know” approach and try to secure those boundaries. But the tighter those boundaries are, the harder it is to use that information. For example, you could keep the data on an air-gapped computer in a bunker protected by military forces. You have to go through all these layers of security just to get in the room. I’m sure you’ve seen this movie before. It’s the one about how uninvited guests get in anyway. Aside from such cyber-fantasies, we are then talking about secure, encrypted networks. How’s that working for you? The weakest link is almost always the people. It’s called social engineering. What’s social engineering? It’s sophistic rhetoric. Watch Mr. Robot for more details. Basically what you’re doing here is convincing someone that you’re their friend. That’s what a password is. Like a secret sign to get into a private club.

I’m not saying there isn’t a wide range of technical matters here that are well beyond my expertise. Of course there are. But this is an argument for the importance of digital rhetoric, and specifically a digital rhetoric that is attentive to the role that nonhumans play in digital media ecologies.

John Oliver quipped that Hillary’s problems with email all stemmed from an unwillingness to carry two phones. It’s the first time that “wear cargo pants” was actually a good piece of advice (he said something to that effect). That’s funny. But the larger digital rhetorical question begins with understanding how these devices interoperate (or fail to do so) to create the media ecologies in which we live. And those understandings must include the risks inherent in rhetorical acts as they slip beyond our friends and intended audiences, as encryptions fail, and mediums make the dead speak.


designing rhetorical technologies of deliberation

14 October, 2016 - 09:17

An interesting article in The Atlantic, “The Binge Breaker,” discusses the challenges of ethical design for social media, smartphones, and related technologies. The article focuses on familiar and widespread experiences in digital culture: its addictive qualities and attentional demands. It is no surprise that devices and apps are built with the express purpose of attracting user attention: “the digital version of pumping sugar, salt, and fat into junk food in order to induce bingeing. McDonald’s hooks us by appealing to our bodies’ craving for certain flavors; Facebook, Instagram, and Twitter hook us by delivering what psychologists call ‘variable rewards.'” So sure, we are used to such subconscious inducements across most aspects of our consumer culture, and our default response is to call upon individual will to make good choices. To that rather uncritical response one might suggest that the best way individuals can make good choices here would be to begin by choosing to act collectively to insist on a different approach to design.

As I’ve discussed here in the past and has become a recurrent topic in the field, Ian Bogost’s conception of procedural rhetoric highlights the way in which digital media can undertake rhetorical, persuasive objectives through its design and computational procedures. Adding in the kinds of insights Mark Hansen brings about the way ever-faster technologies constitute a kind of precession of deliberation, making decisions for us before we even realize there are decisions to be made, we find ourselves in a situation where it is necessary to acknowledge, investigate, and intercede in the ways our media ecology encourages particular cognitive and agential capacities in our relations with it.

In straightforward terms, how do we approach the design of media and technology with a different set of values and purposes? How do we foreground a desire for our relations with the media ecology to develop a different set of cognitive-agential-rhetorical capacities? And what should those be?

Sure, there’s a certain amount one can do as a consumer. There’s no law requiring anyone to own a smartphone or have a Facebook account. If we set aside the “Just say no” option, one could experiment with a range of practices. The article mentions several in its focus on one particular individual, Tristan Harris, who is leading a kind of industry crusade on this matter. An obvious example would be shutting off all the automatic notifications. You could forgo the fingerprint login on your smartphone and instead give yourself a very long and complicated password. You could set schedules for how you use your phone. Leave it in an out-of-the-way place in your house when you’re home, somewhere you can hear it if someone really is trying to contact you but where it’s not a temptation to just check.

But you could also design these apps so that they were less addictive. What if every time you went on Facebook it began by announcing how many times you’d been to the site in the last 24 hours and the amount of time you’d spent? Then it started running a clock on the time of your current visit. Would that be annoying? Probably. So maybe they could just stop doing all the things they do to suck you in, like videos that run automatically in your feed.
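As a rough sketch of that hypothetical design (the class, its structure, and its behavior are invented for illustration, not a description of any real app), the idea might look something like this:

```python
import time

class UsageAwareApp:
    """Invented example: an app that announces recent usage when opened
    and keeps a running clock on the current session."""

    def __init__(self):
        self.visit_log = []        # timestamps of past opens
        self.session_start = None

    def open(self):
        now = time.time()
        day_ago = now - 24 * 60 * 60
        recent = [t for t in self.visit_log if t > day_ago]
        print(f"You have opened this app {len(recent)} times in the last 24 hours.")
        self.visit_log.append(now)
        self.session_start = now

    def session_minutes(self):
        if self.session_start is None:
            return 0.0
        return (time.time() - self.session_start) / 60

app = UsageAwareApp()
app.open()
print(f"Current session: {app.session_minutes():.1f} minutes")
```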

Those are all what I would consider kinds of brute force solutions. Back when I started teaching composition, violence on television shows was a common theme and the common answer was always “channel blockers” to prevent kids from watching the wrong shows. These answers are kind of like that. The more complicated question was to ask why we had shows like that, what their effects really were, and how we wanted the world to be different. Maybe we don’t want social media to be different; clearly a significant part of our collective psyches finds this all quite appealing. Indeed as this article observes in noting the counter-argument, one might say that “social media merely satisfies our appetite for entertainment in the same way TV or novels do, and that the latest technology tends to get vilified simply because it’s new, but eventually people find balance.” I think it’s possible to agree with that observation and say that what we’re talking about here is finding that balance and incorporating it into our design approach.

This is really where digital rhetoric should be working, right? Examining how different people, communities, and cultures participate in digital media ecologies; describing how digital media technologies operate to foster rhetorical capacities; teaching students an awareness of the rhetorical function of digital media in their lives; helping students develop compositional strategies for these environments; developing best, ethical practices for technology design and use in different contexts (schools, universities, workplaces, civic life, etc.); and getting directly involved in the design of these applications and technologies.

There must be dozens if not hundreds of different angles, methods, focuses, etc., etc. that one might take as a digital rhetorician coming out of this. This particular angle interests me.


one new materialist rhetoric and its relation to object-oriented ontology

2 October, 2016 - 08:06

There have been some “conversations” on social media and apparently on a panel at the Cultural Rhetorics conference going on this weekend regarding object-oriented ontology and rhetoric. I’m not at that conference, but I have read some of the online discussion on Twitter and Facebook. I’m not interested in rehashing that here, but I thought I would try to make my own position clear. Of course by clear I mean something fairly academic and abstract, but I figure anyone who is in a position to offer legitimate academic critique of such matters should be familiar with all of these references and contexts.

Really quickly though, most of my recent publications dealing with these subjects are in the list below, so you can check those out. Also, here’s a link to all the posts I’ve put in the category of “object-oriented rhetoric,” 88 in total (89 with this one). I’ve also, more recently, been using the category “new materialism.” As that would suggest, I’ve categorized a fair amount of my own blog writing under the term object-oriented rhetoric. In my experience over the last five years, many of our colleagues seem to use that term in a more general and less precise way than I would choose to, so that’s how I use it informally here for categorizing purposes on a blog.

In more precise, academic terms I would use object-oriented rhetoric to refer to an exploration of how rhetoric might function within the context of object-oriented ontology (OOO), which, to me, is a philosophy that has been principally espoused by Graham Harman and Timothy Morton, to a lesser extent Ian Bogost, and once upon a time by Levi Bryant. I’ve spent a fair amount of time exploring these issues and learned a great deal, but in the end I was never quite able to come up with an OOO-based rhetoric that worked for me. OOO as we know de-emphasizes the role of relations, especially in comparison to popular postmodern thought, and rhetoric, as near as I can figure it, describes a relation. Of course objects in OOO do relate to one another and it is possible to describe the rhetorical qualities of those relations when they occur. There can be an object-oriented rhetoric. It’s just not what I wanted to do, in the end.

So I describe my work as a new materialist rhetoric, which is a different, though related, and more capacious term than object-oriented rhetoric is, in my view. I draw principally on the work of Manuel DeLanda, Bruno Latour, and Jane Bennett, particularly in my current manuscript. I don’t know that Latour would call himself a new materialist. I think Bennett would and does. DeLanda is one of two people to whom the term is attributed (in the early 90s), but these days he calls himself a “realist philosopher.” In short, it turns out to be tricky to categorize people. Who knew?

Anyway, in the most general terms, here’s my thinking about a new materialist rhetoric:

  1. Rhetoric is a capacity that arises in the relations among two or more humans and/or nonhumans. Humans are not required. I describe these humans and nonhumans in terms of assemblages (DeLanda and Bennett) or actors (Latour). I assume that if you’re in the position of critiquing such concepts then you don’t need me to explain how they operate in these authors’ works.
  2. This capacity called rhetoric engenders further capacities for thought and action. As Latour says at times, we are “made to act.” Or one might think of distributed cognition, the extended mind, or cognitive ecology as concepts coming out of cognitive science that describe how thoughts arise in relation to an environment. DeLanda’s notion of capacity itself suggests qualities that are hidden and unavailable without relation.
  3. I know there are many questions about ethics and politics in relation to these theories. I view these theories primarily as methods for description. They certainly can describe how ethical and political practices arise. Are those descriptions useful? You’d have to see for yourself I guess. At times they do suggest that certain explicit or implicit decisions we’ve made about how the world works are erroneous in ways that lead to further problems. Ultimately they don’t tell you what you should do. I consider that a good thing.

So, my own work for the last 20+ years has focused primarily on the effects of digital technologies on rhetoric and pedagogy. Maybe you think that’s a stupid thing to focus on and that I should be studying something else. Whatever. Anyway, the emergence of digital media has fostered a wide range of matters of concern  (to borrow Latour’s phrase) both inside and outside of the university: from worries that “Google is making us stupid” to MOOCs to uproars over what someone tweeted. I find new materialist rhetoric to be an effective descriptive method for investigating these matters of concern primarily because it offers me a way to describe the rhetorical operation of nonhumans in a way that I find useful. To find out more, read my work.

Of course this is academia. All ideas are subject to critique. Critique is a common practice among my colleagues. If you want to critique my work, I’d like to be properly cited. It would also be great if my work were accurately represented. There’s no need to mischaracterize my work in order to critique it. It is 100% “critique-able” as is. There’s plenty of stuff my work does not do, so that’s a legitimate critique. It does draw on certain concepts rather than others. For example, it isn’t an object-oriented rhetoric, so if you want one of those you could critique me for not providing it.


populating threshold concepts in writing studies

9 September, 2016 - 13:52

In our Teaching Practicum, we’re reading Naming What We Know: Threshold Concepts of Writing Studies. If you aren’t familiar with it, it’s an interesting text with many contributors that seeks to identify some of the threshold concepts of our discipline, where “threshold concepts” have some specific, though unsurprising, characteristics:

  • Learning them is generally transformative, involving “an ontological as well as a conceptual shift . . . becoming a part of who we are, how we see, and how we feel” (Cousin 2006).
  • Once understood, they are often irreversible and the learner is unlikely to forget them.
  • They are integrative, demonstrating how phenomena are related, and helping learners make connections.
  • They tend to involve forms of troublesome knowledge, what Perkins refers to as knowledge that is “alien” or counterintuitive (qtd. in Meyer and Land 2006, 3).

I’ll return to that characterization in a moment. The book is edited by Linda Adler-Kassner and Elizabeth Wardle, who start their introduction to the collection by reminding us of the struggles we have had even with naming our discipline, let alone figuring out what it is about. There they come up with the following line: while we have struggled to define what the field is about, “researchers and teachers in the field have, at the same time, focused on questions related to a common theme: the study of composed knowledge.” The study of composed knowledge? Trying to define this discipline is an unenviable task, and I’m not saying that I have a better answer, but… the study of composed knowledge? They continue:

Within this theme, our work has been expansive. To name just a few areas of practice within it, we have studied what composed knowledge looks like in specific contexts; how good and less-than-good qualities of composed knowledge are defined, by whom, and with what values associated with those definitions and qualities; how to help learners compose knowledge within specific contexts and with what consequences for learner and context; the relationships between technologies and processes for composing knowledge; connections between affordances and potential for composing knowledge; and how composed knowledge can be best assessed and why.

This explanation helps make clear what is meant by composed knowledge, and I imagine most scholars in the field can see themselves in these generalities somewhere. Still. One might “compose knowledge” by practicing a golf swing, performing a laboratory experiment, or watching your dog play in the yard. Some of that knowledge may be trivial. Some might involve writing somewhere along the way, or not. So either I’m confused about what “compose” means or I’m confused about what “knowledge” means, because I think we study a very narrow slice of composed knowledge, and I think we also study things that are not composed knowledge. That is, in my mind anyway, rhetorical-expressive processes and events are not simply knowledge, if by knowledge we mean something along the lines of declarative statements. Maybe I could replace “composed knowledge” with “rhetorical-expressive processes and events” in the passage above, but I’m not sure that would be very helpful.

Fortunately I think we’re better off not attempting to identify essential, defining characteristics of the discipline but rather describing the population of assemblages within it. How do you know where the boundary is? Good question. The short answer is that you have to go look. Perhaps it will be clear, as when the land reaches the sea, but maybe not. Meanwhile though, the rest of this book is about trying to describe those assemblages or threshold concepts.  All the threshold concepts in the book are about writing. That is, each concept makes a claim, and most of the claims are about writing. Some are about writers or text or words or genre. I guess it would be too tautological to say that writing studies studies writing. If, as DeLanda observes, assemblages can be territorialized by code, then one way our field is populated is through activities coded as writing. Writing itself has become destabilized (or decoded or deterritorialized) in the digital era, so that boundary is not as crisp as it once was.

But let me return to the notion of the threshold concept itself. As described above, they sound to me like “aha” moments. The overarching threshold concept in Naming What We Know is “Writing is an Activity and an Object of Study.” As Adler-Kassner and Wardle note, “the idea that writing is not only an activity in which people engage but also a subject of study often comes as a surprise, partially because people tend to experience writing as a finished product that represents ideas in seemingly rigid forms but also because writing is often seen as a ‘basic skill’ that a person can learn once and for all and not think about again.” Yeah… ok. I think that’s probably because people tend to separate “writing” from “composing knowledge.” And also they probably don’t give the matter much thought. The average person knows they can’t sit down and write an article for a physics journal, but that’s because (they would say) they don’t know physics, not because they don’t know how to write. The idea that part of learning to know physics would be learning how to write physics articles is one of those subtle points. Actually it’s not that subtle, but let’s give “people” the benefit of the doubt. Admittedly it’s frustrating when the physics professor thinks she can teach the physics and I can teach the writing and the two will just magically combine inside students’ minds so that they can just write physics papers. Probably no one really believes it happens that way. We just don’t tend to give a lot of thought to how it actually does happen. But then again, that’s what writing studies is for! Investigating how writing happens… or, as I so charmingly put it in an earlier paragraph, investigating “rhetorical-expressive processes and events.”

So, does this concept seem to be a threshold? Is it that kind of aha moment? I’ve had some aha moments as a scholar. I remember in grad school when I first “got” the idea of rhizomes, and the next week felt pretty trippy as I was seeing rhizomes everywhere. I don’t recall having that kind of experience around this notion. Maybe because it came on slowly. Maybe it was because I wanted to write sci-fi novels when I was a kid and so I’ve been studying writing and trying to figure it out for a long time. In fact, I’m not sure I ever thought of writing as a basic skill that a person learns once.

For me, the metaconcept here is that writing is a messy business, and that’s something around which I have had recent aha moments in building a WID curriculum on campus. It’s one thing to understand this conceptually. It’s another to encounter it on the ground.

Categories: Author Blogs

genres, population thinking, and what the hell do you think you’re doing?

6 September, 2016 - 13:56

I’ve been working some more on basic concepts coming from assemblage theory and DeLanda, specifically in this case “population thinking.” Very briefly, populations are the way that DeLanda thinks about relations among individual singularities. The idea is that individuals form a population in a statistical way through the historical use of a common set of compositional processes (or assemblages). Depending on the particular assemblages at play, a population may have greater or lesser degrees of variation within it and more or less fixed boundaries. One of the examples I was using in what I was writing earlier today is the industrial corn field, with its population of corn plants with little variation and very fixed boundaries assured through certain industrial-genetic-chemical processes of farming. Of primary interest to DeLanda here (and often) is describing ontological processes without relying upon a concept of essence. So in this case there is no essential corn (or corniness, I guess), no Platonic corn critter in some heavenly plane. Instead there is a fixed process of assembly that results in a statistically reliable process of producing this population of corn. Of course “errors” in the process do occur, which is what is meant by statistically reliable.
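
To make the statistical point a little more concrete, here is a toy sketch of my own (not DeLanda’s formalism; the function and parameter names are invented for illustration) in which a “population” is nothing but the output of a shared generative process, and the tightness of that process determines how much individuals vary:

```python
import random
import statistics

def grow_field(target_height, process_noise, n_plants=1000):
    """Produce a population of plant heights from a shared process.

    The population has no essence; its character is just the statistical
    outcome of the process that assembles it. A small process_noise stands
    in (very loosely) for a tightly territorialized assemblage.
    """
    return [random.gauss(target_height, process_noise) for _ in range(n_plants)]

# Industrial corn: a tightly controlled process, little variation among plants.
industrial = grow_field(target_height=200.0, process_noise=2.0)

# A looser process (saved seed, mixed inputs): the same "kind" of plant,
# but far more variation and fuzzier boundaries.
heirloom = grow_field(target_height=200.0, process_noise=25.0)

print(statistics.stdev(industrial))  # small spread
print(statistics.stdev(heirloom))    # large spread
```

Nothing in the sketch defines corn by an essence; the two fields differ only in the processes that produce them, which is the point.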

But I got to thinking about populations that are more germane to rhetorical study: genre, for example. I won’t attempt a review of the rhetorical scholarship on genre here: write your own lit review! However, of the available definitions, I am sympathetic to that of activity theory, where genres might be defined by the actions they accomplish. But in activity theory it’s more complicated than that, because it’s never the “genre” of the memo that does the work in an office, for example. It’s the particular piece of writing that has the word “Memo” at the top of the page. The genre is always somewhere else, it would seem. In addition, the qualities that define a genre always remain elusive: e.g., what’s an “A” paper? What’s Derrida’s line about this? “The re-mark of belonging does not belong”? Something like that.

Population thinking offers a different approach to the problem, one that looks at the processes that produce a group of individuals that form a given population. For example, the students at my university: they do not have common essential characteristics but rather are produced through a process of admissions and enrollment in the institution. The degree of variation among the individual students, as opposed to other universities or a random sampling of humans, can be described through those admissions and enrollment processes: high school degree (or equivalent), test scores, English-speaking ability, ability to pay (or eligibility for financial aid), etc.

So, for example, let’s consider the genre of the academic journal article. Where to start? First, these are historical processes: i.e., they develop through time. Second, one can think about genres as part of larger genres or other populations. So the genre of journal articles in English has a history (going back to the 1880s anyway). It is part of a larger concept of scholarly genres and other populations of human symbolic action. If one wants to think about those commonalities, they would include shared media ecologies, cultural values surrounding authorship, and common institutional formations. These things resulted in writers sitting in offices with pen and paper (and/or typewriters), surrounded by books and other printed materials. One can look at how material limits and economics shaped the size of journals (and the length of journal articles), as well as the labor involved in producing articles at every step, and how those things fit into procedures like hiring or tenure review. These are the kinds of matters that all academics share in one way or another.

As all academics have experienced, one goes through this process in graduate school of reading and discussing journal articles and of writing seminar papers, dissertation chapters, and journal articles oneself. One gets feedback (from colleagues, mentors, editors, reviewers) and attempts to understand the necessary features of a journal article in one’s field in order to get published. While those particular descriptions of the characteristics of published articles may prove useful in helping one write a publishable text, from the point of view of assemblage theory what is at stake are the mechanisms. This is about statistical distribution. Thousands of graduate students and assistant professors typing away and trying to produce their first published journal article: many will fail, especially at first, but over time most will figure it out, or at least enough will figure it out for the population to sustain itself. If you wanted to be cruel about it, you could think of how many monkeys and typewriters it takes to reproduce Hamlet. An assemblage of monkeys and typewriters would produce a highly deterritorialized population of texts. A population of human writers, thoroughly trained through years of higher education, and linked to a common media ecology with territorialized and coded disciplinary structures and mechanisms, is a far more reliable assemblage.
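
If you want a rough picture of that difference in reliability, here is another toy comparison of my own (the names and the “disciplinary convention” are invented): generate a population of short texts from an unconstrained process and from a heavily coded one, and see how often each lands on the target.

```python
import random
import string

TARGET = "results suggest"  # a stand-in for a disciplinary convention

def monkey_text(length=len(TARGET)):
    """Deterritorialized production: uniformly random keystrokes."""
    chars = string.ascii_lowercase + " "
    return "".join(random.choice(chars) for _ in range(length))

def trained_text():
    """Territorialized production: a coded template with a few open slots."""
    verb = random.choice(["suggest", "indicate", "show"])
    return f"results {verb}"

def hit_rate(producer, trials=100_000):
    """Fraction of attempts that reproduce the disciplinary convention."""
    return sum(producer() == TARGET for _ in range(trials)) / trials

print(hit_rate(monkey_text))   # effectively zero
print(hit_rate(trained_text))  # roughly one in three
```

The monkeys produce a wildly varied population that almost never reproduces the convention; the trained writers produce a narrow population that hits it about a third of the time.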

The point here is that while one can go about describing the characteristics of individuals within a population, like articles in a particular discipline, you might be better off looking at the assemblages that produce them. And the real advantage to that is how it might switch one’s orientation on the relationship between humans and genres.

Ulmer writes in his introduction to Holmevik’s Inter/Vention that one might say that humans are the sex organs of machines. DeLanda has a similar line in War in the Age of Intelligent Machines, where he imagines a robot historian for whom “the role of humans would be seen as little more than that of industrious insects pollinating an independent species of machine-flowers that simply did not possess its own reproductive organs during a segment of its evolution.” What if we think of genres this way? We are the sex organs of genres? Pollinating an independent species of genre-flowers? If so, one cannot define genres by what they do (for us) or even by what we do with them for ourselves.

All these blog posts, tweets, status updates, text messages and so on. Genres? I suppose. Nonhumans? Undoubtedly. We re/produce them. Do they exist for us? I don’t think you can really say that. They are a population, a growing population, growing among and with us, but not for us. That’s a genre.

Categories: Author Blogs

digital humanities and the close, hyper, machine

18 August, 2016 - 16:47

As you may have seen, the LA Review of Books completed its series on the digital humanities today with an interview with Richard Grusin. I don’t know Richard all that well, though of course I am familiar with his work, and our paths did cross at Georgia Tech when I was a Brittain Fellow in the nineties. I have expressed some disagreement in the past with his arguments regarding the “dark side” of the digital humanities, but I’m not going to rehash those here. Instead I want to focus on this interview and connect it with some other ideas banging around in my head these days.

Grusin begins with this:

In the 1990s, I was really enthusiastic for this change because I was convinced that Western culture had undergone a major transformation in technologies of representation, communication, information, and so forth. It seemed to me that since education was not a natural form — it emerged at a certain historical moment under certain historical and technological conditions — and since those conditions were changing, we needed to change our response to it.

I’m not sure what to make of the past tense here. Is he no longer convinced there has been a “major transformation”? Certainly he still sees education as a historical process, and as such it would be only logical to assert that technological changes would result in educational changes–regardless of the degree to which “we” steered those changes. I put “we” in scare quotes as I’m not sure who is in that group. And, to be clear, education has changed a fair amount in the last 20-25 years. Maybe not as much as some other aspects of our culture, maybe not in ways we like, but it has changed.  And while we should (and do, extensively) discuss larger technological, economic, and other cultural forces that have shaped those changes, we (meaning those of us in the humanities, and English Studies in particular) should acknowledge our general, collective failure to rise to this challenge over the last quarter century.

If I were to summarize the response of English to the Internet over the last 25 years (roughly the period of my participation in the profession, starting in grad school), I’d say we ignored these changes at our peril and got steamrolled.

But on to the next quote, actually two quotes:

digital technologies in the classroom are really a way of engaging students, both in terms of talking to them about social media and in terms of objects of analysis. This is the life students lead. And as a teacher I think making what you teach relevant to your students is really important.

I think there are two places where digital work in the humanities is being done, and often being done outside the academy. One of these places is participatory culture. There has been an explosion of students writing online, be it blogging or fan fiction or whatever. And I think this is really one of the places where digital work in the humanities is being done as a result of changes in technology. We haven’t really made enough of a connection between this kind of participatory culture and the classroom, but I think we are moving in that direction. The other place is in the classroom. We think of the public in a kind of consumerist way. But our students are also the public.

In effect, here are the changes mentioned in the earlier quote. Humans interact in fundamentally different ways than they did 25 years ago. They routinely make things, share things, and do things that were largely unimagined in the 90s. This is as true of our students as it is of anyone else, maybe more so. Undoubtedly those digital cultural spaces are replete with social, political, ethical, rhetorical, and aesthetic challenges. In other words, there’s plenty for us humanists to do there: plenty for us to study, to make, and to teach. I imagine that’s what Grusin was thinking himself 25 years ago. Maybe it’s too late now for the humanities. Maybe. But no one can really know that, and there’s no point for humanists to act as if that were the case.

So when we think about the digital humanist engagement with these issues (and here I’m going back to the other part of the title of this blog post), we think about questions of the impact of technologies on literacy and thinking (or at least that’s where my head goes). I’ve been thinking about Katherine Hayles’ now familiar distillation of reading practices into the “close” (what you were taught in grad school), the “hyper” (what teenagers do on their phones), and the “machine” (what so many of the DH debates are about). As I was mulling these over in my head, I kept repeating them: close… hyper… machine… close… hyper… machine… Close, hyper machine.

Sure, it’s obvious. It’s that phone in your pocket. That machine that is close against your body, prone to spasmodic and hyper vibrations and tones. It’s that device you touch an average of 2,617 times every day. [I’ll give you time to insert your own joke here.]


The smartphone works as a good starting point for what interests me. However, I also want to think about a more abstract machine, but one that is no less proximate, intimate, or excitable for its abstraction. Grusin notes his own enduring interest in mediation, and that’s what’s at stake here. The smartphone is one instantiation of an interface between the human body and the digital media ecology, but there are others. On the other side of that interface is a plethora of nonhuman interactions, and then, somewhere beyond them, there’s you, reading this text (that is, assuming that you’re a human reader). It’s important to investigate all those nonhuman interactions, but our relations with these close, hyper machines are equally pressing (pun intended). If we can think about media, documents, genres, software, hardware, discourse, and symbols as these close hyper machines that activate our capacities for conscious thought (as well as other affective, subconscious, and unconscious responses), then I really think we’re on to something durable as humanists. That’s what I think Grusin was seeing in the 90s. Certainly other people were seeing it, other humanists (Ulmer comes to mind).

I know all this digital humanities talk is largely about something else. It’s about what literary scholars are going to get up to in the next decade or so. As I’ve said before, that’s not a problem to which I give much thought. I do find interesting the ways they articulate digital media when they argue for or against this or that. I wish I could say that now that this LARB series is over we could just put all that business in the rear view mirror. Certainly we could use a more productive discourse about how the humanities will operate in the digital world, maybe one that started by drawing on the rich existing conversations in media study, the cultural study of technology, and digital rhetoric about these matters.

buzz… buzz…

Categories: Author Blogs