Feed aggregator

rhetoric in the late age of the internet

Digital Digs (Alex Reid) - 21 May, 2017 - 11:51

Some 25 years ago, Jay Bolter described the “late age of print” not as an era when print media were disappearing but rather as a time when the question of an impending end began to characterize how we understood the technology. In imagining a late age of the internet, some semantic clarification is necessary. I do not think we are in a moment when we are questioning the end of a time when information is digital and networked. If anything, that transition is only beginning. However, we do appear to be in the late age of a particular version and vision of the web, and its confluence and shared fate with postmodern theory is worth noting, particularly for those of us in the humanities.

Here are two curious articles worth a read. The first and briefer one in the NY Times, “‘The internet is broken:’ @ev is trying to salvage it,” focuses on Twitter co-founder Evan Williams and his somewhat quixotic attempt to forge a respectable public online sphere through his online publishing venture Medium. Williams recounts a familiar problem: “The trouble with the internet, Mr. Williams says, is that it rewards extremes. Say you’re driving down the road and see a car crash. Of course you look. Everyone looks. The internet interprets behavior like this to mean everyone is asking for car crashes, so it tries to supply them.”

The other is a far longer and somewhat meandering tale by Shuja Haider in Viewpoint about the emergence of a transhumanist, alt-right movement called neoreactionism. Haider tracks the emergence of this concept by focusing on a few controversial figures, including one Nick Land, a mid-nineties postmodern philosophy professor who went apparently mad and then turned clearly into a quite extremist alt-right ideologue. The whole thing certainly reads like a late cyberpunk Neal Stephenson/Bruce Sterling mash-up, a combination of Snow Crash and Distraction, maybe. I won’t attempt to summarize this article for you except to offer this: “It’s a strange intellectual path that begins with ‘Current French Philosophy’ and settles on a right-wing Silicon Valley blogger whose writing is more Dungeons and Dragons than Deleuze and Guattari.”

You could look at Land’s story as an idiosyncratic tale of theory gone horribly wrong… You could if you weren’t able to trace the resonances between transhumanism and posthumanism that have been there for decades. You could say they’re two sides of a coin. You could think about how the internet was born into language about rhizomatic hypertexts, cyborgian politics, temporary autonomous zones, and so on and in the interplay between cyberpunk literature and the “theory-fictions” of the era (which Land himself still writes). Arguably all of this is fairly plain to see in A Thousand Plateaus where the potential for liberatory, nomadic, anti-state lines of flight can easily turn fascist. How does one discern the differences among the transcapitalist will to a globalist erasure of state power, terror networks grounded in anti-modern, anti-global religious fundamentalism, and the alt-right, libertarian, technocratic opposition to government? In some respects it’s easy, right? However, each is a version of a kind of rhizomatic, deterritorializing, nomadic assemblage operating against the modern, liberal, nation state.

While we’re at it, of course, we need to keep in mind that really all of critical theory in the humanities is aimed at dismantling the state as well–as patriarchal, capitalist, colonialist, etc. Is there a Left version of these deterritorializing politics? Sure. There are a variety of leftist accelerationist politics that essentially look to speed up and/or push through capitalism to whatever comes next. As Alex Williams and Nick Srnicek write, “We believe the most important division in today’s left is between those that hold to a folk politics of localism, direct action, and relentless horizontalism, and those that outline what must become called an accelerationist politics at ease with a modernity of abstraction, complexity, globality, and technology… an accelerationist politics seeks to preserve the gains of late capitalism while going further than its value system, governance structures, and mass pathologies will allow.” Perhaps you find the idea of a leftist accelerationism enticing, but it’s worth remembering how easily these ideas turn fascist.

But let’s turn back to Evan Williams, Twitter, and Medium. In effect, Williams’ hope appears to be that the internet could become some version of an egalitarian, Habermasian public sphere: a place where all citizens (or netizens, as we once romantically imagined ourselves) could gather for rational conversation and deliberation. In this scheme, it’s a move in the opposite direction: a reterritorialization of the web to reassert the modern state and its political rhetoric. I sympathize with the desire to do something about the mess that has been made of the web by capitalism, fundamentalism, extremism, and what ultimately amounts to little more than a pure affective urge to self-destruction, but an adequate response doesn’t lie in the 20th century.

As much as the needed response is not a technological fix, it also is not not a technological fix. We simply need, for one thing, a better understanding of our digital media-ecological rhetorical situation. That’s something rhetoricians can provide, and while I wouldn’t say it’s the biggest piece of the puzzle, there’s still plenty of work to do. The question the late age of the internet poses is what will follow. That is, what follows the social media communities and digital marketplaces that typify our daily engagement with the web and represent the globe’s most visited websites? The web began in the nineties as a fantasy about escaping the real world, as a place where we would have separate second lives and form parallel virtual communities. And the social web that followed in the next decade largely built on that fantasy by making it more accessible. But we can’t really think about the web that way anymore. The digital world is not a separate world, not that it ever really was. We need a new web, one that supplants the social web as the social web supplanted web 1.0, one that recognizes the rhetorical-material stakes differently.

It’s anyone’s guess how to sort out the larger political problems. With time, one would suspect, and let’s hope that we have enough of it to spare. But if we’re happy with the contention that print technologies spurred literacy and hence democracy and capitalism but also a fundamentalist reaction, then certainly we can ask the same questions of digital media. What can we build? Hopefully something better than is on offer here!

Categories: Author Blogs

academia’s weird pseudo-productivity: the summer edition

Digital Digs (Alex Reid) - 12 April, 2017 - 11:19

First off, what a bizarre, intractable rhetorical situation this is! There is the broad cultural characterization that professors do little work because they teach so few classes, which even in itself is an accurate characterization of many professors’ workloads. This is followed by a whole sub-genre of essays describing the intense demands placed on academics, how they work 60 hours a week and so on. All of that is further complicated by the conversations around adjunct faculty. In that context it just seems gauche for tenured faculty to complain about their work.

And so it goes… into the summer. Here’s a recent article in The Chronicle of Higher Education on “Making Summer Work.” This is the basic premise if you’re not an academic (though why you’d find this post interesting I’m not sure): academics generally have 10-month contracts, so they have no specific work obligations in the summer. And we are not paid to work in the summer either. At the same time, faculty generally work. They do research and/or they might teach a summer class for extra money. This article essentially offers advice on how to make the unstructured time of the summer more productive by establishing routines and setting short-term goals. That’s fine, but I think the whole thing misses the point. A larger context is called for.

What is that context? First, it’s American work culture. The average American worker gets 10 days of paid vacation. And, as you probably know (or this Wikipedia page will describe), many countries have far higher minimums for paid holiday and vacation days: more like five or six weeks instead of two. And that’s the minimum. This article suggests that faculty should take some time off during the summer away from work. “At least a week,” they suggest. One should note that these are unpaid vacation days. That’s a week of unpaid vacation carved out of the expectation of my otherwise two and a half months of unpaid work days, right?

Now before anyone gets too upset about that claim (see the first paragraph), we have to recognize that academic work doesn’t fit all that well into our general understanding of labor. You could punch a clock if you want but there’s never going to be a fixed relationship between time spent and productivity. Spending more time won’t necessarily make you more productive as either a researcher or a writer. An extra week spent reviewing secondary research won’t assure you of a new insight. Spending 8 hours in front of a word processor instead of 4 won’t mean that you end up with more publishable prose.

I’m fully sympathetic to the situation of academics, especially those who are untenured, though we all have expectations for productivity to meet. The measures of grants submitted and won, articles published and cited, books published and reviewed are all direct evidence of a kind of productivity, but they are at best correlations if the ultimate measure one has in mind is that one is making a meaningful contribution to society or at least a field of knowledge. That’s why I call it pseudo-productivity. Still, I understand the drive to use the summer to grind out a couple publications or whatever. I am even open to the argument that, even though academics are technically on 10-month contracts, the real expectation is that it’s a 12-month job, and that this contract language is really there to protect academics’ time and make sure they have space to meet expectations for research, professional development, course planning, and so on.

That said, I still object to the unexamined assumptions of articles like these. The hamster wheel of publication will produce enough juice to crank the tenure and promotion engine, and that’s part of our reality, but let’s keep in mind what’s going on. As the opening paragraph makes clear, no one is going to sympathize with the plight of academics trying to figure out how to make their “summers off” productive. Not even other academics. I would be reluctant to play into any of these commonplaces about working harder, putting in hours, and increasing productivity.

In other words, “I’ll show you the life of the mind.”

Categories: Author Blogs

the social-rhetorical challenges of information technology

Digital Digs (Alex Reid) - 10 April, 2017 - 12:11

I spent about an hour this morning responding to two different institutional surveys about technology: one coming from the library and asking about digital scholarship and the other coming from IT and focusing on their services and classroom technologies.

  • What technologies do scholars in your field use? What do you use?
  • What frustrations do you experience with publishing?
  • Which technologies of ours do you use in the classroom?
  • Do you teach online?
  • What do you think of this/that piece of hardware we offer you?
  • And so on.

It’s not that there aren’t technological problems with technological solutions in English or in the classroom. There are. But in my view the primary challenges lie at the intersection of these technologies with physical space, social organization, and rhetorical practices. For example, here’s a classroom commonly used by the composition program. This room seats 21 students. The photo is taken from the door into the classroom. The white desk at bottom right of the image is designed to be wheelchair accessible. You can see the instructor desk, the projector (partially), and along the far wall the technology cabinet with a monitor. Inside there’s a PC. There’s a document camera too and various connections if you want to bring your own laptop. Not pictured is the whiteboard. Also not pictured are a couple more desks: 20 plus the one accessible desk. (BTW I think those are some small windows with the shades pulled down.) Probably the most traditional composition and discussion-led classroom arrangement would put the student desks into a circle. There’s absolutely no space for anything approximating that. Another conventional practice would have students working in small groups. That too is very difficult to arrange in this tight space. The space is clearly designed for lecture, even though it only seats 21 students. The reason it is stuffed to the rafters with desks is economic, not pedagogical.

This is why a survey coming from IT asking me about the usefulness of the technology in the classroom seems tone deaf to me. The problem isn’t the technology, or if there are problems with the technology then they are obscured by the limits of the physical space. I would like for students to have enough space to bring their laptops, move around, work in groups, share their screens (even if only by all moving around in front of a laptop), and have conversations without getting in each other’s way. I’d also like to be able to move among those groups without worrying about pulling a muscle.

If I had that kind of space where such learning was possible then we could start asking questions about software that would enhance collaboration, give students more personalized control over their learning environments, and facilitate communication in a variety of media. But that would introduce a whole range of other social-organizational limits ranging from the structure of classroom meeting times and semesters to the shape of curriculum, pedagogies and learning objectives. These are not problems that the library or IT department can resolve. I don’t expect them to ask such questions. But it makes answering their questions seem pointless and mildly comical. Sure, there are many things that I would do, given the time and space to do them. Of course I’d be building a bridge to nowhere, in a curricular sense, but I could do it just to amuse myself. However, since I have no illusions of such practices becoming institutionalized in any substantive way, there’s really no point in involving IT or the library. All I’m likely to get for my trouble is some litany of policies, forms, and demands for assessment. I’m much better off without their help.

To be honest, once upon a time, that seemed like enough, and I know I wasn’t alone in thinking that (and maybe some people still do). Being a bit of a maverick, working under the radar, and doing your own thing seemed in line with a certain spirit of the web… 20 years ago. Maybe it still could be, but not so much for me.

I’ve had a similar experience in the realm of scholarship. I got my first couple academic jobs in part because my technical expertise (which was never all that stellar) set me apart from other candidates. In the early 00s there weren’t a lot of assistant professors who’d been teaching in computer labs and teaching online for a decade. There weren’t a lot signing up to teach students how to write for the web or to train preservice teachers to teach with technology and so on. This blog helped to establish my professional reputation. I published articles in online journals with images, audio, Flash, and video components. Such work continues to happen in journals like Kairos, Enculturation, and others. However, when I think about the obstacles to developing digital scholarship, I don’t think of technological limitations. I think about the intractability of genres.

When you think about a scholarly genre like an article or a monograph, you might ask what (social/communication) problem it attempts to solve. The first answer might be “to share research with colleagues.” The second answer might be “to validate research through peer review.” A third, more cynical answer might be “to provide a standard for tenure and promotion.” However, in English Studies, I think it is also true that the article and monograph are means of knowledge production, not just communication. That is, it is through writing in these genres that knowledge is discovered/made. Because of this, publishing is the way one becomes a scholar, not just the way one provides evidence that one is a scholar. It’s a kind of incorporeal transformation.

So what happens if you stop writing in those genres? Well, you stop being a scholar, or at least you no longer have a way of becoming the kind of scholar that your predecessors were. Maybe, certainly, you become a different kind of scholar, but what does that mean? It’s a more difficult question than trying to decide how to “count” some digital publication.

These problems are only intensified by the continuing churn of digital media. It’s one thing to create a video as (a part of) a scholarly article. At least that’s still recognized as a kind of authoring. You could even make an argument for blogging as scholarship (though I’ve never asked this work to be counted in any particular way, even though I do list it as a thing I do). But where do we go from there?

What are the disciplinary and social challenges we are seeking to address? What communication tools might we use to address them? What genres and other rhetorical practices might emerge as we do? And how do we make sense of this as part of the social-organizational context of our work as academics?

I didn’t really know how to phrase that within the context of the survey.


Categories: Author Blogs

What does/would “data rhetoric” look like?

Digital Digs (Alex Reid) - 25 March, 2017 - 08:35

This is something of a follow-up on my last post, where I concluded by suggesting that we might need a “data humanities” and a “data rhetoric” that paralleled the emergence of data science. I should probably say first that I don’t mean this as a replacement for terms like digital rhetoric or digital humanities. It’s probably closer to a specialization within those umbrella terms, but all these things are such interdisciplinary mash-ups anyway.

If you squint your eyes a little, you can see that data science has been around for a long time. You could say that it is statistics plus computer science. You could look at cryptanalysis in WWII as data science, I suppose. You could look at cybernetics or information science as data science. Moneyball, Freakonomics, and the 538 blog could all be examples of data science. On the other hand, data science is something new. It was a term “coined” in 2008 by D.J. Patil, and in this decade we’ve begun to see an explosion of jobs for people with the title “data scientist.” If you want to know more, here’s a Forbes article on “The Rise of the Data Scientist” and another in the Harvard Business Review that proclaims “Data Scientist: The Sexiest Job of the 21st Century.” Basically though, data scientists respond to a recognizable challenge. We are collecting increasing amounts of data. How do we make sense of it? And from a business perspective, how do we monetize it?

During this period, methods in the digital humanities variously called distant reading, macroanalysis, cultural analytics, and so on represent efforts within the humanities that run parallel to data science, calling on the same computational methods. Similar work has also been done in rhetoric and composition, though with less fanfare or controversy in the Chronicle of Higher Education. I would broadly characterize these efforts as using data-scientific methods to explore traditional objects of study and often to ask fairly conventional research questions. If that sounds like a criticism then I have not expressed myself well. I think this is valuable and interesting work (and something I might pursue once I’m done with my current book project). Here are two curiously related examples, “Finding Genre Signals in Academic Writing” by Ryan Omizo and Bill Hart-Davidson in the Journal of Writing Research and “The Life Cycles of Genres” by Ted Underwood in the Journal of Cultural Analytics. Each obviously investigates genres, one from a rhetorical perspective and the other from a literary standpoint. A comparison of these two might make for an interesting blog post, but not today. In any case, work like this is certainly part of what a data humanities/data rhetoric looks like.

If we might reduce that to the data-analysis of the humanistic/rhetorical objects of study, then we can also observe the inverse, which is the cultural-rhetorical critique of data. That work also has its value, and there’s plenty of it. That’s something our disciplines already know how to do very well. It’s basically about turning one’s critical lens onto this particular subject matter.

Predictably, my interest here is in something tangential to those two scholarly moves. And (equally predictably) it begins with new materialist ontological premises about the space humans and nonhumans share and the emergence of cognitive, expressive, and rhetorical capacities in the relations among us all in that space. And then it turns to something like Mark Zuckerberg’s February 16th announcement about his vision for Facebook. It’s nearly 6000 words long, but here’s the thesis: “In times like these, the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us.” Holy hell and good intentions, Batman! Think about it this way. How have the lived experience of human life, the communities we’ve grown, the knowledge we’ve constructed and shared, the material culture we’ve built, and the effects of all of that on our planet been shaped over the last 500 years by the social-technological-informational infrastructure of print media? Ask that same question about the last 5000 years and writing. Now ask it about computers and the last 50 years. Or “big data” and the last 5 years. Hell, ask it about big data and the last five months!

Yes, we need to figure out how to use data-analytical methods, and yes, we should continue to employ cultural-rhetorical critical methods to study these phenomena. But I think there’s more. We might investigate and experiment with emerging rhetorical capacities of these media-turned-data ecologies. I really wish I could tell you what that means, but I think the best I can come up with is that it will require a significant degree of openness. What I would consider the underlying ontological/compositional questions of rhetoric would remain with us. That is, how do our encounters with the expressive forces of data ecologies foster rhetorical and cognitive capacities? How would we describe those capacities? How might we recursively shape those capacities through technological design, institutional structures, laws, ethics, pedagogies, genres, and other individual and community practices?

I’ll try to end this with something concrete by returning to Zuckerberg’s discussion of Facebook as a “civically-engaged community.”

There are two distinct types of social infrastructure that must be built: The first encourages engagement in existing political processes: voting, engaging with issues and representatives, speaking out, and sometimes organizing. Only through dramatically greater engagement can we ensure these political processes reflect our values. The second is establishing a new process for citizens worldwide to participate in collective decision-making. Our world is more connected than ever, and we face global problems that span national boundaries. As the largest global community, Facebook can explore examples of how community governance might work at scale.

Now there are a lot of questions to ask about this! But the one I’m thinking of is how do processes of data collection and data analysis feed into how groups are structured and presented to users? What effect do individual actions, or even the actions of relatively small groups of people (e.g. tens of thousands in a multi-billion user population), have on such things? As a result, what new expressive and persuasive practices do we encounter as audiences and might we employ as users? If I want to start a new political movement, how would I make rhetorical use of data infrastructures to build coalitions, shape discourses, and move people to action? The bottom line is that symbolic and expressive behaviors operate in entirely new ways here. That’s where data rhetoric lies.


Categories: Author Blogs

the late age of close reading and the data humanities

Digital Digs (Alex Reid) - 18 March, 2017 - 07:36

I have been working on my book, so I haven’t found as much time to write here, and this post comes out of the work I’m doing there rather than any particular current event (though I’d like to think it has some currency!). In the broadest terms the manuscript considers the value of a particular kind of new materialist digital rhetoric in addressing some of the major cultural and disciplinary concerns with emerging digital media: attention, deliberation (e.g. Google is making us stupid), digital humanities debates, valuing digital scholarship, “moocification.” Those are some touchstones I guess. As I’m writing it though, one of the other consistent themes that comes across is English Studies’ reliance on the concept and practice of close reading. Literary studies is most associated with close reading, but it strikes me that it is also integral to rhetorical scholarship and the conventions of writing pedagogy.

As I discuss in the book, when I say close reading, I don’t mean it in the original, specific definition within New Criticism but in the broader way the term gets thrown around in English Studies. Katherine Hayles discusses this a fair amount in How We Think and never really identifies a clear disciplinary definition of what close reading is, even though it is clear that the practice is foundational to the discipline. As she notes, in many scholars’ eyes (and, I would believe, her own), “close reading not only assures the professionalism of the profession but also makes literary studies an important asset to the culture.” There’s no little irony in the fact that the thing that makes us professional and gives us value to the culture is something that we can’t actually define. Well, I’m going to give it a shot (and this is examined in more detail, and in a different way, in the book).

Close reading has to mean something different from just reading. It can’t simply mean giving one’s full attention to the text and reading all the words and sentences. These are things that people have to do in a lot of disciplines and professions: law, medicine, engineering, finance, the sciences, etc. Hayles sets up categories of close, hyper, and machine reading, and that works OK for me to a certain degree, but not when one starts to mistake whatever close reading signifies in those categories for what happens in English Studies. But let’s stick with English Studies for the moment. Close reading can be tied to a lot of interpretive methods, maybe all of them besides some in the digital humanities. Basically it involves long hours spent in solitary acts of reading long texts–underlining, highlighting, writing in margins. This is not to suggest it isn’t a social activity. Becoming a disciplinary close reader takes years of study, a shared community of practitioners, and a material, informational, technological, and cultural space that facilitates the activity (e.g. turn off your phone). But it’s much more than that. It’s really founded on the premise that interpretation, and hence the meaning of the text, is to be found/made in the careful consideration of word choices, style, specific sentences, and so on. A good amount of contemporary close reading is connected with what some call symptomatic interpretation (following on Fredric Jameson), which basically means that one views the text as a symptom of a larger cultural issue. As a result, close readings–in both literary studies and rhetoric–tend to move from quoting specific passages out of extensive texts to making fairly large arguments about race, gender, class, sexuality, and so on.

As I discuss in my book, close reading also informs our scholarly compositional activities. It is why we read papers at conferences: because everything rests on the specific choice and order of words, you can’t just extemporize or riff from an outline. It’s not only primary texts that we must read closely to create the evidence for our claims, but also the secondary scholarly material. As such, we must be able to read our own texts closely and compose them to be read closely. And make no mistake, the expectations of an audience of close readers shape our scholarly genres quite heavily. But we don’t stop at scholarship: we read our students’ essays closely, application letters for jobs and graduate school in our department closely, various university missives closely, even your status updates. It’s not hard to understand how a scholar in English Studies might make the categorical slip that Hayles does and mistake all non-hyper reading practices for the kind of close reading that English scholars do. In fact, I’m not even sure it is a mistake. I actually think that for many in English–literary and rhetoric scholars alike–the kind of reading that everyone does is “close reading” and we just happen to be the experts at it.

It’s the reading equivalent of the notion that English is the place where people learn “to write.” I think we’ve managed to cut away at that conceit a fair amount, but somehow the presumption regarding reading remains quite strong. This is an important point though. It does appear to be the case, as Hayles and many others observe, that students are less interested in the English disciplinary practices of close reading. We also, in broad cultural terms, talk about the struggles of attention in the wake of smartphones, social media, and so on. It’s probably natural to want to connect these dots, so we see them connected all the time.

But here’s the thing about this close reading-attention-literacy crisis: we’ve been in this situation for at least a decade. That Nick Carr article was published in 2008, and that was far from the first time such issues were raised. And yet, does it really seem to you that there is a reading crisis among professionals in America? That doctors, lawyers, engineers, managers, teachers, journalists, social workers, nurses, computer scientists, and so on are unable to do the reading needed to perform their jobs? I’m pointing to professionals because we’re talking about college students to begin with here. If we think of close reading not as a disciplinary practice but rather as some general ability to sit and read a text for information, then I don’t think we have a crisis there.

In fact, I think it’s fairly obvious that the challenge lies at the other end of the informational spectrum. How do we handle the massive flows of data we now gather?

As I suggested above, I understand disciplinary close reading as a technosocial practice. It emerged as a capacity developed among English scholars within a specific set of media-informational conditions, a particular media ecology. Compared to the century before, the 20th-century era of industrial print and mass media was information rich; compared to today it’s an information desert. While we will continue to need to read texts carefully in some generic sense, with different professional and disciplinary versions of that, the notion of close reading as a foundational practice (or as the epitome of what reading is) is long gone. Instead, we have a new set of rhetorical and aesthetic challenges in relation to media and information in an emerging digital media ecology. As we know, the flows of information are simply too intensive for humans to process using 20th-century reading practices. We require the mediation of digital technologies (what I call “close hyper machines,” jamming together Hayles’ three reading practices). These are things like smartphones, apps, the networked algorithmic procedures that fuel them, and the broad material network that makes the whole thing go. With this in mind, I tend to focus on the thing that sits in our hands: the point of interface between our bodies and media ecologies. I’m not saying these things are great. Far from it. I’m saying we need to develop rhetorical and aesthetic practices in relation to them and, in turn, shape those technologies as well.

Across universities, you are starting to see new majors and graduate programs in “data science.” Go on a job search site and look for data scientist jobs. They span industries. It’s interdisciplinary stuff, drawing on engineering, math, computer science, and so on. It is also often tied to the particular kind of data in question. There’s interesting and important work going on there trying to figure out in technical terms how to process and visualize data.

However, there are humanistic questions and challenges to be considered here. No doubt we can and will manage to generate symptomatic close readings of the work data scientists produce. But that’s not really the same thing as addressing the challenges I’m talking about. And we are already performing some kinds of data-scientific work, like macroanalysis or cultural analytics in the digital humanities, where we process and visualize information from data sets composed of literary texts. And that’s fine too (at least in my eyes), but it’s also not what I’m talking about.

To be honest, I’m not sure what a “data humanities” would look like, but it would require new reading and scholarly methods. In my mind, at its core, it would ask “What new rhetorical capacities emerge through our relations with emerging media ecologies?” It would need to approach this in both interpretive and experimental ways. That is, in part it would require discovering/building those capacities.

Anyway, this is clearly how it goes: I don’t get around to writing here for weeks, and then I write 1500 words. So I’m going to leave off here for now.


Categories: Author Blogs

universities, politics, DeVos, and conservatives

Digital Digs (Alex Reid) - 26 February, 2017 - 08:39

Some of my colleagues, like Seth Kahn and Steve Krause, have written about DeVos’s comments at CPAC. It’s all very much a rehearsal of the same old conservative red meat about liberal professors indoctrinating students. Like many such criticisms, I think they often reveal more about the critic than the object of her criticism. That is, as a conservative ideologue, perhaps you could not imagine not insisting that your students think the same way as you and not punishing them if they did not. After all, considering the way the administration treats journalists who ask questions, one could easily imagine how students would be treated. Also, this is the pedagogical operation of religious indoctrination, which is the primary education model of conservatives. So I would guess that conservatives imagine that professors just act the same way as they would but on the other end of the political spectrum.

One of the frustrating things for conservatives is that higher education is a complicated entity. Even one university is a complicated entity. Take UB for example. We have a UB Council, appointed by the governor, and basically these are business people (e.g., the president of M&T Bank). Needless to say, we also report to the governor and other elected officials. We’re a tuition-driven university, like most are, which means we largely thrive (or not) by serving students. Many of the things we do institutionally (for good or bad) are based on decisions about attracting, keeping, and supporting students. When you look at the research and curriculum of the university, you’ll see several things. First, most of the research grant money comes from the NIH or the NSF. These and related federal agencies go a long way toward establishing the research agendas of universities because, at least in STEM fields, you can’t really do research without funding from external sources like this. Similarly, the curriculum of universities in many majors is managed by large accrediting agencies that oversee, say, Engineering or Nursing or whatever. Like many public research universities, we have a lot of Engineering majors. So let’s say you’re studying mechanical engineering at UB. Driven by accreditation requirements, this major requires 111 credits. Then there are 17 additional credits of general education. That’s it. No electives. So maybe “MAE 204: Thermodynamics” is some kind of liberal conspiracy (since evolution and climate change are, I don’t know, maybe), but I would tend to think not. I mean, if you think thermodynamics is a liberal conspiracy, do you drive your car Fred Flintstone style? In short, there are a lot of things going on at universities and many different people there, so it’s hard to paint them all with the same brush.

Of course these liberal indoctrination accusations are typically reserved for a particular segment of professors typically on the non-STEM side of the campus. So let’s start with one thing. People, including professors, are allowed to have and express political opinions. It’s against the law to discriminate in hiring based on political affiliations (as some recent crackpot state legislators want to propose). In New York state, at least at SUNY, it’s against the law to advocate for a political candidate in the classroom. I.e., it would be illegal to try to convince your students to vote for Clinton or Trump. Outside of the classroom you’re like any other citizen.

But let’s get down to brass tacks and I’ll give you a personal example. My first job out of graduate school was at Georgia Tech teaching a required first-year writing course called “Introduction to Cultural Studies.” The task really had two parts. One was to teach students how to write academic essays. The second was to introduce them to the field of cultural studies. Undoubtedly cultural studies draws on a body of theories and methods largely associated with the political left: Marxism, feminism, postcolonial theory, etc. Many of the faculty who teach cultural studies are politically active, outside of class, and these theories inform their actions. Also, many scholars employ these theories in their research as they see such approaches as producing valuable insight into various aspects of culture. That said, in a class like this one, what the students need to do is demonstrate an understanding of how the theories work on their own terms. They certainly do not need to agree with them. To the contrary, their inclination would more often be to disagree (or to be indifferent). In those disagreements, the conversations we would have would often be about refining their understanding of the theories (e.g., explaining how their criticisms were based on faulty understandings, which is not to say there are not legitimate criticisms to make).

Is that indoctrination? I don’t think so. It is an introduction into a disciplinary body of knowledge, which is basically what all college courses do. Every discipline has its own theories, methods, interests, and ways of looking at the world. And I’m guessing this is what pundits imagine is a liberal conspiracy: the idea that the world is complicated, that there are many ways of examining and understanding it, and that there is some fundamental educational value in encountering that pluralism, challenging those ideas, and challenging one’s own ideas. I do think there are conservatives who agree with this premise, and when one looks at the university there really is a broad range of different ideas going on there, but when it comes to appealing to the base, not so much.


Categories: Author Blogs

How do you think rhetoric works?

Digital Digs (Alex Reid) - 25 February, 2017 - 18:17

A recent article by Elizabeth Kolbert in The New Yorker seeks to explain “Why Facts Don’t Change Our Minds.” The article is in reference to several new books written by cognitive scientists. The first, by Hugo Mercier and Dan Sperber, called The Enigma of Reason, recounts numerous psychological studies examining the various ways in which people hold on to their views even when presented with evidence that those views are totally incorrect. This includes familiar problems like confirmation bias and underpins familiar pieces of advice such as the importance of making a good first impression. Mercier and Sperber’s contribution to this topic is to provide a kind of evolutionary explanation for why human minds work this way.

Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

(I have no idea what the “ö” is about.) And what are those problems? Essentially “to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.”

I’ll get back to that in a second.

The article then turns to another book by Steven Sloman and Philip Fernbach called The Knowledge Illusion: Why We Never Think Alone and specifically a concept they term “the illusion of explanatory depth.” Their first example is the toilet. Most people imagine they know how a toilet works, but it turns out to be quite complex. As I would put it, this is how technologies, discourses, and institutions are meant to function. They expand our capacities for thought and agency by embedding these capacities into networks. I do not need to know how to build a computer or a network in order to write a blog post. That’s how knowledge works among bees, you know? A bee finds a flower and it can instruct another bee through a dance where to find that flower. But the bee that learns that dance can’t teach it to another bee. (I’m channeling Kittler here, I think.) For us, information works differently. Technologies work differently. I’m not exactly sure what the illusion is, however. Do people really think they know how the technologies around them work? Kolbert goes on to bring this to a kairotic moment:

If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

One more step. Kolbert turns to a third book by Jack and Sara Gorman, Denying to the Grave: Why We Ignore the Facts That Will Save Us, which tries to figure out how to overcome problems like confirmation bias and its physiological foundations (as they see it). It turns out not to be that easy. As Kolbert concludes, “Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science.”

Gee, that is a poser. But maybe we can start with some of the built-in confirmation biases at work here.

  1. Reason doesn’t work the way that it is imagined to function here.
  2. Because reason doesn’t work that way, science doesn’t work as it is imagined here either.
  3. If you have a poor model of science and reason, then it isn’t going to be very effective in addressing this concern about how people become convinced of views and then hold on to them in the face of compelling evidence to the contrary.

Let’s return to Kolbert’s ACA example and insert the most inane version of it. Let’s say I am opposed to “Obamacare” (because I hate anything with Obama’s name attached to it) but have no idea that Obamacare and the ACA are the same thing. I rely on the ACA and I’m happy with it, but I hate Obamacare and want it done away with. Can you get any stupider than that? I don’t know. Are there warnings on gas pumps not to drink the gasoline? Despite this, this imagined person’s position is not “baseless.” There is reasoning. It’s a straightforward syllogism.

  1. I hate all things related to Obama.
  2. Obamacare is related to Obama.
  3. Socrates needed better health care.

Maybe when this person figures out ACA and Obamacare are the same thing, that opinion shifts, but perhaps not as far as you’d think. This is the underlying issue with all of the major areas of political disagreement: education, health care, human rights, climate change, economic regulations, foreign policies, etc.

Effectively, the modern state insists that citizens must accept that their world operates in ways that they cannot directly experience and can never fully understand. Even the most fully educated person in the world can only have understanding in a very narrow slice of the world and only then through ongoing participation in a complex and extended system of human and nonhuman partners. Even with this, the knowledge we produce is never fully “true”; it is only the best construction that we can manage. It’s a construction over which experts disagree and which is continually revised and refined. This comparatively fragile and carefully wrought expert knowledge then butts up against the felt, but also reasoned, sense of reality as it is directly experienced by citizens, both individually and in small communities (families, friends, co-workers, etc.).

So on the one hand you have dozens of people from a variety of intelligence agencies reviewing hundreds of reports and thousands of data points to determine the likelihood of an immigration ban based on nationality being an effective deterrent against terrorism in the US. You end up with lots of conversations and data, and conclusions that are carefully parsed and reasoned. But even though the conclusion may be straightforward (i.e., this won’t work), working through the reasoning is hard if not impossible if you aren’t an expert. On the other hand, you have citizens and their friends who feel threatened, whose direct experience with Arabs is quite limited if not non-existent, and who have a logical argument (albeit one based on misinformation and faulty premises). If you want to compare it to some technological arguments: people might feel that seat belts in cars or motorcycle helmets are unnecessary or that owning a gun makes them safer. These are equally examples of how people have an illusion of their depth of knowledge, believing they know how these mundane technologies function (and their dangers) when they do not.

None of that answers the question of how to change people’s minds. Obviously it isn’t easy. But if you realize that people gain confidence in their worldviews through networks of humans and nonhumans, then shifting that confidence probably means altering those networks and their strength. One might say that the Trump administration is seeking to weaken some networks supported by mainstream media. Of course that’s not very subtle and probably only serves to strengthen the faith of his opponents in those networks. Different exertions of political power might work. If you’re not the president, however, you will need a different strategy.


Categories: Author Blogs

reality checks

Digital Digs (Alex Reid) - 15 February, 2017 - 13:48

Maybe you saw John Oliver on Last Week Tonight describe his plan to begin airing commercials on morning shows Trump watches in order to educate him on a few key points. If you haven’t, it’s worth a laugh.

Oliver’s basic argument, though, is that we have a president who doesn’t believe that an agreed-upon reality exists. Instead, he gets to believe whatever he wants, his supporters get to believe whatever they want, and his critics and opponents are simply people with a different set of beliefs. But none of us has access to reality. In this context, Oliver argues, as many have, that we must have a basis for establishing facts, and without such a basis we’re in serious trouble on many levels.

Of course it is not just Trump supporters who believe in conspiracies. In the NY Times, Sydney Ember points to an increasing number of Democrats embracing conspiracy theories. As Ember points out, though (and which sounds to me like an echo of Fredric Jameson), conspiracy theories arise as a way of asserting some control over a situation, of making something vast and complex more understandable by depicting it as the actions of a group of people with recognizable motives. On the other hand, as the saying goes, just because you’re paranoid…

As is obvious, I have no more idea than you what is going on with Trump, the White House, the Russians, and so on, though all of this does make me think there should be a new version of The Americans set in the present day. Of course, I can offer you a theory. I’ve got a whole bag full of hermeneutic strategies, plus I’ve been to the movies and read spy thrillers and cyberpunk dystopia novels. I think I’ve seen every James Bond film. I could go on all day: white supremacist militias, egomaniacal theocrats, oil magnate star chambers, genocidal fascists, tripped-out technocrats, disgraced generals, rogue spy networks, etc., etc. What do you want?

Here’s the thing though… there is something or some things that are actually happening in reality. We need to know what they are. That knowledge has to be built. If something happens right before your eyes, your mind makes sense of it. Even knowledge from direct observed experience is built. And when you’re trying to construct knowledge of something that cannot be directly observed–because it is distributed or hidden, too big or too small, and so on–then constructing that knowledge is harder. It requires time, effort, and material resources. Often it requires the collaboration of multiple people. And in our culture that means it takes money.

As a result, a reality check is also a bank check: knowledge has to be paid for. Because that’s true, we can always doubt the motives of the people constructing the knowledge. They are researchers working for a chemical company or bank executives or government officials. Scientists at universities do the research that funding agencies will support. Journalists report the stories their editors will publish or air. Politicians tell you the things they think will get them re-elected. But there is no undoing that. When it comes to knowing about the world, there’s no such thing as a free lunch. That said, we can evaluate the strength of the knowledge we produce, though in doing so we must write another check. This is where we find ourselves in Latourian matters of concern.

Some might say that our democratic republic is coming to an end. Again, I can offer you many interpretations and stories about that. To me though the failures of our government begin and end with our not understanding well enough how the thing works on a material level. As a result we get all these conspiracy theories, and even though those things are poor constructions of reality, they are more than powerful enough to elect presidents, topple governments, start wars, and worse.

As I sit here, I am honestly mystified by what goes through people’s minds. Certainly, I have values which may be different from yours, I have a vision of the society in which I’d like to live, and I would and do work toward creating that world. At the same time, I can distinguish between what I’d want and what is. Similarly, though I can interpret the world (and we all must do so regularly in order to live), I can recognize that my interpretation is always limited and can be wrong. These must be recursive processes. That is, as I refine my understanding of the world, my values, vision, and actions must also be refined. However, these processes can get all confused, so that, for example, one’s interpretation of a religious text can drive a system of values and an understanding of the world. If those interpretations are flawed but cannot be revised because of belief, then one ends up with some serious cognitive shortfalls.

In other words, shoehorning the world into one’s existing belief structure is a bad long-term survival strategy. That’s what we might call knowledge on the cheap. It works fine for simple, reliable stuff like gravity. In fact it probably worked just fine for most purposes through most of human existence. But not for stuff like this.  Not for constructing knowledge about networks of dozens and hundreds of actors distributed around the world. Not for running a government with thousands of employees, representing hundreds of millions of citizens. Knowledge like that comes with a big bill and must be carefully constructed and tested, but it can’t take forever to make either. You need to have systems in place. You need elaborate institutions with trained professionals to make those institutions work. If you don’t have those things, then all you’re left with is bullshit conspiracy theories constructed by jamming into your brain whatever random knowledge you encounter and spitting out some preconceived notion.

Not smart.

Categories: Author Blogs

interpretation, tarot cards, and the power of truth

Digital Digs (Alex Reid) - 3 February, 2017 - 09:58

Long ago, when I was an undergrad, I learned how to read Tarot cards. (Hey, stop rolling your eyes; I saw that.) I haven’t done it in years, though when I was a professor at Cortland we’d go on writing retreats to this Adirondack camp with our students, and my colleague Vicky Boynton and I would do readings for students for a laugh. No one I’ve ever read cards for believes they are magical or that I am psychic. At the same time, very often people who were generally strangers to me would remark on how uncanny the experience was, how I seemed to know things about their past, and how the predictions of the future made sense. The immediate explanation one might want to offer is that I was conning them, that it was a sly rhetorical performance where I read their reactions, fished for information, and other things like that.

But I’ll be honest: I wasn’t working that hard.

A better explanation is equally obvious and even less magical than that. If you know anything about Tarot cards, you’d know that the various suits and the major arcana have story arcs to them and that there’s a great deal of structural similarity among the story arcs (e.g., the aces always have something to do with beginnings and the tens always have something to do with endings). On top of that, the various patterns in which the cards are laid out (the Celtic cross is the most recognizable) also have a story structured into them (some spots are about the past, others about the people around you, your hopes and fears, etc.). In short, a Tarot reading is a pseudo-random story generator where all the stories fall within a particular set of plots and themes. And then the people getting their cards read do the rest. Just as you might read a novel and get taken up in the story, people who are willing can see themselves in the story of a tarot reading. And since the built-in morality of the cards (what you should or shouldn’t do, what to be careful about, what an opportunity looks like, etc.) is culturally familiar, the inherent lessons aren’t hard to follow or at least to see as meaningful and sensible.
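The pseudo-random story generator described above can be sketched in a few lines of Python. Everything here (the card meanings, suit themes, and spread positions) is an invented simplification for illustration, not an actual Tarot system; the point is just that fixed narrative structures plus a random draw yield a story that feels coherent:

```python
import random

# Invented, simplified card meanings: each rank carries a story beat,
# each suit a theme. A real deck is far richer, but the structure is the point.
RANK_BEATS = {1: "a beginning", 5: "a conflict", 10: "an ending"}
SUIT_THEMES = {"Cups": "relationships", "Swords": "struggle",
               "Wands": "ambition", "Pentacles": "work and money"}

# The layout supplies the narrative frame: each position in the spread
# already has a slot in the story before any card is drawn.
POSITIONS = ["In your past", "In your present", "In your future"]

def reading(seed=None):
    """Draw one card per position and narrate the result."""
    rng = random.Random(seed)
    deck = [(rank, suit) for rank in RANK_BEATS for suit in SUIT_THEMES]
    draw = rng.sample(deck, k=len(POSITIONS))
    return "\n".join(
        f"{position}: {RANK_BEATS[rank]} in matters of {SUIT_THEMES[suit]}."
        for position, (rank, suit) in zip(POSITIONS, draw)
    )

print(reading(seed=7))
```

Run it with different seeds and you get different stories, all within the same set of plots and themes; the person hearing the reading does the rest.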

So that’s Tarot cards. I’m assuming I didn’t give away any secrets there. Here’s the more difficult step. Critical theories work the same way as Tarot cards. They are heuristics, sets of procedures, for constructing interpretations. They feel true in the same way Tarot readings feel true, because while they rely upon known and predictable structures of meaning in the theory (which, for example, help readers identify the symptoms of various cultural-ideological structures as they manifest in a text), they also almost invariably generate some unexpected connections. We call these insights, and they strike us with the power of truth. I often tell graduate students about the first time I read A Thousand Plateaus and I was seeing rhizomes everywhere for weeks. What was compelling for me about that book and a lot of postmodern theory was how it would generate that affective/aesthetic experience of the truth. It’s probably the closest thing I’ve ever had to a “religious experience.”

The difference is that about a year into graduate school I caught on to the trick. After that, I could still appreciate the power and usefulness of theories and interpretations. That’s like the part where, as a reader of Tarot cards, you recognize that you can actually make a single layout of the cards tell different stories if you like. And each of those stories has the same capacity to impart that experience of “truth,” provided that you tell it well enough. All of that led me early on to view theory as a heuristic for composing rather than a hermeneutic for revealing, and to see that the results could be valued for their significance, what they were able to do, rather than their signification, what they claimed to represent.

So let’s turn this toward current events… Perhaps you have seen some interpretations of, oh I don’t know,

  • why Clinton lost
  • why people voted for Trump
  • why people still support him
  • what Trump’s actions and plans will mean for America and the world
  • what those who oppose Trump should do next, etc. etc.

Am I suggesting that we should stop interpreting? Of course not, as if such a thing were possible. I am suggesting that one might view the ontological status of interpretations differently, not as revelations of the truth but as tools that create capacities. This is not as dramatic a difference as it might first seem. At the core, representations of truth (or claims to such) have value because the decisions and actions we take based on them work as we might hope. (Or at least that’s my argument here.) I’m saying something slightly different, which is that the knowledge we construct has value if it allows us to do things. We draw a map through the wilderness. Does it “truly” represent the wilderness? Maybe. What do you mean by true? If we follow the map, do we get to the other side? Yes. Well, okay then, that’s what we’re after.

Are the Trump administration and its supporters truly racist, religious extremists? Maybe. If it quacks like a duck, etc. Perhaps such an interpretation strikes you with the ringing power of truth. Maybe it pisses you off. Here, though, the question is what does such a construction allow us to do? The difference is that claims to represent the truth lock you in. If you believe in a particular interpretation of the Bible, for example, you’re locked into those capacities. You can’t even consider a different interpretation of the world. The same thing can happen with critical theory, perhaps with not quite the same degree of intransigence (though critical theories have their true believers as well).

The danger with such things, as we’ve already seen, is that people take all this to mean that there’s no such thing as truth, that one can pick whatever “alternative facts” suit their purposes and, when necessary, wholly fabricate statistics (or terrorist attacks that never happened). Not to psychologize this business, but that’s kind of like truth withdrawal or something. If we can’t have the Truth, then we know nothing but lies and all lies are equally untrue. Yes, it is crazy, but one hopes it’s a temporary insanity that’s part of the recovery process. The tough part is that when you can’t rely on truth, when you can’t expect some pre-formatted procedure to spit out truth like pressing keys on a calculator, then you have to work a lot harder. Every connection, every mediation, must be tested and made strong. And ultimately the interpretation will need to prove itself in the capacities it offers us.

In short, instead of telling me what you think is true, make something useful.

Categories: Author Blogs
Syndicate content