Digital Digs (Alex Reid)

an archeology of the future

faculty at work

24 August, 2015 - 13:40

This is one of those posts where I find myself at a strange intersection among several seemingly unrelated articles.

The first three clearly deal with academic life, while the last two address topics near and dear to faculty but without addressing academia.

The Rees, Scott, and Gilbert pieces each address aspects of the perceived, and perhaps real, changing role of faculty in curriculum. Formalized assessment asks faculty to articulate their teaching practices in fairly standardized ways and to offer evidence that, if not directly quantitative, at least meets some established standards. It doesn’t necessarily change what you teach or even how you teach, but it does require you to communicate about your teaching in new ways. (And it might very well put pressure on you to change your teaching.) The Scott piece ties into this with the changing demographics and motives of students and increased institutional attention to matters of retention and time to degree. While most academics are likely in favor of more people getting a chance to go to college and being successful there, Scott fears these goals put undue pressure on the content of college curriculum (i.e., dumb it down). Clearly this is tied to assessment, which is partly how we discover such problems in the first place. It’s tough if you want your class to be about x, y, and z, but assessment demonstrates that students struggle with x, y, and z and probably need to focus on a, b, and c first.

Though Rees takes up a different problem, I see it as related. Rees warns faculty that flipping one’s classroom by putting lecture content online puts one at risk. As he writes:

When you outsource content provision to the Internet, you put yourself in competition with it—and it is very hard to compete with the Internet. After all, if you aren’t the best lecturer in the world, why shouldn’t your boss replace you with whoever is? And if you aren’t the one providing the content, why did you spend all those years in graduate school anyway? Teaching, you say? Well, administrators can pay graduate students or adjuncts a lot less to do your job. Pretty soon, there might even be a computer program that can do it.

It’s quite the pickle. Even if we take Rees’ suggestion to heart, those superstar lectures are already out there on the web. If a faculty member’s ability as a teacher is no better than an adjunct’s or TA’s, then why not replace him or her? How do we assert the value added by having an expert tenured faculty member as a teacher? That would take us back to assessment, I fear.

Like many things in universities, we’re living in a reenactment of 19th-century life here. If information and expertise are in short supply, then you need to hire these faculty experts. If we measure expertise solely in terms of knowing things (e.g., I know more about rhetoric and composition, and digital rhetoric in particular, than my colleagues at UB), then I have to recognize that my knowledge of the field is partial, that there’s easy access to this knowledge online, and that there are many folks who might do as good a job as I do with teaching undergraduate courses in these areas (and some who would be willing to work for adjunct pay). I think this is the nature of much work these days, especially knowledge work. Our claims to expertise are always limited. There’s fairly easy access to information online, which does diminish the value of the knowledge we embody. And there’s always someone somewhere who’s willing to do the work for less money.

It might seem like the whole thing should fall apart at the seams. The response of faculty, in part, has been to demonstrate how hard they work, how many hours they put in. I don’t mean to suggest that faculty are working harder now than they used to; I’m not sure either way. The Gilbert, Scott, and Rees articles would at least indicate that we are working harder in new areas that we do not value so much. Tim Wu explores this phenomenon more generally, finding it across white-collar workplaces from Amazon to law firms. Wu considers that Americans might just have some moral aversion to too much leisure. However, he settles on the idea that technologies have increased our capacity to do work and so we’ve just risen (or sunk) to meet those demands. Now we really can work virtually every second of the waking day. Unfortunately, Wu doesn’t have a solution; neither do I. But assessment is certainly a by-product of this phenomenon.

The one piece of possibly good news comes from Steven Johnson, whose analysis reveals that the decline of the music industry (and related creative professions), predicted by the appearance of Napster and other web innovations, hasn’t happened. Maybe that’s a reason to be optimistic about faculty as well. It at least suggests that Rees’ worries may be misplaced. After all, faculty weren’t replaced by textbooks, so why would they be replaced by rich media textbooks (which is essentially what the content of a flipped classroom would be)? Today people spend less on recorded music but more on live music. Perhaps the analogy in academia is not performance but interaction. That is, the value of faculty, at least in terms of teaching, is in their interaction with students, with their ability to bring their expertise into conversation with students.

Meanwhile we might do a better job of recognizing the expansion of work that Wu describes: work that ultimately adds no value for anyone. Assessment seems like an easy target. Wu describes how law firms combat one another with endless busywork as a legal strategy: i.e., burying one another in paperwork. Perhaps we play similar games of one-upmanship both among universities and across a campus. However, the challenge is to distinguish between these trends and changes in practices that might actually benefit us and our students. We probably do need to understand our roles as faculty differently.

Categories: Author Blogs

finally, robotic beings rule the world

19 August, 2015 - 12:30

Last week in The Guardian, Evan Selinger and Brett Frischmann ask, “Will the internet of things result in predictable people?” As the article concludes,

Alan Turing wondered if machines could be human-like, and recently that topic’s been getting a lot of attention. But perhaps a more important question is a reverse Turing test: can humans become machine-like and pervasively programmable.

This concern reminds me of one I’ve mentioned a few times here recently, coming from Mark Hansen’s Feed Forward, where the capacity of digital devices allows them to intercede in our unconscious processes and feed forward a media infoscape that precedes, shapes, and anticipates our thinking. In doing so, as Hansen points out, it potentially short-circuits any opportunity for deliberation: a point that is likely of interest to most rhetoricians, since rhetoric (in its quintessential modern form anyway) hinges on the human capacity for deliberation. This is also a surprising inversion of the classic concept of cybernetics and the cyborg, where it is feedback, information collected by machines and presented to our consciousness, that defines our interaction with machines.

Put simply, the difference between feed-forward and feedback is the location of agency. If humans become predictable and programmable, does that mean that we lose agency? That we cease to be human?

Cue Flight of the Conchords:

In the distant future (the year 2000), when robot beings rule the world… Is this too tongue in cheek? Maybe, but it strikes me as a more apt pop cultural reference than The Matrix, which is where Selinger and Frischmann turn when they note that “even though we won’t become human batteries that literally power machines, we’ll still be fueling them as perpetual sources of data that they’re programmed to extract, analyse, share, and act upon.” Why is “Robots” better? Perhaps unintentionally, it presents robots and humans as one and the same. Humans may be dead, but robots are surprisingly human-like. The humans-turned-robots revolt against their human oppressors who “made us work for too long / For unreasonable hours.”

The problem that Hansen, Selinger and Frischmann identify is also the problem Baudrillard terms the “precession of the simulacra” (which, not coincidentally, is the philosophical inspiration for The Matrix). And it suggests, like The Matrix, that the world is created for us, before us, to inhabit.

We might ask, even if it sounds perverse, how awful is it if people are predictable/programmable? Of course, we are (or hope to be) internally predictable. When I walk down the hall, I want to do so predictably. When I see a colleague coming toward me, I want my eyes and brain to identify her. I’d like to wave and say hello. And I’d like my colleague to recognize all of that as a friendly gesture. Deliberation is itself predictable. Rhetoric and persuasion rely upon the predictability of the audience. I suppose that if you knew everything about me that I know about me, then you could predict much of the content of this post. After all, that’s how I am doing it.

That said there’s much of this post that I couldn’t predict. Maybe the “perpetual sources of data” available to digital machines know me better. Maybe they could produce this post faster than me. Maybe they could write a better one, be a better version of me than I am. After all, isn’t that why we use these machines? For the promises they make to realize our dreams?

I think we can all acknowledge legitimate concerns with these information-gathering devices: what corporations know about us, what governments know about us, and what either might do with the information they glean. Furthermore, no doubt we need to learn how to live in a digital world, to not be driven mad by the insistent calls of social media, alternating between appeals to our desires and superego judgments of what we should be doing. However, such concerns are all too human; there’s nothing especially robotic about them. While we want to be predictable to ourselves and we want the world to be predictable enough to act in it, we worry about our seeming or being predictable to others in a way that causes doubts about our agency.

However, in many ways the obverse is true. It is our reliable participation in a network of actors that makes us what we are (human, robot, whatever).

This is a complex situation (of course). It requires collaboration between human and machine. It requires ethics–human and robotic. It is, in my view, a rhetorical matter in the way that the expressive encounters among actors open possibilities for thought and action to be shaped. I would not worry about humans becoming robots.

 

Categories: Author Blogs

Neoliberal and new liberal arts

15 August, 2015 - 14:23

In an essay for Harper’s William Deresiewicz identifies neoliberalism as the primary foe of higher education. I certainly have no interest in defending neoliberalism, though it is a rather amorphous, spectral enemy. It’s not a new argument, either.

Here are a few passages that give you the spirit of the argument:

The purpose of education in a neoliberal age is to produce producers. I published a book last year that said that, by and large, elite American universities no longer provide their students with a real education, one that addresses them as complete human beings rather than as future specialists — that enables them, as I put it, to build a self or (following Keats) to become a soul.

Only the commercial purpose now survives as a recognized value. Even the cognitive purpose, which one would think should be the center of a college education, is tolerated only insofar as it contributes to the commercial.

Now here are two other passages.

it is no wonder that an educational system whose main purpose had been intellectual and spiritual culture directed to social ends has been thrown into confusion and bewilderment and brought sadly out of balance. No wonder, too, that it has caught the spirit of the business and industrial world, its desire for great things-large enrollment, great equipment, puffed advertisement, sensational features, strenuous competition, underbidding.

the men flock into the courses on science, the women affect the courses in literature. The literary courses, indeed, are known in some of these institutions as “sissy” courses. The man who took literature too seriously would be suspected of effeminacy. The really virile thing is to be an electrical engineer. One already sees the time when the typical teacher of literature will be some young dilettante who will interpret Keats and Shelley to a class of girls.

As that last quote probably gives away, these quotes are from a different time. Both are found in Gerald Graff’s Professing Literature: the first is from Frank Gaylord Hubbard’s 1912 MLA address, and the second is from Irving Babbitt’s 1908 book Literature and the American College. Long before neoliberalism was a twinkle in the eyes of Thatcher and Reagan, universities were under threat from business and industry, and the humanities were threatened by engineering. Certainly there are some “yeah, but” arguments to be made, as in “yeah, but now it’s serious.” Nevertheless, these are longstanding tensions. I imagine one could trace them back even further, but a century ago is apt. Back then, American universities were responding to the turmoil of the 1860s and the second industrial revolution of the 1880s and 90s. Today we respond to the turmoil of the 1960s and the information revolution of the 1980s and 90s. There’s an odd symmetry really. Let’s hope we’re not verging on 30 years of global war and depression as our 1915 colleagues were.

Ultimately it’s hard for me to disagree with Deresiewicz’s call for action that we should:

Instead of treating higher education as a commodity, we need to treat it as a right. Instead of seeing it in terms of market purposes, we need to see it once again in terms of intellectual and moral purposes. That means resurrecting one of the great achievements of postwar American society: high-quality, low- or no-cost mass public higher education. An end to the artificial scarcity of educational resources. An end to the idea that students must compete for the privilege of going to a decent college, and that they then must pay for it.

However, even if high-quality, low-cost higher ed were achieved, I’m not sure that we would get away from the connection between learning and career. Deresiewicz describes the liberal arts as “those fields in which knowledge is pursued for its own sake.” I think this is misleading. I understand his point, that scholarship doesn’t need to have a direct application or lead to profit. At the same time, I am skeptical of the suggestion of purity here. I prefer some version of the original, medieval notion of the liberal arts as the skills required by free people to thrive.

But here’s the most curious line from the article: “business, broadly speaking, does not require you to be as smart as possible or to think as hard as possible. It’s good to be smart, and it’s good to think hard, but you needn’t be extremely smart or think extremely hard. Instead, you need a different set of skills: organizational skills, interpersonal skills — things that professors and their classes are certainly not very good at teaching.” I’m not exactly sure what being “smart” or “thinking hard” mean here (beyond, of course, thinking like Deresiewicz does). But what’s really strange is that last line: why are professors and their classes not good at teaching organizational or interpersonal skills?  Is this even true?  I may be wrong but it seems to me that Deresiewicz is implying these things aren’t worth teaching.  I suppose it’s a stereotype to imagine the professor as disorganized and lacking interpersonal skills. Are we celebrating that here?

I’ll offer a different take on this. When we adopted the German model of higher education we decided that curriculum and teaching would follow research. But that was already a problem a century ago, as this passage from Graff recounts:

All the critics agreed that there was a glaring contradiction between the research fetish and the needs of most students. In his 1904 MLA address, Hohlfield speculated that “thousands upon thousands of teachers must be engaged in presenting to their students elements which, in the nature of things, can have only a rare and remote connection with the sphere of original research,” and he doubted the wisdom of requiring all members of the now-expanding department faculties to engage in such research. To maintain that every college instructor “could or should be an original investigator is either a naive delusion concerning the actual status of our educational system or, what is more dangerous, it is based on a mechanical and superficial interpretation of the terms ‘original scholarship’ or ‘research work.'” (109)

The hyper-specialization of contemporary faculty only intensifies this situation. The “solution” has been adjunctification, but that’s really more like an externality. Changing the way that we fund higher education probably makes a lot of sense to everyone reading this post. Imagining that things will be, or should be, like they used to be in the 1960s, when public higher ed and the liberal arts were in their “heyday,” seems less sensible.

If the neoliberal arts, as Deresiewicz terms them, are untenable, then we are still faced with building a new liberal arts, which is really what our colleagues a century ago did: inventing things like majors (in the late 19th century) and general education (in the early 20th), which we still employ. In the category of “be careful what you wish for,” increased public funding will undoubtedly lead to increased public accountability. I’m not sure whether you like your chances in the state capitol or in the marketplace, or even if you can tell the difference. Whatever the new liberal arts will be, they’ll have to figure that out, just as their 20th century counterparts learned to thrive in a nationalist-industrial economy and culture.

Categories: Author Blogs

What If? Special Higher Education Issue

6 August, 2015 - 13:44

Jesse Stommel and Sean Michael Morris at Hybrid Pedagogy ask, “Imagine that no educational technologies had yet been invented — no chalkboards, no clickers, no textbooks, no Learning Management Systems, no Coursera MOOCs. If we could start from scratch, what would we build?”

As the image here suggests, this reminds me of the What If? Marvel comics. The ones I remember from being a kid were from the original series where the Marvel character “The Watcher,” a kind of panoptic super-being, imagines alternate universes (e.g., what if Spiderman joined the Fantastic Four? Answer: we’d have one crappy film series instead of two).

I appreciate Stommel and Morris’ question as part of a grand tradition of sci-fi speculation. How much history would have to change to get rid of all of those educational technologies?

  • If the Union had lost the Civil War, maybe there would never have been a Morrill Act. Either way it would have changed the shape of higher education. Similarly a different outcome in either of the World Wars or fighting some limited, survivable nuclear conflict in the 50s or 60s would clearly have changed things.
  • Getting away from wars, a different outcome surrounding the Civil Rights movement or the Women’s movement would have changed access to higher education.
  • In a technoscientific-industrial context, we could ask what if the US had adapted more quickly to the post-industrial, information economy or had never become so dependent on fossil fuels in the mid-20th century?

Of course those are all wide-ranging social changes. For the purposes of this question, I think it’s more reasonable to try to imagine changes that don’t rewrite the entirety of world history or try to eliminate nationalism, capitalism, patriarchy, etc. (even if you’d want to eliminate those things, that just seems like a different thought exercise from this what if game). I suppose you could try messing around with the beginning of the computer edTech industry in the 60s or 70s or maybe intervene in the beginnings of course management systems twenty years ago or so. But I think you’d be misidentifying the problem.

In my view, what really defines American higher education is the 19th-century decision to model ourselves after German universities. It is that decision that shaped the relationship between scholarship and teaching. From there, one could look at curricular technologies like classrooms, semesters, credit hours, general education, and majors, as well as scholarly technologies like journals, laboratories, conferences, monographs, and tenure. Then there are bureaucratic-institutional technologies like departments, deans, and so on. Those are the things that continue to shape higher education, and no reworking of applications or gizmos will change that.

So, for example, in my own discipline of English, none of the technologies Stommel and Morris mention make much of a difference. Eliminating textbooks and chalkboards would make some impact, but even then I’m sure most professors in English would be fine sitting around talking about novels or poems without writing on a chalkboard. English curriculum and pedagogy are almost entirely unchanged from their form when I was an undergrad in the 80s, so you’d hardly notice if more recent technologies were gone. I imagine most faculty in English would be relieved rather than upset if the obligation of using a course management system suddenly disappeared.

Here’s a paragraph from Latour’s Inquiry into Modes of Existence:

Instead of situating the origin of an action in a self that would then focus its attention on materials in order to carry out and master an operation of manufacture in view of a goal thought out in advance, it is better to reverse the viewpoint and bring to the surface the encounter with one of those beings that teach you what you are when you are making it one of the future components of subjects (having some competence, knowing how to go about it, possessing a skill). Competence, here again, here as everywhere, follows performance rather than preceding it. In place of Homo faber, we would do better to speak of Homo fabricatus, daughters and sons of their products and their works. The author, at the outset, is only the effect of the launching from behind, of the equipment ahead. If gunshots entail, as they say, a “recoil effect,” then humanity is above all the recoil of the technological detour. (230)

In short, rather than asking what technologies should we build in order to achieve an educational mission “thought out in advance,” we might instead ask what faculty and students might we build from the media ecology that we inhabit?

There’s an interesting “What if?” issue. Of course, the technologies will continue to change. As Latour would say, they are detours, zig-zags, work-arounds. And we (i.e. human subjects) are their products. We can surely ask to take a different detour, to work around a different problem, to build new technologies. But the what if question here is “What if we understood subjectivity and learning as a recoil effect of technology?” How would that shift our orientation toward higher education?

Categories: Author Blogs

speaking truth to Twitter

1 July, 2015 - 11:08

To be clear, Twitter has many possible uses, its primary one probably being making money, but, of course, its users, including me, put it to work in a variety of ways. It seems in the last year or two many academics have discovered Twitter (in much the same way that Columbus discovered America). And among academics one can also find Twitter being put to a wide range of uses, both personal and professional. Much of this is benign, but increasingly the public face of academics in social media is being defined around a fairly narrow class of tweets.

Perhaps it would be useful for someone to do a current analysis of the academic uses of Twitter and maybe even identify some subgenres among tweets. I haven’t done that analysis, so this is more like a sketch, but I am writing here about a particular tweet subgenre. In this subgenre, one essentially is making an appeal to pathos that energizes those who agree and incites those who do not. The emotion that is expressed is something like the righteous indignation that arises from an absolute certainty in the justness of one’s view and cause. It would appear as if it is often tweeted in anger, though one can only guess at the mind of another. Though such utterances can occur across media, Twitter is an excellent place to see it because the 140-character limit serves to focus the message. And clearly academics are far from the only people who engage in such expressions, but academics are an interesting case because of the relationship of these expressions to academic freedom and tenure protections.

I am not interested in adjudicating the righteousness of any particular academic’s cause, let alone weighing in on their job status. I am interested though in the rhetorical decisions behind these compositions.

It’s reasonable to propose that some of these tweets are simply posted in anger. People get angry all the time. Typically, when they are in public or professional settings, they manage to control their anger. However, this phenomenon is not simply about users who post without thinking, as a kind of spontaneous outburst. It is also about a perceived obligation to anger, a way of inhabiting online spaces, which makes these tweets a more deliberate act.

As James Poulos notes,

On Twitter, we’re not screaming at each other because we want to put different identities on cyber-display. We’re doing it because we’re all succumbing to what philosophers call “comprehensive doctrines.” Translated into plain language, comprehensive doctrines are grandiose, all-inclusive accounts of how the world is and should be.

But it’s more than that. Often the rhetorical strategy employed here is one of ad hominem attacks. When it isn’t a personal attack, it is often an emotional appeal. I suppose there’s no space for evidence in a tweet. One can only express a viewpoint. Combined with this tendency toward “comprehensive doctrines,” we get a series of increasingly extreme and divergent irreconcilable views.

I understand, in some respects, why everyday people get involved in such rhetorical warfare. I’ve written about this quite a bit recently. Academics, of course, are everyday people, so maybe that’s explanation enough for why they do what they do. However, as professionals communicating in a professional capacity, I find this rhetorical strategy simply odd. To raise this question is typically to get one of two responses. First, “I have academic freedom; I can do whatever I want.” Or second, “Are you trying to silence me? Then you must be (insert ad hominem attack here).”

All of this has made me realize that I have been mistaken about the underlying ethics of academia on two crucial accounts.

1. I thought that academia was based on a fundamental pluralism, where we are obligated to be open to multiple possibilities and viewpoints. This doesn’t mean that we cannot hold a particular view or argue for it, but, at least in my view, it would obligate us to participate in forums where different views are heard and considered. Twitter can work that way, but it isn’t easy.

2. We can’t be “true believers” in relation to our subjects. Even in a first-year composition class, a typical piece of advice on a research paper assignment is to say “don’t ask a research question that you think you already know the answer to.” As scholars if we are not open to changing our minds and views on the subject we study, then what’s the point?

But, as I said, I was mistaken about this. Academia is often about espousing a single viewpoint with little or no consideration for alternatives, except for the purposes of developing strategies to attack them. Social media did not create this condition. You can blame postmodernism or cultural studies for creating conditions where we look at all scholarship as ideologically overdetermined, but I don’t think that’s what’s going on here. If anything, such methods should create greater skepticism and uncertainty. Maybe academia has always been this way, only ever pretending to the open consideration of alternative viewpoints that we insist on from our students. But I don’t think that’s true. I think, at least in the humanities where I mostly dwell, we have become increasingly entrenched in our views. Maybe that’s in response to perceived threats to our disciplines; maybe it’s evidence of disciplinary fossilization. I don’t know. However, it is fair to say that social media has intensified this condition.

Regardless, this practice of speaking truth to Twitter, which would almost seem to require revising the old refrain, “The people, retweeted, can never be defeated” (see, it even rhymes better now), points once again to our continuing struggles to develop digital scholarly practices. Is the future of digital scholarship really going to be clickbait and sloganeering?

Categories: Author Blogs

digital ethics in a jobless future

25 June, 2015 - 13:22

What would/will the world look like when people don’t need to work or at least need to work far less? Derek Thompson explores this question in a recent Atlantic article, “The World Without Work.” It’s an interesting read, so I recommend it to you. Obviously it’s a complex question, and I’m only taking up a small part of it here. Really my interest here is not on the politics or economics of how this would happen, but on the shift in values that it would require.

As Thompson points out, to be jobless in America today is as psychologically damaging as it is economically painful. Our culture, more so than that of other industrialized nations, is built on the value of hard work. We tend to define ourselves by our work and our careers. We have this work hard/play hard image of ourselves, but we actually have a hard time with leisure, spending much of our time surfing the web, watching TV, or sleeping. If joblessness leads to depression, then that makes sense, I suppose. In a jobless or less-job future, we will need to modify that ethos somehow. Thompson explores some of the extant manifestations of joblessness: makerspaces, the part-time work of Uber drivers and such, and the possibility of a digital-age Works Progress Administration. As he remarks, in some respects it’s a return to pre-industrial, 19th-century values of community, artisanal work, and occasional paid labor. And it also means recognizing the value of other unpaid work such as caring for children or elders. In each case, not “working” is not equated with not being productive or valuable.

It’s easy to wax utopian about such a world, and it’s just as easy to spin a dystopian tale. Both have been done many times over. There is certainly a fear that the increasing precarization of work will only serve to further exacerbate social inequality. Industrialization required unions and laws to protect workers.  How do we imagine a world where most of the work is done by robots and computers, but people are still able to live their lives? I won’t pretend to be able to answer that question. However, I do know that it starts with valuing people and our communities for more than their capacity to work.

I suppose we can look to socialism or religion or gift economies or something else from the past as providing a replacement set of values. I would be concerned though that these would offer similar problems to our current values in adapting to a less-job future.

Oddly enough, academia offers a curious possibility. In the purest sense, the tenured academic as a scholar is expected to pursue his/her intellectual interests and be productive. S/he is free to define those interests as s/he might, but the products of those pursuits are freely accessible to the community. In the less-job future I wonder if we might create a more general analog of that arrangement, where there is an expectation of contribution but freedom to define that contribution.

Of course it could all go horribly wrong and probably will.

On the other hand, if we are unwilling to accept a horrible fate, then we might try to begin understanding and inventing possibilities for organizing ourselves differently. Once again, one might say that rhetoricians and other humanists might be helpful in this regard. Not because we are more “ethical,” but because we have good tools and methods for thinking through these matters.

 

 

Categories: Author Blogs

hanging on in quiet desperation is the English way

23 June, 2015 - 20:10

The song refers to the nation, of course, and I’m thinking of a discipline where perhaps we are not so quiet.

Here are two tangentially related articles, and both are tangentially related to English, so many tangents here. First, an article in Inside Higher Ed about UC Irvine’s rethinking of how they will fund their humanities PhD programs: a 5+2 model where the last two years are a postdoctoral teaching fellowship. Irvine’s English department hasn’t adopted it (maybe they will in the future), but it is an effort to address generally the challenges of humanities graduate education that many disciplines, including our own, face. In the second article, really an editorial in The Chronicle, Eric Johnson argues against the perception (and reality) that college should be a site of workforce training. It is, in other words, an argument for the liberal arts, but it is also an argument for more foundational (i.e., less applied, commercial) scientific research.

These concerns interlock over the demand for more liberal arts education and the resulting job market it creates to relieve some of the pressure on humanities graduate programs.

Here’s a kind of third argument. Let’s accept the argument that specialized professionalizing undergraduate degrees are unfair to students. They place all the risk on the students, who have to hope that their particular niche is in demand when they graduate, and, in fact, that it stays in demand. In this regard I think Johnson makes an argument that everyone (except perhaps the corporations that are profiting) should agree with: that corporations should bear some of the risk/cost of specialized on-the-job training, since they too are clearly profiting.

Maybe we can apply some of that logic to humanities graduate programs and academic job markets. I realize there’s a difference between undergraduate and graduate degrees, and that the latter are intended to professionalize. But does that professionalization have to be so hyper-specialized to meet the requirements of the job market? I realize that from the job search side, it makes it easier to narrow the field of applicants that way. And since there are so many job seekers out there, it makes sense to demand specific skills. That’s why corporations do it. I suppose you can assume it’s a meritocratic system, but we don’t really think that, do we? If we reimagined what a humanities doctoral degree looked like, students could easily finish one in 3 or 4 years. No, they wouldn’t be hyper-specialized, and yes, they would require on-the-job-training. But didn’t we just finish saying that employers should take on some of that burden?

Here’s the other piece… even if one accepts the argument (and I do) that undergrads should not be compelled to pursue specialized professionalizing degrees, it does not logically follow that they should instead pursue a liberal arts education that remains entrenched in the last century.

In my view, rather than creating more hyper-specialized humanities PhDs, all with the hope that their special brand of specialness will be hot at the right time so that they can get tenure-track jobs where they are primed to research and teach in their narrow areas of expertise, we should produce more flexible intellectuals: not “generalists,” mind you, but adaptive thinkers and actors. Certainly we already know that professors often teach outside their specializations, in introductory courses and other service courses in a department. All of that is still designed to produce a disciplinary identity. This new version of doctoral students wouldn’t have been fashioned by a mini-me pedagogy; they wouldn’t identify with a discipline that requires reproducing itself.

So what kind of curriculum would such faculty produce? It’s hard to say exactly. But hopefully one that would make more sense to more students than what is currently on offer. One that would offer more direct preparation for a professional life after college without narrowly preparing students for a single job title. In turn, doctoral education could shift to prepare future faculty for this work rather than the 20th-century labors it currently addresses. I can imagine that many humanists might find such a shift anti-intellectual, because, when it comes down to it, they might imagine they have cornered the market on being intellectual. Perhaps they’re right. On the other hand, if being intellectual leaves one cognitively hamstrung and incapable of change, a hyper-specialized hothouse flower, then in the end it’s no more desirable than the other forms of professionalization that we are criticizing.

Categories: Author Blogs

It turns out that the Internet is a big place

16 June, 2015 - 08:56

I suppose this is coincidentally a follow-up of sorts on my last post. It might also be “a web-based argument for the humanities” of a sort. We’ll see.

On The Daily Beast, Ben Collins asks the musical question “How Long Can the Internet Run on Hate?” One might first be inclined to answer, “I don’t know, but we’re likely to find out.” However, on reflection, one might take pause: hold on, does the Internet run on hate? I don’t think I need to summarize Collins’ argument, as we all know what he’s on about here. If one wasn’t sure, then the comments following the article would at least give one a taste.

So a couple observations.

1. The Internet cannot be separated all that easily from the rest of culture. One might as well ask how long can civilization run on hate (the answer? apparently a good long while). Obviously the Internet did not invent hate. Does it make us hate more? Or does it simply shine a light in the dark corners of humanity’s hatred? Probably both.

2. The affordances of social media facilitate particular online genres and affects. Specifically, the comment. If I may be allowed to generalize somewhat here, the comment as a genre refers not only to what follows various articles online but also to the acts of commenting in discussion forums, on Facebook, and on Twitter (though obviously Twitter’s 140-character limit changes things). By now, we are familiar with the observation that the immediacy of the comment and the relative ease of commenting result in a lot of reactionary feedback. I would analogize it to the barroom brawl in an old Western movie. It starts with two people shoving. One gets shoved backwards into a third person. That person throws a punch, misses, and hits a fourth party. Before you know it, everyone in the bar is fighting.

3. The Internet is fueled by a number of other desires too: shopping, pornography, the idle curiosity of the Google search, etc. In other words, it’s not just hate; it’s also lust! And it’s not just Clay Shirky’s “cognitive surplus”; it’s also idle hands doing the Devil’s work. We shouldn’t judge ourselves harshly for having desires.

4. The Internet runs on exposure. This ties into my Enculturation article from a few years back. Even though we commonly say that people tend to live in cultural-ideological echo chambers online, those chambers are not nearly as sealed as the comparatively parochial lives that we lived even in the days of mass media. The simple exposure to expression is enough to generate intense affective responses. Of course it doesn’t have to be hate, and obviously it isn’t only hate. Sometimes that affectivity can be directed through assemblages that result in fairly even-minded academic blog posts. If you think about it though, when one goes on Facebook, for example, one is unlikely to be doing something purposeful. One is just looking for stimulation, like channel-surfing (in the “old days”).

Imagine a kind of P.K. Dick-esque sci-fi world where, instead of social media, you were more directly plugged into the affective responses of others online. You’re exposed to various media, and you not only feel your own responses but those of others. You’re excited and so are some others, but then others are offended or disgusted or angered or bored. This generates a secondary set of affects. Maybe you’d imagine that it could all turn to love (or at least lust) as easily as it could all turn to hate. But maybe what you’d discover is that it’s all just stimulation, exposure, and the particular name you give the feeling doesn’t really matter. At some level, it becomes an ineffable clicking. In that imagined world, language is entirely bypassed. In fact, all conscious, deliberative thought is bypassed. It’s not even sharing “feelings,” because, at least as I’m using the word, a feeling or emotion would require some kind of reflection, some judgment and identification/representation. This is just responsiveness. It’s a deterritorialization of language. As I mentioned in the previous post, it’s not about communication or information. It’s expression.

Obviously we don’t quite live in that world. We’re still mediating with “human languages” rather than machine languages measuring our embodied responses and communicating them across the web. But the distinction is more subtle than you might at first imagine. As such, one might argue that the Internet is not fueled by hate because the phenomena Collins discusses are not sophisticated enough to be hate. That’s not to suggest that there isn’t plenty of hate out there to go around. It’s just that what we see on the web when we see these things is a kind of stimulus response to exposure.

Of course, part of the point here is that the Internet is not all like that. It’s a big place, as it turns out. Even in the spaces of social media commentary, it’s not all like that. Everyone knows that the Internet provides access to an extensive body of cultural knowledge. So much so that it’s kind of a running joke that goes something like “imagine trying to explain the Internet to an alien. Here we have access to many of the great works of human endeavor and cutting-edge research about our lives, the world, and the universe, and what do we use it for?” Typically though, this comes off as some kind of moralizing.

I wonder if it’s possible to take the morality out of it though. It’s worth noting that building knowledge, information, and communication out of exposure and expression requires effort. I’d like to think of that statement as being closer to physics or biology or cybernetics than morality. Energy must be expended. So here’s my brief bit about the humanities: maybe we can have a role in this.

If you read the comments on the article above (or really any similarly-themed article), the humanities are often characterized as a kind of prescriptive, moralizing, leftist thought police. In some respects it’s an understandable characterization. That’s a subject for another time. However, the humanities can also be a mechanism for understanding how expression operates through media ecologies. It’s hard to know exactly how the theological moralities of agrarian cultures or the rational, deliberative discourses of the modern bicameral scientific-democratic print-industrial culture will translate into digital media ecologies. As I often say here: some assembly is required. While the humanities cannot (at least not productively) tell us what to do/think, they perhaps can explore the capacities that become available to us in the context of the capacities the past provided.

Ultimately though it always comes back to expending the energy to build something that goes beyond that initial stimulus response.

Categories: Author Blogs

blogging, academics, and the case of The Witcher

14 June, 2015 - 11:30

As most anyone can tell you, academic blogging died off a long time ago. I’m not exactly sure when it was supposed to be popular. I’m guessing it was at a time before most academics had much of an online existence, before they all hopped on Facebook and started sharing articles with one another. As I look at it, blogging has been largely replaced with various web news/article sites ranging everywhere from the familiar NY Times to Medium; there’s no end to opportunities for analysis, critique, and opinion. Only a slice of this content is related to academic issues or to issues treated in an academic manner. I’ve written a fair amount here about the academic clickbait related to tenure, teaching, the crisis in the humanities, and so on, so I’m not going back over that territory.

Instead, I want to go in a different direction. Given my own (and my friends’, apparently) interests in science fiction, video games, comic books, digital media, and such, I find a fair number of articles from these kinds of sites shared in my timeline. Here’s one I came across incidentally. This one is from Kotaku, which is a blog about video games. What makes it a “blog” aside from the fact that it calls itself a blog and looks like it uses some blogging platform? I don’t know. Who knows what these terms mean anymore?

Anyway, the article in question is “The Complicated Women of The Witcher 3,” which, if you don’t know, is a popular video game right now. The article is a thoughtful, one might even say academic, treatment of the representation of women in the video game. I agree with the author, Nathan Grayson, that the subject is complicated and that many of the female characters are themselves complex in that they have enough depth to be available for the kind of analysis typically reserved for characters in novels or films. None of that is to say that the representation is beyond critique. Many women are scantily clad (sometimes almost comically so) and there’s an infamous scene involving a large stuffed unicorn (ahem). There’s also a fair amount of sexual violence, on the order of what has generated a broader conversation in relation to HBO’s Game of Thrones series.

It’s difficult to call these conversations “academic.” It’s difficult to call anything in social media or blogs academic. Not because it isn’t thoughtful or well-researched, but simply because “academic” is still tied closely to specific genres, and blogging/social media aren’t quite among them. However, I have many academic colleagues who read such material, treat it seriously, and share it. So, while it is not necessarily academics who are writing these articles, there is an academic conversation on social media around such topics.

I’m not interested in wading into the debate over these specific issues. Instead, what I do find noteworthy is the rhetorical shift that intellectual conversation makes toward judgment. That is to say, the move to say a certain practice or object is “good” or “bad.” That, for example, The Witcher 3 is good or bad for this or that reason for the way it represents women, or perhaps in a slightly more complicated fashion, these parts are good and those parts are bad. Or, as in the example of Game of Thrones, a certain rape scene should or shouldn’t have happened, or if it was going to happen should have been depicted differently.

In some respects these conversations are a familiar part of mainstream conversation about popular culture, where we often say, “I thought the movie was good but it should have had a different ending.” It’s been a long, long time since I taught literature, but I can still recall the desire of students to talk about how they believed characters should have behaved differently or some other plot twist should have happened. We don’t tend to make these kinds of aesthetic judgments about “literature” however. I suppose we find such judgments appropriate for pop culture though because we believe the forces behind pop cultural production to be of a different order than that of art. That is, whether it’s The Avengers, The Witcher 3, or Game of Thrones, we’re talking about commercial products. So we can just say, for example, that in The Witcher 3 the female characters didn’t all need to have so much cleavage. I’m just noting that we never used to talk about literature in the same fashion. Again, maybe this is why such writing isn’t “academic,” though again it is conversation with which many academics have serious investment and participation.

So again, let me reiterate that I am not saying here that the judgments made in such discourses are wrong or inappropriate. There’s no reason why we can’t talk about what art or media should be like. I would note that these conversations are a marketplace unto themselves, and passing judgment on media is going to draw more eyeballs than something more… what is the word… “academic”(?). I wonder though if this is where cultural studies leads, to an activist criticism that seeks to shape media to reflect a certain set of values.

Maybe so. I’m not going to make a judgment about whether or not such a project is admirable.

However, it does strike me that this suggests a space for other forms of humanities online writing, perhaps even blogging: that there are other rhetorical gestures to make toward popular culture than judgment. For instance, this article starts to make some interesting comparisons with the way gender and sexuality are handled in certain Bioware games (e.g., Mass Effect, Dragon Age: Inquisition), though I might also point toward Skyrim. There’s clearly a difference between games that allow one to customize a character’s appearance (including gender and race) and one like The Witcher, which does not. Now, it must be said that academic analysis of video games obviously goes on in more traditional academic genres and perhaps even on (supposedly dead) academic blogs.

I just wonder if there is a way to bridge the gap between the difficult discourse of academic genres and that of more popular websites. I assume that there is, and that such translations are possible for rhetorical gestures other than judgment.

 

 

Categories: Author Blogs

expression is not communication

8 June, 2015 - 08:13

I’ve been struck with a patch of Internet curmudgeon syndrome of late: spending too much time on Facebook probably. One of the ongoing themes of my work as a digital rhetorician is the observation that we do not know how to behave in digital media ecologies. That observation is not a starting point for a lesson on manners (though we certainly get enough of those too!). Instead, it’s a recognition of the struggle we face in developing digital-rhetorical practices.

Those of us who were online in the 90s (or earlier) certainly remember flame wars on message boards and email lists. This was the start of trolling, a familiar behavior to us all which in some respects I think has mutated and become monetized as clickbait. Of course trolls are just looking to get a rise out of you. It may be hard to tell the difference from the outside, but some of these incendiary conversations were genuine disagreements. I know I was part of some very heated exchanges as a grad student on our department email list. Eventually you realize that you’re not in a communication situation but instead you’re part of a performance where the purpose is not to convince the person you’re putatively speaking to but to make that person look foolish in front of a silent audience who has been subjected to your crap by being trapped on the same email list with you. That changes one’s entire rhetorical approach, especially when you realize that the bulk of that captive audience isn’t captive at all but simply deleting emails.

In some respects that practice lives on. I am still on a department email list, and sometimes it gets heated. It’s not very productive but at least it’s limited in scope.

In the early days of this blog, I wrote some fairly strident stuff. These days I still offer views with which many would disagree, but the tone has mellowed; perhaps it’s middle age. However, I see around me, mostly through Facebook, the continuing intensification of flaming rhetoric. In the good-old, bad-old days, I used to think that flaming happened because people were at a distance from one another. Because there was never any danger of physical violence, a certain limit on the riskiness of invective was removed. Today though we have the long-tail, echo-chamber strengthening of that feeling. Not only can I be as agonistic as I please without physical threat, but I can find others who will agree with me and double down on the whole business. Needless to say, this happens across the political spectrum. Add in the clickbait capacities of Facebook, and one gets rhetorical wildfires.

An academic example of this: perhaps you saw the recent piece about the liberal professor afraid of his liberal students, or the following piece about the liberal professor who is not afraid of her liberal students. All of this business is driven by serious challenges in higher education. There is the declining authority and power of faculty. This comes in a lot of forms. Most notably it is the disappearance of tenure and the disempowerment of tenure where it still exists. In more general terms though, it is also the dilution of authority into opinion, especially in anything that is not empirical, though obviously even science is challenged in certain areas.

There is also this conversation about “triggering,” which I won’t go into here, except to say that in all this rhetoric it often seems difficult to differentiate between someone who has a mental health issue related to a traumatic experience and someone who is unhappy, uncomfortable, or offended. Given that current digital rhetoric practices seem to allow for only two positions, “like” or “I’m offended,” it’s quite hard to avoid the latter, while the former deserves real consideration.

Anyway, I’m not interested in getting into that conversation in substance here. My point is simply to wonder aloud what the rhetorical purpose of such “communications” might be. I use the scare quotes because I’m not sure they are communications. They are expressions. Deleuze and Guattari make this point in A Thousand Plateaus. In their discussion of order-words, collective assemblages of enunciation, incorporeal transformations and such, we encounter the autonomous quality of expression, which is to say that expression obeys its own laws, independent of both the speaker and the listener, as well as whatever other larger network or cultural situation might be in effect.

It is clearly possible to get symbolic expressions to do work. In print culture we created elaborate institutions and genres to do so. The university is one of the best and most successful examples. That’s not to say that it was perfect. Far from it! But it is a good example of how one instaurates (to use one of Latour’s terms) a communicational assemblage from a media ecology.

We really need to build new genres, which means new communities, networks, assemblages, activity systems, however you want to think of it. On some level I imagine that’s what we’re trying to do here, but I’m not very satisfied with the results so far. This strikes me as some of the central work of digital rhetoric. Not to be prescriptive about future genres, but to facilitate rhetorical understanding of current genres, to investigate alternate rhetorical capacities, and perhaps to experiment.

Categories: Author Blogs

the failure to understand digital rhetoric

28 May, 2015 - 12:30

A brief round up of a few articles circulating my social media gaze:

I don’t want to paint these all with the same brush, but there is a fundamental conceptual problem here, at least from my perspective. The emerging digital media ecology is opening/will open indeterminate capacities for thought and action that will shift (again, in a non-determining way) practices of rhetoric/communication, social institutions, the production of knowledge, and our sense of what it means to be human. In other words, it’s roughly analogous to what happened when we became literate and moved beyond oral cultures. I understand that’s a lot to take in. It’s likely to cause (and does cause) all kinds of reactionary commentary, and clearly we regularly struggle with figuring out how to behave in relation to digital media. So that’s the general point, but let me work through these individually in reverse order.

The Weiner piece is about the enduring value of paper. It’s now a familiar cyber-thriller trope to note that paper is good because it can’t be hacked. I think we still attribute a kind of privacy and intimacy to writing on paper (Kittler discusses this). Weiner discusses recent research about how students who handwrite notes do better on tests than those who use laptops. The key point, though, seems to be that handwriting forced students to synthesize information more, whereas laptop users were able to write more quickly, which meant more recording and less synthesizing. So it strikes me that what we’re saying is that we need to learn how to use our laptops in a more effective way… just as we learned, once upon a time, how to take handwritten notes, right?

Jones’ piece is more reactionary. He writes of emojis, “After millennia of painful improvement, from illiteracy to Shakespeare and beyond, humanity is rushing to throw it all away. We’re heading back to ancient Egyptian times, next stop the stone age, with a big yellow smiley grin on our faces.” I suppose it’s the kind of copy that gets attention, so job well done there. But comparing emojis to hieroglyphs makes little sense beyond their superficial commonalities. I actually don’t know who should be insulted more by this comparison, the ancient Egyptians or the contemporary emoji user. Ultimately, though, this argument imagines symbolic systems as independent from the larger media ecologies in which they operate. Binary code is just ones and zeros without the ecology in which it operates.

The McWhorter article, though, is maybe the most interesting for its basic assertion that we are undergoing a return to orality. He suggests, “Let’s consider that we are seeing a natural movement towards a society in which language is more oral—or in the case of texting, oral-style—where written prose occupies a much smaller space than it used to.” He seems equanimous about the prospect, but I doubt he is. It’s more like he is resigned to living in a world where the art of essay writing is passing. And I agree with that. Essays are a genre of print media ecologies. Today we have something else: remediated essays on their way to becoming something else. When McWhorter observes Kardashian’s problematic tweet, what he should be seeing is our continuing struggle to figure out the rhetoric of digital spaces. It may be the case (it certainly appears to be the case) that Kardashian lacks a certain print-literate sophistication, that maybe all emoji users do, that the speed of digital communication cuts out the deliberative, synthesizing cognition of print media.

I know that through Ong and McLuhan we can get this idea of a kind of secondary orality in the conversational style of Facebook, Tweets, and blogging. But it’s wrong to deduce from this some kind of devolution, as if we are going back to an oral culture or a culture of hieroglyphs. Similarly it would be misguided to infer progress from change. Instead, we should recognize an opportunity, perhaps even an obligation, for invention.

Categories: Author Blogs

arduino heuretics

25 May, 2015 - 09:23

As those of you who are involved in the maker end of the digital humanities or digital rhetoric know, Arduino combines a relatively simple microcontroller, open source software, and other electronics to create a platform for developing a range of devices. I seem to recall encountering Arduino-based projects at CCCC several years ago. In other words, folks have been playing around with this stuff in our discipline for a few years. (Arduino itself has been around for about a decade.) My own exigency for writing this post is that I purchased a starter kit last week, partly out of my own curiosity and partly for my kids’ growing interest in engineering and computer science. In short, it looks like a fun way to spend part of the summer.
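For anyone who hasn’t handled one of these kits, the canonical first exercise is a short C++ “sketch” that blinks the board’s built-in LED, something along these lines (this is the stock blink example that ships with the Arduino IDE, not anything particular to my kit):

// Minimal Arduino "blink" sketch, the usual first exercise with a starter kit.
// Assumes a board that exposes its on-board LED as LED_BUILTIN (most Uno-style boards do).

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);     // configure the on-board LED pin as an output
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);  // LED on
  delay(1000);                      // wait one second
  digitalWrite(LED_BUILTIN, LOW);   // LED off
  delay(1000);
}

The setup/loop structure is basically the whole programming model: configure the pins once, then read and write them forever.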

David Gruber writes about Arduino in his chapter in Rhetoric and the Digital Humanities where he observes “digital tools can contribute to a practice-centered, activity-based digital humanities in the rhetoric of science, one that moves scholars away from a logic of representation and toward the logic informing ‘New Materialism’ or the rejection of metaphysics in favor of ontologies made and remade from material processes, practices, and organizations, human and nonhuman, visible and invisible, turning together all the time” (296-7). In particular, he describes a project at North Carolina State which employed an Arduino device attached to a chair that would send messages to Twitter based upon the sensor’s registering of movement in the chair. Where Gruber prefers the term “New Materialism,” I prefer realism: realist philosophy, realist ontology, and realist rhetoric. I think we may mean the same thing. For me the term materialism is harder to redeem, harder to extricate from the “logic of representation” he references, which has discussed materialism and materiality for decades while eschewing the word realism. I would suggest that the logic informing realism or new materialism, the logic juxtaposed to representation, is heuretics, the logic of invention. As I am putting the finishing touches on my book, these are the pieces I’m putting together: Ulmer’s heuretics, Bogost’s carpentry, Latour’s instauration, and DeLanda’s know-how.
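Gruber doesn’t reproduce the NC State code, and I haven’t seen it, so the sketch below is only a guess at the general shape of such a device: a motion or tilt sensor on the chair, a cooldown so it doesn’t spam, and an event sent over serial for a host-side script (or a networked relay) to turn into an actual tweet. The pin number, timing, and serial message are illustrative assumptions, not the project’s design:

// Hypothetical sensor side of a "tweeting chair": read a tilt/vibration switch
// and emit an event over serial; a host-side script would forward it to Twitter.
// Pin wiring and the cooldown interval are illustrative assumptions.

const int SENSOR_PIN = 2;              // digital tilt/vibration switch (assumed wiring)
const unsigned long COOLDOWN = 60000;  // at most one event per minute (also quiets the first minute after boot)
unsigned long lastEvent = 0;

void setup() {
  pinMode(SENSOR_PIN, INPUT_PULLUP);   // switch pulls the pin low when the chair moves
  Serial.begin(9600);
}

void loop() {
  bool moved = (digitalRead(SENSOR_PIN) == LOW);
  if (moved && (millis() - lastEvent > COOLDOWN)) {
    Serial.println("CHAIR_MOVED");     // the host script turns this line into a tweet
    lastEvent = millis();
  }
}

However the actual project was built, the pattern is simple enough that the interesting part is the assemblage of chair, sensor, and feed rather than the code itself.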

As I started to learn more about Arduino, I discovered the split that has occurred in the community/company. It is a reminder of the economic processes behind any technology. Perhaps by serendipity, I have also been involved in a few recent conversations about the environmental impact of these devices (e.g. the rare earth minerals) and the economic-political impact of sourcing those materials (e.g. the issue of cobalt as a conflict mineral). These are all serious issues to consider, ones that are part of the critical awareness of technology that critics of DH say gets overlooked. Of course, sometimes it seems this argument is made as if our legacy veneration of print culture has not been built upon a willful ignorance of the slavery, labor exploitation, political strong-arming, environmental destruction, and capitalist avarice that made it possible, from cotton production to star chambers to lumber mills. Somehow, though, no one suggests that we should stop reading or writing in print. That’s not an argument for turning a blind eye now but only to point out that the problems, in principle, are not new. Technologies are part of the world, and the world has problems. While the world’s problems can hardly be construed as good news, they are sites for invention and agency. As Ulmer says at one point of his EmerAgency, “Problems B Us.”

I’m expecting I’ll post a few more times about Arduino this summer as I mess around with it. Given that I’m just starting to poke around, I really don’t have any practical insights yet. It’s always a little challenging to take on a new area beyond one’s realm of expertise. We live in a world of such hyper-specialization that it’s hard to justify moving into an area where you know you’ll almost certainly never rival the expertise of those who really know what they’re doing. This is a general challenge in the digital humanities and digital rhetoric, where you might realize that the STEM folks or the art folks will always outpace you, where we seem squeezed out of the market of making. Perhaps that’s why we’re so insistent on the logic of representation, as Gruber terms it. Articulating our ventures into this area as play, while objectionable to many, is one way around this. For me, framing this as fun, as something I do with my kids, as a hobby, is part of what makes it doable, part of the way that I can extricate myself from the disciplinary pressures to remain textual. I’ll let you know how it goes.

Categories: Author Blogs