Feed aggregator

fake news and the distribution of critical thinking

Digital Digs (Alex Reid) - 12 November, 2018 - 14:09

Wired published an article a few days back based on this research from the journal Cognition. As the Wired article's title suggests, if you want to be resistant to fake news, then "don't be lazy." Basically this particular study indicates that people who exhibit critical thinking skills are more resistant to fake news than those who do not, regardless of ideological bent and regardless of whether the fake news favors them ideologically. Here's the abstract of that article:

Why do people believe blatantly inaccurate news headlines ("fake news")? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news – even for headlines that align with individuals' political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant's ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one's political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se – a finding that opens potential avenues for fighting fake news.

It’s worth noting that these findings are somewhat inconsistent with other research (like this) which suggests that even when people demonstrate critical literacy/numeracy they tend to “use their quantitative-reasoning capacity selectively to conform their interpretation of the data to the result most consistent with their political outlooks.”

One thing that lies outside the scope of either of these pieces of research is how one acquires a capacity for the kind of critical-analytical thinking described here. Our received notion about this is that it is either innate (some people are just smarter than others) or learned. However, even in the learned case, our typical sense is that it's kind of a one-shot deal or inoculation. E.g., you can learn critical thinking in high school and/or college, and once you have it you pretty much don't lose it. However, we all know that isn't true. If you're drunk or tired or angry or excited or even just distracted, these "higher reasoning" skills suffer.

And I think I’ve just described the mental states of a significant portion of social media users while they’re on social media.

Both the Wired article and the researchers it cites moralize this situation by accusing those who fall prey to fake news of laziness (which is a mortal sin, after all). Maybe. But that judgment fails to account for the media-ecological conditions of social media, specifically its ubiquity/pervasiveness. It fails to account for its intentional design as an intrusive and addictive technology. So I say "maybe" because we can certainly ask more of one another when it comes to sharing fake news, and I think most people have become more skeptical in the last few years regarding what they read online.

On the other hand, if one thinks about cognition as a distributed phenomenon then one would want to account for the media-ecological conditions that made social media such fertile ground for fake news and then ask how we might change those conditions. Clearly some of that is happening as social media corporations begin to own some modicum of responsibility here in terms of trying to detect and stop the spread of fake news. But I wonder if other strategies might not be possible. Namely, if we can design social media, smartphones, and related tech to incite our interactions with them, then can we also design them to facilitate a critical-analytical orientation? I'm not sure. It's quite possible that those are irreconcilable intentions: simultaneously spurring our desire to engage while also encouraging a more deliberative approach to that engagement. For example, we might just decide "I don't want to go on Facebook, Twitter, etc. right now because that's too much work."

Part of that challenge too is the see-saw of content. Just to give a quick example: I took a look at the first ten posts in my FB feed. Four were personal updates. Three were colleagues talking about their classes, asking advice, etc. Three were shared articles, of which two were political news (one from USA Today and the other from the Washington Post). I'm sure you get something similar, by which I mean that your rhetorical relationship to the author of the post and/or the content is constantly shifting: the authors range across family and old friends, work colleagues, neighbors, and so on, and the content across humorous videos/memes, personal news with varying emotional registers, interesting stories, advertising, political commentary, and news. You wouldn't want to take the same critical-analytical orientation to each of these.

I’m just spitballing here but maybe we’d prefer to not have all this stuff in a single stream. Maybe with some intelligent digital assistant support we could split it up, so that when I’m interested in political news (and up for the responsibility of being a critical-analytical reader), I can dive into that feed, but that I’m not expected to be at my level best every time I idly turn to Facebook.

 

 


the challenges of reading Latour

Digital Digs (Alex Reid) - 5 November, 2018 - 14:32

A couple of Latour-related articles have been going around lately, particularly this article in the NY Times and more recently this critical piece by Alex Galloway, at least partly occasioned by the Times article. Galloway's rejection of Latour (and of Deleuzian new materialism in general, if one reads his other work) comes down to the infelicity of this kind of thinking for his political project. That is, in my view, an ideological objection. And I don't have any problem with that. Well, let me rephrase that. I don't have any problem with people, academics or otherwise, having a goal and selecting the best tools for achieving that goal.

That said, in the end I think the only conclusion you can draw is that Latour doesn't share Galloway's political commitments, isn't seeking to carry out Galloway's political objectives through his research, and that therefore Galloway believes his work has little or no merit.

I will leave it up to you to determine whether or not you find that piece of news useful.

In passing though, I will point out what strike me as some misreadings of Latour. Galloway writes,

Latour very clearly enacts a “reticular decision” of economic exchange in which markets and networks are sufficient to describe any situation whatsoever. And thus to avoid these Latourian difficulties one might “degrow” this particular reticular decision — so engorged, so sufficient — refusing to decide in favor of the network, and ultimately discovering the network’s generic insufficiency. Latour does the reverse. Networks overflow with sufficient capacity.

I see this as a key point in Galloway's critique, as this notion of a reticular fallacy is something he has turned to before. As is suggested here, the reticular fallacy has to do with seeing everything as rhizomatic or networked or horizontal, and with assuming such structures are intrinsically better, freer, more just, or some such. I completely agree that it would be an error to see everything that way or to assume such structures are necessarily better.

But I am confused as to how one sees that in Latour. Take, for example, the concept of plasma as discussed in Reassembling the Social:

plasma, namely that which is not yet formatted, not yet measured, not yet socialized, not yet engaged in metrological chains, and not yet covered, surveyed, mobilized, or subjectified. How big is it? Take a map of London and imagine that the social world visited so far occupies no more room than the subway. The plasma would be the rest of London, all its buildings, inhabitants, climates, plants, cats, palaces, horse guards. (244)

To be clear, one can be critical of plasma also, but it strikes me that networks are like the subway system: they are hardly capacious at all, despite Galloway's assertion. And if plasma seems like a fairly minor point in Latour's work, then one might try reading An Inquiry into Modes of Existence, which begins with networks as one of fifteen modes, a number he does not claim to be exhaustive. Really, Galloway's point is that he believes Latour's way of thinking is not progressive, that it merely reiterates an existing perspective, when "The goal of critical thinking, indeed the very definition of thought in the broadest sense, is to establish a relationship of the two vis-a-vis its object, a relation of difference, distinction, decision, opposition."

I can agree with that, but it's that same value that is the basis of my dissatisfaction with Galloway's argument. While he argues that Latour's thought creates no difference or distinction in relation to its object of study, my complaint with Galloway is that he never really enters into a relationship with his object of study, having already predetermined his opposition. Perhaps that is just his rhetorical style. Maybe somewhere along the way, in the distant past, he engaged with Latour's work in a way that was open to its possibilities. However, reading this, you'd wonder how far along Galloway got before he came to this judgment, or whether he arrived at the text with this judgment in hand. And I don't really care if the latter was the case. Most people are true believers of one sort or another. He already knows what the world is, how it can change, and how it should change. In that light the purpose of humanities scholarship can only be a political-rhetorical one: to persuade people to accept one's beliefs and take up one's cause.

The error one can find in Latourian-Deleuzian thinking comes when it is used in this same way, as if networks, rhizomes, becomings, etc. represent a teleology, as if we'd all be better off as nomads, schizos, or something. That would be a reticular fallacy, as Galloway might put it. However, I wouldn't attribute such claims to either Latour or Deleuze themselves.

Latour's methods might only be useful to people who do not believe they know how some part of the world works before they examine it and/or who are uncertain about how to act next. Even then, it's quite possible that you won't find Latour's methods all that useful to you, if they don't create more understanding and, more importantly, if they don't expand your capacity to act effectively in the world.

 

 


why we can’t have nice things

Digital Digs (Alex Reid) - 4 September, 2018 - 11:25

It's that old saying, but one that might cut in two directions. Yes, "we" can't have nice things because "you" are always ruining them with your irresponsible behavior, lack of class, etc. But possibly we also can't have nice things because we're always getting crap shoved in front of us. Or both. Facebook is a case in point. Sure, it's a cesspool because of the way people behave, but it's also crap in and of itself. Why can't we have a better way to live online? And why don't we live that better way?

n+1 has a piece apropos of this topic (h/t to Casey for pointing this out, via FB of course). It focuses on the treacherous minefield (are there other kinds of minefields?) that is the social media environment surrounding op-ed writing, in online journals like their own but principally in mainstream media, specifically the NY Times and Washington Post. There's a range of concerns and complaints here. Authors and editors write and publish pieces knowing they'll be re-litigating them on Twitter. And readers have it no better. "In the not so distant past, we could sit with an article and decide for ourselves, in something resembling isolation, whether it made any sense or not. Now the frantic give-and-take leaves us with little sovereignty over our own opinions." Surely I am far from the only one who encounters something shared on social media, along with the ensuing "conversation," and thinks, "I have something to say about that, but why bother?"

In the few days since this piece was posted there’s been a whole story about the New Yorker Festival announcing Steve Bannon as a headliner, a bunch of other celebrities dropping out, a flurry of social media complaints, Bannon being dropped, and resulting analysis over whether or not that’s the right decision. My wife turns on MSNBC this morning and the pundit crowd is tut-tutting the decision, trotting out the typical argument about how these ideas need to be dragged into the light of day and debated in the public square where they will wilt. How naive is that? As if they aren’t doing that every day already on Morning Joe with their collection of refugees from the GOP. As if the Times and the Post don’t have their own cadres of neocon pundits.

It’s a peculiar, though founding, fantasy of the US that at their core people are the same, they are kind, they are rational, they have a “strong moral compass,” and so on.

But here’s the thing. At their core, people are pretty stupid. I don’t mean most people are stupid or people are stupid these days. I don’t mean people who don’t resemble me are stupid. I mean we are all stupid in the sense that as individuals, as independent entities, to the extent that we can be independent (try going it on your own without oxygen), we lack the cognitive resources to make the kinds of judgments necessary for democratic participation, especially in the very complex global present.

I mean this in roughly the same way as Nick Bostrom does when he observes, "Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it." To put it more generously, I mean it in the way Katherine Hayles does when she takes a cue from Edwin Hutchins' theory of distributed cognition and writes, "Modern humans are capable of more sophisticated cognition than cavemen not because moderns are smarter, Hutchins concludes, but because they have constructed smarter environments in which to work." Of course this is also Bostrom's observation (and fear): that these smarter environments are becoming too smart for their own, or at least our own, good. But that's a different subject.

It is in this context that one might be tempted by Nick Carr's "Google is making us stupid" claim, but really my assertion is simply that we don't need any help being stupid. Instead we might want to ask what it means to suggest, a la Hayles, that we are "capable of more sophisticated cognition." Can we be more precise about the nature of those capacities? In what ways are our environments smarter? What does smarter mean?

Empirically, we have access to a tremendous amount of media/data. In a digital context, media are data and data are mediated; it's the resampling of the McLuhan maxim. We also have unprecedented capacities for communication. The choke point in this system is human consciousness, so of course we need to build smarter environments that can swim up the media/data stream and handle that firehose for us. So the first problem is that the environment turns out not to be that smart. As you know, people connecting to Google, Facebook, Twitter, Reddit, etc. are not really demonstrating much capability for "sophisticated cognition," at least not by any sense of that term I can conjure. To the contrary, wading into this media/data stream seems to reinforce poor reasoning and bad information. Maybe it's confirmation bias or the Dunning-Kruger effect. IDK.

The other part of this is communication. I am reminded of a line. I'm probably misremembering it, but my memory is that it comes from Virilio's Art of the Motor. It has something to do with how, when train travel became available in Europe, the belief in France was that it would reduce wars on the continent by making it possible for people to travel and improve understanding among nations. Meanwhile, in Germany the realization was that trains would make moving troops and supplies to the front more efficient. The arrival of the Internet, especially the social media that have made billions of humans into online participants, was similarly meant to foster mutual understanding among people around the world… I don't think I need to say more about that, do you?

So maybe this post appears to be moving toward the conclusion that this stuff is bad, but that's not where I'm headed. I'm not in the business of making judgments like that. I am, however, in the business of evaluating the rhetorical effects of digital media. Give a proto-human a bone club and he'll bash his neighbor's skull in (a la 2001, which I just recently saw again). Give the same, slightly more evolved human a web connection and he'll join in conspiracy theories about how those bones were put there to test our faith in a young Earth. That's what humans are: just not very smart. It's not really a fixable situation. But the situation isn't hopeless. We actually have managed to disabuse ourselves of bad ideas in the past; it could happen again.

But you can't really change people's minds by talking to them. You change people's minds by changing the environment in which they think, the distributed part of their distributed cognition. Not understanding this is a common error. The idea behind a public debate is that a critical mass of people is present, so that when the audience is persuaded the whole community shifts. But that doesn't happen anymore. It certainly isn't happening in your social media feed.

 


informally proposing a “materials rhetoric”

Digital Digs (Alex Reid) - 17 August, 2018 - 11:26

In the briefest terms, my idea here is conceiving of a materials rhetoric that is roughly analogous to materials science. I’ll return to that in a moment but first a few detours.

  1. Since the 80s at least, rhetoric has concerned itself with "materialism" and far more recently with "new materialism." Materialism has generally been another way of naming various Marxian critical traditions, including Foucaultian and cultural studies theories. New materialism? Well, I talk about that here enough as it is, but basically it's the various forms of posthumanism, speculative realism, etc. Materialism, ironically enough, has never been that interested in actual things. New materialism has been, but what I'm thinking about here is something that would include but not be limited to what we've seen of new materialism so far.
  2. This proposal also stems from my discontent with the term "digital rhetoric" (along with its various predecessors: new media rhetoric, computers and writing, computers and composition). It's a term that was initially meant to differentiate scholars who studied "stuff about computers/internet" from those who focused on print or speech. Of course for some time now, unless you study the history of rhetoric or limit your research to some fairly unique contemporary cultural circumstances, digital media, networks, computers, and other information technologies are just a part of what you do. As such, contemporary digital rhetoricians tend to identify themselves as those who focus on the role of digital stuff in rhetorical practices, as opposed to other rhetoricians who study rhetorical practices that involve digital stuff but, I guess, just don't emphasize that aspect. For the most part, that digital rhetoric has been materialist/cultural studies-esque in its operation.

But to get back to this analogy with materials science… So in some respects materials science has been around for a while. Apparently the first materials science department was formed in 1955 at Northwestern. By contrast, here at UB we only recently created a Department of Materials Design and Innovation. That said, materials science is an interdisciplinary field involving both traditional sciences and engineering. It tends to trace its history back through metallurgy and ceramics (which takes us back at least as far as the Bronze Age, I suppose). At its core, materials science investigates the physical structures and properties of matter for the purpose of designing new materials for human purposes (hence the intersection of science and engineering).

Similarly, a materials rhetoric studies the rhetorical properties, tendencies, and capacities of materials for the purpose of designing new technologies and rhetorical practices for human purposes. As such, a materials rhetoric would include empirical methods (quantitative, qualitative, and "second empirical" a la Latour), philosophical speculation, and experimentation, along with more familiar rhetorical-critical interpretive analysis. As an interdisciplinary project, materials rhetoric wouldn't go about drawing boundaries regarding theory/method. What, then, distinguishes materials rhetoric from current-postmodern (or should that be postmodern-traditional) rhetoric?

  • A non-anthropocentric conception of rhetoric. I.e., if you think that rhetoric essentially begins and ends within humans, or if you think that rhetoric is essentially symbolic and overdetermined by ideology, then you probably have little interest in the role of materials in rhetoric, which is not to say that you don’t recognize that rhetoric is conveyed through materials but rather that the specifics of those materials are a distinction without a difference. [And it’s worth noting that these anthropocentric, idealist notions of rhetoric reflect the paradigmatic view of the discipline.]
  • An emphasis on invention/experimentation over interpretation/hermeneutics. All rhetorical scholarship requires both invention and interpretation. However, current-postmodern rhetoric is primarily interested in interpreting, truth-seeking (if not Truth-seeking), and typically in making moral-ethical-political judgments. The emphasis in materials rhetoric is on creating new rhetorical capacities for humans through invention and experimentation.

My particular interest in this idea follows from the roots of materials science in metallurgy. Those of you familiar with Deleuze and Guattari will likely recall the role of geology, of metallurgy, and of the smith in A Thousand Plateaus. As they argue, metallurgy is a nomad science (btw this is something Jussi Parikka also takes up in A Geology of Media). Metallurgy's nomadic quality arises from its pursuit of intensifications: melting points, mixtures to form alloys, tempering, annealing, etc. In new materialist terms, metallurgy is a practice that alters the properties of a particular piece of metal by interacting with its tendencies (e.g., its melting point, maximum hardness, etc.) and activating certain capacities through its interaction with other things (e.g., other metals to form an alloy, water or oil for quenching, etc.). Materials rhetoric is similarly interested in the identification of and tactical engagement with the rhetorical tendencies and capacities of materials.

This brings me back to digital rhetoric, where I'll end this. For my purposes, digital rhetoric has principally been about the material-rhetorical operation of technologies. Information is a physical thing; media are things. The cables, wires, and cell/wireless signals that are the media of information are things. Obviously our devices are material, as are we. This is where I begin to verge upon research already going on in various posthuman, new materialist, and/or ecocompositional flavors of digital rhetoric, as well as in media study. However, in thinking about a materials rhetoric I'm imagining something that expands its purview beyond the digital (and perhaps is more historical in some respects) but is principally more applied, that is, more focused on expanding capacities as a priority than on interpreting/truth-telling (though obviously those are integral elements).

 


The post WPA life: one year on

Digital Digs (Alex Reid) - 15 August, 2018 - 09:41

From the summer of 2010 through the summer of 2017, I served as the director of composition at UB. In the last year I've gone back to being a rank-and-file professor in the department. I've been thinking about writing this reflection for a little while but wanted to mark the occasion of a full year on. In my seven years on the job, I certainly learned a lot, and I'd like to think we accomplished a great deal (not that there isn't always more to do). I'm not going to go into that here, as this is not an epideictic post.

I will say that on a daily basis the job was basically an exercise in pissing into the wind. It was labor-intensive, stressful, anxiety-inducing, frustrating, and infuriating in equal measure. Though WPA jobs certainly vary from school to school, I'd strongly warn against pursuing one if what makes you happy is accruing plaudits.

I think the easiest way to measure the professional impact of WPA work, at least on me, is in relation to scholarly production. As a WPA I had summer administrative duties (for which I received a stipend) and normal academic-year duties (for which I received course release), but basically the expectations for scholarly production for me were no different from those of my other colleagues. I can compare my production over those seven years with my productivity at Cortland from 2002 to 2009. It's worth noting that while at Cortland I worked a 4-3 and then a 3-3 teaching load. During that time at Cortland I published 7 articles and a book. At UB, I published 12 articles. I also completed a book manuscript, but that's still in the works. So the way I look at it, I was a little less productive as a WPA than I was at Cortland as a more junior scholar (which makes publishing more challenging) with a higher teaching load. In short, being a WPA had a significant impact on my scholarly productivity, which I don't think is surprising to anyone who's done the job, especially if your scholarship is not related to WPA work or even composition studies (as mine is not).

The real effects though were more personal/mental. I don’t mean to suggest that it was continually miserable being a WPA. It really wasn’t. But over time I just got used to the regular role in my life of addressing complaints, solving logistical problems, strategizing, arguing/persuading/wheedling, and so on. I got used to planning to spend my day doing one thing only to wake up to some BS email that sent me down one rabbit hole or another. But beyond that was the constant awareness that:

  • the adjuncts and others I employed were getting screwed over
  • the TAs in my care were too
  • the administration was always giving me the runaround
  • the students weren’t getting what they needed or at least what they might have gotten if things were better, and
  • there was always more that could be done.

It was like a filter of unhappiness over the lens through which I viewed the world, one that I’d gotten used to and forgotten was there until one day a few months ago I realized that it was gone.

But anyway, the good news. Thanks to the aptitude of my successor, I was able to extricate myself smoothly from the daily operations of WPA life. But it took my mind more than a semester to decompress. I think that, combined with the fact that I had completed my manuscript around the same time as I was ending my WPA stint, left me with an open space that has taken some time for me to figure out. So it's really just been in the last four or five months that I've started moving down some new paths: doing new research, teaching new courses, picking up some new technical skills and renewing some old ones. Not for nothing, I also took off 50 pounds this summer, so I feel like a new man, with literal and figurative weights lifted from me.

I’m looking forward to seeing what the next year brings.

 
