Monthly Archives: January 2013

#edcmooc Misgivings and Thanksgivings

I share some of the concerns Steve Krause and Alex Reid have expressed about the five-week E-Learning and Digital Cultures MOOC offered by the University of Edinburgh in which more than 41,000 people are participating. Alex notes the reductive ways in which the introductory readings are framed, pointing out that the engagement with “Prensky’s digital immigrants and digital natives” terminology “is an unproductive and even damaging perspective” but observing that “as with the utopian/dystopian discourse, perhaps the concept is to move people away from these positions.” I’m with him there, and I’ll add that this is a strategy many of us have used in our own teaching: to begin from a perhaps obvious and engaging perspective and then to gradually complicate matters. I’m not sure I agree with his complaint about the content of the readings, though, particularly in his assertion that “[w]hile technologies do not determine culture, they clearly participate in shaping the world (both naturally and culturally if you wish to make those problematic distinctions)”: well, depending on what positions you’re coming from, as the readings (even in their very basic and introductory nature) suggest, that’s a position that’s open to debate. I would argue the same about his statement that “[w]e could say that technologies are market-driven, but we wouldn’t want to mistakenly believe that the market overdetermines technology. As if the market were some uniform entity. As if the market were not capable of error.” The market had nothing to do with the Internet: that was all government- and university-driven. Ditto for the space program.
I’m not disagreeing with Alex for the sake of disagreeing; I simply mean that disagreements about positions offered by readings in the course are different from disagreements about how the course is conducted, and I suspect that the course leadership might have some idea about the types of engagement they were trying to promote and the range of positions they were offering for examination.

And that’s why I find myself liking the generous-but-skeptical way I see Steve Krause thinking about the course leadership’s methods when he observes that “Knox et al seem to be attempting an alternative to the ‘drill and grill’ approach, though it remains to be seen if they’ll be successful.  40,000 people have signed up for this MOOC, and I have to wonder if many/most of them will understand the dispersed learning experience. And I have to wonder if this dispersed kind of learning is ultimately scalable.” This experiential mode is a good thing, I think, and I’m curious to see how my fellow participants find their own ways through the material. With 41,000 participants, there’s way more activity and interaction than I could ever take in, but I’m starting to get a handle on which threads I might check in on — journalism has long demonstrated, and web discussion fora have long confirmed, that the ability to write a kicky and informative headline and lede can sometimes give you an idea about the quality of the discourse within.

More important, though, and what ought to make folks like Cheryl Ball rejoice, is the way the course leadership have designed and characterized the final peer-evaluated project that determines one’s performance in the course. As they put it, in language and a conceptual approach likely familiar (and that’s not a bad thing) to many of us in computers and writing,

Text is the dominant mode of expressing academic knowledge, but digital environments are multimodal by nature – they contain a mixture of text, images, sound, hyperlinks and so on. To express ourselves well on the web, we need to be able to communicate in ways that are “born digital” — that work with, not against, the possibilities of the medium. This can be challenging when what we want to communicate is complex, especially for those who are used to more traditional forms of academic writing. Nevertheless, there are fantastic possibilities in digital environments for rethinking what it means to make an academic argument, to express understanding of complex concepts, and to interpret and evaluate digital work.

That open-ended and multimodal approach to a final project has a lot of people in the course nervous, but it also makes me really excited: there’s finally starting to be some big, widespread recognition of and engagement with (and even validation of?) the affordances of new media composing. Even if 90% of the MOOC’s 41,000 participants drop out, that’s still 4,100 new media compositions to be excited about. Anybody looking for a possible Kairos Topoi submission? I’d love to see a big-data approach to assessing that corpus of new media compositions. Talk to me.
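A quick sanity check on that back-of-the-envelope arithmetic, in Python (the 90% dropout figure is a pessimistic assumption on my part, not a course statistic):

```python
# Back-of-the-envelope check on the size of the final-project corpus.
enrolled = 41_000       # participants reported for #edcmooc
dropout_pct = 90        # a pessimistic assumption, not a course statistic

# Integer arithmetic avoids floating-point rounding surprises.
finishers = enrolled * (100 - dropout_pct) // 100
print(finishers)  # 4100 compositions even under heavy attrition
```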

The Bridle and the MOOC

I’m enrolled in the “E-Learning and Digital Cultures” MOOC (#edcmooc) that the University of Edinburgh is offering through Coursera, and it’s offering an interesting bit of synchronicity with some of the other things I’m working on, including taking part in a reading group with five graduate students as we work our way through Marx’s Capital volume 1, and teaching the spring-semester iteration of a 300-level WSU course (DTC356) called “Electronic Research and the Rhetoric of Information.” As you might imagine, reader, there’s a bit of overlap, and some curiously shifting perspectives.

In the reading group, we just finished the notorious Chapter 3, the chapter on money, and the dialectical back-and-forth got a bit head-spinning. It’s the first chapter where Marx mentions accumulation, and the impulse toward accumulation, but it’s also an amazing analysis of how capitalism, when it works perfectly, inevitably tends toward crisis precisely because of the way it works perfectly. The chapter takes Marx’s foundational work with the commodity (and its instantiation of frozen socially necessary abstract labor: in other words, the first way we see labor undertaking its transformation into capital) as its starting point and then investigates the curious and contradictory ways that money functions, winding its analysis toward the function of paper money and credit as a human-created technology. Marx notes that there are some items that possess value (in that they are frozen labor) and a price, and that there are other items that possess no value in his technical sense of the term (because no labor went into them: his examples are honor and conscience) but that do possess a price. I’ll leave my quibbles with that second half of the definition for later — I believe that social constructs like honor and conscience themselves require labor to produce even if we are seldom conscious of that labor — because the important thing to note is that there are some things that have prices but that do not have values.
I would extend this to say that there are some things that have prices but that have negative values: for example, the collateralized debt obligations (CDOs) and credit default swaps (CDSs) that were intentionally crafted to be so mathematically complex as to be beyond understanding, and so to be able to hide the so-called “toxic” mortgage loans that were incorporated into them. That complexity allowed bankers to sell those instruments to investors while simultaneously betting against them, and thereby to profit from the collapse of a product they had sold knowing they had designed it to fail. Those CDOs and CDSs are human-designed technologies of capitalism, and they carry prices as mechanisms for the redistribution of wealth (from sucker investors to savvy bankers, apparently), but I’m still wondering whether they fulfill Marx’s definition of a commodity as carrying the value of the abstract social labor that went into their production.

Here’s an analogue for that question: given enough computational and analytical power — or, in other words, given enough human labor translated into the digital capital of financial systems analytic software via lines of code written and accounting formulae written and aggregated study and expertise all operating on machines designed by teams of engineers and experts who relied on previous insights and innovations going back even prior to the invention of the transistor — could the ways that CDOs and CDSs contributed to the Crash of 2008 have been anticipated or prevented? Did CDOs and CDSs as technologies of capitalism determine that such a Crash *must* have happened at some point? In the DTC356 course I’m teaching, we’re reading about Claude Shannon as an information theorist who believed the necessary step to decode information was to discard meaning: we don’t care about meaning, Shannon argued. We care about the signal, about the code. Focus enough on the code and discard the context and one can decode any information. In this sense, I suspect Shannon was largely a technological instrumentalist of the sort produced by the first half of the 20th century, particularly if we understand “technology” to exist as a field that includes “tools, instruments, machines, organizations, media, methods, techniques, and systems” (“Reification”). Technological instrumentalists believe technologies to be use-neutral and subject only to human intention, even as their invention seems to demand their use, even as they seem to exist as autonomous entities divorced from us, apart from society, simply things lying to hand to be used.
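Shannon’s indifference to meaning is easy to make concrete: his entropy measure depends only on the statistics of the symbols in a message, never on what those symbols signify. A minimal sketch in Python (the toy strings are my own invention, not anything from the course readings):

```python
import math
from collections import Counter

def entropy(message: str) -> float:
    """Shannon entropy in bits per symbol: computed purely from
    symbol frequencies, with no reference to meaning."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Two messages with identical symbol statistics get identical entropy,
# however different they might be in meaning to a human reader.
print(entropy("abab"))  # 1.0 bit per symbol
print(entropy("baba"))  # 1.0 bit per symbol
```

The point of the sketch is exactly Shannon’s: scramble a message and its information content, in his sense, is unchanged, because the measure never looked at the meaning in the first place.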

To my mind, though, what Marx helps to show is the ways in which human social arrangements give rise to systems that blinker us in specific ways, that point us toward certain ways of being and certain technologies, so that in a capitalist system CDOs and CDSs make perfect sense even as they precipitate crises that demolish enormous amounts of actually-existing value (as instances of frozen human labor). I don’t (or won’t) identify as a technological determinist (although I tend much more easily toward an overdetermined technological determinism than toward a technological instrumentalism), but when I look at the intersection of social, political, and economic habits and practices with technologies like computers, cell phones, CDOs, and CDSs, I can’t help but think of the end of the classic Raymond Carver story “The Bridle” and its attitude about technologies like the bridle: Marge looks at the bridle — that instance of frozen labor, that commodity, that technology — after all that has gone on in the story, and thinks, “If you had to wear this thing between your teeth, I guess you’d catch on in a hurry. When you felt it pull, you’d know it was time. You’d know you were going somewhere.” That circumstance at the end of the story, though, seems to me to point to the same circumstance that finally happened, however inexorably, in 2008: the overdetermined combination of heterogeneously massed human intent and reified technologies that some understood better than others produced a perfect crisis. We socially design our own technological affordances, and often, as with the bridle, we elect to wear those affordances.