It's Not Technology; It's the Desire for Peace and Perfection
09-20-2010

Jaron Lanier has another excellent essay probing the dangers that technophilia poses to human thinking. His theme is technology in the classroom, and he asks: Does the Digital Classroom Enfeeble the Mind?
What makes Lanier perhaps the most thoughtful essayist on the question of human thinking is his insistence on asking: What happens when we use technology to solve problems that we humans cannot understand? In this case he asks: what are the possible advantages and dangers of using technology to teach our children to think?
Clearly educational technology has its advantages. My five-year-old daughter plays Brain Quest on my iPhone and has learned addition and division earlier than most. Her new kindergarten classroom is outfitted with Smartboards that bring children into the educational process. My colleague at Bard has students post drafts of their papers on Moodle (an online teaching resource) and has them edit each other's papers before rewriting them. And technological evaluation of teachers and students allows us to identify what works and what doesn't in the classroom.
I have no doubt that computers can make teachers better and may, in certain tasks, be better than a human teacher. A robot may be better than a teacher at drilling a young student on vocabulary and math. A computer may also be a better grader than a teacher, especially given the rampant grade inflation of recent years. If every teacher were presented with a computer-generated grade of a student's test or paper based on objective criteria, that might go a long way toward counteracting the emotionally laden tendency to soften our grades and simply pass on weak students without challenging them to do better.
As technology infiltrates the classroom and shines light on what teachers do well and what they do badly, it also reorients teaching in a way that denies the magic of thinking and teaching well. As Lanier insightfully notes, there is a magic to teaching that our reliance on technology seems destined to overlook. I am a teacher, and I can tell you that the most valuable and extraordinary moments of teaching and learning happen in those surprising moments when teacher and student are transfixed on the precipice of a question that lingers for minutes, days, or weeks. To break through a student's commonsense assumptions and confidence about how they see the world and to open up new paths of inquiry and thought--that is what makes teaching wonderful.
Can a computer do such a thing? I admit it is possible. But there is no model for doing so, and that is the point: we do not know how it is done. It is something a teacher--a good teacher--learns to do, and we cannot yet program a computer to do such a thing, although it may be possible that computers will one day learn to do it themselves. We cannot, as Lanier suggests, bottle the magic of teaching in a computer, because we do not yet understand the magical aspects of human thinking. To ignore our ignorance and subject more and more of our educational efforts to technological controls and measures is to forget that even today, as Lanier writes, "Learning at its truest is a leap into the unknown."
If I have one quibble with Lanier's overwhelmingly intelligent approach to these questions, it is his focus on technology as the question. In his essays and his book You Are Not a Gadget, Lanier presents our challenge as how to deal with technology. In this most recent essay, he offers a version of an alternative he has now presented in many different forms. Either we use computers and robots and systems to "measure and represent the students and teachers," or we employ technology to help teach the students how to "build a virtual spaceship." The former approach subjects thinking and teaching to computerized models. The latter frees students to use technology in their own thoughtful pursuits. Lanier is right that we should encourage the building of spaceships and be wary of teaching software that forgets that thinking is a magical process beyond our comprehension.
What such an opposition overlooks, however, is that the human decisions about whether and how to use technology are themselves subject to a deeper desire: The desire to relieve ourselves of the burdens of thinking.
Hannah Arendt diagnosed this same desire to overcome our thinking, our acting, and thus our humanity in her book The Human Condition, published in 1958. Discussing Sputnik, which she calls the most important event of the modern age, Arendt cites The New York Times' report that Sputnik was the first "step toward escape from men's imprisonment on earth." She sets this sentiment alongside a 1911 quote from the Russian scientist Konstantin E. Tsiolkovsky: "Mankind will not remain bound to the earth forever." Together, these statements manifest a profound desire to leave behind the earth--a desire that Arendt believes long precedes our technological ability to do so.
The desire to leave the earth is, for Arendt, the desire to abandon our humanity. As she writes:
"The earth is the very quintessence of the human condition."
When she speaks of the "earth," Arendt means that aspect of our lives which mankind does not create--that which is beyond human control and human artifice. The earth names that quintessential essence of man as it has been given, free, as she writes, from human intervention. Our earthliness is "a free gift from nowhere." This free gift of human existence can, of course, be understood religiously as man's divine creation by God. But it can also be understood, as Arendt means it, in a secular sense, as the fact that mortal beings are subject to fate and chance beyond their control and comprehension. It is this earthly subjection to chance that Arendt says is the "quintessence of the human condition."
Is Arendt right that the subjection to chance and fate is at the essence of being human?
If she is, then she is right to worry that our dreams of abandoning the earth, along with our dreams of creating life in the test tube, of producing superior human beings, of breeding intelligent servants, of living forever, and, of course, of creating robots intelligent enough to teach and think for us--all of these dreams manifest an urgent desire on the part of humanity to cut the last tie to our humanity. What we humans want, Arendt argues, is to commit suicidal genocide. The danger is not technology itself; technology is only an expression of our darker and deeper urges to overcome ourselves.
Lanier is a prescient guide to the right questions about our engagement with technology, but when he expresses the hope that we will decide to use technology wisely, he proceeds on the assumption that technology itself is the danger and that we thoughtful humans need to understand and resist it. What Arendt places before us is the troubling insight that technology's dangers are only symptoms of an all-too-human wish to extinguish the very thoughtful, soulful, and creative impulses that distinguish us.
What needs to be asked is not whether new possibilities will emerge for using technology well--of course they will, although such possibilities will likely become ever more rare. The bigger question is: what is driving our communal desire to exchange our earthly humanity for ever-more-rational and ever-more-expertly-conceived ways of life? Arendt offers us an answer. We want to exchange the free gift of life for a planned life. We want to exchange freedom for behavior. And we want to exchange thoughtful creativity for the security of reason. It is these fundamental desires--themselves human--that we must grapple with in an increasingly inhuman age.
RB