The “Role” of Human Thought in the Internet Age
by Baglan Nurhan Rhymes, SVP at AnchorFree
Recent events provide a devastating critique of the near past, especially when those concerned experience themselves as living in a new epoch, “the Internet Age.” The NSA and Britain’s GCHQ have been collecting data of an extremely sensitive nature from smartphone apps, like that for Angry Birds. If you are a user, the chances are that details such as your age, gender, location and even sexual orientation have been mined. As such, imagine the naiveté, the technocentric utopianism, of the developers of BinCam. Remember BinCam?
BinCam was a British-German project to provide the world with smart and socially responsible trash bins. A camera installed under the lid would take a snapshot of the garbage every time the bin was closed. The images were then to be subjected to human analysis for a determination of one’s real-time commitment to recycling and responsible consumerism. The evaluated photos were then to be uploaded to the bin owner’s Facebook account and a grand competition would begin. Scores were to be totaled regularly and we all would race our friends to a healthier planet.
BinCam should be recognizable as an expression of a very familiar narrative. The incentivized Pavlovian dogs of cyberspace chase the fool’s gold of virtual prizes to defeat their friends and feel good about themselves. However, troubling questions arise. The foundational assumption is the efficacy of incentives upon a childishly suggestible people. An arsenal of data-mining powers waits to be unleashed, for good or for ill, but a fundamental human consideration underlies it all. In the case of BinCam, the question is whether the conventional wisdom is correct in assigning the blame. What if an element involved is ideological in the classical sense—a displacement of blame (say, upon the consumer rather than manufacturers, upon the gardener rather than Dow Chemical)—and spawns game-playing eco-vigilantes furthering an agenda that puts the emphasis precisely in the wrong place and deepens the problem? Instead of discovering once again how to work together as good citizens and restore our institutions to responsiveness, we engage each other as competing, isolated individuals in an endless cycle of guilt and catharsis. While we seem to be optimizing behavior at the atomistic level, the more substantial gains to be made collectively are lost sight of and neglected.
There is no easy answer to this question: Do we want to use technology so as to depoliticize our people, render them less civic-minded than before? This form of self-alienation (assuming the human being is in fact a political animal) is not an intrinsic property of cyberspace. It is, however, a symptom of the careless misuse of the Internet. Do-gooders, vested interests and power mongers of all stripes have felt the tantalizing pull of smart sensor-powered technologies, and because we have largely vacated the public sphere (as evidenced by dramatic declines in membership in everything from the Lions Club to the local bowling league) our socio-political lives are being hijacked by the most tech-savvy (hence influential) groups in the world, groups which are by no means invariably benevolent. The incentivized citizen is just as much an oxymoron as responsible consumerism. There are, for example, relatively high-tech solutions to obesity, including invasive surgical procedures, but it isn’t clear that this dependence (on “experts,” things, methods, techniques) is healthy or can ever substitute for informed, deliberative self-discipline. Should we unleash the mob psychology of Facebook to air our dirty linen or unconscionable trash in public, all the better to enforce conformity? In other words, at the same time that the public sphere is collapsing with the atomization of our communities, the individualism that would otherwise make solitude bearable is eroded. Instead we have introverted users without, in their fast accumulating dependencies, the resources for profitable, ennobling or rewarding introspection—a kind of wasteland of the soul.
A profound theorist has addressed precisely the central issue here, a reliance upon the “conventional wisdom” (arguably a self-contradictory phrase), only he speaks of “the historical process” (a more encompassing term, including sign systems, traditions, beliefs, values, ways of life and so on that inform shared understandings and individual decision making). He writes, “Ecology: in spite of the infinite adaptability of capitalism which, in the case of an acute ecological catastrophe or crisis, can easily turn ecology into a new field of capitalist investment and competition, the very nature of the risk involved fundamentally precludes a market solution—why? Capitalism…implies trust in the market’s ‘invisible hand’ (one manifestation of the historical process) which guarantees that the competition of individual egotisms work for the common good. However, we are currently experiencing a radical change. Up until now, the [historical process] has played its role as the medium and foundations of [the actions of the individual]. What looms on the horizon today is the unprecedented possibility that an [individual action] will intervene [in the historical process], catastrophically disturbing its course by triggering an ecological catastrophe, a fateful biogenetic mutation, [a cyber attack], a nuclear catastrophe.” This is a (historically speaking) newly possible reversal of the power relation between the historical process and the individual, whereby the individual continues to rely upon the supports of history (language, customs, art, concepts, practices) while possessing the ability to derail the historical process altogether.
We might consider a counterexample to that provided by the shortsighted BinCam. A company called BigBelly Solar set about designing solar-powered bins that reported their fill levels and, by way of a predictive algorithm, signaled when they would need emptying. In this fashion, half-full cans would not be dumped and the costs of trash collection would be significantly reduced. According to one study, “The city of Philadelphia has been experimenting with [self-reporting] bins since 2009; as a result, it cut its central garbage-collecting sorties from 17 to 2.5 times a week and reduced the number of staff from thirty-three to just seventeen, bringing in $900,000 in savings in just one year.”
Any mature choice between BinCam-style solutions and those of BigBelly Solar has to include careful consideration of the effect upon the quality of life for all the individuals of the community, and it is much more difficult to see anything like the same troubling cost-benefit tradeoffs in the case of Philadelphia as with BinCam. In fact, self-monitoring garbage bins observe the same principle of utility that railroad schedules once upheld and continue to uphold. It is the principle of the economy of movement, which works when applied to machines and schedules. It is something else when applied to human beings (as in, say, Taylorism). Philadelphia oughtn’t to have secrets to protect from the NSA. These are critical distinctions that must be taken with the utmost seriousness.
It is a regular occurrence in the technology sector to not only overlook basic psychological and sociological considerations while keeping a Cyclops eye on the “mathematics,” but to identify problems that are not actually problematic. The blind ambition to obviate every difficulty, inconvenience or resistance is fraught with danger, for it is not clear that these are problems. (And much the same kind of blindness affects the approach to genuine problems as well.) As exemplified in those who seek quick solutions to environmental degradation, arguing in techno-scientific utopian fashion for a silver bullet of sorts (rare earth elements, new energy sources, disposal systems, means of containment) that would bring the planet back to health, Cyclopean tunnel vision is a greater danger to human existence than planetary warming. According to leading environmental scientists, “if humanity were to abruptly stop its immense industrial activity and let nature on Earth take its balanced course, the result would be a total breakdown, an unimaginable catastrophe. ‘Nature’ on Earth is already so ‘adapted’ to human interventions in the shaky and fragile balance of ‘natural’ reproduction of the Earth, that a cessation would cause a catastrophic imbalance.” The central culprit, again, is analytic, one-sided thinking. Reality is quite otherwise. “Nature” [a synthetic/organic hybrid] is no longer “natural” [without anti-natural impregnation] but “adapted” [or is it “maladapted”?] to “pollutants” [necessary to life]. Ecologists who fanatically opposed the destruction of forests by pressing for the strictest measures of fire suppression damaged more virgin forest than the logging industry. Periodic fires are indispensable to forest self-reproduction. Likewise, the quick cessation of coal-burning in many parts of the world would be disastrous for whole species of avian and aquatic life. This is by no means an argument for passivity.
We need to take control of our planetary destiny, but to do that we need smart humans more than technologies. We need an educational system that actually works so that our people will be able to think rather than wave the magic wand of some simplistic principle (e.g. free markets, the gold standard, scientism, old-fashioned ways and so on).
The movement has been underway for some years to create online learning environments. Efforts are ongoing despite the paucity of encouraging empirical data, and this less-than-scientific persistence leads one to reflect on the so-often-heard flip designation of computers as “just tools.” The same computers that are envisioned in all their AI Hollywood glory are frequently dismissed as mere tools, and sometimes by the same people. But what if we are, in important ways, reflected back to ourselves in our tools, what if homo sapiens the toolmaker is as much a function of “tools” as of the thumb (this biological tool)?
According to Pamela Hieronymi of UCLA, in her research into online learning, “Education is not the transmission of information or ideas. As information breaks loose from bookstores and libraries and floods onto computers and mobile devices, [the] training [of minds] becomes more important, not less.” Furthermore, what if education, if it is to be transformational (i.e., make a real difference in our lives), simply must have a human face? The president of Williams College, Adam Falk, has claimed that, according to his analysis of the data, “the best predictor of students’ intellectual success in college is not the major or GPA but the amount of personal, face-to-face contact they have with professors.”
The thoughtlessness of the rollout of technology in the field of education is the single most interesting fact about the virtual classroom. Coursera, for instance, has had student grades determined by averaging the assessments of five randomly selected peers. Everything from grading to interactivity and testing has been turned over to the advocates, whether reflective or not, of the wall-to-wall technologization of all processes. Plunging test scores in the US, an early adopter of the virtual classroom model, do not support the facile equation of access and learning. The successful transmission of ideas is not the same as the proper reception of them. If we are to choose the best employment of technology, for I am very far from saying it has no role to play in education, we have to take possession of ourselves once more as human beings. As MIT’s Sherry Turkle has argued, “In the late 1970s and early 1980s, I witnessed a moment when we were confronted with machines that invited us to think differently about human thought, memory, and understanding. The computer was an evocative object that provoked self-reflection….Face-to-‘face’ with a computer, people reflected on who they were in the mirror of the machine.”
Turkle is certainly correct that there has been a formative reflection of the individual into the computer. However, I would make the case that this has not been an especially thoughtful meditation, that the tool has largely hijacked our self-conception. And close historical parallels are not hard to find. While Newton was no fan of a mechanical or clockwork conception of the universe, on theistic grounds, his contemporaries were. Newton-supporter Samuel Clarke, replying to an advocate of the clockwork paradigm, Gottfried Leibniz, observed that the “notion of the world’s being a great machine, going on without the interposition of God, as a clock continues to go without the assistance of a clockmaker is the notion of materialism and fate, and tends to exclude Providence and God’s government in reality out of the world.” But despite Newton’s best efforts, the reigning metaphor of the universe, the gigantic clock, spread like a prairie fire, undergoing numerous permutations (e.g. the Cartesian conception of the soulless animal as the merest machine). We are so much the function of our tools that the dominant metaphors for the world around us have been borrowed (and imposed) from the workplace. Alarm bells sound only when life, authenticity, presence take on the characteristics of things and become moot.
One researcher into the psychological impact of the Internet upon the user threw pizza parties to interrogate people about their online experiences. She writes, “They described the erosion of boundaries between the real and virtual as they moved in and out of their lives on the screen. Views of self became less unitary, more protean. I again felt witness, through the prism of technology, to a shift in how we create and experience our own identities….I was meeting people, many people, who found online life more satisfying than what some derisively called ‘RL,’ that is, real life.” Alternatives to RL have a notoriously bad track record, including, as they do, opium dens, fixations, psychoses and everyday denial. RL is full of inefficiencies that only came to be experienced as inefficiencies as we reflected upon ourselves in our tools. Dating, for instance, is dangerous. You could get hurt, have your heart broken, experience loss. So we use technology to protect ourselves from those aspects of human relationships that have more than ever come to seem optional. Why not text the bad news to your girlfriend? Wouldn’t that beat the inefficiencies of enduring face-to-face truth telling? On the other hand, what if these inefficiencies—the prospects of pain, loss, heartache, broken faith, death—are in fact the necessary conditions for social performance, for mutual trust, and as such for maturation? It has always been bad parenting to turn a child over to the babysitting TV, but babysitting has gone flat-screen and wall-to-wall and mobile (installed in the automobile). A 2010 Nielsen study found that the typical teen sends in excess of three thousand text messages monthly. With all this time reflecting upon ourselves via things, one would think that someday we would all go dead inside and prefer things to living beings, that we would become fascinated with vampires and zombies and the dead (even as lovers). Someday?
The Zhu Zhu pet hamster was the Christmas season craze of 2009-2010. It was offered to the public as “better” than any living creature. According to the associated marketing, a Zhu Zhu is lovable, affectionate, mess- and death-free. Furthermore, the following summer, the New York Times and the Wall Street Journal featured celebratory reports on robotic pedagogues, friends, and therapists. While it is said that tomorrow never arrives, it would seem that someday already has.
Take the time and talk to people face-to-face. Let them share their experiences. A colleague told me of a visit to a museum where actual tortoises from the Galapagos Islands of Darwinian fame were on display. Her daughter observed, “They could have used a robot.” A little girl standing nearby chimed in, “Its water looks dirty. Gross.” Another child said, “For what the turtles do, you didn’t have to have the live ones.” (This way of putting things—the live ones, as if those were a subspecies—suggested to my colleague a genuinely rooted assimilation of certain premises.) A father gazed at his daughter with astonishment, “But the point is that they are real. That’s the whole point.” None of the children seemed to get the point. The idea of the authentic, the original, of immediate presence was entirely lost on these children, and this indifference is widely supported in the relevant scholarly literature. One academic concludes, “I believe that in our culture of simulation, the notion of authenticity is for us what sex was for the Victorians—threat and obsession, taboo and fascination.” Another writes that “aliveness” has lost much of its “intrinsic value.” And perhaps the crowning bit of evidence in this regard is the book by David Levy, a renowned computer scientist, titled Love and Sex with Robots. Levy waxes lyrical over the near future, “Love with robots will be as normal as love with other humans, while the number of sexual acts and lovemaking positions commonly practiced between humans will be extended, as robots teach more than is in all of the world’s published sex manuals combined.” Levy goes on to advocate human-robot unions, justifying his position in terms of efficiencies: no extra-marital affairs, no loss, no heartbreak or hassles of accommodating a fellow human being.
The mindless universal application of technology, the belief in it as a potential solution for every problem, is a truly destructive force. The wise employment of technology depends upon a far-reaching, holistic consideration. We must take into account the limitations of any given tool within all the shifting contexts of its use. What, for instance, might be an intrinsic limitation of the technology that would aspire to true conversational proficiency? One is traceable to a logical divergence between programming code and human speech. Code is largely linear, while human language is contextual through and through. Code is rule-governed, while language is not. Code is world-less, while language is world-full.
The ever-shifting contextual apprehension of linguistic meaning is suggested every time we refer to a usage that “doesn’t sound right” (rather as a man in a powdered wig might not look right). We know, for example, that the word pour tends to be content-referential (about what gets poured). We say “Pour the milk” or “Pour the water,” but not “Pour the container full.” The word fill is container-referential. We say “Fill the pitcher” or “Fill the hole,” but not “Fill the milk into the hole.” However, the word load is both content- and container-referential. We say “Load the hay” or “Load the wagon.” The point is that there is no underlying rule that determines this usage. This is in fact how we use these words. Some linguistic acts “get done” (just as some baseball hats “get worn” backwards) and some don’t (or no longer do—e.g. “So gag me with a spoon”). Those that do not “get done” sound strange, and words once in usage (e.g. groovy, daddy-o, hepcat, etc.) can grow increasingly strange.
Language is a constantly evolving “thing” that grammarians chase after with their categorizing nets to impose a largely imaginary order. The word man means what it does only in terms of the context—and not just that of the language used. Language, considered on its own, is meaningless. The world always enters into language, provides it with sense. (As such, aliens will never actually listen in on our radio and TV broadcasts, not with comprehension.) So, for example, the word man can be benign or explosive depending upon where one stands in the world—in a courtroom, say, and in what role one stands there. If I point to someone and declare, “That is the man!” I could well be acting appropriately. My declaration would sound right. However, were I to refer to the judge as man (as in “Hey, man!”), I am probably playing with fire. But in every case, the significance of what I say depends upon where precisely I “stand” in the world. It isn’t that human language cannot be thought, obviously. It is just that a computer can’t think it, because computers don’t think as human beings do. Humans think in terms of constitutive relationships. Machines do not.
A similar divergence obtains in the case of dance, music, cooking and other everyday human activities. Mark “Corky” Ballas, nine-time world champion ballroom dancer, once told me that the steps painted on the floor, the mechanical movements of the technically proficient, do not a dancer make. Dance is not sheer repetition. (A metronome is sheer repetition.) Neither are music or cooking quintessentially repetitious. Michael Oakeshott writes, “It might be supposed that an ignorant man, some edible materials, and a cookery book compose together the necessities of a self-moved (or concrete) activity called cooking. But nothing is further from the truth….The book speaks only to those who know already the kind of thing to expect from it and consequently how to interpret it.”
The analytic, techno-scientific mindset does not know how to cope with this contextual logic. By its lights, the pancake would simply be nothing other than Bisquick, eggs, sugar, baking powder, lemon juice, vanilla and milk, all heated. But this is the purest reduction of the pancake. Just as a man cannot be a father absent a son or daughter, the pancake is the product of a relationship between elements. Take away the element of child and fatherhood is lost. But fatherhood has always been something quite real in addition to the mere fact of procreation (just as water has always been H2O, but also the seas). It is more than the sum of its biological parts. It is the product of the constitutive relationship by which our young are raised and preserved against the chaos.
What most software programmers could never grasp is that no one really knows what Coca-Cola tastes like. (To Chairman Mao, Pepsi tasted like medicine.) The American palate typically cannot distinguish between Coke and Pepsi in a blind taste test. However, knowing that one is in fact drinking a Coke changes everything. One drinks America, a grand tradition, a classic—all of the notions that have accrued to one of the most successful branding operations in human history. Really, ask yourself, what does Coke taste like?
It can only be imagined that Jinna Lei has no time for such speculations. The University of Washington computer scientist has designed a smart kitchen. Cameras recognize objects by their geometry. A chef about to make a mistake, deviate from a recipe, can be warned. One commentator refers to Lei’s kitchen as “a temple of modern-day Taylorism” and proceeds to observe, “[C]ooking thrives on failure and experimentation, deviating from recipes is what creates culinary innovators and pushes a cuisine forward, and inefficiencies are not to be discarded as whimsical or irrelevant. For many such well-meaning innovators, the context of the practice they seek to improve doesn’t matter—not as long as efficiency can be increased. As a result, chefs are imagined not as autonomous virtuosi or gifted craftsmen but as enslaved robots who should never defy the commands of their operating systems.” But it might be time to step back and ask a question. Nothing breaks matters open anew like a well-aimed question. How are we faring with all of this efficiency? We have an obesity pandemic, an ADHD pandemic, and it looks rather as if the whole world is having a nervous breakdown. Enough coffee is consumed to make every creature on Earth resemble Charlie Chaplin, twitching with the neurotic reflex of repetition on the assembly line of Modern Times.
However, it should be observed that techno-science utopians of “the Age of the Internet” do not see themselves within the tradition of the assembly line or Taylorism or mass production or interchangeable parts or the telegraph. They tend to see themselves as occupying the far side of the revolution, of a rupture with the past best exemplified in the Internet. According to Gabrielle Hecht of the University of Michigan, the invention of the atomic bomb and nuclear electric power were experienced as “a historical break, the dawn of a new era—here, ‘the nuclear age’—in which everything, everywhere, was forever different.” And it would be tempting to reconsider all the horrendous blunders of the Cold War in terms of a reductionist binary logic.
The danger that resides in reading history as discontinuous and thereby removing oneself from it is that the historical process goes on regardless. That just such a discontinuous reading of history is widespread in the discourse of cyberspace thought leaders can be readily confirmed. As one commentator notes, “epochalist rhetoric…explains both the religious zeal with which [Internet thought leaders] embark on and justify their quest to ameliorate the human condition as well as their lack of empathy for industries and institutions that are currently in crisis.” They have separated themselves off from the past, as suddenly irrelevant, and the future has turned inconceivable except in terms of the Internet—and their thinking reflects very precisely this immensely impoverished viewpoint. Hence there obtains the mindless universalizing of solutions and messianic claims for the power of the Internet. The historical process goes on despite our fixed ideas, and the designers of the BinCam are undeterred by the example of Angry Birds, though one has to stand in amazement at the precisely historical shortsightedness by which the patently obvious connection goes overlooked, still today.
This story was provided by an individual or organization for use on the Ohio.com community site, http://www.ohio.com/upublish.