Morlock Elloi on Sun, 11 Mar 2018 21:17:15 +0100 (CET)
<nettime> Slide-Rule Studies Department (was Re: The System Development Corporation)
I think that this is a major 'elephant in the room' phenomenon that has permeated everything, including nettime, of course. I'm referring to the philosophical discourses on the primitive, simpleton application of Merkle trees in the blockchain algorithm, which assign this f*cking idiocy magic powers to do evil, good, or some combination thereof.
From https://thebaffler.com/salvos/blame-the-computer-pein

Blame the Computer
The fake science that keeps threatening to kill us
Corey Pein

Evidence mounts that the forces of digital civilization have produced a technological dystopia run by artificially unintelligent algorithms designed in the interests of greed for maximum efficiency. And true to the tropes of many a dark sci-fi reverie, these impersonal arbiters of our collective fate evince neither pity nor mercy—which means, among other things, that one entirely foreseeable byproduct of their operation is to inflict maximum terror on the human population, whose participation in the system is ritualistic at best. Had there been any residual reason to doubt any part of this glum portrait of our remorselessly data-engineered vision of the human future, well, it was rudely laid to rest on Saturday, January 13, at 8:07 a.m. Hawaii local time.
The bad news arrived, as all news seemingly does now, as a smartphone push notification. But this notification looked and sounded different than most. It was, in fact, an emergency alert from the state government. The form of the alert was not a radio wave or an electronic signal, as in decades past, but computer code. Attached to the alert was an audio file with a recording of a synthesized male voice, for the vision-impaired. “The U.S. Pacific Command has detected a missile threat to Hawaii,” the voice said. “A missile may impact on land or sea within minutes. This is not a drill. If you are indoors, stay indoors. If you are outdoors, seek immediate shelter.” The stilted tone of the robot voice was all the more eerie, tasked as it was with effectively announcing the impending death of whoever heard it. “We will announce when the threat has ended,” it said. “Take immediate action measures.”
Take what? And who was “we”? For many, “action measures” meant running around in panic. More level-headed folks tore through their pantries searching for bottled water and canned foods, then hid under a pile of mattresses, or squeezed into bathtubs with their bawling children. Thousands said tearful goodbyes to loved ones over the phone or, failing that, to strangers over social media. Some had their possible last words mangled by autocorrect: “Is a missile ducking coming. Holy shit.” Ducking hell.
Managers at Starbucks and McDonalds, in the style of British cavalry officers in Crimea, ordered workers to stay on duty, missiles be damned! One young woman responded with a definitive anti-endorsement of an Oahu confectionary chain, writing: “DON’T TELL ME THIS MISSILE THING IS REAL I AM NOT DYING AT COOKIE CORNER.”
The Wrong Button

A few skeptics wondered why they didn’t hear sirens. Five minutes after sounding the alarm, and having received confirmation from the United States military that there was, in fact, no missile headed their way, Hawaiian civil defense officials attempted to “cancel” the mass alert. But it was too late. The freak-out signal had been received, and there was no taking it back. Finally, at 8:46 a.m., almost forty minutes after receiving the first urgent message, phones across the islands began to buzz with the morning’s second official announcement: “There is no missile threat or danger to the State of Hawaii. Repeat. False Alarm.”
Dread gave way to bewilderment, relief, and various forms of catharsis. The website PornHub released statistics showing a 48 percent increase in web traffic from Hawaii once the emergency was rescinded (whereas the initial alert had prompted a 77 percent drop in porn-surfing on the site). Next came outrage. Heads must roll! But whose?
It took five more hours for Hawaii Governor David Ige to appear on television to take responsibility for the false alarm. (Later, Ige confessed he hadn’t known the password to his official Twitter account, which had further delayed the state’s effort to reverse its error and send an all-clear signal.) Apologies wouldn’t cut it, however. What the hell had happened? The public demanded an explanation.
Ige delivered one. It seemed too absurd to be real, but too embarrassing to be a lie, which gave it the ring of truth. Someone, the governor said, had “pressed the wrong button.”
“OOPS!” replied the front page of the Honolulu Star-Advertiser, in a font size typically reserved for actual declarations of war. The news that no one would be fired for the mistake only aggravated public anger, and stoked more calls for swift managerial retribution. Was improper training to blame? Not likely, it seemed—the state said the unnamed, butterfingered state employee was a “veteran” on the job. (This characterization later changed dramatically; the newly problematic employee was duly thrown under the bus, and his boss resigned.)
The System Worked

The quest for scapegoats next turned toward the Trump administration. The president himself had (thankfully no doubt) been out of the loop, spending the morning of Hawaii’s false alarm on the links at his eponymously branded golf course in West Palm Beach, Florida. Well, then, what about his Federal Emergency Management Agency chief, Brock Long, who had botched the post-hurricane response in Puerto Rico? In the great tradition of evasive bureaucratic action, FEMA shifted blame to the Hawaii Emergency Management Agency, whose administrator, Vern Miyagi, oversaw the alert system. What about him? Indeed, what about the state contracting officer who had hired the outfit that made the agency’s software—Alert Solutions, Inc., of California? What about its Israeli founder, Efraim Petel? What about BlackBerry, Ltd., the Canadian multinational that had acquired Petel’s company and inherited its annual maintenance contract with Hawaii? Did BlackBerry CEO John Chen have anything to answer for? No one asked.
Eventually, the source of the panic was uncovered—in the clunky, counterintuitive design of the software that the Hawaii Emergency Management Agency used. It turns out that, when users engage the missile alert prompt in the system, they’re greeted by a drop-down menu with only two choices: one to test the missile alarm, and another to sound it. A few software designers and “user experience” experts from the tech industry faulted this dangerously simplistic virtual construction—a task made no doubt easier by their perfect hindsight.
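The alert software's source code is not public, so what follows is a minimal, hypothetical sketch in Python, assuming only what the press accounts describe: a two-item menu in which the drill and the live broadcast sit side by side, and a variant in which the live path demands a second, deliberately different action. The option labels, messages, and confirmation phrase below are invented for illustration.

    # Hypothetical illustration only; the real HI-EMA / vendor software is
    # proprietary and its interface is known here only from press descriptions.
    OPTIONS = {
        "1": "Test missile alert (internal drill)",
        "2": "Missile alert (live public broadcast)",
    }

    def send_alert(choice: str) -> str:
        # In the design described in press accounts, one selection from a
        # two-item drop-down is all that separates a drill from the real thing.
        if choice == "2":
            return "BROADCAST: BALLISTIC MISSILE THREAT INBOUND. THIS IS NOT A DRILL."
        return "INTERNAL TEST: no public message sent."

    def send_alert_with_confirmation(choice: str, confirmation: str) -> str:
        # One commonly suggested fix: a live alert requires a second, distinct
        # action that cannot be completed by reflex.
        if choice == "2" and confirmation.strip().upper() != "CONFIRM LIVE ALERT":
            return "ABORTED: live alert selected but not confirmed."
        return send_alert(choice)

    if __name__ == "__main__":
        print(send_alert("2"))                                       # the January 13 path
        print(send_alert_with_confirmation("2", ""))                 # blocked by the guard
        print(send_alert_with_confirmation("2", "confirm live alert"))  # deliberate launch of the alert

The particular guard is beside the point; what matters is the asymmetry. The cost of an accidental live broadcast so vastly exceeds the cost of a delayed drill that the interface should make the two paths hard to confuse, which the real menu evidently did not.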
But here’s the thing: however much critics and alarmed Hawaiians might crave the cathartic release of blaming, and cashiering, a fellow human, there was no plausible scapegoat on offer. In other words, the system had, in fact, worked as designed—it was just obeying a cosmically disastrous user prompt.
Only one month prior to the January fiasco, Miyagi’s agency faced stern criticism from the press after emergency sirens failed to sound in a nuclear attack drill. Here was something of a photographic negative of the missile attack scare, in which the emergency alert system didn’t appear to be stoking enough hypothetical alarm. Miyagi reassured the Star-Advertiser that despite this testing glitch, his failsafe system had actually achieved one of its major goals—alarms could be sounded using “a single button in an emergency operations center in Honolulu.” Indeed, everyone conceivably responsible in the system’s test run had performed more or less to specification.
So the panic-stoking false alarm in January was a snafu of software engineering—and the implicit faith that every problem, even nuclear apocalypse, awaits a simple, convenient, digital solution. Put another way, it was a carefully designed byproduct of computer science.
As if to drive this point home for the world at large, a nearly identical false alarm went off three days later, sending out an electronic alert about an impending nuclear strike by North Korea. As before, there was no missile. This time, the purported target was Japan, where, as in Hawaii, millions felt the same terrifying surge of adrenaline and apocalyptic dread as they looked to the skies. The state broadcaster, NHK, blamed an unexplained “switching error” on an unspecified “device.” Translation: someone pressed the wrong button.
Too Brain-dead to Fail

It’s undeniably a good thing that so far these snafus happened with alarm systems rather than missile launchers. But it doesn’t take much imagination to foresee how easily a false alarm could prompt a globally catastrophic retaliatory strike—especially considering how often President Mr. Big Button lets it be known that he trusts Twitter and TV more than his own intelligence officers. In mulling over the multitude of nightmarish possibilities with a writer for The Atlantic, one arms control expert grimly concluded that “there is no fail safe against errors in judgment by human beings or the systems that provide early warning.”
Those systems are also made by human beings, of course. Many are tempted to think that the world would be safer if they weren’t—if only somehow the systems we depended upon could perfect their own design. Perhaps humans should not be trusted with something so important as crafting the warning system intended to safeguard the future of the species.
This reflexive distrust of the human mind is the conventional wisdom of our technological age. It’s most clearly evidenced in the present tech-industry infatuation with “artificial intelligence” startups. But almost all political decisions—from the construction of electoral districts to the drafting of political slogans to the deployment of military drones—now come before nominal human decision-makers only after they’ve been filtered through a computerized process. And despite the abundant evidence to the contrary, many will tell you this is a good thing. But after spending four years immersed in the madness of Silicon Valley for a book, I’ve come to a different conclusion. The great peril we face comes not from an over-reliance on human judgment, but from a distinct lack of it. Indeed, the most bone-jarring risks before us have less to do with human error than with engineering hubris—and that hubris has been synthesized into the uncritically celebrated discipline of computer science. Even more than “military intelligence,” computer science is an oxymoron.
The Call-Service Apocalypse

I’m not talking here of the “deep state” or the psyops confecting of a “fake news” campaign or “false flag” operation sprung on unwitting American netizens. The grinding rationalization of pre-existing power relations and bureaucratic prerogative at the heart of computer-science theorizing is nothing so dramatic. Nor is it secret.
No, the plodding, catechistic precepts of software design mainly serve to ratify the status quo drudgery of bureaucratic servitude—and indeed to elevate it into a theory of crudely incentivized mass deference. Strip away the nomenclature of cybernetic systems theory and software design, and you have something very close to the plot prospectus for a Philip K. Dick novel.
The basic setup is as follows: we’re victims of a terrible subroutine that’s been grinding away for more than seventy years within the larger program of capitalism. The imperative for economic efficiency has created irresistible incentives for the automation of thinking. Every decision that can be made in advance by managers, and reduced by engineers to a series of switches inside a computer program, has been dutifully ground down into a binary decision tree. All that’s left for humans is to make a selection from a drop-down menu.
Thus our day-to-day lives have come to resemble a call-center phone maze, defined by a wearying succession of false choices—all of them miserable—and eventually escalating toward a frustrating confrontation with a powerless authority figure. Inevitably, this person cannot help because “the computer won’t let me.” Their voice sounds human, but their words might as well belong to an AI program, smoothly reciting a self-preserving script amid the specter of apocalyptic ruin: I’m sorry, Dave. I’m afraid I can’t do that.
Biological evolution took sixty-five million years to produce the human brain. We outsourced that asset at the earliest opportunity. In a few short generations, we’ve reached such an atrophied mental state that nuclear geopolitics works exactly like the customer service department at Comcast or Wells Fargo.
Engineering Mindlessness

Do I exaggerate? Consider the bloodless, and clueless, way that the national security intelligentsia responded to the Hawaii fiasco. As islanders’ panic subsided, the retired U.S. Army Lt. Gen. Mark Hertling, a CNN commentator, publicly chastised U.S. Rep. Tulsi Gabbard, who broke protocol by notifying her constituents via Twitter that the missile alert was mistaken. “For the record, Congresswoman Gabbard inserting herself into the process . . . is NOT a good thing,” Hertling wrote. How could it be that the truth—that Hawaiians faced no danger—was too dangerous to speak?
Well, because such notifications fell well outside the boundaries of permissible official conduct, as modeled by the software. Hertling claimed that military simulations had shown that comparable political “interference” in such an emergency could result in tens of thousands of additional deaths. Significantly, the final authority in Hertling’s ideal scenario did not belong to elected officials who might “interfere” with “the process,” but to the process itself—a semi-automated, fully computerized system devised and controlled by the military-industrial elite.
How did we get to this seeming endpoint of utterly nonreflective computer agency? In the perfect world envisioned by Hertling and countless other natsec apparatchiks, the automation of judgment has become so thorough, and “the process” so holy, that those authorities will insist upon deference to instructions from a computer, even when they know the instructions were made in error. What sort of organization asks people to behave this way? No participatory government could withstand the sort of brainless obedience “the process” demands. However, it would certainly be expected inside any authoritarian cult.
The curious thing about the computer-science cult is that it was first consolidating in the face of a barrage of criticism warning against just this sort of outcome. In the middle of the last century, when cybernetic intelligence was still largely a drawing-board proposition, the inherent limitations of computers were better understood, even by members of the cult.
The late MIT professor Joseph Weizenbaum, who taught in the computer science department, understood the mulish shortcomings of the standard computer program better than most of his contemporaries. As a result, he ended his career a heretic and an outcast from the field. His first heresy was to insist that scientists, who had fallen in love with computers, didn’t really need them to do their jobs. For example, he noted, the scientists recruited into the Manhattan Project managed to invent the atomic bomb without the help of computers. Weizenbaum was sure, however, that if those same scientists did have access to computers at the time, they would have sworn the job was impossible. In other words, he grasped just how seamlessly the power of automation worked to indulge our laziest tendencies as a species.

[Illustration: Fortune Magazine, 1955]
Up the Academy

In Weizenbaum’s view, many in his field were no more than “tinkerers with techniques”—charlatans who had managed to associate themselves with science in order to “siphon legitimacy from the reservoir it has accumulated.” This insight seems shocking now, at a moment when computer science has casually annexed much of our common world: just because most scientists used computers didn’t make all computer users scientists.
Weizenbaum blamed “accidents of history” for the christening of computer science departments within academia. “All work done in such departments is indiscriminately called ‘science,’ even if only part of it deserves that honorable appellation,” he lamented. “Not everyone who calls himself a singer has a voice.”
As it happened, Weizenbaum got one key point wrong: the elevation of computer science was not an accident, but a deliberate branding decision made by veterans of the postwar military-industrial complex. These grey-suited gadget peddlers banded together to misappropriate the name of science in the service of sales. They needed a civilian market for their products, and so naturally gravitated toward a sector with deep pockets and establishment cachet: the university system.
The first big fish to take the bait was Stanford University provost Frederick Terman, a radio engineer by training and an administrator by ambition. Although Terman’s role in the development of early Silicon Valley has been overshadowed by contemporaries such as William Shockley (the notoriously racist physicist and semiconductor-company executive), Terman was arguably more important, given his talent for finding money—especially grant opportunities from the Defense Department.
In the early 1950s, Terman, then dean of engineering, convinced his Stanford superiors to set aside a large parcel of land as an “industrial park,” where private tech companies could enjoy favorable leases and access to university resources. Hewlett-Packard, cofounded by two of Terman’s students, was among the first companies to set up shop at Terman’s park, soon to become renowned as the center of Silicon Valley. His vision was to position Stanford as an “entrepreneurial university,” more responsive to the fleeting imperatives of money and power than to pedagogical traditions. In that respect, he was decidedly ahead of his time.
In addition to cutting real-estate deals, Terman also wanted to raise the profile of computers on campus. He’d been introduced to the devices by his former MIT doctoral adviser, Vannevar Bush, who was the government’s chief administrator on the Manhattan Project (and no relation to the presidential dynasty). It’s hard to imagine now, with men such as Elon Musk and Jeff Bezos having attained the status of billionaire demigods, but in the early 1950s, computer nerds were pretty much social nonentities. The gadgeteers lacked the standing of their more prestigious contemporaries—mathematicians wrestling with unsolved theorems, physicists who tackled cosmic problems by questioning the basic assumptions of our perceived reality, and social scientists who delved into the ambiguities of human structures and relationships. In those departments, computers were seen as mere tools—novelties, even. A department dedicated to computers made as much sense as a Department of Slide-Rule Studies.
But Terman was undaunted by such narrow thinking. He saw the pecuniary gains to be won by catering to the needs of the growing, government-backed high-tech industry. He also grasped that the study of computers lacked a certain gravitas. And so he decided to conjure the field’s animating mission—and far from coincidentally, its fundraising appeal—out of thin air, with the assistance of another bureaucratic visionary. In 1958, Terman commissioned a computer salesman named Louis Fein to study and report on the feasibility of launching a new university department devoted to computers.
Synnoetics on the Make

In reaching out to Fein, Terman had selected an ideal emissary for the comp-sci cult’s fundraising gospel. Fein was a former Raytheon engineer who’d worked on missile guidance systems during the war (yes, the cursed missile-tracking platform that triggered mass panic in Hawaii more than half a century later was present at the very creation of computer science). He’d gone on to found his own business based on Raytheon’s technology, called the Computer Control Company. Fein also worked as a consultant for the Stanford Research Institute, another early beachhead in the military-industrial march on American academia. The institute essentially laundered publicly funded research through its nonprofit status, prior to the work’s eventual patenting, privatization, and profiteering at the hands of savvy middlemen like Fein. Such institutes created a system whereby the taxpayers would pick up the bill for research leading to such innovations as the computer mouse and the internet, but the profits from their commercialization would accrue to a few lucky insiders.
Fein approached his task by reading narrowly and schmoozing widely. In 1961, he published a kind of manifesto in American Scientist magazine. Fein’s paper was framed as a fictional speech by a prestigious university president, set in the future year of 1975, and looking back upon the tremendous progress of “the computer-related sciences” in their long slog toward respectability. Fein’s narrator, a thinly veiled alter ego, proposed that the burgeoning field of “computer-related sciences” be further elevated with a new moniker, one scarcely heard since: “synnoetics.”
This new field, Fein insisted, was about more than computers—although “we were acutely aware of the public relations value of this word.” Synnoetics would encompass not only computers—which were “but one species of automata,” he wrote—but cybernetics, “intellectronics,” and other buzzwords that might as well have come from a Bay Area TED Talk circa 2010.
Still, Fein’s central neologism hinted at grander ambitions. The word synnoetics, “derived from the Greek, means pooling together the resources of the mind,” Fein explained. Synnoetic technologies would enhance man’s ability to solve problems—“to lift himself by his own bootstraps.” As he saw it, synnoetics was “supradisciplinary”—its purported power to improve human mental abilities placed it above other fields. (Here, Fein echoes the allied academic cult of neoliberal economics, which has tirelessly sought to promote itself to the credulous world at large as “the imperial science”—i.e., the discipline that effectively explains, and rules over, all others.)
As he laid out his vision, it became clear that Fein had grand designs for the spread of computer-driven inquiry throughout the known intellectual world. He gushed over how the computers would elevate the practice of “engineering, law, music, chemistry, physics, medicine, psychology, and other disciplines”—but especially “management and control.” Tellingly, the first example he concocted to demonstrate the power of applied synnoetics involved the deployment of robot Pinkertons to break a strike. “I am sure you all recall how the famous strike of 1970 was settled when one of our faculty mediators used an automaton to aid both parties in agreeing to what was at once an optimum settlement for both sides,” Fein wrote. There should be no mystery why Fein’s fantasies exerted instant and widespread appeal for administrators and executives.
The Boss’s Data

It’s important to recall, at this late date in the computerized enclosure of the American commons, that Fein was not proposing anything resembling the promiscuously mobile and networked computer scene of the twenty-first century. In the early postwar period, computers were rare, expensive, and so big they might take up entire rooms. But in his manifesto, Fein described an arrangement whereby these apparent disadvantages could be leveraged to the benefit of the computerized campus.
Universities could buy computers at a discount from the manufacturers, on the condition that they train a certain number of students, staff, and faculty in how to operate the devices—thus effectively covering the cost of workforce training for those companies. As an added sideline, universities could sell time on their fancy new machines to interested third parties, especially private companies. Adopting this model, the university computer lab immediately became a profit center, and faculty from other departments found themselves competing for resources with private companies that had become paying customers of the university.
Stanford established its division of computer science in 1963; it became a full-fledged department two years later. (Purdue’s computer science department came earlier, in 1962, also at Fein’s urging.) Fein took his show on the road and began pitching the lucrative new field to universities all over the United States and Europe. Other “entrepreneurial” institutions such as MIT hopped aboard the bandwagon, and as the Cold War escalated, computer science departments sprouted across the campuses of the land like poisonous mushrooms.
At least 295 U.S. colleges and universities offer degrees in computer science. Graduates number in the many tens of thousands each year, and their ranks swell with double-digit annual percentage growth rates as colleges embrace their new role as glorified job-training centers, and students flock to the promise of a secure career path. Politicians, too, have seized upon the purported value of computer mastery as the solution to all social and economic ills.
Coding with Impunity

The term synnoetics obviously never caught on. But Fein’s concept of computer science as a “supradiscipline” definitely did. Computer science is the most exalted field in the new academic paradigm of STEM supremacy. The profit-fueled fetish for “digital learning” has coincided with the chauvinist denigration of the humanities and social sciences. Computer skills have become synonymous with talent and ingenuity. And the occupation of programming, which in its earliest iteration carried the stigma of “women’s work,” has become a high-status, highly compensated, and highly male-dominated field. The tech bros are all ninjas and rock stars in their own minds and ours. The most powerful among them, like Google’s Sergey Brin, actually aspire to become immortal gods.
And yet the problems with computer science as any sort of credible stand-alone academic discipline are persistent and well known. A 2006 study by Michael J. Quinn, a computer science professor at Oregon State University, polled a sample of fifty accredited computer science programs to determine how they taught students about ethical issues—presuming, that is, they even bothered to try. Most gave ethics minimal consideration—a single credit hour’s worth, taught either by a professor inside the computer science department (unversed in ethics) or an outsider from the philosophy department (ignorant of computers). Despite two-plus decades of study by the National Science Foundation on the marked inadequacy of ethics instruction in the field, ethics and humanities education in computer science departments has scarcely improved, even as the tech industry has swallowed an ever-increasing share of the economy. Even the conservative Stanford Review, founded by the right-wing venture capitalist Peter Thiel when he was a student, last year complained that the university’s ethical instruction for computer science students was “insufficient.”
While the plaint about missing ethics curricula may seem like so much humanist caviling, it actually highlights a deep and abiding flaw in the conception of computer science. In practical terms, the omission of such instruction, or any other form of reflexive self-criticism within the field, means that mercenary military contractors funded the creation of a pedagogy without ethics, which supplied the labor for a tech industry without ethics, which powered the rise of state-sanctioned monopoly tech corporations that exercise unprecedented control over global markets and have intrusive access to all the digitized data of our lives. In a nightmare fulfillment of Fein’s original vision, the dogmas of this industry, branded as computer science, infect everything—even the proposed solutions to problems created by the industry.
Garbage In

Which opens, in turn, to an awkward question: What if the problems with “computer science” aren’t fixable? What if the real problem is that the field never deserved the respect it has obtained—or, more precisely, purchased? What if the early academic skeptics of “computer science,” who considered these devices to be mere tools—people like Weizenbaum, and his MIT colleague Norbert Wiener, a math professor who dismissed the computer obsessives as incurious “gadget worshippers”—were correct? In retrospect, it’s clear that their objections were never answered in any substantive fashion, but merely overruled by profit-minded administrators.
It’s now painfully clear that computer science is not actually a science, by the simplest definition of that word—a method of obtaining, organizing, and analyzing knowledge about the universe. Granted, computers may assist with the tasks of obtaining, organizing, and analyzing. But “computer science” as a specialized field of gadget-enabled inquiry is not concerned with the natural universe—it is, rather, engaged in exploring an entirely fabricated universe that exists inside the computer. By virtue of their influence in society as tycoons and technocrats, the computer scientists demand that we must adapt to fit their models.
Defenders of the field maintain that this myopic concentration of collective effort is a feature, not a bug—and, what’s more, that anything that exists outside the machine can be input and modeled inside it. In this respect, the gadget worshippers again invite comparison with their dismal cousins, the classical economists. Both disciplines draw conclusions from fabricated simulacra, models based on how they imagine things ought to work—rather than through patient, ongoing observation of how they actually do work. The universe is alive, but every computer algorithm is dead on arrival.
In sizing up the gross cognitive deficiencies of computer science, Weizenbaum went even further, noting that every computer system “permits the asking of only certain kinds of questions” and “accepts only certain kinds of ‘data.’” To create a computer program is not to enhance one’s mental abilities, as boosters like Fein claimed, but rather to restrict one’s options to a set of (always biased and often mistaken) assumptions. “A computing system has effectively closed many doors that were open before it was installed,” Weizenbaum wrote.
Because it is devoted to the creation of systems that limit choice, “computer science” is something more pernicious than a non-science—it is an outright enemy of scientific reasoning. As digitization has polluted our conception of reality by shifting our focus to inferior models, it has crippled our imaginations by restricting what we consider legitimate “input”: if it’s not online, it doesn’t exist.
Worse, as Weizenbaum noted, the sprawling complexity of any computer system meant that it “cannot even in principle be understood by those who rely on it.” How could true scientists put all their trust in tools they cannot explain?
Thinking Like a Data State

The uninterrogated premises of computer science have now worked, as critics like Weizenbaum foresaw, to concentrate lethal quantities of social and military power in the hands of dubiously accountable agents of the security state and the neoliberal political economy.
Now applied computer science concerns itself with the technological refinement of the police state, otherwise known as “cybersecurity.” In civilian commercial applications, the focus is much as Fein foresaw—with computer-powered startups concerned chiefly with the pillaging of labor. There’s little need for robot Pinkertons, because the bright minds of Silicon Valley have done their part to ensure that workers are so thoroughly atomized by the “gig economy” that organizing to make collective demands has become almost unimaginable.
And in a development that an arch-disruptor like Fein would have relished, the all-consuming supradiscipline of synnoetics has even begun to nibble at the belly of the universities that spawned it. Why bother funding traditional universities, with vestigial departments promoting obsolete subjects, when schools could be structured for the sole purpose of teaching people how to work computers? Hence the recent spread of for-profit, unaccredited, learn-to-code “boot camps,” where dislocated workers trade the skills of their former trades and crafts for a brighter future pushing buttons. There are at least ninety-five companies running such coding boot camps around the country, graduating nearly 23,000 students last year and charging an average of $11,400 in tuition fees for a typical fourteen-week course, according to Course Report, a startup that tracks and promotes the fast-growing industry. Many boot camps are pitched as socially beneficial worker-retraining programs. They’ve been so ineffective in that regard that at least one coding boot camp, targeting unemployed miners in Appalachia, has inspired a class-action lawsuit on the grounds that students were inadequately trained and didn’t receive their promised stipends. What’s more, the complaint reportedly alleges that not a single student found work in a tech job, although placement was “guaranteed.”
But even when such programs meet their promises, the enterprise remains a dubious one. Students pay tuition in order to learn how to write software that will one day take over their own jobs, without being taught to question why. Obedience is simply baked into the coders’ curriculum. Indeed, the very name of these courses—“boot camps”—recalls the martial origins of the industry.
The Great Dictators

Computer science education inevitably promotes authoritarianism. In the best-case scenario, graduates of these programs will go on to toil as “code monkeys” in the most despotic corners of capitalism—weapons manufacturing, robotics and AI, and finance. On a deeper level, they will absorb the innately authoritarian assumptions of the field. The binary worldview, with no tolerance for ambiguity, has created some disturbing mental excretions. Last year a college instructor in Boston shared with me the following note, written by a student, and apparently lost or discarded. The context of its creation is a mystery, but nevertheless, it represents a grim snapshot of our intellectual moment.
• Authoritarian leaders would be more effective for technology, engineering, and more scientific related companies because those are the kinds of jobs you are either right or wrong and involve the most centered and determined employees.
• Democratic leaders would fit inn [sic] in a more political and social environment because in this industry decisions affect more people and are better made with the opinions of a group of people.
Of course, it’s not only students who’ve intuited the integral connection between technology and authoritarianism, while suggesting that the former justifies the latter. Sam Altman, the reliably pompous president of the tech venture capital fund Y Combinator, went so far as to praise the ancillary benefits to “innovation” of China’s notoriously restrictive, censorship-addicted one-party system in a December blog post. “I realized I felt more comfortable discussing controversial ideas in Beijing than in San Francisco,” Altman wrote. “That showed me just how bad things have become” back home. Bad for who, though? Altman lamented that “credible people” in his circles had left the Bay Area because “they found the reaction to their work to be so toxic.” Not so in China! Techies there, freed from American-style “political correctness,” may gleefully explore such “heresies” as “pharmaceuticals for intelligence augmentation, genetic engineering, and radical life extension,” Altman gushed. Funny, he never mentioned the Chinese government’s Great Firewall—the world’s most comprehensive and effective system of internet censorship—or the tens of thousands of dissent-crushing online speech monitors it employs, or the policy of re-education through labor, which once upon a time would’ve sent quasi-libertarian tycoons like Altman into spasms of indignation. But hey, the Chinese government lets scientists clone primates, so who cares about the political prisoners? How quickly the techies’ righteous esteem for freedom of thought vanishes when the powers that be promise them new toys to play with.
This political tendency is one that dates to the early days of the field. In his most important book, Computer Power and Human Reason, published in 1976, Weizenbaum depicts the computer as a reactionary device. Since its invention, he wrote, the computer “was used to conserve America’s social and political institutions. It buttressed them and immunized them, at least temporarily, against enormous pressures for change.”
The conventional wisdom in his field held that society relied upon computers to solve increasingly complex problems created by burgeoning populations and new technologies—especially nuclear weapons. It was said that computers had arrived “just in time” to help capitalist society cope with rapidly increasing complexity. “Yes, the computer did arrive ‘just in time,’” Weizenbaum wrote. “But in time for what? In time to save—and save very nearly intact, indeed, to entrench and stabilize—social and political structures that otherwise might have been either radically renovated or allowed to totter under the demands that were sure to be made on them.”
Weizenbaum believed computers were standing in the way of necessary revolution. He grew disgusted by his colleagues’ amoral servility before power. And he was unwilling to let them off the hook for enabling monstrous abuses by the powers that be, especially warmongers like Robert McNamara, who carpet-bombed Southeast Asian peasants with statistical perfection. “The scientist and the technologist can no longer avoid the responsibility for what he does,” Weizenbaum wrote.
Weizenbaum was also among the first thinkers in the field to recognize that code was ideology. He saw computers as the natural product of an imperialistic process that had corrupted and “reduced reason itself to only its role in the domination of things, man, and, finally, nature.” In this flattened-out world of instrumental reason, every stroke of the keyboard is an offering to the war machine, and every swipe of the touchscreen is a little prayer of thanks to the Pentagon, which made it all possible.
Meanwhile, as Weizenbaum observed, computers served as dispensers of moral indulgences for powerful decision-makers. “The computer, as presently used by the technological elite, is not a cause of anything. It is rather an instrument pressed into the service of rationalizing, supporting, and sustaining the most conservative, indeed, reactionary, ideological components of the current Zeitgeist,” he wrote. Computerization meant that no one had any incentive to take responsibility for difficult decisions—and, by the same token, no one could be held accountable for bad ones. Sound familiar?
Toggling Toward Bethlehem

In the course of researching this story, I slogged through several hundred pages of federal technical manuals and how-to guides for states and localities interested in adopting the emergency management system created by President Bush’s executive order in 2006. As with any computer-driven process, the most important choices have already been made. The first order of business on one federal to-do list for local authorities is to go shopping—rather, to “select IPAWS compatible software.” As it happens, the feds publish a list of pre-approved vendors with off-the-shelf solutions, and the importance of “private sector partners” is frequently stressed in official materials. “It is clearly in the national interest to ensure private sector participation,” notes FEMA’s June 2010 Strategic Plan for the Integrated Public Alert and Warning System (IPAWS) Program. Incidentally, it turns out, FEMA will not provide technical support. Instead, local authorities are advised to “contact your vendor.”
Which is exactly what Hawaii state officials did after the false missile alarm of January 13. They were quick to publicize a short list of fast fixes, none of which were “ditch this stupid software and go back to using the telephone.” In the state’s final investigative report on the matter, conducted by a retired brigadier general in the Hawaii National Guard and released January 29, the employee who sent the alarm, initially described as an experienced veteran—who would not lose his or her job over the incident—was recast as a longstanding “source of concern” who had more than once seemed “confused” when it came to distinguishing drills and real emergencies. The employee was fired after all, and the agency administrator, Miyagi, also resigned. Heads rolled. Problem solved?
Until the next false alarm, we must reckon with the knowledge that the fragile, clumsy, harebrained system put in place to alert the public of impending nuclear disaster closely mirrors the antiquated, harebrained system that will be used to create a nuclear disaster. Thanks, computer science! The discipline has given us a system that can only ask, “Shall we launch the missiles now? Or shall we merely pretend to launch some missiles?” For all the trillions of dollars poured into military research and Silicon Valley solutions over the past seven decades, not a single member of that gadget-worshipping cabal has yet given the president a “peace” button to push.
That’s because peace, like every important problem we still face as a species, exceeds the conceptual scope of the servile tinkerers and their phony “science.” Louis Fein, the scapegoat of this story, disagreed, of course. In his defense, he seems to have meant well. “What the hell are we making these machines for, if not to free people?” he told a Time magazine reporter in 1965. The question came perhaps a little too late.
Programming for Peace

In 1963, a full year after the Cuban missile crisis, Fein published still another paper on what he saw as the boundless potential of computer science. In it, he proposed a six-phase program “on the prevention of nuclear war and the establishment of the basis for future peace on Earth.” True, war may have plagued every previous generation of humanity. But those people didn’t have computers—or the next best thing, computer consultants. “Imagine,” Fein began, “a management consulting and research type of organization called, say, the Universal Study Center for the Salvage and Reorganization of Institutions in Imminent Danger of Destruction Applying Computers Wherever Feasible (USCSRIIDDACWF).”
Inputs might include Christian teachings about universal love, as well as (pre-USSR) Marxist doctrines. Upon crunching the relevant data, Fein wrote, USCSRIIDDACWF’s computers would prescribe “an optimum Earth reorganization”—revolution at the push of a button.
If only his computers were programmed correctly, and supplied with the right sort of information, Fein imagined that they would produce invaluable “strategies and tactics” for the shift to a “democratic socialist society where the Augustinian slogan ‘from each according to his ability and to each according to his need’ would be the guiding policy for the prevention of war and the establishment of peace and prosperity.”
Fein understood that opposition to this program would be “all-pervasive.” He conceded that “obtaining moral and financial support . . . may be extremely difficult.” Maybe, he thought, the United Nations would help? Or perhaps the March of Dimes could be used as a fundraising model?
At last the great intellectual forefather of computer science dared to venture beyond the binary scheme of cognitive deference to synnoetics and pre-filtered menu choices, into the world that humans actually inhabit. And tellingly, he could only imagine funding and defending the effort, not from the largesse of the Cold War national security state, but via a door-to-door philanthropic appeal. His goals may have been noble, but how impoverished his imagination had grown by overexposure to that computerized milieu.
Unwittingly, Fein furnished a sort of parable for the digital age: the regime of maximally programmed deference to authority can’t magically remedy, by user command, all the many social pathologies that it has conspired to create. The problem is, no one’s about to learn that in any computer-science class.
# distributed via <nettime>: no commercial use without permission
# <nettime> is a moderated mailing list for net criticism,
# collaborative text filtering and cultural politics of the nets
# more info: http://mx.kein.org/mailman/listinfo/nettime-l
# archive: http://www.nettime.org contact: [email protected]
# @nettime_bot tweets mail w/ sender unless #ANON is in Subject: