The People Who Hated the Web Even Before Facebook

As the World Wide Web turns 30, a look back at its early skeptics

A very '90s Cisco booth at Comdex in 1999 (Pat Benic / Reuters)

Thirty years ago this week, the British scientist Tim Berners-Lee invented the World Wide Web at CERN, the European scientific-research center. Suffice it to say, the idea took off. The web made it easy for everyday people to create and link together pages on what was then a small network. The markup language was simple, and publishing was as painless as uploading a file with a few tags in it to a server.
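To give a sense of just how few tags were involved, here is a minimal, purely illustrative sketch of the kind of page an early author might have typed by hand and copied to a server (the title and wording are invented for the example; the link points to info.cern.ch, the address of the first website):

    <html>
      <head><title>My Research Notes</title></head>
      <body>
        <h1>Welcome</h1>
        <p>A few notes, written by hand and copied to the server.</p>
        <!-- A single anchor tag was enough to link to any other page on the network. -->
        <a href="http://info.cern.ch/">The first website, at CERN</a>
      </body>
    </html>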

There was real and democratic and liberatory potential, and so it’s not at all surprising that people—not least Berners-Lee himself—are choosing to remember and celebrate this era. This was the time before social media and FAANG supremacy and platform capitalism, when the internet was not nearly as dependent on surveillance and advertising as it is now. Attention was more widely distributed. The web broke the broadcast and print media’s hold on the distribution of stories. HTML felt like a revolution.

Not to everyone, though. Just a few years after the web's creation, a vociferous set of critics—most notably in Resisting the Virtual Life, a 1995 anthology published by City Lights Books—rose to challenge the ideas that underlay the technology, as earlier critics had done with earlier technologies. This wasn't the humbuggery of Clifford Stoll's Newsweek essay arguing that the internet basically sucked. These were deeper criticisms about the kind of society that was building the internet, and how the dominant values of that culture, once encoded into the network, would generate new forms of oppression and suffering, at home and abroad.

Resisting the Virtual Life assails “the new machinery of domination,” contemplates an “ungovernable world,” considers the discriminatory possibilities of data harvesting, catalogs the unfairness of gender online, examines the “masculinist world of software engineers,” laments the “reduction of public space,” speculates on “the shape of truth to come,” and even proposes a democratic way forward. Its essays foresaw the economic instability the internet might bring, how “the cult of the boy engineer” would come to pervade everyone’s life, and the implications of the creation of huge amounts of personal data for corporate processing. “What could go wrong with the web?” the authors asked. The answer they found was: A lot. They called themselves “the resistance.”

This was before Jeff Bezos was the richest man in the world. It was before Facebook, before the iPhone, before Web 2.0, before Google went public, before the dot-com bust, before the dot-com bubble, before almost anyone outside Finland texted. Eighteen million American homes were “online” in the sense that they had America Online, Prodigy, or CompuServe, but according to the Pew Research Center, only 3 percent had ever seen the web. Amazon, eBay, and Craigslist had just launched. But the critiques in Resisting the Virtual Life are now commonplace. You hear them about Facebook, Amazon, Google, Apple, the venture-backed start-up ecosystem, artificial intelligence, self-driving cars—even though the internet of 1995 bears almost no resemblance, technically or institutionally, to the internet of 2019.

Marc Andreessen, then the co-founder and vice president of technology for Netscape (Reuters)

Maybe as a major technological movement begins to accelerate—but before its language, corporate power, and political economics begin to warp reality—a brief moment occurs when critics see the full and awful potential of whatever’s coming into the world. No, the new technology will not bring better living (at least not only that). There will be losers. Oppression will worm its way into even the most seemingly liberating spaces. The noncommercial will become hooked to a vast profit machine. People of color will be discriminated against in new ways. Women will have new labors on top of the old ones. The horror-show recombination of old systems and cultures with new technological surfaces and innards is visible, like the half-destroyed robot face of Arnold Schwarzenegger in Terminator 2.

Then, if money and people really start to pour into the technology, the resistance will be swept away, left dusty and coughing as what gets called progress rushes on.


In the post-2016 world of the left, socialism is back and computers are bad. But computers have been bad before, and, not coincidentally, at moments when various socialisms were popular.

Long before the internet and Resisting the Virtual Life, people fought the very idea of computers—mainframes, initially—beginning with the 1960s student movements. It wasn’t pure Luddism; computers were, quite literally, war machines. At Stanford, then a hotbed of radicalism, students staged sit-ins and occupied administration buildings. Even as the Vietnam War ebbed, many on the left worried that technology in the form of computerization and automation was destroying working-class jobs, helping bosses crush unions, and making work life worse for those who remained employed.

Offutt Air Force Base computers (Library of Congress)

But as the 1970s crept into the 1980s, some of the military-industrial stink began to wear off. A computer that spit out Vietnam War predictions for Robert McNamara was one thing, but what about a network of computers that let anyone wander a digital frontier, associating with whomever they wanted beyond national borders or established identities? The meaning of networked computing began to change. These 1s and 0s could be bent to freedom.

“To a generation that had grown up in a world beset by massive armies and by the threat of nuclear holocaust, the cybernetic notion of the globe as a single, interlinked pattern of information was deeply comforting: in the invisible play of information, many thought they could see the possibility of global harmony,” wrote Fred Turner in From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism.

Turner’s book begins with a question: “How did the cultural meaning of information technology shift so drastically” from the Vietnam War protest days to the beginning of the dot-com boom? And his answer is that a set of figures in the Bay Area, led by Stewart Brand, who founded the Whole Earth Catalog, transformed the notion of computers from military-industrial infrastructure to personal tool through the 1970s.

Brand positioned these technologies as boons not for bureaucrats calculating missile trajectories, but for hacker-freaks planning winning video-game maneuvers. In Rolling Stone, he declared the arrival of computing "good news, maybe the best since psychedelics."

It helped that the United States had entered a period the historian Daniel Rodgers has called “the age of fracture.” American institutions, collectivities, and solidarities broke down in favor of a wildly individualist model of consumer action. “One heard less about society, history, and power and more about individuals, contingency, and choice,” Rodgers wrote. “The importance of economic institutions gave way to notions of flexible and instantly acting markets.”

The world was a place for individuals to make choices, and what they needed to make better choices was more information. Information was stored in computers; therefore, networking individuals to those computers, and through them to one another, would put better information in people's hands and lead to new forms of collective action.

Apple and its charismatic salesman Steve Jobs were there to commercialize this new idea of the computer. Liberal technology enthusiasts such as Al Gore and conservative technology enthusiasts such as Newt Gingrich joined the movement to forge a new consensus: the only role of government in the industry would be to create an environment friendly to the development of internet businesses and niche communities alike.

So when Berners-Lee wrote his 1989 proposal for the web, the world was ready. Tellingly, an institutional breakdown motivated his desire for a hypertext system. People kept leaving CERN and taking information with them. Organizational memory was lacking. At the same time, systems for creating that memory required that people agree to certain hierarchies of information and keyword taxonomies, which they were loath to do. His answer to this problem became a radically individual one: Anyone could create a page and link to anything. Get enough people doing so, and the flywheel of content creation would keep spinning. No institutions required. No holds barred. This was personal freedom, as implemented in a network protocol.

Tim Berners-Lee talks about the World Wide Web in 1998. (Pierre Virot / Reuters)

The internet’s early proponents saw all this potential. They gushed in the pages of Wired, headquartered south of Market Street, in San Francisco, long a down-and-out part of town. But across Market and up Columbus Avenue, in the heart of North Beach, where aging beatniks still had some small purchase, the poets and writers of City Lights Bookstore were not swayed.


“There are alternatives to the capitalist utopia of total communication, suppressed class struggle, and ever-increasing profits and control that forgets rather than resolves the central problems of our society,” wrote James Brook and Iain Boal, the editors of Resisting the Virtual Life. Those problems were obvious: “people sorted into enclaves and ghettos, growing class and racial antagonisms, declining public services (including schools, libraries, and transportation), unemployment caused by automation and wandering capital, and so on.”

And yet, for most people, the personal computer and the emerging internet obscured the underlying structural forces of society. “‘Personal’ computers and CD-ROMs circulate as fetishes for worshipers of ‘the free market’ and ‘the free flow of information,’” Brook and Boal wrote.

They knew they were up against "much—one could say 'everything'—" in trying to assemble the resistance to the explosion of the internet. Their goal was not necessarily to win, but rather to "address an almost unnameable object—'information age,' 'information superhighway,' 'cyberspace,' 'virtuality,' and the proliferating variants—from a critical democratic perspective."

It’s almost like they wanted to mark for future generations that there were people—all kinds of different people—who saw the problems. “Resisting the Virtual Life intends to provide correctives more profound than those generated by the cybernetic feedback mechanisms of ‘the marketplace of ideas,’” the editors wrote, “where scandalous defects are always answered by pseudocritiques that assure us that all is well, except for the inevitable bugs that the system itself will fix.”

A typical Silicon Valley office-park scene (Alexis Madrigal)

The essays in the book are uneven, as you might expect. But some of them are stunningly prescient. In "It's the Discrimination, Stupid!" which reads as a prequel to 2018's The Age of Surveillance Capitalism, the University of Southern California professor Oscar H. Gandy Jr. argues that "personal information is used to determine our life chances in our role as citizens as well as in our lives as employees and consumers." In a powerful reflection on the landscape of Silicon Valley, Rebecca Solnit concludes that, as a place, it is a nowhere, but one linked by supply chains to changes across the globe. The University of California at San Diego communications professor and media critic Herbert Schiller points out how the internet could weaken nation-states while transnational corporations grow stronger. Could electronically enhanced armies hold the people down, he writes, "while privately initiated economic forces are contributing to wildly disproportionate income distribution and gravely distorted resource utilization, locally and globally?"

A CIA website for kids from 1998 (Reuters)

And Ellen Ullman, who has continued to critique the tech world from the inside, might have offered the most incisive critique of how the human desire for convenience would shape the way technology was seen. "The computer is about to enter our lives like blood in the capillaries," she wrote. "Soon, everywhere we look, we will see pretty, idiot-proof interfaces designed to make us say, 'OK.'"

The on-demand economy would rule. “We don’t need to involve anyone else in the satisfaction of our needs,” she wrote. “We don’t even have to talk.” Smuggled inside these programs would be “the cult of the boy engineer,” “alone, out-of-time, disdainful of anyone far from the machine.”

If these critics from another era seem to have cataloged all that could go wrong, they also had a sense that things could go differently. The writer and historian Chris Carlsson, for example, saw hope in the organizing potential of online communities. “The threads of subversion we weave so quietly today must find their way to transform the self-destructive, brutal, and dehumanizing lives we lead at work, at school, and in the streets,” he wrote. “The trust we place in electronic links must again find a common home among our social links, until electronic ‘experiences’ take their rightful place as supplements to a rich, varied human life.”

Then again, he acknowledged that “it’s easier to imagine a lot of empty pointless verbiage flying around the electronic world, matched only by the piles of data gathered by our corporate and governmental institutions.”


Since the 2016 election—both its course and its outcome—Americans have been engaged in a newfound struggle to understand how the internet has changed their country and the world. It's simply not possible to celebrate the birth of the web without acknowledging that the cyber-utopia never arrived. Look at many tech titans' behavior over the past few years and you will see both scandalous defects and "pseudocritiques that assure us all is well."

In this long moment of reevaluation, the industry and its products have come under attack from almost every angle: inside and outside, local and global, economic and social, legislative and rhetorical, capitalist and socialist. That hasn't stopped the profits from raining down. The biggest tech companies are among the top 10 most valuable companies in the world. According to one ranking of brand equity, the four strongest brands in the world are Apple, Google, Amazon, and Microsoft. But once the consensus dropped away that internet technology was equal to progress, people inside and outside the industry found an unending fount of questionable practices. People treat their phones as they once treated cigarettes.

As diagnoses are reached and suggestions made, these early critiques are worth remembering precisely to inoculate ourselves against nostalgia for an earlier time. The seeds of our current technologically mediated problems were already obvious in 1995 to critics with the eyes to see them.

Examining the history of the web might yield a better internet in the future, but not by looking only at what we loved about the early days. The premises of the early web—valorizing individual choice, maximalist free speech, and dispersed virtual networks; ignoring institutional power and traditional politics—might require revision to build a new, prosocial web.

Alexis Madrigal is a contributing writer at The Atlantic and the host of KQED’s Forum.