In 1983, a programmer at MIT’s Artificial Intelligence Laboratory typed a message that would reverberate through computing history. Richard M. Stallman announced his intention to create a complete, Unix-compatible operating system that would be entirely free—free not just in price, but in the fundamental sense that users could study it, modify it, share it, and build upon it without restriction. He called it GNU, a recursive acronym meaning “GNU’s Not Unix.”
This wasn’t merely a technical project. It was a declaration of war against the emerging proprietary software industry, a philosophical manifesto disguised as a systems programming initiative, and ultimately, a movement that would reshape how software is created, distributed, and owned. But to understand why Stallman felt compelled to undertake this seemingly quixotic quest, we must first understand the world he was rebelling against—a world that had only recently transformed from open collaboration to locked-down ownership.
The story of GNU is inseparable from the story of Unix, and both are fundamentally human stories about collaboration, betrayal, idealism, and commerce.
The Unix Genesis: The Culture That GNU Would Emulate
Bell Labs and the Birth of Collaborative Computing
In the late 1960s, Bell Telephone Laboratories—the research arm of AT&T—was one of the world’s premier industrial research facilities. Its halls hosted Nobel laureates and pioneering computer scientists, operating under a mandate to pursue fundamental research that might, someday, prove commercially valuable. The culture was collaborative, academic, and relatively unconcerned with immediate profit.
In 1969, Ken Thompson, a researcher at Bell Labs, found himself with an underutilised DEC PDP-7 minicomputer and three weeks whilst his wife and young son were on holiday. Thompson had previously worked on Multics, an ambitious time-sharing operating system being developed jointly by Bell Labs, MIT, and General Electric. Bell Labs had recently withdrawn from the project, deeming it too complex and expensive. But Thompson missed the productive programming environment Multics had provided.
During that three-week period, Thompson wrote a simple operating system for the PDP-7. He implemented a kernel, shell, editor, and assembler—the core components needed for productive work. The system was elegant and minimalist, reflecting Thompson’s aesthetic: simple tools that could be combined in powerful ways. The system came to be called Unix (originally spelled “Unics”, a pun on Multics generally credited to Brian Kernighan: where Multics aimed for multiplicity, Unix aimed for unity).
Dennis Ritchie, Thompson’s colleague and collaborator, soon joined the effort. Together, they made a decision that would prove historically momentous: they rewrote Unix in a high-level programming language rather than assembly code. Ritchie created the C programming language specifically for this purpose, and by 1973, Unix was almost entirely written in C.
This decision had profound implications. Assembly code was specific to particular hardware; C code could, in principle, be recompiled for different machines. Unix became portable—a revolutionary concept when operating systems were typically tied to specific hardware. More importantly, C code was readable. Programmers could study Unix’s source code and understand how it worked, a transparency that would prove crucial.
The Academic Community and Unix’s Golden Age
AT&T, operating under antitrust regulations, couldn’t sell computer systems commercially. Consequently, Bell Labs distributed Unix to universities for minimal cost—essentially the price of media and shipping. The source code was included. Universities weren’t merely users; they were collaborators.
This created an extraordinary educational and research environment. At the University of California, Berkeley, students and faculty began enhancing Unix, adding virtual memory support, networking capabilities, and improved file systems. The Berkeley Software Distribution (BSD) became an influential Unix variant, pioneering technologies like TCP/IP networking that would prove foundational to the internet.
Computer science departments worldwide ran Unix. Graduate students studied its source code, learned from its elegant design patterns, and contributed improvements. A culture developed—one that treated software as a form of scholarship, where sharing knowledge and building collaboratively were the norms.
Richard Stallman entered this world in 1971, joining MIT’s Artificial Intelligence Laboratory as a staff programmer whilst still an undergraduate at Harvard. The AI Lab embodied the collaborative ethos even more thoroughly than most academic institutions. Programmers shared code freely, modified each other’s programs, and built a substantial ecosystem of tools and utilities. Proprietary software—code with restrictions on modification or sharing—was virtually unknown. This wasn’t ideological; it was simply how things were done.
As Stallman would later write:
“We did not call our software ‘free software’, because that term did not yet exist; but that is what it was. Whenever people from another university or a company wanted to port and use a program, we gladly let them.”
The Fall: Commercialisation and the Closing of Code
The Xerox Laser Printer Incident
The transformation Stallman experienced—from a world of sharing to one of secrecy—came in stages, each one eroding the collaborative culture he valued. One incident, though small in itself, became emblematic.
In the late 1970s, the MIT AI Lab acquired a laser printer from Xerox. The lab’s previous printer had been programmable; Stallman had written software to notify users when their print jobs completed or when the printer jammed. This simple enhancement dramatically improved productivity—programmers could focus on their work rather than repeatedly checking the printer.
The new Xerox printer, however, came without source code. When Stallman requested the software to add similar functionality, Xerox refused. The code was proprietary—a trade secret. This seemingly minor restriction represented a profound shift. The printer, which should have served the lab’s needs, instead dictated terms. The lab’s programmers, among the world’s most skilled, couldn’t improve their own tools.
Later, Stallman encountered a programmer from Carnegie Mellon who had the printer’s source code but had signed a non-disclosure agreement. Despite Stallman’s request, the programmer refused to share, honouring his legal commitment to Xerox over his colleague’s legitimate need.
This incident crystallised something for Stallman. The collaborative culture was being deliberately dismantled. Non-disclosure agreements—once rare in academic computing—were becoming commonplace, transforming colleagues into adversaries, each constrained from helping the other.
The Collapse of the AI Lab Community
The erosion accelerated through the late 1970s and early 1980s. Companies began recruiting aggressively from academic labs, and the MIT AI Lab was a prime target. Symbolics and Lisp Machines Inc. (LMI)—two companies commercialising Lisp machine technology developed at MIT—hired away many of the lab’s most talented programmers.
These companies didn’t just hire people; they claimed ownership of code. Software developed collaboratively at MIT became proprietary products. Former colleagues couldn’t share improvements with each other; they worked under non-disclosure agreements and proprietary licences.
By 1982, the AI Lab’s hacker community had largely dissolved. The PDP-10 computers—around which the community had coalesced—were becoming obsolete. The lab purchased newer equipment, but it came with proprietary software and restrictive licences. The culture of sharing and collaboration was being systematically replaced by one of ownership and restriction.
Stallman faced a choice. He could accept this new world—sign non-disclosure agreements, work on proprietary software, participate in the industry’s transformation. Many of his colleagues made this choice, often reluctantly, seeing it as inevitable.
Or he could resist.
Unix Itself Turns Proprietary
The final blow came from Unix itself. In 1982, AT&T settled the antitrust suit that would break up the Bell System, and with divestiture came the freedom to sell software commercially. The company that had once distributed Unix to universities at nominal cost now asserted full proprietary control. Unix System V, released in 1983, was a commercial product with expensive licences and legal restrictions on modification and redistribution.
Universities that had been Unix collaborators became Unix customers. The source code that had been an educational resource became a trade secret. Students who had learned operating system design by studying and modifying Unix could no longer do so legally without expensive licences.
Berkeley’s BSD Unix would itself face legal challenges: in the early 1990s, AT&T’s Unix System Laboratories sued over BSD-derived code, and the years of litigation chilled academic Unix development. The collaborative culture that had made Unix great was being destroyed by the very legal instruments designed to protect it as property.
For Stallman, this represented a kind of theft—not of physical property, but of the collaborative culture and shared knowledge that had made computing productive and intellectually exciting. Proprietary software wasn’t merely a business model; it was an attack on the scientific and engineering communities.
The Rebellion: Richard Stallman’s Radical Response
The Man and His Convictions
To understand the GNU project, one must understand Richard Matthew Stallman—a figure as controversial as he is consequential. Brilliant, uncompromising, and possessed of absolute certainty about ethical matters, Stallman doesn’t merely disagree with proprietary software; he views it as morally wrong, a form of subjugation.
Stallman was an exceptional programmer even by MIT’s demanding standards. He had written the original Emacs, a highly regarded text editor implemented as a set of macros for the TECO editor, and had contributed substantially to the AI Lab’s software infrastructure. But his defining characteristic wasn’t technical skill—it was moral absolutism.
Where others saw business opportunities or inevitable market forces, Stallman saw ethical imperatives. He didn’t accept that software restrictions were merely unfortunate; he argued they were unjust. This wasn’t pragmatism about licensing models; it was philosophy about human freedom and cooperation.
In September 1983, Stallman announced the GNU project on several Usenet newsgroups with characteristic directness:
“Starting this Thanksgiving I am going to write a complete Unix-compatible software system called GNU (for Gnu’s Not Unix), and give it away free to everyone who can use it.”
The announcement continued with both technical plans and philosophical justification. GNU would be Unix-compatible because Unix’s design was sound and because compatibility would allow users to switch easily. But it would be entirely free—not just available at no cost, but free in the sense that users could study, modify, and share it.
The Four Freedoms
Stallman would later formalise his philosophy into the “Four Freedoms” that define free software:
- Freedom 0: The freedom to run the program as you wish, for any purpose
- Freedom 1: The freedom to study how the program works, and change it to make it do what you wish
- Freedom 2: The freedom to redistribute copies so you can help your neighbour
- Freedom 3: The freedom to distribute copies of your modified versions to others
These freedoms weren’t primarily about economics; they were about power and autonomy. Proprietary software, in Stallman’s view, gave developers power over users. Free software ensured users controlled their computing.
This framework would prove remarkably influential. It provided moral clarity—software was either free or non-free, liberating or restricting. There was no middle ground in Stallman’s philosophy, and this absolutism would be both strength and weakness.
The GNU Manifesto
In March 1985, Stallman published the GNU Manifesto, a document that combined technical planning with radical political and ethical arguments. The Manifesto addressed anticipated objections systematically:
“Won’t programmers starve?” Stallman argued that free software doesn’t preclude compensation—programmers could sell copies, provide customisation, teach classes, or offer support. The issue wasn’t payment, but restrictions.
“Don’t developers deserve rewards?” Stallman contended that deserving rewards doesn’t justify restricting others. If contribution deserves reward, society can arrange payment without imposing restrictions that harm users.
“Won’t removing ownership remove incentives?” Stallman argued that many incentives exist beyond ownership—scientific curiosity, desire to help others, professional reputation. The proprietary model had existed for barely a decade; the collaborative model had centuries of scientific precedent.
The Manifesto remains a remarkable document—simultaneously practical and utopian, addressing mundane technical details alongside fundamental questions about property, freedom, and cooperation.
Building GNU: The Long March to Freedom
Starting with the Tools
Stallman’s strategy was methodical. Rather than writing a kernel first, he began with the tools programmers need most: compilers, debuggers, editors. These tools could run on existing Unix systems, immediately providing value whilst the complete GNU system remained under development.
The first major project was GNU Emacs, released in 1985. Emacs was far more than a text editor—it was an extensible programming environment, customisable through a Lisp interpreter. Stallman had written the original EMACS at MIT; GNU Emacs was a complete rewrite, but one that embodied his design philosophy.
GNU Emacs succeeded spectacularly. It was powerful, extensible, and free. Programmers worldwide adopted it, often as their primary development tool. This created a constituency for the GNU project—users who had experienced the practical benefits of free software and understood Stallman’s vision.
The GNU C Compiler
In 1987, the GNU project released the GNU C Compiler (GCC). This was strategically crucial. A compiler is foundational—you need a compiler to build everything else, including more compilers. GCC’s existence meant the GNU project could potentially bootstrap itself, building a complete system using only free tools.
GCC was technically impressive. It supported multiple programming languages (C at first, with C++, Objective-C, and many more to follow) and multiple hardware architectures. It competed successfully with proprietary compilers, often producing better-optimised code.
More importantly, GCC became infrastructure. Operating system developers, researchers, and commercial entities adopted it. By the early 1990s, GCC was among the most widely used compilers in the world. This gave the free software movement legitimacy and demonstrated that the collaborative development model could produce industrial-strength software.
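The portability at the heart of GCC’s success is easy to picture: the same C source builds unchanged for any supported architecture, with only the compiler invocation differing. A minimal sketch (the file name and the cross-compiler prefix in the comments are illustrative; actual toolchain names vary by target):

```c
/* hello.c: buildable with GCC on any supported architecture
 * without source changes.
 *
 * Native build:   gcc -O2 -o hello hello.c
 * Cross-build (illustrative; prefixes vary by toolchain):
 *                 arm-linux-gnueabi-gcc -O2 -o hello hello.c
 */
#include <stdio.h>

int main(void)
{
    printf("Hello from GNU\n");  /* identical behaviour on every target */
    return 0;
}
```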
The Freedom-Preserving Licence
Stallman faced a paradox: how to ensure software remained free when anyone could take it, modify it, and redistribute it under proprietary terms? Nothing prevented a company from taking GNU software, adding proprietary extensions, and selling the result with restrictions.
The solution was the GNU General Public Licence (GPL), first released in 1989. The GPL used copyright law—the same legal mechanism that enabled proprietary restrictions—to enforce freedom. It granted all four freedoms unconditionally, but imposed one requirement: if you distributed modified versions, you must distribute them under the same licence, with the same freedoms.
This “copyleft” mechanism was ingenious. It prevented the proprietary appropriation that had killed previous collaborative efforts. Code licensed under the GPL would remain free, even as it spread and evolved. Companies could use GPL software, even commercially, but they couldn’t lock it down.
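In practice, copyleft is applied with a short notice at the top of each source file, alongside a full copy of the licence. The FSF’s recommended wording runs along these lines (shown here in a header comment for a hypothetical C file; the text below is the version-3 notice, whilst the original GNU programs used GPL versions 1 and 2):

```c
/* frob.c: part of a hypothetical GPL-covered program.
 *
 * Copyright (C) <year>  <name of author>
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <https://www.gnu.org/licenses/>.
 */
```

Anyone receiving the file receives the freedoms with it; anyone redistributing a modified version must pass the same notice, and the same freedoms, along.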
The GPL proved controversial. Some argued it was too restrictive, preventing legitimate business models. Others celebrated it as necessary protection against exploitation. But its impact was undeniable—it became one of the most widely used software licences and the legal foundation for much of the free software movement.
The Free Software Foundation
In 1985, Stallman founded the Free Software Foundation (FSF) as the organisational home for the GNU project. The FSF’s purpose was threefold:
- Development: Employ programmers to work on GNU software
- Legal: Maintain the GPL and defend software freedom legally
- Philosophical: Promote understanding of free software principles
The FSF gave the movement institutional stability. It could accept donations, employ developers, and speak with organisational authority. Stallman served as president, providing ideological direction whilst others managed operations.
The Foundation published the GNU Manifesto, maintained a catalogue of free software, and provided legal resources for developers. It became the movement’s public face, articulating Stallman’s philosophy to broader audiences whilst the GNU project continued technical development.
The Missing Piece: The Kernel Problem
Everything Except the Kernel
By the early 1990s, the GNU project had achieved remarkable success. It had created:
- GCC: A world-class compiler supporting multiple languages and platforms
- GNU Emacs: A powerful, extensible editor
- GDB: A sophisticated debugger
- GNU Make: A build automation tool
- Bash: A Unix shell (Bourne Again SHell)
- GNU Coreutils: Essential command-line utilities (ls, cp, rm, etc.)
- Glibc: A C standard library
Collectively, these tools provided most of what one needed for a complete operating system. There was just one critical problem: no kernel. Without a kernel—the core program managing hardware, processes, and resources—these tools couldn’t function independently. They ran on Unix and Unix-like systems, ironically depending on the proprietary software they sought to replace.
The HURD: Ambition Meets Reality
The GNU project’s kernel was called the HURD, a name built from a pair of mutually recursive acronyms: HURD stands for “Hird of Unix-Replacing Daemons”, whilst HIRD stands for “Hurd of Interfaces Representing Depth”. The HURD adopted a microkernel architecture, running many services as user-space processes rather than in the kernel itself.
This architectural choice was philosophically and technically motivated. Microkernels were theoretically more robust and secure than monolithic kernels, isolating faults and allowing individual services to crash without bringing down the entire system. More importantly for GNU’s philosophy, they aligned perfectly with the commitment to user freedom—services running in user space could be more easily replaced, modified, and understood by users. The architecture embodied the very transparency and modularity that free software championed.
But microkernels proved extraordinarily difficult to implement well. The HURD project, begun in 1990, struggled with complexity. Inter-process communication overhead created performance problems—every interaction between services required expensive context switches and message passing. The Mach microkernel they initially used had its own issues, including bloat and performance penalties that contradicted Unix’s minimalist philosophy. Progress was slow, delayed by both technical challenges and the small number of developers willing to tackle such ambitious low-level work.
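The overhead is easy to illustrate. Where a monolithic kernel satisfies a request in a single system call, a microkernel turns it into a message to a server process and a reply back: two kernel crossings, two context switches, and extra data copies. A toy sketch of that round trip, using ordinary Unix pipes to stand in for Mach messages (a deliberately simplified model, not HURD code):

```c
/* ipc_roundtrip.c: a toy model of a microkernel service call.
 * The "server" is a separate process, so one logical operation
 * costs two message hops; a monolithic kernel would handle the
 * same request in a single system call.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int to_server[2], to_client[2];
    if (pipe(to_server) < 0 || pipe(to_client) < 0) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                       /* server process */
        char req;
        read(to_server[0], &req, 1);      /* hop 1: receive request */
        char reply = req + 1;             /* "handle" the request */
        write(to_client[1], &reply, 1);   /* hop 2: send reply */
        _exit(0);
    }

    /* client: each hop below implies a context switch and a copy */
    char req = 41, reply;
    write(to_server[1], &req, 1);
    read(to_client[0], &reply, 1);
    printf("reply: %d\n", reply);         /* prints "reply: 42" */

    wait(NULL);
    return 0;
}
```

Multiply that pattern across every file read, every network packet, and every page fault, and the performance problem the HURD developers faced becomes concrete.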
By 1991—eight years after Stallman’s initial announcement—the GNU project had created an impressive suite of tools but still lacked a working kernel. The HURD was in early development but far from ready. The movement needed something to make the vision real, to demonstrate that a completely free operating system was possible.
That something would come from an unexpected source—not from MIT or the FSF, but from a 21-year-old student in Finland.
The Unexpected Catalyst: Linux and the Completion of the Vision
Linus Torvalds and a “Hobby Project”
In August 1991, Linus Torvalds, a computer science student at the University of Helsinki, posted a message to comp.os.minix:
“I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones.”
Torvalds had been experimenting with Minix, a small Unix-like system created by Andrew Tanenbaum for teaching. Frustrated by Minix’s limitations and licensing restrictions, Torvalds began writing his own kernel. The system became known as Linux, a blend of Linus and Unix. (Torvalds had proposed the name Freax; Ari Lemmke, who administered the FTP server that first hosted the code, named its directory linux instead, and the name stuck.)
Crucially, Torvalds released Linux under the GPL. His earliest versions had carried a home-grown licence forbidding commercial redistribution, but within months he switched to the GPL. The decision, initially pragmatic rather than ideological, meant Linux could be legally combined with GNU tools. By early 1992, developers were building complete operating systems using the Linux kernel and GNU software.
The combination was transformative. Suddenly, all the tools the GNU project had developed over eight years had a free kernel to run on. Users could install a completely free operating system—no proprietary components required. The GNU project’s vision was realised, though not entirely as Stallman had planned.
GNU/Linux: A Naming Controversy
Stallman insists the system should be called “GNU/Linux,” crediting both the kernel and the tools that make it usable. Torvalds and much of the user community simply call it “Linux.” This naming dispute reflects deeper tensions.
From Stallman’s perspective, calling the system “Linux” erases the GNU project’s contributions and obscures the free software philosophy that motivated it. The kernel, whilst important, is just one component; the compilers, shell, utilities, and libraries came from years of GNU development.
From Torvalds’ perspective, and that of many developers, “Linux” is simply the name the community adopted. Torvalds himself was less concerned with philosophical purity than with building good software. He valued freedom, but as a pragmatist rather than an absolutist.
This tension—between Stallman’s ideological purity and the broader community’s pragmatic focus—would characterise the free software and open-source movements going forward.
The Legacy: How GNU Changed Everything
The Free Software Movement’s Influence
The GNU project and Free Software Foundation fundamentally changed software development. Before GNU, collaborative development existed, but it lacked a theoretical framework and legal protection. Stallman provided both.
The GPL demonstrated that copyleft could work. It protected collaborative work from proprietary appropriation whilst allowing commercial use. Thousands of projects adopted the GPL, creating a body of software that had to remain free.
The Four Freedoms articulated principles that resonated beyond software. They influenced open-access publishing, Creative Commons licensing, and broader debates about digital rights and ownership. Stallman’s insistence that software freedom was an ethical issue, not merely a technical or economic one, shifted the discourse.
GNU’s Technical Contributions
Beyond philosophy, GNU’s technical contributions were substantial:
- GCC became one of the world’s most important compilers, supporting numerous languages and platforms. It’s used to build operating systems, embedded systems, and countless applications
- GNU Emacs demonstrated the power of extensible software, influencing editor design for decades
- The GNU toolchain (compiler, linker, debugger, build tools) became standard development infrastructure
- Bash became the default shell for most Linux distributions and, until 2019, macOS
- GNU Coreutils provide essential functionality for Unix-like systems worldwide
These weren’t merely adequate replacements for proprietary tools—they often became the preferred implementations, used even by developers with access to commercial alternatives.
The Open Source Schism
In 1998, a group including Eric Raymond and Bruce Perens began promoting the term “open source” (coined by Christine Peterson) as an alternative to “free software.” They argued that “free software” confused people (free as in freedom vs. free as in price) and that emphasising practical benefits rather than ethics would appeal more to businesses.
Stallman rejected this framing. For him, the issue was fundamentally about freedom and ethics, not merely practical advantages. Calling it “open source” obscured the philosophical core.
The split created two movements with overlapping goals but different emphases. Open source advocates highlighted technical benefits, business opportunities, and development methodologies. Free software advocates emphasised user freedom, ethical computing, and resistance to proprietary control.
In practice, the movements coexist. Most free software is also open source, and vice versa. But the philosophical divide remains—a testament to Stallman’s insistence that how we frame technology matters as much as the technology itself.
Modern Implications
Today, free and open-source software dominates vast areas of computing:
- The internet runs primarily on free software—Apache web servers, Linux operating systems, and countless tools and libraries
- Cloud computing infrastructure relies heavily on open-source technologies
- Scientific research increasingly uses and produces free software
- Artificial intelligence development depends on open-source frameworks and libraries
- Most smartphones run Android, an operating system built on the Linux kernel and other free software
The proprietary software that Stallman rebelled against still exists and often dominates consumer computing. But the collaborative, freedom-respecting development model he championed has proven extraordinarily productive and resilient.
Reflections: The Revolution That Succeeded
The GNU project began with one man’s refusal to accept that software—a purely intellectual creation, infinitely replicable at zero marginal cost—should be locked behind legal restrictions that prevented sharing and collaboration. Richard Stallman’s response was both radical and utterly practical: recreate Unix, piece by piece, without proprietary constraints.
That vision succeeded, though not exactly as planned. The GNU tools combined with Torvalds’ Linux kernel to create a complete free operating system. The GPL provided legal protection that enabled sustainable collaborative development. The Free Software Foundation institutionalised the movement and articulated its principles.
But the success goes deeper than creating an operating system or even a development model. Stallman’s insistence that software freedom was an ethical issue, not merely a technical preference, changed how people thought about code, ownership, and collaboration. The idea that users deserve control over their computing, that sharing is often better than hoarding, and that collaboration can be protected legally—these concepts now seem obvious, but they weren’t in 1983.
The tension between Stallman’s absolutism and the broader community’s pragmatism continues. His uncompromising stance on ethical matters alienates potential allies and sparks endless debates. Yet that same uncompromising quality made the GNU project possible. A more moderate figure might have sought accommodation with the proprietary software industry. Stallman chose revolution.
As we navigate contemporary debates about digital rights, privacy, and corporate control of computing, the GNU project’s history offers lessons. Fundamental change sometimes requires uncompromising vision. Legal tools can protect collaborative cultures. And sometimes the seemingly impossible—replacing an entire operating system to preserve freedom—proves entirely achievable when motivated by clear ethical principles and sustained effort.
The story of GNU is ultimately a story about refusing to accept that the world’s trajectory is inevitable, that commercial interests must override collaborative values, or that individual programmers can’t challenge entire industries. Richard Stallman looked at the transformation of computing from open collaboration to proprietary control and simply said: “No.”
That simple refusal, backed by years of work and unwavering conviction, helped reshape the digital world we inhabit today.
Postscript: Where They Are Now
Richard Stallman remains the free software movement’s most prominent advocate: celebrated by some as a visionary defender of digital rights, criticised by others for inflexibility and problematic statements on various subjects. He resigned as president of the Free Software Foundation in 2019, though he later returned to its board of directors. His influence on computing is undeniable, even amongst those who disagree with his methods or conclusions.
Linus Torvalds continues to maintain the Linux kernel, coordinating contributions from thousands of developers worldwide. Linux has become the dominant operating system for servers, embedded systems, and mobile devices. Torvalds’ pragmatic, engineering-focussed approach contrasts with Stallman’s ideology, but both have proven essential to the free software ecosystem.
The GNU HURD remains under development, though it has never achieved production readiness. It exists as a testament to both the difficulty of kernel development and the philosophical commitment to architectural purity over pragmatic compromise.
The Free Software Foundation continues its work, maintaining the GPL, supporting GNU development, and advocating for software freedom. It remains influential, though sometimes controversial, in debates about digital rights and computing freedom.
The GNU tools continue to evolve, maintained by communities of developers and used by millions worldwide. GCC, Emacs, Bash, and the GNU toolchain remain foundational infrastructure for modern computing.
The revolution Stallman began in 1983 didn’t end with the creation of a free operating system. It continues in every project licensed under the GPL, every developer choosing freedom over restriction, and every user exercising the four freedoms. The battle Stallman identified—between software freedom and proprietary control—remains ongoing, fought now over cloud computing, artificial intelligence, and new frontiers of digital technology.
But one thing is certain: the world where proprietary Unix destroyed the collaborative culture of the 1970s no longer exists. In its place is a world where free software is not merely possible, but prevalent—a testament to the power of clear ethical vision combined with sustained, collaborative effort.