For many people, "peer to peer" evokes shared folders, late-night downloads and endless catalogs. But reducing P2P to file sharing is like describing a highway by talking only about vacation trips. Beneath the surface, peer to peer is an architectural choice: it determines where power concentrates, where costs are paid, and where the network breaks when something goes wrong. The IETF’s RFC 5694 defines P2P not as a single protocol but as a family of architectures.

What is a centralized network

To understand P2P you have to start from its opposite. In a centralized network everything goes through a server: authentication, content, permissions. Clients connect, ask, receive. It’s an efficient model for many use cases, but it has a structural cost: the center has to bear everyone’s load, pay for bandwidth, stay online. If it goes down, everything goes down.
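
To make the asymmetry concrete, here is a minimal Python sketch; the names (CentralServer, Client) and the in-memory "network" are illustrative, not any real protocol:

```python
# A toy model of a centralized network: all state and all load
# live in one place, and clients can only ask the center.
class CentralServer:
    def __init__(self):
        self.online = True
        self.content = {"song.mp3": b"...bytes..."}

    def handle(self, request: str) -> bytes:
        if not self.online:
            raise ConnectionError("the center is down, so everything is down")
        return self.content[request]


class Client:
    """A client holds no content; it can only connect, ask, receive."""
    def __init__(self, server: CentralServer):
        self.server = server

    def fetch(self, name: str) -> bytes:
        return self.server.handle(name)


server = CentralServer()
alice, bob = Client(server), Client(server)
print(alice.fetch("song.mp3"))   # works while the center is up

server.online = False            # the single point of failure
try:
    bob.fetch("song.mp3")
except ConnectionError as e:
    print(e)                     # the whole service is gone for everyone
```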

Whoever controls the server controls the service. It’s a technical detail with political consequences: rules, filters, prices and terms of service are the center’s prerogative. Peripheral nodes can only accept or leave.

What changes when nodes are "peers"

In peer to peer every node is potentially both client and server: it requests resources from others but also offers its own. It’s a symmetry that sounds trivial yet changes everything, as the sketch after this list suggests:

  • there is no single point of failure;
  • the more users join the network, the more resources become available;
  • bandwidth, storage and compute costs are distributed;
  • resilience does not depend on a single actor.
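
A minimal sketch of that symmetry, with hypothetical names (Peer, serve, fetch) and an in-memory network standing in for real connections: the same object plays both roles.

```python
# A toy peer: the same object both offers resources and requests them.
class Peer:
    def __init__(self, name: str, shared: dict[str, bytes]):
        self.name = name
        self.shared = dict(shared)       # what this node serves to others
        self.neighbors: list["Peer"] = []

    def serve(self, key: str) -> bytes | None:
        return self.shared.get(key)      # acting as a server

    def fetch(self, key: str) -> bytes | None:
        for peer in self.neighbors:      # acting as a client
            data = peer.serve(key)
            if data is not None:
                return data
        return None


a = Peer("a", {"doc.txt": b"hello"})
b = Peer("b", {"pic.png": b"\x89PNG"})
a.neighbors.append(b)
b.neighbors.append(a)

# Symmetry: each node is simultaneously supplier and consumer,
# and every new peer adds capacity instead of only adding load.
print(a.fetch("pic.png"))   # b serves a
print(b.fetch("doc.txt"))   # a serves b
```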

It’s a radical idea because it overturns a model of authority: no longer "users consuming a service" but participants who together are the service.

P2P is a logic of distributed coordination: every time a community wants to reduce its dependence on an intermediary, this same shape re-emerges.

Why the Internet makes P2P possible

The Internet was born with a distributed soul. The IP protocol does not really know "servers" and "clients": it knows addresses, packets, routes. At the network layer, any two machines can talk to each other if they know each other’s address. The client-server model is an application convention, not a constraint of the network. P2P leverages exactly this openness: it builds, on top of TCP/IP, logics in which roles are interchangeable.
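
A small, runnable illustration of that point: one Python process takes both roles over plain TCP on localhost (the port number is arbitrary), because nothing at the transport layer forces a machine to be only a client or only a server.

```python
import socket
import threading

# "Server" role: bind and listen first, so the connection below cannot race.
srv = socket.create_server(("127.0.0.1", 9100))

def accept_once() -> None:
    conn, _ = srv.accept()
    with conn:
        conn.sendall(b"roles are an application convention")

threading.Thread(target=accept_once, daemon=True).start()

# The very same process now plays the "client" role toward its own address.
with socket.create_connection(("127.0.0.1", 9100)) as sock:
    print(sock.recv(1024).decode())
srv.close()
```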

It’s no coincidence that modern P2P explodes when home bandwidth grows and millions of computers stay on long enough to be useful as nodes. Without that infrastructural condition, the idea would have remained a lab experiment.

Napster as a cultural turning point in 1999

In 1999 Shawn Fanning released Napster. Technically it is not pure P2P: the file index is centralized; only the content travels directly between users. Yet perception changes forever. Tens of millions of people experience, for the first time, a network where content sits not on someone’s server but on the disks of their network neighbors.
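
A sketch of the hybrid shape, not of Napster’s actual protocol: a central Index (a hypothetical name) answers "who has what", while the bytes move directly between UserNode peers.

```python
# The hybrid model: a central index knows WHO has WHAT,
# but the content itself travels peer to peer.
class Index:
    def __init__(self):
        self.who_has: dict[str, list["UserNode"]] = {}

    def register(self, node: "UserNode") -> None:
        for name in node.files:
            self.who_has.setdefault(name, []).append(node)

    def search(self, name: str) -> list["UserNode"]:
        return self.who_has.get(name, [])


class UserNode:
    def __init__(self, addr: str, files: dict[str, bytes]):
        self.addr, self.files = addr, files

    def download_from(self, other: "UserNode", name: str) -> bytes:
        return other.files[name]   # direct transfer, index not involved


index = Index()
seeder = UserNode("10.0.0.2", {"track.mp3": b"ID3..."})
leecher = UserNode("10.0.0.7", {})
index.register(seeder)

# Lookup is centralized (a single point of failure and of control)...
sources = index.search("track.mp3")
# ...but the content moves directly between the two peers.
print(leecher.download_from(sources[0], "track.mp3"))
```

Shutting down the index kills search even though every file is still sitting on someone’s disk: exactly the lever the lawsuits would later pull.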

Napster’s turning point is not so much the code: it’s the gesture. The download becomes an exchange, the catalog becomes collective, distribution is no longer a privilege of those who own infrastructure. The music industry’s reaction would turn it into a legal case before a technological one.

From hybrid to fully decentralized P2P

After Napster, the question becomes: can we remove the central index too? Projects like Gnutella, born in 2000, try precisely this path: no search server; queries propagate from node to node, each peer forwarding them to its neighbors until a time-to-live expires. The price of this freedom is efficiency, since every search floods a neighborhood of the network; the gain is resilience: shutting down a single machine no longer kills the network.
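
A toy version of flooded search in the spirit of early Gnutella (the real protocol has message types, hop counters and more): each node forwards a query to its neighbors until the TTL runs out, which is exactly where the efficiency cost comes from.

```python
# Toy flooded search: no index server; a query spreads hop by hop
# until its time-to-live (TTL) is exhausted.
class Node:
    def __init__(self, name: str, files: set[str]):
        self.name, self.files = name, files
        self.neighbors: list["Node"] = []

    def query(self, wanted: str, ttl: int, seen: set[str] | None = None) -> list[str]:
        seen = seen if seen is not None else set()
        if self.name in seen or ttl < 0:
            return []                    # already visited, or out of hops
        seen.add(self.name)
        hits = [self.name] if wanted in self.files else []
        for n in self.neighbors:         # flood the query onward
            hits += n.query(wanted, ttl - 1, seen)
        return hits


a, b, c, d = Node("a", set()), Node("b", set()), Node("c", {"file"}), Node("d", {"file"})
a.neighbors, b.neighbors, c.neighbors = [b], [c, d], [d]

# Resilience: no machine is special, so killing one node cannot kill
# the search. Cost: every query touches many nodes along the way.
print(a.query("file", ttl=2))   # ['c', 'd'] are reachable within two hops
```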

It’s in this passage that P2P stops being a trick for downloading and becomes an architectural proposal. The rest of the story, from BitTorrent and eMule to Bitcoin and IPFS, starts here.

The lingering question

Every generation of networks faces the same fork: center or peers? It’s a choice that returns with email, with the web, with streaming, with AI. Peer to peer is not a relic of the 2000s: it’s a way of thinking that comes back every time someone asks who is really in charge of the network we use.