Peer-to-peer was born to distribute: resources, control, access. The AI of recent years tends instead toward concentration: huge models, expensive infrastructure, centralized platforms hosting both compute and data. These are two forces pulling in opposite directions, and the tension between them is perhaps the most interesting political question of the moment.

The original philosophy of P2P

P2P was never just a technique. It is a proposal about the distribution of power: instead of a center serving many peripherals, have many participants who together are the system. Whoever contributes resources is entitled to the services. Whoever uses the services contributes resources. The network belongs to no one because it belongs to everyone who composes it.

It is a philosophy that values autonomy, resilience, access. It reduces dependence on a few actors. It distributes costs and responsibilities. It is not perfect — P2P has problems with coordination, quality, incentives — but it starts from an intuition that remains powerful.

Where AI is going today

Modern AI, in its most visible form, does the opposite. Large-scale models require enormous GPU clusters, amounts of data that only a few organizations can collect, and investments that shift the equilibrium toward a small number of actors. Inference happens mostly on remote servers, accessed via APIs. The end user is a client who sends prompts and receives answers.

This model has undeniable advantages: result quality, speed, ease of use. But it has the same shape as many of the centers P2P sought to challenge: few providers, many users, little capacity to verify what happens at lower layers.

The P2P promise: autonomy, resilience, access

Revisiting the P2P framework helps us see what we risk losing if we let AI centralize without alternatives:

  • autonomy: being able to run a model without asking permission from an API;
  • resilience: not depending on a single provider who can disappear, change prices, change policies;
  • access: ensuring that those who can’t pay for luxury infrastructure aren’t excluded;
  • verifiability: being able to inspect weights, architectures, datasets;
  • privacy: keeping sensitive data local instead of sending it to a third party.

Where a more distributed AI can grow

It is not science fiction: there are already concrete directions where P2P and AI are meeting.

  • Open weight models. Weights released with licenses that allow local execution, fine-tuning, redistribution. They become reusable primitives for the entire community.
  • Local inference. Consumer hardware increasingly capable of running significant models without calling a remote server. The modern equivalent of "the node on your PC".
  • Edge AI. Distributed intelligence on devices — smartphones, sensors, gateways — processing locally instead of sending everything to the cloud.
  • Federated learning. Models that learn without centralizing data, aggregating updates that stay on user devices.
  • P2P networks for compute. Experiments in sharing GPUs among peers, distributing training or inference across volunteer nodes, mirroring torrent mechanisms applied to compute.
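Federated learning, from the list above, rests on a simple aggregation idea: clients train on their own data and share only model updates, which a coordinator averages. A minimal sketch of that averaging step (FedAvg-style), with illustrative names and a toy local update in place of real training:

```python
from typing import List
import numpy as np

def local_update(weights: np.ndarray, data: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """Toy local step: nudge the weights toward the mean of the client's data.
    A real client would run several epochs of SGD on its private dataset;
    only the updated weights leave the device, never the data itself."""
    gradient = weights - data.mean(axis=0)
    return weights - lr * gradient

def federated_average(client_weights: List[np.ndarray], client_sizes: List[int]) -> np.ndarray:
    """Aggregate client models, weighting each by its dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# One round: the coordinator broadcasts a global model, each client refines
# it on local data, and the coordinator averages the resulting updates.
global_model = np.zeros(3)
client_data = [np.random.default_rng(i).normal(i, 1.0, size=(20, 3)) for i in range(3)]
updates = [local_update(global_model, d) for d in client_data]
global_model = federated_average(updates, [len(d) for d in client_data])
```

The privacy property comes from the protocol shape, not the math: the coordinator only ever sees weight vectors, one per client, never the rows of data behind them.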

P2P returns to ask the same old question, but about a new object: who really controls the infrastructure that is becoming the substrate of our digital lives?
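The "torrent mechanisms" the list alludes to boil down to content addressing: split a file, in this case model weights, into chunks, publish the hash of each chunk, and let any peer verify what it downloads from untrusted nodes. A minimal sketch, with illustrative names and a deliberately tiny chunk size:

```python
import hashlib

CHUNK_SIZE = 4  # tiny for the example; real systems use 256 KiB or more

def manifest(blob: bytes, chunk_size: int = CHUNK_SIZE) -> list[str]:
    """The hash list published alongside the weights: the 'torrent file'."""
    chunks = [blob[i:i + chunk_size] for i in range(0, len(blob), chunk_size)]
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def verify_chunk(index: int, data: bytes, hashes: list[str]) -> bool:
    """A peer checks a chunk received from an untrusted node against the manifest."""
    return hashlib.sha256(data).hexdigest() == hashes[index]

weights_blob = b"fake model weights"
hashes = manifest(weights_blob)
assert verify_chunk(0, weights_blob[:CHUNK_SIZE], hashes)  # honest peer
assert not verify_chunk(0, b"tampered!", hashes)           # corrupted chunk
```

Because verification needs only the small, signed manifest, the heavy bytes can come from anyone, which is exactly what makes the distribution peer-to-peer rather than provider-to-client.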

The risk: a new monopoly

It must be said honestly: centralizing AI is not an accident, it is a direction that suits many. Economies of scale, network effects, data lock-in, very high entry barriers — everything pushes toward a few dominant platforms. If today we’re talking about "who trained the model", tomorrow we’ll be talking about "who owns the interface through which we access every digital service". It is a qualitative leap in power.

The risk is not trivial. A fully centralized AI is not just a competition problem: it is an epistemic diversity problem. If the mediation of nearly all content goes through a few models trained by a few actors, their choices — conscious or not — weigh enormously on the way we think.

From passive consumption to active participation

The lesson P2P brings into the AI era is the same as thirty years ago: what separates a healthy system from a fragile one is not its power but its structure. An AI spread across many verifiable variants, able to run on hardware within everyone’s reach, with readable updates and datasets, not necessarily routed through a single provider, is slower to build — but also much more resilient.

It is not about choosing between "centralized AI yes / P2P AI no". It is about not accepting that the only possible model is the centralized one. P2P is a technical and cultural tradition that tells us: it can be done differently, it has been done differently, and sometimes it has worked surprisingly well.

The same question, again

Peer to peer is not a museum piece. It is a way of thinking that comes back every time a new center forms. With the web, with payments, with storage, now with artificial intelligence. Every generation faces the same choice: a network governed by a few centers or a network made of relationships among peers? Answering it is, literally, building the future.