
P2P redirects here. For the telecommunications term PTP, see Point-to-Point.
P2P can also stand for Pay-to-play in gaming.

A peer-to-peer (or P2P) computer network is a network that relies on the computing power and bandwidth of its participants rather than concentrating them in a relatively small number of servers. P2P networks are typically used to connect nodes via largely ad hoc connections. Such networks are useful for many purposes. Sharing content files (see file sharing) containing audio, video, data, or anything else in digital format is very common, and real-time data, such as telephony traffic, is also passed using P2P technology.

A pure peer-to-peer network does not have the notion of clients or servers, but only equal peer nodes that simultaneously function as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client-server model, where communication is usually to and from a central server. A typical example of a non-peer-to-peer file transfer is an FTP server, where the client and server programs are quite distinct: the clients initiate downloads and uploads, and the servers react to and satisfy these requests.
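The symmetry of the pure model can be sketched with a toy example, with in-memory objects standing in for networked peers; the class and method names here are purely illustrative, not any real protocol. Each peer both answers requests (the "server" role) and issues them (the "client" role):

```python
# Minimal sketch (illustrative, not a real network protocol): each Peer
# object plays both roles, answering requests from other peers and
# issuing requests to them.

class Peer:
    def __init__(self, name, files=None):
        self.name = name
        self.files = dict(files or {})   # filename -> content

    # Server role: respond to a request from another peer.
    def handle_request(self, filename):
        return self.files.get(filename)

    # Client role: ask another peer for a file and store the result locally.
    def fetch(self, other, filename):
        content = other.handle_request(filename)
        if content is not None:
            self.files[filename] = content
        return content

alice = Peer("alice", {"song.ogg": b"..."})
bob = Peer("bob")
bob.fetch(alice, "song.ogg")    # bob acts as client, alice as server
alice.fetch(bob, "song.ogg")    # the roles reverse freely
```

In a real network each `fetch` would be a socket connection and each peer would run `handle_request` behind a listening port, but the dual client/server role is the same.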

Some networks and channels, such as Napster, OpenNAP, or IRC @find, use a client-server structure for some tasks (e.g., searching) and a peer-to-peer structure for others. Networks such as Gnutella or Freenet use a peer-to-peer structure for all purposes, and are sometimes referred to as true peer-to-peer networks, although Gnutella is greatly facilitated by directory servers that inform peers of the network addresses of other peers.

Peer-to-peer architecture embodies one of the key technical concepts of the Internet, described in the first Internet Request for Comments, "RFC 1, Host Software" [1], dated 7 April 1969. More recently, the concept has achieved wide prominence among the general public in the context of the absence of central indexing servers in architectures used for exchanging multimedia files.

Operation of peer-to-peer networks

Three major types of P2P network are:

Pure P2P:

  • Peers act as both clients and servers
  • There is no central server
  • There is no central router

Hybrid P2P:

  • Has a central server that keeps information on peers and responds to requests for that information.
  • Peers are responsible for hosting the information (as the central server does not store files), for letting the central server know what files they want to share, and for uploading their shareable resources to peers that request them.
  • Route terminals are used as addresses, which are referenced by a set of indices to obtain an absolute address.

Mixed P2P:

  • Has both pure and hybrid characteristics
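The hybrid model's division of labour (metadata at the centre, files at the edge) can be sketched as follows; the `IndexServer` class is a hypothetical illustration, not any specific network's protocol. The central server stores only which peer advertises which file, while the files themselves stay on the peers and transfers happen peer-to-peer:

```python
# Hedged sketch of the hybrid model: the central index stores only
# *metadata* (which peer advertises which file); file contents never
# pass through it.

class IndexServer:
    def __init__(self):
        self.index = {}                 # filename -> set of peer names

    # A peer tells the server what it is willing to share.
    def register(self, peer_name, filenames):
        for f in filenames:
            self.index.setdefault(f, set()).add(peer_name)

    # A searching peer asks which peers hold a given file.
    def search(self, filename):
        return sorted(self.index.get(filename, set()))

index = IndexServer()
index.register("alice", ["talk.mp4", "notes.txt"])
index.register("bob", ["talk.mp4"])
print(index.search("talk.mp4"))     # both peers advertise the file
```

After the search, the requesting peer would contact "alice" or "bob" directly to download, which is why the central server's failure stops searching but not transfers already in progress.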

Advantages of peer-to-peer networks

An important goal in peer-to-peer networks is that all clients provide resources, including bandwidth, storage space, and computing power. Thus, as nodes arrive and demand on the system increases, the total capacity of the system also increases. This is not true of a client-server architecture with a fixed set of servers, in which adding more clients could mean slower data transfer for all users.

The distributed nature of peer-to-peer networks also increases robustness in case of failures by replicating data over multiple peers, and -- in pure P2P systems -- by enabling peers to find the data without relying on a centralized index server. In the latter case, there is no single point of failure in the system.
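The robustness benefit of replication can be made concrete with a back-of-the-envelope calculation, under the simplifying assumption that peers go offline independently (which real networks only approximate): data replicated on k peers is unreachable only when all k replicas are down at once.

```python
# Illustrative availability model (assumes independent peer failures):
# with each replica offline with probability q, a file held by k
# replicas is unreachable with probability q ** k.

def unavailability(q, k):
    return q ** k

# Even with unreliable peers (offline half the time), a few replicas
# make the data very likely to remain reachable.
for k in (1, 3, 5):
    print(k, unavailability(0.5, k))
```

With q = 0.5, unavailability drops from 0.5 with one copy to about 0.03 with five, which is why replication compensates for individually unreliable peers.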

When the term peer-to-peer was used to describe the Napster network, it implied that the peer protocol nature was important, but, in reality, the great achievement of Napster was the empowerment of the peers (i.e., the fringes of the network) in association with a central index, which made it fast and efficient to locate available content. The peer protocol was just a common way to achieve this.

Academic peer-to-peer network

Recently, developers at Pennsylvania State University, in conjunction with the Massachusetts Institute of Technology Open Knowledge Initiative, researchers at Simon Fraser University, and the Internet2 P2P Working Group, have been working on an academic application of the peer-to-peer network. This project, referred to as LionShare, is based on a second-generation network, more specifically the Gnutella model. The main purpose of the network is to share academic material between users at many different academic institutions. The LionShare network is based on a hybrid model that mixes the Gnutella decentralized peer-to-peer network with a more traditional client-server network. Users of the program are able to upload files to a server where they can be shared continuously, regardless of whether or not the user is online. The network also allows for much smaller sharing communities than is typical of public peer-to-peer networks.

The main difference between this network and virtually all other peer-to-peer networks is the fact that the users of LionShare will not be anonymous. The purpose of this is to deter the sharing of copyrighted material over the network, and thus avoid legal issues. Another difference is the ability to selectively share individual files with specific groups. A user is able to select on an individual basis which users are able to receive an individual file or group of files.

This technology is needed in the academic community because of the use of more and larger multimedia files in the classroom setting. More and more professors are using multimedia files such as audio, video, and PowerPoint presentations. Transferring these files to students is a difficult task that would be made much easier by a network such as LionShare.

Legal controversy

Under US law, the Betamax decision holds that copying technologies are not inherently illegal if substantial non-infringing use can be made of them. This decision, predating the widespread use of the Internet, applies to most data networks, including peer-to-peer networks, since distribution of correctly licensed files can be performed. These non-infringing uses include sending open-source software, public-domain files, and out-of-copyright works. Other jurisdictions tend to view the situation in somewhat similar ways.

In practice, many, often most, of the files shared on peer-to-peer networks are copies of copyrighted popular music and movies in a wide variety of formats (MP3, MPEG, RM, etc.). Sharing of these copies is illegal in most jurisdictions. This has led many observers, including most media companies and some peer-to-peer advocates, to conclude that the networks themselves pose grave threats to the established distribution model. Research that attempts to measure actual monetary loss has been somewhat equivocal: whilst on paper the existence of these networks results in massive losses, actual income does not seem to have changed much since these networks started up. Whether the threat is real or not, both the RIAA and the MPAA now spend large amounts of money lobbying lawmakers for the creation of new laws, and some copyright owners pay companies to help legally challenge users engaging in illegal sharing of their material.

In spite of the Betamax decision, peer-to-peer networks themselves have been targeted as a potential threat by representatives of those artists and organizations who license their creative works, including industry trade organizations such as the RIAA and MPAA. The Napster service was shut down by an RIAA lawsuit; in that case, Napster had been deliberately marketed as a way to distribute audio files without permission from the copyright owners.

As media companies' actions against copyright infringement expand, the networks have quickly adapted and become both technologically and legally more difficult to dismantle. This has caused the users who are actually breaking the law to become targets, because while the underlying technology may be legal, its abuse by individuals redistributing content in a copyright-infringing way clearly is not.

Anonymous peer-to-peer networks allow for distribution of material - legal or not - with little or no legal accountability across a wide variety of jurisdictions. Many profess that this will lead to greater or easier trading of illegal material and even (as some suggest) facilitate terrorism, and call for its regulation on those grounds. Others counter that the potential for illegal uses should not prevent the technology from being used for legal purposes, that the presumption of innocence must apply, and that non peer-to-peer technologies like e-mail, which also possess anonymizing services, have similar capabilities.

Computer science perspective

Technically, a completely pure peer-to-peer application must implement only peering protocols that do not recognize the concepts of "server" and "client". Such pure peer applications and networks are rare. Most networks and applications described as peer-to-peer actually contain or rely on some non-peer elements, such as DNS. Also, real world applications often use multiple protocols and act as client, server, and peer simultaneously, or over time. Completely decentralized networks of peers have been in use for many years: two examples are Usenet (1979) and FidoNet (1984).

Many P2P systems use stronger peers (super-peers or super-nodes) as servers, with client-peers connected to a single super-peer in a star-like fashion.

In the late 1990s, Sun added classes to the Java technology to speed the development of peer-to-peer applications, so that developers could build decentralized real-time chat applets and applications before instant-messaging networks were popular. This effort is now being continued with the JXTA project.

Peer-to-peer systems and applications have attracted a great deal of attention from computer science research; prominent research projects include the Chord project, the PAST storage utility, P-Grid (a self-organizing overlay network), and the CoopNet content distribution system (see below for external links related to these projects).
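To give a flavour of this line of research, the core idea behind Chord-style lookup can be sketched in a few lines. This is a deliberate simplification: the real protocol also maintains finger tables for O(log n) routing and handles nodes joining and leaving. Both node identifiers and keys are hashed onto a circular identifier space, and a key belongs to its "successor", the first node clockwise from the key's position:

```python
# Simplified consistent-hashing sketch of Chord-style key placement.
# Real Chord adds finger tables and churn handling on top of this idea.

import hashlib

RING_BITS = 16                      # small ring for illustration

def ring_hash(s):
    """Hash a string onto the identifier ring [0, 2**RING_BITS)."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % (2 ** RING_BITS)

def successor(key, node_ids):
    """First node id clockwise from the key's position on the ring."""
    h = ring_hash(key)
    candidates = sorted(node_ids)
    for n in candidates:
        if n >= h:
            return n
    return candidates[0]            # wrap around the ring

nodes = [ring_hash(f"node-{i}") for i in range(5)]
owner = successor("song.ogg", nodes)
```

Because every peer can compute the same hash, any peer can determine which node is responsible for a key without consulting a central index, which is exactly the property that removes the single point of failure.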

Attacks on Peer-to-peer networks

Many peer-to-peer networks are under constant attack by people with a variety of motives.

Examples include:

  • poisoning attacks (providing files whose contents are different than the description)
  • denial of service attacks (attacks that may make the network run very slowly or break completely)
  • defection attacks (users or software that make use of the network without contributing resources to it)
  • insertion of viruses into carried data (e.g., downloaded or carried files may be infected with viruses or other malware)
  • malware in the peer-to-peer network software itself (e.g., the software may contain spyware)
  • filtering (network operators may attempt to prevent peer-to-peer network data from being carried)
  • identity attacks (e.g., tracking down the users of the network and harassing or legally attacking them)
  • spamming (e.g., sending unsolicited information across the network, not necessarily as a denial-of-service attack)

Most attacks can be defeated or controlled by careful design of the peer-to-peer network and through the use of encryption. P2P network defense is in fact closely related to the "Byzantine Generals Problem". However, almost any network will fail when the majority of the peers are trying to damage it, and many protocols may be rendered impotent by far smaller numbers of malicious peers.
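As one concrete illustration of such a defence, content hashing blunts poisoning attacks: if a trusted index distributes a cryptographic digest alongside each filename, a downloader can reject any payload whose digest does not match. This is a sketch only; a real network must also authenticate the digest itself, or a poisoner could simply advertise a digest of the fake file.

```python
# Sketch of hash-based download verification against poisoning attacks.
# Assumes the advertised digest comes from a trusted source.

import hashlib

def digest(data):
    return hashlib.sha256(data).hexdigest()

def verify_download(expected_digest, received_data):
    """Accept the download only if its digest matches the advertised one."""
    return digest(received_data) == expected_digest

original = b"genuine file contents"
advertised = digest(original)

print(verify_download(advertised, original))              # True
print(verify_download(advertised, b"poisoned payload"))   # False
```

Encryption and digests of this kind address poisoning and tampering, while the social attacks in the list above (defection, identity attacks) need incentive and anonymity mechanisms instead.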

Networks, protocols and applications


  • network/protocol: list of applications using that network (operating system)

All networks and protocols are listed in alphabetical order, except that very similar applications are grouped into one entry with the most important one first; that application determines the entry's place in the list.

An earlier generation of peer-to-peer systems was called "metacomputing" or classed as "middleware". These include Legion, Globus, Condor, and ByteTornado.

Multi-network applications

Format: application (networks/protocols) (operating systems) (open source?)

External links


  • Ross J. Anderson. The Eternity Service. In Pragocrypt 1996, 1996.
  • Stephanos Androutsellis-Theotokis and Diomidis Spinellis. A survey of peer-to-peer content distribution technologies. ACM Computing Surveys, 36(4):335–371, December 2004. doi:10.1145/1041680.1041681.
  • Peter Biddle, Paul England, Marcus Peinado, and Bryan Willman. The Darknet and the Future of Content Distribution. In 2002 ACM Workshop on Digital Rights Management, 18 November 2002.
  • Antony Rowstron and Peter Druschel. Pastry: Scalable, Decentralized Object Location, and Routing for Large-Scale Peer-to-Peer Systems. In Proceedings of Middleware 2001: IFIP/ACM International Conference on Distributed Systems Platforms, Heidelberg, Germany, November 12–16, 2001. Lecture Notes in Computer Science, Volume 2218, January 2001, page 329.
  • Andy Oram et al. Peer-to-Peer: Harnessing the Power of Disruptive Technologies. O'Reilly, 2001.
  • I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan. Chord: A scalable peer-to-peer lookup service for internet applications. In Proceedings of SIGCOMM 2001, August 2001.
  • Ralf Steinmetz and Klaus Wehrle (Eds). Peer-to-Peer Systems and Applications. ISBN 3-540-29192-X, Lecture Notes in Computer Science, Volume 3485, September 2005.
