Analysis of the Caduceus P2P network's three-tier design, Part 3: P2P network architecture and the DHP network (storage)
Blockchain has brought major improvements to Internet companies, but it has also exposed efficiency problems in P2P networks, and addressing those problems has become urgent. In practice, the guiding concepts and technical tools of most blockchain companies are still anchored in the era of the old Internet. Many organisations have only a basic understanding of the network's core architecture, such as P2P networking, and do not pursue the other essential improvements.
Starting from the large-scale P2P transmission technology of the Caduceus platform, this article analyses the three-layer architecture of the Caduceus P2P network on the basis of the underlying principles of P2P networks, topology networks, the hypercube, the DHP network, and Gossip.
The DHP network (storage) structure is the communication layer of consensus storage nodes, IPFS nodes, and edge rendering nodes in the Caduceus three-layer P2P network design.
Every computer (node) in a P2P (peer-to-peer) network is a peer, and the nodes collectively provide services for the whole network. In the absence of a centralised server, any host may respond to requests as a server while also consuming services offered by other nodes.
P2P communication does not require identity verification from third parties or certificate authorities (CAs), which reduces the risk of third-party manipulation and spoofing. Combined with this notion from blockchain technology, the P2P network becomes decentralised and open.
Problems such as node naming, error recovery, and data query often need to be handled when P2P systems develop a topological structure. The current architectures of P2P networks are as follows:
Hybrid P2P structure
The hybrid P2P architecture is not entirely distributed. Servers still exist in this architecture, but their function has changed: unlike in a conventional C/S model, the server merely coordinates the nodes. Such a server is generally referred to as an index server. In this arrangement, resources are not stored on the server but on each computer, which significantly lowers the server's burden, although a dependence on the server still remains.
Purely decentralised P2P architecture
Purely distributed P2P structures come in two forms: unstructured and structured. The unstructured model is organised as a random graph, and neither the connections between computers nor the placement of data is strictly regulated. Its main benefit is excellent stability; its main drawback is relatively poor query efficiency. In the structured model, the arrangement of computers and the placement of data are governed primarily by distributed hash tables. Its key benefit is high query efficiency; its main drawback is relatively poor stability.
Unstructured P2P Model
The unstructured P2P architecture employs a totally random, graph-based forwarding mechanism. It overcomes the problem of network-structure centralisation and has excellent scalability and fault tolerance. However, it relies on application-level broadcast, resulting in an excessive number of messages and a heavy network load. There is no way to determine the topology of the whole network or the machines that comprise it. Such systems are also more susceptible to malicious attacks such as spam or even viruses, and because of the flooding mechanism the query diameter cannot be controlled, resulting in relatively poor query efficiency.
Structured P2P Model
Unstructured P2P networks lack an effective and scalable search method. In recent years, a great deal of effort has been devoted to the creation of scalable search methods, and the most significant accomplishment to date is the distributed hash table (DHT). From a technological standpoint, the evolution of P2P networks can be broken down into three stages:
Stage 1: Centralised "Peer-to-Peer" Network
This network type employs a centralised structure. Since file index information is maintained on the central server, each node must first connect to the central server in order to locate resources. Its greatest benefits are easy maintenance and fast indexing. However, since the whole network depends heavily on the central server, performance bottlenecks and single points of failure are common.
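As a sketch of this first stage, the central index server can be modelled as a simple lookup table mapping file names to the peers holding them; all class names, file names, and peer addresses below are illustrative:

```python
# Hypothetical sketch of a Stage-1 central index: the server only records
# *where* files live, while peers exchange the data directly among themselves.

class IndexServer:
    def __init__(self):
        self.index = {}  # filename -> set of peer addresses holding it

    def register(self, peer, filename):
        # A peer announces that it holds a copy of this file.
        self.index.setdefault(filename, set()).add(peer)

    def lookup(self, filename):
        # Fast, central lookup; but if this server dies, no peer can
        # locate anything: the single point of failure described above.
        return self.index.get(filename, set())

server = IndexServer()
server.register("peer-a:7001", "movie.mkv")
server.register("peer-b:7002", "movie.mkv")
print(server.lookup("movie.mkv"))
```

Once the lookup returns, the requesting peer connects to one of the listed peers directly; the server never touches the file data itself.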
Stage 2: Unstructured Distributed Networks
This kind of network uses the "flooding" search method: each search broadcasts the query message to all network nodes. When a node wishes to download a file, it generates a query containing the file name or keyword and sends it to all nodes linked to it. If a node holds the file, the requester establishes a direct connection with it; otherwise, the node forwards the query to its neighbouring nodes until the file's location is discovered.
It has been found that as the network grows, this search approach generates a "broadcast storm" that consumes significant network bandwidth and node system resources. It does avoid the "single point of failure" problem of centralised peer-to-peer networks, but only at the cost of serious inefficiency.
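The flooding search described above can be sketched as follows; the graph representation, the TTL bound, and the field names are assumptions made for illustration:

```python
# Minimal flooding-search sketch (Stage 2, Gnutella-style).
# A TTL (time-to-live) bounds how far the query spreads, which is the usual
# mitigation for the "broadcast storm" described in the text.

def flood_search(peers, start, filename, ttl=3):
    """peers: {node: {'files': set_of_filenames, 'neighbors': list_of_nodes}}"""
    visited, frontier = {start}, [start]
    hits, messages = [], 0
    for _ in range(ttl):
        next_frontier = []
        for node in frontier:
            for nb in peers[node]['neighbors']:
                messages += 1            # every forwarded query is a message
                if nb in visited:
                    continue             # duplicate deliveries are discarded
                visited.add(nb)
                if filename in peers[nb]['files']:
                    hits.append(nb)      # a copy was found on this peer
                next_frontier.append(nb) # the query keeps spreading anyway
        frontier = next_frontier
    return hits, messages
```

Even on a tiny graph, the message count grows much faster than the number of hits, which is the inefficiency the text points out.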
Stage 3: Structured Distributed Network
Currently, the most popular form is the structured distributed network, i.e. the network based on a distributed hash table (DHT).
DHT employs a more organised, key-value-based routing approach in order to combine the efficiency and accuracy of Napster with the decentralisation of Gnutella.
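A toy illustration of DHT-style key-value placement, loosely in the spirit of consistent hashing; the class, function names, and the 16-bit ring size are invented for this sketch and are not any particular DHT protocol:

```python
# Toy DHT sketch: node IDs and keys are hashed onto the same ring, and a
# key is stored on the first node whose ID is >= the key's hash (wrapping
# around). Real DHTs (Chord, Kademlia) add routing tables on top of this.
import hashlib
from bisect import bisect_left

def h(s, bits=16):
    # Deterministic hash onto a small ring of 2**bits positions.
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % (2 ** bits)

class ToyDHT:
    def __init__(self, node_names):
        self.ring = sorted(h(n) for n in node_names)
        self.store = {nid: {} for nid in self.ring}

    def _owner(self, key):
        # First node clockwise from the key's position (wrap with %).
        i = bisect_left(self.ring, h(key)) % len(self.ring)
        return self.ring[i]

    def put(self, key, value):
        self.store[self._owner(key)][key] = value

    def get(self, key):
        return self.store[self._owner(key)].get(key)
```

Because every node computes the same owner for a given key, lookups need no central index and no flooding: this is the "efficiency plus decentralisation" combination the text describes.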
DHP Network (Storage)
DHP stands for Diffie-Hellman Protocol, a method for exchanging keys. In a private-key (symmetric) encryption system, the same key is used for both encryption and decryption, and the key is not public. DHP is responsible for negotiating this key over a public channel.
The DHP algorithm has the following main points:
First, it reduces the number of candidate sets generated by the association-rule method, which speeds up locating candidate item sets in each transaction and significantly alleviates the Apriori algorithm's performance bottleneck.
Second, it trims the transaction database. The smaller the candidate set created by the DHP algorithm, the more the clipping strategy can shrink the transaction database while constructing the two-item sets. This may reduce both the number of transactions in the database (i.e., the number of rows) and the number of items within each transaction, drastically reducing the amount of computation required in subsequent rounds.
Third, it reduces database scanning and disk I/O access. After pruning, the candidate set to be processed is smaller, so more of the work can be executed in memory; and because the DHP method does not fetch the item set on every database scan, it can skip some scans entirely. Delaying the decision until a later pass lowers disk I/O access.
In return for quicker execution, the DHP method shrinks the candidate pool to be processed at the cost of an additional hash-table computation and the storage space for that table (used for database pruning).
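Reading DHP in the points above as the hash-based candidate-pruning algorithm (Direct Hashing and Pruning), its core idea can be sketched as follows; the function name, bucket count, and dataset are illustrative assumptions:

```python
# DHP-style candidate pruning sketch: while counting 1-itemsets on the
# first scan, hash every 2-itemset of each transaction into a small bucket
# table. A 2-itemset can only be frequent if its bucket count reaches the
# minimum support, so most Apriori candidate pairs are pruned early.
from itertools import combinations
from collections import Counter

def dhp_candidates(transactions, minsup, n_buckets=7):
    item_count = Counter()
    buckets = [0] * n_buckets
    for t in transactions:
        item_count.update(t)
        for pair in combinations(sorted(t), 2):
            buckets[hash(pair) % n_buckets] += 1  # cheap count, first scan

    frequent_items = {i for i, c in item_count.items() if c >= minsup}
    # Keep a candidate pair only if both items are frequent AND its hash
    # bucket passes minsup (bucket counts over-approximate pair counts,
    # so no truly frequent pair is ever lost).
    return {
        pair
        for pair in combinations(sorted(frequent_items), 2)
        if buckets[hash(pair) % n_buckets] >= minsup
    }
```

Bucket collisions can let an infrequent pair survive pruning (it is filtered out later), but they can never discard a frequent one, which is the trade-off against the extra hash-table space that the text mentions.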
Based on the existing underlying technology, however, no public chain can realistically achieve 100k TPS in a WAN setting. Why is this the case? Some elementary arithmetic makes it clear.
Whether it is the most common token-transfer transaction in Ethereum or Solana's minimum packet with payload (see page 29 of the Solana whitepaper), each transaction needs at least 170 bytes, so the network bandwidth required for 100k TPS is at least 170 bytes * 100k/s = 17 MB/s = 136 Mbps. This quantity of data must also be broadcast to all consensus nodes. If the standard Gossip protocol is employed, the amount of data each node transmits is at least 10 times this figure; and the more consensus nodes there are and the more rounds of broadcast are needed, the more data each node must send.
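The back-of-the-envelope figures above can be checked directly (the 10x Gossip amplification factor is the text's own rough estimate):

```python
# Bandwidth arithmetic for 100k TPS at 170 bytes per transaction.
tx_bytes = 170
tps = 100_000

bytes_per_sec = tx_bytes * tps            # 17,000,000 B/s = 17 MB/s
mbps = bytes_per_sec * 8 / 1_000_000      # 136 Mbps
print(bytes_per_sec / 1e6, "MB/s =", mbps, "Mbps")

# With a naive Gossip amplification of ~10x, each node would push roughly:
print(mbps * 10, "Mbps per node")
```

At over a gigabit per second per node, the infeasibility on a WAN is immediate.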
In comparison, Bitcoin generates a block roughly every 10 minutes with a block size of 1 MB, for an average throughput of about 13.3 kbps. Ethereum's maximum block size has been increased from 1.865 MB to 10 MB, and the block time has been reduced from 15 to 12 seconds (ETH 2.0), for an average throughput of about 6.65 Mbps.
Therefore, when developing a blockchain targeting more than 10,000 TPS, network issues must be accounted for and appropriate solutions offered. Caduceus provides a viable solution to the network layer's storage bottleneck.
In the network communication layer, Caduceus employs a three-layer semi-structured P2P network. The topology layer reduces the number of network connections; the hypercube layer achieves fast, efficient broadcasting (the broadcast message volume is reduced to N-1 messages, with a broadcast time of log(N) rounds); and the edge layer secures network integrity while handling edge rendering and distributed storage. The result is a large-scale, intercontinental P2P network with a bandwidth of 200 Mbps and a latency of no more than 300 ms at 10k-100k nodes.
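The hypercube layer's claimed figures (N-1 broadcast messages in log(N) rounds) can be verified with a toy simulation; the dimension-by-dimension schedule below is the standard textbook hypercube broadcast, not necessarily the exact Caduceus implementation:

```python
# Hypercube broadcast sketch: N = 2**d nodes are labelled with d-bit IDs,
# and two nodes are neighbours iff their IDs differ in exactly one bit.
# Starting from node 0, each round forwards along one new dimension, so
# all N nodes are reached in d = log2(N) rounds with exactly N-1 messages.

def hypercube_broadcast(d):
    informed = {0}
    messages = 0
    for bit in range(d):                  # one round per dimension
        for node in list(informed):
            peer = node ^ (1 << bit)      # neighbour across this dimension
            if peer not in informed:
                informed.add(peer)
                messages += 1             # each node is informed exactly once
    return len(informed), messages

print(hypercube_broadcast(10))  # 1024 nodes reached with 1023 messages
```

The informed set doubles each round, which is exactly why the broadcast time scales as log(N) rather than linearly with the node count.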
Simply put, Caduceus is capable of storing up to 1.3 TB per day at the network communication layer. Moreover, as an entry point for edge computing, it can rapidly construct a point-to-point network with robust and efficient storage capacity.
Caduceus’s infrastructure will influence the development of the Metaverse by implementing a three-layer P2P network design. Its underlying network architecture respects the original development ideas of the Metaverse and will serve as the value basis for the long-term benefit of all Caduceus ecosystems and developers.
Caduceus will concentrate on developing a new generation of network efficiency solutions with blockchain technology, rather than tackling new challenges with the ingrained assumptions of earlier businesses. In the future, Caduceus will construct an open, free, interoperable Metaverse ecosystem and become a vital infrastructure and gateway to the Metaverse.