Swarm alpha public pilot and the basics of Swarm


With the long-awaited geth 1.5 (“let there bee light”) release, Swarm made it into the official go-ethereum release as an experimental feature. The current version of the code is POC 0.2 RC5 — “embrace your daemons” (roadmap), the refactored and cleaner version of the codebase that has been running on the Swarm toynet over the past months.

The current release ships with the swarm command, which launches a standalone Swarm daemon as a separate process, using your favourite IPC-compliant ethereum client if needed. Bandwidth accounting (using the Swarm Accounting Protocol = SWAP) is responsible for smooth operation and speedy content delivery by incentivising nodes to contribute their bandwidth and relay data. The SWAP system is functional, but it is switched off by default. Storage incentives (punitive insurance) to protect the availability of rarely-accessed content are planned to be operational in POC 0.4. So currently, by default, the client uses the blockchain only for domain name resolution.

With this blog post we are happy to announce the launch of our shiny new Swarm testnet connected to the Ropsten ethereum testchain. The Ethereum Foundation is contributing a 35-node Swarm cluster (to grow to 105 nodes) running on the Azure cloud. It is hosting the Swarm homepage.

We consider this testnet the first public pilot, and the community is welcome to join the network, contribute resources, and help us find issues, identify pain points and give feedback on usability. Instructions can be found in the Swarm guide. We encourage those who can afford to run persistent nodes (nodes that stay online) to get in touch. We have already received promises for 100TB deployments.

Note that the testnet offers no guarantees! Data may be lost or become unavailable. Indeed, guarantees of persistence cannot be made, at least until the storage insurance incentive layer is implemented (scheduled for POC 0.4).

We envision shaping this project with more and more community involvement, so we are inviting those interested to join our public discussion rooms on gitter. We would like to lay the groundwork for this dialogue with a series of blog posts about the technology and ideology behind Swarm in particular and about Web3 in general. The first post in this series introduces the ingredients and operation of Swarm as it currently functions.

What is Swarm after all?

Swarm is a distributed storage platform and content distribution service; a native base layer service of the ethereum Web3 stack. The objective is a peer-to-peer storage and serving solution that has zero downtime, is DDoS-resistant, fault-tolerant and censorship-resistant, as well as self-sustaining due to a built-in incentive system. The incentive layer uses peer-to-peer accounting for bandwidth, deposit-based storage incentives, and allows trading resources for payment. Swarm is designed to integrate deeply with the devp2p multiprotocol network layer of Ethereum as well as with the Ethereum blockchain for domain name resolution, service payments and content availability insurance. Nodes on the current testnet use the Ropsten testchain for domain name resolution only, with incentivisation switched off. The primary objective of Swarm is to provide decentralised and redundant storage of Ethereum’s public record, in particular storing and distributing dapp code and data as well as blockchain data.

There are two major features that set Swarm apart from other decentralised distributed storage solutions. While existing services (BitTorrent, ZeroNet, IPFS) allow you to register and share the content you host on your server, Swarm provides the hosting itself as a decentralised cloud storage service. There is a genuine sense that you can just ‘upload and disappear’: you upload your content to the swarm and retrieve it later, all potentially without a hard disk. Swarm aspires to be the generic storage and delivery service that, when ready, caters to use-cases ranging from serving low-latency real-time interactive web applications to acting as guaranteed persistent storage for rarely used content.

The other major feature is the incentive system. The beauty of decentralised consensus of computation and state is that it allows programmable rulesets for communities, networks, and decentralised services that solve their coordination problems by implementing transparent self-enforcing incentives. Such incentive systems model individual participants as agents following their rational self-interest, yet the network’s emergent behaviour is massively more beneficial to the participants than without coordination.

Not long after Vitalik’s whitepaper, the Ethereum dev core realised that a generalised blockchain is a crucial missing piece of the puzzle needed, alongside existing peer-to-peer technologies, to run a fully decentralised internet. The idea of having separate protocols (shh for Whisper, bzz for Swarm, eth for the blockchain) was introduced in May 2014 by Gavin and Vitalik, who imagined the Ethereum ecosystem within the grand crypto 2.0 vision of the third web. The Swarm project is a prime example of a system where incentivisation will allow participants to efficiently pool their storage and bandwidth resources in order to provide global content services to all participants. We could say that the smart contracts of the incentives implement the hive mind of the swarm.

A thorough synthesis of our research into these issues led to the publication of the first two orange papers. Incentives are also explained in the devcon2 talk about the Swarm incentive system. More details to come in future posts.

How does Swarm work?

Swarm is a network, a service and a protocol (rules). A Swarm network is a network of nodes running a wire protocol called bzz using the ethereum devp2p/rlpx network stack as the underlay transport. The Swarm protocol (bzz) defines a mode of interaction. At its core, Swarm implements a distributed content-addressed chunk store. Chunks are arbitrary data blobs with a fixed maximum size (currently 4KB). Content addressing means that the address of any chunk is deterministically derived from its content. The addressing scheme relies on a hash function which takes a chunk as input and returns a 32-byte key as output. A hash function is irreversible, collision-free and uniformly distributed (indeed, this is what makes bitcoin, and in general proof-of-work, work).

This hash of a chunk is the address that clients can use to retrieve the chunk (the hash’s preimage). Irreversible and collision-free addressing immediately provides integrity protection: no matter how a client came to know an address, it can tell whether the chunk is damaged or has been tampered with simply by hashing it.
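To make the addressing concrete, here is a minimal sketch in Go. SHA-256 stands in for Swarm’s actual chunk hash (which differs in detail), but the principle is the same: the address is derived from the content, and any retrieved chunk can be checked against the address it was requested under.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Address is a 32-byte chunk key, as in Swarm's content-addressed store.
type Address [32]byte

// chunkAddress derives a chunk's address from its content.
// SHA-256 is a stand-in here; the real Swarm chunk hash differs in detail.
func chunkAddress(chunk []byte) Address {
	return Address(sha256.Sum256(chunk))
}

// verify re-hashes a retrieved chunk and compares it with the address it
// was requested under: integrity protection comes for free.
func verify(addr Address, chunk []byte) bool {
	return chunkAddress(chunk) == addr
}

func main() {
	chunk := []byte("an arbitrary blob of at most 4KB")
	addr := chunkAddress(chunk)
	fmt.Printf("address: %x\n", addr)
	fmt.Println("intact:  ", verify(addr, chunk))                    // true
	fmt.Println("tampered:", verify(addr, []byte("something else"))) // false
}
```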

Swarm’s main offering as a distributed chunkstore is that you can upload content to it.
The nodes constituting the Swarm all dedicate resources (diskspace, memory, bandwidth and CPU) to store and serve chunks. But what determines who is keeping a chunk?
Swarm nodes have an address (the hash of the address of their bzz-account) in the same keyspace as the chunks themselves. Let’s call this address space the overlay network. If we upload a chunk to the Swarm, the protocol determines that it will eventually end up being stored at nodes that are closest to the chunk’s address (according to a well-defined distance measure on the overlay address space). The process by which chunks get to their address is called syncing and is part of the protocol. Nodes that later want to retrieve the content can find it again by forwarding a query to nodes that are close to the content’s address. Indeed, when a node needs a chunk, it simply posts a request to the Swarm with the address of the content, and the Swarm will forward the request until the data is found (or the request times out). In this regard, Swarm is similar to a traditional distributed hash table (DHT) but with two important (and under-researched) features.
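To illustrate the kind of distance measure involved, here is a hedged Go sketch: Kademlia-style proximity can be expressed as the length of the common bit prefix of two addresses, so the longer the shared prefix, the closer the addresses. The function name and details here are illustrative, not Swarm’s actual implementation.

```go
package main

import "fmt"

// proximity returns the number of leading bits two 32-byte overlay
// addresses share: the longer the common prefix, the "closer" they are
// in the overlay. (Illustrative only; Swarm's own proximity/distance
// code differs in detail.)
func proximity(a, b [32]byte) int {
	for i := 0; i < 32; i++ {
		x := a[i] ^ b[i]
		if x == 0 {
			continue // these 8 bits are identical, keep going
		}
		p := i * 8
		for mask := byte(0x80); mask > 0; mask >>= 1 {
			if x&mask != 0 {
				return p
			}
			p++
		}
	}
	return 256 // identical addresses
}

func main() {
	var node, chunk [32]byte
	node[0], chunk[0] = 0xA0, 0xB0          // 1010 0000 vs 1011 0000
	fmt.Println(proximity(node, chunk))     // 3: the first three bits agree
}
```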

Swarm uses a set of TCP/IP connections in which each node has a set of (semi-)permanent peers. All wire protocol messages between nodes are relayed from node to node hopping on active peer connections. Swarm nodes actively manage their peer connections to maintain a particular set of connections, which enables syncing and content-retrieval by key-based routing. Thus, a chunk-to-be-stored or a content-retrieval-request message can always be efficiently routed along these peer connections to the nodes that are nearest to the content’s address. This flavour of the routing scheme is called forwarding Kademlia.
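Reusing the proximity helper from the sketch above, a single forwarding step can be pictured as picking, among the currently connected peers, the one nearest to the target address; a message then hops from peer to peer until it reaches the target’s neighbourhood. Again, this is a simplified illustration, not Swarm’s actual routing code.

```go
// nearestPeer selects, from the connected peers, the one whose overlay
// address shares the longest prefix with the target address. A chunk or
// retrieval request is forwarded to that peer, hop by hop, until it
// reaches the nodes closest to the target. (Sketch only.)
func nearestPeer(target [32]byte, peers [][32]byte) (best [32]byte, found bool) {
	bestPO := -1
	for _, p := range peers {
		if po := proximity(target, p); po > bestPO {
			best, bestPO, found = p, po, true
		}
	}
	return best, found
}
```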

Combined with the SWAP incentive system, a node’s rational self-interest dictates opportunistic caching behaviour: the node caches all relayed chunks locally so that it can be the one to serve them the next time they are requested. As a consequence of this behaviour, popular content ends up being replicated more redundantly across the network, essentially decreasing the latency of retrievals; in this sense, Swarm is ‘auto-scaling’ as a distribution network. Furthermore, this caching behaviour unburdens the original custodians from potential DDoS attacks. SWAP incentivises nodes to cache all content they encounter, until their storage space has been filled up. In fact, caching incoming chunks of average expected utility is always a good strategy even if you need to expunge older chunks.
The best predictor of demand for a chunk is the rate of requests in the past. Thus it is rational to remove chunks requested the longest time ago. So content that falls out of fashion, goes out of date, or never was popular to begin with, will be garbage collected and removed unless protected by insurance. The upshot is that nodes will end up fully utilizing their dedicated resources to the benefit of users. Such organic auto-scaling makes Swarm a kind of maximum-utilisation elastic cloud.
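A toy version of that eviction policy, sketched in Go with hypothetical names (Swarm’s actual garbage collection and SWAP accounting are more involved): when the store is full, the chunk whose most recent request is the oldest gets removed first.

```go
package main

import "fmt"

// store is a toy chunk store that evicts the least recently requested
// chunk when full -- the heuristic described above. (Hypothetical sketch,
// not Swarm's actual implementation.)
type store struct {
	max      int
	clock    int
	lastSeen map[string]int // chunk address -> logical time of last request
}

func (s *store) request(addr string) {
	s.clock++
	if _, ok := s.lastSeen[addr]; !ok && len(s.lastSeen) >= s.max {
		s.evictOldest()
	}
	s.lastSeen[addr] = s.clock
}

func (s *store) evictOldest() {
	victim, oldest := "", int(^uint(0)>>1) // max int
	for addr, t := range s.lastSeen {
		if t < oldest {
			victim, oldest = addr, t
		}
	}
	delete(s.lastSeen, victim)
}

func main() {
	s := &store{max: 2, lastSeen: map[string]int{}}
	s.request("chunkA")
	s.request("chunkB")
	s.request("chunkA")     // refresh chunkA
	s.request("chunkC")     // store full: chunkB was requested longest ago, evict it
	fmt.Println(s.lastSeen) // chunkA and chunkC remain
}
```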

Documents and the Swarm hash

Now that we’ve explained how Swarm functions as a distributed chunk store (a fixed-size preimage archive), you may wonder where chunks come from and why you should care.

On the API layer Swarm provides a chunker. The chunker takes any kind of readable source, such as a file or a video camera capture device, and chops it into fixed-size chunks. These so-called data chunks or leaf chunks are hashed and then synced with peers. The hashes of the data chunks are then packaged into chunks themselves (called intermediate chunks) and the process is repeated. Currently 128 hashes make up a new chunk. As a result the data is represented by a merkle tree, and it is the root hash of the tree that acts as the address you use to retrieve the uploaded file.
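A simplified Go sketch of that process may help make it concrete. SHA-256 stands in for the actual Swarm chunk hash, and the real chunker streams its input and builds the tree incrementally rather than holding everything in memory as this toy version does.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const (
	chunkSize = 4096 // maximum data (leaf) chunk size in bytes
	branches  = 128  // hashes packed into one intermediate chunk
)

// swarmRoot splits data into fixed-size chunks, hashes them, packs the
// hashes into intermediate chunks and repeats until a single root hash
// remains. (Simplified sketch; SHA-256 stands in for the real chunk hash.)
func swarmRoot(data []byte) [32]byte {
	// hash the leaf (data) chunks
	var level [][32]byte
	for i := 0; i < len(data); i += chunkSize {
		end := i + chunkSize
		if end > len(data) {
			end = len(data)
		}
		level = append(level, sha256.Sum256(data[i:end]))
	}
	if len(level) == 0 {
		return sha256.Sum256(nil)
	}
	// pack up to 128 hashes into an intermediate chunk, hash it, repeat
	for len(level) > 1 {
		var next [][32]byte
		for i := 0; i < len(level); i += branches {
			end := i + branches
			if end > len(level) {
				end = len(level)
			}
			var packed []byte
			for _, h := range level[i:end] {
				packed = append(packed, h[:]...)
			}
			next = append(next, sha256.Sum256(packed))
		}
		level = next
	}
	return level[0]
}

func main() {
	data := make([]byte, 1<<20) // a 1 MiB dummy file
	fmt.Printf("root hash: %x\n", swarmRoot(data))
}
```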

When you retrieve this ‘file’, you look up the root hash and download its preimage. If the preimage is an intermediate chunk, it is interpreted as a series of hashes addressing chunks on a lower level. Eventually the process reaches the data level and the content can be served. An important property of a merklised chunk tree is that it provides integrity protection (what you seek is what you get) even on partial reads. For example, this means that you can skip back and forth in a large movie file and still be certain that the data has not been tampered with. Advantages of using smaller units (the 4KB chunk size) include parallelisation of content fetching and less wasted traffic in case of network failures.

Manifests and URLs

On top of the chunk merkle trees, Swarm provides a crucial third layer of organising content: manifest files. A manifest is a JSON array of manifest entries. An entry minimally specifies a path, a content type and a hash pointing to the actual content. Manifests allow you to create a virtual site hosted on Swarm, which provides URL-based addressing by always assuming that the host part of the URL points to a manifest, and the path is matched against the paths of manifest entries. Manifest entries can point to other manifests, so they can be recursively embedded, which allows manifests to be coded as a compacted trie efficiently scaling to huge datasets (e.g., Wikipedia or YouTube). Manifests can also be thought of as sitemaps or routing tables that map URL strings to content. Since at each step of the way we either have merklised structures or content addresses, manifests provide integrity protection for an entire site.
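As an illustration, a manifest could be modelled roughly as below. The field names follow the description above, and the content type used for the nested manifest is an assumption; this is not a guaranteed reproduction of Swarm’s exact manifest format.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ManifestEntry maps a URL path to the Swarm hash of the content served
// under it. (Field names are approximate, following the description above.)
type ManifestEntry struct {
	Path        string `json:"path"`
	ContentType string `json:"contentType"`
	Hash        string `json:"hash"` // hash of the content, or of a nested manifest
}

type Manifest struct {
	Entries []ManifestEntry `json:"entries"`
}

func main() {
	site := Manifest{Entries: []ManifestEntry{
		{Path: "index.html", ContentType: "text/html", Hash: "<32-byte swarm hash, hex>"},
		{Path: "img/", ContentType: "application/bzz-manifest+json", Hash: "<hash of a nested manifest>"},
	}}
	out, _ := json.MarshalIndent(site, "", "  ")
	fmt.Println(string(out))
}
```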

Manifests can be read and directly traversed using the bzzr URL scheme. This use is demonstrated by the Swarm Explorer, an example Swarm dapp that displays manifest entries as if they were files on a disk organised in directories. Manifests can easily be interpreted as directory trees, so a directory and a virtual host can be seen as the same. A simple decentralised dropbox implementation can be based on this feature. The Swarm Explorer is up on swarm: you can use it to browse any virtual site by putting a manifest’s address hash in the URL: this link will show the explorer browsing its own source code.

Hash-based addressing is immutable, which means there is no way you can overwrite or change the content of a document under a fixed address. However, since chunks are synced to other nodes, Swarm is immutable in the stronger sense that if something is uploaded to Swarm, it cannot be unseen, unpublished, revoked or removed. For this reason alone, be extra careful with what you share. However, you can change a site by creating a new manifest that contains new entries or drops old ones. This operation is cheap since it does not require moving any of the actual content referenced. The photo album is another Swarm dapp that demonstrates how this is done; the source is on GitHub. If you want your updates to show continuity or need an anchor to display the latest version of your content, you need name-based mutable addresses. This is where the blockchain, the Ethereum Name Service and domain names come in. A more complete way to track changes is to use version control, like git or mango, a git implementation using Swarm (or IPFS) as its backend.

Ethereum Name Service

In order to authorise changes or publish updates, we need domain names. For a proper domain name service you need the blockchain and some governance. Swarm uses the Ethereum Name Service (ENS) to resolve domain names to Swarm hashes. Tools are provided to interact with the ENS to acquire and manage domains. The ENS is crucial as it is the bridge between the blockchain and Swarm.

If you use the Swarm proxy for browsing, the client assumes that the domain (the part after bzz:/ up to the first slash) resolves to a content hash via ENS. Thanks to the proxy and the standard url scheme handler interface, Mist integration should be blissfully easy for Mist’s official debut with Metropolis.
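Conceptually, retrieval through a registered name then amounts to two steps: resolve the name to a content hash via ENS, then fetch that hash from the Swarm. The interfaces and in-memory stand-ins below are hypothetical, included only to illustrate the flow; the real client wires ENS contract calls and the chunk store together quite differently.

```go
package main

import (
	"errors"
	"fmt"
)

// Resolver and ChunkStore are hypothetical interfaces used to illustrate
// the flow; they are not the actual go-ethereum APIs.
type Resolver interface {
	Resolve(name string) ([32]byte, error) // ENS name -> content hash
}

type ChunkStore interface {
	Get(hash [32]byte) ([]byte, error) // content hash -> content
}

// fetch resolves a bzz:/ host name to a content hash via ENS, then
// retrieves the content from Swarm by that hash.
func fetch(ens Resolver, store ChunkStore, name string) ([]byte, error) {
	hash, err := ens.Resolve(name)
	if err != nil {
		return nil, err
	}
	return store.Get(hash)
}

// In-memory stand-ins so the sketch runs end to end.
type fakeENS map[string][32]byte

func (f fakeENS) Resolve(name string) ([32]byte, error) {
	if h, ok := f[name]; ok {
		return h, nil
	}
	return [32]byte{}, errors.New("name not registered")
}

type fakeStore map[[32]byte][]byte

func (f fakeStore) Get(hash [32]byte) ([]byte, error) {
	if c, ok := f[hash]; ok {
		return c, nil
	}
	return nil, errors.New("content not found")
}

func main() {
	hash := [32]byte{0x01}
	ens := fakeENS{"mysite.eth": hash}
	store := fakeStore{hash: []byte("<html>hello from the swarm</html>")}
	content, err := fetch(ens, store, "mysite.eth")
	fmt.Println(string(content), err)
}
```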

Our roadmap is ambitious: Swarm 0.3 comes with an extensive rewrite of the network layer and the syncing protocol, obfuscation and double masking for plausible deniability, Kademlia-routed p2p messaging, improved bandwidth accounting and extended manifests with http header support and metadata. Swarm 0.4 is planned to ship client-side redundancy with erasure coding, scan and repair with proof of custody, encryption support, adaptive transmission channels for multicast streams and the long-awaited storage insurance and litigation.

In future posts, we will discuss obfuscation and plausible deniability, proof of custody and storage insurance, internode messaging and the network testing and simulation framework, and more. Watch this space, bzz…


