A huge thank you to Sacha Saint-Leger, Joseph Schweitzer, Josh Stark, and protolambda for their excellent input and feedback.
I spend a lot of my time explaining and answering questions about eth2, and I mean a lot. Some of this is on a deep and technical level, as I help communicate research and specifications to technical contributors, but more and more these days I’m fielding questions from the community about eth2 progress, direction, motivations, design decisions, delays, and more. I actually really enjoy these conversations. I get super excited as I explain eth2, come up with new ways to describe various components, or find the right analogy for the audience to get the gears turning and the light bulb to switch on.
But this dynamic/conversational method, while valuable, leaves a ton of the community in the dark. I get asked the same questions time and time again, and more concerningly, I get asked the same questions 6 months later! Clearly there is an information problem. This information exists, but it is scattered across the web — research posts, specs, spec explainers, public calls, public channels, reddit, blog posts. My first attempt after devcon5 to bridge the information gap between those deep in eth2 and the rest of the community manifested itself as a new blog series, “eth2 quick update”. These are little snippets to help follow along, but I’m realizing they don’t really communicate the bigger picture. The bigger picture does get communicated and discussed on podcasts, AMAs, and conferences, but, even then, a written form will still aid these efforts.
So here we are. This post is aimed at the community, to provide you with a comprehensive look at what eth2 is today: where it’s going, what it might become, and what it means for you, the Ethereum community. I will attempt to provide the right amount of technical substance to illustrate the motivations, the vision, the current state of the project, and the work to come, without getting bogged down in too much math or deep jargon.
This post might also be useful for those deep Ethereum technical experts that have to date kept eth2 at arm’s length. No worries, I understand. This project is big, complicated, and always seemed like it was far enough in the future that you could ignore it while you solved the pressing problems at hand. Hopefully this post will help you better understand the things to come.
As for the eth2 folks, you might also get something out of this post — a broader perspective on where we’re at now, and how I’m thinking about the things to come.
Disclaimer: this is how I, Danny Ryan, personally see things today. There are many voices and opinions driving the ever growing, ever evolving eth2. This is just a snapshot of a slice of my interpretation.
eth2, wtf
“Eth2 is a scalable proof-of-stake infrastructure”
If you’ve heard me speak at all in the past 6 months, you’ve heard me say this time and time again. Eth2 is built for Ethereum and ultimately is Ethereum. It aims to be a more secure and scalable context for the current Ethereum mainnet, providing little disruption to the way things are done today. At the same time, it provides an upgraded context for us to grow into.
Since before Ethereum launched, it was known that a single blockchain paradigm would not provide enough bandwidth to serve as the backbone of a new decentralized internet. Ethereum-related proof-of-stake and sharding research traces its history back to as early as 2014. Both proof-of-stake and sharding aim to answer the following question: Given a certain amount of capital backing a crypto-economic system, can we improve security and throughput while still allowing consumer hardware to participate in consensus and follow the chain? While I won’t get into the history here, this exploration took years and was marked by many false starts. In the end, the answer is a resounding yes, and it has manifested itself as the eth2 project.
Eth2 is an ambitious, multi-year project that will be rolled out in phases. This is widely documented and discussed, but I’ll give you a quick, not-so-technical look at what these entail.
Phase 0
Phase 0, the Beacon Chain, is the core of the new consensus mechanism. This is where all the system level activity and orchestration happens. Phase 0 is all about coming to consensus with hundreds of thousands of consensus entities (validators), distributed across thousands of nodes around the world.
Due to the technical requirements of distributing subsets of validators across shards in phase 1+, we need to be able to handle a huge number of validators. Much of the engineering complexity stems from this requirement. Other non-sharded proof-of-stake mechanisms have 100s or maybe 1000s of validators, but eth2 is designed to have a bare minimum of ~16k validators, with the expectation that this figure will be in the hundreds of thousands within a couple of years.
Phase 1
Phase 0 is about coming to consensus, whereas Phase 1 is about coming to consensus on a lot of stuff. This “stuff” comes in the form of many shard chains. You can think of a shard chain as its own blockchain with approximately the same complexity as Ethereum today, but living under the eth2 consensus (i.e. living under and built/controlled by the Beacon Chain). The validators from the Beacon Chain are given random short-term assignments to build and validate shard chains, making crypto-economic commitments to the state, availability, and validity of each chain back into the core system.
Today, we expect there to be 64 shards to start, and for the total data available to the system to be in the 1 to 4 MB/s range (YES, that’s a ton of data).
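To make that range less abstract, here’s a back-of-the-envelope calculation. The shard count, shard block sizes, and slot time below are rough working parameters from the current specs (they may still change), and depending on which parameters you plug in, the result lands around the quoted range:

```python
# Back-of-the-envelope eth2 data availability estimate.
# Assumed parameters, roughly the current phase 1 spec values:
SHARD_COUNT = 64                        # number of shard chains
TARGET_SHARD_BLOCK_SIZE = 128 * 1024    # ~128 KiB target per shard block
MAX_SHARD_BLOCK_SIZE = 512 * 1024       # ~512 KiB max per shard block
SECONDS_PER_SLOT = 12                   # one shard block per shard per slot

target = SHARD_COUNT * TARGET_SHARD_BLOCK_SIZE / SECONDS_PER_SLOT
maximum = SHARD_COUNT * MAX_SHARD_BLOCK_SIZE / SECONDS_PER_SLOT

print(f"target: {target / 1e6:.2f} MB/s")   # ~0.70 MB/s
print(f"max:    {maximum / 1e6:.2f} MB/s")  # ~2.80 MB/s
```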
Phase 1.5
Phase 1.5 is the integration of Ethereum mainnet into the new eth2 consensus mechanism as a shard (existing as one of the many shards created in Phase 1). Instead of the Ethereum we know and love being built by a proof-of-work mining algorithm, it will be built by the eth2 validators. For existing applications and users, this hot swap of the consensus mechanism will largely be transparent. Applications will continue chugging along, but developers will now have a much more powerful system to build on (better security properties, proper economic finality, more layer 1 data for rollups and other fun applications).
Phase 2
Phase 2 is the addition of state and execution on more shards than just the original Ethereum shard. There are many forms that this can take. Figuring out which form, and the details behind it, is a hotbed of research and prototyping today. I’ll discuss that a bit more in sections below.
Okay, so we have all these phases coming and Phase 0 actually feels like it’s just around the corner. But that roadmap still sounds a little long. What should I actually expect from eth2 during the phases of the upgrade?
Great question! In general, expect a wave of upgrades that increasingly touch more of Ethereum and more of the community at each step. As a user, you can either get involved early with staking in Phase 0, or you can simply wait until Ethereum fully migrates into eth2 at Phase 1.5 (a transition which should be seamless from the point of view of both dapp developers and users). Regardless of how engaged you choose to be and at what phase, there are important milestones and benefits worth being aware of as this all starts to roll out.
The first is that I know a lot of you are die-hard ETH holders who are anxious to get in on the staking action. To all the potential validators out there, especially the hobbyists, Phase 0 is for you. Phase 0 comes with its own risks and time horizons that will make it unappealing for some participants, so I personally hope this phase is a boon both for hobbyists and long term Ethereum believers. This is a unique chance to get in on the ground floor, to help influence the vision over time, and to receive a higher ETH reward for being an early adopter.
What about Phase 1? Is there anything useful we can do with all this data before the integration of Ethereum into eth2? Yes, glad you asked!
Layer 1 data is incredibly useful even without native computation. In fact, the most promising layer 2 scaling solutions of the past 12 months have been the so-called “rollup” chains (both optimistic and ZK), which scale with the availability of layer 1 data. The eth2 data layer is expected to provide Ethereum with somewhere between 1 and 4 MB/s of data availability, which translates into massive scalability gains when coupled with rollup tech. But due to the initial disjointedness of Ethereum and the new sharded universe at the start, making claims about the eth2 shard data is hard. That’s one of the reasons EIP 2537 is so important for Ethereum mainnet. With a native BLS (the new eth2 signing algorithm) precompile, we can write an efficient eth2 light client as a solidity contract, opening up the ability for Ethereum applications to make claims about data in eth2 before the Phase 1.5 integration.
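For a feel of what such a light client relies on, here is a minimal sketch of aggregate BLS verification using the py_ecc reference library (which implements the IETF standard ciphersuite eth2 adopted). The three toy validators are purely illustrative; inside the EVM, EIP 2537 would make the underlying curve operations cheap enough to run in a contract:

```python
# Minimal BLS aggregate verification, the core operation of an eth2
# light client. Uses py_ecc's implementation of the IETF BLS standard.
from py_ecc.bls import G2ProofOfPossession as bls

# Toy scenario: three validators sign the same beacon block root.
# (Real secret keys are random 32-byte scalars, not 1, 2, 3.)
secret_keys = [1, 2, 3]
pubkeys = [bls.SkToPk(sk) for sk in secret_keys]
block_root = b"\x42" * 32

signatures = [bls.Sign(sk, block_root) for sk in secret_keys]
aggregate = bls.Aggregate(signatures)

# A single pairing-based check validates all of the signatures at once.
assert bls.FastAggregateVerify(pubkeys, block_root, aggregate)
```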
As discussed above, Phase 1.5 is huge. Eth2 is built for Ethereum and at this point, eth2 becomes Ethereum. All of the applications we know and love become integrated in the upgraded eth2 consensus mechanism, retaining the feature-set we are used to while simultaneously opening up the vast new landscape of a secure proof-of-stake consensus with native access to a highly scalable data layer. This is the meat of the process in my opinion. This is the moment of grand success as we anchor Ethereum fully into its new reality.
Beyond that, additional scalability gains will likely be made over time by enabling state/execution on additional shard chains. This may come in the form of the EVM or a new VM called eWASM. Regardless of the choice of VM, the existing Ethereum EVM shard and the new shard chains will be able to interact and communicate natively via the Beacon Chain, completing the multi-execution, sharded vision.
See? It’s a journey, but there are major gains to be made along the way.
The difficulties of this approach, and why it’s worth it
So many validators
A key component of sharding relies upon the random sampling of consensus participants (validators) into committees to validate a subsection of the protocol (e.g. a shard). Given enough validators in the protocol, and an attacker of an assumed max size (controlling 1/3 of the validators, say), it becomes mathematically improbable (vanishingly so, think probability on the order of 1 / 2^40) for the attacker to overtake any one committee and corrupt the system. This allows us to design the system such that anyone with a consumer machine (e.g. a laptop or maybe even an old phone) can become a validator (since validators are assigned to subsections of the system, and validating any subsection can be done with the compute resources of a single machine).
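You can check this math yourself. The sketch below uses a binomial approximation (sampling with replacement) and an assumed committee size of 128 to compute the odds of a 1/3 attacker capturing a 2/3 majority of one committee:

```python
# Odds that an attacker controlling 1/3 of all validators captures a 2/3
# majority of one randomly sampled committee (binomial approximation;
# a committee size of 128 is an assumption).
from math import ceil, comb

COMMITTEE_SIZE = 128
p = 1 / 3  # attacker's share of the validator set

threshold = ceil(2 * COMMITTEE_SIZE / 3)  # seats needed for a 2/3 majority
p_capture = sum(
    comb(COMMITTEE_SIZE, k) * p**k * (1 - p) ** (COMMITTEE_SIZE - k)
    for k in range(threshold, COMMITTEE_SIZE + 1)
)
print(f"{p_capture:.1e}")  # on the order of 1e-15, well below 1 / 2^40
```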
This is what makes sharding incredible and, at the same time, hard. For one, we must have enough validators to make this random sampling safe: which means eth2 has far more expected validators than most (I think any) other proof-of-stake protocol. This introduces challenges in every layer of the process — from research, to consensus mechanism specification, to networking, to resource consumption and optimizations in clients. Each additional validator induces load on the system that must be accounted for at every stage in the process. Eth2 client teams have accomplished the Herculean task of managing the consensus of hundreds of thousands of validators so that we can safely and efficiently integrate many shards come Phase 1.
So many shards
Another fundamental design decision that makes what we’re building so hard is that, in Ethereum, we choose to gain scalability without compromising on decentralization.
It’s not hard to scale a blockchain to tens of thousands of transactions per second, if we don’t care about users actually being able to validate the chain for themselves, or about guaranteeing that the data is actually available to the network. The complexity of a sharded consensus mechanism is required so that the system can be broken up into bite-sized validate-able chunks. Spec’ing and implementing such a consensus mechanism is quite simply a difficult task.
So many clients
A core tenet of Ethereum is that Ethereum is protocol first. Ethereum is the abstract set of rules that makes up the protocol rather than any specific implementation of that set of rules. To that end, the Ethereum community has encouraged many client implementations since day 0. On Ethereum mainnet today, this comes in the form of besu, ethereumJS, geth, nethermind, nimbus, open-ethereum, trinity, and turbo-geth. And in the eth2 landscape, this manifests as cortex, lighthouse, lodestar, nimbus, prysm, teku, and trinity.
The multi-client paradigm has many significant advantages:
- Having many clients allows for a wider exploration of ideas, algorithms, and architectures (each client brings their own approach and point of view). There is a healthy cross-pollination in this process as we all build more robust systems.
- Clients often have different design goals. This leads to a more diverse set of users and applications as time progresses. Clients may be more or less focused on any of the following — performance, security, horizontal scaling, UI/UX, light clients, browsers, resource constrained devices, etc, etc.
- With many production grade clients on mainnet, a significant attack that can bring down any one client (e.g. a DoS attack) is met with resilience as the rest of the clients stand strong. This was seen very early in Ethereum’s history during the “Shanghai DoS Attacks” when a series of DoS attacks were able to bring down geth and parity but never both at the same time.
- Each client serves as a gateway to a programming language community. The foundation of a client in a particular language opens and invites experimentation and innovation in that language. The base tooling around the client often snowballs into a robust ecosystem of tools and contributors in that language. The multi-client paradigm reinforces the gravitational well that is Ethereum.
With these distinct advantages come some difficulties:
- The spec and testing must be air-tight to avoid any accidental forking on mainnet. If there is only one implementation of the protocol, then that implementation becomes the protocol. In the single-client case, if any sort of consensus “bug” were hit on mainnet, it would become baked into the reality of the protocol. This isn’t great from a purity perspective, but it eliminates any risk of an accidental fork. As a counter to this difficulty — if we have a healthy distribution of clients on mainnet (e.g. no client has more than 1/3 of total nodes/validators), the network can remain live even in the face of a single client having a consensus issue.
- Coordination of N clients at best results in a linear overhead compared to a single client, but in some cases might induce a quadratic overhead (N^2). There are techniques we employ to reduce this overhead — e.g. consensus (and soon network) test suites — but it will always be there in some capacity.
State of eth2 clients and testnets
Phase 0 eth2 clients have become quite sophisticated pieces of software over the past 2 years, being able to handle the distributed consensus of hundreds of thousands of validators across thousands of nodes. We are currently in the testnet phase and inching closer to launch every day. I expected the last mile to be long. It turns out that it is.
I ask you during this period before launch, to get out of your comfort zone and try multiple clients. There are many tradeoffs between them, and you’re going to have to get your hands dirty to find out which works best for you. As discussed above, Ethereum operates in a multi-client paradigm. To gain the benefits of this paradigm, we need users to run a diverse set of clients (to create a healthy distribution across all the types of clients).
Beyond that, there are anti-correlation incentives built into the protocol. In extreme situations in which a major client accidentally causes validators to either go offline, or commit a slashable offence, if your validator’s behaviour is correlated with that client, you will be penalized much more than if you did something wrong but uncorrelated with others. In other words, in these situations it’s much better to be running a minority client rather than a client with a huge portion of the network.
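As a rough illustration of the shape of this incentive (a simplification, not the exact spec formula), a slashing penalty scales with how much other stake is slashed around the same time:

```python
# Simplified sketch of eth2's anti-correlation penalty: the more stake
# that is slashed around the same time, the more each slashed validator
# loses. This mirrors the spec's proportional scaling but is not the
# exact formula.
def correlated_penalty(balance_eth: float, slashed_fraction: float) -> float:
    # Penalty proportional to ~3x the fraction of stake slashed together,
    # capped at the validator's entire balance.
    return balance_eth * min(3 * slashed_fraction, 1.0)

# An isolated mistake (0.1% of stake slashed) costs ~0.3% of balance...
print(correlated_penalty(32, 0.001))  # ~0.096 ETH
# ...while a correlated failure of a dominant client costs everything.
print(correlated_penalty(32, 0.40))   # 32.0 ETH
```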
To be absolutely clear — if there is more than one viable and secure client, it is your duty to run minority client software to promote a healthy distribution of client software on the network.
Also, don’t be shy. If you run into issues with the docs, let someone know. If you see a typo, submit a PR. If something crashes or a bug pops up, please-please-please report it on github or the client discord. You are the beta users and with your help we can make this better for everyone.
Testnets
We are currently running small public devnets, which we restart approximately every one to two weeks. I say “devnet” because they are first and foremost for client team developers to work through bugs, optimizations, etc. They are public and you’re welcome to join, but be aware that they aren’t yet long-lived like Goerli or Rinkeby. The most recent launch, led by Afri Schoedon, is the Witti testnet running the v0.11 spec (check out the README here if you want to run some nodes).
Client teams are actively upgrading to the v0.12 spec which integrates the latest version of the IETF BLS standard. From there, we’ll transition the devnets to v0.12 as we continue to increase the size of the nets, inducing more and more load on the clients. After we have 2-3 clients reliably kicking off successful v0.12 nets and running at high load, we’ll do a more public testnet where you will run most of the nodes and validators. The intention here is to create a long-standing multi-client testnet that mimics mainnet as much as possible (where users can reliably practice running nodes and test anything else they want). The ideal is to spin this up just once and to sort through any failures while maintaining the net. But depending on the presence, and severity, of failures, we might need a couple runs before we get there.
In addition to the normal testnets, we’ll also provide an incentivized “attack net” where client teams operate a stable testnet, and we invite you to try to break it in a number of different ways. For successful attacks, the EF will provide ETH rewards. More info on this soon — so stay tuned!
While tooling for eth2 is quite nascent, it’s an exciting and growing effort. As mentioned above, tooling often stems from a client codebase and the efforts of the client team, but more and more hands are getting involved every day. To better interact with, understand, secure, and enhance eth2, we as a community need to build out and build upon basic eth2 tooling.
I want to give a huge shout-out to the teams and individuals that have already provided immense value with their eth2 tooling, and I want to welcome everyone else to build new tools and to extend and enhance what’s already there.
Eth2 tooling is a green-field opportunity. This is an incredible chance to dig in, provide real value, and make your mark.
A number of tools are already in progress, but there’s a great deal more to do! Here’s a sample of some open tooling ideas:
- Eth2 validator alerts: provide a service that alerts node operators when their validators are not performing optimally (a minimal sketch follows below)
- Validator deposit tracking: help bridge between the current Ethereum and eth2 explorers by tracking the validator deposit process
- Validator protection via proxies: use a proxy to track validator messages to ensure your client can’t send unsafe messages
And so much more — this is the type of contribution that is not limited to a spec. Creativity is important. If you want to contribute, talk to eth2 client teams to get started.
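To illustrate the first idea above, here’s a hypothetical sketch of a validator alert loop in Python. The endpoint path and response shape are invented for illustration, since each client currently exposes its own HTTP API; check your client’s docs for the real equivalent:

```python
# Hypothetical validator alert: poll a beacon node for a validator's
# balance and warn when it drops (i.e. the validator is being penalized).
import time

import requests

BEACON_NODE = "http://localhost:5052"  # assumed local beacon node API
VALIDATOR_INDEX = 42                   # the validator to watch

def get_balance() -> int:
    # Illustrative endpoint; not a standardized API path.
    resp = requests.get(f"{BEACON_NODE}/validator/{VALIDATOR_INDEX}/balance")
    resp.raise_for_status()
    return int(resp.json()["balance"])

previous = get_balance()
while True:
    time.sleep(60)
    current = get_balance()
    if current < previous:
        print(f"ALERT: validator {VALIDATOR_INDEX} lost "
              f"{previous - current} Gwei since the last check!")
    previous = current
```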
State of eth1+eth2 integrations
In an Ethereum client today (e.g. geth, etc) almost all of the complexity lies in handling user-level activity — transaction pool, block creation, virtual machine computation, and state storage/retrieval. The actual core consensus — proof-of-work — is rather simple in protocol. Most of the complexity is handled by sophisticated hardware outside of the core protocol.
On the other hand, an eth2 client is entirely consensus. In proof-of-stake and sharding, many complexities are brought in-protocol to achieve the goals of a scalable consensus.
This separation of concerns makes for a beautiful pairing of eth1 and eth2 clients.
There is initial work being done on merging the two by members of the geth (EF) and TXRX (ConsenSys) teams. The work involves (1) defining a communication protocol between eth1 and eth2 clients, (2) adding a consensus engine to eth1 clients that can be controlled via the communication protocol, and (3) prototyping and simulating eth2 phase 1 behaviour to test the coupling. We expect to see some concrete results on these points this summer.
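To give a feel for what point (2) might look like, here’s a hypothetical sketch of the split in responsibilities. None of these names come from an actual spec; the real communication protocol is still being defined:

```python
# Hypothetical eth1-engine coupling: the eth2 client owns consensus and
# delegates user-level execution to an eth1 engine over a narrow interface.
from typing import Protocol

class Eth1Engine(Protocol):
    def execute_block(self, block: bytes) -> bool:
        """Run the block's transactions against local state; report validity."""
        ...

    def produce_block(self, parent_hash: bytes) -> bytes:
        """Assemble a new eth1 block on top of the given parent."""
        ...

class Eth2Client:
    def __init__(self, engine: Eth1Engine):
        self.engine = engine  # consensus lives here; execution stays in eth1

    def on_new_block(self, block: bytes) -> bool:
        # Consensus checks (signatures, fork choice) happen on the eth2
        # side; execution is delegated across the communication protocol.
        return self.engine.execute_block(block)
```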
You can read more about the high level eth1+eth2 client relationship here, and about the technical scope of the merger here.
State of execution and communication across shards
As mentioned, the exact path to enable execution across many shards is a hotly researched and debated area. There are many questions to answer. For example:
- How many shards should be enabled with execution?
- For additional shards, do we use EVM or eWASM for the virtual machine?
- How do we efficiently structure and process cross-shard transactions? (a toy sketch of one candidate pattern follows this list)
- What changes do we need to make to the existing EVM to support cross-shard transactions?
- Can/should execution and account structures be generally extensible?
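To ground the cross-shard transaction question, here’s a toy sketch of one commonly discussed pattern: receipt-based transfers, where funds leave one shard and are later claimed on another. Everything here is illustrative; a real design also has to prove each receipt against a crosslinked shard state root:

```python
# Toy receipt-based cross-shard transfer. Funds are deducted on shard A,
# travel as a receipt, and are claimed exactly once on shard B.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Receipt:
    receipt_id: int   # unique id so a receipt cannot be replayed
    recipient: str
    amount: int

@dataclass
class Shard:
    balances: dict = field(default_factory=dict)
    consumed: set = field(default_factory=set)  # receipt ids already claimed
    next_receipt_id: int = 0

    def send_cross_shard(self, sender: str, recipient: str, amount: int) -> Receipt:
        assert self.balances.get(sender, 0) >= amount
        self.balances[sender] -= amount  # funds leave this shard...
        receipt = Receipt(self.next_receipt_id, recipient, amount)
        self.next_receipt_id += 1
        return receipt                   # ...and travel as a receipt

    def claim(self, receipt: Receipt) -> None:
        # A real design would verify a Merkle proof of the receipt against
        # a crosslinked state root; here we only model replay protection.
        assert receipt.receipt_id not in self.consumed
        self.consumed.add(receipt.receipt_id)
        self.balances[receipt.recipient] = (
            self.balances.get(receipt.recipient, 0) + receipt.amount
        )

shard_a = Shard(balances={"alice": 10})
shard_b = Shard()
receipt = shard_a.send_cross_shard("alice", "bob", 7)
shard_b.claim(receipt)
assert shard_b.balances["bob"] == 7
```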
The eWASM (EF) and Quilt (ConsenSys) teams have conducted a great deal of research in these areas over the past 12 months. It turns out the solution domain is huge, and although we now have a good handle on the breadth of the domain, the recent focus has been on digging into simple, tangible solutions to be able to test, prototype, and really ground the conversation. Out of this was born eWASM’s Eth1x64 initiative (read about the high-level view of the project and check out some recent specs under discussion).
There has been rapid progress in bringing the abstract cross-shard ideas into concrete specs for discussion and ultimately prototypes. Keep an eye on this area of progress, especially if you are a dapp developer. We intend to have something you can understand, play with, and provide feedback on in the coming months.
Relationship of Stateless Ethereum to eth2
There is another major R&D effort happening in parallel to eth2 called “Stateless Ethereum”. Stateless Ethereum is an effort to solve the state size growth problem. It allows participants to validate blocks without having to store the entirety of the state locally. Right now, there is an implicit input in the Ethereum state transition function: the entirety of the state. With Stateless Ethereum, proofs (witnesses) about the requisite state will be provided inside of blocks. This allows a block to be transitioned/validated as a pure function of just the block.
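To show the core mechanic, here’s the Merkle-branch check (essentially the `is_valid_merkle_branch` function from the eth2 specs) that lets a stateless verifier confirm a claimed piece of state against a root it already trusts. Eth1 witnesses will likely use a different tree structure, but the principle is the same:

```python
# Verify that `leaf` really sits at `index` in a Merkle tree with the
# given `root`, using only the sibling hashes in `branch` as a witness.
from hashlib import sha256

def hash_node(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()

def is_valid_merkle_branch(leaf: bytes, branch: list, depth: int,
                           index: int, root: bytes) -> bool:
    value = leaf
    for i in range(depth):
        if (index >> i) % 2:
            value = hash_node(branch[i], value)  # leaf side is the right child
        else:
            value = hash_node(value, branch[i])  # leaf side is the left child
    return value == root

# Toy usage: prove the first of two leaves belongs to the tree.
a, b = sha256(b"a").digest(), sha256(b"b").digest()
root = hash_node(a, b)
assert is_valid_merkle_branch(a, [b], depth=1, index=0, root=root)
```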
What this translates to for users is a world in which you can follow the chain, and even follow portions of the state that you care about, without storing all of the state. Some network participants likely will store all of the state (block producers, block explorers, state-for-a-fee providers), but the vast majority of participants will become some shade (less than full) of stateful.
For eth2, this is an important technical mechanism to ensure that nodes and validators can validate and secure the protocol without the burden of storing the full user state of each shard. Instead, validators will likely opt-in to being block producers for some set of shards, while the baseline validator may only validate stateless blocks. Stateless Ethereum is an incredibly valuable addition to the eth2 vision, keeping the base of the sharded protocol very thin. While we’re planning on eth2 operating statelessly, we do have a few options in the event that the stateless path does not ultimately prove viable (although I’m pretty confident in statelessness myself 😄).
I won’t get any deeper into Stateless Ethereum for this post. Just know that it’s an exciting parallel R&D path to ensure Ethereum’s sustainability in the long term. If you’re curious to learn more, check out Griffin’s The 1.x Files blog series.
tl;dr
Eth2 is a huge undertaking to provide an upgraded, next-generation, highly-scalable and secure, decentralized consensus to Ethereum. There are dozens of teams and hundreds of individuals working each day to make this a reality. The path we’ve chosen is difficult, but immense progress has and continues to be made.
The core of this new mechanism is just around the corner.
If you’re an aspiring validator, now is the time to dig in. Support the multi-client paradigm by trying out multiple clients, and help instill a strong base of rich client diversity from eth2’s genesis.
If you’re a user or dapp developer, keep pushing on Ethereum today while we continue to prepare this more secure and scalable context for you. When the time comes, the swap to eth2 will be as seamless as possible.
Thank you to the incredible teams and individuals keeping Ethereum alive and well today; thank you to those of you preparing for Ethereum’s future in eth2; and thank you to all the users and developers that make Ethereum awesome 🚀