Keep it coming
tldr;
Runtime Verification audit and formal verification of deposit contract
Runtime Verification recently completed their audit and formal verification of the eth2 deposit contract bytecode. This is a significant milestone on the path to the eth2 Phase 0 mainnet. Now that the work is done, I encourage the community to review it and give feedback: if you find discrepancies or inaccuracies in the formal specification, please open an issue on the eth2 specs repository.
The formal semantics, written in the K Framework, specify the precise behaviors the EVM bytecode should exhibit and prove that those behaviors hold. These include input validation, updates to the incremental Merkle tree, logs, and more. Take a look here for a (semi) high-level discussion of what is specified, and dig in here for the full formal K specification.
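To make the "incremental Merkle tree" piece concrete, here is a minimal Python sketch of the technique the deposit contract uses. This is illustrative only, not the verified bytecode or the exact contract code (the real contract also mixes the deposit count into the final root, omitted here). The idea: store just one node per tree level, so each deposit costs O(TREE_DEPTH) hashes instead of a full tree rebuild.

```python
from hashlib import sha256

TREE_DEPTH = 32  # matches the deposit contract's fixed tree depth

def hash_pair(left: bytes, right: bytes) -> bytes:
    return sha256(left + right).digest()

# Precomputed roots of all-zero subtrees, one per level.
ZERO_HASHES = [b"\x00" * 32]
for _ in range(TREE_DEPTH - 1):
    ZERO_HASHES.append(hash_pair(ZERO_HASHES[-1], ZERO_HASHES[-1]))

class IncrementalMerkleTree:
    def __init__(self):
        self.branch = [b"\x00" * 32] * TREE_DEPTH  # rightmost "frontier" node per level
        self.count = 0

    def insert(self, leaf: bytes) -> None:
        """Add a leaf, touching only O(TREE_DEPTH) frontier nodes."""
        self.count += 1
        size = self.count
        node = leaf
        for depth in range(TREE_DEPTH):
            if size % 2 == 1:
                self.branch[depth] = node  # becomes a left child: store and stop
                return
            node = hash_pair(self.branch[depth], node)
            size //= 2

    def root(self) -> bytes:
        """Root of the fixed-depth tree, padding empty slots with zero subtrees."""
        node = b"\x00" * 32
        size = self.count
        for depth in range(TREE_DEPTH):
            if size % 2 == 1:
                node = hash_pair(self.branch[depth], node)
            else:
                node = hash_pair(node, ZERO_HASHES[depth])
            size //= 2
        return node
```

The formal K specification is what pins down that the deployed bytecode actually implements this behavior (and rejects invalid inputs) in every reachable state.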
Thank you to Daejun Park (Runtime Verification) for leading the effort, and to Martin Lundfall and Carl Beekhuizen for their many hours of feedback and review along the way.
Again, if this is your cup of tea, now is the time to give input and feedback on the formal verification, so please take a look.
The word of the month is “optimization”
The past month has been all about optimizations. A 10x optimization here and a 100x optimization there may not resonate deeply with the broader Ethereum community today, but this stage of development is just as important as any other in getting us to launch.
Beacon chain optimizations are essential
(or: why we can't just max out our machines running the beacon chain)
The beacon chain, the core of eth2, is a requisite component of the entire sharded system. To sync any shard, whether one shard or many, a client must also sync the beacon chain. Thus, for the beacon chain and a handful of shards to run on a consumer machine, the beacon chain must keep its resource consumption modest, even under high validator participation (~300k+ validators).
To this end, much of the eth2 client teams' work over the past month has gone into optimizations, reducing the resource requirements of phase 0, the beacon chain.
I'm happy to report that things are looking great. What follows is not exhaustive, just a sample to give you an idea of the work underway.
Lighthouse runs 100k validators with ease
Lighthouse shut down their ~16k validator testnet a few weeks ago after an attestation gossip relay loop caused nodes to essentially self-DDoS. Sigma Prime quickly patched the issue and moved on to bigger and better things: a 100k validator testnet! The past two weeks have been spent on optimizations to make this real-world scale testnet a reality.
The goal of each successive Lighthouse testnet is to ensure that thousands of validators can run comfortably on a small VPS with 2 CPUs and 8GB of RAM. Initial tests with 100k validators saw clients use a consistent 8GB of RAM; after a few days of optimizations, Paul brought this down to a stable 2.5GB, with further reductions expected soon. Lighthouse also achieved a 70% speedup in state hashing, which, along with BLS signature verification, is proving to be the primary computational bottleneck for eth2 clients.
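State hashing is expensive because the beacon state is Merkleized, and most of the win comes from not re-hashing what hasn't changed. As a rough illustration (my toy sketch, not Lighthouse's actual implementation), here is the caching idea in Python: memoize subtree roots so that editing one leaf only forces O(log n) re-hashes along its path instead of O(n).

```python
from hashlib import sha256

def hash_pair(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()

def cached_root(leaves, cache, lo, hi):
    """Merkle root of leaves[lo:hi] (hi - lo must be a power of two).
    Subtree roots are memoized in `cache`, keyed by their (lo, hi) range."""
    if (lo, hi) in cache:
        return cache[(lo, hi)]
    if hi - lo == 1:
        root = leaves[lo]
    else:
        mid = (lo + hi) // 2
        root = hash_pair(cached_root(leaves, cache, lo, mid),
                         cached_root(leaves, cache, mid, hi))
    cache[(lo, hi)] = root
    return root

def update_leaf(leaves, cache, i, value):
    """Write a leaf and evict only the cached ranges containing it;
    the next cached_root call re-hashes just that one path."""
    leaves[i] = value
    for rng in [r for r in cache if r[0] <= i < r[1]]:
        del cache[rng]
```

Real clients do this at the SSZ level with tree-structured state and more careful invalidation, but the asymptotics are the point.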
The next Lighthouse testnet launch is imminent. Join them on their discord to follow along with the developments.
Prysmatic testnet still going strong and sync greatly improved
A couple of weeks ago, the current Prysm testnet celebrated its 100,000th slot with more than 28k validators validating. Today, the testnet passed slot 180k and has more than 35k active validators. Maintaining a public testnet while continuing to ship updates, optimizations, stability patches, etc. is a feat in itself.
There is a lot of tangible progress in Prysm. I've spoken with many validators over the past few months, and from their perspective the client is visibly improving. One particularly exciting item is the enhanced sync speed: the Prysmatic team took their client from roughly 0.3 blocks/second to more than 20 blocks/second. This greatly improves the validator experience, letting them connect and start contributing to the network much more quickly.
Another exciting addition to the Prysm testnet is alethio's new eth2 node monitor, eth2stats.io. This is an opt-in service that lets nodes aggregate statistics in a single place, which will help us better understand the state of the testnets, and ultimately the eth2 mainnet.
Don't take my word for it! Download it and try it out for yourself.
Everyone loves proto_array
The core eth2 specification often (and intentionally) describes expected behavior non-optimally. The spec code is optimized for clarity of intention rather than for efficiency.
A specification describes what correct behavior is, whereas an algorithm is a procedure for achieving that behavior. Many different algorithms can faithfully implement the same specification, so the eth2 spec leaves room for a wide range of implementations of each component as client teams weigh trade-offs (e.g. computational complexity, memory footprint, implementation complexity, etc.).
A prime example is the fork choice, the spec used to find the head of the chain. The eth2 spec describes the expected behavior with a simple, naive algorithm to clearly show the moving parts and edge cases, for example: how to update weights when a new attestation arrives, what to do when a new block is finalized, and so on. A direct implementation of the spec algorithm would never meet the operational needs of eth2. Instead, client teams must think deeply about the computational trade-offs in the context of their client and implement a more sophisticated algorithm that meets those needs.
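For a sense of what "simple but naive" means here, the spec-style head-finding loop looks roughly like the sketch below. The `Store` fields are hypothetical simplifications (the real spec carries more machinery around justification, tie-breaking, and attestation filtering); the point is the shape of the computation: walk from the justified block, greedily following the child with the greatest attestation weight, recomputing weights from scratch at every step.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Block:
    parent: Optional[str]  # parent block root; None for genesis

@dataclass
class Store:
    blocks: Dict[str, Block]      # block root -> block
    latest_votes: Dict[str, str]  # validator -> root of its latest attestation
    justified_root: str

def get_weight(store: Store, root: str) -> int:
    """Count latest votes for `root` or any of its descendants."""
    def has_ancestor(node: str, ancestor: str) -> bool:
        current: Optional[str] = node
        while current is not None:
            if current == ancestor:
                return True
            current = store.blocks[current].parent
        return False
    return sum(has_ancestor(vote, root) for vote in store.latest_votes.values())

def get_head(store: Store) -> str:
    """Greedy walk from the justified root toward the heaviest leaf."""
    head = store.justified_root
    while True:
        children = [r for r, b in store.blocks.items() if b.parent == head]
        if not children:
            return head
        # Recomputes subtree weights on every step: far too slow at scale.
        head = max(children, key=lambda r: get_weight(store, r))
```

Clear, easy to audit, and hopelessly expensive once you have thousands of blocks and hundreds of thousands of votes, which is exactly why optimized algorithms matter.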
Luckily for client teams, about 12 months ago Protolambda implemented a bunch of different fork choice algorithms, documenting the benefits and trade-offs of each. Recently, Paul from Sigma Prime identified a significant bottleneck in Lighthouse's fork choice algorithm and went searching for something better. He dug up proto_array in proto's old collection.
It took some work to port proto_array to the latest spec, but once integrated, proto_array was shown "to operate in orders of magnitude less time and significantly reduce database reads." After the initial integration into Lighthouse, it was quickly picked up by Prysmatic as well and is available in their latest release. Given the algorithm's clear advantages over the alternatives, proto_array is fast becoming a crowd favorite, and I fully expect the other teams to pick it up soon!
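The core trick, as I understand it (a loose sketch, not any team's actual code): keep the block tree in a flat array ordered so parents precede children, and apply attestation changes as per-node deltas in a single backwards pass that also bubbles each delta up to the parent. The real implementations additionally track best-child/best-descendant pointers per node, so finding the head becomes a single lookup rather than a tree walk.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ProtoNode:
    parent: Optional[int]  # index of the parent in the flat array
    weight: int = 0

def apply_score_changes(nodes: List[ProtoNode], deltas: List[int]) -> None:
    """One O(n) sweep from the end of the array (leaves) toward the
    start (root): apply each node's vote delta, then add that delta to
    its parent's slot so ancestor weights update in the same pass."""
    for i in range(len(nodes) - 1, -1, -1):
        nodes[i].weight += deltas[i]
        if nodes[i].parent is not None:
            deltas[nodes[i].parent] += deltas[i]

# Chain: 0 <- 1 <- 2; one new vote lands on node 2 and
# propagates to both ancestors in a single pass.
nodes = [ProtoNode(None), ProtoNode(0), ProtoNode(1)]
apply_score_changes(nodes, [0, 0, 1])
assert [n.weight for n in nodes] == [1, 1, 1]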
Continuing Phase 2 research — Quilt, eWASM, and now TXRX
Phase 2 of eth2 is the addition of state and execution to the sharded eth2 ecosystem. Although some core principles are relatively settled (e.g. shards communicating via crosslinks and Merkle proofs), the Phase 2 design landscape is still relatively wide open. Quilt (the ConsenSys research team) and eWASM (the Ethereum Foundation research team) have spent much of the past year researching and refining this broad design space, in parallel with the ongoing work to specify and build out Phases 0 and 1.
To that end, there has been a flurry of recent activity: public calls, discussions, and posts on ethresear.ch. There are some great resources available to help catch up on this landscape.
In addition to Quilt and eWASM, the newly formed TXRX (ConsenSys research team) is dedicating a portion of its efforts to Phase 2 research as well, initially focusing on better understanding cross-shard transaction complexity and on researching possible paths for integrating eth1 into eth2.
All of the Phase 2 research and development is a land of opportunity. There is a real chance to dig deep and make an impact. Throughout the year, expect more concrete specifications as well as developer playgrounds to sink your teeth into.
Whiteblock releases libp2p gossipsub test results
This week, Whiteblock released libp2p gossipsub test results, the culmination of a grant co-funded by ConsenSys and the Ethereum Foundation. The work aims to validate the gossipsub algorithm against eth2's requirements and to provide insight into its performance limits, informing follow-up tests and algorithmic improvements.
In short, the results of this wave of testing look solid, but further work is needed to better understand how message propagation scales with network size. Check out the full report for their methodology, network topology, experiments, and results!
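To build intuition for why propagation should scale gracefully, consider a toy model (mine, not Whiteblock's methodology): in a random mesh where every peer forwards each message to a fixed number of neighbors (degree 6 below, loosely mirroring gossipsub's default mesh degree), the number of gossip rounds needed to reach the whole network grows roughly with log(N), not N.

```python
import math
import random

def propagation_rounds(n_peers: int, degree: int = 6, seed: int = 0) -> int:
    """Rounds of gossip until every peer in a random mesh has the message."""
    rng = random.Random(seed)
    mesh = {
        p: rng.sample([q for q in range(n_peers) if q != p], degree)
        for p in range(n_peers)
    }
    received = {0}   # peer 0 publishes the message
    frontier = {0}
    rounds = 0
    while len(received) < n_peers:
        frontier = {q for p in frontier for q in mesh[p]} - received
        if not frontier:
            break  # mesh is disconnected (unlikely at reasonable degree)
        received |= frontier
        rounds += 1
    return rounds

for n in (100, 1_000, 10_000):
    print(n, propagation_rounds(n), "rounds; log2(n) ≈", round(math.log2(n), 1))
```

Real-world behavior adds latency, bandwidth, and amplification concerns that a toy like this ignores, which is exactly what the Whiteblock testing is for.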
An Exciting Spring Ahead!
This spring is packed with conferences, hackathons, eth2 bounties, and more! There will be a solid crew of eth2 researchers and engineers at each of these events. Please come chat with us! We'd love to talk about engineering progress, validating on testnets, what to expect this year, and anything else on your mind.
Now is a great time to get involved! Many of the clients are in a testnet phase, so there are plenty of tools to build, experiments to run, and fun to be had.
Keep an eye out for the many events this spring with solid representation from eth2.
🚀