
Navigating the Evolution of Metacoins and the Multichain Future

Special acknowledgment to Vlad Zamfir for much of the conceptualization behind multi-chain cryptoeconomic frameworks

To begin with, a brief history lesson. In October 2013, while visiting Israel as part of my journey around the Bitcoin landscape, I became acquainted with the core teams behind the colored coins and Mastercoin initiatives. Once I fully grasped Mastercoin and its possibilities, I was instantly captivated by the protocol’s power; at the same time, I found it frustrating that the protocol was structured as a fragmented collection of “features”, offering users a considerable amount of functionality but no freedom to step outside those predefined boxes. Aspiring to enhance Mastercoin’s capabilities, I drafted a proposal for a concept referred to as “ultimate scripting” – a general-purpose stack-based programming language that Mastercoin could integrate to let two parties execute a contract based on any mathematical formula. The design would cover savings wallets, contracts for difference, various forms of gambling, among other functions. It was still rather constrained, permitting only three stages (open, fill, resolve), no internal memory, and only two parties per contract, but it represented the first true seed of the Ethereum concept.

I presented the proposal to the Mastercoin team. They appreciated it, but chose not to adopt it too quickly out of a desire for caution and deliberation, a principle that the project maintains to this day and which David Johnston highlighted at the recent Tel Aviv conference as Mastercoin’s main distinguishing attribute. Consequently, I resolved to strike out on my own and simply develop the concept myself. Over the next three weeks, I produced the original Ethereum whitepaper (regrettably now lost, though an earlier version can be found here). The fundamental components were all in place, except that the programming language was register-based rather than stack-based, and, since I was/am not proficient enough in peer-to-peer networking to build an autonomous blockchain client from the ground up, it was intended to function as a meta-protocol on top of Primecoin rather than Bitcoin, since I aimed to address the concerns of Bitcoin developers who were frustrated by meta-protocols adding excessive data to the blockchain.

Once capable developers like Gavin Wood and Jeffrey Wilcke, who did not share my limitations in writing peer-to-peer networking code, joined the initiative, and when sufficient interest emerged that I realized there would be funds to employ more, I decided to promptly transition to a standalone blockchain. I explained the rationale behind this decision in my whitepaper in early January:

The benefit of a metacoin protocol is that it can permit more sophisticated transaction types, including custom currencies, decentralized exchanges, derivatives, etc, that are unattainable atop Bitcoin itself. Nevertheless, metacoins built on Bitcoin suffer from a critical drawback: simplified payment verification, already challenging with colored coins, becomes outright impossible with a metacoin. The reason is that while one can use SPV to determine that there is a transaction sending 30 metacoins to address X, that alone does not confirm that address X now holds 30 metacoins; what if the sender of the transaction did not hold 30 metacoins in the first place, rendering the transaction invalid? Determining any aspect of the current state essentially requires reviewing all transactions since the metacoin’s original inception to identify which are valid and which are not. This makes it impractical to maintain a truly secure client without downloading the entire 12 GB Bitcoin blockchain.
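To make the quoted argument concrete, here is a minimal sketch, in Python, of the full replay a metacoin client is forced to perform just to learn a single balance; the field names ("from", "to", "value") and the genesis-allocation parameter are hypothetical, and SPV alone provides no shortcut around this loop:

```python
# Minimal sketch (hypothetical field names) of metacoin state computation.
# There is no SPV shortcut: to know any single balance, a client must replay
# every metacoin transaction since the protocol's inception.

def compute_metacoin_balances(all_metacoin_txs, genesis_balances=None):
    """Replay the full transaction history to derive the current balances."""
    balances = dict(genesis_balances or {})
    for tx in all_metacoin_txs:                    # every tx since genesis, in order
        sender, recipient, amount = tx["from"], tx["to"], tx["value"]
        if balances.get(sender, 0) < amount:
            continue                               # invalid under metacoin rules: ignore
        balances[sender] = balances.get(sender, 0) - amount
        balances[recipient] = balances.get(recipient, 0) + amount
    return balances

# A Bitcoin SPV proof can show that a transfer *exists* in some block, but only
# the full replay above shows whether the sender actually had the 30 metacoins.
```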

In essence, metacoins are ineffective for light clients, making them rather insecure for smartphones, users with outdated computers, internet-of-things devices, and eventually, once the blockchain grows large enough, for desktop users as well. In contrast, Ethereum’s independent blockchain is specifically designed with a highly advanced light client protocol; unlike the situation with meta-protocols, contracts atop Ethereum fully inherit the Ethereum blockchain’s light client-friendly properties. Finally, much later on, I realized that by establishing an independent blockchain we can also experiment with enhanced versions of GHOST-style protocols, safely reducing the block time to 12 seconds.

So what’s the significance of this narrative? Essentially, had circumstances been different, we might easily have chosen the path of being “built on Bitcoin” from the very beginning (in fact, we could still pivot in that direction if we wished), but substantial technical reasons existed at the time which led us to believe that creating an independent blockchain was the more prudent course, and these justifications remain, in nearly the same form, today.

Since several readers have been anticipating a response regarding how Ethereum as an autonomous blockchain would remain advantageous despite the recent announcement of a metacoin utilizing Ethereum technology, here it is: scalability. When using a metacoin on BTC, you benefit from easier interaction back and forth with the Bitcoin blockchain, but creating an independent chain gives you the capacity to provide significantly stronger security guarantees, especially for less powerful devices. There are certainly scenarios where a greater level of connectivity with BTC is vital; for those instances, a metacoin would indeed be superior (though keep in mind that even an independent blockchain can interact quite well with BTC using virtually the same technology that will be elaborated upon in the remainder of this blog post). Thus, overall, it will undoubtedly benefit the ecosystem if the same standardized EVM is available across all platforms.

Beyond 1.0

However, in the long run, even light clients are a flawed solution. If we genuinely expect cryptoeconomic platforms to become a foundational layer for a substantial volume of global infrastructure, then there are likely to be so many crypto-transactions overall that no individual computer, except perhaps a few extensive server farms operated by companies like Google and Amazon, will be powerful enough to process them all. Consequently, we need to break the fundamental constraint of cryptocurrency: the requirement that every node process every transaction. Overcoming this constraint is pivotal in transforming a cryptoeconomic platform’s database from merely massively replicated to genuinely distributed. Nevertheless, surmounting this obstacle is challenging, especially if we wish to preserve the expectation that all components of the ecosystem should reinforce each other’s security.

To realize this objective, there are three primary tactics:

  1. Creating protocols built on Ethereum that utilize Ethereum solely as a last-resort auditing backend, thereby reducing transaction costs.
  2. Transforming the blockchain into something significantly resembling a high-dimensional interconnected mesh where all segments of the database mutually enhance each other’s security over time.
  3. Reverting to a one-protocol-per-chain (or one-service-per-chain) model, and devising systems for the chains to (1) communicate and (2) share consensus stability.

Among these approaches, it is important to note that only (1) is ultimately compatible with maintaining the blockchain in a form analogous to what the Bitcoin and Ethereum protocols currently support. (2) necessitates a significant redesign of the basic infrastructure, while (3) demands the generation of thousands of chains, and for the sake of preventing fragility, the most effective method would be to employ numerous currencies (to ease complexity for the user, we can utilize stable-coins to effectively establish a universal cross-chain currency standard, whereby any minor fluctuations in the stable-coins for users would be depicted in the UI as interest or demurrage, ensuring that users only track one unit of account).

We have already examined (1) and (2) in previous articles, hence today we shall provide a preliminary overview of some principles related to (3).

Multichain

This model bears resemblance in many aspects to the Bitshares framework, except that we do not presuppose that DPOS (or any other POS) will provide security for arbitrarily small chains. Instead, acknowledging the overarching strong similarities between cryptoeconomics and broader societal institutions, particularly legal frameworks, we note that there exists a substantial body of shareholder legislation protecting minority shareholders in real companies from the equivalent of a 51% attack (namely, 51% of shareholders voting to appropriate all funds for themselves). We aspire to replicate a similar system here by having every chain, to some extent, “monitor” every other chain, either directly or indirectly through an interlinked transitive graph. The required policing is straightforward: it focuses on preventing double-spending and censorship attacks by local majority coalitions, so the necessary protective mechanisms can be implemented entirely in code.

Before delving into the intricate issue of inter-chain security, let us first address what eventually emerges as a comparatively simpler matter: inter-chain interaction. What do we mean by multiple chains “interacting”? Formally, this term could encompass one of two concepts:

  1. Entities internal to chain A (i.e., scripts, contracts) can securely obtain information about the status of chain B (information transfer)
  2. It is feasible to generate a pair of transactions, T in A and T’ in B, such that either both T and T’ are confirmed or neither is (atomic transactions)

A sufficiently broad implementation of (1) implies (2), as “T’ was (or was not) confirmed in B” constitutes a fact regarding the status of chain B. The most straightforward method to achieve this is via Merkle trees, which are elaborated upon further here and here; essentially, Merkle trees enable the complete state of a blockchain to be hashed into the block header in such a manner that one can generate a “proof” that a specific value is located at a particular spot in the tree, which is logarithmic in size compared to the entire state (i.e., at most a few kilobytes in length). The fundamental idea is that contracts in one chain authenticate these Merkle tree proofs of contracts within the other chain.
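As a rough illustration of the kind of check a contract would perform, here is a minimal Merkle-branch verification sketch in Python; the binary-tree layout, the choice of SHA-256, and the proof format are assumptions made purely for the example:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_proof(leaf: bytes, proof, root: bytes) -> bool:
    """Check that `leaf` is committed under `root`.

    `proof` is a list of (sibling_hash, sibling_is_left) pairs walking from
    the leaf up to the root, so its size is logarithmic in the state size.
    """
    h = sha256(leaf)
    for sibling, sibling_is_left in proof:
        h = sha256(sibling + h) if sibling_is_left else sha256(h + sibling)
    return h == root
```

A contract on chain A that already trusts a block header of chain B (and hence its state root) can accept any value accompanied by such a proof without ever seeing the rest of chain B’s state.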

A significant challenge that varies in difficulty depending on consensus algorithms is how a contract in one chain can validate the actual blocks in another chain. Essentially, this results in a contract functioning as a complete “light client” for the other chain, processing blocks within that chain and probabilistically verifying transactions (and monitoring challenges) to ensure security. For this system to be effective, a certain amount of proof of work must be present on each block, making it impossible to cheaply generate numerous blocks that might be difficult to declare invalid; as a general guideline, the effort required by the block producer to create a block should surpass the total cost of the network combined to reject it.
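A minimal sketch of that “enough work per block” rule, assuming a Bitcoin-style double-SHA-256 header hash; the function names and the minimum-work floor are illustrative, not part of any existing protocol:

```python
import hashlib

def pow_hash(header: bytes) -> int:
    """Bitcoin-style double SHA-256 of a serialized block header, as an integer."""
    return int.from_bytes(
        hashlib.sha256(hashlib.sha256(header).digest()).digest(), "big"
    )

def accept_foreign_block(header: bytes, target: int, min_work: int) -> bool:
    """Accept a header from the other chain only if it meets its own difficulty
    target AND that target implies at least `min_work` expected hashes, so that
    feeding the contract junk blocks is never cheaper than rejecting them."""
    meets_target = pow_hash(header) <= target
    implied_work = 2 ** 256 // (target + 1)   # expected hashes to find such a block
    return meets_target and implied_work >= min_work
```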

Moreover, it’s essential to point out that contracts are simplistic in nature; they cannot assess reputation, social consensus, or any other such “fuzzy” indicators of whether a specific blockchain is valid. As a result, purely “subjective” consensus, similar to Ripple’s approach, will be challenging to implement in a multi-chain environment. Bitcoin’s proof of work is (entirely in theory, mostly in actual practice) “objective”: there exists a precise definition of the current state (namely, the state obtained by processing the chain with the most proof of work), and any node in the world, observing the totality of available blocks, will arrive at the same conclusion regarding which chain (and thus which state) is valid. Proof-of-stake systems, contrary to what many cryptocurrency developers presume, can indeed be secure, but must be “weakly subjective”: nodes that have been online at least once every N days since the chain’s inception will necessarily reach the same conclusion, but long-inactive nodes and newly started nodes still require a recent checkpoint hash as an initial reference. This is necessary to prevent certain categories of otherwise unavoidable long-range attacks. Weakly subjective consensus works well with contracts-as-automated-light-clients, since contracts are perpetually “online”.

It is essential to recognize that atomic transactions can be supported without information transfer; TierNolan’s secret revelation protocol can accomplish this even between relatively simple chains like BTC and DOGE. Therefore, in general, interaction is not an especially difficult problem.
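For flavor, here is a stripped-down sketch of the hash-lock idea at the core of secret-revelation swaps; the real TierNolan protocol additionally requires time-locked refund paths on both chains, which are omitted here, and all names are illustrative:

```python
import hashlib, os

# Simplified core of a secret-revelation (hash-locked) atomic swap.
# The full protocol also needs time-locked refunds on both chains so that
# either party can recover funds if the other walks away; omitted for brevity.

def make_hashlock():
    secret = os.urandom(32)                       # Alice's secret preimage
    return secret, hashlib.sha256(secret).digest()

def can_claim(revealed_secret: bytes, hashlock: bytes) -> bool:
    """Either chain's claim condition: funds move only if the preimage matches."""
    return hashlib.sha256(revealed_secret).digest() == hashlock

# Flow: Alice locks coins on chain A under sha256(secret); Bob locks coins on
# chain B under the same hash. When Alice claims on B she must reveal `secret`,
# which lets Bob claim on A -- so both transfers happen, or neither does.
secret, lock = make_hashlock()
assert can_claim(secret, lock)
```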

Security

The more significant issue, however, is security. Blockchains are susceptible to 51% attacks, and smaller blockchains are vulnerable to correspondingly cheaper 51% attacks. Ideally, for enhanced security, we would like multiple chains to be able to leverage each other’s security, ensuring that no chain is compromised unless all chains are compromised simultaneously. Within this context, there are two primary paradigms to consider: centralized or decentralized.

Centralized vs. decentralized security models (diagrams)

A centralized model essentially involves each chain, directly or indirectly, relying on a single master chain; advocates of Bitcoin often envision Bitcoin as that central chain, although it may unfortunately have to be something else, since Bitcoin was not designed with the necessary level of general-purpose functionality in mind. Conversely, a decentralized model resembles Ripple’s network of unique node lists, but with chains playing that role: every chain maintains a list of other consensus mechanisms it trusts, and those mechanisms collectively determine block validity.

The centralized model has the advantage of simplicity; the decentralized model offers the benefit of allowing a cryptoeconomy to more readily swap out individual components, preventing it from being locked into obsolete protocols for decades. Nonetheless, the pressing question is: how exactly do we “piggyback” on the security of one or more other chains?

To answer this question, we first introduce a formalism referred to as an assisted scoring function. In general, blockchains operate with a scoring function for blocks, where the highest-scoring block defines the current state. Assisted scoring functions rank blocks based not only on the blocks themselves but also on checkpoints of those blocks in one or more other chains. The overarching concept is to use the checkpoints to establish that a certain fork, despite appearing dominant from the perspective of the local chain, can be shown via the checkpointing process to have originated later than a competing fork.
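A toy sketch of what such a function might look like, anticipating the checkpoint-timestamp idea in the next paragraph; every name, the use of chain length as the local score, and the penalty weighting are illustrative assumptions rather than a specification:

```python
from statistics import median

# Toy "assisted scoring function": a fork's rank combines the chain's own
# metric with checkpoint evidence gathered from other chains.

def earliest_checkpoint(block_hash, checkpoint_times):
    """Earliest time this block was checkpointed on any other chain
    (infinity if it never appeared in a checkpoint at all)."""
    times = checkpoint_times.get(block_hash, [])
    return min(times) if times else float("inf")

def assisted_score(fork, checkpoint_times, penalty_weight=1.0):
    """fork: list of block hashes, oldest first.
    checkpoint_times: {block_hash: [timestamps observed on other chains]}.
    Higher is better: a fork whose blocks surfaced on the checkpoint chains
    early outranks an equally long fork that only appeared recently."""
    local_score = len(fork)                  # stand-in for total work / length
    lateness = median(earliest_checkpoint(h, checkpoint_times) for h in fork)
    return local_score - penalty_weight * lateness
```

Of two competing forks of equal local weight, the one whose blocks were recorded on the checkpoint chains earlier ends up with the higher score.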

A straightforward method would be for a node to penalize forks whose blocks are significantly distant from one another in time, with the time of a block determined by the median of its earliest known checkpoints across the other chains; this would detect and penalize forks created after the fact. Nonetheless, this approach presents two challenges:

  1. An attacker can submit the hashes of the blocks to the checkpoint chains on time and only reveal the blocks themselves later
  2. An attacker may simply allow two forks of a blockchain to develop roughly evenly at the same time and eventually advocate for his preferred fork with full intensity

To counter (2), we can require that, for each block number, only the candidate block with the earliest average checkpointing time can be part of the main chain, thereby effectively averting double-spends and even censorship forks; every new block would have to reference the last known previous block. However, this does not address (1). To resolve (1), the most effective general solutions involve some form of “voting on data availability” (see also Jasper den Ouden’s previous post discussing a similar concept); essentially, participants in the checkpointing contract on each of the other chains would Schelling-vote on whether the complete data of the block was available at the time the checkpoint was established, and a checkpoint would be rejected if the vote leans towards “no”.


For a block to be deemed valid, it must receive endorsement through a positive outcome from one or more external Schelling-vote mechanisms

It is worth noting that there are two variants of this tactic. The first involves participants voting solely on data availability (i.e., that every part of the block exists online). This allows voters to be relatively simple-minded and able to vote on availability for any blockchain; the process of ascertaining data availability merely consists of repeatedly executing reverse hash lookup queries on the network until all “leaf nodes” are identified, ensuring nothing is missing. A clever strategy to keep nodes from being complacent during this check is to require them to recompute and vote on the root hash of the block using an alternate hash function. Once all the data is accessible, if the block is invalid, a compact Merkle-tree proof of invalidity can be submitted to the contract (or simply published for nodes to retrieve when deciding whether to accept the provided checkpoint).
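A minimal sketch of that availability check, under illustrative assumptions (SHA-256 reverse lookups, interior nodes that are plain concatenations of 32-byte child hashes, and a hypothetical `fetch_preimage` call standing in for the real peer-to-peer lookup):

```python
import hashlib

def fetch_preimage(h: bytes):
    """Hypothetical network call: ask peers for the data whose SHA-256 is `h`.
    Returns the bytes if some peer serves them, or None if they are missing."""
    raise NotImplementedError  # placeholder for a real p2p lookup

def check_availability(root_hash: bytes, alt_hash=hashlib.blake2b):
    """Walk a block's hash tree top-down by reverse hash lookups.

    Assumes, purely for illustration, that interior nodes are concatenations
    of 32-byte child hashes and that anything else is a leaf. Returns
    (available, alt_root): whether every node could be fetched, plus the root
    recomputed under an *alternate* hash function.
    """
    node = fetch_preimage(root_hash)
    if node is None:
        return False, None
    if len(node) == 0 or len(node) % 32 != 0:          # treat as a leaf
        return True, alt_hash(node).digest()
    child_alts = []
    for i in range(0, len(node), 32):
        ok, child_alt = check_availability(node[i:i + 32], alt_hash)
        if not ok:
            return False, None
        child_alts.append(child_alt)
    return True, alt_hash(b"".join(child_alts)).digest()
```

The voter publishes the recomputed alternate-hash root alongside a “yes” vote, so a voter who never actually fetched the data has nothing correct to publish.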

The second approach is less modular: have the Schelling-vote participants cast votes on block validity. This would streamline the process somewhat but at the expense of making it more chain-dependent: one would need to access the source code of a particular blockchain to partake in the voting. Consequently, this results in a reduced number of voters automatically securing your chain. Regardless of which of these two strategies is adopted, the chain could fund the Schelling-vote contract on the other chain(s) through a cross-chain exchange.

The Scalability Aspect

So far, we still lack any true “scalability”; a chain is only as secure as the number of nodes willing to download (though not process) every block. Naturally, there are remedies for this issue: challenge-response protocols and randomly selected juries, both described in the earlier blog entry on hypercubes, are the two that are presently best known. Nonetheless, the solution presented here is quite different: rather than firmly establishing and institutionalizing a specific algorithm, we will let market forces dictate the outcome.

The term “market” is described as follows:

  1. Chains aim to be secure while also conserving resources. Chains are required to choose one or more Schelling-vote contracts (or potentially other mechanisms) to act as sources of security (demand)
  2. Schelling-vote contracts serve as sources of security (supply). These contracts differ in the level of subsidy needed to attain a particular participation level (price) and in how hard it is for an attacker to bribe or take over the Schelling-vote and force an incorrect outcome (quality); a toy sketch of this market follows below.
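As a purely illustrative reading of the demand/supply framing above, a chain with a fixed security budget might rank the available pools by quality per unit of price; every name and field below is made up for the sketch:

```python
from dataclasses import dataclass

# Illustrative model of the "security market"; nothing here is a real protocol.

@dataclass
class SchellingPool:            # supply side
    name: str
    subsidy_required: float     # price: subsidy needed for a given participation level
    bribe_resistance: float     # quality: rough cost of corrupting the vote

def pick_security_providers(pools, budget):
    """Demand side: a chain spends its security budget on the pools offering
    the most bribe-resistance per unit of subsidy."""
    ranked = sorted(pools,
                    key=lambda p: p.bribe_resistance / p.subsidy_required,
                    reverse=True)
    chosen, spent = [], 0.0
    for pool in ranked:
        if spent + pool.subsidy_required <= budget:
            chosen.append(pool)
            spent += pool.subsidy_required
    return chosen
```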

Consequently, the cryptoeconomic system will naturally gravitate toward Schelling-vote contracts that provide superior security at lower cost, and the participants in those contracts will benefit from having more opportunities to vote. However, merely stating that an incentive exists does not suffice; there is a substantial incentive to cure aging, yet we are still quite far from achieving that. We also need to show that scalability is actually achievable.

The superior of the two algorithms discussed in the article on hypercubes, jury selection, is straightforward. For each block, a random selection of 200 nodes is chosen to vote on it. The group of 200 is nearly as secure as the complete set of voters, given that the specific 200 are not predetermined and an attacker would require control over 40% of the participants to have a meaningful chance of obtaining 50% of any group of 200. If we differentiate voting on data availability from voting on validity, then these 200 can be selected from the entire pool of participants in a singular abstract Schelling-voting contract on the blockchain, as it is feasible to vote on the data availability of a block without needing to fully comprehend the blockchain’s regulations. Thus, instead of every node in the network validating the block, a mere 200 verify the data, and only a limited number of nodes need to search for actual mistakes, since if any one node detects an error, it can produce a proof and alert everyone else.
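A quick back-of-the-envelope check of that claim, modeling each jury seat as an independent draw weighted by the attacker’s share (an approximation; the numbers come from the standard binomial tail, not from any protocol specification):

```python
from math import comb

def p_attacker_majority(jury_size=200, attacker_share=0.4):
    """Probability that a randomly drawn jury contains a strict attacker
    majority, treating each seat as an independent Bernoulli draw."""
    p = attacker_share
    return sum(comb(jury_size, k) * p**k * (1 - p)**(jury_size - k)
               for k in range(jury_size // 2 + 1, jury_size + 1))

# Roughly: at a 1/3 attacker share the chance of a jury majority is on the
# order of one in a million, while at a 40% share it climbs to roughly one
# or two in a thousand per jury -- small, but no longer negligible.
print(p_attacker_majority(200, 1/3), p_attacker_majority(200, 0.40))
```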

Conclusion

What, then, is the ultimate outcome of all this? Fundamentally, we have numerous chains, some dedicated to specific applications and others being general-purpose chains like Ethereum, since certain applications benefit from the extremely tight interoperability that comes with living inside a single virtual machine. Each chain would delegate the essential aspect of consensus to one or more voting mechanisms located on other chains, with these mechanisms structured differently to ensure maximal incorruptibility. Because security can be drawn from all chains, a significant portion of the stake in the entire cryptoeconomy would be employed to safeguard every chain.

It might be essential to accept some level of security compromise; if an attacker possesses 26% of the stake, they can execute a 51% takeover of 51% of the subcontracted voting mechanisms or Schelling-pools available; however, 26% of the stake still represents a considerable security buffer in a hypothetical multi-trillion-dollar cryptoeconomy, and thus, this trade-off might be justifiable.
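For clarity, the 26% figure is simply the product of the two majority thresholds involved:

```latex
0.51 \times 0.51 \approx 0.26
```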

The real advantage of this type of framework is how little needs to be standardized. Each chain, upon establishment, can select a number of Schelling-voting pools to trust and subsidize for security, and through a tailored contract it can adapt to any interface. Merkle trees must be compatible with all the various voting pools, but the only aspect requiring standardization is the hashing algorithm. Different chains can utilize different currencies, using stable-coins to create a reasonably consistent cross-chain unit of value (and, naturally, these stable-coins can themselves interact with other chains that implement assorted types of endogenous and exogenous estimators). Ultimately, the vision is one of thousands of chains, with the various chains “purchasing services” from one another. Services might include data availability verification, timestamping, general information provision (e.g., price feeds, estimators), private data storage (potentially even consensus on private data through secret sharing), and much more. The ultimate decentralized crypto-economy.


