One of the obstacles encountered when developing a new cryptocurrency is settling on a distribution model. Who will receive the currency units, on what schedule, and what mechanism will decide? Despite the importance of this question, it has received notably less consideration than other facets of a currency, such as consensus algorithms and feature sets. The issue is particularly challenging because, like many problems in the cryptocurrency space that have analogs in the broader "real world," cryptocurrencies must also adhere to the principle of decentralization: it is deemed unacceptable for a cryptographic platform's continued operation to depend on the long-term survival of any single entity. Given this rather stringent requirement, how should a new currency distribute itself?
Thus far, this dilemma is only in the earliest stages of discussion. While short-term distribution is a vibrant and evolving debate spanning various categories of asset transfers, one-way distributions, two-way pegs, pre-mines, pre-sales, and additional mechanisms emerging almost monthly, long-term distribution in nearly every cryptocurrency leans toward one of two approaches: nothing at all, or mining. The reason a fixed, unchanging supply is undesirable is evident: it promotes wealth concentration and fosters a static community of holders with limited access for newcomers, and it means the coin has no way to incentivize any specific kind of long-term activity. The concern around mining is more intricate. Cryptocurrency mining typically serves two roles: first, it secures the network, and second, it acts as a distribution model, giving hundreds of thousands of people worldwide a way to acquire a few coins. Until now, mining has been viewed as necessary for the former and a reasonable means of achieving the latter. More recently, however, there has been significant interest and research into proof of stake, including strategies such as transactions-as-proof-of-stake, delegated proof of stake, and a partial solution to the nothing-at-stake problem, Slasher, suggesting that mining may not be a necessity after all. Furthermore, the rise of both ASICs and professional GPU farms is turning mining into an increasingly concentrated and quasi-centralized community, so any new mining-distributed currency will quickly come under the control of professional companies rather than the general public. If these trends continue, and mining proves to be an ineffective distribution model, it will need to be replaced. But with what?
At this point, we have identified several potential solutions:
- Ignore the issue altogether. This is the approach adopted by most proof-of-stake cryptocurrencies and, surprisingly, even those based on proof-of-work today.
- Centralized allocation: allow a central authority to distribute coins based on a specific formula.
- Functional proof-of-work: reward coins to anyone conducting a particular socially beneficial computation, such as weather forecasting. This algorithm need not serve for consensus; it can simply function to hand out coins while proof-of-stake handles the crucial task of maintaining consensus.
- Algorithmic consensus distribution. Essentially, implementing a dynamic, adaptive, consensus-based system to decide who receives new coins.
The last option is theoretically the most robust; currency units can be allocated either to everyone in the world for maximum fairness, or used to fund bounties for protocol improvement, external charitable causes, or anything else. At the same time, however, employing such a mechanism arguably undermines the fundamental purpose of a cryptocurrency: that it is decentralized and does not rely on any specific entity for its continued existence. Hence, we can think of the centralized distributor as an ideal to strive toward, much like the bureaucrat god of economic-efficiency theory, and ask how closely we can approach that ideal while still keeping a structure that is guaranteed, or at least highly likely, to remain stable.
Functional Proof of Work As Distribution: A Relaxed Algorithm
Functional proof of work is likely the more straightforward of the two ideas. Initially, it was thought infeasible to build a proof of work based on useful computation because of the verification problem: a proof-of-work task cannot take more than a few thousand steps, since every node in the network must verify it in order to accept the block. Primecoin was the closest we got, and even there, computing chains of prime numbers has little practical value. Now, thanks to the advent of a programming environment with a built-in computational stack trace mechanism, there is an alternative approach that overcomes this particular hurdle, using spot-checking and deposit forfeits to ensure that work is done correctly. The approximate algorithm runs as follows.
- Assume that F(k) is a function that takes 32 bytes of random data as input, performs some computation requiring n steps (where n is considerably large, say ten billion) and then returns a value R that is socially beneficial.
- To execute one round of mining, start by selecting a random m, and let B be the block header. Let k = sha3(B + m) be the seed.
- Define a function STEP(P, D) -> D' where P denotes the program code, D is a tuple of data, possibly encompassing stack, memory, and program counter, reflecting the state of the computation, and STEP executes one computational step and returns the updated computational state D'.
- Let D[0] = { pc: 0, stack: [], memory: [k] } (or another configuration incorporating k within a different computational model). Let D[i] = STEP(P, D[i-1]) where P is the program that evaluates F. D[n] should contain the result of F.
- Define H as a hash function of D[i]; something like sha3(pc + str(stack) + str(memory)) serves as a quick-and-dirty option. Let H[i] = H(D[i]). Compute all D[i] and all H[i], and let R be the root of a Merkle tree built from all H[i]. If R < 2^256 / diff, where diff is the difficulty parameter, then the work is valid and the miner is entitled to a reward.
Essentially, we capture the state of the program after each computational step (we can optionally have STEP execute a few thousand computational steps at a time for efficiency; this does not seriously compromise anything), build a Merkle tree out of the complete set of states, and examine the root. This is moderately complex to implement; fortunately, though, the Ethereum virtual machine and block structure are already almost an exact replica of this algorithm, so one could take that code and use it nearly directly.
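To make this concrete, here is a minimal sketch of one mining round under this scheme. It assumes hashlib's sha3_256 as a stand-in for the sha3 above; STEP is a toy placeholder rather than a real useful program, and mine_round, merkle_root, and the other names are illustrative, not part of any specification:

```python
import hashlib, os

def sha3(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def STEP(P, D):
    # Toy stand-in for one step of the useful program P: mixes the seed,
    # program counter and stack. A real F would do useful work here.
    pc, stack, memory = D
    stack = (stack + [sha3(memory[0] + pc.to_bytes(8, "big"))])[-4:]
    return (pc + 1, stack, memory)

def H(D):
    # Quick-and-dirty state hash, mirroring sha3(pc + str(stack) + str(memory)).
    pc, stack, memory = D
    return sha3(pc.to_bytes(8, "big") + b"".join(stack) + b"".join(memory))

def merkle_root(leaves):
    # Binary Merkle tree over the per-step state hashes H[i].
    layer = list(leaves)
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        layer = [sha3(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def mine_round(block_header: bytes, n_steps: int, diff: int):
    m = os.urandom(32)                 # pick a random m
    k = sha3(block_header + m)         # seed k = sha3(B + m)
    D = (0, [], [k])                   # D[0] = { pc: 0, stack: [], memory: [k] }
    hashes = [H(D)]
    for _ in range(n_steps):           # compute every D[i] and H[i]
        D = STEP(None, D)
        hashes.append(H(D))
    R = merkle_root(hashes)            # R = Merkle root of all H[i]
    return m, R, int.from_bytes(R, "big") < 2**256 // diff
```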
The method outlined above has an obvious hole: it is not easy to verify, so fraudulent miners could pollute the network with seemingly-valid blocks. Therefore, as an anti-spam and anti-fraud mechanism, we require the following:
- To engage in mining, nodes are required to purchase a "mining bond" priced at N * R (for example, R = 10^18 and N = 100), which is returned to the miner after 10000 blocks. Each mining bond permits the miner to submit one piece of work at a time.
- If a miner presents a seemingly-valid submission, including the m and k values, the Merkle root, and the socially beneficial output, then the value of the mining bond increases by R.
- Any other miner holding a mining bond can check the work independently. If the Merkle root at the end is inconsistent, they can publish a "challenge" transaction consisting of some number (say, 16) of sub-nodes. At that point, the original submitter must either concede (defined as failing to respond within 25 blocks), surrendering the entire mining bond to the challenger, or publish a "response" transaction pointing at the first of those sub-nodes that they disagree with. If a response is published, the challenger goes down one level further, providing the sixteen sub-nodes between the last mutually agreed sub-node and the first disputed sub-node, and so on, until the process converges on the interval between two adjacent values H[i] and H[i+1] in the tree (see the sketch after this list). At that point, the miner must provide the values of D[i] and D[i+1] in a transaction, which is considered valid only if STEP(P, D[i]) = D[i+1].
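Here is a sketch of the narrowing logic in that challenge game, assuming both parties have already computed their full lists of checkpoint hashes and that the blockchain handles the actual message exchange, the 25-block deadlines, and bond forfeiture; find_disputed_step and its structure are my own illustration:

```python
def find_disputed_step(claimed, recomputed, branching=16):
    # claimed: the submitter's checkpoint hashes H[0..n];
    # recomputed: the challenger's independently recomputed hashes.
    lo, hi = 0, len(claimed) - 1        # both agree at lo (D[0] comes from k);
                                        # they disagree at hi (the roots differ)
    while hi - lo > 1:
        # Submitter reveals `branching` intermediate checkpoints in (lo, hi];
        # the challenger points at the first one it disagrees with.
        points = [lo + (hi - lo) * j // branching for j in range(1, branching)] + [hi]
        prev = lo
        for p in points:
            if claimed[p] != recomputed[p]:
                lo, hi = prev, p        # dispute now lies inside (prev, p]
                break
            prev = p
        else:
            return None                 # no disagreement found: challenge fails
    return lo                           # submitter must reveal D[lo] and D[lo+1];
                                        # valid only if STEP(P, D[lo]) == D[lo+1]
```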
The challenge, however, is that the verification process takes as long as the original computation, so there needs to be a reason for anyone to do it. If miners frequently attempt to cheat, then performing spot-checks to claim the deposit (which we assumed to be 100x the reward) makes sense; but if miners anticipate this and therefore stop cheating, the incentive to verify evaporates, leaving miners free to cheat unchecked. This is a classic hawk-dove equilibrium paradox, and it can be resolved with game theory (here, we take mining to have a cost of 0.5 and a reward of 1):
|  | Cheats | Does not cheat |
| --- | --- | --- |
| Checks | (-100, 101) | (0.5, -0.5) |
| Does not check | (1, 0) | (0.5, 0) |

(Rows are the verifier's strategies, columns the miner's; payoffs are listed as (miner, verifier).)
Computing a mixed-strategy equilibrium in this simplified two-player model shows the miner cheating 0.5% of the time and the verifier checking 0.5% of the time; at those rates, each party is indifferent to the other's strategy, so neither has an opportunity to adapt further and exploit the other. If we move closer to the economic equilibrium of mining and assume a mining cost of 0.9, the equilibrium has a cheating rate of 0.9% and a checking rate of 0.9%. Thus, economically motivated spot-checking is a credible strategy for detecting fraudulent mining, and it can keep cheating rates extremely low if collateral requirements are raised.
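As a worked check of those numbers, the indifference conditions can be solved directly from the payoff table (a sketch assuming payoffs are listed as (miner, verifier)):

```python
# Verifier is indifferent between checking and not checking when the miner
# cheats with probability q:
#     q * 101 + (1 - q) * (-0.5) = 0    =>    q = 0.5 / 101.5
# Miner is indifferent between cheating and honest mining (payoff 0.5) when
# the verifier checks with probability p:
#     p * (-100) + (1 - p) * 1 = 0.5    =>    p = 0.5 / 101
q = 0.5 / 101.5
p = 0.5 / 101
print(f"cheating rate q = {q:.4%}")    # ~0.49%, i.e. roughly 0.5%
print(f"checking rate p = {p:.4%}")    # ~0.50%
```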
What type of work can we do? First, it might be best to rule out computations that cannot tolerate noise, i.e., where an erroneous response accepted as valid does more than 100x as much harm as an accurate response does good. Second, the algorithm outlined here accommodates work that is not easily verifiable, but it does not support work that is data-heavy. For instance, SETI is data-heavy: you need an image of the sky in order to search it for extraterrestrial life. Third, the algorithm must be parallelization-friendly. Running a machine learning algorithm over terabytes of data is not easily split into discrete chunks, even large ones. The second requirement can potentially be relaxed: since there is no real benefit to mining with bad data over good data, a SETI foundation could be established to provide a steady stream of data for miners to work on, adding a small subsidy to motivate miners to use it. In theory, the foundation could even be decentralized, run as a proof-of-stake voting mechanism on a blockchain. The most straightforward kind of socially beneficial computation, however, may be genetic algorithms. Genetic algorithms are often used to find solutions to problems that are intractable in closed form, such as optimal radio antenna shapes, spaceflight trajectories, aerodynamic designs, and so on; the blockchain may be an ideal platform for running such computations on everyone's nodes for free. Certain kinds of data search and aggregation puzzles could also potentially be split up, though they are much more data-heavy, whereas genetic algorithms are nearly data-free once launched.
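To show how well genetic algorithms fit the F(k) interface from the algorithm above, here is a minimal sketch; the fitness function, parameters, and names are all illustrative assumptions rather than any particular project's code:

```python
import random

def fitness(candidate):
    # Hypothetical objective standing in for a real one (antenna layout,
    # trajectory, airfoil...): closer to 0.42 in every dimension is better.
    return -sum((x - 0.42) ** 2 for x in candidate)

def F(k: bytes, pop_size=50, dims=8, generations=200):
    rng = random.Random(k)              # all randomness flows from the seed k
    pop = [[rng.random() for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)          # crossover
            children.append([(x + y) / 2 + rng.gauss(0, 0.01)  # mutation
                             for x, y in zip(a, b)])
        pop = parents + children
    return max(pop, key=fitness)        # the socially useful output R
```

Because every random draw derives from k, the whole run is deterministic, which is exactly what the checkpoint-and-Merkle scheme above needs in order to spot-check it.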
Parliaments And Enhanced Algorithms
Algorithmic consensus distribution is perhaps the more captivating prospect. What if a consensus algorithm could be devised to allocate tokens over time, and that algorithm could reward arbitrary beneficial work? For instance, one may want to pay out bounties to people contributing to the ecosystem, or even to the world in general. The most straightforward approach seems to be to randomly select a "parliament": every N blocks, stakeholders vote on 200 nodes tasked with deciding how the newly generated funds are allocated.
An obvious question arises: what is the economic framework here? In theory, nodes will aim to choose the distribution that best serves the community at large, so as to boost their chances of re-election. But are there avenues for corruption? We all know that traditional democracy is deeply imperfect, so how can we be sure that our crypto-fueled wealth distribution scheme will be any better? Fortunately, there is a compelling argument that it will be. The reason is that conventional democracies have numerous extremely grave failure modes; for instance, a legislature can confiscate people's assets, conscript people into war, restrict free speech, and so on. In this case, however, there is a clearly defined upper bound on how much damage a legislature could do: it could redirect the funds for its own benefit. There is also the possibility that the legislature will gather public resources toward a project that harms society but benefits the legislators themselves (e.g., warfare); but they have no pre-existing military apparatus to draw on and no pre-existing public consensus legitimizing their use of coercive power, leaving them no better positioned to carry out such schemes than any other organization with similar economic resources. Therefore, if we assume that legislatures fail, say, 33% of the time, then in a democratic system this would be catastrophic, whereas here it merely means the distribution mechanism is 67% as effective as it could be.
Another counterargument is that such a structure, however it is designed, will inevitably produce some form of political ruling class, which will consequently ossify around a narrow range of political ideologies, cultivate its own kind of inequality, and, over the long term, open the door to a hostile takeover. The impact would be bounded, but at its most severe, 100% of the new currency issuance could be siphoned off by a crypto-political elite. One possible resolution is to use randomly selected legislatures (i.e., demarchy) instead of elected ones, which would reduce the likelihood of such conspiracies, at the cost of reducing the parliament's expected level of competence at optimal distribution and its ability to build long-term consistent institutions; however, if we aim to build a system that appears politically neutral and decentralized, that might actually be a desirable outcome.
Nevertheless, we likely can do better, and we undoubtedly must at least try. Legislatures and voting are merely the simplest and crudest way of organizing a decentralized entity; there are almost certainly superior alternatives based on concepts such as holarchy, liquid democracy, futarchy, and various combinations of these and other ideas we have yet to conceive, made feasible by the substantially greater interconnectivity and information-processing efficiency afforded by modern technology. Ideally, as much of the operation as possible would be automated in some fashion: the system should function as a DAO rather than a DO, and the position of supreme authority, or its closest philosophical equivalent, should be held by an algorithm rather than by a set of people. This is perhaps a loss from the standpoint of optimality at any given moment, but arguably a gain for long-term stability, and an especially appropriate choice for a cryptographic platform intent on claiming some notion of neutrality.
A straightforward implementation based on futarchy might work as follows. Suppose there are N initiatives applying for a grant consisting of the entire currency supply to be distributed over some timeframe, and the objective is to identify the one that will most increase the coin's value after one year. We create N sub-tokens, T[0] ... T[N-1], where the value of T[i] is zero if project i is not selected, but T[i] can be exchanged for one currency unit after one year if the project is selected. We then create sub-tokens R[0] ... R[N-1], where the value of R[i] is zero if the project is not selected, or an amount of currency units equal in value to 2^32 computational steps if the project is selected (we incorporate a small useful-PoW or useless-PoW market into the coin for this purpose). Now, suppose the probability of project i being selected is P[i] and the value of the coin after one year if project i is selected is V[i]. The market value of T[i] is then P[i] * V[i] and the market value of R[i] is P[i] * K, where K is the cost of executing 2^32 computational steps. Since P[i] cancels in the ratio, the project with maximum T[i] / R[i] also maximizes V[i] / K and hence V[i], so that project is the one deemed to maximize the coin's value, and it is selected. The only remaining challenge is determining what the risks of market-manipulation attacks are if some parties hold significant market power. This approach seems more mathematically clean and less prone to devolving into something centralized; on the other hand, it seems to offer fewer safeguards against turning malevolent. The best response may simply be that a currency managed by a malevolent DAO will lose public support, and hence value, so the futarchy algorithm itself might effectively discourage such behavior. Furthermore, the futarchy does not command a military, and there is no pre-existing public consensus permitting it to employ any kind of coercive power.
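As a worked numeric illustration of that pricing argument, with entirely made-up market prices:

```python
# price_T[i] estimates P[i] * V[i]; price_R[i] estimates P[i] * K. Their
# ratio therefore estimates V[i] / K, with the selection probability P[i]
# cancelled out.
price_T = [0.40, 0.15, 0.30]     # hypothetical market prices of T[i]
price_R = [0.020, 0.005, 0.012]  # hypothetical market prices of R[i]
ratios = [t / r for t, r in zip(price_T, price_R)]   # [20.0, 30.0, 25.0]
best = max(range(len(ratios)), key=lambda i: ratios[i])
print("select project", best)    # project 1: highest V[i]/K even though its
                                 # T[i] trades cheapest, because its P[i] is low
```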
In the end, both of these strategies could be integrated. One might have a parliament, or a futarchy, select advantageous proof of work algorithms or even data for specific useful proof of work algorithms, or one could establish a parliament or futarchy with useful proof of work as its voting mechanism. Nevertheless, one significant conclusion here is that both algorithms delineated are intricate; there is no straightforward solution to determining how to allocate coins effectively. This, considering the condition of the broader financial system, is sensible; had it been easy to distribute coins equitably, the US dollar and other fiat currencies would likely have been supplanted by such alternatives in at least some regions long ago. Due to the complexity involved, it is improbable that either will be utilized for ether itself; ether aims to be conventional crypto-gasoline with straightforward properties to prioritize maximum stability and reliability, not an extraordinarily advanced economically innovative decentralized autonomous organization. Therefore, if you wish to see GeneticAlgoCoin, FutarchyCoin, and ParliamentCoin developed, feel free to operate them on top of Ethereum as sub-currencies; the Serpent compiler is entirely at your disposal for experimentation.
Credit goes to Neal Koblitz for suggesting the idea of spot-checking and impressing upon me the importance of useful PoW, Robin Hanson for inventing futarchy, and likely several cryptographers who came up with the concept of multi-round challenge-response protocols before me.