Over the past day, with the community's help, we have crowdsourced a list of all of the major bugs in Ethereum smart contracts so far, including both the DAO and various smaller thefts in the 100 to 10,000 ETH range, as well as losses in games and token contracts.
The list (original source here) is as follows:
We can organize the list into categories of issues:
- Variable/function name confusions: FirePonzi, Rubixi (see the sketch after this list)
- Public data that should have been private: the casino with a public RNG seed, the cheatable rock-paper-scissors (RPS) game
- Re-entrancy (A calling B calling A): the DAO, Maker’s ETH-backed token
- Failed sends due to 2300 gas limit: King of the Ether
- Arrays/loops and gas limitations: Governmental
- Subtler game-theoretic vulnerabilities where, at the margin, people debate whether or not they are even bugs at all: the DAO
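To make the first category concrete, here is a simplified sketch of the Rubixi-style name-confusion bug (details condensed for illustration), written in old-style Solidity where the constructor is simply the function whose name matches the contract's:

```solidity
// Simplified sketch of the Rubixi name-confusion bug (old-style Solidity,
// where the constructor is the function sharing the contract's name).
contract Rubixi {
    address private creator;

    // BUG: the contract was renamed from DynamicPyramid to Rubixi, but this
    // function kept the old name. It is therefore no longer a constructor,
    // just an ordinary public function that anyone can call to become creator.
    function DynamicPyramid() {
        creator = msg.sender;
    }

    // Only the "creator" can collect fees -- but see above.
    function collectAllFees() {
        if (msg.sender != creator) throw;
        creator.send(this.balance);
    }
}
```

Because the contract was renamed but the constructor was not, DynamicPyramid became an ordinary public function, and anyone could call it to make themselves the creator and then drain the accumulated fees.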
Numerous solutions have been proposed to improve smart contract security, ranging from better development environments to better programming languages to formal verification and symbolic execution, and researchers have begun developing such tools. My personal view on the subject is that an important primary conclusion is the following: progress in smart contract security is necessarily going to be layered, incremental, and necessarily dependent on defense-in-depth. There will be further bugs, and we will learn further lessons; there will not be a single magic technology that solves everything.
The reason for this fundamental conclusion is as follows. Every instance of smart contract theft or loss (in fact, the very definition of theft or loss) rests fundamentally on a difference between implementation and intent. If, in a given case, implementation and intent are the same, then any instance of “theft” is in fact a donation, and any instance of “loss” is voluntary economic waste, equivalent to a proportional donation to the ETH token holder community via deflation. This leads to the next challenge: intent is fundamentally complex.
The philosophy behind this fact has been best formalized by the friendly AI research community, which describes it as “complexity of value” and “fragility of value“. The argument is simple: we as human beings have very many values, and very complex values – so complex that we ourselves cannot fully express them, and any attempt to do so will inevitably miss some corner case. This matters greatly in AI research because a superintelligent AI would in fact search every nook and cranny, including corners so unintuitive that we do not even think to consider them, to optimize its objective. Tell a superintelligent AI to cure cancer, and it will get most of the way there through some moderately complex tweaks in molecular biology, but it will soon realize that it can push the cure rate to 100% by triggering human extinction through a nuclear war and/or a biological pandemic. Tell it to cure cancer without killing humans, and it may simply force all humans to freeze themselves in cryostasis, reasoning that it is not technically killing anyone because it could revive the humans if it wanted to – it just won’t. And so forth.
In smart contract land, the situation is similar. We think that we value things like “fairness,” but it is hard to define what fairness even means. You may want to say things like “it should not be possible for someone to just take 10,000 ETH out of a DAO,” but what if, for a given withdrawal, the DAO actually approved the transfer because the recipient provided a valuable service? But then, if the transfer was approved, how do we know that the mechanism for approving it was not itself exploited through a game-theoretic vulnerability? What exactly is a game-theoretic vulnerability? What about “splitting”? In the case of a blockchain-based market, what about front-running? If a given contract specifies an “owner” who can collect fees, what if the ability for anyone to become the owner was actually part of the rules, to add to the fun?
All of this is not a knock against experts in formal verification, type theory, unconventional programming languages, and the like; the smart ones already know and appreciate these issues. However, it does show that there is a fundamental barrier to what can be accomplished, and “fairness” is not something that can be mathematically proven in a theorem – in some cases, the set of fairness claims is so long and complex that one has to wonder whether the set of claims itself might contain a bug.
Towards a Mitigation Strategy
That said, there are numerous areas where the gap between intention and implementation can be significantly diminished. One approach is to endeavor to establish common patterns and hardcode them: for instance, the Rubixi issue might have been circumvented by making owner a keyword that could solely be initialized to equal msg.sender within the constructor and potentially transferred in a transferOwnership function. Another approach is to attempt to devise as many standardized mid-level components as feasible; for example, we might want to discourage every casino from developing its own random number generator, and instead guide individuals to RANDAO (or a variation like my RANDAO++ proposal, once implemented).
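As an illustration of what such a hardcoded or standardized pattern could look like, here is a sketch written as a reusable base contract rather than an actual language keyword; the owned and Casino names are chosen purely for the example:

```solidity
// Sketch of a standardized ownership pattern, expressed as a base contract.
contract owned {
    address public owner;

    // owner is only ever initialized here, in the constructor.
    function owned() {
        owner = msg.sender;
    }

    modifier onlyOwner {
        if (msg.sender != owner) throw;
        _;
    }

    // ...and can only be handed over explicitly.
    function transferOwnership(address newOwner) onlyOwner {
        owner = newOwner;
    }
}

// A contract that inherits the pattern never assigns owner itself.
contract Casino is owned {
    function collectFees() onlyOwner {
        if (!owner.send(this.balance)) throw;
    }
}
```

A contract that inherits from owned never sets owner in its own code, so a Rubixi-style renaming mistake cannot quietly expose ownership.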
A more important class of solutions, however, involves addressing the specific and counterintuitive quirks of the EVM execution environment. These include: the gas limit (the cause of the Governmental loss, as well as losses due to recipients consuming too much gas when receiving funds), re-entrancy (the cause of the DAO and the Maker ETH contract issues), and the call stack limit. The call stack limit, for example, can be mitigated through this EIP, which essentially removes it from consideration by replacing its function with an adjustment to gas mechanics. Re-entrancy could be banned outright (i.e., only one execution instance of each contract allowed at a time), but this would likely introduce new kinds of counterintuitiveness, so a better solution is probably required.
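To make the re-entrancy issue concrete, here is a simplified sketch (not the actual DAO code) of the vulnerable pattern and the usual fix of updating state before making the external call:

```solidity
// Simplified sketch of the DAO-style re-entrancy bug and its usual fix.
contract Fund {
    mapping(address => uint) public balances;

    function deposit() payable {
        balances[msg.sender] += msg.value;
    }

    // BUGGY: the external call runs before the balance is zeroed, so the
    // recipient's fallback function can re-enter withdrawUnsafe() and drain funds.
    function withdrawUnsafe() {
        uint amount = balances[msg.sender];
        if (!msg.sender.call.value(amount)()) throw;
        balances[msg.sender] = 0;
    }

    // FIXED: update internal state first, then send, so a re-entrant call
    // sees a zero balance.
    function withdrawSafe() {
        uint amount = balances[msg.sender];
        balances[msg.sender] = 0;
        if (!msg.sender.send(amount)) throw;
    }
}
```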
The gas limit, however, will remain; therefore, the only solutions are likely to lie within the development environment itself. Compilers should issue a warning if a contract does not provably consume less than 2300 gas when called with no data; they should also warn if a function cannot be shown to terminate within a safe amount of gas. Variable names might be colored (e.g., using RGB based on the first three bytes of the name’s hash), or perhaps a heuristic warning might be triggered if two variable names are too similar to each other.
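To illustrate what such a 2300-gas warning would be checking, here is a sketch of two hypothetical recipients: one whose fallback function fits comfortably within the 2300 gas stipend forwarded by send, and one that cannot, because it writes to storage:

```solidity
// Safe: logging an event costs well under the 2300 gas stipend that a plain
// send() forwards along with the ether.
contract GasConsciousRecipient {
    event Received(address from, uint value);

    function() payable {
        Received(msg.sender, msg.value);
    }
}

// Unsafe: a storage write costs 5000-20000 gas, so any plain send() to this
// contract will run out of gas and fail (the King of the Ether problem).
contract GasHungryRecipient {
    mapping(address => uint) public credits;

    function() payable {
        credits[msg.sender] += msg.value;
    }
}
```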
Furthermore, certain programming patterns carry higher risks than others, and while they shouldn’t be prohibited, they ought to be distinctly marked, necessitating that developers justify their use. A particularly intricate example is as follows. There are two categories of call operations that are evidently secure. The first is a send that includes 2300 gas (assuming we accept the convention that it is the recipient’s obligation not to utilize more than 2300 gas in the case of empty data). The second is a call to a trusted contract that has already been confirmed as safe (notably, this definition disqualifies re-entrancy since you would then need to establish that A is safe before confirming that A is safe).
It turns out that many contracts can be covered by this definition. However, not all of them can; one exception is the idea of a “general-purpose decentralized exchange” contract where anyone can submit orders to trade a given quantity of asset A for a given quantity of asset B, where A and B are arbitrary ERC20-compatible tokens. One could make a special-purpose contract restricted to a few assets, thereby qualifying for the “trusted callee” exemption, but a generic one seems like a very valuable idea. In that case, however, the exchange would need to call transfer and transferFrom on unknown contracts, giving them enough gas to run, and those contracts could potentially make a re-entrant call back into the exchange to try to exploit it. In this case, the compiler might legitimately throw a clear warning unless a “mutex lock” is used that prevents the contract from being accessed again during those calls.
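Here is a sketch of what such a “mutex lock” could look like for a hypothetical generic exchange; the Token interface and the fillOrder signature are simplified assumptions for the example:

```solidity
// Minimal ERC20-style interface, reduced to the one call the exchange needs.
contract Token {
    function transferFrom(address from, address to, uint value) returns (bool);
}

contract Exchange {
    bool private locked;

    // Mutex: refuse to be entered again while an untrusted call is in flight.
    modifier noReentrancy {
        if (locked) throw;
        locked = true;
        _;
        locked = false;
    }

    // fillOrder calls into unknown token code, but the mutex guarantees those
    // calls cannot re-enter the exchange mid-trade.
    function fillOrder(Token tokenA, Token tokenB, address maker,
                       uint amountA, uint amountB) noReentrancy {
        if (!tokenA.transferFrom(maker, msg.sender, amountA)) throw;
        if (!tokenB.transferFrom(msg.sender, maker, amountB)) throw;
    }
}
```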
A third class of strategies involves defense in depth. For example, to prevent losses (though not thefts), it may be a good idea for every contract that is not meant to be permanent to have an expiry date, after which the owner can take arbitrary actions on behalf of the contract; that way, losses would occur only if (i) the contract breaks, and simultaneously (ii) the owner is missing or dishonest. Trusted multisig “owners” may emerge to mitigate (ii). Thefts could be mitigated by adding waiting periods. The DAO incident was greatly mitigated in scale precisely because the child DAO was locked down for 28 days. A proposed feature in MakerDAO is to add a delay before any governance change becomes active, giving token holders who disagree with the change time to sell their tokens; this is also a good approach.
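A sketch of the expiration-date idea, with the lifetime parameter and the recover function chosen purely for illustration:

```solidity
// Sketch of the "expiry date" escape hatch: after the deadline, the owner
// (ideally a trusted multisig) may recover whatever is left, so a bug in the
// main logic can no longer permanently lock funds.
contract ExpiringContract {
    address public owner;
    uint public expiry;

    function ExpiringContract(uint lifetime) {
        owner = msg.sender;
        expiry = now + lifetime;
    }

    // ... the contract's normal logic goes here ...

    // Escape hatch: only usable by the owner, and only after expiry.
    function recover() {
        if (msg.sender != owner) throw;
        if (now < expiry) throw;
        if (!owner.send(this.balance)) throw;
    }
}
```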
Formal verification can be layered atop. A straightforward application is as a means of validating termination, significantly reducing gas-related complications. Another application is demonstrating specific characteristics – for instance, “if all participants conspire, they can extract their funds in all scenarios,” or “if you transfer your tokens A to this contract, you are assured to either receive the desired quantity of token B or have a complete refund option.” Or “this contract adheres to a constrained subset of Solidity that renders re-entrancy, gas issues, and call stack problems infeasible.”
A concluding remark is that while all of the concerns so far have been about accidental bugs, malicious bugs are an additional concern. How confident can we really be that the MakerDAO decentralized exchange does not contain a loophole that lets its creators withdraw all of the funds? Some of us in the community may know the MakerDAO team and consider them honorable people, but the entire point of the smart contract security model is to provide guarantees strong enough to survive even if that is not the case. That way, entities that are not well-connected or established enough to be trusted automatically, and that cannot afford to prove their trustworthiness through a multimillion-dollar licensing process, can still innovate, and consumers can use their services with confidence in their security. Hence, any checks or highlights should exist not just at the level of the development environment, but also at the level of block explorers and other tools where independent observers can verify the source code.
Specific actions that the community can undertake include:
- Commencing the development of a superior development environment, alongside an enhanced block/source code explorer, which incorporates some of these features
- The standardization of as many components as feasible
- Starting a project to explore various smart contract programming languages, coupled with formal verification and symbolic execution tools
- Engaging in discussions regarding coding standards, EIPs, amendments to Solidity, etc., that can alleviate the risk of accidental or intentional errors
- If you are developing a multimillion-dollar smart contract application, contemplate reaching out to security researchers and collaborate with them on utilizing your project as a test case for diverse verification tools
Keep in mind that, as mentioned in a prior blog post, DEVGrants and other funding options are available for much of the aforementioned.