DeFi on Bitcoin and private ZK Proofs
Where we try to build DeFi on Bitcoin-like networks and, in the process, discover some interesting facts about zero-knowledge proof rollups.
In 2017 MakerDAO invented a revolutionary way to issue a decentralised stablecoin backed by native digital assets, cleverly leveraging market forces, game theory and maths. Around the same time, Vitalik Buterin proposed a way to build decentralised Automated Market Makers (AMMs), and in 2018 Uniswap launched a working version. The innovation wave triggered by these two ideas, and a few others, has led to an explosion of financial solutions collectively known as “DeFi”.
What all these solutions have in common is that they rely on something we’ll call “Autonomous Agents”.
This article will explore what characteristics a decentralised platform needs to support Autonomous Agents and thus DeFi.
What is an “Autonomous Agent” in this context?
Most commonly referred to as a “smart contract”, it is a program that runs on a decentralised platform, can accumulate and control any on-ledger value and, most importantly, has the power to dispense this value based only on coded, predictable logic.
Note that we avoid the term “smart contract” here because it is heavily overloaded.
Another important property, inherited from the underlying network, is that it must be decentralised and censorship-resistant.
The Rule
General-purpose censorship-resistant Autonomous Agents (AA — from now on) can be built in a scalable way only on networks that support command execution.
Put differently: AAs cannot be built to scale on networks that only support transaction (or proof) verification.
Command execution and proof verification
The ultimate purpose of a blockchain network is to provide a decentralised environment that processes end-user transactions.
There are currently two ways in which the major platforms achieve this:
Verification only: Users build transactions by calculating the desired output, and the miners’ role is to verify the result and the proof, including the transaction only if it is valid. Bitcoin is the most famous example of such a system, but more generally, all “UTXO” blockchains fall into this category.
Command execution: Users send commands to the network, and the miners are responsible for processing each command and calculating the result. This model is also known as the “Accounts model”. Ethereum, Solana and others fall into this category.
We introduce this classification, instead of using the usual “UTXO” and “Accounts” terms, because it eliminates noise: these are the right abstractions for this discussion.
Note that “UTXO” and “Accounts” are implementations of this pattern: the “Accounts” model is named for how it exposes the underlying database to smart contracts, and “UTXO” for how it links transactions.
There are tradeoffs for each of these approaches. The well-known tradeoff for the “Accounts” model is that it is notoriously difficult to scale. The rule we defined earlier describes a significant tradeoff of the “verification-only” (UTXO) model.
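To make the distinction concrete, here is a minimal sketch in Python. All names and the `"valid-signature"` stand-in are hypothetical simplifications, not real protocol code: in the verification-only model the node merely checks a fully formed transaction, while in the command-execution model the node itself computes the new state.

```python
def verify_only(ledger, tx):
    """Verification-only model: the user supplies inputs, outputs and a
    proof; the node only checks validity and never computes a result."""
    spent, created, proof = tx["inputs"], tx["outputs"], tx["proof"]
    assert all(i in ledger for i in spent), "unknown or already-spent input"
    assert sum(ledger[i] for i in spent) >= sum(created.values()), "value created from nothing"
    assert proof == "valid-signature"  # stand-in for real signature checks
    for i in spent:
        del ledger[i]                  # consume the spent outputs
    ledger.update(created)             # record the user-computed outputs

def execute_command(state, cmd):
    """Command-execution model: the node computes the result itself."""
    if cmd["op"] == "transfer":
        state[cmd["frm"]] -= cmd["amount"]
        state[cmd["to"]] = state.get(cmd["to"], 0) + cmd["amount"]
```

Note that in `verify_only` all the work of deciding the outputs happened before the transaction reached the node; in `execute_command` the node does that work, which is exactly what makes the latter harder to scale.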
Building an Autonomous Agent
Our task is to show that AAs cannot be built to scale on networks that only support transaction verification.
To do that, we’ll attempt a loose form of reductio ad absurdum: we will try to build one using a couple of approaches. While it is not impossible that some other innovative solution exists or will be invented, the exercise is still useful for building intuition.
High-level design considerations
As per the definition, our AA needs to control value and dispense it only based on pre-coded rules. In the current context of the DeFi space, “value” means that the AA has to be the owner of some tokens (fungible or non-fungible).
For the AA to own tokens, there must be a way for users (or other AAs) to transfer value to the agent and to request payments from it.
In a typical verification-only blockchain, end users control private keys, which they use to sign transactions that transfer value. The private keys are typically held securely in a “wallet”.
Our AA cannot be a user-controlled wallet, since that would break the “Autonomy” requirement. It must therefore be a special construct, which, on a network that only verifies, has to boil down to some code that verifies transactions.
To send money to this AA, users must construct a transaction that logically removes value from their wallet and adds it to the AA balance. Similarly, to request a payment from the AA, users must build a transaction that removes value from the AA and adds it to their own account.
This last statement is vital.
Let’s assume multiple users are interacting with the AA simultaneously. How can they construct transactions or proofs in this highly mutable, concurrent environment? Every user must know the AA’s current balance (the result of the previous user’s transaction) in order to build on top of it.
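The conflict can be sketched with a toy simulation (all names here are hypothetical): two users read the same AA state and each builds a transaction on top of it, so whichever lands second references stale state and is rejected.

```python
# The AA's current on-chain state, represented as a single spendable output.
aa_utxo = {"id": "aa-state-0", "balance": 100}

def build_tx(observed_state, deposit):
    # Each user constructs the AA's *next* state from the state they observed.
    return {"spends": observed_state["id"],
            "new_state": {"id": observed_state["id"] + "'",
                          "balance": observed_state["balance"] + deposit}}

# Both users build against "aa-state-0" concurrently.
tx_alice = build_tx(aa_utxo, 10)
tx_bob = build_tx(aa_utxo, 25)

def apply_tx(chain_state, tx):
    if tx["spends"] != chain_state["id"]:
        return chain_state, False   # stale reference: the chain rejects it
    return tx["new_state"], True

aa_utxo, ok_alice = apply_tx(aa_utxo, tx_alice)  # lands first, accepted
_, ok_bob = apply_tx(aa_utxo, tx_bob)            # now stale, rejected
```

Bob’s transaction fails not because it was invalid when built, but because Alice’s transaction changed the state it referenced, which is the heart of the problem.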
One of the significant insights that made decentralised blockchains viable was that transactions must be batched into blocks.
To build our AA (assuming there is no very clever insight or a specialised solution for a particular use case), user transactions have to be constructed linearly and submitted to the main verification-only chain either one by one or in batches.
Note that specialisation means that the “general-purpose” requirement is broken.
The “Linear” approach
The assumption here is that all AA users coordinate so that every user knows the result of the previous users’ transactions in real time.
Building a coordination mechanism that is both fast and scalable is extremely challenging, since it must implement something like a “two-phase commit”: every user must first declare their intent, then be assigned an order in the sequence, and then confirm by actually submitting a signed transaction or proof. If a single user fails to confirm, or delays confirmation, all subsequent transactions are invalidated.
Attempting to build such a mechanism in a decentralised way only compounds the problems.
It is safe to claim that this approach breaks scalability and possibly censorship resistance.
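A tiny sketch (hypothetical names, no real protocol implied) shows why a single silent participant stalls the whole queue: settlement can only proceed through the assigned order until the first missing confirmation.

```python
def settle(queue):
    """queue: list of (user, confirmed) pairs in their assigned order.
    Settlement proceeds in order and stops at the first unconfirmed slot,
    because every later transaction was built on top of the missing one."""
    settled = []
    for user, confirmed in queue:
        if not confirmed:
            break               # everyone after this user is blocked
        settled.append(user)
    return settled

result = settle([("alice", True), ("bob", False), ("carol", True)])
```

Here Carol did everything right, yet her transaction cannot settle because Bob never confirmed his slot.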
Batching
The usual go-to performance technique is “batching”, which is done by introducing new actors into the system.
On a high level, users will send transactions or commands to “Batchers” or “Sequencers”, who will, in turn, submit batches to the main verification-only chain.
Note: If users are responsible for submitting the transactions themselves, these sequencers are merely the coordinators from the “Linear” approach.
There are two possible ways forward based on the capabilities of the Sequencers:
I. Sequencers have the power to move value from users’ wallets.
In this case, users send commands to sequencers, who process them and build a single transaction to submit to the main chain. This transaction contains all the correctly updated balances and a proof that it was generated correctly.
1. If this empowered sequencer is a centralised entity, it breaks our autonomy/decentralisation requirement. To put this into perspective: a single entity would control the balances of multiple users.
2. If we build a decentralised sequencer, it will take the form of a permissionless network that can execute commands and generate proofs accepted by the main chain.
This approach breaks the “verification-only” requirement since we had to build a side-chain that supports command execution.
It is worth analysing this a bit more. Suppose users on the verification-only main chain want to interact with an AA like Uniswap, say to swap some ABC tokens for some XYZ tokens. To do that, they must issue a command to the decentralised sequencer, and, more importantly, this command must be accompanied by the right amount of ABC tokens. This means they must build a transaction on the main chain that transfers the ABC to something like an escrow or bridge. Once this transaction is confirmed, the decentralised sequencer can process the command and produce a proof that moves the XYZ tokens to the user.
Notice that, in order to satisfy our requirements, we had to build a “command-execution” side-chain, in effect a rollup solution with an on-chain bridge.
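The three steps above can be sketched as follows. Everything here is a toy stand-in (the balances, the `RATE` constant and the `"valid"` proof tag are all assumptions for illustration), but it shows the division of labour: the main chain only locks and releases value against a verified proof, while the swap itself is executed by the sequencer.

```python
# Bridge/escrow balances on the main chain, and one user's wallet.
escrow = {"ABC": 0, "XYZ": 1000}
user = {"ABC": 50, "XYZ": 0}

# Step 1: an on-chain transaction locks the user's ABC in the escrow.
user["ABC"] -= 50
escrow["ABC"] += 50

# Step 2: the sequencer executes the swap OFF the main chain and emits a
# receipt with a proof (here a stand-in string, assumed price RATE).
RATE = 2
def sequencer_swap(amount_abc):
    out = amount_abc * RATE
    return {"claim": ("XYZ", out), "proof": "valid"}

receipt = sequencer_swap(50)

# Step 3: the main chain verifies the proof and releases XYZ from escrow.
if receipt["proof"] == "valid":
    token, amount = receipt["claim"]
    escrow[token] -= amount
    user[token] += amount
```

The main chain never ran the AMM logic; it only verified that the sequencer’s execution was legitimate, which is exactly the rollup-with-a-bridge shape described above.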
II. Sequencers cannot move user value
Since the sequencer is not empowered to move value on behalf of users, like in the previous scenario, the sequencer must collect signatures from all users before submitting the batch transaction containing all instructions.
This approach doesn’t scale for anonymous users, since a single user who fails to sign compromises the entire batch.
Note that the sequencer doesn’t have to be a central entity, but this fact doesn’t change the scalability issues.
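This failure mode mirrors the “Linear” case, and can be sketched in a few lines (hypothetical names, for illustration only): the batch transaction is atomic, so one missing signature voids every instruction in it.

```python
def batch_valid(instructions):
    """instructions: list of (user, signed) pairs assembled into one batch
    transaction. The batch is atomic: it is only valid if every
    participant has signed their instruction."""
    return all(signed for _, signed in instructions)

complete = batch_valid([("alice", True), ("bob", True)])
broken = batch_valid([("alice", True), ("bob", False), ("carol", True)])
```

With anonymous, uncoordinated users there is no way to force everyone to sign promptly, so batches stall or must be rebuilt repeatedly.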
The Miners are the Autonomous Agent
Another approach worth exploring is one where the miners of the verification-only network, who are already engaged in Byzantine consensus, run the AA as a side business: taking commands from users, executing them, and updating balances.
While this is decentralised (because the miners are diverse) and can be scalable, it breaks the “verification-only” requirement.
Zero-knowledge proofs
It turns out that the “verification-only” and “command-execution” classification of blockchains is very useful for analysing a very active area of research: ZKP-based layer-2 solutions.
As the name “ZK proof” suggests, the essence of the solution is that a proof is generated by a centralised or decentralised entity and then verified on a blockchain. Even if that network supports execution, the proof must be verified in a “verification-only” fashion, because the command execution happened earlier, when the proof was generated.
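The point can be illustrated with a toy prover/verifier pair (the “proof” here is a trivial stand-in tuple, not real cryptography): execution happens at proving time, and the chain only ever checks a claim about states.

```python
def prover(old_state, commands):
    """Off-chain prover: runs the commands (execution happens HERE) and
    emits a stand-in proof committing to the state transition."""
    new_state = old_state
    for c in commands:
        new_state = new_state + c      # toy "execution": accumulate values
    proof = ("commit", old_state, new_state)
    return new_state, proof

def on_chain_verifier(known_state, claimed_state, proof):
    """On-chain verifier: never re-executes the commands; it only checks
    that the proof ties the known state to the claimed new state."""
    tag, frm, to = proof
    return tag == "commit" and frm == known_state and to == claimed_state
```

Whatever the base chain is capable of, its role relative to this proof is pure verification, which is why the classification above applies directly to ZK rollups.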
We will explore this further in a future article from this series, where we will attempt to build Autonomous Agents with privacy on ZKP verification-only networks.
Conclusion
In this article, we went through a non-rigorous but relatively thorough process and showed that Autonomous Agents, and by extension DeFi, cannot easily be built on top of networks that only support proof verification.
The only ways to do it are to either sacrifice scalability or build another network that supports command execution and bridge it back into the main network.
The rule is probably not a surprise to most people in this space. Still, the classification and reasoning process we used here has some surprising implications around the capabilities of “Zero-knowledge proof” networks, which we’ll cover next.
Find out more
If you’re interested in learning more about what we’re building, check out our other blog posts here or dive into our whitepaper. Please chat with us on Discord and Telegram, and follow us on Twitter.