Namada

Welcome to the Namada specification!

What is Namada?

Namada is a sovereign proof-of-stake blockchain, using Tendermint BFT consensus, which enables multi-asset private transfers for any native or non-native asset using a multi-asset shielded pool derived from the Sapling circuit. Namada features full IBC protocol support, a natively integrated Ethereum bridge, a modern proof-of-stake system with automatic reward compounding and cubic slashing, a stake-weighted governance signalling mechanism, and a dual proactive/retroactive public goods funding system. Users of shielded transfers are rewarded for their contributions to the privacy set in the form of native protocol tokens. A multi-asset shielded transfer wallet is provided in order to facilitate safe and private user interaction with the protocol.

You can learn more about Namada here.

What is Anoma?

The Anoma protocol is designed to facilitate the operation of networked fractal instances, which intercommunicate but can utilise varied state machines and security models. A fractal instance is an instance of the Anoma consensus and execution protocols operated by a set of networked validators. Anoma’s fractal instance architecture is an attempt to build a platform which is architecturally homogeneous but with a heterogeneous security model. Thus, different fractal instances may specialise in different tasks and serve different communities.

How does Namada relate to Anoma?

The Namada instance is the first such fractal instance, focused exclusively on the use-case of private asset transfers. Namada is also a helpful stepping stone to finalise, test, and launch a protocol version that is simpler than the full Anoma protocol but still encapsulates a unified and useful set of features.

Raison d'être

Privacy should be default and inherent in the systems we use for transacting, yet safe and user-friendly multi-asset privacy doesn't yet exist in the blockchain ecosystem. Up until now users have had the choice of either a sovereign chain that reissues assets (e.g. Zcash) or a privacy preserving solution built on an existing smart contract chain. Both have large trade-offs: in the former case, users don't have assets that they actually want to transact with, and in the latter case, the restrictions of existing platforms mean that users leak a ton of metadata and the protocols are expensive and clunky to use.

Namada can support any fungible or non-fungible asset on an IBC-compatible blockchain and fungible or non-fungible assets (such as ERC20 tokens) sent over a custom Ethereum bridge that reduces transfer costs and streamlines UX as much as possible. Once assets are on Namada, shielded transfers are cheap and all assets contribute to the same anonymity set.

Users on Namada can earn rewards, retain privacy of assets, and contribute to shared privacy.

Layout of this specification

The Namada specification documents are organised into four sub-sections:

This book is written using mdBook. The source can be found in the Namada repository.

Contributions to the contents and the structure of this book should be made via pull requests.

Base ledger

The base ledger of Namada includes a consensus system, validity predicate-based execution system, and signalling-based governance mechanism. Namada's ledger also includes proof-of-stake, slashing, fees, and inflation funding for staking rewards, shielded pool incentives, and public goods — these are specified in the economics section.

Consensus

Namada uses Tendermint Go through the tendermint-rs bindings in order to provide peer-to-peer transaction gossip, BFT consensus, and state machine replication for Namada's custom state machine. Tendermint Go implements the Tendermint BFT consensus algorithm, which you can read more about here.

Execution

The Namada ledger execution system is based on an initial version of the Anoma execution model. The system implements a generic computational substrate with WASM-based transactions and validity predicate verification architecture, on top of which specific features of Namada such as IBC, proof-of-stake, and the MASP are built.

Validity predicates

Conceptually, a validity predicate (VP) is a function that takes the transaction's data together with the storage state prior and posterior to the transaction's execution and returns a boolean value. A transaction may modify any data in the accounts' dynamic storage sub-space. Upon transaction execution, the VPs associated with the accounts whose storage has been modified are invoked to verify the transaction. If any of them reject the transaction, all of its storage modifications are discarded; if all accept, the storage modifications are written.
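As a rough sketch (placeholder types and signature, not the actual Namada host interface), a VP can be thought of as a pure function of the transaction data, the set of modified storage keys, and the pre/post storage states:

// Illustrative sketch only: `StorageSnapshot` and this signature are
// placeholders, not the real Namada VP interface.
type Key = String;
struct StorageSnapshot; // an account's storage sub-space at a given state

fn validity_predicate(
    tx_data: &[u8],          // data attached to the transaction
    keys_changed: &[Key],    // storage keys modified by the transaction
    pre: &StorageSnapshot,   // storage state before the transaction
    post: &StorageSnapshot,  // storage state after the transaction
) -> bool {
    // return true to accept the state changes, false to reject them
    let _ = (tx_data, keys_changed, pre, post);
    true
}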

Namada ledger

The Namada ledger is built on top of Tendermint's ABCI interface with a slight deviation from the ABCI convention: in Namada, the transactions are currently not being executed in ABCI's DeliverTx method, but rather in the EndBlock method. The reason for this is to prepare for future DKG and threshold decryption integration.

The ledger features an account-based system (in which UTXO-based systems such as the MASP can be internally implemented as specific accounts), where each account has a unique address and a dynamic key-value storage sub-space. Every account in Namada is associated with exactly one validity predicate. Fungible tokens, for example, are accounts, whose rules are governed by their validity predicates. Many of the base ledger subsystems specified here are themselves just special Namada accounts too (e.g. PoS, IBC and MASP).

Interactions with the Namada ledger are made possible via transactions (note the transaction whitelist below). In Namada, transactions are allowed to perform arbitrary modifications to the storage of any account, but the transaction will be accepted and state changes applied only if all the validity predicates that were triggered by the transaction accept it. That is, the accounts whose storage sub-spaces were touched by the transaction and/or an account that was explicitly elected by the transaction as the verifier will all have their validity predicates verifying the transaction. A transaction can add any number of additional verifiers, but cannot remove the ones determined by the protocol. For example, a transparent fungible token transfer would typically trigger 3 validity predicates - those of the token, source and target addresses.

Supported validity predicates

While the execution model is fully programmable, for Namada only a selected subset of provided validity predicates and transactions will be permitted through pre-defined whitelists configured at network launch.

There are some native VPs for internal transparent addresses that are built into the ledger. All the other VPs are implemented as WASM programs. One can build a custom VP using the VP template or use one of the pre-defined VPs.

Supported validity predicates for Namada:

  • Native
    • Proof-of-stake (see spec)
    • IBC & IbcToken (see spec)
    • Governance (see spec)
    • SlashFund (see spec)
    • Protocol parameters
  • WASM
    • Fungible token (see spec)
    • MASP (see spec)
    • k-of-n multisignature VP (see spec)

Governance

Before describing Namada governance, it is useful to define the concepts of validators, delegators, and NAM:

  • Namada's economic model is based around a single native token, NAM, which is controlled by the protocol.
  • A Namada validator is an account with a public consensus key, which may participate in producing blocks and governance activities. A validator may not also be a delegator.
  • A Namada delegator is an account that delegates some tokens to a validator. A delegator may not also be a validator.

Namada introduces a governance mechanism to propose and apply protocol changes without the need for a hard fork, and to signal stakeholder approval for potential hard forks. Anyone holding some NAM will be able to propose changes, on which delegators and validators will cast their yay or nay votes; in specific cases it will also be possible to attach payloads to proposals to embed additional information. Governance on Namada supports both signaling and voting mechanisms. The signaling mechanism is used for changes which require a hard fork, while the voting mechanism is used for changes which merely alter state. In cases where the chain is no longer able to produce blocks, Namada relies on off-chain signaling to agree on a common move.

Further information about delegators, validators, and NAM can be found in the economics section.

On-chain protocol

Governance Address

Governance adds 2 internal addresses:

  • GovernanceAddress
  • SlashFundAddress

The first internal address contains all the proposals under its address space. The second internal address holds the funds of rejected proposals.

Governance storage

Each proposal will be stored in a sub-key under the internal proposal address. The storage keys involved are:

/$GovernanceAddress/proposal/$id/content: Vec<u8>
/$GovernanceAddress/proposal/$id/author: Address
/$GovernanceAddress/proposal/$id/type: ProposalType
/$GovernanceAddress/proposal/$id/start_epoch: Epoch
/$GovernanceAddress/proposal/$id/end_epoch: Epoch
/$GovernanceAddress/proposal/$id/grace_epoch: Epoch
/$GovernanceAddress/proposal/$id/proposal_code: Option<Vec<u8>>
/$GovernanceAddress/proposal/$id/funds: u64
/$GovernanceAddress/proposal/$epoch/$id: u64

An epoch is a range of blocks or time that is defined by the base ledger and made available to the PoS system. This document assumes that epochs are identified by consecutive natural numbers. All the data relevant to PoS are associated with epochs.

Field semantics are as follows:

  • The content value should follow a standard format. We leverage a similar format to what is described in the BIP2 document:
{
    "title": "<text>",
    "authors": "<authors email addresses> ",
    "discussions-to": "<email address / link>",
    "created": "<date created on, in ISO 8601 (yyyy-mm-dd) format>",
    "license": "<abbreviation for approved license(s)>",
    "abstract": "<text>",
    "motivation": "<text>",
    "details": "<AIP number(s)> - optional field",
    "requires": "<AIP number(s)> - optional field",
}
  • The author address field will be used to credit the locked funds if the proposal is approved.
  • The ProposalType implies different combinations of:
    • the optional wasm code attached to the proposal
    • which actors should be allowed to vote (delegators and validators or validators only)
    • the threshold to be used in the tally process
    • the optional payload (memo) attached to the vote

The correct logic to handle these different types will be hardcoded in protocol. We'll also rely on type checking to strictly enforce the correctness of a proposal given its type. These two approaches combined will prevent a user from deviating from the intended logic for a certain proposal type (e.g. providing wasm code when it's not needed, or allowing only validators to vote when delegators should also be allowed, etc.). More details on the specific types supported can be found in the relative section of this document.

  • /$GovernanceAddress/proposal/$epoch/$id is used for efficient iteration over proposals by epoch. $epoch refers to the same value as the one specified in the grace_epoch field.

GovernanceAddress parameters and global storage keys are:

/$GovernanceAddress/counter: u64
/$GovernanceAddress/min_proposal_fund: u64
/$GovernanceAddress/max_proposal_code_size: u64
/$GovernanceAddress/min_proposal_period: u64
/$GovernanceAddress/max_proposal_content_size: u64
/$GovernanceAddress/min_proposal_grace_epochs: u64
/$GovernanceAddress/pending/$proposal_id: u64
  • counter is used to assign a unique, incremental ID to each proposal.
  • min_proposal_fund represents the minimum amount of locked tokens to submit a proposal.
  • max_proposal_code_size is the maximum allowed size (in bytes) of the proposal wasm code.
  • min_proposal_period sets the minimum voting time window (in Epoch).
  • max_proposal_content_size sets the maximum number of characters allowed in the proposal content.
  • min_proposal_grace_epochs is the minimum required time window (in Epoch) between end_epoch and the epoch in which the proposal has to be executed.
  • /$GovernanceAddress/pending/$proposal_id this storage key is written only before the execution of the code defined in /$GovernanceAddress/proposal/$id/proposal_code and deleted afterwards. Since this storage key can be written only by the protocol itself (and by no other means), VPs can check for the presence of this storage key to be sure that a proposal_code has been executed by the protocol and not by a transaction.

The governance machinery also relies on a subkey stored under the NAM token address:

/$NAMAddress/balance/$GovernanceAddress: u64

This is to leverage the NAM VP to check that the funds were correctly locked. The governance subkey, /$GovernanceAddress/proposal/$id/funds will be used after the tally step to know the exact amount of tokens to refund or to move to the SlashFundAddress.
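For illustration only (plain string formatting, not the actual storage key API), the sub-keys of a proposal can be derived from its ID like this:

// Illustrative sketch: build the storage paths listed above for a proposal ID.
fn proposal_keys(governance_address: &str, id: u64) -> Vec<String> {
    ["content", "author", "type", "start_epoch", "end_epoch",
     "grace_epoch", "proposal_code", "funds"]
        .iter()
        .map(|field| format!("/{}/proposal/{}/{}", governance_address, id, field))
        .collect()
}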

Supported proposal types

At the moment, Namada supports 3 types of governance proposals:


pub enum ProposalType {
  /// Carries the optional proposal code path
  Custom(Option<String>),
  PGFCouncil,
  ETHBridge,
}

Custom represents a generic proposal with the following properties:

  • Can carry a wasm code to be executed in case the proposal passes
  • Allows both validators and delegators to vote
  • Requires 2/3 of the total voting power to succeed
  • Doesn't expect any memo attached to the votes

PGFCouncil is a specific proposal to elect the council for Public Goods Funding:

  • Doesn't carry any wasm code
  • Allows both validators and delegators to vote
  • Requires 1/3 of the total voting power to vote for the same council
  • Expects every vote to carry a memo in the form of a set of tuples Set<(Set<Address>, BudgetCap)>

ETHBridge is aimed at regulating actions on the bridge, such as updating the Ethereum smart contracts or withdrawing all the funds from the Vault (a sketch of the per-type thresholds appears after this list):

  • Doesn't carry any wasm code
  • Allows only validators to vote
  • Requires 2/3 of the validators' total voting power to succeed
  • Expects every vote to carry a memo in the form of a tuple (Action, Signature)
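To make the thresholds above concrete, here is a hedged sketch (placeholder enum, not the protocol code) of the fraction of voting power each proposal type requires:

// Illustrative only: thresholds as (numerator, denominator) of the relevant
// total voting power, mirroring the descriptions above.
enum ProposalKind { Custom, PgfCouncil, EthBridge }

fn required_threshold(kind: &ProposalKind) -> (u64, u64) {
    match kind {
        ProposalKind::Custom => (2, 3),     // 2/3 of the total voting power
        ProposalKind::PgfCouncil => (1, 3), // 1/3 must vote for the same council
        ProposalKind::EthBridge => (2, 3),  // 2/3 of the validators' voting power
    }
}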

GovernanceAddress VP

Just like PoS, governance has its own storage space. The task of the GovernanceAddress validity predicate is to check the integrity and correctness of new proposals. A proposal, to be correct, must satisfy the following (a sketch of the epoch-related checks appears after this list):

  • Mandatory storage writes are:
    • counter
    • author
    • type
    • funds
    • start_epoch
    • end_epoch
    • grace_epoch
  • Locks some funds >= min_proposal_fund
  • Contains a unique ID
  • Contains a start, end and grace Epoch
  • The difference between start_epoch and end_epoch should be >= min_proposal_period.
  • Should contain a text describing the proposal with length < max_proposal_content_size characters.
  • Votes can be cast only by a delegator or validator (further constraints can be applied depending on the proposal type)
  • If delegators are allowed to vote, then validators can vote only in the initial 2/3 of the whole proposal duration (end_epoch - start_epoch)
  • Due to the previous requirement, (end_epoch - start_epoch) % 3 == 0 must hold
  • If defined, proposalCode should be the wasm bytecode representation of the changes. This code is triggered in case the proposal has a positive outcome.
  • The difference between grace_epoch and end_epoch should be of at least min_proposal_grace_epochs
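A minimal sketch of the epoch-related checks above (hypothetical parameter names, not the actual VP code):

// Illustrative sketch of the epoch constraints a new proposal must satisfy.
fn proposal_epochs_are_valid(
    start_epoch: u64,
    end_epoch: u64,
    grace_epoch: u64,
    min_proposal_period: u64,
    min_proposal_grace_epochs: u64,
) -> bool {
    let voting_period_ok = end_epoch > start_epoch
        && end_epoch - start_epoch >= min_proposal_period
        && (end_epoch - start_epoch) % 3 == 0;
    let grace_ok = grace_epoch > end_epoch
        && grace_epoch - end_epoch >= min_proposal_grace_epochs;
    voting_period_ok && grace_ok
}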

Once a proposal has been created, nobody can modify any of its fields. If proposal_code is Empty or None, the proposal upgrade will need to be done via hard fork, unless this is a specific type of proposal: in this case the protocol can directly apply the required changes.

It is possible to check the actual implementation here.

Examples of proposalCode could be:

  • storage writes to change some protocol parameter
  • storage writes to restore a slash
  • storage writes to change a non-native vp

This means that corresponding VPs need to handle these cases.

Proposal Transactions

The on-chain proposal transaction will have the following structure, where the author address will be the refund address.


struct Proposal {
    id: u64,
    content: Vec<u8>,
    author: Address,
    r#type: ProposalType,
    votingStartEpoch: Epoch,
    votingEndEpoch: Epoch,
    graceEpoch: Epoch,
}

The optional proposal wasm code will be embedded inside the ProposalType enum variants to better perform validation through type checking.

Vote transaction

Vote transactions have the following structure:


struct OnChainVote {
    id: u64,
    voter: Address,
    yay: ProposalVote,
}

A vote transaction creates or modifies the following storage key:

/$GovernanceAddress/proposal/$id/vote/$delegation_address/$voter_address: ProposalVote

where ProposalVote is an enum representing a Yay or Nay vote: the yay variant also contains the specific memo (if any) required for that proposal.

The storage key will only be created if the transaction is signed either by a validator or a delegator. In case a vote misses a required memo or carries a memo with an invalid format, the vote will be discarded at validation time (VP) and it won't be written to storage.

If delegators are allowed to vote, validators will be able to vote only for 2/3 of the total voting period, while delegators can vote until the end of the voting period.

If a delegator votes differently than its validator, this will override the corresponding vote of this validator (e.g. if a delegator has a voting power of 200 and votes opposite to the validator holding these tokens, then 200 will be subtracted from the voting power of the involved validator).

As a small form of space/gas optimization, if a delegator votes the same way as its validator, the vote will not actually be submitted to the chain. This logic is applied only if the following conditions are satisfied:

  • The transaction is not being forced
  • The vote is submitted in the last third of the voting period (the one exclusive to delegators). This second condition is necessary to prevent a validator from changing its vote after a delegator vote has been submitted, effectively stealing the delegator's vote.

Tally

At the beginning of each new epoch (and only then), in the finalize_block function, tallying will occur for all the proposals ending at this epoch (specified via the grace_epoch field of the proposal). The proposal has a positive outcome if the threshold specified by the ProposalType is reached. This means that enough yay votes must have been collected: the threshold is relative to the staked NAM total.

Tallying, when no memo is required, is computed with the following rules (a worked example follows the list):

  1. Sum all the voting power of validators that voted yay
  2. For any validator that voted yay, subtract the voting power of any delegation that voted nay
  3. Add voting power for any delegation that voted yay (whose corresponding validator didn't vote yay)
  4. If the aforementioned sum divided by the total voting power is greater or equal to the threshold set by ProposalType, the proposal outcome is positive otherwise negative.
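As a hypothetical worked example with invented numbers: suppose the total staked NAM is 1000, validators A (voting power 400) and B (300) vote yay, a delegation of 50 to A votes nay, and a delegation of 100 to a validator that didn't vote yay votes yay. The tally is 400 + 300 - 50 + 100 = 750, and 750 / 1000 = 75% >= 2/3, so a Custom proposal would have a positive outcome.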

If votes carry a memo, instead, the yay votes must be evaluated according to it. The protocol will implement the correct logic to interpret these memos and compute the tally correctly (the example above is extended after this list):

  1. Sum all the voting power of validators that voted yay with a specific memo, effectively splitting the yay votes into different subgroups
  2. For any validator that voted yay, subtract the voting power of any delegation that voted nay or voted yay with a different memo
  3. Add voting power for any delegation that voted yay (whose corresponding validator voted nay or yay with a different memo)
  4. From the yay subgroups select the one that got the greatest amount of voting power
  5. If the aforementioned voting power divided by the total voting power is greater or equal to the threshold set by ProposalType, the proposal outcome is positive otherwise negative.

All the computation will be done on data collected at the epoch specified in the end_epoch field of the proposal.

It is possible to check the actual implementation here.

Refund and Proposal Execution mechanism

Together with tallying, in the first block at the beginning of each epoch, in the finalize_block function, the protocol will manage the execution of accepted proposals and refunding. For each ended proposal with a positive outcome, it will refund the locked funds from GovernanceAddress to the proposal author address (specified in the proposal author field). For each proposal that has been rejected, instead, the locked funds will be moved to the SlashFundAddress. Moreover, if the proposal had a positive outcome and proposal_code is defined, these changes will be executed right away. To summarize the execution of governance in the finalize_block function:

If the proposal outcome is positive and current epoch is equal to the proposal grace_epoch, in the finalize_block function:

  • transfer the locked funds to the proposal author
  • execute any changes specified by proposal_code

In case the proposal was rejected or any error occurred, in the finalize_block function:

  • transfer the locked funds to SlashFundAddress

The result is then signaled by creating and inserting a Tendermint Event (https://github.com/tendermint/tendermint/blob/ab0835463f1f89dcadf83f9492e98d85583b0e71/docs/spec/abci/abci.md#events).

SlashFundAddress

Funds locked in the SlashFundAddress should be spendable only by proposals.

SlashFundAddress storage

/$SlashFundAddress/?: Vec<u8>

The funds will be stored under:

/$NAMAddress/balance/$SlashFundAddress: u64

SlashFundAddress VP

The slash_fund validity predicate will approve a transfer only if the transfer has been made by the protocol (by checking the existence of the /$GovernanceAddress/pending/$proposal_id storage key).

It is possible to check the actual implementation here.

Off-chain protocol

Create proposal

A CLI command to create a signed JSON representation of the proposal. The JSON will have the following structure:

{
  content: Base64<Vec<u8>>,
  author: Address,
  votingStart: TimeStamp,
  votingEnd: TimeStamp,
  signature: Base64<Vec<u8>>
}

The signature is produced over the hash of the concatenation of: content, author, votingStart and votingEnd. Proposal types are not supported off-chain.

Create vote

A CLI command to create a signed JSON representation of a vote. The JSON will have the following structure:

{
  proposalHash: Base64<Vec<u8>>,
  voter: Address,
  signature: Base64<Self.proposalHash>,
  vote: Enum(yay|nay)
}

The proposalHash is produced over the concatenation of: content, author, votingStart, votingEnd, voter and vote. Vote memos are not supported off-chain.

Tally

Same mechanism as the on-chain tally, but instead of reading the data from storage it will require a list of serialized JSON votes.

Interfaces

  • Ledger CLI
  • Wallet

k-of-n multisignature

The k-of-n multisignature validity predicate authorizes transactions on the basis of k out of n parties approving them. This document targets the encrypted wasm transactions: there won't be support for multisignature on wrapper or protocol transactions.

Protocol

Namada transactions get signed before being delivered to the network. This signature is then checked by the VPs to determine the validity of the transaction. To support multisignature we need to modify the current SignedTxData struct to the following:


pub struct SignedTxData {
    /// The original tx data bytes, if any
    pub data: Option<Vec<u8>>,
    /// The signature is produced on the tx data concatenated with the tx code
    /// and the timestamp.
    pub sig: Vec<(u8, common::Signature)>,
}

The sig field now holds a vector of tuples where the first element is an 8-bit integer and the second one is a signature. The integer serves as an index to match a specific signature to one of the public keys in the list of accepted ones. This way, we can improve the verification algorithm and check each signature only against the public key at the provided index (linear in time complexity), without the need to cycle over all of them, which would be quadratic.

This means that non-multisig addresses will now be seen as 1-of-1 multisig accounts.

VPs

Since all the addresses will be multisig ones, we will keep using the already available vp_user as the default validity predicate. The only modification required is the signature check which must happen on a set of signatures instead of a single one.

To perform the validity checks, the VP will need to access two types of information:

  1. The multisig threshold
  2. A list of valid signers' public keys

This data defines the requirements of a valid transaction operating on the multisignature address and it will be written in storage when the account is created:

/$Address/threshold/: u8
/$Address/pubkeys/: LazyVec<PublicKey>

The LazyVec struct will split all of its elements on different subkeys in storage so that we won't need to load the entire vector of public keys in memory for validation, but just the ones pointed to by the indexes in the SignedTxData struct.

To verify the correctness of the signatures, this VP will proceed with a two-step verification process:

  1. Check to have enough unique signatures for the given threshold
  2. Check to have enough valid signatures for the given threshold

Step 1 allows us to short-circuit the validation process and avoid unnecessary processing and storage access. Each signature will be validated only against the public key found in the list at the specified index. Step 2 will halt as soon as it retrieves enough valid signatures to match the threshold, meaning that the remaining signatures will not be verified.
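A hedged sketch of this two-step check (placeholder types, not the actual Namada storage or crypto APIs):

use std::collections::HashSet;

// Illustrative placeholders for the real key and signature types.
struct PublicKey;
struct Signature;

fn verify_signature(_pk: &PublicKey, _sig: &Signature, _data: &[u8]) -> bool {
    // placeholder for the real cryptographic check
    true
}

fn check_multisig(
    threshold: u8,
    pubkeys: &[PublicKey],    // as stored under /$Address/pubkeys/
    sigs: &[(u8, Signature)], // the SignedTxData `sig` field
    signed_data: &[u8],
) -> bool {
    // Step 1: enough unique signature indexes to possibly reach the threshold
    let unique: HashSet<u8> = sigs.iter().map(|(idx, _)| *idx).collect();
    if (unique.len() as u8) < threshold {
        return false;
    }
    // Step 2: verify each signature only against the pubkey at its index,
    // stopping as soon as the threshold is reached
    let mut seen: HashSet<u8> = HashSet::new();
    let mut valid: u8 = 0;
    for (idx, sig) in sigs {
        if !seen.insert(*idx) {
            continue; // skip duplicate indexes
        }
        if let Some(pk) = pubkeys.get(*idx as usize) {
            if verify_signature(pk, sig, signed_data) {
                valid += 1;
                if valid >= threshold {
                    return true;
                }
            }
        }
    }
    false
}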

Addresses

The VP introduced in the previous section is available for established addresses. To generate a multisig account we need to modify the InitAccount struct to support multiple public keys and a threshold, as follows:


pub struct InitAccount {
    /// The VP code
    pub vp_code: Vec<u8>,
    /// Multisig threshold for k-of-n
    pub threshold: u8,
    /// Multisig signers' pubkeys to be written into the account's storage. This can be used
    /// for signature verification of transactions for the newly created
    /// account.
    pub pubkeys: Vec<common::PublicKey>
}

Finally, the tx performs the following writes to storage:

  • The multisig vp
  • The threshold
  • The list of public keys of the signers

Internal addresses may want a multi-signature scheme on top of their validation process as well. Among the internal ones, PGF will require multisignature for its council (see the relative spec). The storage data necessary for the correct working of the multisig for an internal address is written in the genesis file: these keys can be later modified through governance.

Implicit addresses are not generated by a transaction and, therefore, are not suitable for a multisignature scheme since there would be no way to properly construct them. More specifically, an implicit address doesn't allow for:

  • A custom, modifiable VP
  • An initial transaction to be used as an initializer for the relevant data

Multisig account init validation

Since the VP of an established account does not get triggered at account creation, no checks will be run on the multisig parameters, meaning that the creator could provide wrong data.

To perform validation at account creation time we could:

  1. Write in storage the addresses together with the public keys to trigger their VPs
  2. Manually trigger the multisig VP even at creation time
  3. Create an internal VP managing the creation of every multisig account

All of these solutions would require the init transaction to become a multisigned one.

Solution 1 actually exhibits a problem: in case the members of the account would like to exclude one of them from the account, the target account could refuse to sign the multisig transaction carrying this modification. At validation time, his private VP would be triggered and, since there's no signature matching his own public key in the transaction, it would reject it, effectively preventing the multisig account from operating on itself even with enough signatures to match the threshold. This goes against the principle that a multisig account should be self-sufficient and controlled by its own VP, not those of its members.

Solution 2 would perform just a partial check since the logic of the VP will revolve around the threshold.

Finally, solution 3 would require an internal VP dedicated to the management of multisig addresses' parameters both at creation and modification time. This could implement a logic based on the threshold or a logic requiring a signature by all the members to initialize/modify a multisig account's parameters. The former effectively collapses to the VP of the account itself (making the internal VP redundant), while the latter has the same problem as solution 1.

In the end, we don't implement any of these checks and leave the responsibility to the signer of the transaction creating the address: in case of an error he can simply submit a new transaction to generate the correct account. On the other hand, the participants of a multisig account can refuse to sign transactions if they don't agree on the parameters defining the account itself.

Transaction construction

To craft a multisigned transaction, the involved parties will need to coordinate. More specifically, the transaction will be constructed by one entity which will then distribute it to the signers and collect their signatures: note that the constructing party doesn't necessarily need to be one of the signers. Finally, these signatures will be inserted in the SignedTxData struct so that it can be encrypted, wrapped and submitted to the network.

Namada does not provide a layer to support this process, so the involved parties will need to rely on an external communication mechanism.

Fungible token

The fungible token validity predicate authorises token balance changes on the basis of conservation-of-supply and approval-by-sender.

Multitoken

A token balance is stored with a storage key. The token balance key should be {token_addr}/balance/{owner_addr} or {token_addr}/{sub_prefix}/balance/{owner_addr}. {sub_prefix} can have multiple key segments. These keys can be made with token functions.

We can have multitoken balances with the same token and the same owner by {sub_prefix}, e.g. a token balance received over IBC is managed in {token_addr}/ibc/{ibc_token_hash}/balance/{receiver_addr}. It is distinguished from the receiver's original balance in {token_addr}/balance/{receiver_addr} to know which chain the token was transferred from.
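As a rough sketch (plain string formatting, not the token functions mentioned above), the two balance key shapes look like this:

// Illustrative only: balance keys with and without a sub prefix.
// e.g. balance_key(token, receiver, Some("ibc/{ibc_token_hash}")) for a token received over IBC.
fn balance_key(token_addr: &str, owner_addr: &str, sub_prefix: Option<&str>) -> String {
    match sub_prefix {
        Some(prefix) => format!("{}/{}/balance/{}", token_addr, prefix, owner_addr),
        None => format!("{}/balance/{}", token_addr, owner_addr),
    }
}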

The transfers between the following keys are allowed:

| Source | Target |
| ------ | ------ |
| {token_addr}/balance/{sender_addr} | {token_addr}/balance/{receiver_addr} |
| {token_addr}/{sub_prefix}/balance/{sender_addr} | {token_addr}/{sub_prefix}/balance/{receiver_addr} |

A transfer is allowed from a balance without {sub_prefix} to another one without {sub_prefix}, and between balances with the same {sub_prefix}. The {sub_prefix} can be given with the --sub-prefix argument of the Namada CLI command namadac transfer.

Some special transactions can transfer to a balance with a different {sub_prefix}. An IBC transaction transfers from a balance with one {sub_prefix} to a balance with a different {sub_prefix}. IBC transfers use the sub prefix ibc/{port_id}/{channel_id} for the IBC escrow, mint, and burn accounts and the sub prefix ibc/{ibc_token_hash} for receiving a token. An IBC transaction transfers a token between the following keys:

| IBC operation | Source | Target |
| ------------- | ------ | ------ |
| Send (as the source) | {token_addr}/balance/{sender_addr} | {token_addr}/ibc/{port_id}/{channel_id}/balance/IBC_ESCROW |
| Send (to the source) | {token_addr}/balance/{sender_addr} | {token_addr}/ibc/{port_id}/{channel_id}/balance/IBC_BURN |
| Refund (when sending as the source) | {token_addr}/ibc/{port_id}/{channel_id}/balance/IBC_ESCROW | {token_addr}/balance/{sender_addr} |
| Refund (when sending to the source) | {token_addr}/ibc/{port_id}/{channel_id}/balance/IBC_BURN | {token_addr}/balance/{sender_addr} |
| Receive (as the source) | {token_addr}/ibc/{port_id}/{channel_id}/balance/IBC_ESCROW | {token_addr}/balance/{receiver_addr} |
| Receive (from the source) | {token_addr}/ibc/{port_id}/{channel_id}/balance/IBC_MINT | {token_addr}/ibc/{ibc_token_hash}/balance/{receiver_addr} |

The IBC token validity predicate should validate these transfers. Special transfers like these IBC ones should be validated not only by the fungible token validity predicate but also by other validity predicates.

Replay Protection

Replay protection is a mechanism to prevent replay attacks, which consist of a malicious user resubmitting an already executed transaction (also mentioned as tx in this document) to the ledger.

A replay attack causes the state of the machine to deviate from the intended one (from the perspective of the parties involved in the original transaction) and causes economic damage to the fee payer of the original transaction, who finds himself paying more than once. Further economic damage is caused if the transaction involved the moving of value in some form (e.g. a transfer of tokens) with the sender being deprived of more value than intended.

Since the original transaction was already well formatted for the protocol's rules, the attacker doesn't need to rework it, making this attack relatively easy.

Of course, a replay attack makes sense only if the attacker differs from the source of the original transaction, as a user will always be able to generate another semantically identical transaction to submit without the need to replay the same one.

To prevent this scenario, Namada supports a replay protection mechanism to prevent the execution of already processed transactions.

Context

This section will illustrate the pre-existing context in which we are going to implement the replay protection mechanism.

Encryption-Authentication

The current implementation of Namada is built on top of Tendermint which provides an encrypted and authenticated communication channel between every two nodes to prevent a man-in-the-middle attack (see the detailed spec).

The Namada protocol relies on this substrate to exchange transactions (messages) that will define the state transition of the ledger. More specifically, a transaction is composed of two parts, a WrapperTx and an inner Tx:


pub struct WrapperTx {
    /// The fee to be paid for including the tx
    pub fee: Fee,
    /// Used to determine an implicit account of the fee payer
    pub pk: common::PublicKey,
    /// The epoch in which the tx is to be submitted. This determines
    /// which decryption key will be used
    pub epoch: Epoch,
    /// Max amount of gas that can be used when executing the inner tx
    pub gas_limit: GasLimit,
    /// the encrypted payload
    pub inner_tx: EncryptedTx,
    /// sha-2 hash of the inner transaction acting as a commitment to
    /// the contents of the encrypted payload
    pub tx_hash: Hash,
}

pub struct Tx {
    pub code: Vec<u8>,
    pub data: Option<Vec<u8>>,
    pub timestamp: DateTimeUtc,
}

The wrapper transaction is composed of some metadata, the encrypted inner transaction itself, and the hash of the latter. The inner Tx transaction carries the Wasm code to be executed and the associated data.

A transaction is constructed as follows:

  1. The struct Tx is produced
  2. The hash of this transaction gets signed by the author, producing another Tx where the data field holds the concatenation of the original data and the signature (SignedTxData)
  3. The produced transaction is encrypted and embedded in a WrapperTx. The encryption step is there for a future implementation of DKG (see Ferveo)
  4. Finally, the WrapperTx gets converted to a Tx struct, signed over its hash (same as step 2, relying on SignedTxData), and submitted to the network

Note that the signer of the WrapperTx and that of the inner one don't need to coincide, but the signer of the wrapper will be charged with gas and fees. In the execution steps:

  1. The WrapperTx signature is verified and, only if valid, the tx is processed
  2. In the following height the proposer decrypts the inner tx, checks that the hash matches that of the tx_hash field and, if everything went well, includes the decrypted tx in the proposed block
  3. The inner tx will then be executed by the Wasm runtime
  4. After the execution, the affected validity predicates (also mentioned as VP in this document) will check the storage changes and (if relevant) the signature of the transaction: if the signature is not valid, the VP will deem the transaction invalid and the changes won't be applied to the storage

The signature checks effectively prevent any tampering with the transaction data because that would cause the checks to fail and the transaction to be rejected. For a more in-depth view, please refer to the Namada execution spec.

Tendermint replay protection

The underlying consensus engine, Tendermint, provides a first layer of protection in its mempool which is based on a cache of previously seen transactions. This mechanism is actually aimed at preventing a block proposer from including an already processed transaction in the next block, which can happen when the transaction has been received late. Of course, this also acts as a countermeasure against intentional replay attacks. This check though, like all the checks performed in CheckTx, is weak, since a malicious validator could always propose a block containing invalid transactions. There's therefore the need for a more robust replay protection mechanism implemented directly in the application.

Implementation

Namada replay protection consists of three parts: the hash-based solution for both EncryptedTx (also called the InnerTx) and WrapperTx, a way to mitigate replay attacks in case of a fork and a concept of a lifetime for the transactions.

Hash register

The actual Wasm code and data for the transaction are encapsulated inside a struct Tx, which gets encrypted as an EncryptedTx and wrapped inside a WrapperTx (see the relative section). This inner transaction must be protected from replay attacks because it carries the actual semantics of the state transition. Moreover, even if the wrapper transaction was protected from replay attacks, an attacker could extract the inner transaction, rewrap it, and replay it. Note that for this attack to work, the attacker will need to sign the outer transaction himself and pay gas and fees for that, but this could still cause much greater damage to the parties involved in the inner transaction.

WrapperTx is the only type of transaction currently accepted by the ledger. It must be protected from replay attacks because, if it wasn't, a malicious user could replay the transaction as is. Even if the inner transaction implemented replay protection or, for any reason, wasn't accepted, the signer of the wrapper would still pay for gas and fees, effectively suffering economic damage.

To prevent the replay of both these transactions we will rely on a set of already processed transactions' digests that will be kept in storage. These digests will be computed on the unsigned transactions, to support replay protection even for multisigned transactions: in this case, if hashes were taken from the signed transactions, a different set of signatures on the same tx would produce a different hash, effectively allowing for a replay. To support this, we'll need a subspace in storage headed by a ReplayProtection internal address:

/$ReplayProtectionAddress/$tx0_hash: None
/$ReplayProtectionAddress/$tx1_hash: None
/$ReplayProtectionAddress/$tx2_hash: None
...

The hashes will form the last part of the path to allow for a fast storage lookup.

The consistency of the storage subspace is of critical importance for the correct working of the replay protection mechanism. To protect it, a validity predicate will check that no changes to this subspace are applied by any wasm transaction, as those should only be available from protocol.

Both in mempool_validation and process_proposal we will perform a check (together with others, see the relative section) on both the digests against the storage to check that neither of the transactions has already been executed: if this doesn't hold, the WrapperTx will not be included into the mempool/block respectively. If both checks pass then the transaction is included in the block and executed. In the finalize_block function we will add the transaction's hash to storage to prevent re-executions. We will first add the hash of the wrapper transaction. After that, in the following block, we deserialize the inner transaction, check the correct order of the transactions in the block and execute the tx: if it runs out of gas then we'll avoid storing its hash to allow rewrapping and executing the transaction, otherwise we'll add the hash in storage (both in case of success or failure of the tx).
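A hedged sketch of the hash-register lookup and update (an in-memory stand-in, not the real ledger storage API):

use std::collections::HashSet;

// Illustrative stand-in for the storage subspace under the ReplayProtection
// internal address.
struct ReplayProtection {
    seen: HashSet<[u8; 32]>, // hashes of already processed (unsigned) txs
}

impl ReplayProtection {
    // mempool_validation / process_proposal: reject already-seen hashes
    fn is_replay(&self, wrapper_hash: &[u8; 32], inner_hash: &[u8; 32]) -> bool {
        self.seen.contains(wrapper_hash) || self.seen.contains(inner_hash)
    }

    // finalize_block: record a hash so the corresponding tx cannot be executed again
    fn register(&mut self, tx_hash: [u8; 32]) {
        self.seen.insert(tx_hash);
    }
}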

Forks

In the case of a fork, the transaction hash is not enough to prevent replay attacks. Transactions, in fact, could still be replayed on the other branch as long as their format is kept unchanged and the counters in storage match.

To mitigate this problem, transactions will need to carry a ChainId identifier to tie them to a specific fork. This field needs to be added to the Tx struct so that it applies to both WrapperTx and EncryptedTx:


pub struct Tx {
    pub code: Vec<u8>,
    pub data: Option<Vec<u8>>,
    pub timestamp: DateTimeUtc,
    pub chain_id: ChainId
}

This new field will be signed just like the other ones and is therefore subject to the same guarantees explained in the initial section. The validity of this identifier will be checked in process_proposal for both the outer and inner tx: if a transaction carries an unexpected chain id, it won't be applied, meaning that no modifications will be applied to storage.

Transaction lifetime

In general, a transaction is valid at the moment of submission, but after that, a series of external factors (ledger state, etc.) might change the mind of the submitter who's now not interested in the execution of the transaction anymore.

We have to introduce the concept of a lifetime (or timeout) for the transactions: basically, the Tx struct will hold an extra field called expiration stating the maximum DateTimeUtc up until which the submitter is willing to see the transaction executed. After the specified time, the transaction will be considered invalid and discarded regardless of all the other checks.

By introducing this new field we are setting a new constraint in the transaction's contract, where the ledger will make sure to prevent the execution of the transaction after the deadline and, on the other side, the submitter commits himself to the result of the execution at least until its expiration. If the expiration is reached and the transaction has not been executed the submitter can decide to submit a new transaction if he's still interested in the changes carried by it.

In our design, the expiration will hold until the transaction is executed: once it's executed, either in case of success or failure, the tx hash will be written to storage and the transaction will not be replayable. In essence, the transaction submitter commits himself to one of these three conditions:

  • Transaction is invalid regardless of the specific state
  • Transaction is executed (either with success or not) and the transaction hash is saved in the storage
  • Expiration time has passed

The first condition satisfied will invalidate further executions of the same tx.

In anticipation of DKG implementation, the current struct WrapperTx holds a field epoch stating the epoch in which the tx should be executed. This is because Ferveo will produce a new public key each epoch, effectively limiting the lifetime of the transaction (see section 2.2.2 of the documentation). Unfortunately, for replay protection, a resolution of 1 epoch (~ 1 day) is too coarse for the possible needs of the submitters, therefore we need the expiration field to hold a maximum DateTimeUtc to refine the resolution down to a single block (~ 10 seconds).


pub struct Tx {
    pub code: Vec<u8>,
    pub data: Option<Vec<u8>>,
    pub timestamp: DateTimeUtc,
    pub chain_id: ChainId,
    /// Lifetime of the transaction, also determines which decryption key will be used
    pub expiration: DateTimeUtc,
}

pub struct WrapperTx {
    /// The fee to be paid for including the tx
    pub fee: Fee,
    /// Used to determine an implicit account of the fee payer
    pub pk: common::PublicKey,
    /// Max amount of gas that can be used when executing the inner tx
    pub gas_limit: GasLimit,
    /// the encrypted payload
    pub inner_tx: EncryptedTx,
    /// sha-2 hash of the inner transaction acting as a commitment to
    /// the contents of the encrypted payload
    pub tx_hash: Hash,
}

Since we now have more detailed information about the desired lifetime of the transaction, we can remove the epoch field and rely solely on expiration. Now, the producer of the inner transaction should make sure to set a sensible value for this field, in the sense that it should not span more than one epoch. If this happens, then the transaction will be correctly decrypted only in a subset of the desired lifetime (the one expecting the actual key used for the encryption), while, in the following epochs, the transaction will fail decryption and won't be executed. In essence, the expiration parameter can only restrict the implicit lifetime within the current epoch, it can not surpass it as that would make the transaction fail in the decryption phase.

The subject encrypting the inner transaction will also be responsible for using the appropriate public key for encryption relative to the targeted time.

The wrapper transaction will match the expiration of the inner one for correct execution. Note that we need this field on the wrapper as well, both to anticipate the check at mempool/proposal evaluation time and to prevent someone from inserting a wrapper transaction after the corresponding inner one has expired, which would force the wrapper signer to pay the fees anyway.

Wrapper checks

In mempool_validation and process_proposal we will perform some checks on the wrapper tx to validate it. These will involve:

  • Valid signature
  • Enough funds to pay the fee
  • Valid chainId
  • Valid transaction hash
  • Valid expiration

These checks can all be done before executing the transactions themselves (unlike the check on gas, which cannot be done ahead of time). If any of these fails, the transaction should be considered invalid and the action to take will be one of the following:

  1. If the checks fail on the signature, chainId, expiration or transaction hash, then this transaction will be forever invalid, regardless of the possible evolution of the ledger's state. There's no need to include the transaction in the block. Moreover, we cannot include this transaction in the block to charge a fee (as a sort of punishment) because these errors may not depend on the signer of the tx (could be due to malicious users or simply a delay in the tx inclusion in the block)
  2. If the checks fail only because of an insufficient balance, the wrapper should be kept in mempool for future inclusion in case the funds become available
  3. If all the checks pass validation we will include the transaction in the block to store the hash and charge the fee

The expiration parameter also justifies step 2 of the previous list, which states that if the validity checks fail only because of an insufficient balance to pay for fees then the transaction should be kept in mempool for future execution. Without it, the transaction could potentially be executed at any future moment, possibly going against the changed interests of the submitter. With the expiration parameter, the submitter commits himself to accept the execution of the transaction up to the specified time: it's his responsibility to provide a sensible value for this parameter. Given this constraint, the transaction will be kept in the mempool only up until its expiration (since it would become invalid after that in any case), to prevent the mempool from growing too much in size.

This mechanism can also be applied to another scenario. Suppose a transaction was not propagated to the network by a node (or a group of colluding nodes). Now, this tx might be valid, but it doesn't get inserted into a block. Without an expiration, this tx can be replayed (or rather, applied, since it was never executed in the first place) at a future moment in time when the submitter might not be willing to execute it anymore.

Possible optimizations

In this section we describe two alternative solutions that come with some optimizations.

Transaction counter

Instead of relying on a hash (32 bytes) we could use a 64-bit (8-byte) transaction counter as a nonce for the wrapper and inner transactions. The advantage is that the space required would be much less, since we only need two 8-byte values in storage for every address which is signing transactions. On the other hand, the handling of the counter for the inner transaction will be performed entirely in wasm (transactions and VPs), making it a bit less efficient. This solution also imposes a strict ordering on the transactions issued by the same address.

NOTE: this solution requires the ability to yield execution from Wasmer which is not implemented yet.

InnerTx

We will implement the protection entirely in Wasm: the check of the counter will be carried out by the validity predicates while the actual writing of the counter in storage will be done by the transactions themselves.

To do so, the SignedTxData attached to the transaction will hold the current value of the counter in storage:


pub struct SignedTxData {
    /// The original tx data bytes, if any
    pub data: Option<Vec<u8>>,
    /// The optional transaction counter for replay protection
    pub tx_counter: Option<u64>,
    /// The signature is produced on the tx data concatenated with the tx code
    /// and the timestamp.
    pub sig: common::Signature,
}

The counter must reside in SignedTxData and not in the data itself because this must be checked by the validity predicate which is not aware of the specific transaction that took place but only of the changes in the storage; therefore, the VP is not able to correctly deserialize the data of the transactions since it doesn't know what type of data the bytes represent.

The counter will be signed as well to protect it from tampering and grant it the same guarantees explained at the beginning of this document.

The wasm transaction will simply read the value from storage and increase its value by one. The target key in storage will be the following:

/Address/inner_tx_counter: u64

The VP of the source address will then check the validity of the signature and, if it's deemed valid, will proceed to check that the pre-value of the counter in storage was equal to the one contained in the SignedTxData struct and that the post-value of the key in storage has been incremented by one: if any of these conditions doesn't hold, the VP will reject the transaction and prevent the changes from being applied to the storage.

In the specific case of a shielded transfer, since MASP already comes with replay protection as part of the Zcash design (see the MASP specs and Zcash protocol specs), the counter in SignedTxData is not required and therefore should be optional.

To implement replay protection for the inner transaction we will need to update all the VPs checking the transaction's signature to include the check on the transaction counter: at the moment the vp_user validity predicate is the only one to update. In addition, all the transactions involving SignedTxData should increment the counter.
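A minimal sketch of this pre/post check (hypothetical helper, assuming the pre and post values of /Address/inner_tx_counter have already been read from storage):

// Illustrative only: validate the counter transition required by the VP.
fn counter_change_is_valid(
    counter_pre: u64,
    counter_post: u64,
    tx_counter: Option<u64>, // the value carried in SignedTxData
) -> bool {
    match tx_counter {
        // the attached counter must match the pre state and be bumped by one
        Some(c) => c == counter_pre && counter_post == counter_pre + 1,
        // no counter attached (e.g. MASP shielded transfers): nothing to check here
        None => true,
    }
}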

WrapperTx

To protect this transaction we can implement an in-protocol mechanism. Since the wrapper transaction gets signed before being submitted to the network, we can leverage the tx_counter field of the SignedTxData already introduced for the inner tx.

In addition, we need another counter in the storage subspace of every address:

/Address/wrapper_tx_counter: u64

where Address is the one signing the transaction (the same implied by the pk field of the WrapperTx struct).

The check will consist of a signature check first followed by a check on the counter that will make sure that the counter attached to the transaction matches the one in storage for the signing address. This will be done in the process_proposal function so that validators can decide whether the transaction is valid or not; if it's not, then they will discard the transaction and skip to the following one.

Finally, in finalize_block, the ledger will update the counter key in storage, increasing its value by one. This will happen when the following conditions are met:

  • process_proposal has accepted the tx by validating its signature and transaction counter
  • The tx was correctly applied in finalize_block (for WrapperTx this simply means inclusion in the block and gas accounting)

Now, if a malicious user tried to replay this transaction, the tx_counter in the struct would no longer be equal to the one in storage and the transaction would be deemed invalid.
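
The following is a rough sketch of this in-protocol flow; the Storage handle and the wrapper_tx_counter_key, read_u64 and write_u64 helpers are hypothetical stand-ins for the actual ledger APIs:

// process_proposal: the counter carried by the wrapper must match the value
// currently in storage for the signing address
fn validate_wrapper_counter(storage: &Storage, signer: &Address, tx_counter: u64) -> bool {
    let key = wrapper_tx_counter_key(signer);
    tx_counter == storage.read_u64(&key).unwrap_or_default()
}

// finalize_block: bump the counter once the wrapper has been included in the
// block and accounted for; refuse to wrap around, since a wrapped counter
// would re-enable the replay of old transactions
fn bump_wrapper_counter(storage: &mut Storage, signer: &Address) {
    let key = wrapper_tx_counter_key(signer);
    let current = storage.read_u64(&key).unwrap_or_default();
    let next = current.checked_add(1).expect("transaction counter overflow");
    storage.write_u64(&key, next);
}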

Implementation details

In this section we'll cover some details of the replay protection mechanism that derive from the solution proposed above.

Storage counters

Replay protection will require interaction with the storage from both the protocol and Wasm. To do so we can take advantage of the StorageRead and StorageWrite traits to work with a single interface.

This implementation requires two transaction counters in storage for every address, so that the storage subspace of a given address looks like the following:

/Address/wrapper_tx_counter: u64
/Address/inner_tx_counter: u64

An implementation requiring a single counter in storage has been taken into consideration and discarded because that would not support batching; see the relative section for a more in-depth explanation.

For both the wrapper and inner transaction, the increase of the counter in storage is an important step that must be correctly executed. First, the implementation will return an error in case of a counter overflow to prevent wrapping, since this would allow for the replay of previous transactions. Also, we want to increase the counter as soon as we verify that the signature, the chain id and the passed-in transaction counter are valid. The increase should happen immediately after the checks for two reasons:

  • Prevent replay attack of a transaction in the same block
  • Update the transaction counter even in case the transaction fails, to prevent a possible replay attack in the future (since a transaction invalid at state Sx could become valid at state Sn where n > x)

For WrapperTx, the counter increase and fee accounting will be performed in finalize_block (as stated in the relative section).

For InnerTx, instead, the logic is not as straightforward. The transaction code will be executed in a Wasm environment (Wasmer) until it either completes or raises an exception. In case of success, the counter in storage will be updated correctly but, in case of failure, the protocol will discard all of the changes the transaction applied to the write-ahead-log, including the updated transaction counter. This is a problem because the transaction could be successfully replayed in the future if it becomes valid.

The ideal solution would be to interrupt the execution of the Wasm code after the transaction counter (if any) has been increased. This would allow performing a first run of the involved VPs and, if all of them accept the changes, let the protocol commit these changes before any possible failure. After that, the protocol would resume the execution of the transaction from the previous interrupt point until completion or failure, after which a second pass of the VPs is initiated to validate the remaining state modifications. In case of a VP rejection after the counter increase there would be no need to resume execution and the transaction could be immediately deemed invalid so that the protocol could skip to the next tx to be executed. With this solution, the counter update would be committed to storage regardless of a failure of the transaction itself.

Unfortunately, at the moment, Wasmer doesn't allow yielding from the execution.

If the transaction runs out of gas (given the gas_limit field of the wrapper), all the changes applied will be discarded from the WAL and will not affect the state of the storage. The inner transaction could then be rewrapped with a correct gas limit and replayed until the expiration time has been reached.

Batching and transaction ordering

This replay protection technique supports the execution of multiple transactions with the same address as source in a single block. Actually, the presence of the transaction counters and the checks performed on them now impose a strict ordering on the execution sequence (which can be an added value for some use cases). The correct execution of more than one transaction per source address in the same block is preserved as long as:

  1. The wrapper transactions are inserted in the block with the correct ascending order
  2. No hole is present in the counters' sequence
  3. The counter of the first transaction included in the block matches the expected one in storage

The conditions are enforced by the block proposer who has an interest in maximizing the amount of fees extracted by the proposed block. To support this incentive, we will charge gas and fees at the same moment in which we perform the counter increase explained in the storage counters section: this way we can avoid charging fees and gas if the transaction is invalid (invalid signature, wrong counter or wrong chain id), effectively incentivizing the block proposer to include only valid transactions and correctly reorder them to maximize the fees (see the block rejection section for an alternative solution that was discarded in favor of this).

If a missing transaction causes a hole in the sequence of transaction counters, the block proposer will include in the block all the transactions up to the missing one and discard all the ones following it, effectively preserving the correct ordering.

Correctly ordering the transactions is not enough to guarantee the correct execution. As already mentioned in the WrapperTx section, the block proposer and the validators also need to access the storage to check that the first transaction counter of a sequence is actually the expected one.

The entire counter ordering is only done on the WrapperTx: if the inner counter is wrong then the inner transaction will fail and the signer of the corresponding wrapper will be charged with fees. This incentivizes submitters to produce valid transactions and discourages malicious users from rewrapping and resubmitting old transactions.

Mempool checks

As a form of optimization to prevent mempool spamming, some of the checks that have been introduced in this document will also be brought to the mempool_validate function. Of course, we always refer to checks on the WrapperTx only. More specifically:

  • Check the ChainId field
  • Check the signature of the transaction against the pk field of the WrapperTx
  • Perform a limited check on the transaction counter

Regarding the last point, mempool_validate will check if the counter in the transaction is greater than or equal to the one in storage for the address signing the WrapperTx. A complete check (checking for strict equality) is not feasible, as described in the relative section.
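
The relaxed check reduces to a simple comparison, sketched below with the stored counter passed in as a plain value (the lookup of the signer's counter is elided):

fn mempool_counter_check(tx_counter: u64, storage_counter: u64) -> bool {
    // A counter lower than the stored one can never become valid again, so the
    // tx is rejected outright; anything greater than or equal is tentatively
    // accepted and left for the block proposer to order (strict equality is
    // only enforced later, in process_proposal)
    tx_counter >= storage_counter
}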

Alternatives considered

In this section we list some possible solutions that were taken into consideration during the writing of this solution but were eventually discarded.

Mempool counter validation

The idea of performing a complete validation of the transaction counters in the mempool_validate function was discarded because of a possible flaw.

Suppose a client sends five transactions (counters from 1 to 5). The mempool of the next block proposer is not guaranteed to receive them in order: something on the network could shuffle the transactions so that they arrive in the following order: 2-3-4-5-1. Now, since we validate every single transaction to be included in the mempool in the exact order in which we receive it, we would discard the first four transactions and only accept the last one, the one with counter 1. The next block proposer might still have the four discarded transactions in its mempool (since they were not added to the previous block and therefore should not have been evicted from the other mempools, see block rejection) and could therefore include them in the following block. But still, a process that could have ended in a single block actually took two blocks. Moreover, there are two more issues:

  • The next block proposer might have the remaining transactions out of order in his mempool as well, effectively propagating the same issue down to the next block proposer
  • The next block proposer might not have these transactions in his mempool at all

Finally, transactions that are not allowed into the mempool don't get propagated to the other peers, making their inclusion in a block even harder. It is better to avoid a complete filter on the transactions based on their order in the mempool: instead, we perform a simpler check and then let the block proposer rearrange the transactions correctly when proposing the block.

In-protocol protection for InnerTx

An alternative implementation could place the protection for the inner tx in protocol, just like the wrapper one, based on the transaction counter inside SignedTxData. The check would run in process_proposal and the update in finalize_block, just like for the wrapper transaction. This implementation, though, shows two drawbacks:

  • it implies the need for a hard fork in case of a modification of the replay protection mechanism
  • it's not clear from the outside who the source of the inner transaction is, as that depends on the specific code of the transaction itself. We could use a specific whitelisted tx set to define when a counter is required (this would not work for future programmable transactions), but even so, we would have no way to define which address should be targeted for replay protection (a blocking issue)

In-protocol counter increase for InnerTx

In the storage counter section we mentioned the issue of increasing the transaction counter for an inner tx even in case of failure. A possible solution that we took into consideration and discarded was to increase the counter from protocol in case of a failure.

This is technically feasible since the protocol is aware of the keys modified by the transaction and also of the results of the validity predicates (useful in case the transaction updated more than one counter in storage). It is then possible to recover the value and reapply the change directly from protocol. This logic, though, is quite scattered, since it effectively splits the management of the counter for the InnerTx between Wasm and protocol, while our initial intent was to keep it entirely in Wasm.

Single counter in storage

We can't use a single transaction counter in storage because this would prevent batching.

As an example, if a client (with a current counter in storage holding value 5) generates two transactions to be included in the same block, signing both the outer and the inner (default behavior of the client), it would need to generate the following transaction counters:

[
    T1: (WrapperCtr: 5, InnerCtr: 6),
    T2: (WrapperCtr: 7, InnerCtr: 8)
]

Now, the current execution model of Namada includes the WrapperTx in a block first to then decrypt and execute the inner tx in the following block (respecting the committed order of the transactions). That would mean that the outer tx of T1 would pass validation and immediately increase the counter to 6 to prevent a replay attack in the same block. Now, the outer tx of T2 will be processed but it won't pass validation because it carries a counter with value 7 while the ledger expects 6.

To fix this, one could think of setting the counters as follows:

[
    T1: (WrapperCtr: 5, InnerCtr: 7),
    T2: (WrapperCtr: 6, InnerCtr: 8)
]

This way both the transactions will be considered valid and executed. The issue is that, if the second transaction is not included in the block (for any reason), then the first transaction (the only one remaining at this point) will fail. In fact, after the outer tx has correctly increased the counter in storage to value 6, the block will be accepted. In the next block the inner transaction will be decrypted and executed but this last step will fail since the counter in SignedTxData carries a value of 7 and the counter in storage has a value of 6.

To cope with this there are two possible ways. The first is that, instead of checking the exact value of the counter in storage and increasing it by one, we could check that the transaction carries a counter greater than or equal to the one in storage and write that value (rather than incrementing) to storage. The problem with this approach is the lack of support for strict ordering of execution.

The second option is to keep the usual increase strategy of the counter (increase by one and check for strict equality) and simply use two different counters in storage for each address. The transactions will then look like this:

[
    T1: (WrapperCtr: 5, InnerCtr: 5),
    T2: (WrapperCtr: 6, InnerCtr: 6)
]

Since the order of inclusion of the WrapperTxs forces the same order of execution for the inner ones, both transactions can be correctly executed and correctness will be maintained even if T2 doesn't make it into the block (note that the counter for an inner tx and the corresponding wrapper one don't need to coincide).

Block rejection

The implementation proposed in this document has one flaw when it comes to discontinuous transactions. If, for example, for a given address, the counter in storage for the WrapperTx is 5 and the block proposer receives, in order, transactions 6, 5 and 8, the proposer will have an incentive to correctly order transactions 5 and 6 to gain the fees that he would otherwise lose. Transaction 8 will never be accepted by the validators no matter the ordering (since they will expect tx 7, which got lost): this effectively means that the block proposer has no incentive to include this transaction in the block because it would gain him no fees but, at the same time, he doesn't really have a disincentive to include it, since in this case the validators will simply discard the invalid tx but accept the rest of the block, granting the proposer his fees on all the other transactions.

A similar scenario happens in the case of a single transaction that is not the expected one (e.g. tx 5 when 4 is expected), or for other types of inconsistency, like a wrong ChainId or an invalid signature.

It is then up to the block proposer whether or not to include these kinds of transactions: a malicious proposer could do so to spam the block without suffering any penalty. The lack of fees could be a strong enough deterrent against this behavior, together with the fact that the only damage caused to the chain would be spamming the blocks.

If one wanted to completely prevent this scenario, the solution would be to reject the entire block: this way the proposer would have an incentive to behave correctly (by not including these transactions in the block) to gain the block fees. This would shrink the size of blocks in case of unfair block proposers, but it would also slow down the block creation process, since after a block rejection a new Tendermint round has to be initiated.

Wrapper-bound InnerTx

This alternative ties an InnerTx to the corresponding WrapperTx. By doing so, it becomes impossible to rewrap an inner transaction and, therefore, all the attacks related to this practice become unfeasible. This mechanism requires even less space in storage (only a 64-bit counter for every address signing wrapper transactions) and only one check on the wrapper counter in protocol. As a con, it requires communication between the signer of the inner transaction and that of the wrapper during the transaction construction. This solution also imposes a strict ordering on the wrapper transactions issued by the same address.

To do so we will have to change the current definition of the two tx structs to the following:


pub struct WrapperTx {
    /// The fee to be paid for including the tx
    pub fee: Fee,
    /// Used to determine an implicit account of the fee payer
    pub pk: common::PublicKey,
    /// Max amount of gas that can be used when executing the inner tx
    pub gas_limit: GasLimit,
    /// Lifetime of the transaction, also determines which decryption key will be used
    pub expiration: DateTimeUtc,
    /// Chain identifier for replay protection
    pub chain_id: ChainId,
    /// Transaction counter for replay protection
    pub tx_counter: u64,
    /// the encrypted payload
    pub inner_tx: EncryptedTx,
}

pub struct Tx {
    pub code: Vec<u8>,
    pub data: Option<Vec<u8>>,
    pub timestamp: DateTimeUtc,
    pub wrapper_commit: Option<Hash>,
}

The Wrapper transaction no longer holds the inner transaction hash while the inner one now holds a commit to the corresponding wrapper tx in the form of the hash of a WrapperCommit struct, defined as:


pub struct WrapperCommit {
    pub pk: common::PublicKey,
    pub tx_counter: u64,
    pub expiration: DateTimeUtc,
    pub chain_id: ChainId,
}

The pk-tx_counter couple contained in this struct uniquely identifies a single WrapperTx (since a valid tx_counter is unique given the address) so that the inner one is now bound to this specific wrapper. The remaining fields, expiration and chain_id, bind these two values between the inner and wrapper transaction, given their importance in terms of safety (see the relative section). Note that the wrapper_commit field must be optional because the WrapperTx struct itself gets converted to a Tx struct before submission but it doesn't need any commitment.
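
As an illustration of how the commitment could be computed, the sketch below Borsh-serializes the WrapperCommit struct and hashes it; the choice of SHA-256 here is only an assumption for the example, and the struct is assumed to derive BorshSerialize:

use borsh::BorshSerialize;
use sha2::{Digest, Sha256};

fn wrapper_commit_hash(commit: &WrapperCommit) -> [u8; 32] {
    // Serialize the commitment fields and hash them; the resulting digest is
    // what the inner Tx stores in its wrapper_commit field
    let bytes = commit.try_to_vec().expect("Borsh serialization cannot fail");
    let digest = Sha256::digest(&bytes);
    let mut out = [0u8; 32];
    out.copy_from_slice(&digest);
    out
}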

Both the inner and wrapper tx get signed on their hash, as usual, to prevent tampering with data. When a wrapper gets processed by the ledger, we first check the validity of the signature, checking that none of the fields were modified: this means that the inner tx embedded within the wrapper is, in fact, the intended one. This last statement means that no external attacker has tampered with the data, but the tampering could still have been performed by the signer of the wrapper before signing the wrapper transaction.

If this check (and others, explained later in the checks section) passes, then the inner tx gets decrypted in the following block proposal process. At this time we check that the order in which the inner txs are inserted in the block matches that of the corresponding wrapper txs in the previous block. To do so, we rely on an in-storage queue holding the hash of the WrapperCommit struct computed from the wrapper tx. From the inner tx we extract the WrapperCommit hash and check that it matches that in the queue: if they don't match, it means that the inner tx has been reordered or rewrapped and we reject the block. Note that, since we have already checked the wrapper at this point, the only way to rewrap the inner tx would be to also modify its commitment (changing at least the tx_counter field), otherwise the checks on the wrapper would have spotted the inconsistency and rejected the tx.

If this check passes then we can send the inner transaction to the wasm environment for execution: if the transaction is signed, then at least one VP will check its signature to spot possible tampering of the data (especially by the wrapper signer, since this specific case cannot be checked before this step) and, if this is the case, will reject this transaction and no storage modifications will be applied.

In summary:

  • The InnerTx carries a unique identifier of the WrapperTx embedding it
  • Both the inner and wrapper txs are signed on all of their data
  • The signature check on the wrapper tx ensures that the inner transaction is the intended one and that this wrapper has not been used to wrap a different inner tx. It also verifies that no tampering happened with the inner transaction by a third party. Finally, it ensures that the public key is the one of the signer
  • The check on the WrapperCommit ensures that the inner tx has not been reordered nor rewrapped (this last one is a non-exhaustive check, inner tx data could have been tampered with by the wrapper signer)
  • The signature check of the inner tx performed in the VP guarantees that no data of the inner tx has been tampered with, effectively verifying the correctness of the previous check (WrapperCommit)

This sequence of controls makes it no longer possible to rewrap an InnerTx which is now bound to its wrapper. This implies that replay protection is only needed on the WrapperTx since there's no way to extract the inner one, rewrap it and replay it.

WrapperTx checks

In mempool_validate and process_proposal we will perform some checks on the wrapper tx to validate it. These will involve:

  • Valid signature
  • Enough funds to pay for the fee
  • Valid chainId
  • Valid transaction counter
  • Valid expiration

These checks can all be done before executing the transactions themselves. The check on the gas cannot be done ahead of time and we'll deal with it later. If any of these fails, the transaction should be considered invalid and the action to take will be one of the following:

  1. If the checks fail on the signature, chainId, expiration or transaction counter, then this transaction will be forever invalid, regardless of the possible evolution of the ledger's state. There's no need to include the transaction in the block nor to increase the transaction counter. Moreover, we cannot include this transaction in the block to charge a fee (as a sort of punishment) because these errors may not depend on the signer of the tx (could be due to malicious users or simply a delay in the tx inclusion in the block)
  2. If the checks fail only because of an insufficient balance, the wrapper should be kept in the mempool for future inclusion in case the funds become available
  3. If all the checks pass validation we will include the transaction in the block to increase the counter and charge the fee

Note that, regarding point one, there's a distinction to be made about an invalid tx_counter, which could be invalid either because it is old or because it is ahead of the expected value. To solve this last issue (a counter greater than the expected one), we have to introduce the concept of a lifetime (or timeout) for the transactions: basically, the WrapperTx will hold an extra field called expiration stating the maximum time up until which the submitter is willing to see the transaction executed. After the specified time the transaction will be considered invalid and discarded regardless of all the other checks. This way, in case of a transaction with a counter greater than expected, it is sufficient to wait until after the expiration before submitting more transactions, so that the counter in storage is not modified (it is kept invalid for the transaction under observation) and replaying that tx would result in a rejection.
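
The expiration check itself is a simple comparison against the time of the block being built, sketched here under the assumption that DateTimeUtc values are ordered and that the block time is available to the validity checks:

fn check_expiration(wrapper: &WrapperTx, block_time: DateTimeUtc) -> bool {
    // A wrapper whose expiration lies in the past is invalid regardless of any
    // other condition and is never included in the block
    wrapper.expiration > block_time
}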

This actually generalizes to a more broad concept. In general, a transaction is valid at the moment of submission, but after that, a series of external factors (ledger state, etc.) might change the mind of the submitter who's now not interested in the execution of the transaction anymore. By introducing this new field we are introducing a new constraint in the transaction's contract, where the ledger will make sure to prevent the execution of the transaction after the deadline and, on the other side, the submitter commits himself to the result of the execution at least until its expiration. If the expiration is reached and the transaction has not been executed the submitter can decide to submit a new, identical transaction if he's still interested in the changes carried by it.

In our design, the expiration will hold until the transaction is executed; once it's executed, either with success or failure, the tx_counter will be increased and the transaction will no longer be replayable. In essence, the transaction submitter commits himself to one of these three conditions:

  • Transaction is invalid regardless of the specific state
  • Transaction is executed (either with success or not) and the transaction counter is increased
  • Expiration time has passed

The first condition satisfied will invalidate further executions of the same tx.

The expiration parameter also justifies step 2 of the previous bullet points, which states that if the validity checks fail only because of an insufficient balance to pay for fees then the transaction should be kept in the mempool for a future execution. Without it, the transaction could potentially be executed at any future moment (provided that the counter is still valid), possibly going against the mutated interests of the submitter. With the expiration parameter, now, the submitter commits himself to accepting the execution of the transaction up to the specified time: it's going to be his responsibility to provide a sensible value for this parameter. Given this constraint the transaction will be kept in the mempool only up until the expiration (since it would become invalid after that in any case), to prevent the mempool from increasing too much in size.

This mechanism can also be applied to another scenario. Suppose a transaction was not propagated to the network by a node (or a group of colluding nodes). Now, this tx might be valid, but it doesn't get inserted into a block. Without an expiration, if the submitter doesn't submit any other transaction (which gets included in a block to increase the transaction counter), this tx can be replayed (better, applied, since it was never executed in the first place) at a future moment in time when the submitter might not be willing to execute it any more.

Since the signer of the wrapper may be different from the one of the inner, we also need to include this expiration field in the WrapperCommit struct, to prevent the signer of the wrapper from setting a lifetime which is in conflict with the interests of the inner signer. Note that adding a separate lifetime for the wrapper alone (which would require two separate checks) doesn't carry any benefit: a wrapper with a lifetime greater than the inner's would make no sense since the inner would fail. Restricting the lifetime would work, but it also means that the wrapper could prevent a valid inner transaction from being executed. We will then keep a single expiration field specifying the wrapper tx's max time (the inner one will actually be executed one block later because of the execution mechanism of Namada).

To prevent the signer of the wrapper from submitting the transaction to a different chain, the ChainId field should also be included in the commit.

Finally, if the transaction runs out of gas (based on the provided gas_limit field of the wrapper) we don't need to take any action: by this time the transaction counter will have already been incremented and the tx is not replayable anymore. In theory, we don't even need to increment the counter since the only way this transaction could become valid is a change in the way gas is accounted, which might require a fork anyway, and consequently a change in the required ChainId. However, since we can't tell the gas consumption before the inner tx has been executed, we cannot anticipate this check.

WrapperCommit

The fields of WrapperTx not included in WrapperCommit are at the discretion of the WrapperTx producer. These fields are not included in the commit for one of these two reasons:

  • They depend on the specific state of the wrapper signer and cannot be forced (like fee, since the wrapper signer must have enough funds to pay for those)
  • They are not a threat (in terms of replay attacks) to the signer of the inner transaction in case of failure of the transaction

In a certain way, the WrapperCommit not only binds an InnerTx to a wrapper, but effectively allows the inner to control the wrapper by requesting some specific parameters for its creation and binding these parameters between the two transactions: this allows us to apply the same constraints to both txs while performing the checks on the wrapper only.

Transaction creation process

To craft a transaction, the process will now be the following (optional steps are only required if the signer of the inner differs from that of the wrapper):

  • (Optional) The InnerTx constructor requests, from the wrapper signer, his public key and the tx_counter to be used
  • The InnerTx is constructed in its entirety with also the wrapper_commit field to define the constraints of the future wrapper
  • The produced Tx struct gets signed over all of its data (with SignedTxData), producing a new, signed Tx
  • (Optional) The inner tx produced is sent to the WrapperTx producer together with the WrapperCommit struct (required since the inner tx only holds the hash of it)
  • The signer of the wrapper constructs a WrapperTx compliant with the WrapperCommit fields
  • The produced WrapperTx gets signed over all of its fields

Compared to a solution not binding the inner tx to the wrapper one, this solution requires the exchange of 3 messages (request tx_counter, receive tx_counter, send InnerTx) between the two signers (in case they differ), instead of one. However, it allows the signer of the inner to send the InnerTx to the wrapper signer already encrypted, guaranteeing a higher level of safety: only the WrapperCommit struct should be sent in the clear, but this doesn't reveal any sensitive information about the inner transaction itself.

Block space allocator

Block space in Tendermint is a resource whose management is relinquished to the running application. This section covers the design of an abstraction that facilitates the process of transparently allocating space for transactions in a block at some height H, whilst upholding the safety and liveness properties of Namada.

On block sizes in Tendermint and Namada

Block sizes in Tendermint (configured through the MaxBytes consensus parameter) are bounded below by a minimum value and above by a hard cap, reflecting the header, evidence of misbehavior (used to slash Byzantine validators) and transaction data, as well as any potential protobuf serialization overhead. Some of these data are dynamic in nature (e.g. evidence of misbehavior), so the total size reserved to transactions in a block at some height H might not be the same as another block's, say, at some height H'. During Tendermint's PrepareProposal ABCI phase, applications receive a max_tx_bytes parameter whose value already accounts for the total space available for transactions at some height H. Namada does not rely on the max_tx_bytes parameter of RequestPrepareProposal; instead, app-side validators configure a maximum proposal size parameter at genesis (or through governance) and set Tendermint blocks' MaxBytes parameter to its upper bound.

Transaction batch construction

During Tendermint's PrepareProposal ABCI phase, Namada (the ABCI server) is fed a set of transactions whose total combined size (i.e. the sum of the bytes occupied by each transaction) may be greater than the maximum proposal size. Therefore, consensus round leaders are responsible for selecting a batch of transactions whose total combined size in bytes fits within the maximum proposal size.

To stay within these bounds, block space is allotted to different kinds of transactions: decrypted, protocol and encrypted transactions. Each kind of transaction gets about a third of the proposal's worth of allotted space, in an abstract container dubbed the TxBin. A transaction may be dumped into a TxBin, resulting in a successful operation, or in an error if the transaction is rejected due to lack of space in the TxBin or if its size overflows (i.e. does not fit in) the TxBin. Block proposers continue dumping transactions of a given kind into a TxBin until a rejection error is encountered, or until there are no more transactions of that kind left to allocate. The BlockSpaceAllocator contains three TxBin instances, responsible for holding decrypted, protocol and encrypted transactions.
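
A minimal, self-contained sketch of the TxBin abstraction described above follows (the real allocator also tracks which transactions were dumped; here only byte accounting is shown):

/// A byte-capacity container that either accepts a transaction or reports why
/// it could not
struct TxBin {
    allotted: usize,
    occupied: usize,
}

enum AllocError {
    /// The bin has room in principle, but not for this transaction
    Rejected,
    /// The transaction is larger than the bin's total capacity
    Overflow,
}

impl TxBin {
    fn with_capacity(allotted: usize) -> Self {
        Self { allotted, occupied: 0 }
    }

    /// Try to dump a transaction of `tx_len` bytes into the bin
    fn try_dump(&mut self, tx_len: usize) -> Result<(), AllocError> {
        if tx_len > self.allotted {
            return Err(AllocError::Overflow);
        }
        if self.occupied + tx_len > self.allotted {
            return Err(AllocError::Rejected);
        }
        self.occupied += tx_len;
        Ok(())
    }
}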

[Diagram: block space allocator TxBins]

During occasional Namada protocol events, such as DKG parameter negotiation, all available block space should be reserved to protocol transactions, therefore the BlockSpaceAllocator was designed as a state machine, whose state transitions depend on the state of Namada. The states of the BlockSpaceAllocator are the following:

  1. BuildingDecryptedTxBatch - As the name implies, during this state the decrypted transactions TxBin is filled with transactions of the same type. Honest block proposers will only include decrypted transactions in a block at a fixed height H if encrypted transactions were available at H-1. The decrypted transactions should be included in the same order as the encrypted transactions of block H-1. Likewise, all decrypted transactions available at H must be included.
  2. BuildingProtocolTxBatch - In a similar manner, during this BlockSpaceAllocator state, the protocol transactions TxBin is populated with transactions of the same type. Contrary to the first state, allocation stops as soon as the respective TxBin runs out of space for some transaction. The TxBin for protocol transactions is allotted half of the remaining block space, after decrypted transactions have been allocated.
  3. BuildingEncryptedTxBatch - This state behaves a lot like the previous state, with one addition: it takes a parameter that guards the encrypted transactions TxBin, which in effect splits the state into two sub-states. When WithEncryptedTxs is active, we fill block space with encrypted transactions (as the name implies); orthogonal to this mode of operation, there is WithoutEncryptedTxs, which, as the name implies, does not allow encrypted transactions to be included in a block. The TxBin for encrypted transactions is allotted up to a third of the total block space, bounded by the space remaining after decrypted and protocol transactions have been allocated.
  4. FillingRemainingSpace - The final state of the BlockSpaceAllocator. Due to the short-circuit behavior of a TxBin on allocation errors, some space may be left unutilized at the end of the third state. At this state, the only kinds of transactions that are left to fill the available block space are of type encrypted and protocol, but encrypted transactions are forbidden to be included, to avoid breaking their invariant regarding allotted block space (i.e. encrypted transactions can only occupy up to a third of the total block space for a given height). As such, only protocol transactions are allowed at the fourth and final state of the BlockSpaceAllocator.

For a fixed block height H, if no encrypted transactions are included in the respective proposals at H and H-1, the block decided for height H will only contain protocol transactions. Similarly, since at most a third of the available block space at a fixed height H is reserved to encrypted transactions, and decrypted transactions at H+1 will take up (at most) the same amount of space as encrypted transactions at height H, each transaction kind's TxBin will generally get allotted about a third of the available block space.
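
The allocator states described above can be summarized with the following sketch; the names mirror the list above, while the transition logic and the per-state TxBin bookkeeping are omitted:

enum EncryptedTxMode {
    WithEncryptedTxs,
    WithoutEncryptedTxs,
}

enum AllocatorState {
    // Fill the decrypted txs bin with the txs decided at the previous height
    BuildingDecryptedTxBatch,
    // Fill the protocol txs bin with half of the remaining space
    BuildingProtocolTxBatch,
    // Fill (or skip) the encrypted txs bin, depending on the mode
    BuildingEncryptedTxBatch { mode: EncryptedTxMode },
    // Fill whatever space is left with protocol txs only
    FillingRemainingSpace,
}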

Example

Consider the following diagram:

[Diagram: block space allocator example across heights H to H+4]

We denote D, P and E as decrypted, protocol and encrypted transactions, respectively.

  • At height H, block space is evenly divided in three parts, one for each kind of transaction type.
  • At height H+1, we do not include encrypted transactions in the proposal, therefore protocol transactions are allowed to take up to two thirds of the available block space.
  • At height H+2, no encrypted transactions are included either. Notice that no decrypted transactions were included in the proposal, since at height H+1 we did not decide on any encrypted transactions. In sum, only protocol transactions are included in the proposal for the block with height H+2.
  • At height H+3, we propose encrypted transactions once more. Just like in the previous scenario, no decrypted transactions are available. Encrypted transactions are capped at a third of the available block space, so the remaining two thirds of the available block space are filled with protocol transactions.
  • At height H+4, allocation returns to its normal operation, thus block space is divided in three equal parts for each kind of transaction type.

Transaction batch validation

Batches of transactions proposed during ABCI's PrepareProposal phase are validated at the ProcessProposal phase. The validation conditions are relaxed, compared to the rigid block structure imposed on blocks during PrepareProposal (i.e. with decrypted, protocol and encrypted transactions appearing in this order, as exemplified above). Let us fix H as the height of the block currently being decided through Tendermint's consensus mechanism and P as the batch of transactions proposed at H as the block's payload. To vote on P, each active validator checks:

  • If the length of P in bytes is not greater than the maximum proposal size.
  • If P does not contain more than a third of the maximum proposal size worth of encrypted transactions.
    • While not directly checked, our batch construction invariants guarantee that we will constrain decrypted transactions to occupy up to a third of the available block space at H (or any block height, in fact).
  • If all decrypted transactions corresponding to the encrypted transactions of height H-1 have been included in the proposal P for height H.
  • That no encrypted transactions were included in the proposal P, if no encrypted transactions should be included at H.
    • N.b. the conditions to reject encrypted transactions are still not clearly specced out, therefore they will be left out of this section, for the time being.

Should any of these conditions not be met at some arbitrary round of height H, all honest validators will reject the proposal P. Byzantine validators are permitted to re-order the layout of P typically derived from the BlockSpaceAllocator under normal operation; however, this should not compromise the safety and liveness properties of Namada. The rigid layout of P is simply a consequence of allocating block space in different phases.

On validator set updates

Validator set updates, one type of protocol transactions decided through BFT consensus in Namada, are fundamental to the liveness properties of the Ethereum bridge, thus, ideally we would also check if these would be included once per epoch at the ProcessProposal stage. Unfortunately, achieving a quorum of signatures for a validator set update between two adjacent block heights through ABCI alone is not feasible. Hence, the Ethereum bridge is not a live distributed system, since there is the possibility to cross an epoch boundary without constructing a valid proof for some validator set update. In practice, however, it is nearly impossible for the bridge to get "stuck", as validator set updates are eagerly issued at the start of an epoch, whose length should be long enough for consensus(*) to be reached on a single validator set update.

(*) Note that we loosely used consensus here to refer to the process of acquiring a quorum (e.g. more than 2/3 of the voting power, by stake) of signatures on a single validator set update. "Chunks" of a proof (i.e. individual votes) are decided and batched together, until a complete proof is constructed.

We cover validator set updates in detail in the Ethereum bridge section.

Governance

Governance proposals that update the maximum proposal size parameter, taking effect at some arbitrary block height H, should leave enough room for all decrypted transactions from H-1 to fit in the block at H. Subsequent block heights should eventually lead to allotted block space converging to about a third of the maximum proposal size for each kind of transaction type.

Multi-asset shielded pool

The multi-asset shielded pool (MASP) is an extension to the Sapling circuit which adds support for sending arbitrary assets.

See the following documents:

MASP integration spec

Overview

The overall aim of this integration is to have the ability to provide a multi-asset shielded pool following the MASP spec as an account on the current Namada blockchain implementation.

Shielded pool validity predicate (VP)

The shielded value pool can be implemented as an established Namada account with a validity predicate which handles the verification of shielded transactions. Similarly to Zcash, the asset balance of the shielded pool itself is transparent - that is, from the transparent perspective, the MASP is just an account holding assets. The shielded pool VP has the following functions:

  • Accepts only valid transactions involving assets moving in or out of the pool.
  • Accepts valid shielded-to-shielded transactions, which don't move assets from the perspective of transparent Namada.
  • Publishes the note commitment and nullifier reveal Merkle trees.

To make this possible, the host environment needs to provide verification primitives to VPs. One possibility is to provide a single high-level operation to verify transaction output descriptions and proofs, but another is to provide cryptographic functions in the host environment and implement the verifier as part of the VP.

In future, the shielded pool will be able to update the commitment and nullifier Merkle trees as it receives transactions. This could likely be achieved via the temporary storage mechanism added for IBC, with the trees finalized with each block.

The input to the VP is the following set of state changes:

  • updates to the shielded pool's asset balances
  • new encrypted notes
  • updated note and nullifier tree states (partial, because we only have the last block's anchor)

and the following data which is ancillary from the ledger's perspective:

  • spend descriptions, which destroy old notes:
struct SpendDescription {
  // Value commitment to amount of the asset in the note being spent
  cv: jubjub::ExtendedPoint,
  // Last block's commitment tree root
  anchor: bls12_381::Scalar,
  // Nullifier for the note being nullified
  nullifier: [u8; 32],
  // Re-randomized version of the spend authorization key
  rk: PublicKey,
  // Spend authorization signature
  spend_auth_sig: Signature,
  // Zero-knowledge proof of the note and proof-authorizing key
  zkproof: Proof<Bls12>,
}
  • output descriptions, which create new notes:
struct OutputDescription {
  // Value commitment to amount of the asset in the note being created
  cv: jubjub::ExtendedPoint,
  // Derived commitment tree location for the output note
  cmu: bls12_381::Scalar,
  // Note encryption public key
  epk: jubjub::ExtendedPoint,
  // Encrypted note ciphertext
  c_enc: [u8; ENC_CIPHERTEXT_SIZE],
  // Encrypted note key recovery ciphertext
  c_out: [u8; OUT_CIPHERTEXT_SIZE],
  // Zero-knowledge proof of the new encrypted note's location
  zkproof: Proof<Bls12>,
}

Given these inputs:

The VP must verify the proofs for all spend and output descriptions (bellman::groth16), as well as the signature for spend notes.

Encrypted notes from output descriptions must be published in the storage so that holders of the viewing key can view them; however, the VP does not concern itself with plaintext notes.

Nullifiers and commitments must be appended to their respective Merkle trees in the VP's storage as well, which is a transaction-level rather than a block-level state update.

In addition to the individual spend and output description verifications, the final transparent asset value change described in the transaction must equal the pool asset value change. As an additional sanity check, the pool's balance of any asset may not end up negative.

NB: Shielded-to-shielded transactions in an asset do not, from the ledger's perspective, transact in that asset; therefore, the asset's own VP cannot run as described above because the shielded pool is asset-hiding.

Client capabilities

The client should be able to:

  • Make transactions with a shielded sender and/or receiver
  • Scan the blockchain to determine shielded assets in one's possession
  • Generate payment addresses from viewing keys, which are in turn derived from spending keys

To make shielded transactions, the client has to be capable of creating and spending notes, and generating proofs which the pool VP verifies.

Unlike the VP, which must have the ability to do complex verifications, the transaction code for shielded transactions can be comparatively simple: it delivers the transparent value changes in or out of the pool, if any, and proof data computed offline by the client.

The client and wallet must be extended to support the shielded pool and the cryptographic operations needed to interact with it. From the perspective of the transparent Namada protocol, a shielded transaction is just a data write to the MASP storage, unless it moves value in or out of the pool. The client needs the capability to create notes, transactions, and proofs of transactions, but it has the advantage of simply being able to link against the MASP crates, unlike the VP.

Protocol

Note Format

The note structure encodes an asset's type, its quantity and its owner. More precisely, it has the following format:

struct Note {
  // Diversifier for recipient address
  d: jubjub::SubgroupPoint,
  // Diversified public transmission key for recipient address
  pk_d: jubjub::SubgroupPoint,
  // Asset value in the note
  value: u64,
  // Pedersen commitment trapdoor
  rseed: Rseed,
  // Asset identifier for this note
  asset_type: AssetType,
  // Arbitrary data chosen by note sender
  memo: [u8; 512],
}

For cryptographic details and further information, see Note Plaintexts and Memo Fields. Note that this structure is required only by the client; the VP only handles commitments to this data.

Diversifiers are selected by the client and used to diversify addresses and their associated keys. The value and asset_type fields identify the asset value and type. Asset identifiers are derived from asset names, which are arbitrary strings (in this case, token/other asset VP addresses). The derivation must deterministically result in an identifier which hashes to a valid curve point.

Transaction Format

The transaction data structure comprises a list of transparent inputs and outputs as well as a list of shielded inputs and outputs. More precisely:

struct Transaction {
    // Transaction version
    version: u32,
    // Transparent inputs
    tx_in: Vec<TxIn>,
    // Transparent outputs
    tx_out: Vec<TxOut>,
    // The net value of Sapling spends minus outputs
    value_balance_sapling: Vec<(u64, AssetType)>,
    // A sequence of Spend descriptions
    spends_sapling: Vec<SpendDescription>,
    // A sequence of Output descriptions
    outputs_sapling: Vec<OutputDescription>,
    // A binding signature on the SIGHASH transaction hash,
    binding_sig_sapling: [u8; 64],
}

For the cryptographic constraints and further information, see Transaction Encoding and Consensus. Note that this structure slightly deviates from Sapling due to the fact that value_balance_sapling needs to be provided for each asset type.

Transparent Input Format

The input data structure describes how much of each asset is being deducted from certain accounts. More precisely, it is as follows:

struct TxIn {
    // Source address
    address: Address,
    // Asset identifier for this input
    token: AssetType,
    // Asset value in the input
    amount: u64,
    // A signature over the hash of the transaction
    sig: Signature,
    // Used to verify the owner's signature
    pk: PublicKey,
}

Note that the signature and public key are required to authenticate the deductions.

Transparent Output Format

The output data structure describes how much is being added to certain accounts. More precisely, it is as follows:

struct TxOut {
    // Destination address
    address: Address,
    // Asset identifier for this output
    token: AssetType,
    // Asset value in the output
    amount: u64,
}

Note that in contrast to Sapling's UTXO based approach, our transparent inputs/outputs are based on the account model used in the rest of Namada.

Shielded Transfer Specification

Transfer Format

Shielded transactions are implemented as an optional extension to transparent ledger transfers. The optional shielded field in combination with the source and target field determine whether the transfer is shielding, shielded, or unshielded. See the transfer format below:

/// A simple bilateral token transfer
#[derive(..., BorshSerialize, BorshDeserialize, ...)]
pub struct Transfer {
    /// Source address will spend the tokens
    pub source: Address,
    /// Target address will receive the tokens
    pub target: Address,
    /// Token's address
    pub token: Address,
    /// The amount of tokens
    pub amount: Amount,
    /// The unused storage location at which to place TxId
    pub key: Option<String>,
    /// Shielded transaction part
    pub shielded: Option<Transaction>,
}

Conditions

Below, the conditions necessary for a valid shielded or unshielded transfer are outlined:

  • A shielded component equal to None indicates a transparent Namada transaction
  • Otherwise the shielded component must have the form Some(x) where x has the transaction encoding specified in the Multi-Asset Shielded Pool Specs
  • Hence for a shielded transaction to be valid:
    • the Transfer must satisfy the usual conditions for Namada ledger transfers (i.e. sufficient funds, ...) as enforced by token and account validity predicates
    • the Transaction must satisfy the conditions specified in the Multi-Asset Shielded Pool Specification
    • the Transaction and Transfer together must additionally satisfy the below boundary conditions intended to ensure consistency between the MASP validity predicate ledger and Namada ledger
  • A key equal to None indicates an unpinned shielded transaction; one that can only be found by scanning and trial-decrypting the entire shielded pool
  • Otherwise the key must have the form Some(x) where x is a String such that there exists no prior accepted transaction with the same key

Boundary Conditions

Below, the conditions necessary to maintain consistency between the MASP validity predicate ledger and Namada ledger are outlined:

  • If the target address is the MASP validity predicate, then no transparent outputs are permitted in the shielded transaction
  • If the target address is not the MASP validity predicate, then:
    • there must be exactly one transparent output in the shielded transaction and:
      • its public key must be the hash of the target address bytes - this prevents replay attacks altering transfer destinations
        • the hash is specifically a RIPEMD-160 of a SHA-256 of the input bytes (see the sketch after this list)
      • its value must equal that of the containing transfer - this prevents replay attacks altering transfer amounts
      • its asset type must be derived from the token address raw bytes and the current epoch once Borsh serialized from the type (Address, Epoch):
        • the dependency on the address prevents replay attacks altering transfer asset types
        • the current epoch requirement prevents attackers from claiming extra rewards by forging the time when they began to receive rewards
        • the derivation must be done as specified in 0.3 Derivation of Asset Generator from Asset Identifier
  • If the source address is the MASP validity predicate, then:
    • no transparent inputs are permitted in the shielded transaction
    • the transparent transaction value pool's amount must equal the containing wrapper transaction's fee amount
    • the transparent transaction value pool's asset type must be derived from the containing wrapper transaction's fee token
      • the derivation must be done as specified in 0.3 Derivation of Asset Generator from Asset Identifier
  • If the source address is not the MASP validity predicate, then:
    • there must be exactly one transparent input in the shielded transaction and:
      • its value must equal that of amount in the containing transfer - this prevents stealing/losing funds from/to the pool
      • its asset type must be derived from the token address raw bytes and the current epoch once Borsh serialized from the type (Address, Epoch):
        • the address dependency prevents stealing/losing funds from/to the pool
        • the current epoch requirement ensures that withdrawers receive their full reward when leaving the shielded pool
        • the derivation must be done as specified in 0.3 Derivation of Asset Generator from Asset Identifier
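
The two derivations referenced in the conditions above can be sketched as follows, assuming the Address and Epoch types from the Namada codebase derive BorshSerialize and using the sha2, ripemd and borsh crates purely for illustration:

use borsh::BorshSerialize;
use ripemd::Ripemd160;
use sha2::{Digest, Sha256};

// The transparent output's public key hash: RIPEMD-160 of SHA-256 of the
// target address bytes
fn target_pk_hash(target_address_bytes: &[u8]) -> [u8; 20] {
    let sha = Sha256::digest(target_address_bytes);
    let ripemd = Ripemd160::digest(sha);
    let mut out = [0u8; 20];
    out.copy_from_slice(&ripemd);
    out
}

// The asset identifier input: the Borsh serialization of (Address, Epoch),
// from which the asset generator is then derived per section 0.3 of the MASP
// specification
fn asset_identifier_bytes(token: &Address, epoch: Epoch) -> Vec<u8> {
    (token, epoch)
        .try_to_vec()
        .expect("Borsh serialization cannot fail")
}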

Remarks

Below are miscellaneous remarks on the capabilities and limitations of the current MASP implementation:

  • The gas fees for shielded transactions are charged to the signer just as is done for transparent transactions
    • As a consequence, an amount exceeding the gas fees must be available in a transparent account in order to execute an unshielding transaction - this prevents denial of service attacks
  • Using the MASP sentinel transaction key for transaction signing indicates that gas be drawn from the transaction's transparent value pool
    • In this case, the gas will be taken from the MASP transparent address if the shielded transaction is proven to be valid
  • With knowledge of its key, a pinned shielded transaction can be directly downloaded or proven non-existent without scanning the entire blockchain
    • It is recommended that pinned transaction's key be derived from the hash of its payment address, something that both transaction parties would share
    • This key must not be reused, in order to avoid revealing that multiple transactions are going to the same entity

Multi-Asset Shielded Pool Specification Differences from Zcash Protocol Specification

The Multi-Asset Shielded Pool Specification referenced above is in turn an extension to the Zcash Protocol Specification. Below, the changes from the Zcash Protocol Specification assumed to have been integrated into the Multi-Asset Shielded Pool Specification are listed:

  • 3.2 Notes
  • 4.1.8 Commitment
    • NoteCommit and ValueCommit must be parameterized by asset type
  • 4.7.2 Sending Notes (Sapling)
    • Sender must also be able to select asset type
    • NoteCommit and hence cm must be parameterized by asset type
    • ValueCommit and hence cv must be parameterized by asset type
    • The note plaintext tuple must include asset type
  • 4.8.2 Dummy Notes (Sapling)
    • A random asset type must also be selected
    • NoteCommit and hence cm must be parameterized by asset type
    • ValueCommit and hence cv must be parameterized by asset type
  • 4.13 Balance and Binding Signature (Sapling)
    • The Sapling balance value is now defined as the net value of Spend and Convert transfers minus Output transfers.
    • The Sapling balance value is no longer a scalar but a vector of pairs comprising values and asset types
    • Addition, subtraction, and equality checks of Sapling balance values is now done component-wise
    • A Sapling balance value is defined to be non-negative iff each of its components is non-negative
    • ValueCommit and the value base must be parameterized by asset type
    • Proofs must be updated to reflect the presence of multiple value bases
  • 4.19.1 Encryption (Sapling and Orchard)
    • The note plaintext tuple must include asset type
  • 4.19.2 Decryption using an Incoming Viewing Key (Sapling and Orchard)
    • The note plaintext extracted from the decryption must include asset type
  • 4.19.3 Decryption using a Full Viewing Key (Sapling and Orchard)
    • The note plaintext extracted from the decryption must include asset type
  • 5.4.8.2 Windowed Pedersen commitments
    • NoteCommit must be parameterized by asset type
  • 5.4.8.3 Homomorphic Pedersen commitments (Sapling and Orchard)
    • HomomorphicPedersenCommit, ValueCommit, and value base must be parameterized by asset type
  • 5.5 Encodings of Note Plaintexts and Memo Fields
    • The note plaintext tuple must include asset type
    • The Sapling note plaintext encoding must use 32 bytes in between d and v to encode asset type
    • Hence the total size of a note plaintext encoding should be 596 bytes
  • 5.6 Encodings of Addresses and Keys
    • Bech32m [BIP-0350] is used instead of Bech32 [ZIP-173] to further encode the raw encodings
  • 5.6.3.1 Sapling Payment Addresses
    • For payment addresses on the Testnet, the Human-Readable Part is "patest"
  • 7.1 Transaction Encoding and Consensus
    • valueBalanceSapling is no longer scalar. Hence it should be replaced by two components:
      • nValueBalanceSapling: a compactSize indicating number of asset types spanned by balance
      • a length nValueBalanceSapling sequence of 40 byte values where:
        • the first 32 bytes encode the asset type
        • the last 8 bytes are an int64 encoding asset value
    • In between vSpendsSapling and nOutputsSapling are two additional rows:
      • First row:
        • Bytes: Varies
        • Name: nConvertsMASP
        • Data Type: compactSize
        • Description: The number of Convert descriptions in vConvertsMASP
      • Second row:
        • Bytes: 64*nConvertsMASP
        • Name: vConvertsMASP
        • Data Type: ConvertDescription[nConvertsMASP]
        • Description: A sequence of Convert descriptions, encoded as described in the following section.
  • 7.4 Output Description Encoding and Consensus
    • The encCiphertext field must be 612 bytes in order to make 32 bytes of room to encode the asset type
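
The vector-valued balance described in the changes to section 4.13 can be illustrated with a minimal Rust sketch; the types here are placeholders for illustration only and are not the actual MASP implementation types:


#![allow(unused)]
fn main() {
use std::collections::BTreeMap;

/// Hypothetical asset type identifier (the 32-byte hash described in the
/// asset name schema); not the actual MASP type.
type AssetType = [u8; 32];

/// A multi-asset balance: a map from asset type to signed value,
/// i.e. a vector of (asset type, value) pairs.
#[derive(Default, Clone, PartialEq, Eq)]
struct Balance(BTreeMap<AssetType, i64>);

impl Balance {
    /// Component-wise addition of two balances.
    fn add(mut self, other: &Balance) -> Balance {
        for (asset, value) in &other.0 {
            *self.0.entry(*asset).or_insert(0) += *value;
        }
        self
    }

    /// A balance is non-negative iff each of its components is non-negative.
    fn is_non_negative(&self) -> bool {
        self.0.values().all(|v| *v >= 0)
    }
}
}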

Additional Sections

In addition to the above components of shielded transactions inherited from Zcash, we have the following:

Convert Descriptions

Each transaction includes a sequence of zero or more Convert descriptions.

Let ValueCommit.Output be as defined in 4.1.8 Commitment. Let B[Sapling Merkle] be as defined in 5.3 Constants. Let ZKConvert be as defined in 4.1.13 Zero-Knowledge Proving System.

A convert description comprises (cv, rt, pi) where

  • cv: ValueCommit.Output is a value commitment to the value of the conversion note
  • rt: B[Sapling Merkle] is an anchor for the current conversion tree or an archived conversion tree
  • pi: ZKConvert.Proof is a zk-SNARK proof with primary input (rt, cv) for the Convert statement defined in Burn and Mint conversion transactions in MASP below.

Convert Description Encoding

Let pi_{ZKConvert} be the zk-SNARK proof of the corresponding Convert statement. pi_{ZKConvert} is encoded in the zkproof field of the Convert description.

An abstract Convert description, as described above, is encoded in a transaction as an instance of a ConvertDescription type:

  • First Entry
    • Bytes: 32
    • Name: cv
    • Data Type: byte[32]
    • Description: A value commitment to the value of the conversion note, LEBS2OSP_256(repr_J(cv)).
  • Second Entry
    • Bytes: 32
    • Name: anchor
    • Data Type: byte[32]
    • Description: A root of the current conversion tree or an archived conversion tree, LEBS2OSP_256(rt^Sapling).
  • Third Entry
    • Bytes: 192
    • Name: zkproof
    • Data Type: byte[192]
    • Description: An encoding of the zk-SNARK proof pi_{ZKConvert} (see 5.4.10.2 Groth16).

Required Changes to ZIP 32: Shielded Hierarchical Deterministic Wallets

Below, the changes from ZIP 32: Shielded Hierarchical Deterministic Wallets assumed to have been integrated into the Multi-Asset Shielded Pool Specification are listed:

Storage Interface Specification

Namada nodes provide interfaces that allow Namada clients to query for specific pinned transactions, transactions accepted into the shielded pool, and allowed conversions between various asset types. Below we describe the ABCI paths and the encodings of the responses to each type of query.

Shielded Transfer Query

In order to determine shielded balances belonging to particular keys or to spend one's balance, it is necessary to download the transactions that transferred the assets to you. To this end, the nth transaction in the shielded pool can be obtained by getting the value at the storage path <MASP-address>/tx-<n>. Note that indexing is 0-based. This will return a quadruple of the type below:

(
    /// the epoch of the transaction's block
    Epoch,
    /// the height of the transaction's block
    BlockHeight,
    /// the index of the transaction within the block
    TxIndex,
    /// the actual bytes of the transfer
    Transfer
)

Transfer is defined as above and (Epoch, BlockHeight, TxIndex) = (u64, u64, u32).
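
As an illustration, a client-side sketch of this query might look as follows; the query_storage_bytes helper and the placeholder type aliases stand in for whatever storage-query client and concrete types the implementation actually provides:


#![allow(unused)]
fn main() {
use borsh::BorshDeserialize;

// Placeholder aliases matching the tuple described above.
type Epoch = u64;
type BlockHeight = u64;
type TxIndex = u32;
// `Transfer` stands in for the actual transfer type defined elsewhere in
// the implementation.
type Transfer = Vec<u8>;

/// Hypothetical helper that performs an ABCI query for the raw bytes at a
/// storage path of a Namada node.
fn query_storage_bytes(path: &str) -> Option<Vec<u8>> {
    unimplemented!("issue an ABCI query against a Namada node")
}

/// Fetch the nth transaction in the shielded pool (0-based indexing).
fn fetch_shielded_tx(
    masp_address: &str,
    n: u64,
) -> Option<(Epoch, BlockHeight, TxIndex, Transfer)> {
    let path = format!("{masp_address}/tx-{n}");
    let bytes = query_storage_bytes(&path)?;
    // The stored value is the Borsh-encoded quadruple described above.
    <(Epoch, BlockHeight, TxIndex, Transfer)>::try_from_slice(&bytes).ok()
}
}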

Transaction Count Query

When scanning the shielded pool, it is sometimes useful to know when to stop scanning. This can be done by querying the storage path head-tx, which will return a u64 indicating the total number of transactions in the shielded pool.

Pinned Transfer Query

A transaction pinned to the key x in the shielded pool can be obtained indirectly by getting the value at the storage path <MASP address>/pin-<x>. This will return the index of the desired transaction within the shielded pool encoded as a u64. At this point, the above shielded transaction query can then be used to obtain the actual transaction bytes.
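
A sketch of this two-step pinned lookup, reusing the hypothetical query_storage_bytes helper from the previous sketch:


#![allow(unused)]
fn main() {
use borsh::BorshDeserialize;

/// Resolve a pin key to the index of the pinned transaction in the
/// shielded pool.
fn fetch_pinned_tx_index(masp_address: &str, key: &str) -> Option<u64> {
    // Hypothetical helper; see the previous sketch.
    fn query_storage_bytes(_path: &str) -> Option<Vec<u8>> {
        unimplemented!()
    }
    // Step 1: read the u64 index stored at <MASP address>/pin-<x>.
    let bytes = query_storage_bytes(&format!("{masp_address}/pin-{key}"))?;
    u64::try_from_slice(&bytes).ok()
    // Step 2 (not shown): feed the returned index into the shielded
    // transfer query, i.e. the path <MASP address>/tx-<index>.
}
}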

Conversion Query

In order for MASP clients to convert older asset types to their latest variants, they need to query nodes for currently valid conversions. This can be done by querying the ABCI path conv/<asset-type> where asset-type is a hexadecimal encoding of the asset identifier as defined in Multi-Asset Shielded Pool Specification. This will return a quadruple of the type below:

(
    /// the token address of this asset type
    Address,
    /// the epoch of this asset type
    Epoch,
    /// the amount to be treated as equivalent to zero
    Amount,
    /// the Merkle path to this conversion
    MerklePath<Node>
)

If no conversions are available, the amount will be exactly zero, otherwise the amount must contain negative units of the queried asset type.

Asset name schema

MASP notes carry balances that are some positive integer amount of an asset type. Per both the MASP specification and the implementation, the asset identifier is a 32-byte Blake2s hash of an arbitrary asset name string, although the full 32-byte space is not used because the identifier must itself hash to an elliptic curve point (currently guaranteed by incrementing a nonce until the hash is a curve point). The final curve point is the asset type proper, used in computations.

The following is a schema for the arbitrary asset name string intended to support various uses - currently fungible tokens and NFTs, but possibly others in future.

The asset name string is built up from a number of segments, joined by a separator. We use / as the separator.

Segments may be one of the following:

  • Controlling address segment: a Namada address which controls the asset. For example, this is the fungible token address for a fungible token. This segment must be present, and must be first; it should in theory be an error to transparently transact in assets of this type without invoking the controlling address's VP. This should be achieved automatically by all transparent changes involving storage keys under the controlling address.

  • Epoch segment: An integer greater than zero, representing an epoch associated with an asset type. Mainly for use by the incentive circuit. This segment must be second if present. (should it be required? could be 0 if the asset is unepoched) (should it be first so we can exactly reuse storage keys?) This must be less than or equal to the current epoch.

  • Address segment: An ancillary address somehow associated with the asset. This address probably should have its VP invoked, and is probably in the transparent balance storage key.

  • ID segment: A nonnegative (?) integer identifying something, e.g., an NFT id. (should probably not be a u64 exactly - for instance, I think ERC721 NFTs are u256)

  • Text segment: A piece of text, normatively but not necessarily short (50 characters or less), identifying something. For compatibility with non-numeric storage keys used in transparent assets generally; an example might be a ticker symbol for a specific sub-asset. The valid character set is the same as for storage keys.

For example, suppose there is a virtual stock certificate asset, incentivized (somehow), at transparent address addr123, which uses storage keys like addr123/[owner address]/[ticker symbol]/[id]. The asset name segments would be:

  • Controlling address: just addr123
  • Epoch: the epoch when the note was created
  • Owner address: an address segment
  • Ticker symbol: a text segment
  • ID: an ID segment

This could be serialized to, e.g., addr123/addr456/tSPY/i12345.
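
A minimal sketch of assembling such an asset name string from segments, assuming the / separator and the segment order described above; the segment kinds and their textual rendering are illustrative only (for instance, how ID segments are tagged, as in i12345, is not specified here):


#![allow(unused)]
fn main() {
/// Illustrative segment kinds; the real implementation may encode these
/// differently.
enum Segment {
    ControllingAddress(String),
    Epoch(u64),
    Address(String),
    Id(u64),
    Text(String),
}

/// Join segments with the `/` separator to form the asset name string.
/// The resulting string is then hashed (Blake2s, with a nonce incremented
/// until the hash is a valid curve point) to derive the asset type.
fn asset_name(segments: &[Segment]) -> String {
    segments
        .iter()
        .map(|s| match s {
            Segment::ControllingAddress(a) | Segment::Address(a) => a.clone(),
            Segment::Epoch(e) => e.to_string(),
            // How ID and text segments are tagged (e.g. the "i" prefix in
            // the example above) is left to the implementation.
            Segment::Id(i) => i.to_string(),
            Segment::Text(t) => t.clone(),
        })
        .collect::<Vec<_>>()
        .join("/")
}
}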

Burn and Mint conversion transactions in MASP

Introduction

Ordinarily, a MASP transaction that does not shield or unshield assets must achieve a homomorphic net value balance of 0. Since every asset type has a pseudorandomly derived asset generator, it is not ordinarily feasible to achieve a net value balance of 0 for the transaction without each asset type independently having a net value balance of 0. Therefore, intentional burning and minting of assets typically requires a public "turnstile" where some collection of assets are unshielded, burned or minted in a public transaction, and then reshielded. Since this turnstile publicly reveals asset types and amounts, privacy is affected.

The goal is to design an extension to MASP that allows for burning and minting assets according to a predetermined, fixed, public ratio, but without explicitly publicly revealing asset types or amounts in individual transactions.

Approach

In the MASP, each Spend or Output circuit only verifies the integrity of spending or creation of a specific note, and does not verify the integrity of a transaction as a whole. To ensure that a transaction containing Spend and Output descriptions does not violate the invariants of the shielded pool (such as the total unspent balance of each asset in the pool) the value commitments are added homomorphically and this homomorphic sum is opened to reveal the transaction has a net value balance of 0. When assets are burned or minted in a MASP transaction, the homomorphic net value balance must be nonzero, and offset by shielding or unshielding a corresponding amount of each asset.

Instead of requiring the homomorphic sum of Spend and Output value commitments to sum to 0, burning and minting of assets can be enabled by allowing the homomorphic sum of Spend and Output value commitments to sum to either 0 or a multiple of an allowed conversion ratio. For example, if distinct assets A and B can be converted in a 1-1 ratio (meaning one unit of A can be burned to mint one unit of B) then the Spend and Output value commitments may sum to a nonzero value.
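
As a worked example with purely illustrative assets and amounts: suppose $\{(A, -1), (B, 1)\}$ is an allowed conversion with asset generator $vb^{\mathsf{mint}} = -vb_A + vb_B$. A transaction that spends a note of 3 A, outputs a note of 3 B, and includes a Convert value commitment of value 3 then balances, because (ignoring the commitment randomness, which cancels via the binding signature):

$$[3]\, vb_A - [3]\, vb_B + [3]\,(-vb_A + vb_B) = 0$$

Neither asset balances individually, yet the homomorphic sum is zero; this is exactly the flexibility that the Convert value commitment introduces.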

Allowed conversions

Let $A_1, \ldots, A_n$ be distinct asset types. An allowed conversion is a list of tuples $\{(A_i, v_i)\}_{i=1}^{n}$ where the $v_i$ are signed 64-bit integers.

The asset generator of an allowed conversion is defined to be $vb = \sum_{i=1}^{n} [v_i]\, vb_{A_i}$, where $vb_{A_i}$ is the asset generator of asset $A_i$.

Each allowed conversion is committed to a Jubjub point using a binding Bowe-Hopwood commitment of its asset generator (it is not necessary to be hiding). All allowed conversion commitments are stored in a public Merkle tree, similar to the Note commitment tree. Since the contents of this tree are entirely public, allowed conversions may be added, removed, or modified at any time.

Convert circuit

In order for an unbalanced transaction containing burns and mints to get a net value balance of zero, one or more value commitments burning and minting assets must be added to the value balance. Similar to how Spend and Output circuits check the validity of their respective value commitments, the Convert circuit checks the validity and integrity of such a value commitment, namely that:

  1. there exists an allowed conversion commitment in the Merkle tree, and
  2. the imbalance in the value commitment is a multiple of that allowed conversion's asset generator.

In particular, the Convert circuit takes public input:

$$(rt, cv^{\mathsf{mint}})$$

and private input:

$$(\mathsf{AllowedConversion}, cm, (path, pos), v, rcv)$$

and the circuit checks:

  1. Merkle path validity: $(path, pos)$ is a valid Merkle path from $cm$ to $rt$.
  2. Allowed conversion commitment integrity: $cm$ opens to $vb^{\mathsf{mint}}$, the asset generator of the allowed conversion.
  3. Value commitment integrity: $cv^{\mathsf{mint}} = [8 \cdot v]\, vb^{\mathsf{mint}} + [rcv]\, \mathcal{R}$, where $\mathcal{R}$ is the value commitment randomness base.

Note that 8 is the cofactor of the Jubjub curve.

Balance check

Previously, the transaction consisted of Spend and Output descriptions, and a value balance check that the value commitment opens to 0. Now, the transaction validity includes:

  1. Checking the Convert description includes a valid and current anchor $rt$
  2. Checking the value commitment opens to 0

Directionality

Directionality of allowed conversions must be enforced as well. That is, the conversion value $v$ must be a non-negative 64-bit integer. If negative values of $v$ were allowed (or equivalently, unboundedly large values of $v$ in the prime-order scalar field of the Jubjub curve) then an allowed conversion could happen in the reverse direction, burning the assets intended to be minted and vice versa.

Cycles

It is also critical not to allow cycles. For example, if one allowed conversion converts $A$ into $B$ and another converts $B$ back into more than one unit of $A$, then an unlimited amount of $A$ may be minted from a nonzero amount of $A$, since the two conversions can be applied in a loop arbitrarily many times.

Alternative approaches

It may theoretically be possible to implement similar mechanisms with only the existing Spend and Output circuits. For example, a Merkle tree of many Notes could be created with asset generator $vb_1 - vb_2$ and many different values, allowing anyone to Spend these public Notes, which will only balance if proper amounts of asset type 1 are Spent and asset type 2 are Output.

However, the Nullifier integrity check of the Spend circuit reveals the nullifier of each of these Notes. This removes the privacy of the conversion as the public nullifier is linkable to the allowed conversion. In addition, each Note has a fixed value, preventing arbitrary value conversions.

Conclusion

In principle, as long as the Merkle tree only contains allowed conversions, this should permit the allowed conversions while maintaining other invariants. Note that since the asset generators are not derived in the circuit, all sequences of values and asset types are allowed.

Convert Circuit

Convert Circuit Description

The high-level description of the Convert circuit can be found in the Burn and Mint conversion transactions section above.

The Convert circuit provides a mechanism by which burning and minting of assets can be enabled: Convert value commitments are added to the transaction, and the homomorphic sum of Spend, Output, and Convert value commitments is required to be zero.

The Convert value commitment is constructed from an AllowedConversion which was published earlier in the AllowedConversion Tree. The AllowedConversion defines the allowed conversion between assets. The AllowedConversion Tree is a Merkle hash tree stored in the ledger.

AllowedConversion

An AllowedConversion is, in essence, a compound asset type which contains distinct asset types and the corresponding conversion ratios.

An AllowedConversion is an array of tuples $(\mathsf{AssetType}_i, \mathsf{value}_i)$ where:

  • $\mathsf{AssetType}_i$: is a bytestring representing the asset identifier of the note.
  • $\mathsf{value}_i$: is a signed 64-bit integer in the range $\{-2^{63} .. 2^{63} - 1\}$.

Calculate:

$$vb^{\mathsf{AllowedConversion}} = \sum_i [\mathsf{value}_i]\, vb_{\mathsf{AssetType}_i}, \qquad cm^{\mathsf{AllowedConversion}} = \mathsf{PedersenHashToPoint}\big(\mathsf{repr}_{\mathbb{J}}(vb^{\mathsf{AllowedConversion}})\big)$$

Note that PedersenHashToPoint is used the same as in NoteCommitment for now.

An AllowedConversion can be issued, removed, or modified as a public conversion rule by consensus authorization and is stored in the AllowedConversion Tree as a leaf node.

An AllowedConversion can be used by proving its existence in the AllowedConversion Tree (the latest root anchor must be used), and then generating a Convert value commitment to be used in the transaction.

Convert Value Commitment

A Convert value commitment is computed from a tuple $(\mathsf{AllowedConversion}, v)$ where:

  • $v$: is an unsigned integer representing the value of the conversion, in the range $\{0 .. 2^{64} - 1\}$.

Choose independent uniformly random commitment trapdoors:

$$rcv \xleftarrow{R} \mathsf{ValueCommit.GenTrapdoor}()$$

Check that $vb^{\mathsf{AllowedConversion}}$ is of type $\mathsf{ValueCommit.Output}$, i.e. it is a valid ctEdwards curve point on the Jubjub curve (as defined in the original Sapling specification) not equal to $\mathcal{O}_{\mathbb{J}}$. If it is equal to $\mathcal{O}_{\mathbb{J}}$, the AllowedConversion contains an invalid asset identifier.

Calculate

$$cv = [v]\, vb^{\mathsf{AllowedConversion}} + [rcv]\, \mathcal{R}$$

Note that the randomness base $\mathcal{R}$ is used the same as in NoteCommitment for now.

AllowedConversion Tree

The AllowedConversion Tree has the same structure as the Note Commitment Tree and is an independent tree stored in the ledger.

  • $\mathsf{MerkleDepth}$: 32 (for now)
  • leaf node: $cm^{\mathsf{AllowedConversion}}$

Convert Statement

The Convert circuit has 47358 constraints.

Let $\mathsf{ValueCommit}$, $\mathsf{PedersenHashToPoint}$, $\mathsf{repr}_{\mathbb{J}}$, $\mathbb{J}$, $\mathcal{O}_{\mathbb{J}}$, and $\mathcal{R}$ be as defined in the original Sapling specification.

A valid instance of the Convert statement assures that given a primary input:

$$(rt, cv)$$

the prover knows an auxiliary input:

$$(path, pos, \mathsf{AllowedConversion}, vb^{\mathsf{AllowedConversion}}, v, rcv)$$

such that the following conditions hold:

  • AllowedConversion cm integrity: $cm^{\mathsf{AllowedConversion}} = \mathsf{PedersenHashToPoint}\big(\mathsf{repr}_{\mathbb{J}}(vb^{\mathsf{AllowedConversion}})\big)$

  • Merkle path validity: Either $v$ is 0; or $(path, pos)$ is a valid Merkle path of depth $\mathsf{MerkleDepth}$, as defined in the original Sapling specification, from $cm^{\mathsf{AllowedConversion}}$ to the anchor $rt$

  • Small order checks: $vb^{\mathsf{AllowedConversion}}$ is not of small order, i.e. $[h_{\mathbb{J}}]\, vb^{\mathsf{AllowedConversion}} \neq \mathcal{O}_{\mathbb{J}}$

  • Convert Value Commitment integrity: $cv = [v]\, vb^{\mathsf{AllowedConversion}} + [rcv]\, \mathcal{R}$

Return

Notes:

  • Public and auxiliary inputs MUST be constrained to have the types specified. In particular, see the original Sapling specification for required validity checks on compressed representations of Jubjub curve points. The ValueCommit.Output type also represents points, i.e. $\mathbb{J}$.
  • In the Merkle path validity check, each layer does not check that its input bit sequence is a canonical encoding (in $\{0 .. q_{\mathbb{J}} - 1\}$) of the integer from the previous layer.

Incentive Description

The incentive system provides a mechanism in which the old asset (input) is burned, the new asset (output) is minted in the same quantity, and the incentive asset (reward) is simultaneously minted according to the conversion ratio.

Incentive AllowedConversion Tree

As described in the Convert circuit section, the AllowedConversion Tree is an independent Merkle tree in the ledger and contains all the Incentive AllowedConversions.

Incentive AllowedConversion Struct

In general, there are three items in the Incentive AllowedConversion struct (though not all are mandatory): input, output, and reward. Each item has an asset type and a quantity (an i64, encoding the conversion ratio).

Note that the absolute values of the input and output quantities must be equal in the incentive system. The quantity of the input is negative and the quantity of the output is positive.

To guarantee that the input and output can be opened as the same underlying asset type in future unshielding transactions, the input and output assets share the same prefix description (e.g. BTC_1, BTC_2...BTC_n). To prevent repeated shielding and unshielding, and to encourage long-term contribution to the privacy pool, a postfix timestamp is used to distinguish the input and output assets. The timestamp depends on the update period and can be defined flexibly (e.g. date or epoch number). When a new timestamp occurs, the AllowedConversion will be updated to support conversion of all "historical" assets to the latest one. A sketch of such a struct is given below.
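
A minimal sketch of such a struct, with hypothetical asset names and an illustrative reward ratio (the concrete types in the implementation differ):


#![allow(unused)]
fn main() {
    /// Illustrative asset identifier: the asset name string before hashing.
    type AssetName = String;

    /// One (asset, quantity) entry of an AllowedConversion; quantities are
    /// i64 conversion ratios.
    struct ConversionEntry {
        asset: AssetName,
        quantity: i64,
    }

    // Burn one unit of the old timestamped asset, mint one unit of the new
    // one, and mint reward tokens at the configured ratio. The names and the
    // reward ratio here are purely illustrative.
    let incentive_conversion = vec![
        ConversionEntry { asset: "BTC_1".into(), quantity: -1 }, // input
        ConversionEntry { asset: "BTC_2".into(), quantity: 1 },  // output
        ConversionEntry { asset: "NAM".into(), quantity: 100 },  // reward
    ];

    // The absolute values of the input and output quantities match,
    // as required.
    assert_eq!(
        incentive_conversion[0].quantity.abs(),
        incentive_conversion[1].quantity
    );
}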

Incentive AllowedConversion Operation

Incentive AllowedConversion is governed by the incentive system, which is in charge of issuing new incentive plans, updating (modifying) conversions to the latest timestamp, and removing disabled conversion permissions.

  • Issue
    • Issue a new incentive plan for a new asset.
    • Issue a new AllowedConversion for the previous latest asset when a new timestamp occurs.
  • Update
    • For every new timestamp that occurs, update the existing AllowedConversion: keep the input but update the output to the latest asset, and modify the reward quantity according to the ratio.
  • Destroy
    • Delete the AllowedConversion from the tree.
  • Query Service
    • A service for querying the latest AllowedConversion; returns (anchor, path, AllowedConversion).

Workflow from User's Perspective

  • Shielding transaction
    • Query the latest timestamp for the target asset (non-latest will be rejected in tx execution)
    • Construct a target shielded note and shielding tx
    • Add the note to shielded pool if tx executes successfully (check the prefix and the latest timestamp).
  • Converting transaction
    • Construct spend notes from shielded notes
    • Construct convert notes (query the latest AllowedConversion)
    • Construct output notes
    • Construct convert tx
    • Get incentive output notes with latest timestamp and rewards if tx executes successfully
  • Unshielding transaction
    • Construct unshielding transaction
    • Get unshielded note if tx executes successfully (check the prefix)

Namada Trusted Setup

This spec assumes that you have some previous knowledge about Trusted Setup Ceremonies. If not, you might want to check the following two articles: Setup Ceremonies - ZKProof and Parameter Generation - Zcash.

The Namada Trusted Setup (TS) consists of running phase 2 of the MPC, which is a circuit-specific step to construct the multi-asset shielded pool circuit. Our phase 2 takes as input the Powers of Tau (phase 1) run by Zcash, which can be found here. You can sign up for the Namada Trusted Setup here.

Contribution flow

Overview

  1. Contributor compiles or downloads the CLI binary and runs it.
  2. CLI generates a 24-word BIP39 mnemonic.
  3. Contributor chooses whether to participate in the incentivized program.
  4. CLI joins the queue and waits for its turn.
  5. CLI downloads the challenge from the nearest AWS S3 bucket.
  6. Contributor can choose to contribute on the same machine or another.
  7. Contributor can choose to give its own seed of randomness or not.
  8. CLI contributes.
  9. CLI uploads the response to the challenge and notifies the coordinator with its personal info.

Detailed Flow

NOTE: add CLI flag --offline for the contributors that run on an offline machine. The flag will skip all the steps where there is communication with the coordinator and go straight to the generation of parameters in step 14.

  1. Contributor downloads the Namada CLI source from GitHub, compiles it, runs it.
  2. CLI asks the Contributor a couple of questions: a) Do you want to participate in the incentivized trusted setup? - Yes. Asks for personal information: full name and email. - No. Contribution will be identified as Anonymous.
  3. CLI generates an ed25519 key pair that will serve to communicate and sign requests with the HTTP REST API endpoints and to receive any potential rewards. The key pair is derived through BIP39: the mnemonic is used as a seed for the ed25519 key pair, and a 24-word seed phrase is presented to the user to back up (a sketch of this derivation appears after this list).
  4. CLI sends request to the HTTP REST API endpoint contributor/join_queue. Contributor is added to the queue of the ceremony.
  5. CLI polls the HTTP REST API endpoint contributor/queue_status periodically to get the current position in the queue. CLI also sends a periodic heartbeat request to the HTTP REST API endpoint contributor/heartbeat to tell the Coordinator that it is still connected. CLI shows the current position in the queue to the contributor.
  6. When Contributor is in position 0 in the queue, it leaves the queue. CLI can then acquire the lock of the next chunk by sending a request to the HTTP REST API endpoint contributor/lock_chunk.
  7. As soon as the file is locked on the Coordinator, the CLI asks for more info about the chunk through the endpoint download/chunk. This info is later needed to form a new contribution file and send it back to the Coordinator.
  8. CLI gets the actual blob challenge file by sending a request to the endpoint contributor/challenge.
  9. CLI saves challenge file namada_challenge_round_{round_number}.params in the root folder.
  10. CLI computes challenge hash.
  11. CLI creates contribution file namada_contribution_round_{round_number}_public_key_{public_key}.params in the root folder.
  12. Previous challenge hash is appended to the contribution file.
  13. Contributor decides whether to do the computation on the same machine or on a different machine. Do you want to use another machine to run your contribution? NOTE: be clear that if users choose to generate the parameters on an OFFLINE machine then they will have max. 15 min to do all the operations.
  • No. Participant will use the Online Machine to contribute. CLI runs contribute_masp(), which executes the same functions as the contribute() function from the masp-mpc crate. CLI asks the contributor whether they want to input a custom seed of randomness instead of using the combination of entropy and OS randomness. In both cases, they have to input something. CLI creates a contribution file signature ContributionFileSignature of the contribution.
  • Yes. Participant will use an Offline Machine to contribute. CLI displays a message with instructions about the challenge and contribution files. Participant can export the contribution file namada_contribution_round_{round_number}_public_key_{public_key}.params to the Offline Machine and contribute from there. When the Contributor is done, they give the path to the contribution file. Before continuing, CLI checks if the required files are available on the path and if the transformation of the parameters is valid. NOTE: CLI will display a countdown of 10 min with an extension capability of 5 min.
  1. CLI generates a json file saved locally that contains the full name, email, the public key used for the contribution, the contribution hash, the hash of the contribution file, the contribution file signature, plus a signature of the metadata -> display the signature and the message that needs to be posted somewhere over the Internet
  2. CLI uploads the chunk to the Coordinator by using the endpoint upload/chunk.
  3. When the contribution blob has been transferred successfully to the Coordinator, the CLI notifies the Coordinator that the chunk was uploaded by sending a request to the endpoint contributor/contribute_chunk.
  4. Coordinator verifies that the chunk is valid by executing the function verify_transform() from the crate masp-mpc. If the transformation is correct, it outputs the hash of the contribution.
  5. Coordinator calls the try_advance() function, which tries to advance to the next round as soon as all contributions are verified. If it succeeds, it removes the next contributor from the queue and adds them as a contributor to the next round.
  6. Repeat.
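
A sketch of the key handling in step 3, assuming the bip39, rand and ed25519-dalek crates; the exact derivation used by the real CLI may differ:


#![allow(unused)]
fn main() {
use bip39::Mnemonic;
use ed25519_dalek::{Signer, SigningKey};
use rand::RngCore;

// Generate 32 bytes of entropy, which yields a 24-word mnemonic.
let mut entropy = [0u8; 32];
rand::thread_rng().fill_bytes(&mut entropy);
let mnemonic = Mnemonic::from_entropy(&entropy).unwrap();
println!("back up this seed phrase: {mnemonic}");

// Use the first 32 bytes of the BIP39 seed as the ed25519 signing key.
// (Whether the real CLI applies an additional derivation step is an
// assumption here.)
let seed = mnemonic.to_seed("");
let seed_bytes: [u8; 32] = seed[..32].try_into().unwrap();
let signing_key = SigningKey::from_bytes(&seed_bytes);

// Requests to the coordinator endpoints are signed with this key.
let signature = signing_key.sign(b"contributor/join_queue");
let public_key = signing_key.verifying_key();
}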

Subcomponents

Our implementation of the TS consists of the following subcomponents:

  1. A fork of the Aleo Trusted Setup where we re-used the Coordinator Library (CL) contained in the phase1-coordinator folder.
  2. A HTTP REST API that interfaces with the CL.
  3. A CLI implementation that communicates with the HTTP REST API endpoints.
  4. An integration of the masp-mpc crypto functions (initialize, contribute and verify) in the CL.

Let's go through each subcomponent and describe it.

1. Coordinator Library (CL)

Description

The CL handles the operational steps of the ceremony: adding a new contributor to the queue, authenticating a contributor, sending and receiving challenge files, removing inactive contributors, reattributing a challenge file to a new contributor after a contributor has dropped, verifying contributions, creating new files, and so on.

"Historical" context

The CL was originally implemented for the Powers of Tau (phase 1 of the MPC). In this implementation, there was an attempt to optimise the overall operational complexity of the ceremony. In short, to reduce the time spent contributing to the parameters, the idea was to split the parameters of a round into multiple chunks that can then be distributed to multiple participants in parallel. That way, the computation time would be reduced by some linear factor on a per-round basis. You can read more about it in this article.

CL in the Namada context

Splitting the parameters into multiple chunks is useful if contributing takes hours. In our case, the contribution time is on the order of seconds, or minutes in the worst case, so we don't need to split the parameters into chunks. However, since we forked from the Aleo Trusted Setup, we still have some references to "chunked" things like folder, variable or function names. In our implementation, this means that we have one contributor and one chunk per round. For example, the contribution file of round i from a participant will always be located at transcript/round_{i}/chunk_0/contribution_1.unverified. To be able to re-use the CL without heavy refactoring, we decided to keep most of the Aleo code as it is and only change the parts that needed to be changed, more precisely the crypto functions (initialize, contribute and verify) and the coordinator config environment.rs.

2. HTTP REST API

Description

The HTTP REST API is a rocket web server that interfaces with the CL. All requests need to be signed to be accepted by the endpoints. It's the core of the ceremony: the Coordinator is started here together with utility functions like verify_contributions and update_coordinator.

Endpoints

  • /contributor/join_queue Add the incoming contributor to the queue of contributors.
  • /contributor/lock_chunk Lock a Chunk in the ceremony. This should be the first function called when attempting to contribute to a chunk. Once the chunk is locked, it is ready to be downloaded.
  • /contributor/challenge Get the challenge key on Amazon S3 from the Coordinator.
  • /upload/chunk Request the urls where to upload a Chunk contribution and the ContributionFileSignature.
  • /contributor/contribute_chunk Notify the Coordinator of a finished and uploaded Contribution. This will unlock the given Chunk.
  • /contributor/heartbeat Let the Coordinator know that the participant is still alive and participating (or waiting to participate) in the ceremony.
  • /update Update the Coordinator state. This endpoint is accessible only by the coordinator itself.
  • /stop Stop the Coordinator and shut the server down. This endpoint is accessible only by the coordinator itself.
  • /verify Verify all the pending contributions. This endpoint is accessible only by the coordinator itself.
  • /contributor/queue_status Get the queue status of the contributor.
  • /contributor/contribution_info Write ContributionInfo to disk.
  • /contribution_info Retrieve the contributions' info. This endpoint is accessible by anyone and does not require a signed request.
  • /healthcheck Retrieve healthcheck info. This endpoint is accessible by anyone and does not require a signed request.

Saved files

  • contributors/namada_contributor_info_round_{i}.json contributor info received from the participant. Same file as described below.
  • contributors.json list of contributors that can be exposed to a public API to be displayed on the website
[
   {
      "public_key":"very random public key",
      "is_another_machine":true,
      "is_own_seed_of_randomness":true,
      "ceremony_round":1,
      "contribution_hash":"some hash",
      "contribution_hash_signature":"some hash",
	// (optional) some timestamps that can be used to calculate and display the contribution time
      "timestamp":{
         "start_contribution":1,
         "end_contribution":7
      }
   },
   // ...
   {
      "public_key":"very random public key",
      "is_another_machine":true,
      "is_own_seed_of_randomness":true,
      "ceremony_round":42,
      "contribution_hash":"some hash",
      "contribution_hash_signature":"some hash",
      "timestamp":{
         "start_contribution":1,
         "end_contribution":7
      }
   }
]

3. CLI Implementation

Description

The CLI communicates with the HTTP REST API according to the overview of the contribution flow.

Saved files

  • namada_challenge_round_{round_number}.params challenge file downloaded from the Coordinator.
  • namada_contribution_round_{round_number}.params contribution file that needs to be uploaded to the Coordinator
  • namada_contributor_info_round_{round_number}.json contributor info that serves to identify participants.
{
   "full_name":"John Cage",
   "email":"john@cage.me",
   // ed25519 public key
   "public_key":"very random public key",
   // User participates in incentivized program or not
   "is_incentivized":true,
   // User can choose to contribute on another machine
   "is_another_machine":true,
   // User can choose the default method to generate randomness or his own.
   "is_own_seed_of_randomness":true,
   "ceremony_round":42,
   // hash of the contribution run by masp-mpc, contained in the transcript
   "contribution_hash":"some hash",
   // FIXME: is this necessary? so other user can check the contribution hash against the public key?
   "contribution_hash_signature":"signature of the contribution hash",
   // hash of the file saved on disk and sent to the coordinator
   "contribution_file_hash":"some hash",
   "contribution_file_signature":"signature of the contribution file",
   // Some timestamps to get performance metrics of the ceremony
   "timestamp":{
		// User starts the CLI
      "start_contribution":1,
      // User has joined the queue
      "joined_queue":2,
      // User has locked the challenge on the coordinator
      "challenge_locked":3,
      // User has completed the download of the challenge
      "challenge_downloaded":4,
      // User starts computation locally or downloads the file to another machine
      "start_computation":5,
      // User finishes computation locally or uploads the file from another machine
      "end_computation":6,
      // User attests that the file was uploaded correctly
      "end_contribution":7
   },
   "contributor_info_signature":"signature of the above fields and data"
}

4. Integration of the masp-mpc

Description

There are 4 crypto commands available in the CL under phase1-coordinator/src/commands/:

  1. aggregations.rs this was originally used to aggregate the chunks of the parameters. Since we don't have chunks, we don't need to aggregate anything. However, this logic was required and kept to transition between rounds. It doesn't affect any contribution file.
  2. computation.rs is used by a participant to contribute. The function contribute_masp() contains the logic from masp-mpc/src/bin/contribute.rs.
  3. initialization.rs is used to bootstrap the parameters on round 0 by giving as input the Zcash's Powers of Tau. The function initialize_masp() contains the logic from masp-mpc/src/bin/new.rs.
  4. verification.rs is used to verify the correct transformation of the parameters between contributions. The function verify_masp() contains the logic from masp-mpc/src/bin/verify_transform.rs.

Interoperability

Namada can interoperate permissionlessly with other chains through integration of the IBC protocol. Namada also includes a bespoke Ethereum bridge operated by the Namada validator set.

Ethereum bridge

The Namada - Ethereum bridge exists to mint wrapped ERC20 tokens on Namada, which can naturally be redeemed on Ethereum at a later time. Furthermore, it allows the minting of wrapped NAM (wNAM) tokens on Ethereum.

The Namada Ethereum bridge system consists of:

  • An Ethereum full node run by each Namada validator, for including relevant Ethereum events into Namada.
  • A set of validity predicates on Namada which roughly implement ICS20 fungible token transfers.
  • A set of Ethereum smart contracts.
  • An automated process to send validator set updates to the Ethereum smart contracts.
  • A relayer binary to aid in submitting transactions to Ethereum.

This basic bridge architecture should provide for almost-Namada consensus security for the bridge and free Ethereum state reads on Namada, plus bidirectional message passing with reasonably low gas costs on the Ethereum side.

Topics

Resources which may be helpful

There will be multiple types of events emitted. Validators should ignore improperly formatted events. Raw events from Ethereum are converted to a Rust enum type (EthereumEvent) by Namada validators before being included in vote extensions or stored on chain.


#![allow(unused)]
fn main() {
pub enum EthereumEvent {
    // we will have different variants here corresponding to different types
    // of raw events we receive from Ethereum
    TransfersToNamada(Vec<TransferToNamada>)
    // ...
}
}

Each event will be stored with a list of the validators that have ever seen it as well as the fraction of total voting power that has ever seen it. Once an event has been seen by 2/3 of voting power, it is locked into a seen state, and acted upon.

There is no adjustment across epoch boundaries - e.g. if an event is seen by 1/3 of voting power in epoch n, then seen by a different 1/3 of voting power in epoch m>n, the event will be considered seen in total. Validators may never vote more than once for a given event.
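
A simplified sketch of this bookkeeping (the real implementation stores a reduced voting-power fraction under /eth_msgs, as described in the Storage section below):


#![allow(unused)]
fn main() {
use std::collections::BTreeSet;

/// Simplified per-event tally; the ledger stores these fields under
/// /eth_msgs/$msg_hash (see the Storage section).
struct EventTally {
    seen_by: BTreeSet<String>, // validator addresses that have voted
    voting_power: u64,         // accumulated stake behind the event
    seen: bool,                // locked once >= 2/3 of total stake has voted
}

impl EventTally {
    /// Record a validator's vote; duplicate votes are ignored.
    fn record_vote(&mut self, validator: String, stake: u64, total_stake: u64) {
        if self.seen_by.insert(validator) {
            self.voting_power += stake;
            // "Seen" is one-way: once 2/3 of voting power has ever voted,
            // the event stays seen and is acted upon.
            if 3 * self.voting_power >= 2 * total_stake {
                self.seen = true;
            }
        }
    }
}
}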

Minimum confirmations

There will be a protocol-specified minimum number of confirmations that events must reach on the Ethereum chain, before validators can vote to include them on Namada. This minimum number of confirmations will be changeable via governance.

TransferToNamada events may include a custom minimum number of confirmations, that must be at least the protocol-specified minimum number of confirmations.

Validators must not vote to include events that have not met the required number of confirmations. Voting on unconfirmed events is considered a slashable offence.

Storage

To make including new events easy, we take the approach of always overwriting the state with the new state rather than applying state diffs. The storage keys involved are:

# all values are Borsh-serialized
/eth_msgs/$msg_hash/body : EthereumEvent
/eth_msgs/$msg_hash/seen_by : Vec<Address>
/eth_msgs/$msg_hash/voting_power: (u64, u64)  # reduced fraction < 1 e.g. (2, 3)
/eth_msgs/$msg_hash/seen: bool

$msg_hash is the SHA256 digest of the Borsh serialization of the relevant EthereumEvent.
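
For example, computing $msg_hash and the key prefix might look like the following sketch, assuming the borsh, sha2 and hex crates and a Borsh-serializable EthereumEvent as above; whether the hash is hex-encoded in the key is an assumption here:


#![allow(unused)]
fn main() {
use borsh::BorshSerialize;
use sha2::{Digest, Sha256};

/// Compute the key prefix /eth_msgs/$msg_hash for a given event, where
/// $msg_hash is the SHA256 digest of the event's Borsh serialization.
fn eth_msgs_prefix<E: BorshSerialize>(event: &E) -> String {
    let bytes = borsh::to_vec(event).expect("Borsh serialization should not fail");
    let msg_hash = Sha256::digest(&bytes);
    // Hex encoding of the digest is an illustrative choice here.
    format!("/eth_msgs/{}", hex::encode(msg_hash))
}

// The individual keys are then, e.g.:
//   {prefix}/body, {prefix}/seen_by, {prefix}/voting_power, {prefix}/seen
}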

Changes to this /eth_msgs storage subspace are only ever made by internal transactions crafted and applied by all nodes based on the aggregate of vote extensions for the last Tendermint round. That is, changes to /eth_msgs happen in block n+1 in a deterministic manner based on the vote extensions of the Tendermint round for block n.

The /eth_msgs storage subspace does not belong to any account and cannot be modified by transactions submitted from outside of the ledger via Tendermint. The storage will be guarded by a special validity predicate - EthSentinel - that is part of the verifier set by default for every transaction, but will be removed by the ledger code for the specific permitted transactions that are allowed to update /eth_msgs.

Including events into storage

For every Namada block proposal, the vote extension of a validator should include the events of the Ethereum blocks they have seen via their full node such that:

  1. The storage value /eth_msgs/$msg_hash/seen_by does not include their address.
  2. It's correctly formatted.
  3. It's reached the required number of confirmations on the Ethereum chain

Each event that a validator is voting to include must be individually signed by them. If the validator is not voting to include any events, they must still provide a signed vote extension indicating this.

The vote extension data field will be a Borsh-serialization of something like the following.


#![allow(unused)]
fn main() {
pub struct VoteExtension(Vec<SignedEthEvent>);

/// A struct used by validators to sign that they have seen a particular
/// ethereum event. These are included in vote extensions
#[derive(Debug, Clone, BorshSerialize, BorshDeserialize, BorshSchema)]
pub struct SignedEthEvent {
    /// The address of the signing validator
    signer: Address,
    /// The proportion of the total voting power held by the validator
    power: FractionalVotingPower,
    /// The event being signed and the block height at which
    /// it was seen. We include the height as part of enforcing
    /// that a block proposer submits vote extensions from
    /// **the previous round only**
    event: Signed<(EthereumEvent, BlockHeight)>,
}
}

These vote extensions will be given to the next block proposer who will aggregate those that it can verify and will inject a protocol transaction (the "vote extensions" transaction).


#![allow(unused)]
fn main() {
pub struct MultiSigned<T: BorshSerialize + BorshDeserialize> {
    /// Arbitrary data to be signed
    pub data: T,
    /// The signature of the data
    pub sigs: Vec<common::Signature>,
}

pub struct MultiSignedEthEvent {
    /// Address and voting power of the signing validators
    pub signers: Vec<(Address, FractionalVotingPower)>,
    /// Events as signed by validators
    pub event: MultiSigned<(EthereumEvent, BlockHeight)>,
}

pub enum ProtocolTxType {
    EthereumEvents(Vec<MultiSignedEthEvent>)
}
}

This vote extensions transaction will be signed by the block proposer. Validators will check this transaction and the validity of the new votes as part of ProcessProposal; this includes checking:

  • signatures
  • that votes are really from active validators
  • the calculation of backed voting power

It is also checked that each vote extension came from the previous round, requiring validators to sign over the Namada block height with their vote extension. Furthermore, the vote extensions included by the block proposer should have at least 2 / 3 of the total voting power of the previous round backing them. Otherwise, the block proposer would not have passed the FinalizeBlock phase of the last round. These checks are to prevent censorship of events from validators by the block proposer.

In FinalizeBlock, we derive a second transaction (the "state update" transaction) from the vote extensions transaction that:

  • calculates the required changes to /eth_msgs storage and applies it
  • acts on any /eth_msgs/$msg_hash where seen is going from false to true (e.g. appropriately minting wrapped Ethereum assets)

This state update transaction will not be recorded on chain but will be deterministically derived from the vote extensions transaction, which is recorded on chain. All ledger nodes will derive and apply this transaction to their own local blockchain state, whenever they receive a block with a vote extensions transaction. This transaction cannot require a protocol signature as even non-validator full nodes of Namada will be expected to do this.

The value of /eth_msgs/$msg_hash/seen will also indicate if the event has been acted on on the Namada side. The appropriate transfers of tokens to the given user will be included on chain free of charge and require no additional action from the end user.

Namada Validity Predicates

There will be three internal accounts with associated native validity predicates:

  • #EthSentinel - whose validity predicate will verify the inclusion of events from Ethereum. This validity predicate will control the /eth_msgs storage subspace.
  • #EthBridge - the storage of which will contain ledgers of balances for wrapped Ethereum assets (ERC20 tokens) structured in a "multitoken" hierarchy
  • #EthBridgeEscrow which will hold in escrow wrapped Namada tokens which have been sent to Ethereum.

Transferring assets from Ethereum to Namada

Wrapped ERC20

The "transfer" transaction mints the appropriate amount to the corresponding multitoken balance key for the receiver, based on the specifics of a TransferToNamada Ethereum event.


#![allow(unused)]
fn main() {
pub struct EthAddress(pub [u8; 20]);

/// Represents Ethereum assets on the Ethereum blockchain
pub enum EthereumAsset {
    /// An ERC20 token and the address of its contract
    ERC20(EthAddress),
}

/// An event transferring some kind of value from Ethereum to Namada
pub struct TransferToNamada {
    /// Quantity of ether in the transfer
    pub amount: Amount,
    /// Address on Ethereum of the asset
    pub asset: EthereumAsset,
    /// The Namada address receiving wrapped assets on Namada
    pub receiver: Address,
}
}
Example

For 10 DAI, i.e. ERC20(0x6b175474e89094c44da98b954eedeac495271d0f), transferred to atest1v4ehgw36xue5xvf5xvuyzvpjx5un2v3k8qeyvd3cxdqns32p89rrxd6xx9zngvpegccnzs699rdnnt:

#EthBridge
    /erc20
        /0x6b175474e89094c44da98b954eedeac495271d0f
            /balances
                /atest1v4ehgw36xue5xvf5xvuyzvpjx5un2v3k8qeyvd3cxdqns32p89rrxd6xx9zngvpegccnzs699rdnnt 
                += 10
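
A sketch of constructing the multitoken balance key touched by such a transfer; the exact key layout in the implementation may differ from this illustration:


#![allow(unused)]
fn main() {
/// Build the storage key for a wrapped ERC20 balance under the #EthBridge
/// account, following the hierarchy shown above. `erc20_addr` is the
/// 0x-prefixed Ethereum contract address and `receiver` a Namada address.
fn wrapped_erc20_balance_key(erc20_addr: &str, receiver: &str) -> String {
    format!("#EthBridge/erc20/{erc20_addr}/balances/{receiver}")
}

// For the example above, the "transfer" transaction adds 10 units to the
// value stored at:
//   wrapped_erc20_balance_key(
//       "0x6b175474e89094c44da98b954eedeac495271d0f",
//       "atest1v4ehgw36xue5xvf5xvuyzvpjx5un2v3k8qeyvd3cxdqns32p89rrxd6xx9zngvpegccnzs699rdnnt",
//   )
}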

Namada tokens

Any wrapped Namada tokens being redeemed from Ethereum must have an equivalent amount of the native token held in escrow by #EthBridgeEscrow. The protocol transaction should simply make a transfer from #EthBridgeEscrow to the receiver for the appropriate amount and asset.

Transferring from Namada to Ethereum

To redeem wrapped Ethereum assets, a user should make a transaction to burn their wrapped tokens, which the #EthBridge validity predicate will accept.

Once this burn is done, it is incumbent on the end user to request an appropriate "proof" of the transaction. This proof must be submitted to the appropriate Ethereum smart contract by the user to redeem their native Ethereum assets. This also means all Ethereum gas costs are the responsibility of the end user.

The proofs to be used will be custom bridge headers that are calculated deterministically from the block contents, including messages sent by Namada and possibly validator set updates. They will be designed for maximally efficient Ethereum decoding and verification.

For each block on Namada, validators must submit the corresponding bridge header signed with a special secp256k1 key as part of their vote extension. Validators must reject votes which do not contain correctly signed bridge headers. The finalized bridge header with aggregated signatures will appear in the next block as a protocol transaction. Aggregation of signatures is the responsibility of the next block proposer.

The bridge headers need only be produced when the proposed block contains requests to transfer value over the bridge to Ethereum. The exception is when validator sets change. Since the Ethereum smart contract should accept any bridge header signed by 2 / 3 of the staking validators, it needs up-to-date knowledge of:

  • The current validators' public keys
  • The current stake of each validator

This means that at the end of every Namada epoch, a special transaction must be sent to the Ethereum contract detailing the new public keys and stake of the new validator set. This message must also be signed by at least 2 / 3 of the current validators as a "transfer of power". It is to be included in validators' vote extensions as part of the bridge header. Signing an invalid validator transition set will be considered a slashable offense.

Due to asynchronicity concerns, this message should be submitted well in advance of the actual epoch change, perhaps even at the beginning of each new epoch. Bridge headers to Ethereum should include the current Namada epoch so that the smart contract knows how to verify the headers. In short, there is a pipelining mechanism in the smart contract.

Such a message is not prompted by any user transaction and thus will have to be carried out by a bridge relayer. Once the transfer of power message is on chain, any time afterwards a Namada bridge process may take it to craft the appropriate message to the Ethereum smart contracts.

The details on bridge relayers are below in the corresponding section.

Signing incorrect headers is considered a slashable offense. Anyone witnessing an incorrect header that is signed may submit a complaint (a type of transaction) to initiate slashing of the validator who made the signature.

Namada tokens

Mints of a wrapped Namada token on Ethereum (including NAM, Namada's native token) will be represented by a data type like:


#![allow(unused)]
fn main() {
struct MintWrappedNam {
    /// The Namada address owning the token
    owner: NamadaAddress,
    /// The address on Ethereum receiving the wrapped tokens
    receiver: EthereumAddress,
    /// The address of the token to be wrapped 
    token: NamadaAddress,
    /// The number of wrapped Namada tokens to mint on Ethereum
    amount: Amount,
}
}

If a user wishes to mint a wrapped Namada token on Ethereum, they must submit a transaction on Namada that:

  • stores MintWrappedNam on chain somewhere
  • sends the correct amount of Namada token to #EthBridgeEscrow

Just as in redeeming Ethereum assets above, it is incumbent on the end user to request an appropriate proof of the transaction. This proof must be submitted to the appropriate Ethereum smart contract by the user. The corresponding amount of wrapped NAM tokens will be transferred to the receiver on Ethereum by the smart contract.

Namada Bridge Relayers

Validator changes must be turned into a message that can be communicated to smart contracts on Ethereum. These smart contracts need this information to verify proofs of actions taken on Namada.

Since this is protocol level information, it is not user prompted and thus should not be the responsibility of any user to submit such a transaction. However, any user may choose to submit this transaction anyway.

This necessitates a Namada node whose job it is to submit these transactions on Ethereum at the conclusion of each Namada epoch. This node is called the Designated Relayer. In theory, since this message is publicly available on the blockchain, anyone can submit this transaction, but only the Designated Relayer will be directly compensated by Namada.

All Namada validators will have an option to serve as bridge relayer and the Namada ledger will include a process that does the relaying. Since all Namada validators are running Ethereum full nodes, they can monitor that the message was relayed correctly by the Designated Relayer.

During the FinalizeBlock call in the ledger, if the epoch changes, a flag should be set alerting the next block proposer that they are the Designated Relayer for this epoch. If evidence that their message was accepted by the Ethereum smart contracts is included back into Namada via Ethereum state inclusion, new NAM tokens will be minted to reward them. The reward amount shall be a protocol parameter that can be changed via governance. It should be high enough to cover necessary gas fees.

Ethereum Smart Contracts

The set of Ethereum contracts should perform the following functions:

  • Verify bridge header proofs from Namada so that Namada messages can be submitted to the contract.
  • Verify and maintain evolving validator sets with corresponding stake and public keys.
  • Emit log messages readable by Namada
  • Handle ICS20-style token transfer messages appropriately with escrow & unescrow on the Ethereum side
  • Allow for message batching

Furthermore, the Ethereum contracts will whitelist ETH and tokens that flow across the bridge as well as ensure limits on transfer volume per epoch.

An Ethereum smart contract should perform the following steps to verify a proof from Namada:

  1. Check the epoch included in the proof.
  2. Look up the validator set corresponding to said epoch.
  3. Verify that the signatures included amount to at least 2 / 3 of the total stake.
  4. Check the validity of each signature.

If all the above verifications succeed, the contract may affect the appropriate state change, emit logs, etc.

Starting the bridge

Before the bridge can start running, some storage may need to be initialized in Namada.

Resources which may be helpful:

Security

On Namada, the validators are full nodes of Ethereum and their stake also accounts for the security of the bridge. If they carry out a forking attack on Namada to steal locked Ethereum tokens, their stake will be slashed on Namada. On the Ethereum side, we will add a limit to the amount of assets that can be locked, to limit the damage a forking attack on Namada can do. To make an attack more cumbersome we will also add a limit on how fast wrapped Ethereum assets can be redeemed from Namada. This will not add more security, but rather make the attack more inconvenient.

Bootstrapping the bridge

Overview

The Ethereum bridge is not enabled at the launch of a Namada chain. Instead, there are two governance parameters:

  • eth_bridge_proxy_address
  • eth_bridge_wnam_address

Both are initialized to the zero Ethereum address ("0x0000000000000000000000000000000000000000"). An overview of the steps to enable the Ethereum bridge for a given Namada chain are:

  • A governance proposal should be held to agree on a block height h at which to launch the Ethereum bridge by means of a hard fork.
  • If the proposal passes, the Namada chain must halt after finalizing block h-1. This requires validators to manually configure their nodes to halt at this height ahead of time.
  • The Ethereum bridge smart contracts are deployed to the relevant EVM chain, with the active validator set at block height h as the initial validator set that controls the bridge.
  • Details are published so that the deployed contracts can be verified by anyone who wishes to do so.
  • If active validators for block height h regard the deployment as valid, the chain should be restarted with a new genesis file that specifies eth_bridge_proxy_address as the Ethereum address of the proxy contract.

At this point, the bridge is launched and it may start being used. Validators' ledger nodes will immediately and automatically coordinate in order to craft the first validator set update protocol transaction.

Facets

Governance proposal

The governance proposal can be freeform and simply indicate what the value of h should be. Validators should then configure their nodes to halt at this height. The grace_epoch is arbitrary as there is no code to be executed as part of the proposal; instead, validators must take action manually as soon as the proposal passes. The block height h must be in an epoch that is strictly greater than voting_end_epoch.

Value for launch height h

The active validator set at the launch height chosen for starting the Ethereum bridge will have the extra responsibility of restarting the chain if they consider the deployed smart contracts valid. For this reason, the validator set at this height must be known in advance of the governance proposal resolving, and a channel set up for offchain communication and co-ordination of the chain restart. In practice, this means the governance proposal to launch the bridge should commit to doing so within an epoch of passing, so that the validator set is definitely known in advance.

Deployer

Once the smart contracts are fully deployed, only the active validator set for block height h should have control of the contracts, so in theory anyone could do the Ethereum bridge smart contract deployment.

Backing out of Ethereum bridge launch

If for some reason the validity of the smart contract deployment cannot be agreed upon by the validators who will be responsible for restarting Namada, it must remain possible to restart the chain with the Ethereum bridge still not enabled.

Example

In this example, all epochs are assumed to be 100 blocks long, and the active validator set does not change at any point.

  • A governance proposal is made to launch the Ethereum bridge at height h = 3400, i.e. the first block of epoch 34.
{
    "content": {
        "title": "Launch the Ethereum bridge",
        "authors": "hello@heliax.dev",
        "discussions-to": "hello@heliax.dev",
        "created": "2023-01-01T08:00:00Z",
        "license": "Unlicense",
        "abstract": "Halt the chain and launch the Ethereum bridge at Namada block height 3400",
        "motivation": ""
    },
    "author": "hello@heliax.dev",
    "voting_start_epoch": 30,
    "voting_end_epoch": 33,
    "grace_epoch": 33
}
  • The governance proposal passes at block 3300 (the first block of epoch 33)

  • Validators for epoch 33 manually configure their nodes to halt after having finalized block 3399, before that block is reached

  • The chain halts after having finalized block 3399 (the last block of epoch 33)

  • Putative Ethereum bridge smart contracts are deployed at this point, with the proxy contract located at 0x00000000000000000000000000000000DeaDBeef

  • Verification of the Ethereum bridge smart contracts take place

  • Validators coordinate to craft a new genesis file for the chain restart at 3400, with the governance parameter eth_bridge_proxy_address set to 0x00000000000000000000000000000000DeaDBeef and eth_bridge_wnam_address at 0x000000000000000000000000000000000000Cafe

  • The chain restarts at 3400 (the first block of epoch 34)

  • The first ever validator set update (for epoch 35) becomes possible within a few blocks (e.g. by block 3410)

  • A validator set update for epoch 35 is submitted to the Ethereum bridge smart contracts

Ethereum Events Attestation

We want to store events from the smart contracts of our bridge onto Namada. We will include events that have been seen by at least one validator, but will not act on them until they have been seen by at least 2/3 of voting power.

There will be multiple types of events emitted. Validators should ignore improperly formatted events. Raw events from Ethereum are converted to a Rust enum type (EthereumEvent) by Namada validators before being included in vote extensions or stored on chain.


#![allow(unused)]
fn main() {
pub enum EthereumEvent {
    // we will have different variants here corresponding to different types
    // of raw events we receive from Ethereum
    TransfersToNamada(Vec<TransferToNamada>)
    // ...
}
}

Each event will be stored with a list of the validators that have ever seen it as well as the fraction of total voting power that has ever seen it. Once an event has been seen by 2/3 of voting power, it is locked into a seen state, and acted upon.

There is no adjustment across epoch boundaries - e.g. if an event is seen by 1/3 of voting power in epoch n, then seen by a different 1/3 of voting power in epoch m>n, the event will be considered seen in total. Validators may never vote more than once for a given event.

Minimum confirmations

There will be a protocol-specified minimum number of confirmations that events must reach on the Ethereum chain, before validators can vote to include them on Namada. This minimum number of confirmations will be changeable via governance.

TransferToNamada events may include a custom minimum number of confirmations, that must be at least the protocol-specified minimum number of confirmations but is initially set to 100.

Validators must not vote to include events that have not met the required number of confirmations. Voting on unconfirmed events is considered a slashable offence.

Storage

To make including new events easy, we take the approach of always overwriting the state with the new state rather than applying state diffs. The storage keys involved are:

# all values are Borsh-serialized
/eth_msgs/$msg_hash/body : EthereumEvent
/eth_msgs/$msg_hash/seen_by : BTreeSet<Address>
/eth_msgs/$msg_hash/voting_power: (u64, u64)  # reduced fraction < 1 e.g. (2, 3)
/eth_msgs/$msg_hash/seen: bool

$msg_hash is the SHA256 digest of the Borsh serialization of the relevant EthereumEvent.
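
As an illustration, here is a minimal sketch of deriving $msg_hash, assuming the borsh and sha2 crates; the helper name is hypothetical and not the ledger's actual function:

use borsh::BorshSerialize;
use sha2::{Digest, Sha256};

/// Hypothetical helper: Borsh-serialize an event and take the SHA256 digest
/// of the resulting bytes to obtain the `$msg_hash` storage key segment.
fn msg_hash<T: BorshSerialize>(event: &T) -> [u8; 32] {
    let bytes = event.try_to_vec().expect("Borsh serialization shouldn't fail");
    Sha256::digest(&bytes).into()
}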

Changes to this /eth_msgs storage subspace are only ever made by nodes as part of the ledger code based on the aggregate of votes by validators for specific events. That is, changes to /eth_msgs happen in block n in a deterministic manner based on the votes included in the block proposal for block n. Depending on the underlying Tendermint version, these votes will either be included as vote extensions or as protocol transactions.

The /eth_msgs storage subspace will belong to the EthBridge validity predicate. It should disallow any changes to this storage from wasm transactions.

Including events into storage

For every Namada block proposal, the block proposer should include the votes for events from other validators in their proposal. If the underlying Tendermint version supports vote extensions, consensus invariants guarantee that a quorum of votes from the previous block height can be included. Otherwise, validators can only submit votes by broadcasting protocol transactions, which comes with fewer guarantees (i.e. no consensus finality).

The vote of a validator should include the events of the Ethereum blocks they have seen via their full node such that:

  1. It's correctly formatted.
  2. It's reached the required number of confirmations on the Ethereum chain

Each event that a validator is voting to include must be individually signed by them. If the validator is not voting to include any events, they must still provide a signed empty vector of events to indicate this.

The votes will be a Borsh-serialization of something like the following.


#![allow(unused)]
fn main() {
/// This struct will be created and signed over by each
/// active validator, to be included as a vote extension at the end of a
/// Tendermint PreCommit phase or as Protocol Tx.
pub struct Vext {
    /// The block height for which this [`Vext`] was made.
    pub block_height: BlockHeight,
    /// The address of the signing validator
    pub validator_addr: Address,
    /// The new ethereum events seen. These should be
    /// deterministically ordered.
    pub ethereum_events: Vec<EthereumEvent>,
}
}

These votes will be given to the next block proposer who will aggregate those that it can verify and will inject a signed protocol transaction into their proposal.

Validators will check this transaction and the validity of the new votes as part of ProcessProposal, this includes checking:

  • signatures
  • that votes are really from active validators
  • the calculation of backed voting power

If vote extensions are supported, it is also checked that each vote extension came from the previous round, requiring validators to sign over the Namada block height with their vote extension. Signing over the block height also acts as a replay protection mechanism.

Furthermore, the vote extensions included by the block proposer should have a quorum of the total voting power of the epoch of the block height behind it. Otherwise the block proposer would not have passed the FinalizeBlock phase of the last round of the last block.

These checks are to prevent censorship of events from validators by the block proposer. If vote extensions are not enabled, unfortunately these checks cannot be made.

In FinalizeBlock, we derive a second transaction (the "state update" transaction) from the vote aggregation that:

  • calculates the required changes to /eth_msgs storage and applies it
  • acts on any /eth_msgs/$msg_hash where seen is going from false to true (e.g. appropriately minting wrapped Ethereum assets)

This state update transaction will not be recorded on chain but will be deterministically derived from the protocol transaction including the aggregation of votes, which is recorded on chain. All ledger nodes will derive and apply the appropriate state changes to their own local blockchain storage.

The value of /eth_msgs/$msg_hash/seen will also indicate if the event has been acted upon on the Namada side. The appropriate transfers of tokens to the given user will be included on chain free of charge and require no additional actions from the end user.

Transferring assets from Ethereum to Namada

In order to facilitate transferring assets from Ethereum to Namada, there will be two internal accounts with associated native validity predicates:

  • #EthBridge - controls the /eth_msgs/ storage and the ledgers of balances for wrapped Ethereum assets (ERC20 tokens), structured in a "multitoken" hierarchy
  • #EthBridgeEscrow - holds in escrow native Namada tokens that have been sent to Ethereum as wrapped NAM.

Wrapped ERC20

If an ERC20 token is transferred to Namada, once the associated TransferToNamada Ethereum event is included into Namada, validators mint the appropriate amount to the corresponding multitoken balance key for the receiver, or release the escrowed native Namada token.


#![allow(unused)]
fn main() {
pub struct EthAddress(pub [u8; 20]);

/// An event transferring some kind of value from Ethereum to Namada
pub struct TransferToNamada {
    /// Quantity of the asset in the transfer
    pub amount: Amount,
    /// Address on Ethereum of the asset
    pub asset: EthAddress,
    /// The Namada address receiving wrapped assets on Namada
    pub receiver: Address,
}
}
Example

For a transfer of 10 DAI, i.e. ERC20(0x6b175474e89094c44da98b954eedeac495271d0f), to atest1v4ehgw36xue5xvf5xvuyzvpjx5un2v3k8qeyvd3cxdqns32p89rrxd6xx9zngvpegccnzs699rdnnt:

#EthBridge
    /ERC20
        /0x6b175474e89094c44da98b954eedeac495271d0f
            /balance
                /atest1v4ehgw36xue5xvf5xvuyzvpjx5un2v3k8qeyvd3cxdqns32p89rrxd6xx9zngvpegccnzs699rdnnt 
                += 10
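
A minimal sketch of constructing such a multitoken balance key, following the layout above; the string-based key building and the helper name are illustrative assumptions, not the ledger's exact key encoding:

/// Hypothetical helper building the wrapped ERC20 balance key shown above.
fn wrapped_erc20_balance_key(erc20_addr: &str, receiver_addr: &str) -> String {
    format!("#EthBridge/ERC20/{}/balance/{}", erc20_addr, receiver_addr)
}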

Namada tokens

Any wrapped Namada tokens being redeemed from Ethereum must have an equivalent amount of the native token held in escrow by #EthBridgeEscrow. Once the associated TransferToNamada Ethereum event is included into Namada, validators should simply make a transfer from #EthBridgeEscrow to the receiver for the appropriate amount and asset.

Transferring from Namada to Ethereum

Moving assets from Namada to Ethereum will not be automatic, as opposed to the movement of value in the opposite direction. Instead, users must send an appropriate transaction to Namada to initiate a transfer across the bridge to Ethereum. Once this transaction is approved, a "proof", or the parts necessary to create a proof, will be created and posted on Namada.

It is incumbent on the end user to request an appropriate proof of the transaction. This proof must be submitted to the appropriate Ethereum smart contract by the user to redeem Ethereum assets / mint wrapped assets. This also means all Ethereum gas costs are the responsibility of the end user.

A relayer binary will be developed to aid users in accessing the proofs generated by Namada validators as well as posting this proof to Ethereum. It will also aid in batching transactions.

Moving value to Ethereum

To redeem wrapped Ethereum assets, a user should make a transaction to burn their wrapped tokens, which the #EthBridge validity predicate will accept. For sending NAM over the bridge, a user should send their NAM to #EthBridgeEscrow. In both cases, it's important that the user also adds a PendingTransfer to the Bridge Pool.

Batching

Ethereum gas fees make it prohibitively expensive to submit the proof for a single transaction over the bridge. Instead, it is typically more economical to submit proofs of many transactions in bulk. This batching is described in this section.

A pool of transfers from Namada to Ethereum will be kept by Namada. Every transaction to Ethereum that Namada validators approve will be added to this pool. We call this the Bridge Pool.

The Bridge Pool should be thought of as a sort of mempool. When users who wish to move assets to Ethereum submit their transactions, they will pay some additional amount of NAM (of their choosing) as a way of covering the gas costs on Ethereum. Namada validators will hold these fees in a Bridge Pool Escrow.

When a batch of transactions from the Bridge Pool is submitted by a user to Ethereum, Namada validators will receive notifications via their full nodes. They will then pay out the fees for each submitted transaction to the user who relayed these transactions (still in NAM). These will be paid out from the Bridge Pool Escrow.

The idea is that users will only relay transactions from the Bridge Pool that make economic sense. This prevents DoS attacks by underpaying fees as well as obviating the need for Ethereum gas price oracles. It also means that transfers to Ethereum are not ordered, preventing other attack vectors.

The Bridge Pool will be organized as a Merkle tree. Every time it is updated, the root of the tree must be signed by a quorum of validators. When a user wishes to construct a batch of transactions to relay to Ethereum, they include the signed tree root and inclusion proofs for the subset of the pool they are relaying. This can be easily verified by the Ethereum smart contracts.

If vote extensions are available, these are used to collect the signatures over the Merkle tree root. If they are not, these must be submitted as protocol transactions, introducing latency to the pool. A user wishing to relay will need to wait until a Merkle tree root is signed for a tree that includes all the transactions they wish to relay.

The Ethereum smart contracts won't keep track of this signed Merkle root. Instead, part of the proof of correct batching is submitting a root to the contracts that is signed by a quorum of validators. Since the smart contracts can trust such a signed root, they can then use it to verify inclusion proofs.
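
To illustrate the relayer-side verification idea, here is a minimal sketch of checking an inclusion proof against a signed root; the pair-hashing scheme (SHA256 over the sorted pair) is an assumption for illustration, not the bridge's exact tree format:

use sha2::{Digest, Sha256};

/// Verify that `leaf` is included in the tree committed to by `signed_root`,
/// folding the sibling hashes in `proof` from the leaf up to the root.
fn verify_inclusion(leaf: [u8; 32], proof: &[[u8; 32]], signed_root: [u8; 32]) -> bool {
    let mut node = leaf;
    for sibling in proof {
        // Hash the pair in a canonical (sorted) order so the verifier does
        // not need to know left/right positions.
        let mut hasher = Sha256::new();
        if node <= *sibling {
            hasher.update(node);
            hasher.update(sibling);
        } else {
            hasher.update(sibling);
            hasher.update(node);
        }
        node = hasher.finalize().into();
    }
    node == signed_root
}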

Bridge Pool validity predicate

The Bridge Pool will have associated storage under the control of a native validity predicate. The storage layout looks as follows.

# all values are Borsh-serialized
/pending_transfers: Vec<PendingTransfer>
/signed_root: Signed<MerkleRoot>

The pending transfers are instances of the following type:


#![allow(unused)]
fn main() {
pub struct TransferToEthereum {
    /// The type of token 
    pub asset: EthAddress,
    /// The recipient address
    pub recipient: EthAddress,
    /// The amount to be transferred
    pub amount: Amount,
    /// a nonce for replay protection
    pub nonce: u64,
}

pub struct PendingTransfer {
    /// The message to send to Ethereum to 
    /// complete the transfer
    pub transfer: TransferToEthereum,
    /// The gas fees paid by the user sending
    /// this transfer
    pub gas_fee: GasFee,
}

pub struct GasFee {
    /// The amount of gas fees (in NAM)
    /// paid by the user sending this transfer
    pub amount: Amount,
    /// The address of the account paying the fees
    pub payer: Address,
}
}

When a user initiates a transfer, their transaction should include wasm to craft a PendingTransfer and append it to the pool in storage, as well as send the relevant gas fees into the Bridge Pool's escrow. This will be validated by the Bridge Pool vp.

The signed Merkle root is only modifiable by validators. The Merkle tree only consists of the TransferToEthereum messages as Ethereum does not need information about the gas fees paid on Namada.

If vote extensions are not available, this signed root may lag behind the list of pending transactions. However, eventually every pending transaction should either be covered by the root or time out.

Replay Protection and timeouts

It is important that nonces are used to prevent copies of the same transaction from being submitted multiple times. Since we do not want to enforce an order on the transactions, these nonces should fall within an accepted range. As a consequence, it is possible that transactions in the Bridge Pool will time out. Transactions that time out should revert the state changes on Namada, including refunding the fees paid in.

Proofs

A proof for the bridge is a quorum of signatures by a valid validator set. A bridge header is a proof attached to a message understandable to the Ethereum smart contracts. For transferring value to Ethereum, a proof is a signed Merkle tree root and inclusion proofs of asset transfer messages understandable to the Ethereum smart contracts, as described in the section on batching.

A message for transferring value to Ethereum is a TransferToEthereum instance as described here.

Additionally, when the validator set changes, the smart contracts on Ethereum must be updated so that they can continue to recognize valid proofs. Since the Ethereum smart contracts should accept any bridge header signed by 2/3 of the staking validators, they need up-to-date knowledge of:

  • The current validators' public keys
  • The current stake of each validator

This means that by the end of every Namada epoch, a special transaction must be sent to the Ethereum smart contracts detailing the new public keys and stake of the new validator set. This message must also be signed by at least 2 / 3 of the current validators as a "transfer of power".

If vote extensions are available, a fully crafted transfer of power message will be made available on-chain. Otherwise, this message must be crafted offline by aggregating the protocol txs from validators in which they sign over the new validator set.

If vote extensions are available, this signed data can be constructed using them. Otherwise, validators must send protocol txs to be included on the ledger. Once a quorum exists on chain, the signatures can be aggregated into a single message that can be relayed to Ethereum. Signing an invalid validator transition set will be considered a slashable offense.

Due to asynchronicity concerns, this message should be submitted well in advance of the actual epoch change; it should happen at the beginning of each new epoch. Bridge headers to Ethereum should include the current Namada epoch so that the smart contract knows how to verify the headers. In short, there is a pipelining mechanism in the smart contract - the active validators for epoch n submit details of the active validator set for epoch n+1.

Such a message is not prompted by any user transaction and thus will have to be carried out by a bridge relayer. Once the necessary data to construct the transfer of power message is on chain, any time afterwards a Namada bridge process may take it to craft the appropriate header to the Ethereum smart contracts.

The details on bridge relayers are below in the corresponding section.

Signing incorrect headers is considered a slashable offense. Anyone witnessing an incorrect header that is signed may submit a complaint (a type of transaction) to initiate slashing of the validator who made the signature.

Namada Bridge Relayers

Validator changes must be turned into a message that can be communicated to smart contracts on Ethereum. These smart contracts need this information to verify proofs of actions taken on Namada.

Since this is protocol level information, it is not user prompted and thus should not be the responsibility of any user to submit such a transaction. However, any user may choose to submit this transaction anyway.

This necessitates a Namada node whose job it is to submit these transactions on Ethereum by the conclusion of each Namada epoch. This node is called the bridge relayer. In theory, since this message is publicly available on the blockchain, anyone can submit this transaction, but only the bridge relayer will be directly compensated by Namada.

The bridge relayer will be chosen to be the proposer of the first block of the new epoch. Anyone else may relay this message, but must pay for the fees out of their own pocket.

All Namada validators will have an option to serve as bridge relayer and the Namada ledger will include a process that does the relaying. Since all Namada validators are running Ethereum full nodes, they can monitor that the message was relayed correctly by the bridge relayer.

If the Ethereum event spawned by relaying their message is accepted onto Namada via Ethereum state inclusion, new NAM tokens will be minted to reward them. The reward amount shall be a protocol parameter that can be changed via governance. It should be high enough to cover necessary gas fees.

Recovering from an update failure

If vote extensions are not available, we cannot guarantee that a quorum of validator signatures can be gathered for the message that updates the validator set before the epoch ends.

If a significant number of validators become inactive in the next epoch, we need a means to complete the validator set update. Until this is done, the bridge will halt.

In this case, the validators from that epoch will need to craft a transaction with a quorum of signatures offline and submit it on-chain. This transaction should include the validator set update.

The only way this could fail is if more than 1/3 of the validators by stake from that epoch delete their Ethereum keys, which is extremely unlikely.

Ethereum Smart Contracts

Contracts

There are five smart contracts that make up an Ethereum bridge deployment.

  • Proxy
  • Bridge
  • Governance
  • Vault
  • wNAM

Proxy

The Proxy contract serves as a dumb storage for holding the addresses of other contracts, specifically the Governance contract, the Vault contract and the current Bridge contract. Once deployed, it is modifiable only by the Governance contract, to update the address of the current Bridge contract.

The Proxy contract is fixed forever once the bridge has been deployed.

Bridge

The Bridge contract is the only contract that unprivileged users of the bridge may interact with. It provides methods for transferring ERC20s to Namada (holding them in escrow in the Vault), as well as releasing escrowed ERC20s from the Vault for transfers made from Namada to Ethereum. It holds a whitelist of ERC20s that may cross the bridge, and this whitelist may be updated by the Governance contract.

Governance

The Governance contract may "upgrade" the bridge by updating the Proxy contract to point to a new Bridge contract and/or a new Governance contract. It may also withdraw all funds from the Vault to any specified Ethereum address, if a quorum of validators choose to do so.

wNAM

The wNAM contract is a simple ERC20 token with a fixed supply, which is all minted when the bridge is first deployed. After initial deployment, the entire supply of wNAM belongs to the Vault contract. As NAM is transferred from Namada to Ethereum, wNAM may be released from the Vault by the Bridge.

The wNAM contract is fixed forever once the bridge has been deployed.

Vault

The Vault contract holds in escrow any ERC20 tokens that have been sent over the bridge to Namada, as well as a supply of wNAM ERC20s to represent NAM that has been sent from Namada to Ethereum. Funds held by the Vault may only be spendable by the current Bridge contract. When ERC20 tokens are transferred from Ethereum to Namada, they must be deposited to the Vault via the Bridge contract.

The Vault contract is fixed forever once the bridge has been deployed.

Namada-side configuration

When an account on Namada becomes a validator, they must provide two Ethereum secp256k1 keys:

  • the bridge key - a hot key for normal operations
  • the governance key - a cold key for exceptional operations, like emergency withdrawal from the bridge

These keys are used to control the bridge smart contracts, via signing of messages. Validators should be challenged periodically to prove they still retain knowledge of their governance key, which is not regularly used.

Deployment

The contracts should be deployable by anyone to any EVM chain using an automated script. The following configuration should be agreed up front by Namada governance before deployment:

  • details of the initial active validator set that will control the bridge - specifically, for each validator:
    • their hot Ethereum address
    • their cold Ethereum address
    • their voting power on Namada for the epoch when the bridge will launch
  • the total supply of the wNAM ERC20 token, which will represent Namada-native NAM on the EVM chain
  • an initial whitelist of ERC20 tokens that may cross the bridge from Ethereum to Namada - specifically, for each whitelisted ERC20:
    • the Ethereum address of the ERC20 contract
    • a cap on the total amount that may cross the bridge, in units of ERC20

After a deployment has finished successfully, the deployer must not have any privileged control of any of the contracts deployed. Any privileged actions must only be possible via a message signed by a validator set that the smart contracts are storing details of.

Communication

From Ethereum to Namada

A Namada chain's validators are configured to listen to events emitted by the smart contracts pointed to by the Proxy contract. The address of the Proxy contract is set in a governance parameter in Namada storage. Namada validators treat emitted events as authoritative and take action on them. Namada also knows the address of the wNAM ERC20 contract via a governance parameter, and treats transfers of this ERC20 to Namada as an indication to release native NAM from the #EthBridgeEscrow account on Namada, rather than to mint a wrapped ERC20 as is the case with all other ERC20s.

From Namada to Ethereum

At any time, the Governance and Bridge contracts must store:

  • a hash of the current Namada epoch's active validator set
  • a hash of another epoch's active validator set. When the bridge is first deployed, this will also be the current Namada epoch's active validator set, but after the first validator set update is submitted to the Governance smart contract, this hash will always be an adjacent Namada epoch's active validator set i.e. either the previous epoch's, or the next epoch's

In the case of the Governance contract, these are hashes of a map of validators' cold key addresses to their voting powers, while for the Bridge contract they are hashes of a map of validators' hot key addresses to their voting powers. Namada validators may post signatures on chain over relevant messages to be relayed to the Ethereum bridge smart contracts (e.g. validator set updates, pending transfers, etc.). Methods of the Ethereum bridge smart contracts should generally accept:

  • some message
  • full details of some active validator set (i.e. relevant Ethereum addresses + voting powers)
  • signatures over the message by validators from this active validator set

Given this data, anyone should be able to make the relevant Ethereum smart contract method call, if they are willing to pay the Ethereum gas. A call is then authorized to happen if:

  • The active validator set specified in the call hashes to either of the validator set hashes stored in the smart contract
  • A quorum (i.e. more than 2/3 by voting power) of the signatures over the message are valid
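
For illustration, a minimal sketch of the quorum condition, assuming voting powers are plain integers and that signatures have already been verified so only valid signers' powers are passed in; the function name is hypothetical:

/// Hypothetical check: more than 2/3 of the total voting power must have
/// produced a valid signature over the message.
fn is_quorum(valid_signer_powers: &[u64], total_power: u64) -> bool {
    // Sum in u128 to avoid overflow when multiplying by small constants.
    let signed: u128 = valid_signer_powers.iter().map(|p| *p as u128).sum();
    3 * signed > 2 * (total_power as u128)
}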

Validator set updates

Initial deployment aside, at the beginning of each epoch, the smart contracts will contain details of the current epoch's validator set and the previous epoch's validator set. Namada validators must endeavor to sign details of the next epoch's validator set and post them on the Namada chain in a protocol transaction. Details of the next epoch's validator set, along with a quorum of signatures over them by validators from the current epoch's validator set, must then be relayed to the Governance contract before the current Namada epoch ends; this updates both the Governance and Bridge smart contracts to store the hash of the next epoch's validator set in place of the previous epoch's. If this does not happen, then the Namada chain must either halt or not progress to the next epoch, to avoid losing control of the bridge.

When a validator set update is submitted, the hashes for the oldest validator set are effectively "evicted" from the Governance and Bridge smart contracts. At that point, messages signed by that evicted validator set will no longer be accepted by the bridge.

Example flow

  • Namada epoch 10 begins. Currently, the Governance contract knows the hashes of the validator sets for epochs 9 and 10, as does the Bridge contract.
  • Validators for epoch 10 post signatures over the hash of details of the validator set for epoch 11 to Namada as protocol transactions
  • A point is reached during epoch 10 at which a quorum of such signatures is present on the Namada chain
  • A relayer submits a validator set update for epoch 11 to Governance, using a quorum of signatures from the Namada chain
  • The Governance and Bridge contracts now know the hashes of the validator sets for epochs 10 and 11, and will accept messages signed by either of them. It will no longer accept messages signed by the validator set for epoch 9.
  • Namada progresses to epoch 11, and the flow repeats

NB: the flow for when the bridge has just launched is similar, except the contracts know the details of only one epoch's validator set - the launch epoch's. E.g. if the bridge launches at epoch 10, then initially the contracts know the hash only for epoch 10 and not epochs 10 and 11, until the first validator set update has been submitted.

IBC integration

IBC transaction

An IBC transaction tx_ibc.wasm is provided. To execute an IBC operation, the corresponding IBC message has to be set as the transaction data.

The transaction decodes the data into an IBC message and handles IBC-related data, e.g. it makes a new connection ID and writes a new connection end for MsgConnectionOpenTry. The operations are implemented in IbcActions. The transaction doesn't check the validity of the state changes; the IBC validity predicate and the IBC token validity predicate are in charge of validity.

IBC validity predicate

The IBC validity predicate checks whether an IBC transaction satisfies the IBC protocol. When an IBC transaction is executed, i.e. a transaction changes the state of a key that contains InternalAddress::Ibc, the IBC validity predicate (one of the native validity predicates) is executed. For example, if an IBC connection end is created in the transaction, the IBC validity predicate validates the creation. If the creation with MsgConnectionOpenTry is invalid, e.g. the counterpart connection end doesn't exist, the validity predicate makes the transaction fail.

Fungible Token Transfer

The transfer of fungible tokens over an IBC channel on separate chains is defined in ICS20.

In Namada, sending tokens is triggered by a transaction having MsgTransfer as transaction data. A packet including FungibleTokenPacketData is made from the message during the transaction execution.

A Namada chain receives the tokens via a transaction having MsgRecvPacket, which contains the packet including FungibleTokenPacketData.

Sending and receiving tokens in a transaction are validated not only by the IBC validity predicate but also by the IBC token validity predicate. The IBC validity predicate validates whether sending and receiving the packet is proper. The IBC token validity predicate, also one of the native validity predicates, checks whether the token transfer is valid. If the transfer is not valid, e.g. an unexpected amount is minted, the validity predicate makes the transaction fail.

A transaction escrowing/unescrowing a token changes the escrow account's balance of the token. The key is {token_addr}/ibc/{port_id}/{channel_id}/balance/IbcEscrow. A transaction burning a token changes the burn account's balance of the token. The key is {token_addr}/ibc/{port_id}/{channel_id}/balance/IbcBurn. A transaction minting a token changes the mint account's balance of the token. The key is {token_addr}/ibc/{port_id}/{channel_id}/balance/IbcMint. Keys including IbcBurn or IbcMint hold the balance only temporarily for the validity predicates; it isn't committed to a block. IbcEscrow, IbcBurn, and IbcMint are addresses of InternalAddress and they are encoded in the storage key. When these addresses are included in the changed keys after transaction execution, the IBC token validity predicate is triggered.

The receiver's account is {token_addr}/ibc/{ibc_token_hash}/balance/{receiver_addr}. {ibc_token_hash} is a hash calculated with the denomination prefixed with the port ID and channel ID. It is NOT the same as the normal account {token_addr}/balance/{receiver_addr}. That's because it should be origin-specific for transferring back to the source chain. We can transfer back the received token by setting ibc/{ibc_token_hash} or {port_id}/{channel_id}/{token_addr} as denom in MsgTransfer.
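
For illustration, a sketch of deriving {ibc_token_hash}, assuming the sha2 and hex crates; the exact string format and digest encoding used by the ledger are assumptions:

use sha2::{Digest, Sha256};

/// Hypothetical helper: hash the denomination prefixed with the port and
/// channel IDs, e.g. "transfer/channel_24/#my_token", with SHA256.
fn ibc_token_hash(port_id: &str, channel_id: &str, token_addr: &str) -> String {
    let prefixed_denom = format!("{}/{}/{}", port_id, channel_id, token_addr);
    hex::encode(Sha256::digest(prefixed_denom.as_bytes()))
}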

For example, suppose we transfer a token #my_token from a user #user_a on Chain A to a user #user_b on Chain B, then transfer the token back from #user_b to #user_a. The port ID and channel ID on Chain A for Chain B are transfer and channel_42, and those on Chain B for Chain A are transfer and channel_24. The denomination in the FungibleTokenPacketData for the first transfer should be #my_token.

  1. User A makes MsgTransfer as a transaction data and submits a transaction from Chain A

#![allow(unused)]
fn main() {
    let token = Some(Coin {
        denom, // #my_token
        amount: "100000".to_string(),
    });
    let msg = MsgTransfer {
        source_port,    // transfer
        source_channel, // channel_42
        token,
        sender,   // #user_a
        receiver, // #user_b
        timeout_height: Height::new(0, 1000),
        timeout_timestamp: (Timestamp::now() + Duration::new(100, 0)).unwrap(),
    };
}
  2. On Chain A, the specified amount of the token is transferred from the sender's account #my_token/balance/#user_a to the escrow account #my_token/ibc/transfer/channel_42/balance/IbcEscrow
  3. On Chain B, the amount of the token is transferred from #my_token/ibc/transfer/channel_24/balance/IbcMint to #my_token/ibc/{hash}/balance/#user_b
    • The {hash} is calculated from a string transfer/channel_24/#my_token with SHA256
    • The {hash} is a fixed length because of hashing even if the original denomination becomes too long with many prefixes after transferring through many chains
  4. To transfer back, User B makes MsgTransfer and submits a transaction from Chain B

#![allow(unused)]
fn main() {
    let token = Some(Coin {
        denom, // ibc/{hash} or transfer/channel_24/#my_token
        amount: "100000".to_string(),
    });
    let msg = MsgTransfer {
        source_port,    // transfer
        source_channel, // channel_24
        token,
        sender,   // #user_b
        receiver, // #user_a
        timeout_height: Height::new(0, 1000),
        timeout_timestamp: (Timestamp::now() + Duration::new(100, 0)).unwrap(),
    };
}
  5. On Chain B, the amount of the token is transferred from #my_token/ibc/{hash}/balance/#user_b to #my_token/ibc/transfer/channel_24/balance/IbcBurn
  6. On Chain A, the amount of the token is transferred from #my_token/ibc/transfer/channel_42/balance/IbcEscrow to #my_token/balance/#user_a

IBC message

IBC messages are defined in ibc-rs. The message should be encoded with Protobuf (NOT with Borsh), as in the following code, to set it as transaction data.


#![allow(unused)]
fn main() {
use ibc::tx_msg::Msg;

pub fn make_ibc_data(message: impl Msg) -> Vec<u8> {
    let msg = message.to_any();
    let mut tx_data = vec![];
    prost::Message::encode(&msg, &mut tx_data).expect("encoding IBC message shouldn't fail");
    tx_data
}
}

Economics

Namada users pay transaction fees in NAM and other tokens (see fee system and governance), so demand for NAM can be expected to track demand for block space. On the supply side, the protocol mints NAM at a fixed maximum per-annum rate based on a fraction of the current supply (see inflation system), which is directed to three areas of protocol subsidy: proof-of-stake, shielded pool incentives, and public-goods funding. Inflation rates for these three areas are adjusted independently (the first two on PD controllers and the third based on funding decisions) and excess tokens are slowly burned.

Fee system

In order to be accepted by the Namada ledger, transactions must pay fees. Transaction fees serve two purposes: first, the efficient allocation of block space given permissionless transaction submission and varying demand, and second, incentive-compatibility to encourage block producers to add transactions to the blocks which they create and publish.

Namada transaction fees can be paid in any fungible token which is a member of a whitelist controlled by Namada governance. Governance also sets minimum fee rates (which can be periodically updated so that they are usually sufficient) which transactions must pay in order to be accepted (but they can always pay more to encourage the proposer to prioritise them). When using the shielded pool, transactions can also unshield tokens in order to pay the required fees.

The token whitelist consists of a list of (token, minimum gas price) pairs, where the minimum gas price is the minimum price per unit gas which must be paid by a transaction paying fees using that token. This whitelist can be updated with a standard governance proposal. All fees collected are paid directly to the block proposer (incentive-compatible, so that side payments are no more profitable).
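
A minimal sketch of the whitelist shape and the minimum-fee check; token identifiers and gas prices are simplified to strings and integers, and the names are hypothetical:

use std::collections::BTreeMap;

/// token identifier -> minimum price per unit gas payable in that token
type GasPriceWhitelist = BTreeMap<String, u64>;

/// Hypothetical check: fees must be paid in a whitelisted token and offer at
/// least the governance-set minimum price per unit gas.
fn meets_min_fee(whitelist: &GasPriceWhitelist, token: &str, offered_price: u64) -> bool {
    match whitelist.get(token) {
        Some(min_price) => offered_price >= *min_price,
        None => false,
    }
}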

Inflation system

The Namada protocol controls the Namada token NAM (the native staking token), which is programmatically minted to pay for algorithmically measurable public goods - proof-of-stake security and shielded pool usage - and out-of-band public goods. Proof-of-stake rewards are paid into the reward distribution mechanism in order to distribute them to validators and delegators. Shielded pool rewards are paid into the shielded pool reward mechanism, where users who kept tokens in the shielded pool can claim them asynchronously. Public goods funding is paid to the public goods distribution mechanism, which further splits funding between proactive and retroactive funding and into separate categories.

Proof-of-stake rewards

The security of the proof-of-stake voting power allocation mechanism used by Namada is dependent in part upon locking (bonding) tokens to validators, where these tokens can be slashed should the validators misbehave. Funds so locked are only able to be withdrawn after an unbonding period. In order to reward validators and delegators for locking their stake and participating in the consensus mechanism, Namada pays a variable amount of inflation to all delegators and validators. The amount of inflation paid is varied on a PD-controller in order to target a particular bonding ratio (fraction of the NAM token being locked in proof-of-stake). Namada targets a bonding ratio of 2/3, paying up to 10% inflation per annum to proof-of-stake rewards. See reward distribution mechanism for details.

Shielded pool rewards

Privacy provided by the MASP in practice depends on how many users use the shielded pool and what assets they use it with. To increase the likelihood of a sizeable privacy set, Namada pays a variable portion of inflation, up to 10% per annum, to shielded pool incentives, which are allocated on a per-asset basis by a PD-controller targeting specific amounts of each asset being locked in the shielded pool. See shielded pool incentives for details.

Public goods funding

Namada provides 10% per annum inflation for other non-algorithmically-measurable public goods. See public goods funding for details.

Detailed inflation calculation model

Inflation is calculated and paid per-epoch as follows.

First, we start with the following fixed (governance-alterable) parameters:

  • R_PoS-max is the cap of the proof-of-stake reward rate, in units of percent per annum (genesis default: 10%)
  • R_SP-max[A] is the cap of the shielded pool reward rate for each asset A, in units of percent per annum
  • R_PGF is the public goods funding reward rate, in units of percent per annum
  • R_PoS-target is the target staking ratio (genesis default: 2/3)
  • L_SP-target[A] is the target amount of asset A locked in the shielded pool (separate value for each asset A)
  • EpochsPerYear is the number of epochs per year (genesis default: 365)
  • KP_PoS-nom is the nominal proportional gain of the proof-of-stake PD controller, as a fraction of the total input range
  • KD_PoS-nom is the nominal derivative gain of the proof-of-stake PD controller, as a fraction of the total input range
  • KP_SP-nom[A] is the nominal proportional gain of the shielded pool reward controller for asset A, as a fraction of the total input range (separate value for each asset A)
  • KD_SP-nom[A] is the nominal derivative gain of the shielded pool reward controller for asset A, as a fraction of the total input range (separate value for each asset A)

Second, we take as input the following state values:

  • S_NAM is the current supply of NAM
  • L_PoS is the current amount of NAM locked in proof-of-stake
  • I_PoS-last is the proof-of-stake inflation amount from the previous epoch, in units of tokens per epoch
  • R_PoS-last is the proof-of-stake locked token ratio from the previous epoch
  • L_SP[A] is the current amount of asset A locked in the shielded pool (separate value for each asset A)
  • I_SP-last[A] is the shielded pool inflation amount for asset A from the previous epoch, in units of tokens per epoch
  • R_SP-last[A] is the shielded pool locked token ratio for asset A from the previous epoch (separate value for each asset A)

Public goods funding inflation can be calculated and paid immediately (in terms of total tokens per epoch):

I_PGF = R_PGF * S_NAM / EpochsPerYear

These tokens (I_PGF) are distributed to the public goods funding validity predicate.

To run the PD-controllers for proof-of-stake and shielded pool rewards, we first calculate some intermediate values:

  • Calculate the latest staking ratio R_PoS = L_PoS / S_NAM
  • Calculate the per-epoch cap on the proof-of-stake and shielded pool token inflation
    • Cap_PoS = S_NAM * R_PoS-max / EpochsPerYear
    • Cap_SP[A] = S_NAM * R_SP-max[A] / EpochsPerYear (separate value for each asset A)
  • Calculate the PD-controller constants to be used for this epoch
    • KP_PoS = KP_PoS-nom * Cap_PoS and KD_PoS = KD_PoS-nom * Cap_PoS
    • KP_SP[A] = KP_SP-nom[A] * Cap_SP[A] and KD_SP[A] = KD_SP-nom[A] * Cap_SP[A] (separate values for each asset A)

Then, for proof-of-stake first, run the PD-controller:

  • Calculate the error E_PoS = R_PoS-target - R_PoS
  • Calculate the error derivative E'_PoS = E_PoS - E_PoS-last = R_PoS-last - R_PoS
  • Calculate the control value C_PoS = KP_PoS * E_PoS - KD_PoS * E'_PoS
  • Calculate the new I_PoS = max(0, min(I_PoS-last + C_PoS, Cap_PoS))

These tokens (I_PoS) are distributed to the proof-of-stake reward distribution validity predicate.

Similarly, for each asset for which shielded pool rewards are being paid:

  • Calculate the error E_SP[A] = L_SP-target[A] - L_SP[A]
  • Calculate the error derivative E'_SP[A] = E_SP[A] - E_SP-last[A]
  • Calculate the control value C_SP[A] = KP_SP[A] * E_SP[A] - KD_SP[A] * E'_SP[A]
  • Calculate the new I_SP[A] = max(0, min(I_SP-last[A] + C_SP[A], Cap_SP[A]))

These tokens (I_SP[A]) are distributed to the shielded pool reward distribution validity predicate.

Finally, we store the latest inflation and locked token ratio values for the next epoch's controller round.
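
To make a controller round concrete, here is a minimal sketch of one epoch of the proof-of-stake controller using the notation above; the function and field names are hypothetical and the clamping to [0, Cap_PoS] follows the description above rather than any exact ledger implementation:

/// State carried between epochs for the proof-of-stake controller.
struct PosControllerState {
    last_inflation: f64,    // I_PoS-last, in tokens per epoch
    last_locked_ratio: f64, // R_PoS-last
}

/// One round of the proof-of-stake PD-controller, returning I_PoS.
fn pos_inflation(
    nam_supply: f64,      // S_NAM
    locked_nam: f64,      // L_PoS
    target_ratio: f64,    // R_PoS-target, e.g. 2/3
    max_reward_rate: f64, // R_PoS-max, e.g. 0.10 per annum
    epochs_per_year: f64, // EpochsPerYear, e.g. 365
    kp_nom: f64,          // KP_PoS-nom
    kd_nom: f64,          // KD_PoS-nom
    state: &mut PosControllerState,
) -> f64 {
    let ratio = locked_nam / nam_supply;                      // R_PoS
    let cap = nam_supply * max_reward_rate / epochs_per_year; // Cap_PoS
    let kp = kp_nom * cap;                                    // KP_PoS
    let kd = kd_nom * cap;                                    // KD_PoS
    let error = target_ratio - ratio;                         // E_PoS
    let error_derivative = state.last_locked_ratio - ratio;   // E'_PoS
    let control = kp * error - kd * error_derivative;         // C_PoS
    let inflation = (state.last_inflation + control).clamp(0.0, cap); // I_PoS
    state.last_locked_ratio = ratio;
    state.last_inflation = inflation;
    inflation
}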

Proof-of-stake (PoS)

This section of the specification describes the proof-of-stake mechanism of Namada, which is largely modeled after Cosmos bonded proof-of-stake, but makes significant changes to bond storage representation, validator set change handling, reward distribution, and slashing, with the general aims of increased precision in reasoning about security, validator decentralisation, and avoiding unnecessary proof-of-stake-related transactions.

This section is split into three subcomponents: the bonding mechanism, reward distribution, and cubic slashing.

Context

Blockchain systems rely on economic security (directly or indirectly) to prevent abuse and for actors to behave according to the protocol. The aim is that economic incentives promote correct long-term operation of the system and economic punishments discourage diverging from correct protocol execution either by mistake or with the intent of carrying out attacks. Many PoS blockchains rely on the 1/3 Byzantine rule, where they make the assumption that the adversary cannot control more than 1/3 of the total stake or 1/3 of the actors.

Goals of Rewards and Slashing: Liveness and Security

  • Security: Delegation and Slashing: we want to make sure validators are backed by enough funds to make misbehaviour very expensive. Security is achieved by punishing (slashing) validators if they misbehave. Slashing locked funds (stake) is intended to disincentivize diverging from correct execution of the protocol, which in this case is voting to finalize valid blocks.
  • Liveness: Paying Rewards. For continued operation of Namada we want to incentivize participating in consensus and delegation, which helps security.

Security

In blockchain systems we do not rely on altruistic behavior but rather economic security. We expect the validators to execute the protocol correctly. They get rewarded for doing so and punished otherwise. Each validator has some self-stake and some stake that is delegated to it by other token holders. The validator and delegators share the reward and risk of slashing impact with each other.

The total stake behind consensus should be taken into account when value is transferred via a transaction. For example, if we have 1 billion tokens and we aim for 300 million of these tokens to be backing validators, then users should not transfer more than 200 million of this token within a block.

Bonding mechanism

Epoched data

Epoched data is data associated with a specific epoch that is set in advance. The data relevant to the PoS system in the ledger's state are epoched. Each piece of data can be uniquely identified. These are:

  • System parameters. Discrete values for each epoch in which the parameters have changed.
  • Validator sets. Discrete values for each epoch.
  • Total voting power. A sum of all validators' voting power, excluding jailed validators. A delta value for each epoch.
  • Validators' consensus key, state and total bonded tokens. Identified by the validator's address.
  • Bonds are created by self-bonding and delegations. They are identified by the pair of source address and the validator's address.

Changes to the epoched data do not take effect immediately. Instead, changes in epoch n are queued to take effect in epoch n + pipeline_length for most cases and n + pipeline_length + unbonding_length for unbonding actions. Should the same validator's data or the same bonds (i.e. with the same identity) be updated more than once in the same epoch, the later update overrides the previously queued-up update. For bonds, the token amounts are added up. Once epoch n has ended, the queued-up updates for epoch n + pipeline_length are final and the values become immutable.

Additionally, any account may submit evidence for a slashable misbehaviour.

Validator

A validator must have a public consensus key.

A validator may be in one of the following states:

  • inactive: A validator is not being considered for block creation and cannot receive any new delegations.
  • candidate: A validator is considered for block creation and can receive delegations.

For each validator (in any state), the system also tracks total bonded tokens as a sum of the tokens in their self-bonds and delegated bonds. The total bonded tokens determine their voting power by multiplication by the votes_per_token parameter. The voting power is used for validator selection for block creation and in governance-related activities.

Validator actions

  • become validator: Any account that is not a validator already and that doesn't have any delegations may request to become a validator. It is required to provide a public consensus key. For the action applied in epoch n, the validator's state will be set to candidate for epoch n + pipeline_length and the consensus key is set for epoch n + pipeline_length.
  • deactivate: Only a validator whose state at or before the pipeline_length offset is candidate may deactivate. For this action applied in epoch n, the validator's state is set to become inactive in epoch n + pipeline_length.
  • reactivate: Only an inactive validator may reactivate. Similarly to become validator action, for this action applied in epoch n, the validator's state will be set to candidate for epoch n + pipeline_length.
  • self-bond: A validator may lock-up tokens into a bond only for its own validator's address.
  • unbond: Any self-bonded tokens may be partially or fully unbonded.
  • withdraw unbonds: Unbonded tokens may be withdrawn in or after the unbond's epoch.
  • change consensus key: Set the new consensus key. When applied in epoch n, the key is set for epoch n + pipeline_length.
  • change commission rate: Set the new commission rate. When applied in epoch n, the new value will be set for epoch n + pipeline_length. The commission rate change must be within the max_commission_rate_change limit set by the validator.

Validator sets

A candidate validator that is not jailed (see slashing) can be in one of the three sets:

  • consensus - consensus validator set, capacity limited by the max_validator_slots parameter
  • below_capacity - validators below consensus capacity, but above the threshold set by min_validator_stake parameter
  • below_threshold - validators with stake below min_validator_stake parameter

From all the candidate validators, in each epoch the ones with the most voting power, up to the max_validator_slots limit, are selected for the consensus validator set. Whenever the stake of a validator changes, the validator sets must be updated at the appropriate offset matching the stake update.

The min_validator_stake limit is introduced because the protocol needs to iterate through the validator sets in order to copy the last known state into a new epoch when the epoch changes (to avoid offloading this cost to a transaction that is unlucky enough to be the first one to update the validator set(s) in some new epoch), and also to distribute rewards to consensus validators and to record unchanged validator products for below_capacity validators, who do not receive rewards in the current epoch.

Delegator

A delegator may have any number of delegations to any number of validators. Delegations are stored in bonds.

Delegator actions

  • delegate: An account which is not a validator may delegate tokens to any number of validators. This will lock-up tokens into a bond.
  • undelegate: Any delegated tokens may be partially or fully unbonded.
  • withdraw unbonds: Unbonded tokens may be withdrawn in or after the unbond's epoch.

Bonds

A bond locks-up tokens from validators' self-bonding and delegators' delegations. For self-bonding, the source address is equal to the validator's address. Only validators can self-bond. For a bond created from a delegation, the bond's source is the delegator's account.

For each epoch, bonds are uniquely identified by the pair of source and validator's addresses. A bond created in epoch n is written into epoch n + pipeline_length. If there already is a bond in the epoch n + pipeline_length for this pair of source and validator's addresses, its tokens are incremented by the newly bonded amount.

Any bonds created in epoch n increment the bond's validator's total bonded tokens by the bond's token amount and update the voting power for epoch n + pipeline_length.

The tokens put into a bond are immediately deducted from the source account.

Unbond

An unbonding action (validator unbond or delegator undelegate) requested by the bond's source account in epoch n creates an "unbond" with epoch set to n + pipeline_length + unbonding_length. We also store the epoch of the bond(s) from which the unbond is created in order to determine if the unbond should be slashed if a fault occurred within the range of the bond epoch (inclusive) and the unbond epoch (exclusive). The "bond" from which the tokens are being unbonded is decremented in-place (in whatever epoch it was created in).

Any unbonds created in epoch n decrement the bond's validator's total bonded tokens by the bond's token amount and update the voting power for epoch n + pipeline_length.

An "unbond" with epoch set to n may be withdrawn by the bond's source address in or any time after the epoch n. Once withdrawn, the unbond is deleted and the tokens are credited to the source account.

Note that unlike bonding and unbonding, where token changes are delayed to some future epochs (pipeline or unbonding offset), the token withdrawal applies immediately. This is because when the tokens are withdrawable, they are already "unlocked" from the PoS system and do not contribute to voting power.
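
As a small illustration of the offsets involved (the helper name is hypothetical):

/// The epoch in which an unbond requested in `current_epoch` becomes
/// withdrawable, per the pipeline and unbonding offsets described above.
fn withdrawable_epoch(current_epoch: u64, pipeline_len: u64, unbonding_len: u64) -> u64 {
    current_epoch + pipeline_len + unbonding_len
}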

Slashing

An important part of the security model of Namada is based on making attacking the system very expensive. To this end, the validator who has bonded stake will be slashed once an offense has been detected.

These are the types of offenses:

  • Equivocation in consensus
    • voting: meaning that a validator has submitted two votes that are conflicting
    • block production: a block producer has created two different blocks for the same height
  • Invalidity:
    • block production: a block producer has produced an invalid block
    • voting: validators have voted on an invalid block

Unavailability is not considered an offense, but a validator who hasn't voted will not receive rewards.

Once an offense has been reported:

  1. Kicking out
  2. Slashing
  • Individual: Once someone has reported an offense, it is reviewed by validators and, if confirmed, the offender is slashed.
  • Cubic slashing: escalated slashing

Instead of absolute values, validators' total bonded token amounts and bonds' and unbonds' token amounts are stored as their deltas (i.e. the change of quantity from a previous epoch) to allow distinguishing changes for different epochs, which is essential for determining whether tokens should be slashed. Slashes for a fault that occurred in epoch n may only be applied before the beginning of epoch n + unbonding_length. For this reason, in epoch m we can sum all the deltas of total bonded token amounts and of bonds and unbonds with the same source and validator for epochs equal to or less than m - unbonding_length into a single total bonded token amount, a single bond and a single unbond record. This keeps the total number of total bonded token amounts for a unique validator, and of bonds and unbonds for a unique pair of source and validator, bounded by a maximum number (equal to unbonding_length).

To disincentivize validators' misbehaviour in the PoS system, a validator may be slashed for any fault that it has committed. Evidence of misbehaviour may be submitted by any account for a fault that occurred in epoch n anytime before the beginning of epoch n + unbonding_length.

Valid evidence reduces the validator's total bonded token amount by the slash rate in and before the epoch in which the fault occurred. The validator's voting power must also be adjusted to the slashed total bonded token amount. Additionally, a slash is stored with the misbehaving validator's address and the relevant epoch in which the fault occurred. When an unbond is being withdrawn, we first look up whether any slash occurred within the range of epochs in which it was active and, if so, reduce its token amount by the slash rate. Note that bond and unbond amounts are not slashed until their tokens are withdrawn.

The invariant is that the sum of amounts that may be withdrawn from a misbehaving validator must always add up to the total bonded token amount.
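
An illustrative sketch of applying slashes when an unbond is withdrawn; whether multiple slashes compound multiplicatively, as written here, is an assumption, and the names are hypothetical:

/// Reduce a withdrawn unbond's amount by every slash whose fault epoch falls
/// within [bond_epoch, unbond_epoch).
fn apply_slashes(amount: u64, bond_epoch: u64, unbond_epoch: u64, slashes: &[(u64, f64)]) -> u64 {
    let mut remaining = amount as f64;
    for &(fault_epoch, rate) in slashes {
        if fault_epoch >= bond_epoch && fault_epoch < unbond_epoch {
            remaining *= 1.0 - rate;
        }
    }
    remaining as u64
}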

Initialization

An initial validator set with self-bonded token amounts must be given on system initialization.

This set is used to initialize the genesis state with epoched data active immediately (from the first epoch).

System parameters

The default values that are relative to epoch duration assume that an epoch lasts about 24 hours.

  • max_validator_slots: Maximum consensus validators, default 128
  • min_validator_stake: Minimum stake of a validator that allows the validator to enter the consensus or below_capacity sets, in number of native tokens. Because the inflation system targets a bonding ratio of 2/3, the minimum should be somewhere around total_supply * 2/3 / max_validator_slots, but it can and should be much lower to lower the entry cost, as long as it's enough to prevent validation account creation spam that could slow down PoS system update on epoch change
  • pipeline_len: Pipeline length in number of epochs, default 2 (see https://github.com/cosmos/cosmos-sdk/blob/019444ae4328beaca32f2f8416ee5edbac2ef30b/docs/architecture/adr-039-epoched-staking.md#pipelining-the-epochs)
  • unbonding_len: Unbonding duration in number of epochs, default 6
  • votes_per_token: Used in validators' voting power calculation, default 100‱ (1 voting power unit per 1000 tokens)
  • duplicate_vote_slash_rate: Portion of validator's stake that should be slashed on a duplicate vote
  • light_client_attack_slash_rate: Portion of validator's stake that should be slashed on a light client attack

Storage

The system parameters are written into storage to allow for changes to them. Additionally, each validator may record a new parameter value under their sub-key that they wish to change to, which would override the system parameters when more than 2/3 of the voting power is in agreement on all the parameter values.

The validators' data are keyed by their addresses, conceptually:

type Validators = HashMap<Address, Validator>;

Epoched data are stored in a structure, conceptually looking like this:

struct Epoched<Data> {
  /// The epoch in which this data was last updated
  last_update: Epoch,
  /// How many epochs of historical data to keep; this is `0` in most cases
  /// except for validator `total_deltas` and `total_unbonded`, for which
  /// historical data for up to `pipeline_length + unbonding_length - 1`
  /// epochs is needed to be able to apply any slashes that may occur.
  /// The value is not actually stored with the data; it's either a constant
  /// value or resolved from the PoS parameters on which it may depend.
  past_epochs_to_store: u64,
  /// An ordered map in which the head is the data for the epoch
  /// `last_update - past_epochs_to_store`, followed by every consecutive
  /// epoch up to a required length. For system parameters and all other
  /// epoched data `LENGTH = past_epochs_to_store + pipeline_length + 1`,
  /// with the exception of unbonds, for which
  /// `LENGTH = past_epochs_to_store + pipeline_length + unbonding_length + 1`.
  data: Map<Epoch, Option<Data>>
}

Note that not all epochs will have data set, only the ones in which some changes occurred. The only exception to this are the consensus and below_capacity validator sets, which are copied from the latest state into a new epoch by the protocol. This is so that a transaction never has to update the whole validator set when it hasn't changed yet in the current epoch, which would require a copy of the last epoch's data, and that copy would additionally have to be verified by the PoS validity predicate.

To try to look-up a value for Epoched data with discrete values in each epoch (such as the consensus validator set) in the current epoch n:

  1. read the data field at epoch n:
    1. if there's a value at n return it
    2. else if n == last_update - past_epochs_to_store, return None
    3. else decrement n and repeat this sub-step from 1.

To look-up a value for Epoched data with delta values in the current epoch n:

  1. sum all the values that are not None in the last_update - past_epochs_to_store .. n epoch range bounded inclusively below and above
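
A simplified sketch of this delta look-up; the types are stand-ins for the Epoched structure above:

use std::collections::BTreeMap;

/// Simplified stand-in for `Epoched<Data>` with delta (i128) values.
struct EpochedDelta {
    last_update: u64,
    past_epochs_to_store: u64,
    data: BTreeMap<u64, Option<i128>>,
}

/// Sum every stored delta from `last_update - past_epochs_to_store` up to and
/// including the current epoch `n`.
fn lookup_delta(epoched: &EpochedDelta, n: u64) -> i128 {
    let start = epoched.last_update - epoched.past_epochs_to_store;
    (start..=n)
        .filter_map(|epoch| epoched.data.get(&epoch).copied().flatten())
        .sum()
}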

To update a value in Epoched data with discrete values in epoch n with value new for epoch m:

  1. let epochs_to_clear = min(n - last_update, LENGTH)
  2. if epochs_to_clear == 0:
    1. data[m] = new
  3. else:
    1. for i in last_update - past_epochs_to_store .. last_update - past_epochs_to_store + epochs_to_clear range bounded inclusively below and exclusively above, set data[i] = None
    2. set data[m] = new
    3. set last_update to the current epoch

To update a value in Epoched data with delta values in epoch n with value delta for epoch m:

  1. let epochs_to_sum = min(n - last_update, LENGTH)
  2. if epochs_to_sum == 0:
    1. set data[m] = data[m].map_or_else(delta, |last_delta| last_delta + delta) (add the delta to the previous value, if any, otherwise use the delta as the value)
  3. else:
    1. let sum be equal to the sum of all delta values in the last_update - past_epochs_to_store .. last_update - past_epochs_to_store + epochs_to_sum range, bounded inclusively below and exclusively above, and set data[i] = None for each epoch i in that range
    2. set data[n - past_epochs_to_store] = data[n - past_epochs_to_store].map_or_else(sum, |last_delta| last_delta + sum) to add the sum to the last epoch that will be stored
    3. set data[m] = data[m].map_or_else(delta, |last_delta| last_delta + delta) to add the new delta
    4. set last_update to the current epoch

The invariants for updates in both cases are that m >= n (epoched data cannot be updated in an epoch lower than the current epoch) and m - n <= LENGTH - past_epochs_to_store (epoched data can only be updated at the future-most epoch set by the LENGTH - past_epochs_to_store of the data).

We store the consensus validators and the below_capacity validators in two sets, ordered by their voting power. We don't have to store the below_threshold validators in a set, because we don't need to know their order.

Note that we still need to store below_capacity set in order of their voting power, because when e.g. one of the consensus validator's voting power drops below that of a maximum below_capacity validator, we need to know which validator to swap in into the consensus set. The protocol new epoch update just disregards validators who are not in consensus or below_capacity sets as below_threshold validators and so iteration on unbounded size is avoided. Instead the size of the validator set that is regarded for PoS rewards can be adjusted by the min_validator_stake parameter via governance.

Conceptually, this may look like this:

type VotingPower = u64;

/// Validator's address with its voting power.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct WeightedValidator {
  /// The `voting_power` field must come first, because the lexicographic
  /// ordering is based on the top-to-bottom declaration order, and in the
  /// `ValidatorSet` the `WeightedValidator`s need to be sorted by `voting_power`.
  voting_power: VotingPower,
  address: Address,
}

struct ValidatorSet {
  /// Consensus validator set with maximum size equal to `max_validator_slots`
  consensus: BTreeSet<WeightedValidator>,
  /// Other validators that are not in `consensus`, but have stake above `min_validator_stake`
  below_capacity: BTreeSet<WeightedValidator>,
}

type ValidatorSets = Epoched<ValidatorSet>;

/// The sum of all validators' voting power (including `below_threshold` validators)
type TotalVotingPower = Epoched<VotingPower>;

When any validator's voting power changes, we attempt to perform the following update on the ValidatorSet:

  1. let validator be the validator's address, power_before and power_after be the voting power before and after the change, respectively
  2. find if the power_before and power_after are above the min_validator_stake threshold
    1. if they're both below the threshold, nothing else needs to be done
  3. let power_delta = power_after - power_before
  4. let min_consensus = consensus.first() (consensus validator with lowest voting power)
  5. let max_below_capacity = below_capacity.last() (below_capacity validator with greatest voting power)
  6. find whether the validator was in consensus set, let was_in_consensus = power_before >= max_below_capacity.voting_power
  7. find whether the validator was in below capacity set, let was_below_capacity = power_before > min_validator_stake
    1. if was_in_consensus:
      1. if power_after >= max_below_capacity.voting_power, update the validator in consensus set with voting_power = power_after
      2. else if power_after < min_validator_stake, remove the validator from consensus, insert the max_below_capacity.address validator into consensus and remove max_below_capacity.address from below_capacity
      3. else, remove the validator from consensus, insert it into below_capacity and remove max_below_capacity.address from below_capacity and insert it into consensus
    2. else if was_below_capacity:
      1. if power_after <= min_consensus.voting_power, update the validator in below_capacity set with voting_power = power_after
      2. else if power_after < min_validator_stake, remove the validator from below_capacity
      3. else, remove the validator from below_capacity, insert it into consensus and remove min_consensus.address from consensus and insert it into below_capacity
    3. else (if validator was below minimum stake):
      1. if power_after > min_consensus.voting_power, remove the min_consensus.address from consensus, insert the min_consensus.address into below_capacity and insert the validator in consensus set with voting_power = power_after
      2. else if power_after >= min_validator_stake, insert the validator into below_capacity set with voting_power = power_after
      3. else, do nothing
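
A condensed sketch of this update, assuming the in-memory ValidatorSet above (with Clone additionally derived on WeightedValidator and Address), both sets non-empty, and ignoring the epoched-storage layer; this helper is illustrative, not the actual implementation:

/// Re-position `validator` after its voting power changes from `power_before`
/// to `power_after`.
fn update_validator_set(
    sets: &mut ValidatorSet,
    validator: &Address,
    power_before: VotingPower,
    power_after: VotingPower,
    min_validator_stake: VotingPower,
) {
    // If the stake is below the threshold both before and after, do nothing.
    if power_before < min_validator_stake && power_after < min_validator_stake {
        return;
    }
    // Weakest consensus validator and strongest below-capacity validator.
    let min_consensus = sets.consensus.iter().next().unwrap().clone();
    let max_below_capacity = sets.below_capacity.iter().next_back().unwrap().clone();

    let was_in_consensus = power_before >= max_below_capacity.voting_power;
    let was_below_capacity = power_before >= min_validator_stake;

    let old = WeightedValidator { voting_power: power_before, address: validator.clone() };
    let new = WeightedValidator { voting_power: power_after, address: validator.clone() };

    if was_in_consensus {
        sets.consensus.remove(&old);
        if power_after >= max_below_capacity.voting_power {
            // Stays in consensus with the updated power.
            sets.consensus.insert(new);
        } else if power_after < min_validator_stake {
            // Drops below the threshold: promote the strongest below-capacity validator.
            sets.below_capacity.remove(&max_below_capacity);
            sets.consensus.insert(max_below_capacity);
        } else {
            // Swap with the strongest below-capacity validator.
            sets.below_capacity.remove(&max_below_capacity);
            sets.consensus.insert(max_below_capacity);
            sets.below_capacity.insert(new);
        }
    } else if was_below_capacity {
        sets.below_capacity.remove(&old);
        if power_after <= min_consensus.voting_power {
            if power_after >= min_validator_stake {
                // Stays below capacity with the updated power; otherwise it
                // drops below the threshold and is simply removed.
                sets.below_capacity.insert(new);
            }
        } else {
            // Swap with the weakest consensus validator.
            sets.consensus.remove(&min_consensus);
            sets.below_capacity.insert(min_consensus);
            sets.consensus.insert(new);
        }
    } else {
        // Was below the minimum stake threshold before.
        if power_after > min_consensus.voting_power {
            sets.consensus.remove(&min_consensus);
            sets.below_capacity.insert(min_consensus);
            sets.consensus.insert(new);
        } else {
            sets.below_capacity.insert(new);
        }
    }
}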

Additionally, for rewards distribution:

  • When a validator moves from below_threshold set to either below_capacity or consensus set, the transaction must also fill in the validator's reward products from its last known value, if any, in all epochs starting from their last_known_product_epoch (exclusive) up to the current_epoch + pipeline_len - 1 (inclusive) in order to make their look-up cost constant (assuming that validator's stake can only be increased at pipeline_len offset).
  • Conversely, when the stake of a validator in the consensus or below_capacity set drops below min_validator_stake, we record their last_known_product_epoch so that it can be used if and when the validator's stake rises above min_validator_stake again.

Within each validator's address space, we store the public consensus key, state, total bonded token amount, total unbonded token amount (needed for applying slashes), and voting power calculated from the total bonded token amount (even though the voting power is stored in the ValidatorSet, we also need the voting_power here, because we cannot look it up in the ValidatorSet without iterating the whole set):

struct Validator {
  consensus_key: Epoched<PublicKey>,
  state: Epoched<ValidatorState>,
  total_deltas: Epoched<token::Amount>,
  total_unbonded: Epoched<token::Amount>,
  voting_power: Epoched<VotingPower>,
}

enum ValidatorState {
  Inactive,
  Candidate,
}

The bonds and unbonds are keyed by their identifier:

type Bonds = HashMap<BondId, Epoched<Bond>>;
type Unbonds = HashMap<BondId, Epoched<Unbond>>;

struct BondId {
  validator: Address,
  /// The delegator address for delegations, or the same as the `validator`
  /// address for self-bonds.
  source: Address,
}

struct Bond {
  /// A key is the epoch at which the bond was created. This is used in
  /// unbonding, where it's needed for the slash epoch range check.
  deltas: HashMap<Epoch, token::Amount>,
}

struct Unbond {
  /// A key is a pair of the epoch of the bond from which the unbond was
  /// created and the epoch of unbonding. This is needed for the slash epoch range check.
  deltas: HashMap<(Epoch, Epoch), token::Amount>
}

For slashes, we store the epoch and block height at which the fault occurred, the slash rate, and the slash type:

struct Slash {
  epoch: Epoch,
  block_height: u64,
  /// slash token amount ‱ (per ten thousand)
  rate: u8,
  r#type: SlashType,
}

Cubic slashing

Namada implements a slashing scheme that is called cubic slashing: the amount of a slash is proportional to the cube of the voting power committing infractions within a particular interval. This is designed to make it riskier to operate larger or similarly configured validators, and thus the scheme encourages network resilience.

When a slash is detected:

  1. Using the height of the infraction, calculate the epoch at the unbonding length relative to the current epoch. This is the final epoch before the stake that was used to commit the infraction can be fully unbonded and withdrawn. The slash is enqueued to be processed in this final epoch to allow for sufficient time to detect any other validator misbehaviors while still processing the slash before the infraction stake could be unbonded and withdrawn.
  2. Jail the misbehaving validator, effective at the beginning of the next epoch. While the validator is jailed, it is removed from the validator set. Note that this is the only instance in our proof-of-stake model wherein the validator set is updated without waiting for the pipeline offset.
  3. Prevent the delegators to this validator from altering their delegations in any way until the enqueued slash is processed.

At the end of each epoch, for each slash enqueued to be processed for the end of the epoch:

  1. Collect all known infractions committed within a range of [-window_width, +window_width] epochs around the infraction in question. By default, window_width = 1.
  2. Sum the fractional voting powers (relative to the total PoS voting power) of the misbehaving validator for each of the collected nearby infractions.
  3. The final slash rate for the slash in question is then dependent on this sum. Using $r_{\text{nom}}$ to denote the nominal slash rate for the infraction type and $\mathrm{vp}_{\text{frac}}$ to denote this sum of fractional voting powers, the slash rate is expressed as:

$$ r = \min\Big(1,\ \max\big(r_{\text{nom}},\ 9 \cdot \mathrm{vp}_{\text{frac}}^{\,2}\big)\Big) $$

Or, in pseudocode:


#![allow(unused)]
fn main() {
// Infraction type, where the inner field is the nominal slash rate for the type
enum Infraction {
    DuplicateVote(Decimal),
    LightClientAttack(Decimal),
}

// Generic validator with an address and voting power
struct Validator {
    address: Vec<u8>,
    voting_power: u64,
}

// Generic slash object with the misbehaving validator and infraction type
struct Slash {
    validator: Validator,
    infraction_type: Infraction,
}

// Calculate the cubic slash rate for a slash in the current epoch
fn calculate_cubic_slash_rate(
    current_epoch: u64,
    nominal_slash_rate: Decimal,
    cubic_window_width: u64,
    slashes: Map<u64, Vec<Slash>>,
    total_voting_power: u64,
) -> Decimal {
    // Sum the fractional voting powers of all infractions committed within
    // the window around the current epoch.
    let mut vp_frac_sum = Decimal::ZERO;

    let start_epoch = current_epoch - cubic_window_width;
    let end_epoch = current_epoch + cubic_window_width;

    for epoch in start_epoch..=end_epoch {
        // All slashes for infractions committed in this epoch (empty if none)
        let cur_slashes = slashes.get(epoch);
        let vp_frac_this_epoch = cur_slashes.iter().fold(
            Decimal::ZERO,
            |sum, slash: &Slash| {
                sum + Decimal::from(slash.validator.voting_power)
                    / Decimal::from(total_voting_power)
            },
        );
        vp_frac_sum += vp_frac_this_epoch;
    }

    // The rate is the nominal rate or `9 * sum^2`, whichever is greater,
    // capped at 1 (100%).
    cmp::min(
        Decimal::ONE,
        cmp::max(
            nominal_slash_rate,
            Decimal::from(9) * vp_frac_sum * vp_frac_sum,
        ),
    )
}
}

Plotted as a function of the fractional voting-power sum (for a fixed nominal slash rate), the rate stays flat at the nominal rate for small sums and then grows quadratically until it is capped at 1.

  4. Set the slash rate on the now "finalised" slash in storage.
  5. Update the misbehaving validators' stored voting powers appropriately.
  6. Delegations to the validator can now be redelegated / start unbonding / etc.

Note: The voting power associated with a slash is the voting power of the validator when they violated the protocol. This does mean that these voting powers may not sum to 1, but this method should still be close to the desired incentives and cannot really be changed without making the system easier to game.

A jailed validator can later submit a transaction to unjail themselves after a configurable period. When the transaction is applied and accepted, the validator updates its state to "candidate" and is added back to the appropriate validator set (depending on its new voting power) starting at the pipeline offset relative to the epoch in which the unjailing transaction was submitted.

At present, funds slashed are sent to the governance treasury.

Slashes

Slashes should lead to punishment for delegators who were contributing voting power to the validator at the height of the infraction, as if the delegations were iterated over and slashed individually.

This can be implemented as a negative inflation rate for a particular block.

Reward distribution

Namada uses the automatically-compounding variant of the F1 fee distribution.

Rewards are given to validators for proposing blocks, for voting on finalizing blocks, and for being in the consensus validator set. The funds for these rewards come from minting (creating new tokens). The amount that is minted depends on how many staking tokens are locked (staked) and some maximum annual inflation rate. The rewards mechanism is implemented as a PD controller that dynamically adjusts the inflation rate to achieve a target staked token ratio. When the total fraction of tokens staked is very low, the return rate per validator needs to increase, but as the total fraction of stake rises, validators will receive fewer rewards. Once the desired staking fraction is achieved, the amount minted will just be the desired annual inflation.
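
A schematic of the kind of PD controller described here (the gain constants, clamping, and parameter names are illustrative assumptions, not the protocol's actual parameters):

/// One epoch-step of a proportional-derivative controller that nudges the
/// staking-reward inflation rate towards a target staked-token ratio.
/// All quantities are fractions in [0, 1].
fn next_inflation_rate(
    current_inflation: f64,
    staked_ratio: f64,
    last_staked_ratio: f64,
    target_staked_ratio: f64,
    max_inflation: f64,
    p_gain: f64,
    d_gain: f64,
) -> f64 {
    // Proportional term: how far the staked ratio is from the target.
    let error = target_staked_ratio - staked_ratio;
    // Derivative term: how fast the staked ratio is changing.
    let error_delta = last_staked_ratio - staked_ratio;
    let adjustment = p_gain * error + d_gain * error_delta;
    // Inflation rises when too little is staked and falls when too much is,
    // clamped to the protocol's maximum.
    (current_inflation + adjustment).clamp(0.0, max_inflation)
}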

Each delegation to a validator is initiated at an agreed-upon commission rate charged by the validator. Validators pay out rewards to delegators based on this mutually-determined commission rate. The minted rewards are auto-bonded and only transferred when the funds are unbonded. Once the protocol determines the total amount of tokens to mint at the end of the epoch, the minted tokens are effectively divided among the relevant validators and delegators according to their proportional stake. In practice, the reward products, which are the fractional increases in staked tokens claimed, are stored for the validators and delegators, and the reward tokens are only transferred to the validator’s or delegator’s account upon withdrawal. This is described in the following sections. The general system is similar to what Cosmos does.

Basic algorithm

Consider a system with

  • a canonical singular staking unit of account.
  • a set of validators $V_i$.
  • a set of delegations $D_{i,j}$, where $i$ indicates the associated validator, each with a particular initial amount.
  • epoched proof-of-stake, where changes are applied as follows:
    • bonding is processed after the pipeline length
    • unbonding is processed after the pipeline + unbonding length
    • rewards are paid out at the end of each epoch, i.e., in each epoch $e$, a reward $r_V(e)$ is paid out to validator $V$
    • slashing is applied as described in slashing.

We wish to approximate as exactly as possible the following ideal delegator reward distribution system:

  • At each epoch, for a validator $V$, iterate over all of the delegations to that validator. Update each delegation $D$ as follows: $$D \rightarrow D \cdot \left(1 + \frac{r_V(e)}{s_V(e)}\right),$$ where $r_V(e)$ and $s_V(e)$ respectively denote the reward and stake of validator $V$ at epoch $e$.
  • Similarly, multiply the validator's voting power by the same factor $\left(1 + \frac{r_V(e)}{s_V(e)}\right)$, which should now equal the sum of their revised-amount delegations.

In this system, rewards are automatically rebonded to delegations, increasing the delegation amounts and validator voting powers accordingly.

However, we wish to implement this without actually needing to iterate over all delegations each block, since this is too computationally expensive. We can exploit the fact that the multiplicative factor $\left(1 + \frac{r_V(e)}{s_V(e)}\right)$ does not vary per delegation to perform this calculation lazily. In this lazy method, only a constant amount of data per validator per epoch is stored, and revised amounts are calculated for each individual delegation only when a delegation changes.

We will demonstrate this for a delegation $D$ to a validator $V$. Let $s_V(e)$ denote the stake of $V$ at epoch $e$.

For two epochs $m$ and $n$ with $m \le n$, define the function $p$ as

$$ p(n, m) = \prod_{e = m + 1}^{n} \left(1 + \frac{r_V(e)}{s_V(e)}\right). $$

Denote $p(n, 0)$ as $p_n$. The function $p$ has a useful property:

$$ p(n, m) = \frac{p_n}{p_m}. $$

One may calculate the accumulated changes up to epoch $n$ as

$$ D(n) = D(0) \cdot p_n. $$

If we know the delegation up to epoch $m$, the delegation at epoch $n$ is obtained by the following formula:

$$ D(n) = D(m) \cdot p(n, m). $$

Using the property above,

$$ D(n) = D(m) \cdot \frac{p_n}{p_m}. $$

Clearly, the quantity $p_n / p_m$ does not depend on the delegation $D$. Thus, for a given validator, we only need to store this product $p_e$ at each epoch $e$, from which the updated amounts for all delegations can be calculated.

The product at the end of each epoch is updated as follows.


updateProducts
  :: HashMap<Address, HashMap<Epoch, Float>>
  -> HashSet<Address>
  -> Epoch
  -> HashMap<Address, HashMap<Epoch, Float>>

updateProducts validatorProducts activeSet currentEpoch =
  -- for each validator in the active set:
  let stake = PoS.readValidatorTotalDeltas validator currentEpoch
      reward = PoS.reward stake currentEpoch
      rsratio = reward / stake
      entries = lookup validatorProducts validator
      lastProduct = lookup entries (Epoch (currentEpoch - 1))
  in insert currentEpoch (lastProduct * (1 + rsratio)) entries

In case a delegator wishes to withdraw delegation(s), the proportionate rewards are appropriated using the aforementioned scheme, which is implemented by the following function.

withdrawalAmount
  :: HashMap<Address, HashMap<Epoch, Product>>
  -> BondId
  -> [(Epoch, Delegation)]
  -> Token::amount

withdrawalAmount validatorProducts bondId unbonds =
  sum [ stake * endp / startp
      | (endEpoch, unbond) <- unbonds
      , let epochProducts = lookup (validator bondId) validatorProducts
      , let startp = lookup (startEpoch unbond) epochProducts
      , let endp = lookup endEpoch epochProducts
      , let stake = delegation unbond
      ]
 

Commission

Commission is charged by a validator on the rewards coming from delegations. It is set as a percentage by the validator, who may charge any commission rate between 0% and 100%.

Let $c_V(e)$ be the commission rate for a delegation $D$ to a validator $V$ at epoch $e$. The product $p_n$ that was introduced earlier can be modified for a particular delegator as

$$ p_n = \prod_{e} \left(1 + (1 - c_V(e)) \cdot \frac{r_V(e)}{s_V(e)}\right) $$

in order to calculate the rewards given out to the delegator during withdrawal. Thus the commission charged per epoch is retained by the validator and remains untouched upon withdrawal by the delegator.

The commission rate $c_V(e)$ is the same for all delegations to a validator $V$ in a given epoch $e$, including for self-bonds. The validator can change the commission rate at any point, subject to a maximum rate of change per epoch, which is a constant specified when the validator is created and immutable once validator creation has been accepted.

While rewards are given out at the end of every epoch, voting power is only updated after the pipeline offset. According to the proof-of-stake system, at the current epoch e, the validator sets can only be updated for epoch e + pipeline_offset, and they should remain unchanged from epoch e to e + pipeline_offset - 1. Updating voting power in the current epoch would violate this rule.

Distribution of block rewards to validators

A validator can earn a portion of the block rewards in three different ways:

  • Proposing the block
  • Providing a signature on the constructed block (voting)
  • Being a member of the consensus validator set

The reward mechanism calculates the fractions of the total block reward that are given for the above-mentioned three behaviors, such that

$$ r_p + r_s + r_b = 1, $$

where $r_p$ is the proposer reward fraction, $r_s$ is the reward fraction for the set of signers, and $r_b$ is the reward fraction for the whole active validator set.

The reward for proposing a block is dependent on the combined voting power of all validators whose signatures are included in the block. This is to incentivize the block proposer to maximize the inclusion of signatures, as blocks with more signatures have better security guarantees and allow for more efficient light clients.

The block proposer reward $r_p$ is parameterized as an increasing function of $s$, where $s$ is the ratio of the combined stake of all block signers to the combined stake of all consensus validators:

$$ s = \frac{\sum_{i \in \text{signers}} \mathrm{stake}_i}{\sum_{j \in \text{consensus}} \mathrm{stake}_j}. $$

The value of $s$ is bounded from below at 2/3, since a block requires at least this amount of signing stake to be verified. We currently enforce that the block proposer reward is a minimum of 1%.

The block signer reward for a validator $V_i$ is parameterized in terms of $\mathrm{stake}_i$ (the stake of validator $V_i$), the combined stake of all signers, and the combined stake of all consensus validators.

Finally, the remaining reward, given just for being in the consensus validator set, is the fraction of the block reward left over once the proposer and signer fractions have been accounted for.

Thus, as an example, the total fraction of the block reward for the proposer (assuming they include their own signature in the block) is the sum of their proposer fraction, their signer fraction, and their consensus-membership fraction.

The values of these parameters are set in the proof-of-stake storage at genesis and can only be changed via governance. The values are chosen relative to each other such that a block proposer is always incentivized to include as much signing stake as possible.

These rewards must be determined for every single block, but the inflationary token rewards are only minted at the end of an epoch. Thus, the rewards products are only updated at the end of an epoch as well.

In order to maintain a record of the block rewards over the course of an epoch, a reward fraction accumulator is implemented as a Map<Address, Decimal> and held in the storage key #{PoS}/validator_set/consensus/rewards_accumulator. When finalizing each block, the accumulator value for each consensus validator is incremented with the fraction of that block's reward owed to the validator. At the end of the epoch when the rewards products are updated, the accumulator value is divided by the number of blocks in that epoch, which yields the fraction of the newly minted inflation tokens owed to the validator. The next entry of the rewards products for each validator can then be created. The map is then reset to be empty in preparation for the next epoch and consensus validator set.
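
As a schematic of this bookkeeping (using f64 in place of Decimal and illustrative function names, not the actual storage API):

use std::collections::HashMap;

type Address = String; // illustrative stand-in for the real address type

/// Called when finalizing each block: add each consensus validator's fraction
/// of this block's reward to its accumulator entry.
fn accumulate_block_rewards(
    accumulator: &mut HashMap<Address, f64>,
    block_reward_fractions: &HashMap<Address, f64>,
) {
    for (validator, fraction) in block_reward_fractions {
        *accumulator.entry(validator.clone()).or_insert(0.0) += fraction;
    }
}

/// Called at the end of the epoch: divide by the number of blocks in the epoch
/// to get each validator's share of the newly minted inflation, then reset the
/// accumulator for the next epoch and consensus validator set.
fn settle_epoch_rewards(
    accumulator: &mut HashMap<Address, f64>,
    blocks_in_epoch: u64,
    minted_inflation: u64,
) -> HashMap<Address, u64> {
    let shares: HashMap<Address, u64> = accumulator
        .iter()
        .map(|(validator, acc)| {
            let share = acc / blocks_in_epoch as f64;
            (validator.clone(), (share * minted_inflation as f64) as u64)
        })
        .collect();
    accumulator.clear();
    shares
}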

Shielded pool incentives

Rationale

Private transactions made by individual users using the MASP increase the privacy set for other users, so even if the individual doesn't care whether a particular transaction is private, others benefit from their choice to do the transaction in private instead of in public. In the absence of a subsidy (the computation required for private state transitions is likely more expensive) or other incentives, users may not elect to make their transactions private when they do not need to because the benefits do not directly accrue to them. This provides grounds for a protocol subsidy of shielded transactions (relative to the computation required), so that users who do not have a strong preference on whether or not to make their transaction private will be "nudged" by the fee difference to do so.

Separately, and additionally, a privacy set which is very small in absolute terms does not provide much privacy, and transactions increasing the privacy set provide more additional privacy if the privacy set is small. Compare, for example, the doubled privacy set from 10 to 20 transactions to the minor increase from 1010 to 1020 transactions. This provides grounds for some sort of incentive mechanism for making shielded transactions which pays in inverse proportion to the size of the current privacy set (so shielded transactions when the privacy set is small receive increased incentives in accordance with their increased contributions to privacy).

Incentive mechanisms are also dangerous, as they give users reason to craft particular transactions when they might not otherwise have done so, and they must satisfy certain constraints in order not to compromise state machine throughput, denial-of-service resistance, etc. A few constraints to keep in mind:

  • Fee subsidies cannot reduce fees to zero, or reduce fees so much that inexpensive transaction spam can fill blocks and overload validators.
  • Incentives for contributing to the privacy set should not incentivise transactions which do not meaningfully contribute to the privacy set or merely repeat a previous action (shielding and unshielding the same assets, repeatedly transferring the same assets, etc.)
  • Incentives for contributing to the privacy set, since the MASP supports many assets, will need to be adjusted over time according to actual conditions of use.

Design

Namada enacts a shielded pool incentive which pays users a variable rate for keeping assets in the shielded pool. Assets do not need to be locked in any way. Users may claim rewards while remaining in the shielded pool using the convert circuit, and unshield the rewards (should they wish to) at some later point in time. The protocol uses a PD-controller to target particular minimum amounts of particular assets being shielded. Rewards accumulate automatically over time, so claiming rewards more frequently does not result in additional funds.

Implementation

When users deposit assets into the shielded pool, the current epoch is appended to the asset type. Users can use these "epoched assets" as normal within the shielded pool. When epochs advance, users can use the convert circuit to convert assets tagged with the old epoch to assets tagged with the new epoch, receiving shielded rewards in NAM proportional to the amount of the asset they had shielded, which automatically compound while the assets remain shielded and epochs progress. When unshielding from the shielded pool, assets must be first converted to the current epoch (claiming any rewards), after which they can be converted back to the normal (un-epoched) unshielded asset denomination.

Namada allocates up to 10% per annum inflation of NAM to pay for shielded pool rewards. This inflation is kept in a temporary shielded rewards pool, which is then allocated according to a set of PD (proportional-derivative) controllers for assets and target shielded amounts configured by Namada governance. Each epoch, subject to available rewards, each controller calculates the reward rate for its asset in this epoch, which is then used to compute entries into the conversion table. Entries from epochs before the previous one are recalculated based on cumulative rewards. Users may then asynchronously claim their rewards by using the convert circuit at some future point in time.

PGF specs

Motivation

Public goods are non-excludable non-rivalrous items which provide benefits of some sort to their users. Examples include languages, open-source software, research, designs, Earth's atmosphere, and art (conceptually - a physical painting is excludable and rivalrous, but the painting as-such is not). Namada's software stack, supporting research, and ecosystem tooling are all public goods, as are the information ecosystem and education which provide for the technology to be used safely, the hardware designs and software stacks (e.g. instruction set, OS, programming language) on which it runs, and the atmosphere and biodiverse environment which renders its operation possible. Without these things, Namada could not exist, and without their continued sustenance it will not continue to. Public goods, by their nature as non-excludable and non-rivalrous, are mis-modeled by economic systems (such as payment-for-goods) built upon the assumption of scarcity, and are usually either under-funded (relative to their public benefit) or funded in ways which require artificial scarcity and thus a public loss. For this reason, it is in the interest of Namada to help out, where possible, in funding the public goods upon which its existence depends in ways which do not require the introduction of artificial scarcity, balancing the costs of available resources and operational complexity.

Design precedent

There is a lot of existing research into public-goods funding to which justice cannot be done here. Most mechanisms fall into two categories: need-based and results-based, where need-based allocation schemes attempt to pay for particular public goods on the basis of cost-of-resources, and results-based allocation schemes attempt to pay (often retroactively) for particular public goods on the basis of expected or assessed benefits to a community and thus create incentives for the production of public goods providing substantial benefits (for a longer exposition on retroactive PGF, see here, although the idea is not new). Additional constraints to consider include the cost-of-time of governance structures (which renders e.g. direct democracy on all funding proposals very inefficient), the necessity of predictable funding for long-term organisational decision-making, the propensity for bike-shedding and damage to the information commons in large-scale public debate (especially without an identity layer or Sybil resistance), and the engineering costs of implementations.

Funding categories

Note that the following is social consensus, precedent which can be set at genesis and ratified by governance but does not require any protocol changes.

Categories of public-goods funding

Namada groups public goods into four categories, with earmarked pools of funding:

  • Technical research Technical research covers funding for technical research topics related to Namada and Anoma, such as cryptography, distributed systems, programming language theory, and human-computer interface design, both inside and outside the academy. Possible funding forms could include PhD sponsorships, independent researcher grants, institutional funding, funding for experimental resources (e.g. compute resources for benchmarking), funding for prizes (e.g. theoretical cryptography optimisations), and similar.
  • Engineering Engineering covers funding for engineering projects related to Namada and Anoma, including libraries, optimisations, tooling, alternative interfaces, alternative implementations, integrations, etc. Possible funding forms could include independent developer grants, institutional funding, funding for bug bounties, funding for prizes (e.g. practical performance optimisations), and similar.
  • Social research, art, and philosophy Social research, art, and philosophy covers funding for artistic expression, philosophical investigation, and social/community research (not marketing) exploring the relationship between humans and technology. Possible funding forms could include independent artist grants, institutional funding, funding for specific research resources (e.g. travel expenses to a location to conduct a case study), and similar.
  • External public goods External public goods covers funding for public goods explicitly external to the Namada and Anoma ecosystem, including carbon sequestration, independent journalism, direct cash transfers, legal advocacy, etc. Possible funding forms could include direct purchase of tokenised assets such as carbon credits, direct cash transfers (e.g. GiveDirectly), institutional funding (e.g. Wikileaks), and similar.

Funding amounts

In Namada, up to 10% inflation per annum of the NAM token is directed to this public goods mechanism. The further division of these funds is entirely up to the discretion of the elected PGF council.

Namada encourages the public goods council to adopt a default social consensus of an equal split between categories, and between continuous and retroactive funding within each category, meaning 1.25% per annum inflation for each category/funding-type pair (e.g. 1.25% for technical research continuous funding and 1.25% for technical research retroactive PGF). If no qualified recipients are available, funds may be redirected or burnt.

The Namada PGF council is also granted a 5% income as a reward for conducting PGF activities (5% * 10% = 0.5% of total inflation). This will be a governance parameter subject to change.

Voting for the Council

Constructing the council

All valid PGF councils will be established multisignature account addresses. These must be created by the intended parties that wish to form a council. The council therefore has the discretion to decide what threshold will be required for their multisig (i.e. the "k" in "k out of n").

Proposing Candidacy

The council is responsible for publishing this address to voters and expressing its desired spending_cap.

The --spending-cap argument is an Amount, which indicates the maximum amount of NAM available to the PGF council that the PGF council is able to spend during their term. If the spending cap is greater than the total balance available to the council, the council will be able to spend up to the full amount of NAM allocated to them (i.e. the spending cap cannot increase their allowance).
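
In other words, the effective allowance is bounded by both the declared spending cap and the funds actually held by the PGF internal address; a minimal sketch (types are illustrative stand-ins):

type Amount = u64; // illustrative stand-in for the token amount type

/// The amount a council may still spend: capped both by its declared spending
/// cap (minus what it has already spent) and by the funds actually held by
/// the PGF internal address.
fn remaining_allowance(spending_cap: Amount, spent_amount: Amount, pgf_balance: Amount) -> Amount {
    spending_cap.saturating_sub(spent_amount).min(pgf_balance)
}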

A council consisting of the same members should also be able to propose multiple spending caps (with the same multisig address). These will be voted on as separate councils and votes counted separately.

Proposing candidacy as a PGF council can be done at any time. This simply signals to the rest of governance that a given established multisignature account address is willing to be voted on during a PGF council election in the future.

Candidacy proposals last a default of 30 epochs. There is no limit to the number of times a council can be proposed for candidacy. This helps ensure that no PGF council is elected that does not intend to become one.

The structure of the candidacy proposal should be


#![allow(unused)]
fn main() {
  /// Candidacy proposals, keyed by the epoch of candidacy, with the proposed
  /// council and an attestation URL as the value
  Map<Epoch, (Council, Url)>
}

Initiating the vote

Before a new PGF council can be elected, a governance proposal that suggests a new PGF council must pass. This vote is handled by the governance proposal type PgfProposal.

The PgfProposal struct is constructed as follows, and is explained in more detail in the governance specs:


#![allow(unused)]
fn main() {
struct PgfProposal{
  id: u64,
  content: Vec<u8>,
  author: Address,
  r#type: PGFCouncil,
  votingStartEpoch: Epoch,
  votingEndEpoch: Epoch,
  graceEpoch: Epoch,
}
}

The above proposal type exists in order to determine whether a new PGF council will be elected. In order for a new PGF council to be elected (and hence the previous council's power to be halted), at least 1/3 of the validating power must vote on the PgfProposal and more than half of the votes must be in favor. If more than half of the votes are against, no council is elected and the previous council's ability to spend funds (if applicable) is revoked. While the PgfProposal is active, approval voting is employed in order to elect the new PGF council. In other words, voters may vote for multiple PGF councils, and the council & spending cap pair with the greatest proportion of votes is elected.

See the example below for more detail, as it may serve as the best medium for explaining the mechanism.

Voting on the council

After the PgfProposal has been submitted, and once the council has been constructed and broadcast, the council address can be voted on by governance participants. All voting must occur between votingStartEpoch and votingEndEpoch.

The vote for a set of PGF council addresses will be constructed as follows.

Each participant submits a vote through governance:


#![allow(unused)]
fn main() {
struct OnChainVote {
    id: u64,
    voter: Address,
    yay: proposalVote,
}
}

In turn, the proposal vote will include the structure:


#![allow(unused)]
fn main() {
HashSet<(address: Address, spending_cap: Amount)>
}

The structure contains all the councils voted for, where each council is specified as a pair of Address (the established address of the multisig account) and Amount (spending cap).

These votes will then be used in order to vote for various PGF councils. Multiple councils can be voted on through the set represented above.

Dealing with ties

In the rare occurrence of a tie, the council with the lower spending_cap will win the tiebreak.

If the tie persists, the address that sorts lower alphabetically will be chosen. This is arbitrary, which is acceptable given the expected low frequency of such ties.
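
Assuming the proposal itself has passed, the election and tie-breaking described above might be sketched as follows (the types and vote aggregation are illustrative, not the actual implementation):

/// A candidate council as voted on: multisig address plus declared spending cap.
#[derive(Clone)]
struct CouncilCandidate {
    address: String,   // established multisig address (illustrative type)
    spending_cap: u64,
    votes: u64,        // total voting power cast for this (address, cap) pair
}

/// Pick the winning council: most votes, then lower spending cap, then
/// lexicographically smaller address.
fn elect_council(mut candidates: Vec<CouncilCandidate>) -> Option<CouncilCandidate> {
    candidates.sort_by(|a, b| {
        b.votes
            .cmp(&a.votes)
            .then(a.spending_cap.cmp(&b.spending_cap))
            .then(a.address.cmp(&b.address))
    });
    candidates.into_iter().next()
}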

Electing the council

Once the elected council has been decided upon, the established address corresponding to the multisig is added to the PGF internal address, and the spending_cap variable is stored. The amount_spent variable, which is kept in storage to track the spending of the active PGF council, is also reset from the previous council.

Example

The below example hopefully demonstrates the mechanism more clearly.

Note

The governance set consists of Alice, Bob, Charlie, Dave, and Elsa. Each member has 20% voting power.

The current PGF council consists of Dave and Elsa.

  • At epoch 42, Alice proposes the PgfProposal with the following struct:

#![allow(unused)]
fn main() {
struct PgfProposal{
  id: 2,
  content: vec![32, 54, 01, 24, 13, 37], // just the byte representation of the content (description) of the proposal
  author: 0xalice,
  r#type: PGFCouncil,
  votingStartEpoch: Epoch(45),
  votingEndEpoch: Epoch(54),
  graceEpoch: Epoch(57),
}
}
  • At epoch 47, after seeing this proposal go live, Bob and Charlie decide to put themselves forward as a PGF council. They construct a multisig with address 0xBobCharlieMultisig and broadcast it on Namada using the CLI. They set their spending_cap to 1_000_000. (They could have done this before the proposal went live as well).

  • At epoch 48, Elsa broadcasts a multisig PGF council address which includes herself and her sister. They set their spending_cap: 500_000, meaning they restrict themselves to spending 500,000 NAM.

  • At epoch 49, Alice submits the vote:


#![allow(unused)]
fn main() {
struct OnChainVote {
    id: 2,
    voter: 0xalice,
    yay: proposalVote,
}
}

Whereby the proposalVote includes


#![allow(unused)]
fn main() {
HashSet<(address: 0xBobCharlieMultisig, spending_cap: 1_000_000)>
}
  • At epoch 49, Bob submits an identical transaction.

  • At epoch 50, Dave votes Nay on the proposal.

  • At epoch 51, Elsa votes Yay but on the Councils (address: 0xElsaAndSisterMultisig, spending_cap: 1_000_000) AND (address: 0xBobCharlieMultisig, spending_cap: 1_000_000).

  • At epoch 54, the voting period ends and the votes are tallied. Since 80% > 33% of the voting power voted on this proposal (everyone except Charlie), the initial condition is passed and the proposal is active. Further, because most of the votes cast were Yay (75% > 50% threshold), a new council will be elected. The council that received the most votes, in this case 0xBobCharlieMultisig, is elected as the new PGF council. The council (address: 0xElsaAndSisterMultisig, spending_cap: 500_000) actually received 0 votes, because Elsa's vote included the wrong spending_cap.

  • At epoch 57, Bob and Charlie have the effective power to carry out Public Goods Funding transactions.

Mechanism

Once elected and instantiated, members of the PGF council will then be able to unilaterally propose and sign transactions for this purpose. The PGF council multisig will have an "allowance" to spend up to the lesser of the PGF internal address's balance and the spending_cap variable. Consensus on these transactions, as well as the motivation behind them, will be handled off-chain and should be recorded for the purposes of the "End of Term Summary".

PGF council transactions

The PGF council members will be responsible for collecting signatures offline. One member will then be responsible for submitting a transaction containing at least k of the n signatures.

The collecting member of the council will then be responsible for submitting this tx through the multisig. The multisig will only accept the tx if this threshold is met.

The PGF council should be able to make both retroactive and continuous public funding transactions. Retroactive public funding transactions should be straightforward and implement no additional logic to a normal transfer.

However, for continuous PGF (cPGF), the council should be able to submit a one-time transaction which indicates the recipient addresses that should be eligible for receiving cPGF.

The following data is attached to the PGF transaction and allows the council to decide which projects will be continuously funded. Each tuple represents a recipient address and the amount of NAM that the recipient will receive every epoch.


#![allow(unused)]
fn main() {
struct cPgfRecipients {
    recipients: HashSet<(Address, u64)>
}
}

The mechanism for these transfers will be implemented in finalize-block.rs, which will send the addresses their respective amounts at each end of epoch. Further, the following transactions should be added in order to ease the management of cPGF recipients:

  • add (recipient, amount) to cPgfRecipients (inserts the pair into the hashset above)
  • remove recipient from cPgfRecipients (removes the address and corresponding amount pair from the hashset above)

#![allow(unused)]
fn main() {
impl cPgfRecipients {
    fn add_recipient(&mut self, recipient: Address, amount: u64) { /* ... */ }
    fn remove_recipient(&mut self, recipient: &Address) { /* ... */ }
}
}

End of Term Summary

At the end of each term, the council is encouraged to submit a "summary" which describes the funding decisions the council has made and its reasoning for those decisions. This summary will act as an assessment of the council and will be the primary document on the basis of which governance should decide whether to re-elect the council.

Addresses

Governance adds 1 internal address:

PGF internal address

The internal address (whose behavior is governed by its VP) will hold the allowance: the up-to-10% NAM inflation, added on top of whatever was left unspent by the previous council. It is important to note that it is this internal address which holds the funds, rather than the PGF council multisig.

The council should be able to burn funds (up to their spending cap), but this hopefully should not require additional functionality beyond what currently exists.

Further, the VP should contain the parameter that dictates the number of epochs a candidacy is valid for once it has been broadcast and before it needs to be renewed.

VP checks

The VP must check that the council does not exceed its spending cap.

The VP must also check that any spending is only done by the correctly elected PGF council multisig address.
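
A schematic of these two checks (with illustrative stand-in types; the real VP evaluates storage changes rather than explicit arguments):

type Address = String; // illustrative stand-in for the established address type
type Amount = u64;     // illustrative stand-in for the token amount type

/// Accept a PGF spending transaction only if it is authorized by the currently
/// elected council multisig and does not push total spending past the cap.
fn pgf_vp_accepts(
    signer: &Address,
    active_council: &Address,
    transfer_amount: Amount,
    spent_amount: Amount,
    spending_cap: Amount,
) -> bool {
    signer == active_council && spent_amount + transfer_amount <= spending_cap
}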

Storage

Storage keys

Each cPGF recipient will be listed under this storage space, alongside the other PGF storage keys:

  • /PGFAddress/cPGF_recipients/Address = Amount
  • /PGFAddress/spending_cap = Amount
  • /PGFAddress/spent_amount = Amount
  • /PGFAddress/candidacy_length = u8
  • /PGFAddress/council_candidates/candidate_address/spending_cap = (epoch, url)
  • /PGFAddress/active_council/address = Address

Struct


#![allow(unused)]
fn main() {
struct Council {
    address: Address,
    spending_cap: Amount,
    spent_amount: Amount,
}
}

Further reading

Thanks for reading! You can find further information about the project below: