Namada

Welcome to the Namada specifications!

What is Namada?

Namada is a sovereign proof-of-stake blockchain, using Tendermint BFT consensus, that enables multi-asset private transfers for any native or non-native asset using a multi-asset shielded pool derived from the Sapling circuit. Namada features full IBC protocol support, a natively integrated Ethereum bridge, a modern proof-of-stake system with automatic reward compounding and cubic slashing, a stake-weighted governance signalling mechanism, and a proactive/retroactive public goods funding system. Users of shielded transfers are rewarded for their contributions to the privacy set in the form of native protocol tokens. A multi-asset shielded transfer wallet is provided in order to facilitate safe and private user interaction with the protocol.

You can learn more about Namada here.

What is Anoma?

The Anoma protocol is designed to facilitate the operation of networked fractal instances, which intercommunicate but can utilise varied state machines and security models. A fractal instance is an instance of the Anoma consensus and execution protocols operated by a set of networked validators. Anoma's fractal instance architecture is an attempt to build a platform which is architecturally homogeneous but with a heterogeneous security model. Thus, different fractal instances may specialise in different tasks and serve different communities.

How does Namada relate to Anoma?

The Namada instance will be the first such fractal instance, and it will be focused exclusively on the use-case of private asset transfers. Namada is a helpful stepping stone to finalise, test, and launch a protocol version that is simpler than the full Anoma protocol but still encapsulates a unified and useful set of features.

Raison d'être

Privacy should be default and inherent in the systems we use for transacting. Yet safe and user-friendly multi-asset privacy doesn't yet exist in the blockchain ecosystem. Up until now users have had the choice of either a sovereign chain that reissues assets (e.g. Zcash) or a privacy preserving solution built on an existing smart contract chain. Both have large trade-offs: in the former case, users don't have assets that they actually want to transact with, and in the latter case, the restrictions of existing platforms mean that users leak a ton of metadata and the protocols are expensive and clunky to use.

Namada can support any fungible or non-fungible asset on an IBC-compatible blockchain and fungible or non-fungible assets (such as ERC20 tokens) sent over a custom Ethereum bridge that reduces transfer costs and streamlines UX as much as possible. Once assets are on Namada, shielded transfers are cheap and all assets contribute to the same anonymity set.

Users on Namada can earn rewards, retain privacy of assets, and contribute to the overall privacy set.

Layout of this specification

The Namada specification documents are organised into four sub-sections:

Base ledger

The base ledger of Namada includes a consensus system, validity predicate-based execution system, and signalling-based governance mechanism. Namada's ledger also includes proof-of-stake, slashing, fees, and inflation funding for staking rewards, shielded pool incentives, and public goods -- these are specified in the economics section.

Consensus

Namada uses Tendermint Go through the tendermint-rs bindings in order to provide peer-to-peer transaction gossip, BFT consensus, and state machine replication for Namada's custom state machine.

Execution

The Namada ledger execution system is based on an initial version of the Anoma protocol. The system implements a generic computational substrate with a WASM-based transaction and validity predicate verification architecture, on top of which specific features of Namada such as IBC, proof-of-stake, and the MASP are built.

Validity predicates

Conceptually, a validity predicate (VP) is a function of the transaction's data and of the storage state prior and posterior to the transaction's execution, returning a boolean value. A transaction may modify any data in the accounts' dynamic storage sub-space. Upon transaction execution, the VPs associated with the accounts whose storage has been modified are invoked to verify the transaction. If any of them reject the transaction, all of its storage modifications are discarded.
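
As an illustrative sketch only (the real host interface exposes storage through host functions rather than in-memory maps, and the types here are placeholders), a VP can be thought of as a pure function of this shape:

use std::collections::{BTreeMap, BTreeSet};

type Key = String;
type Value = Vec<u8>;

/// Conceptual validity predicate: given the transaction data, the storage
/// state before and after execution, and the set of keys the transaction
/// changed, accept or reject the state changes affecting this account.
fn validity_predicate(
    tx_data: &[u8],
    storage_pre: &BTreeMap<Key, Value>,
    storage_post: &BTreeMap<Key, Value>,
    keys_changed: &BTreeSet<Key>,
    _verifiers: &BTreeSet<String>, // addresses of the other triggered accounts
) -> bool {
    // Example rule: reject any change to keys under "owned/" unless the
    // transaction data carries a valid authorization (elided here).
    keys_changed
        .iter()
        .all(|k| !k.starts_with("owned/") || is_authorized(tx_data, k, storage_pre, storage_post))
}

fn is_authorized(
    _tx_data: &[u8],
    _key: &Key,
    _pre: &BTreeMap<Key, Value>,
    _post: &BTreeMap<Key, Value>,
) -> bool {
    // Placeholder for e.g. a signature check against the account's public key.
    false
}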

Namada ledger

The Namada ledger is built on top of Tendermint's ABCI interface with a slight deviation from the ABCI convention: in Namada, the transactions are currently not being executed in ABCI's DeliverTx method, but rather in the EndBlock method. The reason for this is to prepare for future DKG and threshold decryption integration.

The ledger features an account-based system (in which UTXO-based systems such as the MASP can be internally implemented as specific accounts), where each account has a unique address and a dynamic key-value storage sub-space. Every account in Namada is associated with exactly one validity predicate. Fungible tokens, for example, are accounts whose rules are governed by their validity predicates. Many of the base ledger subsystems specified here are themselves just special Namada accounts (e.g. PoS, IBC and MASP).

Interactions with the Namada ledger are made possible via transactions (note the transaction whitelist below). In Namada, transactions are allowed to perform arbitrary modifications to the storage of any account, but the transaction will be accepted and state changes applied only if all the validity predicates that were triggered by the transaction accept it. That is, the accounts whose storage sub-spaces were touched by the transaction and/or any account that was explicitly elected by the transaction as a verifier will all have their validity predicates verify the transaction. A transaction can add any number of additional verifiers, but cannot remove the ones determined by the protocol. For example, a transparent fungible token transfer would typically trigger 3 validity predicates - those of the token, source and target addresses.

Supported validity predicates

While the execution model is fully programmable, for Namada only a selected subset of provided validity predicates and transactions will be permitted through pre-defined whitelists configured at network launch.

There are some native VPs for internal transparent addresses that are built into the ledger. All the other VPs are implemented as WASM programs. One can build a custom VP using the VP template or use one of the pre-defined VPs.

Supported validity predicates for Namada:

  • Native
    • Proof-of-stake (see spec)
    • IBC & IbcToken (see spec)
    • Governance (see spec)
    • SlashFund (see spec)
    • Protocol parameters
  • WASM
    • Fungible token (see spec)
    • MASP (see spec)
    • Implicit account VP (see spec)
    • k-of-n multisignature VP (see spec)

Namada Governance

Before describing Namada governance, it is useful to define the concepts of validators, delegators, and NAM.

Namada's economic model is based around a single native token, NAM, which is controlled by the protocol.

A Namada validator is an account with a public consensus key, which may participate in producing blocks and governance activities. A validator may not also be a delegator.

A Namada delegator is an account that delegates some tokens to a validator. A delegator may not also be a validator.

Namada introduces a governance mechanism to propose and apply protocol changes with or without the need for a hard fork. Anyone holding some NAM will be able to propose some changes to which delegators and validators will cast their yay or nay votes; in addition, it will also be possible to attach payloads to votes, in specific cases, to embed additional information. Governance on Namada supports both signaling and voting mechanisms. The difference between the two is that the former is needed when the changes require a hard fork. In cases where the chain is not able to produce blocks anymore, Namada relies on off-chain signaling to agree on a common move.

Further information regarding delegators, validators, and NAM is contained in the economics section.

On-chain protocol

Governance Address

Governance adds 2 internal addresses:

  • GovernanceAddress
  • SlashFundAddress

The first internal address contains all the proposals under its address space. The second internal address holds the funds of rejected proposals.

Governance storage

Each proposal will be stored in a sub-key under the internal proposal address. The storage keys involved are:

/$GovernanceAddress/proposal/$id/content: Vec<u8>
/$GovernanceAddress/proposal/$id/author: Address
/$GovernanceAddress/proposal/$id/type: ProposalType
/$GovernanceAddress/proposal/$id/start_epoch: Epoch
/$GovernanceAddress/proposal/$id/end_epoch: Epoch
/$GovernanceAddress/proposal/$id/grace_epoch: Epoch
/$GovernanceAddress/proposal/$id/proposal_code: Option<Vec<u8>>
/$GovernanceAddress/proposal/$id/funds: u64
/$GovernanceAddress/proposal/$epoch/$id: u64
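
For illustration only (not the actual implementation), these keys can be thought of as being assembled by simple string formatting:

/// Illustrative helper: proposal_key(gov, 7, "content") yields
/// "/<gov>/proposal/7/content", matching the layout listed above.
fn proposal_key(governance_address: &str, id: u64, field: &str) -> String {
    format!("/{}/proposal/{}/{}", governance_address, id, field)
}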

An epoch is a range of blocks or time that is defined by the base ledger and made available to the PoS system. This document assumes that epochs are identified by consecutive natural numbers. All the data relevant to PoS are associated with epochs.

  • The author address field will be used to credit the locked funds if the proposal is approved.
  • /$GovernanceAddress/proposal/$epoch/$id is used for easing the ledger governance execution. $epoch refers to the same value as the one specified in the grace_epoch field.
  • The content value should follow a standard format. We leverage a similar format to what is described in the BIP2 document:
{
    "title": "<text>",
    "authors": "<authors email addresses> ",
    "discussions-to": "<email address / link>",
    "created": "<date created on, in ISO 8601 (yyyy-mm-dd) format>",
    "license": "<abbreviation for approved license(s)>",
    "abstract": "<text>",
    "motivation": "<text>",
    "details": "<AIP number(s)> - optional field",
    "requires": "<AIP number(s)> - optional field",
}

The ProposalType implies different combinations of:

  • the optional wasm code attached to the proposal
  • which actors should be allowed to vote (delegators and validators or validators only)
  • the threshold to be used in the tally process
  • the optional payload (memo) attached to the vote

The correct logic to handle these different types will be hardcoded in the protocol. We'll also rely on type checking to strictly enforce the correctness of a proposal given its type. These two approaches combined will prevent a user from deviating from the intended logic for a certain proposal type (e.g. providing wasm code when it's not needed, or allowing only validators to vote when delegators should also be able to, etc.). More details on the specific types supported can be found in the relevant section of this document.

GovernanceAddress parameters and global storage keys are:

/$GovernanceAddress/counter: u64
/$GovernanceAddress/min_proposal_fund: u64
/$GovernanceAddress/max_proposal_code_size: u64
/$GovernanceAddress/min_proposal_period: u64
/$GovernanceAddress/max_proposal_content_size: u64
/$GovernanceAddress/min_proposal_grace_epochs: u64
/$GovernanceAddress/pending/$proposal_id: u64
  • counter is used to assign a unique, incremental ID to each proposal.
  • min_proposal_fund represents the minimum amount of locked tokens to submit a proposal.
  • max_proposal_code_size is the maximum allowed size (in bytes) of the proposal wasm code.
  • min_proposal_period sets the minimum voting time window (in epochs).
  • max_proposal_content_size is the maximum number of characters allowed in the proposal content.
  • min_proposal_grace_epochs is the minimum required time window (in epochs) between end_epoch and the epoch in which the proposal has to be executed.
  • /$GovernanceAddress/pending/$proposal_id this storage key is written only before the execution of the code defined in /$GovernanceAddress/proposal/$id/proposal_code and deleted afterwards. Since this storage key can be written only by the protocol itself (and by no other means), VPs can check for the presence of this storage key to be sure that a proposal_code has been executed by the protocol and not by a transaction.

The governance machinery also relies on a subkey stored under the NAM token address:

/$NAMAddress/balance/$GovernanceAddress: u64

This is to leverage the NAM VP to check that the funds were correctly locked. The governance subkey /$GovernanceAddress/proposal/$id/funds will be used after the tally step to know the exact amount of tokens to refund or to move to the SlashFundAddress.

Supported proposal types

At the moment, Namada supports 3 types of governance proposals:


pub enum ProposalType {
  /// Carries the optional proposal code path
  Custom(Option<String>),
  PGFCouncil,
  ETHBridge,
}

Custom represents a generic proposal with the following properties:

  • Can carry a wasm code to be executed in case the proposal passes
  • Allows both validators and delegators to vote
  • Requires 2/3 of the total voting power to succeed
  • Doesn't expect any memo attached to the votes

PGFCouncil is a specific proposal to elect the council for Public Goods Funding:

  • Doesn't carry any wasm code
  • Allows both validators and delegators to vote
  • Requires 1/3 of the total voting power to vote for the same council
  • Expects every vote to carry a memo in the form Set<(Set<Address>, BudgetCap)>

ETHBridge is aimed at regulating actions on the bridge, like the update of the Ethereum smart contracts or the withdrawal of all the funds from the Vault:

  • Doesn't carry any wasm code
  • Allows only validators to vote
  • Requires 2/3 of the validators' total voting power to succeed
  • Expects every vote to carry a memo in the form of a tuple (Action, Signature)
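
A hypothetical sketch of how the per-type vote memos could be modelled; the names VoteMemo, Action and BudgetCap are illustrative and are not taken from the implementation:

use std::collections::BTreeSet;

type Address = String; // illustrative stand-in for the Namada address type
type BudgetCap = u64;  // illustrative budget cap for a PGF council
type Signature = Vec<u8>;

/// Illustrative action on the Ethereum bridge voted on by validators.
enum Action {
    UpdateSmartContracts,
    WithdrawVaultFunds,
}

/// Illustrative memo attached to a yay vote, depending on the proposal type.
enum VoteMemo {
    /// Custom proposals carry no memo.
    None,
    /// PGF council election: the voted councils and their budget caps.
    PgfCouncil(BTreeSet<(BTreeSet<Address>, BudgetCap)>),
    /// Ethereum bridge action and the voter's signature over it.
    EthBridge(Action, Signature),
}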

GovernanceAddress VP

Just like PoS, governance has its own storage space. The task of the GovernanceAddress validity predicate is to check the integrity and correctness of new proposals. A proposal, to be correct, must satisfy the following:

  • Mandatory storage writes are:
    • counter
    • author
    • type
    • funds
    • voting_start epoch
    • voting_end epoch
    • grace_epoch
  • Lock some funds >= min_proposal_fund
  • Contains a unique ID
  • Contains a start, end and grace Epoch
  • The difference between StartEpoch and EndEpoch should be >= min_proposal_period.
  • Should contain a text describing the proposal with length < max_proposal_content_size characters.
  • Vote can be done only by a delegator or validator (further constraints can be applied depending on the proposal type)
  • If delegators are allowed to vote, then validators can vote only in the initial 2/3 of the whole proposal duration (end_epoch - start_epoch)
  • Due to the previous requirement, the following must be true, (EndEpoch - StartEpoch) % 3 == 0
  • If defined, proposalCode should be the wasm bytecode representation of the changes. This code is triggered in case the proposal has a positive outcome.
  • The difference between grace_epoch and end_epoch should be of at least min_proposal_grace_epochs
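
A minimal sketch of the epoch arithmetic implied by the conditions above (parameter and type names are placeholders, not the actual implementation):

type Epoch = u64;

struct GovParams {
    min_proposal_period: u64,
    min_proposal_grace_epochs: u64,
}

/// Check the epoch-related validity conditions for a new proposal.
fn valid_proposal_epochs(
    start_epoch: Epoch,
    end_epoch: Epoch,
    grace_epoch: Epoch,
    params: &GovParams,
) -> bool {
    end_epoch > start_epoch
        // the voting window must be long enough
        && end_epoch - start_epoch >= params.min_proposal_period
        // the voting window must split into thirds (validators vote in the first 2/3)
        && (end_epoch - start_epoch) % 3 == 0
        // enough time must be left between the end of voting and execution
        && grace_epoch > end_epoch
        && grace_epoch - end_epoch >= params.min_proposal_grace_epochs
}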

Once a proposal has been created, nobody can modify any of its fields. If proposal_code is Empty or None, the proposal upgrade will need to be done via hard fork, unless this is a specific type of proposal: in this case the protocol can directly apply the required changes.

It is possible to check the actual implementation here.

Examples of proposalCode could be:

  • storage writes to change some protocol parameter
  • storage writes to restore a slash
  • storage writes to change a non-native vp

This means that corresponding VPs need to handle these cases.

Proposal Transactions

The on-chain proposal transaction will have the following structure, where author address will be the refund address.


struct Proposal {
    id: u64,
    content: Vec<u8>,
    author: Address,
    r#type: ProposalType,
    votingStartEpoch: Epoch,
    votingEndEpoch: Epoch,
    graceEpoch: Epoch,
}

The optional proposal wasm code will be embedded inside the ProposalType enum variants to better perform validation through type checking.

Vote transaction

Vote transactions have the following structure:


struct OnChainVote {
    id: u64,
    voter: Address,
    yay: ProposalVote,
}

A vote transaction creates or modifies the following storage key:

/$GovernanceAddress/proposal/$id/vote/$delegation_address/$voter_address: ProposalVote

where ProposalVote is an enum representing a Yay or Nay vote: the yay variant also contains the specific memo (if any) required for that proposal.

The storage key will only be created if the transaction is signed either by a validator or a delegator. In case a vote misses a required memo or carries a memo with an invalid format, the vote will be discarded at validation time (VP) and it won't be written to storage.

If delegators are allowed to vote, validators will be able to vote only for 2/3 of the total voting period, while delegators can vote until the end of the voting period.

If a delegator votes differently than its validator, this will override the corresponding vote of this validator (e.g. if a delegator has a voting power of 200 and votes opposite to the validator holding these tokens, then 200 will be subtracted from the voting power of the involved validator).

As a small form of space/gas optimization, if a delegator votes in accordance with its validator, the vote will not actually be submitted to the chain. This logic is applied only if the following conditions are satisfied:

  • The transaction is not being forced
  • The vote is submitted in the last third of the voting period (the one exclusive to delegators). This second condition is necessary to prevent a validator from changing its vote after a delegator vote has been submitted, effectively stealing the delegator's vote.

Tally

At the beginning of each new epoch (and only then), in the finalize_block function, tallying will occur for all the proposals ending at this epoch (specified via the grace_epoch field of the proposal). The proposal has a positive outcome if the threshold specified by the ProposalType is reached. This means that enough yay votes must have been collected: the threshold is relative to the staked NAM total.

Tallying, when no memo is required, is computed with the following rules:

  1. Sum all the voting power of validators that voted yay
  2. For any validator that voted yay, subtract the voting power of any delegation that voted nay
  3. Add voting power for any delegation that voted yay (whose corresponding validator didn't vote yay)
  4. If the aforementioned sum divided by the total voting power is greater than or equal to the threshold set by ProposalType, the proposal outcome is positive, otherwise negative.

If votes carry a memo, the yay votes must instead be evaluated per memo. The protocol will implement the correct logic to make sense of these memos and compute the tally correctly:

  1. Sum all the voting power of validators that voted yay with a specific memo, effectively splitting the yay votes into different subgroups
  2. For any validator that voted yay, subtract the voting power of any delegation that voted nay or voted yay with a different memo
  3. Add voting power for any delegation that voted yay (whose corresponding validator voted nay or yay with a different memo)
  4. From the yay subgroups select the one that got the greatest amount of voting power
  5. If the aforementioned voting power divided by the total voting power is greater than or equal to the threshold set by ProposalType, the proposal outcome is positive, otherwise negative.

All the computation will be done on data collected at the epoch specified in the end_epoch field of the proposal.
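
A minimal sketch of the memo-aware tally described above, over simplified in-memory vote data (types and names are illustrative, not the actual implementation); for memo-less proposals every yay vote falls into a single group:

use std::collections::HashMap;

type Address = String;
type Memo = String;  // use a single fixed memo (e.g. "") for memo-less proposals
type Power = i128;   // signed so that intermediate subtractions cannot underflow

#[derive(PartialEq)]
enum Vote {
    Yay(Memo),
    Nay,
}

struct Ballots {
    /// Each voting validator's vote and its total voting power (including delegations).
    validators: HashMap<Address, (Vote, Power)>,
    /// Each delegation's validator, the delegator's own vote, and the delegated power.
    delegations: Vec<(Address, Vote, Power)>,
}

/// Largest yay voting power accumulated by any single memo group; the caller
/// compares it against the total voting power and the ProposalType threshold.
fn max_yay_power(ballots: &Ballots) -> Power {
    let mut per_memo: HashMap<Memo, Power> = HashMap::new();
    // 1. Sum the voting power of validators that voted yay, grouped by memo.
    for (vote, power) in ballots.validators.values() {
        if let Vote::Yay(memo) = vote {
            *per_memo.entry(memo.clone()).or_default() += power;
        }
    }
    for (validator, delegator_vote, power) in &ballots.delegations {
        let validator_vote = ballots.validators.get(validator).map(|(v, _)| v);
        // 2. If the validator voted yay but the delegator disagrees, subtract the
        //    delegated power from the validator's memo group.
        if let Some(Vote::Yay(v_memo)) = validator_vote {
            if delegator_vote != &Vote::Yay(v_memo.clone()) {
                *per_memo.entry(v_memo.clone()).or_default() -= power;
            } else {
                continue; // delegator agrees with its validator: already counted
            }
        }
        // 3. Count the delegator's own yay vote in its memo group.
        if let Vote::Yay(d_memo) = delegator_vote {
            *per_memo.entry(d_memo.clone()).or_default() += power;
        }
    }
    // 4. The winning memo group.
    per_memo.values().copied().max().unwrap_or(0)
}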

It is possible to check the actual implementation here.

Refund and Proposal Execution mechanism

Together with tallying, in the first block at the beginning of each epoch, in the finalize_block function, the protocol will manage the execution of accepted proposals and refunding. For each ended proposal with a positive outcome, it will refund the locked funds from GovernanceAddress to the proposal author address (specified in the proposal author field). For each proposal that has been rejected, instead, the locked funds will be moved to the SlashFundAddress. Moreover, if the proposal had a positive outcome and proposal_code is defined, these changes will be executed right away. To summarize the execution of governance in the finalize_block function:

If the proposal outcome is positive and current epoch is equal to the proposal grace_epoch, in the finalize_block function:

  • transfer the locked funds to the proposal author
  • execute any changes specified by proposal_code

In case the proposal was rejected, or in case of any error, in the finalize_block function:

  • transfer the locked funds to SlashFundAddress

The result is then signaled by creating and inserting a Tendermint Event (https://github.com/tendermint/tendermint/blob/ab0835463f1f89dcadf83f9492e98d85583b0e71/docs/spec/abci/abci.md#events).

SlashFundAddress

Funds locked in the SlashFundAddress should be spendable only by proposals.

SlashFundAddress storage

/$SlashFundAddress/?: Vec<u8>

The funds will be stored under:

/$NAMAddress/balance/$SlashFundAddress: u64

SlashFundAddress VP

The slash_fund validity predicate will approve a transfer only if the transfer has been made by the protocol (by checking the existence of the /$GovernanceAddress/pending/$proposal_id storage key).

It is possible to check the actual implementation here.

Off-chain protocol

Create proposal

A CLI command to create a signed JSON representation of the proposal. The JSON will have the following structure:

{
  content: Base64<Vec<u8>>,
  author: Address,
  votingStart: TimeStamp,
  votingEnd: TimeStamp,
  signature: Base64<Vec<u8>>
}

The signature is produced over the hash of the concatenation of content, author, votingStart and votingEnd. Proposal types are not supported off-chain.
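
For illustration, a possible digest computation is sketched below; note that the spec above does not fix the hash function or the field encodings, so SHA-256 over the raw byte concatenation is an assumption:

use sha2::{Digest, Sha256};

/// Illustrative digest over the concatenation of content, author, votingStart
/// and votingEnd; the signature would then be produced over this digest.
fn offline_proposal_digest(
    content: &[u8],
    author: &str,
    voting_start: &str, // timestamp string, e.g. RFC 3339
    voting_end: &str,
) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(content);
    hasher.update(author.as_bytes());
    hasher.update(voting_start.as_bytes());
    hasher.update(voting_end.as_bytes());
    let digest = hasher.finalize();
    let mut out = [0u8; 32];
    out.copy_from_slice(&digest);
    out
}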

Create vote

A CLI command to create a signed JSON representation of a vote. The JSON will have the following structure:

{
  proposalHash: Base64<Vec<u8>>,
  voter: Address,
  signature: Base64<Self.proposalHash>,
  vote: Enum(yay|nay)
}

The proposalHash is produced over the concatenation of: content, author, votingStart, votingEnd, voter and vote. Vote memos are not supported off-chain.

Tally

Same mechanism as the on-chain tally, but instead of reading the data from storage it requires a list of serialized JSON votes.

Interfaces

  • Ledger CLI
  • Wallet

Default account

The default account validity predicate authorises transactions on the basis of a cryptographic signature.

k-of-n multisignature

The k-of-n multisignature validity predicate authorises transactions on the basis of k out of n parties approving them.

Fungible token

The fungible token validity predicate authorises token balance changes on the basis of conservation-of-supply and approval-by-sender.

Multi-asset shielded pool

The multi-asset shielded pool (MASP) is an extension to the Sapling circuit which adds support for sending arbitrary assets.

See the following documents:

MASP integration spec

Overview

The overall aim of this integration is to have the ability to provide a multi-asset shielded pool following the MASP spec as an account on the current Namada blockchain implementation.

Shielded pool validity predicate (VP)

The shielded value pool can be an established Namada account with a validity predicate which handles the verification of shielded transactions. Similarly to Zcash, the asset balance of the shielded pool itself is transparent - that is, from the transparent perspective, the MASP is just an account holding assets. The shielded pool VP has the following functions:

  • Accepts only valid transactions involving assets moving in or out of the pool.
  • Accepts valid shielded-to-shielded transactions, which don't move assets from the perspective of transparent Namada.
  • Publishes the note commitment and nullifier reveal Merkle trees.

To make this possible, the host environment needs to provide verification primitives to VPs. One possibility is to provide a single high-level operation to verify transaction output descriptions and proofs, but another is to provide cryptographic functions in the host environment and implement the verifier as part of the VP.

In future, the shielded pool will be able to update the commitment and nullifier Merkle trees as it receives transactions. This could likely be achieved via the temporary storage mechanism added for IBC, with the trees finalized with each block.

The input to the VP is the following set of state changes:

  • updates to the shielded pool's asset balances
  • new encrypted notes
  • updated note and nullifier tree states (partial, because we only have the last block's anchor)

and the following data which is ancillary from the ledger's perspective:

  • spend descriptions, which destroy old notes:
struct SpendDescription {
  // Value commitment to amount of the asset in the note being spent
  cv: jubjub::ExtendedPoint,
  // Last block's commitment tree root
  anchor: bls12_381::Scalar,
  // Nullifier for the note being nullified
  nullifier: [u8; 32],
  // Re-randomized version of the spend authorization key
  rk: PublicKey,
  // Spend authorization signature
  spend_auth_sig: Signature,
  // Zero-knowledge proof of the note and proof-authorizing key
  zkproof: Proof<Bls12>,
}
  • output descriptions, which create new notes:
struct OutputDescription {
  // Value commitment to amount of the asset in the note being created
  cv: jubjub::ExtendedPoint,
  // Derived commitment tree location for the output note
  cmu: bls12_381::Scalar,
  // Note encryption public key
  epk: jubjub::ExtendedPoint,
  // Encrypted note ciphertext
  c_enc: [u8; ENC_CIPHERTEXT_SIZE],
  // Encrypted note key recovery ciphertext
  c_out: [u8; OUT_CIPHERTEXT_SIZE],
  // Zero-knowledge proof of the new encrypted note's location
  zkproof: Proof<Bls12>,
}

Given these inputs:

The VP must verify the proofs for all spend and output descriptions (bellman::groth16), as well as the signature for spend notes.

Encrypted notes from output descriptions must be published in the storage so that holders of the viewing key can view them; however, the VP does not concern itself with plaintext notes.

Nullifiers and commitments must be appended to their respective Merkle trees in the VP's storage as well, which is a transaction-level rather than a block-level state update.

In addition to the individual spend and output description verifications, the final transparent asset value change described in the transaction must equal the pool asset value change. As an additional sanity check, the pool's balance of any asset may not end up negative.
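
A minimal sketch of this final balance check, assuming the per-asset value changes have already been extracted from the transaction and from the pool's storage (types are illustrative placeholders):

use std::collections::BTreeMap;

type AssetType = [u8; 32];
type Delta = i128;

/// The transparent value change described by the transaction must equal the
/// pool's balance change for every asset, and no posterior pool balance may be
/// negative.
fn pool_balance_check(
    transparent_delta: &BTreeMap<AssetType, Delta>,
    pool_delta: &BTreeMap<AssetType, Delta>,
    pool_balance_post: &BTreeMap<AssetType, Delta>,
) -> bool {
    transparent_delta == pool_delta
        && pool_balance_post.values().all(|balance| *balance >= 0)
}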

NB: Shielded-to-shielded transactions in an asset do not, from the ledger's perspective, transact in that asset; therefore, the asset's own VP cannot run as described above because the shielded pool is asset-hiding.

Client capabilities

The client should be able to:

  • Make transactions with a shielded sender and/or receiver
  • Scan the blockchain to determine shielded assets in one's possession
  • Derive viewing keys from spending keys, and payment addresses from viewing keys

To make shielded transactions, the client has to be capable of creating and spending notes, and generating proofs which the pool VP verifies.

Unlike the VP, which must have the ability to do complex verifications, the transaction code for shielded transactions can be comparatively simple: it delivers the transparent value changes in or out of the pool, if any, and proof data computed offline by the client.

The client and wallet must be extended to support the shielded pool and the cryptographic operations needed to interact with it. From the perspective of the transparent Namada protocol, a shielded transaction is just a data write to the MASP storage, unless it moves value in or out of the pool. The client needs the capability to create notes, transactions, and proofs of transactions, but it has the advantage of simply being able to link against the MASP crates, unlike the VP.

Protocol

Note Format

The note structure encodes an asset's type, its quantity and its owner. More precisely, it has the following format:

struct Note {
  // Diversifier for recipient address
  d: jubjub::SubgroupPoint,
  // Diversified public transmission key for recipient address
  pk_d: jubjub::SubgroupPoint,
  // Asset value in the note
  value: u64,
  // Pedersen commitment trapdoor
  rseed: Rseed,
  // Asset identifier for this note
  asset_type: AssetType,
  // Arbitrary data chosen by note sender
  memo: [u8; 512],
}

For cryptographic details and further information, see Note Plaintexts and Memo Fields. Note that this structure is required only by the client; the VP only handles commitments to this data.

Diversifiers are selected by the client and used to diversify addresses and their associated keys. The value and asset_type fields identify the asset's value and type. Asset identifiers are derived from asset names, which are arbitrary strings (in this case, token/other asset VP addresses). The derivation must deterministically result in an identifier which hashes to a valid curve point.

Transaction Format

The transaction data structure comprises a list of transparent inputs and outputs as well as a list of shielded inputs and outputs. More precisely:

struct Transaction {
    // Transaction version
    version: u32,
    // Transparent inputs
    tx_in: Vec<TxIn>,
    // Transparent outputs
    tx_out: Vec<TxOut>,
    // The net value of Sapling spends minus outputs
    value_balance_sapling: Vec<(u64, AssetType)>,
    // A sequence of Spend descriptions
    spends_sapling: Vec<SpendDescription>,
    // A sequence of Output descriptions
    outputs_sapling: Vec<OutputDescription>,
    // A binding signature on the SIGHASH transaction hash,
    binding_sig_sapling: [u8; 64],
}

For the cryptographic constraints and further information, see Transaction Encoding and Consensus. Note that this structure slightly deviates from Sapling due to the fact that value_balance_sapling needs to be provided for each asset type.

Transparent Input Format

The input data structure describes how much of each asset is being deducted from certain accounts. More precisely, it is as follows:

struct TxIn {
    // Source address
    address: Address,
    // Asset identifier for this input
    token: AssetType,
    // Asset value in the input
    amount: u64,
    // A signature over the hash of the transaction
    sig: Signature,
    // Used to verify the owner's signature
    pk: PublicKey,
}

Note that the signature and public key are required to authenticate the deductions.

Transparent Output Format

The output data structure describes how much of each asset is being added to certain accounts. More precisely, it is as follows:

struct TxOut {
    // Destination address
    address: Address,
    // Asset identifier for this output
    token: AssetType,
    // Asset value in the output
    amount: u64,
}

Note that in contrast to Sapling's UTXO based approach, our transparent inputs/outputs are based on the account model used in the rest of Namada.

Shielded Transfer Specification

Transfer Format

Shielded transactions are implemented as an optional extension to transparent ledger transfers. The optional shielded field in combination with the source and target field determine whether the transfer is shielding, shielded, or unshielded. See the transfer format below:

/// A simple bilateral token transfer
#[derive(..., BorshSerialize, BorshDeserialize, ...)]
pub struct Transfer {
    /// Source address will spend the tokens
    pub source: Address,
    /// Target address will receive the tokens
    pub target: Address,
    /// Token's address
    pub token: Address,
    /// The amount of tokens
    pub amount: Amount,
    /// The unused storage location at which to place TxId
    pub key: Option<String>,
    /// Shielded transaction part
    pub shielded: Option<Transaction>,
}

Conditions

Below, the conditions necessary for a valid shielded or unshielded transfer are outlined:

  • A shielded component equal to None indicates a transparent Namada transaction
  • Otherwise the shielded component must have the form Some(x) where x has the transaction encoding specified in the Multi-Asset Shielded Pool Specs
  • Hence for a shielded transaction to be valid:
    • the Transfer must satisfy the usual conditions for Namada ledger transfers (i.e. sufficient funds, ...) as enforced by token and account validity predicates
    • the Transaction must satisfy the conditions specified in the Multi-Asset Shielded Pool Specification
    • the Transaction and Transfer together must additionally satisfy the below boundary conditions intended to ensure consistency between the MASP validity predicate ledger and Namada ledger
  • A key equal to None indicates an unpinned shielded transaction; one that can only be found by scanning and trial-decrypting the entire shielded pool
  • Otherwise the key must have the form Some(x) where x is a String such that there exists no prior accepted transaction with the same key

Boundary Conditions

Below, the conditions necessary to maintain consistency between the MASP validity predicate ledger and Namada ledger are outlined:

  • If the target address is the MASP validity predicate, then no transparent outputs are permitted in the shielded transaction
  • If the target address is not the MASP validity predicate, then:
    • there must be exactly one transparent output in the shielded transaction and:
      • its public key must be the hash of the target address bytes - this prevents replay attacks altering transfer destinations
        • the hash is specifically a RIPEMD-160 of a SHA-256 of the input bytes
      • its value must equal that of the containing transfer - this prevents replay attacks altering transfer amounts
      • its asset type must be derived from the token address raw bytes and the current epoch once Borsh serialized from the type (Address, Epoch):
        • the dependency on the address prevents replay attacks altering transfer asset types
        • the current epoch requirement prevents attackers from claiming extra rewards by forging the time when they began to receive rewards
        • the derivation must be done as specified in 0.3 Derivation of Asset Generator from Asset Identifier
  • If the source address is the MASP validity predicate, then:
    • no transparent inputs are permitted in the shielded transaction
    • the transparent transaction value pool's amount must equal the containing wrapper transaction's fee amount
    • the transparent transaction value pool's asset type must be derived from the containing wrapper transaction's fee token
      • the derivation must be done as specified in 0.3 Derivation of Asset Generator from Asset Identifier
  • If the source address is not the MASP validity predicate, then:
    • there must be exactly one transparent input in the shielded transaction and:
      • its value must equal that of amount in the containing transfer - this prevents stealing/losing funds from/to the pool
      • its asset type must be derived from the token address raw bytes and the current epoch once Borsh serialized from the type (Address, Epoch):
        • the address dependency prevents stealing/losing funds from/to the pool
        • the current epoch requirement ensures that withdrawers receive their full reward when leaving the shielded pool
        • the derivation must be done as specified in 0.3 Derivation of Asset Generator from Asset Identifier
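
For illustration, a sketch of this epoched asset type derivation is given below. It assumes the AssetType::new constructor from the masp_primitives crate (which hashes an arbitrary asset name to a valid asset generator) and the Borsh try_to_vec API; the Address and Epoch types are simplified stand-ins for the Namada types:

use borsh::BorshSerialize;
use masp_primitives::asset_type::AssetType;

// Illustrative stand-ins; the real Address and Epoch types live in the Namada crates.
#[derive(BorshSerialize)]
struct Address(String);
#[derive(BorshSerialize)]
struct Epoch(u64);

/// Derive the epoched MASP asset type for a token as described above:
/// Borsh-serialize (Address, Epoch) and hash the bytes to an asset generator.
fn epoched_asset_type(token: &Address, epoch: Epoch) -> AssetType {
    let name = (token, epoch)
        .try_to_vec() // borsh 0.x API; the exact call is an assumption
        .expect("Borsh serialization of (Address, Epoch) should not fail");
    AssetType::new(&name).expect("hashing to a valid asset generator should succeed")
}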

Remarks

Below are miscellaneous remarks on the capabilities and limitations of the current MASP implementation:

  • The gas fees for shielded transactions are charged to the signer just like it is done for transparent transactions
    • As a consequence, an amount exceeding the gas fees must be available in a transparent account in order to execute an unshielding transaction - this prevents denial of service attacks
  • Using the MASP sentinel transaction key for transaction signing indicates that gas should be drawn from the transaction's transparent value pool
    • In this case, the gas will be taken from the MASP transparent address if the shielded transaction is proven to be valid
  • With knowledge of its key, a pinned shielded transaction can be directly downloaded or proven non-existent without scanning the entire blockchain
    • It is recommended that the pinned transaction's key be derived from the hash of its payment address, something that both transaction parties would share
    • This key must not be reused, in order to avoid revealing that multiple transactions are going to the same entity

Multi-Asset Shielded Pool Specification Differences from Zcash Protocol Specification

The Multi-Asset Shielded Pool Specification referenced above is in turn an extension to the Zcash Protocol Specification. Below, the changes from the Zcash Protocol Specification assumed to have been integrated into the Multi-Asset Shielded Pool Specification are listed:

  • 3.2 Notes
  • 4.1.8 Commitment
    • NoteCommit and ValueCommit must be parameterized by asset type
  • 4.7.2 Sending Notes (Sapling)
    • Sender must also be able to select asset type
    • NoteCommit and hence cm must be parameterized by asset type
    • ValueCommit and hence cv must be parameterized by asset type
    • The note plaintext tuple must include asset type
  • 4.8.2 Dummy Notes (Sapling)
    • A random asset type must also be selected
    • NoteCommit and hence cm must be parameterized by asset type
    • ValueCommit and hence cv must be parameterized by asset type
  • 4.13 Balance and Binding Signature (Sapling)
    • The Sapling balance value is now defined as the net value of Spend and Convert transfers minus Output transfers.
    • The Sapling balance value is no longer a scalar but a vector of pairs comprising values and asset types
    • Addition, subtraction, and equality checks of Sapling balance values is now done component-wise
    • A Sapling balance value is defined to be non-negative iff each of its components is non-negative
    • ValueCommit and the value base must be parameterized by asset type
    • Proofs must be updated to reflect the presence of multiple value bases
  • 4.19.1 Encryption (Sapling and Orchard)
    • The note plaintext tuple must include asset type
  • 4.19.2 Decryption using an Incoming Viewing Key (Sapling and Orchard)
    • The note plaintext extracted from the decryption must include asset type
  • 4.19.3 Decryption using a Full Viewing Key (Sapling and Orchard)
    • The note plaintext extracted from the decryption must include asset type
  • 5.4.8.2 Windowed Pedersen commitments
    • NoteCommit must be parameterized by asset type
  • 5.4.8.3 Homomorphic Pedersen commitments (Sapling and Orchard)
    • HomomorphicPedersenCommit, ValueCommit, and value base must be parameterized by asset type
  • 5.5 Encodings of Note Plaintexts and Memo Fields
    • The note plaintext tuple must include asset type
    • The Sapling note plaintext encoding must use 32 bytes in between d and v to encode the asset type
    • Hence the total size of a note plaintext encoding should be 596 bytes
  • 5.6 Encodings of Addresses and Keys
    • Bech32m [BIP-0350] is used instead of Bech32 [ZIP-173] to further encode the raw encodings
  • 5.6.3.1 Sapling Payment Addresses
    • For payment addresses on the Testnet, the Human-Readable Part is "patest"
  • 7.1 Transaction Encoding and Consensus
    • valueBalanceSapling is no longer scalar. Hence it should be replaced by two components:
      • nValueBalanceSapling: a compactSize indicating number of asset types spanned by balance
      • a length nValueBalanceSapling sequence of 40 byte values where:
        • the first 32 bytes encode the asset type
        • the last 8 bytes are an int64 encoding asset value
    • In between vSpendsSapling and nOutputsSapling are two additional rows:
      • First row:
        • Bytes: Varies
        • Name: nConvertsMASP
        • Data Type: compactSize
        • Description: The number of Convert descriptions in vConvertsMASP
      • Second row:
        • Bytes: 64*nConvertsMASP
        • Name: vConvertsMASP
        • Data Type: ConvertDescription[nConvertsMASP]
        • Description: A sequence of Convert descriptions, encoded as described in the following section.
  • 7.4 Output Description Encoding and Consensus
    • The encCiphertext field must be 612 bytes in order to make 32 bytes room to encode the asset type

Additional Sections

In addition to the above components of shielded transactions inherited from Zcash, we have the following:

Convert Descriptions

Each transaction includes a sequence of zero or more Convert descriptions.

Let ValueCommit.Output be as defined in 4.1.8 Commitment. Let B[Sapling Merkle] be as defined in 5.3 Constants. Let ZKSpend be as defined in 4.1.13 Zero-Knowledge Proving System.

A convert description comprises (cv, rt, pi) where

  • cv: ValueCommit.Output is value commitment to the value of the conversion note
  • rt: B[Sapling Merkle] is an anchor for the current conversion tree or an archived conversion tree
  • pi: ZKConvert.Proof is a zk-SNARK proof with primary input (rt, cv) for the Convert statement defined at Burn and Mint conversion transactions in MASP.

Convert Description Encoding

Let pi_{ZKConvert} be the zk-SNARK proof of the corresponding Convert statement. pi_{ZKConvert} is encoded in the zkproof field of the Convert description.

An abstract Convert description, as described above, is encoded in a transaction as an instance of a ConvertDescription type:

  • First Entry
    • Bytes: 32
    • Name: cv
    • Data Type: byte[32]
    • Description: A value commitment to the value of the conversion note, LEBS2OSP_256(repr_J(cv)).
  • Second Entry
    • Bytes: 32
    • Name: anchor
    • Data Type: byte[32]
    • Description: A root of the current conversion tree or an archived conversion tree, LEBS2OSP_256(rt^Sapling).
  • Third Entry
    • Bytes: 192
    • Name: zkproof
    • Data Type: byte[192]
    • Description: An encoding of the zk-SNARK proof pi_{ZKConvert} (see 5.4.10.2 Groth16).

Required Changes to ZIP 32: Shielded Hierarchical Deterministic Wallets

Below, the changes from ZIP 32: Shielded Hierarchical Deterministic Wallets assumed to have been integrated into the Multi-Asset Shielded Pool Specification are listed:

Storage Interface Specification

Namada nodes provide interfaces that allow Namada clients to query for specific pinned transactions, transactions accepted into the shielded pool, and allowed conversions between various asset types. Below we describe the ABCI paths and the encodings of the responses to each type of query.

Shielded Transfer Query

In order to determine shielded balances belonging to particular keys or spend one's balance, it is necessary to download the transactions that transferred the assets to you. To this end, the nth transaction in the shielded pool can be obtained by getting the value at the storage path <MASP-address>/tx-<n>. Note that indexing is 0-based. This will return a quadruple of the type below:

(
    /// the epoch of the transaction's block
    Epoch,
    /// the height of the transaction's block
    BlockHeight,
    /// the index of the transaction within the block
    TxIndex,
    /// the actual bytes of the transfer
    Transfer
)

Transfer is defined as above and (Epoch, BlockHeight, TxIndex) = (u64, u64, u32).

Transaction Count Query

When scanning the shielded pool, it is sometimes useful to know when to stop scanning. This can be done by querying the storage path head-tx, which will return a u64 indicating the total number of transactions in the shielded pool.

Pinned Transfer Query

A transaction pinned to the key x in the shielded pool can be obtained indirectly by getting the value at the storage path <MASP address>/pin-<x>. This will return the index of the desired transaction within the shielded pool encoded as a u64. At this point, the above shielded transaction query can then be used to obtain the actual transaction bytes.
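
For illustration, the storage paths used by these queries can be assembled as plain strings (the client's RPC layer itself is elided here):

/// Path of the nth transaction in the shielded pool (0-based indexing).
fn shielded_tx_path(masp_address: &str, n: u64) -> String {
    format!("{}/tx-{}", masp_address, n)
}

/// Path holding the shielded pool index of a transaction pinned to key x.
fn pinned_tx_path(masp_address: &str, x: &str) -> String {
    format!("{}/pin-{}", masp_address, x)
}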

Conversion Query

In order for MASP clients to convert older asset types to their latest variants, they need to query nodes for currently valid conversions. This can be done by querying the ABCI path conv/<asset-type> where asset-type is a hexadecimal encoding of the asset identifier as defined in Multi-Asset Shielded Pool Specification. This will return a quadruple of the type below:

(
    /// the token address of this asset type
    Address,
    /// the epoch of this asset type
    Epoch,
    /// the amount to be treated as equivalent to zero
    Amount,
    /// the Merkle path to this conversion
    MerklePath<Node>
)

If no conversions are available, the amount will be exactly zero, otherwise the amount must contain negative units of the queried asset type.

Asset name schema

MASP notes carry balances that are some positive integer amount of an asset type. Per both the MASP specification and the implementation, the asset identifier is a 32-byte Blake2s hash of an arbitrary asset name string, although the full 32-byte space is not used because the identifier must itself hash to an elliptic curve point (currently guaranteed by incrementing a nonce until the hash is a curve point). The final curve point is the asset type proper, used in computations.

The following is a schema for the arbitrary asset name string intended to support various uses - currently fungible tokens and NFTs, but possibly others in future.

The asset name string is built up from a number of segments, joined by a separator. We use / as the separator.

Segments may be one of the following:

  • Controlling address segment: a Namada address which controls the asset. For example, this is the fungible token address for a fungible token. This segment must be present, and must be first; it should in theory be an error to transparently transact in assets of this type without invoking the controlling address's VP. This should be achieved automatically by all transparent changes involving storage keys under the controlling address.

  • Epoch segment: An integer greater than zero, representing an epoch associated with an asset type. Mainly for use by the incentive circuit. This segment must be second if present. (should it be required? could be 0 if the asset is unepoched) (should it be first so we can exactly reuse storage keys?) This must be less than or equal to the current epoch.

  • Address segment: An ancillary address somehow associated with the asset. This address probably should have its VP invoked, and is probably in the transparent balance storage key.

  • ID segment: A nonnegative (?) integer identifying something, i.e., a NFT id. (should probably not be a u64 exactly - for instance, I think ERC721 NFTs are u256)

  • Text segment: A piece of text, normatively but not necessarily short (50 characters or less), identifying something. For compatibility with non-numeric storage keys used in transparent assets generally; an example might be a ticker symbol for a specific sub-asset. The valid character set is the same as for storage keys.

For example, suppose there is a virtual stock certificate asset, incentivized (somehow), at transparent address addr123, which uses storage keys like addr123/[owner address]/[ticker symbol]/[id]. The asset name segments would be:

  • Controlling address: just addr123
  • Epoch: the epoch when the note was created
  • Owner address: an address segment
  • Ticker symbol: a text segment
  • ID: an ID segment

This could be serialized to, e.g., addr123/addr456/tSPY/i12345.
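
A sketch of how such an asset name string could be assembled from its segments; the segment types are illustrative, and the "t"/"i" prefixes are inferred from the example above rather than fixed by the schema:

/// Illustrative segments of an asset name string, joined with '/'.
enum Segment {
    ControllingAddress(String),
    Epoch(u64),
    Address(String),
    Id(u64),
    Text(String),
}

fn asset_name(segments: &[Segment]) -> String {
    segments
        .iter()
        .map(|segment| match segment {
            Segment::ControllingAddress(a) | Segment::Address(a) => a.clone(),
            Segment::Epoch(e) => e.to_string(),
            Segment::Id(i) => format!("i{}", i),
            Segment::Text(t) => format!("t{}", t),
        })
        .collect::<Vec<_>>()
        .join("/")
}

// asset_name(&[Segment::ControllingAddress("addr123".into()),
//             Segment::Address("addr456".into()),
//             Segment::Text("SPY".into()),
//             Segment::Id(12345)])
// yields "addr123/addr456/tSPY/i12345", matching the example above.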

Burn and Mint conversion transactions in MASP

Introduction

Ordinarily, a MASP transaction that does not shield or unshield assets must achieve a homomorphic net value balance of 0. Since every asset type has a pseudorandomly derived asset generator, it is not ordinarily feasible to achieve a net value balance of 0 for the transaction without each asset type independently having a net value balance of 0. Therefore, intentional burning and minting of assets typically requires a public "turnstile" where some collection of assets are unshielded, burned or minted in a public transaction, and then reshielded. Since this turnstile publicly reveals asset types and amounts, privacy is affected.

The goal is to design an extension to MASP that allows for burning and minting assets according to a predetermined, fixed, public ratio, but without explicitly publicly revealing asset types or amounts in individual transactions.

Approach

In the MASP, each Spend or Output circuit only verifies the integrity of spending or creation of a specific note, and does not verify the integrity of a transaction as a whole. To ensure that a transaction containing Spend and Output descriptions does not violate the invariants of the shielded pool (such as the total unspent balance of each asset in the pool) the value commitments are added homomorphically and this homomorphic sum is opened to reveal the transaction has a net value balance of 0. When assets are burned or minted in a MASP transaction, the homomorphic net value balance must be nonzero, and offset by shielding or unshielding a corresponding amount of each asset.

Instead of requiring the homomorphic sum of Spend and Output value commitments to sum to 0, burning and minting of assets can be enabled by allowing the homomorphic sum of Spend and Output value commitments to sum to either 0 or a multiple of an allowed conversion ratio. For example, if distinct assets A and B can be converted in a 1-1 ratio (meaning one unit of A can be burned to mint one unit of B) then the Spend and Output value commitments may sum to a nonzero value.

Allowed conversions

Let A_1, ..., A_n be distinct asset types. An allowed conversion is a list of tuples (A_i, v_i), where the v_i are signed 64-bit integers.

The asset generator of an allowed conversion is defined to be the weighted sum v_1 * vb_{A_1} + ... + v_n * vb_{A_n}, where vb_{A_i} is the asset generator of asset A_i.

Each allowed conversion is committed to a Jubjub point using a binding Bowe-Hopwood commitment of its asset generator (it is not necessary to be hiding). All allowed conversion commitments are stored in a public Merkle tree, similar to the Note commitment tree. Since the contents of this tree are entirely public, allowed conversions may be added, removed, or modified at any time.
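
As an illustrative worked example (the notation is ours, not the specification's): consider an allowed conversion that burns one unit of asset A and mints one unit of asset B.

% Allowed conversion \{(A,-1),(B,+1)\} with asset generator
vb_{\mathrm{conv}} = vb_B - vb_A
% Spending n units of A and outputting n units of B leaves the net value balance
n \cdot vb_A - n \cdot vb_B = -n \cdot vb_{\mathrm{conv}}
% which is offset by a Convert value commitment to n units of this conversion,
% [n]\, vb_{\mathrm{conv}}, restoring a homomorphic balance of 0.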

Convert circuit

In order for an unbalanced transaction containing burns and mints to get a net value balance of zero, one or more value commitments burning and minting assets must be added to the value balance. Similar to how the Spend and Output circuits check the validity of their respective value commitments, the Convert circuit checks that:

  1. There exists an allowed conversion commitment in the Merkle tree, and
  2. The imbalance in the value commitment is a multiple of that allowed conversion's asset generator

In particular, the Convert circuit takes as public input the tree anchor and the Convert value commitment, and as private input the allowed conversion, its commitment, the Merkle path to that commitment, the conversion value, and the value commitment randomness; the circuit checks:

  1. Merkle path validity: the private path is a valid Merkle path from the allowed conversion commitment to the public anchor.
  2. Allowed conversion commitment integrity: the commitment opens to the allowed conversion's asset generator.
  3. Value commitment integrity: the value commitment equals [8 * value] times the allowed conversion's asset generator plus [rcv] times the value commitment randomness base, where rcv is the value commitment randomness.

Note that 8 is the cofactor of the Jubjub curve.

Balance check

Previously, the transaction consisted of Spend and Output descriptions, and a value balance check that the value commitment opens to 0. Now, the transaction validity includes:

  1. Checking that each Convert description includes a valid and current anchor for the AllowedConversion Tree
  2. Checking that the homomorphic sum of the Spend, Convert, and Output value commitments opens to 0

Directionality

Directionality of allowed conversions must be enforced as well. That is, the value by which an allowed conversion is multiplied must be a non-negative 64-bit integer. If negative values (or, equivalently, unboundedly large values in the prime-order scalar field of the Jubjub curve) were allowed, then an allowed conversion could happen in the reverse direction, burning the assets intended to be minted and vice versa.

Cycles

It is also critical not to allow cycles. For example, if two allowed conversions together convert some amount of an asset back into a strictly larger amount of the same asset, then an unlimited amount of that asset may be minted from a nonzero starting amount by repeatedly applying the two conversions.

Alternative approaches

It may theoretically be possible to implement similar mechanisms with only the existing Spend and Output circuits. For example, a Merkle tree of many Notes could be created with the allowed conversion's asset generator and many different values, allowing anyone to Spend these public Notes, which will only balance if proper amounts of asset type 1 are Spent and asset type 2 are Output.

However, the Nullifier integrity check of the Spend circuit reveals the nullifier of each of these Notes. This removes the privacy of the conversion as the public nullifier is linkable to the allowed conversion. In addition, each Note has a fixed value, preventing arbitrary value conversions.

Conclusion

In principle, as long as the Merkle tree only contains allowed conversions, this should permit the allowed conversions while maintaining other invariants. Note that since the asset generators are not derived in the circuit, all sequences of values and asset types are allowed.

Convert Circuit

Convert Circuit Description

The high-level description of Convert can be found in Burn and mint.

The Convert circuit provides a mechanism by which burning and minting of assets can be enabled: Convert value commitments are added to the transaction, and the homomorphic sum of the Spend, Output and Convert value commitments must be zero.

The Convert value commitment is constructed from an AllowedConversion which was published earlier in the AllowedConversion Tree. The AllowedConversion defines the allowed conversion assets. The AllowedConversion Tree is a Merkle hash tree stored in the ledger.

AllowedConversion

An AllowedConversion is, in essence, a compound asset type which contains distinct asset types and the corresponding conversion ratios.

AllowedConversion is an array of tuples (asset_type, value), where:

  • asset_type is a bytestring representing the asset identifier of the note.
  • value is a signed 64-bit integer in the range {-2^63 .. 2^63 - 1}.

The commitment to an AllowedConversion is calculated by committing to its asset generator (the value-weighted sum of the asset generators of its asset types) with PedersenHashToPoint.

Note that PedersenHashToPoint is used the same as in NoteCommitment for now.

An AllowedConversion can be issued, removed and modified as a public conversion rule by consensus authorization, and is stored in the AllowedConversion Tree as a leaf node.

An AllowedConversion can be used by proving its existence in the AllowedConversion Tree (the latest root anchor must be used), and then generating a Convert value commitment to be used in the transaction.

Convert Value Commitment

A Convert value commitment is constructed from an AllowedConversion, a conversion value and a commitment trapdoor, where:

  • value is an unsigned integer representing the value of the conversion, in the range {0 .. 2^64 - 1}.

Choose an independent, uniformly random commitment trapdoor rcv.

Check that the AllowedConversion's asset generator is a valid ctEdwards curve point on the Jubjub curve (as defined in the original Sapling specification) not equal to the zero point. If it is equal to the zero point, the asset identifier is invalid.

Calculate the Convert value commitment as the value multiple of the asset generator, blinded by the trapdoor rcv times the value commitment randomness base.

Note that the commitment scheme is used the same as in NoteCommitment for now.

AllowedConversion Tree

The AllowedConversion Tree has the same structure as the Note Commitment Tree and is an independent tree stored in the ledger.

  • Merkle tree depth: 32 (for now)
  • leaf node: the AllowedConversion commitment

Convert Statement

The Convert circuit has 47358 constraints.

Let the Jubjub curve, its value commitment scheme and the Sapling Merkle tree depth be as defined in the original Sapling specification.

A valid instance of the Convert statement assures that, given a primary input consisting of the tree anchor and the Convert value commitment, the prover knows an auxiliary input consisting of the AllowedConversion, its commitment, the Merkle path to that commitment, the conversion value and the value commitment randomness, such that the following conditions hold:

  • AllowedConversion cm integrity: the commitment opens to the AllowedConversion's asset generator.

  • Merkle path validity: Either the conversion value is 0; or the path is a valid Merkle path, of the depth defined in the original Sapling specification, from the AllowedConversion commitment to the anchor.

  • Small order checks: the asset generator is not of small order.

  • Convert Value Commitment integrity: the value commitment is the value multiple of the asset generator, blinded by the value commitment randomness times the value commitment randomness base.

Return the Convert value commitment.

Notes:

  • Public and auxiliary inputs MUST be constrained to have the types specified. In particular, see the original Sapling specification for the required validity checks on compressed representations of Jubjub curve points. The ValueCommit.Output type also represents Jubjub curve points.
  • In the Merkle path validity check, each layer does not check that its input bit sequence is a canonical encoding of the integer from the previous layer.

Incentive Description

The incentive system provides a mechanism in which the old asset (input) is burned, the new asset (output) is minted in the same quantity, and the incentive asset (reward) is minted according to the conversion ratio.

Incentive AllowedConversion Tree

As described in the Convert circuit section, the AllowedConversion Tree is an independent Merkle tree in the ledger and contains all the Incentive AllowedConversions.

Incentive AllowedConversion Struct

In general, there are three items in the Incentive AllowedConversion struct (though not all are mandatory): input, output and reward. Each item has an asset type and a quantity (an i64, encoding the conversion ratio).

Note that the absolute values of the input and output quantities must be equal in the incentive system. The quantity of the input is negative and the quantity of the output is positive.

To guarantee that the input and output can be opened as the same asset type in future unshielding transactions, the input and output assets share the same prefix description (e.g. BTC_1, BTC_2 ... BTC_n). To prevent repeated shielding and unshielding and to encourage long-term contribution to the privacy pool, a postfix timestamp is used to distinguish the input and output assets. The timestamp depends on the update period and can be defined flexibly (e.g. date, epoch number). When a new timestamp occurs, the AllowedConversion will be updated to support conversion of all the "history assets" to the latest one.
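As a rough illustration, an incentive AllowedConversion can be thought of as a small list of (asset type, signed quantity) entries. The names and types in the sketch below are hypothetical and for illustration only, not the protocol's actual definitions.

#![allow(unused)]
fn main() {
/// Hypothetical asset identifier: an opaque 32-byte value.
#[derive(Debug)]
struct AssetType([u8; 32]);

/// Sketch of an incentive AllowedConversion: (asset type, signed quantity)
/// pairs where the input quantity is negative, the output quantity is
/// positive with the same absolute value, and the reward quantity encodes
/// the conversion ratio.
#[derive(Debug)]
struct AllowedConversion {
    assets: Vec<(AssetType, i64)>,
}

// Example: burn 1 unit of BTC_1, mint 1 unit of BTC_2 and 10 reward tokens.
let conversion = AllowedConversion {
    assets: vec![
        (AssetType([1u8; 32]), -1), // input: BTC_1
        (AssetType([2u8; 32]), 1),  // output: BTC_2
        (AssetType([3u8; 32]), 10), // reward: incentive token
    ],
};
println!("{:?}", conversion);
}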

Incentive AllowedConversion Operation

Incentive AllowedConversions are governed by the incentive system, which is in charge of issuing new incentive plans, updating (modifying) conversions to the latest timestamp, and removing disabled conversion permissions.

  • Issue
    • Issue a new incentive plan for new asset.
    • Issue a new AllowedConversion for the previously latest asset when a new timestamp occurs.
  • Update
    • For every new timestamp that occurs, update the existing AllowedConversion. Keep the input but update the output to the latest asset and modify the reward quantity according to the ratio.
  • Destroy
    • Delete the AllowedConversion from the tree.
  • Query Service
    • A service for querying the latest AllowedConversion, returning (anchor, path, AllowedConversion).

Workflow from User's Perspective

  • Shielding transaction
    • Query the latest timestamp for the target asset (non-latest timestamps will be rejected in tx execution)
    • Construct a target shielded note and shielding tx
    • Add the note to the shielded pool if the tx executes successfully (check the prefix and the latest timestamp).
  • Converting transaction
    • Construct spend notes from shielded notes
    • Construct convert notes (query the latest AllowedConversion)
    • Construct output notes
    • Construct convert tx
    • Get incentive output notes with the latest timestamp and rewards if the tx executes successfully
  • Unshielding transaction
    • Construct unshielding transaction
    • Get unshielded note if the tx executes successfully (check the prefix)

Namada Trusted Setup

This spec assumes that you have some previous knowledge about Trusted Setup Ceremonies. If not, you might want to check the following two articles: Setup Ceremonies - ZKProof and Parameter Generation - Zcash.

The Namada Trusted Setup (TS) consists of running phase 2 of the MPC, which is a circuit-specific step to construct the multi-asset shielded pool circuit. Our phase 2 takes as input the Powers of Tau (phase 1) run by Zcash, which can be found here. You can sign up for the Namada Trusted Setup here.

Contribution flow

Overview

  1. Contributor compiles or downloads the CLI binary and runs it.
  2. CLI generates a 24-word BIP39 mnemonic.
  3. Contributor can choose to participate in the incentivized program or not.
  4. CLI joins the queue and waits for its turn.
  5. CLI downloads the challenge from the nearest AWS S3 bucket.
  6. Contributor can choose to contribute on the same machine or another.
  7. Contributor can choose to give its own seed of randomness or not.
  8. CLI contributes.
  9. CLI uploads the response to the challenge and notifies the coordinator with its personal info.

Detailed Flow

NOTE: add CLI flag --offline for the contributors that run on an offline machine. The flag will skip all the steps where there is communication with the coordinator and go straight to the generation of parameters in step 14.

  1. Contributor downloads the Namada CLI source from GitHub, compiles it, runs it.
  2. CLI asks the Contributor a couple of questions: a) Do you want to participate in the incentivized trusted setup? - Yes. Asks for personal information: full name and email. - No. Contribution will be identified as Anonymous.
  3. CLI generates an ed25519 key pair that will serve to communicate with and sign requests to the HTTP REST API endpoints and to receive any potential rewards. The private key is derived via BIP39: the mnemonic is used as the seed for the ed25519 key pair, and a 24-word seed phrase is presented to the user to back up (a sketch of one possible derivation follows this list).
  4. CLI sends request to the HTTP REST API endpoint contributor/join_queue. Contributor is added to the queue of the ceremony.
  5. CLI periodically polls the HTTP REST API endpoint contributor/queue_status to get the current position in the queue. CLI also periodically sends a heartbeat request to the HTTP REST API endpoint contributor/heartbeat to tell the Coordinator that it is still connected. CLI shows the current position in the queue to the contributor.
  6. When Contributor is in position 0 in the queue, it leaves the queue. CLI can then acquire the lock of the next chunk by sending a request to the HTTP REST API endpoint contributor/lock_chunk.
  7. As soon as the file is locked on the Coordinator, the CLI asks for more info about the chunk through the endpoint download/chunk. This info is later needed to form a new contribution file and send it back to the Coordinator.
  8. CLI gets the actual blob challenge file by sending a request to the endpoint contributor/challenge.
  9. CLI saves challenge file namada_challenge_round_{round_number}.params in the root folder.
  10. CLI computes challenge hash.
  11. CLI creates contribution file namada_contribution_round_{round_number}_public_key_{public_key}.params in the root folder.
  12. Previous challenge hash is appended to the contribution file.
  13. Contributor decides whether to do the computation on the same machine or on a different machine. Do you want to use another machine to run your contribution? NOTE: be clear that if users choose to generate the parameters on an OFFLINE machine then they will have max. 15 min to do all the operations.
  • No. Participant will use the Online Machine to contribute. CLI runs contribute_masp() that executes the same functions as in the contribute() function from the masp-mpc crate. CLI asks the contributor whether they want to input a custom seed of randomness instead of using the combination of entropy and OS randomness. In both cases, they have to input something. CLI creates a contribution file signature ContributionFileSignature of the contribution.
  • Yes. Participant will use an Offline Machine to contribute. CLI displays a message with instructions about the challenge and contribution files. Participant can export the Contribution file namada_contribution_round_{round_number}_public_key_{public_key}.params to the Offline Machine and contribute from there. When the Contributor is done, they give the path to the contribution file. Before continuing, CLI checks if the required files are available at the path and if the transformation of the parameters is valid. NOTE: CLI will display a countdown of 10 min with an extension capability of 5 min.
  14. CLI generates a json file saved locally that contains the full name, email, the public key used for the contribution, contribution hash, hash of the contribution file, contribution file signature, plus a signature of the metadata. -> display the signature and message that needs to be posted somewhere over the Internet
  15. CLI uploads the chunk to the Coordinator by using the endpoint upload/chunk.
  16. When the contribution blob has been transferred successfully to the Coordinator, CLI notifies the Coordinator that the chunk was uploaded by sending a request to endpoint contributor/contribute_chunk.
  17. Coordinator verifies that the chunk is valid by executing the function verify_transform() from the crate masp-mpc. If the transformation is correct, it outputs the hash of the contribution.
  18. Coordinator calls the try_advance() function that tries to advance to the next round as soon as all contributions are verified. If it succeeds, it removes the next contributor from the queue and adds them as a contributor to the next round.
  19. Repeat.
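The key generation in step 3 could be sketched as follows; the bip39 and ed25519-dalek crates and this exact derivation are assumptions for illustration, not necessarily what the ceremony CLI implements:

#![allow(unused)]
fn main() {
use bip39::Mnemonic;
use ed25519_dalek::SigningKey;

// 24-word mnemonic presented to the user to back up
let mnemonic = Mnemonic::generate(24).expect("mnemonic generation");
println!("back up this phrase: {mnemonic}");

// Use the first 32 bytes of the BIP39 seed as the ed25519 secret key
let seed = mnemonic.to_seed("");
let secret: [u8; 32] = seed[..32].try_into().expect("32 bytes");
let signing_key = SigningKey::from_bytes(&secret);
let verifying_key = signing_key.verifying_key();
println!("public key bytes: {:?}", verifying_key.as_bytes());
}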

Subcomponents

Our implementation of the TS consists of the following subcomponents:

  1. A fork of the Aleo Trusted Setup where we re-used the Coordinator Library (CL) contained in the phase1-coordinator folder.
  2. A HTTP REST API that interfaces with the CL.
  3. A CLI implementation that communicates with the HTTP REST API endpoints.
  4. An integration of the masp-mpc crypto functions (initialize, contribute and verify) in the CL.

Let's go through each subcomponent and describe it.

1. Coordinator Library (CL)

Description

The CL handles the operational steps of the ceremony: adding a new contributor to the queue, authenticating a contributor, sending and receiving challenge files, removing inactive contributors, reassigning a challenge file to a new contributor after a contributor has dropped, verifying contributions, creating new files, and so on.

"Historical" context

The CL was originally implemented for the Powers of Tau (phase 1 of the MPC). In this implementation, there was an attempt to optimise the overall operational complexity of the ceremony. In short, to reduce the time it takes to contribute to the parameters, the idea was to split the parameters of a round into multiple chunks that can then be distributed to multiple participants in parallel. That way, the computation time would be reduced by some linear factor on a per-round basis. You can read more about it in this article.

CL in the Namada context

Splitting the parameters into multiple chunks is useful if contributing takes hours. In our case, the contribution time is on the order of seconds, or a few minutes in the worst case, so we don't need to split the parameters into chunks. However, since we forked from the Aleo Trusted Setup, we still have some references to "chunked" things like folder, variable or function names. In our implementation, this means that we have one contributor and one chunk per round. For example, the contribution file of a round i from a participant will always be located at transcript/round_{i}/chunk_0/contribution_1.unverified. To be able to re-use the CL without heavy refactoring, we decided to keep most of the Aleo code as it is and only change the parts that needed to be changed, more precisely the crypto functions (initialize, contribute and verify) and the coordinator config environment.rs.

2. HTTP REST API

Description

The HTTP REST API is a Rocket web server that interfaces with the CL. All requests need to be signed to be accepted by the endpoints. It is the core of the ceremony: the Coordinator is started here, together with utility functions like verify_contributions and update_coordinator.

Endpoints

  • /contributor/join_queue Add the incoming contributor to the queue of contributors.
  • /contributor/lock_chunk Lock a Chunk in the ceremony. This should be the first function called when attempting to contribute to a chunk. Once the chunk is locked, it is ready to be downloaded.
  • /contributor/challenge Get the challenge key on Amazon S3 from the Coordinator.
  • /upload/chunk Request the urls where to upload a Chunk contribution and the ContributionFileSignature.
  • /contributor/contribute_chunk Notify the Coordinator of a finished and uploaded Contribution. This will unlock the given Chunk.
  • /contributor/heartbeat Let the Coordinator know that the participant is still alive and participating (or waiting to participate) in the ceremony.
  • /update Update the Coordinator state. This endpoint is accessible only by the coordinator itself.
  • /stop Stop the Coordinator and shut the server down. This endpoint is accessible only by the coordinator itself.
  • /verify Verify all the pending contributions. This endpoint is accessible only by the coordinator itself.
  • /contributor/queue_status Get the queue status of the contributor.
  • /contributor/contribution_info Write ContributionInfo to disk.
  • /contribution_info Retrieve the contributions' info. This endpoint is accessible by anyone and does not require a signed request.
  • /healthcheck Retrieve healthcheck info. This endpoint is accessible by anyone and does not require a signed request.

Saved files

  • contributors/namada_contributor_info_round_{i}.json contributor info received from the participant. Same file as described below.
  • contributors.json list of contributors that can be exposed to a public API to be displayed on the website
[
   {
      "public_key":"very random public key",
      "is_another_machine":true,
      "is_own_seed_of_randomness":true,
      "ceremony_round":1,
      "contribution_hash":"some hash",
      "contribution_hash_signature":"some hash",
	// (optional) some timestamps that can be used to calculate and display the contribution time
      "timestamp":{
         "start_contribution":1,
         "end_contribution":7
      }
   },
   // ...
   {
      "public_key":"very random public key",
      "is_another_machine":true,
      "is_own_seed_of_randomness":true,
      "ceremony_round":42,
      "contribution_hash":"some hash",
      "contribution_hash_signature":"some hash",
      "timestamp":{
         "start_contribution":1,
         "end_contribution":7
      }
   }
]

3. CLI Implementation

Description

The CLI communicates with the HTTP REST API according to the overview of the contribution flow.

Saved files

  • namada_challenge_round_{round_number}.params challenge file downloaded from the Coordinator.
  • namada_contribution_round_{round_number}.params contribution file that needs to be uploaded to the Coordinator
  • namada_contributor_info_round_{round_number}.json contributor info that serves to identify participants.
{
   "full_name":"John Cage",
   "email":"john@cage.me",
   // ed25519 public key
   "public_key":"very random public key",
   // User participates in incentivized program or not
   "is_incentivized":true,
   // User can choose to contribute on another machine
   "is_another_machine":true,
   // User can choose the default method to generate randomness or his own.
   "is_own_seed_of_randomness":true,
   "ceremony_round":42,
   // hash of the contribution run by masp-mpc, contained in the transcript
   "contribution_hash":"some hash",
   // FIXME: is this necessary? so other user can check the contribution hash against the public key?
   "contribution_hash_signature":"signature of the contribution hash",
   // hash of the file saved on disk and sent to the coordinator
   "contribution_file_hash":"some hash",
   "contribution_file_signature":"signature of the contribution file",
   // Some timestamps to get performance metrics of the ceremony
   "timestamp":{
		// User starts the CLI
      "start_contribution":1,
      // User has joined the queue
      "joined_queue":2,
      // User has locked the challenge on the coordinator
      "challenge_locked":3,
      // User has completed the download of the challenge
      "challenge_downloaded":4,
      // User starts computation locally or downloads the file to another machine
      "start_computation":5,
      // User finishes computation locally or uploads the file from another machine
      "end_computation":6,
      // User attests that the file was uploaded correctly
      "end_contribution":7
   },
   "contributor_info_signature":"signature of the above fields and data"
}

4. Integration of the masp-mpc

Description

There are 4 crypto commands available in the CL under phase1-coordinator/src/commands/:

  1. aggregations.rs this was originally used to aggregate the chunks of the parameters. Since we don't have chunks, we don't need to aggregate anything. However, this logic was required and kept to transition between rounds. It doesn't affect any contribution file.
  2. computation.rs is used by a participant to contribute. The function contribute_masp() contains the logic from masp-mpc/src/bin/contribute.rs.
  3. initialization.rs is used to bootstrap the parameters on round 0 by giving as input the Zcash's Powers of Tau. The function initialize_masp() contains the logic from masp-mpc/src/bin/new.rs.
  4. verification.rs is used to verify the correct transformation of the parameters between contributions. The function verify_masp() contains the logic from masp-mpc/src/bin/verify_transform.rs.

Interoperability

Namada can interoperate permissionlessly with other chains through integration of the IBC protocol. Namada also includes a bespoke Ethereum bridge operated by the Namada validator set.

Ethereum bridge

The Namada - Ethereum bridge exists to mint wrapped ERC20 tokens on Namada which can later be redeemed on Ethereum. Furthermore, it allows the minting of wrapped tokens on Ethereum backed by escrowed assets on Namada.

The Namada Ethereum bridge system consists of:

  • An Ethereum full node run by each Namada validator, for including relevant Ethereum events into Namada.
  • A set of validity predicates on Namada which roughly implements ICS20 fungible token transfers.
  • A set of Ethereum smart contracts.
  • A relayer for submitting transactions to Ethereum

This basic bridge architecture should provide for almost-Namada consensus security for the bridge and free Ethereum state reads on Namada, plus bidirectional message passing with reasonably low gas costs on the Ethereum side.

Security

On Namada, validators run full nodes of Ethereum and their stake also accounts for the security of the bridge. If they carry out a forking attack on Namada to steal locked Ethereum tokens, their stake will be slashed on Namada. On the Ethereum side, we will add a limit to the amount of assets that can be locked, to limit the damage a forking attack on Namada can do. To make an attack more cumbersome we will also add a limit on how fast wrapped Ethereum assets can be redeemed from Namada. This will not add more security, but rather make the attack more inconvenient.

Ethereum Events Attestation

We want to store events from the smart contracts of our bridge onto Namada. We will include events that have been seen by at least one validator, but will not act on them until they have been seen by at least 2/3 of voting power.

There will be multiple types of events emitted. Validators should ignore improperly formatted events. Raw events from Ethereum are converted to a Rust enum type (EthereumEvent) by Namada validators before being included in vote extensions or stored on chain.


#![allow(unused)]
fn main() {
pub enum EthereumEvent {
    // we will have different variants here corresponding to different types
    // of raw events we receive from Ethereum
    TransfersToNamada(Vec<TransferToNamada>)
    // ...
}
}

Each event will be stored with a list of the validators that have ever seen it as well as the fraction of total voting power that has ever seen it. Once an event has been seen by 2/3 of voting power, it is locked into a seen state, and acted upon.

There is no adjustment across epoch boundaries - e.g. if an event is seen by 1/3 of voting power in epoch n, then seen by a different 1/3 of voting power in epoch m > n, the event will be considered seen in total. Validators may never vote more than once for a given event.
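A hedged sketch of this accumulation logic, using simplified, assumed types (the actual state lives under the storage keys described below):

#![allow(unused)]
fn main() {
/// Sketch of the per-event state tracked for an Ethereum event.
struct EthMsg {
    seen_by: Vec<String>,     // addresses of validators that have voted
    voting_power: (u64, u64), // fraction of total voting power (num, den)
    seen: bool,               // true once >= 2/3 of voting power has voted
}

fn record_vote(msg: &mut EthMsg, validator: String, power: (u64, u64)) {
    // Validators may never vote more than once for a given event
    if msg.seen_by.contains(&validator) {
        return;
    }
    msg.seen_by.push(validator);
    // Add fractions: a/b + c/d = (a*d + c*b) / (b*d); reduction omitted here
    let (a, b) = msg.voting_power;
    let (c, d) = power;
    msg.voting_power = (a * d + c * b, b * d);
    // Lock the event into the `seen` state once voting power reaches 2/3
    let (num, den) = msg.voting_power;
    msg.seen = 3 * num >= 2 * den;
}
}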

Minimum confirmations

There will be a protocol-specified minimum number of confirmations that events must reach on the Ethereum chain, before validators can vote to include them on Namada. This minimum number of confirmations will be changeable via governance.

TransferToNamada events may include a custom minimum number of confirmations, that must be at least the protocol-specified minimum number of confirmations.

Validators must not vote to include events that have not met the required number of confirmations. Voting on unconfirmed events is considered a slashable offence.

Storage

To make including new events easy, we take the approach of always overwriting the state with the new state rather than applying state diffs. The storage keys involved are:

# all values are Borsh-serialized
/eth_msgs/$msg_hash/body : EthereumEvent
/eth_msgs/$msg_hash/seen_by : Vec<Address>
/eth_msgs/$msg_hash/voting_power: (u64, u64)  # reduced fraction < 1 e.g. (2, 3)
/eth_msgs/$msg_hash/seen: bool

$msg_hash is the SHA256 digest of the Borsh serialization of the relevant EthereumEvent.
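For illustration, assuming the borsh and sha2 crates, the digest could be computed along these lines:

#![allow(unused)]
fn main() {
use borsh::BorshSerialize;
use sha2::{Digest, Sha256};

/// Compute $msg_hash as the SHA256 digest of the Borsh serialization
/// of an EthereumEvent (or any Borsh-serializable value).
fn eth_msg_hash<E: BorshSerialize>(event: &E) -> [u8; 32] {
    let bytes = event.try_to_vec().expect("Borsh serialization shouldn't fail");
    Sha256::digest(&bytes).into()
}
}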

Changes to this /eth_msgs storage subspace are only ever made by internal transactions crafted and applied by all nodes based on the aggregate of vote extensions for the last Tendermint round. That is, changes to /eth_msgs happen in block n+1 in a deterministic manner based on the vote extensions of the Tendermint round for block n.

The /eth_msgs storage subspace does not belong to any account and cannot be modified by transactions submitted from outside of the ledger via Tendermint. The storage will be guarded by a special validity predicate - EthSentinel - that is part of the verifier set by default for every transaction, but will be removed by the ledger code for the specific permitted transactions that are allowed to update /eth_msgs.

Including events into storage

For every Namada block proposal, the vote extension of a validator should include the events of the Ethereum blocks they have seen via their full node such that:

  1. The storage value /eth_msgs/$msg_hash/seen_by does not include their address.
  2. It's correctly formatted.
  3. It's reached the required number of confirmations on the Ethereum chain

Each event that a validator is voting to include must be individually signed by them. If the validator is not voting to include any events, they must still provide a signed voted extension indicating this.

The vote extension data field will be a Borsh-serialization of something like the following.


#![allow(unused)]
fn main() {
pub struct VoteExtension(Vec<SignedEthEvent>);

/// A struct used by validators to sign that they have seen a particular
/// ethereum event. These are included in vote extensions
#[derive(Debug, Clone, BorshSerialize, BorshDeserialize, BorshSchema)]
pub struct SignedEthEvent {
    /// The address of the signing validator
    signer: Address,
    /// The proportion of the total voting power held by the validator
    power: FractionalVotingPower,
    /// The event being signed and the block height at which
    /// it was seen. We include the height as part of enforcing
    /// that a block proposer submits vote extensions from
    /// **the previous round only**
    event: Signed<(EthereumEvent, BlockHeight)>,
}
}

These vote extensions will be given to the next block proposer who will aggregate those that it can verify and will inject a protocol transaction (the "vote extensions" transaction).


#![allow(unused)]
fn main() {
pub struct MultiSigned<T: BorshSerialize + BorshDeserialize> {
    /// Arbitrary data to be signed
    pub data: T,
    /// The signature of the data
    pub sigs: Vec<common::Signature>,
}

pub struct MultiSignedEthEvent {
    /// Address and voting power of the signing validators
    pub signers: Vec<(Address, FractionalVotingPower)>,
    /// Events as signed by validators
    pub event: MultiSigned<(EthereumEvent, BlockHeight)>,
}

pub enum ProtocolTxType {
    EthereumEvents(Vec<MultiSignedEthEvent>)
}
}

This vote extensions transaction will be signed by the block proposer. Validators will check this transaction and the validity of the new votes as part of ProcessProposal, this includes checking:

  • signatures
  • that votes are really from active validators
  • the calculation of backed voting power

It is also checked that each vote extension came from the previous round, requiring validators to sign over the Namada block height with their vote extension. Furthermore, the vote extensions included by the block proposer should have at least 2 / 3 of the total voting power of the previous round backing it. Otherwise the block proposer would not have passed the FinalizeBlock phase of the last round. These checks are to prevent censorship of events from validators by the block proposer.

In FinalizeBlock, we derive a second transaction (the "state update" transaction) from the vote extensions transaction that:

  • calculates the required changes to /eth_msgs storage and applies it
  • acts on any /eth_msgs/$msg_hash where seen is going from false to true (e.g. appropriately minting wrapped Ethereum assets)

This state update transaction will not be recorded on chain but will be deterministically derived from the vote extensions transaction, which is recorded on chain. All ledger nodes will derive and apply this transaction to their own local blockchain state, whenever they receive a block with a vote extensions transaction. This transaction cannot require a protocol signature as even non-validator full nodes of Namada will be expected to do this.

The value of /eth_msgs/$msg_hash/seen will also indicate if the event has been acted on on the Namada side. The appropriate transfers of tokens to the given user will be included on chain free of charge and requires no additional actions from the end user.

Namada Validity Predicates

There will be three internal accounts with associated native validity predicates:

  • #EthSentinel - whose validity predicate will verify the inclusion of events from Ethereum. This validity predicate will control the /eth_msgs storage subspace.
  • #EthBridge - the storage of which will contain ledgers of balances for wrapped Ethereum assets (ERC20 tokens) structured in a "multitoken" hierarchy
  • #EthBridgeEscrow which will hold in escrow wrapped Namada tokens which have been sent to Ethereum.

Transferring assets from Ethereum to Namada

Wrapped ERC20

The "transfer" transaction mints the appropriate amount to the corresponding multitoken balance key for the receiver, based on the specifics of a TransferToNamada Ethereum event.


#![allow(unused)]
fn main() {
pub struct EthAddress(pub [u8; 20]);

/// Represents Ethereum assets on the Ethereum blockchain
pub enum EthereumAsset {
    /// An ERC20 token and the address of its contract
    ERC20(EthAddress),
}

/// An event transferring some kind of value from Ethereum to Namada
pub struct TransferToNamada {
    /// Quantity of ether in the transfer
    pub amount: Amount,
    /// Address on Ethereum of the asset
    pub asset: EthereumAsset,
    /// The Namada address receiving wrapped assets on Namada
    pub receiver: Address,
}
}
Example

For 10 DAI i.e. ERC20(0x6b175474e89094c44da98b954eedeac495271d0f) to atest1v4ehgw36xue5xvf5xvuyzvpjx5un2v3k8qeyvd3cxdqns32p89rrxd6xx9zngvpegccnzs699rdnnt

#EthBridge
    /erc20
        /0x6b175474e89094c44da98b954eedeac495271d0f
            /balances
                /atest1v4ehgw36xue5xvf5xvuyzvpjx5un2v3k8qeyvd3cxdqns32p89rrxd6xx9zngvpegccnzs699rdnnt 
                += 10

Namada tokens

Any wrapped Namada tokens being redeemed from Ethereum must have an equivalent amount of the native token held in escrow by #EthBridgeEscrow. The protocol transaction should simply make a transfer from #EthBridgeEscrow to the receiver for the appropriate amount and asset.

Transferring from Namada to Ethereum

To redeem wrapped Ethereum assets, a user should make a transaction to burn their wrapped tokens, which the #EthBridge validity predicate will accept.

Once this burn is done, it is incumbent on the end user to request an appropriate "proof" of the transaction. This proof must be submitted to the appropriate Ethereum smart contract by the user to redeem their native Ethereum assets. This also means all Ethereum gas costs are the responsibility of the end user.

The proofs to be used will be custom bridge headers that are calculated deterministically from the block contents, including messages sent by Namada and possibly validator set updates. They will be designed for maximally efficient Ethereum decoding and verification.

For each block on Namada, validators must submit the corresponding bridge header signed with a special secp256k1 key as part of their vote extension. Validators must reject votes which do not contain correctly signed bridge headers. The finalized bridge header with aggregated signatures will appear in the next block as a protocol transaction. Aggregation of signatures is the responsibility of the next block proposer.

The bridge headers need only be produced when the proposed block contains requests to transfer value over the bridge to Ethereum. The exception is when validator sets change. Since the Ethereum smart contract should accept any bridge header signed by 2 / 3 of the staking validators, it needs up-to-date knowledge of:

  • The current validators' public keys
  • The current stake of each validator

This means that at the end of every Namada epoch, a special transaction must be sent to the Ethereum contract detailing the new public keys and stake of the new validator set. This message must also be signed by at least 2 / 3 of the current validators as a "transfer of power". It is to be included in validators' vote extensions as part of the bridge header. Signing an invalid validator transition set will be considered a slashable offense.

Due to asynchronicity concerns, this message should be submitted well in advance of the actual epoch change, perhaps even at the beginning of each new epoch. Bridge headers to Ethereum should include the current Namada epoch so that the smart contract knows how to verify the headers. In short, there is a pipelining mechanism in the smart contract.

Such a message is not prompted by any user transaction and thus will have to be carried out by a bridge relayer. Once the transfer of power message is on chain, any time afterwards a Namada bridge process may take it to craft the appropriate message to the Ethereum smart contracts.

The details on bridge relayers are below in the corresponding section.

Signing incorrect headers is considered a slashable offense. Anyone witnessing an incorrect header that is signed may submit a complaint (a type of transaction) to initiate slashing of the validator who made the signature.

Namada tokens

Mints of a wrapped Namada token on Ethereum (including NAM, Namada's native token) will be represented by a data type like:


#![allow(unused)]
fn main() {
struct MintWrappedNam {
    /// The Namada address owning the token
    owner: NamadaAddress,
    /// The address on Ethereum receiving the wrapped tokens
    receiver: EthereumAddress,
    /// The address of the token to be wrapped 
    token: NamadaAddress,
    /// The number of wrapped Namada tokens to mint on Ethereum
    amount: Amount,
}
}

If a user wishes to mint a wrapped Namada token on Ethereum, they must submit a transaction on Namada that:

  • stores MintWrappedNam on chain somewhere
  • sends the correct amount of Namada token to #EthBridgeEscrow

Just as in redeeming Ethereum assets above, it is incumbent on the end user to request an appropriate proof of the transaction. This proof must be submitted to the appropriate Ethereum smart contract by the user. The corresponding amount of wrapped NAM tokens will be transferred to the receiver on Ethereum by the smart contract.

Namada Bridge Relayers

Validator changes must be turned into a message that can be communicated to smart contracts on Ethereum. These smart contracts need this information to verify proofs of actions taken on Namada.

Since this is protocol level information, it is not user prompted and thus should not be the responsibility of any user to submit such a transaction. However, any user may choose to submit this transaction anyway.

This necessitates a Namada node whose job it is to submit these transactions on Ethereum at the conclusion of each Namada epoch. This node is called the Designated Relayer. In theory, since this message is publicly available on the blockchain, anyone can submit this transaction, but only the Designated Relayer will be directly compensated by Namada.

All Namada validators will have an option to serve as bridge relayer and the Namada ledger will include a process that does the relaying. Since all Namada validators are running Ethereum full nodes, they can monitor that the message was relayed correctly by the Designated Relayer.

During the FinalizeBlock call in the ledger, if the epoch changes, a flag should be set alerting the next block proposer that they are the Designated Relayer for this epoch. If their message gets accepted by the Ethereum state inclusion onto Namada, new NAM tokens will be minted to reward them. The reward amount shall be a protocol parameter that can be changed via governance. It should be high enough to cover necessary gas fees.

Ethereum Smart Contracts

The set of Ethereum contracts should perform the following functions:

  • Verify bridge header proofs from Namada so that Namada messages can be submitted to the contract.
  • Verify and maintain evolving validator sets with corresponding stake and public keys.
  • Emit log messages readable by Namada
  • Handle ICS20-style token transfer messages appropriately with escrow & unescrow on the Ethereum side
  • Allow for message batching

Furthermore, the Ethereum contracts will whitelist ETH and tokens that flow across the bridge as well as ensure limits on transfer volume per epoch.

An Ethereum smart contract should perform the following steps to verify a proof from Namada:

  1. Check the epoch included in the proof.
  2. Look up the validator set corresponding to said epoch.
  3. Verify that the signatures included amount to at least 2 / 3 of the total stake.
  4. Check the validity of each signature.

If all the above verifications succeed, the contract may affect the appropriate state change, emit logs, etc.

Starting the bridge

Before the bridge can start running, some storage may need to be initialized in Namada.

Resources which may be helpful:

IBC integration

IBC transaction

An IBC transaction tx_ibc.wasm is provided. We have to set an IBC message as the transaction data corresponding to the IBC operation to execute.

The transaction decodes the data into an IBC message and handles IBC-related data, e.g. it makes a new connection ID and writes a new connection end for MsgConnectionOpenTry. The operations are implemented in IbcActions. The transaction doesn't check the validity of the state changes; the IBC validity predicate is in charge of validity.

IBC validity predicate

The IBC validity predicate checks if an IBC-related transaction satisfies the IBC protocol. When an IBC-related transaction is executed, i.e. a transaction changes the state of a key that contains InternalAddress::Ibc, the IBC validity predicate (one of the native validity predicates) is executed. For example, if an IBC connection end is created in the transaction, the IBC validity predicate validates the creation. If the creation with MsgConnectionOpenTry is invalid, e.g. the counterparty connection end doesn't exist, the validity predicate makes the transaction fail.

Fungible Token Transfer

The transfer of fungible tokens over an IBC channel on separate chains is defined in ICS20.

In Namada, sending tokens is triggered by a transaction having MsgTransfer as transaction data. A packet including FungibleTokenPacketData is made from the message during the transaction execution.

A Namada chain receives tokens via a transaction having MsgRecvPacket, which carries the packet including FungibleTokenPacketData.

Sending and receiving tokens in a transaction are validated not only by the IBC validity predicate but also by the IBC token validity predicate. The IBC validity predicate validates whether sending and receiving the packet is proper. The IBC token validity predicate is also one of the native validity predicates and checks if the token transfer is valid. If the transfer is not valid, e.g. an unexpected amount is minted, the validity predicate makes the transaction fail.

A transaction escrowing/unescrowing a token changes the escrow account's balance of the token. The key is {token_addr}/balance/{escrow_addr}. A transaction burning a token changes the burn account's balance of the token. The key is {token_addr}/balance/{BURN_ADDR}. A transaction minting a token changes the mint account's balance of the token. The key is {token_addr}/balance/{MINT_ADDR}. {escrow_addr}, {BURN_ADDR}, and {MINT_ADDR} are addresses of InternalAddress. When these addresses are included in the changed keys after transaction execution, the IBC token validity predicate is executed.

IBC message

IBC messages are defined in ibc-rs. A message should be encoded with Protobuf (NOT with Borsh), as in the following code, to set it as the transaction data.


#![allow(unused)]
fn main() {
use ibc::tx_msg::Msg;

pub fn make_ibc_data(message: impl Msg) -> Vec<u8> {
    let msg = message.to_any();
    let mut tx_data = vec![];
    prost::Message::encode(&msg, &mut tx_data).expect("encoding IBC message shouldn't fail");
    tx_data
}
}
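For example, to initiate an ICS20 transfer, a client can build a MsgTransfer from ibc-rs and pass it to make_ibc_data to obtain the bytes to set as the transaction data of tx_ibc.wasm.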

Economics

Namada users pay transaction fees in NAM and other tokens (see fee system and governance), so demand for NAM can be expected to track demand for block space. On the supply side, the protocol mints NAM at a fixed maximum per-annum rate based on a fraction of the current supply (see inflation system), which is directed to three areas of protocol subsidy: proof-of-stake, shielded pool incentives, and public-goods funding. Inflation rates for these three areas are adjusted independently (the first two on PD controllers and the third based on funding decisions) and excess tokens are slowly burned.

Fee system

In order to be accepted by the Namada ledger, transactions must pay fees. Transaction fees serve two purposes: first, the efficient allocation of block space given permissionless transaction submission and varying demand, and second, incentive-compatibility to encourage block producers to add transactions to the blocks which they create and publish.

Namada transaction fees can be paid in any fungible token which is a member of a whitelist controlled by Namada governance. Governance also sets minimum fee rates (which can be periodically updated so that they are usually sufficient) which transactions must pay in order to be accepted (but they can always pay more to encourage the proposer to prioritise them). When using the shielded pool, transactions can also unshield tokens in order to pay the required fees.

The token whitelist consists of a list of (token, minimum gas price) pairs, where the first element is a token identifier and the second is the minimum price per unit gas which must be paid by a transaction paying fees using that asset. This whitelist can be updated with a standard governance proposal. All fees collected are paid directly to the block proposer (incentive-compatible, so that side payments are no more profitable).
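A minimal sketch of the resulting fee check, assuming a simple map from whitelisted token identifiers to their minimum gas prices (names and types here are illustrative only):

#![allow(unused)]
fn main() {
use std::collections::BTreeMap;

/// Returns true if the fee token is whitelisted and the offered gas price
/// meets the governance-set minimum for that token.
fn fee_is_acceptable(
    whitelist: &BTreeMap<String, u64>, // token identifier -> min price per unit gas
    fee_token: &str,
    offered_gas_price: u64,
) -> bool {
    match whitelist.get(fee_token) {
        Some(min_price) => offered_gas_price >= *min_price,
        None => false, // token not on the whitelist
    }
}
}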

Inflation system

The Namada protocol controls the Namada token NAM (the native staking token), which is programmatically minted to pay for algorithmically measurable public goods - proof-of-stake security and shielded pool usage - and out-of-band public goods. Proof-of-stake rewards are paid into the reward distribution mechanism in order to distribute them to validators and delegators. Shielded pool rewards are paid into the shielded pool reward mechanism, where users who kept tokens in the shielded pool can claim them asynchronously. Public goods funding is paid to the public goods distribution mechanism, which further splits funding between proactive and retroactive funding and into separate categories.

Proof-of-stake rewards

The security of the proof-of-stake voting power allocation mechanism used by Namada is dependent in part upon locking (bonding) tokens to validators, where these tokens can be slashed should the validators misbehave. Funds so locked are only able to be withdrawn after an unbonding period. In order to reward validators and delegators for locking their stake and participating in the consensus mechanism, Namada pays a variable amount of inflation to all delegators and validators. The amount of inflation paid is varied on a PD-controller in order to target a particular bonding ratio (fraction of the NAM token being locked in proof-of-stake). Namada targets a bonding ratio of 2/3, paying up to 10% inflation per annum to proof-of-stake rewards. See reward distribution mechanism for details.

Shielded pool rewards

Privacy provided by the MASP in practice depends on how many users use the shielded pool and what assets they use it with. To increase the likelihood of a sizeable privacy set, Namada pays a variable portion of inflation, up to 10% per annum, to shielded pool incentives, which are allocated on a per-asset basis by a PD-controller targeting specific amounts of each asset being locked in the shielded pool. See shielded pool incentives for details.

Public goods funding

Namada provides 10% per annum inflation for other non-algorithmically-measurable public goods. See public goods funding for details.

Detailed inflation calculation model

Inflation is calculated and paid per-epoch as follows.

First, we start with the following fixed (governance-alterable) parameters:

  • is the cap of proof-of-stake reward rate, in units of percent per annum (genesis default: 10%)
  • is the cap of shielded pool reward rate for each asset , in units of percent per annum
  • is the public goods funding reward rate, in units of percent per annum
  • is the target staking ratio (genesis default: 2/3)
  • is the target amount of asset locked in the shielded pool (separate value for each asset )
  • is the number of epochs per year (genesis default: 365)
  • is the nominal proportional gain of the proof-of-stake PD controller, as a fraction of the total input range
  • is the nominal derivative gain of the proof-of-stake PD controller, as a fraction of the total input range
  • is the nominal proportional gain of the shielded pool reward controller for asset , as a fraction of the total input range (separate value for each asset )
  • is the nominal derivative gain of the shielded pool reward controller for asset , as a fraction of the total input range (separate value for each asset )

Second, we take as input the following state values:

  • is the current supply of NAM
  • is the current amount of NAM locked in proof-of-stake
  • is the current proof-of-stake inflation amount, in units of tokens per epoch
  • is the proof-of-stake locked token ratio from the previous epoch
  • is the current amount of asset locked in the shielded pool (separate value for each asset )
  • is the current shielded pool inflation amount for asset , in units of tokens per epoch
  • is the shielded pool locked token ratio for asset from the previous epoch (separate value for each asset )

Public goods funding inflation can be calculated and paid immediately (in terms of total tokens per epoch):

These tokens are distributed to the public goods funding validity predicate.

To run the PD-controllers for proof-of-stake and shielded pool rewards, we first calculate some intermediate values:

  • Calculate the latest staking ratio as
  • Calculate the per-epoch cap on the proof-of-stake and shielded pool token inflation
    • (separate value for each )
  • Calculate PD-controller constants to be used for this epoch

Then, for proof-of-stake first, run the PD-controller:

  • Calculate the error
  • Calculate the error derivative
  • Calculate the control value
  • Calculate the new

These tokens are distributed to the proof-of-stake reward distribution validity predicate.

Similarly, for each asset for which shielded pool rewards are being paid:

  • Calculate the error
  • Calculate the error derivative
  • Calculate the control value
  • Calculate the new

These tokens are distributed to the shielded pool reward distribution validity predicate.

Finally, we store the latest inflation and locked token ratio values for the next epoch's controller round.
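The proof-of-stake controller round can be sketched as follows; the names and the use of floating point are illustrative only, and the gain scaling and clamping are assumptions based on the steps above rather than a normative definition:

#![allow(unused)]
fn main() {
struct PosController {
    max_reward_rate: f64, // cap on PoS reward rate, per annum (e.g. 0.10)
    target_ratio: f64,    // target staking ratio (e.g. 2.0 / 3.0)
    epochs_per_year: f64, // e.g. 365.0
    kp_nominal: f64,      // nominal proportional gain
    kd_nominal: f64,      // nominal derivative gain
}

impl PosController {
    /// One controller round: returns the new per-epoch PoS inflation.
    fn run(&self, supply: f64, locked: f64, last_inflation: f64, last_ratio: f64) -> f64 {
        // Latest staking ratio and per-epoch inflation cap
        let ratio = locked / supply;
        let epoch_cap = supply * self.max_reward_rate / self.epochs_per_year;
        // Gains for this epoch, scaled by the per-epoch cap
        let kp = self.kp_nominal * epoch_cap;
        let kd = self.kd_nominal * epoch_cap;
        // Error, error derivative and control value
        let error = self.target_ratio - ratio;
        let error_derivative = error - (self.target_ratio - last_ratio);
        let control = kp * error - kd * error_derivative;
        // New inflation, kept within [0, epoch_cap]
        (last_inflation + control).clamp(0.0, epoch_cap)
    }
}
}

The shielded pool controller for each asset follows the same pattern, with the locked amount and target for that asset in place of the staking ratio and its target.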

Proof-of-stake (PoS)

This section of the specification describes the proof-of-stake mechanism of Namada, which is largely modeled after Cosmos bonded proof-of-stake, but makes significant changes to bond storage representation, validator set change handling, reward distribution, and slashing, with the general aims of increased precision in reasoning about security, validator decentralisation, and avoiding unnecessary proof-of-stake-related transactions.

This section is split into three subcomponents: the bonding mechanism, reward distribution, and cubic slashing.

Context

Blockchain systems rely on economic security (directly or indirectly) to prevent abuse and to ensure actors behave according to the protocol. The aim is that economic incentives promote correct long-term operation of the system and economic punishments discourage diverging from correct protocol execution, either by mistake or with the intent of carrying out attacks. Many PoS blockchains rely on the 1/3 Byzantine rule, where they make the assumption that the adversary cannot control more than 1/3 of the total stake or 1/3 of the actors.

Goals of Rewards and Slashing: Liveness and Security

  • Security: Delegation and Slashing: we want to make sure validators are backed by enough funds to make misbehaviour very expensive. Security is achieved by punishing (slashing) validators if they misbehave. Slashing locked funds (stake) is intended to disincentivize diverging from correct execution of the protocol, which in this case is voting to finalize valid blocks.
  • Liveness: Paying Rewards. For continued operation of Namada we want to incentivize participating in consensus and delegation, which helps security.

Security

In blockchain systems we do not rely on altruistic behavior but rather economic security. We expect the validators to execute the protocol correctly. They get rewarded for doing so and punished otherwise. Each validator has some self-stake and some stake that is delegated to it by other token holders. The validator and delegators share the reward and risk of slashing impact with each other.

The total stake behind consensus should be taken into account when value is transferred via a transaction. For example, if we have 1 billion tokens, we aim for 300 million of these tokens to be backing validators. This means that users should not transfer more than 200 million of these tokens within a block.

Bonding mechanism

Epoched data

Epoched data is data associated with a specific epoch that is set in advance. The data relevant to the PoS system in the ledger's state are epoched. Each data can be uniquely identified. These are:

  • System parameters. Discrete values for each epoch in which the parameters have changed.
  • Validator sets. Discrete values for each epoch.
  • Total voting power. A sum of all validators' voting power, excluding jailed validators. A delta value for each epoch.
  • Validators' consensus key, state and total bonded tokens. Identified by the validator's address.
  • Bonds are created by self-bonding and delegations. They are identified by the pair of source address and the validator's address.

Changes to the epoched data do not take effect immediately. Instead, changes in epoch n are queued to take effect in the epoch n + pipeline_length in most cases and in epoch n + pipeline_length + unbonding_length for unbonding actions. Should the same validator's data or the same bonds (i.e. with the same identity) be updated more than once in the same epoch, the later update overrides the previously queued-up update. For bonds, the token amounts are added up. Once the epoch n has ended, the queued-up updates for epoch n + pipeline_length are final and the values become immutable.

Additionally, any account may submit evidence for a slashable misbehaviour.

Validator

A validator must have a public consensus key.

A validator may be in one of the following states:

  • inactive: A validator is not being considered for block creation and cannot receive any new delegations.
  • candidate: A validator is considered for block creation and can receive delegations.

For each validator (in any state), the system also tracks total bonded tokens as a sum of the tokens in their self-bonds and delegated bonds. The total bonded tokens determine their voting power by multiplication with the votes_per_token parameter. The voting power is used for validator selection for block creation and in governance-related activities.

Validator actions

  • become validator: Any account that is not a validator already and that doesn't have any delegations may request to become a validator. It is required to provide a public consensus key. For the action applied in epoch n, the validator's state will be set to candidate for epoch n + pipeline_length and the consensus key is set for epoch n + pipeline_length.
  • deactivate: Only a validator whose state at or before the pipeline_length offset is candidate may deactivate. For this action applied in epoch n, the validator's account is set to become inactive in the epoch n + pipeline_length.
  • reactivate: Only an inactive validator may reactivate. Similarly to become validator action, for this action applied in epoch n, the validator's state will be set to candidate for epoch n + pipeline_length.
  • self-bond: A validator may lock-up tokens into a bond only for its own validator's address.
  • unbond: Any self-bonded tokens may be partially or fully unbonded.
  • withdraw unbonds: Unbonded tokens may be withdrawn in or after the unbond's epoch.
  • change consensus key: Set the new consensus key. When applied in epoch n, the key is set for epoch n + pipeline_length.
  • change commission rate: Set the new commission rate. When applied in epoch n, the new value will be set for epoch n + pipeline_length. The commission rate change must be within the max_commission_rate_change limit set by the validator.

Validator sets

A candidate validator that is not jailed (see slashing) can be in one of the three sets:

  • consensus - consensus validator set, capacity limited by the max_validator_slots parameter
  • below_capacity - validators below consensus capacity, but above the threshold set by min_validator_stake parameter
  • below_threshold - validators with stake below min_validator_stake parameter

From all the candidate validators, in each epoch the ones with the most voting power, limited up to the max_validator_slots parameter, are selected for the consensus validator set. Whenever the stake of a validator changes, the validator sets must be updated at the appropriate offset matching the stake update.
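As a rough sketch of how candidates could be partitioned into these sets (the types and names below are assumptions, not the actual implementation):

#![allow(unused)]
fn main() {
/// Partition candidate validators (address, bonded stake) into the
/// consensus, below_capacity and below_threshold sets.
fn partition_validators(
    mut candidates: Vec<(String, u64)>,
    max_validator_slots: usize,
    min_validator_stake: u64,
) -> (Vec<String>, Vec<String>, Vec<String>) {
    // Highest voting power (stake) first
    candidates.sort_by(|a, b| b.1.cmp(&a.1));
    let (mut consensus, mut below_capacity, mut below_threshold) =
        (Vec::new(), Vec::new(), Vec::new());
    for (address, stake) in candidates {
        if stake < min_validator_stake {
            below_threshold.push(address);
        } else if consensus.len() < max_validator_slots {
            consensus.push(address);
        } else {
            below_capacity.push(address);
        }
    }
    (consensus, below_capacity, below_threshold)
}
}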

The min_validator_stake parameter is introduced because the protocol needs to iterate through the validator sets in order to copy the last known state into a new epoch when the epoch changes (to avoid offloading this cost to a transaction that is unlucky enough to be the first one to update the validator set(s) in some new epoch), and also to distribute rewards to consensus validators and to record unchanged validator products for validators in below_capacity, who do not receive rewards in the current epoch.

Delegator

A delegator may have any number of delegations to any number of validators. Delegations are stored in bonds.

Delegator actions

  • delegate: An account which is not a validator may delegate tokens to any number of validators. This will lock-up tokens into a bond.
  • undelegate: Any delegated tokens may be partially or fully unbonded.
  • withdraw unbonds: Unbonded tokens may be withdrawn in or after the unbond's epoch.

Bonds

A bond locks-up tokens from validators' self-bonding and delegators' delegations. For self-bonding, the source address is equal to the validator's address. Only validators can self-bond. For a bond created from a delegation, the bond's source is the delegator's account.

For each epoch, bonds are uniquely identified by the pair of source and validator's addresses. A bond created in epoch n is written into epoch n + pipeline_length. If there already is a bond in the epoch n + pipeline_length for this pair of source and validator's addresses, its tokens are incremented by the newly bonded amount.

Any bonds created in epoch n increment the bond's validator's total bonded tokens by the bond's token amount and update the voting power for epoch n + pipeline_length.

The tokens put into a bond are immediately deducted from the source account.

Unbond

An unbonding action (validator unbond or delegator undelegate) requested by the bond's source account in epoch n creates an "unbond" with its epoch set to n + pipeline_length + unbonding_length. We also store the epoch of the bond(s) from which the unbond is created in order to determine if the unbond should be slashed when a fault occurred within the range of the bond epoch (inclusive) and the unbond epoch (exclusive). The "bond" from which the tokens are being unbonded is decremented in-place (in whatever epoch it was created in).
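As a worked example using the default system parameters given later (pipeline_len = 2 and unbonding_len = 6), an unbond requested in epoch 10 is recorded with epoch 10 + 2 + 6 = 18, so its tokens may be withdrawn in or after epoch 18.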

Any unbonds created in epoch n decrement the bond's validator's total bonded tokens by the bond's token amount and update the voting power for epoch n + pipeline_length.

An "unbond" with epoch set to n may be withdrawn by the bond's source address in or any time after the epoch n. Once withdrawn, the unbond is deleted and the tokens are credited to the source account.

Note that unlike bonding and unbonding, where token changes are delayed to some future epochs (pipeline or unbonding offset), the token withdrawal applies immediately. This is because when the tokens are withdrawable, they are already "unlocked" from the PoS system and do not contribute to voting power.

Slashing

An important part of the security model of Namada is based on making attacking the system very expensive. To this end, the validator who has bonded stake will be slashed once an offense has been detected.

These are the types of offenses:

  • Equivocation in consensus
    • voting: meaning that a validator has submitted two votes that are conflicting
    • block production: a block producer has created two different blocks for the same height
  • Invalidity:
    • block production: a block producer has produced an invalid block
    • voting: validators have voted on an invalid block

Unavailability is not considered an offense, but a validator who hasn't voted will not receive rewards.

Once an offense has been reported:

  1. Kicking out
  2. Slashing
  • Individual: Once someone has reported an offense it is reviewed by validators and if confirmed the offender is slashed.
  • cubic slashing: escalated slashing

Instead of absolute values, validators' total bonded token amounts and bonds' and unbonds' token amounts are stored as deltas (i.e. the change of quantity from a previous epoch) to allow distinguishing changes for different epochs, which is essential for determining whether tokens should be slashed. Slashes for a fault that occurred in epoch n may only be applied before the beginning of epoch n + unbonding_length. For this reason, in epoch m we can sum all the deltas of total bonded token amounts and bonds and unbonds with the same source and validator for epochs equal to or less than m - unbonding_length into a single total bonded token amount, single bond and single unbond record. This keeps the total number of total bonded token amounts for a unique validator, and of bonds and unbonds for a unique pair of source and validator, bound to a maximum number (equal to unbonding_length).

To disincentivize validators' misbehaviour in the PoS system, a validator may be slashed for any fault it has committed. Evidence of misbehaviour may be submitted by any account for a fault that occurred in epoch n at any time before the beginning of epoch n + unbonding_length.

Valid evidence reduces the validator's total bonded token amount by the slash rate in and before the epoch in which the fault occurred. The validator's voting power must also be adjusted to the slashed total bonded token amount. Additionally, a slash is stored with the misbehaving validator's address and the relevant epoch in which the fault occurred. When an unbond is being withdrawn, we first look up whether any slash occurred within the range of epochs in which it was active and, if so, reduce its token amount by the slash rate. Note that bond and unbond amounts are not slashed until their tokens are withdrawn.

The invariant is that the sum of amounts that may be withdrawn from a misbehaving validator must always add up to the total bonded token amount.

Initialization

An initial validator set with self-bonded token amounts must be given on system initialization.

This set is used to initialize the genesis state with epoched data active immediately (from the first epoch).

System parameters

The default values that are relative to the epoch duration assume that an epoch lasts about 24 hours.

  • max_validator_slots: Maximum consensus validators, default 128
  • min_validator_stake: Minimum stake that allows a validator to enter the consensus or below_capacity sets, in number of native tokens. Because the inflation system targets a bonding ratio of 2/3, the minimum should be somewhere around total_supply * 2/3 / max_validator_slots, but it can and should be much lower to reduce the cost of entry, as long as it remains high enough to prevent validator account creation spam that could slow down the PoS system update on epoch change
  • pipeline_len: Pipeline length in number of epochs, default 2 (see https://github.com/cosmos/cosmos-sdk/blob/019444ae4328beaca32f2f8416ee5edbac2ef30b/docs/architecture/adr-039-epoched-staking.md#pipelining-the-epochs)
  • unbonding_len: Unbonding duration in number of epochs, default 6
  • votes_per_token: Used in validators' voting power calculation, default 100‱ (1 voting power unit per 1000 tokens)
  • duplicate_vote_slash_rate: Portion of validator's stake that should be slashed on a duplicate vote
  • light_client_attack_slash_rate: Portion of validator's stake that should be slashed on a light client attack

Storage

The system parameters are written into the storage to allow them to be changed. Additionally, each validator may record, under their sub-key, a new parameter value that they wish to change to, which would override the system parameters when more than 2/3 of voting power is in agreement on all the parameter values.

The validators' data are keyed by their addresses, conceptually:

type Validators = HashMap<Address, Validator>;

Epoched data are stored in a structure, conceptually looking like this:

struct Epoched<Data> {
  /// The epoch in which this data was last updated
  last_update: Epoch,
  /// How many epochs of historical data to keep; this is `0` in most cases,
  /// except for validator `total_deltas` and `total_unbonded`, for which
  /// historical data for up to `pipeline_length + unbonding_length - 1` epochs
  /// is needed to be able to apply any slashes that may occur.
  /// The value is not actually stored with the data; it's either a constant
  /// or resolved from the PoS parameters on which it may depend.
  past_epochs_to_store: u64,
  /// An ordered map in which the head is the data for the epoch
  /// `last_update - past_epochs_to_store`, followed by every consecutive
  /// epoch up to a required length. For the system parameters and all other
  /// epoched data,
  /// `LENGTH = past_epochs_to_store + pipeline_length + 1`,
  /// with the exception of unbonds, for which
  /// `LENGTH = past_epochs_to_store + pipeline_length + unbonding_length + 1`.
  data: Map<Epoch, Option<Data>>
}

Note that not all epochs will have data set, only the ones in which some changes occurred. The only exception to this are the consensus and below_capacity validator sets, which are copied from the latest state into the new epoch by the protocol on each new epoch. This is so that a transaction never has to update the whole validator set when it hasn't changed yet in the current epoch, which would require a copy of the last epoch's data, and that copy would additionally have to be verified by the PoS validity predicate.

To try to look-up a value for Epoched data with discrete values in each epoch (such as the consensus validator set) in the current epoch n:

  1. read the data field at epoch n:
    1. if there's a value at n return it
    2. else if n == last_update - past_epochs_to_store, return None
    3. else decrement n and repeat this sub-step from 1.

To look-up a value for Epoched data with delta values in the current epoch n:

  1. sum all the values that are not None in the last_update - past_epochs_to_store .. n epoch range bounded inclusively below and above
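Both look-ups can be sketched with a simplified in-memory stand-in for the Epoched type described above (illustrative only, not the actual storage API):

use std::collections::BTreeMap;

/// Simplified in-memory stand-in for the `Epoched` storage described above.
struct Epoched<Data> {
  last_update: u64,
  past_epochs_to_store: u64,
  data: BTreeMap<u64, Option<Data>>,
}

impl<Data: Clone> Epoched<Data> {
  /// Look-up for discrete values: walk back from epoch `n` until a value is
  /// found or the oldest stored epoch is reached.
  fn get_discrete(&self, n: u64) -> Option<Data> {
    let oldest = self.last_update - self.past_epochs_to_store;
    let mut epoch = n;
    loop {
      if let Some(Some(value)) = self.data.get(&epoch) {
        return Some(value.clone());
      }
      if epoch <= oldest {
        return None;
      }
      epoch -= 1;
    }
  }
}

impl Epoched<i128> {
  /// Look-up for delta values: sum all deltas from the oldest stored epoch
  /// up to and including epoch `n`.
  fn get_delta_sum(&self, n: u64) -> i128 {
    let oldest = self.last_update - self.past_epochs_to_store;
    (oldest..=n)
      .filter_map(|e| self.data.get(&e).cloned().flatten())
      .sum()
  }
}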

To update a value in Epoched data with discrete values in epoch n with value new for epoch m:

  1. let epochs_to_clear = min(n - last_update, LENGTH)
  2. if epochs_to_clear == 0:
    1. data[m] = new
  3. else:
    1. for i in last_update - past_epochs_to_store .. last_update - past_epochs_to_store + epochs_to_clear range bounded inclusively below and exclusively above, set data[i] = None
    2. set data[m] = new
    3. set last_update to the current epoch

To update a value in Epoched data with delta values in epoch n with value delta for epoch m:

  1. let epochs_to_sum = min(n - last_update, LENGTH)
  2. if epochs_to_sum == 0:
    1. set data[m] = data[m].map_or_else(delta, |last_delta| last_delta + delta) (add the delta to the previous value, if any, otherwise use the delta as the value)
  3. else:
    1. let sum be the sum of all delta values in the last_update - past_epochs_to_store .. last_update - past_epochs_to_store + epochs_to_sum range, bounded inclusively below and exclusively above, and set data[i] = None for every epoch i in this range
    2. set data[n - past_epochs_to_store] = data[n - past_epochs_to_store].map_or_else(sum, |last_delta| last_delta + sum) to add the sum to the last epoch that will be stored
    3. set data[m] = data[m].map_or_else(delta, |last_delta| last_delta + delta) to add the new delta
    4. set last_update to the current epoch

The invariants for updates in both cases are that m >= n (epoched data cannot be updated in an epoch lower than the current epoch) and m - n <= LENGTH - past_epochs_to_store (epoched data can only be updated at the future-most epoch set by the LENGTH - past_epochs_to_store of the data).
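A sketch of the delta-value update, reusing the hypothetical Epoched stand-in from the look-up sketch above (here LENGTH is passed in as a parameter):

impl Epoched<i128> {
  /// Update the delta value for epoch `m` while the current epoch is `n`.
  fn update_delta(&mut self, n: u64, m: u64, delta: i128, length: u64) {
    let epochs_to_sum = (n - self.last_update).min(length);
    if epochs_to_sum == 0 {
      // Same epoch as the last update: just add the delta in place.
      *self.data.entry(m).or_insert(Some(0)).get_or_insert(0) += delta;
    } else {
      let oldest = self.last_update - self.past_epochs_to_store;
      // Sum and clear the epochs that fall out of the stored window.
      let mut sum = 0;
      for e in oldest..oldest + epochs_to_sum {
        if let Some(Some(value)) = self.data.insert(e, None) {
          sum += value;
        }
      }
      // Carry the sum into the oldest epoch that remains stored.
      let carry = n - self.past_epochs_to_store;
      *self.data.entry(carry).or_insert(Some(0)).get_or_insert(0) += sum;
      // Add the new delta at the target epoch and bump `last_update`.
      *self.data.entry(m).or_insert(Some(0)).get_or_insert(0) += delta;
      self.last_update = n;
    }
  }
}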

We store the consensus validators and the below_capacity validators in two sets, ordered by their voting power. We don't have to store the below_threshold validators in a set, because we don't need to know their order.

Note that we still need to store the below_capacity set in the order of voting power, because when e.g. one of the consensus validators' voting power drops below that of the greatest below_capacity validator, we need to know which validator to swap into the consensus set. The protocol's new-epoch update simply disregards validators who are not in the consensus or below_capacity sets as below_threshold validators, so iteration over an unbounded set size is avoided. Instead, the size of the validator set that is considered for PoS rewards can be adjusted via the min_validator_stake parameter through governance.

Conceptually, this may look like this:

type VotingPower = u64;

/// Validator's address with its voting power.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct WeightedValidator {
  /// The `voting_power` field must be on top, because lexicographic ordering
  /// is based on the top-to-bottom declaration order and in the `ValidatorSet`
  /// the `WeightedValidator`s need to be sorted by their `voting_power`.
  voting_power: VotingPower,
  address: Address,
}

struct ValidatorSet {
  /// Consensus validator set with maximum size equal to `max_validator_slots`
  consensus: BTreeSet<WeightedValidator>,
  /// Other validators that are not in `consensus`, but have stake above `min_validator_stake`
  below_capacity: BTreeSet<WeightedValidator>,
}

type ValidatorSets = Epoched<ValidatorSet>;

/// The sum of all validators' voting power (including `below_threshold`)
type TotalVotingPower = Epoched<VotingPower>;

When any validator's voting power changes, we attempt to perform the following update on the ValidatorSet:

  1. let validator be the validator's address, power_before and power_after be the voting power before and after the change, respectively
  2. find if the power_before and power_after are above the min_validator_stake threshold
    1. if they're both below the threshold, nothing else needs to be done
  3. let power_delta = power_after - power_before
  4. let min_consensus = consensus.first() (consensus validator with lowest voting power)
  5. let max_below_capacity = below_capacity.last() (below_capacity validator with greatest voting power)
  6. find whether the validator was in consensus set, let was_in_consensus = power_before >= max_below_capacity.voting_power
  7. find whether the validator was in below capacity set, let was_below_capacity = power_before > min_validator_stake
    1. if was_in_consensus:
      1. if power_after >= max_below_capacity.voting_power, update the validator in consensus set with voting_power = power_after
      2. else if power_after < min_validator_stake, remove the validator from consensus, insert the max_below_capacity.address validator into consensus and remove max_below_capacity.address from below_capacity
      3. else, remove the validator from consensus, insert it into below_capacity and remove max_below_capacity.address from below_capacity and insert it into consensus
    2. else if was_below_capacity:
      1. if power_after < min_validator_stake, remove the validator from below_capacity
      2. else if power_after <= min_consensus.voting_power, update the validator in below_capacity set with voting_power = power_after
      3. else, remove the validator from below_capacity, insert it into consensus and remove min_consensus.address from consensus and insert it into below_capacity
    3. else (if validator was below minimum stake):
      1. if power_after > min_consensus.voting_power, remove the min_consensus.address from consensus, insert the min_consensus.address into below_capacity and insert the validator in consensus set with voting_power = power_after
      2. else if power_after >= min_validator_stake, insert the validator into below_capacity set with voting_power = power_after
      3. else, do nothing
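As a condensed sketch of the above, using self-contained stand-ins for the conceptual types and treating min_validator_stake directly as a voting-power threshold:

use std::collections::BTreeSet;

type Address = String; // stand-in for the protocol address type
type VotingPower = u64;

#[derive(Clone, PartialEq, Eq, PartialOrd, Ord)]
struct WeightedValidator {
  voting_power: VotingPower,
  address: Address,
}

struct ValidatorSet {
  consensus: BTreeSet<WeightedValidator>,
  below_capacity: BTreeSet<WeightedValidator>,
}

fn update_validator_set(
  set: &mut ValidatorSet,
  addr: &Address,
  power_before: VotingPower,
  power_after: VotingPower,
  min_stake: VotingPower,
) {
  // Step 2: both below the threshold, nothing to do.
  if power_before < min_stake && power_after < min_stake {
    return;
  }
  let old = WeightedValidator { voting_power: power_before, address: addr.clone() };
  let new = WeightedValidator { voting_power: power_after, address: addr.clone() };
  // Steps 4-5: boundary validators of the two sets.
  let min_consensus = set.consensus.iter().next().cloned();
  let max_below_cap = set.below_capacity.iter().next_back().cloned();

  if set.consensus.contains(&old) {
    // The validator was in the consensus set.
    set.consensus.remove(&old);
    match max_below_cap {
      Some(ref max) if power_after < max.voting_power => {
        // No longer outweighs the greatest below_capacity validator: swap it in.
        set.below_capacity.remove(max);
        set.consensus.insert(max.clone());
        if power_after >= min_stake {
          set.below_capacity.insert(new);
        }
        // else: dropped below the threshold and is simply removed
      }
      _ => {
        // Stays in consensus with the updated voting power.
        set.consensus.insert(new);
      }
    }
  } else if set.below_capacity.contains(&old) {
    // The validator was in the below_capacity set.
    set.below_capacity.remove(&old);
    if power_after < min_stake {
      // Dropped below the threshold: simply removed.
    } else if min_consensus.as_ref().map_or(true, |min| power_after <= min.voting_power) {
      set.below_capacity.insert(new);
    } else {
      // Swap with the weakest consensus validator.
      let min = min_consensus.unwrap();
      set.consensus.remove(&min);
      set.below_capacity.insert(min);
      set.consensus.insert(new);
    }
  } else {
    // The validator was below the threshold.
    if let Some(min) = min_consensus.filter(|min| power_after > min.voting_power) {
      // Now outweighs the weakest consensus validator: swap in.
      set.consensus.remove(&min);
      set.below_capacity.insert(min);
      set.consensus.insert(new);
    } else if power_after >= min_stake {
      set.below_capacity.insert(new);
    }
    // else: still below the threshold, nothing to record
  }
}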

Additionally, for rewards distribution:

  • When a validator moves from below_threshold set to either below_capacity or consensus set, the transaction must also fill in the validator's reward products from its last known value, if any, in all epochs starting from their last_known_product_epoch (exclusive) up to the current_epoch + pipeline_len - 1 (inclusive) in order to make their look-up cost constant (assuming that validator's stake can only be increased at pipeline_len offset).
  • And on the opposite side, when the stake of a validator from consensus or below_capacity drops below min_validator_stake, we record their last_known_product_epoch, so that it can be used if and when the validator's stake rises above min_validator_stake again.

Within each validator's address space, we store the public consensus key, state, total bonded token amount, total unbonded token amount (needed for applying slashes) and the voting power calculated from the total bonded token amount (even though the voting power is stored in the ValidatorSet, we also need voting_power here because we cannot look it up in the ValidatorSet without iterating the whole set):

struct Validator {
  consensus_key: Epoched<PublicKey>,
  state: Epoched<ValidatorState>,
  total_deltas: Epoched<token::Amount>,
  total_unbonded: Epoched<token::Amount>,
  voting_power: Epoched<VotingPower>,
}

enum ValidatorState {
  Inactive,
  Candidate,
}

The bonds and unbonds are keyed by their identifier:

type Bonds = HashMap<BondId, Epoched<Bond>>;
type Unbonds = HashMap<BondId, Epoched<Unbond>>;

struct BondId {
  validator: Address,
  /// The delegator address for delegations, or the same as the `validator`
  /// address for self-bonds.
  source: Address,
}

struct Bond {
  /// A key is the epoch at which the bond was set. This is used in unbonding,
  /// where it's needed for the slash epoch range check.
  deltas: HashMap<Epoch, token::Amount>,
}

struct Unbond {
  /// A key is a pair of the epoch of the bond from which the unbond was
  /// created and the epoch of unbonding. This is needed for the slash epoch
  /// range check.
  deltas: HashMap<(Epoch, Epoch), token::Amount>
}

For slashes, we store the epoch and block height at which the fault occurred, the slash rate and the slash type:

struct Slash {
  epoch: Epoch,
  block_height: u64,
  /// slash rate in ‱ (per ten thousand)
  rate: u8,
  r#type: SlashType,
}

Cubic slashing

Namada implements cubic slashing, meaning that the amount of a slash is proportional to the cube of the voting power committing infractions within a particular interval. This is designed to make it riskier to operate larger or similarly configured validators, and thus encourage network resilience.

When a slash is detected:

  1. Using the height of the infraction, calculate the epoch just after which stake bonded at the time of infraction could have been fully unbonded. Enqueue the slash for processing at the end of that epoch (so that it will be processed before unbonding could have completed, and hopefully long enough for any other misbehaviour from around the same height as this misbehaviour to also be detected).
  2. Jail the validator in question (this will apply at the end of the current epoch). While the validator is jailed, it should be removed from the validator set (also being effective from the end of the current epoch). Note that this is the only instance in our proof-of-stake model when the validator set is updated without waiting for the pipeline offset.
  3. Prevent the delegators to this validator from altering their delegations in any way until the enqueued slash is processed.

At the end of each epoch, in order to process any slashes scheduled for processing at the end of that epoch:

  1. Collect all slashes for infractions committed within a range of block heights corresponding to (-1, +1) epochs around the infraction in question (this window may need to be a protocol parameter).
  2. Calculate the slash rate according to the following formula:

$$\mathrm{slashRate} = \max\left(0.01,\ \min\left(1,\ 9 \cdot \Big(\sum_{i \in \mathrm{slashes}} \mathrm{votingPowerFraction}_i\Big)^{2}\right)\right)$$

Or, in pseudocode:

calculateSlashRate :: [Slash] -> Float

calculateSlashRate slashes =
    let totalFraction = sum [votingPowerFraction (validator slash) | slash <- slashes]
    in max 0.01 (min 1 (9 * totalFraction**2))
  -- minimum slash rate is 1%
  -- then quadratic between 0 & 1/3 voting power
  -- we can make this a more complex function later

Plotted as a function of the fractional voting power committing infractions, the slash rate starts at the 1% minimum, grows quadratically, and saturates at 100% (see the cubic_slash figure).

Note: The voting power of a slash is the voting power of the validator when they violated the protocol, not the voting power now or at the time of any of the other infractions. This does mean that these voting powers may not sum to 1, but this method should still be close to the incentives we want, and can't really be changed without making the system easier to game.

  3. Set the slash rate on the now "finalised" slash in storage.
  4. Update the validators' stored voting power appropriately.
  5. Delegations to the validator can now be redelegated / start unbonding / etc.

A validator can later submit a transaction to unjail itself after a configurable period. When the transaction is applied and accepted, the validator updates its state to "candidate" and is added back to the appropriate validator set (consensus, below_capacity or below_threshold, depending on its voting power) starting at the epoch at the pipeline offset.

At present, funds slashed are sent to the governance treasury.

Slashes

Slashes should lead to punishment for delegators who were contributing voting power to the validator at the height of the infraction, as if the delegations were iterated over and slashed individually.

This can be implemented as a negative inflation rate for a particular block.

Instant redelegation is not supported. Redelegations must wait the unbonding period.

Reward distribution

Namada uses the automatically-compounding variant of F1 fee distribution.

Rewards are given to validators for proposing blocks, for voting on finalizing blocks, and for being in the consensus validator set: the funds for these rewards can come from minting (creating new tokens). The amount that is minted depends on how many staking tokens are locked (staked) and some maximum annual inflation rate. The rewards mechanism is implemented as a PD controller that dynamically adjusts the inflation rate to achieve a target staking token ratio. When the total fraction of tokens staked is very low, the return rate per validator needs to increase, but as the total fraction of stake rises, validators will receive fewer rewards. Once the desired staking fraction is achieved, the amount minted will just be the desired annual inflation.
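A minimal sketch of such a controller, with illustrative parameter names and gains (the actual protocol parameters and update rule are defined in the inflation system, not here):

/// One epoch of the staking-reward inflation controller (a sketch).
/// `locked_ratio` is the fraction of the total supply currently staked,
/// `target_ratio` the desired staking fraction (e.g. 2/3), and the returned
/// value is the per-epoch inflation amount in native tokens.
fn staking_inflation(
  total_supply: f64,
  locked_ratio: f64,
  last_locked_ratio: f64,
  target_ratio: f64,
  last_inflation: f64,
  max_inflation_per_epoch: f64,
  p_gain: f64,
  d_gain: f64,
) -> f64 {
  // Proportional term: how far we are from the target staking ratio.
  let error = target_ratio - locked_ratio;
  // Derivative term: how the staking ratio moved since the last epoch.
  let delta_error = last_locked_ratio - locked_ratio;
  let control = p_gain * error * total_supply + d_gain * delta_error * total_supply;
  // New inflation, clamped between zero and the per-epoch maximum.
  (last_inflation + control).clamp(0.0, max_inflation_per_epoch)
}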

Each delegation to a validator is initiated at an agreed-upon commission rate charged by the validator. Validators pay out rewards to delegators based on this mutually-determined commission rate. The minted rewards are auto-bonded and only transferred when the funds are unbonded. Once the protocol determines the total amount of tokens to mint at the end of the epoch, the minted tokens are effectively divided among the relevant validators and delegators according to their proportional stake. In practice, the reward products, which are the fractional increases in staked tokens claimed, are stored for the validators and delegators, and the reward tokens are only transferred to the validator’s or delegator’s account upon withdrawal. This is described in the following sections. The general system is similar to what Cosmos does.

Basic algorithm

Consider a system with

  • a canonical singular staking unit of account.
  • a set of validators .
  • a set of delegations , where indicates the associated validator, each with a particular initial amount.
  • epoched proof-of-stake, where changes are applied as follows:
    • bonding is processed after the pipeline length
    • unbonding is processed after the pipeline + unbonding length
    • rewards are paid out at the end of each epoch, i.e., in each epoch , a reward is paid out to validator
    • slashing is applied as described in slashing.

We wish to approximate as exactly as possible the following ideal delegator reward distribution system:

  • At each epoch, for a validator $V$, iterate over all of the delegations to that validator. Update each delegation $D$ as follows: $D \rightarrow D \, (1 + r_V(e)/s_V(e))$, where $r_V(e)$ and $s_V(e)$ respectively denote the reward and stake of validator $V$ at epoch $e$.
  • Similarly, multiply the validator's voting power by the same factor $(1 + r_V(e)/s_V(e))$, which should now equal the sum of their revised-amount delegations.

In this system, rewards are automatically rebonded to delegations, increasing the delegation amounts and validator voting powers accordingly.

However, we wish to implement this without actually needing to iterate over all delegations each block, since this is too computationally expensive. We can exploit this constant multiplicative factor $(1 + r_V(e)/s_V(e))$, which does not vary per delegation, to perform this calculation lazily. In this lazy method, only a constant amount of data per validator per epoch is stored, and revised amounts are calculated for each individual delegation only when a delegation changes.

We will demonstrate this for a delegation $D$ to a validator $V$. Let $s_V(e)$ denote the stake of $V$ at epoch $e$.

For two epochs $m$ and $n$ with $m \le n$, define the function $p$ as

$$p(n, m) = \prod_{e = m}^{n} \left(1 + \frac{r_V(e)}{s_V(e)}\right).$$

Denote $p(n, 0)$ as $p_n$. The function $p$ has a useful property:

$$p(n, m) = \frac{p_n}{p_{m-1}}. \tag{1}$$

One may calculate the accumulated changes up to epoch $n$ as

$$D_n = D_0 \, p(n, 0).$$

If we know the delegation $D_m$ up to epoch $m$, the delegation $D_n$ at epoch $n$ is obtained by the following formula:

$$D_n = D_m \, p(n, m + 1).$$

Using property $(1)$,

$$D_n = D_m \, \frac{p_n}{p_m}.$$

Clearly, the quantity $p_n / p_m$ does not depend on the delegation $D$. Thus, for a given validator, we only need to store this product $p_e$ at each epoch $e$, from which the updated amounts for all delegations can be calculated.

The product $p_e$ at the end of each epoch $e$ is updated as follows.


updateProducts
  :: HashMap<Address, HashMap<Epoch, Float>>
  -> HashSet<Address>
  -> Epoch
  -> HashMap<Address, HashMap<Epoch, Float>>

updateProducts validatorProducts activeSet currentEpoch =
  foldr updateValidator validatorProducts activeSet
  where
    updateValidator validator products =
      let stake       = PoS.readValidatorTotalDeltas validator currentEpoch
          reward      = PoS.reward stake currentEpoch
          rsratio     = reward / stake
          entries     = lookup products validator
          lastProduct = lookup entries (Epoch (currentEpoch - 1))
      in insert validator (insert currentEpoch (lastProduct * (1 + rsratio)) entries) products

When a delegator wishes to withdraw delegation(s), the proportionate rewards are paid out using the aforementioned scheme, as implemented by the following function.

withdrawalAmount
  :: HashMap<Address, HashMap<Epoch, Product>>
  -> BondId
  -> [(Epoch, Delegation)]
  -> Token::amount

withdrawalAmount validatorProducts bondId unbonds =
  sum [ stake * endp / startp
      | (endEpoch, unbond) <- unbonds
      , let epochProducts = lookup (validator bondId) validatorProducts
      , let startp = lookup (startEpoch unbond) epochProducts
      , let endp = lookup endEpoch epochProducts
      , let stake = delegation unbond
      ]
 

Commission

Commission is charged by a validator on the rewards coming from delegations. These are set as percentages by the validator, who may charge any commission they wish between 0-100%.

Let $c_V(e)$ be the commission rate for a delegation $D$ to a validator $V$ at epoch $e$. The expression for the product $p_n$ that was introduced earlier can be modified for a delegator in particular as

$$p_n = \prod_{e = 0}^{n} \left(1 + (1 - c_V(e)) \, \frac{r_V(e)}{s_V(e)}\right)$$

in order to calculate the new rewards given out to the delegator during withdrawal. Thus the commission charged per epoch is retained by the validator and remains untouched upon withdrawal by the delegator.

The commission rate $c_V(e)$ is the same for all delegations to a validator $V$ in a given epoch $e$, including for self-bonds. The validator can change the commission rate at any point, subject to a maximum rate of change per epoch, which is a constant specified when the validator is created and immutable once validator creation has been accepted.
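For concreteness, a small sketch of how this delegator-specific product could be accumulated epoch by epoch (the function and slice names are illustrative):

/// Accumulate the delegator-specific reward product over epochs `0..=n`,
/// where `rewards[e]` and `stakes[e]` are the validator's reward and stake at
/// epoch `e`, and `commission_rates[e]` is the validator's commission rate.
fn delegator_product(rewards: &[f64], stakes: &[f64], commission_rates: &[f64]) -> f64 {
  rewards
    .iter()
    .zip(stakes)
    .zip(commission_rates)
    .map(|((r, s), c)| 1.0 + (1.0 - c) * r / s)
    .product()
}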

While rewards are given out at the end of every epoch, voting power is only updated after the pipeline offset. According to the proof-of-stake system, at the current epoch e, the validator sets can only be updated for epoch e + pipeline_offset, and they should remain unchanged from epoch e to e + pipeline_offset - 1. Updating voting power in the current epoch would violate this rule.

Distribution to validators

A validator can earn a portion of the block rewards in three different ways:

  • Proposing the block
  • Providing a signature on the constructed block (voting)
  • Being a member of the consensus validator set

The reward mechanism calculates fractions of the total block reward that are given for the above-mentioned three behaviors, such that

$$f_{proposer} + f_{signers} + f_{set} = 1,$$

where $f_{proposer}$ is the proposer reward fraction, $f_{signers}$ is the reward fraction for the set of signers, and $f_{set}$ is the reward fraction for the whole active validator set.

The reward for proposing a block is dependent on the combined voting power of all validators whose signatures are included in the block. This is to incentivize the block proposer to maximize the inclusion of signatures, as blocks with more signatures reflect broader validator participation in consensus.

The block proposer reward is parameterized as an increasing function of $s$, where $s$ is the ratio of the combined stake of all block signers to the combined stake of all consensus validators:

$$s = \frac{\sum_{i \in S} \mathrm{stake}_i}{\sum_{i \in C} \mathrm{stake}_i},$$

where $S$ is the set of block signers and $C$ is the consensus validator set.

The value of $s$ is bounded from below at 2/3, since a block requires this amount of signing stake to be verified. We currently enforce that the block proposer reward is a minimum of 1%.
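As a small illustration of this ratio and its lower bound (the function name is ours, not the protocol's):

/// Ratio of the block's signing stake to the total consensus stake,
/// bounded below at 2/3 since a verified block carries at least that
/// much signing stake (a sketch).
fn signing_stake_ratio(signing_stake: u64, total_consensus_stake: u64) -> f64 {
  (signing_stake as f64 / total_consensus_stake as f64).max(2.0 / 3.0)
}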

The block signer reward for a validator $V_i$ is parameterized in terms of $\mathrm{stake}_i$, the stake of validator $V_i$; $\sum_{j \in S} \mathrm{stake}_j$, the combined stake of all signers; and $\sum_{j \in C} \mathrm{stake}_j$, the combined stake of all consensus validators.

Finally, the remaining reward fraction, earned just for being a member of the consensus validator set, is whatever remains after the proposer and signer fractions: $f_{set} = 1 - f_{proposer} - f_{signers}$.

Thus, as an example, the total fraction of the block reward for the proposer (assuming they include their own signature in the block) would be the sum of the proposer fraction, their own share of the signer fraction, and their share of the consensus-set fraction.

The values of these parameters are set in the proof-of-stake storage and can only be changed via governance. Their genesis values are chosen relative to each other such that a block proposer is always incentivized to include as much signing stake as possible.

These rewards must be determined for every single block, but the inflationary token rewards are only minted at the end of an epoch. Thus, the rewards products are only updated at the end of an epoch as well.

In order to maintain a record of the block rewards over the course of an epoch, a reward fraction accumulator is implemented as a Map<Address, Decimal> and held in the storage key #{PoS}/validator_set/consensus/rewards_accumulator. When finalizing each block, the accumulator value for each consensus validator is incremented with the fraction of that block's reward owed to the validator. At the end of the epoch when the rewards products are updated, the accumulator value is divided by the number of blocks in that epoch, which yields the fraction of the newly minted inflation tokens owed to the validator. The next entry of the rewards products for each validator can then be created. The map is then reset to be empty in preparation for the next epoch and consensus validator set.
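A sketch of the per-block accumulation and the end-of-epoch conversion into per-validator shares, with f64 standing in for Decimal and illustrative function names:

use std::collections::HashMap;

type Address = String; // stand-in for the protocol address type

/// When finalizing a block, add each consensus validator's fraction of the
/// block reward to the accumulator stored under
/// `#{PoS}/validator_set/consensus/rewards_accumulator`.
fn accumulate_block_rewards(
  accumulator: &mut HashMap<Address, f64>,
  block_reward_fractions: &HashMap<Address, f64>,
) {
  for (validator, fraction) in block_reward_fractions {
    *accumulator.entry(validator.clone()).or_insert(0.0) += fraction;
  }
}

/// At the end of the epoch, divide each accumulated value by the number of
/// blocks in the epoch to get each validator's share of the newly minted
/// inflation, then reset the accumulator for the next epoch.
fn epoch_reward_shares(
  accumulator: &mut HashMap<Address, f64>,
  blocks_in_epoch: u64,
) -> HashMap<Address, f64> {
  let shares = accumulator
    .iter()
    .map(|(addr, acc)| (addr.clone(), acc / blocks_in_epoch as f64))
    .collect();
  accumulator.clear();
  shares
}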

TODO describe / figure out:

  • how leftover reward tokens from round-off / truncation are handled

Shielded pool incentives

Rationale

Private transactions made by individual users using the MASP increase the privacy set for other users, so even if the individual doesn't care whether a particular transaction is private, others benefit from their choice to do the transaction in private instead of in public. In the absence of a subsidy (the computation required for private state transitions is likely more expensive) or other incentives, users may not elect to make their transactions private when they do not need to because the benefits do not directly accrue to them. This provides grounds for a protocol subsidy of shielded transactions (relative to the computation required), so that users who do not have a strong preference on whether or not to make their transaction private will be "nudged" by the fee difference to do so.

Separately, and additionally, a privacy set which is very small in absolute terms does not provide much privacy, and transactions increasing the privacy set provide more additional privacy if the privacy set is small. Compare, for example, the doubled privacy set from 10 to 20 transactions to the minor increase from 1010 to 1020 transactions. This provides grounds for some sort of incentive mechanism for making shielded transactions which pays in inverse proportion to the size of the current privacy set (so shielded transactions when the privacy set is small receive increased incentives in accordance with their increased contributions to privacy).

Incentive mechanisms are also dangerous, as they give users reason to craft particular transactions when they might not otherwise have done so, and they must satisfy certain constraints in order not to compromise state machine throughput, denial-of-service resistance, etc. A few constraints to keep in mind:

  • Fee subsidies cannot reduce fees to zero, or reduce fees so much that inexpensive transaction spam can fill blocks and overload validators.
  • Incentives for contributing to the privacy set should not incentivise transactions which do not meaningfully contribute to the privacy set or merely repeat a previous action (shielding and unshielding the same assets, repeatedly transferring the same assets, etc.)
  • Incentives for contributing to the privacy set, since the MASP supports many assets, will need to be adjusted over time according to actual conditions of use.

Design

Namada enacts a shielded pool incentive which pays users a variable rate for keeping assets in the shielded pool. Assets do not need to be locked in any way. Users may claim rewards while remaining in the shielded pool using the convert circuit, and unshield the rewards (should they wish to) at some later point in time. The protocol uses a PD-controller to target particular minimum amounts of particular assets being shielded. Rewards accumulate automatically over time, so claiming rewards more frequently does not result in additional funds.

Implementation

When users deposit assets into the shielded pool, the current epoch is appended to the asset type. Users can use these "epoched assets" as normal within the shielded pool. When epochs advance, users can use the convert circuit to convert assets tagged with the old epoch to assets tagged with the new epoch, receiving shielded rewards in NAM proportional to the amount of the asset they had shielded; these rewards compound automatically while the assets remain shielded and epochs progress. When unshielding from the shielded pool, assets must first be converted to the current epoch (claiming any rewards), after which they can be converted back to the normal (un-epoched) unshielded asset denomination.

Namada allocates up to 10% per annum inflation of NAM to pay for shielded pool rewards. This inflation is kept in a temporary shielded rewards pool, which is then allocated according to a set of PD (proportional-derivative) controllers for assets and target shielded amounts configured by Namada governance. Each epoch, subject to available rewards, each controller calculates the reward rate for its asset in this epoch, which is then used to compute entries into the conversion table. Entries from epochs before the previous one are recalculated based on cumulative rewards. Users may then asynchronously claim their rewards by using the convert circuit at some future point in time.
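As a sketch of the per-asset controller and conversion entry described above (the field names, gains, and the encoding of conversions are illustrative assumptions, not the protocol's actual definitions):

/// Hypothetical per-asset shielded-rewards controller state (a sketch).
struct ShieldedRewardsController {
  /// Target amount of this asset locked in the shielded pool.
  target_locked_amount: f64,
  /// Nominal proportional and derivative gains.
  kp: f64,
  kd: f64,
  /// Locked amount observed in the previous epoch.
  last_locked_amount: f64,
}

impl ShieldedRewardsController {
  /// Compute this epoch's NAM reward for the asset, bounded by what is
  /// available in the shielded rewards pool for this asset.
  fn epoch_reward(&mut self, locked_amount: f64, max_reward: f64) -> f64 {
    let error = self.target_locked_amount - locked_amount;
    let delta = self.last_locked_amount - locked_amount;
    self.last_locked_amount = locked_amount;
    (self.kp * error + self.kd * delta).clamp(0.0, max_reward)
  }
}

/// Conversion-table entry for moving an asset from the previous epoch's
/// denomination to the current one, paying `reward_per_unit` NAM per unit.
struct ConversionEntry {
  asset_epoch_from: u64,
  asset_epoch_to: u64,
  reward_per_unit: f64,
}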

Public goods funding

Motivation

Public goods are non-excludable, non-rivalrous items which provide benefits of some sort to their users. Examples include languages, open-source software, research, designs, Earth's atmosphere, and art (conceptually - a physical painting is excludable and rivalrous, but the painting as-such is not). Namada's software stack, supporting research, and ecosystem tooling are all public goods, as are the information ecosystem and education which provide for the technology to be used safely, the hardware designs and software stacks (e.g. instruction set, OS, programming language) on which it runs, and the atmosphere and biodiverse environment which renders its operation possible. Without these things, Namada could not exist, and without their continued sustenance it will not continue to. Public goods, by their nature as non-excludable and non-rivalrous, are mis-modeled by economic systems (such as payment-for-goods) built upon the assumption of scarcity, and are usually either under-funded (relative to their public benefit) or funded in ways which require artificial scarcity and thus a public loss. For this reason, it is in the interest of Namada to help out, where possible, in funding the public goods upon which its existence depends in ways which do not require the introduction of artificial scarcity, balancing the costs of available resources and operational complexity.

Design precedent

There is a lot of existing research into public-goods funding to which justice cannot be done here. Most mechanisms fall into two categories: need-based and results-based, where need-based allocation schemes attempt to pay for particular public goods on the basis of cost-of-resources, and results-based allocation schemes attempt to pay (often retroactively) for particular public goods on the basis of expected or assessed benefits to a community and thus create incentives for the production of public goods providing substantial benefits (for a longer exposition on retroactive PGF, see here, although the idea is not new). Additional constraints to consider include the cost-of-time of governance structures (which renders e.g. direct democracy on all funding proposals very inefficient), the necessity of predictable funding in order to make long-term organisational decision-making, the propensity for bike-shedding and damage to the information commons in large-scale public debate (especially without an identity layer or Sybil resistance), and the engineering costs of implementations.

Mechanism

Namada instantiates a dual proactive/retroactive public-goods funding model, stewarded by a public-goods council elected by limited liquid democracy.

This requires the following protocol components:

  • Limited liquid democracy / targeted delegation: Namada's current voting mechanism is altered to add targeted delegation. By default, each delegator delegates their vote in governance to their validator, but they can set an alternative governance delegate who can instead vote on their behalf (but whose vote can be overridden as usual). Validators can also set governance delegates, in which case those delegates can vote on their behalf, and on the behalf of all delegators to that validator who do not override the vote, unless the validator overrides the vote. This is a limited form of liquid democracy which could be extended in the future.
  • Funding council: bi-annually (every six months), Namada governance elects a public goods funding council by stake-weighted approval vote (see below). Public goods funding councils run as groups. The public goods funding council decides according to internal decision-making procedures (practically probably limited to a k-of-n multisignature) how to allocate continuous funding and retroactive funding during their term. Namada genesis includes an initial funding council, and the next election will occur six months after launch.
  • Continuous funding: Namada prints an amount of inflation fixed on a percentage basis dedicated to continuous funding. Each quarter, the public goods funding council selects recipients and amounts (which in total must receive all of the funds, although they could burn some) and submits this list to the protocol. Inflation is distributed continuously by the protocol to these recipients during that quarter.
  • Retroactive funding: Namada prints an amount of inflation fixed on a percentage basis dedicated to retroactive funding. Each quarter, the public goods funding council selects recipients and amounts (which in total must receive all of the funds) and submits this list to the protocol. Amounts are distributed immediately as lump sums. The public goods funding council is instructed to use this funding to fund public goods retroactively, proportional to assessed benefit.
  • Privacy of council votes: in order to prevent targeting of individual public goods council members, it is important that the council acts only as a group. Whatever internal decision-making structure it uses is up to the council; Namada governance should evaluate councils as opaque units. We may need a simple threshold public key to provide this kind of privacy - can we evaluate the implementation difficulty of that?
  • Stake-weighted approval voting: as public goods councils are exclusive, we can use a stake-weighted form of approval voting. Governance voters include all public goods council candidates of which they approve, and the council candidate with the most stake approving it wins. This doesn't have game-theoretic properties as nice as ranked-choice voting (especially when votes are public, as they are at the moment), but it is much simpler (background), and in practice I do not think there will be too many public goods council candidates.
  • Interface support: the interface should support limited liquid democracy for delegate selection and approval voting for public goods council candidates. The interface or explorer should display past retroactive PGF winners and past/current continuous funding recipients. Proposal submission for continuous and retroactive funding will happen separately, in whatever manner the public goods council deems fit.

Funding categories

Note that the following is social consensus, precedent which can be set at genesis and ratified by governance but does not require any protocol changes.

Categories of public-goods funding

Namada groups public goods into four categories, with earmarked pools of funding:

  • Technical research Technical research covers funding for technical research topics related to Namada, such as cryptography, distributed systems, programming language theory, and human-computer interface design, both inside and outside the academy. Possible funding forms could include PhD sponsorships, independent researcher grants, institutional funding, funding for experimental resources (e.g. compute resources for benchmarking), funding for prizes (e.g. theoretical cryptography optimisations), and similar.
  • Engineering Engineering covers funding for engineering projects related to Namada, including libraries, optimisations, tooling, alternative interfaces, alternative implementations, integrations, etc. Possible funding forms could include independent developer grants, institutional funding, funding for bug bounties, funding for prizes (e.g. practical performance optimisations), and similar.
  • Social research, art, and philosophy Social research, art, and philosophy covers funding for artistic expression, philosophical investigation, and social/community research (not marketing) exploring the relationship between humans and technology. Possible funding forms could include independent artist grants, institutional funding, funding for specific research resources (e.g. travel expenses to a location to conduct a case study), and similar.
  • External public goods External public goods covers funding for public goods explicitly external to the Namada ecosystem, including carbon sequestration, independent journalism, direct cash transfers, legal advocacy, etc. Possible funding forms could include direct purchase of tokenised assets such as carbon credits, direct cash transfers (e.g. GiveDirectly), institutional funding (e.g. Wikileaks), and similar.

Funding amounts

In Namada, 10% inflation per annum of the NAM token is directed to this public goods mechanism, 5% to continuous funding and 5% to retroactive funding. This is a genesis default and can be altered by governance.

Namada encourages the public goods council to adopt a default social consensus of an equal split between categories, meaning 1.25% per annum inflation for each category (e.g. 1.25% for technical research continuous funding, 1.25% for technical research retroactive PGF). If no qualified recipients are available, funds may be redirected or burnt.

Namada also pays the public goods council members themselves (in total) a default of 0.1% inflation per annum.

Further reading

Thanks for reading! You can find further information about the project below:

  • The state of Namada
  • Namada source code
  • Namada Community
  • Heliax
  • Namada Medium page
  • Namada Docs
  • Namada Discord
  • Namada Twitter