Penumbra Guide

Penumbra is a fully shielded zone for the Cosmos ecosystem, allowing anyone to securely transact, stake, swap, or marketmake without broadcasting their personal information to the world.

This site contains documentation on how to use, deploy, and develop the Penumbra software. The protocol itself is described in the protocol specification, and there is separate API documentation.

Test networks

Penumbra is a decentralized protocol, so Penumbra Labs is building in public, launching (and crashing) lots of work-in-progress testnets to allow community participation, engagement, and feedback.

Currently, Penumbra only has a command line client, pcli (pronounced “pickle-y”), which bundles all of the client components in one binary, and a chain-scanning daemon, pclientd, which runs just the view service, without spend capability. To get started with the Penumbra test network, all that’s required is to download and build pcli, as described in Installation.

The Penumbra node software is the Penumbra daemon, pd. This is an ABCI application, which must be driven by CometBFT, so a Penumbra full node consists of both a pd instance and a cometbft instance.

The basic architecture of Penumbra is as follows:

          ╭   ┌───────┐
  spending│   │custody│
capability│   │service│
          ╰   └───────┘
               ▲     │
               │tx   │auth
               │plan │data
               │     ▼
          ╭   ┌───────┐
   viewing│   │wallet │ tx submission
capability│   │logic  │────────┐
          │   └───────┘        │
          │    ▲               │
          │    │view private state
          │    │               │
          │    │               │
          │   ┌───────┐        │
          │   │view   │        │
          │   │service│        │
          ╰   └───────┘        │
               ▲               │
               │sync private state
               │               │
          ╭ ┌──┼───────────────┼──────┐
    public│ │  │     Penumbra Fullnode│
     chain│ │  │               │      │
      data│ │  │               ▼      │
          │ │ ┌──┐ app   ┌──────────┐ │
          │ │ │pd│◀─────▶│ cometbft │ │
          │ │ └──┘ sync  └──────────┘ │
          │ │               ▲         │
          ╰ └───────────────┼─────────┘
                       ,'   │ `.
                  .───;     │consensus
                 ;          │sync
               .─┤          │   ├──.
             ,'             │       `.
            ;   Penumbra    │         :
            :   Network  ◀──┘         ;
             ╲                       ╱
              `.     `.     `.     ,'
                `───'  `───'  `───'

The custody service holds signing keys and is responsible for authorizing transaction plans. The view service holds viewing keys and scans the chain state. Wallet logic can query the view service to get information about what funds are available, submit a transaction plan to the custody service for signing, and then use the returned signatures to build the transaction and submit it.

As a shielded chain, Penumbra’s architecture is slightly different than a transparent chain, because user data such as account balances, transaction activity, etc., is not part of the public chain state. This means that clients need to synchronize with the chain to build a copy of the private user data they have access to. This logic is provided by the view service, which is bundled into pcli, but can also be run as a standalone pclientd daemon.

Modeling authorization as an (asynchronous) RPC to a custody service means that the client software is compatible with many different custody flows by default – an in-process “SoftHSM”, a hardware wallet with user intervention, a cluster of online threshold signers, an offline threshold signing process, etc.

Using the web extension

This section describes how to use the Penumbra Wallet web extension, a GUI client for Penumbra.

Currently, the web extension only supports a subset of functionality of the command-line client, pcli.

Installing the extension

The Penumbra Wallet web extension only supports the Google Chrome browser. You must run Chrome in order to follow the instructions below.

  1. Visit the Web Store page for the Penumbra Wallet, and click Add to Chrome to install it.
  2. Navigate to the dApp website for the extension, and click Connect in the top-right corner.
  3. Click Get started to proceed with wallet configuration.

Generating a wallet

You’ll be offered the option to import a pre-existing wallet. If you don’t already have one, choose Create a new wallet. During the guided tutorial, you’ll need to set a passphrase to protect your wallet. The passphrase is not the same as the recovery phrase: the passphrase restricts access to the web wallet on your computer, while the recovery phrase can be used to import your wallet on a fresh installation or on a different machine. Make sure to store both the passphrase and the recovery phrase securely, for example in a password manager.

Re-enter portions of the recovery phrase when prompted, to confirm that you’ve saved it properly. Then you’ll be taken to a screen that shows an initial synchronization process with the most recent testnet.

Obtaining funds

In order to use the testnet, it’s first necessary for you to get some testnet tokens. To obtain your address, click on the extension icon. The drop-down should display your wallet address and a button to copy it to the clipboard. Next, join our Discord and post your address in the #testnet-faucet channel. We’ll send your address some tokens on the testnet for you to send to your friends! :)

In addition, addresses posted to the testnet faucet are periodically rolled into the testnet genesis file, so that in future testnets your address will have testnet tokens pre-loaded.

Just keep in mind: testnet tokens do not have monetary value, and in order to keep the signal-to-noise ratio high on the server, requests for tokens in other channels will be deleted without response. Please do not DM Penumbra Labs employees asking for testnet tokens; the correct venue is the dedicated channel.

Creating transactions

Now that you’ve got the web wallet configured, let’s use it to send a transaction. Navigate to the dApp website and click Connect, then authorize the extension to work with the site. After doing so, you’ll see buttons for actions such as Receive, Send, and Exchange.

As of Testnet 53, only the Send action is supported. Check back on subsequent versions to follow progress as we implement more advanced functionality in the web wallet.

Upgrading to a new testnet

When a new testnet is released, you’ll need to clear the existing state from the extension, much like running pcli view reset is required on the command-line. To synchronize with a new testnet:

  1. Click the Penumbra Wallet option in the extensions drop-down menu, next to the URL bar.
  2. Unlock the wallet by providing your passphrase, if prompted.
  3. Click the gear icon in the top right corner of the overlay.
  4. Choose Advanced -> Clear Cache -> Confirm.

Then navigate to the dApp website again and reauthorize the connection. The extension will automatically sync with the new chain.

Updating to a new version of the extension

The extension should be automatically updated every time a new version is released. You can view the latest version of the extension at the Chrome Web Store. To force a check for updates:

  1. Click the three-dot icon in the top right corner of the browser.
  2. From the drop-down menu, choose Extensions -> Manage Extensions.
  3. Select Update on the top panel.

After updating the extension manually, it may be helpful to clear the local cache, as described above.

Using pcli

This section describes how to use pcli, the command line client for Penumbra:

Penumbra is a private blockchain, so the public chain state does not reveal any private user data. By default, pcli includes a view service that synchronizes with the chain and scans with a viewing key.

Please submit any feedback and bug reports

Thank you for helping us test the Penumbra network! If you have any feedback, please let us know in the #testnet-feedback channel on our Discord. We would love to know about bugs, crashes, confusing error messages, or any of the many other things that inevitably won’t quite work yet. Have fun! :)

Diagnostics and Warnings

By default, pcli prints a warning message to the terminal, to be sure that people understand that this is unstable, unfinished, pre-release software. To disable this warning, export the PCLI_UNLEASH_DANGER environment variable.
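For example, in a POSIX shell (assuming, per the guide, that only the variable's presence matters, not its particular value):

```shell
# Suppress pcli's pre-release warning for the current shell session.
# The guide only says to export the variable; the value "1" is arbitrary.
export PCLI_UNLEASH_DANGER=1
echo "$PCLI_UNLEASH_DANGER"
```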

Installing pcli

Installing the Rust toolchain

This requires that you install a recent (>= 1.73) stable version of the Rust compiler; see the official Rust installation instructions. Don’t forget to reload your shell so that cargo is available in your $PATH!

You can verify the Rust compiler version by running rustc --version, which should report version 1.73 or later.

pcli requires rustfmt as part of the build process — depending on your OS/install method for Rust, you may have to install that separately.

Installing build prerequisites


You may need to install some additional packages in order to build pcli, depending on your distribution. For a bare-bones Ubuntu installation, you can run:

sudo apt-get install build-essential pkg-config libssl-dev clang git-lfs

For a minimal Fedora/CentOS/RHEL image, you can run:

sudo dnf install openssl-devel clang git cargo rustfmt git-lfs


If you are on macOS, you may need to install the command-line developer tools if you have never done so:

xcode-select --install

You’ll also need to install Git LFS, which you can do via Homebrew:

brew install git-lfs

Making sure that git-lfs is installed

Running git lfs install will make sure that git-lfs is correctly installed on your machine.

Cloning the repository

Once you have installed the above tools, you can clone the repository:

git clone

To build the version of pcli compatible with the current testnet, navigate to the penumbra folder, fetch the latest from the repository, and check out the latest tag for the current testnet:

cd penumbra && git fetch && git checkout v0.63.1

Building the pcli client software

Then, build the pcli tool using cargo:

cargo build --release --bin pcli

Because you are building a work-in-progress version of the client, you may see compilation warnings, which you can safely ignore.
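The examples in this guide invoke pcli through cargo run each time. As an optional convenience (not part of the official tooling), a small shell function can shorten those invocations:

```shell
# Optional wrapper so that `pcli view balance` expands to the full cargo command.
pcli() {
    cargo run --quiet --release --bin pcli -- "$@"
}
```

With this defined in your shell, a command like cargo run --quiet --release --bin pcli view balance can be written as just pcli view balance.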

Generating a Wallet

On first installation of pcli, you will need to generate a fresh wallet to use with Penumbra.

The pcli init command will generate a configuration file. To generate a new wallet, try:

$ cargo run --quiet --release --bin pcli init soft-kms generate
Save this in a safe place!
Writing generated configs to [PATH TO PCLI DATA]

This uses the soft-kms backend, which saves the generated spend key in the config file.

Alternatively, to import an existing wallet, try

$ cargo run --quiet --release --bin pcli init soft-kms import-phrase
Enter seed phrase:
Writing generated configs to [PATH TO PCLI DATA]

Penumbra’s design automatically creates 2^32 (four billion) numbered accounts controlled by your wallet.

To generate the address for a numbered account, use pcli view address:

$ cargo run --quiet --release --bin pcli view address 0

You can also run pcli view address on an address to see which account it corresponds to:

$ cargo run --quiet --release --bin pcli view address penumbrav2t1...
Address is viewable with this full viewing key. Account index is 0.

Addresses are opaque and do not reveal account information. Only you, or someone who has your viewing key, can decrypt the account information from the address.

Getting testnet tokens on Discord in the #testnet-faucet channel

In order to use the testnet, it’s first necessary for you to get some testnet tokens. The current way to do this is to join our Discord and post your address in the #testnet-faucet channel. We’ll send your address some tokens on the testnet for you to send to your friends! :)

Just keep in mind: testnet tokens do not have monetary value, and in order to keep the signal-to-noise ratio high on the server, requests for tokens in other channels will be deleted without response. Please do not DM Penumbra Labs employees asking for testnet tokens; the correct venue is the dedicated channel.

Updating pcli

Make sure you’ve followed the installation steps. Then, to update to the latest testnet release:

cd penumbra && git fetch && git checkout v0.63.1

Once again, build pcli with cargo:

cargo build --release --bin pcli

No new wallet needs to be generated. Instead, keep your existing wallet and reset the view data:

cargo run --quiet --release --bin pcli view reset

Viewing Balances

Once you’ve received your first tokens, you can scan the chain to import them into your local wallet (this may take a few minutes the first time you run it):

cargo run --quiet --release --bin pcli view sync

Syncing is performed automatically, but running the sync subcommand will ensure that the client state is synced to a recent state, so that future invocations of pcli commands don’t need to wait.

If someone sent you testnet assets, you should be able to see them now by running:

cargo run --quiet --release --bin pcli view balance

This will print a table of your assets and the balance in each. The balance view shows only asset amounts. To see more information about delegation tokens and the stake they represent, use

cargo run --quiet --release --bin pcli view staked

Sending Transactions

Now, for the fun part: sending transactions. If you have someone else’s testnet address, you can send them any amount of any asset you have.

First, use balance to find the amount of assets you have:

cargo run --release --bin pcli view balance

Second, to send 10 penumbra tokens to a friend, you would run the following (filling in their full address at the end):

cargo run --quiet --release --bin pcli tx send 10penumbra --to penumbrav2t...

Notice that asset amounts are typed amounts, specified without a space between the amount (10) and the asset name (penumbra). If you have the asset in your wallet to send, then so it shall be done!
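To illustrate the anatomy of a typed amount, here is a small shell sketch (plain parameter expansion, nothing pcli-specific) splitting one into its numeric and denomination parts:

```shell
# Split a typed amount such as "10penumbra" into amount and denomination.
typed="10penumbra"
amount="${typed%%[!0-9.]*}"   # leading digits: "10"
denom="${typed#"$amount"}"    # remainder:      "penumbra"
echo "$amount $denom"
```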


In addition to sending assets, you can also stake penumbra tokens to validators.

Find a validator to stake to:

cargo run --release --bin pcli query validator list

Copy and paste the identity key of one of the validators to stake to, then construct the staking tx:

cargo run --release --bin pcli tx delegate 10penumbra --to penumbravalid...

To undelegate from a validator, use the pcli tx undelegate command, passing it the typed amount of delegation tokens you wish to undelegate. Wait a moment for the network to process the undelegation, then reclaim your funds:

cargo run --release --bin pcli tx undelegate-claim

Inspect the output; a message may instruct you to wait longer, for a new epoch. Check back and rerun the command later to add the previously delegated funds to your wallet.


Penumbra features on-chain governance similar to the Cosmos Hub, where anyone can submit proposals and both validators and delegators can vote on them. Penumbra’s governance model incorporates a single DAO account, into which anyone can freely deposit, but from which only a successful governance vote can spend. For details on using governance, see the governance section.

Managing Liquidity Positions

Penumbra’s decentralized exchange (“dex”) implementation allows users to create their own on-chain liquidity positions. The basic structure of a liquidity position expresses a relationship between two assets, e.g. “I am willing to buy 100penumbra at a price of 1gm each, with a fee of 20bps (basis points)” or “I am willing to sell 100penumbra at a price of 1gm each, with a fee of 10bps”.

Opening a Liquidity Position

The basic commands for opening liquidity positions are tx position order buy and tx position order sell.

To open an order buying 10cube at a price of 1penumbra each, with no fee, you’d do the following:

cargo run --release --bin pcli -- tx position order buy 10cube@1penumbra

Similarly, to open an order selling 100penumbra at a price of 5gm each, with a 20bps fee on trades against the liquidity position, you would append /20bps to the end of the order, as follows:

cargo run --release --bin pcli -- tx position order sell 100penumbra@5gm/20bps
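The order string packs three fields together: the amount offered, the unit price, and an optional fee suffix. A shell sketch of that structure (illustrative only; pcli does its own parsing):

```shell
# Decompose "100penumbra@5gm/20bps" into offer, price, and fee.
order="100penumbra@5gm/20bps"
offer="${order%%@*}"   # "100penumbra"
rest="${order#*@}"     # "5gm/20bps"
price="${rest%%/*}"    # "5gm"
fee="${rest#*/}"       # "20bps"
echo "$offer $price $fee"
```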

After opening the position, you’ll see that an “LPNFT” representing the open position has been deposited into your account:

$ cargo run --release --bin pcli -- view balance

 Account  Amount
 0        1lpnft_opened_plpid1hzrzr2myjw508nf0hyzehl0w0x2xzr4t8vwe6t3qtnfhsqzf5lzsufscqr

Closing a Liquidity Position

If you have an open liquidity position, you may close it, preventing further trading against it.

cargo run --release --bin pcli -- tx position close plpid1hzrzr2myjw508nf0hyzehl0w0x2xzr4t8vwe6t3qtnfhsqzf5lzsufscqr

This will subtract the opened LPNFT and deposit a closed LPNFT into your balance:

$ cargo run --release --bin pcli -- view balance

 Account  Amount
 0        1lpnft_closed_plpid1hzrzr2myjw508nf0hyzehl0w0x2xzr4t8vwe6t3qtnfhsqzf5lzsufscqr

You also have the option to close all liquidity positions associated with an address at once. This is useful if you have many individual positions, e.g. due to trading function approximation:

cargo run --release --bin pcli -- tx position close-all

Withdrawing a Liquidity Position

If you have a closed liquidity position, you may withdraw it, depositing the reserves in the trading position into your balance.

cargo run --release --bin pcli -- tx position withdraw plpid1hzrzr2myjw508nf0hyzehl0w0x2xzr4t8vwe6t3qtnfhsqzf5lzsufscqr

This will subtract the closed LPNFT and deposit a withdrawn LPNFT into your balance, along with any reserves belonging to the trading position:

$ cargo run --release --bin pcli -- view balance

 Account  Amount
 0        1lpnft_withdrawn_plpid1hzrzr2myjw508nf0hyzehl0w0x2xzr4t8vwe6t3qtnfhsqzf5lzsufscqr
 0        1cube

You also have the option to withdraw all liquidity positions associated with an address at once. This is useful if you have many individual positions, e.g. due to trading function approximation:

cargo run --release --bin pcli -- tx position withdraw-all

Swapping Assets

One of the most exciting features of Penumbra is that by using IBC (inter-blockchain communication) and our shielded pool design, any tokens can be exchanged in a private way.

Swaps take place against the on-chain liquidity positions described earlier in the guide.

If you wanted to exchange 1 penumbra token for gm tokens, you could do so as follows:

cargo run --release --bin pcli -- tx swap --into gm 1penumbra

This will generate the swap transaction, and you’d soon receive the market-rate equivalent of 1 penumbra in gm tokens, or have your original 1 penumbra returned if there wasn’t enough liquidity available to perform the swap.

Replicating a UniswapV2 (x*y=k) pool

Penumbra’s constant-price pool is a versatile market primitive, allowing users extensive control over their trading strategies. It’s not solely for active DEX quoters; with our AMM replication tool, users can emulate any passive AMM of their choice. The testnet comes with a built-in UniswapV2 replicator, which is used as follows:

cargo run -r --bin pcli tx lp replicate xyk <TRADING_PAIR> <QUANTITY> [--current-price AMT] [--fee-bps AMT]

For instance, to provide ~100penumbra and ~100test_usd liquidity on the penumbra:test_usd pair with a pool fee of 33bps, run:

cargo run -r --bin pcli tx lp replicate xyk penumbra:test_usd 100penumbra --fee-bps 33

You will be prompted with a disclaimer, which you should read carefully and accept or reject by pressing “y” for yes or “n” for no.

The replicating market maker tool will then generate a list of positions, which you can submit by pressing “y” or reject by pressing “n”.

There are other pairs available that you can try this tool on, for example gm:gn or gm:penumbra.

IBC withdrawals

Penumbra aims to implement full IBC support for cross-chain asset transfers. For now, however, we’re only running a relayer between the Penumbra testnet and the Osmosis testnet chains. For Testnet 63 Rhea, the channel information is:

  {
    "state": 3,
    "ordering": 1,
    "counterparty": {
      "port_id": "transfer",
      "channel_id": "channel-4217"
    },
    "connection_hops": [...],
    "version": "ics20-1",
    "port_id": "transfer",
    "channel_id": "channel-0"
  }

The output above shows that the IBC channel id on Penumbra is channel-0, and the counterparty channel on Osmosis is channel-4217. There’s one more piece of information we need to make an IBC withdrawal: the appropriate IBC timeout height, which is composed of two values: <counterparty_chain_id_revision>-<counterparty_chain_block_height>. For the Osmosis testnet, as of 2023Q3, the chain id is osmo-test-5, meaning the chain id revision is 5. So a value like 5-5000000 (i.e. revision 5 at height 5 million) will work.
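Under the assumptions above, the timeout height can be composed mechanically from the counterparty chain id:

```shell
# Build an IBC timeout height of the form <revision>-<height>.
counterparty_chain_id="osmo-test-5"
revision="${counterparty_chain_id##*-}"   # text after the last dash: "5"
timeout_height="${revision}-5000000"
echo "$timeout_height"
```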

To initiate an IBC withdrawal from Penumbra testnet to Osmosis testnet:

cargo run --release --bin pcli -- tx withdraw --to <OSMOSIS_ADDRESS> --channel <CHANNEL_ID> 5gm --timeout-height 5-5000000

Unfortunately the CLI tooling for Osmosis is cumbersome. For now, use rly as a user agent for the Osmosis testnet, as described in the IBC dev docs.


Governance

Penumbra features on-chain governance similar to the Cosmos Hub, with the simplification that there are only 3 kinds of vote: yes, no, and abstain.

Quick Start

There’s a lot you can do with the governance system in Penumbra. If you have a particular intention in mind, here are some quick links:

Getting Proposal Information

To see information about the currently active proposals, including your own, use the pcli query proposal subcommand.

To list all the active proposals by their ID, use:

cargo run --release --bin pcli query governance list-proposals

Other proposal query commands all follow the form:

cargo run --release --bin pcli query governance proposal [PROPOSAL_ID] [QUERY]

These are the queries currently defined:

  • definition gets the details of a proposal, as the submitted JSON;
  • state gets information about the current state of a proposal (voting, withdrawn, or finished, along with the reason for withdrawal if any, and the outcome of finished proposals);
  • period gets the voting start and end block heights of a proposal;
  • tally gets the current tally of a proposal’s votes, as a total across all validators, and broken down by each validator’s votes and the total votes of their delegators.

Voting On A Proposal

Validators and delegators may both vote on proposals. Validator votes are public and attributable to that validator; delegator votes are anonymous, revealing only the voting power used in the vote, and the validator which the voting delegator had delegated to. Neither validators nor delegators can change their votes after they have voted.

Voting As A Delegator

If you had staked delegation tokens to one or more active validators when a proposal started, you can vote on it using the tx vote subcommand of pcli. For example, if you wanted to vote “yes” on proposal 1, you would do:

cargo run --release --bin pcli tx vote yes --on 1

When you vote as a delegator (but not when you vote as a validator), you will receive commemorative voted_on_N tokens, where N is the proposal ID, proportionate to the weight of your vote. Think of these as the cryptocurrency equivalent of the “I voted!” stickers you may have received when voting in real life at your polling place.

Voting As A Validator

If you are a validator who was active when the proposal started, you can vote on it using the validator vote subcommand of pcli. For example, if you wanted to vote “yes” on proposal 1, you would do:

cargo run --release --bin pcli validator vote yes --on 1

Eligibility And Voting Power

Only validators who were active at the time the proposal started voting may vote on proposals. Only delegators who had staked delegation tokens to active validators at the time the proposal started voting may vote on proposals.

A validator’s voting power is equal to their voting power at the time a proposal started voting, and a delegator’s voting power is equal to the unbonded staking token value (i.e. the value in penumbra) of the delegation tokens they had staked to an active validator at the time the proposal started voting. When a delegator votes, their voting power is subtracted from the voting power of the validator(s) to whom they had staked delegation notes at the time of the proposal start, and their stake-weighted vote is added to the total of the votes: in other words, validators vote on behalf of their delegators, but delegators may override their portion of their validator’s vote.
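As a sketch with made-up numbers: if a validator has 1000 units of voting power, 100 of which come from a delegator who casts their own vote, the validator’s vote counts for only the remainder:

```shell
# Delegator override: the delegator's power is subtracted from the validator's.
validator_power=1000
delegator_power=100
validator_effective=$((validator_power - delegator_power))
echo "validator: $validator_effective, delegator: $delegator_power"
```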

Authoring A Proposal

Anyone can submit a new governance proposal for voting by escrowing a proposal deposit, which will be held until the end of the proposal’s voting period. Penumbra’s governance system discourages proposal spam with a slashing mechanism: proposals which receive more than a high threshold of no votes have their deposit burned. At present, the slashing threshold is 80%. If the proposal is not slashed (but regardless of whether it passes or fails), the deposit will then be returned to the proposer at the end of voting.
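With made-up tallies, the 80% slashing check looks roughly like this (integer percentages for illustration; the exact on-chain arithmetic, including how abstentions are counted, may differ):

```shell
# The deposit is burned when "no" votes exceed 80% of the total.
yes=150; no=900; abstain=50
total=$((yes + no + abstain))   # 1100
no_pct=$((100 * no / total))    # 81 (integer division)
if [ "$no_pct" -gt 80 ]; then
    result="deposit slashed"
else
    result="deposit returned"
fi
echo "$result"
```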

From the proposer’s point of view, the lifecycle of a proposal begins when it is submitted and ends when the deposit is claimed. During the voting period, the proposer may also optionally withdraw the proposal, which prevents it from passing, but does not prevent it from being slashed. This is usually done when a proposal has been superseded by a revised alternative.

In the above, rounded grey boxes are actions submitted by the proposal author, rectangular colored boxes are the state of the proposal on chain, and colored circles are outcomes of voting.

Kinds Of Proposal

There are 4 kinds of governance proposal on Penumbra: signaling, emergency, parameter change, and DAO spend.

Signaling Proposals

Signaling proposals are meant to signal community consensus about something. They do not have a mechanized effect on the chain when passed; they merely indicate that the community agrees about something.

This kind of proposal is often used to agree on code changes; as such, an optional commit field may be included to specify these changes.

Emergency Proposals

Emergency proposals are meant for when immediate action is required to address a crisis, and conclude early as soon as a 2/3 majority of all active voting power votes yes.

Emergency proposals have the power to optionally halt the chain when passed. If this occurs, off-chain coordination between validators will be required to restart the chain.

Parameter Change Proposals

Parameter change proposals alter the chain parameters when they are passed. Chain parameters specify things like the base staking reward rate, the amount of penalty applied when slashing, and other properties that determine how the chain behaves. Many of these can be changed by parameter change proposals, but some cannot, and instead would require a chain halt and upgrade.

A parameter change proposal specifies both the old and the new parameters. If the current set of parameters at the time the proposal passes are an exact match for the old parameters specified in the proposal, the entire set of parameters is immediately set to the new parameters; otherwise, nothing happens. This is to prevent two simultaneous parameter change proposals from overwriting each others’ changes or merging with one another into an undesired state. Almost always, the set of old parameters should be the current parameters at the time the proposal is submitted.

DAO Spend Proposals

DAO spend proposals submit a transaction plan which may spend funds from the DAO if passed.

DAO spend transactions have exclusive capability to use two special actions which are not allowed in directly submitted user transactions: DaoSpend and DaoOutput. These actions, respectively, spend funds from the DAO, and mint funds transparently to an output address (unlike regular output actions, which are shielded). DAO spend transactions are unable to use regular shielded outputs, spend funds from any source other than the DAO itself, perform swaps, or submit, withdraw, or claim governance proposals.

Submitting A Proposal

To submit a proposal, first generate a proposal template for the kind of proposal you want to submit. For example, suppose we want to create a signaling proposal:

cargo run --release --bin pcli tx proposal template signaling --file proposal.toml

This outputs a TOML template for the proposal to the file proposal.toml, where you can edit the details to match what you’d like to submit. The template will contain relevant default fields for you to fill in, as well as a proposal ID, automatically set to the next proposal ID at the time you generated the template. If someone else submits a proposal before you’re ready to upload yours, you may need to increment this ID, because it must be the sequentially next proposal ID at the time the proposal is submitted to the chain.

Once you’re ready to submit the proposal, you can submit it. Note that you do not have to explicitly specify the proposal deposit in this action; it is determined automatically based on the chain parameters.

cargo run --release --bin pcli tx proposal submit --file proposal.toml

The proposal deposit will be immediately escrowed and the proposal voting period will start in the very next block. As the proposer, you will receive a proposal deposit NFT which can be redeemed for the proposal deposit after voting concludes, provided the proposal is not slashed. This NFT has denomination proposal_N_deposit, where N is the ID of your proposal. Note that whoever holds this NFT has exclusive control of the proposal: they can withdraw it or claim the deposit.

Making A DAO Spend Transaction Plan

In order to submit a DAO spend proposal, it is necessary to create a transaction plan. At present, the only way to specify this is to provide a rather human-unfriendly JSON-formatted transaction plan, because there is no stable human-readable representation for a transaction plan at present. This will change in the future as better tooling is developed.

For now, here is a template for a transaction plan that withdraws 100 penumbra from the DAO and sends it to a specified address (in this case, the address of the author of this document):

    {
        "fee": { "amount": { "lo": 0, "hi": 0 } },
        "actions": [
            { "daoSpend": { "value": {
                "amount": { "lo": 100000000, "hi": 0 },
                "assetId": { "inner": "KeqcLzNx9qSH5+lcJHBB9KNW+YPrBk5dKzvPMiypahA=" }
            } } },
            { "daoOutput": {
                "value": {
                    "amount": { "lo": 100000000, "hi": 0 },
                    "assetId": { "inner": "KeqcLzNx9qSH5+lcJHBB9KNW+YPrBk5dKzvPMiypahA=" }
                },
                "address": {
                    "inner": "vzZ60xfMPPwewTiSb08jk5OdUjc0BhQ7IXLgHAayJoi5mvmlnTpqFuaPU2hCBhwaEwO2c03tBbN/GVh0+CajAjYBmBq3yHAbzNJCnZS8jUs="
                }
            } }
        ]
    }

Note that the asset ID and address are specified not in the usual bech32 formats you are used to seeing, but in base64. To get your address in this format, use pcli view address 0 --base64.
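The lo/hi pair encodes a 128-bit amount in base units as two 64-bit halves. In this template, 100 penumbra appears as 100000000, which implies a base denomination of upenumbra with display exponent 6 (an inference from the template, not an official statement). For amounts that fit in 64 bits, hi is simply 0:

```shell
# 100 penumbra in base units (upenumbra), split into 64-bit halves.
display_amount=100
lo=$((display_amount * 1000000))   # 100000000, which fits in 64 bits...
hi=0                               # ...so the high half is zero
echo "lo=$lo hi=$hi"
```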

To template a DAO spend proposal using a JSON transaction plan, use pcli tx proposal template dao-spend --transaction-plan <FILENAME>.json, which will include the transaction plan in the generated proposal template. If no plan is specified, the transaction plan will be the empty transaction, which does nothing when executed.

Withdrawing A Proposal

If you want to withdraw a proposal that you have made (perhaps because a better proposal has come to community consensus), you can do so before voting concludes. Note that this does not make you immune to losing your deposit by slashing, as withdrawn proposals can still be voted on and slashed.

cargo run --release --bin pcli \
    tx proposal withdraw 0 \
    --reason "some human-readable reason for withdrawal"

When you withdraw a proposal, you consume your proposal deposit NFT, and produce a new proposal unbonding deposit NFT, which has the denomination proposal_N_unbonding_deposit, where N is the proposal ID. This, like the proposal deposit NFT, can be used to redeem the deposit at the end of voting, provided the proposal is not slashed.

Claiming A Proposal Deposit

Regardless of whether you have or have not withdrawn your proposal, once voting on the proposal concludes, you can claim your proposal deposit using the tx proposal deposit-claim subcommand of pcli. For example, if you wanted to claim the deposit for a concluded proposal number 1, you could say:

cargo run --release --bin pcli tx proposal deposit-claim 1

This will consume your proposal deposit NFT (either the original or the one you received after withdrawing the proposal) and send you back one of three different proposal result NFTs, depending on the result of the vote: proposal_N_passed, proposal_N_failed or proposal_N_slashed. If the proposal was not slashed (that is, it passed or failed), this action will also produce the original proposal deposit. Note that you can claim a slashed proposal: you will receive the slashed proposal result NFT, but you will not receive the original proposal deposit.
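The NFT denominations described above follow a simple naming pattern; the helper below is purely illustrative (it is not a pcli API, and the exact set of states is an assumption based on this section):

```python
def proposal_nft_denom(proposal_id: int, state: str) -> str:
    """Form proposal NFT denomination strings, e.g. proposal_1_passed."""
    assert state in {"unbonding_deposit", "passed", "failed", "slashed"}
    return f"proposal_{proposal_id}_{state}"

print(proposal_nft_denom(1, "passed"))             # proposal_1_passed
print(proposal_nft_denom(7, "unbonding_deposit"))  # proposal_7_unbonding_deposit
```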

Contributing To The DAO

Anyone can contribute any amount of any denomination to the Penumbra DAO. To do this, use the command pcli tx dao-deposit, like so:

cargo run --release --bin pcli tx dao-deposit 100penumbra

Funds contributed to the DAO cannot be withdrawn except by a successful DAO spend governance proposal.

To query the current DAO balance, use pcli query dao balance with the base denomination of an asset or its asset ID (display denominations are not currently accepted). For example:

cargo run --release --bin pcli query dao balance upenumbra

DAO spend proposals are only accepted for voting if they would not overdraw the current funds in the DAO at the time the proposal is submitted, so it’s worth checking this information before submitting such a proposal.

Sending Validator Funding Streams To The DAO

A validator may non-custodially send funds to the DAO, similarly to any other funding stream. To do this, add a [[funding_stream]] section to your validator definition TOML file that declares the DAO as a recipient for a funding stream. For example, your definition might look like this:

sequence_number = 0
enabled = true
name = "My Validator"
website = ""
description = "An example validator"
identity_key = "penumbravalid1s6kgjgnphs99udwvyplwceh7phwt95dyn849je0jl0nptw78lcqqvcd65j"
governance_key = "penumbragovern1s6kgjgnphs99udwvyplwceh7phwt95dyn849je0jl0nptw78lcqqhknap5"

type = "tendermint/PubKeyEd25519"
value = "tDk3/k8zjEyDQjQC1jUyv8nJ1cC1B/MgrDzeWvBTGDM="

# Send a 1% commission to this address:
recipient = "penumbrav2t1hum845ches70c8kp8zfx7nerjwfe653hxsrpgwepwtspcp4jy6ytnxhe5kwn56sku684x6zzqcwp5ycrkee5mmg9kdl3jkr5lqn2xq3kqxvp4d7gwqdue5jznk2ter2t66mk4n"
rate_bps = 100

# Send another 1% commission to the DAO:
recipient = "DAO"
rate_bps = 100

Using pcli with pclientd

Using pd

This section describes how to build and run pd, the node implementation for Penumbra:

Building pd

The node software pd is part of the same repository as pcli, so follow those instructions to clone the repo and install dependencies.

To build pd, run

cargo build --release --bin pd

Because you are building a work-in-progress version of the node software, you may see compilation warnings, which you can safely ignore.

Installing CometBFT

You’ll need to have CometBFT installed on your system to join your node to the testnet.

NOTE: Previous versions of Penumbra used Tendermint, but as of Testnet 62 (released 2023-10-10), only CometBFT v0.37.2 is supported. Do not use any version of Tendermint, as it may not work with pd.

You can download v0.37.2 from the CometBFT releases page to install a binary. If you prefer to compile from source instead, make sure you are compiling version v0.37.2.

Joining a Testnet

We provide instructions for running both fullnode deployments and validator deployments. A fullnode will sync with the network but will not have any voting power, and will not be eligible for staking or funding stream rewards. For more information on what a fullnode is, see the CometBFT documentation.

A regular validator will participate in voting and rewards, if it becomes part of the consensus set. Of course, these rewards, like all other testnet tokens, have no value.

Joining as a fullnode

To join a testnet as a fullnode, check out the tag for the current testnet, run pd testnet join to generate configs, then use those configs to run pd and cometbft. In more detail:

Resetting state

First, reset the testnet data from any prior testnet you may have joined:

cargo run --bin pd --release -- testnet unsafe-reset-all

This will delete the entire testnet data directory.

Generating configs

Next, generate a set of configs for the current testnet:

cargo run --bin pd --release -- testnet join --external-address IP_ADDRESS:26656 --moniker MY_NODE_NAME

where IP_ADDRESS is the public IP address of the node you’re running, and MY_NODE_NAME is a moniker identifying your node. Other peers will try to connect to your node over port 26656/TCP.

If your node is behind a firewall or not publicly routable for some other reason, skip the --external-address flag, so that other peers won’t try to connect to it. You can also skip the --moniker flag to use a randomized moniker instead of selecting one.

This command fetches the genesis file for the current testnet, and writes configs to a testnet data directory (by default, ~/.penumbra/testnet_data). If any data exists in the testnet data directory, this command will fail. See the section above on resetting node state.

Running pd and cometbft

Next, run pd:

cargo run --bin pd --release -- start

Then (perhaps in another terminal), run CometBFT, specifying --home:

cometbft start --home ~/.penumbra/testnet_data/node0/cometbft

Alternatively, pd and cometbft can be orchestrated with docker-compose:

cd deployments/compose/
docker-compose pull
docker-compose up --abort-on-container-exit

or via systemd:

cd deployments/systemd/
sudo cp *.service /etc/systemd/system/
# edit service files to customize for your system
sudo systemctl daemon-reload
sudo systemctl restart penumbra cometbft

Joining as a validator

After starting your node, as above, you should now be participating in the network as a fullnode. However, your validator won’t be visible to the chain yet, as its definition hasn’t been uploaded.

Validator Definitions (Penumbra)

A validator definition contains fields defining metadata regarding your validator as well as funding streams, which are Penumbra’s analogue to validator commissions.

The root of a validator’s identity is their identity key. Currently, pcli reuses the spend authorization key in whatever wallet is active as the validator’s identity key. This key is used to sign validator definitions that update the configuration for a validator.

Creating a template definition

To create a template configuration, use pcli validator definition template:

$ cargo run --release --bin pcli -- validator definition template \
    --tendermint-validator-keyfile ~/.penumbra/testnet_data/node0/cometbft/config/priv_validator_key.json \
    --file validator.toml
$ cat validator.toml
# This is a template for a validator definition.
# The identity_key and governance_key fields are auto-filled with values derived
# from this wallet's account.
# You should fill in the name, website, and description fields.
# By default, validators are disabled, and cannot be delegated to. To change
# this, set `enabled = true`.
# Every time you upload a new validator config, you'll need to increment the
# `sequence_number`.

sequence_number = 0
enabled = false
name = ''
website = ''
description = ''
identity_key = 'penumbravalid1kqrecmvwcc75rvg9arhl0apsggtuannqphxhlzl34vfamp4ukg9q87ejej'
governance_key = 'penumbragovern1kqrecmvwcc75rvg9arhl0apsggtuannqphxhlzl34vfamp4ukg9qus84v5'

[consensus_key]
type = 'tendermint/PubKeyEd25519'
value = 'HDmm2FmJhLHxaKPnP5Fw3tC1DtlBx8ETgTL35UF+p6w='

[[funding_stream]]
recipient = 'penumbrav2t1cntf73e36y3um4zmqm4j0zar3jyxvyfqxywwg5q6fjxzhe28qttppmcww2kunetdp3q2zywcakwv6tzxdnaa3sqymll2gzq6zqhr5p0v7fnfdaghrr2ru2uw78nkeyt49uf49q'
rate_bps = 100

[[funding_stream]]
recipient = "DAO"
rate_bps = 100

Then adjust fields such as the name, website, and description as desired.

The enabled field can be used to enable or disable your validator without incurring slashing penalties. Disabled validators cannot appear in the active validator set and are ineligible for rewards.

This is useful if, for example, you know your validator will not be online for a period of time, and you want to avoid an uptime violation penalty. If you are uploading your validator for the first time, you will likely want to start with it disabled until your CometBFT & pd instances have caught up to the consensus block height.

Note that the enabled field defaults to false and must be set to true to activate your validator.

In the default template, there is a funding stream declared to contribute funds to the DAO. This is not required, and may be altered or removed if you wish.

Setting the consensus key

In the command above, the --tendermint-validator-keyfile flag was used to instruct pcli to import the consensus key for the CometBFT identity. This works well if pcli and pd are used on the same machine. If you are running them in separate environments, you can omit the flag, and pd will generate a random key in the template. You must then manually update the consensus_key. You can get the correct value for consensus_key from your cometbft configs:

$ grep -A3 pub_key ~/.penumbra/testnet_data/node0/cometbft/config/priv_validator_key.json
  "pub_key": {
    "type": "tendermint/PubKeyEd25519",
    "value": "Fodjg0m1kF/6uzcAZpRcLJswGf3EeNShLP2A+UCz8lw="

Copy the string in the value field and paste that into your validator.toml, as the value field under the [consensus_key] heading.

Configuring funding streams

Unlike the Cosmos SDK, which has validators specify a commission percentage that goes to the validator, Penumbra uses funding streams, a list of pairs of commission amounts and addresses. This design allows validators to dedicate portions of their commission non-custodially – for instance, a validator could declare some amount of commission to cover their operating costs, and another that would be sent to an address controlled by a DAO.
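As a sketch of this design, commission can be modeled as a list of (recipient, rate) pairs, with rates in basis points (100 bps = 1%) as in the rate_bps fields shown above. This code is illustrative only, not Penumbra’s actual reward-distribution logic:

```python
def split_commission(reward: int, streams: list) -> dict:
    """Divide a staking reward among funding-stream recipients by basis points.

    Whatever is not claimed by a funding stream remains with delegators
    (an assumption made for illustration).
    """
    shares = {addr: reward * bps // 10_000 for addr, bps in streams}
    shares["delegators"] = reward - sum(shares.values())
    return shares

# Two 1% streams (100 bps each), as in the example validator definition:
print(split_commission(1_000_000, [("validator-ops", 100), ("DAO", 100)]))
```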

Uploading a definition

After setting up metadata, funding streams, and the correct consensus key in your validator.toml, you can upload it to the chain:

cargo run --release --bin pcli -- validator definition upload --file validator.toml

And verify that it’s known to the chain:

cargo run --release --bin pcli -- query validator list -i

However, your validator doesn’t have anything delegated to it yet, and will remain in an Inactive state until it receives enough delegations to place it in the active set of validators.

Delegating to your validator

First find your validator’s identity key:

cargo run --release --bin pcli -- validator identity

And then delegate some amount of penumbra to it:

cargo run --release --bin pcli -- tx delegate 1penumbra --to penumbravalid1g2huds8klwypzczfgx67j7zp6ntq2m5fxmctkf7ja96zn49d6s9qz72hu3

You should then see that your balance of penumbra has decreased, and that you have received some amount of delegation tokens for your validator:

cargo run --release --bin pcli -- view balance

Voting power will be calculated on the next epoch transition after your delegation takes place. Assuming that your delegation was enough to place your validator in the top N validators by voting power, it should appear in the validator list as Active after the next epoch transition. The epoch duration and the active validator limit are chain parameters, and will vary by deployment. You can find the values in use for the current chain in its genesis.json file.

Updating your validator

First fetch your existing validator definition from the chain:

cargo run --release --bin pcli -- validator definition fetch --file validator.toml

Then make any changes desired, and make sure to increase the sequence_number by at least 1! The sequence_number is a unique, increasing identifier for the version of the validator definition.

After updating the validator definition you can upload it again to update your validator metadata on-chain:

cargo run --release --bin pcli -- validator definition upload --file validator.toml

Local RPC with pclientd

Penumbra’s architecture separates public shared state from private per-user state. Each user’s state is known only to them and other parties they disclose it to. While this provides many advantages – and enables the core features of the chain – it also creates new operational challenges. Most existing blockchain tooling is built on the assumption that all chain state is available from a fullnode via RPC, allowing the tooling to be relatively stateless, obtaining its information from an RPC.

The role of pclientd, the Penumbra client daemon, is to restore this paradigm, allowing third-party tooling to query both public and private state via RPC, and to handle all of the Penumbra-specific cryptography. It does this by:

  • scanning and synchronizing a local, decrypted copy of all of a specific user’s private data;
  • exposing that data through a “view service” RPC that can query state and plan and build transactions;
  • proxying requests for public chain state to its fullnode;
  • optionally authorizing and signing transactions if configured with a spending key.

Client software can be written in any language with GRPC support, using pclientd as a single endpoint for all requests.

   ┌────────┐  ┌─────────────────┐  ┌────────┐
   │  Client│  │         pclientd│  │Penumbra│
   │Software│◀─┼─┐             ┌─┼─▶│Fullnode│
   └────────┘  │ │             │ │  └────────┘
            ╭  │ │ ┌───────┐   │ │
     public │  │ │ │grpc   │   │ │
 chain data │  │ ├▶│proxy  │◀──┤ │
            ╰  │ │ └───────┘   │ │
               │ │             │ │
            ╭  │ │ ┌───────┐   │ │
    private │  │ │ │view   │   │ │
  user data │  │ ├▶│service│◀──┘ │
            ╰  │ │ └───────┘     │
               │ │               │
            ╭  │ │ ┌ ─ ─ ─ ┐     │
   spending │  │ │  custody      │
 capability │  │ └▶│service│     │
 (optional) ╰  │    ─ ─ ─ ─      │


Currently, pclientd does not support any kind of transport security or authentication mechanism. Do not expose its RPC to untrusted access. We intend to remedy this gap in the future.

Configuring pclientd

First, install pclientd by following the instructions for building pcli, but building the pclientd binary instead:

cargo build --release --bin pclientd

Generating configs

pclientd can run in either view mode, with only a full viewing key, or custody mode, with the ability to sign transactions.

To initialize pclientd in view mode, run

cargo run --release --bin pclientd -- init --view FULL_VIEWING_KEY

The FULL_VIEWING_KEY can be obtained from the config.toml generated by pcli init.

To initialize pclientd in custody mode, run

cargo run --release --bin pclientd -- init --custody -

to read a seed phrase from stdin, or

cargo run --release --bin pclientd -- init --custody "SEED PHRASE"

to specify the seed phrase on the command line.

Authorization policy

When run in custody mode, pclientd supports configurable authorization policy for transaction signing. The default set of policies created by init --custody is an example, and needs to be edited before use.

For example, pclientd init --custody might generate output like

full_viewing_key = 'penumbrafullviewingkey1f33fr3zrquh869s3h8d0pjx4fpa9fyut2utw7x5y7xdcxz6z7c8sgf5hslrkpf3mh8d26vufsq8y666chx0x0su06ay3rkwu74zuwqq9w8aza'
grpc_url = ''
bind_addr = ''

[kms_config]
spend_key = 'penumbraspendkey1e9gf5g8jfraap4jqul7e80vv0zrnwpsm4ke0df38ejrfh430nu4s9gc22d'

[[kms_config.auth_policy]]
type = 'DestinationAllowList'
allowed_destination_addresses = ['penumbrav2t13vh0fkf3qkqjacpm59g23ufea9n5us45e4p5h6hty8vg73r2t8g5l3kynad87u0n9eragf3hhkgkhqe5vhngq2cw493k48c9qg9ms4epllcmndd6ly4v4dw2jcnxaxzjqnlvnw']

[[kms_config.auth_policy]]
type = 'OnlyIbcRelay'

[[kms_config.auth_policy]]
type = 'PreAuthorization'
method = 'Ed25519'
required_signatures = 1
allowed_signers = ['+Osq5OiWKos57KigDjd3XCG/YLUOSUbuBly4LBBpJTg=']

The kms_config section controls the configuration of the (software) key management system. Each kms_config.auth_policy section is a separate policy that must be satisfied for transaction authorization to succeed. To allow any transaction to be authorized, simply delete all the policies.
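The conjunction semantics can be sketched as follows; the policy representation here is hypothetical, purely to illustrate that every configured policy must approve a plan:

```python
def authorize(plan: dict, policies: list) -> bool:
    """A plan is authorized only if every configured policy approves it.

    With an empty policy list, any plan is authorized, matching the
    behavior described above when all policies are deleted.
    """
    return all(policy(plan) for policy in policies)

def destination_allowlist(allowed: set):
    # Hypothetical stand-in for the DestinationAllowList policy.
    return lambda plan: all(d in allowed for d in plan.get("destinations", []))

plan = {"destinations": ["penumbrav2t1alice..."]}
print(authorize(plan, []))                                                 # True
print(authorize(plan, [destination_allowlist({"penumbrav2t1alice..."})]))  # True
print(authorize(plan, [destination_allowlist({"penumbrav2t1bob..."})]))    # False
```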

Destination allowlisting

type = 'DestinationAllowList'
allowed_destination_addresses = ['penumbrav2t13vh0fkf3qkqjacpm59g23ufea9n5us45e4p5h6hty8vg73r2t8g5l3kynad87u0n9eragf3hhkgkhqe5vhngq2cw493k48c9qg9ms4epllcmndd6ly4v4dw2jcnxaxzjqnlvnw']

This policy only allows transactions that send funds to the addresses on the allowlist. Transactions sending funds to any other address will be rejected.


type = 'OnlyIbcRelay'

This policy only allows transactions with the following actions: IbcAction, Spend, Output. The latter two are required to pay fees, so this policy should be combined with a DestinationAllowList to prevent sending funds outside of the relayer’s account.


type = 'PreAuthorization'
method = 'Ed25519'
required_signatures = 1
allowed_signers = ['+Osq5OiWKos57KigDjd3XCG/YLUOSUbuBly4LBBpJTg=']

This policy only allows transactions submitted with a pre-authorization Ed25519 signature made with at least required_signatures signatures from the allowed_signers list. This allows clients to authenticate authorization requests to pclientd using standard Ed25519 signatures rather than Penumbra-specific decaf377-rdsa signatures. In the future, more pre-authorization methods may be added (e.g., WebAuthn).
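The signature-threshold check can be sketched like so; this is illustrative only, as the real verification also cryptographically verifies each Ed25519 signature:

```python
def preauthorization_ok(signers: list, allowed_signers: set,
                        required_signatures: int) -> bool:
    """Count distinct allowlisted signers and compare against the threshold."""
    valid = {s for s in signers if s in allowed_signers}
    return len(valid) >= required_signatures

allowed = {"+Osq5OiWKos57KigDjd3XCG/YLUOSUbuBly4LBBpJTg="}
print(preauthorization_ok(list(allowed), allowed, 1))  # True
print(preauthorization_ok([], allowed, 1))             # False
```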

Making RPC requests

pclientd exposes a GRPC and GRPC-web endpoint at its bind_addr. Several services are available.

To interactively explore requests and responses, try running GRPCUI locally or using Buf Studio in the browser. Buf Studio has a nicer user interface but does not (currently) support streaming requests. The Buf Studio link is preconfigured to make requests against a local pclientd instance with the default bind_addr, but can be aimed at any endpoint.

Accessing public chain state

pclientd has an integrated GRPC proxy, routing requests about public chain state to the fullnode it’s connected to.

Documentation on these RPCs is available; follow the links in Buf Studio for more information.

Accessing private chain state

Access to a user’s private state is provided by the ViewService RPC.

In addition to ordinary queries, like Balances, which gets a user’s balances by account, the RPC also contains utility methods that allow computations involving cryptography. For instance, the AddressByIndex request computes a public address from an account index, and the IndexByAddress request decrypts an address to its private index.

Finally, the view service can plan and build transactions, as described in the next section.

Requesting transaction authorization

If pclientd was configured in custody mode, it exposes a CustodyService.

This allows authorization of a TransactionPlan, as described in the next section.

Building Transactions

Using the view and custody services to construct a transaction has four steps.

Plan the Transaction

Using the TransactionPlanner RPC in the view service, compute a TransactionPlan.

This RPC translates a general intent, like “send these tokens to this address” into a fully deterministic plan of the exact transaction, with all spends and outputs, all blinding factors selected, and so on.

Authorize the Transaction

With a TransactionPlan in hand, use the Authorize RPC to request authorization of the transaction from the custody service.

Note that authorization happens on the cleartext transaction plan, not the shielded transaction, so that the custodian can inspect the transaction before signing it.

Build the Transaction

With the TransactionPlan and AuthorizationData in hand, use the WitnessAndBuild RPC to have the view service build the transaction, using the latest witness data to construct the ZK proofs.

Broadcast the Transaction

With the resulting shielded Transaction complete, use the BroadcastTransaction request to broadcast the transaction to the network.

The await_detection parameter will wait for the transaction to be confirmed on-chain. Using await_detection is a simple way to ensure that different transactions can’t conflict with each other.
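The four steps chain together as shown below. The service objects and method names here are illustrative stand-ins for the generated gRPC stubs, not the real client API:

```python
class MockView:
    """Stand-in for the view service RPCs described above."""
    def transaction_planner(self, intent):
        return {"plan": intent}                       # 1. plan the transaction
    def witness_and_build(self, plan, auth):
        return {"tx": plan, "auth": auth}             # 3. build with latest witness data
    def broadcast_transaction(self, tx, await_detection=True):
        return "confirmed" if await_detection else "submitted"  # 4. broadcast

class MockCustody:
    """Stand-in for the custody service's Authorize RPC."""
    def authorize(self, plan):
        return {"sig": "..."}                         # 2. authorize the cleartext plan

def build_and_submit(view, custody, intent):
    plan = view.transaction_planner(intent)
    auth = custody.authorize(plan)
    tx = view.witness_and_build(plan, auth)
    return view.broadcast_transaction(tx, await_detection=True)

print(build_and_submit(MockView(), MockCustody(), "send 1penumbra to ..."))  # confirmed
```

Note that only the plan, not the built transaction, crosses the custody boundary, reflecting the design point above that authorization happens on the cleartext plan.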


This section is for developers working on pd itself.

Devnet Quickstart

This page describes a quickstart method for running pd+cometbft to test changes during development.

To start, you’ll need to install CometBFT v0.37.

Generating configs

To generate a clean set of configs, run

cargo run --release --bin pd -- testnet generate

This will write configs to ~/.penumbra/testnet_data/.

Running pd

You’ll probably want to set RUST_LOG. Here’s one suggestion that’s quite verbose:

# Optional. Expect about 20GB/week of log data for pd with settings below.
export RUST_LOG="info,pd=debug,penumbra=debug,jmt=debug"

To run pd, run

cargo run --release --bin pd -- start

This will start but won’t do anything yet, because CometBFT isn’t running.

Running cometbft

To run CometBFT, run

cometbft --home ~/.penumbra/testnet_data/node0/cometbft/ start

in another terminal window.

Running pcli

To interact with the chain, first do

cargo run --release --bin pcli -- view reset

and then pass the -n flag to any commands you run to point pcli at your local node, e.g.,

cargo run --bin pcli -- -n view balance

By default, pd testnet generate uses the latest snapshot of the Discord’s faucet channel, so if you posted your address there more than a week or two ago, you should already have an allocation in your local devnet.

If not, reset the state as below, and edit the genesis.json to add your address.

Resetting and restarting

After making changes, you may want to reset and restart the devnet:

cargo run --release --bin pd -- testnet unsafe-reset-all

You’ll probably also want to reset your wallet state:

cargo run --release --bin pcli -- view reset

At this point you’re ready to generate new configs, and restart both pd and cometbft. The order they’re started in doesn’t particularly matter for correctness, because cometbft will retry connecting to the ABCI server until it succeeds.

Optional: running smoke-tests

Once you have a working devnet running, you should be able to run the smoke tests successfully. This can be useful if you are looking to contribute to Penumbra, or if you need to check that your setup is correct.

To run the smoke tests:

  1. Make sure you have a devnet running (see previous steps)
  2. Run integration tests:
PENUMBRA_NODE_PD_URL= PCLI_UNLEASH_DANGER=yes cargo test --package pcli -- --ignored --test-threads 1

SQLite compilation setup

The view server uses SQLite via sqlx as its backing store. The type-safe query macros require compile-time information about the database schemas. Normally, this information is cached in the crate’s sqlx-data.json, and nothing extra is required to build.

However, when editing the view server’s database code, it’s necessary to work with a development database:

  1. You’ll need sqlx-cli installed with the correct features: cargo install sqlx-cli --features sqlite

  2. The database structure is defined in the migrations/ directory of the view crate.

  3. Set the DATABASE_URL environment variable to point to the SQLite location. For instance,

    export DATABASE_URL="sqlite:///tmp/pclientd-dev-db.sqlite"

    will set the shell environment variable to the same one set in the project’s .vscode/settings.json.

  4. From the view directory, run cargo sqlx database setup to create the database and run migrations.

  5. From the view directory, run cargo sqlx prepare -- --lib to regenerate the sqlx-data.json file that allows offline compilation.

Building documentation

The protocol docs and the guide (this document) are built using mdBook and auto-deployed on pushes to main. To build locally:

  1. Install the requirements: cargo install mdbook mdbook-katex mdbook-mermaid
  2. Run mdbook serve from docs/protocol (for the protocol spec) or from docs/guide (for this document).

The Rust API docs can be built with ./deployments/scripts/rust-docs. The landing page, the top-level index.html, is handled as a special case: if you add new crates by appending a -p <crate_name> to the rust-docs script, then you must also rebuild the index page.

You’ll need to use the nightly toolchain for Rust to build the docs. In some cases, you’ll need a specific version. To configure locally:

rustup toolchain install nightly-2023-05-15

CI will automatically rebuild all our docs on merges into main.

Maintaining protobuf specs

The Penumbra project dynamically generates code for interfacing with gRPC. The following locations within the repository are relevant:

  • proto/penumbra/**/*.proto, the developer-authored spec files
  • crates/proto/src/gen/*.rs, the generated Rust code files
  • proto/go/**/*.pb.go, the generated Go code files
  • tools/proto-compiler/, the build logic for generating the Rust code files

We use buf to auto-publish the protobuf schemas to the Buf Schema Registry, and to generate Go and TypeScript packages. The Rust code files are generated with our own tooling, located at tools/proto-compiler.

Installing protoc

The protoc tool is required to generate our protobuf specs via tools/proto-compiler. We mandate the use of a specific major version of the protoc tool, to make outputs predictable. Currently, the supported version is 24.x. Obtain the most recent pre-compiled binary from the protoc website for that major version. After installing, run protoc --version and confirm you’re running 24.4 or newer. Don’t install protoc from package managers such as apt, as those versions are often outdated and will not work with Penumbra.

To install the protoc tool from the zip file, extract it to a directory on your PATH:

unzip -d ~/.local/

Installing buf

The buf tool is required to update lockfiles used for version management in the Buf Schema Registry. Visit the buf download page to obtain a version. After installing, run buf --version and confirm you’re running 1.27.0 or newer.

Building protos

From the top-level of the git repository:


Then run git status to determine whether any changes were made. The build process is deterministic, so regenerating multiple times from the same source files should not change the output.

If the generated output would change in any way, CI will fail, prompting the developer to commit the changes.

Updating buf lockfiles

We pin specific versions of upstream Cosmos deps in the buf lockfile for our proto definitions. Doing so avoids a tedious chore of needing to update the lockfile frequently when the upstream BSR entries change. We should review these deps periodically and bump them, as we would any other dependency.

cd proto/penumbra
# edit buf.yaml to remove the tags, i.e. suffix `:<tag>`
buf mod update

Then commit and PR in the results.


Metrics are an important part of observability, allowing us to understand what the Penumbra software is doing. Penumbra Labs runs Grafana instances for the public deployments.


Viewing Metrics

TODO: add details on how to use Grafana:

  • link to the dashboard for the current testnet;
  • instructions on how to run Grafana + Prometheus for local dev setup (ideally this could work without requiring that pd itself is Dockerized, since local development is often more convenient outside of docker);
  • instructions on how to commit dashboards back to the repo.

Adding Metrics

We use a common structure for organizing metrics code throughout the penumbra workspace. Each crate that uses metrics has a top-level metrics module, which is private to the crate. That module contains:

  • a re-export of the entire metrics crate: pub use metrics::*;
  • &'static str constants for every metrics key used by the crate;
  • a pub fn register_metrics() that registers and describes all of the metrics used by the crate;

Finally, the register_metrics function is publicly re-exported from the crate root.

The only part of this structure visible outside the crate is the register_metrics function in the crate root, allowing users of the library to register and describe the metrics it uses on startup.

Internally to the crate, all metrics keys are in one place, rather than being scattered across the codebase, so it’s easy to see what metrics there are. Because the metrics module re-exports the contents of the metrics crate, doing use crate::metrics; is effectively a way to monkey-patch the crate-specific constants into the metrics crate, allowing code like:

metrics::increment_counter!(
    metrics::MEMPOOL_CHECKTX_TOTAL,
    "kind" => "new",
    "code" => "1"
);
The metrics keys themselves should:

  • follow the Prometheus metrics naming guidelines
  • have an initial prefix of the form penumbra_CRATE, e.g., penumbra_stake, penumbra_pd, etc;
  • have some following module prefix that makes sense relative to the other metrics in the crate.

For instance:

pub const MEMPOOL_CHECKTX_TOTAL: &str = "penumbra_pd_mempool_checktx_total";
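As a quick illustration of the naming convention (this check is illustrative, not part of the codebase): a key must start with a penumbra_CRATE prefix and carry at least one further module/name segment.

```python
import re

def follows_convention(key: str) -> bool:
    """penumbra_CRATE prefix followed by at least one module/name segment."""
    return re.fullmatch(r"penumbra_[a-z0-9]+(?:_[a-z0-9]+)+", key) is not None

print(follows_convention("penumbra_pd_mempool_checktx_total"))  # True
print(follows_convention("checktx_total"))                      # False (no crate prefix)
```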

Backing up Grafana

After being changed, Grafana dashboards should be backed up to the repository for posterity and redeployment.

Grafana has an import/export feature that we use for maintaining our dashboards.

  1. Export the dashboard as JSON with the default settings
  2. Rename the JSON file and copy into the repo (config/grafana/dashboards/)
  3. PR the changes into main, and confirm on preview post-deploy that it works as expected.

Editing metrics locally

To facilitate working with metrics locally, first run a pd node on your machine with the metrics endpoint exposed. Then, you can spin up a metrics sidecar deployment:

cd deployments/compose
just metrics

To add new Grafana visualizations, open http://localhost:3000 and edit the existing dashboards. When you’re happy with what you’ve got, follow the “Backing up Grafana” instructions above to save your work.

Zero-Knowledge Proofs

Test Parameter Setup

Penumbra’s zero-knowledge proofs require circuit-specific parameters to be generated in a preprocessing phase. Two keys are generated for each circuit: the proving key, used by the prover, and the verifying key, used by the verifier.

For development purposes only, we have a crate in tools/parameter-setup that lets one generate the proving and verifying keys:

cd tools/parameter-setup
cargo run

The verifying and proving keys for each circuit will be created in a serialized form in the proof-params/src/gen folder. Note that the keys will be generated for all circuits, so you should commit only the keys for the circuits that have changed.

The proving keys are tracked using Git-LFS. The verifying keys are stored directly in git since they are small (around ~1 KB each).

Adding a new Proof

To add a new circuit to the parameter setup, you should modify tools/parameter-setup/src/ before running cargo run.

Then edit penumbra-proof-params to reference the new parameters created in proof-params/src/gen.

Circuit Benchmarks

We have benchmarks for all proofs in the penumbra-bench crate. You can run them via:

cargo bench

Performance as of commit ce2d319bd5534fd28600227b28506e32b8504493 benchmarked on a 2023 Macbook Pro M2 (12 core CPU) with 32 GB memory and the parallel feature enabled:

| Proof                | Number of constraints | Proving time |
|----------------------|-----------------------|--------------|
| Delegator vote       | 36,723                | 389ms        |
| Undelegate claim     | 14,776                | 139ms        |
| Nullifier derivation | 394                   | 15ms         |

zk-SNARK Ceremony Benchmarks

Run benchmarks for the zk-SNARK ceremony via:

cd crates/crypto/proof-setup
cargo bench

Performance as of commit 1ed963657c16e49c65a8e9ecf998d57fcce8f200 benchmarked on a 2023 Macbook Pro M2 (12 core CPU) with 32 GB memory using 37,061 constraints (SwapClaim circuit) (note that in practice the performance will be based on the next power of two, for the most part):

Phase 1 run        71.58 s
Phase 1 check      147.41 s
Phase transition   131.72 s
Phase 2 run        14.76 s
Phase 2 check      0.21 s
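The padding to the next power of two mentioned above can be computed directly. This small helper (not part of the Penumbra tooling) shows the effective circuit size for a given constraint count:

```shell
# Round a constraint count up to the next power of two, which is the
# effective circuit size for the setup ceremony.
next_pow2() {
  local n=$1 p=1
  while [ "$p" -lt "$n" ]; do p=$((p * 2)); done
  echo "$p"
}

next_pow2 37061   # SwapClaim: effectively 65536 constraints
```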

Working with gRPC for Penumbra

The Penumbra pd application exposes a gRPC service for integration with other tools, such as pcli or the web extension. A solid understanding of how the gRPC methods work is helpful when building software that interoperates with Penumbra.

Using gRPC UI

The Penumbra Labs team runs gRPC UI instances for testnet deployments:

You can use this interface to perform queries against the relevant chain. It’s also possible to run gRPC UI locally on your machine, to connect to a local devnet.

Using Buf Studio

The Buf Studio webapp provides a polished GUI and comprehensive documentation. However, a significant limitation for use with Penumbra is that it lacks support for streaming requests, such as penumbra.client.v1alpha1.CompactBlockRangeRequest.

To get started with Buf Studio, you can use the publicly available gRPC endpoint from the testnet deployments run by Penumbra Labs:

  • For the current testnet, use
  • For ephemeral devnets, use

Set the request type to gRPC-web at the bottom of the screen. You can then select a Method and explore the associated services. Click Send to submit the request and view response data in the right-hand pane.

Interacting with local devnets

Regardless of which interface you choose, you can connect to an instance of pd running on your machine, which can be useful while adding new features. First, make sure you’ve joined a testnet by setting up a node on your local machine. Once it’s running, you can connect directly to the pd port via http://localhost:8080.

Alternatively, you can use pclientd. First, make sure you’ve configured pclientd locally with your full viewing key. Once it’s running, you can connect directly to the pclientd port via http://localhost:8081.
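As a quick command-line alternative to the GUIs above, you can point grpcurl (a generic gRPC client, not part of the Penumbra tooling) at a local pd instance. This sketch assumes pd exposes gRPC server reflection on port 8080; the service name in the second command is an assumption and may differ between versions:

```shell
# List services via gRPC server reflection on a local pd instance
# (assumes pd is running locally with reflection enabled).
grpcurl -plaintext localhost:8080 list

# Describe one of the services discovered above (name is illustrative).
grpcurl -plaintext localhost:8080 describe penumbra.client.v1alpha1.ObliviousQueryService
```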

Testing IBC

This guide explains how to work with IBC functionality while developing Penumbra.

Working with a local devnet

Use this approach while fixing bugs or adding features. Be aware that creating a new channel on the public Osmosis testnet creates lingering state on that counterparty chain. Be respectful.

  1. Create a devnet. Make note of the randomly generated chain id emitted in the logs, as we’ll need it to configure Hermes.
  2. Checkout the Penumbra fork of Hermes, and build it with cargo build --release.
  3. Edit the config-devnet-osmosis.toml file to use the chain id for your newly created devnet.
  4. Add Osmosis key material to Hermes. Look up the Osmosis recovery phrase stored in shared 1Password, then:
echo "SEED PHRASE" > ./mnemonic
cargo run --release --bin hermes -- --config config-devnet-osmosis.toml keys add --chain osmo-test-5 --mnemonic-file ./mnemonic
  5. Create a new channel for this devnet:
cargo run --release --bin hermes -- --config config-devnet-osmosis.toml create channel --a-chain $PENUMBRA_DEVNET_CHAIN_ID --b-chain osmo-test-5 --a-port transfer --b-port transfer --new-client-connection

Hermes will run for a while, emit channel info, and then exit.

  6. Finally, run Hermes:
cargo run --release --bin hermes -- --config config-devnet-osmosis.toml start

You may see a spurious error about “signature key not found: penumbra-wallet: cannot find key file”. Ignore that error: we haven’t implemented fees yet, so no Penumbra keys are required in Hermes. Hermes will emit a summary of the channel info, something like:

# Chain: penumbra-testnet-tethys-8777cb20
  - Client: 07-tendermint-0
  - Client: 07-tendermint-1
    * Connection: connection-0
      | State: OPEN
      | Counterparty state: OPEN
      + Channel: channel-0
        | Port: transfer
        | State: OPEN
        | Counterparty: channel-1675
# Chain: osmo-test-5
  - Client: 07-tendermint-1029
    * Connection: connection-939
      | State: OPEN
      | Counterparty state: OPEN
      + Channel: channel-1675
        | Port: transfer
        | State: OPEN
        | Counterparty: channel-0

Make note of the channels on both the primary (Penumbra devnet) and counterparty (Osmosis testnet) chains. You can use those values to send funds from the Penumbra devnet to the counterparty:

cargo run --release --bin pcli -- -n http://localhost:8080 view reset
# check what funds are available
cargo run --release --bin pcli -- -n http://localhost:8080 view balance
cargo run --release --bin pcli -- -n http://localhost:8080 tx withdraw --to osmo1kh0fwkdy05yp579d8vczgharkcexfw582zj488 --channel 0 --timeout-height 5-2900000 100penumbra

See the IBC pcli docs for more details.

Making Osmosis -> Penumbra transfers via rly

Transferring from Osmosis to Penumbra requires making an Osmosis transaction, and the osmosisd CLI tooling unfortunately does not work for IBC transfers. To move funds from an Osmosis testnet to a Penumbra chain, use the rly binary from the cosmos/relayer repo. Then run:

# inside the penumbra repo:
cd deployments/relayer
# refresh the chain id for local devnet:
./generate-configs local
rly config init --memo "PenumbraIBC"
rly chains add -f configs/penumbra-local.json
rly chains add -f configs/osmosis-testnet.json
# use the seed phrase from 1password for the osmosis key:
rly keys restore osmosis-testnet default "SEED PHRASE"
rly keys add penumbra-local default
# create an IBC path between the two chains
rly paths add $PENUMBRA_DEVNET_CHAIN_ID osmo-test-5 penumbra-osmosis-dev

# finally, make the transfer
rly transact transfer osmosis-testnet penumbra-local 100uosmo penumbrav2t1jp4pryqqmh65pq8e7zwk6k2674vwhn4qqphxjk0vukxln0crmp2tdld0mhavuyrspwuajnsk5t5t33u2auxvheunr7qde4l068ez0euvtu08z7rwj6shlh64ndz0wvz7mfqdcd channel-1675 -y 10000 -c 2h
2023-09-17T20:19:47.510916Z	info	Successfully sent a transfer	{"src_chain_id": "osmo-test-5", "dst_chain_id": "penumbra-testnet-tethys-8777cb20", "send_result": {"successful_src_batches": 1, "successful_dst_batches": 0, "src_send_errors": "<nil>", "dst_send_errors": "<nil>"}}
2023-09-17T20:19:47.510938Z	info	Successful transaction	{"provider_type": "cosmos", "chain_id": "osmo-test-5", "packet_src_channel": "channel-1675", "packet_dst_channel": "channel-0", "gas_used": 117670, "fees": "3717uosmo", "fee_payer": "osmo1kh0fwkdy05yp579d8vczgharkcexfw582zj488", "height": 2720869, "msg_types": ["/ibc.applications.transfer.v1.MsgTransfer"], "tx_hash": "67C55AD4FC6855579C2B4B421F8F02B2B2BFE6BA0D5C1553C13BBB7DFFAD781D"}

You can view account history for the shared Osmosis testnet account here:

Updating Hermes config for a new testnet

On every release of a new Penumbra testnet, we must update the Hermes relayer to establish a channel between the new testnet and the target counterparty test chains.

  1. Checkout the Penumbra fork of Hermes, and build it with cargo build --release.
  2. Edit the config-osmosis-testnet.toml file to use the chain id of the new Penumbra testnet, e.g. penumbra-testnet-dione.
  3. Add Osmosis key material to Hermes. Look up the Osmosis recovery phrase stored in shared 1Password, then:
echo "SEED PHRASE" > ./mnemonic
cargo run --release --bin hermes -- --config config-osmosis-testnet.toml keys add --chain osmo-test-5 --mnemonic-file ./mnemonic
  4. Create a new channel for this testnet:
cargo run --release --bin hermes -- --config config-osmosis-testnet.toml create channel --a-chain $PENUMBRA_TESTNET_CHAIN_ID --b-chain osmo-test-5 --a-port transfer --b-port transfer --new-client-connection

Hermes will run for a while, emit channel info, and then exit.

  5. Run Hermes:
cargo run --release --bin hermes -- --config config-osmosis-testnet.toml start

Use the IBC user docs to make a test transaction and confirm that relaying is working. In the future, we should consider posting the newly created channel to the IBC docs guide, so community members can use it.


Resources

This page links to various resources that are helpful for working with and understanding Penumbra:


Discord

The primary communication hub is our Discord; click the link to join the discussion there.


Guide

Documentation on how to use pcli, how to run pd, and how to do development can be found at

Protocol Specification

The protocol specification is rendered at

API documentation

The API documentation is rendered at

Protobuf documentation

The protobuf documentation is rendered at

Talks and presentations

These talks were given at various conferences and events, describing different aspects of the Penumbra ecosystem.