Penumbra Guide

Penumbra is a fully private, cross-chain proof-of-stake network and decentralized exchange for the Cosmos and beyond. Penumbra brings privacy to IBC, allowing anyone to shield any IBC asset just by transferring it into Penumbra’s multi-asset shielded pool. Within Penumbra, users can transact, stake, swap, or marketmake without broadcasting their personal information to the world.

Unlike a transparent chain, where all information is public, Penumbra is end-to-end encrypted.

Using Penumbra on the web

The easiest way to get started using Penumbra is with the Penumbra web extension. The web extension runs entirely locally, and contains an embedded ultralight node that syncs and decrypts only the data visible to your wallet. Websites can request to connect to your wallet and query your data.

The Using Penumbra on the web chapter describes how to use Penumbra on the web.

Using Penumbra from the command line

Penumbra also has a command-line client, pcli. Some protocol features, such as threshold custody for shielded multisigs, do not yet have support in web frontends and are only accessible via the command line.

The Using Penumbra from the command line chapter describes how to use pcli.

Running a node

Running a node is not necessary to use the protocol. Both the web extension and pcli are designed to operate with any RPC endpoint. However, we’ve tried to make it as easy as possible to run nodes so that users can host their own RPC.

There are two kinds of Penumbra nodes:

  • Penumbra fullnodes run pd and cometbft to synchronize and verify the entire chain state, as described in Running a node: pd.
  • Penumbra ultralight nodes run pclientd to scan, decrypt, and synchronize a specific wallet’s data, as well as build and sign transactions, as described in Running a node: pclientd.

The web extension and pcli embed the view and custody functionality provided by pclientd, so it is not necessary to run pclientd to use them. Instead, pclientd is intended to act as a local RPC for programmatic tooling (e.g., trading bots) not written in Rust that cannot easily embed the code for working with Penumbra’s shielded cryptography.

Participating in development

Penumbra is a decentralized, open-source protocol, built in public.

The Participating in development chapter has developer documentation for working on the protocol itself.


The Resources chapter has links to other resources about the Penumbra protocol.

Using the web extension

This section describes how to use the Penumbra Wallet web extension, a GUI client for Penumbra.

Currently, the web extension supports only a subset of the functionality of the command-line client, pcli.

Installing the extension

The Penumbra Wallet web extension only supports the Google Chrome browser. You must run Chrome in order to follow the instructions below.

  1. Visit the Web Store page for the Penumbra Wallet, and click Add to Chrome to install it.
  2. Navigate to the dApp website for the extension, and click Connect in the top-right corner.
  3. Click Get started to proceed with wallet configuration.

Generating a wallet

You’ll be offered the option to import a pre-existing wallet. If you don’t already have one, choose Create a new wallet. During the guided tutorial, you’ll need to set a passphrase to protect your wallet. The passphrase is not the same as the recovery phrase: the passphrase restricts access to the web wallet on your computer, while the recovery phrase can be used to import your wallet on a fresh installation or on a different machine. Make sure to store both the passphrase and the recovery phrase securely, for example in a password manager.

Re-enter portions of the recovery phrase when prompted, to confirm that you’ve saved it properly. Then you’ll be taken to a screen that shows an initial synchronization process with the most recent testnet:

Obtaining funds

In order to use the testnet, it’s first necessary for you to get some testnet tokens. To obtain your address, click on the extension icon. The drop-down should display your wallet address and a button to copy it to the clipboard. Next, join our Discord and post your address in the #testnet-faucet channel. We’ll send your address some tokens on the testnet for you to send to your friends! :)

In addition, addresses posted to the testnet faucet are periodically rolled into the testnet genesis file, so that in future testnets your address will have testnet tokens pre-loaded.

Just keep in mind: testnet tokens do not have monetary value, and in order to keep the signal-to-noise ratio high on the server, requests for tokens in other channels will be deleted without response. Please do not DM Penumbra Labs employees asking for testnet tokens; the correct venue is the dedicated channel.

Creating transactions

Now that you’ve got the web wallet configured, let’s use it to send a transaction. Navigate to the dApp website and click Connect, then authorize the extension to work with the site. After doing so, you’ll see buttons for actions such as Receive, Send, and Exchange.

As of Testnet 53, only the Send action is supported. Check back on subsequent versions to follow progress as we implement more advanced functionality in the web wallet.

Upgrading to a new testnet

When a new testnet is released, you’ll need to clear the existing state from the extension, much like running pcli view reset is required on the command-line. To synchronize with a new testnet:

  1. Click the Penumbra Wallet option in the extensions drop-down menu, next to the URL bar.
  2. Unlock the wallet by providing your passphrase, if prompted.
  3. Click the gear icon in the top right corner of the overlay.
  4. Choose Advanced -> Clear Cache -> Confirm.

Then navigate to the dApp website again and reauthorize the connection. The extension will automatically sync with the new chain.

Updating to a new version of the extension

The extension should be automatically updated every time a new version is released. You can view the latest version of the extension at the Chrome Web Store. To force a check for updates:

  1. Click the three-dot icon in the top right corner of the browser.
  2. From the drop-down menu, choose Extensions -> Manage Extensions.
  3. Select Update on the top panel.

After updating the extension manually, it may be helpful to clear the local cache, as described above.

Using pcli

This section describes how to use pcli, the command line client for Penumbra:

Penumbra is a private blockchain, so the public chain state does not reveal any private user data. By default, pcli includes a view service that synchronizes with the chain and scans with a viewing key.

Please submit any feedback and bug reports

Thank you for helping us test the Penumbra network! If you have any feedback, please let us know in the #testnet-feedback channel on our Discord. We would love to know about bugs, crashes, confusing error messages, or any of the many other things that inevitably won’t quite work yet. Have fun! :)

Diagnostics and Warnings

By default, pcli prints a warning message to the terminal, to be sure that people understand that this is unstable, unfinished, pre-release software. To disable this warning, export the PCLI_UNLEASH_DANGER environment variable.

Installing pcli

Download prebuilt binaries from the Penumbra releases page on GitHub. Make sure to use the most recent version available, as the version of pcli must match the software currently running on the network.

Make sure to choose the correct platform for your machine. Alternatively, you can use a one-liner install script:

curl --proto '=https' --tlsv1.2 -LsSf | sh

# confirm the pcli binary is installed by running:
pcli --version

The installer script will place the binary in $HOME/.cargo/bin/.

If you see an error message containing GLIBC, then your system is not compatible with the precompiled binaries. See details below.

Platform support

Only modern versions of Linux and macOS are supported, such as:

  • Ubuntu 22.04
  • Debian 12
  • Fedora 39
  • macOS 14

When checking the locally installed binary via pcli --version, you may see an error message similar to:

pcli: /lib/x86_64-linux-gnu/ version `GLIBCXX_3.4.30' not found (required by pcli)
pcli: /lib/x86_64-linux-gnu/ version `GLIBCXX_3.4.29' not found (required by pcli)
pcli: /lib/x86_64-linux-gnu/ version `GLIBC_2.32' not found (required by pcli)
pcli: /lib/x86_64-linux-gnu/ version `GLIBC_2.34' not found (required by pcli)
pcli: /lib/x86_64-linux-gnu/ version `GLIBC_2.33' not found (required by pcli)

If you see that message, you must either switch to a supported platform, or else build the software from source. If you need to use Windows, consider using WSL.

Generating a Wallet

On first installation of pcli, you will need to generate a fresh wallet to use with Penumbra.

The pcli init command will generate a configuration file, depending on the custody backend used to store keys.

There are currently three custody backends:

  1. The softkms backend is a good default choice for low-security use cases. It stores keys unencrypted in a local config file.
  2. The threshold backend is a good choice for high-security use cases. It provides a shielded multisig, with key material sharded over multiple computers.
  3. The view-only backend has no custody at all and only has access to viewing keys.

Furthermore, softkms and threshold allow encrypting the spend-key related material with a password.

After running pcli init with one of the backends described above, pcli is ready to use.

Shielded accounts

Penumbra’s design automatically creates 2^32 (about four billion) numbered accounts controlled by your wallet.

To generate the address for a numbered account, use pcli view address:

$ pcli view address 0

You can also run pcli view address on an address to see which account it corresponds to:

$ pcli view address penumbra1...
Address is viewable with this full viewing key. Account index is 0.

Addresses are opaque and do not reveal account information. Only you, or someone who has your viewing key, can decrypt the account information from the address.
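The account index space is illustrated by a short sketch (illustrative only; the function name is hypothetical, not part of pcli):

```python
# Illustrative only: account indices are unsigned 32-bit integers,
# so a wallet controls indices 0 through 2**32 - 1.
# The function name is hypothetical, not part of pcli.
def is_valid_account_index(index: int) -> bool:
    """Check that an index fits the 32-bit account index space."""
    return 0 <= index < 2**32

assert is_valid_account_index(0)
assert is_valid_account_index(2**32 - 1)
assert not is_valid_account_index(2**32)
```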

Getting testnet tokens on Discord in the #testnet-faucet channel

In order to use the testnet, it’s first necessary for you to get some testnet tokens. The current way to do this is to join our Discord and post your address in the #testnet-faucet channel. We’ll send your address some tokens on the testnet for you to send to your friends! :)

Just keep in mind: testnet tokens do not have monetary value, and in order to keep the signal-to-noise ratio high on the server, requests for tokens in other channels will be deleted without response. Please do not DM Penumbra Labs employees asking for testnet tokens; the correct venue is the dedicated channel.

Validator custody

Validators need to custody more kinds of key material than ordinary users. A validator operator has control over:

  • The validator’s identity signing key: the root of the validator’s identity, controlling its on-chain definition and all other subkeys. The public half of this keypair is contained in a Penumbra validator ID such as penumbravalid1u2z9c75xcc2ny6jxccge6ehqtqkhgy4ltxms3ldappr06ekpguxqq48pdh. This key can never be rotated, so it is very important for a validator to keep it secure. However, the identity signing key is only used for signing changes to the validator’s configuration data, so it can be kept in cold storage or via threshold (multisig) custody.
  • The validator’s CometBFT consensus key: the key used to sign blocks by the running validator. Unlike the Cosmos SDK, the consensus key can be freely rotated by uploading a new validator definition with an updated consensus key. It is important to secure all historical consensus keys used by a validator, as compromise could lead to double-signing and slashing. Secure custody of consensus keys is outside the scope of this document. For examples of ways to custody this key material, see Horcrux and tmkms, two approaches for high-security online custody of CometBFT consensus keys.
  • The validator’s governance signing key: the key used to vote on governance proposals in its capacity as a validator. Unlike the Cosmos SDK, this is also a subkey, allowing validators to vote on governance proposals without taking their most sensitive key material out of cold storage. By default, this is identical to the validator’s identity key, but it can be manually set to a different key. To do this, use the command pcli init validator-governance-subkey after you have run pcli init. This command requires that you choose a custody backend for the governance subkey, which can be one of soft-kms or threshold. The same options apply to these as above.
  • The validator’s treasury wallet(s): the wallet(s) which hold the validator’s self-delegated funds, and which receive output from (some of) the validator’s funding stream(s). These are configured entirely separately from the validator’s identity key, and may be changed at any time by migrating funds from one wallet to another and/or updating the funding streams in the validator’s on-chain definition. The default templated definition uses an address derived from the validator’s identity key wallet, but most operators likely want to change this.

Because Penumbra permits validators to separate their key material as above, validators can choose different custody solutions for the different risk profiles and use cases of different keys, independently.

Multiple signatures when using threshold custody

Validator definitions are self-authenticating data packets, signed by the validator’s identity key. Updating a validator definition requires two steps: first, producing a signed validator definition, and second, relaying that definition onto the chain. In a low-security setup, pcli can do these steps automatically.

However, when a validator is using a threshold custody backend for the identity key, these steps may require separate signing operations. First, the definition must be signed (using the custody method of the identity key), and then that definition must be broadcast to the chain. Any wallet can broadcast the signed validator definition, similar to the way that any wallet can relay IBC packets.

Similarly, when a validator is using a threshold custody backend for the governance subkey, casting a vote as a validator may require separate steps, because the vote is signed independently of the transaction which broadcasts it.

Airgap or Wormhole custody

If a validator wishes to use an airgapped signing setup (with or without threshold custody) to sign definition updates or governance votes, this is possible:

  • To sign a definition over an airgap, produce a signature on the airgapped machine or machines using pcli validator definition sign, then upload the definition on a networked machine, after copying the signature across the airgap, using pcli validator definition upload with the optional --signature flag to specify the externally-produced signature for the definition.
  • To sign a validator vote over an airgap, produce a signature on the airgapped machine or machines using pcli validator vote sign, then upload the vote on a networked machine, after copying the signature across the airgap, using pcli validator vote cast with the optional --signature flag to specify the externally-produced signature for the vote.

Alternatively, rather than using a literal airgap, magic wormhole is a fast and secure method for relaying data between computers without complex network interactions.

Software Custody Backend

The softkms custody backend stores the spending key unencrypted in the pcli configuration file.

To generate a new wallet, try:

$ pcli init soft-kms generate
Save this in a safe place!
Writing generated config to [PATH TO PCLI DATA]

Alternatively, to import an existing wallet, try

$ pcli init soft-kms import-phrase
Enter seed phrase:
Writing generated config to [PATH TO PCLI DATA]


A password can be used to generate an encrypted config via:

$ pcli init --encrypted soft-kms ...

with either the generate or the import-phrase command.

Furthermore, an existing config can be converted to an encrypted one with:

$ pcli init re-encrypt

Threshold Custody Backend

This backend allows splitting the spend authority (the ability to spend funds) among multiple parties. Each of these parties will have full viewing authority (the ability to decrypt and view the wallet’s on-chain activity).

Threshold custody involves some number of parties, N, who each hold a share, and a threshold, T, denoting the number of parties required to sign. This is often referred to as an (N, T) threshold setup.

For example, a (3, 2) threshold setup would involve 3 people holding shares, with 2 of them required to sign.
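The (N, T) rule can be sketched as a simple predicate (an illustrative model only, not pcli’s implementation; the function name is hypothetical):

```python
# Illustrative model of an (N, T) threshold rule, not pcli's implementation:
# any T of the N shareholders can jointly authorize a spend.
def can_authorize(n_shares: int, threshold: int, signers: set) -> bool:
    """True if the distinct, in-range signers meet the threshold."""
    valid = {s for s in signers if 0 <= s < n_shares}
    return len(valid) >= threshold

# A (3, 2) setup: 3 shareholders, any 2 suffice.
assert can_authorize(3, 2, {0, 2})
assert not can_authorize(3, 2, {1})
```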

At a high-level, using this backend involves:

  1. (only once) generating the split keys, either in a centralized or a decentralized manner,
  2. (many times) signing transactions by having the parties coordinate over a shared secure communication channel.

For signing, the parties will need to exchange messages to coordinate a threshold signature, and will need some kind of secure channel to do that; for example, a Signal group. This channel should:

  • provide authenticity to each party sending messages in it,
  • encrypt messages, preventing information about on-chain activity from leaking outside the signing parties.

This backend is only accessible from the command-line interface, pcli.

Key-Generation: Centralized

pcli init threshold deal --threshold <T> --home <HOME1> --home <HOME2> ...

This command will generate the key shares, and then write appropriate pcli configs to <HOME1>/config.toml, <HOME2>/config.toml, etc. The number of parties is controlled only by the number of home directories passed to this command. The threshold required to sign will be <T>.


The computer running this command will have access to all the shares at the moment of generation. This can be useful to centrally provision a threshold setup, but should be done on a secure computer which gets erased after the setup has been performed.

Key-Generation: Decentralized

pcli init threshold dkg --threshold <T> --num-participants <N>

This command will generate the key shares in a decentralized manner. Each party will run this command on the machine where they want their share to be. The command will prompt them to communicate information with the other parties, and to relay the information they’ve received back into it, before eventually producing a key share, and writing it to the default home directory used by pcli.

This method is more secure, because no computer ever has full access to all shares.


Signing

One party coordinates the signing, and the other parties follow along and review the suggested transaction.

The coordinator uses pcli as they would with the standard single-party custody backend. When it comes time to produce a signature, the command will prompt the coordinator to exchange information with the other parties and relay their responses back, in order to assemble the signature.

The followers run a separate command to participate in signing:

pcli threshold sign

They will be prompted to input the coordinator’s information, and will be shown a summary of the transaction, which they should independently review to check that it’s actually something they wish to sign.

Note that only a threshold number of participants are required to sign, and any others do not need to participate. So, for example, with a threshold of 2, only one other follower beyond the coordinator is needed, and additional followers cannot be used.

Communication Channel

The dkg and sign commands emit blobs of information that need to be relayed securely between the participants. An end-to-end example of how this process works is captured in a demonstration video.


A password can be used to generate an encrypted config via:

$ pcli init --encrypted threshold dkg ...

Furthermore, an existing config can be converted to an encrypted one with:

$ pcli init re-encrypt

Updating pcli

Follow the installation steps to install the most recent version of pcli, which is v0.77.3.

After installing the updated version, reset the view data used by pcli:

pcli view reset

No wallet needs to be generated. The existing wallet will be used automatically.

Viewing Balances

Once you’ve received your first tokens, you can scan the chain to import them into your local wallet (this may take a few minutes the first time you run it):

pcli view sync

Syncing is performed automatically, but running the sync subcommand will ensure that the client state is synced to a recent state, so that future invocations of pcli commands don’t need to wait.

If someone sent you testnet assets, you should be able to see them now by running:

pcli view balance

This will print a table of your assets and the balance of each. The balance view shows only asset amounts. To see more information about delegation tokens and the stake they represent, use

pcli view staked

Sending Transactions

Now, for the fun part: sending transactions. If you have someone else’s testnet address, you can send them any amount of any asset you have.

First, use balance to find the amount of assets you have:

pcli view balance

Second, to send 10 penumbra tokens to a friend, you could do so like this (filling in their full address at the end):

pcli tx send 10penumbra --to penumbrav2t...

Notice that asset amounts are typed amounts, specified without a space between the amount (10) and the asset name (penumbra). If you have the asset in your wallet to send, then so it shall be done!
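The amount-then-denom shape of typed amounts can be illustrated with a rough parser (the regex and function here are hypothetical illustrations; pcli’s actual grammar may differ):

```python
import re

# Hypothetical sketch of the amount-then-denom shape of typed amounts;
# pcli's actual grammar may differ.
AMOUNT_RE = re.compile(r"^(\d+(?:\.\d+)?)([a-z_][a-z0-9_]*)$")

def parse_typed_amount(s: str):
    """Split a typed amount like '10penumbra' into (amount, denom)."""
    m = AMOUNT_RE.match(s)
    if m is None:
        raise ValueError(f"not a typed amount: {s!r}")
    return m.group(1), m.group(2)

assert parse_typed_amount("10penumbra") == ("10", "penumbra")
assert parse_typed_amount("5gm") == ("5", "gm")
```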


In addition to sending assets, you can also stake penumbra tokens to validators.

Find a validator to stake to:

pcli query validator list

Copy and paste the identity key of one of the validators to stake to, then construct the staking tx:

pcli tx delegate 10penumbra --to penumbravalid...

To undelegate from a validator, use the pcli tx undelegate command, passing it the typed amount of delegation tokens you wish to undelegate. Wait a moment for the network to process the undelegation, then reclaim your funds:

pcli tx undelegate-claim

Inspect the output; a message may instruct you to wait longer, for a new epoch. Check back and rerun the command later to add the previously delegated funds to your wallet.


Penumbra features on-chain governance similar to the Cosmos Hub, where anyone can submit proposals and both validators and delegators can vote on them. Penumbra’s governance model incorporates a single Community Pool account, into which anyone can freely deposit, but from which only a successful governance vote can spend. For details on using governance, see the governance section.

Managing Liquidity Positions

Penumbra’s decentralized exchange (“dex”) implementation allows users to create their own on-chain liquidity positions. The basic structure of a liquidity position expresses a relationship between two assets, e.g. “I am willing to buy 100penumbra at a price of 1gm each, with a fee of 20bps (basis points)” or “I am willing to sell 100penumbra at a price of 1gm each, with a fee of 10bps”.

Opening a Liquidity Position

The basic commands for opening liquidity positions are tx position order buy and tx position order sell.

To open an order buying 10cube at a price of 1penumbra each, with no fee, you’d do the following:

pcli tx position order buy 10cube@1penumbra

Similarly, to open an order selling 100penumbra at a price of 5gm each, with a 20bps fee on transactions against the liquidity position, append /20bps to the end of the order, as follows:

pcli tx position order sell 100penumbra@5gm/20bps
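As a quick check on what a basis-point fee means (plain arithmetic, not a pcli command):

```python
# Plain arithmetic, not a pcli command: a basis point is one
# hundredth of a percent, so 1 bps = 0.0001.
def fee_amount(trade_size: float, fee_bps: int) -> float:
    """Fee charged on a fill at the given basis-point rate."""
    return trade_size * fee_bps / 10_000

# A 20 bps fee on a 100 gm fill costs 0.2 gm.
assert fee_amount(100.0, 20) == 0.2
```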

After opening the position, you’ll see that an “LPNFT” representing the open position has been deposited into your account:

$ pcli view balance

 Account  Amount
 0        1lpnft_opened_plpid1hzrzr2myjw508nf0hyzehl0w0x2xzr4t8vwe6t3qtnfhsqzf5lzsufscqr

Closing a Liquidity Position

If you have an open liquidity position, you may close it, preventing further trading against it.

pcli tx position close plpid1hzrzr2myjw508nf0hyzehl0w0x2xzr4t8vwe6t3qtnfhsqzf5lzsufscqr

This will subtract the opened LPNFT and deposit a closed LPNFT into your balance:

$ pcli view balance

 Account  Amount
 0        1lpnft_closed_plpid1hzrzr2myjw508nf0hyzehl0w0x2xzr4t8vwe6t3qtnfhsqzf5lzsufscqr

You also have the option to close all liquidity positions associated with an address at once. This is useful if you have many individual positions, e.g. due to trading function approximation:

pcli tx position close-all

Withdrawing a Liquidity Position

If you have a closed liquidity position, you may withdraw it, depositing the reserves in the trading position into your balance.

pcli tx position withdraw plpid1hzrzr2myjw508nf0hyzehl0w0x2xzr4t8vwe6t3qtnfhsqzf5lzsufscqr

This will subtract the closed LPNFT and deposit a withdrawn LPNFT into your balance, along with any reserves belonging to the trading position:

$ pcli view balance

 Account  Amount
 0        1lpnft_withdrawn_plpid1hzrzr2myjw508nf0hyzehl0w0x2xzr4t8vwe6t3qtnfhsqzf5lzsufscqr
 0        1cube

You also have the option to withdraw all liquidity positions associated with an address at once. This is useful if you have many individual positions, e.g. due to trading function approximation:

pcli tx position withdraw-all

Swapping Assets

One of the most exciting features of Penumbra is that by using IBC (inter-blockchain communication) and our shielded pool design, any tokens can be exchanged in a private way.

Swaps take place against the on-chain liquidity positions described earlier in the guide.

If you wanted to exchange 1 penumbra token for gm tokens, you could do so as follows:

pcli tx swap --into gm 1penumbra

This will handle generating the swap transaction, and you’d soon have the market-rate equivalent of 1 penumbra in gm tokens returned to you, or your original 1 penumbra returned if there wasn’t enough liquidity available to perform the swap.

Replicating a UniswapV2 (x*y=k) pool

Penumbra’s constant-price pool is a versatile market primitive, allowing users extensive control over their trading strategies. It’s not solely for active DEX quoters; with our AMM replication tool, users can emulate any passive AMM of their choice. The testnet comes with a built-in UniswapV2 replicator, invoked as follows:

pcli tx lp replicate xyk <TRADING_PAIR> <QUANTITY> [--current-price AMT] [--fee-bps AMT]

For instance, to provide ~100penumbra and ~100test_usd liquidity on the penumbra:test_usd pair with a pool fee of 33bps, run:

pcli tx lp replicate xyk penumbra:test_usd 100penumbra --fee-bps 33

You will be shown a disclaimer, which you should read carefully, and accept or reject by pressing “y” for yes or “n” for no.

The replicating market makers tool will then generate a list of positions that you can submit by pressing “y”, or reject by pressing “n”.

There are other pairs available that you can try this tool on, for example gm:gn or gm:penumbra.
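The constant-product rule the replicator approximates can be sketched as follows (an illustrative model that ignores fees; the function name is hypothetical):

```python
# Illustrative model of the x*y = k rule the replicator approximates,
# ignoring fees; the function name is hypothetical.
def xyk_output(x: float, y: float, dx: float) -> float:
    """Amount of y received for selling dx of x, holding x*y constant."""
    k = x * y
    return y - k / (x + dx)

# With equal reserves of 100, selling 10 of x yields about 9.09 of y:
# the larger the trade relative to reserves, the worse the price.
assert abs(xyk_output(100.0, 100.0, 10.0) - 9.0909) < 1e-3
```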

IBC withdrawals

Penumbra aims to implement full IBC support for cross-chain asset transfers. For now, however, we’re only running a relayer between the Penumbra testnet and the Osmosis testnet chains. For Testnet 69 Deimos, the channel is 0:

| Channel ID | Port     | Counterparty | Counterparty Channel ID | State | Client ID       | Client Height |
|------------|----------|--------------|-------------------------|-------|-----------------|---------------|
| 0          | transfer | osmo-test-5  | channel-6105            | OPEN  | 07-tendermint-4 | 5-5937586     |

You can see this yourself by running pcli query ibc channels and comparing the output you see with what’s shown above. It’s possible the output will include mention of other chains.

The output above shows that the IBC channel id on Penumbra is 0, and on Osmosis it’s 6105. To initiate an IBC withdrawal from Penumbra testnet to Osmosis testnet:

pcli tx withdraw --to <OSMOSIS_ADDRESS> --channel <CHANNEL_ID> 5gm

Unfortunately the CLI tooling for Osmosis is cumbersome. For now, use hermes as a user agent for the Osmosis testnet, as described in the IBC dev docs.


Governance

Penumbra features on-chain governance similar to the Cosmos Hub, with the simplification that there are only 3 kinds of vote: yes, no, and abstain.

Quick Start

There’s a lot you can do with the governance system in Penumbra. The sections below cover the most common tasks.

Getting Proposal Information

To see information about the currently active proposals, including your own, use the pcli query governance subcommands.

To list all the active proposals by their ID, use:

pcli query governance list-proposals

Other proposal query commands all follow the form:

pcli query governance proposal [PROPOSAL_ID] [QUERY]

These are the queries currently defined:

  • definition gets the details of a proposal, as the submitted JSON;
  • state gets information about the current state of a proposal (voting, withdrawn, or finished, along with the reason for withdrawal if any, and the outcome of finished proposals);
  • period gets the voting start and end block heights of a proposal;
  • tally gets the current tally of a proposal’s votes, as a total across all validators, and broken down by each validator’s votes and the total votes of their delegators.

Voting On A Proposal

Validators and delegators may both vote on proposals. Validator votes are public and attributable to that validator; delegator votes are anonymous, revealing only the voting power used in the vote, and the validator which the voting delegator had delegated to. Neither validators nor delegators can change their votes after they have voted.

Voting As A Delegator

If you had staked delegation tokens to one or more active validators when a proposal started, you can vote on it using the tx vote subcommand of pcli. For example, if you wanted to vote “yes” on proposal 1, you would do:

pcli tx vote yes --on 1

When you vote as a delegator (but not when you vote as a validator), you will receive commemorative voted_on_N tokens, where N is the proposal ID, proportionate to the weight of your vote. Think of these as the cryptocurrency equivalent of the “I voted!” stickers you may have received when voting in real life at your polling place.

Voting As A Validator

If you are a validator who was active when the proposal started, you can vote on it using the validator vote cast subcommand of pcli. For example, if you wanted to vote “yes” on proposal 1, you would do:

pcli validator vote cast yes --on 1

If your validator uses an airgap custody setup, you can separately sign and cast your vote using the pcli validator vote sign command to output your signature, and the --signature option on pcli validator vote cast to attach it and broadcast it.

Eligibility And Voting Power

Only validators who were active at the time the proposal started voting may vote on proposals. Only delegators who had staked delegation tokens to active validators at the time the proposal started voting may vote on proposals.

A validator’s voting power is equal to their voting power at the time a proposal started voting, and a delegator’s voting power is equal to the unbonded staking token value (i.e. the value in penumbra) of the delegation tokens they had staked to an active validator at the time the proposal started voting. When a delegator votes, their voting power is subtracted from the voting power of the validator(s) to whom they had staked delegation notes at the time of the proposal start, and their stake-weighted vote is added to the total of the votes: in other words, validators vote on behalf of their delegators, but delegators may override their portion of their validator’s vote.
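The override rule above can be sketched as a toy tally (a simplification for illustration, not the chain’s implementation):

```python
# Toy tally illustrating the override rule above; a simplification,
# not the chain's implementation.
def tally(validator_power: dict, validator_votes: dict, delegator_votes: list) -> dict:
    """
    validator_power: {validator: power at proposal start}
    validator_votes: {validator: 'yes' | 'no' | 'abstain'}
    delegator_votes: [(validator, power, vote), ...]
    """
    totals = {"yes": 0, "no": 0, "abstain": 0}
    remaining = dict(validator_power)
    for val, power, vote in delegator_votes:
        remaining[val] -= power   # the delegator overrides this share
        totals[vote] += power     # and it counts toward their own vote
    for val, vote in validator_votes.items():
        totals[vote] += remaining[val]
    return totals

# A validator with power 100 votes yes; one of its delegators, with
# power 30, votes no: the validator effectively votes yes with 70.
assert tally({"v1": 100}, {"v1": "yes"}, [("v1", 30, "no")]) == {
    "yes": 70, "no": 30, "abstain": 0,
}
```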

Authoring A Proposal

Anyone can submit a new governance proposal for voting by escrowing a proposal deposit, which will be held until the end of the proposal’s voting period. Penumbra’s governance system discourages proposal spam with a slashing mechanism: proposals which receive more than a high threshold of no votes have their deposit burned. At present, the slashing threshold is 80%. If the proposal is not slashed (but regardless of whether it passes or fails), the deposit will then be returned to the proposer at the end of voting.
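The deposit rule can be modeled as follows, assuming the 80% slashing threshold is measured against all votes cast (the text does not specify the base, so treat this as an illustration):

```python
# Sketch of the deposit rule above. Assumption (not stated in the text):
# the 80% slashing threshold is the share of "no" votes among all votes cast.
def deposit_outcome(yes: int, no: int, abstain: int, threshold: float = 0.80) -> str:
    total = yes + no + abstain
    if total > 0 and no / total > threshold:
        return "burned"
    return "returned"  # returned whether the proposal passes or fails

assert deposit_outcome(yes=10, no=90, abstain=0) == "burned"
assert deposit_outcome(yes=50, no=50, abstain=0) == "returned"
```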

From the proposer’s point of view, the lifecycle of a proposal begins when it is submitted and ends when the deposit is claimed. During the voting period, the proposer may also optionally withdraw the proposal, which prevents it from passing, but does not prevent it from being slashed. This is usually used when a proposal has been superseded by a revised alternative.

In the proposal lifecycle diagram (omitted here), rounded grey boxes are actions submitted by the proposal author, rectangular colored boxes are the state of the proposal on chain, and colored circles are outcomes of voting.

Kinds Of Proposal

There are 4 kinds of governance proposal on Penumbra: signaling, emergency, parameter change, and community pool spend.

Signaling Proposals

Signaling proposals are meant to signal community consensus about something. They do not have a mechanized effect on the chain when passed; they merely indicate that the community agrees about something.

This kind of proposal is often used to agree on code changes; as such, an optional commit field may be included to specify these changes.

Emergency Proposals

Emergency proposals are meant for when immediate action is required to address a crisis, and conclude early as soon as a 1/3 majority of all active voting power votes yes.

Emergency proposals have the power to optionally halt the chain when passed. If this occurs, off-chain coordination between validators will be required to restart the chain.

Parameter Change Proposals

Parameter change proposals alter the chain parameters when they are passed. Chain parameters specify things like the base staking reward rate, the amount of penalty applied when slashing, and other properties that determine how the chain behaves. Many of these can be changed by parameter change proposals, but some cannot, and instead would require a chain halt and upgrade.

A parameter change proposal specifies both the old and the new parameters. If the current set of parameters at the time the proposal passes are an exact match for the old parameters specified in the proposal, the entire set of parameters is immediately set to the new parameters; otherwise, nothing happens. This is to prevent two simultaneous parameter change proposals from overwriting each others’ changes or merging with one another into an undesired state. Almost always, the set of old parameters should be the current parameters at the time the proposal is submitted.
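This all-or-nothing update behaves like a compare-and-swap on the parameter set, which the following illustrative sketch captures (parameter names here are examples, not the chain's actual parameter list):

```python
# The new parameters apply only if the current set exactly matches the
# "old" parameters recorded in the proposal; otherwise nothing happens.

def apply_parameter_change(current: dict, old: dict, new: dict) -> dict:
    return dict(new) if current == old else current

params = {"base_reward_rate": 30, "slashing_penalty": 100}
proposal_old = dict(params)
proposal_new = {"base_reward_rate": 25, "slashing_penalty": 100}

# Exact match: the entire set is swapped to the new parameters.
print(apply_parameter_change(params, proposal_old, proposal_new))

# A competing change landed first, so this proposal becomes a no-op.
drifted = {"base_reward_rate": 40, "slashing_penalty": 100}
print(apply_parameter_change(drifted, proposal_old, proposal_new))
```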

Community Pool Spend Proposals

Community Pool spend proposals submit a transaction plan which may spend funds from the Community Pool if passed.

Community Pool spend transactions have exclusive capability to use two special actions which are not allowed in directly submitted user transactions: CommunityPoolSpend and CommunityPoolOutput. These actions, respectively, spend funds from the Community Pool, and mint funds transparently to an output address (unlike regular output actions, which are shielded). Community Pool spend transactions are unable to use regular shielded outputs, spend funds from any source other than the Community Pool itself, perform swaps, or submit, withdraw, or claim governance proposals.

Submitting A Proposal

To submit a proposal, first generate a proposal template for the kind of proposal you want to submit. For example, suppose we want to create a signaling proposal:

pcli tx proposal template signaling --file proposal.toml

This outputs a TOML template for the proposal to the file proposal.toml, where you can edit the details to match what you’d like to submit. The template will contain relevant default fields for you to fill in, as well as a proposal ID, automatically set to the next proposal ID at the time you generated the template. If someone else submits a proposal before you’re ready to upload yours, you may need to increment this ID, because it must be the sequentially next proposal ID at the time the proposal is submitted to the chain.

Once you’re ready, submit the proposal. Note that you do not have to explicitly specify the proposal deposit in this action; it is determined automatically based on the chain parameters.

pcli tx proposal submit --file proposal.toml

The proposal deposit will be immediately escrowed and the proposal voting period will start in the very next block. As the proposer, you will receive a proposal deposit NFT which can be redeemed for the proposal deposit after voting concludes, provided the proposal is not slashed. This NFT has denomination proposal_N_deposit, where N is the ID of your proposal. Note that whoever holds this NFT has exclusive control of the proposal: they can withdraw it or claim the deposit.

Making A Community Pool Spend Transaction Plan

In order to submit a Community Pool spend proposal, it is necessary to create a transaction plan. At present, the only way to specify this is to provide a rather human-unfriendly JSON-formatted transaction plan, because there is no stable human-readable representation for one yet. This will change in the future as better tooling is developed.

For now, here is a template for a transaction plan that withdraws 100 penumbra from the Community Pool and sends it to a specified address (in this case, the address of the author of this document):

{
  "fee": { "amount": { "lo": 0, "hi": 0 } },
  "actions": [
    {
      "communityPoolSpend": {
        "value": {
          "amount": { "lo": 100000000, "hi": 0 },
          "assetId": { "inner": "KeqcLzNx9qSH5+lcJHBB9KNW+YPrBk5dKzvPMiypahA=" }
        }
      }
    },
    {
      "communityPoolOutput": {
        "value": {
          "amount": { "lo": 100000000, "hi": 0 },
          "assetId": { "inner": "KeqcLzNx9qSH5+lcJHBB9KNW+YPrBk5dKzvPMiypahA=" }
        },
        "address": {
          "inner": "vzZ60xfMPPwewTiSb08jk5OdUjc0BhQ7IXLgHAayJoi5mvmlnTpqFuaPU2hCBhwaEwO2c03tBbN/GVh0+CajAjYBmBq3yHAbzNJCnZS8jUs="
        }
      }
    }
  ]
}

Note that the asset ID and address are specified not in the usual bech32 formats you are used to seeing, but in base64. To get your address in this format, use pcli view address 0 --base64.
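The amount fields in the plan are 128-bit integers split into two 64-bit limbs, lo and hi, denominated in the base unit (1 penumbra = 10^6 upenumbra, so 100 penumbra is 100,000,000). A quick Python sketch of this encoding, assuming that limb layout:

```python
# Amounts are 128-bit integers encoded as two 64-bit limbs (lo, hi).
# 1 penumbra = 10**6 upenumbra, the base unit used on the wire.

MASK64 = (1 << 64) - 1

def to_limbs(amount: int) -> dict:
    return {"lo": amount & MASK64, "hi": amount >> 64}

def from_limbs(limbs: dict) -> int:
    return limbs["lo"] + (limbs["hi"] << 64)

# 100 penumbra, as it appears in the transaction plan above:
print(to_limbs(100 * 10**6))  # → {'lo': 100000000, 'hi': 0}
```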

To template a Community Pool spend proposal using a JSON transaction plan, use pcli tx proposal template community-pool-spend --transaction-plan <FILENAME>.json, which will include the transaction plan in the generated proposal template. If no plan is specified, the plan defaults to the empty transaction, which does nothing when executed.

Withdrawing A Proposal

If you want to withdraw a proposal that you have made (perhaps because a better proposal has come to community consensus), you can do so before voting concludes. Note that this does not make you immune to losing your deposit by slashing, as withdrawn proposals can still be voted on and slashed.

pcli tx proposal withdraw 0 \
    --reason "some human-readable reason for withdrawal"

When you withdraw a proposal, you consume your proposal deposit NFT, and produce a new proposal unbonding deposit NFT, which has the denomination proposal_N_unbonding_deposit, where N is the proposal ID. This, like the proposal deposit NFT, can be used to redeem the deposit at the end of voting, provided the proposal is not slashed.

Claiming A Proposal Deposit

Regardless of whether you have or have not withdrawn your proposal, once voting on the proposal concludes, you can claim your proposal deposit using the tx proposal deposit-claim subcommand of pcli. For example, if you wanted to claim the deposit for a concluded proposal number 1, you could say:

pcli tx proposal deposit-claim 1

This will consume your proposal deposit NFT (either the original or the one you received after withdrawing the proposal) and send you back one of three different proposal result NFTs, depending on the result of the vote: proposal_N_passed, proposal_N_failed or proposal_N_slashed. If the proposal was not slashed (that is, it passed or failed), this action will also produce the original proposal deposit. Note that you can claim a slashed proposal: you will receive the slashed proposal result NFT, but you will not receive the original proposal deposit.

Running a node

Running a node is not necessary to use the protocol. Both the web extension and pcli are designed to operate with any RPC endpoint. However, we’ve tried to make it as easy as possible to run nodes so that users can host their own RPC.

There are two kinds of Penumbra nodes:

  • Penumbra fullnodes run pd and cometbft to synchronize and verify the entire chain state, as described in Running a node: pd.
  • Penumbra ultralight nodes run pclientd to scan, decrypt, and synchronize a specific wallet’s data, as well as build and sign transactions, as described in Running a node: pclientd.

The web extension and pcli embed the view and custody functionality provided by pclientd, so it is not necessary to run pclientd to use them. Instead, pclientd is intended to act as a local RPC for programmatic tooling (e.g., trading bots) not written in Rust that cannot easily embed the code for working with Penumbra’s shielded cryptography.

Using pd

This section describes how to build and run pd, the node implementation for Penumbra.

Requirements for running a node

In order to run a Penumbra fullnode, you’ll need a machine with sufficient resources. See specifics below.

System requirements

We recommend using a machine with at least:

  • 8GB RAM
  • 2-4 vCPUS
  • ~200GB persistent storage (~20GB/week)

You can host your node on hardware, or on your cloud provider of choice.

Network requirements

A Penumbra fullnode should have a publicly routable IP address to accept P2P connections. It’s possible to run a fullnode behind NAT, but then it won’t be able to receive connections from peers. The relevant network endpoints for running Penumbra are:

  • 26656/TCP for CometBFT P2P, should be public
  • 26657/TCP for CometBFT RPC, should be private
  • 26660/TCP for CometBFT metrics, should be private
  • 26658/TCP for Penumbra ABCI, should be private
  • 9000/TCP for Penumbra metrics, should be private
  • 8080/TCP for Penumbra gRPC, should be private
  • 443/TCP for Penumbra HTTPS, optional, should be public if enabled

You can opt in to HTTPS support for Penumbra’s gRPC service by setting the --grpc-auto-https <DOMAIN> option. See pd start --help for more info.

Custody considerations

Validators should review the pcli key custody recommendations for protecting the validator identity.

Deployment strategies

We expect node operators to manage the lifecycle of their Penumbra deployments. Some example configs for systemd, docker compose, and kubernetes helm charts can be found in the Penumbra repo’s deployments/ directory. You should consult these configurations as a reference, and write your own scripts to maintain your node.

Consider joining the Penumbra Discord to receive announcements about new versions and required actions by node operators.

Installing pd

Download prebuilt binaries from the Penumbra releases page on Github. Make sure to use the most recent version available, as the version of pd must match the software currently running on the network.

Make sure to choose the correct platform for your machine. After downloading the .tar.gz file, extract it, and copy its contents to your $PATH. For example:

curl -sSfL -O
tar -xf pd-x86_64-unknown-linux-gnu.tar.gz
sudo mv pd-x86_64-unknown-linux-gnu/pd /usr/local/bin/

# confirm the pd binary is installed by running:
pd --version

There’s also a one-liner install script available on the release page, which will install pd to $HOME/.cargo/bin/. As of v0.64.1 (released 2023-12-12), we build Linux binaries on Ubuntu 22.04. If these binaries don’t work for you out of the box, you’ll need to build from source, or use the container images.

Installing CometBFT

You’ll need to have CometBFT installed on your system to join your node to the testnet.

You must use a specific version of CometBFT, v0.37.5, which you can download from the CometBFT releases page. If you prefer to compile from source instead, make sure you are compiling version v0.37.5.

Previous versions of Penumbra used Tendermint, but as of Testnet 62 (released 2023-10-10), only CometBFT is supported. Do not use any version of Tendermint; it will not work with pd.

Joining a Testnet

We provide instructions for running both fullnode deployments and validator deployments. A fullnode will sync with the network but will not have any voting power, and will not be eligible for staking or funding stream rewards. For more information on what a fullnode is, see the CometBFT documentation.

A regular validator will participate in voting and rewards, if it becomes part of the consensus set. Of course, these rewards, like all other testnet tokens, have no value.

Joining as a fullnode

To join a testnet as a fullnode, install the most recent version of pd, run pd testnet join to generate configs, then use those configs to run pd and cometbft. In more detail:

Resetting state

First, reset the testnet data from any prior testnet you may have joined:

pd testnet unsafe-reset-all

This will delete the entire testnet data directory.

Generating configs

Next, generate a set of configs for the current testnet:

pd testnet join --external-address IP_ADDRESS:26656 --moniker MY_NODE_NAME \
    --archive-url ""

where IP_ADDRESS is the public IP address of the node you’re running, and MY_NODE_NAME is a moniker identifying your node. Other peers will try to connect to your node over port 26656/TCP. Finally, the --archive-url flag fetches a tarball of historical blocks, so that your newly joining node can understand transactions that occurred prior to the most recent chain upgrade.

If your node is behind a firewall or not publicly routable for some other reason, skip the --external-address flag, so that other peers won’t try to connect to it. You can also skip the --moniker flag to use a randomized moniker instead of selecting one.

This command fetches the genesis file for the current testnet, and writes configs to a testnet data directory (by default, ~/.penumbra/testnet_data). If any data exists in the testnet data directory, this command will fail. See the section above on resetting node state.

Running pd and cometbft

Next, run pd:

pd start

Then (perhaps in another terminal), run CometBFT, specifying --home:

cometbft start --home ~/.penumbra/testnet_data/node0/cometbft

Alternatively, pd and cometbft can be orchestrated with docker-compose:

cd deployments/compose/
docker-compose pull
docker-compose up --abort-on-container-exit

or via systemd:

cd deployments/systemd/
sudo cp *.service /etc/systemd/system/
# edit service files to customize for your system
sudo systemctl daemon-reload
sudo systemctl restart penumbra cometbft

See the deployments/ directory for more example configuration scripts.

Becoming a validator

After starting your node, you should now be participating in the network as a fullnode. If you wish to run a validator, you’ll need to perform additional steps.

Every validator on Penumbra is a fullnode that has been “promoted” to validator status. Make sure you are comfortable running a fullnode before you attempt to run a validator, because validators are subject to penalties for downtime.

Validator definitions in Penumbra

A validator definition contains fields defining metadata regarding your validator as well as funding streams, which are Penumbra’s analogue to validator commissions.

The root of a validator’s identity is their identity key. Currently, pcli reuses the spend authorization key in whatever wallet is active as the validator’s identity key. This key is used to sign validator definitions that update the configuration for a validator.

IMPORTANT: The validator identity cannot be changed, ever! This means that if you are running a production validator, you definitely should use a secure custody setup for the wallet which backs the identity key. Strongly consider using the threshold backend of pcli for this purpose. More details about validator custody can be found under the validator custody section of the pcli guide.

Creating a template definition

First, make sure you’ve installed pcli. To create a template configuration, use pcli validator definition template:

$ pcli validator definition template \
    --tendermint-validator-keyfile ~/.penumbra/testnet_data/node0/cometbft/config/priv_validator_key.json \
    --file validator.toml
$ cat validator.toml
# This is a template for a validator definition.
# The identity_key and governance_key fields are auto-filled with values derived
# from this wallet's account.
# You should fill in the name, website, and description fields.
# By default, validators are disabled, and cannot be delegated to. To change
# this, set `enabled = true`.
# Every time you upload a new validator config, you'll need to increment the
# `sequence_number`.

sequence_number = 0
enabled = false
name = ''
website = ''
description = ''
identity_key = 'penumbravalid1kqrecmvwcc75rvg9arhl0apsggtuannqphxhlzl34vfamp4ukg9q87ejej'
governance_key = 'penumbragovern1kqrecmvwcc75rvg9arhl0apsggtuannqphxhlzl34vfamp4ukg9qus84v5'

[consensus_key]
type = 'tendermint/PubKeyEd25519'
value = 'HDmm2FmJhLHxaKPnP5Fw3tC1DtlBx8ETgTL35UF+p6w='

[[funding_stream]]
recipient = 'penumbrav2t1cntf73e36y3um4zmqm4j0zar3jyxvyfqxywwg5q6fjxzhe28qttppmcww2kunetdp3q2zywcakwv6tzxdnaa3sqymll2gzq6zqhr5p0v7fnfdaghrr2ru2uw78nkeyt49uf49q'
rate_bps = 100

[[funding_stream]]
recipient = "CommunityPool"
rate_bps = 100

Adjust the data, such as the name, website, and description, as desired.

The enabled field can be used to enable or disable your validator without incurring slashing penalties. Disabled validators cannot appear in the active validator set and are ineligible for rewards.

This is useful if, for example, you know your validator will not be online for a period of time, and you want to avoid an uptime violation penalty. If you are uploading your validator for the first time, you will likely want to start with it disabled until your CometBFT & pd instances have caught up to the consensus block height.

Note that by default the enabled field is set to false, and must be set to true to activate your validator.

In the default template, there is a funding stream declared to contribute funds to the Community Pool. This is not required, and may be altered or removed if you wish.

Setting the consensus key

In the command above, the --tendermint-validator-keyfile flag was used to instruct pcli to import the consensus key for the CometBFT identity. This works well if pcli and pd are used on the same machine. If you are running them in separate environments, you can omit the flag, and pcli will generate a random key in the template. You must then manually update the consensus_key. You can get the correct value for consensus_key from your cometbft configs:

$ grep -A3 pub_key ~/.penumbra/testnet_data/node0/cometbft/config/priv_validator_key.json
  "pub_key": {
    "type": "tendermint/PubKeyEd25519",
    "value": "Fodjg0m1kF/6uzcAZpRcLJswGf3EeNShLP2A+UCz8lw="
  },

Copy the string in the value field and paste that into your validator.toml, as the value field under the [consensus_key] heading.

Configuring funding streams

Unlike the Cosmos SDK, which has validators specify a commission percentage that goes to the validator, Penumbra uses funding streams, a list of pairs of commission amounts and addresses. This design allows validators to dedicate portions of their commission non-custodially – for instance, a validator could declare some amount of commission to cover their operating costs, and another that would be sent to an address controlled by the Community Pool.
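The commission arithmetic for funding streams can be sketched as follows. The basis-point convention (1 bps = 0.01%, so rate_bps / 10,000 of the reward) is standard; the function and recipient names here are illustrative:

```python
# Funding streams are (recipient, rate_bps) pairs; each stream receives
# rate_bps basis points of the validator's reward, and the remainder
# flows to delegators. Illustrative arithmetic only.

def split_rewards(reward: int, funding_streams):
    payouts = {recipient: reward * bps // 10_000 for recipient, bps in funding_streams}
    payouts["delegators"] = reward - sum(payouts.values())
    return payouts

# 1% to operating costs, 1% to the Community Pool, as in the template above.
streams = [("validator-ops-address", 100), ("CommunityPool", 100)]
print(split_rewards(1_000_000, streams))
# → {'validator-ops-address': 10000, 'CommunityPool': 10000, 'delegators': 980000}
```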

Uploading a definition

After setting up metadata, funding streams, and the correct consensus key in your validator.toml, you can upload it to the chain:

pcli validator definition upload --file validator.toml

And verify that it’s known to the chain:

pcli query validator list --show-inactive

However, your validator doesn’t have anything delegated to it yet, and will remain in an Inactive state until it receives enough delegations to place it in the active set of validators.

Delegating to your validator

First find your validator’s identity key:

pcli validator identity

And then delegate some amount of penumbra to it:

pcli tx delegate 1penumbra --to [YOUR_VALIDATOR_IDENTITY_KEY]

You should then see your balance of penumbra decreased and that you have received some amount of delegation tokens for your validator:

pcli view balance

Voting power will be calculated on the next epoch transition after your delegation takes place. Assuming that your delegation was enough to place your validator in the top N validators by voting power, it should appear in the validator list as Active after the next epoch transition. The epoch duration and the active validator limit are chain parameters, and will vary by deployment. You can find the values in use for the current chain in its genesis.json file.
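Selection of the active set described above amounts to ranking validators by voting power and taking the top N. A hedged sketch, with hypothetical names and numbers:

```python
# The active set is the top N validators by voting power, recomputed at
# each epoch transition; N is the chain's active validator limit.

def active_set(validators: dict, limit: int) -> list:
    ranked = sorted(validators, key=validators.get, reverse=True)
    return ranked[:limit]

# Example powers; with limit=3, the lowest-powered validator stays inactive.
powers = {"val-a": 500, "val-b": 900, "you": 250, "val-c": 100}
print(active_set(powers, limit=3))  # → ['val-b', 'val-a', 'you']
```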

Updating your validator

First fetch your existing validator definition from the chain:

pcli validator definition fetch --file validator.toml

Then make any changes desired and make sure to increase sequence_number by at least 1! The sequence_number is a unique, increasing identifier for the version of the validator definition.

After updating the validator definition you can upload it again to update your validator metadata on-chain:

pcli validator definition upload --file validator.toml

Performing chain upgrades

When consensus-breaking changes are made to the Penumbra protocol, node operators must coordinate upgrading to the new version of the software at the same time. Penumbra uses a governance proposal for scheduling upgrades at a specific block height.

Upgrade process abstractly

At a high level, the upgrade process consists of the following steps:

  1. Governance proposal submitted, specifying explicit chain height n for halt to occur.
  2. Governance proposal passes.
  3. Chain reaches specified height n-1, nodes stop generating blocks.
  4. Manual upgrade is performed on each validator and fullnode:
    1. Install the new version of pd.
    2. Apply changes to pd and cometbft state via pd migrate.
    3. Restart node.

After the node is restarted on the new version, it should be able to talk to the network again. Once enough validators with sufficient stake weight have upgraded, the network will resume generating blocks.

Performing a chain upgrade

Consider performing a backup as a preliminary step during the downtime, so that your node state is recoverable.

  1. Stop both pd and cometbft. Depending on how you run Penumbra, this could mean sudo systemctl stop penumbra cometbft.
  2. Download the latest version of pd and install it. Run pd --version and confirm you see v0.77.3 before proceeding.
  3. Optionally, use pd export to create a snapshot of the pd state.
  4. Apply the migration with pd migrate --home PD_HOME --comet-home COMETBFT_HOME. If using the default home locations (from pd testnet join), you can omit the paths and just run pd migrate.

Finally, restart the node, e.g. sudo systemctl restart penumbra cometbft. Check the logs, and you should see the chain progressing past the halt height n.

Indexing ABCI events

The pd software emits ABCI events while processing blocks. By default, these events are stored in CometBFT’s key-value database locally, but node operators can opt in to writing the events to an external PostgreSQL database.

Configuring a Penumbra node to write events to postgres

  1. Create a Postgres database, username, and credentials.
  2. Apply the CometBFT schema to the database: psql -d $DATABASE_NAME -f <path/to/schema.sql>
  3. Edit the CometBFT config file, by default located at ~/.penumbra/testnet_data/node0/cometbft/config/config.toml, specifically its [tx_index] section, and set:
    1. indexer = "psql"
    2. psql-conn = "<DATABASE_URL>"
  4. Run pd and cometbft as normal.

The format for DATABASE_URL is specified in the Postgres docs. After the node is running, check the logs for errors. Query the database with SELECT height FROM blocks ORDER BY height DESC LIMIT 10; and confirm you’re seeing the latest blocks being added to the database.
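The connection string follows the standard Postgres connection-URI shape, postgresql://user:password@host:port/dbname. A small sketch for assembling it (the credentials and database name below are placeholders):

```python
# Build a DATABASE_URL in the standard Postgres connection-URI format.
# All values here are illustrative placeholders.

def psql_url(user: str, password: str, host: str, dbname: str, port: int = 5432) -> str:
    return f"postgresql://{user}:{password}@{host}:{port}/{dbname}"

print(psql_url("penumbra", "hunter2", "localhost", "penumbra_events"))
# → postgresql://penumbra:hunter2@localhost:5432/penumbra_events
```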

Rebuilding an index database from scratch

If you are joining the network after a chain upgrade, the events from before the upgrade boundary will not be available to your node while it catches up to the current height. To emit historical events, you will need access to archives of CometBFT state created before each planned upgrade. The process then becomes:

  1. Restore node state from backup.
  2. Ensure you’re using the appropriate pd and cometbft versions for the associated state.
  3. Run pd migrate --ready-to-start to permit pd to start up.
  4. Run CometBFT with extra options: --p2p.pex=false --p2p.seeds='' --moniker archive-node-1
  5. Run pd and cometbft as normal, taking care to use the appropriate versions.

Then configure another node with indexing support, as described above, and join the second node to the archive node. As it streams blocks, the ABCI events will be recorded in the database.

Debugging a Penumbra node

Below is a list of FAQs about running a Penumbra node.

How do I check whether my node is connected to other peers?

Poll the CometBFT RPC for current number of peers:

curl -s http://localhost:26657/net_info | jq .result.n_peers

How do I check whether my node is synchronized with the network?

Poll the CometBFT RPC for sync status:

curl -s http://localhost:26657/status | jq .result.sync_info

Specifically, check that catching_up=false. You can also compare the latest_block_height value with the tip of the chain visible when running pcli view sync.

How long does it take to synchronize with the network?

A new node will sync at a rate of approximately 100,000 blocks every 6 hours.
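From that rate you can estimate total sync time for a given chain height; a quick back-of-the-envelope sketch:

```python
# Rough sync-time estimate from the ~100,000 blocks / 6 hours rate above.

def sync_hours(target_height: int, blocks_per_6h: int = 100_000) -> float:
    return target_height / blocks_per_6h * 6

print(sync_hours(1_200_000))  # → 72.0 (hours, i.e. about 3 days)
```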

How do I check whether my validator is active?

You can view the list of known validators by running:

pcli query validator list --show-inactive

Remember that it will take time for delegations made against your validator to become bonded. You can check how long this will take by running:

pcli query chain info --verbose

Inspect the values for Current Block Height, Current Epoch, and Epoch Duration. You’ll need to wait until the next epoch boundary post-delegation for the delegated weight to be computed in your validator’s voting power.
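The wait until the next epoch boundary is simple arithmetic on those values. A sketch, using made-up example numbers rather than real chain parameters:

```python
# Blocks remaining until the next epoch boundary, after which newly
# delegated weight is reflected in voting power. Inputs correspond to
# the Current Block Height and Epoch Duration values reported above.

def blocks_until_next_epoch(current_height: int, epoch_duration: int) -> int:
    return epoch_duration - (current_height % epoch_duration)

# e.g. height 10,250 with a (hypothetical) 719-block epoch duration:
print(blocks_until_next_epoch(10_250, 719))  # → 535
```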

Local RPC with pclientd

Penumbra’s architecture separates public shared state from private per-user state. Each user’s state is known only to them and other parties they disclose it to. While this provides many advantages – and enables the core features of the chain – it also creates new operational challenges. Most existing blockchain tooling is built on the assumption that all chain state is available from a fullnode via RPC, allowing the tooling to be relatively stateless, obtaining its information from an RPC.

The role of pclientd, the Penumbra client daemon, is to restore this paradigm, allowing third-party tooling to query both public and private state via RPC, and to handle all of the Penumbra-specific cryptography. It does this by:

  • scanning and synchronizing a local, decrypted copy of all of a specific user’s private data;
  • exposing that data through a “view service” RPC that can query state and plan and build transactions;
  • proxying requests for public chain state to its fullnode;
  • optionally authorizing and signing transactions if configured with a spending key.

Client software can be written in any language with GRPC support, using pclientd as a single endpoint for all requests.

   ┌────────┐  ┌─────────────────┐  ┌────────┐
   │  Client│  │         pclientd│  │Penumbra│
   │Software│◀─┼─┐             ┌─┼─▶│Fullnode│
   └────────┘  │ │             │ │  └────────┘
            ╭  │ │ ┌───────┐   │ │
     public │  │ │ │grpc   │   │ │
 chain data │  │ ├▶│proxy  │◀──┤ │
            ╰  │ │ └───────┘   │ │
               │ │             │ │
            ╭  │ │ ┌───────┐   │ │
    private │  │ │ │view   │   │ │
  user data │  │ ├▶│service│◀──┘ │
            ╰  │ │ └───────┘     │
               │ │               │
            ╭  │ │ ┌ ─ ─ ─ ┐     │
   spending │  │ │  custody      │
 capability │  │ └▶│service│     │
 (optional) ╰  │    ─ ─ ─ ─      │


Currently, pclientd does not support any kind of transport security or authentication mechanism. Do not expose its RPC to untrusted access. We intend to remedy this gap in the future.

Configuring pclientd

First, install pclientd following the instructions for installing pcli but downloading pclientd rather than pcli.

Generating configs

pclientd can run in either view mode, with only a full viewing key, or custody mode, with the ability to sign transactions.

To initialize pclientd in view mode, run

pclientd init --view FULL_VIEWING_KEY

The FULL_VIEWING_KEY can be obtained from the config.toml generated by pcli init.

To initialize pclientd in custody mode, run

pclientd init --custody -

to read a seed phrase from stdin, or

pclientd init --custody "SEED PHRASE"

to specify the seed phrase on the command line.

Authorization policy

When run in custody mode, pclientd supports configurable authorization policy for transaction signing. The default set of policies created by init --custody is an example, and needs to be edited before use.

For example, pclientd init --custody might generate output like

full_viewing_key = 'penumbrafullviewingkey1f33fr3zrquh869s3h8d0pjx4fpa9fyut2utw7x5y7xdcxz6z7c8sgf5hslrkpf3mh8d26vufsq8y666chx0x0su06ay3rkwu74zuwqq9w8aza'
grpc_url = ''
bind_addr = ''

[kms_config]
spend_key = 'penumbraspendkey1e9gf5g8jfraap4jqul7e80vv0zrnwpsm4ke0df38ejrfh430nu4s9gc22d'

[[kms_config.auth_policy]]
type = 'DestinationAllowList'
allowed_destination_addresses = ['penumbrav2t13vh0fkf3qkqjacpm59g23ufea9n5us45e4p5h6hty8vg73r2t8g5l3kynad87u0n9eragf3hhkgkhqe5vhngq2cw493k48c9qg9ms4epllcmndd6ly4v4dw2jcnxaxzjqnlvnw']

[[kms_config.auth_policy]]
type = 'OnlyIbcRelay'

[[kms_config.auth_policy]]
type = 'PreAuthorization'
method = 'Ed25519'
required_signatures = 1
allowed_signers = ['+Osq5OiWKos57KigDjd3XCG/YLUOSUbuBly4LBBpJTg=']

The kms_config section controls the configuration of the (software) key management system. Each kms.auth_policy section is a separate policy that must be satisfied for transaction authorization to succeed. To allow any transaction to be authorized, simply delete all the policies.

Destination allowlisting

type = 'DestinationAllowList'
allowed_destination_addresses = ['penumbrav2t13vh0fkf3qkqjacpm59g23ufea9n5us45e4p5h6hty8vg73r2t8g5l3kynad87u0n9eragf3hhkgkhqe5vhngq2cw493k48c9qg9ms4epllcmndd6ly4v4dw2jcnxaxzjqnlvnw']

This policy only allows transactions that send funds to the addresses on the allowlist. Transactions sending funds to any other address will be rejected.


IBC relaying

type = 'OnlyIbcRelay'

This policy only allows transactions with the following actions: IbcAction, Spend, Output. The latter two are required to pay fees, so this policy should be combined with a DestinationAllowList to prevent sending funds outside of the relayer’s account.


Pre-authorization

type = 'PreAuthorization'
method = 'Ed25519'
required_signatures = 1
allowed_signers = ['+Osq5OiWKos57KigDjd3XCG/YLUOSUbuBly4LBBpJTg=']

This policy only allows transactions submitted with at least required_signatures pre-authorizing Ed25519 signatures from signers on the allowed_signers list. This allows clients to authenticate authorization requests to pclientd using standard Ed25519 signatures rather than Penumbra-specific decaf377-rdsa signatures. In the future, more pre-authorization methods may be added (e.g., WebAuthn).

Making RPC requests

pclientd exposes a GRPC and GRPC-web endpoint at its bind_addr. Several services are available.

To interactively explore requests and responses, try running GRPCUI locally or using Buf Studio in the browser. Buf Studio has a nicer user interface but does not (currently) support streaming requests. The Buf Studio link is preconfigured to make requests against a local pclientd instance with the default bind_addr, but can be aimed at any endpoint.

Accessing public chain state

pclientd has an integrated GRPC proxy, routing requests about public chain state to the fullnode it’s connected to.

Documentation for these RPCs is linked from Buf Studio; follow those links for more information.

Accessing private chain state

Access to a user’s private state is provided by the ViewService RPC.

In addition to ordinary queries, like Balances, which gets a user’s balances by account, the RPC also contains utility methods that allow computations involving cryptography. For instance, the AddressByIndex request computes a public address from an account index, and the IndexByAddress request decrypts an address to its private index.

Finally, the view service can plan and build transactions, as described in the next section.

Requesting transaction authorization

If pclientd was configured in custody mode, it exposes a CustodyService.

This allows authorization of a TransactionPlan, as described in the next section.

Building Transactions

Using the view and custody services to construct a transaction has four steps.

Plan the Transaction

Using the TransactionPlanner RPC in the view service, compute a TransactionPlan.

This RPC translates a general intent, like “send these tokens to this address” into a fully deterministic plan of the exact transaction, with all spends and outputs, all blinding factors selected, and so on.

Authorize the Transaction

With a TransactionPlan in hand, use the Authorize RPC to request authorization of the transaction from the custody service.

Note that authorization happens on the cleartext transaction plan, not the shielded transaction, so that the custodian can inspect the transaction before signing it.

Build the Transaction

With the TransactionPlan and AuthorizationData in hand, use the WitnessAndBuild RPC to have the view service build the transaction, using the latest witness data to construct the ZK proofs.

Broadcast the Transaction

With the resulting shielded Transaction complete, use the BroadcastTransaction request to broadcast the transaction to the network.

The await_detection parameter will wait for the transaction to be confirmed on-chain. Using await_detection is a simple way to ensure that different transactions can’t conflict with each other.
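Putting the four steps together, the flow against pclientd's services looks roughly like this (pseudocode; see the protobuf definitions for the exact request and response fields):

```
# 1. Plan: translate intent into a fully deterministic TransactionPlan.
plan = ViewService.TransactionPlanner(intent)

# 2. Authorize: the custodian inspects and signs the cleartext plan.
auth = CustodyService.Authorize(plan)

# 3. Build: construct the shielded transaction and its ZK proofs.
tx = ViewService.WitnessAndBuild(plan, auth)

# 4. Broadcast: submit, optionally waiting for on-chain detection.
ViewService.BroadcastTransaction(tx, await_detection = true)
```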


This section is for developers working on Penumbra source code.

Compiling from source

Penumbra is written in Rust. To build it, you will need a recent stable version of Rust, as well as a few OS-level dependencies. We don’t support building on Windows. If you need to use Windows, consider using WSL instead.

Installing the Rust toolchain

This requires a recent (>= 1.75) stable version of the Rust compiler; see the official Rust installation instructions. Don’t forget to reload your shell so that cargo is available in your $PATH!

You can verify the Rust compiler version by running rustc --version, which should report version 1.75 or later. The project uses a rust-toolchain.toml file, which will ensure that your version of Rust stays current enough to build the project from source.
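For reference, a rust-toolchain.toml pin has roughly the following shape. The channel shown is illustrative; the repository's actual file is authoritative.

```toml
# Illustrative toolchain pin; rustup reads this file automatically.
[toolchain]
channel = "1.75.0"
```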

Installing build prerequisites


Linux

You may need to install some additional packages in order to build pcli, depending on your distribution. For a bare-bones Ubuntu installation, you can run:

sudo apt-get install build-essential pkg-config libssl-dev clang git-lfs

For a minimal Fedora/CentOS/RHEL image, you can run:

sudo dnf install openssl-devel clang git cargo rustfmt git-lfs


macOS

You may need to install the command-line developer tools if you have never done so:

xcode-select --install

You’ll also need to install Git LFS, which you can do via Homebrew:

brew install git-lfs

Making sure that git-lfs is installed

Running git lfs install will make sure that git-lfs is correctly installed on your machine.

Cloning the repository

Once you have installed the above packages, you can clone the repository:

git clone

To build the versions of pcli, pd, etc. compatible with the current testnet, navigate to the penumbra/ folder, fetch the latest from the repository, and check out the latest tag for the current testnet:

cd penumbra && git fetch && git checkout v0.77.3

If you want to build the most recent version compatible with the “preview” environment, then run git checkout main instead.

Building the binaries

Then, build all the project binaries using cargo:

cargo build --release

Linking Against RocksDB (Optional)

Development builds can link against a prebuilt RocksDB to avoid the cost of recompiling it for the storage crates in the Cargo workspace. That recompilation shows up as a long-running librocksdb-sys(build) step when building or testing crates in the monorepo.

Building librocksdb.a from source

First, clone the rocksdb repository:

# Clone the repository, and enter that directory.
git clone && cd rocksdb

# Checkout the version of rocksdb used in `librocksdb-sys`.
git checkout 6a43615

# Add an environment variable pointing to this repository:

# Compile the static `librocksdb.a` library to link against:
make static_lib

Building libsnappy.a from source

Next, clone the snappy repository and follow the instructions to build it:

# Clone the repository, and enter that directory.
git clone && cd snappy

# Checkout the version of snappy used in `librocksdb-sys`.
git checkout 2b63814

# Initialize the submodules.
git submodule update --init

# Build snappy using cmake.
mkdir build
cd build && cmake .. && make

# Add an environment variable pointing to the build/ directory.

Building Penumbra

Once you’ve built rocksdb and snappy and set the environment variables, the librocksdb-sys crate will search those directories for the compiled static libraries when it is rebuilt.
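As one possible setup: the ROCKSDB_LIB_DIR/SNAPPY_LIB_DIR and *_STATIC variables below are the ones honored by librocksdb-sys's build script, and the paths are placeholders for wherever you cloned the repositories.

```shell
# Point librocksdb-sys at the prebuilt static libraries.
# Paths are placeholders for your own rocksdb/snappy checkouts.
export ROCKSDB_LIB_DIR="$HOME/src/rocksdb"
export ROCKSDB_STATIC=1
export SNAPPY_LIB_DIR="$HOME/src/snappy/build"
export SNAPPY_STATIC=1
```

With these set, cargo build --release should link against the prebuilt libraries instead of recompiling RocksDB.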

Devnet Quickstart

This page describes a quickstart method for running pd+cometbft to test changes during development.

To start, you’ll need to install a specific version of CometBFT.

Generating configs

To generate a clean set of configs, run

cargo run --release --bin pd -- testnet generate

This will write configs to ~/.penumbra/testnet_data/.

Running pd

You’ll probably want to set RUST_LOG. Here’s one suggestion that’s quite verbose:

# Optional. Expect about 20GB/week of log data for pd with settings below.
export RUST_LOG="info,pd=debug,penumbra=debug,jmt=debug"

To run pd, run

cargo run --release --bin pd -- start

This will start but won’t do anything yet, because CometBFT isn’t running.

Running cometbft

To run CometBFT, run

cometbft --home ~/.penumbra/testnet_data/node0/cometbft/ start

in another terminal window.

Running pcli

To interact with the chain, configure a wallet pointing at the localhost node:

cargo run --release --bin pcli -- --home ~/.local/share/pcli-localhost view reset
cargo run --release --bin pcli -- init --grpc-url http://localhost:8080 soft-kms generate
# or, to reuse an existing seed phrase:
cargo run --release --bin pcli -- init --grpc-url http://localhost:8080 soft-kms import-phrase

and then pass the --home flag to any commands you run to point pcli at your local node, e.g.,

cargo run --release --bin pcli -- --home ~/.local/share/pcli-localhost view balance

By default, pd testnet generate uses the testnet allocations from the testnets/ directory in the git repo. If you have an address included in those files, then use pcli init soft-kms import-phrase. Otherwise, edit the genesis.json to add your address.
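If you do edit genesis.json, each allocation pairs an amount and denom with an address. An entry has roughly the shape below; the field names and exact location within genesis.json may vary by pd version, so copy the structure of an existing entry rather than relying on this sketch.

```json
{
  "amount": "1000000000",
  "denom": "upenumbra",
  "address": "<YOUR_LOCAL_PENUMBRA_ADDRESS>"
}
```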

Resetting and restarting

After making changes, you may want to reset and restart the devnet:

cargo run --release --bin pd -- testnet unsafe-reset-all

You’ll probably also want to reset your wallet state:

cargo run --release --bin pcli -- --home ~/.local/share/pcli-localhost view reset

At this point you’re ready to generate new configs, and restart both pd and cometbft. The order they’re started in doesn’t particularly matter for correctness, because cometbft will retry connecting to the ABCI server until it succeeds.

Optional: running smoke-tests

Once you have a working devnet running, you should be able to run the smoke tests successfully. This can be useful if you are looking to contribute to Penumbra, or if you need to check that your setup is correct.

To run the smoke tests:

  1. Make sure you have a devnet running (see previous steps)
  2. Run integration tests:
PENUMBRA_NODE_PD_URL= PCLI_UNLEASH_DANGER=yes cargo test --package pcli -- --ignored --test-threads 1

Find the exact commands for each binary’s smoke tests in deployments/compose/process-compose-smoke-test.yml. You can also run the entire smoke test suite end-to-end via just smoke, including setup and teardown of the network. If you want to execute the tests against an already-running devnet, however, use manual invocations like the cargo test example above. You’ll need to install process-compose to use the automated setup.

Working with SQLite

The view server uses SQLite3 to store client state locally. During debugging, you may wish to interact with the sqlite db directly. To do so:

$ sqlite3 ~/.local/share/pcli/pcli-view.sqlite
sqlite> PRAGMA table_info(tx);

sqlite> SELECT json_object('tx_hash', quote(tx_hash)) FROM tx;

Note that because binary data is stored directly in the db (see the BLOB columns in the pragma output), you’ll need to wrap the blob, e.g. via quote and json_object as above, to get readable info.
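For example, this toy reproduction (a hypothetical one-column tx table, not the real pcli schema) shows reading a BLOB back as hex with sqlite3, assuming sqlite3 is on your PATH:

```shell
# Create a throwaway db with a BLOB column and read it back as hex.
db="$(mktemp)"
sqlite3 "$db" "CREATE TABLE tx (tx_hash BLOB); INSERT INTO tx VALUES (x'deadbeef');"
out="$(sqlite3 "$db" "SELECT lower(hex(tx_hash)) FROM tx;")"
echo "$out"   # deadbeef
```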

Viewing IBC assets

To list assets that have been transferred in via IBC, query on the denom for a prefix of transfer/:

sqlite> SELECT denom, json_object('asset_id', quote(asset_id)) FROM assets WHERE denom LIKE 'transfer/%' ;

Building documentation

mdBook docs

The protocol docs and the guide (this document) are built using mdBook and auto-deployed on pushes to main. To build locally:

  1. Install the requirements: cargo install mdbook mdbook-katex mdbook-mermaid mdbook-linkcheck
  2. Run mdbook serve from docs/protocol (for the protocol spec) or from docs/guide (for this document).

The hosting config uses Firebase. To debug Firebase-specific functionality like redirects, use firebase emulators:start to run a local webserver. You’ll need to rebuild the docs with mdbook build to get livereload functionality, however.

Rust API docs

The Rust API docs can be built with ./deployments/scripts/rust-docs. The landing page, the top-level index.html, is handled as a special case. If you added new crates by appending a -p <crate_name> to the rust-docs script, then you must rebuild the index page via:

You’ll need to use the nightly toolchain for Rust to build the docs. In some cases, you’ll need a specific version. To configure locally:

rustup toolchain install nightly-2023-05-15

CI will automatically rebuild all our docs on merges into main.

Maintaining protobuf specs

The Penumbra project dynamically generates code for interfacing with gRPC. The following locations within the repository are relevant:

  • proto/penumbra/**/*.proto, the developer-authored spec files
  • crates/proto/src/gen/*.rs, the generated Rust code files
  • proto/go/**/*.pb.go, the generated Go code files
  • tools/proto-compiler/, the build logic for generating the Rust code files

We use buf to auto-publish the protobuf schemas to the Buf Schema Registry, and to generate Go and TypeScript packages. The Rust code files are generated with our own tooling, located at tools/proto-compiler.

Installing protoc

The protoc tool is required to generate our protobuf specs via tools/proto-compiler. We mandate the use of a specific major version of the protoc tool, to make outputs predictable. Currently, the supported version is 24.x. Obtain the most recent pre-compiled binary from the protoc release page for that major version. After installing, run protoc --version and confirm you’re running 24.4 or newer. Don’t install protoc from package managers such as apt, as those versions are often outdated and will not work with Penumbra.

To install the protoc tool from the zip file, extract it to a directory on your PATH:

unzip -d ~/.local/

Installing buf

The buf tool is required to update lockfiles used for version management in the Buf Schema Registry. Visit the buf download page to obtain a version. After installing, run buf --version and confirm you’re running 1.32.0 or newer.

Building protos

From the top-level of the git repository:


Then run git status to determine whether any changes were made. The build process is deterministic, so regenerating multiple times from the same source files should not change the output.

If the generated output would change in any way, CI will fail, prompting the developer to commit the changes.

Updating buf lockfiles

We pin specific versions of upstream Cosmos deps in the buf lockfile for our proto definitions. Doing so avoids a tedious chore of needing to update the lockfile frequently when the upstream BSR entries change. We should review these deps periodically and bump them, as we would any other dependency.

cd proto/penumbra
# edit buf.yaml to remove the tags, i.e. suffix `:<tag>`
buf dep update

Then commit and PR in the results.


Metrics are an important part of observability, allowing us to understand what the Penumbra software is doing. Penumbra Labs runs Grafana instances for the public deployments:


There’s a more comprehensive WIP dashboard, gated by basic auth for PL team:


Check the usual place for credentials. Eventually those views should be exported as public references.

Adding Metrics

We use a common structure for organizing metrics code throughout the penumbra workspace. Each crate that uses metrics has a top-level metrics module, which is private to the crate. That module contains:

  • a re-export of the entire metrics crate: pub use metrics::*;
  • &'static str constants for every metrics key used by the crate;
  • a pub fn register_metrics() that registers and describes all of the metrics used by the crate;

Finally, the register_metrics function is publicly re-exported from the crate root.

The only part of this structure visible outside the crate is the register_metrics function in the crate root, allowing users of the library to register and describe the metrics it uses on startup.

Internally to the crate, all metrics keys are in one place, rather than being scattered across the codebase, so it’s easy to see what metrics there are. Because the metrics module re-exports the contents of the metrics crate, doing use crate::metrics; is effectively a way to monkey-patch the crate-specific constants into the metrics crate, allowing code like:

use crate::metrics;

metrics::increment_counter!(
    metrics::MEMPOOL_CHECKTX_TOTAL,
    "kind" => "new",
    "code" => "1"
);
The metrics keys themselves should:

  • follow the Prometheus metrics naming guidelines
  • have an initial prefix of the form penumbra_CRATE, e.g., penumbra_stake, penumbra_pd, etc;
  • have some following module prefix that makes sense relative to the other metrics in the crate.

For instance:

pub const MEMPOOL_CHECKTX_TOTAL: &str = "penumbra_pd_mempool_checktx_total";

Backing up Grafana

After being changed, Grafana dashboards should be backed up to the repository for posterity and redeployment.

Grafana has an import/export feature that we use for maintaining our dashboards.

  1. View the dashboard you want to export, and click the share icon in the top bar.
  2. Choose Export, and enable Export for sharing externally, which will generalize the datasource.
  3. Download the JSON file, renaming it as necessary, and copy into the repo (config/grafana/dashboards/)
  4. PR the changes into main, and confirm on preview post-deploy that it works as expected.

Editing metrics locally

To facilitate working with metrics locally, first run a pd node on your machine with the metrics endpoint exposed. Then, you can spin up a metrics sidecar deployment:

just metrics

Note that this setup only works on Linux hosts, due to the use of host networking, so the metrics containers can reach network ports on the host machine.

To add new Grafana visualizations, open http://localhost:3000 and edit the existing dashboards. When you’re happy with what you’ve got, follow the “Backing up Grafana” instructions above to save your work.

Zero-Knowledge Proofs

Test Parameter Setup

Penumbra’s zero-knowledge proofs require circuit-specific parameters to be generated in a preprocessing phase. Two keys are generated for each circuit: the proving key and the verifying key, used by the prover and verifier respectively.

For development purposes only, we have a crate in tools/parameter-setup that lets one generate the proving and verifying keys:

cargo run --release --bin penumbra-parameter-setup

The verifying and proving keys for each circuit will be created in a serialized form in the proof-params/src/gen folder. Note that the keys will be generated for all circuits, so you should commit only the keys for the circuits that have changed.

The proving keys are tracked using Git-LFS. The verifying keys are stored directly in git, since they are small (around 1 KB each).

Adding a new Proof

To add a new circuit to the parameter setup, you should modify tools/parameter-setup/src/ before running cargo run.

Then edit penumbra-proof-params to reference the new parameters created in proof-params/src/gen.

Circuit Benchmarks

We have benchmarks for all proofs in the penumbra-bench crate. You can run them via:

cargo bench

Performance as of commit ce2d319bd5534fd28600227b28506e32b8504493 benchmarked on a 2023 Macbook Pro M2 (12 core CPU) with 32 GB memory and the parallel feature enabled:

Proof                  Number of constraints   Proving time
Delegator vote         36,723                  389ms
Undelegate claim       14,776                  139ms
Nullifier derivation   394                     15ms

zk-SNARK Ceremony Benchmarks

Run benchmarks for the zk-SNARK ceremony via:

cd crates/crypto/proof-setup
cargo bench

Performance as of commit 1ed963657c16e49c65a8e9ecf998d57fcce8f200 benchmarked on a 2023 Macbook Pro M2 (12 core CPU) with 32 GB memory using 37,061 constraints (SwapClaim circuit) (note that in practice the performance will be based on the next power of two, for the most part):

Phase 1 run        71.58s
Phase 1 check      147.41s
Phase transition   131.72s
Phase 2 run        14.76s
Phase 2 check      0.21s

Working with gRPC for Penumbra

The Penumbra pd application exposes a gRPC service for integration with other tools, such as pcli or the web extension. A solid understanding of how the gRPC methods work is helpful when building software that interoperates with Penumbra.

Using gRPC UI

The Penumbra Labs team runs gRPC UI instances for testnet deployments:

You can use this interface to perform queries against the relevant chain. It’s also possible to run gRPC UI locally on your machine, to connect to a local devnet.

Using Buf Studio

The Buf Studio webapp provides a polished GUI and comprehensive documentation. However, a significant limitation for use with Penumbra is that it lacks support for streaming requests, such as penumbra.core.component.compact_block.v1.CompactBlockRangeRequest.

To get started with Buf Studio, you can use the publicly available gRPC endpoint from the testnet deployments run by Penumbra Labs:

  • For the current testnet, use
  • For ephemeral devnets, use

Set the request type to gRPC-web at the bottom of the screen. You can then select a Method and explore the associated services. Click Send to submit the request and view response data in the right-hand pane.

Interacting with local devnets

Regardless of which interface you choose, you can connect to an instance of pd running on your machine, which can be useful while adding new features. First, make sure you’ve joined a testnet by setting up a node on your local machine. Once it’s running, you can connect directly to the pd port via http://localhost:8080.

Alternatively, you can use pclientd. First, make sure you’ve configured pclientd locally with your full viewing key. Once it’s running, you can connect directly to the pclientd port via http://localhost:8081.

Testing IBC

This guide explains how to work with IBC functionality while developing Penumbra.

Making Penumbra -> Osmosis outbound transfers, via pcli

See the IBC user docs for how to use pcli to make an outbound IBC withdrawal, to a different testnet.

Making Osmosis -> Penumbra inbound transfers, via hermes

Transferring from Osmosis to Penumbra requires making an Osmosis transaction. The osmosisd CLI tooling unfortunately does not work for IBC transfers. To move funds from an Osmosis testnet to a Penumbra chain, use the hermes binary from the Penumbra fork. What you’ll need:

  • a local checkout of Hermes
  • your own osmosis wallet, with funds from the testnet faucet
  • channel info for both chains (consult pcli query ibc channels)
  • a penumbra address

You should use your own Osmosis wallet, with funds from the testnet faucet, and configure Hermes locally on your workstation with key material. Do not reuse the Hermes relayer instance, as sending transactions from its wallets while it’s relaying can break things.

# Hop into the hermes repo and build it:
cargo build --release

# Edit `config-penumbra-osmosis.toml` with your Penumbra wallet SpendKey,
# and make sure the Penumbra chain id is correct.
# Add your osmosis seed phrase to the file `mnemonic-osmosis-transfer`,
# then import it:
cargo run --release --bin hermes -- \
    --config config-penumbra-osmosis.toml keys add \
    --chain osmo-test-5 --mnemonic-file ./mnemonic-osmosis-transfer

# Then run a one-off command to trigger an outbound IBC transfer,
# from Osmosis to Penumbra:
cargo run --release --bin hermes -- \
    --config ./config-penumbra-osmosis.toml tx ft-transfer \
    --dst-chain <PENUMBRA_CHAIN_ID> --src-chain osmo-test-5 --src-port transfer \
    --src-channel <CHANNEL_ID_ON_OSMOSIS_CHAIN> --denom uosmo --amount 100 \
    --timeout-height-offset 10000000 --timeout-seconds 10000000 \
    --receiver <PENUMBRA_ADDRESS>

You can view account history for the shared Osmosis testnet account in a testnet block explorer; change the address at the end of the URL to your account to confirm that your test transfer worked.

Updating Hermes config for a new testnet

See the procedure in the wiki for up to date information.

Use the IBC user docs to make a test transaction, to ensure that relaying is working. In the future, we should consider posting the newly created channel to the IBC docs guide, so community members can use it.

Working with a local devnet

Use this approach while fixing bugs or adding features. Be aware that creating a new channel on the public Osmosis testnet creates lingering state on that counterparty chain. Be respectful.

  1. Create a devnet. Make note of the randomly generated chain id emitted in the logs, as we’ll need it to configure Hermes.
  2. Checkout the Penumbra fork of Hermes, and build it with cargo build --release.
  3. Edit the config-devnet-osmosis.toml file to use the chain id for your newly created devnet.
  4. Add Osmosis key material to Hermes. Look up the Osmosis recovery phrase stored in shared 1Password, then:
echo "SEED PHRASE" > ./mnemonic
cargo run --release --bin hermes -- --config config-devnet-osmosis.toml keys add --chain osmo-test-5 --mnemonic-file ./mnemonic
  5. Create a new channel for this devnet:
cargo run --release --bin hermes -- --config config-devnet-osmosis.toml create channel --a-chain $PENUMBRA_DEVNET_CHAIN_ID --b-chain osmo-test-5 --a-port transfer --b-port transfer --new-client-connection

Hermes will run for a while, emit channel info, and then exit.
  6. Finally, run Hermes:
cargo run --release --bin hermes -- --config config-devnet-osmosis.toml start

You may see a spurious error about “signature key not found: penumbra-wallet: cannot find key file”. Ignore that error: we haven’t implemented fees yet, so no Penumbra keys are required in Hermes. Hermes will emit a summary of the channel info, something like:

# Chain: penumbra-testnet-tethys-8777cb20
  - Client: 07-tendermint-0
  - Client: 07-tendermint-1
    * Connection: connection-0
      | State: OPEN
      | Counterparty state: OPEN
      + Channel: channel-0
        | Port: transfer
        | State: OPEN
        | Counterparty: channel-1675
# Chain: osmo-test-5
  - Client: 07-tendermint-1029
    * Connection: connection-939
      | State: OPEN
      | Counterparty state: OPEN
      + Channel: channel-1675
        | Port: transfer
        | State: OPEN
        | Counterparty: channel-0

Make note of the channels on both the primary (Penumbra devnet) and counterparty (Osmosis testnet) chains. You can use those values to send funds from the Penumbra devnet to the counterparty:

cargo run --release --bin pcli -- -n http://localhost:8080 view reset
# check what funds are available
cargo run --release --bin pcli -- -n http://localhost:8080 view balance
cargo run --release --bin pcli -- -n http://localhost:8080 tx withdraw --to osmo1kh0fwkdy05yp579d8vczgharkcexfw582zj488 --channel 0 --timeout-height 5-2900000 100penumbra

See the IBC pcli docs for more details.


This page links to various resources that are helpful for working with and understanding Penumbra.

Getting started

  • The primary communication hub is our Discord; click the link to join the discussion there.
  • Documentation on how to use pcli, how to run pd, and how to do development can be found in this guide.

For developers


For all those URLs, there’s also a preview version available that tracks the latest tip of the git repo, rather than the current public testnet.

Talks and presentations

These talks were given at various conferences and events, describing different aspects of the Penumbra ecosystem.