Prerequisites

You can install Stellar Core a number of different ways, and once you do, you can configure it to participate in the network at several different levels: it can be either a Basic Validator or a Full Validator. No matter how you install Stellar Core or what kind of node you run, however, you need to connect to the peer-to-peer network and store the state of the ledger in a SQL database.

Hardware Requirements

info

CPU, RAM, disk, and network requirements depend on network activity. If you decide to colocate certain workloads, you will need to take this into account.

For Tier 1 organizations

Tier 1 organizations run three geographically dispersed Full Validators, each meeting these requirements independently. Each node needs its own hardware, its own unique validator key, and its own history archive. Plan for three times the resources listed below, spread across different data centers or cloud regions. See Tier 1 Organizations for the full requirements and onboarding path.

Stellar Core is designed to run on relatively modest hardware so that a whole range of individuals and organizations can participate in the network, and basic nodes should be able to function pretty well without tremendous overhead. That said, the more you ask of your node, the greater the requirements.

The following recommendations were verified against production nodes in April 2024. Hardware requirements grow with network activity; check the stellar-core releases for any notes on updated requirements.

| Node Type | CPU | RAM | Disk | AWS SKU | Google Cloud SKU |
| --- | --- | --- | --- | --- | --- |
| Core Validator Node | 8 vCPUs @ 3.4 GHz | 16 GB | 100 GB NVMe SSD* (10,000 IOPS) | c5d.2xlarge | n4-highcpu-8 |

PostgreSQL colocated on the same machine performs well at this spec; a separate database host is not required for a single validator.

* Disk sizing assumes a 30-day retention window (AUTOMATIC_MAINTENANCE_COUNT at default). See Storage below for details.

Stellar Network Access

Stellar Core interacts with the peer-to-peer network to keep a distributed ledger in sync, which means that your node needs to make certain TCP ports available for inbound and outbound communication.

Inbound

A Stellar Core node needs to allow all IPs to connect to its PEER_PORT over TCP. You can specify a port when you configure Stellar Core, but most people use the default, which is 11625.
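For example, with ufw the inbound rule is a one-liner (assuming the default PEER_PORT; adapt to your firewall of choice):

# Allow inbound peer connections on the default PEER_PORT (11625)
sudo ufw allow 11625/tcp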

Outbound

A Stellar Core node needs to connect to other nodes on the internet via their PEER_PORT over TCP. You can find information about other nodes' PEER_PORTs on a network explorer like Obsrvr Radar, but most use the default port for this as well, which is (again) 11625.
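Both the listening port and the peers to dial are set in stellar-core.cfg. A minimal sketch, with placeholder peer hostnames (PEER_PORT shows the default):

# Port this node listens on, and peers to connect to at startup.
# The hostnames below are placeholders, not real peers.
PEER_PORT=11625
KNOWN_PEERS=["peer-one.example.com:11625", "peer-two.example.com:11625"]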

Internal System Access

Stellar Core also needs to connect to certain internal systems, though exactly how this is accomplished can vary based on your setup.

Inbound

  • Stellar Core exposes an unauthenticated HTTP endpoint on its HTTP_PORT. You can specify a port when you configure Stellar Core, but most people use the default, which is 11626.
  • The HTTP_PORT is used by other systems (such as Stellar RPC) to submit transactions, so this port may have to be exposed to the rest of your internal IP addresses.
  • It's also used to query Stellar Core info, provide metrics, and perform administrative commands such as scheduling upgrades and changing log levels; see the example below.
  • For more on these endpoints, see commands.
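For instance, querying the node locally (assuming the default HTTP_PORT) uses the documented /info and /metrics commands:

# Node status and sync state
curl http://127.0.0.1:11626/info
# Metrics for monitoring
curl http://127.0.0.1:11626/metrics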
Note on exposing the HTTP endpoint

If you need to expose this endpoint to other hosts in your local network, we strongly recommend you use an intermediate reverse proxy server to implement authentication, as in the sketch below. Don't expose the HTTP endpoint to the raw and cruel open internet.
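A minimal sketch, assuming nginx as the reverse proxy (the listen port and htpasswd path are illustrative):

# Require basic auth, then forward to the local stellar-core HTTP_PORT.
# Create /etc/nginx/.htpasswd yourself, e.g. with the htpasswd tool.
server {
    listen 8080;
    location / {
        auth_basic           "stellar-core admin";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:11626;
    }
}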

Outbound

  • Stellar Core requires access to a database (PostgreSQL, for example). If that database resides on a different machine on your network, you'll need to allow that connection. You'll specify the database when you configure Stellar Core, as shown below.
  • You can safely block all other connections.
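The DATABASE setting in stellar-core.cfg takes a libpq-style connection string; the host, database name, and user here are placeholders:

# Placeholder connection string for a PostgreSQL database on another host
DATABASE="postgresql://dbname=stellar host=db.internal.example user=stellar"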

Storage

Stellar Core's local storage needs come from two sources: the buckets directory (which serves as the primary database backend under BucketListDB, the default since stellar-core 21.0) and a much smaller SQL database for metadata. Both are managed entirely by Stellar Core. Local disk usage stays bounded over time — see Why local disk stays bounded below.

How storage breaks down

Approximate sizes for a current default-config validator on Mainnet. These figures should be treated as planning estimates rather than precise measurements.

| Component | Approximate size | Notes |
| --- | --- | --- |
| Buckets directory (BucketListDB) | 20–40 GB | Primary store for live ledger state since stellar-core 21.0. |
| SQL database | A few GB | Post-BucketListDB, used only for non-ledger metadata, transaction history within the retention window, and some DEX queries. Most ledger state tables are dropped at migration. |
| WAL logs, temp files | 5–15 GB | PostgreSQL write-ahead logs and temporary space during maintenance operations. SQLite users will see lower numbers. |

A working set of roughly 30–60 GB is typical. The 100 GB local NVMe included with the recommended c5d.2xlarge (and comparable on Hetzner, OVH, Contabo, and others) leaves comfortable operational headroom on top of that — room for debug captures, re-syncs, and unforeseen operational needs.

Why local disk stays bounded

A common misconception is that validators need to provision storage proportional to network history. They do not.

Live ledger state is bounded by state archival. Every entry on the ledger has a rent balance; when the balance reaches zero, the entry is archived and removed from the live state. Validators store the live state plus a small "Hot Archive" of recently archived entries; when the Hot Archive fills, it is published to the History Archive, the validator retains only the Merkle root of the published tree, and the archived entries themselves are deleted from the validator. The result is that local validator state stays compact even as cumulative network history grows.

History archives live on object storage, not on the validator. Full validators publish history archives to a separate object store (S3, R2, Backblaze B2, etc.) — that's where the multi-TB archive data lives. The validator process itself doesn't hold the archive on its local disk. See Publishing History Archives for the recommended setup.

CATCHUP_COMPLETE=true is almost never the right choice. This setting makes the node sync the entire ledger from genesis on startup and is rarely appropriate for a validator. The standard pattern for new validators — including new Tier 1 candidates — is to sync against current network state, publish a history archive forward from that point, and use stellar-archivist mirror to backfill historical data into the published archive as a separate operation. The validator's local disk requirements are determined by the live state model above, not by historical depth.
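In stellar-core.cfg terms, the standard pattern looks like this (the CATCHUP_RECENT value is illustrative; tune it to your operational needs):

# Sync near the current network state instead of replaying from genesis.
CATCHUP_COMPLETE=false
# Optionally replay a recent window of ledgers on startup (value illustrative).
CATCHUP_RECENT=1024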

Database

Even with BucketListDB as the primary store, Stellar Core still requires a SQL database — either SQLite or PostgreSQL (recommended for production) — for metadata and transaction history.

The SQL database is consulted during consensus and modified atomically when a transaction set is applied to the ledger. Access is random and fine-grained, and it needs to be fast.

If you're using PostgreSQL, we recommend configuring your local database to be accessed over a Unix domain socket and updating the PostgreSQL configuration parameters below:

# !!! DB connection should be over a Unix domain socket !!!
# shared_buffers = 25% of available system ram
# effective_cache_size = 50% of available system ram
# max_wal_size = 5GB
# max_connections = 150
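For the recommended 16 GB machine above, those percentages work out as follows (illustrative values; tune for your own host):

# postgresql.conf values for a 16 GB host: 25% and 50% of RAM
shared_buffers = 4GB
effective_cache_size = 8GB
max_wal_size = 5GB
max_connections = 150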

Buckets

Stellar Core stores ledger state in the form of flat XDR files called "buckets." These files are used for hashing and transmission of ledger differences to history archives. Under BucketListDB (the default since stellar-core 21.0), the buckets directory also serves as the primary database backend — making it the largest single component of validator storage.

Buckets should be stored on a fast, local disk with sufficient space for several times the current ledger size. NVMe SSDs with 10,000+ IOPS are recommended for production validators. Network-attached or remote storage is not recommended; latency on the buckets path directly affects consensus performance.
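The directory location is controlled by BUCKET_DIR_PATH in stellar-core.cfg (the value shown is the default, relative to the node's working directory); keep it on the fast local volume:

# Default location; point this at local NVMe, not network storage.
BUCKET_DIR_PATH="buckets"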

Kubernetes considerations

We currently do not recommend running validator nodes in Kubernetes. Standard VM-based deployments (bare metal or cloud instances) are the well-tested path for production validators.

If you choose to use Kubernetes regardless, consider the following:

  • Sensitive data such as node seeds will be stored in Kubernetes etcd. Consider consuming credentials with tools like Vault Agent or the AWS Secrets Store CSI driver to improve security.
  • Consider how external traffic will reach the pods. Tier 1 nodes need public DNS names, and the necessary ports must be accessible from the internet.
  • Validators have unique seeds and history archive configurations, so each pod will require its own specific configuration.
  • Ensure that sufficient resources are always available to the pods.
  • Depending on how history archives are published, you may need to fork Docker images to include extra tooling.