ENSRainbow Development and Contributions
This guide covers running ENSRainbow locally for development and contributions.
Quick Navigation
For focused guidance on specific topics, check out these dedicated pages:
- New to the project? Start with Local Development
- Need CLI help? Check the CLI Reference
- Building for production? See Building Docker Images
Getting Started
Follow these steps to start contributing to ENSRainbow:
- Follow ENSNode’s contribution guide to prepare your workspace environment & install dependencies
- Choose your development path using the focused guides above
- Start with Local Development for the quickest way to get ENSRainbow running locally
Quick Reference
- Need to build from source? → Building Docker Images
- Looking for CLI commands? → CLI Reference
- Running into issues? → Troubleshooting
- Want to understand the data flow? → Data Model
Data Ingestion (ingest-ensrainbow)
Ingests data from .ensrainbow files into the LevelDB database.
pnpm run ingest-ensrainbow --input-file <path/to/LABEL_SET_ID-LABEL_SET_VERSION.ensrainbow> [--data-dir path/to/db]
- --input-file: Path to the .ensrainbow file. Adheres to ingestion rules.
- --data-dir: Directory for the LevelDB database (default: data/). The database stores LABEL_SET_ID and HIGHEST_LABEL_SET_VERSION metadata.
Database Validation (validate)
pnpm run validate [--data-dir path/to/db] [--lite]
Validates database integrity by:
- Verifying the keys for all rainbow records are valid labelhashes
- Ensuring stored labels match their corresponding labelhashes
- Validating the total rainbow record count
- Verifying no ingestion was interrupted before successful completion
The --lite option performs a faster, less thorough validation by skipping hash verification and record count validation. It only checks that:
- The ingestion was completed successfully
- The schema version is correct
- The precalculated count exists and can be retrieved
The process will exit with:
- Code 0: Validation successful
- Code 1: Validation failed or errors encountered
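Because validate signals success or failure only through its exit code, it can gate automated workflows such as CI checks. A minimal sketch of that pattern, with the real pnpm run validate call replaced by a stub (run_validate) so the snippet is self-contained:

```shell
#!/usr/bin/env bash
set -u

# Stand-in for: pnpm run validate --data-dir data/
# Replace this stub with the real command in an actual workflow.
run_validate() { return 0; }

# Exit code 0 means validation passed; anything else is a failure.
if run_validate; then
  echo "database validation passed"
else
  echo "database validation failed" >&2
  exit 1
fi
```

The same pattern works for the purge and serve commands, which use the same 0/1 exit-code convention.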
Database Purge (purge)
pnpm run purge [--data-dir path/to/db]
Completely removes all files from the specified data directory. This is useful when you need to start fresh with a clean database.
The process will exit with:
- Code 0: Successful purge
- Code 1: Error during purge operation
API Server (serve)
pnpm run serve [--port 3223] [--data-dir path/to/db]
Starts the API server. The process will exit with:
- Code 0: Clean shutdown
- Code 1: Error during operation
Using ENSRainbow with ens-test-env
The ens-test-env project provides a test environment for ENS development. It includes a small dataset of ENS names in the ens_test_env_names.sql.gz file that can be used with ENSRainbow for testing purposes.
Ingesting ens_test_env_names.sql.gz
To ingest the test data into ENSRainbow:
- Convert Test Data (if needed): If you don’t have a pre-converted ens-test-env-0.ensrainbow file:
  # Navigate to apps/ensrainbow or adjust paths accordingly
  pnpm run convert --input-file test/fixtures/ens_test_env_names.sql.gz --output-file ens-test-env-0.ensrainbow
  This creates ens-test-env-0.ensrainbow.
- Download Test Data (Alternative): Alternatively, download the pre-defined test data file:
  # In apps/ensrainbow directory
  ./scripts/download-ensrainbow-files.sh ens-test-env 0
  This will place ens-test-env-0.ensrainbow in labelsets/. Adjust the path for the ingest command accordingly.
Ingesting Test Data
# Assuming ens-test-env-0.ensrainbow is in the current directory or accessible by path
pnpm run ingest-ensrainbow --input-file ens-test-env-0.ensrainbow --data-dir data_ens_test_env
Validating the Test Data
You can validate the ingested test data to ensure it was properly loaded:
pnpm validate --data-dir data_ens_test_env
Running ENSRainbow with the test data
pnpm serve --data-dir data_ens_test_env --port 3223
Using with Docker
You can also run ENSRainbow with the test data using Docker. This involves running the standard ENSRainbow image and configuring it via environment variables to download and use the test dataset.
- Ensure you have the standard ENSRainbow Docker image: If you haven’t already, pull the latest image:
  docker pull ghcr.io/namehash/ensnode/ensrainbow:latest
- Run the ENSRainbow container with test data configuration: You’ll need to mount a volume for persistent storage and set the DB_SCHEMA_VERSION, LABEL_SET_ID, and LABEL_SET_VERSION environment variables to point to the test data.
  # Create a directory on your host machine for test data, e.g., ~/my_ensrainbow_test_data
  mkdir -p ~/my_ensrainbow_test_data
  docker run -d --name ensrainbow_test_env \
    -v ~/my_ensrainbow_test_data:/app/apps/ensrainbow/data \
    -e DB_SCHEMA_VERSION="3" \
    -e LABEL_SET_ID="ens-test-env" \
    -e LABEL_SET_VERSION="0" \
    -p 3223:3223 \
    ghcr.io/namehash/ensnode/ensrainbow:latest
  Adjust DB_SCHEMA_VERSION, LABEL_SET_ID, and LABEL_SET_VERSION if the test data parameters differ. Typically, for the standard test data, LABEL_SET_ID is ens-test-env and LABEL_SET_VERSION is 0. The DB_SCHEMA_VERSION should match the version of the database archives you intend to use (e.g., 3).
This setup allows the entrypoint.sh script within the container to download the ens-test-env-0 database archive into the mounted volume on its first run. Subsequent runs will reuse the data from the volume.
This test environment setup is particularly useful for running ENS tests (i.e. ens-test-env) that require label healing capabilities without needing the full production dataset.
Environment Variables
When using ENSRainbow with Docker, the following environment variables control which pre-built ENSRainbow database archive is downloaded and used:
- DB_SCHEMA_VERSION: Specifies the database schema version (e.g., 3). This determines the format and structure of the pre-built ENSRainbow database archives and is not related to the API version.
  - Goal: Ensures compatibility between the ENSRainbow software and the structure of downloaded database files that are prebuilt for startup-time optimizations.
  - Configuration: It is strongly recommended to use the latest available schema version unless you have specific compatibility requirements.
- LABEL_SET_ID: The identifier for a Label Set, which is a collection of ENS labelhash-to-label mappings from a specific source.
  - Goal: To enable the extensible definition of new label sets (e.g., subgraph vs. production vs. test).
  - Configuration: See the Available Label Sets page for a complete list of currently available label set IDs and their descriptions.
- LABEL_SET_VERSION: A non-negative integer representing the version of a Label Set.
  - Goal: To support the deterministic evolution of datasets over time, allowing services to achieve reproducible results.
  - Configuration: Use the highest available version number for the most up-to-date data. Versions are sequential and incremental:
    - 0 - The initial/base version of the Label Set.
    - 1, 2, etc. - Incremental updates to the Label Set.
Example combinations:
# Latest production data
DB_SCHEMA_VERSION=3 LABEL_SET_ID=subgraph LABEL_SET_VERSION=0
# The ens-test-env data
DB_SCHEMA_VERSION=3 LABEL_SET_ID=ens-test-env LABEL_SET_VERSION=0
# Extended discovery data
DB_SCHEMA_VERSION=3 LABEL_SET_ID=discovery-a LABEL_SET_VERSION=0
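Together, the three variables identify one archive. As an illustration, a sketch that derives the archive path the variables point at, following the databases/SCHEMA/ID_VERSION.tgz layout used in the upload steps later in this guide (archive_path is a hypothetical helper, not the entrypoint's actual code):

```shell
#!/usr/bin/env bash
set -u

# Hypothetical helper: map the three env-style values to an archive path
# under the databases/<schema>/<label_set_id>_<label_set_version>.tgz layout.
archive_path() {
  local schema="$1" id="$2" version="$3"
  echo "databases/${schema}/${id}_${version}.tgz"
}

archive_path 3 subgraph 0        # -> databases/3/subgraph_0.tgz
archive_path 3 ens-test-env 0    # -> databases/3/ens-test-env_0.tgz
```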
Persistent Storage with Docker
The ENSRainbow Docker image (built with the combined Dockerfile) now includes an entrypoint.sh script that supports persistent storage for the LevelDB database. This prevents re-downloading the database every time the container starts.
How it Works:
- On First Run (with an empty volume):
  - The script requires DB_SCHEMA_VERSION, LABEL_SET_ID, and LABEL_SET_VERSION environment variables to be set.
  - It downloads the specified database archive into /app/apps/ensrainbow/data (the designated data directory).
  - After successful download, extraction, and validation, it creates a marker file (.ensrainbow_db_ready) in the data directory.
- On Subsequent Runs:
  - The script checks for the presence of the data directory and the marker file.
  - If found and the data passes a quick validation, it skips the download and uses the existing database.
  - If the data is missing, invalid, or the marker file isn’t present, it will attempt to download it again (requiring the environment variables).
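The marker-file flow described above can be sketched roughly as follows. This is an illustrative simplification, not the real entrypoint.sh; download_and_prepare is a hypothetical stand-in for the actual download, extraction, and validation steps:

```shell
#!/usr/bin/env bash
set -eu

# In the real container the data directory is /app/apps/ensrainbow/data;
# a temp directory is used here so the sketch is self-contained.
DATA_DIR="${DATA_DIR:-$(mktemp -d)}"
MARKER="$DATA_DIR/.ensrainbow_db_ready"

# Hypothetical stand-in for: download archive, extract, validate.
download_and_prepare() {
  mkdir -p "$DATA_DIR"
  : # ...download, extract, and validate the database archive here...
  touch "$MARKER"   # only created after everything succeeded
}

if [ -d "$DATA_DIR" ] && [ -f "$MARKER" ]; then
  echo "existing database found; skipping download"
else
  echo "no ready database; downloading"
  download_and_prepare
fi
```

Creating the marker only after a successful download and validation is what lets an interrupted first run be detected and retried on the next start.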
Using Docker Volumes:
To ensure the database persists even if the container is removed and recreated, you must mount a Docker volume to /app/apps/ensrainbow/data inside the container.
Example with a Host Directory:
# Create a directory on your host machine first, e.g., ~/my_ensrainbow_data
mkdir -p ~/my_ensrainbow_data
docker run -d --name ensrainbow_persistent \
  -v ~/my_ensrainbow_data:/app/apps/ensrainbow/data \
  -e DB_SCHEMA_VERSION="3" \
  -e LABEL_SET_ID="subgraph" \
  -e LABEL_SET_VERSION="0" \
  -p 3223:3223 \
  ghcr.io/namehash/ensnode/ensrainbow:latest
Example with a Named Docker Volume (Recommended):
# Create a named volume (only needs to be done once)
docker volume create ensrainbow_db_volume
docker run -d --name ensrainbow_persistent \
  -v ensrainbow_db_volume:/app/apps/ensrainbow/data \
  -e DB_SCHEMA_VERSION="3" \
  -e LABEL_SET_ID="subgraph" \
  -e LABEL_SET_VERSION="0" \
  -p 3223:3223 \
  ghcr.io/namehash/ensnode/ensrainbow:latest
Adjust DB_SCHEMA_VERSION, LABEL_SET_ID, and LABEL_SET_VERSION as needed for the initial download.
Using a volume ensures that your downloaded and ingested data is not lost when the container stops or is removed, saving time and bandwidth on subsequent runs.
Generating and Uploading Database Archives
These steps are typically performed by project maintainers when releasing official pre-built ENSRainbow database archives. They assume rclone is configured with a remote named ENSRAINBOWR2.
1. Prepare .ensrainbow Files
This section covers the conversion of source data (like SQL dumps or empty files for initial datasets) into the .ensrainbow format. The time command is used here to measure the duration of potentially long-running conversion processes.
For the subgraph Label Set (main dataset):
This command converts a SQL dump file (ens_names.sql.gz) into an .ensrainbow file for version 0 of the subgraph Label Set.
# Assuming ens_names.sql.gz contains the primary dataset
time pnpm run convert --input-file ens_names.sql.gz --output-file subgraph_0.ensrainbow --label-set-id subgraph --label-set-version 0
For the discovery-a Label Set (initially empty for discovered labels):
This creates an empty .ensrainbow file for version 0 of the discovery-a Label Set, which is used for labels discovered dynamically.
touch empty.sql
gzip empty.sql
time pnpm run convert --input-file empty.sql.gz --output-file discovery-a_0.ensrainbow --label-set-id discovery-a --label-set-version 0
For the ens-test-env Label Set (for testing):
This converts a test dataset SQL dump into an .ensrainbow file for version 0 of the ens-test-env Label Set.
time pnpm run convert --input-file test/fixtures/ens_test_env_names.sql.gz --output-file ens-test-env_0.ensrainbow --label-set-id ens-test-env --label-set-version 0
2. Upload .ensrainbow Files to R2 Storage
After generation, these .ensrainbow files are uploaded to the designated cloud storage (e.g., R2 via rclone).
rclone copy ./subgraph_0.ensrainbow ENSRAINBOWR2:ensrainbow/labelsets/
rclone copy ./discovery-a_0.ensrainbow ENSRAINBOWR2:ensrainbow/labelsets/
rclone copy ./ens-test-env_0.ensrainbow ENSRAINBOWR2:ensrainbow/labelsets/
3. Calculate and Upload Checksums for .ensrainbow Files
SHA256 checksums are generated for each .ensrainbow file to ensure data integrity. These checksum files are then uploaded.
# Calculate checksums
sha256sum subgraph_0.ensrainbow > subgraph_0.ensrainbow.sha256sum
sha256sum discovery-a_0.ensrainbow > discovery-a_0.ensrainbow.sha256sum
sha256sum ens-test-env_0.ensrainbow > ens-test-env_0.ensrainbow.sha256sum
# Upload checksums
rclone copy ./subgraph_0.ensrainbow.sha256sum ENSRAINBOWR2:ensrainbow/labelsets/
rclone copy ./discovery-a_0.ensrainbow.sha256sum ENSRAINBOWR2:ensrainbow/labelsets/
rclone copy ./ens-test-env_0.ensrainbow.sha256sum ENSRAINBOWR2:ensrainbow/labelsets/
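On the consuming side, these .sha256sum files can be used to verify a download before ingesting it. A self-contained sketch, with a throwaway file standing in for a real downloaded .ensrainbow file:

```shell
#!/usr/bin/env bash
set -eu

# Throwaway file standing in for a downloaded .ensrainbow file.
workdir=$(mktemp -d)
cd "$workdir"
echo "placeholder payload" > example_0.ensrainbow

# Producer side: record the checksum (same format as the release steps).
sha256sum example_0.ensrainbow > example_0.ensrainbow.sha256sum

# Consumer side: verify the file against the recorded checksum.
# sha256sum -c exits non-zero if the file does not match.
sha256sum -c example_0.ensrainbow.sha256sum
```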
4. Ingest .ensrainbow Files into LevelDB Databases
The .ensrainbow files are ingested into local LevelDB instances to create the actual databases.
pnpm ingest-ensrainbow --input-file subgraph_0.ensrainbow --data-dir ./data-subgraph_0
pnpm ingest-ensrainbow --input-file discovery-a_0.ensrainbow --data-dir ./data-discovery-a_0
pnpm ingest-ensrainbow --input-file ens-test-env_0.ensrainbow --data-dir ./data-ens-test-env_0
5. Package LevelDB Databases
Section titled “5. Package LevelDB Databases”The LevelDB data directories, now populated, are packaged into compressed tar.gz
archives.
tar -czvf subgraph_0.tgz ./data-subgraph_0tar -czvf discovery-a_0.tgz ./data-discovery-a_0tar -czvf ens-test-env_0.tgz ./data-ens-test-env_0
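Before uploading, a packaged archive can be sanity-checked by round-tripping it. A self-contained sketch, with a tiny throwaway directory standing in for a populated LevelDB data directory:

```shell
#!/usr/bin/env bash
set -eu

workdir=$(mktemp -d)
cd "$workdir"

# Throwaway directory standing in for a populated LevelDB data directory.
mkdir -p data-example_0
echo "fake leveldb contents" > data-example_0/CURRENT

# Package it the same way the release steps do.
tar -czf example_0.tgz ./data-example_0

# Round-trip: extract into a separate directory and compare contents.
mkdir extracted
tar -xzf example_0.tgz -C extracted
diff -r data-example_0 extracted/data-example_0
```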
6. Upload Database Archives to R2 Storage
These database archives are uploaded to cloud storage, tagged with a schema version (e.g., 3 in this example).
rclone copy ./subgraph_0.tgz ENSRAINBOWR2:ensrainbow/databases/3/
rclone copy ./discovery-a_0.tgz ENSRAINBOWR2:ensrainbow/databases/3/
rclone copy ./ens-test-env_0.tgz ENSRAINBOWR2:ensrainbow/databases/3/
7. Calculate and Upload Checksums for Database Archives
Finally, checksums for the database archives are calculated and uploaded to ensure their integrity.
# Calculate checksums
sha256sum subgraph_0.tgz > subgraph_0.tgz.sha256sum
sha256sum discovery-a_0.tgz > discovery-a_0.tgz.sha256sum
sha256sum ens-test-env_0.tgz > ens-test-env_0.tgz.sha256sum
# Upload checksums
rclone copy ./subgraph_0.tgz.sha256sum ENSRAINBOWR2:ensrainbow/databases/3/
rclone copy ./discovery-a_0.tgz.sha256sum ENSRAINBOWR2:ensrainbow/databases/3/
rclone copy ./ens-test-env_0.tgz.sha256sum ENSRAINBOWR2:ensrainbow/databases/3/