ENSRainbow Development and Contributions

For focused guidance on specific topics, check out these dedicated pages:


Follow these steps to start contributing to ENSRainbow:

  1. Follow ENSNode’s contribution guide to prepare your workspace environment & install dependencies
  2. Choose your development path using the focused guides above
  3. Start with Local Development for the quickest way to get ENSRainbow running locally

Ingests data from .ensrainbow files into the LevelDB database.

```sh
pnpm run ingest-ensrainbow --input-file <path/to/LABEL_SET_ID-LABEL_SET_VERSION.ensrainbow> [--data-dir path/to/db]
```
  • --input-file: Path to the .ensrainbow file. Adheres to ingestion rules.
  • --data-dir: Directory for the LevelDB database (default: data/). The database stores LABEL_SET_ID and HIGHEST_LABEL_SET_VERSION metadata.
```sh
pnpm run validate [--data-dir path/to/db] [--lite]
```

Validates database integrity by:

  • Verifying that the key of every rainbow record is a valid labelhash
  • Ensuring stored labels match their corresponding labelhashes
  • Validating the total rainbow record count
  • Confirming that no ingestion was interrupted before successful completion

The --lite option performs a faster, less thorough validation by skipping hash verification and record count validation. It only checks that:

  • The ingestion was completed successfully
  • The schema version is correct
  • The precalculated count exists and can be retrieved

The process will exit with:

  • Code 0: Validation successful
  • Code 1: Validation failed or errors encountered
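
These exit codes make it easy to gate follow-up steps in scripts. A minimal sketch of that pattern, using a stand-in `validate` function (since `pnpm run validate` is not assumed to be available in this sketch; the real command exits 0 on success and 1 on failure, which is all the branch relies on):

```sh
# Stand-in for: pnpm run validate --data-dir <dir>
validate() { return 0; }

if validate data/; then
  echo "validation passed"
else
  echo "validation failed" >&2
  exit 1
fi
```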
```sh
pnpm run purge [--data-dir path/to/db]
```

Completely removes all files from the specified data directory. This is useful when you need to start fresh with a clean database.

The process will exit with:

  • Code 0: Successful purge
  • Code 1: Error during purge operation
```sh
pnpm run serve [--port 3223] [--data-dir path/to/db]
```

Starts the API server. The process will exit with:

  • Code 0: Clean shutdown
  • Code 1: Error during operation

The ens-test-env project provides a test environment for ENS development. It includes a small dataset of ENS names in the ens_test_env_names.sql.gz file that can be used with ENSRainbow for testing purposes.

To ingest the test data into ENSRainbow:

  1. Convert Test Data (if needed): If you don’t have a pre-converted ens-test-env-0.ensrainbow file:

    ```sh
    # Navigate to apps/ensrainbow or adjust paths accordingly
    pnpm run convert --input-file test/fixtures/ens_test_env_names.sql.gz --output-file ens-test-env-0.ensrainbow
    ```

    This creates ens-test-env-0.ensrainbow.

  2. Download Test Data (Alternative): Alternatively, download the pre-defined test data file:

    ```sh
    # In apps/ensrainbow directory
    ./scripts/download-ensrainbow-files.sh ens-test-env 0
    ```

    This will place ens-test-env-0.ensrainbow in labelsets/. Adjust the path for the ingest command accordingly.

```sh
# Assuming ens-test-env-0.ensrainbow is in the current directory or accessible by path
pnpm run ingest-ensrainbow --input-file ens-test-env-0.ensrainbow --data-dir data_ens_test_env
```

You can validate the ingested test data to ensure it was properly loaded:

```sh
pnpm validate --data-dir data_ens_test_env
```

Once validated, start the server against the test database:

```sh
pnpm serve --data-dir data_ens_test_env --port 3223
```

You can also run ENSRainbow with the test data using Docker. This involves running the standard ENSRainbow image and configuring it via environment variables to download and use the test dataset.

  1. Ensure you have the standard ENSRainbow Docker image: If you haven’t already, pull the latest image:

    ```sh
    docker pull ghcr.io/namehash/ensnode/ensrainbow:latest
    ```
  2. Run the ENSRainbow container with test data configuration: You’ll need to mount a volume for persistent storage and set the DB_SCHEMA_VERSION, LABEL_SET_ID, and LABEL_SET_VERSION environment variables to point to the test data.

    ```sh
    # Create a directory on your host machine for test data, e.g., ~/my_ensrainbow_test_data
    mkdir -p ~/my_ensrainbow_test_data
    docker run -d --name ensrainbow_test_env \
      -v ~/my_ensrainbow_test_data:/app/apps/ensrainbow/data \
      -e DB_SCHEMA_VERSION="3" \
      -e LABEL_SET_ID="ens-test-env" \
      -e LABEL_SET_VERSION="0" \
      -p 3223:3223 \
      ghcr.io/namehash/ensnode/ensrainbow:latest
    ```
    • Adjust DB_SCHEMA_VERSION, LABEL_SET_ID, and LABEL_SET_VERSION if the test data parameters differ. Typically, for the standard test data, LABEL_SET_ID is ens-test-env and LABEL_SET_VERSION is 0. The DB_SCHEMA_VERSION should match the version of the database archives you intend to use (e.g., 3).

This setup allows the entrypoint.sh script within the container to download the ens-test-env-0 database archive into the mounted volume on its first run. Subsequent runs will reuse the data from the volume.

This test environment setup is particularly useful for running ENS tests (i.e. ens-test-env) that require label healing capabilities without needing the full production dataset.

When using ENSRainbow with Docker, the following environment variables control which pre-built ENSRainbow database archive is downloaded and used:

  • DB_SCHEMA_VERSION: Specifies the database schema version (e.g., 3). This determines the format and structure of the pre-built ENSRainbow database archives and is not related to the API version.

    • Goal: Ensures compatibility between the ENSRainbow software and the structure of downloaded database files that are prebuilt for startup-time optimizations.
    • Configuration: It is strongly recommended to use the latest available schema version unless you have specific compatibility requirements.
  • LABEL_SET_ID: The identifier for a Label Set, which is a collection of ENS labelhash-to-label mappings from a specific source.

    • Goal: To enable the extensible definition of new label sets (e.g., subgraph vs. production vs. test).
    • Configuration: See the Available Label Sets page for a complete list of currently available label set IDs and their descriptions.
  • LABEL_SET_VERSION: A non-negative integer representing the version of a Label Set.

    • Goal: To support the deterministic evolution of datasets over time, allowing services to achieve reproducible results.
    • Configuration: Use the highest available version number for the most up-to-date data. Versions are sequential and incremental:
      • 0 - The initial/base version of the Label Set.
      • 1, 2, etc. - Incremental updates to the Label Set.

Example combinations:

```sh
# Latest production data
DB_SCHEMA_VERSION=3 LABEL_SET_ID=subgraph LABEL_SET_VERSION=0
# The ens-test-env data
DB_SCHEMA_VERSION=3 LABEL_SET_ID=ens-test-env LABEL_SET_VERSION=0
# Extended discovery data
DB_SCHEMA_VERSION=3 LABEL_SET_ID=discovery-a LABEL_SET_VERSION=0
```
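
Together, these three variables select exactly one archive. A sketch of how they compose into the storage path, following the `databases/<DB_SCHEMA_VERSION>/<LABEL_SET_ID>_<LABEL_SET_VERSION>.tgz` layout used by the upload commands later on this page (the base URL the container downloads from is not shown here):

```sh
DB_SCHEMA_VERSION=3
LABEL_SET_ID=ens-test-env
LABEL_SET_VERSION=0

# Compose the archive name and its schema-versioned path
ARCHIVE="${LABEL_SET_ID}_${LABEL_SET_VERSION}.tgz"
echo "databases/${DB_SCHEMA_VERSION}/${ARCHIVE}"
```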

The ENSRainbow Docker image (built with the combined Dockerfile) now includes an entrypoint.sh script that supports persistent storage for the LevelDB database. This prevents re-downloading the database every time the container starts.

How it Works:

  1. On First Run (with an empty volume):
    • The script requires DB_SCHEMA_VERSION, LABEL_SET_ID, and LABEL_SET_VERSION environment variables to be set.
    • It downloads the specified database archive into /app/apps/ensrainbow/data (the designated data directory).
    • After successful download, extraction, and validation, it creates a marker file (.ensrainbow_db_ready) in the data directory.
  2. On Subsequent Runs:
    • The script checks for the presence of the data directory and the marker file.
    • If found and the data passes a quick validation, it skips the download and uses the existing database.
    • If the data is missing, invalid, or the marker file isn’t present, it will attempt to download it again (requiring the environment variables).
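
The first-run versus subsequent-run behavior can be sketched as follows. This is an illustrative reimplementation of the marker-file check, not the actual entrypoint.sh; the download/extract/validate step is replaced by a placeholder, and the example values are assumptions:

```sh
# Example values; the real entrypoint requires these to be provided by the operator.
DB_SCHEMA_VERSION="${DB_SCHEMA_VERSION:-3}"
LABEL_SET_ID="${LABEL_SET_ID:-subgraph}"
LABEL_SET_VERSION="${LABEL_SET_VERSION:-0}"
DATA_DIR="./demo_data"
MARKER="$DATA_DIR/.ensrainbow_db_ready"

if [ -f "$MARKER" ]; then
  # Subsequent run: marker present, reuse the existing database
  echo "database marker found; skipping download"
else
  # First run: fetch the archive, then leave the marker behind
  mkdir -p "$DATA_DIR"
  echo "downloading ${LABEL_SET_ID}_${LABEL_SET_VERSION}.tgz (schema ${DB_SCHEMA_VERSION})"
  # Placeholder for: download + extract + validate the archive
  touch "$MARKER"
fi
```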

Using Docker Volumes:

To ensure the database persists even if the container is removed and recreated, you must mount a Docker volume to /app/apps/ensrainbow/data inside the container.

Example with a Host Directory:

```sh
# Create a directory on your host machine first, e.g., ~/my_ensrainbow_data
mkdir -p ~/my_ensrainbow_data
docker run -d --name ensrainbow_persistent \
  -v ~/my_ensrainbow_data:/app/apps/ensrainbow/data \
  -e DB_SCHEMA_VERSION="3" \
  -e LABEL_SET_ID="subgraph" \
  -e LABEL_SET_VERSION="0" \
  -p 3223:3223 \
  ghcr.io/namehash/ensnode/ensrainbow:latest
```

Example with a Named Docker Volume (Recommended):

```sh
# Create a named volume (only needs to be done once)
docker volume create ensrainbow_db_volume
docker run -d --name ensrainbow_persistent \
  -v ensrainbow_db_volume:/app/apps/ensrainbow/data \
  -e DB_SCHEMA_VERSION="3" \
  -e LABEL_SET_ID="subgraph" \
  -e LABEL_SET_VERSION="0" \
  -p 3223:3223 \
  ghcr.io/namehash/ensnode/ensrainbow:latest
```

Adjust DB_SCHEMA_VERSION, LABEL_SET_ID, and LABEL_SET_VERSION as needed for the initial download.

Using a volume ensures that your downloaded and ingested data is not lost when the container stops or is removed, saving time and bandwidth on subsequent runs.

Generating and Uploading Database Archives

This section covers the conversion of source data (like SQL dumps or empty files for initial datasets) into the .ensrainbow format. The time command is used here to measure the duration of potentially long-running conversion processes.

For the subgraph Label Set (main dataset): This command converts a SQL dump file (ens_names.sql.gz) into an .ensrainbow file for version 0 of the subgraph Label Set.

```sh
# Assuming ens_names.sql.gz contains the primary dataset
time pnpm run convert --input-file ens_names.sql.gz --output-file subgraph_0.ensrainbow --label-set-id subgraph --label-set-version 0
```

For the discovery-a Label Set (initially empty for discovered labels): This creates an empty .ensrainbow file for version 0 of the discovery-a Label Set, which is used for labels discovered dynamically.

```sh
touch empty.sql
gzip empty.sql
time pnpm run convert --input-file empty.sql.gz --output-file discovery-a_0.ensrainbow --label-set-id discovery-a --label-set-version 0
```

For the ens-test-env Label Set (for testing): This converts a test dataset SQL dump into an .ensrainbow file for version 0 of the ens-test-env Label Set.

```sh
time pnpm run convert --input-file test/fixtures/ens_test_env_names.sql.gz --output-file ens-test-env_0.ensrainbow --label-set-id ens-test-env --label-set-version 0
```

After generation, these .ensrainbow files are uploaded to the designated cloud storage (e.g., R2 via rclone).

```sh
rclone copy ./subgraph_0.ensrainbow ENSRAINBOWR2:ensrainbow/labelsets/
rclone copy ./discovery-a_0.ensrainbow ENSRAINBOWR2:ensrainbow/labelsets/
rclone copy ./ens-test-env_0.ensrainbow ENSRAINBOWR2:ensrainbow/labelsets/
```

3. Calculate and Upload Checksums for .ensrainbow Files

SHA256 checksums are generated for each .ensrainbow file to ensure data integrity. These checksum files are then uploaded.

```sh
# Calculate checksums
sha256sum subgraph_0.ensrainbow > subgraph_0.ensrainbow.sha256sum
sha256sum discovery-a_0.ensrainbow > discovery-a_0.ensrainbow.sha256sum
sha256sum ens-test-env_0.ensrainbow > ens-test-env_0.ensrainbow.sha256sum
# Upload checksums
rclone copy ./subgraph_0.ensrainbow.sha256sum ENSRAINBOWR2:ensrainbow/labelsets/
rclone copy ./discovery-a_0.ensrainbow.sha256sum ENSRAINBOWR2:ensrainbow/labelsets/
rclone copy ./ens-test-env_0.ensrainbow.sha256sum ENSRAINBOWR2:ensrainbow/labelsets/
```
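
The matching verification step, run by consumers after download, is `sha256sum -c`. A self-contained round trip with a stand-in file (real runs verify the .ensrainbow files above):

```sh
# Stand-in file for illustration
printf 'example payload' > sample.ensrainbow
sha256sum sample.ensrainbow > sample.ensrainbow.sha256sum

# Exits non-zero (and reports FAILED) if the file was corrupted in transit
sha256sum -c sample.ensrainbow.sha256sum
```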

4. Ingest .ensrainbow Files into LevelDB Databases

The .ensrainbow files are ingested into local LevelDB instances to create the actual databases.

```sh
pnpm ingest-ensrainbow --input-file subgraph_0.ensrainbow --data-dir ./data-subgraph_0
pnpm ingest-ensrainbow --input-file discovery-a_0.ensrainbow --data-dir ./data-discovery-a_0
pnpm ingest-ensrainbow --input-file ens-test-env_0.ensrainbow --data-dir ./data-ens-test-env_0
```

The LevelDB data directories, now populated, are packaged into compressed tar.gz archives.

```sh
tar -czvf subgraph_0.tgz ./data-subgraph_0
tar -czvf discovery-a_0.tgz ./data-discovery-a_0
tar -czvf ens-test-env_0.tgz ./data-ens-test-env_0
```
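
Before uploading, an archive can be sanity-checked by listing its contents with `tar -tzf`. A self-contained round trip with a stand-in directory (real runs package the LevelDB data directories above):

```sh
# Stand-in data directory for illustration
mkdir -p data-demo_0
echo "placeholder" > data-demo_0/CURRENT

tar -czf demo_0.tgz ./data-demo_0
tar -tzf demo_0.tgz   # list archive contents to confirm packaging
```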

These database archives are uploaded to cloud storage, tagged with a schema version (e.g., 3 in this example).

```sh
rclone copy ./subgraph_0.tgz ENSRAINBOWR2:ensrainbow/databases/3/
rclone copy ./discovery-a_0.tgz ENSRAINBOWR2:ensrainbow/databases/3/
rclone copy ./ens-test-env_0.tgz ENSRAINBOWR2:ensrainbow/databases/3/
```

7. Calculate and Upload Checksums for Database Archives

Finally, checksums for the database archives are calculated and uploaded to ensure their integrity.

```sh
# Calculate checksums
sha256sum subgraph_0.tgz > subgraph_0.tgz.sha256sum
sha256sum discovery-a_0.tgz > discovery-a_0.tgz.sha256sum
sha256sum ens-test-env_0.tgz > ens-test-env_0.tgz.sha256sum
# Upload checksums
rclone copy ./subgraph_0.tgz.sha256sum ENSRAINBOWR2:ensrainbow/databases/3/
rclone copy ./discovery-a_0.tgz.sha256sum ENSRAINBOWR2:ensrainbow/databases/3/
rclone copy ./ens-test-env_0.tgz.sha256sum ENSRAINBOWR2:ensrainbow/databases/3/
```