

pip install sparkwheel


  • Declarative Configuration


    Define complex Python objects in clean YAML files. Replace boilerplate instantiation code with simple _target_ declarations.

  • Smart References


    Use @ for resolved references (instantiated objects, computed values) or % for raw references (unprocessed YAML). Keep configurations DRY and maintainable.

  • Flexible Composition


    Configs compose naturally by default (merge dicts, extend lists). Use = to replace or ~ to delete. Build modular configs for experiments and environments.

  • Python Expressions


    Execute code with $ prefix. Compute values, call functions, and create dynamic configurations on the fly.

  • Schema Validation


    Validate configs with Python dataclasses. Validation runs continuously, catching errors at mutation time with type checking, coercion, and required-field checks.

  • CLI Overrides


    Override any config value from command line. Perfect for hyperparameter sweeps and quick experiments.
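To make the dataclass-validation idea concrete, here is a rough standalone sketch of what type checking, coercion, and required-field validation involve. The `Training` schema and `check_config` helper are illustrative only, not sparkwheel's actual API:

```python
from dataclasses import dataclass, fields

@dataclass
class Training:
    epochs: int
    learning_rate: float

def check_config(schema, data: dict):
    """Validate and coerce a plain dict against a dataclass schema."""
    kwargs = {}
    for f in fields(schema):
        if f.name not in data:
            raise KeyError(f"missing required field: {f.name}")
        value = data[f.name]
        if not isinstance(value, f.type):
            value = f.type(value)  # simple coercion, e.g. "10" -> 10
        kwargs[f.name] = value
    return schema(**kwargs)

cfg = check_config(Training, {"epochs": "10", "learning_rate": 0.001})
# cfg.epochs is now the int 10; a missing field raises KeyError
```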

Python Objects from YAML

If you're tired of hardcoding parameters and want configuration-driven workflows, Sparkwheel makes it effortless. Define components in YAML, reference and compose them freely, then instantiate in Python.

config.yaml
dataset:
  path: "/data/train"
  num_classes: 10
  batch_size: 32

model:
  _target_: torch.nn.Sequential
  _args_:
    - _target_: torch.nn.Linear
      in_features: 784
      out_features: "@dataset::num_classes"  # Reference!
    - _target_: torch.nn.ReLU

training:
  epochs: 10
  learning_rate: 0.001
  steps_per_epoch: "$10000 // @dataset::batch_size"  # Expression!
train.py
from sparkwheel import Config

# Load config (or multiple configs!)
config = Config()
config.update("config.yaml")

# Access raw values
batch_size = config["dataset::batch_size"]  # 32

# Resolve references and expressions
steps = config.resolve("training::steps_per_epoch")  # 312

# Instantiate Python objects automatically
model = config.resolve("model")  # Actual torch.nn.Sequential!
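`config.resolve("model")` does the instantiation for you. If you're curious how `_target_`-style instantiation can work under the hood, here is a rough standalone sketch (the `instantiate` helper is illustrative, not sparkwheel's implementation, and it uses a stdlib class instead of torch so it runs anywhere):

```python
import importlib

def instantiate(node):
    """Recursively build objects from dicts carrying a _target_ key."""
    if isinstance(node, list):
        return [instantiate(item) for item in node]
    if not isinstance(node, dict):
        return node
    node = {key: instantiate(value) for key, value in node.items()}
    target = node.pop("_target_", None)
    if target is None:
        return node  # plain dict, nothing to construct
    module_path, _, attr = target.rpartition(".")
    cls = getattr(importlib.import_module(module_path), attr)
    args = node.pop("_args_", [])
    return cls(*args, **node)

frac = instantiate({"_target_": "fractions.Fraction",
                    "numerator": 3, "denominator": 4})
```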
experiment_large.yaml
# Override specific values, keep the rest (merges by default!)
model:
  _args_:
    - 0:  # Override first layer
        out_features: 20  # More classes

training:
  learning_rate: 0.0001  # Lower LR
  # epochs inherited from base!
from sparkwheel import Config
import ast
import sys

# Load base + experiment (composes automatically!)
config = (Config()
          .update("config.yaml")
          .update("experiment_large.yaml"))

# Or override from CLI (parse args yourself)
config = Config()
config.update("config.yaml")
for arg in sys.argv[1:]:
    if "=" in arg:
        key, value = arg.split("=", 1)
        # Simple parsing - ast.literal_eval converts numbers, bools, lists
        try:
            value = ast.literal_eval(value)
        except (ValueError, SyntaxError):
            pass  # Keep as string
        config.set(key, value)

Understanding References

Sparkwheel has two types of references with distinct purposes:

@ - Resolved References

Get the final, computed value after instantiation and evaluation.

model:
  _target_: torch.nn.Linear
  in_features: 784
  out_features: 10

# @ follows the reference and gets the instantiated object
trained_model: "@model"  # Gets the actual torch.nn.Linear instance

Use @ when you want the result of computation.

% - Raw References

Get the unprocessed YAML content before any resolution.

# base.yaml
defaults:
  learning_rate: 0.001

# config.yaml
# % copies the raw YAML definition (can be from external files or same file)
optimizer:
  lr: "%base.yaml::defaults::learning_rate"  # Gets raw value: 0.001

# Or reference within same file
backup_defaults: "%defaults"  # Gets the entire defaults dict as-is

Use % when you want to copy/import raw YAML (like copy-paste).
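A raw reference is essentially a path lookup into the parsed YAML tree that copies the node as-is. As a minimal sketch of those copy-paste semantics (assuming `::`-separated paths as in the examples above; `get_raw` is illustrative, not sparkwheel's API):

```python
import copy

def get_raw(tree, path: str):
    """Follow a ::-separated path through nested dicts/lists and
    return a deep copy of the unprocessed node."""
    node = tree
    for part in path.split("::"):
        node = node[int(part)] if isinstance(node, list) else node[part]
    return copy.deepcopy(node)

tree = {"defaults": {"learning_rate": 0.001}}
lr = get_raw(tree, "defaults::learning_rate")  # raw scalar: 0.001
defaults = get_raw(tree, "defaults")           # independent copy of the dict
```

Because the result is a copy, mutating it later doesn't touch the original tree, which is what "like copy-paste" implies.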

Why Sparkwheel?

Familiar, But More Powerful

If you've used Hydra or OmegaConf, you'll feel right at home. Sparkwheel adds:

  • Composition-by-default - Configs merge/extend naturally, no operators needed for the common case
  • List extension - Lists extend by default (unlike Hydra, where they are replaced)
  • = replace operator - Explicit control when you need replacement
  • ~ delete operator - Remove inherited keys cleanly (idempotent!)
  • Python expressions with $ - Compute values dynamically
  • Dataclass validation - Type-safe configs without boilerplate
  • Dual reference system - @ for resolved values, % for raw YAML
  • Simpler API - Less magic, clearer behavior
# Merges by default - no operator needed!
model:
  hidden_size: 1024  # Override just this
  ~dropout: null     # Remove dropout
  # Other fields preserved automatically!
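The composition rules above can be sketched as a small merge function. This is illustrative only, under the assumptions stated in the bullets (dicts merge recursively, lists extend, `=key` replaces outright, `~key` deletes); sparkwheel's real operator handling may differ in edge cases:

```python
def compose(base, override):
    """Merge override into base following the rules sketched above."""
    if isinstance(base, list) and isinstance(override, list):
        return base + override        # lists extend by default
    if not (isinstance(base, dict) and isinstance(override, dict)):
        return override               # scalars: override wins
    result = dict(base)
    for key, value in override.items():
        if key.startswith("~"):
            result.pop(key[1:], None)  # delete (idempotent)
        elif key.startswith("="):
            result[key[1:]] = value    # replace, no merge
        elif key in result:
            result[key] = compose(result[key], value)
        else:
            result[key] = value
    return result

merged = compose(
    {"model": {"hidden_size": 512, "dropout": 0.1}},
    {"model": {"hidden_size": 1024, "~dropout": None}},
)
# merged: {"model": {"hidden_size": 1024}} - dropout removed, rest merged
```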

Start Learning

  • Quick Start


    Get productive in 5 minutes with a hands-on tutorial

    Quick Start

  • User Guide


    Deep dive into references, expressions, and composition

    Core Concepts

  • Examples


    See complete real-world configuration patterns

    View Examples

  • API Reference


    Complete API documentation and reference

    Browse API


About

Sparkwheel is a hard fork of MONAI Bundle's configuration system, refined and expanded for general-purpose use. We're deeply grateful to the MONAI team for their excellent foundation.

Sparkwheel powers Lighter, a configuration-driven deep learning framework built on PyTorch Lightning.

Ready to contribute? View on GitHub