# Configuration Reference
Lighter uses Sparkwheel for configuration—a powerful YAML-based system supporting references, expressions, and object instantiation.
## Complete Documentation
This page covers Lighter-specific patterns and common usage. For complete Sparkwheel syntax, advanced features, and detailed examples, see the Sparkwheel documentation.
## Quick Reference

| Symbol | Purpose | Sparkwheel Docs |
|---|---|---|
| `_target_` | Instantiate a class | Instantiation |
| `@path::to::value` | Resolved reference (instantiated object) | References |
| `%path::to::value` | Raw reference (unprocessed YAML) | References |
| `$expression` | Evaluate Python expression | Expressions |
| `::` | Path notation (navigate config) | Basics |
| `.` | Access Python attributes | Expressions |
| `=key:` | Replace operator (override merge) | Operators |
| `~key:` | Delete operator | Operators |
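Several of these symbols typically appear together in one config. An illustrative fragment combining them (the class names mirror examples later on this page):

```yaml
system:
  model:
    _target_: torchvision.models.resnet18    # _target_: instantiate a class
    num_classes: "%vars::num_classes"        # %: raw reference, navigated with ::
  optimizer:
    _target_: torch.optim.Adam
    params: "$@system::model.parameters()"   # $ expression over an @ (resolved) reference
```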
## Lighter Configuration Structure

Every Lighter config has two mandatory sections:

```yaml
trainer:
  _target_: pytorch_lightning.Trainer
  max_epochs: 10

system:
  _target_: lighter.System
  model: ...
  criterion: ...
  optimizer: ...
  dataloaders: ...
```
Optional sections:

```yaml
_requires_: ...  # Import Python modules
project: ./path  # Custom module directory
vars: ...        # Variables for reuse
args: ...        # Stage-specific arguments (fit, test, etc.)
```
## Essential Syntax

### `_target_`: Instantiate Classes
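A minimal example (mirroring Pattern 1 later on this page):

```yaml
model:
  _target_: torchvision.models.resnet18
  num_classes: 10
```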
Equivalent to: `torchvision.models.resnet18(num_classes=10)`
### `@` and `%`: References

| Type | Syntax | Use Case |
|---|---|---|
| Resolved (`@`) | `@system::optimizer` | Pass actual object instances |
| Raw (`%`) | `%system::metrics::train` | Reuse config to create new instances |
Example:

```yaml
scheduler:
  _target_: torch.optim.lr_scheduler.StepLR
  optimizer: "@system::optimizer"  # Resolved: actual optimizer object

metrics:
  train:
    - _target_: torchmetrics.Accuracy
      task: multiclass
      num_classes: 10
  val: "%system::metrics::train"  # Raw: creates new instance
```
### `$`: Expressions

Evaluate Python in configs:

```yaml
optimizer:
  _target_: torch.optim.Adam
  params: "$@system::model.parameters()"  # Call model.parameters()
  lr: "$0.001 * 2"                        # Result: 0.002
```
### `::`: Path Notation

Navigate nested configs:

```
@system::model            # Access model
@system::optimizer::lr    # Access nested value
%::train::batch_size      # Relative reference (sibling)
```
## CLI Overrides

Override any config value from the command line:

```bash
# Simple override
lighter fit config.yaml trainer::max_epochs=100

# Nested values
lighter fit config.yaml system::optimizer::lr=0.001

# Multiple overrides
lighter fit config.yaml \
    trainer::max_epochs=100 \
    system::optimizer::lr=0.001 \
    trainer::devices=4
```
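Conceptually, each override is just a `::`-separated path plus a value. A hypothetical sketch of that parsing (Lighter's actual parser also handles type casting, which is omitted here, so values stay strings):

```python
def parse_override(spec):
    """Split 'trainer::max_epochs=100' into (['trainer', 'max_epochs'], '100')."""
    path, _, value = spec.partition("=")
    return path.split("::"), value

def set_by_path(config, keys, value):
    """Walk/create nested dicts along `keys` and set the final key to `value`."""
    node = config
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value

config = {"trainer": {"max_epochs": 10}}
set_by_path(config, *parse_override("trainer::max_epochs=100"))
print(config)  # {'trainer': {'max_epochs': '100'}}
```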
## Merging Configs

Combine multiple YAML files for modular experiments:

```bash
# Merge base + experiment
lighter fit base.yaml,experiment.yaml

# Compose from modules
lighter fit base.yaml,models/resnet.yaml,data/cifar10.yaml
```
### Default Merging Behavior

Dictionaries merge recursively:

```yaml
# base.yaml
trainer:
  max_epochs: 10
  devices: 1

# experiment.yaml
trainer:
  max_epochs: 100   # Overrides
  accelerator: gpu  # Adds

# Result: max_epochs=100, devices=1, accelerator=gpu
```
Lists extend (append):

```yaml
# base.yaml
trainer:
  callbacks:
    - _target_: pytorch_lightning.callbacks.ModelCheckpoint

# experiment.yaml
trainer:
  callbacks:
    - _target_: pytorch_lightning.callbacks.EarlyStopping

# Result: Both callbacks present
```
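These two defaults can be sketched in plain Python. This is a simplified model of the merge semantics described above, not Sparkwheel's actual implementation:

```python
def merge(base, override):
    """Recursively merge `override` into `base`: dicts merge, lists extend."""
    if isinstance(base, dict) and isinstance(override, dict):
        merged = dict(base)
        for key, value in override.items():
            # Recurse on shared keys; new keys are simply added
            merged[key] = merge(base[key], value) if key in base else value
        return merged
    if isinstance(base, list) and isinstance(override, list):
        return base + override  # lists extend (append) rather than replace
    return override  # scalars: the overriding value wins

base = {"trainer": {"max_epochs": 10, "devices": 1}}
experiment = {"trainer": {"max_epochs": 100, "accelerator": "gpu"}}
print(merge(base, experiment))
# {'trainer': {'max_epochs': 100, 'devices': 1, 'accelerator': 'gpu'}}
```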
### Override Merging: `=` and `~`

Replace with `=`:

```yaml
# experiment.yaml
trainer:
  =callbacks:  # Replace instead of extend
    - _target_: pytorch_lightning.callbacks.RichProgressBar
```
Delete with `~`:

```yaml
# Delete entire key
trainer:
  ~callbacks: null

# Delete list items
trainer:
  ~callbacks: [1, 3]  # Delete indices 1 and 3

# Delete dict keys
system:
  ~dataloaders: ["train", "test"]
```
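The override operators can be modeled the same way. A hedged sketch of how the `=` and `~` key prefixes might be applied during a merge (illustrative only, not Sparkwheel's code):

```python
def apply_overrides(base, override):
    """Apply a dict of overrides, honoring '=' (replace) and '~' (delete) prefixes."""
    merged = dict(base)
    for key, value in override.items():
        if key.startswith("="):        # =key: replace outright instead of merging
            merged[key[1:]] = value
        elif key.startswith("~"):      # ~key: delete
            target = key[1:]
            if value is None:          # ~key: null -> drop the whole key
                merged.pop(target, None)
            elif isinstance(merged.get(target), list):
                merged[target] = [v for i, v in enumerate(merged[target])
                                  if i not in value]    # delete listed indices
            elif isinstance(merged.get(target), dict):
                merged[target] = {k: v for k, v in merged[target].items()
                                  if k not in value}    # delete listed dict keys
        else:
            merged[key] = value
    return merged

base = {"callbacks": ["ModelCheckpoint", "EarlyStopping"]}
print(apply_overrides(base, {"=callbacks": ["RichProgressBar"]}))
# {'callbacks': ['RichProgressBar']}
print(apply_overrides(base, {"~callbacks": None}))
# {}
```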
## Common Lighter Patterns

### Pattern 1: Model → Optimizer

```yaml
system:
  model:
    _target_: torchvision.models.resnet18
    num_classes: 10
  optimizer:
    _target_: torch.optim.Adam
    params: "$@system::model.parameters()"
    lr: 0.001
```
### Pattern 2: Optimizer → Scheduler

```yaml
system:
  optimizer:
    _target_: torch.optim.Adam
    params: "$@system::model.parameters()"
    lr: 0.001
  scheduler:
    _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
    optimizer: "@system::optimizer"
    factor: 0.5
```
### Pattern 3: Reusing Configurations

```yaml
system:
  metrics:
    train:
      - _target_: torchmetrics.Accuracy
        task: multiclass
        num_classes: 10
    val: "%system::metrics::train"  # Reuse

  dataloaders:
    train:
      _target_: torch.utils.data.DataLoader
      batch_size: 128
      num_workers: 4
    val:
      _target_: torch.utils.data.DataLoader
      batch_size: "%::train::batch_size"    # Relative reference
      num_workers: "%::train::num_workers"
```
### Pattern 4: Variables for Reuse

```yaml
vars:
  batch_size: 32
  num_classes: 10
  base_lr: 0.001

system:
  model:
    _target_: torchvision.models.resnet18
    num_classes: "%vars::num_classes"
  optimizer:
    lr: "%vars::base_lr"
  dataloaders:
    train:
      batch_size: "%vars::batch_size"
```
### Pattern 5: Stage-Specific Arguments

```yaml
args:
  fit:
    ckpt_path: null  # Start from scratch
  test:
    ckpt_path: "checkpoints/best.ckpt"
  predict:
    ckpt_path: "checkpoints/best.ckpt"
    return_predictions: true
```
Override from CLI:
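For example, assuming stage arguments are addressable like any other config key (the checkpoint path here is illustrative):

```bash
lighter test config.yaml args::test::ckpt_path=checkpoints/last.ckpt
```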
## Common Pitfalls

### 1. Resolved vs Raw Reference

```yaml
# ❌ Wrong: Shares same instance
metrics:
  val: "@system::metrics::train"

# ✅ Correct: Creates new instance
metrics:
  val: "%system::metrics::train"
```
### 2. Path Notation vs Python Attributes

```yaml
# ❌ Wrong: :: is for config paths
params: "$@system::model::parameters()"

# ✅ Correct: . is for Python attributes
params: "$@system::model.parameters()"
```
### 3. Missing `$` for Expressions

```yaml
# ❌ Wrong: Treated as string
batch_size: "@vars::base_batch * 2"

# ✅ Correct: Evaluated
batch_size: "$%vars::base_batch * 2"
```
## Advanced Features
For advanced usage beyond the patterns covered here, refer to the Sparkwheel documentation.
## Next Steps
- Running Experiments - Execute training, testing, prediction
- Configuration Recipes - Ready-to-use patterns
- Troubleshooting - Debug config errors
- Sparkwheel Documentation - Complete reference