Executors#
Executors determine how the REF schedules and runs diagnostic computations.
You can configure which executor to use in your ref.toml under the [executor] section:
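A minimal sketch is shown below; the dotted class path is illustrative and may differ between versions, so check the REF documentation for the exact value:

```toml
[executor]
# Fully qualified class of the executor to use (illustrative path)
executor = "climate_ref.executor.LocalExecutor"
```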
Additionally, you can configure executor-specific options in the [executor.config] section of your ref.toml.
For example, with the LocalExecutor you can set the number of parallel jobs:
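A sketch of what this might look like, assuming the option is named n (the actual option name may differ):

```toml
[executor.config]
# Number of parallel worker processes (illustrative option name)
n = 4
```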
The REF supports four built-in executors:
LocalExecutor (default)#
- Runs diagnostics in parallel on your local machine using a process pool.
- Good for typical desktop or laptop usage.
- Use when you want maximum CPU utilization on a single host.
SynchronousExecutor#
- Runs each diagnostic serially in the main Python process.
- Useful for debugging or profiling individual diagnostics.
- To enable:
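A minimal ref.toml sketch, assuming the executor is selected by its dotted class path (illustrative; the exact path may differ in your version):

```toml
[executor]
# Run each diagnostic serially in the main process (illustrative path)
executor = "climate_ref.executor.SynchronousExecutor"
```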
HPCExecutor#
- Submits diagnostics as batch jobs on HPC clusters using Slurm + Parsl.
- Coordinates a master process on the login node and worker jobs on compute nodes.
- See the HPCExecutor guide for setup and configuration options.
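As a sketch, selection follows the same pattern as the other executors; the dotted path is illustrative, and the cluster-specific option names are documented in the HPCExecutor guide rather than shown here:

```toml
[executor]
# Submit diagnostics as Slurm batch jobs via Parsl (illustrative path)
executor = "climate_ref.executor.HPCExecutor"

[executor.config]
# Cluster-specific settings (account, partition, walltime, ...) go here;
# see the HPCExecutor guide for the supported keys.
```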
CeleryExecutor#
- Distributes tasks via Celery and a message broker (e.g., Redis).
- Ideal for running REF on multi-node clusters or cloud environments.
- See the Docker deployment guide for a Celery + Redis example.
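A hedged sketch of the selection; the broker key and URL below are placeholders, and the actual Celery + Redis settings are shown in the Docker deployment guide:

```toml
[executor]
# Distribute diagnostics to Celery workers (illustrative path)
executor = "climate_ref.executor.CeleryExecutor"

[executor.config]
# Message broker connection (placeholder key and URL); see the
# Docker deployment guide for the actual configuration.
# broker_url = "redis://localhost:6379/0"
```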
Choosing an executor#
- LocalExecutor is recommended for most local workflows.
- SynchronousExecutor helps isolate issues in individual diagnostics.
- HPCExecutor is ideal for large-scale runs on HPC systems.
- CeleryExecutor suits distributed deployments in containerized or cloud setups.
Once configured, run ref solve as usual, and the REF will use your chosen executor to schedule and execute diagnostics.