Multi-experiment Equation Learning Improves Model Generalizability
Researchers introduce `ME-EQL`, a multi-experiment extension to equation learning that aims to recover interpretable continuum models from agent-based simulations across varying parameter regimes. The paper presents two complementary methods, `OAT ME-EQL` (one-at-a-time) and `ES ME-EQL` (embedded structure), and evaluates them on a birth-death mean-field model and an on-lattice agent-based model with birth, death, and migration. Both approaches reduce parameter-recovery error compared with single-experiment equation learning, with `OAT ME-EQL` showing superior generalizability across parameter space. The work targets practitioners building surrogate models or interpretable dynamical systems from heterogeneous experiments and suggests concrete paths to improve transferability of learned equations.
What happened
Researchers propose `ME-EQL`, a multi-experiment extension to equation learning that improves how discovered continuum models generalize across parameter space for biological systems. The paper formalizes two methods, `OAT ME-EQL` and `ES ME-EQL`, and demonstrates them on a birth-death mean-field model and an on-lattice agent-based model with birth, death, and migration. Both approaches reduce relative error in parameter recovery from agent-based simulations; `OAT ME-EQL` provides the strongest generalization across unseen parameter sets.
Technical details
The authors address the core limitation of standard equation learning, which typically fits a model per experimental condition and fails to interpolate behavior between parameterizations. `ME-EQL` reframes discovery to leverage multiple experiments simultaneously. Key technical elements:
- `OAT ME-EQL`: learn independent symbolic/parametric models per parameter set, then connect models via interpolation or meta-modeling to predict intermediate parameter regimes.
- `ES ME-EQL`: build a shared library of basis terms and learn parameter-dependent coefficients, embedding structure across experiments into a single discoverable representation.
- Evaluation uses synthetic data from a birth-death mean-field system and an on-lattice ABM; metrics focus on parameter-recovery error, predictive error across parameter sweeps, and interpretability of discovered terms.
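To make the `OAT ME-EQL` idea concrete, here is a minimal sketch in NumPy. It assumes a stand-in birth-death mean-field model, du/dt = (b − d)·u·(1 − u), which is hypothetical and not taken from the paper; the simulation, the library of terms {u, u², u³}, and the thresholding step are all illustrative choices. The sketch fits one sparse-regression model per birth rate, then interpolates each library coefficient across the trained parameter sets to predict an unseen regime, which is the one-at-a-time strategy in miniature:

```python
import numpy as np

def simulate_mean_field(b, d, u0=0.05, T=25.0, n=500):
    # Forward-Euler simulation of a hypothetical birth-death mean-field ODE,
    # du/dt = (b - d) * u * (1 - u); a stand-in for the paper's model.
    t = np.linspace(0.0, T, n)
    u = np.empty(n)
    u[0] = u0
    dt = t[1] - t[0]
    for i in range(n - 1):
        u[i + 1] = u[i] + dt * (b - d) * u[i] * (1.0 - u[i])
    return t, u

def fit_library(t, u, threshold=1e-3):
    # Regress du/dt onto a small term library {u, u^2, u^3}, with one
    # hard-thresholding pass (a minimal STLSQ-style sparsification step).
    dudt = np.gradient(u, t)
    Theta = np.column_stack([u, u**2, u**3])
    coef, *_ = np.linalg.lstsq(Theta, dudt, rcond=None)
    keep = np.abs(coef) >= threshold
    coef[~keep] = 0.0
    if keep.any():  # refit on the surviving terms only
        coef[keep], *_ = np.linalg.lstsq(Theta[:, keep], dudt, rcond=None)
    return coef

# OAT ME-EQL sketch: fit one model per parameter set, then interpolate
# each library coefficient as a function of the birth rate b.
d = 0.1
b_train = np.array([0.3, 0.5, 0.7, 0.9])
coefs = np.array([fit_library(*simulate_mean_field(b, d)) for b in b_train])

b_new = 0.6  # an unseen parameter regime between training sets
coef_new = np.array([np.interp(b_new, b_train, coefs[:, j])
                     for j in range(coefs.shape[1])])
# For this toy model, the interpolated u and u^2 coefficients should be
# close to (b_new - d) and -(b_new - d), i.e. roughly 0.5 and -0.5.
```

An `ES ME-EQL` variant would instead pool all experiments into one regression and make each coefficient an explicit function of b (e.g. a low-order polynomial), trading per-experiment flexibility for a single shared structure.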
Context and significance
Equation learning and system identification are central to scientific ML because they produce compact, interpretable descriptions of dynamics and serve as fast surrogates for costly simulations. This work directly tackles transferability, a common bottleneck when models are trained on narrow experimental conditions. By explicitly encoding multi-experiment structure, `ME-EQL` sits between per-experiment symbolic discovery and fully pooled parameterized models, offering a practical trade-off for practitioners who need interpretable surrogates that remain valid across experimental variability.
What to watch
Validate on noisy, real biological data and on higher-dimensional spatial systems. Also watch integration with physics-informed priors, sparsity controls, and active experiment design to select the most informative parameter sets for library construction.
Scoring Rationale
This is a notable methodological advance for scientific ML and system identification, improving generalization of discovered equations across parameter space. It is primarily of interest to practitioners working on interpretable surrogates and ABM-to-continuum mapping.