Distinguishing Power Connects Expressiveness Across Domains

A new essay on arXiv formalizes a shared pattern across algebra, analysis, machine learning, and linguistics: when a class of objects can distinguish all relevant inputs, it can also express any function consistent with those distinctions. The authors, led by Benjamin Blum-Smith, draw a parallel between the Fundamental Theorem of Galois Theory and the Stone--Weierstrass theorem, and prove an elementary connecting result about "distinguishing power." The paper then applies this lens to practical ML topics such as equivariant models, invariant features, and probing in language models, arguing that expressivity claims should be measured relative to a model class's distinguishing capacity. For practitioners, the essay reframes universality results and suggests more precise criteria for model sufficiency in representational tasks.
What happened
The authors, led by Benjamin Blum-Smith, published a cross-disciplinary essay on arXiv (v2, 21 Apr 2026) that frames the Fundamental Theorem of Galois Theory and the Stone--Weierstrass theorem as instances of a single meta-principle: the relationship between a class's ability to distinguish inputs and its ability to express functions on those inputs. The paper proves an elementary theorem making this connection formal and surveys implications for machine learning, particularly equivariant and invariant modeling, and for linguistics.
Technical details
The core formalism isolates two notions: the class's distinguishing power (its capacity to separate points or equivalence classes induced by observables) and its expressive power (the algebraic or topological closure of functions generated by the class). The essay shows that when a class distinguishes the relevant quotiented space, closure results like Stone--Weierstrass or algebraic correspondences like the Galois correspondence yield full expressivity on that quotient. Practitioners should note:
- The result is statement-level and abstract; it does not produce a new neural architecture but gives criteria to evaluate universality claims.
- It ties standard universal approximation perspectives to symmetry-aware settings: if a model class cannot separate orbits induced by a symmetry, no universal approximator built from that class can recover orbit-distinguishing functions.
- The paper points to constructive parameterization techniques in the equivariant-ML and invariant-polynomials literature that make the abstract correspondences algorithmically relevant.
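The orbit-separation point can be made concrete with a toy sketch (our illustration, not from the paper): a permutation-invariant featurizer that cannot separate two distinct multisets cannot support any downstream function that distinguishes them, while enriching the features with higher power sums, a classical invariant-theory construction, restores separation.

```python
# Toy illustration (assumed example, not from the paper): two
# permutation-invariant feature maps with different distinguishing power.

def sum_pool(xs):
    # Invariant under permutation, but collapses any multisets
    # that happen to share the same sum.
    return (sum(xs),)

def power_sums(xs, k=2):
    # Power sums of degrees 1..k; for multisets of size <= k these
    # separate distinct multisets (a classical invariant-theory fact).
    return tuple(sum(x ** d for x in xs) for d in range(1, k + 1))

a, b = [1, 3], [2, 2]  # distinct multisets with equal sums
print(sum_pool(a) == sum_pool(b))      # True  -> not separated
print(power_sums(a) == power_sums(b))  # False -> separated
```

No model built on `sum_pool` features, however wide or deep, can learn a function that treats `a` and `b` differently; the limitation is in distinguishing power, not capacity.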
Context and significance
This essay reframes widely used but sometimes informal claims about expressivity. In contemporary ML, results about universal approximation often assume unrestricted input distinctions; the paper shows that the relevant notion is the quotient induced by invariances or by the measurement process. That matters for equivariant neural networks, graph networks, and representation probing: a model family may be "universal" on the quotient space but unable to recover distinctions lost by imposed invariances. For linguistics, the same perspective explains why certain grammatical distinctions are representable only when the signal class can separate underlying linguistic states. The work consolidates threads from algebraic invariant theory, classical approximation theory, and recent equivariant-ML research into a single diagnostic principle.
Practical implications
For model selection, architecture design, and diagnostic benchmarks, translate expressivity claims into tests of distinguishing power: does your feature map or model family separate the equivalence classes implied by the task? If not, adding capacity alone will not recover the missing distinctions. For researchers building equivariant layers or invariant featurizers, the paper suggests focusing on constructive bases of invariant functions and on explicit parameterizations that make the Stone--Weierstrass or Galois correspondences usable in training.
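Such a test can be operationalized as a simple separation check over sampled inputs. The sketch below is hypothetical (the interface and function names are ours, not the paper's): it flags a feature map that merges inputs from distinct task-relevant equivalence classes.

```python
# Hypothetical diagnostic sketch: does a feature map separate the
# equivalence classes a task requires? (Interface is illustrative.)

def separates(feature_map, samples, class_of):
    """True iff no two samples from distinct classes collide in feature space.

    feature_map must return a hashable value; class_of labels the
    task-relevant equivalence class of each input.
    """
    seen = {}  # feature value -> class label
    for x in samples:
        f, c = feature_map(x), class_of(x)
        if f in seen and seen[f] != c:
            return False  # two classes collide in feature space
        seen[f] = c
    return True

# Example task: predict parity of an integer, with sign as a nuisance.
samples = [-3, -2, 2, 3]
parity = lambda x: x % 2
print(separates(lambda x: abs(x) % 2, samples, parity))   # True
print(separates(lambda x: abs(x) // 2, samples, parity))  # False
```

When the check fails, the paper's lens says the remedy is a richer featurizer, not more capacity downstream of the collapsed representation.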
What to watch
Look for follow-up work that converts the essay's abstract theorems into explicit constructive recipes for equivariant parameterizations and for empirical probes that operationalize "distinguishing power" in large models. Also watch applications linking this lens to rigorous probing methodologies in NLP.
Scoring Rationale
The paper provides a clear, cross-disciplinary conceptual lens tying classic math theorems to ML practice, useful for theorists and practitioners. It is not an empirical breakthrough or new model, so its impact is notable but not industry-shaking.

