Verified DeepPoly ReLU Relaxation #
This module implements the "Triangle Relaxation" for ReLU activation. This is the mathematical core of Symbolic Interval Propagation (SIP).
The Dependency Problem #
Standard interval arithmetic fails on expressions like x - x because it treats
each occurrence of x independently, returning [lo - hi, hi - lo] instead of 0.
Symbolic Interval Propagation solves this by tracking variables as linear functions
of the inputs: xₖ = Σ wᵢ · inputᵢ + b
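The loss of precision is easy to see in plain Lean (the names below are illustrative only, not from this module): naive interval subtraction treats the two occurrences of x independently.

```lean
-- Minimal illustration: interval subtraction ignores that both
-- operands are the *same* variable x.
structure Interval where
  lo : Int
  hi : Int
deriving Repr

def Interval.sub (a b : Interval) : Interval :=
  { lo := a.lo - b.hi, hi := a.hi - b.lo }

def x : Interval := { lo := 0, hi := 1 }

#eval x.sub x  -- [-1, 1] rather than the exact [0, 0]
```

Tracking each variable as a linear function of the inputs keeps the two occurrences correlated, so x - x simplifies to the zero function.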
The Geometry #
For a neuron with pre-activation bounds [l, u], we bound ReLU(x) = max(0, x)
using linear functions:
Lower Linear Bound:
- If l ≥ 0: y = x (always active)
- If u ≤ 0: y = 0 (always inactive)
- If l < 0 < u: y ≥ 0 (simplest valid choice)
Upper Linear Bound (The DeepPoly Line):
- If l < 0 < u: the line through (l, 0) and (u, u), with slope λ = u / (u - l), giving the constraint y ≤ λ · (x - l)
- If l ≥ 0 or u ≤ 0: the upper bound coincides with the lower bound (y = x or y = 0, respectively)
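As a concrete instance of the crossing case, take [l, u] = [-1, 2]: the slope is λ = 2 / (2 - (-1)) = 2/3, so the upper line is y = (2/3) · (x + 1). A Mathlib-based sketch checking the arithmetic:

```lean
-- λ = u / (u - l) for l = -1, u = 2
example : (2 : ℚ) / (2 - (-1)) = 2 / 3 := by norm_num

-- The line meets ReLU at the endpoint x = u: λ · (u - l) = u
example : (2 / 3 : ℚ) * (2 - (-1)) = 2 := by norm_num
```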
Main Results #
- upper_bound_valid - The triangle upper bound contains ReLU in the crossing case
- relu_relaxation_sound - The complete soundness theorem covering all cases
Represents a linear function y = slope · x + bias
The DeepPoly upper bound line for the crossing case (l < 0 < u): the line through (l, 0) and (u, u), with slope u / (u - l). Using point-slope form at (l, 0): y = slope · (x - l).
The Triangle Theorem: The DeepPoly upper bound contains ReLU.
For x ∈ [l, u] with l < 0 < u, we prove max(0, x) ≤ λ · (x - l), where λ = u / (u - l).
Geometric intuition: ReLU is convex, and the line connecting (l, ReLU(l)) = (l, 0) and (u, ReLU(u)) = (u, u) lies above the graph.
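After multiplying through by the positive denominator u - l, the inequality splits into two sign checks. A Mathlib-based sketch (tactic behavior may vary slightly across Mathlib versions):

```lean
-- max(0, x) · (u - l) ≤ u · (x - l): the triangle inequality with the
-- denominator cleared. Dividing by u - l > 0 recovers the stated bound.
example (l u x : ℚ) (hl : l < 0) (hu : 0 < u)
    (hx1 : l ≤ x) (hx2 : x ≤ u) :
    max 0 x * (u - l) ≤ u * (x - l) := by
  rcases le_total x 0 with hx | hx
  · -- inactive side: 0 ≤ u · (x - l) since u > 0 and x ≥ l
    rw [max_eq_left hx]
    nlinarith
  · -- active side: x · (u - l) ≤ u · (x - l) ⟺ l · (x - u) ≥ 0,
    -- which holds since l < 0 and x ≤ u
    rw [max_eq_right hx]
    nlinarith
```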
The Symbolic Bound structure for a single variable. Represents the constraint: lower.eval(x) ≤ y ≤ upper.eval(x)
- lower : LinearBound
Linear lower bound: y ≥ slope · x + bias
- upper : LinearBound
Linear upper bound: y ≤ slope · x + bias
Compute the verified relaxation of ReLU given concrete pre-activation bounds [l, u].
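The case analysis can be sketched as follows (a self-contained illustration assuming Mathlib's ℚ; the structures mirror the slope/bias and lower/upper fields shown on this page, and reluRelaxation is a hypothetical name, not necessarily the module's):

```lean
structure LinearBound where
  slope : ℚ
  bias  : ℚ

structure SymbolicBound where
  lower : LinearBound
  upper : LinearBound

def reluRelaxation (l u : ℚ) : SymbolicBound :=
  if 0 ≤ l then
    { lower := ⟨1, 0⟩, upper := ⟨1, 0⟩ }   -- always active: y = x
  else if u ≤ 0 then
    { lower := ⟨0, 0⟩, upper := ⟨0, 0⟩ }   -- always inactive: y = 0
  else
    let lam := u / (u - l)                 -- crossing case: l < 0 < u
    { lower := ⟨0, 0⟩,                     -- y ≥ 0
      upper := ⟨lam, -lam * l⟩ }           -- y ≤ λ · (x - l) = λ·x - λ·l
```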
Lower bound for the crossing case: 0 ≤ max(0, x)
Main Soundness Theorem: The ReLU relaxation is mathematically correct.
For any x in the interval [l, u], the computed linear bounds contain ReLU(x): lower.eval(x) ≤ max(0, x) ≤ upper.eval(x)
This theorem is the foundation of machine-checked neural network verification.
Properties of Linear Bounds #
Identity bound: y = x
Equations
- LeanCert.ML.Symbolic.LinearBound.identity = { slope := 1, bias := 0 }
Zero bound: y = 0
Equations
- LeanCert.ML.Symbolic.LinearBound.zero = { slope := 0, bias := 0 }
Compose linear bounds: if y = ax + b and z = cy + d, then z = c(ax+b) + d = (ca)x + (cb+d)
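The composition identity is pure algebra, so it can be discharged directly (a Mathlib-based sketch):

```lean
-- Substituting y = a·x + b into z = c·y + d and expanding gives the
-- composed slope c·a and bias c·b + d.
example (a b c d x : ℚ) :
    c * (a * x + b) + d = (c * a) * x + (c * b + d) := by ring
```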
Interval Width after ReLU #
In the crossing case, evaluating the upper bound at u gives exactly u
In the crossing case, evaluating the upper bound at l gives exactly 0
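Both endpoint facts reduce to short computations on the line λ · (x - l) with λ = u / (u - l); a Mathlib-based sketch (lemma names may differ across Mathlib versions):

```lean
-- At x = u: λ · (u - l) = u, since u - l > 0 lets the factor cancel.
example (l u : ℚ) (hl : l < 0) (hu : 0 < u) :
    u / (u - l) * (u - l) = u := by
  have h : u - l ≠ 0 := ne_of_gt (by linarith)
  field_simp

-- At x = l: λ · (l - l) = 0, with no side condition needed.
example (l u : ℚ) : u / (u - l) * (l - l) = 0 := by ring
```

So the triangle upper bound is tight at both endpoints: it touches the graph of ReLU exactly at (l, 0) and (u, u).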