
LeanCert.ML.Symbolic.ReLU

Verified DeepPoly ReLU Relaxation #

This module implements the "Triangle Relaxation" for ReLU activation. This is the mathematical core of Symbolic Interval Propagation (SIP).

The Dependency Problem #

Standard interval arithmetic fails on expressions like x - x because it treats each occurrence of x independently, returning [lo - hi, hi - lo] instead of 0.

Symbolic Interval Propagation solves this by tracking variables as linear functions of the inputs: xₖ = Σ wᵢ · inputᵢ + b
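A minimal sketch (not part of this module) of the dependency problem: evaluating x - x over [0, 1] with naive endpoint arithmetic yields [-1, 1] rather than the exact [0, 0].

```lean
-- Hypothetical helper, for illustration only: naive interval subtraction
-- [lo₁, hi₁] - [lo₂, hi₂] = [lo₁ - hi₂, hi₁ - lo₂].
def isub (a b : Int × Int) : Int × Int :=
  (a.1 - b.2, a.2 - b.1)

-- Treating the two occurrences of x independently:
#eval isub (0, 1) (0, 1)  -- (-1, 1), even though x - x = 0 for every x
```

Tracking each variable symbolically as a linear function of the inputs lets the two occurrences cancel exactly.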

The Geometry #

For a neuron with pre-activation bounds [l, u], we bound ReLU(x) = max(0, x) using linear functions:

  1. Lower Linear Bound:

    • If l ≥ 0: y = x (always active)
    • If u ≤ 0: y = 0 (always inactive)
    • If l < 0 < u: y ≥ 0 (simplest valid choice)
  2. Upper Linear Bound (The DeepPoly Line):

    • If l < 0 < u: the line passing through (l, 0) and (u, u).
      Slope: λ = u / (u - l)
      Equation: y ≤ λ · (x - l)
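A worked instance of the crossing case, with l = -1 and u = 3:

```latex
\lambda = \frac{u}{u - l} = \frac{3}{3 - (-1)} = \frac{3}{4},
\qquad y \le \tfrac{3}{4}(x + 1).
```

The line is tight at both endpoints: at x = -1 it gives 0 = ReLU(-1), and at x = 3 it gives 3 = ReLU(3). In between it strictly over-approximates, e.g. at x = 0 it gives 3/4 ≥ ReLU(0) = 0.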

Main Results #

LinearBound: represents a linear function y = slope · x + bias
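The declaration itself did not render above; the following is a plausible reconstruction of the structure and its evaluator. The field names `slope` and `bias` are taken from the docstrings, but the exact definition is an assumption.

```lean
import Mathlib.Data.Real.Basic

structure LinearBound where
  slope : ℝ
  bias  : ℝ

/-- Evaluate the linear bound at `x`: y = slope · x + bias. -/
def LinearBound.eval (b : LinearBound) (x : ℝ) : ℝ :=
  b.slope * x + b.bias
```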


        Evaluate linear bound at x


The DeepPoly upper bound line for the crossing case (l < 0 < u): the line passing through (l, 0) and (u, u), with slope u / (u - l). Using point-slope form at (l, 0): y = slope · (x - l).
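A sketch of how `crossingUpperBound` could be written in slope/bias form, assuming a `LinearBound` structure with `slope` and `bias` fields (the actual definition in the module may differ):

```lean
-- slope = u / (u - l); bias chosen so that eval l = 0:
-- eval x = u/(u-l) * x - u/(u-l) * l = u/(u-l) * (x - l)
def crossingUpperBound (l u : ℝ) : LinearBound :=
  { slope := u / (u - l), bias := -(u / (u - l)) * l }
```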

theorem LeanCert.ML.Symbolic.crossingUpperBound_eval (l u : ℝ) (x : ℝ) (h_ne : u - l ≠ 0) :
(crossingUpperBound l u).eval x = u / (u - l) * (x - l)

            Key lemma: The crossing upper bound evaluates to slope · (x - l)

theorem LeanCert.ML.Symbolic.upper_bound_valid (l u : ℝ) (x : ℝ) (hl : l < 0) (hu : 0 < u) (hx_mem : l ≤ x ∧ x ≤ u) :
max 0 x ≤ (crossingUpperBound l u).eval x

            The Triangle Theorem: The DeepPoly upper bound contains ReLU.

            For x ∈ [l, u] where l < 0 < u, we prove max(0, x) ≤ λ · (x - l) where λ = u / (u - l).

            Geometric intuition: ReLU is convex, and the line connecting (l, ReLU(l)) = (l, 0) and (u, ReLU(u)) = (u, u) lies above the graph.
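The inequality can also be checked directly. With λ = u/(u - l), each branch of the max reduces to a sign argument:

```latex
% 0-branch: x \ge l and \lambda > 0, hence
\lambda (x - l) \ge 0.
% x-branch:
\lambda (x - l) - x
  = \frac{u(x - l) - x(u - l)}{u - l}
  = \frac{xl - ul}{u - l}
  = \frac{l\,(x - u)}{u - l} \ge 0,
% since l < 0, \; x - u \le 0, \; u - l > 0.
```

Hence max(0, x) ≤ λ · (x - l) on all of [l, u].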

            The Symbolic Bound structure for a single variable. Represents the constraint: lower.eval(x) ≤ y ≤ upper.eval(x)

            • lower : LinearBound

              Linear lower bound: y ≥ slope · x + bias

            • upper : LinearBound

              Linear upper bound: y ≤ slope · x + bias
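A plausible reconstruction of the structure (the rendered declaration is missing), assuming it simply pairs the two linear bounds named in the fields:

```lean
structure SymbolicBound where
  lower : LinearBound  -- y ≥ lower.eval x
  upper : LinearBound  -- y ≤ upper.eval x
```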


                  Compute the verified relaxation for ReLU given concrete bounds [l, u]

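Following the three geometric cases described earlier, `getReLURelaxation` could be sketched as below. The module's actual equations did not render, so this is an assumption consistent with the stated docstrings (marked `noncomputable` because case-splitting on real-number comparisons is classical):

```lean
noncomputable def getReLURelaxation (l u : ℝ) : SymbolicBound :=
  if 0 ≤ l then
    -- always active: ReLU(x) = x, so both bounds are the identity line
    { lower := { slope := 1, bias := 0 }, upper := { slope := 1, bias := 0 } }
  else if u ≤ 0 then
    -- always inactive: ReLU(x) = 0, so both bounds are the zero line
    { lower := { slope := 0, bias := 0 }, upper := { slope := 0, bias := 0 } }
  else
    -- crossing case: 0 from below, the DeepPoly line from above
    { lower := { slope := 0, bias := 0 }, upper := crossingUpperBound l u }
```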

                    Lower bound for crossing case: 0 ≤ max(0, x)

theorem LeanCert.ML.Symbolic.relu_relaxation_sound (l u : ℝ) (x : ℝ) (h : l ≤ x ∧ x ≤ u) :
have sb := getReLURelaxation l u; sb.lower.eval x ≤ max 0 x ∧ max 0 x ≤ sb.upper.eval x

                    Main Soundness Theorem: The ReLU relaxation is mathematically correct.

                    For any x in the interval [l, u], the computed linear bounds contain ReLU(x): lower.eval(x) ≤ max(0, x) ≤ upper.eval(x)

This theorem is the mathematical foundation of verified neural network verification.

                    Properties of Linear Bounds #

                    Identity bound: y = x


                      Zero bound: y = 0


                        Compose linear bounds: if y = ax + b and z = cy + d, then z = c(ax+b) + d = (ca)x + (cb+d)
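Matching the algebra in the docstring, a sketch of how `compose` could be defined (an assumption; only the `eval_compose` statement below is from the source):

```lean
-- z = c·(a·x + b) + d = (c·a)·x + (c·b + d)
def LinearBound.compose (outer inner : LinearBound) : LinearBound :=
  { slope := outer.slope * inner.slope
  , bias  := outer.slope * inner.bias + outer.bias }
```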

theorem LeanCert.ML.Symbolic.LinearBound.eval_compose (outer inner : LinearBound) (x : ℝ) :
                          (outer.compose inner).eval x = outer.eval (inner.eval x)

                          Interval Width after ReLU #

The output interval width after applying ReLU relaxation. In the crossing case, width = λ · (u - l) = u · (u - l) / (u - l) = u.
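Concretely, the upper line runs from 0 at x = l to u at x = u, while the lower bound is identically 0, so the relaxed output lies in [0, u]:

```latex
\lambda \cdot (u - l) \;=\; \frac{u}{u - l}\,(u - l) \;=\; u .
```

This is why, for a crossing neuron, the relaxation cannot widen the interval beyond the magnitude of the positive pre-activation bound.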

theorem LeanCert.ML.Symbolic.crossingUpperBound_at_u (l u : ℝ) (hl : l < 0) (hu : 0 < u) :
                            (crossingUpperBound l u).eval u = u

                            In the crossing case, evaluating upper bound at u gives exactly u

theorem LeanCert.ML.Symbolic.crossingUpperBound_at_l (l u : ℝ) (hl : l < 0) (hu : 0 < u) :
(crossingUpperBound l u).eval l = 0

                            In the crossing case, evaluating upper bound at l gives exactly 0