Neural Network Verification Example #
This file demonstrates verified neural network robustness using LeanCert's interval arithmetic infrastructure.
Example: Verifying a 2→2→1 Network #
We define a small neural network and prove that for inputs in a bounded region, the output remains within specified bounds. This is the foundation for certified adversarial robustness.
Main results #
- `exampleNet` - A simple 2→2→1 ReLU network
- `robustness_certificate` - Proof that the output stays bounded for bounded inputs
Network Definition #
The example two-layer network
Equations
Instances For
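The concrete weights of `exampleNet` are not reproduced on this page. The following is a hypothetical sketch of what a 2→2→1 ReLU network of this kind could look like; the `relu` helper, the weight values, and the way layers are written out are illustrative assumptions, not LeanCert's actual definitions:

```lean
-- Sketch only: weights and structure are illustrative assumptions.
-- A 2→2→1 network: two hidden ReLU neurons, then one linear output.
def relu (x : ℚ) : ℚ := max x 0

def exampleNetSketch (x y : ℚ) : ℚ :=
  -- hidden layer: each neuron is relu (w₁·x + w₂·y + b)
  let h₁ := relu (x - y)
  let h₂ := relu ((1/2) * x + (1/2) * y + 1/10)
  -- output layer: a linear combination of the hidden activations
  h₁ + 2 * h₂
```

The actual `exampleNet` presumably uses LeanCert's own network representation so that its interval-arithmetic evaluator can be applied to it.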
Well-formedness Proofs #
Input Region Definition #
Create an interval around a center point with radius ε
Equations
- LeanCert.Examples.NeuralNet.intervalAround center eps heps prec = LeanCert.Core.IntervalDyadic.ofIntervalRat { lo := center - eps, hi := center + eps, le := ⋯ } prec
Instances For
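Reading off the equation above: `intervalAround` first forms the rational interval `[center - eps, center + eps]` (with `heps` discharging the side condition that the lower endpoint is below the upper one) and then converts it to a dyadic interval at precision `prec` via `ofIntervalRat`. A hedged usage sketch, with the tactic used to discharge `heps` being an assumption:

```lean
-- Sketch, assuming the signature rendered above.
-- Builds a dyadic enclosure of [0.4, 0.6] at some precision.
example :=
  LeanCert.Examples.NeuralNet.intervalAround (1/2) (1/10) (by norm_num) 30
```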
Input region: x ∈ [0.4, 0.6], y ∈ [0.4, 0.6]. This represents inputs near (0.5, 0.5) with perturbation radius ε = 0.1.
Equations
- One or more equations did not get rendered due to their size.
Instances For
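Since the defining equations were too large to render, here is a hedged sketch of how the input region is presumably assembled: one `intervalAround` per coordinate, each centered at 1/2 with radius 1/10. The pairing into a product type and the `prec` value are assumptions:

```lean
-- Illustrative sketch only: the region is the box [0.4, 0.6] × [0.4, 0.6],
-- built as one dyadic interval per input coordinate.
def inputRegionSketch :=
  (LeanCert.Examples.NeuralNet.intervalAround (1/2) (1/10) (by norm_num) 30,
   LeanCert.Examples.NeuralNet.intervalAround (1/2) (1/10) (by norm_num) 30)
```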
Verification #
Compute the output interval for the example network
Equations
Instances For
Robustness Certificate #
Main robustness theorem: For any real input (x, y) within ε = 0.1 of (0.5, 0.5), the network output is contained in the computed output interval.
This is a verified robustness certificate: it proves that small perturbations to the input cannot cause the output to leave the computed bounds.
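The full theorem statement is not reproduced on this page, but schematically it has the following shape. The names `netOutput` and `outputInterval`, the membership notation, and the proof method named in the comment are assumptions about the development, not its literal text:

```lean
-- Schematic shape only; identifiers here are illustrative assumptions.
-- For every real input within ε = 0.1 of (0.5, 0.5), the real-valued
-- network output lies inside the interval computed by interval arithmetic.
theorem robustness_certificate_sketch (x y : ℝ)
    (hx : |x - 1/2| ≤ 1/10) (hy : |y - 1/2| ≤ 1/10) :
    netOutput x y ∈ outputInterval := by
  sorry -- in the actual development this is closed by verified evaluation
```

The key point is the quantifier structure: the hypotheses range over all real inputs in the box, while the conclusion is membership in a single concretely computed interval.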
Interpretation #
The theorem `robustness_certificate` proves that for ANY real values x, y satisfying |x - 0.5| ≤ 0.1 and |y - 0.5| ≤ 0.1, the network output is guaranteed to be within the computed bounds.
This is a formal verification of robustness - not just an empirical test, but a mathematical proof that holds for ALL inputs in the region.
Applications:
- Certified defense against adversarial examples
- Safety verification for neural network controllers
- Proving bounds on neural network outputs for formal verification