Four ways differential equations can make neural networks smarter, from smoothing features to adaptive depth.
Standard CNNs are powerful, but many of their design choices are arbitrary. What if we borrowed rules from physics, the equations that describe how heat spreads and how fluids flow, and plugged them directly into the learning pipeline?
This work tests four distinct touchpoints where differential equations can interact with a ResNet-18 baseline. Each one is an independent ablation. Together, they tell a richer story than any single trick could.
A residual block computes x + f(x), which is exactly one forward-Euler step of the ODE dx/dt = f(x) with step size 1. This connection is the foundation of the entire project.
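The ResNet-to-ODE correspondence can be checked in a few lines. This is a minimal sketch with a toy residual function (tanh stands in for a learned layer); the names `residual_block` and `euler_step` are illustrative, not from the project's codebase:

```python
import numpy as np

def residual_block(x, f):
    # Standard ResNet update: identity shortcut plus learned residual.
    return x + f(x)

def euler_step(x, f, h):
    # One forward-Euler step of dx/dt = f(x) with step size h.
    return x + h * f(x)

# Toy "layer": any smooth function of the features.
f = lambda x: np.tanh(x)
x = np.array([0.5, -1.0, 2.0])

# With h = 1, the ResNet update and the Euler step coincide exactly.
assert np.allclose(residual_block(x, f), euler_step(x, f, h=1.0))
```

With step size 1 the two updates are the same computation, which is why shrinking the step (or letting a solver choose it) gives a continuous-depth generalization of the block.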
Test whether diffusion-based feature smoothing gives a baseline ResNet-18 measurably better accuracy.
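Diffusion-based smoothing here means running the heat equation on a feature map for a few explicit steps. A minimal sketch, assuming a simple 5-point Laplacian with replicate (zero-flux) padding; the random array stands in for one channel of a ResNet feature map:

```python
import numpy as np

def laplacian(u):
    # 5-point stencil; edge padding gives zero-flux (Neumann) boundaries.
    p = np.pad(u, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * u

def diffuse(u, alpha=0.2, steps=10):
    # Explicit heat-equation steps: u <- u + alpha * Laplacian(u).
    # alpha <= 0.25 keeps the explicit 2D scheme numerically stable.
    u = u.copy()
    for _ in range(steps):
        u = u + alpha * laplacian(u)
    return u

rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 16))  # stand-in for a feature-map channel
smooth = diffuse(feat)

# Diffusion removes high-frequency energy, so the variance strictly drops.
assert smooth.var() < feat.var()
```

In the actual experiment this smoothing would be applied inside the network, with accuracy compared against the unsmoothed baseline.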
Visualize and quantify the difference between sharp, standard attention vs. PDE-smoothed attention across image classes.
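One way to quantify "sharp versus smoothed" attention is total variation: the summed absolute difference between neighboring attention weights. A sketch with hypothetical 8x8 attention maps (a one-hot peak versus a soft Gaussian-like bump); the maps and the metric choice are illustrative assumptions, not the project's exact measure:

```python
import numpy as np

def total_variation(a):
    # Sum of absolute differences between neighboring entries;
    # lower total variation means a smoother attention map.
    return np.abs(np.diff(a, axis=0)).sum() + np.abs(np.diff(a, axis=1)).sum()

# Hypothetical attention maps: a sharp one-hot peak vs a soft bump.
sharp = np.zeros((8, 8))
sharp[3, 4] = 1.0
yy, xx = np.mgrid[0:8, 0:8]
soft = np.exp(-((yy - 3) ** 2 + (xx - 4) ** 2) / 8.0)
soft /= soft.sum()  # normalize to a valid attention distribution

# The PDE-smoothed map should score lower on this metric.
assert total_variation(soft) < total_variation(sharp)
```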
Measure accuracy under noise and blur perturbations with and without PDE-driven training augmentation.
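The robustness protocol can be sketched as: define perturbations (additive noise, heat-equation blur), then measure accuracy on perturbed inputs. Everything below is a toy stand-in, assuming a sign-of-the-mean classifier in place of ResNet-18 and synthetic images:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x, sigma=0.1):
    # Additive Gaussian noise perturbation.
    return x + rng.normal(0.0, sigma, x.shape)

def blur(x, steps=3, alpha=0.2):
    # Blur via explicit heat-equation steps (5-point Laplacian stencil).
    for _ in range(steps):
        p = np.pad(x, 1, mode="edge")
        lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * x
        x = x + alpha * lap
    return x

def accuracy_under(perturb, model, images, labels):
    # Accuracy of `model` (a hypothetical predict function) on perturbed inputs.
    preds = [model(perturb(im)) for im in images]
    return float(np.mean([p == y for p, y in zip(preds, labels)]))

# Toy "model": classify by sign of the mean pixel (stands in for ResNet-18).
model = lambda im: int(im.mean() > 0)
labels = [0, 1, 0, 1]
images = [rng.standard_normal((8, 8)) + (3 if y else -3) for y in labels]

clean = accuracy_under(lambda x: x, model, images, labels)
noisy = accuracy_under(add_noise, model, images, labels)
blurred = accuracy_under(blur, model, images, labels)
```

The real experiment swaps in the trained networks and real test images, then compares the noisy and blurred accuracies with and without the PDE augmentation at training time.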
Compare accuracy vs. NFEs (number of function evaluations) between a standard ResNet block and its ODE replacement.
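The NFE comparison makes sense because a standard residual block costs exactly one evaluation of f, while an ODE block costs one per solver step. A minimal sketch with a fixed-step forward-Euler solver; `ODEBlock` and its counter are illustrative, not the project's implementation (adaptive solvers such as those in `torchdiffeq` would make the NFE count input-dependent):

```python
import numpy as np

class ODEBlock:
    # Drop-in replacement for a residual block: integrates dx/dt = f(x)
    # over t in [0, 1] with fixed-step forward Euler, counting NFEs.
    def __init__(self, f, num_steps=4):
        self.f = f
        self.num_steps = num_steps
        self.nfe = 0  # number of function evaluations so far

    def forward(self, x):
        h = 1.0 / self.num_steps
        for _ in range(self.num_steps):
            x = x + h * self.f(x)
            self.nfe += 1
        return x

f = lambda x: np.tanh(x)
block = ODEBlock(f, num_steps=4)
out = block.forward(np.array([0.5, -1.0]))

# A standard ResNet block costs 1 NFE; this block costs num_steps of them.
assert block.nfe == 4
```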
Interested in this research? Reach out.