Nature & Math
Why does the universe speak mathematics — and what does it mean to model it?
I grew up fascinated by how things work. Science and engineering were not subjects to me — they were just the way I saw the world. I have a photo of myself at age seven, standing in front of a mechanical engineering building, already certain that was where I was headed. It was not a casual interest. It was a direction.
Getting there required work. Good universities are competitive, and I learned early that effort was the one variable I could control. I found a pattern that worked: attend every lecture, take careful notes, work through the practice problems and homework sets until the solution paths feel automatic. By exam day, I had usually seen every type of question that could appear. I aced most of them. By any external measure, I was a successful engineering student.
But I had learned to recognise patterns, not to understand systems. I could solve the heat equation. I could not tell you why it has the form it does, what makes it fundamentally different from the wave equation, or what physical intuition should tell you before you even pick up a pencil. The big picture — the why behind the machinery — was never taught. Or if it was, it was buried under enough technical detail that I missed it.
This gap did not matter much in coursework. It mattered enormously when I started a PhD. Research is not pattern recognition. You are working on something that has not been solved — there is no practice problem to study, no answer key to check against. Scientific work requires pondering: sitting with a problem long enough that you develop a feel for it, a sense of which direction is promising and which is a dead end. That feel comes from intuition, and intuition comes from understanding things deeply, not from memorising their solutions.
This series is my attempt to build that intuition — in numerical methods, in the equations that model the physical world, in the connection between the two. I am writing it partly to solidify my own understanding, and partly because I suspect I am not the only one who went through an engineering degree and came out the other side technically competent but philosophically lost.
If that sounds familiar, this is for you too.
Why does the universe speak mathematics?
In 1623, Galileo Galilei wrote that the book of nature "is written in the language of mathematics, and its characters are triangles, circles, and other geometric figures." More than three hundred years later, in 1960, the physicist Eugene Wigner called it "the unreasonable effectiveness of mathematics in the natural sciences" — the baffling fact that abstract structures invented by human minds, often with no practical purpose in sight, turn out to describe physical reality with startling precision.
This is genuinely strange. Mathematics is a creation of the human mind. A partial differential equation is a set of symbols on a page. There is no obvious reason why those symbols should predict the orbit of Neptune, the temperature inside a jet engine, or the spread of an epidemic. Yet they do. And they do it so well that we build bridges, fly aircraft, and design drugs on the strength of it.
There are three honest answers to Wigner's question, and I don't think we need to pick one:
- The Platonic view: mathematical structure is woven into the fabric of reality. We don't invent it — we discover it. The equation was always there; we just finally wrote it down.
- The pragmatic view: survivorship bias. We have tried thousands of mathematical frameworks on the physical world. We kept the ones that worked and forgot the ones that didn't. Of course the survivors look like they fit.
- The evolutionary view: brains that found patterns — regularities in the environment — survived. The patterns we are good at finding are precisely the ones that exist at human-accessible scales. Mathematics is pattern-finding made rigorous.
The mystery doesn't need to be resolved. It's part of what makes this worth studying.
What is a model?
The statistician George Box put it plainly: "All models are wrong, but some are useful." A model is not reality. It is a deliberate simplification of reality — a map, not the territory. A map of a city omits the texture of the roads, the smell of the bakery on the corner, the exact height of every building. It is wrong in all those ways. But it still gets you from the train station to the museum.
The art of mathematical modelling is the art of choosing what to ignore. A billiard ball can be treated as a perfect sphere with uniform density — that model is accurate enough to predict how it rolls. The same approximation applied to a crumpled piece of paper fails instantly. The model did not become wrong; it was always wrong. What changed is whether the ignored details matter for the question being asked.
Consider the Moon. Depending on what you want to know, you might model it as:
- A point mass — enough to compute its orbit around the Earth to engineering precision. Newton did this in 1687.
- A rigid sphere — needed to explain the tides, which depend on the Moon's gravitational gradient across the Earth's diameter.
- A deformable body — needed to understand how the Moon's own shape responds to the Earth's tidal forces over geological time.
Each model is "wrong." Each model is useful for a specific question. Choosing the right level of description for the question at hand — that is the engineering mindset.
The anatomy of a mathematical model
Every mathematical model has three components. Once you see this structure, you will recognise it everywhere — in fluid dynamics, in circuit theory, in population biology, in financial mathematics.
- State variables — the quantities that describe the current condition of the system. Position and velocity for a projectile. Temperature for a heated rod. Concentration for a chemical reaction.
- Parameters — constants that characterise the system but don't evolve with it. Gravitational acceleration $g$. Thermal diffusivity $\alpha$. A reaction rate constant $k$.
- Governing equations — the rules that say how the state evolves. Usually derived from a conservation law: conservation of energy, momentum, mass, charge.
Take the simplest possible example: a ball thrown at angle $\theta$ with speed $v_0$. The state is $(x, y, v_x, v_y)$. The parameter is $g = 9.81\ \text{m/s}^2$. The governing equation comes from Newton's second law, $F = ma$:
$\ddot{x} = 0, \qquad \ddot{y} = -g$
Integrate twice, apply initial conditions, and you get the familiar parabola. This is a model. It ignores air resistance, the rotation of the Earth, the elasticity of the ball. For a slow throw in a room, those omissions are fine. For a long-range artillery shell or a baseball pitch, they are not.
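Carrying the integration through explicitly, with the launch point at the origin:

$x(t) = v_0 \cos\theta \, t, \qquad y(t) = v_0 \sin\theta \, t - \tfrac{1}{2} g t^2$

Eliminating $t$ gives $y = x \tan\theta - \dfrac{g x^2}{2 v_0^2 \cos^2\theta}$, a parabola, with range $R = v_0^2 \sin(2\theta)/g$ on level ground.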
The model ladder
What happens when we stop ignoring things? Below is the same projectile described by three levels of model. Each adds one layer of physics. Watch how the predicted trajectory changes — and notice that each level requires new parameters, new assumptions, and more computation.
- Level 1 — Point mass, no air: $\ddot{y} = -g$. A perfect parabola.
- Level 2 — Quadratic drag: $m\ddot{\mathbf{r}} = m\mathbf{g} - \tfrac{1}{2}\rho C_d A\,|\mathbf{v}|\,\mathbf{v}$. Drag dissipates energy throughout the flight, so the trajectory loses its symmetry: the descent is steeper and slower than the ascent, and the range shrinks.
- Level 3 — Magnus effect (spin): a spinning ball deflects sideways. $\mathbf{F}_\text{Magnus} = C_L\,(\boldsymbol{\omega} \times \mathbf{v})$. This is why a football curves.
There is no "correct" model. There is only the model that is appropriate for the precision you need and the cost you can afford.
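The first two rungs of the ladder can be compared numerically. The sketch below (Python, with invented baseball-like values for mass, drag coefficient, and cross-section, and a plain forward-Euler integrator) computes the range with and without drag:

```python
import math

def simulate(v0, theta, dt=1e-3, drag=False,
             m=0.145, rho=1.225, Cd=0.47, A=0.0042, g=9.81):
    """Forward-Euler integration of the projectile until it returns to y = 0.
    drag=False -> Level 1 (point mass); drag=True -> Level 2 (quadratic drag)."""
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while True:
        ax, ay = 0.0, -g
        if drag:
            v = math.hypot(vx, vy)
            k = 0.5 * rho * Cd * A / m      # drag acceleration per unit speed^2
            ax -= k * v * vx
            ay -= k * v * vy
        x += vx * dt; y += vy * dt          # advance position with old velocity
        vx += ax * dt; vy += ay * dt        # then advance velocity
        if y <= 0.0 and vy < 0.0:
            return x                        # horizontal range at landing

theta = math.radians(45)
r1 = simulate(30.0, theta, drag=False)      # Level 1: vacuum parabola
r2 = simulate(30.0, theta, drag=True)       # Level 2: quadratic drag
print(f"range without drag: {r1:.1f} m, with drag: {r2:.1f} m")
```

With these assumed values, drag removes a large fraction of the vacuum range. Forward Euler is the crudest possible integrator and is used here only to keep the sketch short; the modelling choice (Level 1 vs Level 2) changes the answer far more than the integrator does.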
Digital twins
A simulation is something you run once and walk away from. A digital twin is something that runs alongside reality — continuously fed with sensor data, continuously correcting itself against what the physical system is actually doing.
The concept emerged in the early 2000s in Michael Grieves's work on product lifecycle management, and NASA later popularised the term for spacecraft and aircraft structural health monitoring. The idea: build a mathematical model of a physical asset so faithful that the virtual copy and the real object evolve together. When the real engine develops a hairline crack, the twin's stress field shifts to match — and predicts when the crack will become dangerous before any human inspector would notice.
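The correction loop can be caricatured in a few lines. In this toy sketch (all values invented), the physical asset is an object cooling at an unknown rate $k_\text{true}$; the twin holds an estimate $k_\text{twin}$ and nudges it towards agreement each time a noisy temperature reading arrives. Real twins use proper state estimators such as Kalman filters, but the shape of the loop is the same: predict, compare with the sensor, correct.

```python
import math, random

T_env, T0 = 20.0, 90.0   # ambient and initial temperature (assumed known)
k_true = 0.30            # the real object's cooling rate (unknown to the twin)
k_twin = 0.10            # twin's initial, wrong guess
lr = 1e-6                # step size for the correction

random.seed(0)
for step in range(2000):
    t = random.uniform(0.0, 10.0)                     # time of the sensor reading
    # noisy sensor reading from the real object (Newton cooling + noise)
    sensed = T_env + (T0 - T_env) * math.exp(-k_true * t) + random.gauss(0, 0.2)
    # twin's prediction with its current parameter estimate
    predicted = T_env + (T0 - T_env) * math.exp(-k_twin * t)
    # gradient of the squared mismatch (predicted - sensed)^2 w.r.t. k_twin
    dpred_dk = -(T0 - T_env) * t * math.exp(-k_twin * t)
    k_twin -= lr * 2.0 * (predicted - sensed) * dpred_dk

print(f"true k = {k_true}, twin's estimate = {k_twin:.3f}")
```

After a few hundred readings the estimate settles near the true value, and from then on the twin's predictions track the physical object rather than its designer's initial assumptions.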
Today, digital twins operate at every scale:
- Component scale: a turbine blade, a bearing, a battery cell. The twin predicts remaining useful life and schedules maintenance before failure.
- System scale: a factory floor, an aircraft, a wind farm. The twin optimises control decisions in real time.
- City scale: Singapore's Virtual Singapore project maps every building, road, and utility line into a living model used for urban planning and emergency response.
- Human scale: the Living Heart Project (Dassault Systèmes) builds patient-specific cardiac models from medical imaging. Surgeons rehearse on the twin before touching the patient.
The quality of a digital twin is bounded by the quality of its underlying mathematical model. A twin built on a crude model gives crude predictions, no matter how many sensors you plug in. This is why the equations matter — and why understanding them at a deep level is not an academic exercise.
Surrogate models
There is a tension at the heart of mathematical modelling: the more accurate the model, the more expensive it is to run. A full computational fluid dynamics simulation of airflow over a wing might take hours on a supercomputer. A digital twin that needs to make real-time control decisions cannot wait hours. Something has to give.
The answer is a surrogate model (also called an emulator or metamodel): a cheap approximation that is trained to mimic the expensive model over the parameter range of interest.
The process:
- Run the expensive simulator at a carefully chosen set of input parameters — a design of experiments.
- Train a surrogate (a polynomial, a Gaussian process, a neural network) on those input-output pairs.
- Use the surrogate where you need fast predictions; fall back to the full model for validation.
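Here is the whole recipe in miniature (Python with NumPy; the "expensive" simulator is a stand-in function, and the surrogate is a simple polynomial fit rather than a Gaussian process):

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a costly simulator (imagine hours of CFD per call)."""
    return np.exp(-x) * np.sin(2.0 * x)

# 1. Design of experiments: a small set of chosen inputs.
x_design = np.linspace(0.0, 3.0, 12)
y_design = expensive_model(x_design)

# 2. Train a cheap surrogate on the input-output pairs.
coeffs = np.polyfit(x_design, y_design, deg=7)
surrogate = np.poly1d(coeffs)

# 3. Use the surrogate at new inputs; validate against the full model.
x_test = np.linspace(0.1, 2.9, 200)        # off-design points
error = np.max(np.abs(surrogate(x_test) - expensive_model(x_test)))
print(f"worst surrogate error on [0.1, 2.9]: {error:.2e}")
```

A Gaussian-process surrogate would additionally report an uncertainty at each prediction, which is often what decides when to fall back to the full model for validation.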
Surrogates sit on a spectrum between two extremes. At one end are purely physics-based models, derived from first principles: trustworthy and interpretable, but expensive to evaluate. At the other end are purely data-driven models, fitted with no knowledge of the governing equations: cheap to evaluate, but unreliable outside the data they were trained on.

The most interesting current research lives in the middle: physics-informed neural networks (PINNs), which incorporate known governing equations as constraints during training. The network learns to satisfy the PDE and the data simultaneously — combining the expressiveness of deep learning with the structure of physics.
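The principle can be illustrated without a neural network at all. In the toy sketch below (all values invented), noisy height measurements of a falling object are fitted with a quadratic ansatz $y(t) = a t^2 + b t + c$, while one extra weighted equation penalises violation of the governing law $\ddot{y} = -g$ (which for this ansatz reduces to $2a + g = 0$). A PINN applies the same idea with a neural network as the ansatz and the PDE residual computed by automatic differentiation:

```python
import numpy as np

g = 9.81
rng = np.random.default_rng(0)

# Sparse, noisy "sensor" data from a true free fall: y = 50 - 0.5 g t^2
t_data = np.array([0.0, 0.5, 1.0, 2.5])
y_data = 50.0 - 0.5 * g * t_data**2 + rng.normal(0.0, 0.5, t_data.size)

# Ansatz y(t) = a t^2 + b t + c.  The data rows fit the measurements;
# one extra weighted row enforces the physics residual 2a + g = 0,
# playing the role of the PDE-residual term in a PINN loss.
w = 10.0                                   # weight on the physics constraint
A = np.vstack([np.column_stack([t_data**2, t_data, np.ones_like(t_data)]),
               [2.0 * w, 0.0, 0.0]])
rhs = np.concatenate([y_data, [-g * w]])

a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
print(f"recovered acceleration y'' = {2 * a:.2f} m/s^2 (true: {-g})")
```

With only four noisy points, a plain least-squares fit can easily recover a wrong curvature; the physics row anchors the fit to the known equation, which is exactly the trade the middle of the spectrum is making.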
We will return to all of this. The chapters that follow build the mathematical machinery — the PDEs, their classification, their solutions — that makes all of the above possible. The goal is not to memorise equations. It is to develop a mental model of why different types of equations describe different types of phenomena, and what that tells you about the physical system before you even start computing.