From Classical Echoes to Quantum Dynamics: A Reader's Guide
What this series is, who it's for, and how to use it
Most medical technology is still built on the idea that health lives in single numbers: a lab value, a blood pressure reading, an ECG strip printed once and filed. The body does not work that way.
Almost everything that matters is changing over time. Heart signals, brain waves, glucose, drug levels, speech, molecular pathways — these are stories unfolding as sequences, not isolated snapshots. If you want to understand what is happening to a patient, or a compound, or a cell line, you need to read the whole trajectory, not just one point on it.
This is a six-part series that asks a simple but uncomfortable question: what kind of computing do we actually need for data that looks like that?
I walk through a family of methods called reservoir computing — and their quantum cousins — that were built from the start for long, noisy, drifting streams of data. Along the way I compare them to the tools everyone reaches for by default: ARIMA, LSTMs, Transformers. I look at where each one really wins and where each one fails once you care about the messy constraints of medicine: small datasets, non-stationarity, regulation, and hardware that needs to run in a wearable, not a data centre.
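To make "reservoir computing" concrete before the series proper: the idea is a large, fixed, random recurrent network that transforms an input stream into a rich internal state, with only a simple linear readout trained on top. The sketch below is a minimal echo state network in NumPy predicting a toy drifting signal one step ahead; all sizes and scalings (200 units, spectral radius 0.9, ridge constant) are illustrative assumptions, not tuned values from any paper in this series.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "long, noisy, drifting" stream: a noisy sine whose frequency slowly shifts.
t = np.arange(2000)
signal = np.sin(0.05 * t * (1 + 0.0002 * t)) + 0.05 * rng.standard_normal(t.size)

# --- Echo state network (a classical reservoir; sizes are illustrative) ---
n_res = 200
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))          # fixed random input weights
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius < 1 (echo state property)

# Drive the fixed reservoir with the input; collect its states.
states = np.zeros((t.size, n_res))
x = np.zeros(n_res)
for i, u in enumerate(signal):
    x = np.tanh(W_in[:, 0] * u + W @ x)
    states[i] = x

# Train ONLY the linear readout (ridge regression) for one-step-ahead prediction.
washout, ridge = 100, 1e-6
X, y = states[washout:-1], signal[washout + 1:]
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = states[:-1] @ W_out
mse = np.mean((pred[washout:] - signal[washout + 1:]) ** 2)
print(f"one-step MSE: {mse:.4f}")
```

The point of the design is what stays fixed: the recurrent weights are never trained, so the only optimisation is a linear solve — which is exactly why these methods suit small, drifting clinical datasets and why the quantum variants in Parts 2–5 can use uncontrolled physical dynamics as the reservoir.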
You do not need to follow every equation to get value from this. If you are a clinician, the goal is a better feel for which models are structurally suited to your signals, and why some “state‑of‑the‑art” systems fall apart when the ward, patient mix, or protocol shifts. If you are in pharma or biotech, it is a map for when deep learning is worth its cost, when smaller reservoir methods are the smarter first move, and where quantum devices might realistically enter drug discovery over the next decade. If you are simply trying to see past the hype around “AI in medicine,” think of it as a guided tour through the actual landscape — with a consistent separation between what we know, what works, and what is still a research bet.
The series runs across six parts. Pick the one that matches where you are:
Part 1 — Why biological time series are a different kind of problem
Part 2 — What the quantum upgrade actually means (Hilbert space, entanglement, features)
Part 3 — Every major method compared on the dimensions that matter in medicine: ARIMA, LSTMs, Transformers, classical reservoirs, and quantum reservoirs on the same biological time-series problems
Part 4 — Why today’s noisy quantum hardware is better for this than tomorrow’s perfect machines
Part 5 — What classical and quantum reservoir computing can actually do in the clinic today
Part 6 — The honest research frontier: what we still don’t know, and why that’s worth paying attention to
Start with Part 1 if you are new to this, or go straight to Part 3 if you already work with clinical time-series data.
For those interested in the classical baseline: ARIMA and time series
The old statistical workhorse we use as a reference point in this series: what ARIMA is, the core equation, where it works, and where it fails on messy biological data. See the ARIMA companion article.
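The companion covers ARIMA in full; as a taste of why it makes a good baseline, here is the autoregressive core of the model (the "AR" in ARIMA, with the I and MA parts omitted) fitted by plain least squares in NumPy. The simulated AR(2) coefficients here are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(2) process: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + noise.
n = 5000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

# Fit the AR(2) coefficients by ordinary least squares on lagged values.
X = np.column_stack([x[1:-1], x[:-2]])   # lag-1 and lag-2 columns
y = x[2:]
phi, *_ = np.linalg.lstsq(X, y, rcond=None)
print(phi)  # close to [0.6, -0.3]
```

When the data really are stationary and linear, a fit this simple is hard to beat — which is precisely the assumption that messy biological signals violate, and the reason the series keeps ARIMA around as an honest reference point.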
Enjoy!
Further Reading
Gauthier, D.J. et al. (2021). “Next generation reservoir computing.” Nature Communications, 12, 5564. https://doi.org/10.1038/s41467-021-25801-2
Nakajima, K. & Fischer, I. (Eds.) (2021). Reservoir Computing: Theory, Physical Implementations, and Applications. Springer. https://link.springer.com/book/10.1007/978-981-13-1687-6
Quantum Reservoir and Data Selection — QuaRDS Project (LMU Munich & Merck KGaA, Darmstadt). https://qarlab.de/en/quards-en/
Tanaka, G. et al. (2019). “Recent advances in physical reservoir computing: A review.” Neural Networks, 115, 100–123. https://doi.org/10.1016/j.neunet.2019.03.005


