
 
User avatar
DavidO
Posts: 26
Joined: January 18th, 2017, 3:19 pm

Re: David Orrell's q-variance paper in Wilmott

August 23rd, 2025, 1:34 pm

No thanks. One problem is that you seem to be confusing the 1/2 coefficient in q-variance with lambda. The first comes from a simple energy balance – perturbing the oscillator by an amount z increases the energy and therefore the variance by this amount. The QVar app is testing this relation. Lambda, in contrast, relates to the q-distribution. Its value of 1/2 is required for consistency with q-variance, but it's not the same thing.

The other thing you are missing is that the q-distribution comes about by considering an ensemble of oscillators. The pdf for each oscillator is non-Gaussian, but when you sum over them the resulting distribution converges to a sum of Gaussians. You can also check it in a different way by showing that the q-distribution is again consistent with q-variance.
 
User avatar
katastrofa
Posts: 7949
Joined: August 16th, 2007, 5:36 am
Location: Event Horizon

Re: David Orrell's q-variance paper in Wilmott

August 25th, 2025, 7:49 pm

No thanks. One problem is that you seem to be confusing the 1/2 coefficient in q-variance with lambda. The first comes from a simple energy balance – perturbing the oscillator by an amount z increases the energy and therefore the variance by this amount. The QVar app is testing this relation. Lambda, in contrast, relates to the q-distribution. Its value of 1/2 is required for consistency with q-variance, but it's not the same thing.

I'm not mixing them up. The 1/2 is the slope in your q‑variance rule, V(z) = sigma^2 + z^2/2. Lambda is the Poisson mean in your q‑distribution, i.e. the Poisson‑weighted sum of Gaussians with component variances sigma^2 * (1+2n).

The mechanism is the issue: in a genuine coherent state the position variance is *constant*; increasing the amplitude raises energy, *not* positional variance. So the + z^2/2 term does not come from coherent‑state "energy balance". In your own articles you then fix lambda = 0.5 so that the q‑distribution and the q‑variance rule are "consistent". I couldn't find anywhere what you mean by this consistency, but I'd guess it's a simple variance (second‑moment) match between the two halves of your model. The unconditional variance of your Poisson‑Gaussian mixture is sigma^2 * (1 + 2 * lambda). If you also assert V(z) = sigma^2 + alpha * z^2 and require the average of that rule to equal the mixture's unconditional variance, you get alpha = 2 * lambda / (1 + 2 * lambda). Setting alpha = 1/2 therefore forces lambda = 1/2. That explains your "lambda = 0.5 is consistent with q‑variance".

But this only goes one way: choosing lambda = 1/2 does not make the conditional variance law have slope 1/2 when you derive it from the mixture by proper conditioning; the Bayes‑derived small‑z slope from your Poisson‑Gaussian mixture is never 1/2 for any lambda (it peaks around 0.4 near lambda = 2). This also shows why keeping the 1/2 slope while changing lambda elsewhere breaks your consistency criterion: e.g. in one article you keep the 1/2 slope but use lambda =~ 0.25, which - under the consistency requirement alpha = 2 * lambda / (1 + 2 * lambda) - would imply a slope of 1/3, not 1/2. Maybe you could specify what the consistency means?
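To make both halves concrete, here is a minimal numerical sketch (mine, in Python; it assumes exactly the mixture and rule as written above, with sigma = 1 as a placeholder). It evaluates the moment-matching slope alpha = 2*lambda/(1+2*lambda) and the properly conditioned E[V | z], so you can read off the small-z slope yourself:

```python
# Sketch only: assumes the Poisson-Gaussian mixture and the quadratic rule exactly as above.
import numpy as np
from scipy.stats import norm, poisson

def matched_alpha(lam):
    """Slope forced by matching E[sigma^2 + alpha*z^2] to the mixture's
    unconditional variance sigma^2*(1 + 2*lam)."""
    return 2 * lam / (1 + 2 * lam)

def conditional_variance(z, lam, sigma=1.0, nmax=400):
    """Bayes posterior mean E[V | z], where n ~ Poisson(lam) and z | n ~ N(0, V_n)
    with V_n = sigma^2*(1 + 2n)."""
    n = np.arange(nmax)
    V = sigma**2 * (1 + 2 * n)
    w = poisson.pmf(n, lam) * norm.pdf(z, scale=np.sqrt(V))  # unnormalised posterior weights
    w /= w.sum()
    return float(np.dot(w, V))

print(matched_alpha(0.5))    # 0.5  -> the "lambda = 0.5 is consistent with q-variance" direction
print(matched_alpha(0.25))   # ~0.333 -> the slope-1/3 example above

# Small-z slope of the conditioned variance, d E[V|z] / d(z^2) near z = 0, for a few lambdas:
for lam in (0.5, 1.0, 2.0):
    eps = 1e-3
    print(lam, (conditional_variance(eps, lam) - conditional_variance(0.0, lam)) / eps**2)
```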


The other thing you are missing is that the q-distribution comes about by considering an ensemble of oscillators. The pdf for each oscillator is non-Gaussian, but when you sum over them the resulting distribution converges to a sum of Gaussians. You can also check it in a different way by showing that the q-distribution is again consistent with q-variance.


If each oscillator really has a non‑Gaussian position pdf (true for QHO energy eigenstates), then a classical ensemble over levels yields a sum of non‑Gaussians, not a Gaussian mixture. You only get a "sum of Gaussians" if you assume each component is already Gaussian (coherent states), or replace each non‑Gaussian eigenstate pdf by a Gaussian with variance sigma^2 * (1+2n). Your q‑distribution does the second: it is explicitly a Poisson‑weighted sum of Gaussians with lambda fixed at 1/2. That approximation is OK as a model, but it's just not a consequence of "ensemble of non‑Gaussian oscillators."

Since there are so many different "ensembles" in physics/statistics, to avoid talking past each other, I should clarify that I assume we're talking about a classical mixture across your windows: in each window a latent mode n is drawn; conditionally on n, z ~ N(0, sigma^2 * (1+2n)) - is that the case? Marginalising over n gives your q‑distribution. So it's not a quantum superposition or a CLT "sum" - just a standard mixture.
 
User avatar
DavidO
Posts: 26
Joined: January 18th, 2017, 3:19 pm

Re: David Orrell's q-variance paper in Wilmott

August 25th, 2025, 10:22 pm

Nice music! Let’s do q-variance first. The probability function of a coherent state is a normal distribution moving from side to side with amplitude x and period T. As you point out, the variance at any time is sigma^2. But what we use in the model is the variance over a complete period, which is sigma^2 + x^2/2. After scaling for time, this gives q-variance.

Another way to look at it is that we can’t measure the variance by sampling multiple points, but we can deduce it from the energy state, and the expected energy of the coherent state gives the same answer.
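A quick way to check the period-average claim numerically (just a sketch, taking the coherent-state mean as x * cos(2*pi*t/T), with placeholder values for sigma, x and T):

```python
# Sketch only: variance over one full period of a Gaussian whose mean oscillates with amplitude x.
import numpy as np

rng = np.random.default_rng(0)
sigma, x, T = 1.0, 2.0, 1.0                                    # placeholder values

t = rng.uniform(0.0, T, size=1_000_000)                        # times sampled over one period
pos = rng.normal(loc=x * np.cos(2 * np.pi * t / T), scale=sigma)

print(pos.var())             # ~ sigma^2 + x^2/2 = 3.0
print(sigma**2 + x**2 / 2)   # the q-variance expression
```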

That’s q-variance. The q-distribution is derived by saying that we represent the market using a number of identical and entangled oscillators acting in series. Again we can’t measure variance by sampling lots of points, since this would collapse the wave function and lose all the structure; however we can deduce the variance by measuring the energy level. This collapses the wave function so the measurements of position are uncorrelated. The distribution corresponding to each level is Gaussian, because we are summing over an uncorrelated ensemble. The net result is a Poisson-weighted sum of Gaussians.

As discussed in e.g. the Journal of Derivatives article (not sure what papers you are using, but they may be a bit out-of-date) a corollary of q-variance is that the total variance is twice the minimum variance, and when applied to the q-distribution this gives you lambda=0.5 (there are a couple of other checks, but this is the easiest). 

A numerical example, also discussed in the JoD article, is given by the jump diffusion model in the QVar app. For periods of length T=1 week the model satisfies both q-variance and the q-distribution. The quantum model does the same but for all periods T.
 
User avatar
Paul
Posts: 7055
Joined: July 20th, 2001, 3:28 pm

Re: David Orrell's q-variance paper in Wilmott

August 26th, 2025, 4:14 pm

kat is one of the few people on the planet who is capable of understanding this model, and willing to understand this model.
 
User avatar
DavidO
Posts: 26
Joined: January 18th, 2017, 3:19 pm

Re: David Orrell's q-variance paper in Wilmott

August 26th, 2025, 5:36 pm

This might be a bit of a side-topic then, but one question – which would also shed light on the q-distribution – is how to obtain a simulated time series of daily data which respects q-variance. The quantum model gives log price changes and variance over a period T such as a week or month, but does not break it down into steps. Jump diffusion works for a single period length T, but how do you get something which works for any T? (Other than use actual market data, that is.) 

I put up a new version of the QVar app which plots the q-distribution as well.
 
User avatar
jasonbell
Topic Author
Posts: 347
Joined: May 6th, 2022, 4:16 pm
Location: Limavady, NI, UK
Contact:

Re: David Orrell's q-variance paper in Wilmott

August 27th, 2025, 9:23 am

Once I understand all this a little (or lot) more, I am intrigued to see how it can be used on crypto markets. 

I keep on learning day by day, beats any startup I did. 
Website: https://jasonbelldata.com
Linkedin: https://www.linkedin.com/in/jasonbelldata/
Author of Machine Learning: Hands on for Developers and Technical Professionals (Wiley).
Contributor: Machine Learning in the City (Wiley).
 
User avatar
DavidO
Posts: 26
Joined: January 18th, 2017, 3:19 pm

Re: David Orrell's q-variance paper in Wilmott

August 27th, 2025, 2:09 pm

Further to Paul’s comment, as someone who has dabbled in science communication, I find it fascinating how this model is perfectly poised to be very simple but understandable by almost no one. (This is based not on this chat – I thank everyone for bearing with it – but on several years experience.)

People with no background in quantum physics will usually be blocked by their assumption that quantum methods are incomprehensible (after all, this is what physicists themselves have long said). People with a physics background will usually be blocked by their related assumption that any such model is a flaky analogy based on subatomic particles or whatever.

Which is a shame, because using a wave function to model probability – which is what this boils down to – is not actually that hard or weird. (Non-physicists: if you can understand how a wave function is used to model the infinite well, which is an undergrad level problem, you are most of the way there – just think of transactions instead of particles.)

In a classical approach, you model price change as a random walk, and after a time T end up with a normal distribution. There are no real dynamics. In the quantum approach, you model price change over the whole period T using a QHO which is the quantum version of a spring. A feature of the QHO is that if you displace it by an amount x, this creates a disturbance (it’s literally a normal curve on a spring!) and the variance over the period increases by an amount x^2/2. After scaling appropriately for time, this gives you q-variance. That’s it.

The quantum model is therefore the simplest (first-order) way to incorporate probabilistic dynamics. The q-distribution is a little more complicated, but it follows from q-variance (if anyone can come up with another distribution that is consistent with q-variance let me know!).

A strange feature of q-variance is that there is no adjustable parameter for the quadratic term. Also it is supposed to hold for any period T. This means that the model should be easily falsifiable. An interesting feature of the model is that, present company aside, people don’t usually notice this! Instead the discussion focuses on abstract ideas or modelling preferences.

Now, I do get that for the reasons mentioned it is hard to engage with this model, but on the other hand, as seen with the QVar app (try selecting 4 stocks at random), q-variance and the q-distribution are empirical phenomena which are (a) easily tested, and (b) have obvious implications for quant finance. So to turn this around, the question at this point isn’t so much how to explain the model or debate the use of quantum methods, although I'm happy to do that. Instead the question for financial engineers is, how do you model these phenomena, and reconcile them with existing models? (I'll just leave these quantum tools here in case you need them ...)
 
User avatar
Paul
Posts: 7055
Joined: July 20th, 2001, 3:28 pm

Re: David Orrell's q-variance paper in Wilmott

August 27th, 2025, 6:44 pm

I want to know how to simulate all this. Could you plot the following two pictures?

1. Pick T = 20 days, say. Now plot variance over those 20 days vs z (as already done) but join the dots chronologically. And only do about 50 dots, or whatever is not too messy.

2. Same as above but instead of a rolling 20, keep start point fixed and do T = 10, then 11, 12, etc.

No idea what this will prove. Anyway…
 
User avatar
Paul
Posts: 7055
Joined: July 20th, 2001, 3:28 pm

Re: David Orrell's q-variance paper in Wilmott

August 27th, 2025, 9:31 pm

Also, plot var vs previous period’s z and vice versa.

Also, if there’s this relationship for period T and for period T+1, then what does that say about what happens at time T+1?
 
User avatar
DavidO
Posts: 26
Joined: January 18th, 2017, 3:19 pm

Re: David Orrell's q-variance paper in Wilmott

August 28th, 2025, 12:59 pm

Will add some diagnostics to the app to check this kind of thing ...
 
User avatar
katastrofa
Posts: 7949
Joined: August 16th, 2007, 5:36 am
Location: Event Horizon

Re: David Orrell's q-variance paper in Wilmott

August 30th, 2025, 10:51 am

Nice music! Let’s do q-variance first. The probability function of a coherent state is a normal distribution moving from side to side with amplitude x and period T. As you point out, the variance at any time is sigma^2. But what we use in the model is the variance over a complete period, which is sigma^2 + x^2/2. After scaling for time, this gives q-variance.

Another way to look at it is that we can’t measure the variance by sampling multiple points, but we can deduce it from the energy state, and the expected energy of the coherent state gives the same answer.
Quite the opposite: in QM you estimate a distribution by repeating the preparation/measurement. Collapse affects a single run, not your ability to build a histogram from many runs.

In markets there's no literal wavefunction anyway. Reading prices doesn't "collapse" anything. In fact a market already gives you repeated runs in practice: non-overlapping windows through time. And the observation is passive - simply reading a price doesn't change the law of the next return.

Finally, note that your own q-variance derivation time-averages over a period (to get sigma^2 + amplitude^2/2) - if time-sampling were illegitimate, that step wouldn't be allowed either.

That’s q-variance. The q-distribution is derived by saying that we represent the market using a number of identical and entangled oscillators acting in series. Again we can’t measure variance by sampling lots of points, since this would collapse the wave function and lose all the structure; however we can deduce the variance by measuring the energy level. This collapses the wave function so the measurements of position are uncorrelated.
Independence (uncorrelated draws) is about how you prepare runs -- reset between windows -- not about what collapse does within one run. Collapse updates the state in that run; it doesn't produce any independence across runs.

The distribution corresponding to each level is Gaussian, because we are summing over an uncorrelated ensemble. The net result is a Poisson-weighted sum of Gaussians.
That's definitely not true. In the harmonic‑oscillator, after an energy measurement the system is in level n and the position distribution is not Gaussian - it's a Gaussian multiplied by a squared Hermite polynomial. Repeating iid doesn't change that shape. The only way to get your "Poisson‑weighted sum of Gaussians" is to replace each level's true shape by a Gaussian whose variance scales like 1+2n. That is a classical modeling step, not a consequence of collapse, entanglement, or any other quantum doodah.
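To see the shape issue concretely, here's a small sketch (my own; dimensionless units with hbar = m = omega = 1, so the level-n variance is n + 1/2). It builds the exact |psi_n|^2 = H_n(u)^2 * exp(-u^2) / (2^n * n! * sqrt(pi)) and shows it has the right variance but a non-Gaussian shape (nonzero excess kurtosis for n >= 1):

```python
# Sketch only: exact QHO level-n position pdf vs a Gaussian substitute with the same variance
# (dimensionless units hbar = m = omega = 1, so Var_n = n + 1/2).
import numpy as np
from math import factorial, pi, sqrt
from scipy.special import eval_hermite    # physicists' Hermite polynomials H_n

def level_pdf(u, n):
    """|psi_n(u)|^2 for the n-th harmonic-oscillator energy eigenstate."""
    return eval_hermite(n, u)**2 * np.exp(-u**2) / (2**n * factorial(n) * sqrt(pi))

u = np.linspace(-12, 12, 20001)
du = u[1] - u[0]
for n in (0, 1, 2, 5):
    p = level_pdf(u, n)
    var = np.sum(u**2 * p) * du                     # equals n + 1/2
    exkurt = np.sum(u**4 * p) * du / var**2 - 3.0   # 0 for a Gaussian, nonzero for n >= 1
    print(n, round(var, 4), n + 0.5, round(exkurt, 4))
```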


So this model boils down to a classical normal variance‑mixture aka a compound Poisson-Gaussian jump‑diffusion over a window.
- draw a latent count N ~ Poisson(lambda)
- given N=n, draw the window return z ~ Normal(0, sigma^2 * (1+2n))
- marginalise over n to get a symmetric heavy‑tailed mixture
- assert the quadratic variance rule V(z) = sigma^2 + z^2/2; to stop that rule and the mixture from contradicting each other on average set lambda = 0.5

The quantum (QHO) analogy inspires two things: the coherent-state time-average identity (sigma^2 + amplitude^2/2) and the variance ladder 1+2n. But still the working distribution is built from Gaussian substitutes and a Poisson prior chosen for convenience. The entanglement/collapse rhetoric is not what makes the math go.
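Written out as a simulation, that reading of the model is a few lines (a sketch only, with lambda = 0.5 and sigma = 1 as placeholders):

```python
# Sketch only: the classical normal variance-mixture reading of the q-distribution.
import numpy as np

rng = np.random.default_rng(1)
lam, sigma, n_windows = 0.5, 1.0, 1_000_000

n = rng.poisson(lam, size=n_windows)                 # latent count per window
z = rng.normal(0.0, sigma * np.sqrt(1 + 2 * n))      # window return given the count

print(z.var())                      # ~ sigma^2 * (1 + 2*lam) = 2.0, i.e. "twice the minimum"
print(np.mean(z**4) / z.var()**2)   # > 3: heavier tails than a single Gaussian
```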


@"I find it fascinating how this model is perfectly poised to be very simple but understandable by almost no one. (This is based not on this chat – I thank everyone for bearing with it – but on several years experience.)

People with no background in quantum physics will usually be blocked by their assumption that quantum methods are incomprehensible (after all, this is what physicists themselves have long said). People with a physics background will usually be blocked by their related assumption that any such model is a flaky analogy based on subatomic particles or whatever."

It's indeed a narrow strip - and that's exactly where storytelling can outrun derivation ;-) I'm only asking that the story and the math line up.
 
User avatar
DavidO
Posts: 26
Joined: January 18th, 2017, 3:19 pm

Re: David Orrell's q-variance paper in Wilmott

September 1st, 2025, 2:52 pm

"The distribution corresponding to each level is Gaussian, because we are summing over an uncorrelated ensemble. The net result is a Poisson-weighted sum of Gaussians." That's definitely not true. In the harmonic‑oscillator, after an energy measurement the system is in level n and the position distribution is not Gaussian - it's a Gaussian multiplied by a squared Hermite polynomial. Repeating iid doesn't change that shape. The only way to get your "Poisson‑weighted sum of Gaussians" is to replace each level's true shape by a Gaussian whose variance scales like 1+2n. That is a classical modeling step, not a consequence of collapse, entanglement, or any other quantum doodah.
As clearly stated, the model is not “repeating iid”, it sums over the ensemble, which does give a Gaussian (it’s called the Central Limit Theorem).

Speaking of storytelling though, this points to the real problem. Most physicists have been trained in the story that quantum mechanics is in a sense literally true in physics, while for something like markets there is as you put it “no literal wavefunction anyway”. Any use of quantum models (“analogies”) outside physics therefore represents “quantum doodah” which must be debunked. This is why Murray Gell-Mann for example devoted an entire chapter of his 1994 book The Quark and the Jaguar to “Quantum Mechanics and Flapdoodle” (maybe that is the word you are thinking of?). However it does make it difficult to engage with a model like this in an open or productive way as one might with a model based on classical probability.

So since you apparently prefer the classical approach, let me leave you with this question: where is the classical model predicting the empirical phenomenon of q-variance? If there is none, then shouldn’t someone come up with one?
 
User avatar
DavidO
Posts: 26
Joined: January 18th, 2017, 3:19 pm

Re: David Orrell's q-variance paper in Wilmott

September 2nd, 2025, 8:04 pm

For those interested in looking at actual data, here is an updated version of the QVar app with some more bells and whistles on the Data tab: 

http://david-systemsforecasting.shinyapps.io/qvar
 
User avatar
katastrofa
Posts: 7949
Joined: August 16th, 2007, 5:36 am
Location: Event Horizon

Re: David Orrell's q-variance paper in Wilmott

September 13th, 2025, 10:28 am

As clearly stated, the model is not “repeating iid”, it sums over the ensemble, which does give a Gaussian (it’s called the Central Limit Theorem).
"Ensemble" suggests a mixture (adding pdfs with weights), but you invoke CLT - adding random variables, so I'm guessing that you're adding the random "shocks" within your n levels/windows to obtain the Gaussians. Next you mix them with Poisson weights. That's a purely classical construction which doesn't involve QM apparatus at any step.

I was trying to explain above that the QM or QHO theory doesn't mean what you think it means and don't work the way you use it in your narration.

where is the classical model predicting the empirical phenomenon of q-variance? If there is none, then shouldn’t someone come up with one?
I'm pretty sure many come up with similar classical models and why bother publishing them - assuming you are allowed. Off the top of my head:

The starting observation is that markets aren’t stationary. Liquidity, trading activity, new information, etc. drift throughout the day and from day to day. If you average over such different regimes, you'll obviously understate risk in busy periods and overstate it in calm ones. So you cut time into short, comparable windows with horizon T, assuming that within each window conditions are roughly constant and across windows they change.

So you have different layers of uncertainty in the model.

Inside a single window the return is still noisy: given the window’s volatility level, lots of tiny pushes up and down add up to a symmetric bell curve whose spread is V (variance), or mathematically z | V ~ N(0,V). No exotic shapes - we'll let the data tell us if any such emerge.

Across windows there is state uncertainty: before the window opens you don't know its volatility, but you know that it will be influenced by many small, positive pieces of information.
In mathematics we have the gamma distribution to describe such a sum of small positives. More precisely, the gamma would be for the precision (1/V), so for V it's inverse gamma - we put this prior assumption on V. So this prior encodes the state uncertainty about the window's variance before any data arrives.  https://en.m.wikipedia.org/wiki/Inverse ... stribution

So we have the likelihood p(z | V) ~ V^(−1/2) * exp(− z^2 / (2 V))

and the prior p(V) ~ V^(−alpha−1) * exp(− beta / V), with alpha and beta the shape and scale parameters. Those who've done some Bayesian modelling will like this family, because the inverse gamma is conjugate for a Normal with unknown variance - i.e. you get an immediate, closed-form update for V once you've seen the window's z.

To get the posterior after the data arrives, multiply likelihood × prior: p(V | z) ~ V^(−(alpha + 1 + 1/2)) * exp(− (beta + z^2/2) / V). It stays inverse-gamma, with updated shape alpha + 1/2 and scale beta + z^2/2.

That's the Bayes update.

What you report as the window's variance after seeing z is the posterior mean of V. A quick check on Wikipedia: for an inverse-gamma with shape a and scale b, the mean is b / (a − 1), so in our case E[V|z] = (beta + z^2/2) / (alpha − 1/2).

If you choose alpha = 3/2 and beta = sigma^2 (sigma^2 is the baseline variance at z = 0) you get E[V|z] = sigma^2 + 0.5 * z^2. That's exactly q-variance: the window's best variance estimate equals the baseline plus one half of the squared move. For a small move |z| you stay near the baseline; for a big move the window's variance grows quadratically. And alpha controls the "responsiveness" of the volatility curve to such updates, so you can tune it up or down by decreasing/increasing alpha, respectively (the intercept beta you set at the baseline variance). It's nowcasting - more subtle than simple methods like a moving-average variance, and it can be used to adjust your positions, risk limits, hedging, or whatever you people do there. I would always prefer to see the posterior interval for V (e.g. a 90% credible interval from the inverse-gamma quantiles) to see how much it widens with large moves.
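Here's a compact sketch of that update (assuming the likelihood and prior exactly as written, with alpha = 3/2 and beta = sigma^2 = 1 as the illustrative choices); it checks that the posterior mean reproduces sigma^2 + 0.5 * z^2 and prints the 90% credible interval for V:

```python
# Sketch only: conjugate inverse-gamma update for the window variance V, as described above.
from scipy.stats import invgamma

def posterior(z, alpha=1.5, beta=1.0):
    """Posterior for V after seeing a window return z: inverse-gamma with
    shape alpha + 1/2 and scale beta + z^2/2."""
    return invgamma(alpha + 0.5, scale=beta + 0.5 * z**2)

sigma2 = 1.0
for z in (0.0, 1.0, 3.0):
    post = posterior(z, alpha=1.5, beta=sigma2)
    # posterior mean vs the q-variance rule, plus a 90% credible interval for V
    print(z, post.mean(), sigma2 + 0.5 * z**2, post.interval(0.90))
```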

If you want to "zoom out" to many windows view and pool returns across heterogeneous windows, you integrate V out. The marginal law of z is Student-t with 2*alpha degrees of freedom and scale s^2 = beta/alpha. ADDED UPON EDIT: which has the fatter shoulders we see in data.
https://en.m.wikipedia.org/wiki/Normal- ... stribution

With alpha = 3/2 and beta = sigma^2, that’s a t with 3 degrees of freedom: heavier shoulders than a single normal (what you actually see in data). Its variance is 2 * sigma^2, which explains the practitioner’s corollary "total variance is twice the minimum" (the minimum of the conditional curve is sigma^2; the pooled variance is 2 * sigma^2).
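And a quick simulation of the pooled view under the same alpha = 3/2, beta = sigma^2 choice (again just a sketch): integrating V out should give a Student-t with 3 degrees of freedom and unconditional variance 2 * sigma^2:

```python
# Sketch only: pooled (marginal) distribution of z under the inverse-gamma model above.
import numpy as np
from scipy.stats import invgamma, kstest, t

rng = np.random.default_rng(2)
alpha, beta = 1.5, 1.0                        # beta = sigma^2 = 1

V = invgamma(alpha, scale=beta).rvs(size=500_000, random_state=rng)
z = rng.normal(0.0, np.sqrt(V))               # z | V ~ N(0, V); sampling V integrates it out

print(z.var())                                                      # ~ 2 * sigma^2
print(kstest(z, t(df=2 * alpha, scale=np.sqrt(beta / alpha)).cdf))  # consistent with t_3
```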

That's the Bayesian story, but a similarly simple one can be made around a Poisson count, a la market microstructure. No need for quantum oscillators, entanglements or such abstract theoretical concepts as collapse (btw, do you follow the Copenhagen interpretation or Many Worlds? ;-) )
Last edited by katastrofa on September 14th, 2025, 11:35 am, edited 1 time in total.
 
User avatar
DavidO
Posts: 26
Joined: January 18th, 2017, 3:19 pm

Re: David Orrell's q-variance paper in Wilmott

September 13th, 2025, 10:52 pm

Hmm, a word “suggests” something so you are “guessing” that I do something with “windows” to obtain Gaussians which are mixed with Poisson weights in a “purely classical construction”. Tell you what, just wait for the book to come out (some time next year – it is with a tiny little academic press and they are VERY slow).

Speaking of Bayesian probability though, your prior appears from what you have written to be that quantum methods are really only for physics (perhaps they represent deep ontological truths about subatomic matter). In this view, applications to other areas may serve as an analogy (a word rarely applied to classical finance models, despite the fact they often descend from physics), but in reality they are quantum woo (or “doodah”) which should be debunked.

This viewpoint was long the default, but has started to break down in recent years with the development of areas such as quantum cognition, quantum finance, quantum law, applications of quantum computing, and so on. So for the sake of discussion try looking at it instead from an applied mathematics perspective. The model is not the reality. Quantum is just a different sort of probability (it can be viewed as the next-simplest type after the classical sort) which can be applied to other systems where appropriate.

From this viewpoint, the test is whether a model can explain and predict a system in a parsimonious manner. Obviously it is possible to build a classical model of q-variance after the fact – you can build a model of anything. But your model uses two parameters, alpha and beta, to model quadratic variation over a single period T. The model therefore does not make falsifiable predictions; it just fits a curve. (Wouldn't it be easier to assume that variance depends on z in a symmetric fashion, so this is a Taylor expansion? Not as good a story though.)

You also imply this classical model is simpler because there is “No need for quantum oscillators, entanglements or such abstract theoretical concepts as collapse”. But the quantum model is based on tools encountered in second-year physics (at least that is where I learned them). In quantum probability, collapse is not a complex theoretical concept, it is just a projection. Nothing to do with many worlds or Copenhagen (remember quantum probability exists outside of physics). In fact the model here boils down to saying that the normal distribution of the classical model oscillates from side to side, increasing the variance. Not actually that hard.

From the (old) physics perspective, the quantum finance model is problematic because it represents a kind of appropriation of deep physical principles (which is not a real test). From an applied math perspective, what counts is that it is derived from a coherent argument and makes falsifiable predictions.

So, a couple of questions:

1: Can you use your model to produce a daily time series which respects q-variance over all periods T (ranging from a couple of days to a year) without changing the parameters?

2: You say that “I'm pretty sure many come up with similar classical models and why bother publishing them - assuming you are allowed”. If you know of classical models that predict or explain q-variance, please supply a reference. (This was exactly the topic of the dialogue with Paul in the July issue. As far as we and the others we spoke to can tell, no one had even noticed it, despite the fact that it is a large effect.) Also please unpack the last part. Why would someone not bother? And why would it not be allowed?

3. Please explain/reference what you mean by "the practitioner’s corollary 'total variance is twice the minimum'"?