
 
User avatar
jasonbell
Topic Author
Posts: 347
Joined: May 6th, 2022, 4:16 pm
Location: Limavady, NI, UK
Contact:

David Orrell's q-variance paper in Wilmott

June 23rd, 2025, 12:49 pm

I've read through David Orrell's paper in Wilmott July 2025 (page 40), "A Quantum of Variance and the Challenge for Finance", a number of times now. 

One question has been lingering in my mind: if you took out the word "quantum", would it make much difference? How would a paper that was simply about variance be seen in the quant field?

From reading it, it seems to me that the word shifts in meaning between physics, mathematics and the figurative sense. It looks good and forward thinking, and the alignment with the S&P returns gives us some confirmation, but beyond the paper itself, would it ever be trialled in a real-world setting, with real money, to see whether the claims hold up?
Website: https://jasonbelldata.com
Linkedin: https://www.linkedin.com/in/jasonbelldata/
Author of Machine Learning: Hands on for Developers and Technical Professionals (Wiley).
Contributor: Machine Learning in the City (Wiley).
 
User avatar
DavidO
Posts: 26
Joined: January 18th, 2017, 3:19 pm

Re: David Orrell's q-variance paper in Wilmott

June 23rd, 2025, 1:54 pm

The word “quantum” in the paper refers either to the mathematical framework, or to a small discrete quantity which comes out of the math (the paper does not discuss physics!). It is used because the model is based on quantum probability, which can be considered as the next most complicated kind after classical probability. The advantage is that representing probability with a complex wave function allows you to incorporate dynamical effects such as imbalance. In particular it is what leads to the q-variance relation between price change and volatility. 

Yes it is being trialled in a real-world setting – happy to discuss!
 
User avatar
jasonbell
Topic Author
Posts: 347
Joined: May 6th, 2022, 4:16 pm
Location: Limavady, NI, UK
Contact:

Re: David Orrell's q-variance paper in Wilmott

June 23rd, 2025, 2:18 pm

Thank you David, I appreciate you responding. I'm so pleased to hear that it's being used in the real world. I'm more a practitioner than an academic, so I always see things from that standpoint.

Cheers! 
Website: https://jasonbelldata.com
Linkedin: https://www.linkedin.com/in/jasonbelldata/
Author of Machine Learning: Hands on for Developers and Technical Professionals (Wiley).
Contributor: Machine Learning in the City (Wiley).
 
User avatar
DavidO
Posts: 26
Joined: January 18th, 2017, 3:19 pm

Re: David Orrell's q-variance paper in Wilmott

June 25th, 2025, 4:25 pm

Further to the kind “forward thinking” remark, I think there are two sides to this. We need new ideas, but it won’t happen unless we also discard some old ones!

For example in the paper I argue that q-variance falsifies the BSM model. This is obviously a contentious point so maybe I should elaborate for those interested. 

Referring to the 1973 Black-Scholes paper, the main assumption is that “The stock price follows a random walk in continuous time with a variance rate proportional to the square of the stock price. Thus the distribution of possible stock prices at the end of any finite interval is lognormal. The variance rate of the return on the stock is constant.” However, as discussed in the paper, q-variance is incompatible with a random walk, volatility is not constant but depends on price change, and the distribution is not lognormal but is better described by a q-distribution. The equations in the Black-Scholes paper therefore do not apply.

Another assumption states that “There are no transaction costs in buying or selling the stock or the option.” But transaction costs – including the bid/ask spread which is key to the quantum model – do of course exist and become especially important for options at extreme strikes.

Finally, the main conclusion is that the option costs computed by the BSM model are appropriate in terms of expected payouts. There is a loophole here, which is that in theory at least the cost of an option reflects not just the payout but also the price of risk, and the hedging argument is supposed to account for this by moving the analysis to a risk-free playing field. Since volatility is related to price change and transaction fees are not negligible, this argument breaks down, so we are left with the commonsense test that there are no obvious arbitrage opportunities – for example the model will not assign a price which is significantly higher than the expected payout, since then a trader could consistently make money by selling the option at that price.

This condition is easily tested by comparing BSM costs and actual payouts, and as shown by the payout-implied volatility surface in the paper, the model with constant volatility cannot work, which of course is one reason why traders adjust the number.

It might seem that there is no good reason to abandon the BSM model: everyone is used to it, it involves only a single parameter, etc. But if the single parameter has to change for every strike and expiration time (!!) it might actually be simpler to try something new. (Alternatively, for markets where q-variance does not apply and price changes are lognormal, the BSM model will work just fine – let me know if anyone finds one!)
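To make that commonsense test concrete, here is a minimal Python sketch of the mechanics, not of the paper's model: draw terminal prices from a non-lognormal distribution (a simple two-regime Gaussian mixture for the log return, chosen purely for illustration), average the call payouts across strikes, and back out the volatility that a Black-Scholes cost would need at each strike to match those payouts.

[code]
# Hedged sketch: payout-implied volatility under a non-lognormal terminal
# distribution. The two-regime Gaussian mixture below is a stand-in chosen
# for illustration only; it is NOT the q-distribution from the paper.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S0, K, T, sigma, r=0.0):
    """Black-Scholes call price (zero dividends)."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(1)
S0, T = 100.0, 0.5
n_paths = 400_000

# Log return drawn from a "calm" or an "excited" regime.
calm, excited, p_excited = 0.15, 0.45, 0.2
vols = np.where(rng.random(n_paths) < p_excited, excited, calm)
ST = S0 * np.exp(rng.normal(0.0, vols * np.sqrt(T)))
ST *= S0 / ST.mean()            # crude renormalisation so that E[ST] = S0

for K in np.linspace(80, 120, 9):
    payout = np.maximum(ST - K, 0.0).mean()    # undiscounted expected payout
    # Volatility at which the BSM cost equals the simulated expected payout.
    iv = brentq(lambda s: bs_call(S0, K, T, s) - payout, 1e-4, 3.0)
    print(f"K={K:6.1f}  expected payout={payout:8.3f}  payout-implied vol={iv:.3f}")
[/code]

The point of the exercise is only that a constant-volatility BSM cost and the simulated payouts cannot agree across all strikes simultaneously; the implied number has to move with the strike, which is what the payout-implied surface is probing.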
 
zebedeem
Posts: 14
Joined: May 20th, 2025, 7:03 am

Re: David Orrell's q-variance paper in Wilmott

July 5th, 2025, 8:27 am

Referring to the 1973 Black-Scholes paper, the main assumption is that “The stock price follows a random walk in continuous time with a variance rate proportional to the square of the stock price. Thus the distribution of possible stock prices at the end of any finite interval is lognormal. The variance rate of the return on the stock is constant.” 
I have a slide show of the S&P 500 versus the expected value from GBM plus the next term in the expansion.
 
User avatar
Paul
Posts: 7055
Joined: July 20th, 2001, 3:28 pm

Re: David Orrell's q-variance paper in Wilmott

August 15th, 2025, 4:58 pm

David has produced this app, which shows what is going on: https://david-systemsforecasting.shinyapps.io/qvar/

Try the following:

1. Select "walk" (lognormal) with the default 1-12 weeks chosen. It's not quick. You'll see David's theory (red line) and simulation results (blue). No match.

2. Change to "jump." Looks like it could be a match! Bad news for David so far!

3. Now play around with the number of weeks, still on "jump." Just select one number for weeks (or more than one with Ctrl). As the number of weeks increases, the blue line flattens out. Jump model starts to look lognormal as expected. 

4. Now start picking individual stocks. And vary the number of weeks to see if the blue line is anything like David's red line. Maybe hold weeks fixed at 26 (because it looks like the jump diffusion blue line has flattened out by then) and change stocks. I think most stocks I've tried have some funny behaviour that looks rather like David's model. Good news for David!

Play around with it! 
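For anyone who wants to poke at the same idea offline, here is a rough Python sketch of the kind of comparison described above. The window construction is my own guess, and the app's internals may well differ: simulate weekly log returns from a plain Gaussian walk and from a toy jump model, group them into T-week windows, and compare the within-window realised variance across bins of the total price change z. A pure walk gives an essentially flat curve in z, while the jump model rises with |z| for small T and flattens out as T grows.

[code]
# Hedged sketch (my construction, not necessarily the app's): conditional
# variance of T-week windows versus the price change over the window.
import numpy as np

def weekly_returns(model, n_weeks, rng, sigma_w=0.02):
    """Weekly log returns for a 'walk' or a toy 'jump' model."""
    r = rng.normal(0.0, sigma_w, n_weeks)
    if model == "jump":
        # Occasional large jumps on top of the diffusion.
        r = r + rng.normal(0.0, 0.10, n_weeks) * (rng.random(n_weeks) < 0.05)
    return r

def conditional_variance(returns, T, n_bins=9):
    """Bin windows by total change z and average the within-window variance."""
    n = len(returns) // T
    windows = returns[: n * T].reshape(n, T)
    z = windows.sum(axis=1)                   # price change over the window
    rv = windows.var(axis=1, ddof=1)          # realised variance inside it
    edges = np.quantile(z, np.linspace(0, 1, n_bins + 1))
    idx = np.digitize(z, edges[1:-1])         # bin index, 0 .. n_bins-1
    return [rv[idx == b].mean() for b in range(n_bins)]

rng = np.random.default_rng(42)
for model in ("walk", "jump"):
    for T in (4, 26):
        curve = conditional_variance(weekly_returns(model, 200_000, rng), T)
        print(f"{model:4s} T={T:2d}  max/min of binned variance = {max(curve) / min(curve):.2f}")
[/code]

The max/min ratio is just a crude summary of how far the binned curve is from flat; the flatter it is, the more walk-like (lognormal-like) the series looks at that T.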
 
User avatar
DavidO
Posts: 26
Joined: January 18th, 2017, 3:19 pm

Re: David Orrell's q-variance paper in Wilmott

August 15th, 2025, 5:31 pm

Just to add that if you use only a single T you limit the number of points, which leads to more uncertainty, as shown by the uncertainty estimates in the upper plot. So I would suggest looking at single period lengths to understand the data, but selecting all periods (T = 1 to 50 weeks) for better accuracy.
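For a rough sense of why fewer points per T means wider bands (a Gaussian approximation, not taken from the paper): the standard error of a sample variance shrinks like 1/sqrt(n-1), so each plotted variance becomes markedly noisier as the number of windows behind it drops.

[code]
# Illustrative only: standard error of a sample variance s^2 under a Gaussian
# approximation, SE(s^2) ~ s^2 * sqrt(2 / (n - 1)), for different window counts.
import numpy as np

s2 = 0.0004                      # an example plotted variance value
for n in (50, 200, 1000):        # number of T-week windows behind that point
    se = s2 * np.sqrt(2.0 / (n - 1))
    print(f"n={n:5d}  SE(s^2) ~ {se:.6f}  ({100 * se / s2:.0f}% of the value)")
[/code]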
 
User avatar
Paul
Posts: 7055
Joined: July 20th, 2001, 3:28 pm

Re: David Orrell's q-variance paper in Wilmott

August 15th, 2025, 5:38 pm

Yes, shrinking data is obviously an issue. But it is supposed to work for each T individually. That it works on average is not as impressive. How about holding T fixed and including all stocks from the same sector, for example? Can you tweak the app to allow selection of more than one stock?
 
User avatar
DavidO
Posts: 26
Joined: January 18th, 2017, 3:19 pm

Re: David Orrell's q-variance paper in Wilmott

August 15th, 2025, 6:06 pm

Yes, the model applies for all T, but the agreement with the data will only ever be as good as the uncertainty allows. If you average over many period lengths T, that reduces the noise, so you get a better idea of the fit. Another way, as you suggest, is to look at multiple stocks. The lower plot does this for all periods T; I'll have a look at allowing selection of multiple stocks. The other thing, though, is that the quantum model can be viewed as a first-order approximation to the dynamics, so one wouldn't expect it to fit every case, but the errors should come out in the wash. Combined with noisy data sets, I would therefore argue that fitting every T perfectly is impossible, and fitting the average (as in the lower plot) is the goal.
 
User avatar
Paul
Posts: 7055
Joined: July 20th, 2001, 3:28 pm

Re: David Orrell's q-variance paper in Wilmott

August 15th, 2025, 6:32 pm

The goal is to convince people to take this seriously! They will dismiss it out of hand, but you don’t want to make it easy for them!
 
User avatar
DavidO
Posts: 26
Joined: January 18th, 2017, 3:19 pm

Re: David Orrell's q-variance paper in Wilmott

August 15th, 2025, 6:46 pm

My aim isn't to convince anyone, that is far too hard!
 
User avatar
DavidO
Posts: 26
Joined: January 18th, 2017, 3:19 pm

Re: David Orrell's q-variance paper in Wilmott

August 16th, 2025, 3:15 pm

I have put up a new version of the app which allows you to combine data from up to 4 stocks. The time range for periods is shortened to 1 to 26 weeks. This allows for multiple stocks without using too much memory; also, at 26 weeks there are about 50 points, which is about as low as you can go and still get meaningful results (remember we are fitting a curve where each point is a variance, not just the variance of a single time series). The jump model gives poor results over this range because its curve is quite flat after even 5 weeks. The user can now also choose between binned (default) and LOESS for the interpolation.
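For reference, here is a small Python sketch of the two interpolation choices (binned averages versus LOESS), applied to synthetic (price change, realised variance) pairs. The data, parameter values and the quadratic target below are illustrative only; the app's internals may differ.

[code]
# Hedged sketch: binned versus LOESS estimates of a conditional-variance curve.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(7)
n = 1_000
z = rng.normal(0.0, 0.1, n)                        # price change over a period
v = 0.004 + 0.5 * z**2 + rng.normal(0, 0.002, n)   # noisy quadratic "variance"

# 1) Binned estimate: average v within quantile bins of z (the app's default).
n_bins = 10
edges = np.quantile(z, np.linspace(0, 1, n_bins + 1))
idx = np.digitize(z, edges[1:-1])
binned = [(z[idx == b].mean(), v[idx == b].mean()) for b in range(n_bins)]

# 2) LOESS estimate: locally weighted regression of v on z.
smooth = lowess(v, z, frac=0.3)    # returns sorted (z, fitted v) pairs

print("binned:", [(round(a, 3), round(b, 4)) for a, b in binned[:3]], "...")
print("LOESS :", smooth[:3])
[/code]

Binning keeps each plotted point tied to a fixed share of the data, while LOESS trades that for a smoother curve; with only around 50 points per stock at 26 weeks the difference can be visible.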
 
User avatar
katastrofa
Posts: 7949
Joined: August 16th, 2007, 5:36 am
Location: Event Horizon

Re: David Orrell's q-variance paper in Wilmott

August 17th, 2025, 8:47 am

I thought the quantum reference comes from the analogy of your model to coherent states of a quantum harmonic oscillator. QHO: larger displacement -> higher variance in position measurement; q-var: larger price movement -> higher variance of returns in that period.
As I understand it, you model the return distribution as a Poisson mixture of different volatility "modes", in analogy to a coherent state of the QHO. The volatility modes themselves are analogous to QHO energy levels and thus scale with 2n+1. Is that why your q-variance is quadratic in returns, just as the QHO variance is quadratic in displacement amplitude?

So this basically models returns as a mixture of Gaussians, i.e. heavier tails - additionally amplified by the ^2 term - and that will produce a volatility smile?
 
User avatar
DavidO
Posts: 26
Joined: January 18th, 2017, 3:19 pm

Re: David Orrell's q-variance paper in Wilmott

August 17th, 2025, 7:28 pm

Nearly! The model uses a QHO coherent state to model the price change over a period T. It’s not an analogy (nothing to do with subatomic particles); it’s the result of quantizing a linear entropic force, so the model can be viewed as a first-order approximation to the underlying probabilistic dynamics.

In the quantum model variance is a measure of the energy level, so q-variance occurs because this increases with displacement. To compute the price change distribution we look at an ensemble of such oscillators. Since the energy level of a coherent state follows a Poisson distribution, the result is the q-distribution, which is a mixture of Gaussians with a particular set of weights (no new parameters other than base volatility and drift). So rather than q-variance emerging from the q-distribution, they each arise from basic properties of the QHO, and are consistent with one another.

Hope this helps. The app demonstrates q-variance for a range of assets; give it a try if you haven't already!
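Here is a minimal Python sketch of that construction as I read it from this thread: the level n is drawn from a Poisson distribution, and level n contributes a Gaussian whose variance is proportional to (2n+1). The mean lam = 1/2 and the exact scaling are taken from katastrofa's post below and may not match the paper's normalisation.

[code]
# Hedged sketch of the mixture-of-Gaussians reading of the q-distribution:
# n ~ Poisson(lam), and given n the price change is Gaussian with variance
# (2n+1) * sigma^2. Parameter values are illustrative.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(3)
sigma, lam, mu = 0.02, 0.5, 0.0
n_samples = 1_000_000

n = rng.poisson(lam, n_samples)                   # "energy level" of each draw
x = rng.normal(mu, sigma * np.sqrt(2 * n + 1))    # price change at that level

total_var = sigma**2 * (2 * lam + 1)              # E[(2n+1)] * sigma^2
print("sample variance :", x.var(), " vs", total_var)
print("excess kurtosis :", kurtosis(x), " (0 for a single Gaussian)")
[/code]

The positive excess kurtosis is the "mixture of Gaussians, aka heavier tails" point from the question above.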
 
User avatar
katastrofa
Posts: 7949
Joined: August 16th, 2007, 5:36 am
Location: Event Horizon

Re: David Orrell's q-variance paper in Wilmott

August 23rd, 2025, 9:56 am

The QHO is not about subatomic particles either. It's just a popular mathematical framework, because a linear force is a common approximation in many systems, cf. Hooke's law, F = -kx. The same goes for your linear entropic force, i.e. the gradient of the log-probability density ("propensity") of the price, which you assume to be locally Gaussian. So this force is like a spring pulling the price toward its most plausible value.

When you actually quantise the harmonic oscillator, you obtain the expression for the position variances: Var(x|n) = (2n+1) * hbar/(2*m*omega), where n labels the stationary states and hbar/(2*m*omega) is the ground-state variance. This law comes from the QHO operator algebra (and, equivalently, from integrating the Hermite-times-Gaussian eigenfunctions).

But in your model you don't actually use those eigenstates for your "volatility modes" - instead you replace them with Gaussians (whose variance is fixed by their parameter sigma, as everyone knows). That means the (2n+1) scaling isn't derived from your quantisation; it's imposed by borrowing the QHO result and assigning it to Gaussian surrogates.

And the states you call coherent aren't coherent states in the QHO sense either. A coherent state |alpha> is a superposition of Hamiltonian eigenstates |n> and has a Gaussian position distribution of fixed width. If you measure energy on this |alpha>, the probability of getting level n is Poisson with mean lambda = |alpha|^2. In your model, you use these Poisson probabilities as classical mixing weights over Gaussians with variances ~(2n+1), and you fix lambda = 1/2 by assumption so that the calculations give your q-variance V(z) = sigma^2 + z^2/2. So what you've built is a quantum-inspired variance mixture, not the literal coherent-state position law.

PS. If you actually followed the QHO formalism - ie used the true eigenfunctions (or formed a genuine coherent state) - the position density is just one Gaussian with constant width: no heavy tails. The heavy tails only appear after replacing eigenstate shapes by Gaussians and using Poisson probabilities as classical mixture weights.


So why not put everything in one short sentence: windows with a large x are more likely to have high variances, so the conditional variance rises like a + b*x^2, and let's fix b = 1/2 for parsimony?
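And here is a tiny Python sketch of that short-sentence version, fitting a and b by least squares. The data are synthetic (a toy jump model of weekly returns, my construction), so this only shows the mechanics of the a + b*x^2 fit, not an estimate of b for any real market.

[code]
# Hedged sketch: fit conditional variance as a + b*x^2 on toy data and report b.
import numpy as np

rng = np.random.default_rng(11)
sigma_w, T, n_windows = 0.02, 8, 50_000

# Weekly returns with occasional jumps, grouped into T-week windows.
r = rng.normal(0.0, sigma_w, n_windows * T)
r += rng.normal(0.0, 0.08, r.size) * (rng.random(r.size) < 0.05)
windows = r.reshape(n_windows, T)

x = windows.sum(axis=1)                # price change over each window
v = windows.var(axis=1, ddof=1) * T    # window variance, scaled to the total change

b, a = np.polyfit(x**2, v, 1)          # least-squares fit of v against x^2
print(f"a = {a:.5f}   b = {b:.3f}   (the q-variance form would set b = 1/2)")
[/code]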