---------------------------------

"Over the years, I’ve often been asked for investment advice, and in the process of answering I’ve learned a good deal about human behavior. My regular recommendation has been a low-cost S&P 500 index fund. To their credit, my friends who possess only modest means have usually followed my suggestion.

I believe, however, that none of the mega-rich individuals, institutions or pension funds has followed that same advice when I’ve given it to them. Instead, these investors politely thank me for my thoughts and depart to listen to the siren song of a high-fee manager or, in the case of many institutions, to seek out another breed of hyper-helper called a consultant.

That professional, however, faces a problem. Can you imagine an investment consultant telling clients, year after year, to keep adding to an index fund replicating the S&P 500? That would be career suicide. Large fees flow to these hyper-helpers, however, if they recommend small managerial shifts every year or so. That advice is often delivered in esoteric gibberish that explains why fashionable investment “styles” or current economic trends make the shift appropriate.

The wealthy are accustomed to feeling that it is their lot in life to get the best food, schooling, entertainment, housing, plastic surgery, sports tickets, you name it. Their money, they feel, should buy them something superior compared to what the masses receive.

In many aspects of life, indeed, wealth does command top-grade products or services. For that reason, the financial “elites” – wealthy individuals, pension funds, college endowments and the like – have great trouble meekly signing up for a financial product or service that is available as well to people investing only a few thousand dollars. This reluctance of the rich normally prevails even though the product at issue is – on an expectancy basis – clearly the best choice. My calculation, admittedly very rough, is that the search by the elite for superior investment advice has caused it, in aggregate, to waste more than $100 billion over the past decade. Figure it out: Even a 1% fee on a few trillion dollars adds up. Of course, not every investor who put money in hedge funds ten years ago lagged S&P returns. But I believe my calculation of the aggregate shortfall is conservative.

Much of the financial damage befell pension funds for public employees. Many of these funds are woefully underfunded, in part because they have suffered a double whammy: poor investment performance accompanied by huge fees. The resulting shortfalls in their assets will for decades have to be made up by local taxpayers.

Human behavior won’t change. Wealthy individuals, pension funds, endowments and the like will continue to feel they deserve something “extra” in investment advice. Those advisors who cleverly play to this expectation will get very rich. This year the magic potion may be hedge funds, next year something else. The likely result from this parade of promises is predicted in an adage: “When a person with money meets a person with experience, the one with experience ends up with the money and the one with money leaves with experience.”

Long ago, a brother-in-law of mine, Homer Rogers, was a commission agent working in the Omaha stockyards. I asked him how he induced a farmer or rancher to hire him to handle the sale of their hogs or cattle to the buyers from the big four packers (Swift, Cudahy, Wilson and Armour). After all, hogs were hogs and the buyers were experts who knew to the penny how much any animal was worth. How then, I asked Homer, could any sales agent get a better result than any other?

Homer gave me a pitying look and said: “Warren, it’s not how you sell ‘em, it’s how you tell ‘em.” What worked in the stockyards continues to work in Wall Street."

Source: Berkshire Hathaway 2016 Shareholder letter (Feb. 25, 2017)

Statistics: Posted by Alan — Yesterday, 6:53 pm


Moreover, I am looking for the missing link -- at least in my personal library -- between the market efficiency positive economics modelling and the growing normative modelling which uses

Thanks in advance for the hints and suggestions anyone would provide.

Statistics: Posted by GiuseppeAlesii — Yesterday, 10:11 am


Statistics: Posted by Cuchulainn — September 10th, 2017, 2:05 pm


Some keywords:

* Getting rid of intermediate steps is called "end-to-end learning"

* A very active research area for learning optimal behaviour is "reinforcement learning".

Here are two examples where bots learn very complex behaviour and outperform the best humans:

https://www.google.nl/amp/s/amp.busines ... ndi-2017-8

http://vizdoom.cs.put.edu.pl/competition-cig-2017
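The "reinforcement learning" keyword can be made concrete with a toy example. Below is a minimal sketch of tabular Q-learning on a hypothetical 5-state chain world; all names and parameter values are illustrative and not taken from the linked competitions:

```python
import random

# Minimal tabular Q-learning sketch on a toy 5-state chain: actions are
# left/right, reward 1 only on reaching the rightmost state.
N_STATES = 5
ACTIONS = (0, 1)                       # 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.1      # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, reward, s2 == N_STATES - 1   # next state, reward, done

def greedy(s):
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

random.seed(0)
for _ in range(500):                   # episodes
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        s2, r, done = step(s, a)
        target = r + gamma * max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# The learned greedy policy should be "always move right"
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

The same update rule (with a neural network replacing the table) is essentially what the bots in the links above use, just at vastly larger scale.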

Statistics: Posted by outrun — September 6th, 2017, 11:28 am


I am looking for research papers that deal with automated market making, ideally on options.

Do you have any references?

I saw a lot of papers dealing with inventory risk; is there another sub-topic in automated market making, such as short-term vol prediction?

Many thanks

Statistics: Posted by brummie69 — September 6th, 2017, 10:11 am


http://onlinelibrary.wiley.com/doi/10.1 ... 10366/epdf

preview


Statistics: Posted by Cuchulainn — August 10th, 2017, 7:35 am


http://onlinelibrary.wiley.com/doi/10.1 ... 10606/epdf

Statistics: Posted by Cuchulainn — August 10th, 2017, 7:31 am


Is there a compelling reason (supervisor demands it, legacy code) to use a trinomial tree? (It is only a simple explicit FDM: first-order accurate and conditionally stable.)

Why not go the whole way and use Crank-Nicolson? It also allows you to calculate the Greeks in one sweep.

https://mhittesdorf.wordpress.com/2013/11/17/introducing-quantlib-american-option-pricing-with-dividends/

I don't see any (e.g. mathematical) reason to prefer a trinomial tree over a decent FD scheme.
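For concreteness, here is a minimal sketch (not production code) of Crank-Nicolson for a European call under Black-Scholes. All parameters are illustrative, and a serious implementation would add Rannacher start-up steps to damp oscillations from the non-smooth payoff:

```python
import numpy as np

# Crank-Nicolson sketch for a European call under Black-Scholes.
S_max, M, N = 300.0, 300, 200          # domain width, space steps, time steps
K, T, r, sigma = 100.0, 1.0, 0.05, 0.2

S = np.linspace(0.0, S_max, M + 1)
dt = T / N
V = np.maximum(S - K, 0.0)             # payoff at expiry

i = np.arange(1, M)                    # interior node indices (S_i = i*dS)
a = 0.25 * dt * (sigma**2 * i**2 - r * i)
b = -0.5 * dt * (sigma**2 * i**2 + r)
c = 0.25 * dt * (sigma**2 * i**2 + r * i)

# Crank-Nicolson: (I - A) V_new = (I + A) V_old, A tridiagonal (a, b, c)
L = np.diag(1 - b) - np.diag(a[1:], -1) - np.diag(c[:-1], 1)
R = np.diag(1 + b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)

for n in range(N):                     # march backwards from expiry
    tau = (n + 1) * dt                 # time remaining to expiry
    rhs = R @ V[1:M]
    upper = S_max - K * np.exp(-r * tau)   # V ~ S - K e^{-r tau} at S_max
    rhs[-1] += c[-1] * (upper + V[M])      # boundary terms (V = 0 at S = 0)
    V[M] = upper
    V[1:M] = np.linalg.solve(L, rhs)

price = float(np.interp(100.0, S, V))  # value at S0 = 100
delta = np.gradient(V, S)              # Greeks from the same sweep
print(round(price, 2))
```

The last two lines show the "Greeks in one sweep" point: delta (and gamma, via a second difference) fall out of the same solution vector, with no extra lattice passes.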

Trees: More beautiful Geometry!

Trees: More Interesting History, read the rings!

Tree models more intuitive?

Beauty > 50%

Statistics: Posted by Collector — August 6th, 2017, 8:20 pm


This is an interesting set of questions. In principle, the answer is yes. Can you answer the questions in the case of your lattices?

The big issue is whether you separate the underlying lattice from the payoff algos. If the common data is read-only, then parallel processing is on the cards.

But a more precise specification is needed.
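As a rough illustration of the read-only point, here is a sketch where one shared CRR lattice is built once and several payoff algos are priced against it in parallel. Parameters and payoff names are illustrative; threads are used for simplicity, and since the shared data is read-only, processes would work just as well:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# One shared, read-only CRR lattice; several payoff algos priced in parallel.
S0, T, r, sigma, N = 100.0, 1.0, 0.05, 0.2, 500
dt = T / N
u = np.exp(sigma * np.sqrt(dt))
d = 1.0 / u
p = (np.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
disc = np.exp(-r * dt)

# Shared, read-only data: terminal stock prices after j up-moves
j = np.arange(N + 1)
S_T = S0 * u**j * d**(N - j)

def price(payoff):
    """Backward induction; only reads the shared lattice data."""
    V = payoff(S_T)
    for _ in range(N):
        V = disc * (p * V[1:] + (1 - p) * V[:-1])
    return float(V[0])

payoffs = {
    "call_100": lambda S: np.maximum(S - 100.0, 0.0),
    "put_100":  lambda S: np.maximum(100.0 - S, 0.0),
    "digital":  lambda S: (S > 110.0).astype(float),
}

with ThreadPoolExecutor() as pool:
    futures = {name: pool.submit(price, f) for name, f in payoffs.items()}
    results = {name: fut.result() for name, fut in futures.items()}

print({name: round(v, 4) for name, v in results.items()})
```

Because nothing mutates `S_T` or the probabilities, there is no synchronisation to worry about; the payoff algos are embarrassingly parallel.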

Statistics: Posted by Cuchulainn — July 21st, 2017, 5:13 pm


Book 2 has been out for about a year now. If you're interested, there's a bunch of pre-sales stuff and support at the link below: full table of contents, code files, etc.

Statistics: Posted by Alan — July 21st, 2017, 2:14 pm


geodesic,

Things get more interesting in 2D and up. I've got one chapter in a recent book that may interest you: Ch. 5 "Stochastic Volatility as a Hidden Markov Model" in the book "Option Valuation under Stochastic Volatility II". This turns out to be kind of a mixed lattice and continuum representation.

The virtue of convergent lattice methods is that they provide an explicit Markov chain approximation to a continuum process -- an approximation that has manifestly non-negative transition probabilities and weak convergence to the continuum process. PDE discretizations typically do not offer this interpretation.

If you've ever tried to solve a PDE for a tricky transition probability in 2D or higher, you've probably seen cases where convergence to a strictly positive result can be quite computationally demanding. And simply truncating a small negative PDE result to zero can sometimes be grossly wrong (in maximum likelihood estimation, for example). So, there is a role for both approaches.
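To make the Markov-chain point concrete, here is a small sketch (illustrative parameters) that builds the transition probabilities of a standard trinomial chain for the log-price and checks non-negativity directly; note how an ill-chosen space step destroys the probabilistic interpretation:

```python
import numpy as np

# A trinomial chain for dX = nu*dt + sigma*dW (X = log S) is an explicit
# Markov-chain approximation: its transition probabilities can be checked
# for non-negativity directly.
sigma, r = 0.2, 0.05
nu = r - 0.5 * sigma**2                # drift of the log-price

def tri_probs(dt, dx):
    """Probabilities matching mean and variance of the increment to O(dt)."""
    a = sigma**2 * dt / dx**2
    b = nu * dt / dx
    pu = 0.5 * (a + b)                 # up
    pm = 1.0 - a                       # middle
    pd = 0.5 * (a - b)                 # down
    return pu, pm, pd

# Conventional choice dx = sigma*sqrt(3*dt) keeps all three non-negative
dt = 1.0 / 252
pu, pm, pd = tri_probs(dt, sigma * np.sqrt(3 * dt))
assert min(pu, pm, pd) >= 0 and abs(pu + pm + pd - 1) < 1e-12
print(pu, pm, pd)

# Too fine a space step relative to dt breaks non-negativity (pm < 0):
pu2, pm2, pd2 = tri_probs(dt, 0.5 * sigma * np.sqrt(dt))
print(pm2 < 0)   # True: no longer a valid Markov chain
```

The same sign check applied to a PDE discretisation's coefficients is exactly the monotonicity question: where it fails, the scheme has lost its probabilistic interpretation even if it still converges.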

I haven't seen that particular problem, but I have seen "bouncing ghosts" where diffusions of probability bounce off the "grid wall" at "infinity". I've also seen the same thing happen off the faux wall at zero in spherical coordinates for a spherically symmetric function (where r is restricted to be non-negative). So yeah, I've definitely seen gotchas in PDE approaches, but I don't think I've ever solved a numerical PDE in finance. Only in physics (my dissertation research was solving and interpreting a complex-valued diffusion equation.)

If you're the author of "Option Valuation under Stochastic Volatility I" may I just say how amazing your book is? I didn't know book 2 existed -- it seems like you covered so much in book 1!

Statistics: Posted by geodesic — July 21st, 2017, 12:25 pm


You are not asking the right question. Lattice methods are poor men's PDEs.

Admittedly, lattice methods played a huge role in quantitative finance, as they brought Black-Scholes theory to ... poor men. And I am not saying they aren't being used; they are.

But beyond this, look no further: look for advanced books on PDEs and you shall find.

Also notice that you will find a lot more unpublished documentation with dealers than in textbooks and papers. That's not just true of lattice methods; it is generally the case. It's hard for people outside practitioner circles even to come up with the relevant problems. That's because you hit so many roadblocks in practical, industrial-strength applications that you either solve the problems or you just stop.

Thanks. If one wants to trade accuracy for speed (on the proviso that the accuracy is "good enough" for a risk system, if not a trading system), what are the benefits of recasting the problem as a PDE?

Let's say I have a few hundred convertible bonds that need to get valued every day for a daily market risk report. I'm not out to get dead-accurate trading accuracy. I just need to get them valued in a decent amount of time on a system that many people share to do valuations. What's the benefit of a PDE? Can a PDE system be "reused" for different instruments living on the same interest rate lattice?

Statistics: Posted by geodesic — July 21st, 2017, 12:14 pm


Cuchulainn wrote: Related to above, what do people think of this statement? To re-iterate, for the two-dimensional Black-Scholes PDE there is no consistent and monotone discretisation with fixed stencil on a uniform mesh, for any [$]\rho \ne 0[$].

Is the problem truly that [$]\rho \ne 0 [$] or that the diffusion coefs become unbounded? If true, the statement needs a proof to clarify the unstated assumptions. For example, what is the computational domain and the numerical boundary conditions?

The quote is taken from this article

https://arxiv.org/pdf/1605.06348.pdf

The Holy Grail of FDM is finding monotone schemes for all correlation/diffusion/convection terms.

1. For correlation, for the untransformed BS PDE, constant meshes h1 and h2 will indeed not lead to an M-matrix (as elegantly shown by Samarskii/Shishkin/O'Riordan). However, the transform [$]x = \log S[$] produces an M-matrix using their approximation to the mixed derivative V_xy. I have verified this mathematically for the Heston PDE and I expect a similar conclusion for the transformed BS PDE.

2. Convection can destroy monotonicity, but we can resolve this using 1) upwinding or, better, 2) exponential fitting.

3. Degenerate diffusion is also an issue, of course, but Fichera/Feller theory comes to the rescue here. And using the method of lines (MOL) makes life easier.
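Point 2 can be illustrated numerically. The sketch below (illustrative coefficients) assembles the FD matrix for the 1D model problem -eps*u'' + b*u' with central differencing and with upwinding, and checks the M-matrix sign pattern directly:

```python
import numpy as np

# M-matrix sign condition for -eps*u'' + b*u' on a uniform mesh:
# central differencing vs upwinding.
eps, b, n = 0.005, 1.0, 50
h = 1.0 / n                            # mesh Peclet number b*h/(2*eps) = 2

def fd_matrix(scheme):
    A = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        if scheme == "central":
            lo = -eps / h**2 - b / (2 * h)
            di = 2 * eps / h**2
            up = -eps / h**2 + b / (2 * h)
        else:                          # upwind, valid for b > 0
            lo = -eps / h**2 - b / h
            di = 2 * eps / h**2 + b / h
            up = -eps / h**2
        A[i, i] = di
        if i > 0:
            A[i, i - 1] = lo
        if i < n - 2:
            A[i, i + 1] = up
    return A

def has_m_sign_pattern(A):
    """Positive diagonal, non-positive off-diagonals."""
    off = A - np.diag(np.diag(A))
    return bool(np.all(np.diag(A) > 0) and np.all(off <= 0))

print(has_m_sign_pattern(fd_matrix("central")))   # False: Peclet > 1 here
print(has_m_sign_pattern(fd_matrix("upwind")))    # True: upwinding restores it
```

Exponential fitting achieves the same sign pattern without sacrificing accuracy in the diffusion-dominated regime, which is why it is listed as the better option above.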

Statistics: Posted by Cuchulainn — July 20th, 2017, 12:43 pm


Related to above, what do people think of this statement?

Is the problem truly that [$]\rho \ne 0 [$] or that the diffusion coefs become unbounded? If true, the statement needs a proof to clarify the unstated assumptions. For example, what is the computational domain and the numerical boundary conditions?

Statistics: Posted by Alan — July 19th, 2017, 10:44 pm


This is a universal issue.

Non-monotone schemes can produce the problems mentioned above. And the initial delta is tricky. I don't have Vol II here, but I reckon lattices track the exact solution very well; many FDM schemes do not.

So wrong.

Statistics: Posted by Cuchulainn — July 19th, 2017, 8:41 pm
