
 
Verde
Posts: 9
Joined: December 18th, 2009, 11:49 am

Actuarial Pricing vs Financial Mathematics Pricing

June 23rd, 2011, 12:38 pm

Quote (originally posted by Edgey):
@Verde I don't disagree with your post, but you haven't argued against my point (that the actuarial approach doesn't need independence). Insurance companies do use independence to reduce their costs, but independence is not a required assumption for pricing. Actuaries do add costs on for reserves, but this is not part of the expected cost under the actuarial approach. To be clear, my definition of the "actuarial approach" is to use historical probabilities to derive prices, rather than market-implied probabilities. You may have a broader definition.

OK, I think I get it. I am viewing the issue from the point of view of an insurer who needs to meet obligations out of premiums collected. You see it from the point of view of an investor using the actuarial approach to derive prices, which is the correct approach in the context of this discussion.
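For readers new to the distinction being drawn here, a minimal one-period sketch may help (all numbers are assumed, purely for illustration): the same contingent claim is priced once the 'actuarial' way, using an estimated historical probability, and once the 'financial' way, using the probability implied by the market price of the underlying itself.

[code]
# Illustrative sketch (all numbers assumed): one claim, two pricing approaches.
r = 0.03                                  # one-period risk-free rate
S0, S_up, S_down = 100.0, 120.0, 90.0     # underlying today and in the two future states
payoff_up, payoff_down = 20.0, 0.0        # claim payoff in each state (a 100-strike call)

# Actuarial approach: discount the expected payoff under a historical estimate.
p_hist = 0.55                             # estimated 'real-world' up-probability
price_actuarial = (p_hist * payoff_up + (1 - p_hist) * payoff_down) / (1 + r)

# Financial approach: back out the probability under which the discounted underlying
# is a martingale, i.e. the probability implied by the market price S0 itself.
q = ((1 + r) * S0 - S_down) / (S_up - S_down)
price_financial = (q * payoff_up + (1 - q) * payoff_down) / (1 + r)

print(f"historical p = {p_hist:.3f}  ->  actuarial price      = {price_actuarial:.2f}")
print(f"implied    q = {q:.3f}  ->  arbitrage-free price  = {price_financial:.2f}")
[/code]

The two numbers differ exactly to the extent that the historical estimate differs from the market-implied probability, which is the gap the two definitions in this exchange are circling around.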
 
numbersix
Posts: 474
Joined: July 23rd, 2001, 2:33 pm

Actuarial Pricing vs Financial Mathematics Pricing

June 23rd, 2011, 1:11 pm

Alan, thank you for this elegant clarification. In Neftci's An Introduction to the Mathematics of Financial Derivatives, one can read:

Quote:
The Girsanov theorem provides the general framework for transforming one probability measure into another "equivalent" measure ... The probabilities so transformed are called "equivalent" because ... they assign positive probabilities to the same domains. (p. 322)

In the following chapter on the applications of equivalent measures, Neftci writes:

Quote:
In this chapter, we show how the method of equivalent martingale measures can be applied. We use option pricing to do this. We know that there are two ways of calculating the arbitrage-free price of a European call option: 1) The original Black-Scholes approach where a riskless portfolio is formed and a partial differential equation is obtained. 2) The martingale methods, where one finds a "synthetic" probability under which the stock price process becomes a martingale. (p. 345)

The martingale method is presumably the one that does not rely on dynamic hedging and invokes only the combination of non-arbitrage (this is what the term 'martingale' refers to) and the equivalence between the 'synthetic' probability (or changed, risk-neutral measure) and the original real one. Neftci provides the details of the derivation of option prices following the two routes. Crucially, the pricing PDE in route (2) is not obtained as in route (1), via the non-arbitrage argument imposing that the dynamically hedged portfolio should only earn the interest rate, but by first transforming the price process of the underlying into a martingale through Girsanov's theorem, then expressing the price process of the derivative using Ito's lemma, and finally insisting that the price process of the derivative should also be a martingale in the changed measure. To achieve the latter, the drift term of the SDE ruling the derivative is set equal to zero, and this yields the same PDE as in Black-Scholes' original derivation. Neftci concludes:

Quote:
It was shown that the martingale approach implies the same PDEs utilized by the PDE methodology [by this, Neftci means the traditional BSM approach through dynamic hedging]. The difference is that, in the martingale approach, the PDE is a consequence of risk-neutral asset pricing, whereas in the [original BSM] method, one begins with the PDE to obtain risk-free prices. (p. 366)

Real probability measure vs. changed measure

This is all well and I certainly believe that measure theory, Girsanov's theorem, continuous-time stochastic processes etc. are well established and that they all exist, because they are only mathematics. On the other hand, I also believe statistics exist and I believe insurance companies do break even on average, or at least that the problem they face -- that of trying to break even on average when faced with their statistical populations -- is well-posed. But to go back to my original point and to the question that prompted this whole thread about the difference between actuarial valuation and financial pricing, I ask again: Why insist on calling the risk-neutral measure equivalent to the 'original real' one? Why even suppose that the risk-neutral measure, or the pricing operator one uses to generate arbitrage-free derivatives prices, is the result of changing the 'original real' measure in which the underlying is supposed to exhibit its real, historical statistical distribution? What if there was no such thing as the 'real probability measure'?
Not that it should be unobservable or inscrutable; no, my problem is that we should find no precise meaning, but only muddled conceptions, of what the real probability means. For let us not fool ourselves, if one wishes to read into the words 'real probability distribution' something that goes beyond the mere formalism of Girsanov's theorem, which merely treats measures symmetrically and does not know what 'real' means, this 'real' distribution of the underlying that everybody is talking about has to be the one that some actuary is reading for me from the past statistics of the underlying, not the forward-looking one (whatever that means too) with which the trader is supposed to price derivatives.

It is perfectly fine to identify the probability of an event that has been recognized, or modeled, or idealized, as a member in a statistical series, with the frequency of its occurrence in the series. In that case, the word 'probability' would just be a rewording and would have no ontological implication. However, the trouble begins when you try to make sense of the probability of that single case or single event, independently of any reference to the whole series.

So again, I ask: Armed with your favorite concept of frequency-based probability, when you are squeezed in that corner facing the next draw -- the next single occurrence of the event and nothing but --, what exactly do you mean when you say its probability of occurring is p? Is p really its probability or does it, once again, refer to the whole series?

To repeat, this is not a problem of knowledge. Indeed, you might be prevented from knowing the frequency of the event in the series; or it might be that the series itself and the corresponding repetition of the draw are only thought-experiments whose outcomes you may speculate upon but never know. The problem I am posing is a problem of reference, i.e. an ontological or even logico-semantic problem. Does this probability p truly refer to the single event (and by 'truly' I mean: according to your own system of thought and metaphysics; according to your own usage of precise language), or does it always have, as Richard von Mises prescribes, to presuppose the whole collective?

Ex-ante vs. ex-post

As I have suggested in an earlier post, the whole philosophical problem of single-case objective probability can be rephrased as the problem of shifting from an ex-post stance to an ex-ante stance. If your answer to the question above is that the probability p is deemed the 'probability of the single event' only insofar as the event would display a frequency p of occurring in case the draw was suitably repeated in a series -- even an ideal series that didn't exist empirically but was only a thought-experiment --, then your stance will be ex-post, despite the fact that you seem to be addressing the event before it occurs when you so answer me. To really move to the ex-ante stance, you have to tell me something about the event literally before it occurs (this is what 'ex-ante' means literally), let alone before the whole series, possibly involving it, unfolds.

So when I say that the market price of the contingent claim exists whereas the objective probability of the event triggering it doesn't, I speak ex-ante, as this is what 'to exist' really means.
To repeat, the market can give me the ex-ante price of a contingent claim whose triggering event is genuinely single-case (as this is what the market is supposed to do), while I doubt that we could ever spell out the objective probability of this single event without smuggling in statistics implicitly, therefore the ex-post stance.

Actuarial valuation vs. financial pricing

This is why I contend that financial pricing, in its usage of the term 'real probability', has not managed to unshackle itself completely from actuarial valuation. Even though the market of contingent claims is the long-awaited technology that would finally allow us to account for future contingent events without using probability, simply by assigning a tradable price to the corresponding contingent claim in its market, even and especially when the triggering event is one of a kind, i.e. an event that never was and never will be a member of a statistical series (typically the default event of a corporation), the textbook presentation of financial derivative pricing is still obsessed, or at least burdened, with the legacy of actuarial science. How? Simply when it argues (for instance, Neftci p. 319) that 'on average, the risky asset will appreciate faster than the growth of a risk-free investment' and that for this reason its drift has to be changed, through changing the probability measure, if it is to become a martingale.

What does 'on average' refer to, in Neftci's statement, other than a hypothetical insurance company that is supposed to hold a population of such assets and would presumably make more money by holding them than by investing risk-free (probably what AIG had in mind when they started holding CDSs)? In that case, 'average' would mean that the particular asset we are talking about is the 'average asset' of the population, or its representative. But what if there was no such population and the asset was one of a kind? Probably the insurance company would have to invent such a population and hold assets belonging, say, to the same sector, or something like that. As Fermion points out, probability crucially depends on the ability to count and to measure frequencies. But what if the event was un-countable because it was unique? Presumably, some subjective elements would have to enter into play and the unique event, or the unique asset, would have to be modeled, that is to say forced, to be a member of that statistical series or that reference class.

Or is 'on average' supposed to mean 'in the long run'? In other words, you are supposed to continue holding this single asset (and not a population thereof) while all the different rise and fall scenarios that History has in store for it unfold, until you observe, by virtue of some ergodic theorem guaranteeing that all the possible paths that are open in space for that asset will eventually unfold in time, that you have made money overall. But what if the market was a single run and not a long run of runs? What if the market was precisely like History, something that happens once and never repeats itself or gives you a second chance?

The self-sufficiency of risk-neutral pricing

It seems to me that the only reason why we need martingales in finance is to express the prices of contingent claims generally as the discounted expectation of their payoff, thus making sure we observe non-arbitrage among their instant prices.
'In the absence of arbitrage possibilities, market equilibrium suggests that we can find a synthetic probability distribution such that all properly discounted asset prices behave as martingales. Because of this, martingales have a fundamental role to play in practical asset pricing', writes Neftci (p. 124). I don't know what else there could be besides 'practical asset pricing'. 'Theoretical asset pricing', perhaps? Better to use the word 'valuation' instead of 'pricing' in that case. Non-arbitrage is the only real (or realistic) constraint. It closes itself off on the instant market and bears no relation to an outside or to a long run, or even to the future distribution of returns of the assets. For this reason, it seems to me that the whole elaboration in terms of changing the measure and equivalence with the real measure is just lip service to the actuarial ancestry.

Characteristically, in the one passage where Neftci speaks of the real rate of return R of the risky asset, he writes: 'Now consider the problem of a financial analyst who wants to obtain the fair market value of this asset today'. One way to do this, Neftci suggests, is to compute the present value as the mathematical expectation of the future returns under the real probability measure: S_t = E[S_{t+1} / (1 + R)]. However, notes Neftci, this requires knowledge of the distribution of R, which requires knowing the risk premium of the asset. Neftci then observes that 'knowing the risk premium before knowing the fair market value S_t is rare' (p. 320). Note the irony.

Next, Neftci offers to change the probability measure in order to get rid of the risk premium in the computations. Under this risk-neutral measure, the present fair value our analyst is looking for would simply be the mathematical expectation of the future market prices discounted by the interest rate. All that remains to do then is to forecast the future market prices S_{t+1}. This can be done, writes Neftci, probably without noticing the irony of the reversal, by 'using a model that describes the dynamics of S_t and then discounting the "average forecast" by the (known) r'. In short, all an analyst has to do in order to estimate the (fair? real? fundamental?) present value of an asset is to forecast its future market prices. Why not simply say that its present value is equal to its present price? As a matter of fact, Neftci's notation is equivocal on this point, as he speaks of forecasting S_{t+1} (supposedly the future prices) using a model for the dynamics of S_t (supposedly the present value we were after, now imperceptibly confused with the present price).

And when, in a later section, Neftci concludes that 'the synthetic probabilities [or risk-neutral measure] appear central to pricing of financial securities' and wonders where they can be got from, he suggests that the volatility parameters of the stochastic differential equation should be calibrated by the practitioner, 'based on the existence of liquid options ... that provide direct volatility quotes' (pp. 334-35). This completes the proof that the practice of derivative pricing, that is to say, of financial pricing as a whole, is totally impervious to the real probability (whatever that means) and its actuarial underpinnings. Yet you wonder: what might the implication of this be for probability and our understanding of it? For surely, the textbooks of financial derivatives will never dispense with the concept of probability?
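To make the 'martingale route' concrete, here is a minimal sketch (parameters assumed, not taken from Neftci): the arbitrage-free price is computed as the discounted expectation of the payoff under the changed measure, in which the stock drifts at r, and it agrees with the closed-form solution of the pricing PDE obtained by the original Black-Scholes route. The real drift and the risk premium never enter.

[code]
# Sketch of risk-neutral ('martingale route') pricing vs. the PDE closed form.
# All parameters are assumed for illustration.
import numpy as np
from scipy.stats import norm

S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.2

# Monte Carlo under the risk-neutral measure: dS = r*S*dt + sigma*S*dW
rng = np.random.default_rng(0)
Z = rng.standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
mc_price = np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

# Closed-form Black-Scholes, i.e. the solution of the same PDE obtained by route (1)
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(f"martingale route (Monte Carlo): {mc_price:.3f}")
print(f"PDE route (closed form):        {bs_price:.3f}")   # ~10.45; the real drift mu never appears
[/code]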
It is one thing to suddenly wake up in a market where derivatives trade liquidly alongside their underlying and can be used as inputs in the pricing models; it is another thing to initiate the market on that road.

Philosophical theories of probability

The philosophical interpretation of probability has a long history and a very thick literature. It ranges from probability being only a shorthand for statistical frequency, in other words an essentially ex-post concept (Richard von Mises), to probability as a propensity, i.e. an ex-ante concept that specifically concerns the single event and does not depend on the existence of a whole statistical series or reference class (this was advocated by Karl Popper, after quantum mechanics had pressed the case of irreducible randomness that didn't relate to statistics but seemed to suggest a random generator truly at work behind each individual experiment), to subjective probability, of course, where probability is identified with the betting odds that some agent would produce (Bruno de Finetti). It is not my purpose to rehearse this debate through the very same examples that all those authors have used, usually dice, roulette wheels, mortality tables... and quantum mechanics, but to see whether the derivatives market could not offer a fresh perspective.

Note that, apart from revealing irreducible randomness in nature, quantum mechanics quickly posed deeper problems, such as what we mean by object, or property, or physical state, or even identity of the particles, etc. In itself, the algorithm for computing quantum probabilities posed no particular problem (the Born rule). As for the reason why such a random generator existed in nature, the most advanced interpretations ended up suggesting that it was not probability that we were ultimately talking about in quantum mechanics, but something else. According to Jan von Plato (Creating Modern Probability), it is statistical physics that contributed all the tools of modern probability theory in its most advanced branches, namely stochastic processes and calculus; yet, he complains, it never was prominent in the philosophical debate on probability because it was eclipsed by quantum mechanics. (Chaos theory and chaotic determinism are a different subject, of course.)

My observation is that derivative pricing is today the most advanced branch of the most advanced branch, and probably for this reason its contribution to the philosophical debate on probability is even smaller than that of statistical physics, not to say totally non-existent. Blame it on the excessive sophistication of the mathematics. We have accustomed ourselves so well to the theoretical notion of a 'random generator', and our computerized Monte Carlo simulations seem to materialize it so well, that no one in finance really wonders what it means, philosophically or at least semantically, that we should give ourselves such a generator and write down such a thing as a stochastic process. My contention is that, because derivatives markets are so real and so indisputable today, because prices exist materially and the probability of a single-case event doesn't exist or at least is still problematic, maybe we could follow through the logic of the markets and establish a link between the contingent event and the price of the corresponding contingent claim without the intermediation of probability.

Note that, to this day, it is still not clear what Popper's propensity means.
Nobody really knows what it means that a coin-tossing experiment, in and of itself, should have a 0.5 propensity -- Popper also calls it a 'dispositional property' -- of producing heads or tails. Popper insists that propensity is real and present in the situation as a 'generating condition' (same word as 'random generator'), i.e. it is truly ex-ante, as opposed to being just a nominalistic rephrasing of an ex-post frequency. He writes: 'Like all dispositional properties, propensities exhibit a certain similarity to Aristotelian potentialities. But ... they cannot ... be inherent in the individual things. They are not properties inherent in the die, or in the penny, but in something a little more abstract, even though physically real: they are relational properties of the total objective situation' (Realism and the Aim of Science, p. 359). Apart from the fact that Popper insists it really exists, this doesn't tell us, of course, what propensity really is.

Popper himself closes the discussion by arguing that propensity is in the end 'what corresponds to the transition from the mathematical frequency theory of von Mises to the neo-classical or measure-theoretical treatment of probability' (p. 360). In other words, Popper evades the debate through the point I have noted above, through the sophistication of mathematics being ultimately the only true thing. He notes that measure theory 'is superior to the frequency theory, not only from a philosophical but also from a purely mathematical point of view' (Ibid.). Why superior? Presumably because, as Popper writes, 'the neo-classical theory does not attempt to give a definition of "probability", either on the lines of Laplace or of von Mises ... Instead it takes "probability" as anything that satisfies the rules of certain calculus ... It clearly separates the formal task of constructing a mathematical calculus of probability from the task of interpreting this calculus... ' (pp. 374-75). In short, the real place of the random generator lies in mathematics, not in physics, and the ex-ante notion of objective probability, or propensity, remains undefined and unexplained -- a lack that Popper now ironically recognizes as a philosophical superiority.

Money and break-even as primitive concepts

This leaves, as the only meaningful objective probability, the ex-post concept of a statistical average, which depends on counting a population and cannot be single-case; and, as the only meaningful ex-ante concept of probability, de Finetti's subjective probability. My observation is that both depend on money and on the existence of some financial account, and that they don't stand by themselves. Subjective probability is obviously financial, because de Finetti explicitly equates it with the betting odds that a banker is supposed to quote for you. Thus de Finetti had in mind a transaction and a price, and he fell one step short of the market of contingent claims. I think the reason why he did not fully embrace the market was that he was still keen on defining probability, and that probability had to reside in the mind of a subject if it was found not to reside in nature or in some object. For surely, it could not reside in the mind of the market! As for statistical probability, I claim it is related to money too, because in this case the account in question is of course the insurance company's. It is all very well to define statistical probability as the limiting frequency of a certain occurrence in von Mises' collectives.
However, I believe the real operational concept is that of breaking even in the long run, when somebody plays those dice, or plays those mortality tables. Characteristically, for von Mises' statistical probability to make sense, the series it is measured upon has to be 'truly random'. (Imagine that some demon is systematically drawing a series that didn't reflect the 'true' probability, say, an indefinite series of 'heads'.) And how does von Mises avoid the circularity of defining 'truly random' when probability is not yet defined? By arguing that a truly random sequence is one that would be immune to gambling systems. In other words, a trading concept, or generally an accounting argument, lies at the basis of von Mises' whole edifice! For this reason, I wish to argue that statistical regularity and the corresponding break-even in the long run are the primitive concepts, and that probability, if you insist we should consider it at all, is only a derivative concept.

Von Mises claims that his statistical probability is not definable without reference to the whole series or population. I elaborate on this by saying that statistical probability -- or actuarial probability -- is not definable without prior reference to the ex-post accounting equation of the insurance company. Only because it has broken even on average, admittedly after a long history of trial and error and adjustments of the insurance premium, can the insurance company later turn back to the single case and form such a concept as the probability of death of that particular individual. (See this article.)

What I am saying is that, appearances to the contrary, the integral comes before the integrand. First you sum up all the cases, and then you work out probability as the frequency. I am proposing that you go one step further and integrate the probability, not just against the indicator function of the event of death but, more realistically, against the money paid in case of death. Defining probability as the limiting frequency is not enough and is not the real thing. The real thing is that only the person (either physical or moral) that has broken even on average relative to the given statistics and population can turn back and speak of the probability of the single occurrence. There would be no metaphysical coup de force in this, but only a different way of slicing the account. It is not against the current of time that one should navigate in order to switch from ex-post to ex-ante, but against the current of money.

I guess the reason why I insist that time should be replaced by money is that what bothers me in probability is the element of time and the time connotation of terms like 'to expect', 'to predict', etc. What bothers me is that we should wait for the event to happen and wonder, in the meantime, what its probability might be. I say we should wait in money, not wait in time, because what is accountable is money. In both the cases of subjective and objective probability, we accounted for the event; we didn't value it. Also, this financial underpinning of probability will allow me to argue, with all the greater force, that indeed prices of contingent claims exist and probabilities don't. For we are still missing the one configuration in which probability could be said really to exist (thereby vindicating the claim of a metaphysical realist), namely, a meaningful ex-ante concept of objective probability.
We are still missing an objective concept of single-case probability -- probability attaching to a singular event that is part of no population.

The market as a fundamental category

But why speak of the 'probability' of this event? Why don't we just say that we wish to account for it? Here is my proposition: instead of starting from the derivative concept -- probability -- and wondering how this probability could be adapted to the genuine single case for which there is no population, no statistics, and no breaking even of an insurance company, why not branch off at the earlier step and see how the concept of break-even itself can be adapted to the single case? In other words, I wish to create the conditions of break-even for the contingent claim that is written on a singular event before even the notion of probability is constructed -- for probability is a derivative concept, to repeat, and is only relative to the break-even of an insurance company over a whole population. And how do we break even when we hold a singular contingent claim? Obviously not by waiting for the long run, as there is none. We break even by not waiting, by simply making sure that we could liquidate our holdings at once if we wished; in other words, by creating a market for them. Conversely, if such a liquid market existed, would we be satisfied 'valuing' our holdings other than by marking them to market?

Now we see that our whole predicament comes from our unwillingness to recognize in the market a fundamental category. It is not a slight thing to have invented money, contingent claims, and the marketplace where they are exchanged. Exchanging is a fundamental invention. If probability is the concept that was invented precisely to suit the insurance company, in those situations where statistical regularity (an indisputable law of nature), combined with the integrity of the account of the insurance company, created the time loop in which it looked as if the event could be addressed ex-ante and as if meaning could be given to its probability, then we should look for an overall alternative to probability when there is no such statistics and no insurance company. Instead of changing the probability measure, we should change the whole concept of probability.

We are so entrapped in probability that the actuarial value of the contingent claim seems indelible from our minds. We prefer to imagine that a certain asset first admits of a value -- even on pain of having artificially to create the population of which its one-time payoff would be a member -- and second that the market is in charge of altering or changing this value by the play of supply and demand, or by the fact that some players won't be content to break even on average, instead of accepting that the asset has no value but only a price through the exchange. To us, exchange can only mean price change.

Admittedly, the major conversion I am proposing, in which price absolutely replaces probability, leaves us in the later impossibility of modeling the dynamics of price. For how could we model its dynamics except through probability? But do we really need to model it, now that it is given by the market? Isn't the rule precisely constantly to recalibrate our models of the underlying dynamics to the market prices of derivatives?
Better: are our derivative pricing models really models of the underlying dynamics or just risk-neutral pricing operators that allow us to capture a semblance of consistency between the instant prices of derivatives, with no idea of what will happen next apart from recalibration to the market update? Why indeed doesn't the market become a pricing theory of its own, THE pricing theory? Why do we need a theory for the market? Is it because the market is complex and we need to model it? Well, I say the market is simple, not complex. Just forget the crowd that constitutes it. Simply, the market is what gives the price (and the price process) of contingent claims. And if you're not happy calling the market a theory, then just call it a technology.
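As an illustration of what 'recalibration to the market' amounts to in practice, here is a minimal sketch (the option quote and all parameters are assumed): instead of estimating any real-world distribution, the model parameter is simply backed out of a traded option price, and the exercise is repeated whenever the quote moves.

[code]
# Sketch of calibration to a quoted option price (quote and parameters assumed).
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S0, K, T, r, sigma):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

S0, K, T, r = 100.0, 100.0, 0.5, 0.02
market_quote = 6.80                      # assumed listed option price

# Calibration: find the volatility that reprices the quoted option exactly.
implied_vol = brentq(lambda s: bs_call(S0, K, T, r, s) - market_quote, 1e-4, 3.0)
print(f"implied vol = {implied_vol:.4f}")

# 'Recalibration' is nothing more than repeating this whenever the quote moves:
# the model is pinned to prices, not to any historical ('real') distribution.
[/code]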
Last edited by numbersix on June 22nd, 2011, 10:00 pm, edited 1 time in total.
 
Alan
Posts: 9785
Joined: December 19th, 2001, 4:01 am
Location: California

Actuarial Pricing vs Financial Mathematics Pricing

June 23rd, 2011, 2:54 pm

Quote (originally posted by numbersix):
Admittedly, the major conversion I am proposing, in which price absolutely replaces probability, leaves us in the later impossibility of modeling the dynamics of price. For how could we model its dynamics except through probability? But do we really need to model it, now that it is given by the market? Isn't the rule precisely constantly to recalibrate our models of the underlying dynamics to the market prices of derivatives? Better: are our derivative pricing models really models of the underlying dynamics or just risk-neutral pricing operators that allow us to capture a semblance of consistency between the instant prices of derivatives, with no idea of what will happen next apart from recalibration to the market update? Why indeed doesn't the market become a pricing theory of its own, THE pricing theory? Why do we need a theory for the market? Is it because the market is complex and we need to model it? Well, I say the market is simple, not complex. Just forget the crowd that constitutes it. Simply, the market is what gives the price (and the price process) of contingent claims. And if you're not happy calling the market a theory, then just call it a technology.

I enjoyed this essay, and have absolutely no quibble with the thesis that, fundamentally, objective probabilities do not exist in most market setups (ignoring lotteries/casinos, etc). What I don't see is the following. Go back to the 50's and 60's, when listed and well-developed derivative markets did not exist (except OTC), but well-developed stock and commodity markets did exist. It is self-evident that these markets were worthy of academic (meaning serious) study. So, indeed, people like Osborne and others collected price series and studied them. They found that, to a first approximation anyway, no matter what security you picked, the price change series behaved much like random walks. (Here by 'random walk', I mean it in the weak sense of simply uncorrelated price changes). The statistical properties of market price series beg for an explanation -- can you explain or even study these properties without invoking probability? (honest question)
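For concreteness, here is a minimal sketch of the ex-post property being described (simulated data stands in for the kind of price series Osborne collected; nothing here is real market data): the lag-1 autocorrelation of the changes of a random-walk series comes out close to zero.

[code]
# Sketch: estimate the lag-1 autocorrelation of price changes (simulated stand-in data).
import numpy as np

rng = np.random.default_rng(1)
log_prices = np.cumsum(0.001 * rng.standard_normal(10_000))   # a stand-in price history
changes = np.diff(log_prices)

lag1 = np.corrcoef(changes[:-1], changes[1:])[0, 1]
print(f"lag-1 autocorrelation of changes: {lag1:+.4f}")       # close to zero for a random walk
[/code]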
Last edited by Alan on June 22nd, 2011, 10:00 pm, edited 1 time in total.
 
numbersix
Posts: 474
Joined: July 23rd, 2001, 2:33 pm

Actuarial Pricing vs Financial Mathematics Pricing

June 23rd, 2011, 3:28 pm

Thank you. A quick reply is that I do believe in statistical laws and statistical regularity -- an indisputable law of nature, as I say in my 'essay'. However, the unwarranted move is to go from the statistical distribution to the random generator, from ex-post to ex-ante. True, this sounds like splitting hairs, but the whole metaphysical divide between the empiricists who only believe in statistics and the realists who believe in the subsistence of generators lies in just this fine point.

On the other hand, I have no problem considering that the market of contingent claims has always existed, even in the 50's and 60's, but that it was virtual (or potential) then, not actual. (This doesn't make it less real.) As a matter of fact, I have no problem considering that the millennium-old problem of probability was waiting for the technology of contingent claims, in its ultimate sophistication, to find its answer.
 
Alan
Posts: 9785
Joined: December 19th, 2001, 4:01 am
Location: California

Actuarial Pricing vs Financial Mathematics Pricing

June 23rd, 2011, 4:06 pm

OK, first sticking to ex-post, would you object to saying: well, we know that a data generator does not truly exist, but the data is reasonably consistent with certain generators? If so, then a data generator becomes merely a nice short-hand for describing, to a first approximation, many of the statistical properties of the series -- again, ex-post.

Now, as to ex-ante, we probably agree that, regardless of what was said about our historical series, it's potentially a qualitatively different ball-game when we look to the future. Our statements have to become much more tentative. Nevertheless, if you were asked for your opinion on whether a 5 min. price change series of IBM, extending from today to a year from now, would exhibit similarly low auto-correlation as it did historically, I will guess your answer would be yes. If so, what are your thought processes that support this opinion? (I ask because I am struggling to see how to discuss the random character of stock prices, to borrow Cootner's book title, in your framework.)
Last edited by Alan on June 22nd, 2011, 10:00 pm, edited 1 time in total.
 
numbersix
Posts: 474
Joined: July 23rd, 2001, 2:33 pm

Actuarial Pricing vs Financial Mathematics Pricing

June 23rd, 2011, 5:03 pm

Agreed on the data being consistent with the theoretical posit of a generator; therefore the generator is a model, or a way of speech, or a shorthand for describing the phenomenon. As for the statistical regularity persisting in the future, I have no problem with that either, as this is the definition of regularity. But this is not ex-ante, this is believing in the stability of laws of nature. This is my thought process.

Ex-ante would be to focus on the next individual 5 min. price change, I mean the one beginning right now, not on the 5 min. price change series, and to try to give a meaning to its probability distribution other than the ex-post fact that, as time goes by, it will belong to a whole population in which the statistical auto-correlation is indeed observed. To me, ex-ante is indissociable from 'single-case'.

In short, we all write dS = mu*S*dt + sigma*S*dZ (this is your 5 min. price change). What does that mean exactly?
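For what it is worth, here is the operational reading that equation is usually given (a minimal sketch; all parameters are assumed): the next 5 min. change is treated as one draw from a generator, and any 'probability' attached to it is only exhibited by simulating the whole ensemble of draws -- which is exactly the ex-post move in question.

[code]
# Sketch: what dS = mu*S*dt + sigma*S*dZ is usually taken to mean operationally
# (parameters assumed). The 'probability' of the next 5-min change only shows up
# as frequencies over a simulated population of draws, i.e. ex-post.
import numpy as np

mu, sigma, S0 = 0.08, 0.25, 100.0
dt = 5.0 / (252 * 6.5 * 60)                    # five minutes of trading time, in years

rng = np.random.default_rng(2)
dZ = np.sqrt(dt) * rng.standard_normal(100_000)
dS = mu * S0 * dt + sigma * S0 * dZ            # many hypothetical 'next' 5-min changes

print(f"mean dS   = {dS.mean():+.5f}")
print(f"std  dS   = {dS.std():.5f}")
print(f"P(dS > 0) ~ {np.mean(dS > 0):.3f}")    # a frequency over the ensemble, not the single draw
[/code]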
Last edited by numbersix on June 22nd, 2011, 10:00 pm, edited 1 time in total.
 
Alan
Posts: 9785
Joined: December 19th, 2001, 4:01 am
Location: California

Actuarial Pricing vs Financial Mathematics Pricing

June 23rd, 2011, 5:37 pm

OK, well I am surprised to hear you say you believe in the "stability of the laws of nature", as I thought we would agree that there is fundamentally no 'stable law' that describes the upcoming IBM price series over the next year. But, if I take you at your word, then there seems to me a potential contradiction here. If you can elaborate on what the putative law is, then I suspect it will lead to some probability-type statements about the single case event of the next IBM 5 min price change.

p.s. To answer your question, if it's not rhetorical, I would say dS = sigma(t) S dZ for the 5 min price change is simply a very useful model for the typical investor (including myself) without some insider knowledge of the underlying firm. It obviously has a precise mathematical meaning and leads to probability statements about S(t+5 min)-S(t). It also leads to useful normative prescriptions about how to approach investments, how not to waste money needlessly on short-term trading, and much else. Certainly, it's not a law of nature in the sense of physical law and is easily refuted if taken seriously on that basis. It also embeds the notion that, for many stocks, we have excellent price continuity during the NYSE session*. So, as you say, it's a 'way of speech' -- a very helpful mental model for thinking about the unknowable future and forming judgments about it.

*[Of course, continuity for a lattice process with penny increments strictly means that each tick of DeltaS will be $0.00 or $0.01. This is not quite true for IBM, as the trade ticks are not that small, but if IBM split to say $20, it probably would be].
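As an example of the kind of probability statement such a model licenses (a sketch; the price level and volatility are assumed, not quoted): under dS = sigma*S*dZ the 5-minute change is approximately Gaussian with zero mean, so one can state its standard deviation and a 95% interval.

[code]
# Sketch: a probability statement about S(t+5 min) - S(t) under dS = sigma*S*dZ
# (price level and volatility assumed).
import numpy as np
from scipy.stats import norm

S, sigma = 170.0, 0.20                         # assumed IBM-like price and annual volatility
dt = 5.0 / (252 * 6.5 * 60)                    # five minutes of trading time, in years

std_5min = S * sigma * np.sqrt(dt)             # approximate std of the 5-min change
lo, hi = norm.ppf([0.025, 0.975], scale=std_5min)

print(f"std of 5-min change : ${std_5min:.3f}")
print(f"95% interval        : [{lo:+.3f}, {hi:+.3f}] dollars")
[/code]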
Last edited by Alan on June 22nd, 2011, 10:00 pm, edited 1 time in total.
 
Marsden
Posts: 3829
Joined: August 20th, 2001, 5:42 pm

Actuarial Pricing vs Financial Mathematics Pricing

June 23rd, 2011, 7:08 pm

numbersix, while admitting that I had to refresh my coffee a couple of times while reading your essay, and therefore that I may have dropped some of your lines of reasoning inadvertently, I think you may be missing the forest for the trees. (Let me say at the outset that I think probability is always subjective; I think it's likely that this will come up at some point.)

You seem, from my perspective, to accord prices far more "reality" than is warranted. In many respects, prices are no more real than are shadows: they have no mass; they are not subject directly to laws of physics such as we have developed them; etc. I think prices are very well considered in the way that Wittgenstein considered "meaning": meaning is defined by use.

What is the use of a price? Prices are not stand-alone quantities that support nothing more than a new-fangled regime of gambling; they represent economic realities. In particular, they tell us something about the economic circumstances that brought them to wherever they are, and they direct our own economic activities by giving us insight on the rewards for different undertakings.

But how can this relationship with economic reality function without "objective probabilities"? (And here is where my belief that all probability is subjective comes up: "objective" as used in "objective probability" is, as I understand it, one's best guess at the probability distribution of future outcomes, independent of other considerations that may enter into prices such as utility, etc. "Objective probabilities" are subjective because they are not "God's own true probability distributions"; rather they are conceived separately by individual agents based upon the information -- and possibly the misinformation -- that each agent possesses. I hope that doesn't disagree with anyone else's understanding, but I thought in any case that I'd be very clear about it.) Maybe it can, and certainly "objective probabilities" are not the only inputs affecting prices, but I can't clearly see how prices would retain their economic meaning and function without any consideration of "objective probabilities."
Last edited by Marsden on June 22nd, 2011, 10:00 pm, edited 1 time in total.
 
numbersix
Posts: 474
Joined: July 23rd, 2001, 2:33 pm

Actuarial Pricing vs Financial Mathematics Pricing

June 23rd, 2011, 9:23 pm

Alan, just to be clear -- and I am sorry for any confusion -- I just followed on your IBM example because I thought you were only interested in exhibiting the passage between ex-post and ex-ante, between the statistics and the data generator that is merely a shorthand way of speaking of the statistics. The law of nature that I referred to in that instance was just the statistical regularity.

However, if what you really had in mind was the IBM stock as such, qua denizen of the market, then of course I don't believe in any law there, because I believe the statistics of prices are not stationary. On the contrary, I am a firm believer that the market is a nest of Black Swans. (And I don't know what happened in the 50's - 60's for the distribution to seem stationary then.)

Still, even if there were stationarity and the putative statistical law was invoked (say we are not talking about markets but about mortality), certainly, as you say, this will lead to probability-like statements concerning the next single case event, as this is just a statement and we have agreed that probability is just this way of speech; but the whole point is precisely to see what this statement could possibly mean other than, blah blah, the next tick will be part of a statistical population with the corresponding distribution, etc. In the end, this, once again, refers to the population, not the next tick as such.

I quickly dismissed it in my 'essay', but the real pressing case for single-case objective probability (or propensity) is quantum mechanics. Here, truly, there is something going on concerning the single case as such, that is both random and not necessarily related to a population. Quantum mechanics is really what pressed the case of irreducible single-case objective probability. However, as it happens, the notion of generator and possible state is inadequate here, because quantum indeterminacy is not about probability; it is about something deeper.
Last edited by numbersix on June 22nd, 2011, 10:00 pm, edited 1 time in total.
 
Fermion
Posts: 4486
Joined: November 14th, 2002, 8:50 pm

Actuarial Pricing vs Financial Mathematics Pricing

June 23rd, 2011, 10:04 pm

Quote (originally posted by Marsden):
You seem, from my perspective, to accord prices far more "reality" than is warranted. In many respects, prices are no more real than are shadows: they have no mass; they are not subject directly to laws of physics such as we have developed them; etc. I think prices are very well considered in the way that Wittgenstein considered "meaning": meaning is defined by use. What is the use of a price? Prices are not stand-alone quantities that support nothing more than a new-fangled regime of gambling; they represent economic realities. In particular, they tell us something about the economic circumstances that brought them to wherever they are, and they direct our own economic activities by giving us insight on the rewards for different undertakings. But how can this relationship with economic reality function without "objective probabilities"? (And here is where my belief that all probability is subjective comes up: "objective" as used in "objective probability" is, as I understand it, one's best guess at the probability distribution of future outcomes, independent of other considerations that may enter into prices such as utility, etc. "Objective probabilities" are subjective because they are not "God's own true probability distributions"; rather they are conceived separately by individual agents based upon the information -- and possibly the misinformation -- that each agent possesses. I hope that doesn't disagree with anyone else's understanding, but I thought in any case that I'd be very clear about it.) Maybe it can, and certainly "objective probabilities" are not the only inputs affecting prices, but I can't clearly see how prices would retain their economic meaning and function without any consideration of "objective probabilities."

Yes! What are prices? I claim they are aggregate market estimates of present value as perceived by market participants. They are not the present value itself, but they imply that market participants themselves have a probability distribution in mind, not just in the future but also at the present instant. One does not have to believe in objective probability, or even that the distribution is knowable, to recognize that an implicit price distribution exists in the market both now and in the future as a result of the aggregate behavior of market participants, and that this distribution collapses to a unique value at the moment of a trade.

Quote (originally posted by numbersix):
However, if what you really had in mind was the IBM stock as such, qua denizen of the market, then of course I don't believe in any law there, because I believe the statistics of prices are not stationary. On the contrary, I am a firm believer that the market is a nest of Black Swans. (And I don't know what happened in the 50's - 60's for the distribution to seem stationary then.)

At the current instant, with all available information known (by definition), there are no black swans to affect the current distribution of possible prices out of which the market selects a price. As regards the future, one can conceive of a future of black swans so extreme as to make the mean of the future distribution completely unpredictable, while leaving its shape unaffected -- or at least a fairly stable function of time.
When it comes to valuing derivatives, risk neutrality tells us that we don't care about the expected value -- and by implication nor do we care about those black swans. Only the shape of the distribution matters, and that is a product of trader behavior combined with the aggregate magnitude (but not direction) of black swan events.
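A minimal numerical sketch of that point (all parameters assumed): the Black-Scholes value of a call contains no real-world drift at all, so scenarios with wildly different expected future prices, but the same volatility, carry the same arbitrage-free price.

[code]
# Sketch: the real-world drift (hence the expected future value) drops out of the
# risk-neutral price; only the width of the distribution (sigma) matters. Parameters assumed.
import numpy as np
from scipy.stats import norm

def bs_call(S0, K, T, r, sigma):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.03, 0.25

for mu in (-0.20, 0.00, 0.30):                 # wildly different real-world drifts
    expected_ST = S0 * np.exp(mu * T)          # very different expected values...
    price = bs_call(S0, K, T, r, sigma)        # ...same arbitrage-free option price (no mu anywhere)
    print(f"mu = {mu:+.2f}: E[S_T] = {expected_ST:7.2f}, call price = {price:.3f}")
[/code]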
Last edited by Fermion on June 23rd, 2011, 10:00 pm, edited 1 time in total.
 
numbersix
Posts: 474
Joined: July 23rd, 2001, 2:33 pm

Actuarial Pricing vs Financial Mathematics Pricing

June 27th, 2011, 7:40 am

Quote (originally posted by Marsden):
1) You seem, from my perspective, to accord prices far more "reality" than is warranted.
2) Prices are not stand-alone quantities that support nothing more than a new-fangled regime of gambling; they represent economic realities.
3) "objective" as used in "objective probability" is, as I understand it, one's best guess at the probability distribution of future outcomes.
4) I can't clearly see how prices would retain their economic meaning and function without any consideration of "objective probabilities."

In other words:
- The only thing that is real is the present economic reality.
- When this reality is future, it is not real; it is just possible, therefore it is dealt with by probability.
- Probability is not real; it doesn't replace the missing reality by some real random generator; it takes place in the minds of people.
- Only in the last instance do prices enter into play.
- Prices are even less real than probabilities, because they consist only of meaning and use, and the only meaning they derive is through probabilities.

Here is an alternative set of propositions:
- Contingent claims are not possibilities; they are real sheets of paper (also called contracts) that translate into real money, only contingent, in the future reality.
- What they translate into, today, are prices -- real money too, no less contingent mind you.
- It is the same contingent claims that we have today (and exchange for prices) and will have tomorrow (and exchange for payoffs). Possibilities, by contrast, can dramatically change or be revised (after Black Swan events).
- Money is accountable; probability isn't.
- I wonder whether the future economic realities, as seen from today, can be translated otherwise than by future contingent payoffs.
- I wonder whether the present economic realities, as felt today, can be translated otherwise than by the present prices.

Note that derivative pricing theory is based on probabilistic models of prices, not of abstract states of the world or of the economy. That's why it evolved into a real technology (involving software and listed exchanges) while economic theory didn't and hardly left the stage of game theory.

All that remains to see is that derivative pricing theory, now understood as a technology and no longer as the candid textbook theory everyone thinks it is, closes itself off on prices, via calibration and recalibration. Its textbook use of probability, as I have said, is only lip service to the actuaries.
Last edited by numbersix on June 26th, 2011, 10:00 pm, edited 1 time in total.
 
Fermion
Posts: 4486
Joined: November 14th, 2002, 8:50 pm

Actuarial Pricing vs Financial Mathematics Pricing

June 27th, 2011, 4:12 pm

numbersix:
1. Do you agree that the future consists of possibilities?
2. Do you agree that some possibilities are more likely than others? E.g. if a stock has a price of 100 today, then it is more likely that the price will be 100 tomorrow than that it will be 1000 or 0?
If you answer yes to both these questions,
3. why does it matter that events can change possibilities? Doesn't that just mean that the more or less likely status (i.e. relative probability) changes conditional on new events?

If your response to (3) is that we can't foresee the black swan event and so cannot assess the probability, then
a. In what sense was it possible to say that one price was more likely than another tomorrow in the first place?
b. Why can't we simply say that there is a probability of a black swan event and add it in to our estimates? And if there are black swans on top of black swans recursively, why can't we add those in too?

In other words, why can't we consider black swans to be merely extremely unlikely events and (say) put a maximum probability on their occurrence, rather than throwing up our hands and saying "black swans destroy probability" -- which seems to be your line.

Of course there are some black swan events which genuinely could destroy not just future probability but markets themselves (we even anticipate knowable extreme events -- that aren't even black swans -- that can do that, such as nuclear annihilation, asteroid collision, Yellowstone eruption and so on) but, in that case, don't we just say "well, if that happens we're all dead anyway, so we might as well continue as if they won't happen" (i.e. treat their probability as minuscule and die unavoidably if we're wrong)? An alternative way of stating this is that we can necessarily concern ourselves only with events we can anticipate since, by definition, we cannot do anything about events we cannot anticipate.
 
Marsden
Posts: 3829
Joined: August 20th, 2001, 5:42 pm

Actuarial Pricing vs Financial Mathematics Pricing

June 27th, 2011, 8:30 pm

Quote (originally posted by numbersix):
In other words:
- The only thing that is real is the present economic reality.
- When this reality is future, it is not real; it is just possible, therefore it is dealt with by probability.
- Probability is not real; it doesn't replace the missing reality by some real random generator; it takes place in the minds of people.
- Only in the last instance do prices enter into play.
- Prices are even less real than probabilities, because they consist only of meaning and use, and the only meaning they derive is through probabilities.

The only thing I immediately don't agree with is "the only meaning they (prices) derive is through probabilities." There are also things like utility that potentially affect prices but that are not matters purely of probability.

Quote (originally posted by numbersix):
Here is an alternative set of propositions:
- Contingent claims are not possibilities; they are real sheets of paper (also called contracts) that translate into real money, only contingent, in the future reality.
- What they translate into, today, are prices -- real money too, no less contingent mind you.
- It is the same contingent claims that we have today (and exchange for prices) and will have tomorrow (and exchange for payoffs). Possibilities, by contrast, can dramatically change or be revised (after Black Swan events).
- Money is accountable; probability isn't.
- I wonder whether the future economic realities, as seen from today, can be translated otherwise than by future contingent payoffs.
- I wonder whether the present economic realities, as felt today, can be translated otherwise than by the present prices.

I don't see how, from this set of propositions, economic reality that exists prior to prices based upon it can be recognized; if tomorrow a huge new oil deposit is discovered, how is its value, as reflected in the prices of whatever ownership exists for it, determined? I guess that basically I see your system of belief as lacking a creation myth ...

Quote (originally posted by numbersix):
Note that derivative pricing theory is based on probabilistic models of prices, not of abstract states of the world or of the economy. That's why it evolved into a real technology (involving software and listed exchanges) while economic theory didn't and hardly left the stage of game theory.

Here you seem to envision creating, essentially, a system of physical laws that apply to prices. I don't know if that's wise. Earlier, I noted that prices are in some ways no more real than are shadows. If we were to try to create a system of physical laws that applies to shadows solely through the method of observing shadows, I think we'd end up with a mess. It would only be by the insight that shadows are created by a light source, a light blocker, and a reflecting surface, and that each of these will be composed of matter and subject to physical laws regarding matter, that we might have a reasonable hope of creating a derivative set of "physical laws" that apply to shadows. Do you think I'm raising a false parallel, and if you do, why?

Quote (originally posted by numbersix):
All that remains to see is that derivative pricing theory, now understood as a technology and no longer as the candid textbook theory everyone thinks it is, closes itself off on prices, via calibration and recalibration. Its textbook use of probability, as I have said, is only lip service to the actuaries.

I think that if a price were readily available for every conceivable thing, your system would hold together perfectly.
But I think there will always be something with economic significance for which no price is available; possibly that is even a proposition that can be proven. Without having given the matter too much thought yet, I think in these situations your system will want a creation myth like I noted above. What happens with your system when it runs into needing initial and boundary conditions?