I have attached the Fed hike probabilities quoted on Bloomberg on the 24th.

I have figured out how to calculate the hike probability (by following http://www.cmegroup.com/education/fed-f ... lator.html), and since the cut probabilities are 0%, I can also calculate the no change probability by following

P(no change) = 100% - P(hike).

But if the cut probability weren't 0%, the constraint would be

P(cut) + P(hike) + P(no change) = 100%

Does anyone know how to calculate P(cut)?
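For what it's worth: with three outcomes, a single futures price plus the sum-to-one constraint gives only two equations in three unknowns, so P(cut) isn't identified from the futures alone; CME's published numbers rest on an assumption of at most two possible outcomes per meeting. A minimal sketch of that two-outcome calculation, with made-up numbers (the function name and inputs are my own illustration, and I ignore CME's intra-month rate averaging for meeting months):

```python
def hike_probability(futures_price, rate_unchanged, rate_hike):
    """Two-outcome probability implied by a 30-day fed funds future.

    The futures-implied expected rate is 100 - price, and under the
    two-outcome assumption
        implied_rate = p * rate_hike + (1 - p) * rate_unchanged,
    so solve for p. (Ignores CME's intra-month averaging.)
    """
    implied_rate = 100.0 - futures_price
    return (implied_rate - rate_unchanged) / (rate_hike - rate_unchanged)

# Illustrative numbers only: futures at 98.55, current effective rate
# 1.375%, post-hike rate 1.625% (a 25 bp move).
p_hike = hike_probability(98.55, 1.375, 1.625)   # ~0.30
p_no_change = 1.0 - p_hike
```

To back out a separate P(cut) you would need a second instrument (e.g. options on fed funds futures) or a further assumption.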

Statistics: Posted by mesk — February 9th, 2018, 12:05 pm


cheers

Statistics: Posted by ml11986 — December 1st, 2017, 12:37 pm


Statistics: Posted by Nishizono — August 25th, 2017, 4:13 am


1) Parallel VaR

How to compute summary statistics of samples from probability distributions in a distributed manner.

In risk management you see a lot of MC simulation of market prices, e.g. "give me 10,000 samples of possible prices of IBM next month". However, banks do this for 10,000 to 100,000s of stocks, and those simulations are slow because they also need to simulate small time steps as well as all the dependencies between those 10,000 stocks, and on top of that lots of derivative prices, etc. It's slow.

To speed things up, these samples are generated on multiple machines and then merged. E.g. suppose you have 10 machines, each generating 1,000 samples of (amongst others) the IBM stock price. Can you find an efficient algorithm that tells you the level above which the IBM stock will stay with 99% probability, without first merging all 10x1,000 samples onto a single machine (transmission and memory cost) and then sorting them? Some existing strategies compute properties of each machine's 1,000 samples, communicate those properties to a central location, and come up with an answer by combining them. In general you want all sorts of properties of the distribution: mean, variance, p-levels, etc.
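One minimal strategy along these lines, assuming the workers can agree on a common bin range up front (an assumption on my part): each machine transmits a fixed-bin histogram, O(bins) instead of O(samples), the histograms are summed centrally, and the 1% level is read off the merged empirical CDF. A sketch with illustrative numbers:

```python
import numpy as np

# Shared price grid for the IBM samples, agreed up front (an assumption).
EDGES = np.linspace(50.0, 250.0, 401)   # 400 bins of width 0.5

def local_summary(samples):
    """Run on each worker: transmit O(bins) counts instead of raw samples."""
    counts, _ = np.histogram(samples, bins=EDGES)
    return counts

def merged_quantile(summaries, q):
    """Run centrally: sum the counts and invert the empirical CDF."""
    total = np.sum(summaries, axis=0)
    cdf = np.cumsum(total) / total.sum()
    idx = np.searchsorted(cdf, q)
    return EDGES[idx + 1]   # right edge of the bin where the CDF crosses q

rng = np.random.default_rng(0)
workers = [rng.normal(150, 20, 1000) for _ in range(10)]   # 10 machines
level = merged_quantile([local_summary(w) for w in workers], 0.01)
# 'level' approximates the price IBM stays above with 99% probability,
# accurate to roughly the bin width.
```

More refined mergeable summaries (t-digest-style sketches, for instance) trade the fixed grid for adaptive accuracy, but the merge-summaries-centrally pattern is the same.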

2) Alan knows more about this: some market models have stochastic volatility. These are latent-variable models, and reconstructing the dynamics (the state across time) of the latent variable is typically done with ML and (discrete) HMM techniques. It might be interesting to work in that area.
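A minimal sketch of the discrete-HMM idea, assuming a toy two-state (low-vol / high-vol) regime model filtered with the forward algorithm; all parameters and data here are illustrative assumptions, not calibrated values:

```python
import numpy as np

A = np.array([[0.95, 0.05],      # regime transition matrix (sticky regimes)
              [0.10, 0.90]])
sigma = np.array([0.01, 0.03])   # daily return vol in each regime

def filter_regimes(returns):
    """P(regime_t | returns up to t) via the normalized forward recursion."""
    p = np.array([0.5, 0.5])     # flat prior over regimes
    path = []
    for r in returns:
        like = np.exp(-0.5 * (r / sigma) ** 2) / sigma  # Gaussian likelihood
        p = like * (A.T @ p)     # predict with A, then weight by likelihood
        p /= p.sum()             # normalize to a probability vector
        path.append(p.copy())
    return np.array(path)

# Synthetic data: 50 low-vol days followed by 50 high-vol days.
rng = np.random.default_rng(1)
rets = np.concatenate([rng.normal(0, 0.01, 50), rng.normal(0, 0.03, 50)])
probs = filter_regimes(rets)
# probs[-1, 1] should be high: the filter detects the high-vol regime.
```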

3) High frequency trading order book algorithms

At very small time scales the market is modelled as an "order flow". A central element in this model is the order book. Imagine a central apple-trading shop. People come in and out, trying to buy or sell apples at various prices and quantities. Someone might come into the shop at 11:00 wanting to buy 12 apples for $1.30 each; nobody is willing to sell apples that cheaply, so he leaves the shop at 11:35 empty-handed. Imagine you log all these events of people entering and leaving the shop in a big database. Can you come up with an efficient algorithm that retrieves a list of all the people who were in the shop last Friday at 9:05:36? What would my average price be if I bought 60 apples from the people who were in the shop at that time? Suppose I wanted to pay just $1 per apple and never left the shop until I got them; at what time would I get them? Can you answer these questions without fully retrieving all the people going in and out of the shop? If you decompose these questions into algorithms, you'll see there are some interesting data structures that can speed up answering them. Speed matters here because these order books are simulated through time to test HF trading bots, and that can involve hundreds of millions of events.
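One sketch of a data structure for the "who was in the shop at time t" query, assuming events are recorded in time order: checkpoint the set of resting orders every K events, and replay only the events after the last checkpoint before t. The class and field names are my own illustration, not a standard API:

```python
import bisect

class ShopLog:
    """Time-travel query over enter/leave events via periodic checkpoints."""

    def __init__(self, checkpoint_every=1000):
        self.k = checkpoint_every
        self.times, self.kinds, self.ids = [], [], []    # event log, time-sorted
        self.cp_times = [float('-inf')]                  # checkpoint timestamps
        self.cp_sets = [frozenset()]                     # order ids present then

    def record(self, t, kind, order_id):                 # kind: 'enter' / 'leave'
        self.times.append(t)
        self.kinds.append(kind)
        self.ids.append(order_id)
        if len(self.times) % self.k == 0:                # periodic checkpoint
            snap = frozenset(self.present_at(t))
            self.cp_times.append(t)
            self.cp_sets.append(snap)

    def present_at(self, t):
        """Who was in the shop at time t.

        Cost: O(log n) for the two bisects plus the events since the last
        checkpoint, instead of replaying the whole log from the start.
        """
        c = bisect.bisect_right(self.cp_times, t) - 1
        live = set(self.cp_sets[c])
        lo = bisect.bisect_right(self.times, self.cp_times[c])
        hi = bisect.bisect_right(self.times, t)
        for i in range(lo, hi):
            (live.add if self.kinds[i] == 'enter' else live.discard)(self.ids[i])
        return live
```

The same checkpoint-plus-delta idea extends to "average price for 60 apples at time t": keep the resting orders in a price-sorted structure at each checkpoint and walk it greedily.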

Statistics: Posted by outrun — August 24th, 2017, 9:14 am


Re your chosen topic, the article attachment I posted in this thread may be of interest.

It's unlikely you'll be able to get up to speed in the field to produce something really novel in 2 months.

Here is a suggestion for a topic. Learn how to produce the so-called risk-neutral distribution [$]Q(S_T)[$], which is the market's distribution outlook for the future stock price [$]S_T[$] (adjusted for risk), for the S&P500 Index. Here [$]T[$] is a future date -- say 1 month away. You deduce [$]Q(S_T)[$] from current SPX option chain data, namely using options expiring in one month.

This will require you to learn how to automate the acquisition of the option prices, convert them to Black-Scholes implied volatilities, fit those implied volatilities to a smooth function vs. strike K (I suggest Gatheral's SVI fit), insert that function back into the Black-Scholes formula, and then take two K-derivatives (analytically). This last step uses what is called the Breeden-Litzenberger formula. All of this should be something learnable with your background, will likely be novel enough for your instructor, and I believe it's quite doable in a couple months.

Thanks for the link and the topic suggestion; they are very interesting. Unfortunately, I misinterpreted my instructor's requirements: he requires the project to include a non-trivial algorithm. It's an algorithms course, but he did not make the algorithm requirement explicit until I asked him directly.

Apologies for that. Nevertheless, I'm going to read up on Black-Scholes and the like, and may attempt your suggestion in my spare time.

Would you happen to have any topic suggestions that satisfy the above simple requirements? Currently I'm leaning towards using Monte Carlo (I have some Bayesian background) and Expectation-Maximization algorithms for portfolio optimization, since I'm losing time, but I'm willing to explore other areas of finance.

Statistics: Posted by Nishizono — August 24th, 2017, 6:10 am


It's unlikely you'll be able to get up to speed in the field to produce something really novel in 2 months.

Here is a suggestion for a topic. Learn how to produce the so-called risk-neutral distribution [$]Q(S_T)[$], which is the market's distribution outlook for the future stock price [$]S_T[$] (adjusted for risk), for the S&P500 Index. Here [$]T[$] is a future date -- say 1 month away. You deduce [$]Q(S_T)[$] from current SPX option chain data, namely using options expiring in one month.

This will require you to learn how to automate the acquisition of the option prices, convert them to Black-Scholes implied volatilities, fit those implied volatilities to a smooth function vs. strike K (I suggest Gatheral's SVI fit), insert that function back into the Black-Scholes formula, and then take two K-derivatives (analytically). This last step uses what is called the Breeden-Litzenberger formula. All of this should be something learnable with your background, will likely be novel enough for your instructor, and I believe it's quite doable in a couple months.
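A numerical sketch of the Breeden-Litzenberger step described above, using a made-up toy smile in place of a real SVI fit and a finite-difference second derivative in place of the analytic one. The density satisfies [$]q(K) = e^{rT}\,\partial^2 C/\partial K^2[$]:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, vol):
    """Black-Scholes call price (no dividends)."""
    d1 = (math.log(S / K) + (r + 0.5 * vol**2) * T) / (vol * math.sqrt(T))
    d2 = d1 - vol * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def smile(K, S=2450.0):
    """Toy implied-vol smile vs. strike -- an assumption, not an SVI fit."""
    return 0.12 + 0.15 * math.log(S / K) ** 2

def rn_density(K, S=2450.0, T=1/12, r=0.01, h=0.5):
    """Risk-neutral density q(K) = exp(rT) * d2C/dK2, by central differences."""
    c = lambda k: bs_call(S, k, T, r, smile(k, S))
    d2C = (c(K - h) - 2.0 * c(K) + c(K + h)) / h**2
    return math.exp(r * T) * d2C

# Sanity checks: the density should be positive near the money and
# integrate to roughly 1 over a wide strike range.
```

With real SPX data the only changes are that `smile` comes from the SVI fit to market implied vols and the two K-derivatives are taken analytically, as Alan describes.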

Statistics: Posted by Alan — August 23rd, 2017, 8:23 pm


Portfolio optimization using Bayesian statistics to obtain point estimates of the mean and variance seems interesting for now. I'm open to more ideas/topics for inspiration.
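For what it's worth, a minimal sketch of that Bayesian point-estimate idea, assuming a conjugate normal prior on each asset's mean return (shrinking the sample mean toward zero) fed into unconstrained mean-variance weights [$]w \propto \Sigma^{-1}\mu[$]; all priors and data here are illustrative assumptions:

```python
import numpy as np

def posterior_mean(sample_mean, n, sample_var, prior_mean=0.0, prior_var=1e-4):
    """Normal-likelihood / normal-prior posterior mean (known-variance case)."""
    precision = n / sample_var + 1.0 / prior_var
    return (n * sample_mean / sample_var + prior_mean / prior_var) / precision

# Simulated daily returns for two assets (illustrative parameters).
rng = np.random.default_rng(2)
rets = rng.multivariate_normal([0.001, 0.0005],
                               [[1e-4, 2e-5], [2e-5, 9e-5]], 250)

# Shrink each sample mean toward the prior, then form mean-variance weights.
mu = np.array([posterior_mean(m, len(rets), v)
               for m, v in zip(rets.mean(axis=0), rets.var(axis=0))])
cov = np.cov(rets.T)
w = np.linalg.solve(cov, mu)
w /= w.sum()    # normalize to fully-invested weights
```

A fuller treatment would put a prior on the covariance as well (e.g. inverse-Wishart) or use Black-Litterman-style views, but the shrink-then-optimize pipeline is the same shape.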

Also, any tips on how I should tackle the research?

Thanks.

Statistics: Posted by Nishizono — August 23rd, 2017, 12:44 pm


It's called gambling when it's your own money.

You're right.

Statistics: Posted by Ethan6 — August 18th, 2017, 2:08 pm


2) The relation between policy rates and interbank rates

Any publications would be appreciated.

Statistics: Posted by jsichalwe — February 28th, 2017, 1:52 pm
