I think using the scheduled times of major financial news, such as NFP or Fed rate decisions, to model currency volatility is crucial. Volatility on an NFP Friday is very different from that on other Fridays.

Statistics: Posted by TraderWalrus — Today, 2:23 pm


It's an interesting problem!

Statistics: Posted by outrun — Today, 2:09 pm


A question could be: which metric better reflects performance and is less sensitive to randomness? Are win rate or profit factor better than drawdown, since they are independent of the sequence of trades?

In any case, it seems more natural to me to decrease or increase the size of a system rather than switch it on and off completely.

Statistics: Posted by TraderWalrus — Today, 1:32 pm


Think of it differently: all the performance measures and statistics from your trading strategy can also act as thermometers that reveal information about the state of the market. Drawdown is just one of them, but you can collect much more info. We once used this at an algo trading firm: we monitored the behavior of the algo (profits, fill rates, etc.) and had hot-switches that triggered when the statistics of that behavior deviated too much from the backtest data we used. Not sure if you like this better?

Statistics: Posted by outrun — Today, 9:59 am


"... acquisitors to take on the titans of industry, and he held it responsible for twisting the buyout world's priorities until they were unrecognizable. No longer, Forstmann believed, did buyout firms buy companies to work side-by-side with management, grow their businesses, and sell out in five to seven years, as Forstmann Little did. All that mattered now was keeping up a steady flow of transactions that produced an even steadier flow of fees--management fees for the buyout firms, advisory fees for the investment banks, junk-bond fees for the bond specialists. As far as Ted Forstmann was concerned, the entire LBO industry had become the province of quick-buck artists."

I get the advisory fees for the IBs, but I fail to understand how junk-bond fees and management fees would work in these transactions.

Statistics: Posted by Tedypendah — Today, 9:20 am


1. A limit order that doesn't move during the trade, set at a fixed distance from the entry point. This distance comes from backtest optimization results.

2. A large trailing stop.

So, almost all winning trades fall into category 1, while losing trades are either at the initial risk or smaller.

I will try to implement what you suggested.

Statistics: Posted by TraderWalrus — Today, 6:31 am


It looks like two clearly distinct modes: a steady, fairly certain profit that doesn't vary much, interrupted by occasional big-ish losses. Are theoretical (model rather than market) prices involved?

Some things I would like to know if I was running this would be:

1) Can I predict these losses? Prevent some of them from happening?

2) Do these losses come in patterns/clusters, or do they arrive randomly? If they cluster, then my multi-day risk is much higher than independent single-day risks would suggest.

These questions are related: e.g., if the losses come in clusters, then you can predict them. There are a couple of statistical tests for that; the simplest is to look at the time between drawdowns of a certain magnitude. If they come in clusters, then the time between them will be small more often than expected. The simplest implementation of that test is by Monte Carlo simulation. You draw 360 random samples from your historical distribution and look at the distribution of the number of days between big drawdowns, e.g. "I saw 4 events where the time between two consecutive big drawdowns was less than a week." Repeat that 1,000 times and you'll get a distribution. This is the distribution you would see if drawdowns happened randomly in time, without clustering: sometimes you see 4 such events in a simulation, sometimes 1, sometimes 2, and so on. Now compare that against your actual trade results. If you experienced 12 events, then that might be very high compared to the cases you saw in your 1,000 simulations.
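As a sketch of this clustering test: the P&L series, loss threshold, and "less than a week" gap below are all placeholder assumptions to stand in for your own data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily P&L history (stand-in for your ~360 actual samples)
history = rng.normal(loc=500.0, scale=2000.0, size=360)

threshold = -4000.0   # what counts as a "big drawdown" day (assumption)
max_gap = 5           # "less than a week" in trading days

def close_pairs(pnl, threshold, max_gap):
    """Count pairs of consecutive big-loss days fewer than max_gap days apart."""
    days = np.flatnonzero(pnl <= threshold)
    return int(np.sum(np.diff(days) < max_gap))

# Null distribution: resampling with replacement destroys any clustering,
# so these counts show what pure randomness produces
null_counts = np.array([
    close_pairs(rng.choice(history, size=len(history), replace=True),
                threshold, max_gap)
    for _ in range(1000)
])

observed = close_pairs(history, threshold, max_gap)
# How often randomness alone produces at least the observed count
p_value = np.mean(null_counts >= observed)
print(observed, p_value)
```

A small `p_value` would suggest the losses cluster more than chance allows.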

You can also use this technique to test for changes in behaviour by comparing old performance with current performance. One issue is that you only have 360 samples; it will be difficult to make statistically significant claims from such a small set.

Statistics: Posted by outrun — Yesterday, 1:13 pm


The system uses a fixed target, so most profitable trades exit there. There is a protective stop and some form of trailing. Win rate is 71%, profit factor 1.28. Traded live for about 3 months.

Drawdowns in this system can be quite significant. The max actual drawdown is 189,000 on closed trades (or 217,000 when we look at intra-trade data). Using MC simulation with those trades, the max DD at the 95th percentile of the simulations is 514,000 - more than double the actual DD so far.
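For reference, this kind of MC drawdown simulation can be sketched as a bootstrap over the closed-trade P&L; the trade list and counts below are made up, not the poster's actual data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical closed-trade P&L list (replace with your actual live trades)
trades = rng.normal(loc=800.0, scale=6000.0, size=200)

def max_drawdown(pnl):
    """Largest peak-to-trough fall of the cumulative equity curve."""
    equity = np.cumsum(pnl)
    peak = np.maximum.accumulate(equity)
    return float(np.max(peak - equity))

# Reshuffle the trade sequence many times and collect each path's max DD
sims = np.array([
    max_drawdown(rng.choice(trades, size=len(trades), replace=True))
    for _ in range(10_000)
])

dd95 = np.percentile(sims, 95)  # 95% of simulated paths stay below this DD
print(round(dd95))
```

Comparing the live drawdown against `dd95` is one natural switch-off (or size-down) criterion.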

What could be a criterion to determine that the system is no longer suitable for the market regime?

Statistics: Posted by TraderWalrus — Yesterday, 5:03 am


What's good about ranking against past returns is that it's a distribution-free method: you make no assumptions about the return distribution (e.g. assuming it's Gaussian, like some people do when they talk about "3 sigma" events).

Statistics: Posted by outrun — September 19th, 2017, 7:13 pm


To account for changing market conditions, which can be good or bad for my systems, and to react quicker, I would like to give more weight to recent trades than to older ones. My basic premise is that none of my systems will work forever - sooner or later a drawdown will grow larger and larger until it is obvious the system doesn't work anymore. On the other hand, when market conditions are favorable for a system, I would like to take advantage of that.

I thought about assigning probabilities to each trade instead of giving them an equal chance of being chosen in a simulation. For example, if I have 100 trades, the oldest trade gets a score of 1, the next one a score of (1+x), the one after that (1+2x), and so on. I then sum all the scores, normalize them, and select trades (with repetition) for the simulations according to those normalized scores (probabilities).

The big question is: what should x be? How would you go about it? Using exponential averaging? With what lambda? Any other ideas?
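Both the linear scheme described here and an exponential alternative can be sketched as weighted resampling; the values of x, lambda, and the trade list are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 100                                       # number of historical trades
trades = rng.normal(200.0, 1500.0, size=n)    # hypothetical P&L, oldest first

# Linear scheme from the post: scores 1, 1+x, 1+2x, ... then normalize
x = 0.05
linear_p = 1.0 + x * np.arange(n)
linear_p /= linear_p.sum()

# Exponential alternative: weight decays by lambda per trade of age
lam = 0.98
exp_p = lam ** np.arange(n)[::-1]   # newest trade gets weight 1
exp_p /= exp_p.sum()

# One resampled path for a simulation, drawn with recency weights
sample = rng.choice(trades, size=n, replace=True, p=exp_p)

# Effective sample size: how much history a given lambda really uses
ess = 1.0 / np.sum(exp_p ** 2)
print(round(ess, 1))
```

One way to pick x or lambda is to work backwards from the effective sample size you are willing to live with: a heavier recency tilt reacts faster but leaves fewer effective samples.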

I will appreciate any comments and views on this subject.

Statistics: Posted by TraderWalrus — September 19th, 2017, 4:48 pm


Using notation I found on Wikipedia,

[$]y_t=x'_t b +\epsilon_t[$]

[$]\epsilon_t| \psi_t \sim\mathcal{N}(0, \sigma^2_t)[$]

[$]\sigma_t^2=\omega + \alpha_1 \epsilon_{t-1}^2 + \cdots + \alpha_q \epsilon_{t-q}^2 + \beta_1 \sigma_{t-1}^2 + \cdots + \beta_p\sigma_{t-p}^2 = \omega + \sum_{i=1}^q \alpha_i \epsilon_{t-i}^2 + \sum_{i=1}^p \beta_i \sigma_{t-i}^2[$]

I would put a deterministic, time-varying factor [$]f_t \in \{0,1\}[$] here:

[$]\epsilon_t| \psi_t \sim\mathcal{N}(0, \sigma^2_t e^{c f_t })[$]

That way you can (I expect) nicely handle the regime switches and the rolling measure. [$]\sigma_t[$] can be thought of as a base vol, which gets scaled by deterministic intra-day and weekly factors.
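As a sanity check of the idea, here is a small simulation sketch: a GARCH(1,1) whose variance is scaled by e^{c f_t}. All parameter values are assumptions, and feeding the de-scaled shock back into the recursion is one possible choice to keep sigma_t a pure base vol.

```python
import numpy as np

rng = np.random.default_rng(7)

T = 2000
omega, alpha, beta = 0.10, 0.05, 0.85   # assumed GARCH(1,1) parameters
c = np.log(2.0)                          # variance doubles when f_t = 1

# Deterministic regime flag f_t: 1 on scheduled "event" bars, 0 otherwise
f = np.zeros(T)
f[::20] = 1.0

sigma2 = np.empty(T)   # base (regime-free) conditional variance
eps = np.empty(T)
sigma2[0] = omega / (1.0 - alpha - beta)   # start at the unconditional variance

for t in range(T):
    if t > 0:
        # De-scale the previous shock so the regime factor does not
        # leak into the base-vol recursion (a modeling choice)
        prev_sq = eps[t - 1] ** 2 / np.exp(c * f[t - 1])
        sigma2[t] = omega + alpha * prev_sq + beta * sigma2[t - 1]
    eps[t] = rng.normal(0.0, np.sqrt(sigma2[t] * np.exp(c * f[t])))

# Compare realized variance on event bars vs. the rest
ratio = np.var(eps[f == 1.0]) / np.var(eps[f == 0.0])
print(round(ratio, 2))
```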

About Alan's idea of using exogenous variables: I would try that too. The only issue I had is that if the exogenous variable moves on your timescale, then you will have to model its dynamics as well if you want to do e.g. multi-step MC simulations.

Statistics: Posted by outrun — September 11th, 2017, 5:38 pm


Let's say we head down your route and simplify this to just two regimes, 'office hours' and 'not office hours'. Say that for 8h of the day we expect the currency to be 1.2x more volatile than during the other 16h. How should I create a rolling measure? For example, I've just finished an 8h office day and I'm now 1h into the next 16h of out-of-office hours. When I look at the preceding 8h, should I be summing squared price changes on each minute bar, or should I be looking at the price change over the whole 8h? Clearly, if there is a momentum effect or mean reversion, the two will not yield the same volatility over a constant period. And we are now 1h into the next regime: how do I combine 1h of this regime with the 8h before that, and the 16h before that, etc.?

I'm thinking of using some type of weighting curve (by time of day) and a rolling average where less volatile times of day count more. The problem with this is that small bumps at non-volatile times of day may lead to outsized changes in volatility.
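One way to combine bars from different regimes, sketched below, is to de-seasonalize each squared return by its assumed regime factor before averaging, then rescale the rolling estimate by the current factor. The 1.2x factor, base vol, and EWMA lambda here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# One simulated week of 1-minute bars: 8h "office" regime is 1.2x more volatile
n_days = 5
office = np.tile(np.r_[np.full(8 * 60, 1.2), np.ones(16 * 60)], n_days)
base_vol = 0.0004
returns = rng.normal(0.0, base_vol * office)

# De-seasonalize: divide each return by its regime factor so that bars
# from both regimes are comparable before mixing them in one average
norm_sq = (returns / office) ** 2

# EWMA of normalized squared returns = rolling estimate of base variance
lam = 0.999
base_var = np.empty_like(norm_sq)
base_var[0] = norm_sq[0]
for t in range(1, len(norm_sq)):
    base_var[t] = lam * base_var[t - 1] + (1.0 - lam) * norm_sq[t]

# Current vol = rolling base vol rescaled by the *current* regime factor
current_vol = np.sqrt(base_var[-1]) * office[-1]
# Ratio of the rolling base-vol estimate to the true simulated base vol
print(round(np.sqrt(base_var[-1]) / base_vol, 2))
```

This sidesteps the "1h of one regime plus 8h of another" problem: every bar contributes in de-seasonalized units, and the regime factor only reappears at the end.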

Statistics: Posted by MartinGale7 — September 11th, 2017, 4:06 pm
