Finding The Most Robust Parameters Using Optimization

  • Post #21
  • Dec 21, 2021 9:06am
  •  MathTrader7
  • Joined Aug 2014 | Status: Trading | 2,142 Posts
Quoting yoriz
... How can we possibly select the best parameters based on the backtest metrics?
Use Monte Carlo simulation to select the most reliable backtest parameters.
Trading is the hardest way to make easy money...
 
 
  • Post #22
  • Dec 21, 2021 9:37am | Edited at 9:51am
  •  yoriz
  • Joined Dec 2016 | Status: Member | 128 Posts
Quoting MathTrader7
Use Monte Carlo simulation to select the most reliable backtest parameters.
Can you please elaborate on that?

I am familiar with the Monte Carlo Permutation Method (MCP) as described in Aronson's book "Evidence-Based Technical Analysis" (short online extract here). He uses that method to compare against the Null Hypothesis. What is the likelihood the test results are pure luck?

However, in the example in post #1, I backtest 1250 different parameter combinations. Do you propose to run MCs over each of these 1250 backtests? That will give 1250 profit profiles similar to the graphs in post #21.

Suppose half of them have a likelihood of over p=0.995 that they are profitable. What would then be the next step to select the best one?
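A minimal sketch of what one such MC run could look like (the trade list is a hypothetical placeholder; bootstrap resampling of per-trade profits, repeated once per parameter combination to get the 1250 profit profiles):

Inserted Code
  # Sketch: bootstrap one backtest's per-trade profits to estimate the
  # probability that its total profit is positive. Illustrative only.
  import numpy as np

  def mc_profit_distribution(trade_profits, n_sims=10_000, seed=0):
      rng = np.random.default_rng(seed)
      trades = np.asarray(trade_profits, dtype=float)
      # Resample trades with replacement; total each resampled sequence.
      return rng.choice(trades, size=(n_sims, trades.size)).sum(axis=1)

  profits = [12.0, -8.5, 30.1, -4.2, 9.9, -15.0, 22.3]  # hypothetical trade P/L
  sims = mc_profit_distribution(profits)
  print(f"P(total profit > 0) ~ {(sims > 0).mean():.3f}")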
 
 
  • Post #23
  • Dec 21, 2021 9:56am
  •  Ihlas
  • Joined Nov 2020 | Status: Member | 1,897 Posts
Quoting robots4me
{quote} Probably off-topic, but I couldn't resist. I really like @PercyJames description of the market as a giant balancing machine... {image} Check-out the image above -- the indicator is based on DSS-Bressert. It's a simple algorithm, just a few lines -- based on double-smoothed Stochastics. Do you see the rhythmic pattern of peaks and valleys at repeating intervals? Really quite amazing. The price curve above doesn't display anything that would reflect that type of pattern. This repeating pattern of peaks and valleys occurs with...
 
 
  • Post #24
  • Dec 21, 2021 10:20am
  •  FXEZ
  • Joined Jan 2007 | Status: developing... | 970 Posts
yoriz,

You're asking good questions - ones that I asked a long time ago. There is an essential but long-forgotten thread called Systematic Trading and I strongly suggest you read at least the first 20 pages or so. That thread should answer most of the questions that have been asked here and fill in a lot of detail that I can't provide. However, I think I should point out a couple of things to make it more obvious where to focus.

Your edge or systematic advantage is not your parameters. Yet this is what is typically done:
Inserted Code
  run a series of backtests
  select the best parameter based on some metric
  trade it forward: it ends in tears.

Think about the story of the monkeys throwing darts at the financial papers. The monkeys were better than fund managers at picking stocks, not because they threw the dart one time but because they threw it many times and thereby earned the average stock return of a random portfolio. A random portfolio has a positive advantage because the stock market is long-term upward trending. This was the monkeys' edge, along with diversification.

When you select a single system to trade, you are betting that those parameters will remain highly profitable into the future. However, the system's results are much more likely to mean-revert back to the average of the advantage than to remain an outlier. You are the fund manager, chasing past returns and underperforming a monkey.

This is why it seems that the market changes and the system falls apart. The market really doesn't change; in fact, the market's nature is constant change, so it stays the same. You are, however, vastly overestimating the edge that your system should provide. And you aren't making use of the law of large numbers, so you are subject to huge random variation.

So what is your advantage or edge, if not one particular system with its parameters? Your edge is the average result of the set of rules that you have parametrized. In other words, you should trade all the systems in your parameter space. However, probably half or more of these systems will be marginal or losers, and your average profit will likely be at or near zero if you do so.

So you split your in-sample data and apply a fitness metric or function to weed out the chaff. A fitness function can be any sort of rule or performance metric (like average profit per trade, total profit, profit / max drawdown [Sterling ratio], Sharpe ratio, Kelly total growth, profit factor, etc.). Discard the systems in your system space that don't meet your minimum fitness metric and run the data forward on the second half of your data. You don't really care about the performance of any one system, because you aren't trading any one of them but a combination of multiple systems. So your average or combined performance is what matters.

There are effectively two ways to combine multiple systems into a single equity curve: trade a portfolio of all the chosen systems with certain weights - as in modern portfolio theory - or trade a combination of the chosen systems in an ensemble (machine learning). You should be conversant in both, but you will get much more performance from ensembles, though stability is more challenging.

In terms of robustness, the most robust set of weights is equal weights. You may be tempted to optimize the weights, but you are very likely to waste a lot of time, overfit the data, and end up with something that fails out of sample.
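(To make this select-then-combine idea concrete: a minimal sketch under assumed data structures -- an in-sample fitness score and an out-of-sample equity curve per parameter set, all names hypothetical. Filter by a fitness threshold, then trade the survivors at equal weights.)

Inserted Code
  # Sketch: weed out systems below a fitness threshold, then combine the
  # survivors into one equal-weight portfolio equity curve.
  import numpy as np

  def equal_weight_portfolio(systems, min_fitness):
      survivors = [s for s in systems if s["fitness"] >= min_fitness]
      if not survivors:
          raise ValueError("no system passed the fitness threshold")
      curves = np.stack([s["oos_equity"] for s in survivors])
      # The average out-of-sample curve is what you actually trade.
      return curves.mean(axis=0), len(survivors)

  rng = np.random.default_rng(1)
  systems = [{"fitness": rng.normal(),
              "oos_equity": rng.normal(0.01, 1, 250).cumsum()} for _ in range(1250)]
  curve, n = equal_weight_portfolio(systems, min_fitness=0.5)
  print(f"{n} systems combined; final portfolio equity {curve[-1]:.1f}")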

Your goal in all this is not to create a single system, but a system-creation framework that generally produces out-of-sample results matching at least 70% of your in-sample performance and thus has good consistency. Because you are using a fitness metric to select, you will periodically need to reoptimize or create a new system that includes the latest data, since some of your systems will drop below the minimum threshold while others will rise above it.

That's about as much as I want to share on this topic. Read the thread linked in this post and you should be on your way. It's not a short-term project; it has taken me many years to get to this point, with some failures and successes along the way. Good luck!
 
  • Post #25
  • Dec 21, 2021 10:34am
  •  Ihlas
  • Joined Nov 2020 | Status: Member | 1,897 Posts
One more thing is to concentrate on similarities that aren't similar (framework). Hidden in the obvious.
 
 
  • Post #26
  • Dec 21, 2021 10:50am
  •  MathTrader7
  • Joined Aug 2014 | Status: Trading | 2,142 Posts
Quoting yoriz
{quote} Can you please elaborate on that? I am familiar with the Monte Carlo Permutation Method (MCP) as described in Aronson's book "Evidence-Based Technical Analysis" (short online extract here). He uses that method to compare against the Null Hypothesis. What is the likelihood the test results are pure luck? However, in the example in post #1, I backtest 1250 different parameter combinations. Do you propose to run MCs over each of these 1250 backtests? That will give 1250 profit...
I am too busy and don't have time to explain it here, but the following link is a good start for getting familiar with MC applications in backtesting.

https://towardsdatascience.com/impro...s-abacde033adf
Trading is the hardest way to make easy money...
 
 
  • Post #27
  • Dec 21, 2021 11:02am
  •  yoriz
  • Joined Dec 2016 | Status: Member | 128 Posts
Quoting Ihlas
One more thing is to concentrate on similarities that aren't similar (framework). Hidden in the obvious.
Hahaha, now you sound like a fairy tale wizard speaking in riddles ;-)
Not sure what to do with this advice...
 
 
  • Post #28
  • Dec 21, 2021 11:03am
  •  yoriz
  • Joined Dec 2016 | Status: Member | 128 Posts
Quoting MathTrader7
the following link is a good start for getting familiar with MC applications in backtesting.
Thanks for the link. Much appreciated.
 
  • Post #29
  • Dec 21, 2021 11:11am
  •  Ihlas
  • Joined Nov 2020 | Status: Member | 1,897 Posts
Quoting yoriz
{quote} Hahaha, now you sound like a fairy tale wizard speaking in riddles ;-) Not sure what to do with this advice...
https://en.wikipedia.org/wiki/Similarity_(geometry)
https://en.wikipedia.org/wiki/Congruence_(geometry)
--- These fairy tales might help
In Euclidean geometry, two objects are similar if they have the same shape, or one has the same shape as the mirror image of the other. More precisely, one can be obtained from the other by uniformly scaling (enlarging or reducing), possibly with additional translation, rotation and reflection. This means that either object can be rescaled, repositioned, and reflected, so as to coincide precisely with the other object. If two objects are similar, each is congruent to the result of a particular uniform scaling of the other.
 
 
  • Post #30
  • Dec 21, 2021 11:16am
  •  yoriz
  • Joined Dec 2016 | Status: Member | 128 Posts
Quoting FXEZ
There is an essential but long-forgotten thread called Systematic Trading and I strongly suggest you read at least the first 20 pages or so.
Thanks for the link. I'll have something to read under the Christmas tree.

Quoting FXEZ
You don't really care about the performance of any one system, because you aren't trading any one of them but a combination of multiple systems. So your average or combined performance is what matters.
Interesting idea. That way it doesn't hurt when I accidentally pick a losing parameter set, because I choose a whole lot of them: the top 5 with the highest Sharpe, plus the top 5 with the highest profit factor, etc., and then trade tens of them simultaneously. That way the profit will be much less than that of some of the individual high performers, but the results would be much more robust and stable. And perhaps, due to the more linear/stable equity curve, the risk-adjusted profit might even be higher.

A few quick tests averaging the forward test profits of the top-x strategies in the spreadsheet from post #2 already showed nice results. This is definitely something I am going to study more. Thank you for this valuable insight!

Quoting FXEZ
There are effectively two ways to combine multiple systems into a single equity curve: trade a portfolio of all the chosen systems with certain weights - such as modern portfolio theory, or trade a combination of the chosen systems in an ensemble (machine learning).
I always struggle with ensembles. I can envision how to do that for strategies that are always in the market (typical ML algos that predict the next H1 bar, for example), but I don't quite understand how to do that for strategies that are only in the market every now and then. You would need a huge number of these to implement something like majority voting. Or are there smarter ways to create ensembles?
 
 
  • Post #31
  • Dec 21, 2021 11:22am
  •  Ihlas
  • Joined Nov 2020 | Status: Member | 1,897 Posts
Quoting yoriz
{quote} Hahaha, now you sound like a fairy tale wizard speaking in riddles ;-) Not sure what to do with this advice...
Harry Potter says that the market doesn't change, but the scale does.
I'm trying to spot this baby snake
 
  • Post #32
  • Dec 21, 2021 4:42pm
  •  yoriz
  • Joined Dec 2016 | Status: Member | 128 Posts
Quoting MathTrader7
the following link is a good start for getting familiar with MC applications in backtesting.
Thanks again for the link. I have read the article. What it describes is a pragmatic introduction to MC, which Aronson describes in more detail in his book. However, both the article and Aronson only consider one single system at a time.

How can we apply MC to system selection? We can run MC simulations on each of the hundreds or thousands of backtests we run when varying the parameters. Using MC simulation will surely improve the accuracy of estimates like Return/DD and similar backtest metrics. As usual, we will likely see that some parameters are clearly not profitable, while others appear to be profitable.

But still the question remains how to select the most robust, stable parameters (or how to select a subset of systems to create a portfolio of promising systems, as FXEZ pointed out). I assume that selecting the most profitable strategies from the MC'd backtest results is not wise, because these are probably just lucky, curve-fitted instances (i.e. the 90+ percentile of the profit distribution in post #1).

Any suggestions?
 
 
  • Post #33
  • Dec 22, 2021 6:55am
  •  PipMeUp
  • Joined Aug 2011 | Status: Member | 1,305 Posts
I see your graph as a noisy image in parameter space. Running a low-pass filter (blur) over it was my first idea. I took your data and applied a 5x5 uniform kernel to it (a 2D SMA). You can see that the lucky peak gets averaged out. Then I spotted (in red) the three best local maxima. The rationale is that you seek parameters which give similar (and good!) results when slightly changed (a flat high plateau), so they can be assumed robust. You can take the top N best into your portfolio.
Attached image: heatmap-param-space.png
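(The blur itself is only a few lines; a sketch of the same idea, assuming the backtest metric has been arranged on a 2D parameter grid -- the grid values below are random placeholders:)

Inserted Code
  # Sketch: 5x5 uniform kernel (a 2D SMA) over a parameter-space heatmap,
  # then take the best cells of the SMOOTHED surface as plateau candidates.
  import numpy as np
  from scipy.ndimage import uniform_filter

  rng = np.random.default_rng(2)
  profit_grid = rng.normal(0, 100, size=(50, 25))  # rows/cols = the two parameters
  smoothed = uniform_filter(profit_grid, size=5, mode="nearest")

  # Top-3 cells of the smoothed surface: plateau centres, not lucky spikes.
  flat = np.argsort(smoothed, axis=None)[-3:]
  print(list(zip(*np.unravel_index(flat, smoothed.shape))))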

There are also these surprisingly good results in the forward test. Two hypotheses pop into my mind:
1/ this set of parameters has no predictive power and the result is random;
2/ it highly depends on the market conditions, which have changed between the two datasets.
You said you used E/U from 2020-01-01 to 2020-07-01. Right after this period there is a smoother uptrend. Can this be an explanation?
Attached image: EURUSD-D1.png
No greed. No fear. Just maths.
 
1
  • Post #34
  • Dec 22, 2021 7:19am
  •  yoriz
  • Joined Dec 2016 | Status: Member | 128 Posts
Quoting PipMeUp
I took your data and applied a 5x5 uniform kernel to it (a 2D SMA). You can see that the lucky peak gets averaged out. Then I spotted (in red) the three best local maxima.
Good idea! That looks like a good way to detect plateaus, and it easily extends to n-dimensional parameter spaces. In the past I made some attempts to calculate the gradient to find the peaks in parameter space, but I was not very successful. Your suggestion to first low-pass filter the data helps both with peak finding and with mixing in neighboring parameter values (aka plateaus).

Quoting PipMeUp
There are also these surprisingly good results in the forward test. Two hypotheses pop into my mind:
1/ this set of parameters has no predictive power and the result is random
Yes, possibly. This is just a very simple toy breakout strategy based on Donchian channels and a fixed 1:1 RR. It does not even try to benefit from the entire trend and takes profit early. I chose it to keep the example simple, as it has just two parameters with only 1250 possible combinations.

I intend to capture the lessons learned from this thread in code so the EA re-optimizes itself automatically. Then I can do some serious WFA testing, because the current manual approach is very time-consuming. I might then discover that this simple strategy indeed has no predictive power.

Quoting PipMeUp
2/ it highly depends on the market conditions, which have changed between the two datasets. You said you used E/U from 2020-01-01 to 2020-07-01. Right after this period there is a smoother uptrend. Can this be an explanation?
Not sure, because this is at an entirely different timescale. The average holding time of a trade was only 1.5 hours.
 
 
  • Post #35
  • Dec 22, 2021 1:03pm | Edited at 3:30pm
  •  MathTrader7
  • Joined Aug 2014 | Status: Trading | 2,142 Posts
Quoting yoriz
{quote} Thanks again for the link. I have read the article. What it describes is a pragmatic introduction to MC, which Aronson describes in more detail in his book. However, both the article and Aronson only consider one single system at a time. How can we apply MC to system selection? We can run MC simulations on each of the hundreds or thousands of backtests we run when varying the parameters. Using MC simulation will surely improve the accuracy of estimates like Return/DD and similar backtest metrics. As usual, we will likely see that some parameters...
I'll answer your question with an example of trading that I started (with real money) about one year ago. After reading the available resources about stock-market pair trading, and understanding the math behind it, I started pair trading, which involves two (or more) stocks. In a nutshell, with pair trading one looks for a linear combination of the selected stocks (which ideally should be cointegrated) to create a stationary synthetic symbol (a basket). Once it is created, the hyper-parameters for the basket need to be optimized, which is where Monte Carlo simulation can be applied. The trick is to consider each BASKET BUY and BASKET SELL as an outcome of the synthetic symbol (as if someone were trading a single asset).
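(A compressed sketch of that basket construction, with details the post leaves out filled in by assumption -- hedge ratio by least squares, stationarity checked with an ADF test, simulated prices:)

Inserted Code
  # Sketch: form a synthetic spread from two cointegration candidates and
  # test it for stationarity. BASKET BUY/SELL signals would then be taken
  # on this single spread series. All data here is simulated.
  import numpy as np
  from statsmodels.tsa.stattools import adfuller

  rng = np.random.default_rng(3)
  common = rng.normal(0, 1, 1000).cumsum()    # shared random-walk factor
  stock_a = 50 + common + rng.normal(0, 0.5, 1000)
  stock_b = 30 + 0.8 * common + rng.normal(0, 0.5, 1000)

  beta = np.polyfit(stock_b, stock_a, 1)[0]   # hedge ratio: a ~ beta * b + const
  spread = stock_a - beta * stock_b           # the stationary synthetic symbol
  print(f"beta={beta:.2f}, ADF p-value={adfuller(spread)[1]:.4f}")  # low p => stationary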

Hope this helps you with your portfolio!

/Matt
Trading is the hardest way to make easy money...
 
 
  • Post #36
  • Dec 22, 2021 1:24pm | Edited at 2:16pm
  •  robots4me
  • Joined Dec 2017 | Status: Member | 4,378 Posts
The following comments won't be well-received, but a little bit of controversy never hurts...

Obviously y'all are smart fellows -- but you aren't traders. Your satisfaction comes from thinking about and developing theoretical systems. And there is nothing wrong with that. My comments that follow are primarily intended for those who might consider following in your footsteps...

Your data mining, statistical analysis, and Monte Carlo analysis are great for optimizing a system with known constraints -- e.g. traffic control, where there are known constraints and the goal is to optimize a few parameters to achieve the greatest traffic throughput. But in FX everything is in motion -- there are no known constraints that don't change with timeframe or pair or market conditions.

If your over-fitted systems really worked you wouldn't need a thread to discuss how to make them work -- instead, you'd spend your time actually trading, right?

But let's say one of your systems did work for some interval of time. Y'all already acknowledge that constant recalibration is required, right? Well -- a real trader knows that by the time you acknowledge your current system isn't good enough and requires recalibration, you've already lost more than what you may have previously profited.

I'm a strong believer in algorithmic trading -- so my intent is not to discourage you. Rather, it is to encourage you to favor strategies that don't rely on settings that require calibration. That's a bit difficult because 95% of the indicators out there employ some flavor of moving average that requires a "period" parameter whose optimal value changes with time frame, pair, market conditions, etc. If your strategy employs a moving average then you will always be chasing your tail...

For those readers at a crossroads as to which approach to take, first learn how to trade discretionarily. The gap between a successful algo trader and a failed one is much smaller than you think -- until you appreciate that, you'll be doomed to failure. Study the actual price data at the ground level. Statistical analysis is the view from 30,000 feet. Once you become a good discretionary trader, then you are in a position to design and develop a system -- if that is what you enjoy doing. And most important of all -- KISS...
 
 
  • Post #37
  • Dec 22, 2021 4:25pm | Edited at 4:49pm
  •  yoriz
  • Joined Dec 2016 | Status: Member | 128 Posts
Building upon the excellent suggestion from @FXEZ, I tried to build portfolios of many different parameter sets to average out the performance of the individual settings. In the plots below, I sorted on the various backtest metrics as provided by the MetaTrader strategy tester and created portfolios of the top-1, top-2, top-3, etc. strategies (truncated at max. 500 strategies for scaling):
Attached image: screenshot.png
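(A sketch of how curves like these can be generated, assuming a table with one in-sample metric and the forward-test profit per parameter set; the column names are made up:)

Inserted Code
  # Sketch: rank parameter sets by one backtest metric, then compute the
  # mean forward profit of the top-1, top-2, ... top-N portfolios.
  import numpy as np
  import pandas as pd

  rng = np.random.default_rng(4)
  df = pd.DataFrame({"recovery_factor": rng.normal(1, 0.5, 1250),   # in-sample metric
                     "forward_profit": rng.normal(10, 100, 1250)})  # out-of-sample result

  ranked = df.sort_values("recovery_factor", ascending=False)
  top_n = ranked["forward_profit"].expanding().mean().head(500)
  # top_n.iloc[k-1] = mean forward profit of the k best systems; plot it vs k.
  print(top_n.iloc[[0, 9, 99, 499]].round(1).tolist())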

Recovery Factor and Equity DD % look like the most robust metrics, as all portfolios between N=6..332 and N=2..400 respectively are profitable, albeit with varying profits obviously. Expected Payoff is a useless metric for this strategy. In fact, by inverting the sort order (sorting in ascending order, so building a portfolio of the worst Expected Payoffs), I got excellent results, but it does not feel robust to use something that is counter-intuitive:
Attached image: screenshot.png

What worries me is that the Backtest Profit is also a very bad predictor of Forward Test Profit. Is that a sign that the strategy has no predictive power?

Several curves start with a deep dip and climb back up again. Does that confirm my hypothesis that we should filter out the top-scoring strategies because they are just lucky curve fits?
 
 
  • Post #38
  • Dec 22, 2021 5:03pm
  •  yoriz
  • Joined Dec 2016 | Status: Member | 128 Posts
Quoting MathTrader7
The trick is to consider each BASKET BUY and BASKET SELL as an outcome of the synthetic symbol (as if someone were trading a single asset).
I see. So you suggest trying to combine strategies into a portfolio that maximizes the risk-adjusted return found by MC simulation. I see how that forces the portfolio to be diversified, so it will probably include strategies with parameters that are far apart, like the 3 red boxes @PipMeUp selected in his heatmap in post #33. Does that also imply robustness? Or does diversification in general already imply robustness? @FXEZ seems to think so.

The only thing that worries me about this approach is how to make it practical. Brute-forcing all combinations is obviously impossible. Should I follow an additive approach by first trying to pair only 2 strategies (that is "only" 1.5 million combinations to try), and then trying which 3rd strategy matches best with the already-found pair? Etc. Still a huge number of CPU cycles, but doable in days or weeks.
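A cheaper alternative to brute force could be greedy forward selection: start from the best single strategy and keep adding whichever candidate improves the combined curve most. A minimal sketch with a stand-in Sharpe-like score and simulated equity curves (nothing here is from the thread):

Inserted Code
  # Sketch: greedy (additive) portfolio construction instead of trying
  # all combinations; O(N * portfolio size) instead of combinatorial.
  import numpy as np

  def sharpe(curve):
      r = np.diff(curve)
      return r.mean() / (r.std() + 1e-12)

  def greedy_portfolio(curves, max_size=10):
      chosen = [max(range(len(curves)), key=lambda i: sharpe(curves[i]))]
      while len(chosen) < max_size:
          gain = lambda i: sharpe(curves[chosen + [i]].mean(axis=0))
          best = max((i for i in range(len(curves)) if i not in chosen), key=gain)
          if gain(best) <= sharpe(curves[chosen].mean(axis=0)):
              break  # no candidate improves the current mix
          chosen.append(best)
      return chosen

  rng = np.random.default_rng(5)
  curves = rng.normal(0.02, 1, size=(200, 250)).cumsum(axis=1)  # 200 strategies
  print(greedy_portfolio(curves))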
 
  • Post #39
  • Dec 22, 2021 5:23pm | Edited at 5:57pm
  •  yoriz
  • Joined Dec 2016 | Status: Member | 128 Posts
Quoting robots4me
The following comments won't be well-received, but a little bit of controversy never hurts...
Ok, I'll bite ;-)

Quoting robots4me
If your over-fitted systems really worked you wouldn't need a thread to discuss how to make them work
What makes you think the systems are over-fitted? There are mathematical methods to calculate your data-mining bias. I already mentioned the book "Evidence-Based Technical Analysis" by Aronson. He describes how to test against the Null Hypothesis and determine the chance that your strategy is profitable by sheer luck, rather than having an actual edge.

Also highly recommended is the excellent thread "Machine Learning with AlgoTraderJo". He describes how he mines (or optimizes, if you like) a strategy on real symbol data, but also on 200 randomized synthetic data series (think of it like shuffling the returns of the bars in a chart to remove any causality between subsequent bars while preserving the overall ranges and dynamics of the symbol). To check for data-mining bias, he allows only some x% of the strategies mined from random data to be better than those mined from the real data. That is a very expensive but thorough way to ensure you are not curve fitting!
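The synthetic-series part is easy to sketch: shuffle the bar-to-bar returns so any serial structure is destroyed while the return distribution survives (simulated prices below; the mining step itself is left out):

Inserted Code
  # Sketch: build randomized synthetic price series by shuffling the real
  # bar returns -- the baseline for the data-mining-bias check above.
  import numpy as np

  def synthetic_series(prices, n_series=200, seed=6):
      rng = np.random.default_rng(seed)
      log_ret = np.diff(np.log(prices))
      out = []
      for _ in range(n_series):
          shuffled = rng.permutation(log_ret)              # kill bar-to-bar causality
          path = np.concatenate(([0.0], shuffled)).cumsum()
          out.append(prices[0] * np.exp(path))             # keep overall dynamics
      return np.array(out)

  rng = np.random.default_rng(7)
  real = 1.10 * np.exp(rng.normal(0, 0.001, 5000).cumsum())  # stand-in for EURUSD closes
  print(synthetic_series(real).shape)                        # (200, 5000)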

Quoting robots4me
encourage you to favor strategies that don't rely on settings that require calibration.
Ah, the parameter-less strategy! There was a thread about that 2 years ago on FF, if I remember correctly. Can't seem to find it now.

Unfortunately, these are fairly limited in what rules are possible. And beware that there is often an implicit parameter hidden in the rules of these strategies. For example: "if price crosses the resistance twice ...". Why twice and not three times? Yes, you hardcoded "2", but only after you experimented or backtested a bit with "1", "3" and "4". So you don't have data-mining bias, but you have selection bias (or survivorship bias) instead.

Quoting robots4me
Once you become a good discretionary trader then you are in a position to design and develop a system
I have a very good discretionary trader in my family who always impresses me. However, the problem with discretion is that it is very hard to capture in an algorithm. Trust me, together we made many attempts to code all his ideas. Within hours I have an EA that adheres to his basic rules. Then we run a backtest and he starts inspecting the resulting trades. And there you have it: "Well... here I wouldn't have entered." "Why?" "Don't know. It just doesn't look right." It is very hard to capture gut feeling and years of experience in code.
 
 
  • Post #40
  • Dec 23, 2021 3:29am | Edited at 3:41am
  •  yoriz
  • Joined Dec 2016 | Status: Member | 128 Posts
Quoting robots4me
Not true at all. That statement simply reflects your limited experience in trading and bias in favor of complexity.
Perhaps I am not creative enough. Please name 10 parameter-less rules that make sense for building strategies.

Quoting robots4me
unfortunately skills don't necessarily transfer due to being an acquaintance.
Hahaha, of course not. I was just introducing the paragraph about discretionary trading. I was not implying that I have the same skills. You don't need to master all skills yourself to run a successful team project.

Quoting robots4me
another reason I know you are not a trader: with all your statistical analysis you don't mention anything about money management.
Yes, because that is off-topic for this thread: "Finding the most robust parameters using optimization". Feel free to start another thread on the topic of MM and I'll be happy to contribute.

Quoting robots4me
If you couldn't code a successful strategy with the aid of your discretionary-trading friend, then how does statistical analysis overcome that deficiency?
Because algo trading often exploits different inefficiencies than discretionary traders do. There is not a single person doing NN calculations in his head to make a trade decision (OK, the biologists among you will argue the brain is a NN, but that is obviously not what I mean). Capturing discretionary trading instincts in code is very hard, regardless of statistics.

Quoting robots4me
For OOS you'll typically use the next chunk of data adjacent to the chunk you optimized for. The probability is very high the adjacent chunk represents similar market conditions as the chunk used for optimizing your settings. [...] your optimized settings that seemed to work on the adjacent OOS data fail miserably on the non-adjacent OOS data. It's a common misconception -- the problem is that your testing software probably forces you to use adjacent data for OOS.
Isn't that exactly the goal? When going live with a strategy, the OOS part is the actual live market. We want it to have similar market conditions. In post #13 you write: "My preference is to use a data horizon just large enough to yield 50 trades -- typically around 2 months using H1 data." Now you seem to suggest the opposite?

Thank you for bringing a fresh, different perspective, but I feel we are drifting more and more off-topic. How do you select the best strategies and parameter sets in your trading?
 