- Forex Factory search: posts by 'ampleparking' (16 results)
- ampleparking replied Oct 23, 2015
About recurrent neural networks... in my opinion they could be the way to go. Classic MLP networks don't account for the input geometry. Convolutional neural networks (CNN) exploit local correlations, but they are invariant to translation and they ...
- ampleparking replied Oct 7, 2015
Exhaustive/Grid search?
- ampleparking replied Sep 30, 2015
Yes, that's probably more indicative than a random walk, since it uses more realistic steps.
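A minimal sketch of the distinction discussed here, assuming "more realistic steps" means bootstrap-resampling the instrument's own observed returns (so the surrogate series keeps the empirical step distribution, fat tails and all) rather than drawing fixed-size Gaussian steps. The `real_returns` series below is a stand-in for returns computed from actual price data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for real log-returns; in practice, compute these from price data.
real_returns = rng.standard_t(df=3, size=5000) * 0.001

# Naive random walk: fixed-size Gaussian steps.
gaussian_walk = np.cumsum(rng.normal(0.0, 0.001, size=5000))

# "More realistic" surrogate: bootstrap-resample the observed returns,
# preserving their empirical distribution but destroying their ordering.
bootstrap_walk = np.cumsum(rng.choice(real_returns, size=5000, replace=True))
```

Running the full strategy-development process on many such surrogate series gives a baseline for how good a score pure chance can produce.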
- ampleparking replied Sep 30, 2015
Of course I apply the whole process to the random data, not the model fitted to my original data. The starting point is the "theoretically valid idea".
- ampleparking replied Sep 30, 2015
This is what I would do: A) Start with a theoretically valid idea (don't try random non-sense things until something seems to work) B) Train on the train set C) Validate/Tune on the cv set D) Test on the test set (never fit/train/tune on the test ...
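The A)-D) workflow above starts from a chronological split, which for time-ordered data should not be shuffled. A minimal sketch (the 60/20/20 fractions are illustrative, not from the post):

```python
import numpy as np

def chronological_split(X, y, train_frac=0.6, cv_frac=0.2):
    """Split time-ordered data into train/cv/test without shuffling,
    so later observations never leak into earlier sets."""
    n = len(X)
    i_train = int(n * train_frac)
    i_cv = int(n * (train_frac + cv_frac))
    return ((X[:i_train], y[:i_train]),      # B) fit here
            (X[i_train:i_cv], y[i_train:i_cv]),  # C) tune here
            (X[i_cv:], y[i_cv:]))            # D) touch once, at the very end
```

The test partition is the most recent data, which matches how the model would actually be deployed.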
- ampleparking replied Sep 30, 2015
I would probably perform a DMB analysis at the very end of my model analysis, but I think that a proper train/cv/test dataset splitting should already give some good hints.
- ampleparking replied Sep 30, 2015
If my validation set score is good and my test score is bad, I would try to improve my already good validation set score. When I reach my best validation score, if the test score is still bad, then my model is garbage. Never try to directly improve ...
- ampleparking replied Sep 30, 2015
Well, that would be a wrong approach because you would fit the test set. You have to improve your validation set score, not your test score.
- ampleparking replied Sep 30, 2015
Instead of that DMB analysis, can't we use the standard train/cross-validate/test dataset splitting? Train on the train set, tune hyper-parameters on the validation set, test on the test set.
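The discipline described in these posts, tune hyper-parameters only against the validation score and consult the test set exactly once, can be sketched with a grid search. The ridge-regression model and the alpha grid below are illustrative assumptions, not anything from the thread:

```python
import numpy as np

def fit_ridge(X, y, alpha):
    # Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

def tune(X_tr, y_tr, X_cv, y_cv, alphas):
    # Grid search: the hyper-parameter is chosen by validation score only.
    scores = {a: mse(X_cv, y_cv, fit_ridge(X_tr, y_tr, a)) for a in alphas}
    return min(scores, key=scores.get)
```

After `tune` returns, the model is refit with the winning alpha and scored on the test set once; if that score is bad, the model is rejected rather than re-tuned against the test set.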
- ampleparking replied Jun 14, 2015
In my opinion it would be interesting to use the sample entropy instead of ATR to set the SL/TP thresholds.
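Sample entropy, as suggested above, could be computed along these lines. This is a generic SampEn sketch with the common defaults m=2 and r=0.2·std, not the poster's code; how the value would then be mapped to SL/TP distances is left open:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D series: -ln(A/B), where B counts pairs of
    length-m templates within tolerance r (Chebyshev distance) and A counts
    the same for length m+1. Self-matches are excluded. Lower values mean
    a more regular (more predictable) series."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    n = len(x) - m  # same template count for lengths m and m+1

    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(n)])
        total = 0
        for i in range(n - 1):
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            total += int(np.sum(d <= r))
        return total

    B = matches(m)
    A = matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")
```

A regular series (e.g. a sampled sine wave) scores much lower than white noise, which is the property that would distinguish it from ATR: ATR measures the size of moves, sample entropy their regularity.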
- ampleparking replied Jun 10, 2015
I have some questions about the "borders": When I look at the "past (C) hourly bar returns", is the return between the last "full candle" open price and the "current, incomplete candle" open price included? I think so. When I look at the "(D) bars ...
- ampleparking replied May 28, 2015
Yes, I mainly use Python (for backtesting too), so I'm trying to reimplement your models in Python... then I'll be able to share some experiments.
- ampleparking replied May 28, 2015
I don't know; in my opinion a single-output model would be easier to train, especially if you use algorithms like backpropagation, Rprop, etc....
- ampleparking replied May 28, 2015
That A parameter is a bit worrisome... the "data snooping" alarm in my head is ringing...
- ampleparking replied May 27, 2015
Of course you could use an optimization algorithm like a GA, SA or PSO to train the input-to-hidden layer weights, but the nice thing about the ELM theory is that you don't have to train them. In my opinion you could just create an ensemble of 10-20 ...
- ampleparking replied May 27, 2015
Hello everybody, algoTraderJo, is there any reason for using 2 output units (MLE and MSE) instead of using 1 unit (the MLE/MSE ratio)?