This is an interesting thread; I hope to add some perspective to it.
The topic of Quantitative Analysis is, in general, nearly infinite, and while the OP seems to want to keep the thread on task judging by some of the responses, the OP has not really identified a specific question or scope to stay within, so it's hard to respond accurately on topic. This isn't uncommon in Quantitative Analysis (QA), nor is it a negative reflection on the OP; the field really is that broad and addresses many things that are "relative" by their own nature in comparison, not necessarily right versus wrong or black and white.
QA requires, before anything else, "sanitary" data, lest one gets "garbage in = garbage out". Quants routinely demand and filter data to create a sanitized set in order to see empirical results for some specific attribute. However, there is a serious misfire in overlooking that quants typically work with institutional Liquidity Provider (LP) bulk data, before underlying market-maker initiatives modify that data in many possible ways, such as dealer/broker A-, B- or C-book profiles in which intentional manipulation bends the results in favor of institutional gains, legitimate or otherwise. Consider a stable price markup as legitimate, versus dynamically siphoning price at peaks and valleys to defeat limit fills or enhance herd stop hunts. All speculators act on loss mitigation, not profit initiatives, lest they get caught behind enemy lines.
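As a minimal sketch of what "sanitary" can mean in practice, the filter below drops crossed quotes, blown-out spreads and isolated price spikes from a hypothetical tick stream before any statistics are run on it. The record format and thresholds are illustrative assumptions, not any real feed's specification:

```python
# hypothetical tick records: (timestamp_ms, bid, ask)
def sanitize_ticks(ticks, max_spread=0.0050, max_jump=0.0100):
    """Drop obviously broken ticks before any statistics are computed.
    Thresholds are illustrative, not calibrated to any real feed."""
    clean, last_mid = [], None
    for ts, bid, ask in ticks:
        if bid <= 0 or ask <= 0 or ask < bid:      # zero or crossed quotes
            continue
        if ask - bid > max_spread:                 # spread blow-out
            continue
        mid = (bid + ask) / 2
        if last_mid is not None and abs(mid - last_mid) > max_jump:
            continue                               # isolated price spike
        clean.append((ts, bid, ask))
        last_mid = mid
    return clean

raw = [(1, 1.1000, 1.1002),   # good
       (2, 1.1001, 1.0999),   # crossed: ask < bid
       (3, 1.2000, 1.2002),   # 1000-pip spike
       (4, 1.1001, 1.1003)]   # good
print(sanitize_ticks(raw))    # keeps ticks 1 and 4
```

What counts as a "spike" or a "blow-out" is itself a quant decision; over-aggressive filtering throws away exactly the wild moves discussed further below.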
Condition 1: That said, if we scope the discussion to the "retail" benefits of analysis for end users of broker data, it becomes immediately clear you MUST use the specific broker's resulting LIVE data to be certain your quantitative results reflect a true profile of where, how and when that broker acts on supply/demand in its downstream execution and risk-management agendas. This is CRITICALLY relevant: a trend might appear, and even be, perfectly similar on two different brokers, yet the fill statistics, rejects and slippage grossly differ between them. Finding the underlying differences INSIDE a given trend that do or do not result in profitable fills then becomes a question of statistical stability and reliability, so that the trends can be ranked and compared apples to apples, and the resulting data trusted to guide trading decisions in an algo on ONLY the broker in question. If one cannot sample true, live execution to test the hypothesis QA presents, one cannot have a reliable result, as demo fills are characteristically perfect, while live data is full of failed or less-than-perfect "real world" fills. Working from demo, one must therefore discount the test results by some unknown margin.
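To make the "apples to apples" point concrete, a first pass is simply to log signed slippage per live fill and summarize it per broker. The numbers and the summary fields below are purely illustrative, not real broker data:

```python
import statistics as st

def slippage_profile(fills_pips):
    """Summarize signed slippage (fill price minus requested price, in pips).
    Positive = filled worse than requested. Inputs are hypothetical logs."""
    return {"n": len(fills_pips),
            "mean": round(st.mean(fills_pips), 2),
            "stdev": round(st.stdev(fills_pips), 2),
            "worst": max(fills_pips)}

# hypothetical live-fill logs from two brokers trading the same signals
broker_a = [0.1, 0.0, 0.2, -0.1, 0.3, 0.1, 0.0, 0.2]
broker_b = [0.4, 1.2, 0.0, 0.8, 2.5, 0.3, 0.9, 0.6]
print("A:", slippage_profile(broker_a))
print("B:", slippage_profile(broker_b))
```

Even with identical signals, a "same" trend can be profitable on broker A and not on broker B once mean and dispersion of slippage are accounted for; that is exactly why demo statistics must be discounted.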
Condition 2: In most (not all) cases, the retail trader works inside a fixed time domain of M1, M5, hourly etc., which is the basis most commonly provided to retail, the exceptions being Renko, tick and other chart variants that shift the time factor into a rate-of-change model instead of fixed time. Unfortunately MT4 supports these alternate charting methods relatively poorly, and only via custom implementations. Because the institutional LPs ahead of (or part of) the broker are NOT time based, this creates another variation in any broker's results which is absolutely broker specific. The institutional side plays off supply-and-demand initiatives to create herd invitation, before we even consider dark-pool contamination or High-Frequency Trading (HFT), both of which produce horribly deceptive secondary effects in the retail fixed time base that are very difficult to establish in a time-based or even tick-based charting process.
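The time-versus-travel distinction is easy to see in a toy Renko constructor: a brick prints only after price covers a fixed distance, regardless of how long that takes. Brick size here is an arbitrary assumption for illustration:

```python
def renko_bricks(prices, brick=0.0010):
    """Collapse a price stream into Renko bricks: a new brick prints only
    after price travels one full brick size, so fixed time drops out and
    the chart becomes a pure rate-of-change model."""
    bricks, anchor = [], prices[0]
    for p in prices[1:]:
        while p >= anchor + brick:                 # enough travel up
            anchor = round(anchor + brick, 4)
            bricks.append(("up", anchor))
        while p <= anchor - brick:                 # enough travel down
            anchor = round(anchor - brick, 4)
            bricks.append(("down", anchor))
    return bricks

print(renko_bricks([1.1000, 1.1005, 1.1021, 1.0995]))
```

Notice the second tick produces no brick at all (not enough travel), while the third produces two at once; time has been replaced by distance.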
In short, this means the volume available for timely retail fills depends SOLELY on the broker's access to fills aggregated from a larger, wider tier of volume pricing. Either the LP or the broker must aggregate and, to some degree, carry the residual difference (risk) of buying a larger chunk of currency and distributing only most of it, holding the remainder for distribution at a later time/price which the provider's risk-management algos must account for in further B- or C-book agendas (think of Alpari getting stuffed when the Swiss uncapped their currency), i.e. over-exposed from taking the other side in the dark.
A retail broker may actually be 100% transparent, and yet owned by its own LP resource WHICH IS NOT transparent; hence, garbage in. While the broker can legally claim to be truly transparent, traders may overlook that the broker holds a 1-to-1 hedge with its LP and the LP is manipulating both price and fills (scary, and very real today). DO NOT believe for a minute that regulatory commissions are the trader's friend; they are not. They simply compartmentalize which forms of manipulation occur when and where, to the provider's benefit, giving the provider a safe shield to hide behind once you consider that all LP monetary power ultimately comes from central banks and investor funds, which operate under government-backed profiteering agendas on a whole other scale, supporting the global race for GDP dominance that creates purchasing power for any given currency/government. Over time these governing agendas are strategically killing brokerages in favor of banks, which end up holding most of the volume, giving ever-greater government control on the supply side.
Mandelbrot long ago argued that these relationships and others make it LITERALLY impossible to predict market movement: the future is always changing during the now, based on the unforeseen effects of liquidity proportioning on different supply sides, alongside differing profit agendas and risk-management methods within the "decentralized" nature of Forex across multiple provider domains. For this reason alone, time-based analysis becomes a fallacy on the retail side; this is where Mandelbrot identified that most indicators with fixed bars-back domains are less than 30% accurate at any time frame, simply because market frequency is a moving target, even before the other considerations of fill statistics, etc.
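Mandelbrot's self-similarity work is usually operationalized through the Hurst exponent. As a rough sketch (one of several estimators, with known small-sample bias), the version below reads H off the scaling of lagged differences; an idealized Brownian walk lands near H = 0.5, while trending or mean-reverting series drift above or below it:

```python
import numpy as np

def hurst(series, max_lag=50):
    """Estimate the Hurst exponent from the scaling of lagged differences:
    std(x[t+lag] - x[t]) grows roughly like lag**H. H ~ 0.5 marks a pure
    Brownian walk, H > 0.5 persistence, H < 0.5 mean reversion."""
    lags = np.arange(2, max_lag)
    tau = [np.std(series[lag:] - series[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(tau), 1)
    return slope

rng = np.random.default_rng(42)
brownian = np.cumsum(rng.normal(size=5000))  # idealized random walk
print(round(hurst(brownian), 2))             # near 0.5 for a Brownian walk
```

The moving-target point is that H measured on real price series is itself unstable over time, which is one concrete reason fixed bars-back indicators lose accuracy.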
Condition 3: real time domain versus "relative time". In any periodic data set such as bar data, there is "normal" variation given by pure supply and demand relative to a theoretical random mean, versus non-normal variation, a result of manipulation beyond pure supply/demand semantics. Since the supply side works from volume-based criteria ONLY while retail acts on a time-based event profile, one must have a means to differentiate "wild" volatility from normal and literally remove the effect of time (and hence correlate frequency modality) in order to rank a comparison of the two. This is where Mandelbrot identified that a Brownian walk lacks real-world wild volatility, so Monte Carlo testing is defeated unless deterministic controls reintroduce wild volatility into the Brownian set as part of a valid comparison. By formulating the data so as to dissect, and therefore remove or model against, fixed-time wild volatility, retail can then compare the continuum of "self-similarity" to rank how reliable some part of a defined trend is, improving probability against things like a stop-hunt concatenation of an otherwise reliable Elliott wave projection inside the fixed-time domain. One is essentially removing "fixed time" in order to manage the decision tree on "relative time", subject to the quantified volatility being reliable to some degree better than a simply random neutral mean (50/50 probability) of supply and demand.
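The "Brownian walk lacks wild volatility" point can be demonstrated in a few lines: Gaussian returns (the naive Monte Carlo assumption) and fat-tailed Student-t returns rescaled to the same variance look identical on average, yet differ enormously in extreme moves. The distributions and parameters are illustrative, chosen only to make the contrast visible:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# the Brownian assumption behind a naive Monte Carlo: Gaussian returns
gauss = rng.normal(size=n)
# Student-t returns (3 degrees of freedom) rescaled to unit variance:
# same average volatility, but with the "wild" fat tails real prices show
fat = rng.standard_t(df=3, size=n) / np.sqrt(3.0)

for name, r in (("gaussian  ", gauss), ("fat-tailed", fat)):
    print(name, "worst |move|:", round(float(np.abs(r).max()), 1),
          " moves beyond 5 sigma:", int((np.abs(r) > 5).sum()))
```

A Monte Carlo built on the Gaussian column will essentially never simulate a stop-hunt-sized move; any valid comparison has to reintroduce the fat-tailed behavior deliberately.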
In conclusion, the desire to understand and make use of quantitative methods requires that one understands these "relative" considerations and also creates a "sanitary" concept by which to make decisions reliably toward an improved end effect. That end effect will NOT be prediction of future outcomes; it can ONLY ever be mitigation of loss probability. In other words, we CAN determine when the market is less likely to hurt us, on a ranking better than 50/50, and thereby attain "safer" profit, but we cannot determine the degree to which we are likely to profit; we can only limit ourselves to settings in time less likely to cause loss, i.e. loss mitigation (conservative), not profit projection (greed motivated).
Not wanting to drag the thread off into never-land: the OP is, or was, on the right track, but to afford a more reliable answer the OP would need to identify more fully which aspect of market analysis is being sought, and by what means or method that greater accuracy or dynamic QA feedback is intended to be used, whether in an EA, an indicator, or even a manual method.
If most of this reply is outside the reader's familiarity, it's likely a good idea to look up some in-depth discussions of quantitative analysis, volatility and time-based domains first, to gain a fuller understanding and thereby be equipped to look further into quant methods and formulate well-constructed concepts or ideas. It is safe to say that most classical trading methods of years gone by have fallen victim to regulatory disguise, dark-pool contamination and HFT transactions at banks and hedge firms, which now operate at microsecond transaction speeds. It has been theorized by at least one of the few leading authorities in banking quant design that all markets today are subject to SOME degree of flash-crash delay, upwards of 20 times per day on all instruments, making price-based decision making highly volatile in many hidden respects when real-time pricing lags.
This alone tells us how unstable any simple answer is. Government-subsidized manipulation has grown rapidly since the few years just prior to the U.S. housing crash of 2008, the statistically verifiable point at which the U.S. became intensely dependent on quant manipulation in large finance to generate synthetic funding protocols in order to "churn" and supposedly strengthen U.S. dominance in global GDP initiatives, shoring up the U.S. Dollar. The degree to which retail traders are subjected to this setting is unprecedented today, and the damage to any truly fair market conditions was done long ago by such unethical initiatives, hiding behind Dodd-Frank and the facade of a safer setting for investors, with increased compliance destroying the true competition free markets depend on. Such agendas have left retail doing battle in the vapor trail of a comet out of control, while China in as many years has risen to hold as much as 70% of the world's true purchasing power in the hard assets it now dominates, and the U.S. has become ever more dependent on the yoke of "free trade", increasingly losing internal GDP stability and becoming further dependent on purchasing than ever before. Is this the definition of being the world's leader in financial vitality, or simply the front-runner of justification for greater military dominance to protect an evaporating U.S. Dollar?
Quant analysis does and will work, at the very high expense of personal learning and dedication, just as real change only sustains itself through greater education. But be well prepared to deal with what it may reveal, so that expectations remain conservative, pressing on toward safer gains among what remains as the ground continues to shift beneath our feet. "Who then will judge those who sit in the seat of judgment?" Hopefully, those who will learn and create change.
The degree to which QA benefits retail grows in proportion to the degree to which QA is being used to negatively impact retail trading; therefore the conclusion above is not a digression from the topic, but is actually the underlying reason the topic is so important, so long as one understands that reason.
Best wishes.