HCombs manages robots created by HoneyMaster. It runs them and sends analysis results to HoneyBee, the MetaTrader5 connector. HoneyBees calculate deal sizes, open deals, manage capital, and draw objects in the corresponding MT5 charts for all opened deals. For every instrument that you want to trade there must be a HoneyBee running in a MetaTrader chart, regardless of the timeframe.
HCombs has its own historical database. Before every analysis it updates the required historical data file in RAM with the feed from MT5 and saves the changes to the filesystem once every 10 hours. All data specific to HoneyCombs (robots, historical data, configuration) is stored in the "Roaming" folder under the user's %AppData% folder. Example: C:\Users\UserName\AppData\Roaming\HoneyCombs.
HBees communicate with HoneyCombs' server via a common, externally synchronized timer. They provide trading statistics and the data feed. HoneyBee input parameters provide an additional layer of configuration:
Parameters you need to know:
Other values are for experts only and should be changed only if you encounter a broker- or platform-specific problem. It's better to consult with us before altering them. To adapt HoneyBee to a specific broker you may need to change these parameters: Slippage, Lot size, Max allowed spread. The other parameters should work just fine with their default values with any broker.
HoneyBee has a few buttons drawn over the chart. You can always:
Additionally, for every trade made, informative arrows are drawn:
To help you easily manage HoneyBees we provide two helper scripts: "Launch HBees" and "Close HBees".
They are very simple to use: just specify the instruments you want to initialize HBees for and launch the script on any chart window. The script will open multiple charts one by one. The only broker-specific problem you may encounter is postfixes. Some brokers, especially for DEMO accounts, append suffixes to instrument names, e.g. "EURUSD" may be called "EURUSD_i". You have to specify this "_i" postfix explicitly. The same goes for HoneyCombs and HoneyMaster: they need to be told explicitly if the broker uses postfixes.
To simplify configuration we suggest creating custom templates for your HoneyBees. To do this, initialize HoneyBee with your preferred settings in any chart window, then right-click on the chart -> Templates -> Save Template… Name your custom template "HoneyBee_My1.tpl" or "HoneyBee_My2.tpl" (the script supports up to 4 user-defined templates). Now you can always use the "Launch HBees" script with your own template. This way you can initialize HBees in batch fashion with the customized parameters that suit you best.
In the main grid of HoneyCombs you can observe all loaded robots found in the /Robots folder. Each robot has its own statistics: deals opened and closed, profit, time on, statistical and theoretical PPD and DPD. We consider PPD and DPD the most important and informative values. PPD stands for "Percent Per Day", DPD for "Deals Per Day".
Theoretical PPD and DPD are the values obtained during the optimization process in HoneyMaster. Statistical values are obtained during actual trading, from the ticket database. HCombs maintains its own ticket database, updating it from time to time with the data feed from MT5.
Percent per day is the profit made by the robot relative to the capital size. 0.01% PPD means that, with a capital of 1000$, the robot makes (0.01% * 1000$ = ) 10 cents every day. In HMaster PPD values are calculated both arithmetically and geometrically, but in HCombs, for simplicity, only arithmetic values are displayed.
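The manual does not show HMaster's exact formulas, so the following is a hedged sketch of how arithmetic and geometric PPD could plausibly be computed; the function names are illustrative, not part of the product:

```python
def arithmetic_ppd(initial_capital, final_capital, days):
    """Average daily profit as a flat percentage of the starting capital."""
    total_profit = final_capital - initial_capital
    return total_profit / initial_capital / days * 100

def geometric_ppd(initial_capital, final_capital, days):
    """Equivalent compounded daily growth rate, in percent."""
    return ((final_capital / initial_capital) ** (1 / days) - 1) * 100

# A capital of 1000$ earning 10$ over 100 days:
# arithmetic_ppd(1000, 1010, 100) -> 0.01 (i.e. 0.01% PPD)
# geometric_ppd is slightly lower because gains compound.
```

The two values diverge as profit grows: the geometric figure answers "what constant daily growth rate would produce this result", while the arithmetic one simply averages the profit.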
In the "Th. PPD" column you may see small charts. These are very helpful "capital history" charts recorded during optimization. They help identify robots that perform almost identically (their charts look alike) so you can remove them. The main goal is to diversify trading as much as possible, and running a bunch of robots that may open the same deal multiple times goes against this concept. You should not run robots with nearly identical trading behavior.
Statistical values are calculated over a period of time. This period is set in the "Charts Configuration" window. To configure this period and the recalculation rate, simply right-click on the profit chart or any gauge and choose "Configure…".
In the main grid you may also assign a specific EA to a robot. You need to do this only in Manual Assignment mode; by default HCombs does it for you automatically.
To turn a robot on/off, left-click on the left column displaying the robot's ID. For additional options use right-click: this way you can launch/stop all robots at once, edit their amins, and see optimization results.
To start trading you need to perform just three steps:
All other controls are set up by default so that you don't need to think about them. But if you do, every button and checkbox has an informative hint that will help you identify its purpose.
HoneyCombs is developed for both x32 and x64 Windows systems. Here is what you need to run it:
The main principles behind HMaster are deep diversification and adaptive behavior.
This optimization engine allows you to test millions of different robots, record detailed statistics during optimization, and genetically evolve your trading system in the most convenient way. We hard-coded dozens of trading and forecasting paradigms into the engine in C++, making it the most powerful trading optimizer on the market.
To ensure its superiority we designed highly adaptive trading algorithms that follow the market and adapt in response to its changes. One of the things that creates such behavior is our Pattern Recognition System. It detects common patterns, categorizes them, and uses the gathered data to forecast the evolution of the current (last) market wave as well as future wave formation. This system is recursive: it understands multiple frequencies and wave harmonics, and its logic is based on vector projections and inner pattern relations, such as levels and vector types.
All forecasting routines of HMaster are based on different kinds of probability distribution grids created from statistical data gathered over a period of time. This way the "Honey" engine responds well to any market changes: volatility changes, common market pattern changes, absolute price value changes, and other changes of the market's "character". This adaptive behavior is the reason why HMaster is able to create trading robots that do not merely perform profitably during optimization on 1-2 years of historical data and then lose the deposit, but earn money on 12+ years of data on multiple instruments simultaneously with drawdowns as low as 5-15%.
HM’s layout is logically divided into three tabs: Input, Analysis and Output. Let’s take a look at the Analysis tab…
HMaster and its algorithms can work with any type of market and any instrument, as long as you provide a standardized financial time series data feed. Data can be imported in the following .txt format; timestamps must be GMT+0 without daylight saving changes: Data format sample. Note that the length of the price values (byte-wise) may differ, but the header must be exactly 64 bytes long. HMaster parses the file looking for floating point values and timestamps in ANSI-encoded 8-bit text format.
In the standard installation package we provide 18 main Forex instruments, all with 16-18 years of data:
HoneyMaster keeps historical data in its internal binary file format: ".quot". Data can be updated with the feed from MetaTrader5: for this you need to launch an EA-helper in MT5 called "HoneyUpdater". It downloads data in the background and sends it to HMaster on request. If your broker uses postfixes for instrument names, e.g. "_i" in "EURUSD_i", you need to explicitly specify them in HMaster as well as in HUpdater (see the first string parameter).
HUpdater requires .dll imports to be allowed: it communicates with HMaster through MT5dllNamedPipes.dll. To start a data update just hit the "Update data" button in HM. Note: MT5 may fail to download all needed data at once. In this case wait a few minutes and try again. You can always update a single file by using the options that pop up when you right-click on an item in the historical data tool.
The HM optimization engine can optimize trading strategies for a specific broker. For this you need to specify swaps and the average spread for each instrument:
We have already included a number of presets you can use. By using our optimizer you will find out that brokers' appetites are high: their spreads can "eat up" the whole profit made by a robot. That's why finding a decent ECN broker with low spreads is a must.
The optimization process in HM is built around the MAIN instrument concept. The MAIN instrument is the one you enter all input parameters for. Some amins work perfectly well with any instrument because their values are relative. But some require absolute values that depend on the instrument's price. To be able to test the same robot on a number of instruments we need to scale these values in accordance with the instruments' prices.
This scaling is performed with multipliers from the Multiplier tool. By default, all multipliers are calculated automatically. Since the MAIN instrument does not require any scaling, its multiplier = 1. Other multipliers are calculated as "average instrument price" / "average MAIN instrument price". You can override this behavior by turning the "Auto scale amins for secondary instruments" option off under the "Scaling" tab and entering all multipliers manually.
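The multiplier arithmetic described above can be sketched as follows; the function names and the example prices are illustrative assumptions, not product internals:

```python
def scale_multiplier(avg_price, avg_main_price):
    """Multiplier for a secondary instrument relative to the MAIN one."""
    return avg_price / avg_main_price

def scale_amin(value, multiplier):
    """Scale an absolute (price-dependent) amin; relative amins stay as-is."""
    return value * multiplier

# With EURUSD as MAIN averaging ~1.10 and USDJPY averaging ~110.0,
# the USDJPY multiplier is 110.0 / 1.10 = 100, so an absolute amin
# of 0.0005 (e.g. a price distance) becomes 0.05 for USDJPY.
```

The MAIN instrument's own multiplier is trivially `scale_multiplier(p, p) == 1`, which matches the rule that MAIN requires no scaling.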
The best practice is to test robots on multiple instruments. This increases the amount of statistical data and provides much better, more reliable results. We suggest that you always use EURUSD as MAIN. This simplifies things a lot.
A decent system should be optimized with at least a 5-year data time range (12 is better). We always use a 3650+ day time range and all 18 basic instruments. If a robot is profitable on at least 4-5 instruments we investigate it further.
When a combination of parameters gets scaled, HMaster creates a copy of it, keeping most of the values while scaling the others. To be able to identify this copy as the very same combination, just scaled, HM tags it with the same unique tag. This later allows results to be grouped into families, making it easy to work with multiple instruments.
The global tag is incremented after every analysis, so every combination of values always has a unique identifier. If you use multiple machines for optimization you should keep track of the tags you use. Write them to a text file and add a meaningful comment. When you start an analysis, write down the first and last assigned tags. On the other machine manually adjust the global tag so that the ranges do not cross, e.g. on the first machine launch the analysis with tags 0-99999, on the other with tags 100000-199999…
This way you will be able to combine results into a single pool and genetically sort and filter them as if they were produced on the same machine.
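The bookkeeping above amounts to partitioning the tag space into disjoint ranges, one per machine. A minimal sketch (the function and machine names are hypothetical; HMaster itself only exposes the manual global-tag control):

```python
def tag_ranges(machines, tags_per_machine):
    """Assign non-overlapping global tag ranges to each machine."""
    ranges = {}
    start = 0
    for machine in machines:
        ranges[machine] = (start, start + tags_per_machine - 1)
        start += tags_per_machine
    return ranges

# tag_ranges(["pc1", "pc2"], 100_000)
# -> {"pc1": (0, 99999), "pc2": (100000, 199999)}
```

Recording the computed ranges in your notes file before launching each analysis guarantees that results from different machines can later be pooled without tag collisions.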
This tab is used for passing the combinations you want to test to the engine's input. The tagging control is located in the control box.
You can always reset global tag to a new value. The displayed value is the tag that will be assigned to the first processed combination.
There is also an additional scaling helper: it is used for manual or semi-automatic scaling of values in the grids, not in the engine (the option in the main Analysis tab scales amins in the engine, and that is the default behavior). By default this "Auto" checkbox is off, and it is doubtful that you will ever need it. It may be useful in case of data loss (the MAIN combination was lost). If you activate the "Auto" checkbox here, then whenever you change the MAIN instrument all scalable values in the grids get scaled in accordance with the new average price.
All parameters that the HM engine takes in are divided into three groups: WAVE, FORECAST and TRADE. There are several reasons behind this architecture, the most important being speed. Analysis is performed in three nested loops, with WAVE as the outermost. The most CPU-hungry procedures (mostly pattern recognition and the gathering of statistical data) are processed in the WAVE cycle: this way they are performed only once for a whole group of FORECAST cycles and the TRADE cycles nested in them, which eliminates the need to repeat them for every input combination of parameters. This approach increases optimization speed many times over.
FORECAST procedures include processing the gathered statistical data and forecasting market evolution.
TRADE routines consist of final decision-making, virtual trading, and recording trading results.
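The loop-hoisting idea behind the three groups can be sketched like this. Everything here is illustrative: the helper functions stand in for HMaster's proprietary routines, and the trivial arithmetic only demonstrates the control flow, not real trading logic:

```python
def gather_statistics(wave):
    """Stand-in for the expensive WAVE work (pattern recognition, stats)."""
    return sum(wave)

def make_forecast(stats, forecast):
    """Stand-in for FORECAST processing of the gathered statistics."""
    return stats * forecast

def simulate_trading(prediction, trade):
    """Stand-in for TRADE decision-making and virtual trading."""
    return prediction + trade

def optimize(wave_combos, forecast_combos, trade_combos):
    results = []
    for wave in wave_combos:
        # Expensive work runs once per WAVE combination and is then
        # reused by every nested FORECAST and TRADE cycle.
        stats = gather_statistics(wave)
        for forecast in forecast_combos:
            prediction = make_forecast(stats, forecast)
            for trade in trade_combos:
                results.append(simulate_trading(prediction, trade))
    return results
```

With W WAVE, F FORECAST and T TRADE combinations the engine evaluates W*F*T results, but the heavy WAVE work runs only W times rather than W*F*T times, which is where the speedup comes from.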
For each group of amins there is a separate grid and a preset manager. You can save grids, import, update and delete them. Each row of the FORECAST and TRADE grids contains a single combination of amins. The WAVE grid is a special case: it contains amins in a two-dimensional (2D) matrix. To easily manage multiple WAVE grids we created the WAVE BUFFER.
The WAVE BUFFER contains multiple grids. It can be saved to the filesystem just as FORECAST or TRADE grids can, but in this case a single element is a whole grid. You can add elements (whole WAVE grids) to the WAVE BUFFER, delete them, update them, and send them to the HM engine's input.
Every input parameter has a description that explains its purpose, specifies recommended values for the amin, tells whether it is scalable, and describes important interactions with other amins. You can always tell whether an amin is active or not: blacked-out cells indicate amins that do absolutely nothing, even if you change their value. Gray cells indicate amins that are inactive unless you change their value.
To configure WAVE group of parameters you may use WAVE MASTER.
It helps in configuring WAVE amins by providing a user-friendly interface. All changes made in WAVE MASTER are instantly passed to the grid. Remember: the values that get sent to the HM engine are always the ones in the grid, not the ones in WAVE MASTER.
To specify a range of values with a step, use the "Ranger" tool (right-click on the grid).
It works a bit differently for the WAVE and FORECAST/TRADE grids. If you set up ranges for the WAVE grid, then the next time you press the "Add" button or push a grid directly to HM's engine, a group of grids will be pushed instead of a single grid. This group is created by mixing all possible combinations of the values specified with Ranger. Be careful when specifying more than just a few values for a few amins: the resulting number of combinations may reach billions of billions of billions…
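The combinatorial mixing that Ranger performs is a Cartesian product of the per-amin value ranges. A sketch with hypothetical amin values (Python's `itertools.product` does exactly this kind of mixing):

```python
from itertools import product

def ranger(start, stop, step):
    """Values from start to stop inclusive, with the given step."""
    values = []
    v = start
    while v <= stop:
        values.append(v)
        v += step
    return values

# Two ranged amins with 3 values each mix into 3 * 3 = 9 combinations:
combos = list(product(ranger(10, 30, 10), ranger(1, 3, 1)))
```

This is why the combination count explodes: each additional ranged amin multiplies the total, so five amins with ten values each already yield 10^5 combinations.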
To use the Ranger tool with F/T grids, select a group of cells and right-click.
When editing F/T grids you may use the "Brush" tool. It is found in the Ranger tool when you right-click on the grid. This tool simply copies values from one group of cells to another. First select the cells you want to create a brush from, then click "create brush", then paint with the cursor over the grid. Click "forget brush" when you have finished copying.
For each group of amins there is a separate input buffer. You can fill it directly from the grids by clicking the "+" button, or clear it. During optimization the HM engine will mix W, F and T combinations of amins together. This way 10 W, 10 F and 10 T combinations produce a total of 10*10*10 = 1000 W-F-T combinations.
Each group of amins passed to the engine gets a type-specific tag assigned. These tags, named w_tag, f_tag and t_tag, are used intensively during genetic filtering and sorting. This approach helps to group analysis results into families by W/F/T genes.
Before passing a W-F-T combination to the engine's input you can perform a quick test to see if it works. QPA processes the parameters currently displayed in the WAVE grid plus the amins highlighted by the darkened gradient in the FORECAST and TRADE grids. You can use QPA only with a single combination. By default the time scope for QPA is 1000 bars, but you can specify a wider scope.
In this window you can observe separate charts for all used indicators, a time series chart containing graphical info on every trade made, and a capital history chart (values are calculated arithmetically). When you click on any bar you can observe detailed info gathered by the HM engine during analysis for that particular market situation. This info helps to identify the reasons behind the robot's behavior.
Every indicator partially works as a prefilter: it can abort the subsequent analysis if the market situation did not pass examination. You can always tell if a certain indicator approved the market situation: there will be a corresponding highlighted region in the chart, and a readable explanation in the info box.
After filling the engine's input with combinations and loading historical data you can start the analysis…
When HM engine finishes analysis it forms output. There are three output options:
In a typical optimization process it is most convenient to just write .conf files. They store the processed W-F-T combinations of amins, the corresponding optimization information (PPD, DPD, Math. Expectation values, etc.), and the trading history (TP, SL, Prognosed Wave, and Profit for every deal). Writing trading history is optional because it usually requires a lot of storage space.
We recommend that you perform the first 5-6 genetic cycles without recording trading history. All you really need to know at this stage are the DPD, PPD, Max drawdown, Math. Expectation, and Wins/Losses ratio values. Trading history becomes helpful when you are testing the very best combinations, the candidates for export.
Output to *.txt files is a legacy option; it is very unlikely that you will ever need to use it. *.txt files contain information in a human-readable ANSI-encoded 8-bit text format.
When you pass results directly to Output they are stored in your machine's RAM. If you did not also specify the "*.conf" option, all data will be lost when you close HMaster.
To load *.conf or *.robot files use the "Import" controls. You can specify a folder containing these files or a single file. For processing large amounts of data that cannot be stored in RAM there is a "prefilter" option. It avoids loading into RAM combinations that do not meet the prefilter requirements. It is preconfigured well enough by default, but you can tighten the filtering in case of large amounts of data. Keep an eye on the amount of free RAM in your system: there is an info box at the top of HMaster's window. Note that HoneyMaster needs a good overhead for filtering and sorting procedures.
All data that you load is stored in the Main Buffer. It is untouchable: you can't modify or access it in any way. To work with this data you need to copy it to the Sub Buffer. For this, click "Simple Sort". The loaded combinations will be passed to the Sub Buffer and initially sorted by PPD value.
If you need to load new files into the Sub Buffer, clear the Sub Buffer first. The next execution of any sorting algorithm will refill the Sub Buffer with elements from the Main Buffer.
Remember: the most profitable combinations (highest PPD) are usually the result of overtraining. What really matters is for all four basic parameters (PPD, DPD, Max Drawdown, Math. Expectation) to be in a particular range.
Math. Expectation shows how much profit the robot makes per amount of capital risked. For example: 1% means that if you risk 100$ in a single deal, on average you will get 1$ of profit. -1% means that you will lose 1$ for every 100$ risked.
PPD stands for "Percent Per Day". 0.02% PPD means that if your capital is 1000$ the robot will make 0.02% * 1000$ = 0.2$ per day. This value is calculated both arithmetically and geometrically. You can see both in the robot's description.
DPD stands for "Deals Per Day". 0.1 DPD means that the robot opens 1 deal every 10 days. We do not need robots that open fewer than 1 deal per 20-30 days: they are easily overtrained because there is simply not enough statistical data for optimization. A robot should not open deals too often either: 1 deal per 3-20 days is the most effective trading frequency.
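The arithmetic behind the Math. Expectation and DPD examples above is simple enough to sketch directly; the function names are illustrative, not part of HMaster:

```python
def math_expectation(total_profit, total_risked):
    """Average profit per unit of capital risked, in percent."""
    return total_profit / total_risked * 100

def days_per_deal(dpd):
    """Average number of days between deals for a given DPD."""
    return 1 / dpd

# math_expectation(1, 100) -> 1.0   (1$ profit per 100$ risked)
# days_per_deal(0.1)       -> 10.0  (0.1 DPD = one deal every 10 days)
```

The recommended frequency band of 1 deal per 3-20 days therefore corresponds to a DPD roughly between 0.05 and 0.33.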
Max drawdown is probably the most important property. It indicates the maximum loss detected relative to the maximum capital value registered.
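The manual does not give HMaster's exact drawdown formula; a common peak-relative definition that matches the description ("losses relative to the maximum capital value registered") can be sketched as:

```python
def max_drawdown(capital_history):
    """Largest percentage drop from the highest capital value seen so far."""
    peak = capital_history[0]
    worst = 0.0
    for value in capital_history:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst * 100  # percent

# max_drawdown([1000, 1100, 990, 1200])
# dips from the 1100 peak to 990, i.e. roughly a 10% drawdown
```

Note the peak is tracked as the run progresses, so a later, higher peak does not erase a drawdown that happened earlier.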
The Wins/Losses ratio indicates how many profitable deals the robot made in proportion to losing deals. It helps in filtering out overtrained robots. Example: a robot made 1000 losing deals and only 20-30 profitable deals, but the Stop Loss / Take Profit ratio was configured in such a way that those 20 deals made more money than the 1000 losing deals lost (a very high TP with a small SL). Even if this is, theoretically, a profitable configuration, 20 deals is a very small amount of statistical data. If the robot had made only 10 winning deals (a 50% deviation), the configuration would no longer be profitable. We need to filter out robots with a small Wins/Losses ratio to protect ourselves from such overtraining.
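The fragility described above is easy to demonstrate numerically. The deal counts and sizes below are hypothetical, chosen only to mirror the 1000-losses / 20-wins example:

```python
def strategy_expectancy(wins, losses, avg_win, avg_loss):
    """Average profit per deal given win/loss counts and sizes."""
    total_deals = wins + losses
    return (wins * avg_win - losses * avg_loss) / total_deals

# 20 wins of 60$ against 1000 losses of 1$: profitable on paper...
paper = strategy_expectancy(20, 1000, 60.0, 1.0)   # positive
# ...but halve the rare wins and the same system loses money:
halved = strategy_expectancy(10, 1000, 60.0, 1.0)  # negative
```

Because the profit rests entirely on a handful of rare wins, a small statistical fluctuation in their count flips the sign of the expectancy, which is exactly why low Wins/Losses ratios are filtered out.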
In the first genetic cycles we need to use loosened filters. These values are a good start:
A decent robot, a candidate for export, created after a number of genetic cycles, should meet these requirements:
The exact values are a subject for speculation and improvisation. This is what we have come up with so far, and we may reconsider these values in the future. Most probably we will use tighter filters as our pools of robots grow.
To observe analysis results for a single combination you may open the info box. For this, unfreeze the layout and drag the info box somewhere on the screen. In the same way you can drag'n'drop charts while the layout is 'liquid'.
In "Simple sort" mode you can apply the "Simple" group of filters and sort combinations by any statistical value by clicking on the top row of the grid. Statistical values are displayed in the right part of the TRADE grid in "Simple sort" mode.
If you have recorded trading history and loaded it into RAM, you can observe small charts in the "PPD" column. These are "Capital history" charts. The red zone indicates the level below 100% of capital. These charts are very useful when you compile your candidate-for-export system: you can easily spot combinations that behave the same way and get rid of them. Deep diversification requires that all of your robots behave in unique ways. If some charts look alike, it means the corresponding robots open almost the same deals even if they have different amins. At the last genetic cycle you need to filter them out and leave only one robot per behavior pattern. This ensures that your robots won't open dozens of identical deals that could multiply possible drawdowns to the point of losing the whole deposit.
After initial filtering in "Simple sort" mode, if you are performing genetic optimization, apply "Family sort". In this mode combinations are grouped into families. Each family consists of combinations with the same tag (a single combination processed for multiple instruments) or with the same amin values. You can observe the combinations of a family in the "family" grid. The method of grouping depends on the chosen "Family Sort Mode". When you work in "Group by genes" mode you need to explicitly specify the genes you want to group combinations by. For this, click on the top rows of the F/T grids with the amin names, or on the W grid cells, while in "Group by genes" mode. The most comfortable and convenient way is to work in "Group by tags" mode.
In "Family sort" mode you can apply "population" filters. The most useful filter is "Min instruments number". If you analyzed a number of instruments and used "Group by tags" mode, then every family contains copies of the very same combination, just scaled in accordance with the instruments' prices. If the same combination was profitable on a decent number of instruments (5-10+), then the chances of its profitability in the future are much higher. It does not need to make a good profit on all instruments: it just must not lose any money. If you like a robot, you should use it only with the instruments it works best with. The other instruments are just helpers in the cross-examination process. We suggest that you always use all available instruments for analysis and work in this mode.
To proceed with genetic filtering apply "Split sort". In this mode the Sub Buffer is split into three different buffers: WAVE, FORECAST and TRADE, each consisting of families. If the "Use type specific tags" option is on, combinations will be grouped by the previously assigned w_tags, f_tags and t_tags. This is the most comfortable and productive way. If this option is off, grouping will be based on amin values.
Notice the new window that popped up: "WAVE split". It contains information on the combinations in the WAVE buffer. In "Split sort" mode you can observe collective averages of statistical values for W, F and T combinations separately. This helps in identifying the most profitable genes. Generally, the bigger the family (the number of combinations it contains is displayed next to the family index), the more profitable it is. The size of a family is one of the most important values at this stage of genetic filtering. It is like real life: the smartest survive and form a big population, while the Neanderthals' numbers dwindle. Our goal is to let the Neanderthals die.
There are six export options:
Options 1-2 work in any sort mode. Before using them, load the MAIN instrument first to ensure that you won't experience any data loss due to type conversions. You can send a single combination or a number of families. When you send a family, only one combination will be sent out of the whole family: the one processed for the current MAIN instrument (not scaled). To choose which ones you wish to send, click on the combination/family index cell in the grid. If the row is highlighted green, it is chosen.
Options 3-6 work only in Simple and Family modes. You cannot use them in "Split sort" mode. If you like a robot and wish to run it in HoneyCombs, select it for export and use either option 3 or 4. When you export a single combination (3), it must be the combination highlighted with the dark gradient. To choose a single combination simply left-click on the corresponding row of the F/T/"family" grid (not on the index cell). When you use batch export (4) with multiple combinations or families you need to select them first. "Chosen" combinations/families are highlighted green. To select a family or a combination, left-click on the index cells.
HCombs' robots are stored in *.robot files. Batch export will automatically name these files and assign unique IDs. Every robot must be assigned a unique ID. *.robot files contain a single robot (amins), trading history, and optimization results. All robots are stored in the %AppData%/Roaming/HoneyCombs/Robots folder. You need to specify it in the export window before exporting if HMaster did not find the HCombs location automatically.
If you are performing genetic optimization you should send the most profitable and biggest families to the Input grids (1), mutate them, and start a new genetic cycle. Sometimes you may want to send parameters directly to the engine's input (2).
From time to time you can recompile your pool of analysis results: load all the files stored on HDD/SSD, sort the combinations, filter them, and compile a new *.conf file that contains only the best combinations. This helps keep your pool of robots tight and small while removing all the unprofitable junk. Then you can delete the unused files.
HoneyMaster is developed for both x32 and x64 Windows systems. Here is what you need to run it:
HoneyMaster and HoneyCombs are preconfigured by default so that you can use them with little or no customization at all. But you may encounter some broker-specific problems.
Some MT5 brokers don't consider it necessary to provide standardized UTC timestamps in their data feed. HoneyMaster and HoneyCombs perform a number of checks to ensure that the historical data they receive is up to date and has correct timestamps. They automatically correct timestamps in accordance with your locale system settings and the broker's server settings. For standardization we use GMT+0 UTC timestamps without daylight saving changes everywhere.
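The timestamp correction described above boils down to shifting broker-server time to GMT+0. A minimal sketch of the idea, using Python's standard library (`to_gmt0` is a hypothetical helper, not part of either product; the actual correction also consults your locale settings):

```python
from datetime import datetime, timedelta, timezone

def to_gmt0(broker_time, broker_utc_offset_hours):
    """Convert a naive broker-server timestamp to GMT+0 (no DST applied)."""
    broker_tz = timezone(timedelta(hours=broker_utc_offset_hours))
    return broker_time.replace(tzinfo=broker_tz).astimezone(timezone.utc)

# A broker whose server runs at GMT+2:
# to_gmt0(datetime(2024, 1, 5, 14, 30), 2) -> 2024-01-05 12:30 UTC
```

A fixed offset is used deliberately: the standardization rule above forbids daylight saving changes, so the offset must not vary over the year.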
If HM or HC fails to update data with an "ERROR516: invalid timestamps" error, you need to explicitly specify the time offset. The info box with all known timestamps will help you identify the problem.
Some brokers, especially for DEMO accounts, append certain postfixes to all instrument names. For example: the standardized name for the EURUSD pair becomes something like "EURUSD_i". You need to specify these postfixes explicitly in HoneyMaster, HoneyCombs, and the "Launch HBees" script.