from itertools import combinations
import vectorbt as vbt

# Build every (fast, slow) pair of windows
fast_windows, slow_windows = list(zip(*combinations(windows, 2)))
fast_ma = vbt.MA.run(price, window=fast_windows, short_name='fast', speedup=True)
slow_ma = vbt.MA.run(price, window=slow_windows, short_name='slow', speedup=True)
entries = fast_ma.ma_above(slow_ma, crossover=True)
exits = fast_ma.ma_below(slow_ma, crossover=True)
Edit: in the next version speedup will become run_unique.
Compared to the run_combs method, this approach must calculate each parameter combination twice, while run_combs is optimized to calculate each combination only once.
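For comparison, the run_combs variant of the same snippet would look roughly like this (an untested sketch that reuses price and windows from above):
import vectorbt as vbt

# run_combs builds the (fast, slow) window pairs internally and runs each
# unique window only once before tiling the results into combinations
fast_ma, slow_ma = vbt.MA.run_combs(
    price, window=windows, r=2, short_names=['fast', 'slow'])
entries = fast_ma.ma_above(slow_ma, crossover=True)
exits = fast_ma.ma_below(slow_ma, crossover=True)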
Hi @polakowo,
Keep up the great work!
I made some changes to my package to streamline the process a bit. On my development branch, I have an example VectorBT and Pandas TA Golden Cross Jupyter Notebook. Would appreciate feedback on the process and integration when you get a chance. 😎
I like the other Example Notebooks you have as well. Gotta find more time to dig into them though. 😄
Kind Regards, KJ
Hi @twopirllc,
I definitely liked the notebook and the new tsignals method, which makes it more convenient to generate signals in a vectorbt format. I can't think of a way to improve the process, as we both have already made it too easy for the user to simulate basic strategies :)
If you allow me one suggestion for further improving the integration between vectorbt and pandas-ta: it would be great to have an information dictionary per indicator that states which inputs, parameters, and outputs the indicator works with. In TA-Lib you can do abstract.Function(func_name)._Function__info to get all the necessary information, whereas in pandas-ta I currently have to execute the strategy on a dummy input to see what output columns it produces, which is not great performance-wise (you can look at IndicatorFactory.parse_pandas_ta_config to see how it's currently done). Such a dictionary would allow any software to automatically traverse your indicators and build on top of them.
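For illustration, this is roughly how the TA-Lib abstract API exposes that metadata (a small sketch; the exact dictionary keys may differ slightly between TA-Lib versions):
from talib import abstract

# Query TA-Lib's indicator metadata without running the indicator
bbands_info = abstract.Function('BBANDS').info  # public counterpart of _Function__info
print(bbands_info['input_names'])   # which inputs the indicator expects
print(bbands_info['parameters'])    # tunable parameters and their defaults
print(bbands_info['output_names'])  # names of the produced outputs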
The way I would do this is to decorate each indicator method to append its info to your Category dict in __init__.py (maybe renaming it to info), so you can define the data as close to the indicator method as possible. For example, for Bollinger Bands:
@append_info(
name='bbands',
display_name='Bollinger Bands',
category='volatility',
inputs=['close'],
params=dict(length=None, std=None, mamode=None, ddof=0, offset=None),
outputs=['BBL_{length}_{std}', 'BBM_{length}_{std}', 'BBU_{length}_{std}', 'BBB_{length}_{std}']
)
def bbands(...): ...
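For what it's worth, here is a minimal sketch of what such a decorator could look like; the append_info name, the module-level info dict, and its fields are hypothetical and not part of pandas-ta:
info = {}  # hypothetical module-level registry, e.g. in __init__.py

def append_info(**indicator_info):
    """Store the indicator's metadata under its name and return the function unchanged."""
    def decorator(func):
        info[indicator_info.get('name', func.__name__)] = indicator_info
        return func
    return decorator

Any consumer could then read info['bbands'] to learn the inputs, parameters, and output column templates without running the indicator on dummy data.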
I believe this may allow for a lot of new cool use cases.
Hi @polakowo,
I definitely liked the notebook and the new tsignals method, which makes it more convenient to generate signals in a vectorbt format. I can't think of a way to improve the process, as we both have already made it too easy for the user to simulate basic strategies :)
Thank you. I do like the symbiotic integration our packages share. It certainly helps smooth the process from data acquisition to analysis to testing. 😎
If you allow me one suggestion for further improving the integration between vectorbt and pandas-ta: it would be great to have an information dictionary per indicator that states which inputs, parameters, and outputs the indicator works with. In TA-Lib you can do abstract.Function(func_name)._Function__info to get all the necessary information, whereas in pandas-ta I currently have to execute the strategy on a dummy input to see what output columns it produces, which is not great performance-wise (you can look at IndicatorFactory.parse_pandas_ta_config to see how it's currently done). Such a dictionary would allow any software to automatically traverse your indicators and build on top of them.
The way I would do this is to decorate each indicator method to append its info to your Category dict in __init__.py (maybe renaming it to info), so you can define the data as close to the indicator method as possible.
Great idea! Thanks for the suggestion. I will take a look into it. Curious how to handle the case when the output can be a Series or a DataFrame (like the indicator ta.er)? 🤔
@append_info(
# ...
# Case for Series or DataFrame output?
outputs=['BBL_{length}_{std}', 'BBM_{length}_{std}', 'BBU_{length}_{std}', 'BBB_{length}_{std}']
)
def bbands(...): ...
I believe this may allow for a lot of new cool use cases.
Could you provide a couple of simple examples?
Apologies for hijacking the thread.
Thanks, KJ
@twopirllc
I would then suggest splitting the er indicator into two different indicator instances. I would also suggest listing keyword arguments explicitly rather than using **kwargs, otherwise the user might have difficulty finding out which keyword arguments the function accepts and would need to dig into the code to figure it out. Using **kwargs is fine when passing it directly to another function, though. This issue is exactly why you want structured indicator information.
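To illustrate the point with purely hypothetical signatures (not actual pandas-ta code):
# Hard to introspect: the accepted options are hidden behind **kwargs
def some_indicator(close, length=None, **kwargs):
    ...

# Easy to discover (and to tune automatically): every option is explicit,
# and **kwargs remains only for arguments forwarded verbatim to another call
def some_indicator(close, length=10, mamode='sma', offset=0, **kwargs):
    ...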
On use cases: the first thing that comes to my mind is dashboarding. Instead of hard-coding controls for each indicator, one could simply read the info dict and automatically build a range of controls, such as parameter sliders for tuning.
In particular, vectorbt and other hyperparameter tuners would benefit, as they could discover tunable parameters that are currently hidden behind kwargs.
Annotating each parameter with its type would also help, because right now there is no automated way to tell whether a parameter is a bool or a float apart from parsing the documentation.
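As a rough sketch of the dashboarding idea, assuming the hypothetical info dict from above and using ipywidgets:
import ipywidgets as widgets

def build_controls(indicator_info):
    """Build one widget per parameter, picking the widget type from the default value's type."""
    controls = {}
    for param, default in indicator_info['params'].items():
        if isinstance(default, bool):
            controls[param] = widgets.Checkbox(value=default, description=param)
        elif isinstance(default, (int, float)):
            controls[param] = widgets.FloatSlider(value=float(default), description=param)
        else:
            # parameters defaulting to None or a string fall back to a plain text box
            controls[param] = widgets.Text(value=str(default), description=param)
    return controls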
These are all just suggestions to make your indicators fully discoverable, should you want them to be.
Best, OP
@polakowo,
Thanks for the suggestions, use cases, and your time.
Best, KJ
fast_windows, slow_windows = list(zip(*combinations(windows, 2)))
fast_ma = vbt.MA.run(price, window=fast_windows, short_name='fast', speedup=True)
slow_ma = vbt.MA.run(price, window=slow_windows, short_name='slow', speedup=True)
entries = fast_ma.ma_above(slow_ma, crossover=True)
exits = fast_ma.ma_below(slow_ma, crossover=True)
Edit: in the next version speedup will become run_unique.
Compared to the run_combs method, this approach must calculate each parameter combination twice, while run_combs is optimized to calculate each combination only once.
Hi! Thank you for the snippet. The first itertools line did the trick for me.
At the moment I am stuck on vectorbt 0.16.6, where the speedup=True feature was not yet implemented, I think. I will update soon, or better yet, learn Docker. In any case, the speed is fine for me.
I would like to point out for the casual reader that the itertools line works for "symmetric" indicators, but I eventually switched to list(zip(*product(windows1, windows2))) to get the full matrix of arbitrary, dissimilar windows. Also, closing the loop, we may need more sophisticated (conditional) entries and exits, for example:
entries = INDICATOR1_abovewhatever & INDICATOR2_abovewhatsoever
exits = INDICATOR1_belowwhatever & INDICATOR2_abovewhatsoever
In this case, INDICATOR1 is being swept over its parameters while INDICATOR2 is held constant. I realized that the & boolean operation needs INDICATOR2 to have the same dimensions and column names as INDICATOR1, repeating or duplicating its columns as needed (since it is held constant). For the casual reader, I am using this line for the task:
propagated_df_cte = pd.concat([INDICATOR2_abovewhatsoever] * matrix_elements, axis=1).set_axis(windows1, axis=1).rename_axis("whatsoever", axis="columns")
If you know a better or elegant way to do it, I am all ears.
Thank you!
@rdft4e remember that you can do INDICATOR1_abovewhatever.vbt & INDICATOR2_abovewhatsoever to compare using vectorbt's own broadcasting mechanism, which will stack the column levels if the columns are different.
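For the casual reader, a small untested sketch of that idea: two boolean frames with different column levels are broadcast against each other, and the result is expected to carry a stacked column MultiIndex, so no manual column duplication is needed:
import numpy as np
import pandas as pd
import vectorbt as vbt

# Two boolean signal frames whose columns come from different parameter sweeps
sig1 = pd.DataFrame(np.random.rand(5, 3) > 0.5,
                    columns=pd.Index([10, 20, 30], name='window1'))
sig2 = pd.DataFrame(np.random.rand(5, 1) > 0.5,
                    columns=pd.Index([50], name='window2'))

# The .vbt accessor broadcasts both frames and stacks their column levels
combined = sig1.vbt & sig2
print(combined.columns)  # expected: MultiIndex with levels ('window1', 'window2')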
Hi @polakowo! Thank you for vectorbt. I am a big fan of the hyperparameter optimization.
We all know this beautiful snippet:
I would like to try a multiple-indicator strategy, but before getting hands-on I would like to check that I can reproduce the results of the snippet above using the "hard" or suboptimal way:
I am stuck on this part: "...pass them to .vbt.combine_with_multiple together with combine_func=np.logical_and and keys set to column names ..."
Could you provide the correct snippet for this suboptimal approach? Thank you.
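As an untested sketch that only fills in the call described by the quoted documentation (fast_above and slow_above are placeholders for the individually computed crossover signals):
import numpy as np

# Combine the first signal with the others via a logical AND;
# `keys` labels the resulting columns, as the quoted docs mention
entries = fast_above.vbt.combine_with_multiple(
    [slow_above],
    combine_func=np.logical_and,
    keys=['slow'])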