Open · KiudLyrl opened this issue 7 years ago
I'm not sure the fix is a great fix, only because the original author chose the precision threshold for unknown reasons.
Also, keep in mind it has been many years since the underlying library was released.
On Apr 11, 2018, at 2:31 PM, shrew4u2do notifications@github.com wrote:
@cgohlke Thank you!
@mrjbq7 I am no expert, but I do agree that the change in the underlying C code may have unforeseen outcomes. From your perspective, if we intend to use the library for calculating indicators on values trending very close to 0, is the trade-off of multiplying the close value by, say, 1000 and dividing the resulting output by 1000 significant?
Here are results from using the unpatched vs. patched library (Python 3.6, Windows x64); each row shows the upper, middle, and lower band values:

- Patched: 0.0002510313433782578 / 0.00020173749999999983 / 0.00015244365662174183
- Unpatched, out-of-the-box values from BBANDS: 0.0002017375 / 0.0002017375 / 0.0002017375
- Unpatched, close multiplied by 1000 and output divided by 1000: 0.000251031343378 / 0.0002017375 / 0.000152443656622
It seems like the multiply/divide technique would be "okay", but as with most technical indicators, significance is ultimately determined through study and backtesting.
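A minimal sketch of that multiply/divide workaround, using the standard talib Python wrapper on made-up near-zero closes (the sample values and scale factor here are only illustrative):

```python
import numpy as np
import talib

# Made-up closes hovering around 0.0002, roughly the range in the numbers above.
close = 0.0002 + 0.00002 * np.sin(np.linspace(0, 10, 100))

SCALE = 1000.0  # a larger factor (e.g. 100000) is suggested further down the thread

# Scale up before calling BBANDS, then scale the three outputs back down.
upper, middle, lower = talib.BBANDS(close * SCALE, timeperiod=5,
                                    nbdevup=2, nbdevdn=2, matype=0)
upper, middle, lower = upper / SCALE, middle / SCALE, lower / SCALE

print(upper[-1], middle[-1], lower[-1])
```

Apart from one rounding step in the multiply and one in the divide, the scaling doesn't materially change the math; its only job is to keep the intermediate standard deviation well above the library's near-zero cutoff.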
I'm not sure the change to the C code is bad, but since that whole "almost zero" concept is there, I do wonder why it was chosen the way it was. Since most of the code was probably used on dollars-and-cents stock prices, maybe it's just a matter of updating the constant. Or maybe that introduces some subtle error when operating on stock prices.
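To make the "almost zero" question concrete, here is a small illustration. It assumes (based on the unpatched behaviour above) that the C code clamps a variance below some fixed epsilon on the order of 1e-8 to zero before taking the square root; the exact constant and check live in the C sources.

```python
import numpy as np

# Dollars-and-cents stock prices vs. a near-zero crypto price series.
stock_close = 100.0 + 0.5 * np.sin(np.linspace(0, 10, 20))
crypto_close = 0.0002 + 0.00002 * np.sin(np.linspace(0, 10, 20))

EPSILON = 1e-8  # assumed order of magnitude of the library's near-zero cutoff

for name, close in [("stock", stock_close), ("crypto", crypto_close)]:
    variance = close.var()
    # If the variance is clamped to zero, the standard deviation and hence
    # the band width collapse, and upper == middle == lower.
    print(name, variance, "clamped:", variance < EPSILON)
```

Dollar-scale prices clear a cutoff like that by many orders of magnitude, while near-zero prices fall under it, which is consistent with both the rescaling workaround and the idea of updating the constant.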
Maybe the best way is for the crypto to rise in price so much it’s got larger price values. Haha. HODL!
Total buzzkill when trading Japanese Yen FX :) Multiplying by 100,000 and then dividing the BBANDS results by the same seems to address the issue.
Hi, I need to rebuild ta-lib for x64. I can see the nmake command creating some .lib files in the target folder. Could you please let me know how I can install these libs using pip? Thanks
Hi @kumpulin1, if you've compiled the underlying TA-Lib library as 64-bit, you just need to make sure the location it's built in is passed to the `pip install ta-lib` command so Python can find the libs. By default it assumes you "unzip the binary" to `C:\ta-lib` and looks for `C:\ta-lib\include` and `C:\ta-lib\lib`. So, either put them there or change the locations in `setup.py` before installing...
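Once the 64-bit build is in place and `pip install ta-lib` succeeds, a quick smoke test confirms the wrapper actually found the compiled library (a minimal sketch; the sample data is made up):

```python
import numpy as np
import talib

# If the C library is missing or the wrong architecture, the import above is
# typically where things fail (e.g. a DLL load error on Windows).
print(len(talib.get_functions()), "indicators available")

close = np.arange(30, dtype=np.float64)
print(talib.SMA(close, timeperiod=5)[-1])  # simple moving average of the last 5 values
```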
If someone is still struggling with this issue, take a look at this comment.
The latest `pip install talib` gives the same error.
Follow the installation instructions in the README.
Hi,
I tried to get the Bollinger Bands; my data are valid since the EMA and RSI look good. However, in my output upperBB == middleBB == lowerBB; why is that? I also tried without nbdevup=2, nbdevdn=2.
Thanks.
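For what it's worth, when the close values are tiny (the scenario this issue tracks), that symptom can be reproduced against an unpatched library roughly like this (the prices below are made up):

```python
import numpy as np
import talib

close = 0.0002 + 0.00002 * np.sin(np.linspace(0, 10, 100))

upperBB, middleBB, lowerBB = talib.BBANDS(close, timeperiod=20,
                                          nbdevup=2, nbdevdn=2)

# With an unpatched build the tiny variance is clamped to zero, so all three
# bands come back equal to the moving average.
print(upperBB[-1], middleBB[-1], lowerBB[-1])
```

If that matches what you're seeing, the multiply/divide workaround earlier in the thread (scale the closes up before the call, scale the three outputs back down) is the usual way around it.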