Closed x777 closed 3 years ago
So this is a known issue: it happens when making REST requests to the servers too often. I'm working on resolving it and it will be fixed in the next release. In any case, it doesn't affect your code flow: it appears on the console, but your code should keep working without issue.
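That log line comes from a request-retry path in the SDK. A minimal stdlib sketch of the behavior being described (function names here are illustrative, not the SDK's actual API):

```python
import time

def get_with_retry(fetch, retries=3, wait=3):
    """Sketch of retry-with-delay behavior behind the
    'sleep N seconds and retrying ... M more time(s)' log line.
    `fetch` is any callable that raises on a failed request."""
    last_exc = None
    for attempt in range(retries + 1):
        try:
            return fetch()
        except Exception as exc:  # in the real SDK this would be an HTTP/rate-limit error
            last_exc = exc
            if attempt < retries:
                print('sleep %d seconds and retrying (%d more time(s))'
                      % (wait, retries - attempt))
                time.sleep(wait)
    raise last_exc
```

This is why the message alone does not mean the request ultimately failed: as long as one of the retries succeeds, the caller gets its result.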
@shlomikushchi OK, but I don't see the logs from my algorithm, so I am not sure it's working. It looks like I am not receiving data from the feed.
Which data source are you using, the Alpaca data API or Polygon?
@shlomikushchi Polygon, with AlpacaStore. Without the Alpaca broker it works fine.
Are you using an IDE? Try putting a breakpoint in the next() of your strategy file and let's see if it gets there.
@shlomikushchi Yes, I put a breakpoint inside next(). I see only "Starting Portfolio Value: 99695.46" and then, while waiting, again:
sleep 3 seconds and retrying https://paper-api.alpaca.markets/v2/account 3 more time(s)
OK, I understand. I will try to debug this soon and will let you know. Could you check which version of alpaca-trade-api-python you have installed? Just run: pip freeze | grep alpaca-trade-api-python
@shlomikushchi pip freeze | grep alpaca-trade-api-python finds no such library. I only have:
alpaca-backtrader-api==0.7
alpaca-trade-api==0.46
Could you update alpaca-trade-api to the latest version and try again?
/home/x777/anaconda3/envs/env_backtrader/lib/python3.6/site-packages/alpaca_trade_api/stream2.py:151: UserWarning: Discarding nonzero nanoseconds in conversion
await handler(self, channel, ent)
And now I can't use it even without the Alpaca broker; it looks like I need to downgrade?
Traceback (most recent call last):
  File "/home/x777/anaconda3/envs/env_backtrader/lib/python3.6/asyncio/sslproto.py", line 650, in _process_write_backlog
    self._transport.write(chunk)
  File "/home/x777/anaconda3/envs/env_backtrader/lib/python3.6/asyncio/selector_events.py", line 756, in write
    self._fatal_error(exc, 'Fatal write error on socket transport')
  File "/home/x777/anaconda3/envs/env_backtrader/lib/python3.6/asyncio/selector_events.py", line 634, in _fatal_error
    self._force_close(exc)
  File "/home/x777/anaconda3/envs/env_backtrader/lib/python3.6/asyncio/selector_events.py", line 646, in _force_close
    self._loop.call_soon(self._call_connection_lost, exc)
  File "/home/x777/anaconda3/envs/env_backtrader/lib/python3.6/asyncio/base_events.py", line 595, in call_soon
    self._check_closed()
  File "/home/x777/anaconda3/envs/env_backtrader/lib/python3.6/asyncio/base_events.py", line 381, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
Task was destroyed but it is pending!
task: <Task pending coro=<WebSocketCommonProtocol.transfer_data() done, defined at /home/x777/anaconda3/envs/env_backtrader/lib/python3.6/site-packages/websockets/protocol.py:818> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f19677c7b58>()]> cb=[<TaskWakeupMethWrapper object at 0x7f19677c7dc8>()]>
Task was destroyed but it is pending!
task: <Task pending coro=<WebSocketCommonProtocol.keepalive_ping() done, defined at /home/x777/anaconda3/envs/env_backtrader/lib/python3.6/site-packages/websockets/protocol.py:1103> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f19677c7d38>()]>>
Exception ignored in: <coroutine object WebSocketCommonProtocol.close_connection at 0x7f195f4600a0>
Traceback (most recent call last):
  File "/home/x777/anaconda3/envs/env_backtrader/lib/python3.6/site-packages/websockets/protocol.py", line 1206, in close_connection
    if await self.wait_for_connection_lost():
  File "/home/x777/anaconda3/envs/env_backtrader/lib/python3.6/site-packages/websockets/protocol.py", line 1229, in wait_for_connection_lost
    loop=self.loop if sys.version_info[:2] < (3, 8) else None,
  File "/home/x777/anaconda3/envs/env_backtrader/lib/python3.6/asyncio/tasks.py", line 342, in wait_for
    timeout_handle = loop.call_later(timeout, _release_waiter, waiter)
  File "/home/x777/anaconda3/envs/env_backtrader/lib/python3.6/asyncio/base_events.py", line 564, in call_later
    timer = self.call_at(self.time() + delay, callback, *args)
  File "/home/x777/anaconda3/envs/env_backtrader/lib/python3.6/asyncio/base_events.py", line 574, in call_at
    self._check_closed()
  File "/home/x777/anaconda3/envs/env_backtrader/lib/python3.6/asyncio/base_events.py", line 381, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
Task was destroyed but it is pending!
task: <Task pending coro=<WebSocketCommonProtocol.close_connection() done, defined at /home/x777/anaconda3/envs/env_backtrader/lib/python3.6/site-packages/websockets/protocol.py:1153> wait_for=<Task pending coro=<WebSocketCommonProtocol.transfer_data() done, defined at /home/x777/anaconda3/envs/env_backtrader/lib/python3.6/site-packages/websockets/protocol.py:818> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f19677c7b58>()]> cb=[<TaskWakeupMethWrapper object at 0x7f19677c7dc8>()]>>
^CStopping Backtrader
Traceback (most recent call last):
  File "alpaca.py", line 127, in <module>
    cerebro.plot(style='candlestick')
  File "/home/x777/anaconda3/envs/env_backtrader/lib/python3.6/site-packages/backtrader/cerebro.py", line 996, in plot
    plotter.show()
  File "/home/x777/anaconda3/envs/env_backtrader/lib/python3.6/site-packages/backtrader/plot/plot.py", line 814, in show
    self.mpyplot.show()
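The root cause of the tracebacks above is generic asyncio behavior, reproducible with the stdlib alone: once an event loop has been closed, scheduling anything on it raises RuntimeError, which is exactly what the still-pending websocket coroutines hit during teardown.

```python
import asyncio

# Minimal reproduction of the "Event loop is closed" error:
# scheduling a callback on a closed loop raises RuntimeError,
# just like the pending websocket tasks do at shutdown.
loop = asyncio.new_event_loop()
loop.close()
try:
    loop.call_soon(print, 'too late')
except RuntimeError as exc:
    print(exc)  # Event loop is closed
```

The "Task was destroyed but it is pending!" warnings are the companion symptom: tasks that were never awaited or cancelled before the loop closed.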
I downgraded alpaca-trade-api to 0.46 and had to upgrade alpaca-backtrader-api to 0.7.1 because of an event loop error I hadn't seen before.
Next time I will do updates in a new environment; for now I can at least use your libraries with a live data feed for testing, without the Alpaca broker.
OK, just remember to update all packages when we release the new version.
I mean with the old versions. With the new ones it's not working at all. But I will try with a new env from scratch.
Try the new version and check if you still get this error: https://github.com/alpacahq/alpaca-backtrader-api/releases/tag/v0.8.0
@shlomikushchi What version of Python do I need? I can't install 0.8.0, even from scratch.
@shlomikushchi pip could not find version 0.8.0.
try this: pip install git+https://github.com/alpacahq/alpaca-backtrader-api@v0.8.0
pip freeze:
alpaca-backtrader-api==0.8.0
alpaca-trade-api==0.48
Something new in the logs, but the "sleeping" looks the same:
Starting Portfolio Value: 99693.46
/home/x777/anaconda3/envs/env_alpaca/lib/python3.6/site-packages/alpaca_trade_api/stream2.py:151: UserWarning: Discarding nonzero nanoseconds in conversion
await handler(self, channel, ent)
sleep 3 seconds and retrying https://paper-api.alpaca.markets/v2/account 3 more time(s)...
sleep 3 seconds and retrying https://paper-api.alpaca.markets/v2/account 3 more time(s)...
sleep 3 seconds and retrying https://paper-api.alpaca.markets/v2/account 3 more time(s)...
store = alpaca_backtrader_api.AlpacaStore(
    key_id=ALPACA_API_KEY,
    secret_key=ALPACA_SECRET_KEY,
    paper=True
)
data = DataFactory(
    dataname='SPY',
    tz=timezone,
    timeframe=bt.TimeFrame.Minutes,
    compression=1,
    fromdate=pd.Timestamp('2020-5-11'),
    historical=False
)
Then I start resampling time frames. After about 1 minute it started, but sometimes I receive the "sleep 3 seconds" message. As I understand it, I am using my Polygon key by default, right?
The "sleep 3 seconds and retrying" message is annoying but should not affect your execution; I will fix it soon. Does it work as expected now?
By default you use the Alpaca data stream, but you can choose to use the Polygon stream; the keys are the same. Note that the Polygon stream can only be used with funded accounts.
Yes, now it looks like it works.
If you want to use the Polygon stream, just make sure to pass usePolygon.
I don't understand why you said "sleep 3 seconds" doesn't affect the algorithm. For example, if I set fromdate to a few days ago (which I must, because I need data for my SMAs), it loads very slowly compared to running without the Alpaca broker. Loading a few days took about 30 minutes! Is that normal? Maybe I am doing something wrong?
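For a sense of scale: a fromdate a few days back means thousands of historical minute bars must be fetched before live data starts. A rough stdlib estimate (assuming ~390 minutes in a regular US session; the numbers are illustrative, not measured):

```python
# Rough count of 1-minute bars a backfill must fetch before going live.
# 390 = minutes in a regular US trading session (9:30-16:00).
def backfill_bars(trading_days, minutes_per_day=390):
    return trading_days * minutes_per_day

print(backfill_bars(5))  # 1950 bars for a five-trading-day lookback
```

If each of those bars is fetched with per-request rate limiting (and occasional "sleep 3 seconds" retries), a multi-day backfill can plausibly stretch into tens of minutes.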
IMO these are unrelated issues. Let's try something: change line 152 in alpaca_backtrader_api/alpacabroker from this:
self.value = float(self.o.oapi.get_account().portfolio_value)
to this:
self.value = float(self.o.get_value())
That will make the 3-seconds log line disappear; let's see if your other issue disappears as well.
The "sleep 3 seconds" message did not disappear, but it looks like the algorithm loads data much faster. We'll see during the trading session. Anyway, I still sometimes see that message between logs.
It looks like Alpaca Data is a very sparse data feed. Bad news again.
Alpaca Data chart:
Polygon Data chart:
I'm not sure what I'm seeing in the graphs; could you explain? Are you still experiencing issues?
The Alpaca Data chart shows that data is not loading, because (I think) there is no data or it is very sparse (the dots in the example), so the moving averages can't be calculated. It's a data-quality issue.
About the issues: yes. I've tried all your suggestions, commenting lines in and out, but it freezes: data doesn't load, or loads very slowly, no matter whether I use Alpaca data or Polygon (maybe Polygon is faster sometimes).
It seems not many people use this library, so we can't tell whether the problem is on my side or not. For now, I am using only the old versions of the packages.
Does it happen with Polygon as well? The code path for Polygon should be the same. Does it happen with every symbol you try? For instance, the example code works with AAPL, and these are the data calls I get right now:
So basically one call every second.
I think all the problems begin after adding resample(). Without it, it works well, 100% better, but without resample there's no point. Did you try adding a few resample() calls and testing?
No, I haven't. Could you give me a simple use case based on the sma_crossover_strategy?
I am using this code in the constructor:
# moving average on the 20-minute data
self.sma_20 = bt.ind.SMA(self.data1, period=20)
# moving average on the 120-minute data
self.sma_20_120 = bt.ind.SMA(self.data2, period=20)
And this code for resampling:
#data
cerebro.resampledata(data, timeframe=bt.TimeFrame.Minutes, compression=1)
#data1
cerebro.resampledata(data, timeframe=bt.TimeFrame.Minutes, compression=20, name='data_30m')
#data2
cerebro.resampledata(data, timeframe=bt.TimeFrame.Minutes, compression=120, name='data_120m')
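The pipeline above (1-minute bars resampled into coarser series, each feeding a period-20 SMA) can be sketched without backtrader in plain Python; the function names here are illustrative, not backtrader's API:

```python
def resample(closes, compression):
    """Keep the last close of each `compression`-bar bucket,
    mimicking what cerebro.resampledata() does to a minute feed."""
    return [closes[i + compression - 1]
            for i in range(0, len(closes) - compression + 1, compression)]

def sma(values, period):
    """Simple moving average of the last `period` values."""
    if len(values) < period:
        return None  # like backtrader's SMA, undefined until warmed up
    return sum(values[-period:]) / period

minute_closes = [float(i) for i in range(1, 241)]  # 240 fake 1-minute closes
bars_20m = resample(minute_closes, 20)             # 12 twenty-minute bars
print(sma(bars_20m, 20))                           # None: needs 20 bars, has only 12
print(sma(minute_closes, 20))                      # 230.5
```

This also shows why the coarser feeds need a longer fromdate lookback: a period-20 SMA on 120-minute bars needs 2400 minutes of history before it produces a value at all.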
In next() you can simply log self.sma_20 and self.sma_20_120.
So next() is still called every second:
but this is the data in each feed:
Would you expect different data in each array?
This is how I defined things:
The first resampledata() you add is what next() will iterate over, in my experience. Then try a simple log() of any ticker value from self.sma_20 and self.sma_20_120; I am using this function:
def log(self, txt, dt=None):
    '''Logging function for this strategy'''
    dt = dt or self.datas[0].datetime.date(0)
    if isinstance(dt, float):
        dt = bt.num2date(dt)
    print('%s, %s, %s' % (dt.isoformat(), self.datetime.time(), txt))
Do you see the sleep message, and how long does it take to load the data?
This is what I get with the log method. It loads in about a minute, with no sleep messages, and I'm not even using the patch (self.value = float(self.o.get_value())) I suggested to you.
Can you provide the DataFactory() params you are using?
This is the code I'm using to log the SMA:
DataFactory = store.getdata  # or use alpaca_backtrader_api.AlpacaData
data = DataFactory(
    dataname=args.ticker,
    tz=timezone,
    timeframe=bt.TimeFrame.Minutes,
    compression=1,
    fromdate=pd.Timestamp('2020-5-1'),
    historical=False)
It's a very thinly traded ticker; if it were something like AAPL, it would take hours to load the data.
Video of the loading speed:
Without fromdate=pd.Timestamp('2020-5-1') it doesn't start at all. I don't know if it's needed; maybe the data for the SMA loads automatically? But it looks like it doesn't start because there's no data for the SMAs.
This is the data factory I am using:
DataFactory(dataname='AAPL',
            historical=False,
            timeframe=bt.TimeFrame.Days)
# or just alpaca_backtrader_api.AlpacaBroker()
broker = store.getbroker()
cerebro.setbroker(broker)
And basically the entire strategy code is in the sample folder of the repo. Also, changing to timeframe=bt.TimeFrame.Minutes works the same way.
So the difference is the start date: once it's added, it takes time to load the previous data. But that also makes sense, no?
Why is it so much faster without the Alpaca broker? In the example above, from 2020-5-1, it takes a few seconds, versus an hour or more with the Alpaca broker.
That is weird, because the data source is the same.
Yes, but anyway, I appreciate this library because it was the simplest way to start backtesting, and also live trading (with side execution) with polygon.io data.
This package wraps the Python SDK. You may look directly at the SDK, https://github.com/alpacahq/alpaca-trade-api-python, and its example code; it is another approach you could take.
I'm having this problem now, but last week I ran this same code for paper trading and it worked fine... strange.
I'm getting: sleep 3 seconds and retrying https://api.alpaca.markets/v2/account 3 more time(s)...
If I click the link it gives:
{"code":40110000,"message":"access key verification failed : access key not found (Code = 40110000)"}
It happens in both paper trading and live mode, but not in backtests. Anyone know what's going on?
When you run a backtest you are not connected to the broker, so it makes sense that it doesn't happen in that case. Try pulling the latest code from the master branch and running it again today.
That worked, thanks! I just worry that in the future I'm going to be running it live and it's going to just stop working or something. Any idea what causes problems like that, or whether they can happen while you're running live on a real $$ account?
First of all, always start by paper trading for a while to make sure your setup works. Secondly, follow the issue page of the package; it's active, and we try to make sure every issue is handled, so make sure you're updated with the latest version.
I'm trying paper trading with my own live feed using alpaca-backtrader-api. When I resample this data feed, I get the message:
sleep 3 seconds and retrying https://paper-api.alpaca.markets/v2/account 3 more time(s)…
Sometimes it starts normally, but often it doesn't start at all. When I disable the Alpaca broker, the data feed works well.