Closed: anandkp92 closed this issue 5 years ago
Can you attach the code that produces the error?
Make sure that the way you set up the client matches the way we do it here.
This is the code that I'm running:
from pyxbos.mortard import MortarClient
import pymortar
import time
client = MortarClient({
'namespace': '<namespace-hash>',
'wave': 'localhost:777',
'entity': 'client.ent',
'prooffile': 'clientproof.pem',
'grpcservice': 'mortar/Mortar/*',
'address': 'localhost:4587',
})
req = pymortar.FetchRequest(
sites=["blr"],
dataFrames=[
pymortar.DataFrame(
name="meter_data",
aggregation=pymortar.MEAN,
window="5m",
uuids=["<uuid-from-our-db>"],
)
],
time=pymortar.TimeParams(
start="2019-01-01T00:00:00Z",
end="2019-06-01T00:00:00Z",
)
)
s = time.time()
res = client.fetch(req)
e = time.time()
print("took {0}".format(e-s))
print(res)
I think mortard.MortarClient does set up the WAVEGRPCClient as you described.
I can't reproduce the error using this code. Are you using Python 2 or Python 3?
Python 3.5; I created a conda environment to run this. Could it be some sort of firewall issue?
This is all over localhost, so firewalls shouldn't affect it. Can you check that you have tlslite-ng installed?
pip install tlslite-ng
If you have a packaged tlslite installed, you should remove it.
I added the correct tlslite-ng package as a dependency in pyxbos 0.2.2
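Because tlslite-ng installs the same top-level `tlslite` module as the old unmaintained `tlslite` package, a plain `import tlslite` can't tell them apart; checking the installed distribution names can. A minimal sketch (not part of the original thread; uses `importlib.metadata`, which needs Python 3.8+, not the 3.5 in this thread):

```python
from importlib import metadata  # Python 3.8+; on older Pythons use the importlib_metadata backport

def installed_tls_dists():
    """Return which tlslite-related distributions are installed."""
    found = []
    for dist in ("tlslite", "tlslite-ng"):
        try:
            metadata.version(dist)
            found.append(dist)
        except metadata.PackageNotFoundError:
            pass
    return found

# The healthy state is a list containing only 'tlslite-ng'.
print(installed_tls_dists())
```

If `tlslite` shows up in the result, `pip uninstall tlslite` before reinstalling tlslite-ng.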
Yes, I'd installed the wrong tlslite instead of tlslite-ng!
Also upgraded to pyxbos 0.2.2.
That worked!
But now there is a thread that doesn't exit:
$ python mortartest.py
Listening on ('localhost', 5005)
new client call
Done with call
ccd06506-a4df-59f2-85eb-5e5e121f7c89
2019-05-09 04:25:00+00:00 55.342500
2019-05-09 04:30:00+00:00 55.277857
2019-05-09 04:35:00+00:00 55.172667
2019-05-09 04:40:00+00:00 54.112000
2019-05-09 04:45:00+00:00 53.435714
took 2.4387152194976807
<pymortar.result.Result: views:n/a dataframes:1 timeseries:1 vals:5>
^CException ignored in: <module 'threading' from '/home/solarplus/anaconda3/envs/python35/lib/python3.5/threading.py'>
Traceback (most recent call last):
File "/home/solarplus/anaconda3/envs/python35/lib/python3.5/threading.py", line 1292, in _shutdown
t.join()
File "/home/solarplus/anaconda3/envs/python35/lib/python3.5/threading.py", line 1054, in join
self._wait_for_tstate_lock()
File "/home/solarplus/anaconda3/envs/python35/lib/python3.5/threading.py", line 1070, in _wait_for_tstate_lock
elif lock.acquire(block, timeout):
KeyboardInterrupt
And on the xbosmortard side (seems like there are no errors):
...
INFO[2019-05-13T13:33:34-07:00] Done loading Brick. Took 969.083231ms
INFO[2019-05-13T13:33:34-07:00] Connected to InfluxDB!
INFO[2019-05-13T13:33:34-07:00] <|influx ts stage|>
INFO[2019-05-13T13:33:34-07:00] <| brick stage |>
INFO[2019-05-13T13:33:34-07:00] <| api frontend wave auth stage |>
INFO[2019-05-13T13:33:34-07:00] get output
2019/05/13 13:33:36 [INFO] Server verifying server handshake <namespace-hash>
ERRO[2019-05-13T13:33:37-07:00] Got Error in ret <nil>
INFO[2019-05-13T13:33:37-07:00] Fetch took 86.291774ms
My suspicion is that we close the gRPC connection but this loop still runs.
What do you think?
I'll take a look at that later today. In the prototype client I uploaded, something was terminating the GRPC connection each time, so I attempted to re-establish it. What probably needs to happen is to move the setup_connection call out of the infinite loop so that we only establish it once.
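The refactor described above can be sketched like this (a hedged illustration only: `setup_connection`, `handle_call`, and `serve` are stand-ins, not the actual pyxbos functions):

```python
import time

def setup_connection():
    """Stand-in for opening a WAVE/GRPC channel (the expensive step)."""
    time.sleep(0.01)
    return {"channel": "open"}

def handle_call(conn):
    """Stand-in for proxying one request over the existing channel."""
    assert conn["channel"] == "open"

def serve(num_calls=3):
    # Before the fix, the next line sat inside the loop, so every request
    # paid the connection-setup cost (and tore the old channel down).
    # Hoisting it out establishes the channel once and reuses it.
    conn = setup_connection()
    for _ in range(num_calls):  # stands in for the real `while True:` loop
        handle_call(conn)
    return conn

serve()
```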
OK. And I stand corrected - the loop seems to be terminating, but yes, maybe the connection still remains. Thanks!
I forgot to daemonize the threads in the client. Can you upgrade to pyxbos 0.2.3 and try again? Unfortunately, connection re-use is still not working so the latency is super high (for now)
That didn't work.
$ python mortartest.py
Listening on ('localhost', 5005)
Done with call
Traceback (most recent call last):
File "mortartest.py", line 53, in <module>
res = client.fetch(req)
File "/home/solarplus/anaconda3/envs/python35/lib/python3.5/site-packages/pyxbos/mortard.py", line 222, in fetch
for a in self._client.Fetch(request):
File "/home/solarplus/anaconda3/envs/python35/lib/python3.5/site-packages/grpc/_channel.py", line 363, in __next__
return self._next()
File "/home/solarplus/anaconda3/envs/python35/lib/python3.5/site-packages/grpc/_channel.py", line 346, in _next
raise self
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Socket closed"
debug_error_string = "{"created":"@1557782540.272304202","description":"Error received from peer ipv4:127.0.0.1:5005","file":"src/core/lib/surface/call.cc","file_line":1041,"grpc_message":"Socket closed","grpc_status":14}"
>
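As an aside, a client can also shield itself from transient UNAVAILABLE / "Socket closed" failures like this one by retrying the fetch with backoff. A hedged sketch (in real code you would catch `grpc.RpcError` and check for `StatusCode.UNAVAILABLE`; a stand-in exception keeps the sketch self-contained):

```python
import time

class TransientRPCError(Exception):
    """Stand-in for grpc.RpcError with StatusCode.UNAVAILABLE."""

def fetch_with_retry(fetch, retries=3, delay=0.01):
    """Call fetch(), retrying transient failures with exponential backoff."""
    for attempt in range(retries):
        try:
            return fetch()
        except TransientRPCError:
            if attempt == retries - 1:
                raise  # out of retries: surface the error
            time.sleep(delay * (2 ** attempt))

# Demo: fail twice, succeed on the third try.
attempts = []
def flaky_fetch():
    attempts.append(1)
    if len(attempts) < 3:
        raise TransientRPCError("Socket closed")
    return "result"

print(fetch_with_retry(flaky_fetch))  # prints "result" after two retries
```

Retrying only papers over the symptom, though; the root cause still needs the server-side fix.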
Now there is an error on the grpc server as well:
$ ./xbosmortard
...
2019/05/13 14:22:18 [INFO] Server verifying server handshake <namespace-hash>
ERRO[2019-05-13T14:22:20-07:00] timing out on waiting for result in fetch
INFO[2019-05-13T14:22:20-07:00] Fetch took 1.087840719s
Could this be due to a small timeout for fetch? This is the line in your code that prints that error.
If you follow the definition for the timeout, you'll find that it's about 60 minutes, so it's unlikely to be that :). The culprit was that I didn't finish fixing the influxdb stage implementation when I changed how mortar handles client request contexts. If you update xboswave, the updated mortar stage should be in there now. Let me know if you still see the issue after it updates to mortar v1.1.0-alpha4.
Perfect! That worked! Thanks a lot.
You're right about the latency, but this is great to move ahead.
I've created server (mortar.ent, serverproof.pem) and client (client.ent, clientproof.pem) entities and proofs to query data using pyxbos, following the instructions mentioned here.
I've changed the FetchRequest in the [example](https://github.com/gtfierro/xboswave/blob/master/python/examples/mortartest.py) file to fetch a uuid stored in my influx database.
However, when I run python mortartest.py, I get the following errors: