Closed LambdaSoft closed 2 years ago
I have tried the timeout argument in InfluxDBClient, but it has no effect on that read timeout
Hi @LambdaSoft,
thanks for using our client.
It is hard to say what exactly causes the Read timed out,
but I think it can be caused by how long the data takes to process on the server side.
How much data do you store in one request?
I have tried the timeout argument in InfluxDBClient, but it has no effect on that read timeout
The default timeout for the client is 10_000
milliseconds. What value did you change timeout to?
Regards
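Since the client interprets timeout as milliseconds, a quick sanity check is to build the client arguments explicitly. A minimal sketch (the URL, token, and org below are placeholders, not values from this thread):

```python
# Hypothetical connection settings -- replace with your own.
# Note: the influxdb-client timeout argument is in MILLISECONDS.
client_kwargs = dict(
    url="http://localhost:8086",
    token="my-token",
    org="my-org",
    timeout=100_000,  # 100 seconds, well above the 10 s default
)

# The client itself would then be created as:
#   from influxdb_client import InfluxDBClient
#   client = InfluxDBClient(**client_kwargs)
print(client_kwargs["timeout"])  # 100000
```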
About the data: each request contains 38 short int values. As for the client timeout, I have tried changing it to 100,000 ms, but I get the same result, so I think it has no effect on the read timeout.
Can you share the debug output from client?
You can enable debug mode with: InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org", debug=True).
A (read timeout=0) can be caused by an HTTP request timing out, and also by a timed-out SSL handshake.
This issue has been closed because it has not had recent activity. Please reopen if this issue is still important to you and you have additionally information.
For future reference of others who may hit the same error and do not see an obvious cause:
I ran into a similar error when overriding the client timeout with 20 seconds. For some reason I hit these errors:
HTTPConnectionPool(host='influxdb-v2', port=8086): Read timed out. (read timeout=0.019817536998307333)
Which was weird.
However, it took me a while to figure out that the timeout format differs from other HTTP clients such as requests, because the InfluxDB client expects milliseconds. So I unknowingly set a 20 ms timeout... :facepalm:
So instead of using the client with:
InfluxDBClient(
...,
timeout=20 # = 20 milliseconds, NOT 20 seconds!
)
I multiplied by 1000 to get the timeout I actually intended:
InfluxDBClient(
...,
timeout=20000 # = 20 seconds
)
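A tiny helper makes the unit conversion explicit and avoids this mistake; a sketch (the helper name is my own, not part of the client library):

```python
def seconds_to_client_timeout(seconds: float) -> int:
    """Convert a human-friendly seconds value to the millisecond
    timeout that influxdb-client's InfluxDBClient expects."""
    return int(seconds * 1000)

# 20 seconds becomes 20_000 ms, the value the client actually needs:
print(seconds_to_client_timeout(20))  # 20000
```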
@dennissiemensma
raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='europe-west1-1.gcp.cloud2.influxdata.com', port=443): Read timed out. (read timeout=9.999033014057204)
What needs to be changed in InfluxDB, or added/modified in the code?
@Bra1nsen I think you have a different issue, as I get a response when I try your given host + port. Maybe your request simply takes too long.
telnet europe-west1-1.gcp.cloud2.influxdata.com 443
Trying 35.205.139.177...
Connected to europe-west1-1.gcp.cloud2.influxdata.com.
Escape character is '^]'.
Note that my error was:
Read timed out. (read timeout=0.019817536998307333)
Yours is:
Read timed out. (read timeout=9.999033014057204)
Which means that you are running into a different issue, as your timeout actually seems to be 10 seconds.
Hi folks,
I have a similar error:
>>> Request: 'POST https://xxxxxxxxxxxx:8086/api/v2/write?org=xxxxxxxxxx&bucket=xxxxxxxxxxxxx&precision=ns'
>>> Content-Encoding: identity
>>> Content-Type: text/plain
>>> Accept: application/json
>>> Authorization: ***
>>> User-Agent: influxdb-client-python/1.35.0
>>> Body: b'MetaData_HouseData value=587i'
got error HTTPSConnectionPool(host='xxxxxxxxxxxx', port=8086): Read timed out. (read timeout=0)
but the data is actually submitted. This request is sent from the same host as the InfluxDB installation. From another machine, I do not get an error.
My server is on a Debian 11 installation, recently updated:
$ curl -sl -I https://xxxxxxxxxxxxxxxx:8086/ping
HTTP/2 204
vary: Accept-Encoding
x-influxdb-build: OSS
x-influxdb-version: v2.7.0
date: Thu, 13 Apr 2023 18:44:27 GMT
I see the same behavior with influxdb-client=1.37.0 (even using a 100 s timeout).
My data also seems to be transmitted successfully, but I still get the timeout error in the end. This is really confusing: as a user, I am not sure whether all data was transmitted completely (this already happened with the default 10 s timeout)...
I am writing a dataframe with 2.1 million rows (timestamp index + value).
Turning on the debug output, I hope this is useful:
>>> Request: 'POST http://192.168.178.10:8087/api/v2/write?org=openhab&bucket=openhab&precision=ns'
>>> Content-Type: text/plain
>>> Accept: application/json
>>> Authorization: ***
>>> User-Agent: influxdb-client-python/1.37.0
# Don't know if this is relevant:
IOPub data rate exceeded.
The Jupyter server will temporarily stop sending output
to the client in order to avoid crashing it.
To change this limit, set the config variable
`--ServerApp.iopub_data_rate_limit`.
Current values:
ServerApp.iopub_data_rate_limit=1000000.0 (bytes/sec)
ServerApp.rate_limit_window=3.0 (secs)
# Stack trace
---------------------------------------------------------------------------
TimeoutError Traceback (most recent call last)
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\urllib3\connectionpool.py:536, in HTTPConnectionPool._make_request(self, conn, method, url, body, headers, retries, timeout, chunked, response_conn, preload_content, decode_content, enforce_content_length)
535 try:
--> 536 response = conn.getresponse()
537 except (BaseSSLError, OSError) as e:
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\urllib3\connection.py:461, in HTTPConnection.getresponse(self)
460 # Get the response from http.client.HTTPConnection
--> 461 httplib_response = super().getresponse()
463 try:
File ~\Anaconda3\envs\openhab4_helpers\lib\http\client.py:1375, in HTTPConnection.getresponse(self)
1374 try:
-> 1375 response.begin()
1376 except ConnectionError:
File ~\Anaconda3\envs\openhab4_helpers\lib\http\client.py:318, in HTTPResponse.begin(self)
317 while True:
--> 318 version, status, reason = self._read_status()
319 if status != CONTINUE:
File ~\Anaconda3\envs\openhab4_helpers\lib\http\client.py:279, in HTTPResponse._read_status(self)
278 def _read_status(self):
--> 279 line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
280 if len(line) > _MAXLINE:
File ~\Anaconda3\envs\openhab4_helpers\lib\socket.py:705, in SocketIO.readinto(self, b)
704 try:
--> 705 return self._sock.recv_into(b)
706 except timeout:
TimeoutError: timed out
The above exception was the direct cause of the following exception:
ReadTimeoutError Traceback (most recent call last)
Cell In[95], line 3
1 # transfer data to InfluxDB2 directly from the dataframe
2 with client2.write_api(write_options=SYNCHRONOUS) as write_api:
----> 3 write_api.write(bucket="openhab", record=df, data_frame_measurement_name=influxdb2_item_name)
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\influxdb_client\client\write_api.py:378, in WriteApi.write(self, bucket, org, record, write_precision, **kwargs)
375 final_string = b'\n'.join(payload[1])
376 return self._post_write(_async_req, bucket, org, final_string, payload[0])
--> 378 results = list(map(write_payload, payloads.items()))
379 if not _async_req:
380 return None
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\influxdb_client\client\write_api.py:376, in WriteApi.write.<locals>.write_payload(payload)
374 def write_payload(payload):
375 final_string = b'\n'.join(payload[1])
--> 376 return self._post_write(_async_req, bucket, org, final_string, payload[0])
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\influxdb_client\client\write_api.py:509, in WriteApi._post_write(self, _async_req, bucket, org, body, precision, **kwargs)
507 def _post_write(self, _async_req, bucket, org, body, precision, **kwargs):
--> 509 return self._write_service.post_write(org=org, bucket=bucket, body=body, precision=precision,
510 async_req=_async_req,
511 content_type="text/plain; charset=utf-8",
512 **kwargs)
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\influxdb_client\service\write_service.py:60, in WriteService.post_write(self, org, bucket, body, **kwargs)
58 return self.post_write_with_http_info(org, bucket, body, **kwargs) # noqa: E501
59 else:
---> 60 (data) = self.post_write_with_http_info(org, bucket, body, **kwargs) # noqa: E501
61 return data
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\influxdb_client\service\write_service.py:90, in WriteService.post_write_with_http_info(self, org, bucket, body, **kwargs)
64 """Write data.
65
66 Writes data to a bucket. Use this endpoint to send data in [line protocol](https://docs.influxdata.com/influxdb/latest/reference/syntax/line-protocol/) format to InfluxDB. #### InfluxDB Cloud - Does the following when you send a write request: 1. Validates the request and queues the write. 2. If queued, responds with _success_ (HTTP `2xx` status code); _error_ otherwise. 3. Handles the delete asynchronously and reaches eventual consistency. To ensure that InfluxDB Cloud handles writes and deletes in the order you request them, wait for a success response (HTTP `2xx` status code) before you send the next request. Because writes and deletes are asynchronous, your change might not yet be readable when you receive the response. #### InfluxDB OSS - Validates the request and handles the write synchronously. - If all points were written successfully, responds with HTTP `2xx` status code; otherwise, returns the first line that failed. #### Required permissions - `write-buckets` or `write-bucket BUCKET_ID`. *`BUCKET_ID`* is the ID of the destination bucket. #### Rate limits (with InfluxDB Cloud) `write` rate limits apply. For more information, see [limits and adjustable quotas](https://docs.influxdata.com/influxdb/cloud/account-management/limits/). #### Related guides - [Write data with the InfluxDB API](https://docs.influxdata.com/influxdb/latest/write-data/developer-tools/api) - [Optimize writes to InfluxDB](https://docs.influxdata.com/influxdb/latest/write-data/best-practices/optimize-writes/) - [Troubleshoot issues writing data](https://docs.influxdata.com/influxdb/latest/write-data/troubleshoot/)
(...)
85 returns the request thread.
86 """ # noqa: E501
87 local_var_params, path_params, query_params, header_params, body_params = \
88 self._post_write_prepare(org, bucket, body, **kwargs) # noqa: E501
---> 90 return self.api_client.call_api(
91 '/api/v2/write', 'POST',
92 path_params,
93 query_params,
94 header_params,
95 body=body_params,
96 post_params=[],
97 files={},
98 response_type=None, # noqa: E501
99 auth_settings=[],
100 async_req=local_var_params.get('async_req'),
101 _return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
102 _preload_content=local_var_params.get('_preload_content', True),
103 _request_timeout=local_var_params.get('_request_timeout'),
104 collection_formats={},
105 urlopen_kw=kwargs.get('urlopen_kw', None))
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\influxdb_client\_sync\api_client.py:343, in ApiClient.call_api(self, resource_path, method, path_params, query_params, header_params, body, post_params, files, response_type, auth_settings, async_req, _return_http_data_only, collection_formats, _preload_content, _request_timeout, urlopen_kw)
304 """Make the HTTP request (synchronous) and Return deserialized data.
305
306 To make an async_req request, set the async_req parameter.
(...)
340 then the method will return the response directly.
341 """
342 if not async_req:
--> 343 return self.__call_api(resource_path, method,
344 path_params, query_params, header_params,
345 body, post_params, files,
346 response_type, auth_settings,
347 _return_http_data_only, collection_formats,
348 _preload_content, _request_timeout, urlopen_kw)
349 else:
350 thread = self.pool.apply_async(self.__call_api, (resource_path,
351 method, path_params, query_params,
352 header_params, body,
(...)
356 collection_formats,
357 _preload_content, _request_timeout, urlopen_kw))
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\influxdb_client\_sync\api_client.py:173, in ApiClient.__call_api(self, resource_path, method, path_params, query_params, header_params, body, post_params, files, response_type, auth_settings, _return_http_data_only, collection_formats, _preload_content, _request_timeout, urlopen_kw)
170 urlopen_kw = urlopen_kw or {}
172 # perform request and return response
--> 173 response_data = self.request(
174 method, url, query_params=query_params, headers=header_params,
175 post_params=post_params, body=body,
176 _preload_content=_preload_content,
177 _request_timeout=_request_timeout, **urlopen_kw)
179 self.last_response = response_data
181 return_data = response_data
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\influxdb_client\_sync\api_client.py:388, in ApiClient.request(self, method, url, query_params, headers, post_params, body, _preload_content, _request_timeout, **urlopen_kw)
379 return self.rest_client.OPTIONS(url,
380 query_params=query_params,
381 headers=headers,
(...)
385 body=body,
386 **urlopen_kw)
387 elif method == "POST":
--> 388 return self.rest_client.POST(url,
389 query_params=query_params,
390 headers=headers,
391 post_params=post_params,
392 _preload_content=_preload_content,
393 _request_timeout=_request_timeout,
394 body=body,
395 **urlopen_kw)
396 elif method == "PUT":
397 return self.rest_client.PUT(url,
398 query_params=query_params,
399 headers=headers,
(...)
403 body=body,
404 **urlopen_kw)
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\influxdb_client\_sync\rest.py:311, in RESTClientObject.POST(self, url, headers, query_params, post_params, body, _preload_content, _request_timeout, **urlopen_kw)
308 def POST(self, url, headers=None, query_params=None, post_params=None,
309 body=None, _preload_content=True, _request_timeout=None, **urlopen_kw):
310 """Perform POST HTTP request."""
--> 311 return self.request("POST", url,
312 headers=headers,
313 query_params=query_params,
314 post_params=post_params,
315 _preload_content=_preload_content,
316 _request_timeout=_request_timeout,
317 body=body,
318 **urlopen_kw)
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\influxdb_client\_sync\rest.py:220, in RESTClientObject.request(self, method, url, query_params, headers, body, post_params, _preload_content, _request_timeout, **urlopen_kw)
218 elif isinstance(body, str) or isinstance(body, bytes):
219 request_body = body
--> 220 r = self.pool_manager.request(
221 method, url,
222 body=request_body,
223 preload_content=_preload_content,
224 timeout=timeout,
225 headers=headers,
226 **urlopen_kw)
227 else:
228 # Cannot generate the request from given parameters
229 msg = """Cannot prepare a request message for provided
230 arguments. Please check that your arguments match
231 declared content type."""
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\urllib3\_request_methods.py:118, in RequestMethods.request(self, method, url, body, fields, headers, json, **urlopen_kw)
110 return self.request_encode_url(
111 method,
112 url,
(...)
115 **urlopen_kw,
116 )
117 else:
--> 118 return self.request_encode_body(
119 method, url, fields=fields, headers=headers, **urlopen_kw
120 )
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\urllib3\_request_methods.py:217, in RequestMethods.request_encode_body(self, method, url, fields, headers, encode_multipart, multipart_boundary, **urlopen_kw)
213 extra_kw["headers"].setdefault("Content-Type", content_type)
215 extra_kw.update(urlopen_kw)
--> 217 return self.urlopen(method, url, **extra_kw)
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\urllib3\poolmanager.py:443, in PoolManager.urlopen(self, method, url, redirect, **kw)
441 response = conn.urlopen(method, url, **kw)
442 else:
--> 443 response = conn.urlopen(method, u.request_uri, **kw)
445 redirect_location = redirect and response.get_redirect_location()
446 if not redirect_location:
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\urllib3\connectionpool.py:844, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, preload_content, decode_content, **response_kw)
841 elif isinstance(new_e, (OSError, HTTPException)):
842 new_e = ProtocolError("Connection aborted.", new_e)
--> 844 retries = retries.increment(
845 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
846 )
847 retries.sleep()
849 # Keep track of the error for the retry warning.
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\urllib3\util\retry.py:445, in Retry.increment(self, method, url, response, error, _pool, _stacktrace)
433 """Return a new Retry object with incremented retry counters.
434
435 :param response: A response object, or None, if the server did not
(...)
441 :return: A new ``Retry`` object.
442 """
443 if self.total is False and error:
444 # Disabled, indicate to re-raise the error.
--> 445 raise reraise(type(error), error, _stacktrace)
447 total = self.total
448 if total is not None:
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\urllib3\util\util.py:39, in reraise(tp, value, tb)
37 if value.__traceback__ is not tb:
38 raise value.with_traceback(tb)
---> 39 raise value
40 finally:
41 value = None # type: ignore[assignment]
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\urllib3\connectionpool.py:790, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, preload_content, decode_content, **response_kw)
787 response_conn = conn if not release_conn else None
789 # Make the request on the HTTPConnection object
--> 790 response = self._make_request(
791 conn,
792 method,
793 url,
794 timeout=timeout_obj,
795 body=body,
796 headers=headers,
797 chunked=chunked,
798 retries=retries,
799 response_conn=response_conn,
800 preload_content=preload_content,
801 decode_content=decode_content,
802 **response_kw,
803 )
805 # Everything went great!
806 clean_exit = True
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\urllib3\connectionpool.py:538, in HTTPConnectionPool._make_request(self, conn, method, url, body, headers, retries, timeout, chunked, response_conn, preload_content, decode_content, enforce_content_length)
536 response = conn.getresponse()
537 except (BaseSSLError, OSError) as e:
--> 538 self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
539 raise
541 # Set properties that are used by the pooling layer.
File ~\Anaconda3\envs\openhab4_helpers\lib\site-packages\urllib3\connectionpool.py:370, in HTTPConnectionPool._raise_timeout(self, err, url, timeout_value)
367 """Is the error actually a timeout? Will raise a ReadTimeout or pass"""
369 if isinstance(err, SocketTimeout):
--> 370 raise ReadTimeoutError(
371 self, url, f"Read timed out. (read timeout={timeout_value})"
372 ) from err
374 # See the above comment about EAGAIN in Python 3.
375 if hasattr(err, "errno") and err.errno in _blocking_errnos:
ReadTimeoutError: HTTPConnectionPool(host='192.168.178.10', port=8087): Read timed out. (read timeout=9.969000000011874)
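One mitigation for writes of this size (2.1 million rows) is to split the payload into batches so that no single synchronous request has to complete within the read timeout. Below is a sketch of manual chunking; it is not from this thread, and influxdb-client can also batch for you via write_api(write_options=WriteOptions(batch_size=...)):

```python
def chunked(records, batch_size):
    """Yield successive batches of at most batch_size records."""
    for start in range(0, len(records), batch_size):
        yield records[start:start + batch_size]

rows = list(range(10_000))  # stand-in for 2.1 million dataframe rows
batches = list(chunked(rows, 5_000))
print(len(batches))  # 2 batches of 5_000 rows each

# With influxdb-client, each batch would then be written separately, e.g.:
#   for batch in chunked(df, 5_000):
#       write_api.write(bucket="openhab", record=batch,
#                       data_frame_measurement_name=influxdb2_item_name)
```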
I'm running into the same error whenever I want to write my data to InfluxDB. I tried setting the timeout to 20 s or so, as suggested above, but I still have the issue. When I try restarting InfluxDB, I notice unable to open boltdb: timeout. I don't know if this is what is causing the actual problem. If anyone has faced this issue, please share what you did.
Steps to reproduce:
Expected behavior: Write data correctly (don't throw an exception)
Actual behavior: Sometimes I get this error:
But if I check the InfluxDB Cloud, I see the data is submitted
Specifications: