lonelyleaf opened this issue 2 years ago
FYI, we are seeing this as well. InfluxDB v2.1.1, Python client. Writing a large number of rows, we first see the issue on row 14995. The error definitely appears to be data-dependent. From skimming the InfluxDB source, this comes from extracting an array of strings from a compressed sequence of bytes. Apparently something becomes corrupt and the decoder expects to find more bytes of string than are actually present.
Going to see if I can downgrade to an earlier InfluxDB and check against that.
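Since the failure first shows up at a specific row (14995 above), one way to localize the offending point is a prefix bisection over the batch. This is a minimal sketch with a stubbed write function, not the actual influxdb-client API; the name `try_write` is hypothetical, and it assumes any prefix containing the bad row fails deterministically:

```python
def first_failing_row(rows, try_write):
    """Return the index of the first row that makes try_write fail,
    or None if the whole batch succeeds.

    try_write takes a list of rows and returns True on success.
    Assumption: once a prefix fails, every longer prefix also fails
    (the failure is deterministic and data-dependent).
    """
    def fails(n):
        return not try_write(rows[:n])

    if not fails(len(rows)):
        return None  # whole batch writes cleanly

    lo, hi = 0, len(rows)  # invariant: prefix lo succeeds, prefix hi fails
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if fails(mid):
            hi = mid
        else:
            lo = mid
    return hi - 1  # last row of the shortest failing prefix
```

With a real client, `try_write` would wrap something like `write_api.write(bucket=..., org=..., record=batch)` in a try/except and return False on the decode error.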
In our case, this appears to be caused by corruption in the bucket we're writing to. I've confirmed that writing to a new bucket in a different org works, and even a new bucket in the same org. But writing to this particular bucket/org fails every time, regardless of whether that particular measurement has been purged or not.
@abelletti You're right, I was able to fix this issue by creating a new bucket, thanks. I hope InfluxData can add a feature to repair corrupted data in a bucket.
We have seen this when a disk fills up. That can cause corrupt files which persist after a disk resize or move to a larger disk.
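A quick way to rule the disk-full cause in or out is to check free space on the filesystem holding the data directory. A minimal Python sketch; the default data path `/var/lib/influxdb2` is an assumption and should be adjusted for your deployment:

```python
import shutil

def disk_nearly_full(path="/var/lib/influxdb2", min_free_frac=0.05):
    """Return True if the filesystem holding `path` has less than
    min_free_frac of its capacity free.

    /var/lib/influxdb2 is assumed here as the InfluxDB 2.x data
    directory; pass the mount point your deployment actually uses.
    """
    usage = shutil.disk_usage(path)
    return usage.free / usage.total < min_free_frac
```

Note that checking free space only prevents new corruption; files already corrupted by a full disk stay corrupt after resizing, as described above.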
Steps to reproduce:
1. Write points using the Java client.
2. The client reports an error.
3. InfluxDB logs an error as well; it is triggered only when writing certain data.
Environment info:
InfluxDB 2.1.1 running in Docker
Logs: