By using the `--jobs-only` parameter I managed to avoid this issue.
slurm2sql doesn't try to set strict limits; it lets sqlite3 figure out how to store the value. If it's an integer, sqlite3 seems to go up to 8-byte integers, so roughly ±9.2e+18.
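For reference, here is a minimal sketch of that limit with a plain sqlite3 connection: inserting `2**63 - 1` works, while one past it raises the same OverflowError as in the traceback below.

```python
import sqlite3

# Demonstrate SQLite's 8-byte signed integer limit via the sqlite3 module:
# values up to 2**63 - 1 (~9.2e18) bind fine, anything larger raises
# OverflowError before it ever reaches the database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (v INTEGER)")

db.execute("INSERT INTO t (v) VALUES (?)", (2**63 - 1,))   # largest value that fits

try:
    db.execute("INSERT INTO t (v) VALUES (?)", (2**63,))   # one past the limit
except OverflowError as e:
    print(e)  # "Python int too large to convert to SQLite INTEGER"
```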
Any idea which of the values exceeds this? (e.g. run `sacct -o all` for whatever job might be doing this, and see what the really big numbers are there.)
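If it helps, here is a rough sketch (not part of slurm2sql) for spotting the offending field: it dumps all sacct columns for a suspect job in parseable form and flags any integer larger than SQLite's 8-byte range. The job ID is a placeholder.

```python
import subprocess

SQLITE_INT_MAX = 2**63 - 1

# Dump every column for one job, pipe-delimited, header included.
proc = subprocess.run(
    ["sacct", "-j", "123456", "-o", "ALL", "-P"],
    capture_output=True, text=True, check=True,
)
lines = proc.stdout.splitlines()
fields = lines[0].split("|")          # header row gives the column names

for row in lines[1:]:
    for name, value in zip(fields, row.split("|")):
        try:
            if abs(int(value)) > SQLITE_INT_MAX:
                print(f"{name}: {value}")
        except ValueError:
            pass  # non-numeric field, ignore
```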
I came across this when trying to export data from some years. But it's not related to running over a long time period, because it fails the same way even when I set history-start to the day this issue occurs, or to days later (and then it fails immediately). By testing, it seems there is a specific period, from 15 November 2021 to 6 December 2021, where it fails.
```
Traceback (most recent call last):
  File "/cluster/home/haagen/slurm/slurm2sql.py", line 798, in <module>
    exit(main(sys.argv[1:]))
  File "/cluster/home/haagen/slurm/slurm2sql.py", line 564, in main
    errors = get_history(db, sacct_filter=sacct_filter,
  File "/cluster/home/haagen/slurm/slurm2sql.py", line 635, in get_history
    errors += slurm2sql(db, sacct_filter=new_filter, update=True, jobs_only=jobs_only,
  File "/cluster/home/haagen/slurm/slurm2sql.py", line 752, in slurm2sql
    c.execute('INSERT %s INTO slurm (%s) VALUES (%s)'%(
OverflowError: Python int too large to convert to SQLite INTEGER
```
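As a possible workaround sketch (not slurm2sql's actual code), out-of-range integers could be stringified or dropped before the row reaches the INSERT, so one huge counter doesn't abort the whole import. The helper name `sanitize_row` is hypothetical.

```python
SQLITE_INT_MAX = 2**63 - 1

def sanitize_row(row):
    """Return a copy of a column->value dict that is safe to bind with sqlite3.

    Integers outside SQLite's 8-byte signed range are stored as text instead
    (or could be replaced with None to discard the bogus value).
    """
    safe = {}
    for key, value in row.items():
        if isinstance(value, int) and abs(value) > SQLITE_INT_MAX:
            safe[key] = str(value)
        else:
            safe[key] = value
    return safe
```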