Apologies for the inconvenience @captainfalcon23, but it could be a condition where the plugin is getting initialized in the background and the query is being run at the same time.
A couple of follow-up questions:
Hi @misraved:
- Could you please mention the number of CSV files that you are trying to parse?
Sometimes 2-3 in the directory. Note that there are different steps, e.g. step 1: download a CSV from somewhere; step 2: create a CSV from a Jira steampipe query; step 3: try to open the CSV from step 1.
- Would it be possible to share a snap of your csv.spc file after removing any sensitive information?
sure, it is 100% default:
connection "csv" {
plugin = "csv"
paths = [ "*.csv", "*.csv.gz" ]
}
- Are you seeing this error only with this plugin version (v0.9.0)?
I haven't used any previous version; this is the first time I am running this particular script, which makes many attempts at reading CSVs.
- Would it be possible to share the plugin logs so that we could get a better idea of what is triggering this failure?
It's a bit hard: firstly, it is completely random, and secondly, I usually run via GitLab, so I can't get the logs after the execution is finished. However, I checked the logs on my local machine and found the following relevant entries in the log files:
database-2023-06-20.log:2023-06-20 05:35:22.351 UTC [30623] ERROR: Reattachment process not found
database-2023-06-20.log:2023-06-20 05:35:38.405 UTC [WARN] hub: goFdwImportForeignSchema failed: Reattachment process not found
database-2023-06-20.log:2023-06-20 05:35:38.405 UTC [31010] ERROR: Reattachment process not found
database-2023-06-20.log:2023-06-20 05:57:42.994 UTC [WARN] hub: goFdwImportForeignSchema failed: Reattachment process not found
database-2023-06-20.log:2023-06-20 05:57:42.995 UTC [32541] ERROR: Reattachment process not found
database-2023-06-20.log:2023-06-20 05:58:41.027 UTC [WARN] hub: goFdwImportForeignSchema failed: Reattachment process not found
database-2023-06-20.log:2023-06-20 05:58:41.027 UTC [1314] ERROR: Reattachment process not found
database-2023-06-20.log:2023-06-20 05:59:47.548 UTC [WARN] hub: goFdwImportForeignSchema failed: Reattachment process not found
database-2023-06-20.log:2023-06-20 05:59:47.548 UTC [2521] ERROR: Reattachment process not found
database-2023-06-20.log:2023-06-20 06:00:17.533 UTC [WARN] hub: goFdwImportForeignSchema failed: Reattachment process not found
database-2023-06-20.log:2023-06-20 06:00:17.533 UTC [3263] ERROR: Reattachment process not found
They roughly coincide with these:
database-2023-06-20.log:2023-06-20 05:58:07.533 UTC [579] ERROR: relation "csv.blah1" does not exist at character 307
database-2023-06-20.log:2023-06-20 05:58:24.895 UTC [802] ERROR: relation "csv.blah2" does not exist at character 236
database-2023-06-20.log:2023-06-20 05:58:26.906 UTC [802] ERROR: relation "csv.blah2" does not exist at character 236
database-2023-06-20.log:2023-06-20 05:58:38.670 UTC [1229] ERROR: relation "csv.blah2" does not exist at character 353
database-2023-06-20.log:2023-06-20 05:58:40.680 UTC [1229] ERROR: relation "csv.blah2" does not exist at character 353
database-2023-06-20.log:2023-06-20 05:59:00.543 UTC [1774] ERROR: relation "csv.blah3" does not exist at character 357
database-2023-06-20.log:2023-06-20 05:59:02.554 UTC [1774] ERROR: relation "csv.blah3" does not exist at character 357
database-2023-06-20.log:2023-06-20 05:59:02.809 UTC [1774] ERROR: relation "csv.blah3" does not exist at character 357
database-2023-06-20.log:2023-06-20 05:59:31.116 UTC [2079] ERROR: relation "csv.blah2" does not exist at character 236
database-2023-06-20.log:2023-06-20 05:59:33.128 UTC [2079] ERROR: relation "csv.blah2" does not exist at character 236
database-2023-06-20.log:2023-06-20 05:59:33.381 UTC [2079] ERROR: relation "csv.blah2" does not exist at character 236
database-2023-06-20.log:2023-06-20 05:59:45.184 UTC [2423] ERROR: relation "csv.blah2" does not exist at character 236
database-2023-06-20.log:2023-06-20 06:00:01.826 UTC [2882] ERROR: relation "csv.blah2" does not exist at character 353
database-2023-06-20.log:2023-06-20 06:00:20.087 UTC [3398] ERROR: relation "csv.blah3" does not exist at character 357
Thanks @captainfalcon23 for the detailed response. The logs definitely don't paint a happy picture.
As we try to replicate and investigate this issue, I would suggest pointing the paths argument to the folder that contains the CSV files; that should reduce the plugin build time and help in reading the CSV files faster.
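For example, something along these lines; this is only a sketch, the directory path is a placeholder, and ~/.steampipe/config/csv.spc is the default location of the csv connection config:

$ cat > ~/.steampipe/config/csv.spc <<'EOF'
connection "csv" {
  plugin = "csv"
  # point directly at the folder that holds the CSV files instead of a bare wildcard
  paths  = [ "/path/to/your/csv-dir/*.csv" ]
}
EOF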
No probs @misraved. Currently all the CSVs are in the current working directory which seems to be the default, so I don't think explicitly setting it would help, or am I misunderstanding?
That's absolutely correct @captainfalcon23, again apologies for the inconvenience.
Would it be possible for you to give us a good test case scenario to replicate this error? Like create 3 CSVs in directory 1, add nested structure in directory 2, and then possibly a sample of your join queries which would perfectly replicate your use case?
Here's exactly what I do in my script @misraved . You should be able to dummy up some data and do something similar:
AWS_ACCOUNT_NAME,AWS_REGION,AWS_REPO,ECR_IMAGE_TAG,COMMON_VULN_ID,VULN_INFO_URL,VULN_SEVERITY,VULN_PACKAGE_VERSION,VULN_PACKAGE_NAME,Finding_Level_Comment,REPO_SPECIFIC_COMMENT
There's a large number of rows, and the script basically follows this loop, more or less.
If really needed, I am happy to share my script privately to yourself for review/to assist here.
Thanks for all the input, @captainfalcon23. We tried to replicate the steps that you mentioned above and were able to reproduce the error intermittently.
Assuming that the race condition is creating a problem, we could possibly add an output command after creating a new CSV to narrow down which CSV is actually failing to get created.
Would it be possible to add a sleep N command after creating every CSV file to ensure that the race condition is averted?
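For example, each step in the script that writes a CSV could look roughly like this (a sketch only; the query is elided and the file name is a placeholder based on tables mentioned in this thread):

steampipe query --output csv "select ..." > tmp_out.csv
mv tmp_out.csv filtered_ecr_findings.csv
# confirm the file actually landed, and give the plugin a moment before the next query
ls -l filtered_ecr_findings.csv
sleep 5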
Also, if possible, could you please try to build the plugin with this branch: add-lock-to-prevent-concurrent-read-write? This will ensure we get an error log whenever a CSV file is empty.
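For reference, building the plugin from that branch typically looks roughly like this (a sketch, assuming the standard plugin Makefile, which installs the locally built plugin so it shows up as "local" in steampipe plugin list):

$ git clone https://github.com/turbot/steampipe-plugin-csv.git
$ cd steampipe-plugin-csv
$ git checkout add-lock-to-prevent-concurrent-read-write
$ make
$ steampipe plugin list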
@kaidaguerre @binaek, do you have any idea/insight on when we would encounter such errors?
Hi @Subhajit97
I have tried the below:
- csv plugin built from the add-lock-to-prevent-concurrent-read-write branch
- sleep 5 after each steampipe command
+---------------------------------------------------+---------+
| Installed Plugin                                   | Version |
+---------------------------------------------------+---------+
| hub.steampipe.io/plugins/turbot/aws@latest         | local   |
| hub.steampipe.io/plugins/turbot/csv@latest         | local   |
| hub.steampipe.io/plugins/turbot/jira@latest        | 0.10.1  |
| hub.steampipe.io/plugins/turbot/kubernetes@latest  | 0.21.0  |
| hub.steampipe.io/plugins/turbot/steampipe@latest   | 0.8.0   |
+---------------------------------------------------+---------+
After about 10 or so attempts I reproduced the issue on my local.
Database.log:
2023-06-28 01:41:24.922 UTC [22734] LOG: connection received: host=127.0.0.1 port=44252
2023-06-28 01:41:24.929 UTC [22734] LOG: connection authorized: user=root database=steampipe application_name=steampipe_58d1 SSL enabled (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, bits=256)
2023-06-28 01:41:24.946 UTC [22734] LOG: disconnection: session time: 0:00:00.024 user=root database=steampipe host=127.0.0.1 port=44252
2023-06-28 01:41:24.949 UTC [22735] LOG: connection received: host=127.0.0.1 port=44262
2023-06-28 01:41:24.955 UTC [22735] LOG: connection authorized: user=root database=steampipe application_name=steampipe_58d1 SSL enabled (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, bits=256)
2023-06-28 01:41:25.136 UTC [22747] LOG: connection received: host=127.0.0.1 port=44272
2023-06-28 01:41:25.141 UTC [22747] LOG: connection authorized: user=root database=steampipe application_name=steampipe_58d1 SSL enabled (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, bits=256)
2023-06-28 01:41:25.150 UTC [22748] LOG: connection received: host=127.0.0.1 port=44276
2023-06-28 01:41:25.155 UTC [22748] LOG: connection authorized: user=root database=steampipe application_name=steampipe_58d1 SSL enabled (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, bits=256)
2023-06-28 01:41:25.165 UTC [22735] LOG: disconnection: session time: 0:00:00.216 user=root database=steampipe host=127.0.0.1 port=44262
2023-06-28 01:41:25.167 UTC [22749] LOG: connection received: host=127.0.0.1 port=44288
2023-06-28 01:41:25.169 UTC [22750] LOG: connection received: host=127.0.0.1 port=44296
2023-06-28 01:41:25.175 UTC [22750] LOG: connection authorized: user=root database=steampipe application_name=pm_steampipe_05b1 SSL enabled (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, bits=256)
2023-06-28 01:41:25.175 UTC [22749] LOG: connection authorized: user=steampipe database=steampipe application_name=steampipe_58d1 SSL enabled (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, bits=256)
2023-06-28 01:41:25.180 UTC [22749] ERROR: relation "csv.all_jira_childs" does not exist at character 307
2023-06-28 01:41:25.180 UTC [22749] STATEMENT: select
distinct inq.key
from
(
select
fef.key,
ajp."AWS_ACCOUNT_NAME",
ajp."AWS_REGION",
ajp."AWS_REPO",
ajp."ECR_IMAGE_TAG"
from
csv.all_jira_childs fef
left join csv.filtered_ecr_findings ajp on ajp."AWS_ACCOUNT_NAME" = fef."AWS_ACCOUNT_NAME"
and ajp."AWS_REGION" = fef."AWS_REGION"
and ajp."AWS_REPO" = fef."AWS_REPO"
and ajp."ECR_IMAGE_TAG" = fef."ECR_IMAGE_TAG"
and ajp."COMMON_VULN_ID" = fef."COMMON_VULN_ID"
and ajp."VULN_INFO_URL" = fef."VULN_INFO_URL"
and ajp."VULN_SEVERITY" = fef."VULN_SEVERITY"
and ajp."VULN_PACKAGE_VERSION" = fef."VULN_PACKAGE_VERSION"
and ajp."VULN_PACKAGE_NAME" = fef."VULN_PACKAGE_NAME"
) inq
where
inq.key is not null
and (
inq."AWS_ACCOUNT_NAME" is null
and inq."AWS_REGION" is null
and inq."AWS_REPO" is null
and inq."ECR_IMAGE_TAG" is null
);
2023-06-28 01:41:25.686 UTC [22748] LOG: could not receive data from client: Connection reset by peer
2023-06-28 01:41:25.687 UTC [22747] LOG: could not receive data from client: Connection reset by peer
2023-06-28 01:41:25.687 UTC [22747] LOG: disconnection: session time: 0:00:00.550 user=root database=steampipe host=127.0.0.1 port=44272
2023-06-28 01:41:25.687 UTC [22748] LOG: disconnection: session time: 0:00:00.537 user=root database=steampipe host=127.0.0.1 port=44276
2023-06-28 01:41:28.703 UTC [22749] ERROR: relation "csv.all_jira_childs" does not exist at character 307
2023-06-28 01:41:28.703 UTC [22749] STATEMENT: select
distinct inq.key
from
(
select
fef.key,
ajp."AWS_ACCOUNT_NAME",
ajp."AWS_REGION",
ajp."AWS_REPO",
ajp."ECR_IMAGE_TAG"
from
csv.all_jira_childs fef
left join csv.filtered_ecr_findings ajp on ajp."AWS_ACCOUNT_NAME" = fef."AWS_ACCOUNT_NAME"
and ajp."AWS_REGION" = fef."AWS_REGION"
and ajp."AWS_REPO" = fef."AWS_REPO"
and ajp."ECR_IMAGE_TAG" = fef."ECR_IMAGE_TAG"
and ajp."COMMON_VULN_ID" = fef."COMMON_VULN_ID"
and ajp."VULN_INFO_URL" = fef."VULN_INFO_URL"
and ajp."VULN_SEVERITY" = fef."VULN_SEVERITY"
and ajp."VULN_PACKAGE_VERSION" = fef."VULN_PACKAGE_VERSION"
and ajp."VULN_PACKAGE_NAME" = fef."VULN_PACKAGE_NAME"
) inq
where
inq.key is not null
and (
inq."AWS_ACCOUNT_NAME" is null
and inq."AWS_REGION" is null
and inq."AWS_REPO" is null
and inq."ECR_IMAGE_TAG" is null
);
2023-06-28 01:41:28.957 UTC [22749] ERROR: relation "csv.all_jira_childs" does not exist at character 307
2023-06-28 01:41:28.957 UTC [22749] STATEMENT: select
distinct inq.key
from
(
select
fef.key,
ajp."AWS_ACCOUNT_NAME",
ajp."AWS_REGION",
ajp."AWS_REPO",
ajp."ECR_IMAGE_TAG"
from
csv.all_jira_childs fef
left join csv.filtered_ecr_findings ajp on ajp."AWS_ACCOUNT_NAME" = fef."AWS_ACCOUNT_NAME"
and ajp."AWS_REGION" = fef."AWS_REGION"
and ajp."AWS_REPO" = fef."AWS_REPO"
and ajp."ECR_IMAGE_TAG" = fef."ECR_IMAGE_TAG"
and ajp."COMMON_VULN_ID" = fef."COMMON_VULN_ID"
and ajp."VULN_INFO_URL" = fef."VULN_INFO_URL"
and ajp."VULN_SEVERITY" = fef."VULN_SEVERITY"
and ajp."VULN_PACKAGE_VERSION" = fef."VULN_PACKAGE_VERSION"
and ajp."VULN_PACKAGE_NAME" = fef."VULN_PACKAGE_NAME"
) inq
where
inq.key is not null
and (
inq."AWS_ACCOUNT_NAME" is null
and inq."AWS_REGION" is null
and inq."AWS_REPO" is null
and inq."ECR_IMAGE_TAG" is null
);
2023-06-28 01:41:29.210 UTC [22749] ERROR: relation "csv.all_jira_childs" does not exist at character 307
2023-06-28 01:41:29.210 UTC [22749] STATEMENT: select
distinct inq.key
from
(
select
fef.key,
ajp."AWS_ACCOUNT_NAME",
ajp."AWS_REGION",
ajp."AWS_REPO",
ajp."ECR_IMAGE_TAG"
from
csv.all_jira_childs fef
left join csv.filtered_ecr_findings ajp on ajp."AWS_ACCOUNT_NAME" = fef."AWS_ACCOUNT_NAME"
and ajp."AWS_REGION" = fef."AWS_REGION"
and ajp."AWS_REPO" = fef."AWS_REPO"
and ajp."ECR_IMAGE_TAG" = fef."ECR_IMAGE_TAG"
and ajp."COMMON_VULN_ID" = fef."COMMON_VULN_ID"
and ajp."VULN_INFO_URL" = fef."VULN_INFO_URL"
and ajp."VULN_SEVERITY" = fef."VULN_SEVERITY"
and ajp."VULN_PACKAGE_VERSION" = fef."VULN_PACKAGE_VERSION"
and ajp."VULN_PACKAGE_NAME" = fef."VULN_PACKAGE_NAME"
) inq
where
inq.key is not null
and (
inq."AWS_ACCOUNT_NAME" is null
and inq."AWS_REGION" is null
and inq."AWS_REPO" is null
and inq."ECR_IMAGE_TAG" is null
);
2023-06-28 01:41:29.463 UTC [22749] ERROR: relation "csv.all_jira_childs" does not exist at character 307
2023-06-28 01:41:29.463 UTC [22749] STATEMENT: select
distinct inq.key
from
(
select
fef.key,
ajp."AWS_ACCOUNT_NAME",
ajp."AWS_REGION",
ajp."AWS_REPO",
ajp."ECR_IMAGE_TAG"
from
csv.all_jira_childs fef
left join csv.filtered_ecr_findings ajp on ajp."AWS_ACCOUNT_NAME" = fef."AWS_ACCOUNT_NAME"
and ajp."AWS_REGION" = fef."AWS_REGION"
and ajp."AWS_REPO" = fef."AWS_REPO"
and ajp."ECR_IMAGE_TAG" = fef."ECR_IMAGE_TAG"
and ajp."COMMON_VULN_ID" = fef."COMMON_VULN_ID"
and ajp."VULN_INFO_URL" = fef."VULN_INFO_URL"
and ajp."VULN_SEVERITY" = fef."VULN_SEVERITY"
and ajp."VULN_PACKAGE_VERSION" = fef."VULN_PACKAGE_VERSION"
and ajp."VULN_PACKAGE_NAME" = fef."VULN_PACKAGE_NAME"
) inq
where
inq.key is not null
and (
inq."AWS_ACCOUNT_NAME" is null
and inq."AWS_REGION" is null
and inq."AWS_REPO" is null
and inq."ECR_IMAGE_TAG" is null
);
2023-06-28 01:41:29.465 UTC [22749] LOG: disconnection: session time: 0:00:04.298 user=steampipe database=steampipe host=127.0.0.1 port=44288
2023-06-28 01:41:29.474 UTC [22824] LOG: connection received: host=127.0.0.1 port=44306
2023-06-28 01:41:29.484 UTC [22824] LOG: connection authorized: user=root database=steampipe application_name=steampipe_58d1 SSL enabled (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, bits=256)
2023-06-28 01:41:29.506 UTC [22824] LOG: disconnection: session time: 0:00:00.031 user=root database=steampipe host=127.0.0.1 port=44306
2023-06-28 01:41:29.689 UTC [22825] LOG: connection received: host=127.0.0.1 port=44322
2023-06-28 01:41:29.697 UTC [22825] LOG: connection authorized: user=root database=steampipe application_name=pm_steampipe_05b1 SSL enabled (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, bits=256)
2023-06-28 01:41:29.717 UTC [22750] LOG: could not receive data from client: Connection reset by peer
2023-06-28 01:41:29.717 UTC [22825] LOG: could not receive data from client: Connection reset by peer
2023-06-28 01:41:29.717 UTC [22825] LOG: disconnection: session time: 0:00:00.028 user=root database=steampipe host=127.0.0.1 port=44322
2023-06-28 01:41:29.718 UTC [22750] LOG: disconnection: session time: 0:00:04.549 user=root database=steampipe host=127.0.0.1 port=44296
2023-06-28 01:41:30.508 UTC [22723] LOG: received smart shutdown request
2023-06-28 01:41:30.514 UTC [22723] LOG: background worker "logical replication launcher" (PID 22731) exited with exit code 1
2023-06-28 01:41:30.515 UTC [22726] LOG: shutting down
2023-06-28 01:41:30.713 UTC [22723] LOG: database system is shut down
plugin.log:
nothing from that time period
The file "all_jira_childs.csv" 100% exists and looks fine:
$ ls -ltr all_jira_childs.csv
-rw-rw-r-- 1 user user 574878 Jun 28 11:41 all_jira_childs.csv
These are the steps it went through:
====================================================
Script will sync ECR findings to jira
====================================================
download: s3://blah/get-ecr-findings.csv to ./get-ecr-findings.csv
### Starting to get parent jiras ###
### got parent jiras ###
### FInished get parent jiras ###
### Starting to filter findings ###
### Filtered findings ###
### Finished filter findings ###
### Starting to combine to see which have jiras ###
### Combined to see which have jiras ###
### Finished combine to see which have jiras ###
### Starting to get child jiras ###
### got child jiras ###
### Finished get parent jiras ###
### Starting to get parent jiras ###
### got parent jiras ###
### FInished get parent jiras ###
### Starting to combine to see which have jiras ###
### combined to see which have jiras ###
### Finsihed combine to see which have jiras ###
### Starting to get child jiras ###
### got child jiras ###
### Finished get parent jiras ###
### Starting handle closed image ### <------------ this step queries all_jira_childs.csv and died here
Warning: executeQueries: query 1 of 1 failed: ERROR: relation "csv.all_jira_childs" does not exist (SQLSTATE 42P01)
Thanks, @captainfalcon23, for the detailed information. Could you try to set the output logging level to debug for the plugin logs?
Ref: https://steampipe.io/docs/reference/env-vars/steampipe_log#usage
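For example (the query here is just a placeholder; the resulting logs land in the steampipe log folder, ~/.steampipe/logs by default):

$ export STEAMPIPE_LOG=debug
$ steampipe query "select count(*) from csv.all_jira_childs"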
@Subhajit97 see attached logs. Died close to the start:
====================================================
Script will sync ECR findings to jira
====================================================
download: s3://blah/get-ecr-findings.csv.2023-06-28 to ./get-ecr-findings.csv
### Starting to get parent jiras ###
### got parent jiras ###
### FInished get parent jiras ###
### Starting to filter findings ###
Warning: executeQueries: query 1 of 1 failed: ERROR: relation "csv.get-ecr-findings" does not exist (SQLSTATE 42P01)
Hey @captainfalcon23, I'm sorry our solution isn't yet working for you. We tried to replicate the steps that you mentioned here in a shell script, but couldn't replicate the error you were getting after using the add-lock-to-prevent-concurrent-read-write csv plugin branch.
To understand your issue better and replicate it, I have some questions for you:
- What is the volume of data (approx row count of these files) you're dealing with in these CSV files? (We tried with 10k rows.)
- Can you try with a minimal setup, i.e. steampipe v0.20.7 with only the jira, aws, and csv plugins, and the csv plugin built from the branch given?
- With the above minimal setup, can you try and re-run your script with STEAMPIPE_LOG=info instead of Debug and zip us the log folder just like you did previously?
In addition to these, if possible, can you please share the script you are running with me (personally)? I can contact you if you provide your Slack handle or email (whichever you're comfortable with).
Thanks for reporting this and for your prolonged patience while we try and figure out the issue.
Hi @pskrbasu
What is the volume of data (approx row count of these files) you're dealing with in these CSV files? (We tried with 10k rows.)
file-from-s3.csv = approx 9k
filtered_file_from_s3.csv = approx 1200
mergedfile1.csv = <50
mergedfile2.csv = <50
jira2.csv = approx 1200
mergedfile3.csv = <50
jira1.csv = approx 500
mergedfile4.csv = <50
Can you try with a minimal setup, i.e. steampipe v0.20.7 with only the jira, aws, and csv plugins, and the csv plugin built from the branch given?
$ git branch -v
$ steampipe plugin list
+---------------------------------------------------+---------+
| Installed Plugin                                   | Version |
+---------------------------------------------------+---------+
| hub.steampipe.io/plugins/turbot/aws@latest         | 0.107.0 |
| hub.steampipe.io/plugins/turbot/csv@latest         | local   |
| hub.steampipe.io/plugins/turbot/jira@latest        | 0.10.1  |
| hub.steampipe.io/plugins/turbot/kubernetes@latest  | 0.21.0  |
| hub.steampipe.io/plugins/turbot/steampipe@latest   | 0.8.0   |
+---------------------------------------------------+---------+
With the above minimal setup can you try and re-run your script with STEAMPIPE_LOG=info instead of Debug and zip us the log folder just like you did previously?
* Done, see attached -> [logs.zip](https://github.com/turbot/steampipe-plugin-csv/files/11923924/logs.zip). It failed near the end after a few attempts:
download: s3://blah/blah.csv to ./get-ecr-findings.csv
Warning: executeQueries: query 1 of 1 failed: ERROR: relation "csv.all_jira_childs" does not exist (SQLSTATE 42P01)
I had a quick look at the CSV it is trying to read ("all_jira_childs.csv") and it is there and looks fine to me.
can you please share the script you are running with me(personally)?
* Yes, no worries. I have joined the Steampipe Slack; you can message me @captainfalcon23
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 30 days.
Hey @captainfalcon23, I'm sorry for losing track of this. I did have a play with your script but I could not reproduce your issue (maybe because of a difference in data). Can you build the csv plugin from this branch, which was built with the latest rc version of our SDK (v5.6.0-rc.25), try with Steampipe v0.20.11, and see if your issue still exists?
Hi @pskrbasu I tested this again this morning and ran into the same issue.
I think it is something to do with me not running steampipe in service mode, and instead, each part of the script is stopping and starting the DB.
What are your thoughts? Unfortunately, there is nothing useful in the INFO logs.
@captainfalcon23
I think it is something to do with me not running steampipe in service mode, and instead, each part of the script is stopping and starting the DB.
We recently made some improvements in this area. Can you try using v0.21.0-alpha.8 and see if you still see the issue?
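In the meantime, if the per-step database start/stop is indeed the trigger, one rough workaround (a sketch; the script name is a placeholder) is to keep a single service running for the whole script:

$ steampipe service start
$ ./sync-ecr-findings-to-jira.sh   # every step now talks to the same long-lived database
$ steampipe service stop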
Hi @pskrbasu Tested with that new version, and I ran my script for hours and it didn't die! I think we've found a solution!
@captainfalcon23 Great to hear your script doesn't die anymore 😄
Steampipe CLI v.0.21.0 should be released in the upcoming weeks, but in the meantime you can continue using the alpha versions. If you encounter any other issues, please feel free to re-open this issue!
Describe the bug
I have a script which, at its core, uses steampipe + the csv plugin to manipulate a bunch of CSVs, get data from Jira, make some new CSVs, etc. This job is run using a GitLab pipeline. Randomly, the job will just fail reading one of the CSVs with the error "relation "csv.blah" does not exist (SQLSTATE 42P01)". If I retrigger the exact same job, it works fine the second time. I've had this issue when testing locally and in GitLab. I assume there is some race condition happening.

Steampipe version (steampipe -v)
Latest as of writing this issue - 0.20.6

Plugin version (steampipe plugin list)
Latest as of writing this issue - 0.9.0

To reproduce
It's a bit hard to give exact steps, but potentially try having multiple CSV files: do some query to join them, write the result out to a new temp file, move the temp file to blah.csv, try and read blah.csv, then do another query to join them, write the result out to a new temp file, and move the temp file to blah2.csv.
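A rough sketch of that flow, with placeholder file and table names, assuming each step is a separate steampipe invocation as described above:

# input_a.csv and input_b.csv already exist in the working directory
steampipe query --output csv \
  "select a.id, b.val from csv.input_a a join csv.input_b b on a.id = b.id" > tmp.csv
mv tmp.csv blah.csv

# immediately read the file that was just created; this is the step that
# intermittently fails with: relation "csv.blah" does not exist (SQLSTATE 42P01)
steampipe query --output csv "select count(*) from csv.blah" > tmp2.csv
mv tmp2.csv blah2.csv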
Expected behavior
No error/race condition occurs.