forcedotcom / cli

Salesforce CLI
https://developer.salesforce.com/docs/atlas.en-us.sfdx_cli_reference.meta/sfdx_cli_reference/
BSD 3-Clause "New" or "Revised" License

Error (CSV_INVALID_CLOSING_QUOTE): Invalid Closing Quote: got "A" at line 231 instead of delimiter, record delimiter, trimable character (if activated) or comment #3103

Closed · sjurgis closed this issue 3 weeks ago

sjurgis commented 3 weeks ago

Summary

I get an error when running sf data query -b -f fields.soql -w 30 -r csv > fields.csv

Error (CSV_INVALID_CLOSING_QUOTE): Invalid Closing Quote: got "A" at line 231 instead of delimiter, record delimiter, trimable character (if activated) or comment

Steps To Reproduce

Unsure; I'm trying to retrieve 1M+ rows.

Why would the CLI try to parse the CSV in the first place when the API already provides a CSV file? I suppose the results come in multiple files, and one way of merging them is parsing...


Additional information

System Information

{
  "architecture": "darwin-arm64",
  "cliVersion": "@salesforce/cli/2.65.8",
  "nodeVersion": "node-v20.13.1",
  "osVersion": "Darwin 24.1.0",
  "rootPath": "/Users/ihazgrog/.nvm/versions/node/v20.13.1/lib/node_modules/@salesforce/cli",
  "shell": "zsh",
  "pluginVersions": [
    "@oclif/plugin-autocomplete 3.2.7 (core)",
    "@oclif/plugin-commands 4.1.5 (core)",
    "@oclif/plugin-help 6.2.16 (core)",
    "@oclif/plugin-not-found 3.2.24 (core)",
    "@oclif/plugin-plugins 5.4.15 (core)",
    "@oclif/plugin-search 1.2.13 (core)",
    "@oclif/plugin-update 4.6.8 (core)",
    "@oclif/plugin-version 2.2.15 (core)",
    "@oclif/plugin-warn-if-update-available 3.1.20 (core)",
    "@oclif/plugin-which 3.2.16 (core)",
    "@salesforce/cli 2.65.8 (core)",
    "apex 3.5.5 (core)",
    "api 1.3.1 (core)",
    "auth 3.6.70 (core)",
    "custom-metadata 3.3.6 (user) published 189 days ago (Sun May 05 2024) (latest is 3.3.37)",
    "data 3.9.0 (core)",
    "deploy-retrieve 3.15.4 (core)",
    "devops-center 1.2.9 (user) published 241 days ago (Fri Mar 15 2024) (latest is 1.2.26)",
    "functions 1.23.0 (user) published 250 days ago (Wed Mar 06 2024)",
    "info 3.4.15 (core)",
    "lightning-dev 1.3.0 (user) published 51 days ago (Sat Sep 21 2024) (latest is 1.9.3)",
    "limits 3.3.37 (core)",
    "marketplace 1.3.2 (core)",
    "org 5.0.2 (core)",
    "packaging 1.27.1 (user) published 375 days ago (Wed Nov 01 2023) (latest is 2.9.0)",
    "schema 3.3.39 (core)",
    "settings 2.4.2 (core)",
    "signups 2.3.0 (user) published 188 days ago (Tue May 07 2024) (latest is 2.6.1)",
    "sobject 1.4.44 (core)",
    "telemetry 3.6.18 (core)",
    "templates 56.3.26 (core)",
    "trust 3.7.38 (core)",
    "user 3.6.0 (core)",
    "@salesforce/sfdx-scanner 4.1.0 (user) published 194 days ago (Wed May 01 2024) (latest is 4.7.0)"
  ]
}
github-actions[bot] commented 3 weeks ago

Thank you for filing this issue. We appreciate your feedback and will review the issue as soon as possible. Remember, however, that GitHub isn't a mechanism for receiving support under any agreement or SLA. If you require immediate assistance, contact Salesforce Customer Support.

cristiand391 commented 3 weeks ago

I'm pretty sure this is caused by the quantity of records (sf data query can run out of memory while downloading the CSV chunks from the API and parsing them).
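The plugin's actual chunking logic isn't shown in this thread, so the following is only a sketch of the failure mode being described, using Python's stdlib csv module: if a raw byte-offset boundary lands inside a quoted field, the truncated chunk is no longer valid CSV and a strict parser rejects it, analogous to the CSV_INVALID_CLOSING_QUOTE error above. The sample data and cut point are made up for illustration.

```python
import csv
import io

# A record whose quoted field spans multiple lines, as long text fields
# in query results often do. (Sample data, not from the actual org.)
data = 'Id,Description\r\n001,"line one\r\nline two"\r\n002,plain\r\n'

# Naively split the stream at a fixed offset, as a chunked download
# might; this boundary lands inside the quoted field of record 001.
cut = data.index("line two")
chunk1, chunk2 = data[:cut], data[cut:]

# chunk1 now ends mid-quote, so strict CSV parsing fails on it.
try:
    list(csv.reader(io.StringIO(chunk1), strict=True))
except csv.Error as exc:
    print("parse error:", exc)
```

A parser that streams the chunks as one continuous input would be fine; the error only appears when a chunk is treated as a standalone CSV document.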

Can you try the data export bulk command? It can handle millions of records and output them to a file in CSV or JSON: https://developer.salesforce.com/docs/atlas.en-us.sfdx_cli_reference.meta/sfdx_cli_reference/cli_reference_data_commands_unified.htm#cli_reference_data_export_bulk_unified
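A rough bulk-export equivalent of the failing invocation might look like this; the flag names below are taken from the linked reference page and should be verified against your CLI version:

```shell
# Sketch of the suggested replacement for
# `sf data query -b -f fields.soql -w 30 -r csv > fields.csv`:
sf data export bulk \
  --query-file fields.soql \
  --output-file fields.csv \
  --result-format csv \
  --wait 30
```

Note that the command writes directly to --output-file, so no shell redirection is needed.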

sjurgis commented 3 weeks ago

Thanks @cristiand391, that sort of makes sense. With sfdx I used to use the bulk export and that worked pretty well. I'm still somewhat new to sf, and since it has a --bulk flag I assumed the commands had been merged.

It would be nice if the command worked, or at least suggested a different command when too much data is detected (e.g. the Illuminated Cloud IDE always warns me when tons of rows are about to be returned, so it doesn't crash itself).

Regardless, I played with an LLM and managed to create an independent bash script that downloaded the data I needed...

cristiand391 commented 3 weeks ago

> It would be nice if the command worked, or at least suggested a different command when too much data is detected (e.g. the Illuminated Cloud IDE always warns me when tons of rows are about to be returned, so it doesn't crash itself).

Yes, we actually plan to deprecate the bulk functionality of data query, so there'll be just one command for bulk queries and no confusion.

I'll close this issue; please let us know if you run into any problems with data export bulk.