redmosquitoo opened this issue 3 years ago (status: Open)
Hey @redmosquitoo, I'm experiencing this, too. Did you ever come up with a workaround?
Sadly, no. At first we just moved the column to the end so it wouldn't break the other columns, but that wasn't ideal, so in the end we changed the connector...
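For anyone curious, a minimal sketch of that first workaround (the object and field names here are hypothetical, not from the original issue): put the free-text field last in the SOQL select list so that, if embedded commas break the column splitting, only the trailing column is affected instead of shifting every column after it.

    # Hypothetical example: Description__c is the field that may contain
    # commas and double-quotes, so it goes last in the select list.
    sql = "SELECT Id, Name, Amount__c, Description__c FROM My_Object__c"

    df = (
        spark.read.format("com.springml.spark.salesforce")
        .option("soql", sql)
        .option("sfObject", "My_Object__c")
        .option("username", login)
        .option("password", password)
        .load()
    )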
I am reading a table from Salesforce using SOQL:
df = (
    spark.read.format("com.springml.spark.salesforce")
    .option("soql", sql)
    .option("queryAll", "true")
    .option("sfObject", sf_table)
    .option("bulk", bulk)
    .option("pkChunking", pkChunking)
    .option("version", "51.0")
    .option("timeout", "99999999")
    .option("username", login)
    .option("password", password)
    .load()
)
and whenever a string value contains a combination of double-quotes and commas, it messes up my table schema, like so:
in source:
Column A | Column B            | Column C
000AB    | "text with, comma"  | 123XX

read from Salesforce into the df:
Column A | Column B            | Column C
000AB    | ""text with         | comma""
Is there any option to avoid cases like this, where the comma inside the quoted value is treated as a delimiter?
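In case it helps anyone hitting the same problem, one workaround I've used is to bypass the CSV path entirely: pull the rows with the simple-salesforce package (which returns already-parsed records rather than raw CSV) and build the Spark DataFrame from those records. This is not part of the spark-salesforce connector, and the object/field names below are assumptions for illustration only.

    # Sketch only: assumes the simple-salesforce package and a hypothetical
    # My_Object__c object with a free-text Description__c field.
    from simple_salesforce import Salesforce
    import pandas as pd

    sf = Salesforce(username=login, password=password, security_token=token)

    # query_all returns parsed dicts, so embedded commas and quotes in
    # Description__c can never shift values into the wrong columns.
    result = sf.query_all("SELECT Id, Name, Description__c FROM My_Object__c")
    records = [
        {k: v for k, v in row.items() if k != "attributes"}
        for row in result["records"]
    ]

    pdf = pd.DataFrame(records)
    df = spark.createDataFrame(pdf)  # convert to a Spark DataFrame

The trade-off is that this goes through the REST API rather than the Bulk API, so it may not be practical for very large tables where pkChunking was the reason for using the connector in the first place.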