I have a table with a column AMOUNTS of type array<decimal(10,2)>, which I expect to scan into a field of type []float64.
This is the test I wrote to reproduce it:
func (s *DatabricksRepositorySuite) Test_Float64_Conversion() {
	query := `SELECT AMOUNTS FROM components WHERE AMOUNTS IS NOT NULL AND array_size(AMOUNTS) > 0 LIMIT 1`
	var result []float64
	err := s.repository.db.QueryRow(query).Scan(&result)
	s.NoError(err)
}
When I run the query, I get this error:
sql: Scan error on column index 0, name "AMOUNTS": unsupported Scan, storing driver.Value type string into type *[]float64
Source
This error comes from the Go standard library [1]: the value for that column is provided by the SDK as a string, but the destination is []float64, so the conversion fails. I also dug into the Databricks SDK source code and found the place [2] where the string is generated.
[1] Golang source - convertAssignRows
[2] Databricks SDK - listValueContainer.Value
Workaround
I created a custom type for the fields that use arrays. My field now uses that custom type instead of []float64, and I call Amount.Slice() when I want to read its value.
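The custom type itself was lost from this report, so here is a minimal sketch of what such a workaround could look like. It assumes the driver delivers the array as a string shaped like "[1.00,2.50]"; the type name Amounts and the parsing details are assumptions, not the reporter's actual code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Amounts is a hypothetical custom type standing in for the one described
// in the workaround. It implements sql.Scanner so database/sql can assign
// the driver's string value instead of rejecting it.
type Amounts []float64

// Scan parses the string form the driver is assumed to produce,
// e.g. "[1.00,2.50]", into the underlying float64 slice.
func (a *Amounts) Scan(src any) error {
	s, ok := src.(string)
	if !ok {
		return fmt.Errorf("unsupported Scan source type %T for Amounts", src)
	}
	s = strings.Trim(s, "[]")
	if s == "" {
		*a = nil
		return nil
	}
	parts := strings.Split(s, ",")
	out := make([]float64, 0, len(parts))
	for _, p := range parts {
		f, err := strconv.ParseFloat(strings.TrimSpace(p), 64)
		if err != nil {
			return err
		}
		out = append(out, f)
	}
	*a = out
	return nil
}

// Slice returns the plain []float64, matching the Amount.Slice() call
// mentioned above.
func (a Amounts) Slice() []float64 { return []float64(a) }

func main() {
	var a Amounts
	if err := a.Scan("[1.00,2.50]"); err != nil {
		panic(err)
	}
	fmt.Println(a.Slice())
}
```

With this in place, `var result Amounts` can be passed to Scan directly, because database/sql checks for the sql.Scanner interface before attempting its default type conversions.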