Open volkanunsal opened 8 years ago
Can someone help me with this? I found a related issue here, but it looks like this should have been solved by now. Maybe I'm just not getting it. Any help would be appreciated.
Hi @volkanunsal, can you replicate on an sqlite3 database?
I received a similar report about base64 encoded values in mapScan in the past (https://github.com/jmoiron/sqlx/issues/191), but I wasn't really sure how to replicate. If you can give me some more detail about your database, schema, query, I can look into it, especially if the database is one of the 3 in the sqlx test suite (postgres, sqlite3 or mysql).
@jmoiron It happens with postgres. I'm not exactly sure how to create a test case for it in sqlite, but I'll give it a shot this weekend.
The schema of the table is very simple.
Table "public.users"

 Column │  Type   │ Modifiers
────────┼─────────┼───────────
 id     │ numeric │
 name   │ text    │
The query is SELECT * from users;
This is the part where I'm encoding it into JSON. Could this be where the encoding is getting messed up?
type rowMap map[string]interface{}
type fieldMap map[string]string
type fieldsMap map[string]fieldMap
type successPayload struct {
Rows []rowMap `json:"rows"`
Fields fieldsMap `json:"fields"`
Elapsed float64 `json:"time"`
TotalRows int64 `json:"total_rows"`
}
res := successPayload{scannedRows, fields, elapsed, count}
// Write it back to the client.
w.Header().Set("Content-Type", "application/json; charset=utf-8")
json.NewEncoder(w).Encode(res)
I just tried it with sqlite3 using the following schema.
CREATE TABLE `users` (
`id` INTEGER,
`name` VARCHAR(64) NULL
);
In sqlite3, the id field works, but the name field is still encoded in base64. Here is my test case:
https://gist.github.com/volkanunsal/e5b84aef87317fb4dff75c97f4c875a8
@jmoiron Have you had a chance to look at the example above? Do you see any clues as to what might be happening? Is there anything I can do to fix this on my own end? Thanks.
Definitely seeing this with a MySQL backend.
I have the same problem with SliceScan(): it returns []byte for string values, which the JSON encoder then converts to base64. My workaround is to manually convert them to string before JSON encoding.
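A sketch of that workaround; the sample values here are stand-ins for what SliceScan() would actually return from a row:

```go
package main

import "fmt"

func main() {
	// Stand-in for the result of rows.SliceScan(): drivers often
	// return text columns as []byte.
	vals := []interface{}{int64(2), []byte("Joe Pesci")}

	// Convert []byte values to string before handing them to
	// encoding/json, so they are not base64-encoded.
	for i, v := range vals {
		if b, ok := v.([]byte); ok {
			vals[i] = string(b)
		}
	}
	fmt.Println(vals...) // 2 Joe Pesci
}
```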
I got the same error with MySQL, for both MapScan() and SliceScan().
This may be related to prepared queries vs non-prepared queries. Try preparing the queries first and see if it fixes the problem.
When performing a SELECT combined with Rows.MapScan, it base64-encodes the Postgres JSON data type instead of either a) parsing it or b) returning it as an unaltered string.
Any advice?
The standard encoding/json package encodes byte slices as base64.
Reproduction sample:
SHOW FULL COLUMNS FROM _tablename_
Go 1.9; MySQL driver: github.com/go-sql-driver/mysql (8fefef06)
// rows.Scan => []uint8
type dummy struct {
Field *string `db:"Field"`
Type *string `db:"Type"`
Collation *string `db:"Collation"`
Null *string `db:"Null"`
Key *string `db:"Key"`
Default *string `db:"Default"`
Extra *string `db:"Extra"`
Privileges *string `db:"Privileges"`
Comment *string `db:"Comment"`
}
Fields declared as *string marshal to JSON as expected; fields declared as interface{} do not.
See the doc for Rows.Scan in database/sql/sql.go.
I also encountered this problem, so I transform the fields like this. It works:

tmp := make(map[string]interface{})
rows.MapScan(tmp)
for k, encoded := range tmp {
	switch v := encoded.(type) {
	case []byte:
		tmp[k] = string(v)
	}
}
I'm implementing a SQL server and I got a strange result from using MapScan. It looks garbled:
{ "id": "Mg==", "name": "Sm9lIFBlc2Np" }
I wonder if there is something I'm missing about how to use MapScan? Can you help me understand what's going wrong here? Here is my code:

type Row map[string]interface{}

var scannedRows []Row
...
i := 0
rows, _ := h.db.Queryx(q[0])
for rows.Next() {
	results := Row{}
	err := rows.MapScan(results)
	checkErr(err, w)
	scannedRows[i] = results
	i++
}
how to solve this problem?
columns, _ := rows.Columns()
columnTypes, _ := rows.ColumnTypes()
columnPointers := make([]interface{}, len(columns))
for i := 0; i < len(columns); i++ {
t := columnTypes[i].ScanType()
...
if t.Name() == "RawBytes" {
columnPointers[i] = new(sql.RawBytes)
}
...
}
...
for col, result := range results {
	switch v := result.(type) {
	case *sql.RawBytes:
		results[col] = string(*v)
	...
	}
}
I'm also having this problem. 10.2.32-MariaDB =-(
Facing the same problem with MySQL: the data type after doing a MapScan is []byte, whereas the actual column types in the DB are int and varchar.
Also experiencing this with AWS Aurora (MySQL). Any good solutions?
Have the same issue with MySQL
My solution:
for rows.Next() {
var data map[string]interface{} = make(map[string]interface{})
err = rows.MapScan(data)
if err != nil {
fmt.Println(err, sqlStr, params.params)
}
for k, v := range data {
if value, ok := v.([]byte); ok {
data[k] = string(value)
}
}
res = append(res, data)
}
There are two problems.

1. Scan(*[]byte) may return the driver's internal buffer directly. The contents of the []byte will be overwritten by Next(), so the user needs to copy the buffer before calling Next(). This is the design of database/sql; it is not specific to sqlx or MapScan(). I think there is nothing sqlx can do here; it is the user's responsibility.

2. Rows.Scan(interface{}) returns []byte even for string and other types. Drivers don't know the scan target type, so they return raw values as []byte to avoid allocations. database/sql converts the values returned by the driver to the scan target type, but when the target type is interface{}, database/sql returns the driver's value without any conversion. This is why []byte is returned for string columns.

I think sqlx should use ColumnType.ScanType instead of interface{} here:
https://github.com/jmoiron/sqlx/blob/92bfa368c21aafd33626444a47b2f99be2432623/sqlx.go#L841

Maybe database/sql could do the conversion, if a small backward incompatibility is acceptable.
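The first problem (the driver reusing its buffer across rows) can be demonstrated without any driver; the buffer names here are purely illustrative:

```go
package main

import "fmt"

func main() {
	// Simulate a driver that reuses one buffer for every row.
	buf := []byte("row1")

	aliased := buf        // keeps a reference to the driver's buffer
	copied := string(buf) // string conversion copies the bytes

	copy(buf, "row2") // the driver overwrites the buffer on Next()

	fmt.Println(string(aliased)) // row2 — the aliased slice changed underneath us
	fmt.Println(copied)          // row1 — the copy is safe
}
```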