Closed Roeya closed 6 years ago
I tried to use buffered I/O (caadd9e), but it was not much faster.
I think the driver uses too many make() and append() calls.
Umm...
I think the driver uses too many make() and append() calls.
Maybe. It's time for pprof ))
I used Go's pprof profiler to find the problem. The biggest cost is net.(*conn).Read, which takes 190 ms in total. It calls net.(*netFD).Read, which calls internal/poll.(*FD).Read, which calls internal/poll.(*ioSrv).ExecIO, which calls internal/poll.(*FD).Read.func1; this whole call chain has little overhead. The last calls are syscall.WSARecv, which calls syscall.Syscall9, which calls runtime.cgocall. runtime.cgocall alone takes 140 ms of the 190 ms. While going through the driver I noticed many small 4-byte reads. Is this part of the protocol, or just the way you access the results?
WSARecv
Is it Windows?
Yes, my platform is Windows 10 and the DB is Firebird.
Roeya, can you share the pprof profile? Even better if you also share the test code and a test DB, so we can compare profiles on Linux and Windows. What version of Go do you use?
@Roeya Can you profile with the 'bufio' branch? That branch (which I just added) uses a buffered reader and a buffered writer.
I will try to get the branch and check (I am new to Go and Git, so not sure...)
I will create a zip with the DB file and share it.
The program and the database file: the zip contains a try_db.go file that selects all data from a table, and the CVNeto.fdb database file.
Correction: the program had a bug in the scan. This one works fine, but only gets one field from the DB. correct.zip
I will try to get the branch and check (I am new to Go and Git, so not sure...)
If you have a problem with git, you can download the sources from this link: https://github.com/nakagami/firebirdsql/archive/bufio.zip
But the right way is:
go get github.com/nakagami/firebirdsql
cd $GOPATH\src\github.com\nakagami\firebirdsql
git checkout bufio
and after that, rebuild the app.
bat22, thanks for the tip. I tested a very simple case of getting one value. I changed my code to Ping the database before timing the query, as I understood that the program connects to the DB on the first query...
After checking the new branch and comparing it to C++, Perl, and Go:
Perl: ~9 ms to ~17 ms
C++: ~10 ms to ~15 ms (note: this is a Win32 build, not 64-bit like the others)
Go: ~11 ms to ~12 ms
From these results, all now seem about the same. I changed back to the default Go driver and tested again: Go (no bufio) ~20 ms. I think you have solved the syscall bottleneck.
The next step will be switching to the sqlx package and comparing getting a slice of maps against Perl's selectall_arrayref with Slice => {} and C++ with loading the data. I think the next 2x improvement will be in creating the maps more efficiently.
I understood that the program connects to the DB on the first query...
Yes, it does.
I got the same result: the test with bufio is 2x faster on Windows (Win7 x64, Go 1.9.5). I created a simple table with one int field and 100k rows (to reduce the impact of the Query method and of copying strings). With bufio the test is 2x faster; in the top list, runtime.cgocall moved from 1st place to 14th and took 1.8% of the time (without bufio: 30%).
Now I see functions from the fmt package near the top of the profile; that is not good.
On Linux, the test with bufio is 3.2x faster (Ubuntu 18.04 i386, Go 1.10, Firebird 3.0).
Thanks @Roeya @bat22. bufio is now merged to master.
bufio is now merged to master
cool!
I also fixed the fmt performance issue: https://github.com/nakagami/firebirdsql/pull/63
I performed some tests and found that fetching rows is very slow compared to C++ or even Perl. The problem is in the wireprotocol.go module: it seems that every Write or Read on the socket goes through an operating system syscall, creating huge overhead because there are many small (4-byte) read operations. I am not familiar with TCP in Go or with the Firebird protocol, but is it possible to improve the way the driver communicates with the server?
In my specific test, fetching ~2000 rows that takes 50-70 ms in C++ (using the IBPP driver) or Perl took 230-240 ms in Go. Even when I limited it to 40 records there was a huge penalty in Go.
If there were such a penalty in TCP itself, then all the internet stuff would be very slow, but I did not find complaints about slow communication, so maybe there is a way around this problem?