ycd closed this issue 4 years ago.
You are measuring the speed of http.get('https://google.com'), which should be near-instant unless you have internet issues. Can you try benchmarking it with, say, Go's http.Get("https://google.com")?
Yes, I am doing a little benchmark. Am I doing it wrong? I know coroutines are in the future plans, and I'm sure they will make a significant difference, but I'm not sure how I should do this right now. Can you provide a minimal example?
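As a minimal, self-contained sketch of such a benchmark (written here in Python, one of the languages compared in this thread, rather than V), timing GETs against a throwaway local server removes internet variance from the numbers. The handler, port, and URL below are illustrative, not taken from the original program:

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence the default per-request logging.
        pass


# Port 0 asks the OS for any free port, so the sketch never collides.
server = HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/hello"

timings = []
for _ in range(5):
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    timings.append((time.perf_counter() - start) * 1000)
    print(f"Finish getting time {timings[-1]:.0f} ms")

server.shutdown()
```

On localhost, each iteration should report single-digit milliseconds; anything much larger points at the client library rather than the network.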
It would appear that it is either intermittent, or specific to your system.
On my system, your code gave this output (but notice the last number compared to the rest):
jalon@Ripper:~/git/vv$ v run foo.v
Finish getting time 159 ms
Finish getting time 158 ms
Finish getting time 159 ms
Finish getting time 162 ms
Finish getting time 161 ms
Finish getting time 166 ms
Finish getting time 161 ms
Finish getting time 167 ms
Finish getting time 168 ms
Finish getting time 164 ms
Finish getting time 166 ms
Finish getting time 167 ms
Finish getting time 168 ms
Finish getting time 172 ms
Finish getting time 169 ms
Finish getting time 167 ms
Finish getting time 166 ms
Finish getting time 170 ms
Finish getting time 170 ms
Finish getting time 172 ms
Finish getting time 170 ms
Finish getting time 169 ms
Finish getting time 172 ms
Finish getting time 174 ms
Finish getting time 173 ms
Finish getting time 173 ms
Finish getting time 173 ms
Finish getting time 175 ms
Finish getting time 172 ms
Finish getting time 177 ms
Finish getting time 177 ms
Finish getting time 176 ms
Finish getting time 176 ms
Finish getting time 180 ms
Finish getting time 177 ms
Finish getting time 176 ms
Finish getting time 179 ms
Finish getting time 180 ms
Finish getting time 176 ms
Finish getting time 178 ms
Finish getting time 181 ms
Finish getting time 181 ms
Finish getting time 181 ms
Finish getting time 179 ms
Finish getting time 180 ms
Finish getting time 182 ms
Finish getting time 180 ms
Finish getting time 187 ms
Finish getting time 186 ms
Finish getting time 1215 ms
jalon@Ripper:~/git/vv$
Wouldn't benchmarking against a local server be more ideal than a remote one?
@littledivy, I did not understand. How did you come up with the idea that I am benchmarking against a remote server?
Your code does a 'get' to google.com. Unless you work there, that is a remote server. I believe he was talking about running a server on localhost and testing against that, so you would not be affected by slow routes on the internet, etc.
Fair enough, I get it now. Thanks for the explanation @JalonSolov, but I do not think it can be related to the internet or anything else, because my existing scraper, written in Go, sends and receives responses for 2500+ requests per second. Even my results in Python are better; that is exactly why I mentioned "Pretty sure i'm doing something wrong.".
Also, the comment @littledivy posted makes no sense to me. I already explained my use case, he just says "benchmark on a local server", and the issue is closed. That is a really sad approach.
I suspect (I didn't do it myself) it was closed because I ran the exact same code and got tremendously shorter times (roughly 1/20th), not because of what @littledivy said.
It would also be useful to test against a server running on localhost (with all 3 sets of code), just to see what the difference is when you are not ALSO dealing with the internet and whatever issues the network might be having when you run your tests.
If the V code is still that much slower than the Go and Python code when contacting a server on localhost, there are definite issues to be dealt with in V. If you run all 3 back-to-back contacting google.com... there may or may not be an issue in V, because the 'net itself may have developed an issue between runs.
And all that said, there is new network code already being worked on, so any problems you are seeing now may already be fixed in the new code.
@JalonSolov, okay, I did a local benchmark, and things got more interesting.
Now it gave me this output.
Finish getting time 5004 ms
Finish getting time 5004 ms
Finish getting time 5005 ms
Finish getting time 5007 ms
Finish getting time 5007 ms
Finish getting time 5005 ms
Finish getting time 5004 ms
Finish getting time 5006 ms
Finish getting time 5011 ms
Finish getting time 5005 ms
Finish getting time 5009 ms
Finish getting time 5006 ms
Finish getting time 5007 ms
Finish getting time 5001 ms
Finish getting time 5008 ms
Finish getting time 5006 ms
Finish getting time 5006 ms
Finish getting time 5006 ms
Finish getting time 5013 ms
Finish getting time 5008 ms
Finish getting time 5002 ms
Finish getting time 5002 ms
Finish getting time 5010 ms
Finish getting time 5002 ms
Finish getting time 5002 ms
Finish getting time 5002 ms
Finish getting time 5002 ms
Finish getting time 5004 ms
Finish getting time 5003 ms
Finish getting time 5003 ms
Finish getting time 5004 ms
Finish getting time 5003 ms
Finish getting time 5003 ms
Finish getting time 5004 ms
Finish getting time 5003 ms
Finish getting time 5003 ms
Finish getting time 5006 ms
Finish getting time 5002 ms
Finish getting time 5002 ms
Finish getting time 5007 ms
Finish getting time 5002 ms
Finish getting time 5006 ms
Finish getting time 5008 ms
Finish getting time 5009 ms
Finish getting time 5007 ms
Finish getting time 5007 ms
Finish getting time 5008 ms
Finish getting time 5004 ms
Finish getting time 5005 ms
Finish getting time 5005 ms
But my HTTP server accepted the requests immediately:
2020-10-14 00:10:21.355605
2020-10-14 00:10:21.356665
2020-10-14 00:10:21.356739
2020-10-14 00:10:21.358741
2020-10-14 00:10:21.362016
2020-10-14 00:10:21.361238
2020-10-14 00:10:21.362536
2020-10-14 00:10:21.363084
2020-10-14 00:10:21.363530
2020-10-14 00:10:21.363861
2020-10-14 00:10:21.363866
2020-10-14 00:10:21.364269
2020-10-14 00:10:21.364733
2020-10-14 00:10:21.364950
2020-10-14 00:10:21.365050
2020-10-14 00:10:21.365158
2020-10-14 00:10:21.365334
2020-10-14 00:10:21.365514
2020-10-14 00:10:21.365957
2020-10-14 00:10:21.366228
2020-10-14 00:10:21.366648
2020-10-14 00:10:21.367109
2020-10-14 00:10:21.366717
2020-10-14 00:10:21.367382
2020-10-14 00:10:21.368497
2020-10-14 00:10:21.368831
2020-10-14 00:10:21.369383
2020-10-14 00:10:21.369739
2020-10-14 00:10:21.370572
2020-10-14 00:10:21.370550
2020-10-14 00:10:21.370940
2020-10-14 00:10:21.370940
2020-10-14 00:10:21.371296
2020-10-14 00:10:21.371348
2020-10-14 00:10:21.371884
2020-10-14 00:10:21.372126
2020-10-14 00:10:21.372276
2020-10-14 00:10:21.372788
2020-10-14 00:10:21.372681
2020-10-14 00:10:21.373067
2020-10-14 00:10:21.373221
2020-10-14 00:10:21.373374
2020-10-14 00:10:21.373655
2020-10-14 00:10:21.373676
2020-10-14 00:10:21.374013
2020-10-14 00:10:21.374108
2020-10-14 00:10:21.374460
2020-10-14 00:10:21.375099
2020-10-14 00:10:21.375116
2020-10-14 00:10:21.375572
Very, very strange. You saw the numbers I got. Please run v doctor and post the output here.
Here is the output of v doctor:
OS: linux, Ubuntu 20.04.1 LTS
Processor: 4 cpus, 64bit, little endian, Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
CC version: cc (Ubuntu 9.3.0-10ubuntu2) 9.3.0
vroot: /home/leno/v
vexe: /home/leno/v/v
vexe mtime: 2020-10-01 16:06:45
is vroot writable: true
V full version: V 0.1.29 ac384db
Git version: git version 2.25.1
Git vroot status: 0.1.29-469-gac384db7 (110 commit(s) behind V master)
.git/config present: true
/var/tmp/tcc status: master 5e73982e
thirdparty/tcc: N/A
@ycd try running v up and then test again.
Hello @Larpon, I updated with v up. Here is my v doctor output:
OS: linux, Ubuntu 20.04.1 LTS
Processor: 4 cpus, 64bit, little endian, Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
CC version: cc (Ubuntu 9.3.0-10ubuntu2) 9.3.0
vroot: /home/leno/v
vexe: /home/leno/v/v
vexe mtime: 2020-10-14 07:51:20
is vroot writable: true
V full version: V 0.1.29 a2fba05
Git version: git version 2.25.1
Git vroot status: 0.1.29-579-ga2fba052
.git/config present: true
/var/tmp/tcc status: master 5e73982e
thirdparty/tcc: N/A
But I'm still getting the same result.
Finish getting time 5003 ms
Finish getting time 5004 ms
Finish getting time 5002 ms
Finish getting time 5003 ms
Finish getting time 5002 ms
Finish getting time 5008 ms
Finish getting time 5002 ms
Finish getting time 5006 ms
Finish getting time 5002 ms
Finish getting time 5008 ms
Finish getting time 5006 ms
Finish getting time 5007 ms
Finish getting time 5007 ms
Finish getting time 5006 ms
Finish getting time 5008 ms
Finish getting time 5013 ms
Finish getting time 5015 ms
Finish getting time 5016 ms
Finish getting time 5017 ms
Finish getting time 5017 ms
Finish getting time 5017 ms
Finish getting time 5001 ms
Finish getting time 5002 ms
Finish getting time 5020 ms
Finish getting time 5003 ms
Finish getting time 5006 ms
Finish getting time 5004 ms
Finish getting time 5028 ms
Finish getting time 5004 ms
Finish getting time 5007 ms
Finish getting time 5023 ms
Finish getting time 5019 ms
Finish getting time 5008 ms
Finish getting time 5007 ms
Finish getting time 5028 ms
Finish getting time 5027 ms
Finish getting time 5008 ms
Finish getting time 5028 ms
Finish getting time 5004 ms
Finish getting time 5010 ms
Finish getting time 5003 ms
Finish getting time 5009 ms
Finish getting time 5023 ms
Finish getting time 5011 ms
Finish getting time 5032 ms
Finish getting time 5029 ms
Finish getting time 5011 ms
Finish getting time 5011 ms
Finish getting time 5012 ms
Finish getting time 5015 ms
Is that a 32-bit compiler? If so, try installing a 64-bit version.
Yes, the C compiler used looks a little odd.
You could also try passing v -prod run ... to see if the compiler behaves differently when using optimisation flags.
I also have this problem. In my tests, when the concurrent method is not used, the response time of each request is within 200 ms. This is my system information:
OS: macos, Mac OS X, 10.14.5, 18F132
Processor: 8 cpus, 64bit, little endian, Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
CC version: Apple LLVM version 10.0.1 (clang-1001.0.46.4)
vroot: /Users/zhylmzr/Tools/v
vexe: /Users/zhylmzr/Tools/v/v
vexe mtime: 2020-10-14 07:35:16
is vroot writable: true
V full version: V 0.1.29 c463c26.a2fba05
Git version: git version 2.26.2
Git vroot status: 0.1.29-579-ga2fba052 (18 commit(s) behind V master)
.git/config present: true
thirdparty/tcc: N/A
Could you please try:
v -prof - run <filename or path>
(And paste the output here)
This will hopefully show where the bottleneck might be
Is that a 32-bit compiler? If so, try installing a 64-bit version.
As far as I know, it is a 64-bit compiler.
You could also try passing v -prod run ... to see if the compiler behaves differently when using optimization flags
It took 5000+ ms for each request again.
Could you please try: v -prof - run
@Larpon This is my output from v -prof - run <filename>:
100 0.007ms 71ns strings__new_builder
300 0.217ms 725ns strings__Builder_write
100 0.363ms 3632ns strings__Builder_str
597 0.082ms 137ns hash__wyhash_c
348 0.031ms 90ns strconv__byte_to_lower
100 0.513ms 5133ns strconv__common_parse_uint
100 0.481ms 4808ns strconv__common_parse_uint2
100 0.626ms 6256ns strconv__common_parse_int
100 0.242ms 2416ns strconv__underscore_ok
653 0.435ms 667ns __new_array_with_default
119 0.167ms 1405ns new_array_from_c_array
554 0.152ms 275ns array_ensure_cap
300 0.016ms 53ns array_get_unsafe
204 0.011ms 54ns array_get
254 0.466ms 1834ns array_push
300 0.178ms 593ns array_push_many
50 0.006ms 126ns array_free
48 3.010ms 62705ns println
4824 3.336ms 692ns v_malloc
100 0.084ms 841ns v_realloc
1322 0.157ms 118ns vcalloc
105 0.027ms 260ns memdup
1 0.000ms 357ns builtin_init
897 0.228ms 254ns array_byte_contains
50 0.011ms 217ns fast_string_eq
499 0.042ms 85ns new_dense_array
400 0.077ms 193ns DenseArray_push
499 0.050ms 100ns new_map_1
599 0.582ms 972ns map_key_to_index
400 0.068ms 170ns map_meta_less
400 0.583ms 1458ns map_meta_greater
400 0.059ms 149ns map_ensure_extra_metas
399 1.763ms 4420ns map_set
100 0.160ms 1597ns map_get
100 0.238ms 2378ns map_exists
149 0.068ms 453ns map_keys
1797 0.935ms 520ns opt_ok2
150 0.017ms 112ns opt_none
299 0.025ms 82ns vstrlen
450 0.058ms 128ns tos
300 0.027ms 91ns tos2
3752 0.318ms 85ns tos_lit
2 0.000ms 108ns byteptr_vstring
400 0.299ms 747ns string_clone
100 0.015ms 148ns string_int
599 0.065ms 109ns string_eq
150 0.094ms 624ns string_add
100 0.021ms 211ns string_left
100 0.018ms 181ns string_right
2842 2.374ms 835ns string_substr
450 0.216ms 481ns string_index
100 0.021ms 212ns string_last_index
300 0.196ms 655ns string_index_after
300 0.046ms 154ns string_index_byte
100 0.011ms 110ns string_last_index_byte
50 0.070ms 1408ns string_contains
499 0.039ms 77ns string_starts_with
100 0.032ms 319ns string_ends_with
299 0.565ms 1890ns string_to_lower
50 0.193ms 3854ns string_find_between
200 0.033ms 166ns string_trim_space
200 1.029ms 5145ns string_trim
50 0.087ms 1737ns string_trim_left
12270 0.698ms 57ns string_at
50 0.101ms 2017ns string_all_before
50 0.041ms 820ns array_string_join
249 0.294ms 1181ns string_bytes
2 0.000ms 98ns time__now
2 0.000ms 66ns time__Time_unix_time
98 0.032ms 324ns time__ticks
2 0.000ms 88ns time__convert_ctime
2 0.000ms 66ns time__make_unix_time
2 0.035ms 17318ns time__linux_now
1 2.343ms 2342736ns net__openssl__init
1 0.004ms 3901ns os__getwd
1 0.001ms 1270ns os__init_os_args
2996 0.169ms 56ns net__urllib__should_escape
200 0.635ms 3177ns net__urllib__unescape
200 0.415ms 2075ns net__urllib__escape
100 0.006ms 64ns net__urllib__user
50 0.003ms 69ns net__urllib__Userinfo_empty
100 0.064ms 638ns net__urllib__split_by_scheme
297 0.088ms 296ns net__urllib__split
99 2.714ms 27417ns net__urllib__parse
99 2.606ms 26320ns net__urllib__parse_url
100 1.006ms 10063ns net__urllib__parse_authority
100 0.626ms 6261ns net__urllib__parse_host
100 0.276ms 2763ns net__urllib__URL_set_path
50 0.018ms 360ns net__urllib__URL_escaped_path
99 0.830ms 8388ns net__urllib__valid_optional_port
49 0.509ms 10385ns net__urllib__URL_str
50 0.072ms 1445ns net__urllib__parse_query_silent
50 0.007ms 137ns net__urllib__parse_query_values
50 0.084ms 1672ns net__urllib__URL_query
50 0.802ms 16031ns net__urllib__URL_hostname
50 0.209ms 4183ns net__urllib__URL_port
98 0.994ms 10146ns net__urllib__split_host_port
99 0.296ms 2989ns net__urllib__string_contains_ctl_byte
50 0.003ms 68ns net__urllib__new_values
50 0.485ms 9698ns net__new_socket
50 24.430ms 488596ns net__Socket_connect
50 24.960ms 499191ns net__dial
50 15.474ms 309485ns net__Socket_send
100 0.006ms 64ns net__Socket_crecv
50 5.051ms 101028ns net__Socket_close
8 0.000ms 52ns rand__util__nr_next
2 0.041ms 20484ns rand__util__time_seed_array
1 0.003ms 2834ns rand__util__time_seed_64
50 0.005ms 93ns net__http__get
50 245124.265ms 4902485297ns net__http__fetch
50 0.007ms 147ns net__http__fetch_with_method
49 1.464ms 29886ns net__http__build_url_from_fetch
49 250122.318ms 5104537099ns net__http__Request_do
50 250120.777ms 5002415535ns net__http__Request_method_and_url_to_response
50 7.413ms 148252ns net__http__parse_response
50 4.162ms 83242ns net__http__Request_build_request_headers
50 0.026ms 517ns net__http__Request_build_request_cookies_header
50 250110.930ms 5002218592ns net__http__Request_http_do
50 0.006ms 117ns net__http__Method_str
1 0.000ms 353ns rand__wyrand__WyRandRNG_seed
1 0.049ms 48769ns rand__init
1 0.004ms 4496ns rand__new_default
2 0.001ms 518ns sync__new_mutex
102 5012.239ms 49139603ns sync__Mutex_m_lock
102 0.071ms 693ns sync__Mutex_unlock
2 5012.197ms 2506098344ns sync__Waiter_wait
2 0.047ms 23632ns sync__Waiter_stop
1 0.001ms 883ns sync__new_waiter
1 0.000ms 76ns sync__new_waitgroup
100 0.156ms 1560ns sync__WaitGroup_add
50 0.158ms 3163ns sync__WaitGroup_done
1 5012.225ms 5012225085ns sync__WaitGroup_wait
50 250130.919ms 5002618388ns main__send_request
1 5018.607ms 5018607230ns main__main
@ycd please, could you update V to eab0974, then rerun the profile and post the results?
I think that what you are experiencing is a network timeout, which may not be thread related; if that is the case, you will see greater numbers for net__Socket_crecv
in the profile made with the newer V.
I am getting:
...
Finish getting time 5 ms
Finish getting time 8 ms
Finish getting time 4 ms
Finish getting time 4 ms
Finish getting time 3 ms
Finish getting time 5 ms
...
and 98 83.446ms 851489ns net__Socket_crecv
Without profiling on, executing the whole program, after compiling it first, takes ~40ms usually.
My local webserver is nginx/1.10.3, serving a static index.html page of ~100 bytes. The program is: https://gist.github.com/spytheman/65702c86cc6b50abd1d77a6363427a24 , v doctor:
OS: linux, Ubuntu 16.04.7 LTS
Processor: 4 cpus, 64bit, little endian, Intel(R) Core(TM) i3-3225 CPU @ 3.30GHz
CC version: Ubuntu clang version 10.0.1-++20200708124938+ef32c611aa2-1~exp1~20200707225535.189
vroot: /v/pv
vexe: /v/pv/v
vexe mtime: 2020-10-15 23:17:40
is vroot writable: true
V full version: V 0.1.29 eab0974
env VFLAGS: "-cc /var/tmp/tcc/bin/tcc -cflags -bt10"
Git version: git version 2.15.0
Git vroot status: latest-commit-27-geab0974e
.git/config present: true
/var/tmp/tcc status: master 5e73982e
thirdparty/tcc: N/A
@spytheman Thanks for the latest commit, I updated my V version to V 0.1.29 eab0974. Here is my latest profiling; it indeed made a huge difference in net__Socket_crecv:
100 0.011ms 113ns strings__new_builder
300 0.308ms 1028ns strings__Builder_write
99 0.376ms 3793ns strings__Builder_str
599 0.085ms 141ns hash__wyhash_c
350 0.038ms 107ns strconv__byte_to_lower
100 0.614ms 6145ns strconv__common_parse_uint
100 0.575ms 5746ns strconv__common_parse_uint2
99 0.745ms 7525ns strconv__common_parse_int
100 0.280ms 2795ns strconv__underscore_ok
653 0.474ms 726ns __new_array_with_default
119 0.254ms 2136ns new_array_from_c_array
553 0.205ms 371ns array_ensure_cap
300 0.025ms 84ns array_get_unsafe
204 0.019ms 94ns array_get
253 0.188ms 744ns array_push
300 0.247ms 823ns array_push_many
49 0.009ms 192ns array_free
49 8.618ms 175873ns println
4842 70.303ms 14519ns v_malloc
100 0.108ms 1084ns v_realloc
1323 0.148ms 112ns vcalloc
105 0.044ms 417ns memdup
1 0.001ms 584ns builtin_init
895 0.233ms 260ns array_byte_contains
50 0.011ms 222ns fast_string_eq
499 0.052ms 105ns new_dense_array
400 0.079ms 198ns DenseArray_push
499 0.055ms 110ns new_map_1
599 0.485ms 809ns map_key_to_index
400 0.098ms 245ns map_meta_less
400 0.218ms 545ns map_meta_greater
399 0.067ms 167ns map_ensure_extra_metas
399 1.441ms 3611ns map_set
99 0.167ms 1689ns map_get
100 0.110ms 1099ns map_exists
149 0.107ms 721ns map_keys
1794 0.297ms 166ns opt_ok2
149 0.021ms 143ns opt_none
299 0.032ms 106ns vstrlen
449 0.063ms 139ns tos
299 0.032ms 107ns tos2
3766 0.384ms 102ns tos_lit
2 0.001ms 268ns byteptr_vstring
399 0.291ms 730ns string_clone
100 0.020ms 205ns string_int
600 0.079ms 132ns string_eq
149 0.153ms 1029ns string_add
100 0.018ms 176ns string_left
100 0.018ms 183ns string_right
2849 2.256ms 792ns string_substr
450 0.221ms 491ns string_index
100 0.034ms 344ns string_last_index
299 0.198ms 663ns string_index_after
300 0.072ms 239ns string_index_byte
100 0.016ms 157ns string_last_index_byte
49 0.066ms 1347ns string_contains
500 0.058ms 116ns string_starts_with
100 0.050ms 501ns string_ends_with
299 0.658ms 2202ns string_to_lower
49 0.217ms 4434ns string_find_between
200 0.031ms 155ns string_trim_space
199 0.960ms 4825ns string_trim
50 0.110ms 2199ns string_trim_left
12356 2.189ms 177ns string_at
49 0.104ms 2132ns string_all_before
50 0.062ms 1241ns array_string_join
250 0.312ms 1249ns string_bytes
2 0.000ms 222ns time__now
2 0.000ms 140ns time__Time_unix_time
98 0.038ms 390ns time__ticks
2 0.001ms 253ns time__convert_ctime
2 0.000ms 205ns time__make_unix_time
2 0.106ms 53226ns time__linux_now
1 5.347ms 5347226ns net__openssl__init
1 0.012ms 11894ns os__getwd
1 0.004ms 4110ns os__init_os_args
2989 0.258ms 86ns net__urllib__should_escape
200 2.264ms 11322ns net__urllib__unescape
200 0.530ms 2652ns net__urllib__escape
100 0.010ms 97ns net__urllib__user
50 0.005ms 108ns net__urllib__Userinfo_empty
100 0.104ms 1042ns net__urllib__split_by_scheme
300 0.157ms 524ns net__urllib__split
100 6.609ms 66091ns net__urllib__parse
100 6.432ms 64323ns net__urllib__parse_url
100 2.811ms 28108ns net__urllib__parse_authority
100 2.265ms 22651ns net__urllib__parse_host
100 0.433ms 4334ns net__urllib__URL_set_path
50 0.033ms 651ns net__urllib__URL_escaped_path
100 0.413ms 4129ns net__urllib__valid_optional_port
50 0.697ms 13938ns net__urllib__URL_str
50 0.111ms 2229ns net__urllib__parse_query_silent
50 0.010ms 208ns net__urllib__parse_query_values
50 0.129ms 2574ns net__urllib__URL_query
50 0.370ms 7395ns net__urllib__URL_hostname
50 0.565ms 11295ns net__urllib__URL_port
100 0.907ms 9070ns net__urllib__split_host_port
100 1.455ms 14552ns net__urllib__string_contains_ctl_byte
50 0.006ms 115ns net__urllib__new_values
48 2.979ms 62062ns net__new_socket
49 38.955ms 794995ns net__Socket_connect
49 42.046ms 858081ns net__dial
50 16.533ms 330661ns net__Socket_send
99 250117.027ms 2526434613ns net__Socket_crecv
50 5.371ms 107422ns net__Socket_close
8 0.001ms 157ns rand__util__nr_next
2 0.125ms 62645ns rand__util__time_seed_array
1 0.009ms 9247ns rand__util__time_seed_64
50 0.010ms 209ns net__http__get
50 240181.551ms 4803631014ns net__http__fetch
50 0.071ms 1426ns net__http__fetch_with_method
50 4.783ms 95655ns net__http__build_url_from_fetch
50 250195.549ms 5003910979ns net__http__Request_do
50 250193.304ms 5003866073ns net__http__Request_method_and_url_to_response
49 7.380ms 150604ns net__http__parse_response
50 1.112ms 22243ns net__http__Request_build_request_headers
49 0.039ms 800ns net__http__Request_build_request_cookies_header
50 245181.817ms 4903636343ns net__http__Request_http_do
49 0.008ms 164ns net__http__Method_str
1 0.001ms 1168ns rand__wyrand__WyRandRNG_seed
1 0.133ms 132794ns rand__init
1 0.013ms 13472ns rand__new_default
2 0.003ms 1324ns sync__new_mutex
102 5008.914ms 49106996ns sync__Mutex_m_lock
102 0.049ms 479ns sync__Mutex_unlock
2 5008.864ms 2504432118ns sync__Waiter_wait
2 0.019ms 9584ns sync__Waiter_stop
1 0.003ms 2664ns sync__new_waiter
1 0.000ms 253ns sync__new_waitgroup
100 0.176ms 1764ns sync__WaitGroup_add
50 0.160ms 3207ns sync__WaitGroup_done
1 5008.867ms 5008867377ns sync__WaitGroup_wait
50 250282.929ms 5005658570ns main__send_request
1 5030.458ms 5030458137ns main__main
I suspected overhead from spawning the 50 threads, but it looks like crecv is the culprit. That code is out of our reach (it is a C socket API call), so something is fishy with your network.
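One benign way a recv can block for seconds without any network fault is worth ruling out here: a client that reads the response body by waiting for EOF will hang until the server's keep-alive idle timeout finally closes the socket. A small illustration in Python, with a hypothetical 0.5 s idle timeout standing in for a server's multi-second keep-alive window (all names and values here are illustrative, not from the V internals):

```python
import socket
import threading
import time

IDLE_TIMEOUT = 0.5  # stand-in for a server's multi-second keep-alive timeout


def serve(listener):
    while True:
        try:
            conn, _ = listener.accept()
        except OSError:
            return
        req = conn.recv(4096).decode()
        # Minimal, well-formed HTTP response with an explicit length.
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello")
        if "connection: close" in req.lower():
            conn.close()  # client sees EOF immediately
        else:
            time.sleep(IDLE_TIMEOUT)  # keep-alive: hold the socket open
            conn.close()


listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,), daemon=True).start()


def timed_get(extra_header=""):
    s = socket.create_connection(("127.0.0.1", port))
    s.sendall(f"GET / HTTP/1.1\r\nHost: x{extra_header}\r\n\r\n".encode())
    start = time.perf_counter()
    while s.recv(4096):  # naive client: read until EOF
        pass
    s.close()
    return (time.perf_counter() - start) * 1000


keepalive_ms = timed_get()
close_ms = timed_get("\r\nConnection: close")
print(f"keep-alive read: {keepalive_ms:.0f} ms, "
      f"with Connection: close: {close_ms:.0f} ms")
```

If something like this is happening, the wall-clock time ends up dominated by the server's idle timeout even though the response bytes arrived instantly, which would match the suspiciously uniform 5000 ms figures.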
@Larpon, thanks for mentioning the C socket API call. I was running an ASGI HTTP server with Python & Uvicorn:
from fastapi import FastAPI

app = FastAPI()

@app.get("/hello")
async def hello():
    return "hello"
I created another HTTP server in Go:
package main

import (
	"fmt"
	"net/http"
)

func hello(w http.ResponseWriter, req *http.Request) {
	fmt.Fprintf(w, "hello\n")
}

func main() {
	http.HandleFunc("/hello", hello)
	http.ListenAndServe(":8000", nil)
}
Here are the results from the Go server:
Finish getting time 3 ms
Finish getting time 4 ms
Finish getting time 2 ms
Finish getting time 4 ms
Finish getting time 3 ms
Finish getting time 4 ms
Finish getting time 13 ms
Finish getting time 7 ms
Finish getting time 3 ms
...
Profiling output when hitting the Go server:
100 0.020ms 199ns strings__new_builder
300 2.446ms 8153ns strings__Builder_write
100 2.191ms 21912ns strings__Builder_str
600 0.088ms 146ns hash__wyhash_c
350 0.051ms 146ns strconv__byte_to_lower
100 0.780ms 7805ns strconv__common_parse_uint
100 0.737ms 7368ns strconv__common_parse_uint2
100 0.924ms 9237ns strconv__common_parse_int
100 0.384ms 3838ns strconv__underscore_ok
654 2.412ms 3688ns __new_array_with_default
119 0.356ms 2993ns new_array_from_c_array
555 0.412ms 742ns array_ensure_cap
299 0.048ms 161ns array_get_unsafe
204 0.037ms 182ns array_get
255 0.328ms 1285ns array_push
300 2.307ms 7690ns array_push_many
50 0.017ms 335ns array_free
50 11.686ms 233729ns println
4842 244.891ms 50576ns v_malloc
100 0.225ms 2247ns v_realloc
1323 0.231ms 175ns vcalloc
105 0.091ms 870ns memdup
1 0.001ms 842ns builtin_init
899 0.229ms 255ns array_byte_contains
50 0.011ms 210ns fast_string_eq
500 0.077ms 154ns new_dense_array
400 0.082ms 204ns DenseArray_push
500 0.552ms 1103ns new_map_1
600 0.507ms 845ns map_key_to_index
400 0.062ms 156ns map_meta_less
400 0.221ms 552ns map_meta_greater
400 0.058ms 146ns map_ensure_extra_metas
400 5.116ms 12790ns map_set
100 0.148ms 1480ns map_get
100 0.153ms 1534ns map_exists
150 0.208ms 1384ns map_keys
1793 0.413ms 231ns opt_ok2
150 0.029ms 193ns opt_none
300 0.050ms 167ns vstrlen
450 0.262ms 581ns tos
300 0.054ms 181ns tos2
3767 0.576ms 153ns tos_lit
2 0.001ms 260ns byteptr_vstring
400 3.987ms 9967ns string_clone
100 0.020ms 197ns string_int
600 0.115ms 192ns string_eq
150 2.728ms 18186ns string_add
100 0.018ms 185ns string_left
100 0.020ms 203ns string_right
2848 26.289ms 9231ns string_substr
450 0.277ms 616ns string_index
100 0.064ms 638ns string_last_index
300 0.270ms 900ns string_index_after
300 0.141ms 470ns string_index_byte
100 0.029ms 293ns string_last_index_byte
50 0.061ms 1212ns string_contains
500 0.111ms 222ns string_starts_with
100 0.094ms 936ns string_ends_with
300 2.750ms 9168ns string_to_lower
50 0.251ms 5012ns string_find_between
200 0.029ms 147ns string_trim_space
200 0.955ms 4777ns string_trim
50 0.193ms 3861ns string_trim_left
12375 1.893ms 153ns string_at
50 0.140ms 2798ns string_all_before
50 0.582ms 11633ns array_string_join
250 0.302ms 1206ns string_bytes
2 0.000ms 242ns time__now
2 0.000ms 202ns time__Time_unix_time
100 0.038ms 385ns time__ticks
2 0.000ms 200ns time__convert_ctime
2 0.001ms 250ns time__make_unix_time
2 0.121ms 60726ns time__linux_now
1 6.781ms 6780551ns net__openssl__init
1 0.011ms 11283ns os__getwd
1 0.004ms 4028ns os__init_os_args
2995 0.573ms 191ns net__urllib__should_escape
200 2.190ms 10950ns net__urllib__unescape
200 2.816ms 14080ns net__urllib__escape
100 0.019ms 185ns net__urllib__user
50 0.011ms 211ns net__urllib__Userinfo_empty
100 0.239ms 2391ns net__urllib__split_by_scheme
300 0.261ms 869ns net__urllib__split
100 31.106ms 311059ns net__urllib__parse
100 30.808ms 308078ns net__urllib__parse_url
100 22.610ms 226097ns net__urllib__parse_authority
100 2.177ms 21771ns net__urllib__parse_host
100 2.567ms 25669ns net__urllib__URL_set_path
50 2.490ms 49790ns net__urllib__URL_escaped_path
100 1.180ms 11799ns net__urllib__valid_optional_port
50 4.282ms 85644ns net__urllib__URL_str
50 0.225ms 4492ns net__urllib__parse_query_silent
50 0.019ms 381ns net__urllib__parse_query_values
50 0.289ms 5782ns net__urllib__URL_query
50 22.091ms 441818ns net__urllib__URL_hostname
50 1.003ms 20067ns net__urllib__URL_port
100 23.044ms 230440ns net__urllib__split_host_port
100 0.918ms 9184ns net__urllib__string_contains_ctl_byte
50 0.012ms 230ns net__urllib__new_values
49 2.229ms 45497ns net__new_socket
50 18.532ms 370630ns net__Socket_connect
50 21.155ms 423106ns net__dial
50 14.082ms 281634ns net__Socket_send
100 65.397ms 653966ns net__Socket_crecv
50 6.893ms 137856ns net__Socket_close
8 0.001ms 186ns rand__util__nr_next
2 0.141ms 70294ns rand__util__time_seed_array
1 0.009ms 9101ns rand__util__time_seed_64
50 0.010ms 199ns net__http__get
50 266.779ms 5335579ns net__http__fetch
50 0.019ms 377ns net__http__fetch_with_method
50 27.539ms 550777ns net__http__build_url_from_fetch
50 214.508ms 4290159ns net__http__Request_do
50 210.072ms 4201449ns net__http__Request_method_and_url_to_response
50 16.098ms 321950ns net__http__parse_response
50 52.715ms 1054298ns net__http__Request_build_request_headers
50 0.078ms 1561ns net__http__Request_build_request_cookies_header
50 166.957ms 3339145ns net__http__Request_http_do
50 0.011ms 219ns net__http__Method_str
1 0.001ms 1009ns rand__wyrand__WyRandRNG_seed
1 0.163ms 162602ns rand__init
1 0.013ms 13011ns rand__new_default
2 0.003ms 1285ns sync__new_mutex
102 12.425ms 121811ns sync__Mutex_m_lock
102 0.046ms 450ns sync__Mutex_unlock
2 12.371ms 6185545ns sync__Waiter_wait
2 0.012ms 5908ns sync__Waiter_stop
1 0.003ms 2642ns sync__new_waiter
1 0.000ms 227ns sync__new_waitgroup
100 0.179ms 1793ns sync__WaitGroup_add
50 0.137ms 2734ns sync__WaitGroup_done
1 12.374ms 12373749ns sync__WaitGroup_wait
50 572.858ms 11457150ns main__send_request
1 47.404ms 47404316ns main__main
But I'm curious why sending a request to a server running under the ASGI specification causes 5000+ ms. Is this an issue that has to be handled on the Uvicorn side, or is it even an issue at all?
Request throttling or DDOS prevention mechanisms?
I'm not sure.
Are your Go version and V version running against the same localhost server when benchmarking?
Sorry for such a late response @Larpon. This issue has already gone further than I expected, so I am closing it for now.
I already asked the question on Stack Overflow but I haven't got any answer.
Use case
I am planning to write a scraper that sends approximately 2500 requests per second. Then I came up with this.
Output
It's deadly slow right now, and I'm pretty sure I'm doing this wrong, but I could not find a better way.
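The benchmark pattern discussed throughout this thread (50 concurrent requests, each individually timed, joined before exit, as the sync__WaitGroup and main__send_request entries in the profiles suggest) can be sketched against a local server like this. The sketch is in Python, not V, and every name in it is illustrative:

```python
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer


class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "5")
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, *args):
        # Silence the default per-request logging.
        pass


# A threaded server, so the 50 concurrent clients are not serialized.
server = ThreadingHTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/hello"


def send_request(_):
    # One GET, reporting elapsed milliseconds, like the V program's output.
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000


# 50 workers, one request each; pool.map joins them all, like a WaitGroup.
with ThreadPoolExecutor(max_workers=50) as pool:
    timings = list(pool.map(send_request, range(50)))

for t in timings:
    print(f"Finish getting time {t:.0f} ms")
server.shutdown()
```

Against a healthy localhost server, all 50 timings should stay far below the 5000 ms figures reported above; a uniform multi-second floor in such a run points at connection handling rather than raw throughput.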