Closed: ensc closed this issue 2 years ago.
Hey @ensc, thanks for this report. I've seen your other comment, which suggests using localhost. I've just pushed a fix for it on the devel branch, which uses dns:///localhost:0 for the gRPC bind. Could you try it out and see whether it fixes your problem?
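For context, a minimal sketch of what binding a gRPC server to the loopback interface with an OS-assigned port looks like with the gRPC C++ ServerBuilder API. This is illustrative only; Bear's actual service wiring may differ, and the insecure credentials here are an assumption:

#include <grpcpp/grpcpp.h>

int main() {
  grpc::ServerBuilder builder;
  int selected_port = 0;
  // Port 0 asks the OS for a free port; gRPC writes the chosen port
  // back into selected_port once the server has started.
  builder.AddListeningPort("localhost:0",
                           grpc::InsecureServerCredentials(),
                           &selected_port);
  auto server = builder.BuildAndStart();
  // A client would then dial the target "dns:///localhost:<selected_port>".
  server->Shutdown();
  return 0;
}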
Last time I met this problem using bear 3.0.16 compiled on Ubuntu 20, and I solved it by setting no_proxy=localhost,127.0.0.1. But today I met this problem again using bear 3.0.16 compiled on Ubuntu 18, and setting no_proxy=localhost,127.0.0.1 does not solve it.
[14:49:15.751585, wr, 26663, ppid: 26661] wrapper: 3.0.16
[14:49:15.751610, wr, 26663, ppid: 26661] arguments: ["/usr/local/lib/x86_64-linux-gnu/bear/wrapper", "--destination", "127.0.0.1:38313", "--verbose", "--execute", "/usr/bin/sort", "--", "sort", "-g"]
[14:49:15.751644, wr, 26663, ppid: 26661] environment: [******]
[14:49:15.751660, wr, 26663, ppid: 26661] arguments parsed: {program: /usr/local/lib/x86_64-linux-gnu/bear/wrapper, arguments: [{--: [sort, -g]}, {--destination: [127.0.0.1:38313]}, {--execute: [/usr/bin/sort]}, {--verbose: []}]}
[14:49:15.752428, wr, 26663, ppid: 26661] gRPC call requested: supervise::Supervisor::Resolve
[14:49:15.753485, ic, 25130] trying to resolve for library: /usr/bin/sort
[14:49:15.753868, wr, 26663, ppid: 26661] gRPC call [Resolve] finished: true
[14:49:15.754191, wr, 26663, ppid: 26661] Process spawned. [pid: 26667, command: [sort, -g]]
[14:49:15.754249, wr, 26663, ppid: 26661] gRPC call requested: supervise::Interceptor::Register
[14:49:15.755253, el, 26667] lib.cc; on_load
[14:49:15.755491, wr, 26663, ppid: 26661] gRPC call [Register] finished: true
[14:49:15.755574, wr, 26663, ppid: 26661] Process wait requested. [pid: 26667]
[14:49:15.755731, wr, 26663, ppid: 26661] Process wait request: done. [pid: 26667]
[14:49:15.755737, wr, 26663, ppid: 26661] gRPC call requested: supervise::Interceptor::Register
[14:49:15.756213, wr, 26663, ppid: 26661] gRPC call [Register] finished: true
[14:49:15.757012, wr, 26663, ppid: 26661] succeeded with: 0
[14:49:15.757039, el, 26663] lib.cc; on_unload
[14:49:15.757451, el, 26661] lib.cc; on_unload
[14:49:15.757741, el, 26660] lib.cc; on_unload
[14:49:15.757966, wr, 26656, ppid: 25865] Process wait request: done. [pid: 26660]
[14:49:15.757975, wr, 26656, ppid: 25865] gRPC call requested: supervise::Interceptor::Register
[14:49:15.758460, wr, 26656, ppid: 25865] gRPC call [Register] finished: true
[14:49:15.759248, wr, 26656, ppid: 25865] succeeded with: 0
[14:49:15.759281, el, 26656] lib.cc; on_unload
[14:49:15.767293, ic, 25130] trying to resolve for library: /bin/bash
[14:49:15.774416, el, 25865] lib.cc; on_unload
14:49:15 Failed to parse make line: "[14:49:15.763231, el, 26668] lib.cc; execvp file: /bin/bash"
[14:49:15.800855, wr, 25758, ppid: 25754] Process wait request: done. [pid: 25815]
[14:49:15.800888, wr, 25758, ppid: 25754] gRPC call requested: supervise::Interceptor::Register
[14:49:15.801649, wr, 25758, ppid: 25754] gRPC call [Register] finished: true
[14:49:15.802669, wr, 25758, ppid: 25754] succeeded with: 1
[14:49:15.802713, el, 25758] lib.cc; on_unload
[14:49:15.803312, wr, 25754, ppid: 25753] Process wait request: done. [pid: 25758]
[14:49:15.803334, wr, 25754, ppid: 25753] gRPC call requested: supervise::Interceptor::Register
[14:49:15.803871, wr, 25754, ppid: 25753] gRPC call [Register] finished: true
[14:49:15.804803, wr, 25754, ppid: 25753] succeeded with: 1
[14:49:15.804846, el, 25754] lib.cc; on_unload
[14:49:15.805366, el, 25753] lib.cc; on_unload
[14:49:15.807829, el, 26675] lib.cc; execve path: /bin/pwd
[14:49:15.809448, el, 26675] lib.cc; on_load
Add some information: bear 2.3.11 works fine.
Version 3.x is using gRPC; earlier versions did not depend on it.
The logs you've been providing show no error related to gRPC connections. What am I missing?
Also, the latest master (which will have the version number 3.0.17) has the fix to use localhost instead of 127.0.0.1.
Later, I used compdb and the log was overwritten. In addition, I found that compdb lost a large number of compilation units. For now I've decided to use bear 2.3.11 to get the work done first.
Closing due to inactivity. The fix for this is on the master branch now. It will be released as 3.0.17 this month.
It seems that the problem has not been solved in 3.0.17:
┌──[tr4v3ler@ubuntu]-[~/data/tmp]
└─$ bear --version
bear 3.0.17
┌──[tr4v3ler@ubuntu]-[~/data/tmp]
└─$ g++ test.cc -std=c++17 -o test
┌──[tr4v3ler@ubuntu]-[~/data/tmp]
└─$ bear -- g++ test.cc -std=c++17 -o test
wrapper: failed with: gRPC call failed: failed to connect to all addresses
┌──[tr4v3ler@ubuntu]-[~/data/tmp]
└─$ unset http_proxy https_proxy 1 ↵
┌──[tr4v3ler@ubuntu]-[~/data/tmp]
└─$ bear -- g++ test.cc -std=c++17 -o test
E1115 21:09:42.449045364 35684 ev_epollex_linux.cc:515] Error shutting down fd 11. errno: 9
E1115 21:09:44.455779580 35691 ev_epollex_linux.cc:515] Error shutting down fd 11. errno: 9
E1115 21:09:46.456795170 35691 ev_epollex_linux.cc:515] Error shutting down fd 12. errno: 9
E1115 21:09:48.466992123 35699 ev_epollex_linux.cc:515] Error shutting down fd 11. errno: 9
E1115 21:09:50.467937098 35699 ev_epollex_linux.cc:515] Error shutting down fd 12. errno: 9
E1115 21:09:52.473912816 35705 ev_epollex_linux.cc:515] Error shutting down fd 11. errno: 9
E1115 21:09:54.474887079 35705 ev_epollex_linux.cc:515] Error shutting down fd 12. errno: 9
E1115 21:09:55.471382782 35684 ev_epollex_linux.cc:515] Error shutting down fd 14. errno: 9
E1115 21:10:05.512145709 35711 ev_epollex_linux.cc:515] Error shutting down fd 13. errno: 9
E1115 21:10:07.493078909 35711 ev_epollex_linux.cc:515] Error shutting down fd 12. errno: 9
@tr4v3ler, what does this command print for you: host localhost?
host localhost
localhost.***.com has address 127.0.0.1
"*" indicates some sensitive information.
Ok, I will try to reproduce it. Could you tell me more about your setup? (What architecture does your CPU have? What OS do you run? What distribution? In the case of Linux, can you show the ip a output section for the loopback device? What is your proxy config? And anything else you think might be relevant.)
Environment:
ip a:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether e8:4d:d0:**:**:** brd ff:ff:ff:ff:ff:ff
inet 10.176.**.***/** brd 10.176.**.255 scope global enp1s0f0
valid_lft forever preferred_lft forever
inet6 fe80::ea4d:d0ff:feb2:9e61/64 scope link
valid_lft forever preferred_lft forever
3: enp1s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether e8:4d:d0:**:**:** brd ff:ff:ff:ff:ff:ff
7: tap0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
link/ether 9a:60:1c:fa:1e:e4 brd ff:ff:ff:ff:ff:ff
inet 192.168.2.1/24 brd 192.168.2.255 scope global tap0
valid_lft forever preferred_lft forever
inet6 fe80::9860:1cff:fefa:1ee4/64 scope link
valid_lft forever preferred_lft forever
Describe the bug
bear fails here with wrapper: failed with: gRPC call failed: failed to connect to all addresses, because no_proxy does not contain 127.0.0.1 (but only localhost, since pure IP addresses are rarely used with HTTP).
Would it be possible to set the GRPC_ARG_ENABLE_HTTP_PROXY channel parameter, which has been added to recent gRPC?
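For illustration, a hedged sketch of how a client channel can opt out of HTTP proxy handling through that channel argument in the gRPC C++ API. The target address is only an example (the port comes from the log above); whether Bear should disable the proxy unconditionally is a separate question:

#include <grpcpp/grpcpp.h>

int main() {
  grpc::ChannelArguments args;
  // Setting grpc.enable_http_proxy to 0 makes this channel ignore
  // http_proxy/https_proxy/no_proxy entirely.
  args.SetInt(GRPC_ARG_ENABLE_HTTP_PROXY, 0);
  auto channel = grpc::CreateCustomChannel(
      "dns:///localhost:38313",  // example target only
      grpc::InsecureChannelCredentials(),
      args);
  return 0;
}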