asemiankevich closed this issue 5 years ago.
Will check on them.
So the faucet container is running:
root@faucet2:~# sudo netstat -lpnt | grep 3001
sudo: unable to resolve host faucet2
tcp6 0 0 :::3001 :::* LISTEN 18055/docker-proxy
root@faucet2:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2edd04d942eb trinitronx/python-simplehttpserver "python -m SimpleHTTP" 8 hours ago Up 8 hours 0.0.0.0:80->8080/tcp http
cd3b5ba44190 statusim/faucet "/faucet -network=rin" 5 months ago Up 2 weeks 0.0.0.0:3001->3001/tcp faucet_faucet_1
e49086ea4db7 ethereum/client-go:v1.7.2 "geth --rinkeby --syn" 17 months ago Up 2 weeks 8545-8546/tcp, 0.0.0.0:30303->30303/tcp, 0.0.0.0:30303->30303/udp faucet_geth_1
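For a quick sanity check it is also worth hitting the faucet endpoint from the host itself, bypassing Cloudflare and the port mapping entirely (a sketch, assuming it answers /faucet-info the same way the public URL does):
$ curl -s http://localhost:3001/faucet-info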
I do get a redirect from http://faucet.status.im:
$ curl -skv http://faucet.status.im
> GET / HTTP/1.1
> Host: faucet.status.im
> User-Agent: curl/7.64.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Date: Mon, 18 Mar 2019 11:10:10 GMT
< Transfer-Encoding: chunked
< Connection: keep-alive
< Cache-Control: max-age=3600
< Expires: Mon, 18 Mar 2019 12:10:10 GMT
< Location: https://faucet.status.im/
< Server: cloudflare
< CF-RAY: 4b96c2349ba8cc91-WAW
<
* Connection #0 to host faucet.status.im left intact
But the connection to the HTTPS port times out:
$ curl --max-time 3 -skv https://faucet.status.im/
...
* Connection state changed (MAX_CONCURRENT_STREAMS == 256)!
* Operation timed out after 3000 milliseconds with 0 bytes received
* stopped the pause stream!
* Connection #0 to host faucet.status.im left intact
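To separate a Cloudflare problem from an origin problem, it also helps to hit the origin IP directly, skipping the proxy (a sketch; the IP and port are the ones listed further down in this issue):
$ curl --max-time 3 -sv http://51.15.45.169:3001/faucet-info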
Oh, I see what is happening: I currently can't SSH into 51.15.45.169:
$ ssh -o ConnectTimeout=3 root@51.15.45.169
ssh: connect to host 51.15.45.169 port 22: Connection timed out
$ ping -c 3 51.15.45.169
PING 51.15.45.169 (51.15.45.169) 56(84) bytes of data.
--- 51.15.45.169 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 53ms
And that is where both faucets are located:
$ ./fqdns.py | grep faucet
0f83254f854d8274cdcd99cd851958e7 P A faucet-rinkeby.status.im 51.15.45.169
83202f10979097b3c561b623760797ad P A faucet.status.im 51.15.45.169
It appears that host is down.
@jakubgs I googled it a bit and it seems this URL is no longer supported. @mandrigin @3esmit maybe you know something.
It appears the host has been stopped:
$ scw ps -a --no-trunc --filter="name=faucet1"
SERVER ID IMAGE ZONE CREATED STATUS PORTS NAME COMMERCIAL TYPE
abd8a9ad-b933-4211-a9f3-5d0ce82a983f Docker_1_12_2 22 months stopped 51.15.45.169 faucet1 VC1S
Interestingly enough I can't see the region for the server, so I cannot inspect it.
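One workaround (a rough sketch, with the region names assumed) would be to simply query every region until one of them knows the server:
$ for r in par1 ams1; do echo "== $r"; scw --region $r ps -a --filter name=faucet1; done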
I've managed to start it with:
$ scw --region ams1 start server:abd8a9ad-b933-4211-a9f3-5d0ce82a983f
server:abd8a9ad-b933-4211-a9f3-5d0ce82a983f
Despite not being able to inspect it:
$ scw --region ams1 inspect server:abd8a9ad-b933-4211-a9f3-5d0ce82a983f
ERRO[0001] Unable to resolve identifier server:abd8a9ad-b933-4211-a9f3-5d0ce82a983f
FATA[0001] cannot execute 'inspect': at least 1 item failed to be inspected
Interestingly, now that it's in the starting state, I can inspect it:
$ scw --region ams1 --sensitive inspect server:abd8a9ad-b933-4211-a9f3-5d0ce82a983f
[{
"arch": "x86_64",
"id": "abd8a9ad-b933-4211-a9f3-5d0ce82a983f",
"name": "faucet1",
"creation_date": "2017-05-08T21:25:50.837681+00:00",
"modification_date": "2019-03-18T11:55:04.028913+00:00",
"image": {
"id": "b29a1c4e-43ed-4457-95f5-044ab7806e02",
"name": "Docker 1.12.2",
"creation_date": "2016-10-26T14:43:42.980247+00:00",
"modification_date": "2016-10-26T14:43:42.980247+00:00",
"root_volume": {
"id": "2fea886f-ff7f-4cd1-9342-aa099ab4a553",
"size": 50000000000,
"name": "x86_64-docker-latest-2016-10-24_15:48",
"volume_type": "l_ssd"
},
"public": true,
"default_bootscript": {
"bootcmdargs": "LINUX_COMMON ip=:::::eth0: boot=local",
"initrd": "http://169.254.42.24/initrd/initrd-Linux-x86_64-v3.12.3.gz",
"kernel": "http://169.254.42.24/kernel/x86_64-4.5.7-docker-4/vmlinuz-4.5.7-docker-4",
"architecture": "x86_64",
"id": "aa9f03c9-5d0e-42bb-82b1-0a73e29501a0",
"organization": "11111111-1111-4111-8111-111111111111",
"title": "x86_64 4.5.7 docker #4"
},
"organization": "abaeb1aa-760b-4391-aeab-c0622be90abf",
"arch": "x86_64"
},
"dynamic_ip_required": false,
"public_ip": {
"id": "77520e46-ac20-4474-bf8f-b74fe14d0d1c",
"address": "51.15.45.169",
"dynamic": false
},
"state": "running",
"boot_type": "bootscript",
"state_detail": "booted",
"private_ip": "10.20.203.27",
"bootscript": {
"bootcmdargs": "LINUX_COMMON ip=:::::eth0: boot=local",
"initrd": "http://169.254.42.24/initrd/initrd-Linux-x86_64-v3.12.3.gz",
"kernel": "http://169.254.42.24/kernel/x86_64-4.5.7-docker-4/vmlinuz-4.5.7-docker-4",
"architecture": "x86_64",
"id": "aa9f03c9-5d0e-42bb-82b1-0a73e29501a0",
"organization": "11111111-1111-4111-8111-111111111111",
"title": "x86_64 4.5.7 docker #4"
},
"hostname": "faucet1",
"volumes": {
"0": {
"id": "bb65cb9f-9800-47ab-a0c2-24ac1cbefb68",
"size": 50000000000,
"creation_date": "2017-05-08T21:25:50.837681+00:00",
"modification_date": "2017-05-08T21:25:50.837681+00:00",
"organization": "745947b8-bbc1-4939-a14d-b682a90b1610",
"name": "x86_64-docker-latest-2016-10-24_15:48",
"server": {
"id": "abd8a9ad-b933-4211-a9f3-5d0ce82a983f",
"name": "faucet1"
},
"volume_type": "l_ssd",
"export_uri": "device://dev/vda"
}
},
"security_group": {
"id": "d73f2e23-1e3f-436e-937a-87d655d04480",
"name": "Default security group"
},
"organization": "745947b8-bbc1-4939-a14d-b682a90b1610",
"commercial_type": "VC1S",
"location": {
"platform_id": "22",
"cluster_id": "3",
"hypervisor_id": "201",
"node_id": "14",
"zone_id": "ams1"
},
"dns_public": "abd8a9ad-b933-4211-a9f3-5d0ce82a983f.pub.cloud.scaleway.com",
"dns_private": "abd8a9ad-b933-4211-a9f3-5d0ce82a983f.priv.cloud.scaleway.com"
}]
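Since inspect returns a JSON array, the fields that actually matter here can be pulled out with jq (field names taken from the output above):
$ scw --region ams1 inspect server:abd8a9ad-b933-4211-a9f3-5d0ce82a983f | jq '.[0] | {state, ip: .public_ip.address, zone: .location.zone_id}'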
I was also unable to extract any info using scw logs, so I don't know why it was stopped.
Anyway, the server is now up:
$ scw ps -a --no-trunc -f name=faucet1
SERVER ID IMAGE ZONE CREATED STATUS PORTS NAME COMMERCIAL TYPE
abd8a9ad-b933-4211-a9f3-5d0ce82a983f Docker_1_12_2 ams1 22 months running 51.15.45.169 faucet1 VC1S
And one of the endpoints is back: http://51.15.45.169:3001/faucet-info. But that's not right either, because the proper address should be https://faucet.status.im/faucet-info, yet I can't seem to find a config for that endpoint on the host. What the fuck happened...
So the 51.15.45.169 host used to contain the Nginx configuration for two endpoints.
I revived it, and there are a few weird things about it. For example, the container running there is different from the one on the other faucet hosts:
root@faucet1:/etc# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d8f9cde87b34 farazdagi/faucet "supervisord --config" 17 minutes ago Up 16 minutes 0.0.0.0:3001->3001/tcp, 0.0.0.0:8545->8545/tcp, 0.0.0.0:30303->30303/tcp faucet
It's using farazdagi/faucet instead of the statusim/faucet image used on all the other hosts, and there is no ethereum/client-go container either.
There is also no /etc/nginx directory and no Nginx installed, which is weird.
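For the record, the proxy config that presumably used to live there would look roughly like this (a hypothetical reconstruction, not the recovered file; the server_name and upstream port are taken from this issue):
# /etc/nginx/sites-enabled/faucet.conf -- hypothetical sketch
server {
    listen 80;
    server_name faucet.status.im;

    location / {
        # the faucet container publishes port 3001 on the host
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}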
I also found that 51.15.60.23 is running a weird trinitronx/python-simplehttpserver container:
root@faucet2:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2edd04d942eb trinitronx/python-simplehttpserver "python -m SimpleHTTP" 9 hours ago Up 9 hours 0.0.0.0:80->8080/tcp
It seems to be hosting some node listing page - http://51.15.60.23/ - which loads nodes forever. It was started 9 hours ago:
root@faucet2:~# docker inspect http | jq '.[].Created'
"2019-03-18T03:09:14.93644152Z"
But I can't seem to find who did it.
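If anyone wants to dig further, the usual places to look would be the login history and the root shell history on the host (a sketch of where one could look, nothing more):
$ last -aF | head
$ grep -n 'docker run' /root/.bash_history | tail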
Okay, now http://51.15.60.23:3001/faucet-info is back up as well. I don't know why, but 9 minutes ago both containers exited with code 0.
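For completeness, the exit codes and exit times can be read back from Docker itself (container names as in the docker ps output earlier):
$ docker inspect faucet_faucet_1 faucet_geth_1 | jq '.[].State | {ExitCode, FinishedAt}'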
I've brought all three online. One issue is that the rinkeby and testnet faucets both return this from /faucet-info:
can't fetch trie key xyz...123: no suitable peers available
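The "no suitable peers available" part suggests geth simply has no peers it can fetch state from; the quickest way to confirm would be to ask the node over its IPC socket (a sketch; the container name and IPC path are taken from the docker ps output and geth logs in this issue):
$ docker exec faucet_geth_1 geth attach --exec 'net.peerCount' /root/.ethereum/rinkeby/geth.ipc
$ docker exec faucet_geth_1 geth attach --exec 'eth.syncing' /root/.ethereum/rinkeby/geth.ipc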
Also https://faucet-ropsten.status.im/faucet-info seems to return:
{
"account": "0xadaf150b905cf5e6a778e553e15a139b6618bbb7",
"balance": "0 Wei"
}
Not sure why, I assume it wasn't like that before.
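To rule out a faucet-side bug, the balance can also be queried straight from the node's JSON-RPC (a sketch, assuming geth's HTTP-RPC on 8545 is reachable from the host, as it is on faucet1):
$ curl -s -H 'Content-Type: application/json' -d '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0xadaf150b905cf5e6a778e553e15a139b6618bbb7","latest"],"id":1}' http://localhost:8545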
Can it be because of the Constantinople update?
It looks like the 0xadaf150b905cf5e6a778e553e15a139b6618bbb7 address was used a long time ago and definitely was not the one sending 0.1 ETH last week or so when the faucet was still working: https://ropsten.etherscan.io/address/0xadaf150b905cf5e6a778e553e15a139b6618bbb7
Can it be because of the Constantinople update?
Oh, that's possible. It is running ethereum/client-go:v1.7.2, and I assume that's too old? I will try upgrading it to v1.8.23.
Whelp, I upgraded to v1.8.23 for Rinkeby, and this is what I get from https://faucet-rinkeby.status.im/faucet-info:
{"error":"invalid character 'i' looking for beginning of value"}
The geth container seems to be having syncing issues:
INFO [03-18|14:20:30.023] IPC endpoint opened url=/root/.ethereum/rinkeby/geth.ipc
INFO [03-18|14:20:30.033] HTTP endpoint opened url=http://0.0.0.0:8545 cors= vhosts=localhost
INFO [03-18|14:20:30.051] Block synchronisation started
WARN [03-18|14:20:30.052] Synchronisation failed, retrying err="block download canceled (requested)"
INFO [03-18|14:21:18.556] Imported new block headers count=192 elapsed=792.793ms number=3725184 hash=e84515…c61638 age=1mo3w5d
INFO [03-18|14:21:22.578] Imported new block headers count=192 elapsed=256.473ms number=3749568 hash=0b4bad…297277 age=1mo3w1d
...
WARN [03-18|14:23:09.865] Rolled back headers count=2048 header=3749568->3747520 fast=0->0 block=0->0
WARN [03-18|14:23:09.865] Synchronisation failed, retrying err="block body download canceled (requested)"
WARN [03-18|14:23:17.159] Synchronisation failed, dropping peer peer=3418b497a0c26758 err=timeout
What a fucking mess. Maybe it makes more sense to just start an infra-faucet repo and configure this stuff from scratch, because fixing this is making me angry.
@jakubgs can it be solved today or tomorrow? I know you have a bunch of other tasks, so maybe we should ask for help from the team?
I've created https://github.com/status-im/infra-faucet and I'm working on deploying the Rinkeby cluster right now. I hope to be done by the end of the day.
So I'm having issues with https://github.com/status-im/faucet not being compatible with new geth versions. I'm going to try to fix it, but it looks messy, so I can't guarantee it will work today. If anyone needs ETH on Ropsten or Rinkeby, I have access to the old faucet wallet, so I can perform transfers by hand.
I believe the faucets are up and running now:
$ curl -sk https://faucet-ropsten.status.im/faucet-info
{"account":"0x2127edab5d08b1e11adf7ae4bae16c2b33fdf74a","balance":"66020 Ether"}
$ curl -sk https://faucet-rinkeby.status.im/faucet-info
{"account":"0x2127edab5d08b1e11adf7ae4bae16c2b33fdf74a","balance":"320 Ether"}
I will be removing the old Scaleway hosts whose IPs are listed in this issue. Please use the DNS names from now on.
Problem
The old faucets are not working anymore:
Rinkeby: http://51.15.60.23:3001/faucet-info
Ropsten: http://51.15.45.169:3001/faucet-info
Acceptance Criteria
Any other ways to get test ETH would be appreciated.