ipfs / kubo

An IPFS implementation in Go
https://docs.ipfs.tech/how-to/command-line-quick-start/

IPNS very slow #3860

Open nezzard opened 7 years ago

nezzard commented 7 years ago

Hi, is it normal that IPNS loading is very slow? I tried to make something like a CMS with dynamic content, but IPNS is too slow. When I load the site via IPNS the first load is very slow; if I reload the page right after, it loads quickly. But if I reload after a few minutes, it loads slowly again.

whyrusleeping commented 7 years ago

@nezzard This is generally a known issue, but providing more information is helpful. Are you resolving from your local node? Or are you resolving through the gateway?

nezzard commented 7 years ago

@whyrusleeping Through local, but sometimes the gateway is faster, sometimes local is faster. So, for now I can't use IPNS normally?

whyrusleeping commented 7 years ago

@nezzard When using it locally, how many peers do you have connected? (ipfs swarm peers) The primary slowdown of IPNS is connecting to enough of the right peers on the DHT; once that's warmed up it should be faster.
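
For reference, a quick way to get a rough peer count is a shell one-liner like this (assuming a standard Unix environment):

$ ipfs swarm peers | wc -l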

DHT-based IPNS isn't as fast as something more centralized, but you can generally cache the results for longer than ipfs caches them. We should take a look at making these caches more configurable, and look into other IPNS slowdowns.

When you say it's 'very slow', what time range exactly are you experiencing? 1-5 seconds? 5-10? 10+?

nezzard commented 7 years ago

@whyrusleeping Sometimes it's really fast, sometimes I get this: https://yadi.sk/i/mL6Q4OFX3Gu2nk

Ipfs swarm /ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ /ip4/104.131.180.155/tcp/4001/ipfs/QmeXAm1zdLbPaA9wVemaCjbeJgWsCrH4oSCrK2F92yWnbm /ip4/104.133.2.68/tcp/53366/ipfs/QmTAmvzNBsicnajpLTUnVqcPankP3pNDoqHpAtUNkK2rU7 /ip4/104.155.150.120/tcp/4001/ipfs/Qmep8LtipXUG4WSNgJGEtwmuaQQt77wRDL5nkMpZyDqrD3 /ip4/104.236.169.138/tcp/4001/ipfs/QmYodPH2C6xEYFPxNhK4how1frPdXFWVrZ3QGynTFCFfBe /ip4/104.236.176.52/tcp/4001/ipfs/QmSoLnSGccFuZQJzRadHn95W2CrSFmZuTdDWP8HXaHca9z /ip4/104.236.176.59/tcp/4001/ipfs/QmQ8MYL1ANybPTM5uamhzTnPwDwCFgfrdpYo9cwiEmVsge /ip4/104.236.179.241/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM /ip4/104.236.76.40/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64 /ip4/104.40.212.43/tcp/4001/ipfs/QmcvFeaip7B3RDmLU9MgqGcCRv881Citnv5cHkrTSusZD6 /ip4/106.246.181.100/tcp/4001/ipfs/QmQ6TbUShnjKbnJDSYdxaBb78Dz6fF82NMetDKnau3k7zW /ip4/108.161.120.136/tcp/27040/ipfs/QmNRM8W3u6gxAvm8WqSXqCVC6Wzknq66tdET6fLGh8zCVk /ip4/108.28.144.234/tcp/5002/ipfs/QmWfjhgBWjwiesWQPCC4CSV4q83vyBdSA6LRSaZLLCZoVH /ip4/112.196.16.84/tcp/4002/ipfs/QmbELjeVvfpbGYNcC4j4PPr6mnssp6jKWd4D6Jht8jDhiW /ip4/113.253.98.194/tcp/54388/ipfs/QmcL9BdiHQbRng6PvDzbJye7yG73ttNAkhA5hLGn22StM8 /ip4/121.122.82.230/tcp/58960/ipfs/QmPz9uv4HUP1er5TGaaoc4NVCbN8VFMrf5gwvxfmtSAmGv /ip4/128.199.219.111/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu /ip4/128.32.112.184/tcp/4001/ipfs/QmeM9rJsk6Ke57xMwMuCkJBb9pYGx7qVRkgzVD6zhxPaBx /ip4/128.32.153.243/tcp/1030/ipfs/QmYoH11GjCyoQW4HyZtSZcL8BqBuudaXWi1pdYyy1AroFd /ip4/134.71.135.172/tcp/4001/ipfs/QmU3q7GxnnhJabNh3ukDq2QsnzwzVpcT5FEPBcJcRu3Wq1 /ip4/138.201.53.216/tcp/4001/ipfs/QmWmJfJKfJmKtRqsTnygmWgJfsmHnXo4p3Uc1Atf8N5iQ5 /ip4/139.162.191.34/tcp/4001/ipfs/QmYfmBh8Pud13uwc5mbtCGRgYbxzsipY87xgjdj2TGeJWm /ip4/142.4.211.131/tcp/4001/ipfs/QmWWWLYe16uU53wPgdP3V5eEb8QRwoqUb35h5EMWoEyWaJ /ip4/159.203.77.184/tcp/4001/ipfs/QmeLGqhi5dFBpxD4xuzAWWcoip69i5SaneXL9Jb83sxSXo /ip4/163.172.222.20/tcp/4001/ipfs/Qmd4up4kjr8TNWc4rx6r4bFwpe6TJQjVVmfwtiv4q3FSPx /ip4/167.114.2.68/tcp/4001/ipfs/QmfY24aJDGyPyUJVyzL1QHPoegmFKuuScoCKrBk9asoTFG /ip4/168.235.149.174/tcp/4001/ipfs/QmbPFhS9YwUxE4rPeaqd7Vn6GEESd1MUUM67ECtYchHyFB /ip4/168.235.79.131/tcp/4001/ipfs/QmaqsmhXtQfKfiWi3jXdb4PxrN8JNi2zmXN13MDEktjK8H /ip4/168.235.90.18/tcp/4001/ipfs/QmWtA6WFyo44pYzQzHFtrtMWPHZiFEDFjUWihEY49obZ1e /ip4/169.231.33.236/tcp/55897/ipfs/QmQyTC3Bg2BkctdisKBvWPoG8Avr7HMrnNMNJS25ubjVUU /ip4/173.95.181.110/tcp/42615/ipfs/QmTxQ2Bv9gppcNvzAtRJiwNAahVhkUHxFt5mMYkW9qPjE6 /ip4/176.9.85.5/tcp/4001/ipfs/QmNUZW8yuNxdLSPMwvaafiMVN8fof5r2PrsUJAgyAn8Udb /ip4/178.19.251.249/tcp/4401/ipfs/QmR2FRyigN82VJc3MFZNz79L8Hunc3XvfAxU3eA3McRPHg /ip4/178.209.50.28/tcp/30852/ipfs/QmVfwJUWnj7GAkQtV4cDVrNDnZEwi4oxnyZaJc7xY7zaN3 /ip4/178.209.50.28/tcp/36706/ipfs/QmWCNyBxJS9iuwCrnnA3QfcrS9Yb67WXnZTiXZsMDFj2ja /ip4/178.62.61.185/tcp/4001/ipfs/QmSoLMeWqB7YGVLJN3pNLQpmmEk35v6wYtsMGLzSr5QBU3 /ip4/180.181.245.242/tcp/4001/ipfs/QmZg57eGmSgXs8cXeGJNsBknZTxdphZH9wWLDx8TdBQrMY /ip4/185.10.68.111/tcp/4001/ipfs/QmTNjTQy6sGFG39VSunS4v1UZRfPFevtGzHwr2h1xfa5Bh /ip4/185.21.217.59/tcp/4001/ipfs/QmQ4GzeQzyW3VcBgVacKSjrUrBxEo6s7VQrrkQyQwi1sxs /ip4/185.32.221.138/tcp/4001/ipfs/QmcmTqKUdasx9xwbG2DcyY95q6GcMzx8uUC9fVqdTyETrZ /ip4/185.61.148.187/tcp/4001/ipfs/QmQ5k9N7aVGECaNBLsX9ZeYJCYvcNWcKDZ8VacV9HGUwSC /ip4/185.97.214.103/tcp/4001/ipfs/QmbKGbNNyvBe6A7kUYQtUpXZU61QiTMnGGjqBx6zuvrYyj /ip4/188.226.129.60/tcp/4001/ipfs/QmWBthnxqH6CpAA9k9XGP9TqWMZGT6UC2DZ4x9qGr7eapc 
/ip4/188.25.26.115/tcp/32349/ipfs/QmVUR2mtHXCnm7KVyEjBQe1Vdp8XWG6RXC8f8FfrnAxCGJ /ip4/188.25.26.115/tcp/53649/ipfs/QmXctexVWdB4PqAquZ6Ksmu1FwwRMiYhQNfoiaWV4iqEFn /ip4/188.40.114.11/tcp/4001/ipfs/QmZY7MtK8ZbG1suwrxc7xEYZ2hQLf1dAWPRHhjxC8rjq8E /ip4/188.40.41.114/tcp/4001/ipfs/QmUYYq1rYdhmrU7za9zrc6adLmwFBKYx3ksTVU3y1RHomm /ip4/192.124.26.250/tcp/16808/ipfs/QmUnwLT7GK8yCxHrpEELTyHVwGFhiZFwmjrq3jypG9n1k8 /ip4/192.124.26.250/tcp/21486/ipfs/QmeBT8g5ekgXaF4ZPqAi1Y8ssuQTjtzacWB7HC7ZHY8CH7 /ip4/192.131.44.99/tcp/4001/ipfs/QmWQBr5KAnCpGiQa5888DYsJc4gF7x7SDzpT6eVW2SoMMQ /ip4/192.52.2.2/tcp/4001/ipfs/QmeJENdKrdD8Bcj6iSrYPAwfQpR2K1nC8aYFkZ7wXdN9ic /ip4/194.100.58.189/tcp/4001/ipfs/QmVPCaHpUJ2eKVMSgb54zZhYRUKokNsX32C4PSRWiKWY6w /ip4/194.135.91.244/tcp/4001/ipfs/QmbE4S5EBBuY7du97ARD3BizNqpdcwQ3iH1aGyo5c8Ezmb /ip4/195.154.182.94/tcp/1031/ipfs/QmUSfsmVqD8TTgnUcPDTrd24SbWDEpnmkWWr7eqbJT2g8y /ip4/199.188.101.24/tcp/4001/ipfs/QmSsjprNEhoDZJAZYscB4G23b1dhxJ1cmiCdC5k73N8Jra /ip4/204.236.253.32/tcp/4001/ipfs/QmYf9BoND8MCHfmzihpseFc6MA6JwBV1ZvHsSMPJVW9Hww /ip4/206.190.135.76/tcp/4001/ipfs/QmTRmYCFGJLz2s5tfnHiB1kwrfrtVSxKeSPxojMioZKVH6 /ip4/212.227.249.191/tcp/4001/ipfs/QmcZrBqWBYV3RGsPuhQX11QzpKAQ8SYfMYL1dGXuPmaDYF /ip4/212.47.243.156/tcp/4001/ipfs/QmPCfdoA8aDscrfNVAhB12YYJJ2CR9mDG2WtKYFoxwL182 /ip4/213.108.213.138/tcp/4001/ipfs/QmWHo4hLG3tkmfuCot3xGCzE2a822MCNQ1mAx1tdEXVL46 /ip4/213.32.16.10/tcp/4001/ipfs/QmcWjSF6prpJwBZsfPSfzGEL61agU1vcMNCX8K6qaH5PAq /ip4/217.210.239.98/tcp/48069/ipfs/QmWGUTL6pQe4ryneBarFqnMdFwTq847a2DnWNo4oYRHxEJ /ip4/217.234.48.60/tcp/65012/ipfs/QmPPnZRcPCPxDvqgz3nyg5QshSzCzqa837ABFU4H4ZzUQP /ip4/23.250.20.244/tcp/4001/ipfs/QmUgNCzhgGvjn9DAs22mCJ7bv3sFp6PWPD6Egt9aPopjVn /ip4/34.223.212.29/tcp/1024/ipfs/QmcXwJ34KM17jkwYwGjgUFvG7zBgGGnUXRYCJdvAPTc8CB /ip4/35.154.222.183/tcp/4001/ipfs/Qmecb2A1Ki34eb4jUuaaWBH8A3rRhiLaynoq4Yj7issF1L /ip4/37.187.116.23/tcp/4001/ipfs/QmbqE6UfCJaXST3i65zbr649s8cJCUoP9m3UFUrXcNgeDn /ip4/37.187.98.185/tcp/1045/ipfs/QmS7djjNercLL4R4kbEjs6eGtxmAiuWMwnvAhP6AkFB64U /ip4/37.205.9.176/tcp/4001/ipfs/QmdX1zPzUtGJzcQm2gz6fyiaX7XgthK5d4LNSJq3rUAsiP /ip4/40.112.223.87/tcp/4001/ipfs/QmWPSzKERs6KAjb8QfSXViFqyEUn3VZYYnXjgG6hJwXWYK /ip4/45.32.155.49/tcp/4001/ipfs/QmYdn8trPQMRZEURK3BRrwh2kSMrb6r6xMoFr1AC1hRmNG /ip4/45.63.24.86/tcp/4001/ipfs/Qmd66qwujno615ZPiJZYTm12SF1c9fuHcTMSU9mA4gvuwM /ip4/49.77.250.124/tcp/20540/ipfs/QmPXWsm3wCRdyTZAeu4gEon7i1xSQ1QsWsR2X6GpAB3x6r /ip4/5.186.55.132/tcp/1024/ipfs/QmR1mXyic9jSbyzLtnBU9gjbFY8K3TFHrpvJK88LSyPnd9 /ip4/5.28.92.193/tcp/4001/ipfs/QmZ9RMTK8YrgFY7EaYsWnE2AsDNHu1rm5LqadvhFmivPWF /ip4/5.9.150.40/tcp/4737/ipfs/QmaeXrsLHWm4gbjyEUJ4NtPsF3d36mXVzY5eTBQHLdMQ19 /ip4/50.148.88.236/tcp/4001/ipfs/QmUeaH7miiLjxneP3dgJ7EgYxCe6nR16C7xyA5NDzBAcP3 /ip4/50.31.11.244/tcp/4001/ipfs/QmYMdi1e6RV7nJ4xoNUcP4CrfuNdpskzLQ6YBT4xcdaKAV /ip4/50.53.255.232/tcp/20792/ipfs/QmTaqVy1m5MLUh2vPSU64m1nqBj5n3ghovXZ48V6ThLiLj /ip4/51.254.25.17/tcp/4002/ipfs/QmdKbeXoXnMbPDfLsAFPGZDJ41bQuRNKALQSydJ66k1FfH /ip4/52.168.18.22/tcp/9001/ipfs/QmV9eRZ3uJjk461cWSPc8gYTCqWmxLxMU6SFWbDjdYAsxA /ip4/52.170.218.157/tcp/9001/ipfs/QmRZvZiZrhJdZoDruT7w2QLKTdniThNwrpNeFFdZXAzY1s /ip4/52.233.193.228/tcp/4001/ipfs/QmcdQmd42P3Mer1XQrENkpKEW9Z97ucBb5iw3bEPqFnqHe /ip4/52.53.224.174/tcp/4001/ipfs/QmdhVq4BHYLmrsatWxw8FHVCspdTabdgptUaGxW2ow2F7Q /ip4/52.7.58.3/tcp/4001/ipfs/QmdG5Y7xqrtDkVjP1dDuwWvPcVHQJjyJqG5xK62VzMth2x /ip4/54.178.171.10/tcp/4091/ipfs/QmdtfJBMitotUWBX5YZ6rYeaYRFu6zfXXMZP6fygEWK2iu /ip4/54.190.54.51/tcp/4001/ipfs/QmZobm32XH2UiGi5uAg2KabEh6kRL6x64HB56ZF3oA4awR 
/ip4/54.208.247.108/tcp/4001/ipfs/QmdDyCsGm8Zzv4uyKB4MzX8wP7QDfSfVCsCNMZV5UxNgJd /ip4/54.70.38.180/tcp/1024/ipfs/QmSHCEevPPowdJKHPwivtTW6HsShGQz5qVrFytDeW1dHDv /ip4/54.70.48.46/tcp/1030/ipfs/QmeDcUc9ytZdLcuPHwDNrN1gj415ZFHr27gPgnqJqbf1hg /ip4/54.71.244.118/tcp/4001/ipfs/QmaGYHEnjr5SwSrjP44FHGahtdk3ShPf3DBYmDrZCa1nbS /ip4/54.89.97.141/tcp/4001/ipfs/QmRjxYdkT4x3QpAWqqcz1wqXhTUYrNBm6afaYGk5DQFeY8 /ip4/58.179.165.141/tcp/4001/ipfs/QmYoXumXQYX3FknhH1drVhgqnJd2vQ1ExECLAHykA1zhJZ /ip4/63.96.220.210/tcp/4001/ipfs/QmX4SxZFMgds5b1mf3y4KKHsrLijrFvKZ6HfjZN6DkY4j5 /ip4/65.19.134.242/tcp/4001/ipfs/QmYCLRXcux9BrLSkv3SuGEW6iu7nUD7QSg3YVHcLZjS5AT /ip4/66.56.15.111/tcp/4001/ipfs/QmZxW1oKFYNhQLjypNtUZJqtZMvzk1JNAQnfGLczan2RD2 /ip4/67.174.159.210/tcp/4001/ipfs/QmRNuP6GpZ4tAMvfgXNeCB6At4uRGqqTXBusHRxFh5n8Eq /ip4/69.12.67.106/tcp/4001/ipfs/QmT1q92VyoqysvC268kegsdxeNLR8gkEgpFzmnKWfqp29V /ip4/69.61.33.241/tcp/4001/ipfs/QmTtggHgG1tjAHrHfBDBLPmUvn5BwNRpZY4qMJRXnQ7bQj /ip4/69.62.223.164/tcp/4001/ipfs/QmZrzE3Gye318CU7ZsZ3YeEnw6L7RkbhBvmfU7ebRQEF54 /ip4/71.204.170.241/tcp/4001/ipfs/QmTwvAzEoWZjFAsv9rhXrcn1XPb7qhxDVZN1Q61AnZbqmM /ip4/72.177.11.53/tcp/4001/ipfs/QmPxFX8j1zbHNzLgmeScjX7pjKho2EgzGLaiANFTjLUAb4 /ip4/75.112.252.166/tcp/11465/ipfs/QmRWC4hgiM7Tzchz2uLAN6Yt1xWptqZWYPb5AWvv2DeMhp /ip4/78.46.68.56/tcp/53378/ipfs/QmbE9eo6PXuSHAASumNVZBKvPsVpSjgRDEqoMNHJ49cBKz /ip4/78.56.33.225/tcp/4001/ipfs/QmXokcQHHxSCNZgFv28hN7dTzxbLcXpCM1MUDRXa8G9wNK /ip4/79.175.125.102/tcp/58126/ipfs/QmdDA6QfLQ5sRez6Ev15yDCdumvBuYygeNjVZqFef693Gn /ip4/80.167.121.206/tcp/4001/ipfs/QmfFB7ShRaVPEy9Bbr9fu9xG947KCZqhCTw1utBNHBwGK2 /ip4/82.119.233.36/tcp/4001/ipfs/QmY3xH9PWc4NpmupJ9KWE4r1w9XshvW6oGVeHAApuvVU2K /ip4/82.197.194.135/tcp/41271/ipfs/QmQLW2mhJYPmhYmhkA2FZwFGdEXFjnsprB5DfBxCMRdBk9 /ip4/82.227.20.27/tcp/50190/ipfs/QmY8bMNkkNZvxw1pGVi4pqiXeszZnHY9wwr1Qvyv6QmfsE /ip4/84.217.19.85/tcp/62227/ipfs/QmaD38nfW4u97DPHDLz1cYWzhWUYPKrEianJs2dKctutpf /ip4/84.217.19.85/tcp/63787/ipfs/QmXKd1pJxTqTWNgGENcX2daiGLgWRPDDsXJe8eecQCr6Vh /ip4/86.0.212.51/tcp/50000/ipfs/Qmb9ECxYmPL9sc8jRNAwpGhgjEiXVHKb2qfS8jtjN5z7Pp /ip4/88.153.7.190/tcp/17396/ipfs/QmWTyP5FFpykrfocJ14AcQcwnuSdKAnVASWuFbtqCw3RPT /ip4/88.198.52.13/tcp/4001/ipfs/QmNhwcGyu8pyCHzHS9SuVyVNbg8SjpTKyFb72oofvL4Nf5 /ip4/88.99.13.90/tcp/4001/ipfs/QmTCM4KLAF1xG4ri2JBRigmjf8CLwAzkTs6ckCQbHaArR6 /ip4/89.23.224.58/tcp/37305/ipfs/QmWqjusr86LThkYgjAbNMa8gJ55wzVufkcv5E2TFfzYZXu /ip4/89.64.51.138/tcp/47111/ipfs/Qme63idhHJ2awgkdG952iddw5Ta9nrfQB3Bpn83V1Bqgvv /ip4/91.126.106.78/tcp/21076/ipfs/QmdFZQdcLbgjK5uUaJS2EiKMs4d2oke1DdyGoHAKRMcaXk /ip4/92.222.85.0/tcp/4001/ipfs/QmTm7RdPXbvdSwKQdjEcbtm4JKv1VebzJR7RDra3DpiWd7 /ip4/93.11.115.24/tcp/34730/ipfs/QmRztqxTvxvQXWi7JbtTXijzzngpDgVYwQ2YBccVkt7qjn /ip4/93.182.128.2/tcp/39803/ipfs/Qma8oBW3GNWvNbdEzWiNWenrGtF3DhDUBcUrrsTJBiNKJ2 /ip4/95.31.15.24/tcp/4001/ipfs/QmPxgtHFqyAdby5oqLT5UJGMjPFyGHu5zQcpZ1sKYcuX75 /ip4/96.84.144.177/tcp/4001/ipfs/Qma7U9CNhPnfLit2UL88CFKvizFCZ7pnxB38N3Y5WsZwFH

Kubuxu commented 7 years ago

Which ipfs version are you running?

kikoncuo commented 7 years ago

@nezzard What tool are you using in your screenshot? I've seen it many times in the forums but I can't find it anywhere.

nezzard commented 7 years ago

@kikoncuo It's a tool from a cloud service like Dropbox: https://disk.yandex.ua/

nezzard commented 7 years ago

@Kubuxu The latest at the time.

kikoncuo commented 7 years ago

@nezzard I meant the tool you took the screenshot from, my bad.

nezzard commented 7 years ago

@Kubuxu It's a tool inside the Yandex Disk program.

cpacia commented 7 years ago

So let me tell you some tweaks I've made which have helped quite a bit. 1) I made the DHT query size param accessible from the config. Setting it to 5 or 6 speeds it up quite a bit.

2) I also added some caching into the resolver so that if it can't find a record on the network (such as it expiring) it loads it from local cache. Obviously each record that is fetched updates the cache. This isn't really speed related but it does provide a slightly better UX as data remains available after it drops out of the dht.

3) Using #2 for certain types of data where it doesn't matter if it's slightly stale, like profiles, I load the record from cache and use it to return the profile. Then in the background I do the IPNS call to fetch the latest profile and update the cache. This ensures that our profile calls are nearly instant while potentially being only slightly out of date.

whyrusleeping commented 7 years ago

We can probably add flags to the ipfs name resolve api that allow selection (per resolve) of the query size parameter, and also to say "just give me whatever value you have cached".

Both of those would be simple enough to implement without actually having to change too much.
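
As a purely hypothetical sketch of that UX (the flag names below are illustrative only, not flags that existed at the time of this comment):

$ ipfs name resolve --dht-record-count=6 /ipns/<peer-id>   # hypothetical: smaller quorum, faster but less eclipse-resistant
$ ipfs name resolve --cached-only /ipns/<peer-id>          # hypothetical: return whatever value is cached locally, immediately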

whyrusleeping commented 7 years ago

Another thing we could do is have a command that returns IPNS results as they come in, and then, when enough come in to make a decision, says "This is the best one". This way you could start working with the first one you receive, then switch to using the right one when it comes in.

MichaelMure commented 6 years ago

I have some trouble as well with IPNS. I have a linux box and a windows box on the same LAN running ipfs 0.4.9 and I can't resolve IPNS addresses published from the other side, even after several minutes. I have 400 peers connected on one side, 250 on the other.

@cpacia Are your changes in a branch somewhere? That looks like a very handy addition for my project.

MichaelMure commented 6 years ago

Answering myself: the fork is here: https://github.com/OpenBazaar/go-ipfs

@whyrusleeping any idea how I can debug this issue?

whyrusleeping commented 6 years ago

@MichaelMure You can't resolve at all? Or is it just very slow?

MichaelMure commented 6 years ago

Sometimes it just takes a while before it's able to resolve, and once it has resolved once it works properly. But in this case it didn't resolve at all, even after 30 minutes. It might be another issue, but without a way to find out what's going on inside ipfs, well...

nezzard commented 6 years ago

I think IPNS is too slow to be usable right now. You can check http://ipfs.artpixel.com.ua/

It loads in 15-20 seconds.

hhff commented 6 years ago

I'm also experiencing massive resolution times with IPNS. Same behavior over here: the first resolution can take multiple minutes, then once it's loaded, I can refresh the content in under a second.

If I leave it for a few minutes and then do another refresh, the request cycle repeats the same behavior.

The "cache" for the resolution only appears to stay warm for a short period of time.

hhff commented 6 years ago

I'm using a CNAME with _dnslink, for what it's worth.

Content is at www.ember-cli-deploy-ipfs.com

alexandre1985 commented 6 years ago

IPFS is unusable. I have the daemon running on both of my computers inside a LAN, with one "serving" a file (a video) that the other doesn't have. When I try to access that video from the PC that doesn't have the file, using localhost:8080/ipfs/... in my browser, the video keeps stopping and takes a huge amount of time to load. Such a HUGE amount of time that I can't watch the video. When I netcat that video and pipe it through mplayer to the other computer, I can watch the video stream just fine. So this is a problem with IPFS, and it has great, great performance issues. So great that it doesn't make sense and makes the technology not worth using (as of today, 2017-08-24). IPFS isn't delivering what it promised. Very disappointed.

kesar commented 6 years ago

Very disappointed

You should ask for a refund 👍

alexandre1985 commented 6 years ago

@kesar I mean this out of love. @jbenet (Juan Benet) says that it is going to release us from the backbone, but currently the IPFS network performance is very weak. I would like IPFS to succeed, but how can that be if I can watch a video faster through the backbone than through IPFS hosting the video file inside my LAN? The performance of IPFS in this respect is weak, to put it modestly. You should try this experiment yourself.

Calmarius commented 6 years ago

It took me more than a minute to resolve the domain published by my own computer... And it's not the DNS resolution; it hangs at resolving the actual IPNS entry.

$ time ipfs resolve /ipns/QmQqR8R9nfFkWYH9P7xNPtAry8tT63miNyZwt121uXsmSU
/ipfs/QmQunuPzcLp2FiKwMDucJi957SrB8BygKA4C4J4h7VG4M9

real    1m0.078s
user    0m0.060s
sys 0m0.008s

Stebalien commented 6 years ago

We're working on fixing some low-hanging fruit in the DHT that should alleviate this: https://github.com/libp2p/go-libp2p-kad-dht/issues/88. You can expect this to appear in a release in a month or so (0.4.12 or 0.4.13).

We're also working on bypassing the DHT for recently accessed IPNS addresses by using pubsub ( https://github.com/ipfs/go-ipfs/pull/4047). However, that will likely remain under an experimental flag for a while as our current pubsub implementation is very naive.

inetic commented 6 years ago

@cpacia

So let me tell you some tweaks I've made which has helped quite a bit.

  1. I made the dht query size param accessible from the config. Setting it to like 5 or 6 speeds it up quite a bit.

Could you please elaborate? I'm looking in the config file but can't find any mention of dht or query. I'm using one of the recent git versions:

commit 5923540d3716f226340db31867c9061fb2d86afe Date: Wed Oct 25 19:52:44 2017 +0100

Calmarius commented 6 years ago

Someone suggested that we should point dnslink TXT records to IPFS paths instead of IPNS; it works around the performance problem.
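
For illustration, the workaround is just a matter of what the dnslink TXT record points at (example.com, <cid> and <peer-id> are placeholders):

$ dig +short TXT _dnslink.example.com
"dnslink=/ipfs/<cid>"

Pointing the record at /ipfs/<cid> skips the IPNS lookup entirely, at the cost of updating the DNS record on every publish; pointing it at /ipns/<peer-id> keeps a stable record but pays the IPNS resolution cost on every uncached lookup.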

luckzack commented 6 years ago

@Calmarius Haha, that's what I said, but it looks stupid.

victorb commented 6 years ago

Update: IPNS has had an upgrade with an experimental feature that uses pubsub to speed up publishing/resolving. If both nodes use pubsub and pass --enable-namesys-pubsub when starting the daemon, publishing/resolving gets a lot faster. More details here: https://blog.ipfs.io/34-go-ipfs-0.4.14#ipns-improvements
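
Roughly, the flow looks like this (a sketch; <cid> and <peer-id> are placeholders, and both daemons need the flag):

$ ipfs daemon --enable-namesys-pubsub        # on both the publishing and the resolving node
$ ipfs name publish /ipfs/<cid>              # on the publisher
$ ipfs name resolve /ipns/<peer-id>          # on the resolver; after the first resolve, later updates arrive over pubsub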

inetic commented 6 years ago

@VictorBjelkholm Thanks for the update. I've been testing --enable-namesys-pubsub this last week. It indeed looks like once the IPNS publishers and resolvers find the correct place in the DHT, the next time an IPNS record is published the resolvers get it almost instantly.

However, the first resolve is still very slow. And it also seems that it's now even slower in the new v0.4.14 release than it was in v0.4.13. Where before it usually took somewhere between 30 seconds and a minute (occasionally more), it now seems to consistently take more than 2 minutes.

BTW, does anyone know what's behind the technical issue in IPNS resolve? The DHT lookup seems very fast when resolving IPFS addresses, so I don't think the DHT is the (only) problem.

Also, is there a spec for the IPNS resolve algorithm I could read? I had a quick look at the code some time ago and found it's quite a bit more complicated than BitTorrent's BEP 44. I didn't get too far in analyzing it, though.

MichaelMure commented 6 years ago

@inetic Finding providers (for some data in IPFS) and resolving a value (like an IPNS entry) are actually two very different code paths with different rules in the DHT. I did a review of the DHT a few days ago that might explain the troubles: https://github.com/libp2p/go-libp2p-kad-dht/issues/131

karalabe commented 6 years ago

Hmm, running latest master, I see a funky behavior with IPNS resolutions:

$ time ipfs name publish QmYNQJoKGNHTpPxCBPh9KkDpaExgd2duMa3aF6ytMpHdao
Published to QmNwcx2tLRD4M8CzJ6NrmN1hYAJFwYkwt8JsoKNSiX9c5z: /ipfs/QmYNQJoKGNHTpPxCBPh9KkDpaExgd2duMa3aF6ytMpHdao

real    1m28.431s
user    0m0.047s
sys 0m0.009s

$ time ipfs name resolve QmNwcx2tLRD4M8CzJ6NrmN1hYAJFwYkwt8JsoKNSiX9c5z
/ipfs/QmYNQJoKGNHTpPxCBPh9KkDpaExgd2duMa3aF6ytMpHdao

real    1m0.048s
user    0m0.052s
sys 0m0.000s

$ time ipfs name resolve QmNwcx2tLRD4M8CzJ6NrmN1hYAJFwYkwt8JsoKNSiX9c5z
/ipfs/QmYNQJoKGNHTpPxCBPh9KkDpaExgd2duMa3aF6ytMpHdao

real    1m0.051s
user    0m0.043s
sys 0m0.013s

$ time ipfs name resolve QmNwcx2tLRD4M8CzJ6NrmN1hYAJFwYkwt8JsoKNSiX9c5z
/ipfs/QmYNQJoKGNHTpPxCBPh9KkDpaExgd2duMa3aF6ytMpHdao

real    1m0.047s
user    0m0.036s
sys 0m0.012s

Resolution always takes exactly 1 minute, meaning it's a synthetic timeout somewhere blocking the results. Perhaps it's IPNS waiting for majority consensus before returning it to the user?
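
One way to test the timeout theory, assuming your build exposes the per-resolve DHT options on ipfs name resolve (check ipfs name resolve --help for --dht-timeout / --dht-record-count), is to lower the timeout and see whether the one-minute wall moves:

$ time ipfs name resolve --dht-timeout=10s QmNwcx2tLRD4M8CzJ6NrmN1hYAJFwYkwt8JsoKNSiX9c5z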

vyzo commented 6 years ago

@karalabe there is no majority consensus involved in ipns.

But you are right that there is probably a timeout involved (1min is the default for a lot of things). Are you using pubsub by any chance?

karalabe commented 6 years ago

Not yet, will experiment with that too. For now I'm trying to understand the internals and limitations of the stock IPNS before going further into pubsub territory.

karalabe commented 6 years ago

there is no majority consensus involved in ipns.

Maybe I misunderstood the comment from @whyrusleeping

Another thing we could do is have a command that returns IPNS results as they come in, and then, when enough come in to make a decision, says "This is the best one".

vyzo commented 6 years ago

Right, that's not something we do yet.

whyrusleeping commented 6 years ago

We do that, in the DHT: https://github.com/libp2p/go-libp2p-kad-dht/blob/master/routing.go#L84

vyzo commented 6 years ago

I stand corrected -- edit: although this is still not majority consensus.

And there is a 1min timeout 3 lines further down.

MichaelMure commented 6 years ago

The timeout is hit because we don't get enough answers. I believe we don't get enough answers because:

By default, GetValue waits for 16 values to resolve. PutValue is supposed to push the value to 20 nodes, but due to my first point it's in practice more like 10. Also, it pushes to the nodes that we already know of, not necessarily the optimal ones, as there is no DHT query here. If you add the other reasons on top, I think that explains why a DHT query is slow.

vyzo commented 6 years ago

Perhaps we should simply reduce the number of peers we are expecting answers from?

MichaelMure commented 6 years ago

16 is a trade-off between speed and correctness to sample the DHT enough and get the newer record out of the older ones. I don't know if it's too much though.

If anything, PutValue should be fixed to be sure to actually push to 20 nodes, or even make it a DHT query to reach nodes closer to the key that we might not know of yet.

whyrusleeping commented 6 years ago

+1 to fixing PutValue.

MichaelMure commented 6 years ago

Note: Provide works in a similar fashion (a simple push to nodes we already know, without a guarantee of a minimum number of nodes). FindProviders is less sensitive than GetValue because we only need to find one working provider to start downloading, but changing Provide the same way might help in edge cases and would allow us to reduce the number of nodes we keep connected for the DHT.

whyrusleeping commented 6 years ago

I filed an issue in the DHT codebase for this. Hoping to get someone to investigate it soon.

inetic commented 6 years ago

Please correct me if I'm wrong, but isn't waiting for 16 peers to respond wasteful in this case? I may be misunderstanding the code, but it seems waiting for just one valid response should be enough(?).

The way I read the code between lines L84 and L123 is like this:

L96: Make a request to the DHT and wait for 16 replies, with a one-minute timeout.
L101: Of the 16 (or fewer) values, take those whose v.Val is not nil and put them into recs.
L111: Pick one record from recs using the dht.Selector.BestRecord(key, recs) function.

Now, all the dht.Selector.BestRecord function does - it seems - is to pick the first element from the recs array.

Would it not be better to do something like this pseudo-code?

  1. Call GetValues in some goroutine and pass the values to a vals_chan channel.
  2. Inside the GetValue (no plural) function, evaluate each value arriving on vals_chan, and
  3. if it's not nil, just call it best and use it in the code after line L123.

whyrusleeping commented 6 years ago

@inetic The selector for public keys does just pick the first valid record, because all public key records are the same. In that case, yes, it is wasteful to wait for 16 records.

Though since public keys are cached, and can also be obtained through other methods, the slowness of IPNS is rarely due to public key retrieval. The slowness happens because for IPNS records we need the 16 values; this is a probabilistic protection against an eclipse attack on a given value. As that number gets smaller, it becomes exponentially easier for an attacker to pull off the attack and give the wrong record to their victim.

That said, in this case, the 'wrong' record must still be a valid record (signed, and with a 'live' TTL). So a successful attack is either censorship by wrong value (giving the victim 16 records with valid signatures but bad TTLs), or a slightly older value.

karalabe commented 6 years ago

Can the TTL be controlled by the user? E.g. can I say I want an IPNS entry to be valid for only 5 mins?

whyrusleeping commented 6 years ago

@karalabe yes, via the --lifetime flag. There is also a TTL flag, but that affects a different TTL than the record validity one.
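
For example, something along these lines should publish a record that is only considered valid for five minutes (a sketch; <cid> is a placeholder):

$ ipfs name publish --lifetime 5m /ipfs/<cid>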

leonprou commented 6 years ago

I tried to make it faster by resolving the IPNS name of my own node, but there was no difference.

Can this work like IPFS queries? When I query my ipfs node for a pinned file it takes no time, because the node doesn't need to communicate with the network. The same could be done with IPNS, I think. I've found the "decentralize, but store your stuff locally" approach pretty useful while IPFS is in alpha and features are missing.

Please correct me if I'm wrong, I'm just starting to explore the InterPlanetary 😊

Kubuxu commented 6 years ago

It is most likely caused by https://github.com/libp2p/go-libp2p-kad-dht/issues/139. In short, we don't push enough DHT records to resolve IPNS without hitting the timeout, and it is slow because of that.