arnoldnipper opened 6 years ago
Making read-only pages work across continents would be great. Writes would probably need to go to the master, unless we implemented multi-master here? Could also put peeringdb behind a CDN? I'm sure we could get volunteer CDN hosting from someone :)
This was already on the PeeringDB Operations Committee road-map. Thanks for confirming the need for distribution.
Are there any code-changes needed for this to generate a different URL for static/non-static objects?
We have yet to investigate that.
@eloos no, there aren't.
Moving this to backlog as an Operations ticket.
Any ETA for this, @job?
Today's trace from the venue hotel (Cordis) of Peering Asia 2.0:

tracert peeringdb.com
Tracing route to peeringdb.com [52.20.70.156] over a maximum of 30 hops:
  1     2 ms     2 ms            125.214.253.252
  2     3 ms     2 ms     2 ms   172.25.248.117
  3     3 ms     2 ms     2 ms   10.0.16.98
  4     3 ms     3 ms     3 ms   10.0.2.1
  5     4 ms     3 ms    27 ms   10.0.17.137
  6     4 ms     2 ms     2 ms   223-197-123-142.static.imsbiz.com [223.197.123.142]
  7     4 ms     4 ms     5 ms   63-216-142-105.static.pccwglobal.net [63.216.142.105]
  8     Request timed out.
  9   179 ms   179 ms   183 ms   TenGE0-0-0-2.br04.dal01.pccwbtn.net [63.218.22.246]
 10   189 ms   256 ms   303 ms   52.95.219.100
 11     Request timed out.
 12     Request timed out.
 13     Request timed out.
 14     Request timed out.
 15     Request timed out.
 16     Request timed out.
 17   237 ms   229 ms   277 ms   54.239.108.48
 18   215 ms   248 ms   244 ms   54.239.110.176
 19   266 ms   304 ms   229 ms   54.239.110.191
 20   252 ms   219 ms   286 ms   54.239.109.47
 21     Request timed out.
 22     Request timed out.
 23     Request timed out.
 24     Request timed out.
 25     Request timed out.
 26     Request timed out.
 27  *  Request timed out.
 28   212 ms   284 ms   212 ms   ec2-52-20-70-156.compute-1.amazonaws.com [52.20.70.156]
Trace complete.
Any ETA for this, @job?
Today's trace is from the KINX venue hotel:

tracert peeringdb.com
Tracing route to peeringdb.com [54.236.2.139] over a maximum of 30 hops:
  1    10 ms     7 ms     1 ms   211.55.52.254
  2     Request timed out.
  3     Request timed out.
  4     2 ms    18 ms     2 ms   112.189.28.189
  5     Request timed out.
  6    15 ms     3 ms     3 ms   112.174.82.30
  7    17 ms    12 ms     9 ms   112.174.85.126
  8   253 ms   195 ms   166 ms   112.174.80.174
  9   162 ms   168 ms   165 ms   ae14.cr3-sea2.ip4.gtt.net [173.205.45.105]
 10   379 ms   311 ms   306 ms   et-0-0-67.cr6-chi1.ip4.gtt.net [89.149.140.209]
 11   181 ms   180 ms   183 ms   a100-gw.ip4.gtt.net [173.205.58.74]
 12     Request timed out.
 13     Request timed out.
 14     Request timed out.
 15     Request timed out.
 16     Request timed out.
 17     Request timed out.
 18     Request timed out.
 19     Request timed out.
 20     Request timed out.
 21   206 ms   209 ms   355 ms   ec2-54-236-2-139.compute-1.amazonaws.com [54.236.2.139]
Trace complete.
Any news on this, @peeringdb/oc?
I saw that @job has done work on making beta more geographically diverse. I am not sure where we are for the production platform.
In order to start serving the PDB application & content from the production environment, we as OPS need each committee/steward to sign off on how beta is running right now and confirm they are happy with it.

CDN log messages are transferred to the /efs storage location, which is accessible via the beta.peeringdb.com frontends & master.

The various committees should click around on beta.peeringdb.com and tell OPS if something is off or if everything works the way they expect.
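For anyone checking beta from a terminal rather than a browser, a quick header dump shows whether responses are coming back through the CDN. This is only an illustrative sketch, not an OPS-provided tool, and which cache-related headers appear depends on the CDN in front of beta.peeringdb.com:

```python
# Illustrative only: print the response headers from beta.peeringdb.com
# so testers can spot CDN/cache headers (names vary by provider).
import urllib.request

req = urllib.request.Request(
    "https://beta.peeringdb.com/", headers={"User-Agent": "pdb-beta-check"}
)
with urllib.request.urlopen(req, timeout=30) as resp:
    print(resp.status, resp.reason)
    for name, value in resp.headers.items():
        # Anything mentioning "cache" or "age" is usually the interesting part.
        print(f"{name}: {value}")
```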
@leovegoda can you help?
Seemed fine to me. @grizz are you happy with beta's performance the past ~week it's been behind a CDN?
We should be able to close this issue. Any objection? If not, I'll go ahead.
@ynbrthr this is an ops ticket, it's not completed yet.
Ack, I was trying to do some clean-up.
@peeringdb/pc please see https://github.com/peeringdb/peeringdb-py/issues/65 as another approach to achieving this goal
Can you give us a feel for the ROI? Is this one user = $$ in savings, or 50 users = €€?
Thanks Leo
I think it would depend on take-up. Because peeringdb-py is a very efficient local client cache, it could potentially help us better manage our spend with AWS. If we did this, we might also want to use logging information to reach out to heavy users and explain to them the advantages of using it.
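To illustrate why a local client cache could change the cost picture: instead of every lookup hitting www.peeringdb.com, a client does one full fetch and then only asks for objects changed since its last sync. Below is a rough sketch of that pattern, not peeringdb-py's actual code; the `/api/net` endpoint and the `since` query parameter reflect my understanding of the public REST API, so treat the details as assumptions:

```python
# Rough sketch of the local-cache / incremental-sync pattern that a
# client like peeringdb-py uses; not its actual implementation.
import json
import time
import urllib.request

API = "https://www.peeringdb.com/api"
cache = {}      # local store, keyed by object id
last_sync = 0   # unix timestamp of the previous sync


def fetch(url):
    req = urllib.request.Request(url, headers={"User-Agent": "pdb-cache-sketch"})
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())["data"]


def sync_net():
    """Pull only 'net' objects updated since the last sync."""
    global last_sync
    url = f"{API}/net?since={last_sync}" if last_sync else f"{API}/net"
    for obj in fetch(url):
        cache[obj["id"]] = obj
    last_sync = int(time.time())


# After the first (full) sync, later syncs are tiny and all lookups are
# answered locally, so the central service sees far fewer requests.
sync_net()
print(len(cache), "net objects cached locally")
```

A real client also needs to handle deletions and all the other object types, which is the sort of bookkeeping peeringdb-py takes care of.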
Seemed fine to me. @grizz are you happy with beta's performance the past ~week it's been behind a CDN?
open for two years now
Passive aggression isn't usually the best idea with uncompensated volunteers. Especially not in a public forum.
I recently did some experiments with fly.io and planetscale, but got blocked by some faulty pdb code. I'll revisit it soon, but remember that this is primarily a cosmetic thing for people using ping... It feels as responsive in a web browser from here in Australia as it did when I was in Europe. Block ICMP Echo Requests and nobody would be any the wiser :P
It feels as responsive in a web browser from here in Australia as it did when I was in Europe.
Responsiveness could be way better, both in Europe and in Australia. As someone who uses the GUI a lot on a daily basis, it makes a difference. Sometimes, you can go fetch a coffee.
That would be the responsibility of the code, not the location of the infrastructure. Next time that situation happens, please tell ops what time it happened (and ideally the query and source IP so we can narrow it down) and we will try to correlate it with anything else going on.
Part of the reason I've been looking at fly.io (as a mechanism to bring compute closer to users) and planetscale (as a serverless database engine) is the vast increase in observability they would bring, helping us find these pauses/blocks in operation, as well as moving the deployment process into a proper CI/CD system without needing scheduled downtime.
I've been meaning to look at the data model too, as it does seem to have scalability issues. I suspect the "depth" parameter in API calls may be the primary cause of so many database calls, because queries are not being coalesced appropriately.
Who knows, I might rewrite it in entgo.io one day :P
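For context on the "depth" parameter mentioned above: the REST API lets a caller ask for nested sets to be expanded in a single response instead of returning bare IDs. A quick, hedged way to see the difference from the outside (the org ID below is just an example, and the exact expansion semantics per depth level are described in the API docs):

```python
# Fetch the same object with and without nested-set expansion and
# compare response sizes; deeper expansion means the server has to
# resolve many related rows to build a single response.
import json
import urllib.request

def get(url):
    req = urllib.request.Request(url, headers={"User-Agent": "depth-demo"})
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())["data"][0]

shallow = get("https://www.peeringdb.com/api/org/2?depth=0")  # example org ID
deep = get("https://www.peeringdb.com/api/org/2?depth=2")

# The deep response embeds related objects rather than just their IDs,
# which is where the extra database work (and the N+1 risk) comes from.
print(len(json.dumps(shallow)), "bytes at depth=0")
print(len(json.dumps(deep)), "bytes at depth=2")
```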
Thanks for the work!
I've been meaning to look at the data model too, as it does seem to have scalability issues. I suspect the "depth" parameter in API calls may be the primary cause of so many database calls, because queries are not being coalesced appropriately.
Should we change that behavior? So far, it's
It would break a lot of people's code, I think. What I'm planning to investigate: say an org references a series of ixs. Does it make one call to get all those ix objects with a big WHERE clause, or does it do one SELECT per object, which obviously leads to a lot more database calls? (See the sketch below for the two patterns.)
It's all about finding time to look into it ;)
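To make the two access patterns concrete, here is a tiny self-contained illustration (generic SQLite, not the actual PeeringDB schema or ORM code): the first loop issues one SELECT per referenced ix, the second coalesces them into a single WHERE ... IN query.

```python
# Illustration of per-object SELECTs vs one coalesced query.
# Generic schema for demonstration only, not PeeringDB's data model.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ix (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO ix (id, name) VALUES (?, ?)",
    [(i, f"ix-{i}") for i in range(1, 101)],
)
ix_ids = list(range(1, 101))  # ids referenced by some org

# Pattern 1: N+1 style -- one round trip per object.
rows = [
    conn.execute("SELECT id, name FROM ix WHERE id = ?", (i,)).fetchone()
    for i in ix_ids
]

# Pattern 2: coalesced -- a single query with a big WHERE ... IN clause.
placeholders = ",".join("?" * len(ix_ids))
rows = conn.execute(
    f"SELECT id, name FROM ix WHERE id IN ({placeholders})", ix_ids
).fetchall()

print(len(rows), "rows either way; the database sees 100 queries vs 1")
```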
It's all about finding time to look into it ;)
If you tell me where to look, I can do that. Does it make sense to look at the heavy users first? I've done a quick compilation of how users call the API (2023-02-28), i.e. fetching the whole OBJ, OBJ with parameters, or OBJ/ID.
Whole object:

6 carrier
6 carrierfac
11 as_set
30 campus
654 netfac
661 ixfac
688 fac
889 org
889 poc
973 netixlan
997 ixlan
1616 ix
1676 net
1723 ixpfx

Object by ID:

2 netixlan ID
5 netfac ID
8 poc ID
262 as_set ID
597 fac ID
954 ixlan ID
1609 ix ID
3404 org ID
23519 net ID

Object with parameters:

12 campus Parameter
669 carrier Parameter
673 carrierfac Parameter
2695 ixfac Parameter
2855 ixlan Parameter
2981 poc Parameter
3231 fac Parameter
3922 ixpfx Parameter
6729 netfac Parameter
8585 ix Parameter
17796 netixlan Parameter
17859 org Parameter
86115 net Parameter
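For anyone who wants to reproduce or extend this kind of breakdown, the bucketing is easy to script. A hedged sketch below; the log file name and format are hypothetical, and the real compilation may have been produced differently:

```python
# Hypothetical sketch: bucket API request paths into "whole object",
# "object by ID" and "object with parameters", like the counts above.
import re
from collections import Counter

counts = Counter()
path_re = re.compile(r"/api/(?P<tag>[a-z_]+)(?:/(?P<id>\d+))?(?P<query>\?.*)?$")

# "requests.log" is a placeholder; assume one request path per line.
with open("requests.log") as fh:
    for line in fh:
        m = path_re.search(line.strip())
        if not m:
            continue
        if m.group("id"):
            counts[f'{m.group("tag")} ID'] += 1
        elif m.group("query"):
            counts[f'{m.group("tag")} Parameter'] += 1
        else:
            counts[m.group("tag")] += 1

for key, n in sorted(counts.items(), key=lambda kv: kv[1]):
    print(n, key)
```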
While at AfPIF in Cape Town, a user from this continent suggested load balancing across continents as well.
Tracing route to peeringdb.com [34.229.32.113] over a maximum of 30 hops:
  1    18 ms     3 ms    15 ms   pr-01-afpif-cpt.za.seacomnet.com [105.31.223.254]
  2    87 ms    10 ms   265 ms   105.27.196.97
  3    34 ms    16 ms    18 ms   xe-0-0-24.es-11-cpt.za.seacomnet.com [105.16.13.61]
  4     5 ms     3 ms     4 ms   xe-1-3-0.er-01-cpt.za.seacomnet.com [105.16.13.65]
  5   164 ms   200 ms   164 ms   ce-0-3-0-0.cr-02-cpt.za.seacomnet.com [105.16.31.2]
  6   532 ms   227 ms   181 ms   xe-0-1-0-8.cr-02-lhr.uk.seacomnet.com [105.16.13.34]
  7   148 ms   145 ms   144 ms   xe-0-0-1-0.br-02-lhr.uk.seacomnet.com [105.16.35.253]
  8   208 ms   148 ms   259 ms   213.130.48.217
  9   154 ms   202 ms   217 ms   ae-13.r24.londen12.uk.bb.gin.ntt.net [129.250.4.25]
 10   663 ms   218 ms   219 ms   ae-5.r24.nycmny01.us.bb.gin.ntt.net [129.250.2.18]
 11   368 ms   266 ms   704 ms   ae-1.r07.nycmny01.us.bb.gin.ntt.net [129.250.3.181]
 12   297 ms   245 ms   312 ms   ae-0.a01.nycmny01.us.bb.gin.ntt.net [129.250.3.214]
 13   300 ms   262 ms   338 ms   ae-3.amazon.nycmny01.us.bb.gin.ntt.net [129.250.201.130]
 14     Request timed out.
 15     Request timed out.
 16     Request timed out.
 17   413 ms   540 ms   267 ms   54.240.229.147
 18     Request timed out.
 19   302 ms   280 ms   351 ms   54.239.108.122
 20   252 ms   239 ms   247 ms   54.239.110.132
 21   372 ms   239 ms   275 ms   54.239.110.133
 22     Request timed out.
 23     Request timed out.
 24     Request timed out.
 25     Request timed out.
 26     Request timed out.
 27     Request timed out.
 28     Request timed out.
 29     Request timed out.
 30   294 ms   222 ms   223 ms   ec2-34-229-32-113.compute-1.amazonaws.com [34.229.32.113]
From the hotel:
Tracing route to peeringdb.com [52.20.70.156] over a maximum of 30 hops:
  1     1 ms     2 ms     1 ms   172.20.0.1
  2     Request timed out.
  3     Request timed out.
  4    16 ms     8 ms    12 ms   105.27.198.165
  5     7 ms     3 ms            xe-0-0-24.es-14-cpt.za.seacomnet.com [105.16.12.9]
  6   112 ms     6 ms     3 ms   xe-0-0-24.es-11-cpt.za.seacomnet.com [105.16.13.61]
  7     2 ms     2 ms     4 ms   xe-1-3-0.er-01-cpt.za.seacomnet.com [105.16.13.65]
  8   142 ms   143 ms   149 ms   ce-0-3-0-0.cr-01-cpt.za.seacomnet.com [105.16.31.1]
  9   146 ms   144 ms   151 ms   xe-0-0-0-1.cr-01-lhr.uk.seacomnet.com [105.16.8.234]
 10   142 ms   142 ms   146 ms   xe-0-0-1-0.br-02-lhr.uk.seacomnet.com [105.16.35.253]
 11   142 ms   142 ms   143 ms   213.130.48.217
 12   143 ms   145 ms   147 ms   ae-13.r24.londen12.uk.bb.gin.ntt.net [129.250.4.25]
 13   280 ms   261 ms   303 ms   ae-5.r24.nycmny01.us.bb.gin.ntt.net [129.250.2.18]
 14   350 ms   219 ms   241 ms   ae-1.r08.nycmny01.us.bb.gin.ntt.net [129.250.5.62]
 15   295 ms   303 ms   305 ms   ae-1.a00.nycmny01.us.bb.gin.ntt.net [129.250.6.55]
 16   237 ms   239 ms   238 ms   ae-0.amazon.nycmny01.us.bb.gin.ntt.net [129.250.201.118]
 17     Request timed out.
 18     Request timed out.
 19     Request timed out.
 20     Request timed out.
 21     Request timed out.
 22   352 ms   235 ms   240 ms   54.239.108.80
 23   225 ms   228 ms   228 ms   54.239.110.130
 24   230 ms   255 ms   237 ms   54.239.110.141
 25   217 ms   218 ms   228 ms   54.239.111.45
 26     Request timed out.
 27   410 ms   304 ms   472 ms   72.21.220.13
 28     Request timed out.
 29     Request timed out.
 30     Request timed out.