theneva closed this issue 5 months ago
Thanks for letting us know! The key question is how long it would take to release a new version of those modules if the need arises.
@fastify/core wdyt?
I'm not sure. I think it depends on how responsive upstream is being recently across all maintained projects.
Indeed. I'm 100% in favor of deprecating this fork once upstream is ahead of it.
I agree. And we can always fork again if need be.
Can you open a PR to move back to the original module in Fastify?
I don't have an even somewhat complete overview of the Fastify project, but I sent https://github.com/fastify/fastify/pull/3115 and it seems to work :smile:
That was quick! Fastify now uses the original proxy-addr
module instead of this fork. Are there any other downstream projects that need updating?
By the way, I'm not suggesting anything more drastic than maybe updating the README and archiving this repo 😅 I definitely think the repo should still be available (as opposed to deleted) for documentation purposes, since older versions of Fastify depend on it.
I'm not going to archive or delete this repo. However, I'd be happy to merge a change to the README. Would you like to send one?
I worked on this package and I think there are still parts where we can improve the performance. With my PR #52 we should already get perf gains.
@Uzlopak Maybe we should try to contribute them upstream as well?
I am not interested in contributing this to upstream, because I expect that PRs would get merged in 2027.
I agree with not deprecating, since in the future we might need to fork and continue again.
But should we at least archive this package?
Asking for the v5 updates
No! We use this package in other plugins, and our fork is faster than the original. Also, I think there are possible huge performance gains. And I mean huge. But we have no resources to improve it.
Imho we have a bottleneck in Fastify whenever we need the feature of this library. When we benchmark, we never benchmark with a real-world example. In the real world we would run dockerized instances of Fastify servers behind a load balancer. So if we set trustProxy, then request.ip will be processed by this plugin, as it needs to find the first acceptable IP. For this it parses and checks the IPs, etc. This is expensive, and to be honest, repetitive. So I expect a significant performance drop in a case where we are using trustProxy and send request.ip back as the response.
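To make that per-request work concrete, here is a minimal, hypothetical sketch of the kind of resolution this library performs when trustProxy is set. The real proxy-addr also handles CIDR ranges and IPv6 (which is where the cost comes from); this sketch only does exact IPv4 matching, and the function name is made up for illustration:

```javascript
// Hypothetical, simplified version of the client-IP resolution that
// happens on every request when trustProxy is enabled. Addresses are
// walked from the connection outward; the first untrusted hop is
// treated as the client.
function resolveClientIp(remoteAddress, xForwardedFor, trusted) {
  // X-Forwarded-For lists hops client-first, so reverse it to walk
  // from the closest proxy back toward the client.
  const addrs = [remoteAddress, ...xForwardedFor.split(/\s*,\s*/).reverse()];
  let client = remoteAddress;
  for (const addr of addrs) {
    client = addr;
    if (!trusted.has(addr)) break; // first untrusted hop is the client
  }
  return client;
}

const trusted = new Set(['10.0.0.5', '10.0.0.9']);
console.log(resolveClientIp('10.0.0.5', '203.0.113.7, 10.0.0.9', trusted));
// -> 203.0.113.7
```

Doing this parsing and matching from scratch on every request is the repetitive cost described above.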
There are two ways to improve this:
But to this day nobody has complained about this potential perf bottleneck. Maybe I am imagining this issue. Or maybe nobody sees it.
A benchmark script would be interesting.
Upstream is basically a dead end, and after the xz fiasco the maintainer is not willing to hand maintenance over to others.
I see, thanks for the explanation
I guess this specific issue can be closed then? :)
I stumbled upon https://github.com/fastify/proxy-addr and https://github.com/fastify/forwarded today and found the situation rather confusing, as they don't seem to be used by default in Fastify, yet are somehow still maintained. I have a use case with a few proxies where performance is important, so I was curious whether they provide any benefit. It seems both Fastify and Express rely on the unpatched proxy-addr at the moment.
I used a very basic script:
```js
// let proxyaddr = require('proxy-addr');
// or:
let proxyaddr = require('@fastify/proxy-addr');

// Build a trust list: one loopback address plus 26 single IPs
// and 26 /24 subnets (53 entries total).
let trusted = ['127.0.0.1'];
for (let i = 0; i < 256; i += 10) {
  trusted.push(`192.168.1.${i}`);
  trusted.push(`192.168.${i}.0/24`);
}
console.log(trusted.length); // 53

let trust = proxyaddr.compile(trusted);

let start = performance.now();
for (let i = 0; i < 10000; i++) {
  for (let j = 0; j < 256; j++) {
    trust(`192.168.1.${j}`);
    trust(`192.168.${j}.${j}`);
    trust(`192.168.${j}.12`);
  }
}
let end = performance.now();
console.log(end - start);
```
This runs in about 10 s on my machine for both packages, so at the moment I don't see any improvement from using the fork. That works out to roughly 256k loop iterations (three trust checks each) per second against the 53 IPs and subnets. Not great, but considering you do this check only a few times per request (or only twice if you have a single proxy), I think it should be okay for most applications.
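If that per-request cost ever does become a problem, one possible mitigation (a sketch of an idea, not something either package currently does; the wrapper name is hypothetical) would be memoizing the compiled trust function, since the hop addresses of a fixed set of load balancers repeat across requests:

```javascript
// Hypothetical memoization wrapper around a compiled trust function.
// Hop addresses from a fixed set of proxies repeat constantly, so a
// small cache turns repeated subnet matching into a Map lookup.
function memoizeTrust(trust, maxEntries = 1024) {
  const cache = new Map();
  return function cachedTrust(addr) {
    if (cache.has(addr)) return cache.get(addr);
    const result = trust(addr);
    if (cache.size >= maxEntries) cache.clear(); // crude eviction
    cache.set(addr, result);
    return result;
  };
}

// Usage with a stand-in trust function:
const trust = (addr) => addr.startsWith('192.168.');
const cached = memoizeTrust(trust);
console.log(cached('192.168.1.1')); // true (computed)
console.log(cached('192.168.1.1')); // true (served from the cache)
```

Whether the extra Map traffic actually wins would of course need benchmarking against the compiled matcher itself.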
jshttp is built for correctness, not performance. Most of our forks from jshttp exist because of the long wait times for issues to get resolved. We switch between jshttp and our own forks based on the state of the issues at jshttp.
Hi there! :wave:
This isn't really a feature proposal, but none of the issue templates really fit, and since it's very repo-specific I don't think it belongs in the help repo.

The jshttp forwarded issue linked as justification in this fork's README was recently resolved in the forwarded module (https://github.com/jshttp/forwarded/pull/9) and released as version 0.2.0. It was pulled into the jshttp proxy-addr module in https://github.com/jshttp/proxy-addr/commit/1cdd2f78e0fcc23ceae4723e6b837345c2499772#diff-648afe3d986261d8f2015b2b131b0e4a448d4dc6946cfde1a7a836876cee255eR4-R5 and included in proxy-addr version 2.0.7.

This might mean there's no need to maintain this fork anymore, and I figured I'd give you a heads-up :smile: