smalot opened this issue 7 years ago
In terms of routes, we need to handle up to 2 million rules in the near future (maybe more). This is due to a huge number of domain names.
Fabio seems to be the most efficient tool to handle our needs, but there are 3 main limitations:
Am I right?
I didn't consider 2M routes a viable use case, but I'm more than happy to make this a reality. What a showcase!
You are correct that fabio cannot handle this dynamically right now because of the consul limitation, which is not going to be lifted or made configurable any time soon. We get this request regularly.
However, in addition to being a data source, consul is the single source of truth and acts as a synchronization point. When consul gets updated, all fabios pull the new version from there. You don't have to know how many fabios there are or where they are. They will do the right thing.
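For illustration, here is a minimal sketch of that synchronization pattern using consul's blocking queries via the official Go client. The KV key `fabio/config` matches fabio's default config path, but the rest is an assumption, not fabio's actual implementation:

```go
package main

import (
	"log"
	"time"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	kv := client.KV()

	var lastIndex uint64
	for {
		// Blocking query: consul holds the request open until the key's
		// ModifyIndex passes lastIndex (or a server-side timeout elapses),
		// so every instance wakes up on the same update without tight polling.
		pair, meta, err := kv.Get("fabio/config", &api.QueryOptions{WaitIndex: lastIndex})
		if err != nil {
			log.Println("kv get:", err)
			time.Sleep(time.Second) // back off on errors
			continue
		}
		if meta.LastIndex == lastIndex {
			continue // timeout expired without a change
		}
		lastIndex = meta.LastIndex
		if pair != nil {
			log.Printf("routing table updated (index %d, %d bytes)", lastIndex, len(pair.Value))
			// parse the new table and atomically swap it in here
		}
	}
}
```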
Therefore, using an API or a dynamic file provider would put the burden on you to ensure consistency and timeliness of updates.
Supporting a different data source like a URL would be another option. Fabio could pull the routing table from a URL and use the ETag, status code, or some other header to determine whether the data has changed. Fabio could also pull a URL from the consul KV store to avoid the polling, but that seems overkill.
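To make the URL idea concrete, here is a hedged sketch of a conditional poller using standard HTTP ETag / If-None-Match semantics. The URL, polling interval, and function name are made up for illustration; fabio has no such data source today:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

// fetchIfChanged pulls the routing table from url and returns the body
// only when the server reports new content (i.e. not a 304 Not Modified).
func fetchIfChanged(url, etag string) (body []byte, newETag string, changed bool, err error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, etag, false, err
	}
	if etag != "" {
		req.Header.Set("If-None-Match", etag)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, etag, false, err
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusNotModified {
		return nil, etag, false, nil // 304: table unchanged
	}
	if resp.StatusCode != http.StatusOK {
		return nil, etag, false, fmt.Errorf("unexpected status %s", resp.Status)
	}
	body, err = io.ReadAll(resp.Body)
	return body, resp.Header.Get("ETag"), true, err
}

func main() {
	etag := ""
	for {
		body, newETag, changed, err := fetchIfChanged("http://example.com/routes", etag)
		if err != nil {
			log.Println(err)
		} else if changed {
			etag = newETag
			log.Printf("routing table changed: %d bytes", len(body))
		}
		time.Sleep(10 * time.Second)
	}
}
```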
I'd still have to look at the speed of the routing parser. It uses regular expressions right now, but I've gained some experience in writing my own parsers.
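As a rough illustration of what a hand-written parser looks like compared to a regex, here is a sketch that reads the `route add <svc> <src> <dst>` form positionally. It is deliberately simplified: the real syntax also has weights, tags, options, and quoted strings:

```go
package main

import (
	"fmt"
	"strings"
)

// Route holds the fields of a simple "route add <svc> <src> <dst>" line.
type Route struct {
	Service, Src, Dst string
}

// parseRoute reads one route line by splitting on whitespace and taking
// fields by position, instead of matching a regular expression.
func parseRoute(line string) (Route, error) {
	f := strings.Fields(line)
	if len(f) < 5 || f[0] != "route" || f[1] != "add" {
		return Route{}, fmt.Errorf("invalid route line: %q", line)
	}
	return Route{Service: f[2], Src: f[3], Dst: f[4]}, nil
}

func main() {
	r, err := parseRoute(`route add mysvc www.example.com/ http://10.0.0.1:8080/`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", r)
}
```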
How often does the routing table change and how many fabio instances are you running?
Also, how many targets do you have per domain? Is it 2M domains or 2M route entries?
@smalot Is this still an issue for you? I'd like to pick this one up.
Hi,
Loading a file with 100'000 routes consumes a lot of resources. Loading the file itself seems to be very fast (5 to 10 seconds according to the logs), but the post-processing step, which seems to parse and index the data, takes much longer (more than 1 minute).
But once that is done, requests are handled quickly (as usual).
Is there a workaround, or is an optimization task planned?
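For reference, a self-contained sketch like the following could help pin down where the time goes. `buildTable` is a stand-in for fabio's real table construction, not its actual API:

```go
package main

import (
	"fmt"
	"strings"
	"testing"
	"time"
)

// buildTable stands in for fabio's route-table construction; it only
// counts "route add" lines where the real code would parse and index them.
func buildTable(config string) int {
	n := 0
	for _, line := range strings.Split(config, "\n") {
		if strings.HasPrefix(line, "route add ") {
			n++
		}
	}
	return n
}

func main() {
	// Generate a synthetic config with 100000 routes.
	var sb strings.Builder
	for i := 0; i < 100000; i++ {
		fmt.Fprintf(&sb, "route add svc%d www%d.example.com/ http://10.0.0.1:8080/\n", i, i)
	}
	config := sb.String()

	// testing.Benchmark times the build step without needing a _test.go file.
	res := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			buildTable(config)
		}
	})
	fmt.Printf("table build: %v per run\n", time.Duration(res.NsPerOp()))
}
```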
Many thanks