intel / openlldp


Startup timing out when there are hundreds of network interfaces #83

Open gonzoleeman opened 2 years ago

gonzoleeman commented 2 years ago

I have had two end-user issues now with lldpad timing out on startup because there are hundreds of network interfaces and the code tries to check them all.

In both cases, most of the network interfaces were for VMs and were not needed for DCB.

Is there any way to mask off the interfaces we don't want to check, in order to keep the startup time down? Or any other way to either avoid these copious interfaces, or at least not time out waiting for them to be checked?

Before I started looking at how to add a "mask", I thought I'd ask.

gonzoleeman commented 2 years ago

I should add that, when this timeout with lots of NICs happens, the lldpad.conf configuration file seems to get corrupted. I suspect this is because systemd kills lldpad when it takes too long, and lldpad isn't careful about writing the config file back out. But perhaps that's a separate issue.

gonzoleeman commented 2 years ago

Anyone notice this issue?

I'm planning on adding a command-line param that can specify one or more NIC-name patterns. If specified, lldpad would only check NICs that match the regular expression passed in.

I maintain the SUSE version of this software, so if I can't get any response I will have to switch to different LLDP software, or branch my own version of this.

apconole commented 2 years ago

Lee Duncan writes:

Anyone notice this issue?

I'm planning on adding a command-line param that can specify one or more NIC-name patterns. If specified, lldpad would only check NICs that match the regular expression passed in.

I maintain the SUSE version of this software, so if I can't get any response I will have to switch to different LLDP software, or branch my own version of this.

Sorry - yes, I've seen it as well. Do we know exactly what is taking the time on startup? Maybe there is an issue with the way the config file parser works?

gonzoleeman commented 2 years ago

Aaron Conole writes:

Sorry - yes, I've seen it as well. Do we know exactly what is taking the time on startup? Maybe there is an issue with the way the config file parser works?

Hi. Thank you for your reply.

I believe that, in the long run (after months), the config file grows so large that it takes over an hour just to read and process it: 70 minutes in my case, for an 88M config file. I can ftp it to you, if you like.

But more than that, it seems to be populated with way too many interfaces! The nearest_customer_bridge, nearest_nontpmr_bridge, and lldp sections each have about 100k interfaces listed! This particular end user is running this on a VM host that has a lot of VMs (and NICs) coming and going, and 99% of the entries in this config file are "empty".

I thought at first that a real database would be an improvement over what is called a config file (but is really a database), but I don't think that addresses the real issue here, which is having so many interfaces.

If you'd like to see the config file, let me know where I can ftp it to. If you want some data from it, let me know, and I can check. "vi" only takes 10 seconds to start up on this file if I'm in read-only mode.

apconole commented 2 years ago

Lee Duncan writes:

I believe that, in the long run (after months), the config file grows so large that it takes over an hour just to read and process it: 70 minutes in my case, for an 88M config file. [...] The nearest_customer_bridge, nearest_nontpmr_bridge, and lldp sections each have about 100k interfaces listed!

Yes - I think there is probably some kind of heavy operation where we load all this configuration data into memory.

If you'd like to see the config file, let me know where I can ftp it to. If you want some data from it, let me know, and I can check. "vi" only takes 10 seconds to start up on this file if I'm in read-only mode.

Please do send it.


gonzoleeman commented 2 years ago

@orgcandman wrote:

Please do send it.

It's 5.5M compressed (with xz). Where would you like me to ftp it to?

gonzoleeman commented 2 years ago


Ok, I managed to get the compressed file onto my Google Drive: link here

The file is the only one in the folder.

gonzoleeman commented 2 years ago

Sorry, closed this issue by mistake! (Doh!)

apconole commented 2 years ago

Lee Duncan writes:

Sorry, closed this issue by mistake! (Doh!)

No worries - thanks for the file, I've downloaded it. I will probably get to this next week (busy with OvS conf).


SrijitNair commented 2 years ago

@gonzoleeman I am facing a related issue in #85. When you have so many interfaces, does lldp run out of open files and crash? Let me know if you find a fix.

gonzoleeman commented 2 years ago


@orgcandman Did you make progress?

My customer really wants this addressed so I've started playing with it myself.

Besides updating our code base to the latest from this repo, I have also made two changes so far:

  1. Add a command-line option to set the eloop timeout to some value larger than 2 seconds. It still defaults to 2 seconds though.
  2. Add unique return values when the lldpad daemon encounters corruption in the config file (or otherwise can't read the configuration).

I can push my changes to a branch, if you would like to look at them, but conceptually they are fairly simple.

What I would still like to add is the interface filtering mentioned above, i.e. only checking NICs that match a given pattern.

It seems like other end users are having issues with too many NICs as well, so I think this needs to be addressed.

Thoughts?

gonzoleeman commented 2 years ago

@gonzoleeman I am facing a related issue in #85. When you have so many interfaces, does lldp run out of open files and crash? Let me know if you find a fix.

If you have this issue as well, is there any chance you could help me test? I do not have the equipment to reproduce this issue at all.

In my case, my customer has thousands of virtual NICs, since having thousands of actual NICs is very difficult, physically. Is that the case with you, as well? I'd love to have a setup with lots of virtual NICs that I could test against, as it's difficult for my customer to test.

So if you'd like to help test, let me know.

gonzoleeman commented 2 years ago

Ping? @orgcandman ?? Anyone?

fabrizio8 commented 2 years ago

@gonzoleeman,

I can work on setting up a suitable test environment for this and help you test.

gonzoleeman commented 2 years ago

@gonzoleeman,

I can work on setting up a suitable test environment for this and help you test.

Excellent! I have pushed my test/RFC branch to https://github.com/gonzoleeman/openlldp, branch-1.1-with-lduncan-changes. There is just one new commit (based off of branch-1.1).

The changes:

  1. return a unique value if there's a problem with the config file, and
  2. make eloop-timeout configurable to larger than 2 seconds

For point 1, the idea is to actually replace the corrupted config file with an empty one if corruption is detected. Sounds cheesy, but it seems to work.

For point 2, again, this is a cheesy big hammer IMHO, since the loop really shouldn't take more than 2 seconds if everything is done "right". But for thousands of network interfaces, it seems to help, as timing out sucks.

Lastly, I'd also like to be able to filter out virtual network interfaces, but I haven't done anything on that yet. I could certainly use opinions on both how to do this and what kinds of interfaces should or could be filtered out.

Thanks!

gonzoleeman commented 2 years ago

On May 31, 2022, at 7:29 AM, fabrizio8 wrote:

@gonzoleeman,

I can work on setting up a suitable test environment for this and help you test.

Hi @fabrizio8! Any progress? Can I help in any way?

— Lee D

anish commented 1 year ago

@gonzoleeman I can help test this if you're still looking at it

apconole commented 1 year ago

@gonzoleeman and @anish - what if we used an attribute like "ephemeral" and a new tool (something like lldpconf-clean ephemeral) that we can call on startup to delete all the interfaces we don't need? Unfortunately, @fabrizio8 left the company and so this work is getting postponed.

apconole commented 1 year ago

This is pretty bad, actually: libconfig does a linear walk over all the settings during config read, and each lookup seems to start from the beginning. We need a replacement, because I don't think anyone uses libconfig the way we do.

apconole commented 1 year ago

Maybe we can leverage https://github.com/yrutschle/conf2struct? It was developed because the sslh program ran into similar issues. If not, we'll need to develop an alternative to the libconfig parsing. Based on perf, the stalls are in libconfig itself.

anish commented 1 year ago

I was thinking about this differently: the startup issues do need to be solved, but separately, lldp needs interface filtering so it doesn't write every interface it sees into the config file. That's the issue in our use case (k8s).

gonzoleeman commented 1 year ago

On Mar 8, 2023, at 10:26 AM, Anish Bhatt wrote:

I was thinking about this differently: the startup issues do need to be solved, but separately, lldp needs interface filtering so it doesn't write every interface it sees into the config file. That's the issue in our use case (k8s).

I agree with this, i.e. I'd really like to see filtering, perhaps using regular expressions, or based on some property of the interface. But even so, I'd like to see the scanning for interfaces be a bit more efficient, since even with filtering there could be many interfaces.

— Lee Duncan

apconole commented 1 year ago

We certainly need both. However, the load-time issue really does need to be resolved, because I can see it being a major blocker in virtual environments. The way libconfig is set up makes it not very efficient or usable for storing details like we have. Actually, I'm starting to wonder if we shouldn't just switch to a sqlite3 DB - I think load and seek times would be significantly improved, and then we can work on figuring out how to filter away interfaces we don't want.

anish commented 1 year ago

We'd lose the ability to configure lldp directly via the config file, though. I don't know if that is an officially supported method, but it works for the brave of heart.

apconole commented 1 year ago

well, you can always run sqlite3 /path/to/config and manually execute update / insert / delete commands.

I also agree; we should consider the pros and cons of the approaches before just choosing.

gonzoleeman commented 1 year ago

A test with a thousand NICs would be enlightening.

-- Lee-Man Duncan
Sent from my iPhone, dude

apconole commented 1 year ago

I haven't started any work on this yet - but does anyone see a reason we can't make this work? Just wondering, because I guess we should be able to build a shim-like layer that lets us map to something else.

There are some unbounded config sections. Actually, a nice side effect of moving to some other backend is that it will also fix ifnames that contain '.', and I think sqlite3 should be significantly faster than libconfig in these extreme cases, while still being "fast enough" in the base case. But maybe we can do some testing. For example, I have a sqlite3-based shell application that does CI and monitors thousands of patches from a patchwork API and from various CI entrypoints; the bottleneck there is always the curl requests to the servers. I plan to spend some time developing a quick-and-dirty shim that we can use at least to test this out.