ericvaandering opened this issue 11 years ago
Comment by metson on Thu Jun 17 07:05:23 2010
Pinging this ticket on request from Jon Bakken. FNAL has 20-30 tape drives available to CMS and 10TB is insufficient when things are working well. Jon would like 50-100 TB for FNAL.
A second-step optimisation (which should be done "at a later date") is having a central agent that looks at the transfer history into each site and adjusts the window size accordingly.
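The adaptive agent described above could work roughly like this. A minimal sketch, not PhEDEx code: the function name, tuning constants, and the "size the window to hold a couple of days of intake" policy are all illustrative assumptions.

```python
# Hypothetical sketch of the proposed central agent's sizing policy:
# pick a window large enough to hold ~target_days of a site's recent
# transfer intake, clamped to configured bounds. All names/values here
# are assumptions for illustration, not the actual PhEDEx design.

def adjust_window(current_tb, recent_daily_rate_tb,
                  min_tb=10.0, max_tb=100.0, target_days=2.0):
    """Return a new window size (TB) for a site, based on its recent
    daily transfer rate into the site (TB/day)."""
    proposed = recent_daily_rate_tb * target_days
    # Never shrink below the global default or grow past the agreed cap.
    return max(min_tb, min(max_tb, proposed))
```

A site taking 30 TB/day would get a 60 TB window; a quiet site would stay at the 10 TB floor.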
Comment by wildish on Thu Jun 17 07:37:36 2010
I summarise here what I have said elsewhere:
changing the window size in PhEDEx can affect the behaviour and performance of the central agents: they will have more work to do per cycle, which can lead to longer transactions, higher CPU use, possible deadlocks, and so on. It's not at all clear how serious any of this would be, so we need to be wary of it. Nor can we simply raise the limit in the dev instance and see what happens, since dev does not have as complex or interconnected a topology as production.
There are two potential approaches that could be used:
1) careful analysis of the agents to see what the consequences would be, and a planned programme of changes. This would take a lot of time and testing, and make a significant dent in the time we have free for other development (quite a few months). We would need agreement that other things (new website, latency improvements...) would have to take a back seat.
2) set up a testbed for someone else to experiment with. Someone from *Ops could run the testbed using fake transfers but with a 'real' topology, and see what happens as we start to increase the window size. This is still a couple of months' work, but not as much as 1) if it works (and someone else gets to do it for us!)
We have the tools for that experiment. We can set up fairly realistic workflows that are driven automatically by the LifeCycle agent. We do this for validation runs anyway. If *Ops were to take this over they could investigate the operational-space for us, with much less load on the developers.
Of course, if the second option doesn't work (i.e. we find we hit problems raising the limit), then we have to do the first one anyway. Still, having operators trained to do the testing needed for this sort of change would be valuable.
your call...
Comment by metson on Thu Jun 17 08:26:29 2010
Hi, I think the (necessary) testing you describe is step 2. Step 1 is making the window size a per-site parameter in TMDB and updating the central agents to read the window size from the database. That change shouldn't have a significant impact on the service, so long as the value is set to 10 globally, and it's not a big change (adding a hash of node:window size to FileRouter and changing the prepare call to use that instead of a global parameter). Once that's in place it becomes an Integration issue to measure benefits, push to production, etc. I suspect that we don't want T2s to have this handle (at least not in the near future), so it only changes the behaviour at T1s, possibly only FNAL...
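The "hash of node:window size" change described above amounts to a per-node lookup with a global fallback. A hedged sketch, assuming illustrative names (the real change would live in FileRouter's prepare call and read the values from TMDB):

```python
# Illustrative sketch of a per-destination window lookup with a global
# default, as proposed for FileRouter. Node names, the dict shape, and
# GLOBAL_WINDOW_TB are assumptions, not the actual schema.

GLOBAL_WINDOW_TB = 10.0  # current global setting

def window_for_node(node, per_node_windows):
    """Return the routing window (TB) for a destination node, falling
    back to the global default when no per-node override exists."""
    return per_node_windows.get(node, GLOBAL_WINDOW_TB)

# Only FNAL overridden, matching the discussion in this thread:
per_node = {"T1_US_FNAL": 100.0}
```

With the map empty, every node sees the current global value of 10, so behaviour is unchanged until an override is set.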
I'd propose that this would be the next thing for Nicolo to work on after the time-based subscriptions are done, hence assigning this ticket to him. Cheers, Simon
Original Savannah ticket 52947 reported by None on Wed Jul 8 09:16:36 2009.
As tape sizes increase at the T1s, the size limit per window needs to become tuneable. It's not unreasonable for a T1 to want to buffer data prior to writing it to tape, and with tape capacities now reaching 1TB+, window sizes of 10TB will start to be insufficient. It would be nice to be able to set the window size per destination, as opposed to the current global setting.
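The arithmetic behind this request is simple, using the numbers quoted in the thread (1TB tapes, 20-30 drives at FNAL). A back-of-the-envelope illustration, not a specification:

```python
# Back-of-the-envelope check using figures from this thread: if a site
# wants to buffer one tape's worth of data per drive before writing,
# the 10 TB global window is already too small.

tape_size_tb = 1.0       # modern tape capacity cited in the ticket
drives = 30              # FNAL's upper estimate of available drives
global_window_tb = 10.0  # current global window size

buffer_needed_tb = tape_size_tb * drives  # 30 TB to feed every drive once
print(buffer_needed_tb > global_window_tb)
```

This is why the request lands at 50-100 TB for FNAL rather than a modest bump.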