Original issue 781 created by arlake228 on 2013-09-24T11:40:16.000Z:
Create a simple tool (perhaps based on dd or any number of the commercial packages) that can run a self-test of disk performance. Once this number is known, have it sent up in the sLS information (e.g. along with other self-identifying data such as processor count/speed). This is step one toward addressing disk performance in the framework.
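As a rough illustration of what such a self-test could measure, here is a minimal sketch that times a sequential write with an fsync, analogous to a dd-based probe. The function name and sizes are assumptions for illustration, not part of any existing tool.

```python
import os
import tempfile
import time

def disk_write_selftest(size_mib=64, block_size=1 << 20):
    """Hypothetical sketch: time a sequential write (fsync included)
    and return throughput in MiB/s, like a dd-based self-test would."""
    block = b"\0" * block_size
    fd, path = tempfile.mkstemp(prefix="disk_selftest_")
    try:
        start = time.monotonic()
        for _ in range(size_mib * (1 << 20) // block_size):
            os.write(fd, block)
        os.fsync(fd)  # flush to the device so we measure the disk, not the page cache
        elapsed = time.monotonic() - start
    finally:
        os.close(fd)
        os.unlink(path)
    return size_mib / elapsed
```

The resulting number is what would be attached to the host's sLS record alongside processor count/speed.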
Step two would be a tiny standalone tool (perhaps based on the Periscope 'BLIPP' product) that can do the same for a cluster node or storage resource. That information is then registered to a nearby sLS as well.
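Registration could look something like the sketch below, which builds a JSON record and POSTs it to a lookup service. The endpoint path and record fields here are assumptions for illustration only, not the actual sLS registration schema.

```python
import json
from urllib import request

def build_record(host, write_mbps):
    """Build a hypothetical sLS-style JSON record for a disk self-test.
    The record type and field names are assumed, not the real schema."""
    return {
        "type": ["disk-selftest"],                      # assumed record type
        "host-name": [host],
        "disk-write-throughput-mbps": [str(write_mbps)],
    }

def register(sls_url, record):
    """POST the record to an assumed sLS registration endpoint."""
    req = request.Request(
        sls_url.rstrip("/") + "/lookup/records",        # assumed endpoint path
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

A cluster-node or storage-resource probe would call `register()` against whichever sLS instance is nearest.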
The end goal is a fully decorated topology map that an infrastructure like PanDA or PhEDEx could use to see where the bottlenecks on a path are (even if they are at the ends).