Thanks @thiell! Your enthusiasm is definitely a big part of what we hope will make OpenHPC a success.
(sorry for all the questions below, but I'm genuinely interested and curious)
I hadn't heard of clustershell until you mentioned it, and I'm reading through the documentation now. It looks like the feature set overlaps with pdsh, genders and dshbak - what's the primary motivation for a new set of tools? I see "optimized execution algorithms", which I'm assuming maps to "tree mode".
I was also curious about the groups.conf -- I notice it takes the format of 'attr: noderange'. On most systems I've worked on, the noderange is far more collapsible and therefore more straightforward to use on the left-hand side. Like how genders organizes its config file.
Also, I don't see a concept of key=value for sets/groups. Is this unsupported?
Again apologies for the wall of questions. Thanks for pointing this project out.
Hi @JohnWestlund. Thanks for taking a look at it!
First of all, I didn't notice that OpenHPC was also based on EPEL7, so the clustershell package should already be available for OpenHPC users (1.7 has just been pushed to stable). However, it could be interesting to configure it properly at installation time (like group bindings, etc.).
To hopefully answer your questions:
Let me know what you think. We're really open to suggestions and criticism. ;-)
Admins from my team would never accept going back to pdsh+dshbak. When you are used to having the dshbak feature directly integrated into clush, you cannot go back :) This is very convenient and the various display modes are very cool. Having this in Python makes it easier to hack, while still keeping nice performance.
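As a rough illustration of that last point (not from the thread; the node set is a placeholder and this is only a sketch against the ClusterShell 1.7 Python API), the library exposes the same output gathering that clush -b shows on the command line:

from ClusterShell.Task import task_self
from ClusterShell.NodeSet import NodeSet

task = task_self()
task.run("uname -r", nodes="node[1-4]", timeout=10)  # placeholder node set

# Nodes that returned identical output are reported together,
# which is the consolidation clush -b (formerly pdsh | dshbak) performs.
for buf, nodes in task.iter_buffers():
    print("%s: %s" % (NodeSet.fromlist(nodes), buf))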
ClusterShell is like pdsh on steroids.
My 2 cents :)
Sounds very interesting. Let me do some groundwork internally.
Hi,
Thanks for the inclusion! Thinking about it again, I'm not sure why the EPEL package doesn't do the job here, but you should have your reasons... :)
Please also note that v1.7.1 was released 2 days ago and fixes some issues. I would highly recommend upgrading while you're at it.
Also, the SourceForge Source URL is deprecated; please use either the official GitHub release source URL:
or PyPI:
For Fedora and EPEL, I use:
Source0: https://github.com/cea-hpc/%{name}/archive/v%{version}.tar.gz#/%{name}-%{version}.tar.gz
Also, I noticed that you didn't reset the Release number (not a big deal).
Thanks!
Hi @thiell - The EPEL route would definitely have been preferable, but we're supporting SLES with the OpenHPC 1.1 release, so we get to look busier by packaging more stuff. I've updated the spec with the better URL and version. The build service happily clobbers release numbers, so I usually just leave them as I find them. Thanks for the submission. Once we get something in the install guide regarding clustershell, we may ask you to take a quick look just to make sure we didn't mess anything up too badly.
Cheers
Thank you @crbaird for the explanation; that makes sense, and I forgot about SLES. However, I still have some concerns about the way clustershell will be made available through "module". Indeed, clustershell provides a framework for cluster admin tools (like shine). Do admins have to use module with OpenHPC? (e.g. will "module load clustershell" be needed to use it?)
I know the module is controversial, and in fact it's inconsistent with how we package pdsh. We decided to punt on finding consensus on how to install to a non-default location while still making things generally available. The (shortly forthcoming) OpenHPC steering committee will make the call, and we'll make all the packages conform to their decision. I think there will be some packages where an admin will load a module (Intel cluster checker, for example), but maybe they will decide to have an admin script in profile.d for things like clush. We very much appreciate the dialog, and we want the process to be transparent. We don't have much prior experience with clustershell, so your input on standard usage models is quite valuable. For instance, do non-privileged users use it much?
Hi @crbaird - ok, I see, and thanks for taking the time to explain the process in more detail - much appreciated. As you don't know clustershell, I would first recommend that you check out the few slides from https://fosdem.org/2016/schedule/event/hpc_bigdata_clustershell/ - it's very short and gives a brief overview of clustershell.
As I said, I'm not sure at all that the "module way" is a good choice, either for ohpc or for clustershell itself... Of course clustershell targets HPC sysadmins first. That's why I guess it should be a system building block and not a module. Advanced nodeset support and node group bindings to external node groups are some of the most successful features; they have been used for a long time by vendors like Bull or Seagate. Regarding parallel command execution, pdsh is well known and does the job, so clush hasn't replaced it completely (yet). However, we provide more features, clustershell is an active project, and we finally got rid of dshbak (just use clush -b)! :)
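For readers new to clustershell, here is a minimal sketch of the nodeset side (hypothetical node and group names; "@compute" assumes a group of that name is defined in the configured group bindings):

from ClusterShell.NodeSet import NodeSet

nodes = NodeSet("node[1-9]")
nodes.update("node[20-29]")        # set union with another folded range
nodes.difference_update("node5")   # remove a single node
print(nodes)                       # folded form: node[1-4,6-9,20-29]
print(len(nodes))                  # 18
print(NodeSet("@compute"))         # resolved through the node group bindings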
Note that in the past, we had some discussions with DoE labs and as some of their clusters don't have ssh but only rsh variants like mrsh, we added support for that too: http://clustershell.readthedocs.org/en/latest/release.html?#rsh-worker
For more advanced usage, the tree mode is easy to set up and very powerful for connecting to remote nodes through ssh gateways or bastion hosts, but it was originally developed for scalability on large systems - you just need to set up a topology.conf on the head node - more info here: http://clustershell.readthedocs.org/en/latest/tools/clush.html#tree-mode - I'm pretty sure it's being used by Atos/Bull on their new Sequana X1000 systems.
re: non-privileged users, surprisingly we have several reports of clustershell being used in user mode, at least more than we had originally expected. In 1.7, we added support for user installation through pip install --user clustershell and per-user config overrides. Right now I remember two use cases: clush used to access docker containers, and a user from Stanford who used the NodeSet Python class for his job scripts (we put an example here after discussing with him: http://clustershell.readthedocs.org/en/latest/guide/examples.html#using-nodeset-with-parallel-python-batch-script-using-slurm ).
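A minimal sketch of that kind of job-script usage (the default node list and names are placeholders; see the linked page for the full example):

import os
from ClusterShell.NodeSet import NodeSet

# Expand the folded node list that Slurm hands to the batch script
nodeset = NodeSet(os.environ.get("SLURM_JOB_NODELIST", "node[01-04]"))
print("job runs on %d nodes" % len(nodeset))
for node in nodeset:               # iterate over individual host names
    print(node)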
I will be happy to answer any other questions you may have.
Tests passing in CI.
Hi,
I just saw your announcement, very nice initiative! Any plans to integrate clustershell? We just released a new stable version (1.7). Apart from the Python library, CLI tools like nodeset are useful for supporting node groups; clush is a full-featured parallel shell, convenient in an HPC environment, that can replace pdsh. It is used by some vendors and many HPC sites now (like CEA, Stanford University, etc.). And if you use Lustre, it's a building block of shine, an open source tool to manage Lustre filesystems and their clients (https://sourceforge.net/p/lustre-shine/wiki/Home/).
I'm part of the clustershell dev team, so let me know if you plan to integrate it or if you need help. The latest version is available here: https://github.com/cea-hpc/clustershell/releases/tag/v1.7 and this build will soon be in epel7 ( http://koji.fedoraproject.org/koji/buildinfo?buildID=698400 ).