I'll just try and find the time to throw together a basic implementation to test against.
PowerShell is an interesting point. I do not really have any PowerShell experience though. I tossed that to the IT guys here.
Regards, Thorsten
for storing the python commands in an external file, i think that a yaml include directive might be the best option
Ah yes, that sounds like a good solution. I was not aware of this feature in PyYAML and that would solve the problem nicely.
all that said, this feature adds a new level of complexity to things.
Perhaps the installed package.yaml could just have all the include statements baked out at install time?
I am not so concerned with readability of a released package.yaml file (I don't think anyway), more with management during development. At release time the include directives could be flattened into the top level file, and we continue to release a single package.yaml as we do now. Does this mitigate some of the management complexities?
I don't think I will have time to go down that road during this development sprint.
I'll make a new issue on GitHub for this feature.
I should add as well; I know I'm throwing out these various points/features, but I very much do not expect you to go away and implement them. I'm more than happy to pick up a few to work on myself; just putting them out there to discuss while the topic is open.
the main problem i see with this feature is that the files being sourced are shell-specific
Four potential solutions (and I'm not sure any of these are any good):
source(bash="myfile.sh", tcsh="myfile.csh", dos="myfile.bat")
or:
if this.shell == "sh":
    source("myfile.sh")
elif this.shell == "tcsh":
    source("myfile.csh")
elif this.shell == "dos":
    source("myfile.bat")
or:
source("myfile") # and look for the specific extension automagically based on the target shell type.
or: per variant commands (see below).
All are lame for different reasons, I admit.
if rez behaves differently depending on the shell you load your package in then we've sort of failed at shell-agnosticism
The use case that made me think of this particular feature is Houdini, where you are required to source houdini_setup before startup. SideFX provide a bash and tcsh version of this script (and I assume a bat if you download the Windows package).
If this file doesn't exist for the target shell then by definition the environment is broken anyway; presumably Houdini won't start. Likewise for RV: there is a similar setup shell script that is sourced automatically as part of the RV startup process. If you run a shell that this script doesn't support then RV won't start and your package is broken anyway. Perhaps we need a way of saying "this package can only target these particular shell types"?
The alternative would be replicating the contents of this file in that package.yaml commands so that they do become shell agnostic?
We are already leaving the door open for a single package to behave very differently at runtime by including conditionals in our language (which include per-variant commands as discussed previously). Looking back at the examples in this thread, most would result in the package behaving differently based on the runtime context (irrespective of shell type). While Rez is shell agnostic, a greater emphasis is placed on the developer maintaining parity; the first `if` that appears in the command block has the potential to break the concept.
Perhaps the target shell needs to have a stronger link to the operating system variant? Consider a boost package that only contains a Linux variant. Can I ask Rez to build a Windows environment for this package? Yes, because Rez is shell agnostic and the new commands allow me to do this. No, because the package inherently doesn't support Windows; `rez-env boost windows` should result in a conflict.
Hey there,
so i started a minimal implementation of WinShell. See the current code here: https://github.com/instinct-vfx/rez/blob/master/python/rez/rex.py
Here is a quick test:

shell = WinShell()
print shell.setenv("envkey", "envvalue")
print shell.unsetenv("envkey")
print shell.prependenv("envkey", "envvalue")
print shell.appendenv("envkey", "envvalue")
print shell.alias("alias", "command")

And the resulting output:

set envkey=envvalue
set envkey=
set envkey=envvalue;%envkey%
set envkey=%envkey%;envvalue;
doskey alias=command
A few notes here: "set" seems to work as expected (unless I have the wrong expectation). It sets the env variables in the current shell, persists in subshells, but is local to the shell context. "doskey" seems to behave similarly to alias and also persists in subshells. It does offer parameters though; I am unsure how alias works in that regard and whether this needs to be supported.
Could someone give a bit of insight into what exactly "source" does in our case? There is no direct equivalent to my knowledge, besides maybe "call" or the like.
I added a private method to handle special characters (namely ["^", "<", ">", "|", "&"]) which need to be prepended with a ^ in environment variables. Other than that there is no need for quoting or escaping.
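For illustration, the escaping rule described here could look something like the following sketch (function names are made up; this is not the actual rez.rex code):

```python
# Sketch of cmd.exe-style escaping for env values: prefix the
# metacharacters ^ < > | & with a caret so they survive in "set" lines.
def escape_cmd_value(value, specials=('^', '<', '>', '|', '&')):
    # escape '^' first so we don't double-escape our own escape characters
    out = value.replace('^', '^^')
    for ch in specials:
        if ch != '^':
            out = out.replace(ch, '^' + ch)
    return out

def setenv(key, value):
    # emit a DOS-shell "set" command with the value safely escaped
    return 'set %s=%s' % (key, escape_cmd_value(value))

print(setenv('FOO', 'a&b'))  # set FOO=a^&b
```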
Regards, Thorsten
Hey Mark,
Yeah I've had similar thoughts on this. I don't think it would be quite the same as a source function really, but I see the analogy. In any case - a mechanism for moving the commands into their own py file. I concur. Although 'import' is confusing for obvious reasons, 'include' perhaps.
Cheers A
On Wednesday, November 6, 2013, Mark Streatfield wrote:
With the introduction of multiline string blocks in the commands section of the package.yaml, and more complex logic (apparently only limited by python itself), should we make an include or import function available?
I like the idea that the package.yaml file is easily human readable, and also that I can use an IDE with syntax highlighting etc when crafting the commands with richer logic. This implies it might be preferable to keep the actual commands outside of the package.yaml, with the package.yaml file containing only import("commands.py").
I guess this would actually be a rez-enabled-python equivalent of the source function we discussed earlier to source other bash/tcsh/bat scripts when the environment is created.
What do you think?
— Reply to this email directly or view it on GitHub: https://github.com/nerdvegas/rez/issues/22#issuecomment-27918704
On Wednesday, November 6, 2013, Chad Dombrova wrote:
Hey Mark, I was thinking about something along the same lines.
for storing the python commands in an external file, i think that a yaml include directive (http://stackoverflow.com/questions/528281/how-can-i-include-an-yaml-file-inside-another) might be the best option. if the file has a .yaml or .yml extension, then the directive would load the file as a subdoc as in the example, otherwise it would read the contents of the file as a multiline string.
e.g.
name: foo
commands: !include commands.py
i think a python include or source function like you describe (we can't use import because that is protected) might be difficult to do, because it would not get run until the "commands" string is actually executed via the python exec statement, at which point we would need to get that additional code into the local namespace where all the rez objects are setup. essentially, the behavior we want is a bash source or c include, but python does not natively support this.
i think if we reframe the problem as "including" a file into the yaml document it makes more sense. also, it could be pretty damn handy in other cases, like including chunks of configurations that are basically the same between various versions of an application.
I wasn't aware of the multiline import behaviour of yaml when including non yaml content. I like it.
all that said, this feature adds a new level of complexity to things. for example, we'll need to install these files during rez-build/rez-release which means we'll need a way to either manually specify or automatically determine what external files a package.yaml references. should we ensure that the included files always reside _in the same directory or below_ the package.yaml, or do we allow installing and potentially overwriting higher level files that may be referenced by other packages or versions of the same package, and thus change their behavior? it could get pretty messy and I don't think I will have time to go down that road during this development sprint.
I am very tempted to say that included files must be in the same dir or under. Packages by definition should be self contained... Pulling in pkg config code from some other location entirely defeats the purpose. Sure, maybe you could use this to avoid repeated config settings, but even then, imo it's much better that all the info you need is right there in the package.
I agree that it's now more complex because we need to find included yaml content in order to install them, does yaml itself give this info in its API? (Sorry I'm not in a position to check myself ATM).
ok, on to the proposed source or include command for shells.
the main problem i see with this feature is that the files being sourced are shell-specific: you can't source a .sh file from tcsh (well, you can, but you'll get errors). we could use the extension to detect what type of file is being included and only run it if the commands are "interpreted" into a shell that supports it.
for example:
source('{root}/scripts/myfile.sh')
if this python code was used to generate bash or sh commands, myfile.sh would be sourced, but if it was used to generate tcsh commands it would be skipped. there are obvious problems with this solution: namely, if rez behaves differently depending on the shell you load your package in then we've sort of failed at shell-agnosticism.
Chad maybe this isn't a problem at all. Consider - if a pkg supports multiple OSs, then it would do so via variants, and in this case, what we need to be able to do is supply per variant commands, which is a known issue anyway. If you have just the one OS, then you're already sourcing the right script anyway. So really I think Rez will give us the ability to side step OS agnostic problems here, and we shouldn't overthink it.
lastly, Windows.
hi Thorsten! i'm pretty sure that rez would not work on windows without cygwin or something similar at this point, however, it is getting closer. i've slowly been converting all path manipulations to use os.path.join but the interpreting of python commands into DOS shell is not working yet. i've put in some reference code from our in-house system to help along any brave soul who wants to try to tackle this task, but it won't be me :) i don't have a windows machine at home or at work. if you want to try your hand at implementing this, i'd be glad to help you along. the relevant python module is rez.rex in my shell-agnostic branch.
Looking forward to seeing what you've done here chad.
Cheers all A
I wasn't aware of the multiline import behaviour of yaml when including non yaml content. I like it.
it's not a native feature. PyYaml supports creating custom constructors, which can call arbitrary code. so it is possible to create an "include" constructor that does whatever we want with the argument passed to it. yes, this custom constructor could most likely keep track of all files loaded.
mark's idea of baking the contents into the yaml file is the easiest by far, but even just doing a simple yaml.dump(yaml.load(f)) produces pretty ugly results. i still find myself looking at these released yaml files fairly often, so i'm not sure how much i would love looking at the default formatting.
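For what it's worth, one way to make the baked output less ugly might be a custom string representer that keeps multiline values (like a flattened commands block) in literal block style. A sketch, not a settled solution:

```python
# Dump multiline strings in literal "|" style so a baked "commands"
# block stays readable instead of collapsing into quoted escapes.
import yaml

def _str_representer(dumper, value):
    style = '|' if '\n' in value else None
    return dumper.represent_scalar('tag:yaml.org,2002:str', value, style=style)

yaml.add_representer(str, _str_representer)

doc = {
    'name': 'foo',
    'commands': "env.PATH.append('{root}/bin')\nalias('foo', 'foo-bin')\n",
}
print(yaml.dump(doc, default_flow_style=False))
```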
Chad maybe this isn't a problem at all. Consider - if a pkg supports multiple OSs, then it would do so via variants, and in this case, what we need to be able to do is supply per variant commands, which is a known issue anyway. If you have just the one OS, then you're already sourcing the right script anyway. So really I think Rez will give us the ability to side step OS agnostic problems here, and we shouldn't overthink it.
per-variant commands and per-shell commands are two very different things. that said, i'm not completely opposed to allowing different behavior per shell, but i'm not really in love with the idea.
if this.shell == "sh":
    source("myfile.sh")
elif this.shell == "tcsh":
    source("myfile.csh")
elif this.shell == "dos":
    source("myfile.bat")
that won't work because the shell is not known during "recording". a slight variation that would work:
shells.bash.source("myfile.sh")
shells.tcsh.source("myfile.csh")
shells.dos.source("myfile.bat")
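A minimal sketch of how that recording variation could work (class and attribute names are made up for illustration): each `shells.<name>` call just records a command, and only the commands matching the target shell are kept at bake time.

```python
# Record per-shell commands; nothing executes until bake time.
class _ShellRecorder(object):
    def __init__(self, name, commands):
        self._name = name
        self._commands = commands

    def source(self, path):
        # record (shell, action, argument) for later interpretation
        self._commands.append((self._name, 'source', path))

class Shells(object):
    def __init__(self):
        self.commands = []
        for name in ('bash', 'tcsh', 'dos'):
            setattr(self, name, _ShellRecorder(name, self.commands))

    def bake(self, target_shell):
        """Return only the commands recorded for the target shell."""
        return [(act, arg) for shell, act, arg in self.commands
                if shell == target_shell]

shells = Shells()
shells.bash.source("myfile.sh")
shells.tcsh.source("myfile.csh")
shells.dos.source("myfile.bat")
print(shells.bake('tcsh'))  # [('source', 'myfile.csh')]
```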
The alternative would be replicating the contents of this file in that package.yaml commands so that they do become shell agnostic?
this is actually what we currently do with our in-house system. i'm not advocating this as the only solution, but it works and I like being able to see and modify what each application is doing.
Could someone give a bit of insight what exactly "source" does in our case? There is no direct equivalent to my knowledge besides maybe "call" or alike.
"source" brings text from another file into the current namespace and executes it in the current process.
On Sat, Nov 9, 2013 at 1:09 PM, Chad Dombrova notifications@github.com wrote:
I wasn't aware of the multiline import behaviour of yaml when including non yaml content. I like it.
it's not a native feature. PyYaml supports creating custom constructors, which can call arbitrary code. so it is possible to create an "include" constructor that does whatever we want with the argument passed to it. yes, this custom constructor could most likely keep track of all files loaded.
mark's idea of baking the contents into the yaml file is the easiest by far, but even just doing a simple yaml.dump(yaml.load(f)) produces pretty ugly results. i still find myself looking at these released yaml files fairly often, so i'm not sure how much i would love looking at the default formatting.
Yes this formatting issue is a pain. I've hit this before, in a different project. I recall there's something I did to fix it somewhat, I think it was a bit of a hack. I'll try and find it.
Chad maybe this isn't a problem at all. Consider - if a pkg supports multiple OSs, then it would do so via variants, and in this case, what we need to be able to do is supply per variant commands, which is a known issue anyway. If you have just the one OS, then you're already sourcing the right script anyway. So really I think Rez will give us the ability to side step OS agnostic problems here, and we shouldn't overthink it.
per-variant commands and per-shell commands are two very different things. that said, i'm not completely opposed to allowing different behavior per shell, but i'm not really in love with the idea.
if this.shell == "sh":
    source("myfile.sh")
elif this.shell == "tcsh":
    source("myfile.csh")
elif this.shell == "dos":
    source("myfile.bat")
that won't work because the shell is not known during "recording". a slight variation that would work:
shells.bash.source("myfile.sh")
shells.tcsh.source("myfile.csh")
shells.dos.source("myfile.bat")
What about something like:
auto_source("myfile")
At command bake time, the correct variation of myfile (myfile.bat, myfile.sh etc) would be sourced. I think though, you should still be able to explicitly source a given file, maybe a little differently like so:
source_if("bash", "myfile.sh")
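A bake-time sketch of how auto_source and source_if could resolve (the extension map and function signatures here are assumptions, not an agreed API):

```python
# At bake time the target shell is known, so the right script
# variant can be picked (auto_source) or filtered (source_if).
SHELL_EXTENSIONS = {'bash': '.sh', 'sh': '.sh', 'tcsh': '.csh', 'dos': '.bat'}

def auto_source(basename, target_shell):
    # choose the shell-specific variant of the script automatically
    return 'source %s%s' % (basename, SHELL_EXTENSIONS[target_shell])

def source_if(shell, path, target_shell):
    # only emit the source line when baking for the named shell
    if shell == target_shell:
        return 'source %s' % path
    return None

print(auto_source('myfile', 'tcsh'))          # source myfile.csh
print(source_if('bash', 'myfile.sh', 'dos'))  # None
```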
The alternative would be replicating the contents of this file in that package.yaml commands so that they do become shell agnostic?
this is actually what we currently do with our in-house system. i'm not advocating this as the only solution, but it works and I like being able to see and modify what each application is doing.
But, I think it should always be possible to source native script regardless, there are always going to be edge cases where you want to do some pretty specific stuff. I'm pretty sure we're doing this at Method Sydney.
Could someone give a bit of insight what exactly "source" does in our case? There is no direct equivalent to my knowledge besides maybe "call" or alike.
"source" brings text from another file into the current namespace and executes it in the current process.
Hey there,
There is a lot of excitement at my company about adopting rez-config for software configuration; it feels like it's going to fix a bunch of headaches for us here. Big thanks for making rez-config.
One thing that I'd love to see though is not forcing a shell on users; currently it looks like bash is the only option for working with rez. If you look at virtualenv for python, which is also a system that changes the environment (to run a different version of python), it can happily work with a different shell (zsh or tcsh).
It looks like what you'd need is to move to subcommands; this is what git, mercurial, subversion etc. do. Instead of typing git-add, you just type git add (git SPACE add), and there you go. So this is the only CLI user interface change that people would have to make. git would be rez in your case, and rez would be a python script. In order to not dump the whole source code in it, you can still split the code base per subcommand and just import the subcommand.
So you'd go rez <subcommand>. I took a quick glance at your source code and you guys are using optparse in python. If you were to switch to argparse (standard in python 2.7, but it can be backported to 2.5, I did that), you would have subcommand support for free.
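A rough sketch of what argparse subcommand dispatch looks like (the subcommand names here are illustrative, not the actual rez CLI):

```python
# argparse subparsers give "rez <subcommand>" dispatch for free.
import argparse

parser = argparse.ArgumentParser(prog='rez')
subparsers = parser.add_subparsers(dest='command')

# hypothetical subcommands for illustration
env_parser = subparsers.add_parser('env', help='resolve an environment')
env_parser.add_argument('packages', nargs='+')
build_parser = subparsers.add_parser('build', help='build a package')

args = parser.parse_args(['env', 'boost', 'houdini'])
print(args.command, args.packages)  # env ['boost', 'houdini']
```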
=> Thoughts? Would a small throw-away fork on GitHub make sense to see what I have in mind?
Thanks !
ps: