AcademySoftwareFoundation / rez

An integrated package configuration, build and deployment system for software
https://rez.readthedocs.io
Apache License 2.0
940 stars 332 forks

Making rez pure python and shell independent #22

Closed. ghost closed this issue 10 years ago.

ghost commented 11 years ago

Hey there,

There is a lot of excitement at my company about adopting rez-config for software configuration; it feels like it's going to fix a bunch of headaches for us here. Big thanks for making rez-config.

One thing that I'd love to have, though, is not forcing a shell on users; currently it looks like bash is the only option for working with rez. If you look at virtualenv for python, which is also a system that changes the environment (to run a different version of python), it can happily work with a different shell (zsh or tcsh).

It looks like what you'd need is to move to subcommands; this is what git, mercurial, subversion etc. do. Instead of typing git-add, you just type git add (git SPACE add), and there you go. So this is the only CLI change that people would have to make. In your case git would be rez, and rez would be a python script. In order to not dump the whole source code into it, you can still split the code base per sub-command and just import the subcommand.

I took a quick glance at your source code and you guys are using optparse in python. If you were to switch to argparse (standard in python 2.7, but it can be backported to 2.5, I did that), you would get subcommand support for free.
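To make the subcommand suggestion concrete, here is a minimal sketch of a git-style rez front end built on argparse sub-parsers; the sub-command name and handler body are illustrative only, not the real rez tools:

# Minimal sketch of a git-style "rez <subcommand>" front end using argparse.
# The sub-command and its handler are illustrative, not actual rez code.
import argparse

def cmd_env(opts):
    print("would resolve an environment for: %s" % " ".join(opts.packages))

def main():
    parser = argparse.ArgumentParser(prog="rez")
    subparsers = parser.add_subparsers(dest="subcommand")

    env_parser = subparsers.add_parser("env", help="create a resolved environment")
    env_parser.add_argument("packages", nargs="+")
    env_parser.set_defaults(func=cmd_env)

    opts = parser.parse_args()
    opts.func(opts)

if __name__ == "__main__":
    main()

Invoked as rez env somepackage, argparse dispatches to the matching handler, which is exactly the git add style of interface described above.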

=> Thoughts? Would a small throw-away fork on github make sense to show what I have in mind?

Thanks !

ps:

  1. Great choice on yaml, it's a very readable configuration format.
  2. It feels to me that if you switched the license to BSD or something like that, it might be easier for companies to contribute code.
nerdvegas commented 11 years ago

Hello Benjamin,

Great to hear that you're interested in using Rez. I can tell you that a number of high profile studios are now using Rez as well, although it's a little early for me to be able to say who they are! Incidentally, what studio are you working in?

Re bash, you are right that Rez is bash-dependent. It is also Linux-dependent. Both of these are things that I'd like to change, and there's a whole list of other changes and new features that I'd also like to see happen. It's a matter of finding the time though, as any work on Rez right now happens on my own time. So, I absolutely welcome contributions from users to help make this happen... there's actually a contribution from a studio at the moment that will see its way into the code base soon.

So please do feel free to fork the project and implement some of your ideas, and keep me in the loop. Just bear in mind that backwards compatibility is really important, there are enough users now that this cannot be broken.

Also, wrt sub-command-type behaviour, I actually want to do exactly this, as well as generally cleaning up all the supporting Rez code (I did not anticipate it would end up with so many separate tools). Part of this clean-up will, I hope, also include moving Rez over to a python egg-based distribution, which should simplify its installation process significantly.

Re the license, how do you feel that BSD would help facilitate contribution (it's not a loaded question, I do just want to hear your thoughts)?

Thanks, Allan

PS: one more thing, you mentioned optparse/argparse and the python 2.5 backport... We do need to keep python support back to as early as possible, I think that's 2.6 at the moment, although there is some call for this to be changed to also support 2.5.


chadrik commented 11 years ago

Hi all, I agree with removing as much bash as possible, and with creating python sub-commands using argparse.

I've gotten started on it here:

https://github.com/LumaPictures/rez/tree/python-commands

So far I've removed most shell/bash calls from rez-release, and I've converted config and depends into sub-commands.

Feel free to collaborate. Just let me know which commands you're working on so we don't step on each other's toes.

chadrik commented 11 years ago

The previous example uses a simple class system to enforce standardization of sub-commands, but I'm quickly realizing that putting all of the commands in one module will be unruly.

Here's another take which requires each command to be put into a sub-module of a new rez.commands package. Each module must provide two functions, command() and setup_parser(), as well as a module-level doc string.

https://github.com/LumaPictures/rez/tree/python-commands2/python/rez/commands
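As an illustration of that convention (this is a sketch, not code from the branch): a top-level dispatcher could walk the rez.commands package and wire each sub-module's setup_parser() and command() functions into an argparse sub-parser, using the module docstring as the help text.

# Sketch of a dispatcher for the rez.commands convention described above:
# each sub-module provides setup_parser(parser) and command(opts), plus a
# module-level docstring used as its help text.
import argparse
import importlib
import pkgutil

import rez.commands  # the package layout proposed above

def main():
    parser = argparse.ArgumentParser(prog="rez")
    subparsers = parser.add_subparsers(dest="subcommand")

    for _, name, _ in pkgutil.iter_modules(rez.commands.__path__):
        module = importlib.import_module("rez.commands." + name)
        sub = subparsers.add_parser(name, help=(module.__doc__ or "").strip())
        module.setup_parser(sub)
        sub.set_defaults(func=module.command)

    opts = parser.parse_args()
    opts.func(opts)

if __name__ == "__main__":
    main()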

nerdvegas commented 11 years ago

Hey Chad,

Re python commands, yeah we have the same idea here. However, I was thinking it would be cleaner to supply these as sub-sub commands, like so: rez sys mktempf, rez sys py_version, and so on. As a bonus, this is analogous to what cmake does with its -E flag (iirc, might have the flag wrong, but it exists).

Re your subcommand structure approach, i.e. implementing each subcommand in a separate sub-module... this is something I already do with all the python CLIs that I write, so I definitely want to go with this approach.

FYI, I've started on merging bsergean's osx port into head, but your mercurial support is coming right after that. Once both of those are done, I'd like to take the work both you and bsergean have done re pythonification of sys commands; that will most likely involve a wholesale copy of the python-commands stuff you've done, but just moved into a 'rez sys' subcommand. And after that, I want to get onto restructuring the whole thing into a subcommands structure.

Ack, so much work :P

Cheers all A


chadrik commented 11 years ago

hi allan,

I completed converting the remaining commands that had python counterparts in bin: config-list, config, depends, info, diff, dot, egg-install, env-autowrapper, help, make-project, merge-requests, release, and which.

I don't quite follow about the sys subcommand. Do you want all of the above to go under sys? If so, what will be at the same level as sys? My first reaction is that I'd rather type rez config than rez sys config, but I'm interested to know what you have in mind. Tools like mercurial have many sub-commands without the need for sub-sub commands. My 2c.

Either way, the current structure should easily adapt. I just need to add some more code to rez_.py to create sub-commands of arbitrary depth, mirroring the structure of rez.commands.

-chad

chadrik commented 11 years ago

pushed the arbitrary sub-command depth change.

nerdvegas commented 11 years ago

Hey Chad,

No, I was implying that the sys subcommand should only implement various OS operations expected to be available on all systems - for example, mktemp, that kind of thing. Only Rez itself would use these 'sys' subcommands; generally the user would not be using them (although in some cases that may be useful). At the same time, I certainly don't want to pollute the main subcommand scope with a list of these abstracted OS operations, hence the 'sys' subcommand.

So, for example:

rez config        // do a resolve
rez env           // create a resolved env
rez sys mktemp    // cross-platform make temp file

Make sense? A


chadrik commented 11 years ago

yes, except I think that most of what's currently written in bash can and should be converted to python, in which case there would be no need for such a command.

nerdvegas commented 11 years ago

That's true, but the exception is going to be Rez-env... I think it's probably not wise to subproc out of a python interpreter that's open during the whole session. I'm open to suggestion on this though, but I am worried that a fairly significant change like this could have unforeseen consequences.


chadrik commented 11 years ago

I agree with not wanting to put a python process between the bash processes.

Most of rez-env boils down to these lines:

export REZ_CONTEXT_FILE=$tmpf
export REZ_ENV_PROMPT="$REZ_ENV_PROMPT$_REZ_ENV_OPT_PROMPT"
bash --rcfile $tmpf2

I'm still not an expert on how rez-env works, but my thought is that we can use python to do everything rez-env is doing now, culminating in creating the two temp files referred to above. Python can print their paths to stdout, and then rez-env can run these couple of lines in bash.
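As a rough sketch of that split (the function below and its file naming are assumptions, not the actual rez-env code), the python side would write the two temp files and print their paths, leaving only the three bash lines above for the wrapper script:

# Hypothetical python half of rez-env: write the context file and the rcfile
# to temp files and print their paths for a thin bash wrapper to pick up.
import os
import tempfile

def write_env_files(context_commands, rcfile_commands):
    fd, context_path = tempfile.mkstemp(prefix="rez-context-")
    with os.fdopen(fd, "w") as f:
        f.write("\n".join(context_commands) + "\n")

    fd, rcfile_path = tempfile.mkstemp(prefix="rez-rcfile-")
    with os.fdopen(fd, "w") as f:
        f.write("\n".join(rcfile_commands) + "\n")

    # the bash wrapper reads these two paths and then runs:
    #   export REZ_CONTEXT_FILE=<context_path>
    #   bash --rcfile <rcfile_path>
    print(context_path)
    print(rcfile_path)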

nerdvegas commented 11 years ago

You may be right actually. Ok, will hold off implementing those sys calls unless it looks to be necessary.

FYI, I usually structure a python module with cli subcommands like so:

bin/
  rez              // python hashbang
python/rez/
  rez_config.py
  version.py
  etc.py
python/rez/cli/
  env.py
  config.py
  release.py
  util.py


ghost commented 11 years ago

(tangent on multiple shell support)

If rez-env is essentially only doing those 3 lines of bash, it looks like supporting other shells like tcsh (fish and zsh are the other hot ones) shouldn't be too much work.

One thing I was thinking about is that there might be a need for multiple variants of the commands to run in a .yaml / config file: one variant per OS, and one variant per shell.

But at this stage it almost looks like a shell-independent format would be better (env=something). The only problem might be that the colon (:) would clash, as it is both a special character in yaml and the separator character in PATH.


chadrik commented 11 years ago

Allan, that layout looks fine. We'll just need to change rez.commands to rez.cli.

Benjamin, here's a simple way you could use yaml for setting environment variables:

env:
- PATH : [ $PATH, /foo/bar ]
- OTHER : value

It's kind of nice because lists become explicit.
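As a rough illustration of how that layout could be consumed (assuming PyYAML; rendering to bash export lines is just one possible target):

# Sketch: turn the proposed env list into bash export lines.
# Assumes PyYAML and the list-of-single-key-mappings layout shown above.
import yaml

doc = yaml.safe_load("""
env:
- PATH : [ $PATH, /foo/bar ]
- OTHER : value
""")

for entry in doc["env"]:
    for name, value in entry.items():
        if isinstance(value, list):
            value = ":".join(value)  # an explicit list, joined with the separator
        print("export %s=%s" % (name, value))

# prints:
#   export PATH=$PATH:/foo/bar
#   export OTHER=value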

nerdvegas commented 11 years ago

Hey Benjamin,

Command variations is another whole discussion. IMO, the right approach is a DSL that can convert to whatever target OS you specify. I need to think about this a bit more, but it might be useful enough to be a project in its own right. Code name: Rex, not that I've started it yet though.

My rough roadmap for the near future, listed by priority, is:

  • Merge osx support
  • Merge mercurial support
  • Port git code to Chad's cleaned-up release code
  • Restructure Rez into subcommands / pythonify as much as possible
  • Move Rez to an egg-based distribution / simplify installation
  • Add an example ready-to-go repo to the installation, deprecate the current demo
  • Implement a 'bind' tool for popular software, OS/python etc.; this takes the place of the 'bootstrap' packages in the installation
  • Cross-platform commands support
  • Improved 'version' sub-module, alphanumeric support

Longer term...

  • More per-variant support - commands, build-only requires, etc.
  • Cross-platform build support (will need to be able to distribute the build matrix over appropriate hosts)
  • Improved dependency resolve algorithm (big job)
  • Add 'features' feature
  • Add sub-packages support

So yeah, there's a lot left to do! Cheers A


ghost commented 11 years ago
  1. Lots of stuff on the TODO list, that's exciting. Making rez pure python and installable via easy_install / pip will be good for newcomers and give a better/smoother install experience. I'm not sure what form removing the configure.sh + install.sh steps will take in this new world. Maybe there just has to be an interface to query for the required env vars as a fallback if those vars are undefined; this could come from an Env module. I also feel that something studios would appreciate is a companion project which lists the dependencies required for all the known tools (houdini / maya etc.). The mac homebrew project is very inspiring in this respect; here it could be focused on studio tools, and the example/demo would be composed of real studio tools, so anyone could improve it and express studio dependencies in the open.
  2. On the DSL to set the environment in the yaml file, what Chad suggests is pretty nice. It's backward compatible, because env entries are now either dictionaries (new code path) or straight items / strings (old code path, e.g. export PATH=/foo/bin:$PATH). A tiny parser might be required; what about quotes and spaces? Maybe yaml will take care of everything.

Q. What are the subcommands that query / execute those set-env instructions?


Cheers !


ghost commented 11 years ago

(shameless plug) And for a very easy thing to merge, you should go pick up the README.markdown I've added; it makes the landing page look cool in a browser :)

btw, do you know if there's a way to really kill the old google code page... or edit the wiki there so it's now only a big redirect... or even kill it altogether so google makes the github page the first link when one searches for it?


chadrik commented 11 years ago

I've been thinking about the question of rex for a bit, and this morning I had an idea. While I think that the idea of a brand new custom DSL is cool, I think we can get something more powerful and more familiar to developers with python and a little cleverness.

check out this little example:

#!/usr/bin/python
import re
import os
import platform

ALL_CAPS = re.compile('[A-Z][A-Z0-9_]*$')  # names treated as env vars

class EnvironDict(dict):
    def __setitem__(self, key, value):
        if ALL_CAPS.match(key) and key in self:
            # '@' stands for the variable's previous value
            parts = value.split(':')
            if '@' in parts:
                i = parts.index('@')
                parts[i] = self[key]
                value = ':'.join(parts)
        dict.__setitem__(self, key, value)

    def __getitem__(self, key):
        # don't error on reference to non-existent env variables
        if ALL_CAPS.match(key) and key not in self:
            return ''
        return dict.__getitem__(self, key)

if __name__ == '__main__':
    # pre-populate the globals with environment variables
    g = EnvironDict(os.environ)
    # provide access to some key functions
    g['system'] = os.system
    g['platform'] = platform.system

    code = """
PATH = '/usr/foo:@'
PATH = '/var/bar:@'
not_an_env = '/root/path/'
WHATEVER = not_an_env + 'this'
if SOME_EXTERNAL_VAR and platform() == 'Linux':
    NEW_VAR = '$SOME_EXTERNAL_VAR/poop'
"""
    # evaluate the code
    exec code in g

    # get changed items
    for key, value in g.iteritems():
        if ALL_CAPS.match(key):
            if key not in os.environ or os.environ[key] != value:
                print key, value

With a custom dictionary, we can track changes to variables made by code run with the exec command.
Basic features:

  • @ as a short-cut for self-referencing
  • useful/relevant python functions available directly in the root special namespace
  • full flow control (if/else/elif, etc.)

There's obviously a lot more that you can do with this. It's quick and easy, and very powerful. The syntax is concise as well:

PATH = '/usr/foo:@'
PATH = '/var/bar:@'
not_an_env = '/root/path/'
WHATEVER = not_an_env + 'this'
if SOME_EXTERNAL_VAR and platform() == 'Linux':
    NEW_VAR = '$SOME_EXTERNAL_VAR/poop'

-chad

nerdvegas commented 11 years ago

There are a few issues with this I think...

A. It's only able to manipulate env vars. what about creating aliases, or sourcing scripts?

B. I'm still not sure about using conditionals in commands. An important part of Rez is its simplicity, especially the succinctness of the package.yaml files. Would we just start seeing big scripts embedded into the yaml?

C. The self referential @ will be a problem, since an env var may legitimately contain this character already (and one def does... Houdini env vars sometimes have a trailing @).

Having said that, I'm warming to the idea of python as the language. But I think there'd need to be some changes. Specifically:

I'd rewrite your example with my idea of how it would look, but I'm on an ipad and it's too tedious... Will follow this up later with an example when I'm on something with a non-virtual keyboard.

Cheers A


chadrik commented 11 years ago

A. It's only able to manipulate env vars. what about creating aliases, or sourcing scripts?

that's as simple as writing a python function and exposing it in the globals. alias() or rez.alias(), as you please. since this is python, there would be little point in sourcing a bash script, but you could call system() to execute a shell script.

B. I'm still not sure about using conditionals in commands. An important part of Rez is its simplicity, especially the succinctness of the package.yaml files. Would we just start seeing big scripts embedded into the yaml?

we've yet to see how this will play out, but your current use case is Linux-only which is why the very simple layout works for you. as new users sign on they will have different, and often more complex setups and require more flexibility. i think conditionals are much more graceful than, say, separate commands sections for each variant, like commands.Linux and commands.Darwin. ultimately, we could go down that route only to find that it was not flexible enough and we needed conditionals after all. it also introduces a lot of redundancy between the sections, which is annoying to maintain, and encourages editing errors.

C. The self referential @ will be a problem, since an env var may legitimately contain this character already (and one def does... Houdini env vars sometimes have a trailing @).

Since the self-referencing character only makes sense when dealing with lists (i.e. a colon-separated string), it's fairly protected. I.e. the following are self-referencing and @ would be expanded to the variable's previous value:

FOO = '@:something'
BAR = 'something:@'

but these are not and @ would be left as is:

FOO = '@something'
BAR = 'something@'
  • the evaluation itself would have to be well protected. Ie, run in a separate interpreter, no imports allowed, no function defs allowed, that sort of thing. I'd definitely want limits like this so that developers are discouraged for writing lots of code here.

The exec statement is given a dictionary which is the globals, so that gives you control over the starting namespace, and also prevents the script from messing with other namespaces. Protecting against defs and imports would be a matter of scanning the code before execing.
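For what it's worth, here is a minimal sketch of that pre-exec scan using the standard ast module, which looks at the parsed syntax tree rather than the raw text, so a string that merely contains the word 'import' would not trip it:

# Sketch: reject package commands containing imports or function defs before
# exec'ing them, using only the standard ast module.
import ast

FORBIDDEN = (ast.Import, ast.ImportFrom, ast.FunctionDef, ast.Lambda)

def check_commands(code):
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, FORBIDDEN):
            raise ValueError("imports and function definitions are not allowed "
                             "in package commands (line %d)" % node.lineno)

check_commands("PATH = '/usr/foo:@'")   # passes
# check_commands("import os")           # would raise ValueError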

nerdvegas commented 11 years ago

On Thursday, August 22, 2013, Chad Dombrova wrote:

A. It's only able to manipulate env vars. what about creating aliases, or sourcing scripts?

that's as simple as writing a python function and exposing it in the globals. alias() or rez.alias(), as you please. since this is python, there would be little point in sourcing a bash script, but you could call system() to execute a shell script.

Yeah that's what I'm getting at with the env namespace, so you might have env.alias() for eg. Rez itself is still going to have to translate this into something OS-specific afterwards of course.

B. I'm still not sure about using conditionals in commands. An important part of Rez is its simplicity, especially the succinctness of the package.yaml files. Would we just start seeing big scripts embedded into the yaml?

we've yet to see how this will play out, but your current use case is Linux-only which is why the very simple layout works for you. as new users sign on they will have different, and often more complex setups and require more flexibility. i think conditionals are much more graceful than, say, separate commands sections for each variant, like commands.Linux and commands.Darwin. ultimately, we could go down that route only to find that it was not flexible enough and we needed conditionals after all. it also introduces a lot of redundancy between the sections, which is annoying to maintain, and encourages editing errors.

Let's give it a go and see what happens. One reason to do this - it gives us per-variant command ability, before implementing that properly (which we still need btw, for non-command related reasons).

C. The self referential @ will be a problem, since an env var may legitimately contain this character already (and one def does... Houdini env vars sometimes have a trailing @).

since the self-referencing character only makes sense when dealing with lists (i.e. a colon separated string), it's fairly protected. i.e. the following are self-referencing and @ would be expanded to the variable name:

FOO = '@:something' BAR = 'something:@'

but these are not and @ would be left as is:

FOO = '@something' BAR = 'something@'

The Houdini var I'm thinking of is a list.

  • the evaluation itself would have to be well protected. Ie, run in a separate interpreter, no imports allowed, no function defs allowed, that sort of thing. I'd definitely want limits like this so that developers are discouraged for writing lots of code here.

    the exec statement is given a dictionary which is the globals, so that gives you control over the starting namespace, and also protects anything the script from messing with other namespaces. protecting against defs and imports would be a matter of scanning the code before execing.

Scanning is pretty hacky.. What if there's a string containing 'import' for example, etc. I'd like to investigate how we might create an interpreter in isolation and then introspect for imports and defs, this would be much more robust.


chadrik commented 11 years ago

Scanning is pretty hacky.. What if there's a string containing 'import' for example, etc. I'd like to investigate how we might create an interpreter in isolation and then introspect for imports and defs, this would be much more robust.

I've been using python for a long time, and I can safely say that this is a rabbit hole without a satisfactory solution.

You can use the custom python dictionary to detect when a foreign module, function, or class has been bound to a new variable. honestly, i would just recommend a warning in this case: "you are entering dangerous territory. unless you know what you're doing, it is likely that something will break."
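A minimal sketch of that detection idea (the class name and warning text here are illustrative):

# Sketch: have the exec globals warn when a module, function or class gets
# bound to a name in the package commands.
import types
import warnings

class CheckedGlobals(dict):
    SUSPECT_TYPES = (types.ModuleType, types.FunctionType, type)

    def __setitem__(self, key, value):
        if isinstance(value, self.SUSPECT_TYPES):
            warnings.warn("binding %r to a %s: you are entering dangerous "
                          "territory; unless you know what you're doing, it is "
                          "likely that something will break"
                          % (key, type(value).__name__))
        dict.__setitem__(self, key, value)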

chadrik commented 10 years ago

Hi, We are getting closer to a rez rollout at Luma. One of the last remaining tasks is to add support for OSX alongside Linux, which will require resolution of this ticket. I've already converted nearly all of rez's commands from bash to python, so what's left is the shell-independent hyper-super-meta-language. Aside from shell independence, the main feature we're looking for is variant-based conditionals, as discussed above.

I've spent some time today thinking a bit more about what this could look like. Here's an example yaml snippet using python for the commands section:

Example

variants:
- [Linux, fedora, python-3.3]
- [Linux, fedora, python-2.7]
- [Linux, fedora, python-2.6]
- [Linux, ubuntu, python-2.7]
- [Darwin, python-2.6]
- [Darwin, python-2.5]

commands: |
  short_version = '!V1.!V2'
  if package_present('fedora') or package_present('Darwin'):
    APP = '/apps/!NAME/%short_version'
  elif not package_present('ubuntu'):
    APP = 'C:/apps/!NAME/%short_version'
  if package_present('python-3') and not REALLY:
    REALLY = 'yeah'
  PATH.append('$APP/bin')
  if package_present('Linux') and building():
    LD_LIBRARY_PATH.append('!ROOT/lib')
  alias('!NAME-!VERSION', '$APP/bin/!NAME')
  shell('startserver !NAME')

Features

Variables

Setting and Modifying

  • capital variables are automatically exported to the environment
  • explicitly calling set(myvar, 'value') can be used in rare cases of lowercase env vars
  • use VAR.append() and VAR.prepend() to modify list variables (uses the OS-appropriate separator, : or ;)
  • testing VAR evaluates to True if the variable is set (e.g. if VAR: ...)
  • can use + to concatenate strings and env variables (e.g. VAR + 'mystring')

Expansion

  • implemented using template strings from the standard library
  • rez variables are expanded with !, e.g. !ROOT, !NAME, !VERSION
  • !V1, !V2, !V3, etc. are provided for version parts (major, minor, etc.)
  • env variables are expanded with $; this includes env variables defined both inside and outside of the current package
  • local variables are expanded with %
  • forward slashes are converted to back slashes when exported to Windows systems

Package / Variant Conditionals

  • uses the variant() function, which returns True if all specified packages are in the active variant
  • uses rez version range notation for matching variant component packages (e.g. variant('python-3') matches the variant [Linux, fedora, python-3.3])
  • use the exact_variant() function for picky matching
  • arguments to these functions only expand local variables (prefixed with %)

Other Commands

  • building(): returns true if rez is building the package
  • alias(): create a system alias
  • shell(): execute a system command
  • machine(): attribute-based access to machine/system info
      os: e.g. 'Linux', 'Darwin', 'Windows'
      arch: e.g. 'i386', 'x86_64'
      version: os version
      name: machine name (without domain)
      domain: machine domain name
      fqdn: fully qualified domain name (name + '.' + domain)
  • for brevity and readability, rez-specific functions are imported into the default namespace
  • other useful modules may be provided in their own namespace (os, platform, sys)
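As a small proof of concept of the expansion rules above (the class names are mine; only string.Template from the standard library is assumed):

# Sketch: layered '!', '%' and '$' expansion using string.Template subclasses
# with custom delimiters, per the rules listed above.
from string import Template

class RezTemplate(Template):
    delimiter = '!'      # !ROOT, !NAME, !VERSION, !V1 ...

class LocalTemplate(Template):
    delimiter = '%'      # local variables such as %short_version

def expand(text, rez_vars, local_vars, env_vars):
    text = RezTemplate(text).safe_substitute(rez_vars)
    text = LocalTemplate(text).safe_substitute(local_vars)
    return Template(text).safe_substitute(env_vars)   # plain $ env vars

print(expand('/apps/!NAME/%short_version',
             {'NAME': 'houdini'}, {'short_version': '12.5'}, {}))
# -> /apps/houdini/12.5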

Let me know what you think.

Are there any other commands you can foresee needing?

ghost commented 10 years ago

You should check out the merge_osx branch where Allan has merged all the osx fixes.


chadrik commented 10 years ago

Hi Benjamin, those changes will definitely be helpful, but they don't address the issue of shell-independent commands, or per-variant commands.

-chad

ghost commented 10 years ago

Yes you're right, but strictly speaking rez can already work on osx now.

How do you plan to match sub-variants like Linux distros? A registry of known / supported ones, with hard-coded commands that help identify them (cat /etc/redhat-release)? autoconf had something like that; I forget the name of that shell script but it was something like config.sub.

Is everything there for multiple shell support? Say, for example, I want to install aliases, which have different syntax depending on the shell. Is that something that should be supported or left out? What about syntax to source a shell script?


nerdvegas commented 10 years ago

Hey all,

Chad, I've just quickly skimmed what's here as I'm in the final days of readying for the move to LA. It will be good to catch you up over there and talk Rez in person. Sorry if I've misinterpreted anything, I'm in a rush.

As Benjamin mentioned, the osx_merge branch contains his osx work, if you could aim to merge your own changes in with this, that would be brilliant. If anyone (ben?) could test the resulting code to ensure osx is still working, even better.

Wrt command language: Chad, generally I like where you're going with this. I'd like to get into more detail when I can, that is, in a couple of weeks. But something I wanted to point out - this command language must effectively compile down to OS-specific code (e.g. a series of bash commands), because this code then goes into the 'context' file, and it is the sourcing of the context file that allows rez-env to do its thing. Python can't be in the mix at that point, because we don't want to be running a resolved shell via a python interpreter.

So, there should be a clearly separate submodule that is able to take this python command language, and 'compile' it into OS commands, and this is what I have been thinking would be referred to as 'Rex'. I'd expect this initially to be implemented in a 'rex' subdirectory in rez, with OS-specific bindings in linux.py, osx.py etc, with the intent of extracting it out into its own egg-based module at some later stage (which is what's going to happen with the version and resolving submodules).

So, 'compiling' the commands would be a case of executing the python code, and having functions such as shell(), alias() etc, write that information into a temporary store, then a final pass would invoke OS-specific code to read the info from that store and convert it into OS-specific commands.
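A rough sketch of that two-pass idea (the class and function names are placeholders, not an agreed API): the exec'd command code calls methods on a recorder, and a target-specific backend serializes the recorded actions into the context file in a second pass.

# Sketch: record abstract actions during exec, then 'compile' them to a
# target shell in a second pass.
class ActionRecorder(object):
    def __init__(self):
        self.actions = []            # e.g. ('setenv', 'PATH', '/foo:/bar')

    def setenv(self, key, value):
        self.actions.append(('setenv', key, value))

    def alias(self, key, value):
        self.actions.append(('alias', key, value))

    def shell(self, cmd):
        self.actions.append(('command', cmd))

def compile_to_bash(recorder):
    lines = []
    for action in recorder.actions:
        if action[0] == 'setenv':
            lines.append('export %s="%s"' % (action[1], action[2]))
        elif action[0] == 'alias':
            lines.append("alias %s='%s'" % (action[1], action[2]))
        else:
            lines.append(action[1])
    return "\n".join(lines)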

Just one brief point on the language itself - the function variant() I think is misleading; package_present() would make more sense IMO. And, the machine() functionality, why not just use the inbuilt python platform module? Several builtin python modules would have to be made available for the command language ahead of time, platform would be one of them.

cheers A


chadrik commented 10 years ago

How do you plan to match sub-variants like Linux distro ? A registry of known / supported ones with hard-coded code through commands that will help identify them (cat /etc/redhat-release) ? autoconf had something like that, I forgot the name of that shell script but it was something like config.sub.

This is not something I'm trying to solve or need to solve. How variants are defined will continue to work as it already does.

this command language must effectively compile down to OS-specific code (eg a series of bash commands), because this code then goes into the 'context' file, and it's sourcing of the context file that allows rez-env to do its thing. Python can't be in the mix at that point, because we don't want to be running a resolved shell via a python interpreter.

Yes, absolutely. My proposal is not yet implemented, but much of it will be ported from our existing in-house tool for environment variable management, which handles interpreting python code much like what I've proposed into sh, bash, csh, tcsh, and DOS commands.

I'd expect this initially to be implemented in a 'rex' subdirectory in rez, with OS-specific bindings in linux.py, osx.py etc, with the intent of extracting it out into its own egg-based module at some later stage (which is what's going to happen with the version and resolving submodules).

It's really more shell-specific than OS-specific, since both OSX and Linux have bash, tcsh, etc. Yes, there are variations in these interpreters between OSes, but the commands issued by rez will be pretty straightforward.

Here is the code we're currently using in our in-house tool. There's not much to it, just a few sub-classes, so I don't think that the sub-modules will be necessary:

import os
import subprocess

def executable_output(args):
    # stand-in for the in-house helper referenced below: run a command and
    # return its stdout
    return subprocess.check_output(args)

class Shell(object):
    def __init__(self, **kwargs):
        pass
    def setenv(self, key, value):
        raise NotImplementedError
    def unsetenv(self, key):
        raise NotImplementedError
    def alias(self, key, value):
        raise NotImplementedError

class Bash(Shell):
    def setenv(self, key, value):
        # use %r to get string escaping
        return "export %s=%r;" % (key, value)
    def unsetenv(self, key):
        return "unset %s;" % (key,)
    def alias(self, key, value):
        # bash aliases don't export to subshells; so instead define a function,
        # then export that function
        return "%(key)s() { %(value)s; };\nexport -f %(key)s;" % locals()

class Tcsh(Shell):
    def setenv(self, key, value):
        # use %r to get string escaping
        return "setenv %s %r;" % (key, value)
    def unsetenv(self, key):
        return "unsetenv %s;" % (key,)
    def alias(self, key, value):
        return "alias %s '%s';" % (key, value)

class WinShell(Shell):
    # These are variables where windows will construct the value from the value
    # from system + user + volatile environment values (in that order)
    WIN_PATH_VARS = ['PATH', 'LibPath', 'Os2LibPath']

    def __init__(self, set_global=False):
        self.set_global = set_global
    def setenv(self, key, value):
        value = value.replace('/', '\\\\')

        # Will add to volatile environment variables -
        # HKCU\\Volatile Environment
        # ...and newly launched programs will detect this
        # Will also add to process env. vars
        if self.set_global:
            # If we have a path variable, make sure we don't include items
            # already in the user or system path, as these items will be
            # duplicated if we do something like:
            #   env.PATH += 'newPath'
            # ...and can lead to exponentially increasing the size of the
            # variable every time we do an append
            # So if an entry is already in the system or user path, since these
            # will take precedence over the volatile path anyway, don't add
            # it to the volatile as well
            if key in self.WIN_PATH_VARS:
                sysuser = set(self.system_env(key).split(os.pathsep))
                sysuser.update(self.user_env(key).split(os.pathsep))
                new_value = []
                for val in value.split(os.pathsep):
                    if val not in sysuser and val not in new_value:
                        new_value.append(val)
                volatile_value = os.pathsep.join(new_value)
            else:
                volatile_value = value
            # exclamation marks allow delayed expansion
            quotedValue = subprocess.list2cmdline([volatile_value])
            cmd = 'setenv -v %s %s\n' % (key, quotedValue)
        else:
            cmd = ''
        cmd += 'set %s=%s\n' % (key, value)
        return cmd

    def unsetenv(self, key):
        # env vars are not cleared until restart!
        if self.set_global:
            cmd = 'setenv -v %s -delete\n' % (key,)
        else:
            cmd = ''
        cmd += 'set %s=\n' % (key,)
        return cmd

    def user_env(self, key):
        return executable_output(['setenv', '-u', key])

    def system_env(self, key):
        return executable_output(['setenv', '-m', key])

shells = { 'bash' : Bash,
           'sh'   : Bash,
           'tcsh' : Tcsh,
           'csh'  : Tcsh,
           '-csh' : Tcsh, # For some reason, inside of 'screen', ps -o args reports -csh...
           'DOS' : WinShell}

def get_shell_name():
    command = executable_output(['ps', '-o', 'args=', '-p', str(os.getppid())]).strip()
    return command.split()[0]

def get_shell_class(shell_name):
    if shell_name is None:
        shell_name = get_shell_name()
    return shells[os.path.basename(shell_name)]

A few things will need to change -- it needs a subprocess() method, maybe a log() or echo() method. These classes are used to interpret the actions recorded by the python code executed by exec.
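
As a rough sketch of the kind of additions being discussed (the method names follow the suggestions in this thread, but the implementations are assumptions, not a settled API), the Bash class above might grow something like:

import pipes

class Bash(Shell):
    # ...existing setenv/unsetenv/alias methods...
    def echo(self, value):
        # print a message when the context file is sourced
        return "echo %s;" % pipes.quote(value)
    def comment(self, value):
        # inject a comment into the generated context file
        return "# %s" % value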

Just one brief point on the language itself - the function variant() I think is misleading; package_present() would make more sense IMO.

ah, yes, I agree.

And, the machine() functionality, why not just use the inbuilt python platform module?

This is what we do with our internal tool, and you're right, half of this info comes from the platform module:

  • machine() :
    • os : platform.system()
    • arch : platform.machine()
    • version : platform.version()
    • name : socket.getfqdn().split('.', -1)[0]
    • domain : socket.getfqdn().split('.', -1)[1]
    • fqdn : socket.getfqdn()

I provided machine as a convenience for a few reasons:

  • it provides info spanning multiple modules
  • the docs on platform.node() say it "may not be fully qualified!", so I prefer to use socket.getfqdn instead to be sure
  • i find the names used in platform to be very confusing:
    • machine().arch vs. platform.machine()
    • machine().name vs. platform.node()
    • machine().os vs. platform.system()

all this info is cached, so maybe machine should not be a function, but just provide attribute access:

  • machine.arch
  • machine.name
  • machine.os

nerdvegas commented 10 years ago

Chad, that all sounds good, and you're right about this being shell-specific rather than OS-specific.

One thing: Is there going to be an issue incorporating this code? You mentioned it's part of an existing in-house tool, I don't want to run into problems wrt using Luma code.

Also wrt extra functions (log, echo etc). Agreed, and echo seems the clearest need. Also though, comment()! Rez itself injects comments into the context file for debugging purposes, so you can see which package inserted which code. I can see it being useful for packages themselves to inject comments for debugging reasons also.

thx A


chadrik commented 10 years ago

One thing: Is there going to be an issue incorporating this code? You mentioned it's part of an existing in-house tool, I don't want to run into problems wrt using Luma code.

it's not a problem. we're all about that open source stuff :)

Also wrt extra functions (log, echo etc). Agreed, and echo seems the clearest need. Also though, comment()! Rez itself injects comments into the context file for debugging purposes, so you can see which package inserted which code. I can see it being useful for packages themselves to inject comments for debugging reasons also.

yup, sounds good.

mstreatfield commented 10 years ago

In addition to building(), would a function to test whether the package is from REZ_LOCAL_PACKAGES_PATH be useful?

And although this could be achieved with a call to subprocess, would a source() function be of use? Houdini for example requires you source an additional shell script to configure the environment, this function would help provide this functionality.

Presumably this will result in a change to the config_version number in the package.yaml files? This would allow existing package.yaml files to function as-is and be updated on a per case basis.

I agree that wrapping the platform specific details from Python is a good idea. I find the various modules for obtaining this information in Python a little clunky so abstracting them is good. That said, machine.arch or machine.arch() feels nicer than machine().arch.

Regarding the get_shell_name function, would using the psutil module be preferable to a hard-coded subprocess call? Also, to what extent should the shell implementation used be derived from the Rez configuration or some other indicator?

For example, although my current shell is tcsh (as indicated by the output of ps), perhaps I want Rez to give me the commands for a Win shell? Should the required shell be part of the package request; in the same way Rez automatically adds the platform package (based on REZ_PLATFORM) with the option to override it if required (--no-os)? This would allow me to bake the context for a Windows environment on my Linux machine, for example.

nerdvegas commented 10 years ago

"would a function to test whether the package is from REZ_LOCAL_PACKAGES_PATH be useful?"

I actually think this would be dangerous - imagine if a package behaved differently when installed locally vs centrally. It would be like an executable program behaving differently depending on where it's installed - this could cause difficult-to-debug problems. Packages, I feel, should behave independently of where they're installed.

cheers A


chadrik commented 10 years ago

In addition to building(), would a function to test whether the package is from REZ_LOCAL_PACKAGES_PATH be useful?

at the moment, i agree with allan. will see how i feel once i actually start using this.

And although this could be achieved with a call to subprocess, would a source() function be of use? Houdini for example requires you source an additional shell script to configure the environment, this function would help provide this functionality.

yes, i think it would be of use, and actually i don't think subprocess would suffice if the point of the script is to set up environment variables in the current process.

Presumably this will result in a change to the config_version number in the package.yaml files? This would allow existing package.yaml files to function as-is and be updated on a per case basis.

not necessarily. there is a feature of yaml that allows multiline string blocks, and it is not currently supported by rez. since these multi-line strings are more conducive to actual code blocks, we can add support for them and make their use a requirement for the new syntax. this will allow us to detect old vs. new.

for example:

yaml returns a list of strings for this:

commands:
- export FOO=bar
- export PATH=/path:$PATH

and it returns a multi-line string for this:

commands: |
  FOO='bar'
  PATH.prepend('/path')

i actually already have a working prototype of this whole thing. list vs. string is how i'm currently telling the difference between old and new, and i'm able to intermix packages with the two styles.
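
A minimal sketch of that detection (the function name is illustrative; it assumes PyYAML and python 2, matching the rest of the thread):

import yaml

def commands_style(package_yaml_path):
    # 'old' for the legacy list-of-strings commands, 'new' for a multiline block
    with open(package_yaml_path) as f:
        metadata = yaml.load(f)
    commands = metadata.get('commands')
    if isinstance(commands, list):
        return 'old'   # e.g. ['export FOO=bar', ...]
    if isinstance(commands, basestring):
        return 'new'   # a single python-style code block
    return 'none'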

of course, we can change the config_version as well, but i'm afraid of what confusion this might ultimately cause. maybe better to hold onto this until it is truly needed.

I agree that wrapping the platform specific details from Python is a good idea. I find the various modules for obtaining this information in Python a little clunky so abstracting them is good. That said, machine.arch or machine.arch() feels nicer than machine().arch.

agreed.

Regarding the get_shell_name function, would using the psutil module be preferable to a hard-coded subprocess call?

i agree that using psutil would be more robust and pythonic, but it's also yet another dependency, and a compiled dependency as well. get_shell_name() is more of a nicety than a requirement. more on that below....

Also, to what extent should the shell implementation used be derived from the Rez configuration or some other indicator?

For example, although my current shell is tcsh (as indicated by the output of ps), perhaps I want Rez to give me the commands for a Win shell? Should the required shell be part of the package request; in the same way Rez automatically adds the platform package (based on REZ_PLATFORM) with the option to override it if required (--no-os)? This would allow me to bake the context for a Windows environment on my Linux machine, for example.

Here's how we do this with our current system.

for bash:

#--- setpkg-init.sh
# Bash aliases are not inherited, unlike tcsh aliases
# Instead, make them functions and export with "export -f"
function pkg {
    eval `$SETPKG_PYTHONBIN $SETPKG_ROOT/bin/setpkgcli --shell bash --pid $$ "$@"`
}
export -f pkg

and for tcsh:

#--- setpkg-init.csh
alias pkg       'eval `$SETPKG_PYTHONBIN $SETPKG_ROOT/bin/setpkgcli --shell tcsh --pid $$ \!*`'

as you can see, there is a python executable script setpkgcli which is the entry point for all shell-specific wrappers. these wrappers make aliases which hard-code the --shell argument, so get_shell_name() is never actually called.

you can use setpkgcli to print out a series of commands specific to any shell supported by --shell. however, getting output for a shell other than the current one is rare, and would most likely be performed using the python API rather than the CLI.

so, yes, what you're asking for will be covered by the current design. currently there is a CommandRecorder class and a CommandInterpreter sub-class for each shell. Once the operations are recorded, you can use the python API to generate as many different interpretations of those recorded commands as you have interpreters. There is also an interpreter for python which executes the commands in the current python session, modifying os.environ, calling print and subprocess.Popen, etc.
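
A rough sketch of that record/interpret split, using the Bash and Tcsh classes posted earlier in the thread (the class shapes follow the description above, but the details are assumptions rather than the actual branch code):

class CommandRecorder(object):
    # records abstract actions; a shell-specific class renders them later
    def __init__(self):
        self.commands = []  # list of (action, args) tuples
    def setenv(self, key, value):
        self.commands.append(('setenv', (key, value)))
    def unsetenv(self, key):
        self.commands.append(('unsetenv', (key,)))
    def alias(self, key, value):
        self.commands.append(('alias', (key, value)))

def interpret(recorder, interpreter):
    # render the recorded actions with a shell-specific interpreter, e.g. Bash or Tcsh
    return '\n'.join(getattr(interpreter, action)(*args)
                     for action, args in recorder.commands)

# record once, then generate as many interpretations as there are interpreters
rec = CommandRecorder()
rec.setenv('FOO', 'bar')
rec.alias('mytool', 'mytool --verbose')
print(interpret(rec, Bash()))
print(interpret(rec, Tcsh()))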

currently, i've removed almost all bash from rez, but there are still the remaining stub scripts in /bin that use bash. i'd like to move toward a system like the one i've outlined above by converting the remaining bash scripts to aliases defined in rez-init.sh and rez-init.csh.

chadrik commented 10 years ago

i have an idea regarding the machine data structure.

one thing that i do not like about our current in-house system is that there are too many ways to format a variable string.

for example, something like this is not uncommon:

root_var = '/something'
MYVAR = root_var + '/python/$PYTHON_VERSION_MAJOR.$PYTHON_VERSION_MINOR/%s' % VERSION

This ugliness is the inspiration behind the triple formatting prefixes proposed above:

  • $ for env vars
  • ! for special rez vars
  • % for local vars

which turns the above into this:

root_var = '/something'
MYVAR = '%root_var/python/$PYTHON_VERSION_MAJOR.$PYTHON_VERSION_MINOR/!VERSION' 

the problem is that there is quite a bit of data that we need access to both within our python control statements and also within our string formatting.

for example, we need the data provided by the proposed machine structure within our string formatting.

to illustrate the problem, here is some code using the proposed commands API:

if package_present('python') and machine.os != 'Windows':
    PYTHONPATH.append('!ROOT/python/$PYTHON_VERSION_MAJOR.$PYTHON_VERSION_MINOR/%s-%s' % (machine.os, machine.arch))

what if instead of a few special variables like !ROOT, !NAME, and !VERSION, we take a cue from machine and provide hierarchical data structures with attribute-based access that can be used both in python code and inside formatting operations, with the same names.

here is the above example rewritten:

if pkgs.python and machine.os != 'Windows':
    PYTHONPATH.append('!{this.root}/python/!{pkgs.python.version.thru2}/!{machine.os}-!{machine.arch}')

  • this provides data about the current package:
    • this.name
    • this.root
    • this.base
    • this.version (more on versions below)
  • pkgs looks up the same data structure for other packages:
    • pkgs.foo for a non-present package foo returns None
    • this replaces the proposed package_present() function
    • an error is raised if you try to string-format a non-existent value

The version component can be further broken down to provide various representations of a version.

For example, assuming python version 2.7.4, the following all evaluate to true:

pkgs.python.version == '2.7.4'
pkgs.python.version == '2.5-2.7+'

pkgs.python.version.major == 2
pkgs.python.version.minor == 7

pkgs.python.version.part1 == 2
pkgs.python.version.part2 == 7
pkgs.python.version.part3 == 4

pkgs.python.version.thru1 == '2'
pkgs.python.version.thru2 == '2.7'
pkgs.python.version.thru3 == '2.7.4'

The goal is to provide:

  • consistently formatted variable strings that are easy to read
  • a simple way to deal with a multitude of version formats
  • room for future growth as we add new data structures and structure attributes

what do you think? love it? hate it?

nerdvegas commented 10 years ago

Chad, I really like it. In fact I think we could drop the exclamation and just go with '{this.root}'.

One gripe I have though is with this:

Pkgs.foo.version == '2.4+<3'

In the Rez version sub module, there is a strong distinction between versions and version ranges (well, there will be more so in future work)... I think the equivalence operator used between the two different types here is odd and a bit misleading. I would go for a dedicated function:

Version_in(pkgs.foo.version, '2.4+<3')

Ie, it doesn't make sense to say that a version is equivalent to a range. In fact this leads to an ambiguity:

Pkgs.foo.version == 2.3

This would match on 2.3.1, but what if I wanted to match only on exactly 2.3?

It might be tempting to make values like pkgs.foo.version an object instance, so you can do stuff like: Pkgs.foo.version.in('2.3+<4') However, I don't think we should do this... Because leaving it as a free function that takes two strings instead (ie the version_in function) means we can also use it on arbitrary env vars, which could be useful.

Thx A


mstreatfield commented 10 years ago

Would it be possible to change pkgs.python.version.part1 to pkgs.python.version.parts[1] (or perhaps using a 0 based index)? Not all packages have a neat major.minor.patch (I'm thinking specifically of some third party ones), and putting the index on the object name feels a little clumsy.

"would a function to test whether the package is from REZ_LOCAL_PACKAGES_PATH be useful?"

I actually think this would be dangerous - imagine if a package behaved differently when installed locally vs centrally,

My use case would be debugging-related variables that I'd like to have set differently locally. For example, on release the log level should be warning, but on a local install it should be debug. Or on release my service URL is http://production, but on a local install it should be http://staging. But perhaps the commands section is not the right place for these.

There is also an interpreter for python which executes the commands in the current python session, modifying os.environ, calling print and subprocess.Popen, etc.

That is similar to the solution used where I currently work, and it seems to work quite well. It would make adoption here easier if that mode just so happened to be supported out of the box :-)

As an aside, Rez came up on the StudioSysAdmins mailing list today with Adrian from Flying Bark mentioning that they'd done some work on Windows support. I'm encouraging him to chime in on this conversation, hopefully he'll be able to contribute back. I'm not sure if anyone here is actively using Windows, but it would be nice to have a concrete use case for that OS.

chadrik commented 10 years ago

Would it be possible to change pkgs.python.version.part1 to pkgs.python.version.parts[1] (or perhaps using a 0-based index)? Not all packages have a neat major.minor.patch (I'm thinking specifically of some third party ones), and putting the index on the object name feels a little clumsy.

technically very doable, and it was the first suggestion from some of the guys i showed at work, so i think you've got something there.

not doing so was largely a matter of preference, and here's why:

once it's wrapped up in curly braces I find the indexed version harder to read (and there will not be any syntax highlighting inside the string):

'foo/{pkgs.python.version.parts[1]}/bar'

'foo/{pkgs.python.version.part1}/bar'

also, i did not think that this made sense:

foo/{pkgs.python.version.thru[1]}/bar

we definitely need some way to easily get '1.2' from '1.2.3', and it should probably match in style with parts, so i reasoned if we're using thru1, we should also use part1.

btw, there are only as many part# and thru# attributes as there are version parts. it's not hard-coded to 4 in case that makes a difference in your thinking.

it would be great if the world could agree what comes after "major" and "minor" (patch? release? revision? build?), but i'm not sure we should foist additional terms on parts beyond these.

anyway, i'll do whatever people seem to like best. if you have terms you prefer over "part" and "thru" i'm all ears.

As an aside, Rez came up on the StudioSysAdmins mailing list today with Adrian from Flying Bark mentioning that they'd done some work on Windows support. I'm encouraging him to chime in on this conversation, hopefully he'll be able to contribute back. I'm not sure if anyone here is actively using Windows, but it would be nice to have a concrete use case for that OS.

the rez code-base is definitely in flux right now, so likely any changes they've made will be hard to apply after mine, but most of my efforts to de-bashify rez should help them out. I definitely agree we're going to need some windows folks on board because we don't use it at all anymore at luma.

Chad, I really like it. In fact I think we could drop the exclamation and just go with like so: '{this.root}'.

i agree, that would be nice and i think it should be possible. i'm also allowing {root}, {version}, {name}, etc.

One gripe I have though is with this: Pkgs.foo.version == '2.4+<3'

yeah, that was dumb of me. i think i'd prefer something like this, but i have to think about it more:

pkgs.python.version in VersionRange('2.5-2.7+')

I should have a mostly working version of all this ready by the end of the day tomorrow.

Last thought before i trot off to bed:

I realized today that the return values of Resolver.resolve and Resolver.guarded_resolve are going to have to change, because the commands list will no longer be a simple list of strings; it will be a list of Command instances to be passed to a CommandInterpreter of your choosing (i feel pretty strongly that the resolver should not care what the destination shell is). trying to maintain backward compatibility in the python internals is starting to become a hindrance. with all the changes coming i think we should just shoot for 2.0 and break compatibility in a few places. what do you all think?

chadrik commented 10 years ago

I pushed my latest changes:

https://github.com/LumaPictures/rez/tree/rex-shell-agnostic https://github.com/LumaPictures/rez/tree/python-commands

chadrik commented 10 years ago

here's a potential compromise on the formatting language that i'm mulling over:

pkgs.python.version.part(1) == 2
pkgs.python.version.part(2) == 7
pkgs.python.version.part(3) == 4

pkgs.python.version.thru(1) == '2'
pkgs.python.version.thru(2) == '2.7'
pkgs.python.version.thru(3) == '2.7.4'

here's what it looks like currently:

commands: |
  PYTHON_MAJOR_VERSION = '{version.part1}'
  PYTHON_MINOR_VERSION = '{version.part2}'

  CMAKE_PREFIX_PATH.prepend('$PYTHON_DIR')

  if machine.os == 'Linux':
    PYTHON_DIR = '/usr/local/python-{version}'
    PATH.prepend('$PYTHON_DIR/bin')
  elif machine.os == 'Darwin':
    PYTHON_DIR = '/usr/local/python-{version}'
    PATH.prepend('$PYTHON_DIR/Python.framework/Versions/{version.thru2}/bin')
  else:
    PYTHON_DIR = 'C:/Python{version.part1}{version.part2}'
    PATH.prepend('$PYTHON_DIR')
    PATH.prepend('$PYTHON_DIR/Scripts')

and with the change:

commands: |
  PYTHON_MAJOR_VERSION = '{version.part(1)}'
  PYTHON_MINOR_VERSION = '{version.part(2)}'

  CMAKE_PREFIX_PATH.prepend('$PYTHON_DIR')

  if machine.os == 'Linux':
    PYTHON_DIR = '/usr/local/python-{version}'
    PATH.prepend('$PYTHON_DIR/bin')
  elif machine.os == 'Darwin':
    PYTHON_DIR = '/usr/local/python-{version}'
    PATH.prepend('$PYTHON_DIR/Python.framework/Versions/{version.thru(2)}/bin')
  else:
    PYTHON_DIR = 'C:/Python{version.part(1)}{version.part(2)}'
    PATH.prepend('$PYTHON_DIR')
    PATH.prepend('$PYTHON_DIR/Scripts')
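
As an aside, a minimal sketch of how part() and thru() might be implemented (a hypothetical helper, using the 1-based indexing written above, although the thread later leans toward 0-based; it assumes purely numeric components):

class Version(object):
    def __init__(self, version_str):
        self.parts = version_str.split('.')   # e.g. '2.7.4' -> ['2', '7', '4']
    def part(self, i):
        # i-th version component as an int
        return int(self.parts[i - 1])
    def thru(self, i):
        # version truncated to the first i components, as a string
        return '.'.join(self.parts[:i])
    def __str__(self):
        return '.'.join(self.parts)

v = Version('2.7.4')
assert v.part(1) == 2 and v.part(3) == 4
assert v.thru(2) == '2.7'
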
nerdvegas commented 10 years ago

Re part/thru syntax - firstly, I think it should be zero based, not one based. Second, I think we should support all of:

  • part0, part1 etc
  • part(0), part(1) etc
  • version.major, .minor, .patch (and no further)

I prefer the round brackets over square brackets because, as Chad pointed out, square brackets become rather odd when applied to thru... round brackets on the other hand make sense in both cases.

Re backwards compatibility. There are a lot of changes in the works now, I think it would be acceptable to break API compatibility, but non API compatibility (cli, yaml syntax etc) must be kept intact, imo.

Also had a couple of thoughts on the {foo.bah} syntax. I think if the var referenced doesn't exist, it should expand to nothing - this is often useful when building up path strings, for eg. So a test for var existence would look like so:

if '{foo.bah}':
    pass

Also, there would be a dedicated function to do the same thing, which would also work in the odd case where you might expect an existing var with an empty string value:

if exists('foo.bah'):
    pass

In cases where you want to suppress var expansion, you would use another inbuilt function: v = literal('{foo.bah}')

Something I'm slightly concerned about is the littering in the default namespace of possibly a fair number of inbuilt functions (and it's not that obvious what is and is not a command builtin). But, it'd be ugly to have, say, rez.function all over the place. What about maybe putting everything into an 'r' namespace? Or going with CapitalCase for all the builtins, to clearly distinguish them from standard python functions? I'm open to suggestions here, but I just think that the current form makes it that bit harder to glance over command code and tell what's going on.

Cheers A


nerdvegas commented 10 years ago

On Thursday, October 31, 2013, Mark Streatfield wrote:


My use case would be debugging related variables that I'd like to have set different locally. For example, on release log level should be warning, but on local install it should be debug. Or on release my service URL is http://production, but on local install it should be http://staging. But perhaps the commands section is not the right place for these.

Hmm, no, these are good points.

There is also an interpreter for python which executes the commands in the current python session, modifying os.environ, calling print and subprocess.Popen, etc.

That is similar to the solution used where I currently work, and it seems to work quite well. It would make adoption here easier if that mode just so happened to be supported out of the box :-)

I agree.

As an aside, Rez came up on the StudioSysAdmins mailing list today with Adrian from Flying Bark mentioning that they'd done some work on Windows support. I'm encouraging him to chime in on this conversation, hopefully he'll be able to contribute back. I'm not sure if anyone here is actively using Windows, but it would be nice to have a concrete use case for that OS.

Yes!


nerdvegas commented 10 years ago

Hey one thing I've just remembered... It's crucial that the command system be able to track what env vars are set, updated or deleted by what packages. Rez currently does this, but in a hacky and unreliable way, and this is something a dedicated command language can solve. This is used to detect when two packages are trying to set the same variable, which causes an error. This is a necessary feature since it detects what is effectively a conflict, before it causes runtime bugs.

A


nerdvegas commented 10 years ago

Hey chad, just a question... Let us assume that PATH does not exist, what then would the following command code do?

PATH.append('foo')

Isn't this syntax problematic, because we can't know ahead of time if the env var we want to prepend or append actually exists? Maybe we can assume that PATH exists, but we can't make this assumption with a lot of other env vars, and we really don't want a lengthy if-then-else clause. Why not then:

append('PATH', 'foo')

A


chadrik commented 10 years ago

Re part/thru syntax - firstly, I think it should be zero based, not one based.

ok.

Second, I think we should support all of: Part0, part1 etc Part(0), part(1) etc Version.major, .minor, .patch (and no further)

I'm not a huge fan of providing upper and lowercase options and providing function() and attribute access. it's just too many permutations. after a lot of reflection i definitely like part(1) and thru(1) the best and i think that those and major, minor, patch are all we need. providing too many options makes things more confusing and a bit ugly.

I prefer the brackets over square brackets because as chad pointe out, this becomes rather odd when applied to thru... brackets on the other hand make sense in both cases.

agreed.

Re backwards compatibility. There are a lot of changes in the works now, I think it would be acceptable to break API compatibility, but non API compatibility (cli, yaml syntax etc) must be kept intact, imo.

agreed.

Also had a couple of thoughts on the {foo.bah} syntax. I think if the var referenced doesn't exist, it should expand to nothing - this is often useful when building up path strings for eg. So a test for var existence would look like so: If '{foo.bah}': Pass

templates retrieve the values of objects and insert them into a string. the objects referred to must exist in the current namespace.

assuming the object foo exists and has attribute bah, you could do this:

if foo.bah:
     info('{foo.bah}')

if foo.bah does not exist you get an error. i think this is a pretty sane set of rules.

that said, we are in control of the objects that we put into the namespace when evaluating commands so, for example, we can ensure that version.part(99) and version.patch return an empty string, and thus this would work:

if version.patch:
    info('patch: {version.patch}')
info('this is an empty string: {version.part(99)}')

however, if version does not have a part or patch attribute it will raise an error. this is a good thing, because it prevents errors like simple typos from being silently ignored and becoming difficult to debug.

as another example, i have ensured that pkgs.foo returns an empty string by adding a __getattr__() function to the class, so that you can use it to test if a package is present:

if pkgs.python:
     info('{pkgs.python.version}')

i prefer making exceptions on a case-by-case basis rather than making the lookup mechanism blindly permissive.
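
A rough sketch of that __getattr__ approach (the class names and the placeholder object are illustrative assumptions, not the actual branch code):

class MissingPackage(object):
    # stands in for a package that is not in the resolve:
    # tests false and formats as an empty string
    def __nonzero__(self):   # __bool__ in python 3
        return False
    def __str__(self):
        return ''

class Packages(object):
    def __init__(self, resolved):
        self._resolved = resolved  # dict of package name -> package data
    def __getattr__(self, name):
        # unknown package names resolve to a falsy placeholder rather than raising
        return self._resolved.get(name, MissingPackage())

pkgs = Packages({'python': object()})  # illustrative resolve
if pkgs.python:
    print('python is present')
if not pkgs.maya:
    print('maya is not present')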

Also, there would be a dedicated function to do the same thing, which would also work in the odd case where you might expect an existing var with an empty string value: If exists('foo.bah'): Pass

i don't think this is necessary, because you can do this:

if hasattr(foo, 'bah'):
    pass

this will work unless you are not even sure that foo exists, but again, we are in control of the namespace and there are only a handful of things being put there and they will always be there.

In cases where you want to suppress var expansion, you would use another inbuilt function: v = literal('{foo.bah}')

i think we can do this with either '{{foo.bah}}' or '!{foo.bah}'.

Something I'm slightly concerned about is the littering in the default namespace of possibly a fair number of inbuilt functions (and its not that obvious what is and is not a command builtin). But, it'd be ugly to have, say, rez.function all over the place. What about maybe putting everything into an 'r' namespace? Or going with CapitalCase for all the builtins, to clearly distinguish them from standard python functions? I'm open to suggestions here, but I just think that the current form makes it that bit harder to glance over command code and tell what's going on.

so far this is what we have in the namespace:

attribute lookups

  • pkgs
  • machine
  • this
  • version
  • root
  • base

commands

  • info()
  • error()
  • setenv()
  • unsetenv()
  • appendenv()
  • prependenv()
  • alias()
  • command()
  • comment()

misc functions

  • building()

we could move building() into a state namespace if we think there may be additional states we want to check for. or we could make the idiom state() == 'building'.

we could also get rid of version, root, and base since they are already on this, but those of course are the most commonly used.

personally, i don't think this is too many things in the namespace. this code is supposed to be compact and highly tailored to doing a few very specific things.

Hey one thing I've just remembered... It's crucial that the command system be able to track what env vars are set, updated or deleted by what packages. Rez currently does this, but in a hacky and unreliable way, and this is something a dedicated command language can solve. This is used to detect when two packages are trying to set the same variable, which causes an error. This is a necessary feature since it detects what is effectively a conflict, before it causes runtime bugs.

yup, i'm already tracking changes per package and storing them on ResolvedPackage.commands (the raw string is going on ResolvedPackage.raw_commands)

I have not yet re-added the overwrite check you're talking about. thanks for reminding me.

Hey chad, just a question... Let us assume that PATH does not exist, what then would the following command code do?

PATH.append('foo')

Isnt this syntax problematic, because we can't know ahead of time if the env var we want to prepend or append actually exists? Maybe we can assume that PATH exists, but we can't make this assumption with a lot of other env vars, and we really don't want a lengthy if-then-else clause. Why not then:

append('PATH', 'foo')

functionally they are the same, it's just a difference in style.

in a normal python session when you do something like:

foo

what you are essentially doing is globals()['foo'] or locals()['foo'] depending on where you are. the rez command code is being run using the exec statement, which takes a python dictionary as the namespace. but i'm using a dictionary subclass that handles the special treatment of ALL_CAPS variables as environment variables.

the lookups on the dictionary redirect to either the real namespace or the special environ dictionary.

    def __setitem__(self, key, value):
        if self.ALL_CAPS.match(key):
            self.environ[key] = value
        else:
            if isinstance(value, basestring):
                value = self.expand(value)
            self.vars[key] = value

    def __getitem__(self, key):
        if self.ALL_CAPS.match(key):
            return self.environ[key]
        else:
            return self.vars[key]

so, bottom line is, it works without error because the namespace dictionary detects that the variable is ALL_CAPS and self.environ[key] will always return a valid EnvironmentVariable class.

i prefer the current, method-based approach because it makes the files very easy to read, with all the variables nicely lined up on the left. but, what you've suggested is already supported as well :)
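
To make the excerpt above concrete, here is a self-contained sketch of the idea (the class names and the EnvironmentVariable stand-in are assumptions based on the description, not the actual branch code; it uses the python 2 exec statement, as in the thread):

import re

class EnvironmentVariable(object):
    # minimal stand-in: records the operations applied to one env var
    def __init__(self, name):
        self.name = name
        self.changes = []
    def append(self, value):
        self.changes.append(('append', value))
    def prepend(self, value):
        self.changes.append(('prepend', value))
    def set(self, value):
        self.changes.append(('set', value))

class CommandNamespace(dict):
    # namespace for exec'ing command code: ALL_CAPS names are env vars
    ALL_CAPS = re.compile('^[_A-Z][_A-Z0-9]*$')
    def __init__(self):
        dict.__init__(self)
        self.environ = {}
    def __getitem__(self, key):
        if self.ALL_CAPS.match(key):
            # always hand back a live env-var object, even if the var is unset
            return self.environ.setdefault(key, EnvironmentVariable(key))
        return dict.__getitem__(self, key)
    def __setitem__(self, key, value):
        if self.ALL_CAPS.match(key):
            self.environ.setdefault(key, EnvironmentVariable(key)).set(value)
        else:
            dict.__setitem__(self, key, value)

ns = CommandNamespace()
exec "PATH.append('foo')\nMYVAR = 'bar'" in ns
print(ns.environ['PATH'].changes)   # [('append', 'foo')]
print(ns.environ['MYVAR'].changes)  # [('set', 'bar')]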

nerdvegas commented 10 years ago

On Saturday, November 2, 2013, Chad Dombrova wrote:

Re part/thru syntax - firstly, I think it should be zero based, not one based.

ok.

Second, I think we should support all of: Part0, part1 etc Part(0), part(1) etc Version.major, .minor, .patch (and no further)

I'm not a huge fan of providing upper and lowercase options and providing function() and attribute access. it's just too many permutations. after a lot of reflection i definitely like part(1) and thru(1) the best and i think that those and major, minor, patch are all we need. providing too many options makes things more confusing and a bit ugly.

Sorry I'm on an ipad, the capitalisation was unintended.


Also, there would be a dedicated function to do the same thing, which would also work in the odd case where you might expect an existing var with an empty string value: If exists('foo.bah'): Pass

i don't think this is necessary, because you can do this:

if hasattr(foo, 'bah'): pass

this will work unless you are not even sure that foo exists, but again, we are in control of the namespace and there are only a handful of things being put there and they will always be there.

Ok that all sounds reasonable.

In cases where you want to suppress var expansion, you would use another inbuilt function: v = literal('{foo.bah}')

i think we can do this with either '{{foo.bah}}' or '!{foo.bah}'.

Yeah that should do.. I prefer the former.


we could move building() into a state namespace if we think there may be a additional states we want to check for. or we could make the idiom state() == 'building'.

we could also get rid of version, root, and base since they are already on this, but those of course are the most commonly used.

I like state namespace... Perhaps the build type (debug etc) could go here also.


so, bottom line is, it works without error because the namespace dictionary detects that the variable is ALL_CAPS and self.environ[key] will always return a valid EnvironmentVariable class.

i prefer the current, method-based approach because it makes the files very easy to read, with all the variables nicely lined up on the left. but, what you've suggested is already supported as well :)

Ok neat, I didn't realise you were subclassing the environ dict.


mstreatfield commented 10 years ago

With the introduction of multiline string blocks in the commands section of the package.yaml, and more complex logic (apparently only limited by python itself), should we make an include or import function available?

I like the idea that the package.yaml file is easily human readable, and also that I can use an IDE with syntax highlighting etc when crafting the commands with richer logic. This implies it might be preferable to keep the actual commands outside of the package.yaml, with the package.yaml file containing only import("commands.py").

I guess this would actually be a rez-enabled-python equivalent of the source function we discussed earlier to source other bash/tcsh/bat scripts when the environment is created.

What do you think?

instinct-vfx commented 10 years ago

Hi there,

i just recently stumbled back over rez and it seems to solve quite a few of my problems. Thanks for all the great work so far! And thanks Chad for all the shell-agnostic work. I see you are also keeping windows in the loop, which is great. We are a pure Windows-based shop. For the fun of it i gave rez a quick try in cygwin. Is windows actually being worked on or only in there for completeness? Is there anything that needs help on the windows end?

Kind regards, Thorsten

chadrik commented 10 years ago

Hey Mark, I was thinking about something along the same lines.

for storing the python commands in an external file, i think that a yaml include directive might be the best option. if the file has a .yaml or .yml extension, the directive would load it as a yaml sub-document; otherwise it would read the contents of the file in as a multiline string, as in the example below.

e.g.

name : foo
commands : !include commands.py
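
As a rough illustration of how that directive could be wired up (assuming PyYAML; resolving paths relative to the package.yaml is left out for brevity, and none of this is existing rez code):

import os
import yaml

def include_constructor(loader, node):
    path = loader.construct_scalar(node)
    with open(path) as f:
        if os.path.splitext(path)[1] in ('.yaml', '.yml'):
            # splice the file in as a yaml sub-document
            return yaml.safe_load(f)
        # anything else (e.g. commands.py) comes in verbatim as a string
        return f.read()

yaml.SafeLoader.add_constructor('!include', include_constructor)

# with the package.yaml above, yaml.safe_load(open('package.yaml'))['commands']
# would now be the contents of commands.py as a multiline string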

i think a python include or source function like you describe (we can't use import because that's a python keyword) might be difficult to do, because it would not get run until the "commands" string is actually executed via the python exec statement, at which point we would need to get that additional code into the local namespace where all the rez objects are set up. essentially, the behavior we want is a bash source or a C include, but python does not natively support this.

i think if we reframe the problem as "including" a file into the yaml document it makes more sense. also, it could be pretty damn handy in other cases, like including chunks of configurations that are basically the same between various versions of an application.

all that said, this feature adds a new level of complexity to things. for example, we'll need to install these files during rez-build/rez-release, which means we'll need a way to either manually specify or automatically determine what external files a package.yaml references. should we require that included files always reside in, or below, the same directory as the package.yaml? or do we allow installing (and potentially overwriting) higher-level files that may be referenced by other packages or versions of the same package, and thus change their behavior? it could get pretty messy and I don't think I will have time to go down that road during this development sprint.

ok, on to the proposed source or include command for shells.

the main problem i see with this feature is that the files being sourced are shell-specific: you can't source a .sh file from tcsh (well, you can, but you'll get errors). we could use the extension to detect what type of file is being included and only run it if the commands are "interpreted" into a shell that supports it.

for example:

source('{root}/scripts/myfile.sh')

if this python code was used to generate bash or sh commands, myfile.sh would be sourced, but if it was used to generate tcsh commands it would be skipped. there are obvious problems with this solution: namely, if rez behaves differently depending on the shell you load your package in then we've sort of failed at shell-agnosticism.
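
To make that concrete, the per-shell interpreter could gate source() on the file extension; something like the following sketch (the class and attribute names here are illustrative, not the real rez.rex interfaces):

import os

class BashInterpreter(object):
    # extensions this interpreter is willing to source
    supported_extensions = ('.sh', '.bash')

    def __init__(self):
        self.lines = []

    def source(self, path):
        ext = os.path.splitext(path)[1]
        if ext and ext not in self.supported_extensions:
            # the file targets a different shell: skip it (or warn)
            self.lines.append("# skipped %s: not a bash/sh script" % path)
            return
        self.lines.append("source %s" % path)

A tcsh interpreter would carry ('.csh', '.tcsh') instead, which is exactly where the per-shell divergence creeps in.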

lastly, Windows.

hi Thorsten! i'm pretty sure that rez would not work on windows without cygwin or something similar at this point; however, it is getting closer. i've slowly been converting all path manipulations to use os.path.join, but interpreting the python commands into a DOS shell is not working yet. i've put in some reference code from our in-house system to help along any brave soul who wants to tackle this task, but it won't be me :) i don't have a windows machine at home or at work. if you want to try your hand at implementing this, i'd be glad to help you along. the relevant python module is rez.rex in my shell-agnostic branch.

instinct-vfx commented 10 years ago

Hi Chad,

thanks for the input! So basically i would implement all methods from the CommandInterpreter base class in the WinShell class, right? And make sure they behave as closely as possible to their sh counterparts.
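
For concreteness, a very rough sketch of what that could look like (the method names are assumptions on my part, not the actual CommandInterpreter signatures):

class WinShell(object):
    def __init__(self):
        self.lines = []

    def setenv(self, key, value):
        # cmd.exe: plain `set`, no export step needed
        self.lines.append('set %s=%s' % (key, value))

    def appendenv(self, key, value):
        # cmd.exe uses ';' as the path separator and %VAR% expansion
        self.lines.append('set %s=%%%s%%;%s' % (key, key, value))

    def get_output(self):
        return '\n'.join(self.lines)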

Regards, Thorsten

instinct-vfx commented 10 years ago

Some more questions after looking into it for a bit. I am not sure why you are using setx and the Registry to set environment variables. I assume this comes from your internal tools in some way? Persisting environments to system or even user space does not sound like a good idea to me. The current shell + subshells is what we want, no?

I obviously might be missing something here, being so new to all this!

Regards, Thorsten

chadrik commented 10 years ago

it has been years since we wrote and used that code on windows, but i recall problems with speed, and possibly with subshell inheritance of env vars, on windows. I'll ask around at work to see if anyone can remember something more specific. i would say you can probably leave that behavior out for now; we will likely rediscover why we added it once we put the base implementation into use.

is the windows powershell something we would need to add separate support for?