CSDUMMI opened this issue 5 years ago
I agree 100%. I haven't worried about it because I'm currently worrying about failure and not success :)
I'm thinking, like you, that p itself should contain just the bare minimum needed to install the next thing it needs, and then the next, and so on.
Project type detection for officially supported project types and a way to download the full support seems like it can still be pretty small. Plus all the functionality for the command dispatch and config file parsing of course.
I think packages could just be a json or yaml file even.
run:"/bin/python"
test:"/bin/python -m pytest $@"
...
Only an example, but this could perhaps replace all your p-python-* files.
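For instance, a minimal Python sketch of how p could load such a mapping and dispatch a command. This is only an illustration: the file name python.yaml, the use of PyYAML, and the key names are assumptions, not how p works today.

```python
# Minimal sketch: load a hypothetical YAML package file and dispatch a command.
# The file name "python.yaml" and the keys ("run", "test") are assumptions.
import shlex
import subprocess
import sys

import yaml  # PyYAML


def dispatch(package_file, command, extra_args):
    with open(package_file) as f:
        commands = yaml.safe_load(f)  # e.g. {"run": "/bin/python", "test": "/bin/python -m pytest"}
    if command not in commands:
        sys.exit(f"unknown command: {command}")
    # shlex.split handles the quoted command line; extra_args plays the role of "$@".
    subprocess.run(shlex.split(commands[command]) + extra_args, check=False)


if __name__ == "__main__":
    dispatch("python.yaml", sys.argv[1], sys.argv[2:])
```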
True. I already support .ini files that could take the place of all those files. I did it this way to test out how it'll work if one calls out to subprocesses in several layers. I want both, to make sure I can cover all use cases and make it easy for people to add their stuff in the way they feel most comfortable with.
What format do you want to use for the packages? We should probably use one that can be parsed quickly in Python, right? And one where you can include everything about a language's support. Could you tell me what the bare minimum of information is that p needs to support a language? Then we could design a package format together.
I think we start with .ini files. This test shows the basics: https://github.com/boxed/p/blob/master/tests/test_p.py#L73
It will most certainly not cover all cases, and maybe especially not how to install the entire infrastructure of a language on a specific OS, but we need to get to a Minimum Viable Product that is useful to many people first. We can make big changes later.
Do you know any benchmarks for parsing .ini, .yaml, and .json?
No. But I think we're a long way from that being very relevant. We have to make it work first! That is, make it actually work for enough languages/project types to be useful enough.
Yes, but I think language packages should be part of the MVP
Depends on one's definition for sure. I already use p for my work so it's usable for some cases already.
Just one idea: Why don't we just use directories or their tar packages as p language support packages? The directory structure would look like this:
python/
├── match.txt
├── env_path
├── init
├── install
├── repl
├── run
├── test
└── uninstall
Then we add these packages' paths to a .pconfig file, like this:
$ cat .pconfig
~/.p/
.p would be the location of the installed packages (there can be more locations).
And it would look like this:
.p
├── python
└── swift
And if you then type:
$ p python install CLI-csdummi
p looks at the directories listed in .pconfig (preferring the first occurrence), and once it finds the python directory in .p, it looks at the binaries in python, finds the install binary, and executes it with CLI-csdummi as the argument.
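A rough Python sketch of that lookup, using only the names from the example above; nothing here is existing p code.

```python
# Rough sketch of the directory-based lookup described above.
# Assumes .pconfig lists one package location per line, e.g. "~/.p/".
import os
import subprocess
import sys


def find_command(language, command, config_path=".pconfig"):
    with open(config_path) as f:
        locations = [line.strip() for line in f if line.strip()]
    for location in locations:  # the first occurrence wins
        candidate = os.path.join(os.path.expanduser(location), language, command)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None


if __name__ == "__main__":
    # e.g. "p python install CLI-csdummi"
    language, command, *args = sys.argv[1:]
    executable = find_command(language, command)
    if executable is None:
        sys.exit(f"no '{command}' found for {language}")
    subprocess.run([executable] + args, check=False)
```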
With this it would be very easy to maintain language support packages and send them over a network, because we can just use tar to compress the folders.
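For example, packing and unpacking such a directory needs nothing beyond the standard library; the paths here are just placeholders.

```python
# Sketch: pack a language support directory into a tarball and unpack it
# into a package location. All paths are placeholders.
import os
import tarfile


def pack(language_dir, archive_path):
    with tarfile.open(archive_path, "w:gz") as tar:
        # Store the directory under its own name inside the archive.
        tar.add(language_dir, arcname=os.path.basename(os.path.normpath(language_dir)))


def unpack(archive_path, packages_dir):
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(packages_dir)


pack("python", "python.tar.gz")
unpack("python.tar.gz", ".p")
```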
It certainly has benefits. It's easier to remove a package for example. The problem is that you can't as easily add/override commands.
What do you mean? You can simply add a binary to the right folder, and the command will be invoked. There are of course some meta files, like the match string for project type detection; I would put those into the package, either as hidden files or with a file extension that we can filter on.
> You can simply add a binary to the right folder, and the command will be invoked.
That's not great. You shouldn't put the package manager stuff in the same bucket as everything else, because then you lose your own customization when you delete that folder (aka uninstall a package). It also means we can't upgrade the package, because it's a mix of what we had and what the user had. It also means you can't shadow things meaningfully.
> You shouldn't put the package manager stuff in the same bucket as everything else,
What is everything else? And if you want to remove a package, you should be aware that your customization of that package is removed as well.
I'm unable to understand your concerns; what is impossible or harder to do with this structure?
It's much nicer if you can have customizations in your home directory or with your dot files.
Where do you think we should put the packages then? We have to store them somewhere, and we can change the directory where we store them.
And many applications already do that: git, kde, java, mozilla, firefox, emacs, ghc, gnome, nano, node, stack, and ssh all have hidden folders in my home folder, so that isn't unprecedented.
I think it should look at the system path and the current directory first, then it could look in directories like you suggest. It increases complexity a bit but might be worth it.
But that means there might be confusion. For example: you have a Python project open and one of your binaries there is called install; should p execute that or .p/packages/python/install if you type p install?
If you can put p language packages anywhere, that will lead to confusion: the user can't find the packages supporting a specific language, because they have to go through every location in PATH plus the current directory. Storing all packages in one place makes it clear where the package for a language resides.
> You have a Python project open and one of your binaries there is called install; should p execute that or .p/packages/python/install if you type p install?
Well no, that's not it. If there is a binary called p-python-install then there will be the name conflict you spell out above. But that's the power of the system! That you can override when needed.
But what happens if the user forgets to delete one such overriding binary and writes another one, say another p-python-install? Which binary will be executed if the user types p install? You can't say beforehand without knowing which one occurs first in PATH, and then the user has to find the old binary, delete it, and run p install again.
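For illustration, a PATH lookup simply returns the first match, so whichever copy comes first silently wins:

```python
# Sketch: shutil.which returns only the first p-python-install found on PATH,
# so a forgotten older copy earlier on PATH shadows the newer one.
import shutil

print(shutil.which("p-python-install"))
```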
And this concept also introduces a vulnerability of sorts. What happens if the user downloads an evil program and adds it to their PATH? The malware could ship a file such as p-python-install and get executed on p install, without the user noticing for a long time. This could also happen if, through a man-in-the-middle attack, the attacker inserts such a file into an otherwise legitimate program.
Sure. But no one seems to care for git. I don't think it's cause for concern.
Can you override git commands?
I think that we can do this, though I don't know how many users would use this feature.
And thus I wouldn't want to use the same naming convention in .p/packages/<language>/<command>, because that is very redundant. But I don't think we should look into every location in PATH either; that would mean looking in places where nobody should put customizations. I would either introduce a new environment variable (P_PACKAGES?) or a file, which is probably easier to edit and use. Oh, and we should print the location of command binaries if they aren't in the default location (.p/packages/).
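A sketch of that lookup, assuming a hypothetical P_PACKAGES variable that is separated like PATH, with .p/packages/ as the default; none of this exists in p.

```python
# Sketch: resolve package locations from a hypothetical P_PACKAGES variable
# (separated like PATH), falling back to a default of .p/packages/.
import os

DEFAULT_LOCATION = ".p/packages"


def package_locations():
    raw = os.environ.get("P_PACKAGES", "")
    return [p for p in raw.split(os.pathsep) if p] or [DEFAULT_LOCATION]


def resolve(language, command):
    for location in package_locations():
        candidate = os.path.join(location, language, command)
        if os.path.isfile(candidate):
            if location != DEFAULT_LOCATION:
                # Print the location when a command comes from a non-default place.
                print(f"note: {command} resolved from {candidate}")
            return candidate
    return None
```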
We could reserve "p which" to display where all commands come from. Including from ini-files etc.
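A sketch of what such a report could print, limited to the sources discussed in this thread; the paths and command names are only examples.

```python
# Sketch of a "p which" style report: for each command, print where it would
# resolve from (a p-<language>-<command> binary on PATH, a package directory,
# or nowhere).
import os
import shutil


def which_report(language, commands, packages_dir=".p/packages"):
    for command in commands:
        path_binary = shutil.which(f"p-{language}-{command}")
        package_binary = os.path.join(packages_dir, language, command)
        if path_binary:
            source = path_binary
        elif os.path.isfile(package_binary):
            source = package_binary
        else:
            source = "(not found; maybe defined in an ini file)"
        print(f"{command}: {source}")


which_report("python", ["run", "test", "install"])
```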
No, that wouldn't be enough; people wouldn't execute p which before each of their other commands. They need to be prompted every time, otherwise who will notice if they execute something they don't intend to?
Or, to propose another solution: we could have two categories of locations in the locations file (.p-locations.ini?). Not trusted would be the default, and the user may consciously mark a specific location as trusted. They are then prompted with the specific path before executing anything from a not-trusted location, and nothing is printed when executing from a trusted location.
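A sketch of that idea, assuming a .p-locations.ini where each section is a location with a boolean trusted flag; the file layout is only a guess.

```python
# Sketch: read a hypothetical .p-locations.ini and prompt before running
# commands from locations that are not marked as trusted.
import configparser


def load_locations(path=".p-locations.ini"):
    config = configparser.ConfigParser()
    config.read(path)
    # Each section names a location; "trusted" defaults to false.
    return {section: config.getboolean(section, "trusted", fallback=False)
            for section in config.sections()}


def confirm_execution(location, trusted):
    if trusted:
        return True  # trusted locations run silently
    answer = input(f"run command from untrusted location {location}? [y/N] ")
    return answer.lower().startswith("y")
```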
I just disagree with everything there. I'm talking about a debug command to help people figure out what p is doing. This is useful when it does something unexpected.
Prompting every time would make the entire project of p totally worthless and void.
I think we are just not there yet; we have to develop p and the support for packages. Can we perhaps first agree on how we want to support languages:
1) .p/packages/<language>/<command>
2) p-<language>-<command>
If we want to print the paths of some or all of these commands, we can do that afterwards. But now we first have to decide how we want to continue the development; we especially have to answer the question about the language: https://github.com/boxed/p/issues/4

I think packages like you suggest are a nice idea. But they should be in addition to what already exists. So p should first try to find a binary p-python-foo, and then if that doesn't exist, take .p/python/foo.
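A sketch of that order; the helper here is made up, not existing p code.

```python
# Sketch of the lookup order described above: a p-<language>-<command> binary
# on PATH wins; otherwise fall back to .p/<language>/<command>.
import os
import shutil


def resolve(language, command, packages_dir=".p"):
    binary = shutil.which(f"p-{language}-{command}")
    if binary:
        return binary
    fallback = os.path.join(packages_dir, language, command)
    if os.path.isfile(fallback) and os.access(fallback, os.X_OK):
        return fallback
    return None


print(resolve("python", "foo"))
```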
As for the language, I think it's better to stick with python3 until we're ok with the features and usability.
Alright, I agree.
As it seems to be today, support for every language is added to p in this main repository. These are the files that provide support for specific languages:
What is this supposed to look like in a few years, once support has been added for even a fraction of these languages? And if I download p then, do I have to download support for all these languages as well, although I will only use two or three of them at a time?
This approach can't work in such a scenario! I think we should thus think about a more modular approach to installing support for new languages.
For example:
Let's say that we develop a standard for how to implement support for a language.
Then we create a package type (like wheel for Python), and a tool or a p command that creates this package from a folder and uploads it to a server.
Then everybody can just download this package file and incorporate it into their version of p.
What do you think? Should we develop such a standard package for language support?