mosteo / alire-old-discussion

Design of an Ada language library repository

Client-side deployment methods #2

Open mosteo opened 8 years ago

mosteo commented 8 years ago

e) E.g. apt-get fetches packages for you and updates only what is necessary, but you need that executable on your machine. The dub executable (dlang's apt-get) works the same way. Mr. Brukardt objected to having to learn and depend on a new local executable. I think someone proposed that everything could be done by the server, but I don't see how dependencies on a user's local machine could be managed ONLY remotely. The only thing I see is the server bundling everything in a zip file, with all the .gpr files automatically generated to build properly for a given machine/OS combination. The user would then have to download the archive manually and launch the build by hand.

OK. I essentially agree about having a client tool. Although I can see Mr. Brukardt's way (requesting a library, which in turn would cause the zipping of every dependency), to me this is a long-term, even optional, goal, since it requires remote code execution (unless JavaScript can do this, which I don't know). To me, we should first focus on client-side tools (preferably just one, a la apt-get, but more on this below) and standard, free, server-side services, just like this one. In other words: a zero-cost solution with no possibility of downtime and no maintenance requirements.

(In that vein, I wouldn't want to have something where you upload your library and it is prepared/stored. I'd stop at the level of a pull request for metadata).

As for the client tool, I'm nowadays a Linux-only man, so with my partial view I'd advocate for either a downloadable, pre-built static* executable, or (yes, I'm going to say it) a Python or shell script. Of course, ideally, if this takes off, the client tool could be just a regular package from the distro. For Windows I only see the static executable way.

Question: does deployment include compilation? Or could we stop, as a first stage, at the source-code-ready stage?

Perhaps we could try to define the bare minimum objectives for a pre-pre-alpha milestone.

*I've never achieved a fully static executable on Linux with GNAT. Even with -static, I ended up with a dependency on ld-linux.so or some such.

ohenley commented 8 years ago

a. Agree with you: free repos (GitHub, Bitbucket); metadata accompanying the libs' source code, in the repo; a client-side tool that handles all platforms in, ideally, the most unified way possible. I propose Go instead of shell scripts and/or Python. More below.

b. What is missing is a way to list, in a centralized fashion, which packages exist. Could we set a tag in the metadata that the client-side tool could crawl to list package availability, a la the AdaIC search engine? Then there would be no need for a centralized website at all.

c. I am nowadays a Linux-only man too. Except at my job... they force-feed me Windows and C++. ;)

d. I would stop at the source-code-ready stage for now.

e. What about a pre-pre-alpha objectives section in the wiki?

Why Go (better said by this guy):

Due to Go's consistent behavior across platforms, it's easy to put out simple command-line apps that run most anywhere. It's another echo of Go's similarities to Python, and here Go has a few advantages.

For one, the executables created by Go are precisely that: Stand-alone executables, with no external dependencies unless you specify them. With Python, you must have a copy of the interpreter on the target machine or an interpreter of a particular revision of Python (in the case of some Python scripts).

Another advantage Go has here is speed. The resulting executables run far faster than vanilla Python, or for that matter most any other dynamically executed language, with the possible exception of JavaScript.

Finally, none of the above comes at the cost of being able to talk to the underlying system. Go programs can talk to external C libraries or make native system calls. Docker, for instance, works this way. It interfaces with low-level Linux functions, cgroups, and namespaces, to work its magic.

http://www.javaworld.com/article/2929811/scripting-jvm-languages/whats-the-go-language-really-good-for.html

mosteo commented 8 years ago

If only it weren't so ugly :) Seriously, I don't object, although perhaps we should go the full Ada way as discussed in #1.

I have added a couple of points in the wiki on the assumption that the client tool is still the way to go, while discussion continues...

Lucretia commented 8 years ago

1) Randy's wrong, plain and simple. If he's not prepared to install a simple package, then it's his loss.
2) All source should be stored on external services (GitHub, Bitbucket, etc.) to reduce complexity and load.
3) The client should be a shared library that implements the functionality, plus an application that uses it to expose the library's features. The library can then be built into other tools, IDEs, etc. (see the sketch at the end of this comment).
4) The server only needs a simple way of getting package description file(s) uploaded, downloaded and updated: initially via the web, later maintained via the command-line tool.

Most of the posts I see about this are trying too hard to overcomplicate this, KISS principle!
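
Regarding point 3, here is a minimal sketch of what that shared library's interface might look like; all names below are invented placeholders, not a real API.

```ada
--  Hypothetical interface of the client library: the command-line tool,
--  IDE plugins, etc. would all call into this instead of reimplementing
--  the logic themselves.
package Repo_Client is

   --  Refresh the locally cached list of known libraries.
   procedure Update_Index;

   --  True if a library with this exact name appears in the index.
   function Exists (Name : String) return Boolean;

   --  Download the sources of Name (and its dependencies) into Dir.
   procedure Fetch (Name : String; Dir : String);

end Repo_Client;
```

The application would then be little more than argument parsing on top of these calls.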

OneWingedShark commented 8 years ago

@Lucretia said: "Most of the posts I see about this are trying too hard to overcomplicate this, KISS principle!"

While I do agree, I think that using these external services would be a bad idea. They tend to be file-bound (text, in particular) instead of considering the program as data which can be verified (ensured compilable, or proven correct with respect to some theorem via a prover). This text-file view of programs is detrimental to some [many, IME] goals of a library repository, particularly ensuring the distribution of quality (usable) code.

In short, I think the extant solutions are a "too simple to be usable/effective" approach. (As in the quote: "things should be made as simple as possible, but no simpler.")

Lucretia commented 8 years ago

You keep talking only about libraries, but there are applications too, and I want to install them using this tool.

OneWingedShark commented 8 years ago

I use 'library' because 'package' could be confusing, given that it is an Ada construct; 'library' doesn't quite have the proper connotation, but neither does 'module' or even 'package'. 'Application' doesn't either, as it excludes the others.

'Compilation', 'compilation unit(s)', and 'source set' are even worse.

Also, by 'install' do you mean merely retrieval? Or retrieval and compiling? Retrieval, compiling and integration?

ohenley commented 8 years ago

@mosteo: No problem with getting everything done in Ada if it is feasible without pain. (I am not familiar with handling "admin issues" in Ada.)

@Lucretia: IMO the idea brought up by OneWingedShark, of guaranteeing/identifying the "viability" of a given dependency configuration, is really interesting and would surely represent a competitive advantage over what other communities have. BUT I think it should be envisioned as a complementary feature at first... because for the moment, we have plain nothing.

In due course the tool could have an option to interrogate the build server and retrieve only a "sound" dependency configuration based on some criteria (latest code for xyz, oldest code only for x, whatever...).

mosteo commented 8 years ago

@Lucretia Well, I may be a bit verbose, but I think the heart of the matter is indeed quite simple since, as @ohenley says, we are starting from nothing :) And I agree with your points above. Retrieval is indeed almost trivial once the metadata format is agreed upon. I see more interesting issues on the submission side of things, given the several conflicting goals there. I'll open a thread about this :D

@OneWingedShark I don't think that the external services are too simple. Ensuring consistency is a matter of not cloning 'master' or 'tip' but 'd268935a9fda'. As long as libraries are added in a tested, compilable state, the problem is solved. The dependency tree roots would then be tuples of '(platform, compiler version, library revision)', and you go from there.
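
To fix ideas, a rough sketch of that root tuple follows; the record and its field names are invented for illustration only.

```ada
with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;

--  Sketch of the '(platform, compiler version, library revision)' tuple:
--  resolution starts from such a root and only ever checks out the exact
--  pinned revisions, never 'master' or 'tip'.
package Dependency_Roots is

   type Root is record
      Platform         : Unbounded_String;  --  e.g. "x86_64-linux"
      Compiler_Version : Unbounded_String;  --  e.g. "GNAT GPL 2016"
      Library_Revision : Unbounded_String;  --  pinned commit, e.g. "d268935a9fda"
   end record;

end Dependency_Roots;
```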

@ohenley Don't other projects use consistent versions on retrieval?

OneWingedShark commented 8 years ago

@mosteo - I think @ohenley is referring to the case where some dependency's latest version is incompatible (public-interface-wise) with the previous version, thus rendering the original non-compilable. A recursive "get the latest version of X" could therefore yield a set that isn't consistent.

mosteo commented 8 years ago

In regard to version consistency, I see that both D and Hackage use semver versioning, so you trust that patch and minor versions won't break compatibility. Basically, it's the same thing binary packaging systems do. I suppose that, if we wanted, we could enable a strict mode in which only known-working version combinations are used; otherwise we accept the risk of breakage from minor updates (which you may still want, for bug fixes).
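
Just to make that rule concrete, here is a minimal sketch (invented names, Ada 2012 expression function) of the compatibility check that assumption implies:

```ada
--  Minimal illustration of "patch and minor versions won't break
--  compatibility": same major, and a minor/patch at least as new as the
--  one the dependency was declared against, is assumed to be fine.
package Semantic_Versions is

   type Version is record
      Major, Minor, Patch : Natural;
   end record;

   function Satisfies (Available, Wanted : Version) return Boolean is
     (Available.Major = Wanted.Major
      and then (Available.Minor > Wanted.Minor
                or else (Available.Minor = Wanted.Minor
                         and then Available.Patch >= Wanted.Patch)));

end Semantic_Versions;
```

A strict mode would instead only accept the exact version combinations recorded as working.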

OneWingedShark commented 8 years ago

@mosteo - WRT semver versioning: why is this an issue? We have the code, so generating the proper version-number portion can actually be done automatically. I realize that my solution comes across as "heavy-handed", but to a PHP programmer the Ada type system is also "heavy-handed"… yet neither is without cause or advantages.

It's so frustrating that I can't seem to articulate the advantages.

mosteo commented 8 years ago

@OneWingedShark I still have to read the paper, so it's probably my fault if I'm not seeing the exact advantages. Will do today.

Versioning is only an issue (in my naïve view) if you want to allow independent library updates (for bug fixes, I guess). If you tie checkouts to proven configurations (i.e. the ones verified at submit time, or tested), there should be no surprises.