Open rxrbln opened 3 years ago
Agreed.
A mirror of binary packages would make things easier for a large number of users, notably people with older hardware or non-mainstream architectures, for heavyweight packages such as kernels, etc. For that we would need a maintainers' PGP keyring to sign each produced binary before publishing. The signing phase could easily be integrated as a post-build option.
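A minimal sketch of what such a post-build signing hook could look like, assuming GnuPG is used; the key ID and package name below are hypothetical, not actual T2 infrastructure:

```shell
#!/bin/sh
# Hypothetical post-build hook: create a detached PGP signature for a
# freshly built binary package, so clients can verify it against the
# maintainers' keyring before installing.
sign_pkg() {  # sign_pkg <keyid> <tarball>
	gpg --local-user "$1" --armor --detach-sign --output "$2.asc" "$2"
}

verify_pkg() {  # verify_pkg <tarball>  (expects <tarball>.asc next to it)
	gpg --verify "$1.asc" "$1"
}
```

A client-side `verify_pkg` step could then gate extraction, refusing to unpack any tarball whose signature does not check out.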
hello just saw the dflybsd vt
why not use opkg as a package manager?
it is mostly gzip/tar files anyway
Thanks! While our package tool "mine" historically used a proprietary GEM format, we did away with that a long time ago in favor of plain tar. Extracting a tar is really not a problem, but we still need to maintain our plain-text var/adm meta-data, and using an external packager does not really help with that. What we want to do instead is deliver a unified install experience whether the user installs from binaries or from source, and also provide a better, reasonably extended pre-compiled binary package stream in general. Currently the best plan is therefore to fully integrate this with the refactored t2-src build scripts, likely renamed from scripts/* to just pkg (or "t2", "t2-pkg", or something like that), which can build from source combined with sourcing pre-compiled binary package streams. opkg does not really help here. The new flow would be something like:
# (t2-)pkg install blender
| scanning dependencies ...
| adding openimageio (binary, 3.7 MB)
| adding openjpeg (binary, 2.3 MB)
| blender not in any binary stream, build from source (ETA 23m)? (y/n)
| newer libjpeg version (7.12) in src than in the binary stream, build from source (ETA 2m)? (y/n)
Downloading 2 packages (6 MB) from binary pkg streams
Compiling 1 package from source (ETA 23m), proceed? (y/n)
...
IMHO, we should even target the design in such a way that it replaces our inherited mine gasgui installer, so that installation from distributed media would be identical, re-using the same t2-src scripts to install package sets from the on-ISO "disc" binary stream.
hmm ... why even ask to install from source, if only source packages are available?
given all of deb/rpm/ipkg having post-install script hooks just leverage on the functionality.
the real question to answer is whether binary packages should be separated into dev and non-dev, so you could end up with a binary-only system that installs without a compiler.
ie. having a binary-only install iso variant.
if you don't care about such a thing, just ignore me.
cheers
T
Because not everyone has an Epyc or Threadripper system. I recently built btop on a Mango Pi and it took 45 minutes. Even building xterm on an SGI Octane may take 5 minutes, Perl on a dual-CPU 1 GHz UltraSPARC 20 minutes, ... IMHO it is nice to inform the user that it might take a while, ... Given our cached reference build time we could even roughly estimate the duration.
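For illustration, a back-of-the-envelope sketch of such an estimate, assuming each package caches its build time on a reference machine and the host's relative speed was measured once (function name and numbers are made up, not actual T2 code):

```shell
#!/bin/sh
# Sketch: scale a package's cached reference build time by the host's
# measured speed factor, given in percent (200 = half as fast as the
# reference box).  Integer arithmetic keeps it POSIX-sh friendly.
estimate_eta() {  # estimate_eta <reference-seconds> <host-factor-percent>
	secs=$(( $1 * $2 / 100 ))
	echo "ETA ${secs}s (~$(( secs / 60 ))m)"
}
```

So a package that took 690 s on the reference machine would print `ETA 1380s (~23m)` on a host measured at twice the reference build time.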
Theoretically, at the end of ROCK Linux we experimented with automatically breaking packages up into -devel and -doc by regex matching: man/info etc. to -doc, and .h, .a, etc. to -devel. We could indeed bring something like that to production.
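A minimal sketch of that kind of pattern-based classification (not the actual ROCK/T2 code; the patterns shown are illustrative):

```shell
#!/bin/sh
# Sketch: classify each installed file into the main, -devel or -doc
# sub-package by shell pattern matching on its path.
classify() {  # classify <path>  ->  prints main | devel | doc
	case "$1" in
		*/man/*|*/info/*|*/doc/*)       echo doc ;;
		*.h|*.a|*.la|*/pkgconfig/*.pc)  echo devel ;;
		*)                              echo main ;;
	esac
}
```

Piping a package's file list through such a classifier would yield the three sub-package manifests, e.g. `classify /usr/include/zlib.h` prints `devel`.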
We of course value all user feedback!
omg ... all those nice old systems.
still i remember, the times clifford did rock ...
good to hear that feedback is valued !
cheers, T
Please don't. Having it unsplit and all in one so I "the end-system builder" can decide what to keep is the main reason I still use T2.
scsijon
On 16/7/23 21:07, René Rebe wrote:
/cut
Theoretically, at the end of ROCK Linux we experimented with automatically breaking packages up into -devel and -doc by regex matching: man/info etc. to -doc, and .h, .a, etc. to -devel. We could indeed bring something like that to production.
/cut
the discussion is actually not easy as system builds can be very opinionated.
all of them have pros and cons for the many use-cases there are.
as T2 bills itself: "T2 SDE is not just a regular Linux distribution - it is a flexible Open Source System Development Environment or Distribution Build Kit."
you have to make some choices or step on some toes to not be swamped with work to satisfy everyone.
but looking at the minimal iso and the package tars, it follows open source principles with the best of intent.
i was only arguing that you usually should not try to invent yet another package management system, as there are already enough out there.
just my thoughts
cheers,
T
Well, if we split packages into -dev and -doc by default, you would still have the choice to do with them as you please, e.g. just process $pkg{,-doc,-devel} together in your Puppy Linux post-processing, ... You would actually have more fine-grained choice, which should increase your level of customization and make your life easier.
... looking at your packages (.tar.zstd)
fun fact ... packaging your existing tars with a simple control file would make them ipkg/opkg packages and let you enjoy the benefits.
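To make that concrete, a sketch of wrapping an existing binary tarball into the classic opkg/ipkg layout (`debian-binary` + `control.tar.gz` + `data.tar.gz`, bundled into one gzipped tar); the function name and control-file values are illustrative assumptions:

```shell
#!/bin/sh
# Sketch: wrap an existing binary payload tarball into an .ipk archive
# using the classic opkg layout.  Written in <name>_<version>_<arch>.ipk
# in the current directory.
wrap_ipk() {  # wrap_ipk <name> <version> <arch> <payload-tarball>
	work=$(mktemp -d)
	mkdir -p "$work/data"
	tar -C "$work/data" -xf "$4"        # unpack the existing payload

	# minimal opkg control file (fields here are just examples)
	cat > "$work/control" <<EOF
Package: $1
Version: $2
Architecture: $3
Description: repackaged binary tarball
EOF
	tar -C "$work" -czf "$work/control.tar.gz" control
	tar -C "$work/data" -czf "$work/data.tar.gz" .
	echo 2.0 > "$work/debian-binary"
	tar -C "$work" -czf "$1_$2_$3.ipk" \
		debian-binary control.tar.gz data.tar.gz
	rm -rf "$work"
}
```

E.g. `wrap_ipk sed 4.9 x86_64 sed-4.9.tar` would produce `sed_4.9_x86_64.ipk` from an existing payload tar.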
The current binary package (a tarball, previously additionally wrapped in the obsolete "mine" ".gem" format) is a bit superfluous and dated. Replacing mine with simply extracting the tar using "tar" is long overdue. While at it, adding digital-signature checking would certainly be nice to have, as would optionally "emerging" packages from pre-built cloud locations for the primary architectures. IMHO the next-gen package format should be kept as simple as possible (tar) and simply be scripted around, like the rest of t2 (without the current mine, written in "C", and its "gasgui").
Additionally, this could be designed together with, or combined with, #59, so that installing a binary T2 distribution uses the same refactored "t2" scripts/ as later package additions or updates, e.g. from the 2nd-stage installer:
./t2 add $packages or ./t2 add --system, or whatever we call it, e.g. pkg: ./pkg add --system