[Open] mk-pmb opened this issue 3 years ago
Looking at the backports directory, I wondered why we use perl for something that basically looks like simple string replacements?
I think the reason is probably that these scripts date back to a time before Python was popular for this kind of task (and perhaps also before it was common for programming languages to support proper regular expressions) ^^
Some general thoughts first:
Building a working .deb package is something that cmake (CPack) can do for us. Thus I would try to extend the current cmake files in the main repository so that they build working .deb packages, and then start from there. If we use cmake to build the package, we might have to unpack and edit the dependencies for older releases before repacking and uploading them.
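For reference, CPack's DEB generator is driven by a handful of cache variables; a minimal sketch follows (the package name, version, and dependency values are illustrative placeholders, not taken from this repository):

```cmake
# Minimal CPack DEB setup; all values below are placeholders.
set(CPACK_GENERATOR "DEB")
set(CPACK_PACKAGE_NAME "example-package")               # illustrative name
set(CPACK_PACKAGE_VERSION "1.0.0")
set(CPACK_DEBIAN_PACKAGE_MAINTAINER "PPA maintainers")  # required by the DEB generator
set(CPACK_DEBIAN_PACKAGE_DEPENDS "libopus0")            # per-release dependency edits would happen here
include(CPack)
```

After configuring and building, `cpack -G DEB` (run from the build directory) would then produce the .deb. The per-release dependency differences the backport scripts currently patch in could become conditional values for `CPACK_DEBIAN_PACKAGE_DEPENDS`.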
Now to your actual arguments:
precise, trusty and xenial
:sweat_smile: probably also before it was common for programming languages to support proper regular expressions
What do you mean by "proper"? The kinds of substitutions we do here could easily be done with even the oldest versions of sed, dating back to 1974. It's still a really good tool for replacing text in files.
I think I would like to use Python as the programming language for writing any of these scripts.
Ok, good luck then. I'm curious to see what it will look like. I like being convinced of new approaches that turn out to actually work. The "usual" debian way seems to be shell scripts, so lots of packaging tools are optimized for that style. My expectation is that either the python code will try to be a shell script, or it will have lots of boilerplate and re-invent some packaging tools. I hope cmake can do all of the actual work, so the python script will be only for figuring out the appropriate cmake incantation.
My fear is that maintenance effort will explode because people use fashionable approaches (Python) instead of efficient ones. Even when using perl, we have stuff like
my $file;
while (<F>) {
  $file .= $_;
}
instead of just
my $file = join '', <F>;
Looking at the backports again, I see that sometimes we have to handle several files, so we do need some script to call sed for each file. (Or we'd need to use dirty hacks that won't help reduce maintenance effort.) But even with a shell script, compare the current trusty backport script with what it could look like:
#!/bin/sh
# -*- coding: utf-8, tab-width: 2 -*-
set -e
sed -re '
s/ libopus-dev,$//
s/libzeroc-ice-dev/libzeroc-ice35-dev/g
s/zeroc-ice-compilers/ice35-translators/g
s/zeroc-ice-slice/ice35-slice/g
' -i -- control
sed -re '
s/CONFIG\*=no-bundled-opus /CONFIG*=bundled-opus /
' -i rules
(The minor changes to the regexps are intentional.)
And sed will even take care of lots of the edge cases of file system access that are foolishly ignored in the perl scripts.
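To preview what those expressions do without touching any file, you can pipe sample text through the same sed program; the input line below is a made-up stand-in for a debian/control entry, not the real file:

```shell
# Hypothetical control-file line, run through the same substitutions
# as the backport script above, printed to stdout instead of edited in place.
printf 'Build-Depends: libzeroc-ice-dev, zeroc-ice-compilers, zeroc-ice-slice\n' |
  sed -re '
    s/libzeroc-ice-dev/libzeroc-ice35-dev/g
    s/zeroc-ice-compilers/ice35-translators/g
    s/zeroc-ice-slice/ice35-slice/g
  '
# → Build-Depends: libzeroc-ice35-dev, ice35-translators, ice35-slice
```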
What do you mean with "proper"? The kinds of substitutions we do here could easily be done with even the oldest versions of sed, dating back to 1974. It's still a really good tool for replacing text in files.
To my knowledge, perl originally was pretty much the only programming language/tool providing what is nowadays commonly expected of regular expressions. In any case, though, this was all pure speculation. I don't actually know why it was written in perl :sweat_smile:
My expectation is that either the python code will try to be a shell script, or it will have lots of boilerplate and re-invent some packaging tools. I hope cmake can do all of the actual work, so the python script will be only for figuring out the appropriate cmake incantation.
I guess we'll have to see what even has to be done once cmake is able to build deb packages for us. If that turns out to be a no-brainer, then I'm fine with shell scripts. If however we require a more complex toolchain (also supporting different command-line flags), then I think it's easier to use Python for that. But as I said: let's see what even remains to be done :point_up:
Your shell script does indeed look better to me than the current version. Given that we have to rewrite the whole process to work with cmake though, I am not even sure whether we'll stick to the current approach or whether we might find a better way of handling this. Thus any work that is done in refactoring these old scripts might not actually be worth it at this point (unless coupled with making them work with the new cmake system) :thinking:
Hi! Thanks for maintaining the PPA. Looking at the backports directory, I wondered why we use perl for something that basically looks like simple string replacements?
My idea is to keep precise, trusty and xenial as they are, until they're eventually phased out.
For artful, bionic, cosmic, disco, eoan, and focal:
It seems the relevant parts are the three regexp lines. I'd switch that script from perl to sed, so we can omit the file I/O stuff. We'd need to either pass the filename as an argument or write a small wrapper script; both would seem a lot more elegant than the perl I/O boilerplate. I'd choose the filename-as-argument route, because that would make it easier to preview the effects.
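A filename-as-argument version could be as small as this sketch (the substitution shown is just one of the existing ones; the script name and preview-to-stdout behavior are my assumptions, and `-i --` would replace `--` once the output looks right):

```shell
#!/bin/sh
# Hypothetical wrapper: apply the backport substitutions to the file(s)
# named on the command line, printing the result to stdout for preview.
set -e
sed -re '
  s/libzeroc-ice-dev/libzeroc-ice35-dev/g
' -- "$@"
```

Running it as `./backport-preview control` would show the rewritten control file without modifying it; swapping `--` for `-i --` would edit in place.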
They seem identical except for the distribution comments. How about I merge the comments so the artful file says it's for all those versions, and make the others symlinks to artful? It seems to me that this would convey intent a lot better. With that, #9 would have been a 1-line change adding a bullet point to the list of versions in the comment, plus 1 added symlink, rather than a lot of copied code.
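The merge could look like the following sketch (run in a scratch directory here; the directory name and file contents are placeholders standing in for debian/backports):

```shell
# Sketch: keep artful as the only real script and turn the other
# per-distribution copies into symlinks pointing at it.
mkdir -p backports-demo && cd backports-demo
printf '# applies to: artful, bionic, cosmic, disco, eoan, focal\n' > artful
for dist in bionic cosmic disco eoan focal; do
  ln -sf artful "$dist"
done
readlink bionic   # → artful
```

Adding a release then really is one comment edit plus one `ln -sf artful <dist>`.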
The only place I could imagine them being called from is ppagen.bash line 112 (if [ -x debian/backports/${DIST} ]; then perl debian/backports/${DIST}; fi), but in my checkout of the files, they aren't marked executable, so I guess they aren't used at all.
Given the control file on current master, the only change the scripts make is to add a second newline character at the end of the file. That's probably why no one noticed that they aren't marked executable. Could we add that blank line in the original file, and then just drop these backport files?