dstillman opened 7 years ago
Or, alternatively, package the Linux version using Flatpak or AppImage.
There's currently a Flatpak package of this on Flathub, although a native one built from source would be better. Per Flathub policy, as the developer you'd be more than welcome to take over responsibility for it.
I added a shell script that creates the AppImage on a fork. It essentially depends on convert (for the icon) and wget, which I expect are available on most systems. It isn't written very cleanly (so I will not make a pull request of it in its current state), but someone could probably take those steps and integrate them properly into the Zotero build system.
Off the top of my head, I know of canoe as another npm-based project that builds AppImages as releases. (Gruntfile? I know very little about this system in general, sorry.) Maybe one can see how it is built there.
@retorquere maintains a repo of zotero debs.
.deb packaging is easy if you have access to the files as laid out just before they're tarballed up, and if you're OK with what is known as a "simple repo". My scripts are pretty stable now, but the way they package Zotero means they're not likely to ever be mainlined into Debian (which is the usual path into derivative distros like Ubuntu): I just put Zotero on disk as it is tarballed, and zotero-standalone-build fetches a binary (Firefox ESR), whereas Debian wants everything to be either built from source or built on an existing Debian dependency.
PPA packaging is a mess and is hard to automate. The only benefit is that it's an easier route into Ubuntu, but they also want from-source builds, so see the previous point.
My scripts would be simple to adapt for packaging as part of the build process -- and they'd be stripped pretty far down, as they'd not have to detect new versions being available, they wouldn't need to cover both Zotero and Juris-M, etc. I'd be happy to relinquish hosting the .debs.
The only real thing that would remain is where to host the binaries -- I host them in GitHub releases, but hosting them on S3 would be simpler still. The most important driver for the hosting decision, I think, is whether you want download stats. GH stats are not great; I don't know about S3. SourceForge is also super easy to set up and has better download stats (and it's allowed to use it purely as a binary hosting service), but SF isn't as popular with the OSS crowd as it once was.
Bintray seemed like the most logical choice, but it was finicky to set up, and I hit upload limits during the first two weeks of testing; I didn't want to be hobbled by the risk that I'd need to put out a release but couldn't.
@retorquere aside: Your various contributions to Zotero, especially BBT have saved days of my life, thank you.
Also, thanks for the high-speed explanation of debs, Emiliano! I wonder if we should be avoiding some of the friction that you identified by packaging it the way Ubuntu seems to want pre-built apps distributed these days, i.e. Snap packages. This process I think is not dissimilar to the flatpak builds, which one can also run on Ubuntu.
Snapcraft explicitly supports pre-built binaries and seems to include hosting, so that sidesteps the hosting and build questions. It might have other frictions, however. I'm not sure whether people have strong commitments to either format, or what the pros and cons of having both would be.
I have no strong opinion on snaps -- I tend to opt for debs when available, but that's probably just because I'm used to them. To be clear, debs support pre-built binaries; it's just that PPAs/Debian build rules don't allow them (which is why e.g. Oracle Java is packaged as a shell script that downloads and installs Java during .deb installation).
Snap tools are likely to be more pleasant to work with, and I say this not because I know them but because I can't imagine them being as frustrating as the .deb tools. It took me ages to figure out what was in the end very simple; there are a thousand ways to package debs, and a lot of the docs and tutorials tend to be outdated and conflicting.
It is not a pleasant process, but once done, it's set and forget. Because I still want to verify I'm releasing the right thing, I release "manually", but that's really just a commit of an auto-updated config file (and that would be skipped for a real Zotero build, because the check I do is just whether it indeed grabbed the correct version); Travis takes care of the actual build & repo update.
One upside of debs vs snaps is that Chromebook users can install those. Other than that, anything that reliably gets me the latest version of Zotero is fine by me. I have no preference one way or the other.
I did have at one point a setup that used a fork of zotero-standalone-build to build a PPA-compatible package from scratch, but the PPA build infra blocks network access during build, so you can't fetch the firefox binary as part of the build.
Since you can't fetch binaries during build, and you're not allowed to have them as "source", PPAs/Debian mainline will just not work. The same goes for anything that relies on npm packages -- they must be existing Debian/Ubuntu packages. I don't see mainlining ever happening for this reason, because it will affect the Electron build in the same way. (edit: although technically you could check in the node_modules directory to get around this, but that's yucky)
Self-hosted debs don't have this problem, and neither do snaps/flatpaks.
I can convert (simplify, mainly) my existing script for https://github.com/retorquere/zotero-deb, but I'd need to know:
(edit: downside of a simple repo is that there's no multiple versions for separate distro-versions, but Zotero doesn't have this in any case)
what is the build environment for releases?
Not Debian-based, but we could use Docker for that part.
what is the stage in the build where the client is laid out on disk as it is about to be tarballed and where is it laid out?
Files are available in dist/ ($DEST_DIR) before the tarball step.
are you OK with a "simple repo" built with apt-ftparchive?
I'm not sure what that means.
where do you intend to host the repo (S3, GH release, something else)?
The repo in this case is…the thing that gets checked for updates? What form does that take? We tend to keep update manifests on actual webservers in case we need to do conditional updates for any reason (e.g., not serving an update to a particular set of clients). Downloadable files would go on S3 with the rest of the binaries. But I can handle adding the upload stuff as long as I know what files need to go where.
Thanks for working on this!
Files are available in dist/ ($DEST_DIR) before the tarball step.
But they wouldn't be in the Docker env, right? The scripts inside docker could just fetch the tarball as I do now.
are you OK with a "simple repo" built with apt-ftparchive?
I'm not sure what that means.
It means that given a Zotero arch/version, there's only one .deb, not one for each Debian/Ubuntu/whatnot release. In a full repo, you can have one version for Ubuntu 18.10, one for 18.04, etc, but I tried setting that up and it was a major PITA. Given that Zotero only has a single build per arch/version, I don't see any benefit in trying.
where do you intend to host the repo (S3, GH release, something else)?
The repo in this case is…the thing that gets checked for updates?
Correct, and that's where the debs are downloaded from.
What form does that take?
Any web server that supports HTTPS and allows files to be downloaded directly with a GET (301/302 redirects are OK); https://hostname/whatever/path/here/can/be/long will work as long as https://hostname/whatever/path/here/can/be/long/InRelease, https://hostname/whatever/path/here/can/be/long/Packages.bz2, etc. also work; e.g., a simple website hosted in an S3 bucket will work.
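For a flat ("simple") repo served from such a base URL, the client-side apt source boils down to a single stanza of this shape (the URL is the placeholder from the example above, not a real repo):

```
# /etc/apt/sources.list.d/zotero.list -- flat-repo stanza; the trailing "./"
# tells apt the Packages index sits directly under the base URL
deb https://hostname/whatever/path/here/can/be/long ./
```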
You can see a full list of assets at https://github.com/retorquere/zotero-deb/releases/tag/apt-get for a simple repo; obviously, a Zotero repo would not host Juris-M binaries. I'd still be maintaining the repo for JM binaries until Frank decides he wants to host them himself, but I'd strip Zotero from the repo.
We tend to keep update manifests on actual webservers in case we need to do conditional updates for any reason (e.g., not serving an update to a particular set of clients).
A particular set of clients as in OS-dependent? Or another discriminator?
Downloadable files would go on S3 with the rest of the binaries.
A redirect to S3 would be OK (technically the assets on GH releases are hosted in S3 buckets, and you get sent there by a redirect), but simple repos (and as far as I can tell, full repos) require all URLs to sit on the same base; you can't have just https://hostname1/Packages and https://hostname2/zotero_5.0.73-1_amd64.deb (the -1 at the end is a version bump so I can fix things in the packaging while keeping the binaries at the same version).
But they wouldn't be in the Docker env, right?
No, but we could easily specify the dist directory as a mounted volume. While we could run this as a separate step after the normal tarball upload, there's no real reason to redownload and extract the same files.
Given that Zotero only has a single build per arch/version, I don't see any benefit in trying.
Right.
A particular set of clients as in OS-dependent? Or another discriminator?
OS version or current Zotero version, mainly. Though I doubt we'd care about distro/version here, and presumably we wouldn't get the Zotero version anyway. So this probably doesn't really matter.
(Basically, this has been useful in the past for things like not serving Zotero above a certain version to users on old versions of macOS, or serving a special manual-upgrade message to clients where the updater was broken.)
Does the build process build both 32- and 64-bit? To finalize the repo you need all packages in place. It's possible to run them one by one, no issue, but: can I detect in the dist directory what architecture (32/64-bit) was being built (other than by inspecting the ELF header of zotero-bin)?
After the done here, there'll be Zotero_linux-i686 and Zotero_linux-x86_64 directories.
OS version or current Zotero version, mainly. Though I doubt we'd care about distro/version here, and presumably we wouldn't get the Zotero version anyway. So this probably doesn't really matter.
(Basically, this has been useful in the past for things like not serving Zotero above a certain version to users on old versions of macOS
I hate to say this, but this is possible using a full repo: you can have varying versions per target distro. It was a major PITA to get even halfway, though; I didn't get it to work last time, and the docs on the process are not set up for easy digestion. Tutorials abound, many of them outdated and conflicting.
or serving a special manual-upgrade message to clients where the updater was broken.)
Broken updaters are out of scope for debs -- Zotero's self-updating is disabled in the debs because they're installed as root; updates come from the deb repo.
After the done here, there'll be Zotero_linux-i686 and Zotero_linux-x86_64 directories.
Ah, but that script has $arch, so it could just pass that in.
I hate to say this, but this is possible using a full repo.
That's OK. There's essentially no chance we'd worry about old distro compatibility the way we've had to worry about macOS and Windows compatibility.
Broken updaters are out of scope for debs
Right.
If we do decide to split things up and use redirects to S3 for the debs, I'll deal with it. For now we can assume they'll just go to the same place.
I hate to say this, but this is possible using a full repo.
That's OK. There's essentially no chance we'd worry about old distro compatibility the way we've had to worry about macOS and Windows compatibility.
You have no idea what a relief this is 😆
If we do decide to split things up and use redirects to S3 for the debs, I'll deal with it. For now we can assume they'll just go to the same place.
That's cool. Then it's mostly done -- at the done point there's Zotero_linux-i686 and Zotero_linux-x86_64? Not "or"?
edit: I see now, it is indeed "and"
Alright, the current script can be started after done; when it finishes, it will have apt/repo ready to upload. A GPG key needs to be present; I've generated mine using this.
"the current script" being https://github.com/zotero/zotero-standalone-build/pull/73
Resurrecting this in light of https://forums.zotero.org/discussion/comment/394321/#Comment_394321
What's the current feasibility of just integrating the .deb build step into our Linux build script?
We run the build script for Linux releases manually in a Linux (but non-Debian-based) environment.
As long as either the tarball or the pre-tar layout is available, it can create the package. The main question is whether you want to retain older debs: when a new deb is added, the old debs must be present to rebuild the index. I do that now by downloading the release when a new build is needed.
Another question is version management. The debs follow the Zotero release, of course, but if a problem is found in the packaging itself, I add an extra sub-minor component to keep the Zotero release number while still prompting apt to upgrade. These sub-minors are semi-automatic; I can explain this once we have the infra in place.
I'm in the last stages of a major simplification of the build script, which should be done over the weekend.
A Debian environment is required, but that is easily done in a GitHub action (which is where it runs now).
The main question is whether it is desired to retain older debs. When a new deb is added, the old debs must be present to rebuild the index. I do that now by downloading the release when a new build is needed.
Yes, our packaging scripts do something similar to generate binary diffs. There's a cache folder of recent downloads, and if they're missing it will download them based on an index of recent release versions and a specified number of versions to support. But what's the practical upshot in this case? These are the older versions that people can install manually by passing a different version number to dpkg/apt? And if they're not present those versions become no longer available for manual installs?
A debian environment is required but that is easily done in a github action (which is where it runs now).
We'd have to do it in Docker. We do tests via CI but not builds.
Yes, our packaging scripts do something similar to generate binary diffs. There's a cache folder of recent downloads, and if they're missing it will download them based on an index of recent release versions and a specified number of versions to support. But what's the practical upshot in this case? These are the older versions that people can install manually by passing a different version number to dpkg/apt? And if they're not present those versions become no longer available for manual installs?
Correct. My assumption is that people might sometimes want to downgrade for diagnosis / emergencies. But if not present during the repo build, they'd be unavailable to apt; if the debs are available elsewhere they could be installed using dpkg.
We'd have to do it in Docker. We do tests via CI but not builds.
Docker would work.
OK, happy to look at whatever you have after you're done working on it. Would be nice to make these part of the official release process.
Do you have any sense of how often most distros are set up to check for and prompt for updates? Should we expect more people to be running out-of-date versions if we make this the recommended installation method?
(One option down the line might be to show an optional notification within Zotero if the current version of Zotero is out of date and the main update system is disabled.)
The default is daily checks.
There are undoubtedly still tweaks needed for the integration, but this would be most of what's needed.
build.py <path to zotero laid out on disk just before it is tar-bzipped> will create a temporary directory build in which it preps the deb package, and a directory repo that is a ready-to-go repo after it has run. This can e.g. be uploaded to an S3 bucket configured for static web hosting, or anything else that makes this directory available under a stable URL.
The build script currently deduces the architecture (32- or 64-bit Intel) from the path being passed, since I could not find an indicator of the arch inside the directory.
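That deduction can be sketched roughly like this, assuming the directory names follow the Zotero_linux-&lt;arch&gt; convention shown later in this thread (deb_arch is a hypothetical helper, not part of build.py):

```shell
# Map a Zotero staging directory name to a Debian architecture string.
# Assumes paths like "staging/Zotero_linux-x86_64".
deb_arch() {
  case "$1" in
    *Zotero_linux-i686*)   echo i386  ;;
    *Zotero_linux-x86_64*) echo amd64 ;;
    *) echo "cannot deduce architecture from: $1" >&2; return 1 ;;
  esac
}
```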
config.ini holds configuration for the build process, currently:
```ini
[maintainer]
email=emiliano.heyns@iris-advies.com
gpgkey=dpkg

[deb]
description=Zotero is a free, easy-to-use tool to help you collect, organize, cite, and share research
dependencies = gnupg

5.0.96.3 = 2
```
The build script will add whatever is in dependencies (comma-separated) as dependencies of the package; TBH I don't recall why I added gnupg as a dependency. Previously this also listed libnss3-dev, which was needed for some systems. The script also automatically adds the dependencies for Firefox ESR, except libgcc and lsb-release.
The maintainer email will be added to the package; it can be anything, but a support email is suggested. The GPG key for signing the package can be generated using:
```shell
cat << EOF | gpg --gen-key --batch
%no-protection
Key-Type: RSA
Key-Length: 4096
Key-Usage: sign
Name-Real: dpkg
Name-Email: dpkg@iris-advies.com
Expire-Date: 0
%commit
EOF
```
where the Name-Real must match the gpgkey in config.ini. Name-Email can be any valid email address.
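Clients will also need the public half of that key to verify the repo. A hedged sketch of how it could be exported (the file name and client-side placement are my assumptions, not what install.sh actually does):

```shell
# Export the public key generated above so it can be shipped to clients;
# "dpkg" is the Name-Real from the key-generation example.
gpg --armor --export dpkg > zotero-archive-keyring.asc
# On a client, this key could e.g. go under /usr/share/keyrings, with a
# signed-by option in the apt source pointing at it.
```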
The line 5.0.96.3 = 2 is not required; it is an example of patch handling for the deb packaging itself. If something in the packaging process is broken and you need to release a new deb package while keeping the Zotero version the same, adding a (higher) number there will cause client systems to download the newly packaged version.
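How such an entry could translate into the package version can be sketched as follows (deb_version is a hypothetical helper; the actual handling lives in build.py). With a bump, apt treats 5.0.96.3-2 as newer than plain 5.0.96.3, so clients upgrade even though Zotero itself is unchanged:

```shell
# Compose the Debian package version from the Zotero version plus an
# optional packaging bump (the "5.0.96.3 = 2" style entry in config.ini).
deb_version() {
  if [ -n "$2" ]; then
    echo "$1-$2"
  else
    echo "$1"
  fi
}
```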
This script can be run several times to get all combos (32-bit, 64-bit, 32-bit beta, 64-bit beta) and will store all those debs in repo, along with whatever .debs were already present. After each run, the repo is updated to include all .debs that are present there.
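The re-index step for a flat repo like this generally boils down to something like the following (a sketch of the apt-ftparchive/gpg workflow, not the actual script; the key name "dpkg" comes from the config above):

```shell
# Re-index every .deb currently in the repo directory, then sign the result.
cd repo
apt-ftparchive packages . > Packages     # scans all .debs found here
bzip2 -kf Packages                       # produces Packages.bz2
apt-ftparchive release . > Release
gpg --local-user dpkg --clearsign -o InRelease Release
gpg --local-user dpkg --armor --detach-sign -o Release.gpg Release
```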
I'm sure I've left many questions, fire away.
I'm getting increasing error reports, and GitHub just says it's a problem in apt, not GitHub, which is correct but doesn't help. Whatever comes in its place, I'd prefer it to be stable from that point on, to minimize user disruption. The script is done except for the part where it is injected into the build process, but I don't know where that would be. It would always still be possible to keep the current behavior, which is to probe for new tarballs and download/rebuild when detected, as I do now. That makes the repo process distinct from the Zotero build.
A temporary mitigation would be a URL/domain under zotero.org that redirects to wherever I put them for now (probably SourceForge); that way, when we do move, it could just redirect to the new place. I could do this with a domain I own, but then the deb users would have to reconfigure their systems, and again when they move to a zotero URL.
This is becoming a matter of urgency for me, as I'm getting more and more reports. If none of the options I mentioned above are feasible, I'm going to have to put something together on a domain of my own, but that introduces more disruption for existing users. If I can at least get a URL I can redirect, which could later be pointed at the official Zotero hosting, I can do everything needed with no future disruptions. But I need to do something now. Not only is the ppa-fixed apt that people are installing very iffy; it doesn't work on all distros, and I think Chromebook users can't use it, so they are excluded from that fix.
OK, sorry. I haven't reviewed this yet, but for now can we just upload some static files to an S3 bucket? How important is it that it can handle arbitrary/conditional redirects later on?
We'd ultimately make the packages available from download.zotero.org, which is a CDN in front of an S3 bucket. E.g., the tarball is at https://download.zotero.org/client/release/5.0.96.3/Zotero-5.0.96.3_linux-x86_64.tar.bz2. Can we just manually upload files to that same folder for now?
Are you building debs for the beta?
A simple bucket would do. All a repo of this kind needs is a URL (it doesn't need to be top-level) from which all repo files can be fetched by appending their filename. The redirect proposal is just so that we can pick a stable URL: if at some point the bucket name/physical location changed, the redirect would mean no action is needed from users. If we already know the stable place, the redirect is unnecessary.
It is possible to upload them to the same bucket you mention. For repo maintenance it might be easier to use a key within the bucket, but it's possible to just use the top level of the bucket and mix them with the other downloads. These files could just be placed there, and it'd work, including the install.sh. If that's the short-term route, I can leave the build process as-is for now, and the files could be taken by Zotero from either the GH release or the SourceForge mirror and manually uploaded to the bucket.
I can also prep things so a GH action would upload the repo to an S3 bucket on change (under a key if need be), test that with a bucket of my own, and we could maybe review that. In that setup, deb releases would trail the tarball releases by no more than two hours (the lowest granularity you can set for scheduled GH actions), but they can also always be run manually for urgent updates. This is close to my current setup, but since it would be Zotero-only (I presume) and not Juris-M, I would simplify some generalizations (although I can imagine we could offer Frank deb hosting, as I currently do).
I do build betas, which are updated nightly using a GH action.
Do the files from each release have to be within the root path? E.g., does zotero_5.0.96.3_i386.deb have to be in the same directory or a subdirectory of the parent directory containing Packages?
Assuming these files all have to be together, I guess it would look like this:
```
/client/release/apt/Packages
/client/release/apt/5.0.96.3/zotero_5.0.96.3_i386.deb
/client/beta/apt/Packages
/client/beta/apt/5.0.97-beta.57+07df7d0de/5.0.97-beta.57+07df7d0de_i386.deb
```
(If the .deb files could be at arbitrary URLs, we could keep those in the main versioned folders (e.g., /client/release/5.0.96.3) and just have the Packages/etc. files in apt folders under the channel folders.)
Also, if Packages/etc. are behind download.zotero.org, we'll have to either set cache-control headers on those files to no more than a few minutes so that CloudFront pulls new versions quickly, or run a CloudFront invalidation as part of the release process.
I think we'd host install.sh on www.zotero.org alongside our existing download redirector (e.g., https://www.zotero.org/download/client/dl?channel=release&platform=mac&version=5.0.96.3), and use something like https://www.zotero.org/download/client/apt/release/install.sh.
Do the files from each release have to be within the root path? E.g., does zotero_5.0.96.3_i386.deb have to be in the same directory or a subdirectory of the parent directory containing Packages?
That doesn't seem to be required.
Assuming these files all have to be together, I guess it would look like this:
```
/client/release/apt/Packages
/client/release/apt/5.0.96.3/zotero_5.0.96.3_i386.deb
/client/beta/apt/Packages
/client/beta/apt/5.0.97-beta.57+07df7d0de/5.0.97-beta.57+07df7d0de_i386.deb
```
If that's desired, that can be done, but the beta and the release can co-exist in one repo; multiple programs can be packaged in a single repo.
(If the .deb files could be at arbitrary URLs, we could keep those in the main versioned folders (e.g., /client/release/5.0.96.3) and just have the Packages/etc. files in apt folders under the channel folders.)
That seems to work. I'll do a test later tonight.
I think we'd host install.sh on www.zotero.org alongside our existing download redirector (e.g., https://www.zotero.org/download/client/dl?channel=release&platform=mac&version=5.0.96.3), and use something like https://www.zotero.org/download/client/apt/release/install.sh.
That should work. The install.sh is release-independent.
Do you want to build them during the build of the Zotero client, or from the tarballs?
Another point of note: all debs that you want to be apt-gettable need to be present during the repo (re)build. I do that now by using the GitHub Actions cache, downloading any debs that are missing for whatever reason but do not need to be rebuilt. If you don't want any of that, detecting that a rebuild is needed is not complex (I have that in use, but it's not yet built into the simplified script); it could just download the lot (or any limited history), rebuild what needs rebuilding, and publish.
Do the files from each release have to be in within the root path? E.g., does zotero_5.0.96.3_i386.deb have to be in the same directory or a subdirectory of the parent directory containing Packages?
That doesn't seem to be required.
Meaning that Filename can be an absolute URL, or a relative URL with ..? Not sure how to square that with https://wiki.debian.org/DebianRepository/Format#Filename
But if we can do this, then yes, I think we'd want to have this layout:
```
/client/release/apt/Packages
/client/release/5.0.96.3/zotero_5.0.96.3_i386.deb
/client/beta/apt/Packages
/client/beta/5.0.97-beta.57%2B07df7d0de/5.0.97-beta.57+07df7d0de_i386.deb
```
There are very occasionally emergencies where we want to pull a release immediately, even if it triggers installation errors, so being able to just rename the folder and invalidate all files for all platforms would be nice.
Do you want to build them during the build of the Zotero client, or from the tarballs?
There's no real difference. They'd be part of the zotero-standalone-build process, which produces the tarball. But the files will all be present, so no need to re-extract the tarball if that can be avoided.
Another point of note - all debs that you want to have apt-gettable need to be present during repo (re)build.
That's fine. We can keep all historical debs mirrored on the build machine.
Do the files from each release have to be in within the root path? E.g., does zotero_5.0.96.3_i386.deb have to be in the same directory or a subdirectory of the parent directory containing Packages?
That doesn't seem to be required.
Meaning that Filename can be an absolute URL, or a relative URL with ..? Not sure how to square that with https://wiki.debian.org/DebianRepository/Format#Filename
I'm building a simplified repo with apt-ftparchive; simplified repos seem to follow different rules (and the apt source stanza is different for them). The repo-building instructions are scattered and seem to range from always-out-of-date tutorials to how-to-design-an-airplane-using-quantum-mechanics detail. This is what I managed to get to work consistently. But I'll know whether these relative URLs work when I have the build script updated (finishing up now) and test an actual deployment to S3. Another benefit of these simplified repos is that you don't need separate repos for each and every Debian/Ubuntu/Mint/etc release.
But if we can do this, then yes, I think we'd want to have this layout:
```
/client/release/apt/Packages
/client/release/5.0.96.3/zotero_5.0.96.3_i386.deb
/client/beta/apt/Packages
/client/beta/5.0.97-beta.57%2B07df7d0de/5.0.97-beta.57+07df7d0de_i386.deb
```
Can do. So you want history on the betas too? It's automatic given the current setup, as the build process will just find all debs under /client. They do need to be findable from a single directory for apt-ftparchive to work.
There are very occasionally emergencies where we want to pull a release immediately, even if it triggers installation errors, so being able to just rename the folder and invalidate all files for all platforms would be nice.
Got it.
Do you want to build them during the build of the Zotero client, or from the tarballs?
There's no real difference. They'd be part of the zotero-standalone-build process, which produces the tarball. But the files will all be present, so no need to re-extract the tarball if that can be avoided.
I just need to have these available in an environment where the apt tools are available, and the existing debs can be found.
That's fine. We can keep all historical debs mirrored on the build machine.
All right. So I assume this will all run in a Docker container then.
I assume the packaging script does not need to upload the results to S3 (something else will take care of it). The script will be callable either for every separate build, or it can be passed a list of directories with laid out Zotero builds, but it does assume it will not be called concurrently.
~Ah, and also: I need to know the architecture (i686 or x86_64) to make the package; I did not find this info in the laid-out structure. I can take the arch either from the directory name (when unpacking the tarballs, the arch is in the directory name) or by inspecting zotero-bin. Preference?~ Using magic is easy and reliable.
So you want history on the beta's too?
Not particularly, but it doesn't matter. It's possible we'd automatically clean up everything but the few most recent ones.
So I assume this will all run in a docker container then.
I suppose. Our Linux build machine isn't Debian-based, so that's probably easiest. We can mount the staging folder and a folder containing the debs into the container as volumes.
I assume the packaging script does not need to upload the results to S3 (something else will take care of it).
Correct.
The script will be callable either for every separate build, or it can be passed a list of directories with laid out Zotero builds, but it does assume it will not be called concurrently.
I'm assuming we'll run this from build.sh for each architecture after this line.
I'm assuming we'll run this from build.sh for each architecture after this line.
Can you get me a find staging -type d of what the script can be expected to find? I think I have a decent idea, but it will make it easier to get the script right.
So you want history on the beta's too?
Not particularly, but it doesn't matter. Possible we'd automatically clean up more than a few recent ones.
Since apt-ftparchive just rebuilds exactly from the debs it finds, any cleanup will automatically propagate to the repo.
```
staging/Zotero_linux-x86_64
staging/Zotero_linux-x86_64/fonts
staging/Zotero_linux-x86_64/defaults
staging/Zotero_linux-x86_64/defaults/preferences
staging/Zotero_linux-x86_64/gmp-clearkey
staging/Zotero_linux-x86_64/gmp-clearkey/0.1
staging/Zotero_linux-x86_64/gtk2
staging/Zotero_linux-x86_64/poppler-data
staging/Zotero_linux-x86_64/poppler-data/nameToUnicode
staging/Zotero_linux-x86_64/poppler-data/cidToUnicode
staging/Zotero_linux-x86_64/poppler-data/cMap
staging/Zotero_linux-x86_64/poppler-data/cMap/Adobe-Japan1
staging/Zotero_linux-x86_64/poppler-data/cMap/Adobe-Korea1
staging/Zotero_linux-x86_64/poppler-data/cMap/Adobe-KR
staging/Zotero_linux-x86_64/poppler-data/cMap/Adobe-GB1
staging/Zotero_linux-x86_64/poppler-data/cMap/Adobe-Japan2
staging/Zotero_linux-x86_64/poppler-data/cMap/Adobe-CNS1
staging/Zotero_linux-x86_64/poppler-data/unicodeMap
staging/Zotero_linux-x86_64/extensions
staging/Zotero_linux-x86_64/extensions/zoteroOpenOfficeIntegration@zotero.org
staging/Zotero_linux-x86_64/extensions/zoteroOpenOfficeIntegration@zotero.org/defaults
staging/Zotero_linux-x86_64/extensions/zoteroOpenOfficeIntegration@zotero.org/defaults/preferences
staging/Zotero_linux-x86_64/extensions/zoteroOpenOfficeIntegration@zotero.org/install
staging/Zotero_linux-x86_64/extensions/zoteroOpenOfficeIntegration@zotero.org/chrome
staging/Zotero_linux-x86_64/extensions/zoteroOpenOfficeIntegration@zotero.org/scripts
staging/Zotero_linux-x86_64/extensions/zoteroOpenOfficeIntegration@zotero.org/resource
staging/Zotero_linux-x86_64/extensions/zoteroOpenOfficeIntegration@zotero.org/components
staging/Zotero_linux-x86_64/dictionaries
staging/Zotero_linux-x86_64/icons
staging/Zotero_linux-x86_64/chrome
staging/Zotero_linux-x86_64/chrome/icons
staging/Zotero_linux-x86_64/chrome/icons/default
staging/Zotero_linux-x86_64/components
```
And same for Zotero_linux-i686.
And where would /client be available to the script?
We can mount that into the container at any path. /files?
I assume the script will take a channel string (e.g., beta) as an argument?
We can easily pass arch (i686, x86_64), channel (release, beta, dev), and version (5.0.96.3, 5.0.97-beta.37+ddc7be75c) as necessary.
With the switch to a single standalone version, it's possible we should put out an official Ubuntu package. I have no idea what's involved with this, but https://forums.zotero.org/discussion/25317/install-zotero-standalone-from-ubuntu-linux-mint-ppa/p1 and https://github.com/smathot/zotero_installer are possibly relevant.
I'm not going to set this up (other than creating an official Zotero account somewhere if necessary), but if it can be made simple enough I'm happy to add it to the existing deploy scripts.