Closed: cazlo closed this issue 6 months ago
TurboVNC has always been distributed as a unified package ("always" = for almost 20 years), because a unified package is more intuitive for most users. This reflects the fact that TurboVNC is an enterprise solution, and even though it is fully open source, it has to compete with closed-source solutions (e.g. RealVNC) and hybrid solutions (e.g. ThinLinc, which is based on TigerVNC) in terms of ease of installation and use. Funded development is the only way I can work on open source projects full time, and that funded development generally comes from corporations whose deployment needs tend more toward simplicity than toward saving disk space. Thus, I have to be really sensitive to things such as ease of installation, because those things potentially affect my livelihood by virtue of affecting the uptake of TurboVNC among the types of organizations that are most likely to fund it.
The changes aren't as simple as you suggest. One issue is that the official build system generates the RPMs from the SRPM as a sanity check to ensure that the SRPM can be used as a pristine source. The choice of whether to build the viewer or server is embedded in the RPM spec file, so in fact, what you propose would require modifying the spec file so that it generates two binary packages from the same source package. For symmetry, ease of documentation, and intuitiveness, it would be necessary to do the same for the DEB packages, but that creates a whole new set of issues. It would minimally be necessary to modify the in-tree build/packaging system, the official build scripts, and the User's Guide. Then I would have to answer the inevitable tech support requests from people who didn't get the memo. (I guarantee you that there will be complaints.) Ultimately, you are asking me to contribute unpaid labor to make things easier for you but harder for myself. That doesn't make a lot of sense. I think it would be best if you maintain your own packages to meet your specific downstream needs, since those needs are not well aligned with our upstream needs.
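For reference, generating two binary packages from the one SRPM would mean restructuring the spec file along roughly these lines (the subpackage names and file lists below are illustrative only, not taken from the actual turbovnc.spec):

```
# Illustrative sketch only -- not the actual turbovnc.spec.
# Each %package directive yields a separate binary RPM from the same SRPM.

%package server
Summary: TurboVNC server components
%description server
Xvnc and the supporting server-side programs.

%package viewer
Summary: TurboVNC viewer (including the Java viewer)
%description viewer
The native viewer front end and its Java dependencies.

%files server
/opt/TurboVNC/bin/vncserver
/opt/TurboVNC/bin/Xvnc

%files viewer
/opt/TurboVNC/bin/vncviewer
/opt/TurboVNC/java
```

Every subpackage needs its own Summary, %description, and %files section, which is part of the maintenance burden described above.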
As part of https://github.com/cazlo/gl-vdi-containers/pull/1, I have been investigating options for providing scalable, containerized Linux VDI with performant OpenGL capabilities. TurboVNC in concert with VirtualGL is an attractive solution to this problem.
In this containerized context, it is common practice to minimize external dependencies, pulling in only the binaries that are strictly necessary for the application context. In this spirit, it is natural to split our system into separate client and server images. See the diagram below illustrating a potential deployment of this concept:
In this context, the image defining the **server** box does not necessarily need the `vncviewer` binary (and Java-related dependencies). Similarly, the image defining the **client** box(es) does not necessarily need the `vncserver` binary and related programs.

I have briefly analyzed the "unified" `rpm` installer and discovered that we can save > 50 MB in the server image size if we were to split up the server and client artifacts. Details regarding this finding follow.

I first extracted the 3.1 release RPM into a temporary directory with:

```shell
rpm2cpio turbovnc-3.1.x86_64.rpm | cpio -idmv -D $(pwd)/turbovnc-3.1
```
I then ran `du` against the directory to see the disk use of the artifacts:
Here we can see that, uncompressed, the RPM installer will drop about 69 MB of files onto the image. The vast majority of this, 62 MB, comes from `/opt/TurboVNC/java`, which as far as I can tell is only used by the client viewer, not any server context.

Furthermore, inspecting the `/opt/TurboVNC/bin` folder, we see that most binaries there are only relevant to the server context. Only about 4 KB are relevant to the client context, with the remaining ~6.3 MB being relevant only to the server context.
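The inspection above can be reproduced with commands like the following. A throwaway stand-in tree is created here so the commands run anywhere; against a real extraction, point `du` at `turbovnc-3.1/opt/TurboVNC` instead (the sizes below are placeholders, not the real package contents):

```shell
# Sketch: measure per-directory disk use of an extracted package tree with du.
# The demo/ tree is a fabricated stand-in for the extracted RPM contents.
mkdir -p demo/opt/TurboVNC/bin demo/opt/TurboVNC/java
head -c 65536 /dev/zero > demo/opt/TurboVNC/java/viewer.jar   # 64 KB placeholder
head -c 4096  /dev/zero > demo/opt/TurboVNC/bin/vncviewer     # 4 KB placeholder

# -s: summarize each argument; -k: report sizes in kilobytes
du -sk demo/opt/TurboVNC/bin demo/opt/TurboVNC/java
```
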
Given these findings, I recommend splitting into two separate assets for the client and server contexts of this system. For example, on Linux, the directory structure would end up something like:

- `TurboVNC-server`
- `TurboVNC-client`
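Concretely, based on the extracted tree above (the file placements here are my assumption, not a settled layout), the split might look like:

```
TurboVNC-client/
    opt/TurboVNC/bin/vncviewer        (~4 KB)
    opt/TurboVNC/java/                (Java viewer dependencies; ~62 MB)
TurboVNC-server/
    opt/TurboVNC/bin/                 (vncserver, Xvnc, etc.; ~6.3 MB)
```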
On a cursory look, most of the hooks are already in place for this, with the CMake build having separate options for building the viewer and the server (e.g. https://github.com/TurboVNC/turbovnc/blob/main/CMakeLists.txt#L40). These options appear to be used further along in the CMake build; however, I have not yet deeply investigated the process of building separate packages. My initial look suggests that this could be accomplished with changes to the build scripts in places like https://github.com/TurboVNC/buildscripts/blob/main/buildvnc.linux#L37-L41 and https://github.com/TurboVNC/buildscripts/blob/main/buildvnc#L211-L229.
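If those hooks work as they appear to, one build per package could be configured roughly as follows. The option names here are assumed from the CMakeLists.txt linked above and should be verified against the actual file before use:

```shell
# Hypothetical two-pass build, one artifact per context, from the
# TurboVNC source directory. Option names are assumptions -- verify
# them against CMakeLists.txt before relying on this.
cmake -B build-server -DTVNC_BUILDSERVER=ON  -DTVNC_BUILDVIEWER=OFF .
cmake --build build-server

cmake -B build-viewer -DTVNC_BUILDSERVER=OFF -DTVNC_BUILDVIEWER=ON .
cmake --build build-viewer
```

Each build tree could then feed its own packaging step, which is where the spec-file and build-script changes discussed above would come in.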