I could see benefit in perhaps combining archives within a single OS - but I don't know that creating a cross-OS NAR would be too useful.
It would also be nice to support iOS and Mac OS "fat" binaries (via the lipo command)... which is kind of related to this (a single archive for multiple architectures on a single OS).
On Nov 13, 2013, at 8:08 PM, Greg Domjan notifications@github.com wrote:
> Should NAR (further) support a 'matrix' style build through a single `<configuration>`? NAR already supports a matrix of libraries - static/shared/exe with as many different settings as you like. Other attributes, like architectures & linkers, could be made into multiple entries, i.e. x86, amd64.
> There could be issues with a sparse matrix?
> Or perhaps it should go the other way - no default compilation in the lifecycle, and each part of a matrix is defined as a separate execution.
> One of the things I found annoying, especially with large 'include' areas, was that the noarch copy of includes happened for each library.
The different libraries make separate NAR artifact files currently; it was not my intent to merge the archives, but rather to discuss managing configuration of the build - what to build.
There is another issue that interacts with this, regarding what a particular host can build vs. what the overall deployment might be.
It seems there should be something like a profile section to contain the actual build steps and take into account whether cross-compile tools are available - or should it be launching a sub-job on another build host to satisfy what cannot be built locally?
I think an interesting target might be to have boost as a NAR artifact which covers multiple linker versions, multiple OSes, multiple architectures, static/shared, with/without debug symbols, debug/release. Then we could even add some of the classifiers, such as zlib, icu4c.
There is a (now stale) version of boost wrapped as a 'dumb' jar on googlecode - http://mvnrepository.com/artifact/com.googlecode.boost-maven-project/boost-api
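By contrast, a real NAR artifact could be consumed like any other Maven dependency via `<type>nar</type>`. A minimal sketch - the coordinates here are hypothetical, no such artifact is published:

```xml
<!-- Sketch only: hypothetical coordinates for a Boost NAR artifact. -->
<dependency>
  <groupId>org.boost</groupId>
  <artifactId>boost-filesystem</artifactId>
  <version>1.55.0</version>
  <type>nar</type>
</dependency>
```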
The advantage of the matrix config is just succinctness, right? Seems like a good idea to me.
@GregDomjan's further comments about cross-compile tools etc. are really a separate topic from this issue, right?
And I definitely agree about wrapping Boost as a NAR, so it can be consumed as a dependency. Although Boost is probably too ambitious as a first target. We should probably pick something with a simple, classic `./configure && make && make install` build system first, and that has only one shared library. Boost is dozens of libraries at this point and also has a custom build system... :worried:
Funny you should talk of boost - just a few days ago I created a bunch of pom files and a shell script which will build the boost libraries and package them as individual artifacts. I didn't use NAR - but it could probably be modified to do so.
https://github.com/toonetown/boost-maven
BTW - the build script is intended to be run on OS X, since that's what I use for developing, but you should be able to modify it for any other platform. It also only (currently) builds the platforms that I am using on the projects I am currently working on. But I thought I'd pass the project along for anyone who may be interested.
@ctrueden yeah, matrix config would be about succinctness: adding an extra architecture entry rather than a whole execution block (a hypothetical sketch follows below).
Also, there are some parts of goals that only need to run once for the whole matrix. For instance, avoiding multiple copies of headers would require some effort: either resource filtering with up-to-date checks, or breaking up some of the compile mojo so that the copy could run only once when using executions.
Fair call that boost may not be the best one to start with - I picked it as an example of a fairly popular one. Its optional external dependencies might be better first targets: zlib, or icu4c (which is also several libs).
@toonetown I have boost compiling using BJam for Windows, both 32- and 64-bit, and then use NAR to package up the bits we use. Unfortunately it is using a private version of NAR, so the config isn't compatible either.
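To make the succinctness argument concrete, here is a purely hypothetical sketch of what a matrix-style `<configuration>` might look like - the `<architectures>` element does not exist in the nar-maven-plugin and is invented for illustration:

```xml
<!-- Hypothetical sketch: <architectures> is NOT a real nar-maven-plugin element. -->
<configuration>
  <libraries>
    <library><type>shared</type></library>
    <library><type>static</type></library>
  </libraries>
  <!-- Adding one entry here would extend the whole build matrix,
       instead of requiring a separate <execution> block per architecture. -->
  <architectures>
    <architecture>x86</architecture>
    <architecture>amd64</architecture>
  </architectures>
</configuration>
```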
Well, if you're talking about Boost, there are fundamental problems: typically, different compiler versions on the same operating system/architecture are incompatible with each other.
For example, when I recently compiled a shared library with GNU C++ 4.9 and tried to load it via `dlopen()` (the POSIX equivalent of `System.loadLibrary()`) in a program compiled by XCode's GNU C++ 4.2 (my laptop cannot be upgraded to a newer MacOSX, and 4.2 is the last compiler version supported on my MacOSX 10.6), it crashed. Badly. As in "Abort trap" badly.
It appears that the C++ initializations/deinitializations are not quite compatible between different libstdc++ versions, and worse: the incompatible code seems to be present in the libraries linked to libstdc++.
So if we want to provide Boost as NAR artifacts, we'll need to be very, very careful about compiler versions, and have different classifiers for them, so that we do not get bitten by the above-mentioned problem.
I think C++ linkage is getting away from the initial issue of the matrix compile, though it does potentially add another category to the matrix.
I agree there is an issue with C++ linked as a shared lib - raised as issue https://github.com/maven-nar/nar-maven-plugin/issues/70
Hmm.
The longer I think about this, the more I think that it should not be a functionality supported by the `nar-maven-plugin`.
Instead, the Maven configuration (e.g. with multiple executions) could provide the matrix build functionality. That would separate concerns greatly, and also relieve the `nar-maven-plugin` of having to acquire even more hard-to-maintain features...
Thoughts?
Having multiple executions also seems to me the better way to go, and it leaves configuration aggregation for Maven to deal with.
One annoyance I have is the current grouping of some noarch actions with AOL-specific actions in the current compile/test-compile.
When dealing with boost as an example, copying the headers seems to take minutes; a second execution of compilation copying the headers a second time is a time killer.
How best, then, to manage default execution of compilation vs. the needs of noarch and AOL-specific steps? Should compile/execute/test-compile/test-execute no longer be part of the default lifecycle?
The process of working with includes also seems incomplete, not allowing for initial resource filtering before compile, or for mapping generated includes into the final packaged includes. I'd like to move the 'copy includes' action currently in compile to another goal.
Yeah, Boost is a beast... ;-)
As to copy-includes, it strikes me as something that should probably be performed in the `generate-sources` or `process-sources` phase, not the `compile` phase.
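If that copy were split into a goal of its own, rebinding it would be ordinary Maven configuration. A sketch, assuming a hypothetical `nar-copy-includes` goal (today the copy happens as part of compilation):

```xml
<execution>
  <id>copy-includes-early</id>
  <!-- Hypothetical goal name; no such goal exists in the nar-maven-plugin yet. -->
  <phase>process-sources</phase>
  <goals><goal>nar-copy-includes</goal></goals>
</execution>
```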
@GregDomjan whither now? Shall we close this ticket?
Closing - Current matrix of multiple libraries is already too much variance from the maven style of configuration. It should go the other way
> Current matrix of multiple libraries is already too much variance from the maven style of configuration. It should go the other way
> - each part of a "matrix" is defined as a separate execution.
> - in my opinion, should move to no default compilation in the lifecycle
I have to admit that I do not understand... "no default compilation in the lifecycle"? Surely NAR has to compile something by default... That's what users expect, leastways this user...
Ditto, that's why I'm switching to NAR...
There are a couple of reasons I put "no default compilation" forward.
The input that a lifecycle with default compilation is useful to some is important. So, to give it another twist: there could be additional lifecycles, either as `<packaging>nar-cpptask</packaging>` or `nar-configure`, and/or different lifecycle modules that give different meanings to `<packaging>nar</packaging>`.
> Surely NAR has to compile something by default... That's what users expect, leastways this user...
We simply cannot change the current behavior without breaking backwards-compatibility. That would be a big no-no-no.
> not all modules require compilation - some 'libraries' are just groups of headers, and you have to make a mess of config to disable the default build.
Why? If there's nothing to compile, NAR should simply detect that and continue without compiling anything, just packaging the headers into the `-noarch` artifact.
> what is a sensible default for the 'first' build when you then have to specify additional executions for the others - i.e. we need static and dynamic zlib compiled 32- and 64-bit, against both release and debug shared runtimes.
The Maven way would be to specify multiple `<execution>`s. I used the following `src/main/c/main.c` for testing:
```c
#include <stdio.h>

int main(int argc, char **argv)
{
    printf("%d bit\n", (int) sizeof(void *) * 8);
    return 0;
}
```
and this `pom.xml`:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.github.maven-nar</groupId>
<artifactId>multi-arch-example</artifactId>
<packaging>nar</packaging>
<name>Multi-architecture NAR Executable Test</name>
<version>1.0-SNAPSHOT</version>
<properties>
<skipTests>true</skipTests>
</properties>
<build>
<defaultGoal>integration-test</defaultGoal>
<plugins>
<plugin>
<groupId>com.github.maven-nar</groupId>
<artifactId>nar-maven-plugin</artifactId>
<version>3.2.0</version>
<extensions>true</extensions>
<configuration>
<libraries>
<library>
<type>executable</type>
<run>true</run>
</library>
</libraries>
</configuration>
<executions>
<execution>
<id>32-bit</id>
<phase>compile</phase>
<goals><goal>nar-compile</goal></goals>
<configuration>
<aol>i386-MacOSX-gpp</aol>
<linker>
<name>gcc</name>
<options>
<option>-m32</option>
</options>
</linker>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
The interesting part is of course this one:
```xml
<executions>
  <execution>
    <id>32-bit</id>
    <phase>compile</phase>
    <goals><goal>nar-compile</goal></goals>
    <configuration>
      <aol>i386-MacOSX-gpp</aol>
      <linker>
        <name>gcc</name>
        <options>
          <option>-m32</option>
        </options>
      </linker>
    </configuration>
  </execution>
</executions>
```
You will note that it is quite platform-specific; in my case it targets MacOSX 32-bit. That means that it would be better placed in a `<profile>` that is active on `<os><name>Mac OS X</name></os>`. Note: on Windows and Linux, you would not make the `<profile>` activate automatically, because it must not be assumed that cross-architecture compilers are available there.
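A sketch of that arrangement, reusing the execution from above inside an OS-activated profile (the profile id is arbitrary):

```xml
<profiles>
  <profile>
    <id>macosx-32-bit</id>
    <!-- Activates only on Mac OS X, where the 32-bit cross-target can be assumed. -->
    <activation>
      <os><name>Mac OS X</name></os>
    </activation>
    <build>
      <plugins>
        <plugin>
          <groupId>com.github.maven-nar</groupId>
          <artifactId>nar-maven-plugin</artifactId>
          <executions>
            <execution>
              <id>32-bit</id>
              <phase>compile</phase>
              <goals><goal>nar-compile</goal></goals>
              <configuration>
                <aol>i386-MacOSX-gpp</aol>
                <linker>
                  <name>gcc</name>
                  <options><option>-m32</option></options>
                </linker>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```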
Additional remark: cross-architecture unit testing will be impossible for libraries, because the JVM launched to run the unit tests won't be able to load a library that targets a different architecture. I leave it to you to test whether the above `pom.xml` needs adjusting in the presence of unit tests.
Please also note that the `<linker>` section is required at the moment, for two reasons:
- If `<name>` is not provided, the plugin will throw a `NullPointerException` in line 187 of `AbstractNarMojo`.
- If `<option>-m32</option>` is not provided, the linker will try to link a 64-bit executable, and fail.

I would consider both of these issues to be bugs, and I would really appreciate it if you found the time to fix them.
Oh, an additional note, just in case it is not clear: the `pom.xml` results in two platform-dependent artifacts being built: `multi-arch-example-1.0-SNAPSHOT-i386-MacOSX-gpp-executable.nar` and `multi-arch-example-1.0-SNAPSHOT-x86_64-MacOSX-gpp-executable.nar`. The latter is built thanks to the default execution, the former thanks to the additional `<execution>` section.
@dscho Thanks for the detailed example. I think one of @GregDomjan's points is that having one of the two platform-dependent artifacts be the "default" one is rather arbitrary, and it would be cleaner if all such artifacts were explicitly enumerated in the same way.
> We simply cannot change the current behavior without breaking backwards-compatibility. That would be a big no-no-no.

I agree that we should strongly favor backwards compatibility. But personally I would be willing to go to 4.0.0 if the new feature is powerful/compelling enough. (Not necessarily saying it would be in this case - but it would merit discussion case-by-case. So I wouldn't go so far as to call such breakages a "no-no-no" in general.)
> I think one of @GregDomjan's points is that having one of the two platform-dependent artifacts be the "default" one is rather arbitrary
To the contrary. Remember that there is a main target, the one corresponding to the current JVM. Only with this is it possible to run unit tests of native libraries.
For executables, it is slightly less restrictive. On 64-bit MacOSX, you can run 32-bit MacOSX executables. On 64-bit Windows, you can execute 32-bit Windows executables. On 64-bit Linux, you need 32-bit libraries installed to run 32-bit Linux executables, but most 64-bit Linux installations lack those libraries. So even for native executables, the unit and integration tests cannot generally be run when even only the architecture differs from that of the JVM running Maven.
For all AOs differing from the current JVM (with the notable exception of i386-MacOSX when called from a 64-bit MacOSX JVM) you need cross compilers, something that you simply cannot expect to be available in all developers' setups. Together with the fact that unit tests cannot be run in general for other architectures and/or Operating Systems, the other targets are therefore very much second-class citizens, providing a very, very good reason why the default is the default.
> I agree that we should strongly favor backwards compatibility. But personally I would be willing to go to 4.0.0 if the new feature is powerful/compelling enough
Of course we can go to 4.0.0 if we have any new feature that is compelling enough. That does not merit mentioning. Would the feature this ticket is about merit 4.0.0? Of course, it would even require it according to SemVer - which has been your and my understanding as maintainers of NAR, even if we failed to communicate that clearly. The question, however, is whether we want that feature at all. Given my argument above, I doubt it.
Now, being pretty much the only person who puts energy into the maintenance of the NAR plugin, I would have expected that disagreements are expressed by Pull Requests accompanied by compelling arguments, or at least by strong arguments that are able to convince me. If you think you can still provide the former or the latter, please do not hesitate.
> unit tests cannot be run in general for other architectures and/or Operating Systems, the other targets are therefore very much second-class citizens
Indeed. But there is also the L in AOL: the compiler/linker. If you target two different compilers (say, gcc and MSVC on Windows, or two different incompatible versions of gcc), which should be the default? For consistency and elegance, I like the idea that all AOLs (at least all AOLs needing explicit configuration) are somehow declared in the same way - ideally without using profiles. I agree with you that it makes total sense for one of the configurations to be autoselected as the default/active one when building, rather than doing nothing by default.
That said, I have not put extensive thought into this issue. My perspective is naive, and born from a simple desire for consistency. I am not personally proposing that anyone undertake any work here and now - rather, I am commenting on what "would be nice."
> being pretty much the only person who puts energy into the maintenance of the NAR plugin

> disagreements are expressed by Pull Requests accompanied by compelling arguments
As you know, PRs for substantial changes require many hours of effort. In cases like this, it makes more sense to discuss and agree before starting down that road.
@dscho you're doing great things and I appreciate your efforts; we just don't seem to come at these things from the same direction. @ctrueden thanks, you have been able to say it better than I could.
Unfortunately we don't all get to stay on track. I had several months of changes while I worked through my understanding of NAR and what would happen with certain changes. In the end many of the changes were going in the wrong direction; just at that time the shared version started to take off, and I got dragged off to other things before I could contribute more than the few small parts that were still relevant, and try to drop some reminder issues. We got partway to starting to use NAR, and then improving our build got put on hold for other priorities, so things like #78 and #66 didn't progress from me - but I did/do intend to get back to them if nobody else has done them ahead of me.
> disagreements are expressed by Pull Requests accompanied by compelling arguments

> As you know, PRs for substantial changes require many hours of effort. In cases like this, it makes more sense to discuss and agree before starting down that road.
Unless that PR demonstrates a use case, e.g. by adding documentation suggesting the recommended way to do multi-arch builds. That way, the person who is most interested in the issue gets to propose a way, and others can chime in and provide their own improvements - either as comments or, for more complicated stuff, by offering their own PRs on top of the branch to be merged.
> we just don't seem to come at these things from the same direction.
@GregDomjan that is not necessary. But we should come to an agreement regarding the open tickets: I personally find it very distracting - and misleading! - to have open tickets that are stalled. I find it better to close them when it becomes apparent that the issues are not important enough to be addressed, at least not for the time being.
Should a ticket be closed prematurely, there is no harm at all: simply reopen it when there is new development, or alternatively open a PR when you have one, referring to the original ticket so that the context is not lost.
If you agree with this course of action, it has the further advantage that this project does not look ill-maintained to potential contributors (who might be put off if they see open tickets that have not moved at all throughout the past year).
Would it not be nice to have a cleaner list of open tickets, one that reflects reality better?
But then, I see that you already agree, because you closed this ticket.
> we should come to an agreement regarding the open tickets
@dscho I agree. OK if we discuss further in the discussion thread?