etotheipi / BitcoinArmory

Python-Based Bitcoin Software

Autotools gitian 0.94 #304

Closed josephbisch closed 9 years ago

josephbisch commented 9 years ago

@droark

This should be everything needed for deb packages with the Debian reproducible build toolchain and for the RPi version with Gitian, with the exception of the signing stuff. I tested the RPi Gitian build and it reproduces exactly. Since my last reported status, it uses all native_qt48 dependencies from depends instead of from the package manager. The deb package scripts are not tested on this branch, but nothing should have changed from when I tested them on the branch they were on before, where I was able to reproduce the binary package (.deb).

I moved the existing Makefiles to Makefile.old. The documentation will have to be updated to reflect that change. I think that is just the page on the website. I don't see any explanation of building from source within the actual Armory source tree.

We don't currently use the Linux Gitian descriptor. Unless we want to also offer a basic compressed version of the source tree after having built _CppBlockUtils.so, I suggest we delete the descriptor and just go with the scripts in dpkgfiles.

Merging this PR may cause conflicts when merging the Windows and OS X PRs later, because those PRs use Python 3 while this PR reverts to Python 2 in places; we can deal with resolving the merge conflicts when we get to that point.

I had to add native_qt48 and native versions of all of its dependencies. For Raspberry Pi, I only build native PyQt, host Python, and all of their dependencies (native Qt, native Python, etc.). This is because we don't bundle Python and Qt with the Raspberry Pi download. So we just need native PyQt for pyrcc4 and host Python for the Python libs.

That is why the packages/packages.mk file may look a little messy: certain packages are only built if the architecture is not arm.
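For illustration, the gating looks roughly like this; a minimal sketch in depends-style make syntax, where the package names and the $(host_arch) variable are illustrative rather than the actual contents of packages.mk:

# Native tools are needed on every platform (pyrcc4 comes from native PyQt).
native_packages := native_qt48 native_pyqt4
packages := python

ifneq ($(host_arch),arm)
# The RPi download does not bundle Qt/PyQt, so the host versions are only
# built when the target architecture is not arm.
packages += qt48 pyqt4
endif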

The Gitian descriptor for RPi relies on a compressed version of the RPi tools that can be added to gitian-build/inputs/ to make it accessible to Gitian.

@theuni - The explanations you offered and changes you suggested in the past have been very helpful. It would mean a lot to me if you would take another look, particularly at the Gitian/depends stuff for Raspberry Pi. As I have already stated, I was able to reproduce the Raspberry Pi version of Armory, but I am sure there are things I don't do quite right or the optimal way.

theuni commented 9 years ago

@josephbisch I would be happy to help review, but this is a lot to take in at once. Would it be possible to do this in a few chunks, maybe the autotools addition first?

josephbisch commented 9 years ago

@theuni Thanks, and yes, it is a lot to take in, since this PR is basically everything autotools- and Gitian-related for the next Armory version. I'm not sure whether you just want to review it in chunks or actually want me to split it into separate PRs. If it is the former, then to see the autotools stuff you can just look at build-aux/*, autogen.sh, configure.ac, Makefile.am, and cppForSwig/Makefile.am. There is also cppForSwig/cryptopp/Makefile, which I left as a regular Makefile.

Let me know if there is anything I can do to help.

droark commented 9 years ago

Thanks, Joseph. I'll look things over when I can. Just as a heads up, between some last-minute schedule shifting and an upcoming vacation (partially working), it may take a little while before I can fully sign off on this. That said, the quick skim I did earlier looked good. :)

As for potentially splitting up the commit, I'll let @theuni make that call, especially if it makes his life easier. I'm thinking it might make sense to split it up as follows. I know it might be difficult since you basically had to pull in a bunch of stuff from elsewhere. (Cherry picking commits might be the best option in order to preserve the commit history. Your call. I don't want to overwhelm you.)

Thanks.

josephbisch commented 9 years ago

@theuni - Ping. This is just a reminder about this PR.

Let me know if there is anything I can do to help with you reviewing this PR.

Thanks.

droark commented 9 years ago

Okay. Finished my initial pass. I'll attempt the Linux and OS X builds. I'll try to build RPi too if I have time, but I can't verify it due to the lack of equipment available to me at the moment. I'll report back with anything I find and with any code changes I had to make to get everything to work.

josephbisch commented 9 years ago

Okay, I will hold off on making the changes until you report back from the builds. I can always test the RPi build if you send the result to me.

droark commented 9 years ago

Okay. I've done some work and have some incomplete feedback.

Linux: Regular make works. Reproducible Debian (make_deb_package.py) is a work-in-progress.
OS X: Doesn't work out of the box. Some work will be necessary to figure out the best solution (probably use the current OS X build script and modify as needed).
RPi: Not tested yet.

Here's some feedback I have that's not in the code. (By the way, feel free to go ahead and commit the changes I requested.)

I do have a bit more feedback.

Thanks, and sorry again that it took so long to go over this.

josephbisch commented 9 years ago

I think I figured out the -Wdate-time issue. See this commit from the normal dpkg. It adds an option to enable -Wdate-time in the dpkg buildflags, but has it disabled by default. Then look at this file from the reproducible build dpkg. Notice how timeless is set to 1 at the latter link, enabling -Wdate-time. So by using the dpkg from the Debian reproducible build toolchain, -Wdate-time gets added to the dpkg buildflags when building a package in the chroot, and autoconf then picks the flag up from there. I'll look at overriding the dpkg buildflags; otherwise, using gcc < 4.9 is not an option.

I think I just have to do DEB_CFLAGS_STRIP=-Wdate-time in the rules file to remove it from CFLAGS, but I will have to try tomorrow.

theuni commented 9 years ago

Sorry for dropping this, I was at a conference and it fell off my list :\

I'm happy to have a look at it this week if you'd still like a review.

josephbisch commented 9 years ago

@theuni - Thanks, I'm sure Doug will still want you to review this, and I know I do. As you can see, Doug already came up with a number of things that need to be fixed, so this wouldn't be merged before those are all addressed anyway; there is time for you to look at it this week.

droark commented 9 years ago

@josephbisch - Good work. I'll see if utopic produces better results. If so, and you can't override the flag (or overriding it breaks something), we may just have to tell people to use utopic or vivid when building the chroot.

@theuni - I won't turn down a review. :) Thanks. I looked everything over already but I probably missed certain things. If you'd like to go ahead and review, that's fine, although there will be some changes made as bugs are squashed.

droark commented 9 years ago

Okay. Looks like utopic will be the minimum target under cowbuilder if using an Ubuntu target. (I don't know about Debian.) Once I used that instead of trusty as a target, all autotools checks passed, and everything seemed to build properly. I'm looking into a couple of things I saw that are probably normal but need to be double checked.

droark commented 9 years ago

@josephbisch - If you get a chance, can you take a .deb you build and run it on Ubuntu 12.04 and 14.04? I don't have my external HD handy and can't test that the .debs work properly. (Building on 14.04 would be great too, even though I'm sure the autotools glitch you hit won't affect standalone builds.) If you can't, don't worry. I'll check when I return next week.

josephbisch commented 9 years ago

I tested a .deb that I built using a 14.04 chroot and tested it on 12.04 and 14.04 using Virtualbox. Everything was amd64. I didn't test with i386. I was able to do the build with 14.04, because I resolved the -Wdate-time issue. My theory was basically correct, except it was part of CPPFLAGS instead of CFLAGS, so I had to strip it from CPPFLAGS. I ended up using DEB_CPPFLAGS_MAINT_STRIP=-Wdate-time. I don't think this is really significant to us, but I added in MAINT to the variable name, because apparently it allows users to override my choice to strip -Wdate-time using DEB_CPPFLAGS_STRIP if they want.
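For reference, the fix boils down to one line in the rules templates; a sketch, since the exact placement in the templates may differ:

export DEB_CPPFLAGS_MAINT_STRIP = -Wdate-time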

There were some other things I came across when doing the build.

I had to add some more options to the command I ran to create the chroot, because I am on Debian. I don't recall having to use these extra options when making the utopic chroot, but it doesn't make sense that I wouldn't have to, so maybe I am just remembering wrong. I added --mirror=http://my-prefered-mirror.com/ubuntu (mirrors.kernel.org is an example) and --debootstrapopts --keyring=/usr/share/keyrings/ubuntu-archive-keyring.gpg. You have to make sure you have the ubuntu-archive-keyring package installed if you are on Debian. After the additions to the chroot creation command, I was able to follow the directions exactly as they were.
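For example, the chroot creation ends up looking something like this; the base cowbuilder invocation and the distribution are assumptions here, only the --mirror and --debootstrapopts additions are the point:

sudo cowbuilder --create --distribution trusty \
    --mirror http://mirrors.kernel.org/ubuntu \
    --debootstrapopts --keyring=/usr/share/keyrings/ubuntu-archive-keyring.gpg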

@droark - Did you have to modify the rules file to override dh_auto_clean to be able to do the build? I got errors when allowing the default dh_auto_clean to run. They came from the setup.py that exists for py2exe to create the current Windows build; it throws an error since py2exe is Windows-only. I had to add a line to the two rules template files in dpkgfiles (shown below). Maybe 15.04 (or whatever version you used after 14.04 didn't work) does things differently, but I doubt it.
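Concretely, the added line is an empty override target, which stops debhelper from running the default dh_auto_clean (and therefore from invoking the py2exe-oriented setup.py):

override_dh_auto_clean: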

I'll put together commit(s) for all this.

droark commented 9 years ago

@josephbisch - Good work. I may be a bit slow to process everything, as I've fallen ill. :( That said, what you wrote looks good.

Just FYI, I didn't have to override anything. If overrides are necessary for Debian, and they don't affect people using Ubuntu (host or target), it's fine. Feel free to make a commit with changes. I can try them on my system.

josephbisch commented 9 years ago

I'm sorry to hear that you're ill.

I pushed a bunch of commits that fix a lot of the issues. I think all that's left is work on the actual make_deb_package script and pulling in stuff from BC Core (like Qt 5.5).

I overrode dh_auto_clean. I'm not sure why I'm having an issue with dh_auto_clean that you aren't, but I am. It's not urgent that you test it, so take your time. It is probably good for me to finish the rest of the changes first anyway, so that they can all be tested with one build.

I found out about a program called yes (part of GNU coreutils, so it should be on all systems) that I can use in the make_deb_package.py script: piping its output into dh_make automatically answers "y" to dh_make's prompts, automating that step. After that, the whole process should be automated except for entering the password when root privileges are needed.
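The idea boils down to something like the following; the dh_make options shown are placeholders, not necessarily what make_deb_package.py actually passes:

# 'yes' keeps answering "y" until dh_make exits, so no prompt blocks the build.
yes | dh_make --single --createorig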

josephbisch commented 9 years ago

Sorry, by the whole process being automated, I mean apt-get gets passed -y and such. The user still has to copy and paste the commands from the README to create the chroot and then run the make_deb_package.py script. I realized after posting that it sounded like everything happens automatically by running a single script.

droark commented 9 years ago

Thanks. That all sounds good.

I'd like to clarify one thing. In the rules template, override_dh_auto_build is missing "make STATIC_LINK=1". I take it this is intentional due to make_deb_package.py having a static build option. Correct? Just confirming.

josephbisch commented 9 years ago

With Autotools, you now configure static linking via the configure script. Check out the following from the static template:

override_dh_auto_configure:
    dh_auto_configure -- --enable-static-link

With dh_auto_configure, anything after the -- gets passed to the configure script. The --enable-static-link flag causes a variable to be defined that gets passed to the Makefile.
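Under the hood this is a standard Autoconf option; roughly like the following sketch, where the exact macro and variable names in configure.ac may differ:

AC_ARG_ENABLE([static-link],
  [AS_HELP_STRING([--enable-static-link], [statically link _CppBlockUtils.so])],
  [want_static_link=$enableval], [want_static_link=no])
AM_CONDITIONAL([STATIC_LINK], [test "x$want_static_link" = "xyes"])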

The static build option that make_deb_package.py has just causes the static rules template to be used.

droark commented 9 years ago

Thanks. Just to make sure we're on the same page, "./configure --enable-static-link" will enable static linking when done manually, correct? That's my understanding.

josephbisch commented 9 years ago

Yes.

josephbisch commented 9 years ago

I'm looking at the downloading of the deb dependencies, and I'm not sure we're actually able to do it the way I was trying to. With apt-get -d or apt-get --download-only, we would be able to take advantage of apt-get to download the current versions of packages. The problem is that the downloading of the dependencies currently takes place on the host, not in the chroot. So we would have to add the repos for the lowest Ubuntu version we support to the host's sources.list. We could do that and set the priority of those repos low enough that they won't interfere with the normal operation of apt-get for the user, but this doesn't seem ideal.

There may be a way to get the packages from the chroot, but I'm not sure. Anyway, the chroot may be too new a version of Ubuntu.

I notice from looking at the script on the master branch that it never had options for downloading the dependencies or building them from source. They are options that I tried to add, so maybe we don't really need them? Alan must already have a method of getting the dependencies, but I don't see anything in the Armory repo to automate it, so he must just get them from the build computer.

The reason I tried adding the dependency downloading is that people may be running this script on a non-Ubuntu host, or even if it is Ubuntu, it may be too new a version for the offline bundle to use the dependencies from the host. So I wanted an easy way for them to get the dependencies.

@droark - Can you see how Alan currently gets the dependencies and see if that is sufficient? I personally think it is okay if there isn't an automatic way for everyone to create the offline bundle, as long as Alan has a good way of creating it.

droark commented 9 years ago

@josephbisch - Thanks for the update. To be honest, I'm not totally certain of the best path forward. Alan will answer the dependency question in the morning. In the meantime, maybe Cory will have some ideas if he does a review? I'm not intimately familiar with the ins & outs of these kinds of things, unfortunately.

One thing I do know is that there are definitely no dependencies stored in the repo, other than any code we can build (e.g., Crypto++ and LMDB). Alan really wants to avoid bloating the repo with binaries and such. So, any solution that requires the repo to store dependencies is a non-starter.

Also, perhaps I'm misunderstanding the question, but I think it's fair to say that we only support certain configs. What we can do is build, say, using a Trusty chroot and host, along with certain config options, and make those options known to everybody. That's what people check against when they do their own builds. If they want to build using Wheezy or Vivid or something else, that's fine, but they're on their own if the binaries aren't exact matches.

josephbisch commented 9 years ago

What I'm saying is that even if we are using trusty for the builds, we need the dependency packages from 12.04 if we want that to be the lowest version supported by the offline bundle.

droark commented 9 years ago

Ahhh. The only potential solution I know of offhand is to find out which packages 12.04 uses and download those specific packages. This page has more info, which basically boils down to executing something like "apt-get install qt4=4.8.6.ubuntu.1" for each dependency. I believe this will work. It's more work upfront but it should be the least intrusive too.

josephbisch commented 9 years ago

The problem is that apt-get needs to know about the repository that contains the version of the package you want to download. So you have to add a line to /etc/apt/sources.list for 12.04 like the following:

deb http://us.archive.ubuntu.com/ubuntu/ precise main universe

You can see from the following that I tried to do basically what the link stated with Debian, using a version of a package that is only available in Debian Wheezy, when I don't have Wheezy in my sources.list.

joseph@crunchbang:~/temp$ apt-get download vim=2:7.3.547-7
E: Version '2:7.3.547-7' for 'vim' was not found

So we would need to modify the sources.list on the host to have the 12.04 line added to it. Which I feel is intrusive, especially because I don't think people expect that to be modified as part of building a package.

Edit: There apparently is a way to tell apt-get to use an alternative sources.list. So we can ship a sources.list in dpkgfiles that is just able to download 12.04 packages. Then we would use -o Dir::Etc::SourceList=/path/to/my/sources.list to tell apt-get to use our sources.list. See this serverfault post for more details.
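For example, something like this (the path is a placeholder):

apt-get -o Dir::Etc::SourceList=/path/to/precise-sources.list update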

droark commented 9 years ago

Thanks for clarifying. I suspect the serverfault post you linked has the cleanest solution. Intrusive options are really non-starters. This seems best. Having an auditable sources.list file is the best of both worlds.

Also, I see apt supports a config file too? You may want to look into adding a basic config as well, just to be safe. People could have all kinds of weird config settings, and I think it's best to lock down everything as much as possible. In fact, it looks like it might be possible to just set Dir::Etc::SourceList in the config file? If the config file's too much effort, don't hesitate to drop it. I just think it's worth investigating.

josephbisch commented 9 years ago

I have it working actually. I just need to finish integrating it into the script before I commit it. The config file I ultimately went with is:

Dir::State "./var/lib/apt";
Dir::State::status "./var/lib/dpkg/status";
Dir::Etc::SourceList "../sources.list";
Dir::Etc::SourceParts "./parts";
Dir::Cache "./var/cache/apt";
Dir::Etc::trustedparts "/usr/share/keyrings/";

The config file is in dpkgfiles/packages/etc/apt.conf. Next to etc are the directories var and parts. The sources.list file is in dpkgfiles. I pick up the trusted keys from /usr/share/keyrings, because that is where the Ubuntu keys are put when installing the ubuntu-archive-keyring package on Debian. I'll have to verify that they appear there on Ubuntu also, but I don't see why they wouldn't.

In the config file I set SourceParts just to override the default, to prevent the use of any repos the user may have in /etc/apt/sources.list.d/. So packages/parts is empty. And I used trustedparts instead of trusted, because we want to pick up all the keyrings in /usr/share/keyrings/; trusted would only let us specify a single keyring, afaict.

For apt and dpkg to function, certain directories and a status file need to exist under packages/var and packages/etc, so I have the make_deb_package.py script create them (since we can't store empty directories with git).
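The skeleton the script creates looks roughly like this; a sketch, since the exact set of directories apt and dpkg expect can vary slightly between versions:

# relative to dpkgfiles/packages/
mkdir -p var/lib/apt/lists/partial var/cache/apt/archives/partial var/lib/dpkg parts
touch var/lib/dpkg/status   # apt needs a (possibly empty) dpkg status file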

In dpkgfiles/packages, I run the following to download the packages to that directory:

# no need for root permissions, so don't use sudo
apt-get -c etc/apt.conf update
apt-get -c etc/apt.conf download package:arch # package is package name, arch is amd64 or i386

So to summarize, once I push the next commit(s), the user should just need to install ubuntu-archive-keyring (if not on Ubuntu) and then run the make_deb_package script.

There is also the building dependencies from source feature that currently has hardcoded URLs. I think we can use a similar method with apt-get source to make that function robust as well (i.e. prevent problems from URLs breaking). I'm not sure how useful building the dependencies from source is (versus downloading the deb files), because I'm pretty sure that not every dependency of Armory is itself reproducible.

droark commented 9 years ago

Good work! Looks really nice. Just be sure to add appropriate comments, and I'll probably be happy.

Regarding making the source feature more robust, I'll talk to Alan. I'm all for making it more robust, as having to randomly update URLs isn't fun. I just don't know how far down the rabbit hole Alan wants us to go.

droark commented 9 years ago

@josephbisch - See commit 083fc5bb9a45f0fbcb805dce31886451013cec6e for the script Alan ran to get the dependencies for the offline bundle. I don't think you'll need it but it might be nice to check your work against it.

etotheipi commented 9 years ago

My apologies that the script doesn't have comments. It is only used once in a blue moon and I had never even committed it to the project until now.

The script is intended to be run on a fresh install of the target OS (I use a VM for that, of course). It uses the apt-get install command to find the listed packages and all their dependencies. It must be run on a fresh install so that it collects all the dependencies the offline computer (itself in a fresh-install state) needs. If you run it on a computer that's already got Armory running, it will only grab a subset, since apt-get skips dependencies that are already installed.

It uses --print-uris, which tells apt-get not to actually download and install anything, but just give us a list of download links and the expected MD5s of the packages. The script then downloads them, verifies the MD5s, and puts all of them in the target directory. The end result is a full set of .deb files needed to run Armory on a freshly installed, internet-less computer.
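As a rough sketch of that approach (not the actual script from the commit; the package list and the filtering are illustrative):

# Ask apt for the URIs (and hashes) of everything a fresh install would pull in,
# then download the .debs into a directory for the offline bundle.
apt-get install --print-uris -qq python-qt4 python-twisted python-psutil \
    | cut -d"'" -f2 > uris.txt
wget -i uris.txt -P offline-deps/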

As I mentioned to Doug, this is not intended to be run on every release. I have done the download once and have a permanent set of offline deps on the offline computer to be used to make the offline bundles with the new armory package.

josephbisch commented 9 years ago

Thanks for the script. I'm not sure if there is an advantage to using wget instead of using apt-get directly to download the packages. It should be possible to configure apt-get to print out the list of packages including the dependencies of the dependencies and then feed that back to apt-get download. And according to this line in apt, the hash is checked using the best hash available (sha512, sha256, and so on), so there is no need to manually verify the hash if we go that route. Though I'm sure there is still a way to get the hashes to publish.

Though, now that I think about it, since I got the names of the packages by downloading the offline bundle, I should already be downloading all the necessary packages in make_deb_package.py (including the dependencies of the dependencies and so on), so a way to generate the full dependency chain automatically isn't strictly necessary. It would, however, be useful for when the dependencies change.

One advantage of the method I am using right now in make_deb_package.py is that apt-get runs with a config file that tells it to use a temporary working directory that replicates the apt and dpkg structure that normally resides in /etc/ and /var/. It also points to a sources.list for Ubuntu 12.04 that we would ship in the source tree. It would reside in dpkgfiles. So by using non-default directories and a non-default sources.list file, we would allow the script to function on other versions of Ubuntu than 12.04 and also on Debian. We also wouldn't be messing with the user's main apt/dpkg setup. And the script will work even on non-fresh OS installs.

Since the make_deb_package.py script does download the dependencies in a way that won't have URLs breaking in the future, I think I'll push what I have now and then we can see how we want to proceed.

josephbisch commented 9 years ago

I pushed what I have right now. I still need to make the parameters to the script use sensible defaults so that most users can just run the script without parameters (right now all parameters are always required). Also, as I said already, there should be a way to determine the dependencies recursively, rather than having them all hardcoded as I do now.

Note that if you are on Debian, you need to sudo apt-get install ubuntu-archive-keyring.

droark commented 9 years ago

@josephbisch - At a glance, I'm pretty happy. Will take a closer look tomorrow. Keep going the way you're going for now.

Thanks.

etotheipi commented 9 years ago

Carry on with the way you proposed getting packages and dependency trees. You asked how I was doing it, so I answered :)

At the time, I wasn't sure of the best way to guarantee we get the whole dependency tree without doing it on a fresh install. In fact, I'm still not sure: if it's not a fresh install, how does it know which dependencies are needed that wouldn't already be present on a fresh OS install? I would expect the script would have to either fetch just the dependencies that are not already installed, or fetch everything, including low-level system library packages that normally come with the OS. In other words, how does it know when to stop recursing the dependency tree?

For reference, I run my script with the package list on the build from source page: https://bitcoinarmory.com/building-from-source/ : git-core build-essential pyqt4-dev-tools swig libqtcore4 libqt4-dev python-qt4 python-dev python-twisted python-psutil

josephbisch commented 9 years ago

I'll have to wait until tomorrow when I am at my computer to try, but I think I have a solution to the issue of how to know when to stop recursing the dependency tree.

See this page. It seems like we can look at what packages are marked as either required or important and stop recursing when we get to a package from the list of required and important packages. As the link says, there are some packages like the kernel that won't be part of that list, so it might not be the complete solution, but it is a start.

josephbisch commented 9 years ago

It seems to be working. I ultimately couldn't use aptitude, because there appears to be no way to get it to use a custom configuration file (other than modifying the user's config file). Instead I used debootstrap --print-debs precise ... and apt-cache depends --recurse ... with some more commands to format the output the way we need it.
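Roughly, the pipeline looks like this; a sketch in which the exact filtering, the apt.conf path, and the handling of the ubuntu-desktop list are simplified:

# Packages a fresh install already has (debootstrap's base set).
debootstrap --print-debs precise ./precise-tmp | tr ' ' '\n' | sort -u > base.txt

# Full recursive dependency closure of Armory's runtime packages, resolved
# against the shipped 12.04 sources.list via the custom apt.conf.
apt-cache -c etc/apt.conf depends --recurse --no-recommends --no-suggests \
    --no-conflicts --no-breaks --no-replaces --no-enhances \
    swig libqtcore4 python-qt4 python-twisted python-psutil \
    | grep '^[a-z0-9]' | sort -u > all.txt

# Keep only what the base system would not already provide.
comm -23 all.txt base.txt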

The list I end up with, after removing the packages output by debootstrap and those in the dependency list of ubuntu-desktop, is:

libgomp1
python-twisted-conch
python2.7-dev
dpkg-dev
git-man
linux-libc-dev
python-pyasn1
qt4-linguist-tools
gcc
git
liberror-perl
python-twisted-names
make
libquadmath0
qt4-qmake
g++
python-qt4
libtimedate-perl
python-twisted-mail
python-crypto
libexpat1-dev
python-twisted-news
swig2.0
libdpkg-perl
libssl-dev
python-twisted-words
libqt4-test
libc6-dev
libstdc++6-4.6-dev
python-twisted-lore
libqt4-help
gcc-4.6
python-twisted-runner
libqt4-scripttools
libc-dev-bin
libqtassistantclient4
zlib1g-dev
g++-4.6
binutils

As you can see, the list is rather long. But it includes stuff like dev packages, because the list from the building-from-source page on the Armory website contains the packages necessary to build Armory. I'm guessing just feeding swig libqtcore4 python-qt4 python-twisted python-psutil into the script will be enough to run Armory as a user.

When I just feed those packages into the script, I get:

python-twisted-news
python-twisted-lore
python-twisted-conch
python-twisted-names
python-twisted-words
libqt4-test
python-twisted-runner
libqtassistantclient4
libqt4-scripttools
python-twisted-mail
libqt4-help
python-pyasn1
python-crypto
swig2.0

That seems more like what I would expect the list of dependency packages would be for the end-user, but I have to test it on a clean version of 12.04 to be sure. I don't happen to have a clean 12.04 right now, but I did test with a clean Linux Mint 17.2 VM snapshot.

I ran into issues with the package versions when trying to install the dependencies in Linux Mint 17.2 (because 17.2 is based on a newer version of Ubuntu than 12.04). An example of one of the error messages is:

 libqt4-help:amd64 depends on libqt4-network (= 4:4.8.1-0ubuntu4.9); however:
  Version of libqt4-network:amd64 on system is 4:4.8.5+git192-g085f851+dfsg-2ubuntu4.1.

But the Armory website already says that the offline bundle is for 12.04 exact, so I guess it isn't expected to work with Linux Mint 17.2 anyway.

I'm going to commit it the way it is now.

josephbisch commented 9 years ago

When testing on Ubuntu 12.04, I get a message like the following:

dpkg: dependency problems prevent configuration of libqt4-test:
 libqt4-test depends on libqtcore4 (= 4:4.8.1-0ubuntu4.9); however:
  Version of libqtcore4 on system is 4:4.8.1-0ubuntu4.

It looks like libqtcore4 is an older version on my Ubuntu install, but the script I used downloaded the latest version of the Qt packages, so there is a version compatibility problem. Has this been encountered before? Looking at a 0.92.2-testing offline bundle I have lying around, I see that there are packages that end in -0ubuntu4.8, so it looks like I am doing things the same way they are currently done, by including the latest version of each package. I'm guessing that it is expected that the user has an up-to-date 12.04 install on the offline machine as of the release of the offline bundle that the user is using?

etotheipi commented 9 years ago

Actually, the last time we had to update the offline bundle was somewhere between Ubuntu 12.04.3 and 12.04.5. I suspect that might explain what you're seeing in terms of different dependencies between bundles. In other words, once 12.04.5 was the default download for 12.04, I had to re-run my package collector script and update the offline computer with the results for future offline bundles. Then we had to go back and add a note to the download page indicating that previous offline bundles only worked with 12.04.3 exactly and included a direct link (unfortunately, the webpage has been modified multiple times and those messages may have been lost in transition).

And yes, the offline bundle has always been very specific. It's never worked across Ubuntu/Debian versions. I was actually surprised that 12.04 changed enough between .3 and .5 to cause that issue above.

I don't want to spend too much time on this, as the current system is working for us: download once and create the bundles from that. Rather, as long as the deb-building process is reproducible, does the offline bundling process have to be? Kind of, we can publish the offline bundle list with hashes, and anyone can check that the offline bundle actually contains those packages and that the hashes match a known public state (at some time in the past). It essentially becomes reproducible by freezing and publishing the state of the dependencies and then making that part of the bundling process.

Perhaps I'm being stubborn, simply to avoid changing things that are working.

And you're right I copied in the whole list of build packages as well as execution packages. My mistake. The second list you posted looks more like what has been used in the past.

josephbisch commented 9 years ago

I'm moving on; this new approach should work fine as it is now, given that the existing solution has worked fine.

I'm going to give the script default values determined by what the README says and change the arguments to options. Then this should be ready for the closer look, @droark.

theuni commented 9 years ago

What's the reason for all of the native Qt packages and dependencies?

josephbisch commented 9 years ago

@theuni - Needed for native PyQt (for pyrcc4 to run as part of the build), afaict.

theuni commented 9 years ago

@josephbisch rcc is installed into the native path by the qt install. It'd be trivial to do the same for qmake. Rebuilding all of those deps just for tools that have already been built is overkill imo.

josephbisch commented 9 years ago

@theuni - Thank you for the review.

Yes, rcc is installed into the native path, but we need the version of rcc that generates Python code, which is pyrcc. For some reason pyrcc and other tools that are part of PyQt are built for the host instead of for the build system as would be expected. So the only solution I found was to build a native PyQt to get a pyrcc that can run on the build system. Though maybe there is another method for doing what I want that I am missing.

theuni commented 9 years ago

@josephbisch mm, yea, I would expect that to be built for the builder as well. Maybe it ends up with CC/CFLAGS exported in its environment?

Imo it'd be preferable to fix that, or even hack something if necessary, rather than build 2 versions of qt.

droark commented 9 years ago

@theuni - Thanks for the review! Much appreciated. I've learned quite a bit from the process myself. Anyway, I suspect some of your comments go back to how Armory has been built in the past. Joseph tended to use the current build system as a guide, even though there were undoubtedly things that were extraneous, could be done more efficiently, or were even best off removed altogether. So, not only are we upgrading the build system, we're dealing with a bit of technical debt.

@josephbisch - Once you've evaluated the feedback, get in touch and let me know how much work you think it'll take to fix everything. Alan is free to veto me, but I'd rather take a little extra time and solidify things now, especially since this is going to form the basis of the build system for quite a while.

josephbisch commented 9 years ago

Note that with the last commit, I am able to cross compile the OS X version of Armory (just making _CppBlockUtils.so, not the .app; the app is done using osxbuild) by just running the following:

./autogen.sh
./configure --host x86_64-apple-darwin11 --with-osx-sdk-path=/path/to/MacOSX10.9.sdk/
make

Of course, being that osxbuild isn't used, the ArmoryMac module isn't created. So the ArmoryMac code needs to be removed from ArmoryQt.py to run it. But this is an easier way to cross compile the OS X version for testing.

droark commented 9 years ago

@josephbisch - Thanks for the OS X heads up! One thing I've wanted to work on for awhile is some way to quickly recompile the OS X code. This isn't exactly what I need - updated Python files aren't copied over - but it's a start. Something cleaner can be cooked up later.

josephbisch commented 9 years ago

What do you mean by "updated Python files aren't copied over"?

josephbisch commented 9 years ago

You mean a way to cross compile from Linux then copy the source tree to the Mac to test?