OpenCPN / plugins

Container Project for an Integrated Plugin Management Facility

Flatpak build files not consistently available #295

Closed. jongough closed this issue 3 years ago

jongough commented 3 years ago

There is an issue with http://opencpn.duckdns.org/opencpn/opencpn.flatpakref: flatpak builds keep failing because this site does not respond and the request times out. The file being fetched contains:

[Flatpak Ref]
Name=org.opencpn.OpenCPN
Title=OpenCPN - Concise ChartPLotter - Main package
Description=OpenCPN main package, loaded by Plugin.Base
Branch=master
Url=http://opencpn.duckdns.org/opencpn/repo
Homepage=https://opencpn.org
Icon=https://opencpn.org/OpenCPN/assets/img/logoOriginal.png
GPGKey=mQENBFyk1UIBCACwFaQLPtrmEhOddy7LYJ6wmRJs9MNBtSp7F3+3uWX+uycQSW3strOwbyYwiDsuymWwmjRfDDBClYvIrFlUAB/OPiuSimE7CHZVye0zbT9i9W4LMf2uRYtYxkZYAngF6NPye8qANRbwocqn3QPBjlNdjd7OjBSCpBUwy50EW3JtstxKeth3Fn7a6EI3sCkHZLaFxhO3A6uxk89Wpq1lBqWAkkR1zMHrIs1TKrvGP/Dfc0ShYdwWaOeuzkWaDYkF0SZiSQcV3JLkIIJh8KJpos4+izYdmrLkDFdpyXit9OByd42mjbWkHKicAkUKrsjQDL2G6FSXs1IdTBh3j1y88aVPABEBAAG0EmxlYW1hc0BvcGVuY3BuLm9yZ4kBVAQTAQgAPgIbAwULCQgHAgYVCgkICwIEFgIDAQIeAQIXgBYhBBcYLymPkdBZBDv7MPuc+iyUDyOBBQJeYh4bBQkCqpbZAAoJEPuc+iyUDyOB/IYH/Rk9UVwUoe+9kNkR/lFrxoMWr9nPCRVs/8Sa1cc/qjuyp1sHkF4fiOJtbOUIam3mWkz1Kikh52i8A+aWdTQSicFORR2Te8rbi3KMsXA2+KT6IKLw1oq7/uK935PKShIm7JPjCwiWpY87zYynmhsr34/7PbsYY4El9wxyNFBUOxXrJgQwUHAlg4JlMEgm8CTG0eNjCjMRc4EbAD3T4oDRFgtiBAG/cEBgnUYm+O5BSUSg8xPPUXx/1+n35+HMQ9Dd1WTL9o/ZEAqgQWW4kXVGd4oUKZiKW2tWo1mMlKMvCPykk9b8delXcZ/cb04dVeUfqL9j5LPeDLXUurxFd0AWl0S5AQ0EXKTVQgEIAKurHsSJyZCsi0MBYbPd9LFYgf6hQK61TwVq7bk0d9VJxbCBhHl2p+hFJB786cQrPEXOy76QrLvooM8dsP6f/qdUkIb4esBteq07bga+HQOc94KafZ54OWFMn/7g+abU3RAKbM7RwfDj20wvEQuqxMeA5JNzZaamAa84Uq9ZdMpDJL4qN/4VTST5XFgGpRYM+qTC54HRfRh2uOUy13nlYpbUKQU1PcBvuOU6lhRs1fEr+VBbNfVmS1zfagNQ8h0om+65AynWkU9eqCIstaH++6e7bobv1rLo9QZsyMZl14Qw5FQzGKobqy8TdB2/D62SyDJOcIVjef0eB55Ys30cSkkAEQEAAYkBNgQYAQgAIBYhBBcYLymPkdBZBDv7MPuc+iyUDyOBBQJcpNVCAhsMAAoJEPuc+iyUDyOBQa0IAKwnA2CF9V21VeWdhQxSrYgtL8cLcsV7wVAsR5dPvC6gDzbrG6kFd5GRAs28EdsqDJvyhqlGWqcMc1ldEUiE3OpPxCTLHfRoccBgTNYJd2yTPyKjDyqsiEYGtKXBMadUrS0NdOM/d4Yj7mtX9VDS/bl/KWo9W3PJc2IxKeXGEDhOdy8jppFStPCoIOeSTDWGzRAZ9UPMANojbx5cmYrP8SxCSneJGZKbdvDFMEmmmFwHut3PFPdmF8rwkpQQzjXrR5a59pereGU910jWpxaazwMaoJaI8NOOp06XZfufR6IGyMWvhvwtxKW6GG+WfLCta9hqtmzZQiwLeStn3dfkbbc=

This refers to Url=http://opencpn.duckdns.org/opencpn/repo. The error that turns up in the flatpak build is:

+ flatpak install --user -y http://opencpn.duckdns.org/opencpn/opencpn.flatpakref
GLib-GIO-Message: 08:26:37.680: Using the 'memory' GSettings backend.  Your settings will not be saved or shared with other applications.
error: Can't load uri http://opencpn.duckdns.org/opencpn/opencpn.flatpakref: Could not connect: Socket I/O timed out

Is there some other way to install the required files for flatpak?

leamas commented 3 years ago

OK, let me introduce opencpn.duckdns.org: [photo: "opencpn duckdns"]

It's on top of one of my wardrobes, a piece of furniture I occasionally move around (minor rebuilding project...)

No, it's not ideal. To make it more interesting, the power connector has a tendency to slip out at even minor disturbances. In the short term, I could certainly fix this.

But in the longer term I think we need another kind of hosting solution, where we can drop files and have them available under URLs we choose (and also by other means). A virtual ECS instance would be great, with traditional user handling, ssh access, our own web server and whatnot. But in the end, this is about money.

leamas commented 3 years ago

Also, the question is what the plugins need. opencpn.duckdns.org represents the development branch; it is rebuilt from master "nightly". The latest release is available from flathub.org, which certainly is more stable.

Related to this is another issue: both the nightly builds and the flathub.org ones use the master branch. This means that we cannot install two different versions (stable/development) in parallel. We should try to fix this by publishing the nightly builds on another branch like 'devel'.
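
For illustration, flatpak already addresses parallel branches with the app-id//branch syntax; a sketch, assuming a hypothetical 'opencpn-nightly' remote and a 'devel' branch:

# release build, today published on the 'master' branch as described above
flatpak install --user -y flathub org.opencpn.OpenCPN//master
# nightly build, if it were published on a separate 'devel' branch
flatpak install --user -y opencpn-nightly org.opencpn.OpenCPN//devel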

jongough commented 3 years ago

Is it possible to use CloudSmith and generate the required files from the standard OCPN build process? This way we could have development and production versions available for use.

leamas commented 3 years ago

Is it possible to use CloudSmith and generate the required files from the standard OCPN build process?

I don't follow you here... generate exactly which files, and from what input?

jongough commented 3 years ago

Whatever your machine is building on a nightly basis: could this not be put into the standard OCPN build process, so that any change in OCPN builds a new version of the files needed by flatpak? The files could then be stored in cloudsmith in an OCPN flatpak repository, which the plugin build process could access instead of your machine.

I know very little about the flatpak build process so I cannot be specific about the requirements. I just need the location of the files to be available when my builds start. At the moment this is not the case.

rgleason commented 3 years ago

bdbcat has a cloudsmith repository here: https://cloudsmith.io/~david-register/repos/ Get David to make a Flatpak Repository perhaps, and use his Cloudsmith Key to enable upload.

jongough commented 3 years ago

Your build server has done well for getting the flatpak process going. However, I seem to be out of time sync with it and get many failures trying to reach your machine. I was wondering if it was now time to create a more 'formal' process for accessing these files, so that you don't have to keep your server up and running.

leamas commented 3 years ago

hm...

was now time to create a more 'formal' process for access to these files

In short: yes, we need a better solution. But cloudsmith, or anything like it, doesn't fit the bill. I'll try to explain why, but this will likely become a too-long post.


The flatpak build process creates a repository. This is based on ostree, which in many ways is similar to git. In particular, a build doesn't create a new standalone file the way other build systems do; it just creates a commit and a tag in the repo. When you as a user "download" an application you basically clone a repo, similar to a git clone.

The repo is quite large, currently about 300M.

Building opencpn flatpak plugins means you need to install the opencpn flatpak application so you can link to it during the build. This is done by downloading an opencpn.flatpakref or opencpn.flatpakrepo file. These are just pointers to the repo which make it easy for users to add an application or repo to their own software. The basic contract is that if you can access these files you can also access the repo they refer to.
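
For reference, this is roughly what a plugin build does with those pointer files (the .flatpakrepo URL here is an assumption; the .flatpakref one is the file quoted above):

# add the remote the pointer file refers to, then install the application from it
flatpak remote-add --user --if-not-exists opencpn http://opencpn.duckdns.org/opencpn/opencpn.flatpakrepo
flatpak install --user -y opencpn org.opencpn.OpenCPN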

Storing the flatpakref/flatpakrepo files on cloudsmith would easily break this contract, since the repo might be unavailable while the files aren't. There would also be a synchronization problem. In short: we should IMHO keep the flatpakrepo/flatpakref files on the same server as the repo to keep the deployment simple and sane. More to come...

leamas commented 3 years ago

We have an availability problem. In order to fix it, we need server space. The requirements are basically a public web server which we can also access using ssh + rsync. Practically, we also need to control the URLs exposed. I'm not aware of any publicly available hosting solution which fulfils these requirements.
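
Concretely, the deployment step we need is little more than this (host and paths are hypothetical):

# push the freshly built ostree repo and the pointer files to the web root
rsync -av --delete repo/ opencpn@mirror.example.org:/var/www/html/opencpn/repo/
scp opencpn.flatpakref opencpn.flatpakrepo opencpn@mirror.example.org:/var/www/html/opencpn/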

A full-fledged server, like an Amazon ECS instance, fixes all this. But, albeit not that expensive, it has a cost which we don't have any budget for.

An alternative would be if anyone among us could donate some server space, so we could have two or three redundant servers. I'll come back to the requirements. Having two or three servers, we could just teach the build system to spread the usage over them and fall back if/when any of them fails.

leamas commented 3 years ago

So, I propose the following:

leamas commented 3 years ago

So, what we could do is ask developers to donate some server space on a linux server. The requirements:

A Raspberry Pi should work just fine, as should basically any linux server. I have a 100/100 Mbit connection, and no congestion impact that I'm aware of.

If we could get one or two more servers this way, we should be able to set up a solution with reasonable availability after teaching the build system to use it (the correct way would be to use DNS, but then we need full access to a public DNS server, creating more budget problems).

Thoughts?

jongough commented 3 years ago

I am not quite sure what the real load on the servers would be once we have everything working. Is there any way you can measure your system to get a feel for how much internet usage there is and how much storage is required (memory and disk)? I am not sure I should even suggest this, but for s63 and oeSenc the charts are stored and compiled on a company server in Spain (I believe); I wonder if that may be an option. I cannot do it, as I am on an ADSL connection which is fragile and subject to outages due to weather (we are on an obsolete copper phone network with well-past-end-of-life exchange components), and the upload speed makes true cloud usage 'interesting'.

leamas commented 3 years ago

I have started a vnstat(1) server on my box to measure the network load. We'll need to wait for some days to get the statistics.
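
For the record, the figures will come from something like:

vnstat -i eth0 -d    # daily rx/tx totals for the interface serving the repo
vnstat -i eth0 -h    # hourly breakdown, useful to spot build-time peaks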

jongough commented 3 years ago

Your idea about a Pi sounds fine, and a USB HDD, USB stick or a 32GB micro SD would do fine for storage. Unmetered high-speed internet with good upload speed (yours sounds like a dream to me!) would also be required. The hardware could be set up for ~AU$100-AU$150 if there is a need to buy a Pi. If I had the internet connection, I have an old Pi 2B, but.... I wonder if Rick or Dave would be interested in setting up one of the other sites?

leamas commented 3 years ago

Let's wait a little now so we have some figures on network load from vnstat. Then, if no-one pops up here, let's file a new issue about donating to make it more visible. If that doesn't help, we could make a try at the Cruisers Forum.

leamas commented 3 years ago

BTW, a server like this doesn't need to be dedicated to only this purpose. I use my rPI also for other things.

rgleason commented 3 years ago

We have pretty good service in Boston. I have a 32GB microSD card for a Pi yet to be purchased. I am not a linux guy but can perhaps set it up with some direction.

What would happen to our basic internet service: would we have trouble streaming and getting the news etc.? We do not have TV service in Boston and stream any movie or news. The service is good for our use, but what would happen with frequent 500MB downloads? Maybe Amazon AWS servers are a better solution? But what would that cost be per download? I think this is a pretty big and active community.

I would need something like a Raspberry Pi 4B 8GB RAM CanaKit complete starter kit for $140 (a 3.5A USB-C power supply is included) plus an HDMI-to-microHDMI adapter for $7, total $150. I will have to discuss it with my wife, as we have no income right now due to covid.

I have a flat panel HDMI monitor and a keyboard, but the keyboard has a round connector. I guess the mouse would use USB and so would the keyboard; I may have an adapter.

I'll have to find another port on our router or switch.

How difficult would it be to get it set up so you guys have access and can manage it?

leamas commented 3 years ago

It would be great if you could set up something like this. 8GB of RAM is a lot of overkill, and a 16GB SD card is perfectly fine if only looking at this task. A Pi 3 is more than enough, and 2GB of memory is also fine. Remember, this is basically a headless setup; it doesn't require that much. IO performance will be the bottleneck in any case.

That said, perhaps you should have a talk with your wife before proceeding?

And, perhaps someone has an rPI to donate? If you have some reasonable bandwidth available, that is actually a much more scarce resource than rPIs.

rgleason commented 3 years ago

An RPi 3B+ is the same price as an RPi 4B 2GB at $35, or 4GB for $55, plus power $8, case $6, microHDMI $9: total $58 or $78. I think my micro SD is 32GB.

How much trouble is this to set up? What OS? Just the one on the RPi website? That I can certainly do. SSH? What else is needed? Can you walk me through it when the time comes to give you access to it?

Will giving access like this to everyone and to you guys make the rest of my network insecure? What protects the rest of my network? I don't want any holes. We use Norton, but the RPi would not. I have a USB port on my ASUS router that might help.

leamas commented 3 years ago

Will giving access like this to everyone and to you guys make the rest of my network insecure

The rPI will be a guest in your network, and guests should never be trusted. There are two aspects:

The risks related to the one or two ssh users could (should) be handled by the Windows firewall. You could basically block access from all hosts on your own network besides your router. This should keep your Windows PC(s) safe, even from attackers logged in using ssh.

leamas commented 3 years ago

How much trouble is this to to setup? What OS?

It's pretty simple when it comes to the rPI. Actually, I could send you a pre-configured SD card which makes it run out of the box. And even if you do it yourself, it's not much trouble. The "normal" OS is Raspbian, basically Debian ported to the rPI.

There is some more work with the networking. There is a need to set up tunnels (a.k.a. virtual hosts) in your router so the http and ssh traffic routed to your address ends up on the rPI. Here, you are on your own. There is also a need for some minimal network planning, since the rPI needs a fixed IP address.
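
On Raspbian, pinning the address is a few lines in /etc/dhcpcd.conf (the addresses below are made up and need to match your own network):

interface eth0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1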

rgleason commented 3 years ago

Alec, my Asus router has a DMZ zone, I believe (is this a virtual host?). Would that provide the IP? I believe I gave up the static IP address for our router some time ago when we changed from DSL to Verizon cable, which is dynamically assigned.

Would the DMZ zone protect my office network from interference/hacking? Is the DMZ in front of my office router essentially, or is it routed through my office router, exposing our office network? Maybe it is some kind of protected tunnel through my office router?

leamas commented 3 years ago

A DMZ zone is not a virtual host, but it's a great way to keep the rPI separated from your internal network.

There is no need for a static IP on the router. The need is on the rPI, which indeed requires a static internal address.

All routers I have seen have some kind of way to create tunnels, whatever they call it. That makes it possible, for example, to set it up so that anything to port 22 (ssh) on the router's external interface is routed to the rPI.

rgleason commented 3 years ago

Thanks,

Also, if we have several of these servers, how would the tasks be shared? Can I somehow set some limits, so that when we are there using the internet (right now somewhat infrequently), our use is not restricted in bandwidth?

If I were to set up a "tunnel" and direct port 22 ssh to the rPI, wouldn't this affect my use of ssh for github and access to our webservers? Sorry, I am not a guru about this stuff.

leamas commented 3 years ago

Also, if we have several of these servers, how would that tasks be shared?

We could teach the build scripts to pick the first attempt randomly, falling back to others if the first fails.
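
Something along these lines in the build scripts, with a hypothetical mirror list:

# try the mirrors in random order, stop at the first one that answers
for url in $(shuf -e http://mirror1.example.org/opencpn/opencpn.flatpakref \
                     http://mirror2.example.org/opencpn/opencpn.flatpakref); do
    flatpak install --user -y "$url" && break
done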

I were to setup a "tunnel" and direct port 22 ssh to the rpi, wouldn't this affect my use of ssh [...]

No. The setup is for incoming ssh from the net. Outgoing ssh from you to the outside world is not affected.

rgleason commented 3 years ago

We have an ASUS RT-N65R router with DMZ.
DMZ manual: https://setuprouter.com/dmz/
DMZ manual (RT-N65R): https://setuprouter.com/router/asus/rt-n65r/dmz-79356-large.htm
DMZ FAQ: https://www.asus.com/support/FAQ/1011723/
Port forwarding: https://setuprouter.com/what-is-port-forwarding/
Port triggering: https://setuprouter.com/router/asus/rt-n65r/port-triggering-79335-large.htm

I think the DMZ would be easier and better.

Could we then use the rPI with apache to host our websites and email? I am somewhat familiar with Virtualmin and Webmin, as I used them with multi-virtual hosting for our websites for a number of years. It got a bit complicated though, so I offloaded it. Now we are having service problems with the hosting.... Off topic... yes.

leamas commented 3 years ago

The DMZ is easier and should work fine. It's considered somewhat more risky, but we only risk the rPI. And I shouldn't say it's a risk to be afraid of.

Hosting your website will be fine, at least as long as we are talking about static pages. It's not that the rPI cannot handle dynamic content, but it opens up more possibilities for mischief.

Hosting your email is another issue. This means opening new ports and starting new services. In your situation I think I would wait with this until I get some experience. Hosting email is more demanding, so to speak.

rgleason commented 3 years ago

Alec, I totally agree about email hosting, that is where all the issues are, and the main email hosts are blacklisting small hosts even if they are properly configured... The websites use bootstrap (is that dynamic?) and otherwise are simple html. I read that the DMZ opens up all the ports, so the computer with the static IP has to have a good firewall. What is a good firewall for linux? I haven't a clue. Would it be safer to do port forwarding? Which ports would need to be opened up?

leamas commented 3 years ago

The rPI is safe. It will only have ports 80 (http) and 22 (ssh) open, so there is a very small attack surface. Let's go with the firewall option. This also means that the rPI cannot access your home PC(s) in any way, creating some kind of trust.
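
With ufw on the rPI that boils down to something like this (a sketch; ufw is an assumption, any firewall will do):

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp    # ssh + rsync
sudo ufw allow 80/tcp    # http, serves the flatpak repo
sudo ufw enable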

rgleason commented 3 years ago

OK, that sounds very good to me. Would the 4GB RAM rPI 4 be okay for our websites, perhaps running apache with Virtualmin/Webmin? Also, I read that Bootstrap uses html, javascript and CSS templates, so it is not "dynamic" like a CMS such as wordpress or concrete5 with dynamic content; so I can host all 5 websites (which are not very busy).

leamas commented 3 years ago

Right, bootstrap is just some bundled css + javascript, nothing dynamic. Note that from the web server perspective also javascript is "static".

You could host a lot of websites, for sure. On a 16GB sd card (which already has the opencpn stuff) there is about 2-3 GB available. That's a lot of websites ;)

The limitation is the bandwidth. What is your speed, really?

leamas commented 3 years ago

Would the 4gb ram rpi 4 be okay with our websites perhaps running apache with virtual/web min for the websites?

As long as it's not too much traffic, yes.

rgleason commented 3 years ago

Alec, good question! I should have started with that. We have "Performance Internet" from Xfinity, which I find is 5 Mbps up and 100 Mbps down. We use between 0 GB and 200 GB per month. I am not aware of any limits on monthly data use; Xfinity's account page seems to hide this information, but I've confirmed it now. This service is adequate to stream a movie, etc.

rgleason commented 3 years ago

Actually the contract says "as fast as"

rgleason commented 3 years ago

Alec, would Github packages work?

https://github.com/rgleason?tab=packages or the one for opencpn?

leamas commented 3 years ago

No, nor would any other way to publish files. We need ssh access and rsync. This is explained at the start of this issue.

rgleason commented 3 years ago

Ok, is my service fast enough? Maybe when we need the bandwidth I can disable it?

leamas commented 3 years ago

Yes, and no... The bottleneck is your 5 Mbps upload link; this is what carries the downloads from the rPI to users out there.

Looking at the preliminary figures from the load measurement, it shouldn't be a problem. But we need to wait a day or two to get a more complete sample.

Also: have you set up a web server before? Creating a number of virtual web servers is IMHO non-trivial.

rgleason commented 3 years ago

I have used a virtual server with Virtualmin and Webmin to configure and run web servers and email servers; this has isolated me from using Apache directly. However, I have maintained/updated this CentOS system for a number of years with some support. Yes, 5 Mbps = 0.625 MB/s, so 300 MB / 0.625 MB/s = 480 seconds = 8 minutes without interruptions. I think this might be too slow. What do you think?

leamas commented 3 years ago

If there weren't alternatives, I guess we could live with it.

But filling the pipe for 5-10 minutes isn't that nice, neither for you nor opencpn devs. Let's look for other possibilities first.

leamas commented 3 years ago

hm... If we got this cloud of some rPIs running, it could also be the base for solving https://github.com/OpenCPN/OpenCPN/issues/1865, the plugin downloader, which also needs a more full-fledged server rather than a web-hosting solution.

leamas commented 3 years ago

I don't think getting two or more rPIs running is the major problem, we will eventually be able to solve it.

However, the system for accessing them needs some thinking. Especially if users should access it (the downloader), we need a transparent DNS setup: a simple round-robin with multiple A-records for a single DNS name. And, not least, the possibility to update this list at runtime. Practically, this means we need a DNS provider which gives us nsupdate access.

Most certainly don't; at most they have an API allowing clients to update a registered name's address. This cannot handle multiple A-records with the same name, nor can it remove a record, which we need to do when discovering that a host is down.
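
For the record, with real nsupdate access the record juggling would look roughly like this (key file, names and addresses are hypothetical):

# replace the A-record set for the service name: drop a dead host, keep the live ones
nsupdate -k /etc/opencpn-ddns.key <<EOF
server ns1.example.net
zone example.net
update delete repo.opencpn.example.net A
update add repo.opencpn.example.net 300 A 203.0.113.10
update add repo.opencpn.example.net 300 A 198.51.100.20
send
EOF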

rgleason commented 3 years ago

It's not a problem for us when we are not there using the internet.

leamas commented 3 years ago

I need to get back to the drawing board, mostly because of the need also for regular users to access our servers.

Ouch...

rgleason commented 3 years ago

ns1 &* ns2 are reasonable on separate servers is reasonable to do. (Some of this is beyond anything I have done.) Maybe these will help.

Things I wish I'd known about nsupdate and dynamic dns

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/dns-updates-external

https://www.lesbell.com.au/index.php/articles/12-networking/2-ddns-updates (Dynamic updates for linux clients with DHCP): "However, there is a danger here. BIND will accept dynamic updates from any address in the range(s) specified in the allow-update substatement. IP addresses can be easily spoofed."

RFC 2137 outlines an approach to secure DNS updates based on public-key cryptography. However, there are two major objections to this approach: firstly, it is computationally intensive, and secondly it is complex and tricky to set up. But RFC 2845 introduces a simpler technique, based on signing the updates with a shared secret key. This approach will prove adequate for many environments.

  1. Create the shared key.
  2. Copy the shared secret to both the DNS server and the client machines.
  3. Configure the server.
  4. Configure the client (see the sketch below).
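
A minimal sketch of those four steps, assuming BIND 9 on the name server (key name, paths and zone are made up):

# 1. create the shared key (on the DNS server)
tsig-keygen -a hmac-sha256 opencpn-ddns > /etc/bind/opencpn-ddns.key
# 2. copy /etc/bind/opencpn-ddns.key to the client machine as well
# 3. configure the server: include the key in named.conf and allow updates signed with it:
#      include "/etc/bind/opencpn-ddns.key";
#      zone "example.net" { type master; file "example.net.zone"; allow-update { key "opencpn-ddns"; }; };
# 4. configure the client: point nsupdate at the key when sending updates
nsupdate -k /etc/bind/opencpn-ddns.key
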
rgleason commented 3 years ago

https://www.cloudns.net/dynamic-dns/?utm_source=adwords&utm_medium=dynamic_dns&utm_campaign=dynamic_dns_keyword

https://doc.powerdns.com/authoritative/dnsupdate.html

https://jpmens.net/2012/06/29/dynamic-dns-updates-using-gss-tsig-and-kerberos/

rgleason commented 3 years ago

Not quite sure about the direction and need, but this free service might help for 2 dns servers: https://www.cloudns.net/dynamic-dns/ Look at "Free forever": https://www.cloudns.net/wiki/article/36/

rgleason commented 3 years ago

Alec, Premium DNS is only $2.95/month.

leamas commented 3 years ago

@rgleason : sorry to say, but your links are not really useful. Could you please stop this; I can google myself (hell, I maintain (pypi, fedora, debian) a dynamic dns client which I wrote). Unless, of course, you have some substantial information on the question asked..

This has turned out to be more complicated than I thought. Also, Xmas is upon us. As I said, I basically need to get back to the drawing board, and this is going to take some time. We are not talking about hours here.

leamas commented 3 years ago

Notes to /me:

This makes each server in the opencpn "cloud" independent of the others. The single point of failure would be namecheap or whatever provider is used; we can live with that. The http proxies used for accessing namecheap are typically unstable, and thus also a problem.

EDIT: Simplified, first round