Open pacharanero opened 6 years ago
Docker in Vagrant provides for an entirely clean and contained environment to run your containers. The performance hit is negligible.
See https://github.com/robdyke/livecode-community-server for an example of this approach.
I'm starting to use https://github.com/aelsabbahy/goss for unit testing Docker containers. YMMV...
many thanks @pacharanero @robdyke
before we discuss this further, can I ask you to take a look at the recent work done here by @robtweed: https://github.com/RippleOSI/Ripple-QEWD-Microservices. This is a move to refactor our Qewd.js-based middleware into microservices & serve those up in Docker containers
we also need advice on packaging the Angular & React versions of PulseTile: https://github.com/PulseTile/PulseTile-React https://github.com/PulseTile/PulseTile FYI @kbeloborodko
We also need an update on the Docker work on EtherCIS being done by @chevalleyc @serefarikan
@robdyke re Vagrant: TBH I had a poor experience with Vagrant in the past & would ask, is it really needed / where is the value-add if we are already using Docker? I'm trying to keep the moving parts to a minimum and would need some educating/convincing on that, tbh. Thanks T
I like making stuff that can be used by people with different levels of technical proficiency and on as many operating systems as possible. Vagrant and VirtualBox are relatively trivial to install on a Windows or Apple device, and running Docker containers with Vagrant is as simple as including a Vagrantfile, so I strongly support @pacharanero in specifying Vagrant support.
Thanks for comments @tony-shannon @robdyke
As @robdyke says, Vagrant adds better cross-OS compatibility (for Windows and macOS in particular, where Linux containers are not native and require a VM compatibility layer anyway, even when using 'native' Docker).
The development overhead of doing both (Vagrant and Docker) is not high, because you set things up so that vagrant just acts as a wrapper around the Docker provisioning.
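To make that concrete, here is a minimal sketch of the 'Vagrant as a wrapper' pattern. The box name, forwarded port, and docker-compose version are illustrative assumptions, not our actual setup:

```ruby
# Vagrantfile - illustrative sketch only; box, port and compose version are assumptions
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"                      # any Docker-capable Linux box
  config.vm.network "forwarded_port", guest: 8000, host: 8000

  # Vagrant's built-in Docker provisioner installs Docker inside the guest
  config.vm.provision "docker"

  # ...then everything else is handed over to docker-compose
  config.vm.provision "shell", inline: <<-SHELL
    curl -sL "https://github.com/docker/compose/releases/download/1.18.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    chmod +x /usr/local/bin/docker-compose
    cd /vagrant && docker-compose up -d
  SHELL
end
```

On `vagrant up` the guest boots, Docker is installed, and the containers defined in the project's docker-compose.yml come up - Docker-only users never need to touch this file.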
If it were me, I'd probably avoid the need for Vagrant on Windows and OS X, as it adds another moving part into the mix. See https://blog.codeship.com/docker-for-windows-linux-and-mac/
With one additional YAML file we'd be widening participation opportunities.
For inidus I've got the entire Marand Think!EHR stack dockerised (native & in VBox), running all the Marand components as containers in one VBox and 'third-party' plugin apps (such as Apperta/DiADemM) from another VBox running containers. This demonstrates nicely and cleanly how to go about hooking up apps with the stack.
Thanks for the share. I'll look at installing Docker for OS X on my MB here so I can get a sense of it.
I'd go with Vagrant for Windows and Mac. I don't know if anybody else is using it, but Windows 7 requires a virtual machine layer to run Docker, either installed by the Docker installer or manually.
I don't know about macOS, but if Docker setup is problematic, Vagrant would help with that. I think we're talking about pre-configured-with-Docker Vagrant images here, so it should not introduce an extra layer; it would only replace an existing one (the Docker VM layer).
The other benefit of Vagrant would be that if someone wants to play with Docker, they'd at least be able to use the instructions for the most common Docker install base (Linux). This is the reason I moved my development env to Linux: I was just too tired of chasing Windows-specific parameters, gotchas etc.
I don't know the publication mechanism for Vagrant images, though; I'm assuming it supports a mechanism similar to Docker Hub.
ok thanks all
To be honest, that tech thread jumped on a bit, and I'm still unsure after that of the preferred relationship between OS, Vagrant & Docker. I'd appreciate a "rich picture" that can explain these elements to the masses. Are any of these contenders, and if so, what brief plain-English explainer text should go with it? https://www.google.ie/search?q=vagrant+docker&source=lnms&tbm=isch&sa=X&ved=0ahUKEwityoyBgYDZAhWCOsAKHbeoDQQQ_AUICigB&biw=1920&bih=949
The 7 steps to hackday heaven that Marcus outlined sound good, but how much effort will it take to keep the Docker + Vagrant scripts up to date as these tools continue to evolve independently?
Can we focus on Docker scripts primarily, with a Vagrant add-on as an easy but separate item of work that can be added into the mix as/when the need arises?
thanks all Tony
I'd say that this image is as simple a representation as I can find.
It's important to stress that the Vagrant element requires one file (the Vagrantfile) to fire up a VM (or indeed native docker!) and run docker-compose (build / up / etc).
Technical effort would be focused on the containerisation of the service components that make up the Ripple stack.
Image credits due to Lucas Jellema and Amis
From the Vagrant provider documentation for Docker Basic Usage
If the system cannot run Linux containers natively, Vagrant automatically spins up a "host VM" to run Docker. This allows your Docker-based Vagrant environments to remain portable, without inconsistencies depending on the platform they are running on.
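For completeness, the Docker provider referred to there is also driven from a Vagrantfile; a minimal illustrative example (the image and port mapping are placeholders):

```ruby
# Vagrantfile using Vagrant's Docker provider - image/ports are placeholders
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "nginx:alpine"     # any image pullable from Docker Hub
    d.ports = ["8080:80"]        # host:container port mapping
  end
end
```

On Linux this runs the container natively; on Windows/macOS, Vagrant transparently creates the host VM described in the quote above.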
thanks Rob. That's a helpful image and I'll reuse it here, with due thanks to the original author, if that's ok (unclear who he/she is)
so that helps my understanding. Am I correct then that a Vagrantfile sits outside and beyond the container/Docker work, so Docker fans can crack on and use the Docker stuff without the Vagrantfile, and the added Vagrantfile is a nice-to-have addition for those who wish to use it? If so, that Vagrantfile should be very easy to maintain, I assume.. correct? thanks again
Here is an example Vagrantfile for starting an Ubuntu Xenial guest and running docker-compose - it's 25 lines including white space.
The docker-compose.yml file for the 3 services is 44 lines including white space.
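As a rough sketch of what a 3-service docker-compose.yml along those lines might contain (image names, ports and the volume are placeholders, not the actual file):

```yaml
version: '3'
services:
  pulsetile:
    image: example/pulsetile       # placeholder image names throughout
    ports:
      - "8000:8000"
  qewd:
    image: example/qewd
    ports:
      - "8080:8080"
    depends_on:
      - ethercis
  ethercis:
    image: example/ethercis
    ports:
      - "8081:8080"
    volumes:
      - ethercis-data:/var/lib/postgresql   # persistent DB storage
volumes:
  ethercis-data:
```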
thanks @robdyke. Again, can I clarify please: am I correct then that a Vagrantfile sits outside and beyond the container/Docker work, so Docker fans can crack on and use the Docker stuff without the Vagrantfile, and the added Vagrantfile is a nice-to-have addition for those who wish to use it?
I give you :+1:
thank you @robdyke, I'll take that as a Yes, and am looking for an emoji that befits the occasion. Love shared via emoji. Hope that helps.
Can you & Marcus give an estimate of the effort involved then, please?
FYI all please see related work being done on this here https://github.com/RippleOSI/Ripple-Stack-Vagrant-Docker
UPDATE (also see https://github.com/RippleOSI/Ripple-Stack-Vagrant-Docker/issues/1)
Some progress towards a 'Hack Day Ready' Ripple Stack:
Using https://github.com/RippleOSI/Ripple-QEWD-Microservices I've created a Vagrant setup which instantiates an Ubuntu Server (16.04) guest and provisions it with Docker containers containing each of the microservices. See this at https://github.com/RippleOSI/Ripple-Stack-Vagrant-Docker. I'd appreciate feedback from @therippleteam about whether it works for them on their machines, following my documentation. It should be a matter of cloning a couple of repos, adding some (unavoidable, unfortunately) config, and typing `vagrant up` - the rest should be fairly automatic and should result in a Helm landing page at localhost:8000.
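In outline, the workflow looks like this (the config step is deliberately not shown here - see the repo documentation for the exact values):

```shell
git clone https://github.com/RippleOSI/Ripple-Stack-Vagrant-Docker.git
cd Ripple-Stack-Vagrant-Docker
# ...clone the other required repo(s) and add the required config
#    (not shown - see the documentation)...
vagrant up
# then browse to http://localhost:8000 for the Helm landing page
```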
Feedback, stack traces, and PRs most welcome.
The source code of each of the 3 Ripple components remains in its own repository, separate from the repo which houses the Vagrantfile and docker-compose.yml - this is deliberate, since it allows us to edit and develop the Ripple stack services independently, while keeping a clean Git history for each. I anticipate that while we hope for many people to be developing WITH the Ripple Stack, comparatively few people will be developing the actual stack itself. (To use an analogy: there are many thousands of Angular developers, but few of them will get involved to the degree that they contribute to the Angular source code.)
I've experimented with dockerizing Pulsetile; however, after some thought, it didn't seem to make much sense or really be necessary. Pulsetile is essentially a UI framework in HTML/CSS/JS, much like Bootstrap or Foundation.
I started to wonder if we might be better to distribute Pulsetile in a way that is more common for such frameworks, eg using NPM or a CDN perhaps. Suggestions and discussion on this is welcomed.
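For comparison, this is what consuming Pulsetile would look like if it were distributed that way (the package name and CDN URL are hypothetical - nothing is published yet):

```shell
# Hypothetical - no such npm package is published yet
npm install pulsetile --save

# or, from a CDN, something like:
#   <script src="https://cdn.example.com/pulsetile/pulsetile.min.js"></script>
```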
Another thought I had along these lines is that, since React is now the main effort in terms of Pulsetile development, could we refactor Pulsetile into more of a 'React plugin'? This would have the advantage that we would not need to convince people about the sanity of using a 'minority'/'unheard-of' UI/frontend framework (Pulsetile) - instead we say 'use React, and these Pulsetile plugins give you a load of structure and utility for free'.
As mentioned above, in doing this work I've been thinking hard about who our target audience is. We probably have two use-cases for a reproducible, consistent, easy-to-start developer environment:
We want to make it easier for people to build EHRs and PHRs using the Ripple components. Getting started with them is hard and there is a learning curve. These people are not developing Ripple, they are merely using it to develop something else. For this group (the majority, eventually) we should minimise the complexity of the stack - for example they would only need a 'precompiled' or built version of Pulsetile as opposed to the original source code, since they are unlikely to want to make changes to Pulsetile itself at this stage. This is, to me, the 99% use-case and we should mainly target these developers.
Thinking about developers who are working on and improving the Ripple Stack - these are likely to be a small number of known 'core team' developers, and they will also likely have much more familiarity with the stack components (at least the ones they are working on). For this group, the rapidly-deployed development environment is less about getting them up and running, and more about having consistency between development environments used by different members of the team. It reduces the 'well it works on my machine' kinds of conversations. This is a minority use-case, but luckily, almost everything we do for use-case #1 will probably improve use-case #2 too.
I've been able to mark one of the milestones at the top of this Issue as complete:
`docker run qewd` should result in a running QEWD stack application (or several applications) running on a given port
thanks Marcus @pacharanero
To reaffirm, we are trying to improve the packaging and ease of use of 3 components (A PulseTile, B QEWD-Ripple, C EtherCIS) & how to easily combine them
From your update, QEWD-Ripple is in better shape; we still need to progress work on A & C
Let's focus on A (PulseTile) next. We need some detail please on your ideas discussed yesterday: the move to npm for packaging PulseTile and its modules, and how that could be used to build PulseTile
I don't quite get the point re devs using vs devs developing, but take the point that tackling 1 should improve the other.
I don't quite get the point re devs using vs devs developing...
Here's my take on it:
A: 'I'm Building Something WITH Ripple': If you are a developer who is using the Ripple Stack 'off-the-peg' as components to develop your PHR/EHR application, then you are 'building with' the Ripple stack. These people will be leveraging the tools in the 3 stack components to build a separate '4th' application (their product).
As an analogy - there are many developers using React, Ruby On Rails, and suchlike frameworks/libraries to build their software product, but they don't modify the libraries themselves, they are simply using them in building their application.
To serve these users well, we need to ensure that all the Ripple Stack components are fully decoupled from any applications eg Helm - ideally the 'out of the box experience' should be a minimally functional PHR-looking app, but nothing more.
B: 'I'm Developing The Ripple Stack': In contrast, these are developers who are actually working on and contributing to the development of the Ripple Stack (this would be people like me, yourself @tony-shannon , Christian, RobT, Bogdan, Seref, Kirill, etc) so we are changing the codebase of the stack itself. That's the essential difference - whether you're changing the Ripple codebase or simply using it as-is. (Essentially the 'A' user could import all of the Ripple codebase from static repositories eg CDN/Docker Hub etc because they aren't going to be changing the Ripple source code at all. The 'B' user needs to clone the stack repos because they are working on the stack itself)
In our case, the group of people in 'A' is pretty close to zero and most of our devs are in 'B' - but that is because it's a new framework and we've not yet developed a community of users. However, if you look at established frameworks eg React, Rails, Angular etc I should think that there are 1000s of times more people in 'A' than 'B' - hence we should plan that way as we work.
...but take the point that tackling 1 should improve the other.
To an extent this is true, but we will need to understand our target well, to make sure that our 'Quick Start' setup and tutorials definitely aim themselves toward the 'A' user - because the 'B' user is already likely to have better knowledge of how everything goes together, and will possibly have very bespoke requirements as they work on improving their particular section of the Ripple stack. The 'A' user is a total beginner and needs hand-holding and automated stack setup, which is what we're working on.
many thanks @pacharanero for the nice explanation of those differences in your head, which I get
related Q : how does that A / B strategy affect our Dockerisation approach, in clear/concrete terms please? thank you T
how does that A / B strategy affect our Dockerisation approach, in clear/concrete terms please?
Mostly it does not affect dockerization. For Dev A, QEWD and Ethercis are going to be coming directly from Docker Hub and Pulsetile will come from NPM or a CDN or a release from GitHub.
It's the same for Dev B: they still benefit from a standardised, reproducible development environment which is the same as what everyone else is using, except that they will possibly need to include the source code of whatever component they are developing, eg QEWD/EtherCIS/Pulsetile - which Dev B is more than capable of doing manually.
When I'm saying we should 'target' Dev A, I just mean that the default development environment does not need to include the full source code for all 3 stack components (which it currently does do, but I'm working on this). Instead it should be pulling Docker boxes or built, compressed, minified JS code from CDN/NPM etc
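Concretely, the difference would show up as a one-line change in docker-compose.yml: Dev A pulls a prebuilt image, while Dev B swaps in a build against a local clone of the component they're working on (the image name and path below are illustrative):

```yaml
services:
  qewd:
    # Dev A: pull a prebuilt image from Docker Hub (name illustrative)
    image: example/qewd
    # Dev B: comment out 'image' above and build from a local clone instead
    # build: ../Ripple-QEWD-Microservices
```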
When I'm saying we should 'target' Dev A, I just mean that the default development environment does not need to include the full source code for all 3 stack components (which it currently does do, but I'm working on this). Instead it should be pulling Docker boxes or built, compressed, minified JS code from CDN/NPM etc
OK, thanks & sure, I get that; am not concerned about dev type B either, they should be ok. We still need guidance on the starter set, eg PulseTile core +/- handling plugins in this context, for Dev A, in terms of guiding them
_[origin: google meet discussion TS/MB/PB 2018.01.24] [origin: MB Ripple Stack report Q3/Q4 2018]_
It would be useful if it were easier to set up and run a complete Ripple stack instance for use at Hack Days and for rapid application development. This should follow a 'batteries included' philosophy such that as much as possible is automated, and the stack should be functional to a basic level without having to add further technical artefacts such as openEHR templates.
Real deployments in production may well differ from this 'dev/demo' Docker stack, including having a different selection of openEHR templates available, depending on the needs of the application.
Prerequisites/milestones to delivery
- `docker run pulsetile` should result in a running PulseTile front-end application on a given port
- `docker run qewd` should result in a running QEWD stack application (or several applications) running on a given port
- `docker run ethercis` should result in a running EtherCIS server at a given port

Acceptance criteria

- `docker-compose up` in the root directory of a freshly cloned Ripple Stack repo should pull the necessary Docker containers as detailed above, configure and run them, make them available to each other on the requisite ports, and set up Docker storage volumes as necessary, resulting in a fully operable Ripple stack which can be interacted with via the web browser in the first instance.
- `vagrant up` in that same root directory should alternatively instantiate a Virtual Machine in which the docker-compose Vagrant provisioner plugin will provision the Ripple Stack as in the preceding criterion. Although this setup has significant performance disadvantages compared to native Docker, it is often a preferred option for Windows users.