Closed: bergtwvd closed this issue 5 years ago
I have posted the ppt with the overview of the IVCT container images in Science Connect (under the folder https://scienceconnect.sto.nato.int/apps/29785). This is the container decomposition for now; the ppt will be updated as needed.
The container design needs to be expanded with a network design. There are different deployment options and different approaches to container networking, and the SUT typically runs outside the container network. Depending on the deployment option chosen, a container may need to publish its LRC / CRC port (range) to the host so that the SUT can connect to the IVCT. The virtualization platform used (e.g. Hyper-V, VirtualBox) also has an influence, since these platforms take different approaches to exposing services running inside a VM.
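For the deployment option where the SUT runs on the host, publishing an LRC port range could look roughly like the sketch below. This is a hypothetical docker-compose fragment: the image name `ivct/lrc` and the port range 8989-8999 are placeholders for illustration, not actual IVCT values.

```yaml
# Hypothetical sketch: publish an LRC port range to the host so that
# an SUT outside the container network can connect to the IVCT.
services:
  lrc:
    image: ivct/lrc               # placeholder image name
    ports:
      - "8989-8999:8989-8999"     # placeholder port range
```

Which ports need to be published (and on which interfaces) depends on the deployment option and the virtualization platform chosen.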
I am just adding notes to this issue so we won't forget about them.
The IVCT (container) configuration needs to be simplified and documented. Ideally the dependencies between containers should be minimal.
IVCT containers should only use the AMQ bus to communicate among themselves.
Badges and test case data should (ultimately) be provided by a database service.
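Under those constraints, a deployment might look roughly like the hedged docker-compose sketch below. All service and image names are placeholders (the actual IVCT images may differ); the point is that the services share only an internal network with the AMQ broker, and badge/test case data comes from a separate database service.

```yaml
# Hypothetical sketch: IVCT services talk only via the AMQ broker on an
# internal network; badge and test case data comes from a database service.
services:
  activemq:
    image: rmohr/activemq      # placeholder broker image
    networks: [ivct-net]
  tc-runner:
    image: ivct/tc-runner      # placeholder; carries the LRC dependency
    networks: [ivct-net]
  database:
    image: postgres            # placeholder badge / test case data service
    networks: [ivct-net]
networks:
  ivct-net:
    internal: true             # not reachable from outside the host
```

Keeping inter-container dependencies limited to the bus and the database service keeps the configuration minimal, per the note above.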
Information has been added to the Wiki: https://github.com/MSG134/IVCT_Framework/wiki/IVCT-Container-Design
Although discussed, we haven't really agreed on a "container design" for the IVCT. Several related issues have been reported, but I think none of them covers the container image design.
How to containerize the webapps? One image per webapp? How to containerize the runner?
The LRC dependency should not reside in the webapp container images, and test case execution should not happen in the context of the webapp. Instead, there should be a TC runner that is controlled solely via AMQ messages; this TC runner can carry the LRC dependency.
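A TC runner controlled solely via AMQ messages could follow a loop like the minimal sketch below. This is an illustration only: a stdlib queue stands in for the AMQ connection, and the message fields (`command`, `testcase`) and the `run_test_case` hook are hypothetical, not the actual IVCT message schema.

```python
import json
import queue


def run_test_case(name):
    # Placeholder for real test case execution; in the actual design this
    # is where the LRC dependency would live, inside the runner container.
    return f"{name}: PASSED"


def runner_loop(inbox, outbox):
    """Consume commands from the bus and publish verdicts back.

    `inbox` and `outbox` stand in for AMQ subscriptions/publications;
    the runner has no dependency on the webapp containers.
    """
    while True:
        msg = json.loads(inbox.get())
        if msg["command"] == "quit":
            break
        if msg["command"] == "startTestCase":
            verdict = run_test_case(msg["testcase"])
            outbox.put(json.dumps({"testcase": msg["testcase"],
                                   "verdict": verdict}))


# Example: drive the runner with two hypothetical messages.
inbox, outbox = queue.Queue(), queue.Queue()
inbox.put(json.dumps({"command": "startTestCase", "testcase": "TC0001"}))
inbox.put(json.dumps({"command": "quit"}))
runner_loop(inbox, outbox)
result = json.loads(outbox.get())
print(result["verdict"])  # TC0001: PASSED
```

The webapp would then only publish `startTestCase`-style messages to the bus and subscribe to the verdicts, with no direct coupling to the runner image.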