larsonmpdx opened this issue 6 years ago
@larsonmpdx I am investigating.
@larsonmpdx it's not clear to us why tests are failing in your custom Docker image but working in ours. I do see your container coming up with the "--privileged=true --net=bridge" Docker options, the same as ours. Is your custom image based off Ubuntu 16.04 as well?
Yes, Ubuntu 16.04. The image works with virtual CANs when run locally. I can make a minimal reproducer if that helps.
@larsonmpdx we do not understand what virtual CANs are - can you please elaborate on what they are and what your tests do? I am not sure we can help yet, since we are not experts in how virtual CANs work.
They're a loopback network interface (vcan) for CAN bus software development. The tests read and write frames on that interface.
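For reference, setting one up and exercising it looks roughly like this (a minimal sketch using can-utils; the interface name vcan0 is just convention):

```sh
# create a virtual CAN interface (needs root or CAP_NET_ADMIN)
sudo modprobe vcan
sudo ip link add dev vcan0 type vcan
sudo ip link set up vcan0

# smoke test with can-utils: start a reader, then send a frame
candump vcan0 &
cansend vcan0 123#DEADBEEF   # the frame shows up in candump's output
```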
I tried to reproduce this and show the difference between using the Shippable Docker image and a custom Docker image, but I can't get the Shippable Docker image to work the way I thought it did before. Here's a minimal test case:
https://github.com/larsonmpdx/shippable
The command "cansend ..." should do nothing and exit with code 0; instead it shows that the virtual CAN network interface isn't set up in the container.
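Concretely, the check inside the container looks something like this (a sketch; the interface and frame values are illustrative, not the exact arguments from my build):

```sh
# with --net=bridge the container has its own network namespace,
# so a vcan created on the host isn't visible inside it
ip link show vcan0             # errors: the device doesn't exist here
cansend vcan0 123#DEADBEEF     # should exit 0, but exits nonzero instead
```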
I think my earlier success (#4155) came from some debugging work in which I set up the CAN interfaces by hand, and the host stayed set up between runs. At some point that host went away or was rebooted, and the automation we thought was working turned out to do nothing. Does that sound likely?
Hi, do you need any help from me to reproduce this?
Would it be a good idea for me to run a BYON box in the interim? I could reliably set up the network interfaces on that.
@ric03uec we need to look into this.
@larsonmpdx Hello, I came across this while having the same issue. After creating a vcan interface on the host PC, how can I get my Docker container to interface with it?
I tried using --network=host; the only problem is that there is no network isolation between the Docker host and the containers anymore. Is there a way to achieve this without sharing the host network?
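The closest thing I've found so far is tunneling with a vxcan pair plus the can-gw module, roughly like this (a sketch I haven't verified on this setup; the container name is a placeholder, vxcan needs kernel 4.12+, and cangw is part of can-utils):

```sh
# on the host: load the virtual CAN tunnel and gateway modules
sudo modprobe vxcan
sudo modprobe can-gw

# move one end of a vxcan pair into the container's network namespace
CPID=$(docker inspect -f '{{ .State.Pid }}' mycontainer)  # placeholder name
sudo ip link add vxcan0 type vxcan peer name vxcan1 netns "$CPID"
sudo ip link set vxcan0 up
sudo nsenter -t "$CPID" -n ip link set vxcan1 up

# forward frames both ways between the host's vcan0 and the tunnel
sudo cangw -A -s vcan0 -d vxcan0 -e
sudo cangw -A -s vxcan0 -d vcan0 -e
```

The container then talks to vxcan1 as if it were a local CAN interface, while keeping its own network namespace.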
Description of your issue:
See previous resolved issue: https://github.com/Shippable/support/issues/4155
In the previous issue I had trouble configuring a host to have the virtual CAN kernel module before starting a container; I got help setting up a runSh job to do this, and everything worked. Then I changed my runCI job to use a custom container and I can't get it going again. The image runs correctly, but the test container doesn't have access to the virtual CANs, so the tests that need them fail (the others pass). I added "--privileged=true --net=bridge" so the boot_container step looks the same as before. Here's a simplified form of my shippable.yml:
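(A sketch of the relevant part; the image name, tag, and test command are placeholders, and the boot options are the ones quoted above.)

```yaml
build:
  pre_ci_boot:
    image_name: myorg/witcan-ci   # placeholder custom image
    image_tag: latest
    pull: true
    options: "--privileged=true --net=bridge"
  ci:
    - ./run_tests.sh              # placeholder test command
```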
Working build (with the default container): https://app.shippable.com/github/MIQ-sirrus7/witcan/runs/32/1/console
Corresponding runSh job: https://app.shippable.com/github/MIQ-sirrus7/jobs/witcan_runSh/builds/5aa0e705fece96150067ae56/console
Failing build (my container): https://app.shippable.com/github/MIQ-sirrus7/witcan/runs/55/1/console
Corresponding runSh job: https://app.shippable.com/github/MIQ-sirrus7/jobs/witcan_runSh/builds/5aa1e536fdc5ae1500316344/console