
using virtual CAN network interface with custom image #4184

Open larsonmpdx opened 6 years ago

larsonmpdx commented 6 years ago

Description of your issue:

See previous resolved issue: https://github.com/Shippable/support/issues/4155

In the previous issue I had trouble configuring a host to have the virtual CAN kernel module loaded before starting a container, and I got help setting up a runSh job to do this. Everything worked. Then I changed my runCI job to use a custom container and I can't get it working again. The image runs correctly, but the test container doesn't have access to the virtual CANs, so the tests that need them fail (the others pass). I added "--privileged=true --net=bridge" so the boot_container step looks the same as before. Here's a simplified form of my shippable.yml:

language: c

compiler:
  - gcc

build:
  pre_ci:
    - $(aws ecr get-login --no-include-email --region $PRIMARY_ACCOUNT_AWS_REGION)
    # this image is built in another runSh job
    - docker pull $PRIMARY_ACCOUNT_NUMBER.dkr.ecr.$PRIMARY_ACCOUNT_AWS_REGION.amazonaws.com/repo_amd64_base:latest
    - docker tag $PRIMARY_ACCOUNT_NUMBER.dkr.ecr.$PRIMARY_ACCOUNT_AWS_REGION.amazonaws.com/repo_amd64_base:latest repo_amd64_base
    # this image stays on the build machine from pre_ci to pre_ci_boot
    - docker build -t repo_testing -f Dockerfile.repo_testing .
  pre_ci_boot:
    image_name: repo_testing
    image_tag: latest
    pull: false # image should exist from pre_ci job, and should already be local
    options: "--privileged=true --net=bridge"
  ci:
    - ./test.sh

integrations:
  generic:
    - integrationName: aws-keys-primary
    - integrationName: aws-accounts

jobs:
  - name: repo_runSh
    type: runSh
    steps:
      - IN: repo_gitRepo
      - TASK:
          name: setup_repo_host
          runtime:
            container: false # run on host
          script:
            - sudo modprobe can
            - sudo modprobe can_raw
            - sudo modprobe vcan
            - sudo ip link add dev vcan0 type vcan || true
            - sudo ip link set up vcan0 || true
            - sudo ip link add dev vcan1 type vcan || true
            - sudo ip link set up vcan1 || true
      - OUT: repo_ciRepo
        replicate: repo_gitRepo
  - name: repo_runCI
    type: runCI
    steps:
      - IN: repo_ciRepo

resources:
  - name: repo_gitRepo
    type: gitRepo
    integration: github
    versionTemplate:
      sourceName: org/repo
      buildOnCommit: true
      buildOnPullRequest: true
      buildOnPullRequestClose: false
      buildOnRelease: false
      buildOnTagPush: false
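
For reference, the host-side setup performed by the runSh job above can be sanity-checked with standard commands (a sketch, not taken from the actual build logs):

lsmod | grep can      # the can, can_raw and vcan modules should be listed
ip link show vcan0    # both virtual interfaces should exist on the host
ip link show vcan1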

working build (with the default container): https://app.shippable.com/github/MIQ-sirrus7/witcan/runs/32/1/console
corresponding runSh job: https://app.shippable.com/github/MIQ-sirrus7/jobs/witcan_runSh/builds/5aa0e705fece96150067ae56/console

failing build (my container): https://app.shippable.com/github/MIQ-sirrus7/witcan/runs/55/1/console
corresponding runSh job: https://app.shippable.com/github/MIQ-sirrus7/jobs/witcan_runSh/builds/5aa1e536fdc5ae1500316344/console

ambarish2012 commented 6 years ago

@larsonmpdx I am investigating.

ambarish2012 commented 6 years ago

@larsonmpdx it's not clear to us why tests are failing in your custom Docker image but working in ours. I do see your container coming up with the "--privileged=true --net=bridge" docker options, similar to ours. Is your custom image based on Ubuntu 16.04 as well?

larsonmpdx commented 6 years ago

Yes, Ubuntu 16.04. The image works with virtual CANs when run locally. I can make a minimal reproducer if that helps.

ambarish2012 commented 6 years ago

@larsonmpdx we do not understand what virtual CANs are - can you please elaborate on what they are and what your tests do? I am not sure we can help yet, since we are not experts in how virtual CANs work.

larsonmpdx commented 6 years ago

They're a loopback network interface for CAN (Controller Area Network) software development; the tests read from and write to the interface.
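
For anyone unfamiliar with them, a typical local setup and smoke test looks roughly like this (a sketch using standard iproute2 and can-utils commands, not taken from my test suite):

sudo modprobe vcan
sudo ip link add dev vcan0 type vcan
sudo ip link set up vcan0
candump vcan0 &               # listen on the virtual bus
cansend vcan0 123#DEADBEEF    # send a frame; candump should print it back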

larsonmpdx commented 6 years ago

I tried to reproduce this and show the difference between using the Shippable docker image and a custom docker image, but I can't get the Shippable docker image to work like I thought it did before. Here's a minimal test case:

https://github.com/larsonmpdx/shippable

the command "cansend ..." should do nothing and exit with code 0, instead it shows the virtual can network interface isn't set up in the container

larsonmpdx commented 6 years ago

I think my earlier success (#4155) was from some debugging work in which I set up the CAN interfaces by hand, and then the host stayed set up between runs. And at some point that host went away or was rebooted and the automation we thought was working was revealed to do nothing. Does that sound likely?

larsonmpdx commented 6 years ago

Hi, do you need any help from me to reproduce this?

Would it be a good idea for me to run a BYON box in the interim? I could reliably set the network settings on that.

manishas commented 6 years ago

@ric03uec we need to look into this.

anshdavid commented 4 years ago

@larsonmpdx Hello, I came across this while having the same issue. After creating a vcan network on the host PC, how can I get my Docker container to interface with it?

I tried using --network=host; the only problem is that there is no network isolation between the Docker host and the containers anymore. Is there a way to achieve the above without sharing the host network?
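
For reference, the workaround I described amounts to something like this (a sketch; the image name and command are placeholders):

docker run --rm --network=host my-can-tests candump vcan0
# the container shares the host's network namespace, so vcan0 created on the host is visible,
# but there is no network isolation between the host and the container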