VROOM-Project / vroom-docker

Docker image for vroom and vroom-express
BSD 2-Clause "Simplified" License

driving-car after 1.9.0 version #64

Closed KaMaToZzz closed 1 year ago

KaMaToZzz commented 2 years ago

Hello,

Why does the driving-car profile no longer work with the ors router after version 1.9.0? If I downgrade to 1.8.0 it works fine.

Versions: vroom 1.12.0 & vroom-express 0.11.0.

Request: { "vehicles":[ { "id":1, "start_index":0, "profile":"driving-car", .....
Response: { "code": 1, "error": "bad optional access" }

And the log inside the container:

vroom-express@0.11.0 start /vroom-express
node src/index.js
vroom-express listening on port 3000!
Wed, 09 Nov 2022 10:13:19 GMT: [Error] bad optional access
Wed, 09 Nov 2022 10:13:30 GMT: [Error] bad optional access

And if I change the profile to car, it is solved fine: { "vehicles":[ { "id":1, "start_index":0, "profile":"car", .....
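For reference, a minimal sketch of the kind of request that triggers this (not the exact payload above; host/port and coordinates are placeholders, assuming vroom-express listens on port 3000 as in the log):

# Same request twice, only the profile differs.
BODY='{"vehicles":[{"id":1,"profile":"driving-car","start":[76.90,43.20]}],"jobs":[{"id":1,"location":[76.95,43.25]}]}'
curl -s -X POST http://localhost:3000/ \
  -H 'Content-Type: application/json' \
  -d "$BODY"
# -> { "code": 1, "error": "bad optional access" }
# With "profile": "car" the same request returns a normal solution.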

nilsnolde commented 2 years ago

Hm good question. Not sure what could be causing that. Are you sure you wiped all remnants of the previous docker image, especially the config.yml? Don’t think anything changed there, but it’s good practice anyways. Can you paste the contents of your config.yml and also share how exactly you run ORS (docker-compose.yml or docker command)?
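For a quick check, something along these lines should show what actually ended up in the running container (the container name vroom_test is taken from the compose file further down, adjust if yours differs):

docker exec vroom_test cat /vroom-express/config.yml
docker logs --tail 50 vroom_test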

If that doesn’t help, this will need to wait a little, I’m on a longer holiday for a few more weeks.

KaMaToZzz commented 2 years ago

Have a good vacation:)

On topic: I deploy ORS on a separate VPS instance.

vroom config.yml:

cliArgs:
  geometry: false
  planmode: false
  threads: 4
  explore: 5
  limit: '1mb'
  logdir: '/..'
  logsize: '100M'
  maxlocations: 1000
  maxvehicles: 200
  override: true
  path: ''
  port: 3000
  router: 'ors'
  timeout: 300000
routingServers:
  osrm:
    car:
      host: '192.168.166.10'
      port: '5000'
    bike:
      host: '192.168.166.10'
      port: '5000'
    foot:
      host: '192.168.166.10'
      port: '5000'
  ors:
    driving-car:
      host: '192.168.166.10'
      port: '8080'
    driving-hgv:
      host: '192.168.166.10'
      port: '8080'
    cycling-regular:
      host: '192.168.166.10'
      port: '8080'
    cycling-mountain:
      host: '192.168.166.10'
      port: '8080'
    cycling-road:
      host: '192.168.166.10'
      port: '8080'
    cycling-electric:
      host: '192.168.166.10'
      port: '8080'
    foot-walking:
      host: '192.168.166.10'
      port: '8080'
    foot-hiking:
      host: '192.168.166.10'
      port: '8080'

vroom docker-compose.yml (built/deployed by Jenkins):

---

version: "3.4"
services:
  vroom:
    restart: always
    image: registry.site.local/vroom-express:${CONTAINER_TAG}
    container_name: vroom_test
    environment:
      - VROOM_ROUTER=ors
      - VROOM_ROUTER_IP=192.168.166.10
      - SERVICE_3000_NAME=vroom-express-vroom-test
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
        order: start-first
      restart_policy:
        condition: any
        max_attempts: 10
        delay: 5s
        window: 120s
      placement:
        constraints: [node.role == worker]
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 1s
      retries: 3
      start_period: 10s
    networks:
      - internal
    ports:
      - 8805:3000
    build:
      dockerfile: Dockerfile
      context: .
networks:
  internal:
    external: true
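From outside the swarm, the published port can be probed the same way the healthcheck does (the node address is a placeholder):

curl -f http://<node-ip>:8805/health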

vroom Dockerfile:

FROM debian:bullseye-slim as builder
LABEL maintainer=nils@gis-ops.com

WORKDIR /

RUN echo "Updating apt-get and installing dependencies..." && \
  apt-get -y update > /dev/null && apt-get -y install > /dev/null \
  git-core \
  build-essential \
    g++ \
  libssl-dev \
    libasio-dev \
  libglpk-dev \
    pkg-config

ARG VROOM_RELEASE=v1.12.0

RUN echo "Cloning and installing vroom release ${VROOM_RELEASE}..." && \
    git clone  --recurse-submodules https://github.com/VROOM-Project/vroom.git && \
    cd vroom && \
    git fetch --tags && \
    git checkout -q $VROOM_RELEASE && \
    make -C /vroom/src -j$(nproc) && \
    cd /

ARG VROOM_EXPRESS_RELEASE=v0.11.0

RUN echo "Cloning and installing vroom-express release ${VROOM_EXPRESS_RELEASE}..." && \
    git clone https://github.com/VROOM-Project/vroom-express.git && \
    cd vroom-express && \
    git fetch --tags && \
    git checkout $VROOM_EXPRESS_RELEASE

FROM node:12-bullseye-slim as runstage
COPY --from=builder /vroom-express/. /vroom-express
COPY --from=builder /vroom/bin/vroom /usr/local/bin

WORKDIR /vroom-express

RUN apt-get update > /dev/null && \
    apt-get install -y --no-install-recommends \
      libssl1.1 \
      curl \
      libglpk40 \
      > /dev/null && \
    rm -rf /var/lib/apt/lists/* && \
    # Install vroom-express
    npm config set loglevel error && \
    npm install && \
    # To share the config.yml & access.log file with the host
    mkdir /conf

COPY ./config.yml /vroom-express/config.yml 
COPY ./docker-entrypoint.sh /docker-entrypoint.sh
ENV VROOM_DOCKER=osrm \
    VROOM_LOG=/conf

HEALTHCHECK --start-period=10s CMD curl --fail -s http://localhost:3000/health || exit 1

EXPOSE 3000
ENTRYPOINT ["/bin/bash"]
CMD ["/docker-entrypoint.sh"]

The OpenRouteService docker-compose is deployed by an Ansible playbook on another instance (works fine, healthcheck ok):

- name: Run ORS container
  docker_compose:
    debug: true
    project_name: ors
    definition:
      version: '2'
      services:
        ors-app:
          container_name: ors-app
          restart: always
          ports:
            - 8080:8080
            - 9001:9001
          dns:
            - 192.168.165.5
            - 192.168.166.5
          image: registry.site.local/registrysite-ors:6.6.1
          # build:
          #    context: .
          #   dockerfile: /home/admin/Dockerfile
          volumes:
            - ors-graphs:/ors-core/data/graphs
            - ors-elevation_cache:/ors-core/data/elevation_cache
            - ./logs/ors:/var/log/ors
            - ./logs/tomcat:/usr/local/tomcat/logs
            - ./conf:/ors-conf
            - /home/admin/data/ru-kz.osm.pbf:/ors-core/data/osm_file.pbf
          environment:
            - BUILD_GRAPHS=false  # Forces the container to rebuild the graphs, e.g. when PBF is changed
            - "JAVA_OPTS=-Djava.awt.headless=true -server -XX:TargetSurvivorRatio=75 -XX:SurvivorRatio=64 -XX:MaxTenuringThreshold=3 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ParallelGCThreads=4 -Xms4g -Xmx48g"
            - "CATALINA_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9001 -Dcom.sun.management.jmxremote.rmi.port=9001 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=localhost"
      volumes:
        ors-graphs:
        ors-elevation_cache:

nilsnolde commented 2 years ago

Yeah, can't see anything obviously wrong.. The error itself does sound more like a C++ error, rather than JS, but I'm not that familiar (anymore) with the JS code.

I'm wondering if there was a regression in the ORS router wrapper @jcoupey ? AFAIK there's nothing actually testing the HTTP wrappers right? I actually wanted to raise that the other day, not sure why I didn't do that..

@KaMaToZzz I see you're using ORS v6.x. It could be that smth changed in the response format, the current ORS implementation in vroom was done with 5.x IIRC. But you mentioned it works with vroom 1.8 right? That wasn't with another ORS version was it?
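One way to narrow that down would be to hit the ORS matrix endpoint directly and compare the response shape against what vroom 1.8 got. Roughly, using the ORS host/port from the compose above and placeholder coordinates (they need to lie inside your graph):

curl -s -X POST http://192.168.166.10:8080/ors/v2/matrix/driving-car \
  -H 'Content-Type: application/json' \
  -d '{"locations":[[8.681495,49.41461],[8.686507,49.41943]]}'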

jcoupey commented 2 years ago

I'm wondering if there was a regression in the ORS router wrapper

Setting aside some tiny technical changes applied to all wrappers, the ORS wrapper in C++ did not change for months. So basically the core ORS-related code is the same as when it was first added.

KaMaToZzz commented 2 years ago

@KaMaToZzz I see you're using ORS v6.x. It could be that smth changed in the response format, the current ORS implementation in vroom was done with 5.x IIRC. But you mentioned it works with vroom 1.8 right? That wasn't with another ORS version was it?

Yes, I tested vroom against that same ORS v6.x: vroom 1.8 works fine with ORS v6.x, vroom 1.9 does not.

nilsnolde commented 1 year ago

Closing here as it's surely not a problem with this project. If anyone runs into this, please open an issue upstream.