Closed tonydm closed 1 year ago
@tonydm did you fix this by any chance yourself? I am facing the same issue.
Sorry, I didn't. I posted here 21 days ago with no response. I spent way too much time on this linuxserver.io image, so I moved on. I could not find an issue with the deployment. I did get it working after I built it myself, but after running it and playing around with it, I didn't like working with it at all and wished I hadn't spent so much time getting it up and running. That's just my preference, of course. I'm not knocking the project or the hard work the folks at linuxserver.io have put into it. They do some great work!!! The same is true of the originators of NetBox. In the end, it just wasn't for me, so I deleted the directory that held all my test Dockerfile and docker-compose.yml files. I'm sorry, I simply can't recall the details, or I would pass along something to point you in the right direction.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
How do I bring activity to this issue? 😅🤷‍♂️
Is there any useful information I can share so someone can pick this up?
It looks like I didn't create an environment variable for the app URL. You can resolve this by adding the URL you are using to access the app to ALLOWED_HOSTS in the configuration.py file in /config.
Correction: it IS in the docs / container and you do have it. Please be sure it matches the URL you are using to reach the app.
I have this issue too.
As @alex-phillips said this is most typically caused by the ALLOWED_HOST (not ALLOWED_HOSTS) env being incorrect. It needs to match the hostname or IP address that you are using to connect to the web service. If you are using multiple you can remove the ALLOWED_HOST env and set it directly in the configuration.py in the /config mount, in the form ALLOWED_HOSTS = ['netbox.example.com', 'netbox.internal.local']
as documented therein.
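To illustrate why an exact match matters, here is a rough plain-Python sketch of how Django-style ALLOWED_HOSTS matching behaves (the real logic lives in django.http.request.validate_host; treat this as an approximation, not NetBox's actual code):

```python
def host_matches(host, allowed):
    """Approximation of Django's ALLOWED_HOSTS check.

    '*' matches any host; a pattern with a leading dot matches the
    domain itself and any subdomain; otherwise an exact match is required.
    """
    host = host.lower().rstrip(".")
    for pattern in allowed:
        pattern = pattern.lower()
        if pattern == "*" or host == pattern:
            return True
        if pattern.startswith(".") and (host == pattern[1:] or host.endswith(pattern)):
            return True
    return False


ALLOWED_HOSTS = ["netbox.example.com", "netbox.internal.local"]
print(host_matches("netbox.example.com", ALLOWED_HOSTS))  # True  -> request served
print(host_matches("192.168.65.40", ALLOWED_HOSTS))       # False -> Django answers 400
print(host_matches("anything.at.all", ["*"]))             # True  -> wildcard allows everything
```

So if you browse to the container by IP address but ALLOWED_HOST is set to a hostname (or vice versa), every request is rejected with Bad Request (400).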
That did not work for me. I added the IP address of my local iMac, 192.168.65.40 (- ALLOWED_HOST=192.168.65.40), but I still get Bad Request (400).
To remove any ambiguity you can do this:
If you are not yet sure what the domain name and/or IP address of the NetBox installation will be, and are comfortable accepting the risks in doing so, you can set this to a wildcard (asterisk) to allow all host values:
ALLOWED_HOSTS = ['*']
ALLOWED_HOSTS = ['*'] does not work either. Still Bad Request. My log:
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-envfile: executing...
[cont-init.d] 01-envfile: exited 0.
[cont-init.d] 10-adduser: executing...
-------------------------------------
[linuxserver.io ASCII banner]
Brought to you by linuxserver.io
-------------------------------------
To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------
User uid: 1000
User gid: 1000
-------------------------------------
[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 50-config: executing...
Operations to perform:
  Apply all migrations: admin, auth, circuits, contenttypes, dcim, extras, ipam, secrets, sessions, taggit, tenancy, users, virtualization
Running migrations:
  No migrations to apply.
Superuser creation skipped. Already exists.
[cont-init.d] 50-config: exited 0.
[cont-init.d] 90-custom-folders: executing...
[cont-init.d] 90-custom-folders: exited 0.
[cont-init.d] 99-custom-files: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
[uWSGI] getting INI configuration from uwsgi.ini
[uwsgi-static] added mapping for /static => static
Starting uWSGI 2.0.18 (64bit) on [Thu Jul 29 05:47:14 2021]
compiled with version: 9.3.0 on 17 April 2020 16:07:02
os: Linux-5.10.17-v8+ #1421 SMP PREEMPT Thu May 27 14:01:37 BST 2021
nodename: 4252d07bfcbf
machine: aarch64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /app/netbox/netbox
detected binary path: /usr/sbin/uwsgi
your memory page size is 4096 bytes
detected max file descriptor number: 1048576
building mime-types dictionary from file /etc/mime.types...1293 entry found
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address :8000 fd 3
Python version: 3.8.10 (default, May 6 2021, 06:30:44) [GCC 9.3.0]
Python main interpreter initialized at 0x558875c1e0
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 145840 bytes (142 KB) for 1 cores
Operational MODE: single process
running "exec:/usr/bin/python3 ./manage.py collectstatic --noinput" (pre app)...
957 static files copied to '/app/netbox/netbox/static'.
running "exec:/usr/bin/python3 ./manage.py remove_stale_contenttypes --no-input" (pre app)...
running "exec:/usr/bin/python3 ./manage.py clearsessions" (pre app)...
running "exec:/usr/bin/python3 ./manage.py invalidate all" (pre app)...
WSGI app 0 (mountpoint='') ready in 5 seconds on interpreter 0x558875c1e0 pid: 306 (default app)
uWSGI is running in multiple interpreter mode
spawned uWSGI master process (pid: 306)
spawned uWSGI worker 1 (pid: 334, cores: 1)
[uwsgi-daemons] spawning "/usr/bin/python3 ./manage.py rqworker" (uid: 1000 gid: 1000)
[pid: 334|app: 0|req: 1/1] 10.0.0.2 () {34 vars in 1170 bytes} [Thu Jul 29 03:49:15 2021] GET / => generated 143 bytes in 1574 msecs (HTTP/1.1 400) 2 headers in 67 bytes (1 switches on core 0)
[pid: 334|app: 0|req: 2/2] 10.0.0.2 () {34 vars in 1170 bytes} [Thu Jul 29 03:49:18 2021] GET / => generated 143 bytes in 74 msecs (HTTP/1.1 400) 2 headers in 67 bytes (1 switches on core 0)
Can you see anything that could be causing this error?
The only thing I can see that's different to my setup is that you're running on ARM.
When I installed it outside of Docker it worked. But I want to use it in Docker!
I found this on a website:
The error message “HTTP 400 Bad Request” does not make it immediately clear where the communication problem actually lies. If the targeted web server uses IIS 7.0, IIS 7.5 or IIS 8.0, more detailed information can be read from the status code:
400.1: Invalid destination header
400.2: Invalid depth header
400.3: Invalid If header
400.4: Invalid overwrite header
400.5: Invalid Translate header
400.6: Invalid request body
400.7: Invalid content length
400.8: Invalid timeout
400.9: Invalid lock token
Can I figure out which sub-error I get?
Let me spin mine up on an ARM host and see if it behaves the same as on my x64 host.
I don't believe nginx does sub-status codes.
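For what it's worth, the 400 here comes from the application's host check, not from any web-server sub-status. Here is a small, self-contained Python sketch that mimics that behaviour with a throwaway local server (the ALLOWED_HOSTS value is hypothetical; this is not the actual NetBox code path):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_HOSTS = ["netbox.example.com"]  # hypothetical value for the demo

class HostCheckHandler(BaseHTTPRequestHandler):
    """Mimics Django's behaviour: answer 400 unless the Host header is allowed."""
    def do_GET(self):
        host = self.headers.get("Host", "").split(":")[0]
        self.send_response(200 if host in ALLOWED_HOSTS else 400)
        self.end_headers()
    def log_message(self, *args):
        pass  # keep the demo output quiet

server = HTTPServer(("127.0.0.1", 0), HostCheckHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def status_for(host_header):
    """Send GET / with an explicit Host header and return the status code."""
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/", headers={"Host": host_header})
    status = conn.getresponse().status
    conn.close()
    return status

print(status_for("netbox.example.com"))  # 200: host is allowed
print(status_for("192.168.65.40"))       # 400: host not in ALLOWED_HOSTS
server.shutdown()
```

You can do the same check against a real container with curl by sending different Host headers; if the status flips between 200 and 400 depending on the header, the host check is the culprit.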
Just spun up on my arm64 host and it works fine. This is my exact compose, completely clean install - no other changes made. Ubuntu 20.04.2, Docker 20.10.7.
services:
  netbox:
    image: ghcr.io/linuxserver/netbox:latest
    container_name: netbox
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - SUPERUSER_EMAIL=${SUPERUSER_EMAIL}
      - SUPERUSER_PASSWORD=${SUPERUSER_PASSWORD}
      - ALLOWED_HOST=${ALLOWED_HOST}
      - DB_NAME=netbox
      - DB_USER=netbox
      - DB_PASSWORD=${DBPASS}
      - DB_HOST=netbox-db
      - DB_PORT=5432
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - REDIS_DB_TASK=8
      - REDIS_DB_CACHE=9
    volumes:
      - web:/config:rw
    ports:
      - 8000:8000
    networks:
      - public
      - private
    depends_on:
      - netbox-db
      - redis
    restart: unless-stopped
  netbox-db:
    image: postgres:12-alpine
    container_name: netbox-db
    environment:
      - POSTGRES_PASSWORD=${DBPASS}
      - POSTGRES_USER=netbox
    volumes:
      - db:/var/lib/postgresql/data
    restart: unless-stopped
    networks:
      - private
  redis:
    image: redis:alpine
    container_name: redis
    restart: unless-stopped
    volumes:
      - redis:/data
    networks:
      - private
networks:
  private:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.3.0/24
    internal: true
  public:
    external: true
volumes:
  web:
  db:
  redis:
Can you quickly try it with Docker Swarm? Because I have a Swarm cluster.
It's not something I have setup at the moment to be able to test.
OK, thanks. I will try this afternoon at home.
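For anyone testing this under Swarm: a compose file used with `docker stack deploy` is interpreted a little differently from plain `docker compose up`. As a hedged sketch (not verified against this image), Swarm ignores `container_name` and the top-level `restart` key and uses a `deploy` section instead, e.g.:

```yaml
# Sketch of Swarm-specific changes for the netbox service above.
# Swarm ignores container_name and restart; use deploy instead.
services:
  netbox:
    image: ghcr.io/linuxserver/netbox:latest
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
```

The rest of the service definition (environment, volumes, networks) should carry over unchanged, but ALLOWED_HOST still has to match whatever hostname or IP you use to reach the published port.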
I had this problem today, and using 0.0.0.0 helped.
Error: Bad Request (400)
Expected Behavior
NetBox container comes up and is accessible.
Current Behavior
Bad Request (400)
Steps to Reproduce
Environment
OS: Ubuntu 18.04.5 LTS
Arch: x86_64
Official Docker install script: curl -fsSL get.docker.com -o get-docker.sh && sh get-docker.sh && rm get-docker.sh
Docker Version: 20.10.2 build 2291f61
Command used to create docker container (run/create/compose/screenshot)
Docker logs