amir20 / dozzle

Realtime log viewer for docker containers.
https://dozzle.dev/
MIT License

DOZZLE_FILTER environment variable not used in new agent mode #3067

Closed: githubbiswb closed this 3 months ago

githubbiswb commented 3 months ago

Describe the bug: I have an environment variable set to filter to a specific swarm stack. This worked before in remote host mode with the docker socket proxy container, but it seems to not work in the new 8.0.1 version with agents.

To Reproduce: Set DOZZLE_FILTER and it seems to be ignored.

Expected behavior: DOZZLE_FILTER would filter out everything except the match.


amir20 commented 3 months ago

Hmmm you might be right! 😅 I think I missed this. I'll see what I can do over the weekend.

Keep the feedback coming!

amir20 commented 3 months ago

@githubbiswb How are you running the agent? I was looking at the code really quick and it should work. Are you using the agent subcommand?

githubbiswb commented 3 months ago

Below is my swarm compose file

services:
  dozzle:
    #image: dockreg.biswb.com:5000/amir20-dozzle:06212
    image: dockreg.biswb.com:5000/amir20-dozzle:070524
    environment:
      DOZZLE_REMOTE_AGENT: ubudockceph18001-doz_agent:7007,ubudockceph18002-doz_agent:7007,ubudockceph18003-doz_agent:7007,ubudockceph18004-doz_agent:7007,ubudockceph18005-doz_agent:7007
      DOZZLE_AUTH_PROVIDER: simple
      #DOZZLE_FILTER: name=filebrowser
    networks:
      - DockInternalComms      
    volumes:
      - data:/data
#      - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      replicas: 1
      update_config:
        delay: 20s
        failure_action: rollback   

  agent:
    image: dockreg.biswb.com:5000/amir20-dozzle:070524
    hostname: "{{.Node.Hostname}}-{{.Service.Name}}"
    command: agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - DockInternalComms      
    deploy:
      mode: global
      update_config:
        delay: 20s
        failure_action: rollback
amir20 commented 3 months ago

Ah. Your filter needs to be on the agent not the main Dozzle instance. This is done intentionally so that each agent could have different filter rules as needed.
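For example, here is a sketch based on the compose file above, with the filter moved onto the agent service (same registry image and network names as in that file):

```yaml
  agent:
    image: dockreg.biswb.com:5000/amir20-dozzle:070524
    hostname: "{{.Node.Hostname}}-{{.Service.Name}}"
    command: agent
    environment:
      # Each agent reads its own DOZZLE_FILTER, so rules can differ per agent
      DOZZLE_FILTER: name=filebrowser
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - DockInternalComms
    deploy:
      mode: global
```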

amir20 commented 3 months ago

BTW, why are you rebuilding Dozzle? Just curious...

githubbiswb commented 3 months ago

Ah. Your filter needs to be on the agent not the main Dozzle instance. This is done intentionally so that each agent could have different filter rules as needed.

That does work as expected then, however maybe this then turns into a feature request.

My idea is each host runs a single agent, and then I spin up multiple dozzle containers that show the UI with a filter of only the needed logs for that service.

That way we only have to spin up a single container for each application administrator who only cares about their particular logs.

We can set it up the way you are going, but that requires a separate Dozzle container for the UI AND another Dozzle agent on every host. As our application admins' needs grow, so will the number of agents.

githubbiswb commented 3 months ago

BTW, why are you rebuilding Dozzle? Just curious...

I am not rebuilding anything with it, I am only pushing an exact copy into a private registry. I keep my docker hosts disconnected from the internet for lots of reasons, so they require a private registry to be able to function.

It also means I don't overrun my Docker Hub pull limits, by referencing my local private registry in the cases where I do allow my docker hosts internet access.

amir20 commented 3 months ago

My idea is each host runs a single agent, and then I spin up multiple dozzle containers that show the UI with a filter of only the needed logs for that service.

Interesting, so you would have one to many agents where there could be multiple UIs deployed?

Yea, this would be a feature request. It wouldn't be too much work to add the filter on the UI. But I do wonder if there are better ways.

  1. Are you using authentication?
  2. What happens if the UI and the agents have filters that don't intersect?
  3. Should I remove filters from agents and just put them on the UI?

We can come back to this once the agents are stable.

githubbiswb commented 3 months ago

Interesting, so you would have one to many agents where there could be multiple UIs deployed?

I would have a single agent on each host, and many Dozzle UI containers, one for each admin and their specific needs. I could do this with swarm labels as well, but I expect that is also implemented on the agent side, not the UI side, at this point.

Yea, this would be a feature request. It wouldn't be too much work to add the filter on the UI. But I do wonder if there are better ways.

1. Are you using authentication?

Yes, simple in one environment; the other will have authentication through a yet-to-be-developed solution, so for now we use random nonce URLs for each admin and instruct them not to share, and have our reverse proxy point to different Dozzle instances based on the nonce.

2. What happens if the UI and the agents have filters that don't intersect?

Not sure I understand the question, sorry

3. Should I remove filters from agents and just put them on the UI?

If it's not too hard, I would say let them happen on either. Then someone can decide where they want to filter.

We can come back to this once the agents are stable.

Sounds good here

amir20 commented 3 months ago

Yes, simple in one environment; the other will have authentication through a yet-to-be-developed solution, so for now we use random nonce URLs for each admin and instruct them not to share, and have our reverse proxy point to different Dozzle instances based on the nonce

Would a better solution be to leverage the forward proxy solution and have the proxy provide a whitelist of allowed Docker patterns? That way you only need to deploy one instance, with different patterns for each authenticated user?

Not sure I understand the question, sorry

I mean what if the agent has foo=bar and the UI has foo=blah. Then the UI would never show anything.

githubbiswb commented 3 months ago

Would a better solution be to leverage the forward proxy solution and have the proxy provide a whitelist of allowed Docker patterns? That way you only need to deploy one instance, with different patterns for each authenticated user?

That could work. Is it implemented currently? I only see Remote_user, Remote_email, and Remote_name at the moment.

I mean what if the agent has foo=bar and the UI has foo=blah. Then the UI would never show anything.

Yeah, I would be fine with that showing nothing, since the match is an AND.
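As a rough sketch (hypothetical Python, not Dozzle's actual code), chaining an agent-side filter and a UI-side filter behaves like a logical AND, so non-intersecting filters yield an empty view:

```python
def apply_filter(names, pattern):
    """Keep only container names containing the pattern
    (Docker's name= filter behaves roughly like a substring match)."""
    return [n for n in names if pattern in n]

containers = ["filebrowser", "nginx", "postgres"]

# Agent side: DOZZLE_FILTER=name=filebrowser
agent_view = apply_filter(containers, "filebrowser")

# UI side: a filter that never intersects with the agent's filter
ui_view = apply_filter(agent_view, "nginx")

print(agent_view)  # ['filebrowser']
print(ui_view)     # [] -- the AND of the two filters matches nothing
```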

githubbiswb commented 3 months ago

This would work as well if implemented with the agents like it was done with the remote hosts:

https://dozzle.dev/guide/remote-hosts#adding-labels-to-hosts

amir20 commented 3 months ago

Agent comes from https://dozzle.dev/guide/agent#changing-agent-s-name

githubbiswb commented 3 months ago

Agreed, no issue there. I just address it with hostname: in the global config and the logs come right through.

hostname: "{{.Node.Hostname}}-{{.Service.Name}}"

Then my remote agents take on the form of node-hostname-service-name:

DOZZLE_REMOTE_AGENT: ubudockceph18001-doz_agent:7007,ubudockceph18002-doz_agent:7007,ubudockceph18003-doz_agent:7007,ubudockceph18004-doz_agent:7007,ubudockceph18005-doz_agent:7007

But if we could filter on the remote agent like you can with remote host labels, that would work. Then it would look like

DOZZLE_REMOTE_AGENT: ubudockceph18001-doz_agent:7007|filebrowser,ubudockceph18002-doz_agent:7007|filebrowser,ubudockceph18003-doz_agent:7007|filebrowser,ubudockceph18004-doz_agent:7007|filebrowser,ubudockceph18005-doz_agent:7007|filebrowser
amir20 commented 3 months ago

Maybe I'm misunderstanding something, but for the agent's name to work you need to use DOZZLE_HOSTNAME=my-special-name on the agent, per the documentation. hostname: xxx won't work since that only sets the internal hostname.

Is that what you mean? If done right, you wouldn't need to do host-ip|name any more. I think this is a more elegant solution, since it is up to the agent to choose a name in this setup.

githubbiswb commented 3 months ago

Check my work; I think I am setting it up the way you are indicating, and it doesn't work this way, although it would be more elegant, as you state.

With the DOZZLE_REMOTE_AGENT and the hostname both set, swarm can use the internal network to have the dozzle container find the agents.

Without it, I am unsure how the Dozzle UI is able to first find the agents (maybe it scans the network?) and, second, tell them apart, since my-special-name would be applied to ALL containers that spin up on the docker nodes, as that is what global mode does.

  dozzle:
    #image: dockreg.biswb.com:5000/amir20-dozzle:06212
    image: dockreg.biswb.com:5000/amir20-dozzle:070524
    environment:
      #DOZZLE_REMOTE_AGENT: ubudockceph18001-doz_agent:7007,ubudockceph18002-doz_agent:7007,ubudockceph18003-doz_agent:7007,ubudockceph18004-doz_agent:7007,ubudockceph18005-doz_agent:7007
      DOZZLE_AUTH_PROVIDER: simple
    networks:
      - DockInternalComms      
    volumes:
      - data:/data
      - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      replicas: 1
      update_config:
        delay: 20s
        failure_action: rollback   

  agent:
    image: dockreg.biswb.com:5000/amir20-dozzle:070524
    #hostname: "{{.Node.Hostname}}-{{.Service.Name}}"
    command: agent
    environment:
#      DOZZLE_FILTER: name=filebrowser
      DOZZLE_HOSTNAME: my-special-name
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - DockInternalComms      
    deploy:
      mode: global
      update_config:
        delay: 20s
        failure_action: rollback  
amir20 commented 3 months ago

Hello @githubbiswb,

I think we are talking about two different issues. I was talking about setting agent name because in comment https://github.com/amir20/dozzle/issues/3067#issuecomment-2211844750 you mentioned wanting to use hostname|label which made me think you wanted to have labels for agents too. That's how it was done in REMOTE_HOST.

First, let's see how agent names work.

I tested with this:

services:
  dozzle-with-agent:
    image: amir20/dozzle:latest
    environment:
      - DOZZLE_REMOTE_AGENT=agent:7007      
    ports:
      - 8082:8080
    depends_on:
      agent:
        condition: service_healthy
  agent:    
    command: agent
    image: amir20/dozzle:latest
    environment:    
      - DOZZLE_HOSTNAME=agent
    healthcheck:
      test: ["CMD", "/dozzle", "healthcheck"]
      interval: 5s
      retries: 5
      start_period: 5s
      start_interval: 5s
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

And as expected, the host shows up with name agent.

[Screenshot 2024-07-07 at 8:04:37 AM]

But then I read your last comment again, and I think you are asking a different question. To connect to agents manually, you do need to set DOZZLE_REMOTE_AGENT.

However, I had missed that you are also doing mode: global. I hadn't seen this before and was under the impression that you were deploying agents to selected nodes. If you are deploying to all nodes, then you should use swarm mode. 🚀

With swarm mode, you can do:

services:
  dozzle:
    image: amir20/dozzle:latest
    environment:
      - DOZZLE_MODE=swarm
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 8080:8080
    networks:
      - dozzle
    deploy:
      mode: global
networks:
  dozzle:
    driver: overlay

In swarm mode, Dozzle discovers its own agents by using DNS. It could take a minute before all instances have found each other since they all start at different times.

Once started, it should look like this:

[Screenshot 2024-07-07 at 8:09:03 AM]

All the nodes are discovered, and they continue to be discovered if any fail.

If you want to change the names of the swarm Dozzle nodes, then I think you can still do DOZZLE_HOSTNAME: "{{.Node.Hostname}}-{{.Service.Name}}". However, I have not tested this. :)

I hope this works, as I spent a lot of time getting this right! If the documentation is incorrect or confusing, any PRs to improve it would be appreciated.
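An untested sketch combining the two ideas above (swarm mode plus a templated hostname; the placeholder syntax is Docker's service-template syntax, which swarm also supports in environment variables):

```yaml
services:
  dozzle:
    image: amir20/dozzle:latest
    environment:
      - DOZZLE_MODE=swarm
      # Untested, per the note above: swarm service templates may work here
      - "DOZZLE_HOSTNAME={{.Node.Hostname}}-{{.Service.Name}}"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 8080:8080
    networks:
      - dozzle
    deploy:
      mode: global
networks:
  dozzle:
    driver: overlay
```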

amir20 commented 3 months ago

I also noticed you have authentication. Added an example: https://dozzle.dev/guide/swarm-mode#setting-up-simple-authentication-in-swarm-mode. There may be subtle bugs, as I haven't tested auth with swarm mode much.

githubbiswb commented 3 months ago

I also noticed you have authentication. Added an example: https://dozzle.dev/guide/swarm-mode#setting-up-simple-authentication-in-swarm-mode. There may be subtle bugs, as I haven't tested auth with swarm mode much.

Using simple auth for just the http login looks fine to me!

githubbiswb commented 3 months ago

https://github.com/amir20/dozzle/issues/3067#issuecomment-2212481901

Thank you for all of the help!

Yeah, when I switch it to swarm mode and global, it connects right up. I was still carrying over some of how it worked before, and this is much cleaner. It also respects the FILTER as expected.

With that said, though, I could still see a use for the remote agents as before: deploying the agents globally, then deploying additional single Dozzle containers that connect to that global service and apply a filter at that level.

Otherwise you have to keep deploying global services each time you want a different filter.

OR

Perhaps it stays global and the reverse proxy can pass the filter to Dozzle somehow, so it displays only the requested results.

So I can submit that as a feature request or explore the reverse proxy path. Either way thank you for all of your effort!

amir20 commented 3 months ago

With that said, though, I could still see a use for the remote agents as before: deploying the agents globally, then deploying additional single Dozzle containers that connect to that global service and apply a filter at that level.

I hear you. But that's a lot of work right now, so let's wait until everything is smooth. I just released agents and swarm mode the other day. Still waiting to see how it works out.