Closed idleite closed 3 months ago
Hi @Idleite
Thank you for tackling this! I'm actually in the process of doing some major refactoring of the core JS files, and I want to tackle that first before I move on to anything else. It won't affect what you've completed, but I would like to finalize those changes before I test this new addition.
Just giving you a heads up. Thanks again!
I think this should also close any requests for Portainer or Dockge support, since this goes straight to Docker, making separate Portainer or Dockge support redundant.
The issue is that it only works for the Portall host. Also, if the container is in host network mode it won't work. (You can get the image's exposed ports from ['Config']['ExposedPorts']; those can be wrong if some env var changed the app's port, but that can be verified with nmap. This would let you capture a few more ports in the case where a container runs in host network mode without changing its default port.)
Personally, I had more of an agent-based approach in mind. Similar to what you did, but an agent in a Docker container that you deploy on each host you need; it would interact with the Portall API and periodically fetch and update ports.
This would, however, require an endpoint for the agent to fetch the existing ports in Portall so that it can compare, assign the IP nickname accordingly, and delete Docker ports in Portall that aren't used anymore.
Would love to have a feature like that; you wouldn't need to do anything extra to get your used ports or remove ports that are no longer used.
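The reconciliation step such an agent would need (compare the ports Portall already knows about against what Docker reports, then add or remove the difference) reduces to a small pure function. This is just an illustrative sketch; the function name and shapes are mine, not part of any existing Portall API:

```python
def diff_ports(portall_ports, docker_ports):
    """Compare the ports Portall currently stores for a host against
    the ports Docker reports on that host.

    Both arguments are iterables of integers; returns (to_add, to_remove).
    """
    portall_set = set(portall_ports)
    docker_set = set(docker_ports)
    to_add = sorted(docker_set - portall_set)     # new on the host, missing in Portall
    to_remove = sorted(portall_set - docker_set)  # stale in Portall, gone from the host
    return to_add, to_remove

# Example: Portall knows 80 and 443, Docker now reports 443 and 8080.
print(diff_ports([80, 443], [443, 8080]))  # → ([8080], [80])
```

The agent would then POST `to_add` and delete `to_remove` through whatever API route gets added.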
This doesn't have to use the local Docker host; there's an env var in the compose file for the URL of the socket, so you could use a TLS or HTTP socket here.
I'll add logic to check for host mode and adjust accordingly. As for using a TLS or HTTP socket, I haven't tested that yet, but I'll test it as well.
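For reference, the Docker socket URL can point at more than the local unix socket; the common schemes are unix, tcp, npipe (Windows), and ssh. A rough sanity check for such a value might look like the sketch below; the helper name and the exact scheme set are my assumptions, not Portall code:

```python
from urllib.parse import urlparse

# Schemes commonly accepted for a Docker socket URL: local unix socket,
# plain/TLS TCP, Windows named pipe, and SSH tunnelling.
VALID_SCHEMES = {"unix", "tcp", "npipe", "ssh"}

def is_valid_docker_host(value):
    """Rough validity check for a DOCKER_HOST-style URL (illustrative only)."""
    scheme = urlparse(value).scheme
    return scheme in VALID_SCHEMES

print(is_valid_docker_host("unix://var/run/docker.sock"))  # → True
print(is_valid_docker_host("tcp://192.168.1.10:2375"))     # → True
print(is_valid_docker_host("/var/run/docker.sock"))        # → False (no scheme)
```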
My point is that it doesn't support importing for multiple IPs (it also sets the IP as 127.0.0.1).
It would be great if you only have one host to manage, but if, like me, you have multiple hosts, it's kind of limited, and I find it odd to be able to import from Docker for only one host when the tool supports multiple.
Apart from that, to be honest I was hoping for more than import, as I stated above. I have a small script that works when using the DB directly, but obviously not the API (since it doesn't have a way to get the port list for now), and it doesn't work if run on another host.
An agent-based approach would only require one new API route, so it's quite doable in my opinion.
I don't know what will end up being used; worst case I'll just fork, I guess.
Here is what I had in mind. It uses the SQLite file directly, but it could be adapted to use the API. It doesn't delete unused ports yet, and the host-network-mode handling could be better (I'm not a big fan of checking for host mode both before and after the function; it would be better to check once), but it's a proof of concept:
```python
import docker
import sqlite3
import socket
import nmap
import logging

# Configure logging to stdout
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

def get_container_ports(container):
    network_mode = container.attrs['HostConfig']['NetworkMode']
    if network_mode == 'host':
        # In host mode there are no port bindings; fall back to the image's exposed ports.
        exposed_ports = container.attrs['Config'].get('ExposedPorts')
        if exposed_ports:
            ports = [int(port.split('/')[0]) for port in exposed_ports.keys()]
            logger.debug(f"Found exposed ports {ports} for container {container.name} in host network mode.")
            return ports
        else:
            logger.debug(f"No exposed ports found for container {container.name} in host network mode.")
            return []
    else:
        ports = container.attrs['NetworkSettings']['Ports']
        exposed_ports = []
        for container_port, host_bindings in ports.items():
            if host_bindings:
                for binding in host_bindings:
                    exposed_ports.append(int(binding['HostPort']))
                    logger.debug(f"Found port {binding['HostPort']} (container port {container_port}) "
                                 f"for container {container.name} in bridge network mode.")
        return exposed_ports

def check_port_usage(ip, port):
    # Verify with nmap that the port is actually open on the host.
    nm = nmap.PortScanner()
    nm.scan(ip, str(port))
    port_state = nm[ip]['tcp'][port]['state']
    logger.debug(f"Port {port} state: {port_state}")
    return port_state == 'open'

def get_nickname(cursor, ip_address):
    cursor.execute("SELECT nickname FROM port WHERE ip_address = ? AND nickname IS NOT NULL ORDER BY id ASC",
                   (ip_address,))
    result = cursor.fetchone()
    return result[0] if result else None

def update_database(container_name, ip_address, ports):
    conn = sqlite3.connect('portall.db')
    cursor = conn.cursor()
    nickname = get_nickname(cursor, ip_address)
    for port in ports:
        cursor.execute("SELECT * FROM port WHERE description = ? AND port_number = ?",
                       (f"{container_name}_docker", port))
        if cursor.fetchone() is None:
            cursor.execute("INSERT INTO port (nickname, description, port_number, ip_address) VALUES (?, ?, ?, ?)",
                           (nickname, f"{container_name}_docker", port, ip_address))
            logger.debug(f"Added port {port} for container {container_name} to database.")
    conn.commit()
    conn.close()

def process_container(container):
    container_name = container.name
    ip_address = socket.gethostbyname(socket.gethostname())
    ports = get_container_ports(container)
    used_ports = []
    if container.attrs['HostConfig']['NetworkMode'] == 'host':
        # Exposed ports are only candidates in host mode; keep the ones that are really open.
        for port in ports:
            if check_port_usage(ip_address, port):
                used_ports.append(port)
                logger.debug(f"Port {port} is used for container {container_name} in host network mode.")
            else:
                logger.debug(f"Port {port} is not used for container {container_name} in host network mode.")
    else:
        used_ports = ports
    if used_ports:
        logger.info(f"Processing container {container_name}")
        update_database(container_name, ip_address, used_ports)
    else:
        logger.warning(f"No used ports found for container {container_name}.")

def main():
    try:
        client = docker.from_env()
        containers = client.containers.list()
        logger.info("Starting container processing")
        for container in containers:
            logger.debug(f"Inspecting container: {container.name}, "
                         f"Network Mode: {container.attrs['HostConfig']['NetworkMode']}")
            process_container(container)
        logger.info("Container processing completed")
    except Exception as e:
        logger.error(f"An error occurred: {str(e)}", exc_info=True)

if __name__ == "__main__":
    main()
```
It defaults to 127.0.0.1 if you don't set a label on the container. If you set com.portall.ip to the IP you want, it will use that, and it does the same thing with com.portall.description. I was also in the process of setting it up not just to import but to continuously pull from a socket; the code for this is in utils/docker/socket.py.
Didn't see the most recent commits, my bad.
Not a big fan of having to set labels for both IP and description on all my containers, though 😅.
If there's some way to have multiple sockets, and a way to set defaults for all containers of a socket, then I guess I'm fine with that too.
Would love to have a way to delete and update unused/changed ports, but that can probably be done too.
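Based on the behaviour described above (use the com.portall.ip label when present, otherwise fall back to 127.0.0.1), the lookup boils down to something like this sketch; the function name is mine, not from the codebase:

```python
def resolve_container_ip(labels, default_ip="127.0.0.1"):
    """Pick the IP to record for a container.

    `labels` is the container's label dict (the Docker SDK exposes it as
    `container.labels`); the com.portall.ip label wins when present,
    otherwise the documented default of 127.0.0.1 is used.
    """
    return labels.get("com.portall.ip", default_ip)

print(resolve_container_ip({"com.portall.ip": "192.168.1.50"}))  # → 192.168.1.50
print(resolve_container_ip({}))                                  # → 127.0.0.1
```

The same pattern would apply to com.portall.description, with whatever default the importer chooses.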
Oh, nice one. I didn't realise this was already being worked on, so I also implemented the same sort of thing. You just have to mount docker.sock, and I added a button on the import page to import from running Docker containers.
My fork is here FYI, but it seems you've already got it handled: https://github.com/jontstaz/Portall
looking forward to this getting merged so i can start using portall!
I'll make a docker container on my repo so it can still be used
I brought this up elsewhere, but my wife and I are busy taking care of our newborn, so development is on a temporary pause.
In the meantime, I think using @Idleite’s fork is a good substitute if you like what they’re doing and want to use their features.
Docker support is definitely on my to-do list, along with supporting Portainer for those who prefer that. I’m working on developing the groundwork for a ‘plugin’ system that would house all of these external support systems now and in the future.
Cheers!
The Docker image is now available at ghcr.io/idleite/portall
I can definitely begin work on a separate system for external plugins. I'll make a different PR for it, then refactor this PR for the plugin system.
@Idleite
I'm not really sure what to put inside the file content field when selecting the Docker socket option; whatever I try, I get "Error importing data".
docker run -d \
--name portall \
--restart unless-stopped \
-p 51643:8080 \
-e SECRET_KEY='not_sure_whats_the_point_of_this_as_well' \
-e DOCKER_HOST=/var/run/docker.sock \
-v /home/radu/.portall:/app/instance \
-v /var/run/docker.sock:/var/run/docker.sock \
ghcr.io/idleite/portall:latest
The Docker host should include the protocol, like this: unix://var/run/docker.sock
The file content does nothing; you have to put something in it, but what it is doesn't matter.
I made the change, but it still doesn't work:
2024-08-05T17:43:22.519631117Z ERROR:app:Exception on /import [POST]
2024-08-05T17:43:22.519653839Z Traceback (most recent call last):
2024-08-05T17:43:22.519657609Z File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
2024-08-05T17:43:22.519662169Z self.dialect.do_execute(
2024-08-05T17:43:22.519665585Z File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 924, in do_execute
2024-08-05T17:43:22.519668062Z cursor.execute(statement, parameters)
2024-08-05T17:43:22.519670208Z sqlite3.OperationalError: no such column: port.docker_id
2024-08-05T17:43:22.519672366Z
2024-08-05T17:43:22.519674459Z The above exception was the direct cause of the following exception:
2024-08-05T17:43:22.519676655Z
2024-08-05T17:43:22.519678681Z Traceback (most recent call last):
2024-08-05T17:43:22.519680868Z File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1473, in wsgi_app
2024-08-05T17:43:22.519683131Z response = self.full_dispatch_request()
2024-08-05T17:43:22.519685281Z File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 882, in full_dispatch_request
2024-08-05T17:43:22.519687516Z rv = self.handle_user_exception(e)
2024-08-05T17:43:22.519689624Z File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 880, in full_dispatch_request
2024-08-05T17:43:22.519691876Z rv = self.dispatch_request()
2024-08-05T17:43:22.519693996Z File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 865, in dispatch_request
2024-08-05T17:43:22.519696227Z return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
2024-08-05T17:43:22.519698497Z File "/app/utils/routes/imports.py", line 75, in import_data
2024-08-05T17:43:22.519700708Z existing_port = Port.query.filter_by(
2024-08-05T17:43:22.519702869Z File "/usr/local/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 2728, in first
2024-08-05T17:43:22.519705123Z return self.limit(1)._iter().first() # type: ignore
2024-08-05T17:43:22.519707260Z File "/usr/local/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 2827, in _iter
2024-08-05T17:43:22.519709730Z result: Union[ScalarResult[_T], Result[_T]] = self.session.execute(
2024-08-05T17:43:22.519711944Z File "/usr/local/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 2351, in execute
2024-08-05T17:43:22.519714191Z return self._execute_internal(
2024-08-05T17:43:22.519716352Z File "/usr/local/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 2236, in _execute_internal
2024-08-05T17:43:22.519718757Z result: Result[Any] = compile_state_cls.orm_execute_statement(
2024-08-05T17:43:22.519720999Z File "/usr/local/lib/python3.9/site-packages/sqlalchemy/orm/context.py", line 293, in orm_execute_statement
2024-08-05T17:43:22.519735617Z result = conn.execute(
2024-08-05T17:43:22.519738908Z File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1418, in execute
2024-08-05T17:43:22.519741315Z return meth(
2024-08-05T17:43:22.519743522Z File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 515, in _execute_on_connection
2024-08-05T17:43:22.519745852Z return connection._execute_clauseelement(
2024-08-05T17:43:22.519748074Z File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1640, in _execute_clauseelement
2024-08-05T17:43:22.519750322Z ret = self._execute_context(
2024-08-05T17:43:22.519752455Z File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context
2024-08-05T17:43:22.519754712Z return self._exec_single_context(
2024-08-05T17:43:22.519756847Z File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1986, in _exec_single_context
2024-08-05T17:43:22.519759098Z self._handle_dbapi_exception(
2024-08-05T17:43:22.519761202Z File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2353, in _handle_dbapi_exception
2024-08-05T17:43:22.519763463Z raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
2024-08-05T17:43:22.519765647Z File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
2024-08-05T17:43:22.519767918Z self.dialect.do_execute(
2024-08-05T17:43:22.519770028Z File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 924, in do_execute
2024-08-05T17:43:22.519772271Z cursor.execute(statement, parameters)
2024-08-05T17:43:22.519774404Z sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such column: port.docker_id
2024-08-05T17:43:22.519777633Z [SQL: SELECT port.id AS port_id, port.ip_address AS port_ip_address, port.nickname AS port_nickname, port.port_number AS port_port_number, port.port_protocol AS port_port_protocol, port.description AS port_description, port."order" AS port_order, port.docker_id AS port_docker_id
2024-08-05T17:43:22.519780910Z FROM port
2024-08-05T17:43:22.519783078Z WHERE port.ip_address = ? AND port.port_number = ? AND port.port_protocol = ?
2024-08-05T17:43:22.519785312Z LIMIT ? OFFSET ?]
2024-08-05T17:43:22.519787424Z [parameters: ('127.0.0.1', 51643, 'tcp', 1, 0)]
2024-08-05T17:43:22.519789601Z (Background on this error at: https://sqlalche.me/e/20/e3q8)
I don't see the point of the DOCKER_HOST=unix://var/run/docker.sock environment variable; simply mounting the socket should be enough. Also, the text view should be hidden for the "docker socket" option, as it creates confusion.
I have lots of containers, so Portall would make a lot of sense for me.
I removed the Portall mount point folder (rm -rf /home/radu/.portall) and started the container again, and it works now!
docker run -d \
--name portall \
--restart unless-stopped \
-p 51643:8080 \
-e SECRET_KEY='whatever' \
-e DOCKER_HOST=unix://var/run/docker.sock \
-v /home/radu/.portall:/app/instance \
-v /var/run/docker.sock:/var/run/docker.sock \
ghcr.io/idleite/portall:latest
The DOCKER_HOST environment variable allows for Docker sockets at different locations or on remote hosts. When I finish the rest of the settings system, I will remove this.
@Idleite I merged your docker implementation into the v1.0.9 branch.
I made several changes, all of them UI related. I've moved the docker logic into my 'Plugins' feature. There are some things I would love to see from this addition. Let me know if you're up for it.
I'm not sure DOCKER_HOST=unix://var/run/docker.sock would work on something like Windows. There's also the Docker option in import.html, but the addition doesn't seem fully implemented. If you're up for it, I would love to see you finish this implementation. I think we're off to a great start. Feel free to create a new PR aimed at the v1.0.9 branch, and let me know if you have any questions.
Cheers!
I started working on this and have made good progress. I just need to update the UI for the new 'Docker' settings tab.
I'll continue work on this and finish up the backend of the docker settings
So I implemented the Docker plugin... mostly! It still has some parts that need to be fleshed out.
I do, however, want to move the functionality into this new 'Docker Plugin Settings' sub-menu that gets created when the docker plugin is enabled.
Currently, as seen in the first image, there's single fields for host/socket. But I know there are folks out there who may have multiple hosts running docker, so I think it would be better to be able to add multiple ones, hence the updated table in the 'Docker Plugin Settings' menu.
I don't think I want the 'Docker' tab to be a permanent feature, because I don't want a bunch of new tabs for every plugin that gets added, but it's there for now during development. What I would prefer (feel free to chime in with your opinion) is to have the page exist and have a 'Configure' button next to the Docker plugin under 'Plugins' that simply takes you to the page with the host/socket table. I think that's a much cleaner approach.
I have not implemented the functionality of moving the Docker plugin logic from the 'Plugins' page into the new 'Docker Plugin Settings' page. I may leave that for you to tackle. Remember to include a 'Delete' button on the table to remove entries as well!
v1.0.9 has a lot of codebase changes, as I did a lot of cleaning (mainly JS related), so hopefully things are a bit easier to follow now. Still have more work to do - but I'm rather busy at home with family business at the moment! Hopefully it's enough to help you get started.
Ping me if you have any questions. Cheers!
I believe this plugin system severely overcomplicates the code.
I'm not a big fan of its current implementation either, but development has to start somewhere.
The choice is either to integrate each new feature as an integral part of the application, or to design an auxiliary system, separate from the core functions, that allows additional features to be added, potentially by others, to complement the core application.
I know I'll come back to redesign it, but if you have any ideas on how to approach this right now, I would be glad to hear them.
Thanks!
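For what it's worth, one common way to keep such an auxiliary system thin is a registry that the core consults without knowing about individual plugins. This is purely an illustrative pattern, not a claim about how Portall's plugin system actually works; all names here are hypothetical:

```python
_PLUGINS = {}

def register_plugin(name):
    """Decorator that records a plugin class under a name."""
    def wrap(cls):
        _PLUGINS[name] = cls
        return cls
    return wrap

def enabled_plugins(settings):
    """Instantiate only the plugins the user has switched on."""
    return [cls() for name, cls in _PLUGINS.items() if settings.get(name)]

@register_plugin("docker")
class DockerPlugin:
    def import_ports(self):
        return []  # real logic would talk to the Docker socket

print([type(p).__name__ for p in enabled_plugins({"docker": True})])  # → ['DockerPlugin']
```

The core app then only iterates over `enabled_plugins(...)`, so adding Portainer or Dockge support later means registering one more class rather than touching core code.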
This PR completes 'Docker Support' in planned_features.md.