Closed jeff47 closed 2 years ago
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I don't really see any point to this. If you want to change the web or peer ports, you just map them: if I want peer port 10500, I use -p 10500:9091. There's no need to change anything inside the container or add more environment variables.
But then your announce URL will be wrong, I believe. You'll be unreachable because Transmission will announce the wrong port.
Can you demonstrate this in practice? I don't think this would be an issue, as my port is changed and I have no issues.
Have you tested your setup on any private trackers where you can see what you are seeding? It won't be an issue for downloading, only for seeding.
Transmission has no knowledge of the port mapped from outside the docker container; it only knows the port it is bound to inside the container, so that is what it announces. I've tested this using the connection tracker of a private tracker: if I change the port mapping in docker-compose, I am unreachable, because Transmission reports that it is listening on the internal port, which is not open on the docker side.
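To illustrate the mismatch being described, here is a sketch of a remapped peer port in docker-compose (ports are examples, not from this PR): Transmission still announces its internal port, but only the remapped host port is actually open.

```yaml
services:
  transmission:
    image: linuxserver/transmission
    ports:
      - 10500:51413   # host 10500 -> container 51413
      # Transmission only knows about 51413, so that is the port it
      # announces to trackers; but the host only forwards 10500,
      # so inbound peer connections to the announced port fail
```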
You are right, that was my mistake. I checked, and I had changed my settings.json accordingly.
With that said, did you test this with a reboot? Do your settings remain in settings.json after a restart? If yes, I'm on board with this. However, while you made the backend changes, you didn't update the readme-vars to reflect the new env vars or how to use them, and you'll want a note in the changelog as well. As is, this PR would not work unless someone already knows the requirements.
As you dress that up with the missing bits, the webui portion needs to be removed, as it doesn't suffer from the issue described with the peer port. We look forward to seeing this.
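A minimal sketch of what the proposed init logic might look like, assuming a PEERPORT variable and a settings.json layout like Transmission's defaults (the variable name, path handling, and sed approach are assumptions here, not the PR's actual code):

```shell
#!/bin/sh
# Sketch: patch "peer-port" in settings.json when PEERPORT is set.
# A temp file stands in for /config/settings.json for illustration.
SETTINGS="$(mktemp)"
cat > "${SETTINGS}" <<'EOF'
{
    "peer-port": 51413,
    "rpc-port": 9091
}
EOF

PEERPORT=10500
if [ -n "${PEERPORT}" ]; then
    # Rewrite the existing peer-port value in place
    sed -i "s/\"peer-port\":[^,]*/\"peer-port\": ${PEERPORT}/" "${SETTINGS}"
fi
grep '"peer-port"' "${SETTINGS}"   # the file now shows "peer-port": 10500,
```

Note that this only covers the "write it into settings.json" half; the persistence-across-restarts question raised above depends on when the image regenerates settings.json relative to when this runs.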
I think the webui part is important, however. I run two instances of this container on my server, and there needs to be a way to change the webui port. It isn't a required parameter, just an optional one, the same as the peer port.
We will not merge if the webui portion exists in the PR; that one isn't up for debate. The webui port inside the docker network is unique to each container, and outside the network you just map it, so it does not suffer from the same issue as the peer port.
Oh, of course. Makes complete sense. I got tunnel vision because of the other issue and forgot it was that easy to fix.
I am a bot, here are the test results for this PR: https://ci-tests.linuxserver.io/lspipepr/transmission/3.00-r2-pkg-1704c93c-pr-182/index.html https://ci-tests.linuxserver.io/lspipepr/transmission/3.00-r2-pkg-1704c93c-pr-182/shellcheck-result.xml
OK, I updated the code (removed WEBUI) and edited readme_vars.yml and readme.md to document the option. Let me know if I missed anything please.
I disagree, and I do see the benefit of being able to set the webui port via an environment variable. In a scenario where you have more than one transmission container attached to a single gluetun container's network stack (or another containerized VPN network), each one needs a unique webui port in its settings.json in order to be reachable, because the ports are mapped on the VPN container, not on the transmission containers.
https://github.com/qdm12/gluetun/wiki/Port-mapping
If it's not breaking anything, what's the real harm in having the option?
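For context, a sketch of the gluetun setup being described (service names and ports are illustrative, not from this PR): both transmission containers share gluetun's network namespace, so all ports are published on the gluetun service, and each instance must listen on a distinct internal rpc-port to avoid a clash.

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    ports:
      - 9091:9091   # webui of instance 1
      - 9092:9092   # webui of instance 2
  transmission1:
    image: linuxserver/transmission
    network_mode: "service:gluetun"   # shares gluetun's network stack
    # needs rpc-port 9091 in its settings.json
  transmission2:
    image: linuxserver/transmission
    network_mode: "service:gluetun"
    # needs rpc-port 9092 in its settings.json, since 9091 is taken
```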
In a scenario where you have more than one transmission container connected to a single gluetun container network stack (or other container vpn network) . . .
That's neither a common scenario nor one we support.
If it's not breaking anything, what's the real harm in having the option?
If we applied that logic, all of our images would have a million env vars. That would cause a lot of support overhead, not to mention it would turn away many potential users through confusion or a false sense of complexity.
That's neither a common scenario...
With all due respect, that's just your opinion.
If we applied that logic, all of our images would have a million env vars. That would cause a lot of support overhead, not to mention it would turn away many potential users through confusion or a false sense of complexity.
There's no need for hyperbole. I was obviously referring to having this particular option, as there is value in it in some scenarios.
With all due respect, that's just your opinion.
With all due respect, our opinion is all that matters. If you disagree, feel free to fork.
The addition of $PEERPORT and $WEBUI checks would allow users to specify these ports via environment variables, adding flexibility, particularly for users running multiple instances.
Description:
Adds the ability to specify the webui (RPC) port and the peer listening port via environment variables.
Benefits of this PR and context:
Adds flexibility and simplifies setup.
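A hypothetical usage example of the proposed variable (the PEERPORT name comes from this PR; the port number is illustrative):

```yaml
services:
  transmission:
    image: linuxserver/transmission
    environment:
      - PEERPORT=10500    # written into settings.json at startup
    ports:
      - 10500:10500       # host port matches the announced port
```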
How Has This Been Tested?
Tested locally. The only effect is on /config/settings.json, and only when the environment variables are set.
Source / References:
https://github.com/linuxserver/docker-transmission/issues/179