bharathkkb opened this issue 4 years ago
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 7 days
Does tinyproxy even get started? It seems like the startup script just installs it and doesn't configure or start it.
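One quick way to check on the bastion itself (assuming the package was installed via apt, as the startup script does) would be:
# check whether the packaged tinyproxy service exists and was started
systemctl status tinyproxy
# recent service logs, if any
journalctl -u tinyproxy --no-pager | tail -n 20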
Hmm. I had to install and configure Tinyproxy manually:
# SSH into the bastion-host VM
gcloud beta compute ssh bastion-host --tunnel-through-iap --project ${PLATFORM_PROJECT_ID} --zone ${BASTION_ZONE}
# Become root
sudo su
# Create Tinyproxy user
useradd -M -U -s /bin/false tinyproxy
# Create Tinyproxy log file
mkdir -p /usr/local/var/log/tinyproxy
touch /usr/local/var/log/tinyproxy/tinyproxy.log
chown tinyproxy:root /usr/local/var/log/tinyproxy/tinyproxy.log
# Configure Tinyproxy
mkdir -p /usr/local/etc/tinyproxy
cat > /usr/local/etc/tinyproxy/tinyproxy.conf << EOF
##User/Group to use after dropping root
User tinyproxy
Group tinyproxy
##Port and address to bind to
Port 8888
Bind 0.0.0.0
##File locations
DefaultErrorFile "/usr/local/share/tinyproxy/default.html"
StatFile "/usr/local/share/tinyproxy/stats.html"
LogFile "/usr/local/var/log/tinyproxy/tinyproxy.log"
LogLevel Info
PidFile "/var/run/tinyproxy.pid"
##Authentication
Allow 127.0.0.1
##HTTP Headers
ViaProxyName "server-hostname"
DisableViaHeader No
##Threading
StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxRequestsPerChild 0
##Connection
Timeout 600
MaxClients 100
EOF
# Configure Tinyproxy systemd service
cat > /etc/systemd/system/tinyproxy.service << EOF
[Unit]
Description=Tinyproxy daemon
Requires=network.target
After=network.target
[Service]
Type=forking
PIDFile=/var/run/tinyproxy.pid
ExecStart=/usr/bin/tinyproxy -c '/usr/local/etc/tinyproxy/tinyproxy.conf'
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
# Reload systemd config
systemctl daemon-reload
# Configure tinyproxy to start on boot
systemctl enable tinyproxy.service
# Start tinyproxy now
systemctl start tinyproxy
# exit root
exit
# exit ssh
exit
# Get cluster config with internal IP
gcloud container clusters get-credentials ${CLUSTER_NAME} --region ${CLUSTER_REGION} --project ${PLATFORM_PROJECT_ID} --internal-ip
# Run SSH Tunnel as background job
gcloud beta compute ssh bastion-host --tunnel-through-iap --project ${BASTION_PROJECT_ID} --zone ${BASTION_ZONE} -- -4 -N -L 8888:127.0.0.1:8888 -o "ExitOnForwardFailure yes" -o "ServerAliveInterval 10" &
# Now the proxy works!
HTTPS_PROXY=127.0.0.1:8888 kubectl get pods --all-namespaces
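As an optional sanity check (not part of the original steps), you can send a request through the local proxy port before involving kubectl; the target URL here is just an example:
# verify the tunnel and Tinyproxy respond; any reachable HTTPS URL works as a test target
curl -s -o /dev/null -w '%{http_code}\n' -x http://127.0.0.1:8888 https://kubernetes.io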
Ah, never mind. Installing the tinyproxy package DOES install a systemd service. It just doesn't configure it very well.
$ cat /lib/systemd/system/tinyproxy.service
[Unit]
Description=Tinyproxy lightweight HTTP Proxy
After=network.target
Documentation=man:tinyproxy(8) man:tinyproxy.conf(5)
[Service]
EnvironmentFile=-/etc/default/tinyproxy
Type=forking
ExecStart=/usr/bin/tinyproxy $FLAGS
PIDFile=/run/tinyproxy/tinyproxy.pid
PrivateDevices=yes
[Install]
WantedBy=multi-user.target
$ cat /etc/default/tinyproxy
### tinyproxy defaults
# Edit to configure alternate config file...
#
# If running under systemd, please make sure to uncomment
# both variables below!
#CONFIG="/etc/tinyproxy/tinyproxy.conf"
#FLAGS="-c $CONFIG"
# Add more command line options, as desired...
#FLAGS="$FLAGS ..."
So it's running with entirely default settings: /etc/tinyproxy/tinyproxy.conf, /var/run/tinyproxy/tinyproxy.pid, /var/log/tinyproxy/tinyproxy.log
The default config (on Debian 10) doesn't look too bad:
$ cat /etc/tinyproxy/tinyproxy.conf | grep -v "^#" | grep -v "^$"
User tinyproxy
Group tinyproxy
Port 8888
Timeout 600
DefaultErrorFile "/usr/share/tinyproxy/default.html"
StatFile "/usr/share/tinyproxy/stats.html"
LogFile "/var/log/tinyproxy/tinyproxy.log"
LogLevel Info
PidFile "/run/tinyproxy/tinyproxy.pid"
MaxClients 100
MinSpareServers 5
MaxSpareServers 20
StartServers 10
MaxRequestsPerChild 0
Allow 127.0.0.1
ViaProxyName "tinyproxy"
ConnectPort 443
ConnectPort 563
Seems like the obvious enhancement would be simply to add Restart=on-failure to the service config.
on-failure won't restart tinyproxy if it gets killed, but Restart=always will.
Adding auto-restart can be done with:
sudo sed -i 's/^\[Service\]$/[Service]\nRestart=always/' /lib/systemd/system/tinyproxy.service
sudo systemctl daemon-reload
sudo systemctl restart tinyproxy
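An alternative sketch (untested here, but standard systemd behavior) is a drop-in override, which survives a package upgrade replacing the unit file, unlike editing /lib/systemd/system/tinyproxy.service in place:
# create a drop-in override instead of editing the packaged unit file
sudo mkdir -p /etc/systemd/system/tinyproxy.service.d
cat << 'EOF' | sudo tee /etc/systemd/system/tinyproxy.service.d/override.conf
[Service]
Restart=always
RestartSec=5
EOF
sudo systemctl daemon-reload
sudo systemctl restart tinyproxy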
Thanks for investigating, @karlkfi. If you get a chance, can you send a PR to add this to the startup script?
Suggest clarifying the instructions for use from Google Cloud Shell.
When running from Google Cloud Shell, I found it was necessary to force IPv4 by adding the -4 flag to command 5. This then drops you straight into a shell on the bastion itself.
It was then necessary to open a new Cloud Shell tab to run command 6.
Modifying command 5 to run in the background (&) and then running command 6 in the same Cloud Shell tab did not seem to work.
Tinyproxy doesn't support gRPC; I honestly think we'd be better off finding a different proxy that supports it.
I added a PR that auto-restarts tinyproxy.
However, after a little more digging, I think it's actually possible to remove tinyproxy completely on Kubernetes v1.24+ and use an SSH SOCKS proxy (which is built into sshd), as described here: https://kubernetes.io/docs/tasks/extend-kubernetes/socks5-proxy-access-api/
So in the context of this example:
instead of -- -L8888:127.0.0.1:8888, use -- -D 1080
instead of export HTTPS_PROXY=localhost:8888, use export HTTPS_PROXY=socks5://localhost:1080
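Putting that together, a rough sketch of the SOCKS variant, reusing the placeholder variables and bastion name from the earlier commands, might look like:
# get cluster config with internal IP, as before
gcloud container clusters get-credentials ${CLUSTER_NAME} --region ${CLUSTER_REGION} --project ${PLATFORM_PROJECT_ID} --internal-ip
# open a SOCKS5 proxy on localhost:1080 through the IAP tunnel, as a background job
gcloud compute ssh bastion-host --tunnel-through-iap --project ${BASTION_PROJECT_ID} --zone ${BASTION_ZONE} -- -4 -N -D 1080 &
# kubectl v1.24+ understands socks5:// proxies
HTTPS_PROXY=socks5://localhost:1080 kubectl get pods --all-namespaces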
This works for me! If someone else can confirm we should update the example outputs/documentation to remove the script entirely.
Thanks @grugnog. It is working.
I also hit issues getting tinyproxy configured. I found the info from @grugnog helpful, but I extended it further. My use case is CI/CD deployment from a system that does not have network access to the private cluster.
The issue with the export HTTPS_PROXY approach is that I either lose access to my CI/CD system's APIs while the variable is set, or I have to remember to set it for individual kubectl commands. Setting the proxy in the kubectl config instead ensures that the proxy only affects connections to the cluster.
The sleep command is helpful for local dev use. I found myself forgetting that I already had the proxy open. It could be omitted for CI/CD jobs.
# get cluster credentials
gcloud container clusters get-credentials my-cluster
# open the ssh connection via IAP
# -f sends the ssh process to the background
# -T disables pseudo-terminal allocation (the sleep command doesn't need one)
# -D opens a SOCKS5 proxy on the given local port
# sleep holds the connection open for the specified number of seconds
gcloud compute ssh my-proxy --tunnel-through-iap --ssh-flag "-f -T -D 1080" --command="sleep 1800"
# (optional) check that socks port is open and SSH process is running
netstat -an | grep 1080
ps -elf | grep start-iap-tunnel
# set the proxy config
# the kubeconfig cluster name is different from the cluster name used for get-credentials.
# this command retrieves the full cluster name from the kubeconfig (assuming that your target cluster is the active context).
kubectl config set-cluster $(kubectl config view --minify --output 'jsonpath={.contexts[].context.cluster}') --proxy-url="socks5://localhost:1080"
# run some commands
kubectl get nodes
kubectl get pods
kubectl apply -f deploy-me.yaml
# manually kill the background process before the timeout expires
pgrep -f start-iap-tunnel | xargs kill
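A possible cleanup step, once the tunnel is closed, is to drop the proxy-url again so later kubectl calls don't point at a dead proxy (a sketch using the same active-context trick as above):
# remove the proxy-url from the cluster entry once the tunnel is gone
kubectl config unset clusters.$(kubectl config view --minify --output 'jsonpath={.contexts[].context.cluster}').proxy-url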
Currently the Safer Cluster Access with IAP Bastion Host example does not monitor the status of Tinyproxy. We should consider adding a custom conf file that writes to a log file, and additionally look into monitoring options for surfacing any errors. Alternatively, we could include some troubleshooting documentation on differentiating a Tinyproxy issue from IAP, bastion host, or K8s connectivity issues.
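For the monitoring piece, a minimal sketch, assuming the Ops Agent is installed on the bastion and Tinyproxy logs to /var/log/tinyproxy/tinyproxy.log, could be a logging receiver in the agent config:
# hedged sketch: ship the Tinyproxy log to Cloud Logging via the Ops Agent
# (this overwrites any existing Ops Agent config; merge as needed)
cat > /etc/google-cloud-ops-agent/config.yaml << EOF
logging:
  receivers:
    tinyproxy:
      type: files
      include_paths:
        - /var/log/tinyproxy/tinyproxy.log
  service:
    pipelines:
      tinyproxy:
        receivers: [tinyproxy]
EOF
systemctl restart google-cloud-ops-agent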