Yes, as long as the server allows it.
I'm not sure I understand. There's no special configuration required on the server side, right? That's my understanding.
Servers can reject just about any request clients make.
However, as far as I know, OpenSSH for example has no built-in way of limiting the number of forwarded connections (OpenSSH can, however, limit the max number of sessions per SSH connection, 10 by default; this includes things like exec(), shell(), subsystem(), and sftp()). Other server implementations may have such options. Another possibility is that the underlying OS may have facilities for limiting such things.
In other words, you can start multiple sessions and open multiple forwarded connections, but don't be surprised if you see the requests start to fail at some point.
So I can do it like this?
const { Client } = require('ssh2');

const conn = new Client();
conn.on('ready', () => {
  console.log('Client :: ready');
  // forward1
  conn.forwardOut('192.168.100.102', 0, '127.0.0.1', 80, (err, stream) => {
  });
  // forward2
  conn.forwardOut('192.168.100.102', 0, '127.0.0.1', 80, (err, stream) => {
  });
}).connect({
  host: '192.168.100.100',
  port: 22,
  username: 'frylock',
  password: 'nodejsrules'
});
Sure.
I'll try it out right away. If I understand correctly, does this mean that multiple local ports will be occupied? For example, when I tested ssh -L, I noticed that it only occupies one local port and uses only one SSH connection.
ssh2 doesn't handle the local side of port forwarding as compared to ssh -L, that's up to you to handle (e.g. spinning up a net.Server and calling forwardOut() for each new incoming socket). forwardOut() just creates the connection on the server side, represented by the duplex stream passed to your callback. This gives you the flexibility to make connections that are entirely in-process.
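For illustration, a minimal sketch of that net.Server pattern (reusing the placeholder host, port, and credentials from the earlier snippet; error handling kept to a minimum):

const net = require('net');
const { Client } = require('ssh2');

const conn = new Client();
conn.on('ready', () => {
  // Local side of the tunnel: accept connections on 127.0.0.1:8000...
  const server = net.createServer((socket) => {
    // ...and open one forwarded channel per incoming socket, all over the same SSH connection.
    conn.forwardOut(socket.remoteAddress, socket.remotePort, '127.0.0.1', 80, (err, stream) => {
      if (err) {
        socket.destroy();
        return;
      }
      // Pipe bytes both ways between the local socket and the forwarded duplex stream.
      socket.pipe(stream).pipe(socket);
    });
  });
  server.listen(8000, '127.0.0.1');
}).connect({
  host: '192.168.100.100',
  port: 22,
  username: 'frylock',
  password: 'nodejsrules'
});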
Based on what you mentioned, I understand that this approach is also feasible: I just need to create one SSH connection and perform multiple forwarding operations, so that each forwarded stream maps to a socket created by the proxy's createConnection. However, during actual testing I found that some errors still occur, which is quite strange.
The request failure you've highlighted is not an issue; that's a response to the keepalive request, which is just a fake request to ensure the connection is still alive.
Should forward1 and forward2 use the same srcPort or different ones?
Source IP and port don't need to be unique and can be any reasonable value. I'm not even sure the OpenSSH server utilizes those values at all. The important parts for forwardOut() are the destination IP and port.
I'm not sure if this is normal. As mentioned above, the HTTP agent as a whole is built to proxy access to the web services on the server. After creating a single SSH connection, I forwarded over it multiple times.
Through debugging, I found that I had called forwardOut() 5 times. When I closed the web page, I wanted to trigger client.end() only once all sockets were closed. However, strangely enough, socket.once('close') was only triggered once.
I checked and confirmed that each forwardOut() socket was a different object.
My understanding is that since the page has been closed, all sockets should logically be closed too; at the very least there should be five 'close' triggers here instead of just one. So my question is: is it wrong to call forwardOut() multiple times from an SSH HTTP agent?
this.agent = new SSHTTPAgent({
  ...connectOpts,
  debug: console.log
}, {
  keepAlive: true,
  keepAliveMsecs: connectOpts.keepaliveInterval,
  timeout: 70 * 1000,
  maxSockets: 3
});
createConnection(options, cb) {
  const srcIP = (options && options.localAddress) || this._defaultSrcIP;
  const srcPort = (options && options.localPort) || 0;
  const dstIP = options.host;
  const dstPort = options.port;
  if (!this.server) {
    this.server = new SshTcpServer();
  }
  this.server.connect(this._connectCfg).then(() => {
    this.server.forwardOut(srcIP, srcPort, dstIP, dstPort, (err, stream) => {
      // trigger 5❗️-------------------------------------
      stream.on('close', () => {
        // trigger 1 ❗️-----------------------------------
        debugger;
      });
      cb(null, decorateStream(stream, ctor, options));
    });
  });
}
const Client = require('ssh2/lib/client.js');
const { EventEmitter } = require('events');
const { randomString } = require('../../utils');

const timeout = 1000 * 60; // 1 min

class SshTcpServer extends EventEmitter {
  constructor() {
    super();
    this.client = null;
    this.serverId = randomString(10);
  }

  connect(connectOpts) {
    return new Promise((resolve, reject) => {
      if (!this.client) {
        console.log('connect ssh tcp server');
        this.client = new Client();
        this.client.on('ready', () => {
          resolve();
        }).connect(connectOpts);
      } else {
        console.log('use existed ssh tcp server');
        resolve();
      }
    });
  }

  forwardOut(srcIP, srcPort, dstIP, dstPort, cb) {
    this.client.forwardOut(srcIP, srcPort, dstIP, dstPort, (err, socket) => {
      if (err) {
        return cb(err);
      }
      socket.on('close', () => {
        console.log('socket close', this.serverId);
        this.afterSocketClose();
      });
      cb(err, socket);
    });
  }

  afterSocketClose() {
    if (!this.client) {
      return;
    }
    if (this.client._chanMgr._count === 1) {
      if (this.timeoutTimer) {
        clearTimeout(this.timeoutTimer);
      }
      this.timeoutTimer = setTimeout(() => {
        if (this.client._chanMgr._count === 0) {
          this.end();
        }
      }, timeout);
    }
  }

  end() {
    if (this.client) {
      this.client.end();
      this.client = null;
    }
  }
}

module.exports = {
  SshTcpServer
};
After the web page is closed, how can I watch for all sockets being closed? I want to confirm whether to call client.end().
I conducted an experiment and found that srcPort does not actually occupy a local port. For example:
conn.forwardOut('localhost', 8000, '127.0.0.1', 80, (err, stream) => {
});
lsof -i :8000
returns nothing.
I'm not sure what you're trying to accomplish here. You shouldn't be using HTTP/HTTPS agents if you're trying to replicate the functionality of ssh -L. Like I said in https://github.com/mscdex/ssh2/issues/1302#issuecomment-1558481368, all you need for that is a single net.Server where you pipe() between the incoming socket and the stream from forwardOut(). Whether you reuse the same SSH connection for all incoming sockets or use one SSH connection per incoming socket (or something in between) is totally up to you.
HTTP/HTTPS agents are only for requests that originate from within your own process via node's http/https request() API.
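As a point of reference, a minimal sketch of that in-process agent usage (assuming the HTTPAgent export that recent ssh2 versions provide; the hosts, ports, and credentials are placeholders):

const http = require('http');
const { HTTPAgent } = require('ssh2');

// SSH connection settings (placeholders); these are used to establish the tunnel.
const sshConfig = {
  host: '192.168.100.100',
  port: 22,
  username: 'frylock',
  password: 'nodejsrules'
};

// The request below originates in this process and is tunneled over SSH to the destination.
const agent = new HTTPAgent(sshConfig, { keepAlive: true });
http.get({
  host: '127.0.0.1',
  port: 80,
  agent,
  headers: { Connection: 'close' }
}, (res) => {
  console.log('STATUS:', res.statusCode);
  res.resume();
});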
I conducted an experiment and found that srcPort does not actually occupy a local port.
That's what I already said in https://github.com/mscdex/ssh2/issues/1302#issuecomment-1558481368.
My local requirement is to establish an HTTP agent over an SSH connection in order to proxy access to the target machine's intranet web services. ssh -L is the basic solution I could think of; of course, I would prefer not to occupy a local port at all. However, as shown in the code above, when using multiple forwardOut() calls over one SSH connection, I cannot find a way to determine the point in time at which all sockets are closed after the web page is closed.
After all, I need to decide whether client.end() is needed or not.
Thank you.
I need to decide whether client.end() is needed or not.
The easiest solution would probably be to just maintain a Set of the open streams and check its size when a stream closes, to determine whether you should close the underlying SSH connection, if that's how you want it to behave.
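For example, a minimal sketch of that bookkeeping (the trackStream() helper and its names are purely illustrative):

// Keep every open forwarded stream in a Set; end the SSH client once the Set empties.
const openStreams = new Set();

function trackStream(client, stream) {
  openStreams.add(stream);
  stream.once('close', () => {
    openStreams.delete(stream);
    if (openStreams.size === 0) {
      // No forwarded channels left open, so it is safe to tear down the SSH connection.
      client.end();
    }
  });
  return stream;
}

Each stream returned by forwardOut() would be passed through trackStream() before being handed to the agent's callback.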
Thank you very much.
TCP supports multiplexing. So, if I understand correctly, after connecting via SSH, can I forward multiple streams or channels?