
How to forward SSH access of one machine, through another, to the rest of a network?

+6
−0

I have a setup with two Raspberry Pis: a CM3 module whose only connection to the outside world is USB, and a Pi 4 with plenty of connectors.

I set things up so that the CM3 functions as a USB gadget, and once it was connected to one of the USB ports on the Pi 4, I successfully logged into the CM3 with SSH from a console on the Pi 4.

Now, the Pi 4 is connected through its LAN jack to a small network. I would like to be able to access the CM3 from devices on that network (e.g. log in with SSH), so I guess I need to set up some sort of forwarding on the Pi 4. I'm not sure of the correct terminology for this; I looked for "bridge" tutorials, but what they described didn't seem to be what I need, or perhaps I only found unsuitable ones.

What do I need to do to set this up?



3 answers

+5
−0

I'm having trouble visualizing exactly what your setup is like, but if I understand correctly, then:

  • You have one client that you are connecting from
  • You have one server that you are able to connect to
  • You have some number of hosts which are only reachable through that server, to which you would like to connect
  • All connections are done with SSH

This can be accomplished by using the "server" as a jumpstation to reach the hosts logically behind it, using SSH's port forwarding functionality.

(It's probably worth noting that what I describe here isn't strictly a jumpstation setup, as a jumpstation is typically taken to be a minimal host to which a connection is established for the express purpose of establishing a further, isolated connection, thereby maintaining a degree of network isolation between the two endpoint hosts such that one still can't directly reach the other. Still, it's close enough that the term seems applicable.)

First, set things up so that you can conveniently connect from the client to the server. It sounds like you have already done this, but if not, that's the place to start.

Second, start an SSH connection with port forwarding to the host that you ultimately want to be able to connect to.

Third, connect to the locally listening forwarded port on the client in order to reach the host you ultimately want to connect to.

Port forwarding can be a little tricky to get right, but the basic principle is simply that you choose a local port, which the server will then forward, according to its own network settings, to a different host and port. This enables you to reach a host through the SSH connection which you couldn't reach directly; even one on an entirely different LAN segment. The only requirement is that it's reachable over TCP/IP from the server; to the final destination host, the traffic looks like it came from the server, not from the ultimate client.

SSH supports local and remote port forwarding. (Mostly when one talks of SSH port forwarding, one refers to local port forwarding.) To remember the direction of the traffic flow for each, keep in mind that the "local" and "remote" specifier binds more tightly to "port", and specifies where the listener to that port is set up. A "local port" forward sets up a local listener to a port and passes that traffic through the SSH connection to be further forwarded by the server to some destination; a "remote port" forward sets up a remote listener to a port and passes that traffic through the SSH connection to be further forwarded by the client to some destination. In both cases, the destination can be the local host.

With the OpenSSH command-line client, local port forwarding is specified using the -L parameter, and remote port forwarding is specified using the -R parameter. Other implementations are likely to do it differently, and may have a different format for how to specify the forwarding, but the general principle should transfer readily between implementations. My examples here are for OpenSSH, since that is probably the most commonly used SSH implementation on *nix systems.

Assume that the host you ultimately want to connect to is 172.16.0.99, and that the client you are connecting from can't reach it directly. Also assume that things are set up such that ssh 10.1.2.3 logs you in to the server that can reach 172.16.0.99. Now add port forwarding:

ssh -L 12345:172.16.0.99:22 10.1.2.3

What this does is tell SSH to bind to local port 12345, and that any traffic that arrives on that port is to be forwarded through the SSH connection in such a way that the server you connect to (10.1.2.3) will itself forward that data to 172.16.0.99 port 22. Any reply traffic will take the opposite direction back to the client.

Once the SSH session is up and running, you should be able to run, on the client,

ssh -p 12345 127.0.0.1

which will result in a connection being established all the way to 172.16.0.99 port 22. Over that connection, in turn, SSH will in this particular example attempt to establish a brand-new SSH session.

If that doesn't work, check to make sure that port forwarding is enabled on the server (in our case, 10.1.2.3), and that the server, in turn, can reach the host specified client-side in the port forwarding declaration. Aside from the relevant log files, /etc/ssh/sshd_config settings like AllowTcpForwarding and PermitOpen are a good place to start debugging.
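For reference, a minimal sketch of the relevant /etc/ssh/sshd_config fragment on the server, with both options at their permissive defaults (your distribution's defaults may differ, and sshd needs to be reloaded after changes):

AllowTcpForwarding yes
PermitOpen any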

Note that there is nothing special about running SSH through the tunnel. For example, we could use telnet instead:

ssh -L 12345:172.16.0.99:23 10.1.2.3

followed by

telnet 127.0.0.1 12345

While in this case the connection between 10.1.2.3 and 172.16.0.99 will be completely unprotected, the traffic that flows between the client (the host you are sitting at) and 10.1.2.3 will be encrypted and integrity protected by virtue of being routed through the SSH session.

Particularly for remote port forwarding, you may also want to look at the GatewayPorts setting in OpenSSH's /etc/ssh/sshd_config; if left at its current default value of no, connections from other hosts to the remote forward port are prevented. Depending on your intended usage, you may need to adjust this, possibly within an appropriate Match directive.
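For completeness, a remote forward uses the same syntax with -R, putting the listener on the server instead. A sketch with made-up port numbers:

ssh -R 2222:127.0.0.1:22 10.1.2.3

This makes 10.1.2.3 listen on port 2222 and pass anything arriving there back through the SSH session to port 22 on the client; unless GatewayPorts is enabled, that listener is reachable only from 10.1.2.3 itself.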

Also, port forwarding is only guaranteed to work for TCP; while in principle I suspect UDP could be forwarded just as well, it looks like the standard only allows for TCP port forwarding. See RFC 4254 section 7 for some of the gory details. However, OpenSSH also supports doing port-forwarded I/O through UNIX sockets; see the ssh man page entries for -L and -R for details.
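As a quick sketch of the UNIX socket variant (the paths here are made up; the remote socket must already exist and be accessible to your account on the server):

ssh -L /tmp/db.sock:/var/run/mysqld/mysqld.sock 10.1.2.3

after which local clients can talk to /tmp/db.sock as if the remote service were local.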

(As an aside, if you do this a lot, you may want to look at autossh to automatically establish and maintain an SSH connection, and/or LocalForward in the SSH client configuration, as appropriate.)
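A rough sketch of what the configuration approach could look like in ~/.ssh/config, reusing the made-up addresses from above (the host alias is arbitrary):

Host cm3-tunnel
    HostName 10.1.2.3
    LocalForward 12345 172.16.0.99:22

after which ssh cm3-tunnel sets up the same forward; autossh accepts the same alias if you want the session re-established automatically after drops.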


1 comment thread

Yes, you got the scenario right. Anything missing in the explanation which makes this not 100% clear?... (2 comments)
+4
−0

Canina's answer contains a good solution if you only want to reach a small number of hosts.

But requiring a port[1] for each host you want to reach doesn't scale to many hosts. It also requires you to set the tunnel up before you need it, which isn't great if you don't know in advance which hosts you'll need to connect to.

In an earlier setup at the company I work for, we had a bunch of hosts I could only reach through a jumphost, and the list constantly grew. To reach them, I had a line like the following in my .ssh/config for a wildcard entry matching the hosts I couldn't reach directly:

ProxyCommand ssh <myuser>@<jumphost> nc -w1 %h %p

This made SSH[2] connect to the jumphost and run nc on it, which then proxied everything to the host I really wanted to connect to. I don't remember, but this probably required me to run an SSH-agent with forwarding enabled.
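In context, such a wildcard entry might have looked roughly like this (the Host pattern is illustrative, not the one we actually used):

Host 10.20.*
    ProxyCommand ssh <myuser>@<jumphost> nc -w1 %h %p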


  1. A local port, not one on the jumphost, so many users don't compete for resources. ↩︎

  2. I am—and also was back then—on a Linux machine, using the OpenSSH implementation. ↩︎



+1
−0

Grove's answer is likewise good, but OpenSSH introduced ProxyJump (-J) in 2016 with v7.3, making it even easier to chain SSH hosts together:

ssh -J user@interstitial.example me@destination.example

"Real" port forwarding like Canina's answer still has important uses,[1] but chaining SSH hops doesn't have to be done that way.

Permanent config

Since access for most people doesn't change very often, it's nice to keep the details on disk. You can set them up in ~/.ssh/config like this:

Host interstitial jumper # any number of aliases
    HostName interstitial.example
    User user

Host destination # again, any number of aliases
    HostName destination.example
    User me
    ProxyJump interstitial # any alias of the jump host

and just run

ssh destination

And all of the connection info is handled for you. If the ports are not the default 22, you can add them to the config. Also, shell tab completion for SSH typically picks up hosts from your config file, so you can probably just run ssh de<TAB><RETURN>.
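For instance, if the jump host listened on a non-standard port (2222 here is just an example), its entry would simply gain a Port line:

Host interstitial jumper
    HostName interstitial.example
    User user
    Port 2222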

If multiple hops are required to reach the destination, just add an entry for each of them in ~/.ssh/config with a ProxyJump for the previous hop. You can also do multiple hops with -J by comma-separating them.
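As a sketch with one extra intermediate host (second.example and its user are hypothetical), the two forms would look like this; note that destination's ProxyJump now points at the last hop in the chain:

ssh -J user@interstitial.example,user2@second.example me@destination.example

Host second
    HostName second.example
    User user2
    ProxyJump interstitial

Host destination
    HostName destination.example
    User me
    ProxyJump second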

Diagram

It's worth noting that all the SSH sessions with ProxyJump are direct from your computer to each remote computer, no matter how many hops it takes. Tunnels are nested, not link-to-link. You don't need to forward your SSH agent.

+-------+   .-----------.   +----------------+   +---------------+
|       |__/  tunnel to  \__|                |   |               |
|  You  |    interstitial   |  Interstitial  |   |  Destination  |
|       |========================================|               |
|       |       nested tunnel to destination     |               |
|       |========================================|               |
|       |-------------------|                |   |               |
+-------+                   +----------------+   +---------------+

  1. If you want local access to non-SSH resources available from the destination, you can do that. Add a LocalForward option on the destination server's config, which will bring them through the tunnel back to your local computer. ↩︎
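A rough sketch of that, extending the destination entry from above (the internal host name and ports are made up):

Host destination
    HostName destination.example
    User me
    ProxyJump interstitial
    LocalForward 8080 internal-web.example:80

Then, while ssh destination is running, connecting to localhost:8080 on your own machine reaches that internal web server through the nested tunnels.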


