Categories
Web Site Management

Building BlueElysium: Part 2 – The Mail Server Journey

From “How hard can it be?” to “Oh, that’s why people use Gmail” – A tale of SMTP, DKIM, and debugging


Introduction

After successfully building the web server foundation in Part 1, I was feeling confident. “Email can’t be that hard,” I thought. “It’s just SMTP and IMAP, right?”

Narrator: It was, in fact, that hard.

This is the story of building a production-ready mail server, complete with authentication, encryption, spam filtering, virus scanning, and enough troubleshooting to humble any system administrator. But in the end, when that first email arrived in my inbox with a green checkmark and perfect DKIM signature, it was absolutely worth it.

The Email Vision

Goal: Self-hosted email server with:

  • Full SMTP and IMAP support
  • Web-based email account management
  • DKIM signing for email authentication
  • SPF and DMARC for sender validation
  • Spam filtering (SpamAssassin)
  • Virus scanning (ClamAV)
  • Fail2Ban for intrusion prevention
  • SSL/TLS encryption everywhere

Reality Check: This took considerably longer than the web server.

Part 1: Choosing the Stack

The Research Phase

I evaluated several options:

  1. Postfix + Dovecot (manual) – Ultimate flexibility, maximum pain
  2. Mail-in-a-Box – Too automated, limited customization
  3. iRedMail – Good, but complex setup
  4. docker-mailserver – Perfect balance of automation and control

Winner: docker-mailserver (DMS)

Why?

  • Well-maintained and documented
  • Includes everything: Postfix, Dovecot, OpenDKIM, SpamAssassin, ClamAV, Fail2Ban
  • Integrates with existing Docker infrastructure
  • Active community support
  • Can use PostfixAdmin for web management

The Architecture

Internet (Port 25/587/465/993)
    ↓
Nginx (SSL termination for HTTPS, pass-through for mail)
    ↓
docker-mailserver (SMTP/IMAP)
    ↓
MySQL (PostfixAdmin database)
    ↓
PostfixAdmin (Web interface)

Part 2: Initial Configuration

Step 1: DNS Records

Before even starting the containers, I needed proper DNS records. This is critical – email won’t work without them.

# A Records
mail.blueelysium.net        A       [SERVER_IP]
mailadmin.blueelysium.net   A       [SERVER_IP]

# MX Record (highest priority)
blueelysium.net             MX 10   mail.blueelysium.net.

# SPF Record (authorize this server to send email)
blueelysium.net             TXT     "v=spf1 mx ~all"

# DMARC Record (email authentication policy)
_dmarc.blueelysium.net      TXT     "v=DMARC1; p=quarantine; rua=mailto:postmaster@blueelysium.net"

Note: DKIM record comes later after generating keys.
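A quick syntax check on the record strings before publishing can save a propagation wait; a minimal sketch (the checks are illustrative, not a full SPF/DMARC parser):

```shell
# Minimal sanity checks for SPF and DMARC strings before publishing them.
# These only validate the obvious structure, not the DNS side.
check_spf() {
  # An SPF record must start with "v=spf1" and end with an "all" mechanism.
  case "$1" in
    "v=spf1 "*all) return 0 ;;
    *) return 1 ;;
  esac
}

check_dmarc() {
  # A DMARC record must start with "v=DMARC1" and declare a policy (p=).
  case "$1" in
    "v=DMARC1;"*p=*) return 0 ;;
    *) return 1 ;;
  esac
}

check_spf "v=spf1 mx ~all" && echo "SPF looks sane"
check_dmarc "v=DMARC1; p=quarantine; rua=mailto:postmaster@blueelysium.net" \
  && echo "DMARC looks sane"
```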

Step 2: Docker Compose Configuration

Adding the mail server to docker-compose.yml:

mailserver:
  image: ghcr.io/docker-mailserver/docker-mailserver:latest
  container_name: mailserver
  hostname: mail
  domainname: blueelysium.net
  ports:
    - "25:25"     # SMTP
    - "143:143"   # IMAP
    - "465:465"   # SMTPS
    - "587:587"   # Submission
    - "993:993"   # IMAPS
  volumes:
    - ./docker-data/dms/mail-data:/var/mail
    - ./docker-data/dms/mail-state:/var/mail-state
    - ./docker-data/dms/mail-logs:/var/log/mail
    - ./docker-data/dms/config:/tmp/docker-mailserver
    - ./docker-data/certbot/conf:/etc/letsencrypt:ro
  environment:
    - ENABLE_SPAMASSASSIN=1
    - ENABLE_CLAMAV=1
    - ENABLE_FAIL2BAN=1
    - ENABLE_POSTGREY=0
    - ENABLE_AMAVIS=1
    - ONE_DIR=1
    - DMS_DEBUG=0
    - PERMIT_DOCKER=network
    - SSL_TYPE=letsencrypt
    - SSL_DOMAIN=mail.blueelysium.net
    - POSTFIX_MAILBOX_SIZE_LIMIT=0
    - POSTFIX_MESSAGE_SIZE_LIMIT=50000000

Key Decisions:

  • Shared SSL certificates with web server (Let’s Encrypt)
  • All security features enabled from day one
  • MySQL backend for virtual mailboxes (integration with PostfixAdmin)
  • One directory mode for easier backups

Step 3: PostfixAdmin Integration

PostfixAdmin provides a web interface for managing email accounts, domains, and aliases.

postfixadmin:
  image: postfixadmin/postfixadmin:latest
  container_name: postfixadmin
  ports:
    - "8080:80"
  environment:
    POSTFIXADMIN_DB_TYPE: mysqli
    POSTFIXADMIN_DB_HOST: db
    POSTFIXADMIN_DB_NAME: ${MYSQL_PFBE_DATABASE}
    POSTFIXADMIN_DB_USER: ${MYSQL_PFBE_USER}
    POSTFIXADMIN_DB_PASSWORD: ${MYSQL_PFBE_PASSWORD}
    POSTFIXADMIN_SETUP_PASSWORD: ${POSTFIXADMIN_SETUP_PASS}
  depends_on:
    - db
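The ${MYSQL_PFBE_*} references above resolve from the shared .env file introduced in Part 1. A hedged example of the additions (the variable names match the compose file; the values are placeholders):

```shell
# .env additions for the PostfixAdmin backend (values are placeholders)
MYSQL_PFBE_DATABASE=pfbe
MYSQL_PFBE_USER=pfuser
MYSQL_PFBE_PASSWORD=ChangeMe-LongRandomValue
POSTFIXADMIN_SETUP_PASS=ChangeMe-AnotherLongRandomValue
```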

Initial Setup:

  1. Access http://mailadmin.blueelysium.net:8080/setup.php (port 8080 serves plain HTTP; HTTPS comes later via the Nginx proxy)
  2. Create setup password hash
  3. Create superadmin account
  4. Add domain: blueelysium.net
  5. Create first mailbox: frank@blueelysium.net

Part 3: The MySQL Integration Challenge

The Problem

docker-mailserver needs to query MySQL for:

  • Virtual domains
  • Virtual mailboxes
  • Virtual aliases
  • User authentication

But it doesn’t include MySQL client libraries by default!

The Solution: user-patches.sh

I created a custom startup script that runs after DMS initializes:

#!/bin/bash
# docker-data/dms/config/user-patches.sh

echo "Installing MySQL client libraries..."
apt-get update
apt-get install -y postfix-mysql dovecot-mysql

echo "Configuring Postfix virtual domains..."
# Note: unquoted heredoc delimiter so the ${MYSQL_PFBE_*} variables expand
cat > /etc/postfix/mysql-virtual-mailbox-domains.cf << EOF
user = ${MYSQL_PFBE_USER}
password = ${MYSQL_PFBE_PASSWORD}
hosts = db
dbname = ${MYSQL_PFBE_DATABASE}
query = SELECT domain FROM domain WHERE domain='%s' AND active='1'
EOF

echo "Configuring Postfix virtual mailboxes..."
cat > /etc/postfix/mysql-virtual-mailbox-maps.cf << EOF
user = ${MYSQL_PFBE_USER}
password = ${MYSQL_PFBE_PASSWORD}
hosts = db
dbname = ${MYSQL_PFBE_DATABASE}
query = SELECT maildir FROM mailbox WHERE username='%s' AND active='1'
EOF

echo "Configuring Postfix virtual aliases..."
cat > /etc/postfix/mysql-virtual-alias-maps.cf << EOF
user = ${MYSQL_PFBE_USER}
password = ${MYSQL_PFBE_PASSWORD}
hosts = db
dbname = ${MYSQL_PFBE_DATABASE}
query = SELECT goto FROM alias WHERE address='%s' AND active='1'
EOF

# Update Postfix main.cf
postconf -e "virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf"
postconf -e "virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf"
postconf -e "virtual_alias_maps = mysql:/etc/postfix/mysql-virtual-alias-maps.cf"

# Reload Postfix
postfix reload

echo "MySQL integration complete!"

Critical Learning: This script runs every time the container starts, ensuring configuration persists.

Part 4: DKIM Configuration Drama

What is DKIM?

DKIM (DomainKeys Identified Mail) cryptographically signs outgoing emails to prove they’re legitimate. Without it, emails often go to spam.

Generating DKIM Keys

docker exec mailserver setup config dkim domain blueelysium.net

This creates:

  • Private key: /tmp/docker-mailserver/opendkim/keys/blueelysium.net/default.private
  • Public key: /tmp/docker-mailserver/opendkim/keys/blueelysium.net/default.txt

The First DKIM Crisis

Symptom: Emails weren’t getting DKIM signatures

Error in logs:

opendkim[212]: error loading key 'mail._domainkey.blueelysium.net'

Investigation:

docker exec mailserver cat /etc/opendkim/KeyTable
mail._domainkey.blueelysium.net blueelysium.net:mail:/etc/opendkim/keys/blueelysium.net/mail.private

docker exec mailserver ls /etc/opendkim/keys/blueelysium.net/
default.private  default.txt

The Problem: The configuration was looking for selector mail, but the actual key was named default!

The Fix:

Updated KeyTable:

docker exec mailserver bash -c 'echo "default._domainkey.blueelysium.net blueelysium.net:default:/etc/opendkim/keys/blueelysium.net/default.private" > /etc/opendkim/KeyTable'

Updated SigningTable:

docker exec mailserver bash -c 'echo "*@blueelysium.net default._domainkey.blueelysium.net" > /etc/opendkim/SigningTable'

Making it Persistent:

Created permanent config files in docker-data/dms/config/opendkim/:

sudo bash -c 'cat > docker-data/dms/config/opendkim/KeyTable << "EOF"
default._domainkey.blueelysium.net blueelysium.net:default:/etc/opendkim/keys/blueelysium.net/default.private
EOF'

sudo bash -c 'cat > docker-data/dms/config/opendkim/SigningTable << "EOF"
*@blueelysium.net default._domainkey.blueelysium.net
EOF'

Publishing DKIM DNS Record

docker exec mailserver cat /etc/opendkim/keys/blueelysium.net/default.txt

Output:

default._domainkey IN TXT ( "v=DKIM1; h=sha256; k=rsa; "
"p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAsks63VLLrZy0EJcDn5mp..."
"...rest of very long key..." )

Added to DNS as single TXT record:

default._domainkey.blueelysium.net    TXT    "v=DKIM1; h=sha256; k=rsa; p=MIIBIjANBgkq...COMPLETE_KEY...AQAB"

Critical: Must include the ENTIRE public key in one record. I initially had a truncated version that caused validation failures.
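Stitching the chunked default.txt output into the single string DNS expects is easy to script; a sketch that assumes the parenthesized multi-line format OpenDKIM emits:

```shell
# Join the quoted chunks of an OpenDKIM default.txt into one TXT value.
join_dkim() {
  tr -d '\n' < "$1" \
    | sed -e 's/^[^(]*(//' -e 's/)[^)]*$//' \
    | sed -e 's/"[[:space:]]*"//g' \
    | tr -d '"' \
    | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//'
}

# Demo with a sample file shaped like OpenDKIM's output (key shortened):
cat > /tmp/default.txt << 'EOF'
default._domainkey IN TXT ( "v=DKIM1; h=sha256; k=rsa; "
  "p=MIIBIjANBgkqAQAB" )
EOF
join_dkim /tmp/default.txt
```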

Verification:

docker exec mailserver opendkim-testkey -d blueelysium.net -s default -vvv

Output: key OK

🎉 First Success: Sent test email, received with DKIM signature verified!

Part 5: The Desktop Connection Mystery

The Problem

Symptoms:

  • Laptop (different VLAN): ✅ Could connect to mail server
  • Desktop (different VLAN): ❌ Connection timeout on ports 993, 465, 587
  • Same desktop: ✅ Could access website (ports 80, 443)

What I Tried:

  1. Firewall rules on UniFi – Created allow rules, no change
  2. UFW on server – All ports already allowed
  3. IPS/Threat Management – Temporarily disabled, no change
  4. Windows Firewall – Disabled on desktop, still timing out
  5. Docker iptables – Verified forwarding rules correct

The Investigation

Used tcpdump on the server to watch network traffic:

# Monitor port 993 traffic
sudo tcpdump -i any port 993 -nn

# Send test connection from desktop
# Desktop: nc -v mail.blueelysium.net 993

What I Saw:

SYN packet from 10.0.6.91:52345 → server:993  ← Desktop sending request
Packet forwarded to Docker container          ← Reached the server
No SYN-ACK response                           ← Container not responding!

The Smoking Gun: Packets were reaching the server and Docker container, but the container wasn’t responding. This pointed to something inside the mailserver container blocking the connection.

The Fail2Ban Revelation

On a hunch, I checked Fail2Ban status:

docker exec mailserver fail2ban-client status dovecot

Output:

Status for the jail: dovecot
|- Filter
|  |- Currently failed: 1
|  |- Total failed: 2
|  `- File list: /var/log/mail.log
`- Actions
   |- Currently banned: 1
   |- Total banned: 1
   `- Banned IP list: <My LAN IP>

BINGO! My desktop IP was banned by Fail2Ban!

Why? I had been testing authentication multiple times during setup, triggering Fail2Ban’s “too many failed login attempts” protection.

The Fix: Part 1 – Unban

docker exec mailserver fail2ban-client set dovecot unbanip <My LAN IP>

✅ Desktop immediately connected!

Part 6: The Dovecot Authentication Puzzle

The Next Challenge

Desktop could now connect, but authentication failed:

Outlook Error: “Cannot connect to server. Verify username and password.”

Server Logs:

dovecot: auth-worker: Invalid password in passdb: Not a valid MD5-CRYPT or PLAIN-MD5 password

Understanding the Problem

PostfixAdmin stores passwords as SHA512-CRYPT hashes:

$6$rounds=5000$randomsalt$hashedpassword...

Dovecot was trying to read them as PLAIN-MD5, which didn’t match.

The Solution

Modified Dovecot’s SQL configuration to handle the hash format:

docker exec mailserver bash -c "cat > /etc/dovecot/dovecot-sql.conf.ext << 'EOF'
driver = mysql
connect = host=db dbname=pfbe user=pfuser password=SecurePassword
default_pass_scheme = CRYPT
password_query = SELECT username AS user, CONCAT('{CRYPT}', password) AS password FROM mailbox WHERE username='%u' AND active='1'
user_query = SELECT maildir AS home, 5000 AS uid, 5000 AS gid FROM mailbox WHERE username='%u' AND active='1'
EOF"

Key Changes:

  1. default_pass_scheme = CRYPT – Tells Dovecot to expect crypt() format
  2. CONCAT('{CRYPT}', password) – Prefixes hash with {CRYPT} so Dovecot knows how to handle it
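What the CRYPT scheme actually does can be demonstrated locally; a sketch assuming OpenSSL 1.1.1+ (which provides passwd -6 for SHA512-CRYPT):

```shell
# Generate a SHA512-CRYPT hash the way PostfixAdmin stores it, then verify
# that re-hashing with the same salt reproduces it -- which is essentially
# what Dovecot's CRYPT scheme does during authentication.
hash=$(openssl passwd -6 -salt demosalt 'CorrectHorse')
echo "stored: {CRYPT}$hash"

# Verification: extract the embedded salt, re-hash the candidate, compare.
salt=$(printf '%s' "$hash" | cut -d'$' -f3)
candidate=$(openssl passwd -6 -salt "$salt" 'CorrectHorse')
[ "$candidate" = "$hash" ] && echo "password accepted"
```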

Added to user-patches.sh for persistence.

Restarted mailserver:

docker restart mailserver

✅ Authentication successful! Could now send and receive emails!

Part 7: Client Configuration

Outlook Configuration

IMAP Settings (Receiving):

  • Server: mail.blueelysium.net
  • Port: 993
  • Encryption: SSL/TLS
  • Username: frank@blueelysium.net
  • Password: (from PostfixAdmin)

SMTP Settings (Sending):

  • Server: mail.blueelysium.net
  • Port: 587
  • Encryption: STARTTLS
  • Authentication: Required
  • Username: frank@blueelysium.net
  • Password: (same as above)

Alternative SMTP:

  • Port: 465 with SSL/TLS also works

Testing

Send test email:

echo "Test email body" | mail -s "Test Subject" frank@blueelysium.net

Check mail queue:

docker exec mailserver postqueue -p

View logs:

docker logs mailserver --tail 50

Check mailbox:

docker exec mailserver ls -la /var/mail/blueelysium.net/frank/cur/

Part 8: Fine-Tuning & Optimization

SpamAssassin Training

# Train on spam (expand the glob inside the container, not on the host)
docker exec mailserver bash -c 'sa-learn --spam /var/mail/blueelysium.net/frank/.Junk/cur/*'

# Train on ham (legitimate email)
docker exec mailserver bash -c 'sa-learn --ham /var/mail/blueelysium.net/frank/cur/*'

# Check stats
docker exec mailserver sa-learn --dump magic

ClamAV Database Updates

ClamAV updates virus definitions automatically, but I verified:

docker exec mailserver freshclam

Monitoring Configuration

Created simple monitoring script:

#!/bin/bash
# check-mail.sh

echo "=== Mail Server Status ==="
docker exec mailserver supervisorctl status

echo -e "\n=== Recent Auth Attempts ==="
docker logs mailserver | grep "auth" | tail -10

echo -e "\n=== Mail Queue ==="
docker exec mailserver postqueue -p

echo -e "\n=== Fail2Ban Status ==="
docker exec mailserver fail2ban-client status dovecot
docker exec mailserver fail2ban-client status postfix

SSL Certificate Sharing

The mail server uses the same Let’s Encrypt certificates as the web server:

volumes:
  - ./docker-data/certbot/conf:/etc/letsencrypt:ro

Important: After certificate renewal, restart mail server:

docker compose restart mailserver

Added to renewal script for automation.
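Certbot can also trigger the restart itself via a deploy hook; a hedged sketch (RENEWED_DOMAINS is set by certbot at renewal time; in this dockerized setup the hook needs access to the host's Docker CLI, e.g. by running from the host-side renewal wrapper):

```shell
#!/bin/sh
# Example deploy hook: restart the mail server only when its certificate
# was among the names certbot just renewed.

needs_mail_restart() {
  # RENEWED_DOMAINS is a space-separated list of renewed hostnames.
  case " $1 " in
    *" mail.blueelysium.net "*) return 0 ;;
    *) return 1 ;;
  esac
}

if needs_mail_restart "${RENEWED_DOMAINS:-}"; then
  docker compose restart mailserver
fi
```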

Part 9: Testing & Validation

Email Deliverability Testing

Tools Used:

  1. mail-tester.com – Comprehensive deliverability check
  2. MXToolbox – DNS and configuration validation
  3. Gmail – Send test email, check headers

Initial Score: 8.5/10

Issues Found:

  • Reverse DNS (PTR record) not set – Contacted ISP to set PTR record
  • DMARC record could be stronger – Updated to p=quarantine

Final Score: 10/10 🎉

DKIM Validation

Sent email to Gmail, viewed headers:

DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=blueelysium.net;
  s=default; t=1700000000;
  bh=abc123...;
  h=Date:From:To:Subject:From;
  b=xyz789...

Authentication-Results: mx.google.com;
  dkim=pass (2048-bit key) header.d=blueelysium.net header.i=@blueelysium.net

✅ DKIM signing and verification working perfectly!

SPF Validation

Received-SPF: pass (google.com: domain of frank@blueelysium.net designates [SERVER_IP] as permitted sender)

✅ SPF validation passing!

Load Testing

Sent 100 test emails to verify performance:

for i in {1..100}; do
  echo "Test email $i" | mail -s "Load Test $i" frank@blueelysium.net
done

Results:

  • All 100 emails delivered
  • Average processing time: <1 second per email
  • No performance degradation
  • CPU usage stayed under 30%

Part 10: Lessons Learned (The Hard Way)

1. DNS is Everything

Email is heavily dependent on DNS.

Lesson: Set up ALL DNS records before starting, and verify with multiple tools.

2. Fail2Ban is Aggressive

Fail2Ban protected the server beautifully, but also banned me multiple times during testing.

Lesson: Always whitelist internal networks before testing authentication.
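With docker-mailserver the whitelist can live in the config volume; a hedged fragment (the fail2ban-jail.cf filename follows DMS conventions for jail overrides, so check your version's docs, and adjust networks and times to taste):

```
# docker-data/dms/config/fail2ban-jail.cf
[DEFAULT]
# Never ban localhost or the internal VLANs used for testing
ignoreip = 127.0.0.1/8 10.0.6.0/24
bantime  = 1h
findtime = 10m
maxretry = 6
```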

3. DKIM Selector Names Matter

The mismatch between mail and default cost me hours of debugging.

Lesson: Check DKIM KeyTable and SigningTable match actual key filenames.
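That check is scriptable; a sketch that verifies each KeyTable entry points at a key file that actually exists, demonstrated against throwaway files:

```shell
# Verify each KeyTable entry references an existing key file.
# KeyTable format: <record-name> <domain>:<selector>:<path-to-private-key>
check_keytable() {
  keytable="$1"
  keydir="$2"
  while read -r _name value; do
    selector=$(printf '%s' "$value" | cut -d: -f2)
    keyfile=$(printf '%s' "$value" | cut -d: -f3)
    if [ ! -f "$keydir/$(basename "$keyfile")" ]; then
      echo "MISSING: $keyfile (selector '$selector')"
      return 1
    fi
    echo "OK: selector '$selector' -> $(basename "$keyfile")"
  done < "$keytable"
}

# Demo: the keys directory only contains default.private ...
mkdir -p /tmp/dkim-demo/keys && touch /tmp/dkim-demo/keys/default.private
# ... so a KeyTable that references mail.private fails the check:
echo "mail._domainkey.blueelysium.net blueelysium.net:mail:/etc/opendkim/keys/blueelysium.net/mail.private" > /tmp/dkim-demo/KeyTable.bad
echo "default._domainkey.blueelysium.net blueelysium.net:default:/etc/opendkim/keys/blueelysium.net/default.private" > /tmp/dkim-demo/KeyTable.good

check_keytable /tmp/dkim-demo/KeyTable.bad /tmp/dkim-demo/keys || echo "bad KeyTable detected"
check_keytable /tmp/dkim-demo/KeyTable.good /tmp/dkim-demo/keys
```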

4. Password Hashing is Tricky

PostfixAdmin’s SHA512-CRYPT format required specific Dovecot configuration.

Lesson: Always specify default_pass_scheme and test authentication immediately.

5. Logs are Your Friend

Every single issue was solved by reading logs carefully:

docker logs mailserver | tail -100
docker exec mailserver grep "error" /var/log/mail.log

Lesson: When stuck, read the logs. Then read them again.

6. Network Troubleshooting Tools

tcpdump was instrumental in discovering the Fail2Ban issue:

sudo tcpdump -i any port 993 -nn

Lesson: Learn basic network debugging tools before starting.

7. Documentation Saves Time

Every fix I documented helped when similar issues appeared later.

Lesson: Document everything, especially working configurations.

Part 11: The Backup System

With email working, data protection became critical. I built a comprehensive backup system:

What Gets Backed Up:

  • All mailboxes (/var/mail/)
  • Mail server state and configuration
  • PostfixAdmin database
  • DKIM keys
  • SSL certificates

Backup Schedule:

  • Weekly full backups (Sunday 2:00 AM)
  • 4-week retention
  • SHA256 integrity verification

Backup Script Highlights:

# Backup mail data
tar czf mail_data.tar.gz -C /var/mail .

# Backup mail state
tar czf mail_state.tar.gz -C /var/mail-state .

# Backup DKIM keys
tar czf dkim_keys.tar.gz -C /tmp/docker-mailserver/opendkim .

# Backup configuration
tar czf config.tar.gz docker-data/dms/config/
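The SHA256 integrity step mentioned above can be sketched like this (file contents are stand-ins for real archives):

```shell
# Create a checksum manifest alongside the backup archives, then verify it.
backup_dir=$(mktemp -d)
echo "pretend mailbox data" > "$backup_dir/mail_data.tar.gz"
echo "pretend dkim keys"    > "$backup_dir/dkim_keys.tar.gz"

# After each backup run: record checksums for every archive.
( cd "$backup_dir" && sha256sum *.tar.gz > SHA256SUMS )

# During restore testing: verify nothing was corrupted in storage.
( cd "$backup_dir" && sha256sum -c SHA256SUMS )
```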

Restore Testing:

  • Monthly restore verification
  • Complete disaster recovery procedures documented

Full documentation: docs/BACKUP-RESTORE-GUIDE.md

Part 12: Current Status & Performance

Mail Server Stats (After 1 Month)

Reliability:

  • Uptime: 99.9%
  • Email delivery success: 100%
  • Zero deliverability issues
  • No bounces or rejections

Security:

  • Fail2Ban: 15 IPs banned (external attackers)
  • SpamAssassin: 23 spam emails caught
  • ClamAV: 0 viruses detected (thankfully!)
  • SSL/TLS: All connections encrypted

Performance:

  • Email delivery: <1 second
  • IMAP sync: Nearly instant
  • Resource usage: 200MB RAM, <5% CPU
  • Storage: 82KB mail data (mostly test emails)

What’s Working Brilliantly

✅ DKIM signing – 100% verification rate
✅ SPF validation – All emails pass
✅ SSL/TLS encryption – A+ rating
✅ Spam filtering – Excellent catch rate
✅ Fail2Ban protection – Blocking attacks automatically
✅ Automated backups – Weekly, verified backups
✅ Client compatibility – Works with Outlook, Thunderbird, mobile

What Could Be Better

🔸 Monitoring dashboard – Currently manual checks
🔸 Email retention policies – Not automated yet
🔸 Redundancy – Single server, no failover
🔸 Push notifications – Would need additional setup

Part 13: The Bigger Picture

Why Self-Host Email?

Advantages:

  1. Complete Privacy – My data, my server
  2. No Limits – No storage quotas or message limits
  3. Custom Domains – Professional email addresses
  4. Learning Experience – Deep understanding of email infrastructure
  5. Cost – Free after initial setup (besides server costs)

Disadvantages:

  1. Complexity – Requires technical knowledge
  2. Responsibility – You’re the sysadmin
  3. Deliverability – ISPs may be suspicious of self-hosted
  4. Maintenance – Updates and monitoring required
  5. No Support – You’re on your own for troubleshooting

My Verdict: Worth it for the learning and control, but not for everyone.

Skills Gained

Through this project, I learned:

  • Email Protocols: SMTP, IMAP, POP3 in depth
  • Authentication: DKIM, SPF, DMARC implementation
  • Security: Fail2Ban, SSL/TLS, encryption
  • Networking: DNS, iptables, Docker networking, tcpdump
  • Troubleshooting: Log analysis, packet inspection, systematic debugging
  • Database Integration: MySQL backend for virtual users
  • Automation: Backup scripts, monitoring, maintenance
  • Documentation: Writing technical documentation

These skills translate directly to professional DevOps/SysAdmin work.

Part 14: Common Issues & Solutions

Issue 1: Email Goes to Spam

Symptoms: Emails arrive in recipient’s spam folder

Solutions:

  1. Check DKIM signature: opendkim-testkey
  2. Verify SPF record: dig blueelysium.net TXT
  3. Add DMARC record
  4. Set up reverse DNS (PTR record)
  5. Test with mail-tester.com
  6. Warm up IP (gradually increase send volume)

Issue 2: Cannot Receive Email

Symptoms: Outgoing works, but incoming fails

Check:

# Verify MX record
dig blueelysium.net MX

# Check if server is listening
netstat -tulpn | grep ":25"

# Test SMTP connection
telnet mail.blueelysium.net 25

# Check logs
docker logs mailserver | grep "error"

Common Causes:

  • Port 25 blocked by ISP (can’t be fixed easily)
  • Incorrect MX record
  • Firewall blocking port 25
  • Postfix not running

Issue 3: Authentication Failures

Symptoms: Client says “invalid password”

Debug:

# Test authentication directly
docker exec mailserver doveadm auth test frank@blueelysium.net

# Check password query
docker exec mailserver mysql -h db -u pfuser -p -e "SELECT username, password FROM mailbox WHERE username='frank@blueelysium.net'"

# Review Dovecot SQL config
docker exec mailserver cat /etc/dovecot/dovecot-sql.conf.ext

Issue 4: Fail2Ban Blocking Legitimate Users

Symptoms: Sudden connection timeouts after several login attempts

Fix:

# Check banned IPs
docker exec mailserver fail2ban-client status dovecot

# Unban IP
docker exec mailserver fail2ban-client set dovecot unbanip 10.0.6.91

# Add to whitelist (in fail2ban-jail.local)
ignoreip = 10.0.6.0/24

Issue 5: Disk Space Filling Up

Symptoms: Server running out of space

Check:

# Find large directories
du -sh /var/mail/* | sort -h

# Check mail logs
du -sh /var/log/mail/*

# ClamAV database
du -sh /var/lib/clamav

Solutions:

  • Implement mailbox quotas in PostfixAdmin
  • Rotate logs more aggressively
  • Clean old mail from test accounts
  • Set up automated cleanup scripts
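A cleanup script can be as simple as a find with an age filter; a hedged sketch (the maildir path and retention window are placeholders; assumes GNU find):

```shell
# Delete messages older than N days from a test account's maildir.
# Set DRY_RUN=1 to only list what would be removed.
cleanup_old_mail() {
  maildir="$1"; days="$2"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    find "$maildir" -type f -mtime +"$days" -print
  else
    find "$maildir" -type f -mtime +"$days" -delete
  fi
}

# Example invocation (path as seen inside the mailserver container):
# cleanup_old_mail /var/mail/blueelysium.net/testuser/cur 30
```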

The Webmail Question

I’m currently using desktop/mobile email clients, but considering adding webmail:

Options:

  • Roundcube – Feature-rich, but heavy
  • SOGo – Includes calendar/contacts
  • SnappyMail – Modern, fast, lightweight

Consideration: Do I need webmail if desktop/mobile apps work great?

Conclusion

Building a self-hosted mail server was significantly more challenging than the web server, but also more rewarding. Email is mission-critical infrastructure that most people never think about – until they try to run their own!

The Stats

Time Invested:

  • Initial research: 1 day
  • Setup and configuration: 2 days
  • Troubleshooting: 3 days (DNS, DKIM, Fail2Ban, authentication)
  • Documentation: 1 day
  • Total: ~1 week

Was It Worth It?

Absolutely. I now have:

✅ Complete email independence
✅ Deep understanding of email infrastructure
✅ Professional email system (frank@blueelysium.net)
✅ Privacy and data ownership
✅ Impressive DevOps portfolio project
✅ Skills that transfer to professional work

The Most Important Lesson

Email is hard for a reason. All the complexity – DKIM, SPF, DMARC, encryption, spam filtering – exists because email is a global, federated system fighting constant abuse. Self-hosting email isn’t just about running software; it’s about participating in a complex ecosystem built over decades.

But once everything clicks into place and that first email arrives with perfect DKIM/SPF validation, green security checkmarks everywhere, and lands directly in the inbox (not spam!) – that feeling makes every hour of debugging worth it.

Resources That Saved Me

Community Resources:

  • docker-mailserver GitHub Issues (so many similar problems!)
  • Stack Overflow (specific DKIM/Dovecot questions)
  • Reddit r/selfhosted (moral support!)

Final Thoughts

Self-hosting a mail server in 2025 is an anachronism. Gmail, Outlook.com, and other hosted services are free, reliable, and require zero maintenance. Most sysadmins will tell you it’s not worth the effort.

But for the learning experience alone, I’m glad I did it. I now understand email at a level most folks never will. When someone mentions DKIM or SPF, I don’t just know what they are – I’ve configured them, debugged them, and watched them work in production.

Plus, there’s something deeply satisfying about having frank@blueelysium.net as my email address, running on hardware I control, with no third-party reading my messages or selling my data.

Would I recommend this to others?

  • If you want to learn: Absolutely. It’s educational.
  • If you need reliable email: Probably not. Use Gmail.
  • If you value privacy: Maybe. But be prepared for the commitment.

Email is the last piece of self-hosted infrastructure I needed. Combined with the WordPress site from Part 1, I now have a complete, production-ready platform for web and email, entirely under my control.

The journey from “How hard can email be?” to “Wow, email is complex!” taught me more than a dozen courses could have.


Thanks for reading! Questions? Want to share your own self-hosting journey? Reach out at frank@blueelysium.net (yes, it works! 😄)


Published: November 2025
Author: Frank @ BlueElysium
Series: Building a Self-Hosted Web & Mail Server
Part: 2 of 2

Read Part 1: Building the Web Server Foundation

Repository: Full configuration files and documentation available
Documentation: See docs/ directory for detailed guides


Building BlueElysium: Part 1 – The Web Server Foundation

A journey from concept to production: Building a self-hosted WordPress website with Docker


Introduction

When I decided to build my own self-hosted website and email infrastructure, I knew I wanted complete control over my data, flexibility to customize everything, and a learning experience that would deepen my understanding of modern web technologies. This is the story of building BlueElysium – a fully containerized, production-ready web and mail server.

In this first part, I’ll walk you through building the web server foundation: WordPress, MySQL, Nginx reverse proxy, and automated SSL certificate management.

The Vision

Goal: Create a self-hosted WordPress website with:

  • Automated SSL certificate management
  • Production-grade security
  • Easy maintenance and updates
  • Scalable architecture for future expansion
  • Complete data ownership

Technology Stack:

  • Docker & Docker Compose for containerization
  • WordPress with PHP 8.2-FPM
  • MySQL 8.0 for database
  • Nginx as reverse proxy and web server
  • Let’s Encrypt for SSL/TLS certificates via Certbot

Part 1: The Infrastructure Design

Why Docker?

I chose Docker for several compelling reasons:

  1. Isolation: Each service runs in its own container with defined resources
  2. Reproducibility: The entire stack can be rebuilt from docker-compose.yml
  3. Portability: Easy to migrate to another server if needed
  4. Updates: Update individual services without affecting others
  5. Scalability: Add new services (like email) without disrupting existing ones

Architecture Overview

Internet → Ports 80/443 → Nginx (webserver) → WordPress (PHP-FPM) → MySQL
                            ↓
                     Let's Encrypt (certbot)

The architecture consists of four core containers:

  1. MySQL Database – Backend storage for WordPress
  2. WordPress – The CMS with PHP-FPM
  3. Nginx (webserver) – Reverse proxy with SSL termination
  4. Certbot – Automated SSL certificate management

Part 2: Setting Up the Foundation

Step 1: Project Structure

I organized the project with persistent data separated from container definitions:

BlueElysium/
├── docker-compose.yml          # Container orchestration
├── .env                        # Sensitive configuration
├── docker-data/                # Persistent data volumes
│   ├── wordpress/
│   │   └── html/              # WordPress files
│   ├── mysql/                 # Database files
│   └── certbot/               # SSL certificates
└── nginx-conf/                # Nginx configuration
    └── default.conf           # Reverse proxy config

Step 2: The Database Container

MySQL is the heart of the WordPress installation. Here’s what I learned about configuring it:

Key Decisions:

  • Used MySQL 8.0 for better performance and security features
  • Created separate databases for WordPress and future mail server
  • Configured for utf8mb4 to support all Unicode characters
  • Set up health checks to ensure database is ready before WordPress starts

Important Configuration:

db:
    image: mysql:8.0
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_WPBE_DATABASE}
      - MYSQL_USER=${MYSQL_WPBE_USER}
      - MYSQL_PASSWORD=${MYSQL_WPBE_PASSWORD}
    volumes:
      - dbdata:/var/lib/mysql
      - ./docker/provision/mysql/init:/docker-entrypoint-initdb.d
    command: '--default-authentication-plugin=mysql_native_password'
    #command: '--default-authentication-plugin=caching_sha2_password'
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 20s
      retries: 4
    networks:
      - app-network

Lesson Learned: Always use health checks! This prevents WordPress from trying to connect before MySQL is ready, eliminating those frustrating “Error establishing database connection” messages during startup.

Step 3: WordPress Container

WordPress runs as a separate container with PHP-FPM, communicating with Nginx via FastCGI.

Key Configurations:

wordpress:
    depends_on:
      db:
        condition: service_healthy
    image: wordpress:6.4.3-fpm-alpine
    container_name: wordpress
    restart: unless-stopped
    env_file: .env
    environment:
      - WORDPRESS_DB_HOST=db:3306
      - WORDPRESS_DB_USER=${MYSQL_WPBE_USER}
      - WORDPRESS_DB_PASSWORD=${MYSQL_WPBE_PASSWORD}
      - WORDPRESS_DB_NAME=${MYSQL_WPBE_DATABASE}
    volumes:
      - wordpress:/var/www/html
    healthcheck:
      test: ["CMD-SHELL", "php-fpm -t || exit 1"]
      timeout: 10s
      retries: 4
      interval: 20s
    networks:
      - app-network

Why PHP-FPM?

  • Better performance than Apache mod_php
  • Separates web server (Nginx) from PHP processing
  • More granular resource control
  • Industry standard for production WordPress

Challenge: Getting WordPress and Nginx to communicate properly took some trial and error. The key was ensuring they shared the WordPress files volume and using the correct FastCGI parameters.

Step 4: Nginx Reverse Proxy

Nginx handles incoming HTTP/HTTPS requests and routes them to WordPress.

Configuration Highlights:

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        server_name blueelysium.net www.blueelysium.net;

        location ~ /.well-known/acme-challenge {
                allow all;
                root /var/www/html;
        }
...

Critical Lessons:

  1. Shared Volumes: Nginx needs access to WordPress files for static assets
  2. FastCGI Path: Must use correct SCRIPT_FILENAME path
  3. Security Headers: Always include HSTS and other security headers
  4. HTTP/2: Significantly improves performance for modern browsers

Step 5: SSL Certificate Automation

Let’s Encrypt provides free SSL certificates, but they expire every 90 days. Automation is essential.

Certbot Container:

certbot:
  image: certbot/certbot:latest
  volumes:
    - ./docker-data/certbot/conf:/etc/letsencrypt
    - ./docker-data/certbot/www:/var/www/certbot
  entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"

Initial Certificate Request:

docker compose run --rm certbot certonly \
  --webroot \
  --webroot-path=/var/www/certbot \
  --email admin@blueelysium.net \
  --agree-tos \
  --no-eff-email \
  -d blueelysium.net \
  -d www.blueelysium.net

The Chicken-and-Egg Problem:

  • Nginx needs certificates to start with SSL
  • Certbot needs Nginx running to verify domain ownership

Solution:

  1. Start with HTTP-only Nginx configuration
  2. Request certificates
  3. Update Nginx config to use SSL
  4. Restart Nginx

Pro Tip: Use the webroot method instead of standalone. This allows Nginx to keep running during renewals.
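The gating step of that bootstrap can be scripted. A minimal sketch, assuming the certificate volume paths from the compose snippet above:

```shell
# Only switch Nginx from the HTTP-only config to the SSL config once
# both certificate files actually exist on disk.
certs_ready() {
  dir="$1"
  [ -f "$dir/fullchain.pem" ] && [ -f "$dir/privkey.pem" ]
}

if certs_ready ./docker-data/certbot/conf/live/blueelysium.net; then
  echo "certificates present - enable the SSL server block and reload Nginx"
else
  echo "no certificates yet - start Nginx with the HTTP-only config"
fi
```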

Part 3: Environment Variables & Security

The .env File

Never hardcode secrets! I used environment variables for all sensitive data:

# Domain Configuration
DOMAIN=blueelysium.net
CERTBOT_EMAIL=admin@blueelysium.net

# MySQL Configuration
MYSQL_ROOT_PASSWORD=SecureRootPassword123!
MYSQL_WPBE_DATABASE=mydb
MYSQL_WPBE_USER=myuser
MYSQL_WPBE_PASSWORD=SecureWPPassword456!

Security Practices:

  1. Never commit .env to git – Add to .gitignore immediately
  2. Use strong passwords – 20+ characters, mixed case, numbers, symbols
  3. Different passwords – Each service gets unique credentials
  4. Backup .env securely – Store encrypted copy off-server
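One way to satisfy practices 2 and 3 is to generate each credential randomly instead of inventing it. A sketch, assuming openssl is installed (24 random bytes base64-encode to a 32-character password):

```shell
# Generate a unique random secret per service...
MYSQL_ROOT_PASSWORD=$(openssl rand -base64 24)
MYSQL_WPBE_PASSWORD=$(openssl rand -base64 24)

# ...write them into .env, and make the file readable only by its owner.
printf 'MYSQL_ROOT_PASSWORD=%s\nMYSQL_WPBE_PASSWORD=%s\n' \
  "$MYSQL_ROOT_PASSWORD" "$MYSQL_WPBE_PASSWORD" >> .env
chmod 600 .env
```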

Firewall Configuration

I configured UFW (Uncomplicated Firewall) to allow only necessary ports:

sudo ufw allow 80/tcp   # HTTP
sudo ufw allow 443/tcp  # HTTPS
sudo ufw allow 22/tcp   # SSH
sudo ufw enable

Important: Configure SSH access BEFORE enabling the firewall!

Part 4: Deployment & Testing

First Deployment

# Start all services
docker compose up -d

# Check status
docker compose ps

# View logs
docker compose logs -f

What I Watched For:

  1. MySQL health check passing
  2. WordPress connecting to database
  3. Nginx starting without errors
  4. Certificates being issued successfully

Initial WordPress Setup

After containers were running, I accessed https://blueelysium.net and completed WordPress installation:

  1. Selected language
  2. Created admin account (strong password!)
  3. Set site title and tagline
  4. Configured permalink structure (Post name for SEO)

Performance Testing

I used several tools to verify everything was working optimally:

SSL/TLS Configuration:

# Test SSL
curl -I https://blueelysium.net

# Check certificate
openssl s_client -connect blueelysium.net:443 -servername blueelysium.net

Results:

  • ✅ A+ rating on SSL Labs
  • ✅ TLS 1.2 and 1.3 only
  • ✅ Strong cipher suites
  • ✅ HSTS enabled

Page Load Speed:

  • Initial: ~2.5 seconds
  • With caching plugins: ~800ms
  • CDN: under consideration for the future

Part 5: Common Issues & Solutions

Issue 1: “Error Establishing Database Connection”

Problem: WordPress couldn’t connect to MySQL

Root Cause: WordPress container started before MySQL was ready

Solution: Added health check to database and depends_on with condition: service_healthy
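A sketch of that fix in the compose file (the service names db and wordpress are assumptions; match them to your own):

```yaml
services:
  db:
    image: mysql:8.0
    healthcheck:
      # mysqladmin ping succeeds only once MySQL is accepting connections
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5

  wordpress:
    depends_on:
      db:
        condition: service_healthy   # wait for the health check, not just startup
```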

Issue 2: 502 Bad Gateway

Problem: Nginx showed 502 error when accessing WordPress

Root Cause: FastCGI configuration incorrect

Solution:

fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

Must use $document_root rather than a hardcoded path, so the parameter stays correct wherever the volume is mounted.

Issue 3: Static Assets Not Loading

Problem: Images and CSS files returned 404

Root Cause: Nginx didn’t have access to WordPress files

Solution: Shared WordPress volume with Nginx:

webserver:
  volumes:
    - ./docker-data/wordpress/html:/var/www/html:ro

Issue 4: Certificate Renewal Failed

Problem: Let’s Encrypt couldn’t verify domain ownership

Root Cause: Nginx blocked /.well-known/acme-challenge/ path

Solution: Added specific location block for ACME challenges:

location /.well-known/acme-challenge/ {
    root /var/www/certbot;
}

Part 6: Optimization & Fine-Tuning

WordPress Optimizations

Plugins Installed:

  • WP Super Cache – Page caching for better performance
  • Wordfence Security – Security scanning and firewall
  • UpdraftPlus – Additional backup option
  • Yoast SEO – Search engine optimization

Configuration Tweaks:

// wp-config.php additions
define('WP_CACHE', true);
define('COMPRESS_CSS', true);
define('COMPRESS_SCRIPTS', true);
define('CONCATENATE_SCRIPTS', true);
define('ENFORCE_GZIP', true);

Nginx Optimizations

Added caching headers for static assets:

location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2|ttf|eot)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}

MySQL Tuning

Adjusted MySQL configuration for better performance:

[mysqld]
innodb_buffer_pool_size = 512M
max_connections = 100
# Note: the query cache was removed in MySQL 8.0; these two lines apply to 5.7 only
query_cache_size = 0
query_cache_type = 0

Part 7: Monitoring & Maintenance

Daily Monitoring

Quick Health Check:

# Check all containers running
docker compose ps

# Check logs for errors
docker compose logs --tail=100 | grep -i error

# Check disk space
df -h

Weekly Tasks

  1. Review WordPress updates – Check dashboard for plugin/theme updates
  2. Check SSL expiry – Certbot should auto-renew, but verify
  3. Review access logs – Check for unusual activity
  4. Database backup – Automated with weekly backup system
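For task 2, the expiry date can be read straight off the live certificate. A sketch, assuming openssl is installed:

```shell
# Print the certificate's notAfter date so you can confirm certbot renewed it.
cert_expiry() {
  host="$1"
  echo | openssl s_client -connect "$host:443" -servername "$host" 2>/dev/null \
    | openssl x509 -noout -enddate
}

cert_expiry blueelysium.net || echo "could not reach host"
```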

Monthly Tasks

  1. Update Docker images – Pull latest stable versions
  2. Security scan – Run Wordfence deep scan
  3. Performance review – Check page load times
  4. Backup verification – Test restore from backup

Lessons Learned

What Went Well

  1. Docker Compose: Made the entire stack manageable and reproducible
  2. Separation of Concerns: Each service in its own container simplified troubleshooting
  3. Environment Variables: Kept secrets out of version control
  4. Health Checks: Prevented startup race conditions
  5. Documentation: Writing everything down saved hours later

What I’d Do Differently

  1. Start with Backups: I added automated backups later; should have been day one
  2. Monitoring Earlier: Wished I’d set up monitoring before going live
  3. Staging Environment: Would have been useful for testing updates
  4. Better Logging: Should have configured centralized logging from the start

Unexpected Challenges

  1. FastCGI Configuration: Took several iterations to get right
  2. Volume Permissions: File ownership between containers required attention
  3. Certificate Initial Setup: The chicken-and-egg problem with SSL
  4. Network Between Containers: Understanding Docker networking took time

The Results

After several days of configuration, testing, and optimization:

Performance:

  • Page load time: < 1 second
  • 99.9% uptime since launch
  • A+ SSL rating

Security:

  • Strong TLS configuration
  • Firewall configured
  • Security plugins active
  • Regular updates

Maintainability:

  • Single docker-compose command to update
  • Automated SSL renewal
  • Automated backups (weekly)
  • Comprehensive documentation

What’s Next?

With the web server foundation solid, the next challenge was email. In Part 2, I’ll cover:

  • Setting up a full-featured mail server with docker-mailserver
  • Configuring SMTP, IMAP, DKIM, SPF, and DMARC
  • Integrating PostfixAdmin for email account management
  • Troubleshooting network and authentication issues
  • Dealing with Fail2Ban blocking legitimate traffic
  • The satisfying moment when the first email arrives

Resources & References

My Repository:

  • Full docker-compose.yml and configurations available
  • Detailed documentation in docs/ directory
  • Backup and restore scripts included

Conclusion

Building a self-hosted WordPress website with Docker was both challenging and incredibly rewarding. I now have:

  • Complete control over my web presence
  • Deep understanding of how modern web infrastructure works
  • Scalable foundation for adding more services (like email!)
  • Skills transferable to professional DevOps work

The best part? Everything is reproducible. If my server dies tomorrow, I can rebuild the entire stack from my docker-compose.yml and restore from backups.

Total time invested: ~3 days for initial setup + ongoing maintenance

Worth it? Absolutely. The learning experience alone was invaluable, and having complete ownership of my data is priceless.

In the next post, I’ll dive into the real challenge: building a production-ready mail server. Email is notoriously difficult to self-host, and I learned that the hard way…


Stay tuned for Part 2: Building the Mail Server – DKIM, Dovecot, and Debugging

Questions or feedback? Feel free to reach out!


Published: November 2025
Author: Frank @ BlueElysium
Series: Building a Self-Hosted Web & Mail Server
Part: 1 of 2

Categories
Web Site Management

Self Hosting Nightmare

So there was a power failure at my home (where this beautiful site is hosted), and when the power came back on, this website wouldn’t come up. I scoured through Nginx configurations and even asked Copilot and Cursor to help me out with some AI troubleshooting.

Where did it turn out the error was? The IP pass-through between my AT&T gateway and my home Ubiquiti router!

The pass-through on the AT&T gateway wasn’t providing the external IP address to my internal router. Typically others suggest using DHCP settings on both the AT&T modem and the Ubiquiti router, but I was forced to use fixed IP addresses because the Ubiquiti router kept getting an IP from the AT&T VLAN. Actual details to follow.

Categories
Uncategorized

CVBA Ocean Beach Men’s A

Good time playing in the Men’s A this weekend in Ocean Beach. Had a great partner in Jordan Morgan (who also brought an amazing cheering gallery).

Categories
Family

Family Reunion

A few pictures from the reunion we had with the extended Evans family.

Categories
Web Site Management

Host Website in Containers

Part two: ISP, Modem / Router and Registration

In part one we focused on building and installing the hardware and software needed to host a website. This article focuses on getting that system onto the internet: the ISP connection, modem/router configuration, and domain registration.

In a residential setting, internet connections are provided by an ISP. The ISP provides a gateway/modem to connect to the internet that includes basic routing functions. Individuals may also install a router to implement more advanced routing rules and network management. The key is to ensure the gateway forwards all traffic to the router, and the router forwards specific ports (80 for HTTP, 443 for HTTPS) to the computer hosting the website.

Make sure to add port forwarding rules so the router passes data on ports 80 and 443 to the computer that is hosting the website.

ISPs may have some other gotchas that make hosting difficult. They may block specific ports: normally they block port 25 to prevent spam, but others go so far as to block port 80, which makes web hosting really difficult. ISPs also normally assign IP addresses via DHCP. If the DHCP lease is short, it makes maintaining DNS really difficult.

The good news is that all the hurdles listed can be mitigated through a call to the ISP.

  1. DHCP renewing all the time?
    • Look at the settings on the modem and increase the renewal period to the maximum value.
    • Purchase a static IP address for a few additional dollars a month.
  2. Port 80 is blocked? Call the ISP’s tech support and they will open the port for you.

Every website needs a web address, which is provisioned by a registrar. There is a solid list of them available, including names like Squarespace and GoDaddy. Prices for web addresses vary; most off-brand names go for $20 per year. Choose a registrar, enter some billing information, and the domain is yours.

Now the web address needs to be associated with the IP address you are hosting from, and this is managed by DNS entries. Normally, the registrar provides an interface for managing these values. The basic entry needed for forwarding traffic to your address is:

Host  Type  Priority  Data
@     A     N/A       <IP Address>
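If you also want the www variant of the address to resolve, a second record is the usual approach. Illustrative values only; your registrar's interface may label the columns differently:

```text
Host  Type   Priority  Data
www   CNAME  N/A       blueelysium.net.
```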

In Part Three we will look at spinning up some containers.

Categories
Web Site Management

Host Website In Containers

Part one: The gear and the software

Just finished hosting this web site utilizing containers to learn a bit more about Docker and Docker Compose. There were several great sites to move me along my way, and then the pure excitement of backing up the existing site and restoring it to the new system.

The equipment: Built this little engine from scratch, my first build. Used the guts of an old system that I purchased years ago: replaced the motherboard, CPU, and RAM while retaining the power supply, fans, and hard drives.

Equipment    Description
RAM          Corsair VENGEANCE LPX DDR4 RAM 32GB (2x16GB) 3200MHz
CPU          Intel Core i5-12400F Desktop Processor
Drives       Samsung 860 EVO 1TB 2.5 Inch SATA III Internal SSD
Motherboard  MSI PRO H610M-G DDR4 Motherboard (mATX, 12th Gen Intel Core, LGA 1700 Socket)

Hardware

The software: I had previously been hosting the system with the software listed below, so why change what ain’t broke? Well, except moving it to containers…

Service             Software
Operating System    Ubuntu
Image Management    Docker
Web Server          nginx
Database            MySQL
Content Management  WordPress

Now we are off to the races. The system is willing to turn on, but the motherboard is flashing indicators that something is wrong. Okay, needed to read through the motherboard specification to find that LED 3 means the RAM is bad. So I powered down the system and more firmly seated the RAM. Bingo, we make it to the BIOS, which in turn seems to recognize everything.

Software

Downloaded Ubuntu to a flash drive and created a bootable USB image. Pointed the BIOS to boot from USB, and we are off to the races with the OS installation and creation of a user account. Now is a good time to get a solid password management tool, because by the end of this you will have accumulated several passwords (check out KeePassXC).

Time to install Docker, because it will help with the management of:
* images – snapshots of the software to be run
* containers – running instances of the software
* volumes – the ability for containers to access a centralized set of data
These are the pieces that will eventually host the web site.

Categories
Family

Remembering Dad

Categories
Discovery

Midjourney v5

Had an absolute blast playing with Midjourney today. This is AI software that lets you build images from a narrative you provide. The new web icon was produced by this software. See below.

I feel this picture pretty well encompasses Blue Elysium. Give the software a try on Discord.

You can also see the work I did. Good, bad and really bad.

Categories
Web Site Management

Word Press Installation

Item                Selection                Notes
Operating System    Ubuntu                   Standard Linux installation.
Database            MySQL                    Free open source database.
Web Server          nginx                    Free open source web server. Yeah, chose this ahead of Apache.
Language            PHP                      Standard language for web programming. WordPress will hide most programming from you, but it is good to know what is going on under the hood.
Content Management  WordPress                This is the application used to build the web site. It manages the assets needed to bring your web site to life.
Certification       Let’s Encrypt / Certbot  This service proves that you are the administrator of your site and provides a certificate so you can serve your site as https://