
Understanding Swap Space: Why It Matters for Memory-Constrained Systems

Date: 2025-11-08
Context: Learning why swap space matters when running out of RAM

The Problem We Encountered

$ free -h
               total        used        free      shared  buff/cache   available
Mem:           909Mi       792Mi        38Mi       1.2Mi       164Mi       116Mi
Swap:             0B          0B          0B

Key observations:

  • Only 116 MiB of the 909 MiB RAM is actually available
  • Swap is 0 B - there is no overflow space at all
  • Any allocation spike larger than ~116 MiB risks an out-of-memory failure

What is RAM (Physical Memory)?

Random Access Memory (RAM) is the primary memory where:

  • Running programs keep their code and working data
  • The kernel keeps disk caches and buffers (the buff/cache column above)

Characteristics:

  • Very fast (nanosecond access times)
  • Volatile - contents are lost on power-off or reboot
  • Limited - only 909 MiB total on this system

What is Swap Space?

Swap is disk space used as "overflow" memory when RAM is full.

Think of it like this:

  • RAM is your desk: fast to reach, but small
  • Swap is a filing cabinet: slower to reach, but much roomier

Technical definition: Swap is a dedicated area on disk (HDD/SSD) that the operating system uses as virtual memory extension.
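
A quick way to see the kernel's own view of this "extension" is to read /proc/meminfo directly; a minimal sketch (the helper names are ours):

```shell
# Read the kernel's view of swap from /proc/meminfo (values are in kB).
swap_total_kb() { awk '/^SwapTotal:/ {print $2}' /proc/meminfo; }
swap_free_kb()  { awk '/^SwapFree:/  {print $2}' /proc/meminfo; }

echo "Swap total: $(swap_total_kb) kB, free: $(swap_free_kb) kB"
```

On the system from the free -h output above, both numbers would be 0.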

How Swap Works

Memory Pressure Scenario

  1. System has 909 MB RAM, all programs want 1.2 GB total

  2. Without Swap:

    Process A: 300 MB ✓
    Process B: 400 MB ✓
    Process C: 200 MB ✓ (RAM full - 900 MB used)
    Process D: 300 MB ✗ FAIL
    
    Result: Process D cannot start
            OR: OOM Killer terminates existing process
    
  3. With Swap (2 GB):

    Process A: 300 MB (RAM) ✓
    Process B: 400 MB (RAM) ✓
    Process C: 150 MB (RAM) + 50 MB (Swap) ✓
    Process D: 59 MB (RAM) + 241 MB (Swap) ✓
    
    Result: All processes run (slower, but don't crash)
    

The Swapping Process

When RAM fills up:

  1. Kernel identifies "cold" pages (memory not recently accessed)
  2. Writes them to swap space on disk
  3. Frees that RAM for new allocations
  4. Page fault occurs when program needs swapped data
  5. Kernel swaps it back into RAM (might swap something else out)
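
The kernel exposes this activity as cumulative counters in /proc/vmstat; watching them climb is a direct way to see steps 2 and 5 happening. A minimal sketch (the helper names are ours):

```shell
# pswpin / pswpout are cumulative counts of pages swapped in and out since
# boot; a rising pswpout means the kernel is writing pages to swap right now.
pages_swapped_in()  { awk '$1 == "pswpin"  {print $2}' /proc/vmstat; }
pages_swapped_out() { awk '$1 == "pswpout" {print $2}' /proc/vmstat; }

echo "swapped in: $(pages_swapped_in) pages, out: $(pages_swapped_out) pages"
```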

Performance impact:

  • RAM access: ~100 nanoseconds
  • SSD access: ~100 microseconds (about 1,000x slower than RAM)
  • HDD access: ~10 milliseconds (about 100,000x slower than RAM)
  • Heavy swapping can make the whole system feel frozen (thrashing)

Why Swap Matters for Our Use Case

Scenario: Running npm run build

Build process phases:

Phase 1: Install dependencies
  Memory: 200 MB (manageable)

Phase 2: TypeScript compilation
  Memory: 400 MB (tight but okay)

Phase 3: Next.js optimization (webpack)
  Memory: 600 MB
  + Source maps: 200 MB
  + Minification: 150 MB
  ─────────────────
  Peak: 950 MB ← EXCEEDS 909 MB!

Without swap:

Available RAM: 909 MB
Build needs:   950 MB
Deficit:       -41 MB

Result: OOM Killer terminates the build process
        OR: Build fails with "JavaScript heap out of memory"

With 2 GB swap:

Total virtual memory: 909 MB RAM + 2048 MB Swap = 2957 MB
Build needs:          950 MB
Available:            2957 MB ✓

Result: Build completes (slower during peak, but succeeds)

The OOM Killer

The Out-Of-Memory (OOM) Killer is Linux's last resort when RAM is exhausted and no swap is available.

How it works:

  1. System runs out of memory
  2. No swap available to relieve pressure
  3. Kernel cannot allocate memory for critical operations
  4. OOM Killer activates:
    • Scores each process (based on memory usage, importance)
    • Kills highest-scoring process to free memory
    • Prevents total system freeze
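
The scores from step 4 are visible per process under /proc. A sketch inspecting the current shell's own score (oom_score_adj, range -1000 to 1000, lets you bias the kernel's choice):

```shell
# oom_score: the kernel's current "badness" for a process (higher = killed first).
# oom_score_adj: a user-settable bias; -1000 effectively exempts a process.
pid=$$   # inspect this shell itself
echo "oom_score:     $(cat /proc/$pid/oom_score)"
echo "oom_score_adj: $(cat /proc/$pid/oom_score_adj)"
```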

Example from our scenario:

# Build process is using 600 MB
# Claude process: 260 MB
# Other services: 200 MB
# Total: 1060 MB (exceeds 909 MB)

# OOM Killer might choose:
# - Kill build process (600 MB, newest, high score)
# - Result: Build fails, system survives

Checking OOM kills:

dmesg | grep -i "killed process"
# Example output (the kernel reports sizes in kB):
# Out of memory: Killed process 12345 (node) total-vm:614400kB

Diagnostic reasoning:

  1. Saw free -h output:

    • Available: 116 MB
    • Swap: 0 B
  2. Knew build requirements:

    • Next.js builds are memory-intensive (webpack, terser, etc.)
    • Can spike to 600-1000 MB during optimization
  3. Calculated risk:

    • 116 MB available + no swap = high OOM risk
    • Build might fail or kill other processes
  4. Solution pattern:

    • Swap provides safety buffer
    • Allows temporary memory spikes
    • Prevents catastrophic failures
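
The diagnostic steps above can be sketched as a pre-flight check: compare free virtual memory (RAM plus swap) against an assumed peak requirement. The ~950 MB figure is our estimate for this build, not a universal number:

```shell
# Pre-build check: is MemAvailable + SwapFree enough for the expected peak?
need_mb=950   # assumed peak for this build, from the estimate above
avail_mb=$(awk '/^MemAvailable:/ {print int($2 / 1024)}' /proc/meminfo)
swapfree_mb=$(awk '/^SwapFree:/ {print int($2 / 1024)}' /proc/meminfo)
total_mb=$((avail_mb + swapfree_mb))

if [ "$total_mb" -lt "$need_mb" ]; then
  echo "WARNING: only ${total_mb} MB of virtual memory free - build may be OOM-killed"
else
  echo "OK: ${total_mb} MB free (need ~${need_mb} MB)"
fi
```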

When swap is critical:

  • Small-RAM systems (under ~2 GB, like ours)
  • Workloads with short memory spikes (builds, compilation, batch jobs)
  • Hosts where adding RAM is not an option (small VPS instances)

When swap is less important:

  • Systems with plenty of RAM headroom
  • Latency-sensitive services where swapping would hurt response times
  • Setups where fast failure and restart beats slow degradation

Swap Configuration Guidelines

Size recommendations:

RAM Size     Swap Size        Reasoning
< 2 GB       2x RAM           Need maximum buffer
2-8 GB       = RAM            Balanced approach
8-16 GB      0.5x RAM         Less critical
> 16 GB      2-4 GB or none   Optional safety net

Our case: 909 MB RAM → Recommend 2 GB swap
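
The table above can be encoded as a small helper; a sketch (thresholds in MB, final rounding left to the caller):

```shell
# Suggest a swap size (in MB) from the sizing table above.
suggest_swap_mb() {
  ram_mb=$1
  if   [ "$ram_mb" -lt 2048 ];  then echo $((ram_mb * 2))   # < 2 GB: 2x RAM
  elif [ "$ram_mb" -le 8192 ];  then echo "$ram_mb"         # 2-8 GB: = RAM
  elif [ "$ram_mb" -le 16384 ]; then echo $((ram_mb / 2))   # 8-16 GB: 0.5x RAM
  else                               echo 4096              # > 16 GB: flat safety net
  fi
}

suggest_swap_mb 909   # -> 1818, which we round up to 2 GB
```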

Creating swap on Linux:

# Method 1: Swap file (flexible, can remove later)
sudo fallocate -l 2G /swapfile      # Allocate 2 GB file
sudo chmod 600 /swapfile            # Secure permissions
sudo mkswap /swapfile               # Format as swap
sudo swapon /swapfile               # Activate
sudo swapon --show                  # Verify

# Make permanent (survives reboot):
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Method 2: Swap partition (permanent, faster)
sudo mkswap /dev/sdX               # sdX = your swap partition (e.g. sda2)
sudo swapon /dev/sdX

Removing swap:

sudo swapoff /swapfile             # Deactivate
sudo rm /swapfile                  # Delete file
# Remove from /etc/fstab if made permanent
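
Note that swapoff must copy everything currently in swap back into RAM, so it can stall or fail if RAM is nearly full. A quick check of how much would need to move back:

```shell
# How much swap is currently in use (this is what swapoff copies back to RAM)?
swap_in_use_kb() {
  awk '/^SwapTotal:/ {t = $2} /^SwapFree:/ {f = $2} END {print t - f}' /proc/meminfo
}

echo "$(swap_in_use_kb) kB of swap in use"
```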

Monitoring swap usage:

# Quick check
free -h

# Detailed info
swapon --show

# Which processes using swap
for file in /proc/*/status; do
  awk '/^Name|^VmSwap/ {printf "%s %s ", $2, $3} END {print ""}' "$file" 2>/dev/null
done | sort -k 2 -n -r | head -20

# Swap activity over time
vmstat 1

Swappiness: Fine-Tuning Behavior

Swappiness controls the kernel's tendency to swap pages out of RAM (0-100).

# Check current setting
cat /proc/sys/vm/swappiness
# Default: usually 60

# Meaning:
# 0   = Avoid swap until absolutely necessary
# 10  = Prefer keeping things in RAM
# 60  = Balanced (default)
# 100 = Aggressively swap to keep RAM free

For our scenario:

# Set to 10 (use swap only when needed)
sudo sysctl vm.swappiness=10

# Make permanent:
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf

Why lower swappiness?

  • Keeps interactive processes (shell, editor, SSH session) responsive in RAM
  • Uses swap only as overflow during build-time spikes
  • Avoids unnecessary disk I/O on slower storage

Real-World Analogy

Desk Workspace (RAM) vs Filing Cabinet (Swap)

Scenario: You're working at a small desk

Without filing cabinet (no swap):

  • The desk fills up, and new work simply cannot start
  • To make room, something gets thrown away - and the OOM Killer, not you,
    decides what
  • One oversized project can wreck everything on the desk

With filing cabinet (swap):

  • Rarely-touched papers move to the cabinet, freeing desk space
  • Retrieving them takes a walk across the room, but nothing is lost
  • You can juggle more projects than the desk alone could ever hold

Key Takeaways

1. Memory Hierarchy

CPU Registers (fastest, smallest)
    ↓
CPU Cache (L1, L2, L3)
    ↓
RAM (fast, limited)
    ↓
Swap (slower, extends capacity)
    ↓
Disk (slowest, largest)

2. Swap is NOT free memory

  • Disk is orders of magnitude slower than RAM
  • Swap that is constantly active means the system needs more RAM, not more swap

3. Swap prevents catastrophic failures

  • It absorbs temporary spikes that would otherwise wake the OOM Killer

4. When to add swap

  • Low-RAM systems, spiky workloads (builds), and hosts where adding RAM
    is not an option

5. Monitoring is key

# Watch for swap usage
watch -n 1 free -h

# If swap constantly active = need more RAM
# If swap rarely used = good safety net
# If swap at 100% = severe memory pressure
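
The rules of thumb in the comments above can be turned into a simple check; a sketch with an arbitrary 80% alert threshold:

```shell
# Percentage of swap in use; reports 0 when no swap is configured.
swap_used_pct() {
  awk '/^SwapTotal:/ {t = $2} /^SwapFree:/ {f = $2}
       END {if (t == 0) print 0; else print int((t - f) * 100 / t)}' /proc/meminfo
}

pct=$(swap_used_pct)
if [ "$pct" -ge 80 ]; then
  echo "ALERT: swap ${pct}% used - severe memory pressure"
else
  echo "swap usage: ${pct}%"
fi
```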

Transferable Concepts

1. Virtual Memory

  • The abstraction that lets processes see more memory than physically
    exists; swap is one of its backing stores

2. Paging

  • Memory is managed in fixed-size pages (typically 4 KiB) that can move
    between RAM and disk independently

3. Thrashing

  • When the system spends more time swapping pages in and out than doing
    useful work, and performance collapses

4. Memory Overcommit

  • The kernel promises more memory than it has, betting not everyone will
    use their allocation at once; the OOM Killer is the fallback when the
    bet fails

5. Resource Constraints

  • The same pattern - a small fast resource backed by a large slow one -
    appears in CPU caches, database buffer pools, and CDN edge caches

Real-World Validation: Ahaia Music Build

Date: 2025-11-08

We just proved this theory in practice with the Next.js build.

Without Swap (First Attempt)

$ free -h
Mem:    909Mi total, 792Mi used, 116Mi available
Swap:   0B

$ npm run build
# Result: OOM Killer terminated the build process

With Swap (Second Attempt)

$ sudo fallocate -l 2G /swapfile
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile

$ free -h
Mem:    909Mi total, 582Mi used, 326Mi available
Swap:   2.0Gi total, 0B used

$ npm run build > build.log 2>&1
# Build running...

$ free -h                          # during build
Mem:    909Mi total, 454Mi used, 454Mi available
Swap:   2.0Gi total, 152Mi used  ← SWAP ACTIVELY USED

# Result: ✓ Build completed successfully

Key Observations

  1. Swap was essential: Build consumed 152 MB of swap
  2. Prediction was accurate: We estimated ~950 MB peak, actual usage confirmed this
  3. Build succeeded: Without swap = OOM kill, with swap = success
  4. Performance acceptable: Build completed despite swapping (slower but functional)

Build output:

Route (app)                              Size  First Load JS
┌ ○ /                                   123 B         102 kB
└ ○ /_not-found                         995 B         103 kB
+ First Load JS shared by all          102 kB

○  (Static)  prerendered as static content

This confirms: Swap is not theoretical - it's practical and necessary for resource-constrained builds.

Bottom Line: Swap is like an emergency fund. You hope you don't need it, but you'll be glad it's there when you do. On memory-constrained systems, it's the difference between a slow build and a failed build.