Load Testing Your Applications

Reverend Philip · Dec 17, 2025

Verify your application handles traffic at scale. Learn load testing tools, realistic test scenarios, and performance benchmarking.

Load testing reveals how your application behaves under stress before real users discover its limits. This guide covers load testing strategies, tools, and interpreting results to build applications that scale reliably.

Why Load Test?

Discover Before Users Do

Production failures are expensive:

  • Lost revenue during downtime
  • Damaged reputation
  • Emergency debugging under pressure

Load testing reveals:

  • Maximum concurrent users
  • Response time degradation patterns
  • Resource bottlenecks
  • Breaking points

Types of Performance Tests

Test Type         Purpose                  Duration
Load Test         Normal expected load     Minutes to hours
Stress Test       Beyond normal capacity   Until failure
Spike Test        Sudden traffic surge     Short bursts
Soak Test         Sustained load           Hours to days
Breakpoint Test   Find maximum capacity    Incremental increase
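
In practice these test types differ mainly in how load ramps over time. Using k6 (covered in the next section), a spike test and a soak test might be sketched like this; the VU counts and durations are illustrative assumptions, not recommendations:

// spike-test.js: sudden surge, then recovery
export const options = {
    stages: [
        { duration: '1m', target: 50 },    // normal load
        { duration: '30s', target: 500 },  // sudden surge
        { duration: '2m', target: 50 },    // recovery
    ],
};

// A soak test uses the same structure with a long middle stage,
// e.g. { duration: '4h', target: 100 }, to surface memory leaks and
// connection exhaustion that only show up over time.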

Load Testing Tools

k6 (Recommended)

A modern, developer-friendly load testing tool written in Go; tests are scripted in JavaScript:

// load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
    stages: [
        { duration: '2m', target: 100 },  // Ramp up
        { duration: '5m', target: 100 },  // Stay at 100 users
        { duration: '2m', target: 0 },    // Ramp down
    ],
    thresholds: {
        http_req_duration: ['p(95)<500'],  // 95% under 500ms
        http_req_failed: ['rate<0.01'],    // Error rate under 1%
    },
};

export default function () {
    const response = http.get('https://myapp.com/api/products');

    check(response, {
        'status is 200': (r) => r.status === 200,
        'response time < 500ms': (r) => r.timings.duration < 500,
    });

    sleep(1);
}

# Run test
k6 run load-test.js

# Run with more VUs
k6 run --vus 200 --duration 10m load-test.js

Apache JMeter

Industry-standard tool with a GUI; test plans are stored as XML:

<!-- test-plan.jmx -->
<ThreadGroup>
    <stringProp name="ThreadGroup.num_threads">100</stringProp>
    <stringProp name="ThreadGroup.ramp_time">60</stringProp>
    <stringProp name="ThreadGroup.duration">300</stringProp>
</ThreadGroup>

Artillery

Node.js-based, YAML configuration:

# artillery.yml
config:
  target: "https://myapp.com"
  phases:
    - duration: 120
      arrivalRate: 10
      name: "Warm up"
    - duration: 300
      arrivalRate: 50
      name: "Sustained load"

scenarios:
  - name: "Browse products"
    flow:
      - get:
          url: "/api/products"
      - think: 2
      - get:
          url: "/api/products/{{ $randomNumber(1, 100) }}"

artillery run artillery.yml

Locust

Python-based with real-time web UI:

# locustfile.py
import random

from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 3)

    @task(3)
    def view_products(self):
        self.client.get("/api/products")

    @task(1)
    def view_product_detail(self):
        product_id = random.randint(1, 100)
        self.client.get(f"/api/products/{product_id}")

    def on_start(self):
        # Login once per user
        self.client.post("/api/login", json={
            "email": "test@example.com",
            "password": "password"
        })

locust -f locustfile.py --host=https://myapp.com

Realistic Test Scenarios

User Journey Simulation

// k6: Realistic e-commerce flow
import http from 'k6/http';
import { check, group, sleep } from 'k6';

export default function () {
    group('Browse', function () {
        http.get('https://myapp.com/');
        sleep(2);
        http.get('https://myapp.com/api/products?category=electronics');
        sleep(3);
    });

    group('Product Detail', function () {
        const productId = Math.floor(Math.random() * 100) + 1;
        http.get(`https://myapp.com/api/products/${productId}`);
        sleep(5);
    });

    group('Add to Cart', function () {
        http.post('https://myapp.com/api/cart', JSON.stringify({
            product_id: 42,
            quantity: 1
        }), {
            headers: { 'Content-Type': 'application/json' }
        });
        sleep(2);
    });

    group('Checkout', function () {
        // Only 10% proceed to checkout
        if (Math.random() < 0.1) {
            http.post('https://myapp.com/api/checkout', JSON.stringify({
                payment_method: 'card'
            }), {
                headers: { 'Content-Type': 'application/json' }
            });
        }
    });
}

Data-Driven Testing

// k6: Load test data from file
import { SharedArray } from 'k6/data';
import http from 'k6/http';

const users = new SharedArray('users', function () {
    return JSON.parse(open('./test-users.json'));
});

export default function () {
    const user = users[__VU % users.length];

    const loginRes = http.post('https://myapp.com/api/login', JSON.stringify({
        email: user.email,
        password: user.password
    }), {
        headers: { 'Content-Type': 'application/json' }
    });

    const token = loginRes.json('token');

    http.get('https://myapp.com/api/profile', {
        headers: { 'Authorization': `Bearer ${token}` }
    });
}

Key Metrics

Response Time Metrics

Avg Response Time:     150ms    # Average (misleading)
Median (p50):          120ms    # Half of requests faster
p90:                   250ms    # 90% of requests faster
p95:                   400ms    # 95% of requests faster
p99:                   850ms    # 99% of requests faster
Max:                   2500ms   # Worst case

Focus on percentiles, not averages. The p95 and p99 show what slow users experience.
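
If you use k6, you can have the end-of-test summary report exactly these percentiles via the summaryTrendStats option:

// k6: report the percentiles that matter in the summary output
export const options = {
    summaryTrendStats: ['avg', 'med', 'p(90)', 'p(95)', 'p(99)', 'max'],
};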

Throughput Metrics

Requests/second:       500 rps
Successful requests:   49,500
Failed requests:       500
Error rate:            1%

Resource Metrics

Monitor during tests:

  • CPU utilization
  • Memory usage
  • Disk I/O
  • Network bandwidth
  • Database connections
  • Queue depth

Analyzing Results

Response Time Degradation

Load:    Response Time:
50 VU    100ms
100 VU   150ms
200 VU   300ms
300 VU   800ms    <- Performance cliff
400 VU   2500ms   <- Unacceptable
500 VU   Timeout  <- Breaking point
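
To find that cliff systematically rather than by eyeballing a table, a breakpoint-style k6 test can ramp load in increments and abort once error or latency thresholds are crossed. A minimal sketch; the targets and limits here are illustrative:

// breakpoint-test.js: step up load until thresholds abort the run
export const options = {
    stages: [
        { duration: '2m', target: 100 },
        { duration: '2m', target: 200 },
        { duration: '2m', target: 300 },
        { duration: '2m', target: 400 },
        { duration: '2m', target: 500 },
    ],
    thresholds: {
        // abortOnFail stops the test at the breaking point instead of running to completion
        http_req_failed: [{ threshold: 'rate<0.05', abortOnFail: true }],
        http_req_duration: [{ threshold: 'p(95)<1000', abortOnFail: true }],
    },
};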

Identifying Bottlenecks

Symptom:                    Likely Cause:
CPU at 100%                 Application code, no caching
Memory growing              Memory leaks, no limits
Database CPU high           Missing indexes, N+1 queries
Disk I/O high               Too much logging, no SSD
Connection pool exhausted   Pool too small, slow queries

Database Analysis

-- Find slow queries during load test
-- (on PostgreSQL 13+ these columns are named mean_exec_time / total_exec_time)
SELECT query, calls, mean_time, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 20;

-- Check connection count
SELECT count(*) FROM pg_stat_activity;

-- Lock contention
SELECT * FROM pg_locks WHERE NOT granted;

Laravel-Specific Testing

Testing Authenticated Routes

// k6: Laravel Sanctum authentication
import http from 'k6/http';

export function setup() {
    // Get CSRF token
    const csrfRes = http.get('https://myapp.com/sanctum/csrf-cookie');

    // Login
    const loginRes = http.post('https://myapp.com/login', JSON.stringify({
        email: 'test@example.com',
        password: 'password'
    }), {
        headers: {
            'Content-Type': 'application/json',
            'X-XSRF-TOKEN': csrfRes.cookies['XSRF-TOKEN'][0].value
        }
    });

    // Convert to simple name -> value pairs that k6 can send back as request cookies
    const cookies = {};
    for (const name in loginRes.cookies) {
        cookies[name] = loginRes.cookies[name][0].value;
    }

    return { cookies: cookies };
}

export default function (data) {
    http.get('https://myapp.com/api/user', {
        cookies: data.cookies
    });
}

Testing Queue Performance

// routes/console.php: create many jobs for queue testing
Artisan::command('test:queue-load {count=1000}', function (int $count) {
    for ($i = 0; $i < $count; $i++) {
        ProcessOrder::dispatch(Order::find(rand(1, 100)));
    }
    $this->info("Dispatched {$count} jobs");
});

# Monitor queue during test
php artisan queue:work --verbose &
watch -n 1 'php artisan queue:monitor redis:default'

CI/CD Integration

GitHub Actions

# .github/workflows/load-test.yml
name: Load Test

on:
  schedule:
    - cron: '0 2 * * *'  # Nightly
  workflow_dispatch:

jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install k6
        run: |
          curl -L https://github.com/grafana/k6/releases/download/v0.47.0/k6-v0.47.0-linux-amd64.tar.gz | tar xz
          sudo mv k6-v0.47.0-linux-amd64/k6 /usr/local/bin/

      - name: Run load test
        run: k6 run --out json=results.json tests/load/smoke.js

      # No separate threshold check is needed: k6 exits with a non-zero
      # status when thresholds are crossed, failing the run step above.

      - name: Upload results
        if: always()  # keep results even when thresholds fail the job
        uses: actions/upload-artifact@v4
        with:
          name: load-test-results
          path: results.json
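
The workflow references tests/load/smoke.js, which isn't shown above; a minimal version might be a single virtual user with tight thresholds, just enough to catch obvious regressions. A sketch, where the health-check URL is an assumption:

// tests/load/smoke.js: hypothetical minimal smoke test for CI
import http from 'k6/http';
import { check } from 'k6';

export const options = {
    vus: 1,
    duration: '1m',
    thresholds: {
        http_req_duration: ['p(95)<500'],
        http_req_failed: ['rate<0.01'],
    },
};

export default function () {
    const res = http.get('https://myapp.com/api/health');  // assumed endpoint
    check(res, { 'status is 200': (r) => r.status === 200 });
}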

Threshold Gates

// k6: Strict thresholds for CI
export const options = {
    thresholds: {
        http_req_duration: [
            'p(50)<200',   // Median under 200ms
            'p(95)<500',   // 95th percentile under 500ms
            'p(99)<1000',  // 99th percentile under 1s
        ],
        http_req_failed: ['rate<0.01'],  // Less than 1% errors
        http_reqs: ['rate>100'],          // At least 100 rps
    },
};

Performance Baselines

Establishing Baselines

// Baseline test: Run weekly, compare results
export const options = {
    scenarios: {
        baseline: {
            executor: 'constant-vus',
            vus: 50,
            duration: '5m',
        },
    },
};

Tracking Over Time

# Store results with timestamp
k6 run --out json=results/$(date +%Y%m%d).json load-test.js

# Compare with previous
python compare-results.py results/20240115.json results/20240122.json
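
If parsing the raw --out json stream proves unwieldy, k6's handleSummary() hook offers an alternative: each run can write a compact end-of-test summary that is easier to diff between dates. A sketch, where the output path is an assumption:

// k6: write a dated end-of-test summary for later comparison
export function handleSummary(data) {
    const date = new Date().toISOString().slice(0, 10);  // e.g. 2024-01-15
    return {
        [`results/summary-${date}.json`]: JSON.stringify(data, null, 2),
        // Note: defining handleSummary replaces k6's default console summary;
        // add a 'stdout' key if you still want it printed.
    };
}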

Common Issues and Solutions

Connection Limits

# nginx.conf - Increase worker connections
events {
    worker_connections 4096;
}

http {
    # Increase keepalive
    keepalive_timeout 65;
    keepalive_requests 1000;
}

Database Connection Pool

// config/database.php
// Note: 'pool' settings like this apply to long-running Swoole-based stacks
// (e.g. Hyperf); stock PHP-FPM Laravel has no built-in pool, so connection
// limits are tuned at the database and PHP-FPM level instead.
'mysql' => [
    'pool' => [
        'min_connections' => 10,
        'max_connections' => 100,
    ],
],

PHP-FPM Tuning

; php-fpm.conf
pm = dynamic
pm.max_children = 50
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20
pm.max_requests = 500

Redis Connection Issues

// config/database.php
'redis' => [
    'client' => 'phpredis',
    'default' => [
        'persistent' => true,
        'persistent_id' => 'myapp',
        'read_timeout' => 60,
    ],
],

Best Practices

  1. Test in production-like environment - Same hardware, data volume, network
  2. Use realistic data - Don't test with empty database
  3. Simulate real user behavior - Think times, varied paths
  4. Monitor everything - Application, database, network, queues
  5. Test regularly - Catch regressions early
  6. Start small - Smoke tests before full load tests
  7. Document findings - Track improvements over time
  8. Test failure modes - What happens when dependencies fail?

Conclusion

Load testing is essential for production confidence. Start with simple smoke tests, establish baselines, and gradually increase sophistication. Use k6 or similar modern tools for developer-friendly testing, integrate tests into CI/CD, and always test in production-like environments. The goal is discovering limits and bottlenecks before your users do.
