
Network Throttling

Network throttling lets you simulate slow network conditions — 3G mobile connections, spotty WiFi, or high-latency satellite links. Instead of testing your app only on your fast office connection, you can experience exactly what your users see on a crowded subway or in a rural area with poor coverage.

When throttling is active, Ghost artificially slows down all traffic passing through the proxy. Downloads take longer, uploads are restricted, and an initial delay is added to simulate network latency. Your application still works — it just works slowly, revealing problems like missing loading states, timeout handling bugs, excessive data usage, or UI freezes that only appear on slow connections.

Ghost includes 5 common network profiles that you can activate with a single click:

| Preset | Download Speed | Upload Speed | Latency | Simulates |
|---|---|---|---|---|
| EDGE | 240 kbps | 200 kbps | 300 ms | The slowest cellular technology still in use. Pages take 30+ seconds to load. Tests absolute worst-case scenarios. |
| Slow 3G | 400 kbps | 400 kbps | 200 ms | Poor 3G coverage — what users experience in basements, elevators, or congested cell towers. |
| 3G | 750 kbps | 250 kbps | 100 ms | Standard 3G connection. Many users worldwide still rely on this. Good for testing mobile app performance in emerging markets. |
| 4G | 4,000 kbps (4 Mbps) | 3,000 kbps (3 Mbps) | 20 ms | Decent LTE connection. Good for testing that your app performs well on typical mobile connections. |
| WiFi | 10,000 kbps (10 Mbps) | 5,000 kbps (5 Mbps) | 2 ms | Average WiFi. Still significantly slower than a wired connection. Good for testing that lazy loading and pagination work correctly. |

You can also create custom profiles with any values for download speed (kbps), upload speed (kbps), and latency (ms). This is useful for simulating specific conditions reported by users or matched from network monitoring tools.

(Diagram: how throttling is applied to every connection.)

When throttling is active, every new upstream connection the proxy makes is wrapped with rate limiters. Data flowing from the server to the client (downloading) passes through the download limiter. Data flowing from the client to the server (uploading) passes through the upload limiter. Additionally, latency is injected once on the first read from the server, simulating the network round-trip delay.

Ghost uses Go’s golang.org/x/time/rate package to implement token bucket rate limiting — a standard algorithm used in networking. Here’s the intuition:

Imagine a bucket that fills with tokens at a constant rate (determined by the bandwidth setting). Each byte of data requires one token. When data needs to flow through, it “spends” tokens from the bucket. If the bucket is empty (all tokens used), the data has to wait until more tokens accumulate. This naturally restricts the average throughput to the configured bandwidth.

Key parameters:

  • Bandwidth conversion: bytesPerSec = kbps × 1024 / 8. For example, 750 kbps = 96,000 bytes per second. The “kbps” here means kilobits (1 kilobit = 1024 bits), divided by 8 to convert to bytes.
  • Burst size: 32 KB. This allows short bursts of data (like HTTP headers or small responses) to flow through at full speed, while maintaining the target bandwidth for sustained transfers. Without bursting, even tiny protocol messages would be artificially delayed.
  • Chunked waiting: When a single read or write exceeds the burst size (32 KB), the data is split into burst-sized chunks and each chunk waits independently. This prevents the rate limiter from blocking indefinitely on large data transfers.

Each connection gets its own independent pair of rate limiters, so throttling is applied per-connection — a slow download on one connection doesn’t block another connection’s upload.

Network latency (the delay before any data arrives) is injected once per connection on the first Read() call. Ghost uses an atomic.Bool with CompareAndSwap to ensure the latency sleep happens exactly once — even if multiple goroutines read from the same connection concurrently.

Why only once? Real network latency is primarily the round-trip time to establish the connection. Once the connection is established, subsequent data flows without additional latency (just bandwidth constraints). Injecting latency on every read would unrealistically compound the delay.

The active throttle profile is stored as a Go atomic.Value on the proxy server. This means checking whether throttling is active (which happens for every new connection) requires no locks, no mutexes — just an atomic load. On the performance-critical proxy hot path, this is important: throttling adds zero overhead when disabled, and minimal overhead when enabled.

The throttle dropdown appears in Ghost’s status bar at the bottom of the window. It’s a compact popover that opens upward from the status bar.

When throttling is inactive, the control shows a grey gauge icon with “Throttle” text. Click it to open the dropdown.

When a profile is active, the gauge icon turns amber and the needle rotates to indicate throttling. The text shows the profile name and latency (e.g., “3G 100ms” or “Custom 150ms”).

The dropdown offers three choices:

  1. “No Throttle” — click to disable throttling (green dot indicator when selected)
  2. Preset list — each preset shows its name, formatted speed (values ≥1000 kbps display as Mbps, e.g., “4 Mbps”), and latency
  3. “Custom…” — expands to reveal three numeric input fields and an amber “Apply” button that activates the custom profile:
    • Download speed (kbps)
    • Upload speed (kbps)
    • Latency (ms)

Setting or clearing a throttle broadcasts a proxy.throttle WebSocket event to all connected clients:

// When throttle is set
{ "enabled": true, "profile": { "name": "3G", "download_kbps": 750, "upload_kbps": 250, "latency_ms": 100 } }
// When throttle is cleared
{ "enabled": false }
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/v1/proxy/throttle | Get current throttle status. Returns `{ enabled: false }` when inactive, or `{ enabled: true, profile: {...} }` when active. |
| POST | /api/v1/proxy/throttle | Set a throttle profile. Send `{ "preset": "3g" }` for a preset, or `{ "download_kbps": 500, "upload_kbps": 200, "latency_ms": 150 }` for custom values. Custom profiles are automatically named “Custom”. |
| DELETE | /api/v1/proxy/throttle | Clear the throttle, returning the proxy to full speed. |
| GET | /api/v1/proxy/throttle/presets | List all available preset profiles with their bandwidth and latency values. |

Test loading states — Enable Slow 3G and navigate through your app. Do all pages show loading spinners? Are there skeleton screens? Or does the UI just freeze with no feedback?

Test timeout handling — Enable EDGE (the slowest preset) and trigger API-heavy operations. Does your app show a timeout error after a reasonable wait, or does it hang forever?

Test progressive loading — Enable 3G and load a page with images. Do images load lazily? Are low-resolution placeholders shown first? Or does the page wait until every image is fully downloaded?

Test speed-change recovery — Enable throttling, start an operation, then disable throttling midway. Does your app handle the sudden change in network speed gracefully?

Reproduce user-reported issues — A user reports “the app freezes when I add items to cart on the train.” Enable a custom profile with 3G speeds and 200ms latency to simulate their conditions and reproduce the issue.

Performance budgets — Enable 3G and measure how long key user flows take. If checkout takes 15 seconds on 3G, that’s a problem worth fixing before launch.