
Error Tracking (Sentry)

Ghost uses Sentry to catch and track errors across all three layers of its architecture — the React frontend, the Go backend, and the browser extension. Think of it as a safety net that catches every crash, every failed API call, and every unhandled exception, then sends it to a central dashboard where developers can investigate what went wrong, how often it happens, and which users are affected.

The system is designed so that no error goes unnoticed, even in layers that can’t talk to Sentry directly (like the browser extension, which has no Sentry SDK).


The error flow: errors from three different sources converge on a single Sentry instance. The React frontend and Go backend talk to Sentry directly using their respective SDKs. The browser extension cannot load the Sentry SDK (content scripts and service workers have limited capabilities), so it sends errors to the Go backend via an HTTP endpoint, and the backend relays them to Sentry with appropriate tags. This relay design ensures every error from every layer ends up in the same Sentry dashboard.


Sentry is configured via the `[telemetry]` section in `~/.ghost/config.toml`:

```toml
[telemetry]
sentry_dsn = "https://...@sentry.io/..."
```

When the DSN is empty (the default), Sentry is completely disabled across all layers — no SDK is initialized, no errors are sent anywhere, and no network calls are made. This means Ghost works perfectly fine without Sentry; it’s an optional observability layer.

The Sentry DSN is stripped from settings exports to prevent accidental sharing of the organization’s Sentry project credentials.


The Go backend uses `github.com/getsentry/sentry-go` and captures errors at four integration points: top-level panics, background goroutine panics, API handler panics, and all 5xx HTTP responses.

| Setting | Value | Description |
| --- | --- | --- |
| DSN | From `config.toml` | If empty, initialization is skipped entirely |
| Release | `ghost@0.1.0` | Hardcoded version string — identifies which release produced the error |
| Environment | `production` | Always "production" (Ghost doesn't have staging environments) |
| Sample rate | 1.0 (100%) | Every error is captured — nothing is sampled or dropped |
| Tracing | Disabled | Performance tracing is not used on the backend |
| Stack traces | Attached | Full Go stack traces are included with every error event |

Integration Point 1: Top-Level Panic Recovery

In main(), a deferred function catches any panic that bubbles up to the top level — the kind of crash that would normally kill the entire application.

Flow:

  1. The deferred function calls recover() to catch the panic
  2. Sends the panic value to Sentry via sentry.CurrentHub().Recover(r)
  3. Flushes Sentry with a 2-second timeout (to ensure the error is sent before the process exits)
  4. Re-panics — this is intentional. The error has been recorded, but the process still exits with a crash rather than continuing in a potentially corrupted state.
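
The steps above can be sketched with the standard library alone. Here `captureAndFlush` is an illustrative stand-in for the real `sentry.CurrentHub().Recover(r)` and `sentry.Flush(2 * time.Second)` calls, and the sketch returns the recovered value instead of re-panicking so the behavior is observable:

```go
package main

import (
	"fmt"
	"time"
)

// captureAndFlush stands in for sentry.CurrentHub().Recover(r)
// followed by sentry.Flush(timeout); the name is illustrative.
func captureAndFlush(r interface{}, timeout time.Duration) {
	fmt.Printf("captured panic: %v (flush timeout %s)\n", r, timeout)
}

// run wraps the real work with the same top-level recovery pattern
// that Ghost installs as a deferred function in main().
func run(work func()) (recovered interface{}) {
	defer func() {
		if r := recover(); r != nil {
			captureAndFlush(r, 2*time.Second)
			recovered = r
			// Ghost's deferred function would re-panic here (panic(r))
			// so the process still exits with a crash after reporting.
		}
	}()
	work()
	return nil
}

func main() {
	fmt.Println("recovered:", run(func() { panic("boom") }))
}
```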

Integration Point 2: safeGo — Background Goroutine Protection

All background goroutines (long-running tasks that operate independently from HTTP request handling) are wrapped in safeGo(logger, name, fn), which provides panic recovery for each goroutine independently.

How it works: safeGo wraps the given function in a goroutine with its own recover(). If the goroutine panics, the error is logged with the goroutine’s name, sent to Sentry, and flushed — but crucially, only that one goroutine dies. The rest of the application keeps running.

Why this matters: Without safeGo, a panic in any background goroutine would crash the entire Ghost process. With it, a failure in (for example) the WAL checkpoint timer doesn’t kill the proxy or the API server.

All four background goroutines protected by safeGo:

| Name | Purpose |
| --- | --- |
| `findings-drain` | Drains the security findings channel and writes them to the database, then broadcasts via WebSocket |
| `tag-updates-drain` | Drains the security tag update channel and writes tag updates to the database |
| `auto-purger` | Runs every 5 minutes to clean up old flows based on age and count limits |
| `wal-checkpoint` | Runs every 30 minutes to checkpoint the SQLite WAL (Write-Ahead Log) file, keeping the database file compact |

Integration Point 3: API Recovery Middleware

Every API request passes through a recovery middleware that catches panics in HTTP handlers. This is different from the top-level recovery — it catches panics within individual request handlers while keeping the API server running.

When a handler panics:

  1. The middleware catches the panic with recover()
  2. Captures a full Go stack trace via debug.Stack()
  3. Sends to Sentry with detailed context tags
  4. Returns HTTP 500 “internal server error” to the client
  5. The API server continues serving other requests normally

Tags added to the Sentry event:

| Tag/Extra | Value | Purpose |
| --- | --- | --- |
| `layer` | `"api"` | Identifies this as an API-layer error |
| `path` | The URL path (e.g., `/api/v1/flows`) | Which endpoint crashed |
| `method` | The HTTP method (e.g., GET, POST) | What operation was attempted |
| `stack` (extra) | Full Go stack trace | Detailed crash location |
| `request_id` (extra) | ULID-based request ID | Correlate with logs |

Integration Point 4: respondError — All 5xx Responses

Every time any API handler returns a 5xx status code (500 Internal Server Error, 502 Bad Gateway, 503 Service Unavailable, etc.), the respondError function automatically captures it to Sentry. This catches server errors that aren’t panics — logic errors, database failures, upstream timeouts, and other non-crash failures.

Tags: layer = "api", status = the HTTP status code integer.

How it’s triggered: respondError is called from 31 handler files across the entire API surface. Any handler that calls respondError(w, statusCode, message) with a status >= 500 will produce a Sentry event.

sentry.Flush(2 * time.Second) is called during graceful shutdown to ensure any buffered error events are sent to Sentry before the process exits. The 2-second timeout prevents the shutdown from hanging indefinitely if the network is down or Sentry is unreachable.


The React frontend uses `@sentry/browser` and captures errors from five integration points: component crashes, WebSocket parse errors, API request failures, store action errors, and browser tracing.

| Setting | Value | Description |
| --- | --- | --- |
| SDK | `@sentry/browser` | Browser-specific Sentry client |
| Active | Production builds only | Checked via `import.meta.env.DEV` — if true, errors go to `console.error` instead of Sentry |
| DSN | From backend settings | Fetched via `api.getSettings()` after the setup wizard completes, from `settings.telemetry.sentry_dsn` |
| Release | `ghost@{APP_VERSION}` | Version from Vite's `__APP_VERSION__` define, fallback `"0.1.0"` |
| Environment | `production` | Hardcoded |
| Integrations | `browserTracingIntegration()` | Performance tracing for page loads and XHR requests |
| Trace sample rate | 0.1 (10%) | Only 10% of transactions are traced (to reduce overhead) |
| Error sample rate | 1.0 (100%, default) | Every error is captured — not explicitly set, uses the SDK default |

The beforeSend hook runs on every event before it’s sent to Sentry. It scans all breadcrumbs (the trail of events leading up to the error) and redacts authentication tokens from URLs. Specifically, it replaces token=<actual-token> with token=[REDACTED] in XHR and fetch breadcrumb URLs. This prevents Ghost’s API bearer token from being exposed in Sentry.

All frontend Sentry captures go through a single utility function: captureError(error, context?).

Behavior:

  • In development: Logs to console.error only — never sends to Sentry. This prevents development errors from polluting the production error dashboard.
  • Error instances: Sent via Sentry.captureException() with the context as extra data.
  • Non-Error values (strings, numbers): Stringified and sent via Sentry.captureMessage() with context as extra.

React’s error boundary catches any unhandled error that occurs during component rendering. When a component crashes (throws an error during render, in a lifecycle method, or in a constructor), the error boundary catches it and sends it to Sentry with the React component stack trace.

Context: componentStack — the React component hierarchy that led to the crash.

The API client has retry logic for transient failures. When all retries are exhausted and the request still fails, the final error is sent to Sentry.

Two separate capture points:

  • JSON requests: Context includes path, retries count, source: 'api'
  • Blob requests (binary downloads like screenshots, exports): Context includes path, retries count, source: 'api-blob'

When a WebSocket message arrives that can’t be parsed as JSON, the error is captured. This typically indicates a protocol mismatch or a corrupted message.

Context: source: 'websocket', rawData — the first 200 characters of the raw message data (for debugging what the malformed message contained).

The toast bridge monitors all Zustand store error fields. When a store’s error transitions from null to a string value, the error is captured and simultaneously shown to the user as a toast notification.

Context: source: 'store'. Note: since store errors are strings (not Error objects), these are sent via captureMessage rather than captureException.

Deduplication: The toast bridge suppresses duplicate messages and certain known benign errors to avoid flooding both the UI and Sentry.


The browser extension cannot send errors directly to Sentry because content scripts and service workers have restricted capabilities — loading the full Sentry SDK would be impractical and would increase the extension’s size significantly. Instead, errors are relayed through Ghost’s backend.

Extension error → POST /api/v1/telemetry/error → Ghost backend → Sentry

The /api/v1/telemetry/error endpoint is unauthenticated — it doesn’t require a bearer token. This is intentional because the extension might encounter errors before it has established its WebSocket connection and received authentication details.

To prevent a misbehaving extension (or an attacker) from flooding the relay endpoint:

| Setting | Value |
| --- | --- |
| Max errors | 10 per minute per source |
| Implementation | In-memory map-based (not persistent across restarts) |
| Enforcement | Window resets when the minute expires |
| Over limit | HTTP 429 response, error is dropped |

The relay payload is JSON:

```json
{
  "source": "extension",
  "message": "Error message (required)",
  "stack": "stack trace string (optional)",
  "context": {
    "tab_url": "https://example.com",
    "context": "ws_create"
  },
  "timestamp": "2024-01-01T12:00:00Z"
}
```

| Field | Required | Max size | Notes |
| --- | --- | --- | --- |
| `message` | Yes | Part of 4 KB body | Must be non-empty, otherwise HTTP 400 |
| `source` | No | 64 characters (capped) | Defaults to `"extension"` if empty |
| `stack` | No | Part of 4 KB body | Sent as `raw_stack` extra in Sentry |
| `context` | No | Part of 4 KB body | Each key-value pair becomes a Sentry extra |
| `timestamp` | No | Part of 4 KB body | Sent as a Sentry extra |

Total body limit: 4,096 bytes (4 KB), enforced by io.LimitReader.

When the backend receives an extension error, it constructs a full Sentry event (not just a message):

  • Tags: source = the payload source (capped at 64 chars), layer = "extension"
  • Extras: timestamp, raw_stack, plus all key-value pairs from the context map
  • Event level: Error
  • Exception: If a stack trace is provided, the event includes an Exception entry with Type: "ExtensionError" and Value: the error message
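
The mapping above can be sketched with a simplified stand-in for sentry-go's `Event` type (all names here are illustrative):

```go
package main

import "fmt"

// relayPayload mirrors the relay JSON body; illustrative only.
type relayPayload struct {
	Source, Message, Stack, Timestamp string
	Context                           map[string]string
}

// sentryEvent is a simplified stand-in for the real SDK event type.
type sentryEvent struct {
	Level    string
	Tags     map[string]string
	Extra    map[string]interface{}
	ExcType  string
	ExcValue string
}

// buildEvent applies the mapping described above: source/layer tags,
// context entries as extras, and an exception entry when a stack exists.
func buildEvent(p relayPayload) sentryEvent {
	if len(p.Source) > 64 {
		p.Source = p.Source[:64] // cap the source tag
	}
	ev := sentryEvent{
		Level: "error",
		Tags:  map[string]string{"source": p.Source, "layer": "extension"},
		Extra: map[string]interface{}{"timestamp": p.Timestamp},
	}
	if p.Stack != "" {
		ev.Extra["raw_stack"] = p.Stack
		ev.ExcType, ev.ExcValue = "ExtensionError", p.Message
	}
	for k, v := range p.Context {
		ev.Extra[k] = v
	}
	return ev
}

func main() {
	ev := buildEvent(relayPayload{Source: "extension", Message: "ws failed", Stack: "at ws.js:1"})
	fmt.Println(ev.Tags["layer"], ev.ExcType)
}
```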

The extension reports errors from three separate contexts, each with its own global error handler:

| Context | Error types captured |
| --- | --- |
| Service worker | WebSocket creation failures, connection errors, send failures, message parse errors, content script injection failures, plus global `error` and `unhandledrejection` events |
| Popup | Global `error` and `unhandledrejection` events in the popup page |
| Content script | Global `error` and `unhandledrejection` events in the content script running inside web pages |

All error handlers swallow failures silently (console.debug) — if the error reporting itself fails (e.g., Ghost isn’t running), it doesn’t cause additional errors or user-visible issues.