Error Tracking (Sentry)
Ghost uses Sentry to catch and track errors across all three layers of its architecture — the React frontend, the Go backend, and the browser extension. Think of it as a safety net that catches every crash, every failed API call, and every unhandled exception, then sends it to a central dashboard where developers can investigate what went wrong, how often it happens, and which users are affected.
The system is designed so that no error goes unnoticed, even in layers that can’t talk to Sentry directly (like the browser extension, which has no Sentry SDK).
Architecture
What this diagram shows: Errors flow from three different sources to a single Sentry instance. The React frontend and Go backend can talk to Sentry directly using their respective SDKs. The browser extension cannot load the Sentry SDK (content scripts and service workers have limited capabilities), so it sends errors to the Go backend via an HTTP endpoint, and the backend relays them to Sentry with appropriate tags. This relay design ensures every error from every layer ends up in the same Sentry dashboard.
Configuration
Sentry is configured via the `[telemetry]` section in `~/.ghost/config.toml`:
```toml
[telemetry]
sentry_dsn = "https://...@sentry.io/..."
```

When the DSN is empty (the default), Sentry is completely disabled across all layers — no SDK is initialized, no errors are sent anywhere, and no network calls are made. This means Ghost works perfectly fine without Sentry; it’s an optional observability layer.
The Sentry DSN is stripped from settings exports to prevent accidental sharing of the organization’s Sentry project credentials.
Layer 1: Go Backend
The Go backend uses `github.com/getsentry/sentry-go` and captures errors at four integration points: top-level panics, background goroutine panics, API handler panics, and all 5xx HTTP responses.
Configuration
Section titled “Configuration”| Setting | Value | Description |
|---|---|---|
| DSN | From config.toml | If empty, initialization is skipped entirely |
| Release | ghost@0.1.0 | Hardcoded version string — identifies which release produced the error |
| Environment | production | Always “production” (Ghost doesn’t have staging environments) |
| Sample rate | 1.0 (100%) | Every error is captured — nothing is sampled or dropped |
| Tracing | Disabled | Performance tracing is not used on the backend |
| Stack traces | Attached | Full Go stack traces are included with every error event |
Integration Point 1: Top-Level Panic Recovery
In `main()`, a deferred function catches any panic that bubbles up to the top level — the kind of crash that would normally kill the entire application.
Flow:
- The deferred function calls `recover()` to catch the panic
- Sends the panic value to Sentry via `sentry.CurrentHub().Recover(r)`
- Flushes Sentry with a 2-second timeout (to ensure the error is sent before the process exits)
- Re-panics — this is intentional. The error has been recorded, but the process still exits with a crash rather than continuing in a potentially corrupted state.
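The shape of this deferred handler can be sketched as follows. This is a minimal stand-alone sketch, not Ghost's actual source: `reportPanic` and `flush` are hypothetical stand-ins for `sentry.CurrentHub().Recover(r)` and `sentry.Flush`, so the example runs without the Sentry SDK.

```go
package main

import (
	"fmt"
	"time"
)

// Stand-ins for sentry.CurrentHub().Recover(r) and sentry.Flush:
// hypothetical hooks so this sketch runs without the Sentry SDK.
var (
	reportPanic = func(r any) { fmt.Println("reported panic:", r) }
	flush       = func(timeout time.Duration) {}
)

// run models main(): the deferred function records the panic, flushes
// with a 2-second timeout, then re-panics so the process still exits
// with a crash instead of continuing in a corrupted state.
func run() {
	defer func() {
		if r := recover(); r != nil {
			reportPanic(r)         // record the crash
			flush(2 * time.Second) // ensure it is sent before exit
			panic(r)               // intentional re-panic
		}
	}()
	panic("top-level crash")
}

func main() {
	defer fmt.Println("process exiting after recorded crash")
	defer func() { recover() }() // demo only: keep this example from aborting
	run()
}
```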
Integration Point 2: safeGo — Background Goroutine Protection
All background goroutines (long-running tasks that operate independently from HTTP request handling) are wrapped in `safeGo(logger, name, fn)`, which provides panic recovery for each goroutine independently.
How it works: `safeGo` wraps the given function in a goroutine with its own `recover()`. If the goroutine panics, the error is logged with the goroutine’s name, sent to Sentry, and flushed — but crucially, only that one goroutine dies. The rest of the application keeps running.
Why this matters: Without `safeGo`, a panic in any background goroutine would crash the entire Ghost process. With it, a failure in (for example) the WAL checkpoint timer doesn’t kill the proxy or the API server.
All four background goroutines protected by `safeGo`:
| Name | Purpose |
|---|---|
| `findings-drain` | Drains the security findings channel and writes them to the database, then broadcasts via WebSocket |
| `tag-updates-drain` | Drains the security tag update channel and writes tag updates to the database |
| `auto-purger` | Runs every 5 minutes to clean up old flows based on age and count limits |
| `wal-checkpoint` | Runs every 30 minutes to checkpoint the SQLite WAL (Write-Ahead Log) file, keeping the database file compact |
Integration Point 3: API Recovery Middleware
Every API request passes through a recovery middleware that catches panics in HTTP handlers. This is different from the top-level recovery — it catches panics within individual request handlers while keeping the API server running.
When a handler panics:
- The middleware catches the panic with `recover()`
- Captures a full Go stack trace via `debug.Stack()`
- Sends to Sentry with detailed context tags
- Returns HTTP 500 “internal server error” to the client
- The API server continues serving other requests normally
Tags added to the Sentry event:
| Tag/Extra | Value | Purpose |
|---|---|---|
| `layer` | `"api"` | Identifies this as an API-layer error |
| `path` | The URL path (e.g., `/api/v1/flows`) | Which endpoint crashed |
| `method` | The HTTP method (e.g., GET, POST) | What operation was attempted |
| `stack` (extra) | Full Go stack trace | Detailed crash location |
| `request_id` (extra) | ULID-based request ID | Correlate with logs |
Integration Point 4: respondError — All 5xx Responses
Every time any API handler returns a 5xx status code (500 Internal Server Error, 502 Bad Gateway, 503 Service Unavailable, etc.), the `respondError` function automatically captures it to Sentry. This catches server errors that aren’t panics — logic errors, database failures, upstream timeouts, and other non-crash failures.
Tags: `layer = "api"`, `status` = the HTTP status code integer.
How it’s triggered: `respondError` is called from 31 handler files across the entire API surface. Any handler that calls `respondError(w, statusCode, message)` with a status >= 500 will produce a Sentry event.
Flush on Shutdown
`sentry.Flush(2 * time.Second)` is called during graceful shutdown to ensure any buffered error events are sent to Sentry before the process exits. The 2-second timeout prevents the shutdown from hanging indefinitely if the network is down or Sentry is unreachable.
Layer 2: React Frontend
The React frontend uses `@sentry/browser` and captures errors at four integration points (component crashes, WebSocket parse errors, API request failures, and store action errors), plus browser performance tracing.
Configuration
Section titled “Configuration”| Setting | Value | Description |
|---|---|---|
| SDK | @sentry/browser | Browser-specific Sentry client |
| Active | Production builds only | Checked via import.meta.env.DEV — if true, errors go to console.error instead of Sentry |
| DSN | From backend settings | Fetched via api.getSettings() after the setup wizard completes, from settings.telemetry.sentry_dsn |
| Release | ghost@{APP_VERSION} | Version from Vite’s __APP_VERSION__ define, fallback "0.1.0" |
| Environment | production | Hardcoded |
| Integrations | browserTracingIntegration() | Performance tracing for page loads and XHR requests |
| Trace sample rate | 0.1 (10%) | Only 10% of transactions are traced (to reduce overhead) |
| Error sample rate | 1.0 (100%, default) | Every error is captured — not explicitly set, uses the SDK default |
Privacy Protection
The `beforeSend` hook runs on every event before it’s sent to Sentry. It scans all breadcrumbs (the trail of events leading up to the error) and redacts authentication tokens from URLs. Specifically, it replaces `token=<actual-token>` with `token=[REDACTED]` in XHR and fetch breadcrumb URLs. This prevents Ghost’s API bearer token from being exposed in Sentry.
The captureError Utility
All frontend Sentry captures go through a single utility function: `captureError(error, context?)`.
Behavior:
- In development: Logs to `console.error` only — never sends to Sentry. This prevents development errors from polluting the production error dashboard.
- Error instances: Sent via `Sentry.captureException()` with the context as `extra` data.
- Non-Error values (strings, numbers): Stringified and sent via `Sentry.captureMessage()` with context as `extra`.
Integration Point 1: Error Boundary
React’s error boundary catches any unhandled error that occurs during component rendering. When a component crashes (throws an error during render, in a lifecycle method, or in a constructor), the error boundary catches it and sends it to Sentry with the React component stack trace.
Context: `componentStack` — the React component hierarchy that led to the crash.
Integration Point 2: API Client
The API client has retry logic for transient failures. When all retries are exhausted and the request still fails, the final error is sent to Sentry.
Two separate capture points:
- JSON requests: Context includes `path`, `retries` count, `source: 'api'`
- Blob requests (binary downloads like screenshots, exports): Context includes `path`, `retries` count, `source: 'api-blob'`
Integration Point 3: WebSocket Hook
When a WebSocket message arrives that can’t be parsed as JSON, the error is captured. This typically indicates a protocol mismatch or a corrupted message.
Context: `source: 'websocket'`, `rawData` — the first 200 characters of the raw message data (for debugging what the malformed message contained).
Integration Point 4: Toast Bridge
The toast bridge monitors all Zustand store error fields. When a store’s error transitions from `null` to a string value, the error is captured and simultaneously shown to the user as a toast notification.
Context: `source: 'store'`. Note: since store errors are strings (not Error objects), these are sent via `captureMessage` rather than `captureException`.
Deduplication: The toast bridge suppresses duplicate messages and certain known benign errors to avoid flooding both the UI and Sentry.
Layer 3: Extension Error Relay
The browser extension cannot send errors directly to Sentry because content scripts and service workers have restricted capabilities — loading the full Sentry SDK would be impractical and would increase the extension’s size significantly. Instead, errors are relayed through Ghost’s backend.
Relay Flow
```
Extension error → POST /api/v1/telemetry/error → Ghost backend → Sentry
```

The `/api/v1/telemetry/error` endpoint is unauthenticated — it doesn’t require a bearer token. This is intentional because the extension might encounter errors before it has established its WebSocket connection and received authentication details.
Rate Limiting
To prevent a misbehaving extension (or an attacker) from flooding the relay endpoint:
| Setting | Value |
|---|---|
| Max errors | 10 per minute per source |
| Implementation | In-memory map-based (not persistent across restarts) |
| Enforcement | Window resets when the minute expires |
| Over limit | HTTP 429 response, error is dropped |
Payload Format
Section titled “Payload Format”{ "source": "extension", "message": "Error message (required)", "stack": "stack trace string (optional)", "context": { "tab_url": "https://example.com", "context": "ws_create" }, "timestamp": "2024-01-01T12:00:00Z"}| Field | Required | Max size | Notes |
|---|---|---|---|
| `message` | Yes | Part of 4KB body | Must be non-empty, otherwise HTTP 400 |
| `source` | No | 64 characters (capped) | Defaults to `"extension"` if empty |
| `stack` | No | Part of 4KB body | Sent as `raw_stack` extra in Sentry |
| `context` | No | Part of 4KB body | Each key-value pair becomes a Sentry extra |
| `timestamp` | No | Part of 4KB body | Sent as a Sentry extra |
Total body limit: 4,096 bytes (4 KB), enforced by `io.LimitReader`.
Sentry Event Construction
When the backend receives an extension error, it constructs a full Sentry event (not just a message):
- Tags: `source` = the payload source (capped at 64 chars), `layer` = `"extension"`
- Extras: `timestamp`, `raw_stack`, plus all key-value pairs from the `context` map
- Event level: `Error`
- Exception: If a stack trace is provided, the event includes an `Exception` entry with `Type: "ExtensionError"` and `Value:` the error message
Extension Error Sources
The extension reports errors from three separate contexts, each with its own global error handler:
| Context | Error types captured |
|---|---|
| Service worker | WebSocket creation failures, connection errors, send failures, message parse errors, content script injection failures, plus global `error` and `unhandledrejection` events |
| Popup | Global `error` and `unhandledrejection` events in the popup page |
| Content script | Global `error` and `unhandledrejection` events in the content script running inside web pages |
All error handlers swallow reporting failures silently (logging only via `console.debug`) — if the error reporting itself fails (e.g., Ghost isn’t running), it doesn’t cause additional errors or user-visible issues.