Sessions

Sessions are Ghost’s way of organizing captured traffic into separate workspaces — like folders for your network recordings. Every flow (HTTP request-response pair) belongs to exactly one session. When Ghost captures traffic through its proxy, each flow is saved into whichever session is currently “active.” You can create multiple sessions to separate different testing scenarios (e.g., “Login Flow Testing” vs “Checkout Bug Investigation”), then export, compare, or analyze them independently.

Think of sessions like recording sessions in a studio — you press record, capture some traffic, stop, name it, and later you can play it back, export it, or compare two sessions to see what changed.

Every session has these fields:

| Field | Type | Description |
| --- | --- | --- |
| id | string | A ULID (Universally Unique Lexicographically Sortable Identifier) — a unique ID that also encodes the creation time, so sessions are naturally sortable by when they were created |
| name | string | Human-readable name you choose (e.g., “Sprint 42 Regression Test”) |
| description | string | Optional longer description of what this session is for |
| created_at | ISO 8601 timestamp | When the session was created |
| flow_count | integer | Number of flows in this session — computed on the fly from the flows table (not stored as a field), so it’s always accurate even if flows are deleted or imported |

The flow_count is calculated using a SQL LEFT JOIN that counts rows in the flows table grouped by session ID. This means it’s always real-time accurate — if you delete flows or import new ones, the count updates immediately on the next query.
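The join semantics are worth spelling out: every session appears in the result, and a session with no flows gets a count of 0 rather than disappearing. A small Go sketch that mirrors this behavior (the types and names here are illustrative, not Ghost’s actual structs):

```go
package main

import "fmt"

// Session and Flow mirror only the columns relevant to counting;
// these are stand-ins, not Ghost's real types.
type Session struct{ ID, Name string }
type Flow struct{ SessionID string }

// flowCounts reproduces the LEFT JOIN + GROUP BY semantics: every session
// appears in the output, and sessions with no flows get a count of 0.
func flowCounts(sessions []Session, flows []Flow) map[string]int {
	counts := make(map[string]int, len(sessions))
	for _, s := range sessions {
		counts[s.ID] = 0 // LEFT JOIN keeps sessions with no matching flows
	}
	for _, f := range flows {
		if _, ok := counts[f.SessionID]; ok {
			counts[f.SessionID]++
		}
	}
	return counts
}

func main() {
	counts := flowCounts(
		[]Session{{ID: "a"}, {ID: "b"}},
		[]Flow{{SessionID: "a"}, {SessionID: "a"}},
	)
	fmt.Println(counts["a"], counts["b"]) // 2 0
}
```

Because the count is derived rather than stored, there is no counter to drift out of sync when flows are deleted or imported.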

GET /api/v1/sessions

Returns all sessions ordered by creation time (newest first). There are no query parameters — no pagination, no filtering. The response is a JSON array of session objects.

Response:

[
  {
    "id": "01HWXYZ...",
    "name": "Login Flow Testing",
    "description": "Testing the new SSO integration",
    "created_at": "2024-01-15T10:30:00Z",
    "flow_count": 342
  },
  {
    "id": "01HWABC...",
    "name": "Checkout Bug Investigation",
    "description": "",
    "created_at": "2024-01-14T16:00:00Z",
    "flow_count": 89
  }
]

The list returns the same fields as getting a single session — there’s no “summary vs detail” distinction for sessions (unlike flows, which have separate summary and detail representations).

POST /api/v1/sessions

Creates a new session. The name field is required (after trimming whitespace). The description field is optional and defaults to an empty string.

Request body (1 MB limit):

{
  "name": "Sprint 42 Regression Test",
  "description": "Comparing checkout flow before and after the payment refactor"
}

Validation:

  • name is required — if empty or only whitespace after trimming, returns 400 Bad Request
  • description is optional — trimmed of leading/trailing whitespace
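The rules above reduce to a few lines of Go; a hypothetical sketch (validateSessionInput is an illustrative name, not a real Ghost function):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// validateSessionInput applies the documented rules: name is required after
// trimming, description is trimmed but optional.
func validateSessionInput(name, description string) (string, string, error) {
	name = strings.TrimSpace(name)
	if name == "" {
		// The handler maps this error to a 400 Bad Request response.
		return "", "", errors.New("name is required")
	}
	return name, strings.TrimSpace(description), nil
}

func main() {
	n, d, err := validateSessionInput("  Sprint 42  ", " notes ")
	fmt.Println(n, d, err)
}
```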

Response: 201 Created with the new session object.

Broadcasts a session.created WebSocket event with the session object, so the frontend can add it to the session list in real time without polling.

GET /api/v1/sessions/{id}

Returns a single session by its ID. The response format is identical to what the list endpoint returns for each session — the same 5 fields with the same computed flow_count.

Error responses:

  • 400 if the ID is empty
  • 404 if no session exists with that ID
  • 500 for unexpected database errors

PUT /api/v1/sessions/{id}

Updates a session’s name and/or description. The same validation rules apply as creation: name is required after trimming, description is optional.

Request body (1 MB limit):

{
  "name": "Updated Session Name",
  "description": "Added more context about what was tested"
}

After updating, the handler re-fetches the session from the database to return the complete state including the computed flow_count. If the re-fetch fails (unlikely but possible), it returns the session without the flow count.

Response: 200 OK with the updated session object.

Broadcasts a session.updated WebSocket event.

DELETE /api/v1/sessions/{id}

Deletes a session and everything associated with it. This is a cascading delete — SQLite’s foreign key constraints (with ON DELETE CASCADE) automatically remove all related data.

When you delete a session, these database records are automatically removed:

| Data | Table | Relationship |
| --- | --- | --- |
| All flows in the session | flows | Direct child (session_id FK) |
| All WebSocket frames for those flows | ws_frames | Grandchild (flow_id FK → flows) |
| All conversations (AI chat threads) | conversations | Direct child (session_id FK) |
| All messages in those conversations | messages | Grandchild (conversation_id FK → conversations) |
| All captured browser interactions | interactions | Direct child (session_id FK) |
| All journey recordings | journeys | Direct child (session_id FK) |
| All journey steps in those journeys | journey_steps | Grandchild (journey_id FK → journeys) |
| All screenshot baselines | screenshot_baselines | Direct child (session_id FK) |
| All security findings | security_findings | Direct child (session_id FK) |
| All injection rules | injection_rules | Direct child (session_id FK) |
| All extension events | extension_events | Direct child (session_id FK) |

Additionally, the handler does best-effort filesystem cleanup — it removes the session’s workspace directory at ~/.ghost/workspaces/{sessionID} (which may contain agent-generated reports, PoC scripts, and uploaded files). If this cleanup fails, it logs a warning but the API call still succeeds. The database deletion is the important part; leftover files are just disk space.

Response: {"ok": true}

Broadcasts a session.deleted WebSocket event with {"id": "<session_id>"}.

POST /api/v1/sessions/{id}/activate

Makes this session the “active” one — all new traffic captured by the proxy will be saved to this session. This is like switching which folder incoming mail goes to.

Activation does three things internally:

  1. Switches the proxy’s target session — calls SetSessionID(id) on the proxy server so all newly captured flows are tagged with this session ID
  2. Switches the extension hub’s session — the browser extension’s WebSocket connection also uses this session ID for captured interactions and events
  3. Reloads injection rules — loads the new session’s injection rules from the database and updates the in-memory script injector. Each session can have its own set of injection rules, so switching sessions means switching which scripts are injected into web pages

All three subsystems are nil-safe — if the proxy server, extension hub, or script injector isn’t initialized yet, the handler skips that step without crashing.
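The nil-safe dispatch might look roughly like this sketch — the sessionAware interface and the activate helper are assumptions for illustration, not Ghost’s real types:

```go
package main

import "fmt"

// sessionAware models any subsystem that can be pointed at a session
// (proxy server, extension hub, script injector).
type sessionAware interface{ SetSessionID(id string) }

// proxy is a stand-in for one such subsystem.
type proxy struct{ session string }

func (p *proxy) SetSessionID(id string) { p.session = id }

// activate switches every initialized subsystem to the new session,
// skipping nil ones without crashing, and reports how many were switched.
func activate(id string, subsystems ...sessionAware) int {
	applied := 0
	for _, s := range subsystems {
		if s == nil { // uninitialized subsystem: skip, don't panic
			continue
		}
		s.SetSessionID(id)
		applied++
	}
	return applied
}

func main() {
	p := &proxy{}
	// Two of the three subsystems are not initialized yet.
	fmt.Println(activate("01HWXYZ", p, nil, nil), p.session) // 1 01HWXYZ
}
```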

Response: {"ok": true}

Note: The activate endpoint does not broadcast a WebSocket event. The frontend updates its local state directly after the API call succeeds.

Ghost can export a session’s flows in five formats, each suited for different purposes. All export endpoints share a common safety cap of 100,000 flows — if a session has more flows than this, only the first 100,000 are exported. Flows are fetched from the database in batches of 5,000 to avoid loading everything into memory at once.

After generating an export, if the result is under 20 MB, Ghost automatically saves it as an “artifact” (a persistent file associated with the session) so you can download it again later without re-generating it.

GET /api/v1/sessions/{id}/export/har

Exports the session as an HAR (HTTP Archive) file — the industry-standard format for recording HTTP traffic. HAR files can be opened by browser DevTools, Charles Proxy, Fiddler, and many other tools.

Content-Type: application/json; charset=utf-8
Content-Disposition: attachment; filename="{sessionName}_{timestamp}.har"

The HAR file follows the 1.2 specification with:

  • Creator: {"name": "Ghost", "version": "1.0"}
  • Entries: Each flow becomes a HAR entry with request (method, URL, headers, cookies, query strings, body) and response (status, headers, cookies, body, redirect URL)
  • Timings: Ghost’s timing data is mapped to HAR timing fields
  • Body encoding: Text content is included as-is. Binary content (images, protobuf, compressed data) is base64-encoded with an "encoding": "base64" field on the content object
  • Cookies: Parsed from Cookie/Set-Cookie headers into structured objects
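The text-vs-binary decision can be approximated with a UTF-8 validity check; a sketch under that assumption (encodeBody and the exact check are illustrative, not Ghost’s confirmed implementation):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"unicode/utf8"
)

// harContent mirrors the relevant part of a HAR 1.2 content object.
type harContent struct {
	Text     string `json:"text"`
	Encoding string `json:"encoding,omitempty"`
}

// encodeBody includes text bodies as-is and base64-encodes binary ones,
// setting the "encoding" field only in the binary case.
func encodeBody(body []byte) harContent {
	if utf8.Valid(body) {
		return harContent{Text: string(body)}
	}
	return harContent{
		Text:     base64.StdEncoding.EncodeToString(body),
		Encoding: "base64",
	}
}

func main() {
	fmt.Println(encodeBody([]byte("hello")))          // passes through as text
	fmt.Println(encodeBody([]byte{0xff, 0x00, 0x01})) // base64-encoded
}
```

Consumers (DevTools, Charles, Fiddler) use the "encoding" field to know when to decode before display.
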

GET /api/v1/sessions/{id}/export/json

Exports the session as a JSON array of Ghost’s internal flow objects — this preserves all Ghost-specific fields (tags, notes, metadata, source, device info) that HAR format doesn’t support.

Content-Type: application/json; charset=utf-8
Content-Disposition: attachment; filename="{sessionName}_{timestamp}.json"

The output is pretty-printed JSON (indented for readability) of the raw flow structs. This format can be re-imported into Ghost on another machine or in a different session.

GET /api/v1/sessions/{id}/export/csv

Exports the session as a CSV spreadsheet — useful for opening in Excel, Google Sheets, or feeding into data analysis scripts. CSV is a flat format, so it only includes summary data (no headers or bodies).

Content-Type: text/csv; charset=utf-8
Content-Disposition: attachment; filename="{sessionName}_{timestamp}.csv"

Columns (12):

| Column | Description |
| --- | --- |
| id | Flow ULID |
| method | HTTP method (GET, POST, etc.) |
| url | Full URL including scheme, host, and path |
| host | Hostname only |
| path | URL path only |
| status | HTTP status code (200, 404, etc.) |
| duration_ms | Request duration in milliseconds |
| request_size | Request body size in bytes |
| response_size | Response body size in bytes |
| content_type | Response content type |
| tags | Tags joined with semicolons (;) |
| error | Error message, if any |

CSV injection protection: To prevent formula injection attacks (where a malicious value starting with =, +, -, @, tab, or carriage return could execute formulas when opened in a spreadsheet), all cell values are checked and prefixed with a single quote (') if they start with any of these 6 dangerous characters. This is a standard security measure for CSV exports.
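A minimal version of that sanitizer in Go (the function name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeCSVCell neutralizes spreadsheet formula injection by prefixing a
// single quote when the value starts with one of the 6 dangerous characters:
// =, +, -, @, tab, or carriage return.
func sanitizeCSVCell(v string) string {
	if v == "" {
		return v
	}
	if strings.ContainsRune("=+-@\t\r", rune(v[0])) {
		return "'" + v
	}
	return v
}

func main() {
	fmt.Println(sanitizeCSVCell("=SUM(A1:A9)")) // '=SUM(A1:A9)
	fmt.Println(sanitizeCSVCell("/api/users"))  // /api/users (unchanged)
}
```

The leading quote makes spreadsheet applications treat the cell as literal text instead of evaluating it as a formula.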

GET /api/v1/sessions/{id}/export/postman

Exports the session as a Postman Collection v2.1.0 — you can import this directly into Postman to replay, modify, or organize the captured requests.

Content-Type: application/json; charset=utf-8
Content-Disposition: attachment; filename="{sessionName}_{timestamp}_postman.json"

Each flow becomes a Postman item named "{Method} {Path}" (e.g., “GET /api/users”). The export:

  • Skips Host, Content-Length, and Connection headers (Postman manages these automatically)
  • Includes request bodies only if they’re valid UTF-8 text (binary bodies are omitted since Postman can’t display them)
  • Sets the URL as a raw string (not decomposed into protocol/host/path/query components)

GET /api/v1/sessions/{id}/export/report

Generates a self-contained HTML report file — a single .html file you can open in any browser, share via email, or attach to a bug ticket. No external dependencies required.

Content-Type: text/html; charset=utf-8
Content-Disposition: attachment; filename="ghost-report-{idPrefix}.html"

Flow limit: 10,000 flows (not the 100,000 shared by other exports — HTML tables become unwieldy with more data).

The report includes:

  • Stats cards: Total flows, total transfer size (human-readable + raw bytes), error count, HTTP method distribution (with colored badges), and status code distribution grouped by category (2xx/3xx/4xx/5xx)
  • Searchable flow table: An input field that filters rows as you type, searching across all columns
  • Sortable columns: Click any column header to sort — Time (HH:MM:SS.mmm format), Method (with colored badge), Status (green for 2xx, yellow for 3xx, red for 4xx/5xx), Host (cyan), Path (truncated to 60 characters with full URL on hover), Duration, Size, Tags
  • Dark theme styling: Uses Ghost’s design system colors and fonts (Space Grotesk for UI, JetBrains Mono for code/data)
  • All CSS and JavaScript are embedded inline — no external files, CDNs, or network requests needed

GET /api/v1/sessions/{id}/report

Downloads the AI agent’s analysis report for this session. Unlike the HTML export (which Ghost generates automatically from flow data), this is a report written by the AI agent during an analysis conversation. The agent saves it as report.md in the session’s workspace directory.

Content negotiation:

  • If the request includes Accept: text/html, returns the markdown wrapped in a basic HTML <pre> tag for browser viewing
  • Otherwise, returns raw markdown with Content-Type: text/markdown and Content-Disposition: attachment; filename="ghost-report-{idPrefix}.md"

File size limit: 10 MB (maxReportSize). Returns 413 if the report somehow exceeds this.

Returns 404 if no report has been generated for this session (the agent hasn’t been asked to write one yet, or the workspace directory doesn’t exist).

GET /api/v1/sessions/{id}/poc

Downloads the Proof-of-Concept scripts generated by the security agent as a ZIP archive. When the agent identifies vulnerabilities, it can generate PoC scripts (Python, curl commands, etc.) that demonstrate the vulnerability — these are saved to the session workspace’s poc/ directory.

Content-Type: application/zip
Content-Disposition: attachment; filename="ghost-poc-{idPrefix}.zip"

The ZIP is created on-the-fly and streamed directly to the response (no temporary file on disk). Each regular file in the poc/ directory is added to the ZIP. Directories and symbolic links are skipped. Individual files larger than 10 MB are skipped with a warning logged.

Returns 404 if the poc/ directory doesn’t exist or contains no files.

Ghost can import traffic data from external sources into an existing session. Both import endpoints accept either a raw JSON body or a multipart file upload (field name: file).

POST /api/v1/sessions/{id}/import/har

Imports flows from an HAR 1.2 file. Each HAR entry becomes a new flow in the target session.

Body size limit: 256 MB

Import behavior:

  • Each HAR entry gets a new ULID (Ghost doesn’t reuse the original flow IDs)
  • The source field is set to "import" (not "proxy")
  • Metadata includes {"import_source": "har"} so you can identify imported flows
  • The session ID is set to the target session from the URL path
  • For each successfully imported flow, a flow.created WebSocket event is broadcast so the frontend updates in real time

Error handling: Individual entry failures don’t stop the import. If one entry has invalid data, it’s skipped and the error is recorded. The response tells you exactly what happened:

Response:

{
  "imported": 142,
  "skipped": 3,
  "errors": [
    "entry 47: missing request URL",
    "entry 89: invalid status code"
  ]
}

POST /api/v1/sessions/{id}/import/json

Imports flows from Ghost’s native JSON export format (or any JSON array of flow objects).

Body size limit: 256 MB
Flow cap: Maximum 50,000 flows per import file. Returns 413 if exceeded.
Error cap: Maximum 100 error messages in the response — after that, errors are still counted but messages stop accumulating (to prevent the response itself from becoming enormous).

Format detection: The handler inspects the first non-whitespace byte to determine the format:

  • [ (array) — expected format, proceeds with import
  • { (object) — probes the first 4 KB for a "log" key. If found, this is actually an HAR file and the handler returns a helpful error: “this looks like an HAR file, use /import/har instead”
  • Anything else — returns 400
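The detection logic can be sketched as follows — the function name, the "invalid" outcome for objects without a "log" key, and the exact probe mechanics are modeled on the description above, not taken from Ghost’s actual code:

```go
package main

import (
	"bytes"
	"fmt"
)

// detectImportFormat inspects the first non-whitespace byte of the body.
// It returns "flows" for a JSON array, "har" for an object containing a
// "log" key within the first 4 KB, and "invalid" otherwise.
func detectImportFormat(body []byte) string {
	trimmed := bytes.TrimLeft(body, " \t\r\n")
	if len(trimmed) == 0 {
		return "invalid"
	}
	switch trimmed[0] {
	case '[':
		return "flows" // expected format: JSON array of flow objects
	case '{':
		probe := trimmed
		if len(probe) > 4096 {
			probe = probe[:4096] // only probe the first 4 KB
		}
		if bytes.Contains(probe, []byte(`"log"`)) {
			return "har" // handler tells the caller to use /import/har
		}
		return "invalid"
	default:
		return "invalid"
	}
}

func main() {
	fmt.Println(detectImportFormat([]byte(`  [{"request":{}}]`)))   // flows
	fmt.Println(detectImportFormat([]byte(`{"log":{"entries":[]}}`))) // har
}
```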

Validation per flow:

  • Request must not be nil
  • Method and URL must not be empty
  • Host and Path are auto-populated from the URL if missing
  • StartedAt defaults to time.Now() if not set
  • Tags and Metadata are initialized to empty (not null) if missing

Cancellation awareness: The handler checks the request context every 100 flows. If the client disconnected (or a timeout fired), it stops inserting and returns what it has so far.

Response: Same format as HAR import: {"imported": int, "skipped": int, "errors": [...]}

POST /api/v1/sessions/compare

Compares two sessions side by side to identify what changed between them — new endpoints, removed endpoints, performance regressions, error rate increases, and more. This is the engine behind Ghost’s “Session Comparison” feature, which helps you spot what broke (or improved) between two test runs.

Think of it like a “diff” for your API traffic — instead of comparing text files line by line, it compares HTTP endpoints request by request.

Request body:

{
  "session_a": "01HWXYZ...",
  "session_b": "01HWABC..."
}

Validation:

  • Both session_a and session_b are required
  • Cannot compare a session with itself (returns 409 Conflict)

Safety limits:

  • Timeout: 30 seconds — comparison is computationally intensive for large sessions, so it has a dedicated timeout
  • Flow cap: Maximum 50,000 flows per session. If either session exceeds this, returns 413 with a message suggesting you filter the sessions first

The response has two main sections: an overview (aggregate statistics for both sessions) and hosts (per-hostname endpoint comparisons).

{
  "overview": {
    "session_a": {
      "id": "01HWXYZ...",
      "name": "Before Refactor",
      "description": "",
      "created_at": "2024-01-14T10:00:00Z",
      "flow_count": 1234,
      "error_count": 12,
      "avg_duration_ms": 150.5,
      "total_size": 5678900,
      "unique_hosts": 8,
      "unique_endpoints": 45
    },
    "session_b": {
      "id": "01HWABC...",
      "name": "After Refactor",
      "description": "",
      "created_at": "2024-01-15T10:00:00Z",
      "flow_count": 1456,
      "error_count": 25,
      "avg_duration_ms": 230.2,
      "total_size": 6789000,
      "unique_hosts": 9,
      "unique_endpoints": 48
    },
    "delta": {
      "new_endpoints": 5,
      "removed_endpoints": 2,
      "changed_endpoints": 8,
      "unchanged_endpoints": 33,
      "regressions": 3,
      "performance_issues": 2
    }
  },
  "hosts": [
    {
      "host": "api.example.com",
      "endpoints": [
        {
          "method": "GET",
          "path_pattern": "/api/users/{id}",
          "status": "changed",
          "change_types": ["regression", "performance"],
          "severity": 0,
          "stats_a": {
            "count": 50,
            "status_codes": {"200": 48, "500": 2},
            "avg_duration_ms": 120.0,
            "min_duration_ms": 45.0,
            "max_duration_ms": 890.0,
            "p95_duration_ms": 450.0,
            "avg_response_size": 2048,
            "error_rate": 0.04,
            "flow_ids": ["id1", "id2"],
            "sample_paths": ["/api/users/123", "/api/users/456"]
          },
          "stats_b": {
            "count": 55,
            "status_codes": {"200": 30, "500": 25},
            "avg_duration_ms": 340.0,
            "min_duration_ms": 90.0,
            "max_duration_ms": 2100.0,
            "p95_duration_ms": 1200.0,
            "avg_response_size": 1024,
            "error_rate": 0.45,
            "flow_ids": ["id3", "id4"],
            "sample_paths": ["/api/users/789"]
          },
          "sample_pair": {
            "flow_a_id": "id1",
            "flow_b_id": "id3"
          }
        }
      ]
    }
  ]
}

To group requests to the same logical endpoint, the comparison engine normalizes URL paths. Dynamic segments are replaced with {id}:

  • Numeric IDs: /users/12345 → /users/{id}
  • UUIDs: /orders/550e8400-e29b-41d4-a716-446655440000 → /orders/{id}
  • ULIDs: /flows/01HWXYZ123456789012345 → /flows/{id}
  • Hex hashes: /assets/a1b2c3d4e5f6 → /assets/{id}

This means /users/123 and /users/456 are recognized as the same endpoint (GET /users/{id}) and their stats are compared together.
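A rough approximation of the normalizer — the exact patterns Ghost compiles may differ:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// segmentPatterns approximate the dynamic-segment detectors described above.
var segmentPatterns = []*regexp.Regexp{
	regexp.MustCompile(`^\d+$`),                    // numeric IDs
	regexp.MustCompile(`^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-` +
		`[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$`), // UUIDs
	regexp.MustCompile(`^[0-9A-HJKMNP-TV-Z]{26}$`), // ULIDs (Crockford base32)
	regexp.MustCompile(`^[0-9a-f]{8,}$`),           // hex hashes
}

// normalizePath replaces each dynamic path segment with {id} so that
// requests to the same logical endpoint group together.
func normalizePath(p string) string {
	parts := strings.Split(p, "/")
	for i, seg := range parts {
		for _, re := range segmentPatterns {
			if re.MatchString(seg) {
				parts[i] = "{id}"
				break
			}
		}
	}
	return strings.Join(parts, "/")
}

func main() {
	fmt.Println(normalizePath("/users/12345"))       // /users/{id}
	fmt.Println(normalizePath("/assets/a1b2c3d4e5f6")) // /assets/{id}
}
```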

Each endpoint comparison is classified based on these thresholds:

| Change Type | Criteria | What It Means |
| --- | --- | --- |
| Regression | Error rate increased by ≥ 10% AND the dominant status code class shifted from 2xx to 4xx/5xx | An endpoint that was working is now failing — the most serious type of change |
| Performance | Average duration ≥ 2× the baseline AND absolute increase ≥ 100 ms | An endpoint got significantly slower — both a relative and an absolute threshold must be met to avoid false positives on very fast endpoints |
| Size changed | Average response size changed by ≥ 20% AND absolute change ≥ 1,024 bytes | Response payloads grew or shrank significantly — might indicate missing data or new fields |
| Status changed | The most common status code class differs (e.g., 200 → 301) | The endpoint behaves differently but isn’t necessarily broken |

An endpoint can have multiple change types simultaneously (e.g., both “regression” and “performance” if it’s both failing more and slower).
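The thresholds translate directly into comparisons; a sketch with illustrative field names (not Ghost’s real identifiers):

```go
package main

import "fmt"

// endpointStats holds the aggregates the classifier compares.
// DominantClass is the most common status class: 2 for 2xx, 4 for 4xx, etc.
type endpointStats struct {
	ErrorRate     float64
	AvgDurationMS float64
	AvgRespSize   float64
	DominantClass int
}

func abs(f float64) float64 {
	if f < 0 {
		return -f
	}
	return f
}

// classify applies the documented thresholds; an endpoint can trigger
// several change types at once.
func classify(a, b endpointStats) []string {
	var changes []string
	if b.ErrorRate-a.ErrorRate >= 0.10 && a.DominantClass == 2 && b.DominantClass >= 4 {
		changes = append(changes, "regression")
	}
	if b.AvgDurationMS >= 2*a.AvgDurationMS && b.AvgDurationMS-a.AvgDurationMS >= 100 {
		changes = append(changes, "performance")
	}
	if d := b.AvgRespSize - a.AvgRespSize; abs(d) >= 0.20*a.AvgRespSize && abs(d) >= 1024 {
		changes = append(changes, "size")
	}
	if a.DominantClass != b.DominantClass {
		changes = append(changes, "status")
	}
	return changes
}

func main() {
	before := endpointStats{ErrorRate: 0.04, AvgDurationMS: 120, AvgRespSize: 2048, DominantClass: 2}
	after := endpointStats{ErrorRate: 0.45, AvgDurationMS: 340, AvgRespSize: 1024, DominantClass: 5}
	fmt.Println(classify(before, after))
}
```

Note how "performance" requires both the 2× relative and the 100 ms absolute condition, so a 5 ms endpoint doubling to 10 ms does not register as a degradation.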

Endpoints are sorted by severity so the most important changes appear first:

| Severity | Change Type | Priority |
| --- | --- | --- |
| 0 | Regression | Highest — things that broke |
| 1 | Removed (only in session A) | High — endpoints that disappeared |
| 2 | Performance degradation | Medium-high — things that got slower |
| 3 | Status code changed | Medium — different behavior |
| 4 | New (only in session B) | Medium-low — new endpoints appeared |
| 5 | Size changed | Low — payload differences |
| 6 | Other changes | Lower |
| 8 | Unchanged | Lowest — no differences |

Hosts are also sorted: hosts with real changes sort before hosts with only additions/removals, which sort before hosts where everything is unchanged. Within each group, hosts are sorted alphabetically.

To keep response sizes manageable:

  • Flow IDs: Maximum 100 per endpoint per session (enough to inspect specific examples)
  • Sample paths: Maximum 5 per endpoint (shows the actual URL paths grouped into this endpoint pattern)

| Event | Trigger | Payload |
| --- | --- | --- |
| session.created | New session created via POST | Session object (id, name, description, created_at, flow_count) |
| session.updated | Session renamed or description changed | Updated session object |
| session.deleted | Session removed via DELETE | {"id": "..."} — just the deleted session’s ID |
| flow.created | Each flow imported via HAR or JSON import | Flow summary DTO (per flow, not batched) |
| artifact.created | Export auto-saved as artifact (when under 20 MB) | Artifact summary DTO |

Note that the activate endpoint does not broadcast any WebSocket event — the frontend handles session switching locally.

| Limit | Value | Context |
| --- | --- | --- |
| Request body size | 1 MB | Create and update endpoints |
| Export flow cap | 100,000 flows | HAR, JSON, CSV, Postman exports |
| Export batch size | 5,000 flows | Internal pagination during export generation |
| Report flow cap | 10,000 flows | HTML report export (smaller cap for UI performance) |
| Import body size | 256 MB | Both HAR and JSON import |
| Import flow cap | 50,000 flows | JSON import only (HAR has no flow cap) |
| Import error cap | 100 messages | JSON import (errors still counted beyond this, just not listed) |
| Comparison flow cap | 50,000 per session | Sessions with more flows are rejected with 413 |
| Comparison timeout | 30 seconds | Hard deadline for comparison computation |
| Flow IDs per endpoint | 100 | In comparison results |
| Sample paths per endpoint | 5 | In comparison results |
| Artifact auto-save threshold | 20 MB | Exports larger than this are not saved as artifacts |
| Agent report size | 10 MB | Maximum file size for report.md |
| PoC file size | 10 MB per file | Individual files in the PoC ZIP; oversized files are skipped |