Bug Reports
Ghost generates rich, contextual bug reports from the Mobile Inspector — combining animated screenshots, device state, element information, correlated API traffic, and automatically generated reproduction steps. When an AI provider is configured, reports are enhanced with intelligent observations and inferred user journey steps.
This isn’t just a screenshot with a text note. A Ghost bug report includes everything a developer needs to understand and reproduce the issue: what the user did, what API calls were triggered, which ones failed, what the server returned, what the UI looked like, and step-by-step instructions to reproduce — all generated automatically.
Generating a Bug Report
From the Mobile Inspector, press Cmd+B to generate a bug report for the currently connected device. Ghost sends a request to POST /inspector/devices/{id}/bug-report and gathers all available context.
Optional parameters:
- Element ID — if you have a UI element selected in the inspector, its details and automation selectors are included
- Screenshot offset — grab a screenshot from earlier in the ring buffer (not just the latest frame)
- Session ID — scope the network traffic to a specific capture session
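A minimal sketch of building this request in Python. The endpoint path comes from this page, but the host/port and the JSON field names (`element_id`, `screenshot_offset`, `session_id`) are illustrative assumptions, not Ghost's documented wire format:

```python
import json
import urllib.request

def build_bug_report_request(device_id, element_id=None,
                             screenshot_offset=None, session_id=None):
    """Assemble the POST request with only the optional params that are set."""
    payload = {}
    if element_id is not None:
        payload["element_id"] = element_id                # include selected element details
    if screenshot_offset is not None:
        payload["screenshot_offset"] = screenshot_offset  # frames back in the ring buffer
    if session_id is not None:
        payload["session_id"] = session_id                # scope traffic to one capture session
    return urllib.request.Request(
        f"http://localhost:8080/inspector/devices/{device_id}/bug-report",  # placeholder host
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_bug_report_request("sim-1", screenshot_offset=5)
```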
Generation Flow
What this diagram shows — the complete report generation pipeline:
- You trigger the report from the inspector UI
- Ghost’s report generator gathers all available context from the device: the current screenshot, the selected element’s details, recent network traffic (up to 200 flows within the last 5 minutes), interaction events from the touch monitor, and UI hierarchy snapshots
- A deterministic (rule-based) report is built first — this always works, even without AI
- Two things happen in parallel: the animated GIF is encoded from the screenshot ring buffer, and the report is sent to the configured LLM for AI enhancement
- The final report is auto-saved as a persistent artifact and returned to the UI
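The ordering above (deterministic report first, then GIF encoding and AI enhancement in parallel) can be sketched with asyncio. All function bodies here are stand-ins; only the control flow reflects the documented pipeline:

```python
import asyncio

async def encode_gif(frames):
    # Placeholder: real encoder works from the screenshot ring buffer.
    return b"GIF..." if len(frames) >= 2 else None

async def enhance_with_ai(report):
    # Placeholder: real call goes to the configured LLM provider.
    return {**report, "ai_enhanced": True}

async def generate_report(context):
    # Deterministic report is built first — this always works, even without AI.
    report = {"title": "Rule-based title", "findings": []}
    # GIF encoding and AI enhancement run in parallel.
    gif, enhanced = await asyncio.gather(
        encode_gif(context["frames"]),
        enhance_with_ai(report),
    )
    enhanced["gif"] = gif
    return enhanced  # auto-saved as a persistent artifact in the real pipeline

result = asyncio.run(generate_report({"frames": [b"f1", b"f2", b"f3"]}))
```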
Report Contents
Device Context
Every report includes the device details at the time of generation:
| Field | Description |
|---|---|
| Device name | The connected device identifier (e.g., “iPhone 15 Pro Simulator”) |
| OS version | Platform and version (e.g., “iOS 17.2”, “Android 14”) |
| App name | The application currently running on the device |
| Screen resolution | Device display dimensions in pixels |
| Connection type | How the device is connected (simulator, emulator, USB, WiFi) |
| Timestamp | Exact date and time when the report was generated |
Screenshots
Two visual captures are included:
Static screenshot — A single JPEG or PNG frame from the screenshot ring buffer, base64-encoded and embedded directly in the report. You can specify an offset to grab an earlier frame (useful when the bug already passed and you want to show what the screen looked like when it happened).
Animated GIF — The last 30 frames from the ring buffer, encoded as an animated GIF that plays back the last ~30 seconds of device activity. This shows the reviewer what happened leading up to the bug — which screens the user navigated through, what they tapped, and how the UI responded.
GIF encoding details:
- Scale: 0.25× (phone screenshots like 1206×2622 are scaled down to ~301×655 to keep file size reasonable)
- Frame delay: 500ms per frame (2 FPS playback)
- Color palette: Plan9 256-color with Floyd-Steinberg dithering
- Loop: Infinite
- Size cap: 5 MB maximum — larger GIFs are silently dropped
- Minimum frames: 2 (if fewer frames are available, the GIF is skipped)
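The constraints above can be expressed as a couple of small helpers. This is a sketch of the documented parameters only (scale, size cap, minimum frames); the helper names are illustrative:

```python
GIF_SCALE = 0.25                  # 0.25× downscale
GIF_FRAME_DELAY_MS = 500          # 2 FPS playback
GIF_MAX_BYTES = 5 * 1024 * 1024   # GIFs above 5 MB are silently dropped
GIF_MIN_FRAMES = 2                # fewer frames → GIF is skipped

def gif_output_size(width, height):
    """Scaled pixel dimensions for the encoded GIF."""
    return int(width * GIF_SCALE), int(height * GIF_SCALE)

def should_include_gif(frame_count, encoded_size_bytes):
    """Apply the minimum-frame and size-cap rules."""
    return frame_count >= GIF_MIN_FRAMES and encoded_size_bytes <= GIF_MAX_BYTES

print(gif_output_size(1206, 2622))  # → (301, 655)
```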
Selected Element Information
When a UI element is selected in the inspector at the time of report generation, its details are included:
- Element class, text content, accessibility labels, and content description
- Bounding box coordinates on screen
- All generated automation selectors with reliability scores (Excellent, Good, OK, Fragile)
- Each selector in 6 framework formats: Raw, Appium Java, Appium Python, Maestro, Espresso, XCUITest
This makes it immediately actionable — a developer can copy the selector and use it in their automation framework to target the exact element.
Automatic Findings
Ghost generates rule-based observations from the data:
- Error count — “2 failed requests (2× 500, 404)” with status code breakdown
- Screen context — “Bug observed on Settings screen” (from the last interaction event’s screen name)
- Interaction summary — “User performed 5 taps and 2 scrolls before error” (from device touch monitor)
- Slow request detection — “Request to /api/cart took 4.2s” (flagged at >2 second threshold)
- Fallback — “No errors detected during interaction window” (when no issues are found)
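The rules above can be sketched as one pass over the captured data. The input shapes (dicts with "status", "path", "duration_ms", and "type" keys) are assumptions for illustration:

```python
SLOW_THRESHOLD_MS = 2000  # slow-request threshold (>2 seconds)

def build_findings(flows, interactions, screen_name=None):
    findings = []
    # Error count with status code breakdown
    errors = [f for f in flows if f["status"] >= 400]
    if errors:
        codes = ", ".join(sorted(str(f["status"]) for f in errors))
        findings.append(f"{len(errors)} failed requests ({codes})")
    # Screen context from the last interaction event
    if screen_name:
        findings.append(f"Bug observed on {screen_name} screen")
    # Interaction summary from the touch monitor
    taps = sum(1 for i in interactions if i["type"] == "tap")
    scrolls = sum(1 for i in interactions if i["type"] == "scroll")
    if taps or scrolls:
        findings.append(f"User performed {taps} taps and {scrolls} scrolls before error")
    # Slow request detection
    for f in flows:
        if f["duration_ms"] > SLOW_THRESHOLD_MS:
            findings.append(f"Request to {f['path']} took {f['duration_ms'] / 1000:.1f}s")
    # Fallback when nothing notable was found
    if not findings:
        findings.append("No errors detected during interaction window")
    return findings
```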
Error Detection
Failed requests (status code 400+) are promoted as dedicated error entries:
| Field | Description |
|---|---|
| Status code | The HTTP error code (400, 403, 404, 500, 502, etc.) |
| URL | The full request URL |
| Response body preview | First 200 characters of the error response body, truncated at a UTF-8-safe boundary |
Ghost also detects errors hidden in 2xx responses — APIs that return status 200 but include error content in the body. It scans for patterns like "error":, "errorMessage":, "exception":, "fault":, stacktrace, and panic:, while avoiding false positives for "error": null or "error": false.
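A sketch of both behaviors: the hidden-error scan over 2xx bodies and the UTF-8-safe body preview. The pattern list comes from this page; the regex details and the exact truncation strategy are assumptions:

```python
import re

# Body substrings that suggest an error payload, per the docs.
ERROR_PATTERNS = re.compile(
    r'"(error|errorMessage|exception|fault)"\s*:|stacktrace|panic:',
    re.IGNORECASE,
)
# Avoid false positives for "error": null or "error": false.
FALSE_POSITIVES = re.compile(r'"error"\s*:\s*(null|false)', re.IGNORECASE)

def body_looks_like_error(body: str) -> bool:
    return bool(ERROR_PATTERNS.search(body)) and not FALSE_POSITIVES.search(body)

def preview(body: bytes, limit: int = 200) -> str:
    """Truncate at `limit` bytes; errors="ignore" drops a split multibyte char."""
    return body[:limit].decode("utf-8", errors="ignore")
```

Note this sketch rejects any body containing "error": null/false even if another error marker is present; the real scanner may be more precise.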
Reproduction Steps
Steps are automatically generated by merging interaction events and network traffic into a single chronological timeline. This is the most complex part of the report generator:
Phase 1 — Build timeline: All interaction events (taps, scrolls, text inputs) and network flows are merged and sorted by timestamp.
Phase 2 — Attribute network flows to user actions: For each user action (tap, long press, text change), Ghost computes a time window:
- Start: 200ms before the action (compensates for WDA polling delay on iOS)
- End: Either the next user action’s timestamp or 3 seconds after the action, whichever comes first
All network flows within this window are “attributed” to that action — meaning Ghost determines that the user’s tap likely caused those API calls. Each flow is attributed to at most one action (first trigger wins). Static assets (images, CSS, JS, fonts, SVGs) are automatically filtered out.
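The window logic above can be sketched as follows. Timestamps are epoch milliseconds; the dict shapes are assumptions for illustration:

```python
PRE_WINDOW_MS = 200    # compensates for WDA polling delay on iOS
POST_WINDOW_MS = 3000  # cap when no next action arrives sooner

def attribute_flows(actions, flows):
    """actions and flows are timestamp-sorted lists of dicts with a "ts" key."""
    attributed = [[] for _ in actions]
    used = set()
    for i, action in enumerate(actions):
        start = action["ts"] - PRE_WINDOW_MS
        end = action["ts"] + POST_WINDOW_MS
        if i + 1 < len(actions):
            # Window closes at the next user action, whichever comes first.
            end = min(end, actions[i + 1]["ts"])
        for f in flows:
            if f["id"] in used:
                continue  # each flow is attributed at most once: first trigger wins
            if start <= f["ts"] <= end:
                attributed[i].append(f)
                used.add(f["id"])
    return attributed
```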
Phase 3 — Generate numbered steps:
```
1. [Action] Tap on "Add to Cart" button (12:34:05)
   → POST /api/cart → 500 Internal Server Error
   → GET /api/cart/status → 200 OK
2. [Observation] Error in response to POST /api/cart: {"error":"timeout"} (12:34:06)
3. [Action] Tap on "Retry" button (12:34:08)
   → POST /api/cart → 200 OK
4. [Network] GET /api/recommendations → 200 OK (12:34:09)
```

Three step types:
- Action — user interaction with attributed network flows shown underneath (up to 5 per action, with “and N more” for extras)
- Observation — unattributed error flows or 2xx responses containing error payloads
- Network — unattributed success flows (capped at 15 to prevent flooding)
Duplicate endpoints are collapsed: “…and 3 more /api/analytics requests” instead of listing each one.
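A sketch of that collapsing rule: repeats of the same (method, path) beyond the first are summarized in a single "…and N more" line. The output strings are modeled on the example above:

```python
from collections import Counter

def collapse_duplicates(flows, keep=1):
    """Emit one line per endpoint, plus a summary line for the duplicates."""
    counts = Counter((f["method"], f["path"]) for f in flows)
    seen = Counter()
    lines = []
    for f in flows:
        key = (f["method"], f["path"])
        seen[key] += 1
        if seen[key] <= keep:
            lines.append(f"{f['method']} {f['path']} → {f['status']}")
        elif seen[key] == keep + 1:
            # First duplicate triggers the summary; further repeats are dropped.
            lines.append(f"…and {counts[key] - keep} more {f['path']} requests")
    return lines
```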
Network Waterfall
Timing visualization for every primary network flow in the report timeframe:
| Field | Description |
|---|---|
| Flow ID | Unique identifier for cross-referencing with the traffic list |
| Method + Host + Path | What the request was |
| Status code | Response status |
| Start offset | When this request started, relative to the earliest flow in the report |
| Wait time (TTFB) | Estimated as 60% of total duration — the server thinking time |
| Transfer time | Estimated as 40% of total duration — the download time |
| Total duration | End-to-end request time in milliseconds |
| Response size | Body size in bytes |
The waterfall is rendered as an ASCII chart in the Markdown output and as a visual bar chart in the HTML export.
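The 60/40 wait/transfer split and a simple ASCII bar can be sketched like this; the rendering characters and scale are illustrative, not Ghost's actual output:

```python
def waterfall_row(start_offset_ms, duration_ms, scale_ms_per_char=100):
    """Return (wait, transfer, bar) for one flow in the waterfall."""
    wait = duration_ms * 0.6      # estimated TTFB: server thinking time
    transfer = duration_ms * 0.4  # estimated download time
    pad = " " * round(start_offset_ms / scale_ms_per_char)  # offset from earliest flow
    bar = ("▓" * max(1, round(wait / scale_ms_per_char))
           + "░" * max(1, round(transfer / scale_ms_per_char)))
    return wait, transfer, pad + bar

wait, transfer, bar = waterfall_row(0, 1000)
```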
API Evidence
Non-error primary flows (successful API calls that happened during the bug timeframe) are listed as supporting evidence. This helps the reviewer understand the complete picture: not just what failed, but what the app was doing at the time.
AI Enhancement
When an LLM provider is configured (Anthropic, OpenAI, or Ollama), reports are automatically enhanced with AI analysis. This is an optional improvement layer — the deterministic report is always generated first and is fully functional without AI.
How Enhancement Works
- Ghost builds a compact prompt (~2K tokens) with XML-tagged sections containing: device info, interaction events, up to 3 key hierarchy snapshots (for screen state), up to 50 network request lines (with request/response bodies truncated at 500 bytes), error details, and selected element info
- A one-shot LLM completion call is made (not streaming, not conversational) with a 60-second timeout
- The LLM returns a structured JSON response that replaces specific sections of the report
What AI Replaces
| Section | How AI Improves It |
|---|---|
| Title | Generates a concise, descriptive bug title (e.g., “Add to Cart button returns 500 error on product detail page”) instead of the generic rule-based title |
| Description | Writes a 2-4 sentence analysis of what went wrong and why it matters |
| Findings | Provides evidence-backed observations (e.g., “The checkout API consistently fails with a timeout error when the cart exceeds 10 items”) |
| Reproduction Steps | Reconstructs the full user journey, adding inferred steps marked with inferred: true (e.g., “User browsed product catalog” inferred from product listing API calls) |
What AI Preserves
Screenshots, animated GIF, device context, element details, selectors, waterfall data, API evidence, and raw error data are never modified by AI — only the human-readable narrative sections are enhanced.
Fallback
If the LLM is unavailable, times out, or returns an invalid response, the deterministic report is used as-is. A warning is logged but the user still gets a complete, useful report. The ai_enhanced flag indicates whether enhancement succeeded.
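A minimal sketch of this fallback behavior, assuming a 60-second timeout on the LLM call; function names and report keys are illustrative:

```python
import asyncio

async def enhance(report, llm_call, timeout_s=60):
    try:
        patch = await asyncio.wait_for(llm_call(report), timeout_s)
        # Only the narrative sections are replaced; all other data is preserved.
        report.update({k: patch[k] for k in ("title", "description", "findings", "steps")
                       if k in patch})
        report["ai_enhanced"] = True
    except Exception:
        # LLM unavailable, timed out, or returned garbage:
        # keep the deterministic report as-is (a warning would be logged here).
        report["ai_enhanced"] = False
    return report

async def failing_llm(_report):
    raise TimeoutError("LLM unavailable")

report = asyncio.run(enhance({"title": "Rule-based title"}, failing_llm))
```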
Report Overlay
The bug report opens in a premium full-screen overlay (90% viewport width, max 1200px, 88% viewport height) with a gradient accent bar at the top (cyan → purple → pink).
Layout
```
┌─────────────────────────────────────────────────────────────┐
│ ═══════════════════════ gradient bar ══════════════════════ │
├────────────┬────────────────────────────────────────────────┤
│ Scrollspy  │                                                │
│ Sidebar    │ Editable Title (click to edit)                 │
│            │ Editable Description (textarea)                │
│ ● Overview │                                                │
│ ● Replay   │ ┌──────────────────────────────────────┐       │
│ ● Findings │ │ Animated GIF in phone frame (220px)  │       │
│ ● Errors   │ │ "Last 30 seconds of device activity" │       │
│ ● Steps    │ │ [Download GIF]                       │       │
│ ● Evidence │ └──────────────────────────────────────┘       │
│ ● Element  │                                                │
│ ● Waterfall│ Findings (error/warning/info icons)            │
│            │ Errors (failed requests with body preview)     │
│            │ Reproduction Steps (editable, numbered)        │
│            │ API Evidence (successful flows)                │
│            │ Element Detail (selectors, screenshot)         │
│            │ Network Waterfall (timing bars)                │
├────────────┴────────────────────────────────────────────────┤
│ View: [Visual] [Markdown] [JSON]  Export: [MD] [JSON] [HTML]│
└─────────────────────────────────────────────────────────────┘
```

The scrollspy sidebar highlights the active section as you scroll through the report. Only sections with data are shown — if there’s no selected element, the Element section is hidden.
View Modes
| Mode | What You See |
|---|---|
| Visual | Rich formatted layout with all sections, scrollspy navigation, interactive step editing. This is the default and most useful view. |
| Markdown | Raw Markdown source text. The title and description reflect any edits you’ve made. |
| JSON | Complete JSON structure of the report. Useful for programmatic consumption or debugging. |
Editing Reproduction Steps
Steps are fully editable in the visual view:
- Click a step to edit its description text inline
- Click the type icon (action/observation/network) to cycle through types
- Delete individual steps with the trash button
- Add new steps with the dashed “Add Step” button
- Reset to the original generated steps
- Tab navigates to the next step, Enter or Escape exits editing
- Steps marked as AI-inferred show a dashed left border and purple “inferred” label
Attributed network flows are shown inline under action steps — so you can see exactly which API calls each user action triggered.
Editing Title and Description
Click the title text to enter edit mode — type a new title and click away or press Enter to save. The description is a textarea that’s always editable.
Export Formats
| Format | Method | Description |
|---|---|---|
| Copy Markdown | Clipboard | Full Markdown text with all sections. Title and description reflect your edits. |
| Copy JSON | Clipboard | Complete JSON structure of the report |
| Download .md | File download | Markdown file named bug-report-{timestamp}.md |
| Download HTML | File download | Fully self-contained HTML file with inline CSS, embedded base64 screenshots and GIF, Ghost design tokens, responsive layout. Works offline with zero external dependencies — you can email it to anyone and they can open it in any browser. |
Keyboard shortcut: Cmd+Shift+M copies the Markdown to clipboard instantly.
Persistence
Bug reports are automatically saved as artifacts in Ghost’s database immediately after generation. You don’t need to manually save — every report is preserved.
Each artifact stores:
- Type: bug_report
- Format: json (the full report structure)
- Title and summary (description truncated to 120 characters)
- Device ID and name — which device the report was generated from
- Session ID — which capture session the traffic was scoped to
- Metadata: platform, device name, whether AI enhancement succeeded, error count
- Full content — the complete JSON report including screenshots, GIF, steps, and all data
Bug Reports Panel
Saved bug reports are accessible from the Bug Reports Panel (slide-over). Each report appears as a card showing:
- Title and summary
- Device badge (which device)
- Platform badge (iOS/Android)
- AI badge (if enhanced)
- Error count
- Timestamp and file size
From the panel, you can re-open any saved report in the overlay, download it, or delete it.
A WebSocket event (artifact.created) notifies all connected frontends when a new report is saved — so if you have multiple Ghost windows open, the report appears in the panel immediately.
Agent Integration
The AI agent has a generate_bug_report tool that gathers structured data for bug report generation. This is a data-gathering tool — it collects flows and correlated browser interactions into a structured format that the LLM then uses to compose a report in its response.
The tool accepts an array of flow IDs and an optional format (markdown or jira), fetches each flow with its correlated browser interactions from the extension, and returns a chronological list of steps with action type, selector, element text, page URL, flow details, and error bodies (truncated at 2,048 characters).
This is distinct from the device-level bug report generator — the agent tool works with browser extension interactions (desktop web testing), while the inspector bug report works with mobile device interactions.