Common issues and how to resolve them.
Ollama / LLM issues
"Connection refused" when running investigations
Ollama is not running. Start it in a separate terminal:
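A minimal way to start the server (`ollama serve` runs Ollama in the foreground):

```shell
# Start the Ollama server; leave this terminal open while investigating.
ollama serve
```

Then rerun the investigation from your original terminal.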
LLM responses are slow
Verify that Ollama is using GPU acceleration: run ollama ps and check that the PROCESSOR column reports GPU rather than CPU.
On Apple Silicon Macs, GPU offload is automatic. On Linux, confirm CUDA or ROCm support.
Reduce the token budget if investigations take too long: SSI_LLM__TOKEN_BUDGET_PER_SESSION=30000.
Wrong model loaded
Check the active model:
ssi settings show | grep model
Pull the expected model if missing:
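For example, assuming the configured model is llama3.1 (substitute the name reported by the settings command above):

```shell
# Download the model into the local Ollama cache.
ollama pull llama3.1
```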
WHOIS lookup times out
Some WHOIS servers are slow or rate-limited. Skip the check:
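For example — the ssi investigate subcommand name here is an assumption (check ssi --help), but the --skip-whois flag also appears in the performance notes below:

```shell
# Skip the WHOIS check for this run (subcommand name is illustrative).
ssi investigate https://example.com --skip-whois
```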
Or increase the timeout in config/settings.local.toml:
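A sketch of the override; the table and key names are assumptions, so adjust them to the actual schema of your settings file:

```toml
# config/settings.local.toml — section and key names are illustrative
[checks.whois]
timeout_seconds = 30
```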
VirusTotal check skipped
You need an API key. Get a free one at virustotal.com and set it:
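For example — the variable name below follows the SSI_SECTION__KEY convention used elsewhere in this guide, but it is an assumption; confirm the exact name with ssi settings show:

```shell
# API key variable name is illustrative.
export SSI_VIRUSTOTAL__API_KEY="your-api-key"
```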
urlscan.io returns no results
Without an API key, SSI falls back to the public search endpoint which has limited data. For full results, set:
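As with VirusTotal, the variable name below is an assumption based on this guide's SSI_SECTION__KEY convention; confirm it with ssi settings show:

```shell
# API key variable name is illustrative.
export SSI_URLSCAN__API_KEY="your-api-key"
```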
Browser crashes or hangs
Try disabling the sandbox:
Or increase the page load timeout:
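A sketch of both overrides; the variable names are assumptions following this guide's SSI_SECTION__KEY convention, so confirm them against ssi settings show:

```shell
# Both variable names are illustrative.
export SSI_BROWSER__NO_SANDBOX=true        # disable the Chromium sandbox
export SSI_BROWSER__PAGE_LOAD_TIMEOUT=60   # page load timeout, in seconds
```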
"playwright install" fails
Ensure system dependencies are available. On Ubuntu/Debian:
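Playwright can install the browser together with its required system packages in one step:

```shell
# Installs Chromium plus the apt packages it depends on (may prompt for sudo).
playwright install --with-deps chromium
```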
On macOS, Chromium installs without additional system packages.
Agent gets stuck in a loop
The agent has built-in loop detection and limits: 20 steps max and a configurable token budget. Safety mechanisms include:
Repeated action detection — aborts if the same action fails 3 times.
Scroll loop detection — stops scrolling if page content does not change.
Blank page backoff — retries with progressive delays if the page is empty.
If the agent consistently fails on a specific site, try:
Run a passive-only scan first to understand the page structure.
Create a playbook for the site pattern (see Playbooks).
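The first suggestion can be sketched as follows; both the subcommand and the flag name are assumptions, so check ssi --help for the exact spelling:

```shell
# Run only the passive checks, without the AI agent (names are illustrative).
ssi investigate https://example.com --passive-only
```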
PDF report issues
PDF generation skipped
The native PDF libraries (GLib, Cairo, Pango) are missing. Install them:
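These commands cover the three libraries named above on the two common platforms; exact package names can vary by distribution release:

```shell
# Ubuntu/Debian
sudo apt-get install libglib2.0-0 libpango-1.0-0 libcairo2
# macOS
brew install glib pango cairo
```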
Investigations still complete without these — only PDF rendering is skipped.
Investigation takes too long
Passive-only scans take 30–60 seconds. Full investigations with the AI agent take 2–5 minutes depending on site complexity.
To speed things up:
Skip checks you do not need (--skip-whois, --skip-virustotal).
Reduce the token budget: SSI_LLM__TOKEN_BUDGET_PER_SESSION=30000.
Ensure Ollama uses GPU acceleration.
For batch runs, increase --concurrency to process URLs in parallel.
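The tips above can be combined in one invocation; the ssi batch subcommand name is an assumption, while --concurrency and the --skip-* flags come from this section:

```shell
# Process URLs four at a time, skipping the WHOIS check (subcommand name is illustrative).
ssi batch urls.txt --concurrency 4 --skip-whois
```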
Database grows too large
Investigation results are stored in SQLite (local) or PostgreSQL (dev/prod). To manage local storage:
Delete old evidence directories under data/evidence/.
SSI shares core's database (core/data/i4g_store.db locally, Cloud SQL in cloud). Back up and recreate via i4g bootstrap local reset if needed.
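The two cleanup steps above can be sketched as follows; the 30-day retention window is an example, and the reset command destroys local data:

```shell
# Remove evidence directories older than 30 days (adjust the retention window).
find data/evidence -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} +

# Rebuild the shared local database from scratch (destructive).
i4g bootstrap local reset
```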
Check ssi settings validate for configuration issues.
Run with SSI_DEBUG=true for verbose logging.
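Putting the two diagnostics together (the ssi investigate subcommand name is an assumption; check ssi --help):

```shell
# Validate configuration first, then rerun with verbose logging.
ssi settings validate
SSI_DEBUG=true ssi investigate https://example.com
```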
File issues in the SSI repository for bugs or feature requests.