oPivots

one input, one click

230+ Investigative Adapters
70 Tool Categories
158 Input Types
385 Social Platforms

What is oPivots?

oPivots turns a single data point into a complete intelligence brief in minutes. Provide an email, username, domain, phone number, cryptocurrency address, or any other of the 158 supported input types, and the system automatically investigates it: routing across 230+ adapters, pivoting on discoveries, suppressing noise, scoring confidence, and assembling a comprehensive report with full source attribution. No manual pivoting required.

How does it work?

When you submit a seed, the orchestrator routes it to every adapter that accepts that input type. Each adapter produces artifacts (emails, domains, IPs, social profiles, phone numbers, addresses, organizations) that feed back into the routing engine as new seeds. The system continues pivoting until all leads are exhausted or configurable caps are reached.
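
Conceptually, the pivot loop can be sketched as follows (illustrative Python only; the adapter interface, attribute names, and cap values are hypothetical, not the actual API). The depth and task caps shown here are the kind of limits described below.

    from collections import deque

    def investigate(seed, adapters, max_depth=3, max_tasks=500):
        """Illustrative pivot loop: route each artifact to every adapter that
        accepts its type, then feed discoveries back in as new seeds."""
        queue = deque([(seed, 0)])            # (artifact, pivot depth)
        seen, results, tasks = {seed}, [], 0
        while queue and tasks < max_tasks:    # configurable task cap
            artifact, depth = queue.popleft()
            if depth >= max_depth:            # configurable depth limit
                continue
            for adapter in adapters:
                if artifact.type not in adapter.input_types:
                    continue                  # adapter does not accept this input type
                tasks += 1
                for found in adapter.run(artifact):   # emails, domains, IPs, ...
                    results.append(found)
                    if found not in seen:     # do not re-investigate known artifacts
                        seen.add(found)
                        queue.append((found, depth + 1))
        return results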

Every artifact is tagged with its source, match quality, and full provenance chain. The system scores confidence using weighted group and per-tool reliability ratings, flags contradictions, and produces a layered report with high-confidence findings surfaced first.
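
As a rough illustration of how weighted scoring can combine corroborating sources (the actual ratings and formula are internal; the noisy-OR combination below is an assumption made for the sketch):

    def confidence(observations):
        """Sketch: each observation contributes its group weight, its tool's
        reliability, and its match quality; independent corroboration from
        multiple sources pushes the combined score up."""
        p_all_wrong = 1.0
        for obs in observations:
            p = obs.group_weight * obs.tool_reliability * obs.match_quality
            p_all_wrong *= (1.0 - p)          # chance that every source is wrong
        return 1.0 - p_all_wrong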

Configurable depth limits, task caps, fanout thresholds, and per-tool budgets give you precise control over how aggressively the system pivots. Interactive approval gates let you review and approve high-fanout pivots before they execute.

What investigation categories are covered?

Domains (69): Subdomains, DNS, WHOIS/RDAP, SSL, fingerprinting, capture, backlinks, security posture, threat intel
People (32): Business records, courts, developer profiles, phone validation, vehicles, property, sanctions
Crypto (26): Multi-chain tracing, flow analysis, NFTs, sanctions, contract analysis, mixer detection
IP Addresses (19): Abuse, reputation, geolocation, routing, scanning, cloud detection, hosting attribution
Search (15): AI-powered web search, dorking, paste archives, dark web indexing, reverse image search
Social: Username probing across 385 platforms, public profile scraping, media download, email-to-account checking
Emails (7): Reputation scoring, registration detection, cryptographic key lookup, domain-level analysis
Breaches (6): Aggregated queries across 6 independent sources with deduplication and cross-referencing
Images: OCR, metadata, AI geolocation, face detection and cross-case clustering, tampering detection
Maps: Forward/reverse geocoding, satellite imagery, cell tower mapping, location correlation

What output formats are produced?

Each completed investigation produces:

Intelligence Brief: Structured report with biographical profile, associated persons, communications, digital presence, domain ownership, threat intelligence, and crypto analysis. Available in HTML (print-optimized for PDF) and structured JSON.
Connections Graph: Full provenance-rich entity graph exportable as GraphML (for Gephi, yEd), CSV, and NDJSON.
AI Analysis Artifacts: Structured identity profile, investigation gaps, reasoning chains, confidence calibration, and stylometric assessment (when sufficient authored content exists).
Raw Evidence: Complete API response archive with replay handles for every tool execution, enabling full auditability.

How is the connections graph built?

Every artifact discovered during an investigation becomes a node in the connections graph. Edges represent relationships with full provenance: which adapter discovered the connection, what the match quality was, and what the original evidence was. Nodes are sized by corroboration (how many independent sources confirm them) and colored by type.
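
As an illustration of the structure (using networkx purely for the sketch; it is not necessarily what the product uses internally, and the attribute names are invented):

    import networkx as nx

    G = nx.MultiDiGraph()

    def record(G, parent, artifact, kind, adapter, quality, evidence):
        """Every artifact is a node; every edge carries its provenance."""
        if not G.has_node(artifact):
            G.add_node(artifact, type=kind, corroboration=0)
        G.nodes[artifact]["corroboration"] += 1    # drives node sizing
        G.add_edge(parent, artifact,
                   discovered_by=adapter,          # which adapter found it
                   match_quality=quality,
                   evidence=evidence)              # original evidence reference

    record(G, "alice@example.com", "example.com", "domain",
           adapter="whois_lookup", quality=0.95, evidence="registrant email")
    nx.write_graphml(G, "case.graphml")            # open in Gephi or yEd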

The graph supports interactive exploration with filtering by confidence tier, artifact type, and source tool. It includes force-directed and hierarchical layout modes, cluster detection, and a minimap for navigating large graphs. Exports are available in multiple formats for use in external analysis tools.

Can I ingest existing reports or documents?

Yes. The document ingestion pipeline accepts PDFs, DOCX, HTML, and plain text files. It extracts text (with OCR support for scanned documents), strips classification markings and boilerplate, then uses AI to extract structured entities: person names, emails, phone numbers, addresses, usernames, domains, companies, crypto addresses, VINs, dates of birth, and more.

Extracted entities are returned as seeds grouped by subject and role (owned, affiliated, or contextual), ready to launch as a new investigation or add to an existing case.
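
A toy version of the extraction step, assuming nothing about the real pipeline beyond what is described above (the production path uses AI models plus OCR; regexes here cover only two easy entity types):

    import re

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def extract_seeds(text):
        """Toy extractor returning seeds in a shape like the one described:
        entities grouped by type, ready to launch or attach to a case."""
        seeds = [{"type": "email", "value": v} for v in set(EMAIL.findall(text))]
        seeds += [{"type": "phone", "value": v} for v in set(PHONE.findall(text))]
        return seeds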

Can I group multiple cases together?

Yes. The case grouping feature lets you combine multiple investigations into a single view for cross-case analysis. Shared artifacts (entities appearing across multiple cases) are automatically identified and highlighted. The group view includes a combined connections graph showing relationships between cases and a shared artifact panel for identifying overlapping infrastructure, contacts, or identifiers.
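
Shared-artifact identification reduces to finding entities that occur in more than one case, conceptually like this (a minimal sketch; the case shape and threshold are invented):

    from collections import Counter

    def shared_artifacts(cases):
        """Artifacts appearing in two or more cases of a group."""
        counts = Counter(a for case in cases for a in set(case["artifacts"]))
        return {a for a, n in counts.items() if n >= 2}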

What does the AI analysis do?

After an investigation completes, an 11-stage AI pipeline runs automatically to synthesize everything the tools discovered:

1. Auto-Analysis: Initial triage and categorization of all collected evidence.
2. Structured Profile: Extracts a structured identity profile covering biographical anchors, career history, digital footprint, geographic profile, and family network.
3. Search Queries: Identifies investigation gaps and generates ranked queries for follow-up on leads the automated tools couldn't reach.
4. Reasoning: Builds logical chains connecting artifacts to conclusions, evaluating the strength and independence of each evidence path.
5. Content Collection: Scans all evidence for authored content (social posts, blog articles, forum comments) and groups it by author for downstream analysis.
6. Stylometry: Forensic linguistic analysis extracting writing fingerprints across platforms and assessing authorship consistency. Includes AI-generation detection guardrails.
7. Critique: Adversarial review of earlier reasoning, challenging conclusions and flagging weak evidence.
8. Calibrate: Chain-of-verification on key claims, cross-checking against independent sources and assigning confidence tiers (verified, unverified, contradicted, inferred).
9. Graph Injection: Feeds verified findings back into the connections graph with confidence tiers. Deduplicates against existing nodes.
10. Lateral Synthesis: Second-pass reasoning that searches for nuanced patterns: temporal residue, naming echoes, geographic mismatches, and cross-category links.
11. False-Positive Triage: Evaluates every identity-bearing node against the structured profile using an asymmetric cost model, automatically pruning misattributed findings.

The pipeline is fully automated and runs without manual intervention. Each stage builds on the outputs of prior stages, creating a layered analysis that no single-pass system can replicate.
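
The stage-chaining pattern itself is simple, as this sketch shows (stage names taken from the list above; the function shapes are hypothetical):

    STAGES = ["auto_analysis", "structured_profile", "search_queries",
              "reasoning", "content_collection", "stylometry", "critique",
              "calibrate", "graph_injection", "lateral_synthesis",
              "false_positive_triage"]

    def run_pipeline(evidence, stage_fns):
        """Each stage reads the accumulated context and appends its own
        output, so later stages build on everything before them."""
        context = {"evidence": evidence}
        for name in STAGES:
            context[name] = stage_fns[name](context)
        return context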

All AI calls use Zero Data Retention (ZDR) endpoints exclusively. Your investigation data is never stored, logged, or used for model training by any provider. You select from three model tiers (budget, standard, premium) with transparent per-case cost tracking.

What is the AI Pivot Gate?

The AI Pivot Gate is a real-time cascade detection system that evaluates every dispatched task before execution to prevent catastrophic fan-out. It operates on three tiers:

Tier 1 (Deterministic): Automatically approves passive, low-risk lookups with no AI call needed. This covers the majority of routine tasks.
Tier 2 (Fast Evaluation): A lightweight model evaluates the task against three failure patterns: infrastructure-based cascade (e.g., shared hosting producing hundreds of domains), broken logical continuity (pivoting on data disconnected from the investigation subject), and semantic redundancy (re-investigating data already covered by another tool).
Tier 3 (Adversarial Review): A more capable model performs adversarial review of Tier 2 decisions, requiring concrete evidence citations to justify disagreement.

The gate includes budget enforcement, a circuit breaker that halts after repeated failures, and intelligent caching so approved decisions carry forward without repeated evaluation.
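
The decision flow can be sketched as follows (names and heuristics invented for illustration; the circuit breaker is omitted for brevity):

    from dataclasses import dataclass

    @dataclass
    class Task:
        key: str
        passive: bool            # passive, low-risk lookups skip the AI entirely
        fanout_estimate: int

    def tier2(task):             # stand-in for the lightweight model call
        return task.fanout_estimate < 100   # e.g., reject infrastructure cascades

    def tier3(task, verdict):    # stand-in for the adversarial reviewer, which
        return verdict           # may overturn Tier 2 only with cited evidence

    def gate(task, cache, spent, budget):
        if task.key in cache:                # approved decisions carry forward
            return cache[task.key]
        if spent >= budget:                  # budget enforcement
            return False
        verdict = True if task.passive else tier3(task, tier2(task))
        cache[task.key] = verdict
        return verdict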

What is the interactive AI chat?

After a case completes, an interactive AI chat interface lets you ask questions about the investigation data in natural language. The AI has full context of every artifact, observation, and confidence score from the case.

You can ask it to run additional tools directly from the conversation: for example, to investigate a newly discovered email address or trace a cryptocurrency transaction. Tool executions requested through chat go through the same approval and budget controls as the automated pipeline.

The chat supports file uploads (PDFs, CSVs, text documents) for incorporating external evidence into the analysis, context compression for long sessions, and model tier selection on a per-message basis.

How does false-positive triage work?

After the AI pipeline completes, a dedicated false-positive triage stage evaluates every identity-bearing node in the investigation (social accounts, emails, domains, addresses, crypto wallets) against the structured profile.

Each node is classified as MATCH, NO_MATCH, or UNCERTAIN using an asymmetric confidence model: the threshold for confirming a match (75%) is deliberately lower than for rejecting one (92%), favoring recall over precision so legitimate findings are never incorrectly pruned.
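
In code, the asymmetry is just two unequal thresholds (a minimal sketch using the percentages above):

    def triage(p_match):
        """Asymmetric verdicts: confirming needs 75% confidence, while
        rejecting (pruning) needs 92%, so pruning stays conservative."""
        if p_match >= 0.75:
            return "MATCH"
        if (1.0 - p_match) >= 0.92:          # very sure it is NOT the subject
            return "NO_MATCH"
        return "UNCERTAIN"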

The triage includes contradiction detection, low-attribution evidence veto, citation validation, and subject-confirmed-alias rescue paths. Verdicts with full reasoning are stored per-node, giving the analyst transparency into why each decision was made.

What is stylometric analysis?

When an investigation discovers authored content (social media posts, blog articles, forum comments), the system performs forensic linguistic analysis using an academically grounded methodology. It extracts a writing fingerprint covering function-word distribution, sentence-length patterns, punctuation habits, vocabulary richness, hedging patterns, idiosyncratic expressions, error patterns, and morphology.

This fingerprint is compared across platforms to assess authorship consistency. The analysis includes AI-generation detection guardrails to avoid false positives on edited or formally written content.
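
A toy version of the fingerprint extraction, covering only a few of the cheap features named above (the real methodology is broader; the feature set and word list here are placeholders):

    import re

    FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in", "that", "it", "is", "was"}

    def fingerprint(text):
        words = re.findall(r"[a-zA-Z']+", text.lower())
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        return {
            "function_words": {w: words.count(w) / max(len(words), 1)
                               for w in FUNCTION_WORDS},
            "avg_sentence_len": sum(len(s.split()) for s in sentences)
                                / max(len(sentences), 1),
            "type_token_ratio": len(set(words)) / max(len(words), 1),  # vocabulary richness
            "comma_rate": text.count(",") / max(len(text), 1),         # punctuation habit
        }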

Do you store or hold any data?

No. oPivots is purely an orchestrator. We maintain no in-house data: no proprietary databases, no data lake, and no pre-collected corpus of any kind. The system queries external sources in real time on your behalf, and all results are stored exclusively in your local environment.

Each deployment is an isolated instance running in your own infrastructure. We do not access your cases, seeds, artifacts, or investigation data. The system does not transmit data back to us and does not require a persistent connection to our servers.

The platform does maintain comprehensive local audit trails within your environment: every tool execution, API call, artifact discovery, and analyst action is logged locally with timestamps and operator identity. These logs exist for your compliance and accountability needs. They never leave your infrastructure.

AI analysis features use Zero Data Retention (ZDR) endpoints exclusively. Your investigation data is never stored, logged, or used for training by model providers.

Licensees are responsible for maintaining appropriate use policies, preserving local audit logs in accordance with their organizational requirements, and ensuring compliance with applicable data protection regulations in their jurisdiction (including GDPR, CCPA, and equivalent frameworks).

Is the evidence court-admissible?

The system implements forensic-grade evidence integrity. Every piece of evidence is SHA-256 hashed with a chained hash structure (each entry references the previous hash), creating a tamper-evident audit trail. Timestamps are issued via RFC 3161 Time Stamping Authority (TSA) endpoints, and operator identity (system user, hostname) is logged with each collection event.
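
The chained-hash idea is compact enough to show directly (a minimal sketch; here a local clock stands in for the RFC 3161 TSA timestamp, and the field names are illustrative):

    import hashlib, json, time

    def append_entry(log, evidence, operator, host):
        """Each entry's hash covers the previous entry's hash, so altering
        any record invalidates every hash after it."""
        prev = log[-1]["hash"] if log else "0" * 64
        entry = {"evidence": evidence, "operator": operator, "host": host,
                 "ts": time.time(), "prev": prev}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        log.append(entry)
        return entry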

Every tool execution is recorded with its full API request/response payload and a replay handle, enabling any finding to be independently verified. The evidence package is designed to satisfy FRE 901(b)(9) and 902(13)/(14) authentication requirements.

Who can use oPivots?

oPivots is available to qualified investigation programs, both commercial and governmental: corporate investigation teams, due-diligence and compliance firms, executive-protection programs, vetted investigative journalists, law enforcement agencies, and other organizations with a demonstrable ethical purpose. All users undergo a verification process before deployment. The system is designed for lawful investigations with appropriate authorization.

Important: oPivots is not certified for employment screening, tenant vetting, or any consumer-facing background check purpose. The platform may not be used for these purposes under any circumstances.

Does oPivots perform mass scraping or use fake accounts?

No. oPivots is not a mass scraping platform and is neither designed for nor capable of bulk data harvesting. The system collects only publicly available information relevant to a specific investigation seed, one case at a time, using ethical collection methods.

No fake accounts, impersonation, or deceptive access methods are used at any point in the collection process. All social media data is gathered from public profiles and public endpoints only. The system does not log in to platforms on behalf of the user, does not create accounts, and does not bypass access controls.

All platform activity is recorded in local audit logs. Misuse of the platform for mass surveillance, bulk scraping, or any purpose inconsistent with targeted lawful investigation constitutes a violation of the license agreement and may result in termination of access.

What does deployment look like?

We work with each client to determine the deployment model that best fits their operational requirements. Options include a dedicated server provisioned and configured by us, deployment onto your existing infrastructure, or hybrid arrangements tailored to your security and compliance needs.

Regardless of model, the process is the same: we install and configure the system, set up integrations, verify everything is working, then transfer full control. You manage your own cases, configure your own credentials, and operate independently from that point forward.

Updates are delivered as versioned releases that you can apply at your discretion. We provide ongoing support but do not access your investigation data. All operational activity is logged locally within your environment for your own compliance and audit purposes.

Can the platform integrate custom or client-specific data sources?

Yes. Data integration is a standard part of every engagement.

If your organization has internal data structures (case management systems, evidence repositories, proprietary databases, custom feeds) that should be queried automatically during investigations, we can build adapters that connect them to the orchestration engine. A client-specific source becomes a first-class entry in the routing table alongside the 230+ existing adapters, participating in pivoting and correlation like any other tool.

If there is a third-party source you rely on that is not currently supported, we can add it. The adapter architecture is designed to make new integrations straightforward: a new source typically maps to a new adapter with defined input types and output artifacts, wired into the existing routing and gate logic.
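
In shape, a custom adapter is just a declared set of input types plus a function that emits typed artifacts, along these lines (entirely hypothetical names; the real adapter contract is defined during the engagement):

    from dataclasses import dataclass

    @dataclass
    class Artifact:
        type: str            # "email", "domain", "person", ...
        value: str
        source: str          # which adapter found it
        match_quality: float

    def query_internal_db(value):
        return []            # stand-in: wire this to your case management system

    class InternalCaseDBAdapter:
        name = "client_case_db"
        input_types = {"email", "phone"}

        def run(self, seed):
            for row in query_internal_db(seed.value):
                yield Artifact("person", row["name"], self.name, row["score"])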

Every engagement begins with a scoping conversation to identify integration needs upfront. We work directly with clients to make sure the platform covers what their investigations actually require, whether that means connecting internal systems, adding third-party sources, or adjusting routing and gate behavior for specialized use cases.