Your operational data holds the answers. Find them.

Operational systems generate the data that predicts failures and guides spending. Most companies have that data scattered across systems that do not talk to each other. SCRAPE unifies it, correlates it, and delivers the intelligence your operators need to make decisions with confidence.

Works with all your operational systems

SCRAPE reads from the operational systems you use today. No replatforming, no data migration, no rewrites.

For your IT team: Each connector authenticates against the target system using credentials you control. Adding more ticketing sources is an engineering task during the discovery phase, not a platform limit.
For your IT team: Certificate-based authentication against Microsoft Entra. We scope to a specific site, drive, and folder path rather than tenant-wide access.
For your IT team: Preferred mode is an EC2 or ECS task role with no credentials stored in our configuration. Explicit access keys are supported when role-based access is not available. Prefix scoping and recursive enumeration are both configurable.
For your IT team: PDFs are split into pages automatically for downstream extraction. JSON sources auto-detect standard schemas or accept a field mapping you supply.
For your IT team: Connections are first-class records. Credentials are stored per-connection with sensitive fields redacted on read. Every connection has a test endpoint you can hit before committing to a pipeline run.
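The "test before you run" pattern above can be sketched in a few lines. This is an illustrative sketch, not SCRAPE's actual API: the `Connection` class and `test()` method names are assumptions standing in for the per-connection test endpoint.

```python
class Connection:
    """Hypothetical connection record with a pre-flight test call."""

    def __init__(self, name, check):
        self.name = name
        self._check = check  # callable that probes the source system

    def test(self):
        """Probe the source system; return (ok, message) without
        starting a pipeline run."""
        try:
            self._check()
            return True, f"{self.name}: connection OK"
        except Exception as exc:
            return False, f"{self.name}: {exc}"
```

Running the test endpoint before committing to a pipeline run means credential and scoping mistakes surface in seconds rather than mid-ingest.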

Security and identity management

Plug SCRAPE into the identity provider your organization already runs.

For your security team: Standard OIDC authorization-code flow with discovery-document fetch, signed state token, configurable scopes, and a configurable groups claim. Group membership at login drives role assignment through a mapping you control in the admin UI.
For your security team: SP-initiated flow with signed AuthnRequests, an SP metadata endpoint your IdP admin can import directly, a configurable IdP x509 certificate, SP certificate and private key, and a configurable clock-skew tolerance.
For your security team: RFC 7643 and RFC 7644 compliant. Users are created on assignment, updated on profile change, and deactivated on unassignment. Deactivation is enforced at request time, not just at next login, so existing sessions are blocked within seconds. Every SCIM operation is recorded in the audit log with the SCIM client fingerprint as the actor.
For your security team: Roles are evaluated on every API call. New users authenticated through SSO without a matching group mapping land in a pending state with no data access until an admin promotes them.
For your security team: Secrets redacted on read across the admin UI and the API. Bearer tokens compared in constant time to prevent timing attacks.
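The constant-time comparison mentioned above is a standard defense: a naive `==` on strings can return early at the first mismatched character, and the resulting latency difference leaks how much of a token an attacker has guessed. Python's standard library provides this directly via `hmac.compare_digest`; the wrapper function name here is illustrative.

```python
import hmac

def token_matches(presented: str, stored: str) -> bool:
    # compare_digest runs in time independent of where the first
    # mismatch occurs, so response latency leaks nothing about
    # how many leading characters matched.
    return hmac.compare_digest(presented.encode(), stored.encode())
```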

Your infrastructure, your terms

You pick the deployment model. We support all of them.

For your IT team: Deployment is container-based with managed dependencies. Every environment we have shipped to runs on infrastructure the customer owns and administers.
For your IT team: Nothing leaves your network at any point. Fully self-contained deployment with no external dependencies.

Choose your LLM, choose your cloud

SCRAPE works with whichever LLM provider fits your security posture, budget, and procurement.

For your IT team: Provider, model, endpoint URL, and credentials are tenant-scoped configuration. Switching providers is a configuration change, not a redeployment.
For your IT team: One team can run on a self-hosted model while another runs on a cloud provider, in the same instance. Provider, model, endpoint URL, and credentials are tenant-scoped configuration.

Never modifies your data

SCRAPE never writes back to your source systems. It reads, correlates, and writes its findings to a results database that you own.

For your security team: The connector layer exposes only read operations against source systems. Any modification to your operational data would require a code change to the source connector class, which is reviewable in our delivery process.
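The read-only boundary described above can be sketched as an interface that simply has no write surface. Class and method names here are illustrative, not SCRAPE's actual connector class; the design point is that a write to a source system would require adding a new method, which is visible in code review.

```python
from abc import ABC, abstractmethod

class SourceConnector(ABC):
    """Illustrative read-only connector interface: the base class
    defines read operations only, so no subclass gains write access
    to a source system without a reviewable code change."""

    @abstractmethod
    def list_records(self, since):
        """Enumerate records changed since a timestamp."""

    @abstractmethod
    def fetch_record(self, record_id):
        """Read one record by id."""

    # Note: no create/update/delete methods exist on this interface.
```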

Audit-ready operations

Every action is logged, every credential is protected, every deployment is controlled.

For your security team: The audit log is queryable from the admin UI with filters for actor, event type, table, and date range. Distinct event names and table names are exposed as enumerated lookups. Audit entries are immutable from the application layer.
For your security team: Sensitive fields show as a redaction sentinel on read. Updates that send the sentinel back preserve the existing value, so admin UI round-trips can never accidentally clear a credential. Per-source sensitive-field sets are defined in code.
For your security team: Credential-rotation procedures align with the controls that AS9115 and SOC 2 customers expect.
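The redaction-sentinel round trip described above is a small but important merge rule: an update that echoes the sentinel back means "unchanged," so a read-modify-write from the admin UI can never blank a stored credential. A minimal sketch, with the sentinel value and field names assumed for illustration:

```python
REDACTION_SENTINEL = "********"
SENSITIVE_FIELDS = {"password", "api_key"}

def apply_update(stored: dict, update: dict) -> dict:
    """Merge an update into a stored record, treating the redaction
    sentinel on a sensitive field as 'keep the existing value'."""
    merged = dict(stored)
    for key, value in update.items():
        if key in SENSITIVE_FIELDS and value == REDACTION_SENTINEL:
            continue  # sentinel means unchanged: preserve stored secret
        merged[key] = value
    return merged
```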

Dashboards for the questions you are asking

Auto-provisioned on deployment, customized to match your data sources and priorities.

01  Fleet Overview: All assets on a unified timeline with event annotations from connected sources.

02  Asset Sessions: Per-asset operational session history with correlated event markers.

03  Telemetry Detail: Deep-dive into any metric with cross-system event overlays.

04  Correlation Detail: Per-event detail showing how metrics changed and by how much.

05  Events by Asset: Filterable event table with component breakdowns and measured impact.

06  Component Lifecycle: State timeline showing generation boundaries and operational hours per generation.

07  Operational Hours: Cumulative hours per asset with component swap markers.

All dashboards use template variables. Filter by asset, component category, date range, or individual event.

Statistically rigorous, operationally useful

Every flagged change is backed by evidence, not a hunch, with a high bar before anything reaches your team.

Per-event evidence

Drill into any flagged event for the full breakdown of what changed and by how much. Borderline or noisy results are filtered out before they get to you.

Generation tracking

For every tracked entity, see install dates, replacement dates, total operational hours, cycle counts, and current status across each generation.

Predictive foundation

A structured, labeled dataset derived from your unstructured records, ready to feed your own predictive analytics and ML pipelines.

Connect your tools and data pipelines

REST API

Every capability in the admin UI is available as a REST endpoint.

SQL access

Query directly, connect BI tools, or build custom dashboards.

Schema discovery

New tables and columns appear in the configuration UI automatically.

Purpose-built by Oberth engineers

SCRAPE is not a self-serve SaaS product. Every deployment is purpose-built by Oberth Systems engineers to extract the correlations that matter to your business.

The platform underneath is the same across every engagement. What varies is which sources connect, which correlations matter, and how the dashboards are arranged for your operators.

Ready to explore?

Start a discovery phase to understand your data landscape, the correlations we can unlock, and a clear deployment roadmap.

Request a discovery phase