Platform

SurgeXi is being built as a platform, not a one-off services site.

The public experience is designed to support commercial use now while preserving a clean path to customer portals, AI interaction, analytics, integrations, and partner enablement.

Edge Platform Standard

The Helix line becomes the enterprise edge foundation.

SurgeXi is standardizing around the Helix node family as the primary enterprise edge hardware path, with smaller and larger supporting node classes where the environment needs a different fit.

Smart property edge

Put private inference and site automation closer to the buildings, homes, and occupancy signals being monitored.

Fleet and field edge

Run local orchestration, dispatch support, and intake intelligence where crews and vehicles are operating in real time.

Enterprise control edge

Use higher-performance nodes in control rooms and operational hubs to anchor heavier analytics and orchestration workloads.


Compact

SurgeXi Ai v1 Compact

The compact node is the distributed edge workhorse for private inference, signal capture, and site-level autonomy where hardware has to be deployed closer to the operation.

Fleet vehicles, smart properties, remote branches, field-side deployments

  • Ideal for lighter local models and operational routing
  • Supports privacy-sensitive local decision layers
  • Designed for broad rollout across many locations

Performance

SurgeXi Ai v1 Performance

The performance node is the larger edge unit for stronger analytics, multi-stream processing, orchestration, and higher-headroom local AI execution.

Control rooms, facility cores, enterprise branches, heavier onsite workloads

  • Better fit for heavier orchestration and local reasoning
  • Supports more demanding onsite intelligence layers
  • Serves as the high-capacity edge compute tier for enterprise environments

Extended

Supporting node classes

SurgeXi can pair the core Helix enterprise line with smaller and larger node classes where the environment calls for lower-cost endpoints or higher-density edge compute.

Larger and smaller supporting models around the core Helix fleet

  • Right-size the hardware to the customer environment
  • Preserve a consistent operating model across node classes
  • Keep branding, management, and deployment standards unified

Build Roadmap

Phase 1

Marketing and Lead Intake

Public-facing pages, solution positioning, commercial inquiry flow, and a clear top-level operating story.

Phase 2

Portal Foundation

Authenticated routes, account-aware navigation, role separation, and internal service surfaces for customers and partners.
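
The role separation described above can be sketched in code. This is a minimal illustration, not the actual SurgeXi implementation: the role names and route prefixes are assumptions chosen only to show the pattern of mapping each role to the portal surfaces it may enter.

```typescript
// Hypothetical roles and route prefixes for illustration only.
type Role = "customer" | "partner" | "admin";

// Each role may only enter routes under its allowed prefixes.
const allowedPrefixes: Record<Role, string[]> = {
  customer: ["/portal/account", "/portal/reports"],
  partner: ["/portal/partners"],
  admin: ["/portal"], // admins can reach any portal surface
};

function canAccess(role: Role, path: string): boolean {
  // A path is allowed if it equals a prefix or sits underneath one.
  return allowedPrefixes[role].some(
    (prefix) => path === prefix || path.startsWith(prefix + "/")
  );
}
```

A guard like this, evaluated in routing middleware, keeps account-aware navigation and internal service surfaces cleanly separated per role.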

Phase 3

Operational AI Layer

AI-guided workflows, reporting interactions, and business-task automation tied to actual customer environments.

Phase 4

Platform and Partner Expansion

White-label enablement, API-backed integrations, tenant-aware services, and recurring SaaS delivery.

Recommended rollout path

Launch the marketing and lead-intake site first. Keep route architecture and deployment decisions compatible with a future authenticated application shell and a managed edge hardware layer.
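
One way to keep Phase 1 routing compatible with the later authenticated shell is to reserve a portal namespace from day one. The sketch below is illustrative, not the actual SurgeXi route table: the specific paths and the "/app" prefix are assumptions, but the pattern (public marketing routes at the top level, a reserved prefix held open for the portal) is what preserves URL stability across phases.

```typescript
// Illustrative Phase 1 public routes; real paths may differ.
const publicRoutes = ["/", "/solutions", "/platform", "/contact"];

// Reserved now so the Phase 2 portal can mount here without
// reshuffling any existing public URLs.
const PORTAL_PREFIX = "/app";

function routeKind(path: string): "public" | "portal" | "unknown" {
  if (path === PORTAL_PREFIX || path.startsWith(PORTAL_PREFIX + "/")) {
    return "portal"; // future authenticated application shell
  }
  return publicRoutes.includes(path) ? "public" : "unknown";
}
```

Because the portal prefix is reserved before any portal exists, deployment decisions (redirects, CDN rules, auth middleware) can be scoped to that prefix later without touching the marketing site.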

Discuss the Build Roadmap