AI Transparency
Uptimer uses locally executed intelligence to assist monitoring operations. We do not depend on paid model APIs or external LLM services for these features.
What "Powered by AI" Means in Uptimer
- Incident summaries are generated from your monitor events and check telemetry.
- Incident root-cause hints are inferred locally; the top one or two categories (DNS, TLS, Network, Application, Auth, Rate-limit) are reported with confidence scores.
- Latency anomaly detection is computed with local statistical methods.
- Status summaries are produced from monitor configuration, checks, and events.
- Predictive insights are produced by local ML.NET models to estimate incident risk, likely failure type, and uptime forecasts.
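As an illustration of the latency anomaly detection mentioned above, a rolling z-score over recent samples is one common local statistical method. This is a minimal sketch under assumed parameters (`z_threshold`, `min_samples`); Uptimer's actual technique and thresholds are not specified here.

```python
from statistics import mean, stdev

def latency_anomaly(history_ms, latest_ms, z_threshold=3.0, min_samples=20):
    """Flag the latest latency sample as anomalous when it deviates more
    than z_threshold standard deviations from the recent baseline.
    Names and defaults are illustrative, not Uptimer's real settings."""
    if len(history_ms) < min_samples:
        return False  # too little history for a stable baseline
    mu = mean(history_ms)
    sigma = stdev(history_ms)
    if sigma == 0:
        return latest_ms != mu  # flat baseline: any change is notable
    return abs(latest_ms - mu) / sigma > z_threshold
```

Because the computation is a pure function of stored samples, flagged anomalies can be re-derived and audited later from the same telemetry.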
How We Built It
Uptimer implements its own monitoring intelligence pipeline in the application itself. The system combines deterministic logic, statistical analysis, and local ML.NET models to describe health, detect degradation, summarize incidents, and generate predictive signals.
This is intentionally engineered for reliability and explainability: the same inputs always produce the same outputs, and behavior can be audited from stored telemetry.
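A sketch of what such a deterministic, auditable summary step can look like: a pure function of stored telemetry, so replaying the same inputs reproduces the same text. The function and field names here are illustrative assumptions, not Uptimer's real API.

```python
def summarize_status(monitor_name, checks):
    """Produce a deterministic one-line health summary from recent checks.
    Pure function of its inputs: identical telemetry yields identical text."""
    total = len(checks)
    if total == 0:
        return f"{monitor_name}: no recent checks"
    successes = sum(1 for c in checks if c["up"])
    pct = 100.0 * successes / total
    return f"{monitor_name}: {pct:.1f}% of last {total} checks succeeded"
```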
Predictive Pipeline (ML.NET)
- Incident Risk Score: predicts the probability of an Up-to-Down transition within the next hour and within the next 24 hours.
- Uptime Forecast: predicts expected uptime for the next day and the next week, with a 7-day daily forecast curve.
- Early Warning State: labels monitor posture as Stable, Degrading, or Critical.
- Likely Incident Type: predicts the next likely class (Timeout, DNS, HTTP 5xx, Connection Refused).
- Feature Store: monitor prediction snapshots are persisted with model version, confidence, and notes.
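A feature-store entry covering the outputs above might be shaped roughly like the following record. The documented fields (model version, confidence, notes) come from this page; every other field name is an assumption about the schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class PredictionSnapshot:
    """Illustrative per-monitor prediction snapshot; not Uptimer's
    actual persistence schema."""
    monitor_id: int
    scored_at_utc: str
    risk_1h: float              # probability of Up-to-Down in the next hour
    risk_24h: float             # probability of Up-to-Down in the next 24 hours
    uptime_forecast_day: float
    uptime_forecast_week: float
    warning_state: str          # "Stable" | "Degrading" | "Critical"
    likely_incident_type: str   # "Timeout" | "DNS" | "HTTP 5xx" | "Connection Refused"
    model_version: str = "mlnet-predictive-v1"
    confidence: float = 0.0
    notes: str = ""
```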
Incident Root-Cause Hints
- Root-cause hints are tenant-scoped and generated only when both AI incident summaries and AI root-cause hints are enabled for that tenant.
- Signals include checker outputs (timeouts, HTTP codes, error patterns) and status-event recurrence patterns.
- Uptimer stores the top one or two inferred categories, along with confidence scores, and surfaces them in incident summary views.
- Hints are advisory and explainable; they do not auto-remediate or modify monitor settings.
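One way checker outputs could map to the top one or two categories with confidence, sketched below. The pattern table and scoring are invented for illustration; Uptimer's real rules and weights are not public.

```python
from collections import Counter

# Illustrative pattern-to-category mapping (not Uptimer's actual rules).
PATTERN_CATEGORIES = {
    "dns": "DNS", "nxdomain": "DNS",
    "tls": "TLS", "certificate": "TLS",
    "timeout": "Network", "connection refused": "Network",
    "500": "Application", "502": "Application", "503": "Application",
    "401": "Auth", "403": "Auth",
    "429": "Rate-limit",
}

def root_cause_hints(error_messages, top_n=2):
    """Score categories by pattern occurrences in checker output and
    return up to top_n (category, confidence) pairs."""
    scores = Counter()
    for msg in error_messages:
        low = msg.lower()
        for pattern, category in PATTERN_CATEGORIES.items():
            if pattern in low:
                scores[category] += 1
    total = sum(scores.values())
    if total == 0:
        return []  # no recognizable signal: emit no hint
    return [(cat, round(n / total, 2)) for cat, n in scores.most_common(top_n)]
```

Because every hint traces back to concrete matched patterns, the result stays explainable rather than opaque.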
Retraining and Runtime Behavior
- Models retrain on first boot before regular scoring starts.
- After first boot, retraining runs nightly.
- Scoring runs hourly and stores the latest snapshot per monitor per hour.
- When historical data is sparse, Uptimer falls back to deterministic heuristics instead of forcing low-confidence ML output.
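The sparse-data fallback described above can be sketched as a simple gate on sample count: score with the model only when enough history exists, otherwise use the heuristic and tag the snapshot's source. The threshold value here is an assumption, not Uptimer's published setting.

```python
MIN_SAMPLES_FOR_ML = 50  # assumed threshold; the real value is not published

def score_monitor(sample_count, ml_score_fn, heuristic_fn):
    """Prefer the trained model when history is sufficient; otherwise
    fall back to a deterministic heuristic, recording the source."""
    if sample_count >= MIN_SAMPLES_FOR_ML:
        return {"source": "ml", "risk_1h": ml_score_fn()}
    return {"source": "fallback", "risk_1h": heuristic_fn()}
```

Tagging the source per snapshot is what makes aggregate metrics like the fallback ratio below computable.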
Nightly Retraining Status
Last Model Trained (UTC): 2026-03-09 23:32:20Z
Next Nightly Retrain ETA (UTC): 2026-03-10 23:32:20Z
Last Retrain Attempt (UTC): 2026-03-09 23:32:20Z
Last Scoring Snapshot (UTC): 2026-03-09 23:00:00Z
Was Last Run Initial Boot: Yes
Model Version: mlnet-predictive-v1
Candidate Tenants (Last Run): 1
Tenants Trained (Last Run): 1
Tenants Failed (Last Run): 0
Total Training Samples (Last Run): 332
Total Training Monitors (Last Run): 90
Snapshots (Last 24h): 156
Distinct Monitors Scored (24h): 88
ML-backed Snapshots (24h): 18
Fallback Snapshots (24h): 138
Fallback Ratio (24h): 88.5%
Avg Training Samples (ML-backed): 323
This panel is aggregate-only and excludes monitor names, endpoints, payloads, and any tenant-identifying content.
Auto-refresh interval: 30 seconds.
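The panel's derived fields follow directly from the raw counts: fallback snapshots are total snapshots minus ML-backed ones, and the fallback ratio is that count over the total (138 of 156 gives 88.5%). A minimal sketch, with illustrative names:

```python
def panel_derived_metrics(snapshots_24h, ml_backed_24h):
    """Derive the panel's fallback fields from raw counts."""
    fallback = snapshots_24h - ml_backed_24h
    ratio = 100.0 * fallback / snapshots_24h if snapshots_24h else 0.0
    return {"fallback_snapshots": fallback, "fallback_ratio_pct": round(ratio, 1)}
```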
No External LLM Dependency
- No paid model providers are required.
- No monitor payloads are sent to third-party AI endpoints.
- No secret headers, credentials, or response bodies are required for summaries.
- AI features continue to operate in local/offline environments.
Safety Boundaries
- Minimum data thresholds are required before ML predictions are treated as full-confidence model output.
- Telemetry is sanitized before summary generation.
- Inputs are capped to bounded windows to avoid runaway processing.
- AI output is informational only and does not execute tools or mutate monitor config automatically.
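The sanitization and bounded-window boundaries above can be sketched as a pre-processing step applied before any summary generation. The cap size and redaction patterns here are illustrative assumptions, not Uptimer's exact rules.

```python
import re

MAX_EVENTS = 500  # assumed cap bounding work per summary run

# Redact credential-like "key: value" strings (illustrative patterns only).
SECRET_PATTERN = re.compile(r"(authorization|api[-_]?key|token)\s*[:=]\s*.+", re.I)

def sanitize_events(events):
    """Cap the input window and redact credential-like strings so
    secrets never reach the summarizer."""
    bounded = events[-MAX_EVENTS:]  # keep only the most recent events
    return [SECRET_PATTERN.sub(r"\1: [redacted]", e) for e in bounded]
```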