Uptimer Capability Evidence

Stats for Nerds

Enterprise evidence pack combining measured benchmark runs, modeled capacity projections, and live anonymized runtime telemetry.

Evidence Run: 20260309-233210 Recorded: 2026-03-09 23:32:47 UTC Latest local evidence bundle

Loaded from artifacts/test-reports/performance/20260309-233210

Section 1: Evidence Classification

🔵 Measured Created directly from benchmark artifacts in artifacts/test-reports/performance/<runId>, including execution-throughput, scheduler-scale, and execution-reliability scenario outputs. Values are parsed from raw JSON and surfaced without projection math.

🟡 Modeled Built from measured baseline plan throughput (plan-capacity-projection) and then scaled using explicit controls: selected plan capacity, active-tenant count, and deployment multiplier (cloud = 1.0x, self-hosted = selected factor). Daily and weekly values are deterministic transforms from checks/sec.

🟢 Live Computed from current production database counters over rolling windows (for example checks completed in the last 24 hours), then normalized into rates (checks/sec) and load ratios against modeled capacity. These are anonymized aggregate telemetry snapshots, not synthetic test output.

🟣 Theoretical Envelope Derived from the max-usage-envelope scenario by applying declared workload bounds (min/max concurrent checks and fastest/slowest intervals) to produce upper-limit demand and capacity envelope estimates. Use this as a planning boundary, not an observed runtime guarantee.

Section 2: Platform Status Overview

Latest Evidence Run 🔵 Measured

2026-03-09 23:32 UTC

Timestamp of the latest evidence bundle used for this report.

Result 🔵 Measured

PASS

Evidence integrity verified; reliability within control limits.

Observed Sustained Throughput 🔵 Measured

76.3 checks/sec

Sustained execution rate from the measured benchmark workload.

Reliability Control Status 🔵 Measured

Within target threshold

CV target: 0.350, observed CV: 0.288.

Scheduler Saturation 🟡 Modeled

0.0%

Derived from observed queue size relative to expected queue burst target.

Queue Fairness 🔵 Measured

Verified

No tenant starvation means no tenant queue was continuously deprioritized while others progressed.

System State 🟢 Live

Stable under multi-tenant load

Posture derived from live throughput and absence of queue pressure indicators.

Conclusion: Uptimer demonstrates predictable scaling behavior under bounded concurrency with significant headroom relative to current production usage.

Hardware Profile of Evidence Run

🔵

CPU Class: 12 logical cores

Memory Class: 62.7 GiB

Database Engine (Benchmark Profile): SQLite

Storage: Local fixed volume

OS Family: Ubuntu 24.04.4 LTS

Runtime: .NET 10.0.3

Section 3: Measured Performance Validation

Observed Execution Throughput 🔵

76.3 checks/sec

Measured sustained throughput during controlled 400-monitor parallel test (parallelism: 6). Represents stable steady-state processing rate without queue backlog.

Reliability P95 Throughput i 🔵

97.7 checks/sec

95th percentile throughput across 3 independent benchmark rounds. Coefficient of Variation: 0.288. Lower CV indicates better repeatability.

Scheduler Fairness i 🔵

Verified

40 tenants simulated. Queue burst target achieved at 100.0% expected fill. Tenant starvation means one tenant repeatedly waits while others continue to run.

Section 4: Modeled Multi-Tenant Capacity Projection

Modeled

These figures are projections derived from measured baseline throughput and scaled concurrency assumptions. They represent expected sustainable throughput under equivalent hardware scaling.

Deployment Mode

Selected Tenant Scale (Active Tenants = 300)

Per-Tenant Modeled Capacity 🟡

66.7 checks/sec

Per-Tenant Modeled Capacity (BUSINESS plan).

Modeled Throughput at Selected Scale 🟡

20,000.1 checks/sec

Selected-tenant throughput using current controls and deployment mode.

Daily Projection (Selected Scale) 🟡

1,728,008,640 checks/day

Modeled checks/sec multiplied by 86,400 seconds/day.

Weekly Projection (Selected Scale) 🟡

12,096,060,480 checks/week

Modeled checks/day multiplied by seven days.

Safety Margin vs Measured Test (Selected Scale) 🟡

262.3x

Selected-scale modeled throughput divided by measured execution throughput.

Reference Scale (300 tenants, benchmark baseline)

Modeled Multi-Tenant Cloud Baseline 🟡

20,000 checks/sec

At-scale benchmark baseline throughput from plan evidence.

Daily (Reference Scale) 🟡

1,728,000,000 checks/day

Reference checks/sec multiplied by 86,400 seconds/day.

Weekly (Reference Scale) 🟡

12,096,000,000 checks/week

Reference checks/day multiplied by seven days.

Section 5: Workload Coverage Analysis

Theoretical Envelope

The selected workload Theoretical Envelope (100 monitors at 30s) requires 3.3 checks/sec. Current modeled capacity 🟡 Modeled provides 20,000.1 checks/sec. This configuration provides 6,000.0x excess capacity over this workload.

Requirement Gap: -19,996.8 checks/sec (required - capacity; negative means surplus capacity).

Operationally, this demand can be absorbed without measurable queue pressure when model assumptions hold.

Section 6: Scheduler Concurrency and Backpressure Controls

Scheduler fairness uses tenant round-robin batching so due checks are distributed across tenants before deep-draining any single tenant backlog.

Location routing is stress-aware: automatic placement picks the lowest-stress location within each tenant's plan-allowed slots, across built-in globals and configured nodes.

Current configured location slots: 8 (includes cloud built-ins plus configured node locations).

Execution control combines global/per-tenant/per-host limits with bounded queueing and per-monitor de-duplication to prevent slot starvation and duplicate in-flight runs.

Concurrent Check Slots i 🔵

30

Configured number of simultaneous checks in the scheduler benchmark.

Queue Burst Capacity 🔵

120

Expected aggregate burst target (`maxConcurrentChecks × 4`) used to validate burst absorption across queue partitions.

Observed Aggregate Queue Fill i 🔵

120

Observed aggregate queue count at benchmark time (100.0% of target).

Active Queue Partitions 🔵

3

Distinct queue partitions sampled during the scheduler benchmark (global locations plus node queues).

Queue Distribution Imbalance 🔵

0

Difference between busiest and lightest sampled queue partition during the burst window.

Max Observed Queue Delay 🟡

1.57s

Derived upper-bound queue wait using worst sampled partition depth and per-partition throughput estimate.

Average Queue Wait Time i 🟡

0.79s

Derived midpoint wait under even queue service, used as a practical waiting-time indicator.

Longest Queue Drain Time 🟡

1.57s

Derived drain time for worst partition burst at estimated per-partition throughput.

Section 7: Plan Throughput Visualization

Plan rows lead with service envelope and intended profile; throughput values below are modeled at-scale cloud baseline values from benchmark evidence.

Plan Max Monitors Min Interval Modeled At-Scale Throughput (Cloud Baseline, 300 tenants) Intended Usage Profile
FREE 50 120s 125.0 checks/sec 🟡 Low-frequency non-critical checks
SOLO 100 60s 500.0 checks/sec 🟡 Small workloads with moderate cadence
TEAM 250 30s 2,500.0 checks/sec 🟡 Shared team operations and service groups
BUSINESS 1,000 15s 20,000.0 checks/sec 🟡 High-density multi-service production monitoring

Section 8: Current Production Telemetry Snapshot

This is real runtime data, aggregated and anonymized. It is not synthetic test data.

Current Production Load 🟢

0.0848 checks/sec

Derived from rolling 24-hour completed checks (7,323 in last 24h).

Tested Sustained Load 🔵

76.3 checks/sec

Measured controlled benchmark throughput.

Modeled Multi-Tenant Cloud Baseline (300 tenants) 🟡

20,000 checks/sec

At-scale modeled throughput under benchmark tenant assumptions.

Load-to-Capacity Ratio 🟢

0.000424%

Current production throughput as a percentage of modeled capacity.

Live stats recorded at 2026-03-10 00:16:10 UTC

Risk and Constraints

  • Modeled figures assume similar hardware, network conditions, and monitor mix as measured runs.
  • Queue delay metrics in this report are derived estimates, not direct per-item queue latency traces.
  • Hardware profile fields should be explicitly pinned for each evidence run when used in procurement or external audits.
  • Live telemetry is aggregated and anonymized; it indicates load posture, not per-tenant behavior.

Raw Test Evidence (Verbatim)

Direct JSON outputs from the performance test pack.

plan-capacity-projection
{
  "ActiveTenants": 300,
  "HighestFinitePlan": "business",
  "HighestFinitePlanMaxMonitors": 1000,
  "HighestFinitePlanMinIntervalSeconds": 15,
  "HighestFiniteChecksPerSecondPerTenant": "66.667",
  "HighestFiniteChecksPerDayPerTenant": "5760000.000",
  "HighestFiniteChecksPerWeekPerTenant": "40320000.000",
  "HighestFiniteChecksPerSecondAtScale": "20000.000",
  "HighestFiniteChecksPerDayAtScale": "1728000000.000",
  "HighestFiniteChecksPerWeekAtScale": "12096000000.000",
  "Projections": [
    {
      "PlanCode": "free",
      "MaxMonitors": 50,
      "MinIntervalSeconds": 120,
      "ChecksPerSecondPerTenant": "0.417",
      "ChecksPerDayPerTenant": "36000.000",
      "ChecksPerWeekPerTenant": "252000.000",
      "ProjectedChecksPerSecondAtScale": "125.000",
      "ProjectedChecksPerDayAtScale": "10800000.000",
      "ProjectedChecksPerWeekAtScale": "75600000.000"
    },
    {
      "PlanCode": "solo",
      "MaxMonitors": 100,
      "MinIntervalSeconds": 60,
      "ChecksPerSecondPerTenant": "1.667",
      "ChecksPerDayPerTenant": "144000.000",
      "ChecksPerWeekPerTenant": "1008000.000",
      "ProjectedChecksPerSecondAtScale": "500.000",
      "ProjectedChecksPerDayAtScale": "43200000.000",
      "ProjectedChecksPerWeekAtScale": "302400000.000"
    },
    {
      "PlanCode": "team",
      "MaxMonitors": 250,
      "MinIntervalSeconds": 30,
      "ChecksPerSecondPerTenant": "8.333",
      "ChecksPerDayPerTenant": "720000.000",
      "ChecksPerWeekPerTenant": "5040000.000",
      "ProjectedChecksPerSecondAtScale": "2500.000",
      "ProjectedChecksPerDayAtScale": "216000000.000",
      "ProjectedChecksPerWeekAtScale": "1512000000.000"
    },
    {
      "PlanCode": "business",
      "MaxMonitors": 1000,
      "MinIntervalSeconds": 15,
      "ChecksPerSecondPerTenant": "66.667",
      "ChecksPerDayPerTenant": "5760000.000",
      "ChecksPerWeekPerTenant": "40320000.000",
      "ProjectedChecksPerSecondAtScale": "20000.000",
      "ProjectedChecksPerDayAtScale": "1728000000.000",
      "ProjectedChecksPerWeekAtScale": "12096000000.000"
    },
    {
      "PlanCode": "enterprise",
      "MaxMonitors": null,
      "MinIntervalSeconds": 5,
      "ChecksPerSecondPerTenant": "Infinity",
      "ChecksPerDayPerTenant": "Infinity",
      "ChecksPerWeekPerTenant": "Infinity",
      "ProjectedChecksPerSecondAtScale": "Infinity",
      "ProjectedChecksPerDayAtScale": "Infinity",
      "ProjectedChecksPerWeekAtScale": "Infinity"
    }
  ],
  "RecordedAtUtc": "2026-03-09T23:32:36.8012132Z"
}
execution-throughput
{
  "MonitorCount": 400,
  "Parallelism": 6,
  "CompletedChecks": 400,
  "MinimumChecksPerSecond": 8,
  "ObservedChecksPerSecond": 76.258,
  "ElapsedSeconds": 5.245,
  "RecordedAtUtc": "2026-03-09T23:32:44.8749421Z"
}
scheduler-scale
{
  "ExpectedBatchSize": 120,
  "ObservedQueueCount": 120,
  "SampledTenantCount": 40,
  "SampledUniqueMonitorCount": 120,
  "SampledLocationCounts": {
    "global-1": 40,
    "global-2": 40,
    "global-3": 40
  },
  "SampledLocationImbalance": 0,
  "MaxConcurrentChecks": 30,
  "MonitoringLocations": "global-1,global-2,global-3",
  "RecordedAtUtc": "2026-03-09T23:32:47.7102908Z"
}
execution-reliability
{
  "Rounds": 3,
  "MonitorCountPerRound": 300,
  "Parallelism": 6,
  "MinimumChecksPerSecond": 8,
  "MaxAllowedCoefficientOfVariation": 0.35,
  "AverageChecksPerSecond": 96.886,
  "P50ChecksPerSecond": 97.668,
  "P95ChecksPerSecond": 97.668,
  "MinChecksPerSecond": 62.332,
  "MaxChecksPerSecond": 130.658,
  "ThroughputCoefficientOfVariation": 0.288,
  "ThroughputSamples": [
    62.332,
    97.668,
    130.658
  ],
  "ElapsedSecondsSamples": [
    4.813,
    3.072,
    2.296
  ],
  "RecordedAtUtc": "2026-03-09T23:32:50.469885Z"
}
max-usage-envelope
{
  "ActiveTenants": 300,
  "SelfHostedMultiplier": 4,
  "MinExpectedConcurrentChecks": 100,
  "MaxExpectedConcurrentChecks": 300000,
  "FastestExpectedIntervalSeconds": 15,
  "SlowestExpectedIntervalSeconds": 30,
  "MinExpectedChecksPerSecond": "3.333",
  "MinExpectedChecksPerDay": "288000.000",
  "MinExpectedChecksPerWeek": "2016000.000",
  "MaxExpectedChecksPerSecond": "20000.000",
  "MaxExpectedChecksPerDay": "1728000000.000",
  "MaxExpectedChecksPerWeek": "12096000000.000",
  "StrongestFinitePlanCode": "business",
  "StrongestFinitePlanMaxMonitors": 1000,
  "StrongestFinitePlanMinIntervalSeconds": 15,
  "StrongestFiniteChecksPerSecondPerTenant": "66.667",
  "CloudBaselineChecksPerSecond": "20000.000",
  "CloudBaselineChecksPerDay": "1728000000.000",
  "CloudBaselineChecksPerWeek": "12096000000.000",
  "SelfHostedProjectedChecksPerSecond": "80000.000",
  "SelfHostedProjectedChecksPerDay": "6912000000.000",
  "SelfHostedProjectedChecksPerWeek": "48384000000.000",
  "FinitePlanEnvelope": [
    {
      "PlanCode": "business",
      "MaxMonitors": 1000,
      "MinIntervalSeconds": 15,
      "ChecksPerSecondPerTenant": "66.667",
      "CloudChecksPerSecond": "20000.000",
      "CloudChecksPerDay": "1728000000.000",
      "CloudChecksPerWeek": "12096000000.000",
      "SelfHostedChecksPerSecond": "80000.000",
      "SelfHostedChecksPerDay": "6912000000.000",
      "SelfHostedChecksPerWeek": "48384000000.000"
    },
    {
      "PlanCode": "team",
      "MaxMonitors": 250,
      "MinIntervalSeconds": 30,
      "ChecksPerSecondPerTenant": "8.333",
      "CloudChecksPerSecond": "2500.000",
      "CloudChecksPerDay": "216000000.000",
      "CloudChecksPerWeek": "1512000000.000",
      "SelfHostedChecksPerSecond": "10000.000",
      "SelfHostedChecksPerDay": "864000000.000",
      "SelfHostedChecksPerWeek": "6048000000.000"
    },
    {
      "PlanCode": "solo",
      "MaxMonitors": 100,
      "MinIntervalSeconds": 60,
      "ChecksPerSecondPerTenant": "1.667",
      "CloudChecksPerSecond": "500.000",
      "CloudChecksPerDay": "43200000.000",
      "CloudChecksPerWeek": "302400000.000",
      "SelfHostedChecksPerSecond": "2000.000",
      "SelfHostedChecksPerDay": "172800000.000",
      "SelfHostedChecksPerWeek": "1209600000.000"
    },
    {
      "PlanCode": "free",
      "MaxMonitors": 50,
      "MinIntervalSeconds": 120,
      "ChecksPerSecondPerTenant": "0.417",
      "CloudChecksPerSecond": "125.000",
      "CloudChecksPerDay": "10800000.000",
      "CloudChecksPerWeek": "75600000.000",
      "SelfHostedChecksPerSecond": "500.000",
      "SelfHostedChecksPerDay": "43200000.000",
      "SelfHostedChecksPerWeek": "302400000.000"
    }
  ],
  "RecordedAtUtc": "2026-03-09T23:32:36.7969255Z"
}
An unhandled error has occurred. Reload x

Connection Interrupted

Uptimer is a live app session. Reconnecting...

Still reconnecting your live session in s.

We could not restore the live session yet. Retry or reload when you are ready.

Your live session was paused by the server.

We could not resume the live session. Retry or reload.

Status: Waiting... Uptime Streak: 0 Best Streak: 0

Guide the Uptimer node over outage spikes. Press Space or tap to jump.

Keeping the mini-game active while your live session reconnects...

Live session re-established. Continue with Uptimer Runner?