Runtime Metrics
Monitor feature set execution with built-in metrics collection. Track execution counts, timing, success rates, and system resource usage with multiple output formats, including Prometheus.
Zero Configuration
Metrics are collected automatically for every feature set execution.
Access them anytime via the <metrics> magic variable — no setup required.
Overview
ARO automatically tracks metrics for every feature set that executes in your application. The runtime records execution counts, success/failure rates, and timing statistics. System-level process metrics (CPU, memory, file descriptors) are also available.
Metrics are accessed through the <metrics> magic variable, which is
available in any feature set without declaration. Use a qualifier to choose the output format.
Enabling Metrics Output
Use the Log action with the <metrics> variable and a format qualifier:
(* Table format - ASCII table for terminal display *)
Log the <metrics: table> to the <console>.
(* Short format - single-line summary *)
Log the <metrics: short> to the <console>.
(* Plain format - detailed human-readable output *)
Log the <metrics: plain> to the <console>.
(* Prometheus format - for monitoring systems *)
Log the <metrics: prometheus> to the <console>.
Without a qualifier, <metrics> defaults to the plain format:
(* These are equivalent *)
Log the <metrics> to the <console>.
Log the <metrics: plain> to the <console>.
Output Formats
| Qualifier | Description | Use Case |
|---|---|---|
| plain | Context-aware detailed output | Development, debugging |
| short | Single-line summary | Quick status checks |
| table | ASCII table format | Terminal display |
| prometheus | Prometheus text format | Monitoring integration |
Table Format
The table qualifier produces an ASCII table showing all feature sets with their counts and timing:
+-------------------+-------+---------+--------+---------+---------+
| Feature Set | Count | Success | Failed | Avg(ms) | Max(ms) |
+-------------------+-------+---------+--------+---------+---------+
| Application-Start | 1 | 1 | 0 | 12.50 | 12.50 |
| listUsers | 2 | 2 | 0 | 8.30 | 9.50 |
+-------------------+-------+---------+--------+---------+---------+
| TOTAL | 3 | 3 | 0 | 10.40 | 12.50 |
+-------------------+-------+---------+--------+---------+---------+
Short Format
The short qualifier produces a single-line summary:
metrics: 3 executions, 2 featuresets, avg=10.4ms, uptime=5.2s
Plain Format
The plain qualifier provides a detailed breakdown per feature set:
Feature Set Metrics (3 total executions, uptime: 5.2s)
Application-Start (Entry Point)
Executions: 1 (success: 1, failed: 0)
Duration: avg=12.5ms, min=12.5ms, max=12.5ms
listUsers (User API)
Executions: 2 (success: 2, failed: 0)
Duration: avg=8.3ms, min=7.1ms, max=9.5ms
Prometheus Format
The prometheus qualifier outputs metrics in standard Prometheus text format, ready for scraping by monitoring systems:
# HELP aro_featureset_executions_total Total number of feature set executions
# TYPE aro_featureset_executions_total counter
aro_featureset_executions_total{featureset="Application-Start",activity="Entry Point"} 1
aro_featureset_executions_total{featureset="listUsers",activity="User API"} 2
# HELP aro_featureset_duration_ms_avg Average execution duration in milliseconds
# TYPE aro_featureset_duration_ms_avg gauge
aro_featureset_duration_ms_avg{featureset="Application-Start",activity="Entry Point"} 12.5
aro_featureset_duration_ms_avg{featureset="listUsers",activity="User API"} 8.3
# HELP aro_application_uptime_seconds Application uptime in seconds
# TYPE aro_application_uptime_seconds gauge
aro_application_uptime_seconds 5.2
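If you consume this output with a tool other than Prometheus itself, the text format is straightforward to parse line by line. A minimal Python sketch (not part of ARO; the sample lines are taken from the output above, and the parser ignores HELP/TYPE comments and assumes label values contain no commas):

```python
import re

# One sample per line: metric_name{label="value",...} numeric_value
LINE_RE = re.compile(r'^(\w+)(?:\{([^}]*)\})?\s+([0-9.eE+-]+)$')

def parse_prometheus(text):
    """Parse Prometheus text format into (name, labels, value) tuples."""
    samples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blank lines and # HELP / # TYPE comments
        m = LINE_RE.match(line)
        if not m:
            continue
        name, raw_labels, value = m.groups()
        labels = {}
        if raw_labels:
            # Simplification: assumes no commas inside quoted label values
            for pair in raw_labels.split(','):
                k, v = pair.split('=', 1)
                labels[k] = v.strip('"')
        samples.append((name, labels, float(value)))
    return samples

sample = '''\
# TYPE aro_featureset_executions_total counter
aro_featureset_executions_total{featureset="listUsers",activity="User API"} 2
aro_application_uptime_seconds 5.2
'''
samples = parse_prometheus(sample)
```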
You can also expose metrics as an HTTP endpoint for Prometheus scraping:
(getMetrics: Monitoring API) {
Return an <OK: status> with <metrics: prometheus>.
}
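On the Prometheus side, scraping such an endpoint needs only an ordinary scrape configuration. A sketch of a prometheus.yml entry (the job name, port, and path here are assumptions for this example; adjust them to wherever your application routes the getMetrics feature set):

```yaml
scrape_configs:
  - job_name: "aro-app"            # hypothetical job name
    metrics_path: "/getMetrics"    # assumed route for the Monitoring API endpoint
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:8080"]  # assumed host:port of the ARO application
```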
Example Usage
A complete example that emits events, processes them in parallel, and displays metrics at shutdown:
(Application-Start: Metrics Demo) {
Log "=== Metrics Demo ===" to the <console>.
Log "Emitting events in parallel..." to the <console>.
(* Emit events in parallel for each item *)
Create the <items> with [1, 2, 3].
parallel for each <item> in <items> {
Emit a <ProcessItem: event> with <item>.
}
Log "Events emitted. Metrics will be shown at shutdown." to the <console>.
Return an <OK: status> for the <demo>.
}
(Process Item: ProcessItem Handler) {
Extract the <value> from the <event: item>.
Log "Processing item: ${value}" to the <console>.
Return an <OK: status> for the <processing>.
}
(Application-End: Success) {
Log "=== Final Metrics ===" to the <console>.
(* Short summary *)
Log the <metrics: short> to the <console>.
(* Detailed table *)
Log the <metrics: table> to the <console>.
(* Prometheus format for monitoring *)
Log the <metrics: prometheus> to the <console>.
Return an <OK: status> for the <shutdown>.
}
Available Metric Types
Per-Feature Set Metrics
Each feature set that executes is tracked individually with the following metrics:
| Metric | Type | Description |
|---|---|---|
| executionCount | Counter | Total number of times the feature set has been executed |
| successCount | Counter | Number of successful executions |
| failureCount | Counter | Number of failed executions |
| averageDurationMs | Computed | Average execution time in milliseconds |
| minDurationMs | Gauge | Fastest execution time in milliseconds |
| maxDurationMs | Gauge | Slowest execution time in milliseconds |
| successRate | Computed | Percentage of successful executions (0–100) |
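The Computed entries are derived from the raw counters and recorded durations rather than stored directly. A rough illustration of that arithmetic (plain Python, not how ARO implements it internally; field names mirror the table above):

```python
from dataclasses import dataclass

@dataclass
class FeatureSetMetrics:
    execution_count: int = 0
    success_count: int = 0
    failure_count: int = 0
    total_duration_ms: float = 0.0
    min_duration_ms: float = float("inf")
    max_duration_ms: float = 0.0

    def record(self, duration_ms: float, ok: bool = True) -> None:
        # Counters update on every execution; min/max gauges track extremes.
        self.execution_count += 1
        self.success_count += ok
        self.failure_count += not ok
        self.total_duration_ms += duration_ms
        self.min_duration_ms = min(self.min_duration_ms, duration_ms)
        self.max_duration_ms = max(self.max_duration_ms, duration_ms)

    @property
    def average_duration_ms(self) -> float:
        # Computed on demand from the running total
        return self.total_duration_ms / self.execution_count if self.execution_count else 0.0

    @property
    def success_rate(self) -> float:
        # Percentage of successful executions, 0-100
        return 100.0 * self.success_count / self.execution_count if self.execution_count else 0.0

# The two listUsers executions from the sample output above
m = FeatureSetMetrics()
m.record(7.1)
m.record(9.5)
```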
Global Application Metrics
| Metric | Type | Description |
|---|---|---|
| totalExecutions | Counter | Sum of all feature set executions |
| uptimeSeconds | Computed | Time since application start |
| applicationStartTime | Timestamp | When the application started |
System Process Metrics
The metrics snapshot also includes system-level process information collected from the operating system:
| Metric | Description |
|---|---|
| cpuUserTime | CPU time spent in user mode (seconds) |
| cpuSystemTime | CPU time spent in system/kernel mode (seconds) |
| residentMemoryBytes | Physical memory used by the process |
| virtualMemoryBytes | Virtual memory allocated to the process |
| openFileDescriptors | Number of currently open file descriptors |
| maxFileDescriptors | Maximum allowed file descriptors for the process |
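These values are the kind of process accounting an operating system exposes via getrusage and resource limits. For a rough sense of what each metric measures (this is Python's standard library on a Unix system, not how ARO collects them internally):

```python
import resource

# Per-process CPU accounting: user vs. kernel mode, in seconds
usage = resource.getrusage(resource.RUSAGE_SELF)
cpu_user_time = usage.ru_utime      # like cpuUserTime
cpu_system_time = usage.ru_stime    # like cpuSystemTime

# Peak resident set size; note the unit is kilobytes on Linux, bytes on macOS
resident_memory = usage.ru_maxrss

# Soft/hard limits on open file descriptors; the hard limit
# corresponds to maxFileDescriptors
soft_fds, hard_fds = resource.getrlimit(resource.RLIMIT_NOFILE)
```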
Prometheus Labels
Each Prometheus metric includes identifying labels for filtering and grouping in dashboards:
| Label | Description | Example |
|---|---|---|
| featureset | Feature set name | listUsers |
| activity | Business activity | User API |
Learn More
See the full runtime metrics specification in ARO-0044: Runtime Metrics.