How to configure logging for the Aranya daemon and client applications.
Aranya uses Rust tracing for structured logging. Configure logging separately for:
- aranya-daemon - configured via config file or environment variable
- aranya-client - configured by the application embedding the client library
Use TRACE only for targeted debugging sessions; it can significantly impact performance.
Add logging configuration to your daemon.toml:
```toml
[debug]
# Enable debug endpoints
enabled = true
# Endpoint address
bind_addr = "127.0.0.1:9090"

[logging]
# Toggle logging
enabled = true
# Log format (json or text)
format = "json"
# Default log level (error, warn, info, debug, trace)
level = "info"
# Per-component log level overrides (optional)
# Format: "component1=level1,component2=level2"
# Example: "sync=debug,afc=trace,policy=info"
# log_filter = ""
# Log file path (optional, defaults to stdout only)
# path = "/var/log/aranya/daemon.log"
# Log to standard out (enabled by default)
# stdout = true
```
Default behavior: logs go to stdout (ideal for systemd, containers, and log aggregators). Optionally add path to also write to a file.
Common log filter configurations (see the [EnvFilter directives documentation](https://docs.rs/tracing-subscriber/latest/tracing_subscriber/filter/struct.EnvFilter.html#filter-directives)):
```toml
# Debug sync operations only
log_filter = "info,aranya_daemon::sync=debug"

# Debug policy evaluation
log_filter = "info,aranya_daemon::actions=debug,aranya_policy_vm=trace"

# Multiple components
log_filter = "info,aranya_daemon::sync=debug,aranya_daemon::api=debug"
```
The ARANYA_DAEMON environment variable overrides the config file setting:
```shell
# Override to debug level
ARANYA_DAEMON=debug ./aranya-daemon --config daemon.toml

# Override with module-specific filter
ARANYA_DAEMON=info,aranya_daemon::sync=debug ./aranya-daemon --config daemon.toml

# Disable all logging
ARANYA_DAEMON=off ./aranya-daemon --config daemon.toml
```
When to use: temporary debugging sessions where you do not want to edit the config file.
Currently, the Aranya daemon applies log level configuration at startup only and does not support runtime log level changes without restarting.
Future enhancement: The tracing_subscriber library supports reloadable filters through the reload layer, but Aranya’s daemon does not yet implement this. To support dynamic log level adjustment at runtime, the daemon would need to:
- tracing_subscriber::reload::Layer - Wraps the filter in a reloadable handle that allows updates via a broadcast channel

Current options for temporary debugging:
- Restart the daemon with the ARANYA_DAEMON environment variable
- Set the ARANYA_DAEMON env var at startup to enable debug logging before issues occur

The aranya-client library does not configure logging itself. Applications embedding the client must initialize tracing_subscriber.
Initialize logging in your Rust application:
```rust
use tracing_subscriber::{prelude::*, EnvFilter};
use std::io;

// Initialize tracing subscriber
tracing_subscriber::registry()
    .with(
        tracing_subscriber::fmt::layer()
            .with_writer(io::stderr)
            .with_filter(EnvFilter::from_env("APP_LOG")),
    )
    .init();
```
C applications using the aranya-client-capi library must call aranya_init_logging() to initialize the client library’s logging system. The function supports file output and format configuration through environment variables.
To enable daemon logging for C applications, configure the daemon using the daemon.toml configuration file that your application provides to each daemon instance. For development and testing, you can also temporarily override logging using the ARANYA_DAEMON environment variable. See the Configuration File section for the complete list of logging configuration options and the Environment Variable Override section for temporary overrides.
Note: Client and daemon logging are configured independently. Aranya correlation IDs are propagated across boundaries, but end-to-end trace continuity is best-effort: if either side filters out the relevant spans/events, the execution path may not be fully reconstructable.
Function Signature:
```c
#include "aranya-client.h"

AranyaError aranya_init_logging(void);
```
Environment Variables:
- ARANYA_CAPI - Log filter using EnvFilter syntax (required to enable logging). Examples: "info", "debug", "debug,aranya_client::afc=trace"; set to "off" to disable logging.
- ARANYA_CAPI_LOG - Log file path (optional). Example: "./logs/client.log"
- ARANYA_CAPI_FORMAT - Log format: "json" or "text" (optional, defaults to "json")

Shell Configuration:
```shell
# Basic debug logging to stderr
ARANYA_CAPI="debug" ./my-c-app

# Detailed AFC tracing to file
ARANYA_CAPI="info,aranya_client::afc=trace" \
ARANYA_CAPI_LOG="./client.log" \
ARANYA_CAPI_FORMAT="json" \
./my-c-app

# Multiple component filtering with file output
ARANYA_CAPI="info,aranya_client=debug,aranya_client::afc=trace" \
ARANYA_CAPI_LOG="/var/log/myapp/client.log" \
./my-c-app
```
Note: If ARANYA_CAPI is not set, or is set to "off", no logs will be emitted.
A further expansion could be a function that takes an explicit configuration rather than depending on environment variables:
```c
#if defined(ENABLE_ARANYA_PREVIEW)
#include <stdio.h>
#include <string.h>
#include "aranya-client.h"

/* Proposed configuration struct and entry point. */
struct AranyaLoggingConfig {
    size_t size;                    /* set to sizeof(struct AranyaLoggingConfig) */
    const char *filter;             /* e.g., "info,aranya_client=debug" */
    const char *log_path;           /* e.g., "./client.log" or NULL */
    AranyaLogFormat format;         /* JSON or TEXT */
    AranyaLoggingConfigFlags flags;
};

AranyaError aranya_init_logging_with_config(const struct AranyaLoggingConfig *cfg);

/* Example usage. */
int init_client_logging(void) {
    struct AranyaLoggingConfig cfg;
    memset(&cfg, 0, sizeof(cfg));
    cfg.size = sizeof(cfg);
    cfg.filter = "info,aranya_client=debug,aranya_client::afc=trace";
    cfg.log_path = "./client.log";
    cfg.format = ARANYA_LOG_FORMAT_JSON;
    cfg.flags = ARANYA_LOGGING_CONFIG_USE_ENV_FALLBACK;

    AranyaError rc = aranya_init_logging_with_config(&cfg);
    if (rc != ARANYA_ERROR_SUCCESS) {
        fprintf(stderr, "failed to initialize logging: %s\n",
                aranya_error_to_str(rc));
        return 1;
    }
    return 0;
}
#endif /* ENABLE_ARANYA_PREVIEW */
```
For this preview API, filter should accept the same syntax as RUST_LOG/EnvFilter (for example, "info,aranya_client::afc=trace") passed directly from the C application. This allows applications to configure tracing without setting process environment variables.
Implementation note: parse the provided filter directly when initializing tracing, rather than mutating RUST_LOG at runtime.
For Rust applications using APP_LOG:
```shell
# All client library logs at debug level
APP_LOG=aranya_client=debug ./application

# AFC operations only
APP_LOG=aranya_client::afc=debug ./application

# Detailed AFC seal/open tracing
APP_LOG=aranya_client::afc=trace ./application

# Multiple modules
APP_LOG=info,aranya_client::afc=debug,aranya_client::client=debug ./application
```
For C applications using ARANYA_CAPI:
```shell
# All client library logs at debug level
ARANYA_CAPI=aranya_client=debug ./my-c-app

# AFC operations only
ARANYA_CAPI=aranya_client::afc=debug ./my-c-app

# Detailed AFC seal/open tracing
ARANYA_CAPI=aranya_client::afc=trace ./my-c-app

# Multiple modules
ARANYA_CAPI=info,aranya_client::afc=debug,aranya_client::client=debug ./my-c-app
```
For Rust applications:
Option 1: Application-specific env var
```shell
# Clear ownership - this app controls its logging
APP_LOG=info,aranya_client::afc=debug ./application
```
For C applications:
Use ARANYA_CAPI with aranya_init_logging():
```shell
# Clear separation from daemon logging
ARANYA_CAPI=info,aranya_client::afc=debug ./my-c-app

# With file output
ARANYA_CAPI=debug \
ARANYA_CAPI_LOG="./client.log" \
./my-c-app
```
For deployments where you run both daemon and client application:
Rust application with daemon:
```shell
# Clear separation of concerns
ARANYA_DAEMON=info,aranya_daemon::sync=debug \
APP_LOG=info,aranya_client::afc=debug \
./application
```
C application with daemon:
```shell
# Separate environment variables for daemon and client
ARANYA_DAEMON=info,aranya_daemon::sync=debug \
ARANYA_CAPI=info,aranya_client::afc=debug \
ARANYA_CAPI_LOG="./client.log" \
./my-c-app
```
Human-readable terminal output:
```text
2026-01-28T10:15:23.456789Z  INFO aranya_daemon::sync: Sync completed successfully
    peer=8gH4jK9mL2nP5qR...
    duration_ms=123
    cmd_count=5
    effects_count=3
```
For log aggregation and analysis:
```json
{
  "timestamp": "2026-01-28T10:15:23.456789Z",
  "level": "INFO",
  "target": "aranya_daemon::sync",
  "fields": {
    "message": "Sync completed successfully",
    "peer": "8gH4jK9mL2nP5qR...",
    "duration_ms": 123,
    "cmd_count": 5,
    "effects_count": 3
  }
}
```
To distinguish where failures occur, use component-level spans organized by architectural layer:
Architecture: API Layer → Component Layer → Sub-component Layer
Each span should include a component field to identify which architectural layer or subsystem is active. This enables filtering logs by component and quickly identifying where errors occur.
Tarpc provides an rpc.trace_id for each request. You can include this value in spans on both the caller and receiver to correlate client and daemon logs.
Recommended field:

- rpc_trace_id - Use the tarpc context::current().trace_id value

Outcome:

- Client and daemon logs for one request carry the same rpc_trace_id
- Use rpc_trace_id as the correlation_id/request_id when logging RPC activity

Common components:

- daemon - Top-level daemon
- daemon_api - API request handlers
- policy - Policy evaluation
- quic_sync - QUIC sync operations
- sync_manager - Sync coordination
- keystore - Key management
- afc - Aranya Fast Channels

When errors are logged, they occur in the innermost (active) span. The singular span field at the top level shows exactly which span originated the error:
```json
{
  "timestamp": "2026-02-03T16:35:25.142225Z",
  "level": "ERROR",
  "fields": {
    "message": "server unable to respond to sync request from peer",
    "error": "The connection was closed by the remote endpoint"
  },
  "span": {
    "component": "quic_sync",
    "name": "serve_connection",
    "conn_source": "127.0.0.1:50159"
  },
  "spans": [
    {"component": "daemon", "name": "daemon"},
    {"component": "quic_sync", "name": "sync-server"},
    {"component": "quic_sync", "name": "serve"},
    {"component": "quic_sync", "name": "serve_connection"}
  ]
}
```
Interpreting the error:
The span field identifies where the error originated:
- component: "quic_sync" - Network/connection subsystem failed (not API validation or policy evaluation)
- name: "serve_connection" - Specific function handling the peer connection
- conn_source: "127.0.0.1:50159" - Context showing which peer was involved

The spans array shows the complete call path leading to the error:
daemon → sync-server → serve → serve_connection (error occurred here)
This makes it immediately clear the error is an infrastructure issue (connection failure) rather than an application-level problem (API validation or policy denial).
When adding logging to Aranya code, include these structured fields where applicable:
Define an event_type field for key observability events. This enables filtering and test assertions without relying on message strings. Values could be, for example, sync_timeout, policy_denied, or afc_shm_add_failed.
Benefits:
- Tests can assert on event_type reliably
- Log queries and alerts can filter on event_type without having to use string matches for a string that may change

Common fields:

- component - The architectural component (daemon_api, policy, quic_sync, etc.)
- device_id - The device performing the operation
- team_id / graph_id - The team/graph being operated on
- duration_ms - Operation duration in milliseconds
- message - Human-readable operation description

Sync fields:

- peer_device_id - Device ID of sync peer
- peer_addr - Network address of peer
- cmd_count / cmd_count_received - Number of commands synced
- effects_count - Number of effects generated
- bytes_transferred - Total bytes sent/received
- first_cmd_hash - Hash of first command (for stall detection)
- first_cmd_max_cts - Max CTS of first command
- network_stats - Optional object containing network quality metrics:
  - rtt_ms - Round-trip time in milliseconds
  - bandwidth_mbps - Measured bandwidth in megabits per second
  - packet_loss_percent - Packet loss percentage (if using QUIC)

API fields:

- operation - API method name (create_team, add_member, etc.)
- request_id / correlation_id - For tracing requests across components
- validation_error - Specific validation failure (if applicable)

AFC fields:

- channel_id - AFC channel identifier
- label_id - Label used for the channel
- peer_device_id - Peer device for the channel
- plaintext_len / ciphertext_len - Data sizes
- seq_num - Sequence number for seal/open operations
- shm_path - Shared memory path (for SHM operations)
- key_index - Key index in SHM
- current_key_count - Number of keys in SHM

Role and label fields:

- role_id - Role being created/modified
- label_id - Label being created/modified
- managing_role_id - Role managing the operation
- perm - Permission being granted/revoked
- target_role_id - Target role for permission operations

Error fields:

- error - Error message or description
- error_code - Specific error code (if applicable)
- operation - Operation that failed
- retry_count - Number of retry attempts
- last_successful_sync - Timestamp of last success (for recurring failures)
- psk_id - PSK identifier (for authentication failures)

Network fields:

- local_addr - Local network address
- remote_addr / conn_source - Remote network address
- timeout_ms - Timeout value
- rtt_ms - Round-trip time in milliseconds
- bandwidth_mbps - Measured bandwidth

Best practices:

- Use structured field recording (e.g., device_id = %id) instead of string formatting
- Record operation durations (duration_ms) to identify bottlenecks
- Include the relevant identifiers (device_id, team_id, peer_id, channel_id)