Optimize Logs
Cut Costs by 50%+
Losslessly compact events collected by Fluentd/Bit · OTel · Filebeat · Logstash · Splunk UF
Verbose → 8x Compact
Lossless — stores shared structure once, ships only changing values
{
"stream": "stderr",
"log": "2026-02-21 10:23:47,891 ERROR locust.runners: Traceback (most recent call last):
File \"/app/locustfile.py\", line 42, in browse_product
response = self.client.get(\"/api/products/OLJCESPC7Z\")
File \"/usr/local/lib/python3.12/site-packages/locust/clients.py\", line 188, in get
return self._send_request_safe_mode(\"GET\", url, **kwargs)
File \"/usr/local/lib/python3.12/site-packages/locust/clients.py\", line 112, in _send_request_safe_mode
return self.request(\"GET\", url, **kwargs)
File \"/usr/local/lib/python3.12/site-packages/requests/sessions.py\", line 589, in request
resp = self.send(prep, **send_kwargs)
File \"/usr/local/lib/python3.12/site-packages/requests/sessions.py\", line 703, in send
r = adapter.send(request, **kwargs)
File \"/usr/local/lib/python3.12/site-packages/requests/adapters.py\", line 519, in send
raise ConnectionError(e, request=request)
File \"/usr/local/lib/python3.12/site-packages/urllib3/connectionpool.py\", line 791, in urlopen
retries = retries.increment(method, url, error=e, _pool=self)
File \"/usr/local/lib/python3.12/site-packages/urllib3/util/retry.py\", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
File \"/usr/local/lib/python3.12/site-packages/urllib3/connection.py\", line 211, in connect
sock = self._new_conn()
File \"/usr/local/lib/python3.12/site-packages/urllib3/connection.py\", line 186, in _new_conn
raise NewConnectionError(self, msg)
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='frontend', port=8080):
Max retries exceeded with url: /api/products/OLJCESPC7Z
(Caused by NewConnectionError: Failed to establish a new connection:
[Errno 111] Connection refused)",
"docker": {
"container_id": "5d6f421bbc2861616cacaf2ff589c3cc7e0b8489510c07ef73f3574f0a196e18"
},
"kubernetes": {
"container_name": "load-generator",
"namespace_name": "default",
"pod_name": "load-generator-6d4f78459b-fjxxp",
"container_image": "ghcr.io/open-telemetry/demo:2.1.3-load-generator",
"container_image_id": "ghcr.io/open-telemetry/demo@sha256:b35d080e712780e2078f3837d334a0ff13204ad2d334f3b4838d64bb543d031a",
"pod_id": "5c3c2718-4397-8e2c-2109a5b3cc41",
"pod_ip": "192.168.51.178",
"host": "ip-192-168-42-205.ec2.internal",
"labels": {
"app.kubernetes.io/component": "load-generator",
"app.kubernetes.io/name": "otel-demo",
"opentelemetry.io/name": "load-generator",
"pod-template-hash": "6d4f78459b"
}
},
"tenx_tag": "kubernetes.var.log.containers.load-generator-6d4f78459b-fjxxp_default_load-generator-5d6f421bbc2861616cacaf2ff589c3cc7e0b8489510c07ef73f3574f0a196e18.log"
}
70UEVe+i ],1758886627891,42,OLJCESPC7Z,188,112,589,703,519,791,592,211,186,8080,OLJCESPC7Z,0x7f2a8c3d1e50,111,5d6f421bbc2861616cacaf2ff589c3cc7e0b8489510c07ef73f3574f0a196e18,6d4f78459b,fjxxp,2,1,3,b35d080e712780e2078f3837d334a0ff13204ad2d334f3b4838d64bb543d031a,5c3c2718,-4397,8e2c,2109a5b3cc41,192,168,51,178,-192,-168,-42,-205,6d4f78459b,tenx,6d4f78459b,fjxxp,5d6f421bbc2861616cacaf2ff589c3cc7e0b8489510c07ef73f3574f0a196e18
Constant symbol values, delimiters, and structure — stored once and referenced by hash. Learn more →
Automatically parsed once per template and converted to efficient 64-bit epoch values. Learn more →
High-cardinality values unique to each event — each maps to a {N} placeholder in the template. Learn more →
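The three categories above — constant structure stored once, timestamps normalized, high-cardinality values mapped to {N} placeholders — can be sketched as a toy encode/decode roundtrip. The hash scheme, placeholder syntax, and regex matching here are illustrative assumptions, not the product's actual wire format:

```python
import hashlib
import re

TEMPLATES = {}  # template hash -> template text; stored and shipped once

def compact(event: str, template: str) -> tuple:
    """Register the constant structure once; return (hash, changing values)."""
    thash = hashlib.sha1(template.encode()).hexdigest()[:8]
    TEMPLATES.setdefault(thash, template)
    # Turn the template into a capture regex: each {N} slot grabs one value.
    pattern = re.sub(r"\\\{\d+\\\}", "(.+?)", re.escape(template)) + "$"
    values = list(re.match(pattern, event).groups())
    return thash, values

def expand(thash: str, values: list) -> str:
    """Rebuild the original event byte-for-byte from template + values."""
    event = TEMPLATES[thash]
    for i, v in enumerate(values):
        event = event.replace("{%d}" % i, str(v))
    return event

template = "ERROR locust.runners: GET /api/products/{0} from pod {1} refused"
event = ("ERROR locust.runners: GET /api/products/OLJCESPC7Z "
         "from pod load-generator-6d4f78459b-fjxxp refused")
thash, values = compact(event, template)
assert expand(thash, values) == event  # lossless roundtrip
```

Only `thash` and `values` travel per event; the template text ships a single time, which is where the size reduction comes from.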
Edge Optimizer Workflow
Eliminates redundancy using shared templates. Works with your forwarders.
Receive
Forwarder sidecar
Transform
Typed objects
Optimize
Lossless encoding
Output
Forward events
Frequently Asked Questions
The engine identifies repeating structure in your logs — JSON keys, timestamp formats, constant strings — and stores each unique pattern once as a cached template. Only the variable values (IPs, pod names, trace IDs) are shipped per event. Similar to how Protocol Buffers define a schema once and send only field values over the wire.
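A toy illustration of how repeating structure can be separated from variable values — the real engine's matching is far more sophisticated; the whitespace tokenization and column diffing here are simplifying assumptions for illustration only:

```python
def infer_template(lines):
    """Tokens identical across all events stay constant; positions that
    vary become {N} placeholders (whitespace tokenization is a toy choice)."""
    rows = [line.split() for line in lines]
    out, n = [], 0
    for column in zip(*rows):
        if len(set(column)) == 1:
            out.append(column[0])       # constant structure, stored once
        else:
            out.append("{%d}" % n)      # high-cardinality value slot
            n += 1
    return " ".join(out)

lines = [
    "GET /api/products/OLJCESPC7Z from 192.168.51.178 took 42ms",
    "GET /api/products/9SIQT8TOJO from 192.168.44.103 took 17ms",
]
print(infer_template(lines))  # GET {0} from {1} took {2}
```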
The AOT Compiler builds a symbol vocabulary from your repos in CI/CD; the JIT engine uses those symbols to create and assign templates at runtime. A built-in library covers 150+ frameworks.
Result: 50-80% volume reduction with 100% data fidelity. Every field, value, and timestamp remains intact. Real-world benchmark: 64% reduction on Kubernetes OTel logs (1,835 → 662 bytes per event).
Compact vs. compress:
- Compressed data must be decompressed before it can be searched or aggregated. Compact events remain searchable in place — they can be streamed to log analyzers and aggregated to metrics without a decompression step.
- Many SIEMs bill on uncompressed ingest volume — gzip and zstd reduce storage but not the ingestion bill. Edge Optimizer reduces volume before ingestion, cutting both costs.
- The two approaches are complementary: optimize first, then let your SIEM compress on top.
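The search-in-place difference can be seen in a small sketch (the field names and row layout are hypothetical, chosen only to contrast the two access patterns):

```python
import gzip
import json

# 1,000 synthetic events sharing one structure
events = [{"pod": "load-generator-%d" % i, "status": 503} for i in range(1000)]

# Compressed: smaller at rest, but must be fully decompressed before searching
blob = gzip.compress(json.dumps(events).encode())
hits = [e for e in json.loads(gzip.decompress(blob)) if e["status"] == 503]

# Compacted: structure stored once as a template, value rows scanned in place
template = ["pod", "status"]                # shared key structure, stored once
rows = [[e["pod"], e["status"]] for e in events]
hits2 = [r for r in rows if r[1] == 503]    # searchable, no decompression step

assert len(hits) == len(hits2) == 1000
```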
Why not sampling or filtering? Those are lossy — they permanently discard data, eliminating evidence for troubleshooting and security investigations. Edge Optimizer uses only lossless techniques.
How expanding works depends on your analyzer:
- Splunk: The open-source 10x Splunk app (GitHub) transparently expands at search time. A JavaScript hook intercepts search requests and expands compacted events via KV Store template lookup — your searches, dashboards, and alerts work unchanged
- Datadog & CloudWatch: Events are expanded via Storage Streamer from S3 before ingestion — no expansion step needed on the analyzer side. Your dashboards, monitors, and alerts work as-is
- Elasticsearch / OpenSearch: The open-source L1ES plugin (GitHub) transparently rewrites standard queries and decodes results at search time — Kibana dashboards, KQL queries, and alerts work unchanged. For managed services (Elastic Cloud, AWS OpenSearch Service), Storage Streamer expands events from S3 before ingestion
~1.25x query time for both Splunk and Elasticsearch. Datadog and CloudWatch have zero overhead — events are expanded before ingestion.
- Splunk: The 10x for Splunk app expands using native SPL and KV Store template lookups. Per-event expansion is O(1). A 10-second search takes ~12.5 seconds. Compatible with interactive search, scheduled search, alerts, dashboards, REST API, and summary indexing
- Elasticsearch / OpenSearch: The L1ES plugin expands at the Lucene segment level — each shard handles expansion locally. A 100ms query takes ~125ms. Scales horizontally with cluster size
- Datadog, CloudWatch & managed Elasticsearch: Storage Streamer expands events before ingestion — zero query overhead
ROI formula: (daily volume in GB) × (reduction ratio) × (your per-GB cost) × 365 = annual savings
Use the free Dev tool to measure your exact reduction ratio on your own logs, or try the ROI Calculator with your analyzer's per-GB cost. See Datadog, Splunk, Elasticsearch, or CloudWatch pages for vendor-specific savings estimates.
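Plugging illustrative numbers into the formula above — 100 GB/day, the 64% benchmark reduction, and a hypothetical $2.50/GB ingest price (use your analyzer's actual rate):

```python
def annual_savings(daily_gb: float, reduction: float, per_gb_cost: float) -> float:
    """(daily volume in GB) x (reduction ratio) x (per-GB cost) x 365."""
    return daily_gb * reduction * per_gb_cost * 365

# 100 GB/day at 64% reduction and $2.50/GB (hypothetical rate)
print(round(annual_savings(100, 0.64, 2.50)))  # 58400
```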
Edge Regulator filters and samples events based on cost and priority policies — events are dropped before they ship. This is lossy but sends directly to your analyzer with no additional infrastructure.
Edge Optimizer compacts events losslessly — no data is discarded. For Splunk, the 10x app expands at search time. For self-hosted Elasticsearch and OpenSearch, the L1ES plugin does the same. For Datadog, CloudWatch, and managed Elasticsearch, events route through S3 and Storage Streamer expands them before ingestion.
Many teams deploy both: Regulator for known low-value events, Optimizer for everything else.
512 MB heap + 2 threads handles 100+ GB/day per node. Both values map directly to Kubernetes resource specs in your DaemonSet manifest.
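Those two values map onto a DaemonSet resource spec along these lines — a sketch only; the limit headroom and CPU figure are illustrative assumptions, not shipped defaults:

```yaml
# DaemonSet container resources for ~100 GB/day per node (illustrative)
resources:
  requests:
    memory: "512Mi"   # matches the 512 MB heap above
    cpu: "2"          # two processing threads
  limits:
    memory: "768Mi"   # headroom above the heap (assumed, tune per workload)
```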
Edge Optimizer deploys in your infrastructure — no data leaves your network. Add a tenx block to your existing forwarder Helm values and run helm upgrade. For VMs, the CLI and Docker options take under 5 minutes.

Edge Optimizer — Helm values:

tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY"
  kind: "optimize"
  runtimeName: my-edge-optimizer

Fluent Bit · Fluentd · Filebeat · OTel Collector · Logstash · Splunk UF · Datadog Agent

Full deploy guide →
Cut Log Costs 50%+
CLI · Docker · Kubernetes
Works with any forwarder


