Ingest On-Demand
Cut Analytics Costs by 80%

Stream from S3 only the events you need into
Splunk · Datadog · Elasticsearch · CloudWatch Logs

Select & Stream From S3

View streaming costs in the managed 10x Console or your monitoring stack

Log10x Storage Streamer on-demand S3 streaming

Storage Streamer diverts bulk data to S3 at $0.023/GB — stream only what you need back to your analyzer.

ROI Calculator

Full ingestion vs. on-demand streaming from S3

Example Scenario

  • Volume: 1,000 GB/day
  • Data queried: 20%
  • Nodes: 50

Current monthly cost (100% to analyzer): $75,000
With Storage Streamer (20% to analyzer): $17,130
Monthly savings: $57,870 (77% reduction)
Annual savings: $694,440
Cost breakdown (with Storage Streamer):

  • S3 storage: $690
  • S3 operations: $113
  • Compute (EKS): $600
  • Monitoring + SQS: $60
  • Network: $300
  • Egress: ?
  • Log10x license: $1,500
  • Analyzer license: $15,000
Break-even: Storage Streamer keeps saving money until you stream more than 97% of your volume back to the analyzer
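The calculator's numbers can be reproduced with simple arithmetic. This sketch assumes an analyzer license rate of $2.50/GB ingested and roughly $2,130/month of fixed Storage Streamer overhead (S3, compute, network, Log10x license), both inferred from the figures above rather than stated by the vendor:

```python
# Sketch of the ROI arithmetic behind the calculator above.
# ASSUMPTIONS (inferred, not vendor-confirmed): $2.50/GB analyzer rate,
# ~$2,130/month fixed Storage Streamer infrastructure + license overhead.

DAILY_GB = 1_000
DAYS = 30
ANALYZER_RATE = 2.50       # $/GB ingested (assumed)
QUERIED_FRACTION = 0.20    # share of data streamed back to the analyzer
FIXED_OVERHEAD = 2_130     # $/month: S3, compute, network, Log10x license (assumed)

monthly_gb = DAILY_GB * DAYS
full_ingestion = monthly_gb * ANALYZER_RATE                                 # $75,000
with_streamer = FIXED_OVERHEAD + QUERIED_FRACTION * monthly_gb * ANALYZER_RATE  # $17,130
savings = full_ingestion - with_streamer                                    # $57,870
reduction = savings / full_ingestion                                        # ~77%

# Break-even: the queried fraction at which streamer cost equals full ingestion.
break_even = (full_ingestion - FIXED_OVERHEAD) / (monthly_gb * ANALYZER_RATE)   # ~97%

print(full_ingestion, with_streamer, savings, round(reduction * 100), round(break_even * 100))
```

Under these assumptions the math matches the figures shown: $57,870/month saved (77%), with savings persisting until roughly 97% of volume is streamed back.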

Storage Streamer Workflow

Index on upload; query and stream on demand. Works with your log analytics. View implementation on GitHub →


Frequently Asked Questions

How It Works

What is Storage Streamer and how does it work?

Storage Streamer stores 100% of your logs in S3 at just $0.023/GB/month and indexes them at ingest time.

When you query, the system scans the index to find which files contain matching data. Only those files are streamed to your analyzer.

You pay analyzer license costs only on the data you actually query — typically 5-30% of total volume — not on all your logs.
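The index-then-stream flow described above can be illustrated with a toy Bloom filter: build one small filter per S3 file at upload time, then answer queries by checking the filters and streaming only candidate files. Every name and the index format here are hypothetical, not the product's actual implementation:

```python
import hashlib

# Toy illustration of the flow described above: each uploaded file gets a
# Bloom filter built at ingest; a query consults the filters and only
# "streams" files that might contain a match. Hypothetical names throughout.

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, token):
        # Derive `hashes` bit positions from a salted SHA-256 of the token.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{token}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, token):
        for p in self._positions(token):
            self.bits |= 1 << p

    def might_contain(self, token):
        # No false negatives: a file holding the token always matches.
        return all(self.bits & (1 << p) for p in self._positions(token))

# "Ingest": index each S3 file's tokens at upload time.
files = {
    "s3://logs/2024-06-01.json": ["level=ERROR", "service=payment"],
    "s3://logs/2024-06-02.json": ["level=INFO", "service=checkout"],
}
index = {}
for name, tokens in files.items():
    bf = BloomFilter()
    for t in tokens:
        bf.add(t)
    index[name] = bf

# "Query": scan the small index, not the files; only candidates stream.
candidates = [f for f, bf in index.items() if bf.might_contain("level=ERROR")]
print(candidates)
```

Because Bloom filters never produce false negatives, a file containing matching events is always streamed; the occasional false positive only means an extra file is scanned, never a missed result.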

What are the main use cases?

Incident investigation — search months of historical logs during an outage without pre-paying for ingestion. Matching events stream to your analyzer within seconds.

Scheduled dashboard population — a Kubernetes CronJob streams the last hour's data from S3 on a recurring schedule, keeping Splunk/Datadog/Elastic dashboards current without full-volume ingestion.

Compliance and audit — retain years of logs in S3 at $0.023/GB/month. Stream to your analyzer only when auditors request specific time ranges.

Metric aggregation — convert S3 events into metric data points (counts, rates, percentiles) and publish to Datadog Metrics, Prometheus, CloudWatch, or Elastic, bypassing log ingestion entirely.

Query workflows, retrieval times, and savings per vendor →
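The metric-aggregation use case above can be sketched in a few lines: collapse raw events into counts and rates before anything reaches a metrics backend. The event shape and metric names are illustrative only:

```python
from collections import Counter

# Minimal sketch of the metric-aggregation use case: turn S3 events into
# metric data points (counts, per-second rates) instead of ingesting the
# raw logs. Event shape and metric names are illustrative assumptions.

events = [
    {"level": "ERROR", "service": "payment"},
    {"level": "INFO",  "service": "payment"},
    {"level": "ERROR", "service": "checkout"},
]

def aggregate(events, window_seconds=60):
    counts = Counter(e["level"] for e in events)
    # One data point per level, ready to publish to Datadog Metrics,
    # Prometheus, CloudWatch, or Elastic.
    return [
        {"metric": f"logs.count.{lvl.lower()}",
         "count": n,
         "rate": n / window_seconds}
        for lvl, n in sorted(counts.items())
    ]

points = aggregate(events)
print(points)
```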

How does Storage Streamer reduce costs?

Store 100% of logs in S3 at a fraction of analyzer costs. Pay your analyzer license only on the data you actually query — typically 5-30% of total volume.

Most customers see 70-80% cost reduction. Use the ROI calculator above to estimate your savings, or see our pricing page for details.

How fast is data retrieval?

Bloom filter lookups identify matching files in under 1 second. Full event retrieval depends on result set size:

  • ~100 events — 2-5 seconds
  • ~10,000 events — 10-30 seconds

For sub-second alerting, keep critical log types streaming to your primary analyzer. Use Storage Streamer for incident investigation, scheduled dashboard population, compliance audits, and metric aggregation — see use cases for query workflows and savings per vendor.

How do engineers search S3 data during an incident?

Queries are initiated via REST API — from a script, runbook, or CronJob. Results stream back into your existing analyzer.

  1. Send a query with time range and search expression
  2. Bloom filter index identifies matching S3 files (<1 second)
  3. Matching events stream through Fluent Bit to your analyzer (Splunk HEC, Elasticsearch Bulk API, Datadog, CloudWatch)
  4. Events appear in Kibana / Splunk Search / Datadog Logs with original timestamps — alongside your live data

Example — find all payment errors in the last 6 hours:

curl -X POST http://streamer:8080/streamer/query \
  -d '{"from":"now(\"-6h\")","to":"now()",
       "search":"level == \"ERROR\" && message.includes(\"payment\")"}'

No separate UI to learn. Results are standard indexed events in your existing tool — search, filter, and dashboard them the same way you always do. Events are permanently ingested; your analyzer's standard retention policy applies.

For recurring workflows (dashboard population, compliance scans), schedule queries via Kubernetes CronJob.
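A recurring query like that could be built as below. The endpoint and query syntax follow the curl example above; the search expression is illustrative, and the POST itself is left as a comment since it depends on your cluster setup:

```python
import json

# Sketch of the "last hour" query body a Kubernetes CronJob could POST to
# the Storage Streamer REST API on a schedule. The search expression and
# in-cluster service URL are illustrative assumptions.

STREAMER_URL = "http://streamer:8080/streamer/query"  # in-cluster service (assumed)

def last_hour_query(search: str) -> str:
    """Build the JSON body for a query over the trailing hour."""
    return json.dumps({
        "from": 'now("-1h")',
        "to": "now()",
        "search": search,
    })

body = last_hour_query('service == "checkout"')
print(body)
# A CronJob container would then POST it, e.g. with urllib.request:
#   urllib.request.urlopen(urllib.request.Request(
#       STREAMER_URL, data=body.encode(), method="POST"))
```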

Comparisons

How does Storage Streamer compare to Splunk Federated Search for S3?

Federated Search scans every file in S3 via AWS Glue, with no indexes. It's capped at 10 TB per search, runs at roughly 100 seconds per TB, and is available only for Splunk Cloud on AWS.

Storage Streamer indexes files at upload so queries skip 99%+ of files in seconds. No scan caps, works with Splunk Cloud and Enterprise, and results stream with original timestamps for full analytics.

Detailed comparison →

Storage Streamer vs Datadog Archive Search?

Archive Search scans every byte in your S3 archive at $0.10/GB per query — no indexes. Results are capped at 100K events and expire after 24 hours.

Storage Streamer indexes files at upload so queries skip 99%+ of files in seconds, with no per-query fees and no result caps.

Detailed comparison →

Storage Streamer vs Datadog Flex Logs?

Flex Logs still charges $0.10/GB to ingest everything into Datadog first — it just stores it cheaper after. Querying Flex data adds compute tier fees on top.

Storage Streamer skips ingestion entirely: logs go straight to your S3 at $0.023/GB and you stream only what you need to any analyzer.

Detailed comparison →

Storage Streamer vs Datadog Rehydration?

Rehydration re-indexes archived data back into Datadog's hot tier — you pay ingestion costs a second time, and only data that was originally sent is available. Anything you filtered at ingest is gone.

Storage Streamer keeps 100% of your logs in S3 and queries in-place — no re-ingestion, no data loss, results in seconds.

Detailed comparison →

What does deployment look like?

Storage Streamer deploys in your AWS account — no data leaves your infrastructure. Deploy via a Terraform module that provisions S3 buckets, SQS queues, and IAM roles:

module "tenx_streamer" {
  source                                 = "log-10x/tenx-streamer/aws"
  tenx_api_key                           = var.tenx_api_key
  tenx_streamer_index_source_bucket_name = "my-app-logs"
}

Run terraform apply and logs start flowing to S3 within minutes. Full deploy guide →