Reduce Optimizations

Last updated 1 year ago


Reduce optimizations reduce the volume of your logs, either by the number of events or by raw ingested bytes.

Nimbus analyzes all your logs and finds high volume log patterns based on incoming data. From these patterns, Nimbus generates transformations that aggregate related logs into a single event.

We refer to this style of transformation as lossless aggregation. You can see an example of how this works below.

Lossless Aggregation
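As an illustrative sketch of the idea (the field names `message`, `timestamp`, `count`, and `timestamps` are hypothetical; Nimbus's actual output format may differ), lossless aggregation collapses repeated events into one while keeping enough information to reconstruct the originals:

```python
from collections import defaultdict

def lossless_aggregate(events):
    """Group events that share a message, keeping every timestamp
    so the original stream can be reconstructed (hence "lossless")."""
    groups = defaultdict(list)
    for event in events:
        groups[event["message"]].append(event["timestamp"])
    return [
        {"message": msg, "count": len(ts), "timestamps": ts}
        for msg, ts in groups.items()
    ]

events = [
    {"timestamp": "2024-01-01T00:00:00Z", "message": "GET /healthz 200"},
    {"timestamp": "2024-01-01T00:00:05Z", "message": "GET /healthz 200"},
    {"timestamp": "2024-01-01T00:00:10Z", "message": "GET /healthz 200"},
]
print(lossless_aggregate(events))
# one aggregated event with count=3 and all three timestamps
```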

Optimization Triggers

Nimbus can automatically optimize logs when it detects the following situations:

  1. Logs with common message patterns

  2. Logs with common identifiers

  3. Multi-line Logs

Logs with common message patterns

These are high-volume log events that repeat most of their content. For most applications, this is the primary driver of log volume. Examples include health checks and heartbeat notifications.

Logs with common identifiers

These are logs that describe a sequence of related events. These sequences usually share a common identifier such as a transactionId or a jobId. Examples include background jobs and business-specific user flows.
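A minimal sketch of identifier-based grouping (the transactionId key comes from the example above; the event shapes are otherwise illustrative, not Nimbus's actual data model):

```python
from collections import defaultdict

def group_by_identifier(events, key="transactionId"):
    """Collect the sequence of related events under their shared identifier."""
    sequences = defaultdict(list)
    for event in events:
        sequences[event[key]].append(event["message"])
    return dict(sequences)

events = [
    {"transactionId": "tx-42", "message": "job started"},
    {"transactionId": "tx-42", "message": "step 1 complete"},
    {"transactionId": "tx-42", "message": "job finished"},
]
print(group_by_identifier(events))
# {'tx-42': ['job started', 'step 1 complete', 'job finished']}
```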

Multi-line Logs

These are logs whose message body is spread across multiple lines. Unless you add special logic on the agent side, the default behavior is to emit each newline-delimited line as a separate log event.
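One common heuristic for stitching such events back together is to treat indented lines (for example, the body of a stack trace) as continuations of the previous event. This is a generic sketch of that heuristic, not Nimbus's actual detection logic:

```python
def merge_multiline(lines):
    """Join continuation lines (here: lines starting with whitespace)
    onto the preceding event -- a common multi-line log heuristic."""
    events = []
    for line in lines:
        if events and line.startswith((" ", "\t")):
            events[-1] += "\n" + line
        else:
            events.append(line)
    return events

raw = [
    "ERROR unhandled exception",
    "  Traceback (most recent call last):",
    '    File "app.py", line 10',
]
print(merge_multiline(raw))
# a single event containing all three lines
```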

Optimization Dimensions

Nimbus optimizes logs across the following dimensions:

  1. Volume: Optimize to reduce the number of events logged

  2. Size: Optimize to reduce the size of events logged

Volume

When optimizing for volume, Nimbus aggregates as many logs as it can given the constraints of the destination. For example, Datadog has specific limits around total array size as well as log size; Nimbus aggregates underneath these limits to maximize volume reduction.
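A sketch of batching under a destination byte limit (the 256-byte limit here is illustrative only, not Datadog's actual limit):

```python
import json

def batch_under_limit(events, max_bytes=256):
    """Pack events into aggregate batches, starting a new batch whenever
    adding another event would push the serialized size over the limit."""
    batches, current = [], []
    for event in events:
        candidate = current + [event]
        if current and len(json.dumps(candidate).encode()) > max_bytes:
            batches.append(current)
            current = [event]
        else:
            current = candidate
    if current:
        batches.append(current)
    return batches

events = [{"message": f"GET /healthz 200 #{i}"} for i in range(20)]
batches = batch_under_limit(events)
# every batch stays under the destination limit
assert all(len(json.dumps(b).encode()) <= 256 for b in batches)
```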

Size

When optimizing for size, Nimbus deduplicates and removes redundant metadata as it aggregates logs. For example, when aggregating logs with common message patterns, it is often the case that 40% or more of the metadata (tags and attributes) is identical.
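A sketch of hoisting shared metadata out of the individual entries (the `tags` and `shared_tags` field names are illustrative, not Nimbus's actual schema):

```python
def dedupe_metadata(events):
    """Move tags shared by every event up to the aggregate,
    leaving only the fields that differ on each entry."""
    shared = dict(events[0]["tags"])
    for event in events[1:]:
        shared = {k: v for k, v in shared.items()
                  if event["tags"].get(k) == v}
    slim = [
        {**e, "tags": {k: v for k, v in e["tags"].items() if k not in shared}}
        for e in events
    ]
    return {"shared_tags": shared, "events": slim}

events = [
    {"message": "step 1", "tags": {"env": "prod", "host": "web-1"}},
    {"message": "step 2", "tags": {"env": "prod", "host": "web-2"}},
]
result = dedupe_metadata(events)
print(result["shared_tags"])  # {'env': 'prod'}
```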

Optimization Fidelity

Nimbus-generated optimizations can be tuned via fidelity levels that indicate how much of the original log message to preserve.

High

Nimbus preserves the original log data with perfect fidelity. This means there is no reduction in ingest size: aggregated logs contain all fields of the original log entries, with only identical fields deduplicated.

Medium

Nimbus preserves most of the data. Individual timestamps in aggregated logs are discarded.
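As an illustrative comparison of the three levels (the output shapes are hypothetical; only the nimsize attribute is mentioned on this page, and its exact semantics are defined by Nimbus):

```python
def aggregate(events, fidelity="high"):
    """Illustrative only: high keeps every timestamp, medium drops them,
    low keeps just a count plus an assumed size attribute (nimsize)."""
    base = {"message": events[0]["message"], "count": len(events)}
    if fidelity == "high":
        base["timestamps"] = [e["timestamp"] for e in events]
    elif fidelity == "low":
        # assumption: nimsize records the size of the original payload
        base = {"count": len(events),
                "nimsize": sum(len(e["message"]) for e in events)}
    return base

events = [
    {"timestamp": "2024-01-01T00:00:00Z", "message": "GET /healthz 200"},
    {"timestamp": "2024-01-01T00:00:05Z", "message": "GET /healthz 200"},
]
print(aggregate(events, "high"))    # keeps both timestamps
print(aggregate(events, "medium"))  # timestamps discarded
print(aggregate(events, "low"))     # count and size only
```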

Low

Nimbus optimizes for ingest size. Low-value fields are nominated for removal, and all nimbus attributes except nimsize are removed from the resulting log.

For before and after examples of these triggers and optimization dimensions, see the examples page.