Go to CloudFormation and search for "datadog". You should see one of the following:
datadog-forwarder as its own entry
DatadogIntegration-ForwarderStack-*** as a nested stack
Click on the forwarder stack and click Update.
Select Use current template.
Set the following 2 parameters:
DdUrl: $YOUR_NIMBUS_ENDPOINT
DdPort: 443
Update the stack.
By default, Nimbus automatically detects errors and routes error logs through a separate pipeline that does not pass through aggregation. This means that error logs get routed through in near real time without any transformations.
The Nimbus Hub (or Hub for short) acts as your command center for all optimizations.
Nimbus automatically identifies high traffic log patterns and displays them on the console as a table.
Name: Autogenerated name for the traffic pattern
Volume: The total number of logs analyzed for the given pattern (over a 1h period)
Percentage: The percentage of all logs that this pattern represents (over a 1h period)
Updated: When this log pattern was last updated
To apply a transform, click on the Details link on the log pattern you wish to update.
This will open up the transform modal with two panels.
The left panel shows the transform that Nimbus generated for the given pattern. The transform is constructed using the Nimbus Transformation Language (NTL), a domain-specific language optimized for expressing telemetry optimizations.
The right panel shows a sample of raw logs that the transform would act over.
To apply a transform, click on the Apply button in the left panel. This will immediately deploy the transform.
Nimbus lets you preview how logs will be shaped post transformation. Raw Preview shows the log in pure JSON, whereas Rich Preview shows you how those logs would show up in Datadog.
Aggregated logs are just regular logs with specific nimbus attributes.
The individual payload of the pre-aggregated logs can be found in the nimdata field which is an array of the underlying log events.
The message field is an array of the original log bodies
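As a rough Python sketch (not the Nimbus or dd-cli implementation), disaggregating such a log amounts to merging each nimdata entry back over the shared top-level fields:

```python
# Hypothetical sketch: splitting an aggregated Nimbus log back into
# individual events. The nimdata and message field names follow the docs;
# the exact merge behavior is an assumption for illustration.

def disaggregate(agg_log: dict) -> list[dict]:
    """Rebuild per-event logs from an aggregated log's nimdata array."""
    shared = {k: v for k, v in agg_log.items()
              if k not in ("nimdata", "message")}
    events = []
    for item in agg_log.get("nimdata", []):
        event = dict(shared)   # start from the deduplicated shared metadata
        event.update(item)     # per-event fields win over shared ones
        events.append(event)
    return events

agg = {
    "jobId": 1,
    "message": ["item 123 refreshed", "item 345 refreshed"],
    "nimdata": [
        {"jobId": 1, "message": "item 123 refreshed", "category": "luxury"},
        {"jobId": 1, "message": "item 345 refreshed", "category": "toys"},
    ],
}
print(disaggregate(agg))
```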
When searching for values within a JSON array, use the same syntax as when searching a regular property.
For example, to find log messages with "error", you can use the following search
To search for values within a JSON array of objects, you can use the following search
Nimbus is compatible with existing log monitoring setups. We'll walk through three common scenarios below and how monitors behave after Nimbus:
These are monitors that alert based on logs with errors. Error logs are automatically detected by Nimbus and go through a separate pipeline that does not pass through aggregation. This means any monitors on error logs will be unaffected.
These are monitors that measure the number of logs during a set interval. You can retrieve the original count of pre-aggregated logs by using Sum of @nimsize instead of Count of All Logs.
Original Monitor based on Count
Aggregated Monitor based on @nimsize
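To see why Sum of @nimsize reproduces the original count, here is a small illustrative sketch in Python (the batch sizes are made up for the example):

```python
# Illustrative sketch (not Nimbus code): a count monitor over raw logs
# vs. the same signal reconstructed from aggregated logs via nimsize.

raw_logs = [{"msg": f"health check {i}"} for i in range(250)]

# Suppose Nimbus aggregates these into batches; each aggregated log
# carries a nimsize attribute with the number of original events.
aggregated = [
    {"nimkind": "opt", "nimsize": 100},
    {"nimkind": "opt", "nimsize": 100},
    {"nimkind": "opt", "nimsize": 50},
]

count_of_all_logs = len(raw_logs)                       # original monitor
sum_of_nimsize = sum(l["nimsize"] for l in aggregated)  # post-Nimbus monitor
assert count_of_all_logs == sum_of_nimsize == 250
```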
These are monitors that depend on a specific attribute within the aggregated log. You can either modify the monitor to alarm on the nested attribute or use an NTL directive to keep the attributes you alarm on at the top level.
All instructions for monitors also apply to dashboards.
This guide walks you through using Nimbus to forward and optimize your log traffic from Datadog. It assumes you're sending logs using the Datadog agent. If that's not the case, see the integrations section for documentation on other sources.
Click on Sinks in the left navbar, click Add Sink, and select Datadog.
Enter your Datadog site and a valid API key:
You can find which site you're on by matching your Datadog URL to the following table.
You can either use an existing API key or create a new one in Organization Settings > API Keys.
This section walks through adding Nimbus via the Datadog agent. If you are using a different integration, see the integrations section for integration-specific instructions.
Start by adding the following configuration to your Datadog config.
NOTE: YOUR_NIMBUS_ENDPOINT is a URL that is generated for you when you first create an account.
Optionally, you can configure the endpoint using the following environment variables. This is useful when you're running the Datadog agent in Kubernetes-like environments and don't have easy access to the raw configuration.
Update your Datadog agent to run with the new configuration. Congratulations - you're now forwarding log traffic with Nimbus!
At this point, Nimbus will start analyzing your traffic. It can take up to 24h for initial results to show up if this is your first time integrating. So go grab some coffee and go on with your day. We'll send you an email when the findings are ready for you to review in the Nimbus Hub.
The following is a list of currently documented integration sources with Datadog:
Nimbus helps companies reduce datadog costs by 60% or more. Our data optimization pipeline analyzes telemetry and aggregates it in flight to reduce your volume without dropping data.
Nimbus analyzes all your logs and finds high volume log patterns based on incoming data. From these patterns, Nimbus generates optimizations that aggregate related logs into a single event.
We refer to this style of transformation as lossless aggregation. You can see an example of how this works below.
{
message: ["item 123 refreshed", "item 345 refreshed", "item 567 error"],
jobId: 1,
nimdata: [
{
jobId: 1,
message: "item 123 refreshed",
category: "luxury"
},
{
jobId: 1,
message: "item 345 refreshed",
category: "toys"
},
{
jobId: 1,
message: "item 567 error",
category: "luxury"
}
]
...
}

"error"

nimbusEndpoint: $YOUR_NIMBUS_ENDPOINT (this will be provided by nimbus)
Authentication: Basic Auth
Username and Password: (this will be provided by nimbus)
Click Save when done
Congratulations, you're now forwarding logs to Nimbus!
Nimbus optimizations are automatically surfaced by our traffic analysis engine and can take up to 24 hours to appear when you first connect to Nimbus.
Nimbus currently supports the following optimization types:
Reduce Optimization: Reduce log volume
Lint Optimization: Improve log hygiene
You can reach out to [email protected] with any queries regarding Nimbus. If you're on a paid plan, you also have priority support via a dedicated Slack channel.
observability_pipelines_worker:
  logs:
    enabled: true
    url: "https://YOUR_NIMBUS_ENDPOINT"

DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_ENABLED=true
DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_URL="https://YOUR_NIMBUS_ENDPOINT"

You can add custom attributes by appending them to the end of your drain query string.
Verify via the datadog log console that logs are coming in with the expected attributes
Update your lambda function
That's it - you're done. Nimbus is now optimizing your logs!
DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_ENABLED: true,
DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_URL: $YOUR_NIMBUS_ENDPOINT

heroku drains:add "https://$YOUR_NIMBUS_ENDPOINT?dd-api-key=<DD_API_KEY>&ddsource=heroku&env=<ENV>&service=<SERVICE>&host=<HOST>" -a <APPLICATION_NAME>

heroku drains:add "https://$YOUR_NIMBUS_ENDPOINT?dd-api-key=<DD_API_KEY>&ddsource=heroku&env=<ENV>&service=<SERVICE>&host=<HOST>&<attKey>=<attValue>" -a <APPLICATION_NAME>

heroku drains:remove <id-of-dd-drain>

Optimized: Whether this log pattern is already optimized
To see what this looks like in practice, watch our demo!
To get a better understanding of supported optimizations, see understanding log optimizations for details.
The Nimbus Transformation Language (NTL) is a high level language for reshaping telemetry data.
When Nimbus makes an optimization, it generates NTL to describe when and how an optimization should be made.
You can edit generated transforms to tailor them to your specific business requirements (this is not necessary in the majority of cases).
Nimbus predicates evaluate a series of expressions and return a boolean. They have the following syntax:
key: the dot-delimited path to the target field
op: any valid NTL operator
val: the expected value
Note that key can be omitted, in which case val is expected to be a list of Nimbus predicates.
Checks whether two elements are exactly equal in value
Checks whether a particular path exists in an object
Regex match
Regex match against an array of values
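To make the operator semantics concrete, here is a toy Python evaluator for predicates of this shape. It is an illustration of the syntax above, not the actual NTL engine, and the exact matching semantics (eg. for match_any) are assumptions:

```python
import re

# Toy evaluator for NTL-style predicates, based on the operators listed
# above. The precise NTL semantics are assumptions for illustration.

def get_path(obj, path):
    """Resolve a dot-delimited path like 'a.b.c'; returns (found, value)."""
    cur = obj
    for part in path.split("."):
        if not isinstance(cur, dict) or part not in cur:
            return False, None
        cur = cur[part]
    return True, cur

def evaluate(pred, log):
    op = pred["op"].lower()
    if op in ("and", "or"):            # key omitted: val is a predicate list
        results = [evaluate(p, log) for p in pred["val"]]
        return all(results) if op == "and" else any(results)
    if op == "not":
        return not evaluate(pred["val"], log)
    found, value = get_path(log, pred["key"])
    if op == "exists":
        return found == pred["val"]
    if op == "equal":
        return found and value == pred["val"]
    if op == "match":
        return found and re.search(pred["val"], str(value)) is not None
    if op == "match_any":              # true if any pattern in val matches
        return found and any(re.search(p, str(value)) for p in pred["val"])
    raise ValueError(f"unknown op: {op}")

log = {"foo": 42, "nested": {"bar": "foobar"}}
assert evaluate({"key": "foo", "op": "equal", "val": 42}, log)
assert evaluate({"key": "nested.bar", "op": "match", "val": "^foo"}, log)
assert evaluate({"op": "AND", "val": [
    {"key": "foo", "op": "exists", "val": True},
    {"key": "foo", "op": "equal", "val": 42},
]}, log)
```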
This depends on your specific vendor and your application integration. On average, for customers using the regular datadog agent to forward data, initial setup takes under 5 minutes.
Other services put the burden on you to understand your traffic patterns. Nimbus automatically analyzes your traffic and creates a top N list of high traffic patterns.
Other services make you manually create the rules and filters to create a pipeline. Nimbus automatically generates transforms based on its traffic analysis.
Other services have you sample and drop data to reduce volume. Nimbus applies lossless aggregation which means that you reduce volume without losing visibility.
Yes. Please reach out to [email protected] to get details on vendor specific integrations
Nimbus processes logs in near real time - the average message is received and forwarded in under 100ms.
There is a caveat for aggregated logs. These are held in memory (and buffered on disk) until a flush condition is met (eg. max_events, expire_after_ms, etc). Aggregations can be disabled at any time. Nimbus also has a button that lets you disable all aggregations at once if needed.
Note that Nimbus has built-in rules to not aggregate error logs which means that they will still come through in near real time.
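Conceptually, the buffering behaves like the following Python sketch (the max_events and expire_after_ms names come from the docs above; the rest is illustrative):

```python
# Hypothetical sketch of the aggregation buffering described above:
# events accumulate until a flush condition is met, then the batch
# is released as one aggregated log.

class Aggregator:
    def __init__(self, max_events=100, expire_after_ms=5000):
        self.max_events = max_events
        self.expire_after_ms = expire_after_ms
        self.buffer = []
        self.started = None

    def push(self, event, now_ms):
        """Buffer one event; return a flushed batch or None."""
        if not self.buffer:
            self.started = now_ms
        self.buffer.append(event)
        expired = now_ms - self.started >= self.expire_after_ms
        full = len(self.buffer) >= self.max_events
        if full or expired:
            batch, self.buffer = self.buffer, []
            return batch
        return None

agg = Aggregator(max_events=3, expire_after_ms=1000)
assert agg.push("a", now_ms=0) is None
assert agg.push("b", now_ms=10) is None
assert agg.push("c", now_ms=20) == ["a", "b", "c"]   # max_events reached
assert agg.push("d", now_ms=30) is None
assert agg.push("e", now_ms=2000) == ["d", "e"]      # expire_after_ms reached
```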
Yes. Because Nimbus uses lossless aggregation to optimize your log volume, you don't end up losing any data. You can find out more in the section on lossless aggregation.
Short answer - no. Any log based monitor you currently have can be replicated post-aggregation, either with no changes or some small tweaks. When you onboard, a dedicated Nimbus engineer will work with you to ensure that none of your existing monitors will be impacted.
You can find out more in the section on monitor compatibility.
No. See answer above about log based monitors.
By default, Nimbus deduplicates common metadata and merges unique values from logs when aggregating. The only field that gets discarded is the timestamp field - Nimbus preserves the start time and adds a timestamp_end field to designate the time interval for the aggregation. You can see examples of what this looks like in the lossless aggregation examples.
Yes. See the section for details.
Nimbus is extremely effective at reducing the number of events (100X) and very effective at reducing the size of events (40%). So regardless of the type of pricing model, we will be able to deliver significant savings.
Nimbus keeps observability data for a period of up to 7 days in order to analyze traffic patterns. It does not store or retain data beyond the observability window.
Nimbus offers a 99.9% SLA on uptime. See the SLA for more details.
Even if you've made a commitment, it's likely that you'll exceed the committed usage and have on demand spend (overage) on top of the committed usage. Nimbus can drop the on demand portion to 0 and make sure you don't exceed it.
We can also help you negotiate with Datadog for alternative contracts with your account executive.
Lint optimizations scan logs for common hygiene issues like sensitive data (eg. API tokens) and redundant data (eg. timestamps appearing in the log body).
Nimbus can automatically optimize logs when it detects the following situations:
Logs with a timestamp appearing in the message body
Common kinds of secrets (AWS tokens, GitHub and GitLab tokens, etc)
Take the following log
There are two issues:
the timestamp is emitted with the JSON log and prevents Datadog from properly parsing the log as JSON
Datadog adds its own timestamp at the time of ingestion (when the log was processed by Datadog), which is not the same as the time of emission (when the log was originally emitted)
Nimbus can now recognize this class of issues and apply a lint optimization to fix it. In this case, Nimbus would come up with the following optimization
The log post lint optimization would look like the following
This applies the correct timestamp and lets Datadog properly parse the log as structured JSON. This also makes it possible to run queries like @retry_count > 0, which previously would not have been possible over the string-based log data.
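For reference, the same reshaping can be expressed in Python (the real transform is the VRL shown in these docs; this sketch only mirrors its effect):

```python
import json
import re
from datetime import datetime, timezone

# Python sketch of the lint transform: pull the leading timestamp out of
# the message so the remainder parses as JSON, then promote the emission
# time to the timestamp field. UTC is assumed, as in the VRL version.

raw = '2024/01/23 01:33:12 {"method": "process_checkout", "retry_count": 3}'

m = re.match(r"^(\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) (.+)$", raw)
emitted_at = datetime.strptime(m.group(1), "%Y/%m/%d %H:%M:%S")
emitted_at = emitted_at.replace(tzinfo=timezone.utc)

log = json.loads(m.group(2))          # now valid, structured JSON
log["timestamp"] = emitted_at.isoformat()

assert log["retry_count"] == 3        # queries like @retry_count > 0 now work
assert log["timestamp"] == "2024-01-23T01:33:12+00:00"
```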
In rare cases, lint optimizations can interfere with existing reduce optimizations.
For example, if a current reduce optimization relies on a timestamp to be present in the log body and the lint optimization pulls it out as a log attribute, it means that those logs will no longer be aggregated.
For example, say you have the following log.
You also have the current reduce optimization
You might get a lint optimization that pulls out the current timestamp into a separate attribute
This means that your previous reduce optimization would no longer work because it was using the date as an activation filter.
Today, you can either manually adjust the process_when clause to fix it yourself or wait for Nimbus to re-analyze your logs and provide updated recommendations.
You can think of Nimbus as a data pipeline for your telemetry. We provide an out of the box opinionated framework to process your telemetry according to industry best practices.
parse logs according to source format
meter and derive analytics from ingress
route telemetry depending on conditions
if a message is identified as an error, forward it to the error route
if a message matches an optimization predicate, forward it to the filter route
all messages not processed by a transform or matched as an error go to the default route
applies error specific attributes and optimizations
applies optimization specific attributes and optimizations
applies default attributes and optimizations
currently, this applies nimkind: raw to the log
meter and derive analytics from egress
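Putting the routing rules above together, a simplified sketch might look like this (the error and predicate checks are stand-ins for Nimbus's real detection logic):

```python
# Rough sketch of the routing step described above. The route names and
# the nimkind: raw default mirror the docs; everything else is illustrative.

def route(log, optimization_predicates):
    if "error" in log.get("message", "").lower():
        log["nimkind"] = "error"          # error route: near real time
        return "error"
    for name, predicate in optimization_predicates:
        if predicate(log):
            log["nimkind"] = "opt"        # optimization route: aggregated
            log["nimtransform"] = name
            return "filter"
    log["nimkind"] = "raw"                # default route
    return "default"

preds = [("health-check", lambda l: "health" in l.get("message", ""))]
assert route({"message": "item 567 error"}, preds) == "error"
assert route({"message": "health check ok"}, preds) == "filter"
assert route({"message": "user signed in"}, preds) == "default"
```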
Reduce optimizations reduce the volume of your logs, either along number of events or raw ingested bytes.
Nimbus analyzes all your logs and finds high volume log patterns based on incoming data. From these patterns, Nimbus generates transformations that aggregate related logs into a single event.
We refer to this style of transformation as lossless aggregation. You can see an example of how this works below.
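As a minimal sketch of the idea (grouping by a jobId identifier, as in the example earlier in these docs; the field behavior is an illustrative assumption):

```python
# Hypothetical sketch of lossless aggregation: related logs are grouped
# by a shared identifier, identical fields are deduplicated, and every
# original event is preserved under nimdata.

def aggregate(logs, group_key):
    groups = {}
    for log in logs:
        groups.setdefault(log[group_key], []).append(log)
    out = []
    for key, batch in groups.items():
        out.append({
            group_key: key,
            "message": [l["message"] for l in batch],  # original bodies
            "nimdata": batch,                          # full original events
            "nimsize": len(batch),                     # original event count
        })
    return out

logs = [
    {"jobId": 1, "message": "item 123 refreshed", "category": "luxury"},
    {"jobId": 1, "message": "item 345 refreshed", "category": "toys"},
]
result = aggregate(logs, "jobId")
assert len(result) == 1 and result[0]["nimsize"] == 2
```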
Nimbus can automatically optimize logs when it detects the following situations:
Logs with common message patterns
Logs with common identifiers
Multi-line Logs
For before and after examples of these triggers, see .
These are high volume log events that repeat most of their content. For most applications, most of the time, this will be the primary driver of log volume. Examples include health checks and heartbeat notifications.
These are logs that describe a sequence of related events. These sequences usually have some sort of common identifier like a transactionId or a jobId. Examples include a background job and business specific user flows.
These are logs where the message body is spread across multiple lines. Unless you add special logic on the agent side, the default behavior is to emit each newline-delimited message as a separate log event.
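One common agent-side approach can be sketched as follows: treat any line that does not start with a timestamp as a continuation of the previous event. This is an illustrative Python version, not Nimbus code:

```python
import re

# Illustrative sketch: without agent-side handling, each line of a stack
# trace becomes its own log event. A simple heuristic joins continuation
# lines (those not starting with a timestamp) onto the previous event.

STARTS_EVENT = re.compile(r"^\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2} ")

def join_multiline(lines):
    events = []
    for line in lines:
        if STARTS_EVENT.match(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line   # continuation line
    return events

lines = [
    "2024/01/23 01:33:12 unhandled exception",
    "  at processCheckout (checkout.js:42)",
    "  at main (index.js:7)",
    "2024/01/23 01:33:13 request completed",
]
events = join_multiline(lines)
assert len(events) == 2
assert events[0].count("\n") == 2
```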
Nimbus optimizes logs across the following dimensions:
Volume: Optimize to reduce the number of events logged
Size: Optimize to reduce the size of events logged
For before and after examples of optimizations along these dimensions, see .
When optimizing for volume, Nimbus aggregates as many logs as it can given the constraints of the destination.
For example, Datadog has specific limits around total array size as well as log size. Nimbus makes sure to aggregate underneath these limits to maximize volume reduction.
When optimizing for size, Nimbus deduplicates and removes redundant metadata as it aggregates logs.
For example, when aggregating logs, it's often the case that 40% or more of the metadata (tags and attributes) is the same.
Nimbus generated optimizations can be tuned via fidelity levels to indicate how much of the original log message to preserve.
Nimbus optimizes for preserving original log data with perfect fidelity. This means there is no reduction in ingest size, and aggregated logs contain all fields of the original log entries with only identical fields deduplicated.
Nimbus preserves most of the data. Individual timestamps in aggregated logs are discarded.
Nimbus optimizes for ingest size. Low value fields are nominated for removal. All nominated fields except nimsize are removed from the resulting log.
Last Updated: April 4th, 2024
This Nimbus Service Level Agreement (“SLA”) is a policy governing the use of Nimbus and applies separately to each account using Nimbus.
Capitalized terms used herein but not defined herein shall have the meanings set forth in the Agreement.
Nimbus commits to use commercially reasonable efforts to make the Observability Pipeline, specifically focusing on the data ingestion component, available with the Monthly Uptime Percentages set forth in the table below. In the event the Observability Pipeline does not meet the Service Commitment, you will be eligible to receive a Service Credit as described below.
An “Observability Pipeline” refers to the infrastructure and services provided by Nimbus for the collection, normalization, transformation, and routing of observability data (e.g., metrics, logs, traces) for a specific domain.
“Monthly Uptime Percentage” for the Observability Pipeline is calculated by subtracting from 99.999% the percentage of minutes during the month in which the data ingestion component of the Observability Pipeline was Unavailable. Monthly Uptime Percentage measurements exclude Unavailability resulting directly or indirectly from any Nimbus SLA Exclusions.
A “Service Credit” is a dollar credit, calculated as set forth above, that we may credit back to an eligible account.
The Observability Pipeline is “Unavailable” during a given minute if the data ingestion component fails to receive and process data for all attempts made to the pipeline throughout the minute.
Service Credits are calculated as a percentage of the total charges paid by you for the affected component of the Observability Pipeline for the monthly billing cycle in which the Service Commitment was not met, in accordance with the schedule below:
We will apply any Service Credits only against future Nimbus payments otherwise due from you. At our discretion, we may issue the Service Credit to the credit card you used to pay for the billing cycle in which the Unavailability occurred. Service Credits will not entitle you to any refund or other payment from Nimbus. A Service Credit will be applicable and issued only if the credit amount for the applicable monthly billing cycle is greater than one dollar ($1 USD). Service Credits may not be transferred or applied to any other account. Unless otherwise provided in the Agreement, your sole and exclusive remedy for any unavailability, non-performance, or other failure by us to provide the Observability Pipeline is the receipt of a Service Credit (if eligible) in accordance with the terms of this SLA.
To receive a Service Credit, you must submit a claim by contacting the Nimbus support team. To be eligible, the credit request must be received by us by the end of the second billing cycle after which the incident occurred and must include:
i. the words “SLA Credit Request” in the subject line;
ii. the dates, times, and descriptions of each Unavailability incident that you are claiming;
iii. evidence that corroborates the claimed Unavailability, such as logs or monitoring alerts (any confidential or sensitive information in these documents should be removed or replaced with asterisks).
If the Monthly Uptime Percentage of such request is confirmed by us and is less than the Service Commitment, then we will issue the Service Credit to you within one billing cycle following the month in which the request occurred. Your failure to provide the request and other information as required above will disqualify you from receiving a Service Credit.
The Service Commitment does not apply to any unavailability, suspension, or termination of the Observability Pipeline, or any other Nimbus performance issues: (i) caused by factors outside of our reasonable control, including any force majeure event or Internet access or related problems beyond the demarcation point of Nimbus; (ii) that result from any actions or inactions by you or any third party; (iii) that result from your equipment, software, or other technology and/or third party equipment, software, or other technology (other than third party equipment within our direct control); (iv) arising from our suspension or termination of your right to use the Observability Pipeline in accordance with the Agreement; or (v) that result from your failure to follow the guidelines and best practices described in Nimbus documentation, including exceeding usage limits. If availability is impacted by factors other than those used in our Monthly Uptime Percentage calculation, then we may issue a Service Credit considering such factors at our discretion.
This guide goes over integrating Nimbus with the OpenTelemetry Collector.
In your OTEL collector, add an otlphttp exporter - replace $API_KEY with your Nimbus API key
exporters:
  otlphttp/nimbus:
    endpoint: https://$API_KEY-otlp-intake.logs.us1.nimbus.dev:443

Add the otlphttp exporter to any existing pipeline that processes logs.
Reload existing collectors with the new configuration.
That's it - you're done. Nimbus is now optimizing your logs!
Nimbus lets you set up private connectivity between your cloud provider and Nimbus.
With Nimbus Private Link, you can directly connect your VPC with Nimbus using AWS VPC Endpoints. Note that this is currently only supported for AWS accounts in region us-east-1.
cost reduction: with private link, your egress costs go down by 90% (regular egress on AWS is $0.09/GB; with private link, this becomes $0.01/GB)
compliance and security: prevent sensitive data from traversing the public internet
Ensure the CloudFormation stack is in status CREATE_COMPLETE and the VPC Endpoint is Available with Private DNS names enabled before proceeding.
You can test the endpoint by sending data to $API_KEY-http-intake.privatelink.logs.us1.nimbus.dev in a connected subnet
NOTE: Sending the request outside of the connected VPC will result in a 403 response.
To switch over to private link, update your Nimbus endpoint to the new schema by adding privatelink to it.
See specific docs for your integration endpoints.
The Datadog CLI lets you query logs from your terminal. If you're querying aggregated logs, this also gives you the option to disaggregate them into individual log lines.
git clone [email protected]:nimbushq/dd-cli.git
cd dd-cli
yarn && yarn build
npm link

nimbus logs
Search across dd logs. The following environmental variables need to be set in
order to run this command: DD_SITE, DD_API_KEY, DD_APP_KEY
Options:
--version Show version number [boolean]
--help Show help [boolean]
-q, --query dd log query [string] [required]
-f, --from time in the following format:
2024-02-14T11:35:00-08:00
[string] [required]
-t, --to           time in the following format:
                   2024-02-14T11:35:00-08:00   [string] [required]
-i, --indexes log indexes to search
[array] [default: ["main"]]
-d, --disaggregate disaggregate aggregated logs
[boolean]

Replace env variables with your org specific values.
Nimbus is architected around observability pipeline best practices and usually requires no manual configuration. That said, we understand that real life systems are complex and more flexibility is needed.
To that end, configuration overrides let you override any part of the Nimbus pipeline with your custom VRL code.
Configuration Overrides is currently in Limited Access. Please contact [email protected] if you want to use it
You can use any valid VRL to edit the configuration.
You can use the Override dropdown to change what part of the pipeline you wish to edit. The current options are:
nim/in/global_remap: controls ingress. All data will pass through this transform.
nim/out/global_remap: controls egress. All data that is sent upstream will pass through this transform.
For a full list of configuration options, visit the configuration reference.
Hit Save to apply your changes.
Nimbus supports pausing all transforms in times of distress.
Go to the transforms tab and then click the Pause All button.
Clicking it will open a modal with a dialogue box asking you to type CONFIRM to continue. Type the letters and hit confirm to pause all transforms.
When you are ready to resume transforms, click the Resume button to re-enable existing transforms.
Nimbus Metric Hub enables you to pre-aggregate your host metrics before sending them to datadog. This means we can reduce your billable infrastructure host count by an order of magnitude while still preserving the individual metrics for each host.
Nimbus Metric Optimization is currently in private preview. To get early access, please reach out to [email protected]
@nimdata.category:"luxury"

observability_pipelines_worker:
  logs:
    enabled: true
    url: "https://YOUR_NIMBUS_ENDPOINT"

DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_ENABLED=true
DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_URL="https://YOUR_NIMBUS_ENDPOINT"

type: enum
values:
  opt: processed by a nimbus transform
  noopt: not processed by a nimbus transform
  error: detected as an error by nimbus
optional (only present if logs have been reduced)
type: int
Number of items in the nimdata field
optional (only present if logs match a transform)
Name of the transform that has processed the log
Less than 99.999% but greater than or equal to 99.995%: 10%
Less than 99.995% but greater than or equal to 99.99%: 25%
Less than 99.99%: 80%
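As a worked example of the schedule above (the unavailable-minutes accounting is simplified; the tier boundaries come from the table):

```python
# Worked example of the Service Credit schedule. The tier boundaries
# come from the table above; the minute accounting is simplified.

def monthly_uptime_pct(unavailable_minutes, minutes_in_month=30 * 24 * 60):
    # Per the definition above, the percentage of unavailable minutes
    # is subtracted from a 99.999% baseline.
    return 99.999 - 100.0 * unavailable_minutes / minutes_in_month

def service_credit_pct(uptime_pct):
    if uptime_pct >= 99.999:
        return 0          # Service Commitment met, no credit
    if uptime_pct >= 99.995:
        return 10
    if uptime_pct >= 99.99:
        return 25
    return 80

uptime = monthly_uptime_pct(3)   # 3 unavailable minutes in a 30-day month
assert round(uptime, 4) == 99.9921
assert service_credit_pct(uptime) == 25
```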
service:
  pipelines:
    ...
    $YOUR_PIPELINE:
      ...
      exporters: [..., otlphttp/nimbus]

# regular query
env DD_SITE="***" DD_API_KEY="***" DD_APP_KEY="***" dd-cli logs -q "service:elb" -f "2024-02-14T11:35:00-08:00" -t "2024-02-14T11:38:00-08:00"

# query with log disaggregation
env DD_SITE="***" DD_API_KEY="***" DD_APP_KEY="***" dd-cli logs -q "service:elb" -f "2024-02-14T11:35:00-08:00" -t "2024-02-14T11:38:00-08:00" -d

For JSON logs, the log body is usually represented by the value of the message key.
Top level keys are the first level of keys in a JSON log.
For example, take the following log:
In this case, top, nested, and bottom would be top level keys.
Vector Remap Language is a domain-specific language developed by Vector for modifying your observability data.
- key:
  op:
  val:

key: foo
op: equal
val: 42

key: foo
op: exists
val: true

key: foo
op: match
val: "foo"

key: foo
op: match_any
val: ["foo", "foobar"]

op: AND
val:
  - key: foo
    op: exists
    val: true
  - key: foo
    op: equal
    val: 42

op: NOT
val:
  key: foo
  op: equal
  val: 42

op: OR
val:
  - key: foo
    op: exists
    val: true
  - key: foo
    op: equal
    val: 42

message: '2024/01/23 01:33:12 {"method": "process_checkout", "retry_count": 3}'
timestamp: 2024/01/23 01:33:16
service: checkout
...

process_when:
  - key: service
    op: EQUAL
    value: 'checkout'
  - key: message
    op: MATCH
    value: '^\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2} .+'
vrl: |
  groups = parse_regex!(.message, r'^(?<time>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) (?<data>.+)')
  .message = groups.data
  .timestamp = parse_timestamp!(groups.time + "+00:00", format: "%Y/%m/%d %H:%M:%S%:z")

method: "process_checkout"
retry_count: 3
timestamp: 2024/01/23 01:33:12
service: checkout
...

message: 2024/01/23 01:33:12 foo did bar
...

process_when:
  - key: message
    op: MATCH
    value: '^\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2} foo.+'
...

message: foo did bar
timestamp: 2024/01/23 01:33:12

{
  "top": 1,
  "nested": {
    "inner": 2
  },
  "bottom": 3
}
Nimbus is committed to maintaining the security and integrity of our services. We understand that no technology is perfect, and we believe in working collaboratively with the security community to find and resolve vulnerabilities. Our bug bounty program encourages this collaboration by rewarding security researchers who provide us with high-quality security information.
curl -v -d "{msg: ping}" https://$API_KEY-http-intake.privatelink.logs.us1.nimbus.dev

- https://$API_KEY-$INTEGRATION-intake.logs.us1.nimbus.dev
+ https://$API_KEY-$INTEGRATION-intake.privatelink.logs.us1.nimbus.dev

Nimbus Website: https://hub.nimbus.dev
Nimbus API: https://api.nimbus.dev
Nimbus Data Pipeline
The following are explicitly out of scope:
Third-party services and dependencies
Denial of Service (DoS) attacks
Spam or social engineering techniques
Participants must:
Not be a former or current employee of Nimbus or its affiliates.
Not violate any laws or breach any agreements in order to discover vulnerabilities.
Adhere to the guidelines and scope of this program.
Nimbus provides rewards as follows:
Critical vulnerabilities: Up to $1000
High severity vulnerabilities: Up to $500
Medium severity vulnerabilities: Up to $200
Low severity vulnerabilities: Recognition in our Hall of Fame
Reward amounts are determined by the impact, ease of exploitation, and quality of the report. Decisions on reward eligibility and amounts are made by Nimbus and are final.
To submit a vulnerability, please follow these guidelines:
Provide detailed steps to reproduce the vulnerability, including proof of concept (PoC) code if applicable.
Include your contact information for further communication.
Do not disclose the vulnerability publicly or to any third parties without explicit permission from Nimbus.
Submissions should be sent to security(at)nimbus.dev
Participants agree to:
Handle any confidential information obtained through this program responsibly.
Refrain from exploiting any vulnerabilities beyond what is necessary for demonstration purposes.
Comply with all applicable laws and regulations.
Nimbus commits to:
Respond promptly to submissions.
Not pursue legal action against researchers who adhere to this policy.
Work with researchers to understand and remediate reported vulnerabilities.
For questions or more information about the bug bounty program, please contact security(at)nimbus.dev.











cat << EOF > /usr/local/etc/vector-config.yaml
data_dir: /tmp/
api:
  address: 0.0.0.0:8686
  enabled: true
  playground: false
sources:
  source/journald:
    type: journald
    current_boot_only: true
sinks:
  sink/nimbus:
    type: http
    encoding:
      codec: json
    compression: gzip
    inputs:
      - source/*
    uri: $YOUR_NIMBUS_ENDPOINT
EOF

. ~/.zprofile
chmod +x /tmp/nimsetup.sh
sudo /tmp/nimsetup.sh `which vector`

cat << EOF > /tmp/nimsetup.sh
#!/bin/bash
# Define the binary and service names
BINARY_PATH=$1
BINARY_NAME="vector"
SERVICE_NAME="vector.service"
# Copy the binary to /usr/local/bin
echo "Copying $BINARY_PATH to /usr/local/bin..."
cp "$BINARY_PATH" "/usr/local/bin/$BINARY_NAME"
chmod +x "/usr/local/bin/$BINARY_NAME"
# updating config file
chmod a+r /usr/local/etc/vector-config.yaml
# Create a systemd service file
SERVICE_FILE_PATH="/etc/systemd/system/$SERVICE_NAME"
echo "Creating $SERVICE_FILE_PATH..."
cat <<EOF1 > "$SERVICE_FILE_PATH"
[Unit]
Description=Nimbus Collector
After=network.target
[Service]
Type=simple
ExecStart=/usr/local/bin/$BINARY_NAME -c /usr/local/etc/vector-config.yaml
Restart=on-abort
[Install]
WantedBy=multi-user.target
EOF1
# Reload systemd to recognize the new service
echo "Reloading systemd manager configuration..."
systemctl daemon-reload
# Enable the service to start on boot
echo "Enabling $SERVICE_NAME..."
systemctl enable "$SERVICE_NAME"
# Start the service
echo "Starting $SERVICE_NAME..."
systemctl start "$SERVICE_NAME"
echo "$SERVICE_NAME is now running."
EOFsystemctl status vector
vector.service - Nimbus Collector
Loaded: loaded (/etc/systemd/system/vector.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2024-02-29 22:41:32 UTC; 6min ago
Main PID: 3505 (vector)
Tasks: 6 (limit: 18945)
Memory: 16.5M



Release Date: 2024/04/26
Want to try Nimbus without speaking to a sales rep? Nimbus now supports self serve onboarding!
All new accounts get a free 14-day trial and can send any amount of data to Nimbus without any caps! You can get started by signing up from the and set up your in under 10 minutes!
Release Date: 2024/04/12
You can now reduce your data egress fees by 90% using Nimbus Private Link 🎉
Data egress fees are the "hidden cost" of observability. They are hard to detect because they don't show up in your observability vendor bill but rather in the data transfer fees from your cloud provider and can double your ingest costs for observability.
As an example, AWS charges $0.09/GB for egress (for comparison, datadog charges $0.10/GB for data ingress). Private link sets up a private connection between your VPC and Nimbus and reduces the cost of data transfer to $0.01/GB.
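To make the arithmetic concrete, here is a quick sketch using only the per-GB rates quoted above (real bills include other line items, and the example volume is hypothetical):

```python
# Hypothetical egress-cost comparison using the per-GB rates quoted above.
AWS_EGRESS_PER_GB = 0.09      # standard AWS internet egress rate
PRIVATE_LINK_PER_GB = 0.01    # rate over Nimbus Private Link

def monthly_cost(gb: float, rate_per_gb: float) -> float:
    """Monthly data transfer cost in dollars for `gb` gigabytes."""
    return gb * rate_per_gb

volume = 10_000  # example volume: 10 TB/month of log egress
standard = monthly_cost(volume, AWS_EGRESS_PER_GB)    # 900.0
private = monthly_cost(volume, PRIVATE_LINK_PER_GB)   # 100.0
print(f"savings: {1 - private / standard:.0%}")       # prints "savings: 89%"
```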
Private link is available for free to all Nimbus customers in us-east-1. Instructions for getting started
Release Date: 2024/03/28
engine: Nimbus now supports . Lint optimizations scan logs for common hygiene issues like sensitive data (eg. api tokens) and redundant data (eg. a timestamp appearing in the log)
pipeline: improved pipeline p99 latency by using provisioned throughput for block storage
pipeline: reduced impact of AZ failover by buffering telemetry data on disk across multiple AZs
Release Date: 2024/03/14
ntl: support directive
ntl: support directive
api: sink updates are now immediately applied
Release Date: 2024/02/29
We launched a . This lets you query logs from your terminal. If you're querying aggregated logs, it also gives you the option to disaggregate them into individual log lines.
ui: handle various text overflow issues in the ui
Release Date: 2024/02/15
| Heads up that we will be switching to a bi-weekly release model moving forward due to the growing scope of what the team is currently taking on.
ui: you can now update your observability sinks in the UI
ntl: Nimbus now supports the NOT operator
UI now shows all modals in full screen regardless of content size
Release Date: 2024/02/08
self serve onboarding: you can now provision and manage your Nimbus destinations without human contact (that said, we're still here if you need us)
UI improvements: snappier page loads and consistent alignment of tables and elements
Release Date: 2024/02/01
Nimbus SLA - Nimbus now has an SLA of 99.999% uptime. You can follow our public status page to be notified of incidents.
Release Date: 2024/01/25
Nimbus now shows ingest bytes reduction in addition to event reduction on optimizations
Release Date: 2024/01/18
optimization fidelity: you can now customize log fidelity during optimization, choosing between preserving 100% of the original input and minimizing data ingest size
Release Date: 2024/01/11
support max, min, retain, flat_unique, and longest_array
bad validation rule when updating a transform causes update to fail on certain merge strategies
Release Date: 2024/01/04
New year, new look.
We launched the new Nimbus website. You should have received new credentials in your email. The new website is a much snappier and more lightweight replacement for our previous Retool application and will enable us to ship much more ambitious features later this year.
new frontend at https://hub.nimbus.dev
Release Date: 2023/12/28
support optimization_mode to tune Nimbus optimization
traffic analysis now shows volume reduction by either events or size
support which accepts a list of paths to be deleted from the target payload
Release Date: 2023/12/21
support additional_sinks in
support additional_sources in
support MATCH_ANY
Release Date: 2023/12/14
: support aggregating metrics across hosts (private preview)
better error messages when there is a syntax error when manually creating a transform
Release Date: 2023/12/07
directive allows you to pull properties inside of nested objects to the top level
support for the inside of compound statements
Release Date: 2023/11/23
Pause All: Support manual override to
clicking on the log pattern from analysis will now take you directly to events in datadog
transforms now show usage stats
revamped documentation with demo video
reduce egress bytes by removing nimraw
Release Date: 2023/11/09
analysis now shows log output preview for generated transformation
support modifying transforms generated by analysis
Release Date: 2023/10/26
analysis now shows log samples for findings
support ability to delete a transform
analysis now auto generates names for findings
analysis now highlights new findings
Release Date: 2023/10/12
Configuration Overrides: You can now add custom VRL to control all aspects of the ingress stage of the nimbus pipeline
support directive
support directive
support directive
support directive
Release Date: 2023/09/28
Usage dashboards: you can now access your usage dashboard graphs
Smarter error detection - we now autodetect errors based on
Release Date: 2023/09/14
Heroku Datadog Integration: Support Datadog
Heroku Log Forwarding Integration: Support Datadog
Release Date: 2023/08/31
Hello world!
Nimbus Transformation Language (NTL): A high level language for working with telemetry data.
Nimbus Traffic Analysis: Automatically identify high traffic log patterns
Nimbus Transform Recommendations: Auto generated transforms using the based on traffic analysis results
support merge_strategies directive
Nimbus Transforms are high level NTL functions that have specialized logic for specific optimizations.
Once an optimization is applied, you can find its corresponding transformation in the transforms section of the console.
You can click on Edit to either update or delete an existing transform.
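Several of the properties documented below move or drop keys inside the event payload. As a taste of what that means in practice, here is a small sketch of the pull_up behavior (semantics inferred from the examples in this section, not Nimbus source code):

```python
# Sketch of the documented pull_up behavior: hoist a dotted path
# (eg. "message.transactionId") to the top level of the event.
def pull_up(event: dict, path: str) -> dict:
    *parents, leaf = path.split(".")
    obj = event
    for key in parents:
        obj = obj.get(key, {})
    if isinstance(obj, dict) and leaf in obj:
        event[leaf] = obj.pop(leaf)
    return event

event = {"message": {"transactionId": 1, "msg": "adding item to cart"}}
pull_up(event, "message.transactionId")
# event is now {"message": {"msg": "adding item to cart"}, "transactionId": 1}
```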
The following properties are available on all transforms
status: required
type:
Determines when a transform should be applied. Takes one or more predicates as input.
Example:
process_when:
  - {key: service, op: EQUAL, val: foo}
status: optional
type: boolean
default: false
When set to true, designates that the current transform can apply to error logs. By default, error logs are not transformed but are immediately proxied downstream for immediate processing.
status: optional
type: string
default: message
The key where the is located
Example:
# logs sent by dd lambda extension have the message field nested inside the message key
# eg:
# {message: { message: "START ...", lambda: {arn: arn:aws:lambda:us-east-1:33333333:function:test-lambda, ...}}}
msg_field:
  - message.message
status: optional
type: string[]
When specified, a list of paths that should be made into
Example:
pull_up:
  - message.transactionId
Before:
{
  "message": {
    "transactionId": 1,
    ...
  }
}
After:
{
  "transactionId": 1,
  "message": {
    ...
  }
}
status: optional
type: string[]
When specified, a list of paths that should be removed
Example:
remove:
  - message.id
  - message.source
  - message.timeout
status: optional
type: string[]
If set, removes the selected paths from
Example:
remove_from_nimdata:
  - status
  - hostname
  - ...
status: optional
type: boolean
If set, removes the . Helps significantly reduce data size.
Example:
remove_nimdata: true
The Nimbus reduce transform is a superset of the transform.
When using reduce, remember that group_by only works on
If the key you need is nested, make sure to pull it up using the pull_up directive.
status: optional
type: enum
The default behavior is as follows:
The first value of a string field is kept and subsequent values are discarded.
For timestamp fields the first is kept and a new field [field-name]_end is added with the last received timestamp value.
Numeric values are summed.
Strategies:
longest_array: Keep the longest array seen.
max: Keep the maximum numeric value seen.
min: Keep the minimum numeric value seen.
retain: Discard all but the last value found.
array: Append each value to an array.
concat: Concatenate each string value, delimited with a space.
concat_newline: Concatenate each string value, delimited with a newline.
concat_raw: Concatenate each string, without a delimiter.
discard: Discard all but the first value found.
flat_unique: Create a flattened array of all unique values.
status: optional
type: NTL
A condition used to distinguish the first event of a transaction. If this condition resolves to true for an event, the previous transaction is flushed (without this event) and a new transaction is started.
Example:
starts_when:
  - {key: message, op: MATCH, val: "\n\{"}
status: optional
type: integer
The maximum number of events to group together.
Example:
max_events: 200
status: optional
type: integer
default: 30000
The maximum period of time to wait after the last event is received, in milliseconds, before a combined event should be considered complete.
Suppose you have the following logs:
And you have the following reduce transform
Your processed logs would look like the following
[
{
"host": "host1",
"fooatt": "one",
"baratt": "alpha"
},
{
"host": "host2",
"fooatt": "two",
"baratt": "beta"
},
{
"host": "host1",
"fooatt": "three",
"baratt": "gamma"
},
{
"host": "host1",
"baratt": "gamma"
}
]
name: hostreducer
# only apply this reducer when the log event has both `host` and `fooatt` keys
process_when:
  - {key: host, op: exists, val: true}
  - {key: fooatt, op: exists, val: true}
group_by:
  - host
[
// this log was processed and grouped correctly
{
"host": "host1",
"nimdata": [
{
"host": "host1",
"fooatt": "one",
"baratt": "alpha"
},
{
"host": "host1",
"fooatt": "three",
"baratt": "gamma"
}
],
"nimsize": 2,
"nimkind": "opt",
"nimmatch": "hostreducer"
},
{
"host": "host2",
"nimdata": [
{
"host": "host2",
"fooatt": "two",
"baratt": "beta"
}
],
"nimsize": 1,
"nimkind": "opt",
"nimmatch": "hostreducer"
},
// this log did not get processed as it did not have a `fooatt` key
{
"host": "host1",
"baratt": "gamma",
"nimkind": "noopt"
}
]
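The grouping above can be sketched in a few lines of Python. This is a toy model of the documented reduce behavior (grouping, nimdata, nimsize, nimkind), not the actual Nimbus implementation:

```python
# Toy model of the documented reduce behavior: events that satisfy
# process_when (both `host` and `fooatt` present) are grouped by `host`;
# everything else passes through tagged as unoptimized ("noopt").
def reduce_by_host(events, transform_name="hostreducer"):
    groups, passthrough = {}, []
    for event in events:
        if "host" in event and "fooatt" in event:   # process_when predicates
            groups.setdefault(event["host"], []).append(event)
        else:
            passthrough.append({**event, "nimkind": "noopt"})
    reduced = [
        {"host": host, "nimdata": members, "nimsize": len(members),
         "nimkind": "opt", "nimmatch": transform_name}
        for host, members in groups.items()
    ]
    return reduced + passthrough

logs = [
    {"host": "host1", "fooatt": "one", "baratt": "alpha"},
    {"host": "host2", "fooatt": "two", "baratt": "beta"},
    {"host": "host1", "fooatt": "three", "baratt": "gamma"},
    {"host": "host1", "baratt": "gamma"},  # no fooatt: not processed
]
out = reduce_by_host(logs)
# host1 groups 2 events, host2 groups 1, and the last log passes through as noopt
```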
Examples of log patterns identified and optimized by Nimbus.
These are high volume log events that repeat most of their content. For most applications, most of the time, this is the primary driver of log volume. Examples include health checks and heartbeat notifications.
[
  {
    "ddsource": "nodejs",
    "host": "itemrefresh-0",
    "message": "refresh item catalogue for itemId: ITEM470",
    "path": "/",
    "service": "itemrefresh",
    "status": "info"
  },
  // ... more events of the same shape
]
97.5% event volume reduction
79% ingest volume reduction
These are logs that describe a sequence of related events. These sequences usually have some sort of common identifier like a transactionId or a jobId. Examples include a background job and business specific user flows.
75% reduction in event volume
4% reduction in ingest volume
Many times, an application will emit a single log across multiple lines, as is the case with a pretty-printed JSON log. Unless you specifically account for this, most logging agents will consume each line as a separate log event. Nimbus can identify when this happens and stitch these logs back together.
90% event volume reduction
87% ingest volume reduction
{
"ddsource": "nodejs",
"host": "itemrefresh-0",
"path": "/",
"service": "itemrefresh",
"status": "info",
"message": [
"refresh item catalogue for itemId: ITEM470",
"refresh item catalogue for itemId: ITEM8185",
"refresh item catalogue for itemId: ITEM7594",
// 37 more messages
...
],
"nimsize": 40,
"timestamp": "2023-11-23T00:16:09.970Z",
"timestamp_end": "2023-11-23T00:16:11.322Z"
}
- name: itemrefresh
  type: reduce
  rules:
    process_when:
      - key: service
        op: EQUAL
        val: itemrefresh
    group_by:
      - host
      - path
    pull_up:
      - ddsource
      - path
      - status
      - service
    msg_field: message
[
{
"ddsource": "nodejs",
"host": "checkout-0",
"message": {
"customerId": "CU26940939",
"itemId": "ITEM1417",
"itemName": "Product 7",
"itemPrice": 2.612748019396105,
"msg": "adding ITEM9798 to cart",
"quantity": 2,
"transactionId": "TX79924095"
},
"service": "checkout",
"status": "info",
"timestamp": "2024-04-26T15:45:14.000000138Z"
},
{
"ddsource": "nodejs",
"host": "checkout-0",
"message": {
"customerId": "CU26940939",
"discountAmount": 13.837782236831986,
"msg": "applying discount DISC16",
"transactionId": "TX79924095"
},
"service": "checkout",
"status": "info",
"timestamp": "2024-04-26T15:45:14.000000831Z"
},
{
"ddsource": "nodejs",
"host": "checkout-0",
"message": {
"customerId": "CU26940939",
"estimatedDelivery": "2023-11-24",
"msg": "calculating shipping info for ITEM9798",
"shippingAddress": "902 Main St, Anytown, AN 68387",
"shippingMethod": "Standard",
"transactionId": "TX79924095"
},
"service": "checkout",
"status": "info",
"timestamp": "2024-04-26T15:45:15.000000523Z"
},
{
"ddsource": "nodejs",
"host": "checkout-0",
"message": {
"customerId": "CU26940939",
"msg": "payment for ITEM9798 succeeded",
"paymentMethod": "PayPal",
"totalAmount": 56.645267111988474,
"transactionId": "TX79924095"
},
"service": "checkout",
"status": "info",
"timestamp": "2024-04-26T15:45:16.000000214Z"
}
]
{
"customerId": "CU26940939",
"ddsource": "nodejs",
"host": "checkout-0",
"message": [
"adding ITEM9798 to cart",
"applying discount DISC16",
"calculating shipping info for ITEM9798",
"payment for ITEM9798 succeeded",
],
"nimdata": [
{
"message": {
"itemId": "ITEM1417",
"itemName": "Product 7",
"itemPrice": 2.612748019396105,
"msg": "adding ITEM9798 to cart",
"quantity": 2
},
"timestamp": "2024-04-26T15:45:14.000000138Z"
},
{
"message": {
"discountAmount": 13.837782236831986,
"msg": "applying discount DISC16"
},
"timestamp": "2024-04-26T15:45:14.000000831Z"
},
{
"message": {
"estimatedDelivery": "2023-11-24",
"msg": "calculating shipping info for ITEM9798",
"shippingAddress": "902 Main St, Anytown, AN 68387",
"shippingMethod": "Standard"
},
"timestamp": "2024-04-26T15:45:15.000000523Z"
},
{
"message": {
"msg": "payment for ITEM9798 succeeded",
"paymentMethod": "PayPal",
"totalAmount": 56.645267111988474
},
"timestamp": "2024-04-26T15:45:16.000000214Z"
}
],
"service": "checkout",
"status": "info",
"timestamp": "2024-04-26T15:45:14.000000138Z",
"timestamp_end": "2024-04-26T15:45:16.000000905Z",
"transactionId": "TX79924095"
}
- name: checkout
  type: reduce
  rules:
    # process when service is exactly equal to "checkout"
    process_when:
      - key: service
        op: EQUAL
        val: checkout
    # make sure these fields are still available at the "top level" instead of being nested
    pull_up:
      - message.transactionId
      - message.customerId
    # group all logs by the following top level keys
    group_by:
      - customerId
      - transactionId
    # specify the message field, the highlighted body of the log
    msg_field: message.msg
    # remove unnecessary timestamp fields
    remove:
      - timestamp
      - message.timestamp
    # details how custom top level keys will be merged
    merge_strategies:
      transactionId: discard
      customerId: discard
[
{
"ddsource": "nimbus",
"host": "some-host",
"message": "{",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.108Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": " \"id\": \"2460\",",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.134Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": " \"method\": \"GET\",",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.147Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": " \"url\": \"/health\",",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.160Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": " \"query\": {},",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.174Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": " \"params\": {},",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.187Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": " \"headers\": {",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.199Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": " \"host\": \"100.119.27.217:8080\",",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.210Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": " \"user-agent\": \"kube-probe/1.18\",",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.221Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": " \"accept-encoding\": \"gzip\",",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.233Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": " \"connection\": \"close\"",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.245Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": " },",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.256Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": " \"remoteAddress\": \"::ffff:172.20.65.189\",",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.269Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": " \"remotePort\": 60444",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.280Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": "}",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.292Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": "{",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.304Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": " \"statusCode\": 200,",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.316Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": " \"headers\": {",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.327Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": " \"x-powered-by\": \"Express\"",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.338Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": " }",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.350Z"
},
{
"ddsource": "nimbus",
"host": "some-host",
"message": "}",
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.361Z"
}
]
{
"ddsource": "nimbus",
"host": "some-host",
"message": "{ \"id\": \"2460\", \"method\": \"GET\", \"url\": \"/health\", \"query\": {}, \"params\": {}, \"headers\": { \"host\": \"100.119.27.217:8080\", \"user-agent\": \"kube-probe/1.18\", \"accept-encoding\": \"gzip\", \"connection\": \"close\" }, \"remoteAddress\": \"::ffff:172.20.65.189\", \"remotePort\": 60444}{ \"statusCode\": 200, \"headers\": { \"x-powered-by\": \"Express\" }}",
"nimkind": "opt",
"nimmatch": "healthcheck",
"nimsize": 21,
"path": "/",
"service": "healthcheck",
"source_type": "http_server",
"status": "info",
"timestamp": "2023-11-23T00:05:58.108Z"
}
- name: healthcheck
  type: reduce
  rules:
    process_when:
      - key: service
        op: EQUAL
        val: healthcheck
    group_by:
      - host
    msg_field: message
    starts_when:
      - key: message
        op: MATCH
        val: \n\{
    merge_strategies:
      msg_source: concat_newline
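The starts_when rule above keys on a message that opens a new JSON object; the stitching it drives can be sketched as follows. This is a simplified model (it assumes each event's message is one line of a pretty-printed payload, and uses a `^\{` pattern as an analogue of the MATCH rule), not the Nimbus engine:

```python
# Simplified model of multiline stitching: start a new group whenever a
# message opens a new JSON object, then join each group's lines back
# into a single log event.
import re

STARTS_WHEN = re.compile(r"^\{")  # analogue of the starts_when MATCH rule

def stitch(messages):
    groups, current = [], []
    for msg in messages:
        if STARTS_WHEN.match(msg) and current:
            groups.append(current)   # flush the previous group
            current = []
        current.append(msg)
    if current:
        groups.append(current)
    return ["".join(group) for group in groups]

lines = ['{', ' "statusCode": 200,', ' "headers": {', ' "x-powered-by": "Express"', ' }', '}']
stitched = stitch(lines)
# stitched == ['{ "statusCode": 200, "headers": { "x-powered-by": "Express" }}']
```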