How to resolve schema parsing errors with malformed JSON and truncated EKS log data in Panther

Last updated: February 18, 2026

Issue

When trying to process EKS logs from an S3 bucket, schema parsing errors occur due to non-JSON data and truncated JSON events. The logs contain mixed data formats, with some events being plain text (non-JSON) and others being malformed JSON (truncated due to AWS size limits), causing parsing failures.

These errors prevent successful log ingestion and result in error messages like:

  • "No match" errors for non-JSON events

  • "parse failed: readObjectStart: expect { or n, but found I" for plain text logs

  • "parse failed: readStringSlowPath: unexpected end of input" for truncated JSON

Resolution

Here are some options to resolve this issue:

  1. Address the upstream issues causing log truncation and mixed formats; this is the best long-term solution.

  2. Create a raw event filter that excludes non-JSON and malformed events: configure the filter to process only events that start with { (after optional leading whitespace) and end with } (before optional trailing whitespace).

    1. To ensure that Amazon.EKS.Authenticator logs, which are parsed with the Fastmatch log parser, are not excluded as well, this regex can be applied as an inclusion filter: ^\s*\{.*|^time=. With this inclusion filter, a log is matched and ingested only when it:
      • Starts with { (which is most likely JSON), or
      • Starts with time= (Amazon.EKS.Authenticator logs)

    2. To be safe and ensure that no valuable logs are dropped, test the inclusion filter extensively before applying it.

  3. If your custom schema contains field names with invalid characters (such as / and .), use the rename transformation to map invalid field names to valid ones in your schema definition.
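For option 3, a hedged sketch of what such a schema fragment might look like is shown below. The field names are invented for illustration, and the exact shape of the rename transformation should be verified against Panther's custom schema reference before use:

```yaml
# Hypothetical custom schema fragment: maps a source field whose name
# contains invalid characters ("kubernetes/pod.name") to a valid name.
fields:
  - name: kubernetes_pod_name
    type: string
    rename:
      from: kubernetes/pod.name
```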
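The behavior of the inclusion filter from option 2 can be sketched in plain Python. The sample log lines below are hypothetical stand-ins for mixed EKS output; Panther applies the regex itself, this only illustrates what it matches:

```python
import re

# Inclusion filter from option 2: keep lines that start with "{"
# (after optional whitespace) or with "time=" (Amazon.EKS.Authenticator).
INCLUDE = re.compile(r"^\s*\{.*|^time=")

sample_logs = [
    '{"level":"info","msg":"audit event"}',        # JSON -> kept
    'time=2024-01-01T00:00:00Z msg="sts check"',   # authenticator -> kept
    'I0101 00:00:00.000000 1 controller.go:42] syncing',  # plain text -> dropped
]

kept = [line for line in sample_logs if INCLUDE.match(line)]
```

Note that this regex anchors only the start of the line, so JSON that begins with { but was truncated mid-event would still pass; if that matters for your source, the filter condition would also need to check for a closing }.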

Cause

This issue occurs when EKS logs contain mixed data formats and when JSON logs exceed AWS's maximum log entry size limits. AWS truncates log entries that exceed the size limit, resulting in malformed JSON that cannot be parsed by Panther's schemas. Additionally, some EKS logs may contain plain text entries that don't match the expected JSON format.
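The truncation failure mode described above can be reproduced with any strict JSON parser; the sample event below is hypothetical:

```python
import json

# A JSON event cut off mid-string, as happens when AWS truncates an
# entry that exceeds the maximum log entry size.
truncated = '{"log":"GET /healthz 200","kubernetes":{"pod_name":"coredns-5d'

try:
    json.loads(truncated)
    parsed = True
except json.JSONDecodeError:
    # Strict parsers reject truncated JSON outright; Panther's schema
    # parsing fails the same way ("unexpected end of input").
    parsed = False
```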

Panther currently doesn't support partial parsing of truncated JSON or automatic handling of mixed data formats within a single log source, so fixing this upstream or filtering out problematic events is the recommended approach to prevent ingestion errors.