My Lacework log source shows up as unhealthy. When I looked at the issues, all of the errors are of the type:
failed to read line: gzip decompression failed: flate: corrupt input before offset 39958128
I downloaded a number of the files mentioned, tested them, and they all decompressed without errors. Why do I get this error in Panther?
To resolve this issue, use a separate S3 bucket (and therefore a separate Panther log source) for each Lacework sub-account.
This behavior is caused by Lacework's export configuration. When Lacework exports data to an S3 bucket, it reuses the same S3 object key: instead of appending a random suffix per Lacework account, it writes to a static name that varies only by the hour. As a result, the S3 object can be overwritten while Panther is still reading it, which corrupts the gzip stream mid-download and produces the failed-decompression error. This also explains why the files decompress cleanly when downloaded later: by then the overwrite has completed and the object is once again a valid gzip file.
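A minimal sketch of why a mid-read overwrite breaks decompression: the bytes delivered to the reader end up being a splice of two different gzip objects, which is no longer a valid stream. This is an illustrative simulation, not Panther's actual code; the payloads and splice point are made up.

```python
import gzip
import zlib

# Simulate a reader that starts downloading one gzip object and, after the
# key is overwritten, receives the remainder of a different gzip object.
old = gzip.compress(b"event-line\n" * 5000)  # object when the read starts
new = gzip.compress(b"other-line\n" * 5000)  # object after the key is overwritten

# The reader ends up with the first half of the old object followed by
# the tail of the new one -- not a valid gzip stream.
spliced = old[: len(old) // 2] + new[len(new) // 2 :]

try:
    gzip.decompress(spliced)
    print("decompressed cleanly")
except (gzip.BadGzipFile, zlib.error, EOFError) as exc:
    # Mirrors the kind of error Panther surfaces for the overwritten object.
    print(f"gzip decompression failed: {exc}")
```

Downloading the object after the overwrite has finished yields a complete, internally consistent gzip file, which is why manual checks succeed.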