A Log Source has failed with the ambiguous error message "Source experienced errors recently while trying to access S3 objects".
You may need to open a support case to determine the cause of the error. To start the troubleshooting process, please do the following:
In the Panther Console, navigate to Configure > Log Sources.
You will see a list of all Log Sources in your Panther instance. Find the Source that is triggering the error. (Pro-Tip: You can use the "Filter" button to only display unhealthy Log Sources, which might make it easier to find.)
Click on the log source that is producing the error.
On the left side of the log source page, look under Basic Info and find the Log Source ID. Copy this value.
Navigate to the Data Explorer.
In the Data Explorer, enter the following SQL query:

select * from panther_monitor.public.data_audit
where p_occurs_since('30 days') and
status != 'SUCCESS' and
p_source_id = 'YOUR_LOG_SOURCE_ID'
order by p_event_time desc
limit 100;
Replace 'YOUR_LOG_SOURCE_ID' with the Log Source ID you copied from the Basic Info panel. Also, adjust the time window in p_occurs_since as needed.
Export the query results: you can export them to a CSV file by clicking the "Download CSV" button above the results panel in the Data Explorer.
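Before attaching the export to your ticket, it can help to summarize which errors occur and how often. A minimal sketch using only the Python standard library, assuming the export contains the status and error columns returned by the query above (the filename data_audit_export.csv is a placeholder):

```python
import csv
from collections import Counter

def summarize_errors(csv_path):
    """Count distinct (status, error) combinations in a Data Explorer CSV export."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Group rows by status and the first 80 characters of the error
            # text, so minor variations don't fragment the counts.
            counts[(row.get("status", ""), row.get("error", "")[:80])] += 1
    return counts
```

Including these counts in the support ticket (for example, "97 of 100 failures are AccessDenied") gives the Support Engineer a quick picture of the failure pattern.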
Contact Panther Support and include the exported query results. In your ticket to Support, include:
Which type of Log Source is having issues, and whether it's a custom Log Integration or one of the Panther-provided ones.
The CSV export from the SQL query.
All errors you see on this Log Source, including "Source experienced errors recently while trying to access S3 objects".
Whether this is a new Log Source or an existing one that previously worked fine.
With this information, a Support Engineer will be able to diagnose the problem and restore your Log Source more quickly.
Note that if you chose the wrong stream type and then correct it, you will still need to backfill the affected logs so they are processed correctly. As a spot check, send a single file first to confirm it is processed correctly with your schema and file type. Once you have confirmed it works as expected and that your current and backfill data are structured the same, re-ingest the remaining data in small batches rather than all at once.
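The small-batch re-ingest described above can be sketched as a simple batching helper. This is an illustration only; the key list and batch_size are placeholders you would adapt to your own backfill tooling:

```python
def batches(keys, batch_size=50):
    """Yield S3 object keys in small groups so a backfill can be
    re-ingested incrementally rather than all at once."""
    for start in range(0, len(keys), batch_size):
        yield keys[start:start + batch_size]
```

After the single-file spot check succeeds, you would re-upload one batch at a time and verify the source stays healthy before sending the next.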
This error is a catch-all for any failure reading files from an S3 bucket, including, but not limited to:
A misconfigured stream type
Errors in the log file formatting
A CloudFormation template deployed to the wrong account
A service control policy denying the request. Example:
"error": "seek path [Records] failed: s3manager download failed: AccessDenied: Access Denied\n\tstatus code: 403
The role configured in your log source doesn't have permission to read the S3 objects. The role appears in the assumedrolearn column of the results from the query provided in the Resolution section. To fix this, you can either:
Give the above role read access to the data in your bucket, or
Change the role that is configured for that log source to one that has adequate permissions.
"""FAILURE""","""READ""","""s3manager download failed: AccessDenied: Access Denied\n\tstatus