Is log ingestion case insensitive?
For example, we have logs in JSON format and use the following schema:
fields:
  - name: ExampleVersion
    type: string
  - name: UserLocation
    type: string
It works as expected with this event:
{"ExampleVersion": "1.0", "UserLocation": "Test"}
However, there is an issue when JSON keys have the same field name but different casing, such as this:
{"exampleVersion": "1.0", "userLocation": "Test"}
Those kinds of events are not parsed at the moment. How can we work around this behavior?
Handling incoming event field names case-insensitively is not yet supported on the Panther side. To work around this, a transformation step is required to normalize the field names before ingestion.
Option 1: One potential approach is to define all the fields in lowercase in the schema definition and use a tool like Cribl to lowercase all field names in the payload before it reaches Panther.
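Independent of the tool used, the normalization step itself is simple: recursively lowercase every key in the event before forwarding it. A minimal Python sketch (the function name and sample event are illustrative, not part of any Panther or Cribl API):

```python
import json

def lowercase_keys(obj):
    """Recursively lowercase every key in a JSON-like structure."""
    if isinstance(obj, dict):
        return {k.lower(): lowercase_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [lowercase_keys(v) for v in obj]
    return obj

# Mixed-case event from the example above
raw = '{"exampleVersion": "1.0", "UserLocation": "Test"}'
normalized = lowercase_keys(json.loads(raw))
print(json.dumps(normalized))
# {"exampleversion": "1.0", "userlocation": "Test"}
```

With this in place, the schema would declare the fields as `exampleversion` and `userlocation`, and both casings of the incoming event would match.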
Option 2: Alternatively, use two different schemas on the same log source. The caveat is to ensure that they don't overlap, unless they can be distinguished by the S3 prefix (assuming it's an S3 source). By "overlap," we mean having the exact same fields marked as required. Based on the example above, this might look like the following:
# Custom.Schema1
fields:
  - name: userLocation
    type: string
    required: true
and
# Custom.Schema2
fields:
  - name: UserLocation
    type: string
    required: true