I'm ingesting my AWS VPC Flow logs into Panther, but a significant portion of those logs are records of internal traffic (within a VPC) and don't hold value to me. How can I filter those logs from entering Panther to save on my ingestion quota?
The simplest method of filtering these logs out is to add a raw event filter that looks for IP patterns indicating internal traffic. For example, if all of your internal IP addresses have the form 10.XXX.XXX.XXX, you could use the following regex filter to match log events for traffic between two such addresses:
\b10(\.\d+){3} 10(\.\d+){3}\b
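As a quick sanity check, you can exercise this pattern locally before adding it as a filter. A minimal Python sketch (the flow-log fragments below are illustrative, not real records):

```python
import re

# Matches two space-separated 10.x.x.x addresses (the srcaddr and dstaddr
# of a flow record). The \b boundaries prevent partial matches against
# addresses that merely end in "10", such as 210.1.2.3.
internal = re.compile(r"\b10(\.\d+){3} 10(\.\d+){3}\b")

assert internal.search("eni-abc123 10.0.1.5 10.0.0.220 443 80")       # internal-to-internal
assert not internal.search("eni-abc123 10.0.1.5 203.0.113.5 443 80")  # internal-to-external
```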
If you have multiple IP patterns that indicate an internal address, and you want to filter all communication between them, you'll need to add a filter for each permutation of the patterns. For example, if the two common patterns are 10.XXX.XXX.XXX and 172.XXX.XXX.XXX, you'd need four filters:
\b10(\.\d+){3} 10(\.\d+){3}\b
\b10(\.\d+){3} 172(\.\d+){3}\b
\b172(\.\d+){3} 10(\.\d+){3}\b
\b172(\.\d+){3} 172(\.\d+){3}\b
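If your filter accepts full regex alternation, the four permutations collapse into a single pattern. A Python sketch of the equivalent combined pattern (the `(10|172)` alternation is standard regex, but confirm your filter supports it before relying on it):

```python
import re

# One pattern covering all four src/dst permutations of 10.x.x.x and
# 172.x.x.x addresses in a space-delimited flow record.
combined = re.compile(r"\b(10|172)(\.\d+){3} (10|172)(\.\d+){3}\b")

cases = [
    ("10.0.1.5 10.0.0.220", True),
    ("10.0.1.5 172.31.16.21", True),
    ("172.31.16.21 10.0.1.5", True),
    ("172.31.16.21 172.31.16.139", True),
    ("10.0.1.5 203.0.113.5", False),  # internal-to-external traffic is kept
]
for fragment, expected in cases:
    assert bool(combined.search(fragment)) is expected
```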
When creating the raw event filter, here's a raw sample VPC Flow Log event you can use to test your filter against:
- eni-1235b8ca123456789 10.0.1.5 10.0.0.220 10.0.1.5 203.0.113.5
Note: There are two ways to test the IPs: either use the raw sample event above and change the IP values, or use a normalized event straight from your log source in Panther (normalized events contain p_standard fields)!