elastic / logstash

Logstash - transport and process your logs, events, or other data
https://www.elastic.co/products/logstash

Cisco ASA pattern error #1369

Closed. dav3860 closed this issue 9 years ago

dav3860 commented 10 years ago

Hi,

There is an issue with the built-in pattern for Cisco ASA firewalls. The line:

# ASA-6-302020, ASA-6-302021
CISCOFW302020_302021 %{CISCO_ACTION:action}(?: %{CISCO_DIRECTION:direction})? %{WORD:protocol} connection for faddr %{IP:dst_ip}/%{INT:icmp_seq_num}(?:\(%{DATA:fwuser}\))? gaddr %{IP:src_xlated_ip}/%{INT:icmp_code_xlated} laddr %{IP:src_ip}/%{INT:icmp_code}( \(%{DATA:user}\))?

should be replaced by:

# ASA-6-302020_302021 inbound
CISCOFW302020_302021_1 %{CISCO_ACTION:action}(?: (?<direction>inbound))? %{WORD:protocol} connection for faddr %{IP:src_ip}/%{INT:icmp_seq_num}(?:\(%{DATA:fwuser}\))? gaddr %{IP:dst_xlated_ip}/%{INT:icmp_code_xlated} laddr %{IP:dst_ip}/%{INT:icmp_code}( \(%{DATA:user}\))?
# ASA-6-302020_302021 outbound
CISCOFW302020_302021_2 %{CISCO_ACTION:action}(?: (?<direction>outbound))? %{WORD:protocol} connection for faddr %{IP:dst_ip}/%{INT:icmp_seq_num}(?:\(%{DATA:fwuser}\))? gaddr %{IP:src_xlated_ip}/%{INT:icmp_code_xlated} laddr %{IP:src_ip}/%{INT:icmp_code}( \(%{DATA:user}\))?

Indeed, the src_ip and dst_ip mappings are different depending on whether the direction is inbound or outbound.

You will need to update the Logstash Cookbook page for Cisco ASA too, because the single CISCOFW302020_302021 pattern is replaced with two patterns (CISCOFW302020_302021_1 and CISCOFW302020_302021_2).
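
For illustration, a pair of ASA-6-302020/302021 messages matching these patterns would look roughly like this (addresses and ICMP ids are made up):

Built inbound ICMP connection for faddr 203.0.113.5/0 gaddr 198.51.100.20/1 laddr 10.0.0.20/1
Teardown outbound ICMP connection for faddr 203.0.113.5/0 gaddr 198.51.100.20/1 laddr 10.0.0.20/1

In both messages faddr is the foreign (outside) address and laddr is the local (inside) address, so for an inbound connection faddr is the source, while for an outbound connection it is the destination. That is why a single pattern cannot assign src_ip and dst_ip correctly for both directions.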

dav3860 commented 10 years ago

In fact, this mistake appears in multiple patterns. Here are the fixed patterns:

# ASA-6-302020_302021 inbound
CISCOFW302020_302021_1 %{CISCO_ACTION:action}(?: (?<direction>inbound))? %{WORD:protocol} connection for faddr %{IP:src_ip}/%{INT:icmp_seq_num}(?:\(%{DATA:fwuser}\))? gaddr %{IP:dst_xlated_ip}/%{INT:icmp_code_xlated} laddr %{IP:dst_ip}/%{INT:icmp_code}( \(%{DATA:user}\))?
# ASA-6-302020_302021 outbound
CISCOFW302020_302021_2 %{CISCO_ACTION:action}(?: (?<direction>outbound))? %{WORD:protocol} connection for faddr %{IP:dst_ip}/%{INT:icmp_seq_num}(?:\(%{DATA:fwuser}\))? gaddr %{IP:src_xlated_ip}/%{INT:icmp_code_xlated} laddr %{IP:src_ip}/%{INT:icmp_code}( \(%{DATA:user}\))?

# ASA-2-106001 inbound
CISCOFW106001_1 (?<direction>Inbound) %{WORD:protocol} connection %{CISCO_ACTION:action} from %{IP:src_ip}/%{INT:src_port} to %{IP:dst_ip}/%{INT:dst_port} flags %{GREEDYDATA:tcp_flags} on interface %{GREEDYDATA:interface}
# ASA-2-106001 outbound
CISCOFW106001_2 (?<direction>Outbound) %{WORD:protocol} connection %{CISCO_ACTION:action} from %{IP:dst_ip}/%{INT:dst_port} to %{IP:src_ip}/%{INT:src_port} flags %{GREEDYDATA:tcp_flags} on interface %{GREEDYDATA:interface}

# ASA-2-106006, ASA-2-106007 inbound
CISCOFW106006_106007_1 %{CISCO_ACTION:action} (?<direction>inbound) %{WORD:protocol} from %{IP:src_ip}/%{INT:src_port}(\(%{DATA:src_fwuser}\))? to %{IP:dst_ip}/%{INT:dst_port}(\(%{DATA:dst_fwuser}\))? (?:on interface %{DATA:interface}|due to %{CISCO_REASON:reason})
# ASA-2-106006, ASA-2-106007 outbound
CISCOFW106006_106007_2 %{CISCO_ACTION:action} (?<direction>outbound) %{WORD:protocol} from %{IP:dst_ip}/%{INT:dst_port}(\(%{DATA:dst_fwuser}\))? to %{IP:src_ip}/%{INT:src_port}(\(%{DATA:src_fwuser}\))? (?:on interface %{DATA:interface}|due to %{CISCO_REASON:reason})
# ASA-2-106010
CISCOFW106010 %{CISCO_ACTION:action} %{CISCO_DIRECTION:direction} %{WORD:protocol} src %{IP:src_ip}/%{INT:src_port}(\(%{DATA:src_fwuser}\))? dst %{IP:dst_ip}/%{INT:dst_port}(\(%{DATA:dst_fwuser}\))?

# ASA-6-302013, ASA-6-302014, ASA-6-302015, ASA-6-302016 inbound
CISCOFW302013_302014_302015_302016_1 %{CISCO_ACTION:action}(?: (?<direction>inbound))? %{WORD:protocol} connection %{INT:connection_id} for %{DATA:src_interface}:%{IP:src_ip}/%{INT:src_port}( \(%{IP:src_xlated_ip}/%{INT:src_xlated_port}\))?(\(%{DATA:src_fwuser}\))? to %{DATA:dst_interface}:%{IP:dst_ip}/%{INT:dst_port}( \(%{IP:dst_xlated_ip}/%{INT:dst_xlated_port}\))?(\(%{DATA:dst_fwuser}\))?( duration %{TIME:duration} bytes %{INT:bytes})?(?: %{CISCO_REASON:reason})?( \(%{DATA:user}\))?
# ASA-6-302013, ASA-6-302014, ASA-6-302015, ASA-6-302016 outbound
CISCOFW302013_302014_302015_302016_2 %{CISCO_ACTION:action}(?: (?<direction>outbound))? %{WORD:protocol} connection %{INT:connection_id} for %{DATA:dst_interface}:%{IP:dst_ip}/%{INT:dst_port}( \(%{IP:dst_xlated_ip}/%{INT:dst_xlated_port}\))?(\(%{DATA:dst_fwuser}\))? to %{DATA:src_interface}:%{IP:src_ip}/%{INT:src_port}( \(%{IP:src_xlated_ip}/%{INT:src_xlated_port}\))?(\(%{DATA:src_fwuser}\))?( duration %{TIME:duration} bytes %{INT:bytes})?(?: %{CISCO_REASON:reason})?( \(%{DATA:user}\))?

# ASA-6-602303, ASA-6-602304 inbound
CISCOFW602303_602304_1 %{WORD:protocol}: An (?<direction>inbound) %{GREEDYDATA:tunnel_type} SA \(SPI= %{DATA:spi}\) between %{IP:src_ip} and %{IP:dst_ip} \(user= %{DATA:user}\) has been %{CISCO_ACTION:action}
# ASA-6-602303, ASA-6-602304 outbound
CISCOFW602303_602304_2 %{WORD:protocol}: An (?<direction>outbound) %{GREEDYDATA:tunnel_type} SA \(SPI= %{DATA:spi}\) between %{IP:dst_ip} and %{IP:src_ip} \(user= %{DATA:user}\) has been %{CISCO_ACTION:action}

And here is the grok match clause for these patterns:

      match => [
        "cisco_message", "%{CISCOFW106001_1}",
        "cisco_message", "%{CISCOFW106001_2}",
        "cisco_message", "%{CISCOFW106006_106007_1}",
        "cisco_message", "%{CISCOFW106006_106007_2}",
        "cisco_message", "%{CISCOFW106010}",
        "cisco_message", "%{CISCOFW106014}",
        "cisco_message", "%{CISCOFW106015}",
        "cisco_message", "%{CISCOFW106021}",
        "cisco_message", "%{CISCOFW106023}",
        "cisco_message", "%{CISCOFW106100}",
        "cisco_message", "%{CISCOFW110002}",
        "cisco_message", "%{CISCOFW302010}",
        "cisco_message", "%{CISCOFW302013_302014_302015_302016_1}",
        "cisco_message", "%{CISCOFW302013_302014_302015_302016_2}",
        "cisco_message", "%{CISCOFW302020_302021_1}",
        "cisco_message", "%{CISCOFW302020_302021_2}",
        "cisco_message", "%{CISCOFW305011}",
        "cisco_message", "%{CISCOFW313001_313004_313008}",
        "cisco_message", "%{CISCOFW313005}",
        "cisco_message", "%{CISCOFW402117}",
        "cisco_message", "%{CISCOFW402119}",
        "cisco_message", "%{CISCOFW419001}",
        "cisco_message", "%{CISCOFW419002}",
        "cisco_message", "%{CISCOFW500004}",
        "cisco_message", "%{CISCOFW602303_602304_1}",
        "cisco_message", "%{CISCOFW602303_602304_2}",
        "cisco_message", "%{CISCOFW710001_710002_710003_710005_710006}",
        "cisco_message", "%{CISCOFW713172}",
        "cisco_message", "%{CISCOFW733100}"
      ]
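
For context, here is a minimal sketch of the filter block this match array would live in. The patterns_dir path and surrounding stages are assumptions: cisco_message is assumed to have already been extracted from the raw syslog line by an earlier stage, and the custom CISCOFW*_1/_2 patterns above must be saved in a file under patterns_dir.

      filter {
        grok {
          # assumed directory containing a file with the custom patterns defined above
          patterns_dir => "/etc/logstash/patterns"
          match => [
            "cisco_message", "%{CISCOFW302020_302021_1}",
            "cisco_message", "%{CISCOFW302020_302021_2}"
            # ... plus the remaining CISCOFW entries from the list above ...
          ]
        }
      }

With this in place, any cisco_message that matches none of the listed patterns is tagged with _grokparsefailure, which is grok's standard behavior on a failed match.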
GregMefford commented 10 years ago

@dav3860 Could you please provide documentation or example log lines that would indicate that this change is correct?

When I created these Grok expressions, I referred to the Cisco website (http://www.cisco.com/c/en/us/td/docs/security/asa/syslog-guide/syslogs/logmsgs.html) and validated them against real logs from an ASA, but it's possible that the Cisco documentation is wrong (I noticed that it was wrong in several places).

In most of these cases, the logs are clearly saying "from X to Y", implying that X is the source and Y is the destination. Inbound or Outbound would only determine whether the source is inside or outside the firewall, right? Do you have reason to believe that, in reality, the logs are lying?

donckers commented 10 years ago

%ASA-3-106010: Deny inbound protocol 2 src WORD:###.###.###.### dst identity:###.###.###.###
%ASA-3-713206: Group = ###.###.###.###, IP = ###.###.###.###, Tunnel Rejected: Conflicting protocols specified by tunnel-group and group-policy
%ASA-3-713902: Group = ###.###.###.###, IP = ###.###.###.###, QM FSM error (P2 struct &0x34b624e0, mess id 0x43db584d)!
%ASA-3-713902: Group = ###.###.###.###, IP = ###.###.###.###, Removing peer from correlator table failed, no match!
%ASA-3-713902: Group = ###.###.###.###, IP = ###.###.###.###, Removing peer from peer table failed, no match!
%ASA-4-113019: Group = ###.###.###.###, Username = ###.###.###.###, IP = ###.###.###.###, Session disconnected. Session Type: IKE, Duration: 0h:00m:00s, Bytes xmt: 0, Bytes rcv: 0, Reason: IKE Delete
%ASA-4-713903: Group = ###.###.###.###, IP = ###.###.###.###, Can't find a valid tunnel group, aborting...!
%ASA-4-713903: Group = ###.###.###.###, IP = ###.###.###.###, Error: Unable to remove PeerTblEntry
%ASA-4-713903: IP = ###.###.###.###, Header invalid, missing SA payload! (next payload = 4)
%FMANFP-6-IPACCESSLOGP: F0: fman_fp_image: list NON-SPACE denied udp ###.###.###.###(#####) -> ###.###.###.###(#####), 1 packet

donckers commented 10 years ago

Sorry, I may have posted this to the wrong thread. I had parsing issues with patterns similar to this and got a little eager to provide examples.

GregMefford commented 10 years ago

@dav3860 On closer inspection (and comparing against actual ASA logs), I can confirm that you're correct about the ICMP and IPSEC SA patterns being incorrect depending on the direction. I added some comments to the PR to help you tweak it.

Unfortunately, this will be a breaking change for people who are using it, because when they upgrade Logstash, they will no longer have a couple of the patterns that they're declaring in their config files (since the names have changed).

I'm trying to think of a seamless way to make this migration, but I think it would be pretty difficult to merge both into one pattern because (last I checked) Grok can't handle multiple named-literal captures in the same Grok expression. That is, you can't have (?<direction>inbound) and (?<direction>outbound) in the same expression, or it won't be captured properly.

My initial reaction is that this Cisco log-parsing feature is probably too complex to use in the first place. How does the Logstash team feel about reworking these patterns into a cisco_syslog codec/filter?

My gut tells me that it could be implemented more efficiently in Ruby with a lookup table, instead of requiring users to list out each of the patterns in their config file. It wouldn't be hard to pull the existing patterns into a filter and simply tag the unmatched messages with _grokparsefailure, so the behavior would be nearly identical to the existing one, but it would simplify the config file dramatically.

GregMefford commented 10 years ago

@donckers Yeah, I don't see how your paste is relevant to this thread. Feel free to start another issue thread describing the problem you're having if there's not one already.

stonith commented 9 years ago

@GregMefford any update on this?

GregMefford commented 9 years ago

@stonith I haven't had any free time to devote to creating a new plugin for this lately, and I'm not sure when I'd be able to come up with something since it's not a pressing need that I have at the moment. Is it something you need, or something you would want to help create?

stonith commented 9 years ago

I'm fine with referencing the patterns directly even though it's a bit ugly. I'm just looking for a properly working pattern set. I saw your comments on the pull request; I'll try to validate the changes, but are you planning on merging them eventually?

GregMefford commented 9 years ago

I'd be happy to do what I can to get changes merged, but I'm not a committer to the project; I just wrote the original pattern set.

If you look at the comments I added to the commits in this PR, I think you could probably sort out what changes are needed. I just haven't had a chance to do the work of sorting out the details and testing it against real logs to validate.

untergeek commented 9 years ago

@GregMefford @stonith @dav3860

Beginning with Logstash 1.5, the patterns are in their own repository, https://github.com/logstash-plugins/logstash-patterns-core

You can create issues and pull requests directly against the patterns repo now. The resulting patterns ruby gem will be versioned with semver, allowing users to revert to an earlier build, should something go wrong.

We can migrate PRs to the new repo, but not issues. If you're willing to take a look at this, would you consider making a PR for the proposed changes?

untergeek commented 9 years ago

By the way, this means that updating your patterns (in 1.5) will be as easy as:

bin/plugin install logstash-patterns-core

and restarting Logstash.

GregMefford commented 9 years ago

Pretty neat! I'll try to take a look at getting this moved over to the new repo.

purbon commented 9 years ago

Got this moved to https://github.com/logstash-plugins/logstash-patterns-core/issues/46 as it really belongs in the patterns-core repo nowadays.

purbon commented 9 years ago

The PR https://github.com/elastic/logstash/pull/1383 was also moved to the patterns-core repo. Closing this issue and moving everything there.