envoyproxy / envoy

Cloud-native high-performance edge/middle/service proxy
https://www.envoyproxy.io

redis: Envoy segfaults when no catch-all-route specified in v1.27 #29120

Open nickamorim opened 1 year ago

nickamorim commented 1 year ago

If you are reporting any crash or any potential security issue, do not open an issue in this repo. Please report the issue via emailing envoy-security@googlegroups.com where the issue will be triaged appropriately.

Title: redis: Envoy segfaults when no catch-all-route specified in v1.27

Description:

When no catch-all route is specified and a request is made with a key that does not match any configured prefix route, Envoy segfaults.

This is likely due to https://github.com/envoyproxy/envoy/pull/27902, since the looked-up route now falls back to catch_all_route whenever a key does not match a prefix route. If catch_all_route is null (which is the case when no catch_all_route is specified), the result is a null pointer dereference. The test that checked for a missing catch-all route was removed as part of that same PR -- https://github.com/envoyproxy/envoy/pull/27902/files#diff-a340d81bca2cdf585a8843b8de21af05aa68578de7e547b8393b487675449ef9L52-L64.
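
For illustration, the suspected pattern boils down to something like the following minimal, standalone C++ sketch (simplified, made-up names; not the actual Envoy source): the lookup falls back to the catch-all route, and the caller dereferences the result without checking whether a catch-all route was ever configured.

#include <iostream>
#include <memory>
#include <string>

// Simplified stand-ins for the real route types.
struct Route {
  std::string cluster;
};

struct PrefixRoutes {
  std::shared_ptr<Route> foo_route;        // route for keys starting with "foo:"
  std::shared_ptr<Route> catch_all_route;  // null when no catch_all_route is configured

  // Falls back to catch_all_route when no prefix matches, without signalling
  // "no route found" to the caller.
  std::shared_ptr<Route> routeForKey(const std::string& key) const {
    if (key.rfind("foo:", 0) == 0) {
      return foo_route;
    }
    return catch_all_route;  // may be null
  }
};

int main() {
  PrefixRoutes routes{std::make_shared<Route>(Route{"redis_cluster"}), nullptr};
  auto route = routes.routeForKey("key");    // "key" does not match "foo:"
  std::cout << route->cluster << std::endl;  // null dereference -> SIGSEGV
}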

Repro steps:

I ran envoy -c config.yaml locally with the config pasted below, and in another terminal ran redis-cli -p 6380 get key.

❯ envoy --version

envoy  version: 7bba38b743bb3bca22dffb4a21c38ccc155fbef8/1.27.0/Distribution/RELEASE/BoringSSL

Config:


static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: "0.0.0.0", port_value: 6380 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.redis_proxy
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
                stat_prefix: redis_stats
                settings:
                  op_timeout: 5s
                prefix_routes:
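                  # Note: no catch_all_route is configured, so a key that does not
                  # match "foo:" has no route; on 1.27 this is what triggers the crash.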
                  routes:
                    - prefix: "foo:"
                      cluster: "redis_cluster"
  clusters:
    - name: redis_cluster
      connect_timeout: 5s
      type: STRICT_DNS
      lb_policy: MAGLEV
      load_assignment:
        cluster_name: redis_cluster
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: localhost
                      port_value: 6379

Logs:

redis-cli -p 6380 get key
Error: Server closed the connection
(2.15s)

Call Stack:

[2023-08-17 19:21:03.738][107][critical][backtrace] [./source/server/backtrace.h:104] Caught Segmentation fault, suspect faulting address 0x38
[2023-08-17 19:21:03.738][107][critical][backtrace] [./source/server/backtrace.h:91] Backtrace (use tools/stack_decode.py to get line numbers):
[2023-08-17 19:21:03.738][107][critical][backtrace] [./source/server/backtrace.h:92] Envoy version: 7bba38b743bb3bca22dffb4a21c38ccc155fbef8/1.27.0/Clean/RELEASE/BoringSSL
[2023-08-17 19:21:03.738][107][critical][backtrace] [./source/server/backtrace.h:96] #0: __restore_rt [0x7f3121e75420]
[2023-08-17 19:21:03.745][107][critical][backtrace] [./source/server/backtrace.h:96] #1: Envoy::Extensions::NetworkFilters::RedisProxy::CommandSplitter::SimpleRequest::create() [0x5649f55bc2b4]
[2023-08-17 19:21:03.752][107][critical][backtrace] [./source/server/backtrace.h:96] #2: Envoy::Extensions::NetworkFilters::RedisProxy::CommandSplitter::CommandHandlerFactory<>::startRequest() [0x5649f55c5209]
[2023-08-17 19:21:03.759][107][critical][backtrace] [./source/server/backtrace.h:96] #3: Envoy::Extensions::NetworkFilters::RedisProxy::CommandSplitter::InstanceImpl::makeRequest() [0x5649f55c2706]
[2023-08-17 19:21:03.766][107][critical][backtrace] [./source/server/backtrace.h:96] #4: Envoy::Extensions::NetworkFilters::RedisProxy::ProxyFilter::onRespValue() [0x5649f55c6b01]
[2023-08-17 19:21:03.773][107][critical][backtrace] [./source/server/backtrace.h:96] #5: Envoy::Extensions::NetworkFilters::Common::Redis::DecoderImpl::parseSlice() [0x5649f55de6e5]
[2023-08-17 19:21:03.780][107][critical][backtrace] [./source/server/backtrace.h:96] #6: Envoy::Extensions::NetworkFilters::Common::Redis::DecoderImpl::decode() [0x5649f55dd64b]
[2023-08-17 19:21:03.787][107][critical][backtrace] [./source/server/backtrace.h:96] #7: Envoy::Extensions::NetworkFilters::RedisProxy::ProxyFilter::onData() [0x5649f55c76cc]
[2023-08-17 19:21:03.793][107][critical][backtrace] [./source/server/backtrace.h:96] #8: Envoy::Network::FilterManagerImpl::onContinueReading() [0x5649f6efb675]
[2023-08-17 19:21:03.800][107][critical][backtrace] [./source/server/backtrace.h:96] #9: Envoy::Network::ConnectionImpl::onReadReady() [0x5649f6eaa252]
[2023-08-17 19:21:03.807][107][critical][backtrace] [./source/server/backtrace.h:96] #10: Envoy::Network::ConnectionImpl::onFileEvent() [0x5649f6ea7d09]
[2023-08-17 19:21:03.813][107][critical][backtrace] [./source/server/backtrace.h:96] #11: std::__1::__function::__func<>::operator()() [0x5649f6e9c4d1]
[2023-08-17 19:21:03.820][107][critical][backtrace] [./source/server/backtrace.h:96] #12: Envoy::Event::FileEventImpl::assignEvents()::$_1::__invoke() [0x5649f6e9d9fd]
[2023-08-17 19:21:03.827][107][critical][backtrace] [./source/server/backtrace.h:96] #13: event_process_active_single_queue [0x5649f74803f0]
[2023-08-17 19:21:03.833][107][critical][backtrace] [./source/server/backtrace.h:96] #14: event_base_loop [0x5649f747ed31]
[2023-08-17 19:21:03.840][107][critical][backtrace] [./source/server/backtrace.h:96] #15: Envoy::Server::WorkerImpl::threadRoutine() [0x5649f676a1fb]
[2023-08-17 19:21:03.846][107][critical][backtrace] [./source/server/backtrace.h:96] #16: Envoy::Thread::ThreadImplPosix::ThreadImplPosix()::{lambda()#1}::__invoke() [0x5649f7487553]
[2023-08-17 19:21:03.846][107][critical][backtrace] [./source/server/backtrace.h:96] #17: start_thread [0x7f3121e69609]
ConnectionImpl 0x1636bf922c00, connecting_: 0, bind_error_: 0, state(): Open, read_buffer_limit_: 1048576
socket_:
  ListenSocketImpl 0x1636bef58b00, transport_protocol_: raw_buffer
  connection_info_provider_:
    ConnectionInfoSetterImpl 0x1636bec60718, remote_address_: 242.9.97.101:37708, direct_remote_address_: 242.9.97.101:37708, local_address_: 242.9.98.23:6379, server_name_:
Segmentation fault (core dumped)
yanavlasov commented 1 year ago

Adding @weisisea @mattklein123 as extension owners.

deveshkandpal1224 commented 1 year ago

I'm fixing this bug @nickamorim and adding a test for it. You can assign this to me @yanavlasov.

deveshkandpal1224 commented 1 year ago

Apologies for causing this crash - I've fixed the issue and re-introduced the test that exercises it. I should have thought it through before removing the unit test (it was clearly there for a reason - lesson learnt :) ).
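
For anyone following along, the kind of guard involved looks roughly like this hypothetical sketch (made-up names, not the actual patch): check the looked-up route before building the request and return an error to the client instead of dereferencing a null route.

#include <iostream>
#include <memory>
#include <string>

struct Route {
  std::string cluster;
};

// Hypothetical request factory illustrating the defensive check: if the router
// returns no route (no prefix matched and no catch_all_route configured),
// reply with an error instead of dereferencing a null pointer.
bool startRequest(const std::shared_ptr<Route>& route, std::string& error_out) {
  if (route == nullptr) {
    error_out = "no route for key";  // error sent back to the Redis client
    return false;
  }
  std::cout << "forwarding to cluster " << route->cluster << std::endl;
  return true;
}

int main() {
  std::string error;
  if (!startRequest(nullptr, error)) {
    std::cout << "request rejected: " << error << std::endl;
  }
}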

nickamorim commented 1 year ago

Definitely no need to apologize @deveshkandpal1224 - thank you for such a quick turnaround time!