apache / apisix

The Cloud-Native API Gateway
https://apisix.apache.org/blog/
Apache License 2.0

help request: dubbo proxy throw "dubbo_service_name not found" #8386

Closed wxbty closed 12 months ago

wxbty commented 1 year ago

Description

I encountered the same error as issue #3725, but I am using a Dubbo provider exported from Java, and a plain Dubbo consumer can call it normally, so the provider itself works.

error:

curl http://127.0.0.1:9080/demo -H "Host: 127.0.0.1" -X POST --data '{"name": "hello"}'

<html>
<head><title>500 Internal Server Error</title></head>
<body>
<center><h1>500 Internal Server Error</h1></center>
<hr><center>openresty</center>
<p><em>Powered by <a href="https://apisix.apache.org/">APISIX</a>.</em></p></body>
</html>

error log:

2022/11/23 12:04:35 [error] 49#49: *282271 lua entry thread aborted: runtime error: /usr/local/apisix/apisix/plugins/dubbo-proxy.lua:58: variable "dubbo_service_name" not found for writing; maybe it is a built-in variable that is not changeable or you forgot to use "set $dubbo_service_name '';" in the config file to define it first
stack traceback:
coroutine 0:
    [C]: in function 'error'
    /usr/local/openresty/lualib/resty/core/var.lua:144: in function '__newindex'
    /usr/local/apisix/apisix/plugins/dubbo-proxy.lua:58: in function 'phase_func'
    /usr/local/apisix/apisix/plugin.lua:946: in function 'run_plugin'
    /usr/local/apisix/apisix/init.lua:602: in function 'http_access_phase'
    access_by_lua(nginx.conf:326):2: in main chunk, client: 172.18.0.1, server: _, request: "POST /demo HTTP/1.1", host: "127.0.0.1"

route cmd:

curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "host": "127.0.0.1",
  "uris": [ "/demo" ],
  "plugins": {
    "dubbo-proxy": {
      "service_name": "org.apache.dubbo.springboot.demo.DemoService",
      "service_version": "0.0.0",
      "method": "sayHello"
    }
  },
  "upstream_id": 1
}'

response:

{"key":"\/apisix\/routes\/1","value":{"create_time":1669195582,"upstream_id":1,"priority":0,"id":"1","plugins":{"dubbo-proxy":{"service_name":"org.apache.dubbo.springboot.demo.DemoService","service_version":"0.0.0","method":"sayHello"}},"uris":["\/demo"],"status":1,"host":"127.0.0.1","update_time":1669205803}}

java provider:

import java.util.HashMap;
import java.util.Map;

import org.apache.dubbo.config.annotation.DubboService;
import org.apache.dubbo.rpc.RpcContext;

@DubboService
public class DemoServiceImpl implements DemoService {

    @Override
    public Map<String, Object> sayHello(Map<String, Object> params) {
        System.out.println("Hello " + params.get("name") + ", request from consumer: " + RpcContext.getContext().getRemoteAddress());

        Map<String, Object> ret = new HashMap<String, Object>();
        ret.put("body", "Hello " + params.get("name")); // http response body
        ret.put("status", "200"); // http response status
        ret.put("test", "123"); // extra entries become http response headers
        return ret;
    }
}

The generated nginx.conf does not declare the dubbo_service_name variable anywhere (for comparison, a sketch of the blocks that should be generated follows the dump):

# Configuration File - Nginx Server Configs
# This is a read-only file, do not try to modify it.
master_process on;

worker_processes auto;

# main configuration snippet starts

# main configuration snippet ends

error_log logs/error.log warn;
pid logs/nginx.pid;

worker_rlimit_nofile 20480;

events {
    accept_mutex off;
    worker_connections 10620;
}

worker_rlimit_core  16G;

worker_shutdown_timeout 240s;

env APISIX_PROFILE;
env PATH; # for searching external plugin runner's binary

thread_pool grpc-client-nginx-module threads=1;

lua {
}

http {
    # put extra_lua_path in front of the builtin path
    # so user can override the source code
    lua_package_path  "$prefix/deps/share/lua/5.1/?.lua;$prefix/deps/share/lua/5.1/?/init.lua;/usr/local/apisix/?.lua;/usr/local/apisix/?/init.lua;;/usr/local/apisix/?.lua;./?.lua;/usr/local/openresty/luajit/share/luajit-2.1.0-beta3/?.lua;/usr/local/share/lua/5.1/?.lua;/usr/local/share/lua/5.1/?/init.lua;/usr/local/openresty/luajit/share/lua/5.1/?.lua;/usr/local/openresty/luajit/share/lua/5.1/?/init.lua;;";
    lua_package_cpath "$prefix/deps/lib64/lua/5.1/?.so;$prefix/deps/lib/lua/5.1/?.so;;./?.so;/usr/local/lib/lua/5.1/?.so;/usr/local/openresty/luajit/lib/lua/5.1/?.so;/usr/local/lib/lua/5.1/loadall.so;";

    lua_max_pending_timers 16384;
    lua_max_running_timers 4096;

    lua_shared_dict internal-status 10m;
    lua_shared_dict upstream-healthcheck 10m;
    lua_shared_dict worker-events 10m;
    lua_shared_dict lrucache-lock 10m;
    lua_shared_dict balancer-ewma 10m;
    lua_shared_dict balancer-ewma-locks 10m;
    lua_shared_dict balancer-ewma-last-touched-at 10m;
    lua_shared_dict etcd-cluster-health-check 10m; # etcd health check

    # for discovery shared dict

    lua_shared_dict plugin-limit-conn 10m;

    lua_shared_dict plugin-limit-req 10m;

    lua_shared_dict plugin-limit-count 10m;
    lua_shared_dict plugin-limit-count-redis-cluster-slot-lock 1m;

    lua_shared_dict prometheus-metrics 10m;

    lua_shared_dict plugin-api-breaker 10m;

    # for openid-connect and authz-keycloak plugin
    lua_shared_dict discovery 1m; # cache for discovery metadata documents

    # for openid-connect plugin
    lua_shared_dict jwks 1m; # cache for JWKs
    lua_shared_dict introspection 10m; # cache for JWT verification results

    lua_shared_dict cas_sessions 10m;

    # for authz-keycloak
    lua_shared_dict access-tokens 1m; # cache for service account access tokens

    lua_shared_dict ext-plugin 1m; # cache for ext-plugin

    # for custom shared dict

    lua_ssl_verify_depth 5;
    ssl_session_timeout 86400;

    underscores_in_headers on;

    lua_socket_log_errors off;

    resolver 127.0.0.11 ipv6=off;
    resolver_timeout 5;

    lua_http10_buffering off;

    lua_regex_match_limit 100000;
    lua_regex_cache_max_entries 8192;

    log_format main escape=default '$remote_addr - $remote_user [$time_local] $http_host "$request" $status $body_bytes_sent $request_time "$http_referer" "$http_user_agent" $upstream_addr $upstream_status $upstream_response_time "$upstream_scheme://$upstream_host$upstream_uri"';
    uninitialized_variable_warn off;

    access_log logs/access.log main buffer=16384 flush=3;
    open_file_cache  max=1000 inactive=60;
    client_max_body_size 0;
    keepalive_timeout 60s;
    client_header_timeout 60s;
    client_body_timeout 60s;
    send_timeout 10s;
    variables_hash_max_size 2048;

    server_tokens off;

    include mime.types;
    charset utf-8;

    real_ip_header X-Real-IP;

    real_ip_recursive off;

    set_real_ip_from 127.0.0.1;
    set_real_ip_from unix:;

    # http configuration snippet starts

    # http configuration snippet ends

    upstream apisix_backend {
        server 0.0.0.1;

        keepalive 320;
        keepalive_requests 1000;
        keepalive_timeout 60s;
        # we put the static configuration above so that we can override it in the Lua code

        balancer_by_lua_block {
            apisix.http_balancer_phase()
        }
    }

    apisix_delay_client_max_body_check on;
    apisix_mirror_on_demand on;

    init_by_lua_block {
        require "resty.core"
        apisix = require("apisix")

        local dns_resolver = { "127.0.0.11", }
        local args = {
            dns_resolver = dns_resolver,
        }
        apisix.http_init(args)

        -- set apisix_lua_home into constans module
        -- it may be used by plugins to determine the work path of apisix
        local constants = require("apisix.constants")
        constants.apisix_lua_home = "/usr/local/apisix"
    }

    init_worker_by_lua_block {
        apisix.http_init_worker()
    }

    exit_worker_by_lua_block {
        apisix.http_exit_worker()
    }

    server {
        listen 0.0.0.0:9092;

        access_log off;

        location / {
            content_by_lua_block {
                apisix.http_control()
            }
        }
    }

    server {
        listen 0.0.0.0:9091;

        access_log off;

        location / {
            content_by_lua_block {
                local prometheus = require("apisix.plugins.prometheus.exporter")
                prometheus.export_metrics()
            }
        }

        location = /apisix/nginx_status {
            allow 127.0.0.0/24;
            deny all;
            stub_status;
        }
    }

    server {
        listen 0.0.0.0:9180;
        log_not_found off;

        # admin configuration snippet starts

        # admin configuration snippet ends

        set $upstream_scheme             'http';
        set $upstream_host               $http_host;
        set $upstream_uri                '';

        location /apisix/admin {
                allow 0.0.0.0/0;
                deny all;

            content_by_lua_block {
                apisix.http_admin()
            }
        }
    }

    upstream apisix_conf_backend {
        server 0.0.0.0:80;
        balancer_by_lua_block {
            local conf_server = require("apisix.conf_server")
            conf_server.balancer()
        }
    }

    server {
        listen unix:/usr/local/apisix/conf/config_listen.sock;

        access_log off;

        set $upstream_host '';

        access_by_lua_block {
            local conf_server = require("apisix.conf_server")
            conf_server.access()
        }

        location / {
            proxy_pass http://apisix_conf_backend;

            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header Host $upstream_host;
            proxy_next_upstream error timeout non_idempotent http_500 http_502 http_503 http_504;
        }

        log_by_lua_block {
            local conf_server = require("apisix.conf_server")
            conf_server.log()
        }
    }

    # for proxy cache
    proxy_cache_path /tmp/disk_cache_one levels=1:2 keys_zone=disk_cache_one:50m inactive=1d max_size=1G use_temp_path=off;
    lua_shared_dict memory_cache 50m;

    map $upstream_cache_zone $upstream_cache_zone_info {
        disk_cache_one /tmp/disk_cache_one,1:2;
    }

    server {
        listen 0.0.0.0:9080 default_server reuseport;
        listen 0.0.0.0:9443 ssl default_server http2 reuseport;

        server_name _;

        ssl_certificate      cert/ssl_PLACE_HOLDER.crt;
        ssl_certificate_key  cert/ssl_PLACE_HOLDER.key;
        ssl_session_cache    shared:SSL:20m;
        ssl_session_timeout 10m;

        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
        ssl_prefer_server_ciphers on;
        ssl_session_tickets off;

        # http server configuration snippet starts

        # http server configuration snippet ends

        location = /apisix/nginx_status {
            allow 127.0.0.0/24;
            deny all;
            access_log off;
            stub_status;
        }

        ssl_certificate_by_lua_block {
            apisix.http_ssl_phase()
        }

        proxy_ssl_name $upstream_host;
        proxy_ssl_server_name on;

        location / {
            set $upstream_mirror_uri         '';
            set $upstream_upgrade            '';
            set $upstream_connection         '';

            set $upstream_scheme             'http';
            set $upstream_host               $http_host;
            set $upstream_uri                '';
            set $ctx_ref                     '';

            # http server location configuration snippet starts

            # http server location configuration snippet ends

            access_by_lua_block {
                apisix.http_access_phase()
            }

            proxy_http_version 1.1;
            proxy_set_header   Host              $upstream_host;
            proxy_set_header   Upgrade           $upstream_upgrade;
            proxy_set_header   Connection        $upstream_connection;
            proxy_set_header   X-Real-IP         $remote_addr;
            proxy_pass_header  Date;

            ### the following x-forwarded-* headers is to send to upstream server

            set $var_x_forwarded_for        $remote_addr;
            set $var_x_forwarded_proto      $scheme;
            set $var_x_forwarded_host       $host;
            set $var_x_forwarded_port       $server_port;

            if ($http_x_forwarded_for != "") {
                set $var_x_forwarded_for "${http_x_forwarded_for}, ${realip_remote_addr}";
            }
            if ($http_x_forwarded_host != "") {
                set $var_x_forwarded_host $http_x_forwarded_host;
            }
            if ($http_x_forwarded_port != "") {
                set $var_x_forwarded_port $http_x_forwarded_port;
            }

            proxy_set_header   X-Forwarded-For      $var_x_forwarded_for;
            proxy_set_header   X-Forwarded-Proto    $var_x_forwarded_proto;
            proxy_set_header   X-Forwarded-Host     $var_x_forwarded_host;
            proxy_set_header   X-Forwarded-Port     $var_x_forwarded_port;

            ###  the following configuration is to cache response content from upstream server

            set $upstream_cache_zone            off;
            set $upstream_cache_key             '';
            set $upstream_cache_bypass          '';
            set $upstream_no_cache              '';

            proxy_cache                         $upstream_cache_zone;
            proxy_cache_valid                   any 10s;
            proxy_cache_min_uses                1;
            proxy_cache_methods                 GET HEAD POST;
            proxy_cache_lock_timeout            5s;
            proxy_cache_use_stale               off;
            proxy_cache_key                     $upstream_cache_key;
            proxy_no_cache                      $upstream_no_cache;
            proxy_cache_bypass                  $upstream_cache_bypass;

            proxy_pass      $upstream_scheme://apisix_backend$upstream_uri;

            mirror          /proxy_mirror;

            header_filter_by_lua_block {
                apisix.http_header_filter_phase()
            }

            body_filter_by_lua_block {
                apisix.http_body_filter_phase()
            }

            log_by_lua_block {
                apisix.http_log_phase()
            }
        }

        location @grpc_pass {

            access_by_lua_block {
                apisix.grpc_access_phase()
            }

            # For servers which obey the standard, when `:authority` is missing,
            # `host` will be used instead. When used with apisix-base, we can do
            # better by setting `:authority` directly
            grpc_set_header   ":authority" $upstream_host;
            grpc_set_header   Content-Type application/grpc;
            grpc_socket_keepalive on;
            grpc_pass         $upstream_scheme://apisix_backend;

            header_filter_by_lua_block {
                apisix.http_header_filter_phase()
            }

            body_filter_by_lua_block {
                apisix.http_body_filter_phase()
            }

            log_by_lua_block {
                apisix.http_log_phase()
            }
        }

        location = /proxy_mirror {
            internal;

            proxy_connect_timeout 60s;
            proxy_read_timeout 60s;
            proxy_send_timeout 60s;
            proxy_http_version 1.1;
            proxy_set_header Host $upstream_host;
            proxy_pass $upstream_mirror_uri;
        }
    }

    # http end configuration snippet starts

    # http end configuration snippet ends
}
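
For comparison, when dubbo-proxy is enabled at startup, the conf template emits dubbo-specific blocks roughly like the following (a sketch reconstructed from the APISIX nginx template; directive names and defaults may differ between versions):

upstream apisix_dubbo_backend {
    server 0.0.0.1;
    balancer_by_lua_block {
        apisix.http_balancer_phase()
    }

    # `multi` comes from Tengine's multi-upstream module (apisix-base only)
    multi 1;
    keepalive 320;
}

# declared inside `location /`:
set $dubbo_service_name          '';
set $dubbo_service_version       '';
set $dubbo_method                '';

location @dubbo_pass {
    access_by_lua_block {
        apisix.dubbo_access_phase()
    }

    dubbo_pass_all_headers on;
    dubbo_pass_body on;
    dubbo_pass $dubbo_service_name $dubbo_service_version $dubbo_method apisix_dubbo_backend;
}

If no such blocks appear after a restart, the plugins list in conf/config.yaml is the first thing to check.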

Environment

wxbty commented 1 year ago

After restarting with "docker-compose -p docker-apisix restart", the following error occurs:

[screenshot not captured in this transcript]

error log:

[screenshot not captured in this transcript]

This is also very similar to #3725.

tzssangglass commented 1 year ago

Refer to the hello function in https://github.com/apache/apisix/tree/master/t/lib/dubbo-backend for how to implement sayHello.

I think your sayHello is not fully implemented.
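
For context, the dubbo-proxy plugin only supports methods that take and return Map<String, Object>: the HTTP request is flattened into the input map, and the returned map is written back onto the HTTP response ("status" and "body" keys, other entries as headers). A minimal interface sketch modeled on the linked test backend (names are illustrative):

import java.util.Map;

public interface DemoService {
    // input: the HTTP request flattened into a map;
    // output: "status", "body", plus extra entries as response headers
    Map<String, Object> sayHello(Map<String, Object> params);
}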

wxbty commented 1 year ago

I added several other methods; it seems to make no difference. The problem now is a 502:

error log:

2022/11/24 03:47:14 [error] 48#48: *9611 multi: connect failed 0000FFFFA0828D20
2022/11/24 03:47:14 [warn] 48#48: *9611 multi: multi connection detach not empty 0000FFFFA0828D20
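
"multi: connect failed" appears to come from the Tengine multi-upstream module failing to open a TCP connection to the Dubbo provider, so the upstream address is worth verifying first; in docker-compose setups the provider is often not reachable as 127.0.0.1 from inside the APISIX container. A sketch of an upstream pointing at Dubbo's default port 20880 (the host name is a placeholder for whatever address the container can actually reach):

curl http://127.0.0.1:9180/apisix/admin/upstreams/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "type": "roundrobin",
  "nodes": { "host.docker.internal:20880": 1 }
}'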

tzssangglass commented 1 year ago

I added several other methods; it seems to make no difference. The problem now is a 502:

I noticed you are using an M1 Mac; I don't have a corresponding machine to reproduce this.

I suggest you perform the same operation on an amd64 machine to determine whether the problem is platform-specific or caused by your setup.
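
If only Apple Silicon hardware is at hand, Docker can also run the amd64 image under emulation, though emulation can introduce its own quirks, so a native amd64 host remains the cleaner test. A compose fragment sketch (service name and image tag are illustrative):

services:
  apisix:
    image: apache/apisix:latest
    platform: linux/amd64   # run the amd64 image under QEMU emulation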

wxbty commented 1 year ago

@tzssangglass I reproduced this problem on a CentOS 7 server; the steps are the same as in issue #8382. In addition, https://github.com/apache/apisix-docker/issues/373 was caused by a wrong M1 configuration and has now been solved. The current 502 problem can be reproduced on different platforms. I can't understand the error message "multi connection detach not empty", and I can't even find it in the source code.

tzssangglass commented 1 year ago

I can't understand the error message "multi connection detach not empty", and I can't even find it in the source code

Can you post the error logs as text instead of images?

tzssangglass commented 1 year ago

and I can't even find it in the source code

here: https://github.com/alibaba/tengine/blob/ae3ff4619d133ff15d6dad4b8fab77865d7f5dbe/modules/ngx_multi_upstream_module/ngx_http_multi_upstream_module.c#L699-L702

            if (!ngx_queue_empty(&multi_c->data)) {
                ngx_log_error(NGX_LOG_WARN, c->log, 
                              0, "multi: multi connection detach not empty %p", c);
            }

tzssangglass commented 1 year ago

The problem now is 502

the steps are as issue #8382.

You need to close this issue and open a new one describing the new 502 problem, giving full details of the steps to reproduce it.

Cross-referencing issues and describing new problems in old issues confuses me.

wxbty commented 1 year ago

@tzssangglass Let's put it on hold for now; I'll bring it up next time.

github-actions[bot] commented 12 months ago

This issue has been marked as stale due to 350 days of inactivity. It will be closed in 2 weeks if no further activity occurs. If this issue is still relevant, please simply write any comment. Even if closed, you can still revive the issue at any time or discuss it on the dev@apisix.apache.org list. Thank you for your contributions.