Closed: hopewise closed this issue 1 year ago
@hopewise have you checked out the Nginx example? It should give you a good starting point.
To debug Lambda Web Adapter, just set the RUST_LOG environment variable to debug.
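For a Docker-image-based function, a minimal sketch of this setting in the Dockerfile (the rest of the image layout is assumed):

```dockerfile
# Turn on debug logging for the Lambda Web Adapter process.
ENV RUST_LOG=debug
```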
Yes, I have used the example with most of its settings; however, I did not start from public.ecr.aws/docker/library/nginx:1.21.6. I changed it to my own image, which has Nginx installed along with other dependencies (Ruby 3) for my app.
Are there any restrictions on the required Nginx version, or must I use the exact image public.ecr.aws/docker/library/nginx:1.21.6 to use Nginx with Lambda?
No, you could use any base images with Nginx installed.
Did you replace the default Nginx config files with the ones from Nginx example? Lambda's filesystem is read-only, except /tmp. We need to change a few configurations to make Nginx write logs, and temp files into /tmp.
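As a rough sketch of what those changes look like (the exact values are assumptions; see the Nginx example in the repo for the real files):

```nginx
# Sketch: point every path Nginx writes to at /tmp, the only
# writable directory in Lambda's filesystem.
pid /tmp/nginx.pid;
error_log /tmp/error.log;

http {
    access_log /tmp/access.log;
    client_body_temp_path /tmp/client_body_temp;
    proxy_temp_path /tmp/proxy_temp;
    fastcgi_temp_path /tmp/fastcgi_temp;
    uwsgi_temp_path /tmp/uwsgi_temp;
    scgi_temp_path /tmp/scgi_temp;
    # ... the rest of your http block ...
}
```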
The next thing to check is the readiness check. By default, Lambda Web Adapter sends HTTP GET requests to http://127.0.0.1:8080/ to determine if the web app is ready. If the adapter receives any HTTP response, the check passes; otherwise, the adapter retries the request every 10ms. More details are in the project README. You can turn on DEBUG logging for the adapter and see those readiness-check requests in CloudWatch Logs.
Ah! Thanks for the valuable info. Yes, I used the default Nginx files, but I found this in my settings:
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
Would that prevent Nginx from starting? I will point them to /tmp and check again.
A few more config options should be updated as well. Please look at this file.
Yes, I tried it. I am stuck here, if you can provide help 🙏 https://stackoverflow.com/questions/75284241/bin-bash-1-entrypoint-sh-not-found?noredirect=1#comment132846695_75284241
'/tmp' is special in Lambda. It is cleaned for each new sandbox. Please store your files in another directory.
Each new sandbox? Can you please explain more, with respect to this issue's context?
Oh, officially it is called the Lambda execution environment. Technically, it is a Firecracker MicroVM. Each invocation to Lambda is served by one Firecracker MicroVM. Your code (Zip or Docker Image) runs inside a MicroVM. The /tmp directory is always empty for a new MicroVM, even if you store code in that directory in the docker image.
Ah, I see. And when making a new deployment, would files from the previous deployment still exist? E.g., say my app files were at /home/app when I built the Docker image.
No, a new deployment will run in new MicroVMs.
Thanks, the entrypoint problem is solved, but when I added my site config in sites-enabled to Nginx, I started getting this error:
nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (30: Read-only file system)
--
2023/01/30 16:36:58 [warn] 14#14: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
2023/01/30 16:36:58 [emerg] 14#14: bind() to 0.0.0.0:80 failed (13: Permission denied)
nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
Shouldn't Nginx use the /tmp folder to write errors? Also, isn't the provided example already working on port 80? Why would a permission issue occur with my setup? Can you have a look here, please: https://gist.github.com/hopewise/3685fc10c15483f79cede02550f4f7a6
The function's processes run inside the Lambda MicroVM without root privileges, so Nginx can't listen on port 80. You should change the port to something higher than 1024. I suggest listening on port 8080; it is the default port Lambda Web Adapter uses.
I am using the Lambda function behind a target group. I notice that I can't specify the port on the target group when the target type is lambda, so how would I actually use the Lambda function on port 8080 when using an ELB?
You need to configure Nginx to listen on port 8080. This is configured in Nginx config files.
Here is an example: https://github.com/awslabs/aws-lambda-web-adapter/blob/main/examples/nginx/app/config/conf.d/default.conf
Yes, I did,
server {
    listen 8080;
    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
So, it should proxy-pass to my local Rails server, but I am getting this error:
2023/02/03 13:40:43 [error] 15#15: *9 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: lambda.mydomain.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "lambda.mydomain.com"
Although it worked fine with other proxies outside of Lambda, e.g.:
location /blog {
    proxy_pass http://x.x.x.x/blog;
}
Is there any restriction on using such a port (3000) locally inside Lambda?
It should work. Was the rails process listening on 3000?
yes, it uses port 3000
COPY ./entrypoint.sh /app/entrypoint.sh
RUN chmod 777 /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh
ENV RAILS_ENV=production
WORKDIR "/app"
CMD /app/entrypoint.sh
/app/entrypoint.sh
#!/bin/bash
nginx -g 'daemon off;'
cd /app
RAILS_ENV=production bundle exec puma -C config/puma.rb
service nginx stop
service nginx start
/bin/bash
config/puma.rb
pidfile '/tmp/puma.pid'
port 3000
threads 1,1
so it should be running locally on port 3000
Okay, I found that for config/puma.rb
I need to bind it as bind 'tcp://127.0.0.1:3000';
otherwise, it will be bound to 0.0.0.0, since it's in production mode.
I also found that nginx -g 'daemon off;'
will prevent the script from continuing to the next line 🙄
Testing worked well locally, but still not when deployed to AWS..
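Putting those two findings together, a minimal entrypoint sketch (assuming the /app layout from the Dockerfile above) would start Puma in the background first and keep Nginx in the foreground:

```bash
#!/bin/bash
# Start the Rails app server in the background first...
cd /app
RAILS_ENV=production bundle exec puma -C config/puma.rb &
# ...then run nginx in the foreground so it stays the long-lived
# container process ('daemon off;' would otherwise block the script
# before Puma ever started).
exec nginx -g 'daemon off;'
```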
Do you have a simple example app? I can test it on my side.
Kindly try it here: https://github.com/hopewise/test-lambda-with-nginx. I tried it and it results in a 502 Bad Gateway.
in the lambda logs:
127.0.0.1 - - [04/Feb/2023:06:55:46 +0000] "GET / HTTP/1.1" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36" "88.227.124.14"
please let me know if I can provide any further details.
I built the project with sam build, ran the Docker image locally, and the Rails app was not running. From the docker logs, I see the following error. Should you add RUN bin/rails credentials:edit to the Dockerfile?
! Unable to load application: ArgumentError: Missing `secret_key_base` for 'production' environment, set this string with `bin/rails credentials:edit`
bundler: failed to load command: puma (/usr/local/bundle/bin/puma)
/usr/local/bundle/gems/railties-6.1.7.2/lib/rails/application.rb:603:in `validate_secret_key_base': Missing `secret_key_base` for 'production' environment, set this string with `bin/rails credentials:edit` (ArgumentError)
Oh, can you please try again? I just pushed the secrets to the repo, just for testing..
I tried it on my side, I can see the rails app starting:
RAILS_ENV=production puma -C config/puma.rb
=> Booting Puma
=> Rails 6.1.7.2 application starting in production
=> Run `bin/rails server --help` for more startup options
Puma starting in single mode...
* Puma version: 5.6.5 (ruby 2.7.2-p137) ("Birdie's Version")
* Min threads: 1
* Max threads: 1
* Environment: production
* PID: 8
* Listening on http://127.0.0.1:3000
Use Ctrl-C to stop
Puma and the Rails app could not start because of Lambda's read-only filesystem. You can use the following docker command to simulate Lambda's read-only filesystem:
docker run --name dcaclabfunction -d -p 8080:8080 --read-only -v /tmp:/tmp dcaclabfunction:v1
Then run the following command to see the logs from docker container.
docker logs dcaclabfunction
I'm not familiar with Puma and Rails. You need to find out how to configure them to change the tmp directory to /tmp and send logs to STDOUT/STDERR.
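A hedged sketch of what that could look like in config/puma.rb (the exact options your app needs may differ):

```ruby
# Keep everything Puma writes under /tmp (Lambda's only writable path)
# and send logs to stdout/stderr so they reach CloudWatch Logs.
pidfile '/tmp/puma.pid'
state_path '/tmp/puma.state'
stdout_redirect '/dev/stdout', '/dev/stderr'
bind 'tcp://127.0.0.1:3000'
```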
Thanks, it's set to write the pid to the /tmp folder, as here.
However, to avoid ending up debugging the Rails app itself, I've used a simple Python server and tried it. It worked locally as expected using --read-only -v /tmp:/tmp, but not when deployed to AWS; that's really weird..
Can you have a look, please? This should be easier to debug now.
The python server process couldn't start. It is weird.
If I replaced the Python server with a Rust Axum web server, it worked. It also works if the Python server listens on port 8080, without Nginx proxying.
Hmm, so the Nginx proxy_pass did not work with the Rust Axum web server either?
Nginx to Axum works. I don't know why Python server couldn't start. I need to investigate.
Can you please show me the command line you used to start Axum at port 3000?
I used the Axum example in this repo. Change this line to use port 3000, compile the project with make build, copy the binary target/lambda/rust-axum-zip/bootstrap to your sample project, and add it to the /app directory in the Docker image. And this is the entrypoint.sh I used.
#!/bin/bash -x
/app/bootstrap &
exec nginx -g 'daemon off;'
I don't know if this part was resolved, but to see the logs from Nginx, you do want to set the logs to print to stdout and stderr, like this:
http {
    error_log /dev/stdout;
    access_log /dev/stdout;
    ...
}
My personal fave is json output format:
access_log /dev/stdout json;
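Note that Nginx has no built-in format named json, so a log_format with that name has to be defined first. A minimal sketch (the fields chosen here are arbitrary):

```nginx
http {
    # Define a "json" access-log format (escape=json requires
    # nginx >= 1.11.8) and send it to stdout.
    log_format json escape=json
        '{"time":"$time_iso8601","status":"$status",'
        '"method":"$request_method","uri":"$request_uri"}';
    access_log /dev/stdout json;
}
```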
@hopewise I figured out the reason: it is all about the readiness check.
For LWA -> nginx -> Python dev server: at startup, LWA sends a GET request to Nginx at :8080, and Nginx forwards the request to the upstream Python dev server at :3000. But the Python dev server is slow and not ready, so Nginx sends a 503 response to LWA. LWA takes any HTTP response code as passing the check, and sends the actual invoke to Nginx, which fails with 503.
For LWA -> nginx -> Rust Axum server: LWA sends a GET request to Nginx at :8080, and Nginx forwards the request to the Axum server at :3000. The Axum server boots up fast and is ready to serve the request, so everything works.
The workaround is to send the readiness check to the Python dev server directly. Add an environment variable in your Dockerfile: READINESS_CHECK_PORT=3000. This will make sure the Python dev server has actually booted up.
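As a Dockerfile fragment, that workaround looks like:

```dockerfile
# Send LWA's readiness check straight to the app server on :3000,
# so Lambda only routes traffic once the app is actually up.
ENV READINESS_CHECK_PORT=3000
```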
The adapter should not consider 5xx response codes as a successful response. I will change that in v0.7.0 release.
Hello,
I am trying to use Nginx on Lambda, and I am getting this error:
But I don't have any further details. How can I fix this?