Closed billyshambrook closed 4 years ago
hey @billyshambrook
thank you for the issue! could you try the latest v6 alpha.6, which was just released - we fixed a couple of bugs around invoking local `python`, `ruby` and `go`.
let me know if the issue still persists!
Hey @dnalborczyk - not sure what's changed, but when I tried it just now, the first two requests were successful and the third failed. Please see the output below:
Serverless: Starting Offline: dev/us-east-1.
Serverless: Offline [HTTP] listening on http://localhost:3000
Serverless: Enter "rp" to replay the last request
Serverless: Routes for hello:
Serverless: GET /hello
Serverless: POST /{apiVersion}/functions/golang-dev-hello/invocations
Serverless: Routes for world:
Serverless: GET /world
Serverless: POST /{apiVersion}/functions/golang-dev-world/invocations
Serverless: GET /hello (λ: hello)
Proxy Handler could not detect JSON: Serverless: Packaging service...
Proxy Handler could not detect JSON: Serverless: Excluding development dependencies...
Proxy Handler could not detect JSON: Serverless: Building Docker image...
Proxy Handler could not detect JSON: START RequestId: 867a3433-5079-1769-8c87-b8f35e7fa3b0 Version: $LATEST
Proxy Handler could not detect JSON: END RequestId: 867a3433-5079-1769-8c87-b8f35e7fa3b0
REPORT RequestId: 867a3433-5079-1769-8c87-b8f35e7fa3b0 Duration: 1.31 ms Billed Duration: 100 ms Memory Size: 1536 MB Max Memory Used: 6 MB
Serverless: Duration 3756.00 ms (λ: hello)
Serverless: GET /hello (λ: hello)
Proxy Handler could not detect JSON: Serverless: Packaging service...
Proxy Handler could not detect JSON: Serverless: Excluding development dependencies...
Proxy Handler could not detect JSON: Serverless: Building Docker image...
Proxy Handler could not detect JSON: START RequestId: b598968e-f578-1a09-f76c-8a30583538e9 Version: $LATEST
Proxy Handler could not detect JSON: END RequestId: b598968e-f578-1a09-f76c-8a30583538e9
REPORT RequestId: b598968e-f578-1a09-f76c-8a30583538e9 Duration: 1.06 ms Billed Duration: 100 ms Memory Size: 1536 MB Max Memory Used: 6 MB
Serverless: Duration 3927.00 ms (λ: hello)
Serverless: GET /hello (λ: hello)
Proxy Handler could not detect JSON: Serverless: Packaging service...
Proxy Handler could not detect JSON: Serverless: Excluding development dependencies...
Proxy Handler could not detect JSON: Serverless: Building Docker image...
Proxy Handler could not detect JSON: START RequestId: ce503a29-66a5-170e-460f-cb8f81a41f81 Version: $LATEST
{"statusCode":200,"headers":{"Content-Type":"application/json","X-MyCompany-Func-Reply":"hello-handler"},"body":"{\"message\":\"Go Serverless v1.0! Your function executed successfully!\"}"}END RequestId: ce503a29-66a5-170e-460f-cb8f81a41f81
REPORT RequestId: ce503a29-66a5-170e-460f-cb8f81a41f81 Duration: 1.06 ms Billed Duration: 100 ms Memory Size: 1536 MB Max Memory Used: 6 MB
Serverless: Failure: {"statusCode":200,"headers":{"Content-Type":"application/json","X-MyCompany-Func-Reply":"hello-handler"},"body":"{\"message\":\"Go Serverless v1.0! Your function executed successfully!\"}"}END RequestId: ce503a29-66a5-170e-460f-cb8f81a41f81
REPORT RequestId: ce503a29-66a5-170e-460f-cb8f81a41f81 Duration: 1.06 ms Billed Duration: 100 ms Memory Size: 1536 MB Max Memory Used: 6 MB
undefined
@billyshambrook thanks for checking! I was able to repro.
I guess the JSON extraction is broken. I'll have to take a closer look at this.
I'm also thinking of trying to execute the `serverless invoke local` plugin directly, instead of spawning an additional process - although I'm not sure if that would solve the problem.
I removed `go-lang` from the README for now, as it is clearly not working.
I can give a hand if you point me at where to look
hey @gonzalovilaseca , thank you!
we originally invoked `serverless invoke local` with a child process, which in turn also spins up a child process and writes to stdout. That output was parsed, and the whole thing was fairly buggy and unreliable. I removed all that and pulled in the (slightly modified) code from `serverless` directly. I have done this only for `Python` and `Ruby` so far, but I'm also planning to pull in the remaining runtimes (`Go` [docker], `Java`, `Dotnet`) as well if possible. I'll report back.
Ok, if you need any help just let me know.
How far are you on adding the Go support? Will it make it into version 6? Thanks
@gonzalovilaseca @localrivet
I started something locally. Although part of me is thinking to hook into the `serverless` invocation, meaning asking them if they would be up for a PR exposing that functionality directly, instead of replicating everything.
> Will it make it into version 6?
it's definitely on the roadmap for v6.
Let me know if there's something I can help with. Having support to locally invoke golang Lambda/API Gateway endpoints is the only thing stopping me from moving to golang + lambda setup.
There's a workaround by combining SAM with serverless framework
@hom-bahrani could you please elaborate on what the workaround is?
@MatejBalantic would be great if you could help out with this! 😃
I am also getting a strange "please start the docker daemon" error...
First, I created a new "aws go lang dep" project with serverless:
serverless create -t aws-go-dep -p my-cool-proj
then I go into it, build it, and deploy it:
cd my-cool-proj
make
serverless deploy
Everything there works fine, and I can call to the live endpoint like this:
serverless invoke -f hello
The issue is when I try to invoke it locally. If I run this command:
serverless invoke local --function hello --data "hello world"
then I get some strange output about starting a Docker daemon, and it exits with a failure:
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Building Docker image...
Exception -----------------------------------------------
'Please start the Docker daemon to use the invoke local Docker integration.'
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: darwin
Node Version: 8.16.0
Framework Version: 1.59.1
Plugin Version: 3.2.5
SDK Version: 2.2.1
Components Core Version: 1.1.2
Components CLI Version: 1.4.0
@JimTheMan Apologies I might be referring to a different issue of not being able to run Go lambda functions offline. For that I've created an example for the SAM workaround with instructions in the readme https://github.com/hom-bahrani/Go-serverless-local-docker
@hom-bahrani @JimTheMan `go` support (with docker) is on its way: https://github.com/dherault/serverless-offline/pull/845
we just released initial `docker` support with v6.0.0 alpha 54. feel free to give it a try and report back with any bugs or improvements.
Thanks @dnalborczyk
I am getting a response now, but I am still seeing these errors in the console:
Proxy Handler could not detect JSON: Serverless: Packaging service...
Proxy Handler could not detect JSON: Serverless: Excluding development dependencies...
Proxy Handler could not detect JSON: Serverless: Building Docker image...
Proxy Handler could not detect JSON: START RequestId: 3ee2cd73-b4e3-1e73-d334-85a19fded799 Version: $LATEST
Proxy Handler could not detect JSON: END RequestId: 3ee2cd73-b4e3-1e73-d334-85a19fded799
REPORT RequestId: 3ee2cd73-b4e3-1e73-d334-85a19fded799 Init Duration: 127.92 ms Duration: 3.26 ms Billed Duration: 100 ms Memory Size: 1536 MB Max Memory Used: 21 MB
Proxy Handler could not detect JSON:
Serverless: Replying timeout after 30000ms
@JimTheMan you are still running an older version of `serverless-offline`. you have to install the alpha with: `npm install serverless-offline@next`.
I just tried the latest version of serverless-offline on a go project and I'm getting `Error: Unsupported runtime` when hitting an endpoint.
Using the new `--useDocker` flag seems to make it work. Thanks for the fix!
What is the recommended way to run Go in serverless-offline now?
--useDocker does work, but it's agonizingly slow. Without --useDocker, I get Unsupported Runtime.
Ideally, I don't want to run in Docker if I don't have to. With a large API, we'd be talking about spinning up hundreds of containers, one for each endpoint, plus the mandatory 5 seconds of staring at the wall every time you call a new endpoint. It makes it impossible to actually test an application offline.
Basically, I've been spending a week on this now, and I can't come up with a realistic way to run a Go-based API offline. So the question is: should I simply give up, create a developer AWS account, always deploy as soon as possible, and run a https://testapi.example.com thing directly on Amazon?
It really stinks, because the rest of our stack is Docker and it runs beautifully locally. But I'm about to give up running Go-based APIs locally.
Are Node and other interpreted languages the only thing that actually works locally?
Hello @perholmes
I created a repository using serverless offline with nodemon. I believe the performance is acceptable, check if it suits your scenario. https://github.com/eriveltonfacundo/docker-serverless-offline/tree/master/golang
Hi @eriveltonfacundo ,
Thanks! Yes, so this was also the direction I went in based on a thing I found somewhere, where each sub-api was its own folder, but then a level above, a script scraped each of the sub-APIs and put them together into one giant serverless.yml that booted the whole API.
Two problems. First, this does nothing to address the startup time of each endpoint. I can only make it run under --useDocker, and each container just takes a handful of seconds on first launch. So recompiling and relaunching means a lot of waiting for a first response from each endpoint. In a complex API where a client app may call 20 endpoints, it's hopeless.
Secondly, this requires every single API to be written in the same language. No mixing Node with Go. Since we already have some Node APIs running that are critical, this isn't something we can entertain right now.
Basically, I've settled on just doing one of our APIs as serverless because it truly benefits from the scaling model. We'll keep all the rest in Go containers on ECS. Serverless with Lambda has all kinds of other problems, for example the 200 resource limit that you have to jump through hoops to work around.
It's just not totally ready for prime time, which I say with sadness. There needs to be a way to boot an entire cluster with several sub-APIs that are their own repos and deploy separately. There needs to be a way for non-Node/Python APIs to boot just as instantly and recompile on the fly, and there need to be no limits on what you can deploy on AWS. The nested/split stack stuff is a nice hack, but it's a hack.
@perholmes these are the times I'm getting:
```
make build && node ./node_modules/.bin/serverless offline start --useDocker
env GOOS=linux go build -ldflags="-s -w" -o bin/hello src/hello/main.go
env GOOS=linux go build -ldflags="-s -w" -o bin/world src/world/main.go
offline: Starting Offline: dev/us-east-1.
offline: Offline [http for lambda] listening on http://localhost:3002
┌─────────────────────────────────────────────────────────────────────────┐
│                                                                         │
│   GET  | http://localhost:3000/dev/hello                                │
│   POST | http://localhost:3000/2015-03-31/functions/hello/invocations   │
│   GET  | http://localhost:3000/dev/world                                │
│   POST | http://localhost:3000/2015-03-31/functions/world/invocations   │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
offline: [HTTP] server ready: http://localhost:3000 🚀
offline: Enter "rp" to replay the last request
offline: GET /dev/hello (λ: hello)
Lambda API listening on port 9001...
START RequestId: 9945c639-d315-1137-4a49-a6459486ec92 Version: $LATEST
END RequestId: 9945c639-d315-1137-4a49-a6459486ec92
REPORT RequestId: 9945c639-d315-1137-4a49-a6459486ec92 Init Duration: 182.66 ms Duration: 6.65 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 22 MB
offline: GET /dev/world (λ: world)
Lambda API listening on port 9001...
START RequestId: 6173493c-b996-1ebe-2ee5-4748b6bdbd2e Version: $LATEST
END RequestId: 6173493c-b996-1ebe-2ee5-4748b6bdbd2e
REPORT RequestId: 6173493c-b996-1ebe-2ee5-4748b6bdbd2e Init Duration: 164.13 ms Duration: 5.14 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 16 MB
offline: GET /dev/world (λ: world)
START RequestId: f833f6de-b96f-11d9-2331-b1356f7fd57b Version: $LATEST
END RequestId: f833f6de-b96f-11d9-2331-b1356f7fd57b
REPORT RequestId: f833f6de-b96f-11d9-2331-b1356f7fd57b Duration: 5.30 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 16 MB
offline: GET /dev/world (λ: world)
START RequestId: 2b81b076-434b-13ed-1cf4-aa014d0d4fd3 Version: $LATEST
END RequestId: 2b81b076-434b-13ed-1cf4-aa014d0d4fd3
REPORT RequestId: 2b81b076-434b-13ed-1cf4-aa014d0d4fd3 Duration: 5.02 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 16 MB
offline: GET /dev/world (λ: world)
START RequestId: 7c898d54-aa5e-17ef-19e6-8787d23d99f2 Version: $LATEST
END RequestId: 7c898d54-aa5e-17ef-19e6-8787d23d99f2
REPORT RequestId: 7c898d54-aa5e-17ef-19e6-8787d23d99f2 Duration: 4.39 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 16 MB
offline: GET /dev/hello (λ: hello)
START RequestId: 621ba1ce-5e87-11fc-b48c-db47bd235815 Version: $LATEST
END RequestId: 621ba1ce-5e87-11fc-b48c-db47bd235815
REPORT RequestId: 621ba1ce-5e87-11fc-b48c-db47bd235815 Duration: 6.06 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 16 MB
```
You're missing the initial startup, which is between 2 and 5 seconds for each endpoint. I've been doing almost nothing else for days, recompiling, booting, connecting to a websocket, waiting 2-5 seconds, doing my test, rinse and repeat.
That's just one endpoint. For a full application connecting to 50 endpoints, we're talking over a minute of startup time every single time you recompile before the API is usable. It's a dealbreaker.
And on the second point, it is possible to define the runtime for each function.
Actually that's true with the runtimes. But your execution times are AFTER the Docker container has started. Sure, once it's up the request takes a few milliseconds.
@perholmes not a pretty solution, but I found a workaround:
1. Use nodemon to monitor the Go files and rebuild.
2. For some reason the first call after a rebuild gets lost; what I did was discard that call and make another one. With that, execution is in the millisecond range.
See the changes in this commit: https://github.com/eriveltonfacundo/docker-serverless-offline/commit/fae889af915967a59e65d23b220c9d34dec30abd
There's no problem with execution times, only startup times of the containers.
But there is no need to restart the container; it mounts the handler directory as a volume.
But there isn't a single container. Serverless starts the number of containers it feels it needs to start. And the go binary is locked while it's executing, at least it was in some tests (a bit foggy for me, this is some days back). And since I'm not starting the Lambda container (serverless is), I don't see how to pass drive mappings into it. Is it a part of serverless that the drive is mapped to the handler directory?
The bind mount is already done automatically.
Actually, you're right. I just did a make build in a separate window with the API already running, and my Println that I just inserted ran. So this changes everything, thanks for sticking in there.
I don't know if I love the idea of compiling a master API that you run in one go. I may rather want to launch each API into its own tabbed terminal window.
But this changes everything, because then you can suffer the boot time at the beginning of the day and then just compile to your heart's content the rest of the day.
F'ing super! I had no idea the binary was drive mapped.
Then there's the separate issue of APIs being limited to about 30 endpoints (200 resources). Let's not hijack this thread more, but it's also more acceptable to hack around it if it's the only remaining problem.
Thanks for your tip, this completely changes the course over here.
I'm happy to help
Hmm, it doesn't seem completely stable. After a while, the new compilations don't make their way into the container anymore and I have to reboot the API. Just leaving a note here in case others are reading later with the same problem, this needs a bit of investigation.
My best guess is that this is Serverless starting multiple containers, and only one of them getting the drive mapping. So next step is to find out if it's possible to limit Serverless to starting exactly one container per endpoint.
UPDATE: I'm not finding anything in the docs, and I can see from just opening a websocket and calling it a couple of times that 4 containers are suddenly running.
So making this work either involves a way to make read-only drive mappings into the containers, or somehow limiting the number of containers SLS Offline starts for each endpoint.
UPDATE: Now it's not working, even with just one container started by SLS. Darn. There must be some way to tame it.
@perholmes commit an example to the repository and we can try to solve it.
I'm coding on a deadline, but first, does this work for you if you hammer the endpoint?
Please open Docker dashboard and look at the number of containers. If you keep calling the same endpoint at random intervals, do you only see one container, or does the number grow or shrink? When the number is 2 or 3, please try to make changes and see if they reflect in each single page load.
One difference here is that I'm using websockets, which Serverless proxies. There's a small possibility that websockets are handled differently than just calling end points.
I'm doubting my sanity a little bit, because now it seems to be working again. And it's possible I edited an API response that appears multiple places and the other one was running so my change didn't show. It's 3 AM, I'm cross-eyed. I'll post back here after testing in the coming days.
Anyone find a way to run this without `--useDocker`? As noted above, my super lightweight lambda that just returns a static JSON payload took 13 seconds on the first pass.
@perholmes @eriveltonfacundo when running `sls offline --useDocker`, are you guys able to communicate with other docker containers? My go lambda is trying to communicate with localhost:5432 for postgres, but is hitting dial TCP errors.
Aren't you supposed to use host.docker.internal for localhost access?
@perholmes yes you're right, I had to change it to `docker.for.mac.localhost`. Forgot about that. Thanks!
@perholmes can confirm multiple docker containers are being created - I get this issue after making multiple requests to an endpoint before any container has been started. I have a feeling the problem lies in here as this allows many new containers to be spawned instead of waiting on a single instance of the container to be ready to then take on requests. I'm not sure if this is by design or is a bug.
As for the hot reload, the underlying image that Serverless Offline uses has the option to watch for changes via the `DOCKER_LAMBDA_WATCH=1` environment variable. Unfortunately with Go this makes the container crash, and the official solution is to set a `--restart on-failure` run flag. Docs for this in further detail are available here.
After adding the above in I've managed to get some development speed improvements when rebuilding as a cold start is not needed, just a warm restart. I've managed to add this into Serverless Offline here - https://github.com/dherault/serverless-offline/compare/master...JamesMarino:feature/docker-hot-reload
Will need to test this further as I'm having some issues with the multiple container spawn problem above. Would be good to work out a way for a single container to wait on requests after a warm restart.
Hi @JamesMarino,
We ended up giving up on Serverless and Go, and also Serverless as such. Here's our reasoning, in case it helps.
First, yes, the spawning and the constant cold starting makes it really unfun to develop locally, especially with many endpoints.
But we also feel that the whole Serverless thing doesn't make sense in the real world. You have constant cold starting, you have no state in the containers that you can rely on, and so you end up having to persist a ton of state in databases. And there's no good way to trigger on database changes. Underneath, any pub/sub system actually just polls the database.
So instead, we've made a cluster of Go containers, duplicates of the same service, under ECS, receiving websocket connections from the load balancer. Each server is allowed to have a ton of state about who's connected and which resources they're connected to. We then use service discovery for duplicates of this server to find each other, and then changes to a resource are immediately broadcast to peers.
This gives us <100ms realtime collaboration, where if we were forced to do it through a database, we'd always be on a two second delay, hammering the database with polling (whether we do it or some pub/sub system does it, it's always polling, there's no magic).
So this allows us to keep a ton of state, support millions of concurrent clients, be aggressively real-time, and develop locally without a hitch. And it ends up using fewer resources, because we're not paying while waiting. This is the best solution for us.
Also, AWS has some severe limits in deploying serverless to lambda. You can really only make small APIs with this. And these cloudformation scripts (converted from serverless) leave insane amounts of trash. We just gave up on the whole thing.
And about this problem: https://stackoverflow.com/questions/72703863/serverless-offline-in-golang
Does anyone know how to solve it?
Hey, I am trying to use serverless-offline with go functions and finding that when I make a request, serverless-offline does not correctly parse the output coming from the docker container. Here are the steps to reproduce:
Now in another terminal run
curl localhost:3000/hello
This gives the following output: