grammyjs / grammY

The Telegram Bot Framework.
https://grammy.dev
MIT License

Vercel no longer works with `express` middleware #239

Closed — roziscoding closed this issue 2 years ago

roziscoding commented 2 years ago

Version 1.9.0 introduced the usage of req.header in the express middleware, which is valid for express, but not for Vercel.

The example bot for usage with Vercel uses the express middleware, which no longer works. I've verified that the fastify middleware is able to work with Vercel, since it uses req.headers[SECRET_HEADER] and req.headers does exist in Vercel's request object.

I'm willing to create a PR to change the example bot to use fastify instead of express, but I'm also willing to create a middleware specifically for Vercel. Which one would be better?
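For context, the incompatibility comes down to how the header is read. A minimal sketch (hypothetical names, not grammY's actual adapter code) contrasting the two request shapes:

```typescript
// Hypothetical sketch (not grammY's adapter source) of why the
// express-style lookup breaks on Vercel.
const SECRET_HEADER = "x-telegram-bot-api-secret-token";

// express decorates the request with a header() method:
type ExpressLikeReq = { header: (name: string) => string | undefined };
// Vercel hands the function a plain Node.js IncomingMessage,
// which only has a headers object:
type VercelLikeReq = { headers: Record<string, string | undefined> };

function readSecretExpress(req: ExpressLikeReq): string | undefined {
  // On Vercel this throws: req.header is not a function.
  return req.header(SECRET_HEADER);
}

function readSecretVercel(req: VercelLikeReq): string | undefined {
  // req.headers exists on any IncomingMessage, so this works everywhere.
  return req.headers[SECRET_HEADER];
}
```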

KnorpelSenf commented 2 years ago

I think it would be best to adjust to whatever Vercel uses internally. For example, Google expressly relies on express (pun intended) so we would follow the express API for Google. This means that if Vercel uses fastify under the hood, we should change the example bot to use fastify. However, if they use their own web framework, we should have a new framework adapter that matches their specification.

(I didn't follow along Vercel development since I last used it, but I believe they have their own web API, so we should have a dedicated integration. Can you confirm this?)

roziscoding commented 2 years ago

OK, so I went to investigate this and found a few things:

What I think would work great is:

I'd be glad to create both PRs if we think that's the way to go.

PS: As a temporary workaround, use the same token both when setting the webhook and when calling `webhookCallback()`

KnorpelSenf commented 2 years ago
> meaning the http adapter should work, but it doesn't

This adapter doesn't necessarily have the same interface, even though the exchanged objects are the same. Are you sure that this will work? (I haven't read the docs of every single integration we have.) If yes, please go ahead and create a pull request for that.

Regarding the HTTP header parsing, that's clearly a bug. I'd be happy to review your PR with a fix.

roziscoding commented 2 years ago

> Are you sure that this will work?

Vercel's types explicitly extend `IncomingMessage` and `ServerResponse`, so as long as they keep doing that, it should work. Also, I've tested it myself and it does work correctly.

I'll open both PRs then.

roziscoding commented 2 years ago

Besides the linked PR, I've opened #18 on the examples repository.

null-prophet commented 10 months ago

Vercel bots using serverless functions need to use the 'next-js' adapter now:

```typescript
import { webhookCallback } from "grammy";

// bot and SECRET_TOKEN are defined elsewhere in the function file
export default webhookCallback(bot, "next-js", {
    secretToken: SECRET_TOKEN,
});
```

Just spent 2 hrs of my life I will never get back trying the example with the 'http' adapter and debugging it. I was constantly getting 401 errors.

I used the example for node and the rest works as soon as you change that.

I also tested this with a secret token and it's working correctly.
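The 401s are consistent with the secret-token check failing: Telegram sends the `secret_token` passed to `setWebhook` back in the `X-Telegram-Bot-Api-Secret-Token` header, and the framework rejects requests where it doesn't match. A hypothetical sketch of such a check (not grammY's source):

```typescript
// Hypothetical sketch of a webhook secret-token check (not grammY's source).
// Telegram echoes the secret_token given to setWebhook back in this header:
const SECRET_HEADER = "x-telegram-bot-api-secret-token";

function checkSecret(
  headers: Record<string, string | undefined>,
  expected: string | undefined,
): number {
  // No token configured: nothing to verify.
  if (expected === undefined) return 200;
  // Token configured but header missing or wrong: reject with 401.
  return headers[SECRET_HEADER] === expected ? 200 : 401;
}
```

So a 401 on every request usually means the check never saw the header value it expected — either the configured token differs from the one given to `setWebhook`, or the adapter failed to read the header from the request at all.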

KnorpelSenf commented 10 months ago

@PonomareVlad do you know why they changed this?

PonomareVlad commented 10 months ago

@null-prophet how is your initial setup different from this example?

@KnorpelSenf I don't see any changes, my good old template still works:

[logs attached]

PonomareVlad commented 10 months ago

@null-prophet btw, maybe the 401 was associated with this change?

null-prophet commented 10 months ago

Let me test this template. I was using the one on the grammY examples site, as I didn't know about this other one you linked above.

One other strange effect I see sometimes is that the bot will double up its replies.

I have the bot set up/configured like so:

- inline mode: enabled
- group privacy: disabled (so it can reply to group chats)
- one command set up as well

I will try this repo as an alternative and see how it goes. I was using TypeScript for my code, but I'll follow the general layout of your code above.

null-prophet commented 10 months ago

> @null-prophet btw, maybe the 401 was associated with this change?

Possibly, but when I changed the adapter it worked. This is how I had it set up when it was running inside a Next.js app before. I was sometimes getting this duplicate-message issue back then too, which is why I thought moving it to its own instance would alleviate the problem, but it doesn't.

null-prophet commented 10 months ago

This is what I based it on; I bumped the versions of everything, though.

https://github.com/grammyjs/examples/tree/main/setups/vercel-serverless-node

PonomareVlad commented 10 months ago

@null-prophet Can you try to replace this file and build again?

null-prophet commented 10 months ago

and put it back to the http adapter also?

PonomareVlad commented 10 months ago

> and put it back to the http adapter also?

Yep, full example here

null-prophet commented 10 months ago

That seems to work. I had also changed the URL to the main URL the project creates prior to this, and it was behaving then as well. It must be the password/login requirement for the deployed branches/commits.

I do still get duplicate answers when a few updates come in. Is there a way to stop this, or perhaps to block until a request has been processed? That's the only thing I can imagine is happening.

I'm running the pro vercel account and nothing else special setup on the serverless project other than environment variables.

```typescript
void bot.api.setWebhook(webhook, {
    secret_token: SECRET_TOKEN,
    drop_pending_updates: true,
});
```

This is how I set the webhook also.

null-prophet commented 10 months ago

Thinking out loud: if the call took longer than the normal response timeout, would that cause the message to be processed twice?

PonomareVlad commented 10 months ago

> I do still get duplicate answers when a few values are coming in, is there a way to stop this or perhaps block until a request is being processed, its the only thing I can imagine is happening.
>
> I'm running the pro vercel account and nothing else special setup on the serverless project other than environment variables.

I think you need to try extending the maximum duration of invocations, up to 300 seconds (the Pro plan limit).
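If the function file is, say, `api/bot.ts` (hypothetical path), the limit can be raised per function in `vercel.json`:

```json
{
  "functions": {
    "api/bot.ts": {
      "maxDuration": 300
    }
  }
}
```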

PonomareVlad commented 10 months ago

> thinking out loud, if the call took longer than the normal response time (timeout) would that cause the message to be procesed twice?

Read this

null-prophet commented 10 months ago

I will up the limit to 300, but if the webhook is timing out at 10 seconds then that's not really going to solve the problem.

Are there any examples of using an in-memory queue for the webhook on Vercel (or similar), or am I going to need to switch to a runner on a VPS to get reliability? And if so, is Heroku or DigitalOcean a good alternative?
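Short of a real queue, one stopgap for retried updates is deduplicating by `update_id`. A hypothetical in-memory guard (my own sketch, not a grammY feature):

```typescript
// Hypothetical dedup guard (not a grammY feature). Telegram retries a
// webhook that didn't answer 200 in time, resending the same update_id,
// so remembering recent ids lets a handler drop the retry.
const seen = new Map<number, number>(); // update_id -> first-seen timestamp
const TTL_MS = 5 * 60 * 1000; // forget ids after 5 minutes

function isDuplicate(updateId: number, now: number = Date.now()): boolean {
  // Evict stale entries so the map stays bounded.
  for (const [id, ts] of seen) {
    if (now - ts > TTL_MS) seen.delete(id);
  }
  if (seen.has(updateId)) return true;
  seen.set(updateId, now);
  return false;
}
```

In a handler you would call `if (isDuplicate(update.update_id)) return;` before doing any work. Note the serverless caveat: each cold start gets fresh memory, so this only suppresses retries that hit the same warm instance — it doesn't replace a persistent queue or a VPS runner.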

null-prophet commented 10 months ago

I upped the webhook timeout as well; let's see how that goes, but the queue makes sense. Any more info on an implementation would be great. Thank you for all your help.