GregPeden closed this issue 3 years ago.
I am not that familiar with subscriptions and/or Pusher, but I guess the responsible code might be https://github.com/nuwave/lighthouse/blob/be92297497e4577cffb81b3ad4dc7b018ed04013/src/Subscriptions/Broadcasters/PusherBroadcaster.php#L57
@stayallive @thekonz
You're syncing all tabs to listen on the same channel? Would it work if you instead had each tab fire its own subscription request (and listen on its own channel)?
Each tab is a separate session. They aren't sharing resources other than the auth token and some info on the user to enable fast loading. The websockets server treats each tab as a separate client session.
I say "tab" because it's a dev environment.
Hmm, I will have to set up a test env for this, but looking at the code everything looks correct.
Channels are subscribed to topics and on the pusher hook the channel is removed.
However I have a simple question to start with. You mention you work locally and with laravel-websockets, how is the webhook even called since laravel-websockets doesn't have webhooks?
So assuming no webhooks are being fired, the server has no clue that the connection was closed, and the subscription state should not be affected at all (remember that unless you wired it up yourself, Lighthouse and laravel-websockets have no direct integration and are not aware of each other).
Well, consider that the topic is the same for all three instances. If I subscribe to "myUserEvent" for user ID 2 from three tabs, and it is set up to use "my-user-event:2" as the topic, all three sessions will have that same topic. So if the code ends subscriptions based on subscribing user + topic string, it will wipe all three when any one leaves. It needs to be client session ID + topic string.
I suppose I could bake something unique about the client session (from the Subscriber class) into the topic string, but I'd suggest this should be handled outside this implementation's scope. The topic is not unique per client session; it is shared. The documentation also does not state that it must be unique.
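Roughly the kind of subscription class I mean, with illustrative names (the topic is derived from the field arguments only, so every tab subscribing for the same user ID shares one topic, even though Lighthouse hands each request its own private channel):

<?php

namespace App\GraphQL\Subscriptions;

use Illuminate\Http\Request;
use Nuwave\Lighthouse\Schema\Types\GraphQLSubscription;
use Nuwave\Lighthouse\Subscriptions\Subscriber;

class MyUserEvent extends GraphQLSubscription
{
    public function authorize(Subscriber $subscriber, Request $request): bool
    {
        return true; // real authorization omitted for brevity
    }

    public function filter(Subscriber $subscriber, $root): bool
    {
        return true;
    }

    // Topic used when the client subscribes: built from the arguments only,
    // so it is identical for every tab of the same user.
    public function encodeTopic(Subscriber $subscriber, string $fieldName): string
    {
        return 'my-user-event:' . $subscriber->args['id'];
    }

    // Topic used when broadcasting: must produce the same string as above.
    public function decodeTopic(string $fieldName, $root): string
    {
        return 'my-user-event:' . $root->id; // assumes the broadcast root carries the user id
    }
}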
I got my latest code base onto a real server today, so I can now test it like a proper production deployment. Testing with my phone in hand plus three tabs on my desktop, I can trigger events from the phone and see them propagate on the desktop. If I close any one desktop tab, the other tabs no longer receive the expected subscription events.
My dev implementation uses Docker; I am running laravel-websockets in its own Docker container and using a reverse proxy for the routing, which checks for the "upgrade" property.
map $http_upgrade $type {
    default "web";
    websocket "ws";
}

server {
    listen 443 ssl;

    ssl_certificate blahblahblah;
    ssl_certificate_key blahblahblah;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    index index.php;
    charset utf-8;
    server_name blahblahblah;
    root /var/www/html/public;

    location / {
        try_files /nonexistent @$type;
    }

    location @web {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location @ws {
        proxy_pass http://websockets:6001;
        proxy_set_header Host $host;
        proxy_read_timeout 60;
        proxy_connect_timeout 60;
        proxy_redirect off;

        # Allow the use of websockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
My production implementation is similar, except I'm using a separate server block on a subdomain instead of relying on the "websocket" upgrade check to affect routing.
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/blahblahblah/before/*;

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name blahblahblah;
    server_tokens off;
    root /home/forge/blahblahblah/public;

    # FORGE SSL (DO NOT REMOVE!)
    ssl_certificate blahblahblah;
    ssl_certificate_key blahblahblah;

    ssl_protocols TLSv1.2;
    ssl_ciphers TLS13-AES-256-GCM-SHA384:TLS13-CHACHA20-POLY1305-SHA256:TLS_AES_256_GCM_SHA384:TLS-AES-256-GCM-SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS-CHACHA20-POLY1305-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/dhparams.pem;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;
    charset utf-8;

    # FORGE CONFIG (DOT NOT REMOVE!)
    include forge-conf/blahblahblah/server/*;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    access_log off;
    error_log /var/log/nginx/blahblahblah-error.log error;

    error_page 404 /index.php;

    location ~* \.(otf|eot|ttf|woff|woff2|ico|css|js)$ {
        expires 365d;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_read_timeout 300;
    }

    location ~ /\.ht {
        deny all;
    }
}

# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/blahblahblah/after/*;
I don't think the server config is relevant in this case.
Could you please answer this question:
However I have a simple question to start with. You mention you work locally and with laravel-websockets, how is the webhook even called since laravel-websockets doesn't have webhooks?
It's really important to know whether Lighthouse knows about the closed connection, because then it could be a bug in the Lighthouse code. But if Lighthouse is not informed about the connection being closed, it also doesn't clean up, so the bug is elsewhere.
Can you also share what you have set up for this config option?
I have LIGHTHOUSE_SUBSCRIPTION_STORAGE_TTL=3600
When you ask "how is it called" I'm not sure what you mean. The subscriptions work, and when I close the browser, laravel-websockets knows this and I can see the event in the debug log. There is no magic trickery happening.
I am using this demonstration project as a reference to integrate with Lighthouse: https://github.com/enzonotario/lighthouse-laravel-websockets
My client implementation is similar to the one in the above reference but modified a bit; most significantly, my change tells the server when Apollo indicates a subscription has been closed (this is separate from the browser closing, which is what's being discussed here).
import { ApolloLink, Observable } from "apollo-link";
import echoClient from "./echo";

class SubscriptionsLink extends ApolloLink {
  constructor() {
    super();
    this.subscriptions = [];
    this.echo = echoClient;
  }

  request(operation, forward) {
    return new Observable(observer => {
      let subscriptionChannel;

      // Check the result of the operation
      forward(operation).subscribe({
        next: data => {
          // If the operation has the subscription extension, it's a subscription
          subscriptionChannel = this._getChannel(data);

          if (subscriptionChannel) {
            this._createSubscription(subscriptionChannel, observer);
          } else {
            // No subscription found in the response, pipe data through
            observer.next(data);
            observer.complete();
          }
        },
      });

      // Return an object that will unsubscribe _if_ the query was a subscription.
      return {
        closed: false,
        unsubscribe: () => {
          if (subscriptionChannel) {
            this._leaveSubscription(subscriptionChannel);
          }
        },
      };
    });
  }

  _getChannel(response) {
    return response?.extensions?.lighthouse_subscriptions?.channel ?? null;
  }

  _createSubscription(subscriptionChannel, observer) {
    const privateChannelName = subscriptionChannel.split('private-').pop();

    if (!this.subscriptions.find(s => s.channel === subscriptionChannel)) {
      this.subscriptions.push({
        channel: subscriptionChannel,
        observer: observer,
      });
    }

    this.echo.private(privateChannelName).listen('.lighthouse-subscription', payload => {
      if (!payload.more || observer._subscription._state === 'closed') {
        this._leaveSubscription(subscriptionChannel);
        return;
      }

      const result = payload.result;
      if (result) {
        observer.next({
          data: result.data,
          extensions: result.extensions,
        });
      }
    });
  }

  _leaveSubscription(channel) {
    const subscription = this.subscriptions.find(s => s.channel === channel);

    this.echo.leave(channel);

    if (subscription) {
      subscription.observer.complete();
      // Remove the entry from the local list (the original used slice(),
      // which does not actually remove it).
      this.subscriptions = this.subscriptions.filter(s => s !== subscription);
    }
  }
}

export default SubscriptionsLink;
laravel-websockets includes support for the Pusher protocol, so I'm using that. It maintains the connection by pinging back and forth, and as soon as I close the browser the websockets server knows instantly, because the Pusher client broadcasts that the browser is closing as its last action. If the client fails to report this, the regular pinging failing informs Pusher (and laravel-websockets) that the client has dropped. Is this what you meant to ask about?
I would focus on the Pusher implementation, not laravel-websockets, as it is emulating Pusher.
The behavior on a real addressed server on Digital Ocean using a cell phone and/or desktop computer is exactly the same as in my development environment, it doesn't matter that I am using Docker locally.
For what it's worth, a while back when I first experimented with subscriptions, I used the actual Pusher service to run them and the same thing happened. I abandoned subscriptions at the time because I knew this would create huge usability problems. I later implemented https://github.com/thekonz/lighthouse-redis-broadcaster and it worked better, except it had the opposite problem: it depends entirely on the client reporting that the subscription is closed in order to stop broadcasting to it. It is now archived and getting buggy, and on the advice of its developer I tried laravel-websockets, which works great, except for this lifecycle management thing.
Note that I am reviewing and adding logging to the code I pulled in from the "WebSockets" folder of the project linked above, just to make sure it's not doing something weird. In particular, I know that removeFromAllChannels() is being called when the browser closes, and it then does exactly what the method name suggests... but I will make sure this matches what Lighthouse itself expects.
<?php

namespace App\WebSockets\Channels\ChannelManagers;

use BeyondCode\LaravelWebSockets\WebSockets\Channels\Channel;
use BeyondCode\LaravelWebSockets\WebSockets\Channels\ChannelManagers\ArrayChannelManager as ArrayChannelManagerAlias;
use Illuminate\Support\Arr;
use Nuwave\Lighthouse\Subscriptions\Contracts\StoresSubscriptions as Storage;
use Ratchet\ConnectionInterface;

class ArrayChannelManager extends ArrayChannelManagerAlias
{
    public function removeFromAllChannels(ConnectionInterface $connection)
    {
        \Log::debug('removeFromAllChannels', ['connection' => $connection]);

        $storage = app(Storage::class);

        collect(Arr::get($this->channels, $connection->app->id, []))
            ->each(function (Channel $channel, string $channelName) use ($storage) {
                \Log::debug('deleteSubscriber', ['channel' => $channelName]);
                $storage->deleteSubscriber($channelName);
            });

        parent::removeFromAllChannels($connection);
    }
}
collect(Arr::get($this->channels, $connection->app->id, []))
    ->each(function (Channel $channel, string $channelName) use ($storage) {
        \Log::debug('deleteSubscriber', ['channel' => $channelName]);
        $storage->deleteSubscriber($channelName);
    });
Well there is your problem 😉
Arr::get($this->channels, $connection->app->id, [])
returns all channels belonging to the app ID.
Not just the channels this connection was subscribed to... so you are deleting subscribers for every channel currently active in Lighthouse, which explains what you are seeing.
This is not a Lighthouse issue; the code you are using is incorrect.
Might I suggest you take a look here: https://github.com/nuwave/lighthouse/issues/847#issuecomment-618254044
This might point you in the right direction for correctly implementing the deleteSubscriber
hook with Lighthouse; at least that one I have personally tested and can vouch for.
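For anyone reading along, the fix described in that comment is roughly shaped like this. This is only a sketch assuming the laravel-websockets v1 ArrayChannelManager/Channel API (find(), getSubscribedConnections(), socketId); the class name is arbitrary and the linked comment remains the tested reference:

<?php

namespace App\WebSockets\Channels\ChannelManagers;

use BeyondCode\LaravelWebSockets\WebSockets\Channels\Channel;
use BeyondCode\LaravelWebSockets\WebSockets\Channels\ChannelManagers\ArrayChannelManager as BaseChannelManager;
use Illuminate\Support\Arr;
use Nuwave\Lighthouse\Subscriptions\Contracts\StoresSubscriptions;
use Ratchet\ConnectionInterface;

class ChannelManager extends BaseChannelManager
{
    public function removeFromAllChannels(ConnectionInterface $connection)
    {
        // Remember only the channels this particular connection was subscribed to.
        $channelNames = collect(Arr::get($this->channels, $connection->app->id, []))
            ->filter(function (Channel $channel) use ($connection) {
                return isset($channel->getSubscribedConnections()[$connection->socketId]);
            })
            ->keys();

        // Let laravel-websockets detach the connection and drop now-empty channels.
        parent::removeFromAllChannels($connection);

        // Tell Lighthouse to forget a channel only once it has actually been vacated.
        $storage = app(StoresSubscriptions::class);
        $channelNames->each(function (string $channelName) use ($storage, $connection) {
            if ($this->find($connection->app->id, $channelName) === null) {
                $storage->deleteSubscriber($channelName);
            }
        });
    }
}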
But I think we can close this issue as resolved since we found the problem and it's technically not Lighthouse?
I agree that is doing it. Now my question is... why is it there? The person who built the demo put it there for some reason.
I did see the method but I figured it was to be fired as some grand cleanup action.
I have been experimenting and find it works better without that method overridden; however, I am worried there might be a memory leak problem or something.
I have also modified the client so it can report to the server when it no longer needs a subscription, which mitigates any memory leak problem a lot... that, plus setting the TTL to a day or so, and it'll be good.
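On the server side, that report only needs deleteSubscriber() to be called with the channel name; something like this hypothetical route (the endpoint name and request shape are made up here, and auth/validation is omitted):

<?php

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;
use Nuwave\Lighthouse\Subscriptions\Contracts\StoresSubscriptions;

// Hypothetical endpoint: the client posts the channel it received in the
// lighthouse_subscriptions extension once it no longer needs the subscription.
// Protect this route with your usual auth middleware.
Route::post('/graphql/unsubscribe', function (Request $request) {
    app(StoresSubscriptions::class)->deleteSubscriber(
        $request->input('channel')
    );

    return response()->json(['message' => 'ok']);
});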
Thanks for your help. :)
I cannot speculate for the other author, you would have to ask them 😉
But I think they assumed it cleaned up the subscriptions for a single connection, while in reality it cleans up all the subscriptions connected to the websocket server every time any connection closes.
This works until you have more than one subscription, as you encountered.
The only reason I can think of to run that code is when shutting down the websocket server, to make sure all active subscriptions are removed.
But I would not worry too much about that; set LIGHTHOUSE_SUBSCRIPTION_STORAGE_TTL to 24 hours (in seconds) so that subscriptions older than 24 hours are cleaned up automatically (if you expect your clients to stay connected for longer, increase the TTL).
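For reference, that setting looks roughly like this (86400 seconds = 24 hours); double-check the key name against the lighthouse.php config file shipped with your version:

// config/lighthouse.php (fragment)
'subscriptions' => [
    // ...
    // Expire stored subscribers automatically after 24 hours.
    'storage_ttl' => env('LIGHTHOUSE_SUBSCRIPTION_STORAGE_TTL', 60 * 60 * 24),
],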
You still need and want to update your app when a connection closes, but I have written up how to achieve that here: https://github.com/nuwave/lighthouse/issues/847#issuecomment-618254044. That solution is tested and works great: it cleans up channels as they are vacated, which ensures only a single channel is cleaned up, and only when no subscribers are left on it.
I don't have the time to update the example with a working solution myself, so I've opened an issue on the example you were following, so the author and possibly other readers know what is up with it: https://github.com/enzonotario/lighthouse-laravel-websockets/issues/17.
Hi folks! Sorry about that line. In fact, when I wrote it I only tested with an Android app, so I was always testing with a single client. Maybe that's why I didn't notice this problem. I have updated my example with the code described by @stayallive; so far it works fine. Thank you!
Describe the bug
Context:
Consider a client/user with your page open in multiple tabs, all logged in with the same account (i.e. all in the same Chrome browser session). They all have API subscriptions and, at first, they all work correctly, for example a feature that syncs state across all the tabs. As soon as one of the tabs is closed, and the "pusher connection closed" event is generated by Pusher, the subscriptions in all the other browser tabs are ended as well. I don't think the clients are told about this; the server just never broadcasts to them again.
If I then open more tabs, subscriptions work as expected in those new tabs while the old ones remain broken. If I close one of those new tabs, it again breaks subscriptions in all the other tabs.
Expected behavior/Solution
When one browser tab is closed, subscriptions in the other tabs should keep working.
Steps to reproduce
Commentary/speculation
In short, it seems like when the Pusher connection is closed, the server looks up all subscriptions for that user and ends them all, not just the ones associated with that one browser session, purging them all from the cache so that broadcasts stop.