Closed: SystemFw closed this issue 6 years ago
Not sure if it helps, but this also reproduces the problem.
import cats.effect._
import cats.implicits._
import fs2._
import fs2.concurrent.{SignallingRef, Topic}
import scala.concurrent.duration._

object Sample extends SampleApp

class SampleApp extends IOApp {
  def run(args: List[String]): IO[ExitCode] = {
    val kafkaConsumer = Stream.awakeEvery[IO](1.seconds).map(_ => "message".some).repeat
    val stream: Stream[IO, Unit] = for {
      topic <- Stream.eval(Topic[IO, Option[String]]("".some))
      stop  <- Stream.eval(SignallingRef[IO, Boolean](false))
      subscriberCountLogger = topic.subscribers.map(n => s"$n client(s) connected").through(Sink.showLinesStdOut)
      subscriber  = kafkaConsumer.to(topic.publish)
      subscriber1 = topic.subscribe(5).evalTap[IO](message => IO { println(s"subscriber1: $message") })
      subscriber2 = topic.subscribe(5).evalTap[IO](message => IO { println(s"subscriber2: $message") }).unNoneTerminate.interruptWhen(stop)
      s <- subscriber
        .concurrently(subscriber1)
        .concurrently(subscriber2)
        .concurrently(subscriberCountLogger)
        .concurrently(Stream.sleep(3.seconds) ++ Stream.eval[IO, Unit] {
          for {
            //r <- stop.set(true)       // interruptWhen does NOT unsubscribe subscriber2
            r <- topic.publish1(None) // unNoneTerminate does unsubscribe subscriber2
          } yield r
        })
    } yield s
    stream.compile.drain.map(_ => ExitCode.Success)
  }
}
Output when using interruptWhen:
0 client(s) connected
1 client(s) connected
2 client(s) connected
subscriber1: Some()
subscriber2: Some()
subscriber2: Some(message)
subscriber1: Some(message)
subscriber2: Some(message)
subscriber1: Some(message)
1 client(s) connected
2 client(s) connected <-- this is odd
subscriber1: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
// now it stops as the queue of subscriber2 (which wasn't unsubscribed) is full
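The stall at the end is ordinary bounded-buffer backpressure: subscriber2's queue (capacity 5, from `topic.subscribe(5)`) was never removed, so once it fills the publisher can make no further progress and subscriber1 stops receiving as well. A minimal sketch of that mechanism with a plain `java.util.concurrent` queue (not the fs2 internals; the names here are illustrative):

```scala
import java.util.concurrent.ArrayBlockingQueue

object BackpressureSketch extends App {
  // A bounded queue of capacity 5, standing in for the leaked
  // subscriber2 buffer. Its consumer never drains it.
  val buffer = new ArrayBlockingQueue[String](5)

  // offer returns false once the buffer is full, mirroring how the
  // publisher can no longer deliver to a stalled, leaked subscriber.
  var accepted = 0
  (1 to 10).foreach { i =>
    if (buffer.offer(s"message-$i")) accepted += 1
  }
  println(s"accepted $accepted of 10 messages") // prints "accepted 5 of 10 messages"
}
```

In fs2 the publisher does not drop messages like `offer` does; it semantically blocks, which is why the whole pipeline goes quiet instead of merely losing messages.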
Output when using unNoneTerminate (commented line in snippet):
0 client(s) connected
1 client(s) connected
2 client(s) connected
subscriber1: Some()
subscriber2: Some()
subscriber2: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
subscriber2: Some(message)
subscriber1: None
subscriber2: None
1 client(s) connected
subscriber1: Some(message)
subscriber1: Some(message)
// correctly goes on forever
Seems to me like some fault in the PubSub strategy when unsubscribing. It looks like old state is somehow passed along on unsubscribe...
With 1.0.0-RC2, unsubscribing seems to work as expected. Only Topic.subscribers continuously emits the subscriber count (unlike 1.0.0, which only emits when there is a change). I added a filter to deal with this: topic.subscribers.filterWithPrevious(_ != _).
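For readers unfamiliar with filterWithPrevious: it keeps an element only when the predicate holds between it and the previous element, so `_ != _` suppresses consecutive duplicates. The same effect on a plain list, as an illustrative sketch (not the fs2 implementation):

```scala
object DedupSketch extends App {
  // Consecutive duplicate subscriber counts, as continuously emitted
  // by Topic.subscribers on 1.0.0-RC2.
  val counts = List(0, 0, 1, 1, 2, 2, 2, 1)

  // Keep an element only when it differs from its predecessor,
  // the list analogue of filterWithPrevious(_ != _).
  val deduped = counts
    .foldLeft(List.empty[Int]) {
      case (acc, n) if acc.headOption.contains(n) => acc // consecutive dup: drop
      case (acc, n)                               => n :: acc
    }
    .reverse

  println(deduped) // prints "List(0, 1, 2, 1)"
}
```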
Happens to me as well. Had to switch back to the non-PubSub implementation, as the subscribers list is not being cleaned up for some reason and the subscriber count just keeps growing.
In my case it is also a parent subscription with an .interruptWhen on it. The parent subscription completes correctly, but the subscriber entry remains in the PubSub.
P.S. Yesterday I checked the implementation myself, but the amount of indirection was just too high for me to understand where the problem is.
This sounds like the same root cause as described in #1293.
Any decisions on this and #1293? It's starting to affect my teams in prod.
@SystemFw I have drafted a solution with Resource[F, F[Unit]], just didn't have time to put up a PR yet. I could manage it by Sunday. Is that ok?
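Since the PR is not up yet, the exact shape of the fix is unknown; the general idea of modeling a subscription as a resource whose acquisition yields its own cleanup action can be sketched without cats-effect. All names below are hypothetical stand-ins, not the actual fs2 code:

```scala
import scala.collection.mutable

object SubscriptionSketch extends App {
  val subscribers = mutable.Set.empty[Int]

  // acquire: register the subscriber and hand back the unsubscribe
  // action (the F[Unit] inside Resource[F, F[Unit]]).
  def subscribe(id: Int): () => Unit = {
    subscribers += id
    () => subscribers -= id
  }

  // Bracketed use, like Resource: the cleanup runs on every exit
  // path, so an interrupted subscriber cannot leak its entry.
  def withSubscription[A](id: Int)(use: => A): A = {
    val unsubscribe = subscribe(id)
    try use
    finally unsubscribe()
  }

  withSubscription(1) {
    println(s"during: ${subscribers.size} subscriber(s)") // prints "during: 1 subscriber(s)"
  }
  println(s"after: ${subscribers.size} subscriber(s)") // prints "after: 0 subscriber(s)"
}
```

The point of the Resource shape is exactly the bug reported here: interruption (e.g. via interruptWhen) is an exit path, and cleanup tied to resource release fires on it unconditionally.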
Sure :)
I'm assuming we would want a release soon after, for this and the other small bugs (like splitAt).
@SystemFw Yeah, I would like to have this done earlier so we can have a release over the weekend.
@sebastienvoss This appears fixed in 1.0.0-SNAPSHOT thanks to #1308.
scala> Sample.main(Array.empty)
0 client(s) connected
2 client(s) connected
subscriber1: Some()
subscriber2: Some()
subscriber2: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
subscriber2: Some(message)
subscriber1: None
subscriber2: None
1 client(s) connected
subscriber1: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
subscriber1: Some(message)
@SystemFw Could you try your use case too?
Yeah the one above is the one I have directly, the other one is from another team, I'll ask them to give it a shot
@SystemFw Here's what I get using your samples, which I think is correct but please spot check the output:
scala> prog1
0
1
message
message
message
0
scala> prog1
0
1
message
message
message
0
scala> prog1
0
1
message
message
message
0
scala> prog2
0
1
0
scala> prog2
0
1
0
scala> prog2
0
1
0
scala> prog2
0
1
0
scala> prog2
0
1
0
scala> prog3
0
1
message
message
message
0
@mpilquist The problem we were having in prod was fixed by upgrading to the SNAPSHOT
Great stuff! I checked and cannot reproduce the issue anymore. Will run some more tests tomorrow and get back in case of any issues.
Hi, could you try running this? It hangs after run 3 in version 1.0.0 (I guess because maxQueued = 2).
import cats.effect.{ContextShift, IO}
import fs2.Stream
import fs2.concurrent.{Queue, Topic}

object Test extends App {
  implicit val ioContextShift: ContextShift[IO] =
    IO.contextShift(scala.concurrent.ExecutionContext.Implicits.global)

  val publisher = Queue.unbounded[IO, Double].unsafeRunSync()
  val topic     = Topic[IO, Double](.0).unsafeRunSync()

  val listener = publisher.dequeue.map(_ * 100).map(_.floor) to topic.publish
  listener.compile.drain.unsafeRunAsync {
    case Right(_) => println("listener stopped gracefully")
    case Left(e)  => e.printStackTrace()
  }

  val maxQueued = 2
  val rpc = for {
    b <- Stream.eval(Queue.unbounded[IO, Double])
    receivedStream = for {
      _ <- Stream.eval(b.dequeue1)
      _ <- Stream.emit(math.random()) to publisher.enqueue
      r <- b.dequeue
    } yield r
    received <- receivedStream.concurrently(topic.subscribe(maxQueued) to b.enqueue)
  } yield received

  1 to 10 foreach { i =>
    val result = rpc.take(1).compile.toList.unsafeRunSync()
    println(s"run $i: $result")
  }
}
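A guess at why this hangs after only a few runs: if each run's subscriber entry leaks, the publisher must deliver into every stale capacity-2 queue, and once the oldest one is full it can make no progress. An illustrative sketch of that failure mode with plain queues (not the fs2 internals; `offer` returns false here where fs2's publish would block):

```scala
import java.util.concurrent.ArrayBlockingQueue
import scala.collection.mutable

object LeakSketch extends App {
  // Every "run" subscribes, but the old queue is never removed.
  val stale = mutable.ListBuffer.empty[ArrayBlockingQueue[Double]]

  // Publishing only succeeds if every registered queue accepts the
  // message; offers begin to fail once the oldest leaked queue
  // (capacity maxQueued = 2) is full.
  def run(): Boolean = {
    stale += new ArrayBlockingQueue[Double](2) // leaked subscriber entry
    stale.forall(_.offer(math.random()))       // publish to all queues
  }

  (1 to 5).foreach(i => println(s"run $i publish succeeded: ${run()}"))
}
```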
I'd say this can be closed now
Still quite a lot of code, here's what I have: