turion / rhine

Haskell Functional Reactive Programming framework with type-level clocks
http://hackage.haskell.org/package/rhine

Dev monad bayes #186

Closed turion closed 1 year ago

reubenharry commented 2 years ago

I have a question about runtime and concurrency. I notice that the framerate of the simulation slows when I have a large number of particles. It would be nice if these were somehow decoupled, so inference would happen in its own time but the simulation speed would never change. Rhine seems like it has the tools to do this sort of thing (i.e. is intended for this sort of thing) - does it seem conceptually feasible to you?

turion commented 2 years ago

I have a question about runtime and concurrency. I notice that the framerate of the simulation slows when I have a large number of particles. It would be nice if these were somehow decoupled, so inference would happen in its own time but the simulation speed would never change. Rhine seems like it has the tools to do this sort of thing (i.e. is intended for this sort of thing) - does it seem conceptually feasible to you?

Yes, that's what rhine is for. One would put the simulation on a separate clock that runs in a busy loop (and thus samples as fast as the machine can), and the graphics on the gloss clock, which ticks every 30th of a second by default. I was planning to do this, I just didn't get around to it yet.
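Roughly, the composition could look like the following sketch (not code from this PR; it assumes rhine's Busy clock, the keepLast resampling buffer and the schedule-free >-- / --> combinators, and simulation, visualisation, glossClock and initialPosition are placeholder names):

-- Sketch only: the simulation samples as fast as possible on the Busy clock,
-- the visualisation runs at the gloss frame rate, and a keepLast buffer hands
-- the most recent sample across the clock boundary.
-- (Elided: both clocks need a common time domain, so one of them has to be
-- rescaled before this typechecks.)
mainRhine =
      simulation    @@ Busy
  >-- keepLast initialPosition
  --> visualisation @@ glossClock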

turion commented 2 years ago

Ah, maybe I've misunderstood. You want to decouple the simulation/sampling of the latent position from the inference/particle filtering step? That's more elaborate, but I believe it could also work. Really good idea, since sampling should be fast!

reubenharry commented 2 years ago

Btw, I refactored some of your code as follows, to give an alternate way of looking at it:

prior :: (MonadSample m, Diff td ~ Float) => BehaviourF m td () Position
prior = fmap V.fromTuple $ model1D &&& model1D where

    model1D = proc _ -> do
        acceleration <- constM (normal 0 4) -< ()
        velocity <- decayIntegral 2 -< double2Float acceleration -- Integral, dying off exponentially
        position <- decayIntegral 2 -< velocity
        returnA -< float2Double position

    decayIntegral timeConstant = average timeConstant >>> arr (timeConstant *^) -- integral with exponential decay over the given time constant

generativeModel :: (MonadSample m, Diff td ~ Float) => BehaviourF m td Position Observation
generativeModel = proc p -> do
    n <- fmap V.fromTuple $ noise &&& noise -< ()
    returnA -< p + n

    where noise = constM (normal 0 std)

posterior :: (MonadInfer m, Diff td ~ Float) => BehaviourF m td Observation Position
posterior = proc (V2 oX oY) -> do
  latent@(V2 trueX trueY) <- prior -< ()
  arrM factor -< normalPdf oY std trueY * normalPdf oX std trueX
  returnA -< latent

----------
-- display
----------

gloss :: IO ()
gloss = sampleIO $
        launchGlossThread defaultSettings
            { display = InWindow "rhine-bayes" (1024, 960) (10, 10) } 
        $ runIdentityT $ reactimateCl glossClock $ proc () -> do

                actualPosition <- prior -< ()
                measuredPosition <- generativeModel -< actualPosition
                samples <- onlineSMC 100 resampleMultinomial posterior -< measuredPosition
                (withSideEffect_ (lift $ lift clearIO) >>> visualisation) -< Result { estimate = averageOf samples
                                    , stdDev = stdDevOf (first xCoord <$> samples) + stdDevOf (first yCoord <$> samples)
                                    , measured = measuredPosition
                                    , latent = actualPosition
                                    }
turion commented 2 years ago

Btw, I refactored some of your code as follows, to give an alternate way of looking at it:

That makes a lot of sense, I refactored it in that direction!

reubenharry commented 2 years ago

Nice! I've been writing a tutorial in a notebook, and trying a few variations of the model, which I'll quickly describe.

Variations:

reubenharry commented 2 years ago

(Btw I started putting together an example in a notebook, and will extend it to a tutorial. Draft here: https://monad-bayes-site.netlify.app/realtimeinference)

turion commented 2 years ago

(Btw I started putting together an example in a notebook, and will extend it to a tutorial. Draft here: https://monad-bayes-site.netlify.app/realtimeinference)

Wow, awesome! If you want I can comment on the source.

I pushed a version with multiple clocks. Right now it doesn't have any advantages yet because the computation is all single threaded, but we can improve on that later with a little work in rhine. Also, it paves the way for interaction, because there you can add event listening & handling.
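For illustration, event handling could look roughly like this (a sketch, not the pushed code; it assumes rhine-gloss's GlossEventClockIO clock and rhine's tagS, which outputs the current tick's tag):

-- Sketch only: a ClSF on the gloss event clock. Each tick corresponds to one
-- user event (key press, mouse movement, ...), available as the clock's tag.
userEvents :: ClSF (GlossConcT IO) GlossEventClockIO () Event
userEvents = tagS

This would then be composed with the simulation and visualisation Rhines, handing events over via a resampling buffer.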

reubenharry commented 2 years ago

Cool! I think I understand how dunai and Rhine's FRP work, and the implementation of the particle filter, but the details of how to combine clocks with schedules, how to use resampling buffers, and how SNs and Rhines fit together are still tricky for me. The new code is helpful for that, though, so I'll read through it carefully.

reubenharry commented 2 years ago

OK, I have a Rhine question: is there a simple way to shift a signal forward or backwards? I'd like to do sf >>> shift @Millisecond 300. (I can see that this would involve knowing the clock I'm on.) The reason I'm asking is I'd like to adapt the model so that we predict trajectories of the dot, rather than just track at the current time. I think that would be a cool application.

EDIT: ok, I just used accumulateWith, and that worked fine. But that's on the dunai level, not the Rhine level.

turion commented 2 years ago

OK, I have a Rhine question: is there a simple way to shift a signal forward

Forecasting is a nontrivial task, both conceptually and computationally. I haven't implemented a general solution for that in rhine, but now that you ask for it, there actually should be a way to do this in general. For now you could probably hack something together with these building blocks:

The tricky bit here is to forecast from the current state of the signal function & clock. I'm guessing that there should be a function forecast :: Diff td -> Rhine m td a b -> Rhine m td a b.

But note that this may have unintended consequences regarding side effects: when we forecast e.g. arrMCl (print =<< sampleIO (normal 1 1)) for one second, should it also execute the side effects (e.g. under a Millisecond 100 clock, printing a random number 10 times) during every forecast? Probably the right way is to keep side effects algebraic and discard them.

or backwards?

Yes, that's delayBy
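For example (a sketch; assuming delayBy takes a time difference in the clock's time domain):

delayed = sf >>> delayBy 0.3 -- sf, shifted backwards by 0.3 time units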

I'd like to do sf >>> shift @Millisecond 300. (I can see that this would involve knowing the clock I'm on.) The reason I'm asking is I'd like to adapt the model so that we predict trajectories of the dot, rather than just track at the current time. I think that would be a cool application.

EDIT: ok, I just used accumulateWith, and that worked fine.

Ah, how did that work out? Can I see the WIP code somewhere?

But that's on the dunai level, not the Rhine level.

That's fine, all dunai functions are valid rhine functions as well. If you want to incorporate the time information ad-hoc into your accumulateWith, simply use timeInfo or similar.
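For instance, an ad-hoc backwards shift could be spelled out roughly like this (a sketch, not code from the repo; shiftBack is a made-up name, and it needs the Arrows extension plus the usual FRP.Rhine imports):

import Data.List (find)

-- Sketch only: keep every (timestamp, value) pair seen so far and output the
-- newest one that is at least dt old, falling back to the current value at
-- the start. The history is unbounded, so a real version would prune it.
shiftBack :: (Monad m, Ord (Diff (Time cl)), Num (Diff (Time cl)))
  => Diff (Time cl) -> ClSF m cl a a
shiftBack dt = proc a -> do
  ti      <- timeInfo -< ()
  history <- accumulateWith (:) [] -< (sinceInit ti, a)
  returnA -< maybe a snd $ find (\(t', _) -> sinceInit ti - t' >= dt) history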

turion commented 2 years ago

the details of how to combine clocks with schedules, how to use resampling buffers, and the use of Sns and Rhines are still tricky for me. The new code is helpful for that though, so I'll read through carefully.

Yes, that is tricky. I'm going to get rid of schedules soon, dropping one piece of complexity.
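At the use site, the difference would look roughly like this (operator and schedule names from memory, so take them with a grain of salt):

rh1 >-- buffer -@- concurrently --> rh2   -- now: a schedule decides how the two clocks interleave
rh1 >-- buffer --> rh2                    -- planned: only the resampling buffer remains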

reubenharry commented 2 years ago

Ah, how did that work out? Can I see the WIP code somewhere?

I put my WIP at https://github.com/reubenharry/rhine-bayes-examples

Here's where I track the past/future: https://github.com/reubenharry/rhine-bayes-examples/blob/master/src/Future.hs

turion commented 1 year ago

Leftovers will be addressed in #206.