futureverse / future

:rocket: R package: future: Unified Parallel and Distributed Processing in R for Everyone
https://future.futureverse.org

Error: ‘inherits(future, "Future")’ is not TRUE #723

Open sebsilas opened 5 months ago

sebsilas commented 5 months ago

Describe the bug

When I try to create a promise in the context of starting up a Shiny app, I get the following error:

Error: ‘inherits(future, "Future")’ is not TRUE

This doesn't happen consistently.

The traceback (see below) is interesting. It suggests that the underlying future is of cluster type, even though I have used plan(multisession). If I explicitly set the plan when calling future_promise, I still get the same behaviour.

Reproducible example


library(shiny)
library(promises)

ui <- fluidPage(
  titlePanel("Promise fail")
)

server <- function(input, output, session) {

  on_start(session)

}

on_start <- function(session) {

  future::plan(future::multisession)

  res <- promises::future_promise({
    Sys.sleep(5)
    "test_res"
  }, seed = NULL) %...>% (function(result) {
    print(result)
  })

}

# Run the application
shinyApp(ui = ui, server = server)

Expected behavior

I expect the promise not to fail.

Traceback

13: stop(cond)
12: stopf("%s is not TRUE", sQuote(call), call. = FALSE, domain = NA)
11: stop_if_not(inherits(future, "Future"))
10: post_mortem_cluster_failure(ex, when = "checking resolved from", 
        node = node, future = future)
9: resolved.ClusterFuture(x, timeout = 0)
8: future::resolved(x, timeout = 0)
7: withCallingHandlers(expr, error = function(e) {
       promiseDomain$onError(e)
   })
6: doTryCatch(return(expr), name, parentenv, handler)
5: tryCatchOne(expr, names, parentenv, handlers[[1L]])
4: tryCatchList(expr, classes, parentenv, handlers)
3: base::tryCatch(withCallingHandlers(expr, error = function(e) {
       promiseDomain$onError(e)
   }), ..., finally = finally)
2: tryCatch({
       future::resolved(x, timeout = 0)
   }, FutureError = function(e) {
       reject(e)
       TRUE
   })
1: (function () 
   {
       is_resolved <- tryCatch({
           future::resolved(x, timeout = 0)
       }, FutureError = function(e) {
           reject(e)
           TRUE
       })
       if (is_resolved) {
           tryCatch({
               result <- future::value(x, signal = TRUE)
               resolve(result)
           }, FutureError = function(e) {
               reject(e)
               TRUE
           }, error = function(e) {
               reject(e)
           })
       }
       else {
    ...

Session information

> sessionInfo()

R version 4.3.0 (2023-04-21)
Platform: x86_64-apple-darwin20 (64-bit)
Running under: macOS Ventura 13.6.6

Matrix products: default
BLAS:   /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib 
LAPACK: /Library/Frameworks/R.framework/Versions/4.3-x86_64/Resources/lib/libRlapack.dylib;  LAPACK version 3.11.0

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

time zone: Europe/Berlin
tzcode source: internal

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] promises_1.2.1 shiny_1.8.0    devtools_2.4.5 usethis_2.2.3 

loaded via a namespace (and not attached):
 [1] jsonlite_1.8.8    miniUI_0.1.1.1    compiler_4.3.0    Rcpp_1.0.12      
 [5] stringr_1.5.1     parallel_4.3.0    jquerylib_0.1.4   later_1.3.2      
 [9] globals_0.16.3    fastmap_1.1.1     mime_0.12         R6_2.5.1         
[13] htmlwidgets_1.6.4 future_1.33.1     profvis_0.3.8     bslib_0.7.0      
[17] rlang_1.1.3       cachem_1.0.8      stringi_1.8.3     httpuv_1.6.14    
[21] sass_0.4.9        fs_1.6.3          pkgload_1.3.4     memoise_2.0.1    
[25] cli_3.6.2         withr_3.0.0       magrittr_2.0.3    digest_0.6.35    
[29] rstudioapi_0.15.0 xtable_1.8-4      remotes_2.5.0     lifecycle_1.0.4  
[33] vctrs_0.6.5       glue_1.7.0        listenv_0.9.1     urlchecker_1.0.1 
[37] codetools_0.2-19  sessioninfo_1.2.2 pkgbuild_1.4.4    parallelly_1.37.1
[41] purrr_1.0.2       tools_4.3.0       ellipsis_0.3.2    htmltools_0.5.8.1
…
> future::futureSessionInfo()

*** Package versions
future 1.33.1, parallelly 1.37.1, parallel 4.3.0, globals 0.16.3, listenv 0.9.1

*** Allocations
availableCores():
system 
    16 
availableWorkers():
$system
 [1] "localhost" "localhost" "localhost" "localhost" "localhost" "localhost" "localhost"
 [8] "localhost" "localhost" "localhost" "localhost" "localhost" "localhost" "localhost"
[15] "localhost" "localhost"

*** Settings
- future.plan=<not set>
- future.fork.multithreading.enable=<not set>
- future.globals.maxSize=<not set>
- future.globals.onReference=<not set>
- future.resolve.recursive=<not set>
- future.rng.onMisuse=<not set>
- future.wait.timeout=<not set>
- future.wait.interval=<not set>
- future.wait.alpha=<not set>
- future.startup.script=<not set>

*** Backends
Number of workers: 16
List of future strategies:
1. multisession:
   - args: function (..., workers = availableCores(), lazy = FALSE, rscript_libs = .libPaths(), envir = parent.frame())
   - tweaked: FALSE
   - call: future::plan(future::multisession)

*** Basic tests
Main R session details:
    pid     r sysname release
1 85671 4.3.0  Darwin  22.6.0
                                                                                                 version
1 Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
  nodename machine   login    user effective_user
1  host001  x86_64 user001 user002        user002
Worker R session details:
   worker   pid     r sysname release
1       1 86370 4.3.0  Darwin  22.6.0
2       2 86367 4.3.0  Darwin  22.6.0
3       3 86369 4.3.0  Darwin  22.6.0
4       4 86372 4.3.0  Darwin  22.6.0
5       5 86376 4.3.0  Darwin  22.6.0
6       6 86366 4.3.0  Darwin  22.6.0
7       7 86374 4.3.0  Darwin  22.6.0
8       8 86373 4.3.0  Darwin  22.6.0
9       9 86371 4.3.0  Darwin  22.6.0
10     10 86368 4.3.0  Darwin  22.6.0
11     11 86378 4.3.0  Darwin  22.6.0
12     12 86379 4.3.0  Darwin  22.6.0
13     13 86380 4.3.0  Darwin  22.6.0
14     14 86377 4.3.0  Darwin  22.6.0
15     15 86375 4.3.0  Darwin  22.6.0
16     16 86365 4.3.0  Darwin  22.6.0
                                                                                                  version
1  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
2  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
3  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
4  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
5  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
6  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
7  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
8  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
9  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
10 Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
11 Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
12 Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
13 Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
14 Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
15 Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
16 Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
   nodename machine   login    user effective_user
1   host001  x86_64 user001 user002        user002
2   host001  x86_64 user001 user002        user002
3   host001  x86_64 user001 user002        user002
4   host001  x86_64 user001 user002        user002
5   host001  x86_64 user001 user002        user002
6   host001  x86_64 user001 user002        user002
7   host001  x86_64 user001 user002        user002
8   host001  x86_64 user001 user002        user002
9   host001  x86_64 user001 user002        user002
10  host001  x86_64 user001 user002        user002
11  host001  x86_64 user001 user002        user002
12  host001  x86_64 user001 user002        user002
13  host001  x86_64 user001 user002        user002
14  host001  x86_64 user001 user002        user002
15  host001  x86_64 user001 user002        user002
16  host001  x86_64 user001 user002        user002
Number of unique worker PIDs: 16 (as expected)
…
sebsilas commented 5 months ago

Sorry, I submitted this to the wrong repo. I guess I should have submitted it to promises. But maybe you have some insight?

HenrikBengtsson commented 5 months ago

When I try to load a promise in the context of starting up a Shiny app, I get the following error:

Error: ‘inherits(future, "Future")’ is not TRUE

This doesn't happen consistently.

I suspect your parallel worker crashes because of the code you're running. That said, future should have given you a different, more informative error message, but a typo/bug in future causes it to produce this error instead. I've fixed this in the develop branch. Could you please try installing that version:

if (!requireNamespace("remotes", quietly = TRUE)) install.packages("remotes")
remotes::install_github("HenrikBengtsson/future", ref="develop")
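
(Remember to restart your R session afterwards so that the updated version of future is the one that gets loaded.)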

The traceback (see below) is interesting. It suggests my promise is of cluster type, even though I have used plan(multisession).

This is expected, because multisession futures inherit from cluster futures.
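
For illustration, in a fresh R session you would see something along these lines (the exact class vector may differ by version; the point is the ClusterFuture parent class):

library(future)
plan(multisession, workers = 2)

f <- future(42)
class(f)
## e.g. "MultisessionFuture" "ClusterFuture" ... "Future"
inherits(f, "ClusterFuture")
## [1] TRUE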

Thanks

sebsilas commented 5 months ago

Thanks for your response. Yes, the dev version does give a more helpful error now.

Unhandled promise error: MultisessionFuture () failed to checking resolved from cluster RichSOCKnode #1 (PID 58779 on localhost ‘localhost’). The reason reported was ‘Connection to the worker is corrupt’. Post-mortem diagnostic: A process with this PID exists, which suggests that the localhost worker is still alive. The socket connection to the worker of MultisessionFuture future () is lost or corrupted: Connection (connection: index=4, description="<-localhost:11802", class="sockconn", mode="a+b", text="binary", opened="opened", can read="yes", can write="yes", id=1604, raw_id="<pointer: 0x644>") is no longer valid. It differ from the currently registered R connection with the same index 4 (connection: index=4, description="<-localhost:11802", class="sockconn", mode="a+b", text="binary", opened="opened", can read="yes", can write="yes", id=1654, raw_id="<pointer: 0x676>"). As an example, this may happen if base::closeAllConnections() have been called, for instance via base::sys.save.image() which in turn is called if the R session (pid 58516) is forced to terminate

As to why that occurs, I am not sure. Do you?

HenrikBengtsson commented 5 months ago

Thanks. Before troubleshooting further, could you make sure to update your R packages and retry? Several of your packages are outdated, and it would be a waste of time to troubleshoot this only to find out it has already been fixed in one of the packages you use.

sebsilas commented 5 months ago

Fair enough. I've done that and still get the same error. New session info:


R version 4.3.0 (2023-04-21)
Platform: x86_64-apple-darwin20 (64-bit)
Running under: macOS Ventura 13.6.6

Matrix products: default
BLAS:   /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib 
LAPACK: /Library/Frameworks/R.framework/Versions/4.3-x86_64/Resources/lib/libRlapack.dylib;  LAPACK version 3.11.0

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

time zone: Europe/Berlin
tzcode source: internal

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] promises_1.3.0 shiny_1.8.1.1  devtools_2.4.5 usethis_2.2.3 

loaded via a namespace (and not attached):
 [1] jsonlite_1.8.8     miniUI_0.1.1.1     compiler_4.3.0     Rcpp_1.0.12        stringr_1.5.1     
 [6] parallel_4.3.0     jquerylib_0.1.4    later_1.3.2        globals_0.16.3     fastmap_1.2.0     
[11] mime_0.12          R6_2.5.1           htmlwidgets_1.6.4  future_1.33.2-9001 profvis_0.3.8     
[16] bslib_0.7.0        rlang_1.1.3        cachem_1.1.0       stringi_1.8.4      httpuv_1.6.15     
[21] sass_0.4.9         fs_1.6.4           pkgload_1.3.4      memoise_2.0.1      cli_3.6.2         
[26] withr_3.0.0        magrittr_2.0.3     digest_0.6.35      rstudioapi_0.16.0  xtable_1.8-4      
[31] remotes_2.5.0      lifecycle_1.0.4    vctrs_0.6.5        glue_1.7.0         listenv_0.9.1     
[36] urlchecker_1.0.1   codetools_0.2-20   sessioninfo_1.2.2  pkgbuild_1.4.4     parallelly_1.37.1 
[41] purrr_1.0.2        tools_4.3.0        ellipsis_0.3.2     htmltools_0.5.8.1 

*** Package versions
future 1.33.2.9001, parallelly 1.37.1, parallel 4.3.0, globals 0.16.3, listenv 0.9.1

*** Allocations
availableCores():
system 
    16 
availableWorkers():
$system
 [1] "localhost" "localhost" "localhost" "localhost" "localhost" "localhost" "localhost" "localhost"
 [9] "localhost" "localhost" "localhost" "localhost" "localhost" "localhost" "localhost" "localhost"

*** Settings
- future.plan=<not set>
- future.fork.multithreading.enable=<not set>
- future.globals.maxSize=<not set>
- future.globals.onReference=<not set>
- future.resolve.recursive=<not set>
- future.rng.onMisuse=<not set>
- future.wait.timeout=<not set>
- future.wait.interval=<not set>
- future.wait.alpha=<not set>
- future.startup.script=<not set>

*** Backends
Number of workers: 16
List of future strategies:
1. multisession:
   - args: function (..., workers = availableCores(), lazy = FALSE, rscript_libs = .libPaths(), envir = parent.frame())
   - tweaked: FALSE
   - call: future::plan(future::multisession)

*** Basic tests
Main R session details:
    pid     r sysname release
1 62085 4.3.0  Darwin  22.6.0
                                                                                                 version
1 Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
  nodename machine   login    user effective_user
1  host001  x86_64 user001 user002        user002
Worker R session details:
   worker   pid     r sysname release
1       1 62759 4.3.0  Darwin  22.6.0
2       2 62751 4.3.0  Darwin  22.6.0
3       3 62758 4.3.0  Darwin  22.6.0
4       4 62752 4.3.0  Darwin  22.6.0
5       5 62750 4.3.0  Darwin  22.6.0
6       6 62753 4.3.0  Darwin  22.6.0
7       7 62765 4.3.0  Darwin  22.6.0
8       8 62763 4.3.0  Darwin  22.6.0
9       9 62757 4.3.0  Darwin  22.6.0
10     10 62756 4.3.0  Darwin  22.6.0
11     11 62762 4.3.0  Darwin  22.6.0
12     12 62761 4.3.0  Darwin  22.6.0
13     13 62754 4.3.0  Darwin  22.6.0
14     14 62764 4.3.0  Darwin  22.6.0
15     15 62755 4.3.0  Darwin  22.6.0
16     16 62760 4.3.0  Darwin  22.6.0
                                                                                                  version
1  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
2  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
3  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
4  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
5  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
6  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
7  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
8  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
9  Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
10 Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
11 Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
12 Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
13 Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
14 Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
15 Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
16 Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
   nodename machine   login    user effective_user
1   host001  x86_64 user001 user002        user002
2   host001  x86_64 user001 user002        user002
3   host001  x86_64 user001 user002        user002
4   host001  x86_64 user001 user002        user002
5   host001  x86_64 user001 user002        user002
6   host001  x86_64 user001 user002        user002
7   host001  x86_64 user001 user002        user002
8   host001  x86_64 user001 user002        user002
9   host001  x86_64 user001 user002        user002
10  host001  x86_64 user001 user002        user002
11  host001  x86_64 user001 user002        user002
12  host001  x86_64 user001 user002        user002
13  host001  x86_64 user001 user002        user002
14  host001  x86_64 user001 user002        user002
15  host001  x86_64 user001 user002        user002
16  host001  x86_64 user001 user002        user002
Number of unique worker PIDs: 16 (as expected)
sebsilas commented 5 months ago

There was a little bit more information in one of the errors just now:

The total size of the 4 globals exported is 400 bytes. The three largest globals are ‘psychTestR_session_id’ (232 bytes of class ‘character’), ‘test_id’ (56 bytes of class ‘numeric’) and ‘session_id’ (56 bytes of class ‘numeric’)

I'm not sure what the purpose of that message is, but this doesn't seem to be a lot of memory.

HenrikBengtsson commented 5 months ago

That's just a clue, in case there are large objects. It'll save some back and forth when troubleshooting.

See if you can reproduce this with fewer workers, e.g. workers = 2. The goal is to get to a minimal example where this occurs, so we can rule out other things.

What is odd is that something is changing the connection of the main R process, invalidating it, so the main process can no longer talk to the parallel worker. What is the code you're running?
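
For reference, the failure mode that the post-mortem diagnostic describes can be provoked outside Shiny with a sketch like the one below, where closeAllConnections() stands in for whatever is invalidating the socket in your session:

library(future)
plan(multisession, workers = 2)

f <- future(Sys.getpid())
closeAllConnections()  ## e.g. what base::sys.save.image() does if R is forced to terminate
value(f)               ## now fails with a lost/corrupted-connection FutureError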

sebsilas commented 5 months ago

Yes, it still happens with workers = 2.

Unhandled promise error: MultisessionFuture () failed to checking resolved from cluster RichSOCKnode #1 (PID 80608 on localhost ‘localhost’). The reason reported was ‘Connection to the worker is corrupt’. Post-mortem diagnostic: A process with this PID exists, which suggests that the localhost worker is still alive. The socket connection to the worker of MultisessionFuture future () is lost or corrupted: Connection (connection: index=4, description="<-localhost:11709", class="sockconn", mode="a+b", text="binary", opened="opened", can read="yes", can write="yes", id=700, raw_id="<pointer: 0x2bc>") is no longer valid. It differ from the currently registered R connection with the same index 4 (connection: index=4, description="<-localhost:11709", class="sockconn", mode="a+b", text="binary", opened="opened", can read="yes", can write="yes", id=742, raw_id="<pointer: 0x2e6>"). As an example, this may happen if base::closeAllConnections() have been called, for instance via base::sys.save.image() which in turn is called if the R session (pid 80495) is forced to terminate

How was your question meant to finish? "What is the code you're running..."

HenrikBengtsson commented 5 months ago

Are you saying the code in https://github.com/HenrikBengtsson/future/issues/723 produces this error? If so, then there must be something else going on too, because I cannot see how that could happen. Do you have a custom ~/.Rprofile file? Can you reproduce it when you run R via R --vanilla?

sebsilas commented 5 months ago

Are you saying the code in #723 produces this error? If so, then there must be something else going on too, because I cannot see how that could happen. Do you have a custom ~/.Rprofile file? Can you reproduce it when you run R via R --vanilla?

Hi @HenrikBengtsson, yes, that is the minimal code that reproduces the error, which is also why I am extremely confused!

I do not have a custom ~/.Rprofile file, no.

I was also wondering whether this is an RStudio issue. However, the issue occurred with a 2023 version of RStudio and persisted when I updated to the latest version.

tdeenes commented 5 months ago

@sebsilas Did you try to run the MRE in a plain R terminal (so not in RStudio)?

sebsilas commented 5 months ago

@sebsilas Did you try to run the MRE in a plain R terminal (so not in RStudio)?

After trying multiple times, I haven't been able to reproduce the exact error.

When I load the app, though, I get this error, which seems to correspond to the point where the app crashes:

sh: rm: command not found

So this could be it; I will try to resolve it.

sebsilas commented 5 months ago

Well, updating the PATH variable to include /bin resolves the error in the non-RStudio terminal, but the same fix in RStudio doesn't prevent the failed-promise error.
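
For reference, one way to check and patch this from within an R session (just a sketch; the permanent fix belongs in ~/.Renviron or the shell profile):

Sys.which("rm")   ## "" if rm is not found on the PATH
Sys.setenv(PATH = paste("/bin", Sys.getenv("PATH"), sep = ":"))
Sys.which("rm")   ## should now resolve, e.g. "/bin/rm"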

HenrikBengtsson commented 5 months ago

Thank you. I can reproduce this by taking your original example and reloading the Shiny page frequently enough. It has to do with plan(multisession) being called within the server function. That causes new parallel workers to be set up each time and the old ones to be invalidated(*). Set it outside, and you should be good:

library(shiny)
library(promises)
future::plan(future::multisession)

This is what https://rstudio.github.io/promises/articles/promises_06_shiny.html uses.

(*) The Futureverse is designed to detect when you set the same plan() that is already in place. When it detects that, it should simply ignore the repeated request. However, something in this case prevents it from detecting that it is the same setting. I don't know why it fails here.
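
Here is a sketch of what the intended de-duplication looks like in practice: if the repeated plan() call is ignored as designed, the worker pool, and hence the worker PIDs, should be reused rather than recreated:

library(future)

plan(multisession, workers = 2)
fs <- lapply(1:2, function(i) future(Sys.getpid()))
pids1 <- sort(vapply(fs, value, integer(1)))

plan(multisession, workers = 2)  ## identical plan requested again
fs <- lapply(1:2, function(i) future(Sys.getpid()))
pids2 <- sort(vapply(fs, value, integer(1)))

identical(pids1, pids2)  ## TRUE if the existing workers were reused, as intended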

I'll look into it, but I cannot promise anything soon.

sebsilas commented 5 months ago

Thanks @HenrikBengtsson, but I've already tried this and it still produces the error.

HenrikBengtsson commented 5 months ago

Here's the script that I used to replicate this. I can reproduce it if I set the plan within server() and keep reloading the Shiny page. It does not happen if the plan is set upfront. I'm running this from plain R, not RStudio. I'm using R 4.4.0 on Linux, but I doubt that makes a difference here.

library(shiny)
library(promises)
library(future)

## Set the plan here, at the top level, and it'll work
plan(multisession, workers = 2)

ui <- fluidPage(
  titlePanel("Promise fail")
)

server <- function(input, output, session) {
  ## Not here! Because that will trigger the error
  ## plan(multisession, workers = 2)

  res <- promises::future_promise({
    Sys.sleep(5)
    "test_res"
  }, seed = NULL) %...>% (function(result) {
    print(result)
  })
}

# Run the application
app <- shinyApp(ui = ui, server = server)
print(app)
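
To try this yourself, save it as, say, app.R, launch it with Rscript app.R (or source it in a plain R session), and then reload the browser page a number of times.
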
sebsilas commented 5 months ago

Ok, but just to confirm, I do still get this in RStudio with your script:

https://github.com/HenrikBengtsson/future/assets/39628712/a527d37d-acf8-4371-93f7-1879a65cc19d

HenrikBengtsson commented 5 months ago

Uploading futures_debug_2.mov…

?!? That is definitely not something the Futureverse is producing. It looks like you have problems with your RStudio setup. I would start by verifying that it works in vanilla R, so you have a working baseline to compare against.

HenrikBengtsson commented 4 months ago

Uploading futures_debug_2.mov…

I see that this is now a movie that you uploaded to GitHub. I guess I looked at it too soon, and I literally saw the text "Uploading futures_debug_2.mov…" here in the issue tracker.