mhahsler / pomdp

R package for Partially Observable Markov Decision Processes

Getting an error when finite horizon used #4

Closed meeheal closed 4 years ago

meeheal commented 4 years ago

Hi Michael,

I'm writing with another issue. I am now using the most up-to-date version of R available for Windows, with RStudio.

I'll upload a file with a minimum working example along with this description.

When I run the source code that creates my POMDP object "POMDP_Min" for this minimum working example, I can then run code from the console, for example sol_POMDP_Inf <- solve_POMDP(POMDP_Min), which works fine. I can then view the resulting policy graph (which is so great, by the way). But I am also interested in viewing the finite-horizon policy as a tree, and I get an error when I try: sol_POMDP_Fin <- solve_POMDP(POMDP_Min, discount = 1, horizon = 7). I have also tried writing a discount of 1 and a horizon of 7 into the POMDP object itself, and I have tried a discount less than 1, but I still get the error. So it seems to be the finite horizon that causes it.

Based on my understanding of POMDPs, if the infinite-horizon problem is solved with no issues, the finite-horizon problem should also be solvable, although I could be mistaken about this.

Thanks for your help! Emile

Attachment: minwexample_fin - Copy.R.txt

mhahsler commented 4 years ago

This was a bug in the handling of states specified as index numbers. The version on GitHub resolves the issue.
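For context, states in a pomdp model can be referred to either by name or by index number; the bug affected the index form. The sketch below is a hypothetical minimal model (not the reporter's attached example) illustrating both styles, assuming the package's POMDP() constructor and the R_() helper as documented; exact argument names may differ between package versions.

```r
library(pomdp)

# Hypothetical two-state model; rewards refer to states by index (1, 2),
# the specification style the bug affected.
m <- POMDP(
  name = "index-state example",
  states = c("good", "bad"),
  actions = c("stay", "reset"),
  observations = c("ok", "alarm"),
  transition_prob = list(
    "stay"  = "identity",            # keyword shorthand supported by pomdp
    "reset" = rbind(c(1, 0),         # rows: start state; cols: end state
                    c(1, 0))
  ),
  observation_prob = list(
    "stay"  = rbind(c(0.8, 0.2),
                    c(0.3, 0.7)),
    "reset" = "uniform"
  ),
  reward = rbind(
    R_(action = "stay",  start.state = 1, value =  1),   # state by index
    R_(action = "stay",  start.state = 2, value = -1),
    R_(action = "reset", value = -0.5)
  ),
  discount = 1,
  horizon = 7
)

# Finite-horizon solve, as in the issue report.
sol <- solve_POMDP(m)
```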

sol_POMDP_Fin <- solve_POMDP(POMDP_Min, discount = 1, horizon = 7, method = "enum")
sol_POMDP_Fin
  Solved POMDP model: Minimum working example with error 
  solution method: enum 
  horizon: 7 
  converged: FALSE 
  total expected reward (for start probabilities): 3.549981 

A new release will be sent to CRAN after a few more changes are made.
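Once the finite-horizon solve succeeds, the per-epoch policy can be inspected. A sketch, assuming the solution object sol_POMDP_Fin from the call above and the package's policy() and plot_policy_graph() functions; for an unconverged finite-horizon solution, policy() returns one element per epoch of the horizon.

```r
# Inspect the finite-horizon solution epoch by epoch.
pol <- policy(sol_POMDP_Fin)
length(pol)        # one entry per epoch of the 7-step horizon
head(pol[[1]])     # alpha vectors and actions for the first epoch

# Plot the policy; for a finite-horizon solution this is the tree view
# the reporter was after.
plot_policy_graph(sol_POMDP_Fin)
```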