DaftofHS opened this issue 4 weeks ago
@DaftofHS You can change the logging level to control which logging messages get shown. To silence these warnings, run:
In [4]: from pgmpy.global_vars import logger
In [5]: import logging
In [6]: logger.setLevel(logging.ERROR)
Hello, when I ran it there was no output; it just kept adjusting a few numbers over and over (in an infinite loop). Although the code no longer displays anything, it seems to keep looping internally. There is still no output!
@DaftofHS The approximate inference in pgmpy is based on sampling, which can be quite slow if you generate a lot of samples. You can trade off precision of the results against runtime by varying the n_samples parameter of the query method. If it's possible for you to share your code, I might be able to help better.
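For illustration, a minimal sketch of the n_samples trade-off on a toy two-node network. The variable names here are made up, and the snippet assumes the ApproxInference API of recent pgmpy releases:

```python
from pgmpy.models import BayesianNetwork  # DiscreteBayesianNetwork in newer releases
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import ApproxInference

# Toy stand-in for the real model: one binary fault and one binary sensor.
model = BayesianNetwork([("fault", "sensor")])
model.add_cpds(
    TabularCPD("fault", 2, [[0.9], [0.1]]),
    TabularCPD("sensor", 2, [[0.8, 0.3], [0.2, 0.7]],
               evidence=["fault"], evidence_card=[2]),
)

infer = ApproxInference(model)
# Fewer samples finish faster but give noisier estimates; more samples
# are slower but more precise.
result = infer.query(variables=["fault"], evidence={"sensor": 1},
                     n_samples=1_000)
print(result)
```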
test241101.zip Hello, I have uploaded the file; it shows the values that keep being adjusted in a loop when using approximate inference.
Hello, I think I know why it keeps looping. The query function uses simulate to generate samples, and in those samples multiple faults can occur at the same time (that is, several fault variables have state 1 in a single sample). In my data, however, faults only occur one at a time (in any sample, only one fault variable is 1). I believe this is the cause, but I don't yet know how to modify it.
@DaftofHS I had a look at the code that you shared. As the inference query you are trying to perform has a lot of evidence variables, approximate inference would be extremely slow (as it would try to do rejection sampling). I would suggest trying DBNInference instead of ApproxInference. However, to not run into memory issues with DBNInference, you will need to reduce the complexity of your model, both in terms of the number of parents of the variables and the number of states.
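For what it's worth, here is a hedged sketch of what switching to DBNInference looks like, on a toy dynamic network (the structure and variable names below are stand-ins for the real, much larger model):

```python
from pgmpy.models import DynamicBayesianNetwork as DBN
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import DBNInference

# Toy 2-slice model: a binary fault that persists over time plus a sensor.
dbn = DBN()
dbn.add_edges_from([
    (("fault", 0), ("sensor", 0)),  # intra-slice edge (copied to slice 1)
    (("fault", 0), ("fault", 1)),   # inter-slice (transition) edge
])
dbn.add_cpds(
    TabularCPD(("fault", 0), 2, [[0.9], [0.1]]),
    TabularCPD(("sensor", 0), 2, [[0.8, 0.3], [0.2, 0.7]],
               evidence=[("fault", 0)], evidence_card=[2]),
    TabularCPD(("fault", 1), 2, [[0.95, 0.2], [0.05, 0.8]],
               evidence=[("fault", 0)], evidence_card=[2]),
)
dbn.initialize_initial_state()  # fills in the missing slice-1 CPDs

infer = DBNInference(dbn)
# Exact query: fault at slice 1 given the sensor reading at slice 0.
result = infer.query([("fault", 1)], evidence={("sensor", 0): 1})
print(result[("fault", 1)])
```

The junction tree that DBNInference builds grows quickly with the number of parents and states per variable, which is why reducing model complexity matters here.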
Hello, I would like to ask: when using DBNInference for dynamic Bayesian inference, it reports insufficient memory. In what format are the constructed conditional probability tables stored? Can that be changed to float16, and how? Is there any other way I can reduce memory?
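(For context on what is being asked: pgmpy stores CPD probabilities in a NumPy array, float64 by default. Below is a rough, unsupported sketch of inspecting and casting that array in place; whether such a cast actually survives DBNInference, or helps with memory in practice, is not confirmed here.)

```python
import numpy as np
from pgmpy.factors.discrete import TabularCPD

cpd = TabularCPD("fault", 2, [[0.9], [0.1]])
print(cpd.values.dtype)  # float64 on a default install

# Untested/unsupported: cast the table in place (float16 uses 2 bytes
# per entry versus 8 for float64), at the cost of precision.
cpd.values = cpd.values.astype(np.float16)
print(cpd.values.dtype)  # float16
```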
Subject of the issue
Using ApproxInference for inference, I keep getting these warnings in a loop: WARNING:pgmpy:Probability values don't exactly sum to 1. Differ by: 0.000244140625. Adjusting values. This happens repeatedly with varying differences, such as -0.000244140625, 0.0001220703125, and so on. How can I fix this precision issue? Is there a way to ignore these precision problems?
Steps to reproduce
I am working with a dataset that has 18 parameters under 8 operating conditions, where each parameter has 2 states. Run inference with ApproxInference and note the repeated warnings about probability sums.
Expected behaviour
The query should finish and return a result. (Attachment: query-result)
Actual behaviour
WARNING:pgmpy:Probability values don't exactly sum to 1. Differ by: 6.103515625e-05. Adjusting values.
WARNING:pgmpy:Probability values don't exactly sum to 1. Differ by: 6.103515625e-05. Adjusting values.
WARNING:pgmpy:Probability values don't exactly sum to 1. Differ by: -6.103515625e-05. Adjusting values.
WARNING:pgmpy:Probability values don't exactly sum to 1. Differ by: 0.000244140625. Adjusting values.
WARNING:pgmpy:Probability values don't exactly sum to 1. Differ by: -0.000244140625. Adjusting values.
WARNING:pgmpy:Probability values don't exactly sum to 1. Differ by: 0.0001220703125. Adjusting values.