Is your feature request related to a problem? Please describe.
SymPy utilises Python's exec() under the hood and does not sanitise its input. When loading a model file that has been shared, malicious code could be inserted into the propensity function. Ideally we should sanitise all input, to future-proof the code and reduce the security risk.
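For context, SymPy's own sympify() documentation warns that it "uses eval, and thus shouldn't be used on unsanitized input". A minimal, deliberately benign illustration of the risk (exact behaviour may vary between SymPy versions):

```python
from sympy import sympify

# Attribute access survives sympify's tokenizer untouched, so a shared
# propensity string can run real Python.  Benign here, but the same
# channel would admit destructive calls from a hostile model file.
result = sympify("(1).__class__.__mro__")
print(result)  # a plain Python tuple of classes, not a symbolic expression
```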
Describe the solution you'd like
All non-useful characters are stripped from strings, possibly using a regex. The assignment operator = is removed (only the comparisons ==, >=, <= and != would be permitted). Function calls will have to be allowed so that mathematical functions still work, but each specific call should be checked against a whitelist.
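A minimal sketch of that validation step, assuming propensities only ever contain numbers, symbol names, arithmetic, comparisons and whitelisted maths calls (ALLOWED_FUNCTIONS and sanitise_propensity are hypothetical names, not existing pyRBM API):

```python
import re

# Hypothetical allow-list: extend with whatever maths functions
# pyRBM propensities legitimately need.
ALLOWED_FUNCTIONS = {"sin", "cos", "exp", "log", "sqrt", "Min", "Max"}

# Only identifiers, digits, whitespace, arithmetic operators,
# comparison characters, brackets, commas and decimal points may appear.
_ALLOWED_CHARS = re.compile(r"[\w\s+\-*/%(),.<>=!]*")

def sanitise_propensity(expr: str) -> str:
    """Validate a propensity string before it is handed to sympy."""
    if not _ALLOWED_CHARS.fullmatch(expr):
        raise ValueError(f"disallowed characters in propensity: {expr!r}")

    # '=' is only legal as part of ==, !=, <= or >= -- never assignment.
    if re.search(r"(?<![=<>!])=(?!=)", expr):
        raise ValueError(f"assignment is not permitted: {expr!r}")

    # A '.' touching an identifier character is attribute access;
    # a '.' between digits is just part of a float literal.
    if re.search(r"[A-Za-z_]\s*\.|\.\s*[A-Za-z_]", expr):
        raise ValueError(f"attribute access is not permitted: {expr!r}")

    # Every name immediately followed by '(' is a call; it must be whitelisted.
    for match in re.finditer(r"([A-Za-z_]\w*)\s*\(", expr):
        if match.group(1) not in ALLOWED_FUNCTIONS:
            raise ValueError(f"function {match.group(1)!r} is not permitted")

    return expr


print(sanitise_propensity("k1*sin(x) + 2.5"))   # accepted
# sanitise_propensity("__import__('os')")       # rejected: quotes + unlisted call
```

This sketch rejects rather than strips, since silently removing characters can change the meaning of an otherwise valid expression; each propensity string would pass through the check at model-load time, before sympy ever sees it.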
Describe alternatives you've considered
It is possible to do nothing and leave a warning telling the end user to check their model files manually. This would be inefficient for the user, and this use case is restricted enough that we can limit the set of characters available to the end user of pyRBM.