It would be helpful when analyzing logs to be able to count on a succinct identifier for the rule set the daemon was running at any given time.
There is already a line-by-line dump of the rules when a file is loaded and logging is at debug level, but this is neither concise nor very reliable, since it requires debug-level logging. Logging a single statement at info level instead, i.e. on the same level as other fapolicyd state messages, would be better.
Simply logging

    fapolicyd rule identity is X

where X is a hash of the compiled rules file would be enough. This creates a content-based identifier for fapolicyd rules, one that identifies rule sets in the same way a Git SHA identifies a commit.
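As a rough sketch of what producing such an identity could look like (in Python for illustration; the hash input, whether the raw rules file or its compiled form, and the choice of SHA-256 are assumptions, not a prescribed design):

```python
import hashlib

def rule_identity(path: str) -> str:
    """Return a short, content-based identifier for a rules file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash the file in chunks so large rule sets need not fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    # Abbreviate like a short Git SHA; the full digest would work as well.
    return h.hexdigest()[:12]
```

The daemon would then log this value once at startup (and after each reload), e.g. `fapolicyd rule identity is 3f2a9c1b0d4e`.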
Relating a hash back to the actual rule-set content is not considered fapolicyd's responsibility. Third-party tools may use this new functionality to do exactly that, by maintaining a history of rule file backups that can be cross-referenced when analyzing past logs. The identifier could also support a dirty-flag check, e.g. comparing the rules the daemon is executing against what is currently contained in the rules file on disk.
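The dirty-flag idea can be sketched as follows (again in Python and again assuming a SHA-256 hash over the rules file; the helper names here are hypothetical):

```python
import hashlib

def file_hash(path: str) -> str:
    """SHA-256 of a file's contents, used as a rule-set identity."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def rules_dirty(loaded_identity: str, rules_path: str) -> bool:
    """True if the rules on disk no longer match what the daemon loaded."""
    return file_hash(rules_path) != loaded_identity
```

An admin tool could compare the identity the daemon logged at load time against the current on-disk file and warn that a reload is needed.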
The solution here has to consider #243 and support runtime reloads as well.