-
In the new optimized evaluation of allowed connections, much of the work is done in preprocessing, that is, when building network configs.
This may **increase** runtime for certain queries, which do not…
-
With RFD#0004, Hipcheck can handle an unlimited number of analyses, but we lose the hard-coded semantic information that made the report human-readable (i.e. the knowledge of concern format / output u…
-
# Description #
This epic is a larger feature to continue evolving the ScubaGear AAD conditional access policy evaluation logic. The work is to improve the AAD secure configuration baselines by enhanc…
-
### Current Behavior
Currently the API does not offer - as far as I can see - a way to trigger a reevaluation of the policy of a certain project or component.
In our situation we have a side proje…
-
**Describe the bug**
When the policy result is RoutingPolicy_PolicyResultType_NEXT_STATEMENT in the last statement of a policy, policy evaluation should continue to the next policy. If no policy exists, ev…
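A minimal sketch of the fall-through semantics described above (the enum and function names here are hypothetical stand-ins, not the project's actual API):

```python
from enum import Enum

class PolicyResult(Enum):
    # Hypothetical stand-ins for the RoutingPolicy_PolicyResultType_* values.
    NEXT_STATEMENT = 1
    ACCEPT = 2
    REJECT = 3

def evaluate(policies):
    """Evaluate a chain of policies, each given as a list of statement results.

    NEXT_STATEMENT falls through to the next statement; if it occurs in the
    last statement of a policy, evaluation continues with the next policy.
    If no policy remains, evaluation ends with no decision (None).
    """
    for policy in policies:
        for result in policy:
            if result is PolicyResult.NEXT_STATEMENT:
                continue  # fall through to the next statement (or next policy)
            return result  # a terminal result ends evaluation
    return None  # last statement was NEXT_STATEMENT and no policy followed
```

With this sketch, `evaluate([[PolicyResult.NEXT_STATEMENT], [PolicyResult.ACCEPT]])` reaches the second policy, matching the expected behavior in the report.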
-
[How to file a helpful issue](https://www.qubes-os.org/doc/issue-tracking/)
### The problem you're addressing (if any)
Currently qrexec-policy-daemon fetches info about all the qubes in the syst…
-
### Describe what should be investigated or refactored
In thinking about general workflows for best practices and STIGs, Lula may want to support the ability to return zero resources from a domain…
-
After policy evaluation is merged (but filing now as a reminder to myself), signature verification in policy evaluation, with its temporary directory creation, will take about a second per case. Both…
-
In the policy evaluation and policy iteration solution.ipynb, why is the value function calculated with the equation below?
v += action_prob * prob * (reward + discount_factor * V[next_state])
Shou…
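For context, the questioned line is the Bellman expectation backup for a fixed policy, v(s) = Σ_a π(a|s) Σ_{s'} p(s'|s, a) [r + γ v(s')]. A self-contained sketch on a toy two-state MDP (not the notebook's environment; `P` follows the Gym-style `env.P` layout the line suggests):

```python
import numpy as np

# Toy MDP: 2 states, 2 actions.
# P[s][a] = [(prob, next_state, reward, done), ...], Gym-style transitions.
P = {
    0: {0: [(1.0, 0, 0.0, False)], 1: [(1.0, 1, 1.0, False)]},
    1: {0: [(1.0, 0, 0.0, False)], 1: [(1.0, 1, 0.0, True)]},
}
policy = np.array([[0.5, 0.5], [0.5, 0.5]])  # pi(a|s), uniform
discount_factor = 0.9
V = np.zeros(2)

# One in-place sweep of the Bellman expectation backup,
# mirroring the questioned line from the notebook:
for s in range(2):
    v = 0.0
    for a, action_prob in enumerate(policy[s]):
        for prob, next_state, reward, done in P[s][a]:
            v += action_prob * prob * (reward + discount_factor * V[next_state])
    V[s] = v
```

Each term weights the one-step return by both the policy's probability of the action and the environment's transition probability, which is why the two factors `action_prob * prob` multiply together.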
-
```python
def policy_eval(policy, env, discount_factor=1.0, theta=0.00001):
    v = np.zeros(shape=(env.nS, 1)) # value vector indexed by state
R = np.zeros(shape=(env.nS, 1)) # reward …