jantman / awslimitchecker

A script and python package to check your AWS service limits and usage via boto3.
http://awslimitchecker.readthedocs.org/
GNU Affero General Public License v3.0

only query sgs owned by the account #535

Closed robpickerill closed 3 years ago

robpickerill commented 3 years ago

This PR updates the security group counts to filter on the owner account ID, in reference to https://github.com/jantman/awslimitchecker/issues/518
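The change described amounts to passing an `owner-id` filter to EC2's `DescribeSecurityGroups` call so that shared groups owned by other accounts are not counted against the limit. The sketch below illustrates that approach; the helper names (`owned_sg_filter`, `count_security_groups`) are hypothetical and not the PR's actual code, but the `owner-id` filter is a real `describe_security_groups` filter in boto3.

```python
def owned_sg_filter(account_id):
    """Build the DescribeSecurityGroups Filters list that restricts
    results to groups owned by the given account ID.
    (Hypothetical helper, for illustration only.)"""
    return [{"Name": "owner-id", "Values": [account_id]}]


def count_security_groups(ec2_client, account_id):
    """Count security groups owned by account_id, paginating through
    all pages of DescribeSecurityGroups results."""
    count = 0
    paginator = ec2_client.get_paginator("describe_security_groups")
    for page in paginator.paginate(Filters=owned_sg_filter(account_id)):
        count += len(page["SecurityGroups"])
    return count
```

In real use `ec2_client` would be `boto3.client("ec2")`; the client is passed in as a parameter here so the counting logic can be exercised without AWS credentials.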

codecov-commenter commented 3 years ago

Codecov Report

Merging #535 (e813709) into master (8921393) will not change coverage. The diff coverage is 100.00%.


@@            Coverage Diff            @@
##            master      #535   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files           42        42           
  Lines         3021      3021           
  Branches       451       451           
=========================================
  Hits          3021      3021           
Impacted Files                     Coverage Δ
awslimitchecker/services/ec2.py    100.00% <100.00%> (ø)


jantman commented 3 years ago

This has been released in 12.0.0, which is now live on PyPI and Docker Hub. Thank you so much, and apologies for the delay!