thereseanders opened 5 years ago
Actually, is the limit 5,000 after all, since all requests in ghclass
are user-to-server, as opposed to server-to-server, requests? https://developer.github.com/apps/building-github-apps/understanding-rate-limits-for-github-apps/
Updated the figure with logged API calls on the y-axis and more detail.
Places to reduce API calls:

- `peer_add_file_rev()`
- `peer_assign()`
- `peer_add_file_aut()`
- `peer_return()`
- `repo_mirror_file()` (note: `source_files` and `target_files` could perhaps be passed in from a higher-level function to reduce API calls)

The new version of `peer_assign()` and `peer_return()` incorporates the functionality of `peer_add_file_aut()` and `peer_add_file_rev()`, respectively, and reduces overall API calls by tracking files. 400 students, 2 reviewers, and 2 files (1 file for `peer_return()`) is now feasible.
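The "tracking files" idea can be illustrated with a small cache sketch: fetch each repo's file listing once and reuse it for every subsequent per-file operation, so n operations on a repo cost one listing call instead of n. This is a generic Python sketch of the caching pattern, not ghclass's actual implementation; all names are illustrative.

```python
class FileTracker:
    """Cache per-repo file listings so repeated operations reuse
    one API call instead of re-fetching the listing per file."""

    def __init__(self, list_files):
        self._list_files = list_files  # function: repo -> {path: sha}
        self._cache = {}
        self.api_calls = 0  # count of real API hits, for illustration

    def files(self, repo):
        if repo not in self._cache:
            self._cache[repo] = self._list_files(repo)
            self.api_calls += 1  # only the first lookup hits the API
        return self._cache[repo]

# Hypothetical listing function standing in for a repo_files()-style call.
tracker = FileTracker(lambda repo: {"hw1.Rmd": "abc123"})
for _ in range(5):
    tracker.files("org/student-01")
print(tracker.api_calls)  # 1: five lookups, one API call
```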
Nice!
The maximum number of API calls per hour for an organization with > 20 repos and > 20 users is 12,500.
There are a number of realistic scenarios in which the peer review functions would exceed this limit.
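A back-of-the-envelope estimate makes the problem concrete. The per-file call counts below are hypothetical placeholders, not measurements of ghclass; the point is only that multiplying students × reviewers × files by even a modest per-file cost approaches or exceeds the 12,500/hour ceiling.

```python
def estimated_calls(students, reviewers, files, calls_per_file):
    """Rough model: each student's files are copied to each reviewer's
    repo, and each copy costs calls_per_file API requests (the exact
    per-file cost is a guess, not a ghclass measurement)."""
    return students * reviewers * files * calls_per_file

RATE_LIMIT = 12_500  # max calls/hour for a large organization

# An unoptimized flow (e.g. get + put + bookkeeping per file):
naive = estimated_calls(students=400, reviewers=2, files=2, calls_per_file=8)
# After reducing redundant reads:
reduced = estimated_calls(students=400, reviewers=2, files=2, calls_per_file=3)

print(naive, naive > RATE_LIMIT)      # 12800 True
print(reduced, reduced > RATE_LIMIT)  # 4800 False
```

Under these assumed per-file costs, trimming a few calls per file is the difference between hitting the limit and finishing with room to spare.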
Suggestion for reducing calls:
`repo_put_file()` internally calls `repo_get_file()` to get the `sha` of existing files. Many dependent functions, for example `repo_add_file()`, already call `repo_files()` and could extract the `sha` from there. For `repo_put_file()`, I suggest a new set of functions, `repo_put_file_exists()` and `github_api_repo_put_file_exists()`, that expect a `sha` parameter and do not check whether a file exists. There might also be more elegant ways to do this within the existing functions.