Open · dpastoor opened this issue 4 years ago
I always imagined the library would look something like this:
```r
repo <- function(org, repo) {
  query <- glue::glue('query customQuery {{
  repository(owner: "{org}", name: "{repo}") {{')
  return(query)
}

issues <- function(.data, ...) {
  query <- '
  issues(last: 100) {
    nodes {
      title
      body
      author {
        login
      }
      number
      milestone {
        title
        number
      }
      state
      closed
      closedAt
      resourcePath
      url
      lastEditedAt
      editor {
        login
      }
      publishedAt
    }
    pageInfo {
      hasPreviousPage
      startCursor
    }
  }'
  new_query <- paste0(.data, query)
  class(new_query) <- c(class(new_query), "issues")
  return(new_query)
}

run <- function(.data, ...) {
  # close the repository { and customQuery { opened by repo()
  query <- paste0(.data, ' } }')
  .api_url <- ghpm::api_url()
  # run the GraphQL query against the API and return the parsed results
  gh::gh("POST /graphql", query = query, .api_url = .api_url,
         .token = ghpm::get_token(.api_url))
}
```
And you could call it via:
```r
repo(org = "metrumresearchgroup", repo = "rbabylon") %>% issues() %>% run()
```
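To make the idea concrete, here is a minimal stand-in sketch (not the package functions, and with no API call) showing the query string such a pipeline would assemble before `run()` posts it; the field selection is trimmed for brevity:

```r
# Stand-ins for the builder functions above, using only base R,
# so the assembled query string can be inspected directly.
repo <- function(org, repo) {
  paste0('query customQuery { repository(owner: "', org,
         '", name: "', repo, '") {')
}
issues <- function(.data, ...) {
  paste0(.data, ' issues(last: 100) { nodes { title number state } }')
}
# Stand-in for run(): just close the open braces instead of posting.
build <- function(.data) paste0(.data, ' } }')

q <- build(issues(repo("metrumresearchgroup", "rbabylon")))
cat(q)
```

Each step appends its own subtree to the string, so the query only ever contains what the caller actually asked for.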
So you could "build" out the query based on the info you need, keeping the calls syntactically simple instead of maintaining a distinct function for every specific query. Doing this would require a big overhaul, though.
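One payoff of tagging the query with an S3 class (as `issues()` does above) is that result handling can use method dispatch. A small illustrative sketch, with hypothetical `unpack` generic and method names:

```r
# S3 dispatch on the class tag attached by the query builder: the same
# generic can route each result to the right unpacking logic.
unpack <- function(x, ...) UseMethod("unpack")
unpack.issues  <- function(x, ...) "unpack the issues subtree here"
unpack.default <- function(x, ...) "generic unpacking"

# A query built by issues() would carry class c("character", "issues")
q <- structure("query ...", class = c("character", "issues"))
unpack(q)
```

Dispatch walks the class vector, finds no `unpack.character`, and lands on `unpack.issues`.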
You could then have appropriate fragment parsers. For example, an issue fragment would carry the core fields relevant to an issue. Any query that nests issues could then include that single fragment, paired with one function that handles unpacking that part of the response tree.
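A rough sketch of that pairing, where the fragment fields follow the GitHub GraphQL `Issue` type but the helper names (`issue_fragment`, `unpack_issue`) are purely illustrative:

```r
# A shared GraphQL fragment that any query nesting issues could include.
issue_fragment <- '
fragment issueFields on Issue {
  title
  number
  state
  author { login }
}'

# The matching unpacker: turn one issue node (a nested list, as parsed
# from the JSON response) into a flat one-row data.frame.
unpack_issue <- function(node) {
  data.frame(
    title  = node$title,
    number = node$number,
    state  = node$state,
    author = node$author$login,
    stringsAsFactors = FALSE
  )
}

# Example node shaped like a parsed response subtree
node <- list(title = "Pipe-style query builder", number = 1L,
             state = "OPEN", author = list(login = "dpastoor"))
unpack_issue(node)
```

Write the fragment once, include it wherever issues appear, and every caller reuses the same unpacker for that part of the tree.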