sorin-ionescu opened 12 years ago
What would the logic be for when and how often to sync? git works because you explicitly ask it to fetch/pull revisions. Something like this?

```
$ ghi fetch
$ ghi list --no-fetch
```
Yes, `ghi pull`/`ghi push` could work. Though perhaps all commands should automatically fetch and push if there is an internet connection, and cache for later otherwise.
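The "cache for later" half of that idea could look something like the sketch below: a cheap connectivity probe plus an on-disk queue of pending writes. Everything here (the queue path, the probe, the `enqueue` helper) is a hypothetical illustration, not part of ghi.

```ruby
require "json"
require "socket"
require "fileutils"

# Hypothetical location for the offline write queue.
QUEUE_PATH = File.expand_path("~/.ghi/queue.json")

# Cheap connectivity check: can we open a TCP connection to the API host?
def online?(host = "api.github.com", port = 443, timeout = 2)
  Socket.tcp(host, port, connect_timeout: timeout) { true }
rescue SystemCallError, SocketError
  false
end

# Append a pending write (e.g. a comment or a close) to an on-disk queue,
# to be replayed the next time we are online. Returns the queue length.
def enqueue(action, path = QUEUE_PATH)
  FileUtils.mkdir_p(File.dirname(path))
  queue = File.exist?(path) ? JSON.parse(File.read(path)) : []
  queue << action
  File.write(path, JSON.pretty_generate(queue))
  queue.size
end
```

On startup, a command would drain the queue if `online?` returns true, and otherwise append to it and carry on against the local cache.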
Hi, I'm one of the maintainers of Ditz. I would love for ditz to sync with github issues. However, I am unlikely to write that plugin anytime soon.
I would absolutely love to see this feature as it would actually be more useful for me than my current Ditz workflow. Ditz is great because it allows every dev to have a local version of the project's issue list. Ditz is terrible because no one but developers can add an issue. Github issues are great because anyone can add issues. If you can cache your github issues locally that would be the best of both worlds.
Please let me know if I can assist with advice on how Ditz works and solves these problems - I cannot devote much time to actually contributing code however.
@mattkatz I've played with Ditz in the past but don't currently use it in my workflow, so I don't really plan on tackling that anytime soon. I'm definitely open to any contributions, though, if you think it would be easier to use `ghi` as a foundation rather than a Ditz plugin from scratch.
I've forked on the off-chance I have time to try it. Low probability, but you never know.
GitHub going down quite frequently is the best reason for caching issues and pull requests locally.
My use case is that I do most of my development in areas with no internet access: subways, planes, etc.
I have the same use case as @mattkatz. For now I use https://github.com/zauberlabs/github-issues-backup to back up my GitHub issues, but the way they're saved is not very browsable (plain JSON files). However, it works well for backup purposes. If ghi could browse/parse files stored by github-issues-backup, that would be a great first step towards offline use.
It's written in Ruby too and exposes the same JSON structure as GitHub's API, which ghi uses. An example of backed-up issues can be found at https://github.com/nodiscc/issues-backup
Hope it helps.
Like @mattkatz and @ghost: while syncing back is a hard problem, my main issue is that I cannot read issues when I have no internet access.
So a read-only cache would be quite fine for me. Bonus points if I can annotate tickets with notes reminding me to update them.
Ping. Any update on this? Is offline support planned?
I think it's a good idea, but it's not on my roadmap. If anyone is thinking of tackling it and wants input, please let me know!
I've been thinking about it for a while, but I still have no free time to work on it.
I'll send a pull request as soon as possible.
Any progress on this important improvement?
@s1n4 any update on this? I’m in the same situation right now, I’d love to make a pull request but have really no free time.
no update. unfortunately I have no time to hack on it, but @bfontaine it would be cool if we could work on this issue together.
@s1n4 & @bfontaine - I'd like to help, warning: I'm new to Ruby as of last weekend.
Time-wise, I'm a master's student with a bunch of deadlines. So realistically I'll be procrastinating and closing old GitHub issues in my lab's repos for another few weeks before those deadlines start to sink in.
I organize everything with GitHub issues, and I commute to school daily on the train, an hour each way without internet. So I would really like to see this functionality and would like to help make it happen.
@moriarty You can store arbitrary data in the .git directory. Whether you want to use a Git format or another database format is up to you, but that is where it should be stored, so that one doesn't have to deal with .gitignore unless you want to track issues inside of the repository for some reason.
You'll have to be familiar with the GitHub issues/pull request API. You'll have to download the issues and store them locally, syncing from the most recently updated backwards. Syncing comments may be an issue, so you may want to only sync issue titles, descriptions, labels, and status by default, and sync comments upon request.
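The "most recently updated first" ordering maps directly onto the GitHub API's `sort=updated`, `direction=desc`, and `since` query parameters. A minimal sketch of building such a request (the caching around it is left out, and the function names are just for illustration):

```ruby
require "json"
require "net/http"
require "uri"

# Build the GitHub API URI for a repo's issues, newest updates first.
# `since` limits the response to issues updated after that ISO 8601 timestamp,
# which is what makes incremental sync cheap.
def issues_uri(owner, repo, since = nil)
  params = { "state" => "all", "sort" => "updated", "direction" => "desc" }
  params["since"] = since if since
  URI::HTTPS.build(
    host:  "api.github.com",
    path:  "/repos/#{owner}/#{repo}/issues",
    query: URI.encode_www_form(params)
  )
end

# Fetch and parse one page of issues; returns [] on any non-2xx response.
def fetch_updated_issues(owner, repo, since = nil)
  res = Net::HTTP.get_response(issues_uri(owner, repo, since))
  res.is_a?(Net::HTTPSuccess) ? JSON.parse(res.body) : []
end
```

Storing the timestamp of the last successful sync and passing it as `since` on the next run keeps each sync down to the issues that actually changed.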
@sorin-ionescu, I played around with this a bit today. I didn't see your reply until now. I got something really hacky working along these lines: developer.yahoo.com/ruby/ruby-cache.html I also found this: https://github.com/mloughran/api_cache but being new to Ruby I started with Google, looked at the two solutions, and went with the quickest.
Why keep it in the repository's .git directory? `ghi list` can be run from any directory.
What I had in mind was:

- If connected to the internet, behave as normal, with the added step of caching everything.
- If not connected to the internet, read the cached version.
- If not connected to the internet and the cache is old, print a warning.
- If not connected to the internet and there is no cached version, too bad: print some info.
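A minimal sketch of that fallback, where `fetch_remote` is a stand-in for the real API call and the staleness threshold is an arbitrary placeholder:

```ruby
require "json"

MAX_CACHE_AGE = 24 * 60 * 60 # seconds; warn past one day (arbitrary)

# Placeholder for the actual GitHub API request.
def fetch_remote
  [{ "number" => 17, "title" => "Here is an issue" }]
end

def list_issues(cache_path, online:)
  if online
    issues = fetch_remote
    File.write(cache_path, JSON.generate(issues)) # cache for offline use
    issues
  elsif File.exist?(cache_path)
    age = Time.now - File.mtime(cache_path)
    warn "warning: cached issues are #{(age / 3600).round}h old" if age > MAX_CACHE_AGE
    JSON.parse(File.read(cache_path))
  else
    abort "not connected and no cached issues available"
  end
end
```

Using the cache file's mtime as the staleness signal keeps the format simple: no extra metadata has to be stored alongside the issues.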
I wasn't thinking this through fully. I just want to be able to `ghi show` and `ghi list` while on the train, because I comment on issues via commit message references, and if I need to get fancy I use the smartphone app.
Now I see the benefits of `ghi comment`/`close`/`label`/`assign`. But I don't think I'll have that done by the end of the week due to lack of Ruby and API familiarity. I'll remove my foos and puts and publish my cached version by Friday.
This is also what I had in mind. Note: I’m not a ghi power user, mostly because you can’t use it offline. For example, I didn’t know before this thread that you could get all the comments on an issue from this tool. I also hadn’t looked at its source code before this comment.
We could have a `~/.ghi/cache/` directory to store JSON cache files. Each repo would have its own file (the advantage is that you only have to load one file when getting one repo’s issues, but you would have to load all files if you wanted to see all issues from all repos), something like:
```json
{
  "issues": [
    {
      "number": 17,
      "title": "Here is an issue",
      "text": "here is the text, blah blah blah",
      "open": true,
      "author": "bfontaine",
      "date": "2013-12-15T22:25:04",
      "assigned": null,
      "tags": [ "bug" ],
      "milestone": "m1",
      "comments": [
        {
          "author": "someone",
          "date": "2013-12-15T22:26:24",
          "text": "hey, this is awesome!"
        },
        {
          "author": "someone2",
          "date": "2013-12-15T22:26:25",
          "text": ":)"
        }
      ]
    }
  ],
  "tags": {
    "bug": { "color": "#f02204" },
    "foobar": { "color": "#42FF42" }
  },
  "milestones": [
    "m1",
    "m2"
  ]
}
```
I’m using JSON for this example, but YAML would do the trick, too.
I don’t have the time to code something for now.
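Reading such a cache back would be straightforward. A sketch, assuming the proposed `~/.ghi/cache/` layout and a hypothetical `owner--repo.json` naming scheme:

```ruby
require "json"

# Path of the cache file for one repo. The "owner--repo.json" naming
# scheme is an assumption for illustration.
def cache_file(owner, repo, root = File.expand_path("~/.ghi/cache"))
  File.join(root, "#{owner}--#{repo}.json")
end

# Load one repo's cache and return only the open issues.
def open_issues(path)
  data = JSON.parse(File.read(path))
  data.fetch("issues", []).select { |issue| issue["open"] }
end
```

One file per repo means `ghi list` in a repo touches a single file; a hypothetical "all repos" view would just glob the cache directory.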
Hi,
I just discovered Mislav's issuesync. It enables downloading project issues to a local directory. That's pretty much the only feature, but at least issues can now be `grep`ed.
Note to project maintainer: feel free to delete this post if you think it's an ad to another project. I just needed this feature badly and so I'm sharing the finding.
Perhaps a more compelling reason to support some caching capability is to minimize API hits to GitHub. I've found that round-tripping on every `ghi list` command is slow.
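A cache would also pair well with HTTP conditional requests: GitHub's API honours `If-None-Match` and answers 304 Not Modified (which does not count against the rate limit) when the stored ETag is still valid. A sketch, with the function names being illustrative only:

```ruby
require "net/http"
require "uri"

# Build a GET request, attaching the ETag from the previous response if
# we have one cached.
def build_conditional_get(uri, etag = nil)
  req = Net::HTTP::Get.new(uri)
  req["If-None-Match"] = etag if etag
  req
end

# Returns [:unchanged, nil, etag] on 304 (serve the local cache),
# [:fresh, body, new_etag] on success, [:error, nil, etag] otherwise.
def fetch_if_changed(uri, etag = nil)
  res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
    http.request(build_conditional_get(uri, etag))
  end
  case res
  when Net::HTTPNotModified then [:unchanged, nil, etag]
  when Net::HTTPSuccess     then [:fresh, res.body, res["ETag"]]
  else                           [:error, nil, etag]
  end
end
```

Even when online, most `ghi list` calls on a quiet repo would then be a single cheap 304 round-trip instead of a full payload download.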
Perhaps the `wontfix` label is harsh. I'm very open to the feature but have absolutely no time to maintain ghi these days.
Happy to help out, then. This project was originally recommended to me by @twhitacre to manage issues in our classes at @theironyard more efficiently, so maybe some of our instructors can get involved...
This would be a cool issue and I'd be interested if anyone wants to take a crack at implementing it.
I would be happy with just read only offline support.
See also: https://github.com/jlord/offline-issues
@veganstraightedge I've come up with this quick and dirty script to archive the issues inside a directory of the repo:
```bash
#!/bin/bash
# Description: Backup GitHub issues for a repository to plain text files for offline reading
# License: WTFPL (http://www.wtfpl.net/txt/copying/)
# Dependencies: ghi "https://github.com/stephencelis/ghi"
# Usage: backup-gh-issues.sh [--closed] username repository
set -e

if [ "$1" == "--closed" ]; then
  closedissues="true"
  shift
fi

ghuser="$1"
ghrepo="$2"
if [ -z "$ghuser" ] || [ -z "$ghrepo" ]; then
  echo "Usage: backup-gh-issues.sh [--closed] username repository"; exit 1
fi

### Backup open issues
ghi list -- "$ghuser/$ghrepo" | tee "$ghuser-$ghrepo-issues.md"
# Keep only the issue numbers (drop header lines that start with a letter)
issues=$(awk -F" *" '{print $2}' "$ghuser-$ghrepo-issues.md" | grep -Ev '^[A-Za-z]')
for i in $issues; do
  ghi show "$i" -- "$ghuser/$ghrepo" | tee "$ghuser-$ghrepo-issue-$i.md"
done

### Backup closed issues
if [ "$closedissues" == "true" ]; then
  mkdir -p closed
  ghi list -s closed -- "$ghuser/$ghrepo" | tee "closed/$ghuser-$ghrepo-issues.md"
  issues=$(awk -F" *" '{print $2}' "closed/$ghuser-$ghrepo-issues.md" | grep -Ev '^[A-Za-z]')
  for i in $issues; do
    ghi show "$i" -- "$ghuser/$ghrepo" | tee "closed/$ghuser-$ghrepo-issue-$i.md"
  done
fi
```
One of the best features of Git is its decentralised model. A connection to a repository is not required. That is not the case for GitHub issues, and though distributed bug trackers exist, such as Ditz, none of them sync with GitHub issues.
`ghi` should download all issues and cache them locally. Then it should allow local operations and syncing with GitHub.