daviddengcn / gcse

Project for Go Search, a search engine for finding popular and relevant packages.
http://go-search.org/
BSD 2-Clause "Simplified" License
277 stars, 45 forks

Make it purdy #11

Closed: mipearson closed this issue 10 years ago

mipearson commented 10 years ago

Not sure of your preferences on this: would you welcome a PR that pretties things up a bit?

Thinking of bringing in a basic Bootstrap 3 theme, including responsive support.

Inspired by this twitter thread: https://twitter.com/ggiesemann/status/441007698188840960
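
For context, adopting a basic Bootstrap 3 theme with responsive support mostly comes down to a stylesheet link plus the viewport meta tag in the page template. A minimal sketch, assuming a CDN-hosted build (the URL and version here are illustrative, not what any eventual PR used):

<head>
  <meta charset="utf-8">
  <!-- the viewport tag is what enables Bootstrap's responsive layout on mobile -->
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <!-- illustrative CDN link; any Bootstrap 3.x build works -->
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
</head>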

daviddengcn commented 10 years ago

Thank you for your interest in GCSE.

I'm open to visual changes, but I'm just busy these days.

What do you mean by "PR"?

mipearson commented 10 years ago

PR = Pull Request. I'm interested as your website has been the best so far for finding packages in Go.

daviddengcn commented 10 years ago

Sure, any PR is welcome, and thank you for that!

daviddengcn commented 10 years ago

The related source files are under the server folder.

mipearson commented 10 years ago

I've hit a wall, unfortunately: I can get a basic server running, but I can't work out how to bootstrap the crawler or indexer so that there are some actual documents to display.

The 'server' command is failing as there are no documents indexed, and the 'tocrawl', 'crawler' and 'indexer' commands are failing because data/docs is missing, but I can't work out what it's expecting to be there.

daviddengcn commented 10 years ago

Sorry for the incomplete documentation. I'll make tocrawl runnable without any initial data.

You can try this:

mkdir data/docs   # create the docs store the tools expect
tocrawl           # seed the initial list of packages to crawl
crawler           # fetch data for the packages on that list
mergedocs         # merge the crawled results into data/docs
indexer           # build the search index from the merged docs

Then

server            # serve the search UI from the built index

mipearson commented 10 years ago

Thanks for getting back to me. It looks like this will download 32,000 repositories. I am in Australia: our internet is slow and expensive. Is there a way to limit it to only a hundred or so?


daviddengcn commented 10 years ago

You can copy conf.json.template to conf.json and set crawler.due_per_run to "1m" to make the crawler run for only a short time. Or I'll make a file for you later.
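
For reference, a minimal conf.json along those lines could look like the sketch below. This assumes the dotted name crawler.due_per_run maps onto nested JSON objects; the real conf.json.template may define other keys worth keeping.

{
  "crawler": {
    "due_per_run": "1m"
  }
}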

mipearson commented 10 years ago

Excellent, that worked perfectly.


daviddengcn commented 10 years ago

I've applied the Bootstrap theme to the UI and pushed it online.