Closed: verokarhu closed this pull request 9 years ago
@verokarhu great!
having a /ctrl/add path is not restful since / is the location where the POSTed files end up
Makes sense.
distinguishing between uploaded blocks and files is unnecessary since a client can just chop the file up themselves if they want a chunked upload
Yep, this is fine. Eventually we will want more complicated things like, rabin fingerprint chunking done on the server and perhaps even support multipart upload. Not important anytime soon.
I'll CR now.
Okay, made some comments. Everything else LGTM. I haven't tested this-- if you can create a test file, would be great. Don't worry about query params, we can make that a future issue.
I left the multiaddr change and a test file out for now. Not familiar with how http testing in go is done but I'll look into it for a future pull request if/when I get around to doing more features for the http api.
Hopefully my assurances that it has been thoroughly tested on Win7 are enough for now. :)
As a sidenote, I wonder if the windows version of the mount command could use this API. @whyrusleeping mentioned that he had worked on an FTP server in issue #9 but I haven't thought about it enough to know if this API would be more suitable.
@verokarhu http testing that I've done in the past has used the napping library to query a server you start up within the test, to ensure that the data is consistent with what you expect. I would love to get this code polished and merged in soon. It will be very nice to have as an alternative to fuse (since fuse is being a little problematic for people to easily set up at the moment)
also, take a look at how the mount command works with the daemon code (on the daemon branch) and make serve do the same thing.
I'm no git master but wouldn't it be simplest to get the daemon branch merged before using those features in this pull request?
Yeah, hold off on the daemon stuff for a little bit until we get that finalized and merged in, but if you could write some sort of unit tests for the http interface, that would be extremely helpful!
All right, I'll do that. Thanks for the tip on napping btw, I'll look into it. I've been meaning to figure out http testing for another project (and ipfs of course =) ). There seem to have been some changes to master that this pull request isn't compatible with, so I'll have to fix that as well.
Just a ping to let you know that the daemon code is merged in!
There are now tests for posthandler and servehttp. I suppose one could refactor it a bit and enable testing the routes as well, but that seems like overkill ATM.
The biggest problem now is getting gorilla vendored. Try as I may, godep seems to remove the entire workspace when attempting to add gorilla/mux...
I'm curious, why are both godep and vendoring inside the repo used simultaneously? This hybrid approach between vendoring and revision locking seems odd, though I'm not exactly familiar with godep.
We use godep for vendoring; the Makefile should do it for you.
godep is used to copy all dependencies into Godeps/_workspace. Throughout the codebase, import statements are rewritten to point to this local workspace. The command make vendor in the package root performs this task.
I don't have make (I'm not sure if that's available on Windows), but I ran the command contained in the Makefile. This resulted in godep deleting the workspace (?) and the following error message:
C:\ipfs\src\github.com\jbenet\go-ipfs>godep save -r ./...
godep: rename C:\IPFS\src\github.com\jbenet\go-ipfs\server\http\http.go.temp C:\IPFS\src\github.com\jbenet\go-ipfs\server\http\http.go: Cannot create a file when that file already exists.
Anyways, I was able to manually merge the half-done state godep left things in and it seems to work. Hopefully the build completes successfully.
Looks like the build still fails. Am I correct in assuming that the builds were broken already?
The tests in dht_test.go look peculiar, is it making an actual node that it contacts? If so, that should really be mocked out.
Yeah, the tests that are failing in your code don't seem to be related to the work you've done. I'll take a little further look at them to make sure, see if I can repro.
@verokarhu Yeap, confirmed that the tests fail on master. Your stuff looks good, once we get a go ahead from @jbenet and @perfmode we can get this merged in!
I haven't looked at everything, but why is HTTP POST being used to upload files and not HTTP PUT? I believe the PUT method is better suited for that purpose (being idempotent is a good thing, and the client can know the hash in advance).
A description can be found on stackoverflow. Basically:
Very good point about PUT vs POST. I agree.
The reason POST is used is because the resource URL is unknown. It's essentially this, from that same stackoverflow post, though I can't remember right now whether the URL or only the hash is returned:
If you let the server decide then use POST.
You could though do something like this to create a resources using POST:
POST /questions HTTP/1.1
Host: wahteverblahblah.com
Note that in this case the resource name is not specified; the new object's URL path would be returned to you.
The POST handler is for the root URL, so this is IMO the correct HTTP method to use.
As an object-store, this server's API will be similar to REST / CRUD. In REST:
Think about an example like this:
# creating within a collection
POST http://server/collection/ --> {id: 4}
# subsequently
PUT http://server/collection/4
Now, if you know the id before creating, then this usually means either the object "exists" in that there is an acceptable null value (GET http://server/collection/
For us, these should all work:
POST http://ipfs-server/add -d <blockdata> -> 200 - {"added": [ <objhash> ]}
POST http://ipfs-server/ipfs/<objhash> -d <blockdata> -> 200 - <blockdata>
PUT http://ipfs-server/ipfs/<objhash> -d <blockdata> -> 200 - <blockdata>
GET http://ipfs-server/ipfs/<objhash> -> 200 - <blockdata>
the last three could return just the data, or a json object like:
{
hash: <objhash>,
block: <blockdata>
}
(I think this should be toggleable by opts in the header).
@verokarhu woah just saw that merge. fffff feel bad asking after, but could you rebase instead? not sure how much work you just did...
Also, we'll design the http api here: #132 (though happy to get this done/merged before the api solidifies, just know it's likely to change)
Git did all the work, I just did a fetch and a merge according to this guide. I'll have to see what a rebase does in this case, might be easier to just reset the fork and commit the changes again.
@verokarhu merges make things confusing in the history. rebasing basically does what you said (commit the changes again), automatically. Try:
# get updated remote refs -- should put master at 68f1a1a2
git fetch
# checkout the commit prior to the merge
git checkout -b temp ba95879
# try rebasing on top of master
git rebase master
# if all worked, re-apply your last commit
git cherry-pick 5bbf656
wow, i love git.
update: and if anything breaks... git reflog
! (or ping jbenet on freenode#ipfs)
Thanks for the git crash course. I checked out the commit before the first sync and cherry-picked the real commits after that. Then I force-pushed the new branch over master; hopefully this worked correctly.
awesome! thanks @verokarhu -- are @whyrusleeping's last comments addressed? i think that's the only thing left to merge this in, no?
Good catch, looks like I succeeded in cherry not-picking the dagreader fix.
It also looks like posting doesn't actually work atm, I could have sworn it worked at some point. Back to the drawing board... :P
The file splitting method was not handling all EOF cases properly:
An instance of this general case is that a Reader returning a non-zero number of bytes at the end of the input stream may return either err == EOF or err == nil. The next Read should return 0, EOF regardless.
This led to the POST failing, since the request body reader returns EOF together with a nonzero byte count from Read.
verokarhu are you completing everything on issue #132 ? If not, i might be able to do the rest.
@llSourcell I don't know. This http api got done since there was no way to use ipfs on windows and I wanted to try it out. I figured getting it merged might be useful to other windows users. :)
@llSourcell the API in #132 needs to solidify. that will take time, so please don't rush to implementing exactly that. though of course, features that solidify can be added incrementally.
@verokarhu yep, agreed on that. let's merge this in. did you manage to cherry-pick the right commits? you can look at your branch's history and compare it to how your branch looked before to find the missing commits.
Yes, this pull request includes everything.
couple comments for the future, but LGTM. Hey @whyrusleeping -- can you give the green light when you get a chance?
Looks good to me. I'll be glad to have this merged in.
@verokarhu okay playing with it (it's really awesome!!! :D super excited to have it merged in) a few things came up. I've documented them where appropriate. LMK if you can't get to them tonight/tmrw, and if so, I can do it. Thanks!
I should be able to get those fixed tonight, I'll let you know if there are any problems.
I had time to fix some of the things you mentioned, but when I rebase onto the latest upstream/master and build & run, ipfs panics. I suspect the error is not related to the changes in this pull request, since the cat command also doesn't work:
C:\ipfs\src\github.com\jbenet\go-ipfs\cmd\ipfs>ipfs cat 1
2014/09/30 21:07:45 Checking if daemon is running...
2014/09/30 21:07:45 Daemon is running! failed to create lock file C:\ipfs\src\github.com\jbenet\go-ipfs\cmd\ipfs\ip4\127.0.0.1\tcp\5001\daemon.lock open C:\ipfs\src\github.com\jbenet\go-ipfs\cmd\ipfs\ip4\127.0.0.1\tcp\5001\daemon.lock: The system cannot find the path specified.
2014/09/30 21:07:45 getDaemonAddr failed: open /ip4/127.0.0.1/tcp/5001/rpcaddress: The system cannot find the path specified.
Executing command locally: open /ip4/127.0.0.1/tcp/5001/rpcaddress: The system cannot find the path specified.Caution: blockservice running in local (offline) mode.
@verokarhu yikes you're right. Fixed in c7af4a6 (new master). how did that pass the test suite!?
I think it's good to go. Some of the things you mentioned are unfixed, like returning a 404 error, since it seemed like that might need work in other parts of ipfs as well (the error returned was "context deadline exceeded"). The setup part in serve.go is also still manual.
@verokarhu travis build failed, and failure is in TestServeHTTP: https://travis-ci.org/jbenet/go-ipfs/jobs/36706143
--- FAIL: TestServeHTTP (0.00 seconds)
panic: runtime error: slice bounds out of range [recovered]
panic: runtime error: slice bounds out of range
goroutine 3 [running]:
runtime.panic(0x6b5dc0, 0xaa5b6a)
/home/travis/.gvm/gos/go1.2.2/src/pkg/runtime/panic.c:266 +0xb6
testing.func·005()
/home/travis/.gvm/gos/go1.2.2/src/pkg/testing/testing.go:385 +0xe8
runtime.panic(0x6b5dc0, 0xaa5b6a)
/home/travis/.gvm/gos/go1.2.2/src/pkg/runtime/panic.c:248 +0x106
github.com/jbenet/go-ipfs/server/http.(*handler).ServeHTTP(0xc21001eaf0, 0x7fc23411f638, 0xc21000a560, 0xc210049820)
/home/travis/gopath/src/github.com/jbenet/go-ipfs/server/http/http.go:36 +0x2f4
github.com/jbenet/go-ipfs/server/http.TestServeHTTP(0xc21004e120)
/home/travis/gopath/src/github.com/jbenet/go-ipfs/server/http/http_test.go:35 +0x26d
testing.tRunner(0xc21004e120, 0xaa0480)
/home/travis/.gvm/gos/go1.2.2/src/pkg/testing/testing.go:391 +0x8b
created by testing.RunTests
/home/travis/.gvm/gos/go1.2.2/src/pkg/testing/testing.go:471 +0x8b2
goroutine 1 [chan receive]:
testing.RunTests(0x79c5d8, 0xaa0480, 0x2, 0x2, 0x1)
/home/travis/.gvm/gos/go1.2.2/src/pkg/testing/testing.go:472 +0x8d5
testing.Main(0x79c5d8, 0xaa0480, 0x2, 0x2, 0xaa9580, ...)
/home/travis/.gvm/gos/go1.2.2/src/pkg/testing/testing.go:403 +0x84
main.main()
github.com/jbenet/go-ipfs/server/http/_test/_testmain.go:49 +0x9c
FAIL github.com/jbenet/go-ipfs/server/http 0.012s
Oh man, that's what I get for forgetting to run the tests before the last changes. Should really start using an IDE so they are run automatically. Should be ok now.
Ok, tests pass, LGTM! thanks @verokarhu :+1:
I'm so happy about this.
This pull request adds a serve command which hosts a http server that listens for POST and GET requests. At the moment it's only possible to GET and POST individual blocks (blobs?).
I decided against implementing the API like it was described in issue #2 because