Do you get the same exception with the Java Client or Curl too?
Which bucket and predicate are you using?
Original comment by sergio.b...@gmail.com
on 20 Mar 2011 at 9:47
Sorry, I've already reset the bucket to continue getting work done (the issue
was preventing my application from working correctly).
I was using the js predicate but didn't try jxpath or any others. I should
point out that perhaps the issue wasn't so much the use of a predicate as an
issue fetching a specific document. This occurred during a programmatic
bulk-load operation, where rapid calls to PUT and GET were running in sequence.
However, fetching all documents -was- working.
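For reference, the bulk load is essentially a tight sequential loop of this
shape (a sketch only: Document, documents, and the put(...) call are my
shorthand, though the fluent bucket/key calls match what I use elsewhere in
this thread):

    // Rough shape of the bulk-load step that was running when the timeout
    // occurred: tight, sequential PUT/GET calls against a single bucket.
    for (Document doc : documents) {
        client.bucket("test").key(doc.getId()).put(doc);
        client.bucket("test").key(doc.getId()).get(Document.class);
    }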
If it's useful, the following trace is from the second Terrastore node a few
seconds later:
Terrastore Server 0.8.1 - 16:35:17.930 - Communication timeout!
terrastore.communication.CommunicationException: Communication timeout!
    at terrastore.communication.remote.RemoteNode.send(RemoteNode.java:153) ~[terrastore-0.8.1.jar:na]
    at terrastore.service.impl.DefaultQueryService$5.map(DefaultQueryService.java:204) ~[terrastore-0.8.1.jar:na]
    at terrastore.service.impl.DefaultQueryService$5.map(DefaultQueryService.java:196) ~[terrastore-0.8.1.jar:na]
    at terrastore.util.collect.parallel.ParallelUtils$1.call(ParallelUtils.java:53) ~[terrastore-0.8.1.jar:na]
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) ~[na:1.6.0_23]
    at java.util.concurrent.FutureTask.run(FutureTask.java:138) ~[na:1.6.0_23]
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) ~[na:1.6.0_23]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) ~[na:1.6.0_23]
    at java.lang.Thread.run(Thread.java:662) ~[na:1.6.0_23]
Original comment by teonanac...@gmail.com
on 20 Mar 2011 at 5:04
I've done some investigating: to make a long story short, the exception you
see there is caused by a null error message, which is a minor problem in
itself; the real question is *what* caused that exception, and I don't have
enough information here to determine that.
So, if you can't provide a way to reproduce the problem, the best I can do is
improve the exception logging so that the next time it happens we see the
original exception.
Original comment by sergio.b...@gmail.com
on 21 Mar 2011 at 10:11
I haven't seen it come up again since. I'll update to the new version soon and
aim to get you something more useful if/when it occurs next.
Original comment by teonanac...@gmail.com
on 21 Mar 2011 at 6:10
Fixed ErrorMessage to avoid failing with NPE on null message.
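For illustration, the kind of guard involved looks like this (a sketch only;
the actual fields and methods in ErrorMessage may differ):

    // Illustrative sketch, not the actual Terrastore source: substitute a
    // placeholder when the wrapped message is null, so formatting and
    // logging code can no longer hit an NPE.
    public class ErrorMessage {
        private final int code;
        private final String message;

        public ErrorMessage(int code, String message) {
            this.code = code;
            this.message = (message != null) ? message : "<no message>";
        }

        public String getMessage() {
            return message;
        }
    }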
Original comment by sergio.b...@gmail.com
on 26 Mar 2011 at 3:23
I think I have uncovered the root of the problem--or if not this problem
specifically, then a related problem. It appears that calls to
bucket(name).clear() do not take effect immediately, and it's probably my fault
for assuming they do.
Namely, if I call:

    client.bucket("test").clear();
    client.bucket("test").key("1").get(...);
I may in fact still retrieve the value at key "1". Alternatively, if I attempt
to put key "1" with an "if:absent" predicate, that predicate may fail because
key "1" may still exist immediately after the clear() call.
In my usage, the situation is more likely to occur if the server has been
moderately busy with a rapid-fire sequence of puts and gets. (In my scenario,
it's a bulk-load script that more or less resets my data store to a default
state for development and testing.)
For the time being, I've simply added some pauses in the script after each
clear() method.
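Concretely, each reset now looks like this (a sketch of my script; the helper
and the 500 ms value are my own choices):

    // Workaround sketch from my bulk-load script: sleep after clear() so
    // the asynchronous operation can propagate through the cluster before
    // the rapid-fire puts and gets resume. 500 ms is arbitrary; tune it
    // for your setup.
    private void resetBucket(String name) throws InterruptedException {
        client.bucket(name).clear();
        Thread.sleep(500);
    }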
Original comment by teonanac...@gmail.com
on 23 Apr 2011 at 12:15
Yes, the "clear" operation isn't synchronous, meaning it takes time to
propagate in the cluster, and is a rather heavy one.
Unfortunately, as of now I don't have a solution for your problem: can you
tolerate it?
Thanks for the feedback!
Sergio B.
Original comment by sergio.b...@gmail.com
on 26 Apr 2011 at 10:12
Original issue reported on code.google.com by
teonanac...@gmail.com
on 19 Mar 2011 at 11:53