Closed ghost closed 9 years ago
The ping test passes, the other fails. Here's my Riak config:
storage_backend = multi
multi_backend.bitcask_mult.storage_backend = bitcask
multi_backend.bitcask_mult.bitcask.data_root = ../data/riak/bitcask_mult
multi_backend.leveldb_mult.storage_backend = leveldb
multi_backend.leveldb_mult.leveldb.data_root = ../data/riak/leveldb_mult
multi_backend.leveldb_mult.storage_backend = memory
multi_backend.default = bitcask_mult
#....
## To enable Search set this 'on'.
##
## Default: off
##
## Acceptable values:
## - on or off
search = on
This one works, but it's not what I need. I need to specify a bucket type; otherwise it defaults to bitcask, which I definitely do not want for sessions:
[Test()]
public void B_TestCreateSessionObjectInSessionBucketWithoutBucketType()
{
    var client = BackendManager.Instance.GetSessionStore();
    Assert.NotNull(client);
    RiakObject ro = new RiakObject(SessionBucketName, TestUserId);
    ro.SetObject<SessionIdentity>(new SessionIdentity());
    var result = client.Put(ro);
    Assert.IsTrue(result.IsSuccess);
}
Hi @paigeadele -
Bucket Types are supported in the current develop code. Please clone this repository, build it (instructions are in the README.md file), and use the generated assemblies for your tests. Let me know how it goes.
@lukebakken this is CorrugatedIron from the develop branch of the repository, let me check
@paigeadele please use develop and re-try.
@lukebakken yes, that is what I'm using
laptop :: ~/CorrugatedIron » git branch
* develop
laptop :: ~/CorrugatedIron »
@paigeadele there are several RiakObject constructor overloads that allow setting a bucket type; please use one of them and let me know how it works for you:
public RiakObject(string bucket, string key, byte[] value, string contentType, string charSet)
    : this(null, bucket, key, value, contentType, charSet) { }

public RiakObject(string bucketType, string bucket, string key, byte[] value, string contentType, string charSet)
    : this(new RiakObjectId(bucketType, bucket, key), value, contentType, charSet) { }
Naturally, it makes sense to pick one of them, but the one I picked is not working:
RiakObject ro = new RiakObject(
    SettingsManager.Instance.settings.SessionBucketType,
    SessionBucketName,
    TestUserId,
    Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new SessionIdentity())),
    "application/json",
    "utf-8");
var result = client.Put(ro);
Assert.IsTrue(result.IsSuccess); // <-- fail
Error:
result {CorrugatedIron.RiakResult<CorrugatedIron.Models.RiakObject>} CorrugatedIron.RiakResult<CorrugatedIron.Models.RiakObject>
base {CorrugatedIron.RiakResult} CorrugatedIron.RiakResult
ErrorMessage "A connection attempt failed because the connected party did not properly respond after a period of t…" string
IsSuccess false bool
ResultCode CommunicationError CorrugatedIron.ResultCode
Non-public members
Continuation (null) object
Done null bool?
Value (null) object
Non-public members
Also, before you say to check the value of SettingsManager.Instance.settings.SessionBucketType and check my settings and all that:
SettingsManager.Instance.settings.SessionBucketType "Sessions" string
erratic@laptop ~/r/r/riak> bin/riak-admin bucket-type list
default (active)
UserData (not active)
Sessions (active)
UserAccounts (not active)
erratic@laptop ~/r/r/riak>
This is a connection error at the TCP level (http://www.nsoftware.com/kb/xml/07240801.rst).
Things to check:
- Can you curl Riak over port 8098?
- Is app.config configured correctly to use the right host and port for Riak? By default, the HTTP port is 8098 and the PBC port is 8087. There are configuration examples here: https://github.com/basho-labs/CorrugatedIron/blob/develop/src/CorrugatedIron.Tests.Live/App.config
Yeah, that's not the problem, because I can put an object if I do:
[Test()]
public void A_Test_PingSessionCluster()
{
    var client = BackendManager.Instance.GetSessionStore();
    Assert.NotNull(client);
    Assert.IsTrue(client.Ping().IsSuccess); // <-- works
}
**as well as**
[Test()]
public void B_TestCreateSessionObjectInSessionBucketWithoutBucketType()
{
    var client = BackendManager.Instance.GetSessionStore();
    Assert.NotNull(client);
    RiakObject ro = new RiakObject(SessionBucketName, TestUserId);
    ro.SetObject<SessionIdentity>(new SessionIdentity());
    var result = client.Put(ro);
    Assert.IsTrue(result.IsSuccess); // <-- works
}
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<configSections>
<section name="riakConfig" type="CorrugatedIron.Config.RiakClusterConfiguration, CorrugatedIron"/>
</configSections>
<riakConfig nodePollTime="5000" defaultRetryWaitTime="200" defaultRetryCount="3">
<nodes>
<node name="dev1" hostAddress="127.0.0.01" pbcPort="8087" restPort="8069" poolSize="20" />
</nodes>
</riakConfig>
</configuration>
erratic@laptop ~/r/r/riak> sudo netstat -an | grep LISTEN | grep 8087
tcp 0 0 127.0.0.1:8087 0.0.0.0:* LISTEN
erratic@laptop ~/r/r/riak>
I don't seem to have a REST port, though; that shouldn't matter, I only want PBC anyway.
Yeah, and I changed the conf to use 8087; just as I thought, it makes no difference.
FYI:
==> log/error.log <==
2014-12-10 16:35:35.799 [error] <0.1247.0> gen_fsm <0.1247.0> in state active terminated with reason: no case clause matching {riak_kv_multi_backend,undefined_backend,<<"memory">>} in riak_core_vnode:vnode_command/3 line 345
==> log/crash.log <==
2014-12-10 16:35:35 =ERROR REPORT====
** State machine <0.1247.0> terminating
** Last event in was {riak_vnode_req_v1,936274486415109681974235595958868809467081785344,{fsm,undefined,<0.23103.22>},{riak_kv_put_req_v1,{{<<"Sessions">>,<<"Sessions_Test">>},<<"e58dfd0b-428c-42d0-b8f9-0bd64e1f46b4">>},{r_object,{<<"Sessions">>,<<"Sessions_Test">>},<<"e58dfd0b-428c-42d0-b8f9-0bd64e1f46b4">>,[{r_content,{dict,4,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[[<<"content-type">>,97,112,112,108,105,99,97,116,105,111,110,47,106,115,111,110],[<<"X-Riak-VTag">>,53,66,83,102,121,111,100,112,71,54,120,122,99,89,110,87,68,52,50,52,105,97]],[[<<"index">>]],[],[[<<"X-Riak-Last-Modified">>|{1418,229335,791795}]],[],[]}}},<<"{\"Created\":\"2014-12-10T16:35:35.675971+00:00\",\"Modified\":\"2014-12-10T16:35:35.675971+00:00\",\"Guest\":true,\"UserId\":\"00000000-0000-0000-0000-000000000000\",\"Cookie\":null,\"UserName\":null,\"LastUpdated\":\"2014-12-10T16:35:35.67527+00:00\",\"Expires\":\"2014-12-10T16:40:35.675273+00:00\",\"ZID\":\"0b8154ed-4499-4c06-ba19-3554b975eaa2\",\"ObjectType\":\"SessionIdentity\"}">>}],[],{dict,1,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[[clean|true]],[]}}},undefined},29229549,63585448535,[coord,{returnbody,true}]}}
** When State == active
** Data == {state,936274486415109681974235595958868809467081785344,riak_kv_vnode,{state,936274486415109681974235595958868809467081785344,riak_kv_multi_backend,{state,[{<<"leveldb_mult">>,riak_kv_memory_backend,{state,62914861,62652716,undefined,undefined,0,undefined}},{<<"bitcask_mult">>,riak_kv_bitcask_backend,{state,#Ref<0.0.0.4657>,"936274486415109681974235595958868809467081785344",[{io_mode,erlang},{expiry_grace_time,0},{small_file_threshold,10485760},{dead_bytes_threshold,134217728},{frag_threshold,40},{dead_bytes_merge_trigger,536870912},{frag_merge_trigger,60},{max_file_size,2147483648},{open_timeout,4},{data_root,"../data/riak/bitcask_mult"},{sync_strategy,none},{merge_window,always},{max_fold_age,-1},{max_fold_puts,0},{expiry_secs,-1},{require_hint_crc,true},{key_transform,#Fun<riak_kv_bitcask_backend.1.79788753>},{read_write,true}],936274486415109681974235595958868809467081785344,"../data/riak/bitcask_mult",1}}],<<"bitcask_mult">>},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},<<35,9,254,249,108,144,169,215>>,3000,1000,100,100,true,false,undefined,0,undefined,<0.1379.0>,undefined,0},undefined,none,undefined,undefined,<0.1520.0>,{pool,riak_kv_worker,10,[]},undefined,95262}
** Reason for termination =
** {{case_clause,{riak_kv_multi_backend,undefined_backend,<<"memory">>}},[{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,345}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
2014-12-10 16:35:35 =CRASH REPORT====
crasher:
initial call: riak_core_vnode:init/1
pid: <0.1247.0>
registered_name: []
exception exit: {{{case_clause,{riak_kv_multi_backend,undefined_backend,<<"memory">>}},[{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,345}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]},[{gen_fsm,terminate,7,[{file,"gen_fsm.erl"},{line,622}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
ancestors: [riak_core_vnode_sup,riak_core_sup,<0.149.0>]
messages: []
links: [<0.153.0>,<0.1520.0>,#Port<0.11441>]
dictionary: [{random_seed,{16736,24577,11225}},{hashtree_tokens,88},{bitcask_file_mod,bitcask_file},{bitcask_time_fudge,no_testing},{yz_hashtree_tokens,89},{{tree,936274486415109681974235595958868809467081785344},<0.2160.0>},{bitcask_efile_port,#Port<0.11441>}]
trap_exit: true
status: running
heap_size: 4185
stack_size: 27
reductions: 3299606
neighbours:
neighbour: [{pid,<0.1520.0>},{registered_name,[]},{initial_call,{riak_core_vnode_worker_pool,init,['Argument__1']}},{current_function,{gen_fsm,loop,7}},{ancestors,[<0.1247.0>,riak_core_vnode_sup,riak_core_sup,<0.149.0>]},{messages,[]},{links,[<0.1247.0>,<0.1521.0>]},{dictionary,[]},{trap_exit,false},{status,waiting},{heap_size,1598},{stack_size,10},{reductions,1615}]
2014-12-10 16:35:35 =SUPERVISOR REPORT====
Supervisor: {local,riak_core_vnode_sup}
Context: child_terminated
Reason: {{case_clause,{riak_kv_multi_backend,undefined_backend,<<"memory">>}},[{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,345}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
Offender: [{pid,<0.1247.0>},{name,undefined},{mfargs,{riak_core_vnode,start_link,undefined}},{restart_type,temporary},{shutdown,300000},{child_type,worker}]
==> log/console.log <==
2014-12-10 16:35:35.799 [error] <0.1247.0> gen_fsm <0.1247.0> in state active terminated with reason: no case clause matching {riak_kv_multi_backend,undefined_backend,<<"memory">>} in riak_core_vnode:vnode_command/3 line 345
==> log/crash.log <==
2014-12-10 16:35:35 =ERROR REPORT====
** State machine <0.1521.0> terminating
** Last message in was {'EXIT',<0.1520.0>,{{case_clause,{riak_kv_multi_backend,undefined_backend,<<"memory">>}},[{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,345}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}}
** When State == ready
** Data == {state,<0.1522.0>,{[<0.1534.0>,<0.1533.0>,<0.1532.0>,<0.1531.0>,<0.1547.0>,<0.1542.0>,<0.1540.0>,<0.1537.0>,<0.1536.0>],[<0.1535.0>]},{[],[]},67633471,10,0,0}
** Reason for termination =
** {{case_clause,{riak_kv_multi_backend,undefined_backend,<<"memory">>}},[{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,345}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
2014-12-10 16:35:35 =CRASH REPORT====
crasher:
initial call: poolboy:init/1
pid: <0.1521.0>
registered_name: []
exception exit: {{{case_clause,{riak_kv_multi_backend,undefined_backend,<<"memory">>}},[{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,345}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]},[{gen_fsm,terminate,7,[{file,"gen_fsm.erl"},{line,622}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
ancestors: [<0.1520.0>,<0.1247.0>,riak_core_vnode_sup,riak_core_sup,<0.149.0>]
messages: []
links: [<0.1536.0>,<0.1540.0>,<0.1542.0>,<0.1547.0>,<0.1537.0>,<0.1532.0>,<0.1534.0>,<0.1535.0>,<0.1533.0>,<0.1522.0>,<0.1531.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 610
stack_size: 27
reductions: 1054
neighbours:
neighbour: [{pid,<0.1531.0>},{registered_name,[]},{initial_call,{riak_core_vnode_worker,init,['Argument__1']}},{current_function,{gen_server,loop,6}},{ancestors,[<0.1522.0>,<0.1521.0>,<0.1520.0>,<0.1247.0>,riak_core_vnode_sup,riak_core_sup,<0.149.0>]},{messages,[]},{links,[<0.1522.0>,<0.1521.0>]},{dictionary,[{bitcask_time_fudge,no_testing}]},{trap_exit,false},{status,waiting},{heap_size,2586},{stack_size,9},{reductions,2538}]
neighbour: [{pid,<0.1533.0>},{registered_name,[]},{initial_call,{riak_core_vnode_worker,init,['Argument__1']}},{current_function,{gen_server,loop,6}},{ancestors,[<0.1522.0>,<0.1521.0>,<0.1520.0>,<0.1247.0>,riak_core_vnode_sup,riak_core_sup,<0.149.0>]},{messages,[]},{links,[<0.1522.0>,<0.1521.0>]},{dictionary,[{bitcask_time_fudge,no_testing}]},{trap_exit,false},{status,waiting},{heap_size,2586},{stack_size,9},{reductions,2521}]
neighbour: [{pid,<0.1535.0>},{registered_name,[]},{initial_call,{riak_core_vnode_worker,init,['Argument__1']}},{current_function,{gen_server,loop,6}},{ancestors,[<0.1522.0>,<0.1521.0>,<0.1520.0>,<0.1247.0>,riak_core_vnode_sup,riak_core_sup,<0.149.0>]},{messages,[]},{links,[<0.1522.0>,<0.1521.0>]},{dictionary,[{bitcask_time_fudge,no_testing}]},{trap_exit,false},{status,waiting},{heap_size,2586},{stack_size,9},{reductions,1794}]
neighbour: [{pid,<0.1534.0>},{registered_name,[]},{initial_call,{riak_core_vnode_worker,init,['Argument__1']}},{current_function,{gen_server,loop,6}},{ancestors,[<0.1522.0>,<0.1521.0>,<0.1520.0>,<0.1247.0>,riak_core_vnode_sup,riak_core_sup,<0.149.0>]},{messages,[]},{links,[<0.1522.0>,<0.1521.0>]},{dictionary,[{bitcask_time_fudge,no_testing}]},{trap_exit,false},{status,waiting},{heap_size,2586},{stack_size,9},{reductions,2538}]
neighbour: [{pid,<0.1532.0>},{registered_name,[]},{initial_call,{riak_core_vnode_worker,init,['Argument__1']}},{current_function,{gen_server,loop,6}},{ancestors,[<0.1522.0>,<0.1521.0>,<0.1520.0>,<0.1247.0>,riak_core_vnode_sup,riak_core_sup,<0.149.0>]},{messages,[]},{links,[<0.1522.0>,<0.1521.0>]},{dictionary,[{bitcask_time_fudge,no_testing}]},{trap_exit,false},{status,waiting},{heap_size,2586},{stack_size,9},{reductions,2538}]
neighbour: [{pid,<0.1537.0>},{registered_name,[]},{initial_call,{riak_core_vnode_worker,init,['Argument__1']}},{current_function,{gen_server,loop,6}},{ancestors,[<0.1522.0>,<0.1521.0>,<0.1520.0>,<0.1247.0>,riak_core_vnode_sup,riak_core_sup,<0.149.0>]},{messages,[]},{links,[<0.1522.0>,<0.1521.0>]},{dictionary,[{bitcask_time_fudge,no_testing}]},{trap_exit,false},{status,waiting},{heap_size,2586},{stack_size,9},{reductions,1777}]
neighbour: [{pid,<0.1547.0>},{registered_name,[]},{initial_call,{riak_core_vnode_worker,init,['Argument__1']}},{current_function,{gen_server,loop,6}},{ancestors,[<0.1522.0>,<0.1521.0>,<0.1520.0>,<0.1247.0>,riak_core_vnode_sup,riak_core_sup,<0.149.0>]},{messages,[]},{links,[<0.1522.0>,<0.1521.0>]},{dictionary,[{bitcask_time_fudge,no_testing}]},{trap_exit,false},{status,waiting},{heap_size,2586},{stack_size,9},{reductions,1794}]
neighbour: [{pid,<0.1542.0>},{registered_name,[]},{initial_call,{riak_core_vnode_worker,init,['Argument__1']}},{current_function,{gen_server,loop,6}},{ancestors,[<0.1522.0>,<0.1521.0>,<0.1520.0>,<0.1247.0>,riak_core_vnode_sup,riak_core_sup,<0.149.0>]},{messages,[]},{links,[<0.1522.0>,<0.1521.0>]},{dictionary,[{bitcask_time_fudge,no_testing}]},{trap_exit,false},{status,waiting},{heap_size,2586},{stack_size,9},{reductions,1777}]
neighbour: [{pid,<0.1540.0>},{registered_name,[]},{initial_call,{riak_core_vnode_worker,init,['Argument__1']}},{current_function,{gen_server,loop,6}},{ancestors,[<0.1522.0>,<0.1521.0>,<0.1520.0>,<0.1247.0>,riak_core_vnode_sup,riak_core_sup,<0.149.0>]},{messages,[]},{links,[<0.1522.0>,<0.1521.0>]},{dictionary,[{bitcask_time_fudge,no_testing}]},{trap_exit,false},{status,waiting},{heap_size,2586},{stack_size,9},{reductions,1794}]
neighbour: [{pid,<0.1536.0>},{registered_name,[]},{initial_call,{riak_core_vnode_worker,init,['Argument__1']}},{current_function,{gen_server,loop,6}},{ancestors,[<0.1522.0>,<0.1521.0>,<0.1520.0>,<0.1247.0>,riak_core_vnode_sup,riak_core_sup,<0.149.0>]},{messages,[]},{links,[<0.1522.0>,<0.1521.0>]},{dictionary,[{bitcask_time_fudge,no_testing}]},{trap_exit,false},{status,waiting},{heap_size,2586},{stack_size,9},{reductions,1794}]
2014-12-10 16:35:35 =SUPERVISOR REPORT====
Supervisor: {<0.1522.0>,poolboy_sup}
Context: shutdown_error
Reason: {{case_clause,{riak_kv_multi_backend,undefined_backend,<<"memory">>}},[{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,345}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
Offender: [{nb_children,10},{name,riak_core_vnode_worker},{mfargs,{riak_core_vnode_worker,start_link,[[{worker_module,riak_core_vnode_worker},{worker_args,[936274486415109681974235595958868809467081785344,[],worker_props,<0.1520.0>]},{worker_callback_mod,riak_kv_worker},{size,10},{max_overflow,0}]]}},{restart_type,temporary},{shutdown,5000},{child_type,worker}]
2014-12-10 16:35:35 =ERROR REPORT====
** Generic server <0.1522.0> terminating
** Last message in was {'EXIT',<0.1521.0>,{{case_clause,{riak_kv_multi_backend,undefined_backend,<<"memory">>}},[{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,345}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}}
** When Server state == {state,{<0.1522.0>,poolboy_sup},simple_one_for_one,[{child,undefined,riak_core_vnode_worker,{riak_core_vnode_worker,start_link,[[{worker_module,riak_core_vnode_worker},{worker_args,[936274486415109681974235595958868809467081785344,[],worker_props,<0.1520.0>]},{worker_callback_mod,riak_kv_worker},{size,10},{max_overflow,0}]]},temporary,5000,worker,[riak_core_vnode_worker]}],{set,10,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[<0.1547.0>],[],[],[],[],[<0.1536.0>],[<0.1537.0>],[<0.1531.0>],[<0.1532.0>],[<0.1540.0>,<0.1533.0>],[<0.1534.0>],[<0.1542.0>,<0.1535.0>],[],[],[]}}},0,1,[],poolboy_sup,{riak_core_vnode_worker,[{worker_module,riak_core_vnode_worker},{worker_args,[936274486415109681974235595958868809467081785344,[],worker_props,<0.1520.0>]},{worker_callback_mod,riak_kv_worker},{size,10},{max_overflow,0}]}}
** Reason for termination ==
** {{case_clause,{riak_kv_multi_backend,undefined_backend,<<"memory">>}},[{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,345}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
2014-12-10 16:35:35 =CRASH REPORT====
crasher:
initial call: supervisor:poolboy_sup/1
pid: <0.1522.0>
registered_name: []
exception exit: {{{case_clause,{riak_kv_multi_backend,undefined_backend,<<"memory">>}},[{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,345}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]},[{gen_server,terminate,6,[{file,"gen_server.erl"},{line,744}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
ancestors: [<0.1521.0>,<0.1520.0>,<0.1247.0>,riak_core_vnode_sup,riak_core_sup,<0.149.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 4185
stack_size: 27
reductions: 1321
neighbours:
==> log/error.log <==
2014-12-10 16:35:35.800 [error] <0.1247.0> CRASH REPORT Process <0.1247.0> with 1 neighbours exited with reason: no case clause matching {riak_kv_multi_backend,undefined_backend,<<"memory">>} in riak_core_vnode:vnode_command/3 line 345 in gen_fsm:terminate/7 line 622
==> log/console.log <==
2014-12-10 16:35:35.800 [error] <0.1247.0> CRASH REPORT Process <0.1247.0> with 1 neighbours exited with reason: no case clause matching {riak_kv_multi_backend,undefined_backend,<<"memory">>} in riak_core_vnode:vnode_command/3 line 345 in gen_fsm:terminate/7 line 622
==> log/error.log <==
2014-12-10 16:35:35.800 [error] <0.153.0> Supervisor riak_core_vnode_sup had child undefined started with {riak_core_vnode,start_link,undefined} at <0.1247.0> exit with reason no case clause matching {riak_kv_multi_backend,undefined_backend,<<"memory">>} in riak_core_vnode:vnode_command/3 line 345 in context child_terminated
==> log/console.log <==
2014-12-10 16:35:35.800 [error] <0.153.0> Supervisor riak_core_vnode_sup had child undefined started with {riak_core_vnode,start_link,undefined} at <0.1247.0> exit with reason no case clause matching {riak_kv_multi_backend,undefined_backend,<<"memory">>} in riak_core_vnode:vnode_command/3 line 345 in context child_terminated
==> log/error.log <==
2014-12-10 16:35:35.800 [error] <0.1521.0> gen_fsm <0.1521.0> in state ready terminated with reason: no case clause matching {riak_kv_multi_backend,undefined_backend,<<"memory">>} in riak_core_vnode:vnode_command/3 line 345
==> log/console.log <==
2014-12-10 16:35:35.800 [error] <0.1521.0> gen_fsm <0.1521.0> in state ready terminated with reason: no case clause matching {riak_kv_multi_backend,undefined_backend,<<"memory">>} in riak_core_vnode:vnode_command/3 line 345
==> log/error.log <==
2014-12-10 16:35:35.800 [error] <0.1521.0> CRASH REPORT Process <0.1521.0> with 10 neighbours exited with reason: no case clause matching {riak_kv_multi_backend,undefined_backend,<<"memory">>} in riak_core_vnode:vnode_command/3 line 345 in gen_fsm:terminate/7 line 622
==> log/console.log <==
2014-12-10 16:35:35.800 [error] <0.1521.0> CRASH REPORT Process <0.1521.0> with 10 neighbours exited with reason: no case clause matching {riak_kv_multi_backend,undefined_backend,<<"memory">>} in riak_core_vnode:vnode_command/3 line 345 in gen_fsm:terminate/7 line 622
==> log/error.log <==
2014-12-10 16:35:35.802 [error] <0.1522.0> Supervisor {<0.1522.0>,poolboy_sup} had child riak_core_vnode_worker started with riak_core_vnode_worker:start_link([{worker_module,riak_core_vnode_worker},{worker_args,[936274486415109681974235595958868809467081785344,...]},...]) at undefined exit with reason no case clause matching {riak_kv_multi_backend,undefined_backend,<<"memory">>} in riak_core_vnode:vnode_command/3 line 345 in context shutdown_error
==> log/console.log <==
2014-12-10 16:35:35.802 [error] <0.1522.0> Supervisor {<0.1522.0>,poolboy_sup} had child riak_core_vnode_worker started with riak_core_vnode_worker:start_link([{worker_module,riak_core_vnode_worker},{worker_args,[936274486415109681974235595958868809467081785344,...]},...]) at undefined exit with reason no case clause matching {riak_kv_multi_backend,undefined_backend,<<"memory">>} in riak_core_vnode:vnode_command/3 line 345 in context shutdown_error
==> log/error.log <==
2014-12-10 16:35:35.802 [error] <0.1522.0> gen_server <0.1522.0> terminated with reason: no case clause matching {riak_kv_multi_backend,undefined_backend,<<"memory">>} in riak_core_vnode:vnode_command/3 line 345
==> log/console.log <==
2014-12-10 16:35:35.802 [error] <0.1522.0> gen_server <0.1522.0> terminated with reason: no case clause matching {riak_kv_multi_backend,undefined_backend,<<"memory">>} in riak_core_vnode:vnode_command/3 line 345
==> log/error.log <==
2014-12-10 16:35:35.803 [error] <0.1522.0> CRASH REPORT Process <0.1522.0> with 0 neighbours exited with reason: no case clause matching {riak_kv_multi_backend,undefined_backend,<<"memory">>} in riak_core_vnode:vnode_command/3 line 345 in gen_server:terminate/6 line 744
==> log/console.log <==
2014-12-10 16:35:35.803 [error] <0.1522.0> CRASH REPORT Process <0.1522.0> with 0 neighbours exited with reason: no case clause matching {riak_kv_multi_backend,undefined_backend,<<"memory">>} in riak_core_vnode:vnode_command/3 line 345 in gen_server:terminate/6 line 744
@paigeadele thanks for that information. I believe that Riak's multi backend isn't set quite right. If your intent is to have two backends, one called bitcask_mult and the other leveldb_mult, this is the necessary configuration:
storage_backend = multi
multi_backend.bitcask_mult.storage_backend = bitcask
multi_backend.bitcask_mult.bitcask.data_root = ../data/riak/bitcask_mult
multi_backend.leveldb_mult.storage_backend = leveldb
multi_backend.leveldb_mult.leveldb.data_root = ../data/riak/leveldb_mult
multi_backend.default = bitcask_mult
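A third, in-memory backend would follow the same pattern; for example, adding one more entry alongside the two above (a sketch; the name memory_mult is illustrative, any unique name works as long as bucket types reference it exactly):

```
multi_backend.memory_mult.storage_backend = memory
```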
Once you've changed the above, restart Riak using riak stop / riak start. After that, please run the following curl command to get the properties of the bucket you're using in your failing test; replace SessionBucketName with the actual name of the bucket:
curl -vvv -4 localhost:8098/types/Sessions/buckets/SessionBucketName/props
The above should return JSON. I'm interested in the backend property.
Thanks!
Yeah, actually my intent is to have three backends, and before your comment I noticed that in my config file I was setting
multi_backend.leveldb_mult.storage_backend = memory
which I changed to:
multi_backend.memory_mult.storage_backend = memory
but this did not fix my problem. Am I understanding correctly that I can only set two backends?
You can set as many backends as you'd like. Since you've restarted Riak, could you run riak-debug and make the generated archive available via Dropbox or some other service? That will collect a lot of useful information for me to figure out what's going on.
After that, could you please run the curl command I provided above, as well as the following command?
riak-admin bucket-type status Sessions
Thanks! We'll get to the bottom of this. The Corrugated Iron integration test suite covers bucket types, so this must be a configuration issue somewhere.
Here's the debug archive: https://drive.google.com/file/d/0B2C9v2OBz4a4U0RJRkV5OXZVUEk/view?usp=sharing
Give me a second to run curl; I'm gonna go out for a minute, I'll do it when I get back.
laptop riak # curl -vvv -4 localhost:8098/types/Sessions/buckets/Sessions_Test/props
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8098 (#0)
> GET /types/Sessions/buckets/Sessions_Test/props HTTP/1.1
> User-Agent: curl/7.39.0
> Host: localhost:8098
> Accept: */*
>
< HTTP/1.1 200 OK
< Vary: Accept-Encoding
< Server: MochiWeb/1.1 WebMachine/1.10.5 (jokes are better explained)
< Date: Wed, 10 Dec 2014 17:18:06 GMT
< Content-Type: application/json
< Content-Length: 510
<
* Connection #0 to host localhost left intact
{"props":{"name":"Sessions_Test","young_vclock":20,"w":"quorum","small_vclock":50,"rw":"quorum","r":"quorum","pw":0,"precommit":[],"pr":0,"postcommit":[],"old_vclock":86400,"notfound_ok":true,"n_val":3,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"last_write_wins":false,"dw":"quorum","dvv_enabled":true,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"big_vclock":50,"basic_quorum":false,"backend":"memory","allow_mult":true,"active":true,"claimant":"riak@127.0.0.1"}}laptop riak #
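As an aside, the backend property can be pulled straight out of a response like that. A minimal sketch, with the props JSON trimmed to the relevant fields and grep/cut standing in for a real JSON parser such as jq:

```shell
# Props JSON as returned by the curl call, trimmed to the fields
# relevant to this issue (name, backend).
cat > props.json <<'EOF'
{"props":{"name":"Sessions_Test","n_val":3,"backend":"memory","active":true}}
EOF

# Extract the backend property. This name is what Riak looks up among
# the configured multi_backend.* entries, so it must match one exactly.
grep -o '"backend":"[^"]*"' props.json | cut -d'"' -f4
# -> memory
```

The extracted value, memory, is exactly the name the crash report complains about ({riak_kv_multi_backend,undefined_backend,&lt;&lt;"memory"&gt;&gt;}).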
OK, it looks like Sessions/Sessions_Test is configured to use the backend named memory, while your Riak configuration has that backend named memory_mult. Let's check whether this is a configuration issue at the bucket type level; please run the following and provide the output:
riak-admin bucket-type status Sessions
ahh I think I see what you mean
laptop riak # bin/riak-admin bucket-type status Sessions
Sessions is active
young_vclock: 20
w: quorum
small_vclock: 50
rw: quorum
r: quorum
pw: 0
precommit: []
pr: 0
postcommit: []
old_vclock: 86400
notfound_ok: true
n_val: 3
linkfun: {modfun,riak_kv_wm_link_walker,mapreduce_linkfun}
last_write_wins: false
dw: quorum
dvv_enabled: true
chash_keyfun: {riak_core_util,chash_std_keyfun}
big_vclock: 50
basic_quorum: false
backend: <<"memory">>
allow_mult: true
active: true
claimant: 'riak@127.0.0.1'
laptop riak #
I think all you'll have to do is update the bucket type to use the memory_mult backend:
riak-admin bucket-type update Sessions '{"props":{"backend":"memory_mult"}}'
You're absolutely correct, that fixed it :)
Thank you so much for your help! I think we can close this, though I'd like to keep a link to this ticket for troubleshooting purposes: close, but don't delete! I imagine somebody else could very easily make the same mistake xD
No problem, I'm glad it got sorted out. Please feel free to report ideas or issues with Corrugated Iron via GitHub. This issue will be around as long as GitHub is :smile: