ripple / rippled-historical-database

SQL database as a canonical source of historical data

Getting org.apache.hadoop.hbase.TableNotFoundException #132

Closed njmurarka closed 7 years ago

njmurarka commented 7 years ago

Hi.

I just installed my own instance of HBase and am running thrift and rpc.

Am I missing some critical step? No matter what I do (whether I import data live from rippled or import historical data), I get errors saying tables are not found. What is supposed to create these tables?

Is this documented somewhere so I know what to do?

I am getting errors all over the place, like so:

```
message: 'org.apache.hadoop.hbase.TableNotFoundException: Table \'test_lu_ledgers_by_index\' was not found, got: hbase:namespace.
	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1300)
	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1181)
	at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:305)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
	at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
	at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
	at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
	at org.apache.hadoop.hbase.client.ClientScanner.(ClientScanner.java:162)
	at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:797)
	at org.apache.hadoop.hbase.thrift.ThriftServerRunner$HBaseHandler.scannerOpenWithScan(ThriftServerRunner.java:1482)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hbase.thrift.HbaseHandlerMetricsProxy.invoke(HbaseHandlerMetricsProxy.java:67)
	at com.sun.proxy.$Proxy9.scannerOpenWithScan(Unknown Source)
	at org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$scannerOpenWithScan.getResult(Hbase.java:4613)
	at org.apache.hadoop.hbase.thrift.generated.Hbase$Processor$scannerOpenWithScan.getResult(Hbase.java:4597)
	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
	at org.apache.hadoop.hbase.thrift.TBoundedThreadPoolServer$ClientConnnection.run(TBoundedThreadPoolServer.java:289)
	at org.apache.hadoop.hbase.thrift.CallQueue$Call.run(CallQueue.java:64)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
' }
```

scratch28 commented 7 years ago

Looks like you are trying to invoke a "test" table - is your configuration pointed at test? And have you created your tables in HBase? The Node code does not create them; they must already exist.

Bare minimum tables:

```
create 'prod_control', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'false'}
create 'prod_exchanges', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'false'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', BLOCKCACHE => 'true'}
create 'prod_ledgers', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
create 'prod_payments', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
create 'prod_transactions', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
create 'prod_account_balance_changes', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
create 'prod_lu_account_memos', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROWCOL', IN_MEMORY => 'true', BLOCKCACHE => 'true'}
create 'prod_lu_account_offers_by_sequence', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROWCOL', IN_MEMORY => 'true', BLOCKCACHE => 'true'}
create 'prod_lu_account_transactions', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROWCOL', IN_MEMORY => 'true', BLOCKCACHE => 'true'}
create 'prod_lu_affected_account_transactions', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROWCOL', IN_MEMORY => 'true', BLOCKCACHE => 'true'}
create 'prod_lu_ledgers_by_index', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROWCOL', IN_MEMORY => 'true', BLOCKCACHE => 'true'}
create 'prod_lu_ledgers_by_time', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROWCOL', IN_MEMORY => 'true', BLOCKCACHE => 'true'}
create 'prod_lu_transactions_by_time', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROWCOL', IN_MEMORY => 'true', BLOCKCACHE => 'true'}
```

Extra tables - not required to make it work, but you would have to comment out the code that uses them:

```
create 'prod_account_exchanges', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
create 'prod_account_offers', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
create 'prod_account_payments', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
create 'prod_accounts_created', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
create 'prod_memos', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
create 'prod_agg_exchanges', {NAME => 'd', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'f', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
create 'prod_agg_stats', {NAME => 'metric', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'type', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'result', COMPRESSION => 'SNAPPY', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
```
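Rather than typing each statement by hand, the list above can be generated and piped into the HBase shell. A minimal sketch (the `generate_tables.sh` filename is illustrative, and the column-family options are simplified to just the `d` and `f` families - add the `COMPRESSION`/`BLOOMFILTER` settings from the statements above as needed):

```shell
# PREFIX must match the table prefix in your import/api config files.
PREFIX="prod_"
TABLES="control exchanges ledgers payments transactions account_balance_changes \
lu_account_memos lu_account_offers_by_sequence lu_account_transactions \
lu_affected_account_transactions lu_ledgers_by_index lu_ledgers_by_time \
lu_transactions_by_time"

# Emit one HBase shell 'create' statement per required table.
STATEMENTS=$(for t in $TABLES; do
  echo "create '${PREFIX}${t}', {NAME => 'd'}, {NAME => 'f'}"
done)
echo "$STATEMENTS"

# Usage against a running HBase instance:
#   sh generate_tables.sh | hbase shell
```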

shekenahglory commented 7 years ago

There is a Node.js script you can run to create all the necessary tables: https://github.com/ripple/rippled-historical-database/blob/develop/import/createTables.js

njmurarka commented 7 years ago

I was asked whether my configuration is "pointed" to test. I'm not sure what that means. Could you clarify?

The import.config.json and api.config.json files both use "test_" as the prefix in the hbase and hbase-rest sections, which was the default, so I left them that way. But that is about all that refers to "test".
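For anyone landing here: the prefix is prepended to every table name the service looks up (hence the `test_lu_ledgers_by_index` in the error above), so it must match however the tables were created. A sketch of the relevant fragment of the config, assuming a `prefix` field under the `hbase` section as in the repo's example configs (field name and values here are illustrative):

```json
{
  "hbase": {
    "prefix": "test_",
    "servers": [{ "host": "localhost", "port": 9090 }]
  }
}
```

With `"prefix": "test_"`, the tables must be named `test_ledgers`, `test_lu_ledgers_by_index`, and so on; with `"prefix": "prod_"`, they must carry the `prod_` names shown earlier in this thread.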

I am guessing the naming here is not the issue, and that a step was missing in terms of creating the actual tables.

mDuo13 commented 7 years ago

Yes, I think the installation docs have not been updated. They should instruct you to create the tables by running the script @shekenahglory mentioned.