lbehnke / h2database

Automatically exported from code.google.com/p/h2database

Store database files in the Hadoop Distributed File System (HDFS) #168

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
Please add the ability to store the database files in the Hadoop Distributed 
File System (HDFS) http://hadoop.apache.org/hdfs/

For example, the JDBC connection URL could look like this:
jdbc:h2:hdfs:/user/database/test

Original issue reported on code.google.com by igor.y...@gmail.com on 24 Feb 2010 at 3:42

GoogleCodeExporter commented 9 years ago
It could be possible to create an interface so that users can write their own 
connectors for data storage.

Original comment by igor.y...@gmail.com on 26 Feb 2010 at 9:57
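A minimal sketch of what such a pluggable storage connector could look like. All names here are hypothetical illustrations, not H2's actual API; an in-memory backend stands in for a real HDFS connector:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical connector interface a database could program against,
// so that backends (local disk, HDFS, ...) become interchangeable.
interface StorageConnector {
    byte[] read(String path);
    void write(String path, byte[] data);
    boolean exists(String path);
}

// In-memory stand-in for a backend such as an HDFS connector.
class InMemoryConnector implements StorageConnector {
    private final Map<String, byte[]> files = new HashMap<>();

    public byte[] read(String path) { return files.get(path); }
    public void write(String path, byte[] data) { files.put(path, data.clone()); }
    public boolean exists(String path) { return files.containsKey(path); }
}

public class ConnectorDemo {
    public static void main(String[] args) {
        StorageConnector fs = new InMemoryConnector();
        fs.write("/user/database/test.h2.db", new byte[] {1, 2, 3});
        System.out.println(fs.exists("/user/database/test.h2.db")); // true
        System.out.println(fs.read("/user/database/test.h2.db").length); // 3
    }
}
```

The database engine would then resolve a URL prefix such as `hdfs:` to the matching connector implementation.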

GoogleCodeExporter commented 9 years ago
Hi, 

Why do you need this feature? What is your exact use case?

As far as I know, HDFS is append only:
http://hadoop.apache.org/common/docs/current/hdfs_design.html "HDFS supports
write-once-read-many semantics on files." H2 needs random access writes.

Regards,
Thomas

Original comment by thomas.t...@gmail.com on 26 Feb 2010 at 10:13
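To illustrate the distinction: a page-based database engine needs to seek back into an existing file and overwrite pages in place, which `java.io.RandomAccessFile` allows on a local file system but an append-only store does not. A minimal sketch:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class RandomWriteDemo {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("page-store", ".db");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.write(new byte[4096]);      // write an initial 4 KB "page"
            raf.seek(128);                  // seek back into the file...
            raf.write(42);                  // ...and overwrite a byte in place
            raf.seek(128);
            System.out.println(raf.read()); // prints 42
        }
    }
}
```

A write-once file system would have to rewrite the whole file to achieve the same effect.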

GoogleCodeExporter commented 9 years ago
Perhaps you can suggest an alternative distributed file system on which the 
database could be operated and maintained?

Original comment by igor.y...@gmail.com on 26 Feb 2010 at 12:45

GoogleCodeExporter commented 9 years ago
Why do you need this feature? What is your exact use case?

Original comment by thomas.t...@gmail.com on 26 Feb 2010 at 12:49

GoogleCodeExporter commented 9 years ago
We have an HDFS cluster of 15 servers. We searched for a long time for a 
database solution with full SQL support. We have a database of over 100 GB 
on which we must run time-consuming computations. For this, Hadoop was taken 
as the basis of our distributed computing system.

Data obtained from the database is processed and stored again in the database.

Original comment by igor.y...@gmail.com on 26 Feb 2010 at 4:01

GoogleCodeExporter commented 9 years ago
> We have a database of over 100gb which 
> must make time-consuming computations.

How many rows do you have in the database? Is it mainly large binary files, or 
is it a lot of small records? What kind of application is it?

Original comment by thomas.t...@gmail.com on 28 Feb 2010 at 10:33

GoogleCodeExporter commented 9 years ago
[deleted comment]
GoogleCodeExporter commented 9 years ago
At the moment our database has about 840 thousand records.

Initially we worked with a large number of small values in the form of VARCHAR 
and INT. But we also plan to perform tests with CLOB and BLOB records, as soon 
as the need arises.

Original comment by igor.y...@gmail.com on 1 Mar 2010 at 8:01

GoogleCodeExporter commented 9 years ago
I set the bug to "won't fix" because HDFS does not seem to support random 
access writes.

Original comment by thomas.t...@gmail.com on 21 Mar 2010 at 11:36