
Explore storing continuous ingest bulk import files in S3 #94

Open keith-turner opened 5 years ago

keith-turner commented 5 years ago

When running the bulk import continuous ingest test, it can take a while to generate a good bit of data before testing can start. Not sure, but it may be faster to generate a data set once and store it in S3. Then future tests could possibly reuse that data set.

I think it would be interesting to experiment with this and, if it works well, add documentation to the bulk import test docs explaining how to do it. One gotcha with this approach is that anyone running a test needs to use split points consistent with the stored data. A simple way to address this would be to store a file of split points in S3 alongside the data, as sketched below.
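
A minimal sketch of how a test run might pull that splits file down and apply it before bulk importing. The bucket layout (continuous-1000/splits.txt) and the table name ci are assumptions, and the accumulo shell addsplits -sf option may differ across versions:

# assumes the data set was uploaded as s3a://$AWS_BUCKET/continuous-1000 and the table is named ci
# pull the splits file out of S3 onto the local filesystem
hadoop fs -Dfs.s3a.access.key=$AWS_KEY -Dfs.s3a.secret.key=$AWS_SECRET \
  -copyToLocal s3a://$AWS_BUCKET/continuous-1000/splits.txt /tmp/splits.txt

# pre-split the table with the same split points used when the data was generated
# (-sf reads split points from a file; check the options for your Accumulo version)
accumulo shell -u root -e "addsplits -t ci -sf /tmp/splits.txt"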

keith-turner commented 5 years ago

I suspect the procedure for this would be based on the S3A support described here:

https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#S3A
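
Once the hadoop-aws tools are on the classpath (see the notes below), a quick sanity check of S3A access is a simple listing; $AWS_BUCKET is a placeholder for whatever bucket is used:

# confirm S3A is wired up before trying to copy anything
hadoop fs -Dfs.s3a.access.key=$AWS_KEY -Dfs.s3a.secret.key=$AWS_SECRET \
  -ls s3a://$AWS_BUCKET/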

keith-turner commented 5 years ago

Current bulk import test docs: bulk-test.md

keith-turner commented 5 years ago

Below are some notes from copying bulk data from HDFS to S3.

# bulk files were generated into the /tmp/bt dir in HDFS

# prep the directory before distcp; assuming all splits files are the same, just keep one
hadoop fs -mv /tmp/bt/1/splits.txt /tmp/bt
hadoop fs -rm /tmp/bt/*/splits.txt
hadoop fs -rm /tmp/bt/*/files/_SUCCESS

# get the S3 libs on the local hadoop classpath
# edit following file and set : export HADOOP_OPTIONAL_TOOLS="hadoop-aws"
vim $HADOOP_HOME/etc/hadoop/hadoop-env.sh 

# The remote map reduce jobs will need the S3 jars on the classpath; define the following for this.
# The jar versions may need to change for your version of Hadoop.
export LIBJARS=$HADOOP_HOME/share/hadoop/tools/lib/aws-java-sdk-bundle-1.11.375.jar,$HADOOP_HOME/share/hadoop/tools/lib/hadoop-aws-3.2.0.jar

# the following command will distcp the files to the bucket
hadoop distcp -libjars ${LIBJARS} -Dfs.s3a.access.key=$AWS_KEY -Dfs.s3a.secret.key=$AWS_SECRET hdfs://leader1:8020/tmp/bt s3a://$AWS_BUCKET/continuous-1000
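
For a later test run, the reverse copy is essentially the same distcp with source and destination swapped. A minimal sketch, assuming the same bucket layout and that the directory structure expected by the bulk test (see bulk-test.md) is unchanged:

# copy the stored data set back from S3 into HDFS for a new test run
hadoop distcp -libjars ${LIBJARS} -Dfs.s3a.access.key=$AWS_KEY -Dfs.s3a.secret.key=$AWS_SECRET \
  s3a://$AWS_BUCKET/continuous-1000 hdfs://leader1:8020/tmp/bt

# then pre-split the table from the stored splits.txt (see the sketch above) and
# run the bulk import step from bulk-test.md against the copied directories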