ggssh opened this issue 1 year ago
The experiments used the Hadoop version. I think the Spark version can generate the same dataset if it works.

On May 29, 2023 at 20:19, Yizhe Yuan @.***> wrote:
```sh
DEFAULT_HADOOP_HOME=$HOME/hadoop-2.6.0 # change to your hadoop folder
DEFAULT_LDBC_SNB_DATAGEN_HOME=`pwd`    # change to your ldbc_snb_datagen folder

HADOOP_HOME=${HADOOP_HOME:-$DEFAULT_HADOOP_HOME}
LDBC_SNB_DATAGEN_HOME=${LDBC_SNB_DATAGEN_HOME:-$DEFAULT_LDBC_SNB_DATAGEN_HOME}

export HADOOP_HOME
export LDBC_SNB_DATAGEN_HOME
export HADOOP_CLIENT_OPTS="-Xmx64G"

echo ===============================================================================
echo Running generator with the following parameters:
echo -------------------------------------------------------------------------------
echo LDBC_SNB_DATAGEN_HOME: $LDBC_SNB_DATAGEN_HOME
echo HADOOP_HOME: $HADOOP_HOME
echo HADOOP_CLIENT_OPTS: $HADOOP_CLIENT_OPTS
echo ===============================================================================

$HADOOP_HOME/bin/hadoop jar ldbc_snb_datagen-0.2.7-jar-with-dependencies.jar $LDBC_SNB_DATAGEN_HOME/params-foreign-key.ini

rm -f m*personFactors*
rm -f .m*personFactors*
rm -f m*activityFactors*
rm -f .m*activityFactors*
rm -f m0friendList*
rm -f .m0friendList*
```
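For context on the defaulting lines in the script above: `${VAR:-default}` substitutes the default only when `VAR` is unset or empty, so exporting `HADOOP_HOME` before running the script overrides the hard-coded path. A minimal sketch (the `/opt/hadoop` path is just an example):

```shell
#!/bin/sh
# ${VAR:-default}: use $VAR if it is set and non-empty, otherwise the default.
DEFAULT_HADOOP_HOME=$HOME/hadoop-2.6.0

unset HADOOP_HOME
HADOOP_HOME=${HADOOP_HOME:-$DEFAULT_HADOOP_HOME}
echo "$HADOOP_HOME"        # the default, e.g. /home/you/hadoop-2.6.0

HADOOP_HOME=/opt/hadoop    # a pre-set value wins over the default
HADOOP_HOME=${HADOOP_HOME:-$DEFAULT_HADOOP_HOME}
echo "$HADOOP_HOME"        # /opt/hadoop
```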
The version currently used in run.sh is the Hadoop one. How should I modify it to use the Spark version?