We will need to set up a cluster for Bigtop verification of glusterfs-hadoop. This will not run upstream, but we will make the results available to the upstream community. We have servers to do this. The internal tests should:
pull jars from upstream releases.
copy them onto the local cluster nodes (just like our slaves in EC2 do)
run Mahout, Pig, Hive, MapReduce, and other similar tests using the new simplified Bigtop smoke infrastructure from https://issues.apache.org/jira/browse/BIGTOP-1222, which is Gradle-based (or maybe, to start, just use the Maven-based smoke runner wrapped in a bash script; a rough sketch follows below).
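A minimal sketch of what that bash wrapper might look like. Everything specific here is an assumption for illustration only: the release URL, jar version, node list, Hadoop lib directory, Bigtop checkout path, and the Maven module names would all need to be replaced with the real internal values.

```bash
#!/usr/bin/env bash
# Sketch of the internal smoke-test driver described above.
# All URLs, paths, versions, and module names are placeholders.
set -euo pipefail

GLUSTERFS_HADOOP_VERSION="${GLUSTERFS_HADOOP_VERSION:-2.1.6}"          # assumed version
JAR_URL="https://example.org/releases/glusterfs-hadoop-${GLUSTERFS_HADOOP_VERSION}.jar"  # placeholder URL
HADOOP_LIB_DIR="${HADOOP_LIB_DIR:-/usr/lib/hadoop/lib}"                # assumed Hadoop lib dir on each node
CLUSTER_NODES="${CLUSTER_NODES:-node1 node2 node3}"                    # placeholder node list
BIGTOP_SRC="${BIGTOP_SRC:-$HOME/bigtop}"                               # assumed Bigtop checkout

# 1. Pull the glusterfs-hadoop jar from the upstream release.
workdir="$(mktemp -d)"
curl -fsSL -o "${workdir}/glusterfs-hadoop.jar" "${JAR_URL}"

# 2. Copy the jar onto each node's Hadoop classpath (as the EC2 slaves do).
for node in ${CLUSTER_NODES}; do
    scp "${workdir}/glusterfs-hadoop.jar" "root@${node}:${HADOOP_LIB_DIR}/"
done

# 3. Run the Maven-based smoke suites (Pig, Hive, Mahout, MapReduce, ...).
#    Module names and layout depend on the Bigtop version checked out.
cd "${BIGTOP_SRC}/bigtop-tests/test-execution/smokes"
mvn clean verify -fae -pl pig,hive,mahout,mapreduce
```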