ertanden opened this issue 8 years ago
Looks like the problem is that GeoMesa and GeoWave don't work well with each other: they are built against different Scala versions.
I removed the GeoWave jar from `$ACCUMULO_HOME/lib/ext` and now GeoMesa works.
On the gitter geomesa channel, they suggested this:
> Instead of putting everything in `/lib/ext`, you can use Accumulo's isolated classpath functionality to set up GeoMesa and GeoWave in separate namespaces.
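For anyone following along, a rough sketch of that approach using Accumulo's per-context classloaders; the context names, HDFS paths, and namespace names below are made up for illustration:

```
# In the Accumulo shell: define one classpath context per toolkit,
# then pin each namespace's tables to its own context.
config -s general.vfs.context.classpath.geomesa=hdfs://namenode:8020/accumulo/classpath/geomesa/.*.jar
config -s general.vfs.context.classpath.geowave=hdfs://namenode:8020/accumulo/classpath/geowave/.*.jar
createnamespace geomesa
createnamespace geowave
config -ns geomesa -s table.classpath.context=geomesa
config -ns geowave -s table.classpath.context=geowave
```

With that in place, the two sets of iterators never share a classloader, so their Scala and Guava versions can't collide.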
@ertanden good catch noticing the conflicts in Accumulo iterator usage, thanks! Putting everything in one namespace was an experiment to investigate GeoMesa and GeoWave compatibility; could you share more details here so we can reproduce your bug?
@moradology Could you also take a look at this thread? You've probably run into similar issues with GeoServer usage.
To reproduce: ingest some data with `geomesa ingest`, then try to export it with `geomesa export`, making sure to include a CQL query parameter like `-q "elevation > 10"` in the export command. You will get the errors on the tserver and will not be able to export any data; the commands I ran looked roughly like the ones below.
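Concretely, something like this (the connection parameters, catalog, feature type, and converter names are placeholders, not the exact values I used):

```
geomesa ingest -u root -p secret -i accumulo -z zookeeper \
  -c geomesa.catalog -s mysft -C myconverter data.csv

# The CQL filter is what triggers the tserver errors
geomesa export -u root -p secret -i accumulo -z zookeeper \
  -c geomesa.catalog -f mysft -q "elevation > 10"
```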
@ertanden thanks! You're welcome to join the GeoTrellis Gitter channel for real-time communication.
I tried ingesting custom data with geomesa and ran into problems too.
I got:
```
Creating schema CDRVoice-csv
Running ingestion in distributed mode
Submitting job - please wait...
Exception in thread "main" java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:379)
	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:597)
	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
	at org.locationtech.geomesa.tools.accumulo.ingest.AbstractIngestJob.run(AbstractIngestJob.scala:66)
	at org.locationtech.geomesa.tools.accumulo.ingest.ConverterIngest.runDistributedJob(ConverterIngest.scala:62)
	at org.locationtech.geomesa.tools.accumulo.ingest.AbstractIngest.runDistributed(AbstractIngest.scala:176)
	at org.locationtech.geomesa.tools.accumulo.ingest.AbstractIngest.run(AbstractIngest.scala:89)
	at org.locationtech.geomesa.tools.accumulo.commands.IngestCommand.execute(IngestCommand.scala:63)
	at org.locationtech.geomesa.tools.common.Runner$class.main(Runner.scala:26)
	at org.locationtech.geomesa.tools.accumulo.AccumuloRunner$.main(AccumuloRunner.scala:15)
	at org.locationtech.geomesa.tools.accumulo.AccumuloRunner.main(AccumuloRunner.scala)
```
That too was solved by removing the GeoWave jar from `$ACCUMULO_HOME/lib/ext`. The `IllegalAccessError` points to a Guava version conflict: Hadoop's `FileInputFormat` calls a `Stopwatch` constructor that newer Guava releases (apparently pulled in by the GeoWave jar) no longer expose.
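As a sanity check, you can see which jars in `lib/ext` bundle their own Guava; a quick sketch, assuming a standard single-node layout:

```
# Print any jar in lib/ext that ships com.google.common.base.Stopwatch
for jar in "$ACCUMULO_HOME"/lib/ext/*.jar; do
  unzip -l "$jar" | grep -q 'com/google/common/base/Stopwatch' && echo "$jar"
done
```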
I'm running geodocker-cluster on my local machine.
I have copied my GeoMesa application.conf (containing SFTs and converters) to both the accumulo-master and accumulo-tserver containers.
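For reference, I copied it in with commands along these lines; the container names match my geodocker-cluster setup, and the destination path is illustrative (it should be whatever conf directory is on the GeoMesa classpath in your containers):

```
# Copy the SFT/converter definitions into each node (path is a placeholder)
docker cp application.conf accumulo-master:/opt/geomesa/conf/application.conf
docker cp application.conf accumulo-tserver:/opt/geomesa/conf/application.conf
```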
Then I ingested some data with `geomesa ingest`, and I can successfully export with `geomesa export`. However, when I try to access the feature layer from GeoServer with OpenLayers, I get the following errors on the tserver.
Do you have any idea why that happens?