geotrellis / geodocker-cluster

[NOT MAINTAINED] GeoDocker Cluster is a Docker environment with Apache Accumulo and Apache Spark.
https://github.com/geodocker/geodocker
Apache License 2.0

Exception when accessing GeoMesa data from GeoServer #52

Open ertanden opened 8 years ago

ertanden commented 8 years ago

I'm running geodocker-cluster on my local machine.

I have copied my Geomesa application.conf (containing sfts and converters) both on accumulo-master and accumulo-tserver containers.

Then I ingested some data with geomesa ingest, and I can successfully export it with geomesa export.

However, when I try to access the feature layer from GeoServer with OpenLayers, I get the following errors on the tserver.

Do you have any idea why that happens?

org.apache.thrift.TException: java.util.concurrent.ExecutionException: java.lang.NoClassDefFoundError: Could not initialize class org.locationtech.geomesa.utils.geotools.SimpleFeatureTypes$
    at org.apache.accumulo.server.rpc.RpcWrapper$1.invoke(RpcWrapper.java:81)
    at com.sun.proxy.$Proxy20.startMultiScan(Unknown Source)
    at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startMultiScan.getResult(TabletClientService.java:2330)
    at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startMultiScan.getResult(TabletClientService.java:2314)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at org.apache.accumulo.server.rpc.TimedProcessor.process(TimedProcessor.java:63)
    at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:516)
    at org.apache.accumulo.server.rpc.CustomNonBlockingServer$1.run(CustomNonBlockingServer.java:78)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
    at java.lang.Thread.run(Thread.java:745)
ertanden commented 8 years ago

Looks like the problem is that geomesa and geowave don't work well together! They use different Scala versions.

I removed geowave jar from $ACCUMULO_HOME/lib/ext and now geomesa works.
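
For reference, the workaround amounts to something like the following (the exact jar name and the use of the stop/start scripts are assumptions; list lib/ext first to see what is actually installed):

```shell
# See which jars are deployed side by side
ls "$ACCUMULO_HOME/lib/ext"

# Remove the GeoWave jar so its classes stop conflicting with GeoMesa's
# (jar name pattern is a placeholder -- match whatever ls showed above)
rm "$ACCUMULO_HOME"/lib/ext/geowave-*.jar

# Restart Accumulo so the tablet servers reload their classpath
"$ACCUMULO_HOME/bin/stop-all.sh" && "$ACCUMULO_HOME/bin/start-all.sh"
```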

On the gitter geomesa channel, they suggested this:

instead of putting everything in /lib/ext, you can use accumulo's isolated classpath functionality to set up geomesa and geowave in separate namespaces
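
The isolated-classpath approach they mention can be sketched with Accumulo's per-namespace classpath contexts. This is a sketch only: the shell credentials, HDFS host, and jar directories are assumptions, and the jars would first need to be copied to those HDFS paths.

```shell
accumulo shell -u root -p secret <<'EOF'
# Define two VFS classpath contexts pointing at separate jar directories
config -s general.vfs.context.classpath.geomesa=hdfs://accumulo-master:9000/accumulo/classpath/geomesa/[^.].*.jar
config -s general.vfs.context.classpath.geowave=hdfs://accumulo-master:9000/accumulo/classpath/geowave/[^.].*.jar

# Bind each project's tables to its own context via a namespace,
# so iterators load from isolated classloaders instead of lib/ext
createnamespace geomesa
config -ns geomesa -s table.classpath.context=geomesa
createnamespace geowave
config -ns geowave -s table.classpath.context=geowave
EOF
```

With this in place, lib/ext stays empty of project jars and each namespace resolves only its own dependencies, which avoids the Scala version clash.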

pomadchin commented 8 years ago

@ertanden good point that you noticed some conflicts in Accumulo iterator usage, thanks! That was an experiment to put everything in one namespace to investigate geomesa and geowave compatibility; can you share more details here so we can reproduce your bug?

@moradology Can you also have a look at this thread? Perhaps you have had some experience with similar GeoServer issues?

ertanden commented 8 years ago

To reproduce, just ingest some data with geomesa ingest and then try to export it with geomesa export, but be sure to include a CQL query parameter like -q "elevation > 10" in the export command.

You will get the errors on the tserver and will not be able to export any data.
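
Concretely, the reproduction looks something like this (the user, password, catalog, feature type, converter, and attribute names are all placeholders):

```shell
# Ingest sample data; the sft and converter definitions come from the
# application.conf copied to the accumulo-master and accumulo-tserver containers
geomesa ingest -u root -p secret -c myCatalog -s mySft -C myConverter data.csv

# Export with a CQL filter -- this is the step that triggers the tserver error
geomesa export -u root -p secret -c myCatalog -f mySft -q "elevation > 10"
```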

pomadchin commented 8 years ago

@ertanden great, thanks! You may join the GeoTrellis gitter channel for real-time communication.

spereirag commented 8 years ago

I tried ingesting custom data with geomesa and ran into problems too.

I got:

Creating schema CDRVoice-csv
Running ingestion in distributed mode
Submitting job - please wait...
Exception in thread "main" java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:379)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:597)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
    at org.locationtech.geomesa.tools.accumulo.ingest.AbstractIngestJob.run(AbstractIngestJob.scala:66)
    at org.locationtech.geomesa.tools.accumulo.ingest.ConverterIngest.runDistributedJob(ConverterIngest.scala:62)
    at org.locationtech.geomesa.tools.accumulo.ingest.AbstractIngest.runDistributed(AbstractIngest.scala:176)
    at org.locationtech.geomesa.tools.accumulo.ingest.AbstractIngest.run(AbstractIngest.scala:89)
    at org.locationtech.geomesa.tools.accumulo.commands.IngestCommand.execute(IngestCommand.scala:63)
    at org.locationtech.geomesa.tools.common.Runner$class.main(Runner.scala:26)
    at org.locationtech.geomesa.tools.accumulo.AccumuloRunner$.main(AccumuloRunner.scala:15)
    at org.locationtech.geomesa.tools.accumulo.AccumuloRunner.main(AccumuloRunner.scala)

That too was solved by removing the geowave jar from $ACCUMULO_HOME/lib/ext.