Closed davidsf closed 9 years ago
You have no input for one of the events in the "eventNames" array: ["event1", "event2"]. Can you give examples of how you sent each event to the EventServer?
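For illustration, a minimal sketch of what "one event per name in eventNames" means. The event names, entity types, and IDs below are placeholders, and the payload shape is the usual EventServer /events.json body; the point is checking that no name in eventNames is left without at least one imported event:

```python
# Hypothetical sketch: build one event per name listed in engine.json's
# "eventNames" and verify none is missing before importing.
import json

event_names = ["event1", "event2"]  # placeholder names from engine.json

# Events shaped as they would be POSTed to the EventServer's /events.json.
events = [
    {"event": "event1", "entityType": "user", "entityId": "u1",
     "targetEntityType": "item", "targetEntityId": "i1"},
    {"event": "event2", "entityType": "user", "entityId": "u1",
     "targetEntityType": "item", "targetEntityId": "i2"},
]

# Training fails with no data for an event type, so check coverage first.
sent = {e["event"] for e in events}
missing = [name for name in event_names if name not in sent]
if missing:
    raise SystemExit(f"No events imported for: {missing}")
print(json.dumps(events[0]))
```

If `missing` is non-empty, importing that batch would reproduce exactly this error: an event type declared in eventNames with zero data behind it.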
BTW, unless this is a bug, could we have the discussion on the Google PredictionIO User group?
OK, I'll close the bug and share the information in the user group (I didn't know there was one, sorry).
Getting the same error. Any tips on how to resolve it?
@andreyz my problem was that I hadn't inserted events of one of the types listed in eventNames.
Also, it's better to write in the Google group for support: https://groups.google.com/forum/#!forum/predictionio-user
OK! Thank you
@davidsf @pferrel: where exactly do I need to specify these events? Still facing the same error with version 0.12.0. Kindly help. Thanks.
This repo is not used anymore
https://github.com/actionml/universal-recommender
It is in the template gallery on the PIO site too. The master branch works with pio 0.11.0; the 0.7.0-SNAPSHOT branch works with pio 0.12.0.
You have no data for the primary indicator. Can you share your engine.json and a sample event JSON? Maybe there is a spelling or config error.
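For reference, a hedged sketch of the engine.json fragment in question; the app name and event names are placeholders, and exact field placement may differ between template versions. In the Universal Recommender, the first name in eventNames is the primary indicator, so a typo there (or in the names used when importing events) produces exactly this "no data for the primary indicator" failure:

```json
{
  "algorithms": [
    {
      "name": "ur",
      "params": {
        "appName": "MyApp",
        "eventNames": ["event1", "event2"]
      }
    }
  ]
}
```

The names in eventNames must match, character for character, the "event" field of the events imported into the EventServer.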
I have imported around 2 million events (primary and secondary). The build step goes well, but running the training gives me the following error:
[INFO] [Engine$] EngineWorkflow.train
[INFO] [Engine$] DataSource: org.template.DataSource@15646798
[INFO] [Engine$] Preparator: org.template.Preparator@48e77aae
[INFO] [Engine$] AlgorithmList: List(org.template.URAlgorithm@b5a8e3c)
[INFO] [Engine$] Data sanity check is on.
[INFO] [Engine$] org.template.TrainingData does not support data
sanity check. Skipping check.
[INFO] [Engine$] org.template.PreparedData does not support data
sanity check. Skipping check.
[INFO] [URAlgorithm] Actions read now creating correlators
[ERROR] [Executor] Exception in task 0.0 in stage 33.0 (TID 25)
[WARN] [TaskSetManager] Lost task 0.0 in stage 33.0 (TID 25,
localhost): java.lang.NegativeArraySizeException
at org.apache.mahout.math.DenseVector.
[ERROR] [TaskSetManager] Task 0 in stage 33.0 failed 1 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job
aborted due to stage failure: Task 0 in stage 33.0 failed 1 times,
most recent failure: Lost task 0.0 in stage 33.0 (TID 25, localhost):
java.lang.NegativeArraySizeException
at org.apache.mahout.math.DenseVector.
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1822)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1942)
at org.apache.spark.rdd.RDD$$anonfun$reduce$1.apply(RDD.scala:1003)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
at org.apache.spark.rdd.RDD.reduce(RDD.scala:985)
at org.apache.mahout.sparkbindings.SparkEngine$.numNonZeroElementsPerColumn(SparkEngine.scala:86)
at org.apache.mahout.math.drm.CheckpointedOps.numNonZeroElementsPerColumn(CheckpointedOps.scala:37)
at org.apache.mahout.math.cf.SimilarityAnalysis$.sampleDownAndBinarize(SimilarityAnalysis.scala:286)
at org.apache.mahout.math.cf.SimilarityAnalysis$.cooccurrences(SimilarityAnalysis.scala:66)
at org.apache.mahout.math.cf.SimilarityAnalysis$.cooccurrencesIDSs(SimilarityAnalysis.scala:141)
at org.template.URAlgorithm.calcAll(URAlgorithm.scala:126)
at org.template.URAlgorithm.train(URAlgorithm.scala:100)
at org.template.URAlgorithm.train(URAlgorithm.scala:85)
at io.prediction.controller.P2LAlgorithm.trainBase(P2LAlgorithm.scala:46)
at io.prediction.controller.Engine$$anonfun$18.apply(Engine.scala:688)
at io.prediction.controller.Engine$$anonfun$18.apply(Engine.scala:688)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at io.prediction.controller.Engine$.train(Engine.scala:688)
at io.prediction.controller.Engine.train(Engine.scala:174)
at io.prediction.workflow.CoreWorkflow$.runTrain(CoreWorkflow.scala:65)
at io.prediction.workflow.CreateWorkflow$.main(CreateWorkflow.scala:247)
at io.prediction.workflow.CreateWorkflow.main(CreateWorkflow.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NegativeArraySizeException
at org.apache.mahout.math.DenseVector.
@pferrel: Cool, thanks, the issue is resolved.