shoon closed this issue 10 years ago.
Can duplicate this problem on the environment listed above.
Morphia 1.4.0, Play 1.2.7, Java 7, and MongoDB 2.0.6 seem fine.
Going to run additional tests
Hmm... first, I didn't even know Morphia could work with Java 8. It uses Javassist to do bytecode enhancement, and AFAIK Javassist doesn't support Java 8.
Second, until I get your issue sorted out, a better approach is probably to use file system or S3 storage. Here is the configuration from my project:
#######################################################
# Morphia Storage Service Configuration
#######################################################
# set the storage services
# fs - file system
# s3 - amazon s3
# gfs - gridfs (this is always implicitly available, thus you can omit it)
morphia.storage=fs,s3
# set the default storage service
# a default storage service is used when
# a Blob typed field is not annotated with @play.modules.morphia.Storage annotation
# default value: gfs
morphia.storage.default=fs
%at.morphia.storage.default=s3
%prod.morphia.storage.default=s3
# set migrate data to true if you want to move your data
# from gridfs to external storage
# default value: false
morphia.storage.migrateData=false
#### file system storage settings ####
# set the service implementation
morphia.storage.fs.serviceImpl=org.osgl.storage.impl.FileSystemService
# set the root folder where the file is uploaded
morphia.storage.fs.home.dir=public/uploads
# set the URL root for file access
morphia.storage.fs.home.url=/public/uploads
#### Amazon S3 service settings ####
morphia.storage.s3.serviceImpl=play.modules.morphia.MorphiaS3Service
morphia.storage.s3.keyId=<KEY_ID>
morphia.storage.s3.keySecret=<SEC>
morphia.storage.s3.defStorageClass=rr
morphia.storage.s3.put.async=true
morphia.storage.s3.get.waive=true
morphia.storage.s3.bucket=<local-bucket>
%at.morphia.storage.s3.bucket=<at-bucket>
%prod.morphia.storage.s3.bucket=<prod-bucket>
# fetch only metadata, as we will use Amazon's static web endpoint to get the blob
morphia.storage.s3.get.MetaOnly=true
# this is used to fetch the blob directly from amazon's static service
morphia.storage.s3.staticWebEndpoint=s3-ap-southeast-2.amazonaws.com/<local-bucket>
%at.morphia.storage.s3.staticWebEndpoint=s3-ap-southeast-2.amazonaws.com/<at-bucket>
%prod.morphia.storage.s3.staticWebEndpoint=s3-ap-southeast-2.amazonaws.com/<prod-bucket>
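To show how the `morphia.storage.default` and `@play.modules.morphia.Storage` settings above interact, here is a minimal model sketch. The class and field names (`Attachment`, `thumbnail`, `original`) are hypothetical; the `Blob` type and `@Storage` annotation are the ones referenced in the configuration comments, assuming play-morphia's documented usage:

```java
import play.modules.morphia.Model;
import play.modules.morphia.Blob;
import play.modules.morphia.Storage;

// Hypothetical entity illustrating per-field storage routing
public class Attachment extends Model {
    public String name;

    // No @Storage annotation: stored via morphia.storage.default
    // (fs in dev here, s3 under the %at and %prod profiles)
    public Blob thumbnail;

    // Explicitly routed to the "s3" service configured above
    @Storage("s3")
    public Blob original;
}
```

With this layout, only the annotated field bypasses the default service; everything else follows the active profile's `morphia.storage.default`.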
Ok, I will give that a shot. We're using Java 8 with the latest in the Play1 1.3.x branch, which has a newer Javassist:
http://play.lighthouseapp.com/projects/57987/tickets/1800-support-java-8
Ah, good to know that!
Sorry, this is somewhat off-topic... when I set migrateData, it seems to start transferring data from gfs to fs but stops after about 5MB. (Edit: looks like that was a new incoming file, so it doesn't look like the migration is happening.)
morphia.storage.migrateData=true
Is there anything to check or is this not recommended?
(Edit 2: Ok, the data migration appears to happen on demand when a blob is accessed.)
It might be buggy. I only used it once, to migrate data out from GridFS to S3.
Going to mark this as closed. After migrating away from GridFS in favor of fs, this problem went away.
Morphia 1.5.0a, Java 8, Play 1.3 from git, mongo 2.7.1
I have a route that simply saves a single routed File into a play.modules.morphia.Blob;
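The route described above might look something like the following sketch. The controller and field names are hypothetical; the `Blob(InputStream, contentType)` constructor and `MimeTypes` helper are assumed from the play-morphia and Play 1 APIs:

```java
import java.io.File;
import java.io.FileInputStream;
import play.mvc.Controller;
import play.libs.MimeTypes;
import play.modules.morphia.Blob;

public class Uploads extends Controller {

    // Play 1 binds the multipart upload to the File parameter
    public static void upload(File data) throws Exception {
        Attachment a = new Attachment();           // hypothetical Model subclass
        a.content = new Blob(new FileInputStream(data),
                             MimeTypes.getContentType(data.getName()));
        a.save();                                  // blob is written to the storage service here
        renderText("ok");
    }
}
```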
When I upload 6 or so larger files (totaling about 400MB) in series into separate Morphia blobs, my CPUs peg at 100% until I kill the Play instance and restart. The files are successfully transferred, but it appears that many job threads are opened and they are reading large amounts of data from Mongo.
After some time, running
play status
shows this being called many, many times, and there are a ton of unnamed jobs queued up:
Mongotop shows activity on uploads.chunks even after the file transfer completes.
Mongo is blasting out data:
and for network:
My Play configuration for Morphia: