shuryhin-oleksandr opened 3 years ago
Did you get a full working solution for this? I've been overcoming multiple errors trying to get this to work with s3 and django-rest-framework (finally landing on this one), but am still running into some difficulties. Did you end up having to make additional changes other than just adding this path method?
No, I just upload my file directly from FE to AWS.
I had to override the entire upload function in the ChunkedUpload model by writing a custom model. In my case I was uploading to Google Cloud Storage, but it should work for S3 as well. However, we can all agree that this package should be considered somewhat legacy, as it hasn't been maintained in years.
```python
# Assumed imports for this snippet; generate_chunked_filename,
# rename_blob, and _settings come from the surrounding project.
from django.db import models, transaction
from django.core.files.uploadedfile import UploadedFile
from django.utils import timezone
from django.utils.translation import gettext_lazy as _


class ChunkedUploadFile(ChunkedUpload):
    file = models.FileField(verbose_name=_('file'),
                            max_length=255,
                            upload_to=generate_chunked_filename,
                            null=True)

    def allowed_owners(self):
        return super().allowed_owners()

    def allowed_owner(self, owner_type, owner_id=None, msg=None):
        return super().allowed_owner(owner_type, owner_id, msg)

    def append_chunk(self, chunk, chunk_size=None, save=True):
        gcs_file = self.file
        gcs_file.close()
        # Change this if uploading to DigitalOcean or a bare-metal server.
        gcs_file.open(mode='w')
        for subchunk in chunk.chunks():
            gcs_file.write(subchunk)
        if chunk_size is not None:
            self.offset += chunk_size
        elif hasattr(chunk, 'size'):
            self.offset += chunk.size
        else:
            self.offset = self.file.size
        # Clear any cached checksum.
        self._checksum = None
        if save:
            self.save()
        self.file.close()

    def get_uploaded_file(self):
        gcs_file = self.file
        gcs_file.close()
        gcs_file.open(mode='rb')
        return UploadedFile(file=self.file,
                            name=self.filename,
                            size=self.file.size)

    @transaction.atomic
    def completed(self, completed_at=None, ext=_settings.COMPLETE_EXT):
        # Don't use timezone.now() as a default argument: it would be
        # evaluated once at import time, not on each call.
        if completed_at is None:
            completed_at = timezone.now()
        if ext != _settings.INCOMPLETE_EXT:
            original_path = self.file.name
            # Remove the filename extension.
            formatted_file_path = original_path.split('.')[0]
            self.file.name = formatted_file_path + ext
        self.status = self.COMPLETE
        self.completed_at = completed_at
        self.save()
        if ext != _settings.INCOMPLETE_EXT:
            rename_blob(original_path, self.file.name)
            self.save()
```
I have the following code in my project:
settings.py:

```python
CHUNKED_UPLOAD_STORAGE_CLASS = 'sstraffic.storage_backends.PublicMediaStorage'
```
storage_backends.py:
I have an error:
I fixed it by adding a `path` method to my `PublicMediaStorage` class.
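A minimal sketch of what that workaround looks like. The `RemoteStorage` base class below is a hypothetical stand-in for django-storages' `S3Boto3Storage`, which raises `NotImplementedError` from `path()` because objects in S3 have no local filesystem path:

```python
class RemoteStorage:
    """Hypothetical stand-in for a remote backend such as
    S3Boto3Storage; remote objects have no local filesystem path."""

    def path(self, name):
        raise NotImplementedError(
            "This backend doesn't support absolute paths.")


class PublicMediaStorage(RemoteStorage):
    """Workaround from the thread: return the storage key as the
    'path' so callers that only use it as an identifier keep working."""

    def path(self, name):
        return name
```

Note that any caller which actually opens the returned value as a local file will still break, so this only helps code that treats the path as an opaque identifier.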
Isn't that too hacky? Won't I break something with this? It would be nice to have this working out of the box.