Dani3lSun / apex-plugin-dropzone

Oracle APEX Region Plugin - Dropzone
MIT License

Is it possible to transfer data as blob instead of clob and also optional without chunking? #7

Closed ogobrecht closed 8 years ago

ogobrecht commented 8 years ago

Hi Daniel,

first of all: thank you very much for this great plugin (and the APEX 4.2 implementation). The plugin works as expected, but we have performance problems because of two things:

  1. apex_web_service.clobbase642blob takes a long time: around 50 seconds for 5 MB without CLOB caching and around 10 seconds with caching
  2. dbms_lob.append takes some time: around 3 seconds without caching and 300 milliseconds with caching

We are using your latest version 1.9.2 for APEX 4.2. Now my questions:

  1. I can see in your file apexdropzone.js that you have a function binaryArray2base64, so I think you do the conversion to CLOB yourself. Is there a chance to transfer the data to the DB directly as a BLOB, to avoid the time-consuming conversion back to BLOB in PL/SQL? (See the sketch after this list for what our processing looks like today.)
  2. If I understand the issues and release notes correctly, could you make the split into chunks of 32k size optional? We use APEX Listener 2.x and Tomcat at the moment, and it would be nice to also save the PL/SQL processing time for putting the chunks together, if this is technically possible with our setup.
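
For context, here is roughly the pattern we use on the DB side today (only a sketch, not the plugin's actual code; I assume the base64 payload arrives in the apex_application.g_f01 30k array):

DECLARE
  l_clob CLOB;
  l_blob BLOB;
BEGIN
  -- assemble the base64 CLOB from the 30k chunks
  dbms_lob.createtemporary(l_clob, TRUE, dbms_lob.session);
  FOR i IN 1 .. apex_application.g_f01.count
  LOOP
    IF length(apex_application.g_f01(i)) > 0
    THEN
      dbms_lob.writeappend(l_clob
                          ,length(apex_application.g_f01(i))
                          ,apex_application.g_f01(i));
    END IF;
  END LOOP;
  -- the slow step: decode the whole base64 CLOB into a BLOB at once
  l_blob := apex_web_service.clobbase642blob(l_clob);
  dbms_lob.freetemporary(l_clob);
END;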

Thank you in advance and best regards Ottmar

Dani3lSun commented 8 years ago

Hi @ogobrecht

I now did a few tests on my XE database with default settings, and indeed there are performance issues with apex_web_service.clobbase642blob. Here are my test results (with a 7MB file):

JavaScript:

  apexDropzone.clob2Array => ~0.1 sec
  apexDropzone.binaryArray2base64 => ~0.1 sec

PL/SQL:

  Loop over the 30k array and dbms_lob.writeappend => 0.3 sec
  apex_web_service.clobbase642blob => 4.5 sec

The only bottleneck I discovered was apex_web_service.clobbase642blob; all the other functions were within an acceptable threshold...

Unfortunately the apex.server.plugin function can only use parameters from wwv_flow.show, and this procedure doesn't include a BLOB parameter; the only parameter which could be used is p_clob_01. In the first version of this plugin I already used this parameter, but it only worked well with ORDS and I had problems with the OHS web server (30k limit on parameters)...

Nevertheless, if using p_clob_01 we still have to convert the file into base64, so we have a string instead of a binary object. Maybe you know another way to convert it into a string?
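
Just to illustrate what I mean by the p_clob_01 route (only a sketch; I assume the payload would arrive base64-encoded in apex_application.g_clob_01):

DECLARE
  l_blob BLOB;
BEGIN
  -- even on the p_clob_01 route the payload is a base64 string,
  -- so the expensive decode step stays the same
  l_blob := apex_web_service.clobbase642blob(apex_application.g_clob_01);
END;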

ogobrecht commented 8 years ago

Hi Daniel,

thank you for your fast answer :-)

No, at the moment I have no other idea, and I understand why you use the CLOB way to transfer the data to the DB. Let me think about it a little, and if I find a solution I will let you know.

Best regards Ottmar

ogobrecht commented 8 years ago

Hi Daniel,

I think I have the solution: the key is to do the conversion to BLOB on the chunks. I am currently sitting in my home office with a slow connection to the office, so the upload itself takes much more time than usual, but I tracked the processing time for the chunk loop on the DB: around 40 milliseconds for 1 MB instead of around 2 seconds with apex_web_service.clobbase642blob. Your instance seems to be faster overall, but please verify whether you can also improve your processing speed.

My code follows. Please have a look at the loop. There are two logger calls; please change these if you don't have Logger installed:

DECLARE
  --
  l_collection_name VARCHAR2(100);
  l_blob            BLOB;
  l_filename        VARCHAR2(200);
  l_mime_type       VARCHAR2(100);
  l_token           VARCHAR2(32000);
  l_random_file_id  NUMBER;
  --
BEGIN
  -- get defaults from AJAX Process
  -- filename
  IF apex_application.g_x01 = 'clipboard0.png'
  THEN
    apex_application.g_x01 := 'Screenshot_' || v('APP_USER') || '_' ||
                              to_char(SYSDATE, 'YYYY_MM_DD_HH24_MI_SS') ||
                              '.png';
  END IF;
  l_filename := apex_application.g_x01;
  -- mimetype
  l_mime_type := nvl(apex_application.g_x02, 'application/octet-stream');
  -- random file id
  l_random_file_id := round(dbms_random.value(100000, 99999999));
  -- build BLOB from f01 30k Array
  dbms_lob.createtemporary(l_blob, TRUE, dbms_lob.session);
  logger.log(p_text => 'Start BLOB conversion', p_scope => 'upload_test');
  FOR i IN 1 .. apex_application.g_f01.count
  LOOP
    l_token := apex_application.g_f01(i);
    IF length(l_token) > 0
    THEN
      -- varchar chunk to blob idea taken from
      -- https://community.oracle.com/thread/495548?start=0&tstart=0
      -- per-chunk decoding works because the 30k chunk length is a
      -- multiple of 4, so each chunk is valid base64 on its own
      dbms_lob.append(l_blob
                     ,to_blob(utl_encode.base64_decode(utl_raw.cast_to_raw(l_token))));
    END IF;
  END LOOP;
  logger.log(p_text => 'End BLOB conversion', p_scope => 'upload_test');
  -- create own collection (here starts custom part (for example a Insert statement))
  -- collection name
  l_collection_name := 'DROPZONE_UPLOAD';
  -- check if collection exist
  IF NOT
      apex_collection.collection_exists(p_collection_name => l_collection_name)
  THEN
    apex_collection.create_collection(l_collection_name);
  END IF;
  -- add collection member (only if BLOB is not empty)
  IF dbms_lob.getlength(l_blob) > 0
  THEN
    apex_collection.add_member(p_collection_name => l_collection_name
                              ,p_c001            => l_filename -- filename
                              ,p_c002            => l_mime_type -- mime_type
                              ,p_d001            => SYSDATE -- date created
                              ,p_n001            => l_random_file_id -- random file id
                               -- process specific information, not relevant here in the example
                               --,p_n002            => :p3_card_id
                              ,p_blob001 => l_blob); -- BLOB file content
  END IF;
  -- free the temporary LOB; apex_collection.add_member has copied the content
  dbms_lob.freetemporary(l_blob);
  --
END;

I have to do more tests in the office tomorrow, and I will let you know if everything is fine.

Best regards Ottmar

Dani3lSun commented 8 years ago

Hi @ogobrecht

great idea! I tested your solution with different-sized files and the performance issues are gone!

My tracked times:

  7MB file: 0.3 sec for conversion
  11MB file: 0.4 sec for conversion

Completely acceptable times! :) I will update the plugin tomorrow and pack a new release! Many thanks for your help!

Best wishes Daniel

ogobrecht commented 8 years ago

Hi Daniel,

great to hear that it is also working for you. It seems that in the end your issue with the OHS was the key to great performance: if you could have transferred the data as a single CLOB, you would have had to split it up afterwards in PL/SQL to get the same performance improvement ;-)

A last comment: please double-check the impact of the temporary LOB creation with and without caching (TRUE/FALSE) before you release a new version.
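
Something like this could be used to compare both variants (only a sketch; chunk size and iteration count are arbitrary):

DECLARE
  l_blob  BLOB;
  l_raw   RAW(32767) := utl_raw.cast_to_raw(rpad('x', 30000, 'x'));
  l_start NUMBER;
BEGIN
  -- cached temporary LOB (second parameter TRUE)
  dbms_lob.createtemporary(l_blob, TRUE, dbms_lob.session);
  l_start := dbms_utility.get_time;
  FOR i IN 1 .. 300
  LOOP
    dbms_lob.append(l_blob, to_blob(l_raw));
  END LOOP;
  dbms_output.put_line('cached: ' ||
                       (dbms_utility.get_time - l_start) / 100 || ' sec');
  dbms_lob.freetemporary(l_blob);
  -- uncached temporary LOB (second parameter FALSE)
  dbms_lob.createtemporary(l_blob, FALSE, dbms_lob.session);
  l_start := dbms_utility.get_time;
  FOR i IN 1 .. 300
  LOOP
    dbms_lob.append(l_blob, to_blob(l_raw));
  END LOOP;
  dbms_output.put_line('uncached: ' ||
                       (dbms_utility.get_time - l_start) / 100 || ' sec');
  dbms_lob.freetemporary(l_blob);
END;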

Many thanks for your fast response times ;-) and best regards Ottmar
