mobizt / Firebase-ESP32

[DEPRECATED]🔥 Firebase RTDB Arduino Library for ESP32. The complete, fast, secured and reliable Firebase Arduino client library that supports CRUD (create, read, update, delete) and Stream operations.
MIT License

HELP with Reading / Writing large records #123

Closed. albertlt closed this issue 3 years ago.

albertlt commented 3 years ago

Hello. First of all, great library, thank you very much !

I am trying to simulate a situation where I need to read and write 1000 log records, each holding a single int value in the format (id: "<1-1000>", value: "<0/1>"). Writing is not an issue, as I can always write in small batches in a loop. However, reading them back to the serial port produced a "data buffer overflow" message. I followed your "printResult" example to read the data using json.iteratorBegin() and json.iteratorEnd(). Is there a way to iterate through a large set of records efficiently?

mobizt commented 3 years ago

You may need to increase the HTTP response payload limit, which in your case already exceeds 4 kB.

This limit prevents a large server response payload from causing an out-of-memory issue.

Use the setResponseSize method of the Firebase Data object, e.g. to set the limit to 8192 bytes:

  firebaseData.setResponseSize(8192);

Iterating over a large JSON payload is not efficient. You should use a query to filter the get result instead, by adding a node that keeps the key (a number or text) as an index, so the result can be queried and sorted.

This example and this example show how to query the data based on an index.
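
For illustration, a rough sketch of the query-filter approach, following this library's query examples (the path "/logs", the field "id", and the ranges are placeholders):

  QueryFilter query;
  query.orderBy("id");      // requires ".indexOn": "id" in the database rules
  query.startAt(1);
  query.endAt(100);         // fetch only records 1..100 instead of all 1000
  query.limitToFirst(100);

  if (Firebase.getJSON(firebaseData, "/logs", query))
  {
    Serial.println(firebaseData.jsonString()); // only the filtered records
  }
  query.clear(); // release the query memory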

albertlt commented 3 years ago

Thank you for the reply. I still get a buffer overflow response with 8192. What is the maximum recommended response size for the ESP32?

Also, I found that if I call Firebase.setJSON multiple times, it only creates one record on the Firebase side. In the Firebase console I can see multiple records being created before they disappear, leaving only the last record. Example:

  FirebaseJson json;
  String path = "/logs/";
  String key, val;

  for (int i = 1; i <= 10; i++) {
    json.clear();
    key = "abcde" + String(i);
    val = "0";
    json.set(key, val);
    if (Firebase.setJSON(firebaseData, path, json)) {} // This will create only 1 record at the end
  }

  for (int i = 1; i <= 10; i++) {
    key = "abcde" + String(i);
    val = "0";
    json.add(key, val);
  }
  if (Firebase.setJSON(firebaseData, path, json)) {} // This will create 10 different records

Why is that?

mobizt commented 3 years ago

That's just an example value. The free heap is only about 200 kB, so it's up to you.

albertlt commented 3 years ago

I tried doubling the value to 16384 and now the device actually crashed:

  12:45:40.517 -> Backtrace: 0x4008cee8:0x3ffb17d0 0x4008d119:0x3ffb17f0 0x401345e7:0x3ffb1810 0x4013462e:0x3ffb1830 0x4013418f:0x3ffb1850 0x4013427e:0x3ffb1870 0x401bbe16:0x3ffb1890 0x401bbf56:0x3ffb18b0 0x401bc239:0x3ffb18f0 0x401bc292:0x3ffb1910 0x401bc2a5:0x3ffb1930 0x4013e0d5:0x3ffb1950 0x4013f77f:0x3ffb1970 0x401402d6:0x3ffb19f0 0x40140412:0x3ffb1a60 0x40139916:0x3ffb1aa0 0x4013cb11:0x3ffb1b10 0x4013d01c:0x3ffb1d70 0x4013d163:0x3ffb1d90 0x4013d521:0x3ffb1dd0 0x400d4386:0x3ffb1e70 0x400d1683:0x3ffb1eb0 0x400d16c6:0x3ffb1f90 0x400d8665:0x3ffb1fb0 0x400895fd:0x3ffb1fd0
  12:45:40.517 ->
  12:45:40.517 -> Rebooting...
  12:45:40.517 -> ets Jun 8 2016 00:22:57
  12:45:40.517 ->
  12:45:40.517 -> rst:0xc (SW_CPU_RESET),boot:0x17 (SPI_FAST_FLASH_BOOT)
  12:45:40.517 -> configsip: 0, SPIWP:0xee
  12:45:40.517 -> clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
  12:45:40.517 -> mode:DIO, clock div:2
  12:45:40.517 -> load:0x3fff0018,len:4
  12:45:40.517 -> load:0x3fff001c,len:1044
  12:45:40.517 -> load:0x40078000,len:8896
  12:45:40.517 -> load:0x40080400,len:5828
  12:45:40.517 -> entry 0x400806ac
  12:45:41.141 -> M5Stack initializing...OK

mobizt commented 3 years ago

Your device doesn't have enough memory to handle the payload.

You should test the functions and read the docs in the README first. The set function creates a new node or replaces the existing node. The push function creates a new node with a random key, and the update function updates the existing node or creates a new one if it does not exist.
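
For illustration, a minimal sketch of the difference (the paths "/logs" and "/logs/1" are just examples):

  FirebaseJson json;
  json.set("value", "0");

  // set: writes (or replaces) the node at the exact path you give
  Firebase.setJSON(firebaseData, "/logs/1", json);

  // push: creates a new child under /logs with a server-generated random key
  Firebase.pushJSON(firebaseData, "/logs", json);

  // update: merges the given children into /logs/1 without replacing its other children
  Firebase.updateNode(firebaseData, "/logs/1", json);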

mobizt commented 3 years ago

If you're new to Firebase RTDB, please play with the console and the simple JavaScript test code here for a better understanding.

To listen for value changes, you may need a stream, but it's not designed for large payloads like in your case.

You may need to design your database with the low memory limit, speed, and data bandwidth usage in mind.

albertlt commented 3 years ago

Yes, I am actually new to Firebase. I am more used to SQL lol. Anyway, I guess I need to restructure the database model. I need to be able to iterate through 1000 records max, so I am thinking of breaking the database into a 10 x 100 array and retrieving 100 records at a time until it finishes. Thank you for your great work.

albertlt commented 3 years ago

One more thing. I was going to monitor for data changes every minute. With Stream, how often does it poll the database? I am concerned that if it is too fast, it will use too much bandwidth when I have many devices connected to the database.

mobizt commented 3 years ago

A stream is a keep-alive HTTP connection in which the server pushes event data to the client when the data changes.

There are no further client requests as long as the client stays connected to the server.

In stream mode, the server pushes keep-alive data (a small JSON payload of less than 20 bytes) to the client regularly, at an interval of about 30 seconds or more.

This keep-alive data serves as a heartbeat check of the server connection status; the client (this library) uses it to determine the stream connection timeout.

Most data usage comes from your data changes under the streamed node. The server returns all payload data under the stream node once, when the stream connects.

Data usage is up to the user.
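
For reference, a minimal stream sketch following this library's stream-callback examples (the path "/logs/status", the function names, and the dedicated firebaseDataStream object are placeholders):

  void streamCallback(StreamData data)
  {
    Serial.println(data.dataPath());   // path of the node that changed
    Serial.println(data.stringData()); // changed value as a string
  }

  void streamTimeoutCallback(bool timeout)
  {
    if (timeout)
      Serial.println("Stream timeout, resuming..."); // a keep-alive was missed; the library reconnects
  }

  void setupStream()
  {
    Firebase.setStreamCallback(firebaseDataStream, streamCallback, streamTimeoutCallback);
    if (!Firebase.beginStream(firebaseDataStream, "/logs/status"))
      Serial.println(firebaseDataStream.errorReason());
  }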

albertlt commented 3 years ago

ok great. thank you :)

albertlt commented 3 years ago

Hello again bro. One quick question. I am iterating through a node which has 10 key-value pairs, where each parent value contains another set of key-value pairs. When I iterate through the data, it produces the following output:

  08:00:57.989 -> 0, Type: object, Key: 1, Value: {"a":"0000000","b":"abcdefghijklmnopqrstuvwxyz1000","c":"0000-00-00","d":"00:00","e":"10","f":"0000-00-00","g":"00:00","h":"20"}
  08:00:57.989 -> 1, Type: object, Key: a, Value: 0000000
  08:00:57.989 -> 2, Type: object, Key: b, Value: abcdefghijklmnopqrstuvwxyz1000
  08:00:57.989 -> 3, Type: object, Key: c, Value: 0000-00-00
  08:00:57.989 -> 4, Type: object, Key: d, Value: 00:00
  08:00:57.989 -> 5, Type: object, Key: e, Value: 10
  08:00:57.989 -> 6, Type: object, Key: f, Value: 0000-00-00
  08:00:58.037 -> 7, Type: object, Key: g, Value: 00:00
  08:00:58.037 -> 8, Type: object, Key: h, Value: 20
  08:00:58.037 -> 9, Type: object, Key: 2, Value: {"a":"0000000","b":"abcdefghijklmnopqrstuvwxyz1000","c":"0000-00-00","d":"00:00","e":"10","f":"0000-00-00","g":"00:00","h":"20"}
  08:00:58.037 -> 10, Type: object, Key: a, Value: 0000000
  08:00:58.037 -> 11, Type: object, Key: b, Value: abcdefghijklmnopqrstuvwxyz1000
  08:00:58.037 -> 12, Type: object, Key: c, Value: 0000-00-00
  08:00:58.037 -> 13, Type: object, Key: d, Value: 00:00
  08:00:58.083 -> 14, Type: object, Key: e, Value: 10
  08:00:58.083 -> 15, Type: object, Key: f, Value: 0000-00-00
  08:00:58.083 -> 16, Type: object, Key: g, Value: 00:00
  08:00:58.083 -> 17, Type: object, Key: h, Value: 20

As you can see, the first row is always the parent's pair and the following rows are the children's pairs. I need to get the parent's key and all of the children's key-value pairs for all 10 records. I am thinking of only parsing the parent's pairs at indexes 0, 9, 18, and so on (every 9th entry). But the value is a String, so how do I do the parsing? Also, is there a better approach for this?

mobizt commented 3 years ago

I don't get what you need.

For iteration, there are no indexes or information about the members and the depth of the nodes inside the object; it has to iterate from the top-level element down to the nested elements. That's it.

To parse, you need to specify the node name or path to parse, like this:

  FirebaseJson &json = firebaseData.jsonObject();
  FirebaseJsonData parseResult;
  json.get(parseResult, "/1/a"); // relative path to the node value

  if (parseResult.success) // if that node exists
  {
    Serial.println(parseResult.stringValue);
  }

There are many examples about FirebaseJson.

Redesign your database to keep the data under a known key (index), e.g. a timestamp, which you can sort, filter, and compare, instead of an array index.

albertlt commented 3 years ago

So basically, I want to parse the value variable from this statement:

json.iteratorGet(i, type, key, value);

The value data type is String. How do I convert it to a JSON object for parsing purposes?

Inside the value variable is this data:

{"a":"0000000","b":"abcdefghijklmnopqrstuvwxyz1000","c":"0000-00-00","d":"00:00","e":"10","f":"0000-00-00","g":"00:00","h":"20"}

I would like to do something like:

  String aVal = json["a"];
  String bVal = json["b"];

mobizt commented 3 years ago

setJsonData

To parse, read my post above. The library provides APIs like that; the exact interface you want is not available.
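
For reference, a minimal sketch of that setJsonData approach (childJson and result are illustrative names; value is the String filled by json.iteratorGet()):

  FirebaseJson childJson;
  FirebaseJsonData result;

  childJson.setJsonData(value); // load the iterator's value string as a JSON object
  childJson.get(result, "a");   // then parse the child node "a"

  if (result.success)
  {
    String aVal = result.stringValue;
    Serial.println(aVal);
  }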

Please read the docs before asking.

albertlt commented 3 years ago

Problem solved. I had been looking for the setJsonData page for all the functions' explanations. My bad for failing to find that page. Thank you for pointing me in the right direction.

albertlt commented 3 years ago

Hello again. I am having difficulty implementing the Stream function. If I only use one FirebaseData object for both the stream and data inserts, the inserts fail. If I create two FirebaseData objects (one for the stream and one for normal operations), I run out of memory. Is it possible to "stop" the stream process when I want to do normal operations and resume the stream when I am done? I tried stopping the stream with:

  Firebase.endStream(firebaseDataStream);
  firebaseDataStream.stopWiFiClient();

and continuing with:

  Firebase.beginStream(firebaseDataStream, path);

However, it also crashes on the first resume, possibly running out of memory too. What is the correct way to completely stop the stream process as if it was never used?

mobizt commented 3 years ago

You already know the memory limitation.

I have posted a comment like this many times and don't want to repeat the same thing too often.

You should design your usage around this limitation. Your stream data should be kept as small as possible; don't stream large data unless you use a query filter.

Most of the memory usage goes to SSL client resources (mbedTLS). In most of the examples, the Firebase Data object is defined outside of any block (global scope) as a static variable. You can also define it inside a function to use it only in that scope, or, as this example shows, allocate the Firebase Data object dynamically, which gives the same result as defining it as a local variable in a function.

The stream loop example uses a single Firebase Data object; the idle time in the loop allows the stream to establish.
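
A minimal sketch of the dynamic-allocation idea (the function name, path, and response size are illustrative):

  void writeLog(const String &path, FirebaseJson &json)
  {
    FirebaseData *fbdo = new FirebaseData(); // allocate only when needed
    fbdo->setResponseSize(4096);             // keep the response buffer small
    Firebase.setJSON(*fbdo, path, json);
    delete fbdo;                             // free the SSL client resources again
  }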

mobizt commented 3 years ago

For single shared Firebase Data object usage, I have already noted the usage concerns here.

albertlt commented 3 years ago

Actually, I am only streaming a single string character the whole time. However, the data involved in normal operations is a little large, although it causes no problem when the stream is not used. I just saw the dynamic allocation examples. May I know whether there is any memory leak issue after prolonged use?

mobizt commented 3 years ago

Actually, there is no memory leak in the Firebase library.

Memory is also used by the system and core libraries when WiFi is in use, and some of it is statically reserved; you can do nothing about that unless you disable WiFi.

You can check the heap before and after every memory-consuming task to track the current free heap.
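
For example, something like this (ESP.getFreeHeap() is the standard Arduino-ESP32 call; the Firebase call in the middle is just a placeholder):

  Serial.printf("Free heap before: %u\n", ESP.getFreeHeap());
  Firebase.setJSON(firebaseData, path, json);
  Serial.printf("Free heap after: %u\n", ESP.getFreeHeap());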

albertlt commented 3 years ago

Okay got it. I will resume work. Thank you !