LucasBrazi06 closed this issue 4 years ago.
Have you already assessed this with the tenants we have in QA? List them all and let me know the total size. We can have a shorter cleanup for logs (1 week instead of 1 month). The rest should fit, imho. Let me know.
If I also drop the '*.transactions' collections, any data integrity issue that might be left will not have an impact on the runtime.
But for testing firmware upgrades from the QA, the 512MB size limit will be reached very quickly if old firmware versions can't be deleted.
You haven't answered my question! List them all and let me know the total size.
For the firmware, we can clean up the files in the DB. The goal is also to tune the backend to store as little data as possible in collections, to reduce the cost of using Atlas.
So provide what I requested so I can make a decision!
On 17 Feb 2020, at 16:44, FABIANO Serge notifications@github.com wrote:

> List them all and let me know the total size
A per-tenant analysis is not relevant, and on the QA there are only 5 test tenants left.
The excluded collection types given are the most space consuming, in exactly the order listed. The transactions collections have been added to avoid data integrity issues, not because of their size.
The migration to MongoDB Atlas can be done as long as we drop some data that we do not really care about, like test or demo transactions, meter values, etc.
I'm just saying that 512MB will be quickly reached on the QA, given the growth rate of the collections above, if the usage pattern includes charge transactions.
So if you want the DB to be space efficient, I can already tell you which collections need to be checked:
- The logs DB is already cleaned, but for legal reasons you will have to keep the logs for one year anyhow. On QA, log cleaning can be aggressive; old logs can be archived on a non-Atlas DB.
- I think that after some time it's not needed to keep all the values over time that the charging stations have sent, and some kind of median-value regression can be done over a timeframe.
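To make the aggressive QA log cleaning concrete, here is a minimal dry-run sketch; the 7-day retention, the `evse` database name, and the `tenant.logs` collection name are assumptions for illustration, and the script only prints the `mongosh` command instead of executing it:

```bash
#!/usr/bin/env bash
set -eu

# Assumed retention window for QA logs (days) -- an assumption, not project policy
RETENTION_DAYS=7

# Compute the cutoff as epoch milliseconds (MongoDB stores dates as ms since epoch)
cutoff_ms=$(( ( $(date +%s) - RETENTION_DAYS * 24 * 3600 ) * 1000 ))

# Hypothetical tenant collection name, following the '<tenant>.logs' pattern
js="db.getCollection('tenant.logs').deleteMany({ timestamp: { \$lt: new Date($cutoff_ms) } })"

# Dry run: print the command instead of running it against the QA cluster
echo "mongosh \"\$MONGO_URI/evse\" --eval \"$js\""
```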
Give me the size of the whole DB. I haven't thought about load testing on our QA system, and I don't think it is a good idea to do it with an Atlas backend. So let's keep our local MongoDB database in SCP as long as we can; in the meantime, I want you to test the app in QA connected to Atlas to ensure that we can switch the prod without any trouble. Keep me informed.
The size before the collections drop is around 750MB; after, it is less than 100MB. I will double check tomorrow with MongoDB Compass.
Switching from one DB to another using CF services is not difficult: it is just a matter of typing a few cf commands for each app, or of editing the manifest.yml if the switch is more or less permanent and needs to be kept between deployments. If I remember correctly, you've already used MongoDB Atlas on CF through a user-defined service.
So we can load test with both setups and do whatever we need by switching back and forth between them easily.
-- Jérôme Benoit aka fraggle Piment Noir - https://piment-noir.org OpenPGP Key ID : 27B535D3 Key fingerprint : B799 BBF6 8EC8 911B B8D7 CDBC C3B1 92C6 27B5 35D3
Ok, thanks. I just want to switch to Atlas to check the app. Yes, I did it already and it's easy: you just have to map the user-defined service in the manifest.yml. This service still exists in the TradeEV space, so you can copy/paste it and replace the connection URL. For the time being, we will not perform load tests with Atlas but only with the local MongoDB instance.
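For reference, the mapping in a manifest.yml looks roughly like this; the app and service names are taken from the migration script in this thread, and the commented-out entry is only an illustration of how the two services can be swapped:

```yaml
applications:
  - name: sap-ev-rest-server-qa
    services:
      # user-defined service wrapping the MongoDB Atlas connection URI
      - mongodbatlas-evse
      # SCP MongoDB service, commented out while QA runs on Atlas
      # - e-Mobility-db-qa
```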
It's already created under the SAP-IT SCP test space :) I just haven't done the switch yet to test the MongoDB URI entered. I should probably warn the developers before that kind of test.
You can send an email during the day and switch it in the evening ;-)
Issue left for MongoDB Atlas usage on the QA landscape:
Migration test script:
```bash
#!/usr/bin/env bash
set -x

apps="sap-ev-batch-server-qa sap-ev-chargebox-json-server-qa sap-ev-chargebox-soap-server-qa sap-ev-ocpi-server-qa sap-ev-odata-server-qa sap-ev-rest-server-qa"

# Unbind each app from the SCP MongoDB service and bind it to Atlas
function bind_to_atlas() {
  for app in $1
  do
    cf us "$app" e-Mobility-db-qa
    cf bs "$app" mongodbatlas-evse
    cf restage "$app"
  done
}

# Unbind each app from Atlas and bind it back to the SCP MongoDB service
function bind_to_scp() {
  for app in $1
  do
    cf us "$app" mongodbatlas-evse
    cf bs "$app" e-Mobility-db-qa
    cf restage "$app"
  done
}

case $1 in
  atlas)
    bind_to_atlas "$apps"
    ;;
  scp)
    bind_to_scp "$apps"
    ;;
  *)
    echo "Usage: $0 {atlas|scp}"
    exit 1
    ;;
esac
```
No more issues. The QA is now using the MongoDB Atlas evse cluster. I'll switch back to SCP MongoDB tomorrow morning.
Ok, thanks Jerome! Have you tried to run a transaction with SAP-Mougins-69? Let me know.
All the tests done after the MongoDB Atlas switch went fine, including the ones with the test CS.
Perfect thanks!
If we get connected to Atlas in prod, we will want the MongoDB 4.2 APIs, which are not currently supported by the SCP instance. Do you confirm? This may force us to switch definitively to Atlas! Let me know.
On 20 Feb 2020, at 16:05, FABIANO Serge notifications@github.com wrote:
> If we get connected to Atlas in prod, we will want the MongoDB 4.2 APIs, which are not currently supported by the SCP instance. Do you confirm?
As long as you do not use 4.2-only features when querying the DB or defining data structures, the rollback is possible. A dump and restore works on any MongoDB version if the data structure has no version-specific fields. And I do not think the JSON data structure definitions have changed a lot in the last years.
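The dump and restore mentioned above can be sketched as a single pipeline; the connection strings below are placeholders, and the script only prints the command as a dry run:

```bash
#!/usr/bin/env bash
set -eu

# Placeholder connection strings -- assumptions, not the real QA/Atlas URIs
SRC_URI="mongodb://localhost:27017/evse"
DST_URI="mongodb+srv://user:pass@evse-qa.example.mongodb.net/evse"

# mongodump/mongorestore archives are version-agnostic as long as the data
# itself has no version-specific fields; stream the archive through a pipe.
cmd="mongodump --uri=\"$SRC_URI\" --archive | mongorestore --uri=\"$DST_URI\" --archive --drop"

# Dry run: show the command rather than touching any database
echo "$cmd"
```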
++
Let me rephrase: I will use MongoDB 4.2 ;-) Some commands are commented out and ready to be used.
Aaahhhh, yes, on the Atlas cluster, you will be able to use them.
In other words, we need to get our QA system onto Atlas to test these new features. So we need a small VM in Atlas for our QA. Let me know which one you need and I'll create it.
You can also ask developers to switch to 4.2. They will have to follow https://docs.mongodb.com/manual/release-notes/4.0-upgrade-replica-set/ before updating.
But if the 4.2 features are enabled in the code without any fallback to 4.0 features or MongoDB version detection code, the dev landscape will not work anymore, and we can't use a free cluster because the fake data generation needs a lot of data to work properly.
I do not understand why you want to create a VM on Atlas. If I want to test MongoDB 4.2 features, I just install MongoDB 4.2 on my laptop (already done), comment out or refactor the code that requires 4.2, and run the unit tests after making sure the 4.2 code path is exercised.
And if I want to test on SCP, I just deploy a new app bound to the Atlas DB.
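Before 4.2-only features become available, the feature compatibility version has to be raised after the binary upgrade; here is a dry-run sketch that only prints the `mongosh` invocation (the URI is a placeholder):

```bash
#!/usr/bin/env bash
set -eu

# The FCV must be raised on the replica set primary after all binaries are
# upgraded, before 4.2-only features can be used.
js='db.adminCommand({ setFeatureCompatibilityVersion: "4.2" })'

# Dry run: print the mongosh invocation instead of running it
echo "mongosh \"\$MONGO_URI/admin\" --eval '$js'"
```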
> You can also ask developers to switch to 4.2. They will have to follow https://docs.mongodb.com/manual/release-notes/4.0-upgrade-replica-set/ before updating.
> But if the 4.2 features are enabled in the code without any fallback to 4.0 features or MongoDB version detection code, the dev landscape will not work anymore, and we can't use a free cluster because the fake data generation needs a lot of data to work properly.

Then we have to find a solution for the dev landscape; no fallback to 4.0, that's too complicated.

> I do not understand why you want to create a VM on Atlas. If I want to test MongoDB 4.2 features, I just install MongoDB 4.2 on my laptop (already done), comment out or refactor the code that requires 4.2, and run the unit tests after making sure the 4.2 code path is exercised.

For the QA system, to be able to test.

> And if I want to test on SCP, I just deploy a new app bound to the Atlas DB.

No, only native support for 4.2 will be used.
To test the 4.2-specific code path on the QA once the code is enabled in master-qa, it's a matter of following these steps:
- comment out the mongodbatlas-evse service in the manifest.yml files
- comment out the SCP MongoDB service in the manifest.yml files
- run the script that does the proper unbind and bind for the MongoDB service
- deploy the code in master-qa
And leave the QA running on MongoDB Atlas, with the size limitation.
No, I want 4.2 always enabled in QA. Let me know the size you need in Atlas for QA.
The switch procedure described is definitive. If you want to do a full QA data migration from the current SCP DB to Atlas, without deleting any collection content to fit in 512MB, the Atlas cluster needs at least a 1GB size limit.
A shared M2 or a dedicated M10 is OK.
I just created the M10 evse-qa instance in Atlas. I'll switch the QA to Atlas.
The QA landscape is now bound to MongoDB Atlas and the configuration has been updated to match.
Done
512MB total size is really short; the dump and restore from the QA need to exclude:
- '*.logs'
- '*.metervalues'
- '*.consumptions'
- '*.statusnotifications'
to fit. And there are probably some data integrity issues on the restored DB.
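The exclusion list above can be expanded into mongodump flags; since mongodump's `--excludeCollection` takes exact names rather than wildcards, the '*.suffix' patterns have to be expanded per tenant first. A dry-run sketch with hypothetical tenant names:

```bash
#!/usr/bin/env bash
set -eu

# Hypothetical tenant list and the collection suffixes to exclude (from the issue)
tenants="tenant1 tenant2"
suffixes="logs metervalues consumptions statusnotifications"

# mongodump has no wildcard exclusion, so expand '<tenant>.<suffix>' explicitly
exclude_flags=""
for t in $tenants; do
  for s in $suffixes; do
    exclude_flags="$exclude_flags --excludeCollection=$t.$s"
  done
done

# Dry run: print the resulting command instead of dumping anything
echo "mongodump --uri=\"\$MONGO_URI/evse\"$exclude_flags --archive=qa-dump.archive"
```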