Closed: Moumouls closed this issue 3 years ago.
We are currently using node-migrate for this, to upgrade schemas and keep them in sync across environments. We store the migration state in a master-key-only Parse.Config entry.
app.ts
import migrate from 'migrate'; // node-migrate

migrate.load({
  stateStore: new ParseStateStore()
}, (err, set) => {
  if (err) {
    throw err;
  }
  set.up((err) => {
    if (err) {
      throw err;
    }
    console.log('migrations successfully ran');
  });
});
ParseStateStore
export class ParseStateStore {
  async load(fn) {
    const config = await Parse.Config.get({ useMasterKey: true });
    const migrations = config.get('node-migrate') || {};
    fn(null, migrations);
  }

  async save(set, fn) {
    await Parse.Config.save({
      'node-migrate': {
        lastRun: set.lastRun,
        migrations: set.migrations
      }
    }, {
      'node-migrate': true // make node-migrate private and master-key only
    });
    fn();
  }
}
Example migration file:
module.exports.up = async function (next) {
  const schema = new Parse.Schema('Product');
  schema.addDate('startedAt', {
    required: true
  });
  try {
    await schema.update();
  } catch (err) {
    if (err.code === 255) {
      console.log('startedAt already exists, ignoring');
    } else {
      throw err;
    }
  }
  next();
};

module.exports.down = function (next) {
  next();
};
Thanks @timanrebel for the suggestion. I think it could be nice to support a method that allows developers to execute some data manipulation (with node-migrate or other tools) before fields are deleted after a schema update.
I take into account that we should have a system similar to serverStartComplete(). We could have this type of trigger:
beforeCreateMigrationOperations()
afterCreateMigrationOperations()
beforeDeleteMigrationOperations()
afterDeleteMigrationOperations()
Or, for a more flexible approach:
Parse.Cloud.beforeSchemaSave()
Parse.Cloud.afterSchemaSave()
Parse.Cloud.beforeSchemaDelete()
Parse.Cloud.afterSchemaDelete()
With this kind of trigger, developers will have all the tools to run complex database operations before pushing any changes.
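For illustration, a minimal sketch of how such before/after schema triggers could be registered and fired. The trigger names come from the proposal above; this is not an existing Parse Server API, and `registerSchemaTrigger`/`runSchemaTriggers` are hypothetical helpers:

```javascript
// Hypothetical trigger registry for the proposed schema hooks.
// None of these names exist in Parse Server today.
const schemaTriggers = {
  beforeSchemaSave: [],
  afterSchemaSave: [],
  beforeSchemaDelete: [],
  afterSchemaDelete: [],
};

function registerSchemaTrigger(event, handler) {
  schemaTriggers[event].push(handler);
}

// The migration runner would await every handler before/after each operation.
async function runSchemaTriggers(event, payload) {
  for (const handler of schemaTriggers[event]) {
    await handler(payload);
  }
}
```

A developer could then register, for example, a beforeSchemaDelete handler that copies data out of a field before the migration drops it.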
@Moumouls I played with your Gist snippet and the schema generation works pretty well, congrats on the nice job! One note for the documentation, because I faced a problem in production with the Redis cache: it looks like you should enable the single-schema cache, or the generation fails for random reasons each time:
PARSE_SERVER_ENABLE_SINGLE_SCHEMA_CACHE=true
Unfortunately, it seems that enabling the single-schema cache does not fix the issue :(
2020-12-27 18:51:18.271 [debug]: RedisCacheAdapter
2020-12-27T18:51:18.278948+00:00 app[web.1]: 2020-12-27 18:51:18.278 [error]: Field address exists, cannot update.
2020-12-27T18:51:18.289883+00:00 app[web.1]: 2020-12-27 18:51:18.282 [debug]: RedisCacheAdapter
2020-12-27T18:51:18.290228+00:00 app[web.1]: 2020-12-27 18:51:18.290 [debug]: RedisCacheAdapter
Without the RedisCacheAdapter everything works fine, but I have no idea how to enable it on the fly afterwards. I'm deploying on Heroku using MongoDB.
Any ideas?
@L3K0V what is your database? Do you run multiple Parse Server (in parallel) with parallel deployment strategy ? (or many dyno instances on Heroku ?)
I was wondering if this might cause a problem. I'm using throng with 1 worker and one dyno for now, and I believe this is not the issue.
Thanks @L3K0V. During this implementation I discovered that a field option change (like adding/modifying defaultValue or required) in my Gist script triggers a field delete and then a field create. So be careful with the Gist script! (Note: this behavior cannot be corrected in the Gist, since the current version of Parse Server has a limitation on field option updates (required/defaultValue).)
The implementation onboarded into Parse Server will have better stability and many little improvements!
Indeed, I have some default values and required fields, but what do you propose about them? I mean, can I keep them? I'm not changing them between deployments. Restarting the dyno sometimes fixes the generation.
In the PR I need to add a retry system for better handling of parallel deployments (when all Parse Servers start at the same time).
I have some default values or required
You can use them, but changing/removing defaultValue/required will trigger a field reset (my script deletes the field, then creates the field with the new options; sadly, the field concerned by the change will be wiped on all objects).
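To make the risk easier to spot before deploying, here is a hedged sketch of a check that flags local field definitions whose option changes the Gist script would treat as a delete-then-recreate. The exact comparison in the Gist may differ; `triggersFieldReset` is a hypothetical helper:

```javascript
// Hypothetical helper: returns true when a field definition change
// would cause the Gist script to delete and recreate the field,
// wiping its values on all objects.
function triggersFieldReset(cloudField, localField) {
  // Added or removed fields are handled elsewhere in the script.
  if (!cloudField || !localField) return false;
  return (
    cloudField.type !== localField.type ||
    Boolean(cloudField.required) !== Boolean(localField.required) ||
    JSON.stringify(cloudField.defaultValue) !== JSON.stringify(localField.defaultValue)
  );
}
```

Running this over the local schema definitions against `Parse.Schema.all()` before a deployment would surface every field about to be reset.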
@L3K0V can you try to set PARSE_SERVER_SCHEMA_CACHE_TTL=0, just to check whether Redis may be outdated?
I was able to deploy without issues. Looking at the @timanrebel snippet above, I put in an error check so as not to kill the process. Not sure if this causes some side effects on migrations, what do you think @Moumouls?
// This function updates, migrates, and creates classes
export const buildSchemas = async (localSchemas: any[]) => {
  try {
    // Bail out if fetching the cloud schemas hangs for more than 20 s
    const timeout = setTimeout(() => {
      if (process.env.NODE_ENV === 'production') process.exit(1)
    }, 20000)
    const allCloudSchema = (await Parse.Schema.all()).filter(
      (s: any) => !lib.isDefaultSchema(s.className),
    )
    clearTimeout(timeout)
    // Hack to force the session schema to be created
    logger.info('🔨 Schema generation...')
    await lib.createDeleteSession()
    await Promise.all(
      localSchemas.map(async (localSchema) => lib.saveOrUpdate(allCloudSchema, localSchema)),
    )
    logger.info('🔨 Schema generation completed!')
  } catch (e) {
    logger.error(e)
    if (e.code === 255) {
      logger.warn(e.message)
    } else {
      if (process.env.NODE_ENV === 'production') process.exit(1)
    }
  }
}
Okay, so we need to check how Parse Server currently uses the cache on schema queries (await Parse.Schema.all()). No problem if you just log the error, but in many use cases a restart is better, because process managers will try to recreate a fresh instance after the exit; also, for developers that use Kubernetes, K8s will just stop the rolling update and end users will not see a service interruption.
In your use case it seems that Redis is just out of date. If you remove PARSE_SERVER_SCHEMA_CACHE_TTL, do you get an error?
It also seems that the schema cache TTL is not applied correctly in the SchemaCache instance:
setAllClasses(schema) {
  if (!this.ttl) {
    return Promise.resolve(null);
  }
  return this.cache.put(this.prefix + MAIN_SCHEMA, schema);
  // expected code: return this.cache.put(this.prefix + MAIN_SCHEMA, schema, this.ttl);
}
We need to fix this in my PR as well.
Hey @Moumouls. Want to share some new findings: I use throng for node clustering. I moved the schema generation into serverStartComplete and defined a migration job, which works like a charm even with Redis. It's very strange. Let me know if I can help somehow.
Thanks @L3K0V for your investigation. Now I'm sure that the "random" failures come from the concurrency of your node cluster, because multiple Parse Servers will try to update the schemas at the same time. I think throng does not support a rolling-update policy. But no problem, I know what we have to do to reduce errors from concurrency.
The script just needs a retry system (attempt to migrate the schema; if it fails, wait 2 seconds, then retry; exit on the 5th failure). Then each Parse Server will try to ensure the schema structure, and most of the time the 2nd try will be sufficient for all Parse Server instances to be okay, since we need at least one Parse Server to perform the schema updates. I will work on this and try to add some tests :)
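The retry policy described above could be sketched like this. It is a minimal illustration, not the actual PR code; the 5-attempt and 2-second figures come from the comment, and `migrateWithRetry` is a hypothetical name:

```javascript
// Minimal retry sketch: try the migration, wait 2 s between attempts,
// and give up after the 5th consecutive failure.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function migrateWithRetry(migrateFn, { attempts = 5, waitMs = 2000 } = {}) {
  for (let attempt = 1; attempt <= attempts; attempt += 1) {
    try {
      return await migrateFn();
    } catch (err) {
      if (attempt === attempts) throw err; // 5th failure: let the process exit
      await delay(waitMs);
    }
  }
}
```

Each Parse Server instance would then call something like `migrateWithRetry(() => buildSchemas(localSchemas))`: the first instance to win the race performs the updates, and the others succeed on a later attempt.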
@L3K0V if you want to try the new defined-schema feature, you can temporarily install my forked package branch in your repo: "parse-server": "moumouls/parse-server#defined-schema-pkg". Then in the Parse Server options you can use the schemas key to provide your schemas.
Everything is tested; I will be happy to get your feedback, and also to hear whether the retry system works correctly in your node cluster.
Example:
const server = ParseServer.start({
  schemas: [{ className: '_User', fields: { aNewField: { type: 'String' } } }, { className: 'Test' }],
  beforeSchemasMigration: async () => {
    // Some code if you want to execute something before migration ops
  },
});
The schema structure is the same as in my script that you used before (JSON schema). Example here: https://github.com/Moumouls/next-atomic-gql-server/blob/master/src/schema/schemas/User.ts
I was actually looking for exactly this functionality, where you can specify the schemas on startup. As far as I know this is impossible today, so I'll be happy to try it out soon as well.
Btw it would be handy if you could pass an array of Parse.Schema (https://parseplatform.org/Parse-SDK-JS/api/master/Parse.Schema.html).
Hi @jonas-db, I will be happy to get your feedback on my forked package version of this feature. In your package.json you just have to add "parse-server": "moumouls/parse-server#beta.8" (this version is stable).
Usage:
const server = ParseServer.start({
  schemas: [{ className: '_User', fields: { aNewField: { type: 'String' } } }, { className: 'Test' }],
  beforeSchemasMigration: async () => {
    // Some code if you want to execute something before migration ops
  },
});
Is your feature request related to a problem? Please describe. The schemaless behavior of Parse Server is hard to maintain across multiple environments and creates complications in using the new GraphQL API correctly.
Describe the solution you'd like Allow passing a JSON REST version of a Parse schema to the Parse Server options. Parse Server will then push/migrate the schema to the DB.
Describe alternatives you've considered A custom script in serverStartComplete is needed.
Additional context Currently the feature can be achieved with https://gist.github.com/Moumouls/e4f0c6470398efc7a6a74567982185fa. This script has been used in production for a year, with no issues detected.
Community discussion here: https://community.parseplatform.org/t/possibility-to-set-class-level-permissions-via-file/1061/22