dynamicdan / sn-filesync

2-way sync ServiceNow field values to local files
MIT License

Mass Commit #50

Open turkeysnorter opened 6 years ago

turkeysnorter commented 6 years ago

It would be nice to be able to commit all files from the local machine at once, or to track which files have been modified since being synced down from the server and commit only those. In other words: a mass commit that runs the same command a single file save triggers, but across multiple files at the same time.

This would also accomplish the "offline" functionality mentioned in the roadmap as a possible enhancement. I'm used to this workflow from MavensMate, a plugin that fills a similar need for Salesforce: it gives you the choice of keeping your edits in lock step with the server (ServiceNow in this case), or saving your files locally many times before you decide you're ready to commit those changes up to the server. There is also an option to configure whether a "save" should automatically initiate a "commit" up to the server, if that helps as a system-design reference.

Thank you so much for all of the pioneering work you have done to expose ServiceNow to the IDE. I really appreciate it, and appreciate you sharing it.

dynamicdan commented 6 years ago

Firstly, I'm glad you appreciate the solution and that you've gotten value out of SN-FileSync. I did invest a lot of hours to get it modernised and I do hope to invest more to make it even better.

I'm not 100% sure of the use case for this, although I see the overlap with offline or flaky connections. The idea of delaying bulk changes may be of use.

That said, when making bulk changes that touch many records, it usually makes sense to sync them with the instance straight away. Basically, the longer a set of changes stays out of sync with the instance, the higher the risk of a sync conflict, which will stop you from pushing your changes.

Regardless, here are my initial thoughts on how this could be done.

There is already an "API" call that we could use. See the documented pull + push calls: https://github.com/dynamicdan/sn-filesync#pull-and-push-commands

One could start SN-FileSync with an option like --manual-sync, which still listens for changes but just records which files have been changed in a "changeset" text file. Then, on demand or the next time filesync starts, this file could be checked and a push executed for each entry (with errors/conflicts reported per push).

On-demand option: --push-changeset=my_changed_files.txt. (I think I have a todo somewhere to push a list of changes from a file.)

If we focus more on the flaky-connection issue, then there are a few more challenges: confirming that the record went up properly, confirming that the transaction completed correctly, and automatically restarting the transfer. This could also make use of a temporary (pending) "changeset" file, which would then also be useful for a complete crash-and-restore scenario.

Alternatively, if your project uses a code repo (e.g. Git or Mercurial), then a pre-commit or post-push hook could trigger the filesync push commands based on the change set the repo itself records. That solution wouldn't require any changes to filesync, although some extra error handling might be needed.

As a side note, there is also the "preLoadList" option, which does the opposite (on start, it pulls down record fields).

If you're going to try this out and create a pull request, please start with new startup option modes so that it's easy to test separately from the core functionality.

I hope that helps.