JackAdams / meteor-transactions

App level transactions for Meteor + Mongo
http://transactions.taonova.com/
MIT License

Support for $addToSet: {$each ....} #14

Closed rjsmith closed 9 years ago

rjsmith commented 9 years ago

Hi, I have a use case in my app where I need to add multiple items to an array field for an existing document. I use the $addToSet: {arrField: {$each: [someObject, anotherObject]}} update modifier to do the update in a single atomic MongoDB operation.
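For reference, here is a minimal sketch of that kind of update (the collection and variable names are hypothetical; someObject / anotherObject are the objects from the modifier above):

    // Add several items to an array field in one atomic operation;
    // $addToSet only adds values that are not already in the array.
    Posts.update(
      {_id: postId},
      {$addToSet: {arrField: {$each: [someObject, anotherObject]}}}
    );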

Now that I am in the process of integrating meteor-transactions into my app, I have realised that modifiers of this form do not work. Looking in the code, I can see there is no explicit support for any of the $addToSet array modifiers. There are two issues:

  1. When meteor-transactions tries to save the resulting tx.Transactions document, it fails with a MongoDB error, because some of the field keys (in the update and inverse command objects) start with $ (the $each key from the original update modifier); see the sketch after this list.
  2. The inverse command is not correct: it changes the $addToSet command to a $pull command, but $each is not supported as a $pull modifier. Actually, I don't think you can invert $addToSet with $pull so easily, because $addToSet only adds objects if they do not already exist in the array being modified. So, when inverting, you have to be careful not to delete array items that were in the $addToSet modifier but were already in the array before then. Phew!
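To illustrate issue 1 with a simplified sketch (the real transaction document shape is more involved, as the error output later in this thread shows): MongoDB refuses to store any document whose field keys begin with $.

    // Even with the first-level operator stored as a command string, the
    // nested '$each' key ends up as a field name, which mongo rejects:
    tx.Transactions.insert({
      update: {command: '$addToSet', data: {arrField: {$each: [{name: 'x'}]}}}
    });
    // => MongoError: The dollar ($) prefixed field '$each' ... is not valid for storage.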

After working on this today, I think I have a solution for number 2. I think the inverse operation of $addToSet should be a $set of the complete state of the array field prior to the $addToSet. It's a brute-force approach compared with unpicking it with $pull, but it should work for any $addToSet operation. The penalty is storing a larger amount of data in tx.Transactions. In my own use case, I will have at most 100 items in these array fields, with each array item being an object holding just a couple of small strings.
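A sketch of that inverse strategy (hypothetical collection / field names; this illustrates the idea rather than actual package code):

    // Capture the array's full state before the $addToSet runs ...
    var formerArray = Posts.findOne(postId).arrField || [];
    // ... and record a $set of that snapshot as the inverse operation,
    // so an undo restores the array exactly as it was beforehand:
    var inverse = {command: '$set', data: {arrField: formerArray}};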

I have started working on number 1, but am currently stuck somewhere in the packaging / unpackaging methods. Basically, I need to ensure that any field keys starting with $ get replaced with something else before saving, and then restored when the tx.Transactions doc is read back out of the db again.

Will update this issue when I have made some more progress.

JackAdams commented 9 years ago

Oh man ... I can see I'm going to have to up my game a bit.

When it comes to mongo operators, there's support for almost nothing in this package -- maybe half a dozen operators total.

Yes, for the $addToSet inverse operation, $set on the whole array will work. In fact $set probably works well as an inverse for just about any change. The only trouble is, if two users are adding elements to the same array in a doc in quick succession, this might happen:

1) There is a field with an empty array: []
2) User 1 adds one element, so the array looks like: ['user 1']
3) User 2 adds two elements, so the array becomes: ['user 1', 'user 2 - 1', 'user 2 - 2']
4) User 1 clicks undo.

With the current implementation, the array becomes: ['user 2 - 1','user 2 - 2'] and both users are fairly happy with the result. Using $set as the inverse, the array becomes [] and both users are confused. (Updates don't cause transactions to expire.)

User 1: "I thought clicking undo only removed the things I added. Why did the other two things disappear?"
User 2: "Hey. I just added some things. Where did they go?"

It's a matter of weighing the possibility of this happening against the scenario you describe, where someone adds an element to an array that's already there, then does an undo and the element disappears, leaving the state of the array changed -- which is the opposite of what you'd expect when clicking undo.

My preferred method of handling this would be to check for the existence of the array element when doing an $addToSet and set the inverse action to nothing if the element already exists. I think that would cover both cases. I'll look into this.
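Something like this, perhaps (a rough sketch with a hypothetical helper name; _.some and _.isEqual are underscore's, which the package already uses):

    // Only emit a $pull inverse for a value the $addToSet actually added;
    // if the value was already in the array, the update was a no-op and
    // the inverse should be a no-op too.
    function inverseForAddToSet (existingArray, addedValue) {
      var alreadyPresent = _.some(existingArray, function (item) {
        return _.isEqual(item, addedValue);
      });
      return alreadyPresent ? null : {command: '$pull', data: addedValue};
    }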

If you can figure out the $each thing -- I'll totally accept a pull request! The problem of using mongo operators as field names is what the _unpackageForUpdate and _packageForStorage methods solve for basic first-level mongo operators, but I wasn't very forward-thinking when I put this package together, so I don't really have a pattern in mind for supporting operators like $each. I'm open to suggestions.
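For anyone following along, the stored shape those methods produce is roughly this (inferred from the packaging code further down this thread):

    // A first-level operator is stored as a command string plus an array of
    // key/value pairs, so the '$'-prefixed operator never becomes a field name.
    // {$set: {score: 5}} is packaged as:
    var packaged = {command: '$set', data: [{key: 'score', value: 5}]};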

rjsmith commented 9 years ago

Thanks for the reply. I think meteor-transactions is a potentially important part of the Meteor ecosystem ... I haven't spent hours scouring Atmosphere for similar packages, and have only spent a couple of days working with it so far, but this package looks like a serious attempt at solving an important but complex problem. I would have imagined most serious production Meteor apps would need to look at this at some point, at least for multi-document 2-phase commit + rollback support. For me, the user-visible undo / redo is a nice-to-have side-effect. A 25-year career spent in large enterprise software development has taught me that you need to get the basics down first, and this is one of them! That's why I am happy to put a bit of time and effort in to help out here :-)

re: your multi-user scenario. Agreed this is tricky if you isolate one user's edit from another user's edit of the same doc. I currently intend to solve that by exposing a single list of undo-able actions to all users who look at the same document. In your example, if user 1 wants to roll back, they will need to also roll back user 2's edit first. This is more of an event-sourcing model, where there is a single stream of time-ordered transactions, which can only be rolled back in contiguous sequence, instead of cherry-picking which events to reverse. I haven't got onto that bit yet, as I hit the other issues I have reported first :-) There are probably some gnarly issues around ensuring the server code enforces the sequencing .. haven't looked at that yet. One step at a time ....

Right, back to figuring out the package / unpackage code ....

JackAdams commented 9 years ago

Ah! I like the sound of that approach where you have to roll back the subsequent commits by other users first. That said, this package wasn't designed with that in mind, so it may be difficult to achieve without a substantial re-write.

I'm glad that you see this package as a serious attempt to solve an important problem. I'm using it extensively in a big, complex app that's been in production for almost 2 years and will be for years to come, so I'm committed to maintaining it as best I can. (I'm also secretly pleased, and kind of surprised, that it's been one of the most reliable components of the system, even though it puts some limitations on how I can do my writes to mongo.)

Time has taught me to view the "multi-document 2-phase commit + rollback support" as the package's most essential function, although (full disclosure) when I wrote it, I just wanted a reliable user-visible undo/redo and saw the rest of it as being necessary to achieve this end.

You see, my brother (15-year career in large enterprise software development) was supposed to be helping me architect the trickier components of my app (he wrote my RBAC, for instance). But he was too busy playing some computer game during our brief window of collaboration, so I (a high school mathematics teacher) ended up writing the whole transactions package myself.

But that was two years ago and I've learnt a lot since then.

So now, "2-phase commits" is the next major milestone on my roadmap to getting this package ready (in all good conscience) for widespread adoption.

I'm kind of surprised that a solution for multi-document transactions wasn't one of the first packages in the Meteor ecosystem. The only other serious attempt at this sort of thing for Meteor (as far as I'm aware) was mrt:versioning, the development of which was discontinued two years ago. We did look into adopting/adapting it at the time, but it takes a substantially different approach to mine and wasn't quite what I was after.

Anyway... this comment is turning into a novella, so let me just conclude by saying that I'm very grateful to have someone with your experience actively helping to improve the codebase.

JackAdams commented 9 years ago

I forgot to say earlier, when you mentioned "Penalty is storing a larger amount of data in tx.Transactions", that the transactions collection in the database of my production app is 10x larger than the next largest collection (in terms of disk space). For this reason, I'm thinking of making the package automatically clean out old transactions -- like 6 months old (by default) -- and configurable to never remove old transactions with tx.purgeAfter = 0;.

rjsmith commented 9 years ago

Can I suggest you raise a new issue about the archiving idea ... my own first Meteor app hasn't even gone live yet, so I don't yet have that problem, but I know it will bite at some point.

Perhaps you might just provide a tx object method to archive all transactions older than a supplied date. Then apps can use their own trigger mechanism (e.g. using the synced-cron package, a REST-triggered authenticated command, etc.). Or, I guess, apps could just remove documents from the tx.Transactions collection themselves, right? Is there any need for having a tx method to do that (other than for encapsulation purposes)?
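For example, a purge job along these lines (a sketch using the percolate:synced-cron API, and assuming transaction documents carry a timestamp field):

    SyncedCron.add({
      name: 'Purge old transactions',
      schedule: function (parser) {
        return parser.text('every 24 hours');
      },
      job: function () {
        // Remove transactions older than roughly six months
        var cutoff = new Date(Date.now() - 1000 * 60 * 60 * 24 * 180);
        return tx.Transactions.remove({timestamp: {$lt: cutoff}});
      }
    });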

Look forward to seeing the 2-phase commit taking shape :-)

JackAdams commented 9 years ago

Just added #15.

rjsmith commented 9 years ago

Updated proposed code, for further discussion, for inverting an $addToSet update operation:

          case '$pull' :
            // Brute force approach to ensure the previous array is restored on undo
            // even if $addToSet uses sub-modifiers like $each / $slice
            _.each(_.keys(updateMap), function (keyName) {
              var formerVal = self._drillDown(existingDoc, keyName);
              if (typeof formerVal !== 'undefined') {
                inverseCommand = '$set';
                formerValues[keyName] = formerVal;
              } else {
                // Reset to an empty array (really this should be an $unset)
                inverseCommand = '$set';
                formerValues[keyName] = [];
              }
            });
            // formerValues = updateMap;
            break;

Instead of using $unset in the else block (which would need support for a list of inverse commands, not just a single one), I just use another $set to reset to an empty array. Ideally, it should completely remove the property if it was previously undefined.

rjsmith commented 9 years ago

Oh, also updated my transactions bug test repo with related tests:

https://github.com/rjsmith/meteor-transactions-bug-repo/commits/master

JackAdams commented 9 years ago

Thanks for this, Richard. Am I right that something similar will need to be written for $addToSet?

rjsmith commented 9 years ago

Yes, I think so.

A $pull removes all instances of documents that match the given value or query. So there could be multiple same-valued items removed in a single $pull. In order to restore the same number of items again, you would have to use $push with the right number of duplicated values .. yuk.

So, I think we would have to adopt the same brute force method as for $addToSet and store the entire original array to be restored with a $set.

Will need a test to ensure that multiple same-valued items removed with a $pull get restored properly :-)
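Something like this, perhaps (a sketch assuming a tx-aware collection called fooCollection and the package's tx.start / tx.commit / tx.undo flow):

    it('restores duplicate values removed by a $pull on undo', function () {
      var _id = fooCollection.insert({tags: ['a', 'b', 'a']});
      tx.start('remove tags');
      fooCollection.update({_id: _id}, {$pull: {tags: 'a'}}, {tx: true});
      tx.commit();
      // Undo should put back both removed 'a' values, not just one
      tx.undo();
      expect(fooCollection.findOne(_id).tags).toEqual(['a', 'b', 'a']);
    });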

JackAdams commented 9 years ago

These transaction records could get very large, storing whole arrays. I guess there's no simple alternative.

rjsmith commented 9 years ago

Yes, but it is totally dependent on storage capacity etc. There is no one right answer ... it also depends on the undo / redo strategy (per-user, shared per document, etc.)

One option would be to document the default strategies for computing inverse operations and point out possible issues. Then, package users can make an informed decision if they wish to override the default with their own custom inverse functions (I haven't tried that myself, but I believe this package enables that, correct?). Maybe you could even give an example alternative (e.g. use $addToSet / $pull instead of the default $set approach), and document the pros / cons.

JackAdams commented 9 years ago

I agree that this stuff should be documented to allow the user to make informed choices. It's pretty involved stuff to explain though. I'll do my best. :-)

rjsmith commented 9 years ago

Hi @JackAdams. I have completed re-factoring all of the collection mutation code in my application using a local fork of meteor-transactions with the changes discussed above, and added a whole bunch of sanjo:jasmine tests to test undo / redo operations for each action. So far, so good!

Have you had any time to look at the changes discussed above?

Here's the relevant code from my local fork at the moment:

        switch (inverseCommand) { // In case we need to do something special to make the inverse happen
          case '$inc' :
          case '$unset' :
          case '$set' :
            _.each(_.keys(updateMap), function(keyName) {
              var formerVal = self._drillDown(existingDoc,keyName);
              if (typeof formerVal !== 'undefined') {
                formerValues[keyName] = formerVal;
              }
              else {
                inverseCommand = '$unset';
                formerValues[keyName] = '';
              }
            });
            break;
          case '$pull' :
            // Brute force approach to ensure the previous array is restored on undo
            // even if $addToSet uses sub-modifiers like $each / $slice
            _.each(_.keys(updateMap), function (keyName) {
              var formerVal = self._drillDown(existingDoc, keyName);
              if (typeof formerVal !== 'undefined') {
                inverseCommand = '$set';
                formerValues[keyName] = formerVal;
              } else {
                // Reset to an empty array (really this should be an $unset)
                inverseCommand = '$set';
                formerValues[keyName] = [];
              }
            });
            break;
          /*case '$push' :
            formerValues = updateMap;
            break;*/
          case '$addToSet' :
            formerValues = updateMap;
            break;
          case '$pullAll' :
            // TODO
            break;
          case '$pushAll' :
            // TODO
            break;
          default :
            // TODO
            break;
        }

JackAdams commented 9 years ago

I've thought about it a fair bit, but I haven't tried anything code-wise. I'm still kind of torn between the robustness of the approach above and wanting the granularity of adding and removing single elements to prevent my confusing-for-both-users scenario from playing out.

That said, it's nice to have code that is passing a whole bunch of tests. So I'm ready to commit this and release a new version if you think it's ready.

(I was trying to think my way through how to undo $pull and $addToSet without using $set but, as you said, it gets "yuk" real quick. And you're right, if needed for an app, the dev can use custom inverse operations.)

rjsmith commented 9 years ago

How about leaving it unchanged in the code right now, but, similar to the tx.onTransactionExpired and tx.permissionCheck override functions, providing a tx.inverse function that is used, if defined? That way, for example, I could provide my own 'brute-force' code, used for all tx-aware updates in my codebase.

Use an update-specific inverse option function, if specified, else use tx.inverse (which can be overridden from the default implementation).

rjsmith commented 9 years ago

So, I came up with this, which I think is perhaps the best compromise. It still keeps the door open to improving the default inverse strategy over time.

First, the declaration of the default implementation at the top of the transactions_common.js file:

  // Function to work out the inverse operation that will reverse a single collection update command.
  // This default implementation attempts to reverse individual array $set and $addToSet operations with
  // corresponding $pull operations, but this may not work in some cases, e.g.:
  // 1.  if an $addToSet operation does nothing because the value is already in the array, the inverse $pull will remove that value from the array,
  //     even though it was there before the $addToSet action
  // 2.  if an $addToSet operation had a $each qualifier (to add multiple values), the default inverse $pull operation will fail because $each is not supported for $pull operations
  // 
  // You may wish to provide your own implementation that addresses these issues if they affect your own application.
  // For example, store the entire state of the array being mutated by a $set / $addToSet or $pull operation, and then restore it using an inverse $set operation.
  // The tx.inverse function is called with the tx object as the `this` data context.
  // 
  this.inverse = function (collection, existingDoc, command, updateMap, opt) {
    var self = this;
    var inverseCommand = this._inverseOps[command];
    var formerValues = {};
    switch (inverseCommand) { // In case we need to do something special to make the inverse happen
      default :
        // TODO
      case '$inc' :
      case '$unset' :
      case '$set' :
        _.each(_.keys(updateMap), function(keyName) {
          var formerVal = self._drillDown(existingDoc,keyName);
          if (typeof formerVal !== 'undefined') {
            formerValues[keyName] = formerVal;
          }
          else {
            inverseCommand = '$unset';
            formerValues[keyName] = '';
          }
        });
        break;
      case '$pull' :
        formerValues = updateMap;
        break;
      /*case '$push' :
        formerValues = updateMap;
        break;*/
      case '$addToSet' :
        formerValues = updateMap;
        break;
      case '$pullAll' :
        // TODO
        break;
      case '$pushAll' :
        // TODO
        break;
    }
    return {command:inverseCommand,data:formerValues}; // console.log("inverse op: ",JSON.stringify(inverse));
  }

Then this code in the Transact.prototype.update function:

      var inverse;
      if (typeof opt === 'undefined' || typeof opt.inverse === 'undefined') {
        // var fieldName = _.keys(actionField[0][1])[0]; // console.log(fieldName);
        if (typeof opt === 'undefined') {
          opt = {};    
        }

        inverse = this.inverse.call(self, collection, existingDoc, command, updateMap, opt);

      }
      else {
        // This "opt.inverse" thing is only used if you need to define some tricky inverse operation, but will probably not be necessary in practice
        // a custom value of opt.inverse needs to be an object of the form:
        // {command:"$set",data:{fieldName:value}}
        inverse = opt.inverse;    
      }

Happy to put a PR together for this, with some documentation in the README, if you prefer this approach.
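For example, a caller could then supply a custom inverse for a single update like this (collection and variable names are hypothetical; the inverse object uses the {command, data} format from the code comment above):

    fooCollection.update(
      {_id: docId},
      {$addToSet: {tags: {$each: ['a', 'b']}}},
      {tx: true, inverse: {command: '$set', data: {tags: formerTags}}}
    );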

JackAdams commented 9 years ago

I like this. It's cleaner than the big mess I have in tx.update at the moment too. Will happily accept a PR. :-)

JackAdams commented 9 years ago

I was just thinking as I rode to school ...

The tx.inverse hash has a set of strings representing the various inverse operations. What about refactoring so that the hash is a set of minimalist functions that accept the operation (string) and updateMap and return the inverse action and formerValues?

This way, the dev doesn't have to overwrite the whole tx.inverse function, but can overwrite the inverses for individual operations. It would mean breaking up tx.inverse and distributing it to tx.inverseOperations.
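Something like this, perhaps (just a sketch of the shape -- the names and signatures are only a suggestion, and it assumes the same `this` context as tx.inverse):

    // One small function per mongo operator, each returning the inverse
    // action, so a dev can override a single operator without replacing
    // the whole tx.inverse function.
    tx.inverseOperations = {
      '$addToSet': function (collection, existingDoc, updateMap, opt) {
        // Brute-force inverse: restore the whole former array with a $set
        var self = this;
        var formerValues = {};
        _.each(_.keys(updateMap), function (keyName) {
          var formerVal = self._drillDown(existingDoc, keyName);
          formerValues[keyName] = (typeof formerVal !== 'undefined') ? formerVal : [];
        });
        return {command: '$set', data: formerValues};
      }
      // ... one entry per supported operator ($set, $unset, $inc, $pull, ...)
    };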

rjsmith commented 9 years ago

OK, working on a PR.

BUT, I have got stuck with another issue .. I really hope you can help. If a single collection update has 2 commands (say $set and $addToSet), the code here adds a doUpdate call to the execution stack for each command (in the actionField loop). But the doUpdate function uses the contents of the updates object to actually perform the updates. This means it repeats the same update twice. And the second update appears to be using an updates argument that has already gone through the packaging function.

Not sure if this helps, but here's debug output from my application from within the Transact.prototype.update function:

command: $addToSet
updateMap: {"designs":{"designRequirementId":"CLhp6sGoC4aNNhYAT"},"impactTable":{"$each":[{"requirementId":"eCtJkZSMAggJF2cYb","baselineParameterId":"2r8LmehNPLMvwCqq5","targetParameterId":"KJtqBSWonwekwK6Qo","designRequirementId":"CLhp6sGoC4aNNhYAT","impactTargetParameterId":"RLwik7WcsjcCSSLpW"}]}}
Pushed update command to stack: zNTFPkbWGh43brWCm
command: $set
updateMap: {"lastUpdated":"2015-03-24T12:28:33.987Z","lastUpdatedBy":"s7EZgcK8FgjpKYx65"}
Pushed update command to stack: zNTFPkbWGh43brWCm

It shows it is adding 2 update commands to the execution stack from the same atomic collection update.

But, logging in doUpdate at the point it actually applies the updates shows it is doing the same thing twice:

Beginning commit with transaction_id: zNTFPkbWGh43brWCm
updating with updates:{"$addToSet":{"designs":{"designRequirementId":"CLhp6sGoC4aNNhYAT"},"impactTable":{"$each":[{"requirementId":"eCtJkZSMAggJF2cYb","baselineParameterId":"2r8LmehNPLMvwCqq5","targetParameterId":"KJtqBSWonwekwK6Qo","designRequirementId":"CLhp6sGoC4aNNhYAT","impactTargetParameterId":"RLwik7WcsjcCSSLpW"}]}},"$set":{"lastUpdated":"2015-03-24T12:28:33.987Z","lastUpdatedBy":"s7EZgcK8FgjpKYx65","transaction_id":"zNTFPkbWGh43brWCm"}}
Executed update
updating with updates:{"$addToSet":{"designs":{"designRequirementId":"CLhp6sGoC4aNNhYAT"},"impactTable":{"$each":[{"requirementId":"eCtJkZSMAggJF2cYb","baselineParameterId":"2r8LmehNPLMvwCqq5","targetParameterId":"KJtqBSWonwekwK6Qo","designRequirementId":"CLhp6sGoC4aNNhYAT","impactTargetParameterId":"RLwik7WcsjcCSSLpW"}]}},"$set":{"lastUpdated":"2015-03-24T12:28:33.987Z","lastUpdatedBy":"s7EZgcK8FgjpKYx65","transaction_id":"zNTFPkbWGh43brWCm"}}
Executed update
Commit reset transaction manager to clean state

Either doUpdate should be called once per invocation of Transact.prototype.update, or doUpdate should act on the updateData argument (which is different for each invocation).

JackAdams commented 9 years ago

I think I've got the solution to this. Just testing now. Will merge the PR soon. Thank you for that, by the way -- it's just how I'd imagined it in my head (but also with documentation!).

JackAdams commented 9 years ago

I've just committed what I believe to be a fix for the doUpdate problem with multiple update operators in a single update. Let me know if it's still not working. This was a quick, didn't-think-very-hard-about-it, didn't-test-it-properly fix. :-p Note: I haven't released a new version of the package, I just pushed a commit to the repo on GitHub.

rjsmith commented 9 years ago

Thanks. I commented on the commit .. there is a problem with the anonymous function signature having two arguments with the same name.

JackAdams commented 9 years ago

Okay. I think I've got it right this time.

rjsmith commented 9 years ago

OK, that seems to be much closer now, thanks .. except when using a $each modifier for an $addToSet operation (to do with the $-prefixed property again). I have fixed part of it, but now it is failing when it then tries to store the updates object in the transactions collection again. Let me see if I can track that down.

Here's my fix:

        (function(updateData,inverse,execute) {
          this._executionStack.push(function() {
            doUpdate(collection,_id,restoreKeysIf1stCharacterWas$(updates),updateData,inverse,false,opt,callback,execute);
            self.log("Executed update");
          });
          this.log("Pushed update command to stack: " + this._transaction_id); //  + ' (Auto: ' + this._autoTransaction + ')'
        }).call(this,updateData,inverse,(i === (actionFieldsCount - 1)) ? true : false);

and the error I now get:

updates:{"$addToSet":{"designs":{"designRequirementId":"iRtn3zBvEfu5mDqni"},"impactTable":{"$each":[{"requirementId":"BK3xrPKS9EJwZdvht","baselineParameterId":"TsNx9s8SeS69w4ddc","targetParameterId":"keM85Egd7DyuJKKai","designRequirementId":"iRtn3zBvEfu5mDqni","impactTargetParameterId":"Wqwd2PiAB8rrASzEZ"}]}},"$set":{"lastUpdated":"2015-03-25T08:03:25.042Z","lastUpdatedBy":"ww7r8rJt5fKk4vzmu","transaction_id":"RHzZdR4zjQm9fTrwt"}}
Executed update
{ [MongoError: The dollar ($) prefixed field '$each' in 'items.updated.0.update.data.1.value.$each' is not valid for storage.] stack: [Getter] }
Rolling back changes
Rollback reset transaction manager to clean state
{ [Error: Error: An error occurred, so transaction was rolled back. [error] [failed-to-attach-design]]
  error: 'failed-to-attach-design',
  reason:
   { [Error: An error occurred, so transaction was rolled back. [error]]
     error: 'error',
     reason: 'An error occurred, so transaction was rolled back.',
     details: { [MongoError: The dollar ($) prefixed field '$each' in 'items.updated.0.update.data.1.value.$each' is not valid for storage.] stack: [Getter] },
     message: 'An error occurred, so transaction was rolled back. [error]',
     errorType: 'Meteor.Error' },
  details: undefined,
  message: 'Error: An error occurred, so transaction was rolled back. [error] [failed-to-attach-design]',
  errorType: 'Meteor.Error' }

JackAdams commented 9 years ago

Ah yes... I think I can see what's happening here. I'll try to solve it too.

rjsmith commented 9 years ago

It's all to do with trying to work around saving object keys with $ in them. The recursive functions I wrote modify the argument .. so when I use it here: restoreKeysIf1stCharacterWas$(updates), it's modifying the updates object directly, instead of acting on a clone.

I was wondering whether perhaps there is a different way. In the _packageForStorage / _unpackageForUpdate functions, instead of storing the $-modified object itself in the transactions collection, use JSON.stringify / JSON.parse to serialise the commands to and from strings.

JackAdams commented 9 years ago

That would sure be a whole lot easier. I'm totally open to exploring this possibility.

Interestingly, two different people have just released meteor packages for versioning in the last two days (after nothing happening in this space for two years):

https://atmospherejs.com/mickaelfm/vermongo and https://atmospherejs.com/todda00/collection-revisions

I haven't had a good look at them, but the vermongo package uses shadow collections (one for each collection) and the collection-revisions package stores revisions on the actual document itself.

So there we have it: three different approaches to undo/redo. Of the three though, I think this is the only package attempting to do multi-document transactions.

rjsmith commented 9 years ago

This is the code I am working on now:

// This turns the data that has been stored in an array of key-value pairs into an object that mongo can use in an update

Transact.prototype._unpackageForUpdate = function(data) {
  var objForUpdate = {};
  _.each(data, function(val) {
    var unpackagedValue;
    if (val.value.json) {
      unpackagedValue = JSON.parse(val.value.json);
    } else {
      unpackagedValue = val.value;
    }
    objForUpdate[val.key] = unpackagedValue;
  });
  return objForUpdate;
}

// This turns the data that is given as a mongo update into an array of key-value pairs that can be stored

Transact.prototype._packageForStorage = function(update) {
  var arrForStorage = [];
  _.each(update.data, function(value,key) {
    var packagedValue;
    if (_.isObject(value) || _.isArray(value)) {
      packagedValue = {json:JSON.stringify(value)};
    } else {
      packagedValue = value;
    }
    arrForStorage.push({key:key,value:packagedValue});
  });
  return {command:update.command,data:arrForStorage};
}

This seems to be working nicely so far; all my tests are now passing (that doesn't mean to say those tests are testing the right thing though!). It neatly bypasses needing to do the $-handling.
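A quick round trip through the two methods, for illustration (hypothetical field names):

    var update = {command: '$addToSet', data: {tags: {$each: ['a', 'b']}}};
    var stored = tx._packageForStorage(update);
    // stored.data is [{key: 'tags', value: {json: '{"$each":["a","b"]}'}}]
    // -- no '$'-prefixed field names, so mongo can store it
    var restored = tx._unpackageForUpdate(stored.data);
    // restored is {tags: {$each: ['a', 'b']}} again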

However, I am concerned about whether _unpackageForUpdate will be backwards compatible with existing transactions collection documents in an existing system ...

JackAdams commented 9 years ago

Yeah. Might need some notes on the next release about compatibility. But I think this approach is easier to maintain and more robust than the one I had. In most apps, I imagine the transactions collection can be blown away with impunity.


rjsmith commented 9 years ago

Cool. I will carry on testing for the rest of today and send a PR by the end of my day (UK time), or give another update here .. thanks for your support.

Oh, btw - do you need help to bootstrap a test repo? I'd like to write some more Jasmine tests around this; it's ultimately the best way to do this kind of work. If you could set up a bare-bones repo, perhaps based on the bug repo I did earlier, I could then send in some PRs.

JackAdams commented 9 years ago

Yes, I would love some help with bootstrapping a test repo. Is this best set up as a separate repo, or within the package itself?

For Jasmine tests, it should be a separate repo/app, right? And tinytest unit tests should go in the package, right? Please excuse my ignorance -- this is my first (long-overdue) foray into testing code.

rjsmith commented 9 years ago

I'll get back to you tomorrow, possibly with a PR :-) I think there is a trick where you can set up a test app in the same repo, and point its local meteor-transactions package to the same repo. It would be great to have it in the same repo, so commits can include source code changes and corresponding test changes together, for true TDD goodness.

JackAdams commented 9 years ago

Hi Richard, I've set up https://github.com/JackAdams/meteor-transactions-tests. It's a public repo, and it does have a file in there with this at the top:

/* Copyright (c) RSBA Technology Ltd 2015 - All Rights Reserved
 * Unauthorized copying of this file, via any medium is strictly prohibited
 * Proprietary and confidential
 */

Please let me know if I should remove this file. :-)

Oops, just saw your comment now! :-p Removing this repo now!

rjsmith commented 9 years ago

He he .. let me see if I can get the local testapp idea working tomorrow.... past my bedtime now :-0