Meteor-Community-Packages / Meteor-CollectionFS

Reactive file manager for Meteor
MIT License
1.05k stars 237 forks

CollectionFS Rewrite #95

Closed - aldeed closed this issue 10 years ago

aldeed commented 10 years ago

The few remaining dev tasks have been moved to separate issues.

DOCS:

TESTING:

aldeed commented 10 years ago

@raix, some questions to ponder:

ONE: The core package currently saves chunks to an unmanaged Meteor.Collection on the server as they are uploaded. After they're all uploaded, it combines them into a buffer, puts that in a FileObject, and then passes that FileObject to beforeSave and the storage adaptor put handler for each file copy. When all copies have been made or have failed the max number of times, the chunks are then deleted from the unmanaged collection.

The collection is unmanaged because I don't think using a collection synchronized to the database is a great idea for this type of temporary storage: if the developer is only saving copies to the local filesystem or even S3, there is no need for all that data to go into the actual database (which might be on a faraway server) for a short period of time.

The drawback, though, is that an unmanaged collection is in memory and is lost when the app restarts, meaning that uploads in progress are lost. What we really need is some way of caching the data temporarily on the server that is not impacted by app restarts. Any ideas? I was thinking maybe writing to a file on the filesystem chunk by chunk.

TWO: You mentioned making queue Task objects a custom EJSON type. One problem is that they contain a taskData property, which is set to anything needed by the task handler, so we don't know what it will be set to. I think we'd just have to note in the API that the taskData needs to be JSON-able, but I'm not sure. I've only created a couple custom types before.
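
A minimal sketch of what I mean (the Task shape here is hypothetical, and it only works if taskData is itself EJSON-able):

  // Hypothetical Task type - typeName and toJSONValue are what EJSON needs;
  // clone/equals fall back to these if not provided
  function Task(name, taskData) {
    this.name = name;
    this.taskData = taskData;
  }

  Task.prototype.typeName = function () {
    return 'Task';
  };

  Task.prototype.toJSONValue = function () {
    return { name: this.name, taskData: EJSON.toJSONValue(this.taskData) };
  };

  EJSON.addType('Task', function (value) {
    return new Task(value.name, EJSON.fromJSONValue(value.taskData));
  });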

THREE: The generic queue code suffers from the same issue as ONE above: the tasks are stored in memory. Your plan included support for passing in a collection to persist, but maybe that again would be too much overhead? The queue is supposed to work on either client or server, so maybe we could use localStorage (if available) by default on the client and something else on the server (however we solve ONE, do it the same way)?
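
Just to sketch the localStorage idea (the key name is made up; again this assumes the taskData is JSON-able):

  // Persist pending tasks across page reloads on the client
  function saveTasks(tasks) {
    if (typeof localStorage !== 'undefined') {
      localStorage.setItem('queueTasks', JSON.stringify(tasks));
    }
  }

  function loadTasks() {
    if (typeof localStorage === 'undefined') return [];
    var raw = localStorage.getItem('queueTasks');
    return raw ? JSON.parse(raw) : [];
  }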

aldeed commented 10 years ago

I think the observe feature would be very difficult to do and might not be necessary for this initial rewrite. Is the idea to track file changes in the storage (say, the local filesystem) and update the copies data in CollectionFS to keep them in sync? I feel like this would be difficult to do properly, because although you could use a filesystem watcher, if the app is down while files are changed, those changes would never be synced.

Also, if HTTP methods are supported, one could just ensure that all changes are funneled through CFS, via either DDP or HTTP? Maybe I'm forgetting about some case.

raix commented 10 years ago

One: I've added a "temporary storage adapter" in the arc.doc under filehandlers, I think - like PHP's temp files on the filesystem. (Another idea is to have a "master" storage adapter to hold the original version - discussed a long time ago in #34 and #29.)

Two: the downside is that it has to be an EJSON-able object, but on the client we lose the file "pointer"/access anyway. The queue could allow custom functions for finding the next task, handling it, and failing it - I'll prototype this over the next couple of days.

Three: the queue collection is either client or server, so the client queue would be lost unless persisted with GroundDB.

You've got a point on sync and server restarts - it's not impossible, though. You're right that it's not super important, but it would be relevant to Dropbox and Chrome storage adapters. I wanted to prototype this just to figure out how to make it work with the new API. True, one could rig a function to monitor changes and update the relevant files. I'll think about this one.

(Writing on ipad, sorry about typos)

raix commented 10 years ago

And great work, it's a good idea to coordinate the core "work" list

aldeed commented 10 years ago

That all makes sense. I'll have to think about it some more.

One point about the EJSON for Task: the new FileObject does not keep a reference to .file. Instead it immediately converts the File to a Blob, stores it in .blob, and then all access is done through .blob. I think the blob is still lost because I didn't include it in the EJSON functions (I didn't want all that data to transfer up to the server when file objects are passed around), but theoretically it could be persisted in GroundDB on the client as EJSON.newBinary?
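
Something along these lines, maybe (just a sketch using the browser FileReader; the helper name is made up):

  // Copy a Blob's bytes into an EJSON binary so it survives EJSON.clone
  // and could in theory be grounded on the client
  function blobToBinary(blob, callback) {
    var reader = new FileReader();
    reader.onload = function (event) {
      var view = new Uint8Array(event.target.result);
      var binary = EJSON.newBinary(view.length);
      for (var i = 0; i < view.length; i++) {
        binary[i] = view[i];
      }
      callback(null, binary);
    };
    reader.onerror = callback;
    reader.readAsArrayBuffer(blob);
  }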

If you want to move the queue package files into a new repo in your account and make me an owner, I'm fine with that. It probably doesn't belong under the CFS group. (PowerQueue is a good name.)

One more idea for queue: How about an autostart option? You could set autostart: myCursor and the constructor would set up an observe on that cursor for added and changed, which would simply start the queue whenever anything is added or changed. When autostarted, the queue would stop itself automatically when it runs out of work to do. That might make filehandling more efficient since the queue could just autostart whenever files are uploaded, process them all, and then stop itself.

aldeed commented 10 years ago

Wait, actually the queue could start if not already started whenever you call addTask and then stop when it's done with all tasks. No need for observe. I'm not sure why I didn't make it do that. :) I'm sure the queue code needs lots of work.
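
Roughly what I have in mind (names are placeholders):

  // The queue starts itself on addTask and stops when drained
  GQ.prototype.addTask = function (name, taskData) {
    var self = this;
    self._tasks.push({ name: name, taskData: taskData });
    if (!self._running) {
      self._running = true;
      self.run(); // run() works through _tasks and clears _running when empty
    }
  };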

CFS could use an observe function to add tasks to the filehandler queue, though.

raix commented 10 years ago

Ahh, that's what you meant about moving the queue package to another repo - I actually just wanted to keep it in the org. I've added the HTTP-methods and HTTP-publish packages here too, since they're meant for this project. The reason for me to make reusable packages was to get better abstractions and more isolated tests/issues, etc. Plus, others might benefit from this project in more ways.

It's a good idea about an autostart: boolean option.

I've got 2-3 concepts on the queue:

  1. The queue in the SpaceCapsule project (https://github.com/SpaceCapsule/packmeteor/blob/master/queue.js) - this takes async functions and runs them sequentially.
  2. A persisted queue in a collection with JSON-able objects and a queue handler to interpret/execute the operations in the task object.
  3. A complete abstraction that allows one to set custom functions for next/run etc. - this could in theory encompass the two above.

Converting the file into a blob is a good idea, and yes, it could actually be persisted in GroundDB - I've isolated the work on grounding data in https://github.com/organizations/GroundMeteor - GroundDB is to be split up into:

We should investigate whether blobs are converted using EJSON in minimongo - I hope not - keeping it as a blob is better for memory usage, since the files could be large.

raix commented 10 years ago

In theory... we could have the client write the chunks directly into .chunks - we don't have to subscribe to make inserts into a collection, and we don't have to use Meteor.call for this. Makes sense?

If we want to load a file, we then add a subscription.

The chunks could be persisted by GroundDB, if one wanted to.

It would provide a master version of the file in the database, and we'd still have the option of creating copies or syncs.
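
Something like this, maybe (collection and field names are just guesses):

  // The client slices the blob and inserts chunk documents directly -
  // no Meteor.call and no subscription needed for the inserts
  var chunkSize = 256 * 1024;

  function insertChunks(chunksCollection, fileId, blob) {
    var total = Math.ceil(blob.size / chunkSize);
    for (var n = 0; n < total; n++) {
      (function (n) {
        var reader = new FileReader();
        reader.onload = function (event) {
          chunksCollection.insert({
            files_id: fileId,
            n: n,
            data: new Uint8Array(event.target.result) // stored as EJSON binary
          });
        };
        reader.readAsArrayBuffer(blob.slice(n * chunkSize, (n + 1) * chunkSize));
      })(n);
    }
  }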

aldeed commented 10 years ago

If we write into .chunks directly from the client, then the chunks have to go into the actual mongo database (because I don't think there's a way to sync a collection between client and app server but not to the DB server, i.e., to sync two minimongo collections; you can do it if all inserts happen on the server using pub/sub, but not vice versa). I was trying to avoid having two trips, first from client to app server, then from app server to DB server, because 99% of the time the data is only saved until the client finishes uploading, then it is passed to the storage adaptors, and then it's deleted from the server .chunks collection. The 1% that causes problems is when the app server needs to restart while data is being uploaded. Then we lose the upload cache, so in those rare cases having the data in the mongo database would be nice.

But that's why I thought maybe writing data to temp files on the app server filesystem would be best. Then it's persistent until all data is received and all copies have been made, yet it doesn't have to make the extra trip to the mongo database server.
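
For example, something along these lines on the server (paths and helper names are hypothetical, just to show the shape):

  var fs = Npm.require('fs');
  var path = Npm.require('path');
  var os = Npm.require('os');

  // Write each chunk to a temp file as it arrives, then reassemble for the SAs
  function storeChunk(fileId, n, chunkBuffer) {
    fs.writeFileSync(path.join(os.tmpdir(), 'cfs_' + fileId + '_' + n), chunkBuffer);
  }

  function assembleChunks(fileId, totalChunks) {
    var buffers = [];
    for (var n = 0; n < totalChunks; n++) {
      buffers.push(fs.readFileSync(path.join(os.tmpdir(), 'cfs_' + fileId + '_' + n)));
    }
    return Buffer.concat(buffers); // ready for beforeSave and the SA put handlers
  }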

I don't generally like the idea of keeping a master version of the file in mongo, because I may want to use this package for all of the easy upload/download and reactivity features yet store directly to S3 or the filesystem and skip mongo entirely.

aldeed commented 10 years ago

I just ran a test, and a Blob does not save to minimongo as a Blob.

raix commented 10 years ago

does it save as a $binary?

raix commented 10 years ago

Try logging myCollection._collection.docs or myCollection._collection.docs[id] (untested).

The inserts could happen against an SA - it doesn't have to be a database. Just add a /insert/images.chunks server Meteor method and it should be OK. We could do a custom publish, but I don't think it would be useful.

But I agree - we should have an SA for a temporary filesystem.

aldeed commented 10 years ago

Logging myCollection._collection.docs[id] is what I did. It was a generic Object with only type and size props. I think binary data may be lost? You don't get a Blob back when you call findOne either. The LocalCollection insert function appears to run the full doc through EJSON.clone() before storing.
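
The test was roughly this (a sketch from memory):

  // Insert a doc containing a Blob into an unmanaged client collection
  // and look at what actually got stored
  var myCollection = new Meteor.Collection(null);
  var id = myCollection.insert({ blob: new Blob(['hello'], { type: 'text/plain' }) });
  console.log(myCollection._collection.docs[id]);
  // .blob comes back as a plain object with only type/size-like props -
  // the binary data is gone, and findOne() does not return a Blob instance either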

raix commented 10 years ago

Clone - that's what I feared. That's not good... unless we make a custom EJSON type where the blob ref is put in an object keyed by id. Maybe the queue should not be mounted on a collection. I'll have to think about this.

raix commented 10 years ago

@aldeed I'm just curious: you wrote GQ in the queue code - what did the G stand for?

aldeed commented 10 years ago

Generic Queue. That was just me being unimaginative. Could be changed to anything better. :)

raix commented 10 years ago

Haha, I see... Well, PowerQueue does sound a bit cooler, but I'd actually rather stay in the astronomy/space terminology - not sure about "warp queue" - maybe "orbit queue"? So if you feel inspired, I'll let it hang there a moment :)

aldeed commented 10 years ago

I've been thinking more about the idea of the "two-way" binding for SAs. I hadn't been considering that as critical, but I realized that the current architecture I have makes it very difficult. However, I think I have a plan for one final restructuring that will get us there. It's sort of a combination of your idea to have multiple .files and the way I architected everything.

  1. Reduce the importance of the .uploads collection. It will still be used to track upload progress and file handling progress, but after that point it's not really needed. All interaction will be directly with SAs, with CollectionFS as a proxy.
  2. Developer will need to define stores, which are groups of related files within a storage adaptor context. The generic config info for the SA will be defined when defining the store. For example, if you're storing files in two different S3 buckets, you'd have to define two different stores, each of which is linked with one bucket. The store defines both "where to store the files" and "where to watch for filesystem changes".
  3. When you call copies to define the copies that should be created after an upload, you specify the "store" in which each copy should be saved. This implicitly defines the storage adaptor.
  4. Each store gets one persistent mongo collection, too: store.<storename>.files. As files are uploaded/saved from any source, documents are added here. As files are removed, documents are removed. Each SA determines how best to do this.
  5. For downloads, listings, etc., the developer will interact directly with the data store. Basically we'll expose the corresponding .files collection through the data store name so that the developer does something like myCFS.store.<storename>.find().

The main reason for all these changes is to have the idea of a store (a single bucket, subfolder, or gridfs collection), which we need in order for an SA to know where to watch for external file changes.

I'll try to make these adjustments later today or tomorrow. But let me know if you see any flaws.

raix commented 10 years ago

Yes!

An SA can be connected to a CollectionFS like this:

  var myFile = myCollectionFS.findOne();
  myFile === {
    _id: '',
    stores: {
      'mydropbox': 'Dcs33sSdfdfa23DFs' // Id in the mydropbox SA
    },
    copies: {
      'size40': 'sdfsf4g43grgfedfgsdf3' // Id in the size40 SA
    }
  };

raix commented 10 years ago

Just playing with the API (server-side):

var dropbox = new Storage.Dropbox(options);
var thumbnails = new Storage.Filesystem('/www/files/thumbnails');

var myImages = new CollectionFS('images', {
  stores: {
    // Defining an SA here automatically enables two-way binding; if you don't want
    // the two-way binding, then simply define the file SA in the copies section?
    'mydropbox': dropbox
  },
  copies: {
    'size40': thumbnails.Filehandler(function() {
      // We could also allow for custom stuff like the original FH but run in a Fiber?
      // or allow packages to add/extend helpers to the `this` scope?
      this.image.resize(40);
    })
  }
});

Client-side:

  // We connect to the images collection
  var myImages = new CollectionFS('images');

  // Save a file to the server first create the filerecord
  var myFile = myImages.insert({ filename: '' etc... });

  // Return the FileObject
  var myFile = myImages.findOne({ _id: id });

  // Get the blob:

  // This would just load the file from the first SA in stores
  myFile.get();
  // or
  // Get the copy "size40"
  myFile.get({ copies: 'size40' });

  // Save or update the file contents (added to the client-side upload queue)
  myFile.put(blob);

  // If HTTP access is on: (returns url with authToken if allow/deny rules applied?)
  {{ CFSUrl "images" "id"=id "copies"="size40" }}
  or
  myFile.getUrl();
  or
  myFile.getUrl({ stores: 'mydropbox'});

aldeed commented 10 years ago

I think there are still a few differences in what we're thinking. I was thinking that "copies" would become an idea tied to uploads, i.e., "after uploading to this CFS, create these copies and put them in these stores". A copy definition is essentially (1) which named store the copy should be saved in and (2) what pre-processing should be done (the beforeSave method).

Every SA will have a Meteor.Collection (or compatible); this would be kept up to date by the SA internally

I think there should be one per store rather than one per SA. In my mind, the SA is a set of methods that tell CFS how to interact with some storage mechanism (basically SA = the package). A developer can define multiple stores for an app, each of which would use one SA with a certain config. (We might be using different terms for the same things.)

It's true that there could be one collection per SA, in which case the "store" (config info) would have to be tracked as a property in each document.

Here's what I would change about your API suggestion:

// I'm not sure there's benefit to having separate Storage.<SA> objects/instances because
// I think the CollectionFS instance can create all the mappings we need
// behind the scenes.
var myImages = new CollectionFS('images', {
  stores: {
    'dropbox': {
      adaptor: "s3",
      config: {}, //region, key, secret, bucket, etc.; exact properties documented by SA package
      observe: true //false to disable two-way binding?
    },
    'thumbnails': {
      adaptor: "filesystem",
      config: {
        folder: '/www/files/thumbnails' //exact properties documented by SA package
      },
      observe: false
    }
  },
  copies: {
    //save the original; this should not be done by default because an upload could be very large
    'original': {
      store: "dropbox"
    },
    //save any other copies wanted, in this case a thumbnail
    'size40': {
      store: "thumbnails",
      beforeSave: function () {
        this.gm().resize(40);
        //this is the current GraphicsMagick API I have, and I think that would work for
        //any additional file manipulation packages; they just extend FileObject, which is `this`
      }
    }
  }
});

With the info you pass to the stores option, the CollectionFS constructor instantiates new store objects for internal use, but the API is all through the main mycfs.files document, which is referenced by the mycfs.store.<storename>.files documents.

None of this is actually too different from what's happening with my current code, except that the "store" idea is abstracted to allow two-way updates.

This gets confusing when I think too much about it, so I think I'll try to make it work and see if I run into any issues. It's easy enough to tweak the API syntax later.

raix commented 10 years ago

@aldeed I've done some prototyping / playing with some ideas - I wrote it almost from scratch today in a couple of hours: https://github.com/CollectionFS/cfs-prototype. It's pretty untested - it's about 1,000 lines of code in 6 hours, one file containing:

I haven't written the code for

It's mostly the rough architecture. I've removed the idea of stores and kept copies:


var myImages = new CollectionFS('images', {
  copies: {
    'mydropbox': dropbox.syncronize(), // Pass the synchronize handle

    'filesystem': filesystem, // Pass the raw SA

    'size40': thumbnails.beforeSave(function() {
      this.gm().resize(40);
    })
  }
});

It utilizes your idea about a FileObject:

  var file = new FileObject( files[0] );
  myCollectionFS.insert(file); // Inserts file and starts upload

  profiles.insert({ name: 'Morten', image: file });

 file.get()
 file.get('size40');
 file.put(newBuffer);
 file.remove
 file.update
 ...
 file.url()
 file.url('size40');
 ...
 etc...

Had to get it out of my head - I have to focus 200% on a project in November... Well, see ya on the flip side :)

aldeed commented 10 years ago

Looks impressive, @raix! I did a quick skim but I'll try to look more closely tomorrow. I think I'm going to come up with a few example use cases and adjust your code until all of them work. Example cases:

aldeed commented 10 years ago

@raix, one question so far about your new code (if you have any time outside the 200%). :) Generally it looks similar to what I have, but you've solved many of the issues I was running into, so I much appreciate that.

I'm not sure the .synchronize and .beforeSave methods are helpful. We could have developers use the objects these methods return directly, and that would allow for more options and more combinations of options. For example, what if you want to synchronize but also run a beforeSave for new uploads?

Change to the following?

copies: {
    'mydropbox': {
      store: dropbox, // SA reference
      syncronize: true
    },

    'filesystem': filesystem, // Pass the raw SA

    'size40': { 
      store: thumbnails,
      beforeSave: function() { this.gm().resize(40); }
    },

    'syncAndAlter': {
      store: filesystem,
      syncronize: true,
      beforeSave: function() { this.filename = myRandomFileName; }
    }
  }

It sounds like you will not be making any more changes for now? I think next week I can merge your codebase with mine, finish the remaining TODOs, and then start some test cases. I appreciate your prototype; I was beginning to lose steam.

raix commented 10 years ago

Hi @aldeed sounds cool - well I was about to explode with ideas and steam :)

I only ran into one issue: I wanted the fromJSONValue to fetch the full file record from the database, but it does not work when sending a FileObject from client to server (the fromJSONValue seems to run outside a fiber or something before being passed on to the Meteor method - I've filed it as an issue on Meteor; it could be in https://github.com/meteor/meteor/blob/devel/packages/livedata/livedata_server.js#L1024)

I would accept removing the beforeSave + synchronize helper functions and having the user set store, but:

    'syncAndAlter': {
      store: filesystem,
      sync: true, // misspelled as `syncronize` in my code - should be refactored? maybe just keep it as `sync`?
      // Since synchronize is set, beforeSave should have a counterpart, beforeSync,
      // that could make sure the synced data is OK
      beforeSave: function() { this.filename = myRandomFileName; },
      // If we converted from png to jpg in beforeSave, we should be able to convert from
      // jpg to png and pass that data along to the rest of the copies? - This would be
      // an option for the user to handle some sync cases.
      beforeSync: function() { }
    }

I just pushed the last code to GitHub (I've got a super-tight deadline, so focus is a keyword) - but if you post on this thread I'll still read it, and if you push code I'll try to read it, though with the risk of lag.

aldeed commented 10 years ago

Sounds good. I'll try not to bug you too much. :)

raix commented 10 years ago

No worries - cfs is important stuff

aldeed commented 10 years ago

@raix, I've made a lot of progress merging our codebases and splitting into separate packages. I have a little bit of work left to do before I can push everything back to github. Hopefully done sometime tomorrow.

Just one minor question at the moment: Your FileObject accepts a _collectionName argument for internal use only, which it sets to self.collection. Would you be OK with instead having internal code set .collection directly on FileObjects to avoid having the _collectionName argument there, which could confuse API users? This is how my codebase does it.

raix commented 10 years ago

Hi @aldeed, super! We could do that. Just to be clear, self.collection is the name of the collection - maybe it should be refactored to self.collectionName? I currently use _getCollectionFS to get the CollectionFS instance from a name. I've been thinking about refactoring it into:

// A non-prototype function to use directly on the CollectionFS object,
// allowing this to be used across package scopes.
// Returns a valid CollectionFS from a name string or throws an error.
CollectionFS.get = function(name) {
  if (name && name === ''+name) {
    if (_collectionsFS[name] instanceof CollectionFS) {
      return _collectionsFS[name];
    } else {
      throw new Error('CollectionFS "' + name + '" not found');
    }
  } else {
    throw new Error('requires name as string');
  }
};

We have to work this way to make use of an EJSON-able FileObject.

What do you think?

raix commented 10 years ago

Just read your post again - you're talking about a self._collection reference to the CollectionFS in FileObject, instead of using the collectionName? I guess we could do this; it would clean up the use of useCollection. toJSONValue should do something like:

// EJSON toJSONValue
FileObject.prototype.toJSONValue = function() {
  var self = this;
  if (self._id && self._collection instanceof CollectionFS) {
    // If user wants to save the file reference then return the id
    return { _id: ''+self._id, collection: ''+self._collection.files._name };
  } else {
    throw new Error('file is not stored so we cannot return a file reference');
  }
};

Maybe refactor collection into collectionName in the file reference JSON format.

The fromJSONValue would still need CollectionFS.get() to initialize the self._collection reference. I'm still not 100% sure about the fromJSONValue - there's a bit of inconsistency in that the client side actually looks up the file and initializes the fileRecord. This is handy at the moment; I haven't tested reactivity yet. But on the server side we don't have the files in memory, and fetching every file record one at a time might not be optimal - and it will not be possible to do from the fromJSONValue unless we work around it.

shravansing commented 10 years ago

Quick question: is the devel branch ready with S3? And I hope S3's API key/secrets etc. will be stored on the server side? Please let me know the timeline for when it will be ready, even if it is only on the devel branch.

aldeed commented 10 years ago

@bhramakar, Devel is undergoing significant changes right now. I have a bunch of local changes that I'll be pushing within a couple days. At that point there will then be a core CFS package and several additional packages that add, among other things, S3 storage. The API will still be in flux for a couple more weeks at least, but you'd be welcome to help test out the S3 package and provide feedback during that time.

raix commented 10 years ago

@bhramakar I'm just adding on: S3 will be a storage adapter, and storage adapters are not really related to the client-side code in CFS v2 - so security-wise, API keys will not be an issue in CFS.

raix commented 10 years ago

@aldeed I've created a small gist at https://gist.github.com/raix/7374295. It's for easier validation of user input in functions - take a look at the test.js. I just got sick of writing the same stuff over and over again, plus it gets more and more complicated. Let me know what you think - does it seem useful? Feedback appreciated :)

aldeed commented 10 years ago

@raix, that's a great abstraction! I'm always annoyed by writing that same stuff, too. I wonder if this could be made part of the check package?

raix commented 10 years ago

I think it would be a nice addition to the check stuff - I haven't packaged it, but I guess it could be; I just played around with some ideas :) I've created a PR for EJSON to support circular and global references - that could be nice - the current check breaks when checking objects with a circular ref, like Meteor.Collection. The reason is that parseArguments could then check for instanceof, e.g., a collection, etc. It would make stuff much simpler.

aldeed commented 10 years ago

@raix, there are a couple issues with the CFS code as I'm trying to finish merging:

(1) You use callbacks in a number of places where I was using blocking. In most cases, the callbacks seem fine, but when dealing with SAs, I'm not sure it works. For example, when calling the get/download methods, we need to return the data to the client, but sa.getBuffer doesn't return right away but rather expects a callback. So I think I'll need to pull in some of my Future/wrapAsync code. I think the best might be to adjust the SA.getBuffer method so that if no callback is passed in, it will wrapAsync the call to api.get and then return the actual buffer. That way anyone writing a storage adaptor instance can just deal with callbacks and the generic SA code can handle sync vs. async. Does that seem correct to you?
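
Something like this is what I mean (a sketch only - names are from my head, and on older Meteor releases wrapAsync is the private Meteor._wrapAsync):

  // SA.getBuffer works async when given a callback, sync otherwise
  StorageAdaptor.prototype.getBuffer = function (fileObject, callback) {
    var self = this;
    if (callback) {
      return self.api.get(fileObject, callback); // async path for adaptor authors
    }
    // Sync path: block this fiber until the adaptor's callback fires
    var getSync = Meteor.wrapAsync(self.api.get, self.api);
    return getSync(fileObject);
  };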

(2) I think SAs need to store the content type in addition to the .files ID in the copies list. Otherwise how can we know the content type to set when downloading a certain copy? It is not guaranteed to be the same as the content type of the originally uploaded file.

raix commented 10 years ago

Hi @aldeed,

  1. The main reason for the callback structure is basically that SAs perform async stuff/IO, so it seemed natural to adapt. Callbacks are OK on client apply/call, but it should also be supported in the server methods - I can't see why it should not be possible to do async. I guess that's why you want to use wrapAsync? I guess methods should be non-blocking?
  2. True - we could save an image converted from jpg to png. The timestamps (ctime/utime) and size should also be in the SA - for sync conflict resolution?

aldeed commented 10 years ago

Right. The client callback for apply/call gets its result argument from the return value of the server method. But when doing a "get" method, there is no way for the method to return the file data/buffer without using Future or wrapAsync, is there? Maybe I'm missing something. I think the server get methods should call this.unblock() and then use wrapAsync to wait until the SA retrieves the data, then return the data. That's easy enough to do, I just wanted to make sure you can't think of any better way.
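
i.e., roughly (method and variable names are illustrative only):

  Meteor.methods({
    // The download method unblocks, waits on the SA, and returns the buffer
    APDownload: function (fileId) {
      check(fileId, String);
      this.unblock(); // let other method calls from this client run meanwhile
      var fileObject = CollectionFS.get('images').findOne({ _id: fileId });
      return storageAdaptor.getBuffer(fileObject); // sync form, wrapAsync inside
    }
  });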

raix commented 10 years ago

APDownload and APUpload would be the functions to do this.unblock() in, I think?

But you've got a point - maybe we should make file.get etc. wrapped. There could be some leads in the HTTP package's call method - I seem to remember that it runs both sync and async? (Server only, of course.)

raix commented 10 years ago

@aldeed I just saw an interesting client-side API for images: https://github.com/mailru/FileAPI

FileAPI.each(files, function (file){
  FileAPI.Image(file).preview(100).get(function (err, img){
    // Could be nice to be able to do resize in runtime filehandlers etc.?
    images.appendChild(img);
  });
});

What I like is having a way to make a request to the server - it's in the architecture doc - runtime filehandlers...

aldeed commented 10 years ago

@raix, I just have a few things to finish on the merged rework. I will plan to create the separate repos and push everything to GitHub on Sunday. At that point, @awatson1978 can create tests and maybe fix any minor bugs that are found. We can tell others to begin trying it out, too.

I had to change or extend your API in a few places, but it's generally the same. I will have the documentation mostly updated when I push, too.

I have syncing working somewhat, but I just need to figure out the best way to determine whether a watched file change was due to something we did, and therefore should be ignored. Otherwise there's a bit of an infinite-loop situation.

raix commented 10 years ago

Cool Eric, really looking forward to testing it out a bit :)

Ideas: the SA contains a marker - e.g. utime/ctime/filename and/or a hash that corresponds to the actual storage and could be used for comparing files. The SA would shield against updates/sync on a file when using put - this would avoid infinite updates.
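
A rough sketch of the shielding idea (structure hypothetical):

  // Remember our own writes so the file watcher can ignore them
  var recentWrites = {};

  function markOwnWrite(filePath, mtime) {
    recentWrites[filePath] = mtime.getTime();
  }

  function isOwnWrite(filePath, mtime) {
    if (recentWrites[filePath] === mtime.getTime()) {
      delete recentWrites[filePath];
      return true; // change came from our own put() - skip the sync-back
    }
    return false;
  }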

raix commented 10 years ago

Great work, Eric. I've been testing v2 - it feels nice, still bumpy, but making good progress. Looking forward to being able to join in :) I think when we have all the major packages isolated in separate repos, it will be a nice base for isolated tests and development.

aldeed commented 10 years ago

Thanks, Morten. I have all the separate packages done and documented on my machine, so I'm going to begin putting those on GitHub now. That should make it easier to test the whole thing. There are definitely still some bugs and TODOs, relatively minor. I'll list those out here after I get everything pushed, and then I can create some issues for them, too, so that others can work on them.

raix commented 10 years ago

Wow, and we are not even in December yet, and I'm beginning to get a lot of packages :) All we lack now is a bit (or eight) of snow and some bells - I think I've got a Santa tux and a beard somewhere...

aldeed commented 10 years ago

Don't be too happy until you unwrap the packages. Might be lumps of coal inside. :)

OK, I'll try to dump everything I can think of here just so that it's somewhere. Then I can try to put some of it in issues, too.

API changes from your prototype:

I will work on testing in some real-world apps and push fixes based on that, but if others want to begin pushing fixes and changes and writing tests now, that would be great.

aldeed commented 10 years ago

Also, development should be done in devel-merge for the Meteor-CollectionFS package and in master for all other packages for now. I put a note in all the readmes to use them only for testing for now.

Here's a link to the advanced info doc: https://github.com/CollectionFS/Meteor-CollectionFS/blob/devel-merge/ADVANCED.md

aldeed commented 10 years ago

I forgot to mention that I haven't done anything with security yet. I'm not sure we necessarily need the built-in "owner" property on FileObjects, but we will need to support a "metadata" or similar property that is saved, which users can then check in the allow/deny functions. They can then base access on owner, type, or any other metadata.

I haven't done anything with testing the authToken stuff for HTTP URLs either.

raix commented 10 years ago

Yeah coal, right :)

Great work!