iamgreaser / iceball

Open-source rewrite of the VOXLAP version of Ace of Spades.
http://iceball.build
GNU General Public License v3.0

Automatically create needed directories recursively #231

Open fkaa opened 8 years ago

fkaa commented 8 years ago

Currently we need to have .dummy files to preserve the directory structure. It would be safer and more robust if we had a function called mkdir_recursive which could take a path and mkdir every directory in it.

For example:

mkdir_recursive("clsave/vol/screenshots");

Will create the following directories if they do not exist:

clsave/
clsave/vol/
clsave/vol/screenshots/

We could also apply this to the file write functions (the given path would be created recursively before opening and writing the file).
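
For example, a write helper could ensure the directories exist before opening the file. This is a rough sketch only; ib_write_file is a hypothetical name, and mkdir_recursive is assumed to create only the directory components of the path, not the final file name:

#include <stdio.h>
#include <stddef.h>

int mkdir_recursive(const char *path); // hypothetical helper proposed above

int ib_write_file(const char *path, const void *data, size_t len)
{
    // make sure the parent directories exist before writing
    mkdir_recursive(path);

    FILE *f = fopen(path, "wb");
    if (f == NULL)
        return -1;

    size_t written = fwrite(data, 1, len, f);
    fclose(f);
    return (written == len) ? 0 : -1;
}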

NotAFile commented 8 years ago

when would these be created?

the system as it is now works, but this is a simple change

fkaa commented 8 years ago

@NotAFile file writes aren't that common, so it could be run every time you try to write to a file (e.g. a screenshot save).

Alternatively, common locations like clsave/vol/screenshots/ etc. could be created at startup, but this seems more fragile and prone to errors.
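
For comparison, the startup variant would look roughly like this (sketch only; the directory list and create_default_dirs are made up for illustration):

#include <stddef.h>
#include <sys/stat.h>
#include <sys/types.h>

// every new location has to be remembered in this list, which is what makes
// the approach feel fragile
static const char *default_dirs[] = {
    "clsave",
    "clsave/vol",
    "clsave/vol/screenshots",
};

static void create_default_dirs(void)
{
    size_t i;
    for (i = 0; i < sizeof(default_dirs) / sizeof(default_dirs[0]); i++)
        mkdir(default_dirs[i], 0755); // EEXIST is fine, ignore errors here
}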

rakiru commented 8 years ago

:+1:

It's entirely feasible that a user may delete, say, their screenshot folder, to clear up old files, instead of deleting the screenshots themselves. The current behaviour for such an event is likely "crash and burn".

fkaa commented 8 years ago

Made a quick mockup of how it could look. The question is where do we put this? Should we wrap every file write call with a call to this as well?

#include <stdio.h>      // snprintf, FILENAME_MAX
#include <string.h>     // strchr
#include <errno.h>      // errno, EEXIST
#include <sys/stat.h>   // mkdir
#include <sys/types.h>

int mkpath(const char *file_path, int mode)
{
    // we make a copy since we do not know if the passed path will be a string
    // literal or not, since modifying such a variable will result in undefined
    // behavior according to ISO/IEC 9899:2011, §6.4.5
    char path[FILENAME_MAX];
    snprintf(path, FILENAME_MAX, "%s", file_path);

    // we progressively call `mkdir` on the folder structure, starting from the
    // bottom and working our way up. for example, if the passed `file_path` is
    // "test/folder/structure/file.txt" we will make the following `mkdir`
    // calls:
    //
    // * `mkdir("test\0folder/structure/file.txt", mode)`
    // * `mkdir("test/folder\0structure/file.txt", mode)`
    // * `mkdir("test/folder/structure\0file.txt", mode)`
    //
    // in other words, each directory component is recognised by the `/` that
    // follows it, so the final component (the file name) is never created

    char* p;
    for (p = strchr(path + 1, '/'); p != NULL; p = strchr(p + 1, '/')) {
        *p = '\0';
        if (mkdir(path, mode) == -1) {
            if (errno != EEXIST) { return -1; }
        }
        *p = '/';
    }
    return 0;
}

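Usage before a write would then be something like this (the screenshot path is just an example):

const char *shot = "clsave/vol/screenshots/shot-0001.png";

if (mkpath(shot, 0755) == 0) {
    FILE *f = fopen(shot, "wb");
    if (f != NULL) {
        // ... write the image data ...
        fclose(f);
    }
}
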
NotAFile commented 8 years ago

would it not be better to try writing and create the directory if it fails? Doing it that way avoids the overhead, should someone decide to hammer the disk with writes

fkaa commented 8 years ago

@NotAFile Did some profiling. Using the example code and string test/folder/structure/file.txt, the first call (no folders existing at all) takes 0.75ms. Subsequent calls fluctuate between 0.05ms and 0.15ms.

Wrapping the function above in a fopen-ish helper gives somewhat different results:

FILE* ib_fopen(const char *path, const char *mode)
{
    FILE* file;
    fopen_s(&file, path, mode);
    if (!file) {
        // the open may have failed because the directories are missing;
        // create them and retry once
        mkpath(path, 0755);
        fopen_s(&file, path, mode);
    }
    return file;
}

With this function, the initial call takes 1.08ms, while subsequent calls are much more stable at around 0.3ms to 0.5ms.

To put this into perspective: to get 60fps, each frame has a budget of ~16ms, so at roughly 0.4ms per open we can fit ~40 of these file opens in one frame and still get 59fps.

NOTE: I don't even have an SSD

iamgreaser commented 8 years ago

Should be fine provided you ensure that the path is "safe", unless they're hardcoded paths in the engine, in which case it's kinda hard to make them not safe.
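
For illustration, a "safe path" check could be as simple as rejecting anything that can escape the game directory. is_safe_path here is a hypothetical sketch, not an existing engine function, and it is deliberately conservative:

#include <string.h>

static int is_safe_path(const char *path)
{
    if (path == NULL || path[0] == '\0')
        return 0;
    // no absolute paths, drive letters or backslashes
    if (path[0] == '/' || strchr(path, ':') != NULL || strchr(path, '\\') != NULL)
        return 0;
    // no ".." components (also rejects names merely starting with "..")
    if (strncmp(path, "..", 2) == 0 || strstr(path, "/..") != NULL)
        return 0;
    return 1;
}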

fkaa commented 8 years ago

@iamgreaser Could look into using PhysicsFS for sandboxing the FS. I've had no trouble integrating it into one of my projects before.
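
A rough sketch of what that could look like, assuming we pointed the PhysicsFS write dir at clsave (the paths are illustrative). Note that PHYSFS_mkdir already creates missing parent directories, which is the original problem in this issue:

#include <physfs.h>

int ib_fs_init(const char *argv0)
{
    if (!PHYSFS_init(argv0))
        return -1;

    // everything written through PhysicsFS stays inside this directory
    if (!PHYSFS_setWriteDir("clsave"))
        return -1;

    // creates vol/ and vol/screenshots/ in one call if they are missing
    if (!PHYSFS_mkdir("vol/screenshots"))
        return -1;

    PHYSFS_File *f = PHYSFS_openWrite("vol/screenshots/shot.png");
    if (f != NULL)
        PHYSFS_close(f);

    return 0;
}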

rakiru commented 8 years ago

We already have our own FS sandbox. It could perhaps be tidied up into a single API to reduce the amount of duplicated logic and the potential for fucking up, though. Is PhysicsFS able to fit our needs anyway? Its aims seem orthogonal - we have no need for that zip stuff. It seems potentially useful for allowing custom content to override server stuff (custom client-side gun models, hitsounds, etc.), but we'd need to whitelist that stuff anyway so only certain things can be replaced, and we have a system to do that already, I believe.

fkaa commented 8 years ago

@rakiru are you talking about src/path.c? It looks pretty barebones and inflexible (e.g. only ASCII paths), but it could probably suit our needs, yeah. PhysicsFS can be built without the compression/decompression features, but that is a large selling point of it, true.

Regarding overriding server stuff, I'm not sure how we would differentiate between client stuff and server stuff, guessing that a simple exec(server); exec(client); could get messy.

rakiru commented 8 years ago

Yeah, the current way isn't great, but I'm not convinced it needs to be completely replaced rather than just fixed up a bit. It seems like we'd be disabling the majority of PhysicsFS (zip/etc. support, search paths) just to get better path sanitisation. Pre-tested code that we know should work with unicode filenames and across all common systems is always nice, but it's yet another dependency.

Regarding server/client stuff, I'm not sure what you mean about exec; it's just certain parts of the code loading local models if they exist, otherwise requesting them from the server. I was mostly thinking aloud there, it doesn't really matter.