
Request for changes in Session and SessionStore #446

Closed etcimon closed 9 years ago

etcimon commented 10 years ago

This issue is a request for changes in the following groups:

For the serialization issue, this would be the proposal:

interface Serialization {
    ubyte[] serialize(T)(ubyte[] value);
    ubyte[] deserialize(T)(ubyte[] value);
}

final struct Session {
    private {
        Serialization m_ser;
    }
    @property void serialization(Serialization ser){
        m_ser = ser;
    }

    T get(T)(string key) { return *cast(T*)m_ser.get(m_id, key, &m_ser.serialize!T, &m_ser.deserialize!T); }
    void set(T)(string key, T value) { m_store.set(m_id, key, cast(ubyte[T.sizeof])(value), &m_ser.serialize!T, &m_ser.deserialize!T); }
}

This way, one could set up a Session with serialization based on an inherited Serialization implementation.

Also, CacheStore would be:

interface CacheStore {
    /// Sets a name/value pair for a given session.
    void set(string id, string name, ubyte[] value, ubyte[] delegate(ubyte[] value) ser, ubyte[] delegate(ubyte[] value) deser);

    /// Returns the value for a given session key.
    Variant get(string id, string name, ubyte[] value, ubyte[] delegate(ubyte[] value) ser, ubyte[] delegate(ubyte[] value) deser);

    /// Determines if a certain session key is set.
    bool isKeySet(string id, string key);

    /// Terminates the given session.
    void destroy(string id);

    /// Deletes the specified key
    void deleteKey(string id, string name);

    /// Iterates all key/value pairs stored in the given cache manager.
    int delegate(int delegate(ref string key, ref ubyte[] value)) iterateKeys(string id);
}

And SessionStore:

interface SessionStore : CacheStore {

    /// Maximum Session ID bytes
    uint szSID;

    /// Creates a new session.
    Session create();

    /// Opens an existing session.
    Session open(string id);

    /// Creates a new Session object which sources its contents from this store.
    protected final Session createSessionInstance(string id = null)
    {
            if (!id.length) {
                    ubyte[64] rand;
                    g_rng.read(rand);
                    id = cast(immutable)Base64URLNoPadding.encode(rand)[0..szSID];
            }
            return Session(this, id);
    }
}

A CacheVar! could then be added to vibe.web.web, with a Cache object in HTTPServerRequest.

From my latest estimates, this would allow me to start building the cache library as a separate package.

s-ludwig commented 10 years ago

Unfortunately (we had that discussion already), template functions are not virtual. This means that they won't work as interface functions that are to be implemented in a derived class.
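
For illustration, a minimal, self-contained sketch of the problem (not code from this thread): a template method declared in an interface is implicitly non-virtual, so a look-alike method in an implementing class only shadows it and is never reached through the interface reference.

import std.stdio;

interface Serialization {
    // Template methods are never virtual; this one has a body so the example links.
    ubyte[] serialize(T)(T value) { return cast(ubyte[])"interface".dup; }
}

class JsonSerializer : Serialization {
    // This does NOT override the interface template - it merely shadows it.
    ubyte[] serialize(T)(T value) { return cast(ubyte[])"json".dup; }
}

void main()
{
    Serialization s = new JsonSerializer;
    // Resolved at compile time against the static type, so the interface's
    // implementation is chosen, not JsonSerializer's.
    writeln(cast(string)s.serialize(42)); // prints "interface"
}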

etcimon commented 10 years ago

I guess that means we'll have to use a global type tuple to be able to turn that into a virtual function.

alias Tuple!(int, long, float) GTYPES;

class JsonSerializer : Serialization {
    ubyte[] serialize(TypeVessel val){
        foreach (T; GTYPES){
            if (T.toHash() == val.typeHash)
                serialize!(T)(val.contents);
        }
    }
}

Where serialize!T is a placeholder to simplify this explanation. I just wish we had a trait to get all of the program's types.
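
For illustration, a self-contained sketch of that idea (hypothetical names, and a TypeInfo comparison instead of the "type hash" from the pseudo-code above): dispatching at runtime over a compile-time list of registered types.

import std.conv : to;
import std.meta : AliasSeq;
import std.stdio;
import std.variant : Variant;

// Compile-time list of "registered" types.
alias GTypes = AliasSeq!(int, long, double, string);

string serializeAny(Variant val)
{
    // Unrolled at compile time into one branch per registered type.
    foreach (T; GTypes)
    {
        if (val.type == typeid(T))
            return val.get!T.to!string; // stand-in for serialize!T
    }
    throw new Exception("unregistered type: " ~ val.type.toString());
}

void main()
{
    writeln(serializeAny(Variant(42)));      // 42
    writeln(serializeAny(Variant("hello"))); // hello
}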

etcimon commented 10 years ago

I'm going to need some help because I'm not so good with Tuples...

alias BaseGTypes = TypeTuple!(uint, int, long, char, wchar, dchar ...);

template Expand(T)
{
    static if (isTuple!T)
        alias T.Types Expand;
    else
        alias T Expand;
}

template AddGTypes(TL...)
{
    static if (TL.length == 0)
        alias BaseGTypes AddGTypes;
    else
        // Finished loop, return self and T
        alias TypeTuple!(Expand(BaseGTypes), TL[0..$]) AddGTypes;
}

template GTypes(TL...){
    alias GTypes = AddGTypes!(TL[0..$]);
}

mixin template RegisterTypesImpl(TL...){
    alias LimitedVariant = Algebraic!(GTypes!(TL[0..$]));
}

template RegisterTypes(TL...){
    mixin RegisterTypesImpl!(TL[0..$]);
}

interface SerializeVisitor {
    foreach(T ; GTypes){
        mixin("(" ~ T.stringof ~ " ...");
    }
}

The goal here is to create a "LimitedVariant" at compile-time using a RegisterTypes! call in the client applications. The Session would get/set this variable from the SessionStore with get(T)(string id, string name, LimitedVariant value).
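
For illustration, a rough sketch of what RegisterTypes!/LimitedVariant could boil down to with std.variant.Algebraic (the names and the ChatMessage type are made up for this example, not an actual vibe.d API):

import std.meta : AliasSeq;
import std.variant : Algebraic;

// Base types always available, plus whatever the application registers.
alias BaseGTypes = AliasSeq!(uint, int, long, double, string);

template LimitedVariantOf(ExtraTypes...)
{
    alias LimitedVariantOf = Algebraic!(AliasSeq!(BaseGTypes, ExtraTypes));
}

struct ChatMessage { string author; string text; }

void main()
{
    alias LimitedVariant = LimitedVariantOf!ChatMessage;
    LimitedVariant v = ChatMessage("alice", "hi");
    assert(v.peek!ChatMessage !is null);
}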

The SerializeVisitor interface would be instantiated in the SessionStore during construction and visited during serialization of the LimitedVariant. Since this interface automatically includes all registered types, looping over GTypes would allow implementing it as well, maybe with some helpers that let people write:

class JsonSerializeVisitor : SerializeVisitor {
    MixinSerializeFunction!("serializeToJson");
    MixinDeserializeFunction!("deserializeJson");
}

Can you see any constraints with this option?

etcimon commented 10 years ago

I started a test and it seems to work correctly so far; I still need to extract the functions in the interface to add them as handlers for the visitation. Anyway, here's the file:

https://github.com/GlobecSys/vibe.d/blob/master/tests/variant-serialization.d

I'd like to know whether this concept is worth pursuing to solve the problem we've been discussing; I wouldn't want to be wasting time on it otherwise.

s-ludwig commented 10 years ago

This could be a good compromise, but it doesn't work on the framework scale because of the reverse dependency it introduces (e.g. the session store implementation will provide its own JsonSerializeVisitor, but needs SerializeVisitor to derive from, which is defined in myapplication, which it doesn't/can't import).

One possibility to solve the serialization issue is to mirror the interface used for vibe.data.serialization:

interface RuntimeSerializer {
    void write(int val);
    void write(float val);
    void write(double val);
    void write(string val);
    // ...
    void beginWriteArray();
    void endWriteArray();
    void beginWriteArrayEntry();
    void endWriteArrayEntry();
    void beginWriteDictionary();
    void endWriteDictionary();
    void beginWriteDictionaryEntry();
    void endWriteDictionaryEntry();
}

This would work, but still has a number of drawbacks:

It's the first point that concerns me a bit, and it's also why I didn't mention this approach until now. I still haven't had time to really think about this topic in depth, so maybe there are still better possibilities that don't involve virtual function calls per data field.

etcimon commented 10 years ago

reverse dependency it introduces (e.g. the session store implementation will provide its own JsonSerializeVisitor, but needs SerializeVisitor to derive from, which is defined in myapplication, which it doesn't/can't import).

I was thinking of adding mixin RegisterTypes!(A,B); in between an import vibe.registers and an import vibe.d. Thus, if SessionStore doesn't find the SerializeVisitor at the point of declaration, it'll import and run the register internally with the base types only. That would definitely fix everything, even from a framework point of view. Otherwise, I'd have to take some time to look at the RuntimeSerializer... Do you want me to show you a fully functional example with the SerializeVisitor? I'm tweaking a few things now, but it's almost there.

s-ludwig commented 10 years ago

Hm, you mean like adding a declaration between two #include directives in C/C++? That won't work in D; you always have to explicitly import a module to have its declarations visible.

etcimon commented 10 years ago

Oh... so the only way this solution would work is with a config/types.d in vibe, but that's unacceptable, right?

s-ludwig commented 10 years ago

Yeah, that wouldn't really fit within the way the package system is organized. I think the best thing currently would be to make a quick benchmark of serializing a few different types once using JsonStringSerializer directly and once using a RuntimeSerializer in-between. If the overhead is in fact not very high (let's say below 10%), then it may actually work...
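
For illustration, a rough benchmark harness for such a comparison, using std.datetime.stopwatch.benchmark; serializeDirect and serializeViaProxy are placeholders for the two code paths being compared, not existing functions:

import std.datetime.stopwatch : benchmark;
import std.stdio : writefln;

void serializeDirect()   { /* JsonStringSerializer used directly */ }
void serializeViaProxy() { /* same data, routed through a RuntimeSerializer */ }

void main()
{
    auto results = benchmark!(serializeDirect, serializeViaProxy)(100_000);
    auto direct  = results[0].total!"usecs";
    auto proxied = results[1].total!"usecs";
    writefln("direct: %s us, proxied: %s us, overhead: %.1f%%",
        direct, proxied, 100.0 * (proxied - direct) / direct);
}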

etcimon commented 10 years ago

The serialization interface is currently entirely templated, so let's say Session.get received a T struct - how would it use the write interfaces with it?

s-ludwig commented 10 years ago

The idea is that RuntimeSerializer implements the compile time interface (duck typing) that serialize!() requires and that there is a proxy class that uses an existing serializer and feeds it with dummy types for objects/arrays. Simplified:

interface RuntimeSerializer {
    enum isSupportedValueType(T) = is(T == int) || is(T == float) || ...;

    void writeValue(int val);
    void writeValue(float val);
    // ...
    void beginWriteArrayGeneric(size_t length);
    // ...

    final void beginWriteArray(T)(size_t length) { beginWriteArrayGeneric(length); }
    // ...
}

class ProxyRuntimeSerializer(alias Serializer) : RuntimeSerializer {
    Serializer serializer;

    void writeValue(int value) { serializer.writeValue(value); }
    void writeValue(float value) { serializer.writeValue(value); }
    // ...
    void beginWriteArrayGeneric(size_t length) { serializer.beginWriteArray!(void[])(length); }
    // ...
}

This assumes that the underlying serializer doesn't need the exact array/dictionary types, so it won't work for all types of serializers, but it will work for all existing ones.
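
For illustration, a condensed, self-contained version of that pattern (all names here are invented for the example, not the actual vibe.data.serialization API): a duck-typed consumer that works with any serializer at compile time, plus a runtime interface and a proxy class that forwards to a concrete serializer.

import std.array : Appender;
import std.conv : to;
import std.stdio;

// Compile-time ("duck-typed") consumer: accepts anything with writeValue.
void writeAll(S, T...)(ref S ser, T values)
{
    foreach (v; values) ser.writeValue(v);
}

// Concrete serializer known at compile time.
struct StringSerializer
{
    Appender!string output;
    void writeValue(T)(T v) { output ~= v.to!string ~ ";"; }
}

// Runtime interface with one overload per supported value type.
interface RuntimeSerializer
{
    void writeValue(int v);
    void writeValue(string v);
}

// Proxy that satisfies the runtime interface by forwarding to the templated one.
class ProxySerializer(S) : RuntimeSerializer
{
    S* impl;
    this(S* impl) { this.impl = impl; }
    void writeValue(int v)    { impl.writeValue(v); }
    void writeValue(string v) { impl.writeValue(v); }
}

void main()
{
    StringSerializer s;
    writeAll(s, 1, 2);                                    // static path, no virtual calls
    RuntimeSerializer r = new ProxySerializer!StringSerializer(&s);
    r.writeValue("three");                                // dynamic path, one virtual call per field
    writeln(s.output[]);                                  // 1;2;three;
}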

etcimon commented 10 years ago

So the SessionStore will have the Serializer type available as a template parameter and will send it to ProxyRuntimeSerializer?

etcimon commented 10 years ago

I'll take some time to think about this.

In the meantime, I'd also ask you to think about the other solution, but in a way that uses a head/ or objects/ folder containing .di files (like views/ contains .dt files) to supplement alias BaseGTypes = TypeTuple!(uint, int, long, char, wchar, dchar, string, char[]); for the LimitedVariant. That would seem to produce no overhead if the folder is empty, and it would easily support any serialization interface, at the cost of a couple more steps to make it work in a myapplication.

s-ludwig commented 10 years ago

So the SessionStore will have the Serializer type available as a template parameter and will send it to ProxyRuntimeSerializer?

Exactly, RedisSessionStore would for example simply create a hard-coded ProxyRuntimeSerializer!JsonStringSerializer (or whatever fits best), while MemorySessionStore would simply continue to use Variant.

struct Session {
    void set(T)(string name, T value)
    {
        m_store.set(m_id, name, Variant(value), (s) { serialize(s, value); });
    }
}

class SessionStore {
    void set(string sid, string name, Variant value, scope void delegate(RuntimeSerializer) serialize_callback);
}

class MemorySessionStore {
    void set(string sid, string name, Variant value, scope void delegate(RuntimeSerializer) serialize_callback)
    {
        m_sessions[sid][name] = value;
    }
}

class RedisSessionStore {
    void set(string sid, string name, Variant value, scope void delegate(RuntimeSerializer) serialize_callback)
    {
        scope serializer = new ProxyRuntimeSerializer!JsonStringSerializer;
        serialize_callback(serializer);
        m_redis.set(name, serializer.serializedResult);
    }
}

s-ludwig commented 10 years ago

in a way that uses a head/ or objects/ folder containing .di files (like views/ contains .dt files) to supplement

I absolutely don't like import tricks*. They only work under very limited conditions and often have a lot of possible pitfalls. This is definitely something that I wouldn't include in the framework - the shared static this() trick with vibe.d providing main() already caused a lot of headaches, and this seems like it would be even worse.

* The views/ folder for string imports is a very unfortunate exception that is still necessary for overriding Diet templates. But if I could redo it now using a hygienic approach, I'd do it (would be possible, of course, but the breakage that this would cause would be insane).

etcimon commented 10 years ago

have a lot of possible pitfalls.

I won't challenge you on that, you have much more experience as a library writer in D.


RedisSessionStore(alias Serializer) would be great but I'm a little worried about the struct Session [..] (s) { serialize(s, value); }

We would import vibe.data.serialization in Session to make that work? I'm wondering if that's an acceptable overhead in the first place

etcimon commented 10 years ago

Reading the vibe.data.serialization library, it would need to be modified to remove the template parameters e.g. readDictionary!T() -> readDictionary()?

s-ludwig commented 10 years ago

The additional import would not pose any additional overhead (the compile time increase should really be negligible). vibe.data.serialization is also already used by the vibe.http package (e.g. HTTPServerResponse.writeJsonBody), so it also isn't a problem dependency-wise.

etcimon commented 10 years ago

Sorry, I just caught up with everything in there. Only the proxy needs to adapt, so that's fine. It's very well designed; I'll get to work right away.

s-ludwig commented 10 years ago

Reading the vibe.data.serialization library, it would need to be modified to remove the template parameters e.g. readDictionary!T() -> readDictionary()?

ProxyRuntimeSerializer would need to do the same thing as for beginWriteArray and use some dummy type. The existing serializers don't depend on the exact type, so it will work fine for those. But it's something that at least would need to be documented.

interface RuntimeSerializer {
    void readDictionaryGeneric(scope void delegate(string) entry_callback);
    final void readDictionary(T)(scope void delegate(string) entry_callback)
    {
        readDictionaryGeneric(entry_callback);
    }
}
class ProxyRuntimeSerializer : RuntimeSerializer {
    void readDictionaryGeneric(scope void delegate(string) entry_callback)
    {
        serializer.readDictionary!(string[string])(entry_callback);
    }
}

s-ludwig commented 10 years ago

RedisSessionStore(alias Serializer) would be great

RedisSessionStore(alias Serializer = JsonStringSerializer) would probably be a good default, but it indeed makes sense in that case to leave it customizable for someone who wants to optimize space requirements using a more compact format.

etcimon commented 10 years ago

Just as a side note, will I be hearing about an interface CacheStore for interface SessionStore : CacheStore and a Cache object now that this is closer to a final API? I want to simulate TLB concepts in an MVC/DB framework, because vibe.d looks very much like a kernel to me.

s-ludwig commented 10 years ago

Just as a side note, will I be hearing about an interface CacheStore for interface SessionStore : CacheStore and a Cache object now that this is closer to a final API?

Hm, like I've said, this is something that doesn't fit into the current timeline, and the dependencies that this creates are also an issue. If you want, you can of course already experiment with it, but this is a change that would at least have to wait until the first stable branch has been started. Please take into account that, apart from a few details, the vibe.d API is already considered feature complete, and bigger changes and additions (if not necessary) are not planned for a while (the vibe.web.web system was an exception, because that change had already been planned a long time ago).

But apart from that, I've also not really understood what the intention of the separate CacheStore really is. What would you store in it apart from session data, and what lifetime does it have? I could see a use for a nicer database layer, but that (w/sh)ouldn't have anything to do with HTTPServerRequest.

etcimon commented 10 years ago

It's not session data; the storage works exactly like session data, but the data is more like a persistent TaskLocal! which stays the same from one request to another. In other words, it doesn't change depending on the user, but it may change depending on a group or unique ID if specified.

Examples would be a chat room, a videoconference, a live shared document, a newsfeed, etc.

Cache!ChatRoom chatRoom;

chatRoom.uuid = session["chat_uuid"];

render("chatlog.dt", chatRoom.messages); // prints the chat messages in this user's group

Cache is useful because access is around 100 times faster than a database request.

s-ludwig commented 10 years ago

depending on a group or unique ID.

But that doesn't have anything to do with HTTP, so it also shouldn't be there in the code. Simply make it a global or class variable of the application instead and keep it as a separate layer between the application and the DB driver.

Any kind of database abstraction is currently out of the question for the vibe.d core package, because it is a topic that is much too big/involved once you start to properly generalize it (as much as I'd like to have something like that, too, but it really doesn't fit in my schedule - at all). A simple key-value store is nice and all, but apart from the most simple applications, it will be necessary to add support for more complex queries and that's when things start to get hairy...

etcimon commented 10 years ago

doesn't have anything to do with HTTP,

It's an abstraction layer for memory operations that is used by SessionStore. The primitive itself is quite useful, because it lets me create the same SessionStore for both session-specific and global data, which further down the inheritance tree will include (for me) a common database abstraction that allows an MVC framework to be used on top of it.

I could create separate stores too, but they would have to be synced, and it would be more logical for both of us if you simply split it this way: interface CacheStore { Variant get(); void set(); } interface SessionStore : CacheStore { Session create(); Session open(); }

s-ludwig commented 10 years ago

I hear you, but the problem is that although it may be useful and may avoid code duplication across library boundaries, it also adds to vibe.d's public API. As such it must be very well thought out, particularly w.r.t. any possible future functionality in that area. I won't add a system for which I can't say how future-proof it will be (and I can't say that without putting a considerable amount of time into thinking about the general topic). Also, the code duplication is really minimal (really, it's basically just a get/set pair implemented in terms of the DB driver), and there isn't - or shouldn't be - any overlap between session data and other CacheStore data, so syncing between them isn't necessary either.

etcimon commented 10 years ago

Of course, I'll implement it on my end then - I really thought it was a necessity in vibe.d because it adds distributed persistence to TaskLocal and it's so simple, but I guess that's one more perk of w3vibe ;)

etcimon commented 10 years ago

I think the serialization speed will be quite a bit faster if JsonStringSerializer uses a manual allocator - or FreeLists - from vibe.utils.memory rather than the GC. I'd make sure that's settled before running the benchmarks; I have some doubts about runtime optimizations regarding allocations. What do you think?

s-ludwig commented 10 years ago

JsonStringSerializer (hopefully) doesn't allocate at all. Instead, it takes an input/output range where it stores the data. For benchmarking purposes it may make sense to define a dummy output range or use NullOutputStream.
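
For illustration, a minimal null-sink output range that could be used for such a benchmark (a generic sketch; vibe.d's NullOutputStream serves the same purpose when a full OutputStream is required):

import std.range.primitives : isOutputRange;

struct NullSink
{
    size_t written;
    void put(char c)          { ++written; }                  // element-wise put
    void put(in char[] chunk) { written += chunk.length; }    // slice put
}

static assert(isOutputRange!(NullSink, char));

unittest
{
    NullSink sink;
    sink.put("hello");
    assert(sink.written == 5);
}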

etcimon commented 10 years ago

It seems to be made for an appender; I wonder if there's a way to know the size beforehand or use an internal freelist for the appender. Also, the m_compositeStack seems to be built with sequential and individual appends; I don't know if D would make repeated allocations at runtime for that.

s-ludwig commented 10 years ago

Appender is a kind of output range, but where do you see something specific for it? At least there is no fundamental reason why that should be the case. NullOutputStream exposes an output range interface and should work as a replacement. Then there is also vibe.utils.array.AllocAppender as a drop-in replacement for Appender.

m_compositeStack is part of JsonSerializer, not JsonStringSerializer. It shouldn't be used here, as it is not very efficient.

etcimon commented 10 years ago

I don't want to sound harsh about extending Session for Redis and serialization, but I found a pretty good solution for this that I can apply myself, which boils down to having a public host and session_id in vibe.web.

It allows me to have: Cache!(CacheStore!Serializer)

Where Cache.get!T(id = sessionID) is used for the session cache, and Cache.get!T(id = host[_uuid]) for the domain-specific cache, with an optional UUID for globally grouped data.

As far as I'm concerned, anything that needs serialization and distribution can just import a more general Cache module. This is how I'm implementing it in w3vibe's vibe.web (I also had to change the SessionVar):

private struct RequestContext {
    HTTPServerRequest req;
    HTTPServerResponse res;
}
private {
    TaskLocal!RequestContext s_requestContext;
    TaskLocal!string s_sessID;
    TaskLocal!string s_host;
}

private void handleRequest(string M, alias overload, C)(HTTPServerRequest req, HTTPServerResponse res, C instance, WebInterfaceSettings settings)
{

    if (!req.session)
        res.startSession(settings.urlPrefix, settings.options);
    s_sessID = req.session.id;

    s_host = req.fullURL.host;

Global properties _session_id and _host are needed for this to work.

etcimon commented 10 years ago

It looks like there's a memory leak with the GC, and VibeManualMemoryManagement doesn't play well with my use cases of AllocAppender - it gives out gibberish when used in combination with the bodyWriter (or not?). See the print_toJson() function in w3v.core.core here https://github.com/GlobecSys/w3Vibe/blob/master/w3v/core/core.d

If it doesn't print random strings, it gives me core.exception.AssertError@vibe.utils.array(74): Assertion failure

Should I open a new issue for this or is it because I'm not using it correctly?

etcimon commented 10 years ago

I found the bug in vibe.http.common's ChunkedOutputStream's AllocAppender - AllocAppender's reuseData should behave like freeData when a manual allocator is used. Also, the ChunkedOutputStream doesn't reserve anything?

s-ludwig commented 10 years ago

First off, and not really related to the bug: OutputStream, and thus res.bodyWriter, is itself a valid output range, so you can pass it directly to serializeJson without buffering the whole result in an appender (BTW, there's also no need to create a serializer manually here; you can use the range-based overload). However, there is actually res.writeJsonBody(val), which is supposed to do exactly what print_toJson does.
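
For illustration, what the suggested direct approach looks like in a request handler (a sketch using the standard vibe.d handler signature; exact module paths may differ across versions):

import vibe.http.server;

void handler(HTTPServerRequest req, HTTPServerResponse res)
{
    auto data = ["status": "ok", "user": "alice"];
    // Serializes straight into the response and sets the JSON content type,
    // without buffering the result in an appender first.
    res.writeJsonBody(data);
}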

reuseData is intended specifically to do what it does - avoid freeing the data - in situations where that is known to be OK. So if a manual allocator is used, reset(AppenderResetMode.freeData) needs to be called at some point, or keepData used in conjunction with freeing the data manually; that's just the semantics of reset().

But the assertion at vibe.utils.array line 74 indicates that the allocator's realloc is actually buggy, so that's definitely an issue. Do you know which allocator chain was used exactly - manualMemoryAllocator(), right? Was VibeManualMemoryManagement defined?

etcimon commented 10 years ago

However, there is actually res.writeJsonBody(val), which is supposed to do exactly what print_toJson does.

I needed PrettyPrint as a parameter to print_toJson, and it should append to an HTML output without changing the content type or status.

serializeJson without buffering the whole result in an appender

You mean serializeToJson? It uses serialize!JsonStringSerializer, but again without a PrettyPrint parameter. Also, I was using an AllocAppender instead of the bodyWriter output range because I was trying to solve this bug. I'll revert to the correct method.

allocator's realloc is actually buggy

I found it to work when using freeData instead of reuseData there, and yes, I'm using manualMemoryAllocator(). The GCAllocator is also buggy, because the data never gets freed and my process keeps accumulating memory (I can see that in the task manager's process list; memory usage increases when I hold refresh and never goes down).

I can argue against this use of reuseData in the AllocAppender, because with freeData the data will be recycled (reused) anyway in vibe.utils.memory's FreeList.

s-ludwig commented 10 years ago

I needed PrettyPrint as a parameter to print_toJson, and it should append to an html output without changing the content-type or status.

I see. Sorry, missed that. At least the pretty print parameter would probably make sense to have for serializeToJson so that that part doesn't need to be done manually.

The GCAllocator is also buggy, because the data never gets freed and my process accumulated memory (I can see that in the task manager's processes, memory usage increases when I hold refresh and never goes down).

Can it be that references to that memory are accumulated somewhere (in your code, directly or indirectly)? The GCAllocator just directly calls the GC, so there is not much room for memory leak bugs. BTW, the memory usage for GC memory generally never goes down AFAIR, because the GC doesn't give unused memory back to the OS (it only recycles it for new allocations). That may have changed by now, though. But if you see a constant growth in memory usage, that's of course a good sign for an actual leak.

and yes I'm using manualMemoryAllocator()

It looks like MallocAllocator is missing the bitcopy workaround that GCAllocator.realloc uses for changed alignment. I'll add that.

etcimon commented 10 years ago

If it was a memory leak bug on my end, it would be there with Malloc as well, wouldn't it? I was seeing memory constantly increasing when I used the userprofiles example in w3vibe, back when I used Diet templates and print_toJson combined. I don't know if it's still there since I switched to temple, though. If I can figure out a good AllocAppender OutputStream, I'd put Diet templates back in there with print_dt, print_tpl and read_dt, read_tpl.

As for the cache solution, I'm thinking of using This.session.get/set or This.cache.session.get/set and This.cache.global. This.cache.session.get!T("key") would then forward to This.cache.global.get!T("id", "key"). I think this gives w3Vibe a good interface for PHP-like development but with some never-yet-seen OOP. I'm eager to see the hooks working; setting up plugins for other libraries would be built-in functionality thanks to D and vibe!

s-ludwig commented 10 years ago

If it was a memory leak bug on my end, it would be there with Malloc as well wouldn't it?

Not necessarily. GCAllocator.free is a no-op for safety reasons, while MallocAllocator.free will free the data even if it is still referenced.

If I can figure out a good AllocAppender OutputStream

There is already a wrapper for AllocAppender - vibe.stream.memory.MemoryOutputStream - nothing fancy, though.

etcimon commented 10 years ago

Could this have been the cause of the Gzip issue as well?

etcimon commented 10 years ago

I just tested it with the new alignment workaround but without my changes, and it gives me the same error (line 74, realloc !=).

This is what I have to do to fix this: vibe.http.common : line 462

    m_buffer.reset(AppenderResetMode.freeData);
    m_buffer.reserve(512);

I also give it a reserve at line 402

    m_buffer.reserve(512);

This is in ChunkedOutputStream. Without this I can't call the bodyWriter twice...

etcimon commented 10 years ago

Wow. I can't believe I spent an hour on such a minor change.

array.d : 72 should be

        debug auto olddata = m_data.idup; // added debug and .idup here
        m_data = cast(ElemType[])m_alloc.realloc(m_data, (n+amt)*E.sizeof);
        debug assert(m_data[0 .. olddata.length] == olddata);

The first 4 bytes of olddata were zeroed in my use case otherwise. My personal explanation is that olddata holds the old FreeList allocation and these bytes are somehow reallocated and changed by the time it gets to the assert. Could this be because of concurrency?

s-ludwig commented 10 years ago

Should work now. I've made a small test with repeated random realloc calls and the results have been consistent.
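
For illustration, a generic random-realloc stress test along those lines (not the actual test that was used), built on the C runtime's realloc so it is self-contained:

import core.stdc.stdlib : free, realloc;
import std.algorithm.comparison : min;
import std.random : uniform;

void main()
{
    ubyte[] buf;
    size_t len = 0;
    foreach (i; 0 .. 10_000)
    {
        size_t newlen = uniform(1, 4096);
        auto p = cast(ubyte*)realloc(buf.ptr, newlen);
        assert(p !is null);
        buf = p[0 .. newlen];
        // The preserved prefix must still hold the known pattern...
        foreach (j; 0 .. min(newlen, len))
            assert(buf[j] == cast(ubyte)(j & 0xFF));
        // ...then refill the whole buffer with it for the next round.
        foreach (j; 0 .. newlen)
            buf[j] = cast(ubyte)(j & 0xFF);
        len = newlen;
    }
    free(buf.ptr);
}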

etcimon commented 10 years ago

Didn't fix it for me. It does seem like it should work, but the AllocAppender in ChunkedOutputStream never calls the Mallocator's realloc - I used logTrace and it only calls Mallocator.alloc.

s-ludwig commented 10 years ago

Oh sorry, of course you still need your fix from above. I'll commit that, too.

etcimon commented 10 years ago

I won't really need the fix from above if the AllocAppender's realloc forwards through AFL and FL like this:

    return m_baseAlloc.realloc(mem, sz);

Which of course is not necessary if the AllocAppender simply does freeData.

s-ludwig commented 10 years ago

Well, the fix is necessary anyway because technically the old pointer is invalid after the call to realloc (if it's not the same pointer). I think I'll change the assertion to compare a checksum instead of the full buffer contents.
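
For illustration, a sketch of the checksum idea (hypothetical, not the actual committed change): hash the old contents before the realloc and verify the copied-over prefix afterwards, so the old pointer never needs to be read after it has been invalidated.

import core.stdc.stdlib : free, malloc, realloc;
import std.digest.crc : crc32Of;

void[] growChecked(void[] data, size_t new_size)
{
    debug auto oldsum = crc32Of(cast(const(ubyte)[])data);
    auto p = realloc(data.ptr, new_size);
    assert(p !is null);
    auto grown = p[0 .. new_size];
    debug assert(crc32Of(cast(const(ubyte)[])grown[0 .. data.length]) == oldsum,
        "realloc corrupted the copied-over contents");
    return grown;
}

unittest
{
    auto buf = cast(ubyte*)malloc(16);
    buf[0 .. 16] = 0xAB;
    auto grown = cast(ubyte[])growChecked(buf[0 .. 16], 64);
    assert(grown[0] == 0xAB && grown[15] == 0xAB);
    free(grown.ptr);
}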

etcimon commented 10 years ago

A couple of things still feel wrong about the AutoFreeList's realloc (which is what's being used anyway), because it doesn't do the same pointer alignment checks.

void[] realloc(void[] data, size_t sz)
{
    // TODO: optimize!
    //logTrace("AFL realloc");
    auto newd = alloc(sz);
    assert(newd.ptr+sz <= data.ptr || newd.ptr >= data.ptr+data.length, "New block overlaps old one!?");
    auto len = min(data.length, sz);
    newd[0 .. len] = data[0 .. len];
    free(data);
    return newd;
}

s-ludwig commented 10 years ago

It doesn't have to do the checks because it can rely on the base allocator to do them. Only the underlying system calls don't guarantee alignment. AFAICS AutoFreeListAllocator.realloc looks pretty unsuspicious, but I've applied some optimizations and ran it through a random data test run. It seems to run fine, at least if only realloc is used.

If there are still issues, can you upload a small reproduction case, so that I can debug on the actual failing case? With just trial and error it will probably take a while to fix this.