Kotlin / kotlinx-io

Kotlin multiplatform I/O library
Apache License 2.0

Random access for IOBuffer or a separate random access structure #39

Closed altavir closed 1 year ago

altavir commented 5 years ago

We have a rather unusual application for Buffers (not actually IO). We use buffers in a mathematical application for high-performance direct allocation of objects in memory. For that we need to be able to allocate one or several contiguous memory buffers and to create views (read-only or read-write) with a given offset and size. The whole buffer is allocated and released as a unit.

Currently, I have the following ideas:

  1. Introduce random-access read-only and read-write structures. I do not think we should keep ByteBuffer's notion of a position as part of the buffer, because it complicates things. The Buffer obviously won't inherit Input or Output.

  2. Introduce indexed read and (for the mutable version) write operations on primitives.

  3. Introduce specifications for reading and writing custom objects, as is done in the kmath prototype. These specifications could be used either as contexts for specific operations or passed as parameters to read/write operations on buffers. They could also be used to create NIO-like view buffers.

  4. Introduce windowed buffer views: a sub-buffer that inherits the parent's read/write permissions but can see only a limited part of the parent buffer. These could be used for safe operations on a buffer.

Further improvement could be achieved by using inline classes on top of buffer views. If the idea proves to be good, it could later be implemented as a compiler plugin, which would generate specifications automatically and allow creating non-boxing arrays.
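To make idea 4 concrete, here is a minimal sketch of a windowed view (all names are hypothetical; this is not kotlinx-io API). The view shares the parent's storage but can only address indices inside its own window:

```kotlin
// Hypothetical sketch of idea 4: a windowed view that shares the parent's
// storage but can only see [offset, offset + size).
class Buffer(
    private val data: ByteArray,
    private val offset: Int = 0,
    val size: Int = data.size
) {
    fun get(index: Int): Byte {
        require(index in 0 until size) { "index $index out of bounds for size $size" }
        return data[offset + index]
    }

    fun set(index: Int, value: Byte) {
        require(index in 0 until size) { "index $index out of bounds for size $size" }
        data[offset + index] = value
    }

    // A sub-buffer over the same backing array; no copying happens.
    fun view(viewOffset: Int, viewSize: Int): Buffer {
        require(viewOffset >= 0 && viewOffset + viewSize <= size)
        return Buffer(data, offset + viewOffset, viewSize)
    }
}
```

Writes through the view are visible through the parent, since both index into the same array.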

altavir commented 5 years ago

@elizarov just pointed me in the direction of Memory, which seems to do exactly what I need. Now I need to understand whether there are ways to easily allocate it.

elizarov commented 5 years ago

It is not supposed to be allocated easily. It is a resource that has to be carefully managed. Right now we are thinking that a scoped primitive that gives you memory for a while should be OK. Tell us more about your use case, though.

altavir commented 5 years ago

I use manual placement of objects in a JVM ByteBuffer to avoid boxing. Currently, it is solved by specialized readers and writers like these ones, which emulate value types. Current tests show that this almost completely eliminates boxing overhead on non-primitive buffers (tested for complex numbers).

Currently I use a JVM ByteBuffer, but obviously I can't move it to multiplatform, since IOBuffer works quite differently. Memory seems to do the trick (reading and writing primitives, creating non-copying view slices, etc.), and it seems to be backed by ByteBuffer. But of course I will need some way to allocate it and keep it allocated while the Buffer that holds it is alive.
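As an illustration of the technique (hypothetical names; the actual kmath readers/writers are the ones linked above), placing complex numbers directly in a ByteBuffer looks roughly like this:

```kotlin
import java.nio.ByteBuffer

// Hypothetical sketch: store complex numbers flat in a ByteBuffer instead
// of allocating a boxed object per element.
data class Complex(val re: Double, val im: Double)

val COMPLEX_SIZE: Int = 2 * Double.SIZE_BYTES  // 16 bytes per element

fun ByteBuffer.writeComplex(index: Int, value: Complex) {
    // Absolute puts: no position is consumed, so access is random, not sequential.
    putDouble(index * COMPLEX_SIZE, value.re)
    putDouble(index * COMPLEX_SIZE + Double.SIZE_BYTES, value.im)
}

fun ByteBuffer.readComplex(index: Int): Complex =
    Complex(
        getDouble(index * COMPLEX_SIZE),
        getDouble(index * COMPLEX_SIZE + Double.SIZE_BYTES)
    )

fun main() {
    val buffer = ByteBuffer.allocateDirect(100 * COMPLEX_SIZE)
    buffer.writeComplex(0, Complex(1.0, -2.0))
    println(buffer.readComplex(0))  // Complex(re=1.0, im=-2.0)
}
```

The storage itself holds no object headers; boxing only happens at the moment a value is read back out (which escape analysis can often eliminate in hot loops).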

altavir commented 5 years ago

In the future, I will probably want to connect to something like Apache Arrow for cross-language data transport and use its memory model, but that is currently out of scope.

cy6erGn0m commented 5 years ago

The problem is that different platforms have different memory management, so it is unclear how we can define an MPP common allocator for Memory that would be fully functional and relatively safe. This is why the only planned function is something like this (IoBuffer will always have a Memory inside):

```kotlin
inline fun <R> withBuffer(size: Int, block: IoBuffer.() -> R): R
```

altavir commented 5 years ago

I think that for most cases a simple wrapper on top of a ByteArray will do. Why not introduce an interface like RandomAccessBuffer and make Memory implement it? Operations like primitive get/set could be added as extensions on the interface instead of extensions on Memory (Memory could have its own set of extensions overriding those of the interface). Then we could add other implementations, like one wrapping a ByteArray, or even one over Arrow storage.
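A rough sketch of what I mean (hypothetical names; `RandomAccessBuffer` does not exist in kotlinx-io):

```kotlin
// Hypothetical sketch of the proposal: a common interface with multi-byte
// primitive access layered on top of it as extensions.
interface RandomAccessBuffer {
    val size: Int
    fun loadByteAt(index: Int): Byte
    fun storeByteAt(index: Int, value: Byte)
}

// Generic (byte-at-a-time, big-endian) extension; an implementation such as
// Memory could shadow it with an optimized overload.
fun RandomAccessBuffer.loadIntAt(index: Int): Int =
    (0 until 4).fold(0) { acc, i ->
        (acc shl 8) or (loadByteAt(index + i).toInt() and 0xFF)
    }

// A ByteArray-backed implementation.
class ByteArrayBuffer(private val array: ByteArray) : RandomAccessBuffer {
    override val size get() = array.size
    override fun loadByteAt(index: Int) = array[index]
    override fun storeByteAt(index: Int, value: Byte) { array[index] = value }
}
```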

cy6erGn0m commented 5 years ago

Those extensions couldn't live on top of RandomAccessBuffer because they can't be implemented efficiently: all primitive get/set operations would be significantly slower (compared to `ByteBuffer.getShort/Int/Long`). The idea is that on the JVM, Memory is an inline class represented as a ByteBuffer at runtime, and all functions are inline, so any code written against Memory compiles to the corresponding bytecode working with ByteBuffer, and all HotSpot optimizations apply. Any kind of wrapping or handmade primitive-reading implementation would reduce performance.

altavir commented 5 years ago

Indeed, but we still need some kind of multiplatform implementation for this. There are several ways to solve it. One is to make the primitive read/write operations members instead of extensions. That would allow "slow" access for "slow" memory (ByteArray) and optimized access for ByteBuffer. It would probably work, but it is not very kotlinish.

Another way (the one I usually use in Kotlin) is to separate storage and access: you have a storage class like Memory with minimal functionality, and then an accessor class like MemoryReader or MemoryWriter that takes the actual Memory as a parameter and is created by a factory function like Memory.read(). This factory function could find out at runtime which Memory implementation is used and pick optimized access methods if they exist. It brings only minimal runtime overhead and looks quite simple from the user's side. We could also automatically free the memory once it has been initialized and no accessor holds it anymore. I can write a prototype later if you are interested.
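The storage/accessor split could look roughly like this (a minimal sketch under the naming assumptions above; none of these types are kotlinx-io API):

```kotlin
// Hypothetical sketch: Memory carries only the storage, while a reader
// chosen at runtime performs the actual access.
interface Memory { val size: Int }

class ByteArrayMemory(internal val array: ByteArray) : Memory {
    override val size get() = array.size
}

interface MemoryReader {
    fun readInt(offset: Int): Int
}

// Fallback reader for array-backed memory (byte-at-a-time, big-endian).
private class ByteArrayReader(private val memory: ByteArrayMemory) : MemoryReader {
    override fun readInt(offset: Int): Int =
        (0 until 4).fold(0) { acc, i ->
            (acc shl 8) or (memory.array[offset + i].toInt() and 0xFF)
        }
}

// The factory inspects the concrete Memory implementation once and returns
// an accessor optimized for it; only this dispatch is virtual.
fun Memory.reader(): MemoryReader = when (this) {
    is ByteArrayMemory -> ByteArrayReader(this)
    else -> error("No reader available for ${this::class}")
}
```

The runtime type check happens once per accessor, not once per read, which is why the overhead stays minimal.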

altavir commented 5 years ago

Here is the prototype: https://github.com/mipt-npm/kmath/tree/dev/kmath-memory/src/commonMain/kotlin/scientifik/memory It ended up very similar to the current IO implementation (I've stolen most of the JS part). The difference is that Memory is an interface and can have multiple implementations on the same platform, which could allow better flexibility in the future. For example, it is possible that we will need some special representation for shared memory when it becomes available.

Another feature (not really used yet) is a release mechanism. The idea is that Memory is initialized when the first reader or writer is taken from it (initialization can be made lazy) and released when all readers and writers have been released. This way one can control the memory release process on Native or in similar cases.

I currently did not implement array reads since I am not sure I understand the use case for them. They could be done via a MemorySpec. A MemorySpec could be optimized for a specific memory type: it could check the concrete memory type via MemoryReader::memory and use optimized access operations if the type matches. A user could also supply a MemorySpec optimized for a specific memory type.
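The release mechanism described above amounts to reference counting of accessors. A minimal sketch (hypothetical names, not the prototype's actual code):

```kotlin
// Hypothetical sketch: memory is considered live while at least one
// accessor is open, and released when the last one is closed.
class CountedMemory(private val onRelease: () -> Unit) {
    private var accessors = 0

    var released = false
        private set

    // Each open accessor is an AutoCloseable handle; closing the last one
    // triggers the release callback exactly once.
    fun openReader(): AutoCloseable {
        check(!released) { "memory already released" }
        accessors++
        return AutoCloseable {
            if (--accessors == 0) {
                released = true
                onRelease()
            }
        }
    }
}
```

A production version would also have to guard against double-closing a handle and against concurrent access, which this sketch omits for brevity.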

Dominaezzz commented 5 years ago

IMO, something like that can and should be implemented on top of the current Memory class, but not necessarily in kotlinx.io. It introduces unnecessary runtime indirection for what is supposed to be a thin abstraction over a platform-specific raw memory implementation. If in the future (like Project Panama) there's another implementation, it can be added as another actual module for the same target, similar to how ktor-client-* has multiple implementations for the same core client. Although it would be nice if one of the actuals could be your prototype, which would make everyone happy. Not sure if expect/actual would ever allow this use case.

I'm not sure if this is currently possible, as I haven't gotten to this stage in my project yet, but the MemorySpec bit might be achievable with kotlinx.serialization.

altavir commented 5 years ago

I can agree that this is not really an IO problem. But it seems to me that one Memory per platform does not cover all possible use cases: it is possible to have different memory variants on the same platform. Split actuals are not always a good solution, because you have to actually pick a different module and recompile everything to make the change. Of course, I can build everything on top of the existing Memory implementation and then add my own interface on top of it, but the inability to allocate memory in common code still remains. I do not see any memory indirection here. Maybe you mean virtual calls? The API adds a single additional virtual call, and I do not see how that could affect anything.

A compiler plugin to derive the MemorySpec could be done the same way as in kotlinx.serialization. I mentioned it before. Maybe even the current plugin could be tricked into doing it, but I am not sure. For mathematical tasks it is probably not needed (we work with a limited number of simple objects, and it is quite easy to implement a specification for each of them), but if Kotlin tries to implement a value-type surrogate through that, it is possible.

Dominaezzz commented 5 years ago

Will the new Memory class have methods to get/set in native byte order, as opposed to the current big-endian-only getters and setters?
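For context, the difference is easy to demonstrate with plain JVM NIO (this is not kotlinx-io API, just an illustration of the same four bytes read in both orders):

```kotlin
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Illustration: interpret the same four bytes as big-endian and as
// little-endian. ByteBuffer defaults to big-endian.
fun readBothOrders(bytes: ByteArray): Pair<Int, Int> {
    val buffer = ByteBuffer.wrap(bytes)
    val big = buffer.getInt(0)
    val little = buffer.order(ByteOrder.LITTLE_ENDIAN).getInt(0)
    return big to little
}

fun main() {
    val (big, little) = readBothOrders(byteArrayOf(1, 0, 0, 0))
    println("big-endian: $big, little-endian: $little")
    // big-endian: 16777216, little-endian: 1
}
```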

fzhinkin commented 1 year ago

We're rebooting kotlinx-io development (see https://github.com/Kotlin/kotlinx-io/issues/131); all issues related to the previous versions will be closed. Consider reopening this one if the issue remains (or the feature is still missing) in the new version.