-
Implement a log-structured memory allocator with virtual addresses on top of matras for tuples.
This is necessary to:
- use 4-byte addresses for tuples in memtx indexes
- support multiple read views ov…
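A minimal sketch of the kind of translation a 4-byte address enables (hypothetical two-level layout and field widths; matras itself translates through a deeper extent tree, and `u64` merely stands in for a tuple slot):

```rust
/// Hypothetical arena: tuples live in fixed-size extents, and a 4-byte
/// "virtual address" encodes (extent index, offset) instead of a raw
/// 8-byte pointer, halving the per-tuple cost in memtx indexes.
struct Arena {
    extents: Vec<Vec<u64>>, // u64 stands in for a tuple slot
}

impl Arena {
    /// Resolve a 32-bit virtual address to the stored value.
    fn translate(&self, vaddr: u32) -> u64 {
        let ext = (vaddr >> 16) as usize;    // high 16 bits: extent index
        let off = (vaddr & 0xFFFF) as usize; // low 16 bits: slot in extent
        self.extents[ext][off]
    }
}

fn main() {
    let mut arena = Arena { extents: vec![vec![0; 1 << 16]] };
    arena.extents[0][42] = 0xDEAD;
    assert_eq!(arena.translate(42), 0xDEAD);
}
```

Because the translation is indirect, read views could in principle keep their own extent tables while sharing unchanged extents.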
-
Hi ziqi,
Thanks a lot for your great blog; it has helped me a lot.
I found a possible typo in your blog here.
```
When a parent event finishes execution, child events will be notified after an extr…
-
Memory allocation and release will probably become a bottleneck during forward and backward propagation.
During the forward pass it will hold input tensors in cache. During the backward pass it wi…
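One common way to take allocation off this hot path is a size-bucketed buffer pool: buffers freed after the backward pass are cached and handed back out on the next forward pass. A minimal sketch (names and the `Vec<f32>`-as-tensor-buffer representation are assumptions for illustration):

```rust
use std::collections::HashMap;

/// Hypothetical size-bucketed buffer pool: instead of returning memory
/// to the system allocator between iterations, freed buffers are parked
/// here and reused by the next allocation of the same size.
struct BufferPool {
    free: HashMap<usize, Vec<Vec<f32>>>,
}

impl BufferPool {
    fn new() -> Self {
        BufferPool { free: HashMap::new() }
    }

    /// Reuse a cached buffer if one of this size exists; otherwise
    /// fall through to a real allocation. Contents may be stale.
    fn alloc(&mut self, len: usize) -> Vec<f32> {
        match self.free.get_mut(&len).and_then(|bucket| bucket.pop()) {
            Some(buf) => buf,       // warm path: no allocator call
            None => vec![0.0; len], // cold path: real allocation
        }
    }

    /// Return a buffer to its size bucket for later reuse.
    fn release(&mut self, buf: Vec<f32>) {
        self.free.entry(buf.len()).or_default().push(buf);
    }
}

fn main() {
    let mut pool = BufferPool::new();
    let buf = pool.alloc(1024);
    let ptr = buf.as_ptr();
    pool.release(buf);
    let reused = pool.alloc(1024);
    // The warm path hands back the very same allocation.
    assert_eq!(reused.as_ptr(), ptr);
}
```

This trades some peak memory (cached buffers are never returned to the OS) for predictable, allocator-free steady-state iterations.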
-
This kernel oops also happens with the 4.14.49-5 kernel:
```
/sys/fs/pstore/dmesg-nvram-1
Oops#1 Part1
[6202252.103571] drop_cache.sh (57792): drop_caches: 3
[6203927.983602] usb 1-4.4.2: reset high-…
-
This is related to pull request #5: there is now an SPI module class which has a transaction-queue object. The transaction queue is fixed in size for each instance of the peripheral; currently, there a…
-
A couple of ideas for optimizing the GPU cache:
* Much of the data we use is integer-based and fits within a u16. For example, texture coordinates and render task rects are suitable to store as u16. S…
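The idea above can be sketched as a packed struct (the name `PackedRect` and the exact field set are hypothetical, for illustration only):

```rust
/// Hypothetical packed rect: four u16 fields (8 bytes) instead of four
/// i32/f32 fields (16 bytes), halving the footprint of cache entries
/// whose values are known to fit in 16 bits.
#[repr(C)]
#[derive(Copy, Clone)]
struct PackedRect {
    x: u16,
    y: u16,
    w: u16,
    h: u16,
}

fn main() {
    assert_eq!(std::mem::size_of::<PackedRect>(), 8);
    // The same rect as four i32s costs twice as much.
    assert_eq!(std::mem::size_of::<[i32; 4]>(), 16);
}
```

Halving entry size also doubles how many entries fit in a GPU cache line or texture fetch, which is usually where the real win comes from.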
-
Running inside managarm on managarm/managarm@d1d5afe509ce401d34d79be413b3ba64d22cd765 and on mlibc on managarm/mlibc@afd11daf4565943f8265b2c1ed2a9116e7a4ba4f, sadly I didn't get to test it on any newe…
-
Hi, we've been looking for a memory allocator for our COBOL runtime's garbage collector, and have a few questions.
What is the memory alignment for the pointers returned by snmalloc?
Would it be …
-
Currently a `Slab` uses 24 bytes per element. This overhead comes from `Entry` being an enum.
To remove this overhead completely, `Entry` must become a union. But then:
1) iteration can't be suppo…
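The size difference between the two representations can be demonstrated directly (illustrative stand-ins with a `u64` payload, not slab's actual `Entry`; the exact per-element figure depends on `T`):

```rust
use std::mem::size_of;

/// Enum form: Rust must store a discriminant alongside the payload,
/// and alignment rounds the whole thing up.
#[allow(dead_code)]
enum EntryEnum {
    Vacant(usize),  // next free slot
    Occupied(u64),  // stored value
}

/// Union form: only the largest field is stored; occupancy must be
/// tracked out of band (e.g. in a bitmap), which is why iteration
/// and Drop become the hard part.
#[allow(dead_code)]
union EntryUnion {
    vacant: usize,  // next free slot
    occupied: u64,  // stored value
}

fn main() {
    // 8-byte payload + discriminant, rounded up to 8-byte alignment.
    assert_eq!(size_of::<EntryEnum>(), 16);
    // No discriminant: just the largest field.
    assert_eq!(size_of::<EntryUnion>(), 8);
}
```

The saved tag is exactly the overhead at stake; the cost is that the union alone no longer knows which variant it holds.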
-
Hello, I'm searching for a faster replacement for the Binutils addr2line, and this project is incredibly fast (about 1 s vs. 12 min) for my program. Many thanks!
## Crash details
However, when I…