I also don't know how much we want to butcher this blog post, but:
> RAM is fundamentally a giant array of bytes, where each byte has a unique address. However, CPUs don’t fetch data one byte at a time. They read and write memory in fixed-size chunks called words which are typically 4 bytes on 32-bit systems or 8 bytes on 64-bit systems.
CPUs these days fetch entire cache lines. Memory is also split into banks. There are many more details involved, and it's really viewing memory as a giant array of bytes that is fundamentally broken. It's a useful abstraction up to a point, but it breaks down once you analyze performance. This part of the blog didn't seem very accurate.
In a single-threaded context, I think 'giant array of bytes' is still correct? Performance, not so much.
> This part of the blog didn't seem very accurate.
It was sufficient understanding to produce this allocator :-). I think that if we have beginner[0] projects posted and upvoted, we have to accept that the author's understanding may be lacking some nuance.
[0] author might be a very good programmer, just not familiar with this particular area!
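Regarding the cache-line point above, here's a tiny sketch, assuming Linux/glibc where `_SC_LEVEL1_DCACHE_LINESIZE` is exposed, that prints the word size next to the granularity the CPU actually fetches at:

```c
/* Sketch: word size vs. cache-line size (assumes Linux/glibc; sysconf
 * may return 0 or -1 if the kernel doesn't expose the value). */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long line = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);
    printf("word size:      %zu bytes\n", sizeof(void *));
    if (line > 0)
        printf("L1D cache line: %ld bytes\n", line);   /* typically 64 */
    else
        puts("L1D cache line: not reported by sysconf");
    return 0;
}
```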
A couple of minor C points:
- The code seems to completely lack use of `static` for things that should be local to the implementation, such as `META_SIZE`, `find_free_block()` and others (see the sketch after this list).
- The header includes `<stdbool.h>` but the interface doesn't use it so it could be included in the C file instead (which, in fact, it is!).
- Could do with more `const` for clarity, but that is quite personal.
- Interesting to use explicit comparison to check for integers being zero, but to treat pointers being NULL as implicit booleans. I prefer comparing explicitly in both cases.
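To illustrate the `static` and comparison points, a rough sketch of a hypothetical internal helper; only the names `META_SIZE` and `find_free_block` come from the points above, the struct layout is invented here:

```c
/* Hypothetical sketch, not the repo's actual code: `static` keeps internal
 * details out of the library's exported symbols, and both the integer and
 * the pointer are compared explicitly. */
#include <stddef.h>

typedef struct block_meta {
    size_t size;
    struct block_meta *next;
    int free;
} block_meta;

static const size_t META_SIZE = sizeof(block_meta);

static block_meta *find_free_block(block_meta *head, size_t size) {
    for (block_meta *cur = head; cur != NULL; cur = cur->next) {
        if (cur->free != 0 && cur->size >= size)
            return cur;
    }
    return NULL;
}
```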
Looks nice! Though I have to say, you should probably avoid sbrk even for small allocations -- obviously it's slow, but even beyond that you have to deal with the fact that it's essentially a global singleton, which introduces a lot of subtle failure cases you might not think of and can't really work around anyway. It's better to mmap out some chunk of memory and sub-allocate it yourself.
sbrk grows linearly, and if anything is mapped in the way it fails. mmap can map anywhere there's space as it is not restricted to linear mappings. So, you'd better hope a mapping doesn't randomly land there and run you out of space.
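A rough sketch of the mmap route (error handling trimmed; `CHUNK_SIZE` is an arbitrary choice): the allocator grabs a chunk wherever the kernel finds room and sub-allocates from it, instead of extending one linear break.

```c
/* Sketch: reserve a chunk with mmap instead of growing the break.
 * MAP_ANONYMOUS is available on Linux, the BSDs and macOS. */
#define _DEFAULT_SOURCE
#include <stddef.h>
#include <sys/mman.h>

#define CHUNK_SIZE (1u << 20)   /* 1 MiB per chunk, chosen arbitrarily */

static void *new_chunk(void) {
    void *p = mmap(NULL, CHUNK_SIZE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return (p == MAP_FAILED) ? NULL : p;
}
```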
It's not a failure mode per se, but relatedly, since sbrk is linear you also don't really have a reasonable way to deal with fragmentation. For example, suppose you allocate 1000 page sized objects and then free all but the last one. With an mmap based heap, you can free all 999 other pages back to the OS whereas with sbrk you're stuck with those 999 pages you don't need for the lifetime of that 1000th object (better hope it's not long lived!).
Really, sbrk only exists for legacy reasons.
> For example, suppose you allocate 1000 page sized objects and then free all but the last one. With an mmap based heap, you can free all 999 other pages back to the OS whereas with sbrk you're stuck with those 999 pages
Actually... you can free those 999 sbrk() pages using munmap() on Linux and Darwin (so most likely the BSDs too). You can also change the mappings within the sbrk()-allocated range, much like any other mmap.
This feature is not well known, nor particularly useful :-)
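For the curious, a sketch of that trick (Linux-specific behaviour; munmap on break-allocated pages is not something POSIX promises):

```c
/* Sketch: hand most of an sbrk()-grown region back with munmap().
 * Works on Linux (and reportedly Darwin); not portable POSIX. */
#define _DEFAULT_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void) {
    long page = sysconf(_SC_PAGESIZE);
    char *base = sbrk(1000 * page);                 /* 1000 pages via the break */
    if (base == (void *)-1) return 1;

    /* The old break may not be page-aligned, so round up before munmap. */
    uintptr_t start = ((uintptr_t)base + page - 1) & ~(uintptr_t)(page - 1);

    /* Give roughly the first 998 pages back; keep the tail in use. */
    if (munmap((void *)start, 998 * page) != 0) {
        perror("munmap");
        return 1;
    }
    base[999 * page] = 42;                          /* the kept page still works */
    puts("released most of the sbrk range with munmap");
    return 0;
}
```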
> With an mmap based heap, you can free all 999 other pages back to the OS whereas with sbrk you're stuck with those 999 pages you don't need for the lifetime of that 1000th object (better hope it's not long lived!).
Thanks to the wonders of virtual memory, you can madvise(MADV_DONTNEED), and return the memory to the OS, without giving up the address space.
Not giving up the address space feels like an anti-feature. It would mean, among other things, that access to the DONTNEED'd memory is no longer a segfault; reads just come back as zero-filled pages instead, which is not ideal.
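A sketch of that MADV_DONTNEED pattern, using the 1000-page example from above (Linux semantics for anonymous private mappings: the dropped pages read back as zeros rather than faulting):

```c
/* Sketch: return RAM with madvise(MADV_DONTNEED) while keeping the
 * address space reserved. Linux semantics for anonymous mappings:
 * later reads fault in fresh zero pages instead of segfaulting. */
#define _DEFAULT_SOURCE
#include <assert.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void) {
    long page = sysconf(_SC_PAGESIZE);
    char *p = mmap(NULL, 1000 * page, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;

    p[999 * page] = 'y';                       /* the one object we keep */
    p[0] = 'x';                                /* dirty one of the 999 dead pages */
    madvise(p, 999 * page, MADV_DONTNEED);     /* give their RAM back to the OS */

    assert(p[0] == 0);                         /* still mapped, now reads as zero */
    assert(p[999 * page] == 'y');              /* the kept page is untouched */
    return 0;
}
```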
Indeed. To me it still looks kind of fishy, because the author doesn't have a single other C project on GitHub. The blog post reference is the only thing that makes it somewhat legit, to me at least.
The fastest garbage collector algorithm is similar. Just keep allocating new objects. Just don't bother with actually collecting garbage. Just leak all that memory.
Perfectly usable in many applications. Unfortunately, since it depends on assumptions about the application, it's not really suited for a general purpose library.
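In allocator terms that's just a bump allocator with a no-op free. A minimal sketch (fixed-size backing buffer chosen arbitrarily):

```c
/* Sketch: the "never collect" allocator. Allocation bumps a pointer,
 * free does nothing, and everything is reclaimed only at process exit. */
#include <stddef.h>

static _Alignas(16) unsigned char heap[1 << 20];  /* arbitrary 1 MiB buffer */
static size_t used;

static void *leak_alloc(size_t size) {
    size_t aligned = (size + 15) & ~(size_t)15;   /* keep 16-byte alignment */
    if (aligned < size || used + aligned > sizeof heap)
        return NULL;                              /* "out of memory" */
    void *p = heap + used;
    used += aligned;
    return p;
}

static void leak_free(void *p) { (void)p; }       /* the whole point: do nothing */
```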
Your comment reads as if you believe that simply writing the syscall C wrapper yourself would constitute a meaningful enhancement to the code. We can all apply more effort to be insightful.
It’s a ‘minimal’ allocator. Reading the blog, it seems to go into allocator principles in some depth, covering practical things like coalescing blocks.
I haven’t read it in full, so I'm not sure if it discusses using blocks vs. other structures (e.g. stack-based allocators, stack being the data structure, not the program stack); i.e., it’s a set of implementation choices (there's a quick sketch of the stack-style approach after the links below). It still seems to reflect common ways of allocating in far more detail than many blogs I’ve read on the topic.
https://screwjankgames.github.io/engine%20programming/2020/0...
https://www.bytesbeneath.com/p/the-arena-custom-memory-alloc...
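For anyone wondering what the stack-as-data-structure variant looks like, a rough sketch (the interface is invented here, not taken from either link): you bump a pointer to allocate, and instead of freeing individual blocks you record a marker and later pop everything allocated after it, in LIFO order.

```c
/* Sketch: a stack allocator. No per-block free list; you free by popping
 * back to a previously saved marker, which must be done in LIFO order. */
#include <stddef.h>

typedef struct {
    unsigned char *base;   /* backing memory, e.g. one big mmap'd chunk */
    size_t capacity;
    size_t top;
} stack_alloc;

static void *stack_push(stack_alloc *s, size_t size) {
    size_t aligned = (size + 15) & ~(size_t)15;
    if (aligned < size || aligned > s->capacity - s->top)
        return NULL;
    void *p = s->base + s->top;
    s->top += aligned;
    return p;
}

static size_t stack_mark(const stack_alloc *s) { return s->top; }

/* Frees everything allocated since the marker in one step. */
static void stack_pop_to(stack_alloc *s, size_t marker) { s->top = marker; }
```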
Since this is all allocator.h[0] contains aside from other include statements, why have allocator.h at all?
0 - https://github.com/t9nzin/memory/blob/main/include/allocator...
It's interesting to brainstorm new memory allocation interfaces. Some cool ideas:
https://nullprogram.com/blog/2023/12/17/
https://gist.github.com/o11c/6b08643335388bbab0228db763f9921...
I'm in a position to do this in my programming language project. Wrote my own allocator for it. Maybe it's time to reinvent a better wheel.
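One idea that comes up a lot in this space is making the allocator an explicit parameter instead of a hidden global, with whole-arena reset instead of per-object free. A sketch of what such an interface could look like (names and signatures invented here, not taken from either link):

```c
/* Sketch of an alternative interface: every allocation names the arena it
 * draws from, and "free" is resetting the whole arena. The backing buffer
 * is assumed to be suitably aligned (e.g. obtained from malloc or mmap). */
#include <stddef.h>
#include <string.h>

typedef struct {
    unsigned char *buf;
    size_t cap;
    size_t used;
} arena;

static void *arena_alloc(arena *a, size_t size, size_t align) {
    size_t pad = (align - a->used % align) % align;   /* bytes to next boundary */
    if (pad > a->cap - a->used || size > a->cap - a->used - pad)
        return NULL;
    void *p = a->buf + a->used + pad;
    a->used += pad + size;
    return memset(p, 0, size);                        /* zero-initialized */
}

static void arena_reset(arena *a) { a->used = 0; }
```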
I don't think that's the case here though.
No need to free in short-lived processes.