CSRT-NTUA / AlgoPlus

AlgoPlus is a C++17 library for complex data structures and algorithms
https://csrt-ntua.github.io/AlgoPlus
Apache License 2.0

Optimize Sieve Of Eratosthenes #50

Closed purpl3F0x closed 2 weeks ago

purpl3F0x commented 2 months ago
codecov[bot] commented 2 months ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 89.92%. Comparing base (fc5b638) to head (a20e012).

Additional details and impacted files

[![Impacted file tree graph](https://app.codecov.io/gh/CSRT-NTUA/AlgoPlus/pull/50/graphs/tree.svg?width=650&height=150&src=pr&token=3SBDRHUQR5&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=CSRT-NTUA)](https://app.codecov.io/gh/CSRT-NTUA/AlgoPlus/pull/50?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=CSRT-NTUA)

```diff
@@            Coverage Diff             @@
##             main      #50      +/-   ##
==========================================
- Coverage   97.32%   89.92%    -7.40%
==========================================
  Files          95       95
  Lines        3433     3247     -186
  Branches      574      574
==========================================
- Hits         3341     2920     -421
  Misses         92       92
- Partials        0      235     +235
```

[see 78 files with indirect coverage changes](https://app.codecov.io/gh/CSRT-NTUA/AlgoPlus/pull/50/indirect-changes?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=CSRT-NTUA)
spirosmaggioros commented 2 months ago

Eh, I get it, but I don't agree with the size_t conversion at all. I prefer int64_t over this; I don't think it's much of a change. Also, if I'm not wrong, the compiler uses 8 bits to check booleans but 32 bits (or 64 in other cases) to check integers, though with the compiler's optimizers it's completely the same thing. So why will this be faster?

purpl3F0x commented 2 months ago

> So why will this be faster?

`std::vector<bool>` is a bitset: each value takes 1 bit, not 1 byte. https://isocpp.org/blog/2012/11/on-vectorbool.

(Alternatively, valarray could also be used, but I don't think anyone likes valarray 🥲)

> Eh, I get it, but I don't agree with the size_t conversion at all. I prefer int64_t over this; I don't think it's much of a change.

I don't think there is a reason the type should be signed; it only limits the range of the results. The function basically takes as its argument the max index of the vector it will return (which is a size_t), and then it does index maths; all of these are size_t operations. I think the question is whether there is a reason for it *not* to be a size_t, rather than for it to be an int. (Of course I'd like to see someone trying to allocate 1.8 exabytes of RAM, but I think my point is still valid.)

Hard-coding the size to 64 bits will slow down architectures with words smaller than 64 bits (e.g. 32-bit ARM). It can also produce unwanted behavior: I can pass 2^42 as an argument, but the function will not return a vector of that size.