1: raw pointer access speed is the same for stack and heap addresses
2: why stack allocation is faster than heap allocation
1) to allocate on the stack we only move the stack pointer, but to allocate on the heap the allocator must scan the structure that tracks all available blocks (a free list), find the first block large enough for the current request, remove that block from the list, and return any leftover space to the list.
2) to free stack space we again only move the stack pointer, but to free heap space the allocator must put the freed block's information back into the free list for the next use.
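Points 1) and 2) can be sketched in code. Bump and FreeList below are toy allocators written purely for illustration (the names and layout are my own, not how any real malloc is implemented): stack-style allocation is a single pointer move, while heap-style allocation is a first-fit scan over a list of free blocks.

```cpp
#include <cassert>
#include <cstddef>
#include <list>

// Stack-style allocation: allocating and freeing just move a pointer.
struct Bump {
    char buf[1024];
    std::size_t top = 0;
    void* alloc(std::size_t n) { void* p = buf + top; top += n; return p; }
    void release(std::size_t n) { top -= n; }  // freeing = moving the pointer back
};

// Heap-style allocation: scan the free list for the first fitting block.
struct FreeList {
    struct Block { char* addr; std::size_t size; };
    std::list<Block> free_blocks;

    void* alloc(std::size_t n) {
        for (auto it = free_blocks.begin(); it != free_blocks.end(); ++it) {
            if (it->size >= n) {                        // first block big enough
                char* p = it->addr;
                if (it->size == n) {
                    free_blocks.erase(it);              // block consumed entirely
                } else {
                    it->addr += n;                      // return the remainder
                    it->size -= n;                      // back to the list
                }
                return p;
            }
        }
        return nullptr;                                 // no block fits
    }

    void release(void* p, std::size_t n) {              // record block for reuse
        free_blocks.push_back({static_cast<char*>(p), n});
    }
};
```

Note that FreeList::alloc does work proportional to the number of free blocks, while Bump::alloc is constant time regardless of how many allocations came before.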
warning: the memory is not initialized or reset on allocation or deallocation, for either the stack or the heap. Here is an example showing this:
1----C++, compiled without optimizations----stack

#include <iostream>

void func()
{
    int data = 10;  // writes 10 into a stack slot, then returns
}

void func1()
{
    int data1;  // uninitialized: likely reuses the stack slot func() just released
    std::cout << "data1:" << data1 << "\n";  // reading it is undefined behavior; unoptimized builds often print 10
}

int main()
{
    func();
    func1();
    return 0;
}
result: 10 (func1() printed the value that func() left behind in the reused stack slot)
2-----heap
result: 78 (the second allocation reused the freed block, so the stale value was still there)
A more detailed explanation is in "Pro .NET Performance":
Contrary to popular belief, there isn’t that much of a difference between stacks and heaps in a .NET process. Stacks and heaps are nothing more than ranges of addresses in virtual memory, and there is no inherent advantage in the range of addresses reserved to the stack of a particular thread compared to the range of addresses reserved for the managed heap. Accessing a memory location on the heap is neither faster nor slower than accessing a memory location on the stack. There are several considerations that might, in certain cases, support the claim that memory access to stack locations is faster, overall, than memory access to heap locations. Among them:
On the stack, temporal allocation locality (allocations made close together in time) implies spatial locality (storage that is close together in space). In turn, when temporal allocation locality implies temporal access locality (objects allocated together are accessed together), the sequential stack storage tends to perform better with respect to CPU caches and operating system paging systems.
Memory density on the stack tends to be higher than on the heap because of the reference type overhead (discussed later in this chapter). Higher memory density often leads to better performance, e.g., because more objects fit in the CPU cache.
Thread stacks tend to be fairly small – the default maximum stack size on Windows is 1MB, and most threads tend to actually use only a few stack pages. On modern systems, the stacks of all application threads can fit into the CPU cache, making typical stack object access extremely fast. (Entire heaps, on the other hand, rarely fit into CPU caches.)
With that said, you should not be moving all your allocations to the stack! Thread stacks on Windows are limited, and it is easy to exhaust the stack by applying injudicious recursion and large stack allocations.
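The closing warning can be illustrated with a sketch: each call below places a 64 KB array in its stack frame, so even modest recursion depth exhausts a 1 MB thread stack (the sizes here are illustrative, not taken from the book):

```cpp
void recurse(int depth)
{
    char big[64 * 1024];                // 64 KB of locals per stack frame
    big[0] = static_cast<char>(depth);  // touch the array so it is really used
    if (depth > 0)
        recurse(depth - 1);             // ~16 nested frames exceed a 1 MB stack
}
```

Calling recurse(16) or deeper on a thread with the default 1 MB Windows stack would overflow it; shallow calls are safe.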