jcseok99 opened 7 years ago
Well, I have to say I have never tested this on a Windows machine :)
My guess would be something related to this https://github.com/logzio/elasticsearch-stress-test/blob/master/elasticsearch-stress-test.py#L211
Can you run:

    sys.getsizeof(str("string"))

under Windows and Linux to see the difference?
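For anyone following along, here is a minimal, self-contained version of that check; the exact object size depends on the Python version and build, so treat the printed value as illustrative:

```python
import sys

s = str("string")

# CPython's sys.getsizeof reports the interpreter-level object size,
# including the object header, not just the character data.
print(sys.getsizeof(s))         # object size in bytes (implementation-dependent)
print(len(s))                   # number of characters: 6
print(len(s.encode("utf-8")))   # actual payload size on the wire: 6 bytes
```

If the stress test derives MB/s from sys.getsizeof (as the linked line suggests), the reported throughput reflects Python object sizes rather than bytes sent over the network, so differences in object layout between interpreters could skew a cross-platform comparison.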
Thank you for the reply.
As you suggested, I ran the following code:

    import sys
    import time

    start_time = time.time()
    for i in range(1, 1000000):
        sys.getsizeof(str("string"))
    print("--- %s seconds ---" % (time.time() - start_time))
The results are as follows.
On Windows:
On Linux (CentOS 7):
Hey @jcseok99! I think that it has been proven already that Linux is faster :)
I'm more interested in the output of sys.getsizeof(str("string")),
as Linux and Windows might represent strings differently in memory, which may affect the calculated size.
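To illustrate that point, CPython 3 (PEP 393) picks a 1-, 2-, or 4-byte-per-character internal layout based on string content, so object size is not a stable proxy for payload size. A small sketch (CPython 3 specifics; Python 2 and other implementations differ):

```python
import sys

ascii_s = "string"          # 6 ASCII characters, compact 1-byte layout
latin_s = "strin\u00e9"     # same length, but non-ASCII forces a wider layout

# The object sizes differ even though both strings have 6 characters.
print(sys.getsizeof(ascii_s), sys.getsizeof(latin_s))

# The encoded byte length is what actually goes over the network.
print(len(ascii_s.encode("utf-8")), len(latin_s.encode("utf-8")))
```

This is why a throughput figure computed from sys.getsizeof can diverge between interpreters or platforms while the actual bytes indexed stay the same.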
I used your stress test code to check the performance of our systems: Microsoft Windows 10, 10 Server, and Linux (CentOS 7). On Linux (CentOS 7), the test shows 27-30 MB/s using example 1:

    python elasticsearch-stress-test.py --es_address 1.2.3.4 1.2.3.5 --indices 4 --documents 5 --seconds 120 --not-green --clients 5

But Windows 10 and 10 Server show only 2.7~4 MB/s. Why is the performance so different between them? All machines have the same Elasticsearch configuration, including heap size (2 GB), and the same hardware (CPU, RAM, SSD).
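One way to narrow this down is to isolate the client-side document-generation cost from the network and Elasticsearch itself: if pure Python work is much slower on Windows, the gap is in the client; if it is similar, look at networking or the ES nodes. A self-contained sketch (the document shape below is hypothetical, not the stress tool's actual payload):

```python
import json
import random
import string
import timeit

def make_doc():
    # Hypothetical document, roughly mimicking a random stress-test payload.
    return {
        "name": "".join(random.choice(string.ascii_lowercase) for _ in range(10)),
        "value": random.randint(0, 1000),
    }

def generate_batch(n=1000):
    # Serialize n documents, as an indexing client would before sending them.
    return [json.dumps(make_doc()) for _ in range(n)]

# Run the same measurement on Windows and Linux and compare the numbers.
elapsed = timeit.timeit(generate_batch, number=10)
print("10 batches of 1000 docs took %.3f s" % elapsed)
```

If the timings match across platforms, the next suspects are the OS network stack and disk I/O on the Elasticsearch side rather than the Python client.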