Closed ityuhui closed 3 years ago
Append the result of valgrind:
```
valgrind --tool=memcheck --leak-check=full ./multi_thread_bin

==32218==
==32218== HEAP SUMMARY:
==32218==     in use at exit: 134,418 bytes in 411 blocks
==32218==   total heap usage: 12,293 allocs, 11,882 frees, 1,776,892 bytes allocated
==32218==
==32218== 23 bytes in 1 blocks are definitely lost in loss record 39 of 241
==32218==    at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==32218==    by 0x5037E0C: CoreV1API_listNamespacedPod (CoreV1API.c:19391)
==32218==    by 0x10986C: watch_pod_thread_func (watch_pod.c:84)
==32218==    by 0x557B6DA: start_thread (pthread_create.c:463)
==32218==    by 0x58B4A3E: clone (clone.S:95)
==32218==
==32218== 32 bytes in 1 blocks are definitely lost in loss record 94 of 241
==32218==    at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==32218==    by 0x4EA1593: strReplace (apiClient.c:491)
==32218==    by 0x5037E49: CoreV1API_listNamespacedPod (CoreV1API.c:19394)
==32218==    by 0x10986C: watch_pod_thread_func (watch_pod.c:84)
==32218==    by 0x557B6DA: start_thread (pthread_create.c:463)
==32218==    by 0x58B4A3E: clone (clone.S:95)
==32218==
==32218== 68 bytes in 1 blocks are definitely lost in loss record 132 of 241
==32218==    at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==32218==    by 0x4EA0407: assembleTargetUrl (apiClient.c:140)
==32218==    by 0x4EA1127: apiClient_invoke (apiClient.c:402)
==32218==    by 0x50383F6: CoreV1API_listNamespacedPod (CoreV1API.c:19514)
==32218==    by 0x10986C: watch_pod_thread_func (watch_pod.c:84)
==32218==    by 0x557B6DA: start_thread (pthread_create.c:463)
==32218==    by 0x58B4A3E: clone (clone.S:95)
==32218==
==32218== 86 (24 direct, 62 indirect) bytes in 1 blocks are definitely lost in loss record 146 of 241
==32218==    at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==32218==    by 0x4E9F917: list_create (list.c:27)
==32218==    by 0x5037D72: CoreV1API_listNamespacedPod (CoreV1API.c:19373)
==32218==    by 0x10986C: watch_pod_thread_func (watch_pod.c:84)
==32218==    by 0x557B6DA: start_thread (pthread_create.c:463)
==32218==    by 0x58B4A3E: clone (clone.S:95)
==32218==
==32218== 144 (24 direct, 120 indirect) bytes in 1 blocks are definitely lost in loss record 167 of 241
==32218==    at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==32218==    by 0x4E9F917: list_create (list.c:27)
==32218==    by 0x5037D90: CoreV1API_listNamespacedPod (CoreV1API.c:19376)
==32218==    by 0x10986C: watch_pod_thread_func (watch_pod.c:84)
==32218==    by 0x557B6DA: start_thread (pthread_create.c:463)
==32218==    by 0x58B4A3E: clone (clone.S:95)
==32218==
==32218== 134,065 (21,224 direct, 112,841 indirect) bytes in 1 blocks are definitely lost in loss record 241 of 241
==32218==    at 0x4C31B25: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==32218==    by 0x5BA309E: ??? (in /usr/lib/x86_64-linux-gnu/libcurl.so.4.5.0)
==32218==    by 0x5BB0103: curl_easy_init (in /usr/lib/x86_64-linux-gnu/libcurl.so.4.5.0)
==32218==    by 0x4EA0745: apiClient_invoke (apiClient.c:206)
==32218==    by 0x50383F6: CoreV1API_listNamespacedPod (CoreV1API.c:19514)
==32218==    by 0x10986C: watch_pod_thread_func (watch_pod.c:84)
==32218==    by 0x557B6DA: start_thread (pthread_create.c:463)
==32218==    by 0x58B4A3E: clone (clone.S:95)
==32218==
==32218== LEAK SUMMARY:
==32218==    definitely lost: 21,395 bytes in 6 blocks
==32218==    indirectly lost: 113,023 bytes in 405 blocks
==32218==      possibly lost: 0 bytes in 0 blocks
==32218==    still reachable: 0 bytes in 0 blocks
==32218==         suppressed: 0 bytes in 0 blocks
==32218==
==32218== For counts of detected and suppressed errors, rerun with: -v
==32218== ERROR SUMMARY: 6 errors from 6 contexts (suppressed: 0 from 0)
```
/cc @brendandburns
Perhaps we can use `CURLOPT_PROGRESSFUNCTION` to abort more cleanly?
https://curl.se/libcurl/c/CURLOPT_PROGRESSFUNCTION.html
I really think there has to be a better approach than an endless watch thread.
OK. I will investigate `CURLOPT_PROGRESSFUNCTION`.
Actually looks like there is a newer callback function: https://curl.se/libcurl/c/CURLOPT_XFERINFOFUNCTION.html
Hi @brendandburns
Thank you for your comments. The new code (using `CURLOPT_PROGRESSFUNCTION`) can exit the watch thread gracefully now. Could you please review the new code change?
The changes to apiClient.{h,c} will not be committed in this PR, so I have added the WIP label to block the automatic code merge.
Although there is still one memory leak, it is a pre-existing defect in openapi-generator/c-libcurl. I will fix it in another PR.

```
==19621== 16 bytes in 1 blocks are definitely lost in loss record 1 of 1
==19621==    at 0x4C31B25: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==19621==    by 0x503A547: CoreV1API_listNamespacedPod (CoreV1API.c:19504)
==19621==    by 0x109858: watch_pod_thread_func (watch_pod.c:91)
==19621==    by 0x55946DA: start_thread (pthread_create.c:463)
==19621==    by 0x58CDA3E: clone (clone.S:95)
```
This looks good to me. Thanks for finding and fixing the api client too!
Hi @brendandburns
The libcurl progress function support has been merged via #40, and the review comments on this PR have been addressed too.
Could you please approve this PR?
I will create another PR to fix the pre-existing memory leak in openapi-generator/c-libcurl:
==19621== by 0x503A547: CoreV1API_listNamespacedPod (CoreV1API.c:19504)
/lgtm /approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: brendandburns, ityuhui
The full list of commands accepted by this bot can be found here.
The pull request process is described here
All of the memory-leak issues are fixed by https://github.com/kubernetes-client/c/pull/42
Although I use `pthread_cleanup_push()`/`pthread_cleanup_pop()` together with `pthread_cancel()` for the watch thread, some resources still leak: they are allocated inside the functions generated by openapi-generator/c-libcurl, so their handles (pointers) are hard to reach from the caller. So it seems the endless watch thread is better. Should I change the example to remove `pthread_cleanup_push()`/`pthread_cleanup_pop()`/`pthread_cancel()` and run the watch thread forever?