This is present in the 0.11.0 release, I guess? Is it critical to fix?
Do we have a CI job that tests this configuration?
For what it's worth, OQS_USE_PTHREADS is not intended to be exposed to the user as a config variable; it's set automatically based on the availability of pthreads and not documented in CONFIGURE.md. I would guess that all of our Linux and macOS CI systems have pthreads, so we are probably not running any tests without pthreads enabled—which makes me wonder why we haven't previously detected this leak. I'm hoping that maybe this leak only arises when OQS_USE_PTHREADS is set manually instead of being autoset by CMake.
I'm not able to reproduce the leak myself—@ashman-p could you let me know what Linux environment you're running so I can test various configs out in Docker images?
@SWilson4, thanks for checking on this issue. I just have a generic Linux setup:
Linux nashley-dev 5.4.0-126-generic #142-Ubuntu SMP Fri Aug 26 12:12:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
liboqs Build
cmake -DBUILD_SHARED_LIBS=ON -DOQS_USE_OPENSSL=ON -DCMAKE_BUILD_TYPE=Debug -GNinja ..
-- The C compiler identification is GNU 9.4.0
-- The ASM compiler identification is GNU
-- Found assembler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Performing Test CC_SUPPORTS_WA_NOEXECSTACK
-- Performing Test CC_SUPPORTS_WA_NOEXECSTACK - Success
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Found Threads: TRUE
-- Algorithms filtered for KEM_ml_kem_512;KEM_ml_kem_768;KEM_ml_kem_1024;KEM_ml_kem_512_ipd;KEM_ml_kem_768_ipd;KEM_ml_kem_1024_ipd;SIG_ml_dsa_44;SIG_ml_dsa_65;SIG_ml_dsa_87;SIG_sphincs_sha2_128f_simple;SIG_sphincs_sha2_128s_simple;SIG_sphincs_sha2_256f_simple;SIG_sphincs_sha2_256s_simple;SIG_sphincs_shake_128f_simple;SIG_sphincs_shake_128s_simple;SIG_sphincs_shake_256f_simple
-- Found OpenSSL: /home/ubuntu/openssl3.3/ossl-install/lib/libcrypto.so (Required is at least version "1.1.1")
-- Looking for aligned_alloc
-- Looking for aligned_alloc - found
-- Looking for posix_memalign
-- Looking for posix_memalign - found
-- Looking for memalign
-- Looking for memalign - found
-- Looking for explicit_bzero
-- Looking for explicit_bzero - found
-- Looking for explicit_memset
-- Looking for explicit_memset - not found
-- Looking for memset_s
-- Looking for memset_s - not found
-- Found Doxygen: /usr/bin/doxygen (found version "1.8.17") found components: doxygen dot
-- Configuring done
-- Generating done
@SWilson4, the problem occurs when OQS_USE_SHA3_OPENSSL=ON is used and threading is active. The only significance of the use of threads is that the problem went away when threading was disabled.
OK, I've managed to reproduce the leak, but only when building against OpenSSL >= 3.3.2. In particular, the leak does not occur when building against OpenSSL 3.3.1 with the same configuration.
It seems to me that this might actually be an OpenSSL bug rather than a liboqs bug, especially as the fix proposed here should (?) be unnecessary per the OpenSSL docs:
As of version 1.1.0 OpenSSL will automatically allocate all resources that it needs so no explicit initialisation is required. Similarly it will also automatically deinitialise as required.
Looping in @baentsch @romen @beldmit @levitte: any knowledge of a regression going from 3.3.1 to 3.3.2?
I will try to isolate the exact commit which introduces the leak.
At first glance nothing suspicious
I will try to isolate the exact commit which introduces the leak.
It seems that this leak was introduced in https://github.com/openssl/openssl/commit/83efcfdfa1de760bd30df7f4cf94e7a0d2b0db9f. When building against the parent of that commit (https://github.com/openssl/openssl/commit/a13df68796828794920403c31d77409b0f06bae0) the leak does not occur.
^ fyi @beldmit
@nhorman could you please take a look?
FWIW, I don't see anything expressly wrong with the changes; it's fine to call OPENSSL_init_crypto in your application rather than having the library do it as needed, but I'm having a hard time understanding: 1) where exactly the leak is, and 2) how manually calling OPENSSL_init_crypto fixes it.
I'll try to reproduce here, but since you already have it set up, can you re-run valgrind with --leak-check=full to show the stack trace of where the leaked memory is being allocated?
Sure, here's the valgrind output:
==462229==
==462229== HEAP SUMMARY:
==462229== in use at exit: 240 bytes in 1 blocks
==462229== total heap usage: 6,841 allocs, 6,840 frees, 845,217 bytes allocated
==462229==
==462229== 240 bytes in 1 blocks are definitely lost in loss record 1 of 1
==462229== at 0x4846828: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==462229== by 0x4AEC3C3: CRYPTO_malloc (mem.c:202)
==462229== by 0x4AEC44D: CRYPTO_zalloc (mem.c:222)
==462229== by 0x4B06246: ossl_rcu_read_lock (threads_pthread.c:410)
==462229== by 0x49D7FB8: module_find (conf_mod.c:403)
==462229== by 0x49D7A9D: module_run (conf_mod.c:259)
==462229== by 0x49D782F: CONF_modules_load (conf_mod.c:166)
==462229== by 0x49D796A: CONF_modules_load_file_ex (conf_mod.c:215)
==462229== by 0x49D8CB9: ossl_config_int (conf_sap.c:68)
==462229== by 0x4AEAF13: ossl_init_config (init.c:258)
==462229== by 0x4AEAEF5: ossl_init_config_ossl_ (init.c:256)
==462229== by 0x4F31EC2: __pthread_once_slow (pthread_once.c:116)
==462229==
==462229== LEAK SUMMARY:
==462229== definitely lost: 240 bytes in 1 blocks
==462229== indirectly lost: 0 bytes in 0 blocks
==462229== possibly lost: 0 bytes in 0 blocks
==462229== still reachable: 0 bytes in 0 blocks
==462229== suppressed: 0 bytes in 0 blocks
==462229==
==462229== For lists of detected and suppressed errors, rerun with: -s
==462229== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
OpenSSL is built with
cd ~/openssl
git checkout 83efcfdfa1
./config --debug --prefix=`pwd`/../.localopenssl_83efcfdfa1_debug && make -j 4 && make install_sw install_ssldirs
liboqs is built with
cd ~/liboqs
git checkout 7f4c89b2
mkdir build && cd build
cmake -GNinja -DOPENSSL_ROOT_DIR=../.localopenssl_83efcfdfa1_debug -DOQS_MINIMAL_BUILD="SIG_ml_dsa_65" -DOQS_USE_SHA3_OPENSSL=ON -DCMAKE_BUILD_TYPE=Debug ..
ninja
The triggering command is
valgrind --leak-check=full ~/liboqs/build/tests/test_sig ML-DSA-65
Thank you. That's interesting: I just tried to recreate with the head of the master branch and didn't encounter the leak. Given that you reported the issue was introduced with commit 83efcfd (which is a backport of the original commit to the 3.3 branch), that makes me think we fixed something in master that we didn't backport. Let me see if I can find it.
Weird—I'm also getting the leak on the latest master (https://github.com/openssl/openssl/commit/c262cc0c0444f617387adac3ed4cad9f05f9c526).
hmm, what version of valgrind are you using? I'm on valgrind-3.23.0
also, what does your openssl.cnf look like? It's possible I'm not loading any modules, and as such not triggering the allocation that's leaking
In fact, I'm sure it's my config; I just ran it through gdb and never hit a breakpoint in ossl_rcu_read_lock
There we go, if I load the oqs provider (duh, should have thought of that), I see the leak. Now to figure out why
I'm on valgrind-3.22.0, and I haven't made any manual changes to openssl.cnf (including to load any provider).
I'm running everything in a Docker container on x86_64 / Linux; I'll attach a Dockerfile to duplicate my setup in case it's helpful to you.
FROM openquantumsafe/ci-ubuntu-latest:latest
WORKDIR /root
RUN git clone --depth=1 --branch=0.11.0 https://github.com/open-quantum-safe/liboqs.git liboqs
RUN git clone --branch=master https://github.com/openssl/openssl.git openssl
WORKDIR /root/openssl
RUN ./config --debug --prefix=/root/.localopenssl && make -j && make install_sw install_ssldirs
WORKDIR /root/liboqs
RUN mkdir build
WORKDIR ./build
RUN cmake -GNinja -DOPENSSL_ROOT_DIR=../.localopenssl -DOQS_MINIMAL_BUILD="SIG_ml_dsa_65" -DOQS_USE_SHA3_OPENSSL=ON -DCMAKE_BUILD_TYPE=Debug ..
RUN ninja
Ok, I think I found the problem. https://github.com/openssl/openssl/commit/83efcfdfa1de760bd30df7f4cf94e7a0d2b0db9f added a use of ossl_init_thread_start to the code base, for which there is only a handful of users. ossl_init_thread_start registers a handler function to clean up thread-local data, not when the thread exits explicitly, but rather when the library context against which the thread allocates resources is deleted. I'm not 100% sure why we do this, but it's how it works.
Normally it's not a problem. Library contexts get deleted in OSSL_LIB_CTX_free; that calls context_deinit->ossl_ctx_thread_stop->init_thread_stop, which calls all the handlers registered via ossl_init_thread_start, and everything gets cleaned up.
However, in liboqs, you implicitly call the OPENSSL_cleanup routine via the registered atexit() handler, which again is OK, but it means that in test_sig.c (and likely your other tests) you join and exit the thread prior to OpenSSL getting cleaned up. What appears to be happening is that, on exit, the C library cleans up its thread-local data list (I feel like I investigated this before) and, there being no cleanup handler registered with it (recall from above that we register the local data handler with libcrypto, not with libc), it just NULLs the pointer in the thread-local data table in the C library. As a result, when OPENSSL_cleanup is called (either explicitly or implicitly), we run through the libcrypto-registered cleanup handlers, the handler for rcu gets called (ossl_rcu_free_local_data), and when we call CRYPTO_THREAD_get_local to fetch the local data to free (which maps to pthread_getspecific), it returns NULL, as libc has already expunged that pointer. As a result the local data leaks.
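To make the ordering concrete, here is a minimal sketch of the sequence described above (illustrative only, not taken from the liboqs tests; the SHA3-256 fetch just stands in for whatever libcrypto call first touches thread-local state on that thread, and on its own this may or may not reproduce the leak, since the allocation in the valgrind trace comes from config loading):

#include <pthread.h>
#include <openssl/evp.h>

/* Worker thread: the first libcrypto call on this thread allocates
   thread-local data and registers its cleanup with libcrypto, not
   with libc. */
static void *worker(void *arg) {
    (void)arg;
    EVP_MD *md = EVP_MD_fetch(NULL, "SHA3-256", NULL);
    EVP_MD_free(md);
    /* Thread exits here: libc discards its thread-local pointers, but
       libcrypto's cleanup handler has not run yet. */
    return NULL;
}

int main(void) {
    pthread_t t;
    if (pthread_create(&t, NULL, worker, NULL) != 0)
        return 1;
    pthread_join(&t, NULL);
    /* Process exit: the atexit()-registered OPENSSL_cleanup() runs the
       libcrypto cleanup handlers, but per the explanation above
       CRYPTO_THREAD_get_local returns NULL because libc has already
       expunged the pointer, so the worker's data is never freed. */
    return 0;
}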
This API could use some reworking, as it seems to me that it relies on implementation details in libc that aren't guaranteed (I think in the past those pointers stuck around until after all the exit handlers ran, but I need to look further). The documentation here: https://docs.openssl.org/3.0/man3/OPENSSL_init_crypto/#description is somewhat vague on the subject, indicating that the cleanup should be done on thread exit, but that's not really true, as it's actually done on behalf of the thread when the library context is cleaned up.
For the time being, as any fix here would be an ABI break, I think it's something we need to live with. The good news(?) is that there's an API to manage this, and there's precedent for it in our tests. OPENSSL_thread_stop is used to explicitly clean up a thread's local data prior to exiting, which can be used here. You can see it used in situations similar/identical to yours in drbgtest.c and threadstest.h, in which thread wrappers like thread_run() call it prior to exiting for just this reason.
You will likely need to do this for your various thread-enabled tests, but this patch:
diff --git a/tests/test_sig.c b/tests/test_sig.c
index e94d3034..bd5e4728 100644
--- a/tests/test_sig.c
+++ b/tests/test_sig.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <oqs/oqs.h>
+#include <openssl/crypto.h>
#if OQS_USE_PTHREADS
#include <pthread.h>
@@ -183,6 +184,7 @@ struct thread_data {
void *test_wrapper(void *arg) {
struct thread_data *td = arg;
td->rc = sig_test_correctness(td->alg_name);
+ OPENSSL_thread_stop();
return NULL;
}
#endif
which I tested here, resolves the leak for me
Hi @nhorman, thanks for tracking this down. An observation I made earlier when this issue was discovered was that it seems to manifest with SHA3 fetches but not SHA2. Does this still fit with your findings?
It's a bit late here now, but I'll take a look in the am and let you know
@ashman-p do you have a particular signature algorithm that you see as not encountering the leak? I'm looking at the defined SIG algs and I don't see any available ones that make use of SHA3
resolves the leak for me
Thanks @nhorman for the analysis and the proposed fix. I guess the proper location for your fix would be here?
@baentsch yes, that seems like a reasonable approach I think.
@SWilson4 @ashman-p @dstebila Please also take note of this guidance from the OpenSSL documentation:
Similarly this message will also not be sent if OpenSSL is linked statically, and therefore applications using static linking should also call OPENSSL_thread_stop() on each thread.
and in addition
On Linux/Unix where OpenSSL has been loaded via dlopen() and the application is multi-threaded and if dlclose() is subsequently called prior to the threads being destroyed then OpenSSL will not be able to deallocate resources associated with those threads. The application should either call OPENSSL_thread_stop() on each thread prior to the dlclose() call, or alternatively the original dlopen() call should use the RTLD_NODELETE flag (where available on the platform).
So I think it is actually mandatory for liboqs to call this function given it should work both statically linked as well as when used via dlopen() -- which I think is the primary use case (and surely when used as part of (a dynamically loaded) oqsprovider), no? Only question is how to ensure it's called on each thread?
The more I read the more I wonder how/why this ever worked OK given this guidance and the by-default use of pthreads. The sig test also is not the only one actually using threading: at first glance, KEM and SIG_STFL testing do the same...
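One possible shape of an answer, sketched with a hypothetical helper (oqs_thread_cleanup is not existing liboqs API, and OQS_USE_OPENSSL is assumed to be visible as a preprocessor define in the same way OQS_USE_PTHREADS is used in the tests): a wrapper that each consumer thread calls right before it exits whenever liboqs is built against OpenSSL.

/* Hypothetical helper, not current liboqs API: hides the OpenSSL
   per-thread cleanup requirement behind a liboqs call. */
#include <oqs/oqs.h>
#if defined(OQS_USE_OPENSSL)
#include <openssl/crypto.h>
#endif

void oqs_thread_cleanup(void) {
#if defined(OQS_USE_OPENSSL)
    /* Frees this thread's libcrypto thread-local data; effectively a
       no-op if the thread never touched libcrypto. */
    OPENSSL_thread_stop();
#endif
}

Threaded tests (and applications) would then call it as the last statement of each thread function, mirroring the test_sig.c patch above.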
cmake -GNinja -DOPENSSL_ROOT_DIR=../.localopenssl_83efcfdfa1_debug -DOQS_MINIMAL_BUILD="SIG_ml_dsa_65" -DOQS_USE_SHA3_OPENSSL=ON -DCMAKE_BUILD_TYPE=Debug ..
@nhorman, I meant to say that the problem only manifested with "-DOQS_USE_SHA3_OPENSSL=ON", not that there is a sig test that actually uses SHA3. Defining OQS_USE_OPENSSL=ON means the OQS library uses OpenSSL's SHA2; my config additionally sets OQS_USE_SHA3_OPENSSL=ON. Without this, Spencer could not recreate the problem. That's why I asked the question about differences between these 2 options. It seems to me that the problem would be present regardless. fetch_ossl_objects(void) in common/ossl_helpers.c calls EVP_MD_fetch() for sha2 and sha3. But I am not sure where sha3 is used.
@ashman-p yes, I would think so. I'm a bit tied up at the moment, but given what we've found I would imagine that the leak would occur in that configuration as well and be solved by the same proposed fix
fetch_ossl_objects(void) in common/ossl_helpers.c calls EVP_MD_fetch() for sha2 and sha3. But I am not sure where sha3 is used.
That is a very good observation, @ashman-p -- this code now seems dubious to me: Shouldn't the EVP-alg-fetchers be contingent/ifdef'd upon the 3 corresponding build options? What's your take on that @beldmit ?
@baentsch you are 100% correct, they should be properly wrapped by #ifdefs - if we don't use SHA3 or AES from OpenSSL, it makes no sense to fetch it.
I originally wrote this prefetching code for performance reasons, instead of the implicit fetching used before, but AFAIK there have been many changes since then - e.g. an option to provide 3rd-party callbacks etc. - so this should be reconsidered
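A rough sketch of what the guarded fetching could look like (not the actual common/ossl_helpers.c code; the OQS_USE_SHA2_OPENSSL and OQS_USE_AES_OPENSSL macro names are assumed here for illustration, since only OQS_USE_SHA3_OPENSSL is named in this thread):

#include <openssl/evp.h>

static EVP_MD *sha256_md;
static EVP_MD *sha3_256_md;

/* Fetch each primitive only when the corresponding build option
   actually routes it through OpenSSL. */
static void fetch_ossl_objects(void) {
#ifdef OQS_USE_SHA2_OPENSSL
    sha256_md = EVP_MD_fetch(NULL, "SHA256", NULL);
#endif
#ifdef OQS_USE_SHA3_OPENSSL
    sha3_256_md = EVP_MD_fetch(NULL, "SHA3-256", NULL);
#endif
    /* ...and likewise for the AES fetches behind OQS_USE_AES_OPENSSL. */
}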
@ashman-p yes, I would think so. I'm a bit tied up at the moment, but given what we've found I would imagine that the leak would occur in that configuration as well and be solved by the same proposed fix
No worries, Neil. Again, valgrind only reports a leak with the SHA3 macro enabled. Since we now have 2 ways to make the leak go away (your findings and my cheap init crypto call), it makes me wonder if there is something more that could turn up down the road. If you have an idea that I can help to track down (without too much hand holding), I would be happy to try. The test fails both with KEMs as well as signatures.
There we go, if I load the oqs provider (duh, should have thought of that), I see the leak. Now to figure out why
@nhorman, now getting some time to revisit this today. I missed the significance of your statement here. Could we be tracking two different issues? My test setup does not involve oqsprovider. I just have openssl 3.3 and liboqs. Run the test and ...
Anyhow, I will apply your patch and report back.
@ashman-p I'm not sure, but I'm sufficiently unfamiliar with oqs to fully understand how you would use both together without making use of the liboqs provider. It's certainly possible that there are two issues here, but the fact that the errors are identical makes me biased towards thinking the problems are similar if not identical
sufficiently unfamiliar with oqs to fully understand how you would use both together without making use of the liboqs provider
Thanks again for taking the time to look into this @nhorman . liboqs is a crypto library that makes various PQ algorithms available for external consumption via a common API; some of these PQ algorithms need "common" things like a PRNG and SHA3. liboqs therefore has a build option to utilize these common things from OpenSSL's libcrypto. That is what @ashman-p has been testing. liboqs can also be built with "its own" (really, from other upstream sources) common code, offering users an option to build liboqs without OpenSSL dependencies. In that configuration, the problems in this issue don't appear, hinting at least at a wrong interaction pattern between liboqs and libcrypto.
The oqsprovider in turn is a shared library pretty much devoid of crypto functionality (except when implementing hybrid PQ), primarily making the PQ crypto implemented in liboqs available to the OpenSSL EVP API via the openssl provider API.
|------------------------------|
| libssl, openssl apps |
|------------------------------|
| EVP API |
|------------------------------|
| oqsprovider |
|------------------------------|
|------------------------------|
| liboqs |
|------------------------------|
| PQ alg 1 || .... || PQ alg n |
|------------------------------|
| openssl || external |
| libcrypto || sha3/prng |
|------------------------------|
ok, thank you for that.
that's interesting: so liboqs is one of those libraries for which applications may use the provider interface to access oqs, and liboqs in turn loads symbols from openssl to do its work.
The implication here is that we may have two dependencies on openssl (listed via DT_NEEDED entries in the app and the liboqs provider), and those instances may point to different versions of libcrypto. That gets tricky.
To make this a bit more concrete, and pull it back to the issue at hand: what config file does the liboqs instance of libcrypto load? Can you post it here? Depending on how it's built and initialized, it may be that the liboqs instance of libcrypto is loading a "vestigial" provider in this test environment, which is never used but still triggers the allocation that we are failing to free (which would still point to the thread_stop solution we have above as the 'correct' solution to the problem).
those instances may point to different versions of libcrypto. That gets tricky.
That absolutely is a problem (known to anyone integrating the whole stack and possibly utilizing distros -- which is not many people :)
But it is not at play here as the issue occurs in "standalone" liboqs; no oqsprovider involved.
The config (build) and config file question for a reproducer I cannot answer: @ashman-p @SWilson4 ?
@baentsch that's not entirely true, and was the point I was getting at. Even if the top level application doesn't use openssl and the oqsprovider to access the liboqs library, there is still a path to load the provider. If:
1) the libcrypto.so library gets loaded by liboqs (as your diagram illustrates),
2) the libcrypto.so library is not initialized with the OPENSSL_INIT_NO_LOAD_CONFIG flag (likely if you use auto-initialization), and
3) the default config file for openssl (or whatever file OPENSSL_CONF points to) contains a section pointing to the oqs provider,
then, when all of those conditions are true, libcrypto will still load the oqs provider, even though the library doesn't use it, which will send you down the path here, and you'll get the leak
So in short, if you have a config file that references the oqs provider (or any external provider, like fips), then this issue will be the result, even if the application being run doesn't actually make use of it
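As an aside on condition 2 above, a minimal sketch of opting out of config loading, assuming the liboqs-side libcrypto instance genuinely has no use for config-driven providers (whether liboqs should bypass the config file at all is a separate policy question):

#include <openssl/crypto.h>

/* Initializing libcrypto explicitly with OPENSSL_INIT_NO_LOAD_CONFIG
   skips the CONF_modules_load_file_ex() path seen in the valgrind
   trace above, so the provider-module allocation never happens.
   Returns 1 on success, 0 on failure. */
static int init_libcrypto_without_config(void) {
    return OPENSSL_init_crypto(OPENSSL_INIT_NO_LOAD_CONFIG, NULL);
}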
@ashman-p can you identify the openssl.cnf file that you are using for the libcrypto instance that liboqs uses and post it here? That would clarify things greatly.
there is still a path to load the provider.
Fully agree, @nhorman . It was my strong assumption that such a circular setup had been excluded in testing as both @ashman-p and @SWilson4 know about the dependencies (and https://github.com/open-quantum-safe/liboqs/pull/1942#issuecomment-2389538634 seems to show a standalone build -- and https://github.com/open-quantum-safe/liboqs/pull/1942#issuecomment-2389711126 absolutely guarantees one).
But you're right in checking this explicitly by asking for the config -- particularly considering the problem never materialized in CI. Also helpful may simply be the output of openssl list -providers in a failing setup.
@ashman-p can you identify the openssl.cnf file that you are using for the libcrypto instance that liboqs uses and post it here? That would clarify things greatly.
In my setup it was the vanilla openssl.cnf with the default provider.
apps/openssl list -providers
Providers:
  default
    name: OpenSSL Default Provider
    version: 3.3.3
    status: active
That would do it. The default provider gets loaded via module_run->do_load_builtin_modules->OPENSSL_load_builtin_modules->ossl_provider_add_conf_module->CONF_module_add->module_add, which then allocates the lock in question that's leaking.
I avoided that initially as I didn't have the default provider activated in my config. Once I loaded the oqs provider (which could have been any provider; it was just the one I chose, as I was working with oqs), the problem began to appear
Here's the configuration in my environment. (Added the .txt extension for GitHub upload only.)
yeah, that looks to be the same situation
Ok, thank you @nhorman. So, my next question: I based my tests on "OPENSSL_thread_stop()". This only worked when called, as Neil originally prescribed, i.e. from the thread-based application. Calling it from any centrally used function like oqs_ossl_destroy() did not work. Presumably because the thread is already suspended or ended by that time?
Question: We have 2 work-around options: OPENSSL_thread_stop() and OPENSSL_init_crypto(). Is there any reason not to go with the 'OPENSSL_init_crypto' option? It is simpler to implement than changing all the test apps. Also, this path means we would also need to instruct application writers to make this call.
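For reference, the "OPENSSL_init_crypto" workaround discussed here presumably amounts to something like the sketch below; the exact flags used in the proposed change are not shown in this thread, so OPENSSL_INIT_LOAD_CONFIG and the helper name are assumptions:

#include <openssl/crypto.h>

/* Assumed shape of the eager-init workaround: calling this once from
   the main thread, before any worker threads are spawned, forces
   libcrypto initialization (including config loading) to happen there
   rather than lazily inside a short-lived worker thread -- which is
   one guess, not a confirmed explanation, for why it changes the
   observed behaviour. */
static void oqs_eager_ossl_init(void) {
    OPENSSL_init_crypto(OPENSSL_INIT_LOAD_CONFIG, NULL);
}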
OK, I made an interesting observation: a similar leak also occurs when liboqs is built with -DOQS_USE_SHA3_OPENSSL=OFF if the algorithm being tested uses one of the other underlying primitives. For example, the leak occurs when using SPHINCS+-SHA2-128f-simple, which calls the OpenSSL SHA2 code.
So, to address Norm's question above:
That's why I asked the question about differences between these 2 options. It seems to me that the problem would be present regardless. fetch_ossl_objects(void) in common/ossl_helpers.c calls EVP_MD_fetch() for sha2 and sha3. But I am not sure where sha3 is used.
It looks like the leak does indeed occur for SHA2 as well. It's just triggered when the hashing code is actually used. (Note that ML-DSA and basically all of the PQ algorithms make extensive use of SHA3.)
Calling it from any centrally used function like oqs_ossl_destroy() did work.
I suppose this should read "...did NOT work.", right @ashman-p ?
Is there any reason not to go with the 'OPENSSL_init_crypto' option?
It's easier indeed -- but it'd really be good to understand why it is working. Without this understanding this may haunt us in the future again, say if some side effect in openssl initialization making it work right now changes....
The "thread_stop" solution is clearly understandable and documented and would require 3 code changes in the test programs and one documentation change pointing to OpenSSL's guidance: Would that be too undesirable, @ashman-p ?
+1 to @baentsch's comment. I think the argument to go with the thread_stop approach is that it works and seems to be fairly well understood at this point. I still don't really see how the OPENSSL_init_crypto approach solves the issue (despite the fact that it does seem to do so). That's not to say you can't go with that approach, but if I were making the decision, I would at least want to better understand how it solves the problem before electing to go with it.
It looks like the leak does indeed occur for SHA2 as well.
And does it disappear likewise if OPENSSL_thread_stop is added to the corresponding test, @SWilson4 ?
Calling it from any centrally used function like oqs_ossl_destroy() did work.
I suppose this should read "...did NOT work.", right @ashman-p ?
That is correct. It did NOT work. Sorry for the omission.
Is there any reason not to go with the 'OPENSSL_init_crypto' option?
It's easier indeed -- but it'd really be good to understand why it is working. Without this understanding this may haunt us in the future again, say if some side effect in openssl initialization making it work right now changes.... The "thread_stop" solution is clearly understandable and documented and would require 3 code changes in the test programs and one documentation change pointing to OpenSSL's guidance: Would that be too undesirable, @ashman-p ?
Yeah, I hear you. I am fine with us doing that. I wondered if @nhorman had any idea why the init calls seem to affect things? The other concern I have is for pre-existing applications. It seems possible that there would be some fallout?
'run_tests' memory leak tests fail when the build options enable OQS_USE_PTHREADS and OQS_USE_OPENSSL.
Fixes #1941.