h2oai / h2ogpt

Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://gpt-docs.h2o.ai/
http://h2o.ai
Apache License 2.0

Fatal Python error: Segmentation fault when inferencing with embeddings db of 4GB #683

Open slavag opened 1 year ago

slavag commented 1 year ago

Hi, I just ran a small prompt: "how can I list all EC2 instances in a specific region using the AWS CLI?" and the entire process failed (it was working a few weeks ago with the same db):

To create a public link, set `share=True` in `launch()`.
Run time of job "clear_torch_cache (trigger: interval[0:00:20], next run at: 2023-08-17 10:39:50 IDT)" was missed by 0:00:01.949125
Run time of job "clear_torch_cache (trigger: interval[0:00:20], next run at: 2023-08-17 10:40:50 IDT)" was missed by 0:00:12.048018
Run time of job "clear_torch_cache (trigger: interval[0:00:20], next run at: 2023-08-17 10:41:10 IDT)" was missed by 0:00:06.070966
Run time of job "clear_torch_cache (trigger: interval[0:00:20], next run at: 2023-08-17 10:41:30 IDT)" was missed by 0:00:01.356010
ggml_new_tensor_impl: not enough space in the context's memory pool (needed 13717376, available 10485760)
Fatal Python error: Segmentation fault

Current thread 0x000000038b007000 (most recent call first):
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/llama_cpp/llama_cpp.py", line 678 in llama_eval
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/llama_cpp/llama.py", line 461 in eval
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/llama_cpp/llama.py", line 721 in generate
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/llama_cpp/llama.py", line 899 in _create_completion
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/langchain/llms/llamacpp.py", line 288 in _stream
  File "/Users/slava/Documents/Development/private/AI/h2ogpt/src/gpt4all_llm.py", line 378 in _stream
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/langchain/llms/base.py", line 341 in stream
  File "/Users/slava/Documents/Development/private/AI/h2ogpt/src/gpt4all_llm.py", line 344 in _call
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/langchain/llms/base.py", line 961 in _generate
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/langchain/llms/base.py", line 475 in _generate_helper
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/langchain/llms/base.py", line 582 in generate
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/langchain/llms/base.py", line 451 in generate_prompt
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/langchain/chains/llm.py", line 102 in generate
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/langchain/chains/llm.py", line 92 in _call
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/langchain/chains/base.py", line 252 in __call__
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/langchain/chains/llm.py", line 252 in predict
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/langchain/chains/combine_documents/stuff.py", line 165 in combine_docs
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/langchain/chains/combine_documents/base.py", line 106 in _call
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/langchain/chains/base.py", line 252 in __call__
  File "/Users/slava/Documents/Development/private/AI/h2ogpt/src/utils.py", line 398 in run
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 1038 in _bootstrap_inner
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 995 in _bootstrap

Thread 0x00000004c2327000 (most recent call first):
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 324 in wait
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 622 in wait
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/tqdm/_monitor.py", line 60 in run
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 1038 in _bootstrap_inner
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 995 in _bootstrap

Thread 0x00000006f1807000 (most recent call first):
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/concurrent/futures/thread.py", line 81 in _worker
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 975 in run
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 1038 in _bootstrap_inner
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 995 in _bootstrap

Thread 0x00000004c131b000 (most recent call first):
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 320 in wait
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/queue.py", line 171 in get
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 857 in run
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 1038 in _bootstrap_inner
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 995 in _bootstrap

Thread 0x00000004be2ef000 (most recent call first):
  File "<frozen abc>", line 119 in __instancecheck__
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/h11/_events.py", line 151 in __init__
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/h11/_readers.py", line 113 in maybe_read_from_SEND_RESPONSE_server
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/h11/_connection.py", line 411 in _extract_next_receive_event
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/h11/_connection.py", line 471 in next_event
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpcore/_async/http11.py", line 188 in _receive_event
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpcore/_async/http11.py", line 155 in _receive_response_headers
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpcore/_async/http11.py", line 91 in handle_async_request
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpcore/_async/connection.py", line 90 in handle_async_request
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 237 in handle_async_request
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpx/_transports/default.py", line 353 in handle_async_request
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpx/_client.py", line 1722 in _send_single_request
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpx/_client.py", line 1685 in _send_handling_redirects
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpx/_client.py", line 1648 in _send_handling_auth
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/httpx/_client.py", line 1620 in send
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/gradio/utils.py", line 443 in __run
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/gradio/queueing.py", line 374 in call_prediction
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/gradio/queueing.py", line 398 in process_events
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/asyncio/runners.py", line 118 in run
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/asyncio/runners.py", line 190 in run
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/uvicorn/server.py", line 61 in run
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 975 in run
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 1038 in _bootstrap_inner
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 995 in _bootstrap

Thread 0x00000004bd2e3000 (most recent call first):
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 324 in wait
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 622 in wait
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/apscheduler/schedulers/blocking.py", line 30 in _main_loop
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 975 in run
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 1038 in _bootstrap_inner
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 995 in _bootstrap

Thread 0x00000002b312f000 (most recent call first):
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 324 in wait
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/queue.py", line 180 in get
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/posthog/consumer.py", line 104 in next
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/posthog/consumer.py", line 73 in upload
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/posthog/consumer.py", line 62 in run
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 1038 in _bootstrap_inner
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/threading.py", line 995 in _bootstrap

Thread 0x00000001f118e080 (most recent call first):
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/gradio/blocks.py", line 2198 in block_thread
  File "/Users/slava/Documents/Development/private/AI/h2ogpt/src/gradio_runner.py", line 3394 in go_gradio
  File "/Users/slava/Documents/Development/private/AI/h2ogpt/src/gen.py", line 972 in main
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/fire/core.py", line 691 in _CallAndUpdateTrace
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/fire/core.py", line 475 in _Fire
  File "/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/fire/core.py", line 141 in Fire
  File "/Users/slava/Documents/Development/private/AI/h2ogpt/src/utils.py", line 57 in H2O_Fire
  File "/Users/slava/Documents/Development/private/AI/h2ogpt/generate.py", line 12 in entrypoint_main
  File "/Users/slava/Documents/Development/private/AI/h2ogpt/generate.py", line 16 in <module>

Extension modules: simplejson._speedups, charset_normalizer.md, numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pandas._libs.hashing, pyarrow.lib, pyarrow._hdfsio, pandas._libs.tslib, pandas._libs.ops, numexpr.interpreter, pyarrow._compute, pandas._libs.arrays, pandas._libs.sparse, pandas._libs.reduction, pandas._libs.indexing, pandas._libs.index, pandas._libs.internals, pandas._libs.join, pandas._libs.writers, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.groupby, pandas._libs.testing, pandas._libs.parsers, pandas._libs.json, lz4._version, lz4.frame._frame, psutil._psutil_osx, psutil._psutil_posix, matplotlib._c_internal_utils, PIL._imaging, matplotlib._path, kiwisolver._cext, matplotlib._image, torch._C, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, yaml._yaml, sentencepiece._sentencepiece, pydantic.typing, pydantic.errors, pydantic.version, pydantic.utils, pydantic.class_validators, pydantic.config, pydantic.color, pydantic.datetime_parse, 
pydantic.validators, pydantic.networks, pydantic.types, pydantic.json, pydantic.error_wrappers, pydantic.fields, pydantic.parse, pydantic.schema, pydantic.main, pydantic.dataclasses, pydantic.annotated_types, pydantic.decorator, pydantic.env_settings, pydantic.tools, pydantic, multidict._multidict, yarl._quoting_c, aiohttp._helpers, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket, frozenlist._frozenlist, sqlalchemy.cyextension.collections, sqlalchemy.cyextension.immutabledict, sqlalchemy.cyextension.processors, sqlalchemy.cyextension.resultproxy, sqlalchemy.cyextension.util, greenlet._greenlet, PIL._webp, scipy._lib._ccallback_c, numpy.linalg.lapack_lite, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.sparse.linalg._isolve._iterative, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg._cythonized_array_utils, scipy.linalg._flinalg, scipy.linalg._solve_toeplitz, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_lapack, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_update, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, scipy.spatial._ckdtree, scipy._lib.messagestream, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy.spatial.transform._rotation, scipy.ndimage._nd_image, _ni_label, scipy.ndimage._ni_label, scipy.optimize._minpack2, scipy.optimize._group_columns, scipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, 
scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy.optimize.__nnls, scipy.optimize._highs.cython.src._highs_wrapper, scipy.optimize._highs._highs_wrapper, scipy.optimize._highs.cython.src._highs_constants, scipy.optimize._highs._highs_constants, scipy.linalg._interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.optimize._direct, scipy.integrate._odepack, scipy.integrate._quadpack, scipy.integrate._vode, scipy.integrate._dop, scipy.integrate._lsoda, scipy.special.cython_special, scipy.stats._stats, scipy.stats.beta_ufunc, scipy.stats._boost.beta_ufunc, scipy.stats.binom_ufunc, scipy.stats._boost.binom_ufunc, scipy.stats.nbinom_ufunc, scipy.stats._boost.nbinom_ufunc, scipy.stats.hypergeom_ufunc, scipy.stats._boost.hypergeom_ufunc, scipy.stats.ncf_ufunc, scipy.stats._boost.ncf_ufunc, scipy.stats.ncx2_ufunc, scipy.stats._boost.ncx2_ufunc, scipy.stats.nct_ufunc, scipy.stats._boost.nct_ufunc, scipy.stats.skewnorm_ufunc, scipy.stats._boost.skewnorm_ufunc, scipy.stats.invgauss_ufunc, scipy.stats._boost.invgauss_ufunc, scipy.interpolate._fitpack, scipy.interpolate.dfitpack, scipy.interpolate._bspl, scipy.interpolate._ppoly, scipy.interpolate.interpnd, scipy.interpolate._rbfinterp_pythran, scipy.interpolate._rgi_cython, scipy.stats._biasedurn, scipy.stats._levy_stable.levyst, scipy.stats._stats_pythran, scipy._lib._uarray._uarray, scipy.stats._statlib, scipy.stats._mvn, scipy.stats._sobol, scipy.stats._qmc_cy, scipy.stats._rcont.rcont, regex._regex, sklearn.__check_build._check_build, sklearn.utils.murmurhash, sklearn.utils._isfinite, sklearn.utils._openmp_helpers, sklearn.utils._vector_sentinel, sklearn.feature_extraction._hashing_fast, sklearn.utils._logistic_sigmoid, sklearn.utils.sparsefuncs_fast, sklearn.preprocessing._csr_polynomial_expansion, sklearn.utils._cython_blas, sklearn.svm._libsvm, sklearn.svm._liblinear, sklearn.svm._libsvm_sparse, sklearn.utils._random, sklearn.utils._seq_dataset, sklearn.utils.arrayfuncs, 
sklearn.utils._typedefs, sklearn.utils._readonly_array_wrapper, sklearn.metrics._dist_metrics, sklearn.metrics.cluster._expected_mutual_info_fast, sklearn.metrics._pairwise_distances_reduction._datasets_pair, sklearn.metrics._pairwise_distances_reduction._base, sklearn.metrics._pairwise_distances_reduction._middle_term_computer, sklearn.utils._heap, sklearn.utils._sorting, sklearn.metrics._pairwise_distances_reduction._argkmin, sklearn.metrics._pairwise_distances_reduction._radius_neighbors, sklearn.metrics._pairwise_fast, sklearn.linear_model._cd_fast, sklearn._loss._loss, sklearn.utils._weight_vector, sklearn.linear_model._sgd_fast, sklearn.linear_model._sag_fast, sklearn.datasets._svmlight_format_fast, scipy.io.matlab._mio_utils, scipy.io.matlab._streams, scipy.io.matlab._mio5_utils, zstandard.backend_c, clickhouse_connect.driverc.buffer, clickhouse_connect.driverc.dataconv, clickhouse_connect.driverc.npconv, ujson, websockets.speedups, markupsafe._speedups, pvectorc, uvloop.loop, httptools.parser.parser, httptools.parser.url_parser (total: 265)
[1]    43920 segmentation fault  TOKENIZERS_PARALLELISM=false python3 generate.py --base_model=llama         
/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown                                                                                                                                                                            
  warnings.warn('resource_tracker: There appear to be %d '

Startup log:

Auto set langchain_mode=TestData.  Could use MyData instead.  To allow UserData to pull files from disk, set user_path or langchain_mode_paths, and ensure allow_upload_to_user_data=True
Using Model llama
Prep: persist_directory=db_dir_TestData exists, using
Prep: persist_directory=db_dir_MailData exists, using
Starting get_model: llama 
llama.cpp: loading model from /Users/slava/Documents/Development/private/AI/Models/TheBloke/llama2-7b-GGML/llama-2-7b-chat.ggmlv3.q4_1.bin
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 2048
llama_model_load_internal: n_embd     = 4096
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 32
llama_model_load_internal: n_head_kv  = 32
llama_model_load_internal: n_layer    = 32
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: n_gqa      = 1
llama_model_load_internal: rnorm_eps  = 1.0e-06
llama_model_load_internal: n_ff       = 11008
llama_model_load_internal: freq_base  = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype      = 3 (mostly Q4_1)
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size =    0.08 MB
llama_model_load_internal: mem required  = 4415.35 MB (+ 1024.00 MB per state)
llama_new_context_with_model: kv self size  = 1024.00 MB
ggml_metal_init: allocating
ggml_metal_init: using MPS
ggml_metal_init: loading '/Users/slava/.pyenv/versions/3.11.3/lib/python3.11/site-packages/llama_cpp/ggml-metal.metal'
ggml_metal_init: loaded kernel_add                            0x3ae8deb60
ggml_metal_init: loaded kernel_add_row                        0x3ae8df3b0
ggml_metal_init: loaded kernel_mul                            0x3ae8df8f0
ggml_metal_init: loaded kernel_mul_row                        0x3ae8dff40
ggml_metal_init: loaded kernel_scale                          0x3ae8e0480
ggml_metal_init: loaded kernel_silu                           0x3ae8e09c0
ggml_metal_init: loaded kernel_relu                           0x3ae8e0f00
ggml_metal_init: loaded kernel_gelu                           0x3ae8e1440
ggml_metal_init: loaded kernel_soft_max                       0x3ae8e1b10
ggml_metal_init: loaded kernel_diag_mask_inf                  0x3ae8e2190
ggml_metal_init: loaded kernel_get_rows_f16                   0x3ae8e2830
ggml_metal_init: loaded kernel_get_rows_q4_0                  0x3ae8e2ff0
ggml_metal_init: loaded kernel_get_rows_q4_1                  0x3ae8e3690
ggml_metal_init: loaded kernel_get_rows_q2_K                  0x3aeee11b0
ggml_metal_init: loaded kernel_get_rows_q3_K                  0x3aeee1970
ggml_metal_init: loaded kernel_get_rows_q4_K                  0x3aeee2010
ggml_metal_init: loaded kernel_get_rows_q5_K                  0x506624f70
ggml_metal_init: loaded kernel_get_rows_q6_K                  0x506625730
ggml_metal_init: loaded kernel_rms_norm                       0x506625e10
ggml_metal_init: loaded kernel_norm                           0x506626770
ggml_metal_init: loaded kernel_mul_mat_f16_f32                0x3aeee2770
ggml_metal_init: loaded kernel_mul_mat_q4_0_f32               0x3aeee2f70
ggml_metal_init: loaded kernel_mul_mat_q4_1_f32               0x5518ba1c0
ggml_metal_init: loaded kernel_mul_mat_q2_K_f32               0x5518bab40
ggml_metal_init: loaded kernel_mul_mat_q3_K_f32               0x5518bb220
ggml_metal_init: loaded kernel_mul_mat_q4_K_f32               0x506626d30
ggml_metal_init: loaded kernel_mul_mat_q5_K_f32               0x506627510
ggml_metal_init: loaded kernel_mul_mat_q6_K_f32               0x506628030
ggml_metal_init: loaded kernel_rope                           0x506628570
ggml_metal_init: loaded kernel_alibi_f32                      0x506628e50
ggml_metal_init: loaded kernel_cpy_f32_f16                    0x506629700
ggml_metal_init: loaded kernel_cpy_f32_f32                    0x506629fb0
ggml_metal_init: loaded kernel_cpy_f16_f16                    0x50662a740
ggml_metal_init: recommendedMaxWorkingSetSize = 21845.34 MB
ggml_metal_init: hasUnifiedMemory             = true
ggml_metal_init: maxTransferRate              = built-in GPU
llama_new_context_with_model: max tensor size =    78.12 MB
ggml_metal_add_buffer: allocated 'data            ' buffer, size =  4017.70 MB, ( 4018.16 / 21845.34)
ggml_metal_add_buffer: allocated 'eval            ' buffer, size =    10.00 MB, ( 4028.16 / 21845.34)
ggml_metal_add_buffer: allocated 'kv              ' buffer, size =  1026.00 MB, ( 5054.16 / 21845.34)
ggml_metal_add_buffer: allocated 'scr0            ' buffer, size =   228.00 MB, ( 5282.16 / 21845.34)
ggml_metal_add_buffer: allocated 'scr1            ' buffer, size =   160.00 MB, ( 5442.16 / 21845.34)
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | 
Model {'base_model': 'llama', 'tokenizer_base_model': '', 'lora_weights': '', 'inference_server': '', 'prompt_type': 'llama2', 'prompt_dict': {'promptA': '', 'promptB': '', 'PreInstruct': '<s>[INST] ', 'PreInput': None, 'PreResponse': '[/INST]', 'terminate_response': ['[INST]', '</s>'], 'chat_sep': ' ', 'chat_turn_sep': ' </s>', 'humanstr': '[INST]', 'botstr': '[/INST]', 'generates_leading_space': False, 'system_prompt': None}}
Running on local URL:  http://0.0.0.0:7860

Please advise. Thanks

bluciano212 commented 1 year ago

Hi, I read that you are stuck on the same error: ggml_new_tensor_impl: not enough space in the context's memory pool (needed 16781920, available 10485760) / Fatal Python error: Segmentation fault

Please note that I am also using --model_path_llama=llama-2-7b-chat.ggmlv3.q4_1.bin. Nobody has replied to me either; anyway, I'm doing several trial-and-error tests:

After the first question I save and clear that chat and submit a new question; it does not crash with a segmentation fault and finally returns a second reply. I save and clear the second chat and submit a third question, then save and clear the third and submit a fourth. RAM usage was 6.94 GB of 16 GB.

I see your hardware instruction flags: AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |

Are you using ARM too? My hardware is an Orange Pi 5, an 8-core ARM CPU with 16 GB RAM: AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | VSX = 0 |

NEON and ARM_FMA are instructions specific to ARM CPUs. FP16_VA is, as I understand it, related to the NPU. My Orange Pi 5 has a Rockchip RK3588S, which also includes an NPU with 6 TOPS of AI compute, but I'm still not sure whether h2oGPT is using it.

My post is: "h2oGPT installed on Orange Pi 5 16 GB RAM, operating system Armbian: Fatal Python error: Segmentation fault" https://github.com/h2oai/h2ogpt/issues/742

slavag commented 1 year ago

@bluciano212 I'm using a Mac M1 Max (Apple silicon), so I'm using Metal-compiled llama.cpp.

pseudotensor commented 1 year ago

The actual error seems to be:

ggml_new_tensor_impl: not enough space in the context's memory pool (needed 13717376, available 10485760)

This seems to be an issue in llama.cpp: https://github.com/ggerganov/llama.cpp/issues/52

The last suggestion there is that it's a bug in llama.cpp triggered by special characters in the prompt/text.
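For what it's worth, the failing allocation in the log is concrete enough to do the arithmetic on: the fixed scratch ("eval") pool in this llama.cpp build is about 3 MiB too small for the compute graph of this prompt, which would explain why long prompts (such as ones stuffed with retrieved documents from a 4 GB embeddings db) crash while short chats do not. A minimal sketch using the two numbers from the log:

```python
# Values copied from the log line above:
needed = 13_717_376      # bytes requested by ggml_new_tensor_impl
available = 10_485_760   # 10 MiB fixed scratch pool in this llama.cpp build

shortfall = needed - available
print(f"pool: {available / 1024**2:.0f} MiB, "
      f"needed: {needed / 1024**2:.1f} MiB, "
      f"short by {shortfall / 1024**2:.1f} MiB")
```

The pool size is fixed at build time here, so the only user-side levers are shrinking the prompt/context or switching to a llama.cpp build that sizes the scratch buffers dynamically.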

slavag commented 1 year ago

@pseudotensor Thanks, will monitor that llama.cpp issue.

slavag commented 1 year ago

It seems that special characters are not the issue, but the size of the context is.

pseudotensor commented 1 year ago

@slavag Ok, are you able to reduce that some?

bluciano212 commented 1 year ago

The actual error seems to be:

ggml_new_tensor_impl: not enough space in the context's memory pool (needed 13717376, available 10485760)

Seems to be issue in llama.cpp: ggerganov/llama.cpp#52

Last suggestion is a bug in llama.cpp due to special characters in the prompt/text.

So if I use the GPT4All model ggml-gpt4all-j-v1.3-groovy.bin, there will not be any ggml_new_tensor_impl: not enough space in the context's memory pool (needed 16781920, available 10485760) / Fatal Python error: Segmentation fault, is that right? What will happen to the existing db (db_dir_UserData) I have created until now with the llama.cpp model? Will I lose it, or will my existing db_dir_UserData be integrated with the GPT4All model?

pseudotensor commented 1 year ago

I don't recommend GPT4All models; they are quite bad. But it's possible the new quantization from llama.cpp will help, i.e. GGUFv2.

None of the database stuff is affected when you change the LLM. You'll lose nothing and won't have to do anything if you try a different LLM.

bluciano212 commented 1 year ago

new quantization from llama.cpp will help, i.e. GGUFv2

Thanks, I have searched on https://huggingface.co/TheBloke but found nothing. Do you have any suggestion for where I can download and try the new GGUFv2 quantization from llama.cpp?

pseudotensor commented 1 year ago

In principle you can do:

pip uninstall -y llama_cpp_python_cuda llama_cpp_python
# linux:
pip install https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels/releases/download/textgen-webui/llama_cpp_python_cuda-0.1.83+cu118-cp310-cp310-linux_x86_64.whl
# windows:
pip install https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels/releases/download/textgen-webui/llama_cpp_python_cuda-0.1.83+cu118-cp310-cp310-win_amd64.whl

or some similar wheel from: https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels/releases

This is instead of the current 0.1.73 version.

Then just use some GGUF model on TheBloke.
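As a side note, when downloading from TheBloke it is easy to grab the wrong container format. As I understand the formats, the two are distinguishable by their first four bytes: GGUF files begin with the ASCII magic "GGUF", while the older GGML-era files store a little-endian magic that reads backwards on disk (e.g. "tjgg" for ggjt v3 files like llama-2-7b-chat.ggmlv3.q4_1.bin). A best-effort check along these lines (the function name is mine, and the magic table is an assumption based on the formats, not an official API):

```python
# Old GGML container magics as they appear as raw bytes on disk
# (stored little-endian, so the ASCII reads backwards):
GGML_MAGICS = {b"lmgg": "ggml", b"fmgg": "ggmf", b"tjgg": "ggjt"}

def model_format(path_or_bytes):
    """Best-effort classification of a model file by its 4-byte magic."""
    if isinstance(path_or_bytes, bytes):
        magic = path_or_bytes[:4]
    else:
        with open(path_or_bytes, "rb") as f:
            magic = f.read(4)
    if magic == b"GGUF":
        return "gguf"
    return GGML_MAGICS.get(magic, "unknown")

print(model_format(b"tjgg" + b"\x00" * 12))   # a ggjt-era GGML header
print(model_format(b"GGUF" + b"\x00" * 12))   # a GGUF header
```

Running this against a downloaded file before pointing h2oGPT at it confirms whether you actually have a GGUF model or one of the older GGML variants.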

bluciano212 commented 1 year ago

GGUF

OK, thanks! But GGUF is just for GPU and not possible to use on CPU, right?