KoboldAI / KoboldAI-Client

https://koboldai.com
GNU Affero General Public License v3.0

Help!! memory allocation issue, what to do? #392

Closed rrennn closed 11 months ago

rrennn commented 11 months ago

```
Exception in thread Thread-192:
Traceback (most recent call last):
  File "aiserver.py", line 2411, in lazy_load_callback
    model_dict[key] = model_dict[key].materialize(f, map_location="cpu")
  File "H:\koboldai\torch_lazy_loader.py", line 106, in materialize
    storage = STORAGE_TYPE_MAP[dtype].from_buffer(f.read(nbytes), "little")
  File "B:\python\lib\zipfile.py", line 940, in read
    data = self._read1(n)
  File "B:\python\lib\zipfile.py", line 1010, in _read1
    data = self._read2(n)
  File "B:\python\lib\zipfile.py", line 1040, in _read2
    data = self._fileobj.read(n)
  File "B:\python\lib\zipfile.py", line 764, in read
    data = self._file.read(n)
MemoryError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "B:\python\lib\site-packages\transformers\modeling_utils.py", line 399, in load_state_dict
    return torch.load(checkpoint_file, map_location="cpu")
  File "H:\koboldai\torch_lazy_loader.py", line 295, in torch_load
    callback(retval, f=f, map_location=map_location, pickle_module=pickle_module, **pickle_load_args)
  File "aiserver.py", line 2443, in lazy_load_callback
    model_dict[name] = model_dict[name].to(dtype)
AttributeError: 'LazyTensor' object has no attribute 'to'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "aiserver.py", line 2604, in load_model
    model = AutoModelForCausalLM.from_pretrained(vars.model, revision=vars.revision, cache_dir="cache", **lowmem)
  File "B:\python\lib\site-packages\transformers\models\auto\auto_factory.py", line 463, in from_pretrained
    return model_class.from_pretrained(
  File "aiserver.py", line 1822, in new_from_pretrained
    return old_from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
  File "B:\python\lib\site-packages\transformers\modeling_utils.py", line 2184, in from_pretrained
    state_dict = load_state_dict(resolved_archive_file)
  File "B:\python\lib\site-packages\transformers\modeling_utils.py", line 403, in load_state_dict
    if f.read().startswith("version"):
MemoryError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "B:\python\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "B:\python\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "B:\python\lib\site-packages\socketio\server.py", line 731, in _handle_event_internal
    r = server._trigger_event(data[0], namespace, sid, *data[1:])
  File "B:\python\lib\site-packages\socketio\server.py", line 756, in _trigger_event
    return self.handlers[namespace][event](*args)
  File "B:\python\lib\site-packages\flask_socketio\__init__.py", line 282, in _handler
    return self._handle_event(handler, message, namespace, sid,
  File "B:\python\lib\site-packages\flask_socketio\__init__.py", line 826, in _handle_event
    ret = handler(*args)
  File "aiserver.py", line 466, in g
    return f(*a, **k)
  File "aiserver.py", line 3917, in get_message
    load_model(use_gpu=msg['use_gpu'], gpu_layers=msg['gpu_layers'], disk_layers=msg['disk_layers'], online_model=msg['online_model'])
  File "aiserver.py", line 2608, in load_model
    model = GPTNeoForCausalLM.from_pretrained(vars.model, revision=vars.revision, cache_dir="cache", **lowmem)
  File "aiserver.py", line 1822, in new_from_pretrained
    return old_from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
  File "B:\python\lib\site-packages\transformers\modeling_utils.py", line 2230, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "B:\python\lib\site-packages\transformers\models\gpt_neo\modeling_gpt_neo.py", line 674, in __init__
    self.transformer = GPTNeoModel(config)
  File "B:\python\lib\site-packages\transformers\models\gpt_neo\modeling_gpt_neo.py", line 484, in __init__
    self.h = nn.ModuleList([GPTNeoBlock(config, layer_id=i) for i in range(config.num_layers)])
  File "B:\python\lib\site-packages\transformers\models\gpt_neo\modeling_gpt_neo.py", line 484, in <listcomp>
    self.h = nn.ModuleList([GPTNeoBlock(config, layer_id=i) for i in range(config.num_layers)])
  File "B:\python\lib\site-packages\transformers\models\gpt_neo\modeling_gpt_neo.py", line 313, in __init__
    self.attn = GPTNeoAttention(config, layer_id)
  File "B:\python\lib\site-packages\transformers\models\gpt_neo\modeling_gpt_neo.py", line 264, in __init__
    self.attention = GPTNeoSelfAttention(config, self.attention_type)
  File "B:\python\lib\site-packages\transformers\models\gpt_neo\modeling_gpt_neo.py", line 137, in __init__
    bias = torch.tril(torch.ones((max_positions, max_positions), dtype=torch.uint8)).view(
RuntimeError: [enforce fail at C:\cb\pytorch_1000000000000\work\c10\core\impl\alloc_cpu.cpp:81] data. DefaultCPUAllocator: not enough memory: you tried to allocate 4194304 bytes.
```

I have an RTX 3060 with an Intel i7-12700F and 16 gigabytes of RAM.
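For context on the final `RuntimeError`: the allocation that actually failed is tiny. It is the causal attention mask that GPT-Neo builds per layer at construction time, and assuming the model's default context length of 2048 positions, a uint8 mask of shape `(max_positions, max_positions)` works out to exactly the 4,194,304 bytes named in the error. A 4 MB failure means system RAM was already exhausted by the earlier state-dict load, not that this tensor is unreasonably large. A minimal sketch of the arithmetic:

```python
# The last allocation in the traceback is GPT-Neo's causal attention mask:
#   torch.tril(torch.ones((max_positions, max_positions), dtype=torch.uint8))
# Assuming the default GPT-Neo context length of 2048 positions,
# the request is one byte per uint8 element:
max_positions = 2048
mask_bytes = max_positions * max_positions  # uint8 = 1 byte per element
print(mask_bytes)  # 4194304 -- the exact figure in the RuntimeError
```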

rrennn commented 11 months ago

Never mind, I found out how to fix it.