stitionai / devika

Devika is an Agentic AI Software Engineer that can understand high-level human instructions, break them down into steps, research relevant information, and write code to achieve the given objective. Devika aims to be a competitive open-source alternative to Devin by Cognition AI.
MIT License

devika #604

Open · werruww opened this issue 4 months ago

werruww commented 4 months ago

It does not work.

werruww commented 4 months ago

The lists do not open, and the project name does not appear.

darrassi1 commented 4 months ago

try this PR #603

werruww commented 4 months ago

Is it possible to upload all of the fixed files?

werruww commented 4 months ago

https://github.com/stitionai/devika/pull/603
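If the goal is just to get the fixed files locally, the PR branch can be checked out instead of waiting for a new upload. A minimal sketch, assuming GitHub's standard pull/<id>/head ref and using pr-603 as an arbitrary local branch name:

git fetch origin pull/603/head:pr-603
git checkout pr-603

With the GitHub CLI installed, gh pr checkout 603 (as used in the log in the next comment) does the same thing in one step.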

werruww commented 4 months ago

(base) C:\WINDOWS\system32>cd C:\Users\m\Desktop\p

(base) C:\Users\m\Desktop\p>git clone https://github.com/stitionai/devika.git
Cloning into 'devika'...
remote: Enumerating objects: 1483, done.
remote: Counting objects: 100% (637/637), done.
remote: Compressing objects: 100% (202/202), done.
remote: Total 1483, reused 449 (delta 434), pack-reused 846
Receiving objects: 100% (1483/1483), 6.07 MiB | 3.32 MiB/s, done.
Resolving deltas: 100% (864/864), done.

(base) C:\Users\m\Desktop\p>cd devika

(base) C:\Users\m\Desktop\p\devika>gh pr checkout 603
remote: Enumerating objects: 71, done.
remote: Counting objects: 100% (71/71), done.
remote: Compressing objects: 100% (65/65), done.
remote: Total 71 (delta 42), reused 15 (delta 6), pack-reused 0
Unpacking objects: 100% (71/71), 23.13 KiB | 8.00 KiB/s, done.
From https://github.com/stitionai/devika

(base) C:\Users\m\Desktop\p\devika>uv venv
Using Python 3.11.5 interpreter at: C:\Users\m\anaconda3\python.exe
Creating virtualenv at: .venv
Activate with: .venv\Scripts\activate

(base) C:\Users\m\Desktop\p\devika>.venv\Scripts\activate

(devika) (base) C:\Users\m\Desktop\p\devika>uv pip install -r requirements.txt
Resolved 143 packages in 7.43s
Downloaded 2 packages in 5.57s
Installed 143 packages in 1m 05s

(devika) (base) C:\Users\m\Desktop\p\devika>playwright install --with-deps

(devika) (base) C:\Users\m\Desktop\p\devika>python devika.py
24.06.17 10:28:44: root: INFO : Initializing Devika...
24.06.17 10:28:44: root: INFO : checking configurations...
24.06.17 10:28:44: root: INFO : Initializing Prerequisites Jobs...

A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.0 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.

If you are a user of the module, the easiest solution will be to downgrade to 'numpy<2' or try to upgrade the affected module. We expect that some modules will need time to support NumPy 2.

Traceback (most recent call last):
  File "C:\Users\m\Desktop\p\devika\devika.py", line 8, in <module>
    init_devika()
  File "C:\Users\m\Desktop\p\devika\src\init.py", line 27, in init_devika
    from src.bert.sentence import SentenceBert
  File "C:\Users\m\Desktop\p\devika\src\bert\sentence.py", line 1, in <module>
    from keybert import KeyBERT
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\keybert\__init__.py", line 3, in <module>
    from keybert._llm import KeyLLM
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\keybert\_llm.py", line 4, in <module>
    from sentence_transformers import util
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\sentence_transformers\__init__.py", line 15, in <module>
    from sentence_transformers.trainer import SentenceTransformerTrainer
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\sentence_transformers\trainer.py", line 10, in <module>
    from transformers import EvalPrediction, PreTrainedTokenizerBase, Trainer, TrainerCallback
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\utils\import_utils.py", line 1525, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\utils\import_utils.py", line 1535, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "C:\Users\m\anaconda3\Lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\trainer.py", line 71, in <module>
    from .optimization import Adafactor, get_scheduler
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\optimization.py", line 27, in <module>
    from .trainer_pt_utils import LayerWiseDummyOptimizer, LayerWiseDummyScheduler
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\trainer_pt_utils.py", line 235, in <module>
    device: Optional[torch.device] = torch.device("cuda"),
C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\trainer_pt_utils.py:235: UserWarning: Failed to initialize NumPy: _ARRAY_API not found (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:84.)
  device: Optional[torch.device] = torch.device("cuda"),
24.06.17 10:30:31: root: INFO : Loading sentence-transformer BERT models...
Traceback (most recent call last):
  File "C:\Users\m\Desktop\p\devika\devika.py", line 8, in <module>
    init_devika()
  File "C:\Users\m\Desktop\p\devika\src\init.py", line 31, in init_devika
    SentenceBert(prompt).extract_keywords()
  File "C:\Users\m\Desktop\p\devika\src\bert\sentence.py", line 9, in extract_keywords
    keywords = self.kw_model.extract_keywords(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\keybert\_model.py", line 195, in extract_keywords
    doc_embeddings = self.model.embed(docs)
                     ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\keybert\backend\_sentencetransformers.py", line 67, in embed
    embeddings = self.embedding_model.encode(documents, **self.encode_kwargs)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\sentence_transformers\SentenceTransformer.py", line 568, in encode
    all_embeddings = np.asarray([emb.numpy() for emb in all_embeddings])
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\sentence_transformers\SentenceTransformer.py", line 568, in <listcomp>
    all_embeddings = np.asarray([emb.numpy() for emb in all_embeddings])
                                 ^^^^^^^^^^^
RuntimeError: Numpy is not available

(devika) (base) C:\Users\m\Desktop\p\devika>
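The run above fails because of the NumPy 1.x/2.x mismatch the warning describes: the torch/transformers wheels that requirements.txt resolved were built against NumPy 1.x, while NumPy 2.0.0 was installed alongside them, so every call that hands a tensor to NumPy ends in "RuntimeError: Numpy is not available". Following the warning's own suggestion, a plausible workaround (not an official fix from the maintainers) is to pin NumPy below 2 inside the same uv virtualenv and rerun:

(devika) (base) C:\Users\m\Desktop\p\devika>uv pip install "numpy<2"
(devika) (base) C:\Users\m\Desktop\p\devika>python devika.py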

werruww commented 4 months ago

(base) C:\Users\m>cd C:\Users\m\Desktop\p

(base) C:\Users\m\Desktop\p>git clone https://github.com/stitionai/devika.git
Cloning into 'devika'...
remote: Enumerating objects: 1483, done.
remote: Counting objects: 100% (648/648), done.
remote: Compressing objects: 100% (205/205), done.
remote: Total 1483 (delta 519), reused 457 (delta 442), pack-reused 835
Receiving objects: 100% (1483/1483), 6.07 MiB | 1.54 MiB/s, done.
Resolving deltas: 100% (866/866), done.

(base) C:\Users\m\Desktop\p>cd devika

(base) C:\Users\m\Desktop\p\devika>uv venv
Using Python 3.11.5 interpreter at: C:\Users\m\anaconda3\python.exe
Creating virtualenv at: .venv
Activate with: .venv\Scripts\activate

(base) C:\Users\m\Desktop\p\devika>.venv\Scripts\activate

(devika) (base) C:\Users\m\Desktop\p\devika>uv pip install -r requirements.txt
Resolved 143 packages in 1.12s
Installed 143 packages in 30.15s

(devika) (base) C:\Users\m\Desktop\p\devika>playwright install --with-deps

(devika) (base) C:\Users\m\Desktop\p\devika>python devika.py
24.06.17 10:37:54: root: INFO : Initializing Devika...
24.06.17 10:37:54: root: INFO : checking configurations...
24.06.17 10:37:54: root: INFO : Initializing Prerequisites Jobs...

A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.0 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.

If you are a user of the module, the easiest solution will be to downgrade to 'numpy<2' or try to upgrade the affected module. We expect that some modules will need time to support NumPy 2.

Traceback (most recent call last):
  File "C:\Users\m\Desktop\p\devika\devika.py", line 8, in <module>
    init_devika()
  File "C:\Users\m\Desktop\p\devika\src\init.py", line 27, in init_devika
    from src.bert.sentence import SentenceBert
  File "C:\Users\m\Desktop\p\devika\src\bert\sentence.py", line 1, in <module>
    from keybert import KeyBERT
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\keybert\__init__.py", line 3, in <module>
    from keybert._llm import KeyLLM
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\keybert\_llm.py", line 4, in <module>
    from sentence_transformers import util
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\sentence_transformers\__init__.py", line 15, in <module>
    from sentence_transformers.trainer import SentenceTransformerTrainer
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\sentence_transformers\trainer.py", line 10, in <module>
    from transformers import EvalPrediction, PreTrainedTokenizerBase, Trainer, TrainerCallback
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\utils\import_utils.py", line 1525, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\utils\import_utils.py", line 1535, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "C:\Users\m\anaconda3\Lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\trainer.py", line 71, in <module>
    from .optimization import Adafactor, get_scheduler
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\optimization.py", line 27, in <module>
    from .trainer_pt_utils import LayerWiseDummyOptimizer, LayerWiseDummyScheduler
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\trainer_pt_utils.py", line 235, in <module>
    device: Optional[torch.device] = torch.device("cuda"),
C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\trainer_pt_utils.py:235: UserWarning: Failed to initialize NumPy: _ARRAY_API not found (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:84.)
  device: Optional[torch.device] = torch.device("cuda"),
24.06.17 10:38:27: root: INFO : Loading sentence-transformer BERT models...
Traceback (most recent call last):
  File "C:\Users\m\Desktop\p\devika\devika.py", line 8, in <module>
    init_devika()
  File "C:\Users\m\Desktop\p\devika\src\init.py", line 31, in init_devika
    SentenceBert(prompt).extract_keywords()
  File "C:\Users\m\Desktop\p\devika\src\bert\sentence.py", line 9, in extract_keywords
    keywords = self.kw_model.extract_keywords(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\keybert\_model.py", line 195, in extract_keywords
    doc_embeddings = self.model.embed(docs)
                     ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\keybert\backend\_sentencetransformers.py", line 67, in embed
    embeddings = self.embedding_model.encode(documents, **self.encode_kwargs)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\sentence_transformers\SentenceTransformer.py", line 568, in encode
    all_embeddings = np.asarray([emb.numpy() for emb in all_embeddings])
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\sentence_transformers\SentenceTransformer.py", line 568, in <listcomp>
    all_embeddings = np.asarray([emb.numpy() for emb in all_embeddings])
                                 ^^^^^^^^^^^
RuntimeError: Numpy is not available

(devika) (base) C:\Users\m\Desktop\p\devika>python devika.py
24.06.17 10:41:53: root: INFO : Initializing Devika...
24.06.17 10:41:53: root: INFO : checking configurations...
24.06.17 10:41:53: root: INFO : Initializing Prerequisites Jobs...

A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.0 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.

If you are a user of the module, the easiest solution will be to downgrade to 'numpy<2' or try to upgrade the affected module. We expect that some modules will need time to support NumPy 2.

Traceback (most recent call last):
  File "C:\Users\m\Desktop\p\devika\devika.py", line 8, in <module>
    init_devika()
  File "C:\Users\m\Desktop\p\devika\src\init.py", line 27, in init_devika
    from src.bert.sentence import SentenceBert
  File "C:\Users\m\Desktop\p\devika\src\bert\sentence.py", line 1, in <module>
    from keybert import KeyBERT
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\keybert\__init__.py", line 3, in <module>
    from keybert._llm import KeyLLM
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\keybert\_llm.py", line 4, in <module>
    from sentence_transformers import util
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\sentence_transformers\__init__.py", line 15, in <module>
    from sentence_transformers.trainer import SentenceTransformerTrainer
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\sentence_transformers\trainer.py", line 10, in <module>
    from transformers import EvalPrediction, PreTrainedTokenizerBase, Trainer, TrainerCallback
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\utils\import_utils.py", line 1525, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\utils\import_utils.py", line 1535, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "C:\Users\m\anaconda3\Lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\trainer.py", line 71, in <module>
    from .optimization import Adafactor, get_scheduler
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\optimization.py", line 27, in <module>
    from .trainer_pt_utils import LayerWiseDummyOptimizer, LayerWiseDummyScheduler
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\trainer_pt_utils.py", line 235, in <module>
    device: Optional[torch.device] = torch.device("cuda"),
C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\transformers\trainer_pt_utils.py:235: UserWarning: Failed to initialize NumPy: _ARRAY_API not found (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:84.)
  device: Optional[torch.device] = torch.device("cuda"),
24.06.17 10:42:00: root: INFO : Loading sentence-transformer BERT models...
modules.json: 100%|████████████████████| 349/349 [00:00<?, ?B/s]
C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\huggingface_hub\file_download.py:157: UserWarning: huggingface_hub cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\m\.cache\huggingface\hub\models--sentence-transformers--all-MiniLM-L6-v2. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the HF_HUB_DISABLE_SYMLINKS_WARNING environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
  warnings.warn(message)
config_sentence_transformers.json: 100%|████████████████████| 116/116 [00:00<?, ?B/s]
README.md: 100%|████████████████████| 10.7k/10.7k [00:00<?, ?B/s]
sentence_bert_config.json: 100%|████████████████████| 53.0/53.0 [00:00<?, ?B/s]
config.json: 100%|████████████████████| 612/612 [00:00<00:00, 39.2kB/s]
model.safetensors: 100%|████████████████████| 90.9M/90.9M [00:25<00:00, 3.53MB/s]
tokenizer_config.json: 100%|████████████████████| 350/350 [00:00<?, ?B/s]
vocab.txt: 100%|████████████████████| 232k/232k [00:00<00:00, 872kB/s]
tokenizer.json: 100%|████████████████████| 466k/466k [00:00<00:00, 1.11MB/s]
special_tokens_map.json: 100%|████████████████████| 112/112 [00:00<00:00, 7.18kB/s]
1_Pooling/config.json: 100%|████████████████████| 190/190 [00:00<?, ?B/s]
Traceback (most recent call last):
  File "C:\Users\m\Desktop\p\devika\devika.py", line 8, in <module>
    init_devika()
  File "C:\Users\m\Desktop\p\devika\src\init.py", line 31, in init_devika
    SentenceBert(prompt).extract_keywords()
  File "C:\Users\m\Desktop\p\devika\src\bert\sentence.py", line 9, in extract_keywords
    keywords = self.kw_model.extract_keywords(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\keybert\_model.py", line 195, in extract_keywords
    doc_embeddings = self.model.embed(docs)
                     ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\keybert\backend\_sentencetransformers.py", line 67, in embed
    embeddings = self.embedding_model.encode(documents, **self.encode_kwargs)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\sentence_transformers\SentenceTransformer.py", line 568, in encode
    all_embeddings = np.asarray([emb.numpy() for emb in all_embeddings])
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\m\Desktop\p\devika\.venv\Lib\site-packages\sentence_transformers\SentenceTransformer.py", line 568, in <listcomp>
    all_embeddings = np.asarray([emb.numpy() for emb in all_embeddings])
                                 ^^^^^^^^^^^
RuntimeError: Numpy is not available
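Both attempts end at the same root cause: torch cannot initialize NumPy 2.x ("Failed to initialize NumPy: _ARRAY_API not found"), so SentenceTransformer.encode dies on emb.numpy(). A quick way to confirm the mismatch before rerunning Devika is the one-liner below; it is only an illustrative check, not part of the project:

(devika) (base) C:\Users\m\Desktop\p\devika>python -c "import numpy, torch; print(numpy.__version__, torch.__version__); torch.from_numpy(numpy.zeros(1))"

If it prints a 2.x NumPy version and then raises "RuntimeError: Numpy is not available", the incompatible pairing is still present; pinning numpy<2 (or upgrading torch/transformers to builds compiled against NumPy 2) should clear both tracebacks.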