beew opened this issue 5 years ago
I have the same problem, but I am using Anaconda3: create an env and install all the requirements inside it. Remember to use a Python 3.6 env; with anything higher you can't install TensorFlow 1.0.0 via pip and would have to build it from source. Steps to install in an Anaconda env (with a quick import check after the last step):
conda create -n dragonfire python=3.6
source activate dragonfire
pip install --upgrade wikipedia==1.4.0 PyUserInput==0.1.11 tinydb==3.9.0.post1 youtube_dl spacy==2.0.13 pyowm==2.9.0 tensorflow==1.0.0 deepspeech==0.4.1 SpeechRecognition tweepy==3.7.0 metadata_parser==0.9.20 hug==2.4.0 hug-middleware-cors==1.0.0 waitress==1.1.0 requests==2.20.0 pyjwt==1.6.4 SQLAlchemy>=1.3.0 PyMySQL==0.8.1 msgpack==0.5.6
i. Now run this:
pip install --upgrade flake8 sphinx sphinx_rtd_theme recommonmark m2r pytest
ii. And one more:
pip install https://github.com/huggingface/neuralcoref-models/releases/download/en_coref_sm-3.0.0/en_coref_sm-3.0.0.tar.gz
python3 -m spacy download en
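Before going further, a quick import check like the one below (just a sketch; en_coref_sm is the package name installed from the tarball above) can confirm the pinned packages actually installed:
python - <<'EOF'
# Rough sanity check that the pinned packages above import in this env.
import tensorflow as tf
import deepspeech
import spacy
import en_coref_sm                    # coreference model installed from the tarball
print("tensorflow", tf.__version__)   # expect 1.0.0
nlp = spacy.load("en")                # model fetched by "spacy download en"
print("spacy model and en_coref_sm loaded OK")
EOF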
/usr/share/dragonfire/deepspeech
and /usr/share/dragonfire/deepconv
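I assume these two directories are where the speech model files need to live; a rough sketch of preparing them is below (the DeepSpeech model archive URL is my guess based on the deepspeech==0.4.1 pin, and I'm not sure what exactly goes into deepconv):
# Sketch only: create the expected directories and unpack the DeepSpeech 0.4.1 models.
sudo mkdir -p /usr/share/dragonfire/deepspeech /usr/share/dragonfire/deepconv
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.4.1/deepspeech-0.4.1-models.tar.gz
tar -xzf deepspeech-0.4.1-models.tar.gz
sudo cp models/* /usr/share/dragonfire/deepspeech/   # archive is expected to extract into models/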
python __init__.py
I'm getting errors; here is the traceback:
dtype=data_type)
File "/media/mustafa/ubuntu_backup/anaconda3/envs/ai/lib/python3.6/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn.py", line 184, in <lambda>
call_cell = lambda: cell(input_, state)
File "/media/mustafa/ubuntu_backup/anaconda3/envs/ai/lib/python3.6/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn.py", line 197, in static_rnn
(output, state) = call_cell()
Segmentation fault
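Since the crash happens inside tf.contrib.rnn's static_rnn, a bare static RNN test can show whether this TensorFlow 1.0.0 build segfaults on its own, independent of Dragonfire. A minimal sketch (the shapes and cell size are arbitrary, not Dragonfire's real model):
# Toy static_rnn check for the TF 1.0.0 env; if this also segfaults, the problem is TensorFlow itself.
import numpy as np
import tensorflow as tf

steps, batch, dim, hidden = 5, 1, 10, 16
inputs = [tf.placeholder(tf.float32, [batch, dim]) for _ in range(steps)]
cell = tf.contrib.rnn.BasicLSTMCell(hidden)
outputs, state = tf.contrib.rnn.static_rnn(cell, inputs, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feed = {ph: np.zeros((batch, dim), np.float32) for ph in inputs}
    sess.run(outputs, feed_dict=feed)
    print("static_rnn ran without crashing")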
I also installed it using the .sh script, and it runs fine when I type dragonfire in the terminal, but its speech-to-text (STT) is not accurate. I tried calling "Dragonfire", "hey", "wake up" many times, but it never recognized them correctly and kept showing something else as the input.
Hope you find this useful.
@beew Dragonfire requires tensorflow==1.0.0 because the neural network model it was trained with requires tensorflow==1.0.0. But recently TensorFlow broke something about their 1.0.0 release and I don't have time to fix it. Maybe you can fix it?
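If you want to check whether a 1.0.0 wheel is even still offered for your Python version and platform, an impossible pin makes pip print the versions it can actually find:
# pip's error lists the installable versions ("from versions: ..."),
# so you can see whether 1.0.0 still shows up for this interpreter.
pip install tensorflow==0.0.0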
You could always do a build in a clean chroot: https://wiki.archlinux.org/index.php/DeveloperWiki:Building_in_a_clean_chroot. I'm building and repackaging the dependencies right now and will be attempting to get this running on Arch; I'll let you know how it goes.
Update: I tried both building in a clean chroot on Arch using apt and doing a Debian bootstrap from Arch. I now have the executable but couldn't get the dependencies to work right. My next step will be to build in a partition using a live-boot Debian and then mount it from Arch :) Will update; I'd love to get this working on Arch.
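For reference, the clean-chroot workflow from that wiki page boils down to a couple of devtools commands; this is only a sketch and assumes you've already written a PKGBUILD for Dragonfire and its repackaged dependencies:
# Needs the devtools package and a PKGBUILD in the current directory; the chroot path is arbitrary.
CHROOT=$HOME/chroot
mkdir -p "$CHROOT"
mkarchroot "$CHROOT/root" base-devel
makechrootpkg -c -r "$CHROOT"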
@dsimon28 try 1.2.0 for fixed/removed dependencies.
I am wondering if there are instructions for compiling Dragonfire from source.
The install script installs a lot of things in the root file system, and I don't want to mess up my production system with conflicting versions (e.g. the install script wants to install tensorflow-1.0.0, but I already have tensorflow-1.13.0 compiled from source for my CUDA version, so it would not be good if Dragonfire replaced it with an old stock TensorFlow from pip).
I would like to build Dragonfire in a local directory for testing, and I need some help building it without the install script. Almost all the requirements installed via apt are already on my system (Ubuntu 16.04) except for ATLAS, which I am not sure is actually needed since I already have OpenBLAS and Intel MKL and switch between them via update-alternatives (ATLAS seems pretty inferior to either).
I have succeeded in using multiple Python versions locally (Python 3.6 and 3.7) in different directories and invoking them with the correct environment variables, so it would not be difficult for me to set up a dedicated directory with Python 3.5 (the system python3) for Dragonfire. Roughly what I have in mind is sketched below.
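Something like this, assuming the repository has a standard setup.py so pip can do an editable install entirely inside a venv (the paths here are just examples):
# Isolated local setup; nothing touches the root file system or the system site-packages.
python3 -m venv ~/builds/dragonfire-env
source ~/builds/dragonfire-env/bin/activate
git clone https://github.com/DragonComputer/Dragonfire.git ~/builds/dragonfire-src
cd ~/builds/dragonfire-src
pip install -e .   # everything stays inside the venv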
Thanks.