We will use Conda to manage environments. We recommend installing Miniconda for Python 3.8. Then create an environment with
conda create -n urobotics python=3.8
activate it with
conda activate urobotics
Then clone the repo in a directory of your choice
git clone https://github.com/berkeleyauv/perception.git
Change into the cloned repo directory and install it in editable mode
pip3 install -e ./
Install all dependencies with
pip3 install -r requirements.txt
Our training data is stored at https://www.dropbox.com/sh/rrbfqfutrmifrxs/AAAfXxlcCtWZmUELp4wXyTIxa?dl=0, so download it and unzip it in the same folder as perception.
To compile Cythonized code, cd into the folder containing the Cython setup.py and run the following commands
python setup.py build_ext --inplace
cythonize file_to_cythonize.pyx
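If you need a reference for what such a setup.py contains, here is a minimal sketch. It is not the repo's actual build script; it simply builds the placeholder module name used in the example above, assuming Cython is installed.

from setuptools import setup
from Cython.Build import cythonize

# Minimal, hypothetical build script; the repo's real setup.py may differ.
setup(ext_modules=cythonize("file_to_cythonize.pyx"))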
The repository also contains miscellaneous code (camera calibration etc.), code for specific tasks (perception/tasks), visualization tools (perception/vis), and code for testing tasks (ideally this should be placed in a separate folder called tests).
To create your own algorithm to test, create a class which extends the TaskPerceiver class. perception/tasks/TaskPerceiver.py includes a template with documentation for how to do this (see the sketch below).
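As a rough, hypothetical illustration only, a subclass could look something like the following. The method name analyze, its parameters, and the return format here are assumptions; the actual interface is defined by the template in perception/tasks/TaskPerceiver.py, so defer to that file.

import numpy as np
from perception.tasks.TaskPerceiver import TaskPerceiver

class MyAlgorithm(TaskPerceiver):  # "MyAlgorithm" is a placeholder name
    def analyze(self, frame, debug=False):
        # NOTE: "analyze", its signature, and the return value below are
        # assumptions for illustration; follow the documented template in
        # perception/tasks/TaskPerceiver.py for the real interface.
        gray = frame.mean(axis=2).astype(np.uint8)  # toy processing step
        return gray, [frame, gray]  # assumed (output, debug frames) format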
After writing the code for your specific task algorithm, you can do one of two things:
Add this to the end of your algorithm file:
if __name__ == '__main__':
    from perception.vis.vis import run
    run(<list of file/directory names>, <new instance of your class>, <save your video?>)
and then run
python <your algorithm>.py
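For example, if your class were called MyAlgorithm (the hypothetical name from the sketch above) and you had a test video at data/test.mp4 (also hypothetical), the block could read:

if __name__ == '__main__':
    from perception.vis.vis import run
    # MyAlgorithm and 'data/test.mp4' are placeholders; the last argument
    # corresponds to <save your video?> in the template above.
    run(['data/test.mp4'], MyAlgorithm(), False)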
Add this to the perception/__init__.py file:
import <path to your module>
ALGOS = {
    'custom_name': <your module>.<your class reference>
}
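For example, with the hypothetical module and class used earlier:

# Hypothetical: registers MyAlgorithm (from perception/tasks/my_algorithm.py)
# under the name 'my_algo'; adjust the import path to your actual module.
import perception.tasks.my_algorithm as my_algorithm
ALGOS = {
    'my_algo': my_algorithm.MyAlgorithm
}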
and then run
python perception/vis/vis.py --algorithm custom_name [--data <path to file/directory>] [--profile <function name>] [--save_video]
The --algorithm parameter is required. If --data isn't specified, it will default to your webcam. If --profile isn't specified, profiling is off by default. Add the --save_video flag if you want to save your vis test as an mp4 file.
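For example, to run the hypothetical 'my_algo' entry registered above on a test video (path is illustrative) and save the result:
python perception/vis/vis.py --algorithm my_algo --data data/test.mp4 --save_video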
Flowchart of TaskPerceiver, TaskReceiver, and AlgorithmRunner.