FuryMartin opened this discussion 5 months ago
Thanks for your suggestions! The data-missing problem may be caused by a wrong version of pandas; you could run `pip install pandas==1.1.5` to install the correct version instead of removing the prefix `pd` in `all_df.index = pd.np.arange(1, len(all_df) + 1)`.
Thanks, it works. Now I can get the complete output:
rank | algorithm | accuracy | task_avg_acc | paradigm | basemodel | task_definition | task_allocation | unseen_sample_recognition | basemodel-learning_rate | basemodel-epochs | task_definition-origins | task_allocation-origins | unseen_sample_recognition-threhold | time | url |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | sam_rfnet_lifelong_learning | 0.6403945099540626 | 0.6142496824998978 | lifelonglearning | BaseModel | TaskDefinitionByOrigin | TaskAllocationByOrigin | HardSampleMining | 0.0001 | 1 | ['front',,'garden'] | ['front',,'garden'] | 0.95 | 2024-06-18,09:19:25 | ../sam-workspace/benchmarkingjob/sam_rfnet_lifelong_learning/f3ec718c-2d0d-11ef-82f0-4125e9124177 |
Good job!
Good to see the detailed guide. It could be used to enrich the original one, and you might want to contribute a new pull request to https://github.com/kubeedge/ianvs/blob/main/examples/robot/lifelong_learning_bench/semantic-segmentation/README.md.
Hi, I was following this guide but am unable to get past the download dataset command. Can you please provide a link from where I can get the dataset mentioned there?
Hi. I encountered an error like this: `raise RuntimeError(f"benchmarkingjob runs failed, error: {err}.") from err RuntimeError: benchmarkingjob runs failed, error: prepare dataset failed, error: not one of train_index/train_data/train_data_info.` I have checked the YAML file in Testenv many times and tried changing the path multiple times, but this error still persists. Have you ever encountered such an error while running it?
Maybe there is something wrong with the index file. I have sent the index file to you by email; you can try it again.
Introduction or background of this discussion:
Guide for running the example of robot/lifelong_learning_bench/semantic-segmentation
Contents of this discussion:
Over the past few days, I have been trying to run examples/robot/lifelong_learning_bench/semantic-segmentation to learn how to use Ianvs.
However, the entire process of running this example was not so easy. I encountered a series of difficulties in the process. Here, I have recorded the process of running this example and the solutions to the problems encountered. Hopefully they may help others interested in Ianvs.
Besides, for the problems discovered during the trial process, I also provided some suggestions in hopes that they can be addressed by the community.
Ianvs Preparation
I created a new conda environment to run this project on an Ubuntu 22.04 server. Following the guide #step-1-ianvs-preparation, I chose Python 3.9 for the environment.
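For reference, a minimal sketch of that setup (the environment name `ianvs-test` is my own choice, not from the guide):

```bash
# Create and activate a dedicated Python 3.9 conda environment
conda create -n ianvs-test python=3.9 -y
conda activate ianvs-test
```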
Then I installed Sedna following the instruction:
Then I installed ianvs by executing `python setup.py install`.

Dataset Preparation
In Step 2, I need to download the dataset. I got the dataset from @hsj576. The dataset has the following structure:
Besides, I got the training index files from @hsj576, which contain multiple path pairs as shown below:
However, the README.md did not point out how the index files should be placed. After some trial and error, I found that all the files in the `2048x1024` folder need to be moved to the directory where the index files are located.
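A sketch of what that move looks like (the dataset root `/ianvs/project/datasets` matches my layout described below; treat it as an assumption):

```bash
# Move the images out of 2048x1024 so they sit next to the index files,
# letting the relative paths inside the index files resolve
mv /ianvs/project/datasets/2048x1024/* /ianvs/project/datasets/
```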
Then, as the guide pointed out, I should configure the dataset URL in `testenv.yml`. As we can see, there are two folders in `ianvs/examples/robot/lifelong_learning_bench/`. I tried to edit `semantic-segmentation/testenv/testenv.yml` in the benchmark project, which looks like this:

I assume the `train_url` and `test_url` are what I have to edit. Since the URL `./examples/robot/lifelong_learning_bench/testenv/accuracy.py` suggests that the root path for this file is `ianvs/project/ianvs`, and my dataset is in `ianvs/project/datasets`, I updated the configuration as follows:
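A hedged sketch of the resulting dataset section, shown as a here-doc so the YAML is visible (the key layout follows other ianvs examples, and the index-file names are hypothetical placeholders for the ones @hsj576 provided):

```bash
cat <<'EOF'
testenv:
  dataset:
    # absolute paths into my dataset directory; adjust to your own layout
    train_url: "/ianvs/project/datasets/train_data/index.txt"
    test_url: "/ianvs/project/datasets/test_data/index.txt"
EOF
```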
There were multiple testenv files in `testenv/`, and I edited them all.

Large Vision Model Preparation
Next, I needed to download the SAM package and model according to #step-2.5-large-vision-model-preparationoptional. This step went smoothly.
Then, I needed to install `mmcv` and `mmdetection`. The installation of `mmcv` was successful following the guide, but there were some issues with installing `mmdetection`, as shown below.

So I needed to install `torch` by myself. As the guide didn't mention the version of `torch`, I assumed I needed `torch 2.0.0` with `cu118`, because the download link for `mmcv` in the guide indicates this version: https://download.openmmlab.com/mmcv/dist/cu118/torch2.0.0/mmcv-2.0.0-cp39-cp39-manylinux1_x86_64.whl

I installed torch + cu118 following the instructions from Previous PyTorch Versions | PyTorch.
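A sketch of those commands (the install pattern follows the PyTorch previous-versions page for 2.0.0 with cu118; the mmcv wheel URL is the one from the guide):

```bash
# torch 2.0.0 built against CUDA 11.8, per the PyTorch previous-versions page
pip install torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1 \
    --index-url https://download.pytorch.org/whl/cu118

# the matching mmcv wheel linked in the guide
pip install https://download.openmmlab.com/mmcv/dist/cu118/torch2.0.0/mmcv-2.0.0-cp39-cp39-manylinux1_x86_64.whl
```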
As recommended in the guide, I downloaded the `cache.pickle` and `pretrain_model.pth` to the specified path and edited `self.resume` with the correct path.

Execution and Presentation
I used the code below to try running ianvs:
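The original command is not reproduced here, but it follows the usual ianvs invocation pattern (a sketch; the config path is an assumption based on the benchmarkingjob-simple.yaml mentioned later in this post):

```bash
# Run the benchmark from the ianvs repository root with a benchmarking config
ianvs -f ./examples/robot/lifelong_learning_bench/semantic-segmentation/benchmarkingjob-simple.yaml
```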
Then, I found a series of errors about missing packages:
I used the code below to fix the missing package issue:
When I reran the ianvs command, I got an error:
It appears that there is a path issue. After examining the structure of this example, I realized that I could resolve it by moving all the files from `./examples/robot/lifelong_learning_bench/semantic-segmentation` to `./examples/robot/lifelong_learning_bench`.
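What that move looked like, as a sketch (run from the ianvs repository root):

```bash
# Flatten the example one directory level up so the hard-coded relative
# paths in the configs resolve
mv ./examples/robot/lifelong_learning_bench/semantic-segmentation/* \
   ./examples/robot/lifelong_learning_bench/
```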
After making this change and running the command, I encountered new exceptions:
Obviously, it was also a path issue. I then searched for `/ianvs` in the project folder and discovered that the `workspace` in `benchmarkingjob.yaml` and `benchmarkingjob-simple.yaml` needed to be reconfigured.
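A sketch of the kind of edit I mean, again as a here-doc so the YAML is visible (the `workspace` key is the one named above; the surrounding layout and the path are assumptions based on other ianvs examples):

```bash
cat <<'EOF'
benchmarkingjob:
  # point the workspace at a writable directory in your own checkout
  workspace: "/ianvs/project/ianvs/workspace"
EOF
```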
In the next stage, I encountered more path problems like the ones below:
After fixing these problems, I could run this project.
However, there still seem to be some bugs. For example, rank.py has something like
https://github.com/kubeedge/ianvs/blob/7ea4f4af57114ce3179cd0c0773a4254c5999715/core/storymanager/rank/rank.py#L178
which could cause an exception as below:
Finally, we could see the csv output after removing the prefix `pd`.
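As the replies quoted at the top of this thread point out, an alternative to editing rank.py is to pin a pandas version that still ships the `pd.np` alias:

```bash
# pd.np is a deprecated alias removed in newer pandas releases; this is the
# version suggested in the replies above
pip install pandas==1.1.5
```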
However, the output still seems to have some problems: it uses `,` as the separator, but the values of some columns (`algorithm`, `MATRIX`, `url`) can themselves contain `,`.

But in the end, we have accomplished the entire process of the example.
Advice
Overall, due to omissions in the documentation and hard-coded configuration in the code, running this project is not an easy thing. To address this, I recommend:

- Replacing `print` with `logger.info` for better monitoring.