
Fast-BEV code reproduction practice

posted on 2023-06-06 11:19


Fast-BEV code reproduction in practice: a professional pit-stepping record

Recently I have been studying BEV visual perception algorithms, so here is a record of reproducing the Fast-BEV code, pitfalls included. ^_^
The theory will not be introduced in detail here; for that, see the original authors' paper, Fast-BEV: A Fast and Strong Bird's-Eye View Perception Baseline. The theoretical explanations on CSDN and Zhihu are also quite detailed. Mainly, I am not that good at talking theory, so this post only covers the engineering reproduction. ^_^

1 Build the operating environment

1.1 conda and cuda

  1. Use conda to manage the Python environment; it is simply convenient. The miniconda download address and installation steps are omitted here.
  2. There are plenty of online guides for installing the graphics driver, CUDA, and cuDNN, so to save space they are assumed to be installed already.
  • If you are unsure about the version correspondence between the various libraries, just copy my homework: cuda-11.3.0, cudnn-8.6.0, python-3.8, torch-1.10.0, with the matching mmopenlab library versions listed below.
  • Switch conda and pip to domestic mirror sources (search Baidu for instructions) to avoid installation failures in mainland China.

1.2 Build a python virtual environment

  • If you are unsure which version of a library to use, just copy my homework and use the same versions as me; I won't repeat this point later.
  1. Create the Python environment.
    Use Python 3.8 or above; version 3.7 runs into problems with the later mmcv installation.
conda create -n fastbev python=3.8
  • Activate the new environment
conda activate fastbev
  2. Install torch

Check the correspondence between cuda and torch versions; I use torch-1.10.0:

pip install torch==1.10.0+cu113 torchvision==0.11.0+cu113 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html
  3. Install the mmcv-related libraries

Since fastbev has been out for a while, mmcv-full has since been updated to version 2.x.x and renamed to mmcv, but fastbev still uses mmcv-full.

  • Install the mmopenlab packages required by fastbev
# install mmcv-full (the terminal may appear to hang during install;
# it is not stuck, the download just takes a while -- be patient)
pip install mmcv-full==1.4.0

# install mmdet
pip install mmdet==2.14.0

# install mmsegmentation
pip install mmsegmentation==0.14.1

1.3 Download FASTBEV source code

  • Install the fastbev dependencies
# clone the fastbev project
git clone https://github.com/Sense-GVT/Fast-BEV.git
# activate the virtual environment
conda activate fastbev
# enter the Fast-BEV directory
cd Fast-BEV

# install the required dependencies
pip install -v -e .
# or "python setup.py develop"
  • Check the version numbers of the mmopenlab packages
python -c 'import mmcv;import mmdet;import mmdet3d;import mmseg;print(mmcv.__version__);print(mmdet.__version__);print(mmdet3d.__version__);print(mmseg.__version__)'
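If any of the four imports is missing, the one-liner above crashes on the first ImportError. A slightly more forgiving version (the helper name is mine, not from the Fast-BEV repo) reports each distribution's version, or flags it as missing:

```python
# Report the installed version of each distribution, or "not installed",
# instead of crashing on the first missing package.
from importlib import metadata

def collect_versions(packages):
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = "not installed"
    return versions

for pkg, ver in collect_versions(
    ["mmcv-full", "mmdet", "mmdet3d", "mmsegmentation"]
).items():
    print(f"{pkg}: {ver}")
```

Note that `importlib.metadata` takes distribution names (`mmcv-full`, `mmsegmentation`), not module names.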

The printed version numbers are the installed versions of mmcv, mmdet, mmdet3d, and mmseg:

(screenshot omitted)

Other dependent packages:

 pip install ipdb
 pip install timm

At this point, the basic environment is installed.

2 Prepare the dataset

2.1 Download the dataset

Only the nuscenes dataset is covered here; see the nuscenes download address.

Since the full nuscenes data is too large, only the mini version of nuscenes is tested here. Download the map expansion and the mini split by clicking US in the red box, as shown below.

  1. Download
    Click US; Asia works as well
    (screenshot omitted)

After downloading, you get 2 compressed files

(screenshot omitted)

  2. Unzip to the current directory
    Unzipping yields two directories, nuScenes-map-expansion-v1.3 and v1.0-mini. Copy the three files in nuScenes-map-expansion-v1.3 into the v1.0-mini/maps directory. The resulting v1.0-mini directory is the dataset used for training.
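The copy step can also be scripted. A minimal sketch, assuming both archives were unpacked into the current directory (the function name and the maps/ target directory are my reading of the standard nuScenes layout):

```python
# Copy every file from the map-expansion directory into v1.0-mini/maps,
# creating the target directory if needed.
import shutil
from pathlib import Path

def merge_map_expansion(expansion_dir, mini_dir):
    target = Path(mini_dir) / "maps"
    target.mkdir(parents=True, exist_ok=True)
    copied = []
    for item in sorted(Path(expansion_dir).iterdir()):
        if item.is_file():
            shutil.copy2(item, target / item.name)
            copied.append(item.name)
    return copied

# merge_map_expansion("nuScenes-map-expansion-v1.3", "v1.0-mini")
```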

(screenshot omitted)

2.2 Convert the dataset to the format FastBEV supports

Enter the Fast-BEV project directory, create a data directory, copy the v1.0-mini folder from above into ./Fast-BEV/data, and rename v1.0-mini to nuscenes. The directory structure is shown below:
(screenshot omitted)
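Before running the converter, it is worth sanity-checking the layout. A minimal sketch; the EXPECTED list is my summary of the standard nuScenes mini layout, not something the Fast-BEV repo ships:

```python
# Check that ./data/nuscenes contains the directories create_data.py expects.
from pathlib import Path

EXPECTED = ["maps", "samples", "sweeps", "v1.0-mini"]

def missing_parts(root):
    root = Path(root)
    return [name for name in EXPECTED if not (root / name).is_dir()]

# missing_parts("data/nuscenes") returns [] when the layout is complete
```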

Because the mini dataset is used, add the --version parameter when converting; the mini data does not provide a v1.0-test split.

If you use the full nuscenes dataset, the --version parameter is not needed.

  1. Run create_data.py
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes --workers 10 --version v1.0-mini

After execution, the file in the red box below is generated

(screenshot omitted)

  2. Run nuscenes_seq_converter.py
    Since the mini dataset has no test split, you need to modify nuscenes_seq_converter.py: find lines 15 and 20 of the code and modify them as follows:

(screenshot omitted)
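The gist of the edit, as a hypothetical reconstruction (the exact lines 15 and 20 are shown in the screenshot; this helper is mine, not the repo's): drop the test split when the version is v1.0-mini, since no v1.0-test data exists for it.

```python
# For v1.0-mini there is no test split, so only the train/val info files
# produced by create_data.py should be processed.
def splits_for(version):
    if version == "v1.0-mini":
        return ["train", "val"]
    return ["train", "val", "test"]

print(splits_for("v1.0-mini"))  # ['train', 'val']
```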

After modification, run

python tools/data_converter/nuscenes_seq_converter.py

This generates two files, nuscenes_infos_train_4d_interval3_max60.pkl and nuscenes_infos_val_4d_interval3_max60.pkl;
these are the dataset files required for training, as shown below:

(screenshot omitted)

3 Training

3.1 Training configuration

  1. Download the pretrained model

The download address requires a VPN ("magic internet access"). Three residual-network backbones, r18, r34 and r50, are generally provided; here I downloaded cascade_mask_rcnn_r18_fpn_coco-mstrain_3x_20e_nuim_bbox_mAP_0.5110_segm_mAP_0.4070.pth.

The backbone needs to be consistent with the configuration file, which also uses r18. After downloading, create a new pretrained_models directory and put the checkpoint in it, as shown below:

(screenshot omitted)

  2. Modify the configuration file
    If you do not modify the configuration file, the errors in section 3.2 may appear; after these modifications you can skip section 3.2.

Take the file configs/fastbev/exp/paper/fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4.py as an example (the other configuration files work too):

(screenshot omitted)

  • In the configuration file, change SyncBN to BN, and AdamW2 to Adam;

  • In the configuration file, uncomment line 146 and comment out lines 147-150, as shown below:

file_client_args = dict(backend='disk')
# file_client_args = dict(
# backend='petrel',
# path_mapping=dict({
# data_root: 'public-1424:s3://openmmlab/datasets/detection3d/nuscenes/'}))
  • Install setuptools version 58.0.4
conda install setuptools==58.0.4
  • In the configuration file (line 331), point the load_from parameter at the pretrained model downloaded in step 1. If you are unsure of the relative path, an absolute path works too; here I use a relative path:
load_from = 'pretrained_models/cascade_mask_rcnn_r18_fpn_coco-mstrain_3x_20e_nuim_bbox_mAP_0.5110_segm_mAP_0.4070.pth'
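Collected in one place, the config edits above look roughly like this (an illustrative fragment only; the key names follow common mmdet-style configs, and the exact lines and values may differ by commit):

```python
# 1) single-GPU training: SyncBN has no process group to sync across
norm_cfg = dict(type='BN', requires_grad=True)

# 2) 'AdamW2' is not a registered optimizer type; use Adam instead
#    (lr / weight_decay values here are placeholders, keep the config's own)
optimizer = dict(type='Adam', lr=1e-4, weight_decay=1e-2)

# 3) read data from local disk rather than the authors' petrel/s3 storage
file_client_args = dict(backend='disk')
```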
  3. Train
python tools/train.py configs/fastbev/exp/paper/fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4.py --work-dir work_dir --gpu-ids 0

Parameter description:

--gpu-ids 0   use GPU 0 (the first card); this machine has only one GPU
--work-dir    output directory for logs and other files
see the parse_args() function in train.py for the other parameters

When the content in the red box below appears in the terminal, training is running successfully. The training epochs, batch_size, and other parameters can be modified in the configuration file fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4.py.
(screenshot omitted)

3.2 Error reported during training

  • Error 1
    (screenshot omitted)

Solution 1: Click the line above the red box to jump to the offending file and comment out the code that uses distutils.

(screenshot omitted)

Solution 2:
AttributeError: module 'distutils' has no attribute 'version'

conda install setuptools==58.0.4
  • Error 2
    (screenshot omitted)

Uncomment line 146 and comment out lines 147-150.

  • Error 3
    (screenshot omitted)

Change SyncBN to BN.

  • Error 4
    (screenshot omitted)

Replace AdamW2 with Adam.

4 Testing

4.1 Test inference

Because my graphics card has limited video memory, the results of the model I trained myself are not ideal, so I test directly with the trained model provided by the original author (download link).

I used epoch_20.pth from fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4 (download address).

python tools/test.py configs/fastbev/exp/paper/fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4.py pretrained_models/epoch_20.pth --out output/result.pkl
  • --out must end with the .pkl suffix; it is used to save the detection results
  • --show should not be added; it raises an error, probably something the original author has not polished
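To check what --out produced, result.pkl can be inspected directly. Its exact structure is whatever mmdet3d's test script dumps, so this sketch only peeks at it generically:

```python
# Load a pickle file and summarize its top-level structure.
import pickle

def summarize_pkl(path):
    with open(path, "rb") as f:
        data = pickle.load(f)
    if isinstance(data, (list, tuple)):
        return f"{type(data).__name__} of {len(data)} items"
    return type(data).__name__

# print(summarize_pkl("output/result.pkl"))
```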

4.2 Visualization

Since the test script errors out when --show is added, tools/misc/visualize_results.py can instead render the result.pkl from the previous step into a video stream for display.

python tools/misc/visualize_results.py configs/fastbev/exp/paper/fastbev_m0_r18_s256x704_v200x200x4_c192_d2_f4.py --result output/result.pkl --show-dir show_dir
  • Error 1:

assert len(_list) == len(sort)

(screenshot omitted)

Solution: in Fast-BEV/mmdet3d/datasets/nuscenes_monocular_dataset.py, find line 192 and replace it with line 193, as shown:
(screenshot omitted)

  • Error 2:

(screenshot omitted)

Solution: install these 2 packages:

pip install imageio[ffmpeg] imageio[pyav]

Finally, run visualize_results.py again to generate two videos, video_pred.mp4 and video_gt.mp4:

(screenshot omitted)

One of the frames is visualized:

(screenshot omitted)

It turns out the m5-r18 model's results are not very good; in many frames it detects essentially nothing. You can try m5-r50's epoch_20.pth instead (download link).


Reference articles

Thanks to the original authors for their hard work, respect ^-^!!

Author's source code on github

Reference article 1

Reference article 2

If you run into problems during deployment, feel free to leave a message. You are also welcome to join BEV exchange QQ group 472648720 so we can learn BEV together!

If you think the article is good, a one-click triple (like, favorite, follow) is much appreciated, respect ^-^





Author: Fiee

Link: http://www.pythonblackhole.com/blog/article/80214/857bf4bfbb7ed459ff24/

Source: python black hole net
