Welcome to YOLOX-PAI! YOLOX-PAI is an incremental improvement of YOLOX built on PAI-EasyCV. We combine various existing detection techniques with PAI-Blade to boost performance, and we provide an efficient pipeline for end2end object detection.
In brief, our main contributions are:
- Investigate various detection techniques on top of YOLOX to achieve SOTA object detection results.
- Provide an easy way to use PAI-Blade to accelerate the inference process.
- Provide a convenient way to train/evaluate/export a YOLOX-PAI model and conduct end2end object detection.
For more details of YOLOX-PAI, you can refer to our technical report (arXiv paper).
To download the dataset, please refer to prepare_data.md.
YOLOX supports both the COCO format and the PAI-Itag detection format.

- To train with COCO format data, refer to configs/detection/yolox/yolox_s_8xb16_300e_coco.py for configuration details.
- To train with PAI-Itag detection format data, refer to configs/detection/yolox/yolox_s_8xb16_300e_coco_pai.py for configuration details.
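For reference, a typical COCO-style directory layout after following prepare_data.md looks like the sketch below (these are the standard COCO names; adjust the paths if your download differs):

```
data/coco/
├── annotations/
│   ├── instances_train2017.json
│   └── instances_val2017.json
├── train2017/
│   └── *.jpg
└── val2017/
    └── *.jpg
```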
You can follow quick_start.md for local installation, or use our provided docker images (for both training and inference):
```shell
sudo docker pull registry.cn-shanghai.aliyuncs.com/pai-ai-test/pai-easycv:yolox-pai

sudo nvidia-docker run -it -v path:path --name easycv_yolox_pai --shm-size=10g --network=host registry.cn-shanghai.aliyuncs.com/pai-ai-test/pai-easycv:yolox-pai
```
Single gpu:

```shell
python tools/train.py \
    ${CONFIG_PATH} \
    --work_dir ${WORK_DIR}
```

Multi gpus:

```shell
bash tools/dist_train.sh \
    ${NUM_GPUS} \
    ${CONFIG_PATH} \
    --work_dir ${WORK_DIR}
```
Arguments:

- `NUM_GPUS`: number of GPUs
- `CONFIG_PATH`: the config file path of a detection method
- `WORK_DIR`: your path to save models and logs
Examples:

Edit the `data_root` path in the `${CONFIG_PATH}` to your own data path, then run:

```shell
GPUS=8 bash tools/dist_train.sh configs/detection/yolox/yolox_s_8xb16_300e_coco.py $GPUS
```
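For a single-GPU run on the same config, the equivalent command would be the sketch below (`work_dirs/detection/yolox` is an assumed output directory, not a required path):

```shell
python tools/train.py configs/detection/yolox/yolox_s_8xb16_300e_coco.py \
    --work_dir work_dirs/detection/yolox
```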
The pretrained model of YOLOX-PAI can be found here.
Single gpu:

```shell
python tools/eval.py \
    ${CONFIG_PATH} \
    ${CHECKPOINT} \
    --eval
```

Multi gpus:

```shell
bash tools/dist_test.sh \
    ${CONFIG_PATH} \
    ${NUM_GPUS} \
    ${CHECKPOINT} \
    --eval
```
Arguments:

- `CONFIG_PATH`: the config file path of a detection method
- `NUM_GPUS`: number of GPUs
- `CHECKPOINT`: the checkpoint file named as epoch_*.pth
Examples:

```shell
GPUS=8 bash tools/dist_test.sh configs/detection/yolox/yolox_s_8xb16_300e_coco.py $GPUS work_dirs/detection/yolox/epoch_300.pth --eval
```
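The single-GPU counterpart, assuming the same config and checkpoint as above, would be:

```shell
python tools/eval.py configs/detection/yolox/yolox_s_8xb16_300e_coco.py \
    work_dirs/detection/yolox/epoch_300.pth \
    --eval
```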
```shell
python tools/export.py \
    ${CONFIG_PATH} \
    ${CHECKPOINT} \
    ${EXPORT_PATH}
```
For more details of the export process, you can refer to export.md.
Arguments:

- `CONFIG_PATH`: the config file path of a detection method
- `CHECKPOINT`: your checkpoint file of a detection method named as epoch_*.pth
- `EXPORT_PATH`: your path to save the exported model
Examples:

```shell
python tools/export.py configs/detection/yolox/yolox_s_8xb16_300e_coco.py \
    work_dirs/detection/yolox/epoch_300.pth \
    work_dirs/detection/yolox/epoch_300_export.pth
```
Download the exported models (preprocess, model, meta) or export your own model, and arrange them as follows:
```
export_blade/
├── epoch_300_pre_notrt.pt.blade
├── epoch_300_pre_notrt.pt.blade.config.json
└── epoch_300_pre_notrt.pt.preprocess
```
Download the test_image, then run:
```python
import cv2

from easycv.predictors import TorchYoloXPredictor

output_ckpt = 'export_blade/epoch_300_pre_notrt.pt.blade'
detector = TorchYoloXPredictor(output_ckpt, use_trt_efficientnms=False)

img = cv2.imread('000000017627.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
output = detector.predict([img])
print(output)

# visualize image
image = img.copy()
for box, cls_name in zip(output[0]['detection_boxes'],
                         output[0]['detection_class_names']):
    # box is [x1, y1, x2, y2]
    box = [int(b) for b in box]
    image = cv2.rectangle(image, tuple(box[:2]), tuple(box[2:4]), (0, 255, 0), 2)
    cv2.putText(image, cls_name, (box[0], box[1] - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 2)
cv2.imwrite('result.jpg', image)
```
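If you only need high-confidence detections, you can filter the prediction dict by score. A minimal sketch, assuming the result also carries a `detection_scores` key alongside the keys used above (verify against the printed output):

```python
# Keep only detections above a score threshold.
# NOTE: 'detection_scores' is an assumed key; check print(output) for the exact name.
SCORE_THR = 0.5

result = output[0]
keep = [i for i, s in enumerate(result.get('detection_scores', [])) if s >= SCORE_THR]
boxes = [result['detection_boxes'][i] for i in keep]
names = [result['detection_class_names'][i] for i in keep]
print('%d detections above %.2f:' % (len(keep), SCORE_THR), list(zip(names, boxes)))
```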