To avoid dependency conflicts, it is recommended to create a dedicated Anaconda environment, following the official installation guide:
https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/INSTALL.md
Installation requirements and steps; execute the following directly:
conda create --name maskrcnn_benchmark
conda activate maskrcnn_benchmark
conda install ipython
pip install ninja yacs cython matplotlib tqdm
# This step is not recommended; it is better to install PyTorch from an offline package (see the note below)
conda install -c pytorch pytorch-nightly torchvision cudatoolkit=9.0
export INSTALL_DIR=$PWD
cd $INSTALL_DIR
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
python setup.py build_ext install
cd $INSTALL_DIR
git clone https://github.com/facebookresearch/maskrcnn-benchmark.git
cd maskrcnn-benchmark
python setup.py build develop
unset INSTALL_DIR
(Installing PyTorch 1.0 this way is very slow. Testing shows the author's nightly build is not required; installing stable 1.0 directly from an offline package works fine.)
Note: after this build step, maskrcnn_benchmark is installed as a pip package that can be imported. If the compilation fails, `from maskrcnn_benchmark.config import cfg` will raise an error because the library files cannot be found. After a successful install, the package shows up in `conda list`.
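A quick way to confirm the build succeeded is to check whether the package is importable. A minimal sketch (`maskrcnn_benchmark` only resolves after a successful `python setup.py build develop`):

```python
import importlib.util

def is_installed(module_name):
    """Return True if the module can be located on the current Python path."""
    return importlib.util.find_spec(module_name) is not None

# Should print True after a successful build; False means the compile/install failed
print(is_installed("maskrcnn_benchmark"))
```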
The author integrated the weight files into the code, and they are downloaded automatically (the download is reasonably fast), so you can use them directly. When running the demo, it checks whether the weight file already exists at the expected location and downloads it if not. The download path is easy to miss: it is buried inside a hidden torch folder.
cd into that folder to view the downloaded ResNet-50 and ResNet-101 models and the network weights:
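For example (the exact cache location is an assumption and varies by PyTorch version; `~/.torch/models` was the model-zoo default in this era, overridable via the `TORCH_MODEL_ZOO` environment variable):

```shell
# Default model-zoo cache; falls back gracefully if nothing has been downloaded yet
CACHE_DIR="${TORCH_MODEL_ZOO:-$HOME/.torch/models}"
ls -lh "$CACHE_DIR" 2>/dev/null || echo "no cached weights in $CACHE_DIR yet"
```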
Two ways to run inference with the pretrained weights
The first: enter the demo folder and run directly:
python webcam.py
The second: the original documentation is not very clear here, so make a small change and add a file yourself. Create a new demo.py with the following contents:
from maskrcnn_benchmark.config import cfg
from predictor import COCODemo
import cv2

# Use a config under configs/caffe2 so the matching pretrained weights are downloaded
config_file = "../configs/caffe2/e2e_mask_rcnn_R_101_FPN_1x_caffe2.yaml"
cfg.merge_from_file(config_file)
# Run on CPU; remove this line (or set "cuda") to use the GPU
cfg.merge_from_list(["MODEL.DEVICE", "cpu"])

coco_demo = COCODemo(
    cfg,
    min_image_size=800,
    confidence_threshold=0.7,
)

image = cv2.imread('/py/pic/2.jpg')  # change this to your own image path
predictions = coco_demo.run_on_opencv_image(image)
cv2.imshow("COCO detections", predictions)
cv2.waitKey(0)
Just change the image path. The config here uses ResNet-101; if GPU memory is insufficient, switch to the ResNet-50 version (it only needs about 700 MB, which is very small). Note: use the configs under the caffe2 directory. The configs outside it download pretraining weights that are intended as starting points for training and will not produce detections.
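To drop to the lighter backbone, only the config path in demo.py needs to change (the R-50 filename below follows the repo's naming scheme; verify it exists under configs/caffe2/):

```python
# ResNet-50 config: same pipeline, much smaller GPU memory footprint
config_file = "../configs/caffe2/e2e_mask_rcnn_R_50_FPN_1x_caffe2.yaml"
```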
Just run:
python demo.py
Pretrained models provided on the PyTorch official site: resnet18 (resnet18-5c106cde.pth) and resnet50 (resnet50-19c8e357.pth) (the two files are packaged together)