Fast R-CNN
It improved the training procedure by unifying three independent models into one jointly trained framework and by sharing computation across region proposals.
1. Model Workflow

How Fast R-CNN works:

1. Pre-train a convolutional neural network on image classification tasks.
2. Propose regions by selective search (~2k candidates per image).
3. Modify the pre-trained CNN:
   - Replace the last max pooling layer of the pre-trained CNN with a RoI pooling layer. The RoI pooling layer outputs fixed-length feature vectors of region proposals.
   - Replace the last fully connected layer and the last softmax layer (K classes) with a fully connected layer and a softmax over K + 1 classes.
4. Finally, the model branches into two output layers (see the architecture sketch after this list):
   - A softmax estimator of K + 1 classes (same as in R-CNN, +1 is the "background" class), outputting a discrete probability distribution per RoI.
   - A bounding-box regression model which predicts offsets relative to the original RoI for each of the K classes.
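As a concrete illustration, below is a minimal sketch of this architecture, assuming PyTorch/torchvision. The VGG-16 backbone, the 7x7 RoI pooling output, and the default of K = 20 classes are illustrative choices (common for PASCAL VOC), not a faithful reproduction of the released implementation.

```python
# Minimal Fast R-CNN-style network sketch (PyTorch/torchvision assumed).
import torch
import torch.nn as nn
import torchvision
from torchvision.ops import RoIPool


class FastRCNNSketch(nn.Module):
    def __init__(self, num_classes=20):
        super().__init__()
        # Pre-trained CNN: keep the conv layers, drop the last max pooling layer.
        vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1")
        self.backbone = nn.Sequential(*list(vgg.features[:-1]))
        # RoI pooling replaces the dropped max pooling layer and outputs a
        # fixed 7x7 window per region proposal (1/16 is VGG-16's feature stride).
        self.roi_pool = RoIPool(output_size=(7, 7), spatial_scale=1.0 / 16)
        self.fc = nn.Sequential(
            nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
        )
        # Two sibling output layers: scores over K+1 classes (+ background)
        # and per-class bounding-box offsets (4 values for each of K classes).
        self.cls_score = nn.Linear(4096, num_classes + 1)
        self.bbox_pred = nn.Linear(4096, num_classes * 4)

    def forward(self, images, rois):
        # rois: Tensor[R, 5] with (batch_index, x1, y1, x2, y2) in image coords.
        features = self.backbone(images)
        pooled = self.roi_pool(features, rois)        # [R, 512, 7, 7]
        x = self.fc(pooled.flatten(start_dim=1))      # [R, 4096]
        return self.cls_score(x), self.bbox_pred(x)   # class logits, box offsets


model = FastRCNNSketch()
imgs = torch.randn(1, 3, 600, 800)
rois = torch.tensor([[0, 10.0, 10.0, 200.0, 200.0]])  # one region proposal
cls_logits, box_deltas = model(imgs, rois)
```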
2. RoI pooling layer

RoI pooling converts the features inside a region of the image of any size, $h \times w$, into a small fixed window, $H \times W$. The input region is divided into an $H \times W$ grid of sub-windows, each of size approximately $h/H \times w/W$, and max pooling is applied within each sub-window.
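To make the grid max pooling concrete, here is a naive NumPy sketch. It assumes a single feature map `feat` of shape (C, h, w) already cropped to the region; the bin-boundary rounding is one simple choice, not necessarily the exact scheme used in the paper's implementation.

```python
# Naive RoI pooling over one region's feature map (NumPy assumed).
import numpy as np

def roi_pool_naive(feat, H=7, W=7):
    C, h, w = feat.shape
    out = np.zeros((C, H, W), dtype=feat.dtype)
    for i in range(H):
        for j in range(W):
            # Sub-window of size roughly h/H x w/W.
            y0, y1 = int(np.floor(i * h / H)), int(np.ceil((i + 1) * h / H))
            x0, x1 = int(np.floor(j * w / W)), int(np.ceil((j + 1) * w / W))
            out[:, i, j] = feat[:, y0:y1, x0:x1].max(axis=(1, 2))
    return out

fixed = roi_pool_naive(np.random.rand(512, 23, 31))  # -> shape (512, 7, 7)
```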
3. Multi-task loss function (classification + regression)
The overall loss function sums the classification and bounding-box losses:

$$\mathcal{L}(p, u, t^u, v) = \mathcal{L}_\text{cls}(p, u) + \mathbb{1}[u \geq 1]\, \mathcal{L}_\text{box}(t^u, v)$$

$$\mathcal{L}_\text{cls}(p, u) = -\log p_u$$

$$\mathcal{L}_\text{box}(t^u, v) = \sum_{i \in \{x, y, w, h\}} L_1^\text{smooth}(t_i^u - v_i)$$

The indicator $\mathbb{1}[u \geq 1]$ switches off the box loss for background RoIs. Here:

- $u$ is the true class label, $u \in \{0, 1, \dots, K\}$; by convention, the catch-all background class has $u = 0$.
- $p = (p_0, \dots, p_K)$ is the discrete probability distribution (per RoI) over the K + 1 classes, computed by a softmax over the K + 1 outputs of a fully connected layer.
- $v = (v_x, v_y, v_w, v_h)$ is the true bounding box.
- $t^u = (t^u_x, t^u_y, t^u_w, t^u_h)$ is the predicted bounding box correction for class $u$.
The bounding box loss $\mathcal{L}_\text{box}$ measures the difference between $t^u_i$ and $v_i$ using the robust smooth L1 loss below, which is less sensitive to outliers than an L2 loss:

$$L_1^\text{smooth}(x) = \begin{cases} 0.5 x^2 & \text{if } |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}$$
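For illustration, a hedged PyTorch sketch of this multi-task loss follows. The helper name `fast_rcnn_loss`, the K = 20 default, and the layout of `box_deltas` (4 offsets per foreground class, matching the architecture sketch in section 1) are assumptions; RoI sampling and regression-target encoding are omitted.

```python
# Sketch of the Fast R-CNN multi-task loss (PyTorch assumed).
import torch
import torch.nn.functional as F

def fast_rcnn_loss(cls_logits, box_deltas, labels, box_targets, num_classes=20):
    # cls_logits: [R, K+1], box_deltas: [R, 4*K], labels: [R] in {0..K},
    # box_targets: [R, 4] regression targets v for each RoI's true class.
    cls_loss = F.cross_entropy(cls_logits, labels)   # -log p_u, averaged over RoIs

    fg = labels > 0                                   # indicator 1[u >= 1]
    if fg.any():
        # Select the 4 offsets predicted for each foreground RoI's true class u.
        deltas = box_deltas.view(-1, num_classes, 4)[fg, labels[fg] - 1]
        # smooth_l1_loss with the default beta=1.0 matches the definition above.
        box_loss = F.smooth_l1_loss(deltas, box_targets[fg], reduction="mean")
    else:
        box_loss = box_deltas.sum() * 0.0             # all-background minibatch

    return cls_loss + box_loss
```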
4. Speed Bottleneck
Fast R-CNN is much faster than R-CNN in both training and testing time.
However, the improvement is not dramatic, because the region proposals are still generated separately by another model (selective search), and that step is very expensive.