Results and Analysis — "Training from scratch to match accuracy" (p. 5). Despite Mask R-CNN's reputed strength with fine-tuning*, fine-tuning does work well in that pre-trained models converge to near-optimum quickly; yet models trained from scratch can catch up with their fine-tuned counterparts on the standard COCO training set. ImageNet pre-training mainly helps speed up convergence early in training, but shows little or no evidence of improving the final detection accuracy. (See Experiments** below.)
It's interesting how training improves sharply the moment the LR is dropped (see the sketch below).
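That jump lines up with the step LR schedule typically used for COCO training. A minimal sketch, assuming PyTorch's `MultiStepLR`; the milestone values are the conventional Detectron-style 1x numbers (decay at 60k and 80k of 90k iterations), an assumption here, not something stated in this note:

```python
import torch
from torch import nn

# Stand-in model and optimizer; a real run would use the detector itself.
model = nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.02, momentum=0.9)

# Step schedule: multiply the LR by 0.1 at each milestone.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[60_000, 80_000], gamma=0.1)

prev = scheduler.get_last_lr()[0]
for it in range(90_000):
    # train_one_iter(...)  # forward/backward/optimizer.step() would go here
    scheduler.step()
    cur = scheduler.get_last_lr()[0]
    if cur != prev:
        # Prints at the two milestones: 0.02 -> 0.002 -> 0.0002
        print(f"iter {it}: lr {prev:g} -> {cur:g}")
        prev = cur
```

Each 10x drop is exactly where the sudden dip in loss (and jump in AP) shows up on the training curves.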
*Abstract (p. 1): the results are no worse than their ImageNet pre-training counterparts, even when using the hyper-parameters of the baseline system (and this even though Mask R-CNN is a system tuned for fine-tuning).
Experiments**
- COCO train: 118K images
- COCO val: 5K images
- Metrics: bbox AP and instance segmentation (mask) AP — evaluation sketch below
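For reference, a minimal sketch of how these two APs are computed with pycocotools; the annotation and result file paths are placeholders, not from this note:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground truth for the 5K val split and the model's predictions
# (placeholder file names, assumed to exist locally).
coco_gt = COCO("annotations/instances_val2017.json")
coco_dt = coco_gt.loadRes("detections_val2017.json")

# Box AP and mask AP use the same COCOeval machinery; only iouType changes.
for iou_type in ("bbox", "segm"):
    ev = COCOeval(coco_gt, coco_dt, iouType=iou_type)
    ev.evaluate()
    ev.accumulate()
    ev.summarize()  # prints AP, AP50, AP75, and size-bucket APs
```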


