Scene Text Extraction using Convolutional Neural Network with Amended MSER
Abstract
Text content helps communicate relevant and specific information to users precisely. A beneficial approach for extracting text from natural scene images is introduced, which employs an amended Maximally Stable Extremal Region (a-MSER) detector together with a deep learning framework, the You Only Look Once (YOLOv2) network. The proposed system, a-MSER with Scene Text Extraction using Modified YOLOv2 Network (STEMYN), performs remarkably well when evaluated on three publicly available datasets. The a-MSER method, a variation of MSER, is used to identify regions of interest and handles intensity changes between text and background very effectively. The drawback of the original YOLOv2, its poor detection rate for small objects, is overcome by employing a 1 × 1 convolution layer and enlarging the feature map from 13 × 13 to 26 × 26. Focal loss is applied to improve upon the existing cross-entropy classification loss of YOLOv2. The repeated convolution layer in the deeper part of the original YOLOv2 is removed to reduce network complexity, as it does not improve system performance. Experimental results demonstrate that the proposed method is effective in identifying text in natural scene images.
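As a rough illustration of the MSER-based region-proposal step, the sketch below uses OpenCV's stock MSER detector; the paper's specific a-MSER amendment (how the stability parameters are adapted to text/background contrast) is not given in the abstract, so the parameter values and the helper name propose_text_regions are illustrative assumptions only.

```python
# Minimal sketch of MSER-based region-of-interest proposal for scene text.
# Assumes OpenCV (cv2); the delta / area / variation values below are
# illustrative, not the values used by the a-MSER method in the paper.
import cv2

def propose_text_regions(image_path, delta=5, max_variation=0.25):
    """Return candidate bounding boxes (x, y, w, h) from MSER on a grayscale image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    mser = cv2.MSER_create(delta=delta,                  # intensity step between thresholds
                           min_area=60,                  # drop tiny noise regions
                           max_area=14400,               # drop very large regions
                           max_variation=max_variation)  # stability tolerance of a region
    regions, bboxes = mser.detectRegions(gray)
    # These candidate boxes would then be passed to the detection network.
    return bboxes
```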
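For reference, the focal loss that the abstract says replaces YOLOv2's cross-entropy classification loss is commonly written as follows; the weighting factor \(\alpha_t\) and focusing parameter \(\gamma\) are the usual hyper-parameters, and the values chosen in this work are not stated in the abstract.

```latex
\mathrm{FL}(p_t) = -\,\alpha_t\,(1 - p_t)^{\gamma}\,\log(p_t),
\qquad
p_t =
\begin{cases}
p, & \text{if } y = 1,\\
1 - p, & \text{otherwise,}
\end{cases}
```

where \(p\) is the predicted class probability; for \(\gamma = 0\) and \(\alpha_t = 1\) this reduces to the standard cross-entropy loss, while larger \(\gamma\) down-weights easy, well-classified examples.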
Keyword(s)
Convolution layer, Deep learning framework, Focal loss, Maximally stable extremal regions, YOLOv2