Abstract
The Deep Residual Network in Network (DrNIN) model [18] is an important extension of the convolutional neural network (CNN) and has proven capable of scaling to dozens of layers. This model replaces the linear convolution filter with a nonlinear function, implemented as multilayer perceptron (MLP) layers [23]. Increasing the depth of a DrNIN can improve classification and detection accuracy; however, the deeper model becomes harder to train, training slows down, and the problem of diminishing feature reuse arises. To address these issues, we conduct a detailed experimental study of the architecture of DrMLPconv blocks and, based on it, present a wider variant of DrNIN: we increase the width of the DrNIN and decrease its depth, and call the resulting model WDrNIN. Through an experimental study on the CIFAR-10 dataset, we show that WDrNIN models can gain accuracy through increased width. Moreover, we demonstrate that even a single WDrNIN outperforms all MLPconv-based network models in accuracy and efficiency, with WDrNIN-4-2 reaching 93.553% accuracy.
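As a rough sketch of the idea behind WDrNIN (not the authors' exact architecture), a widened residual MLPconv-style block can be expressed in PyTorch: a spatial convolution is followed by 1x1 convolutions that act as a per-pixel MLP, a residual shortcut connects input to output, and a widening factor k multiplies the channel count. The class name WideMLPConvBlock, the parameter k, and the stage layout below are illustrative assumptions.

# A minimal sketch, assuming PyTorch. The structure (conv -> 1x1-conv "MLP"
# with a residual shortcut, widened by a factor k) illustrates the concept,
# not the paper's exact WDrNIN design.
import torch
import torch.nn as nn

class WideMLPConvBlock(nn.Module):  # hypothetical name
    def __init__(self, in_channels, base_channels, k=2, stride=1):
        super().__init__()
        out_channels = base_channels * k  # widening factor k
        self.body = nn.Sequential(
            # spatial convolution (the "linear filter" of a plain CNN)
            nn.Conv2d(in_channels, out_channels, 3, stride=stride,
                      padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            # 1x1 convolutions act as the per-pixel MLP of an MLPconv layer
            nn.Conv2d(out_channels, out_channels, 1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 1, bias=False),
            nn.BatchNorm2d(out_channels),
        )
        # projection shortcut when the shape changes, identity otherwise
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Conv2d(in_channels, out_channels, 1,
                                      stride=stride, bias=False)
        else:
            self.shortcut = nn.Identity()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # residual connection eases optimization of deep stacks
        return self.relu(self.body(x) + self.shortcut(x))

# e.g. a WDrNIN-4-2-style stage: 4 blocks widened by a factor of 2
stage = nn.Sequential(
    WideMLPConvBlock(16, 16, k=2),
    *[WideMLPConvBlock(32, 16, k=2) for _ in range(3)],
)
x = torch.randn(1, 16, 32, 32)  # CIFAR-10-sized feature map
print(stage(x).shape)  # torch.Size([1, 32, 32, 32])

Widening rather than deepening keeps the effective path through the network short (helping gradient flow and feature reuse) while the extra channels restore representational capacity, which is the trade-off the abstract describes.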
Details
1 Monastir University, Faculty of Sciences of Monastir, Laboratory of Electronics and Microelectronics, LR99ES30, Monastir, Tunisia (GRID:grid.411838.7) (ISNI:0000 0004 0593 5040)
2 Monastir University, Faculty of Sciences of Monastir, Laboratory of Electronics and Microelectronics, LR99ES30, Monastir, Tunisia (GRID:grid.411838.7) (ISNI:0000 0004 0593 5040); Sousse University, Higher Institute of Applied Sciences and Technology of Sousse, Sousse, Tunisia (GRID:grid.7900.e) (ISNI:0000 0001 2114 4570)