Image-Dehazing Method Based on the Fusion Coding of Contours and Colors

The dehazing of images captured in fog is a hot topic in computer-vision research. Unlike dehazing methods that rely on an atmospheric scattering model, the method proposed here is based on the fusion coding of contours and colors. It simulates the characteristics of visual perception in blurred scenes and balances the amount of color information in foggy images by actively fusing contour features, thus preventing issues that often arise in dehazing, such as color distortion and halo effects. First, the method constructs a contour feature extractor that extracts the contour features of the image and enhances their weight in the feature coding; it then constructs a low-level feature-coding region that extracts colors and fuses them with the contours. Next, an advanced semantic coding region built from dilated-convolution residual blocks analyzes the deep semantic information propagated through the network. Finally, after fusing the output of the low-level feature coding with the intermediate and final outputs of the advanced semantic coding, the method decodes the contours and colors through several layers of a convolutional neural network. A discriminator network composed of several convolutional layers distinguishes the dehazing results from the sample labels, which strengthens the generative network's dehazing ability while improving the discriminator's ability to discriminate. Synthetic and natural foggy images are chosen as experimental objects, and the results of the proposed method are compared with those of currently available methods. The results show that the method produces good dehazing results, is robust, avoids color distortion and halo artifacts, and yields dehazed images with natural saturation and sharpness, offering new potential for the study of image dehazing.
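The abstract names dilated-convolution residual blocks as the building unit of the advanced semantic coding region but gives no implementation details. The sketch below is an illustrative, framework-free NumPy rendering of that unit only: a naive single-channel 2D dilated convolution and a residual block wrapping it. The function names, the "same" padding, the ReLU on the branch, and the single-channel simplification are all assumptions for illustration, not the paper's actual network.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Naive 'same'-padded single-channel 2D dilated convolution.

    With dilation d, a k x k kernel spans (k - 1) * d + 1 pixels, so the
    receptive field grows without adding parameters -- the property a
    dilated-convolution coding region exploits to capture wider context.
    """
    k = kernel.shape[0]
    span = (k - 1) * dilation          # spatial extent of the dilated kernel
    pad = span // 2                    # zero-pad so output size matches input
    xp = np.pad(x, pad, mode="constant")
    h, w = x.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            # Sample the padded input at stride `dilation` under the kernel.
            patch = xp[i:i + span + 1:dilation, j:j + span + 1:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

def dilated_residual_block(x, kernel, dilation=2):
    """Residual unit: identity shortcut plus a ReLU-activated dilated
    convolution branch (an assumed, simplified form of the block)."""
    return x + np.maximum(dilated_conv2d(x, kernel, dilation), 0.0)
```

As a sanity check, convolving with a 3x3 kernel whose only nonzero entry is the center returns the input unchanged at any dilation, and the residual block then doubles a nonnegative input.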