The Artificial Intelligence and Machine Vision (AIMV) Lab was established in 2017 to conduct AI-related research and applications, with a focus on computer vision, image/video processing, pattern recognition, robotics, computational intelligence, and machine learning. The AIMV Lab currently comprises 6 faculty members, 2 postdoctoral researchers, 12 PhD candidates, and more than 50 master's students, forming an interdisciplinary team with researchers from mathematics, computer engineering, electronic and control engineering, and mechanical engineering. We have undertaken more than 10 projects sponsored by the National Natural Science Foundation of China, the Department of Science & Technology of Hubei Province, and the Department of Education of Hubei Province, as well as industrial projects. We have published more than 400 papers in premium journals and conferences such as IEEE Transactions on Image Processing, IEEE Transactions on Circuits and Systems for Video Technology, IEEE Transactions on Cybernetics, the IEEE Conference on Computer Vision and Pattern Recognition, and the IEEE International Conference on Computer Vision. More than 20 Chinese patents have been granted.
Neural Augmentation Based Saturation Restoration for LDR Images of HDR Scenes
A low dynamic range (LDR) image captured from a high dynamic range (HDR) scene contains shadow and highlight regions, and restoring these saturated regions is an ill-posed problem. In this paper, the saturated regions of the LDR image are restored by fusing model-based and data-driven approaches. With such a neural augmentation, two synthetic LDR images are first generated from the underlying LDR image via the new model-based approach...
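As a rough illustration of the ingredients involved, the sketch below (not the paper's actual model) marks clipped shadow and highlight pixels in an LDR image and synthesizes two re-exposed LDR images with a simple gamma-style exposure adjustment; the thresholds and gamma values are illustrative only.

```python
# A minimal sketch, assuming a float LDR image in [0, 1]; not the paper's model.
import numpy as np

def saturation_masks(ldr, low=0.02, high=0.98):
    """Return boolean masks of shadow- and highlight-saturated pixels."""
    lum = ldr.mean(axis=2)                          # ldr: HxWx3 float in [0, 1]
    return lum <= low, lum >= high

def synthetic_exposures(ldr, gamma_up=0.5, gamma_down=2.0):
    """Generate brighter / darker synthetic LDR images from one input."""
    brighter = np.clip(ldr ** gamma_up, 0.0, 1.0)   # lifts shadow detail
    darker = np.clip(ldr ** gamma_down, 0.0, 1.0)   # recovers highlight detail
    return brighter, darker
```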
Dynamic-Clustering Extreme Intensity Prior based Blind Image Deblurring
In blind image deblurring, feasible solutions have been obtained by exploiting image priors such as the dark channel prior, the extreme channel prior, and the local minimal intensity prior. Performance depends heavily on these priors, which may adapt poorly to different image contents in real-world applications. For example, these priors only consider the changes of local minimal and maximal intensity pixels during the blurring process and ignore the differences between these changes...
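For readers unfamiliar with these priors, the sketch below computes the dark and bright channels on which the dark and extreme channel priors are built; the patch size is an illustrative choice, and the paper's dynamic-clustering prior itself is not reproduced.

```python
# A minimal sketch of the dark/bright channel computation, assuming an
# HxWx3 float image in [0, 1]; the patch size is illustrative.
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels and a local patch (dark channel prior)."""
    min_rgb = img.min(axis=2)
    return minimum_filter(min_rgb, size=patch)

def bright_channel(img, patch=15):
    """Per-pixel maximum over color channels and a local patch."""
    max_rgb = img.max(axis=2)
    return maximum_filter(max_rgb, size=patch)

# The extreme channel prior combines both: sharp images tend to have near-zero
# dark channels and near-one bright channels, while blur pushes both toward the middle.
```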
Part Aware Contrastive Learning for Self-Supervised Action Recognition
In recent years, remarkable results have been achieved in self-supervised action recognition using skeleton sequences with contrastive learning. It has been observed that the semantic distinction of human action features is often represented by local body parts, such as legs or hands, which are advantageous for skeleton-based action recognition. This paper proposes an attention-based contrastive learning framework for skeleton representation learning, called SkeAttnCLR...
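As background, the sketch below shows the NT-Xent (InfoNCE) loss that contrastive frameworks of this kind typically build on; SkeAttnCLR's part-aware attention mechanism is not reproduced here, and the temperature value is an illustrative choice.

```python
# A minimal sketch of an NT-Xent contrastive loss over paired skeleton embeddings.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two augmented views of the same skeleton clips."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                        # (2N, D)
    sim = z @ z.t() / temperature                         # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                 # ignore self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                  # pull positives together
```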
Efficient Robust Principal Component Analysis via Block Krylov Iteration and CUR Decomposition
Robust principal component analysis (RPCA) is widely studied in computer vision. Recently, an adaptive-rank-estimate-based RPCA has achieved top performance in low-level vision tasks without a prior rank, but both the rank estimation and the RPCA optimization algorithm involve singular value decomposition (SVD), which requires enormous computational resources for large-scale matrices. To address these issues, an efficient RPCA (eRPCA) algorithm based on block Krylov iteration and CUR decomposition is proposed in this paper...
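To illustrate why block Krylov iteration avoids a full SVD, the sketch below computes a rank-r approximation from a small randomized Krylov subspace; the complete eRPCA solver and its CUR step are not shown, and the iteration count is illustrative.

```python
# A minimal sketch of a randomized block Krylov low-rank approximation.
import numpy as np

def block_krylov_lowrank(M, r, q=3, rng=np.random.default_rng(0)):
    """Approximate the top-r singular factors of M without a full SVD."""
    n = M.shape[1]
    G = rng.standard_normal((n, r))            # random starting block
    blocks, B = [], M @ G
    for _ in range(q + 1):
        blocks.append(B)
        B = M @ (M.T @ B)                      # one Krylov step: (M M^T) B
    K = np.hstack(blocks)                      # block Krylov subspace
    Q, _ = np.linalg.qr(K)                     # orthonormal basis of the subspace
    U_small, s, Vt = np.linalg.svd(Q.T @ M, full_matrices=False)
    U = Q @ U_small[:, :r]
    return U, s[:r], Vt[:r]                    # M ≈ U @ np.diag(s) @ Vt
```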
Self-Calibrating Gaze Estimation with Optical Axes Projection for Head-Mounted Eye Tracking
Gaze estimation suffers from burdensome personal calibration or complex all-device calibration. Self-calibrating methods can meet this challenge but depend on the scene and sacrifice accuracy. We propose a flexible and accurate gaze estimation approach that is calibrated implicitly with potential gaze patterns. By constructing an optical axis projection (OAP) plane and a visual axis projection (VAP) plane simultaneously ...
A Hybrid Method for Implicit Intention Inference Based on Punished-Weighted Naïve Bayes
Gaze-based implicit intention inference provides a new form of human-robot interaction that enables people with disabilities to accomplish activities of daily living independently. Existing gaze-based intention inference is mainly implemented by data-driven methods without prior object information in intention expression, which yields low inference accuracy. To improve the inference accuracy, we propose a gaze-based hybrid method that integrates model-driven and data-driven intention inference tailored to disability ...
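As a simplified illustration of the data-driven component, the sketch below performs weighted naive Bayes inference over discretized gaze features; the paper's punished-weighting scheme and object-prior model are not reproduced, and the per-feature weights here are placeholders.

```python
# A minimal sketch of weighted naive Bayes inference; tables and weights are
# illustrative placeholders, not the paper's learned quantities.
import numpy as np

def weighted_naive_bayes(priors, likelihoods, observation, w):
    """
    priors: (K,) prior P(intention_k); likelihoods: list over features, each a
    (K, V_f) table P(feature_f = v | intention_k); observation: list of feature
    values; w: (F,) per-feature weights applied as exponents on the likelihoods.
    """
    log_post = np.log(priors)
    for f, v in enumerate(observation):
        log_post += w[f] * np.log(likelihoods[f][:, v] + 1e-12)
    log_post -= log_post.max()                 # for numerical stability
    post = np.exp(log_post)
    return post / post.sum()                   # P(intention | gaze observation)
```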
Dual-Scale Single Image Dehazing via Neural Augmentation
Model-based single image dehazing algorithms restore haze-free images with sharp edges and rich details for real-world hazy images, at the expense of low PSNR and SSIM values for synthetic hazy images. Data-driven ones restore haze-free images with high PSNR and SSIM values for synthetic hazy images, but with low contrast and even some remaining haze for real-world hazy images. In this paper, a novel single image dehazing algorithm is introduced by combining model-based and data-driven ...
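For context, model-based dehazing builds on the atmospheric scattering model I = J·t + A·(1 − t). The sketch below simply inverts this model given a transmission map and an airlight estimate; the paper's estimation steps and neural-augmentation fusion are not shown.

```python
# A minimal sketch that inverts the atmospheric scattering model, assuming the
# transmission map and airlight have already been estimated elsewhere.
import numpy as np

def recover_scene_radiance(hazy, transmission, airlight, t_min=0.1):
    """hazy: HxWx3 float in [0, 1]; transmission: HxW; airlight: scalar or (3,)."""
    t = np.clip(transmission, t_min, 1.0)[..., None]     # avoid division blow-up
    return np.clip((hazy - airlight) / t + airlight, 0.0, 1.0)
```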
Adaptive weighted guided image filtering for depth enhancement in shape-from-focus
Existing shape-from-focus (SFF) techniques cannot preserve depth edges and fine structural details from a sequence of multi-focus images. Moreover, noise in the sequence of multi-focus images affects the accuracy of the depth map. In this paper, a novel depth enhancement algorithm for SFF based on adaptive weighted guided image filtering (AWGIF) is proposed to address the above issues. The AWGIF is applied to decompose an initial depth map, which is estimated by the traditional SFF, into a base layer...
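As a rough illustration of the base/detail idea, the sketch below decomposes a depth map with a plain guided filter and amplifies the detail layer; the adaptive weighting that defines AWGIF is not reproduced, and the radius, regularization, and gain are illustrative.

```python
# A minimal sketch of base/detail decomposition with a plain guided filter.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Edge-preserving smoothing of src guided by guide (both HxW float arrays)."""
    size = 2 * radius + 1
    mean_I, mean_p = uniform_filter(guide, size), uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    var_I = uniform_filter(guide * guide, size) - mean_I ** 2
    a = (corr_Ip - mean_I * mean_p) / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def enhance_depth(depth, detail_gain=1.5):
    """Split an initial SFF depth map into base + detail and amplify the detail."""
    base = guided_filter(depth, depth)
    detail = depth - base
    return base + detail_gain * detail
```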
Water Column Detection Method at Impact Point Based on Improved YOLOv4 Algorithm
For a long time, locating the water column at the impact point of a naval gun firing at sea has mainly relied on manual detection, which suffers from low accuracy, subjectivity, and inefficiency. To solve these problems, this paper proposes a water column detection method based on an improved you-only-look-once version 4 (YOLOv4) algorithm. Firstly, the method detects the sea-sky line using the Hough line detection method...
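As an illustration of that preprocessing step, the sketch below locates a near-horizontal sea-sky line from Canny edges with the standard Hough transform in OpenCV; all thresholds are illustrative, and the improved YOLOv4 detector itself is not shown.

```python
# A minimal sketch of sea-sky line localization with Canny edges and cv2.HoughLines.
import cv2
import numpy as np

def detect_sea_sky_line(frame_bgr):
    """Return (rho, theta) of the most prominent near-horizontal line, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 150)   # rho, theta accumulator
    if lines is None:
        return None
    for rho, theta in lines[:, 0]:
        if abs(theta - np.pi / 2) < np.deg2rad(10):      # keep near-horizontal lines
            return rho, theta
    return None
```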
Image noise level estimation via kurtosis test
Noise level estimation is a long-standing problem in image processing. The challenge arises from the fact that the estimate can be easily affected by texture information. In this paper, a new noise level estimation method based on the kurtosis test is proposed, where the kurtosis is a normalized fourth-order moment. The proposed method consists of two stages: the first determines image patches with normality using the kurtosis test, and the noise level is then estimated from these selected normal patches in the second stage...
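As a simplified illustration of the two-stage idea, the sketch below keeps patches whose intensity distribution passes a kurtosis-based normality check and estimates the noise standard deviation from them; the patch size and tolerance are illustrative, not the paper's settings.

```python
# A minimal sketch: select near-Gaussian patches by kurtosis, then estimate noise.
import numpy as np
from scipy.stats import kurtosis

def estimate_noise_level(img, patch=8, kurt_tol=0.5):
    """img: HxW float image; returns an estimated noise standard deviation."""
    h, w = img.shape
    stds = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = img[y:y + patch, x:x + patch].ravel()
            if abs(kurtosis(p, fisher=True)) < kurt_tol:   # near-Gaussian patch
                stds.append(p.std(ddof=1))
    return float(np.median(stds)) if stds else float(img.std(ddof=1))
```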