HONG KONG SAR - Media OutReach - 31 May 2022 - Future autonomous vehicles and industrial cameras might have human-like vision, thanks to a recent advance by scientists from Hong Kong and South Korea. Researchers at The Hong Kong Polytechnic University (PolyU) and Yonsei University in Seoul have developed vision sensors that emulate and even surpass the human retina's ability to adapt to various lighting levels.
"The new sensors will greatly improve machine vision systems used for visual analysis and identification tasks," says
Dr CHAI Yang, Associate Professor, Department of Applied Physics, and Assistant Dean (Research), Faculty of Applied Science and Textiles, PolyU, who led the research.
Machine vision systems are cameras and computers that capture and process images for tasks such as facial recognition. They need to be able to "see" objects in a wide range of lighting conditions, which demands intricate circuitry and complex algorithms. Such systems are rarely efficient enough to process a large volume of visual information in real time—unlike the human brain.
The new bioinspired sensors developed by Dr Chai's team may offer a solution: the sensors themselves adapt to different light intensities, rather than relying on back-end computation. The human eye adapts to different levels of illumination, from very dark to very bright and vice versa, which allows us to identify objects accurately under a wide range of lighting conditions. The new sensors aim to mimic this adaptability.
"The human pupil may help adjust the amount of light entering the eye," explains Dr Chai, "but the main adaptation to brightness is performed by retina cells." Natural light intensity spans a large range, 280 dB. Impressively, the new sensors developed by Dr Chai's team have an effective range of up to 199 dB, compared with only 70 dB for conventional silicon-based sensors. The human retina can adapt to environments under sunlight to starlight, with a range of about 160 dB.
To achieve this, the research team developed light detectors, called phototransistors, using a dual layer of atomically thin molybdenum disulphide, a semiconductor with unique electrical and optical properties. The researchers then introduced "charge trap states", impurities or imperfections in a solid's crystalline structure that restrict the movement of charge, into the dual layer.
"These trap states enable the storage of light information," report the researchers, "and dynamically modulate the optoelectronic properties of the device at the pixel level." By controlling the movement of electrons, the trap states enabled the researchers to precisely adjust the amount of electricity conducted by the phototransistors. This in turn allowed them to control the device's photosensitivity, or its ability to detect light.
Each of the new vision sensors is made up of arrays of such phototransistors. They mimic the rod and cone cells of the human eye, which are respectively responsible for detecting dim and bright light. As a result, the sensors can detect objects in differently lit environments as well as switch between, and adapt to, varying levels of brightness—with an even greater range than the human eye.
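To illustrate the general idea of in-sensor adaptation, as opposed to correcting exposure in back-end software, here is a minimal conceptual sketch: each pixel drifts its own photosensitivity toward the inverse of its recent illumination, so both dim and bright regions settle into a usable output range. The update rule, rate constant and array sizes are illustrative assumptions for this sketch, not parameters of the actual device.

```python
import numpy as np

def adapt_sensitivity(prev_sensitivity: np.ndarray,
                      illumination: np.ndarray,
                      rate: float = 0.2) -> np.ndarray:
    """Move each pixel's sensitivity toward 0.5 / illumination, a toy
    stand-in for pixel-level modulation of photosensitivity."""
    target = 0.5 / np.clip(illumination, 1e-6, None)
    return prev_sensitivity + rate * (target - prev_sensitivity)

def read_out(sensitivity: np.ndarray, illumination: np.ndarray) -> np.ndarray:
    """Pixel response = sensitivity x incident light, clipped to a fixed
    output range, like a real sensor's limited signal swing."""
    return np.clip(sensitivity * illumination, 0.0, 1.0)

# A toy scene: left half very dim, right half very bright.
scene = np.ones((4, 8))
scene[:, :4] *= 1e-3
scene[:, 4:] *= 1e+3

sensitivity = np.ones_like(scene)        # start unadapted
for _ in range(100):                      # let each pixel settle
    sensitivity = adapt_sensitivity(sensitivity, scene)

print(read_out(np.ones_like(scene), scene))  # unadapted: dim half near zero, bright half saturated
print(read_out(sensitivity, scene))          # adapted: both halves near mid-range
```

In the unadapted readout the dim half is effectively invisible and the bright half is saturated; after per-pixel adaptation both halves sit near mid-range, which is the kind of contrast preservation the article attributes to the new sensors.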
"The sensors reduce hardware complexity and greatly increase the image contrast under different lighting conditions," says Dr Chai, "thus delivering high image recognition efficiency."
These novel bioinspired sensors could usher in the next generation of artificial-vision systems used in autonomous vehicles and manufacturing, and could also find exciting new applications in edge computing and the Internet of Things.
The research was published in Nature Electronics.
#PolyU
https://www.media-outreach.com/news/hong-kong/2022/05/31/140289/