Improving the performance of solar cells, transistors, LEDs and batteries will require better electronic materials, made from new compositions that have yet to be discovered.
To accelerate the search for advanced functional materials, scientists use AI tools to identify promising materials from hundreds of millions of chemical formulations. In tandem, engineers are building machines capable of printing hundreds of material samples at once, based on chemical compositions labeled by AI search algorithms.
But until now, there has been no comparably quick way to confirm that these printed materials actually work as intended. This final step of material characterization has been a major bottleneck in the advanced-materials screening pipeline.
Now, a new computer vision technique developed by MIT engineers dramatically accelerates the characterization of newly synthesized electronic materials. The technique automatically analyzes images of printed semiconductor samples and quickly estimates two key electronic properties for each sample: bandgap (a measure of electron activation energy) and stability (a measure of longevity).
The new technique accurately characterizes electronic materials 85 times faster than the standard baseline approach.
The researchers intend to use this technique to accelerate the search for promising materials for solar cells. They also plan to integrate the technique into a fully automated materials screening system.
“Ultimately, we envision integrating this technique into an autonomous laboratory of the future,” says MIT graduate student Eunice Aissi. “The whole system would allow us to pose a materials problem to a computer, have it predict potential compounds, and then run 24/7 to make and characterize those predicted materials until it arrives at the desired solution.”
“The application area of these techniques ranges from solar energy enhancement to transparent electronics and transistors,” adds MIT graduate student Alexander (Aleks) Siemenn. “It really runs the gamut of areas where semiconductor materials can benefit society.”
Aissi and Siemenn detail the new technique in a study published today in Nature Communications. Their co-authors at MIT include graduate student Fang Sheng, postdoc Basita Das, and mechanical engineering professor Tonio Buonassisi, along with former visiting professor Hamide Kavak of Cukurova University and visiting postdoc Armi Tiihonen of Aalto University.
Power of optics
Once a new electronic material is synthesized, characterization of its properties is typically handled by a “domain expert” who examines one sample at a time using a benchtop tool called UV-Vis, which scans through different colors of light to determine where the semiconductor begins to absorb more strongly. This manual process is precise but time-consuming: a domain expert typically characterizes about 20 material samples per hour, a snail's pace compared with some printing tools that can deposit 10,000 different material combinations per hour.
“The manual characterization process is very slow,” explains Buonassisi. “It gives you a lot of confidence in the measurement, but it's not matched to the speed at which you can now deposit material onto a substrate.”
To speed up the characterization process and eliminate one of the biggest bottlenecks in materials screening, Buonassisi and his colleagues turned to computer vision, a field that applies computer algorithms to quickly and automatically analyze optical characteristics of an image.
“Optical characterization methods have power,” notes Buonassisi. “We can get information very quickly. There is a wealth of information in images, across many pixels and wavelengths, that a human simply cannot process but that a machine-learning program can.”
The team realized that certain electronic properties, namely bandgap and stability, could be estimated based on visual information alone, if that information was captured in sufficient detail and interpreted correctly.
To this end, researchers developed two new computer vision algorithms to automatically interpret images of electronic materials: one to estimate the bandgap and the other to determine stability.
The first algorithm is designed to process visual data from highly detailed hyperspectral images.
“Instead of a standard camera image with three channels – red, green and blue (RGB) – a hyperspectral image has 300 channels,” explains Siemenn. “The algorithm takes that data, transforms it, and computes a bandgap. We run that process extremely quickly.”
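The article doesn't spell out the algorithm, but a standard way to turn an absorbance spectrum into a bandgap estimate is a Tauc-style linear extrapolation of the absorption onset. The sketch below illustrates the idea for a single pixel's spectrum, assuming a direct-gap material; the function name `estimate_bandgap` and the 20–80% fitting window are illustrative choices, not the paper's method:

```python
import numpy as np

def estimate_bandgap(wavelengths_nm, absorbance):
    """Estimate a direct bandgap (in eV) from one pixel's absorbance
    spectrum via a simple Tauc-style linear extrapolation (sketch only)."""
    energy_ev = 1239.84 / np.asarray(wavelengths_nm)   # photon energy E = hc/lambda
    alpha = np.asarray(absorbance)
    tauc = (alpha * energy_ev) ** 2                    # (alpha*E)^2 vs E for a direct gap
    # Fit a line to the steep absorption-onset region (here: where the Tauc
    # signal lies between 20% and 80% of its maximum) and extrapolate to zero.
    lo, hi = 0.2 * tauc.max(), 0.8 * tauc.max()
    mask = (tauc > lo) & (tauc < hi)
    slope, intercept = np.polyfit(energy_ev[mask], tauc[mask], 1)
    return -intercept / slope                          # x-intercept = bandgap estimate
```

Applied per pixel across a 300-channel hyperspectral cube, this kind of fit yields a bandgap map for every printed sample on the slide.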
The second algorithm analyzes standard RGB images and evaluates the stability of a material based on visual changes in the material's color over time.
“We found that color change can be a good indicator of the rate of degradation in the material system we are studying,” says Aissi.
Material compositions
The team applied the two new algorithms to characterize the bandgap and stability of approximately 70 printed semiconductor samples. They used a robotic printer to deposit the samples on a single slide, like cookies on a baking sheet. Each deposit was made with a slightly different combination of semiconductor materials. In this case, the team printed different ratios of perovskites – a type of material expected to be a promising candidate for solar cells, but also known to degrade quickly.
“People are trying to change the composition – add a little of this, a little of that – to try to make (the perovskites) more stable and more efficient,” Buonassisi says.
After printing 70 different compositions of perovskite samples on a single slide, the team scanned the slide with a hyperspectral camera. Next, they applied an algorithm that visually “segments” the image, automatically isolating the samples from the background. They ran the new bandgap algorithm on the isolated samples and automatically calculated the bandgap for each sample. The entire bandgap extraction process took about six minutes.
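The article doesn't describe the segmentation model itself. As a minimal stand-in, the sample-isolation step can be sketched as intensity thresholding followed by connected-component labeling, with each connected component treated as one printed droplet; the function name and threshold below are hypothetical:

```python
from collections import deque

import numpy as np

def segment_samples(image, threshold=0.5):
    """Isolate printed droplets from the slide background by thresholding
    and 4-connected flood-fill labeling (illustrative stand-in only)."""
    mask = image > threshold                      # foreground (droplet) pixels
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                              # pixel already assigned
        count += 1
        labels[seed] = count
        queue = deque([seed])
        while queue:                              # flood-fill one droplet
            r, c = queue.popleft()
            for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = count
                    queue.append((rr, cc))
    # One boolean mask per detected sample region
    return [labels == i for i in range(1, count + 1)]
```

Each returned mask can then be fed to the bandgap or stability analysis for that individual sample.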
“It would normally take several days for a domain expert to manually characterize the same number of samples,” says Siemenn.
To test stability, the team placed the same slide in a chamber in which they varied environmental conditions, such as humidity, temperature and exposure to light. They used a standard RGB camera to take an image of the samples every 30 seconds for two hours. They then applied the second algorithm to images of each sample over time to estimate the degree to which each droplet changed color or degraded under various environmental conditions. Ultimately, the algorithm produced a “stability index,” or a measure of the durability of each sample.
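A stability index of this kind can be sketched as the average color drift of a sample across the image time series; the normalization and the name `stability_index` below are illustrative assumptions, not the paper's exact metric:

```python
import numpy as np

def stability_index(frames):
    """Score a sample's stability from a time series of RGB images
    (shape (T, H, W, 3), values in [0, 1]): 1.0 means no color change
    from the first frame, 0.0 means maximal change (sketch only)."""
    frames = np.asarray(frames, dtype=float)
    # Per-pixel Euclidean distance in RGB space from the initial frame
    drift = np.linalg.norm(frames - frames[0], axis=-1)
    mean_drift = drift.mean(axis=(1, 2))          # average drift per frame
    max_drift = np.sqrt(3.0)                      # diagonal of the unit RGB cube
    return 1.0 - mean_drift.mean() / max_drift
```

Run over the two-hour image sequence for each segmented droplet, a metric like this ranks compositions by how little they discolor under the applied environmental stress.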
As a check, the team compared their results with manual measurements of the same droplets, taken by a domain expert. Compared with the expert's benchmark estimates, the team's bandgap and stability results were 98.5 percent and 96.9 percent as accurate, respectively, and the automated process ran 85 times faster.
“We were constantly shocked by how these algorithms were able to not only increase the speed of characterization but also produce accurate results,” says Siemenn. “We envision slotting this into the automated materials pipeline we're developing in the lab, so we can run it in a fully automated way, using machine learning to guide where we want to discover these new materials, then print them, and then actually characterize them, all with very rapid processing.”
This work was supported in part by First Solar.