Ultrafast Camera Captures 156 Trillion Frames Per Second

INRS’s Énergie Matériaux Télécommunications Research Centre has developed a new ultrafast camera system that can capture up to 156.3 trillion frames per second with astonishing precision. For the first time, 2D optical imaging of ultrafast demagnetization in a single shot is possible.

This new device, called SCARF (swept-coded aperture real-time femtophotography), can capture transient absorption in a semiconductor and ultrafast demagnetization of a metal alloy in a single shot. The method will help push forward the frontiers of knowledge in a wide range of fields, including modern physics, biology, chemistry, materials science, and engineering.

Abstract
Single-shot real-time femtophotography is indispensable for imaging ultrafast dynamics during their times of occurrence. Despite their advantages over conventional multi-shot approaches, existing techniques are restricted in imaging speed or data quality by the deployed optoelectronic devices and face challenges in application scope and acquisition accuracy. They are also hindered by limits on the acquirable information imposed by their sensing models. Here, we overcome these challenges by developing swept coded aperture real-time femtophotography (SCARF). This computational imaging modality enables all-optical ultrafast sweeping of a static coded aperture during the recording of an ultrafast event, bringing full-sequence encoding at rates of up to 156.3 THz to every pixel of a CCD camera. We demonstrate SCARF’s single-shot ultrafast imaging ability at tunable frame rates and spatial scales in both reflection and transmission modes. Using SCARF, we image ultrafast absorption in a semiconductor and ultrafast demagnetization of a metal alloy.

Imaging ultrafast events in real time (i.e., within the time duration of the event’s occurrence) has contributed to numerous studies in diverse scientific fields, including nuclear fusion, photon transport in scattering media, and radiative decay of molecules. Because many of these ultrafast phenomena have timespans from femtoseconds to picoseconds, femtophotography—recording two-dimensional (2D) spatial information at trillions of frames per second (Tfps)—is indispensable for clearly resolving their spatiotemporal details. Currently, femtophotography is mostly realized using multi-shot approaches. In data acquisition, each measurement captures either a temporal slice of the dynamic scene by time gating with ultrafast devices or a certain number of time-stamped events with photon-counting cameras. Repetitive measurements (with certain auxiliaries, such as temporal or spatial scanning) are then performed to construct a movie. However, these methods require the dynamic events under observation to be precisely reproducible, which renders them incapable of studying non-repeatable or difficult-to-reproduce ultrafast phenomena, such as femtosecond laser ablation, shock-wave interaction with living cells, and optical chaos.

To surmount these limitations, many single-shot ultrafast imaging techniques have been developed for direct observation of dynamic events in real time. Existing techniques can be generally grouped into the categories of passive detection and active illumination. The former is propelled by disruptive hardware designs, such as an in-situ storage CCD, a shutter-stacked CMOS sensor, and a framing camera. Nonetheless, thus far, these ultrafast sensors have not yet reached the Tfps level, and further increasing their frame rates is fundamentally limited by their electronic bandwidths. Streak cameras—ultrafast imagers that convert time to space by pulling photoelectrons with a shearing voltage along the axis perpendicular to the device’s entrance slit—can reach an imaging speed of 10 Tfps. Although they overcome this speed limitation, streak cameras are conventionally capable of only one-dimensional imaging. To overcome the drawback in imaging dimensionality, compressed ultrafast photography (CUP) adds a single encoding mask on the wide-open entrance port of a streak camera. With the prior information provided by the encoding mask, the spatial information along the shearing direction is allowed to mix with the temporal information in a compressively recorded snapshot. The ensuing image reconstruction recovers the information. Nonetheless, the resulting image quality, especially at the Tfps level, can be considerably degraded during the generation and propagation of photoelectrons by various effects, including the photocathode’s thickness, the space-charge effect, and the limited fill factor of the microchannel plate. Meanwhile, space-time coupling in the temporal shearing direction caps the sum of the frame size and the sequence depth in the reconstructed movie, which limits the maximum amount of acquirable information. Finally, temporal shearing induces spatial anisotropy, further reducing the image quality in the reconstructed movie.
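To make the CUP sensing model described above concrete, here is a minimal NumPy sketch of its encode-shear-integrate measurement. The array sizes, the one-pixel-per-frame shear, and all variable names are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

# Toy CUP forward model: encode with one static mask, shear frames in time,
# and integrate everything onto a single 2D snapshot.
nx, ny, nt = 64, 64, 32                     # frame size and sequence depth (hypothetical)
rng = np.random.default_rng(0)

scene = rng.random((nt, ny, nx))            # dynamic scene I(x, y, t)
mask = rng.integers(0, 2, (ny, nx))         # single pseudorandom encoding mask

snapshot = np.zeros((ny + nt - 1, nx))      # sensor area enlarged along the shear axis
for t in range(nt):
    coded = scene[t] * mask                 # spatial encoding (same mask for every frame)
    snapshot[t:t + ny, :] += coded          # shear frame t by t pixels, then integrate

# `snapshot` is the single compressively recorded measurement that a
# compressed-sensing solver would invert to recover the movie.
```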

Active-illumination-based approaches work by imparting temporal information to various photon tags—such as space, angle, spatial frequency, and wavelength—carried in the illumination for 2D imaging at the Tfps level. However, these methods have various limitations. For example, space-division-based techniques require the targeted scene to move at a certain velocity to accommodate the sequential arrival of spatially separated probe pulses. Angle-dependent probing is affected by parallax in each produced frame. Systems relying on spatial-frequency division and wavelength division may also face scalability difficulties in their pattern-projection modules and spatial-mapping devices. Most importantly, these methods acquire data by partitioning the focal plane array in either the spatial domain or the spatial-frequency domain, which forbids information from overlapping. Given the limitations in the sensor’s size and the system’s optical bandwidth, this focal-plane-division strategy inherently limits the recordable amount of spatial and temporal information, which usually results in a shallow sequence depth (i.e., the number of frames in each movie).

The limitations of these methods can be lifted using a multi-pattern encoding strategy. Each frame of the scene is encoded with a different pattern at a rate much higher than the sensor’s acquisition speed. The captured snapshot thus represents the temporal integration of the spatiotemporally modulated dynamic scene. A compressed-sensing-based algorithm is then used to reconstruct an ultrafast movie with high quality. As an example, a flutter shutter was implemented to globally block and transmit light in a random sequence during the camera’s exposure. This modulation created a more broadband temporal impulse response, which improved the sensing matrix’s condition number and hence the reconstructed image quality. Combined with a multiple-aperture design, this scheme enabled an imaging speed of 200 million fps. However, this global encoding method resulted in a full spatial correlation of the modulation structure imparted on the signal, which limited the compression ratio and hence the sequence depth. Thus, ultrafast encoding over each pixel is beneficial for improving reconstruction fidelity. Such pixel-wise coded exposure has been implemented with various techniques, such as spatial light modulators (e.g., a digital micromirror device and a liquid-crystal-on-silicon device), a translating printed pattern, and in-pixel memory in the CMOS architecture. However, the imaging speeds enabled by these methods are clamped to several thousand fps by the pattern refresh rates of the spatial light modulators, the moving speed of the piezo stages, or the readout electronics of the imaging sensor. Although CUP provides an ultrafast pixel-wise encoding scheme, its operating principle requires simultaneously shearing the scene and the coded aperture. Consequently, pixels across the sensor are encoded with reduced depths, resulting in inferior image reconstruction.
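As a rough illustration of the pixel-wise multi-pattern encoding idea, the toy example below modulates every frame with a different pseudorandom pattern, integrates the result into one snapshot, and then runs a plain gradient-descent reconstruction. The sizes, the step size, and the use of a simple nonnegativity constraint in place of the sparsity priors used in practice are simplifying assumptions of ours, not the paper's method.

```python
import numpy as np

# Pixel-wise coded exposure: every frame gets its own per-pixel binary pattern,
# and the sensor records only the time-integrated, modulated scene.
nx, ny, nt = 32, 32, 16
rng = np.random.default_rng(1)

scene = rng.random((nt, ny, nx))            # dynamic scene I(x, y, t)
codes = rng.integers(0, 2, (nt, ny, nx))    # a different pattern for every frame

snapshot = (codes * scene).sum(axis=0)      # single integrated measurement

# Naive reconstruction: minimize ||sum_t C_t * x_t - snapshot||^2 by gradient descent.
est = np.zeros_like(scene)
step = 1.0 / nt                             # conservative step for this per-pixel least-squares problem
for _ in range(200):
    residual = (codes * est).sum(axis=0) - snapshot
    est -= step * codes * residual          # gradient with respect to each frame
    est = np.clip(est, 0.0, None)           # nonnegativity as a weak prior
```

Because every pixel carries its own full-depth temporal code, each per-pixel system is better conditioned than with a global flutter shutter, which is essentially the point made above about reconstruction fidelity.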

To overcome the limitations of existing methods, here we report swept coded aperture real-time femtophotography (SCARF), which enables a full pixel-wise encoding depth in single-shot ultrafast imaging by using a single chirped pulse and a modified pulse-shaping setup. Leveraging time-spectrum mapping and spectrum-space sweeping, SCARF attaches pixel-wise coded apertures to an ordinary CCD camera at rates of up to 156.3 THz in real time. We demonstrate SCARF at multiple spatial and temporal scales in both reflection and transmission modes. To show SCARF’s broad utility, we use it for single-shot real-time imaging of 2D transient light-matter interactions, including ultrafast absorption in a semiconductor and ultrafast demagnetization in a metal alloy.
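The sketch below captures the swept-coded-aperture idea in its simplest form: the chirped probe maps time to wavelength, wavelength maps to a lateral position, and therefore frame t is encoded by the same static code translated by t pixels before the sensor integrates. The field-of-view sizes, the one-pixel-per-frame sweep, and all names are hypothetical choices made only to show why every pixel can receive a full-depth temporal code without shearing the scene itself.

```python
import numpy as np

# Toy swept-coded-aperture forward model: a static code is swept laterally across
# the (unsheared) scene, one shift per frame, before temporal integration.
nx, ny, nt = 64, 64, 16
rng = np.random.default_rng(2)

scene = rng.random((nt, ny, nx))            # dynamic scene I(x, y, t)
code = rng.integers(0, 2, (ny, nx + nt))    # static coded aperture, wider than the field of view

snapshot = np.zeros((ny, nx))
for t in range(nt):
    swept = code[:, t:t + nx]               # aperture swept by t pixels at time t
    snapshot += swept * scene[t]            # pixel-wise encoding, then integration on the CCD

# Unlike the CUP model, the scene is never sheared: each sensor pixel sees a full
# nt-element temporal code drawn from successive columns of the static aperture.
```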

Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends, including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.

Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.

A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts.  He is open to public speaking and advising engagements.

