NEAR INFRARED (NIR) SURFACE-ENHANCED RAMAN SPECTROSCOPY AND FLUORESCENCE MICROSCOPY FOR MOLECULAR-GUIDED SURGERY By Cheng-You Yao A DISSERTATION Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Biomedical Engineering — Doctor of Philosophy 2022 ABSTRACT Molecular imaging has become an emerging technology to assess tumor margins. As the imaging contrast agents are functionalized with multiple ligands that can bind to different biomarkers – multiplexed molecular imaging, this technique can achieve high sensitivity and specificity for tumor margin detection. Optical-based molecular imaging modalities provide non- hazardous optical radiation, multiplexing wavelengths, and higher spatial resolution than ionizing radiation tomography techniques. Two categories of optical contrast agents, fluorescent dyes and Surface-Enhanced Raman Spectroscopy (SERS) nanoparticles are introduced and mainly applied in this dissertation. However, because of the tissue-photon interactions, the imaging contrast and penetration depths are limited by the visible wavelengths. The light in the NIR regime (700~1700 nm) has shown a deeper imaging penetration and better contrast with lower autofluorescence background. Thus, in this dissertation, NIR fluorescent dyes and SERS NPs excited by 785 nm are used for ex vivo and in vivo imaging for biological studies. This work aims to develop a variety of optical instruments for NIR ex vivo and in vivo biomedical imaging applications with deeper penetration, better contrast, and higher sensitivity. The optical instruments include a spectrometric system for SERS Raman detection, a VO2 MEMS scanner for SERS imaging, portable confocal microscopes, and a PZT MEMS scanner-based macroscope for wide-field fluorescence imaging. Chapter 1 briefly introduced the research background, pros and cons of existing techniques, and motivations of this study. In Chapter 2, the spectrometric SERS Raman system and ratiometric analysis have been applied to the detection of Alzheimer's Disease biomarkers and breast cancer image-guided surgery, using different SERS NPs conjugated with ligands. The Raman results were confirmed with histological analysis. In Chapter 3, a VO2 MEMS scanner has been designed, fabricated, and characterized for the Lissajous scanning SERS imaging application. In Chapter 4, two variants of the portable confocal microscopes, the point-scan and line-scan systems were designed with reflective parabolic mirrors for broadband wavelengths from the visible to NIR ranges. Ex vivo and in vivo confocal imaging results have been demonstrated using tumor-bearing mouse tissues. In Chapter 5, a thin-film PZT MEMS scanner has been reported, characterized, and integrated into a wide-field macroscope for fluorescence imaging. In Chapter 6, a novel photodetector - SNSPD has been integrated into the point-scan portable confocal microscope and PZT MEMS scanner-based wide-field macroscope to increase the efficiency and contrast of fluorescent imaging in the NIR range. In the last chapter, the future applications of the advanced VO2 MEMS scanners and fluorescence lifetime imaging microscopy using SNSPD were discussed in detail. ACKNOWLEDGEMENTS This dissertation would not have been accomplished without the collaborative support from many individuals. I would like to express my sincere gratitude towards the following people. First and foremost, I would like to acknowledge my respectful advisor Dr. Zhen Qiu. 
for his invaluable support and guidance in my doctoral study. He has provided me with countless ideas to work on these projects and sharpen my skill sets. His attitude as a mentor, friend, and philosopher, has led not only to my research throughout these four years but also to my future career. Additionally, I would like to thank my guidance committee members, Dr. Xuefei Huang, Dr. Nelson Sepúlveda, and Dr. Wen Li, for giving insightful advice on my dissertation research. They have spent countless hours providing technical assistance on collaborative projects. Also, I never forget the mentorship of our senior researchers, Dr. Michael Mandella and Frank Schonig, who taught me complex optical and mechanical designs and help me become a professional engineer. Moreover, I would like to express my gratitude and kind friendship toward my colleagues in the laboratory, Bo Li and Aniwat Juhong. We have been through the most difficult period of the pandemic and shared joy and unlimited support with each other. I also thank the graduate students: Dr. Yunqi Cao, Dr. Weiyang Yang, Yan Gong, Xiang Liu, Dr. Kunli Liu, Chia-Wei Yang, and A. K. M. Atique Ullah for their collaborations. Meanwhile, I would like to appreciate Dr. Hiroshi Toshiyoshi, Dr. Wibool Piyattanametha, Dr. Aaron Miller, and Dr. Tim Rambo for their contributions to the outstanding and unique devices. Last but not least, I would like to thank my family for their endless and unconditional support all the time. Without their emotional encouragement, I cannot complete my doctoral studies. iv TABLE OF CONTENTS LIST OF TABLES ........................................................................................................................ vii LIST OF FIGURES ..................................................................................................................... viii LIST OF ABBREVIATIONS ........................................................................................................ xi CHAPTER 1: INTRODUCTION ................................................................................................... 1 1.1 Molecular Imaging ................................................................................................................ 1 1.2 Fluorescence Imaging ........................................................................................................... 2 1.3 Surface-Enhanced Raman Spectroscopy (SERS) Nanoparticles .......................................... 2 1.4 Near Infrared (NIR) Imaging ................................................................................................ 3 1.5 Motivation and Research Contribution ................................................................................. 4 1.6 Outline ................................................................................................................................... 5 CHAPTER 2: SERS RAMAN SPECTROSCOPY ........................................................................ 6 2.1 SERS Raman Spectroscopic Imaging System ...................................................................... 6 2.2 SERS Nanoparticles for Alzheimer's Disease Biomarker Detection .................................. 16 2.3 SERS Nanoparticles for Breast Cancer Image-Guided Surgery ......................................... 31 CHAPTER 3: VO2 MEMS SCANNER FOR LISSAJOUS SCANNING RAMAN IMAGING . 49 3.1 Micro-Electro-Mechanical Systems (MEMS) Scanner....................................................... 
49 3.2 VO2-based MEMS Scanner ................................................................................................. 53 3.3 Characterization of VO2 Scanner ........................................................................................ 60 3.4 Raman Imaging Based on Lissajous Scanning ................................................................... 66 3.5 Conclusion and Future Work of VO2 Scanner .................................................................... 72 CHAPTER 4: PORTABLE CONFOCAL MICROSCOPY .......................................................... 74 4.1 Dual-Axis Confocal Microscope ......................................................................................... 74 4.2 Point-Scan Portable Confocal Microscope ......................................................................... 77 4.3 Line-Scan Portable Confocal Microscope........................................................................... 89 4.4 Conclusion of Portable Confocal Microscopes ................................................................. 100 CHAPTER 5: PZT SCANNER-BASED WIDE-FIELD MACROSCOPE ................................ 102 5.1 Thin-Film PZT Piezoelectric MEMS Scanner .................................................................. 102 5.2 PZT MEMS Scanner-Based Wide-Field Macroscope ...................................................... 106 5.3 Conclusion of Thin-Film PZT Scanner-Based Wide-Field Macroscope .......................... 111 CHAPTER 6: SUPERCONDUCTING NANOWIRE SINGLE-PHOTON DETECTOR (SNSPD)- BASED IMAGING SYSTEMS...................................................................................................112 6.1 Superconducting Nanowire Single-Photon Detector (SNSPD) ........................................ 112 6.2 Point-Scan Confocal Microscope Imaging System Using SNSPD ................................... 118 6.3 PZT MEMS Scanner-Based Wide-Field Macroscope Using SNSPD .............................. 122 6.4 Conclusion of SNSPD-Based Imaging Systems ............................................................... 126 v CHAPTER 7: FUTURE WORK ................................................................................................ 128 7.1 Advanced Design of VO2 MEMS Scanner ....................................................................... 128 7.2 Fluorescence Lifetime Imaging Using SNSPD................................................................. 133 BIBLIOGRAPHY ....................................................................................................................... 136 vi LIST OF TABLES Table 1. Geometry parameters of the VO2-based MEMS scanner. ............................................... 54 Table 2. Material thicknesses of the VO2-based MEMS scanner. ................................................ 58 Table 3. Material properties of the VO2-based MEMS scanner used in the FEA model. ............. 64 Table 4. Comparison between simulated and experimental resonant frequencies. ....................... 65 Table 5. Theoretical resolutions of the point-scan confocal microscope. ..................................... 80 Table 6. Experimental resolutions of the point-scan confocal microscope................................... 81 Table 7. Driving parameters of the point-scan confocal microscope. ........................................... 84 Table 8. Comparison between different cameras for the line-scan confocal microscope. ............ 95 Table 9. Comparison between point-scan and line-scan confocal microscopes. 
........................ 100 Table 10. Driving parameters of the point-scan confocal microscope based on SNSPD. .......... 120 vii LIST OF FIGURES Figure 1. Schematic of the customized Raman imaging system. ................................................... 8 Figure 2. CAD drawing of the customized Raman imaging system............................................... 8 Figure 3. Spectral resolution of the custom Raman spectrometer. ................................................. 9 Figure 4. LabVIEW program controlling the spectrometer and the 2D stage. ............................. 10 Figure 5. Fluorescence spectrum and image of a breast tumor. .................................................... 15 Figure 6. Concept of Alzheimer’s disease detection using SERS NPs. ........................................ 19 Figure 7. Characterization of the SERS NPs. ............................................................................... 21 Figure 8. Results of binding efficacy to BSA, Tau, and Aβ proteins. ........................................... 22 Figure 9. Raman imaging results of a paper phantom. ................................................................. 24 Figure 10. Raman imaging of the mouse brain with topically SERS staining. ............................ 26 Figure 11. Ex vivo Raman imaging of the protein-injected mouse brain. ..................................... 28 Figure 12. Concept of tumor margin identification using SERS NPs. ......................................... 35 Figure 13. Synthesis of the custom-made SERS NPs. .................................................................. 37 Figure 14. Chemical structures of the Raman reporters were used in this study. ......................... 37 Figure 15. Characterization of the custom-made SERS NPs........................................................ 38 Figure 16. SERS spectrum temporal fluctuation and its effect on the demultiplex result. ........... 39 Figure 17. Raman images of tissues topically stained with SERS-HA or SERS-PEG. ................ 42 Figure 18. Ratiometric Raman images of tissues with SERS mixture for CD44 detection.......... 44 Figure 19. SERS image-guided surgery for resection of mice breast tumor. ............................... 46 Figure 20. Design of the VO2 MEMS scanner and its application. .............................................. 54 Figure 21. Fabrication process of the VO2 MEMS scanner. ......................................................... 57 Figure 22. Resistance measurement on the VO2 thin film. ........................................................... 59 viii Figure 23. Optical measurement system for the VO2 scanner characterization............................ 61 Figure 24. DC (quasistatic) and step response of the MEMS scanner. ......................................... 62 Figure 25. Resonant frequency simulation and characterization of the MEMS scanner. ............. 64 Figure 26. Raman imaging setup and Lissajous scanning pattern using the MEMS scanner. ...... 68 Figure 27. Lissajous scanning Raman images of a paper phantom and a breast tumor................ 71 Figure 28. Schematic of the dual-axis confocal microscopy. ....................................................... 74 Figure 29. Schematic of the point-scan portable confocal microscope. ....................................... 77 Figure 30. Optical alignment of the point-scan portable confocal microscope. ........................... 79 Figure 31. 
Resolution of the point-scan portable confocal microscope........................................ 81 Figure 32. Schematic of the point-scan portable confocal imaging system. ................................ 83 Figure 33. Distortion correction of confocal image reconstruction. ............................................. 85 Figure 34. Dual-channel confocal imaging of mouse tissues: colon, breast tissue, and kidney. .. 87 Figure 35. Reconstructed 3D confocal images of cleared mouse muscle. .................................... 87 Figure 36. Ex vivo mouse brain imaging using the portable confocal microscope. ...................... 88 Figure 37. In vivo mouse ear imaging using the portable confocal microscope. .......................... 89 Figure 38. Schematic of the line-scan portable confocal microscope. ......................................... 90 Figure 39. Resolution of the line-scan portable confocal microscope. ......................................... 92 Figure 40. ZEMAX simulation of the line-scan portable confocal microscope. .......................... 93 Figure 41. Schematic of the line-scan portable confocal imaging system. ................................... 96 Figure 42. Confocal images of NW-ICG injected mouse tissues. ................................................ 98 Figure 43. Confocal images of NW-ICG-injected and DRAQ5-stained mouse tissues. .............. 98 Figure 44. In vivo mouse ear imaging using the line-scan portable confocal microscope. .......... 99 Figure 45. Design of the PZT piezoelectric MEMS scanner. ..................................................... 103 ix Figure 46. Wiring of the PZT piezoelectric MEMS scanner. ..................................................... 103 Figure 47. Fabrication process of the PZT piezoelectric MEMS scanner. ................................. 104 Figure 48. Finite element analysis (FEA) simulation of the PZT piezoelectric MEMS scanner.105 Figure 49. Frequency response of the PZT MEMS scanner. ...................................................... 106 Figure 50. Schematic of the PZT MEMS scanner-based macroscope. ....................................... 107 Figure 51. Fluorescence image of the ICG phantom. ................................................................. 109 Figure 52. Fluorescence images of patterns on a business card. ................................................ 109 Figure 53. Fluorescence images of a mouse breast tumor with ICG injection. ...........................110 Figure 54. Basic operation principle of the SNSPD. ...................................................................113 Figure 55. Characterization of the SNSPD detector. ...................................................................115 Figure 56. Output signals of the SNSPD detector........................................................................117 Figure 57. Signal processing flow of the SNSPD signals............................................................118 Figure 58. Schematic of the portable confocal microscope. ........................................................119 Figure 59. Image processing flow for noise removal in SNSPD confocal images. .................... 121 Figure 60. Ex vivo confocal image of ICG-stained breast tumor using SNSPD......................... 122 Figure 61. Schematic of the PZT MEMS scanner-based macroscope using SNSPD................. 123 Figure 62. Fluorescence images of an ICG-stained mouse breast tumor. ................................... 126 Figure 63. 
Ray tracing simulation of the MEMS-based miniaturized confocal microscope. ..... 128 Figure 64. Advanced designs of the VO2 MEMS scanner with arbitrary geometries. ............... 129 Figure 65. Procedure of the poking technique to open the holes. ............................................... 130 Figure 66. Stereomicroscopic images of the VO2 MEMS scanner after the poking technique. . 131 Figure 67. Photographs of the laser-cut VO2 MEMS scanner. ................................................... 132 Figure 68. Quantum efficiencies of the NIR-SNSPD and NIR-PMT. ........................................ 134 x LIST OF ABBREVIATIONS SPECT Single-Photon Emission Computed Tomography PET Positron Emission Tomography SERS Surface-Enhanced Raman Scattering SPR Surface Plasmon Resonance Effect QD Quantum Dots NP Nanoparticles NIR Near Infrared SWIR Short-Wave Infrared FDA Food and Drug Administration ICG Indocyanine Green MB Methylene Blue Si Silicon SWCNT Single-Walled Carbon Nanotube InGaAs Indium Gallium Arsenide MFD Mode Field Diameter DAC Dual-Axis Confocal NA Numerical Aperture WD Working Distance FOV Field of View CCD Charge-Coupled Device FVB Full Vertical Binning EPR Enhanced Permeability and Retention xi QSM-PC Queued State Machine - Producer Consumer DCLS Direct Classical Least Squares PCA Principal Component Analysis AD Alzheimer’s Disease MRI Magnetic Resonance Imaging CT Computed Tomography fMRI Functional Magnetic Resonance Imaging FDG-PET Fluorodeoxyglucose Positron Emission Tomography CSF Cerebrospinal Fluid LOD Limit of Detection DI De-Ionized FFPE Formalin-Fixed Paraffin-Embedded H&E Hematoxylin and Eosin IHC Immunohistochemistry TNM Tumor-Node-Metastasis ER Estrogen Receptors PR Progesterone Receptors HER2 Human Epidermal Growth Factor Receptor 2 HR Hormone Receptors TNBC Triple-Negative Breast Cancer BCS Breast-Conserving Surgery HA Hyaluronic Acid PEG Polyethylene Glycol xii LOQ Limit of Quantification TEM Transmission Electron Microscopy DLS Dynamic Light Scattering MEMS Micro-Electro-Mechanical Systems VO2 Vanadium Dioxide SiO2 Silicon Dioxide XeF2 Xenon Difluoride CVD Chemical Vapor Deposition PECVD Plasma Enhanced Chemical Vapor Deposition PLD Pulsed Laser Deposition RIE Reactive Ion Etching DRIE Deep Reactive Ion Etching FEA Finite Element Analysis PSD Position Sensing Detector GCD Greatest Common Divisor FF Fill Factor SNR Signal-To-Noise Ratio OAP Off-Axis Parabolic Mirrors FWHM Full-Width-Half-Maximum PSF Point Spread Function SIL Solid Immersion Lens PMT Photomultiplier Tube DAQ Data Acquisition xiii FITC Fluorescein DMSO Dimethyl Sulfoxide MIP Maximum Intensity Projection FPS Frames per Second ROI Region of Interest TTL Transistor-Transistor Logic PFI Programmable Function Interface CMOS Complementary Metal-Oxide Semiconductor PZT Lead-Zirconate-Titanate Oxide (PbZrTiO3) ADRIP Arc Discharge Reactive Ion Plating SOI Silicon on Insulator BOX Buried Oxide CHF3 Fluoroform GRIN Gradient-Index SNSPD Superconducting Nanowire Single-Photon Detector QE Quantum Efficiency PCI Peripheral Component Interconnect PCIe Peripheral Component Interconnect Express DMA Direct Memory Access FPGA Field-Programmable Gate Array SMF Single-Mode fiber MMF Multi-Mode Fiber FFT Fast Fourier Transform xiv FRET Fluorescence Resonance Energy Transfer FLIM Fluorescence Lifetime Imaging Microscopy TCSPC Time-Correlated Single-Photon Counting SPAD Single-Photon Avalanche Diode IACUC Institutional Animal Care & Use Committee xv CHAPTER 1: INTRODUCTION 1.1 Molecular Imaging A variety of optical 
modalities have been demonstrated to assess tumor margins, including confocal microscopy [1], multiphoton microscopy [2], optical coherence tomography [3], fluorescence lifetime microscopy [4], and intrinsic Raman microscopy [5]. Confocal microscopy can obtain high spatial resolution and optical-section images, but it also introduces scattered background noise. Although the multiphoton technique improves the background noise in confocal microscopy and provides deep penetration, its high peak intensity might cause pathological and morphological damage in tissues [6]. Spectroscopic methods, such as fluorescence lifetime and intrinsic Raman microscopy, rely on chemical compositions and tissue-scattering parameters. Therefore, their spectral signal is weaker and less specific in the tumor area in comparison to non- spectroscopic methods. In recent decades, molecular imaging has revealed its great capability of accurate tumor detection. By utilizing the imaging contrast agents conjugated with tumor-targeting ligands, this technique can discriminate between healthy cells and malignant tissues with high sensitivity and specificity [7-9]. Compared to other types of imaging modalities, such as single-photon emission computed tomography (SPECT) and positron emission tomography (PET), optical imaging is attractive because of its non-ionizing irradiation [10, 11]. Optical imaging depends on fluorescence, bioluminescence, absorption, or reflectance as the source of contrast [9]. Nonetheless, the molecular phenotypes of most cancers alter from person to person. Even within a single patient, the concentrations of the phenotypes fluctuate temporally and spatially [12, 13]. Thus, multiple disease-related biomarkers should be assessed to improve sensitivity and specificity. The usage of multiple molecules makes multiplexed molecular imaging indispensable. 1 1.2 Fluorescence Imaging In addition to the advantage of non-ionizing radiation sources, optical molecular imaging permits the selection of suitable exogenous contrast agents, which can manipulate the excitation light, emission signal, exposure time, and spectral resolution. Exploiting different optical mechanisms, various contrast agents with multiplexed ability have been developed and widely used. For instance, conventional fluorescent dyes, quantum dots (QDs), and surface-enhanced Raman scattering nanoparticles (SERS NPs) are the most common contrast agents [14]. Based on the absorption and emission of light, conventional fluorescent dyes are excited by a specific wavelength and spontaneously emit light at a longer wavelength. However, the autofluorescence from the tissue overlaps the emission light. In addition to the autofluorescence issue, multicolor fluorescent dyes require multiple excitation wavelengths that will increase the complexity of the optical detection system. The broader emission bandwidth of fluorescence constrains the number of multiplexed channels since the bandwidth will result in overlaps of emission signals. Nevertheless, QDs solve those problems mentioned in fluorescent dyes. Semiconductor-made QDs allow a single-wavelength excitation and generate a narrow and bright emission signal. Despite their promising characteristics, the toxicity of QDs is the most concerning drawback for biomedical applications [15]. 1.3 Surface-Enhanced Raman Spectroscopy (SERS) Nanoparticles SERS NPs have similar advantages to QDs, but they are claimed to be less toxic [16]. 
SERS NPs are based on the Raman effect or inelastic scattering, discovered by C.V. Raman [17]. They are made of Raman-active dyes encapsulating metallic nanoparticles. Each Raman dye emits a unique spectrum, called “flavor”. The metallic cores, such as gold and silver, which create the surface plasmon resonance effect (SPR), can amplify the intrinsic signal of the Raman dye up to 2 1014 of magnitude. Moreover, the sharp peaks of Raman spectra allow easier multiplexing [16, 18, 19]. Due to the scattering effect, any illumination wavelength can produce a Raman spectrum, but the efficiency might vary. Therefore, an optimized single wavelength can be used to excite a variety of flavors. In comparison to other modalities, SERS Raman signals seldom interfere with the environment and background noise. Consequently, SERS NPs can perform a highly sensitive detection platform even at a very low concentration. According to the above benefits of brightness, narrow band, nontoxicity, high sensitivity, and single excitation, SERS NPs are selected as the imaging probe in our study. As they are conjugated with tumor-targeting antibodies, the spectrally distinct Raman signals can serve as indicators for specific biomarkers [20]. By detecting the SERS NPs simultaneously, mixed signals consisting of multiplexed Raman spectra can be captured. To quantitatively differentiate and analyze each component in the mixed signals, a demultiplexing algorithm plays a vital role in calculating the ratios of each flavor [21, 22]. 1.4 Near Infrared (NIR) Imaging In vivo imaging modalities in clinical applications usually rely on deep-penetrating radiation tomography techniques, such as MRI, CT, and PET. However, these methods suffer from hazardous ionizing radiation and poor spatial resolution. In contrast, in vivo fluorescence imaging can provide high spatial resolution and non-hazardous optical radiation in living animals. Despite the advantages of fluorescence imaging, in vivo animal studies and clinical trials are limited by the poor photon penetrating depths because the light in the visible range (400~700 nm) causes substantial light-tissue interaction, such as photon scattering and absorption effects in most biological tissues [23]. In addition, shorter wavelength excitations can induce strong tissue autofluorescence, which might significantly decrease the imaging contrast [24]. Therefore, excitation and emission of light at longer wavelengths can potentially avoid the drawbacks of 3 fluorescence imaging in the visible range. Recently, researchers have shown that imaging contrast agents emitting light in the NIR region (700~1700 nm) can improve imaging penetration depths and retain high spatial resolution. The broadband NIR region can be further divided into two windows of wavelengths: the traditional NIR window (NIR or NIR-I; 700~900 nm) and the second NIR window (SWIR or NIR-II; 1000~1700 nm) [25]. To collect the light in the NIR range, the imaging contrast agents and instruments have to be developed for longer wavelengths [26]. For the NIR-I range, the NIR fluorophores, indocyanine green (ICG; emission ~800 nm), and methylene blue (MB; emission ~700 nm) have been approved by FDA for clinical use [24]. The emission spectrum in the NIR-I range is aligned with the quantum efficiency of silicon-based detectors (up to 900 nm), providing adequate sensitivity and affordable cost. 
In comparison, owing to the advances in chemistry and nanotechnology, artificial fluorophores, such as single-walled carbon nanotubes (SWCNTs) and QDs, are usually used for the NIR-II emission, while the ICG emission spectra also extend to the NIR-II range [27]. To detect the signals in the NIR-II regime, the indium gallium arsenide (InGaAs) photodetectors have been commercialized and applied in imaging applications. Nevertheless, due to the small direct bandgap, relatively high thermal noise and dark current affect the SNR of InGaAs [28]. Apart from the detectors, the optical imaging systems have to be designed for the NIR ranges, considering the aberrations and optical efficiencies. 1.5 Motivation and Research Contribution In this work, a variety of optical instruments have been developed for biomedical imaging in the NIR range, which can potentially improve the tissue penetration depth and prevent the autofluorescence issue. NIR fluorescent dyes and SERS NPs have been used as contrast agents in imaging applications. These optical instruments and their technical details are described in this 4 thesis, including the confocal microscopes and Raman spectrometer, from portable to miniaturized versions. Furthermore, ex vivo and in vivo NIR images using different modalities are shown for biological study and verification. In conclusion, the main contributions of this thesis are the optical designs of instrumentation, imaging processing, and NIR imaging techniques. 1.6 Outline The topics of this dissertation are organized as follows. In Chapter 2, a spectroscopic system for SERS Raman spectra detection is revealed. Different types of SERS NPs are used for the detection of Alzheimer's Disease biomarkers and breast cancer image-guided surgery. Chapter 3 introduces the development and characterization of a vanadium dioxide (VO2) MEMS scanner for the miniaturization of the Raman probe and Lissajous scanning SERS Raman Imaging. Chapter 4 presents the portable dual-axis confocal microscopes, including the point-scan and line-scan systems for fluorescence imaging. In Chapter 5, a thin-film PZT MEMS scanner is integrated into a wide-field macroscope for fluorescence imaging. In Chapter 6, a novel photodetector, Superconducting Nanowire Single-Photon Detector (SNSPD), is used in the point-scan confocal microscope and PZT MEMS scanner-based wide-field fluorescence imaging systems to improve the sensitivity, contrast, and operation range of wavelengths. Finally, Chapter 7 discusses the future works of the imaging systems: advanced VO2 MEMS scanners for miniaturized confocal microscopes, and fluorescence lifetime measurement microscopy using SNSPD. 5 CHAPTER 2: SERS RAMAN SPECTROSCOPY 2.1 SERS Raman Spectroscopic Imaging System 2.1.1 Customized Raman Imaging System A customized Raman spectral imaging system has been developed to collect the Raman signals from specimens and perform a large area scan (10 cm × 10 cm) over a sample. The system consists of three parts: the Raman probe, relay optics, and spectrometer. As shown in Figure 1, the Raman probe is based on a custom-made fiber bundle (Fiberguide Industries, Caldwell, ID, USA) that contains a single-mode fiber (780HP) for illumination and 36 multi-mode fibers (AFS200/220T) for collection. In the distal end of the probe, the fibers are arranged in a hexagonal pattern where the single-mode fiber is located at the center and surrounded by 36 multi-mode fibers. 
A fused silica plano-convex lens (L1, f = 6.83 mm; PLCX-4.0-3.1-UV, CVI Laser Optics, Albuquerque, NM, USA) is mounted in the front of the distal end, collimating the illumination light from the single-mode fiber, and receiving the scattered light into the multi-mode fibers. A fiber-coupled 785 nm laser (iBeam Smart 785, Toptica Photonics, Munich, Germany) is used to illuminate the sample with a 1-mm beam diameter. These multi-mode fibers are aligned in an 8- mm linear array at the proximal end. A custom spectrometer (Kymera 193i-A, Andor Technology, Belfast, Northern Ireland, UK) and a back-illuminated deep-depletion CCD detector (1024 pixels × 256 pixels, pixel size 26 μm × 26 μm; DU920P Bx-DD, Andor Technology, Belfast, Northern Ireland, UK) are used to acquire Raman scattered signals from the linear fiber array. This spectrometer is based on the Czerny- Turner configuration, consisting of a plane mirror, two concave mirrors that collimate and focus the input light, and a motorized diffraction grating that disperses a chromatic light at different angles. This motorized grating can be switched to different line densities (600 or 1200 lines/mm) 6 and rotated to different angles, providing a variable wavelength range. The dispersed light from the grating is projected onto the CCD detector along the axis of 1024 pixels whereas the other axis is used to accumulate light intensity at each wavelength in full vertical binning (FVB) mode. During the acquisition process, the CCD in the deep-cooling mode is cooled down at -90 °C to minimize the dark noise. To maximize coupling efficiency, the numerical aperture and imaging height are designed to match the linear fiber array (NA 0.22, 8 mm) and the spectrometer (NA 0.14, 7 mm) via a pair of relay optics. Therefore, the collimating (L2, f = 100 mm; AC254-100-B-ML, Thorlabs, Newton, NJ, USA) and focusing (L3, f = 80 mm; AC254-80-B-ML, Thorlabs, Newton, NJ, USA ) lenses are selected. The light coming out from the linear array is filtered by a long-pass filter (LPF, 𝜆𝑐 = 830 nm; BLP01-830R-25, Semrock, Rochester, NY, USA), which blocks Rayleigh scattering and allows longer wavelengths (Stokes Raman scattering) to pass through and then focus on the slit (width of 200 μm) of the spectrometer. Since the Raman probe is a point detection device, two digitally-controlled motorized stages (DDS100, Thorlabs, Newton, NJ, USA) are used to move the sample in two dimensions. To image a large area (up to 10 cm × 10 cm), the detection probe is clamped at an optimal height and the 2D stages perform a raster scan over a region of interest. The motion of the stages and the spectral acquisition of the detector are synchronized. The spectra acquired at corresponding positions can be arranged as a 2D mapping based on the raster scan pattern. Spectrometer-Stages synchronization is realized by a custom LabVIEW (2018 64-bit, National Instruments, Austin, TX, USA) program while 2D mapping reconstruction is implemented by MATLAB (2019b, MathWorks, Natick, MA, USA) scripts in post-processing. 7 Figure 1. Schematic of the customized Raman imaging system. (a) The optical diagram of the system setup. A 785 nm laser is used to illuminate the sample. The scattered light is collected by the custom-made Raman probe, delivered by the fiber bundle, and coupled into the spectrometer via the relay optics, where L1: lens in the Raman probe; L2: collimator in the relay optics; L3: condenser in the relay optics; LPF: long-pass filter; SW: switchable mirror in the cage cube. 
The spectrometer contains a rotatable grating, three mirrors (M1: reflection mirror; M2-3: collimating and focusing mirrors), and a back-illuminated deep-depletion CCD. To perform 2D Raman imaging, the sample is translated by a 2D motorized stage. (b) The photograph of the distal and proximal ends of the custom-made fiber bundle. Figure 2. CAD drawing of the customized Raman imaging system. During spectra acquisition, the spectrometer was set to the diffraction grating with the higher groove density (1200 lines/mm, 850 nm blaze), which provides higher dispersion to resolve Raman fingerprints. In addition, the center wavelength was set to 875 nm for the desired spectral range (835 nm ~ 912 nm) by rotating the grating angle (-32.76°). The ability to separate two adjacent Raman peaks is called spectral resolution. In theory, the spectral resolution (δλ) is determined by the spectral range (Δλ), the entrance slit width (Ws), the pixel size of the detector (Wp), and the number of pixels (n), expressed as the following equation [29, 30]:

δλ = RF × (Δλ / n) × ⌈Ws / Wp⌉ (1)

where RF is the resolution factor, determined by the ratio between Ws and Wp. When Ws ≈ Wp, RF is 3. As Ws ≫ 4Wp, RF drops to around 1. For instance, our spectrometer uses a 200 μm slit, a 1024-pixel detector with a pixel size of 26 μm, and a wavelength range from 835 nm to 912 nm. The calculated spectral resolution is around 0.6 nm, close to the value provided by the Andor manufacturer in Figure 3 [31]. Figure 3. Spectral resolution of the custom Raman spectrometer. 2.1.2 Customized LabVIEW Program A custom LabVIEW program was developed to synchronously control the spectrometer, detector, and the two stages. In Figure 4(a), the graphical user interface (GUI) not only integrates the settings of key parameters in the control panel, such as exposure time, acquisition mode, grating type, slit width, range of wavelengths, and scanning area, but also offers a front panel to display acquired spectra and scanning progress in real time. Figure 4. LabVIEW program controlling the spectrometer and the 2D stage. (a) A custom graphical user interface is developed to control the spectrometer and the 2D stage. Exposure time, acquisition mode, grating type, slit width, range of wavelengths, and scanning area can be set in the control panel. Scanning progress and acquired spectra can be displayed in real-time. (b) A state diagram describes the control flow of the spectrometer and the stage. Each state represents a specific action. The scanning process loops over a sequence of states: "Scan" (scanning process handler), "Move" (stage movement), and "Acquire" (spectrum acquisition). Upon completion, acquired spectra are saved in a text file for post-processing. In consideration of system maintainability and scalability, the program is implemented in the Queued State Machine - Producer Consumer (QSM-PC) architecture, which contains two parallel loops executing simultaneously: the producer loop handles events happening in the GUI, while the consumer loop is triggered by the events sent by the producer and then executes the corresponding actions. These events and actions can be classified into different states with certain relations, expressed in a state machine diagram (Figure 4(b)). In the state diagram, the blue states indicate basic operation and acquisition without scanning, while the green ones are involved in the scanning process. The scanning process is controlled by a sequence of states: "Grid" (set the scan area and grid size), "Scan" (scanning process handler), "Move" (move the stage in raster scan), and "Acq." (acquire spectra). Once the grid size and area are set up and the scanning process starts, the "Scan" state manages the process and loops over "Move" and "Acq." sequentially until the process is completed or terminated. Upon completion of the scan, all the acquired spectra and the acquisition settings are saved in an external text file for post-processing in MATLAB.
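Because the program itself is graphical LabVIEW code, it cannot be reproduced verbatim here; the following MATLAB sketch collapses the "Scan", "Move", and "Acq." states into a plain nested loop purely to illustrate the sequence of operations. The stage and spectrometer calls, the grid size, and the file name are hypothetical stand-ins rather than the actual instrument drivers.

```matlab
% Illustrative sketch of the scan sequence (not the actual LabVIEW implementation).
% moveStage and acquireSpec are hypothetical stubs standing in for instrument drivers.
moveStage   = @(x, y) fprintf('Move stage to (%.1f, %.1f) mm\n', x, y);  % stage stub
acquireSpec = @() rand(1024, 1);                  % stub: one spectrum (1024 pixels)

stepSize = 0.5;                                   % mm, raster step size
nx = 5; ny = 3;                                   % small example grid (a real scan may use 97 x 19)
spectra = zeros(1024, nx * ny);                   % acquisitions stored as a flattened 1D sequence

idx = 0;
for iy = 1:ny                                     % "Scan" state: iterate over the raster grid
    for ix = 1:nx
        moveStage((ix - 1) * stepSize, (iy - 1) * stepSize);   % "Move" state
        idx = idx + 1;
        spectra(:, idx) = acquireSpec();                        % "Acq." state
    end
end
save('scan_spectra.txt', 'spectra', '-ascii');    % saved for post-processing in MATLAB
```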
2.1.3 Demultiplexing Algorithm and Imaging Reconstruction As ratiometric analysis requires a mixture of multiple SERS NPs applied to the specimen, a measured spectrum is the superposition of these SERS spectra. To unmix the different Raman components from the measured spectrum, a demultiplexing algorithm based on former research is developed to quantify each component existing in the mixture [21, 32, 33]. The amount of each spectral component is quantified and denoted as a "weight", representing a decimal multiple of a known "reference spectrum". Assuming that a mixed spectrum is a linear combination of multiple reference spectra and that the spectral crosstalk between reference spectra is minimized, the demultiplexing algorithm employs the direct classical least squares (DCLS) method to solve for the weights. To compensate for variations in terms of temporal spectral shift and sample heterogeneity, the algorithm also incorporates principal component analysis (PCA) on the background spectra of the specimen. The matrix representation of this algorithm is expressed in the following equation:

M = R · W + E (2)

where:
M = measured spectral data in column-major order
R = known reference spectral dataset in column-major order
W = calculated weights in row-major order
E = regression errors in column-major order

For a single acquisition, the measured spectral data M (1024 × 1) has 1024 rows (the number of pixels along the dispersive direction) and one column (single acquisition). The reference matrix R (1024 × n) contains n column vectors of reference spectral components. The computed weight matrix W (n × 1) has n rows (the number of known reference components) and one column (single acquisition). For multiple measurements, these matrices are expanded to M (1024 × m) and W (n × m), where m is the number of acquisitions. Regardless of the number of measurements, the reference matrix keeps the same dimensions. The accuracy of this algorithm relies heavily on the coverage of the reference dataset R. To prepare a comprehensive reference dataset, any substance that can emit Raman scattering signals is collected in the reference matrix, including the spectra of the SERS NPs and the background spectra from the sample and imaging system. In addition, a third-order polynomial regression (Poly) is incorporated into the reference matrix for baseline removal. The detailed composition of the reference matrix R can be written as:

R = [S, Bkgd, PC(Bkgd), Poly] (3)

where:
S = purified spectra of reference Raman materials, such as SERS NPs
Bkgd = averaged spectra of the scanned background datasets
PC(Bkgd) = principal components of the scanned background datasets
Poly = 3rd-order polynomial terms for baseline removal
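The MATLAB sketch below shows how Equations (2) and (3) can be assembled and solved in the least-squares sense. All variable names, array sizes, and the random placeholder data are illustrative assumptions; in practice, S, the background spectra, and M would be loaded from the saved reference and scan acquisitions.

```matlab
% Minimal sketch of the DCLS demultiplexing step (illustrative data and names).
nPix = 1024;                                    % pixels along the dispersive axis
S    = rand(nPix, 3);                           % purified reference spectra (e.g., three flavors)
bkgd = rand(nPix, 200);                         % background spectra scanned before staining
M    = rand(nPix, 97 * 19);                     % measured spectra, one column per scan position

% Build the reference matrix R = [S, Bkgd, PC(Bkgd), Poly] of Equation (3)
bkgdMean  = mean(bkgd, 2);                      % averaged background spectrum (Bkgd)
[U, ~, ~] = svd(bkgd - bkgdMean, 'econ');       % principal components of the background
PCs  = U(:, 1:7);                               % keep the first seven components
x    = linspace(-1, 1, nPix)';
Poly = [ones(nPix, 1), x, x.^2, x.^3];          % 3rd-order polynomial baseline terms
R    = [S, bkgdMean, PCs, Poly];

% Solve M = R*W + E in the least-squares sense and keep the SERS NP weights
W  = R \ M;                                     % weight matrix, one column per acquisition
WS = W(1:size(S, 2), :);                        % first k rows: weights of the NP flavors

% Reshape the flattened acquisitions into 2D Raman maps (one image per flavor)
nx = 97; ny = 19;                               % scan grid recorded with the acquisitions
mapA = reshape(WS(1, :), [nx, ny])';            % orientation depends on the raster order
```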
In our study, the reference spectra of the SERS NPs (S) were generated through a multi-step protocol. First, Raman spectra of high-concentration SERS NPs were acquired. Second, the background spectrum of the solvent (deionized water) and the container for the SERS NP solution was measured under the same acquisition settings. Last, the solvent background spectrum was subtracted from the high-concentration SERS spectra to create the reference dataset, which was then stored in the first few columns of the reference matrix. To further reduce the noise, each reference acquisition was averaged over 25 kinetic frames. The background datasets of the specimen were prepared by scanning the sample prior to SERS NP staining. The spectra scanned at different positions were averaged (Bkgd). However, the averaged spectrum is not sufficient to describe the complicated background properties due to sample heterogeneity, signal variation, and noise. Although all the scanned background spectra could be included in the reference matrix, such a large dataset would dramatically increase the computation time needed to solve the matrix. Principal component analysis (PCA) is commonly used to reduce matrix dimensions by projecting a large dataset onto lower dimensions with orthogonal principal components. By applying PCA to the scanned background spectra (PC(Bkgd)), the high-dimensional dataset can be shrunk into only a few significant principal components (the first seven principal components in our algorithm) with minimal information loss. Afterward, the equation M = R · W + E can be solved by the DCLS method in MATLAB, minimizing the regression errors. The results stored in the weight matrix W are arranged in the same order as the reference components, as follows:

W = [W_S; W_Bkgd; W_PC(Bkgd); W_Poly] (4)

where:
W_S = weights of the reference Raman materials
W_Bkgd = weights of the averaged background spectra
W_PC(Bkgd) = weights of the principal components of the background datasets
W_Poly = weights of the 3rd-order polynomial

Among these calculated weight components, the weights of the reference Raman materials W_S are used for image reconstruction of the SERS NPs, while the others are side products of the regression. For instance, when k types of SERS NPs are included in the reference dataset and m positions are scanned, W_S (k × m) (the first k rows of the weight matrix) are used to reconstruct the Raman mapping images. Each row of the weight matrix represents an individual channel from the demultiplexing results, whereas the m columns indicate the "flattened" scanning positions because the spectral measurements are saved in a 1D sequence. With the scanning grid size and 2D spatial information recorded in the text file, the flattened m acquisitions can be reshaped into 2D images with i by j pixels, where m = i × j. Consequently, W_S (k × m) can be reconstructed into k channels of 2D Raman images. Since each principal component indicates different substances, the weights of the principal components of the background W_PC(Bkgd) can be utilized to differentiate background areas and create a topographic map. For example, the strongest principal component usually represents the major background contribution (the intrinsic signals from the sample or substrate). The Raman reconstruction image of the first principal component can be used to define the boundary of the sample area. Moreover, the Raman probe is a distance-sensitive device whose collection efficiency varies with the working distance from the sample to the device. It is assumed that the intensity level of the sample background is dominated by the distance variation. If the distance-efficiency relation is a monotonic function within a specific range, the intensity level can be converted into the working distance. Therefore, a 2D intensity image of the first principal component can be transformed into a 3D topography (see Section 2.2.4).
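As a concrete illustration of this conversion, the MATLAB sketch below inverts an assumed monotonic efficiency-versus-working-distance calibration curve to turn a first-principal-component weight map into a smoothed 3D topography. The calibration curve, the weight map, and the smoothing parameters are hypothetical placeholders, not measured system values.

```matlab
% Illustrative sketch: convert background-weight intensities into working distance
% and render a topography. Calibration data and the weight map are placeholders.
wdCal  = 9:0.5:20;                                % calibration working distances (mm)
effCal = 1.5 * exp(-(wdCal - 9) / 6);             % assumed monotonic efficiency curve
wMap   = 0.4 + 0.6 * rand(19, 97);                % placeholder 1st-PC background weight map

effMap  = 1.5 * wMap / max(wMap(:));              % calibrate: highest point assumed at 9 mm
effMap  = min(max(effMap, min(effCal)), max(effCal));     % clamp to the calibrated range
distMap = interp1(flip(effCal), flip(wdCal), effMap);     % invert the one-to-one relation
distMap = imgaussfilt(distMap, 2);                % 2D Gaussian smoothing (Image Processing Toolbox)
surf(distMap); zlabel('Working distance (mm)');   % visualize as a 3D topography
```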
2.1.4 Fluorescence Imaging Reconstruction Our custom spectrometer can not only measure Raman spectra but also detect fluorescence signals. Compared to Raman spectra, fluorescence signals have a stronger intensity and a broadband spectrum without sharp distinct peaks. Due to this relatively strong intensity, the spectrometer can acquire fluorescence spectra with a good signal-to-noise ratio (SNR) using a shorter exposure time (<0.1 second). It is important not to saturate the detector during fluorescence acquisition. Moreover, the fluorescence signal exhibits a broadband background on the spectral curve, as shown in Figure 5(a). To quantify the intensity of fluorescence, this broadband signal does not require a complicated demultiplexing algorithm such as the DCLS. Instead, by simply calculating the area under the spectral curve, the fluorescence spectrum can be converted into a single value. Figure 5. Fluorescence spectrum and image of a breast tumor. (a) Fluorescence spectrum. (b) Fluorescence image reconstructed from spectra. (c) Fluorescence image acquired by the Pearl System as the ground truth. Variations of fluorescence intensity may come from the sample and the working distance. Assuming the sample is flat and the effect of working distance is negligible, most of the variation is due to the fluorophore concentration on the sample. Based on this concept, it is feasible to create fluorescence mapping images with our custom spectrometer. By scanning the fluorescent sample, intensities at each position can be reconstructed into an image (Figure 5(b)) using the mapping algorithm discussed in Section 2.1.3. 2.2 SERS Nanoparticles for Alzheimer's Disease Biomarker Detection 2.2.1 Introduction to Alzheimer's Disease (AD) Alzheimer's disease (AD) is the most common form of dementia [34]. In 2021, approximately 6.2 million people older than age 65 were living with AD in the United States. By 2060, the number of people suffering from AD is estimated to reach 13.8 million. The cost of health care and long-term care for AD patients will increase from $355 billion in 2021 to $1.1 trillion in 2050 [35]. Patients with AD experience memory loss, language problems, and difficulties completing normal daily tasks. As the disease progresses, many neurons in AD patients' brains begin to stop functioning, disconnect from other neurons, and die. The damage can become widespread and irreversible, eventually leading to significant brain atrophy [35-37]. Although the pathological causes of AD are still unclear, abnormal neurofibrillary tangles (tau tangles) and clumps (amyloid-beta, Aβ plaques) are hallmarks of AD [38, 39]. Several noninvasive neuroimaging techniques have been developed for the diagnosis of AD, such as magnetic resonance imaging (MRI) [40, 41], computed tomography (CT) [42], and positron emission tomography (PET) [43]. MRI and CT usually provide anatomical information for the detection of brain shrinkage. In addition, functional MRI (fMRI) [44-46] and fluorodeoxyglucose PET (FDG-PET) [47, 48] can be employed to measure activities in the brain. For instance, FDG-PET can identify the usage of glucose in the brain regions that are related to memory and problem-solving.
Decreased glucose usage is a possible indicator of AD. Furthermore, by applying radiotracers targeting AD-related biomarkers: Aβ plaques or tau tangles, molecular imaging offers a new approach to monitoring disease progressions, such as Aβ-PET and tau-PET [49-52]. However, the high cost, low contrast, and limited availability of radiotracers are significant drawbacks for PET-based methods [53]. Apart from brain imaging, the levels of Aβ and tau can be examined through cerebrospinal fluid (CSF), a fluid that supplies nutrients and chemical compounds to the brain [54, 55]. Since CSF is usually obtained by a lumbar puncture, i.e., inserting a thin needle into the gap between the bones of the spine, it is impossible to determine the distribution map of biomarkers in the brain through the CSF test. Raman spectroscopy has been widely used to analyze chemical compositions and molecular structures of specimens. Raman scattering effect, discovered by C.V. Raman, is inelastic scattering due to molecular vibrations, causing changes in the energy of scattered light [17]. The energy change is presented as Raman shift, a unique fingerprint for each molecular. Several studies have demonstrated the capability of Raman spectroscopy in AD diagnosis by detecting AD biomarkers [56]. Nevertheless, the weak Raman scattering light requires a long acquisition time and limits the practical application of AD early detection. The limitations of conventional Raman spectroscopy have been improved by surface- enhanced Raman scattering (SERS). As molecules are absorbed onto rough metal surfaces, such as silver or gold, the Raman scattering intensity can be enhanced by 1010 to 1014 folds through the surface plasmon resonance effect [57, 58]. SERS platforms and SERS nanoparticles (SERS NPs) have been developed for AD diagnosis in CSF assays and brain slices [59-61]. SERS NPs are made of metallic cores coated with Raman active dyes as contrast agents in imaging applications [62]. 17 Moreover, SERS NPs conjugated with antibodies provide specific targeting to biomarkers [20, 63]. By monitoring the intensities of the SERS spectra, antibody-conjugated SERS NPs can be used to assess the levels of biomarkers in specimens. In this research, we propose a novel approach, using SERS NPs to detect AD-associated tau protein in brain tissues. One potential concern for detection is the non-specific binding of probes by the tissues leading to false positive results. To overcome this hurdle, we envision using a tau- targeting SERS NP in combination with a control SERS NP can enable the ratiometric analysis to reduce the complications due to nonspecific binding and tissue heterogeneity. In Figure 6(a), functionalized NPs with anti-Tau antibodies bind to tau protein more than the control NPs in the tau-rich part of the AD brain, whereas both NPs should have a similar distribution in the normal region without much tau deposits. Thus, the ratio of SERS signals of targeting NP over control NP (targeting/control) should be higher in the tau-rich AD brain. Because the ratiometric analysis requires at least two SERS NPs, the measured spectra are mixed, which need to be demultiplexed into individual SERS signals for ratio calculation. Additionally, we report a custom Raman spectroscopic system, which can acquire SERS spectra across a whole mouse brain FOV (15 × 16 mm2). Traditional Raman microscopes typically have a small field of view (FOV < 0.5 × 0.5 mm2). 
The ability to image the whole brain within one scan can significantly improve the speed of detection. With our custom scope, the acquired spectra are converted into ratios and mapped onto an image of the brain. The overview of the custom Raman imaging system and the experimental procedure is shown in Figure 6(b). (1) A mouse brain with AD biomarkers is stained with a mixture of targeting and control SERS NPs. (2) The custom Raman spectroscopic imaging system collects the SERS spectra across the whole mouse brain. (3) Acquired SERS spectra are separated into individual channels using the demultiplexing algorithm. (4) Ratios (targeting/control) are calculated and reconstructed in 2D images. The presence of AD-related biomarkers can be determined by ratiometric analysis. In this study, a normal mouse brain injected with tau and Aβ protein was used to mimic a brain affected by Alzheimer's disease and was then subjected to the Raman imaging procedure. After Raman imaging, the brain was submitted for immunohistochemistry (IHC) staining of tau, a gold standard for histological biomarker detection. The Raman ratiometric images are compared with the IHC images of tau for validation. Figure 6. Concept of Alzheimer's disease detection using SERS NPs. (a) Schematic of a normal neuron and an Alzheimer's disease neuron containing tangled tau protein and Aβ plaques. Tau-targeting SERS NPs bind to tau protein, while the control SERS NPs do not. (b) Experimental procedure to detect tau in mouse brain using SERS NPs. 1) A mixture of targeting and nontargeting SERS NPs is applied to the mouse brain. 2) A custom Raman spectroscopic imaging system is used to collect the SERS spectra from the mouse brain. 3) Acquired SERS spectra are demultiplexed into individual channels for ratiometric analysis. 4) Ratios are calculated and reconstructed in 2D images to determine the presence of biomarkers. 2.2.2 Characterization of SERS NPs Results Three flavors of SERS NPs, NP-A, NP-B, and NP-C, were utilized in our experiments (NP-A: C17-785-A-NC-DIH-50-1; NP-B: C17-785-B-NC-DIH-50-1; NP-C: C17-785-C-NC-DIH-50-1, Nanopartz Inc., Loveland, CO, USA). These SERS NPs are made of gold nanorods and coated with three different types of Raman-active layers, as shown in Figure 7(a). The reference spectra of the SERS NPs were measured with 1 nM stock solutions in a 96-well plate. The samples were illuminated with a 33 mW, 785 nm laser and acquired with an exposure time of 0.2 seconds and an average of 25 frames to enhance the signal-to-noise ratio (SNR). The purified reference spectra are presented in Figure 7(b). It should be noted that the measured spectra have a cut-on wavenumber of around 900 cm⁻¹ due to the 830 nm long-pass filter. The x-axis of wavelengths can be converted to Raman shifts in wavenumbers using the following equation:

wavenumber [cm⁻¹] = 10⁷ × (1 / λex [nm] − 1 / λ [nm]) (5)

where λex is the laser excitation wavelength and λ is the measured wavelength. A TEM image of the NP-B gold nanorods is shown in Figure 7(c). The size of the nanorods was measured to be approximately 60 nm by 120 nm. The limit of detection (LOD) of these SERS NPs was characterized by linear regression to determine the lowest sample concentration that can be measured with accuracy. The SERS NPs were diluted in deionized water from 200 pM to 0.5 pM. The spectra of the diluted solutions were converted into weights using the DCLS demultiplexing algorithm and then into measured concentrations with a pre-calibration.
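A minimal MATLAB sketch of this dilution-series analysis is given below. The concentrations, noise level, and calibration are hypothetical placeholders, and the ±50% acceptance criterion it applies is a simplified form of the one defined in the following paragraph.

```matlab
% Illustrative sketch of the LOD estimation from a dilution series (placeholder data).
concTrue = [200 100 50 20 10 5 2 1 0.5];                   % prepared concentrations (pM)
weights  = 0.01 * concTrue + 0.03 * randn(size(concTrue)); % demultiplexed DCLS weights (simulated)

calib    = polyfit(concTrue, weights, 1);                  % pre-calibration: weight vs. concentration
concMeas = (weights - calib(2)) / calib(1);                % convert weights into measured concentrations

relErr  = abs(concMeas - concTrue) ./ concTrue;            % deviation from the prepared concentration
inRange = relErr <= 0.5;                                   % simplified +/-50% acceptance criterion
LOD     = min(concTrue(inRange));                          % lowest concentration still within range
fprintf('Estimated LOD: %.1f pM\n', LOD);
```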
The linearity results are plotted in Figure 7(d-f). Ideally, the measured concentrations should have a linear relationship with sample concentrations. However, if the spectrum from the SERS NP is too weak, the background signals and noise will dominate the measured spectrum. As a result, the demultiplexing algorithm cannot calculate the weight 20 accurately. Here, the LOD is defined as the lowest measured concentration within the ±50% error range of the fitting line. The LODs of NP-A, NP-B, and NP-C are estimated at 5 pM, 0.5 pM, and 2 pM, respectively. The LOD majorly relates to the maximum intensity of the SERS NPs. NP-B with the brightest spectrum provides the lowest limit (best), whereas NP-A has the most distinguishable peaks but with the highest LOD (worst). Figure 7. Characterization of the SERS NPs.(a) Schematic of the pure and functionalized SERS NPs (b) Measured spectra of the three types of SERS NPs: NP-A, NP-B, and NP-C with a concentration of 1 nM. (c) TEM image of NP-B nanorods with a size of 60 nm × 120 nm. (d-f) Linearity of NP-A, NP-B, and NP- C. The limit of detection is defined as the lowest measured concentration within the ±50% error range of the fitting line. The limits for NP-A, NP-B, and NP-C are 5 pM, 0.5 pM, and 2 pM, respectively. To enable biomarker targeting ability, NP-B was functionalized with anti-Tau antibody (Purified anti-Tau, 210-230 Antibody, BioLegend, San Diego, CA, USA) presumably through hydrophobic or sulfhydryl-gold interactions (NP-B-anti-Tau: C17-785-B-Ab-DIH-50-1, Nanopartz Inc., Loveland, CO, USA). The SERS spectra of the functionalized NPs are identical to 21 those without antibody functionalization as the negative control. The negative control NPs were used for system characterization and phantom preparation, while the functionalized NPs were employed for targeting experiments on mouse tissues. The binding efficacy of the anti-Tau antibody has been verified using ELISA (Enzyme-Linked Immunosorbent Assay). Three types of proteins, BSA (Bovine Serum Albumin, A2153, Sigma- Aldrich, St. Louis, MO, USA), Tau (Human Tau-441, #TAU-H51H3, ACROBiosystems, Newark, DE, USA), and Aβ (Amyloid β-Peptide, #052487, GL Biochem, Shanghai, China) were employed to test the specificity of detection. BSA (200 μL, 5% BSA in PBS), Tau (20 μL, 1 mg/mL), and Aβ (20 μL, 1 mg/mL) were incubated on the bottom of a 96-well plate. The anti-Tau antibody (1 mL, 2 μg/mL) was added to each well. Following the ELISA protocol, the plate was measured by an automated plate reader (SpectraMax M3, Molecular Devices, San Jose, CA, USA). The ELISA results were shown in Figure 8(a). The anti-Tau antibody exhibits strong binding to Tau protein confirming its selectivity. Figure 8. Results of binding efficacy to BSA, Tau, and Aβ proteins. (a) ELISA results of anti-Tau antibody binding efficacy illustrate specific binding to Tau protein rather than BSA or Aβ. (b) Raman results of uncoated NP-B (negative control) show no selectivity among the three proteins. (c) Raman results of NP-B- anti-Tau show selective binding to Tau protein. ***p-value < 0.001. For SERS Raman detection, two 96-well plates were incubated with the proteins following the aforementioned protocol. NP-B and NP-B-anti-Tau (100 μL, 250 pM for each) were added to 22 the well plates at room temperature for two hours and then washed three times with PBS. 
The plates were then measured by a commercial Raman system (WP-785 and RP-785, Wasatch Photonics, Morrisville, NC, USA) with a focusing illumination beam of 25 mW, 785 nm Laser. The spectra were acquired with an exposure time of 0.5 seconds, 25 frames averaging, and calculated as weights. Although the unfunctionalized NP-B has nonspecific binding to all the proteins (Figure 8(b)), the functionalized NP-B-anti-Tau reveals specific binding to Tau protein, consistent with ELISA results (Figure 8(c)). The error bars represent one standard deviation from the means. A two-sample t-test in MATLAB was performed to determine the selectivity of NP-B- anti-Tau using Raman detection. 2.2.3 SERS Imaging of Paper Phantom To validate and demonstrate the demultiplexing capability of NP-A, NP-B, and NP-C, the text “-MSU IQ-” was written on a paper phantom using ink of SERS NPs. Each letter was written with different mixed solutions where the SERS NPs were diluted into a concentration of 20 pM for each “flavor” as shown in Figure 9(a). Once the NP solution dried out, the paper phantom was scanned by our custom Raman imaging system. Due to the 1 mm beam size, a step size of 0.5 mm was used to scan the sample to meet the Nyquist–Shannon sampling theorem. The paper phantom (48 mm × 9 mm) was covered by 97 × 19 sampling points. With an exposure time of 0.2 seconds, the entire scanning time was about 12 minutes. Acquired spectra were processed into weights by the DCLS demultiplexing algorithm. Computed 1D weights of all three NPs were displayed in Figure 9(b). The 1D weights are split into individual channels for each flavor and reconstructed into grayscale 2D images in Figure 9(c). The grayscale images of NP-A, NP-B, and NP-C were merged to R, G, B layers, respectively, as a pseudo-color overlay image. 23 Figure 9. Raman imaging results of a paper phantom. (a) The table of the letters written by different mixtures, 20 pM for each unit of SERS NP. (b) The weights of SERS NPs calculated by the demultiplexing algorithm. The x-axis represents the sequence of acquisition samples. (c) The reconstructed 2D images of the SERS NPs in grayscale and pseudo-color. The pseudo-color overlay image is composited by merging individual channels in RGB layers. Scale bar = 5 mm. 2.2.4 Ex vivo SERS Imaging of Mouse Brain To verify the feasibility of ex vivo SERS imaging on tissues without influencing the tissue background spectra, negative control NPs were topically administrated on a mouse brain. First, a fresh mouse brain harvested from a wild-type mouse was scanned under the custom Raman imaging system for background spectra acquisition. Next, four different mixtures of SERS NPs, NP A+B, B+C, C+A, and A+B+C (equimolar 20 pM for each flavor) were topically applied to four positions respectively (3 μL for each position) on the cortex of the mouse brain (Figure 10(a)). After the staining, the brain with SERS NPs was scanned with the acquisition settings of 0.2 seconds exposure time and 0.5 mm step size. Reconstructed Raman images are presented in Figure 10(b). The four spots on the Raman images show a similar distribution and intensities of the mixed 24 solutions on the mouse brain. This experiment has successfully demonstrated SERS demultiplexing and reconstructed images on a mouse brain. The background spectra of the mouse brain can also provide distance information for 3D topography. Despite a collimated illumination beam, the slightly converging collection beam causes collection efficiency to vary with working distance (WD). 
The efficiency-to-working distance curve was characterized by measuring the weights of a SERS NP paper phantom (500 pM) at different distances in Figure 10(c). By normalizing the weights to 1 at distance = 0 mm, the efficiency first reaches a maximum value of 1.5 at distance = 9 mm and then declines as the distance increases. Since the efficiency curve is not a strictly monotonic function of WD, the inverse function is not always a one-to-one mapping. When selecting either WD ≤ 9 mm or WD ≥ 9 mm, we can find a one-to-one efficiency-to-distance relationship. To satisfy the requirement, WD ≥ 9 mm was used in our experiments by making the distance from the probe to the highest point of the sample equal to 9 mm. Therefore, the weights of the background spectra can be converted into distances using the characterization curve. To obtain a topography from a mouse brain, which should be assumed spectrally homogeneous, the weights of the background spectra were extracted by the first principal component. After calibration at a known distance (e.g., at 9 mm), the weights were turned into distances within proper boundary conditions. For instance, upper and lower limits are between 9 mm and the distance to the substrate. Hereafter, the distances were reconstructed into a 2D distance mapping and smoothed by 2D gaussian filtering. Finally, the 2D distance mapping of the mouse brain was visualized in a 3D topography, as shown in Figure 10(d). 25 Figure 10. Raman imaging of the mouse brain with topically SERS staining. (a) Photograph of a fresh mouse brain with different mixtures of SERS NPs. Equimolar mixture (20 pM for each) of A+B, B+C, C+A, and A+B+C were topically applied at positions 1, 2, 3, and 4 respectively. (b) Reconstructed Raman images of individual channels and an overlay image in pseudo-color. (c) Efficiency-to- working distance characterization of the Raman probe. When the probe operates within the 1-to-1 region (WD ≥ 9 mm), the efficiency can be converted into the working distance for topography generation. (d) 3D topography of the brain tissue. Scale bar = 5 mm. 2.2.5 Ex vivo Ratiometric Imaging of Protein-Injected Mouse Brain After the SERS image reconstruction on a mouse brain is verified, we design an experiment using functionalized NPs to detect AD biomarkers on a mouse brain. A normal mouse brain harvested from a wild-type mouse (C57BL/6) was injected with Tau and Aβ proteins (the same ones in Section 2.2.2) to mimic Alzheimer’s brains. Right after dissection, the brain was pre-fixed in 10% neutral buffered formalin for one day to solidify the brain structure and block other possible reactive antigens. 26 Figure 11(a) illustrates the sample preparation and imaging procedure. After prefixation, the brain was first washed in PBS for 30 seconds and then poked with two holes using a syringe (Figure 11(b)). Tau and Aβ protein (20 μL, 0.2 mg/mL for each) were separately injected into the two holes (at P1 and P2, respectively) at a depth of 3~5 mm and incubated at 4 ℃ overnight. The excess proteins were removed by rinsing the brain in PBS for 30 seconds. Second, the background spectra acquisition was performed to collect the intrinsic spectra from the brain, Tau, and Aβ. Third, an equimolar mixture of NP-B-anti-Tau (targeting) and NP-C (control) (200 pM for each flavor) was topically applied to the brain for 10 minutes. Forth, to remove unbound NPs, the brain was rinsed in DI water for 30 seconds. 
Last, the mouse brain with SERS NPs was scanned by our custom Raman system with 31 × 33 pixels (FOV 15 × 16 mm2). Using an exposure time of 0.2 seconds, the entire scanning time was about 6 minutes. The reconstructed Raman images were displayed and analyzed using “ratiometric analysis” in Figure 11(c). Ideally, in Figure 11(c), the NP-C channel should show similar intensity across the brain. Nevertheless, NP-C reveals strong intensity at P1, which could be due to its nonspecific binding to Tau proteins. This nonspecific binding effect could be eliminated by the ratiometric analysis, which calculates the ratio of targeting/control by dividing the weights of NP-B-anti-Tau from the weights of NP-C, as shown in the Ratio B/C channel. Note that appropriate thresholds based on the LOD are selected for both numerator and denominator, trimming the extreme values (e.g., infinity or negative) to zero. After the division, P1 showed a prominent ratio of NP-B; in contrast, the signals at P2 are almost canceled out through the ratio calculation. P3 is the region without any protein injection, which as expected did not show any signals as another negative control. The quantitative results of the three Raman images are shown in Figure 11(d) for comparison. Statistical significance was computed by a two-sample t-test in MATLAB and p-values < 0.001 27 were obtained. The error bars denote one standard deviation from the mean values. In summary, the ratiometric analysis successfully enhances the contrast and removes the nonspecific binding. Figure 11. Ex vivo Raman imaging of the protein-injected mouse brain. A normal brain was injected with Tau and Aβ to mimic Alzheimer’s brain. (a) Protocol of sample preparation and Raman imaging process. (b) Photograph of the protein- injected mouse brain. Tau and Aβ protein (20 μL, 0.2 mg/mL for each) is injected into the brain at P1 and P2 respectively. (c) Reconstructed Raman images of NP- B-anti-Tau, NP-C, and ratiometric results. (d) Intensity results at P1(Tau-injection), P2(Aβ-injection), and P3(non-injection) of the Raman images. (e) H&E and IHC images to validate the Raman results. ***p-value < 0.001. (f) Zoom-in IHC images of Tau at P1 and P2. Scale bars in (b, e) and (f) = 2 mm and 50 μm, respectively. 28 Histological analysis is the gold standard to validate where Tau protein is located in the mouse brain. After completing the imaging experiments, the mouse brain tissue was fixed in 10% neutral buffered formalin at room temperature for 48 hours and submitted for histopathology to make Formalin-Fixed Paraffin-Embedded (FFPE) tissue blocks. To correlate with the Raman imaging plane, the mouse brain was embedded in the block with the top of the brain exposed during microtome sectioning. Then, this FFPE block was cut into 4 μm-thick sections for regular H&E (hematoxylin and eosin) and IHC (immunohistochemistry) of Tau staining. As we injected proteins into two holes with a depth of 1 mm, the best depth for protein incubation could vary. Thus, to find the strongest Tau staining depth for IHC of Tau, the mouse brain was cut into 20 sections in horizontal planes from the top 50 to 1000 μm depth (every 50 μm). Moreover, an additional slice was collected for H&E at 1050 μm depth. The same anti-Tau antibody in Section 2.2.2 was employed as the primary antibody of the IHC staining. After the 20 IHC slides at various depths were investigated, the maximum intensity of Tau staining was found around 700~800 μm depth. 
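As a compact reference for the ratiometric step described above, the MATLAB sketch below divides the targeting (NP-B-anti-Tau) weight map by the control (NP-C) weight map, with LOD-based thresholds so that weak pixels do not produce infinite or negative ratios. The weight maps and threshold values are placeholders for illustration, not the experimental data.

% Ratiometric analysis sketch (placeholder data; 31 x 33 pixels as in the brain scan).
w_B = rand(31, 33);                  % demultiplexed weights of NP-B-anti-Tau (targeting)
w_C = rand(31, 33) + 0.1;            % demultiplexed weights of NP-C (control)
thr_B = 0.05;  thr_C = 0.05;         % example thresholds chosen from the LODs
valid = (w_B > thr_B) & (w_C > thr_C);
ratio = zeros(size(w_B));
ratio(valid) = w_B(valid) ./ w_C(valid);   % Ratio B/C channel; out-of-range pixels stay zero
imagesc(ratio); axis image; colorbar;      % reconstructed ratiometric image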
The H&E and IHC images are investigated using an upright microscope (Eclipse Ci, Nikon Corporation, Tokyo, Japan). The images of the whole brain taken by a 10X objective (Nikon Plan 10X, NA 0.25, WD 10.5; MRL00102) are displayed in Figure 11(e). The zoom-in views of the IHC images at P1 and P2 taken by a 40X objective (Nikon Plan 40X, NA 0.65, WD 0.56 / 0.17; MRL00402) are shown in Figure 11(f). At P1, Tau protein only stayed on the periphery around the hole, providing a relatively high concentration of protein for the SERS NPs to bind to. The short penetration depth of Tau protein can be explained by the pre-fixation of the mouse brain. In comparison, the Tau staining doesn’t appear in any other regions in the brain, such as P2 with Aβ injection. The positive area on the IHC image is consistent with the Raman ratiometric image. 29 All the animal experiments were performed in accordance with the guidelines approved by the Institutional Animal Care & Use Committee at the Michigan State University (IACUC, Protocol #PROTO202100095). 2.2.6 Conclusion of SERS NPs for Alzheimer’s Disease Detection This research presents a novel methodology for Alzheimer’s disease biomarker detection, using functionalized SERS NPs and ratiometric Raman imaging. The proposed approach, including the optical system, control program, SERS NPs, demultiplexing algorithm, and image reconstruction, facilitates accurate detection of Tau protein in a brain mimicking AD. The custom- made Raman spectroscopic imaging system has been built with a point detection probe and two stages that can scan the samples in two dimensions, providing a large FOV (15 × 16 mm2) within 6 minutes. Three different flavors of SERS NPs (NP-A, NP-B, NP-C) have been calibrated and their detection limits can reach 5 pM, 0.5 pM, and 2 pM, respectively. Various SERS NPs mixtures on the paper phantom and mouse brain have been successfully demultiplexed and reconstructed into images. The targeting capability of NP-B-anti-Tau has been validated by Raman spectroscopy, specifically binding to Tau protein instead of BSA and Aβ protein. A mixture of NP-B-anti-Tau and NP-C was topically applied onto a normal mouse brain modified with Tau and Aβ protein. The ratiometric Raman image of NP-B-anti-Tau/NP-C on the brain is consistent with the IHC images of Tau protein. The ultimate goal of this research will be to detect and image the AD-associated biomarkers, majorly Tau and Aβ protein, in the Alzheimer’s brain using SERS NPs. Although the SERS NPs have been applied only for the detection of Tau protein in the current study, we have already demonstrated the potential to detect multiple SERS NPs using the demultiplexing algorithm. In the future, once SERS NPs conjugated with anti-Aβ antibodies are available, we can 30 simultaneously obtain ratiometric Raman images for both Tau and Aβ by applying a mixture of NP-A-anti-Aβ, NP-B-anti-Tau, and NP-C. Consequently, this strategy could be employed to monitor the distribution of the biomarkers spatially and temporally in AD brains. Furthermore, genetically modified animal models of Alzheimer’s disease could be used for in vivo animal experiments [64, 65]. It will be interesting to investigate the development of different biomarkers inside living animals’ AD brains with the disease progression. This SERS imaging approach will help researchers understand the underlying mechanism of AD, diagnose the disease in the early stage, and develop potential therapeutic interventions. 
2.3 SERS Nanoparticles for Breast Cancer Image-Guided Surgery 2.3.1 Introduction to Breast Cancer Breast cancer is a disease in which cells in breast tissue start to grow out of control, divide rapidly, accumulate, and eventually form a lump or a mass. The types of breast cancers are either non-invasive (carcinoma in situ) which will remain at the same location, or invasive which will spread to the surrounding tissues. In terms of invasive breast cancer, invasive lobular carcinoma which begins from the milk-producing lobules, and invasive ductal carcinoma which starts from the milk-transporting ducts are the most common types of breast cancer [66]. Invasive cancer cells can move from the breast to other parts of the body through the bloodstream or the lymph system in a process called metastasis. Breast cancer can be diagnosed through many approaches, such as mammography, breast ultrasound, breast MRI and biopsy. The most common staging system is the tumor-node-metastasis (TNM) staging system which can help determine the severity of breast cancer and suitable treatments [67]. The TNM staging system ranks breast cancer from 0, I to IV. For example, a smaller ranking for an early stage will result in a better prognosis. The recent staging system incorporates the level of estrogen receptors 31 (ER), progesterone receptors (PR), and human epidermal growth factor receptor 2 (HER2) as important prognostic indicators [68]. According to the presence of the hormone receptors (HR, such as ER and PR) and the HER2 protein, breast cancer can be classified into four main molecular subtypes: HR+/HER2-, HR+/HER2+, HR-/HER2-, HR-/HER2+. Hormone therapy will be helpful for HR-positive breast cancer, while targeted therapy could treat HER2-positive breast cancer. However, these therapies do not work against triple-negative breast cancer (TNBC), which lacks the expression of ER, PR, and HER2 [69]. Consequently, TNBCs only have limited treatment options and a worse prognosis than other types of breast cancers. Some treatments for breast cancer include mastectomy, radiation therapy, chemotherapy, and immunotherapy. Considering the status of breast cancer, the patient’s situation, risks, and benefits, joint treatments and surgeries are usually taken to improve the patient’s prognosis and decrease the recurrence. In the United States, approximately 260,000 women are diagnosed with invasive breast cancer every year. Half of the patients having breast cancer at early stages (I or II) receive breast- conserving surgery (BCS), as known as lumpectomy or partial mastectomy [70]. Although BCS combined with adjuvant chemotherapy or radiation treatments brings about an equivalent survival, there is an increasing number of BCS-eligible patients choosing mastectomy due to the fear of recurrence [71]. At present, there is no clear guidance that can assist surgeons in resecting tumors completely. Removing more tissue might help reduce the risk of local cancer recurrence, but a large open cavity is unacceptable for cosmesis. Tumor margin, which defines the minimal removal of the tumor and its surrounding normal tissues, is critical during a cancer removal surgery. In most of the instructions, a tumor margin will be assigned as “positive” if there are cancer cells within 2 - 3 mm from the resected edge, whereas a “negative” tumor margin implies no cancer cells within the distance [72]. Currently, the tumor 32 margin can be only marked by postoperative histopathology after the specimens are resected and examined. 
Those patients who have a positive surgical margin have a higher risk of local recurrence due to incomplete surgery. Up to 60% of BCS patients need additional re-excision to clean residual tumors [73]. On the one hand, these patients have to undergo another surgery which will place a financial burden on the healthcare system. On the other hand, extra re-excision of the lumpectomy cavity or mastectomy might increase the risk of wound infection, delay the process of adjuvant chemotherapy and radiation treatment, lead to poor aesthetic outcomes, and trigger postoperative anxiety [74, 75]. In short, the need for multiple surgeries not only increases unnecessary costs for the healthcare system but also brings psychological stress to the patients. As a result, an intraoperative approach to rapidly and precisely identify tumor margins could significantly decrease the rate of re-excision, save reductant medical expenses, and improve patients’ postoperative life quality. To discriminate tumor margins between healthy cells and malignant tissues with both high sensitivity and specificity, imaging contrast agents conjugated with tumor-targeting ligands are utilized for accurate tumor detection [7-9]. Nevertheless, the molecular phenotypes of most cancers can change from person to person, and the concentrations of the phenotypes could also fluctuate temporally and spatially [12, 13]. By detecting multiple disease-related biomarkers simultaneously, sensitivity and specificity can be potentially increased. Consequently, the usage of multiple molecules with different ligands is essential to multiplexed molecular imaging. In consideration of brightness, multiplexing capability, nontoxicity, high sensitivity, and single excitation, SERS NPs are selected as the imaging contrast agents. In our study, SERS NPs are gold nanoparticles (Au NPs) coated with Raman-active dyes, or “flavors”. Based on the Raman effect, each Raman dye emits a unique spectrum with sharp peaks, which enable easier 33 multiplexing. As SERS NPs are conjugated with tumor-targeting antibodies, the spectrally distinct Raman signals can serve as indicators for specific biomarkers 30. To be more specific, for breast cancer detection, imaging contrast molecules must have the capability of targeting ER, PR, and HER2 biomarkers. During the SERS image-guided BCS surgery, one flavor can be used as the control, while other flavors bind to tumor cells. By detecting multiple SERS NPs simultaneously, a mixed signal can be acquired and then quantitatively unmixed by a demultiplexing algorithm. Several instruments have been developed for SERS Raman detection [32, 33, 62, 76-84]. To search for tumor margins, a point-detection device relies on a benchtop stage physically translating the sample [32, 62, 78-80]. Some tools are equipped with a rotating motor to circumferentially inspect the gastrointestinal tract [33, 81, 82]. These devices utilize a collimated illumination beam which can achieve a large field of view (FOV) scanning but sacrifice the spatial resolution. Given the need for image-guided surgery, to identify tumor margins during a time-constraint intraoperative environment, scanning speed and a large FOV are the most important features of SERS Raman detection systems. Therefore, a handheld point-detection device is selected as the technical approach for this application. The design concept of the proposed handheld Raman probe that can differentiate tumor margins of breast cancer is illustrated in Figure 12. 
In this study, the SERS NPs are conjugated with hyaluronic acid (HA) and polyethylene glycol (PEG) as targeting and control ligands. HA is a type of carbohydrate that has been reported able to bind to CD44, a biomarker in breast cancer [85-87]. PEG is a hydrophilic polyether compound that can provide biocompatibility and stability for the SERS NPs as a control. By using a mixture of SERS-HA and SERS-CD44, ratiometric analysis can be applied to the Raman image processing. Then a map of tumor margins with positive and negative can be generated. We have demonstrated an image-guided surgery following the map for breast tumor removal. This is the 34 first report of SERS-enable image-guided surgery that uses the Raman probe and SERS-HA as contrast agents. Figure 12. Concept of tumor margin identification using SERS NPs. (a) Schematic of the usage of handheld Raman device during BCS surgery. (b) Schematic of the imaging system. A 785-nm Laser is used to illuminate the tissue. The Raman scattered light is delivered to the spectrometer. (c) The tumor area is topically administrated with SERS NPs and scanned by the Raman probe. (d) The structure of the targeted and nontargeted SERS NPs. The 60-nm gold core is coated with Raman active dyes, silica, and antibodies. 2.3.2 Synthesis of SERS NPs The SERS NPs were custom-made and synthesized by our collaborator in Dr. Xuefei Huang’s group. Since the synthesis of SERS NP is quite tricky and complicated, the synthesis protocol for SERS NPs is briefly described in this section. There are several considerations for SERS NPs synthesis: 1) the synthesis protocol should be as simple as possible (only one step); 2) the formation of SERS NPs should be at elevated temperature during Raman dye installation. 3) the concentration of surfactant should be zero or low so that the surfactant does not compete with the Raman dye for binding to the Au NPs; 4) the size of Au NPs should yield 50 to 60 nm, which is the optimized size for SERS Raman enhancement. As a result, in this study, the SERS NPs are synthesized by using the tris-base-assisted synthesis protocol since the protocol is relatively 35 straightforward, with Au NPs formation at an elevated temperature. At the beginning of the synthesis, 17 nm seeds were synthesized from gold chloride using sodium citrate as the reducing agent. Next, 17 nm seeds were mixed with tris base at 98 °C, followed by the addition of gold chloride to trigger the seeded growth of Au NPs. By varying the number of seeds, different sizes of Au NPs (such as 60 nm and 70 nm) can be synthesized. During the formation of Au NPs, 10 μM Raman reporter was mixed with seeds so that they can be chelated within the final SERS NPs. The concentration of Raman reporter was optimized to 10 μM for different flavors. Note that concentrations above 10 μM do not produce brighter SERS spectra. To functionalize SERS NPs with biomolecules, such as HA, thiol groups were employed for the attachment of HA to Au NPs via gold-thiol interaction [88-92]. SERS NPs were mixed with thiolated HA and incubated at 4 °C overnight. Then, unbounded HA was removed by repeated centrifugation. The tris-base-assisted synthesis protocol of SERS NPs is shown in Figure 13(a). The size and shape of synthesized SERS NPs were characterized by Transmission Electron Microscope (TEM; 2200FS, JEOL Ltd., Tokyo, Japan) and Dynamic Light Scattering (DLS; Zetasizer Nano ZS, Malvern Panalytical Ltd, Malvern, England, UK). 
In Figure 13(b), the TEM images show that SERS NPs are homogeneous spheres with approximately 50 nm diameter. The DLS results in Figure 13(c) also confirm the size distribution, with a measurement of 56 nm. 36 Figure 13. Synthesis of the custom-made SERS NPs. (a) SERS NPs synthesis and HA/PEG conjugation procedure. First, 17 nm gold seeds are formed. Second, the NPs further grow to 50 nm meanwhile different Raman reporters are attached to the gold surface. Last, the SERS NPs are functionalized with HA or PEG. (b) TEM image of the pure SERS NPs. The diameter is around 50 nm. (c) DLS result of the pure SERS NPs. The measured size is 56.16 nm in diameter. 2.3.3 Characterization of SERS NPs Four flavors of SERS NPs were synthesized based on different Raman reporters: K420, K421, K440, and K481. Those Raman reporters are non-resonant Raman dyes that have lower backgrounds and sharper peaks than resonant Raman dyes. The chemical structures of the Raman reporters are shown in Figure 14. Figure 14. Chemical structures of the Raman reporters were used in this study. 37 The SERS spectra of the four flavors of SERS NPs at 100 pM were measured by our custom- made Andor spectrometer, as shown in Figure 15(a). The limit of quantification (LOQ) of these SERS NPs was evaluated by the lowest sample concentration within ±50% linearity error. Prior to the linearity measurement, 200 µL SERS NPs (K440, K481, K420, K421) with various concentrations (100 pM, 50 pM, 10 pM, 5 pM, 1 pM, 500 fM, 100 fM, 50 fM, 10 fM, 0 fM) were added into the 96 well plate. The spectra of different concentrations were acquired at an exposure time of 0.2 s and converted to measured concentrations using the DCLS demultiplexing algorithm with pre-calibration. The linearity results of the series of measured concentrations are plotted in Figure 15(b). The LOQs of K440, K481, K420, and K421 are quantified at 50 fM, 1 pM, 100 fM, and 500 fM, respectively. Among all the four favors, K440 has the brightest signal with a maximum peak of approximately 3 × 104 counts and provides the lowest concentration at 50 fM. The other flavors are similar in brightness, around 4 × 103 counts. Figure 15. Characterization of the custom-made SERS NPs. (a) Measured spectra of the four flavors of SERS NPs at a concentration of 100 pM: K440, K421, K440, and K481. (b) Linearity of the four SERS NPs. The LOQ is defined as the minimum concentration which can be measured within ±50 % error range. The limits for K440, K481, K420, and K421 are 0.05 pM, 1 pM, 0.1 pM, and 0.5 pM, respectively. 38 2.3.4 Improved Demultiplexing Algorithm for Spectral Crosstalk To assess the spectral crosstalk between each flavor, the linearity results of individual flavors were computed into the weights using the DCLS demultiplexing algorithm. Although no other mixtures were used in this experiment, the demultiplex results showed the weights of other flavors that should not exist in the test sample. For instance, in Figure 16(c), the weights of the pure K440 sample also have components of K420, K421, and K481, which greatly mislead the accuracy of the weight results. This phenomenon can be explained by the similarity of spectral peak position between these SERS flavors, called spectral crosstalk. Even though the DCLS algorithm takes the entire spectral range into account, severe spectral crosstalk still occurs. We hypothesized that there must be other factors in the spectral domain that cause this problem. Figure 16. 
SERS spectrum temporal fluctuation and its effect on the demultiplexing result. (a) Intensity image of the time-series spectra. (b) Comparison of two fluctuating spectra acquired at 6.8 and 12 seconds. Peaks shift in both the wavenumber and intensity domains. (c) Demultiplexing results influenced by spectral fluctuation. Four types of pure SERS spectra were demultiplexed simultaneously using a four-flavor reference set. Without compensation for the fluctuation, K440 signals have severe interference, or “crosstalk”, with the other three flavors. (d) Demultiplexing results after compensation for the spectral fluctuation. The crosstalk issue on K420 and K440 has been largely eliminated. Later, we observed that the measured spectrum was drifting randomly during a static acquisition. When we acquired the SERS spectra in a kinetic mode, the peak intensities and positions of the acquired spectra fluctuated over time. The intensity image of the time-series spectra is displayed in Figure 16(a). Discontinuities in the intensity lines represent the temporal spectral fluctuations. To compare the fluctuations, two spectral curves selected at 6.8 and 12 seconds are plotted in Figure 16(b). The peak positions can shift by up to 8 cm⁻¹, and the sharpness of the peaks varies as well. These small spectral changes can have a large impact on the demultiplexing algorithm; for example, a peak from K440 can be misclassified as K420. To investigate the root cause of the fluctuations, the SERS sample was then measured by a commercial spectrometer (inVia Reflex confocal Raman microscope, Renishaw, Wotton-under-Edge, England, UK), but no spectral fluctuation was observed. The problem could be either the custom Andor spectrometer or the laser source. However, the Andor spectrometer was well-calibrated in the factory, and no spectral instability was found on-site. The culprit is likely to be the 785 nm laser source. Since Raman is a light-scattering effect, the scattered light is sensitive to the incident light. Small changes in the incident wavelength are reflected in the Raman spectrum, and multiple wavelengths can broaden the spectral peaks. The 785 nm laser source (iBeam Smart 785, Toptica Photonics, Munich, Germany) used in the custom-made system was not optimized for narrow-bandwidth (± 1 nm) applications. Meanwhile, there is no cleanup filter to stabilize the output laser spectrum (± 5 nm). As a result, the spectral resolution and stability were affected, and the abovementioned phenomenon was exhibited. The root cause was confirmed by using a tunable laser (TBL-6712, Newport Corporation, Irvine, CA, USA) in our custom Andor spectrometer: no spectral fluctuations were observed. If the stability issue of the laser source is not solved in hardware, the DCLS algorithm can compensate for it. The random spectral shift can be thought of as the superposition of the average spectrum plus dynamic changes. The average spectrum (common part) and the dynamic changes (differential part) can be obtained by acquiring 25 frames of kinetic spectra. The common and differential parts can be extracted by principal component analysis (PCA). The first three PCA components of each SERS flavor can replace 𝑺 in the reference matrix 𝑹 for the demultiplexing algorithm in Section 2.1.3. The resulting weights are then read from the rows corresponding to the first PCA component (common part) of each flavor. In Figure 16(d), the same data set was processed with the improved demultiplexing algorithm.
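A brief MATLAB sketch of this compensation, under the assumptions above (25 kinetic frames per flavor, the first three PCA components stacked into the reference matrix, and only the common-part rows kept as the final weights), is given below; the data and dimensions are placeholders rather than the actual acquisition.

% PCA-augmented DCLS sketch for laser-fluctuation compensation (placeholder data).
nFlavors = 4;  N = 1024;                         % number of flavors and spectral points
R = [];  commonRow = zeros(1, nFlavors);
for i = 1:nFlavors
    kinetic = rand(25, N);                       % placeholder: 25 kinetic frames of flavor i
    [~, ~, V] = svd(kinetic, 'econ');            % uncentered PCA: 1st component ~ average spectrum
    comps = V(:, 1:3)';                          % first three components (3 x N)
    commonRow(i) = size(R, 1) + 1;               % row index of this flavor's common part
    R = [R; comps];                              % stack components into the reference matrix
end
s = rand(1, N);                                  % placeholder measured (mixed) spectrum
w = (R' \ s')';                                  % least-squares demultiplexing of the spectrum
weights = w(commonRow);                          % keep only the common-part weight per flavor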
After the compensation for spectral fluctuation, the crosstalk issue on K420 and K440 has been significantly mitigated. 2.3.5 Ex vivo SERS Imaging of Mouse Breast Tumors Mouse tissues harvested from a tumor-bearing mouse model (double transgenic MUC1/MMTV) were used for ex vivo SERS Raman imaging [93], by following the procedures approved by the Institutional Animal Care & Use Committee at the Michigan State University (IACUC, Protocol #PROTO202100095). Mouse breast tumors and mouse hearts were dissected for positive and negative controls, as shown in Figure 17(a) and (c). SERS NPs – K420 favor functionalized with HA and PEG were employed for mouse specimen staining. The SERS staining and imaging procedures are described below. First, the fresh tissues were washed in PBS for 15 seconds. Next, a breast tumor and a heart were topically administered with 100 pM K420-HA in a 96-well plate for 1 hour. Another set of tissues was stained with 100 pM K420-PEG for an hour. After the staining, the tissues were rinsed in PBS for 30 seconds, followed by SERS spectral imaging. The SERS spectra from the samples were acquired under an exposure time of 0.5 seconds and a step size of 0.5 mm. Finally, the acquired spectra were reconstructed in 2D Raman images using a MATLAB script. The Raman images from individual channels of K420- 41 HA and K420-PEG are demonstrated in Figure 17(b) and (d), respectively. Ideally, the Raman image of K420-HA should show stronger intensities on the breast tumor (positive control) but relatively weaker signals on the heart (negative control) because HA is a ligand targeting CD44 in breast cancer. In the Raman image of K420-PEG, non-targeting SERS NPs, both the breast tumor and heart should express a similar intensity level. In the experimental images, the K420-HA group did appear with stronger signals on the breast tumor; nevertheless, the intensity distribution of K420-PEG group displayed an opposite trend. According to these confusing results, no definitive conclusions can be drawn. The inconclusive results in the K420- PEG group may be caused by the heterogeneity and nonspecific binding of the tissues. For instance, some tissue areas are porous, and whether or not the SERS NPs have targeting ligands, they tend to uptake more SERS NPs than solid regions. Figure 17. Raman images of tissues topically stained with SERS-HA or SERS-PEG. Mice breast tumors and hearts were positive and negative controls. (a-b) Photograph and Raman image of the tissues with K420-HA. (c-d) Photograph and Raman image of the tissues with K420-PEG. Scale bars = 2 mm. 42 2.3.6 Ex vivo Ratiometric Imaging of Mouse Breast Tumors To compensate for the tissue heterogeneity, ratiometric analysis of Raman imaging was performed by applying a mixture of targeting and control SERS NPs on the tissues. Since both the NPs were stained on the same tissue area, after the influences of heterogeneity and nonspecific binding were eliminated, the binding efficacy between the tissue and SERS NPs became a major factor that could make differences in Raman intensity. Then, the intensity differences were calculated as ratios of targeting/control. The ratiometric experiment was conducted using ex vivo mouse tissues – a breast tumor and a normal tissue (spleen connective tissue) from a MUC1/MMTV tumor mouse model (Figure 18 (a)). The tissues were washed in PBS for 15 seconds followed by topically staining with a SERS mixture. Here, two SERS flavors, K420-HA and K481-PEG, were chosen as the mixture with minimum crosstalk effect. 
The tissues were stained with a 100 pM equimolar mixture of the two flavors for 10 minutes. After the staining, the tissues were rinsed in sodium phosphate for 30 seconds to remove unbound SERS NPs and prevent aggregation of SERS NPs. The SERS-stained tissues were then scanned by the custom-made spectrometer system using an exposure time of 0.2 seconds and a step size of 0.5 mm. For a FOV of 24 × 15 mm2 with 49 × 31 pixels, it took about 10 minutes to finish the whole scanning area. The mixed SERS spectra were acquired, demultiplexed, and reconstructed into Raman images of individual flavors, as shown in Figure 18(b). K420-HA image shows similar intensity on both breast tumor and normal tissue, whereas K481-PEG image expresses stronger signals on the normal tissue due to nonspecific binding. By dividing the weight of K420-HA by the weights of K481-PEG, a ratiometric Raman image was created. The ratio signals from the breast tumor were intensively enhanced over the normal tissue. 43 Figure 18. Ratiometric Raman images of tissues with SERS mixture for CD44 detection. The mixture comprises equimolar SERS-HA (targeting) and SERS-PEG (control). The mice breast tumor and normal tissue are chosen as positive and negative controls. (a) Photograph of the mice tissues. (b) Raman images of the individual channels and ratiometric results. (c) H&E and IHC-CD44 images to verify the Raman ratiometric result. (d) Enlarged IHC images of CD44 on the breast tumor and normal tissue. Scale bars in (a-c) and (d) = 5 mm and 50 μm, respectively. From the ratiometric Raman image, the breast tumor can be easily differentiated from the normal tissue based on the intensity level, which represents the level of the CD44 biomarker. To validate the level of CD44 in the breast tumor and normal tissue, hematoxylin and eosin (H&E) and immunohistochemistry (IHC) of CD44 staining were the gold standards performed for histological analysis in biological study. Initially, the mouse breast tumor and normal tissue were fixed in 10% neutral buffered formalin at room temperature for 48 hours, followed by a formalin- fixed paraffin-embedded (FFPE) process to embed tissue blocks. Then, the FFPE tissue blocks were cut into 4 μm-thick sections for H&E and IHC of CD44 staining by MSU histological lab. Note that the sections of the tissues match the same orientation during the Raman imaging. 44 Polyclonal rabbit anti-CD44 antibody (#ab157107, Abcam PLC., Cambridge, UK) was diluted at 1:400 as the primary antibody of the IHC-CD44 staining for 60 minutes. The H&E and IHC-CD44 slides were investigated using an upright microscope (Eclipse Ci, Nikon Corporation, Tokyo, Japan). The H&E and IHC images of the entire tissues were taken by a 10X objective (Nikon Plan 10X, NA 0.25, WD 10.5; MRL00102), as shown in Figure 18(c). The detailed views of the IHC images using a 40X objective (Nikon Plan 40X, NA 0.65, WD 0.56 / 0.17; MRL00402) are displayed in Figure 18(d) at the 6 selected positions (1~3: from the breast tumor, 4~6: from the normal tissue). As the CD44-positive areas are brown on the cell membranes, the IHC images at positions 1 & 2 (at the middle of the breast tumor) have the strongest CD44 overexpression, while positions 3 ~ 6 (at the edge of the breast tumor and some areas of normal tissue) show the median to weak intensities. According to the overlaid images, the IHC results are consistence with the ratiometric results of the breast tumor and normal tissue. 
Hence, ratiometric Raman analysis can be a potential technique for breast cancer detection. 2.3.7 In vivo SERS Image-Guided Surgery The SERS Raman imaging technique has been employed for in vivo image-guided surgery. A breast tumor-bearing MUC1/MMTV mouse was anesthetized on a water-warmed surgical bed during the experiment. The custom-made Raman spectrometer was modified: instead of moving the animal, the Raman probe was translated by the 2D stages. The experimental setup is illustrated in Figure 19(a). Note that the mouse was fixed by tapes on the surgical bed because the experiment is susceptible to the FOV change. To compare and co-register the Raman images with the photographs of the breast tumor, the FOV should remain in the same position. The mouse was shaved, and the tumor area was scanned for tissue background acquisition. Then, 100 µL, 500 pM K420-HA, was injected into the tumor at a depth of 3 ~ 5 mm. The surgery 45 began 42 hours after the injection. The skin on the breast tumor was peeled off. Although the mouse was bleeding, the breast tumor was exposed and cleaned with gauze. The procedures of the image- guided surgery are described below. 1) The tumor area injected with SERS NPs was imaged by the custom-made spectrometer. The 785-nm laser power was 45 mW, and 0.1-second exposure time and 0.5-mm step size were used to reduce the overall scanning time. For a scanning FOV of 18 × 28 mm2, the scanning process can finish in 6 minutes. 2) Right after SERS spectral acquisition, a Raman image generated by post-processing was used as a map to determine tumor resection areas. 3) Then, the tumor was gradually resected by following the signals of the Raman image. 4) The imaging and tumor removal procedures were repeated from 1) to 3) until all Raman signals on the tumor disappeared. Figure 19. SERS image-guided surgery for resection of mice breast tumor. The breast tumor was intratumorally injected with K420-HA. (a) Photograph of the experimental setup for the intraoperative SERS image-guided surgery. (b) Photographs and SERS images of K420-HA during the tumor resection surgery. Resection regions are depicted in the images until the breast tumor is completely removed. Scale bars = 5 mm. 46 The bright-field photographs of the tumor and corresponding Raman images during the resection process are displayed in Figure 19(b). Four iterations were performed to completely remove the tumor. A comprehensive iteration took approximately 15 minutes, including tumor resection, cleaning, photographing, and Raman imaging. The entire tumor resection procedure was finished in 1 ~ 1.5 hours, which should be as soon as possible to prevent the animal from dying. Moreover, the isoflurane vaporizer (Tec III 300 Series, Vaporizer Sales and Service, Rockmart, GA, USA) was tuned to 1.5% ~ 2% to ensure that the animal was in deep sleep during the surgery. Despite administration by intratumoral injection, K420-HA SERS NPs did not diffuse throughout the tumor but were concentrated at the injection sites. Due to the 5 mm injection depth, the SERS signals only came from the periphery of the tumor. After removing the exterior of the tumor between the 2nd and 3rd iterations, the Raman signals almost disappeared. In the last step, the rest of the tumor (10 × 10 mm2) was completely resected, leaving no signal in the tumor area. 2.3.8 Conclusion of SERS NPs for Breast Cancer Detection In this study, a new type of SERS NPs has been introduced for CD44 breast cancer biomarker detection. 
Four different flavors of custom-made SERS NPs (K440, K481, K420, K421) were synthesized in a 50 nm diameter using the tris-base-assisted protocol. These SERS NPs were characterized with the lowest limits down to femtomolar, 0.05 pM, 1 pM, 0.1 pM, and 0.5 pM, for K440, K481, K420, and K421, respectively. Additionally, the DCLS demultiplexing algorithm has been upgraded with PCA to compensate for the laser fluctuations. Thus, the crosstalk between different SERS flavors can be mitigated and the accuracy of estimated weight can be improved. Two flavors of the SERS NPs were functionalized as K420-HA and K481-PEG. HA has been reported as an effective ligand for CD44 targeting, whereas PEG provides biocompatibility as the control. These two types of SERS NPs were mixed and topically administrated for ex vivo mouse 47 tissues from a tumor-bearing mouse. According to ratiometric Raman images, breast tumors can be differentiated from normal tissues. The ratiometric results also match the IHC images of CD44 staining. Later, the SERS NPs were used in a demonstration of intraoperative image-guided surgery. A breast tumor mouse was intratumorally injected with a single flavor of K420-HA. The SERS images were used as maps of the tumor margin during the tumor resection surgery. The tumor was gradually removed according to the reconstructed Raman images until the Raman signals disappeared completely. The detailed procedures of the SERS Raman image-guided surgery have been described in this research. In the future, multiple SERS flavors with different breast tumor-targeting antibodies can be synthesized. By detecting various biomarkers, the ratiometric analysis will reduce the nonspecific binding and increase the accuracy of tumor margin detection. Comprehensive in vivo animal experiments using multiple SERS mixtures will be performed for image-guided BCS surgeries. Moreover, Raman images of different SERS flavors can reveal the distribution of distinct biomarkers, and help researchers understand the development of breast cancer. 48 CHAPTER 3: VO2 MEMS SCANNER FOR LISSAJOUS SCANNING RAMAN IMAGING 3.1 Micro-Electro-Mechanical Systems (MEMS) Scanner Micro-Electro-Mechanical System (MEMS) scanners have been widely developed in various beam scanning applications, such as projection display [94, 95], LiDAR [96-98], and biomedical imaging devices [99-101]. MEMS technology enables the advantage of compact size (< 1 mm), fast scanning speed (> 1 kHz), batch fabrication, and low cost. Especially for biomedical imaging applications, MEMS scanners play a vital role in miniaturizing the optomechanical system of optical imaging instruments, including optical coherence tomography [101-103], confocal microscopy [104-106], multiphoton microscopy [107-109], and photoacoustic microscopy [110- 112]. These imaging techniques can detect either label-free or contrast agents that can target disease-related biomarkers [113]. At the distal end of the imaging device, three-dimensional (3D) volume imaging can be achieved by beam steering and actuation mechanism. The compact size also facilitates in vivo applications. MEMS scanners have been developed based on several actuation metrologies, such as electrostatic [114-116], electrothermal [117-119], electromagnetic [120-122], and piezoelectric [123-125] mechanisms. Electrostatic actuation utilizes the electrical field across a gap and generates attractive or repelling force. 
Although it requires a high voltage to build up the electrical field, the current consumption is almost zero because the gap behaves as an open circuit. Piezoelectric actuation is based on materials exhibiting the piezoelectric effect, such as lead zirconate titanate (PZT), a ceramic material. When a voltage is applied to two faces of the piezoelectric material, mechanical stress is induced, and the shape physically changes in response to the electrical field. Although the shape change can only provide small displacements, it has been employed for ultra-precise and high-speed applications [126]. The driving signals used for piezoelectric actuation have similar characteristics to electrostatic actuation (high voltage and low current). Electromagnetic actuation is produced by the Lorentz force, which requires much larger currents to generate mechanical actuation of micrometer-sized mirror structures, even when operated at resonance [127]. The electrothermal mechanism utilizes length differences caused by thermal expansion, such as in hot-and-cold arms and bimorph structures. Hot-and-cold actuators are usually made of the same material with different thicknesses to create temperature gradients and thus length differences. In contrast, bimorph structures depend on the different thermal expansion coefficients of two materials. As the temperature changes, the two materials expand at different rates. Both types of electrothermal actuators can generate large displacements, but the response time is constrained by thermodynamics. Raman scattering, discovered by C.V. Raman in 1928 [17], is the result of the inelastic scattering effect caused by molecular vibrations. The energy exchange between the incident light and the molecules produces a unique spectral fingerprint, measured as the Raman shift. Nevertheless, only a small number of photons scatter inelastically, resulting in very weak Raman scattering signals. The weak intensity of Raman scattering can be enhanced by factors of 10¹⁰ to 10¹⁴ using the Surface-Enhanced Raman Scattering (SERS) technique when the molecules are attached to rough metal (e.g., silver or gold) surfaces, generating a surface plasmon resonance effect [57, 58]. SERS substrates and SERS nanoparticles (SERS NPs) have been developed to achieve an ultra-low limit of detection and reduce the Raman acquisition time. SERS NPs are made of metallic cores on a nanometer scale and coated with Raman-active dyes. Moreover, SERS NPs conjugated with antibodies have been used to detect disease-related biomarkers in tissues and create Raman mapping images [62, 80, 128]. Even though the SERS intensity has been significantly improved by Surface-Enhanced Resonant Raman Scattering (SERRS) [63], Raman images are still acquired by slow raster scanning because of the large field of view (FOV) and the simplicity of image reconstruction. To increase the scanning speed, Lissajous scanning has become an attractive approach in imaging applications requiring a higher frame rate. Lissajous scanning is achieved by sinusoidal inputs applied on two orthogonal axes. The frequency, phase, and amplitude of the input signals determine the frame rate, scan trajectory, FOV, and pixel density. Many researchers have discussed strategies for frequency selection and high fill factor (FF) in Lissajous scanning [129-132]. A MEMS scanner for Lissajous scanning should be designed with biaxial scanning and large-FOV capability, which is usually achieved at resonant frequencies.
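To make the relationship between the sinusoidal drive signals and the resulting trajectory concrete, the MATLAB sketch below generates a Lissajous pattern from two sine inputs; the frequencies, amplitudes, and phase are placeholder values chosen for illustration, not the operating parameters of the scanner reported later.

% Lissajous trajectory sketch (placeholder parameters, normalized amplitudes).
fx = 120;  fy = 125;                   % drive frequencies on the two orthogonal axes (Hz)
Ax = 1;  Ay = 1;  phi = pi/2;          % amplitudes and relative phase
frameRate = gcd(fx, fy);               % for integer frequencies, the pattern repeats at their GCD (Hz)
t = 0:1e-5:1/frameRate;                % one full repetition of the trajectory
x = Ax * sin(2*pi*fx*t + phi);
y = Ay * sin(2*pi*fy*t);
plot(x, y);  axis equal;               % the frequency ratio and phase set the fill factor of the pattern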
For the Raman imaging application, the overall bottleneck of the scanning speed is limited by the intensity of the Raman signals and the performance of the detector. Therefore, MEMS scanners with high resonant frequencies (> 1 kHz) are not necessary for fast scanning speed. Instead, the scanners must be developed with a large mirror platform to collect the weak Raman signals from SERS NPs. In addition, to obtain a large FOV, the actuators should be able to lift the mirror platform at a large tilting angle. Considering the above requirements (large mirror, slow speed, large tilt angle), electrothermal actuation is a suitable mechanism for the MEMS scanner in Lissajous Raman imaging. Recently, a novel electrothermal actuation mechanism has been presented using vanadium dioxide (VO2) thin film and its temperature-induced phase transition. When a temperature change triggers the phase transition, VO2 material undergoes a metal-to-insulator phase transformation in mechanical [133], electrical [134], and optical [135] properties. The dramatic change of mechanical stress has been demonstrated to have a strain energy density larger than other actuation 51 mechanisms [136-139]. Moreover, the phase transition temperature of VO2 occurs at approximately 68°C, which is much lower than the actuation temperature of 300°C in a conventional electrothermal actuator [140]. Previous studies have demonstrated a VO2 MEMS scanner with a smaller mirror platform (600 µm × 600 µm) and the characterization of tip/tilt and piston motions [141-143]. However, it is operated under nonlinear quasistatic modes with a hysteresis effect. Although the hysteresis characteristic of VO2 enables the capability of programmable positioning, the nonlinear behavior leads to unrepeatable movements during each actuation cycle which increases the complexity of continuous scanning. To minimize the nonlinear effect of VO2 for continuous beam scanning, we will develop a strategy to actuate the VO2 scanner in resonant modes. As the two axes are driven around the resonant frequencies, the scanner can achieve Lissajous scanning with a maximum FOV. Compared to the previous studies, this research improves the design of the MEMS scanner and focuses on the novel application of Raman imaging, by demonstrating a VO2-based MEMS scanner with a larger mirror (1.6 mm × 1.6 mm) for Lissajous scanning Raman imaging. The design, fabrication process, and characterization of quasistatic and frequency responses are discussed. The MEMS scanner is then integrated into a custom-made imaging system for Raman spectrum acquisition. Samples with SERS NPs are scanned by the imaging system, followed by a Raman image reconstruction method based on Lissajous patterns. The Raman images of a SERS paper phantom and a SERS-stained mouse tissue are successfully obtained. Consequently, the scanning capability of the proposed MEMS scanner is demonstrated, and it can be potentially extended to different imaging applications. 52 3.2 VO2-based MEMS Scanner 3.2.1 Design of VO2 MEMS Scanner The developed VO2 MEMS scanner consists of a 1600-µm squared mirror at the center and four suspension legs around it (Figure 20(a)). Individual legs can be electrothermally actuated by VO2 phase transition which is induced by Joule heating on the metallic resistors. The legs are designed with thinner bimorph sections (thickness ~1.4 µm) for bending and thicker frames (thickness ~50 µm) for supporting. 
Also, the width of the frames (𝑾𝒇: 150 µm) is wider than the width of the bimorph beams (𝑾𝒃: 30 µm). The lengths of the bimorph sections (𝑳𝒃𝟏−𝟑: 300, 600, 300 µm) and frames (𝑳𝒇𝟏−𝟐: 1000, 1000 µm) are optimized to generate a large vertical displacement with minimal lateral movement [144]. The scanner is made of Si, SiO2, VO2, and Cr/Au, where the VO2 and Cr/Au layers are sandwiched between three SiO2 layers (from top to bottom: SiO2, Cr/Au, SiO2, VO2, SiO2, Si). To prevent material exposure due to photolithography misalignment, the VO2 and Cr/Au layers keep 2 µm away from the SiO2 structural boundary. In every leg, the Cr/Au traces are connected as a loop to circulate electrical current. There are a total of eight metal traces patterned on the bimorph sections, four for the current inlet and four for the current outlet (Figure 20(c)). They are designed with smaller widths (𝑾𝒉: 8 µm), thus increasing the resistance of these regions, which in turn increases the temperature for a given electrical current; in other words, they serve as resistive Joule heaters. As a result, when an electrical current is applied via the metal traces, the bimorph regions experience a much higher temperature than the rest of the frame, which is used to heat up the VO2 layer underneath. As the temperature rises above 68 ℃, the VO2 goes through a solid-to-solid phase transition that induces an equivalent mechanical stress. This generated stress is localized in the thinner bimorph regions, resulting in out-of-plane bending of the bimorph structures (Figure 20(b)). Although the heat is mainly created in the bimorph area, it can be transferred from the leg to the mirror via thermal conduction, which leads to a coupled thermal actuation. To improve the thermal isolation, the connections between the legs and the mirror are designed with 30-µm-pitch holes, which allow XeF2 gas to etch the silicon substrate near the connection area. The geometry parameters of the current VO2 MEMS scanner design are listed in Table 1, in comparison with the previous design with a 600 µm × 600 µm mirror [143].
Table 1. Geometry parameters of the VO2-based MEMS scanner.
Dimension (µm)    Mirror Size    𝑳𝒇𝟏    𝑳𝒇𝟐    𝑳𝒃𝟏    𝑳𝒃𝟐    𝑳𝒃𝟑    𝑾𝒇    𝑾𝒃    𝑾𝒉
Current Design    1600 × 1600    1000   1000   300    600    300    150   30    8
Previous Design   600 × 600      300    300    150    300    150    62    28    6
Figure 20. Design of the VO2 MEMS scanner and its application. (a) Designed geometry of the VO2 MEMS scanner. (b) SEM image of bending bimorph structures. (c) Detailed geometry of the bimorph structures and Joule heaters. (d) Stereomicroscope image of the final device. (e) Illustration of beam scanning using the VO2 MEMS scanner. The designed mirror size (1.6 mm × 1.6 mm) can cover the collimated beam (1 mm diameter) from the Raman probe. The proposed VO2 MEMS scanner is devised for the SERS imaging application, driven in two directions for dual-axis beam manipulation by Lissajous scanning. A custom Raman probe is utilized to send a 785 nm laser and collect the Raman scattering light (Figure 20(e)). The illumination is a collimated beam of 1 mm diameter, whereas the collection is a slightly convergent beam. The MEMS scanner is placed at 45° in front of the Raman probe. The large mirror (1.6 mm × 1.6 mm) ensures coverage of the illumination beam. The Raman scattering light is reflected by the scanner and measured by a spectrometer. We have considered several factors that might affect the feasibility of scanner fabrication and its performance, including the geometry, bimorph design, and gravity effect.
Here, we will discuss the design concept of geometry and bimorph structure. The simulation of geometry and gravity on resonant frequency will be discussed using finite element analysis (FEA) in Section 3.3.3. First, to enlarge the mirror size, the lengths of frames and bimorph structures are extended to amplify the deflection angle. Second, to lift a large mirror platform, wider frames provide stronger mechanical support, while keeping the metal resistors in the bimorph regions narrower, in order to induce localized Joule heating. Third, the width of each beam in the bimorph region has to be designed, considering the required etching time to remove the silicon substrate underneath by XeF2 isotropic etch (Figure 20(b)). To obtain uniform XeF2 etching results, all the trenches defining the SiO2 structural boundaries are designed with identical widths. The length of each leg matches the side length of the mirror platform to maximize the fill factor of the MEMS mirror. In conclusion, when the mirror size scales up, the length of frames, width of frames, and length of bimorph beams increase whereas the width of bimorph beams and width of metal resistors need to be narrow. 3.2.2 Fabrication Process of VO2 MEMS Scanner The fabrication process flow in top and cross-section views is illustrated in Figure 21. A 2- inch double-side polished silicon wafer (280 μm) was used as a substrate. Both sides of the silicon substrate were deposited with a SiO2 layer (front: 1 μm, back: 2 μm) using Plasma Enhanced Chemical Vapor Deposition (PECVD) at 300 ℃ (Figure 21(a)). The backside SiO2 works as a hard 55 mask for the silicon substrate etch, while the topside SiO2 facilitates the monoclinic (011)M orientation of the polycrystalline VO2 alignment to the substrate surface, which can increase the maximum displacement during the VO2 phase transition [145, 146]. Next, the deposition of VO2 (100 nm) was grown on the SiO2 by Pulsed Laser Deposition (PLD) and then patterned by Reactive Ion Etching (RIE) (photomask-1) using the method mentioned in [141] (Figure 21(b)). On top of the VO2 layer, a second SiO2 layer (200 nm) was deposited to electrically insulate the metal traces and Joule heaters (Figure 21(c)). However, the SiO2 deposited by PECVD at 200 ℃ leads to porous defects which impair electrical isolation. To overcome this problem, the second SiO2 deposition was performed in four consecutive steps (50 nm for each) to reduce the possibility of the formation of through holes (e.g. pin holes). Then, the lift-off technique (photomask-2) was used to pattern resistive heaters and metal traces, which are made of chromium and gold (Cr: 10 nm, Au: 150 nm) (Figure 21(d)). The Cr layer improves the adhesion between the Au and SiO2 layers due to lower intrinsic stress [143]. After the metallization, the third SiO2 layer (200 nm) was deposited in four steps (50 nm) by PECVD at 200 ℃ (Figure 21(e)). This SiO2 layer protects the metal trace from oxidation as well as decreases the convection heat transfer from the Joule heater to the environment. At this point, all the structural materials have been deposited on the substrate. The deposited layers are subsequently etched by multiple steps to define and release the structures. The geometric profile of the device was defined by topside RIE (photomask-3) which etched multilayer SiO2 (~1.4 μm) from the front side and exposed the Si substrate (Figure 21(f)). The remaining SiO2 layer served as a hard mask for the later DRIE processes. 
Another RIE of the backside SiO2 (~2 μm) (photomask-4) created a hard mask for cavity etching (Figure 21(g)). The thickness of the mirror structure (~50 μm) was produced by topside anisotropic Deep Reactive Ion Etching (DRIE) with high aspect ratios on the silicon substrate (Figure 21(h)). Next, time-controlled backside DRIE created a large cavity under the device (Figure 21(i)), which defined the final thickness of the mirror platform and frame regions of the device (~50 μm). Before the DRIE processes, the 2-inch device substrate was mounted on a 4-inch handle wafer using a water-soluble adhesive wax (Crystalbond 555). After DRIE was completed, the device substrate was released from the handle wafer by immersing the sample in deionized water at 95 °C.
Figure 21. Fabrication process of the VO2 MEMS scanner. (a) PECVD of the 1st SiO2 layer on both sides of the Si wafer. (b) PLD deposition and RIE patterning of the VO2 (photomask-1). (c) PECVD of the 2nd SiO2 layer. (d) Lift-off of the Cr/Au layer (photomask-2). (e) PECVD of the 3rd SiO2 layer. (f) RIE of the topside SiO2 (photomask-3). (g) RIE of the backside SiO2 (photomask-4). (h) DRIE of the topside Si. (i) DRIE of the backside Si. (j) XeF2 isotropic etch of the Si under the bimorph structures. The undercut effect is not shown in the figure.
A final step was taken to remove the silicon substrate underneath the bimorph structures using the XeF2 isotropic etch (Figure 21(j)). The XeF2 gas diffused from the front trenches into the cavity to etch the silicon substrate isotropically. Since the isotropic etch also caused undercut on other silicon structures, it was timed to etch only the silicon substrate underneath the bimorph layers and minimize the over-etch in other places. Even without removing the passivation layer deposited during the DRIE, the XeF2 gas can still release the silicon substrate. The completed scanner has an initial displacement induced by the internal residual stress inside the bimorph structure. Table 2 summarizes the material thicknesses for the previous and current processes [143].
Table 2. Material thicknesses of the VO2-based MEMS scanner.
Thickness (µm)     Si    1st SiO2   VO2    2nd SiO2   Cr/Au       3rd SiO2   Backside SiO2
Current Design     280   1          0.1    0.2        0.01/0.15   0.2        2
Previous Design    300   1          0.25   0.2        0.02/0.11   0.2        1
Since the VO2 thin film is susceptible to high temperature, oxidation, and chemicals, the fabrication of this VO2 MEMS scanner is not compatible with many common microfabrication processes. A temperature above 275 ℃ in an oxygen environment will accelerate the further oxidation of VO2 into other stable vanadium oxides. It should be noted that other vanadium oxides also have phase transitions, but they occur at very different temperatures (e.g., about −123 ℃ for V6O13). This problem can be solved by keeping the temperature lower during thermal processes, such as PECVD. However, SiO2 thin films grown by lower-temperature PECVD are porous and of low density. When regular photoresist cleaning solvents, such as acetone and isopropanol, are used, the rapid evaporation of the cleaning solution destroys the SiO2 layers and attacks the VO2 thin film. Thus, a photoresist remover (Microposit Remover 1165) was used to clean the photoresist instead of acetone or isopropanol. Additionally, positive photoresists (Microposit S1813, MicroChem SF-11, and Microchemicals AZ 9260) with developer solutions (H2O:Microposit 351 diluted 5:1, and AZ MIF 300) were employed to pattern the materials without damaging the VO2 thin film.
58 3.2.3 Validation of VO2 Thin Film We have designed VO2 testing devices (800 μm wide and 3000 μm long) on the wafer to test the quality of VO2 thin film after fabrication. By measuring the resistance across the test device during temperature change, the quality of the VO2 thin film can be evaluated. A resistance drop of 2 orders (or more) is indicative of a film formed of mainly stoichiometric VO2. Figure 22. Resistance measurement on the VO2 thin film. (a) Schematic of the VO2 resistance measurement setup. The test device was placed on a Peltier heater which is used to control the temperature with 1℃ resolution. The resistance across the VO2 thin film was measured by 𝑅 = 𝑉/𝐼 using the 4-wire sensing method. (b) The resistance-temperature curve shows 2 orders of resistance drop on the device. The resistance measurement setup is illustrated in Figure 22(a). A Peltier heater (TECD2S, Thorlabs, Newton, NJ, USA) embedded with a temperature sensor (TH100PT, Thorlabs, Newton, NJ, USA) was used to precisely control the temperature every 1℃ using a TEC controller (TED4015, Thorlabs, Newton, NJ, USA). The test device was placed on the Peltier heater and the resistance was measured by the 4-wire sensing method. A 5-μA constant current source was provided by a source meter (2401, Keithley Instruments, Cleveland, OH, USA) and the voltage across the test device was measured by a data acquisition (DAQ) card (PCIe-6321, National Instruments, Austin, TX, USA). As the temperature changes every 10 seconds by programming the Peltier heater, the temperature increases from 30℃ to 90℃ and then decreases back to 30 ℃ 59 while the DAQ card synchronously records the voltage values. The resistance values were calculated by voltage divided by current (𝑅 = 𝑉/𝐼). The measured resistance is plotted in Figure 22(b). The heating and cooling curves show 2 orders of resistance drop and hysteresis across the VO2 phase transition between 50℃ and 70℃. This phenomenon indicates that the chemical composition and stoichiometry of the VO2 film were not altered during the fabrication process. 3.3 Characterization of VO2 Scanner 3.3.1 Optical Setup An optical setup was assembled to assess the performance of the scanner as shown in Figure 23. The optical setup comprises a 660 nm laser (iBeam Smart PT660, Toptica Photonics, Munich, Germany), 50:50 beam splitter (BS; BS004, Thorlabs, Newton, NJ, USA), and a 2D position sensing detector (PSD; PSM2-20 and OT-301, On-Trak Photonics, Inc.). The 0.9 mm laser beam was collimated by a lens (F230APC-633, Thorlabs, Newton, NJ, USA), turned by the beam splitter, and reflected by the MEMS scanner onto the PSD, which outputs the beam position in voltages along the x- and y-axis. A DAQ system (cRIO-9045, National Instruments, Austin, TX, USA), including a current output module (NI-9265, National Instruments, Austin, TX, USA), a current input module (NI-9203, National Instruments, Austin, TX, USA), and a voltage input module (NI- 9223, National Instruments, Austin, TX, USA), was utilized to control the MEMS scanner and sense the position. The scanner has 4 legs (actuators), with two ports in each leg (labeled as inlet and outlet buses in Figure 23). The inlets from the 4 legs were connected to the current output module, and the outlets were connected to the current input module for current sensing. The current input and output modules shared a common ground, creating a closed-loop circuit for each leg. 
The scanner 60 can be driven using the differential method when applying out-of-phase sinusoid currents on a pair of legs. For example, in Figure 23, legs 2 and 4 scan along the vertical y-axis, and legs 1 and 3 rotate along the horizontal x-axis. The driving current and the position reading from the PSD were synchronically acquired in a TDMS file by a custom LabVIEW (National Instruments, Austin, TX, USA) program. Hereafter, the TDMS files were post-processed and analyzed using MATLAB (MathWorks, Natick, MA, USA). Figure 23. Optical measurement system for the VO2 scanner characterization. Schematic diagram of the optical setup and data acquisition system. The four actuators can be driven using the differential method (i.e. 1&3, 2&4) for biaxial scanning. The driving currents and scanning beam can be sensed by the NI DAQ system simultaneously. 3.3.2 Quasistatic and Step Response The quasistatic characterization of the MEMS scanner was performed to estimate the tip/tilt DC response and scanning angle. In this experiment, a single leg on the scanner was actuated with a DC current sweep from 0 to 20 mA in steps of 0.5 mA every 5 seconds (50% duty cycle), which allowed enough time for the temperature to settle before the next current increase step. The total optical deflection angle corresponding to each current value was monitored. The result of the tip/tilt DC response is shown in Figure 24(a). When applied with a current of 20 mA, the scanner 61 can achieve a maximum deflection angle of 1.2° from the starting position. Both the x- and y-axis demonstrate similar curves. The difference between the two axes might come from the fabrication error and resistor mismatch on the legs. Figure 24. DC (quasistatic) and step response of the MEMS scanner. (a) Quasistatic response of the VO2 MEMS scanner. As a DC driving current is applied along two axes individually, the optical deflection angle is measured by the PSD. The maximum deflection angle is around 1.2° for both axes at 20 mA. (b) Step response of the VO2 MEMS scanner for a current cycle between 0 and 20 mA, with a cycle period of 5 s. The deflection angle is normalized between 0 and 1. (c-d) Detailed plot of the rising/falling edge. Within the 2% error range, the settling time at the rising edge is 0.6 s, while it is 0.5 s at the falling edge. The oscillation frequencies observed at both edges are around 94 Hz. 62 The step response was conducted to measure the dynamics of the MEMS scanner for the tip/tilt motions. The deflection angle measured by the PSD was normalized ranging from 0 to 1, and the step response along the x-axis is plotted for two cycles in Figure 24(b) (results along the y-axis are similar). The details at the rising and falling edges are illustrated in Figure 24(c-d). The steady state is defined as ±2% error margin around the final position. The settling time was measured from the rising/falling edge to the steady state: 0.6 and 0.5 seconds, respectively. The rising time is slightly longer than the falling time, most likely due to heat convection losses through the air. At both edges, the step response is accompanied by an oscillation of ~94 Hz, which corresponds to the resonant frequencies of the scanner, discussed in the next section. 3.3.3 Frequency Response of VO2 MEMS Scanner A finite element analysis (FEA) model has been built to simulate the resonant frequencies of the VO2 MEMS scanner. First, a 3D model of the device is created in the computer-aided design (CAD) software (SolidWorks, Version 2019, Waltham, MA, USA). 
The layouts of the four photomasks were exported as DXF files into SolidWorks and then extruded as solid bodies with the actual thicknesses used in the fabrication. This 3D CAD file was directly synchronized into the FEA software (COMSOL Multiphysics, Version 5.3, COMSOL Inc., Burlington, MA, USA) via the LiveLink plugin. The structural layers of the 3D model (including the Si substrate, SiO2, VO2 thin films, and metal traces) were assigned the corresponding material properties (Table 3), some of which use the built-in piecewise curves. To obtain the resonant frequencies of the scanner, the FEA model was simulated using eigenfrequency analysis, which is related only to the device structure and materials. Only the solid mechanics properties (density, Young's modulus, Poisson's ratio, and gravity) are required in the study.
Table 3. Material properties of the VO2-based MEMS scanner used in the FEA model.
Material Property (Unit)     Si               SiO2    VO2     Au
Density [kg/m³]              Built-in curve   2200    4670    19300
Young's Modulus [GPa]        Built-in curve   70      140     70
Poisson's Ratio [1]          Built-in curve   0.17    0.33    0.44
Figure 25. Resonant frequency simulation and characterization of the MEMS scanner. (a) COMSOL simulation for eigenfrequency analysis of the VO2 MEMS scanner. The first three modes, piston, tip, and tilt motions, are shown with resonant frequencies at 58.27 Hz, 98.67 Hz, and 100.26 Hz, respectively. (b) Full-range frequency response of the VO2 MEMS scanner for the tip/tilt modes. The scanner is excited by an input current of 10 mA peak-to-peak (Ipp) with a swept frequency fswp of 1~200 Hz. The magnitude is displayed as the total optical scan angle (degrees). (c) Zoom-in frequency response of the direct system. The maximum optical scan angle is 3.4° at 94 Hz on the x-axis and 2.8° at 93 Hz on the y-axis. (d) Frequency response of the coupled system.
The FEA model was meshed with a coarser physics-controlled mesh size. Only the first three modes are displayed in Figure 25, whereas the other higher-frequency modes are associated with undesired vibration of the legs. The eigenfrequencies of the three modes, representing piston (up-and-down along the z-axis), tip (rotation about the x-axis), and tilt (rotation about the y-axis) motions (see Figure 25(a) for visualizations of these modes), are 58.27 Hz, 98.67 Hz, and 100.26 Hz, respectively. Despite the symmetry of the VO2 scanner, the computed eigenfrequencies of tip/tilt are not identical, due to computational errors (e.g., discretization and numerical errors). In real devices, the measured frequencies of the tip/tilt resonant modes are separated by fabrication mismatch. However, since the resonant frequencies of the tip and tilt motions are very close, a coupling effect appears in the simulation results, where the motions are not pure rotations about a single axis but contain parasitic vibrations along both axes. On the PSD, the deflection angle is acquired and analyzed along the direct (x-drive to x-response and y-drive to y-response) and coupled (x-drive to y-response and y-drive to x-response) axes. The full-range frequency response along the direct axes is plotted in Figure 25(b), where the magnitude is displayed as the total optical scan angle (degrees). The resonant frequencies are measured at 94 Hz (x to x) and 93 Hz (y to y), which agree with the oscillation frequency observed in the step response. The error between simulation and experimental measurement is about 6~7%. Using the same simulation parameters, the simulated and experimental results of the current and previous designs are compared in Table 4 [142].
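As a quick check of the quoted 6~7% discrepancy for the current design, a minimal MATLAB sketch using only the values listed in Table 4:
% Relative error between simulated eigenfrequencies and measured resonances
% (current design values from Table 4).
f_exp = [93 94];                 % measured tip/tilt frequencies [Hz]
f_sim = [98.67 100.26];          % simulated eigenfrequencies [Hz]
err_pct = 100*(f_sim - f_exp)./f_exp;
disp(err_pct)                    % ~6.1 and ~6.7 percent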
The overall simulation errors of 5~7% demonstrate that the same FEA parameters for eigenfrequency analysis can accurately estimate the resonant frequencies of MEMS scanners with different sizes.
Table 4. Comparison between simulated and experimental resonant frequencies.
Designs            Mirror Size (µm)   Tip (Experiment)   Tip (Simulation)   Tilt (Experiment)   Tilt (Simulation)
Current Design     1600 × 1600        93 Hz              98.67 Hz           94 Hz               100.26 Hz
Previous Design    600 × 600          739 Hz             764.36 Hz          739 Hz              765.33 Hz
The detailed views of the direct and coupled system responses are displayed in Figure 25(c-d), including magnitude and phase curves. When the scanner was driven with 10 mA Ipp at the resonant frequencies, the optical scan angle reached 3.4° and 2.8° on the x- and y-axis, respectively, which is much larger than the 0.3° angle measured during the quasistatic actuation at the same driving current. As expected, a phase drop of π rad is observed at the resonant frequencies. The coupled response reveals a smaller optical scan angle, 1.3° and 1.8°, with phase shifts of 3π rad and π rad on the y- and x-axis, respectively.
3.4 Raman Imaging Based on Lissajous Scanning
3.4.1 Raman Imaging Setup Using VO2 MEMS Scanner
Raman scattering light is collected by a Czerny-Turner spectrometer (Figure 26(a)), consisting of a slit (200 µm width), a plane mirror (M1), two focusing mirrors (M2-3), a diffraction grating (1200 lines/mm groove density, 850 nm blaze), and a back-illuminated deep-depletion CCD detector (DU920P Bx-DD, Andor Technology, Belfast, Northern Ireland, UK). Incident chromatic light is dispersed by the grating and projected onto the CCD, ranging from 835 nm to 912 nm (center wavelength at 875 nm). The theoretical spectral resolution is 0.63 nm. A custom Raman probe comprises a custom fiber bundle (Fiberguide Industries, Caldwell, ID, USA) and a lens (L1, f = 6.83 mm; PLCX-4.0-3.1-UV, CVI Laser Optics, Albuquerque, NM, USA). The fiber bundle is used to send illumination light from a 785 nm laser (iBeam Smart 785, Toptica Photonics, Munich, Germany) and collect the Raman scattering light. The lens generates a 1 mm illumination beam and a slightly convergent collection beam, focused 9 mm away from the probe. An optical relay with a collimator (L2, f = 100 mm; AC254-100-B-ML, Thorlabs, Newton, NJ, USA) and a condenser (L3, f = 80 mm; AC254-80-B-ML, Thorlabs, Newton, NJ, USA) is deployed between the fiber bundle and the spectrometer for coupling the collected light. Rayleigh scattering light is blocked by a long-pass filter with a cut-on wavelength at 830 nm (LPF; BLP01-830R-25, Semrock, Rochester, NY, USA). Raman light is imaged on the CCD detector with 1024 pixels × 256 pixels. The 2D images on the CCD are converted into 1D spectra via the full vertical binning (FVB) mode along the 256-pixel direction.
Subsequently, the custom Raman probe was assembled with the VO2 MEMS scanner in a polymer (PLA) housing printed by a 3D printer (Ender-3, Creality, Shenzhen, China). The housing was specially designed so that the MEMS scanner can be directly mounted at 45° without additional alignment. The illumination light was reflected by the scanner down toward the sample plane. When the MEMS scanner operates biaxially at the resonant frequencies, Lissajous scanning can achieve a large FOV on the sample plane, as illustrated in Figure 26(b). The scanner was driven by the NI cRIO DAQ system mentioned in Section 3.3.1. The scan trajectory was synchronized with the acquisition of the CCD to allow for image reconstruction from the Raman spectra.
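For reference, the spectrometer window quoted above can be converted into Raman shift relative to the 785 nm excitation; the following MATLAB sketch uses only the wavelengths stated in this section:
% Convert the CCD wavelength coverage to Raman shift relative to 785 nm excitation.
lambda_ex = 785;                   % excitation wavelength [nm]
lambda    = [835 875 912];         % CCD coverage: minimum, center, maximum [nm]
shift_cm1 = 1e7*(1/lambda_ex - 1./lambda);   % Raman shift [cm^-1]
disp(round(shift_cm1))             % ~763, 1310, and 1774 cm^-1
The detection window therefore corresponds to Raman shifts of roughly 760~1770 cm^-1.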
To synchronize the scanner and the CCD, an external TTL output signal (generated by the detector whenever an acquisition event occurs) was connected to the DAQ system as an external sampling clock via the PFI0 port. With the fastest horizontal readout rate of 3 MHz and a vertical shift speed of 1.7 µs, the CCD detector can reach a minimum exposure time of 0.001 s, corresponding to a maximum sampling rate of 1 kHz on the DAQ.
A Lissajous scanning pattern produced by two sinusoidal waveforms varies with their frequencies and phases. The frame rate of the repeating Lissajous pattern is determined by the greatest common divisor (GCD) of the two frequencies, while the density of scanned lines across the FOV is proportional to the sum of the coprime components of the frequencies. The FOV is related to the amplitude of the waveform and the measurement distance. The fill factor (FF) is calculated by pixelization of the scanning trajectory. In most cases, assuming that all the lines are sampled continuously, the FF increases with the density of scanned lines.
Figure 26. Raman imaging setup and Lissajous scanning pattern using the MEMS scanner. (a) Schematic drawing of the optical setup and DAQ system. The 45° VO2 MEMS scanner and a focusing lens (L1) are assembled in a 3D printed housing. A fiber bundle is used to deliver a 785 nm laser and collect the Raman light back to the spectrometer through a pair of relay lenses (L2-3) and a long-pass filter (LPF). The spectrometer consists of three mirrors (M1-3), a diffraction grating (1200 lines/mm), and a CCD detector. The scanner is driven by the NI DAQ system synchronized with the CCD via a 1 kHz TTL signal. Lissajous scanning patterns are produced by two-axis scanning. (b) Photograph of the Lissajous scanning pattern. (c) Simulated Lissajous scanning trajectory using the resonant frequencies of fx = 94 Hz and fy = 99 Hz. (d) Simulated sampling points based on the scanning trajectory and the camera sampling rate, generating a different Lissajous pattern. (e) Simulated pixelization for a 15 px × 20 px image. Pixels not sampled by any point are drawn in blue. The FF is calculated to be 81.3%.
In our study, after the scanner was installed in the housing, the resonant frequencies (fx, fy) drifted from (94 Hz, 93 Hz) to (94 Hz, 99 Hz) due to environmental changes. The driving frequencies (94 Hz, 99 Hz) were chosen to maximize the density of lines with a frame rate of 1 FPS (GCD = 1). The driving signals on the x- and y-axis are expressed in Equation 6.
Ix(t) = 10 sin(2π·fx·t + φx) + 10 mA
Iy(t) = 10 sin(2π·fy·t + φy) + 10 mA
(fx, fy) = (94 Hz, 99 Hz)                                   (6)
(φx, φy) = (−π/2 rad, −π/2 rad)
The Lissajous scanning trajectory based on the driving frequencies is simulated in Figure 26(c). Nevertheless, considering that the 1 kHz sampling rate of the CCD is only about 10 times higher than the driving frequencies, the scanned lines are sampled at discrete points with an interval of 0.001 seconds, creating a completely different Lissajous pattern within a complete period, as shown in Figure 26(d). Prior to pixelization, the number of pixels of the image must be decided based on the FOV and optical resolution. Driven with an amplitude and offset of 10 mA, the scanner generates a total optical scan angle of 18°. The FOV is measured at 5.5 mm × 7 mm on the sample plane, 22 mm away from the scanner. The resolution is limited by the 1 mm spot size of the illumination beam.
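The discrete Lissajous sampling and pixelization described above can be illustrated with a minimal MATLAB sketch; the 15 × 20 pixel grid anticipates the Nyquist argument in the next paragraph, and the phases and rounding conventions are simplified here, so the printed fill factor may differ slightly from the 81.3% obtained for the actual system.
% Simplified Lissajous sampling and pixelization (frequencies, sampling rate,
% and grid size from the text; phases and rounding are illustrative).
fx = 94; fy = 99;                 % driving frequencies [Hz], GCD = 1 -> 1 FPS
fs = 1000;                        % CCD-limited sampling rate [Hz]
t  = (0:1/fs:1-1/fs);             % one full Lissajous period (1 s)
x  = sin(2*pi*fx*t - pi/2);       % normalized beam position on the sample
y  = sin(2*pi*fy*t - pi/2);
nx = 15; ny = 20;                 % image size chosen from the FOV and 1 mm spot
ix = min(floor((x+1)/2*nx)+1, nx);   % map [-1,1] to pixel indices 1..nx
iy = min(floor((y+1)/2*ny)+1, ny);
sampled = false(ny, nx);
sampled(sub2ind([ny nx], iy, ix)) = true;
FF = nnz(sampled)/numel(sampled); % fill factor = sampled pixels / total pixels
fprintf('Fill factor: %.1f %%\n', 100*FF);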
To satisfy the Nyquist criterion (at least 2 pixels per 1 mm), the image is reconstructed with 15 pixels × 20 pixels. Finally, the discrete sampling points are mapped to the pixels and displayed in Figure 26(e). Pixels occupied by at least one point are plotted in yellow, whereas the blue pixels represent unsampled pixels that will be interpolated with neighboring values. The FF is defined as the ratio of sampled pixels to total pixels, i.e., (# of sampled pixels)/(# of total pixels). Here, the FF is computed as 244/300 = 81.3%.
3.4.2 Lissajous Scanning Raman Images
A paper phantom made with SERS NPs was prepared for the Lissajous scanning Raman imaging. A custom-made SERS NP solution (S440 flavor) at a high concentration of 1630 pM was dropped onto a piece of paper. The SERS NP paper was cut into the word “FLY” with a line width of 1 mm. The dimension of each letter is about 5 mm × 8 mm, which can be covered by the FOV (5.5 mm × 7 mm) of the Lissajous pattern. The SERS paper phantom was then imaged by the Raman system with the VO2 MEMS scanner. Following the same settings as in Section 3.4.1, the CCD detector was set to a kinetic mode, acquiring 10,000 spectra in 10 seconds, to improve the signal-to-noise ratio (SNR) by averaging the spectra. The spectra were averaged across the 10 repeated Lissajous periods, and the resulting 1000 averaged spectra within one period were converted into 1000 weights using the direct classical least squares (DCLS) algorithm, based on a hybrid method of least squares regression and principal component analysis (PCA). These weights correspond to the concentration of SERS NPs at the sampled positions. The DCLS algorithm requires multiple data sets for the weight computation, including reference spectra of the SERS NPs, background spectra, and measured spectra with SERS. The background spectra usually contain the intrinsic signals from the sample, the optical system, and noise from the detector. Because the integration time of the data acquisition was very short (0.001 seconds), the background signals were dominated by the detector noise, whereas the intrinsic signals from the tissues and optics could be neglected. The details of the algorithm have been introduced in previous studies [21, 32, 33].
Based on the simulated Lissajous trajectory and sampling points, the 1000 weights were arranged at the corresponding positions. When multiple sampling points were located within a single pixel, the pixel value was obtained by averaging their weights. Although the scanner was driven with phases (φx, φy) = (−π/2 rad, −π/2 rad), the actual phase response differed from the driving phases. Thus, phase adjustment was performed on both axes to remove defects in the reconstructed images, where adjusted phases of (φx, φy) = (−0.45π rad, −0.15π rad) were applied. The signal averaging, DCLS algorithm, phase adjustment, and image reconstruction were performed by post-processing the raw data (10,000 spectra in a text file) using a MATLAB script. The SERS signals of the “FLY” paper phantom were calculated into weights and reconstructed as 15 × 20 pixel Raman images in Figure 27(a). These paper phantoms served as reference targets for the phase adjustment. Although the shape and intensity distribution of the three letters are recognizable, they are not as sharp as in the photograph, which is likely due to several limitations, such as the 1 mm beam diameter, the 1 kHz sampling rate, and the signal integration time on the camera. The image quality and resolution can potentially be improved by deconvolution.
Figure 27. Lissajous scanning Raman images of a paper phantom and a breast tumor.
(a) Reconstructed Raman images of the “FLY” SERS paper phantom. (b) Photograph of the SERS-stained breast tumor overlaid with the reconstructed Raman image. All the samples were scanned at frequencies of 94 and 99 Hz over a FOV of 5.5 mm × 7 mm. Scale bars = 1 mm.
After the Lissajous Raman imaging technique was validated using the paper phantoms, we demonstrated its capability for tissue imaging. A mouse breast tumor was dissected from a tumor-bearing mouse model (double transgenic MUC1/MMTV) [93]. All procedures performed on animals were approved by the Institutional Animal Care & Use Committee at Michigan State University (IACUC, Protocol #PROTO202100095). The breast tumor without any SERS NPs was first scanned by the custom-made Raman probe for background spectrum acquisition. Next, the tissue was topically stained with the SERS NP solution (S440 flavor, 1630 pM, 5 µL). Immediately after staining, the breast tumor with SERS NPs was scanned over a FOV of 5.5 mm × 7 mm. Then, the SERS signals were reconstructed into an image with the same pixel number and phase adjustment, as illustrated in Figure 27(b). The intensity distribution of the image is consistent with the distribution of the SERS NP solution on the tissue. Although the concentrations of the staining solution on the paper phantom and the breast tumor are the same, the weights displayed for the latter are weaker because the SERS NP solution spread across the tissue and its local concentration decreased.
3.5 Conclusion and Future Work of VO2 Scanner
In summary, a VO2-based MEMS scanner with a large mirror (1.6 mm × 1.6 mm) has been successfully demonstrated for Lissajous scanning Raman imaging. To our knowledge, this is the first Raman imaging technique based on a MEMS scanner driven in a Lissajous pattern. The quasistatic deflection angle, step response, and frequency response of the scanner were characterized. Its biaxial resonant frequencies were measured at 93 Hz and 94 Hz, consistent with the FEA simulation results. Driven with a sinusoidal current of 20 mA Ipp at the resonant frequencies for Lissajous scanning, the scanner can reach a total optical scan angle of 18°, resulting in a FOV of 5.5 mm × 7 mm at a distance of 22 mm. The scanner was then integrated into a custom Raman probe and a Raman spectroscopic imaging system. The driving currents are synchronized with the CCD detector to capture and reconstruct Raman signals following the Lissajous scanning trajectory. A Raman image reconstruction method for Lissajous scanning is reported using a simulated Lissajous pattern with a sparse sampling rate and the DCLS algorithm. SERS signals from a paper phantom and a mouse tissue were successfully reconstructed into images. Thus, the scanning and imaging capabilities of the VO2 MEMS scanner have been demonstrated.
Future work will concentrate on different applications of the VO2 MEMS scanner; for instance, fluorescence microscopy and two-photon microscopy in miniaturized devices. Moreover, for the Lissajous scanning Raman imaging, the scanner can be designed with a much larger size and lower resonant frequencies, which can potentially increase the collection efficiency and improve the quality of the reconstructed Raman images. Finally, this MEMS scanner will be deployed in a handheld imaging device for ex vivo or in vivo disease diagnosis.
CHAPTER 4: PORTABLE CONFOCAL MICROSCOPY
4.1 Dual-Axis Confocal Microscope
Confocal microscopy has become a powerful tool for semiconductor inspection and biomedical imaging.
Without physically sectioning the samples, it can collect 3D structural information via optical sectioning. The optical sectioning capability is achieved by a diffraction-limited illumination spot and pinhole detection, which rejects out-of-focus light. In a fiber-optic implementation, the optical fiber core can serve as the pinhole. Since confocal microscopy can produce high-resolution, high-contrast images, it has been very attractive for in vivo, intraoperative biomedical applications, which usually require miniaturization and flexibility. A conventional single-axis confocal system usually uses a high numerical aperture (NA) objective lens to generate a diffraction-limited spot. However, a bulky high-NA lens limits the working distance and is hard to miniaturize. Over the past decades, researchers have proposed the architecture of dual-axis confocal (DAC) microscopy, which separates the illumination and collection paths, allowing low-NA lenses with a longer working distance (WD), a larger field of view (FOV), high axial resolution, and the potential for miniaturization [147].
Figure 28. Schematic of the dual-axis confocal microscopy. The resolution is defined by the area where the two focused beams intersect.
The resolution of dual-axis confocal microscopy is given in Equation 7 [148], which relates it to the cross angle of the two beams (θ), the numerical aperture (NA) of the focused beams, the wavelength (λ), and the refractive index (n), where Δx, Δy, and Δz are the x-lateral, y-lateral, and axial resolutions according to the definitions in Figure 28. The coefficient (coeff) is 0.466 for the 1/e² beam size.
Δx = coeff·λ / (n·NA·cos θ)
Δy = coeff·λ / (n·NA)                                   (7)
Δz = coeff·λ / (n·NA·sin θ)
A basic dual-axis confocal microscope consists of two low-NA objective lenses (typically < 0.3), a scanning mirror, and a hemispherical solid immersion lens (SIL). The use of low-NA beams provides a longer WD that allows the placement of a scanner between the focusing objective lenses and the sample, called “post-objective scanning”. In such a configuration, the two beams can be scanned and de-scanned by the same scanner, ensuring that their on-axis alignment is maintained regardless of the angle of the scanner [149]. Given a fixed scanning angle, a longer WD can also achieve a larger FOV, compared to traditional single-axis confocal microscopy. The cross angle between the illumination and collection paths can vary between systems due to constraints on the geometry, scanner size, and position of the scanning mechanism. The use of the fused silica SIL offers several benefits: minimizing off-axis and spherical aberrations, approximately matching the refractive index of most biological tissues (fused silica, n = 1.45), and increasing the effective focusing NA [148].
Several variations of DAC microscopes have been developed to improve certain functions, such as imaging speed, imaging penetration, and imaging contrast. For instance, a miniaturized point-scan dual-axis confocal (PS-DAC) microscope has a limited frame rate due to its point-scanning mechanism. Line-scan dual-axis confocal (LS-DAC) microscopy is an alternative that boosts the frame rate by illuminating a focal line at the sample plane instead of a focal point [150, 151]. In another example, modulated-alignment dual-axis confocal (MAD) microscopy has been demonstrated to improve imaging contrast and depth by modulating the illumination beam for background signal removal [152, 153].
The illumination beam can also be switched from a standard Gaussian beam to Bessel illumination to improve the beam quality inside samples with refractive heterogeneities [154, 155]. Although these variants of DAC microscopes have been proposed, none of them is optimized for broadband operation from the visible to the NIR range, due to the chromatic aberration of their refractive optics. In our study, we have developed two types of portable confocal microscopes based on fully reflective optics (off-axis parabolic mirrors), which eliminate chromatic aberration and enable ultra-broadband operation from the visible to the NIR or even NIR-II ranges. To address different frame-rate requirements, the systems are designed as point-scan and line-scan variants. In the point-scan portable confocal system, the visible (488 nm) and NIR (785 nm) excitations can be operated simultaneously, and the emission signals are collected by photomultiplier tubes (PMTs), generating dual-channel broadband confocal images. In comparison, the line-scan portable confocal system utilizes a cylindrical lens to produce an excitation focal line, and the collected light is imaged onto a 2D camera detector with a virtual slit by a tube lens, which reintroduces chromatic aberration. Therefore, the line-scan system is optimized only for NIR imaging (600~800 nm). Both point-scan and line-scan systems have been developed and have demonstrated their capability for ex vivo tissue imaging and in vivo animal experiments using the NIR fluorescent dye ICG.
4.2 Point-Scan Portable Confocal Microscope
4.2.1 System Design of Point-Scan Confocal Microscope
We have developed a point-scan portable confocal microscope based on four 90° off-axis parabolic mirrors (OAPs) that collimate and focus the beams, as shown in Figure 29. The use of parabolic mirrors enables a fully reflective confocal system, which minimizes chromatic focal shifts across broadband wavelengths. These off-the-shelf parabolic mirrors reduce the cost and design complexity compared with custom-made optics.
Figure 29. Schematic of the point-scan portable confocal microscope. (a) The optical diagram of the portable confocal microscope, based on the dual-axis architecture. The illumination and collection beams are collimated and focused by parabolic mirrors. P1~P4 are used for optical axis alignment. (b) The CAD drawing of the portable confocal microscope. OAP2 and OAP3 are installed on fine-tuned rotation mounts to ensure that both beams intersect at the same focus. (c) A close view of the scanner and the SIL lens. (d) A photograph of the portable confocal microscope. The overall size is approximately 100 mm (X) × 100 mm (Y) × 200 mm (Z). A galvo scanner steers the beams in the x-direction while external actuators are utilized to move samples along the y- and z-axes. Note: OAP: off-axis parabolic mirror, P: prisms, SIL: solid immersion lens.
The system consists of two reflective collimators on the top (OAP1, OAP4; RC12FC-P01, Thorlabs, Newton, NJ, USA) and bare parabolic mirrors on the bottom (OAP2, OAP3; MPD129-P01, Thorlabs, Newton, NJ, USA) with identical parent focal distance (25.4 mm) and reflective focal distance (50.8 mm). The focused beams from OAP2 and OAP3 are designed with a cross angle of 60° and an NA of 0.12 (the NA of the single-mode fibers). The four parabolic mirrors are assembled in a custom frame fabricated by the MSU physics machine shop.
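Given the 60° cross angle and fiber NA quoted above, Equation 7 can be evaluated directly. The MATLAB sketch below assumes θ = 30° (half the cross angle) and an effective n·NA product of about 0.13, chosen so that the 488 nm values reproduce the theoretical resolutions listed later in Table 5; both parameters are illustrative assumptions rather than measured quantities.
% Equation 7 evaluated across the design wavelengths (assumed theta and n*NA).
coeff  = 0.466;                            % 1/e^2 beam-size coefficient
theta  = deg2rad(30);                      % assumed half of the 60-degree cross angle
nNA    = 0.13;                             % assumed effective n*NA product
lambda = [0.488 0.660 0.785 0.980 1.310];  % wavelengths [um]
dx = coeff*lambda/(nNA*cos(theta));        % x-lateral resolution [um]
dy = coeff*lambda/nNA;                     % y-lateral resolution [um]
dz = coeff*lambda/(nNA*sin(theta));        % axial resolution [um]
disp([dx; dy; dz])                         % compare with Table 5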
The bottom parabolic mirrors are mounted on custom rotation mounts to fine-tune the cross angle of the two focusing beams and ensure that the two beams intersect at the same point. Due to the tolerances of the optomechanical assembly, if the collimated beam from the top parabolic mirror is not parallel (off-axis) to the optical axis of the bottom mirror, aberrations (coma and astigmatism) appear and the focus cannot reach a diffraction-limited spot size. Therefore, on both the illumination and collection paths, two pairs of Risley wedge prisms (P1-P4, wedge angle 0.5°; 10QB20, Newport Corporation, Irvine, CA, USA) are installed for optical alignment. A wedge prism refracts the light at a specific angle. When two wedge prisms placed close together (Risley configuration) are rotated independently, the transmitted beam can be steered in any direction within a circular pattern. By steering the prisms, we can ensure the collimated beam is parallel (on-axis) to the optical axis of the parabolic mirror.
4.2.2 Optical Alignment of Point-Scan Confocal Microscope
Optical alignment of the confocal system is accomplished by shooting two 660-nm laser beams from both the illumination and collection sides. A 100X finite-corrected objective lens (Plan M100/0.90 170, Unitron, Commack, NY, USA) is used to magnify and project the two focused spots onto a screen. Beam quality, aberration, and overlap can be directly observed with the naked eye (Figure 30). First, the beam quality and aberration are improved by rotating the wedge prisms until a diffraction-limited spot is obtained (Figure 30(b-c)). Second, the 60° cross angle of the two focal points is achieved by rotating the fixtures of the bottom parabolic mirrors. Once the two diffraction-limited spots are moved to the closest position, the two spots are likely still separated in the vertical direction (Figure 30(d)). The separation is caused by errors between the two parabolic mirrors and in the mechanical assembly. Nevertheless, there are no other tunable mechanisms to move the spots vertically. Last, a final “trick” is performed by rotating the wedge prisms again, making the two spots overlap (Figure 30(e)), but this step could degrade the beam quality and confocal spatial resolution. It is better to keep the diffraction-limited spots as circular (Gaussian beam) as possible. If the vertical separation is too large (> 5 mm on the screen after the 100X objective lens), it might be impossible to finish the final trick due to mechanical errors. The solution would be to reassemble the optomechanical parts or to use two mirrors from the same batch.
Figure 30. Optical alignment of the point-scan portable confocal microscope. (a) Photograph of the optical alignment setup. (b) Focal spots with aberrations. (c) Focal spots without aberrations but still of poor quality (not diffraction-limited). (d) Vertically separated diffraction-limited spots. (e) Overlapped diffraction-limited spots upon the completion of the optical alignment.
4.2.3 Resolution of Point-Scan Confocal Microscope
The theoretical resolutions along the x-, y-, and z-directions at various wavelengths, from the visible to the near-infrared (NIR) range, are listed in Table 5.
Table 5. Theoretical resolutions of the point-scan confocal microscope.
Resolution (µm)   488 nm   660 nm   785 nm   980 nm   1310 nm
Lateral X         2.02     2.73     3.25     4.06     5.42
Lateral Y         1.75     2.37     2.81     3.51     4.70
Axial Z           3.50     4.73     5.63     7.03     9.39
Upon completion of the alignment, the resolution of the confocal system was characterized at 488 nm, 660 nm, 785 nm, 980 nm, and 1310 nm. To measure the resolution, the laser was sent from the illumination side and the intensity response was detected at the collection side. At the focal plane, a knife-edge reflective target was translated along the x-, y-, and z-axes. As the focal point crossed the sharp edge of the reflective target, the intensity changed rapidly on the collection side. This transition represents the point spread function (PSF) of the confocal system. The lateral resolutions (X, Y) are determined by the 10-90% width of the transition, whereas the axial resolution (Z) is defined by the full width at half maximum (FWHM) of the Gaussian-fitted curve. The experimental resolution results are plotted in Figure 31. Note that the resolutions in the visible range were characterized with a single-mode fiber (780HP), whereas the resolutions in the NIR range were measured with a single-mode fiber (SMF-28+) connected to the collection port. Since the mode field diameter (MFD) of SMF-28+ (~9 µm) is larger than that of 780HP (~5 µm), light at shorter wavelengths propagating in SMF-28 is slightly multi-mode, affecting the experimental resolution. The experimental resolution results at multiple wavelengths are presented in Table 6. The axial resolutions were sequentially measured at different wavelengths. With reference to 660 nm, the chromatic shifts were negligible (< 1 µm) in the visible range and less than 5 µm at 1310 nm. The FWHM of the axial resolution is dominated by the MFD of the fiber.
Figure 31. Resolution of the point-scan portable confocal microscope. (a-b) Lateral and axial resolution in the visible range. (c-d) Lateral and axial resolution in the NIR range. Note: Lateral resolution: 10~90% response in the reflective mode; Axial resolution: FWHM in the reflective mode.
Table 6. Experimental resolutions of the point-scan confocal microscope. (Axial Z entries are given as FWHM @ focal shift relative to 660 nm, in µm.)
Fiber     Resolution (µm)   488 nm        660 nm      785 nm         980 nm         1310 nm
780HP     Lateral X         2.55          4.05        4.66           -              -
          Lateral Y         3.24          5.37        4.48           -              -
          Axial Z           4.25 @ -0.8   5.66 @ 0    4.99 @ +0.07   -              -
SMF-28    Lateral X         -             4.10        -              5.94           6.43
          Lateral Y         -             3.04        -              5.28           4.31
          Axial Z           -             7.84 @ 0    -              8.69 @ +1.29   8.37 @ +3.52
4.2.4 Imaging System of Point-Scan Confocal Microscope
Since the pinhole in confocal microscopy only allows single-point detection, 2D or 3D imaging requires scanning the focal point over the specimen by either steering the laser or translating the sample. A galvanometer scanner (6210H with 10~15 mm Y-mirror, Cambridge Technology, Bedford, MA, USA) steers the beam in the x-direction. A linear stage (AG-LS25, Newport Corporation, Irvine, CA, USA) translates the sample along the y-axis, while a piezoelectric actuator (P-783.ZL, Physik Instrumente, Karlsruhe, Germany) controls the imaging depth along the z-axis. A hemispherical solid immersion lens (SIL, fused silica, n = 1.46, OD 10, trimmed 0.25 mm; Tower Optical Corporation, Boynton Beach, FL, USA) contacts the tissue specimens to match the refractive index and extend the imaging depth. To take advantage of the broadband property of the confocal microscope, the system is designed to image fluorescent dyes excited by 488 nm and 785 nm lasers simultaneously.
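Relating back to the resolution measurements in Section 4.2.3, the following MATLAB sketch illustrates the 10-90% knife-edge analysis with a synthetic edge response; the 1.7 µm Gaussian width is a placeholder, not a measured value.
% Knife-edge analysis: the 10-90% width of the edge response gives the lateral
% resolution; the implied Gaussian FWHM corresponds to the axial criterion.
x    = (-10:0.1:10)';                      % stage position [um] (synthetic)
edge = 0.5*(1 + erf(x/(sqrt(2)*1.7)));     % simulated edge response, sigma = 1.7 um
i10  = find(edge >= 0.1, 1);               % first sample above 10%
i90  = find(edge >= 0.9, 1);               % first sample above 90%
w1090 = x(i90) - x(i10);                   % 10-90% width [um]
sigma = w1090/2.563;                       % Gaussian sigma implied by the 10-90% width
fwhm  = 2.355*sigma;                       % corresponding FWHM [um]
fprintf('10-90%% width: %.2f um, FWHM: %.2f um\n', w1090, fwhm);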
In Figure 32, the confocal imaging system consists of (a) the multi-channel free-space laser combiner, (b) the confocal microscope, (c) the multi-channel fluorescence collection module, and (d) the data acquisition system. The 488 nm and 785 nm lasers (iBeam Smart PT488 & iBeam Smart PT785, Toptica Photonics, Munich, Germany) are combined by a dichroic mirror (DM1; FF757-Di01-25x36, Semrock, Rochester, NY, USA) and coupled into a single-mode fiber (Figure 32(a)). Two single-mode fibers (780HP) are connected to the confocal microscope as input and output (Figure 32(b)). The mixed-channel fluorescence signals are collected and then separated by a dichroic mirror (DM2; FF750-SDi02-25x36, Semrock, Rochester, NY, USA) into two channels: the NIR and visible ranges (Figure 32(c)). Band-pass and long-pass filters (FF01-550/88-25 & BLP01-785R-25, Semrock, Rochester, NY, USA) are used to remove the excitation light. The cleaned fluorescence signals are detected by photomultiplier tubes (PMT; H7422-40 & H7422-50, Hamamatsu Photonics, Hamamatsu, Japan) via multi-mode fibers (M31L01, Thorlabs, Newton, NJ, USA). The current outputs from the PMTs are then converted to voltages by low-noise transimpedance amplifiers (DHPCA-100, FEMTO, Berlin, Germany). A data acquisition (DAQ) board (PCI-6115, National Instruments, Austin, TX, USA) reads the multi-channel voltage signals from the PMTs and synchronously sends driving signals to the scanner and the Y-Z actuators (Figure 32(d)).
Figure 32. Schematic of the point-scan portable confocal imaging system. (a) The multi-channel laser combiner. (b) The main body of the portable confocal microscope. A galvo scanner scans the beam in the x-direction and the sample is translated by actuators along the y- and z-axes. (c) The multi-channel fluorescence collection module. (d) The data acquisition (DAQ) system. Note: DM: dichroic mirror, LPF: long-pass filter, BPF: band-pass filter.
A customized LabVIEW program (Version 2013, 32-bit, National Instruments, Austin, TX, USA) is used to control the DAQ card and reconstruct the input voltage signals into grayscale images in real time. The driving parameters are listed in Table 7. The analog input sampling rate (AI SRate) of 4 MSamples/second can generate images with 2500 × 800 pixels, based on a raster scanning pattern. The relationship between the frequencies and pixel numbers is shown in Equation 8, where fx and fz are the driving frequencies along the x- and z-directions. XZ-Y stack images (a series of consecutive XZ images along the Y direction) are obtained to reconstruct a 3D image.
Num of x pixels = AI SRate / (Num of y pixels × Frame Rate)
Num of y pixels = 2·fx / fz                                   (8)
Frame Rate = fz
Table 7. Driving parameters of the point-scan confocal microscope.
Axis   Part              Mode      Waveform    Frequency   Amplitude   FOV
X      Galvo 6210H       Analog    Sine wave   800 Hz      ±1 V        830 µm
Y      Newport AG-LS25   Digital   Stepping    -           -           600 µm
Z      PI P-783.ZL       Analog    Sawtooth    2 Hz        0~10 V      240 µm
4.2.5 Distortion Correction of Confocal Images
Because the DAQ system is an open-loop driving mechanism without feedback, a phase delay might appear along the fast axis (x-axis), causing a hysteresis effect in the reconstructed images, where the pixels in the forward and backward scanning directions are interlaced. Although the interlacing defect can be mitigated by manually adjusting the phase delay (shifting pixels) in the LabVIEW program in real time, the defect still cannot be eliminated completely due to the instability of the galvo scanner and the bidirectional scanning of the fluorescence signals.
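Returning to the acquisition parameters above, a minimal MATLAB check of Equation 8 using the values in Table 7:
% Equation 8 evaluated with the parameters in Table 7.
AI_SRate  = 4e6;   % analog input sampling rate [Samples/s]
fx        = 800;   % galvo (fast-axis) frequency [Hz]
fz        = 2;     % piezo (frame) frequency [Hz]
frameRate = fz;                          % frames per second
numY      = 2*fx/fz;                     % lines per frame (forward and backward)
numX      = AI_SRate/(numY*frameRate);   % samples per line
fprintf('%d x %d pixels at %d FPS\n', numX, numY, frameRate);   % 2500 x 800 at 2 FPS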
In addition, the sinusoidal and sawtooth driving waveforms introduce distortion and mirror effects on the reconstructed images. As a result, the direct output of the reconstructed image with distortion and interlaced lines is displayed in Figure 33. An image post-processing algorithm is implemented to correct the distortion and resolved the interlacing defect, as described in Figure 33. First, the sampling points at either odd or even rows 84 are extracted for deinterlacing (Figure 33(b)). Second, the deinterlaced sampling points are resampled by linear interpolation with uniform intervals along the x-axis to correct the sinusoidal distortion. The other missing rows are also completed by linear interpolation along the y-axis. Last, these two steps can be achieved by 2D linear interpolation in a MATLAB script (Figure 33(c)). The confocal images before and after the distortion correction are shown in Figure 33(d-e). Figure 33. Distortion correction of confocal image reconstruction. The image was scanned in sinewave along the x-axis and sawtooth along the y-axis. (a) In the original scanning pattern, due to the sinusoidal driving, sampling points are nonuniformly distributed along the horizontal axis, especially near the edge of the image. The hysteresis effect can be found between the forward and backward directions, resulting in interlaced images. (b) Only the sampling points at odd or even rows are selected for deinterlacing. (c) The deinterlaced sampling points are resampled at uniform intervals through linear interpolation. (d) The original image with distortion and interlaced lines and (e) the distortion-corrected image. 4.2.6 Ex vivo Mouse Tissue Imaging Using Point-Scan Confocal Microscope Ex vivo fluorescence imaging has been performed with mouse tissue specimens that were 85 topically stained with fluorescence dyes with dual excitations (488 nm and 785 nm). The laser power out of the SIL was 2.5~3 mW for both 488 nm and 785 nm. Fluorescence images from multiple channels were acquired simultaneously at the same position. The ex vivo mouse tissues, including colon, breast tumor, kidney, and muscle tissues, were harvested from a tumor-bearing mouse model (MUC1/MMTV) [93], with an approved protocol by the Institutional Animal Care & Use Committee at the Michigan State University (IACUC, Protocol #PROTO202100095). Fluorescein (FITC) is the common fluorescent dye excited at 488 nm, while indocyanine green (ICG) and IR-783 are the fluorescent dyes at NIR 785 nm. In Figure 34(a), the mouse colon was topically stained with FITC (200 µg/mL) and ICG (20 µg/mL) sequentially. The en face (XY plane) confocal image was acquired at depth of 60 µm. In Figure 34(b), the mouse breast tissue was topically stained with FITC (500 µg/mL) and ICG (100 µg/mL) sequentially. The XY plane maximum intensity projection (MIP) confocal image was obtained at a depth from 60 to 150 µm. In Figure 34(c), the mouse kidney was topically stained with FITC (200 µg/mL) and IR-783 (200 µg/mL) sequentially. The MIP image on the XY plane was collected at a depth from 60 to 120 µm. To improve the transparency of biological samples and confocal imaging depth, a tissue- clearing method has been applied to the specimen. A fresh mouse skeletal muscle specimen was dissected and fixed in formalin at room temperature for 24 hours. After the tissue fixation, the tissue was stained with IR-783 (200 µg/mL) and FITC (200 µg/mL) respectively for 15 minutes. 
Then, the tissue was soaked in a clearing agent (FocusClear™, CelExplorer Labs, Hsinchu, Taiwan) at room temperature for 24 hours. Once the tissue was cleared, it cannot contact water; otherwise, it will revert to an opaque state. The confocal images of the cleared mouse muscle are displayed in Figure 35 and reconstructed in 3D visualization by Amira (Version 5.4.3, Thermo Fisher Scientific, Waltham, MA, USA). 86 Figure 34. Dual-channel confocal imaging of mouse tissues: colon, breast tissue, and kidney. The tissues were stained for dual channel imaging: 488 nm (Green) and 785 nm (Red). (a) Image of the mouse colon stained with FITC and ICG at a depth of 60 µm. (b) MIP image of the mouse breast tissue with FITC and ICG. (c) MIP image of the mouse kidney with FITC and IR-783. Scale bars = 5 mm in photos, and 100 µm in H&E and confocal images. Figure 35. Reconstructed 3D confocal images of cleared mouse muscle. The cleared sample was stained for dual channel imaging: 488 nm (Green) and 785 nm (Red). (a-c) MIP images of the muscle tissue in 488 nm, 785 nm, and merged channels. (d) Vertical cross-sectional image at the selected plane in (c). (e-f) Reconstructed 3D images in 488 nm and 785 nm channels, respectively. (g) H&E image of the cleared muscle (skeletal muscle). Scale bars = 100 µm. 87 One application of the portable confocal microscope is mouse brain imaging. The ex vivo mouse brain was stained with FITC (200 µg/mL) for 15 minutes and then imaged without removing the entire brain from the skull to maintain its shape during imaging (Figure 36(a)). The FITC-stained mouse brain vasculature was imaged using a commercial microscope (THUNDER, Leica Camera AG, Wetzlar, Germany) in Figure 36(b). The en face MIP and vertical cross- sectional images of vasculature are visualized in Figure 36(c-d). A stitched confocal image (FOV of 500 µm × 700 µm) of the mouse brain vascular network is color-coded with depth information in Figure 36(e) and reconstructed in 3D visualization in Figure 36(f). Figure 36. Ex vivo mouse brain imaging using the portable confocal microscope. The mouse brain was topically stained with FITC dye. (a) Photograph of the mouse brain imaging setup. (b) Fluorescence image of the mouse brain from using a commercial fluorescent microscope. (c) MIP image of the mouse brain vasculature. FITC signal was excited at 488 nm. (d) Vertical cross-sectional image at the selected plane in (c). (e) Color-coded depth projection image of the mouse brain vascular network. (f) Amira 3D reconstructed image of (e). Scale bars = 100 µm. 88 4.2.7 In vivo Mouse Ear Imaging Using Point-Scan Confocal Microscope In this study, we have demonstrated the feasibility of in vivo experiments using the point-scan portable confocal microscope. The mouse was injected with ICG dye (concentration: 10 mg/mL with 10% DMSO, volume: 0.1 mL) via the tail vein. The mouse ear was flattened on the Y-Z actuator and imaged at the vascular network as shown in Figure 37(a). The ICG fluorescence signal was found at the depth ranging from 50 to 250 µm. The color-coded depth projection image and Amira 3D reconstruction images are presented in Figure 37(b-c). The ICG fluorescence signal will gradually fade out within 30 minutes after injection. The gain of PMT was tuned at 0.5~0.6. Figure 37. In vivo mouse ear imaging using the portable confocal microscope. (a) The setup and imaging area of the mouse ear, attached to the Y-Z actuators. (b) Color-coded depth projection image of the mouse ear vasculature. 
(c) Amira 3D reconstructed image of (b). Scale bars = 100 µm. 4.3 Line-Scan Portable Confocal Microscope 4.3.1 System Design of Line-Scan Confocal Microscope The point-scan portable confocal microscope has demonstrated its capability for broadband, multiple channels, ex vivo, and in vivo imaging. However, its maximum frame rate is 2 frames per second (FPS) for a single 2D image. For an entire 3D acquisition with 400 frames, it will take 200 seconds, which is not satisfactory for real-time in vivo imaging experiments. The acquisition speed is mainly limited by the 3-axis scanning mechanisms for 3D imaging. To boost the acquisition speed of the confocal system, a line-scan confocal system has been proposed based on a similar 89 architecture, by generating a line excitation and reducing one scanning dimension. The schematic diagram of the line-scan portable confocal microscope is illustrated in Figure 38. Shared with a similar concept, both line-scan and point-scan confocal systems are based on identical 90° off-axis parabolic mirrors (OAP1-3; RC12APC-P01 & MPD129-P01, Thorlabs, Newton, NJ, USA) for collimating and focusing. For the line-scan system, a cylindrical lens (Cyl, f=-400 mm; LK4500RM-A, Thorlabs, Newton, NJ, USA) is added on the illumination side to generate a focused line on the sample plane. On the collection side, a 2D scientific camera (C11440-42U, Hamamatsu Photonics, Hamamatsu, Japan) is utilized to capture the fluorescence light from the line excitation. Instead of using a parabolic mirror (OAP4) in the point-scan system, dielectric mirrors (M1-2; BB1-E02, Thorlabs, Newton, NJ, USA) and an achromatic doublet tube lens (L1; AC254-200-B, Thorlabs, Newton, NJ, USA) are used to image the light onto the camera. Figure 38. Schematic of the line-scan portable confocal microscope. (a) The optical diagram of the line-scan portable confocal microscope. A mirror M2 reflects the image onto the center of the camera. (b) The CAD drawing of the line-scan confocal microscope. (c) The photograph of the line-scan confocal microscope. An external Z-actuator (not shown in the photograph) translates the sample at various depths. Note: OAP: off-axis parabolic mirror, P: prisms, M: mirrors, Cyl: cylindrical lens, SIL: solid immersion lens, L: lens, LPF: long-pass filter. 4.3.2 Optical Alignment of Line-Scan Confocal Microscope The optical alignment of the line-scan system is also similar to the procedure of the point- 90 scan system. The first step of the alignment procedure is completed by shooting two collimation beams to overlap the two focal spots at the sample plane without the cylindrical lens, mirror M2, tube lens, and camera. By adjusting the prisms (P1-2) on the illumination side and the tilting angle of the mirror (M1) on the collection side, diffraction-limited spots can be achieved, and aberrations are minimized. In the second step, for pre-alignment, the cylindrical lens is temporarily placed in the illumination path to generate a line along the x-axis. In the third step, the tube lens and camera are installed on the collection side to determine the focal distance between the tube lens and the camera, which is critical for image quality. During this step, the cylindrical lens is removed, and a knife-edge target reflects the focus spot on the sample plane. The focus spot is then imaged by the camera as a criterion for determining the focal distance. 
In the last step, the cylindrical lens is re- installed in the system and a focused line can be seen on the sample plane, which should keep the same orientation as the pre-alignment. Without removing the knife-edge target, the mirror (M2) can direct the reflective line image onto the center of the camera, meanwhile the line image is parallel to the horizontal pixels by rotating the camera. After the optical alignment, the line image from the sample plane should be located at the center few rows of the camera. A virtual slit on the camera can be created by a software setting, reducing the region of interest (ROI) to the center few rows of the camera (the number of rows depends on the camera model) and applying vertical binning for signal readout. A long-pass filter (BLP01-785R-25, Semrock, Rochester, NY, USA) is used for 785 nm NIR fluorescence imaging. 4.3.3 Resolution of Line-Scan Confocal Microscope The resolution of the line-scan confocal microscope is measured during the first and second steps of the optical alignment. In the first step of the alignment, a temporary lens-based collimator (L2; AC254-60-B, Thorlabs, Newton, NJ, USA) is built on the collection side for alignment and 91 resolution measurement. Single-mode fibers (SM600) are connected to the input and output ports. The measurement setup is depicted in Figure 39(a). By removing or adding the cylindrical lens, the resolution test can be switched between point-scan or line-scan mode. The focused line generated by the cylindrical lens is parallel to the knife edge target along the x-direction. The axial resolution is measured by translating a reflective target along the z-axis, whereas the lateral resolution is tested by crossing the sharp edge. The resolution results are plotted in Figure 39(b). The axial resolution (FWHM, ~3.6 µm) doesn’t change with the cylindrical lens. The lateral resolution (10~90%, 2.17 µm) represents the thickness of the focused line. Figure 39. Resolution of the line-scan portable confocal microscope. (a) Schematic of the measurement setup. (b) Measured axial and lateral resolutions in the point- scan (without the cylindrical lens) and line-scan (with the cylindrical lens) modes. The illumination path and spot size of the focused line are simulated by OpticStudio (Version 20.2.1, Zemax, Kirkland, WA, USA) in Figure 40. Two OAPs (f = 50.8 mm) folds the optical path and the cylindrical lens (f = -400 mm) placed in between generates a focused line on the sample plane. The length of the simulated line is estimated at about 8 mm, but the thickness of the line is not uniform. Only the thinnest section of the line near the center (~160 µm long) can generate good optical resolution for confocal imaging. 92 Figure 40. ZEMAX simulation of the line-scan portable confocal microscope. (a) Ray tracing simulation of the illumination light path. (b) Spot diagram of the simulation result on the sample plane. 4.3.4 Imaging System of Line-Scan Confocal Microscope The imaging system for the line-scan confocal microscope requires a galvo scanner (GVS012-X, Thorlabs, Newton, NJ, USA) steering the line along the y-direction for 2D image, and an external Z-actuator (P-783.ZL, Physik Instrumente, Karlsruhe, Germany) for 3D volume imaging. The scanner and actuator are driven by analog voltage signals while the camera detector is acquiring the line imaging via USB communication. The difficulty for this imaging system is the synchronization between the camera, scanner, and actuator using different communications. 
In addition to the USB communication cable, the camera has internal and external triggers, which can either send or accept Transistor-Transistor Logic (TTL) signals. The TTL signals are digital pulse trains (Low < 0.8 V, High > 2~5 V) that can control the image acquisition, exposure time, and frame rate. The TTL signals are connected to the data acquisition (DAQ) board (PCI- 6115, National Instruments, Austin, TX, USA) via the Programmable Function Interface (PFI) line as the sampling clock for analog driving signals sent to the galvo scanner and stages. Therefore, the camera and the analog devices share the same timing source, which ensures that the image acquisition can be synchronized with the scanning position. Since the acquisition of a single line is triggered by a TTL pulse and a full confocal image 93 contains several lines, the line acquisition rate of the camera determines the frame rate of confocal images. To maximize the line acquisition rate, the camera is operated in “Rapid Rolling Mode”, specifically for this model. With a reduced ROI imaging area as a virtual slit, the camera can increase the acquisition rate significantly (for some models). Although the camera has internal and external trigger modes, when the acquisition is triggered by the external TTL, it has an overhead to wait for the external triggers, limiting the fastest frame rate. As a result, the camera should operate in “Free Running Mode”, sending out internal TTL triggers as every acquisition occurs. In this configuration, the camera serves as the master device, while the scanner and actuator are the slaves. As the exposure time changes, the line acquisition rate, scanning speed, and frame rate of confocal images can be automatically adjusted. The maximum line acquisition rate of this Hamamatsu C11440-42U camera is 25000 FPS with an exposure time of 40 µs. However, the acquired images are transferred to the host computer via a USB 3.0 cable, which has an upper limit of data transfer bandwidth. To optimize the data transfer bandwidth, multiple images (1~1024) are bundled in one frame and sent to the computer at once. In conclusion, following the detailed settings, including the internal TTL trigger output, reduced ROI area, and frame bundle, the maximum frame rate of the camera can be achieved. The frame rate of the reconstructed confocal image is calculated by the line acquisition rate divided by the number of lines within a frame. If a single frame consists of 1000 lines and the camera is running at a maximum speed of 25000 FPS, the resulting frame rate is 25 FPS. If the scanning FOV along the y-axis is halved by decreasing the driving voltage, the number of lines can be reduced to 500 lines, doubling the frame rate to 50 FPS. Due to the sinusoidal driving signals of the scanner, reconstructed images are mirrored between even and odd frames and distorted along the y-direction, which can be corrected by post-processing in a MATLAB script. 94 Different types of cameras have been used in the line-scan confocal system to meet the requirements of various applications. The specifications of these cameras are listed in Table 8. If there are no budget constraints, the Hamamatsu C11440-42U camera excels in the visible to NIR range with maximum line rate at 25000 FPS. A commercial camera (acA720-520um, Basler, Ahrensburg, Germany) built with a commercial CMOS sensor (IM287, Sony, Minato, Tokyo, Japan) provides a low-cost option (< $500) in the same wavelength range and a compact housing size (< 30 mm × 30 mm × 30 mm). 
It also allows a virtual slit to be created anywhere on the 2D image sensor, which simplifies the optical alignment. If the wavelengths of interest are in the NIR-II regime (1000–1700 nm), an InGaAs line-scan camera (C15333-10E, Hamamatsu Photonics, Hamamatsu, Japan) with a fast scan rate (max. 40,000 FPS) is a good match. The maximum frame rates in the table are calculated at 500 lines per frame.

Table 8. Comparison between different cameras for the line-scan confocal microscope.
Hamamatsu C11440-42U: sCMOS, 6.5 μm², 2048 (H) × 2048 (V); sensitivity range 400~950 nm; reduced ROI 8 pixels w/ binning; min. exposure 40 µs; max. line rate 25,000 FPS; max. frame rate 50 FPS.
Basler acA720-520um: CMOS, 6.9 μm², 720 (H) × 540 (V); sensitivity range 400~900 nm; reduced ROI 1~2 pixels w/ binning; min. exposure 30 µs; max. line rate 6,600 FPS; max. frame rate 13 FPS.
Hamamatsu C15333-10E: InGaAs, 12.5 μm², 1024 (H) × 1 (V); sensitivity range 950~1700 nm; reduced ROI 1 pixel; min. exposure 21 µs; max. line rate 40,000 FPS; max. frame rate 80 FPS.

The Hamamatsu C11440-42U camera is used in the current system for the best performance. The schematic diagram of the line-scan portable confocal imaging system is shown in Figure 41. Two fiber-coupled laser sources, 660 nm and 785 nm (S1FC660 & S1FC785, Thorlabs, Newton, NJ, USA), are coupled into a single-mode fiber (SM600) via a fiber-based WDM combiner (NR75F1, Thorlabs, Newton, NJ, USA). The Hamamatsu C11440-42U camera is connected to the computer via USB for image transfer and to the PFI3 port of the DAQ board via a BNC cable for TTL triggers. The sample sits on the SIL lens and is translated by the Z-actuator. At the maximum frame rate of 25 FPS, a 3D volume acquisition with 200 frames can be completed in 8 seconds. The imaging system is designed for single-channel fluorescence imaging only; dual-channel imaging is achieved by switching the laser source and filters sequentially. When the samples are stained with DRAQ5 (660 nm) and ICG (785 nm), long-pass filters (BLP01-664R-25 & BLP01-785R-25, Semrock, Rochester, NY, USA) are used to remove the 660 nm and 785 nm excitations, respectively. Figure 41. Schematic of the line-scan portable confocal imaging system. (a) The 660 nm and 785 nm lasers are coupled into a single-mode fiber. (b) The main body of the line-scan confocal microscope. The focal line is scanned by a galvo scanner on the y-axis, and the sample is gradually translated by a piezo actuator on the z-axis for 3D imaging. (c) The data acquisition (DAQ) system is controlled by a customized LabVIEW program, taking the TTL signal from the camera for scanner synchronization. The line signals are collected by the 2D camera with a virtual slit and reconstructed into images in real time at 25 FPS. XY-Z stack images are obtained for 3D image reconstruction. 4.3.5 Ex vivo Mouse Tissue Imaging Using Line-Scan Confocal Microscope A custom-made 785 nm excitation contrast agent, NW-ICG, was synthesized by our collaborators in Dr. Xuefei Huang's group. It was systemically administered into a breast tumor mouse model (MUC1/MMTV), following an animal protocol approved by the Institutional Animal Care & Use Committee at Michigan State University (IACUC, Protocol #PROTO202100095). Because rapidly growing tumor cells take up more nutrients and oxygen, the contrast agent readily accumulates in the tumor area through the enhanced permeability and retention (EPR) effect. Ex vivo mouse specimens, kidneys and breast tumors freshly harvested from the NW-ICG-injected mice, were imaged by the line-scan confocal microscope.
The samples were excited by the 785 nm laser (3 mW at the SIL) with an exposure time of 50 µs and 1000 lines per frame. The resulting frame rate was 20 FPS for XY images, and it takes 10 seconds to complete a 3D scanning. Because of the non-uniform thickness of the focused line as shown in Figure 40, the acquired XY image only appears to be clear in the center ~160 µm along the x-axis. The FOV of the XY-Z stacking confocal image is approximately 160 µm (X) × 1200 µm (Y) × 240 µm (Z). By moving the sample with a manual stage along the x-direction with a pitch of 100 µm, ten sets of 3D-stacked images were sequentially acquired and stitched into a large FOV image (1000 µm (X) × 1200 µm (Y) × 240 µm (Z)) using the plugin “Stitching – Grid/Collection” of Fiji [156]. The confocal images of NW-ICG kidney and breast tumors are shown in Figure 42. (Note: the definition of the XY coordinate is swapped in the figure.) The 100-µm pitch size ensures the >50% overlap area between two adjacent images, minimizing ghosting after stitching. The depth information along the z-direction can also be perfectly aligned and well-stitched. The NW-ICG-administered tissues were then topically stained with 660 nm excited DRAQ5 fluorescence dye, which stains the nuclei. In Figure 43, the tissues were imaged by 1.5-mW 660 nm (Green) and 3-mW 785 nm (Red) lasers, with an exposure time of 500 µs and 50 µs, 97 respectively. Due to asynchronous acquisition, the FOV of dual-channel images could be slightly offset, which can be manually compensated in post-processing. Figure 42. Confocal images of NW-ICG injected mouse tissues. (a) En face Confocal image of the mouse kidney at a depth of 60 µm. (b-c) MIP confocal images of the mouse breast tumors. The stitched 3D-stacked images have a large field of view of 1200 µm (X) × 1000 µm (Y) × 240 µm (Z). Corresponding H&E images and 3D visualization images are provided. Scale bars = 200 µm. Figure 43. Confocal images of NW-ICG-injected and DRAQ5-stained mouse tissues. (a) kidney and (b) breast tumors. The field of view of the XY image is 1200 µm (X) × 320 µm (Y). However, only the central area (~ 160 µm (Y)) shows the best spatial resolution, where the XZ view shows cross-sectional images at the dashed line. Scale bars = 100 µm. 98 4.3.6 In vivo Mouse Ear Imaging Using Line-Scan Confocal Microscope The line-scan portable confocal microscope was used for a real-time in vivo mouse experiment. The mouse was injected with ICG dye (concentration: 10 mg/mL with 10% DMSO, volume: 0.1 mL) via the tail vein and imaged through the mouse ear vasculature. The mouse was anesthetized, and its ear was flattened on the SIL lens on the Z-actuator as shown in Figure 44(a). 2 mW of 785 nm excitation with a minimum exposure time of 40 µs was used during the experiments. A large FOV en face image shown in Figure 44(b) was manually mosaicked by using the plugin “MosaicJ” of Fiji [157]. MIP fluorescence image and Amira 3D reconstructed image of the mouse blood vasculature network are displayed in Figure 44(c-d). Figure 44. In vivo mouse ear imaging using the line-scan portable confocal microscope. ICG dye was injected into the mouse tail vein and imaged via mouse ear vasculature. (a) Photograph of the imaging setup. (b) Mosaic confocal image with a large field view of 1500 µm (X) × 1100 µm (Y). (c) MIP image of mouse ear vascular network. (d) 3D reconstruction of the blood vasculature in Amira. Scale bars = 200 µm. 
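The acquisition timing quoted throughout Sections 4.3.4 and 4.3.5 follows directly from the camera line rate and the number of lines per frame. The short sketch below summarizes this bookkeeping; it is an illustration only (Python, with example values taken from the text, and the helper name is hypothetical), not part of the actual LabVIEW acquisition software:

# Minimal sketch of the line-scan acquisition timing described above.
# All numbers are example values from the text; the function name is hypothetical.

def line_scan_timing(exposure_s, readout_overhead_s, lines_per_frame, z_steps):
    """Estimate line rate, 2D frame rate, and 3D acquisition time."""
    line_period = exposure_s + readout_overhead_s   # time per camera line
    line_rate = 1.0 / line_period                    # lines per second
    frame_rate = line_rate / lines_per_frame         # 2D frames per second
    volume_time = z_steps / frame_rate               # seconds per 3D stack
    return line_rate, frame_rate, volume_time

# Example: 40 us exposure, 1000 lines/frame, 200 Z-steps
line_rate, frame_rate, volume_time = line_scan_timing(40e-6, 0.0, 1000, 200)
print(f"{line_rate:.0f} lines/s, {frame_rate:.1f} FPS, {volume_time:.1f} s per volume")
# -> 25000 lines/s, 25.0 FPS, 8.0 s per volume, matching the values quoted in Section 4.3.4;
#    with a 50 us exposure the same formula gives 20 FPS and 10 s, as used for the ex vivo imaging.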
4.4 Conclusion of Portable Confocal Microscopes In this section, we have developed two portable confocal microscopes based on off-axis parabolic (OAP) mirrors: a point-scan and a line-scan system with different illumination schemes. The reflective imaging systems based on parabolic mirrors support a broad range of operating wavelengths, from the visible to the NIR. The confocal systems were built in the dual-axis configuration, which allows a longer working distance and a lower-NA light path. The lateral and axial resolutions of the systems have been characterized to be around 4~8 µm. No significant chromatic aberration (< 5 µm) was observed across the broadband wavelengths. The point-scan confocal system detects a single focal point and utilizes three scanning mechanisms for 3D volume imaging. By replacing the single-point excitation with a focused line, the line-scan system employs only two scanning directions and improves the imaging frame rate from 2 FPS to 25/50 FPS. The collection sides of the two confocal systems are adapted to the different excitation styles. For instance, a single-mode fiber and PMTs were used for the point-scan system, while a lens-and-camera-based imaging module was applied in the line-scan system. A comparison of the features of these two confocal microscopes is listed in Table 9.

Table 9. Comparison between point-scan and line-scan confocal microscopes.
Excitation: point-scan: a single-mode fiber and two OAPs; line-scan: a single-mode fiber, two OAPs, and a cylindrical lens.
Collection: point-scan: two OAPs and a single-mode fiber; line-scan: an OAP and an achromatic tube lens.
Detector: point-scan: photomultiplier tube (PMT); line-scan: 1D or 2D camera with a virtual slit.
Scanners: point-scan: X: a galvo scanner, Y: a piezo actuator, Z: a piezo actuator; line-scan: Y: a galvo scanner, Z: a piezo actuator.
Max. frame rate: point-scan: 2 FPS; line-scan: 25 or 50 FPS (depending on the number of scan lines).

Both the point-scan and line-scan confocal systems have been successfully demonstrated for ex vivo mouse tissue imaging and in vivo mouse ear vascular network imaging. The point-scan confocal microscope has shown its capability of fluorescence imaging at 488 nm and 785 nm simultaneously. Although the line-scan confocal system currently has only one imaging channel, dual-channel imaging can be achieved by sequentially imaging with multiple light sources and filters. For both systems, large FOV images can be acquired using stitching or mosaicking methods. In the future, these portable confocal microscopes with the OAPs will provide an excellent framework for multiple applications at different excitation wavelengths, such as multiphoton microscopy, photoacoustic microscopy, and NIR microscopy. CHAPTER 5: PZT SCANNER-BASED WIDE-FIELD MACROSCOPE 5.1 Thin-Film PZT Piezoelectric MEMS Scanner 5.1.1 Design of Thin-Film PZT Piezoelectric MEMS Scanner A 2D MEMS scanner is developed with a piezoelectric actuation mechanism because of its large energy density and low driving voltages and currents compared to electrostatic and electrothermal actuation. A thin-film lead zirconate titanate (Pb(Zr,Ti)O3, PZT) layer, a piezoelectric material, can be deposited on the scanner to generate the actuation force. The thin-film PZT MEMS scanner, provided by our collaborator Dr. Hiroshi Toshiyoshi's group, has been used in many applications, such as projectors, LiDAR, and microscopy [158-160]. The scanner is designed with a gold-coated mirror with a diameter of 2 mm, which is sufficient for our optical scanning application (beam size < 1 mm), as illustrated in Figure 45(a).
The 2D scanner has two optical scanning axes, the inner axis scanning along the x-direction and the outer axis along the y-direction, as shown in Figure 45(b). The mirror in the center is connected to the gimbal frame with narrow torsion beams that scan along the x-axis, whereas the gimbal frame is supported by two U-shaped legs that scan along the y-axis. Each U-shaped leg is composed of three sections: two are bent upwards and one is bent downwards. The inner axis is excited by mechanical oscillation at resonant frequencies, while the outer axis can operate in either resonant or DC mode. The driving voltages for the inner axis are sinusoidal waveforms ranging from 0 V to 11 V, a relatively low voltage for resonant oscillation. The pairwise inputs V3 and V4 have a phase difference of 180 degrees (out of phase). To create bends in opposite directions on the outer legs, out-of-phase driving voltages should be applied to the three PZT pads within each leg. The outer axis has a higher voltage rating (0 V to 40 V) than the inner axis when not operating at resonant frequencies (DC/low frequencies). Although single-phase voltages can still drive the scanner, the scanning range will be halved. Figure 45. Design of the PZT piezoelectric MEMS scanner. (a) Stereomicroscope image of the PZT MEMS scanner. (b) 2D beam scanning by twisting the torsion beams of the PZT MEMS scanner. Figure 46. Wiring of the PZT piezoelectric MEMS scanner. V1 & V2 are the pairwise inputs for the left arm, whereas V5 & V6 are the inputs for the right arm. V3 & V4 are the voltage inputs for the inner axis. S1 & S4 are the voltage outputs for deflection sensing. All the input and output pads share the same ground GND. 5.1.2 Fabrication of Thin-Film PZT Piezoelectric MEMS Scanner The fabrication process of the thin-film PZT MEMS scanner is described in Figure 47 [159, 160]. All the thin films, including the silicon dioxide, PZT, and platinum layers (Pt/Ti, with titanium layers for adhesion), were deposited on an SOI wafer with a 30-μm-thick silicon device layer in step (a). The PZT was deposited by the arc discharge reactive ion plating (ADRIP) process [161]. The PZT film was sandwiched between platinum layers that serve as the top and bottom electrodes. In step (b), the top electrode and PZT layers were patterned by photolithography and reactive ion etching (RIE). In step (c), the bottom electrode and silicon dioxide layers were etched by RIE down to the topside silicon layer of the SOI wafer. Then, in step (d), the scanner structure was defined by deep RIE (DRIE), removing the topside silicon layer down to the buried oxide (BOX) layer. The silicon substrate was etched by backside DRIE to create a cavity in step (e). Finally, in step (f), the BOX was selectively removed by RIE using CHF3 gas to release the device. Figure 47. Fabrication process of the PZT piezoelectric MEMS scanner. (a) Deposition of the materials, including electrodes (Pt/Ti), PZT, and SiO2, on the SOI wafer. (b) RIE of the top electrode and SiO2. (c) RIE of the bottom electrode and SiO2. (d) DRIE of the topside Si. (e) DRIE of the backside Si. (f) Removal of the buried oxide (BOX) layer. 5.1.3 Characterization of Thin-Film PZT Piezoelectric MEMS Scanner The eigenfrequency analysis of the PZT MEMS scanner has been performed using finite element analysis (FEA) software (COMSOL Multiphysics, Version 5.3, COMSOL Inc., Burlington, MA, USA). The material properties of silicon, silicon dioxide, platinum, and PZT were imported from the built-in library.
The model was built at room temperature under 1 atm pressure and gravity. From the simulated FEA results in Figure 48, the resonant frequency of the outer frame’s tilting mode was around 870 Hz (slow), while the inner mirrors’ resonant frequency was around 6526 Hz (fast). The resonant modes are almost independent and orthogonal to each axis. Figure 48. Finite element analysis (FEA) simulation of the PZT piezoelectric MEMS scanner. Simulation results of the resonant scanning modes are (a) 870.8 Hz on the y-axis and (b) 6526.3 Hz on the x-axis. The actual resonant frequencies of the thin-film PZT MEMS scanner were tested at room temperature in ambient air. Unipolar differential voltages (sinusoid waveform with an amplitude of 4 V and an offset of 4 V) were applied to a single channel to excite the scanner. The sinusoidal input was swept, and the optical response was detected by a 2D position sensing detector (PSD; PSM2-20 and OT-301, On-Trak Photonics, Inc.). Then, the PSD output voltages were recorded by a data acquisition (DAQ) card (PCIe-6321, National Instruments, Austin, TX, USA). The measured frequencies were 4272 Hz on the x-axis and 565, 588 Hz on the y-axis, as shown in Figure 49. The two peaks on the y-axis are caused by the outer two legs on the left and right. The maximum optical scan angles are approximately 12° and 15° on the x- and y-axis, respectively. As expected, phase drops of ±180° are observed around the resonant frequencies of the two axes. 105 Figure 49. Frequency response of the PZT MEMS scanner. The resonant frequencies are 4272 Hz on the x-axis and 565, 588 Hz on the y-axis. Note: the x- and y-axis are defined in Figure 45. The scan angle is the total optical scan angle. 5.2 PZT MEMS Scanner-Based Wide-Field Macroscope 5.2.1 System Design of PZT MEMS Scanner-Based Wide-Field Macroscope The thin-film PZT MEMS scanner was integrated into a custom-made macroscope based on Lissajous scanning for fluorescence imaging. The schematic of the macroscope is illustrated in Figure 50(a). A collimated 785 nm laser was used to excite the sample. A custom-made gradient- index (GRIN) lens (OD 0.5 mm, NA 0.2, 0.25 pitch @ 640 nm; GRINTECH GmbH, Jena, Germany) mounted with a single-mode fiber (SM600) was used to deliver the light from a fiber- coupled 785 nm laser (S1FC785, Thorlabs, Newton, NJ, USA). A multi-mode fiber MMF1 (core size 400 μm; M119L02, Thorlabs, Newton, NJ, USA) was utilized to collect the fluorescence light directly from the sample. The GRIN lens assembly and the collection fiber were bundled next to each other in Figure 50(d). The PZT MEMS scanner was mounted 45° in front of the bundled fibers in Figure 50(c). A custom-made housing was designed to integrate the fibers and scanner in 106 a polymer (PLA) package in Figure 50(b), using a 3D printer (Ender-3, Creality, Shenzhen, China). The fluorescence light was filtered by the fluorescence collection module, consisting of two lenses (A110TM-B, Thorlabs, Newton, NJ, USA) and a long-pass filter with a cut-on wavelength at 785 nm (LPF; LP02-785RU-25, Semrock, Rochester, NY, USA). Cleaned fluorescence light was transmitted to a photomultiplier tube (PMT; H7422-50, Hamamatsu Photonics, Hamamatsu, Japan) via a multi-mode fiber: MMF2 (core size 400 μm; M74L01, Thorlabs, Newton, NJ, USA). The current outputs from the PMT were converted to voltages by a transimpedance amplifier (DHPCA-100, FEMTO, Berlin, Germany) and read by a DAQ board (PCIe-6353, National Instruments, Austin, TX, USA). Figure 50. 
Schematic of the PZT MEMS scanner-based macroscope. (a) Schematic drawing of the imaging system. The 785 nm excitation light is collimated by the custom-made GRIN-lens-embedded SM fiber, while the fluorescence signal is collected directly by the high-NA MM fiber (MMF1). The PZT MEMS scanner is driven biaxially at resonant frequencies for Lissajous scanning. The fluorescence light is detected by a PMT, converted into voltage signals, and recorded by the DAQ card. A custom LabVIEW program reconstructs the voltage signals into images based on Lissajous patterns. (b) The 3D-printed housing for the fibers and PZT MEMS scanner. (c) Stereomicroscopic image of the PZT MEMS scanner. (d) Photograph of the package of the GRIN lens assembly and MMF1. Note: MMF1: 400 μm core; MMF2: 400 μm core; LPF: long-pass filter. The PZT MEMS scanner was driven biaxially around the resonant frequencies for Lissajous scanning. The sinusoidal driving voltages were generated from the DAQ board with an amplitude of 0.2 V and an offset of 0.2 V. A two-channel 20X voltage amplifier (Model 2350, Tegam Inc., Geneva, OH, USA) was used to amplify and buffer the signals, resulting in output voltages ranging from 0 V to 8 V. Driving frequencies along the x- and y-axes, (f_x, f_y) = (4270 Hz, 590 Hz), were chosen to optimize the line density and set the frame rate to 10 FPS, which is determined by the greatest common divisor (GCD) of the two frequencies. The driving signals are expressed below.

\[ V_x(t) = 4\sin(2\pi f_x t + \varphi_x) + 4\ \mathrm{V}, \qquad V_y(t) = 4\sin(2\pi f_y t + \varphi_y) + 4\ \mathrm{V} \]
\[ (f_x, f_y) = (4270\ \mathrm{Hz},\ 590\ \mathrm{Hz}), \qquad (\varphi_x, \varphi_y) = (-90^\circ,\ -90^\circ) \tag{9} \]

The 785 nm laser power was 10 mW for sample excitation, and the PMT gain was tuned to 0.5 for the following experiments. The PMT signals were reconstructed into 8-bit grayscale images with 40 × 60 pixels. An input sampling rate of 4 × 10⁵ samples/second was used for image reconstruction based on Lissajous trajectories. Since the phase difference between the driving and response signals changes dramatically around the resonant oscillation, phase adjustment was performed on both axes to match the actual phase and remove image reconstruction defects. The adjusted phases were (φ_x, φ_y) = (−160°, 94°). 5.2.2 Resolution of PZT MEMS Scanner-Based Wide-Field Macroscope The resolution of the imaging system was measured from a phantom image. A fluorescence phantom was prepared using a 3D-printed mold with the text "IQ" and injected with ICG solution (0.1 mg/mL). With a working distance of 15 mm from the scanner to the phantom, the resulting FOV was 4 × 5 mm². Two adjacent fluorescence images were acquired and stitched into an entire FOV of the 3D-printed ICG phantom. The photograph and reconstructed image are shown in Figure 51(a-b). The resolution curve is extracted from the edge response of the fluorescence image. The resolution is computed as the distance between 10% and 90% of the edge response, approximately 0.44 mm in Figure 51(c). Figure 51. Fluorescence image of the ICG phantom. (a) Photograph of the 3D-printed phantom injected with ICG. (b) Fluorescence image of the ICG phantom acquired by the macroscope. (c) Intensity curve of the edge response. Resolution is defined by the distance between 10% and 90% of the transition. Scale bars = 1 mm. 5.2.3 Fluorescence Imaging of PZT Scanner-Based Wide-Field Macroscope A business card served as the phantom for large FOV wide-field fluorescence imaging. The ink patterns (dark area) on the business card are shown in Figure 52(a).
The 15-mm working distance created a FOV of 3.5 × 5 mm2. As the ink can absorb fluorescence light, the fluorescence image was mosaicked and displayed in an inverted grayscale in Figure 52(b), similar to the ink. Figure 52. Fluorescence images of patterns on a business card. (a) Photograph of the “PerkinElmer” pattern on the business card. (b) Large FOV fluorescence image of the ink pattern. Scale bars = 2 mm. 109 The macroscope based on PZT MEMS was used to perform wide-field fluorescence imaging on ex vivo mouse tissues. A tumor-bearing mouse model (double transgenic MUC1/MMTV) was injected with ICG dye (concentration: 10 mg/mL with 10% DMSO, volume: 0.1 mL) via the tail vein. A mouse breast tumor was dissected right after mouse euthanasia. All procedures performed on animals were approved by the Institutional Animal Care & Use Committee at the Michigan State University (IACUC, Protocol #PROTO202100095). The photograph of the mouse breast tumor is shown in Figure 53(a). The sample was excited by a 10-mW 785 nm laser. As the working distance was 25 mm from the scanner to the sample plane, the resulting FOV was 6 mm (X) × 8.5 mm (Y). Although the imaging FOV is smaller than the size of the breast tumor, a large FOV fluorescence image can be achieved using mosaicking. By manually translating the sample, a video was continuously recorded with a frame rate of 10 Hz. Nine images were selected with an overlap of 50% and mosaicked by the plugin “MosaicJ” of Fiji [157]. The entire fluorescence image (FOV 11 mm × 11 mm) is illustrated in Figure 53(c), in comparison with the image acquired by a commercial wide-field system (Pearl Trilogy, LI-COR Biosciences, Lincoln, NE, USA) at the 800 nm channel in Figure 53(b). Figure 53. Fluorescence images of a mouse breast tumor with ICG injection. (a) Photograph of the mouse breast tumor. (b) Fluorescence image acquired by the Pearl System at 800 nm channel as the ground truth. (c) Fluorescence image acquired by the PZT MEMS Scanner macroscope using PMT. Scale bars = 2.5 mm. 110 5.3 Conclusion of Thin-Film PZT Scanner-Based Wide-Field Macroscope In this section, we have developed a wide-field macroscope based on a thin-film PZT MEMS scanner. The PZT MEMS scanner was designed with a 2-mm-diameter gold-coated mirror with separate inner (X) and outer (Y) scanning axes. The geometry of the scanner and fabrication process are introduced in detail. The inner axis of the scanner can be driven at resonant frequencies, while the outer axis can be operated at either DC modes or resonant frequencies. The frequency response of the scanner has been characterized to be 4272 Hz (X) and 565, 588 Hz (Y), resulting in maximum optical scan angles of 12° (X) and 15° (Y), driven with a sinusoidal voltage of 8 𝑉𝑝𝑝 . The PZT MEMS scanner is integrated into a custom-made macroscope for fluorescence imaging. A collimated 785 nm laser is used for excitation, while the fluorescence signals are detected by the PMT. The MEMS scanner is driven biaxially around resonant frequencies, 4270 Hz (X) and 590 Hz (Y) for Lissajous scanning with a frame rate of 10 FPS. The FOV varies with the working distance, from 3.5 × 5 mm2 to 6 × 8.5 mm2. The resolution of the imaging system is 0.44 mm, measured from the edge response. ICG fluorescence signals from a 3D-printed phantom are successfully reconstructed into images. The images of a business card and an ICG-stained breast tumor are also mosaicked into large FOV images. 
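As a compact summary of the Lissajous timing used in this section, the drive waveforms of Equation (9) and the GCD-determined frame rate can be sketched in a few lines. The snippet below is illustrative only; it assumes NumPy and generates the amplified 0-8 V drive directly rather than the 0-0.4 V DAQ output:

# Minimal sketch of the Lissajous drive and frame-rate bookkeeping in Equation (9).
# NumPy assumed; amplitudes, offsets, and frequencies follow the text.
import math
import numpy as np

fx, fy = 4270, 590                       # driving frequencies (Hz)
frame_rate = math.gcd(fx, fy)            # Lissajous pattern repeats at the GCD -> 10 FPS
sample_rate = 4e5                        # DAQ sampling rate (samples/s)

t = np.arange(0, 1.0 / frame_rate, 1.0 / sample_rate)    # one full Lissajous period
vx = 4.0 * np.sin(2 * np.pi * fx * t - np.pi / 2) + 4.0  # 0-8 V after 20x amplification
vy = 4.0 * np.sin(2 * np.pi * fy * t - np.pi / 2) + 4.0

print(f"frame rate = {frame_rate} FPS, samples per frame = {t.size}")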
Thus, the PZT MEMS scanner-based macroscope has demonstrated the capability for real-time wide-field fluorescence imaging, which can be applied to in vivo animal experiments. In the next chapter, the PMT photodetector is replaced by an SNSPD device for high-sensitivity NIR imaging. CHAPTER 6: SUPERCONDUCTING NANOWIRE SINGLE-PHOTON DETECTOR (SNSPD)-BASED IMAGING SYSTEMS 6.1 Superconducting Nanowire Single-Photon Detector (SNSPD) 6.1.1 Introduction to Superconducting Nanowire Single-Photon Detector Superconducting nanowire single-photon detectors (SNSPDs) are widely used in diverse fields, such as quantum information [162], free-space optical communication [163], and biomedical imaging [164-166], owing to their high detection efficiency [167], high speed [168], low dark-count rates [169], wide spectral response (ultraviolet to mid-infrared) [169-171], and low timing jitter [172]. The concept of the SNSPD was first described by Kadin in 1996 [173] and later successfully demonstrated by Gol'tsman using an NbN film in 2001 [174]. The SNSPD is a thin film of superconducting material shaped into a meandering nanowire by microfabrication. It has extremely high sensitivity upon absorption of a single photon. To maintain its superconductivity, it is installed in a cryostat that cools the environment below the superconducting critical temperature, e.g., 4.2 K [175]. A fiber-coupled configuration can reduce the blackbody radiation and stray light received by the detector. Therefore, multiple devices can be mounted inside one cryostat and connected with SMFs or MMFs that deliver the light from outside, providing a light-tight, temperature-stable, and scalable configuration for a commercial system. The basic operating principle of the SNSPD is described in Figure 54(a) [174-178]. (i) In the superconducting state, the nanowire is biased with a DC current below the critical temperature. (ii) As a photon is absorbed by the nanowire, a resistive hotspot is formed. (iii) Although the hotspot is small, it forces the supercurrent to flow around the resistive region, increasing the local current density. (iv) When the local current density around the hotspot exceeds the superconducting critical current density, a resistive barrier forms across the width of the nanowire, generating a measurable output voltage pulse across the nanowire. (v) Joule heating facilitates the growth of the resistive region until the current flow along the nanowire is blocked. (vi) As Joule heating stops and the hotspot cools down, the nanowire and bias current recover to the superconducting state. Figure 54. Basic operation principle of the SNSPD. [175] (a) An illustration of the detection cycle. (i) The nanowire is in the superconducting state. (ii) A photon absorption creates a small resistive hotspot. (iii) The hotspot forces the current to flow around the resistive region. (iv) A resistive barrier across the width of the nanowire is formed. (v) The current flow is blocked by the high resistance of the nanowire. (vi) The nanowire returns to a superconducting state. (b) A simple electrical equivalent circuit of the SNSPD. (c) A simulated output voltage pulse of the SNSPD. The blue and the dotted red lines correspond to the phases of the detection cycle in (a).
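This detection cycle is commonly summarized by a two-time-constant electrical model, which is analyzed in the next paragraph. As a preview, a minimal numerical sketch of the resulting pulse shape is given below; all component values are illustrative assumptions, not measured parameters of the devices used in this work:

# Minimal sketch of the two-time-constant pulse shape implied by the SNSPD
# equivalent circuit discussed in the following paragraph. Values are illustrative.
import numpy as np

Lk = 500e-9        # kinetic inductance (H), assumed
Z0 = 50.0          # load impedance (ohm)
Rn = 5e3           # hotspot resistance during the resistive phase (ohm), assumed
Ibias = 10e-6      # bias current (A), assumed

tau_rise = Lk / (Z0 + Rn)   # fast rise while the hotspot resistance is present
tau_fall = Lk / Z0          # slower recovery after the hotspot heals

t = np.linspace(0, 200e-9, 2000)
t_peak = 5 * tau_rise                                         # end of the resistive phase
v = np.where(t < t_peak,
             Ibias * Z0 * (1 - np.exp(-t / tau_rise)),        # rising edge
             Ibias * Z0 * np.exp(-(t - t_peak) / tau_fall))   # recovery edge
print(f"rise ~{tau_rise*1e9:.2f} ns, reset ~{tau_fall*1e9:.1f} ns, peak ~{v.max()*1e3:.2f} mV")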
A simple electrical equivalent circuit is used to model the detection behavior of the SNSPD in Figure 54(b), where Lk is the kinetic inductance, Rn(t) is the time-dependent hotspot resistance, and Ibias represents the bias current. The output voltage pulse is measured across the load impedance Z0 [179, 180]. The LR circuit has a time constant of τ1 = Lk/(Z0 + Rn(t)), which limits the rise time of the voltage pulse. By closing the switch to remove Rn(t), this model expresses the recovery of the bias current with a longer time constant of τ2 = Lk/Z0. The SNSPD remains insensitive to photons until the superconducting current returns to a re-triggerable level. The overall non-sensitive period is called the dead time or reset time of the SNSPD, given by τ1 + τ2 ≈ τ2. Based on this phenomenological model, the output voltage can be simulated as shown in Figure 54(c). The solid blue line indicates the rising period of the voltage pulse (ii-v), whereas the dotted red line shows the recovery phase of the pulse (v-vi-i). The reset time is dominated by Lk, which is determined by the dimensions and material of the nanowire and the operating temperature. 6.1.2 Characterization and Efficiency of SNSPD Two custom-made SNSPD devices (Quantum Opus, Novi, MI, USA) cooled to 2.5 K were used in our imaging systems. The SNSPD devices can be designed for different ranges of operating wavelengths. Their quantum efficiency, Maximum Count Rate, and Dark Count Rate were characterized in Figure 55. SNSPD device #1 was designed for wavelengths ranging from 800 nm to 1500 nm and utilized in the point-scan confocal system, connected via a single-mode fiber (SMF-28e+, 9 μm core size), while SNSPD device #2 was optimized for 700–800 nm and deployed in the PZT MEMS macroscope wide-field imaging system using a custom-made multi-mode fiber (30 μm core size). The quantum efficiency curves of the SNSPD devices and a traditional NIR photomultiplier tube (PMT; H7422-50, Hamamatsu Photonics, Hamamatsu, Japan) are compared in Figure 55(a). The PMT only reaches a maximum peak efficiency of 20% at optimized wavelengths below 900 nm, while the SNSPD devices can exceed 80% efficiency at wavelengths beyond 1000 nm. At 800 nm, SNSPD device #1 has a maximum quantum efficiency of 90%, five times higher than the Hamamatsu PMT. The maximum count rate and dark count rate are critical parameters affecting photodetector performance and image contrast. The photon-counting rates are dominated by the bias current. Although a large bias current can shorten the reset time and boost the maximum count rate, the dark count rate also increases dramatically above the critical bias current. Hence, it is important to determine a proper bias current for the SNSPD device. The count rate versus bias current relations of SNSPD device #1 and device #2 were measured at 895 nm and 808 nm, respectively, using a field-programmable gate array (FPGA)-based time tagger (Time Tagger Ultra, Swabian Instruments GmbH, Stuttgart, Germany). The results are shown in Figure 55(b-c). To optimize the dynamic range for the different imaging systems, the bias currents were chosen as 9 μA and 7 μA for the point-scan confocal system and the PZT scanner macroscope, respectively. Figure 55. Characterization of the SNSPD detector. (a) Comparison of quantum efficiency curves between the SNSPD devices and the PMT (Hamamatsu H7422-50). (b) Maximum count rate and dark count rate of SNSPD device #1 at different bias currents.
(c) Maximum count rate and dark count rate of SNSPD device #2 at different bias currents. The quantum efficiency of the SNSPD device can be computed by dividing the measured count rate by the theoretical maximum count rate. The theoretical maximum count rate can be calculated using the following equation.

\[ E_{\mathrm{photon}}\ [\mathrm{J/photon}] = h\nu = \frac{hc}{\lambda}, \qquad P_{\mathrm{flux}}\ [\mathrm{J/s}] = \text{measured power}\ [\mathrm{dBm\ or\ mW}] \rightarrow [\mathrm{J/s}] \]
\[ \mathrm{Rate}\ [\mathrm{photons/s}] = \frac{P_{\mathrm{flux}}\ [\mathrm{J/s}]}{E_{\mathrm{photon}}\ [\mathrm{J/photon}]} \tag{10} \]

where E_photon is the energy of a photon at a given wavelength, P_flux is the optical power measured by an optical power meter, Rate is the theoretical maximum count rate if all photons are absorbed by the SNSPD, and h, c, and λ are the Planck constant, the speed of light, and the wavelength of light, respectively. The measured optical power should be converted from dBm or mW to J/s. 6.1.3 Signal Processing of SNSPD Once a photon (or more than one photon) is absorbed by the nanowire, superconductivity is locally broken. As a result, the resistance of the nanowire increases dramatically, creating an output voltage pulse. The output voltage pulses from the SNSPD device are amplified and filtered by a high-pass inverting amplifier. The analog output of an amplified pulse ranges from -1.5 V to +1 V with a reset time of about 50 ns, shown as the red line in Figure 56(a). The analog output pulses are further converted to Transistor-Transistor Logic (TTL) pulses (Low < 0.8 V, High > 2~5 V) by a custom electronic circuit, shown as the blue line in Figure 56(a). A TTL pulse indicates that one or more photons have been absorbed by the SNSPD detector. The light intensity can be determined from the number of pulses within a certain period (pulse density or count rate). In Figure 56(b-c), sparser TTL pulses imply weak intensity, whereas denser TTL pulses indicate strong intensity. The maximum pulse density is limited by the shortest time between two pulses, which is the SNSPD reset time. The typical reset time for the custom-made SNSPD device is approximately 50 ns, corresponding to a maximum of 20 million pulses per second (Maximum Count Rate). On the other hand, the SNSPD device can still randomly generate pulses even without photon absorption, which sets the minimum count rate (Dark Count Rate). The Dark Count Rate is a background signal related to the bias current through the SNSPD device. The dynamic range of the grayscale image is bounded by the Maximum Count Rate and the Dark Count Rate. Figure 56. Output signals of the SNSPD detector. (a) SNSPD analog output signal and TTL pulse are shown as red and blue lines, respectively. (b) Sparse TTL pulses imply weak intensity. (c) Dense TTL pulses indicate strong intensity. In our experiments, the TTL pulses were counted by a physical counter in the PCIe-6353 DAQ board with a counting capability of up to 100 MHz and a resolution of 32 bits. A customized LabVIEW program (Version 2013, 32-bit, National Instruments, Austin, TX, USA) was used to control the DAQ card and read the TTL counting numbers in real time. The reading rate of the counts was controlled by the input sampling rate using the continuous counter input mode, as shown in Figure 57(a). The counting values were accumulated in a data array until the maximum counting number was reached (2³² − 1 for 32-bit resolution). The cumulative reading counts in the data array are plotted in Figure 57(b).
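As a quick sanity check on these counting numbers, the theoretical maximum count rate of Equation (10) can be evaluated directly for a measured power and wavelength. The short sketch below is an illustration only; the function name and example values are assumptions:

# Minimal sketch of Equation (10): converting a measured optical power into the
# theoretical maximum photon count rate. Example values are assumptions.
h = 6.626e-34      # Planck constant (J*s)
c = 2.998e8        # speed of light (m/s)

def max_count_rate(power_mw, wavelength_nm):
    """Photons per second if every photon were absorbed by the SNSPD."""
    e_photon = h * c / (wavelength_nm * 1e-9)   # energy per photon (J)
    p_flux = power_mw * 1e-3                    # optical power (J/s)
    return p_flux / e_photon

# Example: 1 pW (1e-9 mW) of 800 nm light -> roughly 4e6 photons/s
print(f"{max_count_rate(1e-9, 800):.3e} photons/s")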
Since the light intensity was the number of pulses within a sampling period, the intensity values were calculated from the derivative of the cumulative counts. Hence, the cumulative counts were converted into 8-bit intensity values in Figure 57(c). Last, the intensity values were mapped into the pixels for image reconstruction in Figure 57(d). The maximum available count reading rate is limited by the data transfer rate from the DAQ card to the memory on the computer via direct memory access (DMA). More specifically, the transfer speed is an empirical number that varies with different types of motherboards and interfaces. Typically, PCIe is much faster than PCI. That is the reason the PCIe-based DAQ card 117 was utilized in the system. The input sampling rate was 250k samples/second for the confocal microscope and 400k samples/second for the Lissajous scanning macroscope. Note that the input sampling rate should be an integer divider chosen from the sampling clock timebase of the DAQ board (100 MHz in PCIe-6353), preventing the drifting issue in reconstructed images. Figure 57. Signal processing flow of the SNSPD signals. (a) TTL pulses from SNSPD are counted by the DAQ counter at the rising edge of the sampling clock. (b) Counting numbers within a single frame are cumulatively read into a data array. (c) Cumulative counts are converted to 8-bit intensities by taking derivatives. (d) Intensities are reconstructed into an image. 6.2 Point-Scan Confocal Microscope Imaging System Using SNSPD 6.2.1 System of Point-Scan Confocal Microscope Using SNSPD The point-scan confocal imaging system in Section 4.2 was modified for the SNSPD device. First, the scanning mechanism includes a galvanometer scanner with a larger mirror (GVS012-Y, Thorlabs, Newton, NJ, USA) steering the beam in the x-direction, a linear stage (AG-LS25, Newport Corporation, Irvine, CA, USA) translating the sample along the y-axis, and a piezo- electrical actuator (P-783.ZL, Physik Instrumente, Karlsruhe, Germany) moving in the z-depth. A 118 custom-made hemispherical solid immersion lens (SIL, fused silica, n = 1.46, OD 10, trimmed 0.25 mm; Tower Optical Corporation, Boynton Beach, FL, USA) was installed to contact the tissue specimens for matching the refractive index and improving the imaging depth. A 785 nm excitation laser with software-controlled laser output (iBeam Smart PT785, Toptica Photonics, Munich, Germany) was coupled into a single-mode fiber (780HP). A long-pass filter (BLP01-785R-25, Semrock, Rochester, NY, USA) was directly placed at the collection path to remove the excitation light. A custom single-mode fiber (SMF-28e+) was connected to the output for fluorescence collection. The fluorescence signals were detected by the SNSPD device #1. Figure 58. Schematic of the portable confocal microscope. (a) The CAD drawing of the portable confocal microscope with a larger galvo scanner. (b) A close view of the galvo scanner and the SIL lens. (c) A photograph of the portable confocal microscope. (d) Schematic of the confocal imaging system. A 785 nm laser is used to excite the sample. The fluorescence signals are detected by SNSPD and converted into TTL pulses which can be counted by the DAQ. Note: OAP: off-axis parabolic mirror; P: prisms; SIL: solid immersion lens; LPF: long-pass filter. The analog voltage output from the SNSPD was then converted to TTL pulses by a custom TTL converter. 
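The counter-to-image conversion summarized in Figure 57 can be written compactly offline. The sketch below is illustrative only; it assumes NumPy, uses synthetic counter data, and applies a simple raster mapping rather than the Lissajous mapping used for the macroscope:

# Minimal sketch of the count-to-intensity conversion shown in Figure 57.
# NumPy assumed; cumulative_counts would come from the DAQ counter, here simulated.
import numpy as np

def counts_to_image(cumulative_counts, n_rows, n_cols):
    """Convert a cumulative counter trace into an 8-bit raster image."""
    # Photon counts per sampling period = derivative of the cumulative counter value.
    counts = np.diff(cumulative_counts, prepend=cumulative_counts[0])
    # Normalize to 8-bit grayscale.
    span = max(counts.max() - counts.min(), 1)
    intensity = ((counts - counts.min()) * 255 / span).astype(np.uint8)
    # Map samples onto pixels (simple raster mapping).
    return intensity[: n_rows * n_cols].reshape(n_rows, n_cols)

# Example with a synthetic counter trace (125,000 samples for a 500 x 250 pixel image)
trace = np.cumsum(np.random.poisson(5, 500 * 250).astype(np.uint32))
img = counts_to_image(trace, 250, 500)
print(img.shape, img.dtype)   # (250, 500) uint8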
A data acquisition (DAQ) board (PCIe-6353, National Instruments, Austin, TX, USA) and a customized LabVIEW program (Version 2013, 32-bit, National Instruments, Austin, TX, USA) were used to count the number of TTL pulses, send driving signals to the scanner and Y-Z actuators, and reconstruct grayscale images in real time. The driving parameters are listed in Table 10. An input sampling rate of 250k samples/second was used to reconstruct images with 500 × 250 pixels based on a raster scanning pattern. The relationship between the driving frequencies and the reconstructed images is shown in the equation below, where AI_SRate is the input sampling rate and f_x and f_z are the driving frequencies along the x- and z-directions. XZ-Y stack images (a series of consecutive XZ images along the y-direction) were obtained to reconstruct a 3D image.

\[ \text{Num of x pixels} = \frac{AI_{\mathrm{SRate}}}{\text{Num of y pixels} \times \text{Frame Rate}}, \qquad \text{Num of y pixels} = \frac{2 f_x}{f_z}, \qquad \text{Frame Rate} = f_z \tag{11} \]

Table 10. Driving parameters of the point-scan confocal microscope based on SNSPD.
X axis: Galvo GVS012-Y; analog; sine wave; 250 Hz; ±0.3 V; FOV 500 µm.
Y axis: Newport AG-LS25; digital; stepping; –; –; FOV 600 µm.
Z axis: PI P-783.ZL; analog; sawtooth; 2 Hz; 0~10 V; FOV 300 µm.

6.2.2 Ex vivo Mouse Tumor Confocal Imaging Using SNSPD Ex vivo mouse specimens were imaged by the point-scan portable confocal microscope based on the SNSPD detector. A mouse breast tumor harvested from a tumor-bearing mouse model (MUC1/MMTV) was topically stained with ICG solution (0.1 mg/mL with 10% DMSO). All procedures performed on animals were approved by the Institutional Animal Care & Use Committee at Michigan State University (IACUC, Protocol #PROTO202100095). The mouse breast tumor was excited by the 785 nm laser. The laser power, 8 μW at the SIL, was much lower than the ~3 mW typically used in the PMT-based confocal system. Due to the high sensitivity of the SNSPD, even a small fluctuation of the excitation laser power could cause obvious intensity changes in the confocal images along the scanning axis. The defects can be observed as horizontal lines. In Figure 59, an image processing technique is described to remove this noise using the Fast Fourier Transform (FFT) and inverse FFT. The horizontal lines in the spatial domain transform into noise along the central vertical line of the frequency domain. After masking this frequency noise and transforming the data back to the spatial domain, the image defects were removed. Figure 59. Image processing flow for noise removal in SNSPD confocal images. 3D confocal images were acquired in XZ-Y mode with 200 frames at a frame rate of 2 Hz. The FOV of the XZ-Y stacking confocal image was approximately 500 µm (X) × 600 µm (Y) × 300 µm (Z). By translating the sample with a manual stage along the x-direction with a pitch of 250 µm, three sets of 3D-stacked images were sequentially acquired and stitched into a large FOV image (1000 µm (X) × 600 µm (Y) × 300 µm (Z)) using the plugin "Stitching – Grid/Collection" of Fiji [156]. The 250-µm pitch size ensures a 50% overlap area between two adjacent images, minimizing ghosting after stitching. The depth information along the z-direction can also be well-aligned and stitched. The large FOV confocal images of the breast tumor are displayed in vertical cross-sectional view (XZ, Figure 60(a)) and horizontal view (XY, Figure 60(b)). The horizontal confocal image was acquired at a depth of 110 µm.
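The stripe-removal step illustrated in Figure 59 amounts to notching the central vertical line of the 2D Fourier spectrum. A minimal sketch is given below; it assumes NumPy, and the notch width is an illustrative choice rather than the exact filter used for the images in this chapter:

# Minimal sketch of the FFT-based stripe removal illustrated in Figure 59.
# Horizontal stripes map onto the central vertical line of the 2D spectrum;
# masking that line (except the DC term) suppresses them. NumPy assumed.
import numpy as np

def remove_horizontal_stripes(image, notch_halfwidth=1):
    spectrum = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    rows, cols = spectrum.shape
    cy, cx = rows // 2, cols // 2
    mask = np.ones_like(spectrum)
    # Zero the central vertical line (horizontal-stripe frequencies), keep the DC region.
    mask[:, cx - notch_halfwidth : cx + notch_halfwidth + 1] = 0
    mask[cy - notch_halfwidth : cy + notch_halfwidth + 1,
         cx - notch_halfwidth : cx + notch_halfwidth + 1] = 1
    cleaned = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
    return np.clip(cleaned, 0, 255).astype(np.uint8)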
3D confocal images were visualized in Amira (Version 5.4.3, Thermo Fisher Scientific, Waltham, MA, USA) in Figure 60(c). The histological H&E image of the corresponding area in the XY view is displayed in Figure 60(d). The H&E image was investigated using an upright microscope (Eclipse Ci, Nikon Corporation, Tokyo, Japan) with a 10X objective (Nikon Plan 10X, NA 0.25, WD 10.5; MRL00102). Figure 60. Ex vivo confocal image of ICG-stained breast tumor using SNSPD. (a- b) Cross-sectional images of the breast tumor at the selected planes, (a) vertical (XZ view) and (b) horizontal (XY view) at a depth of 110 μm. (c) 3D reconstruction image using Amira. (d) Histological H&E image at the corresponding horizontal view. Scale bars = 100 μm. 6.3 PZT MEMS Scanner-Based Wide-Field Macroscope Using SNSPD 6.3.1 System of PZT MEMS Scanner-Based Wide-Field Macroscope Using SNSPD The same PZT MEMS scanner-based macroscope in Section 5.2 was used for this experiment. The schematic diagram of the macroscope based on SNSPD is illustrated in Figure 61(a). A fiber- coupled 785 nm laser (S1FC785, Thorlabs, Newton, NJ, USA) was used to excite the sample. The 122 785 nm laser was collimated by a custom-made gradient-index (GRIN) lens (OD 0.5 mm, NA 0.2, 0.25 pitch @ 640 nm; GRINTECH GmbH, Jena, Germany) via a single-mode fiber SMF (SM600). On the collection side, fluorescence signals from the sample were collected by a multi-mode fiber MMF1 (core size 400 μm; M119L02, Thorlabs, Newton, NJ, USA). The GRIN lens, SMF, and MMF1 were assembled in a package (Figure 61(c)). The PZT MEMS scanner (Figure 61(b)) was mounted at 45° at the distal end of the GRIN lens assembly. Figure 61. Schematic of the PZT MEMS scanner-based macroscope using SNSPD. (a) Schematic drawing of the imaging system, including the DAQ card, 785-nm laser, SM fiber, GRIN Lens, MM fibers, fluorescence collection module, SNSPD detector, and 20X voltage amplifier for the MEMS scanner driving. The 785 nm excitation light is collimated by the custom-made GRIN-lens-embedded SM fiber while the fluorescence signal is collected directly by the high NA MM fiber (MMF1). The PZT MEMS scanner is driven biaxially at resonant frequencies for Lissajous scanning. The fluorescence signals from the SNSPD are converted into TTL pulses, counted by the counter in the DAQ card. A custom LabVIEW program reconstructs the counting number into images based on Lissajous patterns. (b) Stereomicroscopic image of the PZT MEMS scanner. (c) Photograph of the package of the GRIN lens, SMF and MMF1. Note: GRIN: Gradient-index; SMF: single-mode fiber; MMF: multi-mode fiber; MMF1: 400 𝜇𝑚 core; MMF2: 62.5 μm core; MMF3: 30 μm core; LPF: long-pass filter. 123 The fluorescence light from the macroscope was filtered by the fluorescence collection module, consisting of two lenses (A110TM-B, Thorlabs, Newton, NJ, USA) and a long-pass filter with a cut-on wavelength at 785 nm (LPF; LP02-785RU-25, Semrock, Rochester, NY, USA). For the fluorescence collection, the PMT detector was replaced by the SNSPD device #2. Two multi- mode fibers: MMF2 (core size 62.5 μm; M31L01, Thorlabs, Newton, NJ, USA) and MMF3 (core size 30 μm; custom) were used to deliver the light from the fluorescence collection module to the SNSPD device #2. Because of the core size mismatch, fluorescence light was not fully coupled to the multi-mode fibers, but the loss of photons won’t be a big issue for the SNSPD detector. The analog voltage output from the SNSPD was then converted to TTL pulses by a custom TTL converter. 
A data acquisition (DAQ) board (PCIe-6353, National Instruments, Austin, TX, USA) and a customized LabVIEW program (Version 2013, 32-bit, National Instruments, Austin, TX, USA) were used to count the TTL pulses, send driving signals to the PZT MEMS scanner for Lissajous scanning, and reconstruct real-time images based on Lissajous trajectories. The PZT MEMS scanner was driven biaxially around the resonant frequencies for Lissajous scanning. The sinusoidal driving voltages were generated from the DAQ board with an amplitude of 0.2 V and an offset of 0.2 V. A two-channel 20X voltage amplifier (Model 2350, Tegam Inc., Geneva, OH, USA) was used to amplify and buffer the signals, resulting in output voltages ranging from 0 V to 8 V. Driving frequencies along the x- and y-axes, (f_x, f_y) = (4270 Hz, 585 Hz), were chosen to optimize the line density and set the frame rate to 5 FPS, which is determined by the greatest common divisor (GCD) of the two frequencies. The driving signals are expressed below.

\[ V_x(t) = 4\sin(2\pi f_x t + \varphi_x) + 4\ \mathrm{V}, \qquad V_y(t) = 4\sin(2\pi f_y t + \varphi_y) + 4\ \mathrm{V} \]
\[ (f_x, f_y) = (4270\ \mathrm{Hz},\ 585\ \mathrm{Hz}), \qquad (\varphi_x, \varphi_y) = (-90^\circ,\ -90^\circ) \tag{12} \]

An input sampling rate of 400k samples/second was used to read the TTL counts. The fluorescence signals were reconstructed into 8-bit grayscale images with 40 × 40 pixels based on Lissajous trajectories. Since the phase difference between the driving and response signals changes dramatically around the resonant oscillation, phase adjustment was performed on both axes to match the actual phase and remove image reconstruction defects. A phase adjustment of (φ_x, φ_y) = (35°, −85°) was applied to remove the phase delay in the Lissajous reconstruction. 6.3.2 Ex vivo Wide-Field Fluorescence Imaging Using SNSPD The macroscope based on the PZT MEMS scanner was used to perform wide-field fluorescence imaging on ex vivo mouse tissues. A mouse breast tumor was dissected from a tumor-bearing mouse model (double transgenic MUC1/MMTV). All procedures performed on animals were approved by the Institutional Animal Care & Use Committee at Michigan State University (IACUC, Protocol #PROTO202100095). The tumor was topically stained with ICG solution (0.1 mg/mL with 10% DMSO). The photograph of the mouse breast tumor is shown in Figure 62(a). The sample was excited by a 5-mW 785 nm laser to prevent SNSPD saturation. The working distance from the scanner to the sample was 50 mm, resulting in a FOV of 8 × 8 mm². Although this FOV is smaller than the size of the breast tumor (a diameter of 10 mm), a large FOV fluorescence image can be achieved using mosaicking. By manually translating the sample, a video was continuously recorded at a frame rate of 5 Hz. Nine images were selected with an overlap of 50% and mosaicked by the plugin "MosaicJ" of Fiji [157]. The entire fluorescence image (FOV 13 × 16.5 mm²) is illustrated in Figure 62(c), similar to the image acquired by a commercial wide-field system (Pearl Trilogy, LI-COR Biosciences, Lincoln, NE, USA) at the 800 nm channel in Figure 62(b). The histological H&E image is displayed in Figure 62(d). The H&E images were investigated using a microscope (Eclipse Ci, Nikon Corporation, Tokyo, Japan) with a 10X objective (Nikon Plan 10X, NA 0.25, WD 10.5; MRL00102). Figure 62. Fluorescence images of an ICG-stained mouse breast tumor. (a) Photograph of the ICG-stained mouse breast tumor. (b) Fluorescence image acquired by the Pearl System at the 800 nm channel as the ground truth. (c) Fluorescence image acquired by the macroscope using the SNSPD detector.
(d) H&E image of the mouse breast tumor. Scale bars = (a-c) 5 mm and (d) 200 μm. 6.4 Conclusion of SNSPD-Based Imaging Systems SNSPD devices are high-sensitivity, high-SNR (low dark count) photodetectors that can be optimized for a broad range of working wavelengths. Two SNSPD devices have been utilized to replace the PMTs in our imaging systems: the point-scan portable confocal microscope (SNSPD device #1) and the PZT MEMS scanner-based macroscope (SNSPD device #2). The quantum efficiency curves of the devices have peak wavelengths around 800-1000 nm (device #1) and 600-800 nm (device #2). The maximum count rate and the dark count rate were also characterized; together they represent the dynamic range of the photodetector, which limits the grayscale range of the images. The SNSPD devices generate very short and small pulses during a photon-absorption event. Since the pulses cannot be read directly by our current data acquisition systems, which expect analog voltages, we have implemented a signal processing algorithm that converts the pulse signals into counting numbers and then reconstructs them into grayscale images. Most importantly, the signal processing requires a physical counter within the DAQ card and sufficient high-speed bandwidth for data transfer. This signal processing method can then be applied to both the confocal microscope and the Lissajous scanning PZT MEMS scanner-based macroscope. This chapter presents the complete workflow and technical details. The point-scan portable confocal microscope using the SNSPD has been built and has successfully collected high-SNR confocal images from a mouse tumor. Because the SNSPD device provides very high sensitivity, the laser power at the sample is only 8 µW, which is 250 times lower than the typical power used in the PMT-based confocal microscope (usually 2~3 mW). This extremely low laser excitation significantly alleviates photobleaching in fluorescent samples. Hence, confocal images can be easily stitched into large FOV images of uniform intensity without darkening of the stitched area due to photobleaching. However, the SNSPD is so sensitive that variations in laser power can affect the confocal images, leading to visible scan-line noise. An image processing technique based on the Fast Fourier Transform has been implemented to remove this noise. Based on the image quality, we believe that the SNSPD device will become a powerful photodetector for our confocal microscopes at longer wavelengths. Similarly, the SNSPD device replaces the PMT detector in the PZT MEMS scanner-based macroscope. Even with the PZT scanner operating at a resonant frequency of 4270 Hz, the SNSPD can collect fluorescence light over a 13 × 16.5 mm² FOV and generate real-time images at 5 FPS. The wide-field fluorescence image of the breast tumor shows better contrast and resolution than that from a commercial wide-field imaging system. If the macroscope is equipped with an SNSPD sensitive in the NIR-II regime (1000~1700 nm), it will become a unique imaging system that can capture real-time video over a large FOV, similar to a NIR-II camera-based imaging system. However, the SNSPD has advantages over InGaAs-based NIR-II cameras, such as high sensitivity, low noise levels, relatively low cost, and wavelength optimization. Consequently, the SNSPD can be an excellent tool for NIR-II real-time imaging in the future.
127 CHAPTER 7: FUTURE WORK 7.1 Advanced Design of VO2 MEMS Scanner 7.1.1 VO2 MEMS Scanners for Miniaturized Dual-Axis Confocal Microscopes The VO2 MEMS scanner was batch fabricated on a 2-inch wafer with multiple types of MEMS scanners. Besides the VO2 scanner with a 1.6 mm × 1.6 mm mirror introduced in CHAPTER 3, there are other scanners designed with various sizes and arbitrary geometries for miniaturized confocal microscopes. An example of the ray tracing simulation of the miniaturized dual-axis confocal microscope is illustrated in Figure 63 [105]. Based on the dual-axis architecture using full parabolic mirrors, the geometry of the scanner must be carefully designed so that the mirror can cover the two focused beams but doesn’t block the two collimated beams. Meanwhile, the scanner should be placed in the desired position that provides sufficient working distance for the focused beam to penetrate specimens. The most critical dimension becomes the distance from the edges of the scanner to the collimated beams. Figure 63. Ray tracing simulation of the MEMS-based miniaturized confocal microscope. [105] (a) Schematic of the light paths in the dual-axis confocal architecture shows the geometric requirements for a MEMS scanner. (b-c) 2D Lateral scanning around the X- and Y-axis of the mirror. 128 In order to meet the dimension, the best design would be “no extra structure” from the edges to the beams. However, without additional supporting structures, this will make the MEMS scanner even more fragile. To overcome the problem, we open two holes next to the mirror to provide paths for the collimated beams; meanwhile, the mirror and holes are enclosed by a frame, enhancing the structural strength. Three types of VO2 scanners were devised for different sizes of the miniaturized dual-axis confocal microscopes, as shown in Figure 64. Those scanners are the most iconic designs with arbitrary geometries (not straight lines) inside and outside the device. Figure 64. Advanced designs of the VO2 MEMS scanner with arbitrary geometries. (a) Design for an endoscopic dual-axis confocal device. (b) Design for a handheld dual-axis confocal device. (c) Design for a handheld line-scan dual-axis confocal device. Scale bars = 1 mm. 7.1.2 Fabrication and Post-Process of Advanced VO2 MEMS Scanner To fabricate the geometry complexed devices, we utilized the topside and backside DRIE processes, described in Section 3.2.2, to make deep trenches that can generate the curved contours and the two holes inside the device. After the DRIE process, the entire devices were removed from 129 the 2-inch wafer along the deep trenches using a tweezer. Afterward, the devices were carefully protected with Kapton tapes around the edges and placed in the chamber for XeF2 isotropic etch. These additional post-processes were completed between Figure 21(i) and Figure 21(j). Since the scanners were connected to the wafer via the “link arm” structures, they did not automatically fall off after the DRIE and XeF2 processes. Different numbers of link arms were designed to hold the scanners. From the experiments, the number of three link arms on each side could provide sufficient support, and they can be easily broken without additional tools and force. After the XeF2 etch, the two holes next to the mirror were opened using a wire-bonder tip and the “poking technique”. The procedure is illustrated with the stereomicroscopic images in Figure 65. 
The silicon structures inside the holes remained temporarily attached to the device via two link arms and could be removed with very little force. Images of the final VO2 MEMS scanners with arbitrary geometries are shown in Figure 66. The MEMS scanner for the endoscopic device (Figure 66(a)) has clearance areas at its two bottom corners, which are keep-out zones reserved for the supporting structures in the endoscope system design. Importantly, the two holes cannot be opened prior to the XeF2 etch: the large opening would allow more XeF2 gas to reach the silicon substrate, creating uneven etching rates and potentially damaging the mirror and legs. In brief, the entire scanner was removed from the wafer after the DRIE etch (Figure 21(i)), while the holes were opened after the XeF2 etch (Figure 21(j)).

Figure 65. Procedure of the poking technique to open the holes.

Figure 66. Stereomicroscopic images of the VO2 MEMS scanners after the poking technique. (a) The endoscopic device. (b) The handheld device; one hole could not be opened because of an incomplete DRIE etch. (c) The handheld line-scan device.

7.1.3 Laser Cutting of Advanced VO2 MEMS Scanner

Ideally, all of the holes could be opened using the poking technique; however, some holes could not be opened even with considerable force because of an incomplete DRIE etch (Figure 66(b)). To prevent damage during mechanical opening, a laser-cutting method was used to remove the residual silicon substrate connected to the device. A femtosecond pulsed laser (Astrella, Coherent Inc., Santa Clara, CA, USA) in Dr. Ming Han's lab was employed to generate high-energy pulses at 800 nm wavelength with an average power of 25 mW. The laser was focused on the sample using a 20X objective lens (Olympus Plan Achromat 40X, NA 0.4, WD 1.2 / 0.17), and the focused spot was positioned under the guidance of a camera-based imaging system. During laser cutting, the focal spot generates high local temperatures and heat dissipation around the cut, and considerable debris can accumulate and obscure the imaging FOV. To avoid thermal and cutting damage to the mirror and legs, the laser cut followed the DRIE trenches while staying > 200 µm away from the actuation and mirror areas. After laser cutting, the scanner was inspected under an inverted microscope: if light penetrates through the deep trench, the cut is complete; otherwise, the laser cutting process is repeated. The laser-cut MEMS scanner is shown in Figure 67. The hole on the left was cut by the femtosecond laser, whereas the hole on the right was opened by the poking technique; zoom-in images are displayed for comparison between these two opening methods. In Figure 67(c), debris produced by the high-temperature reactions during laser cutting can be seen. Femtosecond pulsed laser micromachining is a mature technology widely used in industry and academia, and the debris problem has been well studied and can be resolved by several techniques, such as water immersion [181-184].

Figure 67. Photographs of the laser-cut VO2 MEMS scanner. (a) Overview of the handheld device with arbitrary geometry. (b) Zoom-in image of the backside of the scanner. (c) The hole removed by laser cutting; debris was generated during the cutting. (d) The hole removed by the DRIE and poking technique.
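For context, the per-pulse energy and fluence of the cutting step can be estimated from the numbers quoted above. Only the 25 mW average power, the 800 nm wavelength, and the NA 0.4 objective come from this work; the 1 kHz repetition rate and the diffraction-limited spot-size formula below are assumptions for illustration, not measured values.

```python
# Back-of-the-envelope estimate of the femtosecond laser-cutting fluence.
# Assumed: 1 kHz repetition rate and a diffraction-limited focal spot (1.22*lambda/NA).
import math

avg_power_W = 25e-3        # 25 mW average power (from the text)
rep_rate_Hz = 1e3          # assumed repetition rate (not stated in the text)
wavelength_m = 800e-9      # 800 nm (from the text)
na = 0.4                   # objective numerical aperture (from the text)

pulse_energy_J = avg_power_W / rep_rate_Hz                 # ~25 uJ per pulse
spot_diameter_m = 1.22 * wavelength_m / na                 # ~2.4 um diffraction-limited spot
spot_area_cm2 = math.pi * (spot_diameter_m / 2) ** 2 * 1e4 # area converted from m^2 to cm^2
fluence_J_per_cm2 = pulse_energy_J / spot_area_cm2

print(f"Pulse energy: {pulse_energy_J * 1e6:.1f} uJ")
print(f"Focused spot diameter: {spot_diameter_m * 1e6:.2f} um")
print(f"Peak fluence: {fluence_J_per_cm2:.0f} J/cm^2")
```

Under these assumptions the fluence is on the order of hundreds of J/cm², far above typical femtosecond ablation thresholds for silicon, which is consistent with the need to keep the cut well away from the mirror and actuation legs.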
7.1.4 Conclusion of Advanced VO2 MEMS Scanner

In this section, three types of advanced VO2 scanners have been designed with arbitrary geometries that fit the optical paths of different miniaturized dual-axis confocal microscopes: the endoscope, the handheld device, and the line-scan handheld system. The scanners were successfully fabricated using the DRIE process, the poking technique, and laser cutting. In the future, these scanners will be packaged, characterized for their frequency responses, and then integrated into the miniaturized microscopes for imaging experiments.

7.2 Fluorescence Lifetime Imaging Using SNSPD

7.2.1 Introduction to Fluorescence Lifetime Imaging

The fluorescence lifetime is the characteristic nanosecond-scale decay time of a fluorophore's excited state. It is independent of the molecular concentration but depends on excited-state reactions, such as fluorescence resonance energy transfer (FRET) [185], and on the interactions between the fluorophore and its environment. For instance, the lifetime is sensitive not only to pH, oxygen level, and ionic concentration, but also to molecular binding and composition. Even when emission spectra largely overlap, lifetimes can be used to identify and separate fluorophores [186]. Therefore, the fluorescence lifetime provides a noninvasive and nondestructive approach for monitoring variations in the molecular environment and the biochemical interactions of living cells and tissues. Fluorescence lifetime imaging microscopy (FLIM) is a technique combining lifetime measurement and microscopy, in which the fluorescence lifetime of a molecule is measured at spatially resolved positions in a microscopy image [187].

Two major approaches are used to measure the fluorescence lifetime: the time-domain and frequency-domain methods [188]. In the time domain, a pulsed laser excites the sample, and the fluorescence emission is recorded as a function of time on the nanosecond scale. The photon arrival delays are accumulated into histograms using the time-correlated single-photon counting (TCSPC) technique [189]. The lifetime is then obtained by fitting the histogram with a (multi)exponential decay, while the total photon count represents the fluorescence intensity (a minimal fitting sketch is given below, following Figure 68). In the frequency domain, a modulated light source excites the sample, and the fluorescence emission follows a similar waveform but with reduced modulation depth and a phase shift [190]. By calibrating the modulation and phase against a reference of known lifetime, the absolute lifetime of the sample can be determined. Moreover, multiple lifetime components can be separated using phasor analysis.

7.2.2 Advantages of SNSPD in Fluorescence Lifetime Imaging

To measure nanosecond lifetimes, the detector must be time-resolved (time domain) or demodulated (frequency domain). For time-domain lifetime imaging systems, such as confocal FLIM, photomultiplier tubes (PMTs) and single-photon avalanche diodes (SPADs) are the most common detectors used for single-photon counting [191-193]. However, it remains challenging for PMTs and SPADs to simultaneously provide high detection efficiency, precise time resolution, and low noise in FLIM applications [162]. Additionally, the spectral response of traditional Si-based PMTs and SPADs is limited to 1100 nm (NIR or NIR-I). Although InP/InGaAs-based PMTs and SPADs can extend the spectral range to 1500 nm (SWIR or NIR-II), their detection efficiency is still lower than 10%, restricting their use for weak fluorescence signals.

Figure 68. Quantum efficiencies of the NIR-SNSPD and the NIR-PMT.
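As a concrete illustration of the time-domain measurement described in Section 7.2.1, the following minimal sketch extracts a lifetime from a TCSPC histogram by fitting a single-exponential decay. The synthetic histogram, the 12.5 ns window (an assumed 80 MHz excitation rate), and the omission of instrument-response deconvolution are illustrative assumptions, not details of the systems reported in this work.

```python
# Minimal sketch of lifetime extraction from a TCSPC histogram by single-exponential
# fitting. Synthetic data only; real measurements would include IRF deconvolution
# and possibly multiexponential or phasor analysis.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau, background):
    """Mono-exponential decay model: I(t) = A * exp(-t / tau) + B."""
    return amplitude * np.exp(-t / tau) + background

# Synthetic TCSPC histogram: 256 time bins over a 12.5 ns window (assumed 80 MHz laser).
t_ns = np.linspace(0.0, 12.5, 256)
true_tau_ns = 3.0
counts = np.random.poisson(decay(t_ns, 1000.0, true_tau_ns, 5.0))

# Fit the model to the histogram; p0 is a rough initial guess for (A, tau, B).
popt, _ = curve_fit(decay, t_ns, counts, p0=(counts.max(), 2.0, 1.0))
print(f"Fitted lifetime: {popt[1]:.2f} ns (true value {true_tau_ns} ns)")
```

In a FLIM image, this fit (or a faster estimator such as phasor analysis) would be repeated for the histogram collected at every pixel, mapping the recovered tau to a lifetime image.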
After two decades of development, SNSPDs have shown excellent performance in terms of high sensitivity, extended wavelength range, high SNR, and low timing jitter [162]. Especially in the NIR-II regime (1000~1700 nm), SNSPDs can readily achieve efficiencies over 60%, much higher than InGaAs-based detectors [194]. For example, Figure 68 presents the quantum efficiencies of a custom-made NIR-SNSPD and an InGaAs NIR-PMT (H12397-75, Hamamatsu Photonics, Hamamatsu, Japan), where the maximum efficiencies are 82.5% and 2.5% for the SNSPD and the PMT, respectively. Regarding the SNR, SNSPDs are also superior to InGaAs SPADs because of their low dark count rate (< 100 counts per second) [195]. Furthermore, their very low timing jitter (~50 ps) makes SNSPDs ideal for TCSPC applications [196]. As a result, SNSPDs are the most promising detectors for NIR FLIM applications.

7.2.3 Conclusion of Fluorescence Lifetime Imaging Based on SNSPD

Recently, Yu et al. demonstrated a confocal FLIM system working in the NIR-II window [164]. A home-built confocal microscope combined with a custom-made SNSPD and a TCSPC module was developed to measure the lifetime decay. The FDA-approved clinical fluorescent dye ICG, which exhibits a weak emission tail in the NIR-II regime, was used as the fluorescent reagent. ICG dissolved in different solutions exhibits distinguishable decay curves. Later, 3D in vivo dual-color FLIM was performed in the mouse ear vasculature using ICG and IR820 (another NIR-II agent); the two channels of the 3D images could be separated by a lifetime threshold. This study successfully demonstrated the capability of SNSPDs for NIR-II FLIM applications. By integrating an SNSPD sensitive in the NIR-II region, our point-scan portable confocal microscope can likewise be modified for NIR-II FLIM. With NIR-II fluorescent agents such as ICG and quantum dots (QDs) [197], our confocal FLIM system could potentially image biological specimens from the visible to the NIR-II range simultaneously. The NIR-II wavelengths can potentially increase the penetration depth and SNR during in vivo animal experiments.

BIBLIOGRAPHY

[1] S. Abeytunge, B. Larson, G. Peterson, M. Morrow, M. Rajadhyaksha, and M. P. Murray, "Evaluation of breast tissue with confocal strip-mosaicking microscopy: a test approach emulating pathology-like examination," J Biomed Opt, vol. 22, no. 3, p. 34002, Mar 1 2017. [2] Y. K. Tao et al., "Assessment of breast pathologies using nonlinear microscopy," Proc Natl Acad Sci U S A, vol. 111, no. 43, pp. 15304-9, Oct 28 2014. [3] F. T. Nguyen et al., "Intraoperative evaluation of breast tumor margins with optical coherence tomography," Cancer Res, vol. 69, no. 22, pp. 8790-6, Nov 15 2009. [4] V. Sharma, S. Shivalingaiah, Y. Peng, D. Euhus, Z. Gryczynski, and H. Liu, "Autofluorescence lifetime and light reflectance spectroscopy for breast cancer diagnosis: potential tools for intraoperative margin detection," Biomedical Optics Express, vol. 3, no. 8, pp. 1825-1840, 2012/08/01 2012. [5] A. S. Haka et al., "Diagnosing breast cancer using Raman spectroscopy: prospective analysis," J Biomed Opt, vol. 14, no. 5, p. 054023, Sep-Oct 2009. [6] H. J. Koester, D. Baur, R. Uhl, and S. W. Hell, "Ca2+ fluorescence imaging with pico- and femtosecond two-photon excitation: signal and photodamage," (in eng), Biophysical journal, vol. 77, no. 4, pp. 2226-2236, 1999. [7] A. L. Vahrmeijer, M. Hutteman, J. R. van der Vorst, C. J. van de Velde, and J. V.
Frangioni, "Image-guided cancer surgery using near-infrared fluorescence," Nat Rev Clin Oncol, vol. 10, no. 9, pp. 507-18, Sep 2013. [8] R. Weissleder and M. J. Pittet, "Imaging in the era of molecular oncology," (in eng), Nature, vol. 452, no. 7187, pp. 580-589, 2008. [9] R. Weissleder and U. Mahmood, "Molecular Imaging," Radiology, vol. 219, no. 2, pp. 316- 333, 2001/05/01 2001. [10] M. A. Pysz, S. S. Gambhir, and J. K. Willmann, "Molecular imaging: current status and emerging strategies," Clin Radiol, vol. 65, no. 7, pp. 500-16, Jul 2010. [11] L. Fass, "Imaging and cancer: a review," Mol Oncol, vol. 2, no. 2, pp. 115-52, Aug 2008. [12] D. Zardavas, A. Irrthum, C. Swanton, and M. Piccart, "Clinical management of breast cancer heterogeneity," (in eng), Nature reviews. Clinical oncology, vol. 12, no. 7, pp. 381- 394, 2015. [13] A. Marusyk and K. Polyak, "Tumor heterogeneity: causes and consequences," (in eng), Biochimica et biophysica acta, vol. 1805, no. 1, pp. 105-117, 2010. [14] Y. W. Wang, N. P. Reder, S. Kang, A. K. Glaser, and J. T. C. Liu, "Multiplexed Optical Imaging of Tumor-Directed Nanoparticles: A Review of Imaging Systems and 136 Approaches," Nanotheranostics, vol. 1, no. 4, pp. 369-388, 2017. [15] T. S. Hauck, R. E. Anderson, H. C. Fischer, S. Newbigging, and W. C. Chan, "In vivo quantum-dot toxicity assessment," Small, vol. 6, no. 1, pp. 138-44, Jan 2010. [16] Y. Wang, B. Yan, and L. Chen, "SERS tags: novel optical nanoprobes for bioanalysis," (in eng), Chemical reviews, vol. 113, no. 3, pp. 1391-1428, 2013. [17] C. V. Raman and K. S. Krishnan, "A New Type of Secondary Radiation," Nature, vol. 121, no. 3048, pp. 501-502, 1928/03/01 1928. [18] W. E. Doering, M. E. Piotti, M. J. Natan, and R. G. Freeman, "SERS as a Foundation for Nanoscale, Optically Detected Biological Labels," Advanced Materials, vol. 19, no. 20, pp. 3100-3108, 2007. [19] D. M. McClatchy, 3rd et al., "Molecular dyes used for surgical specimen margin orientation allow for intraoperative optical assessment during breast conserving surgery," (in eng), Journal of biomedical optics, vol. 20, no. 4, pp. 040504-040504, 2015. [20] R. M. Davis et al., "A Raman Imaging Approach Using CD47 Antibody-Labeled SERS Nanoparticles for Identifying Breast Cancer and Its Potential to Guide Surgical Resection," Nanomaterials (Basel), vol. 8, no. 11, Nov 20 2018. [21] D. Van de Sompel, E. Garai, C. Zavaleta, and S. S. Gambhir, "A hybrid least squares and principal component analysis algorithm for Raman spectroscopy," PLoS One, vol. 7, no. 6, p. e38850, 2012. [22] C. L. Zavaleta et al., "Multiplexed imaging of surface enhanced Raman scattering nanotags in living mice using noninvasive Raman spectroscopy," Proceedings of the National Academy of Sciences, vol. 106, no. 32, p. 13511, 2009. [23] S. Johnsen, "Hidden in plain sight: the ecology and physiology of organismal transparency," (in eng), Biol Bull, vol. 201, no. 3, pp. 301-18, Dec 2001. [24] J. V. Frangioni, "In vivo near-infrared fluorescence imaging," (in eng), Curr Opin Chem Biol, vol. 7, no. 5, pp. 626-34, Oct 2003. [25] G. Hong, A. L. Antaris, and H. Dai, "Near-infrared fluorophores for biomedical imaging," Nature Biomedical Engineering, vol. 1, no. 1, p. 0010, 2017/01/10 2017. [26] V. Ntziachristos, "Going deeper than microscopy: the optical imaging frontier in biology," Nature Methods, vol. 7, no. 8, pp. 603-614, 2010/08/01 2010. [27] J. A. 
Carr et al., "Shortwave infrared fluorescence imaging with the clinically approved near-infrared dye indocyanine green," Proc Natl Acad Sci U S A, vol. 115, no. 17, pp. 4465- 4470, Apr 24 2018. [28] M. P. Hansen and D. Malchow, "Overview of SWIR detectors, cameras, and applications," 137 in SPIE Defense + Commercial Sensing, 2008. [29] B&W Tek, "Spectrometer Knowledge - Spectral Resolution," 2022. [30] Avantes USA, "Understanding Spectrometer Resolution Specifications," 2021. [31] Andor Technology - Oxford Instruments, "Resolution and Wavelength Calculator," 2022. [32] E. Garai et al., "High-sensitivity, real-time, ratiometric imaging of surface-enhanced Raman scattering nanoparticles with a clinically translatable Raman endoscope device," J Biomed Opt, vol. 18, no. 9, p. 096008, Sep 2013. [33] Y. W. Wang et al., "Comprehensive spectral endoscopy of topically applied SERS nanoparticles in the rat esophagus," Biomed Opt Express, vol. 5, no. 9, pp. 2883-95, Sep 1 2014. [34] G. M. McKhann et al., "The diagnosis of dementia due to Alzheimer's disease: recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease," (in eng), Alzheimers Dement, vol. 7, no. 3, pp. 263-9, May 2011. [35] "2021 Alzheimer's disease facts and figures," (in eng), Alzheimers Dement, vol. 17, no. 3, pp. 327-406, Mar 2021. [36] R. A. Sperling et al., "Toward defining the preclinical stages of Alzheimer's disease: recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease," (in eng), Alzheimers Dement, vol. 7, no. 3, pp. 280-92, May 2011. [37] M. S. Albert et al., "The diagnosis of mild cognitive impairment due to Alzheimer's disease: recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease," (in eng), Alzheimers Dement, vol. 7, no. 3, pp. 270-9, May 2011. [38] L. Mucke, "Alzheimer's disease," Nature, vol. 461, no. 7266, pp. 895-897, 2009/10/01 2009. [39] P. T. Nelson et al., "Correlation of Alzheimer disease neuropathologic changes with cognitive status: a review of the literature," (in eng), J Neuropathol Exp Neurol, vol. 71, no. 5, pp. 362-81, May 2012. [40] P. Scheltens et al., "Atrophy of medial temporal lobes on MRI in "probable" Alzheimer's disease and normal ageing: diagnostic value and neuropsychological correlates," (in eng), J Neurol Neurosurg Psychiatry, vol. 55, no. 10, pp. 967-72, Oct 1992. [41] G. B. Frisoni, N. C. Fox, C. R. Jack, Jr., P. Scheltens, and P. M. Thompson, "The clinical use of structural MRI in Alzheimer disease," (in eng), Nat Rev Neurol, vol. 6, no. 2, pp. 67- 77, Feb 2010. 138 [42] K. A. Jobst et al., "Detection in life of confirmed Alzheimer's disease using a simple measurement of medial temporal lobe atrophy by computed tomography," (in eng), Lancet, vol. 340, no. 8829, pp. 1179-83, Nov 14 1992. [43] K. A. Johnson, N. C. Fox, R. A. Sperling, and W. E. Klunk, "Brain imaging in Alzheimer disease," Cold Spring Harb Perspect Med, vol. 2, no. 4, p. a006213, Apr 2012. [44] S. Ogawa, T. M. Lee, A. S. Nayak, and P. Glynn, "Oxygenation-sensitive contrast in magnetic resonance image of rodent brain at high magnetic fields," (in eng), Magn Reson Med, vol. 14, no. 1, pp. 68-78, Apr 1990. [45] K. K. Kwong et al., "Dynamic magnetic resonance imaging of human brain activity during primary sensory stimulation," (in eng), Proc Natl Acad Sci U S A, vol. 89, no. 12, pp. 
5675- 9, Jun 15 1992. [46] N. K. Logothetis, J. Pauls, M. Augath, T. Trinath, and A. Oeltermann, "Neurophysiological investigation of the basis of the fMRI signal," Nature, vol. 412, no. 6843, pp. 150-157, 2001/07/01 2001. [47] N. L. Foster, T. N. Chase, P. Fedio, N. J. Patronas, R. A. Brooks, and G. Di Chiro, "Alzheimer's disease: focal cortical changes shown by positron emission tomography," (in eng), Neurology, vol. 33, no. 8, pp. 961-5, Aug 1983. [48] E. M. Reiman et al., "Preclinical evidence of Alzheimer's disease in persons homozygous for the epsilon 4 allele for apolipoprotein E," (in eng), N Engl J Med, vol. 334, no. 12, pp. 752-8, Mar 21 1996. [49] W. E. Klunk et al., "Imaging brain amyloid in Alzheimer's disease with Pittsburgh Compound-B," Annals of Neurology, vol. 55, no. 3, pp. 306-319, 2004. [50] C. M. Clark et al., "Use of Florbetapir-PET for Imaging β-Amyloid Pathology," JAMA, vol. 305, no. 3, pp. 275-283, 2011. [51] N. Okamura, R. Harada, S. Furumoto, H. Arai, K. Yanai, and Y. Kudo, "Tau PET imaging in Alzheimer's disease," (in eng), Curr Neurol Neurosci Rep, vol. 14, no. 11, p. 500, Nov 2014. [52] K. A. Johnson et al., "Tau positron emission tomographic imaging in aging and early Alzheimer disease," Ann Neurol, vol. 79, no. 1, pp. 110-9, Jan 2016. [53] K. Liu, J. Li, R. Raghunathan, H. Zhao, X. Li, and S. T. C. Wong, "The Progress of Label- Free Optical Imaging in Alzheimer's Disease Screening and Diagnosis," Front Aging Neurosci, vol. 13, p. 699024, 2021. [54] C. R. Jack, Jr. et al., "NIA-AA Research Framework: Toward a biological definition of Alzheimer's disease," Alzheimers Dement, vol. 14, no. 4, pp. 535-562, Apr 2018. [55] N. Andreasen et al., "Evaluation of CSF-tau and CSF-Aβ42 as Diagnostic Markers for 139 Alzheimer Disease in Clinical Practice," Archives of Neurology, vol. 58, no. 3, pp. 373-379, 2001. [56] E. B. Hanlon et al., "Prospects for in vivo Raman spectroscopy," (in eng), Phys Med Biol, vol. 45, no. 2, pp. R1-59, Feb 2000. [57] S. Nie and S. R. Emory, "Probing Single Molecules and Single Nanoparticles by Surface- Enhanced Raman Scattering," Science, vol. 275, no. 5303, pp. 1102-1106, 1997. [58] R. Pilot, R. Signorini, C. Durante, L. Orian, M. Bhamidipati, and L. Fabris, "A Review on Surface-Enhanced Raman Scattering," Biosensors (Basel), vol. 9, no. 2, Apr 17 2019. [59] X. Zhang et al., "Robust and Universal SERS Sensing Platform for Multiplexed Detection of Alzheimer's Disease Core Biomarkers Using PAapt-AuNPs Conjugates," ACS Sens, vol. 4, no. 8, pp. 2140-2149, Aug 23 2019. [60] D. Yu et al., "SERS-Based Immunoassay Enhanced with Silver Probe for Selective Separation and Detection of Alzheimer's Disease Biomarkers," Int J Nanomedicine, vol. 16, pp. 1901-1911, 2021. [61] Y. Xia, P. Padmanabhan, S. Sarangapani, B. Gulyas, and M. Vadakke Matham, "Bifunctional Fluorescent/Raman Nanoprobe for the Early Detection of Amyloid," Sci Rep, vol. 9, no. 1, p. 8497, Jun 11 2019. [62] C. L. Zavaleta et al., "A Raman-based endoscopic strategy for multiplexed molecular imaging," Proc Natl Acad Sci U S A, vol. 110, no. 25, pp. E2288-97, Jun 18 2013. [63] Y. W. Wang, Q. Yang, S. Kang, M. A. Wall, and J. T. C. Liu, "High-speed Raman-encoded molecular imaging of freshly excised tissue surfaces with topically applied SERRS nanoparticles," J Biomed Opt, vol. 23, no. 4, pp. 1-8, Apr 2018. [64] G. A. Elder, M. A. Gama Sosa, and R. De Gasperi, "Transgenic mouse models of Alzheimer's disease," (in eng), Mt Sinai J Med, vol. 77, no. 1, pp. 69-81, Jan-Feb 2010. [65] F. M. 
LaFerla and K. N. Green, "Animal models of Alzheimer disease," Cold Spring Harb Perspect Med, vol. 2, no. 11, Nov 1 2012. [66] W. Street, "Breast Cancer Facts & Figures 2019-2020," (in en), p. 44. [67] S. E. Singletary et al., "Revision of the American Joint Committee on Cancer Staging System for Breast Cancer," (in en), Journal of Clinical Oncology, vol. 20, no. 17, pp. 3628- 3636, 2002/09/01/ 2002. [68] A. E. Giuliano, S. B. Edge, and G. N. Hortobagyi, "Eighth Edition of the AJCC Cancer Staging Manual: Breast Cancer," Ann Surg Oncol, vol. 25, no. 7, pp. 1783-1785, Jul 2018. [69] W. D. Foulkes and J. S. Reis-Filho, "Triple-Negative Breast Cancer," (in en), The New England Journal of Medicine, p. 11, 2010 2010. 140 [70] C. E. DeSantis et al., "Breast cancer statistics, 2019," CA Cancer J Clin, vol. 69, no. 6, pp. 438-451, Nov 2019. [71] K. L. Kummerow, L. Du, D. F. Penson, Y. Shyr, and M. A. Hooks, "Nationwide trends in mastectomy for early-stage breast cancer," JAMA Surg, vol. 150, no. 1, pp. 9-16, Jan 2015. [72] S. E. Singletary, "Surgical margins in patients with early-stage breast cancer treated with breast conservation therapy," The American Journal of Surgery, vol. 184, no. 5, pp. 383- 393, 2002/11/01/ 2002. [73] J. F. Waljee, E. S. Hu, L. A. Newman, and A. K. Alderman, "Predictors of re-excision among women undergoing breast-conserving surgery for cancer," (in eng), Annals of surgical oncology, vol. 15, no. 5, pp. 1297-1303, 2008. [74] N. B. Kouzminova, S. Aggarwal, A. Aggarwal, M. D. Allo, and A. Y. Lin, "Impact of initial surgical margins and residual cancer upon re-excision on outcome of patients with localized breast cancer," The American Journal of Surgery, vol. 198, no. 6, pp. 771-780, 2009. [75] T. S. Menes, P. I. Tartter, I. Bleiweiss, J. H. Godbold, A. Estabrook, and S. R. Smith, "The consequence of multiple re-excisions to obtain clear lumpectomy margins in breast cancer patients," (in eng), Annals of surgical oncology, vol. 12, no. 11, pp. 881-885, 2005. [76] Y. Wang, S. Kang, J. D. Doerksen, A. K. Glaser, and J. T. Liu, "Surgical Guidance via Multiplexed Molecular Imaging of Fresh Tissues Labeled with SERS-Coded Nanoparticles," IEEE J Sel Top Quantum Electron, vol. 22, no. 4, Jul-Aug 2016. [77] Y. W. Wang et al., "Multiplexed Molecular Imaging of Fresh Tissue Surfaces Enabled by Convection-Enhanced Topical Staining with SERS-Coded Nanoparticles," Small, vol. 12, no. 40, pp. 5612-5621, Oct 2016. [78] B. L. Fearey, S. M. Angel, M. L. Myrick, and T. M. Vess, "Remote Raman spectroscopy using diode lasers and fiber-optic probes," presented at the Optical Methods for Ultrasensitive Detection and Analysis: Techniques and Applications, 1991. [79] Y. Wang et al., "Quantitative molecular phenotyping with topically applied SERS nanoparticles for intraoperative guidance of breast cancer lumpectomy," Sci Rep, vol. 6, p. 21242, Feb 16 2016. [80] Y. W. Wang et al., "Raman-Encoded Molecular Imaging with Topically Applied SERS Nanoparticles for Intraoperative Guidance of Lumpectomy," Cancer Res, vol. 77, no. 16, pp. 4506-4516, Aug 15 2017. [81] E. Garai et al., "A real-time clinical endoscopic system for intraluminal, multiplexed imaging of surface-enhanced Raman scattering nanoparticles," PLoS One, vol. 10, no. 4, p. e0123185, 2015. [82] Y. W. Wang, S. Kang, A. Khan, P. Q. Bao, and J. T. Liu, "In vivo multiplexed molecular 141 imaging of esophageal cancer via spectral endoscopy of topically applied SERS nanoparticles," Biomed Opt Express, vol. 6, no. 10, pp. 3714-23, Oct 1 2015. [83] G. 
J. Tearney, P. Z. McVeigh, T. D. Wang, R. J. Mallia, I. Veilleux, and B. C. Wilson, "Development of a widefield SERS imaging endoscope," presented at the Endoscopic Microscopy VII, 2012. [84] P. Z. McVeigh, R. J. Mallia, I. Veilleux, and B. C. Wilson, "Widefield quantitative multiplex surface enhanced Raman scattering imaging in vivo," J Biomed Opt, vol. 18, no. 4, p. 046011, Apr 2013. [85] M. H. El-Dakdouki et al., "Development of multifunctional hyaluronan-coated nanoparticles for imaging and drug delivery to cancer cells," Biomacromolecules, vol. 13, no. 4, pp. 1144-51, Apr 9 2012. [86] S. Misra, V. C. Hascall, R. R. Markwald, and S. Ghatak, "Interactions between Hyaluronan and Its Receptors (CD44, RHAMM) Regulate the Activities of Inflammation and Cancer," Front Immunol, vol. 6, p. 201, 2015. [87] C. Chen, S. Zhao, A. Karnad, and J. W. Freeman, "The biology and role of CD44 in cancer progression: therapeutic implications," J Hematol Oncol, vol. 11, no. 1, p. 64, May 10 2018. [88] Y. Xue, X. Li, H. Li, and W. Zhang, "Quantifying thiol–gold interactions towards the efficient strength control," Nature communications, vol. 5, no. 1, pp. 1-9, 2014. [89] M.-Y. Lee et al., "Hyaluronic acid–gold nanoparticle/interferon α complex for targeted treatment of hepatitis C virus infection," ACS nano, vol. 6, no. 11, pp. 9522-9531, 2012. [90] O. Gotov, G. Battogtokh, D. Shin, and Y. T. Ko, "Hyaluronic acid-coated cisplatin conjugated gold nanoparticles for combined cancer treatment," Journal of industrial and engineering chemistry, vol. 65, pp. 236-243, 2018. [91] H. Lee, K. Lee, I. K. Kim, and T. G. Park, "Synthesis, characterization, and in vivo diagnostic applications of hyaluronic acid immobilized gold nanoprobes," Biomaterials, vol. 29, no. 35, pp. 4709-4718, 2008. [92] X. Li et al., "Enhancement of cell recognition in vitro by dual-ligand cancer targeting gold nanoparticles," Biomaterials, vol. 32, no. 10, pp. 2540-2545, 2011. [93] J. A. Schroeder et al., "MUC1 overexpression results in mammary gland tumorigenesis and prolonged alveolar differentiation," Oncogene, vol. 23, no. 34, pp. 5739-47, Jul 29 2004. [94] W. Piyawattanametha, Y.-H. Park, V. Milanovic, A. Kasturi, and V. Hachtel, "High brightness MEMS mirror based head-up display (HUD) modules with wireless data streaming capability," presented at the MOEMS and Miniaturized Systems XIV, 2015. [95] P. F. V. Kessel, L. J. Hornbeck, R. E. Meier, and M. R. Douglass, "A MEMS-based projection display," Proceedings of the IEEE, vol. 86, no. 8, pp. 1687-1704, 1998. 142 [96] D. Wang, C. Watkins, and H. Xie, "MEMS Mirrors for LiDAR: A review," Micromachines (Basel), vol. 11, no. 5, Apr 27 2020. [97] D. Wang, L. Thomas, S. Koppal, Y. Ding, and H. Xie, "A Low-Voltage, Low-Current, Digital-Driven MEMS Mirror for Low-Power LiDAR," IEEE Sensors Letters, vol. 4, no. 8, pp. 1-4, 2020. [98] M. D. Turner, G. W. Kamerman, A. Kasturi, V. Milanovic, B. H. Atwood, and J. Yang, "UAV-borne lidar with MEMS mirror-based scanning capability," presented at the Laser Radar Technology and Applications XXI, 2016. [99] Z. Qiu and W. Piyawattanametha, "MEMS-Based Medical Endomicroscopes," IEEE Journal of Selected Topics in Quantum Electronics, vol. 21, no. 4, pp. 376-391, 2015. [100] E. Pengwang, K. Rabenorosoa, M. Rakotondrabe, and N. Andreff, "Scanning Micromirror Platform Based on MEMS Technology for Medical Application," Micromachines (Basel), vol. 7, no. 2, Feb 6 2016. [101] D. Huang et al., "Optical coherence tomography," (in eng), Science, vol. 254, no. 5035, pp. 
1178-81, Nov 22 1991. [102] J. G. Fujimoto, C. Pitris, S. A. Boppart, and M. E. Brezinski, "Optical Coherence Tomography: An Emerging Technology for Biomedical Imaging and Optical Biopsy," Neoplasia, vol. 2, no. 1, pp. 9-25, 2000/01/01/ 2000. [103] Y. Chen, Y. J. Hong, S. Makita, and Y. Yasuno, "Three-dimensional eye motion correction by Lissajous scan optical coherence tomography," Biomed Opt Express, vol. 8, no. 3, pp. 1783-1802, Mar 1 2017. [104] H. Ra et al., "Three-dimensional in vivo imaging by a handheld dual-axes confocal microscope," Optics Express, vol. 16, no. 10, pp. 7224-7232, 2008/05/12 2008. [105] C.-Y. Yao, B. Li, and Z. Qiu, "2D Au-Coated Resonant MEMS Scanner for NIR Fluorescence Intraoperative Confocal Microscope," Micromachines, vol. 10, no. 5, 2019 2019. [106] Y. H. Seo, K. Hwang, and K. H. Jeong, "1.65 mm diameter forward-viewing confocal endomicroscopic catheter using a flip-chip bonded electrothermal MEMS fiber scanner," Opt Express, vol. 26, no. 4, pp. 4780-4785, Feb 19 2018. [107] W. Denk, J. H. Strickler, and W. W. Webb, "Two-Photon Laser Scanning Fluorescence Microscopy," Science, vol. 248, no. 4951, pp. 73-76, 1990. [108] D. Y. Kim et al., "Lissajous Scanning Two-photon Endomicroscope for In vivo Tissue Imaging," Sci Rep, vol. 9, no. 1, p. 3560, Mar 5 2019. [109] W. Liang, K. Murari, Y. Zhang, Y. Chen, M. J. Li, and X. Li, "Increased illumination uniformity and reduced photodamage offered by the Lissajous scanning in fiber-optic two- 143 photon endomicroscopy," J Biomed Opt, vol. 17, no. 2, p. 021108, Feb 2012. [110] Y. Zhou, J. Yao, and L. V. Wang, "Tutorial on photoacoustic tomography," (in eng), Journal of biomedical optics, vol. 21, no. 6, pp. 61007-61007, 2016. [111] L. Xi, J. Sun, Y. Zhu, L. Wu, H. Xie, and H. Jiang, "Photoacoustic imaging based on MEMS mirror scanning," Biomedical Optics Express, vol. 1, no. 5, pp. 1278-1283, 2010/12/01 2010. [112] J. Y. Kim, C. Lee, K. Park, G. Lim, and C. Kim, "Fast optical-resolution photoacoustic microscopy using a 2-axis water-proofing MEMS scanner," Scientific Reports, vol. 5, no. 1, p. 7932, 2015/01/21 2015. [113] N. Khemthongcharoen, R. Jolivot, S. Rattanavarin, and W. Piyawattanametha, "Advances in imaging probes and optical microendoscopic imaging techniques for early in vivo cancer assessment," Advanced Drug Delivery Reviews, vol. 74, pp. 53-74, 2014/07/30/ 2014. [114] I. W. Jung, D. Lopez, Z. Qiu, and W. Piyawattanametha, "2-D MEMS Scanner for Handheld Multispectral Dual-Axis Confocal Microscopes," Journal of Microelectromechanical Systems, vol. 27, no. 4, pp. 605-612, 2018. [115] D. L. Dickensheets, T. Liu, M. Rajadhyaksha, W. Piyawattanametha, Y.-H. Park, and H. Zappe, "MEMS-in-the-lens 3D beam scanner for in vivo microscopy," presented at the MOEMS and Miniaturized Systems XVIII, 2019. [116] W. Piyawattanametha et al., "Electrostatic MEMS resonating micro-polygonal scanner for circumferential endoscopic bio-imaging," presented at the MOEMS and Miniaturized Systems XII, 2013. [117] D. Torres, L. Starman, H. Hall, J. Pastrana, and S. Dooley, "Design, Simulation, Fabrication, and Characterization of an Electrothermal Tip-Tilt-Piston Large Angle Micromirror for High Fill Factor Segmented Optical Arrays," Micromachines (Basel), vol. 12, no. 4, Apr 12 2021. [118] L. Zhou, X. Zhang, and H. Xie, "An Electrothermal Cu/W Bimorph Tip-Tilt-Piston MEMS Mirror with High Reliability," (in en), Micromachines, vol. 10, no. 5, 2019. [119] Y.-H. Seo, K. Hwang, H.-C. Park, and K.-H. 
Jeong, "Electrothermal MEMS fiber scanner for optical endomicroscopy," (in en), Optics Express, vol. 24, no. 4, 2016. [120] Y. Zhou, Q. Wen, Z. Wen, J. Huang, and F. Chang, "An electromagnetic scanning mirror integrated with blazed grating and angle sensor for a near infrared micro spectrometer," Journal of Micromechanics and Microengineering, vol. 27, no. 12, 2017. [121] E. Afsharipour, R. Soltanzadeh, B. Park, D. Chrusch, and C. Shafai, "Low-power three- degree-of-freedom Lorentz force microelectromechanical system mirror for optical applications," Journal of Micro/Nanolithography, MEMS, and MOEMS, vol. 18, no. 01, 2019. 144 [122] J. Huang, Q. Wen, Q. Nie, F. Chang, Y. Zhou, and Z. Wen, "Miniaturized NIR Spectrometer Based on Novel MOEMS Scanning Tilted Grating," Micromachines (Basel), vol. 9, no. 10, Sep 20 2018. [123] Z. Qiu, C.-H. Rhee, J. Choi, T. D. Wang, and K. R. Oldham, "Large Stroke Vertical PZT Microactuator With High-Speed Rotational Scanning," (in eng), Journal of microelectromechanical systems : a joint IEEE and ASME publication on microstructures, microactuators, microsensors, and microsystems, vol. 23, no. 2, pp. 256-258, 2014. [124] Y. Zhu, W. Liu, K. Jia, W. Liao, and H. Xie, "A piezoelectric unimorph actuator based tip- tilt-piston micromirror with high fill factor and small tilt and lateral shift," Sensors and Actuators A: Physical, vol. 167, no. 2, pp. 495-501, 2011. [125] Y. Yasuda, M. Akamatsu, M. Tani, T. Iijima, and H. Toshiyoshi, "PIEZOELECTRIC 2D- OPTICAL MICRO SCANNERS WITH PZT THICK FILMS," (in en), Integrated Ferroelectrics, vol. 80, no. 1, pp. 341-353, 2006/11// 2006. [126] L. Wang, W. Chen, J. Liu, J. Deng, and Y. Liu, "A review of recent studies on non-resonant piezoelectric actuators," Mechanical Systems and Signal Processing, vol. 133, p. 106254, 2019/11/01/ 2019. [127] A. R. Cho et al., "Electromagnetic biaxial microscanner with mechanical amplification at resonance," Optics Express, vol. 23, no. 13, pp. 16792-16802, 2015/06/29 2015. [128] R. M. Davis, B. Kiss, D. R. Trivedi, T. J. Metzner, J. C. Liao, and S. S. Gambhir, "Surface- Enhanced Raman Scattering Nanoparticles for Multiplexed Imaging of Bladder Cancer Tissue Permeability and Molecular Phenotype," ACS Nano, vol. 12, no. 10, pp. 9669-9679, Oct 23 2018. [129] K. Hwang, Y. H. Seo, J. Ahn, P. Kim, and K. H. Jeong, "Frequency selection rule for high definition and high frame rate Lissajous scanning," Sci Rep, vol. 7, no. 1, p. 14075, Oct 26 2017. [130] Q. A. A. Tanguy et al., "Real-time Lissajous imaging with a low-voltage 2-axis MEMS scanner based on electrothermal actuation," Opt Express, vol. 28, no. 6, pp. 8512-8527, Mar 16 2020. [131] M. Scholles et al., Ultra compact laser projection systems based on two-dimensional resonant micro scanning mirrors (MOEMS-MEMS 2007 Micro and Nanofabrication). SPIE, 2007. [132] J. T. C. Liu et al., "Micromirror-scanned dual-axis confocal microscope utilizing a gradient-index relay lens for image guidance during brain surgery," (in en), Journal of Biomedical Optics, vol. 15, no. 2, p. 026029, 2010 2010. [133] N. Sepúlveda, A. Rúa, R. Cabrera, and F. Fernández, "Young’s modulus of VO2 thin films as a function of temperature including insulator-to-metal transition regime," Applied Physics Letters, vol. 92, no. 19, 2008. 145 [134] A. Zylbersztejn and N. F. Mott, "Metal-insulator transition in vanadium dioxide," Physical Review B, vol. 11, no. 11, pp. 4383-4395, 06/01/ 1975. [135] A. S. Barker, H. W. Verleur, and H. J. 
Guggenheim, "Infrared Optical Properties of Vanadium Dioxide Above and Below the Transition Temperature," Physical Review Letters, vol. 17, no. 26, pp. 1286-1289, 12/26/ 1966. [136] A. Rúa, F. E. Fernández, and N. Sepúlveda, "Bending in VO2-coated microcantilevers suitable for thermally activated actuators," Journal of Applied Physics, vol. 107, no. 7, p. 074506, 2010/04/01 2010. [137] E. Merced, X. Tan, and N. Sepúlveda, "Strain energy density of VO2-based microactuators," Sensors and Actuators A: Physical, vol. 196, pp. 30-37, 2013. [138] J. Wu, Q. Gu, B. S. Guiton, N. P. de Leon, L. Ouyang, and H. Park, "Strain-Induced Self Organization of Metal−Insulator Domains in Single-Crystalline VO2 Nanobeams," Nano Letters, vol. 6, no. 10, pp. 2313-2317, 2006/10/01 2006. [139] P. Parikh et al., "Dynamically tracking the strain across the metal-insulator transition in VO2 measured using electromechanical resonators," Nano Lett, vol. 13, no. 10, pp. 4685- 9, Oct 9 2013. [140] L. Wu, S. Dooley, E. A. Watson, P. F. McManamon, and H. Xie, "A Tip-Tilt-Piston Micromirror Array for Optical Phased Array Applications," Journal of Microelectromechanical Systems, vol. 19, no. 6, pp. 1450-1461, 2010. [141] D. Torres et al., "VO2-Based MEMS Mirrors," Journal of Microelectromechanical Systems, vol. 25, no. 4, pp. 780-787, 2016. [142] D. Torres, J. Zhang, S. Dooley, X. Tan, and N. Sepúlveda, "Modeling of MEMS Mirrors Actuated by Phase-Change Mechanism," Micromachines, vol. 8, no. 5, 2017. [143] D. Torres, J. Zhang, S. Dooley, X. Tan, and N. Sepulveda, "Hysteresis-Based Mechanical State Programming of MEMS Mirrors," Journal of Microelectromechanical Systems, vol. 27, no. 2, pp. 344-354, 2018. [144] L. Wu and H. Xie, "A large vertical displacement electrothermal bimorph microactuator with very small lateral shift," Sensors and Actuators A: Physical, vol. 145-146, pp. 371- 379, 2008. [145] R. Cabrera, E. Merced, and N. Sepúlveda, "A micro-electro-mechanical memory based on the structural phase transition of VO2," physica status solidi (a), 2013. [146] E. Merced, R. Cabrera, N. Dávila, F. E. Fernández, and N. Sepúlveda, "A micro-mechanical resonator with programmable frequency capability," Smart Materials and Structures, vol. 21, no. 3, 2012. [147] T. D. Wang, M. J. Mandella, C. H. Contag, and G. S. Kino, "Dual-axis confocal microscope 146 for high-resolution in vivo imaging," (in en), Optics Letters, vol. 28, no. 6, p. 414, 2003/03/15/ 2003. [148] L. Wei, C. Yin, and J. T. C. Liu, "Dual-Axis Confocal Microscopy for Point-of-Care Pathology," (in en), IEEE Journal of Selected Topics in Quantum Electronics, vol. 25, no. 1, pp. 1-10, 2019/01// 2019. [149] T. D. Wang, C. H. Contag, M. J. Mandella, N. Chan, and G. S. Kino, "Dual-axes confocal microscopy with post-objective scanning and low-coherence heterodyne detection," Optics Letters, vol. 28, no. 20, pp. 1915-1917, 2003/10/15 2003. [150] L. Wei, C. Yin, Y. Fujita, N. Sanai, and J. T. C. Liu, "Handheld line-scanned dual-axis confocal microscope with pistoned MEMS actuation for flat-field fluorescence imaging," (in en), Optics Letters, vol. 44, no. 3, p. 671, 2019/02/01/ 2019. [151] Y. Chen et al., "Video-rate in vivo fluorescence imaging with a line-scanned dual-axis confocal microscope," (in en), Journal of Biomedical Optics, vol. 20, no. 10, p. 106011, 2015/10/28/ 2015. [152] S. Y. Leigh, Y. Chen, and J. T. C. Liu, "Modulated-alignment dual-axis (MAD) confocal microscopy for deep optical sectioning in tissues," (in en), Biomedical Optics Express, vol. 5, no. 6, p. 
1709, 2014/06/01/ 2014. [153] S. Y. Leigh, Y. Chen, and J. T. C. Liu, "Modulated-Alignment Dual-Axis (MAD) Confocal Microscopy Optimized for Speed and Contrast," (in en), IEEE Transactions on Biomedical Engineering, vol. 63, no. 10, pp. 2119-2124, 2016/10// 2016. [154] Y. Chen, A. Glaser, and J. T. C. Liu, "Bessel-beam illumination in dual-axis confocal microscopy mitigates resolution degradation caused by refractive heterogeneities," (in en), Journal of Biophotonics, vol. 10, no. 1, pp. 68-74, 2017/01// 2017. [155] T. A. Planchon et al., "Rapid three-dimensional isotropic imaging of living cells using Bessel beam plane illumination," Nature Methods, vol. 8, no. 5, pp. 417-423, 2011/05/01 2011. [156] S. Preibisch, S. Saalfeld, and P. Tomancak, "Globally optimal stitching of tiled 3D microscopic image acquisitions," (in eng), Bioinformatics (Oxford, England), vol. 25, no. 11, pp. 1463-1465, 2009. [157] P. Thévenaz and M. Unser, "User-friendly semiautomated assembly of accurate image mosaics in microscopy," (in eng), Microsc Res Tech, vol. 70, no. 2, pp. 135-46, Feb 2007. [158] S. Bakas, D. Uttamchandani, R. Bauer, W. Piyawattanametha, Y.-H. Park, and H. Zappe, "Light-sheet microscopy using MEMS and active optics for 3D image acquisition control," presented at the MOEMS and Miniaturized Systems XIX, 2020. [159] K. Ikegami, T. Koyama, T. Saito, Y. Yasuda, and H. Toshiyoshi, "A biaxial PZT optical scanner for pico-projector applications," in SPIE OPTO, 2015, p. 93750M, San Francisco, 147 California, United States. [160] T. Masanao, A. Masahiro, Y. Yoshiaki, and T. Hiroshi, "A two-axis piezoelectric tilting micromirror with a newly developed PZT-meandering actuator," in 2007 IEEE 20th International Conference on Micro Electro Mechanical Systems (MEMS), 2007, pp. 699- 702, Hyogo, Japan: IEEE. [161] M. Tani, M. Akamatsu, Y. Yasuda, H. Fujita, and H. Toshiyoshi, "A 2D-optical scanner actuated by PZT film deposited by arc discharged reactive ion-plating (ADRIP) method," 2004, pp. 188-189. [162] R. H. Hadfield, "Single-photon detectors for optical quantum information applications," Nature Photonics, vol. 3, no. 12, pp. 696-705, 2009/12/01 2009. [163] B. Vyhnalek, S. Tedder, and J. Nappier, Performance and characterization of a modular superconducting nanowire single photon detector system for space-to-Earth optical communications links (SPIE LASE). SPIE, 2018. [164] J. Yu et al., "Intravital confocal fluorescence lifetime imaging microscopy in the second near-infrared window," Opt Lett, vol. 45, no. 12, pp. 3305-3308, Jun 15 2020. [165] F. Wang et al., "In vivo non-invasive confocal fluorescence imaging beyond 1,700 nm using superconducting nanowire single-photon detectors," Nat Nanotechnol, vol. 17, no. 6, pp. 653-660, Jun 2022. [166] J. Liao et al., "Depth-resolved NIR-II fluorescence mesoscope," Biomed Opt Express, vol. 11, no. 5, pp. 2366-2372, May 1 2020. [167] D. V. Reddy, A. E. Lita, S. W. Nam, R. P. Mirin, and V. B. Verma, "Achieving 98% system efficiency at 1550 nm in superconducting nanowire single photon detectors," in Rochester Conference on Coherence and Quantum Optics (CQO-11), Rochester, New York, 2019, p. W2B.2: Optica Publishing Group. [168] D. Rosenberg, A. J. Kerman, R. J. Molnar, and E. A. Dauler, "High-speed and high- efficiency superconducting nanowire single photon detector array," Optics Express, vol. 21, no. 2, pp. 1440-1447, 2013/01/28 2013. [169] E. E. 
Wollman et al., "UV superconducting nanowire single-photon detectors with high efficiency, low noise, and 4 K operating temperature," Optics Express, vol. 25, no. 22, pp. 26792-26801, 2017/10/30 2017. [170] L. Chen et al., "Ultra-sensitive mid-infrared emission spectrometer with sub-ns temporal resolution," Optics Express, vol. 26, no. 12, pp. 14859-14868, 2018/06/11 2018. [171] G. G. Taylor et al., "Photon counting LIDAR at 2.3µm wavelength with superconducting nanowires," Optics Express, vol. 27, no. 26, pp. 38147-38158, 2019/12/23 2019. [172] L. You et al., "Jitter analysis of a superconducting nanowire single photon detector," AIP 148 Advances, vol. 3, no. 7, p. 072135, 2013. [173] A. M. Kadin and M. W. Johnson, "Nonequilibrium photon‐induced hotspot: A new mechanism for photodetection in ultrathin metallic films," Applied Physics Letters, vol. 69, no. 25, pp. 3938-3940, 1996/12/16 1996. [174] G. N. Gol’tsman et al., "Picosecond superconducting single-photon optical detector," Applied Physics Letters, vol. 79, no. 6, pp. 705-707, 2001. [175] C. M. Natarajan, M. G. Tanner, and R. H. Hadfield, "Superconducting nanowire single- photon detectors: physics and applications," Superconductor Science and Technology, vol. 25, no. 6, 2012. [176] A. D. Semenov, G. N. Gol’tsman, and A. A. Korneev, "Quantum detection by current carrying superconducting film," Physica C: Superconductivity, vol. 351, no. 4, pp. 349-356, 2001/04/15/ 2001. [177] J. K. W. Yang, A. J. Kerman, E. A. Dauler, V. Anant, K. M. Rosfjord, and K. K. Berggren, "Modeling the Electrical and Thermal Response of Superconducting Nanowire Single- Photon Detectors," IEEE Transactions on Applied Superconductivity, vol. 17, no. 2, pp. 581-585, 2007. [178] R. Devi, V. Bansal, and D. Kumar, "Design and Simulation of Electrothermally Activated Bidirectional Microtweezer Using PMMA for Biomedical Applications," (in en). [179] R. H. Hadfield, A. J. Miller, S. W. Nam, R. L. Kautz, and R. E. Schwall, "Low-frequency phase locking in high-inductance superconducting nanowires," Applied Physics Letters, vol. 87, no. 20, p. 203505, 2005/11/14 2005. [180] A. J. Kerman et al., "Kinetic-inductance-limited reset time of superconducting nanowire photon counters," Applied Physics Letters, vol. 88, no. 11, p. 111116, 2006/03/13 2006. [181] J. H. Yoo, S. H. Jeong, R. Greif, and R. E. Russo, "Explosive change in crater properties during high power nanosecond laser ablation of silicon," Journal of Applied Physics, vol. 88, no. 3, pp. 1638-1649, 2000. [182] N. Muhammad and L. Li, "Underwater femtosecond laser micromachining of thin nitinol tubes for medical coronary stent manufacture," Applied Physics A, vol. 107, no. 4, pp. 849- 861, 2012. [183] Q. Zheng et al., "Mechanism and morphology control of underwater femtosecond laser microgrooving of silicon carbide ceramics," Opt Express, vol. 27, no. 19, pp. 26264-26280, Sep 16 2019. [184] D. Zhang, B. Gökce, S. Sommer, R. Streubel, and S. Barcikowski, "Debris-free rear-side picosecond laser ablation of thin germanium wafers in water with ethanol," Applied Surface Science, vol. 367, pp. 222-230, 2016. 149 [185] P. I. Bastiaens and A. Squire, "Fluorescence lifetime imaging microscopy: spatial resolution of biochemical processes in the cell," (in eng), Trends Cell Biol, vol. 9, no. 2, pp. 48-52, Feb 1999. [186] L. Marcu, "Fluorescence Lifetime Techniques in Medical Applications," Annals of Biomedical Engineering, vol. 40, no. 2, pp. 304-331, 2012/02/01 2012. [187] E. P. 
Buurman et al., "Fluorescence lifetime imaging using a confocal laser scanning microscope," Scanning, vol. 14, no. 3, pp. 155-159, 1992/01/01 1992. [188] W. Becker, "Fluorescence lifetime imaging--techniques and applications," J Microsc, vol. 247, no. 2, pp. 119-36, Aug 2012. [189] X. Liu et al., "Fast fluorescence lifetime imaging techniques: A review on challenge and development," Journal of Innovative Optical Health Sciences, vol. 12, no. 05, 2019. [190] E. B. van Munster and T. W. Gadella, "Fluorescence lifetime imaging microscopy (FLIM)," (in eng), Adv Biochem Eng Biotechnol, vol. 95, pp. 143-75, 2005. [191] R. Foord, R. Jones, C. J. Oliver, and E. R. Pike, "The Use of Photomultiplier Tubes for Photon Counting," Applied Optics, vol. 8, no. 10, pp. 1975-1989, 1969/10/01 1969. [192] D. Renker, "Geiger-mode avalanche photodiodes, history, properties and problems," Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 567, no. 1, pp. 48-56, 2006/11/01/ 2006. [193] D. Renker and E. Lorenz, "Advances in solid state photon detectors," Journal of Instrumentation, vol. 4, no. 04, pp. P04004-P04004, 2009/04/07 2009. [194] S. Cova, M. Ghioni, A. Lotito, I. Rech, and F. Zappa, "Evolution and prospects for single- photon avalanche diodes and quenching circuits," Journal of Modern Optics, vol. 51, no. 9-10, pp. 1267-1288, 2004/06/01 2004. [195] M. A. Itzler et al., "Advances in InGaAsP-based avalanche diode single photon detectors," Journal of Modern Optics, vol. 58, no. 3-4, pp. 174-200, 2011/02/10 2011. [196] W. Becker, Advanced Time-Correlated Single Photon Counting Techniques. Springer Berlin Heidelberg, 2005. [197] F. Xia et al., "Short-Wave Infrared Confocal Fluorescence Imaging of Deep Mouse Brain with a Superconducting Nanowire Single-Photon Detector," ACS Photonics, vol. 8, no. 9, pp. 2800-2810, 2021. 150