Open Access
Physics and artificial intelligence: illuminating the future of optics and photonics
31 October 2024
Abstract

The 2024 Nobel Prize in Physics recognized John Hopfield and Geoffrey Hinton for their pioneering work on artificial neural networks, which profoundly impacted the physical sciences, particularly optics and photonics. This perspective summarizes the Nobel laureates’ contributions, highlighting the physics-based principles and inspiration behind the development of modern artificial intelligence (AI) and also outlining some of the emerging major advances achieved in optics and photonics enabled by AI.

1. Introduction

The artificial intelligence (AI) revolution is upon us, transforming not just our daily lives with smart assistants, personalized recommendations, and autonomous systems but also profoundly altering the landscape of scientific research and knowledge discovery. This revolution is characterized by the integration of AI into every domain of human activity, from healthcare and finance to education and entertainment. Its transformative effects are also being felt in the world of scientific research, e.g., in the physical sciences, where AI is not just assisting in data analysis1 but is also driving new discoveries2 and pushing the boundaries of knowledge and applied sciences. The interplay between AI and physics has reached a point where advancements in one field are catalyzing progress in the other, creating feedback loops of innovations that are reshaping our understanding of the universe and the tools we use to explore it.

1.1. Physics and AI

The interaction between physics and AI has been a symbiotic one, where principles from physics have been applied to enhance AI models, and AI, in turn, has been used to solve complex problems in physics. This dynamic interplay is beautifully exemplified by the work of the 2024 Nobel laureates in Physics, John Hopfield3 and Geoffrey Hinton.4 Some of their pioneering contributions to artificial neural networks,5–16 which are deeply rooted in concepts borrowed from or inspired by physics,6,7,11,12 have laid the foundations for the modern AI revolution; see Fig. 1. John Hopfield, a physicist by training, was one of the first to draw a strong connection between physics and neural networks. His work was inspired by the complex world of spin glasses—disordered magnetic systems with intricate interactions. Hopfield recognized an analogy between these physical systems and networks of interconnected neurons in the brain.7 This insight led him to develop a type of recurrent neural network, known as the Hopfield network,17 capable of storing and retrieving patterns [Fig. 1(a)]. His 1982 paper,7 a cornerstone in the field of AI, demonstrated how principles from condensed matter physics could be harnessed to create computational systems capable of learning and memory. The Hopfield network provided methods to explore how associative memory18 works, in both biological and artificial systems, and became one of the foundational models for AI research. Hopfield’s work was groundbreaking not just because it connected physics to AI but because it introduced the concept of energy landscapes9,19 to neural networks. In a Hopfield network, the system settles into states of minimum energy, akin to how a physical system seeks equilibrium. This analogy allowed researchers to use well-established methods from statistical mechanics to analyze and better understand neural networks, opening up new avenues for creating advanced AI systems that could more closely emulate human cognition.
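The mechanics of this energy-based memory can be sketched in a few lines. The following is a minimal illustration (not Hopfield's original notation): bipolar patterns are stored in a symmetric weight matrix via a Hebbian rule, and asynchronous updates can only lower the energy E(s), so a corrupted input rolls downhill to the stored attractor.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: W accumulates outer products of the stored patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / len(patterns)

def energy(W, s):
    """Hopfield energy E(s) = -1/2 s^T W s; updates never increase it."""
    return -0.5 * s @ W @ s

def recall(W, s, n_sweeps=10):
    """Asynchronous updates descend the energy landscape to an attractor."""
    s = s.copy()
    for _ in range(n_sweeps):
        for i in np.random.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store one bipolar pattern and recover it from a corrupted version.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[:2] *= -1  # flip two "neurons"
recovered = recall(W, noisy)
```

The corrupted state has higher energy than the stored pattern, so the update dynamics restore the memory.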

Fig. 1

Some of the major contributions of John Hopfield and Geoffrey Hinton, with their connections to physical systems. (a) Left: the Hopfield model, a recurrent neural network capable of storing and retrieving patterns. John Hopfield proposed an energy-based memory model capable of storing and retrieving patterns, drawing intuition from the energy dynamics of spin glasses. The network’s dynamics are driven by minimizing the energy of the model state E(s). Center: the energy landscape of a Hopfield network, depicting how the network converges to stable states (attractors) represented as valleys in the energy landscape. Right: a spin glass system, i.e., a disordered material with magnetic interactions among atomic spins. The energy of a state in the Hopfield model is analogous to the energy Hamiltonian H(σ) of a spin glass state σ. (b) Left: the Boltzmann machine, a stochastic generalization of the Hopfield network, includes hidden units (represented by gray nodes) that enhance its representational capability. By incorporating the Boltzmann distribution from statistical mechanics, Geoffrey Hinton introduced stochasticity into his neural network models, enhancing their ability to learn complex patterns. Right: the Boltzmann distribution, which governs the probability of a state based on its energy. This distribution plays a key role in the stochastic activation of artificial neurons in Boltzmann machines.


Similar to Hopfield’s scientific explorations at the intersection of physics and AI, Geoffrey Hinton, a cognitive psychologist and computer scientist, also took the push–pull relationship between AI and physics as a major inspiration for his seminal work. Hinton recognized the potential of Boltzmann machines,12 a type of stochastic neural network inspired by statistical mechanics, to learn complex patterns from data [Fig. 1(b)]. One of Hinton’s groundbreaking contributions was developing efficient learning algorithms for these networks.14,15,20 These algorithms enabled neural networks to extract meaningful features from data, such as images, text, or language, by optimizing the network’s parameters through processes similar to energy minimization. He also popularized the backpropagation algorithm,13,21,22 revolutionized convolutional neural networks,23 and introduced techniques such as dropout to improve training.24 Hinton’s work laid the groundwork for modern deep learning architectures, the applications of which have revolutionized fields such as computer vision, natural language processing, robotics, and biomedical sciences.
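Hinton's contrastive-divergence learning can be illustrated with a toy restricted Boltzmann machine. The sketch below assumes a binary RBM without bias terms and uses CD-1 (a single Gibbs-sampling step); the layer sizes and training pattern are illustrative, not drawn from the original papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, v0, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM.

    Hidden units fire stochastically with probabilities given by the
    Boltzmann (logistic) distribution over their local energies.
    """
    # Positive phase: sample hidden units given the data.
    ph0 = sigmoid(v0 @ W)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step back to the visible layer.
    pv1 = sigmoid(h0 @ W.T)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W)
    # Update nudges the model's statistics toward the data's statistics.
    return W + lr * (np.outer(v0, ph0) - np.outer(v1, ph1))

# Train on a single repeated binary pattern.
v_data = np.array([1.0, 0.0, 1.0, 0.0])
W = rng.normal(0, 0.1, size=(4, 3))  # 4 visible, 3 hidden units
for _ in range(200):
    W = cd1_step(W, v_data)
```

In a full implementation, visible and hidden bias terms are learned as well, and training uses mini-batches of data rather than a single pattern.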

Beyond the seminal works of Hopfield and Hinton, the influence of physics on AI extends to various other areas. For instance, the concept of the renormalization group, a powerful tool in condensed-matter and particle physics for studying systems with many interacting components across different scales, has found applications in deep learning for analyzing hierarchical structures and improving the efficiency of training algorithms.25 Another example is the use of quantum-mechanical principles to develop new types of neural networks, known as quantum neural networks,26 which leverage quantum phenomena such as superposition and entanglement to potentially achieve exponential speedups for certain computational tasks. Furthermore, ideas from information theory, a field with deep roots in thermodynamics and statistical mechanics, have been instrumental in developing algorithms for compressing and efficiently representing information in AI systems. As another important example, diffusion models,27 a powerful new class of generative models, draw direct inspiration from the physics of diffusion28,29 and Brownian motion. These examples illustrate the rich and ongoing cross-fertilization between physics and AI, where fundamental concepts from physics continue to inspire novel approaches and solutions in the realm of AI.
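The physics connection is explicit in the forward (noising) process of a denoising diffusion model, which is a discretized Brownian-motion-like stochastic process. A minimal one-dimensional sketch, using an illustrative constant noise schedule:

```python
import math
import random

def forward_diffusion(x0, t, betas):
    """Closed-form forward (noising) step of a diffusion model:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    with eps ~ N(0, 1) -- a Brownian-motion-like process that
    gradually turns data into Gaussian noise."""
    alpha_bar = 1.0
    for beta in betas[:t]:
        alpha_bar *= 1.0 - beta
    eps = random.gauss(0.0, 1.0)
    x_t = math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps
    return x_t, alpha_bar

betas = [0.02] * 50  # simple constant noise schedule
x0 = 1.0             # a one-dimensional "data point"
_, a10 = forward_diffusion(x0, 10, betas)
_, a50 = forward_diffusion(x0, 50, betas)
# As t grows, alpha_bar decays toward 0 and the signal is destroyed.
```

A generative model then learns to reverse this process, denoising step by step from pure noise back to data.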

2. AI in Optics and Photonics

The impact and uses of AI in physics extend far beyond data analysis and simulations. It is fostering a deeper understanding of fundamental principles and enabling the design of entirely new physical systems. This influence is particularly evident in fields such as optics and photonics,30–32 where AI is revolutionizing the way scientists manipulate, control, and harness light. From designing novel optical materials with unprecedented properties to optimizing the performance of complex photonic devices, AI is pushing the boundaries of what is possible in the manipulation and control of light, unveiling exciting new applications in computing, sensing, imaging, and beyond.

2.1. AI in Computational Imaging and Sensing

One of the exciting areas where AI has been making a significant impact is computational imaging and sensing. Traditional imaging methods often face limitations due to the physical constraints of optics, such as resolution limits or noise. AI, however, offers powerful new tools to overcome some of these barriers. In microscopy, for example, AI algorithms can enhance image resolution, remove noise and artifacts, and even reconstruct 3D structures from limited data.33 Techniques such as super-resolution microscopy, which breaks the diffraction limit of light to reveal finer details than previously possible, have been significantly advanced by AI.34–40 In holographic imaging, AI algorithms have excelled at solving complex physics-based inverse problems,41–46 such as reconstructing a 3D scene from holographic data,47 with greater accuracy and speed than traditional methods, while also providing different contrast mechanisms, e.g., reconstructing the images of specimens with brightfield contrast using their monochrome holograms.47 In fact, AI has been driving major innovations through such cross-modality image transformations,36,48 where the spatial and spectral information typically associated with one imaging modality is extracted from data acquired using a different modality. This capability is opening up exciting new possibilities for biomedical imaging49 and remote sensing,50 among others. A compelling example is virtual staining in digital pathology and microscopy.51,52 Traditional histological staining involves applying chemical stains to biological tissue to highlight various features under a microscope. However, this staining process can be time-consuming, laborious, and costly and can also damage the samples. Deep neural networks can now routinely transform label-free images of specimens into virtually stained microscopic images that mimic the appearance of traditionally stained images, eliminating the need for chemical staining processes.49,53 This allows for faster, cheaper, and more efficient analysis of biological samples and has significant implications for histology as well as live-cell imaging, where minimizing or eliminating chemical perturbations to the native biological system (through, e.g., external labels and tags) is crucial.

AI is also making significant inroads in optical sensing, impacting both the design of sensors and the interpretation of sensor data.54 In areas such as biosensing and environmental monitoring, AI algorithms can rapidly process complex optical signals to detect subtle changes and identify specific analytes or conditions with greater sensitivity and specificity.55–63 AI is also being used to design novel optical sensors with improved performance. For instance, in the development of optical sensors for point-of-care diagnostics, AI can optimize the design of the optical detection system to enhance sensitivity/specificity and reduce sample volume requirements, while also providing multiplexed detection for the rapid and quantitative measurement of a panel of biomarkers and disease conditions.63–68 By automating the optimization, quantitative multiplexed sensing, and decision processes, AI is accelerating the development of innovative optical sensors with tailored functionalities for applications in point-of-care sensing, diagnostics, environmental monitoring, and structural health monitoring, among many others.54,69–72

2.2. AI-Driven Optics and Photonics Design

AI is also revolutionizing the design of optical materials, devices, and systems73–76 by enabling a paradigm shift in “inverse” design.30,77 Traditional inverse design approaches in optics and photonics typically rely on iterative optimization algorithms: starting with an initial guess of the device structure, they repeatedly simulate its performance and use the results to refine the design parameters until the desired performance metric is achieved. While these methods can be effective, they often require significant computational resources and time, especially for complex tasks and designs. In contrast, deep learning-based approaches offer more efficient and powerful alternatives. These methods involve training a neural network on a large dataset of optical structures and their corresponding performance metrics.78 Once trained, the network can rapidly predict the performance of new designs and even generate novel structures with desired properties. This learning-based approach significantly accelerates the design process and enables the exploration/optimization of a wider range of parameters and possibilities.79–81 AI-powered inverse design has already led to the creation of materials and systems with unprecedented capabilities, including unidirectional imagers,82 invisibility cloaks that can render objects invisible,83 and ultra-efficient light absorbers for enhanced solar energy harvesting,84 among many others. Furthermore, this AI-powered optimization framework allows for the smart design of free-form optics, enabling compact and lightweight optical systems with superior performance.85–87
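The two-stage workflow described above can be sketched as follows. This is a toy illustration: the `simulate` function is a hypothetical stand-in for an electromagnetic solver, and a polynomial fit stands in for the trained neural-network surrogate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy "simulator": maps one design parameter (e.g., a layer
# thickness) to a performance metric (e.g., transmission at a wavelength).
def simulate(d):
    return np.sin(3.0 * d) * np.exp(-0.5 * d)

# Stage 1: build a dataset of designs and simulated responses, then fit a
# fast surrogate model (here a polynomial, standing in for a neural net).
designs = rng.uniform(0.0, 2.0, 200)
responses = simulate(designs)
surrogate = np.polynomial.Polynomial.fit(designs, responses, deg=9)

# Stage 2: invert the surrogate -- scan the design space for the structure
# whose predicted response best matches a target performance value.
target = 0.6
candidates = np.linspace(0.0, 2.0, 10_001)
best = candidates[np.argmin((surrogate(candidates) - target) ** 2)]
```

Because the surrogate evaluates in microseconds rather than the minutes a full solver might take, the second stage can exhaustively or gradient-search the design space at negligible cost.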

3. Addressing Challenges in AI-Enabled Physics: Potential Role of Optics and Photonics

Despite the remarkable progress made at the intersection of physics and AI, its widespread adoption faces some bottlenecks.88,89 Some of these challenges relate to the high energy consumption, bandwidth limitations, and latency of AI systems, as well as hallucinations/artifacts in inference. For example, while AI offers immense potential in computational imaging, hallucinations in the generated/reconstructed images raise concerns: the output images may contain features or details that are not present in the original data, which could be catastrophic, especially for biomedical imaging applications. This can occur in both inverse problems and cross-modality image transformations, leading to inaccurate reconstructions or misleading interpretations. To mitigate some of these issues, researchers have been incorporating physics-based loss functions into the training of AI models. These loss functions penalize deviations from known physical principles, guiding the learning and inference of the AI model to generate outputs consistent with the underlying physics of the system.52,90 For example, in holographic image reconstruction, a physics consistency-based loss function was used to incorporate knowledge of the wave equation, derived from Maxwell’s equations, ensuring that the reconstructed scene adheres to the laws of wave propagation in free space. This was shown to prevent the generation of unrealistic artifacts and to significantly improve the fidelity of holographic image reconstructions for out-of-distribution objects, demonstrating superior external generalization driven by physics consistency-based learning.90
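The referenced loss is not reproduced here; the sketch below illustrates the general idea of a physics-consistency penalty for holography, using the angular-spectrum solution of free-space wave propagation and illustrative parameters (0.5 μm wavelength, 1 μm pixels, 20 μm propagation distance).

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dz, dx):
    """Free-space propagation via the angular-spectrum method, i.e., the
    wave-equation solution used as the physics prior."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz)  # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

def physics_consistency_loss(reconstructed, measured_hologram,
                             wavelength, dz, dx, weight=1.0):
    """Penalty on deviation from wave propagation: the reconstructed
    object field, propagated to the sensor plane, should reproduce the
    measured hologram intensity."""
    predicted = angular_spectrum_propagate(reconstructed, wavelength, dz, dx)
    return weight * np.mean((np.abs(predicted) ** 2 - measured_hologram) ** 2)

# A reconstruction that obeys free-space propagation incurs ~zero penalty.
obj = np.exp(1j * np.random.default_rng(0).uniform(0, 2 * np.pi, (32, 32)))
holo = np.abs(angular_spectrum_propagate(obj, 0.5e-6, 20e-6, 1e-6)) ** 2
loss = physics_consistency_loss(obj, holo, 0.5e-6, 20e-6, 1e-6)
```

In training, such a term would be added (with a weighting factor) to a standard data-fidelity loss, steering the network toward physically admissible reconstructions.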

Another challenge for future AI systems is that training and rapidly running complex AI models require immense computational power, leading to substantial energy demands91 and large carbon footprints.92 In addition, transferring vast amounts of data between memory and processing units can strain bandwidth and introduce latency, making real-time applications potentially difficult to implement through very large-scale models.93 This is one of the areas where optics and photonics might offer promising solutions.30,87,89,94–97 Optical computing platforms leverage the inherent parallelism and speed of light to perform, e.g., matrix multiplications and other computationally intensive tasks with significantly lower energy consumption and latency compared to electronic systems.98–104 Free-space optical computing platforms, such as diffractive optical networks and smart metasurfaces, can perform visual computing, i.e., directly execute analog computation on the visual information of an input scene without digitization or preprocessing, enabling massively parallel processing of optical information with minimal energy dissipation. These approaches offer a powerful platform for front-end analog information processing, delivering a compressed representation to back-end digital neural networks,87,94,105,106 providing us with the best of both worlds.107–110 The roles can also be reversed: digital neural networks can serve as encoders of information that optical networks106,111–113 then decode with extreme parallelism, requiring no external power beyond the illumination light.
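A diffractive optical network's forward pass can be sketched numerically as alternating phase modulation and free-space propagation, with only the final intensity ever detected. The phase masks below are random placeholders for what would be optimized (trained) surfaces, and all dimensions and the wavelength are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def free_space_tf(n, wavelength, dz, dx):
    """Angular-spectrum transfer function for one propagation step."""
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    kz = 2 * np.pi * np.sqrt(np.maximum(1 / wavelength**2 - FX**2 - FY**2, 0))
    return np.exp(1j * kz * dz)

def diffractive_forward(scene, phase_masks, H):
    """Cascade of phase masks and free-space propagation: the optical
    field itself carries out the computation, with no digitization until
    the final intensity readout at the detector plane."""
    field = scene.astype(complex)
    for mask in phase_masks:
        field = np.fft.ifft2(np.fft.fft2(field * np.exp(1j * mask)) * H)
    return np.abs(field) ** 2  # detector measures intensity only

n = 32
H = free_space_tf(n, wavelength=0.5e-6, dz=40e-6, dx=1e-6)
masks = [rng.uniform(0, 2 * np.pi, (n, n)) for _ in range(3)]  # trained in practice
scene = rng.random((n, n))
intensity = diffractive_forward(scene, masks, H)
```

Because the phase masks and the propagation step are both unitary, the optical energy of the input scene is conserved through the cascade; the "computation" is a redistribution of that energy across the detector plane.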

4. Outlook

As physics and AI continue to drive innovations in the optics and photonics field, the synergy between these two disciplines will inevitably deepen. Physics has been providing foundational principles that guide AI development, while AI has been helping to unravel complex physical phenomena, offering new advances in various fields, from quantum mechanics to cosmology. However, challenges remain in fully integrating AI into physical sciences. The “black box” nature of many AI algorithms can hinder physical interpretability and trust, making it difficult to comprehend the underlying principles driving AI-generated solutions. In addition, ensuring that AI models generalize accurately and avoid “hallucinations” or spurious results requires careful validation and a robust feedback loop between physics and deep learning systems. By fostering a deeper integration that addresses these challenges with proper regulations and checks and balances, we can create large-scale AI-powered models and systems that are not only innovative but also reliable, interpretable, and capable of pushing the boundaries of scientific discovery and technological advancements in physical sciences at large. The recent Nobel Prize in Physics, awarded for pioneering work at the intersection of artificial neural networks and physics, signals the tremendous potential of this push–pull relationship between AI and physics, heralding a new era of research where they propel each other towards groundbreaking advances and discoveries.

References

1. 

T. Chen and T. He, “Higgs Boson discovery with boosted trees,” in Proc. NIPS 2014 Workshop High-Energy Phys. Machine Learning, 69 –80 (2015). Google Scholar

2. 

J. Jumper et al., “Highly accurate protein structure prediction with AlphaFold,” Nature, 596 (7873), 583 –589 https://doi.org/10.1038/s41586-021-03819-2 (2021). Google Scholar

5. 

J. J. Hopfield, “Electron transfer between biological molecules by thermally activated tunneling,” Proc. Natl. Acad. Sci., 71 (9), 3640 –3644 https://doi.org/10.1073/pnas.71.9.3640 (1974). Google Scholar

6. 

J. J. Hopfield, “Kinetic proofreading: a new mechanism for reducing errors in biosynthetic processes requiring high specificity,” Proc. Natl. Acad. Sci., 71 (10), 4135 –4139 https://doi.org/10.1073/pnas.71.10.4135 (1974). Google Scholar

7. 

J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities.,” Proc. Natl. Acad. Sci., 79 (8), 2554 –2558 (1982). Google Scholar

8. 

J. J. Hopfield, “Neurons with graded response have collective computational properties like those of two-state neurons,” Proc. Natl. Acad. Sci., 81 (10), 3088 –3092 https://doi.org/10.1073/pnas.81.10.3088 (1984). Google Scholar

9. 

J. J. Hopfield and D. W. Tank, “Neural’ computation of decisions in optimization problems,” Biol. Cybern., 52 (3), 141 –152 https://doi.org/10.1007/BF00339943 (1985). Google Scholar

10. 

J. J. Hopfield and D. W. Tank, “Computing with neural circuits: a model,” Science, 233 (4764), 625 –633 https://doi.org/10.1126/science.3755256 (1986). Google Scholar

11. 

S. E. Fahlman, G. E. Hinton and T. J. Sejnowski, “Massively parallel architectures for AI: Netl, Thistle, and Boltzmann machines,” in Proc. Third AAAI Conf. Artif. Intell., 109 –113 (1983). Google Scholar

12. 

D. H. Ackley, G. E. Hinton and T. J. Sejnowski, “A learning algorithm for Boltzmann machines,” Cogn. Sci., 9 (1), 147 –169 https://doi.org/10.1016/S0364-0213(85)80012-4 (1985). Google Scholar

13. 

D. E. Rumelhart, G. E. Hinton and R. J. Williams, “Learning representations by back-propagating errors,” Nature, 323 (6088), 533 –536 https://doi.org/10.1038/323533a0 (1986). Google Scholar

14. 

G. E. Hinton, “Training products of experts by minimizing contrastive divergence,” Neural Comput., 14 (8), 1771 –1800 https://doi.org/10.1162/089976602760128018 (2002). Google Scholar

15. 

G. E. Hinton, S. Osindero and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural Comput., 18 (7), 1527 –1554 https://doi.org/10.1162/neco.2006.18.7.1527 (2006). Google Scholar

16. 

G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, 313 (5786), 504 –507 https://doi.org/10.1126/science.1127647 (2006). Google Scholar

17. 

J. J. Hopfield, “Hopfield network,” Scholarpedia, 2 (5), https://doi.org/10.4249/scholarpedia.1977 (2007). Google Scholar

18. 

D. Krotov and J. J. Hopfield, “Dense associative memory for pattern recognition,” Adv. Neural Inf. Process. Sys., 29 (2016). Google Scholar

19. 

J. C. Schön, “Energy landscapes—past, present, and future: a perspective,” J. Chem. Phys., 161 (5), 050901 https://doi.org/10.1063/5.0212867 (2024). Google Scholar

20. 

G. E. Hinton, “A practical guide to training restricted Boltzmann machines,” Neural Networks: Tricks of the Trade, 599 –619 2nd Ed.Springer, Berlin, Heidelberg (2012). https://doi.org/10.1007/978-3-642-35289-8_32 Google Scholar

21. 

H. J. Kelley, “Gradient theory of optimal flight paths,” ARS J., 30 (10), 947 –954 https://doi.org/10.2514/8.5282 (1960). Google Scholar

22. 

S. Linnainmaa, “Taylor expansion of the accumulated rounding error,” BIT Numer. Math., 16 (2), 146 –160 https://doi.org/10.1007/BF01931367 (1976). Google Scholar

23. 

A. Krizhevsky, I. Sutskever and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM, 60 (6), 84 –90 https://doi.org/10.1145/3065386 (2017). Google Scholar

24. 

N. Srivastava et al., “Dropout: a simple way to prevent neural networks from overfitting,” J. Mach. Learn. Res., 15 (56), 1929 –1958 (2014). Google Scholar

25. 

C. Bény, “Deep learning and the renormalization group,” arXiv 1301.3124, (2013). https://doi.org/10.48550/arXiv.1301.3124 Google Scholar

26. 

S. C. Kak, “Quantum neural computing,” Adv. Imaging Electron Phys., 94 259 –313 https://doi.org/10.1016/S1076-5670(08)70147-2 (1995). Google Scholar

27. 

J. Ho, A. Jain and P. Abbeel, “Denoising diffusion probabilistic models,” Adv. Neural Inf. Process. Sys., 33 6840 –6851 (2020). Google Scholar

28. 

R. Brown, “XXVII. A brief account of microscopical observations made in the months of June, July and August 1827, on the particles contained in the pollen of plants; and on the general existence of active molecules in organic and inorganic bodies,” Philos. Mag., 4 (21), 161 –173 https://doi.org/10.1080/14786442808674769 (2009). Google Scholar

29. 

A. Einstein, Investigations on the Theory of the Brownian Movement, Courier Corporation( (1956). Google Scholar

30. 

D. Mengu et al., “At the intersection of optics and deep learning: statistical inference, computing, and inverse design,” Adv. Opt. Photonics, 14 (2), 209 –290 https://doi.org/10.1364/AOP.450345 (2022). Google Scholar

31. 

F. Vernuccio et al., “Artificial intelligence in classical and quantum photonics,” Laser Photonics Rev., 16 (5), 2100399 https://doi.org/10.1002/lpor.202100399 (2022). Google Scholar

32. 

P. Freire et al., “Artificial neural networks for photonic applications—from algorithms to implementation: tutorial,” Adv. Opt. Photonics, 15 (3), 739 –834 https://doi.org/10.1364/AOP.484119 (2023). Google Scholar

33. 

K. de Haan et al., “Deep-learning-based image reconstruction and enhancement in optical microscopy,” Proc. IEEE, 108 (1), 30 –50 https://doi.org/10.1109/JPROC.2019.2949575 (2020). Google Scholar

34. 

Y. Rivenson et al., “Deep learning microscopy,” Optica, 4 (11), 1437 –1443 https://doi.org/10.1364/OPTICA.4.001437 (2017). Google Scholar

35. 

Y. Wu et al., “Three-dimensional virtual refocusing of fluorescence microscopy images using deep learning,” Nat. Methods, 16 (12), 1323 –1331 https://doi.org/10.1038/s41592-019-0622-5 (2019). Google Scholar

36. 

H. Wang et al., “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods, 16 (1), 103 –110 https://doi.org/10.1038/s41592-018-0239-0 (2019). Google Scholar

37. 

H. Pinkard et al., “Deep learning for single-shot autofocus microscopy,” Optica, 6 (6), 794 –797 https://doi.org/10.1364/OPTICA.6.000794 (2019). Google Scholar

38. 

E. Nehme et al., “DeepSTORM3D: dense 3D localization microscopy and PSF design by deep learning,” Nat. Methods, 17 (7), 734 –740 https://doi.org/10.1038/s41592-020-0853-5 (2020). Google Scholar

39. 

L. Möckl, A. R. Roy and W. E. Moerner, “Deep learning in single-molecule microscopy: fundamentals, caveats, and recent developments [Invited],” Biomed. Opt. Express, 11 (3), 1633 –1661 https://doi.org/10.1364/BOE.386361 (2020). Google Scholar

40. 

B. Midtvedt et al., “Quantitative digital microscopy with deep learning,” Appl. Phys. Rev., 8 (1), 011310 https://doi.org/10.1063/5.0034891 (2021). Google Scholar

41. 

Y. Rivenson et al., “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light Sci. Appl., 7 (2), 17141 –17141 https://doi.org/10.1038/lsa.2017.141 (2018). Google Scholar

42. 

Y. Wu et al., “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica, 5 (6), 704 –710 https://doi.org/10.1364/OPTICA.5.000704 (2018). Google Scholar

43. 

Y. Rivenson, Y. Wu and A. Ozcan, “Deep learning in holography and coherent imaging,” Light Sci. Appl., 8 (1), 85 https://doi.org/10.1038/s41377-019-0196-0 (2019). Google Scholar

44. 

Z. Ren, Z. Xu and E. Y. M. Lam, ““End-to-end deep learning framework for digital holographic reconstruction,” Adv. Photonics, 1 (1), 016004 https://doi.org/10.1117/1.AP.1.1.016004 (2019). Google Scholar

45. 

T. Liu et al., “Deep learning-based holographic polarization microscopy,” ACS Photonics, 7 (11), 3023 –3034 https://doi.org/10.1021/acsphotonics.0c01051 (2020). Google Scholar

46. 

J. Di et al., “Quantitative phase imaging using deep learning-based holographic microscope,” Front. Phys., 9 https://doi.org/10.3389/fphy.2021.651313 (2021). Google Scholar

47. 

Y. Wu et al., “Bright-field holography: cross-modality deep learning enables snapshot 3D imaging with bright-field contrast using a single hologram,” Light Sci. Appl., 8 (1), 25 https://doi.org/10.1038/s41377-019-0139-9 (2019). Google Scholar

48. 

Q. Yang et al., “MRI cross-modality image-to-image translation,” Sci. Rep., 10 (1), 3753 https://doi.org/10.1038/s41598-020-60520-6 (2020). Google Scholar

49. 

B. Bai et al., “Deep learning-enabled virtual histological staining of biological samples,” Light Sci. Appl., 12 (1), 57 https://doi.org/10.1038/s41377-023-01104-7 (2023). Google Scholar

50. 

W. Xiong et al., “A deep cross-modality hashing network for SAR and optical remote sensing images retrieval,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 13 5284 –5296 https://doi.org/10.1109/JSTARS.2020.3021390 (2020). Google Scholar

51. 

Y. Rivenson et al., “Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning,” Nat. Biomed. Eng., 3 (6), 466 –477 https://doi.org/10.1038/s41551-019-0362-y (2019). Google Scholar

52. 

C. L. Cooke et al., “Physics-enhanced machine learning for virtual fluorescence microscopy,” in Proc. IEEE/CVF Int. Conf. Comput. Vision, 3803 –3813 (2021). Google Scholar

53. 

L. Latonen et al., “Virtual staining for histology by deep learning,” Trends Biotechnol., 42 (9), 1177 –1191 https://doi.org/10.1016/j.tibtech.2024.02.009 (2024). Google Scholar

54. 

Z. Ballard et al., “Machine learning and computation-enabled intelligent sensor design,” Nat. Mach. Intell., 3 (7), 556 –565 https://doi.org/10.1038/s42256-021-00360-9 (2021). Google Scholar

55. 

Y. Liu and L. Wu, “Geological disaster recognition on optical remote sensing images using deep learning,” Proc. Comput. Sci., 91 566 –575 https://doi.org/10.1016/j.procs.2016.07.144 (2016). Google Scholar

56. 

Y. Zhang et al., “Welding defects detection based on deep learning with multiple optical sensors during disk laser welding of thick plates,” J. Manuf. Syst., 51 87 –94 https://doi.org/10.1016/j.jmsy.2019.02.004 (2019). Google Scholar

57. 

H.-A. Joung et al., “Point-of-care serodiagnostic test for early-stage Lyme disease using a multiplexed paper-based immunoassay and machine learning,” ACS Nano, 14 (1), 229 –240 https://doi.org/10.1021/acsnano.9b08151 (2020). Google Scholar

58. 

Z. Göröcs et al., “Label-free detection of Giardia lamblia cysts using a deep learning-enabled portable imaging flow cytometer,” Lab. Chip, 20 (23), 4404 –4412 https://doi.org/10.1039/D0LC00708K (2020). Google Scholar

59. 

L. V. Nguyen et al., “Sensing in the presence of strong noise by deep learning of dynamic multimode fiber interference,” Photonics Res., 9 (4), B109 –B118 https://doi.org/10.1364/PRJ.415902 (2021). Google Scholar

60. 

Z. Wang et al., “Deep learning based label-free small extracellular vesicles analyzer with light-sheet illumination differentiates normal and cancer liver cells,” Sens. Actuators B Chem., 347 130612 https://doi.org/10.1016/j.snb.2021.130612 (2021). Google Scholar

61. 

P. D. Hernández, J. A. Ramírez and M. A. Soto, “Deep-learning-based earthquake detection for fiber-optic distributed acoustic sensing,” J. Light. Technol., 40 (8), 2639 –2650 (2022). Google Scholar

62. 

Y. Luo et al., “Virtual impactor-based label-free pollen detection using holography and deep learning,” ACS Sens., 7 (12), 3885 –3894 https://doi.org/10.1021/acssensors.2c01890 (2022). Google Scholar

63. 

G.-R. Han et al., “Deep learning-enhanced paper-based vertical flow assay for high-sensitivity troponin detection using nanoparticle amplification,” ACS Nano, 18 27933 –27948 https://doi.org/10.1021/acsnano.4c05153 (2024). Google Scholar

64. 

Z. S. Ballard et al., “Deep learning-enabled point-of-care sensing using multiplexed paper-based sensors,” npj Digit. Med., 3 (1), 1 –8 https://doi.org/10.1038/s41746-020-0274-y (2020). Google Scholar

65. 

A. Goncharov et al., “Deep learning-enabled multiplexed point-of-care sensor using a paper-based fluorescence vertical flow assay,” Small, 19 (51), 2300617 https://doi.org/10.1002/smll.202300617 (2023). Google Scholar

66. 

M. Eryilmaz et al., “A paper-based multiplexed serological test to monitor immunity against SARS-COV-2 Using machine learning,” ACS Nano, 18 (26), 16819 –16831 https://doi.org/10.1021/acsnano.4c02434 (2024). Google Scholar

67. 

R. Ghosh et al., “Rapid single-tier serodiagnosis of Lyme disease,” Nat. Commun., 15 (1), 7124 https://doi.org/10.1038/s41467-024-51067-5 (2024). Google Scholar

68. 

A. Goncharov et al., “Insertable glucose sensor using a compact and cost-effective phosphorescence lifetime imager and machine learning,” ACS Nano, 18 (34), 23365 –23379 https://doi.org/10.1021/acsnano.4c06527 (2024). Google Scholar

69. 

U. M. N. Jayawickrema et al., “Fibre-optic sensor and deep learning-based structural health monitoring systems for civil structures: a review,” Measurement, 199 111543 https://doi.org/10.1016/j.measurement.2022.111543 (2022). Google Scholar

70. 

C. Zuo et al., “Deep learning in optical metrology: a review,” Light Sci. Appl., 11 (1), 39 https://doi.org/10.1038/s41377-022-00714-x (2022). Google Scholar

71. 

N. H. Al-Ashwal et al., “Deep learning for optical sensor applications: a review,” Sensors, 23 (14), 6486 https://doi.org/10.3390/s23146486 (2023). Google Scholar

72. 

S. Yuan et al., “Geometric deep optical sensing,” Science, 379 (6637), eade1220 https://doi.org/10.1126/science.ade1220 (2023). Google Scholar

73. 

Y. Zhuo and J. Brgoch, “Opportunities for next-generation luminescent materials through artificial intelligence,” J. Phys. Chem. Lett., 12 (2), 764 –772 https://doi.org/10.1021/acs.jpclett.0c03203 (2021). Google Scholar

74. 

N. T. Hung et al., “Universal ensemble-embedding graph neural network for direct prediction of optical spectra from crystal structures,” Adv. Mater., 2409175 https://doi.org/10.1002/adma.202409175 (2024). Google Scholar

75. 

J. M. Gregoire, L. Zhou and J. A. Haber, “Combinatorial synthesis for AI-driven materials discovery,” Nat. Synth., 2 (6), 493 –504 https://doi.org/10.1038/s44160-023-00251-4 (2023). Google Scholar

76. 

L. Li et al., “Intelligent metasurfaces: control, communication and computing,” eLight, 2 (1), 7 https://doi.org/10.1186/s43593-022-00013-3 (2022). Google Scholar

77. 

P. R. Wiecha et al., “Deep learning in nano-photonics: inverse design and beyond,” Photonics Res., 9 (5), B182 –B200 https://doi.org/10.1364/PRJ.415960 (2021). Google Scholar

78. 

D. Liu et al., “Training deep neural networks for the inverse design of nanophotonic structures,” ACS Photonics, 5 (4), 1365 –1369 https://doi.org/10.1021/acsphotonics.7b01377 (2018). Google Scholar

79. 

S. So et al., “Deep learning enabled inverse design in nanophotonics,” Nanophotonics, 9 (5), 1041 –1057 https://doi.org/10.1515/nanoph-2019-0474 (2020). Google Scholar

80. 

S. Mao et al., “Inverse design for silicon photonics: from iterative optimization algorithms to deep neural networks,” Appl. Sci., 11 (9), 3822 https://doi.org/10.3390/app11093822 (2021). Google Scholar

81. 

A. Ueno, J. Hu and S. An, “AI for optical metasurface,” npj Nanophotonics, 1 (1), 36 https://doi.org/10.1038/s44310-024-00037-2 (2024). Google Scholar

82. 

J. Li et al., “Unidirectional imaging using deep learning–designed materials,” Sci. Adv., 9 (17), eadg1505 https://doi.org/10.1126/sciadv.adg1505 (2023). Google Scholar

83. 

C. Qian et al., “Deep-learning-enabled self-adaptive microwave cloak without human intervention,” Nat. Photonics, 14 (6), 383 –390 https://doi.org/10.1038/s41566-020-0604-2 (2020). Google Scholar

84. 

W. Ma et al., “Deep learning empowering design for selective solar absorber,” Nanophotonics, 12 (18), 3589 –3601 https://doi.org/10.1515/nanoph-2023-0291 (2023). Google Scholar

85. 

J. Park et al., “Free-form optimization of nanophotonic devices: from classical methods to deep learning,” Nanophotonics, 11 (9), 1809 –1845 https://doi.org/10.1515/nanoph-2021-0713 (2022). Google Scholar

86. 

T. Gahlmann and P. Tassin, “Deep neural networks for the prediction of the optical properties and the free-form inverse design of metamaterials,” Phys. Rev. B, 106 (8), 085408 https://doi.org/10.1103/PhysRevB.106.085408 (2022). Google Scholar

87. 

J. Hu et al., “Diffractive optical computing in free space,” Nat. Commun., 15 (1), 1525 https://doi.org/10.1038/s41467-024-45982-w (2024). Google Scholar

88. 

J. Shalf, “The future of computing beyond Moore’s Law,” Philos. Trans. R. Soc. A Math. Phys. Eng. Sci., 378 (2166), 20190061 https://doi.org/10.1098/rsta.2019.0061 (2020). Google Scholar

89. 

B. J. Shastri et al., “Photonics for artificial intelligence and neuromorphic computing,” Nat. Photonics, 15 (2), 102 –114 https://doi.org/10.1038/s41566-020-00754-y (2021). Google Scholar

90. 

L. Huang et al., “Self-supervised learning of hologram reconstruction using physics consistency,” Nat. Mach. Intell., 5 (8), 895 –907 https://doi.org/10.1038/s42256-023-00704-7 (2023). Google Scholar

91. 

K. M. Stiefel and J. S. Coggan, “The energy challenges of artificial superintelligence,” Front. Artif. Intell., 6 1240653 https://doi.org/10.3389/frai.2023.1240653 (2023). Google Scholar

93. 

A. Gholami et al., “AI and memory wall,” IEEE Micro, 44 (03), 33 –39 https://doi.org/10.1109/MM.2024.3373763 (2024). Google Scholar

94. 

X. Lin et al., “All-optical machine learning using diffractive deep neural networks,” Science, 361 (6406), 1004 –1008 https://doi.org/10.1126/science.aat8084 (2018). Google Scholar

95. 

R. Hamerly, “The future of deep learning is photonic: reducing the energy needs of neural networks might require computing with light,” IEEE Spectr., 58 (7), 30 –47 https://doi.org/10.1109/MSPEC.2021.9475393 (2021). Google Scholar

96. 

P. L. McMahon, “The physics of optical computing,” Nat. Rev. Phys., 5 (12), 717 –734 https://doi.org/10.1038/s42254-023-00645-5 (2023). Google Scholar

98. 

Y. Shen et al., “Deep learning with coherent nanophotonic circuits,” Nat. Photonics, 11 (7), 441 –446 https://doi.org/10.1038/nphoton.2017.93 (2017). Google Scholar

99. 

A. N. Tait et al., “Neuromorphic photonic networks using silicon photonic weight banks,” Sci. Rep., 7 (1), 7430 https://doi.org/10.1038/s41598-017-07754-z (2017). Google Scholar

100. 

R. Hamerly et al., “Large-scale optical neural networks based on photoelectric multiplication,” Phys. Rev. X, 9 (2), 021032 https://doi.org/10.1103/PhysRevX.9.021032 (2019). Google Scholar

101. 

H. Zhang et al., “An optical neural chip for implementing complex-valued neural network,” Nat. Commun., 12 (1), 457 https://doi.org/10.1038/s41467-020-20719-7 (2021). Google Scholar

102. 

X. Xu et al., “11 TOPS photonic convolutional accelerator for optical neural networks,” Nature, 589 (7840), 44 –51 https://doi.org/10.1038/s41586-020-03063-0 (2021). Google Scholar

103. 

J. Feldmann et al., “Parallel convolutional processing using an integrated photonic tensor core,” Nature, 589 (7840), 52 –58 https://doi.org/10.1038/s41586-020-03070-1 (2021). Google Scholar

104. 

B. Dong et al., “Higher-dimensional processing using a photonic tensor core with continuous-time data,” Nat. Photonics, 17 (12), 1080 –1088 https://doi.org/10.1038/s41566-023-01313-x (2023). Google Scholar

105. 

O. Kulce et al., “All-optical synthesis of an arbitrary linear transformation using diffractive surfaces,” Light Sci. Appl., 10 (1), 196 https://doi.org/10.1038/s41377-021-00623-5 (2021). Google Scholar

106. 

M. S. Sakib Rahman and A. Ozcan, “Integration of programmable diffraction with digital neural networks,” ACS Photonics, 11 (8), 2906 –2922 https://doi.org/10.1021/acsphotonics.4c01099 (2024). Google Scholar

107. 

J. Li et al., “Class-specific differential detection in diffractive optical neural networks improves inference accuracy,” Adv. Photonics, 1 (4), 046001 https://doi.org/10.1117/1.AP.1.4.046001 (2019). Google Scholar

108. 

D. Mengu et al., “Analysis of diffractive optical neural networks and their integration with electronic neural networks,” IEEE J. Sel. Top. Quantum Electron., 26 (1), 3700114 https://doi.org/10.1109/JSTQE.2019.2921376 (2020). Google Scholar

109. 

J. Li et al., “Spectrally encoded single-pixel machine vision using diffractive networks,” Sci. Adv., 7 (13), eabd7690 https://doi.org/10.1126/sciadv.abd7690 (2021). Google Scholar

110. 

B. Bai et al., “Information-hiding cameras: optical concealment of object information into ordinary images,” Sci. Adv., 10 (24), eadn9420 https://doi.org/10.1126/sciadv.adn9420 (2024). Google Scholar

111. 

Ç. Işıl et al., “Super-resolution image display using diffractive decoders,” Sci. Adv., 8 (48), eadd3433 https://doi.org/10.1126/sciadv.add3433 (2022). Google Scholar

112. 

M. S. S. Rahman et al., “Learning diffractive optical communication around arbitrary opaque occlusions,” Nat. Commun., 14 (1), 6830 https://doi.org/10.1038/s41467-023-42556-0 (2023). Google Scholar

113. 

Y. Li et al., “Optical information transfer through random unknown diffusers using electronic encoding and diffractive decoding,” Adv. Photonics, 5 (4), 046009 https://doi.org/10.1117/1.AP.5.4.046009 (2023). Google Scholar

Biographies of the authors are not available.

CC BY: © The Authors. Published by SPIE and CLP under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Md Sadman Sakib Rahman and Aydogan Ozcan "Physics and artificial intelligence: illuminating the future of optics and photonics," Advanced Photonics 6(5), 050500 (31 October 2024). https://doi.org/10.1117/1.AP.6.5.050500
Received: 15 October 2024; Accepted: 22 October 2024; Published: 31 October 2024
KEYWORDS: Artificial intelligence, Physics, Design, Evolutionary algorithms, Systems modeling, Artificial neural networks, Quantum photonics