Weighing in on photonic-based machine learning for automotive mobility

To the Editor — Optical processing for artificial neural networks is being re-evaluated for its potential to address large power consumption, data shuttling and thermal dissipation challenges faced by complementary metal–oxide–semiconductor (CMOS) technology. Though recent advances demonstrate an exceptionally high number of photonic calculations per unit energy, photonic processing must find a niche in the modern market. In applications requiring limited numeric precision, high inference speed and parallelism, this alternative computing framework could outperform traditional von Neumann architectures. At the inflection point of a digital transformation where connected products provide insight into the user experience, optical processing may be the key to pit edge computing against higher-bandwidth mobile networks.

Around 1993, when Apple sold its ten-millionth Macintosh running on a 400 kB floppy drive, real-time facial recognition emerged at the California Institute of Technology (Caltech)1. While we are all familiar with today’s facial recognition software that runs on microprocessors in some smartphones, this early demonstration by Caltech was built on a form of photonic machine learning.

At the heart of a facial recognition system, an artificial neural network identifies features to calculate the probability of a matching image. While the theory behind neural networks was defined in the 1940s, computers capable of running such algorithms had not yet been developed, and the utility of the approach was delayed.

To achieve facial recognition today, we enlist large servers to scour databases. In comparison, the aforementioned 1990s technology utilized free-space optical processing units consisting of photorefractive crystals to store the trained weights, plus lenses, mirrors and gratings for the associated calculations. Just a year before Caltech’s publication, the Massachusetts Institute of Technology also demonstrated facial recognition using code running on an electronic processor. Photonic processing for machine learning had matched electronic hardware in little time.

So why unearth technology from 35 years ago? Tremendous computational power is needed for deep-learning tasks on today’s von Neumann architectures. Consider the training process of a convolutional neural network on a 4-GPU system; the average power draw over 1 week is about 180 W. This amounts to ~32 kWh of energy expenditure, with demands higher still for distributed processing. Comparatively, the average American and European households in 2018 consumed ~211 kWh and ~21 kWh per week, respectively, according to the US Energy Information Administration and Eurostat. These household values are on par with the weekly energy consumption of a trained machine-learning engineer. Extrapolating this statistic, IBM vice president of research Mukesh Khare speculated at a neuromorphic conference in 2019 that the power consumed by neural networks could exceed the total power generated in the world by 20402.
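
As a back-of-the-envelope check of how an average power draw translates into the weekly energy figures quoted above, the following minimal sketch (our assumption: continuous operation over a 168-hour week) runs the conversion:

    # Convert a sustained average power draw (W) into weekly energy (kWh),
    # assuming continuous operation over a 168-hour week.
    HOURS_PER_WEEK = 7 * 24  # 168 h

    def weekly_energy_kwh(avg_power_watts):
        return avg_power_watts * HOURS_PER_WEEK / 1000.0

    # The 4-GPU training example above: ~180 W sustained for a week gives
    # roughly 30 kWh, close to the ~32 kWh quoted, and comparable to the
    # ~21 kWh consumed weekly by an average European household.
    print(f"{weekly_energy_kwh(180):.0f} kWh per week")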

Today, connected products leverage insight into the user experience via aggregated data, and there are two ways to work with this volume of data. The first is to send it to a cloud computing service, while the second is to process it at the edge. Future mobility solutions are likely to rely on light detection and ranging (LiDAR) technology, which is known to rapidly generate massive amounts of data (for example, ~2 TB per minute). Even with advancements in mobile networks, handling this continuous stream of data is unwieldy. Furthermore, while edge computing is limited by localized power consumption and hardware delocalization, it has advantages in privacy, connectivity and latency.
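
For a sense of scale, the quoted LiDAR figure can be converted into the sustained link bandwidth a cloud-only approach would require; a minimal sketch follows (our assumption: decimal units, 1 TB = 10^12 bytes):

    # Sustained bandwidth implied by a sensor stream of ~2 TB per minute,
    # assuming decimal units (1 TB = 1e12 bytes).
    TB_PER_MINUTE = 2
    bytes_per_second = TB_PER_MINUTE * 1e12 / 60
    gbit_per_second = bytes_per_second * 8 / 1e9

    # ~267 Gbit/s sustained per vehicle -- far beyond what current mobile
    # networks can realistically carry for a single uplink.
    print(f"{gbit_per_second:.0f} Gbit/s")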

Whether in the cloud or on the road, graphics processing units (GPUs) have been considered the standard hardware for calculating neural networks despite being ‘general use’ technology. In contrast, newer specialized tensor processing units (TPUs) perform matrix multiplication as their base unit of calculation. By trading away flexibility, the TPU reduces the von Neumann processing-speed bottleneck of a GPU. However, for future vehicles, maximizing computing speed while minimizing power consumption is critical. Original equipment manufacturers Toyota and General Motors, and firms such as NVIDIA, NXP Semiconductors and Arm, recently joined an Autonomous Vehicle Computing Consortium to set requirements for future vehicles. Despite these developments, the next fleet of vehicles continues to prioritize power over processor, exposing a technology gap where fast, power-agile edge computing resides.
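
To illustrate why matrix multiplication is the natural base unit for such accelerators, the sketch below counts the multiply-and-accumulate (MAC) operations in a single fully connected layer; the layer sizes are illustrative only, not drawn from the text:

    import numpy as np

    # One fully connected layer: every output is a weighted sum of the inputs,
    # i.e. a matrix-vector multiply built entirely from MAC operations.
    n_in, n_out = 256, 128                 # illustrative layer sizes
    x = np.random.rand(n_in)               # input activations
    W = np.random.rand(n_out, n_in)        # trained weight matrix

    y = W @ x                              # the matrix-vector product that TPU-style
                                           # (and photonic) accelerators are built around
    macs = n_out * n_in                    # 128 * 256 = 32,768 MACs for this layer
    print(f"{macs} multiply-and-accumulate operations in one dense layer")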

In light of this gap, can a photonic processor address the needs of autonomous mobility? Optical neural networks of the 1990s never reached market maturity owing to bulky optical systems, poor material nonlinearity, and insensitive, narrow-bandwidth optoelectronic processing. Today, developments in silicon photonics and electro-optic conversion have brought about a resurgence in photonics for analogue computing.

Recent advances have demonstrated a variety of methods to accelerate photonic processing for tailored computing applications. These include optical spiking neuromorphic systems3, integrated circuits implemented as systems of lossless, linear optical operators4,5,6, wavelength-division multiplexing for broadcast-and-weight architectures7, and free-space diffractive metasurfaces8. All of these technologies look to surpass the von Neumann bottleneck of conventional digital systems, which is predicted to limit computational efficiency to around 100 pJ per multiply-and-accumulate operation9. A recent review examines some of the key features that enable photonic circuits for these machine-learning applications10. As an outcome of these results, a number of start-ups have been formed, including LightMatter, a Google Ventures-backed company dedicated to ‘accelerating artificial intelligence with light’, and Lightelligence, supported by Baidu Ventures.
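
To put the 100 pJ per multiply-and-accumulate figure in context, a rough estimate of inference power follows; the workload numbers (about 2 billion MACs per image, on the order of a ResNet-50-class network, at 30 frames per second) are our assumptions, not figures from the text:

    # Back-of-envelope inference energy at the ~100 pJ-per-MAC efficiency limit
    # cited above. The workload (2e9 MACs per frame, 30 frames per second) is
    # an assumed, ResNet-50-class example.
    ENERGY_PER_MAC_J = 100e-12   # 100 pJ
    MACS_PER_FRAME = 2e9
    FRAMES_PER_SECOND = 30

    energy_per_frame = ENERGY_PER_MAC_J * MACS_PER_FRAME   # ~0.2 J per frame
    power_watts = energy_per_frame * FRAMES_PER_SECOND     # ~6 W per camera stream
    print(f"{energy_per_frame:.2f} J per frame, {power_watts:.0f} W sustained")

Several such sensor streams per vehicle quickly push the budget toward tens of watts for inference alone, which is why sub-picojoule photonic MACs are attractive.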

Despite exciting developments in the field, many challenges remain. Future implementations will need to address topics such as increasing the number of classifiers beyond N outputs, reducing the device footprint, inducing nonlinearity, stabilizing performance in variable conditions and realizing efficient analogue-to-digital conversion. Furthermore, the overall system power remains a challenge, even though the energy per photonic calculation is significantly lower than that of electronic processes. Yet even with these hurdles, researchers continue to find novel ways to reinvent how we process information with photonics; for instance, scientists recently demonstrated an implementation of machine vision at the locus of detection using an array of finely tuned photodiodes11. Additionally, with recent government support of integrated photonics in Europe and the United States through programmes such as the Interuniversity Microelectronics Centre and AIM Photonics, respectively, photonic device prototypes are more readily available to researchers through multi-project wafer vehicles.

From the perspective of a mobility company, the future identity of the vehicle has yet to be decided (Fig. 1). At one end is the vehicle as a hub for wireless connectivity, fully linked and communicating with other vehicles through mobile carriers. At the other end, the vehicle reacts to sensory inputs and communicates with neighbouring cars using its own agile, local and energy-efficient processors. So where does optical processing fit in this outlook?

Fig. 1: The future of in-vehicle processing balances between cloud and edge computing. Given the evolution of autonomy and processing achieved over the past decade, original equipment manufacturers, Toyota and General Motors, and firms such as NVIDIA, NXP Semiconductors and Arm recognize the need to set industry-wide requirements for the future vehicle, and have therefore formed an Autonomous Vehicle Computing Consortium to facilitate the institution of such standards.

Rapid processing of vast amounts of data at the edge remains an active pursuit for autonomous vehicles, but it also requires robust hardware impervious to temperature fluctuations; as such, filter-based machine learning could provide a functional complement in the near term. Meanwhile, data centres may be a target niche in which to capitalize on the increased data rate of photonic processors. These tamper-proof sites provide an isolated environment to exploit free-space optics, processing data in a third dimension with relatively low thermal loss compared with the high heat remediation required of planar electronics. However, until photonics can actively store and address memory via optical switching, the implementation of photonic processing will remain a hybrid electro-optical task, with photons responding at a latency governed by standard electronics. In the near term, it will be exciting to watch optical neural networks learn to train at the edge, reducing the more power-intensive portion of the computational process and thereby superseding the need for GPU-based training. Looking forward, the growing domain of neuromorphic processing may soon merge with neural-network technology, synchronizing stochastic events to further mould the field of modern computing.

References

1. Li, H.-Y. S., Qiao, Y. & Psaltis, D. Appl. Opt. 32, 5026–5035 (1993).
2. Khare, M. NICE 2019 – Day 1d Mukesh Khare. YouTube https://www.youtube.com/watch?v=78JKy5drKXo (2019).
3. Feldmann, J., Youngblood, N., Wright, C. D., Bhaskaran, H. & Pernice, W. H. P. Nature 569, 208–214 (2019).
4. Shen, Y. et al. Nat. Photon. 11, 441–446 (2017).
5. Pai, S., Bartlett, B., Solgaard, O. & Miller, D. A. B. Phys. Rev. Appl. 11, 064044 (2019).
6. Hughes, T. W., Minkov, M., Shi, Y. & Fan, S. Optica 5, 864–871 (2018).
7. Tait, A. N., Nahmias, M. A., Shastri, B. J. & Prucnal, P. R. J. Light. Technol. 32, 4029–4041 (2014).
8. Lin, X. et al. Science 361, 1004–1008 (2018).
9. Hasler, J. & Marr, B. Front. Neurosci. 7, 118 (2013).
10. Bogaerts, W. et al. Nature 586, 207–216 (2020).
11. Mennel, L. et al. Nature 579, 62–66 (2020).


Acknowledgements

The views expressed in this text belong solely to the authors and do not necessarily reflect the views of the authors’ employer or other groups.

Author information

Corresponding author

Correspondence to Sean Phillip Rodrigues.

Ethics declarations

Competing interests

The authors work at the Toyota Research Institute of North America and are employees of Toyota Motor Engineering & Manufacturing North America.

About this article

Cite this article

Rodrigues, S.P., Yu, Z., Schmalenberg, P. et al. Weighing in on photonic-based machine learning for automotive mobility.
Nat. Photonics 15, 66–67 (2021). https://doi.org/10.1038/s41566-020-00736-0
