November 1, 2023

Postdoc opportunities at NIST in training of hardware neural networks

Position Type: Research
Priority: No
Degree Requirement: PhD, Post Doctorate
 
At NIST we've been continuing our machine-learning work with a focus on building training algorithms for physical hardware networks. We're currently on the lookout for new postdocs to join us, so we've written the job posting below.
 
Adam McCaughan
NIST
 

Postdoctoral Research Opportunities in Training of Hardware Neural Networks

 
Our team at NIST is seeking postdoctoral candidates from diverse backgrounds in physics, engineering, and applied mathematics to work on projects at the intersection of physics and AI. These are well-compensated two-year positions, and our group regularly publishes in high-impact journals. Our team operates within a large and collaborative group, with state-of-the-art equipment and brand-new labs. NIST is located in Boulder, CO, in the foothills of the Rocky Mountains, with over 300 days of sunshine each year. US citizens are especially encouraged to apply, as we can assist them with applications for the prestigious NRC fellowship.
 
We have three projects, detailed below, suited to a wide range of backgrounds. If you are interested, please fill out your contact information here or send us an email at sonia.buckley@nist.gov or adam.mccaughan@nist.gov.
 
Background: Machine learning and artificial intelligence (AI) have advanced so rapidly that they can now outperform humans at many (or even most) tasks. However, the large models that perform the most complex tasks use an enormous amount of energy, in particular during the learning or training phase.  One solution to this problem is the development of custom "neuromorphic" hardware for AI, which can be exceptionally efficient but is often difficult or impossible to train. At NIST, we have developed a physics-based framework that can train many different types of hardware neural networks. After demonstrating this framework in proof-of-concept hardware, we are now in a position to develop larger-scale implementations that can solve impactful problems.  In collaboration with researchers from institutions around the globe, we aim to develop and demonstrate training on diverse hardware platforms in real-world situations.
 
Project 1 - Algorithms for neuromorphic hardware and physical neural networks
One of the overarching goals of our team is to develop online training algorithms for hardware that are robust to noise and operate in real time, like the brain. These algorithms are implemented in the MGD framework, a universal approach that can be applied to any hardware or software neural network, whether analog, spiking, recurrent, or otherwise. The candidate will extend this framework and quantitatively evaluate its performance on state-of-the-art benchmarks for specific emerging hardware platforms and edge-computing applications. In addition to direct experience with neuromorphic hardware (e.g. spiking or memristive), relevant background could include physics or hardware modeling, experience with neural-network libraries such as PyTorch, or FPGA programming.
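To give a flavor of this style of training, here is a minimal sketch of perturbation-based gradient estimation, the general idea behind frameworks of this kind: perturb all parameters simultaneously, measure only the resulting change in cost, and correlate that change with the perturbation to estimate the gradient. The toy model, hyperparameters, and NumPy implementation below are illustrative choices of our own, not NIST's actual MGD implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "hardware" network: a black box we can only evaluate, not differentiate.
# Here it is a linear model, but the training loop below never computes the
# model's gradients directly -- only cost evaluations, as a physical system
# would allow.
w_true = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(64, 3))
y = X @ w_true

def cost(w):
    return np.mean((X @ w - y) ** 2)

w = np.zeros(3)
amp, lr = 1e-3, 0.05  # perturbation amplitude and learning rate (illustrative)

for step in range(2000):
    delta = amp * rng.choice([-1.0, 1.0], size=w.shape)  # simultaneous +/- perturbations
    dC = cost(w + delta) - cost(w)                       # measured change in cost
    g_est = (dC / amp**2) * delta                        # correlate to estimate gradient
    w -= lr * g_est

print("trained weights:", np.round(w, 3))  # converges toward w_true
```

Because the update uses only cost measurements, the same loop can in principle drive any tunable physical system, which is what makes this family of methods attractive for hardware that lacks an analytic model.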
 
Project 2 - Free space optical neural networks
Free-space optics is one of the few emerging hardware platforms that can already be used to implement large neural networks for relevant machine-learning problems, thanks to its straightforward 3D connectivity and parallel-processing capabilities. Optical neural networks will have applications in communications technologies, networks, and sensors. We are seeking a candidate to build optical neural networks and train them to solve tasks such as image classification and network optimization at extremely high bandwidths. Relevant prior experience could include quantum optics, optical sensing, spectroscopy, or optical communications.
 
Project 3 - Autonomously-learning electronic neural networks
Training and learning are areas in which the brain still far surpasses machine-learning approaches. In this project the candidate will explore autonomous learning strategies by implementing all-electrical neural networks in which independent, asynchronous circuits operate collectively to solve real-world machine-learning tasks via their emergent behavior. We have already partially implemented this approach with a computer acting as part of the circuit, and we are looking for a candidate to develop it into a fully autonomous, self-learning neural network. This project will involve design and simulation of these circuits, analysis of their properties, and testing on tasks such as classification, allostery, and control.