Ideally, next-generation AI technologies should understand all our requests and commands, extracting them from a huge background of irrelevant information, in order to rapidly provide relevant answers and solutions to our everyday needs. Making these “smart” AI technologies pervasive—in our smartphones, our homes, and our cars—will require energy-efficient AI hardware, which we at IBM Research plan to build around novel and highly capable analog memory devices.
In a recent paper published in the Journal of Applied Physics, our IBM Research AI team established a detailed set of guidelines that emerging nanoscale analog memory devices will need to satisfy in order to enable such energy-efficient AI hardware accelerators.
We had previously shown, in a Nature paper published in June 2018, that training a neural network using highly parallel computation within dense arrays of memory devices such as phase-change memory is faster and consumes less power than using a graphics processing unit (GPU).
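To make that parallelism concrete, here is a minimal NumPy sketch (an illustration, not code from either paper) of how an array of analog memory devices can perform a matrix-vector multiply in a single step: each stored conductance multiplies its input voltage, and the resulting currents add up along each column. The array sizes, conductance values, and noise level below are assumptions chosen purely for illustration.

```python
import numpy as np

# Illustrative sketch: a crossbar of analog memory devices stores a weight
# matrix as conductances G. Applying input voltages v to the rows yields
# column currents proportional to G.T @ v in one parallel analog step
# (Ohm's law per device, current summation per column).

rng = np.random.default_rng(0)

def crossbar_mvm(G, v, read_noise=0.01):
    """Simulate one analog matrix-vector multiply on a conductance array.

    G          : (rows, cols) device conductances encoding the weights
    v          : (rows,) input voltages encoding the activations
    read_noise : assumed fractional noise on the summed column currents
    """
    ideal = G.T @ v                               # all multiply-accumulates happen at once
    noise = read_noise * np.abs(ideal) * rng.standard_normal(ideal.shape)
    return ideal + noise                          # currents measured at the column lines

# Example: a 784-input, 250-output layer mapped onto a single array
G = rng.uniform(0.0, 1.0, size=(784, 250))        # illustrative conductance values
v = rng.uniform(0.0, 0.2, size=784)               # illustrative input voltages
currents = crossbar_mvm(G, v)
print(currents.shape)                             # (250,): one output per column
```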
The advantage of our approach comes from implementing each neural network weight with multiple devices, each serving in a different role. Some devices are mainly tasked with memorizing long-term information. Other devices are updated very rapidly, changing as training images (such as pictures of trees, cats, ships, etc.) are shown, and then occasionally transferring their learning to the long-term information devices. Although we introduced this concept in our Nature paper using existing devices (phase-change memory and conventional capacitors), we felt there should be an opportunity for new memory devices to perform even better, if we could just identify the requirements for these devices.
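The toy Python sketch below conveys the flavor of this weight-splitting idea. It is not the actual update rule from our Nature paper; the class name, the update scale, and the transfer interval are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

class SplitWeight:
    """Toy sketch of one neural-network weight built from two kinds of devices.

    `slow` stands in for the long-term devices (e.g. phase-change memory),
    `fast` for the rapidly updated device (e.g. a capacitor). The effective
    weight read out during training is their sum. Names, learning rate, and
    transfer interval are assumptions, not values from the papers.
    """

    def __init__(self, transfer_every=100):
        self.slow = 0.0            # long-term component, updated only occasionally
        self.fast = 0.0            # fast component, updated on every example
        self.transfer_every = transfer_every
        self.updates = 0

    @property
    def value(self):
        return self.slow + self.fast

    def update(self, gradient, lr=0.01):
        # Each training example nudges only the fast device.
        self.fast -= lr * gradient
        self.updates += 1
        # Occasionally transfer the accumulated change to the long-term device.
        if self.updates % self.transfer_every == 0:
            self.slow += self.fast
            self.fast = 0.0

# Example: drive one weight toward a target value using noisy gradients
w, target = SplitWeight(), 0.5
for _ in range(1000):
    grad = (w.value - target) + 0.05 * rng.standard_normal()
    w.update(grad)
print(round(w.value, 3))  # settles near 0.5
```

The point of the split is that the fast device only has to absorb many small, imprecise updates, while the long-term device only needs to be programmed occasionally and more carefully; that division of labor is what relaxes the requirements on each kind of device.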
In our follow-up paper, just published in the Journal of Applied Physics, we were able to quantify the device properties that these “long-term information” and “fast-update” devices would need to exhibit. Because our scheme divides tasks across the two categories of devices, these device requirements are much less stringent—and thus much more achievable—than before. Our work provides a clear path for materials scientists to develop novel devices for energy-efficient AI hardware accelerators based on analog memory.