1. Introduction
The human brain is a highly efficient system, consisting of approximately 10¹¹ neurons and 10¹⁵ synapses while consuming merely 20 W of power [1,2,3]. Neuromorphic computing is a brain-inspired computing paradigm that offers massive parallelism and distributed storage, and is regarded as a promising technology for enhancing information-analysis capability in the data-rich era [4,5,6,7]. However, existing hardware demonstrations are far from competing with their biological counterparts in terms of efficiency and power consumption [8,9,10]. One reason is that these systems are built from CMOS devices, whose complex synapse and neuron circuits occupy a large area [11,12,13]. Compact nanoelectronic devices that can faithfully emulate the biological elements are therefore essential for constructing efficient networks [2]. Recently, the memristor, with its high density, low power consumption and tunable conductance, has shown great promise as an electronic synapse [14,15,16,17]. Another contributor to the inefficiency is that recognition tasks are typically realized via supervised learning, which demands a large amount of training data and additional feedback circuits, leading to time latency and energy consumption [18,19,20,21], especially when online training is required [22,23,24]. Thus, recent studies focus on unsupervised learning [25,26,27,28,29,30], where the synaptic weights are usually updated according to bio-inspired local learning rules [31,32,33], such as spike-timing-dependent plasticity (STDP) [34,35] and spike-rate-dependent plasticity (SRDP) [36]. STDP is the learning principle whereby the relative timing between pre-synaptic and post-synaptic spikes determines both the direction and the magnitude of the weight update [37,38,39]. SRDP modulates the synaptic weights according to the firing rates of the pre- and post-neurons, and is one of the most important learning rules for neuromorphic computing [40,41,42].
Early studies have shown that memristor devices can exhibit SRDP-like behaviors, including the SiOxNy:Ag-based diffusive memristor [43], the HfOx-based memristor [44], the TiOx/AlOy-based oxide memristor [45], the AgInSbTe-based chalcogenide memristor [46], hybrid CMOS/memristor structures [26,28], and devices based on many other materials [47,48,49]. Going beyond device demonstrations, several hardware implementations of pattern learning by SRDP have been proposed [26,28]. Milo et al. demonstrated online unsupervised learning of 8 × 8-pixel patterns by SRDP based on a 4T1R structure [26]. Nevertheless, the learning ability of such a small-scale network is limited, and it cannot accomplish more challenging tasks such as classifying different inputs or recognizing data sets. Recently, Huang et al. proposed a single-layer fully-connected network that classifies 10 images by SRDP and constructed a CNN-SRDP network that recognizes the whole MNIST data set with up to 92% accuracy, considerably extending the learning capability [4]. However, only simulation results were presented, and the device demonstration was performed on discrete cells. Therefore, hardware demonstration of SRDP at the network level is of great importance for addressing more practical tasks [2,50,51,52].
In this work, we present a neuromorphic hardware system that comprises multiple memristor arrays, DACs, ADCs and other peripheral components, and supports both inference and training. The SRDP characteristic is implemented experimentally with memristor synapses and CMOS neurons. A 196 × 10 SRDP neural network is constructed to demonstrate online unsupervised learning of 10 MNIST digits, achieving about 90% classification accuracy.
2. Memristor-Based Neuromorphic Hardware System
Here, a memristor-based neuromorphic system is constructed for the hardware demonstration of SRDP neural networks. The system consists of three parts: memristor crossbar arrays, a customized printed circuit board (PCB) and a personal computer (PC), as shown in Figure 1a. The memristor array provides the hardware synapses, with the device conductance serving as the analog of the synaptic weight; both vector-matrix multiplication and weight update are performed on the array. The PCB implements part of the neuron functionality and primarily consists of digital-to-analog converters (DACs), trans-impedance amplifiers (TIAs), analog-to-digital converters (ADCs) and multiplexers (MUXs), as shown in Figure 1b. DACs in the pre-neuron module generate the input signal and the noise signal applied to the word lines (WLs) of the array under the control of a reference random signal. The post-neuron is made up of an integrator, a comparator and a multiplexer, as shown in Figure 1c. DACs in the post-neuron module generate the constant voltage used for inference, as well as the spike pulses sent to the bit lines (BLs) or source lines (SLs) for weight update. ADCs, together with the TIAs, read the integral current across the synapses through the SLs. MUXs select among the memristor chips and between the operation modes, namely inference and weight update. A microcontroller unit (MCU) controls the discrete components and processes the data. A Matlab script running on the PC controls the signal generation and performs part of the leaky-integrate-and-fire (LIF) post-neuron computation, including the accumulation of the membrane voltage (Vm) and its comparison with the threshold voltage (Vth). The computer sends control commands to, and communicates with, the MCU via a serial port.
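As an illustration of the computation delegated to the computer, the following is a minimal Python sketch of one LIF update step (the actual implementation is a Matlab script; the leak constants r_leak and c_int below are hypothetical placeholders, while v_th follows Table 1):

```python
def lif_step(v_m, i_syn, v_th=0.3, r_leak=1e6, c_int=1e-9, dt=1e-6):
    """One leaky-integrate-and-fire (LIF) update of the membrane voltage.

    v_m    : membrane voltage before this step (V)
    i_syn  : integral current read through the SL by the TIA/ADC chain (A)
    v_th   : comparator threshold (0.3 V in Table 1)
    r_leak, c_int : leaky resistance and integrator capacitance (assumed values)
    dt     : time step, matching the 1-us spike width
    """
    # Leaky integration: the synaptic current charges C while R leaks it away.
    v_m += (i_syn - v_m / r_leak) * dt / c_int
    if v_m >= v_th:
        return 0.0, True   # fire event: Vm is cleared to zero
    return v_m, False
```

For instance, a single HGS synapse read at Vs = 0.2 V would contribute roughly 80 μS × 0.2 V = 16 μA to i_syn during a spike.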
Figure 2a shows a micrograph of the memristor chip. Each packaged chip integrates 256 × 16 one-transistor-one-resistor (1T1R) cells together with multiplexers that control and select the word lines. The crossbar array is constructed by connecting the gates of the transistors in the same row (WL) and the top electrodes (TEs) of the memristors in the same column (BL). The sources of the transistors are wired to a shared SL running parallel to the BL, as shown in Figure 2b. This array structure is designed to meet the requirements of the SRDP algorithm: the input signals of the pre-neurons are sent to the WLs, and the top electrodes of the memristors belonging to the same post-neuron are connected for synchronous weight update. Figure 2c shows the memristor device with a TiN/TaOx/HfOx/TiN stack. TiN serves as the bottom electrode, on top of which an 8-nm HfO2 resistive layer was deposited by atomic layer deposition (ALD) at 250 °C. A 45-nm TaOx capping layer was then deposited by magnetron sputtering in an Ar/N2 atmosphere. The TiN top electrode was grown by physical vapor deposition and patterned by dry etching.
3. Memristor Synapse with SRDP Characteristic
The basic properties and the SRDP characteristics of the memristor are shown in Figure 3. The typical I-V characteristic is presented in Figure 3a. The conductance distributions of the high conductance state (HGS) and the low conductance state (LGS) for ten randomly selected memristors are shown in Figure 3b. On average, the HGS is around 80.0 μS and the LGS is 2.7 μS, corresponding to an approximately 30× conductance window. The variation of the HGS is below 20% and that of the LGS is about 80%. The SRDP learning rule and the circuit of the memristor synapse and CMOS neurons were illustrated in Section 2. To verify the feasibility of the SRDP algorithm, we perform experiments on the hardware system. According to previous work, the learning efficiency and accuracy are sensitive to the circuit parameters of the post-neuron [4]. Thus, the circuit parameters must be chosen first, including the leaky resistance R and the capacitance C of the integrator, the threshold voltage Vth of the comparator in the post-neuron module, and so on. The various signals are generated as binarized sequences with given probabilities, where a high level "1" represents a spike of 1 μs width and "0" represents the absence of a spike.

We randomly select a device in the array for the demonstration of SRDP behavior. Initially, the device has a probability Pg = 0.5 of being in the HGS. The SRDP training process comprises three stages: accumulation, potentiation and depression. When training starts, the system first enters the accumulation stage. The DAC in the pre-neuron module generates Vg according to the input signal and applies it to the selected WL, while the DAC in the post-neuron module applies a small constant voltage (Vs) to the BL. When the transistor of the memristor synapse is switched on, a current is generated following Ohm's law and read out by the TIAs and ADCs. The current data are processed in the MCU and transferred to the computer, where Vm is calculated and compared with Vth. Once Vm exceeds Vth, a fire event occurs and Vm is cleared to zero. If the fire spike coincides with the reference random signal, the neuron enters the depression stage: the computer instructs the DACs to select the proper signals, acting as the MUX; the pre-neuron DAC applies Vg according to the noise signal to the WL, while the post-neuron DAC applies Vreset to the SL and grounds the BL. When the RESET spike overlaps with a noise spike, the device is RESET to the LGS. Otherwise, if the fire spike does not coincide with the reference random signal, the neuron enters the potentiation stage: Vg is generated according to the input signal, Vset is applied to the BL, and the SL is grounded. When the SET spike overlaps with an input spike, the device is SET to the HGS. In all other cases, the neuron remains in the accumulation stage. After training, the conductance at the final epoch is recorded as the learned weight. Note that the weight update is performed without write-verify, so the device variation shown in Figure 3b is present. Figure 3c presents the measured and simulated SRDP characteristics. The frequency of the input signal is normalized by 1 MHz. For each frequency point, the reported weight is the mean over 300 repeated experiments after 100 training epochs. The measurements agree with the simulations. Because Pg is 0.5, the mean initialized weight is about 40.0 μS.
When the normalized input frequency is higher than 0.3, the synapse is potentiated; otherwise, synaptic depression is triggered. The relationship between the trained weights and the input-signal frequency thus matches the biological SRDP phenomenon, where long-term potentiation (LTP) is achieved with a high-frequency input and long-term depression (LTD) with a low-frequency one [37,38].
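To make the three-stage protocol concrete, the following is a minimal Python sketch of the single-synapse SRDP experiment under simplifying assumptions: the spike overlaps are collapsed into per-epoch probabilities, the device switches abruptly between the mean HGS and LGS of Figure 3b, and the neuron leak constants are hypothetical (Vs and Vth follow Table 1):

```python
import numpy as np

rng = np.random.default_rng(0)
HGS, LGS = 80e-6, 2.7e-6              # mean conductance states from Figure 3b (S)
V_S, V_TH = 0.2, 0.3                  # read voltage and threshold from Table 1 (V)
R_LEAK, C_INT, DT = 1e6, 1e-9, 1e-6   # assumed neuron constants

def train_synapse(p_in, p_r, p_n, epochs=100):
    """Stochastic SRDP protocol for one memristor synapse."""
    g = HGS if rng.random() < 0.5 else LGS    # Pg = 0.5 initialization
    v_m = 0.0
    for _ in range(epochs):
        in_spike = rng.random() < p_in        # binarized input sequence
        # Accumulation: the read current charges the LIF post-neuron.
        i_syn = g * V_S if in_spike else 0.0
        v_m += (i_syn - v_m / R_LEAK) * DT / C_INT
        if v_m < V_TH:
            continue                          # stay in the accumulation stage
        v_m = 0.0                             # fire event clears Vm
        if rng.random() < p_r:                # fire coincides with the reference signal
            if rng.random() < p_n:            # RESET spike overlaps a noise spike
                g = LGS                       # depression
        elif in_spike:                        # SET spike overlaps an input spike
            g = HGS                           # potentiation
    return g
```

Averaging the returned weight over many runs for each p_in reproduces the trend of Figure 3c: high input frequencies drive the mean weight toward the HGS, and low frequencies toward the LGS.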
4. Online Unsupervised Learning of SRDP Network
We partition a 196 × 10 region of the array to construct a single-layer fully-connected network consisting of 196 pre-neurons, 1960 synapses and 10 post-neurons. Ten handwritten digits from the MNIST data set [53] are selected. The input images are rescaled to 14 × 14 pixels to match the array size and then binarized. The pixel values of each image are unrolled into a 196 × 1 vector and then mapped to signals with different frequencies. Before training, the devices are initialized into the HGS with probability Pg. The training parameters and their definitions are listed in Table 1, optimized by the fire-properly principle in ref. [4]. When training starts, the accumulation process is conducted first. The DACs corresponding to the pattern pixels of the digit send input signals with the same frequency Pin but different temporal sequences to the WLs, while those in the background region generate signals with a low frequency Pb. For input signals at a high level, the corresponding transistors in the same row are switched on. Meanwhile, Vs is applied to the BLs of all post-neurons. The currents sharing the same column are summed according to Kirchhoff's current law. Owing to the random distribution of the initialized weights, post-neurons sharing the same input signals have different accumulation speeds of membrane voltage and compete with each other. The PC compares every membrane voltage with Vth. Once any Vm exceeds Vth, the corresponding post-neuron undergoes weight update; regardless of which post-neuron becomes the winner, the Vm of all post-neurons is cleared to zero. If the reference random signal with frequency Pr is at a high level, the system is in the depression stage: noise signals with the specified rates are generated by the DACs, and the post-neuron whose Vm exceeded Vth sends a RESET spike to the SL, so that only the winner undergoes depression; the weights of the synapses connected to the winner are tuned according to the noise signal. If the reference random signal is at a low level, the system is in the potentiation stage, and the synapses of the winner have a certain probability of being enhanced. During training, the images are presented to the pre-neurons in sequence, and each image is held for 600 training epochs. A simplified sketch of one such epoch is given below.
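The following Python sketch condenses one training epoch of this procedure; it replaces the analog integration with a single accumulation step, omits the leak term, and uses idealized binary switching, so it illustrates the data flow rather than the exact hardware behavior (all function and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
N_IN, N_OUT = 196, 10                 # pre- and post-neuron counts

def train_epoch(G, x, v_m, p_in=1.0, p_b=0.0, p_r=0.15, p_n=0.04,
                v_s=0.2, v_th=0.3, hgs=80e-6, lgs=2.7e-6):
    """One SRDP epoch of the 196 x 10 network (parameter values from Table 1).

    G   : (196, 10) conductance matrix, i.e., the memristor array
    x   : (196,) binarized input image (1 = pattern pixel, 0 = background)
    v_m : (10,) membrane voltages of the post-neurons
    """
    # Pattern pixels spike with frequency Pin, background pixels with Pb.
    spikes = rng.random(N_IN) < np.where(x == 1, p_in, p_b)
    # Kirchhoff's current law: currents sharing a column (SL) are summed.
    v_m += spikes.astype(float) @ (G * v_s)   # simplified integration step
    if v_m.max() >= v_th:
        winner = int(np.argmax(v_m))          # competing post-neurons: winner fires
        if rng.random() < p_r:                # reference signal high -> depression
            noise = rng.random(N_IN) < p_n
            G[noise, winner] = lgs            # RESET where noise spikes overlap
        else:                                 # otherwise -> potentiation
            G[spikes, winner] = hgs           # SET where input spikes overlap
        v_m[:] = 0.0                          # all membrane voltages cleared
    return G, v_m
```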
Figure 4 shows the experimental learning process for digit "0". To reveal more detail, the training is slowed down by decreasing Pin and increasing the number of training epochs tn. Figure 4a shows the evolution of the integral current, the membrane voltage and the TE voltage during the first 300 epochs. The current across the synapses charges the capacitor of the LIF post-neuron, raising the membrane voltage. When Vth is reached, a positive spike is applied to the TE and Vm is cleared to zero. Figure 4b shows the change of the weights over the whole 1000 epochs: the weights in the pattern region approach the HGS, while those in the background region tend toward the LGS. Figure 4c displays the evolution of the mean weight of the pixels in each region. The results show that potentiation (depression) occurs at high (low) frequency owing to the larger probability for the weight to be enhanced (depressed), consistent with the SRDP phenomenon [40,41,42].
The learned synaptic weights of the 10 handwritten digits are displayed in Figure 5. As training proceeds, the images are learned more clearly and the distinctions between inputs are enlarged, demonstrating the learning ability of the SRDP network. The inference results before and after training are shown in Figure 6a,b, with the post-neurons reordered according to their firing sequence. The network fails to distinguish the inputs before training but succeeds in classifying the digits after unsupervised learning. Figure 6c shows the normalized firing frequency for each digit: one post-neuron fires for each digit, and the 10 digits are learned by different post-neurons, indicating successful classification. The gradual evolution of the post-neuron dynamics in Figure 4 and the learned synaptic weights in Figure 5 are consistent with the previous simulation results in ref. [4], confirming the feasibility of the SRDP network. Since the synaptic weights are modulated in an unsupervised way without write-verify operations, the influence of device variation on accuracy must be considered. Here, the accuracy is defined as the ratio of the number of successful classifications to the total number of measurements. We performed the measurement 10 times, of which 9 succeeded, in accordance with the simulation results shown in Figure 6d. The results suggest that the variation of the HGS has a more negative effect than that of the LGS. When the HGS variation reaches 20%, the accuracy is 93.5%, demonstrating the strong robustness of the SRDP network.
The influence of the network parameters on training accuracy and energy consumption is simulated, as shown in Figure 7. Accuracy is the most important metric for the network. Pin and Pg control the learning speed and have a great impact on accuracy, as shown in Figure 7a. These parameters cannot be too small, because the post-neurons cannot learn the images if they seldom fire. Pg cannot be too large, because when the number of SET events for devices in the HGS is smaller than the number of RESET events for devices in the LGS, the training fails; in other words, forgetting is faster than learning for the neuron that should have become the winner at the next epoch. For this task, the larger Pin is, the higher the accuracy becomes, because the overlap between images is relatively small: with a fast training speed, the images are learned distinctly and the differences between them are enlarged. For more complicated applications, the impact of Pin will differ, and all of the parameters should be re-optimized. Pr and Pn are two further critical parameters; here we discuss only the influence of Pr, shown in Figure 7b. The product of Pr and Pn determines the probability of depression. If Pn is large, too many depression events in the pattern pixels may occur in a single epoch, which in the worst case makes the learning fail. If Pn is small, the few RESET events per epoch make the training smoother, and the probability of forgetting can still be set by tuning Pr. We therefore adjust the probability of forgetting through Pr while keeping Pn fixed at a small value. Pr cannot be too large, to avoid catastrophic forgetting, and cannot be too small, because the winner would then have no time to reset the devices in the background pixels and would continue to win in the following learning period. The energy consumption shown in Figure 7c,d is calculated by integrating the current across the memristor array, as sketched below. As Pin and Pg increase, the firing frequency rises, leading to a higher energy cost; conversely, a larger probability of forgetting decreases the firing frequency and reduces the power consumption. These results indicate that the network parameters have a crucial impact on accuracy and energy consumption, and need to be fine-tuned for hardware demonstration.
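As stated above, the energy is obtained by integrating the current across the array; a minimal sketch of such an estimate (the function name and the restriction to read pulses are our assumptions) is:

```python
def array_energy(pulse_currents, v_s=0.2, pulse_width=1e-6):
    """Rough energy estimate: E = sum(Vs * I * t_pulse) over all spike slots.

    pulse_currents : total array current (A) sampled at each 1-us spike slot
    SET/RESET pulse energy would be accumulated analogously with the
    corresponding programming voltages.
    """
    return sum(v_s * i * pulse_width for i in pulse_currents)
```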
5. Conclusions
In conclusion, we have constructed a neuromorphic hardware system with memristor synapses and CMOS neurons. The SRDP characteristic of the memristor synapse is demonstrated experimentally. Online unsupervised learning of 10 handwritten digits at the network level is successfully demonstrated with the SRDP algorithm at an accuracy above 90%. The proposed bio-inspired SRDP algorithm and the constructed neuromorphic hardware system pave the way towards large-scale, highly efficient neuromorphic systems.
Author Contributions: Conceptualization, P.H.; methodology, R.L.; software, R.L.; validation, R.L.; writing—original draft preparation, R.L.; writing—review and editing, P.H., Z.Z. and J.K.; visualization, Y.F., Y.Z. and X.D.; supervision, P.H. and L.L. All authors have read and agreed to the published version of the manuscript.
Funding: This research was supported in part by the MOST of China under Grant 2021ZD0201200; in part by the National Natural Science Foundation of China under Grants 61874006, 62022006 and 92064001; and in part by the 111 Project (B18001).
Conflicts of Interest: The authors declare no conflict of interest.
Figure 1. Memristor-based neuromorphic system. (a) Circuit diagram of the system; (b) Photograph of the customized printed circuit board; (c) Circuit diagram of the memristor synapse and the corresponding CMOS neurons.
Figure 2. Memristor chip. (a) The micrograph of the 256 × 16 memristor array; (b) The structure of the crossbar array; (c) Schematic illustration of the memristor cell.
Figure 3. Basic properties and SRDP characteristics of the memristor. (a) The I-V characteristics; (b) The conductance distribution of ten devices; (c) Measured and simulated SRDP characteristic.
Figure 4. Hardware demonstration of the learning process for digit “0”. (a) Evolution of integral current (top), membrane voltage (middle) and the top electrode voltage (bottom); (b) Change process of the synapse weights; (c) Evolution of the mean weights in pattern (blue) and background (red) pixels.
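Figure 5. The learned synaptic weights of the 10 handwritten digits.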
Figure 6. Experimental results of inference (a) before training and (b) after training. (c) Fire frequency during the training process. (d) Simulation results of the influence of the device variation on classification accuracy.
Figure 7. Simulation results of the network parameters’ impact. The influence of Pin and Pg on (a) the accuracy and (c) energy consumption. The influence of Pin and Pr on (b) the accuracy and (d) energy consumption.
Table 1. The optimized network parameters.

| Parameter | Definition | Value | Unit |
|---|---|---|---|
| Vs | Constant read voltage applied to the top electrode | 0.2 | V |
| Vth | Threshold of the membrane voltage | 0.3 | V |
| Pg | Probability of a device being initialized in the HGS | 0.65 | a.u. |
| Pr | Frequency of the reference random signal | 0.15 | a.u. |
| Pn | Frequency of the noise signal | 0.04 | a.u. |
| Pin | Frequency of the input signal in the pattern pixels | 1 | a.u. |
| Pb | Frequency of the input signal in the background pixels | 0 | a.u. |
| tn | Number of training epochs for each image | 600 | epochs |
References
1. Lennie, P. The Cost of Cortical Computation. Curr. Biol.; 2003; 13, pp. 493-497. [DOI: https://dx.doi.org/10.1016/S0960-9822(03)00135-0]
2. Yu, S. Introduction to Neuro-Inspired Computing Using Resistive Synaptic Devices. In Neuro-Inspired Computing Using Resistive Synaptic Devices; Yu, S., Ed.; Springer International Publishing: Cham, Switzerland, 2017; pp. 1-15.
3. Yu, S.; Wu, Y.; Jeyasingh, R.; Kuzum, D.; Wong, H.-P. An Electronic Synapse Device Based on Metal Oxide Resistive Switching Memory for Neuromorphic Computation. IEEE Trans. Electron Devices; 2011; 58, pp. 2729-2737. [DOI: https://dx.doi.org/10.1109/TED.2011.2147791]
4. Huang, P.; Li, Z.; Dong, Z.; Han, R.; Zhou, Z.; Zhu, D.; Liu, L.; Liu, X.; Kang, J. Binary Resistive-Switching-Device-Based Electronic Synapse with Spike-Rate-Dependent Plasticity for Online Learning. ACS Appl. Electron. Mater.; 2019; 1, pp. 845-853. [DOI: https://dx.doi.org/10.1021/acsaelm.9b00011]
5. Mead, C. Neuromorphic electronic systems. Proc. IEEE; 1990; 78, pp. 1629-1636. [DOI: https://dx.doi.org/10.1109/5.58356]
6. Furber, S.; Temple, S. Neural systems engineering. J. R. Soc. Interface; 2007; 4, pp. 193-206. [DOI: https://dx.doi.org/10.1098/rsif.2006.0177]
7. Kuzum, D.; Jeyasingh, R.; Lee, B.; Wong, H.-P. Nanoelectronic Programmable Synapses Based on Phase Change Materials for Brain-Inspired Computing. Nano Lett.; 2012; 12, pp. 2179-2186. [DOI: https://dx.doi.org/10.1021/nl201040y]
8. Boybat, I.; Le Gallo, M.; Nandakumar, S.R.; Moraitis, T.; Parnell, T.; Tuma, T.; Rajendran, B.; Leblebici, Y.; Sebastian, A.; Eleftheriou, E. Neuromorphic computing with multi-memristive synapses. Nat. Commun.; 2018; 9, 2514. [DOI: https://dx.doi.org/10.1038/s41467-018-04933-y] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29955057]
9. Chen, Y.-H.; Krishna, T.; Emer, J.; Sze, V. Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE J. Solid-State Circuits; 2017; 52, pp. 127-138. [DOI: https://dx.doi.org/10.1109/JSSC.2016.2616357]
10. Sim, J.; Park, J.-S.; Kim, M.; Bae, D.; Choi, Y.; Kim, L.-S. A 1.42TOPS/W deep convolutional neural network recognition processor for intelligent IoE systems. Proceedings of the 2016 IEEE International Solid-State Circuits Conference (ISSCC); San Francisco, CA, USA, 2016; pp. 264-265.
11. Furber, S.B.; Galluppi, F.; Temple, S.; Plana, L.A. The SpiNNaker Project. Proc. IEEE; 2014; 102, pp. 652-665. [DOI: https://dx.doi.org/10.1109/JPROC.2014.2304638]
12. Likharev, K.K. CrossNets: Neuromorphic hybrid CMOS/nanoelectronic networks. Sci. Adv. Mater.; 2011; 3, pp. 322-331. [DOI: https://dx.doi.org/10.1166/sam.2011.1177]
13. Merolla, P.A.; Arthur, J.V.; Alvarez-Icaza, R.A.; Cassidy, A.S.; Sawada, J.; Akopyan, F.; Jackson, B.L.; Imam, N.; Guo, C.; Nakamura, Y. et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science; 2014; 345, pp. 668-673. [DOI: https://dx.doi.org/10.1126/science.1254642] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/25104385]
14. Yu, S.; Gao, B.; Fang, Z.; Yu, H.; Kang, J.; Wong, H.-P. A low energy oxide-based electronic synaptic device for neuromorphic visual systems with tolerance to device variation. Adv. Mater.; 2013; 25, pp. 1774-1779. [DOI: https://dx.doi.org/10.1002/adma.201203680]
15. Jo, S.; Chang, T.; Ebong, I.; Bhadviya, B.; Mazumder, P.; Lu, W. Nanoscale Memristor Device as Synapse in Neuromorphic Systems. Nano Lett.; 2010; 10, pp. 1297-1301. [DOI: https://dx.doi.org/10.1021/nl904092h]
16. Wong, H.-P.; Lee, H.; Yu, S.; Chen, Y.; Wu, Y.; Chen, P.; Lee, B.; Chen, F.T.; Tsai, M. Metal–oxide RRAM. Proc. IEEE; 2012; 100, pp. 1951-1970. [DOI: https://dx.doi.org/10.1109/JPROC.2012.2190369]
17. Yang, J.J.; Strukov, D.B.; Stewart, D.R. Memristive devices for computing. Nat. Nanotechnol.; 2013; 8, pp. 13-24. [DOI: https://dx.doi.org/10.1038/nnano.2012.240] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/23269430]
18. Rumelhart, D.; Hinton, G.; Williams, R. Learning representations by back-propagating errors. Nature; 1986; 323, pp. 533-536. [DOI: https://dx.doi.org/10.1038/323533a0]
19. Burr, G.W.; Narayanan, P.; Shelby, R.M.; Sidler, S.; Boybat, I.; di Nolfo, C.; Leblebici, Y. Large-scale neural networks implemented with non-volatile memory as the synaptic weight element: Comparative performance analysis (accuracy, speed, and power). Proceedings of the 2015 IEEE International Electron Devices Meeting (IEDM); Washington, DC, USA, 7–9 December 2015; pp. 4.4.1-4.4.4.
20. Sze, V.; Chen, Y.-H.; Yang, T.-J.; Emer, J.S. Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Proc. IEEE; 2017; 105, pp. 2295-2329. [DOI: https://dx.doi.org/10.1109/JPROC.2017.2761740]
21. Lillicrap, T.P.; Cownden, D.; Tweed, D.B.; Akerman, C.J. Random synaptic feedback weights support error backpropagation for deep learning. Nat. Commun.; 2016; 7, 13276. [DOI: https://dx.doi.org/10.1038/ncomms13276]
22. Alibart, F.; Zamanidoost, E.; Strukov, D.B. Pattern classification by memristive crossbar circuits using ex situ and in situ training. Nat. Commun.; 2013; 4, 3072. [DOI: https://dx.doi.org/10.1038/ncomms3072]
23. Yao, P.; Wu, H.; Gao, B.; Eryilmaz, S.B.; Huang, X.; Zhang, W.; Zhang, Q.; Deng, N.; Shi, L.; Wong, H.-S.P. et al. Face classification using electronic synapses. Nat. Commun.; 2017; 8, 15199. [DOI: https://dx.doi.org/10.1038/ncomms15199] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28497781]
24. Sheridan, P.M.; Cai, F.; Du, C.; Ma, W.; Zhang, Z.; Lu, W.D. Sparse coding with memristor networks. Nat. Nanotechnol.; 2017; 12, pp. 784-789. [DOI: https://dx.doi.org/10.1038/nnano.2017.83] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28530717]
25. Kim, S.; Ishii, M.; Lewis, S.; Perri, T.; BrightSky, M.; Kim, W.; Jordan, R.; Burr, G.W.; Sosa, N.; Ray, A. et al. NVM neuromorphic core with 64k-cell (256-by-256) phase change memory synaptic array with on-chip neuron circuits for continuous in-situ learning. Proceedings of the 2015 IEEE International Electron Devices Meeting (IEDM); Washington, DC, USA, 7–9 December 2015; pp. 17.1.1-17.1.4.
26. Milo, V.; Pedretti, G.; Carboni, R.; Calderoni, A.; Ramaswamy, N.; Ambrogio, S.; Ielmini, D. A 4-Transistors/1-Resistor Hybrid Synapse Based on Resistive Switching Memory (RRAM) Capable of Spike-Rate-Dependent Plasticity (SRDP). IEEE Trans. Very Large Scale Integr. (VLSI) Syst.; 2018; 26, pp. 2806-2815. [DOI: https://dx.doi.org/10.1109/TVLSI.2018.2818978]
27. Serb, A.; Bill, J.; Khiat, A.; Berdan, R.; Legenstein, R.; Prodromakis, T. Unsupervised learning in probabilistic neural networks with multi-state metal-oxide memristive synapses. Nat. Commun.; 2016; 7, 12611. [DOI: https://dx.doi.org/10.1038/ncomms12611] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27681181]
28. Milo, V.; Pedretti, G.; Carboni, R.; Calderoni, A.; Ramaswamy, N.; Ambrogio, S.; Ielmini, D. Demonstration of hybrid CMOS/RRAM neural networks with spike time/rate-dependent plasticity. Proceedings of the 2016 IEEE International Electron Devices Meeting (IEDM); San Francisco, CA, USA, 3–7 December 2016; pp. 16.8.1-16.8.4.
29. Ambrogio, S.; Balatti, S.; Milo, V.; Carboni, R.; Wang, Z.; Calderoni, A.; Ramaswamy, N.; Ielmini, D. Novel RRAM-enabled 1T1R synapse capable of low-power STDP via burst-mode communication and real-time unsupervised machine learning. Proceedings of the 2016 IEEE Symposium on VLSI Technology; Honolulu, HI, USA, 14–16 June 2016.
30. Ambrogio, S.; Balatti, S.; Milo, V.; Carboni, R.; Wang, Z.; Calderoni, A.; Ramaswamy, N.; Ielmini, D. Neuromorphic Learning and Recognition With One-Transistor-One-Resistor Synapses and Bistable Metal Oxide RRAM. IEEE Trans. Electron Devices; 2016; 63, pp. 1508-1515. [DOI: https://dx.doi.org/10.1109/TED.2016.2526647]
31. Masquelier, T.; Thorpe, S.J. Unsupervised Learning of Visual Features through Spike Timing Dependent Plasticity. PLoS Comput. Biol.; 2007; 3, pp. 0247-0257. [DOI: https://dx.doi.org/10.1371/journal.pcbi.0030031]
32. Suri, M.; Bichler, O.; Querlioz, D.; Palma, G.; Vianello, E.; Vuillaume, D.; Gamrat, C.; DeSalvo, B. CBRAM devices as binary synapses for low-power stochastic neuromorphic systems: Auditory (Cochlea) and visual (Retina) cognitive processing applications. Proceedings of the 2012 International Electron Devices Meeting; San Francisco, CA, USA, 10–13 December 2012; pp. 10.3.1-10.3.4.
33. Diehl, P.U.; Cook, M. Unsupervised learning of digit recognition using spike-timing-dependent plasticity. Front. Comput. Neurosci.; 2015; 9, 99. [DOI: https://dx.doi.org/10.3389/fncom.2015.00099]
34. Bi, G.Q.; Poo, M.M. Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci.; 1998; 18, pp. 10464-10472. [DOI: https://dx.doi.org/10.1523/JNEUROSCI.18-24-10464.1998]
35. Abbott, L.F.; Nelson, S.B. Synaptic plasticity: Taming the beast. Nat. Neurosci.; 2000; 3, pp. 1178-1183. [DOI: https://dx.doi.org/10.1038/81453]
36. Bear, M.F. A synaptic basis for memory storage in the cerebral cortex. Proc. Natl. Acad. Sci. USA; 1996; 93, pp. 13453-13459. [DOI: https://dx.doi.org/10.1073/pnas.93.24.13453]
37. Kempter, R.; Gerstner, W.; Van Hemmen, J.L. Hebbian learning and spiking neurons. Phys. Rev. E; 1999; 59, pp. 4498-4514. [DOI: https://dx.doi.org/10.1103/PhysRevE.59.4498]
38. Tan, Z.; Yang, R.; Terabe, K.; Yin, X.; Zhang, X.; Guo, X. Synaptic Metaplasticity Realized in Oxide Memristive Devices. Adv. Mater.; 2016; 28, pp. 377-384. [DOI: https://dx.doi.org/10.1002/adma.201503575] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26573772]
39. Rachmuth, G.; Shouval, H.Z.; Bear, M.F.; Poon, C.S. A biophysically-based neuromorphic model of spike rate- and timing-dependent plasticity. Proc. Natl. Acad. Sci. USA; 2011; 108, pp. E1266-E1274. [DOI: https://dx.doi.org/10.1073/pnas.1106161108]
40. Bear, M.F.; Cooper, L.N.; Ebner, F.F. A physiological basis for a theory of synapse modification. Science; 1987; 237, pp. 42-48. [DOI: https://dx.doi.org/10.1126/science.3037696] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/3037696]
41. Dudek, S.M.; Bear, M.F. Homosynaptic long-term depression in area CA1 of hippocampus and effects of N-methyl-D-aspartate receptor blockade. Proc. Natl. Acad. Sci. USA; 1992; 89, pp. 4363-4367. [DOI: https://dx.doi.org/10.1073/pnas.89.10.4363]
42. Kim, C.-H.; Lim, S.; Woo, S.Y.; Kang, W.-M.; Seo, Y.-T.; Lee, S.-T.; Lee, S.; Kwon, D.; Oh, S.; Noh, Y. et al. Emerging memory technologies for neuromorphic computing. Nanotechnology; 2018; 30, 032001. [DOI: https://dx.doi.org/10.1088/1361-6528/aae975]
43. Wang, Z.; Joshi, S.; Savel’ev, S.E.; Jiang, H.; Midya, R.; Lin, P.; Hu, M.; Ge, N.; Strachan, J.P.; Li, Z. et al. Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing. Nat. Mater.; 2017; 16, pp. 101-108. [DOI: https://dx.doi.org/10.1038/nmat4756]
44. Yin, J.; Zeng, F.; Wan, W.; Li, F.; Sun, Y.; Hu, Y.; Liu, J.; Li, G.; Pan, F. Adaptive Crystallite Kinetics in Homogenous Bilayer Oxide Memristor for Emulating Diverse Synaptic Plasticity. Adv. Funct. Mater.; 2018; 28, 1706927. [DOI: https://dx.doi.org/10.1002/adfm.201706927]
45. Ziegler, M.; Riggert, C.; Hansen, M.; Bartsch, T.; Kohlstedt, H. Memristive Hebbian plasticity model: Device requirements for the emulation of Hebbian plasticity based on memristive devices. IEEE Trans. Biomed. Circuits Syst.; 2015; 9, pp. 197-206. [DOI: https://dx.doi.org/10.1109/TBCAS.2015.2410811]
46. Li, Y.; Zhong, Y.; Zhang, J.; Xu, L.; Wang, Q.; Sun, H.; Tong, H.; Cheng, X.; Miao, X. Activity-dependent synaptic plasticity of a chalcogenide electronic synapse for neuromorphic systems. Sci. Rep.; 2014; 4, 4906. [DOI: https://dx.doi.org/10.1038/srep04906] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24809396]
47. Ohno, T.; Hasegawa, T.; Tsuruoka, T.; Terabe, K.; Gimzewski, J.K.; Aono, M. Short-term plasticity and long-term potentiation mimicked in single inorganic synapses. Nat. Mater.; 2011; 10, pp. 591-595. [DOI: https://dx.doi.org/10.1038/nmat3054] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21706012]
48. Xiao, Z.; Huang, J. Energy-Efficient Hybrid Perovskite Memristors and Synaptic Devices. Adv. Electron. Mater.; 2016; 2, 1600100. [DOI: https://dx.doi.org/10.1002/aelm.201600100]
49. Kim, B.-Y.; Hwang, H.-G.; Woo, J.-U.; Lee, W.-H.; Lee, T.-H.; Kang, C.-Y.; Nahm, S. Nanogenerator-induced synaptic plasticity and metaplasticity of bio-realistic artificial synapses. NPG Asia Mater.; 2017; 9, e381. [DOI: https://dx.doi.org/10.1038/am.2017.64]
50. Covi, E.; Brivio, S.; Serb, A.; Prodromakis, T.; Fanciulli, M.; Spiga, S. Analog memristive synapse in spiking networks implementing unsupervised learning. Front. Neurosci.; 2016; 10, 482. [DOI: https://dx.doi.org/10.3389/fnins.2016.00482] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27826226]
51. Boyn, S.; Grollier, J.; Lecerf, G.; Xu, B.; Locatelli, N.; Fusil, S.; Girod, S.; Carrétéro, C.; Garcia, K.; Xavier, S. et al. Learning through ferroelectric domain dynamics in solid-state synapses. Nat. Commun.; 2017; 8, 14736. [DOI: https://dx.doi.org/10.1038/ncomms14736]
52. He, W.; Huang, K.; Ning, N.; Ramanathan, K.; Li, G.; Jiang, Y.; Sze, J.; Shi, L.; Zhao, R.; Pei, J. Enabling an Integrated Rate-temporal Learning Scheme on Memristor. Sci. Rep.; 2014; 4, 04755. [DOI: https://dx.doi.org/10.1038/srep04755]
53. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE; 1998; 86, pp. 2278-2324. [DOI: https://dx.doi.org/10.1109/5.726791]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Neuromorphic computing has shown great advantages for cognitive tasks, offering high speed and remarkable energy efficiency. The memristor is considered one of the most promising candidates for the electronic synapse of neuromorphic computing systems owing to its scalability, power efficiency and capability to emulate biological behaviors. Several memristor-based hardware demonstrations have explored unsupervised learning with the spike-rate-dependent plasticity (SRDP) learning rule. However, their learning capacity is limited, and few memristor-based hardware demonstrations have explored online unsupervised learning at the network level with an SRDP algorithm. Here, we construct a memristor-based hardware system and demonstrate the online unsupervised learning of SRDP networks. The neuromorphic system consists of multiple memristor arrays as the synapses and discrete CMOS circuit units as the neurons. Unsupervised learning and online weight update of 10 MNIST handwritten digits are realized by the constructed SRDP networks, and the recognition accuracy remains above 90% under 20% device variation. This work paves the way towards the realization of large-scale, efficient networks for more complex tasks.