
  1. (Department of Electronic and Electrical Engineering, Ewha Womans University, 52 Ewhayeodae-gil, Seodaemun-gu, Seoul 03760, Republic of Korea)
  2. (Department of Electronic and Electrical Engineering, Ewha Womans University, Seoul, Republic of Korea; currently with Philophos, Inc., Bundang, Republic of Korea)
  3. (Department of Intelligent Semiconductor, University of Seoul, Seoul 02504, Republic of Korea)



Keywords: Digital integrated circuit, integrate-and-fire neuron, hardware implementation, neuromorphic chip, FPGA

I. INTRODUCTION

In recent years, the demand for artificial intelligence (AI) has increased owing to its widespread application and expanding scope across various fields. Simultaneously, there is a growing need for advanced computing architectures with massive parallelism that can handle vast amounts of data effectively and make rapid decisions in complex situations. To address these challenges, neural computing inspired by the human brain has attracted considerable attention [1]. This approach uses highly interconnected synthetic neurons and synapses to model complex neural processes and solve difficult machine-learning problems [2]. By mimicking the distributed topology of the brain, neural systems provide significant advantages in terms of speed and energy efficiency, making them suitable for sophisticated information processing tasks [3,4]. Within this approach, spiking neural networks (SNNs) have the potential to replicate the parallel processing capabilities and energy efficiency of the biological nervous system by mimicking the way biological neurons transmit information across synapses in the form of electrical spikes [5,6,7]. At the core of SNNs and neuromorphic architectures is the integrate-and-fire (I&F) neuron circuit, which serves as a basic unit of computation that closely mimics the behavior of biological neurons [8,9,10,11]. More recently, the digital implementation of I&F neuron models has attracted considerable attention as a promising alternative to analog models, providing flexibility, scalability, and smooth integration with conventional CMOS technologies [12,13,14,15]. These digital I&F models play an important role in building complex neural architectures and serve as essential components of SNNs that can perform a wide range of cognitive tasks, such as recognition, classification, and decision-making [16,17].

Thus, in this study, we explore the implications of implementing digital I&F neuron models in neuromorphic computing systems. By utilizing the inherent parallelism and low-power characteristics of spiking neurons, these systems demonstrate considerable potential for the efficient and scalable implementation of AI algorithms [12,18]. Despite these developments, research continues to explore the future directions and prospects of digital I&F neuron models in the SNN domain. The continued advancement of digital circuit design, combined with neuroscientific insights, is expected to lead to more sophisticated and energy-efficient neuromorphic systems, ultimately narrowing the gap between artificial and biological intelligence [1,19,20]. This study examines digital I&F neuron models and evaluates their underlying principles. In addition, we investigate the ability of these models to accurately capture the spiking dynamics of biological neurons through field-programmable gate array (FPGA)-based programming and implementation.

II. DESIGN OF DIGITAL NEURON MODEL

A. Digital integrate-and-fire neuron model

The I&F neuron model is a fundamental element of the SNN. This model integrates incoming signals and triggers an output spike when the membrane potential ($V_{\rm mem}$) reaches a specific threshold. After the spike is generated and transmitted to the connected neurons, $V_{\rm mem}$ is reset, preparing the neuron for the next cycle of integration and firing. In the digital implementation of the I&F neuron model, the analog behavior of biological neurons is replicated using digital circuits. The core components of the digital I&F neuron model include an integrator that accumulates input signals, a threshold comparator that determines when the accumulated potential exceeds a predefined voltage ($V_{\rm th}$), and a reset mechanism that restores $V_{\rm mem}$ to its initial state after spike generation.
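In discrete time, this behavior can be summarized by a simple update rule (a minimal formulation given for illustration; the synaptic weights $w_i$, input spikes $s_i[n]$, and time-step index $n$ are notational assumptions rather than symbols defined elsewhere in this paper). With $U[n] = V_{\rm mem}[n] + \sum_i w_i\, s_i[n]$ denoting the integrated potential at step $n$,

$V_{\rm mem}[n+1] = \begin{cases} 0, & U[n] \ge V_{\rm th} \quad (\text{spike\_out}[n] = 1), \\ U[n], & U[n] < V_{\rm th} \quad (\text{spike\_out}[n] = 0). \end{cases}$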

Fig. 1 shows the logical flowchart used to realize the digital equivalent circuit of an I&F neuron. The integrator unit accumulates the incoming spike signals (spike_in) over time, each weighted by its synaptic weight. As these signals are integrated, $V_{\rm mem}$ increases until it surpasses the preset $V_{\rm th}$. When this threshold is exceeded, the comparator triggers an output spike (spike_out). Immediately after the spike is generated, the membrane potential is reset to 0 V, allowing the neuron to begin a new integration process. This cycle of integration, threshold comparison, and spike generation captures the essential behavior of an I&F neuron circuit. The digital implementation provides accurate control over neuronal parameters, such as time constants and threshold voltages, and can be tailored to specific applications. The flexibility and adaptability of the digital I&F neuron circuit make it suitable for a wide range of neural computing tasks, from basic pattern recognition to complex cognitive functions. Its compact design also enables efficient expansion, making it easier to realize a large-scale SNN in hardware.

Fig. 1. Workflow of the digital equivalent I&F neuron model.


B. FPGA implementation of an I&F neuron

The I&F neuron model was implemented on a Zynq multiprocessor system-on-chip (MPSoC) FPGA using the Xilinx Vivado Design Suite. First, a hardware description language (HDL) representation of the I&F neuron model was developed. This HDL code defines the neuron behavior, including signal integration, threshold detection, spike generation, and membrane potential resetting. An analog-to-digital converter module was then used to digitize the continuous input current that would be fed into an analog I&F neuron. This allowed the HDL-based design methodology to build an I&F neuron circuit by replacing analog components with equivalent digital ones.
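The HDL source itself is not reproduced in this paper; the following minimal Verilog sketch only illustrates the kind of behavior described above. The module name, bit width, threshold, and fixed synaptic weight are illustrative assumptions, not the authors' actual code or parameter values.

// Minimal illustrative integrate-and-fire neuron (not the authors' source code).
module if_neuron #(
    parameter WIDTH     = 8,     // bit width of the membrane potential (assumption)
    parameter THRESHOLD = 100,   // firing threshold in LSBs (assumption)
    parameter WEIGHT    = 10     // synaptic weight added per input spike (assumption)
)(
    input  wire             clk,
    input  wire             rst,        // neuron reset: clears the membrane potential
    input  wire             spike_in,   // incoming spike
    output reg              spike_out,  // output spike, asserted for one clock cycle
    output reg [WIDTH-1:0]  v_mem       // membrane potential register
);

    always @(posedge clk) begin
        if (rst) begin
            v_mem     <= {WIDTH{1'b0}};
            spike_out <= 1'b0;
        end else if (v_mem >= THRESHOLD) begin
            spike_out <= 1'b1;            // threshold exceeded: fire ...
            v_mem     <= {WIDTH{1'b0}};   // ... and reset the membrane potential
        end else begin
            spike_out <= 1'b0;
            if (spike_in)
                v_mem <= v_mem + WEIGHT;  // integrate the weighted input spike
        end
    end

endmodule

With the assumed parameters, the membrane register climbs in steps of WEIGHT on each input spike and fires once it reaches THRESHOLD, matching the integrate, compare, and reset cycle of Fig. 1.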

In the simulation, the reset signal was first asserted to establish the initial conditions. After a delay of one time unit, the reset signal was de-asserted, starting the main simulation sequence. The algorithm then proceeded in a loop over simulation clock cycles. The input spike signal was enabled and toggled at one-time-unit intervals, and the integrated $V_{\rm mem}$ was monitored. If it exceeded the given $V_{\rm th}$, the comparator triggered the generation of an output spike (spike_out). $V_{\rm mem}$ was then reset to 0 V, allowing the neuron to resume the integration process. Subsequently, we simulated a neuron reset event by asserting the neuron_reset signal for one time unit of the clock signal. This sequence was repeated until the clock signal stopped, as shown in Fig. 2. The simulation results validate the functionality of the digital I&F neuron circuit and provide important insights into its dynamic behavior. They demonstrate that the circuit accurately replicates the spiking dynamics of biological neurons, making it a suitable candidate for hardware-oriented nervous systems. The FPGA implementation is optimized to minimize resource usage, allowing the digital I&F neuron circuit to be scaled to accommodate larger neuronal networks.
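For reference, a testbench following this stimulus sequence might look as below. This is a hedged sketch, not the authors' simulation source: the time unit, loop count, and signal names match the illustrative module above rather than the actual design.

// Illustrative testbench: reset release, periodic input spikes, and a neuron reset event.
`timescale 1ns / 1ps

module if_neuron_tb;

    reg  clk = 1'b0;
    reg  rst = 1'b1;
    reg  spike_in = 1'b0;
    wire spike_out;
    wire [7:0] v_mem;

    // Device under test (the illustrative module from the previous sketch)
    if_neuron dut (
        .clk(clk),
        .rst(rst),
        .spike_in(spike_in),
        .spike_out(spike_out),
        .v_mem(v_mem)
    );

    // Free-running clock
    always #1 clk = ~clk;

    initial begin
        // Release the reset after one time unit to set the initial conditions
        #1 rst <= 1'b0;

        // Toggle the input spike at one-time-unit intervals
        repeat (60) #1 spike_in <= ~spike_in;

        // Simulate a neuron reset event lasting one time unit
        #1 rst <= 1'b1;
        #1 rst <= 1'b0;

        #20 $finish;
    end

    // Log the membrane potential and output spikes over time
    initial $monitor("t=%0t  v_mem=%0d  spike_out=%b", $time, v_mem, spike_out);

endmodule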

Fig. 2. Simulation of the designed digital equivalent of an I&F neuron using the Vivado Design Suite.


III. RESULTS AND DISCUSSION

The FPGA implementation of the digital I&F neuron model was thoroughly analyzed to evaluate its performance in terms of resource utilization, power consumption, and overall design efficiency. These results provide a deeper understanding of the neuron behavior and support the reliability, cost-effectiveness, and compactness of the model. Table 1 summarizes the FPGA resources used to implement the I&F neuron digital circuit. The design utilizes only a minimal portion of the available resources, ensuring a simple and efficient implementation. Specifically, only 11 of the 274,080 lookup tables (LUTs) were used, a utilization rate of 0.01%. Similarly, only 33 flip-flops (FFs) were required out of the 548,160 available, corresponding to a utilization rate of 0.01%. Of the 328 input/output (IO) pins provided by the FPGA system, only five (1.52%) were needed to implement the neuron circuit. In addition, only one clock buffer (BUFG) was used out of the 404 available, representing 0.25% of the total. These results indicate that the design is highly resource-efficient, with ample room for further optimization and enhancement. As shown in Fig. 3, the total on-chip power consumption was measured to be 0.71 W, with the junction temperature maintained at 25.7 $^\circ$C. This power consumption is clearly lower than the 4.5 W, 4.6 W, and 10.5 W reported for comparable FPGA implementations [21,22,23]. These metrics highlight the efficiency of the design and suggest that the digital I&F neuron circuit, successfully implemented and verified on the Zynq MPSoC FPGA, is well suited for integration into fully CMOS-based SNN chips. Moreover, although the available resources and their usage portions differ among devices, the modular structure of this design allows it to be adapted to other FPGA platforms with minor adjustments such as clocking and resource optimization.

Table 1. Summary of the resources used for the neuron implementation in comparison with the available amount.

Resources    Utilization    Available    Utilization (%)
LUT          11             274,080      0.01
FF           33             548,160      0.01
IO           5              328          1.52
BUFG         1              404          0.25
PLL          1              8            12.50

Fig. 3. Power consumption per function in the digital equivalent I&F neuron circuit for an SNN realized using an FPGA.


The successful implementation and validation of this digital neuron circuit underscore its potential as a platform for neuromorphic computing applications. Its compact design, combined with low power consumption and minimal resource utilization, makes it a promising candidate for scaling into larger and more complex SNNs. Although optimizing the bit widths, reusing computational units, and balancing efficiency against precision posed substantial challenges, these trade-offs were carefully managed to obtain acceptable precision and scalability while achieving resource and power efficiency. Replacing analog components introduces quantization and discretization effects due to the finite bit widths, slightly impacting precision. However, appropriate bit-width selection ensures functional accuracy and repeatability, which is advantageous for large-scale systems. This study not only confirms the functionality of the I&F neuron model but also demonstrates its feasibility for future neuromorphic systems that require efficient and scalable hardware solutions.
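As a rough illustration of the discretization effect noted above (the bit width $B$ and full-scale range $V_{\rm FS}$ are generic symbols introduced here, not values reported in this work), representing $V_{\rm mem}$ with $B$ bits over a range $V_{\rm FS}$ gives a quantization step $\Delta = V_{\rm FS}/2^{B}$, so the worst-case rounding error is bounded by $\Delta/2$. Each additional bit halves this error at the cost of wider registers, adders, and comparators, which is the kind of precision-versus-resource trade-off discussed above.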

IV. CONCLUSION

In this study, we successfully designed, implemented, and verified the digital equivalent circuit of an I&F neuron model using FPGA technology. The functionality and performance of the neuron circuit were validated, demonstrating that it effectively replicates biological neuron behaviors. Furthermore, the compact design and the detailed power analysis of the individual functional blocks highlight the efficiency of the proposed circuit and its strong potential for integration into hardware-oriented neural systems. A quantitative evaluation of component usage and power consumption further confirms the viability of the proposed circuit for scalable and energy-efficient applications in advanced neural computing and its chip implementation.

ACKNOWLEDGMENTS

This work was supported by the National R&D Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT of Korea (MSIT), under Grants NRF 2022M3I7A078936 and RS-2024-00402495.

References

[1] C. Mead, ``Neuromorphic electronic systems,'' Proceedings of the IEEE, vol. 78, no. 10, pp. 1629-1636, Oct. 1990.
[2] C. D. Schuman, T. E. Potok, R. M. Patton, J. D. Birdwell, M. E. Dean, G. S. Rose, and J. S. Plank, ``A survey of neuromorphic computing and neural networks in hardware,'' arXiv:1705.06963, 2017.
[3] S. Furber, ``Large-scale neuromorphic computing systems,'' Journal of Neural Engineering, vol. 13, no. 5, pp. 051001-1--051001-14, Aug. 2016.
[4] D. Marković, A. Mizrahi, D. Querlioz, and J. Grollier, ``Physics for neuromorphic computing,'' Nature Reviews Physics, vol. 2, pp. 499-510, Jul. 2020.
[5] Q. T. Pham, ``A review of SNN implementation on FPGA,'' Proc. of 2021 International Conference on Multimedia Analysis and Pattern Recognition (MAPR), pp. 1-6, Hanoi, Vietnam, Oct. 15-16, 2021.
[6] W. Maass, ``Networks of spiking neurons: The third generation of neural network models,'' Neural Networks, vol. 10, no. 9, pp. 1659-1671, Aug. 1997.
[7] J.-Q. Yang, R. Wang, Y. Ren, J. Y. Mao, Z.-P. Wang, Y. Zhou, and S.-T. Han, ``Neuromorphic engineering: From biological to spike-based hardware nervous systems,'' Advanced Materials, vol. 32, no. 52, pp. 2003610-1--2003610-32, Dec. 2020.
[8] S. Yang, P. Liu, J. Xue, R. Sun, and R. Ying, ``An efficient FPGA implementation of Izhikevich neuron model,'' Proc. of 2020 International SoC Design Conference (ISOCC), pp. 141-142, Yeosu, Korea, Oct. 21-24, 2020.
[9] J. Han, Z. Li, W. Zheng, and Y. Zhang, ``Hardware implementation of spiking neural networks on FPGA,'' Tsinghua Science and Technology, vol. 25, no. 4, pp. 479-486, Jan. 2020.
[10] S. Schmidgall, R. Ziaei, J. Achterberg, L. Kirsch, S. P. Hajiseyedrazi, and J. Eshraghian, ``Brain-inspired learning in artificial neural networks: A review,'' APL Machine Learning, vol. 2, no. 2, pp. 021501-1--021501-14, Jun. 2024.
[11] A. N. Burkitt, ``A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input,'' Biological Cybernetics, vol. 95, no. 1, pp. 1-19, Apr. 2006.
[12] M. Kwon, M. Baek, S. Hwang, K. Park, T. Jang, T. Kim, J. Lee, S. Cho, and B.-G. Park, ``Integrate-and-fire neuron circuit using positive feedback field effect transistor for low power operation,'' Journal of Applied Physics, vol. 124, no. 15, pp. 152107-1--152107-7, Sep. 2018.
[13] G. Indiveri, E. Chicca, and R. Douglas, ``A VLSI array of low-power spiking neurons and bistable synapses with spike-timing dependent plasticity,'' IEEE Transactions on Neural Networks, vol. 17, no. 1, pp. 211-221, Jan. 2006.
[14] J. Schemmel, D. Brüderle, A. Grübl, M. Hock, K. Meier, and S. Millner, ``A wafer-scale neuromorphic hardware system for large-scale neural modeling,'' Proc. of 2010 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1947-1950, Paris, France, May 30-Jun. 2, 2010.
[15] J. Seo, B. Brezzo, Y. Liu, B. D. Parker, S. K. Esser, and R. K. Montoye, ``A 45 nm CMOS neuromorphic chip with a scalable architecture for learning in networks of spiking neurons,'' Proc. of IEEE Custom Integrated Circuits Conference (CICC), pp. 1-4, San Jose, USA, Sep. 19-21, 2011.
[16] M. Pfeiffer and T. Pfeil, ``Deep learning with spiking neurons: Opportunities and challenges,'' Frontiers in Neuroscience, vol. 12, pp. 774-1--774-18, Oct. 2018.
[17] H. Taherdoost, ``Deep learning and neural networks: Decision-making implications,'' Symmetry, vol. 15, no. 9, pp. 1723-1--1723-22, Sep. 2023.
[18] P. Panda, S. A. Aketi, and K. Roy, ``Toward scalable, efficient, and accurate deep spiking neural networks with backward residual connections, stochastic softmax, and hybridization,'' Frontiers in Neuroscience, vol. 14, pp. 653-1--653-18, Jun. 2020.
[19] G. Indiveri and T. K. Horiuchi, ``Frontiers in neuromorphic engineering,'' Frontiers in Neuroscience, vol. 5, pp. 118-1--118-2, Oct. 2011.
[20] G. Indiveri, B. Linares-Barranco, R. Legenstein, G. Deligeorgis, and T. Prodromakis, ``Integration of nanoscale memristor synapses in neuromorphic computing architectures,'' Nanotechnology, vol. 24, no. 38, pp. 384010-1--384010-13, Sep. 2013.
[21] H. Fang, Z. Mei, A. Shrestha, Z. Zhao, Y. Li, and Q. Qiu, ``Encoding, model, and architecture: Systematic optimization for spiking neural network in FPGAs,'' Proc. of 2020 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), pp. 1-9, Virtual, Nov. 2-5, 2020.
[22] X. Ju, B. Fang, R. Yan, X. Xu, and H. Tang, ``An FPGA implementation of deep spiking neural networks for low-power and fast classification,'' Neural Computation, vol. 32, no. 1, pp. 182-204, Jan. 2020.
[23] S. Yang, J. Wang, B. Deng, C. Liu, H. Li, C. Fietkiewicz, and K. A. Loparo, ``Real-time neuromorphic system for large-scale conductance-based spiking neural networks,'' IEEE Transactions on Cybernetics, vol. 49, no. 7, pp. 2490-2503, Apr. 2018.
Yeji Lee

Yeji Lee received the B.S. degree in electrical engineering from Hankyoung National University, Anseong, Korea, in 2024. She is currently pursuing an M.S. degree at Ewha Womans University. Her current research interests include resistive-switching random-access memory (RRAM), synaptic devices for neuromorphic systems, low-power neuron circuits, and CMOS integrated fabrication. She is a Student Member of the Institute of Electronics and Information Engineers (IEIE) of Korea.

Arati Kumari Shah

Arati Kumari Shah received her B.Tech. and M.Tech. degrees in electronics and communication engineering from the North Eastern Regional Institute of Science and Technology, Arunachal Pradesh, India, in 2017 and 2019, respectively. She received her Ph.D. degree from Gachon University, Seongnam, South Korea, in 2024. She also worked as a Postdoctoral Researcher at Ewha Womans University, South Korea, where she continued her research on CMOS technologies and integrated circuits. Dr. Shah is currently working as a Senior Research Engineer at Philophos Inc., South Korea. She received the Gold Medal for the highest score during her master's degree. Her research interests include memory devices, neuron circuits for spiking neural networks, and circuit designs optimized for various synapse arrays.

Myounggon Kang

Myounggon Kang received the Ph.D. degree from the Department of Electrical and Computer Engineering, Seoul National University, Seoul, Korea, in 2012. From 2005 to 2015, he worked as a Senior Engineer on the Flash Design Team of Samsung Electronics. In 2015, he joined Korea National University of Transportation as a Professor in the Department of Electronics Engineering. His current research interests are CMOS device modeling and memory circuit design. He has been working as a Professor at the Department of Intelligent Semiconductor Engineering, School of Advanced Fusion Studies, University of Seoul, Seoul, Korea, since 2023.

Seongjae Cho

Seongjae Cho received his B.S. and Ph.D. degrees in electrical engineering from Seoul National University, Seoul, Korea, in 2004 and 2010, respectively. He worked as an Exchange Researcher at the National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan, in 2009. He worked as a Postdoctoral Researcher at Seoul National University in 2010 and at Stanford University, Palo Alto, CA, from 2010 to 2013. He also worked as a faculty member in the Department of Electronic Engineering, Gachon University, from 2013 to 2023. He has been working as an Associate Professor at the Division of Convergence Electronic and Semiconductor Engineering, Ewha Womans University, Seoul, Korea, since 2023. His current interests include emerging memory devices, advanced nanoscale CMOS devices, optical interconnect devices, and novel devices for future computing.