
  1. (Research Scholar, Department of Electrical and Electronics Engineering, Annamalai University, Tamil Nadu 608002, India)
  2. (Professor, Department of Electrical and Electronics Engineering, Faculty of Engineering and Technology, Annamalai University, Chidambaram, Tamil Nadu, India)
  3. ( Managing Director and Liver Transplant Surgeon, Department of HPB & Liver Transplantation, RPS Hospitals, Chennai, Tamil Nadu, India)
  4. (Assistant professor, Department of Biotechnology, Saveetha School of Engineering, Chennai, Tamil Nadu, India )



Keywords: Binarized spiking neural networks, Color harmony algorithm, Color Wiener filtering, Improved non-subsampled Shearlet transform, Liver cancer

1. Introduction

Cancer is a harmful disease that can occur in many human organs, such as the liver, brain, and kidneys [1]. The liver plays a vital role in the human body, performing functions such as producing proteins, detoxifying drugs in the blood, and purifying the blood [2]. Liver cancer is a frightening disease with many causes, including alcohol consumption, smoking, eating junk food, and excess weight; therefore, detecting liver cancer in its early stages leads to better outcomes [3]. Liver cancer may appear hypo-dense or hyper-dense, where a hypo-dense liver appears darker and a hyper-dense liver appears brighter. Moreover, the category of liver cancer varies from patient to patient. The two stages of liver cancer are primary and secondary [4]. Liver cancer is diagnosed by three methods: biopsy, imaging tests, and blood tests [5].

Image processing is growing in popularity and can be utilized across various industries, from production control to safety systems that evaluate users' biometrics. Computer-aided diagnosis (CAD) is helpful in the medical field [6]. The amount of clinical imagery for medical purposes is growing due to the aging population and the prevalence of modern imaging technologies. Image processing and deep learning techniques are used to address these difficulties in medical imaging [7-11]. This work aims to provide a deep learning-based diagnostic scheme that can be adapted to medical imaging to recognize liver tumors.

The major contributions of this manuscript are summarized below:

· Initially, the liver is separated into four equal regions in the CT and MR images. An identifiable region is grown from a collection of nearby seeds, which permits a region of interest of any feasible size and shape.

· In preprocessing, color Wiener filtering (CWF) removes noise and increases the quality of input CT and MRI images.

· After preprocessing, hybrid-feature data are extracted.

· The improved non-subsampled Shearlet transform (INSST) is applied for optimal hybrid-feature selection.

· The optimal hybrid-feature data set is given to a Binarized Spiking Neural Network (BSNN) optimized with a color harmony algorithm, and efficient classification accuracy is achieved.

The remainder of this manuscript is organized as follows: Section 2 reviews the literature; Section 3 outlines the proposed methodology; Section 4 presents the results; Section 5 concludes the manuscript.

2. Literature Survey

Various studies address deep learning-based liver cancer classification; a few recent studies are reviewed here.

Naeem et al. [7] presented a machine-learning-based hybrid-feature analysis using fused (MR, CT) images for liver cancer classification. Preprocessing of the dataset included applying Gabor filters to reduce noise, and the region of interest (RoI) was identified automatically using Otsu thresholding-based segmentation. The method provides the maximum accuracy but the minimum precision.

Sun et al. [8] suggested deep learning-based classification of liver cancer histopathology images using only global labels. Otsu's technique was used to extract useful patch-level features, which were combined to produce an image-level feature description. The final whole-slide image (WSI) feature was produced by choosing the discriminative characteristics. The suggested method provides a higher AUC but lower accuracy.

Das et al. [9] presented deep learning-based liver cancer identification utilizing the watershed transform with a Gaussian mixture model. Marker-controlled watershed segmentation was first used to separate the liver, and the Gaussian mixture model then segmented the cancer-affected lesion. After tumor segmentation, several texture features were extracted from the segmented area.

Khamparia et al. [10] presented a deep learning-based multi-model ensemble method for predicting neuromuscular illnesses. Initially, feature selection was performed using a genetic algorithm (GA) combined with the Bhattacharya coefficient. Second, the dataset was split into training and testing sets using fivefold cross-validation. The system achieves higher accuracy and lower generalization error.

3. Proposed Methodology

A BSNN optimized with a color harmony algorithm for liver cancer classification (BSNN-CHA-LCC) is proposed to classify liver cancer as normal or abnormal. Initially, a fusion of the liver-cancer MRI [11] and CT-scan [12] datasets is taken, and the input images are given to color Wiener filtering (CWF)-based preprocessing to remove noise and increase the quality of the input CT and MRI images [13]. The preprocessed CT and MRI images are fed as input to INSST [14]-based feature extraction to extract the hybrid feature set of the CT and MRI features [15]. The extracted features are given to the BSNN for classifying the liver cancer as normal or abnormal. Generally, the BSNN does not adopt optimization models to determine the ideal parameters that guarantee exact categorization of liver cancer [16]. Therefore, the color harmony algorithm (CHA) [17] is used to tune the BSNN weight parameters. Fig. 1 shows the overall workflow.

Fig. 1. Block diagram for the BSNN-CHA-LCC Methodology.
../../Resources/ieie/IEIESPC.2023.12.6.502/fig1.png
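To make the workflow of Fig. 1 easier to follow, a minimal end-to-end sketch is given below in Python (the reported implementation is in MATLAB). The function names preprocess_cwf, extract_insst_features, and classify_bsnn_cha are hypothetical placeholders for the stages detailed in Sections 3.2-3.4, and the toy fusion and decision rules are illustrative only, not the authors' method.

```python
import numpy as np

def preprocess_cwf(image: np.ndarray) -> np.ndarray:
    """Placeholder for color Wiener filtering (Section 3.2)."""
    return image

def extract_insst_features(image: np.ndarray) -> np.ndarray:
    """Placeholder for INSST-based hybrid feature extraction (Section 3.3)."""
    return image.ravel()[:16]

def classify_bsnn_cha(features: np.ndarray) -> str:
    """Placeholder for BSNN classification with CHA-tuned weights (Section 3.4)."""
    return "normal" if features.mean() < 0.5 else "abnormal"  # toy rule only

def pipeline(ct_image: np.ndarray, mr_image: np.ndarray) -> str:
    # Fuse the two modalities, denoise, extract features, then classify.
    fused = 0.5 * (ct_image + mr_image)        # simple average fusion (assumption)
    denoised = preprocess_cwf(fused)
    features = extract_insst_features(denoised)
    return classify_bsnn_cha(features)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct, mr = rng.random((512, 512)), rng.random((512, 512))
    print(pipeline(ct, mr))
```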

3.1 Image Acquisition

This study uses CT and MR images to sort a dataset of liver cancer cases into six groups. Two types of liver cancer were gathered in the image dataset: (i) benign, such as hepatocellular adenoma, hemangioma, and cyst; (ii) malignant, such as hepatocellular carcinoma, metastasis, and hepatoblastoma. Siemens Somatom Definition AS 64 and Siemens Essenza 1.5T machines with resolutions of 0.5-0.625 mm and 1-2 mm were used in the radiology department of Bahawal Victoria Hospital in Bahawalpur, Pakistan, to collect the CT and MRI datasets. A dataset containing 1200 $(100\times 6\times 2)$ fused images was obtained. Among these, 100 patients were examined using CT scans and 100 patients using MR images with $(512\times 512)$ resolution.

3.2 Image Preprocessing using Color Wiener Filtering (CWF)

The input CT and MRI images were preprocessed using CWF. The input images suffer from illumination variation and noise, and CWF was used to avoid the influence of these factors on the input image. CWF restores the image after low-pass filtering has blurred it and stripped it of structural information: it preserves important structures and eliminates undesirable, tiny image wrinkles or isolated edges. The restored color space, including additive noise in the RGB original color space, is expressed as Eq. (1):

(1)
$ v_{i}=k_{i}+\eta _{i} $

where $v_{i}$ indicates the value of the preprocessed image, $k_{i}$ denotes the pixel vector of the input image, and $\eta_{i}$ is the noise in the image. $i$ specifies a pixel of the image as $i(a,b)$, where $a$ and $b$ are the directions of the pixel axes; $\overline{v}$ is the mean vector of the recovered image $v_{i}$; $\hat{k}$ implies the pixel vector of the restored image; and $\overline{\hat{k}}$ specifies the mean vector of the restored image. The CWF filtering parameter, $V$, is defined in Eq. (2):

(2)
$ \hat{k}-\overline{\hat{k}}=V\left(v_{i}-\overline{v}\right) $

where $V$ must be computed to minimize the MSE between the input and preprocessed images; the MSE is expressed in Eq. (3):

(3)
$$ M S E=\min \left|\left(\hat{k}-\overline{\hat{k}}-V\left(v_i-\bar{v}\right)\right)^2\right| $$

When there is no correlation with noise, the considered image $V$ in the original image is scaled using Eq. (4):

(4)
$ V=C_{\mathrm{cov}}\left(C_{\mathrm{cov}}+C_{nn}\right)^{-1} $

where $C_{nn}$ is the noise covariance matrix and $C_{\mathrm{cov}}$ is the image covariance matrix. Thus, the input image noise is eliminated, and the preprocessed output is supplied to the feature extraction process.
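As a rough illustration of Eqs. (1)-(4), the sketch below applies a per-channel, locally adaptive Wiener-style estimator in which each pixel is pulled toward a local mean by a gain computed from the local variance and an assumed noise variance. This follows the standard local Wiener formulation rather than the authors' exact CWF implementation; the window size and noise variance are assumed parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def color_wiener_filter(img: np.ndarray, noise_var: float = 0.01, win: int = 5) -> np.ndarray:
    """Per-channel local Wiener estimator: k_hat = mean + gain * (v - mean),
    where the gain plays the role of V in Eqs. (2)-(4)."""
    img = img.astype(np.float64)
    out = np.empty_like(img)
    for c in range(img.shape[2]):                 # process R, G, B channels independently
        v = img[..., c]
        local_mean = uniform_filter(v, win)
        local_sq_mean = uniform_filter(v * v, win)
        local_var = np.maximum(local_sq_mean - local_mean ** 2, 1e-12)
        gain = np.maximum(local_var - noise_var, 0.0) / local_var
        out[..., c] = local_mean + gain * (v - local_mean)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = np.tile(np.linspace(0, 1, 64), (64, 1))[..., None].repeat(3, axis=2)
    noisy = clean + rng.normal(0, 0.1, clean.shape)
    # MSE before vs. after filtering (should decrease on this toy example)
    print(np.mean((noisy - clean) ** 2), np.mean((color_wiener_filter(noisy) - clean) ** 2))
```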

3.3 Feature Extraction Utilizing the Improved Non-subsampled Shearlet Transform (INSST)

In the feature extraction stage, features are extracted from the preprocessed images using the INSST. In this stage, with the aid of the INSST, four different kinds of features are extracted from the MR and CT image data sets: "Co-Occurrence Matrix," "Wavelet," "Run-Length Matrix," and "Histogram" features.

3.3.1 Histogram Features

Histogram features are utilized to characterize the object in relation to rows as well as columns. Some first-order histogram characteristics are the mean, skewness, standard deviation, energy, and entropy. The mean is the average gray-level value, describing the bright and dark regions of the image, as specified in Eq. (5):

(5)
$ \eta =\sum _{i}\sum _{j}\frac{l\left(i,j\right)}{l} $

where $i$ and $j$ index the pixels. The contrast of the picture is determined by the standard deviation, expressed in Eq. (6):

(6)
$ \sigma _{\varepsilon }=\sqrt{\sum _{\varepsilon =0}^{Q-1}\left(\varepsilon -\overline{\varepsilon }\right)^{2}\psi \left(\varepsilon \right)} $

Skewness is the degree of asymmetry around the central value when the distribution is not symmetric; it is denoted by $\gamma$ and articulated in Eq. (7):

(7)
$ \gamma =\frac{1}{\sigma _{\varepsilon }^{3}}\sum _{\varepsilon =0}^{Q-1}\left(\varepsilon -\overline{\varepsilon }\right)^{3}\psi \left(\varepsilon \right) $

Energy, denoted $\vartheta$, characterizes the gray-level distribution, as shown in Eq. (8).

(8)
$ \vartheta =\sum _{\varepsilon =0}^{Q-1}\left[\psi \left(\varepsilon \right)\right]^{2} $
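A minimal sketch of the first-order histogram features of Eqs. (5)-(8) is given below, computed from the normalized gray-level histogram $\psi(\varepsilon)$; the number of gray levels and the histogram normalization are assumptions.

```python
import numpy as np

def histogram_features(img: np.ndarray, levels: int = 256) -> dict:
    """First-order statistics from the normalized histogram psi(eps), cf. Eqs. (5)-(8)."""
    gray = img.astype(np.float64).ravel()
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    psi = hist / hist.sum()                       # probability of each gray level
    eps = np.arange(levels, dtype=np.float64)     # gray-level values
    mean = np.sum(eps * psi)                                      # Eq. (5)
    std = np.sqrt(np.sum((eps - mean) ** 2 * psi))                # Eq. (6)
    skew = np.sum((eps - mean) ** 3 * psi) / (std ** 3 + 1e-12)   # Eq. (7)
    energy = np.sum(psi ** 2)                                     # Eq. (8)
    entropy = -np.sum(psi[psi > 0] * np.log2(psi[psi > 0]))
    return {"mean": mean, "std": std, "skewness": skew, "energy": energy, "entropy": entropy}

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    print(histogram_features(rng.integers(0, 256, (64, 64))))
```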

3.3.2 Co-occurrence Matrix Features

Texture features were determined in four directions with a maximum pixel distance of five. The entropy, inertia, correlation, inverse difference, energy, and eleven other second-order co-occurrence matrix (COM) characteristics were collected. Energy, determined by the distribution among the gray-level values $\zeta$, is expressed first in Eq. (9):

(9)
$ Energy=\sum _{i}\sum _{j}\left(\zeta _{i,j}\right)^{2} $

Eq. (10) defines the correlation as the pixel similarity at a specific pixel distance.

(10)
$ Correlation=\frac{1}{\sigma _{\alpha }\sigma _{\beta }}\sum _{a}\sum _{b}\left(a-\mu _{\alpha }\right)\left(b-\mu _{\beta }\right)\zeta _{i,j} $
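The sketch below builds a small gray-level co-occurrence matrix and evaluates the energy and correlation of Eqs. (9) and (10). The single-pixel horizontal offset and the number of quantization levels are assumptions, and only two of the second-order features listed above are shown.

```python
import numpy as np

def glcm(img: np.ndarray, levels: int = 8, offset=(0, 1)) -> np.ndarray:
    """Normalized gray-level co-occurrence matrix for one pixel offset (dy, dx)."""
    q = (img.astype(np.float64) / max(img.max(), 1) * (levels - 1)).astype(int)
    dy, dx = offset
    a = q[max(0, -dy):q.shape[0] - max(0, dy), max(0, -dx):q.shape[1] - max(0, dx)]
    b = q[max(0, dy):, max(0, dx):][:a.shape[0], :a.shape[1]]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)       # count co-occurring gray-level pairs
    return m / m.sum()

def glcm_energy_correlation(zeta: np.ndarray):
    i, j = np.indices(zeta.shape)
    energy = np.sum(zeta ** 2)                                           # Eq. (9)
    mu_a, mu_b = np.sum(i * zeta), np.sum(j * zeta)
    sd_a = np.sqrt(np.sum((i - mu_a) ** 2 * zeta))
    sd_b = np.sqrt(np.sum((j - mu_b) ** 2 * zeta))
    corr = np.sum((i - mu_a) * (j - mu_b) * zeta) / (sd_a * sd_b + 1e-12)  # Eq. (10)
    return energy, corr

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    print(glcm_energy_correlation(glcm(rng.integers(0, 256, (64, 64)))))
```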

3.3.3 Run-length Matrix Features

The gray-level run-length matrix (GLRM) describes the texture in terms of run lengths. Let $l_{h}$ denote the number of discrete intensity values in the image, $L_{q}$ the number of voxels in the image, $L_{s}$ the number of runs in the image at angle $\mu$, and $\psi(a,b|\mu)$ the run-length matrix for an arbitrary direction $\mu$. The short-run emphasis (SRE) is expressed in Eq. (11):

(11)
$ SRE=\frac{\sum _{a=0}^{l_{h}}\sum _{b=0}^{L_{s}}\frac{\psi \left(a,b|\mu \right)}{b^{2}}}{L_{s}\left(\mu \right)} $
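The following illustrative computation builds a horizontal run-length matrix and evaluates the short-run emphasis of Eq. (11); only the 0-degree direction is shown, and the quantization level is an assumption rather than the paper's setting.

```python
import numpy as np

def glrlm_horizontal(img: np.ndarray, levels: int = 8) -> np.ndarray:
    """Run-length matrix psi(a, b): rows index gray level, columns index run length."""
    q = (img.astype(np.float64) / max(img.max(), 1) * (levels - 1)).astype(int)
    rlm = np.zeros((levels, q.shape[1]))
    for row in q:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                rlm[run_val, run_len - 1] += 1   # close the finished run
                run_val, run_len = v, 1
        rlm[run_val, run_len - 1] += 1           # close the last run of the row
    return rlm

def short_run_emphasis(rlm: np.ndarray) -> float:
    """SRE = (sum_a sum_b psi(a,b)/b^2) / L_s, cf. Eq. (11)."""
    b = np.arange(1, rlm.shape[1] + 1, dtype=np.float64)
    return np.sum(rlm / b ** 2) / rlm.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    print(short_run_emphasis(glrlm_horizontal(rng.integers(0, 256, (32, 32)))))
```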

3.3.4 Wavelet Features

The discrete wavelet transform is a linear operation applied to a $2^{n}\times 2^{n}$ image matrix, where $n$ is a positive integer. The wavelet energy is calculated using Eq. (12):

(12)
$ Energy=\lambda ^{-1}\sum _{a,b\in Region\,\,of\,\,intrest}{w_{a,b}}^{2} $

where $w_{a,b}$ is the wavelet coefficient at pixel $(a,b)$, the summation runs over every pixel $(a,b)$ located in the region of interest, and $\lambda$ is the total number of pixels.
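A short sketch of the wavelet energy feature of Eq. (12) is shown below using a single-level Haar decomposition. PyWavelets (pywt) is assumed to be available, and the choice of wavelet and of one decomposition level is illustrative rather than the paper's configuration.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_energy(img: np.ndarray, wavelet: str = "haar") -> dict:
    """Mean squared coefficient per sub-band: lambda^{-1} * sum(w_{a,b}^2), cf. Eq. (12)."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float64), wavelet)
    bands = {"approx": cA, "horizontal": cH, "vertical": cV, "diagonal": cD}
    return {name: float(np.sum(w ** 2) / w.size) for name, w in bands.items()}

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    print(wavelet_energy(rng.random((64, 64))))
```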

3.4 Liver Cancer Classification based on Binarized Spiking Neural Networks

The features extracted using the INSST approach were given to a BSNN for classification. The BSNN classifies liver cancer from the extracted features of the CT and MR images; its weights are binarized as $c=s\left(x\right)$, where $s$ denotes the sign function. To classify liver cancer, the intensity range (IR) is $\left[0,IR\left(Max\right)\right]$, and the firing time of the $b^{th}$ input feature under intensity $IR_{b}$, denoted $input_{b}$, is articulated in Eq. (13):

(13)
$ input_{b}=\left[\frac{IR\left(Max\right)-IR_{b}}{IR\left(Max\right)}FT\left(Max\right)\right] $

Here, $FT$ denotes the firing time; reducing the input neuron latency lowers the classification time. Subsequently, the spike of the $b^{th}$ input neuron is expressed in Eq. (14):

(14)
$ Sp_{b}^{0}\left(FT\right)=\begin{cases} 1, & \text{if } FT=FT_{b}\\ 0, & \text{otherwise} \end{cases} $

The firing neuron of the $l^{th}$ layer accepts input features of $-1$ or $+1$ to determine the synaptic weights. Subsequently, liver cancer is predicted as normal using $X_{a}^{l}\left(ft\right)_{FA}$, as expressed in Eq. (15):

(15)
$ X_{a}^{l}\left(ft\right)_{FA}=X_{a}^{l}\left(ft-1\right)_{FA}+\beta ^{l}\sum _{b}c_{ab}^{l}SP_{b}^{l-1}\left(ft\right) $

where $c_{ab}^{l}$ denotes the binary synaptic weight linking the $b^{th}$ neuron of layer $l-1$ to the $a^{th}$ neuron of layer $l$, $SP_{b}^{l-1}$ denotes the input spike pattern, and $\beta^{l}$ is a scaling factor attached to each neuron for raising accuracy. The spike generation is given in Eq. (16):

(16)
$ SP_{b}^{l-1}\left(ft\right)=\begin{cases} 1, & \text{if } Sp_{b}^{0}\left(FT\right)\geq \varphi \text{ and } SP_{b}^{l-1}\left(<ft\right)\neq 1\\ 0, & \text{otherwise} \end{cases} $

where $SP_{b}^{l}\left(<ft\right)\neq 1$ indicates that the neuron has not fired in a prior time step. The threshold $\varphi$, together with the scaling factor, governs when a neuron fires. The weight parameters are $CP$ and $\chi$, where $CP$ raises the accuracy and $\chi$ diminishes the error rate. The proposed BSNN proficiently classifies liver cancer as normal or abnormal. The weight parameters $CP$ and $\chi$ of the BSNN were optimized by the CHA to derive a more exact categorization.
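To make Eqs. (13)-(16) concrete, the sketch below shows intensity-to-latency coding and the forward pass of a single binarized integrate-and-fire layer. The threshold value, scaling factor, and random ±1 weights are assumptions, and no CHA training is included; this is an illustrative sketch, not the authors' full BSNN.

```python
import numpy as np

def intensity_to_firing_time(intensities: np.ndarray, ir_max: float, ft_max: int) -> np.ndarray:
    """Latency coding: brighter inputs fire earlier, cf. Eq. (13)."""
    return np.floor((ir_max - intensities) / ir_max * ft_max).astype(int)

def bsnn_layer_forward(firing_times: np.ndarray, weights: np.ndarray,
                       beta: float, threshold: float, ft_max: int) -> np.ndarray:
    """One layer of binarized (±1-weight) integrate-and-fire neurons, cf. Eqs. (14)-(16).
    Returns each output neuron's first firing time (ft_max + 1 if it never fires)."""
    n_out = weights.shape[0]
    potential = np.zeros(n_out)                  # membrane potential X_a^l(ft)
    out_time = np.full(n_out, ft_max + 1)        # "not fired yet"
    for ft in range(ft_max + 1):
        spikes = (firing_times == ft).astype(float)                 # Sp_b(ft), Eq. (14)
        potential += beta * weights @ spikes                        # Eq. (15)
        fire_now = (potential >= threshold) & (out_time > ft_max)   # fire at most once, Eq. (16)
        out_time[fire_now] = ft
    return out_time

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    x = rng.integers(0, 256, 16)                      # 16 input intensities
    ft_in = intensity_to_firing_time(x, ir_max=255.0, ft_max=10)
    w = rng.choice([-1.0, 1.0], size=(2, 16))         # binarized synaptic weights c_ab
    print(bsnn_layer_forward(ft_in, w, beta=0.5, threshold=2.0, ft_max=10))
```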

3.4.1 Step by Step Procedure of Color Harmony Algorithm for Optimizing the Weight Parameter of BSNN

The CHA maximizes the BSNN performance by increasing the liver cancer classification accuracy. Therefore, the proposed BSNN method provides appropriate liver cancer classification and also reduces the error probability. The step-by-step procedure of the CHA is expressed as follows:

Step 1: Initialization

Initially, every candidate solution is viewed as a color holding various decision variables. Colors are initially dispersed at random throughout the search space and are expressed as Eq. (17):

(17)
$ Y^{0}\left(a,b\right)=Y^{L}\left(a\right)+rand\cdot \left(Y^{U}\left(a\right)-Y^{L}\left(a\right)\right) $

where $a=1,2,\ldots ,T_{V}$ and $b=1,2,\ldots ,T_{C}$; $Y^{0}\left(a,b\right)$ indicates the initial value of the $a^{th}$ variable of the $b^{th}$ color; $Y^{L}\left(a\right)$ and $Y^{U}\left(a\right)$ indicate the lower and upper bounds of the $a^{th}$ variable; $rand$ denotes a uniformly distributed random number in the interval $\left[0,1\right]$; and $T_{V}$ and $T_{C}$ indicate the total numbers of variables and colors, respectively.

Step 2: Random Generation

The first chroma-zone color distribution is carried out after producing the initial population. Initially, colors are arranged in ascending order of their fitness values.

Step 3: Fitness Function

The fitness function of the problem is calculated by comparing the values of each potential solution against the objective function variables. The values acquired for the objective function are expressed in Eq. (18):

(18)
$ Fitness\,Function=Optimization\left[CP\,and\,\chi \right] $

Step 4: Concentration Phase

This stage focuses on exploration and exploitation of the search space and employs a selection scheme to balance local and global searches. The goal is to increase $CP$ and decrease $\chi$.

Step 5: Updating the hue circle in the dispersion phase for optimizing $\chi$ of the BSNN

After this phase, the hue circle is updated by adding new colors that carry helpful fitness-related data, through which the parameter $\chi$ is optimized. This process is calculated using Eq. (19):

(19)
$ S_{Y}^{new}\left(a,b\right)=d_{cm}S_{CM}\left(a,b\right)+\left(1-d_{cm}\right)S_{Y}\left(a,b\right) $

where $a=1,2,\ldots ,T_{V}$ and $b=1,2,\ldots ,T_{S}$; $S_{CM}$ and $S_{Y}$ indicate the optimized parameters of the BSNN; $T_{S}$ is the number of replaced colors; and $d_{cm}$ indicates a uniformly distributed random number.

Step 6: Termination

Finally, the proposed BSNN with the CHA categorizes liver cancer as normal or abnormal with greater accuracy. If the optimal solution is achieved, the iteration is stopped; if not, steps 1 to 3 are repeated with $y=y+1$ until the halting criterion is fulfilled. Fig. 2 shows the flowchart of the color harmony algorithm.

Fig. 2. Flowchart of the Color Harmony Algorithm.
../../Resources/ieie/IEIESPC.2023.12.6.502/fig2.png
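The sketch below mirrors Steps 1-6 in a generic form: random initialization per Eq. (17), fitness evaluation per Eq. (18), and an Eq. (19)-style blending update toward the current best color. The blending constant, population size, dispersion noise, and the simple "distance to a target weight vector" objective standing in for the BSNN accuracy/error criterion are all assumptions, so this is an illustrative metaheuristic loop rather than the authors' full CHA.

```python
import numpy as np

def color_harmony_optimize(fitness, lower, upper, n_colors=20, iters=100, seed=0):
    """Generic CHA-style loop: initialize colors (Eq. (17)), rank by fitness,
    and pull each color toward the best one with a random mix d_cm (cf. Eq. (19))."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    colors = lower + rng.random((n_colors, lower.size)) * (upper - lower)   # Eq. (17)
    for _ in range(iters):
        scores = np.array([fitness(c) for c in colors])
        best = colors[np.argmin(scores)]                  # best color (minimization)
        d_cm = rng.random((n_colors, 1))
        colors = d_cm * best + (1.0 - d_cm) * colors      # Eq. (19)-style blending update
        colors += rng.normal(0, 0.01, colors.shape)       # small dispersion for exploration
        colors = np.clip(colors, lower, upper)
    scores = np.array([fitness(c) for c in colors])
    return colors[np.argmin(scores)], scores.min()

if __name__ == "__main__":
    # Toy stand-in for the BSNN objective: reach an (assumed) target weight vector.
    target = np.array([0.7, -0.3])
    best, score = color_harmony_optimize(lambda w: np.sum((w - target) ** 2),
                                         lower=[-1, -1], upper=[1, 1])
    print(best, score)
```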

4. Results and Discussion

The proposed BSNN-CHA-LCC was implemented in MATLAB on a PC with an Intel Core i5 2.50 GHz CPU and 8 GB of RAM. The metrics mentioned below were analyzed, and the performance was compared with the existing MLP-LCC [7], mask-RCNN-LCC [8], and DNN-GMM-LCC [9] models.

4.1 Performance Measures

These metrics were used to examine the robustness of the proposed BSNN-CHA-LCC technique and were measured from the following confusion-matrix terms.

· True Positive ($A_{Q}$): normal accurately classified as normal.

· True Negative ($A_{M}$): abnormal accurately classified as abnormal.

· False Positive ($B_{Q}$): abnormal inaccurately classified as normal.

· False Negative ($B_{M}$): normal inaccurately classified as abnormal.

4.1.1 Accuracy

Accuracy is the ratio of correctly classified samples to the total number of predictions and is measured using Eq. (20):

(20)
$ Accuracy=\frac{\left(A_{M}+A_{Q}\right)}{\left(A_{M}+A_{Q}+B_{M}+B_{Q}\right)} $

4.1.2 Precision

Precision is the mean of the average precision (AP) over every category and measures the network model training. The precision was calculated using Eq. (21):

(21)
$ \text{Precision }=\frac{1}{m}\sum _{i=1}^{m}e_{i} $

where $e_{i}$ represents the average precision of the $i^{th}$ region and $m$ is the number of regions.

4.1.3 Specificity

Specificity is the metric that measures the true-negative rate of every class. The specificity was scaled using Eq. (22):

(22)
$ Specificity=\frac{A_{M}}{\left(B_{Q}+A_{M}\right)} $

4.1.4 Sensitivity

Sensitivity is the metric that measures the true-positive rate of every class. The sensitivity was scaled using Eq. (23):

(23)
$Sensitivity=\frac{A_{Q}}{(A_{Q}+B_{M})}$

4.1.5 ROC

The ROC curve displays the true positive rate (sensitivity) versus the false positive rate (1 − specificity) for different classification score thresholds. ROC is expressed as Eq. (24):

(24)
$ ROC=0.5\times \left(\frac{A_{Q}}{\left(A_{Q}+B_{M}\right)}+\frac{A_{M}}{\left(B_{Q}+A_{M}\right)}\right) $
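For reference, the metrics of Eqs. (20)-(24) can be evaluated directly from confusion-matrix counts as sketched below. The counts in the usage example are hypothetical, and precision is shown in its standard TP/(TP+FP) form, which differs from the mean-AP form of Eq. (21).

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Confusion-matrix metrics; A_Q=tp, A_M=tn, B_Q=fp, B_M=fn in the notation above."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)          # Eq. (20)
    precision = tp / (tp + fp)                          # standard form (cf. Eq. (21))
    sensitivity = tp / (tp + fn)                        # Eq. (23)
    specificity = tn / (tn + fp)                        # Eq. (22)
    balanced = 0.5 * (sensitivity + specificity)        # Eq. (24), balanced-accuracy form
    error_rate = 1.0 - accuracy
    return {"accuracy": accuracy, "precision": precision, "sensitivity": sensitivity,
            "specificity": specificity, "roc_balanced": balanced, "error_rate": error_rate}

if __name__ == "__main__":
    # Hypothetical counts for illustration only.
    print(classification_metrics(tp=95, tn=90, fp=5, fn=10))
```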

4.2 Performance Analysis

Figs. 4-10 present the evaluation results of the proposed BSNN-CHA-LCC method. The acquired outcomes of the proposed BSNN-CHA-LCC are compared with those of the existing MLP-LCC, mask-RCNN-LCC, and DNN-GMM-LCC models. Fig. 3 shows the output image for liver cancer classification.

Fig. 3. Liver cancer classification output.
../../Resources/ieie/IEIESPC.2023.12.6.502/fig3.png

Fig. 4 presents the accuracy analysis. The proposed method achieved 31.88%, 29.75%, and 21.16% higher accuracy for normal liver cancer and 28.86%, 36.79%, and 34.33% better accuracy for abnormal liver cancer compared with the existing MLP-LCC, mask-RCNN-LCC, and DNN-GMM-LCC methods, respectively.

Fig. 4. Accuracy analysis.
../../Resources/ieie/IEIESPC.2023.12.6.502/fig4.png

Fig. 5 depicts precision analysis. Here, the proposed BSNN-CHA-LCC achieved 39.24%, 21.25%, and 22.29% higher precision for the normal liver cancer classification, and 13.67%, 23.16%, and 29.65% greater precision for the abnormal liver cancer classification compared to the existing MLP-LCC, mask-RCNN-LCC, and DNN-GMM-LCC, respectively.

Fig. 5. Precision analysis.
../../Resources/ieie/IEIESPC.2023.12.6.502/fig5.png

Fig. 6 displays specificity analysis. The proposed BSNN-CHA-LCC method achieved 22.25%, 29.16%, and 21.33% higher specificity for a normal liver cancer classification and 33.22%, 28.19%, and 31.20% greater specificity for an abnormal liver cancer classification compared to the existing MLP-LCC, mask-RCNN-LCC, and DNN-GMM-LCC models, respectively.

Fig. 6. Specificity analysis.
../../Resources/ieie/IEIESPC.2023.12.6.502/fig6.png

Fig. 7 shows the sensitivity results of the liver cancer classification. The proposed BSNN-CHA-LCC method attained 31.21%, 29.51%, and 31.11% higher sensitivity for normal liver cancer classification and 31.26%, 33.51%, and 31.61% greater sensitivity for abnormal liver cancer compared with the existing MLP-LCC, mask-RCNN-LCC, and DNN-GMM-LCC models, respectively.

Fig. 7. Sensitivity analysis.
../../Resources/ieie/IEIESPC.2023.12.6.502/fig7.png

Fig. 8 represents the F-scores results of the liver cancer classification. The proposed BSNN-CHA-LCC method achieves 29.01%, 27.74%, and 30.11% higher F-scores for benign data; 28.98%, 30.03%, and 33.44% better F-scores for malignant data for liver cancer than the existing MLP-LCC, mask-RCNN-LCC and DNN-GMM-LCC models, respectively.

Fig. 8. F-score analysis.
../../Resources/ieie/IEIESPC.2023.12.6.502/fig8.png

Fig. 9 represents the ROC curve. The overall accuracy is higher when the ROC curve is close to the upper left corner. The proposed BSNN-CHA-LCC method achieved 28.21%, 25.05%, and 22.11% higher ROCs for liver cancer than the existing MLP-LCC, mask-RCNN-LCC, and DNN-GMM-LCC models, respectively.

Fig. 9. Performance analysis of the ROC.
../../Resources/ieie/IEIESPC.2023.12.6.502/fig9.png

Fig. 10 outlines the error-rate results of the liver cancer classification. The proposed BSNN-CHA-LCC method achieved 23.14%, 25.30%, and 21.44% lower error rates for the normal liver cancer classification and 30.26%, 24.09%, and 22.78% lower error rates for the abnormal liver cancer classification compared with the existing MLP-LCC, mask-RCNN-LCC, and DNN-GMM-LCC models, respectively.

Fig. 10. Performance analysis of the Error Rate.
../../Resources/ieie/IEIESPC.2023.12.6.502/fig10.png

In this study, the proposed BSNN-CHA-LCC method was used to improve the accuracy of the system, and the BSNN was used to increase the classification accuracy. One of the main advantages of this approach is that it improves the system accuracy without demanding substantial additional resources. On the other hand, it will be crucial to assess the success of this strategy in various contexts to identify any potential flaws and suggest areas for development. Table 1 lists that the BSNN-CHA-LCC model provided 4.52%, 9.45%, 11.45%, and 11.94% higher accuracy than the methods reported by Naeem et al. [7], Sun et al. [8], Das et al. [9], and Khamparia et al. [10], respectively. The BSNN-CHA-LCC attains 22.33%, 28.70%, 14.56%, and 14.79% better precision than Naeem et al. [7], Sun et al. [8], Das et al. [9], and Khamparia et al. [10], respectively. The proposed BSNN-CHA-LCC achieved 26.07%, 14.36%, 33.67%, and 14.32% higher sensitivity than the methods reported by Naeem et al. [7], Sun et al. [8], Khamparia et al. [10], and Alirr [18], respectively. The proposed BSNN-CHA-LCC attained 9.80%, 14.58%, 11.21%, and 12.31% higher specificity than the methods reported by Naeem et al. [7], Sun et al. [8], Das et al. [9], and Khamparia et al. [10], respectively. The BSNN-CHA-LCC achieves 12.54%, 38.56%, 23.67%, and 30.22% better F-measures than the existing methods of Naeem et al. [7], Sun et al. [8], Khamparia et al. [10], and Alirr [18], respectively. The proposed BSNN-CHA-LCC model provides 11.22%, 22.22%, 45.14%, and 15.21% better ROCs than the methods reported by Naeem et al. [7], Sun et al. [8], Das et al. [9], and Khamparia et al. [10], respectively. The BSNN-CHA-LCC model provided 10.22%, 24.67%, 15.7%, and 18.11% lower error rates than the methods reported by Naeem et al. [7], Sun et al. [8], Khamparia et al. [10], and Alirr [18], respectively.

Table 1. Benchmark comparison.

Methods | Accuracy (%) | Precision (%) | Sensitivity (%) | Specificity (%) | F-measure | ROC | Error rate
Naeem et al. [7] | 95.11 | 94.9 | 91.67 | 92.37 | 91.96 | 0.81 | 4.89
Sun et al. [8] | 98.89 | 87.89 | 84.75 | 93.76 | 73.86 | 0.79 | 1.11
Das et al. [9] | 85.18 | 88.98 | 90.65 | 77.67 | 78.3 | 0.65 | 14.82
Khamparia et al. [10] | 91.85 | 87.105 | - | 89.56 | - | 0.819 | -
Alirr [18] | - | - | 81.76 | - | 80.67 | - | 1.75
BSNN-CHA-LCC (proposed) | 99.88 | 99.39 | 99.59 | 98.69 | 98.59 | 0.99 | 1.28

5. Conclusion

The BSNN-CHA-LCC was successfully implemented to accurately categorize liver cancer as normal or abnormal. The proposed BSNN-CHA-LCC was implemented in MATLAB, and its efficiency was evaluated using several performance metrics. The proposed BSNN-CHA-LCC achieved 41.26%, 73.10%, and 24.12% better recall and 97.73%, 94.89%, and 96.7% better F-measures than the existing MLP-LCC, mask-RCNN-LCC, and DNN-GMM-LCC models, respectively.

REFERENCES

1 
M. Chen, B. Zhang, W. Topatana, J. Cao, H. Zhu, S. Juengpanich, Q. Mao, H. Yu, X. Cai,`` Classification and mutation prediction based on histopathology H&E images in liver cancer using deep learning,'' NPJ precision oncology, vol. 4, no. 1, p. 14, 2020.DOI
2 
A. Kiani, B. Uyumazturk, P. Rajpurkar, A. Wang, R. Gao, E. Jones, Y. Yu, C. P. Langlotz, R. L. Ball, T. J. Montine, B. A. Martin, ``Impact of a deep learning assistant on the histopathologic classification of liver cancer,'' NPJ Digital Medicine, vol. 3, no. 1, p. 23, 2020.DOI
3 
P. Narmatha, S. Gupta, T. V. Lakshmi, D. Manikavelan, ``Skin cancer detection from dermoscopic images using Deep Siamese domain adaptation convolutional Neural Network optimized with Honey Badger Algorithm,'' Biomedical Signal Processing and Control, vol. 86, no. 1, p. 105264, 2023.DOI
4 
M. Fujita, R. Yamaguchi, T. Hasegawa, S. Shimada, K. Arihiro, S. Hayashi, K. Maejima, K. Nakano, A. Fujimoto, A. Ono, H. Aikata, ``Classification of primary liver cancer with immunosuppression mechanisms and correlation with genomic alterations,'' EBioMedicine, vol. 53, no. 1, p. 102659, 2020.DOI
5 
A. Krishan, D. Mittal, ``Ensembled liver cancer detection and classification using CT images,'' Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine, vol. 235, no. 2, pp. 232-244, 2021.DOI
6 
M. Shahram, K. Nayebi, ``ECG beat classification based on a cross-distance analysis,'' In Proceedings of the Sixth International Symposium on Signal Processing and its Applications (Cat. No. 01EX467) (Vol. 1, pp. 234-237). IEEE. 2001.DOI
7 
S. Naeem, A. Ali, S. Qadri, W. Khan Mashwani, N. Tairan, H. Shah, M. Fayaz, F. Jamal, C. Chesneau, S. Anam, ``Machine-learning based hybrid-feature analysis for liver cancer classification using fused (MR and CT) images,'' Applied Sciences, vol. 10, no. 9, p. 3134, 2020.DOI
8 
C. Sun, A. Xu, D. Liu, Z. Xiong, F. Zhao, W. Ding, ``Deep learning-based classification of liver cancer histopathology images using only global labels'', IEEE journal of biomedical and health informatics, vol. 24, no. 6, pp. 1643-1651, 2019.DOI
9 
A. Das, U.R. Acharya, S.S. Panda, S. Sabut,``Deep learning based liver cancer detection using watershed transform and Gaussian mixture model techniques,'' Cognitive Systems Research, vol. 54, no. 1, pp. 165-175, 2019.DOI
10 
A. Khamparia, A. Singh, D. Anand, D. Gupta, A. Khanna, N. Arun Kumar, J. Tan,``A novel deep learning-based multi-model ensemble method for the prediction of neuromuscular disorders,'' Neural computing and applications, vol. 32, no. 1, pp. 11083-11095, 2020.DOI
11 
https://figshare.com/articles/dataset/MedSeg_Liver_segments_dataset/13643252URL
12 
https://www.kaggle.com/datasets/andrewmvd/liver-tumor-segmentation-part-2URL
13 
M.P. Kumar, B. Poornima, H.S. Nagendraswamy, C. Manjunath, ``Structure-preserving NPR framework for image abstraction and stylization,'' The Journal of Supercomputing, vol. 77, no. 8, pp. 8445-8513, 2021.DOI
14 
F. Shen, Y. Wang, C. Liu, ``Change detection in SAR images based on improved non-subsampled Shearlet transform and multi-scale feature fusion CNN,'' IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, no. 1, pp. 12174-12186, 2021.DOI
15 
M. Suganthy, S. Ashok, A. Uma Maheswari, T. D. Subha, ``Recalling-enhanced recurrent neural network optimized with woodpecker mating algorithm for brain tumor classification,'' Concurrency and Computation: Practice and Experience, vol. 1, no. 1, p. e7729, 2023.DOI
16 
S. R. Kheradpisheh, M. Mirsadeghi, T. Masquelier, ``BS4NN: Binarized spiking neural networks with temporal coding and learning,'' Neural Processing Letters, vol. 54, no. 2, pp. 1255-1273, 2022.DOI
17 
M. Zaeimi, A. Ghoddosian, ``Color harmony algorithm: an art-inspired metaheuristic for mathematical function optimization,'' Soft Computing, vol. 24, no. 16, pp. 12027-12066, 2020.DOI
18 
O.I. Alirr, “Deep learning and level set approach for liver and tumor segmentation from CT scans,” Journal of Applied Clinical Medical Physics, vol. 21, no. 10, pp. 200-209, 2020.DOI

Author

Pushpa Balakrishnan
../../Resources/ieie/IEIESPC.2023.12.6.502/au1.png

Pushpa Balakrishnan is currently working as a Research Scholar in the Department of Electrical and Electronics Engineering, Annamalai University, Tamil Nadu 608002, India.

B. Baskaran
../../Resources/ieie/IEIESPC.2023.12.6.502/au2.png

B. Baskaran is currently working as a professor in the department of Electrical and Electronics Engineering, Faculty of Engineering and Technology, Annamalai University, Chidambaram, Tamil Nadu, India.

S. Vivekanandan
../../Resources/ieie/IEIESPC.2023.12.6.502/au3.png

S. Vivekanandan is currently working as a Managing Director and Liver Transplant Surgeon, in the Department of HPB & Liver Transplantation, RPS Hospitals, Chennai, Tamil Nadu, India.

P. Gokul
../../Resources/ieie/IEIESPC.2023.12.6.502/au4.png

P. Gokul is currently working as an Assistant Professor in the Department of Biotechnology, Saveetha School of Engineering, Chennai, Tamil Nadu, India.