3.1 System Reconstruction Preprocessing
The BP neural network algorithm is widely used, and models built on it achieve good fusion performance, which has attracted considerable attention. Based on sample data, the neural network mines the nonlinear relationship between an enterprise's own information data and the target object, continuously adjusts the network parameters through the back-propagation algorithm, and stores the knowledge learned during training in those parameters. New input data are then fused according to this learned knowledge to obtain the final fusion result [16,17]. Because enterprise information data are voluminous, and management levels are complex with multiple dimensions, this study employed principal component analysis (PCA) to reduce the dimensionality of the enterprise information samples. This minimizes the correlation among the information and reduces the number of neural network input nodes, thus simplifying the network structure [18]. The firework algorithm (FWA) was then used to adjust and optimize the parameters of the neural network. This step improves the network's calculation speed and boosts the fidelity of the model's fusion results. The fusion flow of the constructed model is shown in Fig. 2.
Fig. 1. Dimensionality reduction steps in the PCA method.
Fig. 2. Evaluation model based on the BP neural network.
The flowchart in Fig. 1 shows the specific dimension reduction steps of PCA. For the data collected by a
total of $g$ information management systems within the same time period over $n$ units
of time, a sample matrix, $X$, with $n$ rows and $g$ columns can be composed, and formula
(1) transforms this sample.
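For reference, the standard z-score transformation consistent with the definitions of $\overline{x_{j}}$ and $s_{j}^{2}$ given below is:

$\mu _{ij}=\frac{x_{ij}-\overline{x_{j}}}{s_{j}}$ (1)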
In formula (1), $\mu _{ij}$ represents an element of the parameter matrix obtained after the sample matrix transformation;
$x_{ij}$ represents a sample value; and $\overline{x_{j}}$ is the sample mean. With $\overline{x_{j}}=\frac{1}{n}\sum
_{i=1}^{n}x_{ij}$ and $s_{j}^{2}=\frac{1}{n-1}\sum _{i=1}^{n}\left(x_{ij}-\overline{x_{j}}\right)^{2}$,
the standardized matrix, $U$, is obtained. The correlation coefficient
matrix of the standardized matrix is solved as shown in formula (2):
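A standard form of this correlation coefficient matrix, reconstructed from the standardized matrix, $U$, is:

$R=\frac{1}{n-1}U^{T}U$ (2)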
In formula (2), $R$ is the correlation coefficient matrix, and its characteristic equation is then solved. See formula (3) for details:
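The characteristic equation takes the standard form (with $I$ the identity matrix, a notation assumed here):

$\left| R-\lambda I\right| =0$ (3)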
In formula (3), $m$ represents the number of eigenvalues obtained when the characteristic equation is solved. Then,
the eigenvalues are sorted in descending order, and the eigenvectors corresponding
to the first $p$ eigenvalues are selected and combined into a transformation matrix.
The expression for $A^{T}$ is shown in formula (4):
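Assuming the usual construction from the selected eigenvectors, $A^{T}$ would take the form:

$A^{T}=\left(u_{1},u_{2},\ldots ,u_{p}\right)^{T}$ (4)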
In formula (4), $u$ represents the different eigenvectors. The value of $p$ is determined from $\frac{\sum
_{j=1}^{p}\lambda _{j}}{\sum _{j=1}^{m}\lambda _{j}}\geq 0.85$. That is to say, when
the cumulative information contribution rate of the first $p$ principal components is
at least 85%, the first $p$ principal components can be used as
the features of the sample. In the process, to eliminate the adverse effects of
differing data scales on the prediction fidelity of the model, it is necessary to normalize
the samples. Min-max normalization is therefore used to linearly transform the original
data and map it into the interval [0,1]. The specific expression is formula (5):
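The standard min-max expression consistent with the definitions that follow is:

$x'=\frac{x-\min \left(x\right)}{\max \left(x\right)-\min \left(x\right)}$ (5)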
In formula (5), $x$ represents the original data; $\min (x)$ represents the minimum and $\max \left(x\right)$
the maximum of the original data; and $x'$ represents the normalized data.
When running the model, 90% of the normalized sample data is randomly chosen
as the training sample, and the remaining 10% is used as the test sample.
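The preprocessing steps above can be sketched in code. The following is a minimal illustration, not the authors' implementation: the 85% contribution threshold and min-max scaling follow the text, while the synthetic data and the function names `pca_reduce` and `min_max` are assumptions for demonstration:

```python
import numpy as np

def pca_reduce(X, threshold=0.85):
    """Standardize an (n x g) sample matrix, form its correlation matrix,
    and keep the first p eigenvectors whose cumulative contribution
    rate reaches the threshold (formulas (1)-(4))."""
    n, g = X.shape
    # Formula (1): z-score standardization column by column.
    U = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    # Formula (2): correlation coefficient matrix of the standardized data.
    R = (U.T @ U) / (n - 1)
    # Formula (3): eigen-decomposition of R, sorted in descending order.
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # Keep the first p components with cumulative contribution >= 85%.
    contrib = np.cumsum(eigvals) / eigvals.sum()
    p = int(np.searchsorted(contrib, threshold) + 1)
    A = eigvecs[:, :p]  # transformation matrix of formula (4)
    return U @ A, p

def min_max(x):
    """Formula (5): linearly map each column into [0, 1]."""
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

# Synthetic correlated 8-dimensional data driven by 3 latent factors.
rng = np.random.default_rng(0)
raw = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 8))
reduced, p = pca_reduce(raw)
samples = min_max(reduced)
print(p, samples.shape)
```

Because the synthetic data has only three latent factors, the retained dimension `p` is at most three, illustrating how strongly correlated enterprise indicators collapse into a few principal components.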
3.2 Construction of the FWA-BP Enterprise Game Model
Even after the inputs are improved with the principal component analysis method,
the BP neural network model still has problems such as local extreme values and a
slow convergence rate. To better optimize and adjust the training parameters,
this study introduces the FWA, which improves the computing speed and fusion
accuracy of the network. The FWA continuously produces fireworks during operation.
Each firework is regarded as a solution to the problem in the computing
space, while the process of a firework exploding is regarded as the process of continually
seeking the optimal solution.
Combining the FWA and BP algorithms effectively enhances the system's global search
capability and makes more efficient use of parallel computing resources. Compared
with other algorithms, the FWA-BP neural network algorithm is simpler to implement.
The algorithm first determines the number of starting fireworks, $N$, and the dimensions
of different fireworks, $n$. The upper and lower limits of the fireworks dimensions
are then determined, and fireworks are randomly generated within that range. Then,
the explosion operation is carried out based on the strategy [19-21]. Sparks generated in the explosion process are randomly distributed, and the number
and range of sparks are determined mainly by the explosion radius,
$A_{i}$, and the number of explosion sparks, $S_{i}$. The specific expression is
formula (6).
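In the standard fireworks algorithm, the radius and spark count take the following form, consistent with the parameters defined below:

$A_{i}=E_{r}\frac{f\left(x_{i}\right)-y_{\min }+\varepsilon }{\sum _{i=1}^{N}\left(f\left(x_{i}\right)-y_{\min }\right)+\varepsilon },\quad S_{i}=E_{n}\frac{y_{\max }-f\left(x_{i}\right)+\varepsilon }{\sum _{i=1}^{N}\left(y_{\max }-f\left(x_{i}\right)\right)+\varepsilon }$ (6)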
In formula (6), $f\left(x_{i}\right)$ represents the fitness value of each firework; $y_{\min }$
and $y_{\max }$ represent the optimal (minimum) and worst (maximum) values of the
objective function; $E_{r}$ represents the parameter that controls the range of sparks;
$E_{n}$ represents the parameter that controls the number of sparks; and $\varepsilon
$ is a coefficient that avoids a zero denominator. The displacement operation, $D_{select}=rand*D$,
is carried out for each firework, randomly determining the number of dimensions of $x_{i}$
that need to be offset. Formula (7) shows the explosion sparks generated:
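In the standard FWA, an explosion spark is produced by offsetting the selected dimension (the spark is written $\hat{x}_{ik}$ here, an assumed notation):

$\hat{x}_{ik}=x_{ik}+h$ (7)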
In formula (7), $x_{ik}$ denotes dimension $k$ of firework $i$, which is selected and needs to be
offset; the result is the explosion spark generated after firework $i$ explodes; and $h$ is a random
displacement generated within the explosion radius. The specific expression for $h$ is formula (8):
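The standard form of the displacement is:

$h=A_{i}\times rand\left(-1,1\right)$ (8)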
The introduction of Gaussian sparks in the FWA increases the diversity of the
population. The number of Gaussian sparks is defined as $g$. A firework
is randomly selected from the current fireworks population, and one of its dimensions is multiplied
by a random number, $e$, which follows a Gaussian distribution, as defined
in formula (9):
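The standard Gaussian mutation takes the form (spark notation $\hat{x}_{ik}$ assumed):

$\hat{x}_{ik}=x_{ik}\times e$ (9)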
In formula (9), $e$ follows the Gaussian distribution $N\left(1,1\right)$. Sparks
generated during the explosion that fall beyond the boundary are constrained using
a mapping rule, as shown in formula (10):
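A common form of this mapping rule, with $x_{LB,k}$ and $x_{UB,k}$ (assumed notation) denoting the lower and upper bounds of dimension $k$, is:

$\hat{x}_{ik}=x_{LB,k}+\left| \hat{x}_{ik}\right| \bmod \left(x_{UB,k}-x_{LB,k}\right)$ (10)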
In formula (10), $x_{ik}$ represents the position of firework $i$ in dimension $k$ that exceeds the boundary.
Then, the initial fireworks group, the explosion sparks group, and the Gaussian
sparks group together form the candidate fireworks set, which is defined as $K$. The individual with
the best fitness is selected from this population, and then $N-1$ more are selected
from the remaining candidates according to the selection strategy.
The $N$ selected fireworks are used as the fireworks population for the next
iteration [22,23]. The selection strategy is given in formula (11):
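The standard distance-based selection strategy consistent with the description below is:

$R\left(x_{i}\right)=\sum _{x_{j}\in K}\left\| x_{i}-x_{j}\right\| ,\quad p\left(x_{i}\right)=\frac{R\left(x_{i}\right)}{\sum _{x_{j}\in K}R\left(x_{j}\right)}$ (11)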
In formula (11), $R\left(x_{i}\right)$ represents the sum of the Euclidean distances between
firework $x_{i}$ and the other fireworks. In the set of all candidates, $p\left(x_{i}\right)$ represents the probability that
firework $x_{i}$ is selected, which lies in [0,1]. When calculating the fitness values of the fireworks
population generated in each iteration, if the optimal fitness satisfies the termination
condition or the maximum number of iterations is reached, the search for the optimal solution ends; otherwise, the algorithm jumps
back to the first step of the process. To combine the fireworks algorithm with the neural
network, a BP neural network is created, the fireworks population is initialized,
and all the network parameters of the neural network are taken as the objects of
optimization. The fitness value of every firework is computed according to the above steps
of the FWA. The sum of squared errors (SSE) of the neural network is taken as the fitness
function, and the expression for $f\left(x_{i}\right)$ is shown in (12):
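With the SSE as the fitness function, the standard expression is (the symbols $l$, $t_{j}$, and $o_{j}$, for the number of output neurons, the expected outputs, and the actual outputs, are assumed notation):

$f\left(x_{i}\right)=\sum _{j=1}^{l}\left(t_{j}-o_{j}\right)^{2}$ (12)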
In formula (12), $t$ represents the expected output value under the BP algorithm; the remaining
quantities are the number of neurons in the network's output layer and the actual
output value produced by the BP calculation. According to these steps, the fireworks population
is optimized and the termination conditions are checked, and the optimal individual
satisfying the conditions is decoded and assigned to the neural network. The Levenberg-Marquardt (LM) algorithm
is then used to train and optimize the neural network weights and thresholds with
higher precision. The process involves setting the target error function to the SSE,
specifying the number of training iterations and the target error beforehand, and then
initiating the training process. When the target error or the maximum number of iterations
is reached during training, the training is over, and establishment of the
optimal model is complete. If the conditions are not met, the previous step is repeated
to continue the operation.
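The FWA optimization loop described above can be sketched as follows. This is a hedged illustration rather than the paper's implementation: a sphere function stands in for the network's SSE fitness, the parameter values ($E_{r}$, $E_{n}$, $g$, the bounds) are arbitrary assumptions, and the distance-based selection of formula (11) is simplified to uniform random selection of the $N-1$ non-best fireworks:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    # Stand-in objective: in the FWA-BP model this would be the SSE of
    # the BP network whose weights/thresholds are encoded in x.
    return np.sum(x ** 2)

def fwa_minimize(dim=4, N=5, iters=60, lb=-5.0, ub=5.0,
                 E_r=2.0, E_n=20, g=3, eps=1e-12):
    """Minimal fireworks-algorithm sketch following formulas (6)-(10)."""
    pop = rng.uniform(lb, ub, size=(N, dim))
    for _ in range(iters):
        f = np.array([fitness(x) for x in pop])
        y_min, y_max = f.min(), f.max()
        # Formula (6): explosion radius A_i and spark count S_i per firework.
        A = E_r * (f - y_min + eps) / (np.sum(f - y_min) + eps)
        S = np.maximum(1, np.round(E_n * (y_max - f + eps) /
                                   (np.sum(y_max - f) + eps)).astype(int))
        sparks = []
        for i, x in enumerate(pop):
            for _ in range(S[i]):
                z = x.copy()
                d = rng.integers(1, dim + 1)  # D_select = rand * D
                idx = rng.choice(dim, size=d, replace=False)
                # Formulas (7)-(8): offset selected dimensions within radius A_i.
                z[idx] += A[i] * rng.uniform(-1, 1, size=d)
                sparks.append(z)
        # Formula (9): Gaussian sparks, e ~ N(1, 1), applied multiplicatively.
        for _ in range(g):
            z = pop[rng.integers(N)].copy()
            idx = rng.choice(dim, size=rng.integers(1, dim + 1), replace=False)
            z[idx] *= rng.normal(1.0, 1.0, size=len(idx))
            sparks.append(z)
        cand = np.vstack([pop] + sparks)
        # Formula (10): map out-of-bound sparks back into the search range.
        cand = lb + np.abs(cand - lb) % (ub - lb)
        cf = np.array([fitness(x) for x in cand])
        best = cand[cf.argmin()]
        # Keep the best candidate; the remaining N-1 are drawn uniformly here
        # instead of by the distance-based rule of formula (11).
        rest = cand[rng.choice(len(cand), size=N - 1, replace=False)]
        pop = np.vstack([best, rest])
    return best, fitness(best)

x_best, f_best = fwa_minimize()
print(f_best)
```

Because the best candidate of each iteration is always carried over, the best fitness is non-increasing, mirroring the termination test on the optimal fitness described above.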
The input variables of the sample data define the number of input nodes in the network. There is no definitive
method for determining the number of nodes in the hidden layer. Therefore, a common
empirical formula is selected, and the number of nodes is varied around the value given by
the formula to carry out the experiment. The expression is
shown in (13):
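A common empirical formula of the kind described (with $a$ an integer adjustment constant, typically $1\leq a\leq 10$) is:

$n'=\sqrt{m'+p}+a$ (13)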
In (13), $n'$ is the number of nodes in the hidden layer of the neural network, $m'$ represents
the number of nodes in the input layer, and $p$ represents the number of nodes in
the output layer. When network performance no longer changes as the number of hidden
nodes increases, the number of hidden nodes at that point can be taken as the optimal
number. After that, all indicators are preprocessed accordingly,
as seen in Fig. 3.
Fig. 3. Flow chart of the information game based on the fusion model for the FWA-BP neural network algorithm.
In Fig. 3, all indicators are preprocessed first, and then BP neural network detection is performed
through a series of operations. The neural network processes enterprise information
and thus conducts the games among the enterprises. Before fusion, it is necessary to
clarify the relationship between the data and the target object, which is not a simple
linear relationship. Therefore, nonlinear factors must be added through the activation
function in the hidden layer to improve the nonlinear expressive power of the model.
The hidden layer usually adopts a nonlinear sigmoid function, and the output layer
adopts a linear purelin function. When evaluating the constructed models, two or more
models are usually compared, and the same test samples are used to test the fusion
effect. Most researchers select the mean absolute error (MAE) and the root mean square
error (RMSE) as test indexes. The specific expressions are given in formula (14):
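The standard expressions for these indexes, with $\hat{y}_{i}$ (assumed notation) denoting the actual values, are:

$MAE=\frac{1}{N'}\sum _{i=1}^{N'}\left| y_{i}-\hat{y}_{i}\right| ,\quad RMSE=\sqrt{\frac{1}{N'}\sum _{i=1}^{N'}\left(y_{i}-\hat{y}_{i}\right)^{2}}$ (14)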
In formula (14), $N'$ is the number of test samples of the model, and $y$ is the predicted output value.
The process of constructing the enterprise game system model with the FWA-BP
fusion model using the above formulas is shown in Fig. 3.