Quantum-Behaved PSO with a Global-Best Strategy in Characteristic Length to Explore the Solution Space Efficiently and Effectively
Siyasha Singh ✉
Department of ECE (Bio-Medical), Vellore Institute of Technology, Vellore, India
siyashasingh@gmail.com
Abstract
The characteristic length of the potential well largely determines the exploration in Quantum Particle Swarm Optimization. Previous methods calculated this length based on the mean of self-best solutions, resulting in slow convergence and accuracy issues due to symmetrical exploration. This work introduces a modified characteristic length that incorporates the globally best solution, enabling asymmetrical exploration around local attractors. This new approach allows for three levels of exploration in parallel: particles close to the global best focus on local refinement, those farther away contribute to global exploration, and others explore intermediate regions. By varying exploration depths, the algorithm improves convergence toward global optima. Various potential wells—including the Delta and Harmonic Oscillator—were analyzed to assess their effectiveness at guiding solution searches. As dimensionality increases, the challenge of lagging particles worsens when using the mean-based characteristic length, making the proposed global best-based approach more effective. To better evaluate algorithm performance, both accuracy and convergence characteristics are considered simultaneously, leading to the introduction of a new normalized distance measure. Extensive experiments using numerical optimization benchmark functions demonstrate that the global best-based characteristic length in the quantum version consistently outperforms the mean self-best-based approach. This trend was observed across multiple potential wells, indicating broad applicability. The Delta potential well achieved the highest performance among all tested potential wells: for 19 functions from the CEC2005 benchmark, normalized accuracy increased by 6.8% and convergence characteristics improved by 66.62%. For 28 functions from CEC2017, performance improved by 14.29% compared to the Farmer and Seasons Algorithm.
These experimental results indicate that the global best-based characteristic length significantly enhances solution exploration and convergence. Additionally, the proposed quality measurement index enables more accurate algorithm comparison. Research has also been carried out on exploring the solution space using multiple potential wells, and the benefits have been estimated relative to a single potential well.
Keywords:
Meta-heuristic Optimization
PSO
Quantum PSO
potential well
characteristic length
quality index
1. Introduction
Optimization is rapidly progressing, with new algorithmic and theoretical techniques emerging across various fields of science and engineering. A key trend is the increasing focus on the interdisciplinary nature of optimization. Traditional approaches, such as gradient-based and Hessian-based methods, are efficient but face challenges with complex, non-differentiable, or discrete problems. These methods can get trapped in local minima and are sensitive to numerical noise. In contrast, meta-heuristic optimization approaches efficiently tackle multimodal, high-dimensional, and nonlinear problems. They can handle any objective function and are noted for their simple design and self-adaptive capabilities, which can yield global optima when properly configured. Meta-heuristic algorithms are generally grouped as follows: (i) Evolutionary Computing (EC): considers natural evolution as the fundamental principle and includes Genetic Algorithms (GA) [1], Differential Evolution (DE) [2][3], and Genetic Programming (GP) [4], etc. (ii) Swarm Intelligence (SI): inspired by social behaviors; examples include Particle Swarm Optimization (PSO) [5] [6], Ant Colony Optimization (ACO) [7], and Adaptive Social Behavior Optimization (ASBO) [8], etc. (iii) Physical/Chemical Principle-Based Algorithms: includes Simulated Annealing [9] and DNA Computing [10], etc. A significant issue for meta-heuristics is maintaining a balance between exploration (finding new solutions) and exploitation (improving known solutions), which affects convergence and global optimality [11] [12]. Strategies to improve this balance include tuning algorithm parameters and using self-adaptive mechanisms based on population diversity and fitness. Furthermore, surrogate models can simplify the evaluation of complex objective functions, reducing computational costs [13]. Researchers are also combining quantum mechanics with meta-heuristics to improve performance, leading to quantum meta-heuristic algorithms.
There are two main approaches for incorporation, as shown in Fig. 1: (i) Quantum-Inspired [14] [15]: In this approach, the meta-heuristic model is transformed into the quantum realm to explore solutions in Hilbert space. (ii) Quantum Behaved [16]: This approach enhances meta-heuristic outcomes by selecting a potential well to obtain the wave function from the time-independent Schrödinger Equation and determining its position through quantum measurement.
In a quantum-inspired meta-heuristic, a quantum solution is represented by a sequence of qubits. The algorithm applies an evolutionary structure to the qubits through quantum logic gates, which function like operators within Hilbert space. These quantum gates manipulate the quantum solution. By considering the principle of quantum superposition, a quantum solution can represent multiple solutions simultaneously, facilitating a faster exploration of the vast solution space. Quantum entanglement introduces correlations among qubits, helps maintain a high level of diversity, and prevents premature convergence to local optima. These characteristics make quantum meta-heuristics (QMHs) powerful tools for solving complex optimization problems that have extensive and multimodal search domains and require rapid convergence. Several algorithms fall under this category, including Quantum-Inspired Genetic Algorithm (QGA)[17][18], Quantum Differential Evolution (QDE)[19], Quantum Ant Colony Optimization (QACO)[20], and Quantum Artificial Bee Optimization (QABO)[21].
In quantum-behaved meta-heuristics, a feasible potential is expressed in the time-independent form of the Schrödinger equation to obtain the wave function. From this wave function, a probability density function is derived, and a quantum measurement is subsequently applied to effect changes. New solutions are generated by incorporating the discovered quantum change into the meta-algorithm's solutions in a probabilistic environment. When the need arises for lower computational costs and moderate accuracy, Particle Swarm Optimization (PSO) has proven to be a highly effective algorithm, with numerous applications in the past [22]. Various versions and hybridizations with other meta-heuristics have emerged [23] [24]. Among them, a PSO variant that incorporates quantum mechanics principles to guide the search process is known as Quantum-Behaved Particle Swarm Optimization (QPSO) [25]. A significant challenge faced by all meta-heuristics is the tuning of the numerous associated parameters, which can vary across different applications. Conventional PSO encounters this issue, requiring the tuning of three parameters: inertia weight, social constant, and cognitive constant [26]. In contrast, QPSO only necessitates tuning a single parameter [27] and has demonstrated improvements over conventional PSO in terms of accuracy, convergence speed, variation, and robustness [25] [42].
Fig. 1
Different approaches of Quantum principles integration with Meta-heuristics
This research makes three main contributions: (i) it clarifies how potential well characteristics influence the solution exploration phase; (ii) it proposes a new global best-based characteristic length strategy to increase the accuracy of Quantum Particle Swarm Optimization; and (iii) it introduces a more accurate and robust quality measurement index for algorithm performance. To address the first contribution, the study analyzes the quantum behavior's effect as a function of random numbers depending on potential well probability density functions. It investigates various potential wells—Delta, Harmonic Oscillator, Square Well, Lorentz Potential Field, Rosen-Morse Potential Field, and Coulomb-like Square Root Field—and presents their effects graphically. The findings show that both linear and non-linear changes exist for all potential wells, influencing solution dimensions differently based on random number strength. This enables diverse exploration to support finding the global optima. The second contribution focuses on characteristic length (CL): while a mean self-best CL can cause slow convergence and risk local optima, the proposed global best-based CL enables asymmetrical changes according to each member's distance from the global best, promoting both local and global exploration. This approach accelerates convergence and improves global search capability. The third contribution is the introduction of the Quality Measuring index (QMindex), a distance-based measure to evaluate how close an algorithm's result is to the absolute best. Smaller QMindex values indicate better performance. The research relies on two sets of benchmark functions from CEC2005 and CEC2017, incorporating characteristics of unimodal, multimodal, shifted and rotated, hybrid, and composite functions with different dimensions, to test and detail algorithmic performance. Overall, the proposed global-based CL in QPSO demonstrated superior performance compared to other QPSO variations and the recent Farmer and Season Algorithm (FSA) [64].
In this paper, the work is structured into several sections. Section 2 presents a review of related work, while Section 3 discusses the identified research gap and provides a brief overview of the proposed solution. The basic principles of Particle Swarm Optimization are outlined in Section 4, and Section 5 describes the association of quantum behavior with PSO. Section 6 analyzes the effective contributions of different potential wells in Quantum Particle Swarm Optimization. In Section 7, the proposed solution for the global best-based characteristic length in QPSO is discussed. A quality measurement index and its formulation are provided in Section 8. Section 9 covers the details of the experimental outcomes and their analysis over benchmark functions, along with the impact of high dimensions, as well as observed limitations with the QPSO algorithm. The outcomes of considering multiple potential wells simultaneously in the solution exploration are discussed in Section 10. The paper concludes with a discussion of future work.
2. Related work
In the context of the current research on Quantum Behaved Particle Swarm Optimization, previous literature has been reviewed covering the fundamental principles involved, local attractor positioning, variations in the characteristics of the contraction-expansion (CE) coefficient, characteristic length formulation, the effect of potential wells, hybridization with other algorithms, and applications across different fields.
Several past studies can be regarded as foundational developments for QPSO, particularly regarding the principles of operation and control strategies for associated parameters aimed at enhancing exploration and improving convergence characteristics. One of the most significant contributions was made by Jun Sun et al. [25], who introduced quantum characteristics to PSO particles to enhance exploration from local attractors. They investigated the Delta potential well, which demonstrated effective performance. The analysis of the upper-level fixation of the CE parameter was thoroughly discussed in [27], proposing various approaches for the control strategy of QPSO. The positioning of local attractors significantly influences the performance of particles; hence, a Gaussian probability distribution-based strategy for identifying better local attractors was proposed in [28]. In this strategy, the mean value represents the local attractor position derived from the standard approach, while the standard deviation reflects the difference between the global and self-best positions. This hybrid method, utilizing both standard and Gaussian-based local attractors in a probabilistic environment, showed a slight performance improvement compared to the standard strategy. A comprehensive analysis was presented in [29], examining the behavior of a single particle using a probabilistic approach. This analysis yielded the upper bound value of the CE coefficient to ensure convergence. Additionally, to enhance the exploration capability, diversity control was explored in [30]. This study revealed that the distance-to-average-point diversity is a crucial factor in defining search performance. The integration of quantum principles with meta-heuristics has been applied and simulated within classical systems. The effects of these quantum mechanics principles in conjunction with meta-heuristics in real quantum environments were investigated in [31].
For this purpose, the quantum cellular genetic algorithm was implemented on quantum simulations and machines to assess how the real quantum world influences performance. It is well-known that the effectiveness of any meta-heuristic algorithm is problem-dependent, raising the question of which characteristics of the problem contribute to algorithmic challenges. In [32], the effectiveness of Quantum Annealing (QA) in problem-solving was analyzed using meta-learning models. It was found that problem coefficients related to bias and coupling terms were more critical for discovering effective solutions than merely considering the density of these coefficients. The optimal utilization and challenges of the quantum approach with PSO were discussed in [33], emphasizing that PSO relies on an implicit probabilistic distribution rather than an explicit one. The position of the local attractor in QPSO significantly impacts both the accuracy and convergence characteristics. Finally, a collaborative learning-based strategy was proposed in [34], which generates local attractors using orthogonal and comparison operators. The orthogonal operators aim to leverage the information available between personal and global best positions, while the comparison operators address the potential for local optima traps. Overall, the characteristic length plays a crucial role in exploration. In most research, the focus is on the absolute difference between an individual's performance and the best mean performance of the group. A variation in the characteristic length, based on a fitness-weighted mean best position rather than the mean best, was proposed in [35]. In [36], a modification to the characteristic length using the global best instead of the mean best was implemented, which showed improvements in adaptive filter-based system identification applications. 
The quantum behavior of QPSO has been defined through the concept of potential wells, and various types of potential wells have been explored in past studies. A detailed discussion about the formulation of different potential wells, such as Delta, Harmonic Oscillator, and Square potential wells, was presented in [37], with applications in designing various types of antennas in the field of electromagnetics. The comparative performances were analyzed in the context of three different potential fields: Lorentz, Rosen-Morse, and a Coulomb-like square root potential, as noted in [38]. The concept of soliton particles in the formation of Quantum PSO allows for adaptations in the external potential field, which helps stabilize them without getting trapped in the potential wells [39]. A discrete version was proposed in [40]. Additionally, controlling parameters were introduced in [41] to balance the influences of personal best and global best positions, as well as to strike a balance between diversification and intensification strategies. To escape local optima, some of the worst solutions in the population were re-initialized. Furthermore, Lévy flight and straight flight techniques were integrated with QPSO to enhance performance in high-dimensional problem scenarios [42].
Various possibilities have been explored to integrate algorithms from other domains with Quantum Particle Swarm Optimization to enhance performance. A quantum-behaved PSO algorithm utilizing a hyper-chaotic discrete system was presented in [43], where all particles are confined within an identical particle system and the theory of two-dimensional hyper-chaotic sequences is applied. Initially, seasonal fluctuation inference is eliminated by employing identical particles, followed by a chaotic search conducted through the hyper-chaotic sequence. Additionally, a memetic algorithm combined with a memory mechanism was proposed in QPSO [44], enabling particles to gain experience through a local search mechanism before evolving. The memory mechanism is applied subsequently to improve search capabilities. To enhance convergence, the same random number is applied to each dimension of a particle. A Gaussian distribution-based random number generator with a linearly decreasing variance was utilized, allowing a global search advantage in the initial phase and a local search in later phases [45]. A weighted mean of the best position was also employed to balance the search between global and local approaches. A chi-square-based mutation strategy integrated with QPSO in [46] for the combined economic emission dispatch (CEED) problem demonstrated better performance compared to other mutation forms based on Gaussian and Cauchy distributions. A local search strategy in QPSO [47] was developed to enhance performance by creating a "super particle", formed from randomly selected particles, with contributions to this formation being fitness-oriented. A hybrid QPSO was proposed in [48] and applied to the Energy Demand Predictions (EDPs) problem, using Solis and Wets methods to boost local search performance. This algorithm aims to identify optimal solutions when constraints are met, employing two evolutionary operators to balance local and global searches.
The principle of quantum entanglement from quantum mechanics has been leveraged in [49] to develop meta-heuristics. For a given population of proto-born particles, the properties of entanglement and superposition are utilized to create twin-born and combination-born particles, respectively. An elite-born particle is generated via a localized searching approach. To address high-dimensional optimization, a quantum PSO based on diversity migration was proposed in [50], where migrating individuals are selected based on their fitness and physical positioning. Furthermore, the superposition characteristic of quantum mechanics was proposed in [51] to update the velocity parameters of particles in PSO, with enhancements to local searches assisted by the Kangaroo Algorithm. Quasi-opposite-based learning was introduced in the initialization stage to accelerate convergence, and a double evolutionary mechanism was applied to update individual positions [52]. This approach combines the advantages of particle mining and population guidance during iterations to improve QPSO searches. Additionally, a combination of cuckoo search and QPSO has been integrated for solving differential equations [53].
Numerous studies have explored real-world applications of QPSO. These applications include resource allocation in cloud computing [54], optimizing truss structures using quantum angle encoding and a perturbation operator [55], and minimizing glycemic loads for specific foods in the context of nutrition [56]. Additionally, QPSO has been integrated with deep Q-networks to enable path definition and obstacle avoidance for autonomous underwater vehicles [57]. A QPSO-based clustering method has also been employed to conserve energy in sensor nodes within wireless sensor networks (WSN) [58]. Other applications include Code Division Multiple Access (CDMA) [59], the Internet of Things (IoT) [60], and multi-task allocation [61], among others.
3. Research gaps and proposed remedies
The analysis of previous research primarily focused on the re-localization of local attractor positions, variations in contraction coefficients, the nature of applied potential wells, population initialization, and the integration of Quantum Particle Swarm Optimization with other meta-heuristics to enhance performance. Numerous application-oriented studies have been observed, but several critical areas have received minimal attention: (i) how the characteristic length affects exploration and performance; (ii) what a more appropriate formulation of the characteristic length should be; (iii) although the probability density functions (PDFs) of potential wells were observable, there was no clear information available about the characteristics of the post-measurement function of random numbers and its behavior in the exploration; and (iv) the need for a unified quality-measuring parameter that encompasses both the accuracy and convergence characteristics of algorithms throughout the convergence process.
To address these gaps, this work introduces a new strategy for the characteristic length, defined through the global best instead of the mean of the self-bests. The results demonstrate that the global best characteristic length enables simultaneous multi-level and multi-scale exploration, facilitating both global and local exploration in parallel. This significantly increases the chances of escaping local optima traps and establishes faster global convergence. The performance of the global best characteristic length in high-dimensional problems was found to be less affected compared to the self-best mean approach. An innovative metric, the Quality Measuring Index, was proposed to simultaneously account for accuracy and convergence characteristics, providing a means for relative comparison of algorithms. The proposed approach was first tested on CEC2005 benchmarks that included both unimodal and multimodal numeric optimization problems. The performance of the global best characteristic length was compared against the mean self-best approach across different types of potential wells to ensure that the observed improvements were substantial and consistent. A graphical approach was developed to illustrate the final effect of potential wells on solution exploration, clearly showing how QPSO with the global best characteristic length can explore more effectively and rapidly. In the later stage of the work, the proposed approach was applied to the CEC2017 single-objective real-parameter benchmark functions, which carry shifted, rotated, hybrid, and composite function structures, to ensure the capability of solution exploration even in the most difficult forms of landscape.
4. Particle Swarm Optimization (PSO)
Natural swarms exhibit complex emergent behaviors resulting from local interactions, decentralized control, and self-organization. The mathematical abstraction of these behavioral properties, such as survival and reproduction, has led to the development of a branch of artificial intelligence known as Swarm Intelligence (SI). Various algorithmic models have been formulated to correspond to different types of swarms, including Particle Swarm Optimization (PSO), Ant Colony Optimization, and Bee Optimization. These algorithms are iterative meta-heuristics. PSO, in particular, has demonstrated significant success across diverse application domains and employs a computational approach inspired by leadership and self-motivation. During each iteration, three populations are maintained: the position population, which encodes candidate solutions; the velocity population, which determines changes in the solution population; and the self-best population, which stores the best solutions achieved by individuals. The updates to velocity (V) and position (S) for each member in each dimension for the $t^{th}$ iteration are defined by Eq. 1 and Eq. 2 [26].

$$V_{ij}(t+1) = w\,V_{ij}(t) + c_1 r_{1,ij}\left(P_{ij}(t) - S_{ij}(t)\right) + c_2 r_{2,ij}\left(G_j(t) - S_{ij}(t)\right) \qquad (1)$$

$$S_{ij}(t+1) = S_{ij}(t) + V_{ij}(t+1) \qquad (2)$$

Here $S_{ij}$, $V_{ij}$, and $P_{ij}$ denote the position, velocity, and self-best of particle $i$ in dimension $j$, and $G_j$ is the global-best position.
The inertia weight ($w$), along with two other positive constant parameters, the cognitive ($c_1$) and social ($c_2$) constants, decides the nature of the convergence characteristics. The inertia weight determines how much previous velocities influence the current one, and a good choice helps to balance between global and local search. Higher inertia weights help the algorithm to search more broadly, while lower weights make it focus on a local region. The cognitive and social parameters are less crucial, and proper values help the algorithm to converge faster and to avoid getting stuck in local optima. Uniformly generated random values of the parameters $r_1$, $r_2$ in the range of 0 to 1 help to maintain population diversity through different sampling of the influencing factors. The performance of PSO depends heavily on the settings of these three parameters, but no standard method exists to find their best values. There are three possible approaches to assume values for the inertia parameter in algorithms: (i) static approach: a constant value less than 1 is assumed throughout all iterations; (ii) dynamic approach: a decreasing function of value is applied over iterations; and (iii) self-adaptive approach: a function adjusts the inertia weight based on the current state of population diversity or fitness diversity. Regardless of the approach applied [5] [62]—whether in algorithm structure, parameter variations, or both—PSO faces challenges in global exploration, especially with large and multimodal problems, and requires enhancements to improve its performance.
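The velocity and position updates of Eq. 1 and Eq. 2 can be sketched as a minimal PSO loop. This is an illustrative sketch only: the sphere test function, the static parameter values ($w = 0.7$, $c_1 = c_2 = 1.5$), and the bounds are arbitrary choices, not values prescribed by this paper.

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal PSO: velocity/position updates as in Eq. 1 and Eq. 2."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # self-best positions
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]         # global-best position
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # Eq. 1: inertia term + cognitive pull + social pull
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - pos[i][j])
                             + c2 * r2 * (gbest[j] - pos[i][j]))
                pos[i][j] += vel[i][j]               # Eq. 2
            fi = f(pos[i])
            if fi < pbest_f[i]:                      # update self-best
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:                     # update global-best
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

random.seed(1)
sphere = lambda x: sum(v * v for v in x)             # illustrative unimodal test
best, best_f = pso_minimize(sphere, dim=5)
```

On this smooth unimodal landscape a static inertia weight suffices; the dynamic and self-adaptive schedules described above replace the constant `w` with an iteration-dependent function.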
5. Quantum behavior inclusion in PSO
Classical PSO was developed on the concept of inspirational factors of social culture, in which the influences of the leader and self-motivation were considered to provide changes in the individual. The convergence characteristics of PSO have been demonstrated through trajectory analysis in [5], and it was established that the convergence of each particle to its local attractor ensured the algorithm's convergence. Considering a population size of M for a D-dimensional problem, the local attractor position ($p_{ij}$) of particle $i$ in the $t^{th}$ iteration can be estimated as a linear combination of its personal-best ($P_{ij}$) and the global-best ($G_j$) positions, as defined by Eq. (3).
$$p_{ij}(t) = \frac{c_1 r_{1,ij}\,P_{ij}(t) + c_2 r_{2,ij}\,G_j(t)}{c_1 r_{1,ij} + c_2 r_{2,ij}} \qquad (3)$$
where $c_1$ and $c_2$ are the cognition and social constants, while $r_1$ and $r_2$ are vectors of random numbers generated through a uniform distribution in the range [0, 1]. In compact form, Eq. 3 can be written as Eq. 4.
$$p_{ij}(t) = \phi_{ij}\,P_{ij}(t) + \left(1 - \phi_{ij}\right)G_j(t) \qquad (4)$$

where $\phi_{ij} = \dfrac{c_1 r_{1,ij}}{c_1 r_{1,ij} + c_2 r_{2,ij}}$ and $1 - \phi_{ij} = \dfrac{c_2 r_{2,ij}}{c_1 r_{1,ij} + c_2 r_{2,ij}}$.
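Because Eq. 3 and Eq. 4 express the local attractor as a random convex combination of the personal-best and global-best, it always lies between them in each dimension. A small numerical check (the constants $c_1 = c_2 = 2$ and the sample positions are illustrative only):

```python
import random

def local_attractor(pbest_j, gbest_j, c1=2.0, c2=2.0):
    """Eq. 3 / Eq. 4: random convex combination of personal- and global-best."""
    a = c1 * random.random()          # c1 * r1 term
    b = c2 * random.random()          # c2 * r2 term
    phi = a / (a + b)                 # phi lies in (0, 1) almost surely
    return phi * pbest_j + (1.0 - phi) * gbest_j

random.seed(0)
p = local_attractor(pbest_j=-1.0, gbest_j=3.0)
```

Since $\phi_{ij}$ is resampled for every particle, dimension, and iteration, the attractor jitters inside the segment joining the two best positions, which is the "random average" interpretation used in the quantum formulation below.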
In the classical approach of PSO, the attractive field pulls all the particles towards their local attractors, and many eventually do not have enough energy to escape. Over time, the regions beyond the local attractor's field are not properly explored, causing the solution to get trapped in local minima.
Assume a quantum system corresponding to PSO in which the particles exhibit quantum behavior and follow the rules of quantum mechanics. The local attractor for a particle can be considered as a random average of the personal-best and global-best positions; using a proper potential well provides attraction towards the local attractor along with a good possibility of exploring regions far beyond the potential well center. This increases the chance of obtaining the global optima. The exploration displacements from the local attractors are achieved through the quantum behavior formalism given below. To use the quantum behavior-based phenomenon in PSO, there are two sections under which the whole scenario has to be considered: (i) formalism of the quantum environment and (ii) exploitation of quantum behavior with PSO in search of a better solution.
5.1 Quantum environment formalism for PSO
The following steps were used to create the quantum behavioral characteristics.
(i) An appropriate form of the potential well $V(x)$ (like a Delta well) is considered, centered on the local attractor $p$.
(ii) The time-independent form of the Schrödinger equation, as given in Eq. 5, is solved over the considered potential well to get the wave function $\psi(x)$.

$$\hat{H}\,\psi(x) = E\,\psi(x) \qquad (5)$$

where $\hat{H}$ is the time-independent Hamiltonian operator, as given in Eq. 6.

$$\hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + V(x) \qquad (6)$$
where $\hbar$ is the reduced Planck's constant and $m$ is the particle mass. For the one-dimensional case, the final form of Eq. 5 is written as Eq. 7.

$$\frac{d^2\psi(x)}{dx^2} + \frac{2m}{\hbar^2}\left[E - V(x)\right]\psi(x) = 0 \qquad (7)$$
(iii) The probability density function $Q(x)$ is obtained from the wave function as the absolute square, $Q(x) = |\psi(x)|^2$, which has the normalization property of Eq. 8.

$$\int_{-\infty}^{+\infty} |\psi(x)|^2\,dx = 1 \qquad (8)$$
(iv) Quantum measurement: Measurement into localized space happens in quantum mechanics through the collapse of the wave function and can be considered as the localization process. In the quantum formulation process of PSO, the Monte Carlo simulation process is applied to achieve the localization through the following steps:
●A random variable is generated that has uniform distribution characteristics in the local region of the measurement.
●The obtained probability distribution is equated to the uniform distribution ($u$) as given by Eq. 9.

$$\int_{-\infty}^{x} |\psi(y)|^2\,dy = u, \qquad u \sim U(0, 1) \qquad (9)$$
●The position is obtained by solving the equation in terms of assumed random variables.
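For the Delta potential well, this measurement step has a well-known closed form: equating the well's exponential probability term to a uniform random number $u$ gives a position displaced from the well center $p$ by $(L/2)\ln(1/u)$, with the sign chosen at random. The sketch below (the center $p = 2$ and characteristic length $L = 1$ are arbitrary illustrative values) samples positions this way and checks that they scatter symmetrically around the center:

```python
import math
import random

def delta_well_sample(p, L, rng=random):
    """Monte Carlo measurement for the Delta well: a uniform random number u
    is equated to the exponential probability term, giving
    x = p +/- (L/2) * ln(1/u)."""
    u = 1.0 - rng.random()                       # u in (0, 1], avoids log(0)
    sign = 1.0 if rng.random() < 0.5 else -1.0   # symmetric collapse direction
    return p + sign * (L / 2.0) * math.log(1.0 / u)

random.seed(42)
samples = [delta_well_sample(p=2.0, L=1.0) for _ in range(20000)]
mean = sum(samples) / len(samples)   # should stay close to the center p = 2.0
```

Most samples land within the characteristic length of the center, but the heavy exponential tail occasionally throws a particle far away, which is exactly the escape mechanism that lets quantum-behaved particles leave a local attractor's field.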
Considering the above process for a particle moving in dimension $j$, let $y = x - r_j$, where $r_j$ is a reference location; for the algorithm to converge, $y$ should move closer to zero with time. Hence, an attractive potential field is required, and among various possibilities, the Delta potential well [25] [37], the Harmonic Oscillator and Square Well [37], the Lorentz Potential Field, the Rosen-Morse Potential Field, and the Coulomb-like Square Root Field [38] have been considered in the past. The formulae of these potential wells, along with the corresponding forms of the probability distribution function, are given in Table 1.
5.2 Exploitation of quantum behavior with PSO: QPSO
With the considered potential well centered at the local attractor, the collapse of the wave function for position upgradation can be presented by Eq. 10.

$$S_{ij}(t+1) = p_{ij}(t) + D \qquad (10)$$

where $D$ is the displacement function and can be considered as a function of the characteristic length $L$ and a non-uniform distribution function $F$, as given in Eq. 11.

$$D = \pm\,\frac{L}{2}\,F(u) \qquad (11)$$
The characteristic length plays an important role in exploration and can be defined through the absolute difference between the current position and the position of the reference point $r_j$. The characteristic length is thus a function of $|r_j - S_{ij}(t)|$, while the nature of the $F$ factor depends entirely upon the potential well in hand and becomes a function of the uniform-distribution random number $u$ considered in the quantum measurement phase. Assuming particle $i$ in the $t^{th}$ iteration with a Delta potential well (an infinitely strong attractive potential field whose depth is infinite at the origin and zero elsewhere) centered at the local attractor $p_{ij}$ in the $j^{th}$ dimension, the solution upgradation is defined through Eq. 12.

$$S_{ij}(t+1) = p_{ij}(t) \pm \beta\,\left|r_j - S_{ij}(t)\right|\,\ln\!\left(\frac{1}{u}\right) \qquad (12)$$
where $\beta$ is a constant called the contraction–expansion (CE) coefficient, which controls the mechanism of exploration as well as the convergence speed. This is the only parameter available in quantum PSO to tune. Generally, a value of less than 1 ensures convergence. In practice, a dynamic approach of varying the value of $\beta$ from high to low with iteration has been considered [27] and can be defined by Eq. 13.

$$\beta(t) = \left(\beta_{1} - \beta_{2}\right)\frac{T - t}{T} + \beta_{2} \qquad (13)$$
where η_i and η_f are the initial and final values of the CE coefficient, respectively, and T is the maximum number of iterations. In most of the previous literature [27][37][38][39], the position of the reference point C has been taken as the mean of the personal best positions of all members, and the particle position is up-graded through Eq. 14:

C_j(t) = (1/N) Σ_{i=1}^{N} Pb_{i,j}(t),    X_{i,j}(t+1) = p_{i,j}(t) ± η · |C_j(t) − X_{i,j}(t)| · ln(1/u)    (14)
The functional block diagram of the Quantum Particle Swarm Optimization is illustrated in Fig. 2. The entire process is divided into two parts: (i) Quantum behavioral formation, which allows the use of specific types of potential wells. After the collapse of the wave function, a measurement is made to define the displacement function. (ii) The use of the displacement function to explore the surrounding regions of the local attractor, to find a better solution.
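The update loop of Eqs. 12–14 can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; the function and parameter names (`qpso_step`, `eta_i`, `eta_f`) are assumptions, while the Delta-well contribution ln(1/u) follows the standard QPSO formulation described above.

```python
import math
import random

def qpso_step(X, Pb, Gb, t, T, eta_i=0.9, eta_f=0.6, rnd=random.random):
    """One QPSO iteration with a Delta potential well and a mean-best
    characteristic length (sketch of Eqs. 12-14; names are illustrative)."""
    n, d = len(X), len(X[0])
    # contraction-expansion coefficient, linearly decreased (Eq. 13)
    eta = (eta_i - eta_f) * (T - t) / T + eta_f
    # reference point C: mean of the personal bests (the "mbest")
    C = [sum(Pb[i][j] for i in range(n)) / n for j in range(d)]
    new_X = []
    for i in range(n):
        row = []
        for j in range(d):
            phi = rnd()
            p = phi * Pb[i][j] + (1 - phi) * Gb[j]   # local attractor
            L = abs(C[j] - X[i][j])                  # mean-based characteristic length
            u = rnd() or 1e-12                       # uniform draw, guard against log(0)
            step = eta * L * math.log(1.0 / u)       # Delta-well contribution
            row.append(p + step if rnd() < 0.5 else p - step)
        new_X.append(row)
    return new_X
```

Each dimension draws its own u, so different dimensions are scaled by different amounts, which produces the asymmetric per-dimension exploration described above.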
Fig. 2
Functional block diagram for Quantum behavior formalism and its application in QPSO iteration
(the broken '- - -' line denotes operations performed once at the beginning, when a new generation starts)
6. Potential Wells and Their Effects on QPSO Performance
The performance of Quantum Particle Swarm Optimization is largely influenced by the characteristics of the chosen potential well. Various types of potential wells have been examined in previous studies. Table 1 lists the equations associated with commonly used potential wells, including the Delta, Harmonic Oscillator, Square Well, Lorentz Potential Field, Rosen-Morse Potential Field, and Coulomb-like Square Root Field. These potential fields exhibit significant differences in their functional formulas and probability density functions [37] [38]. The primary focus here is to understand how these different potential fields affect the exploration of new solutions when utilized within QPSO. The final solution update equations for QPSO corresponding to each potential well are also included in Table 1. For each potential well, the contribution function F(u) indicating the final effect is presented in the last column of the table, where u is a random number generated from a uniform distribution between 0 and 1. The values of F(u) represent changes in each dimension that facilitate exploration. To understand the varying effects of different potential wells on exploration, 100 randomly selected values of u and the corresponding F(u) values have been plotted in Fig. 3. The variability in the performance contributions can be observed. From Fig. 3, it is evident that all contribution graphs can be divided into two regions: (i) the nonlinear region, applicable when u < 0.5, and (ii) the linear region, applicable when u ≥ 0.5. The first region is characterized by non-linearity for all potential wells except the Square Well. The degree of non-linearity is greatest with the Lorentz Potential Field, particularly for lower values of u. The Harmonic, Coulomb, and Rosen-Morse fields exhibit a similar pattern with a lower degree of non-linearity. The Delta well shows a moderate level of variation compared to the two extremes. In the second region, all potential wells display linear characteristics, with the steepest slope observed in the Delta well, resulting in greater changes as u moves from 0.5 to 1.
Table 1
Different types of potential wells, their contribution functions F(u), and solution upgradation equations with the variability in their contribution

Potential Well | Search Update Equation
Delta (D) |
Harmonic Oscillator (HARM) |
Square Well (SQR) |
Lorentz Potential Field (LRZ) |
Rosen-Morse Potential Field (RM) |
Coulomb-like Square Root Field (CLMB) |

p_{i,j}(t) = φ · Pb_{i,j}(t) + (1 − φ) · Gb_j(t)    (15)

X_{i,j}(t+1) = p_{i,j}(t) ± η · |C_j(t) − X_{i,j}(t)| · F(u)    (16)
Fig. 3
Contribution function comparison over different potential wells
The effects of the other potential wells show nearly similar variations. The nonlinear variation in the first region has a significant impact on the direction and distance of the next exploration attempt due to the asymmetrical changes in different dimensions. These changes facilitate global exploration. In contrast, the linear region provides comparatively smaller changes across the associated dimensions, supporting local solution exploration. In summary, there exists a mechanism that balances local and global exploration by providing equal opportunity across all dimensions. Approximately 50% of the dimensional changes are directed towards global exploration through the nonlinear regions, while the remaining 50% support local exploration through smaller changes in the linear regions. The overall effect for a specific dimension is illustrated in Fig. 4. The large differences in nonlinearity patterns among various potential wells in the nonlinear region, particularly for the Lorentz potential well, indicate that using the same value of the CE coefficient η for all types of potential wells may not be advisable. The Lorentz potential well requires a smaller range of CE variation compared to the Delta potential well; otherwise, the search process may jump abruptly from one distant location to another, which can significantly decrease the efficiency of the algorithm. Similarly, potential wells that exhibit smaller nonlinear changes may need to operate with a larger range of CE values.
Fig. 4
Support of global and local exploration by the contribution function
The effects and benefits of integrating quantum mechanics with Particle Swarm Optimization can be understood by analyzing the parameters that come into play after the inclusion of quantum behavior. A more generalized solution upgrade equation for Quantum Particle Swarm Optimization can be represented by Eq. 15 and Eq. 16, where the local attractor is defined as a point on the line between the self-best explored solution and the globally best-achieved solution. The exploration of new solutions from the local attractor is further defined by four parameters: (i) a constant known as the contraction-expansion coefficient η, (ii) the characteristic length L, (iii) the potential well contribution function F(u), and (iv) the probabilistic selection of additive or subtractive arithmetic. The inclusion of quantum behavior in PSO enhances the stochastic approach during the search process by providing different levels of scaling across various dimensions. This leads to a comprehensive range of search possibilities around the local attractor in conjunction with probabilistic additive or subtractive arithmetic. The first parameter, η, serves as a scaling factor that helps control convergence over time and must be less than 1. The characteristic length L and the function F(u) contribute to the magnitude variation in the search process by scaling the dimensions of available solutions. The scaling provided by L reflects the absolute difference of the current solution relative to a reference point (usually the mean of the explored self-best solutions). The function F(u) that governs this behavior is driven by random variables, and its characteristics depend on the type of potential well considered. Additionally, this function provides scaling across different dimensions at varying levels, encompassing both global and local exploration. These parameter variations are inherently positive, with directional changes imparted through additive and subtractive operations relative to the local attractor position within a probabilistic environment. The entire process is compactly illustrated in Fig. 5.
The rotational exploration of new solutions is depicted in Fig. 6, where the search behaviors around the local attractor can be easily observed. It is assumed that the local attractor (LA) is situated in the first quadrant. The rotational behavior during the search for new solutions around the local attractor is influenced by two factors: (i) the amplitude of the quantum contribution (assuming a Delta potential well), and (ii) the probabilistic selection of addition and subtraction as arithmetic operators. The potential search areas in different quadrants are defined under varying conditions, as illustrated in Table 2. It can be concluded that new solutions are explored throughout the entire spatial region surrounding the local attractor.
Fig. 5
Effect of Quantum parameters in solution exploration
Fig. 6
Rotational behavior of exploration all around the local attractor
Table 2
Rotational behavioral condition description of exploration around the local attractor
New Solution | Prob. Sign (X) | X-component | Prob. Sign (Y) | Y-component | Condition | Quadrant Exploration
NS1 | | | | | | 1st
NS2 | | | | | | 2nd
NS3 | | | | | | 3rd
NS4 | | | | | | 4th
7. QPSO with global best characteristic length
Considering the usefulness of the potential well in exploring regions far from the local attractor is a key advantage of Quantum Particle Swarm Optimization over classical Particle Swarm Optimization. Analyzing the position update equation provided in Eq. 16 reveals several interesting aspects. Any quantum version of PSO driven by a potential well incorporates three multiplicative factors:
- A constant factor that controls the convergence rate and the scaling of change.
- A characteristic length that defines the absolute radial distance from the corresponding local attractor for determining positional changes.
- Random-number-based functions that determine both the scaling of change and the direction of that change.
The directional change arises from different dimensions being scaled with varying values. It is interesting to observe that this can be seen as an exploration of new positions surrounding the local attractors.
7.1. Limitations of Existing Quantum PSO and Proposed Solutions
The current quantum version of PSO uses the mean of the individual bests as the reference point in the characteristic length that sets the extent of exploration around any new position. One can envision a cluster of individual bests, with the centroid of this cluster serving as the reference point for defining the exploration length. However, the solution difference from the centroid does not attract the particles; instead, it provides an absolute change in each dimension. Consequently, there is a reduced chance of diverting the direction of exploration. The directions for further scaling are determined by other factors. A significant issue with this version of the characteristic length is its generally small change, especially when the newly explored position becomes the individual's best. This can be illustrated through the geometric representation in Fig. 7, which shows that the local attractor (L) always lies on the line connecting the individual best and the global best. The performance of QPSO can be enhanced by introducing a new characteristic length that uses the global best point as the reference for defining the absolute difference from the previously existing position. This modification facilitates the exploration of larger regions, as depicted in Fig. 8. The explored radius length can vary depending on random numbers; however, there is a higher probability that this characteristic length will be larger than that derived from the cluster centroid. The position update can be defined by Eq. 17.
Fig. 7
Geometrical representation of CL (Pb: individual best, Gb: global best, C: best mean, L: local attractor, CLG & CLM are CL w.r.t. Gb & C)
Fig. 8
Exploration region by different CL around the local attractor (XrG and XrM are the lengths generated by the global best and the best-mean centroid)
X_{i,j}(t+1) = p_{i,j}(t) ± η · |Gb_j(t) − X_{i,j}(t)| · ln(1/u)    (17)
The absolute value of the characteristic length is utilized to explore the vicinity of the local attractor rather than to converge toward the current global best solution. The direction of exploration is determined by a function based on random numbers. This method has the advantage of allowing diverse exploration levels among particles with varying fitness statuses. Particles that are closer to the global best position have a smaller characteristic length, which facilitates exploration in the area surrounding the current global best solution. Conversely, particles that are farther away from the global best produce a larger characteristic length, enabling exploration of regions more distant from the current global best. This initial divergent exploration helps avoid getting trapped in local minima, while the focus eventually shifts to convergent exploration to find the optimal solution. In contrast, the individual-best-mean-based characteristic length maintains roughly an average distance for all particles, potentially hindering exploration throughout the process. Additionally, if the fitness of the best mean becomes better than that of the global best position, the global best position is updated to the position of the mean best. The proposed algorithm for this approach, QPSO with a global best-based characteristic length (QgPSO), is detailed below.
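Under the same assumptions as the earlier sketch, the only change QgPSO makes is the reference point of the characteristic length: the absolute difference is taken from the global best instead of the mean best. A hedged per-particle sketch of the Eq. 17 update (function and variable names are illustrative, not from the paper):

```python
import math
import random

def qgpso_update(x, pb, gb, eta, rnd=random.random):
    """Per-particle position update of QgPSO (sketch of Eq. 17): the
    characteristic length |Gb - X| replaces the mean-best-based |C - X|."""
    new = []
    for j in range(len(x)):
        phi = rnd()
        p = phi * pb[j] + (1 - phi) * gb[j]   # local attractor
        L = abs(gb[j] - x[j])                 # global-best-based characteristic length
        u = rnd() or 1e-12                    # uniform draw, guard against log(0)
        step = eta * L * math.log(1.0 / u)    # Delta-well contribution, as before
        new.append(p + step if rnd() < 0.5 else p - step)
    return new
```

A particle sitting near the global best gets a small L and refines locally, while a distant particle gets a large L and explores broadly, which is exactly the multi-level behavior argued for above.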
7.2. Effect of characteristic length in exploration
To understand the impact of the characteristic length on the exploration of the solution domain, the form of solution upgrade given in Eq. 15 is particularly useful. Three factors contribute to the quantum effect: the CE coefficient, the characteristic length, and a function of a random number. To isolate the effect of the CL on exploration, it can be assumed that the combined effect of the CE coefficient and the random function is a constant value of 0.5, while considering all possible directional variations around the local attractor. In this case, the exploration region of the CL can be represented as a circle with a specified diameter. For analytical purposes, we assume that the local attractor (LA) is positioned at the midpoint between the self-best (Sb) and global best (Gb) solutions. In practice, the LA may be closer to either the global best or the self-best; however, positioning it centrally does not affect the behavioral analysis and allows a comparison of the effects produced by different approaches to the characteristic length. In the configuration shown in Fig. 9 and Fig. 10, it is assumed that the newly discovered positions represent the self-best positions, and we observe the exploration behavior of some particles. Considering two different characteristic lengths, the distance from the best mean as CLM and the distance from the global best as CLG, the following observations were found when comparing both figures:
(i) Diversity of particles: The diversity among particles can lead to different distributions of particle positions relative to the global best position when compared to their mean self-best positions. Particles with characteristic length CLG exhibit more diverse sizes of exploration regions, as the area of these regions is influenced by their distance from the global best position. In contrast, for the same particles with characteristic length CLM, the exploration regions were of similar size.
(ii) Common exploration regions: Particles with characteristic length CLG shared a smaller common region of exploration, thus exploring a larger area of the solution domain compared to particles with characteristic length CLM. This large area of exploration, with minimal overlap, results in faster convergence for characteristic length CLG. Conversely, the higher overlap of exploration regions with characteristic length CLM leads to slower convergence.
(iii) Multi-level exploration: Multi-level exploration occurs in a parallel manner with characteristic length CLG. Particles closer to the global best promote more intensive exploration of the local regions surrounding it due to their restriction to smaller regions (small circles), while those farther from the global best support broader exploration (large circles). This simultaneous multi-level exploration from global to local scales facilitates achieving global optima more quickly. Such multi-level exploration is absent with characteristic length CLM, as particles near or far from the global best treat all regions similarly.
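The three-level behavior described in (i)-(iii) can be checked numerically. The sketch below uses made-up illustrative positions (all values are assumptions) and computes the exploration radius of three particles under both characteristic lengths:

```python
# Compare exploration radii under global-best-based (CLG) vs
# mean-based (CLM) characteristic lengths for three particles.
gb = [0.0, 0.0]                              # assumed global best position
sbs = [[0.2, 0.1], [3.0, 2.5], [9.0, 8.0]]   # self-bests: near, mid, far from Gb
mbest = [sum(p[j] for p in sbs) / len(sbs) for j in range(2)]  # cluster centroid

def radius(p, ref):
    # Euclidean magnitude of the per-dimension characteristic length
    return sum((ref[j] - p[j]) ** 2 for j in range(2)) ** 0.5

clg = [radius(p, gb) for p in sbs]      # grows with distance from Gb
clm = [radius(p, mbest) for p in sbs]   # measured from the centroid instead

# CLG is strictly ordered near < mid < far, enabling parallel local,
# intermediate, and global exploration; CLM is not ordered by distance
# from the global best.
assert clg[0] < clg[1] < clg[2]
assert not (clm[0] < clm[1] < clm[2])
```

The particle nearest the global best receives the smallest CLG radius (local refinement), while the farthest particle receives the largest (global exploration), matching Fig. 9 and Fig. 10.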
Fig. 9
Exploration characteristic behavior by particles having a characteristic length
Fig. 10
Exploration characteristic behavior by particles having a characteristic length
8. Quality Measuring Index (QMindex): a hybrid performance index that captures accuracy and convergence characteristics simultaneously
The final convergence value of an algorithm is commonly viewed as a key performance parameter in terms of accuracy. However, the behavior of the algorithm during its convergence process is often overlooked. This behavior is significantly influenced by the landscape it encounters and the algorithm's response to that landscape. This raises an important question: Is it appropriate to focus solely on the final convergence value when evaluating an algorithm's performance, neglecting the convergence behavior? Previous research efforts have attempted to compare objective function values with final convergence values, but they have not adequately addressed the complete dynamics of the convergence journey and its impact on overall performance [38].
Let's consider a situation involving objective minimization, where two algorithms, Algm1 and Algn1, have demonstrated their convergence characteristics, as shown in Fig. 11. It is evident that Algn1 achieved a better minimum value compared to Algm1, making it the superior algorithm in this comparison. However, when analyzing the dynamics of their convergence, it becomes clear that Algm1 performed better in the early stages of convergence (up to iteration 22). In contrast, its convergence rate slowed down in the later stages. Therefore, the comparative performance of both algorithms can vary depending on the observer's point of view.
Fig. 11
Comparative convergence having the variability in their convergence progress.
In a more complex situation, as illustrated in Fig. 12, the convergence paths of both algorithms intersect multiple times, yet they ultimately reach the same convergence point. This makes it challenging to determine which algorithm is superior. Unlike in Fig. 11, where Algm1 had better accuracy for iterations below 22, the lack of a clear reference point means that observations from any point of convergence do not indicate a clear winner. Such situations are common, especially when irregular structures exist within the landscape, as each algorithm has its strengths and weaknesses when navigating these complexities. Often, there are constraints on allowed execution time, and the goal is to achieve minimum objective function values as quickly as possible. Instead of simply considering the final convergence value as the quality parameter, it is crucial to evaluate the improvement in the objective function value against the computational time invested. If the time spent results in a significant difference in the objective function value, then the investment is justified; otherwise, it may not be worth the effort.
Fig. 12
Convergences having multiple crossing phases in their convergence progress
To achieve a clear comparative performance assessment among different algorithms, this study proposes a novel approach that includes both accuracy and convergence dynamics. The accuracy is measured quantitatively by the values of the achieved objective function. However, obtaining quantitative information about convergence dynamics is not straightforward. This work introduces a method to quantify convergence dynamics by calculating the area under the convergence curve. A larger area under the curve indicates slower convergence, while a smaller area represents faster convergence. The estimation of the area under the convergence curve is based on the trapezoidal structure formed by two consecutive iterations. The process for this estimation is illustrated in Fig. 13, where the area of the shape formed between two consecutive iterations is calculated as the sum of a rectangle and a triangle, as expressed in Eq. 18.
Fig. 13
Area estimation under convergence curve for two consecutive 2nd & 3rd iterations.
A_t = f(t+1) · Δt + (1/2) · (f(t) − f(t+1)) · Δt = ((f(t) + f(t+1)) / 2) · Δt    (18)

where f(t) denotes the best objective value at iteration t and Δt is the iteration step (here Δt = 1).
The total area under the convergence curve is calculated by summing the areas between two consecutive iterations, as shown in Eq. 19. A relative estimation of the area under the curve (AUC) is achieved by normalizing each AUC value concerning the smallest AUC among all considered algorithms. This normalized AUC (NAUC) is expressed in Eq. 20. The minimum value of NAUC will be 1, which is considered the optimal value. When there are multiple test problems, the NAUC is first evaluated for each problem. Later, the overall value of NAUC for each algorithm is determined by calculating the mean across all individual problems, allowing for comparison with other considered algorithms.
AUC = Σ_{t=1}^{T−1} A_t    (19)

NAUC_k = AUC_k / min_j (AUC_j)    (20)
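With a unit iteration step, the trapezoidal accumulation of Eqs. 18–20 reduces to a few lines. The sketch below is illustrative only; `auc` and `nauc` are assumed names, not from the paper:

```python
def auc(curve):
    """Area under a convergence curve by trapezoids (Eqs. 18-19).
    curve[t] is the best objective value at iteration t; step size is 1."""
    return sum((curve[t] + curve[t + 1]) / 2.0 for t in range(len(curve) - 1))

def nauc(curves):
    """Normalize each AUC by the smallest AUC among all algorithms (Eq. 20).
    The best achievable value is 1."""
    areas = [auc(c) for c in curves]
    smallest = min(areas)
    return [a / smallest for a in areas]
```

A faster-converging curve encloses less area, so its NAUC stays closer to the optimal value of 1.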
As mentioned earlier, accuracy estimation is straightforward and directly corresponds to the achieved objective function value. To obtain a normalized value for the individual accuracy over a problem, the relative variation is calculated by dividing the individual accuracy by the best accuracy achieved with the considered algorithms. A more pronounced variation is achieved using the logarithmic value, as expressed in Eq. 21, which is especially effective when dealing with objective function values that are very close to zero. The best normalized relative accuracy (NAcc) is equal to 1. If more than one test problem needs to be evaluated, the mean value of NAcc should be taken as the final value.
NAcc_k = log(Acc_k) / log(Acc_best)    (21)
Considering both NAUC and NAcc, the best score for each metric is equal to 1. A single performance metric, referred to as the Quality Measuring Index (QMindex), has been proposed. It is calculated as the Euclidean distance from the best values, as given in Eq. 22; the algorithm whose distance is closest to zero is considered the best.
QMindex_k = sqrt((NAUC_k − 1)^2 + (NAcc_k − 1)^2)    (22)
To measure algorithm performance, a two-dimensional metric plane has been established. On this plane, the X-axis represents the normalized convergence area, while the Y-axis represents the normalized accuracy. This framework allows the relative superiority of one algorithm's performance over others to be assessed by calculating the Euclidean distance from a given algorithm's performance point to the ideal position, point G, with coordinates (1, 1). If an algorithm excels in both parameters, the distance to point G will be zero; otherwise, a non-zero distance will exist. A smaller distance indicates better performance among the algorithms. For example, as illustrated in Fig. 14, two different algorithms have achieved points P and Q on this plane. The Euclidean distances from these points to point G are GP and GQ, respectively. Since GP is smaller than GQ, Algorithm P (AlgP) is relatively better than Algorithm Q (AlgQ) for the problems considered. Different performance metrics, along with final convergence points and characteristics, as well as corresponding AUC values, are detailed in Table 3.
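The QMindex of Eq. 22 is then a plain Euclidean distance from the ideal point G = (1, 1). A minimal sketch (the function name is an assumption):

```python
import math

def qmindex(nauc_val, nacc_val):
    """Euclidean distance from the ideal point G = (1, 1) (Eq. 22);
    smaller is better, and 0 means best in both accuracy and convergence."""
    return math.hypot(nauc_val - 1.0, nacc_val - 1.0)
```

For instance, an algorithm at (1.2, 1.1) scores a smaller QMindex than one at (1.5, 1.4), mirroring the AlgP vs. AlgQ comparison in Fig. 14.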
Fig. 14
Comparison of two algorithms with QMindex
Table 3
Quality difference metric under different circumstances of accuracy and convergence characteristics.
Accuracy | Convergence Characteristics | AUC | Quality difference
Equal | Same | Equal | 0
Equal | Different | Equal | 0
Equal | Different | Unequal | non-zero value
Unequal | Different | Equal | non-zero value
Unequal | Different | Unequal | non-zero value
9. Experimental Outcomes & Analysis Over Benchmark Functions
Benchmark functions for numeric optimization are groups of functions that embody optimization problems, displaying various characteristics such as being constrained or unconstrained, having continuous or discrete variables, and being either unimodal or multimodal. For any new algorithm, testing and validation using benchmark functions is crucial for evaluating its performance compared to other algorithms. This benchmarking is also essential for gaining a better understanding of the algorithm's advantages and disadvantages.
9.1 Experiments over single objective functions benchmark CEC2005
In this study, a set of 19 test functions sourced from CEC2005 was considered, among which F1 to F7 are unimodal functions and the remainder are multimodal functions. Variants of Quantum Particle Swarm Optimization were developed along two axes: (i) the type of potential well and (ii) the form of the characteristic length. The different types of potential wells allow distinct probability density functions to be used during the search process. The potential wells considered include the Delta potential, Rosen-Morse (Solitons), and the Harmonic Oscillator. The different characteristic lengths were constructed by taking the reference point either as the mean of the individual bests or as the global best achieved so far. Details regarding the dimension size, search range, and the allowed number of iterations are provided in the Appendix. Most functions have a dimension of 30, while a few low-dimensional functions have dimensions of 2, 4, or 3. Statistical confirmation of the mean and standard deviation was estimated over 100 independent trials, with a population size of 100 for all experimental cases. The contraction-expansion coefficient (η) decreased linearly from 0.9 to 0.6 across iterations, as specified in Eq. 13. The main objective of the experiments was to evaluate the relative effectiveness of the proposed characteristic length based on the achieved global best solution compared to the conventional approach using the mean individual best. For each potential well, experiments were conducted using both forms of characteristic length while keeping all other parameters constant. Instead of relying solely on the mean best, a fitness-based weighting formed from the mean best [35] was also explored. Performance evolution was analyzed through visual interpretation of the mean best solution convergence graphs (Fig. 15) and numerical comparison of the final mean best convergence values along with their standard deviations (Table 4). The statistical significance of performance differences was assessed using a two-sample unpaired t-test at a 5% significance level with 99 degrees of freedom. The results of the test statistic are presented in Table 5, with statistical significance classified as better (B), not significant (NS), or inferior (IN). A final performance evaluation was conducted based on the total scores achieved by each algorithm across all functions, allowing comparisons to be made. A hybrid approach was also considered for developing characteristic lengths, which randomly used either the mean of the self-bests or the globally best solution for population member upgrades. When upgrading an individual member within the population, the characteristic length category was selected by comparing a randomly generated number (uniformly distributed between 0 and 1) with a threshold value of 0.5.
If the random number was greater than or equal to the threshold, the global best-based characteristic length was used; otherwise, the mean-based approach was applied. This method ensured that approximately half of the population followed the global best characteristic length while the other half followed the mean of the best approach, thus attempting to capture the benefits of both the average trend and the global best. The approaches for forming characteristic lengths with all three methods are illustrated in Fig. 15. As depicted in Fig. 15(a), the individual member characteristic length is derived from the absolute difference in length from the mean of the best individual; in Fig. 15(b), it is the absolute difference from the achieved global best solution; and in Fig. 15(c), the characteristic length results from the difference between either the mean of the best individual or the global best within a probabilistic environment. The best mean convergence graphs for all functions utilizing the mean-based characteristic length are shown with a broken line (- - -), while the corresponding version with the global best variant of the characteristic length is illustrated with solid lines
. Variants of the different potential wells are differentiated by color in the final convergence graph of function F19.
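The hybrid selection rule described above can be sketched as follows. The 0.5 threshold comes from the text; the helper name and argument layout are assumptions for illustration:

```python
import random

def pick_char_length(x_ij, mbest_j, gb_j, rnd=random.random, threshold=0.5):
    """Hybrid characteristic-length selection for one dimension of one
    particle: with probability ~0.5 use the global-best-based CL,
    otherwise the mean-best-based CL (sketch; names are illustrative)."""
    if rnd() >= threshold:
        return abs(gb_j - x_ij)      # global-best-based characteristic length
    return abs(mbest_j - x_ij)       # mean-best-based characteristic length
```

Over many upgrades, roughly half the population follows the global best characteristic length and the other half the mean-best one, as described above.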
For all the unimodal functions F1-F7, a global minimum of zero exists. Considering the Delta potential well, all three variants—QmDPSO, QwDPSO, and QgDPSO—attained accuracies very close to zero, except for function F5. For function F6, the minimum of zero was achieved by all variants. In terms of comparative performance, QwDPSO demonstrated the lowest accuracy, except for F5, where it performed better than QmDPSO. As seen in Table 4, significantly better accuracy was achieved with the characteristic length defined through the global best, compared to using the mean best or a fitness-oriented weighted version. The performance of QgDPSO was notably superior to both QwDPSO and QmDPSO. For function F5, QmDPSO and QwDPSO had accuracies on the order of 1e+01, while QgDPSO achieved an accuracy on the order of 1e-01. In addition to accuracy, the reliability of QgDPSO was considerably better, as indicated by its very small standard deviation. QgDPSO exhibited better and faster convergence for all considered unimodal functions compared to QmDPSO. It is worth noting that the convergence behavior of QwDPSO was relatively poor across all unimodal functions. The hybrid version, QhDPSO, showed some improvements in accuracy compared to QgDPSO for functions F1, F2, F4, and F7, bringing the outcomes closer to the optimum. However, for functions F3 and F5, its performance was much inferior, approaching that of QmDPSO. A global optimum was achieved for function F6. The convergence behavior of QhDPSO followed the numerical accuracy pattern and closely matched the convergence performance of QgDPSO for F1, F2, F4, F6, and F7, but was inferior for functions F3 and F5. Considering the Soliton potential well, the numeric accuracy of QgSPSO was superior to QmSPSO for all the unimodal functions, demonstrating the advantage of a characteristic length based on the global best solution rather than the mean best solution. Initially, the convergence behavior of QmSPSO was similar to that of QgSPSO, but it later lost exploration capability, whereas QgSPSO remained on the path of discovering new solutions. With the Harmonic Oscillator potential well, the trend of achieving better accuracy and convergence with the global best-based characteristic length continued. As shown in Table 4, the outcomes of QgHPSO were significantly better than those of QmHPSO, except for function F6, where exploration was lost in a few trials. The convergence characteristics of QgHPSO were outstanding and much faster.
Fig. 15
Different modes of characteristic length formation for an individual member of the population: (a) Mean-based (b) Global best, (c) hybrid of mean and global best
Table 4
Best mean performances (along with standard deviations) of different variants of QPSO over 100 independent trials. The comparisons are given across the different variations of characteristic length under an assumed potential well. (In the algorithm nomenclature, the format QcWPSO is used, where c denotes the type of characteristic length and W the type of potential well. For the CL position c: m = best mean, w = weighted best mean, g = global best, h = hybrid; for the potential well position W: D = Delta, S = Soliton, H = Harmonic Oscillator.)
| Fun | QmDPSO | QwDPSO | QgDPSO | QhDPSO | QmSPSO | QgSPSO | QmHPSO | QgHPSO |
|---|---|---|---|---|---|---|---|---|
| F1 | 1.8167e-35 (9.5350e-35) | 3.8093e-24 (1.5485e-23) | 4.4883e-42 (8.9408e-42) | 1.0845e-46 (2.4216e-46) | 5.0039e-25 (3.1298e-24) | 4.4260e-45 (2.1020e-44) | 4.0674e-17 (1.5973e-16) | 1.0803e-45 (1.0206e-44) |
| F2 | 3.9750e-31 (1.5948e-30) | 1.9858e-21 (5.6275e-21) | 7.2109e-38 (2.4217e-37) | 7.6334e-41 (2.0146e-40) | 2.8044e-25 (2.2571e-24) | 1.5433e-36 (1.0699e-35) | 1.0288e-13 (6.2189e-13) | 6.1746e-31 (3.2540e-30) |
| F3 | 6.6190e-05 (2.0334e-04) | 2.5276e-03 (3.7268e-03) | 4.0834e-10 (6.4700e-10) | 4.5792e-05 (2.8521e-04) | 8.1519e-03 (2.1412e-02) | 4.0854e-11 (6.6066e-11) | 1.1530e-02 (3.1372e-02) | 8.3648e-12 (2.3328e-11) |
| F4 | 3.3441e-13 (1.1592e-12) | 4.9289e-09 (9.0037e-09) | 5.6502e-16 (1.5022e-15) | 3.7275e-17 (8.1376e-17) | 3.5028e-09 (6.2910e-09) | 1.7479e-12 (6.8642e-12) | 1.1802e-06 (2.1197e-06) | 2.0977e-11 (5.2837e-11) |
| F5 | 2.9312e+01 (2.3949e+01) | 2.1819e+01 (1.5898e+01) | 2.1032e-01 (9.8092e-01) | 1.4111e+01 (1.1578e+01) | 2.8106e+01 (2.1317e+01) | 7.8161e-01 (1.4697e+00) | 2.6245e+01 (1.9025e+01) | 5.3012e-01 (1.3519e+00) |
| F6 | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 3.0000e-02 (1.7145e-01) |
| F7 | 4.0183e-97 (4.0141e-96) | 1.3049e-67 (1.0663e-66) | 4.5381e-121 (3.2435e-120) | 4.6562e-136 (2.5316e-135) | 1.4803e-69 (1.2339e-68) | 1.2576e-130 (1.2076e-129) | 7.1850e-49 (5.0906e-48) | 8.8512e-146 (6.7595e-145) |
| F8 | -0.9755e+03 (1.7832e+03) | -7.4948e+03 (1.8093e+03) | -1.1477e+04 (2.5840e+02) | -1.1788e+04 (2.5554e+02) | -6.3345e+03 (1.5981e+03) | -1.1209e+04 (3.0925e+02) | -8.0009e+03 (1.0676e+03) | -1.0596e+04 (4.2303e+02) |
| F9 | 9.9951e+00 (4.2080e+00) | 8.8578e+00 (2.9474e+00) | 1.3601e+01 (3.6442e+00) | 1.0765e+01 (3.2599e+00) | 1.0208e+01 (3.3603e+00) | 1.6118e+01 (4.6775e+00) | 1.3541e+01 (3.5238e+00) | 2.3163e+01 (6.6872e+00) |
| F10 | 1.2363e-14 (3.7057e-15) | 2.7523e-13 (3.1138e-13) | 1.6236e-14 (3.9400e-15) | 1.2648e-14 (3.8022e-15) | 1.0360e-13 (4.1622e-13) | 1.9043e-14 (5.5751e-15) | 2.6505e-09 (5.3433e-09) | 2.3768e-14 (1.1044e-14) |
| F11 | 8.0990e-03 (1.0757e-02) | 9.5215e-03 (1.2337e-02) | 1.3500e-02 (1.7577e-02) | 8.8556e-03 (1.3938e-02) | 3.7480e-03 (7.1361e-03) | 8.8828e-03 (1.2810e-02) | 6.9674e-03 (8.6789e-03) | 1.0769e-02 (1.5816e-02) |
| F12 | 1.5705e-32 (3.5759e-47) | 3.0911e-28 (9.8612e-28) | 1.5705e-32 (3.3019e-47) | 1.5705e-32 (3.5759e-47) | 2.6394e-21 (2.5733e-20) | 1.5705e-32 (3.0273e-47) | 2.7935e-08 (2.6649e-07) | 1.5705e-32 (2.2041e-47) |
| F13 | 2.0573e-09 (1.9735e-08) | 5.4674e-12 (5.4400e-11) | 4.7489e-32 (1.8318e-31) | 4.2442e-16 (2.9065e-15) | 6.9631e-12 (6.9282e-11) | 4.8365e-32 (1.8686e-31) | 1.0584e-08 (9.4810e-08) | 3.0337e-31 (2.3746e-30) |
| F14 | 9.9805e-01 (4.0788e-04) | 9.9801e-01 (7.2126e-06) | 9.9800e-01 (4.2165e-16) | 9.9800e-01 (4.4277e-14) | 9.9849e-01 (2.2144e-03) | 9.9800e-01 (2.7423e-16) | 9.9812e-01 (3.9764e-04) | 9.9800e-01 (1.2106e-15) |
| F15 | 3.8849e-04 (2.2876e-04) | 3.1665e-04 (9.1568e-05) | 3.2580e-04 (1.2884e-04) | 3.3514e-04 (1.5698e-04) | 3.2678e-04 (1.2908e-04) | 3.3867e-04 (1.8077e-04) | 3.8446e-04 (2.6464e-04) | 4.3948e-04 (3.1287e-04) |
| F16 | -1.0316e+00 (1.3380e-06) | -1.0316e+00 (4.5507e-07) | -1.0316e+00 (1.4835e-15) | -1.0316e+00 (1.0123e-14) | -1.0316e+00 (6.0032e-06) | -1.0316e+00 (1.4813e-15) | -1.0316e+00 (2.2297e-05) | -1.0316e+00 (1.5073e-15) |
| F17 | 3.9794e-01 (1.5238e-04) | 3.9796e-01 (3.3602e-04) | 3.9789e-01 (1.0600e-15) | 3.9789e-01 (7.6899e-10) | 3.9811e-01 (7.9383e-04) | 3.9789e-01 (1.0600e-15) | 3.9824e-01 (1.5806e-03) | 3.9789e-01 (1.0600e-15) |
| F18 | 3.0000e+00 (2.1351e-14) | 3.0000e+00 (2.5721e-15) | 3.0000e+00 (1.5358e-15) | 3.0000e+00 (2.7866e-15) | 3.0000e+00 (1.5686e-14) | 3.0000e+00 (1.7836e-15) | 3.0000e+00 (1.7792e-15) | 3.0000e+00 (1.8009e-15) |
| F19 | -7.5915e+00 (3.3521e+00) | -7.7915e+00 (3.1392e+00) | -6.9950e+00 (3.4583e+00) | -5.8202e+00 (3.5467e+00) | -7.6845e+00 (3.3460e+00) | -6.9581e+00 (3.4988e+00) | -6.8308e+00 (3.6101e+00) | -6.1045e+00 (3.6510e+00) |
For the multimodal functions with a high dimension of 30, the accuracy of the different variants varied across functions. Considering the Delta potential well, for function F8, the accuracy of QgDPSO was significantly better than that of both QmDPSO and QwDPSO. The differences in quality are clearly illustrated by the convergence characteristics in Fig. 16(h), where the mean best and fitness-weighted forms of characteristic length followed a nearly identical path of convergence, while the global best form demonstrated much faster convergence. For function F9, QwDPSO performed best, whereas QgDPSO was comparatively inferior to the other two variants; however, remarkable differences were observed in the convergence behavior of all three. QwDPSO and QmDPSO exhibited very slow exploration and appeared trapped in local optima for an extended time, taking many iterations to escape, whereas QgDPSO's multilevel exploration helped prevent it from getting trapped in nearby local solutions in the early phase. For function F10, the accuracy and convergence performances were nearly identical across all three algorithms, showing no significant differences. In function F11, QmDPSO achieved slightly better accuracy, but QgDPSO exhibited faster convergence. The performances of QgDPSO and QmDPSO were the same for function F12, while QwDPSO lagged slightly in accuracy. The multimodal behavior of function F13 presented challenges for both QmDPSO and QwDPSO, leading to traps in local minima; however, QgDPSO consistently found the global solution, and its convergence behavior was highly commendable, providing opportunities for consistent exploration. The accuracy of QhDPSO was nearly the same as that of QgDPSO, except for function F13, where it outperformed QmDPSO but was still inferior to QgDPSO.
Considering the Soliton's potential well, mixed benefits were noted for the multimodal functions with the mean best-based characteristic length: QmSPSO achieved better accuracy for functions F9 and F11, while showing poor performance for functions F8, F10, F12, and F13. When the characteristic length based on the global best was applied, significant improvements were observed, and the convergence of QgSPSO was better and occurred in the early stages. The experimental performances with the harmonic oscillator followed a pattern similar to that of the Soliton's well. The accuracy of QmHPSO was better for functions F9 and F11, while performance lagged for functions F8, F10, F12, and F13, improving significantly with QgHPSO. As previously observed for unimodal functions, the convergence of QgHPSO was also much faster for multimodal functions.
For the comparison of performance on the low-dimensional multimodal functions F14 to F19, the results were not surprising. For F14, all three variants based on the Delta potential well were able to achieve the optimal solution; however, the standard deviation with QgDPSO (~10^-16) was much smaller than with QmDPSO (~10^-4) and QwDPSO (~10^-6). Thus, the reliability of QgDPSO was better, and its convergence was also faster. The performance of QhDPSO was nearly the same as that of QgDPSO, although slightly inferior in reliability, while the convergence paths were quite similar. Accuracy performances with the Soliton and Harmonic Oscillator wells were comparable, but faster convergence and better reliability were achieved with the proposed form of characteristic length. For F15, QgDPSO outperformed QmDPSO in both convergence and accuracy, although QwDPSO achieved the minimum value among all variants. With the other potential wells, significant performance differences were not evident, although there was a slight improvement with the mean best-based characteristic length. QhDPSO showed accuracy close to QmDPSO but was better overall. In F16, the accuracy performances were nearly identical across all QPSO variants, though better reliability was observed with the global best-based characteristic length. QgDPSO demonstrated improved performance in F17 compared to QmDPSO and QwDPSO in both accuracy and reliability, with faster convergence as well. With QgSPSO and QgHPSO, accuracy and reliability improved and the global minimum was achieved, while QmSPSO and QmHPSO were unable to reach it. In F18, all variants exhibited nearly equal performance, achieving the global optimum with high reliability (standard deviations of ~10^-15 to 10^-14). For F19, the accuracy performances were better for the mean best-based characteristic-length variants of QPSO, with QwDPSO showing the best performance overall. Convergence graphs indicated that QwDPSO was the fastest, followed by QmDPSO.
In conclusion, the following observations were made from the analysis of the convergence graphs and accuracy:
(i)
In exploring new solutions, the nature of the characteristic length plays a more important role compared to the type of potential well.
(ii)
When considering the mean-based form of characteristic length, the performance of the Delta potential well was more impressive than that of the Soliton and Harmonic Oscillator wells.
(iii)
Improvements were observed with the global-based strategy for characteristic length compared to the mean-based strategy, regardless of the types of potential wells considered.
(iv)
The global best-based strategy for defining characteristic length resulted in much faster convergence compared to the mean best strategy.
(v)
Probabilistically assigning either the global best- or mean best-based characteristic length to members of the same population proved advantageous and may enhance accuracy.
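As a concrete illustration of conclusion (v), the position update below sketches how the two characteristic-length strategies, and a probabilistic hybrid of them, can be realized in the widely used Delta-well QPSO update form. This is a minimal sketch: the contraction-expansion factor `alpha` and the mixing probability `p_g` are illustrative assumptions, not tuned values from this work.

```python
import math
import random

def qpso_position_update(x, p_attr, mbest, gbest, alpha=0.75, p_g=0.5):
    """One-dimensional Delta-well QPSO move around the local attractor p_attr.

    The characteristic length L is formed from the mean best (classical),
    the global best (proposed), or chosen probabilistically between the two
    (hybrid), as suggested in conclusion (v). Parameter values are
    hypothetical.
    """
    # Pick the global-best-based length with probability p_g (hybrid rule);
    # p_g = 1.0 gives the pure global best strategy, p_g = 0.0 the mean best.
    if random.random() < p_g:
        L = 2.0 * alpha * abs(gbest - x)   # global best: asymmetric exploration depth
    else:
        L = 2.0 * alpha * abs(mbest - x)   # mean best: classical symmetric form
    u = 1.0 - random.random()              # uniform in (0, 1], avoids log(1/0)
    sign = 1.0 if random.random() < 0.5 else -1.0
    return p_attr + sign * 0.5 * L * math.log(1.0 / u)
```

Note that when the particle already coincides with both attractors, the length collapses to zero and the move lands exactly on the local attractor; this is what lets particles near the global best refine locally while distant particles keep exploring.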
Fig. 16
Convergence characteristics of different variants of QPSO over the CEC2005 numeric benchmark problems
9.1.1. Statistical interpretation of accuracy
To compare statistical performance levels, unpaired two-sample t-tests were conducted at a 5% significance level with 99 degrees of freedom across all achieved results. The t-statistic values are presented in Table 5. The performances were evaluated using the two-sample t-test function available in MATLAB to determine how the proposed global best variant of characteristic length in QPSO performed compared to the mean best strategy. The performance outcomes for the global best variants were categorized as 'Better' (B), 'Inferior' (IN), and 'Not Significant' (NS). The totals over all 19 functions were then summed for each category and are detailed in Table 6. It was observed that QgDPSO performed better than QmDPSO on eight functions, was inferior on three, and showed no statistically significant difference on eight. QgDPSO also outperformed QwDPSO, with improvements on thirteen functions, inferior performance on one, and no significant difference on five. QgSPSO was superior to QmSPSO on eleven functions, inferior on two, and not significantly different on six. Similarly, QgHPSO performed better than QmHPSO on ten functions, was inferior on two, and showed no statistically significant difference on seven.
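The testing and categorization procedure can be sketched in plain Python as follows. This is a hedged stand-in for the MATLAB routine: the pooled-variance unpaired t-statistic is the standard textbook form, and the two-tailed 5% critical value `t_crit` is an assumed constant, not taken from this work.

```python
import math

def t_statistic(a, b):
    """Pooled-variance two-sample unpaired t-statistic."""
    n1, n2 = len(a), len(b)
    m1 = sum(a) / n1
    m2 = sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    if sp == 0.0:
        # Identical constant samples (e.g. the all-zero F6 rows in Table 5)
        return 0.0
    return (m1 - m2) / (sp * math.sqrt(1.0 / n1 + 1.0 / n2))

def categorize(t, t_crit=1.984):
    """Map a t-statistic on error values to the labels used in Table 5
    (t_crit is an assumed two-tailed 5% critical value)."""
    if t < -t_crit:
        return "B"    # proposed variant significantly better (lower error)
    if t > t_crit:
        return "IN"   # proposed variant significantly inferior
    return "NS"       # no statistically significant difference
```

For instance, the F2 entry of -2.4925 for QgDPSO vs. QmDPSO falls below the negative critical value and is therefore labeled 'B'.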
Table 5
t-statistic values of the two-sample unpaired t-test at the 5% significance level with 99 degrees of freedom.
 
| Fun | QgDPSO vs. QmDPSO | QgDPSO vs. QwDPSO | QgSPSO vs. QmSPSO | QgHPSO vs. QmHPSO |
|---|---|---|---|---|
| F1 | -1.9053 NS | -2.4600 B | -1.5988 NS | -2.5464 B |
| F2 | -2.4925 B | -3.5288 B | -1.2425 NS | -1.6543 NS |
| F3 | -3.2551 B | -6.7821 B | -3.8071 B | -3.6754 B |
| F4 | -2.8798 B | -5.4743 B | -5.5652 B | -5.5678 B |
| F5 | -12.1413 B | -13.5657 B | -12.7875 B | -13.4825 B |
| F6 | 0 NS | 0 NS | 0 NS | 1.7498 NS |
| F7 | -1.0011 NS | -1.2237 NS | -1.1996 NS | -1.4114 NS |
| F8 | -24.9851 B | -21.7909 B | -29.9445 B | -22.5994 B |
| F9 | 6.4778 IN | 10.1203 IN | 10.2615 IN | 12.7284 IN |
| F10 | 7.1595 IN | -8.3170 B | -2.0313 B | -4.9603 B |
| F11 | 2.6207 IN | 1.8525 NS | 3.5017 IN | 2.1074 IN |
| F12 | 0 NS | -3.109 B | -3.8384 B | -3.6430 B |
| F13 | -1.0320 NS | -4.569 B | -3.5870 B | -3.1625 B |
| F14 | -1.0804 NS | -2.0854 B | -2.2135 B | -2.8468 B |
| F15 | -2.3877 B | 0.5789 NS | 0.5351 NS | 1.3428 NS |
| F16 | -2.7098 B | -2.7004 B | -2.7511 B | -1.5797 NS |
| F17 | -3.5319 B | -2.1239 B | -2.8418 B | -2.2494 B |
| F18 | -1.8671 NS | -7.4121 B | -3.6568 B | 0 NS |
| F19 | 1.2386 NS | 1.7054 NS | 1.5005 NS | 1.4145 NS |
Table 6
Statistical significance-based score comparison of the global best-based QPSO variants against the other versions of QPSO.

| Score characteristics | QgDPSO vs. QmDPSO | QgDPSO vs. QwDPSO | QgSPSO vs. QmSPSO | QgHPSO vs. QmHPSO |
|---|---|---|---|---|
| Better | 8 | 13 | 11 | 10 |
| Inferior | 3 | 1 | 2 | 2 |
| Not Significant | 8 | 5 | 6 | 7 |
9.1.2 Effect of High Dimension on QPSO Performance
As the dimension size increases, exploration of the landscape becomes more challenging for both unimodal and multimodal problems; almost all meta-heuristic algorithms face similar difficulties when the dimension reaches around 100 or more. In the past, few studies have examined the effects of high dimensions on QPSO performance. The purpose of this section is to evaluate the effect of the proposed modified characteristic length in QgDPSO compared to the classical mean-based characteristic length in QmDPSO. For experimental purposes, two extended dimensions, 100 and 500, were considered for three unimodal problems (F1, F2, F5) and three multimodal problems (F9, F10, F11) with both algorithms. Each problem was tested over 10 independent trials with a budget of 20000 iterations per trial. To facilitate comparison, the mean best-solution convergence at dimensions 100 and 500 using QmDPSO and QgDPSO is presented in the same figure, as shown in Fig. 17. This serves two purposes: (i) to clarify the impact of dimension size on performance and (ii) to analyze the behavior of QgDPSO relative to QmDPSO as the dimension increases. The comparative final best mean values and their standard deviations for all six problems are presented in Table 7. The convergence graphs and numerical outcomes confirm that increased dimension size leads to greater difficulty in finding optimal solutions: for all problems, when the dimension increased from 100 to 500, there was a sharp decline in convergence characteristics. For the 100-dimensional problems, QgDPSO demonstrated accuracy superior or nearly equal to QmDPSO (except in F9), but with significant differences in the speed of convergence.
With QgDPSO, solution exploration occurred at a much faster rate, resulting in quicker convergence. Specifically, over the problems F1 and F2, the accuracy of QmDPSO was on the order of 10^-49 and 10^-38, while QgDPSO achieved accuracies on the order of 10^-133 and 10^-93. For problems F5 and F10, the accuracy performances of both algorithms were nearly identical, but QgDPSO showed a significant advantage in convergence speed. While QmDPSO achieved slightly better accuracy in F5, a close look at the convergence characteristics shows that QgDPSO converged nearly 3.5 times faster: convergence occurred in about 5000 iterations, whereas QmDPSO took around 18000 iterations to reach a similar point. Interestingly, over F11, where QmDPSO performed better at 30 dimensions, it became less effective than QgDPSO when the dimension increased to 100. In every trial, QgDPSO consistently achieved the optimal value of 0, with approximately twice the speed of convergence.
Table 7
Effect of high dimensions on the performances of QmDPSO and QgDPSO

| Fun. | QmDPSO (Dim 100) | QgDPSO (Dim 100) | QmDPSO (Dim 500) | QgDPSO (Dim 500) |
|---|---|---|---|---|
| F1 | 1.4695e-49 (4.3882e-49) | 3.8670e-133 (1.1320e-132) | 3.3220e+01 (2.8848e+01) | 2.4870e-11 (9.2127e-11) |
| F2 | 6.8272e-38 (1.4803e-37) | 3.9945e-93 (5.3843e-93) | 1.4516e+00 (1.1371e+00) | 3.7993e-10 (5.3390e-10) |
| F5 | 9.5473e+01 (2.3511e+01) | 1.0399e+02 (2.2635e+01) | 2.9548e+04 (8.8906e+03) | 1.9655e+03 (3.3490e+02) |
| F9 | 5.9583e+01 (8.0860e+00) | 7.9199e+01 (1.4700e+01) | 9.3438e+02 (2.2404e+02) | 5.5304e+02 (7.7495e+01) |
| F10 | 3.4284e-14 (8.1790e-15) | 4.2810e-14 (1.5888e-15) | 1.5672e+00 (2.1490e-01) | 1.0991e-06 (8.9978e-07) |
| F11 | 1.4792e-03 (3.3076e-03) | 0 (0) | 1.3914e+00 (1.5423e-01) | 2.1242e-11 (1.7500e-11) |
As the dimensionality increased to 500, it became very difficult for QmDPSO to converge, while QgDPSO performed significantly better across all problems in terms of accuracy and convergence. For the unimodal functions F1 and F2, QmDPSO struggled to move toward convergence, whereas QgDPSO achieved convergence values very close to the optimal value of 0 (within the order of 10^-10). QmDPSO consistently demonstrated poor convergence and low accuracy over the functions F5, F9, and F10 compared to QgDPSO. Interestingly, for the considered multimodal functions F10 and F11, QgDPSO managed to achieve accuracy levels of 10^-6 and 10^-11, respectively, while QmDPSO remained stuck at values of order 10^0. In conclusion, as the problem dimension increased, the performance of QgDPSO improved significantly relative to QmDPSO in terms of accuracy and convergence for both unimodal and multimodal problems.
Fig. 17
Mean convergence characteristics over unimodal and multimodal functions for 100 and 500 dimensions using QmDPSO and QgDPSO.
9.1.3. Estimation for different algorithms
The numerical accuracy of the algorithms was compared in two different contexts: (i) when the characteristic length equation changes while the same potential well function is maintained, and (ii) in terms of the relative performances among all considered algorithmic structures.
Case I: The mean performances of the different algorithms, assessed in terms of NAcc and NAUC across the 19 test problems, are summarized in Table 8. The goal is to evaluate the effect of the proposed form of characteristic length under various forms of potential wells. The comparative performances of QmDPSO, QwDPSO, and QgDPSO are presented in the first section, specifically for the Delta potential well. In terms of accuracy, QmDPSO performed better than QwDPSO, although the difference was not substantial, while QgDPSO exhibited the best accuracy, very close to the optimal value of 1. Regarding convergence speed, QwDPSO outperformed QmDPSO with a smaller area under the convergence curve, while QgDPSO again achieved the best performance, nearing the optimal value of 1. The distance values were very close to 0 for QgDPSO, whereas the other two algorithms were much farther from the best position. In summary, while QwDPSO ranked higher than QmDPSO overall, QgDPSO excelled in all aspects. The performances of QgDPSO and QhDPSO were quite similar, with no significant differences; QgDPSO displayed slightly better overall accuracy, while QhDPSO had a marginally better convergence area. Although there was no clear winner on both parameters, QhDPSO achieved the better rank. The proposed characteristic length showed significant improvements with the Soliton potential well, with marked enhancements in both accuracy and convergence area; QmSPSO's performance was notably inferior to that of QgSPSO. This success, attributed to the modification in characteristic length, was also replicated with the harmonic oscillator potential well, as evidenced by the performance of QgHPSO compared to QmHPSO. Thus, it can be concluded that QgPSO under different potential wells improved significantly, not only in accuracy but also through faster convergence.
The plots of accuracy and convergence area across the QM plane are illustrated in Fig. 18 for various algorithmic structures.
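The tabulated QMindex values are consistent with reading the index as the Euclidean distance of an algorithm's (NAcc, NAUC) pair from the ideal point (1, 1) in the QM plane. Under that assumption (an inference from the numbers, not a quoted definition), a minimal sketch:

```python
import math

def qm_index(nacc, nauc):
    """Distance from the ideal point (1, 1) in the (NAcc, NAUC) plane;
    smaller is better. Assumed form, consistent with the Table 8 values."""
    return math.hypot(nacc - 1.0, nauc - 1.0)

def rank_by_qm(perf):
    """Rank algorithm names by ascending QMindex; perf maps
    name -> (NAcc, NAUC)."""
    return sorted(perf, key=lambda name: qm_index(*perf[name]))
```

For example, `qm_index(0.9346, 3.1348)` evaluates to about 2.1358, matching the QmDPSO entry in Table 8.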
Table 8
Comparative performances of the proposed QgPSO under different potential wells.

| Features | QmDPSO | QwDPSO | QgDPSO |
|---|---|---|---|
| NAcc | 0.9346 | 0.8539 | 0.9987 |
| NAUC | 3.1348 | 2.9134 | 1.0465 |
| QMindex | 2.1358 | 1.9190 | 0.0465 |
| Rank | 3 | 2 | 1 |

| Features | QgDPSO | QhDPSO |
|---|---|---|
| NAcc | 0.9799 | 0.9726 |
| NAUC | 1.1054 | 1.1024 |
| QMindex | 0.1073 | 0.1060 |
| Rank | 2 | 1 |

| Features | QmSPSO | QgSPSO |
|---|---|---|
| NAcc | 1.4469 | 0.9970 |
| NAUC | 3.2003 | 1.0708 |
| QMindex | 2.2453 | 0.0709 |
| Rank | 2 | 1 |

| Features | QmHPSO | QgHPSO |
|---|---|---|
| NAcc | 0.9385 | 0.9997 |
| NAUC | 1.7888 | 1.0341 |
| QMindex | 0.7912 | 0.0341 |
| Rank | 2 | 1 |
Case II: The purpose of this section is to understand the effectiveness of different types of potential wells in delivering performances based on the various approaches to characteristic length in QPSO. The mean performances obtained are presented in Table 9, which covers the 19 test problems. Using the NAcc and NAUC values, the QMindex values were estimated, and performance ranks were assigned accordingly. Additionally, the accuracy and area parameters are illustrated in the QM plane in Fig. 19. It can be observed that QPSO, regardless of the type of potential well used, improved with the global best function of characteristic length, bringing it closer to the optimal position. The histogram plot displaying all performance parameters is presented in Fig. 20, clearly indicating more variation in convergence speed overall. Among all the variations of QPSO considered, the Delta potential well combined with the global best function of characteristic length in QgDPSO demonstrated the best performance across all parameters.
Table 9
Comparative performances of different variations of QPSO under different potential wells.

| Features | QmDPSO | QwDPSO | QgDPSO | QhDPSO | QmSPSO | QgSPSO | QmHPSO | QgHPSO |
|---|---|---|---|---|---|---|---|---|
| NAcc | 0.9050 | 0.8316 | 0.9609 | 0.9573 | 0.8281 | 0.9214 | 0.7531 | 0.9282 |
| NAUC | 3.8865 | 3.6491 | 1.2562 | 1.2828 | 4.5633 | 1.3651 | 2.3380 | 1.2812 |
| QMindex | 2.8881 | 2.6544 | 0.2592 | 0.2860 | 3.5674 | 0.3735 | 1.3606 | 0.2902 |
| P. Rank | 7 | 6 | 1 | 2 | 8 | 4 | 5 | 3 |
Fig. 18
Relative area and accuracy presentation for algorithms in the QM plane with the mean/global best characteristic function for QPSO (the marked point indicates the best position value)
Fig. 19
Relative area and accuracy presentation for different algorithms in the QM plane
Fig. 20
Histogram presentation of performance parameters
9.2. Experiments over single objective functions benchmark CEC2017
Advancements in single-objective optimization algorithms enable the development of algorithms for multi-objective optimization, which often involve additional types of constraints and increased complexity. Benchmarking for single-objective functions can incorporate dynamic elements, niching strategies, and combinations of various problem classes. The single-objective function benchmark from the 2017 IEEE Congress on Evolutionary Computation (CEC2017) includes composite functions, graded linkages, and rotated trap functions, among other complexities [63].
As suggested in [63], different feature parameters are extracted from the performance outcomes, namely Best, Worst, Mean, Standard deviation, and Median, to capture the overall quality from all angles. The extreme edge values, Best and Worst, identify the best and worst possible outcomes that may be achieved by the algorithm. The Mean value captures the overall expected outcome, while the standard deviation indicates the reliability of the outcomes. The Median captures overall quality in another way and is particularly useful for resisting outliers and skewed distributions, making it more robust in many real-life scenarios. A total of 28 functions (function F2 was removed from the competition due to its instability) have been considered in two different situations: (i) comparison of the proposed QgDPSO against the recently published Farmer and Seasons Algorithm (FSA) [64] with a dimension size of 10, and (ii) comparison of the proposed QgDPSO against the standard version QmDPSO with a dimension size of 100. In both situations, the population comprised 100 members, the total allowed number of generations was 1000, and 51 independent trials were used to extract the performance parameters.
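The five feature parameters can be computed from the final fitness values of the independent trials as in the plain-Python sketch below (function and variable names are illustrative, not from the cited benchmark report):

```python
def extract_features(trials):
    """Best/Worst/Mean/Std Dev/Median summary of the final fitness values
    over independent trials, as suggested for CEC2017-style reporting."""
    n = len(trials)
    s = sorted(trials)
    mean = sum(s) / n
    var = sum((x - mean) ** 2 for x in s) / (n - 1)      # sample variance
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    return {
        "Best": s[0],        # minimization: smallest final value
        "Worst": s[-1],      # largest final value
        "Mean": mean,
        "Std Dev": var ** 0.5,
        "Median": median,
    }
```

The sorted copy makes Best, Worst, and Median one-liners; the Median in particular resists the outlier trials that inflate Mean and Std Dev on functions such as F12.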
Case I: Performance evaluation of proposed QgDPSO against FSA [64].
The obtained performances over all 51 independent trials are shown in Table 10. When the performance outcomes were analyzed to decide the comparative merit of the algorithms, it was observed that the feature values were a mixed combination of better and inferior outcomes, i.e., for most of the functions some feature values were better for QgDPSO, while others were better for FSA. For example, for function F1, all feature values except the 'Best' outcome were better for FSA, while the Best value achieved by QgDPSO was much better. It would therefore be unfair to declare FSA absolutely better than QgDPSO on function F1, and similar situations existed for many other functions. Hence, in this work, a rank weight factor has been assigned to each algorithm depending upon the fraction of features on which the algorithm was better, with a higher rank weight indicating better performance on that particular function. The performances of both algorithms have been analyzed both category-wise and over all functions.
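The rank-weight factor just described can be sketched as follows. This is a minimal sketch assuming all five features are smaller-is-better for a minimization benchmark (ties contribute to neither side), an assumption that reproduces the R.Weight rows of Table 10:

```python
FEATURES = ["Best", "Worst", "Mean", "Std Dev", "Median"]

def rank_weights(feat_a, feat_b):
    """Fraction of the five features on which each algorithm is better
    (smaller is better for every feature in a minimization benchmark).
    Returns (weight_a, weight_b); each win contributes 1/5 = 0.2."""
    wins_a = sum(1 for f in FEATURES if feat_a[f] < feat_b[f])
    wins_b = sum(1 for f in FEATURES if feat_b[f] < feat_a[f])
    return wins_a / len(FEATURES), wins_b / len(FEATURES)
```

Applied to the F1 column of Table 10, FSA wins on Worst, Mean, Std Dev, and Median while QgDPSO wins on Best, giving the tabulated rank weights of 0.8 and 0.2.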
(a) Performances over different categories of functions:
Unimodal shifted & rotated function (F1): The performance of FSA was better overall, but in none of the trials was the global optimum achieved. Even though QgDPSO showed larger values for the other features, there were 8 trials out of 51 where the final convergence was very close to the global optimum value of 100.
Multimodal shifted & rotated functions (F3 to F9): The performances of QgDPSO and FSA were close to each other, but QgDPSO was better overall. For functions F3 and F6, the exact global optima of 300 and 600 were obtained in every trial without fail, while FSA never delivered the exact global optimum. It was also observed that, over such function characteristics, QgDPSO showed better, more reliable, and more consistent performance with a lower standard deviation.
Hybrid functions (F10 to F19): These functions carry different characteristics in their variable subcomponents by combining different functions, making it very difficult to obtain the global optima. The performance of QgDPSO was better overall, and except for function F12, there were a few trials where convergence was very close to the global optimum. For function F12, both algorithms showed poor performance, and convergence was far from the optimal values. For functions F13 and F18, FSA showed better performance than QgDPSO.
Composite functions (F20 to F29): To make the landscape more complex, composites of different functions with different weights are formed, which may also give different properties to different variable subcomponents. For functions F20, F26, and F28, there were a few trials in which QgDPSO achieved the exact global optima of 2000, 2600, and 2800, and with a value of 2219 it came very close to the F22 global optimum of 2200. FSA was never able to achieve a global optimum, but overall, in terms of the feature points, the performance of FSA was a little better.
The scores obtained by both algorithms over the different function categories are shown in Fig. 21: the scores of FSA and QgDPSO were {4, 1}, {7, 28}, {22, 23}, and {27, 33} for the unimodal, multimodal, hybrid, and composite function categories, respectively.
(b) Performances over all functions:
The overall performance was measured as the total number of features, over all considered functions, on which an algorithm was better. From Table 10, it can be observed that for functions F3, F6, F9, F20, and F22 the performance of QgDPSO was absolutely better, with a rank weight equal to 1, while for the other functions it was either fractionally better or inferior. It must be noted that FSA was not the absolute best on any function. Overall, out of a total of 140 score points (each feature corresponding to 1 point), QgDPSO achieved 80 points while FSA achieved 60, as shown in Fig. 22(a). The points for QgDPSO were shared among the different features as 27 (Best), 6 (Worst), 13 (Mean), 19 (Std. Dev), and 15 (Median); this contribution share is shown as a pie chart in Fig. 22(b). The comparative performances show FSA to be a good competitor, but overall QgDPSO performed 14.29% better than FSA.
Table 10
FSA [64] and the proposed QgDPSO performance outcomes and their comparison over the CEC2017 single-objective real-parameter numerical optimization benchmark functions.
F1
F3
F4
F5
 
 
FSA
QgPSO
FSA
QgPSO
FSA
QgPSO
FSA
QgPSO
Best
179.5147
100.48
300.0962
300.00
400.0386
400.00
507.5797
501.9899
Worst
569.1247
6038.0
373.3498
300.00
400.6193
402.86
515.3422
522.8840
Mean
309.1914
1356.4
329.9793
300.00
400.3432
401.61
511.5284
509.48
Std Dev
734.0519
1459.3
148.4929
4.17e-14
0.982868
1.0042
14.6131
4.3567
Median
244.0631
825.96
323.2357
300.00
400.3574
401.94
511.5957
508.9546
R.Weight
0.8
0.2
0.0
1.0
0.8
0.2
0.2
0.8
F6
F7
F8
F9
 
Best
600.0533
600.00
720.4696
712.5875
806.9104
802.9849
900.0738
900.0000
Worst
601.2995
600.00
725.727
732.0730
810.6008
818.9042
903.5584
901.7277
Mean
600.6113
600.00
722.9
718.7363
808.5114
808.3694
901.2592
900.0428
Std Dev
2.459909
0.0000
10.00436
4.0931
6.696291
3.3249
6.412512
0.2489
Median
600.5462
600.00
722.7018
717.6468
808.2672
807.9597
900.7022
900.0000
R.Weight
0.0
1.0
0.2
0.8
0.2
0.8
0.0
1.0
F10
F11
F12
F13
 
Best
1170.33
1006.9
1101.425
1100.9
1829.239
1439.53
1318.328
1308.03
Worst
1602.536
1776.5
1104.585
1113.9
299662.5
42149.26
1554.745
27910.28
Mean
1415.558
1306.2
1103.425
1104.8
76682.26
14695.0
1407.801
10581.62
Std Dev
860.2325
180.1
6.03779
2.9579
609836.2
11396.15
428.4762
7939.82
Median
1444.683
1253.6
1103.844
1105.0
2618.639
9749.08
1379.066
8448.56
R.Weight
0.2
0.8
0.6
0.4
0.2
0.8
0.8
0.2
F14
F15
F16
F17
 
Best
1412.077
1406.46
1553.839
1502.96
1611.051
1600.08
1721.299
1700.17
Worst
1590.047
2045.62
2117.305
2415.15
1624.006
1901.16
1725.226
1763.18
Mean
1529.126
1446.05
1728.252
1583.05
1618.245
1644.92
1723.352
1713.27
Std Dev
333.5794
97.559
1092.453
152.43
22.33554
74.99
7.008148
14.29
Median
1557.19
1426.99
1620.932
1528.96
1618.961
1600.91
1723.442
1705.76
R.Weight
0.2
0.8
0.2
0.8
0.6
0.4
0.4
0.6
| Feature | F18 (FSA) | F18 (QgDPSO) | F19 (FSA) | F19 (QgDPSO) | F20 (FSA) | F20 (QgDPSO) | F21 (FSA) | F21 (QgDPSO) |
|---|---|---|---|---|---|---|---|---|
| Best | 1958.887 | 1809.64 | 1977.426 | 1901.75 | 2021.509 | 2000.00 | 2204.456 | 2200.00 |
| Worst | 3003.429 | 21526.43 | 2466.226 | 4824.20 | 2028.992 | 2016.46 | 2209.219 | 2321.46 |
| Mean | 2304.943 | 7249.98 | 2133.134 | 2179.36 | 2024.601 | 2003.13 | 2206.851 | 2281.01 |
| Std Dev | 1941.205 | 5558.79 | 922.7722 | 591.31 | 13.73834 | 4.3792 | 10.7801 | 49.81 |
| Median | 2128.728 | 5369.88 | 2044.442 | 1950.15 | 2023.952 | 2001.31 | 2206.865 | 2308.81 |
| R.Weight | 0.8 | 0.2 | 0.4 | 0.6 | 0.0 | 1.0 | 0.8 | 0.2 |
| Feature | F22 (FSA) | F22 (QgDPSO) | F23 (FSA) | F23 (QgDPSO) | F24 (FSA) | F24 (QgDPSO) | F25 (FSA) | F25 (QgDPSO) |
|---|---|---|---|---|---|---|---|---|
| Best | 2300.458 | 2219.22 | 2609.409 | 2604.03 | 2518.81 | 2500.00 | 2625.481 | 2897.83 |
| Worst | 2334.683 | 2305.33 | 2613.674 | 2629.57 | 2602.297 | 2774.84 | 2899.904 | 2950.09 |
| Mean | 2314.639 | 2299.03 | 2611.865 | 2613.97 | 2540.611 | 2714.49 | 2830.702 | 2931.11 |
| Std Dev | 68.31489 | 15.07 | 7.390612 | 6.23 | 168.7842 | 79.61 | 561.2735 | 22.15 |
| Median | 2311.708 | 2301.67 | 2612.189 | 2612.90 | 2520.667 | 2741.92 | 2898.712 | 2944.22 |
| R.Weight | 0.0 | 1.0 | 0.6 | 0.4 | 0.6 | 0.4 | 0.8 | 0.2 |
| Feature | F26 (FSA) | F26 (QgDPSO) | F27 (FSA) | F27 (QgDPSO) | F28 (FSA) | F28 (QgDPSO) | F29 (FSA) | F29 (QgDPSO) |
|---|---|---|---|---|---|---|---|---|
| Best | 2808.385 | 2600.00 | 3090.61 | 3088.95 | 3100 | 2800.00 | 3139.804 | 3136.59 |
| Worst | 2907 | 3048.62 | 3093.127 | 3173.32 | 3120.644 | 3446.61 | 3160.407 | 3293.90 |
| Mean | 2857.292 | 2899.71 | 3091.282 | 3099.34 | 3109.67 | 3152.02 | 3150.671 | 3172.27 |
| Std Dev | 216.164 | 49.47 | 5.054349 | 13.29 | 38.2677 | 125.89 | 35.25898 | 32.44 |
| Median | 2856.891 | 2900.00 | 3090.696 | 3096.45 | 3109.018 | 3100.00 | 3151.236 | 3164.49 |
| R.Weight | 0.6 | 0.4 | 0.8 | 0.2 | 0.6 | 0.4 | 0.6 | 0.4 |
Fig. 21
Function categories based on performance features evaluation of FSA and QgDPSO
Fig. 22
(a) Comparison of scores obtained by different algorithms against the available total scores, while (b) percentage share representation of different features in score contribution obtained by QgDPSO.
Case II: Performance evaluation of proposed QgDPSO against QmDPSO.
The performance of QgDPSO was further evaluated against QmDPSO on higher-dimensional versions of the CEC2017 benchmark functions. A dimension of 100 was used for all functions, with a population size of 100 and 51 independent trials, and performance features were extracted as in Case I in terms of the total score points achieved by each algorithm. Along with the score-point comparison, the proposed quality-measuring index QMindex was also evaluated to compare the performances more robustly. Table 11 lists the obtained feature outcomes along with the relative score points for each function. Out of the available total score of 140, QgDPSO scored 76 while QmDPSO scored 64, as shown in Fig. 23(a); the distribution of the points achieved by QgDPSO was Best (= 14), Worst (= 17), Mean (= 16), Std. Dev (= 13), and Median (= 16), shown as a pie chart in Fig. 23(b). It is clear that, overall, performance across the different features was comparatively better with QgDPSO. To compare the convergence behavior graphically, the mean convergence over all 51 trials is shown in Fig. 24, and its dynamic behavior was captured numerically by the QMindex. The convergence characteristics confirm that the multi-level exploration provided by the global best-based characteristic length explored the solution space faster and more effectively, irrespective of the nature of the landscape, and its exploration strength degraded little even at this higher dimension.
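The score points in Table 11 appear to follow a simple per-feature pairwise scheme: for each function, each of the five features (Best, Worst, Mean, Std Dev, Median) awards one point to the algorithm with the better, i.e. lower, value, and the R.Weight row records each algorithm's share of those five points (28 functions × 5 features = 140 total points). A minimal sketch under that assumption — the function and variable names are illustrative, not the paper's code, and a tie-break rule for equal feature values is not recoverable from the tables:

```python
# Hypothetical reconstruction of the R.Weight scoring from the tables,
# assuming each of the five features awards one point to the algorithm
# with the lower (better) value on a minimization benchmark.
FEATURES = ("Best", "Worst", "Mean", "StdDev", "Median")

def relative_weights(stats_a, stats_b):
    """Return each algorithm's share of the five feature points."""
    wins_a = sum(stats_a[f] < stats_b[f] for f in FEATURES)
    return wins_a / len(FEATURES), (len(FEATURES) - wins_a) / len(FEATURES)

# F10 feature values from Table 11 (QmPSO vs QgPSO, 100 dimensions):
# QgPSO is better on four features, QmPSO only on Std Dev.
qm = {"Best": 3.03e4, "Worst": 3.24e4, "Mean": 3.13e4,
      "StdDev": 7.62e2, "Median": 3.13e4}
qg = {"Best": 1.06e4, "Worst": 1.55e4, "Mean": 1.34e4,
      "StdDev": 1.66e3, "Median": 1.37e4}
assert relative_weights(qm, qg) == (0.2, 0.8)  # matches the F10 R.Weight row
```

Summing these shares over all 28 functions reproduces the reported 64 : 76 split of the 140 available points.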
The QMindex was estimated as discussed earlier: the mean values of the normalized accuracy and the normalized area under the convergence curve over all functions were computed and are shown in Table 12. The relative accuracy and convergence area of QgDPSO were very close to the absolute best position, giving a QMindex distance of 0.0141, which is very close to zero, whereas a comparatively large distance of 0.1852 was associated with QmDPSO. Hence, the benefit of the global best-based characteristic length in QgDPSO was proven again, and the quality performance was repeated. The normalized accuracy and normalized area under the curve are plotted in the QM plane in Fig. 25, where it can be observed that QgDPSO lies close to the absolute best position.
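The reported distances are consistent with QMindex being the Euclidean distance from the absolute best position (1, 1) in the (NAcc, NAUC) plane. A brief sketch under that assumption, using the values from Table 12:

```python
from math import hypot

def qm_index(nacc, nauc, ideal=(1.0, 1.0)):
    """Euclidean distance from the absolute best position in the
    (NAcc, NAUC) plane; smaller means closer to the ideal."""
    return hypot(nacc - ideal[0], nauc - ideal[1])

# Mean normalized accuracy and area-under-curve from Table 12
assert round(qm_index(1.0321, 1.1824), 4) == 0.1852  # QmDPSO
assert round(qm_index(1.0078, 1.0118), 4) == 0.0141  # QgDPSO
```

Both reported QMDistance values in Table 12 are recovered exactly by this formula, which supports the distance-from-ideal interpretation.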
Fig. 23
(a) Comparison of scores obtained by the two algorithms against the available total scores, and (b) percentage share of different features in the score contribution obtained by QgDPSO.
Table 11
QmPSO and proposed QgPSO performance outcomes and their comparison over CEC2017 benchmarks
| Feature | F1 (QmPSO) | F1 (QgPSO) | F3 (QmPSO) | F3 (QgPSO) | F4 (QmPSO) | F4 (QgPSO) | F5 (QmPSO) | F5 (QgPSO) |
|---|---|---|---|---|---|---|---|---|
| Best | 6.19e+05 | 5.42e+04 | 2.69e+05 | 2.74e+05 | 6.99e+02 | 7.57e+02 | 8.99e+02 | 8.91e+02 |
| Worst | 5.63e+06 | 2.17e+05 | 3.37e+05 | 4.50e+05 | 8.38e+02 | 8.41e+02 | 1.47e+03 | 1.03e+03 |
| Mean | 2.53e+06 | 1.02e+05 | 3.15e+05 | 3.47e+05 | 7.83e+02 | 7.89e+02 | 1.37e+03 | 9.46e+02 |
| Std Dev | 1.84e+06 | 5.11e+04 | 2.123e+04 | 5.77e+04 | 4.76e+01 | 2.26e+01 | 1.73e+02 | 4.94e+01 |
| Median | 2.20e+06 | 8.73e+04 | 3.24e+05 | 3.45e+05 | 7.99e+02 | 7.86e+02 | 1.43e+03 | 9.22e+02 |
| R.Weight | 0.0 | 1.0 | 1.0 | 0.0 | 0.6 | 0.4 | 0.0 | 1.0 |
| Feature | F6 (QmPSO) | F6 (QgPSO) | F7 (QmPSO) | F7 (QgPSO) | F8 (QmPSO) | F8 (QgPSO) | F9 (QmPSO) | F9 (QgPSO) |
|---|---|---|---|---|---|---|---|---|
| Best | 6.01e+02 | 6.06e+02 | 1.54e+03 | 1.19e+03 | 1.43e+03 | 1.15e+03 | 1.74e+03 | 8.10e+03 |
| Worst | 6.03e+02 | 6.19e+02 | 1.77e+03 | 1.37e+03 | 1.81e+03 | 1.38e+03 | 2.37e+04 | 4.69e+04 |
| Mean | 6.02e+02 | 6.13e+02 | 1.69e+03 | 1.29e+03 | 1.69e+03 | 1.22e+03 | 5.99e+03 | 1.59e+04 |
| Std Dev | 6.49e-01 | 4.36e+00 | 6.63e+01 | 6.25e+01 | 1.03e+02 | 6.96e+01 | 6.44e+03 | 1.17e+04 |
| Median | 6.02e+02 | 6.13e+02 | 1.69e+03 | 1.29e+03 | 1.72e+03 | 1.20e+03 | 3.95e+03 | 1.26e+04 |
| R.Weight | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 1.0 | 1.0 | 0.0 |
| Feature | F10 (QmPSO) | F10 (QgPSO) | F11 (QmPSO) | F11 (QgPSO) | F12 (QmPSO) | F12 (QgPSO) | F13 (QmPSO) | F13 (QgPSO) |
|---|---|---|---|---|---|---|---|---|
| Best | 3.03e+04 | 1.06e+04 | 1.03e+04 | 8.78e+03 | 6.34e+06 | 7.32e+06 | 1.77e+03 | 1.98e+03 |
| Worst | 3.24e+04 | 1.55e+04 | 2.18e+04 | 2.08e+04 | 4.96e+07 | 1.70e+07 | 1.44e+04 | 9.31e+03 |
| Mean | 3.13e+04 | 1.34e+04 | 1.57e+04 | 1.42e+04 | 2.69e+07 | 1.16e+07 | 4.91e+03 | 5.30e+03 |
| Std Dev | 7.62e+02 | 1.66e+03 | 3.58e+03 | 4.29e+03 | 1.52e+07 | 3.22e+06 | 3.81e+03 | 2.52e+03 |
| Median | 3.13e+04 | 1.37e+04 | 1.53e+04 | 1.39e+04 | 2.08e+07 | 1.16e+07 | 3.87e+03 | 5.18e+03 |
| R.Weight | 0.2 | 0.8 | 0.2 | 0.8 | 0.2 | 0.8 | 0.6 | 0.4 |
| Feature | F14 (QmPSO) | F14 (QgPSO) | F15 (QmPSO) | F15 (QgPSO) | F16 (QmPSO) | F16 (QgPSO) | F17 (QmPSO) | F17 (QgPSO) |
|---|---|---|---|---|---|---|---|---|
| Best | 6.29e+05 | 5.77e+05 | 1.77e+03 | 2.08e+03 | 8.96e+03 | 3.48e+03 | 6.28e+03 | 3.39e+03 |
| Worst | 3.15e+06 | 1.45e+06 | 6.84e+03 | 8.48e+03 | 1.07e+04 | 6.51e+03 | 7.16e+03 | 5.45e+03 |
| Mean | 1.44e+06 | 1.04e+06 | 3.75e+03 | 4.26e+03 | 9.89e+03 | 5.25e+03 | 6.73e+03 | 4.53e+03 |
| Std Dev | 8.68e+05 | 3.12e+05 | 1.60e+03 | 2.12e+03 | 4.83e+02 | 8.94e+02 | 2.67e+02 | 5.43e+02 |
| Median | 1.02e+06 | 1.09e+06 | 3.51e+03 | 3.91e+03 | 9.97e+03 | 5.45e+03 | 6.75e+03 | 4.50e+03 |
| R.Weight | 0.2 | 0.8 | 1.0 | 0.0 | 0.2 | 0.8 | 0.2 | 0.8 |
| Feature | F18 (QmPSO) | F18 (QgPSO) | F19 (QmPSO) | F19 (QgPSO) | F20 (QmPSO) | F20 (QgPSO) | F21 (QmPSO) | F21 (QgPSO) |
|---|---|---|---|---|---|---|---|---|
| Best | 1.28e+06 | 2.12e+06 | 2.04e+03 | 2.02e+03 | 6.56e+03 | 3.80e+03 | 3.06e+03 | 2.58e+03 |
| Worst | 7.35e+06 | 4.27e+06 | 5.24e+03 | 7.30e+03 | 7.40e+03 | 4.90e+03 | 3.32e+03 | 2.76e+03 |
| Mean | 3.88e+06 | 2.92e+06 | 3.23e+03 | 3.54e+03 | 6.98e+03 | 4.35e+03 | 3.19e+03 | 2.68e+03 |
| Std Dev | 2.09e+06 | 6.68e+05 | 9.60e+02 | 1.97e+03 | 3.05e+02 | 4.42e+02 | 8.54e+01 | 5.48e+01 |
| Median | 3.40e+06 | 2.86e+06 | 3.16e+03 | 2.68e+03 | 6.98e+03 | 4.30e+03 | 3.20e+03 | 2.69e+03 |
| R.Weight | 0.2 | 0.8 | 0.6 | 0.4 | 0.2 | 0.8 | 0.0 | 1.0 |
| Feature | F22 (QmPSO) | F22 (QgPSO) | F23 (QmPSO) | F23 (QgPSO) | F24 (QmPSO) | F24 (QgPSO) | F25 (QmPSO) | F25 (QgPSO) |
|---|---|---|---|---|---|---|---|---|
| Best | 3.17e+04 | 1.39e+04 | 3.08e+03 | 3.11e+03 | 3.53e+03 | 3.60e+03 | 3.34e+03 | 3.34e+03 |
| Worst | 3.39e+04 | 1.95e+04 | 3.61e+03 | 3.42e+03 | 4.01e+03 | 4.01e+03 | 3.49e+03 | 3.50e+03 |
| Mean | 3.29e+04 | 1.64e+04 | 3.22e+03 | 3.20e+03 | 3.63e+03 | 3.78e+03 | 3.43e+03 | 3.44e+03 |
| Std Dev | 7.45e+02 | 1.55e+03 | 1.77e+02 | 8.98e+01 | 1.77e+02 | 1.23e+02 | 4.32e+01 | 4.98e+01 |
| Median | 3.29e+04 | 1.62e+04 | 3.16e+03 | 3.20e+03 | 3.16e+03 | 3.75e+03 | 3.43e+03 | 3.43e+03 |
| R.Weight | 0.2 | 0.8 | 0.4 | 0.6 | 0.8 | 0.2 | 0.8 | 0.2 |
| Feature | F26 (QmPSO) | F26 (QgPSO) | F27 (QmPSO) | F27 (QgPSO) | F28 (QmPSO) | F28 (QgPSO) | F29 (QmPSO) | F29 (QgPSO) |
|---|---|---|---|---|---|---|---|---|
| Best | 7.70e+03 | 1.21e+04 | 3.51e+03 | 3.54e+03 | 3.51e+03 | 3.51e+03 | 5.34e+03 | 5.71e+03 |
| Worst | 1.30e+04 | 1.70e+04 | 3.61e+03 | 3.81e+03 | 3.61e+03 | 3.67e+03 | 9.56e+03 | 7.65e+03 |
| Mean | 9.17e+03 | 1.42e+04 | 3.56e+03 | 3.65e+03 | 3.57e+03 | 3.60e+03 | 7.52e+03 | 6.29e+03 |
| Std Dev | 1.5228e+03 | 1.62e+03 | 3.2160e+01 | 8.37e+01 | 3.2770e+01 | 4.49e+01 | 1.72e+03 | 6.56e+02 |
| Median | 8.96e+03 | 1.34e+04 | 3.56e+03 | 3.64e+03 | 3.57e+03 | 3.61e+03 | 7.69e+03 | 6.06e+03 |
| R.Weight | 1.0 | 0.0 | 1.0 | 0.0 | 1.0 | 0.0 | 0.2 | 0.8 |
Fig. 24
Convergence characteristics of QmDPSO and QgDPSO for the 28 benchmark functions (F1 to F29, excluding F2) of 100 dimensions in CEC2017 (color encoding has the same meaning in all figures)
Table 12
QMIndex of QmDPSO and QgDPSO over CEC2017 functions
| Features | QmDPSO | QgDPSO |
|---|---|---|
| NAcc | 1.0321 | 1.0078 |
| NAUC | 1.1824 | 1.0118 |
| QMDistance | 0.1852 | 0.0141 |
| Rank | 2 | 1 |
Fig. 25
Relative area and accuracy presentation for QmDPSO and QgDPSO over CEC2017 functions in the QM plane.
9.2.1. Observed Limitation with QPSO
The importance of optimization in the present era of technological development requires algorithms to deliver highly reliable outcomes so that the end quality meets expectations. The fundamental principle of PSO is based on an inspirational approach and inherits the tendency to lose diversity early. This makes its performance highly sensitive to becoming trapped in local optima; when numerous local optima exist, the outcomes of repeated trials may vary widely because different trials become trapped in different local optima, compromising reliability. Significant improvement in maintaining diversity has been observed with the inclusion of quantum-based guidance, but reliability remains an issue, because the local attractors are still fundamentally described on an inspirational basis. In this work, when the CEC2017 functions were optimized over different numbers of trials, a very significant standard deviation was observed in all cases. To illustrate this, the standard deviations obtained for the functions of 10 dimensions and of 100 dimensions are plotted in increasing order in Fig. 26(a) and Fig. 26(b). From Fig. 26(a), even though QgDPSO shows lower standard deviations than FSA, the values are still high and vary over a wide range, suggesting the need for some means of increasing reliability. The global best-based CL approach achieved better reliability than the characteristic length of QPSO, as can be observed in Fig. 26(b), and any further reduction of the standard deviation would make the algorithm more reliable and useful.
Fig. 26
Variations in standard deviation for different functions: (a) over dimension 10, (b) over dimension 100
10. QPSO Evolution through Multiple Potential Wells
Previous analyses of individual potential wells aimed to enhance exploration around local attractors. Section 6 details the characteristics and effects of each potential well on solution evolution. This raises the question of whether utilizing a set of potential wells, rather than a single type, throughout the evolutionary process affects performance. To investigate this, two arrangement formats were considered. In the first format, each population member evolves using a different potential well: during each iteration, a member selects a potential well from a predefined set through uniform random selection, and all dimensions of that member evolve with the same well, so the population as a whole evolves under multiple potential wells. In the second format, each dimension of a population member evolves under a distinct potential well selected randomly from the set, aiming to increase diversity in the exploration process. The considered set of potential wells comprises the Delta, Square Well, Lorentz Potential Field, Rosen-Morse Potential Field, and Coulomb-like Square Root Field.
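The two assignment formats can be sketched as follows; the function names and string labels are illustrative only, not the paper's implementation:

```python
import random

# Illustrative labels for the considered set of potential wells
WELLS = ("delta", "square_well", "lorentz", "rosen_morse", "coulomb_sqrt")

def member_wise_assignment(n_members, n_dims, rng):
    # Format 1: one uniformly random well per member per iteration;
    # every dimension of that member evolves under the same well.
    return [[rng.choice(WELLS)] * n_dims for _ in range(n_members)]

def dimension_wise_assignment(n_members, n_dims, rng):
    # Format 2: an independent uniformly random well for every
    # dimension of every member, maximizing exploration diversity.
    return [[rng.choice(WELLS) for _ in range(n_dims)]
            for _ in range(n_members)]

rng = random.Random(7)
mw = member_wise_assignment(5, 10, rng)
dw = dimension_wise_assignment(5, 10, rng)
assert all(len(set(member)) == 1 for member in mw)  # uniform within a member
assert all(w in WELLS for member in dw for w in member)
```

In either format, the selected well would determine which position-update rule is applied to that dimension in the current iteration.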
The performance impact of incorporating multiple potential wells in QPSO was evaluated using 28 functions of dimension 10 (F1 to F29, excluding F2) from the CEC2017 benchmark, with the global best strategy for the characteristic length. Fifty-one independent trials were conducted. Performance was compared to the single potential well strategy implemented by QgDPSO, with all other experimental parameters consistent with those in Section 9.2. The results showed close competition among all strategies, as can be observed in Fig. 27. To assess both convergence behavior and accuracy, the QMindex was calculated; the mean values of the relative area under the convergence curve and the relative accuracy across all functions are presented in Table 13. Accuracy values were nearly identical and close to 1 for all strategies, the convergence characteristic area was also close to 1 in all cases, and the QMindex values of all strategies were close to 0. These results suggest that the inclusion of multiple potential wells does not confer a significant advantage in the solution exploration process of QPSO compared to a single potential well.
Fig. 27
Mean accuracies obtained by the single and multiple potential-well strategies on different functions (QgPSOmpd and QgPSOmpm denote the dimension-wise and member-wise multi-potential-well strategies in QPSO, respectively).
Table 13
Comparative QMindex performances of the multi-potential-well strategies against the single-potential-well strategy in QPSO over CEC2017 functions
| Features | QgDPSO | QgPSOmpd | QgPSOmpm |
|---|---|---|---|
| NAcc | 1.0018 | 1.0016 | 1.0031 |
| NAUC | 0.9991 | 0.9907 | 1.0065 |
| QMindex | 0.0019 | 0.0093 | 0.0072 |
Conclusion & Future Work
This work thoroughly discusses and analyzes the purpose and methodology of integrating quantum principles with standard Particle Swarm Optimization (PSO). It addresses gaps in prior research to enhance exploration, accelerate convergence, and provide a deeper understanding of how Quantum PSO (QPSO) can improve exploration outcomes. The integration of quantum principles in the search process introduces a characteristic length and a random variable function, which generate new solutions from local attractors by applying varying degrees of change in each dimension. These changes are implemented through probabilistic selection, allowing both additive and subtractive modifications, so the new solutions can reach distant regions while maintaining directional variation around the local attractors. It was observed that the characteristic length plays a more critical role in generating new solutions than the type of potential well used. The characteristic length based on the global best particle supports parallel global and local search, resulting in significantly better exploration and convergence characteristics than the best mean or weighted best mean approaches. Particles that are farther from the global best position engage in global exploration, while those closer to it focus on refining their immediate surroundings. This dual strategy enables particles that previously lagged under the constraints of the best mean characteristic length to overcome those limitations, since with the global best characteristic length they can explore more distant regions in search of new solutions. It also improves the handling of complex dimensions through the multilevel and multistage exploration inherent in the global best-based characteristic length.
Furthermore, the performance of the global best characteristic length consistently surpasses that of the best mean characteristic length across various types of potential wells, with the best results achieved using the Delta potential well. Additional accuracy improvements were realized by hybridizing different types of characteristic lengths in a probabilistic environment, allowing population members to be randomly assigned either the global best or the best mean approach. A new metric for estimating algorithm quality was proposed to ensure that accuracy and convergence dynamics are assessed together; this index is particularly useful for comparing algorithms when accuracies are closely aligned and convergence characteristics intersect. Several numeric benchmarks were tested with various types of characteristic lengths and potential wells. The results indicated that the characteristic length based on the global best, especially when associated with the Delta potential well, yielded the best performance. Specifically, over 19 functions of the CEC2005 benchmark with the Delta potential well, a 6.8% improvement in normalized accuracy and a 66.62% improvement in normalized convergence characteristics were observed compared to the best mean-based approach. For the Soliton and Harmonic Oscillator potential wells, the improvements in normalized accuracy were 31.09% and 6.52%, respectively, while the normalized convergence characteristics improved by 66.54% and 42.19%. In the experiments over 28 functions from the CEC2017 benchmark, which carry hybrid and composite functions, the proposed algorithm structure with the Delta potential well was compared to the newly introduced metaheuristic Farmer and Seasons Algorithm, and an improvement of 14.28% was observed in the performance features.
There was also an improvement of 2.41% in normalized accuracy and 16.86% in normalized convergence area compared to the mean self-best-based characteristic length version of QPSO. The available existing potential wells were used to check the benefits of employing multiple potential wells at different levels. Considering multiple potential wells in the solution update, whether dimension-wise or member-wise, did not generate any useful extra benefit over a single potential well. It was observed that, even in a very diverse landscape environment, the Delta potential well was capable of exploring the solution space efficiently.
When considering further research possibilities in this area, several opportunities and challenges emerge simultaneously. The following are key areas where further research can improve the algorithm's performance:
(i) Finding the Optimal Position of the Local Attractor: This can significantly enhance accuracy and convergence performance. Although some research has utilized various distribution functions, each method has its advantages and limitations.
(ii) Developing More Suitable Forms of Potential Wells: These potential wells should possess adaptive characteristics that align with population diversity or fitness, allowing for more effective scaling in the search for new solutions.
(iii) Self-Adaptive Form of Contraction Coefficient: Rather than relying on a linearly varying contraction coefficient based solely on iterations, an adaptive approach that considers individual fitness could enhance the exploration of the solution space. This factor significantly influences how potential wells contribute to finding new positions from local attractors.
(iv) Addressing High-Dimensional Challenges: The use of cooperative coevolution approaches, previously applied in evolutionary algorithms, could effectively manage high-dimensional issues.
(v) Hybridization with Other Meta-Heuristics: Currently, the trend in quantum meta-heuristics involves hybridizing with other algorithms. However, it is essential to thoroughly explore the core algorithm before engaging in hybridization; otherwise, it may merely increase computational costs and hinder research progress.
(vi) Optimizing the Exploitation of Quantum Principles: It is crucial to integrate quantum principles into algorithms in a way that enables effective utilization of quantum hardware. The potential for computation presented by quantum principles is vast, and now is the time to optimally and innovatively exploit these opportunities to transform the field of optimization and its applications.
(vii) Increasing Reliability and Reducing the Computational Burden: Even though very satisfactory performances were achieved with QgDPSO, high standard deviations were observed for some functions; hence, some mechanism is needed to improve reliability. The possibility of using a hierarchical structure in the algorithm needs to be explored, where the first stage involves multiple populations evolving under a heterogeneous environment, and in the second stage the better-evolved members from the first stage form a population that evolves further; a similar approach can be observed in present human society. Many practical problems involve a complex objective function and demand a high computational cost, and integrating a surrogate model can reduce the computational demand significantly.
Declarations
Funding:
This research was self-funded. No grants were received from any funding agencies.
Author Contribution
Siyasha Singh: Conceptualization, Methodology, Analysis, Investigation, Resources, Validation, Writing—original draft.
Data Availability
The single-objective benchmark functions available in CEC 2005 and CEC 2017 were considered for analysis and comparison purposes.
Ethics approval and consent to participate:
Not applicable.
Consent for publication:
Not applicable.
Competing interests:
The author declares no competing interests.
Clinical trial number
Not applicable.
References
1.
Goldberg D. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley; 1989.
2.
Storn R, Price K. Differential Evolution – A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J Global Optim. 1997;11:341–59. https://doi.org/10.1023/A:1008202821328.
3.
Das S, Mullick SS, Suganthan PN. Recent advances in differential evolution—An updated survey. Swarm Evol Comput. 2016;27:1–30. https://doi.org/10.1016/j.swevo.2016.01.004.
4.
Koza JR. (1992). Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press. ISBN 978-0-262-11170-6.
5.
Clerc M, Kennedy J. The particle swarm - explosion, stability, and convergence in a multidimensional complex space, in IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, Feb. 2002, 10.1109/4235.985692
6.
Shami TM, El-Saleh AA, Alswaitti M, Al-Tashi Q, Summakieh MA, Mirjalili S. Particle Swarm Optimization: A Comprehensive Survey, in IEEE Access, vol. 10, pp. 10031–10061, 2022, 10.1109/ACCESS.2022.3142859
7.
Dorigo M, Blum C. Ant colony optimization theory: A survey. Theoretical Computer Science. 2005;344(2–3):243–278. https://doi.org/10.1016/j.tcs.2005.05.020
8.
Singh MK. (2013). A New Optimization Method Based on Adaptive Social Behavior: ASBO. In: Kumar M., A., R., S., Kumar, T, editors Proceedings of International Conference on Advances in Computing. Advances in Intelligent Systems and Computing, vol 174. Springer, New Delhi. https://doi.org/10.1007/978-81-322-0740-5_98
9.
Bandyopadhyay S, Saha S, Maulik U, Deb K. A Simulated Annealing-Based Multiobjective Optimization Algorithm: AMOSA, in IEEE Transactions on Evolutionary Computation, vol. 12, no. 3, pp. 269–283, June 2008, 10.1109/TEVC.2007.900837
10.
Lin KN, Volkel K, Cao C, et al. A primordial DNA store and compute engine. Nat Nanotechnol. 2024;19:1654–64. https://doi.org/10.1038/s41565-024-01771-6.
11.
Pham Vu Hong Son, Nguyen Thi Nha Trang. Development of a Novel Artificial Intelligence Model for Better Balancing Exploration and Exploitation. Int J Comput Intell Applications. 2023;22:2350001. https://doi.org/10.1142/S1469026823500013
12.
Morales-Castañeda B, Zaldívar D, Cuevas E, Fausto F, Rodríguez A. A better balance in metaheuristic algorithms: Does it exist? Swarm and Evolutionary Computation. 2020;54:100671. https://doi.org/10.1016/j.swevo.2020.100671
13.
Ji J-Y, Tan Z, Zeng S, See-To EWK, Wong M-L. A Surrogate-Assisted Evolutionary Algorithm for Seeking Multiple Solutions of Expensive Multimodal Optimization Problems, in IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 8, no. 1, pp. 377–388, Feb. 2024, 10.1109/TETCI.2023.3301794
14.
Gharehchopogh FS. Quantum-inspired metaheuristic algorithms: comprehensive survey and classification. Artif Intell Rev. 2023;56:5479–543. https://doi.org/10.1007/s10462-022-10280-8.
15.
Hakemi S, Houshmand M, KheirKhah E, et al. A review of recent advances in quantum-inspired metaheuristics. Evol Intel. 2024;17:627–42. https://doi.org/10.1007/s12065-022-00783-2.
16.
Sun J, Xu W, Feng B. A global search strategy of quantum-behaved particle swarm optimization, IEEE Conference on Cybernetics and Intelligent Systems, 2004., Singapore, 2004, pp. 111–116 vol.1. 10.1109/ICCIS.2004.1460396
17.
Narayanan A, Moore M. Quantum-inspired genetic algorithms, Proceedings of IEEE International Conference on Evolutionary Computation, Nagoya, Japan, 1996, pp. 61–66. 10.1109/ICEC.1996.542334
18.
Ibarrondo R, Gatti G, Sanz M. Quantum Genetic Algorithm With Individuals in Multiple Registers, in IEEE Transactions on Evolutionary Computation, vol. 28, no. 3, pp. 788–797, June 2024, 10.1109/TEVC.2023.3296780
19.
Deng W, Wang J, Guo A, Zhao H. Quantum differential evolutionary algorithm with quantum-adaptive mutation strategy and population state evaluation framework for high-dimensional problems. Information Sciences. 2024;676:120787. https://doi.org/10.1016/j.ins.2024.120787
20.
Das M, Roy A, Maity S, Kar S. A Quantum-inspired Ant Colony Optimization for solving a sustainable four-dimensional traveling salesman problem under type-2 fuzzy variable. Advanced Engineering Informatics. 2023;55:101816. https://doi.org/10.1016/j.aei.2022.101816
21.
Ma R, Gui J, Wen J, et al. Chaos quantum bee colony algorithm for constrained complicate optimization problems and application of robot gripper. Soft Comput. 2024;28:11163–206. https://doi.org/10.1007/s00500-024-09877-8.
22.
Gad AG. Particle Swarm Optimization Algorithm and Its Applications: A Systematic Review. Arch Computat Methods Eng. 2022;29:2531–61. https://doi.org/10.1007/s11831-021-09694-4.
23.
Ghasemi M, Akbari E, Rahimnejad A, et al. Phasor particle swarm optimization: a simple and efficient variant of PSO. Soft Comput. 2019;23:9701–18. https://doi.org/10.1007/s00500-018-3536-8.
24.
Das S, Abraham A, Konar A. Particle Swarm Optimization and Differential Evolution Algorithms: Technical Analysis, Applications and Hybridization Perspectives. In: Liu Y, Sun A, Loh HT, Lu WF, Lim EP, editors. Advances of Computational Intelligence in Industrial Systems. Studies in Computational Intelligence. Volume 116. Berlin, Heidelberg: Springer; 2008. https://doi.org/10.1007/978-3-540-78297-1_1.
25.
Sun J, Xu W, Feng B. A global search strategy of quantum-behaved particle swarm optimization, IEEE Conference on Cybernetics and Intelligent Systems, 2004., Singapore, 2004, pp. 111–116 vol.1. 10.1109/ICCIS.2004.1460396
26.
Kennedy J, Eberhart R. (1995). Particle Swarm Optimization. Proceedings of IEEE International Conference on Neural Networks. Vol. IV. pp. 1942–1948. 10.1109/ICNN.1995.488968
27.
Sun J, Xu W, Liu J. Parameter Selection of Quantum-Behaved Particle Swarm Optimization. In: Wang L, Chen K, Ong YS, editors. Advances in Natural Computation. ICNC 2005. Lecture Notes in Computer Science. Volume 3612. Berlin, Heidelberg: Springer; 2005. https://doi.org/10.1007/11539902_66.
28.
Sun J, Fang W, Palade V, Wu X, Xu W. Quantum-behaved particle swarm optimization with Gaussian distributed local attractor point. Applied Mathematics and Computation. 2011;218(7):3763–3775.
29.
Sun J, Fang W, Wu X, Palade V, Xu W. Quantum-Behaved Particle Swarm Optimization: Analysis of Individual Particle Behavior and Parameter Selection. Evol Comput. 2012;20(3):349–93. https://doi.org/10.1162/EVCO_a_00049
30.
Li L-W, Sun J, Li C, Fang W, Palade V, Wu X-J. Analyzing and controlling diversity in quantum-behaved particle swarm optimization. https://doi.org/10.48550/arXiv.2308.04840
31.
Dahi ZA, Alba E. Metaheuristics on quantum computers: Inspiration, simulation and real execution. Future Generation Computer Systems. 2022;130:164–180.
32.
Pellini R, Ferrari Dacrema M. Analyzing the effectiveness of quantum annealing with meta-learning. Quantum Mach Intell. 2024;6:48. https://doi.org/10.1007/s42484-024-00179-8.
33.
Clerc M. What could a Quantum PSO be? 2024. hal-04472507.
34.
Liu T, Jiao L, Ma W, Ma J, Shang R. Quantum-behaved particle swarm optimization with collaborative attractors for nonlinear numerical problems. Communications in Nonlinear Science and Numerical Simulation. 2017;44:167–183. https://doi.org/10.1016/j.cnsns.2016.08.001
35.
Xi M, Sun J, Xu W. An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position. Applied Mathematics and Computation. 2008;205(2):751–759. https://doi.org/10.1016/j.amc.2008.05.135
36.
Singh S. Adaptive Filter based System Identification using Quantum PSO with optimum Characteristics Length, 2024 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, 2024, pp. 1–6. 10.1109/CONECCT62155.2024.10677160
37.
Mikki SM, Kishk AA. Quantum Particle Swarm Optimization for Electromagnetics. https://arxiv.org/pdf/physics/0702214
38.
Alvarez-Alvarado MS, Alban-Chacón FE, Lamilla-Rubio EA, Rodríguez-Gallegos CD, Velásquez W. Three novel quantum-inspired swarm optimization algorithms using different bounded potential fields. Sci Rep. 2021;11(1):11655. 10.1038/s41598-021-90847-7. PMID: 34078967; PMCID: PMC8172946.
39.
Fallahi S, Taghadosi M. Quantum-behaved particle swarm optimization based on solitons. Sci Rep. 2022;12:13977. https://doi.org/10.1038/s41598-022-18351-0.
40.
Yang S, Wang M, Jiao L. A quantum particle swarm optimization. In: Congress on Evolutionary Computation, 2004 (CEC2004), Vol. 1, pp. 320–324.
41.
Agrawal RK, Kaur B, Agarwal P. Quantum inspired Particle Swarm Optimization with guided exploration for function optimization. Appl Soft Comput. 2021;102:107122. https://doi.org/10.1016/j.asoc.2021.107122
42.
Liu X, Wang GG, Wang L. LSFQPSO: quantum particle swarm optimization with optimal guided Lévy flight and straight flight for solving optimization problems. Engineering with Computers. 2022;38(5):4651–82. https://doi.org/10.1007/s00366-021-01497-2.
43.
Jin Y, Zhang H. An improved quantum particle swarm optimization algorithm. The 2nd International Conference on Information Science and Engineering, Hangzhou, China, 2010, pp. 985–988. 10.1109/ICISE.2010.5690254
44.
Tang D, Cai Y, Zhao J, Xue Y. A quantum-behaved particle swarm optimization with memetic algorithm and memory for continuous non-linear large scale problems. Information Sciences. 2014;289:162–189. https://doi.org/10.1016/j.ins.2014.08.030
45.
Chen Q, Sun J, Palade V, Wu X, Shi X. An improved Gaussian distribution based quantum-behaved particle swarm optimization algorithm for engineering shape design problems. Eng Optim. 2021;54(5):743–69. https://doi.org/10.1080/0305215X.2021.1900154.
46.
Muraleedharan S, Babu CA, Kumar Sasidharanpillai A. Chi-square mutated quantum-behaved PSO algorithm for combined economic and emission dispatch. Evol Intel. 2024;17:3961–84. https://doi.org/10.1007/s12065-024-00966-z.
47.
Xi M, Wu X, Sun J, Xu W. Improved quantum-behaved particle swarm optimization with local search strategy. J Algorithms Comput Technol. 10.1177/1748301816654020
48.
Chen Q, Sun J, Palade V. A Hybrid Quantum-behaved Particle Swarm Optimization Solution to Non-convex Economic Load Dispatch with Multiple Fuel Types and Valve-point Effects. Intelligent Data Analysis. 2023;27(5):1503–1522.
49.
Zhao S, Ma S, Gao L, Yu D. A Novel Quantum Entanglement-Inspired Meta-heuristic Framework for Solving Multimodal Optimization Problems. Chin J Electron. 2021;30(1). https://doi.org/10.1049/cje.2020.11.012
50.
Gong C, Zhou N, Xia S, Huang S. Quantum particle swarm optimization algorithm based on diversity migration strategy. Future Generation Computer Systems. 2024;157:445–458.
51.
Flori A, Oulhadj H, Siarry P. QUAntum Particle Swarm Optimization: an auto-adaptive PSO for local and global optimization. Comput Optim Appl. 2022;82:525–59. https://doi.org/10.1007/s10589-022-00362-2.
52.
He G, Lu X. Quasi opposite-based learning and double evolutionary QPSO with its application in optimization problems. Engineering Applications of Artificial Intelligence. 2023;126(Part A):106861. https://doi.org/10.1016/j.engappai.2023.106861.
53.
Kumar N, Shaikh AA, Mahato SK, Bhunia AK. Applications of new hybrid algorithm based on advanced cuckoo search and adaptive Gaussian quantum behaved particle swarm optimization in solving ordinary differential equations. Expert Systems with Applications. 2021;172:114646. https://doi.org/10.1016/j.eswa.2021.114646.
54.
Balicki J. Many-objective quantum-inspired particle swarm optimization algorithm for placement of virtual machines in smart computing cloud. Entropy. 2022;24(1):58. https://doi.org/10.3390/e24010058.
55.
Xu Z, Cui Y, Li B. A quantum-inspired particle swarm optimization for sizing optimization of truss structures. Journal of Physics: Conference Series. 2021;1865 (2021 International Conference on Advances in Optics and Computational Sciences (ICAOCS), 21–23 January 2021, Ottawa, Canada).
56.
Ahourag A, Bouhanch Z, El Moutaouakil K, Touhafi A. Improved quantum particle swarm optimization of optimal diet for diabetic patients.
57.
Zhang H, Shi X. An improved quantum-behaved particle swarm optimization algorithm combined with reinforcement learning for AUV path planning. Journal of Robotics. 2023;2023:8821906. https://doi.org/10.1155/2023/8821906.
58.
Kanchan P, Pushparaj SD. A quantum-inspired PSO algorithm for energy efficient clustering in wireless sensor networks. Cogent Engineering. 2018;5(1). https://doi.org/10.1080/23311916.2018.1522086.
59.
de Oliveira LD, et al. Particle swarm and quantum particle swarm optimization applied to DS/CDMA multiuser detection in flat Rayleigh channels. 2006.
60.
Bey M, Kuila P, Naik BB, Ghosh S. Quantum-inspired particle swarm optimization for efficient IoT service placement in edge computing systems. Expert Systems with Applications. 2024;236:121270. https://doi.org/10.1016/j.eswa.2023.121270.
61.
Li M, Liu C, Li K, Liao X, Li K. Multi-task allocation with an optimized quantum particle swarm method. Applied Soft Computing. 2020;96:106603. https://doi.org/10.1016/j.asoc.2020.106603.
62.
Shi Y, Eberhart RC. A modified particle swarm optimizer. In: Proceedings of the IEEE International Conference on Evolutionary Computation; 1998. pp. 69–73. https://doi.org/10.1109/ICEC.1998.699146.
63.
Awad NH, Ali MZ, Suganthan PN, Liang JJ, Qu BY. Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective real-parameter numerical optimization. Singapore: Nanyang Technological University; Tech. Rep., November 2016. [Online]. Available: http://www.ntu.edu.sg/home/EPNSugan/.
64.
Ibraheem TB, Dehghani M, et al. Farmer and Seasons Algorithm (FSA): a parameter-free seasonal metaheuristic for global optimization. International Journal of Intelligent Engineering and Systems. 2025;18(6). https://doi.org/10.22266/ijies2025.0731.59.
Appendix
A.
CEC2005 Benchmark functions

Table I. Low-dimensional unimodal functions (columns: Function, N, Range, Iterations). [Function definitions not recoverable from extraction.]

Table II. High-dimensional multimodal functions. [Function definitions not recoverable from extraction.]

Table III. Low-dimensional multimodal functions. [Largely lost in extraction; the surviving row lists a 2-D function with range 100 and minimum -0.397887 (apparently the Branin function), together with fragments of the second factor of the Goldstein-Price function, 30 + (2x1 - 3x2)^2 (18 - 32x1 + 12x1^2 + 48x2 - 36x1x2 + 27x2^2).]
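The surviving minimum in Table III matches the Branin function, a standard low-dimensional multimodal benchmark. As an illustrative sketch (using the textbook definition of Branin with its standard parameters, not recovered from the garbled table), its evaluation at one of its three global minimizers recovers the quoted optimum:

```python
import math

def branin(x1, x2):
    """Branin function with standard parameters; three global minima,
    f* ~= 0.397887, e.g. at (pi, 2.275)."""
    a = 1.0
    b = 5.1 / (4.0 * math.pi ** 2)
    c = 5.0 / math.pi
    r, s, t = 6.0, 10.0, 1.0 / (8.0 * math.pi)
    return (a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2
            + s * (1.0 - t) * math.cos(x1) + s)

print(round(branin(math.pi, 2.275), 6))  # 0.397887
```

At (pi, 2.275) the quadratic term vanishes exactly, so the value reduces to s * t = 10 / (8 * pi), i.e. the tabulated optimum.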
B.
CEC2017 Benchmark functions: Table I. Summary of the CEC'17 Test Functions [64]

No.  Function                                                  Fi* = Fi(x*)

Unimodal Functions
1    Shifted and Rotated Bent Cigar Function                   100
2    Shifted and Rotated Zakharov Function                     200

Multimodal Functions
3    Shifted and Rotated Rosenbrock's Function                 300
4    Shifted and Rotated Rastrigin's Function                  400
5    Shifted and Rotated Expanded Scaffer's F6 Function        500
6    Shifted and Rotated Lunacek Bi-Rastrigin Function         600
7    Shifted and Rotated Non-Continuous Rastrigin's Function   700
8    Shifted and Rotated Levy Function                         800
9    Shifted and Rotated Schwefel's Function                   900

Hybrid Functions
10   Hybrid Function 1 (N = 3)                                 1000
11   Hybrid Function 2 (N = 3)                                 1100
12   Hybrid Function 3 (N = 3)                                 1200
13   Hybrid Function 4 (N = 4)                                 1300
14   Hybrid Function 5 (N = 4)                                 1400
15   Hybrid Function 6 (N = 4)                                 1500
16   Hybrid Function 7 (N = 5)                                 1600
17   Hybrid Function 8 (N = 5)                                 1700
18   Hybrid Function 9 (N = 5)                                 1800
19   Hybrid Function 10 (N = 6)                                1900

Composition Functions
20   Composition Function 1 (N = 3)                            2000
21   Composition Function 2 (N = 3)                            2100
22   Composition Function 3 (N = 4)                            2200
23   Composition Function 4 (N = 4)                            2300
24   Composition Function 5 (N = 5)                            2400
25   Composition Function 6 (N = 5)                            2500
26   Composition Function 7 (N = 6)                            2600
27   Composition Function 8 (N = 6)                            2700
28   Composition Function 9 (N = 3)                            2800
29   Composition Function 10 (N = 3)                           2900

Search range: [-100, 100]^D
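Each CEC'17 test function is derived from a base function through a shift vector o, a rotation matrix M, and a bias Fi*, i.e. F(x) = f_base(M(x - o)) + Fi* [64]. A minimal Python sketch of this construction, using Rastrigin as the base and a hypothetical 2-D rotation (the official suite ships fixed per-function shift vectors and rotation matrices, which are not reproduced here):

```python
import math

def rastrigin(z):
    # Base Rastrigin function: f(0) = 0 at the unshifted optimum.
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in z)

def make_shifted_rotated(base, M, o, f_star):
    """Build F(x) = base(M (x - o)) + f_star."""
    def F(x):
        y = [xi - oi for xi, oi in zip(x, o)]              # shift
        z = [sum(M[i][j] * y[j] for j in range(len(y)))    # rotate
             for i in range(len(M))]
        return base(z) + f_star
    return F

# 2-D illustration: 45-degree rotation, arbitrary shift, bias 400 (F4-style).
theta = math.pi / 4
M = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
o = [1.5, -2.0]
F = make_shifted_rotated(rastrigin, M, o, 400.0)
print(F(o))  # at the shifted optimum the value equals the bias: 400.0
```

The shift relocates the optimum away from the origin and the rotation removes separability, which is why Fi* = Fi(x*) in the table equals the bias of each function.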
 