1 Introduction
Inspired by the foundational work of Wolpert and Macready [1], practitioners have long sought to better understand the relationship between problems and solution methods (i.e., algorithms). Here, we are particularly interested in the question of which algorithm is best suited to a particular problem, and the process of addressing this has been described by some as a “black art” [2].
Although theoretical studies in this area have yielded useful results, the experimental analysis of algorithms is receiving increasing attention. As Morgan and Gallagher point out [3], this approach is scalable in that it readily admits newly-described algorithms, and it is now an area of research that is supported by a number of high-profile competitions and libraries of benchmark test problems.
The fundamental properties of a problem’s search landscape underpin much work in experimental analysis, and the use of landscape/test case generators [4, 5, 3, 6, 7] has been proposed as one way in which we might effectively assess algorithms against problem instances.
In this paper we examine six different nature-inspired algorithms by testing them against a number of different randomized landscapes with several different properties (e.g., ruggedness). This gives a much richer picture of their relative strengths and weaknesses, compared to simply using the “difficulty” of a landscape [8].
2 Previous work
The use of algorithms inspired by physical or natural processes is now well-established in the field of optimisation [9]. As the number of such algorithms grows year on year, there is a pressing need to better understand their properties, in order that practitioners may make informed decisions about which method is best suited to a particular problem, under certain conditions. Although analytical methods have been successfully applied to nature-inspired methods [10, 11], their “real world” applicability is not clear, as they often rely on significant assumptions and/or simplifications.
In what follows, we take an experimental approach [12] to studying the selected algorithms, using an established landscape generation technique [4]. As Morgan and Gallagher observe, “In a general sense, an algorithm can be expected to perform well if the assumptions that it makes, either explicit or implicit, are well-matched to the properties of the search landscape or solution space of a given problem or set of problems” [3]. We therefore seek to investigate the performance of several algorithms on a number of types of fitness landscape with specific properties or characteristics. This approach is preferred by Hooker to the use of benchmark problems, because the latter “differ in so many respects that it is rarely evident why some are harder than others, and they may yet fail to vary over parameters that are key determinants of performance. It is better to generate problems in a controlled fashion… The goal is not to generate realistic problems, which random generation cannot do, but to generate several problem sets, each of which is homogeneous with respect to characteristics that are likely to affect performance” [13].
The fitness landscape approach has been successfully applied to the study of various nature-inspired algorithms [14, 15, 16]. However, to our knowledge, landscape analysis of nature-inspired algorithms has been largely restricted to evolutionary methods. In this paper we broaden this work considerably, by considering several classes of natural algorithms (social, evolutionary and physical). Overall, we study six different nature-inspired methods, as well as stochastic hill climbing as a baseline algorithm. Our empirical approach is informed by previous work [17, 18], which emphasises the need to establish a rigorous framework for experimental algorithmics. In the next section, we describe our methodology in detail.
3 Methodology
3.1 Algorithm selection
We select, for comparison, a number of nature-inspired algorithms that are commonly applied to continuous function optimisation. These may be classified [19] as either social, evolutionary or physical. The social algorithms we select are the Bacterial Foraging Optimisation Algorithm (BFOA) [20], the Bees Algorithm (BA) [21], and Particle Swarm Optimisation (PSO) [22]. The evolutionary algorithms selected are Genetic Algorithms (GA) [23] and Evolution Strategies (ES) [24], and physical algorithms are represented by Harmony Search (HS) [25]. We also include random search (RS) and stochastic hill climbing (SHC) as “baseline” algorithms. We note that the references supplied above for each algorithm may serve simply as an example of their application, rather than their precise implementation. In terms of implementation, we heed the observation that “Ideally, competing algorithms would be coded by the same expert programmer and run on the same test problems on the same computer configuration” [12]. With that in mind, we use only the implementations provided by Brownlee to accompany [26]. The limited space available prevents a complete description of each algorithm, but full implementation details are in [26], which is freely available and contains the source code used here.
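As an indication of how simple the SHC baseline is, stochastic hill climbing for continuous minimisation can be sketched in a few lines. This is an illustrative Python sketch with parameter names of our own choosing, not the Ruby implementation from [26] used in our experiments.

```python
import random

def stochastic_hill_climb(objective, bounds, step=0.1, max_evals=1000, rng=None):
    """Minimise `objective` over a box-bounded domain by repeatedly
    perturbing the current point and keeping any candidate that is no worse.
    The initial evaluation plus the loop uses exactly `max_evals` calls."""
    rng = rng or random.Random()
    current = [rng.uniform(lo, hi) for lo, hi in bounds]
    best = objective(current)
    for _ in range(max_evals - 1):
        # Propose a neighbour by adding uniform noise, clamped to the domain.
        candidate = [min(hi, max(lo, x + rng.uniform(-step, step)))
                     for x, (lo, hi) in zip(current, bounds)]
        score = objective(candidate)
        if score <= best:  # accept equal-or-better moves only
            current, best = candidate, score
    return current, best
```

On a convex one-dimensional bowl this reliably approaches the minimum; on multimodal landscapes it can stall at a local optimum, which is exactly the weakness the baseline is meant to expose.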
3.2 Optimisation problem characteristics
As Morgan and Gallagher explain [3], their MaxSet of Gaussians (MSG) method [4] is a “randomised landscape generator that specifies test problems as a weighted sum of Gaussian functions. By specifying the number of Gaussians and the mean and covariance parameters for each component, a variety of test landscape instances can be generated. The topological properties of the landscapes are intuitively related to (and vary smoothly with) the parameters of the generator.” By manipulating these parameters, we obtain landscapes with different characteristics. This allows us to investigate the performance of our selected algorithms on landscapes with different features, and to identify which characteristics pose the greatest challenge. As Morgan and Gallagher observe, “Different problem types have their own characteristics, however it is usually the case that complementary insights into algorithm behaviour result from conducting larger experimental studies using a variety of different problem types” [3]. We now describe the different characteristics (corresponding to problem types) under study in this paper.
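For intuition only, a stripped-down landscape in the spirit of the MSG generator can be sketched as the maximum of weighted Gaussian components. We assume isotropic components and invent the weighting scheme here; the real generator [4] specifies full mean and covariance parameters for each component and is supplied as Matlab code.

```python
import math
import random

def make_msg_landscape(n_gaussians, dim, ratio, bound, rng=None):
    """Return (fitness, global_optimum) for a max-of-Gaussians landscape.
    The first component is the global optimum with peak value 1.0; the
    remaining components are local optima whose peaks are scaled by `ratio`."""
    rng = rng or random.Random()
    centres = [[rng.uniform(-bound, bound) for _ in range(dim)]
               for _ in range(n_gaussians)]
    weights = [1.0] + [ratio * rng.uniform(0.5, 1.0)
                       for _ in range(n_gaussians - 1)]

    def fitness(x):
        # The landscape value at x is the largest Gaussian bump at x,
        # in the spirit of a "max-set" of Gaussians.
        return max(w * math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                                / (2.0 * dim))
                   for c, w in zip(centres, weights))

    return fitness, centres[0]
```

Varying `n_gaussians`, `dim`, `ratio` and `bound` then mirrors the characteristics manipulated below: the number of local optima, dimensionality, the local-to-global ratio, and boundary constraints.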
Ruggedness of a landscape is often linked to its difficulty [8], and factors affecting this include (1) the number of local optima [27], and (2) the ratio of the fitness value of local optima to the global optimal value [28, 14]. Other significant factors concern (3) dimensionality [29] (that is, the number of variables in the objective function), (4) boundary constraints (that is, the limits imposed on the value of a variable) [30], and (5) the smoothness of each Gaussian curve (effectively the gradient) used to generate the landscape [31], where a smaller value indicates a smoother gradient. A summary of the ranges selected for each characteristic is given in Table 1.
Table 1. Ranges selected for each landscape characteristic.

Characteristic                             Min   Step   Max   Default
Number of local optima                       0      1     9         3
Ratio of local optima to global optimum    0.1    0.2   0.9       0.5
Dimensionality                               1      1    10         2
Boundary constraints                        10     10   100        30
Smoothness                                  10     10   100        15
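Each sweep in Table 1 follows directly from its (min, step, max) triple; the enumeration below reproduces them (the dictionary keys are our own shorthand, not names used by the generator):

```python
def sweep(minimum, step, maximum):
    """Enumerate the values of one landscape characteristic from Table 1."""
    values, v = [], minimum
    while v <= maximum + 1e-9:           # tolerance guards float drift
        values.append(round(v, 10))
        v += step
    return values

# Defaults from Table 1; each experiment varies exactly one characteristic.
DEFAULTS = {"local_optima": 3, "ratio": 0.5, "dimensions": 2,
            "bound": 30, "smoothness": 15}
RANGES = {"local_optima": sweep(0, 1, 9),
          "ratio": sweep(0.1, 0.2, 0.9),
          "dimensions": sweep(1, 1, 10),
          "bound": sweep(10, 10, 100),
          "smoothness": sweep(10, 10, 100)}
```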
3.3 Performance measurement
In terms of performance metrics, we abstract away from algorithm-specific measures, due to the diverse range of methods selected. The following metrics are applied: (1) Accuracy: we define this as the mean absolute error of the best solution found on a given set of landscape characteristics, over a number of runs, $\frac{1}{n}\sum_{i=1}^{n}|s_i - o|$ (where $S = \{s_1, \ldots, s_n\}$ is the set of best solutions found, $n$ is the number of runs performed and $o$ is the known optimum). This is the most commonly-used assessment metric for optimisation algorithms [4]. The generation technique we use creates landscapes with a known global optimum, in this case zero.
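As a sketch, the accuracy metric reduces to a few lines (the function and argument names are ours):

```python
def mean_absolute_error(best_solutions, optimum=0.0):
    """Accuracy: mean absolute error of the best fitness found in each run,
    measured against the known global optimum (zero for our landscapes)."""
    return sum(abs(s - optimum) for s in best_solutions) / len(best_solutions)
```

For example, best-of-run errors of 0.1, 0.1 and 0.4 give an accuracy of 0.2.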
(2) Variance of final solutions: a measure of variation in the best solutions found across differently-seeded runs. We use the standard deviation of the best solutions of all runs on a given set of landscape characteristics, defined as $\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2}$ (where $X = \{x_1, \ldots, x_n\}$ is our data set, $n$ is the size of the data set and $\bar{x}$ is the mean). (3) Success rate: we measure this as the frequency with which differently-seeded runs of an algorithm are able to find a solution within a specified distance of the optimum [32]. We keep the success tolerance relatively low (error less than 1.010) in order to ensure that we capture the change in success rate of algorithms which perform poorly.

3.4 Experimental setup
In order to generate the landscapes, we used the Matlab code supplied with [4]. All landscapes were generated using the default parameters (three curves, two dimensions, an average ratio of local minima to global minimum of 0.5, 30 units in each dimension, and a smoothness coefficient of 15), with only the parameter under investigation changing for each experiment. We ran each algorithm 100 times on each landscape in the set of landscapes generated for each particular characteristic value (when investigating smoothness, for example, we generated 10 different landscapes (smoothness = 10, 20, …, 100), and ran each algorithm 100 times on each landscape).
Parameterisation of algorithms provides a significant challenge when evaluating performance. Our aim is not to perform “competitive testing” [13], but to establish general performance profiles for different algorithms over different types of problem. As such, we use the so-called “vanilla” implementation of each algorithm, with general-purpose settings taken from [4]. Where an algorithm has a “population size” parameter, we use a value of 50; where an algorithm has a “range” or “velocity” parameter, we use a value of 10.
Termination criteria were also standardised. The most objective criterion is the number of objective function evaluations. This means each algorithm has access to the same amount of information from the landscape, and the same amount of feedback on potential solutions. Experimentally we determined that the selected algorithms generally converged within 20,000 objective function calculations, so this was used as the termination criterion. The code used for all algorithms, as well as datasets and the landscape generator, is available on request from the authors.
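One straightforward way to grant every algorithm an identical budget of objective function evaluations is to wrap the objective in a counting proxy. The sketch below illustrates the idea; it is our own construction, not the termination mechanism used in the code of [26].

```python
class BudgetedObjective:
    """Wrap an objective function so every algorithm sees the same
    evaluation budget; a call past the limit raises to force termination."""

    def __init__(self, fn, budget=20000):
        self.fn, self.budget, self.calls = fn, budget, 0

    def __call__(self, x):
        if self.calls >= self.budget:
            raise RuntimeError("evaluation budget exhausted")
        self.calls += 1
        return self.fn(x)
```

An algorithm's main loop then simply catches the exception (or checks `calls`) as its termination condition, so all methods receive the same amount of feedback from the landscape.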
4 Results
Space prevents a detailed presentation of full experimental plots, but these are available from the project website (http://www2.docm.mmu.ac.uk/STAFF/M.Amos/Project/Characterisation). To summarise, we plot the resilience of each algorithm to changing landscape characteristics, in the form of a radar plot in Figure 1. To assess the resilience of an algorithm we use the standard deviation of the average error across all values of a landscape characteristic, which we normalise on a per-characteristic basis. This “ranking” separates algorithms whose performance is stable from those which are heavily influenced by a characteristic. BFOA shows large deviations in average error for boundary constraint range, smoothness coefficient changes and dimensionality, indicating that BFOA is an algorithm heavily dependent on the landscape of a problem, perhaps because of a heavy reliance on careful parameterisation. SHC also shows large variance, owing perhaps, in large part, to its lack of parameters and of sophisticated local optima avoidance techniques. GA and ES show large variation with respect to the number of local optima, perhaps supporting the argument that evolutionary algorithms suffer more than most from the problem of becoming “stuck” in local optima.
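The normalisation behind the radar plot can be sketched as follows: for each characteristic, take the standard deviation of an algorithm's average error across all of that characteristic's values, then rescale by the largest deviation so scores lie in [0, 1]. The function and the example figures below are illustrative, not our measured data.

```python
import statistics

def resilience_scores(avg_error_by_alg):
    """Map each algorithm's list of average errors (one per value of a
    landscape characteristic) to a normalised variability score in [0, 1]:
    0 means the algorithm is unaffected by the characteristic, 1 marks the
    most heavily influenced algorithm."""
    stds = {alg: statistics.pstdev(errors)
            for alg, errors in avg_error_by_alg.items()}
    top = max(stds.values()) or 1.0  # guard against division by zero
    return {alg: s / top for alg, s in stds.items()}
```

For instance, an algorithm whose average error is constant across a sweep scores 0.0, while the algorithm with the largest spread scores 1.0.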
All algorithms produce the smallest average error when no local optima (minima) are present in the fitness landscape. This is expected, as, with only one optimum, there are no alternative solutions to which the algorithms may converge. Even with only one optimum, SHC shows the greatest average error, with BFOA (approx. 0.14) also showing a large average error. Average errors from GA, ES, PSO, HS, RS and BA are very small (almost zero). BFOA also produces the largest variation in final solutions (0.32). With the introduction of only a single local optimum, the performance of most algorithms degrades significantly. ES and GA suffer significantly, with average error increasing from approximately zero to 0.06 and 0.08 respectively, and the standard deviation of solutions increasing by around 0.15 for each algorithm. SHC also performs poorly, with a similar increase in average error. The least affected are RS (which blindly chooses random solutions, and is therefore unaffected by local minima) and BA, which contains a global search mechanism.
For algorithms which do not directly use the gradient of the landscape, we would expect to see no change in performance as we adjust the ratio of local optima parameter. We observe that RS, which selects new solutions randomly from the entire search space, offers very similar performance in terms of mean error and success rate for all ratio values. Similarly, algorithms which perform a global search should be better at avoiding local minima even when they are attractive, and this is true for BA and HS. PSO shows little change in success rate as the ratio becomes more attractive, owing to the fact that solutions are directed towards the best particle, and their own best solution, regardless of their individual experience with the gradation of the landscape. Interestingly, the SHC average error decreases as the ratio increases, most likely due to an increased availability of ‘better’ solutions throughout the landscape. ES demonstrates very poor, yet consistent, performance as the ratio changes. Success rates are very low, and, interestingly, we observe a decrease in the standard deviation of solutions as the ratio increases. This suggests that ES is perhaps more “content” to optimise at a local minimum, with the algorithm becoming trapped more frequently as the ratio increases. This could also be true of other algorithms whose deviation decreases, such as BFOA and SHC. GA performs in a similar manner to ES with regard to average error and diversity, although with a considerably better success rate, suggesting that this may be a general problem for algorithms which use an evolutionary approach.
At only one dimension, fitness landscapes are trivially easy. The performance of all algorithms reflects this, with all performing well on landscapes of a single dimension. All algorithms show a success rate (that is, optimisation with an error of under 1.010) above 90%. As we increase the dimensionality to two, the performance of most algorithms begins to degrade. Suffering most severely is RS, which is to be expected, as random search is our most basic algorithm. Algorithms which also perform poorly at only two dimensions are ES, BA and PSO. It is perhaps surprising, at first, to see BA performing poorly, given that the algorithm contains a randomly sourced global search. However, this global search is effectively RS, which performs poorly, so we can assume the global search is not covering enough of the landscape. Coupled with the non-adaptive nature of the algorithm (meaning that solution selection around the current best area is within a relatively large range), its poor performance is easily explained. We propose that PSO and ES suffer from a similar problem, in that their exploration is limited, and neither optimises its current best as accurately as its adaptive variants.
Random search exhibits a similar, yet less extreme, reaction to changes in boundary constraints as to the increase in dimensionality. This is to be expected, as the limit on objective function calls gives random search less chance to explore the search space. SHC also shows an almost linear increase in average error, matching the linear increase in search space size, but produces consistently poor results in terms of success. The social algorithms (BA and PSO) both exhibit slightly unusual behaviour: as the problem space increases, their success rate also increases. This suggests that their reliance on a parameter to search within a range hinders these algorithms when the problem space is too small to properly explore. HS provides the best success rate for the entire range of sizes we have selected for this problem, indicating good exploration of the search space irrespective of the range parameter. BFOA also suffers significantly as search space size increases, again implying a heavy reliance on the parameter which controls the range of search for new solutions. The evolutionary algorithms do not cope particularly well with the increase in problem size, with performance in terms of both average error and success rate decreasing consistently as size increases.
The evolutionary algorithms (ES and particularly GA) perform poorly and are the most affected by changes to the smoothness coefficient. BA and PSO also show a decreasing success rate as the curves become steeper, as does BFOA, which relies heavily on gradient information. Harmony Search suffers similarly to the evolutionary and swarm algorithms as the curves become steeper. The similarity in success rate across all algorithms suggests that the availability of gradient information is something which affects them all.
5 Conclusions
In this paper, we have described the results of an extensive study of nature-inspired algorithms, in terms of their performance on fitness landscapes with different characteristics. We studied six nature-based methods (plus two stochastic baseline algorithms), varying a number of landscape features. The most significant characteristic appears to be the number of local minima, where a combination of global and local search appears to be beneficial. On the other hand, the ratio of local optima to the global minimum appears to have little effect on the success of the algorithms under study. As expected, dimensionality proved problematic for all algorithms, whereas landscape smoothness appeared to have little effect.
This work offers a contribution to the empirical study of nature-inspired algorithms, and we hope that it motivates future investigations. To further this work, it may be useful to examine a larger collection of nature-inspired algorithms over a greater range of values for the characteristics, in order to more fully capture a wider variety of algorithmic performance. The current work provides a firm foundation for this.
References

[1] D. Wolpert and W. Macready, “No free lunch theorems for optimization,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, Apr. 1997.
[2] J. Woodward, “Why classifying search algorithms is essential,” in 2010 International Conference on Progress in Informatics and Computing, 2010.
[3] R. Morgan and M. Gallagher, “When does dependency modelling help? Using a randomized landscape generator to compare algorithms in terms of problem structure,” in PPSN XI, R. Schaefer et al., Eds. Springer-Verlag, 2010, pp. 94–103.
[4] M. Gallagher and B. Yuan, “A general-purpose tunable landscape generator,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 5, pp. 590–603, 2006.
[5] R. Jani, “A generator for multimodal test functions,” in Proc. SEAL 2008: LNCS 5361, X. Li, Ed. Springer-Verlag, 2008, vol. 3, pp. 239–248.
[6] Y. Jin, “Constructing dynamic optimization test problems using the multi-objective optimization concept,” in Applications of Evolutionary Computing: LNCS 3005, G. Raidl, Ed. Springer-Verlag, 2004, pp. 525–536.
[7] Z. Michalewicz, K. Deb, and M. Schmidt, “Test-case generator for nonlinear continuous parameter optimization techniques,” IEEE Transactions on Evolutionary Computation, vol. 4, no. 3, pp. 197–215, 2000.
[8] T. Jones and S. Forrest, “Fitness distance correlation as a measure of problem difficulty for genetic algorithms,” in Proceedings of the 6th International Conference on Genetic Algorithms, 1995, pp. 184–192.
[9] R. Chiong, Nature-Inspired Algorithms for Optimisation. Springer-Verlag, 2009.
[10] Q. Zhang, “On the convergence of a class of estimation of distribution algorithms,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 2, pp. 127–136, Apr. 2004.
[11] J. He and X. Yao, “From an individual to a population: An analysis of the first hitting time of population-based evolutionary algorithms,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 5, pp. 495–511, 2002.
[12] R. Barr, B. Golden, and J. Kelly, “Designing and reporting on computational experiments with heuristic methods,” Journal of Heuristics, vol. 1, pp. 9–32, 1995.
[13] J. Hooker, “Testing heuristics: we have it all wrong,” Journal of Heuristics, 1995.
[14] P. Merz, “Fitness landscape analysis and memetic algorithms for the quadratic assignment problem,” IEEE Transactions on Evolutionary Computation, vol. 4, no. 4, pp. 337–352, 2000.
[15] J. Tavares, F. B. Pereira, and E. Costa, “Multidimensional knapsack problem: a fitness landscape analysis,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 38, no. 3, pp. 604–616, Jun. 2008.
[16] G. Uludag and A. Sima Uyar, “Fitness landscape analysis of differential evolution algorithms,” in Fifth International Conference on Soft Computing, Computing with Words and Perceptions in System Analysis, Decision and Control (ICSCCW 2009), 2009, pp. 1–4.
[17] C. McGeoch, “Toward an experimental method for algorithm simulation,” INFORMS Journal on Computing, vol. 8, no. 1, pp. 1–15, 1996.
[18] A. Eiben, “A critical note on experimental research methodology in EC,” in Proceedings of the 2002 Congress on Evolutionary Computation, vol. 1. IEEE, 2002, pp. 582–587.
[19] A. Brabazon and M. O’Neill, Biologically Inspired Algorithms for Financial Modelling. Springer-Verlag, 2006.
[20] K. Passino, “Biomimicry of bacterial foraging for distributed optimization and control,” IEEE Control Systems Magazine, vol. 22, no. 3, pp. 52–67, Jun. 2002.
[21] D. Pham, A. Ghanbarzadeh, and E. Koc, “The Bees Algorithm – a novel tool for complex optimisation problems,” in Intelligent Production Machines and Systems, D. Pham, E. Eldukhri, and A. Soroka, Eds., 2006, pp. 454–459.
[22] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, 1995, pp. 1942–1948.
[23] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, 1989.
[24] T. Bäck and H.-P. Schwefel, “An overview of evolutionary algorithms for parameter optimization,” Evolutionary Computation, vol. 1, no. 1, pp. 1–23, Mar. 1993.
[25] Z. Geem and J. Kim, “A new heuristic optimization algorithm: harmony search,” Simulation, vol. 76, no. 2, pp. 60–68, 2001.
[26] J. Brownlee, Clever Algorithms: Nature-Inspired Programming Recipes. Lulu, 2011. [Online]. Available: http://www.cleveralgorithms.com
[27] J. Horn and D. Goldberg, “Genetic algorithm difficulty and the modality of fitness landscapes,” in Foundations of Genetic Algorithms 3, 1994.
[28] K. M. Malan and A. P. Engelbrecht, “Quantifying ruggedness of continuous landscapes using entropy,” in 2009 IEEE Congress on Evolutionary Computation. IEEE, May 2009, pp. 1440–1447.
[29] T. Hendtlass, “Particle swarm optimisation and high dimensional problem spaces,” in 2009 IEEE Congress on Evolutionary Computation (CEC’09). IEEE, May 2009, pp. 1988–1994.
[30] S. Kukkonen and J. Lampinen, “GDE3: The third evolution step of generalized differential evolution,” in 2005 IEEE Congress on Evolutionary Computation, 2005, pp. 443–450.
[31] H.-G. Beyer and H.-P. Schwefel, “Evolution strategies,” Natural Computing, vol. 1, pp. 3–52, 2002.
[32] E. Elbeltagi, T. Hegazy, and D. Grierson, “Comparison among five evolutionary-based optimization algorithms,” Advanced Engineering Informatics, vol. 19, no. 1, pp. 43–53, Jan. 2005.