1 Introduction
Optimization is an essential tool in many fields of science and engineering. The two main approaches to optimization are mathematical methods and metaheuristics. Mathematical methods such as gradient-based approaches are reliable, with proofs of convergence to a global optimum under predetermined conditions on the optimization model [1]. However, these conditions are satisfied only by specific models, and many real-world problems remain intractable for mathematical optimization. (Meta)heuristic/evolutionary algorithms are an appropriate approach in such situations. They have the potential to discover the global optimum regardless of the properties of the cost/fitness function, and they operate with minimal information about that function. As a consequence, they are easy to program and to adjust for different problems.
Metaheuristic algorithms borrow a kind of intelligence, mostly from nature. They are able to discover a global optimum for a wide range of problems. A crucial requirement for such intelligence is the presence of exploration and exploitation features in the source of inspiration. An intelligent system should be able to exploit a solution confirmed as promising, and to explore a sufficient number of candidate solutions efficiently. As an obvious instance, consider the natural thinking abilities of a human being. The focused and diffused (default) thinking modes can be regarded as the exploitation and exploration abilities of humankind, respectively [2]. These are thinking modes from a psychological point of view. Our proposed algorithm is instead inspired by thinking modes developed in the context of philosophy. Throughout history, philosophers have developed thinking modes to equip humankind with powerful tools on the path to discovering truth [3]. In the terminology of this work, truth is the desired global optimum, and the population is equipped with two opposite kinds of thinking modes, i.e., speculative thinking for exploration and practical thinking for exploitation. We borrow the idea of dialectics from modern philosophy to define the thinking modes and their results. Two types of dialectics are modeled based on distances in the objective and subjective spaces. By thinking, we refer to a simple procedure in which each solution (thinker) selects a dialectical solution for interaction.
The key to solving different problems is striking a balance between exploration and exploitation, which is approached by controlling the parameters of the algorithm. A good balance leads to an efficient search in a reasonable time. Hence, the main point in designing a new algorithm is consistency between the operators developed for exploitation and exploration; it allows the balance to be captured easily by a minimum number of parameters. A review of the main operators of a few well-known algorithms gives some insight into the evolution of exploitation and exploration operators. A basic operator for exploitation is the one used in the particle swarm optimization (PSO) algorithm [4]. The motion of all particles toward the best solution has high exploitation power, however, at the risk of being trapped at a local optimum. On the other side, in the genetic algorithm (GA) [5], random mutations drawn from a specific probability distribution have significant exploration power, at the expense of runtime. From one point of view, the operators introduced in the metaheuristic algorithms that followed PSO and GA try to relax the determinism of the motion toward one leader, and to constrain the randomness of mutations. In order to avoid local optima, the determinism of the motions toward one leader in PSO was relaxed in its variants. For example, in fitness-distance-ratio based PSO (FDR-PSO) [6], a near and better solution is selected for each particle, so that it follows a local leader instead of one global leader. Other algorithms, such as imperialistic competition [7] and natural aggregation [8], utilize a set of best solutions instead of the single best leader of PSO. However, there is still randomness in the selection of one of the best solutions as the imperialist/shelter. On the other side, differential evolution (DE) constrains the completely random mutations of GA by using difference vectors among the solutions as the mutation vectors. However, there is still randomness in the selection of the two solutions used to compute the mutation vector (in DE/rand/1/bin), and there is determinism in following one best solution (in DE/best/1/bin) [9]. Moreover, similar random selections exist in the interactions among solutions of other algorithms, such as brainstorm optimization [10], and in the learning phase of teaching-learning-based optimization [11]. Overall, except for PSO and its variants, in which all particles follow the same criteria, the other mentioned algorithms contain randomness in the selection phase of the interactions. Our main motivation in developing the proposed algorithm was to discover a systematic interaction among the particles without any randomness in the selection phase. As explained in the next section, the idea was inspired by modern philosophy, which is based on systematic dialectic instead of the arbitrary dialectic of the ancient philosophers. In this work, the arbitrary dialectic is interpreted as the result of random selection of particles for interaction. We develop an efficient deterministic selection scheme by leveraging the definitions of two kinds of antitheses. In the literature, opposite solutions have been utilized to accelerate evolutionary algorithms [12]. Further, there is a research direction on high-level language programming inspired by dialectical philosophy [13]. The most related work is an optimization algorithm called dialectic search [14]. However, it has fundamental differences with our proposed algorithm in its source of inspiration and modeling:

Dialectic search is inspired by the work of Hegel and Fichte, who developed idealistic dialectic, while our source of inspiration is based on both idealistic and materialistic dialectics.

In our models, dialectic is sought within the population, such that the population of solutions improves its positions based on dialectical interactions, while in the dialectic search algorithm, dialectic is imposed by local random changes in a single solution.

In our proposed algorithm, new solutions are generated by meaningful steps toward the dialectical solutions, known as antitheses, while in dialectic search a new solution is sought along the path toward the dialectical solution.
It is worth mentioning that the idea of the proposed algorithm was formed and developed without awareness of the dialectic search algorithm. The proposed algorithm is named ideological sublations (IS), because of the tendency of the thinkers (particles) to cancel their theses (positions) while simultaneously preserving them. Two essential ideas behind the IS algorithm are the definitions of 1) the Euclidean distance between two solutions as a metric of idealistic contradiction, and 2) the difference between their objective values as a metric of materialistic contradiction. The key to managing the contradictions is the separation of the solutions into two groups according to their qualities.
The rest of the paper is organized as follows. In the next section, the concept of dialectics and its evolution are reviewed from a philosophical point of view, and connections with the proposed algorithm are discussed. In Section 3, the proposed algorithm is explained after modeling the considered thinking modes. In Section 4, experimental results on benchmark test functions, a sparse reconstruction problem, and an antenna selection challenge are presented. Finally, a discussion is provided in Section 5, and the paper is concluded in Section 6.
2 A Review on the Evolution of Dialectic
The word dialectic is literally composed of the prefix dia, which means "across", and the Greek root legein, which means "speak" [17]. In the context of philosophy, dialectic is a process of contradiction between the two opposite sides of a thing that leads to truth. The first utilization of dialectic belongs to the ancient Greek philosophers, who innovated a back-and-forth form of dialectic in their arguments [18]. Later, other dialectical thinking modes were developed by different philosophers [3]. In a philosophical expression, the aim was a universal thinking mode that eliminates the opposition between thinking and existence in any situation. Among the developed modes, two complementary modes of dialectical thinking have received the most attention: speculative thinking and practical thinking.
The speculative mode of dialectical thinking was radically evolved by G. W. F. Hegel (1770-1831), who reformed the classic version of dialectic. In his systematic model of dialectic, as shown in Figure 1, the speculative moment, or the moment of resolution, arises after the two stages of the understanding moment and the dialectical (sublation) moment. A thesis that seems stable at the understanding moment challenges itself (because of its one-sidedness or restrictedness) and passes into its opposite side (antithesis) at the dialectical moment. The contradiction between thesis and antithesis at the unstable moment of sublation leads to a new, more sophisticated thesis (synthesis) emerging at the speculative moment. At the next repetition, the synthesis challenges itself, interacts with its antithesis, and reforms itself into another new synthesis. The process continues until truth is reached. In the terminology of the proposed swarm-based optimization algorithm, all candidate solutions are regarded as the existing theses, truth is the optimum solution to be discovered, and the speculative thinking mode is modeled to explore the search space. Let us introduce the speculative operation as a guess with a particular randomness.
The main difference between the Hegelian dialectic and the classical one is the process of self-sublation at the dialectical moment. In this process, each thesis cancels out and preserves itself simultaneously, such that it transforms into an antithesis. Hence, unlike the classical dialectic, which waits for an arbitrary opposition from outside, the progress in Hegel's process is deterministic, because of the unity of thesis and antithesis in his model. According to Hegel's findings, his procedure leads to an exact truth, unlike the ancient method, which leads to an approximate truth [18]. By this extreme refinement in the definition of dialectic, and by expressing the speculative thinking process in the mentioned three logical stages, Hegel introduced a systematic idealism in which a systematic and deterministic change in the subjective idea leads to improvement in the objective material. In the context of metaheuristics, if we regard the random mutations of the genetic algorithm as arbitrary dialectics, then the differential mutations of the DE algorithm follow a more systematic and intelligent way of producing dialectic. However, randomness still comes from the arbitrary choice of the generating pairs of the mutation vectors (in DE/rand/1/bin). The same kind of randomness exists in the teaching-learning-based optimization algorithm as arbitrary interactions among students. In the proposed algorithm, based on the definition of self-sublation, mutation vectors are generated from two deterministically selected candidate solutions, i.e., an idealistic thesis and its available antithesis. In fact, the speculative thinking mode is used as an exploration operator that eliminates the randomness in choosing a pair for each solution vector.
Materialistic dialectic is the complementary part of idealistic dialectic. It is well known through K. Marx (1818-1883). In the opposite direction from Hegel's philosophy, Marx refused to speculate in detail [19]. He realized that the opposition of thinking and existence has its root in human activities [3]. In the materialistic ideology, social existence determines consciousness, contrary to the idealistic thought that existence is determined by consciousness. Nevertheless, the idea of materialistic dialectic was also expressed in the same three logical stages of the understanding, dialectic (sublation), and resolution moments, which lead to the thesis-antithesis-synthesis paradigm. The practical thinking mode was developed from this kind of dialectic. According to materialism, change in the objective material leads to improvement in the subjective idea. As a symbolic example of such a procedure of reformation, we can mention an important psychological progress after the industrial revolution: confidence in the ability to use resources and to master nature [20]. The practical thinking mode is translated into the population-based optimization context and utilized as an exploitation operator with a relaxed determinism in the selection of, and movement toward, a leader.
3 Proposed Algorithm
The block diagram in Figure 1 illustrates the main idea behind the proposed algorithm. As illustrated, the loop of the algorithm consists of three moments: understanding, sublation, and speculative/practical. As will be clarified, the understanding and speculative/practical moments are modeled by simple operators regularly utilized in the context of swarm-based optimization algorithms, but with some meaningful nuances. Hence, the operators introduced for the sublation moment are the main idea behind the proposed algorithm. In this section, first, two kinds of difference among the solution vectors in a population are highlighted, and consequently two models for the dialectics are formulated. Then, the proposed dialectic models, which lead to a unique antithesis, are translated into the population-based optimization context in the three logical stages.
3.1 Dialectic Models
A solution vector is regarded as one individual thesis about different subjects. The goal is optimizing a cost/fitness function f of the decision variables, i.e., solving min f(x) over the valid domain (maximization is analogous). In swarm/population-based approaches, candidate solutions follow some rules to discover the global optimum. Any difference among the theses in the population leads to a challenge for motion. However, as philosophy promises, an extreme difference, called dialectic, can lead to a high-resolution optimum solution (truth) faster than an arbitrary difference. In order to organize dialectical interactions among the solutions, idealistic and materialistic antitheses are modeled.
Simply, we define the Euclidean distance between two solutions as the idealistic difference, and the distance in the objective space as the materialistic difference. Setting aside the limitation on the number of samples acquired from the function, an idealistic antithesis for one specific thesis x_t is modeled as the solution of the following optimization problem:
a(x_t) = argmax_x ||x − x_t||   subject to   f(x) = f(x_t).    (1)
According to the proposed model, a thesis that belongs to the speculative thinking community should sublate itself in a way that leads to an antithesis at the largest distance (the canceling-out property), but at the same level of quality (the preserving property). Model (1) is an idealistic definition of the speculative antithesis. According to the definition, an exact antithesis is identifiable only when the infinitely many solutions in the domain with the same quality as the thesis have all been evaluated. Such a procedure is obviously impractical. However, a mimicked translation of the idea is possible for a community with a finite population.
On the other hand, a practical antithesis is sought among a number of best solutions; the one at the nearest distance from a practical thesis has a dialectical position, since it is the most approachable solution that promises a significantly higher quality. In a mathematical expression:
a(x_t) = argmin_{x ∈ B} ||x − x_t||   subject to   f(x_t) − f(x) ≥ δ,    (2)
where B is the set of best solutions and the scalar δ guarantees a dialectical gap between a materialistic thesis and its corresponding antithesis. The gap cancels out a practical thesis in a materialistic self-sublation, while looking for the closest solution, i.e., minimizing the Euclidean distance (idealistic dialectic), is the preserving side of the sublation in the practical thinking mode. In the following, we arrange the thinkers/particles in such a manner that the desired dialectics are included in the interactions among the thinkers. As indicated, although finding a solution in perfect dialectic with each candidate solution is not possible, approximating an antithesis among the existing solutions (theses) still delivers a taste of what dialectical philosophy promises.
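As a concrete illustration, the two models can be approximated over a finite population. The sketch below is ours, not the paper's notation: the function names, the equal-quality tolerance, and the treatment of an empty candidate set are assumptions.

```python
import numpy as np

def idealistic_antithesis(i, pop, costs, tol=1e-3):
    """Model (1) on a finite population: among theses with (nearly) the
    same cost as thesis i, return the index of the one at the largest
    Euclidean distance; None if no same-quality peer exists."""
    peers = [j for j in range(len(pop))
             if j != i and abs(costs[j] - costs[i]) <= tol]
    if not peers:
        return None
    dists = [np.linalg.norm(pop[j] - pop[i]) for j in peers]
    return peers[int(np.argmax(dists))]

def practical_antithesis(i, pop, costs, best_idx, gap):
    """Model (2) on a finite population: among the elite theses (indices in
    best_idx) better than thesis i by at least `gap`, return the nearest."""
    cands = [j for j in best_idx if costs[i] - costs[j] >= gap]
    if not cands:
        return None
    dists = [np.linalg.norm(pop[j] - pop[i]) for j in cands]
    return cands[int(np.argmin(dists))]
```

Both selections are deterministic, which is the point of the dialectic models: no random sampling enters the choice of the interaction partner.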
3.2 Understanding Moment
At this stage, all initial/new solutions are evaluated. Except for the first iteration, in which all initial solutions are considered as new theses, at the other iterations a new thesis (synthesis) is accepted only if its position at the current iteration has better quality than its position at the previous iteration. In other words, the best thesis of each thinker is preserved during the optimization process. Consequently, at each iteration, all solutions accepted as new theses are sorted according to their cost/fitness values. The sorted theses are divided into two groups of high-quality and low-quality solutions. Simply, the S highest-quality solutions are appointed for speculative thinking and the remaining N − S solutions are assigned for practical thinking, where N is the total number of thinkers/particles. As elaborated in the next subsection, this assignment simplifies the task of each thinker in finding its corresponding antithesis among all available solutions. The integer value of S is the main parameter of the IS algorithm; it can be adjusted easily by trying a few different values between 0 and N, e.g., multiples of 5. Large values of S provide a high exploration capacity through a large number of speculative thinkers, and small values boost the exploitation capability by increasing the number of practical thinkers. As seen in the simulation results, in most problems an appropriate value is an integer larger than N/2. In summary, after each repetition, at the understanding moment, the new theses/positions are checked for acceptance or rejection, and are sorted for the assignment of the speculative or practical thinking mode to each thinker/particle.
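The acceptance-and-grouping step described above can be sketched as follows (a minimal sketch assuming minimization; the function name and array layout are our assumptions):

```python
import numpy as np

def understanding_moment(new_pos, prev_pos, prev_costs, cost_fn, S):
    """Greedy acceptance of improved theses, then a quality sort splitting
    the population into S speculative and N - S practical thinkers."""
    new_pos = np.array(new_pos, dtype=float)
    costs = np.array([cost_fn(x) for x in new_pos])
    worse = costs > prev_costs            # rejected syntheses
    new_pos[worse] = prev_pos[worse]      # keep each thinker's best thesis
    costs[worse] = prev_costs[worse]
    order = np.argsort(costs)             # best first (minimization)
    return new_pos, costs, order[:S], order[S:]
```

The greedy acceptance means the per-thinker best position is never lost, so the sort always operates on each thinker's best thesis so far.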
3.3 SelfSublation Moment
At this moment, each thinker challenges her thesis according to the thinking mode assigned at the understanding moment. During this process, which is called self-sublation, each thinker looks for an antithesis among the theses available from the other thinkers. The discovered antithesis becomes a reference point for a change. As mentioned, an antithesis is a solution at maximum contradiction, or dialectic, with the thesis. Since there are two kinds of dialectic (idealistic dialectic for speculative thinking and materialistic dialectic for practical thinking), along with the maximization of one particular dialectic, its opposite dialectic is minimized in each thinking mode. This fact is implicitly formulated in the constraint of model (1) and in the objective of model (2). In the following subsections, we reduce the proposed models to two simple hypotheses for finding an approximate antithesis for one typical thesis in the collection of solutions. Depending on the thinking mode assigned to each thinker, one of the hypotheses is deployed. Figure 2 demonstrates the proposed self-sublation scheme among theses sorted according to their qualities at one specific iteration. In this figure, the solution(s) indicated by an arrow are the candidate(s) to be considered as an antithesis for their corresponding thesis.
3.3.1 Speculative Thinking Mode
Speculative thinking is the mode of thinking among the high-quality solutions. Except for the best and the S-th-best solutions, which deterministically choose their nearest speculative thinker in the objective space as the antithesis, the other solutions face a simple detection problem in finding their antithesis (see Figure 2). If we label the speculative solutions by their quality order, from x_1 for the best solution to x_S for the S-th-best solution, then the antitheses for the first and last theses are:
a(x_1) = x_2,    a(x_S) = x_{S−1}.    (3)
Otherwise, for 1 < s < S, the remaining speculative thinkers look for an antithesis in their objective neighbourhood, within a radius of 1 in the quality order. In other words, each thinker checks the distance of his thesis from the theses of the one-step-higher-quality and one-step-lower-quality thinkers, then selects the one whose thesis is at the longest distance from his own. In fact, looking at the objective neighbourhood is the preserving side, and choosing the solution at the largest distance is the canceling-out side of a speculative thesis at the self-sublation moment. In another expression, the antithesis for a thesis x_s with 1 < s < S is:
a(x_s) = argmax_{x ∈ {x_{s−1}, x_{s+1}}} ||x_s − x||.    (4)
Clearly, the selection of a solution at a large distance as the antithesis boosts the exploration property of the speculative thinking mode.
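The selection rule of equations (3)-(4) reduces to a few comparisons per speculative thinker. A sketch with 0-based indices (naming is ours):

```python
import numpy as np

def speculative_antithesis(s, spec_positions):
    """Antithesis index for the s-th speculative thinker, 0-based, where
    spec_positions is sorted best-first by cost."""
    S = len(spec_positions)
    if s == 0:          # best thesis pairs with the second best
        return 1
    if s == S - 1:      # last speculative thesis pairs with its better neighbour
        return S - 2
    # otherwise: the farther of the two quality-order neighbours
    d_up = np.linalg.norm(spec_positions[s] - spec_positions[s - 1])
    d_dn = np.linalg.norm(spec_positions[s] - spec_positions[s + 1])
    return s - 1 if d_up > d_dn else s + 1
```

Note that the whole decision is deterministic: each thinker inspects only its two quality-order neighbours, so no random partner selection is involved.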
3.3.2 Practical Thinking Mode
The low-quality solutions, consisting of N − S thinkers, measure the distance of their theses from the best existing thesis (the existing truth) and from its idealistic antithesis. The one that is closer to that specific practical thesis is chosen as the practical antithesis. As indicated in equation (3), the antithesis for the best solution (x_1) is always fixed on the second-best solution (x_2). The second-best solution is often a reasonable candidate as the antithesis for the practical thinkers. However, in some problems, it can lead to stability issues. We define the auxiliary parameter N_e to increase the stability in such situations. At each iteration, the distances of the N_e best solutions (excluding the best solution itself) from the best solution are measured, and the solution at the largest distance is chosen as an alternative idealistic antithesis, x_alt, for the best solution x_1. This alternative antithesis is an appropriate candidate antithesis for the practical thinkers in specific problems, i.e.,
x_alt = argmax_{x ∈ {x_2, …, x_{N_e}}} ||x − x_1||;    (5)
the result of the practical sublation for a solution x_p among the low-quality solutions (S < p ≤ N) is then as follows:
a(x_p) = argmin_{x ∈ {x_1, x_alt}} ||x_p − x||.    (6)
It is necessary to emphasize that the mentioned alternative idealistic antithesis for the best solution at each iteration, i.e., x_alt, is used only for the practical thinking of the low-quality solutions, and the second-best solution x_2 is always the fixed antithesis for the speculation of the best solution x_1. In other words, when N_e = 2, the antithesis for x_1 from the viewpoints of both the practical thinkers and the best thinker is the same. This value is the recommended initial setting of N_e. As indicated in the simulations, rarely can N_e = 1 lead to better performance; at that value, the best thesis is compulsorily regarded as the antithesis for all practical thinkers. Moreover, increasing N_e to values larger than 2 can stabilize the algorithm in the optimization of some particular functions. It is worth mentioning that since x_alt belongs to the N_e best solutions, the desired materialistic dialectic with respect to the low-quality theses always holds. As a final remark, although the antitheses are selected between two similar candidates, the aggregation of such nuanced decisions by the thinkers has a significant impact on the final result after a large number of iterations.
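A sketch of the practical selection described by equations (5)-(6), assuming the population is sorted best-first by quality; the handling of N_e = 1 follows the text's remark that all practical thinkers then follow the best thesis (variable names are ours):

```python
import numpy as np

def choose_practical_antithesis(p_pos, sorted_pop, Ne):
    """Return the antithesis position for one practical thinker: the
    closer of the best thesis x1 and the alternative x_alt of eq. (5)."""
    x1 = sorted_pop[0]
    if Ne <= 1:
        return x1                      # every practical thinker follows x1
    # x_alt: among x2..x_Ne, the one farthest from x1  (eq. (5))
    elites = sorted_pop[1:Ne]
    x_alt = elites[int(np.argmax([np.linalg.norm(e - x1) for e in elites]))]
    # eq. (6): pick the nearer of the two candidates
    if np.linalg.norm(p_pos - x1) <= np.linalg.norm(p_pos - x_alt):
        return x1
    return x_alt
```

With N_e = 2 this degenerates to choosing between x_1 and x_2, the default recommended in the text.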
3.4 Speculative and Practical Moment
At this moment, both practical and speculative thinkers update their theses based on their corresponding antitheses. The detected antitheses are used as reference points for the speculative/practical motions. The update rule is simply modeled by the following equation for all thinkers (i = 1, …, N):
x_i^new = x_i + r ⊙ (a(x_i) − x_i),    (7)
where r is a D-dimensional vector with random elements as the step-sizes, and ⊙ indicates entrywise multiplication. The main distinguishing point between the speculative and practical motions is the distribution of the random variables used as the step-size vector r; the distributions are different and specific for the speculative and practical thinking modes. After checking some basic distributions, we found that with the step-sizes of the speculative motions drawn from a uniform distribution with a negligible bias, and with a biased normal distribution for the step-sizes of the practical movements, the algorithm always converges, i.e.,
r_j ~ μ_s + U(−α, α) for speculative thinkers,   r_j ~ N(μ_p, σ_p²) for practical thinkers,   j = 1, …, D.    (8)
The following parameter values were found empirically to be appropriate in dealing with different problems; they are the fixed parameters of the proposed algorithm. The two variable parameters, whose adjustment influences the performance, are the number of speculative thinkers S and the number of elites N_e used in finding the two opposite directions for exploitation. The fixed parameters of the step-sizes are:

μ_s = 0.0455 for all cases (bias of the uniform distribution of the speculative step-sizes);

μ_p = 0.6 when the best solution x_1 is the practical antithesis;

μ_p = 0.45 when x_alt is the practical antithesis;

a small variance σ_p² for N_e = 1, and an enlarged variance for N_e > 1.
As inferred from the parameters, the mean of the uniform distribution for the speculative step-sizes is always fixed on the given value, independent of other parameters or cases. However, the mean of the normal distribution of the practical step-sizes depends on the materialistic antithesis chosen for the specific practical thesis. If the best solution x_1 was detected as the antithesis, then a bias of 0.6 is imposed on the normal distribution. Otherwise, in the case of selection of x_alt as the practical antithesis, a smaller bias of 0.45 is utilized, because of the lower quality of x_alt with respect to the existing truth. In a similar interpretation, the variance of the practical step-sizes is low when the antithesis of all practical theses is the best solution, or equivalently when N_e = 1; the reason is the priority of exploitation in this case. On the other hand, when x_alt is also engaged in the practical sublations, for N_e ≥ 2, diversity is the center of attention; hence, in order to boost the diversity, it is logical to release the concentration toward the target antithesis (x_1 or x_alt) by increasing the variance of the step-sizes. Figure 3 demonstrates the distribution of the random motions for both speculative and practical modes in a 2-dimensional space. The start point of the motion is the thesis A, and the reference point for the interaction is the detected antithesis B. As depicted in Figure 3(a), in the practical mode, in which significantly better solutions are aimed for, the thinker scans the area around the antithesis for more details. At a low variance of the step-sizes (for N_e = 1), the sensing area shrinks and becomes more concentrated around the antithesis. The exploitation capability of the algorithm is gained by these practical motions. In comparison with swarm-based global optimization algorithms such as PSO, FDR-PSO [6], ICA [7], and NAA [8], there is a gap between the cost values of the low-quality solutions and those of the antitheses they follow. Also, the decision of the practical thinkers about their antithesis is taken deterministically between the best solution and its antithesis, in contrast with the randomness of ICA and NAA in choosing an empire or shelter as the target elite solution.
On the opposite side, the thinkers/particles in the speculative mode explore the search space. As illustrated in Figure 3(b), in the ideal case there is no bias toward the idealistic antithesis, and the thinker with the idealistic ideology can freely move in any direction. That kind of motion is realized by a uniform distribution of step-sizes around a zero mean. However, in practice, we found that a small bias of 0.0455 has a remarkable impact on the convergence and performance of the IS algorithm. In comparison with population-based algorithms such as the genetic and differential evolution algorithms, the speculative motion can be regarded as a structured mutation whose intensity is controlled by the idealistic antithesis. The proposed systematic interactions, with their deterministic selection of antitheses, stand in contrast with the randomness in the selection of the generating pairs of the mutation vectors in the DE algorithm.
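The resolution step of equations (7)-(8) can be sketched as below. The bias values 0.0455 and 0.6/0.45 are taken from the text, while the uniform half-width and the normal standard deviation are placeholders, since their exact values were not recoverable here:

```python
import numpy as np

rng = np.random.default_rng(0)

def move(thesis, antithesis, mode, mu_spec=0.0455, mu_prac=0.6,
         spread=0.5, sigma_prac=0.1):
    """Update rule (7) with mode-specific step-size distributions (8).
    `spread` and `sigma_prac` are placeholder values, not the paper's."""
    D = thesis.shape[0]
    if mode == "speculative":   # near-zero-mean uniform steps: exploration
        r = mu_spec + rng.uniform(-spread, spread, size=D)
    else:                       # biased normal steps: exploitation
        r = rng.normal(mu_prac, sigma_prac, size=D)
    return thesis + r * (antithesis - thesis)   # entrywise multiplication
```

A caller would pass mu_prac=0.45 when x_alt, rather than x_1, is the chosen practical antithesis, and a larger sigma_prac when N_e > 1, mirroring the variance discussion above.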
The main stages of the proposed algorithm are summarized in Algorithm 1. Initialization is accomplished in steps 1 and 2. The understanding moment is implemented by steps 3 to 6. Steps 7 and 8 are assigned to the sublation and resolution moments, respectively. Obviously, any constraint of the optimization model should be imposed before any function evaluation, i.e., before step 3. For instance, in the case of single-objective continuous problems, all decision variables of the candidate solutions are kept within the valid domain by passing through clipping operators at the lower and upper bounds. As another example, in combinatorial problems, the continuous variables should be transformed into valid discrete variables by an appropriate mapping; in this case, the algorithm usually operates in the continuous domain (at step 8), but the evaluations are done in the discrete domain. As a remark for combinatorial problems, according to our observations on the antenna selection problem and some other discrete models,
3.5 On Computational Complexity
Three main operations determine the order of computational complexity of the proposed algorithm. With N thinkers, the computational burden at each iteration comes from 1) the sorting of the cost values, 2) the computation of approximately N distances at the sublation moment, and 3) the computation of the new theses according to the update rule in (7). The computational complexity of the first operation is O(N log N). The second and third operations have a similar complexity order of O(ND), where D is the problem dimension. Hence, the worst-case complexity of the IS algorithm after T iterations is O(N_fe (log N + D)), where N_fe = NT indicates the number of function evaluations. As a result, the asymptotic order of complexity remains O(N_fe D), since log N is typically dominated by D. Although the computational complexity of the IS algorithm is of the same order of magnitude as that of DE [21], and roughly of the same order as most evolutionary algorithms, in a finer comparison, as shown in the simulation results, the runtime of the operators of the proposed algorithm is at most half of that of the other tested algorithms.
4 Simulation Results
In this section, we evaluate the efficiency and speed of the proposed algorithm on a number of single-objective benchmark cost functions, a continuous optimization model for sparse reconstruction, and a binary optimization problem for large-scale antenna selection. For the benchmark functions, comparisons were made with DE/rand/1/bin (DE for brevity), cooperative DE (CoDE) [22], comprehensive learning PSO (CLPSO) [23], grey wolf optimization (GWO) [24], and teaching-learning-based optimization (TLBO) [11]. For the sparse reconstruction problem, additional comparisons were made with PSO with constriction coefficients (PSO-cc) [25], and also with state-of-the-art dedicated algorithms for sparse reconstruction. Finally, for the antenna selection problem, comparisons are provided with the only competitive algorithm among the considered ones, and with a GA-based discrete algorithm recently proposed for antenna selection. We should mention that our previously developed algorithm, inspired by a tornado's air currents [26], despite its efficiency in low-dimensional problems, quickly failed to be competitive at large scales.
The performance metric was the cost/fitness value for all problems except one of the benchmark functions and the sparse reconstruction model, where the distortion from the optimum solution was also measured. Two metrics of distortion were utilized: the mean squared error, MSE = E[||x* − x̂||²], and the normalized mean squared error, NMSE = E[||x* − x̂||² / ||x*||²], where x* is the optimum solution, x̂ is the approximated solution, and the expectation operator indicates averaging over a number of trials. One trial of an optimization procedure was regarded as successful if the reached cost value was less than a predefined threshold value. The number of thinkers/particles, or population size, was fixed at 40 for all algorithms unless otherwise mentioned. All simulations were run on the same computer, with an Intel Core i3 at 1.9 GHz and 4 GB of RAM, operating on 64-bit Windows 8 with MATLAB 2008.
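For reference, the two distortion metrics in their single-trial form; the paper's exact normalization was not recoverable here, so these are the conventional definitions:

```python
import numpy as np

def mse(x_opt, x_hat):
    """Mean squared error between the optimum and the approximation
    (averaged over trials elsewhere; this is a single-trial value)."""
    x_opt, x_hat = np.asarray(x_opt, float), np.asarray(x_hat, float)
    return float(np.mean((x_opt - x_hat) ** 2))

def nmse(x_opt, x_hat):
    """Normalized MSE: squared error relative to the optimum's energy."""
    x_opt, x_hat = np.asarray(x_opt, float), np.asarray(x_hat, float)
    return float(np.sum((x_opt - x_hat) ** 2) / np.sum(x_opt ** 2))
```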
4.1 Benchmark Functions
In this subsection, 12 benchmark cost functions were used for the evaluation. They were selected among the challenging problems of the CEC 2017 competition [27] and of [28]. The considered set of benchmark functions consists of unimodal/multimodal and (non)differentiable functions with (non)separable decision variables. The test functions with 2 variables are depicted in Figure 4. As shown in the figure, the functions are clustered in such a way that in each group specific test algorithm(s) outperform the others. The functions and their details are summarized in Table 1. The optimum cost value for all problems is zero, except for three functions whose minimum cost values depend on the dimension of the problem. Moreover, all functions were tested in both 10 and 100 dimensions, except for the last three, which are only optimized in 40, 80, and 10 dimensions, respectively.
Function             Formulation   Domain
Cigar                –             –
Rosenbrock           –             –
Easom                –             –
Griewank             –             –
Levy                 –             –
Stochastic           –             –
Weierstrass          –             –
Shubert 3            –             –
Vincent              –             –
Modified Schwefel    –             –
Schaffer 7           –             –
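For reference, the standard textbook forms of a few of the listed benchmarks can be implemented as follows (a sketch; the exact formulations, shifts, and domains used in Table 1 may differ):

```python
import numpy as np

def cigar(x):
    # Bent cigar: unimodal and ill-conditioned; f(0) = 0.
    x = np.asarray(x, float)
    return x[0] ** 2 + 1e6 * np.sum(x[1:] ** 2)

def rosenbrock(x):
    # Classic valley-shaped function; global minimum at x = (1, ..., 1).
    x = np.asarray(x, float)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def griewank(x):
    # Multimodal with regularly spaced local minima; f(0) = 0.
    x = np.asarray(x, float)
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))
```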
Two parameters of the proposed algorithm were adjusted for each problem to obtain fast and smooth convergence. The adjusted parameter values were kept fixed for both the small and large dimensions of each problem. The parameters of the DE algorithm were also carefully tuned for a fair comparison with the proposed algorithm. The other algorithms were implemented either in their original parameter-free version (TLBO) or with the recommended relations for their parameters (GWO). Most variants of the DE algorithm were developed with the aim of getting rid of the parameter-tuning task when facing different problems. Generally, for a specific application, tuning an original variant (popularly DE/rand/1/bin) is preferred; an overview of the literature on applications of the DE algorithm supports this statement. However, the literature lacks comparisons between a tuned DE algorithm and its variants. Here, one of the popular variants of DE, namely CoDE, was included in the simulations to examine the reason behind the popularity of the original DE variant for a specific application/problem. On the other hand, most variants of PSO try to avoid being trapped in local optima; CLPSO, as a well-known variant, was also used in the comparisons. Available source codes of the comparative algorithms were utilized in the simulations [29],[30],[31],[32].
Table 2 summarizes the parameter values of the IS and DE algorithms. As can be inferred, an appropriate value for the second IS parameter is usually 2; this value is also an effective choice for the sparse reconstruction problem. Larger integers were used for two of the functions in order to increase stability, i.e., to reduce sensitivity to the initial solutions, and a smaller integer (1) was applied to one function in order to obtain a special exploitation property. In addition, for most of the problems, an effective value for the number of speculative thinkers is larger than half of the population size. As shown in the table, the integers assigned to the number of speculative thinkers are almost all multiples of 5; hence the sensitivity to this parameter is low, and fine adjustment is not required. An appropriate integer for it can easily be found by trying a limited number of possibilities while the second parameter is fixed at 2. After this adjustment, the second parameter can be increased if a stability issue appears across different runs, or decreased if a specific exploitation property is desired.
All algorithms were initialized with the same initial solutions, and all were stopped after a predetermined number of function evaluations (nfe). The initial solutions were generated randomly, uniformly distributed within the predefined domains. The domains and the nfe for each scale of the problems are listed in Table 1 and Table 3, respectively. Figure 5 and Figure 6 show the convergence curves in 10 and 100 dimensions, respectively; the convergence curves of the last three benchmark functions are also included in Figure 6. The curves were obtained by averaging over 30 independent runs for each problem. The mean and standard deviation of the best cost value at the final iteration of each problem are reported in Table 3; mean values smaller than a small threshold are shown as zero. The clustering of the benchmark functions was based on the number of solutions competitive with the global optimum (equivalently, the number of global minima), and on the regularity of the placement of the local minima. This division leads to a rough conclusion about the functions on which the proposed algorithm can be expected to perform better.
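The averaging used for the convergence curves can be sketched as follows (illustrative; `optimize_once` is a toy stand-in for one run of any tested algorithm, not the actual implementation):

```python
import numpy as np

def average_convergence(optimize_once, n_runs=30, seed=0):
    # Each run returns the best-so-far cost trace; the reported curve is
    # the element-wise mean over independent runs.
    rng = np.random.default_rng(seed)
    curves = [optimize_once(rng) for _ in range(n_runs)]
    return np.mean(np.stack(curves), axis=0)

def optimize_once(rng):
    # Toy stand-in: a decaying best-so-far cost with run-to-run noise.
    noise = np.abs(rng.normal(0.0, 0.01, size=100))
    return np.minimum.accumulate(np.exp(-0.1 * np.arange(100)) + noise)

curve = average_convergence(optimize_once)
```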
DE (CR)   DE (F)   IS (no. of speculative thinkers)   IS (second parameter)
0.2       0.3      25                                 2
0.7       0.6      25                                 4
0         0.5      39                                 1
0.1       0.3      30                                 2
0.1       0.4      35                                 12
0.5       0.3      15                                 2
0.2       0.2      30                                 2
0         0.4      25                                 2
0.3       0.2      15                                 2
0.01      0.9      35                                 2
0.01      0.9      35                                 2
0.1       1.2      35                                 2
Cluster 1 consists of prototype examples: a unimodal function without competitive solutions, a multimodal function with regularly placed non-competitive local minima, and a multimodal function with negligible irregularity in the placement of its local minima but still without serious competitive solutions. Finally, the most challenging problem in this cluster has a similar structure to the previous one, but with competitive solutions located a short distance from the global optimum. In general, the IS algorithm successfully optimizes the functions in this cluster; however, there are some failures in solving the small-scale version of one of the functions, and in discovering the global optimum of another (see Figure 5(d) and Figure 6(j)).
According to Table 3, the large standard deviation of the IS algorithm on the 10-dimensional version of one of these functions indicates unstable optimization. As shown in Table 4, the number of successful optimizations by IS for this problem is 24 out of 30 trials, the highest success rate after the 27 exact approximations of the DE algorithm. It is worth mentioning that a similar structure (regular positioning of the local minima such that there are explicit routes toward the global optimum) exists in the Rastrigin and Ackley functions [28]. According to our observations, the IS algorithm was also unstable in low dimensions of these problems and performed poorly in large dimensions, despite stably optimizing the large-dimensional version of the function above. On the other hand, the small standard deviation of the proposed algorithm on another problem implicitly indicates that IS always discovers a competitive local optimum of this function; according to our observations, by shrinking the domain of the decision variables, the global optimum becomes reachable by IS. Roughly speaking, GWO, along with TLBO in most cases, are the best algorithms for functions with properties similar to those of cluster 1.
D   | nfe    | DE                     | CoDE                | CLPSO                  | TLBO               | GWO                | IS
10  | 20000  | 2.10e-20 (2.41e-20)    | 19.815 (15.322)     | 217.029 (197.997)      | 0 (0)              | 0 (0)              | 0 (1.94e-30)
100 | 120000 | 7646.5 (32529)         | 530.755 (228.913)   | 3.3832e+6 (5.3458e+5)  | 0 (0)              | 0 (0)              | 3.491e-12 (6.383e-12)

10  | 1.6e+6 | 0 (0)                  | 0 (0)               | 0.226 (0.335)          | 7.81e-16 (4.27e-15)| 6.085 (0.783)      | 0.531 (1.378)
100 | 6e+6   | 0.1329 (0.7279)        | 0.5315 (1.5219)     | 96.241 (22.401)        | 21.293 (7.578)     | 96.645 (1.126)     | 0.5316 (1.378)

10  | 20000  | 0.6655 (3.6449)        | 18.872 (0.972)      | 18.891 (1.976)         | 20.00 (1.74e-8)    | 20.221 (0.105)     | 1.35e-5 (1.14e-5)
100 | 240000 | 9.684 (8.755)          | 20.283 (0.0253)     | 20.181 (0.0186)        | 20.00 (9.242e-10)  | 21.205 (0.0273)    | 0.1003 (0.3064)

10  | 40000  | 5.67e-4 (1.9e-3)       | 0.0671 (0.0158)     | 2.1e-3 (2.7e-3)        | 7.6e-3 (0.0101)    | 0.0163 (0.0196)    | 0.0164 (0.0390)
100 | 120000 | 1.198e-10 (6.078e-10)  | 5.598e-4 (2.1e-3)   | 0.9072 (0.0506)        | 0 (0)              | 0 (0)              | 1.505e-13 (1.321e-13)

10  | 20000  | 1.12e-25 (9.92e-26)    | 2.49e-6 (2.04e-6)   | 3.64e-6 (2.78e-6)      | 0.0149 (0.0413)    | 0.1129 (0.0965)    | 1.27e-23 (3.17e-23)
100 | 120000 | 7.933e-17 (2.206e-17)  | 0.1632 (0.3394)     | 0.0392 (8.6e-3)        | 3.9043 (0.4688)    | 6.2476 (0.4753)    | 1.683e-11 (3.375e-11)

10  | 20000  | 6.38e-7 (7.70e-7)      | 0.6115 (0.2452)     | 0.4328 (0.1118)        | 2.0e-3 (7.5e-3)    | 0.0870 (0.0904)    | 8.82e-18 (4.82e-17)
100 | 240000 | 0.0147 (0.0474)        | 3.458 (1.2254)      | 31.997 (2.135)         | 1.2187 (0.0915)    | 1.0726 (0.1249)    | 4.426e-7 (3.541e-7)

10  | 20000  | 179.9999 (0)           | 180.0485 (9.9e-3)   | 180.0180 (3.8e-3)      | 179.9999 (0)       | 179.9999 (0)       | 179.9999 (0)
100 | 40000  | 19800 (0.1684)         | 19822 (1.634)       | 19843 (1.9189)         | 19800 (0)          | 19800 (3.510e-12)  | 19801 (0.2165)

10  | 40000  | 148.379 (5.01e-28)     | 148.175 (0.0897)    | 148.227 (0.0896)       | 130.486 (11.9264)  | 127.157 (12.123)   | 147.293 (4.2954)
100 | 20000  | 1483.8 (3.856e-7)      | 878.006 (30.891)    | 1392.4 (12.227)        | 645.545 (101.377)  | 791.886 (53.282)   | 1423.3 (35.401)

10  | 20000  | 10 (6.32e-12)          | 9.9503 (0.0172)     | 9.9964 (2.9e-3)        | 9.9732 (0.0710)    | 9.6482 (0.7657)    | 10 (0)
100 | 120000 | 99.9838 (0.0219)       | 76.806 (1.5443)     | 97.8412 (0.2770)       | 98.1905 (2.6162)   | 99.9783 (4.6e-3)   | 100 (2.39e-10)

40  | 240000 | 4.2e-3 (7.9e-3)        | 49.6217 (12.6485)   | 0.3765 (0.0681)        | 135.595 (412.933)  | 5.09e-4 (1.56e-12) | 0.3305 (0.0539)

80  | 240000 | 0.0151 (9.7e-3)        | 2537.6 (484.028)    | 7.2362 (1.5464)        | 12268 (1536.2)     | 17738 (1422.4)     | 31.2504 (59.4805)

10  | 120000 | 1.3534e-4 (1.275e-4)   | 4.166e-4 (3.857e-4) | 8.108e-5 (9.950e-5)    | 0.0195 (0.0223)    | 5.7e-3 (0.0127)    | 1.525e-8 (5.698e-8)
D   | Threshold | DE | CoDE | CLPSO | TLBO | GWO | IS
100 | 1e-3      | 21 | 0    | 0     | 30   | 30  | 30
10  | 1e-3      | 30 | 30   | 0     | 30   | 0   | 26
100 | 1e-3      | 29 | 24   | 0     | 0    | 0   | 26
10  | 1e-3      | 29 | 0    | 0     | 0    | 0   | 30
100 | 1e-3      | 13 | 0    | 0     | 0    | 0   | 27
10  | 1e-3      | 27 | 0    | 15    | 16   | 13  | 24
10  | 1e-3      | 23 | 0    | 0     | 0    | 0   | 30
80  | 1         | 30 | 0    | 0     | 0    | 0   | 23
For the functions in cluster 2, the competitive optimal solutions lie at relatively large distances from each other, or the local minima are arranged in irregular positions. For instance, one function in this cluster, a shifted-variable version of another, has local minima distributed over different positions of its domain. On these functions, in both small and large scales, the proposed algorithm competes with the DE algorithm, the most appropriate algorithm for this cluster. As mentioned, for one function the proposed algorithm was successfully stabilized by increasing the second parameter to 12. However, according to our observations, this increase did not lead to completely stable optimization for another function: in both small and large scales, there are 4 failures out of 30 runs (see Table 4). Hence, an increase in the population size is necessary for its stable optimization. Similar to the earlier results, the IS algorithm had no success in finding the global minimum of one of the functions; however, as indicated in Table 4, unlike GWO and TLBO, it shows some stability in finding one of the highly competitive solutions of this problem, i.e., 23 successes for IS against zero for GWO and TLBO.
Finally, the unique feature of the functions in cluster 3, compared with the previous ones, is the existence of numerous competitive solutions. For example, in the two-variable case of one of these functions the number of competitive solutions is 36, while it was at most 9 among the functions of cluster 2. This number is larger still for another function, and becomes infinite for two others. Both of the latter have a single global optimum at the origin. In one of them, flat rings around the origin, with optimal cost values approximately equal to that of the origin, contain an infinite number of competitive solutions; in the other, a flat region contains the only competitive solutions, likewise infinite in number. As the results in Figure 5 and Figure 6 indicate, the proposed algorithm has the best performance on the functions of cluster 3, in both small and large scales. In this cluster, the algorithm most competitive with the proposed IS algorithm is DE.
Although in the small scale of one problem DE was beaten by IS by only a single failure (see Table 4), in 100 dimensions there is a remarkable difference between the number of successful optimizations of IS (27) and of DE (13). DE shows a similar instability on another function in large scale: it has 7 unsuccessful optimizations according to Table 4, while the IS algorithm is completely successful. The Xin-She Yang 3 function [28] has a structure similar to one of these functions; on it, the IS algorithm also outperforms the other test algorithms according to our observations (results omitted for brevity). On the other hand, the IS algorithm discovers the exact optimum of one function with the minimum number of function evaluations (see Figure 6(i)). Last but not least, the proposed algorithm is the only one that approaches the exact optimum solution of another function: although all comparative algorithms reach optimal cost values close to zero (see Table 3), their solutions lie far from the global optimum located at the origin. Figure 6(l) compares the MSE of the estimated solutions for the minimization of this function.

Figure 7 compares the average runtime of the proposed algorithm with that of the other test algorithms. In this figure, the benchmark problems are sorted according to the runtime required by the IS algorithm. As shown, the IS algorithm has the smallest runtime on all benchmark functions except two. To compare the complexity of the operations used in each algorithm fairly, without taking the complexity of the function evaluations into account, we concentrate on the simplest function. The results for it indicate that the operators of TLBO are 2.0 times more complex than those of the IS algorithm.
Among the test algorithms, TLBO has the simplest operators after IS, while CLPSO, the most complex one, has more than 5.3 times the complexity. In between, the complexity of DE is exactly 3 times that of the proposed algorithm.
4.2 Sparse Reconstruction
Sparse reconstruction generally refers to solving an underdetermined system of linear equations with prior knowledge that the solution is sparse. It has many applications, such as signal compression [33], channel estimation [34], adaptive identification [35], and spectrum sensing [36], most of which were developed after the advent of compressed sensing theory. According to compressed sensing [37], a sparse vector x can be recovered from linear measurements y = Ax + n, where the number of measurements M is less than the original dimension N of the sparse vector, i.e., M < N. A vector is called sparse if the number K of its nonzero elements satisfies K ≪ N. In this linear model, the matrix A is known as the measurement matrix and the vector n is the measurement noise, usually modeled as i.i.d. zero-mean Gaussian with a specified variance. A condition for reliable recovery is that the measurement matrix A holds a degree of randomness; this is satisfied for Gaussian and binary random measurements under a predetermined number of measurements [37, 38].

Reconstruction of the sparse vector x from y and A is an optimization problem, and various optimization models and algorithms have been developed over the past decade. Recently, some metaheuristic approaches were proposed for optimizing sparse reconstruction models. The main advantage of metaheuristic approaches is their independence from the properties of the functions used in the optimization model. For example, a preferred model for sparse reconstruction is minimization of the number of nonzero elements (the ℓ0 norm) of the solution vector x. Metaheuristic approaches can easily optimize such non-differentiable, discontinuous functions; in other approaches the function must be approximated, e.g., by the SL0 algorithm in [39]. In [40],[41], the genetic algorithm was combined with clonal selection and simulated annealing, respectively, to solve the non-convex ℓ0-minimization problem. Furthermore, another evolutionary algorithm, based on a soft-threshold method, was developed earlier for the same model [42]. Here, we aim to optimize the following single-objective function based on the ℓ0 norm:
minimize over x:   f(x) = λ‖x‖₀ + ‖y − Ax‖₂²        (9)

where ‖x‖₀ is the non-convex ℓ0 norm, used as a regularization term to promote sparsity; ‖y − Ax‖₂² is the squared Euclidean norm, modeled to minimize the Gaussian measurement errors, so that its minimization leads to fidelity of the discovered solution to the measurements y; and the constant λ is the regularization coefficient that balances sparsity against fidelity.
Setting a proper value for λ is essential for accurate reconstruction. Indeed, finding an appropriate value for λ in model (9) is a tedious task. An advanced approach is to separate the model into two objective functions and use a multi-objective method to find a good balance between the sparsity-inducing and fidelity functions [42]. In this paper, for the sake of simplicity, we target model (9) directly. We found that a valid value for λ is easily reachable by normalizing the fidelity term to unit power; the corresponding value was set to 0.9. In the following, we first demonstrate the efficiency of the proposed algorithm relative to conventional evolutionary algorithms, and then highlight its possible advantage over state-of-the-art sparse reconstruction algorithms.
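For illustration, the cost in (9) can be evaluated as follows (a sketch assuming the reconstructed placement of λ on the sparsity term; the tolerance used for counting nonzeros is an assumption):

```python
import numpy as np

def l0_cost(x, A, y, lam, tol=1e-8):
    # Eq. (9): lam * ||x||_0 + ||y - A x||_2^2.
    x = np.asarray(x, float)
    sparsity = np.count_nonzero(np.abs(x) > tol)   # l0 "norm"
    residual = y - A @ x                           # measurement misfit
    return lam * sparsity + float(residual @ residual)
```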
The first experiment was conducted in two scenarios, both in the case of noiseless measurements, with fixed numbers of decision variables and measurements. In the first scenario, the nonzero elements of the sparse vector x were selected at random and drawn from an i.i.d. Gaussian distribution with zero mean and unit variance; the measurement matrix A was a zero-mean Gaussian random matrix with i.i.d. elements and normalized columns. In the second scenario, the desired sparse vector was binary, with its nonzero unit elements distributed at random among the variables of x; here the measurement matrix was also binary, with equal probability 0.5 for the values 0 and 1. The regularization coefficient λ was adjusted to 0.1 and 1 for the first and second scenarios, respectively. In both the Gaussian and binary scenarios, the parameters of DE, PSOcc, and IS were kept fixed, and the number of particles was fixed at 40 for all algorithms.

         | DE     | CoDE  | PSOcc | CLPSO | GWO   | TLBO  | IS
Gaussian | 1.3e-3 | 0.132 | 0.574 | 0.609 | 0.397 | 0.676 | 1.2e-3
runtime  | 25438  | 22142 | 27387 | 33760 | 28794 | 22312 | 19072
Binary   | 0.010  | 0.304 | 0.780 | 0.825 | 1.048 | 0.691 | 9.10e-4
runtime  | 25402  | 26689 | 31280 | 37916 | 28954 | 22408 | 23203
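The two measurement scenarios above can be instantiated as follows (a sketch; the instance sizes N, M, and K here are illustrative, not the paper's values):

```python
import numpy as np

N, M, K = 100, 50, 10
rng = np.random.default_rng(1)

# Scenario 1: Gaussian sparse vector, Gaussian measurement matrix with
# normalized columns.
x_gauss = np.zeros(N)
x_gauss[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
A_gauss = rng.standard_normal((M, N))
A_gauss /= np.linalg.norm(A_gauss, axis=0)        # normalize each column

# Scenario 2: binary sparse vector, binary measurement matrix (0/1, p = 0.5).
x_bin = np.zeros(N)
x_bin[rng.choice(N, size=K, replace=False)] = 1.0
A_bin = rng.integers(0, 2, size=(M, N)).astype(float)

# Noiseless measurements for both scenarios.
y_gauss, y_bin = A_gauss @ x_gauss, A_bin @ x_bin
```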
Figure 8 shows the convergence curves of all test algorithms, averaged over 100 trials with a different sparse vector and measurement matrix in each trial. As shown, for the binary scenario the proposed IS algorithm converges to the lowest cost value after 160000 function evaluations, while, except for DE, the other algorithms are either trapped in a local optimum (PSOcc, GWO, and TLBO) or converge slowly (CoDE and CLPSO). As can be inferred from Figure 8, in the Gaussian scenario this number of function evaluations was enough for DE to reach the same cost value as IS; the convergence curves of the other algorithms are omitted for this scenario because of their poor performance, similar to the binary case. Table 5 summarizes the distortion from the exact optimal solution in both scenarios, together with the runtimes. As expected, in the Gaussian scenario the distortions of DE and IS are approximately the same, while IS has a significantly lower distortion in the binary case. Moreover, in both scenarios IS has a smaller runtime than the competitive DE algorithm.
In the second experiment, the IS algorithm was compared with well-known sparse reconstruction algorithms: IHT [43] and OMP [44] as greedy approaches; ℓ1-magic as a conventional interior-point-based optimization method [45]; a Bayesian method with Laplace priors (LBCS) [46]; SL0, which smoothly approximates the ℓ0 norm; and an algorithm that minimizes an approximated version of the non-convex ℓ0 function [47]. All settings were the same as in the previous experiment: the problem dimension was unchanged, the population-based algorithms used the same settings, and the parameters of the IS algorithm were left unchanged. Unlike the previous experiment, the measurements were contaminated by noise, with the noise variance set separately for the Gaussian and binary scenarios. The NMSE curves for different numbers of nonzero elements are plotted in Figure 9. As depicted in Figure 9(a), for the case of a Gaussian sparse vector with Gaussian measurements, the proposed algorithm outperforms all other algorithms except the greedy ones over a range of sparsity levels, namely fewer than 15 nonzero elements. The better performance of the greedy approaches comes at the expense of prior knowledge of the number of nonzero elements, which is rarely available in practice.
On the other hand, as depicted in Figure 9(b) for the binary scenario, the IS algorithm performs better than all other algorithms when the optimal sparse solution has more than 2 and fewer than 20 nonzero elements. Unlike the Gaussian scenario, for binary signals OMP and IHT have unstable performance, since identifying nonzero elements with identical values is generally hard for greedy approaches. The price of such performance for an evolutionary algorithm is a large runtime, several orders of magnitude more than that of the dedicated sparse reconstruction algorithms; the worst runtime among the dedicated algorithms was approximately 0.7 second, for the algorithm of [47]. Parallel computing is the main technique for reducing runtime, though achieving real-time implementations of population-based approaches remains an open problem [48].
4.3 Antenna Selection in Massive MIMO
The multi-user multiple-input multiple-output (MU-MIMO) communication system is a fundamental technology in wireless networks. The system is modeled by the linear equations y = Hx + n, where H is the channel matrix between the antennas at the base station (BS) and the single-antenna mobile stations (MSs). It contains large-scale fading coefficients (path loss and shadowing) as the elements of a diagonal matrix D, and small-scale fading coefficients modeled by a matrix G with i.i.d. complex Gaussian random variables of zero mean and unit variance. The elements of the vector x are the samples transmitted from each BS antenna (on the downlink), y consists of the samples received at the MSs, and n is the noise, modeled at the receiver side [49]. Recent developments in MU-MIMO are based on a large number of BS antennas supporting several MSs. As the number of BS antennas scales up, the interference among MSs vanishes, and simple linear precoding/combining methods provide near-capacity performance [50]. However, deploying a large number of radio frequency (RF) chains, one per antenna, leads to high cost and energy consumption. Antenna selection is a way to tackle this issue [51].
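A sketch of generating such a channel (the symbols D and G follow the model above; the large-scale coefficients and sizes used here are illustrative):

```python
import numpy as np

def mumimo_channel(n_bs, n_ms, rng):
    # Small-scale fading G: i.i.d. complex Gaussian, zero mean, unit variance.
    G = (rng.standard_normal((n_ms, n_bs)) +
         1j * rng.standard_normal((n_ms, n_bs))) / np.sqrt(2.0)
    # Large-scale fading: one path-loss/shadowing coefficient per MS,
    # applied through a diagonal matrix D (values here are placeholders).
    d = rng.uniform(0.1, 1.0, size=n_ms)
    return np.diag(np.sqrt(d)) @ G

rng = np.random.default_rng(0)
H = mumimo_channel(128, 16, rng)   # 16 MSs, 128 BS antennas
```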
The number of RF chains can be reduced by selecting an optimal subset of antennas and deactivating the rest. Two well-known selection criteria are maximization of the achievable sum throughput and maximization of the minimum singular value of the selected channel matrix. According to [52], when a zero-forcing (ZF) precoder is used, the minimum received signal-to-noise ratio (SNR) among the MSs is lower-bounded by a scaled version of the squared minimum singular value (MSV) of the channels selected for transmission. Maximizing this quantity maximizes the minimum received SNR among the MSs, or equivalently minimizes the bit error rate; hence, in network terminology, a minimum quality of service (QoS) can be guaranteed by the MSV criterion. The fitness function for MSV maximization is formulated as the following combinatorial model:

maximize over x:   σ_min(HX)   subject to   ‖x‖₀ = K,   x ∈ {0, 1}^N        (10)
where the binary decision variables of the vector x form the elements of the diagonal matrix X, i.e., 1 for selected antennas and 0 for deactivated ones. The operator σ_min(·) returns the minimum singular value of the selected channels, and the cardinality of x is constrained to the predefined number of active antennas, K. Without any manipulation of the algorithm, the objective was negated for the purpose of maximization. In addition, the algorithm was implemented in the continuous domain: initial solutions were generated randomly, with uniform distribution over a predefined range, and the only modification was the mapping of the continuous decision variables to binary digits before the function evaluation in step 3 of Algorithm 1. The mapping was accomplished simply by setting the K largest decision variables to 1 (indicating the selected antennas) and the remaining variables to 0.
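The fitness evaluation and the continuous-to-binary mapping described above can be sketched as follows (assuming the negated-MSV convention so that a minimization routine maximizes the criterion of (10)):

```python
import numpy as np

def map_to_binary(x_cont, k):
    # Keep the k largest continuous decision variables as selected antennas.
    x_bin = np.zeros_like(x_cont)
    x_bin[np.argsort(x_cont)[-k:]] = 1.0
    return x_bin

def msv_fitness(x_cont, H, k):
    # Negated minimum singular value of the selected columns of H.
    x_bin = map_to_binary(np.asarray(x_cont, float), k)
    H_sel = H[:, x_bin.astype(bool)]       # columns of the active antennas
    sigma = np.linalg.svd(H_sel, compute_uv=False)
    return -float(sigma.min())
```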
Maximization of the MSV also has applications in other engineering problems [53]. In the context of wireless communication, the speed of optimization is the main challenge, especially for large-scale antenna arrays. A branch-and-bound (BAB) method was developed for MSV maximization in [53]; BAB-based algorithms approach the exact optimum solution with reduced complexity compared to exhaustive search (ES). The BAB algorithm of [53] was applied to large-scale antenna selection in [54]; according to those results, the algorithm provides a four-order-of-magnitude reduction in the required number of function evaluations compared with ES. However, this reduction is still not sufficient at large scales. In addition, that algorithm is developed for selecting a square channel matrix; in other words, the number of selected antennas must always equal the number of MSs. Metaheuristic algorithms are an efficient alternative for approximating the optimal solution with significantly less complexity, as well as flexibility in the number of selected antennas.
The performance of the test algorithms was not competitive with the proposed algorithm, except for the DE algorithm. We therefore focus on comparing the results with DE and with a recently developed binary optimizer for large-scale antenna selection based on the genetic algorithm (GA) [55]. In all simulations, the parameters of the IS and DE algorithms were kept fixed, both with a population size of 40. According to our observations, the parameter values used in the GA-based algorithm for maximizing the sum throughput [55] were also efficient for MSV maximization; hence its population size was set to 10, 5 of which serve as mutation vectors on the best solution. The number of BS antennas was fixed at 128, with a fixed number of single-antenna MSs. The results were averaged over 400 channel realizations, with the channel vector of each MS normalized [51],[56]. In each trial, the MSs were uniformly distributed over a hexagonal cell of radius 500 m around the BS. The path-loss exponent was set to 3.8, and the mean of the shadowing attenuation, modeled by a log-normal distribution, was fixed at 8 dB.
Figure 10 shows the convergence curves for the cases of i.i.d. and correlated channels. Correlation among the BS antennas changes the i.i.d. small-scale fading channel G to the correlated channel GR^{1/2}, where R is the exponential correlation matrix, defined by r^{|i−j|} for its (i, j)-th element. In this experiment, the correlation coefficient and the number of selected antennas were fixed. As shown in the figure, the IS algorithm performs slightly better than DE and outperforms the GA-based algorithm for both i.i.d. and correlated channels. It is worth mentioning that, with these system parameters, exhaustive search would require a number of function evaluations equal to the number of possible antenna combinations, while the proposed algorithm converges in far fewer; this corresponds to a 24-order-of-magnitude reduction in complexity, far greater than the four-order reduction of the bidirectional BAB algorithm [54]. The experiment with the i.i.d. channel was repeated with imperfect channel state information (CSI). Imperfect CSI is modeled by combining the measured available information with an unknown error having the same i.i.d. complex Gaussian distribution as G; a scalar determines the accuracy of the available CSI. Similar to the results in [55], in the case of MSV maximization the performance of the antenna selection algorithms also degrades significantly as the quality of the CSI decreases. For instance, at one accuracy level, the minimum singular values of the channels selected by the IS, DE, and GA-based algorithms reduce to 0.345, 0.347, and 0.326, respectively. These results indicate a similar sensitivity of the IS and DE algorithms to this deficiency.
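The exponential correlation model can be sketched as follows (a minimal implementation of R with r^{|i−j|} entries and the GR^{1/2} transform; a real-valued correlation coefficient is assumed):

```python
import numpy as np

def exponential_correlation(n, r):
    # Exponential correlation matrix: R[i, j] = r ** |i - j|.
    idx = np.arange(n)
    return r ** np.abs(idx[:, None] - idx[None, :])

def correlate_channel(G, r):
    # Transform an i.i.d. small-scale channel G into G @ R^{1/2}.
    R = exponential_correlation(G.shape[1], r)
    w, V = np.linalg.eigh(R)               # R is symmetric positive definite
    R_half = V @ np.diag(np.sqrt(np.maximum(w, 0.0))) @ V.T
    return G @ R_half
```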
Finally, with the same parameters, the algorithms were evaluated for different numbers of selected antennas. Figure 11(a) shows the achieved MSVs at four levels of active antennas, i.e., 15, 30, 45, and 60. All algorithms were stopped after 120000 function evaluations; the channels were uncorrelated, with perfect CSI available. The approximately identical performance of the IS and DE algorithms indicates a similar sensitivity to their parameters. The results were also compared with the MSV of the full array without antenna selection: the MSVs obtained by IS and DE for the optimal subset of 60 antennas reach a substantial fraction of the MSV obtained with full use of all 128 antennas. The main advantage of the proposed algorithm is its computational speed. Figure 11(b) illustrates the average runtime for different numbers of active antennas; as this number increases, the runtimes of the DE and GA-based algorithms grow at approximately the same rate, while the rate for the IS algorithm is roughly half of theirs. Note that for the smaller subsets, the runtime can be reduced by decreasing the nfe without significant loss in performance. Moreover, for smaller numbers of BS antennas, the running times are significantly lower, with relative performance similar to that illustrated for the 128-antenna array. For example, in one configuration with fewer BS antennas, MSs, and active antennas, the nfe required for convergence is 40000, which takes on average 9.2 and 5.8 seconds of running time for the DE and IS algorithms, respectively.
5 Discussion
The grouping method and update rule have a pivotal influence on the functioning of metaheuristic algorithms. As shown by the numerical results on the benchmark problems, the proposed dialectical grouping performs notably well on functions with a large number of competitive solutions. However, finding a common feature of all problems on which a metaheuristic algorithm has superior performance is genuinely hard. The proposed interactions lead to a simple and delicate step-size mechanism, and numerous experiments confirm the convergence of the IS algorithm under it. A mathematical proof of the optimality of these step sizes is challenging; in general, the analysis of metaheuristic algorithms is difficult because of the various sources of random operations. However, we are hopeful that our deterministic interactions for speculation (a counterpart of the conventional mutation operator) will simplify the analysis of the IS algorithm. The two parameters of the IS algorithm were easily tuned because they are integers and only a small number of pairs influence performance. Nevertheless, an adaptive scheme for adjusting the parameters during the optimization process, and its optimality, remains an open problem. As future research directions, extension of the algorithm to multi-objective scenarios and applications to other engineering and scientific problems are of interest.
6 Conclusion
The philosophical paradigm of thesis-antithesis-synthesis in dialectical thinking modes promises an efficient search approach. Inspired by the speculative and practical thinking modes, we developed a new population-based optimization approach. Speculative thinking, assigned to high-quality solutions, was modeled in a way that boosts the exploration capability of the proposed algorithm: each particle/thinker looks for another member of the community whose solution (thesis) lies at the largest distance but has similar quality (an idealistic antithesis). In contrast, practical thinking, assigned to low-quality solutions, exploits the efficiency of the best solution or its idealistic antithesis by selecting whichever lies at the smaller distance (a materialistic antithesis). The detected antitheses were used as reference points for the reformation (update) of the solutions/theses. Uniformly distributed step sizes with a negligible bias toward the antithesis were utilized for explorative speculations, and a biased Gaussian distribution was used for the step sizes of exploitative practices. The results indicate the efficiency of the proposed optimization scheme with low-complexity operators.
References
 [1] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and Algorithms. Hoboken, NJ: John Wiley & Sons, 2005.
 [2] M. H. Immordino-Yang, J. A. Christodoulou, and V. Singh, “Rest is not idleness: implications of the brain’s default mode for human development and education,” Perspectives on Psychological Science, vol. 7, no. 4, pp. 352–364, 2012.
 [3] X. Yang, “On the great transformation of the dialectic theory: from the transcendence of the practical mode of thinking over the speculative mode of thinking,” Cross-Cultural Communication, vol. 10, no. 4, pp. 58–62, 2014.
 [4] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, Nov 1995.
 [5] J. H. Holland, Adaptation in Natural and Artificial Systems. Cambridge, MA, USA: MIT Press, 1992.
 [6] K. Veeramachaneni, T. Peram, C. Mohan, and L. A. Osadciw, Optimization Using Particle Swarms with Near Neighbor Interactions, pp. 110–121. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003.
 [7] E. Atashpaz-Gargari and C. Lucas, “Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition,” in IEEE Congress on Evolutionary Computation, pp. 4661–4667, Sept 2007.
 [8] F. Luo, J. Zhao, and Z. Y. Dong, “A new metaheuristic algorithm for real-parameter optimization: Natural aggregation algorithm,” in 2016 IEEE Congress on Evolutionary Computation (CEC), pp. 94–103, July 2016.
 [9] R. Storn and K. Price, “Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
 [10] Y. Shi, “Brain storm optimization algorithm in objective space,” in 2015 IEEE Congress on Evolutionary Computation (CEC), pp. 1227–1234, May 2015.
 [11] R. Rao, V. Savsani, and D. Vakharia, “Teaching-learning-based optimization: An optimization method for continuous nonlinear large scale problems,” Information Sciences, vol. 183, no. 1, pp. 1–15, 2012.
 [12] S. Rahnamayan, H. Tizhoosh, and M. Salama, “Opposition-based differential evolution,” IEEE Transactions on Evolutionary Computation, vol. 12, pp. 64–79, Feb 2008.
 [13] S. Dubois, T. Eltzer, and R. D. Guio, “A dialectical based model coherent with inventive and optimization problems,” Computers in Industry, vol. 60, no. 8, pp. 575–583, 2009.
 [14] S. Kadioglu and M. Sellmann, “Dialectic search,” in Principles and Practice of Constraint Programming – CP 2009, 15th International Conference, CP 2009, Lisbon, Portugal, September 20–24, 2009, Proceedings, pp. 486–500, 2009.
 [15] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671–680, 1983.
 [16] F. Glover, “Tabu search — part I,” ORSA Journal on Computing, vol. 1, no. 3, pp. 190–206, 1989.
 [17] www.dictionary.com.
 [18] J. E. Maybee, “Hegel’s dialectics,” in The Stanford Encyclopedia of Philosophy (E. N. Zalta, ed.), summer 2016 ed., 2016.
 [19] J. Wolff, “Karl Marx,” in The Stanford Encyclopedia of Philosophy (E. N. Zalta, ed.), winter 2015 ed., 2015.
 [20] The Editors of Encyclopaedia Britannica, “Industrial revolution,” in Encyclopaedia Britannica, winter 2016 ed., 2016.
 [21] S. Das, A. Abraham, U. K. Chakraborty, and A. Konar, “Differential evolution using a neighborhood-based mutation operator,” IEEE Transactions on Evolutionary Computation, vol. 13, pp. 526–553, June 2009.
 [22] Y. Wang, Z. Cai, and Q. Zhang, “Differential evolution with composite trial vector generation strategies and control parameters,” IEEE Transactions on Evolutionary Computation, vol. 15, pp. 55–66, Feb 2011.
 [23] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 10, pp. 281–295, June 2006.
 [24] S. Mirjalili, S. M. Mirjalili, and A. Lewis, “Grey wolf optimizer,” Advances in Engineering Software, vol. 69, pp. 46–61, 2014.
 [25] M. Clerc and J. Kennedy, “The particle swarm – explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, pp. 58–73, Feb 2002.
 [26] S. H. Hosseini, T. Nouri, A. Ebrahimi, and S. A. Hosseini, “Simulated tornado optimization,” in 2016 2nd International Conference of Signal Processing and Intelligent Systems (ICSPIS), pp. 1–6, Dec 2016.
 [27] N. H. Awad, M. Z. Ali, J. J. Liang, B. Y. Qu, and P. N. Suganthan, “Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective RealParameter Numerical Optimization,” tech. rep., 2016.
 [28] M. Jamil and X.S. Yang, “A literature survey of benchmark functions for global optimisation problems,” Int. J. of Mathematical Modelling and Numerical Optimisation, vol. 4, no. 2, pp. 150–194, 2013.
 [29] http://yarpiz.com/category/metaheuristics. Source codes of the PSO, DE, and TLBO algorithms.
 [30] http://www.alimirjalili.com. Source code of the GWO algorithm.
 [31] http://dces.essex.ac.uk/staff/qzhang/. Source code of the CoDE algorithm.
 [32] http://www.ntu.edu.sg/home/epnsugan/. Source code of the CLPSO algorithm.
 [33] H. Mamaghanian, N. Khaled, D. Atienza, and P. Vandergheynst, “Compressed sensing for real-time energy-efficient ECG compression on wireless body sensor nodes,” IEEE Transactions on Biomedical Engineering, vol. 58, pp. 2456–2466, Sept 2011.
 [34] W. U. Bajwa, J. Haupt, A. M. Sayeed, and R. Nowak, “Compressed channel sensing: A new approach to estimating sparse multipath channels,” Proceedings of the IEEE, vol. 98, pp. 1058–1076, June 2010.
 [35] S. H. Hosseini, M. G. Shayesteh, and A. Ebrahimi, “Adaptive sparse system identification in compressed space,” Wireless Personal Communications, pp. 1–13, May 2017.
 [36] F. Zeng, C. Li, and Z. Tian, “Distributed compressive spectrum sensing in cooperative multihop cognitive networks,” IEEE Journal of Selected Topics in Signal Processing, vol. 5, pp. 37–48, Feb 2011.
 [37] D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, pp. 1289–1306, April 2006.
 [38] E. J. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on Information Theory, vol. 52, pp. 489–509, Feb 2006.
 [39] H. Mohimani, M. Babaie-Zadeh, and C. Jutten, “A fast approach for overcomplete sparse decomposition based on smoothed ℓ0 norm,” IEEE Transactions on Signal Processing, vol. 57, pp. 289–301, Jan 2009.
 [40] F. Liu, L. Lin, L. Jiao, L. Li, S. Yang, B. Hou, H. Ma, L. Yang, and J. Xu, “Nonconvex compressed sensing by nature-inspired optimization algorithms,” IEEE Transactions on Cybernetics, vol. 45, pp. 1042–1053, May 2015.
 [41] Q. Wang, D. Li, and Y. Shen, “Intelligent nonconvex compressive sensing using prior information for image reconstruction by sparse representation,” Neurocomputing, vol. 224, pp. 71–81, 2017.
 [42] L. Li, X. Yao, R. Stolkin, M. Gong, and S. He, “An evolutionary multiobjective approach to sparse reconstruction,” IEEE Transactions on Evolutionary Computation, vol. 18, pp. 827–845, Dec 2014.
 [43] T. Blumensath and M. E. Davies, “Iterative hard thresholding for compressed sensing,” Applied and computational harmonic analysis, vol. 27, no. 3, pp. 265–274, 2009.
 [44] J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Transactions on information theory, vol. 53, no. 12, pp. 4655–4666, 2007.
 [45] E. Candes and J. Romberg, “L1magic: Recovery of sparse signals via convex programming,” tech. rep., Oct. 2005. California Inst. Technol., Pasadena, CA.
 [46] S. D. Babacan, R. Molina, and A. K. Katsaggelos, “Bayesian compressive sensing using laplace priors,” IEEE Transactions on Image Processing, vol. 19, no. 1, pp. 53–63, 2010.
 [47] S. Foucart and M.-J. Lai, “Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1,” Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 395–407, 2009.
 [48] R. Storn, “Real-world applications in the communication industry – when do we resort to differential evolution?,” in 2017 IEEE Congress on Evolutionary Computation (CEC), pp. 765–772, June 2017.
 [49] D. Tse and P. Viswanath, Fundamentals of Wireless Communication. Cambridge, UK: Cambridge University Press, 2005.
 [50] E. G. Larsson, O. Edfors, F. Tufvesson, and T. L. Marzetta, “Massive MIMO for next generation wireless systems,” IEEE Communications Magazine, vol. 52, pp. 186–195, February 2014.
 [51] X. Gao, O. Edfors, F. Tufvesson, and E. G. Larsson, “Massive MIMO in real propagation environments: Do all antennas contribute equally?,” IEEE Transactions on Communications, vol. 63, pp. 3917–3928, Nov 2015.
 [52] R. W. Heath, S. Sandhu, and A. Paulraj, “Antenna selection for spatial multiplexing systems with linear receivers,” IEEE Communications Letters, vol. 5, pp. 142–144, April 2001.
 [53] Y. Cao and V. Kariwala, “Bidirectional branch and bound for controlled variable selection,” Computers & Chemical Engineering, vol. 32, no. 10, pp. 2306–2319, 2008.
 [54] Y. Gao, W. Jiang, and T. Kaiser, “Bidirectional branch and bound based antenna selection in massive MIMO systems,” in 2015 IEEE 26th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), pp. 563–568, Aug 2015.
 [55] B. Makki, A. Ide, T. Svensson, T. Eriksson, and M. S. Alouini, “A genetic algorithm-based antenna selection approach for large-but-finite MIMO networks,” IEEE Transactions on Vehicular Technology, vol. 66, pp. 6591–6595, July 2017.
 [56] X. Gao, O. Edfors, F. Rusek, and F. Tufvesson, “Massive MIMO performance evaluation based on measured propagation data,” IEEE Transactions on Wireless Communications, vol. 14, pp. 3899–3911, July 2015.