Please use this identifier to cite or link to this item:
http://elar.urfu.ru/handle/10995/141536
Title: | Chaotic opposition learning with mirror reflection and worst individual disturbance grey wolf optimizer for continuous global numerical optimization |
Authors: | Adegboye, O. R.; Feda, A. K.; Ojekemi, O. S.; Agyekum, E. B.; Hussien, A. G.; Kamel, S. |
Issue Date: | 2024 |
Publisher: | Nature Research |
Citation: | Adegboye, O. R., Feda, A. K., Ojekemi, O. S., Agyekum, E. B., Hussien, A. G., & Kamel, S. (2024). Chaotic opposition learning with mirror reflection and worst individual disturbance grey wolf optimizer for continuous global numerical optimization. Scientific Reports, 14(1), [4660]. https://doi.org/10.1038/s41598-024-55040-6 |
Abstract: | The grey wolf optimizer (GWO) is an effective meta-heuristic technique that has proven its proficiency. However, because it relies on the alpha wolf to guide the position updates of the search agents, the risk of being trapped in a local optimum is notable. Furthermore, during stagnation, the convergence of the other search wolves towards this alpha wolf results in a lack of diversity within the population. Hence, this research introduces an enhanced version of the GWO algorithm designed to tackle numerical optimization challenges. The enhanced GWO incorporates Chaotic Opposition Learning (COL), the Mirror Reflection Strategy (MRS), and Worst Individual Disturbance (WID), and is termed CMWGWO. MRS, in particular, allows certain wolves to extend their exploration range, thus enhancing the global search capability. COL intensifies diversification, reducing solution stagnation and improving search precision and overall accuracy. WID fosters more effective information exchange between the least and most successful wolves, facilitating escape from local optima and significantly enhancing exploration potential. To validate the superiority of CMWGWO, a comprehensive evaluation is conducted on 23 benchmark functions spanning dimensions from 30 to 500, ten CEC19 functions, and three engineering problems. The empirical findings demonstrate that CMWGWO surpasses the original GWO in convergence accuracy and robust optimization capability. © The Author(s) 2024. |
Keywords: | ALGORITHM; ALPHA WOLF; ARTICLE; BENCHMARKING; CANIS LUPUS; ELECTRIC POTENTIAL; LEARNING; METAHEURISTICS; MIRROR; WOLF
URI: | http://elar.urfu.ru/handle/10995/141536 |
Access: | info:eu-repo/semantics/openAccess; cc-by
SCOPUS ID: | 85186187719 |
WOS ID: | 001177429500044 |
PURE ID: | 53806553 |
ISSN: | 2045-2322 |
DOI: | 10.1038/s41598-024-55040-6 |
Sponsorship: | Linköpings Universitet, LiU; Centrum för Industriell Informationsteknologi, Linköpings Universitet, CENIIT, LiU |
Appears in Collections: | Scientific publications of UrFU researchers indexed in Scopus and WoS CC
Files in This Item:
File | Description | Size | Format
---|---|---|---
2-s2.0-85186187719.pdf | | 12.08 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
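For orientation, the sketch below shows the canonical GWO position update that the abstract describes CMWGWO as building on. It is a minimal illustration of the baseline algorithm only, not the paper's method: the Chaotic Opposition Learning, Mirror Reflection Strategy, and Worst Individual Disturbance operators are defined in the article itself and are not reproduced here, and the function name `gwo_step`, the sphere test function, and all parameter values are illustrative assumptions.

```python
import numpy as np


def gwo_step(wolves, fitness, a, rng):
    """One canonical grey wolf optimizer update (baseline GWO, not CMWGWO).

    wolves : (n, d) array of current wolf positions
    fitness: callable, position vector -> scalar (lower is better)
    a      : coefficient linearly decreased from 2 to 0 over the run
    rng    : numpy random generator
    """
    scores = np.array([fitness(w) for w in wolves])
    order = np.argsort(scores)
    # The three best wolves (alpha, beta, delta) guide the rest of the pack.
    alpha, beta, delta = wolves[order[0]], wolves[order[1]], wolves[order[2]]

    n, d = wolves.shape
    new_wolves = np.empty_like(wolves)
    for i in range(n):
        moves = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(d), rng.random(d)
            A = 2.0 * a * r1 - a                 # |A| > 1 pushes toward exploration
            C = 2.0 * r2
            D = np.abs(C * leader - wolves[i])   # distance to this leader
            moves.append(leader - A * D)
        new_wolves[i] = np.mean(moves, axis=0)   # average of the three guided moves
    return new_wolves


# Illustrative usage: minimise the 30-dimensional sphere function.
def sphere(x):
    return float(np.sum(x * x))


rng = np.random.default_rng(0)
wolves = rng.uniform(-100.0, 100.0, size=(30, 30))
max_iter = 500
for t in range(max_iter):
    a = 2.0 - 2.0 * t / max_iter                 # exploration parameter decays linearly
    wolves = gwo_step(wolves, sphere, a, rng)

best = min(wolves, key=sphere)
print("best fitness:", sphere(best))
```

In the paper's variant, the COL, MRS, and WID operators would be layered around an update of this kind (e.g. perturbing or re-sampling selected wolves before or after the guided move); their exact formulations should be taken from the article at the DOI above.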