
Advances, Systems and Applications

Genetic algorithm-based secure cooperative control for high-order nonlinear multi-agent systems with unknown dynamics


Research has recently grown on multi-agent systems (MAS) and their coordination and secure cooperative control, for example in the field of edge-cloud computing. MAS offer robustness and flexibility compared to centralized systems by distributing control across decentralized agents, allowing the system to adapt and scale without an overhaul. The collective behavior emerging from agent interactions can solve complex tasks beyond individual capabilities. However, controlling high-order nonlinear MAS with unknown dynamics raises challenges. This paper proposes an improved genetic algorithm strategy to enhance secure cooperative control performance. An efficient encoding method, adaptive decoding schemes, and heuristic initialization are introduced. These innovations enable thorough exploration of the solution space and accelerate convergence. Individual enhancement via load balancing, communication avoidance, and iterative refinement intensifies the local search. Simulations demonstrate superior performance over conventional algorithms on complex control problems with uncertainty. The proposed method yields robust, efficient, and consistent solutions by adapting to locate optima and exploiting promising regions of the search space. This has implications for securely controlling real-world MAS across domains such as robotics, power systems, and autonomous vehicles.


With the emergence of ChatGPT, multiple-model predictive control in edge-cloud computing has become considerably important. Correspondingly, the research landscape has recently been enriched by an increased focus on multi-agent systems (MAS). This burgeoning interest is especially pronounced in agents’ coordination and secure cooperative control. At its core, the study of MAS revolves around a simple yet profound idea: equipping follower agents with the capability to synchronize their actions or match the state trajectories of a leader agent based solely on local interactions [1]. Why this shift towards MAS when conventional systems reliant on a single centralized controller have long been prevalent? The answer lies in the advantages distributed MAS offer over their centralized counterparts [2]. Traditional centralized control systems operate from a singular point of command, potentially making them vulnerable to failures or system-wide disruptions. In contrast, MAS are inherently decentralized and thus provide a heightened sense of robustness. The decentralized nature ensures the system remains operative even if some agents malfunction or get compromised [3].

Beyond robustness, MAS exhibit flexibility, an attribute that’s becoming ever more crucial in today’s fast-evolving technological landscape. This flexibility stems from the system’s adaptability, allowing for changes or additions without a comprehensive overhaul [4]. For instance, as new agents are introduced into the system, they can be seamlessly integrated, leveraging the existing network of interactions. Scalability, another cardinal advantage of MAS, ensures that these systems can grow in size and complexity without a proportionate increase in management overhead, which is particularly beneficial in scenarios that require large-scale operations or foresee growth in the number of agents or tasks [5, 6]. By virtue of their design, distributed MAS can efficiently distribute tasks among agents, making them particularly adept at parallelizing complex problems. One of the most intriguing facets of MAS is the emergence of sophisticated group-level behaviors [7]. While each agent in the system might be programmed for only a specific set of simple tasks, the collective behavior that emerges when they interact can be unexpectedly intricate. This phenomenon of self-organization is reminiscent of natural systems, where simple entities come together to produce behaviors far beyond their individual capabilities. Ant colonies, for example, demonstrate complex cooperative behaviors despite having limited actions.

Over the years, the predominant focus in MAS has been on idealized first-order and second-order linear models [8]. However, most real-world agents’ true complexity and richness lie in nonlinear dynamics characterized by higher dimensionality [9]. This observed disparity between academic models and practical realities prompts a deeper exploration into more intricate systems and their associated behaviors. Robotic systems are a case in point: unlike rudimentary linear models, they are defined by nonlinear coupled joint kinematics and dynamics. Their movements and interactions are a culmination of complex algorithms that govern each joint, each sensor, and each actuator. Similarly, biological agents such as animals present an intricate dance of neural and musculoskeletal dynamics. Rather than linearly responding to stimuli, these agents engage in a sophisticated, multi-tiered interplay of cognitive, neural, and physical processes. The complexities do not end here. Power grids are pivotal to our modern infrastructure and do not operate on linear principles. Instead, they are characterized by nonlinear oscillatory dynamics, with each component responding to fluctuations in a manner that often diverges from basic linear expectations. Meanwhile, multi-vehicle systems, aerial drones, or undersea exploratory vehicles demonstrate intricate rigid body motions that do not conform neatly to linearized models. Venturing into socio-economic dimensions, consumer markets and social networks offer myriad challenges. These systems are governed not by mechanical or biological rules but by the unpredictable, often contradictory, whims and behaviors of human agents. Their patterns, although sometimes discernible, are rooted in the heterogeneous nature of human behavior, making them far removed from the simplicity of first or second-order linear models.

Given these complexities, the realm of MAS control has seen a myriad of methodologies being introduced and refined [10]. Techniques such as consensus algorithms serve to streamline decision-making processes across agents. Optimization-based techniques prioritize finding the best solutions within certain constraints. Simultaneously, biologically inspired approaches seek to harness the elegant solutions nature has honed over millennia. However, several issues need to be addressed when applying genetic algorithms (GA) to cooperative control in MAS. First, existing studies in this area often focus on simple agent dynamics, such as single or double integrators. The cooperative control of high-order nonlinear MAS remains largely uninvestigated, leaving a gap in understanding how GA can be effectively applied in such scenarios. Second, the encoding method used in GA can limit the completeness of the solution space and the efficiency of the search process. Typical representations for ordering problems may struggle to handle the complexity of controlling topology in MAS [11]. This limitation can hinder the ability of the GA to explore the entire solution space and find optimal solutions. Additionally, premature convergence and weak local search capability of GA can lead to suboptimal performance in cooperative control problems.

In conclusion, the study of MAS and their coordination and cooperative control has received significant attention in recent years. GA has shown promise in this field due to its global search capability. However, limitations exist in applying GA to cooperative control, such as the consideration of only simple agent dynamics, limitations in encoding methods, premature convergence, and weak local search capability. In particular, typical GA encodings restrict the completeness of the solution space, slowing convergence for ordering problems. The lack of adaptive decoding schemes in GA is also a weakness when handling complex MAS control topologies. Finally, the issues of premature convergence and weak local search hamper existing GA performance. These gaps motivate the improved GA proposed here to overcome said limitations. Therefore, this paper proposes an improved GA-based strategy that addresses these limitations by considering high-order nonlinear dynamics, introducing an efficient encoding method, developing adaptive decoding and individual enhancement schemes, and employing a heuristic initialization technique. The proposed strategy demonstrates superior performance through simulations, showcasing its potential for cooperative control in MAS.

The proposed approach offers several distinct contributions compared to previous works.

  • An improved GA is proposed for secure cooperative control of high-order nonlinear MAS with unknown dynamics. Simultaneously, an efficient encoding method is introduced to represent both topology control and execution order, the adaptive decoding scheme is developed using heuristics to accelerate convergence, and individual enhancement techniques like load balancing, communication avoidance, and iterative refinement are applied to intensify local search.

  • A resilient control strategy is developed against false data injection attacks via attack estimation and compensation.

The rest of the paper is organized as follows. Related works are reviewed in the Related works section. The high-order nonlinear MAS is presented in detail in the High-order nonlinear MAS section. The methodology is developed in the Methodology section. The Simulation section presents the simulation results, and conclusions are drawn in the Conclusions section.

Related works

MAS and cooperative control

MAS has emerged as a dominant paradigm in control systems and robotics, reflecting the collective efforts of multiple autonomous agents working towards a shared or individual objective [12, 13]. These agents, defined by their abilities to perceive, act, and make decisions, have been integral to several applications ranging from autonomous vehicle fleets to distributed sensing networks. Their significance lies in their number and how they interact, share information, and cooperate. Cooperative control, as a concept, complements MAS by focusing on the coordinated efforts of these agents [14,15,16]. This coordination ensures that the overall system achieves a particular behavior or objective more efficiently than individual agents working in isolation. Historically, cooperative control has been pivotal in solving complex problems that are otherwise beyond the reach of single-agent systems. The synergy between MAS and cooperative control has resulted in a transformative shift, particularly in swarm robotics, decentralized power grids, and intelligent transportation systems. In [17], a new technique based on the complex Laplacian was introduced to address which formation shapes, specified by inter-agent relative positions, can be formed and how they can be achieved with distributed control ensuring global stability; this method provided theoretical stability guarantees but received only limited validation. In [18], the authors investigated the robust cooperative output regulation problem of uncertain linear MAS with additive disturbances via the celebrated internal model principle; the resulting robust distributed consensus-tracking controller handles uncertainties but incurs high communication overhead. While the potential of MAS in conjunction with cooperative control is vast, it brings forth unique challenges [19].
These challenges range from ensuring real-time communication between agents, to making collective decisions under uncertain information, to developing adaptive strategies for dynamic environments and unexpected disruptions.

GA in control systems

GA has become a critical player in optimization and search problems inspired by natural selection. With roots in evolutionary biology, GA approaches problems by simulating processes like mutation, crossover, and selection, which are fundamental to biological evolution. Within control systems, the marriage of GA with traditional methods has given rise to innovative solutions, especially in scenarios where the search space is vast or poorly understood [20]. Literature [20] optimized an active suspension with a GA-tuned PID controller and achieved good convergence, but its fixed encoding limits exploration. The adaptability and versatility of GA have proven invaluable in automatically tuning controllers, optimizing system parameters, and even in the design of control strategies from the ground up. GA often provides robust and efficient solutions when conventional mathematical models falter due to complexities or non-linearity. In [21], the authors introduced a combined method called PSO-GA to tackle constrained optimization issues, using particle swarm optimization (PSO) to enhance the vector while the GA modifies decision vectors through genetic processes. The equilibrium between exploration and exploitation is further refined by integrating crossover and mutation genetic operators into the PSO algorithm. Constraints in the problem are managed using a parameter-free penalty function. PSO-GA handles constraints well but lacks mechanisms to prevent premature convergence. In [22], the authors presented a new feature selection method called ABACO, grounded in ant colony optimization (ACO). In this approach, features are represented as interconnected graph nodes. Each primary node in this graph is divided into two sub-nodes: one to select and another to deselect the feature. The ACO is employed to choose nodes, with the condition that ants must traverse all features. ABACO selects features effectively, but its encoding is not tailored to ordering problems.
A notable strength of GA in control systems is their inherent ability to handle multiple objectives simultaneously, making them well-suited for systems where trade-offs between different performance metrics are inevitable [23,24,25]. This quality is particularly relevant in modern control challenges, where objectives like energy efficiency, robustness, and rapid response might be in contention.

Security challenges in cooperative control

In the realm of MAS, cooperative control has emerged as a pivotal paradigm, enabling agents to achieve specific objectives collaboratively. However, the inherently interconnected nature of these systems introduces a spectrum of security challenges that threaten the integrity, availability, and confidentiality of communications and operations [26, 27]. Literature [26] addressed DoS attacks, but its performance guarantees require idealized models. Literature [27] handled deception attacks, but at the cost of high communication overhead for synchronization. One intrinsic challenge stems from the distributed nature of MAS. With agents operating in tandem, relying on local interactions and shared information, the system is particularly vulnerable to misinformation. An adversarial or compromised agent within the system can disseminate misleading information, causing potential systemic failures or unintended behaviors. Moreover, the communication channels bridging these agents become focal points of vulnerability. Eavesdropping, for instance, can jeopardize the confidentiality of exchanged data, revealing strategic insights to potential adversaries. Man-in-the-middle attacks, where an attacker intercepts and possibly alters the communication between two agents, can disrupt operational continuity and manipulate agents into undertaking hazardous actions. Additionally, as agents often rely on consensus algorithms to achieve a unified decision or action, attacks targeting these algorithms can stall or disrupt the consensus process. By presenting false data or through a Sybil attack (where one adversary controls multiple nodes), an attacker can influence the consensus decision in their favor. Ensuring data security and privacy is pivotal in both domains to safeguard the interests and data of the entities involved, whether they are system agents or recommendation service users.
In [28], the authors proposed an interaction-enhanced and time-aware graph convolution network for successive point-of-interest (POI) recommendation. The proposed method incorporates high-order relationships between users and POIs, improving recommendation accuracy. However, implementing the method and managing the associated data can be complex and resource-intensive. In [29], the authors offered a point-of-interest category recommendation model based on group preferences. The proposed model leverages group preferences, considering the influence of similar users to enhance recommendation accuracy. Focusing on POI categories instead of individual POIs makes efficient use of data to mine user interests. Nevertheless, the proposed model’s performance heavily relies on the availability and quality of user check-in data and the accuracy of group categorization. Additionally, implementing locality-sensitive hashing and long short-term memory can be computationally intensive and pose scalability challenges for large datasets or real-time applications. In [30], the authors proposed a novel privacy-preserving point-of-interest recommendation model that incorporated users’ privacy preferences based on a simplified graph convolutional neural network. The proposed model addresses privacy leakage in location-based social networks while providing POI recommendations. The simplified graph convolutional network analyzes high-order connectivity between users and POIs, enhancing recommendation accuracy. However, implementing generative models and graph convolutional networks can be complex and computationally intensive. While the model aims to protect user privacy, there may still be potential privacy trade-offs and vulnerabilities not accounted for in the evaluation. Most importantly, the model may be highly specialized for geological tourism and not easily generalize to other domains. 
Furthermore, the scalability of MAS makes them a prime target for distributed denial of service attacks [31]. By flooding the system with superfluous requests or messages, attackers can overload agents and communication channels, rendering the system non-responsive or slowing down its operational pace. Lastly, as the landscape of technology and cyber threats is continually evolving, the unpredictability of new, sophisticated attack vectors remains a looming challenge. MAS, especially those operating in dynamic and unstructured environments, must be prepared for unforeseen threats, necessitating a proactive and adaptive security framework.

High-order nonlinear MAS

MAS, consisting of interconnected autonomous agents, has emerged as an active research area with broad applications, including robotic teams, sensor networks, transportation systems, power grids, and biological systems. The key advantages of MAS are robustness, scalability, and distributed capabilities. Complex, large-scale tasks can be accomplished efficiently by coordinating multiple simple agents. The agents leverage local information exchange with neighbors to achieve global team objectives.

An agent is an autonomous entity with computation, communication, and control capabilities. The agent state evolves based on its dynamics. Agents interact with neighbors to share information and coordinate actions. A graph captures the communication topology. The objective is to achieve team-level goals like consensus, formation, coverage, and flocking by designing distributed control protocols for each agent. A differential equation describes the dynamics of an individual agent:

$$\begin{array}{c}{\dot{x}}_{i}\left(t\right)={f}_{i}\left({x}_{i}\left(t\right),{u}_{i}\left(t\right)\right)\end{array}$$
where \({x}_{i}\left(t\right)\in {\mathbb{R}}^{n}\) is the state vector, \({u}_{i}\left(t\right)\in {\mathbb{R}}^{m}\) is the control input, and \({f}_{i}:{\mathbb{R}}^{n}\times {\mathbb{R}}^{m}\to {\mathbb{R}}^{n}\) represents the dynamics. Equation (1) shows the dynamics of agent \(i\), and is a general nonlinear equation capturing complex motions. The general form encapsulates a wide range of agent models. For instance, for a vehicle, \({x}_{i}\) could denote position and orientation, \({u}_{i}\) is the steering/throttle commands, and \({f}_{i}\) captures the rigid body kinematics relating acceleration to force and torque inputs. For a robot, \({x}_{i}\) would contain the joint angles and velocities, \({u}_{i}\) specifies the motor torques, and \({f}_{i}\) represents the coupled joint dynamics. The motivation is to accommodate a wide variety of agent types with minimal assumptions on structure.
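As a concrete illustration of this general form, the sketch below advances a toy agent one explicit-Euler step; the specific `agent_dynamics` function is a hypothetical stand-in for the unknown \({f}_{i}\), not the paper's model.

```python
import numpy as np

def agent_dynamics(x, u):
    """Hypothetical nonlinear dynamics f_i(x_i, u_i) for one agent.

    A damped system with a sinusoidal nonlinearity stands in for the
    unknown f_i; any state-input map of the same shape would do.
    """
    return -0.5 * x + np.sin(x) + u

# One explicit-Euler step of x_dot = f_i(x, u)
dt = 0.01
x = np.array([1.0, -0.2])
u = np.array([0.1, 0.0])
x_next = x + dt * agent_dynamics(x, u)
```

Any integrator could replace the Euler step; the point is only that Eq. (1) pins down nothing about \({f}_{i}\) beyond its signature.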

For an \(N\) agent system, the overall dynamics are:

$$\begin{array}{c}\dot{X}=F\left(X,U\right)\end{array}$$
where \(X={[{x}_{1}^{T},\dots ,{x}_{N}^{T}]}^{T}\) concatenates all states, \(U={[{u}_{1}^{T},\dots ,{u}_{N}^{T}]}^{T}\) are the inputs, and \(F\) combines the individual dynamics \({f}_{i}\). Equation (2) aggregates the individual dynamics into a combined system model. \(X\) and \(U\) concatenate the states and inputs across all \(N\) agents. \(F\) combines the individual \({f}_{i}\) models into an overall mapping and provides a holistic model by merging the agents’ dynamics. The motivation is to analyze the collective multi-agent behavior for properties like synchronization and consensus.

The interaction topology is modeled as a graph \(\mathcal{G}=(\mathcal{V},\varepsilon)\) where \(\mathcal{V}=\{1,\dots ,N\}\) is the node set representing agents, and \(\varepsilon\subseteq \mathcal{V}\times \mathcal{V}\) is the edge set of communication links.

There are two common objectives: consensus and synchronization. Consensus means all agents converge to a common state value:

$$\begin{array}{c}\underset{t\to {\infty }}{lim}\left|{x}_{i}\left(t\right)-{x}_{j}\left(t\right)\right|=0,\forall i,j\end{array}$$

Equation (3) defines the consensus objective mathematically and states that the difference between any two agents’ states should converge to zero asymptotically, aligning all agents to a common value. The motivation is to derive a quantifiable metric for evaluating control strategies. It imposes the key requirement that agents reach an agreement through local interactions.
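The consensus metric of Eq. (3) can be evaluated directly from stacked agent states; a minimal sketch (the `consensus_error` helper name is ours, not from the paper):

```python
import numpy as np

def consensus_error(X):
    """Maximum pairwise state disagreement max_{i,j} ||x_i - x_j||.

    X has shape (N, n), one row per agent. Consensus is reached
    when this value converges to zero.
    """
    diffs = X[:, None, :] - X[None, :, :]      # (N, N, n) pairwise gaps
    return np.linalg.norm(diffs, axis=-1).max()

X = np.array([[1.0, 0.0], [1.1, 0.1], [0.9, -0.1]])
err = consensus_error(X)   # largest gap is between agents 1 and 2
```

Tracking this scalar over time gives a direct convergence diagnostic for any candidate control protocol.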

While synchronization tracks a dynamic leader \({x}_{0}\left(t\right)\):

$$\begin{array}{c}\underset{t\to {\infty }}{lim}\left|{x}_{i}\left(t\right)-{x}_{0}\left(t\right)\right|=0,\forall i\end{array}$$

Equation (4) defines the leader-follower synchronization objective, where agents track a dynamic leader \({x}_{0}\left(t\right)\). It states that the deviation between agents and leaders should vanish, aligning agents with their evolving trajectory. The motivation is to provide a metric for assessing synchronization performance in leader-following paradigms.

Consensus means all agents converge to a common state value, mathematically defined as the difference between any two agents’ states approaching zero as time goes to infinity. Synchronization, conversely, means all agents track a dynamic leader, mathematically defined as the difference between any agent’s state and the leader’s state approaching zero as time progresses.

Many initial MAS studies focused on simple first-order linear models:

$$\begin{array}{c}{\dot{x}}_{i}={u}_{i}\end{array}$$
Equation (5) shows a simplified single integrator model where the control \({u}_{i}\) directly commands velocity/acceleration. The linear case permits graph theory tools to be applied but lacks realism. The motivation is to establish a baseline before considering more complex dynamics.

The control input is directly the acceleration or velocity command. This simplified case applies graph theory and linear system analysis tools. The protocol can achieve consensus:

$$\begin{array}{c}{u}_{i}=\sum\limits_{j\in {\mathcal{N}}_{i}}{a}_{ij}\left({x}_{j}-{x}_{i}\right)\end{array}$$

where \({\mathcal{N}}_{i}\) are the neighbors of agent \(i\) and \({a}_{ij}\) are graph edge weights. Stability can be proven using properties of the Laplacian matrix. However, first-order linear models are limited in their ability to capture real-world agent dynamics. While they admit graph-theoretic and linear-systems analysis tools and provide useful insights, they only partially represent the complexities of real-world agents. Higher-order and nonlinear MAS offer a more realistic representation, such as the second-order nonlinear system, capturing the intricacies of real-world dynamics more effectively.

Equation (6) gives a consensus protocol that drives all agents to the group average by leveraging neighbor-state differences. The weights \({a}_{ij}\) encode the interaction graph topology. Stability can be proven using properties of the graph Laplacian matrix. The motivation is to derive a distributed control rule that achieves consensus for first-order models, providing a foundation for more advanced extensions.
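In vector form the protocol of Eq. (6) reduces to \(\dot{x}=-Lx\). The sketch below simulates it under assumed conditions (an undirected ring of four agents with unit weights and scalar states) to show convergence to the initial average:

```python
import numpy as np

# Undirected ring of N agents: a_ij = 1 for graph neighbors
N = 4
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1.0
L = np.diag(A.sum(axis=1)) - A      # graph Laplacian

x = np.array([0.0, 1.0, 2.0, 3.0])  # scalar agent states
dt = 0.05
for _ in range(2000):
    # Eq. (6): u_i = sum_j a_ij (x_j - x_i), i.e. u = -L x
    x = x + dt * (-L @ x)

# On a connected graph the states converge to the initial average (1.5)
```

The step size must satisfy \(dt\,\lambda_{\max}(L) < 2\) for this explicit scheme to remain stable; here \(\lambda_{\max}=4\), so \(dt=0.05\) is comfortably inside that bound.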

A more realistic model is a second-order nonlinear system:

$$\begin{array}{c}{\ddot{x}}_{i}={f}_{i}\left({x}_{i},{\dot{x}}_{i}\right)+{u}_{i}\end{array}$$
where \({x}_{i}\in {\mathbb{R}}^{n}\) could represent position, \({\dot{x}}_{i}\in {\mathbb{R}}^{n}\) is velocity, and \({u}_{i}\in {\mathbb{R}}^{n}\) is an acceleration or force command.

Equation (7) presents a more realistic second-order nonlinear dynamics capturing intricacies like coupled physics and actuator constraints, which better represent real MAS like robot teams or autonomous vehicles. The motivation is to move beyond idealistic linear models to handle complex dynamics with techniques like Lyapunov methods or neural networks.

Many physical systems are accurately described by second-order nonlinear dynamics. Examples include:

  • Vehicles: \({\ddot{p}}_{i}={f}_{i}\left({v}_{i}\right)+{g}_{i}\left({p}_{i}\right){u}_{i}\)

  • Robots: \({\ddot{q}}_{i}={h}_{i}({\dot{q}}_{i},{q}_{i})+{M}_{i}{\left({q}_{i}\right)}^{-1}{u}_{i}\), showing the dynamics of a robotic manipulator, where \({q}_{i}\) is the joint angle, \({M}_{i}\) is the inertia matrix, and \({u}_{i}\) is the control input. The nonlinear couplings between joints are captured through \({h}_{i}\).

  • Oscillators: \({\ddot{\theta }}_{i}={f}_{i}({\dot{\theta }}_{i},{\theta }_{i})+{u}_{i}\), modelling an oscillator agent where \({\theta }_{i}\) is the phase and \({f}_{i}\) characterizes the nonlinear oscillations. \({u}_{i}\) provides control authority.

where \({p}_{i}\), \({v}_{i}\) are position and velocity, \({q}_{i}\), \({M}_{i}\) are joint angles and inertia matrix, and \({\theta }_{i}\) is phase.
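As an illustration of the oscillator model above, one integration step can be sketched as follows; the pendulum-like choice of \({f}_{i}\) (restoring force plus damping) and the parameter values are hypothetical examples, not the paper's model.

```python
import numpy as np

def oscillator_step(theta, theta_dot, u, dt=0.01, omega=2.0, k=0.1):
    """One Euler step of the oscillator agent
        theta_ddot = f_i(theta_dot, theta) + u,
    with an assumed f_i = -omega^2 sin(theta) - k theta_dot.
    """
    theta_ddot = -omega**2 * np.sin(theta) - k * theta_dot + u
    return theta + dt * theta_dot, theta_dot + dt * theta_ddot

theta, theta_dot = oscillator_step(0.0, 1.0, u=0.0)
```

The control input \(u_i\) enters the acceleration directly, which is what the "control authority" remark above refers to.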

Consensus protocols can be extended using acceleration feedback:

$$\begin{array}{c}{u}_{i}=-\sum\limits_{j\in {\mathcal{N}}_{i}}{a}_{{ij}}\left({\ddot{x}}_{i}-{\ddot{x}}_{j}\right)\end{array}$$

However, the stability analysis becomes more complex for nonlinear dynamics. Lyapunov and LaSalle’s theorems are commonly used. While more realistic, second-order models are still low-dimensional compared to real MAS, which motivates high-order nonlinear models.

In many MAS applications, the agents have complex nonlinear dynamics with higher dimensionality:

$$\begin{array}{c}{\dot{x}}_{i}={f}_{i}\left({x}_{i},{u}_{i}\right),{x}_{i}\in {\mathbb{R}}^{n},n>2\end{array}$$

Some examples include:

  • Aerial vehicles: 12+ states for rigid-body dynamics.

  • Robots: \(n\) joint angles/rates for an \(n\)-degree-of-freedom manipulator.

  • Power systems: generator rotor angles and frequencies.

  • Biological networks: large-scale neuronal population models.

The distributed control problem becomes very challenging for high-order nonlinear MAS.

Consensus depends on coordinated tuning of multiple states and inputs. Stability analysis is complicated with complex interconnected dynamics. Many applications have partially unknown models with uncertainties, disturbances, and unmodeled dynamics:

$$\begin{array}{c}{\dot{x}}_{i}={f}_{i}\left({x}_{i},{u}_{i}\right)+{d}_{i}\left(t\right)\end{array}$$
where \({d}_{i}\left(t\right)\) represents uncertainties. This further complicates controller design and convergence guarantees.

Beyond consensus, high-order nonlinear MAS give rise to other multi-objective coordination problems such as (i) formation control: achieving desired geometric patterns; (ii) coverage: distributing agents over a region; (iii) flocking: aligning agent velocities like bird swarms; (iv) synchronization: tracking dynamic leaders.

While high-order nonlinear dynamics are considered, even these are simplified representations compared to the full complexity exhibited in real-world MAS. For instance, the dynamics of autonomous robotic systems can encompass intricate sensorimotor control loops, planning under uncertainty, and complex physical interactions. Power grid systems display complex oscillatory dynamics as well as stochastic perturbations. The threats considered, including false data injection attacks, also represent a subset of what systems may face in practice. The modeling can incorporate higher dimensionality, heterogeneous dynamics, and detailed physics-based interactions to extend the approach to more complex dynamics. The GA’s adaptive decoding schemes can help handle the increased dimensionality and maintain efficacy despite uncertainties. For sophisticated threat models, the resilience strategies must expand beyond attack detection and mitigation to include adaptive security mechanisms and online learning of new threats.


Consider a leader-following MAS consisting of \(N\) follower agents labeled 1 to \(N\), and one leader agent labeled 0. The dynamics of follower \(i\) is modeled by the nonlinear continuous-time system:

$$\begin{array}{c}{\dot{x}}_{i}={f}_{i}\left({x}_{i}\right)+{g}_{i}\left({x}_{i}\right){u}_{i}+{d}_{i}\left(t\right),\quad {y}_{i}={h}_{i}\left({x}_{i}\right)\end{array}$$
where \({x}_{i}\in {\mathbb{R}}^{n}\) is the state vector, \({u}_{i}\in {\mathbb{R}}^{m}\) is the control input, \({y}_{i}\in {\mathbb{R}}^{p}\) is the measurable output, \({f}_{i}:{\mathbb{R}}^{n}\to {\mathbb{R}}^{n}\) and \({g}_{i}:{\mathbb{R}}^{n}\to {\mathbb{R}}^{n\times m}\) are unknown smooth nonlinear functions representing the agent dynamics, \({d}_{i}\left(t\right)\in {\mathbb{R}}^{n}\) is the external disturbance input, and \({h}_{i}:{\mathbb{R}}^{n}\to {\mathbb{R}}^{p}\) is the output function.

The leader agent’s dynamics are similarly given by:

$$\begin{array}{c}{\dot{x}}_{0}={f}_{0}\left({x}_{0},t\right),\quad {y}_{0}={h}_{0}\left({x}_{0}\right)\end{array}$$
where the leader’s dynamics \({f}_{0}:{\mathbb{R}}^{n}\times \mathbb{R}\to {\mathbb{R}}^{n}\) are also uncertain.

The control objective is to design distributed control protocols for each follower agent using local neighbor information, such that the followers cooperatively track the dynamic leader. In other words, the tracking error between each follower output \({y}_{i}\) and the leader output \({y}_{0}\) should converge to zero:

$$\begin{array}{c}\underset{t\to {\infty }}{lim}||{y}_{i}\left(t\right)-{y}_{0}\left(t\right)||=0,\forall i\in 1,\dots ,N\end{array}$$

This cooperative tracking should be achieved despite the complex unknown nonlinear dynamics of both the leader and followers.

The interaction topology between the agents is modeled as a fixed directed graph \(\mathcal{G}=(\mathcal{V},\varepsilon)\). The vertex set \(\mathcal{V}=\{0,1,\cdots ,N\}\) contains all agents, with 0 denoting the leader. The edge set \(\varepsilon\subseteq \mathcal{V}\times \mathcal{V}\) represents the communication links between agents. An ordered edge \((j,i)\in \varepsilon\) means agent \(i\) can receive information from agent \(j\) (but not necessarily vice versa).

The adjacency matrix \(\varvec{A}=\left[{a}_{ij}\right]\in {\mathbb{R}}^{\left(N+1\right)\times \left(N+1\right)}\) associated with \(\mathcal{G}\) is defined as:

$$\begin{array}{c}{a}_{ij}=\left\{\begin{array}{cc}{w}_{ij}>0& \text{if }(j,i)\in\varepsilon \\ 0& \text{otherwise}\end{array}\right.\end{array}$$

where \({w}_{ij}\) denotes the weight of edge \((j,i)\). The neighbor set \({\mathcal{N}}_{i}\) of each agent \(i\) is given by:

$$\begin{array}{c}{\mathcal{N}}_{i}=\left\{j\in \mathcal{V}\mid (j,i)\in\varepsilon\right\}\end{array}$$

The graph Laplacian matrix \(L=\left[{l}_{ij}\right]\in {\mathbb{R}}^{(N+1)\times (N+1)}\) captures the interaction topology:

$$\begin{array}{c}{l}_{ij}=\left\{\begin{array}{cc}\sum\limits_{k=0}^{N}{a}_{ik}& \text{if }i=j\\ -{a}_{ij}& \text{if }i\ne j\end{array}\right.\end{array}$$

This mathematical graph representation will be leveraged to analyze the stability and convergence properties.

A key aspect of applying GA is designing an appropriate encoding to represent control solutions (chromosomes). Here, each chromosome is encoded as a string of \(N\) random keys:

$$\begin{array}{c}\text{chrom}=\left[{k}_{1},{k}_{2},\dots ,{k}_{N}\right]\end{array}$$

where each key \({k}_{i}\in \mathbb{R}\).

The integer part \(\lfloor {k}_{i}\rfloor\) encodes the controlling topology, specifically the agent that follower \(i\) connects its controlling input to. This captures the interaction graph edges.

The decimal part \({k}_{i}-\lfloor {k}_{i}\rfloor \in \left[\text{0,1}\right)\) encodes the execution priority of agent \(i\), which determines the order in which agents are scheduled. A smaller key value indicates higher priority.

Compared to standard binary encoding, this proposed scheme has several advantages: (i) It provides a natural unified representation for topology control and execution ordering, both critical to the cooperative tracking problem. (ii) Keys can be easily ordered to determine scheduling priorities, whereas binary strings lack a direct ordering relationship. (iii) The compact representation as real numbers is more efficient than binary bits. (iv) No special repair or encoding/decoding step is needed to maintain precedence feasibility, unlike in permutation-based ordering problems. (v) The randomized keys allow thorough exploration of the complete solution space. The proposed encoding therefore incorporates the two essential aspects of topology and priority in a simple yet effective way.
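As a minimal sketch of the encoding, assuming the integer parts range over \(C\) available controllers (a hypothetical count for illustration), a random-key chromosome can be generated as:

```python
import random

def encode(num_agents, num_controllers):
    """Generate a random-key chromosome: one real-valued key per follower agent.

    The integer part of each key selects a controller (0 .. num_controllers-1);
    the decimal part encodes the agent's execution priority (smaller = higher).
    """
    return [random.uniform(0, num_controllers) for _ in range(num_agents)]

chrom = encode(5, 4)  # 5 followers, 4 hypothetical controllers
```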

To generate a control solution, each chromosome must be decoded to obtain the actual controller parameters and execution schedule. The proposed decoding first sorts all the agents in ascending order of their keys:

$$\begin{array}{c}{k}_{{i}_{1}}<{k}_{{i}_{2}}<\dots <{k}_{{i}_{N}}\end{array}$$

This priority ordering determines the relative execution schedule.

Next, each agent is allocated to the specific controller given by the integer part of its key:

$$\begin{array}{c}{con}\left({i}_{k}\right)=\lfloor{k}_{{i}_{k}}\rfloor,\quad k=1,\dots ,N\end{array}$$

where \({con}\left(i\right)\) returns the controller assigned to agent \(i\).

The complete decoding procedure is summarized in Algorithm 1 below:

figure a

Algorithm 1. Decoding algorithm

This transforms a given chromosome into a valid control solution. The complexity is \(O(N\log N)\) due to the sorting operation.

An example is shown in Fig. 1 to illustrate the decoding process. There are \(N=5\) follower agents. The chromosome encodes a random key for each agent. First, the agents are reordered by ascending keys as A3, A2, A5, A1, A4. Then the controllers are allocated from the integer parts: A3→C0, A2→C1, A5→C1, A1→C2, A4→C3.
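The decoding procedure can be sketched as follows; the key values here are hypothetical, chosen so that the run reproduces the Fig. 1 example:

```python
import math

def decode(keys):
    """Map a random-key chromosome to a schedule and controller assignment.

    keys[i-1] is the key of follower agent i. Agents are sorted by ascending
    key (priority order); each agent's controller is its key's integer part.
    """
    order = sorted(range(1, len(keys) + 1), key=lambda i: keys[i - 1])  # O(N log N)
    controllers = {i: math.floor(keys[i - 1]) for i in order}
    return order, controllers

# Hypothetical keys for agents A1..A5, matching the Fig. 1 example.
order, con = decode([2.7, 1.3, 0.1, 3.9, 1.5])
print(order)  # [3, 2, 5, 1, 4]  ->  execution order A3, A2, A5, A1, A4
print(con)    # {3: 0, 2: 1, 5: 1, 1: 2, 4: 3}  ->  A3->C0, A2->C1, ...
```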

Fig. 1
figure 1

Example decoding process from chromosome to control solution

The fitness function evaluates the quality of each control solution. It maps a chromosome to a numeric cost value. Here the fitness is defined based on the worst case tracking error:

$$\begin{array}{c}J=\underset{i}{\text{max}}\;\underset{t\in \left[0,T\right]}{\text{max}}\;||{e}_{i}\left(t\right)||\end{array}$$

where \({e}_{i}\left(t\right)={y}_{i}\left(t\right)-{y}_{0}\left(t\right)\) is the tracking error for follower \(i\) between its own output and the leader’s output.

A smaller fitness value indicates better tracking performance, driving the GA toward solutions that minimize the maximum deviation from the leader.

Computing the fitness requires simulating the agent dynamics under the control policy defined by the chromosome decoding. The fitness calculation has complexity \(O\left(N\right)\).
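Assuming the simulated output trajectories are available as nested sequences (a hypothetical data layout), the worst-case fitness can be evaluated as:

```python
import math

def fitness(follower_outputs, leader_output):
    """Worst-case tracking error J = max over agents i and times t of ||y_i(t) - y_0(t)||.

    follower_outputs[i][t] and leader_output[t] are output vectors (tuples/lists).
    """
    return max(
        math.dist(y_it, leader_output[t])
        for y_i in follower_outputs
        for t, y_it in enumerate(y_i)
    )

# Toy example: two followers tracking a constant leader in R^2.
y0 = [(0.0, 0.0)] * 4
ys = [[(0.1, 0.0)] * 4, [(0.0, 0.3)] * 4]
print(fitness(ys, y0))  # worst case is follower 2's error, about 0.3
```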

In addition to the standard decoding procedure above, the proposed approach introduces two heuristic decoding methods to accelerate convergence:

  • Earliest finish time (EFT) decoding: schedule agents according to the priority order given by the key values, assigning each agent to the controller that minimizes its finish time [32].

  • Critical path (CP) decoding: first, rearrange agents in decreasing order of upward rank and then allocate controllers according to integer parts of keys [33].

The upward rank \({r}_{i}\) of agent \(i\) indicates the length of the longest path from \(i\) to the leader, accounting for execution times and priorities:

$$\begin{array}{c}{r}_{i}=\left\{\begin{array}{cc}0& \text{if }i\text{ is the leader}\\ \underset{p\in {P}_{i}}{\text{max}}\left({r}_{p}+{w}_{pi}\right)& \text{otherwise}\end{array}\right.\end{array}$$

where \({P}_{i}\) is the set of predecessors of \(i\) in the graph.
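A sketch of this recursion, using a hypothetical three-agent graph with the leader as node 0:

```python
def upward_ranks(preds, weights, leader=0):
    """Compute r_i = 0 for the leader, else max over predecessors p of (r_p + w_pi).

    preds: dict mapping agent -> list of predecessor agents;
    weights: dict mapping (p, i) -> edge weight w_pi.
    """
    ranks = {leader: 0.0}

    def rank(i):
        if i not in ranks:
            ranks[i] = max(rank(p) + weights[(p, i)] for p in preds[i])
        return ranks[i]

    for i in preds:
        rank(i)
    return ranks

# Hypothetical chain 0 -> 1 -> 2 plus a direct edge 0 -> 2.
preds = {1: [0], 2: [0, 1]}
weights = {(0, 1): 1.0, (0, 2): 1.5, (1, 2): 2.0}
print(upward_ranks(preds, weights))  # {0: 0.0, 1: 1.0, 2: 3.0}
```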

The motivation is that heuristic methods can more quickly converge to high-quality areas of the solution space. EFT minimizes individual finish times, while CP optimizes the critical upward path. To balance global exploration and local exploitation, an adaptive probabilistic distribution determines which decoding scheme to apply based on the iteration number \(k\).

Let \({p}_{n}\), \({p}_{\text{eft}}\), \({p}_{\text{cp}}\) denote the probabilities of selecting normal, EFT, and CP decoding respectively. They are updated each generation as:

$$\begin{array}{c}{p}_{n}=(k/K{)}^{\alpha },\quad {p}_{\text{eft}}=\beta \left(1-(k/K{)}^{\alpha }\right),\quad {p}_{\text{cp}}=(1-\beta )\left(1-(k/K{)}^{\alpha }\right)\end{array}$$

where \(K\) is the maximum number of iterations and \(\alpha\), \(\beta\) are scaling constants.

Initially heuristic decodings are more probable to boost convergence speed. As iterations increase, the normal decoding likelihood rises to refine the final solution. The complete procedure is described in Algorithm 2.
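The adaptive selection can be sketched as follows; the values of \(\alpha\) and \(\beta\) here are illustrative, not taken from the paper:

```python
import random

def choose_decoding(k, K, alpha=2.0, beta=0.5, rng=random):
    """Pick a decoding scheme at iteration k of K per the adaptive distribution.

    p_n = (k/K)^alpha grows with k, so normal decoding dominates late in the
    run while the heuristic EFT/CP decodings dominate early.
    """
    p_n = (k / K) ** alpha
    p_eft = beta * (1 - p_n)
    r = rng.random()
    if r < p_n:
        return "normal"
    if r < p_n + p_eft:
        return "eft"
    return "cp"

print(choose_decoding(0, 100))    # a heuristic decoding ("eft" or "cp")
print(choose_decoding(100, 100))  # "normal" at the final iteration
```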

figure b

Algorithm 2. Adaptive decoding scheme

This adaptive approach balances rapid convergence and focused local refinement by shifting priority from heuristics to normal decoding over the evolutionary process.

To further improve solutions, an individual enhancement procedure is applied after decoding each chromosome:

  • Balance Loads:

    Reassign some agents on overloaded controllers to underutilized controllers

    Prioritize agents with dependencies to the underloaded controllers

  • Avoid Communication:

    Swap agents on different controllers if they have predecessor/successor dependencies

    Place interdependent agents on the same controller

  • Iterative Improvement:

    Alternate between forward and backward scheduling

    Shift tasks along their float windows to further reduce the makespan

    Repeat until no further improvement

The load balancing aims to even out the computation across available controllers. Communication avoidance reduces coordination delays between dependent agents. Iterative forward and backward scheduling optimizes the execution order within flexibility limits.

This enhancement scheme intensifies local search around good solutions to uncover improved points in the vicinity. It is described in Algorithm 3.

figure c

Algorithm 3. Individual enhancement

The complexity is \(O\left({N}^{2}\right)\). If the iterative process produces better performance, the improved chromosome replaces the original one. Otherwise, it is discarded. This exploitation helps overcome the weak local search issues faced by standard GA.
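As one illustrative realization (a sketch, not the paper's exact Algorithm 3), the load-balancing step could look like:

```python
def balance_loads(assignment, num_controllers):
    """Illustrative load-balancing pass: repeatedly move one agent from the
    most-loaded controller to the least-loaded one while it narrows the spread.

    assignment: dict mapping agent -> controller index (modified in place).
    """
    while True:
        loads = [0] * num_controllers
        for c in assignment.values():
            loads[c] += 1
        hi, lo = loads.index(max(loads)), loads.index(min(loads))
        if loads[hi] - loads[lo] <= 1:
            return assignment
        # move an arbitrary agent off the overloaded controller
        agent = next(a for a, c in assignment.items() if c == hi)
        assignment[agent] = lo

print(balance_loads({1: 0, 2: 0, 3: 0, 4: 1}, 2))  # two agents per controller
```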

To seed the initial population with good solutions, some chromosomes are generated using the following heuristic methods:

  • EFT scheduling

  • CP scheduling

  • Random keys based on execution levels

The remaining chromosomes are randomly constructed. Leveraging these simple but fast heuristics provides higher quality initialization compared to pure random generation. This enhances subsequent convergence and reduces computational requirements. The initialization process is summarized in Algorithm 4 below:

figure d

Algorithm 4. Heuristic population initialization

The constructed chromosomes help bootstrap the iterative optimization process and mitigate the cold start problem.
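The seeding step can be sketched as follows; the heuristic constructors are assumed to be supplied externally (e.g. EFT-, CP-, or level-based key builders):

```python
import random

def init_population(pop_size, num_agents, num_controllers, heuristics):
    """Seed part of the population with heuristic chromosomes, the rest randomly.

    heuristics: list of zero-argument functions, each returning a chromosome.
    """
    population = [h() for h in heuristics][:pop_size]
    while len(population) < pop_size:
        population.append(
            [random.uniform(0, num_controllers) for _ in range(num_agents)])
    return population

# One hypothetical heuristic chromosome plus four random ones.
pop = init_population(5, 3, 2, heuristics=[lambda: [0.1, 1.2, 0.3]])
```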

The GA selects good chromosomes from the population during each generation to produce offspring via crossover. Parent selection uses a roulette wheel scheme weighted by the fitness values, so chromosomes with better objective function values have a higher likelihood of being chosen. The proposed crossover operator is a biased parameterized uniform crossover. For each key index, an inheritance probability controls whether that gene comes from parent 1 or parent 2:

$${child}_{i} = \left\{\begin{array}{cc}{parent1}_{i}& \mathrm{with\;probability}\;{p}_{b}\\ {parent2}_{i}& \mathrm{otherwise}\end{array}\right.$$

The bias parameter \({p}_{b}\in \left(0.5,1\right)\) skews the probability toward the fitter parent. This promotes propagation of good traits to the next generation. In standard uniform crossover, each gene has an equal likelihood of originating from either parent; the proposed biased version enhances exploitation of promising solutions. The overall crossover procedure is described in Algorithm 5 below:

figure e

Algorithm 5. Biased crossover scheme

After the mating, the produced offspring are added to the population. The elite member with the best fitness is directly retained in the next generation.
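The biased uniform crossover can be sketched as follows, with parent 1 assumed to be the fitter parent:

```python
import random

def biased_crossover(parent1, parent2, p_b=0.7, rng=random):
    """Biased parameterized uniform crossover: each gene is inherited from the
    fitter parent (parent1) with probability p_b in (0.5, 1).
    """
    return [g1 if rng.random() < p_b else g2
            for g1, g2 in zip(parent1, parent2)]

child = biased_crossover([0.1, 1.2, 2.3], [0.9, 1.8, 2.7])
# each gene of `child` comes from one of the two parents
```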

To maintain diversity and avoid premature convergence, random immigrant chromosomes are inserted into the population at each generation. The immigrants are generated by the same heuristic initialization procedure described earlier. The worst members in the current population are replaced with the random immigrants. Let \({p}_{m}\) denote the immigration rate. Then \(\lceil {p}_{m}N\rceil\) chromosomes are migrated each iteration, where \(N\) is the population size. This injection of random individuals prevents the loss of potentially useful genetic material. It expands the exploration range and mitigates convergence to local optima.
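An illustrative implementation of the immigration step (smaller fitness is better, consistent with the fitness definition above; the immigrant generator stands in for the heuristic initializer):

```python
import math

def inject_immigrants(population, fitnesses, make_chromosome, p_m=0.1):
    """Replace the ceil(p_m * N) worst chromosomes with fresh immigrants.

    population: list of chromosomes; fitnesses: parallel list (smaller = better);
    make_chromosome: zero-argument generator for new individuals.
    """
    n_new = math.ceil(p_m * len(population))
    # indices of the worst (largest-fitness) members
    worst = sorted(range(len(population)), key=lambda i: fitnesses[i])[-n_new:]
    for i in worst:
        population[i] = make_chromosome()
    return population

pop = [[1.0], [2.0], [3.0], [4.0]]
fit = [0.1, 0.5, 0.9, 0.2]  # the third chromosome is worst
inject_immigrants(pop, fit, lambda: ["new"], p_m=0.25)
print(pop)  # [[1.0], [2.0], ['new'], [4.0]]
```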

The choices of population size and termination criteria, such as maximum generations or fitness thresholds, can significantly impact the performance of the GA. Suboptimal parameter selections could lead to premature convergence or excessive runtimes, hindering optimization. Performing sensitivity analyses to understand how these parameters affect convergence for problems of different scales and complexity would provide valuable insights into best practices for implementation. Factors like problem dimensionality, search space size, and modality could help determine appropriate population sizes and termination criteria. Additionally, adaptive techniques that automatically tune these parameters based on runtime convergence metrics could help avoid poor parameter choices.

The overall improved GA for cooperative control of MAS is summarized in Algorithm 6 below:

figure f

Algorithm 6. Improved GA

In each generation, the key steps are (i) Selection of fit parents; (ii) Recombination via biased crossover; (iii) Adaptive decoding using heuristics; (iv) Local search enhancement of individuals; (v) Immigration of random members; (vi) Replacement with newly generated offspring. Termination occurs when the maximum number of iterations is reached or an acceptable solution quality is achieved. The best-performing chromosome represents the final control policy.

Based on the communication graph, a distributed control protocol is designed for each follower agent.

Let \({e}_{i}\) denote the local neighborhood synchronization error for follower \(i\):

$$\begin{array}{c}{e}_{i}=\sum\limits_{j\in {\mathcal{N}}_{i}}{a}_{ij}\left({y}_{i}-{y}_{j}\right)\end{array}$$

where \({\mathcal{N}}_{i}\) is the neighbor set of agent \(i\) and \({a}_{ij}\) are edge weights in the graph.

Using neural network approximation, the distributed control input for follower \(i\) is designed as:

$$\begin{array}{c}{u}_{i}={\widehat{g}}_{i}^{-1}\left({x}_{i}\right)\left[-{k}_{1}{\gamma }_{i}\left({\mu }_{i}\left({x}_{i}\right)\right)\text{sat}\left({e}_{i}\right)-{k}_{2}\text{sat}\left({\widehat{e}}_{i}\right)-\widehat{{f}_{i}}\left({x}_{i}\right)+{v}_{i}\right]\end{array}$$

where \(\widehat{{g}_{i}}\left({x}_{i}\right)\) and \(\widehat{{f}_{i}}\left({x}_{i}\right)\) are neural network approximations of the unknown dynamics \({g}_{i}\left({x}_{i}\right)\) and \({f}_{i}\left({x}_{i}\right)\). \({k}_{1}\) and \({k}_{2}\) are positive control gains, and \({\gamma }_{i}={a}_{{i}_{0}}+\sum _{j\in {\mathcal{N}}_{i}}{a}_{ij}\). \({\mu }_{i}(\cdot )\) is a Nussbaum gain function, a tool from adaptive control for ensuring stability of systems with uncertain parameters; its key property is that it can guarantee closed-loop stability even in the presence of unknown and time-varying uncertainties. \({v}_{i}\) is the input used to tune the NN weights, and \(\text{sat}(\cdot )\) is a saturation function.

The neural network weights are adapted online to minimize the approximation errors:

$$\begin{array}{c}{\dot{\widehat{\theta }}}_{i}={{\Gamma }}_{i}{\varphi }_{i}\left({x}_{i}\right){sat}\left({e}_{i}\right)\end{array}$$

where \({{\Gamma }}_{i}\) is the adaptation gain matrix and \({\varphi }_{i}\left({x}_{i}\right)\) is the activation function.

Theorem 1 Under the proposed distributed control protocol, the synchronization error is semi-globally uniformly ultimately bounded for each follower \(i\):

$$\begin{array}{c}\left|{e}_{i}\left(t\right)\right|\le \varphi \left(\left|{e}_{i}\left(0\right)\right|,t\right)+{\epsilon}_{i},\forall t\ge 0\end{array}$$

where \(\varphi (\cdot ,\cdot )\) is a class \(\mathcal{KL}\) function and \({\epsilon}_{i}\) is the ultimate bound.

Therefore, the proposed distributed control protocol achieves cooperative leader-follower consensus between the MAS with stability guarantees despite the complex unknown dynamics.

Next, we analyze the stability and convergence properties of the cooperative control scheme. Consider the candidate Lyapunov function:

$$\begin{array}{c}V=\frac{1}{2}\sum\limits_{i=1}^{N}{\left|{\stackrel{\sim}{\theta }}_{i}\left(t\right)\right|}^{2}+\frac{1}{2}{s}^{T}\left(t\right)Ls\left(t\right)\end{array}$$

where \({\stackrel{\sim}{\theta }}_{i}={\widehat{\theta }}_{i}-{\theta }_{i}^{\ast }\) is the neural network weight estimation error, \(s={[{e}_{1},\dots ,{e}_{N}]}^{T}\) is the stacked synchronization error vector, and \(L\) is the graph Laplacian matrix.

Differentiating \(V\) yields:

$$\begin{array}{c}\dot{V}=\sum\limits_{i=1}^{N}{\stackrel{\sim}{\theta }}_{i}^{T}{\dot{\widehat{\theta }}}_{i}+{s}^{T}\left(t\right)L\dot{s}\left(t\right)=\sum\limits_{i=1}^{N}{\stackrel{\sim}{\theta }}_{i}^{T}{{\Gamma }}_{i}{\varphi }_{i}\left({x}_{i}\right){sat}\left({e}_{i}\right)+{s}^{T}\left(t\right)L{\left[{\dot{e}}_{1},\dots ,{\dot{e}}_{N}\right]}^{T}\end{array}$$

where \({\dot{e}}_{i}\) is the derivative of the tracking error for agent \(i\).

Based on Lyapunov stability theory, it can be shown that \(V\), \(s\), and \({e}_{i}\) are uniformly ultimately bounded, implying the synchronization errors converge close to zero. Therefore, the proposed control scheme achieves cooperative tracking between the dynamic leader and the follower multi-agent system with guaranteed stability. The tracking accuracy can be improved by selecting appropriate control gains.

In the presence of malicious attacks, the controller performance can degrade significantly. Specifically, we consider false data injection attacks on the state information exchanged between agents, modeled as:

$$\begin{array}{c}{\stackrel{\sim}{x}}_{i}\left(t\right)={x}_{i}\left(t\right)+{a}_{i}\left(t\right)\end{array}$$

where \({\stackrel{\sim}{x}}_{i}\left(t\right)\) is the corrupted state and \({a}_{i}\left(t\right)\) is the attack signal injected by the adversary.

To improve the resilience of the controller against such attacks, we propose the following secure cooperative control strategy:

$$\begin{array}{c}{u}_{i}\left(t\right)=-\sum\limits_{j\in {\mathcal{N}}_{i}}{a}_{ij}\left({\stackrel{\sim}{x}}_{i}\left(t\right)-{\stackrel{\sim}{x}}_{j}\left(t\right)\right)+{v}_{i}\left(t\right)\end{array}$$

where \({v}_{i}\left(t\right)\) is designed as:

$$\begin{array}{c}{v}_{i}\left(t\right)={{sat}}_{\delta }\left({\widehat{a}}_{i}\left(t\right)\right)\end{array}$$

where \({\widehat{a}}_{i}\left(t\right)\) is the estimate of the attack signal obtained using an attack estimator, and \({sat}_{\delta }\) is a saturation function defined as:

$$\begin{array}{c}{sat}_{\delta }\left(z\right)=\left\{\begin{array}{cc}\delta & z>\delta \\ z& -\delta \le z\le \delta \\ -\delta & z<-\delta \end{array}\right.\end{array}$$
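The scalar saturation can be implemented directly:

```python
def sat(z, delta):
    """Saturation sat_delta(z): clamp z to the interval [-delta, delta]."""
    return max(-delta, min(delta, z))

print(sat(0.5, 0.1))   # 0.1  (clipped from above)
print(sat(-0.5, 0.1))  # -0.1 (clipped from below)
print(sat(0.05, 0.1))  # 0.05 (passed through unchanged)
```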

The key steps in the proposed secure cooperative control strategy are:

  • Attack signal estimation: We design an attack estimator to generate \({\widehat{a}}_{i}\left(t\right)\) based on analyzing the residuals between nominal and observed behavior:

$$\begin{array}{c}{\widehat{a}}_{i}\left(t\right)=\varvec{H}\left({\stackrel{\sim}{x}}_{i}\left(t\right)-{\widehat{x}}_{i}\left(t\right)\right)\end{array}$$

where \({\widehat{x}}_{i}\left(t\right)\) is the estimated nominal state propagated under the nominal control input \({\widehat{u}}_{i}\left(t\right)\), and \(\varvec{H}\) is the estimator gain matrix.

  • Saturation function: The saturation function \({sat}_{\delta }(\cdot )\) ensures that the correction signal \({v}_{i}\left(t\right)\) is bounded, so it does not destabilize the system. The threshold \(\delta\) is designed based on system requirements.

  • Distributed control law: The proposed control protocol subtracts the estimate of the attack signal from the control input to counteract its effect. The distributed nature ensures resilience against localized attacks on specific agents.

Under the proposed controller, the MAS achieves resilient consensus in the presence of false data injection attacks. Specifically, we have the following results:

Lemma 1 Under the attack estimation scheme, the estimation error is uniformly ultimately bounded.

Theorem 2 Under the secure control protocol, the multi-agent system achieves consensus in the presence of false data injection attacks, provided the attack signals satisfy:

$$\begin{array}{c}\left|{a}_{i}\left(t\right)\right|\le \stackrel{-}{a},\forall i\in V\end{array}$$

where \(\stackrel{-}{a}\) is a constant satisfying:

$$\begin{array}{c}\stackrel{-}{a}<\frac{{\lambda }_{2}\left(\mathcal{L}\right)}{{\lambda }_{n}\left(\mathcal{L}\right)}\delta \end{array}$$

where \({\lambda }_{2}\left(\mathcal{L}\right)\) and \({\lambda }_{n}\left(\mathcal{L}\right)\) are the second smallest (algebraic connectivity) and largest eigenvalues of the Laplacian matrix \(\mathcal{L}\), respectively.
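As a numerical illustration of the bound, consider a hypothetical undirected 4-cycle topology, whose Laplacian eigenvalues are known in closed form:

```python
import math

def cycle_laplacian_eigs(n):
    """Laplacian eigenvalues of the undirected n-cycle: 2 - 2*cos(2*pi*k/n)."""
    return sorted(2 - 2 * math.cos(2 * math.pi * k / n) for k in range(n))

def attack_bound(eigs, delta):
    """Tolerable attack magnitude from Theorem 2: (lambda_2 / lambda_n) * delta."""
    return eigs[1] / eigs[-1] * delta

eigs = cycle_laplacian_eigs(4)       # approximately [0, 2, 2, 4]
print(attack_bound(eigs, delta=0.1)) # (2/4) * 0.1, i.e. about 0.05
```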

Leveraging the properties and structure of the specific system dynamics may allow deriving less conservative bounds on the attack signals that still guarantee consensus. For instance, characteristics like monotonicity, Lipschitz continuity, or passivity could be leveraged to conduct a more refined stability and resilience analysis. This could expand the magnitude and types of attacks the system can safely tolerate while maintaining control performance. Being able to tolerate more significant attack signals expands the robustness and resilience of the method.

The comprehensive model framework diagram is shown in Fig. 2.

Fig. 2
figure 2

Comprehensive model framework diagram


This section presents detailed simulation experiments to evaluate the performance of the proposed improved GA for cooperative control of MAS.


Extensive simulations are performed to evaluate the effectiveness of the improved GA for secure cooperative control. A leader-following MAS with four follower agents and one leader agent is considered. Figure 3 provides the detailed agent connectivity graph for the MAS simulation. The proposed secure cooperative control strategy utilizes this fixed interaction topology between the agents.

Fig. 3
figure 3

Multi-agent communication topology

The improved GA is compared to five baselines: classical GA (standard operators), PSO-GA [21], ABACO [22], a two-stage multi-population genetic algorithm with heuristics for workflow scheduling (tmGA) [34], and a new hybrid maximum power point tracking technique based on the GA and fractional open circuit voltage (GA-FOCV) [35].

For comparison, a nominal consensus protocol and a detection-based controller are also evaluated, defined as follows:

$$\begin{array}{c}{u}_{i}\left(t\right)=-\sum\limits_{j\in {\mathcal{N}}_{i}}\left({v}_{i}\left(t\right)-{v}_{j}\left(t\right)\right)-c\sum _{j\in {\mathcal{N}}_{i}}\left({\stackrel{\sim}{p}}_{i}\left(t\right)-{\stackrel{\sim}{p}}_{j}\left(t\right)\right)\end{array}$$
$$\begin{array}{c}\widehat a_j^i\left(t\right)=\varvec H\left(p_j\left(t\right)-{\widehat p}_j\left(t\right)\right),\quad v_i\left(t\right)={sat}_\delta\left(\widehat a_j^i\left(t\right)\right)\end{array}$$

The consensus performance is evaluated using the following metrics:

  • Steady-state error: \(\stackrel{-}{\epsilon}=\underset{t\to {\infty }}{\text{lim}}\frac{1}{N}\sum _{i=1}^{N}||{p}_{i}\left(t\right)-\stackrel{-}{p}\left(t\right)||\) where \(\stackrel{-}{p}\left(t\right)=\frac{1}{N}\sum _{i=1}^{N}{p}_{i}\left(t\right)\).

  • Settling time: the time for the average error \(\epsilon\left(t\right)\) to fall below \(0.1\).

  • Control effort: \(\frac{1}{T}{\int }_{0}^{T}\sum _{i=1}^{N}||{u}_{i}\left(t\right)||\,\text{d}t\)
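The three metrics above can be computed from simulation traces as in the following sketch; scalar agent states, a simple rectangle rule, and the trace layout are all assumptions made for illustration:

```python
def consensus_metrics(positions, inputs, dt, tol=0.1):
    """Compute steady-state error, settling time, and control effort.

    positions[t][i]: scalar state of agent i at step t;
    inputs[t][i]: control input of agent i at step t; dt: step size.
    """
    N = len(positions[0])

    def avg_err(p):
        mean = sum(p) / N
        return sum(abs(x - mean) for x in p) / N

    errs = [avg_err(p) for p in positions]
    steady_state = errs[-1]  # final-time proxy for the t -> infinity limit
    settling = next((t * dt for t, e in enumerate(errs) if e < tol), None)
    # (1/T) * integral of sum_i ||u_i|| dt via a rectangle rule
    effort = sum(sum(abs(u) for u in ut) for ut in inputs) / len(inputs)
    return steady_state, settling, effort
```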

The results are averaged over 30 trials with randomly generated topologies and attack sequences. The controller parameters are tuned to \(c=1\), \(\delta =0.1\), \(H=5{I}_{2}\).

Results and analysis

The cooperative tracking results are shown in Fig. 4, where the follower agent outputs asymptotically converge to the leader’s output under the proposed control strategy. As shown in Table 1, the improved GA outperforms classical GA, PSO-GA, ABACO, tmGA, and GA-FOCV on the key metrics, achieving a tracking error of 0.024, a convergence time of 35 s, and a success rate of 98%, demonstrating superior performance and consistency over multiple runs. The lowest tracking error stems from the complete solution space encoding, while the adaptive heuristic decoding quickly focuses the search on high-performance regions of the solution space, yielding faster convergence to optimal or near-optimal solutions than the baseline algorithms.

Fig. 4
figure 4

Secure cooperative tracking results

Table 1 summarizes the performance comparison of different algorithms based on the key metrics.

Table 1 Algorithm comparison

The improved GA achieves the lowest tracking error due to its complete solution space encoding and adaptive heuristic decoding. The individual enhancement also exploits promising solutions. Figure 5 shows the error convergence over generations. Faster convergence is attained compared to classical GA, PSO-GA, ABACO, tmGA, and GA-FOCV, as the adaptive decoding quickly focuses the search in high-performance areas. The 98% success rate demonstrates consistent performance over multiple runs.

Fig. 5
figure 5

Error convergence over generations

In secure cooperative control for high-order nonlinear MAS with unknown dynamics, the lowest tracking error indicates a precise alignment of agents’ actions or states to the desired reference or objective, leading to better synchronization, collaboration, or consensus among the agents, which is crucial for secure cooperative control. In many real-world systems, the dynamics might not be entirely known. The adaptive heuristic decoding feature of the improved GA can be invaluable here, ensuring that even when faced with uncertain or changing dynamics, the algorithm can adjust and find optimal or near-optimal solutions. Faster convergence not only saves computational resources but also time. In real-time applications or environments where timely decisions are crucial (like autonomous vehicles or robotic swarms), faster convergence can be critical for safety and success. A 98% success rate over multiple runs shows that the method is reliable. This high level of consistency can be particularly beneficial in ensuring the security of a system. By outperforming classical GA, PSO-GA, ABACO, tmGA, and GA-FOCV, it is evident that this improved GA can offer advantages not present in other algorithms. This superiority can be crucial when designing and implementing secure cooperative controls. By focusing on high-performance areas and exploiting promising solutions, the algorithm can ensure that it always works towards the best possible outcomes. This exploitation can be particularly beneficial in environments with potential threats or challenges, ensuring that the MAS always operates in its most secure and efficient state. Therefore, the improved GA’s characteristics offer significant advantages in secure cooperative control for high-order nonlinear MAS. It promises to deliver efficient, consistent, and adaptive solutions, which are critical for the security and performance of such systems.

In summary, the simulation studies demonstrate superior cooperative tracking performance by the improved GA approach compared to conventional algorithms. The key innovations of efficient encoding, adaptive decoding, and individual enhancement are shown to be highly effective in this complex coordination control problem.

The performance of the three controllers is shown in Table 2. Under attacks, the nominal controller has poor consensus performance with steady-state errors around 0.4 m. The detection-based method eventually achieves consensus by isolating attackers. However, the settling time is extended. The proposed secure controller significantly improves the consensus performance and settling time while keeping the control effort reasonable. The results demonstrate that by effectively estimating and compensating the attack signals in real time, the proposed secure control strategy can counter the impact of false data injection attacks and provide resilient consensus performance for the multi-agent system. Unlike anomaly detection and isolation methods, the proposed method responds faster to attacks without sacrificing agents.

Table 2 Performance results

Therefore, the proposed secure control strategy offers a multi-faceted solution that not only addresses the issue of false data injection attacks but also enhances the overall operation, efficiency, and resilience of high-order nonlinear MAS with unknown dynamics. This would have widespread implications in various applications where MAS are employed, from autonomous vehicles to distributed sensor networks, ensuring safer and more reliable operations.


This paper addresses the challenges of controlling high-order nonlinear MAS with unknown dynamics. The research focuses on enhancing secure cooperative control performance within MAS, considering such systems’ distributed and decentralized nature. The proposed strategy employs an improved GA approach incorporating several innovative features. The introduction of an efficient encoding method, coupled with adaptive decoding schemes and heuristic initialization, significantly contributes to the algorithm’s efficacy. These advancements enable more effective exploration of the solution space while expediting the convergence process. Furthermore, the strategy incorporates individual enhancements through load balancing, communication avoidance, and iterative refinement. These enhancements not only intensify the local search for optimal solutions but also contribute to the robustness and adaptability of the proposed approach. Simulation results showcased the proposed strategy’s superior performance compared to conventional algorithms, particularly when dealing with complex control problems featuring uncertainty. The ability to adapt to find optimal solutions and exploit promising areas within the solution space holds great promise for real-world applications. This research bears relevance in various domains, including robotics, power systems, and autonomous vehicles, where securely controlling MAS is paramount. In particular, in the emerging era of ChatGPT, the proposed algorithm can provide an efficient reference for secure control in the field of cloud-edge computing.

However, while the strategy demonstrates superior performance in simulations, its effectiveness in real-world scenarios still needs to be validated through practical implementations. Additionally, the computational complexity of GAs may limit real-time applicability as they scale to larger MAS; for example, with 100+ agents, the computational cost becomes a significant challenge. Techniques like parallelization and algorithm hybridization could address this issue, and analyzing the degradation in performance and convergence time with larger MAS would provide valuable insights into scalability. The strategy’s performance could also be influenced by the choice of parameters and encoding methods, requiring further optimization. Future research could focus on addressing these limitations. Exploring techniques to reduce the computational burden of the GA through parallelization or hybridization with other optimization methods could enhance its practical viability. Moreover, the current results demonstrate performance on defined agent models without additional perturbations or externalities. To strengthen the viability claims for real-world deployment, it would be crucial to test the method’s robustness when the dynamics are subject to sustained disturbances, unmodeled effects, and parametric variations. Validating performance under such perturbed conditions beyond the idealized models would provide greater confidence in the approach’s applicability and robustness for practical multi-agent control problems. Rigorously analyzing robustness against uncertainties that real systems would encounter is imperative to transitioning the method from theory to practice.

Availability of data and materials

All data and materials are available upon reasonable request.


  1. Hu X, Zhang Z, Li C et al (2021) Leader-following consensus of multi-agent systems via a hybrid protocol with saturation effects. Int J Control Autom Syst 19:124–136


  2. Sun H, Liu Y, Li F (2021) Distributed optimal consensus of second-order multi-agent systems. Sci China Inf Sci 64:209201


  3. Khodabandeh S, Kharrati H, Hashemzadeh F (2021) Control for leader–follower consensus of multi-agent systems with actuator faults using decentralized robust fault-tolerant control. Iran J Sci Technol Trans Electr Eng 45:529–541


  4. Jing X, Yao X, Liu M et al (2022) Multi-agent reinforcement learning based on graph convolutional network for flexible job shop scheduling. J Intell Manuf.


  5. Barbierato L, Mazzarino R, Montarolo P (2022) A comparison study of co-simulation frameworks for multi-energy systems: the scalability problem. Energy Inf 5(Suppl 4):53


  6. Hsieh FS (2019) Dynamic configuration and collaborative scheduling in supply chains based on scalable multi-agent architecture. J Ind Eng Int 15:249–269


  7. Kasprzok A, Ayalew B, Lau C (2018) An ant-inspired model for multi-agent interaction networks without stigmergy. Swarm Intell 12:53–69


  8. Abdulghafor R, Abdullah SS, Turaev S et al (2020) Linear and nonlinear stochastic distribution for consensus problem in multi-agent systems. Neural Comput Applic 32:261–277


  9. Qi Y, Du C, Zhang X et al (2023) Dynamic event-triggered bipartite consensus for uncertain high-order nonlinear multi-agent systems. Control Theory Technol 21:222–232

    Article  MathSciNet  Google Scholar 

  10. Tang Y, Qin H (2022) Decision-making and control co-design for multi-agent systems: a hierarchical design methodology. Control Theory Technol 20:439–441

    Article  Google Scholar 

  11. Ayshwarya Lakshmi S, Sahaaya Arul Mary SA (2019) Multi-agent learning technique inspired from software engineering model for permutation coded GA. Cluster Comput 22(Suppl 6):13653–13667

    Article  Google Scholar 

  12. Gul F, Mir I, Mir S et al (2023) Multi-agent robotics system with whale optimizer as a multi-objective problem. J Ambient Intell Human Comput 14:9637–9649

    Article  Google Scholar 

  13. Drew DS (2021) Multi-agent systems for search and rescue applications. Curr Robot Rep 2:189–200

    Article  Google Scholar 

  14. Gao F, Zheng S (2023) Cooperative learning control of unknown nonlinear multi-agent systems with time-varying output constraints using neural networks and barrier lyapunov functions via backstepping design. Int J Control Autom Syst 21:2378–2386

    Article  Google Scholar 

  15. Shahnazi R (2020) cooperative neuro adaptive control of leader following uncertain multi-agent systems with unknown hysteresis and dead-zone. J Syst Sci Complex 33:312–332

    Article  MathSciNet  Google Scholar 

  16. Hou Z, Xu J, Zhang G et al (2020) Interaction matrix based analysis and asymptotic cooperative control of multi-agent systems. Int J Control Autom Syst 18:1103–1115

    Article  Google Scholar 

  17. Lin ZY, Wang LL, Han ZM et al (2014) Distributed formation control of multi-agent systems using complex Laplacian. IEEE Trans Autom Control 59:1765–1777

    Article  MathSciNet  Google Scholar 

  18. Yang RH, Zhang H, Feng G et al (2019) Robust cooperative output regulation of multi-agent systems via adaptive event-triggered control. Automatica 102:129–136

    Article  MathSciNet  Google Scholar 

  19. Oroojlooy A, Hajinezhad D (2023) A review of cooperative multi-agent deep reinforcement learning. Appl Intell 53:13677–13722

    Article  Google Scholar 

  20. Nagarkar MP, Bhalerao YJ, Vikhe Patil GJ et al (2018) GA-based multi-objective optimization of active nonlinear quarter car suspension system—PID and fuzzy logic control. Int J Mech Mater Eng 13:10

    Article  Google Scholar 

  21. Garg H (2016) A hybrid PSO-GA algorithm for constrained optimization problems. Appl Math Comput 274:292–305

    MathSciNet  Google Scholar 

  22. Kashef S, Nezamabadi-pour H (2015) An advanced ACO algorithm for feature subset selection. Neurocomputing 147:271–279

    Article  Google Scholar 

  23. Liu LF, Yang XF (2021) Multi-objective aggregate production planning for multiple products: a local search-based genetic algorithm optimization approach. Int J Comput Intell Syst 14:156

    Article  Google Scholar 

  24. Chowdhury B, Garai G (2020) A bi-objective function optimization approach for multiple sequence alignment using genetic algorithm. Soft Comput 24:15871–15888

    Article  Google Scholar 

  25. Ma H, Shan Y, Wang J et al (2022) Multi-system genetic algorithm for complex system optimization. Soft Comput 26:10187–10205

    Article  Google Scholar 

  26. Liu J, Wang X (2023) Secure consensus control for multi-agent systems subject to consecutive asynchronous dos attacks. Int J Control Autom Syst 21:61–70

    Article  Google Scholar 

  27. Wan Q, Chen WH, Lu X (2023) Secure consensus tracking of multi-agent systems with network-induced delays under deception attacks via guaranteed performance impulsive control. Nonlinear Dyn 111:12213–12232

    Article  Google Scholar 

  28. Liu YW, Wu HP, Razaee K et al (2023) Interaction-enhanced and time-aware graph convolutional network for successive point-of-interest recommendation in traveling enterprises. IEEE Trans Industr Inf 19:635–643

    Article  Google Scholar 

  29. Qi LY, Liu YW, Zhang YL et al (2022) Privacy-aware point-of-interest category recommendation in internet of things. IEEE Internet Things J 9:21398–21408

    Article  Google Scholar 

  30. Liu YW, Zhou XK, Kou HZ et al (2023) Privacy-preserving point-of-interest recommendation based on simplified graph Convolutional Network for Geological traveling. ACM Trans Intell Syst Technol.

  31. Yang Y, Li Y, Yue D (2020) Event-trigger-based consensus secure control of linear multi-agent systems under dos attacks over multiple transmission channels. Sci China Inf Sci 63:150208

    Article  MathSciNet  Google Scholar 

  32. Xie Y, Sheng YH, Qiu MQ et al (2022) An adaptive decoding biased random key genetic algorithm for cloud workflow scheduling. Eng Appl Artif Intell 112:104879

    Article  Google Scholar 

  33. Jeong SL, Bae JH, Sunwoo MH (2021) Fast multibit decision polar decoder for successive-cancellation list decoding. J Signal Process Syst Signal Image Video Technol 93:127–136

    Article  Google Scholar 

  34. Xie Y, Gui FX, Wang WJ et al (2023) A two-stage multi-population genetic algorithm with heuristics for workflow scheduling in heterogeneous distributed computing environments. IEEE Trans Cloud Comput 11:1446–1460

    Article  Google Scholar 

  35. Hassan A, Bass O, Masoum MAS (2023) An improved genetic algorithm based fractional open circuit voltage MPPT for solar PV systems. Energy Rep 9:1535–1548

    Article  Google Scholar 

Download references


This paper has not received any funding support.

Author information

Authors and Affiliations



X.W. contributed to the writing and conception; D.Y. contributed to the method; D.R.J.D. and S.C. contributed to the data analysis; M.O.A. and F.E.A. contributed to the software; J.L. contributed to polishing the writing.

Corresponding author

Correspondence to Dongsheng Yang.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



Proof of Theorem 1.

Consider the Lyapunov function candidate:

$$V=\frac{1}{2}\sum _{i=1}^{N}{\left|{\stackrel{\sim}{\theta }}_{i}\right|}^{2}+\frac{1}{2}{s}^{T}Ls$$

where \({\stackrel{\sim}{\theta }}_{i}={\widehat{\theta }}_{i}-{\theta }_{i}^{\ast }\) is the NN weight estimation error, \(s={[{e}_{1},\dots ,{e}_{N}]}^{T}\) is the stacked synchronization error, and \(L\) is the graph Laplacian matrix. This candidate is used to prove stability: it jointly accounts for the weight estimation errors and the synchronization errors that shape the collective dynamics.

Differentiating \(V\) along the system trajectories gives:

$$\dot{V}\le -{k}_{3}|s{|}^{2}+{k}_{4}|s|+\epsilon$$

for some \({k}_{3}\), \({k}_{4}>0\) and \(\epsilon>0\) under standard NN approximation assumptions.
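The ultimate bound can be made explicit by completing the square: the right-hand side is negative, so \(\dot{V}<0\), whenever

$$\left|s\right| > \frac{{k}_{4}+\sqrt{{k}_{4}^{2}+4{k}_{3}\epsilon }}{2{k}_{3}}$$

so the trajectories of \(s\) converge to a residual set whose radius shrinks as the NN approximation error \(\epsilon\) decreases.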

By Lyapunov stability theory, this demonstrates that \(V\), \(s\), \({e}_{i}\) are uniformly ultimately bounded, proving the result.

Proof of Lemma 1.

Consider the error dynamics:


where \({\stackrel{\sim}{u}}_{i}\left(t\right)={u}_{i}\left(t\right)-{\widehat{u}}_{i}\left(t\right)\) and \({w}_{i}\left(t\right)\) accounts for disturbances and uncertainties. This is a stable linear system driven by bounded inputs \({\stackrel{\sim}{u}}_{i}\left(t\right)\) and \({w}_{i}\left(t\right)\). By standard results in robust control theory, we can conclude that \({\stackrel{\sim}{a}}_{i}\left(t\right)\) is uniformly ultimately bounded.
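The standard robust-control result invoked here can be stated for a generic stable error system: if \(\dot{e}\left(t\right)=Fe\left(t\right)+g\left(t\right)\) with \(F\) Hurwitz and the lumped input bounded as \(\left|g\left(t\right)\right|\le \bar{g}\), then by the variation-of-constants formula

$$\left|e\left(t\right)\right| \le \kappa {e}^{-\lambda t}\left|e\left(0\right)\right| + \frac{\kappa \bar{g}}{\lambda }\left(1-{e}^{-\lambda t}\right)$$

for constants \(\kappa ,\lambda >0\) satisfying \(\left|{e}^{Ft}\right|\le \kappa {e}^{-\lambda t}\), which yields the uniform ultimate bound \(\kappa \bar{g}/\lambda\).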

Proof of Theorem 2.

Consider the MAS under attacks:

$$\dot{x}\left(t\right)=({I}_{N}\otimes A-\mathcal{L}\otimes BK)x\left(t\right)+({I}_{N}\otimes B)\Gamma a\left(t\right)$$

where \(x\left(t\right)={\left[{x}_{1}{\left(t\right)}^{T},\dots ,{x}_{N}{\left(t\right)}^{T}\right]}^{T}\), \(a\left(t\right)={\left[{a}_{1}{\left(t\right)}^{T},\dots ,{a}_{N}{\left(t\right)}^{T}\right]}^{T}\), \(\Gamma =\text{diag}\left({\gamma }_{1},\dots ,{\gamma }_{N}\right)\), \(B\) is the input matrix that maps the control input to the agent states, \(K\) is the distributed control gain matrix, and \({\gamma }_{i}=1\) if agent \(i\) is under attack and 0 otherwise.

Under the secure control protocol, the closed loop dynamics become:

$$\dot{x}\left(t\right)=({I}_{N}\otimes A-\mathcal{L}\otimes BK)x\left(t\right)+({I}_{N}\otimes B)\left(\Gamma a\left(t\right)-v\left(t\right)\right)$$

where \(v\left(t\right)={\left[{v}_{1}{\left(t\right)}^{T},\dots ,{v}_{N}{\left(t\right)}^{T}\right]}^{T}\) and \({v}_{i}\left(t\right)={\text{sat}}_{\delta }\left({\widehat{a}}_{i}\left(t\right)\right)\).

Since \({\widehat{a}}_{i}\left(t\right)\) is bounded, so is \({v}_{i}\left(t\right)\); moreover, \(\left|{v}_{i}\left(t\right)\right|\le \delta\) for all \(i\). Let \(\zeta \left(t\right)=\Gamma a\left(t\right)-v\left(t\right)\). Then,

$$\left|\zeta \left(t\right)\right|\le \left|\Gamma \right|\bar{a}+\left|v\left(t\right)\right|\le \bar{a}+\delta$$

Therefore, \(\zeta \left(t\right)\) is bounded. Furthermore, since \(A-BK\) is Hurwitz by design, the perturbed system is input-to-state stable. By ISS theory, the state \(x\left(t\right)\) is ultimately bounded.

Moreover, since the graph is connected, the perturbed system achieves consensus, i.e., \({x}_{i}\left(t\right)-{x}_{j}\left(t\right)\to 0\) as \(t\to {\infty }\), provided the attacks are bounded as given. This proves the theorem.
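The closed-loop behavior described by the theorem can be illustrated numerically. The following is a hedged sketch with assumed double-integrator agents, a path graph, a hand-picked gain \(K\), a bounded attack on one agent, and an idealized attack estimate \({\widehat{a}}_{i}={a}_{i}\); none of these specific choices come from the paper.

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # assumed double-integrator agents
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 2.0]])               # makes A - lam*B*K Hurwitz for lam = 1, 3
L = np.array([[ 1.0, -1.0,  0.0],        # Laplacian of a path graph on 3 agents
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
N, n, delta = 3, 2, 1.0

def sat(u, d):
    # Component-wise saturation sat_delta(.) used by the compensation v_i(t).
    return np.clip(u, -d, d)

I_A = np.kron(np.eye(N), A)
L_BK = np.kron(L, B @ K)
I_B = np.kron(np.eye(N), B)

rng = np.random.default_rng(0)
x = rng.standard_normal(N * n)           # stacked state [x_1; x_2; x_3]
dt = 0.01
for k in range(2000):                    # forward-Euler integration to t = 20 s
    t = k * dt
    a = np.array([0.8 * np.sin(t), 0.0, 0.0])  # bounded attack on agent 1 only
    zeta = a - sat(a, delta)             # idealized estimate a_hat = a, so zeta = 0
    x = x + dt * ((I_A - L_BK) @ x + I_B @ zeta)

states = x.reshape(N, n)
spread = float(np.max(np.abs(states - states.mean(axis=0))))
```

Because the attack magnitude stays below the saturation level \(\delta\), the compensation cancels the injection exactly and the inter-agent disagreement decays; with imperfect estimates the residual \(\zeta(t)\) is merely bounded, matching the ultimate-boundedness claim.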

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit

About this article

Cite this article

Wang, X., Yang, D., Dolly, D.R.J. et al. Genetic algorithm-based secure cooperative control for high-order nonlinear multi-agent systems with unknown dynamics. J Cloud Comp 13, 1 (2024).
