
Table 1 Summary of the literature

From: AI-empowered mobile edge computing: inducing balanced federated learning strategy over edge for balanced data and optimized computation cost

| Author | Research Type | Problem Area | Contribution | Related Studies |
|---|---|---|---|---|
| Zhao et al. | Experiment | Statistical heterogeneity | A method that improves training on non-IID data by creating a small subset of data that is shared globally across all edge devices [33] | [3] |
| McMahan et al. | Experiment | Communication cost | A practical FL method based on iterative model averaging is proposed and validated through an extensive empirical evaluation [3] (the averaging step is sketched after this table) | [27, 33, 34] |
| C. T. Dinh et al. | Experiment | Convergence analysis of FL algorithms and resource allocation | A resource-allocation optimization problem in wireless networks is addressed by proposing an FL algorithm that captures the trade-off between FL convergence time and the energy consumption of UEs with heterogeneous computation and power resources [34] | [27, 33] |
| W. Luping et al. | Experiment | Communication cost | They proposed Communication-Mitigated Federated Learning (CMFL), which provides clients with feedback on the overall trend of model updates so that irrelevant updates need not be uploaded [35] (see the second sketch below) | [21, 24, 36] |
| M. Duan et al. | Experiment | Statistical challenges in FL | They provided evidence that unevenly distributed training data results in inaccurate FL models [27] | [34, 37] |
| S. U. Stich et al. | Experiment | Communication cost | They propose structured updates, in which an update is learned directly from a restricted space parametrized with fewer variables, reducing the communication cost by up to two orders of magnitude [38] | [35, 36, 39] |
| D. C. Verma et al. | Numerical Experiment | Communication cost | An evaluation that accumulates compression errors in memory shows that stochastic gradient descent (SGD) with k-sparsification or compression (such as top-k or random-k), when equipped with error compensation, converges at the same rate as vanilla SGD (see the final sketch below) | [35, 36, 38, 39] |
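To make the iterative model averaging of McMahan et al. [3] concrete, here is a minimal sketch of one FedAvg-style communication round. The linear model, synthetic client data, and hyperparameters are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of FedAvg-style iterative model averaging [3].
# The linear model and synthetic clients are assumptions for illustration.
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """Run a few epochs of plain per-sample SGD on one client's data."""
    w = w.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            grad = (w @ xi - yi) * xi        # squared-loss gradient
            w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """One round: broadcast, train locally, average weighted by data size."""
    n_total = sum(len(y) for _, y in clients)
    w_new = np.zeros_like(w_global)
    for X, y in clients:
        w_k = local_sgd(w_global, X, y)
        w_new += (len(y) / n_total) * w_k    # weighted model averaging
    return w_new

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):                           # four synthetic clients
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):                          # 20 communication rounds
    w = fedavg_round(w, clients)
print(w)                                     # approaches w_true
```

The averaging weights are proportional to each client's data size, so clients with more samples pull the global model harder, which is the trade-off FedAvg makes to keep raw data on the devices.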
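One way CMFL's "feedback on the overall trend of model updates" [35] can be realized is as a sign-agreement test: a client compares its local update against the previous global update and withholds uploads whose direction largely disagrees. The threshold value and flat parameter vectors below are assumptions for illustration, not the paper's exact configuration.

```python
# Hedged sketch of a CMFL-style relevance check [35]: skip uploading
# a local update whose coordinate-wise direction disagrees too much
# with the global update trend. threshold=0.8 is an assumed value.
import numpy as np

def cmfl_is_relevant(local_update, global_update, threshold=0.8):
    """Return True if enough coordinates move in the same direction."""
    agree = np.sign(local_update) == np.sign(global_update)
    return agree.mean() >= threshold

rng = np.random.default_rng(1)
g = rng.normal(size=100)                     # last round's global update
aligned = g + 0.1 * rng.normal(size=100)     # mostly same directions
opposed = -g                                 # fully opposite directions
print(cmfl_is_relevant(aligned, g))          # True: upload this update
print(cmfl_is_relevant(opposed, g))          # False: withhold, save bandwidth
```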
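Finally, the k-sparsification with error compensation evaluated in the last row can be sketched as top-k compressed gradient descent with a memory vector: coordinates dropped by the compressor are accumulated and re-added before the next compression, which is what preserves the vanilla-SGD convergence rate. The quadratic objective, step size, and sparsity level are illustrative assumptions.

```python
# Minimal sketch of top-k sparsified gradient descent with error
# compensation ("memory"): dropped coordinates are accumulated and
# re-added before the next compression step.
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(2)
A = rng.normal(size=(200, 50))               # synthetic least-squares problem
b = rng.normal(size=200)
w = np.zeros(50)
memory = np.zeros(50)                        # accumulated compression error
lr, k = 0.01, 5                              # assumed step size and sparsity

for _ in range(300):
    grad = A.T @ (A @ w - b) / len(b)        # full least-squares gradient
    corrected = lr * grad + memory           # add back past residuals
    update = top_k(corrected, k)             # transmit only k coordinates
    memory = corrected - update              # store what was dropped
    w -= update
print(np.linalg.norm(A @ w - b))             # residual shrinks over iterations
```

Only k of 50 coordinates are communicated per step, yet nothing is lost permanently: every dropped component eventually re-enters through the memory term.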