
Advances, Systems and Applications

Predicting the individual effects of team competition on college students’ academic performance in mobile edge computing

Abstract

Mobile edge computing (MEC) has revolutionized the way of teaching in universities. It enables more interactive and immersive experiences in the classroom, enhancing student engagement and learning outcomes. As an incentive mechanism based on social identity and contest theories, team competition has been adopted in college classrooms and has proven effective in improving students’ participation and motivation. However, despite its potential benefits, many issues remain unresolved: What types of students and teams benefit more from team competition? In what teaching contexts is team competition more effective? Which competition design methods better increase students’ academic performance? MEC provides the ability to collect data from the teaching process and to analyze the causal effect of team competition on students’ academic performance. In this paper, the authors first design a randomized field experiment among freshmen enrolled in college English courses. Then, the authors analyze the observational data collected from the online teaching platform and predict the individual treatment effects of team competition on academic performance in college English through linear and nonlinear machine learning models. Finally, by carefully investigating features of teams and individual students, the prediction error is reduced by up to 30%. In addition, by interpreting the predictive models, valuable insights regarding the practice of team competition in college classrooms are discovered.

Introduction

In recent years, with the proliferation of mobile devices, MEC has been widely adopted in various industries [1]. In the field of education, MEC has revolutionized the way of teaching in universities. It enables more interactive and immersive experiences in the classroom, enhancing student engagement and learning outcomes. By bringing computational capabilities closer to end-users, MEC facilitates the seamless integration of digital resources within the educational domain [2]. Within this context, MEC terminals act as intelligent hubs which capture valuable teaching data in real time. The team competition strategy [3, 4], serving as a motivational mechanism, has found extensive adoption across educational tiers. As a supplement to traditional classroom instruction, team competition in college English teaching is often used to enhance student engagement, teamwork, and language proficiency. In this teaching method, students are divided into teams and engage in various language-based tasks and challenges, including debates, presentations, role-plays, quizzes, and other interactive activities that require the application of English language skills. This strategy is not limited to classroom settings but has been further developed, particularly with the support of MEC. It fosters positive competition among students, enhancing learning efficiency. Additionally, it provides teachers with greater data support, helping them better understand students’ learning needs. Hence, it plays a significant role in modern education.

Despite the potential benefits of team competition, plenty of unknowns remain. The huge heterogeneity among schools, majors, classes and students may lead to significant variations in students’ motivation and academic performance. What types of students and teams (e.g., by gender, major, grade) benefit more from team competition? Which teaching design methods (e.g., team formation) better increase students’ academic performance? In what teaching contexts is team competition more effective? Is there a causal link between in-class activities (e.g., discussion, quiz, homework) and academic performance? Understanding the causal effects of these factors on students’ academic performance can help teachers optimize the practice of team competition in college classrooms for different types of students, thereby improving students’ motivation and academic performance.

However, it is challenging to answer these questions. First, there is little real-world data covering the whole team competition learning process; controlled field experiments are necessary to collect enough data for this research. Second, measuring the causal effect between the team competition mechanism and students’ academic performance is intrinsically difficult [5]. It requires a proper definition of individual performance measures and prediction targets [6]. Third, the variable space describing the characteristics of context, students, teams and teaching activities is high-dimensional [7,8,9], with many complex relationships among the variables. Domain knowledge and data analytics are both needed to identify the potential predictive factors [10,11,12].

In this paper, a novel approach is proposed to address these challenges, as shown in Fig. 1. A randomized field experiment among freshmen enrolled in a college English course is first conducted; then the individual treatment effect of team competition on students’ academic performance is predicted. Moreover, by interpreting the predictive models, the authors investigate the most significant factors in the practice of team competition in college classrooms. Since students’ performance in teaching activities (e.g., answer races, discussions, quizzes and homework) is distributed over long periods of a semester, the data is consolidated into an online teaching platform, which serves as a centralized cloud platform. This enables more comprehensive analysis and mining of the data.

Fig. 1

An overview of proposed approach

Concretely, the contributions include:

  1. The authors employ MEC terminals to capture valuable teaching data in real time, followed by the design and execution of a controlled field experiment aimed at collecting comprehensive data throughout the entire process of team competition learning.

  2. Leveraging the capabilities of the MEC infrastructure, the problem is framed as a prediction task, and machine learning models are employed to forecast the individual treatment effect of team competitions on students.

  3. The prediction models are interpreted to identify the most important factors in team competition learning.

Related work

MEC in education

MEC enables real-time data analysis and processing [13], making it possible to gather and analyze learner data promptly. This information can be utilized to personalize the learning experience, adapting content and recommendations based on individual needs and preferences [14,15,16]. Edge computing can also benefit real-time collaboration tools [17], video streaming, and online interactive learning platforms, providing a seamless and immersive learning experience. MEC has the potential to revolutionize education by improving access, personalizing learning experiences, and enabling innovative technologies [18]. By harnessing the power of edge computing, educational institutions can enhance their digital infrastructure and provide more efficient and effective learning environments.

Team competition

As an incentive mechanism based on social identity and contest theories, team competition has been increasingly applied in many fields. Studies have shown that team competition can not only effectively improve key metrics such as participation [19], but also help participants gain a sense of achievement [20]. Rokicki et al. [21] investigate how to leverage team competition to improve cost efficiency in crowdsourcing through a large-scale experimental evaluation. Ai et al. [22] conduct an inter-team contest field experiment on a ride-sharing platform, and find that drivers who participated in the team competition worked longer hours and earned higher revenue than drivers in the control condition. Ye et al. [23] study how different factors of team competition affect the outcomes of individual drivers in ridesharing based on the results of online field experiments.

With regard to education, the imperative to maintain competitiveness and to transform database management practices has necessitated alignment with prevailing, cutting-edge technological trends within the industry [24]. DiNapoli [20] describes the implementation of a pedagogy based on team competition in mathematics classrooms, showing that team competition can be a useful motivator. Through a randomized, controlled field experiment, Scales et al. [25] conclude that team-based game mechanics can increase resident participation in an online learning platform delivering quality improvement content. To enhance the effectiveness and quality of experimental teaching, a comprehensive experimental teaching course system combining artificial intelligence and edge computing technologies has been built [26]. By deploying edge computing nodes in laboratories or educational settings, experimental data can be transmitted in real time to edge devices for processing and analysis. For example, students’ physical health data is integrated into a central cloud platform for more comprehensive data analysis and mining [27]. However, to the best of our knowledge, few studies have analyzed the importance of different characteristics in team competition, particularly in college English teaching.

Individual treatment effect prediction

Predicting the individual treatment effects of actions plays a critical role in many domains [28,29,30,31]. The Synthetic Minority Oversampling Technique (SMOTE) is used to preprocess missing values in the input dataset to enhance prediction accuracy [31]. A Metaheuristic Optimization-based Feature Subset Selection with an Optimal Deep Learning model (MOFSS-ODL) for predicting students’ performance has also been presented [32]. Many researchers propose a variety of algorithms for predicting the individual treatment effect (ITE) based on different techniques, e.g., deep neural networks [33] and random forests [34]. Others study the application of ITE prediction in different fields, e.g., medicine [34, 35] and online platforms [36]. This work is similar to recent work that predicts ITE in a ride-sharing economy [23]. However, this work focuses on the ITE prediction of students’ academic performance. Moreover, different machine learning models are adopted to better capture the characteristics of college English teaching.

Experiment design

Experiment setup

To test the impact of team competition on the academic performance of college students, a randomized field experiment among freshmen enrolled in a college English course is developed. The college English course is chosen for the classroom experiment for two reasons. First, as a commonly required course, college English has a large enrollment in Chinese universities, and its assessment is highly standardized. All students use identical course materials, with instructional activities and examinations administered through a unified online platform hosted on the MEC terminal. This course structure therefore allows the control and treatment groups to be split uniformly among classrooms. Second, the direct link between students’ academic performance and scholarships, graduation and post-graduation employment motivates students to do well in the course.

The sample is made up of freshmen enrolled in the college English course taught by the author during the fall semester of the 2021–2022 academic year. Students with incomplete information are excluded, resulting in a final sample of four classes and 180 students. Table 1 shows the descriptive statistics for students in different groups. The first row shows the number of observations in each group. The second row shows the ratio of female students in each group. The ratio of students from Shandong province, where the university is located, is shown in the third row.

Table 1 Descriptive statistics of the sample

Team formation

Classrooms were randomized into either a control group or one of three treatment groups, as shown in Fig. 2. In the first treatment group, students are permitted to form teams freely. In the second treatment group, students are assigned to teams randomly; this group is intended to replicate the most common scenario of team formation in teaching practice. In the third treatment group, students are split into teams according to their academic performance, i.e., the English score in the National College Entrance Examination (NCEE). The control group uses traditional teaching methods, meaning that no team competition mechanism is introduced into the teaching process. All teams are of similar size, with 6 to 7 regular members each.
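The grade-balanced assignment can be sketched as a snake draft over students ranked by NCEE English score. This is only one plausible realization of a score-balanced split (the text does not specify the exact algorithm), and the function name and default team size are illustrative:

```python
def grade_balanced_teams(ncee_scores, team_size=6):
    """Snake-draft students into teams balanced by prior score.

    `ncee_scores` maps student id -> NCEE English score. The snake
    draft is an assumption, not a documented detail of the experiment.
    """
    ranked = sorted(ncee_scores, key=ncee_scores.get, reverse=True)
    n_teams = max(1, len(ranked) // team_size)
    teams = [[] for _ in range(n_teams)]
    for i, student in enumerate(ranked):
        rnd, pos = divmod(i, n_teams)
        # reverse the pick order every round so strong and weak students mix
        idx = pos if rnd % 2 == 0 else n_teams - 1 - pos
        teams[idx].append(student)
    return teams
```

Each team then receives an alternating mix of high- and low-scoring students, so team averages end up close to each other.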

Fig. 2

Experimental design

Contest design

During the contest period, all teams in the three treatment groups engage in team competitions against the other teams in the same class, and scores are awarded to teams according to their rank in the class. These scores contribute to the final score of the course. Besides the final exam, the final score of a student also includes performance in teaching activities, i.e., answer races, discussions, quizzes and homework. All the activities are conducted on an online teaching platform, and the performance of students is collected automatically. The score of a team is computed by averaging the final scores of all team members. The scores of each team member and of the other teams are presented on a score board for students to check during the contest period. At the end of the semester, the top 5 teams on the score board in each treatment group are rewarded with 5 to 10 extra points toward their final scores.

Predicting the individual treatment effect

Problem formulation

ITE indicates the effect of team competition on the academic performance of a student. The difference-in-differences (DID) approach [37] is employed to estimate the ITE. The DID approach first calculates the difference in academic performance before and after the team competition for each student, then averages the performance change in the control group, and finally computes the difference between the two.

Formally, consider a student set \(S={S}_{t1}\cup {S}_{t2}\cup {S}_{t3}\cup {S}_{c}\), where \({S}_{t1},{S}_{t2},{S}_{t3}\) and \({S}_{c}\) indicate the students in treatment group 1, treatment group 2, treatment group 3 and the control group, respectively. Let \({S}_{i,T}\) be the academic performance of student \(i\) in time period \(T\), with \({T}_{0}\) the baseline period before the competition starts and \({T}_{1}\) the period when the competition ends. The difference in the academic performance of student \(i\) before and after the competition period can be calculated by

$$\Delta {S}_{i}={S}_{i,{T}_{1}}-{S}_{i,{T}_{0}}$$
(1)

The average performance difference of students in the control group can be calculated by

$$\Delta {S}_{control}=\frac{\sum_{i\in {S}_{c}}\Delta {S}_{i}}{\left|{S}_{c}\right|}$$
(2)

Finally, the individual treatment effect of student \(i\) can be obtained by

$$IT{E}_{i}=\Delta {S}_{i}-\Delta {S}_{control}$$
(3)
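Equations (1)–(3) translate directly into code. The sketch below assumes the scores are stored as dictionaries keyed by student id; the variable names are illustrative:

```python
import statistics

def individual_treatment_effects(score_t0, score_t1, control_ids):
    """Difference-in-differences estimate of each student's ITE (Eqs. 1-3).

    `score_t0`/`score_t1` map student id -> academic performance in the
    baseline and post-competition periods; `control_ids` is the set S_c.
    """
    delta = {i: score_t1[i] - score_t0[i] for i in score_t0}        # Eq. (1)
    delta_control = statistics.mean(delta[i] for i in control_ids)  # Eq. (2)
    return {i: d - delta_control for i, d in delta.items()}         # Eq. (3)
```

Subtracting the control-group average removes the semester-wide trend shared by all students, so the remainder is attributable to the treatment.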

Given a student \(i\) in team \(j\), let \({\mathcal{F}}_{{S}_{i}}\) denote the feature list of the student, and \({\mathcal{F}}_{{T}_{j}}\) represent the features of team \(j\). The problem of predicting the ITE of student \(i\) can be formulated by

$$\widehat{{ITE}_{i}}=f\left({\mathcal{F}}_{{S}_{i}}, {\mathcal{F}}_{{T}_{j}}\right)$$
(4)

Feature selection

Based on the theoretical insights from social identity theory and contest theory [37, 38], as well as the domain knowledge from college English teaching, the features of a student in this experiment are characterized from two aspects: team features and individual student features.

Team features

According to social identity theory, an individual’s social identity is shaped by their membership [39] in specific groups and the emotional significance [40] they attach to those groups. Team features depict the team-level characteristics that are related to the behavior of students, such as the team formation strategy, team diversity and the average performance of a team. In detail, team diversity is indicated by gender diversity and hometown diversity, measured by the ratio of female students and the ratio of students from within the province, respectively. To depict the performance of a team, all the teammates’ Aptis grades are averaged. The performance of a team is a potentially significant predictor of ITE.
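A minimal sketch of how these team-level features might be computed; the field names `gender`, `in_province` and `aptis` are illustrative, not the actual schema of the experimental dataset:

```python
def team_features(members):
    """Team-level features: gender diversity, hometown diversity,
    and average prior (Aptis) performance of the team."""
    n = len(members)
    return {
        "female_ratio": sum(m["gender"] == "F" for m in members) / n,
        "in_province_ratio": sum(m["in_province"] for m in members) / n,
        "avg_aptis": sum(m["aptis"] for m in members) / n,
    }
```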

Individual student features

In contest theory, when studying the behavior of participants in team competitions in college teaching, researchers often consider various individual features or factors that can influence performance. Individual student features are made up of a student’s demographics, academic performance before the competition [41,42,43], and classroom behaviors [44]. To depict academic performance before the competition, students’ performance in the NCEE and the Aptis test is investigated. In detail, NCEE performance is indicated by the overall mark and subject marks. Aptis performance is indicated by the overall score, the scores for listening, speaking, reading and writing, and a score for the grammar and vocabulary component. The authors then capture students’ classroom behaviors from three aspects: the number of times a student participates in answer races, and the scores of quizzes and homework. Moreover, student demographics, e.g., gender, hometown and age, are also included in the feature set.

In this study, a student’s ITE is calculated from the Aptis score and the score of the final exam. Aptis is an assessment tool widely adopted in China, which accurately tests English language abilities in all four skills: reading, listening, writing and speaking. It is held every October at our school to assess the English language level of our students. All freshmen are asked to participate in the exam, which provides a full and accurate evaluation of students’ English ability before the competition. The distributions of students’ Aptis scores in each group are approximately normal, as shown in Fig. 3.

Fig. 3

Distributions of Aptis overall marks of all participants and three treatment groups

The final exam is conducted at the end of the semester and includes written and oral tests. All groups use the same test paper, marked by the same teacher. Because the result of the oral test may be subjective, only the score of the written test is used to calculate the ITE of a student. The distribution of final exam scores of all the participants in each group is shown in Fig. 4.

Fig. 4

Distributions of final exam marks of all participants and three treatment groups

Model implementations

A number of machine learning models can be employed for ITE prediction. Because this study focuses on understanding the potential predictors of ITE, only models whose influential factors can be easily interpreted are considered. Here the authors choose four commonly used machine learning methods: extreme gradient boosting (XGBoost) [39, 45], light gradient boosting machine (LGBM) [46], Lasso and Ridge.

XGBoost

The XGBoost model is used with 100 trees that randomly sample 90 percent of the training data prior to growing each tree. The dart booster is chosen as XGBoost’s booster, which can prevent overfitting and improve model performance. The implementation provided by the well-known dmlc XGBoost Python package, with the abovementioned parameters, is used to train the model.
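A sketch of this configuration using the dmlc XGBoost Python package; the `objective` key is an assumption, since the text does not state the training objective explicitly:

```python
# Hyperparameters as described above; "dart" drops trees during
# boosting to curb overfitting.
XGB_PARAMS = {
    "booster": "dart",
    "n_estimators": 100,   # 100 trees
    "subsample": 0.9,      # each tree sees 90% of the training rows
    "objective": "reg:squarederror",  # assumed regression objective
}

def make_xgb(**overrides):
    # deferred import: requires `pip install xgboost`
    from xgboost import XGBRegressor
    return XGBRegressor(**{**XGB_PARAMS, **overrides})
```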

LGBM

The LGBM model is also used for comparison with the other models. The LightGBM model’s parameters are similar to those of the XGBoost model, such as the booster and subsample ratio. However, 2000 trees are used to construct the LGBM model, with a learning rate of 0.01. For the other parameters, the GridSearchCV algorithm provided by scikit-learn is used to search for the best values. The LightGBM Python package is used to build the model.
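The described setup might look as follows; the parameter names follow the LightGBM and scikit-learn APIs, and the search grid is left to the caller since the text does not report it:

```python
LGBM_PARAMS = {
    "boosting_type": "dart",  # mirrors the XGBoost booster choice
    "n_estimators": 2000,     # 2000 trees
    "learning_rate": 0.01,
    "subsample": 0.9,
}

def tune_lgbm(X, y, param_grid):
    # deferred imports: require `pip install lightgbm scikit-learn`
    from lightgbm import LGBMRegressor
    from sklearn.model_selection import GridSearchCV
    # exhaustive search over the remaining hyperparameters
    search = GridSearchCV(LGBMRegressor(**LGBM_PARAMS), param_grid, cv=5)
    return search.fit(X, y).best_estimator_
```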

Lasso and ridge

Both Lasso and Ridge are linear models, usually used for feature selection. Lasso applies an L1 penalty to the coefficients during fitting, while Ridge applies an L2 penalty. Both produce a coefficient for every feature, which visually shows the correlation between the feature and the target. However, because of the difference in penalty, Lasso forces certain coefficients to zero, whereas Ridge only shrinks their values without setting them to zero. The scikit-learn package is also utilized in this study. Besides, because the data has already been processed with min–max normalization, it is not normalized again, and the “cv” parameter is set to 5.
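A sketch of the normalization and fitting described above, using scikit-learn's cross-validated estimators; the split of responsibilities between the two helpers is illustrative:

```python
def minmax(column):
    """Scale one feature column to [0, 1] (the min-max normalization
    applied before fitting, so the models do not normalize again)."""
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) for x in column]

def fit_linear_models(X, y):
    # deferred import: requires `pip install scikit-learn`
    from sklearn.linear_model import LassoCV, RidgeCV
    lasso = LassoCV(cv=5).fit(X, y)  # L1: zeroes out weak features
    ridge = RidgeCV(cv=5).fit(X, y)  # L2: shrinks without zeroing
    return lasso, ridge
```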

Evaluation

In this section, the effect of team competition on college students’ academic performance is analyzed by answering the following research questions:

  • RQ1: How do different machine learning models perform in ITE prediction?

  • RQ2: Which features are most correlated with students’ academic performance when conducting team competition in the college classroom?

  • RQ3: How do different competition design methods impact the effect of team competition on students’ academic performance?

Performance comparison

Following standard practice, the dataset is randomized and split into a training set, a validation set and a test set. The authors adopt RMSE, which is commonly used to measure the accuracy of a machine learning predictor [41,42,43]:

$$RMSE=\sqrt{\frac{{\sum }_{i=1}^{N}(IT{E}_{i}-\widehat{{ITE}_{i}}{)}^{2}}{N}}$$
(5)

where \(N\) indicates the sample size.
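Equation (5) translates directly into code; the sketch below assumes the actual and predicted ITEs are given as equal-length sequences:

```python
import math

def rmse(ite_true, ite_pred):
    """Root-mean-square error between actual and predicted ITEs (Eq. 5)."""
    n = len(ite_true)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(ite_true, ite_pred)) / n)
```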

The prediction accuracy of the models on both the validation set and the test set is illustrated in Fig. 5. To test the validity of this study, two baselines are constructed. The random baseline draws a random value from a Gaussian distribution estimated from the ITEs in the training set. The average baseline predicts every ITE in the test set as the mean of all ITEs in the training set. Figure 5 shows that XGBoost, LGBM, Ridge and Lasso all achieve similar accuracy, demonstrating a significant advantage over the average and random baselines by reducing RMSE by up to 95% and 30%, respectively.
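The two baselines can be sketched as follows; the fixed seed in the random baseline is added here only for reproducibility:

```python
import random
import statistics

def average_baseline(train_ites, n):
    """Predict every test ITE as the training-set mean."""
    return [statistics.mean(train_ites)] * n

def random_baseline(train_ites, n, seed=0):
    """Draw predictions from a Gaussian fitted to the training ITEs."""
    mu = statistics.mean(train_ites)
    sigma = statistics.stdev(train_ites)
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]
```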

Fig. 5

Comparison of model performance (RMSE)

Analysis of feature importance

XGBoost, LGBM and Lasso can select features during the training process. Eliminating features with zero coefficients in Lasso, as well as those with negative importance in XGBoost and LGBM, allows the most significant predictors to be identified for all three models. Note that because of differences in structure, different models may choose different features, as shown in Fig. 6.
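The elimination rule can be sketched as a small helper; the `rule` argument distinguishing Lasso coefficients from tree importances is an illustrative interface, not the authors' code:

```python
def selected_features(names, scores, rule):
    """Apply the elimination rule described above: drop features with
    zero Lasso coefficients, or with negative tree (XGBoost/LGBM)
    importances, keeping everything else."""
    if rule == "lasso":
        return [n for n, s in zip(names, scores) if s != 0]
    return [n for n, s in zip(names, scores) if s >= 0]
```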

Fig. 6

Importance scores of features

The importance of features is investigated across the different ITE prediction models. Figure 6a and b show the features selected by the Lasso and Ridge models. Figure 6c and d illustrate the top 15 most important features selected by the XGBoost and LGBM models.

The academic performance features of teams and individuals before the competition, e.g., the average Aptis score of a team, the overall Aptis score, the Aptis scores of the four skills, and the overall score and subject scores in the NCEE, show strong predictive power in ITE prediction (see Fig. 6). Surprisingly, the average Aptis score of a team is the largest negative factor in both the Lasso and Ridge models. This finding is consistent with the relationship between ITE and average team performance, as shown in Fig. 7: teams with the highest Aptis scores yield smaller treatment effects than teams with low Aptis scores. Moreover, the Aptis scores for writing, speaking, listening and reading are also negative factors in the Lasso and Ridge models, and they are important features in XGBoost. Their relationships with ITE are consistent with the relationship between the ITE and the average Aptis score of a team, which suggests that students with low academic performance may benefit more from the application of team competition in college English teaching.

Fig. 7

Relationship between average performance of a team and ITE

Impact of competition design

The way of team formation is a significant predictor in ITE prediction. Figure 8 illustrates the ITEs of the three treatment groups that form teams in different ways and the ITE of the control group, which does not conduct team competition. As shown in Fig. 8, the self-formed treatment group obtains the biggest treatment effect. This result is consistent with conclusions drawn in other domains [23]. The reason is that students in the self-formed treatment group are usually acquaintances in real life, which may lead to a higher level of team identity and responsibility. The grade-balanced treatment group yields a smaller treatment effect than the self-formed treatment group, but a bigger one than the other two groups. This finding provides insights for team formation in scenarios where students are not familiar with each other. Not surprisingly, the treatment effect of the control group is approximately 0. A rather intriguing finding is that the random-assigned treatment group obtains the smallest treatment effect, which is in fact negative.

Fig. 8

ITE of four groups

In addition, the authors also investigate the average number of discussions in each group, as shown in Fig. 8b. It can be observed that the self-formed treatment group engaged in the most discussions, while the random-assigned treatment group participated in the fewest. The grade-balanced group participated in more discussions than the control group, but fewer than the self-formed treatment group. This is consistent with the average ITEs of the four groups. The result shows that the self-formed group is more proactive than the other groups and obtains the biggest individual treatment effect. Moreover, it can also be concluded that introducing team competition into college English teaching may not necessarily have a positive effect on students’ academic performance; the outcome depends on how the team competition is conducted.

Conclusion

In conclusion, this research delved into two crucial realms: the impact of team competition on college students' academic performance and the integration of machine learning techniques with MEC terminal data. Through rigorous randomized field experiments among college freshmen, team-related and individual features are meticulously analyzed, employing advanced machine learning models. The findings underscore the significant predictive power of these features on academic performance, enabling a reduction in prediction error by up to 30%.

Moreover, this study provided valuable insights into the practical application of team competition strategies within college classrooms, offering immediate implications for the teaching design of college English. Team competitions can facilitate mutual learning among students, thus improving their grasp of English language concepts, particularly for those who struggle academically. College administrators are responsible for creating an environment that fosters healthy competition among English teaching teams. This includes providing necessary resources, such as training programs, teaching materials, and MEC technology support.

While this research represents a foundational step, further exploration is essential. Future endeavors will encompass additional field experiments, extending these insights to various courses, and addressing unresolved issues at the intersection of machine learning and MEC data processing. This interdisciplinary approach paves the way for enhancing educational methodologies, fostering active student engagement, and advancing the integration of cutting-edge technologies in contemporary learning environments.

References

  1. Zhang J, Cheng X, Wang C, Wang Y, Shi Z, Jin J et al (2022) FedAda: Fast-convergent adaptive federated learning in heterogeneous mobile edge computing environment. World Wide Web 25(5):1971–1998

  2. Abreu AW, Coutinho EF, Bezerra W, Maia D, Gomes AN, Santos I (2022) Analyzing a Blockchain Application for the Educational Domain from the Perspective of a Software Ecosystem. In: Anais do III Workshop sobre as Implicações da Computação na Sociedade. SBC, p 85–92

  3. Harvey JF, Bresman H, Edmondson AC, Pisano GP (2022) A strategic view of team learning in organizations. Acad Manage Ann 16(2):476–507

4. Xing Y, Liu Y, Boojihawon DK, Tarba S (2020) Entrepreneurial team and strategic agility: A conceptual framework and research agenda. Hum Resour Manage Rev 30(1):100696

  5. Gu R, Chen Y, Liu S, Dai H, Chen G, Zhang K, Che Y, Huang Y (2021) Liquid: intelligent resource estimation and network-efficient scheduling for deep learning jobs on distributed GPU clusters. IEEE Trans Parallel Distrib Syst 33:2808–2820


  6. Gu R, Zhang K, Xu Z, Che Y, Fan B, Hou H et al (2022) Fluid: Dataset Abstraction and Elastic Acceleration for Cloud-native Deep Learning Training Jobs. In: 2022 IEEE 38th International Conference on Data Engineering (ICDE). IEEE Computer Society, p 2182–2195

  7. Wang F, Zhu H, Srivastava G, Li S, Khosravi MR, Qi L (2021) Robust collaborative filtering recommendation with user-item-trust records. IEEE Trans Comput Soc Syst 9:986–996


  8. Qi L, Lin W, Zhang X, Dou W, Xu X, Chen J (2022) A Correlation Graph based Approach for Personalized and Compatible Web APIs Recommendation in Mobile APP Development. IEEE Trans Knowl Data Eng 35(6):1–1

  9. Kong L, Wang L, Gong W, Yan C, Duan Y, Qi L (2022) LSH-aware multitype health data prediction with privacy preservation in edge environment. World Wide Web 25:1793–1808


  10. Li Y, Xia S, Cao B, Liu Q (2019) Lyapunov optimization based trade-off policy for mobile cloud offloading in heterogeneous wireless networks. IEEE Trans Cloud Comput 10:491–505


  11. Yang Y, Yang X, Heidari M, Khan MA, Srivastava G, Khosravi M, Qi L (2022) ASTREAM: Data-stream-driven scalable anomaly detection with accuracy guarantee in IIoT environment. IEEE Trans Netw Sci Eng 10(5):3007–3016

  12. Qi L, Yang Y, Zhou X, Rafique W, Ma J (2021) Fast anomaly identification based on multi-aspect data streams for intelligent intrusion detection toward secure industry 4.0. IEEE Trans Industr Inform 18:6503–6511


  13. Shen S (2023) Metaverse-driven new energy of Chinese traditional culture education: edge computing method. Evol Intel 16:1503–1511. https://doi.org/10.1007/s12065-022-00757-4


  14. Kong L, Li G, Rafique W, Shen S, He Q, Khosravi MR, Wang R, Qi L (2022) Time-Aware Missing Healthcare Data Prediction Based on ARIMA Model. IEEE/ACM Trans Comput Biol Bioinform (01):1–10

  15. Liu Y, Li D, Wan S, Wang F, Dou W, Xu X, Li S, Ma R, Qi L (2022) A long short-term memory-based model for greenhouse climate prediction. Int J Intell Syst 37:135–151


  16. Zhang Y, Yin C, Wu Q, He Q, Zhu H (2019) Location-aware deep collaborative filtering for service recommendation. IEEE Trans Syst Man Cybernet 51:3796–3807


  17. Dai L, Wang W, Zhou Y (2021) Design and research of intelligent educational administration management system based on mobile edge computing internet. Mob Inf Syst 2021:e9787866. https://doi.org/10.1155/2021/9787866


  18. Kizilkaya B, Zhao G, Sambo YA, Li L, Imran MA (2021) 5G-enabled education 4.0: enabling technologies, challenges, and solutions. IEEE Access 9:166962–166969. https://doi.org/10.1109/ACCESS.2021.3136361


  19. Wang Q, Zhu C, Zhang Y, Zhong H, Zhong J, Sheng VS (2022) Short text topic learning using heterogeneous information network. IEEE Trans Knowl Data Eng 35(5):5269–5281

  20. DiNapoli J (2018) Leveraging collaborative competition in mathematics classrooms. Aust Math Teach 74:10–17

  21. Rokicki M, Zerr S, Siersdorfer S (2015) Groupsourcing: team competition designs for crowdsourcing. In: Proceedings of the 24th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, p 906–915

  22. Ai W, Chen Y, Mei Q, Ye J, Zhang L (2019) Putting teams into the Gig economy: a field experiment at a ride-sharing platform. Under revision for resubmission to Management Science

  23. Ye T, Ai W, Zhang L, Luo N, Zhang L, Ye J, Mei Q (2020) Predicting individual treatment effects of large-scale team competitions in a ride-sharing economy. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, Virtual Event, CA, USA, p 2368–2377

  24. Younas M, Noor ASM, Arshad M (2022) Cloud-based knowledge management framework for decision making in higher education institutions. Intell Automation Soft Comput 31(1):83–99

  25. Scales CD, Moin T, Fink A, Berry SH, Afsar-Manesh N, Mangione CM, Kerfoot BP (2016) A randomized, controlled trial of team-based competition to increase learner participation in quality-improvement education. Int J Qual Health Care 28:227–232. https://doi.org/10.1093/intqhc/mzw008

  26. Hou C, Hua L, Lin Y, Zhang J, Liu G, Xiao Y (2021) Application and exploration of artificial intelligence and edge computing in long-distance education on mobile network. Mobile Netw Appl 26:2164–2175. https://doi.org/10.1007/s11036-021-01773-x

  27. Xie Y, Zhang Q, Rezaee K, Xu Y (2023) Mobile computing-enabled health physique evaluation in campus based on amplified hashing. J Cloud Comput 12:102. https://doi.org/10.1186/s13677-023-00476-w

  28. Zhang Y, Wang K, He Q, Chen F, Deng S, Zheng Z, Yang Y (2019) Covering-based web service quality prediction via neighborhood-aware matrix factorization. IEEE Trans Serv Comput 14:1333–1344

  29. Wu S, Shen S, Xu X, Chen Y, Zhou X, Liu D, Xue X, Qi L (2022) Popularity-aware and diverse web APIs recommendation based on correlation graph. IEEE Trans Comput Soc Syst 10:771–782

  30. Wang F, Li G, Wang Y, Rafique W, Khosravi MR, Liu G, Liu Y, Qi L (2023) Privacy-aware traffic flow prediction based on multi-party sensor data with zero trust in smart city. ACM Trans Internet Technol 23(3):1–19

  31. Prabhu P, Valarmathie P, Dinakaran K (2023) A feature learning-based model for analyzing students' performance in supportive learning. Intell Automation Soft Comput 36(3):2989

  32. Babu I, MathuSoothana R, Kumar S (2023) Evolutionary algorithm based feature subset selection for students' academic performance analysis. Intell Automation Soft Comput 36(3):3621

  33. Shalit U, Johansson FD, Sontag D (2017) Estimating individual treatment effect: generalization bounds and algorithms. In: Proceedings of the 34th International Conference on Machine Learning. PMLR, p 3076–3085

  34. Athey S, Imbens G (2016) Recursive partitioning for heterogeneous causal effects. Proc Natl Acad Sci 113:7353–7360. https://doi.org/10.1073/pnas.1510489113

  35. Fang G, Annis IE, Elston-Lafata J, Cykert S (2019) Applying machine learning to predict real-world individual treatment effects: insights from a virtual patient cohort. J Am Med Inform Assoc 26:977–988. https://doi.org/10.1093/jamia/ocz036

  36. Makar M, Swaminathan A, Kıcıman E (2019) A distillation approach to data efficient individual treatment effect estimation. Proc AAAI Conf Artif Intell 33:4544–4551. https://doi.org/10.1609/aaai.v33i01.33014544

  37. Zhang Y, Cui G, Deng S, Chen F, Wang Y, He Q (2018) Efficient query of quality correlation for service composition. IEEE Trans Serv Comput 14:695–709

  38. Dai H, Xu Y, Chen G, Dou W, Tian C, Wu X, He T (2020) ROSE: robustly safe charging for wireless power transfer. IEEE Trans Mobile Comput 21:2180–2197

  39. Juvonen J, Lessard LM, Rastogi R, Schacter HL, Smith DS (2019) Promoting social inclusion in educational settings: challenges and opportunities. Educ Psychol 54:250–270. https://doi.org/10.1080/00461520.2019.1655645

  40. Hassani S, Jakob K, Schwab S, Hellmich F, Loeper M, Goerel G (2023) Fostering students’ peer relationships through the classroom-based intervention FRIEND-SHIP. Education 3–13:1–16. https://doi.org/10.1080/03004279.2023.2267584

  41. Iglesias-Pradas S, Hernández-García Á, Chaparro-Peláez J, Prieto JL (2021) Emergency remote teaching and students’ academic performance in higher education during the COVID-19 pandemic: a case study. Comput Hum Behav 119:106713. https://doi.org/10.1016/j.chb.2021.106713

  42. Wu H, Li S, Zheng J, Guo J (2020) Medical students’ motivation and academic performance: the mediating roles of self-efficacy and learning engagement. Med Educ Online 25:1742964. https://doi.org/10.1080/10872981.2020.1742964

  43. Hayat AA, Shateri K, Amini M, Shokrpour N (2020) Relationships between academic self-efficacy, learning-related emotions, and metacognitive learning strategies with academic performance in medical students: a structural equation model. BMC Med Educ 20:76. https://doi.org/10.1186/s12909-020-01995-9

  44. Murillo-Zamorano LR, López Sánchez JÁ, Godoy-Caballero AL (2019) How the flipped classroom affects knowledge, skills, and engagement in higher education: effects on students’ satisfaction. Comput Educ 141:103608. https://doi.org/10.1016/j.compedu.2019.103608

  45. Li Y, Liu J, Cao B, Wang C (2018) Joint optimization of radio and virtual machine resources with uncertain user demands in mobile cloud computing. IEEE Trans Multimedia 20:2427–2438

  46. Li Y, Liao C, Wang Y, Wang C (2015) Energy-efficient optimal relay selection in cooperative cellular networks based on double auction. IEEE Trans Wireless Commun 14:4093–4104

Funding

This research was funded by the Shandong Social Science Planning Fund Program of China (Grant No. 22CSDJ19).

Author information

Authors and Affiliations

Authors

Contributions

Conceptualization, H.Z. and Y.Y.; methodology, H.Z. and W.G.; software, Z.L.; validation, Z.L. and Y.Y.; formal analysis, H.Z.; investigation, H.W.; resources, H.Z.; data curation, H.W.; writing—original draft preparation, H.Z.; writing—review and editing, H.Z.; visualization, Z.L.; supervision, Y.Y. and W.G.; project administration, H.Z.; funding acquisition, Y.Y. All authors reviewed the manuscript.

Corresponding author

Correspondence to Yan Yan.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Zhang, H., Wu, H., Li, Z. et al. Predicting the individual effects of team competition on college students’ academic performance in mobile edge computing. J Cloud Comp 13, 38 (2024). https://doi.org/10.1186/s13677-024-00591-2

  • DOI: https://doi.org/10.1186/s13677-024-00591-2

Keywords