
Advances, Systems and Applications

Quantum support vector machine for forecasting house energy consumption: a comparative study with deep learning models

Abstract

The Smart Grid operates autonomously, facilitating the smooth integration of diverse power generation sources into the grid, thereby ensuring a continuous, reliable, and high-quality supply of electricity to end users. One key focus within the realm of smart grid applications is the Home Energy Management System (HEMS), which holds significant importance given the fluctuating availability of generation and the dynamic nature of loading conditions. This paper presents an overview of HEMS and the methodologies utilized for load forecasting. It introduces a novel approach employing the Quantum Support Vector Machine (QSVM) for predicting periodic power consumption, leveraging the AMPds2 dataset. In the establishment of a microgrid, various factors such as the energy consumption patterns of household appliances, solar irradiance, and overall load are taken into account in dataset creation. In load forecasting for HEMS, the QSVM stands out from other methods due to its unique approach and capabilities. Unlike traditional forecasting methods, QSVM leverages quantum computing principles to handle complex and nonlinear electricity consumption patterns. QSVM demonstrates superior accuracy by effectively capturing intricate relationships within the data, leading to more precise predictions. Its ability to adapt to diverse datasets and produce significantly low error values, such as RMSE and MAE, showcases its efficiency in forecasting electricity load consumption in smart grids.
Moreover, the QSVM model's exceptional flexibility and performance, evidenced by an accuracy of 97.3% on challenging datasets like AMPds2, highlight its distinctive edge over conventional forecasting techniques, making it a promising solution for enhancing forecasting accuracy in HEMS. The article provides a brief summary of HEMS and load forecasting techniques, demonstrating and comparing them with deep learning models to showcase the efficacy of the proposed algorithms.

Introduction

The primary difference in design between the Smart Grid (SG) and the conventional Power Grid is rooted in their operational orientations. The SG operates on a demand-follows-supply model, while the traditional grid operates on a supply-follows-demand model [1]. Renewable energy sources, particularly solar and wind power generation, are extensively integrated into both utility and consumer grids. Many countries are actively transitioning toward the deployment of smart homes and smart grids [2] to leverage their environmental and societal benefits. Advancements in communication and other state-of-the-art technologies have paved the way for the development of smart homes, which comprise intelligent appliances, sensors, and meters interconnected via Internet-of-Things (IoT) devices [3]. This trend has led to the deployment of home energy management systems (HEMSs) to facilitate the progression towards future smart grids. Additionally, energy users’ implementation of Demand Response (DR) programs [4, 5] aids in optimizing energy utilization to enhance power reliability and grid efficiency.

Forecasting, estimation, and prediction are crucial in determining future energy demand. Effective energy distribution planning relies on accurate forecasting to balance demand and supply [6]. Inaccuracies in forecasting can significantly impact operational costs, network safety, and service quality. Underestimating energy usage can cause power outages, resulting in economic costs and disrupting societal routines. On the other hand, overestimating energy demand can lead to unused capacity, wasting resources, especially financially [7]. Therefore, developing models to predict energy consumption trends with nonlinear data is a critical challenge for power generation and distribution networks.

Forecasting models are generally categorized as quantitative or qualitative. Quantitative models are based on data and statistics, while qualitative models rely on experience, judgment, and knowledge. The majority of forecasting methods [8] fall into causal or historical data-based methodologies. Causal methodologies analyze the cause-and-effect relationship between energy consumption and input variables such as social, climate, and economic aspects. Common methods for forecasting power consumption include Artificial Neural Networks (ANNs) [9, 10] and regression models [11], as shown in Fig. 1. Historical data-driven methods such as time series, autoregression, and grey prediction models are also utilized.

Fig. 1
figure 1

Classification of forecasting models

Long-term energy consumption studies [12], typically spanning 5 to 20 years, focus on resource management and development programs. Short-term forecasting, ranging from an hour to a week, is commonly employed for scheduling and distribution network analysis [13], whereas mid-term forecasting, covering a month to 5 years [14], is used for planning power production resources and rates. Due to the influence of variables like time, climate, socioeconomic, and demographic factors on energy demand, accurately forecasting energy consumption is both crucial and challenging.

HEMS plays a critical role in regulating power flow within the smart grid, with a primary goal of reducing electricity costs and enhancing energy efficiency and security [15]. This heavily depends on the integration of sensing, communication, and control technologies. Communication networks like Wide Area Network (WAN), Home Area Network (HAN), and Neighborhood Area Network (NAN) enable access to energy demand data, allowing control of diverse components such as sensors, Renewable Energy Sources (RES), water meters, and Electric Vehicles (EV). Smart meters serve as intermediaries between the central controller of HAN and the utility, gathering data from multiple HANs and transmitting it to the utility administrator for decision-making based on system parameters. Mathematical optimization, meta-heuristic, and heuristic methods are typically used to schedule home energy usage.

A variety of techniques, including data mining, steady-state simulation, and Bayesian networks, are employed to forecast energy demand for building energy consumption, household appliance power, and overall home energy consumption. Researchers have proposed models for forecasting energy consumption across industrial, domestic, non-industrial, commercial, public illuminating, and entertainment sectors, analyzing electricity consumption and heat use to predict distribution system planning.

Organisation

This paper addresses the challenges of load forecasting in Home Energy Management Systems (HEMS). The Literature survey section explores various algorithms incorporating prediction analysis. The Methodology section presents the implementation of the proposed QSVM approach, illustrated through a flow chart. The Results section presents the results and analysis of the proposed approach. Finally, the Conclusion summarizes the current state of research, emphasizing the focus on enhancing accuracy in recent studies.

Literature survey

Prediction analysis is an essential component of home energy management systems due to its ability to forecast and anticipate energy usage patterns, allowing for more efficient resource allocation and consumption management. By analyzing historical energy usage data, weather patterns, and household occupancy trends, prediction analysis can accurately predict future energy demands. This information empowers homeowners to make informed decisions regarding energy usage, optimize the scheduling of appliances and heating/cooling systems, and even explore opportunities for renewable energy integration. Ultimately, the implementation of prediction analysis in home energy management systems is crucial for maximizing energy efficiency, reducing costs, and contributing to a more sustainable and environmentally conscious lifestyle.

In the early stages, prediction analysis primarily relied on basic statistical models to forecast energy usage. Zhang et al. [16] discuss a model predictive control-based home energy management system for a residential microgrid, which takes into account time-varying information such as load demand, electricity price, and renewable energy generation. Using mixed-integer linear programming, three case studies are conducted to analyze the impacts of different factors on the system. Mrazek et al. [17] propose a simplified model of a home that uses a 5-day weather forecast to predict energy demands and generation by photovoltaic panels, which can be used for predictive optimization of energy usage. The implementation and validation of modeling methods for forecasting PV, PEV, HP, and home load demand in a home energy management system are discussed in [18], where a comparative analysis of stochastic modeling methods shows that the Sandia model has better performance and accuracy. Similarly, [19] presents a chance-constrained model predictive control algorithm for demand response in a home energy management system. The proposed control architecture ensures that both the DR event and indoor thermal comfort are satisfied with high probability.

Basic statistical models, while useful in certain contexts, have several drawbacks that limit their effectiveness in the realm of prediction analysis for home energy management systems. These models often rely on simplifying assumptions that may not fully capture the complexity and variability of real-world energy consumption patterns. Additionally, basic statistical models may struggle to account for non-linear relationships and interactions among various factors impacting energy usage, leading to less accurate predictions. Moreover, these models typically require manual updating and recalibration to adapt to changing conditions, making them less dynamic and responsive to real-time changes.

However, with the advent of smart home technology, machine learning, and artificial intelligence, the capabilities of prediction analysis have expanded dramatically. A data analysis approach has been proposed in [20] for predicting appliance power states in a home energy management system. A multitarget classification framework was developed for identifying the power state of appliances, which outperforms FHMM and binary-state modeling frameworks in power prediction. Load forecasting methods using machine learning for HEMS have been proposed in [21] using the DBSCAN, K-means, and PCC algorithms, which help improve the stability and reliability of HEMS. Supervised machine learning algorithms such as Linear Regression, Lasso Regression, Random Forest, Extra Tree Regressor, and XGBoost are used in [22] for prediction of household energy consumption. A comparison of these models shows that tree-based models such as Random Forest and Extra Tree Regressor gave the best results. Syamala et al. [23] explore deep learning-based techniques for predicting energy consumption in smart residential buildings, which is essential for home energy management systems. Deep learning models are well suited for estimating prediction performance and uncertainty. The reinforcement learning method in [24] is effective in predicting costs, and the effectiveness and reliability of its forecasts have been evaluated accurately. Although ML and DL models perform well in prediction analysis, they have certain limitations. One of the main drawbacks is their computational intensity and resource requirements.
Deep learning models often require significant computational power and substantial training time, which can be a limiting factor in practical applications, especially in resource-constrained environments. Additionally, deep learning models can be complex and challenging to interpret, leading to potential difficulties in understanding the underlying factors driving predictions and decisions. They may also require large amounts of labeled data for training, which can be difficult to acquire in certain domains.

In this paper, sophisticated algorithms are used that can analyze vast amounts of data, including historical energy consumption, weather patterns, and even individual user behavior, to generate highly accurate predictions, overcoming the drawbacks of previous methods. Furthermore, the integration of predictive analytics with IoT (Internet of Things) devices has allowed for real-time monitoring and adaptive energy management, enabling homeowners to make proactive adjustments based on dynamic conditions (Table 1).

Table 1 Literature survey on load forecasting in HEMS

Problem statement

Accurate load forecasting is essential within Home Energy Management Systems (HEMS) to effectively manage energy consumption. This need arises from the constantly changing energy demands within households. Without precise load forecasting, HEMS struggles to anticipate and adjust to these fluctuations, resulting in inefficient energy use, potential grid instability, and higher expenses. Therefore, a reliable load forecasting mechanism is critical for HEMS to efficiently allocate and manage energy resources, improve overall efficiency, and promote sustainability within the home energy ecosystem. The work presented in this article utilizes the Quantum Support Vector Machine (QSVM) for forecasting energy consumption based on the complex AMPds2 dataset, which encompasses the energy usage of a house.

Key contributions

 

  1. Collecting data on overall energy consumption as well as the energy consumption of different home appliances.

  2. A Quantum Support Vector Machine (QSVM) approach based on evolutionary learning is introduced for accurately forecasting dynamic short-term load demand and power consumption, particularly improving precision for home energy management.

  3. Comparing the performance of the proposed QSVM model with deep learning models in terms of MAPE, MAE, RMSE, and accuracy.

Background work

Support vector machine

The Support Vector Machine (SVM) is a supervised AI method grounded in statistical learning theory [29], aimed at analyzing data and identifying patterns. It finds application in both data classification and regression analysis for estimating system parameters. Initially introduced by Vladimir Vapnik in 1994, SVM offers notable advantages [30], particularly in scenarios with limited sample sizes or databases, such as time series forecasting [1].

The fundamental concept behind utilizing SVM for pattern classification involves several steps. Initially, input vectors are mapped into a feature space, potentially of higher dimensionality, either through linear or nonlinear transformations dictated by the kernel function, as shown in Fig. 2. Subsequently, within this feature space, an optimal linear division is sought to construct a hyperplane that effectively separates the two classes, with potential extension to multi-class scenarios. SVM training consistently pursues a globally optimized solution while mitigating overfitting, enabling effective handling of a substantial number of features. In the case of linear separability, there exists a separating hyperplane defined by (1):

$$\begin{aligned} c.y + b = 0 \end{aligned}$$
(1)

which implies (2),

$$\begin{aligned} z_{j}( c.y_{j} + b) \geqq 1,\qquad \ \ j = 1, 2, 3,\ldots ,N \end{aligned}$$
(2)
Fig. 2
figure 2

Support vector machine working

By minimizing the Euclidean norm of \(\mid \mid \ c \ \mid \mid\) under this constraint, SVMs solve a convex quadratic problem (QP) by introducing Lagrange multipliers \(\beta _{i}\). The solution yields a globally optimized result with specific properties (3).

$$\begin{aligned} k = \sum _{j}^{N} \beta _{j} z_{j} m_{j} \end{aligned}$$
(3)

These \(m_j\) are only considered support vectors if the associated \(\beta _j > 0\). When training the SVM, the decision function may be expressed as (4)

$$\begin{aligned} f(y) =sign\left( \sum _{j=1}^{N} \beta _{j} z_{j}( y.m_{j}) \ +\ b\right) \end{aligned}$$
(4)

In linear non-separable scenarios, SVM employs a non-linear mapping of the input vector y from the \(R^d\) input space to a higher-dimensional Hilbert space, dictated by the kernel function.
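The classification workflow described above can be sketched with a classical kernel SVM. The snippet below is an illustrative toy example (the data, labels, and RBF kernel choice are assumptions for demonstration, not the paper's configuration), using scikit-learn's `SVC`:

```python
import numpy as np
from sklearn.svm import SVC

# Toy 1-D "load" feature: two separable consumption regimes
rng = np.random.default_rng(0)
low = rng.normal(0.2, 0.05, size=(50, 1))   # low-consumption samples
high = rng.normal(0.8, 0.05, size=(50, 1))  # high-consumption samples
X = np.vstack([low, high])
z = np.array([-1] * 50 + [1] * 50)          # class labels z_j

# The RBF kernel maps inputs into a higher-dimensional feature space,
# mirroring the nonlinear mapping described above
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, z)

# The decision function corresponds to Eq. (4):
# sign(sum_j beta_j z_j K(y, m_j) + b)
print(clf.predict([[0.15], [0.85]]))  # expected: [-1  1]
```

Only the training points with nonzero Lagrange multipliers (the support vectors, `clf.support_vectors_`) contribute to the decision function, as noted above.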

Quantum support vector machine (QSVM)

Quantum Support Vector Machine (QSVM) is an innovative approach that integrates quantum computing principles with classical machine learning algorithms to enhance computational capabilities and optimize performance. It merges quantum algorithms with the classical SVM framework to handle complex datasets and high-dimensional feature spaces efficiently. It also solves optimization problems more effectively than classical SVMs, allowing for faster processing and improved accuracy in classification tasks. By employing quantum principles such as superposition and entanglement, QSVM can process vast amounts of data in parallel, facilitating quicker analysis and decision-making. Generally, Grover’s search algorithm and the HHL algorithm are the two algorithms that have been used in implementing QSVM [31]. These methods can extract specific properties of \(\vec {z}\) satisfying \(A\vec {z} = \vec {n}\), where A is an \(M \times M\) matrix and \(\vec {n}\) is a vector of size \(M \times 1\). The computational complexity of the traditional SVM algorithm is \(O[\log (\gamma ^{-1}) \text {poly}(MP)]\), which is directly proportional to the polynomial in MP, where M represents the dimensions of the data, P denotes the number of training data, and \(\gamma\) signifies accuracy. Comparing the traditional SVM with the QSVM based on the HHL algorithm shows that the QSVM with the HHL algorithm can achieve \(O[\log (MP)]\) performance for both training and testing processes, which exponentially speeds up the calculations.

By employing the least-squares reformulation of the support vector machine (SVM), we can convert both the original SVM conundrum and a quadratic programming issue into the challenge of resolving a linear equation system (5):

$$\begin{aligned} E\left( {\begin{array}{c}j\\ \vec {\beta }\end{array}}\right) \equiv \left( \begin{array}{cc} 0 &{} \vec {1}^{M}\\ \vec {1} &{} N\ +\ \alpha ^{-1} I \end{array}\right) \left( {\begin{array}{c}j\\ \vec {\beta }\end{array}}\right) = \left( {\begin{array}{c}0\\ \vec {k}\end{array}}\right) \end{aligned}$$
(5)

In this context, N signifies the \(R \times R\) kernel matrix, where elements are computed as \(N_{rs} = N(\vec {x}_r, \vec {x}_s) = \vec {x}_r \cdot \vec {x}_s\) when employing a linear kernel. \(\alpha\) serves as a user-defined parameter that regulates the balance between training error and the SVM objective. \(\vec {k}\) represents a vector containing the labels of the training data, and I denotes the identity matrix. Thus, the sole unknown term in this linear equation is the vector \(\begin{pmatrix}j\ \vec {\beta }\end{pmatrix}\). Here, both \(\vec {\beta }\) and b are parameters utilized to compute the SVM classifier, which defines the decision hyperplane that segregates the data into two sub-groups. Once the parameters of the hyperplane are established, owing to a linear system solving algorithm such as the HHL algorithm, a new data point \(\vec {y}_0\) can be classified accordingly (6).

$$\begin{aligned} f(\vec {y}_{0})= & {} sign(\vec {w} .\vec {y}_{0} \ +\ b)\nonumber \\ {}= & {} sign\left( \sum \nolimits _{j=1}^{N} \beta _{j} z_{j}( \vec {y}_{j}.\vec {y}_{0}) \ +\ b\right) \end{aligned}$$
(6)

Where \(\vec {y}_j\) with \(j = 1, \ldots , N\) represents the training data; \(\beta _j\) is the jth dimension of the parameter \(\vec {\beta }\); \(\vec {w}\) denotes the slope of the hyperplane, which can be derived from the parameter \(\vec {\beta }\). The parameter b serves as the offset of the hyperplane, and in this context, it’s set to 0. Mathematical representation of signum function is shown in (7):

$$\begin{aligned} sign( y) = \left\{ \begin{array}{ll} 1, &{} y \geqq 0\\ -1, &{} y < 0 \end{array}\right. \end{aligned}$$
(7)
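The least-squares reformulation of Eqs. (5)–(7) reduces classically to solving one linear system. The following NumPy sketch sets up and solves that system on a tiny hand-made dataset (the data and \(\alpha\) value are illustrative assumptions; a quantum implementation would replace `np.linalg.solve` with an HHL-style solver):

```python
import numpy as np

# Tiny least-squares SVM on 1-D data, following Eq. (5):
# [[0, 1^T], [1, N + alpha^-1 I]] (b; beta) = (0; k)
X = np.array([[0.1], [0.2], [0.8], [0.9]])  # training inputs
k = np.array([-1.0, -1.0, 1.0, 1.0])        # training labels
alpha = 10.0

N = X @ X.T                                 # linear kernel: N_rs = x_r . x_s
R = len(X)
A = np.zeros((R + 1, R + 1))
A[0, 1:] = 1.0                              # top row: 1^T
A[1:, 0] = 1.0                              # left column: 1
A[1:, 1:] = N + np.eye(R) / alpha           # N + alpha^-1 I
rhs = np.concatenate([[0.0], k])

sol = np.linalg.solve(A, rhs)
b, beta = sol[0], sol[1:]

def classify(y0):
    """Eq. (6): sign(sum_j beta_j (y_j . y_0) + b) with a linear kernel."""
    return np.sign(beta @ (X @ y0) + b)

print(classify(np.array([0.05])), classify(np.array([0.95])))  # -1.0 1.0
```

The sign function applied at the end is exactly Eq. (7), mapping the real-valued decision score onto the two class labels.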

Methodology

Dataset

Data was gathered on a home built in the Greater Vancouver metropolitan area of British Columbia in 1955. Following major renovations in 2005 and 2006, the house was awarded an Energy Guide 23 rating of 82% by the Canadian government, an increase from 61%. The house is located in Vancouver East’s Burnaby neighbourhood. AMPds2 is available for download in various formats, including the original CSV files, RData, and tab-delimited formats, through Harvard Dataverse (Data Citation 2). AMPds2 includes a description of the house’s electricity use, divided into four distinct categories of data: electricity, water, natural gas, and climate [32]. In this work, the prediction analysis was conducted using the electricity dataset, with each meter’s readings stored in a separate file; for example, power billing data is included in the Electricity billing.csv file, and data from the clothes dryer (CDE) meter is stored in the Electricity CDE.csv file.

The home is supplied with 240 V, 200 A service by BC Hydro, a provincial utility. Two DENT PowerScout 18 devices recorded 21 loads of data at one-minute intervals over a two-year period (2012-2014). The load details for the 21 loads are shown in Fig. 3. Since no activity was detected, three loads were disconnected: a gas stove plug breaker, a microwave plug breaker, and a randomly selected lighting breaker. Low-value measurements were recorded as zero.

Fig. 3
figure 3

AMPds2 Bus diagram

Data preprocessing

Data scaling is a fundamental preprocessing procedure applied to numerical features so that machine learning algorithms achieve optimal outcomes. The most commonly used scaling methods, such as the standard scaler and the min-max scaler, have been applied in prior work [33]. In the analysis of the AMPds2 dataset, this article employs the MinMaxScaler. MinMax scaling, also known as min-max normalization, is a data preprocessing method used to adjust numerical features in a dataset to a specific range, usually between 0 and 1. This technique is valuable in machine learning and data analysis when the features exhibit varying scales and need to be standardized for consistent comparison and model training. The MinMax scaling process involves the following steps:

  • Identify the Range: Determine the minimum and maximum values for each feature in the dataset.

  • Calculate Scaling Factors: Compute the scaling factors, typically denoted as \(\text {min}\) and \(\text {max}\), representing the desired minimum and maximum values for the range. The common range is [0, 1], but it can be adjusted as needed.

  • Scale the Data: For each feature x in the dataset, apply the scaling formula (8):

    $$\begin{aligned} x_{\text {scaled}} = \frac{x - \text {min}(x)}{\text {max}(x) - \text {min}(x)} \times (\text {max} - \text {min}) + \text {min} \end{aligned}$$
    (8)

Here, \(x_{\text {scaled}}\) denotes the scaled value of x, \(\text {min}(x)\) represents the minimum value of feature x, and \(\text {max}(x)\) signifies the maximum value of feature x, while \(\text {min}\) and \(\text {max}\) are the desired minimum and maximum values of the range, typically 0 and 1. Formula (8) linearly scales each feature’s values to the desired range.
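The steps above amount to one vectorized expression. As a minimal sketch with the target range [0, 1] (so the trailing \(\times (\text {max} - \text {min}) + \text {min}\) term drops out) and illustrative kWh values:

```python
import numpy as np

# Min-max scaling of a hypothetical weekly consumption series to [0, 1],
# i.e. Eq. (8) with min = 0 and max = 1
x = np.array([120.0, 150.0, 90.0, 180.0])           # illustrative kWh values
x_scaled = (x - x.min()) / (x.max() - x.min())
print(x_scaled)  # [0.33333333 0.66666667 0.         1.        ]
```

scikit-learn's `MinMaxScaler` performs the same transformation column-wise and additionally remembers `min(x)` and `max(x)` so the identical scaling can be reapplied to test data.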

QSVM implementation

This section comprises two parts. In the first part, the AMPds2 dataset is transformed (encoded) into a quantum state using a basic embedding, which translates each data point from its original representation to a superposition state (9), as shown in Fig. 4.

$$\begin{aligned} |y\rangle \ \rightarrow |\varphi ( y) \rangle = Z( y) |0\rangle \end{aligned}$$
(9)
Fig. 4
figure 4

Support vector machine working

Here, \(|y\rangle\) represents a classical input data vector, which is transformed into a quantum state \(|\varphi ( y) \rangle\) by applying a unitary operator \(Z( y)\) to the initial quantum state \(|0\rangle\).

In the subsequent operations of this layer, Hadamard gates and CNOT gates act as a feature mapping. They manipulate the initial basis states, creating entanglement between qubits and generating a more complex quantum state that captures higher-order relationships and features from the input data. This feature mapping effectively transforms the original data into a new, quantum-based feature space, where the similarity between data points is measured by the inner product of their resulting quantum states. In the second part, we employ a square kernel matrix, which captures the similarities between pairs of data points not only in their quantum feature representation at a single point in time, but also across different time steps. This allows us to analyse the past history of the data and identify patterns that can inform future predictions.
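The encoding and feature-mapping steps can be simulated classically for small qubit counts. The sketch below is a hypothetical two-qubit circuit in plain NumPy (Hadamards, data-dependent Z-rotations, one CNOT), not the paper's exact ansatz; it shows how the kernel entry between two samples reduces to the squared overlap of their feature states:

```python
import numpy as np

# Standard single-qubit and two-qubit gate matrices
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def rz(theta):
    """Z-rotation: encodes a feature value as a relative phase."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def feature_state(x):
    """Map a 2-feature sample x to |phi(x)> = CNOT (RZ(x0) x RZ(x1)) (H x H) |00>."""
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0                              # |00>
    U = np.kron(H, H)                           # superposition layer
    U = np.kron(rz(x[0]), rz(x[1])) @ U         # phase-encode the features
    return CNOT @ U @ state                     # entangle the qubits

def quantum_kernel(x1, x2):
    """Kernel entry: squared inner product |<phi(x1)|phi(x2)>|^2."""
    return abs(np.vdot(feature_state(x1), feature_state(x2))) ** 2

print(round(quantum_kernel([0.1, 0.2], [0.1, 0.2]), 6))  # identical points -> 1.0
```

Because every gate is unitary, the feature states stay normalized, so kernel entries always lie in [0, 1], with 1 for identical inputs.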

Results

This study makes use of the AMPds2 dataset, which consists of 21 different loads with irregular consumption patterns. Figure 5 provides a detailed overview of the consumption patterns of the loads. Several artificial intelligence techniques, such as basic RNN, LSTM and QSVM, are used to forecast energy usage. Various combinations of activation functions and hyperparameters are used to train these models. The RMSE and MAE measures are used to evaluate each model’s performance.

Fig. 5
figure 5

Overall house energy consumption

In the Python code, the data is resampled on a weekly basis and then subjected to the data preprocessing procedures described above. The processed data is then fed into the different models for further analysis and modeling.
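As a hedged sketch of this step (with synthetic minute-level readings standing in for the AMPds2 electricity files, and pandas' `resample` performing the weekly aggregation):

```python
import numpy as np
import pandas as pd

# Synthetic minute-level power readings over four weeks (stand-in for AMPds2)
idx = pd.date_range("2012-04-01", periods=4 * 7 * 24 * 60, freq="min")
power = pd.Series(np.random.default_rng(1).uniform(0, 2, len(idx)), index=idx)

weekly = power.resample("W").sum()                                # weekly totals
scaled = (weekly - weekly.min()) / (weekly.max() - weekly.min())  # min-max scaling
print(len(weekly), float(scaled.min()), float(scaled.max()))
```

The scaled weekly series is what the forecasting models consume; the same scaler parameters must be reused when inverse-transforming predictions back to physical units.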

Deep learning models (LSTM & RNN models)

The study advocates for the utilization of Deep Learning techniques, which involve modifying and resizing the dataset to meet the requirements of the neural network (NN) model. Initially, the research employs an RNN model, comprising one input layer, two output layers, and two hidden layers, each containing 40 nodes. With a total of 5362 trainable parameters and 0 non-trainable parameters, the RNN model undergoes data splitting for training, allocating 30% for validation purposes. Subsequently, the trained model is evaluated using separate test data to forecast future values, with focused adjustments made to hyperparameters such as epochs and batch size for optimal training. Additionally, the LSTM model is structured with one input and output layer, along with three hidden layers, each containing 25 nodes. The LSTM model comprises 142,363 trainable parameters and no non-trainable parameters. Upon completion of training, the model is assessed using test data to predict future values. When comparing the deep learning models, LSTM demonstrates superior training compared to RNN. Below, Figs. 6 and 7 display the actual and predicted values of both LSTM and RNN.
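The core of the RNN layers described above is a single recurrence that carries a hidden state across time steps. The NumPy sketch below illustrates one such step with 40 hidden nodes, matching the layer width above; the weights are random stand-ins, not the trained model:

```python
import numpy as np

# One recurrent step: h_t = tanh(Wx x_t + Wh h_{t-1} + b)
rng = np.random.default_rng(42)
n_in, n_hidden = 1, 40                          # 40 nodes, as in the hidden layers
Wx = rng.normal(0, 0.1, (n_hidden, n_in))       # input-to-hidden weights
Wh = rng.normal(0, 0.1, (n_hidden, n_hidden))   # hidden-to-hidden weights
b = np.zeros(n_hidden)

def rnn_step(x_t, h_prev):
    """The core recurrence shared by simple RNNs (LSTMs add gating on top)."""
    return np.tanh(Wx @ x_t + Wh @ h_prev + b)

# Unroll over three scaled load readings
h = np.zeros(n_hidden)
for x_t in [np.array([0.3]), np.array([0.5]), np.array([0.7])]:
    h = rnn_step(x_t, h)
print(h.shape)  # (40,)
```

An LSTM replaces `rnn_step` with gated updates (input, forget, and output gates), which is what allows it to retain longer consumption histories than the simple RNN, consistent with its better training behavior reported above.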

Fig. 6
figure 6

Household energy consumption prediction using LSTM

Fig. 7
figure 7

Household energy consumption prediction using RNN

Proposed methodology

The AMPds2 dataset is resampled, normalized, and discretized into binary data using a specified threshold value. The binary information is divided into input sequences and their respective output targets. This study assigned a certain number of time steps to each sequence using 2, 4, and 8 qubits, as shown in Fig. 8, in order to validate the effectiveness of the proposed QSVM model. Subsequently, the data is processed through a quantum circuit ansatz where it is encoded, followed by the application of layers of Hadamard (H) gates and CNOT gates to create entanglement \((R_{z}(\theta ))\) between the qubits \((ZZ(\theta ))\), where \(\theta\) is rotated by \(\frac{\pi }{2}\). Following this, the data is split into 70% for training and 30% for testing. A quantum kernel is utilized to compute the inner product of quantum feature maps of the training samples. These quantum feature maps \((x_{1}.x_{2})\) are applied to the ansatz circuit to calculate the probability of observing the measurement, indicating the inner product. Finally, a traditional SVM is trained using the quantum kernel, and the QSVM model is then employed to forecast the testing data. The prediction analysis using QSVM is shown in Fig. 9.
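The final step of this pipeline, training a classical SVM on a quantum kernel, can be sketched end to end. Everything below is a hypothetical stand-in: a single-qubit RY encoding simulated in NumPy replaces the paper's ansatz, the thresholded targets are synthetic, and scikit-learn's precomputed-kernel mode plays the role of the classical SVM:

```python
import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    """Toy 1-qubit encoding |phi(x)> = RY(pi*x)|0> (stand-in for the ansatz)."""
    theta = np.pi * x[0]
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def kernel_matrix(A, B):
    """Quantum kernel entries: squared overlaps between encoded samples."""
    return np.array([[abs(np.dot(feature_state(a), feature_state(b))) ** 2
                      for b in B] for a in A])

# Synthetic binary targets from thresholded, normalized consumption
X_train = np.array([[0.05], [0.1], [0.15], [0.85], [0.9], [0.95]])
y_train = np.array([0, 0, 0, 1, 1, 1])
X_test = np.array([[0.2], [0.8]])

# Classical SVM consumes the precomputed quantum kernel
svm = SVC(kernel="precomputed")
svm.fit(kernel_matrix(X_train, X_train), y_train)
pred = svm.predict(kernel_matrix(X_test, X_train))
print(pred)  # expected: [0 1]
```

Note the shape convention: the test kernel has one row per test sample and one column per training sample, since the decision function sums over the training set as in Eq. (6).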

Fig. 8
figure 8

Feature map circuit for 2 qubit

Fig. 9
figure 9

Household energy consumption prediction using QSVM

In this study, we compared the performance metrics of different qubit counts in the proposed model. Specifically, we utilized 2, 4, and 8 qubits to assess the effectiveness of the model. The results in Table 2 indicate that the employment of 2 qubits resulted in lower RMSE (0.144), MAE (0.380), and MAPE (0.256) compared to the results obtained with 4 and 8 qubits. The prediction accuracy of 2 qubits is notably higher at 97.36%, surpassing the slightly lower prediction accuracies of 93.12% and 90.76% observed with 4 and 8 qubits, respectively.
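For reference, the reported error metrics are computed as follows (the actual and predicted values here are illustrative, not the paper's results):

```python
import numpy as np

# Illustrative actual vs. predicted weekly consumption values
actual = np.array([1.0, 1.2, 0.9, 1.1])
pred = np.array([0.9, 1.3, 1.0, 1.0])

rmse = np.sqrt(np.mean((actual - pred) ** 2))        # root mean square error
mae = np.mean(np.abs(actual - pred))                 # mean absolute error
mape = np.mean(np.abs((actual - pred) / actual))     # mean absolute percentage error
print(round(rmse, 3), round(mae, 3), round(mape, 3))  # 0.1 0.1 0.096
```

Since RMSE squares each error before averaging, it penalizes large deviations more heavily than MAE, which is why both are reported together.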

Table 2 Comparing the different metrics of various qubit counts of the proposed model

When comparing the proposed method with deep learning models in terms of performance metrics, it becomes evident that the Quantum Support Vector Machine (QSVM) demonstrates superior performance. This is highlighted by a comparative analysis of root mean square error (RMSE), mean absolute error (MAE), and MAPE, as depicted in Table 3. Figure 10 illustrates the comparison of results, with the proposed model achieving an accuracy of 97.36%. In contrast, LSTM and RNN achieved slightly lower accuracies of 95.01% and 93.9%, respectively. The data indicates that QSVM outperforms the deep learning models across these key performance indicators, affirming its efficacy for the task at hand.

Table 3 Overall comparison of proposed models
Fig. 10
figure 10

Comparison of different prediction models

Conclusion

This article emphasizes the role of forecasting in the Home Energy Management Systems (HEMS) of smart grids. Using Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and the Quantum Support Vector Machine (QSVM) for electrical load consumption prediction, it investigates the application of QSVM to forecast complex electricity consumption patterns. The comprehensive performance evaluation highlights excellent results with significantly low values for both RMSE and MAE. For instance, the QSVM model exhibits low RMSE (0.14) and MAE (0.380) values, demonstrating its high accuracy. These results are broadly applicable, even if they may not hold for all datasets. Furthermore, the selection of a time series forecasting model has a substantial impact on error reduction. It is most useful to use a highly adaptive model, especially when working with complicated and nonlinear datasets. The QSVM approach produces the best accuracy, as demonstrated by our study and documented in the literature, with an improvement of about 3%. Attaining an accuracy of 97.3% on a complicated dataset such as AMPds2 highlights the model’s exceptional flexibility.

Moreover, future studies could delve into the generalizability of the QSVM approach and its adaptability to different datasets. Investigating its performance across various types of data would provide valuable insights into the robustness of the model and its applicability in real-world scenarios. This could involve exploring the impact of various external factors and contextual variables on the accuracy and reliability of the QSVM approach.

Additionally, researchers might consider conducting comparative studies to evaluate the performance of the QSVM model against other advanced forecasting models, particularly in the context of complex and nonlinear datasets. Comparisons with existing state-of-the-art models could yield essential benchmarks for understanding the relative strengths and weaknesses of different approaches, ultimately guiding the selection of the most suitable forecasting model for specific applications.

Furthermore, future research could focus on developing hybrid models that integrate the strengths of multiple forecasting techniques, potentially addressing the limitations of individual models and enhancing overall prediction accuracy. This could involve exploring innovative combinations of deep learning approaches and traditional forecasting methods to harness the complementary advantages of each approach, thus pushing the boundaries of forecasting precision and adaptability.
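One simple form of such a hybrid is residual correction: a first model captures the trend and coarse periodicity, and a second, nonlinear model learns whatever structure remains in its residuals. The sketch below is a hypothetical illustration on synthetic data, pairing a linear base model with a kernel-SVR corrector; it is not the paper's method:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

rng = np.random.default_rng(1)
t = np.arange(300)
# Trend + daily periodicity + noise (illustrative series only).
series = 0.01 * t + 0.4 * np.sin(2 * np.pi * t / 24) + 0.05 * rng.standard_normal(300)

window = 24
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
split = 220
X_tr, X_te = X[:split], X[split:]
y_tr, y_te = y[:split], y[split:]

# Stage 1: linear model captures trend and coarse periodicity.
base = LinearRegression().fit(X_tr, y_tr)
resid_tr = y_tr - base.predict(X_tr)

# Stage 2: kernel model learns nonlinear structure left in the residuals.
corrector = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_tr, resid_tr)

hybrid_pred = base.predict(X_te) + corrector.predict(X_te)
base_rmse = float(np.sqrt(np.mean((y_te - base.predict(X_te)) ** 2)))
hybrid_rmse = float(np.sqrt(np.mean((y_te - hybrid_pred) ** 2)))
print(base_rmse, hybrid_rmse)
```

The same two-stage pattern generalizes to the combinations discussed above, e.g. an LSTM base forecaster with a QSVM residual corrector, since the corrector only needs the base model's errors as its training target.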

By delving into these areas, future studies could further expand the understanding of forecasting in HEMS of smart grids, paving the way for enhanced accuracy, adaptability, and real-world applicability of forecasting models in the realm of energy management.

Availability of data and materials

No datasets were generated or analysed during the current study.


Acknowledgements

The authors wish to acknowledge VIT University, Vellore, for providing valuable resources and support for this study. Their assistance has been instrumental in the successful completion of this research.

Funding

No funding was received for conducting this study.

Author information

Authors and Affiliations

Authors

Contributions

Karan Kumar K, Mounica Nutakki, and Suprabhath Koduru contributed to the conceptualization, methodology, software development, and were involved in writing the original draft as well as reviewing and editing the manuscript. Srihari Mandava played a key role in visualization, investigation, supervision, and project administration.

Corresponding author

Correspondence to Srihari Mandava.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

The authors confirm that all individuals mentioned in this paper have given consent for the publication of the information included in this study.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

K, K.K., Nutakki, M., Koduru, S. et al. Quantum support vector machine for forecasting house energy consumption: a comparative study with deep learning models. J Cloud Comp 13, 105 (2024). https://doi.org/10.1186/s13677-024-00669-x


Keywords