
Edge-cloud computing cooperation detection of dust concentration for risk warning research

Abstract

This paper proposes an edge-cloud computing collaborative dust concentration detection architecture that runs intelligent algorithms in real time to reduce warning delay. It further proposes an end-to-end three-channel convolutional neural network (E2E-SCNN) to support intelligent monitoring and management of dust concentration in tobacco production workshops. The model comprises three sub-networks, a local feature branch, a global feature branch, and a spatial feature branch, which respectively learn the detailed texture, overall layout, and spatial distribution of the input image. The three complementary features are fused at the end of the network for the final dust concentration regression. Compared with a single network that directly regresses the entire image, this design represents the overall information of the image more fully and improves monitoring performance. A richly annotated image dataset of tobacco production workshops is constructed to verify the effectiveness of the proposed method. On this dataset, the prediction error of E2E-SCNN is compared with existing image estimation algorithms, dual-channel networks, and other methods using indicators such as Mean Absolute Error (MAE) and \({R}^{2}\). The results show that E2E-SCNN achieves excellent performance and significantly surpasses the comparison methods. The paper demonstrates that a three-channel convolutional neural network framework that exploits spatial information can greatly improve the accuracy and robustness of dust concentration prediction. This work provides an effective means for dust supervision and governance during tobacco production and offers a technical route that image analysis tasks in other similar fields can refer to.

Introduction

Edge cloud is a technology that deploys cloud computing capabilities and services at the network edge, close to data sources or users. Its application in smart factories has significant implications. First, it improves the efficiency and security of data processing: edge cloud can analyze, process, and act on data in real time at the network edge, reducing data transmission latency and cost as well as the risk of data leakage. Second, it enhances the flexibility and scalability of smart factories: edge cloud can dynamically adjust the allocation and configuration of cloud services according to the actual needs of the factory, achieving optimal utilization of resources. Edge cloud can also work with the central cloud to achieve hierarchical management and storage of data, improving the operational efficiency and stability of smart factories. The general architecture of a smart factory based on edge cloud is shown in Fig. 1.

Fig. 1 Schematic diagram of edge-cloud enabled smart factory

Tobacco is an important economic crop and the main raw material for tobacco products. Tobacco processing and production is the key link in transforming tobacco from the field into tobacco products, and the workshop production process is essential to ensuring the quality and quantity of those products. Dust pollution is a common problem in industrial production, and tobacco processing is no exception. The main sources of dust in this industry are mechanical operations such as baking, cutting, conveying, and grading. Dust not only contaminates the workshop environment but also compromises product quality: it sticks to the tobacco surface and forms “glue stains” that interfere with later production stages. More importantly, tobacco workshop dust contains harmful substances such as tobacco-specific alkaloids, which increase the risk of tobacco dust pneumoconiosis among workers and pose a direct threat to their health [1, 2]. In an aerobic environment, once tobacco dust reaches its explosive concentration, an open flame or strong vibration can trigger a primary explosion. The primary explosion lifts dust that had settled on equipment into the air, and if explosion conditions are met again, a secondary chain explosion follows, detonating every location where dust has accumulated and causing serious loss of life and property [3]. In addition, tobacco dust causes equipment wear and poor electrical contact. Dust may block or wear the moving parts of equipment, the suction and exhaust ducts, and the nozzles of drum equipment, shortening equipment service life. Free dust particles may also degrade the photoelectric switches, touch-type isolation switches, and electrical circuit contacts in the workshop and electric cabinets, resulting in poor signal contact. Therefore, it is very important to detect and warn of abnormal dust concentrations as soon as possible after the dust removal equipment in a tobacco workshop fails.

There are two main traditional methods for measuring dust in tobacco workshops: the gravimetric method [4] and the optical method [5]. The gravimetric method collects air samples from the workshop, captures the airborne dust on filter paper or another filter, and divides the dust weight on the filter by the volume of sampled air to obtain the dust concentration per unit volume. The method is simple to operate and direct in principle, and it has long been widely used for dust monitoring in tobacco workshops. However, it also has limitations. First, its results depend on the sampling time and air flow rate, so the sampling volume must be controlled accurately, and the weighing and calculation steps prevent real-time online monitoring. Second, the dust on the filter must be dried to remove moisture, which prolongs the measurement time and further reduces timeliness. Third, environmental factors such as temperature, humidity, and air flow rate affect the dust collection efficiency and introduce additional errors. Finally, the gravimetric method cannot distinguish dust of different particle sizes or compositions; it reflects only the overall dust concentration, so its information content is limited. Paper [6] used the gravimetric method to measure PM10 dust in the atmosphere.

Optical methods are widely used for dust measurement in flue gas, as they offer fast response, high sensitivity, and non-intrusive sampling. Their basic principle is to measure the optical properties of dust in flue gas, such as scattering, absorption, and shielding, using techniques such as the light scattering, light transmission, and light reflection methods. Photoelectric detectors, such as photomultiplier tubes and photodiodes, measure the light entering or passing through the sample, and the dust concentration or physical parameters are then calculated from empirical formulas that relate the optical properties to the dust characteristics. Compared with the gravimetric method, optical methods enable online, rapid monitoring and carry rich information about particle size distribution and composition, but they have the following shortcomings: (1) they are susceptible to ambient light interference, and since the light conditions in a tobacco workshop are complex and variable, the optical path must be chosen carefully or shielded from external light; (2) they place high requirements on the alignment and calibration of the detection optical path, as misalignment and spectral drift of the light source directly affect the results; (3) different types of flue gas require different empirical formulas, so the model must be rebuilt and recalibrated for each dust composition and structure; (4) they cannot identify the flue gas background or dust particle shape, leaving a certain inherent measurement error; and (5) long-term use requires frequent calibration and maintenance, which imposes a heavy maintenance workload.

Deep learning-based detection of air quality (pollutant concentration) is also a hot research topic. Literature [7] studied visible-light hyperspectral imaging and estimated the concentration of pollutants in the air using the existing VGG16 model. Literature [8] proposed an image-based deep learning model (CNN-RC) that integrates a convolutional neural network (CNN) and a regression classifier (RC) to estimate the air quality of a region of interest by extracting features from photos and classifying them into air quality levels. Literature [9] proposed an algorithm for estimating air pollutant concentrations in Pakistan based on a convolutional neural network and multi-pollutant satellite images. Literature [10] adopted an extended spatio-temporal convolutional long short-term memory network (CNN-LSTM) for pollutant concentration prediction. To comprehensively account for the spatio-temporal characteristics of pollutant data, they fed the model both the historical pollutant concentrations of the current observation site and those of k adaptively selected neighboring sites. By combining a long short-term memory network (LSTM) and a CNN to extract high-order spatio-temporal features, the prediction performance of the model was further improved.

Literature [11] proposed a method for estimating air pollutant concentrations based on multimodal image information fusion, computing the depth error between an image and its corresponding dehazed result. Literature [12] proposed a method that obtains air pollutant concentrations by extracting visual cues and an atmospheric index from a single photo. Literature [13] proposed a deep residual network model, AQC-Net, whose SCA (Scene-Condition-Attention) module strengthens the correlation between environmental images and air quality features, thereby improving the accuracy of a ResNet network for image-based air quality classification. These methods establish an effective connection between image data and air quality features and can improve the performance of air quality classification models. Literature [14] proposed a real-time indoor air quality detection method using a cooperative multi-robot system: multiple robots cruise along the shortest paths under wireless signal coverage and control, continuously collecting and uploading indoor pollutant data for real-time monitoring. This method has broad application prospects and supports the quality control and management of indoor environments.

The above methods study the relationship between images and air quality, using PM2.5 and PM10 concentrations as quantitative indicators. They explore the relationship between image features and pollutant concentrations through deep learning and achieve good results. It is therefore feasible to predict dust from the monitoring images of tobacco workshops by learning the relationship between image features and tobacco dust concentration. Moreover, there has been no research on estimating the dust concentration produced by indoor tobacco production.

Edge cloud can enhance the flexibility of computation and reduce latency, and it is also a hot topic of recent research. Literature [15] proposed edge-cloud co-evolutionary algorithms (ECCoEAs) to solve distributed data-driven optimization problems in which data is collected by edge servers. Literature [16] studied efficient resource scheduling on the edge cloud. Literature [17] studied multi-user task offloading in end-edge-cloud systems, where all user devices compete for limited communication and computation resources. Literature [18] studied energy-efficient resource allocation in heterogeneous edge-cloud computing and proposed a resource allocation algorithm that jointly optimizes power control, transmission scheduling, and offloading decisions between mobile devices and the edge cloud. Literature [19] proposed an adaptive deep neural network inference acceleration architecture for intelligent applications, which accelerates inference through end-edge-cloud collaborative computing. Literature [20] proposed a task offloading algorithm based on genetic-evolution particle swarm optimization for end-edge-cloud collaborative computing.

The purpose of this paper is to assess dust risk and issue early warnings based on the images captured by surveillance cameras. It is essential to detect and warn of dust as soon as it appears, and at that stage the spatial distribution of tobacco dust is uneven; dust concentration therefore has spatial attributes. Accordingly, to improve the accuracy and real-time performance of dust measurement in tobacco workshops, this paper proposes an image-based dust measurement method that exploits the spatial attributes of dust concentration. The method uses image processing and deep learning to extract features related to dust concentration from scene images and establishes the mapping between image features and dust concentration, thereby estimating the dust concentration in tobacco workshops. The method has the following advantages: (1) no sampling or weighing is needed; installing cameras or other image acquisition devices suffices for real-time monitoring of dust concentration; (2) it is unaffected by the physical characteristics of dust particles, since only the visual information in scene images is analyzed, without knowledge of dust density, refractive index, or other parameters; (3) it does not depend on a specific light source or optical path, and any clear, visible scene image can be processed; (4) deep learning automatically captures the complex nonlinear relationship between image features and dust concentration, without hand-crafted empirical formulas or calibration curves; and (5) it adapts to tobacco workshops of different types and sizes, requiring only scene-specific adjustment of the image processing and deep learning parameters.

This paper is organized as follows: the “Edge-cloud computing cooperation detection of dust concentration for risk warning” section introduces the model of the image-based dust measurement method for tobacco workshops; the “Experimental” section describes the experimental method; the “Result” section analyzes the experimental results; and the “Conclusion” section summarizes the main work and outlines future research directions.

Edge-cloud computing cooperation detection of dust concentration for risk warning

Edge-cloud computing collaborative dust concentration detection architecture

The volume of video data is substantial, and there are multiple video collection points within the factory. Transmitting all of it to central servers severely strains both network and computational resources, increasing latency and delaying risk alerts. To address this, we propose an edge-cloud computing architecture for collaborative dust concentration detection: edge clouds deployed at various locations within the factory process uploaded image data in real time, improving the timeliness of risk alerts. Figure 2 shows the architecture.

Fig. 2 Edge-cloud computing collaborative dust concentration detection architecture

Data preprocessing

Even under the same conditions, the image features captured by different cameras may differ slightly. To reduce the influence of camera model differences, this section first obtains the calibration model of the camera and then applies the inverse transformation to cancel the effect of the nonlinear transformation on feature extraction [21]. This yields more accurate feature extraction and thus improves the reliability and consistency of image processing. First, one camera in the tobacco production workshop is selected as the main camera, and its image serves as the reference image. To align the other cameras with the reference image, affine transformations are applied to their images; for this purpose, SURF descriptors are extracted from the images [22].

The light will be affected by particles in the air; this relationship is shown in formula 1:

$$F\left(a\right)=tr\left(a\right)K\left(a\right)+\left(1-tr\left(a\right)\right)G$$
(1)

The parameter F represents the observed light intensity, K the scene radiance, G the global ambient brightness, tr the propagation (transmission) function, and a the pixel position in the image.

In the same scene, adjacent pixels of the image are affected by the medium in the air. Therefore, the variance of the brightness can be calculated by formula 2.

$$P=\frac{1}{\left|CE\right|}{\sum }_{a\in CE}{\left(L\left(a\right)-LA\right)}^{2}$$
(2)

The parameter P represents the variance of the brightness, L(a) the brightness at pixel a, LA the mean brightness, and CE the set of all pixels in the image. Saturation is also easily affected by the medium in the air and can be calculated by formula 3.

$$V=1-\frac{3}{R+G+B}\,{\text{min}}(R,G,B)$$
(3)

where the parameter V represents the saturation of the image, and R, G, and B represent the values of the three color channels. The mean of the saturation gradient can be calculated by formula 4.

$${V}_{av}=\frac{1}{\left|CE\right|}{\sum }_{a\in CE}{{V}_{a}\left(a\right)}^{2}$$
(4)

where Va is the gradient of the saturation. The intensity of the dark channel [23] can be calculated by formula 5.

$${J}^{dark}\left(a\right)=\underset{{a}^{*}\in W(a)}{{\text{min}}}\left(\underset{c\in \{r,g,b\}}{{\text{min}}}{J}^{c}\left({a}^{*}\right)\right)$$
(5)

The parameter \(J^{dark}\) represents the dark channel, \(J^{c}\) is color channel c of the image J, and W(a) represents the window of pixels centered at pixel a. From the above formula, the propagation coefficient can be estimated as:

$$\widehat{t}r\left(a\right)=1-\underset{{a}^{*}\in W\left(a\right)}{{\text{min}}}\left(\underset{c\in \left\{r,g,b\right\}}{{\text{min}}}\frac{{I}^{c}\left({a}^{*}\right)}{{A}^{c}}\right),$$
(6)

where \(I^{c}\) is the observed intensity on channel c, and \(A^{c}\) is the global ambient intensity on channel c. A minimal sketch of this estimate is given below.
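To make the preprocessing concrete, the following is a minimal NumPy sketch of the dark-channel transmission estimate in formulas 5 and 6. The window size `patch` and the heuristic of estimating the ambient light from the brightest dark-channel pixels are illustrative assumptions; the paper does not specify these details.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Formula 5: minimum over the color channels, then over a local window."""
    min_rgb = img.min(axis=2)                  # per-pixel min over R, G, B
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def estimate_transmission(img, patch=15, top_frac=0.001):
    """Formula 6: tr(a) = 1 - min over window and channels of I^c(a*) / A^c."""
    dark = dark_channel(img, patch)
    # Assumed heuristic: take the ambient light A from the brightest
    # dark-channel pixels (common practice with the dark channel prior).
    n = max(1, int(dark.size * top_frac))
    idx = np.argsort(dark.ravel())[-n:]
    A = img.reshape(-1, 3)[idx].max(axis=0)
    return 1.0 - dark_channel(img / A, patch)

# Usage on a float RGB image with values in [0, 1]:
# tr = estimate_transmission(frame)   # per-pixel propagation coefficient
```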

Edge intelligence detection model of dust concentration for risk warning

This paper proposes an end-to-end spatial convolutional neural network (E2E-SCNN) that exploits spatial information, as shown in Fig. 3. The input image is processed by three sub-network channels that learn local, global, and spatial features respectively, and these three types of features are finally fused for dust concentration regression. Specifically, the network consists of the following modules:

Fig. 3 End-to-end spatial convolutional neural network

First, the local feature extraction branch uses continuous multi-layer convolution kernels to learn local texture features. This module adopts a VGG-style stack of small convolution kernels, applying successive 3 × 3 convolutions to extract local features and using padding to preserve the spatial resolution. To obtain a richer multi-scale feature expression, a down-sampling operation is added, and BatchNorm layers are used for normalization to accelerate and stabilize training. A sketch of this branch is given after this paragraph.
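As an illustration, the local branch described above might be sketched in PyTorch as follows; the channel widths and the number of stages are assumptions, since the paper does not specify them.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two successive 3x3 convolutions with padding=1 to keep resolution,
    each followed by BatchNorm and ReLU (VGG style)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class LocalBranch(nn.Module):
    """Local texture branch: stacked small-kernel blocks with down-sampling."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(3, 32), nn.MaxPool2d(2),    # down-sample for multi-scale
            conv_block(32, 64), nn.MaxPool2d(2),
            conv_block(64, 128),
        )

    def forward(self, x):
        return self.features(x)   # local feature map F_local
```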

At the same time, the global feature extraction branch uses dilated (atrous) convolution to obtain global context. The dilated convolution is calculated as follows.

$${F}^{\prime}\left(x,y\right)={\sum }_{i,j}W\left(i,j\right)F(x+ri,y+rj)$$
(7)

Here the input feature map is F, the convolution kernel is W, and the dilation rate is r; that is, the input F is sampled at intervals of r pixels and a standard convolution is then performed. The branch combines dilated convolutions with up-sampling and down-sampling to efficiently encode global features, and residual connections are used to avoid signal attenuation; see the sketch below.
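A corresponding sketch of the global branch is given below; the dilation rates (1, 2, 4) and the residual layout are illustrative assumptions.

```python
import torch.nn as nn

class DilatedResBlock(nn.Module):
    """Dilated 3x3 convolution (formula 7) wrapped in a residual connection
    to avoid signal attenuation."""
    def __init__(self, ch, dilation):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, kernel_size=3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return x + self.body(x)   # residual connection

class GlobalBranch(nn.Module):
    """Global context branch: growing dilation rates enlarge the receptive field."""
    def __init__(self, ch=64):
        super().__init__()
        self.stem = nn.Conv2d(3, ch, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(
            DilatedResBlock(ch, dilation=1),
            DilatedResBlock(ch, dilation=2),
            DilatedResBlock(ch, dilation=4),   # r = 4: samples every 4 pixels
        )

    def forward(self, x):
        return self.blocks(self.stem(x))       # global feature map F_global
```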

At the same time, spatial information is extracted through the position channel: a spatial transformation layer obtains the position features of the dust distribution. The core of the position channel is the spatial transformation layer, which changes the spatial position relationships of the feature points by applying an affine transformation to the feature map, thereby extracting spatial transformation features. The spatial transformation layer is expressed mathematically as follows:

Let the input feature map be F, the output transformed feature map be F′, and the spatial transformation parameter be \(\Theta\). Then:

$${F}^{\mathrm{^{\prime}}}\left(x,y\right)=F(\Theta(x,y))$$
(8)

where \(\Theta (x,y)\) defines the position in the input feature map to which pixel (x, y) is mapped. The transformation can learn operations such as rotation, scaling, and translation, and by applying spatial transformations repeatedly, feature maps under different spatial relations are obtained. The position channel first extracts primary features, then several spatial transformation layers learn the spatial-dimension changes of the feature map, and finally global pooling yields the spatial transformation feature. A sketch of one such layer follows.
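A minimal sketch of such a spatial transformation layer, using PyTorch's affine_grid and grid_sample, is given below; treating \(\Theta\) as a single learned global affine matrix is an assumption, since the paper does not detail its parameterization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformLayer(nn.Module):
    """Formula 8: F'(x, y) = F(Theta(x, y)) with a learned affine Theta."""
    def __init__(self, ch):
        super().__init__()
        # Small localization net regressing the 2x3 affine parameters.
        self.loc = nn.Sequential(
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(ch * 16, 32), nn.ReLU(inplace=True),
            nn.Linear(32, 6),
        )
        # Initialize to the identity transform.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)                    # affine parameters
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)    # resampled features
```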

After obtaining the local, global, and spatial features through the above steps, a three-channel fusion network is adopted: the three kinds of features are combined into a multi-scale feature expression. The feature maps of the three branches are concatenated along the channel dimension to form a unified feature map containing all features. To further enhance the fusion effect, weighted fusion is performed:

$${F}_{fused}={w}_{1}{F}_{local}+{w}_{2}{F}_{global}+{w}_{3}{F}_{spatial}$$
(9)

where \(w_1\), \(w_2\), and \(w_3\) are learned weight coefficients.

After the fused feature \({F}_{fused}\) is obtained, it is fed into a fully connected network whose input dimension matches that of the fused feature. Two hidden layers connect the input to the output layer. The ReLU activation function applied after each hidden layer is shown in formula 10.

$$ReLU\left(x\right)={\text{max}}(0,x)$$
(10)

After each hidden layer, a BatchNorm layer is added to speed up network convergence. The loss function in this paper is

$$MSE=\frac{1}{n}*\sum {(y-{y}_{pred})}^{2}$$
(11)

where \(y-{y}_{pred}\) is the difference between the true value y and the predicted value \({y}_{pred}\), and the squared errors are averaged over the n samples. A sketch of the fusion and regression head follows.
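Putting the branches together, a sketch of the weighted fusion and regression head of formulas 9 to 11 might read as follows, assuming each branch's feature map has been globally pooled to a vector of the same dimension `feat_dim`; the hidden-layer widths (256 and 64) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FusionRegressor(nn.Module):
    """Weighted fusion of the three branch features (formula 9) followed by a
    regressor with two hidden layers, ReLU (formula 10), and BatchNorm."""
    def __init__(self, feat_dim):
        super().__init__()
        self.w = nn.Parameter(torch.ones(3))   # learned weights w1, w2, w3
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(inplace=True), nn.BatchNorm1d(256),
            nn.Linear(256, 64), nn.ReLU(inplace=True), nn.BatchNorm1d(64),
            nn.Linear(64, 1),                  # scalar dust concentration
        )

    def forward(self, f_local, f_global, f_spatial):
        fused = self.w[0] * f_local + self.w[1] * f_global + self.w[2] * f_spatial
        return self.head(fused).squeeze(-1)

# Training minimizes the MSE loss of formula 11:
# loss = nn.MSELoss()(model(f_l, f_g, f_s), y_true)
```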

Experimental

To verify the effectiveness of the network, this paper collected 1000 images containing dust in the tobacco production workshop and manually annotated them. The data was augmented by rotation, noise addition, and similar operations, yielding 5000 training images in total. Among them, 80% of the images were randomly selected as the training set and the remaining 20% as the test set. The accuracy of the proposed method was determined by comparing the labels predicted on the test set with the true labels. The learning rate is set to 0.001. The parameters in Eq. (1) are: F, the observed light intensity; G, the global ambient brightness; K, the scene radiance; tr, the atmospheric transmittance (propagation function); and a, the pixel position. These parameters are obtained from the image sensor observations. The parameters in Eqs. (2)–(6) are: P, the image brightness variance; LA, the image brightness mean; CE, the image pixel set; V, the image saturation; Va, the saturation gradient; J^dark, the dark channel image; I^c, the intensity observed in channel c; and A^c, the global ambient intensity in channel c. These parameters are obtained by statistical calculation on the input image. The main parameters in the remaining equations are the weight coefficients w1, w2, and w3, which are learned during network training. This paper used the correlation coefficient (\({R}^{2}\)), the mean absolute error (MAE), Precision, and F1 to analyze the accuracy of the dust concentration estimation method for the tobacco production workshop. A prediction is considered accurate if the difference between the predicted value and the true value is at most 5%, and erroneous otherwise. The correlation coefficient can be calculated by formula 12:

$${R}^{2}=1-\frac{{\sum }_{i=1}^{n}{\left({{p}^{\mathrm{^{\prime}}}}_{i}-{p}_{i}\right)}^{2}}{{\sum }_{i=1}^{n}{\left({p}_{i}-\overline{p }\right)}^{2}},$$
(12)

where \({{p}^{\prime}}_{i}\) is the estimated dust value for the i-th test image, \({p}_{i}\) is the labeled value, n is the number of images in the test set, and \(\overline{p}\) represents the mean of the dust labels. \({R}^{2}\in [0,1]\); the higher its value, the higher the estimation accuracy of the algorithm.

The mean absolute error can be calculated by formula 13.

$$MAE=\frac{1}{n}\sum\nolimits_{i=1}^{n}|{p'}_{i}-{p}_{i}|$$
(13)

where \({{p}^{\prime}}_{i}\) is the estimated dust value for the i-th test image, and \({p}_{i}\) is the true value. MAE is non-negative, and the smaller its value, the higher the estimation accuracy of the algorithm.

$$Precision= TP/(TP+FP)$$
(14)

Where TP, TN, FP, and FN denote the counts of true positives, true negatives, false positives, and false negatives, respectively.

$$F1 = 2 * (Precision * R) / (Precision + R)$$
(15)

where Precision is given by formula 14 and R is the recall; in this paper, R = 1 is used. A sketch of the metric computation is given below.
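For clarity, the following is a small NumPy sketch of these evaluation metrics (formulas 12 to 15), including the 5% tolerance rule used to label a prediction as correct; this is our reading of the protocol, not code from the paper, and treating the 5% threshold as relative to the true value is an assumption.

```python
import numpy as np

def evaluate(pred, true, tol=0.05):
    """R^2 (formula 12), MAE (formula 13), and Precision/F1 (formulas 14-15)
    under the rule that a prediction within 5% of the truth is accurate."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    r2 = 1 - np.sum((pred - true) ** 2) / np.sum((true - true.mean()) ** 2)
    mae = np.mean(np.abs(pred - true))
    correct = np.abs(pred - true) <= tol * np.abs(true)   # 5% tolerance rule
    precision = correct.mean()          # TP / (TP + FP) over all predictions
    recall = 1.0                        # the paper sets R = 1
    f1 = 2 * precision * recall / (precision + recall)
    return {"R2": r2, "MAE": mae, "Precision": precision, "F1": f1}

# Example: evaluate(model_outputs, ground_truth_labels)
```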

To verify the efficiency of the proposed method, we compared several existing image-based air quality detection algorithms, RCT, FFN, and CNN-TL, with the E2E-SCNN model on the real dataset we collected. The same server configuration was used to train all models in the comparative experiments.

RCT [24] is a high-precision vision-based measurement (VBM) system built on the relationship between dust concentration and image transmission (RCT). It proposed an image transmission calculation method that exploits the atmospheric light scattering effect and the dust particle occlusion effect; the relationship tends to be negatively correlated and is fitted with a quadratic polynomial. Image transmission can eliminate the influence of atmospheric light scattering and dust particle occlusion on measurement accuracy.

FFN [25] used a regression ensemble based on deep neural networks to estimate the PM2.5 concentration in the air from outdoor images. The regression used a feed-forward network (FFN) to combine three convolutional neural network backbones (VGG, Inception, and ResNet variants) and compute the final PM2.5 prediction for an image.

CNN-TL [26] studied a PM2.5 detection method based on CNN transfer learning (CNN-TL), which used a deep convolutional neural network to classify natural images into categories according to their PM2.5 concentration.

Result

The study utilized a self-collected testing dataset and compared different algorithms. Table 1 presents the comparative results of the proposed algorithm against three other algorithms in terms of MAE and \({R}^{2}\) values. Figure 4 illustrates the comparison of partial predicted data values between the proposed algorithm and the other three algorithms.

Table 1 Experimental results of the E2E-SCNN algorithm
Fig. 4 Comparison of E2E-SCNN with three other algorithms for data value prediction

Table 2 compares the proposed E2E-SCNN with a dual-channel convolutional neural network (DCCNN), obtained by removing the spatial feature branch from E2E-SCNN. Figure 5 shows a comparison of partial predicted data values between the proposed algorithm and the dual-channel network.

Table 2 Comparison of the E2E-SCNN algorithm with a dual-channel convolutional neural network approach (DCCNN)
Fig. 5 Comparison of E2E-SCNN with DCCNN in predicting data values

From Table 1, it is evident that the proposed E2E-SCNN achieves the lowest MAE score, indicating that its estimates of the dust concentration in tobacco production facilities are closest to the labeled ground truth. The \({R}^{2}\) metric is also highest for E2E-SCNN, confirming its suitability for estimating dust concentration in tobacco production settings. Figure 4 further illustrates that E2E-SCNN outperforms the other three popular algorithms and aligns closely with the labeled ground truth. This superiority arises because the other algorithms are image-based air quality assessment methods designed mainly for particulate matter such as PM2.5 and PM10, which differs from the dust generated in tobacco production. The comparison with the dual-channel approach in Table 2 and Fig. 5 clearly shows that adding spatial feature extraction to the dual-channel CNN significantly improves the accuracy of the algorithm.

Table 3 compares algorithms built from different combinations of the global, local, and spatial feature extraction modules. The method that uses only global features is denoted OG, only local features OL, only spatial features OS, both global and local features GL, both global and spatial features GS, and both local and spatial features LS.

Table 3 The precision and F1 values of different algorithms

From the results in Table 3, it can be seen that the E2E-SCNN method proposed in this paper outperforms the other comparison methods on both Precision and F1-score, reaching 0.891 and 0.942 respectively. This indicates that, from the perspective of Precision and F1 evaluation, E2E-SCNN predicts the dust concentration more accurately and reliably than the other methods. Comparing the single-channel and dual-channel variants shows that the global features of the image contribute the most to dust prediction and that dual-channel networks outperform single-channel ones. The three-channel E2E-SCNN proposed in this paper performs best.

Conclusion

As public concern over environmental pollution and occupational health rises, the tobacco industry urgently needs to control the dust generated in workshop production processes to reduce its impact on the environment and human health. Conventional monitoring methods such as filtration devices have limited accuracy and cannot meet the refined supervision needs of complex working conditions. Non-contact image monitoring based on computer vision and deep learning provides a new solution, yet directly regressing the entire image with a single network struggles to represent the global and local features of complex scenes simultaneously, resulting in poor generalization. To achieve highly accurate intelligent monitoring and assessment of tobacco dust, this study develops an end-to-end three-channel convolutional neural network (E2E-SCNN). The E2E-SCNN comprises three branches: one extracting local detail features, another encoding global structural information, and a third representing spatial distribution. The features from these different perspectives are fused at the end of the network to obtain a comprehensive image representation. By exploiting the complementary strengths of the different features, this divide-and-conquer strategy captures the overall image better than directly regressing the whole image. To validate the model, we constructed a richly annotated tobacco production workshop image dataset and used metrics such as MAE and \({R}^{2}\) to analyze prediction errors against existing image estimation algorithms and dual-channel networks. The results demonstrate the outstanding performance of the E2E-SCNN: the proposed three-channel convolutional network framework significantly enhances prediction accuracy and robustness. The method offers a low-cost, easily deployable intelligent monitoring approach for tobacco workshops and a technological roadmap that other environmental parameter detection tasks can draw on. Future work will expand the dataset and advance system-level engineering practice for broader applications; continued cross-disciplinary exchange is expected to provide novel directions in the field of image analysis.

Although this study collected and constructed a tobacco production workshop image dataset, time and resource constraints leave the dataset deficient in scale and scene coverage. The existing dataset contains 1000 tobacco production workshop images, which may not fully cover the complex condition changes of real production environments, posing a risk of sample selection bias. In addition, the preprocessing applied to the dataset may introduce some degree of systematic bias, which also needs further verification. These factors may limit the generalization performance of the model and prevent any guarantee of applicability in all real scenarios. To address these data quality issues, our follow-up work will: 1) continue to collect, integrate, and annotate image data from more scenes in real tobacco workshops; 2) construct and optimize a data augmentation module that synthesizes more samples to expand the data scale and scene range; 3) train with cross-validation and multiple train/test splits, strictly evaluating the generalization ability of the model; and 4) examine the impact of the existing preprocessing methods and adjust them if necessary to avoid introducing bias. We believe the comprehensive application of these measures will greatly improve data quality, reduce the potential sample selection and systematic biases, and enhance the model's adaptability and prediction accuracy in complex real-world scenarios.

Availability of data and materials

No datasets were generated or analysed during the current study.

References

1. Zaga V, Dell’Omo M, Murgia N et al (2021) Tobacco worker’s lung: a neglected subtype of hypersensitivity pneumonitis. Lung 199:13–19

2. Patel J, Parmar R, Solanki H et al (2023) Occupational health problems among tobacco processing factory workers at Kheda District, Gujarat: a cross-sectional study. J Pharm Negat Results 1378–1387

3. Slobodyan O, Zaets V, Neschadym L et al (2015) Cause of the fire at the food industry enterprises. Electronic National University of Food Technologies Institutional Repository 3(2):61–269

4. Mohammadyan M, Baharfar Y (2012) Evaluation of tobacco dust and designing of local exhaust ventilation (LEV) systems in a tobacco processing industry. Int J Occup Hyg 4(1):47–52

5. Pinnick RG, Fernandez G, Hinds BD (1983) Explosion dust particle size measurements. Appl Opt 22(1):95–102

6. Gębicki J, Szymańska K (2012) Comparative field test for measurement of PM10 dust in atmospheric air using gravimetric (reference) method and β-absorption method (Eberline FH 62-1). Atmos Environ 54:18–24

7. Mukundan A, Hong-Thai N, Wang HC (2022) Detection of PM2.5 particulates using a snap-shot hyperspectral imaging technology. In: Conference on Lasers and Electro-Optics/Pacific Rim. Optica Publishing Group, Sapporo, CPDP_08

8. Kow PY, Hsia IW, Chang LC et al (2022) Real-time image-based air quality estimation by deep learning neural networks. J Environ Manage 307:114560

9. Ahmed M, Xiao Z, Shen Y (2022) Estimation of ground PM2.5 concentrations in Pakistan using convolutional neural network and multi-pollutant satellite images. Remote Sens 14(7):1735

10. Wen C, Liu S, Yao X et al (2019) A novel spatiotemporal convolutional long short-term neural network for air pollution prediction. Sci Total Environ 654:1091–1099

11. Wang G, Shi Q, Wang H et al (2022) Multi-modal image feature fusion-based PM2.5 concentration estimation. Atmos Pollut Res 13(3):101345

12. Yao S, Wang F, Huang B (2022) Measuring PM2.5 concentrations from a single smartphone photograph. Remote Sens 14(11):2572

13. Zhang Q, Fu C, Tian R (2020) A deep learning and image-based model for air quality estimation. Sci Total Environ 724:138178

14. Hu Z, Cong S, Song T et al (2020) AirScope: mobile robots-assisted cooperative indoor air quality sensing by distributed deep reinforcement learning. IEEE Internet Things J 7(9):9189–9200

15. Guo X-Q, Chen W-N, Wei F-F, Mao W-T, Hu X-M, Zhang J (2023) Edge-cloud co-evolutionary algorithms for distributed data-driven optimization problems. IEEE Trans Cybern 53(10):6598–6611. https://doi.org/10.1109/TCYB.2022.3219452

16. Kaur M, Kadam S, Hannoon N (2022) Multi-level parallel scheduling of dependent-tasks using graph-partitioning and hybrid approaches over edge-cloud. Soft Comput 26(11):5347–5362

17. Chen Y, Zhao J, Wu Y et al (2024) QoE-aware decentralized task offloading and resource allocation for end-edge-cloud systems: a game-theoretical approach. IEEE Trans Mob Comput 23(1):769–784

18. Hua W, Liu P, Huang L (2023) Energy-efficient resource allocation for heterogeneous edge-cloud computing. IEEE Internet Things J. https://doi.org/10.1109/JIOT.2023.3293164

19. Liu G, Dai F, Xu X et al (2023) An adaptive DNN inference acceleration framework with end-edge-cloud collaborative computing. Future Gener Comput Syst 140:422–435

20. Wang B, Wei J (2023) Particle swarm optimization with genetic evolution for task offloading in device-edge-cloud collaborative computing. In: International Conference on Intelligent Computing. Springer Nature Singapore, Singapore, pp 340–350

21. Xi T, Tian Y, Li X et al (2019) Pixel-wise depth-based intelligent station for inferring fine-grained PM2.5. Future Gener Comput Syst 92:84–92

22. Bay H, Tuytelaars T, Van Gool L (2006) SURF: speeded up robust features. In: Proceedings of the 9th European Conference on Computer Vision (ECCV 2006), Graz, pp 404–417

23. He K, Sun J, Tang X (2011) Single image haze removal using dark channel prior. IEEE Trans Pattern Anal Mach Intell 33(12):2341–2353

24. Li G, Wu J, Luo Z et al (2019) Vision-based measurement of dust concentration by image transmission. IEEE Trans Instrum Meas 68(10):3942–3949

25. Rijal N, Gutta RT, Cao T et al (2018) Ensemble of deep neural networks for estimating particulate matter from images. In: 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC). IEEE, Chongqing, pp 733–738

26. Chakma A, Vizena B, Cao T et al (2017) Image-based air quality analysis using deep convolutional neural network. In: 2017 IEEE International Conference on Image Processing (ICIP). IEEE, Beijing, pp 3949–3952


Funding

Not applicable.

Author information


Contributions

Q.S., H.S.W. and Z.J.L. wrote the main manuscript text. H.Y.Z. and Y.C. prepared all figures. J.L. and X.L. conducted the simulation experiments for the paper. All authors reviewed the manuscript.

Corresponding author

Correspondence to Zijuan Li.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Reprints and permissions

About this article


Cite this article

Su, Q., Wang, H., Zhao, H. et al. Edge-cloud computing cooperation detection of dust concentration for risk warning research. J Cloud Comp 13, 7 (2024). https://doi.org/10.1186/s13677-023-00573-w

