

Target tracking using video surveillance for enabling machine vision services at the edge of marine transportation systems based on microwave remote sensing

Abstract

This paper investigates automatic target tracking in emerging remote sensing video tools based on microwave imaging technology and radar. A low-complexity, fast moving-target tracking system is proposed for implementation on edge nodes of a mini-satellite or drone network, bringing machine intelligence to large-scale vision systems, in particular for marine transportation systems. The system uses a group of image processing tools for video pre-processing and Kalman filtering for the main tracking task. To test system performance, two measures, detection accuracy and false-alarm probability, are computed on real vision data. Two types of scenes are analyzed: a scene with a single target, and a scene with multiple targets, which is more challenging for automatic target detection and tracking systems. The proposed system achieved high performance in our tests.

Introduction

Automatic satellite and aerial surveillance has received much attention from industry, governments, and environmental agencies for many years. Today, Earth observation and surveillance can be performed reliably using remote sensing satellites, drones, and other sophisticated facilities for persistent and periodical surveillance [1]. For example, these tools are used to monitor cities, analyze weather, protect the environment (e.g., vegetation), and watch the political boundaries of countries for national security purposes.

A key application of persistent (online, real-time) and pervasive surveillance is in military services, whereas periodical surveillance is mainly used for environmental purposes. Persistent surveillance is generally more expensive because it must provide real-time services, whereas periodical surveillance is cheaper and may not be real-time (either offline or semi-real-time) [2].

One of the main applications of automatic surveillance in the real world is monitoring maritime transportation at sea. This may serve commercial, political, or military purposes. The commercial use is the main focus of this paper, where online monitoring of trade and fishing ships, marine vehicles, and their supporting stations is performed for goals such as economic management, traffic management, and security or safety. In general, video-assisted surveillance systems for ships mainly use ground-based cameras placed on marine stations and offshore structures [3, 4]. Nevertheless, such surveillance systems suffer from numerous obstacles and shortcomings, including limited coverage and a complicated maintenance and repair process. Fortunately, space-borne and airborne surveillance using satellites and drones (unmanned aerial vehicles (UAVs)) is an ideal solution to overcome these weaknesses.

Continuous vision-providing satellites and real-time/semi-real-time drones are a relatively fresh technology. For a long time, these platforms have provided offline monitoring services based on RGB cameras and other optical and radar sensors that capture still images; producing real-time video is a new goal of the remote sensing industry in recent years. In addition to classical optical sensors such as infrared and visible-light cameras (including multi-spectral imaging (MSI) sensors), radars, which work based on microwave imaging technology (here, active imaging), are used in the remote sensing and surveillance industry. Unlike optical sensors, which are passive and cannot work at night or in all weather conditions (although infrared offers some weak capability at night), active microwave sensors in radar remote sensing face no problem in such situations. Therefore, in recent years, many detection and recognition algorithms have been suggested to exploit this day-and-night, all-weather capability of radar imaging [5,6,7,8]. Radar sensors are well deployed on aerial and space-borne platforms and have recently been used to produce remote sensing videos [9,10,11,12,13,14]. Compared with optical images and videos, radar images and videos offer higher spatial resolution, even though they strongly lack natural spectral information such as real color. Colors can be very useful for detecting and recognizing moving vehicles and are a good source of AI features for intelligent vision systems. Nowadays, the technology trend is to combine the capabilities of both imaging modalities to boost visual performance and benefit from their advantages simultaneously, for instance, joint synthetic aperture radar (SAR)-optical image fusion and SAR image colorization using deep learning [15, 16].

The main idea behind this work is to propose a new framework of processing blocks for an edge-enabled online target detection and tracking system. In fact, we describe an architecture built from existing machine vision techniques. Online monitoring on remote sensing platforms in space or in the sky is a crucial new requirement of the remote sensing industry. To perform tasks such as detection and tracking, the remote sensing data should be processed on board. However, on-board processing of high-volume data in real time can be difficult on a small platform such as a drone or mini-satellite. Today, improving the processing hardware or decreasing the computational complexity of the algorithms is no longer the only path to real-time implementation. Fortunately, the edge-fog-cloud architecture is a new solution for realizing real-time performance on such platforms. Traditional cloud-based processing is now being replaced with cloud-edge or cloud-fog-edge processing, where the edge comprises local processors at or near the thing layer of an Internet of Things (IoT) infrastructure [17, 18]. This model helps schedule processing tasks based on their complexity and/or priority, so real-time algorithms run at the edge to reduce delays. In the application considered in this paper, the edge processors can be other nearby mini-satellites or drones organized in an ad-hoc network of remote sensing nodes. Figure 1 shows how the edge processors work (a minimal scheduling sketch is given after Fig. 1). A supporting node, either a drone or a mini-satellite, can act as a communication relay and an edge computing server at the same time, if the network needs such a capability. In addition, a surveillance network can combine UAV networks and mini-satellite networks to increase the performance of computing and communication services. All UAVs/mini-satellites can send and receive data, but the relay node among them, which is responsible for contact with the ground stations, should have a larger energy storage capacity and better transceivers for long-range communications.

Fig. 1

Edge-based processing in an aerial UAV ad-hoc network; UAV_1 is the remote sensing imaging platform, UAV_2 and UAV_3 are edge computing resources, and UAV_4 acts as the cluster head in a clustered network
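To make the scheduling idea concrete, the following is a minimal sketch of latency-aware task placement across the three tiers. The `Task` record, capacities, and link delays are illustrative assumptions for demonstration, not parameters of the deployed system.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str          # e.g., "denoise", "kalman_track"
    complexity: float  # estimated work (arbitrary units)
    deadline_s: float  # latency budget in seconds

# Illustrative capacities (work units per second) of the three tiers.
ONBOARD_CAPACITY = 1.0   # imaging platform itself (UAV_1)
EDGE_CAPACITY = 5.0      # nearby UAV/mini-satellite (UAV_2, UAV_3)
CLOUD_CAPACITY = 50.0    # ground station/cloud, reached via the relay
EDGE_LINK_DELAY_S = 0.05
CLOUD_LINK_DELAY_S = 2.0

def place_task(task: Task) -> str:
    """Pick the closest tier whose total latency meets the deadline."""
    options = [
        ("onboard", task.complexity / ONBOARD_CAPACITY),
        ("edge", EDGE_LINK_DELAY_S + task.complexity / EDGE_CAPACITY),
        ("cloud", CLOUD_LINK_DELAY_S + task.complexity / CLOUD_CAPACITY),
    ]
    feasible = [(tier, t) for tier, t in options if t <= task.deadline_s]
    # Prefer the closest feasible tier; fall back to the fastest overall.
    return feasible[0][0] if feasible else min(options, key=lambda o: o[1])[0]

print(place_task(Task("kalman_track", complexity=2.0, deadline_s=0.5)))  # -> "edge"
```

With these numbers, a real-time tracking task that overwhelms the imaging platform but cannot tolerate the ground-link delay naturally lands on a neighboring edge node, which is exactly the role of UAV_2 and UAV_3 in Fig. 1.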

Machine vision techniques in marine remote sensing

Although most satellite images are taken by optical sensors and may be corrupted by bad weather, clouds, sea waves, and so on, some research has focused on optical images to detect ships [19,20,21,22]. Most classic detection techniques in marine remote sensing use classic learning models, for example supervised statistical learning methods. The two main directions of detection strategies in remote sensing systems are removing false alarms (FAs) and finding the main objects. For example, such techniques can extract marine vehicles such as ships based on discrepancies in the scene and the gray-level difference between potential targets and the image background [23, 24]. Most algorithms then use properties of the vehicles, such as shape in template matching, or other features fed to a classifier to recognize the targets [25]. Some existing methods use prior information about the offshore environment to determine the sea areas, which helps to find the real targets much more reliably [26].

The low temporal resolution of many satellite imaging sensors has made recognition and tracking of ships inaccurate and limited. Today, video-surveillance-capable low Earth orbit (LEO) satellites are available thanks to major progress in camera technology, in terms of both spatial and temporal resolution. In recent years, a number of studies on video satellites have achieved reliable detection, recognition, and tracking of static and moving objects [27,28,29,30]. Notably, satellite video and real-time persistent monitoring are crucial for maritime applications. Among the three key artificial intelligence (AI) tasks in SAR video systems, i.e., detection, recognition, and tracking, most current research has focused on detection, mainly of ships for maritime systems. The lack of research on the other two tasks, video recognition and tracking, is clearly visible in the related literature. The present study proposes a solution that jointly detects and tracks moving objects (mobile targets). Our result is a remote sensing data processing system that benefits from microwave imaging with SAR sensors and from edge computing. It helps to find and track ships, whether commercial or military/combat vehicles (and possibly equipment such as fighters), on the sea surface. In detail, the system uses the Kalman filter along with additional pre-processing; a main part of the processing is pre-processing of the radar frames to enhance the tracking step. Note that in SAR imaging technology, what is actually tracked for a moving object/target is its radar shadow, so we track the shadow of the real object; for simplicity, throughout the text the shadow is referred to as the object/target. We expect that modern AI and edge computing tools that perform well in other areas of research [31, 33], for example LSTMs with their strong capability for temporal data [31, 32], could further improve the performance of our system.

Organization

This paper is organized as follows. The second section reviews the methods used from the literature and assembles the proposed system. The third section provides all tests and results. The last section concludes the research.

Materials and methods: basic concepts and proposed system

This section is presented in two subsections. First, the basic tools and pre-processing steps are described; then, Kalman filtering and the proposed system are introduced.

SAR images of the oceans are captured from a very long distance, and they suffer from complicated noise and artifacts. Overall, the two biggest categories of noise stem from the multiplicative artifact of the imaging system, mainly known as speckle noise, and from the low signal-to-noise ratio (SNR) of the imaging system caused by insufficient transmission power of the sensor and the height of the aerial or satellite platform.

To overcome this issue, a pre-processing step of noise removal is essential. Noise removal, or more realistically noise reduction, is not the only pre-processing step here. Other steps include spatial image enhancement (usually optional); since SAR noise behaves in a nonlinear, multiplicative way, nonlinear filters should normally be used. Moreover, due to the semi-sparse nature of SAR images, histogram equalization and morphological operations may be required. All of these steps are called pre-processing, and they strongly affect the quality of detection, recognition, and tracking.

Noise removal

As is well known, one of the most discussed causes of poor digital image quality is noise. SAR images are at particular risk of being affected by noise, a kind of added signal. Adding noise to the main information of an image reduces the quality of experience (QoE) perceived by an end-user (human) or an AI interpreter. Noise is sometimes split into two types of distortion depending on its origin: ambient or environmental noise, and internal noise or artifacts of the imaging device or processing tools. In this research, we treat them all in the same way and consider their mixture as a complicated noise with multiplicative nonlinear components; the denoising tools must therefore be effective enough to compensate. For SAR images, unlike optical images, the ambient components are not as harmful as the internal components. Hence, the noise arises from one or more sources in the processing algorithms of the sensor and its surrounding processors, at both the signal processing and data processing stages. The best-known signal processing noise is the speckle artifact, and the most common data processing noise is the compression artifact. If no compensation process is applied to remove the noise, it will affect subsequent data processing steps such as edge detection and object detection. The compensation process is a kind of filtering. Noise-reduction filters often use a process called masking to sweep all pixels of an input image. In the simplest description, these filters compute a measure of central tendency from the neighboring pixels. For example, the mean filter is a simple linear solution for some kinds of additive noise, e.g., Gaussian noise. Nonetheless, for SAR images, linear filters are not sufficiently helpful given the complicated nature of the images: not only may they fail to remove strong speckle noise, but they also destroy the image edges. Regardless of the differences among candidate filters, all of them are low-pass. A low-pass filter with an averaging mechanism refines each pixel of an input image by masking, such that an average of the neighboring pixels in the local mask is computed and the central pixel is replaced with it. Here, "averaging" is a general term indicating some central tendency measure of the local inputs of the mask.

The most common measures of central tendency are the mean and the median, which in their basic forms yield linear and nonlinear filters, respectively. The mean filter works on Gaussian and Poisson noise, whereas the median is more appropriate for impulsive noise such as salt-and-pepper and for the multiplicative speckle found in SAR data. In addition, many extensions of these two are available in the literature, for example the Gaussian mean, which is a Gaussian low-pass filter (a linear filter in image processing). As a result, the first step before further processing of SAR images is to enhance them using denoising filters. A main tuning parameter of the denoising masks is their window size. With a larger window, noise is reduced well, but the main image information, such as edges, is destroyed as well. With a small window, image edges are preserved acceptably, but the noise is not removed as desired. Hence, a trade-off usually governs the choice of window size.
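As an illustration, the following is a minimal sketch of this window-size trade-off using standard OpenCV filters; the file name and kernel sizes are placeholders chosen for demonstration, not the settings used in our tests.

```python
import cv2

# Load a SAR frame as a single-channel 8-bit image (path is a placeholder).
frame = cv2.imread("sar_frame.png", cv2.IMREAD_GRAYSCALE)

# Linear mean filter: acceptable for additive Gaussian-like noise,
# but it blurs edges and leaves strong speckle largely intact.
mean_denoised = cv2.blur(frame, (5, 5))

# Nonlinear median filter: better suited to impulsive/speckle-like noise.
# The kernel size controls the trade-off discussed above:
#   small window -> edges preserved, residual noise remains;
#   large window -> stronger smoothing, edges degraded.
median_small = cv2.medianBlur(frame, 3)   # keeps edges, weaker denoising
median_large = cv2.medianBlur(frame, 7)   # stronger denoising, softer edges

cv2.imwrite("median_small.png", median_small)
cv2.imwrite("median_large.png", median_large)
```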

Image enhancement

Using high-boost filters and a fusion of local and global contrast information can help to find targets among fake objects during tracking. Even though radar imaging does not require natural light around the Earth to provide remote sensing services and data, atmospheric elements that can ruin optical images may sometimes affect radar images as well, although their effect is not as severe as for optical imaging systems. As usual, the impact of these atmospheric sources is modeled as noise. Pre-processing techniques such as dehazing and fog removal are used to enhance images, in addition to general solutions including denoising, resolution enhancement, contrast improvement (including gamma correction, histogram matching, and histogram equalization), and edge enhancement (and high-boosting). In particular, the contrast of radar images taken by a SAR sensor is normally not acceptable, so this problem must be solved before the main processing.

All of the above concerns single-channel microwave imaging on SAR platforms that provide radar imaging services for a variety of applications. There are also other kinds of SAR sensors that provide multi-channel spectral information such as virtual color. For example, polarimetric SAR uses the polarization modes of microwave systems to synthesize a virtually colored radar image, and multi-band SAR with different microwave ranges can also produce color images. For such color radar images, low-contrast correction is often unnecessary because of the available spectral information, but for single-channel two-dimensional (2D) SAR, handling the image contrast is a very helpful pre-processing step. It can increase the image's energy, entropy, and variance, and ultimately the accuracy of the AI units in data post-processing.
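A minimal sketch of the contrast step follows, combining histogram equalization with an optional gamma correction; the gamma value and file names are illustrative assumptions, not tuned settings.

```python
import cv2
import numpy as np

def enhance_contrast(gray: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """Histogram equalization followed by gamma correction (8-bit input)."""
    equalized = cv2.equalizeHist(gray)
    # Build a lookup table mapping [0, 255] through the power law.
    table = (np.linspace(0.0, 1.0, 256) ** gamma * 255).astype(np.uint8)
    return cv2.LUT(equalized, table)

# Example usage on a denoised SAR frame (placeholder file name).
frame = cv2.imread("median_small.png", cv2.IMREAD_GRAYSCALE)
enhanced = enhance_contrast(frame, gamma=0.8)
cv2.imwrite("enhanced.png", enhanced)
```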

Morphological processing

As mentioned earlier, the median is a nonlinear filter that can equalize its inputs. Here, we use its standard 2D form on the SAR images, specifically for non-target regions, mainly for areas affected by speckle and salt-and-pepper-like noise. Image enhancement based on histogram equalization, which modifies the contrast, is the second main pre-processing step. Finally, a filtering step based on morphological operators completes the pre-processing.

Morphological tools are image processing operations widely used to change a group of pixels or to process shapes. Among the well-known morphological operations, we use binary dilation. A binary image has only two levels, and the filter mainly acts on the non-dark pixels of a binary image. Like the other image filters, it slides a 2D mask (structuring element) over the image.
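The following is a minimal sketch of this step, assuming the enhanced frame is first binarized with a global threshold; the threshold and structuring-element size are illustrative choices.

```python
import cv2
import numpy as np

def binarize_and_dilate(enhanced: np.ndarray,
                        thresh: int = 180,
                        kernel_size: int = 5) -> np.ndarray:
    """Threshold an enhanced SAR frame and dilate the bright regions.

    Bright pixels (candidate target/shadow regions) are set to 255,
    then dilation grows them so fragmented responses merge into blobs.
    """
    _, binary = cv2.threshold(enhanced, thresh, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    return cv2.dilate(binary, kernel, iterations=1)

# Example usage (placeholder file name from the previous step).
enhanced = cv2.imread("enhanced.png", cv2.IMREAD_GRAYSCALE)
mask = binarize_and_dilate(enhanced)
cv2.imwrite("binary_mask.png", mask)
```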

Kalman filtering and tracking

Tracking has been a hot research topic in video processing for decades; indeed, the difference between a still image and a video sequence rests on such temporal matters, and tracking in SAR videos is one of the hottest topics of recent research. After the pre-processing discussed in the prior sections, we review some details of Kalman filtering here. This filter is an estimator, a linear model driven by error measurements. Kalman filtering provides an algorithm for estimating the state of a dynamic system over time. In the literature of advanced statistical algorithms, specifically statistical signal processing, the Kalman filter is considered a Bayesian estimator. The algorithm is implemented in two main steps. The first step, prediction, provides the current estimate of the state variables together with their uncertainty. When the next set of measurements is recorded, the past estimate is updated using a weighted average; this style of update gives more weight to information with higher certainty and reliability. The algorithm is recursive: it works with new inputs and past states. It is often assumed that all errors of the inputs are Gaussian; if the inputs do not actually follow this assumption, the accuracy of the algorithm decreases. In brief, the Kalman filter generates the best estimate of the system only if its assumptions hold. The filter produces a state prediction and then compares it with the measured information; finally, it sets a weighting based on the discrepancy between prediction and measurement to form a new estimate for the next moment. We do not review the filter in full computational detail (see [34] for more background), but to give a moderate review, a number of equations describing the filter's operation are given as follows.

The estimate at time t_i is computed according to Eq. 1.

$$\hat{E}(t_i)=G(t_i)\left[x(t_i)-M(t_i)\,\hat{P}(t_i)\right]$$
(1)

where G(t_i) is the gain, M(t_i) is the measurement matrix, and x(t_i) is the system input. P denotes the prediction, given by Eq. 2.

$$\hat{P}(t_i)=A(t_i)\,\hat{E}(t_{i-1})$$
(2)

In Eq. 2, A(t_i) is the system matrix. The term with argument t_{i-1} shows that the prior information is continuously reused to compute the next estimate. The gain matrix is obtained from the covariance matrix; for more details, refer to the textbooks. A minimal sketch of this predict-update recursion follows.
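To make the recursion concrete, here is a minimal scalar sketch of the predict/update cycle in the standard gain-based form (rather than the paper's exact matrix notation); the noise variances and the constant-state model are illustrative assumptions.

```python
import numpy as np

def kalman_1d(measurements, a=1.0, m=1.0, q=1e-3, r=0.5):
    """Scalar Kalman filter: state transition a, measurement matrix m,
    process noise variance q, measurement noise variance r."""
    est, var = 0.0, 1.0          # initial state estimate and variance
    history = []
    for x in measurements:
        # Predict step (cf. Eq. 2): propagate the previous estimate.
        pred = a * est
        pred_var = a * var * a + q
        # Update step (cf. Eq. 1, gain form): weight prediction vs. measurement.
        gain = pred_var * m / (m * pred_var * m + r)
        est = pred + gain * (x - m * pred)
        var = (1.0 - gain * m) * pred_var
        history.append(est)
    return np.array(history)

# Example: noisy observations of a constant intensity level of 100.
rng = np.random.default_rng(0)
obs = 100 + rng.normal(0, 5, size=50)
print(kalman_1d(obs)[-1])  # converges toward 100
```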

Now we focus on its use in video processing. Suppose s(x,y,t_i) is a sample pixel of a frame at t_i, and ŝ(x,y,t_i) is its estimated value. Let m(x,y,t_{i-1}) be a binary map that determines whether the pixel at location (x,y) belongs to the background or to a moving object at t_{i-1}; it is formulated in Eq. 3, where the value 1 indicates that the pixel at (x,y) is moving and 0 denotes the relatively static background. The threshold Th in Eq. 3 is defined in Eq. 4.

$$m(x,y,t_{i-1})=\begin{cases}1 & \text{if } P(x,y,t_{i-1})\geq Th(x,y,t_{i-1})\\ 0 & \text{otherwise}\end{cases}$$
(3)
$$Th(x,y,t_i)=\left|\hat{s}(x,y,t_i)-s(x,y,t_i)\right|$$
(4)

Depending on whether 0 or 1 occurs, the gain factor differs. As noted, the algorithm is recursive: the system's prior information is included to estimate the current state without the need to store all measurements. The intensity of every pixel is estimated as a state of the system so it can be flagged as a background pixel. In this method, a threshold is set according to the equations to specify whether each pixel is ultimately part of a moving object or of the background. A per-pixel sketch of this idea follows.
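The following is a minimal per-pixel sketch of this background/foreground separation, simplified so that the gain is a fixed constant chosen by the motion flag; the gain values and threshold scale are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

def track_foreground(frames, gain_bg=0.1, gain_fg=0.01, k=2.5):
    """Per-pixel recursive background estimate with a motion mask.

    frames : iterable of 2D arrays (grayscale SAR frames)
    gain_bg: update gain for pixels judged to be background
    gain_fg: smaller gain for pixels judged to be moving objects
    k      : threshold scale on the running absolute residual
    """
    it = iter(frames)
    background = next(it).astype(np.float64)      # initialize with frame 1
    residual = np.full_like(background, 1.0)      # running |error| estimate
    masks = []
    for frame in it:
        frame = frame.astype(np.float64)
        error = frame - background                # prediction error per pixel
        mask = np.abs(error) >= k * residual      # Eq. 3-style motion flag
        gain = np.where(mask, gain_fg, gain_bg)   # smaller gain on movers
        background += gain * error                # recursive state update
        residual = 0.9 * residual + 0.1 * np.abs(error)
        masks.append(mask.astype(np.uint8) * 255)
    return masks

# Example: synthetic sequence with a bright blob drifting to the right.
frames = []
for t in range(10):
    f = np.zeros((64, 64)) + 10
    f[30:34, 5 + 4 * t: 9 + 4 * t] = 200          # moving "target"
    frames.append(f)
print(track_foreground(frames)[-1].sum() // 255)  # pixels flagged as moving
```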

Proposed system: integration

The proposed system does not consist of new algorithms; rather, it is an optimized combination of the reviewed algorithms for finding and tracking moving objects in SAR videos. We tried to heuristically optimize the performance for the radar dataset. The procedure presented below is a brief description of the proposed system; in the next section, the test results are provided along with visual outputs. This system does not require the complicated computations of supervised machine learning systems, so it can easily be executed at the edge.

The last two steps are presented in the next part of the paper; a minimal end-to-end sketch of the pipeline described so far is given below.
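As a summary, here is a minimal sketch chaining the pre-processing and tracking stages described above. It is a sketch under stated assumptions (fixed gain, illustrative kernel sizes and thresholds), not the exact per-dataset settings used in our tests.

```python
import cv2
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Denoise, enhance contrast, then binarize and dilate (illustrative settings)."""
    denoised = cv2.medianBlur(frame, 5)
    enhanced = cv2.equalizeHist(denoised)
    _, binary = cv2.threshold(enhanced, 180, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.dilate(binary, kernel)

def track(frames, gain=0.1, k=40.0):
    """Simplified recursive (Kalman-style) per-pixel tracker on binary maps."""
    background = preprocess(frames[0]).astype(np.float64)
    boxes_per_frame = []
    for frame in frames[1:]:
        binary = preprocess(frame).astype(np.float64)
        moving = (np.abs(binary - background) >= k).astype(np.uint8)
        background += gain * (binary - background)   # recursive update
        # Report bounding boxes of connected moving blobs.
        n, _, stats, _ = cv2.connectedComponentsWithStats(moving)
        boxes_per_frame.append([tuple(stats[i, :4]) for i in range(1, n)])
    return boxes_per_frame

# Example usage on 8-bit grayscale SAR frames loaded elsewhere:
# frames = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in frame_paths]
# for boxes in track(frames): print(boxes)
```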

Results and evaluation

The data used for testing the proposed system comes from the ICEYE Company and is freely available on its website (https://www.iceye.com/blog/iceye-sar-videos-published-technical-insights-and-highlights); ICEYE is a European start-up for satellite services and SAR videos. Our tests fall into two parts: pre-processing and tracking. For pre-processing, there is no separate test, but its independent visual outputs are illustrated. The tracking result, on the other hand, is an integrated output of both pre-processing and Kalman filtering. Figure 2 shows the visual performance of the pre-processing steps on the first dataset, in which numerous moving objects exist. Every setting produced a different output; we do not discuss the setting details here because they depend entirely on the input frame, so the settings must be changed for each new input (by an expert) to select the best one.

Fig. 2

Visual outputs of the pre-processing step on dataset_1. The first row shows denoised frames with various settings; the second row shows the denoised frames above after applying the enhancement filters; finally, morphology is applied to the enhanced frame above to produce a binary matrix extracting possible moving objects from the background. This clearly shows that different settings can yield completely different visual results and accuracy when there are multiple moving objects

Then, the best output of the pre-processing step becomes the input of the tracking step. Figure 3 shows the same outputs for the second radar dataset, which includes only one moving object.

Fig. 3

Visual outputs of the pre-processing step on dataset_2. The first row shows denoised frames with various settings; the second row shows the denoised frames above after applying the enhancement filters; finally, morphology is applied to the enhanced frame above to produce a binary matrix extracting possible moving objects from the background. This clearly shows that various settings may yield approximately the same visual results and accuracy when there is a single moving object

Figure 4 illustrates the visual outputs of the tracking step under different settings on the first dataset. Two tables with qualitative and quantitative analyses of the visual outputs in Fig. 4 are provided (Tables 1 and 2, respectively). In Fig. 4, although the 8th part records a target detection accuracy of 100%, it suffers a very high false-alarm probability that ultimately makes it unreliable.

Fig. 4

Visual outputs of tracking for dataset_1

Table 1 Qualitative analysis of the outputs of tracking for dataset_1 in Fig. 4
Table 2 Quantitative analysis of the outputs of tracking for dataset_1 in Fig. 4; the best performance is an accuracy of 100% with a false-alarm probability of 0

This is because of the sensitive settings assigned to the system, which on the one hand help us find all moving targets, but on the other hand cause a larger number of false targets to be detected. In Table 2, two metrics, the detection accuracy of real moving targets/objects and the probability of false alarms (PFA), are used as the main performance measures, given in Eq. 5 and Eq. 6, respectively.

$$Accuracy=\frac{\text{Number of correctly found moving objects}}{\text{Number of all moving objects in a scene}}\times 100$$
(5)
$$P_{FA}=\frac{\text{Number of FAs}}{\text{Number of total detections}}$$
(6)
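A trivial sketch of these two measures, with hypothetical counts for illustration:

```python
def accuracy(correct_detections: int, total_moving_objects: int) -> float:
    """Eq. 5: percentage of real moving objects that were found."""
    return 100.0 * correct_detections / total_moving_objects

def p_fa(false_alarms: int, total_detections: int) -> float:
    """Eq. 6: fraction of all detections that are false alarms."""
    return false_alarms / total_detections

# Hypothetical example: all 4 targets found, but 6 detections in total.
print(accuracy(4, 4))  # 100.0 -> perfect recall of moving objects
print(p_fa(2, 6))      # ~0.33 -> high false-alarm probability, unreliable
```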

Figure 5 illustrates the visual outputs of tracking on the second dataset, with only one moving target. Figure 4 showed that the system performance is very sensitive to the selected settings when there are multiple moving targets, but the test in Fig. 5 shows that the system works reliably in a single-target scenario regardless of the assigned settings. Tables 3 and 4 summarize the interpretations of the second dataset and the results in Fig. 5.

Fig. 5

Visual outputs of tracking for dataset_2; for most tests, including Frames 1 and 2 here, the accuracy is 100% while the false-alarm probability is 0

Table 3 Qualitative analysis of the outputs of tracking for dataset_2 in Fig. 5
Table 4 Quantitative analysis of the outputs of tracking for dataset_2 in Fig. 5; the best performance is an accuracy of 100% with a false-alarm probability of 0

The findings of this paper are useful for any remote sensing platform in the sky or in space. However, all the test data come from satellite sensors, not UAV sensors, because the goal was to monitor marine transportation and no marine UAV data for this microwave sensor has been found so far. Overall, monitoring seas and oceans with satellites is also more affordable.

Conclusions

This research has studied radar video tracking, a very hot research topic related to radar temporal data processing for remote sensing applications. An unsupervised target tracking system was proposed and evaluated using satellite radar data. This system uses neither complex algorithms nor highly sensitive supervised machine learning methods, so it can be relied upon for new data under the limited computing capacity at the edge. For a scene with a single moving object, the system not only performs very well, but its settings are also uncomplicated.

On the other hand, for scenes with multiple moving objects, although the system can achieve high performance, its settings were somewhat complicated. Therefore, the proposed system is more suitable for single-moving-target detection and tracking. In addition, to avoid heuristic tuning of the settings for the best possible performance in multiple-moving-target tracking, we suggest using an optimizer or an unsupervised strategy to make this part of the system automatic as well.

Availability of data and materials

The dataset used for the findings of this study is freely available from the ICEYE Company.

References

1. Zhang S, Qi Z, Zhang D (2009) Ship tracking using background subtraction and inter-frame correlation. In: 2009 2nd International Congress on Image and Signal Processing, IEEE, pp 1-4

2. Fefilatyev S, Goldgof D, Lembke C (2010) Tracking ships from fast moving camera through image registration. In: 2010 20th International Conference on Pattern Recognition, IEEE, pp 3500-3503

3. Wu J, Mao S, Wang X, Zhang T (2011) Ship target detection and tracking in cluttered infrared imagery. Opt Eng 50(5):057207

4. Qi S, Wu J, Zhou Q, Kang M (2018) Low-resolution ship detection from high-altitude aerial images. In: MIPPR 2017: Automatic Target Recognition and Navigation, Int Soc Opt Photon 10608:1060805

5. Liu W, Zhen Y, Huang J, Zhao Y (2016) Inshore ship detection with high-resolution SAR data using salience map and kernel density. In: Eighth International Conference on Digital Image Processing (ICDIP 2016), SPIE 10033:775-780

6. Wei X, Wang X, Chong J (2018) Local region power spectrum-based unfocused ship detection method in synthetic aperture radar images. J Appl Remote Sens 12(1):016026

7. Wang Q, Zhu H, Wu W, Zhao H, Yuan N (2015) Inshore ship detection using high-resolution synthetic aperture radar images based on maximally stable extremal region. J Appl Remote Sens 9(1):095094

8. Tian S, Wang C, Zhang H (2015) Ship detection method for single-polarization synthetic aperture radar imagery based on target enhancement and nonparametric clutter estimation. J Appl Remote Sens 9(1):096073

9. Khosravi MR et al (2020) Spatial interpolators for intra-frame resampling of SAR videos: a comparative study using real-time HD, medical and radar data. Curr Signal Transduct Ther 15(2):136-188

10. Khosravi MR et al (2021) Frame rate computing and aggregation measurement toward QoS/QoE in Video-SAR systems for UAV-borne real-time remote sensing. J Supercomput 77(12):14565-14582

11. Khosravi MR et al (2022) Mobile multimedia computing in cyber-physical surveillance services through UAV-borne Video-SAR: a taxonomy of intelligent data processing for IoMT-enabled radar sensor networks. Tsinghua Sci Technol 27(2):288-302

12. Kim S et al (2018) ViSAR: a 235 GHz radar for airborne applications. In: Proc IEEE Radar Conference, USA, pp 1549-1554. https://doi.org/10.1109/RADAR.2018.8378797

13. Wang D, Zhu D, Liu R (2019) Video SAR high-speed processing technology based on FPGA. In: Proc 2019 IEEE MTT-S International Microwave Biomedical Conference (IMBioC), China

14. Liang J, Zhang H (2019) Study on pointing accuracy effect on image quality of space-borne video SAR. IOP Conf Ser: Mater Sci Eng 490:072011

15. Li J et al (2022) Fusion of optical and SAR images based on deep learning to reconstruct vegetation NDVI time series in cloud-prone regions. Int J Appl Earth Obs Geoinf 112:102818

16. Kulkarni SC et al (2020) Pixel level fusion techniques for SAR and optical images: a review. Inf Fusion 59:13-29

17. Rafique W et al (2020) Complementing IoT services through software defined networking and edge computing: a comprehensive survey. IEEE Commun Surv Tutor 22(3):1761-1804

18. Xu X et al (2019) An edge computing-enabled computation offloading method with privacy preservation for internet of connected vehicles. Futur Gener Comput Syst 96:89-100

19. Yang F, Xu Q, Li B (2017) Ship detection from optical satellite images based on saliency segmentation and structure-LBP feature. IEEE Geosci Remote Sens Lett 14(5):602-606

20. Yang X, Sun H, Sun X, Yan M, Guo Z, Fu K (2018) Position detection and direction prediction for arbitrary-oriented ships via multitask rotation region convolutional neural network. IEEE Access 6:50839-50849

21. Yang G, Li B, Ji S, Gao F, Xu Q (2013) Ship detection from optical satellite images based on sea surface analysis. IEEE Geosci Remote Sens Lett 11(3):641-645

22. Deng C, Cao Z, Fang Z, Yu Z (2013) Ship detection from optical satellite image using optical flow and saliency. In: MIPPR 2013: Remote Sensing Image Processing, Geographic Information Systems, and Other Applications, Int Soc Opt Photon 8921:89210F

23. Yao Y, Jiang Z, Zhang H, Zhao D, Cai B (2017) Ship detection in optical remote sensing images based on deep convolutional neural networks. J Appl Remote Sens 11(4):042611

24. Tang J, Deng C, Huang GB, Zhao B (2014) Compressed-domain ship detection on spaceborne optical image using deep neural network and extreme learning machine. IEEE Trans Geosci Remote Sens 53(3):1174-1185

25. Shi Z, Yu X, Jiang Z, Li B (2013) Ship detection in high-resolution optical imagery based on anomaly detector and local shape feature. IEEE Trans Geosci Remote Sens 52(8):4511-4523

26. Zou Z, Shi Z (2016) Ship detection in spaceborne optical image with SVD networks. IEEE Trans Geosci Remote Sens 54(10):5832-5845

27. Proia N, Pagé V (2009) Characterization of a Bayesian ship detection method in optical satellite images. IEEE Geosci Remote Sens Lett 7(2):226-230

28. Kopsiaftis G, Karantzalos K (2015) Vehicle detection and traffic density monitoring from very high resolution satellite video data. In: 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), IEEE, pp 1881-1884

29. Yang T, Wang X, Yao B, Li J, Zhang Y, He Z, Duan W (2016) Small moving vehicle detection in a satellite video of an urban area. Sensors 16(9):1528

30. Larsen SØ, Koren H, Solberg R (2009) Traffic monitoring using very high resolution satellite imagery. Photogramm Eng Remote Sens 75(7):859-869

31. Yang X et al (2023) Time-aware LSTM neural networks for dynamic personalized recommendation on business intelligence. Tsinghua Sci Technol 29(1):185-196

32. Yang X et al (2023) LSTM network-based adaptation approach for dynamic integration in intelligent end-edge-cloud systems. Tsinghua Sci Technol

33. Li D et al (2023) Trust-aware hybrid collaborative recommendation with locality-sensitive hashing. Tsinghua Sci Technol

34. Chui CK, Chen G (2009) Kalman filtering with real-time applications. Springer International Publishing, Germany


Funding

The author received no specific funding for this study.

Author information

Authors and Affiliations

Authors

Contributions

M. L. wrote and revised the main manuscript; Q. W. guided and reviewed the manuscript; Y. L. prepared Figs. 1, 2, 3, 4, 5.

Corresponding author

Correspondence to Qinyong Wang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Li, M., Wang, Q. & Liao, Y. Target tracking using video surveillance for enabling machine vision services at the edge of marine transportation systems based on microwave remote sensing. J Cloud Comp 13, 47 (2024). https://doi.org/10.1186/s13677-024-00604-0
