YOLO-Based Real-Time Floating Debris Detection and Counting in Rivers for Early Flood Warning and Long-Term Water Resource Management
Shaufikah Shukri1, Latifah Munirah Kamarudin2 *, Azfar Haniff Zuel Azwar3, Noraini Azmi4, Ammar Zakaria5, Ahmad Shakaff Ali Yeon6, Syed Muhammad Mamduh Syed Zakaria7, and Retnam Visvanathan8
1 Postdoctoral Student, Faculty of Electronic Engineering & Technology (FKTEN), Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia; email:shaufikahshukri.ss@googlemail.com; https://orcid.org/0000-0002-9710-672X.
2 Director, Assoc. Professor, Centre of Excellence for Advanced Sensor Technology (CEASTech), Universiti Malaysia Perlis (UniMAP), Perlis, Malaysia; email: latifahmunirah@unimap.edu.my; https://orcid.org/0000-0002-2547-3934. (*Corresponding author)
3 Undergraduate Student, Faculty of Electronic Engineering & Technology (FKTEN), UniMAP, 02600 Arau, Perlis, Malaysia. Email: azfarhaniff@studentmail.unimap.edu.my;
4 Postdoctoral Student, Faculty of Intelligent Computing (FKC), Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia; email: norainibintiazmi@gmail.com; https://orcid.org/0000-0002-8528-8351.
5 Assoc. Prof., Faculty of Electrical Engineering &Technology (FKTE), Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia; email: ammarzakaria@unimap.edu.my; https://orcid.org/0000-0002-7108-215X.
6 Senior Lecturer, Faculty of Electrical Engineering &Technology (FKTE), Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia; email: ahmadshakaff@unimap.edu.my; https://orcid.org/0009-0003-2899-2726.
7 Assoc. Prof., Faculty of Electronic Engineering &Technology (FKTEN), Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia; email: smmamduh@unimap.edu.my; https://orcid.org/0000-0003-3557-2204.
8 Postdoctoral Student, Faculty of Electronic Engineering & Technology (FKTEN), Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia; email: retnam@unimap.edu.my.
Abstract
Urban flooding is increasingly exacerbated by the accumulation of floating debris in rivers, which obstructs water flow, degrades water quality, and poses significant risks to human safety and environmental sustainability. Effective monitoring of floating debris is therefore critical for early flood warning and long-term water resource management. This study presents a real-time monitoring framework that integrates deep learning-based You Only Look Once (YOLO) object detection models with video surveillance for the identification and quantification of floating debris in urban rivers. Field deployments were conducted in flood-prone sites in Shah Alam, Malaysia, to evaluate the system under real-world environmental conditions. Results show that YOLOv7 achieved higher accuracy and robustness across diverse debris classes and lighting conditions compared to YOLOv9, with precision, recall, and F1-scores demonstrating strong detection reliability. Beyond technical accuracy, the system provides timely and actionable information for flood risk assessment, river management, and environmental monitoring. By automating debris detection and quantification, this study contributes to Sustainable Development Goals (SDGs) 11 (Sustainable Cities and Communities) and 13 (Climate Action), offering a scalable monitoring solution for flood-prone regions.
Keywords:
Urban flooding
Floating debris detection
Preventive measure
Environmental monitoring
YOLO (You Only Look Once)
Computer vision
1 Introduction
Urban flooding has recently become increasingly prevalent, resulting in chaos and disruptions to social and economic activities, damage to infrastructure such as roads, railway tracks, and vehicles, as well as increased health vulnerabilities and, in some cases, loss of life (Piadeh et al., 2023). Defined as the temporary overland flow of water in urban areas, urban flooding encompasses various types, including pluvial, fluvial, coastal, flash, groundwater, and urban drainage system (UDS) flooding. Among these, UDS flooding is particularly complex, occurring when excess water escapes from one or more components of the drainage system. This phenomenon is driven by multiple factors such as high-intensity rainfall, surface runoff in densely built urban areas, inadequate drainage capacity, poor drainage system design and infrastructure, rapid urban development, unplanned urbanization, and environmental degradation.
One critical but often underrecognized factor contributing to flood risk is floating debris in waterways. Studies have shown that debris obstructs water flow, exacerbates localized flooding, and accelerates environmental degradation (Rocamora et al., 2021; Sohn et al., 2020; van Emmerik et al., 2022). Such debris originates from natural sources (vegetation, branches, leaves) as well as anthropogenic waste (plastic bags, bottles, Styrofoam), which not only indicates poor waste management but also degrades water quality and diminishes river ecosystem health. Detecting and monitoring floating debris, therefore, provide a dual benefit: it serves as an environmental indicator of pollution and waste mismanagement while also functioning as an early warning signal for potential flood hazards.
Conventional observation methods, such as manual inspections and basic closed-circuit television (CCTV) surveillance, are labor-intensive, inconsistent, and incapable of providing real-time alerts during critical events. These shortcomings reduce the timeliness and reliability of flood monitoring in complex urban settings. By contrast, recent advances in computer vision and artificial intelligence (AI) have enabled automated debris detection and classification, offering more efficient and scalable monitoring solutions. Importantly, detecting floating debris is not only vital for environmental assessment but can also function as an early warning indicator for potential flood events. Lin et al. (2021), for instance, demonstrated that both the type and quantity of floating debris can act as useful indicators of water quality and overall river health. Their study applied the YOLOv5 algorithm to classify eight distinct debris categories in waterways (leaf, plastic bag, grass, branch, bottle, milk box, plastic garbage, and ball), highlighting the potential of AI in enhancing environmental monitoring.
Despite these advances, current AI-based monitoring systems remain limited. Many approaches oversimplify debris detection by treating all objects as generic “trash,” without distinguishing between types of debris that pose different risks to water flow and ecological health. This lack of granularity reduces the usefulness of monitoring data for environmental assessment and flood prediction. Moreover, most systems are constrained by small, non-diverse training datasets, which restrict generalizability and often lead to misclassification or false alarms in real-world environments. The absence of real-time capability in many conventional frameworks further reduces their effectiveness for large-scale, continuous monitoring in flood-prone areas.
This study is motivated by the escalating frequency of urban flooding and the critical need for smarter, scalable monitoring tools in flood-prone regions. The limitations of manual surveillance and static monitoring systems further highlight the importance of transitioning toward automated, real-time detection solutions. Additionally, this study acknowledges that urban rainstorms and flood-related disasters have become among the most critical threats to human well-being and sustainable development. Thus, this study is closely aligned with the United Nations Sustainable Development Goals (SDGs), a global framework introduced to eradicate poverty, protect the environment, and promote peace and prosperity by 2030 (United Nations, 2024). Among the 17 goals, several are particularly relevant to urban flood resilience and governance, specifically SDG 9 (Industry, Innovation, and Infrastructure), SDG 11 (Sustainable Cities and Communities), and SDG 13 (Climate Action). These goals emphasize the importance of building resilient infrastructure, fostering sustainable urban development, and enhancing adaptive capacity to climate-related hazards, including urban rainstorms and flood events (Li et al., 2023).
In this paper, we present one of Malaysia’s first AI-powered flood monitoring systems, developed under the Selangor Cyber Valley (SCV) project. The system integrates YOLO-based deep learning algorithms (YOLOv7 and YOLOv9) with live video monitoring to detect and classify multiple classes of floating debris in rivers. By improving detection accuracy and enabling real-time alerts, the system not only enhances flood preparedness but also contributes to broader environmental monitoring and early warning frameworks. Potential future enhancements include the integration of water level sensors, meteorological data, and predictive flood risk modeling to provide a holistic monitoring platform.
The core aim of this work is to develop an AI-based real-time system capable of identifying and classifying multiple types of floating debris in rivers to facilitate smarter, data-driven flood management practices. In future implementations, this framework could be extended to other high-risk flood regions across Malaysia. The primary contributions of this research are as follows:
We design and implement a deep learning-based object detection system using YOLOv7 and YOLOv9 for identifying various classes of floating debris in rivers.
We address the limitations of existing monitoring systems by developing a real-time video-based detection framework that minimizes dependency on manual inspections and enhances operational efficiency.
We improve detection robustness through the use of diverse datasets, including real-world images captured under varying environmental and lighting conditions.
We evaluate the system using standard performance metrics, such as accuracy, precision, recall, and F1-score, to assess its effectiveness in supporting early warning and flood mitigation efforts.
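For reference, the precision, recall, and F1-score used in the evaluation follow their standard definitions from true-positive (TP), false-positive (FP), and false-negative (FN) counts; a minimal sketch (the function name is ours, not from the deployed system):

```python
def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Compute precision, recall, and F1-score from detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: 90 correct detections, 10 false alarms, 30 missed objects
m = detection_metrics(tp=90, fp=10, fn=30)
print(round(m["precision"], 2), round(m["recall"], 2), round(m["f1"], 2))
# → 0.9 0.75 0.82
```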
Overall, this study contributes to building safer and more resilient urban communities through the development of intelligent river monitoring systems that enable timely decision-making and disaster response. The remainder of the paper is organized as follows. Section 2 reviews the related literature and existing works. Section 3 describes the overall floating debris detection and counting system, including its implementation at river sites in Selangor. Section 4 presents the system's performance evaluation, including Mean Average Precision (mAP) analysis and the impact of weather conditions. Finally, Section 5 concludes the paper and provides recommendations for future work.
2 Related Works
Floods remain one of the most destructive natural hazards, posing critical threats to ecosystems, infrastructure, and human safety. In Malaysia, for example, continuous rainfall during the northeast monsoon in 2010 triggered widespread flooding across Sabah, Johor, Malacca, Negeri Sembilan, and Pahang, with Johor recording over 30,000 evacuees (Baharum et al., 2011). In recent years, the frequency and severity of floods have increased, underscoring the urgent need for efficient flood monitoring and early warning systems. Effective management of waterways and drainage systems has thus become critically important, particularly given the compounding role of floating debris in obstructing river flow and intensifying flood impacts.
Remote sensing technologies, particularly satellite imagery, have been widely adopted for flood monitoring due to their capability to provide synoptic views over large areas and observe the Earth’s surface under various weather conditions (Shatnawi, 2024). Satellite-based water detection is feasible because water bodies have a higher relative dielectric constant than land, and satellite signals are more strongly reflected by smooth water surfaces than by rough land (Chen et al., 2024). Nevertheless, challenges persist in detecting inland water bodies due to terrain effects, imbalanced sampling, and atmospheric noise (Chen et al., 2024; Shatnawi, 2024). Moreover, satellite data acquisition, image processing, and analysis require specialized skills and expensive software.
Synthetic aperture radar (SAR) systems, such as TerraSAR and Sentinel-1 (Saleh, Yuzir, & Abustan, 2020), are capable of capturing surface data under cloud cover and during nighttime, offering valuable capabilities for flood monitoring. These systems detect changes in surface roughness caused by flooding and help identify water bodies based on variations in radar backscatter. Typically, land areas produce high backscatter returns, while flooded areas produce low returns. This enables the extraction of flooded regions, which can then be visually represented (Shatnawi, 2024). However, interpreting radar data remains complex, and validation using ground observations or multi-spectral satellite imagery is often necessary to ensure accuracy.
At the national scale, various flood monitoring systems have been developed that integrate water level sensors, rain gauges, communication modules, dashboards, and alert mechanisms. Many of these rely on threshold-based triggers, where warnings are issued once critical water levels are surpassed (Baharum et al., 2011; Pagatpat et al., 2015; Zahir et al., 2019). In Malaysia, communication technologies like GSM and Wi-Fi, and IoT-based platforms have been widely adopted to enable real-time river monitoring and data transmission (Hamzah et al., 2024; Faudzi et al., 2023; Hassan et al., 2021; Zain et al., 2020; Zahir et al., 2019; Hashim et al., 2018; Noar & Kamal, 2017). Table 1 summarizes existing research on flood monitoring systems implemented in Malaysia from 2017 to the present, highlighting the technologies used and their respective limitations, all aimed at enhancing early warning capabilities in flood-prone areas.
In flood-prone areas, efficient detection and classification of floating debris are critical for the development of responsive flood monitoring and warning systems. Conventional methods, primarily based on manual inspection or costly hardware, are inadequate for large-scale, real-time deployment. These methods are time-consuming, inconsistent, and prone to human error, making them unsuitable for rapid disaster response. Automated systems are therefore essential, especially in scenarios where timely and accurate information is crucial for effective emergency management.
Table 1
Research Work and Development of Flood Monitoring Systems in Malaysia (2017–Present)
Sources | Components | Location | Limitation
Hamzah et al. (2024) | ESP32-CAM, HC-SR04 ultrasonic sensor, Wi-Fi, camera | Campus test site, Johor, Malaysia | Sensor accuracy affected by rough surfaces, no field validation, needs power backup, no predictive model
Syed Zaifudin et al. (2024) | ESP32, YF-S201 flow sensor, float sensor, Blynk | Indoor test only | No outdoor validation, fixed float thresholds, manual calibration, limited alert mechanisms
Faudzi et al. (2023) | IoT, GSM, ML (LSTM) | UTM (Skudai), Johor | Short real-time data duration, no rainfall prediction accuracy at the test site, GSM dependency
Lee et al. (2024) | ESP32-CAM, OpenCV, solar panel | Malaysia (general) | Inconsistent accuracy under poor lighting/weather, lacks long-range communication reliability
Da Loong et al. (2023) | Arduino, LoRa, RF, logistic regression, RF | Batu River, Selangor | Site-specific, dependent on the internet for cloud, lacks integration with official alert systems
Zakaria et al. (2023) | Arduino, HC-SR04, LoRaWAN, TTN, TagoIO, ThingSpeak, solar power | East Coast Malaysia (simulated) | Lab-only test, single node, fixed SFs, no ML prediction, packet loss over long range
Monzer M. Raslan (2023) | Arduino, LoRa, RF, ML (RF, LR), sensors (rain, humidity, etc.) | UTM, Johor (lab test) | Low recall, limited features, LoRa signal loss, no real-world test
Hassan et al. (2021) | Arduino UNO, GSM, water level, temperature, humidity sensors | Pahang (conceptual test) | GSM-only, lacks rainfall data, no prediction, SMS alerts only, small-scale prototype
Zain et al. (2020) | Arduino, ultrasonic sensor, GSM | Perlis (2 test locations) | Unstable GSM, no GPS, SMS-only alerts, sensitive to placement, no cloud/mobile interface
Saleh et al. (2020) | Sentinel-1 SAR satellite imagery, threshold-based classification | Penang (2017 flood case study) | Retrospective analysis only, not real-time, no alert system, depends on satellite pass timing
Zahir (2019) | Arduino UNO, ultrasonic sensor, GSM module | Melaka (prototype-based) | Internet-dependent, no mobile alerts, basic sensing, lacks prediction, not field-tested
Hashim et al. (2018) | Ultrasonic sensors, Arduino microcontrollers, GSM module | Lab test (prototype) | Limited range, no cloud or mobile dashboard, SMS/Bluetooth only, no data storage, tested on a small scale
Noar & Kamal (2017) | NodeMCU, ultrasonic sensor, LCD, Wi-Fi, Blynk | Controlled testbed | Short-range, Wi-Fi only, fixed thresholds, lacks power backup & GPS
To address these challenges, computer vision and deep learning techniques have emerged as promising tools for automated floating debris detection. Artificial intelligence (AI)-based approaches can analyze images and video streams to identify debris with greater accuracy, consistency, and timeliness than human observers. Recent studies have demonstrated that integrating object detection models into monitoring frameworks enables not only the detection but also the classification of floating debris, thereby providing valuable insights into river health, waste accumulation patterns, and potential flood risks. For example, Lin et al. (2021) applied YOLOv5 to classify eight types of debris, including leaves, plastic bags, bottles, and boxes. Table 2 presents the most relevant research studies that have contributed to the advancement of an object identification technique for identifying trash on river surfaces, which has been influential in shaping the present work.
Table 2
Relevant Research Studies Contributing to the Advancement of an Object Identification Technique for Identifying Trash on River Surfaces
Sources | Methods | Classes | Key Findings / Limitations
Xu et al. (2023) | YOLO, CNN | Bottle | YOLOW outperforms other models by improving robustness against occlusion, distortion, and reflections in water environments through enhanced feature extraction and optimization techniques
Li et al. (2022) | SSD, Faster R-CNN | Bottle, plastic bag, planktonic algae, dead fish | Achieved superior real-time detection accuracy compared to SSD and Faster R-CNN, with 2.9–5.5% better accuracy and 55% faster detection time; operated effectively at 33 FPS
Zhang et al. (2022) | R-CNN, EYOLOv3 | Flotage | Enhanced detection accuracy to 82.3% using deep multi-scale feature fusion and Focal Loss; achieved 35 FPS, suitable for real-time applications
Li et al. (2022) | R-CNN, PC-NET | Bottle, branch, milk box, plastic bag, plastic garbage, grass, leaf, ball | Inconsistent accuracy under poor lighting/weather, lacks long-range communication reliability
Zhou et al. (2021) | CNN, R-CNN, CRB-NET | Ball, rubbish, rock, buoy, tree, boat, animal, grass, person | Site-specific, dependent on internet for cloud, lacks integration with official alert systems
He et al. (2021) | R-CNN, YOLOv5 | Boat, aquatic, algae, dead pig, branch | Lab-only test, single node, fixed SFs, no ML prediction, packet loss over long range
Zhang et al. (2021) | RefineDet | Flotage | Low recall, limited features, LoRa signal loss, no real-world test
Zhang et al. (2019) | Faster R-CNN, YOLOv3 | Flotage | GSM-only, lacks rainfall data, no prediction, SMS alerts only, small-scale prototype
Sun et al. (2019) | CNN | Floating object | Short-range, Wi-Fi only, fixed thresholds, lacks power backup & GPS
Among the available AI frameworks, the YOLOv7 model has received significant attention in the computer vision community for its balance of speed and accuracy in real-time object detection. In this study, we adopt YOLOv7 to develop a system capable of detecting and counting floating debris across multiple data sources, including real-time video streams, pre-recorded footage, and static images. By addressing the shortcomings of conventional methods, this system enhances monitoring and early-warning capabilities in flood-prone regions. Beyond detection, it incorporates accurate object counting and adaptability to diverse debris types, sizes, and environmental conditions, making it a practical and scalable tool for environmental monitoring and flood risk management.
This research contributes to the advancement of flood monitoring technologies by providing a cost-effective, scalable, and automated solution for floating debris detection. The proposed system is designed to support communities, disaster management agencies, and environmental organizations in developing more effective flood mitigation and response strategies. A major limitation of current debris detection methods lies in their low precision, often resulting in false positives and false negatives that compromise reliability. Such inaccuracies can lead to overlooked hazards or unnecessary alerts, thereby reducing the overall effectiveness of monitoring systems. Improving detection precision is therefore essential for strengthening early warning capabilities and ensuring dependable flood risk assessment.
The proposed system addresses these challenges by integrating advanced computer vision methods with the YOLOv7 deep learning architecture to achieve high accuracy in detecting floating debris across both real-time and pre-recorded video formats. The primary objective is to develop an automated tool capable of identifying and classifying diverse debris types, thereby enhancing data-driven flood management operations. The framework includes video data acquisition, dataset annotation for training, and real-time deployment of the trained model. A refined algorithm further minimizes false detections and improves classification accuracy, while real-time object counting within predefined regions of interest provides continuous updates on debris quantities.
For this study, detection is limited to eight commonly encountered debris categories: bottle, branch, cup, plastic bag, Styrofoam, plastic container, clustered debris, and cardboard. Performance evaluation was carried out using benchmark datasets to validate accuracy, efficiency, and robustness. The system is developed as a software-based solution, with the goal of delivering a reliable, adaptable, and efficient floating debris detection framework for flood monitoring. In addition to curated datasets, primary data were collected from an actual monitoring system deployed in Shah Alam, Malaysia, ensuring that model training reflects real-world environmental conditions.
3 Overview of the Floating Debris Detection and Counting System
The deployment of the floating debris detection and counting system involves several critical steps to ensure efficient real-time operation in river environments. The goal is to integrate the trained YOLO-based models (YOLOv7 and YOLOv9) into an edge device, specifically the NVIDIA Jetson platform, for effective monitoring and early warning applications in flood-prone areas.
3.1 System Design and Deployment
The proposed system was deployed at two urban river sites in Shah Alam, Malaysia, namely Seksyen 2 and Seksyen 7, as shown in Fig. 1 and Fig. 2, respectively. The deployed monitoring units are presented in Fig. 3 and Fig. 4. These locations were strategically selected due to their recurrent history of flood events and the frequent accumulation of floating debris, which obstructs water flow, reduces drainage capacity, and exacerbates flood risks.
Fig. 1
Camera location in Seksyen 2, Shah Alam Malaysia.
Fig. 2
Camera location in Seksyen 7, Shah Alam Malaysia.
The deployment was conducted in collaboration with local authorities to ensure adherence to safety protocols and to facilitate long-term system monitoring. Site selection criteria included vulnerability to flooding, accessibility for regular maintenance, and representativeness of typical urban river conditions in Malaysia. This field implementation provided an opportunity to evaluate the system’s capacity to detect, classify, and quantify floating debris in real-world conditions, thereby linking AI-based monitoring to practical flood risk management and riverine environmental assessment. Each monitoring unit integrates a high-definition surveillance camera connected to the NVIDIA Jetson device, running the YOLO models in real time to detect, classify, and count floating debris.
Fig. 3
The complete system setup at Seksyen 2, Shah Alam Malaysia.
Fig. 4
The complete system setup at Seksyen 7, Shah Alam Malaysia.
The overall process flow of the system is illustrated in Fig. 5. The first stage involves data collection, conducted by acquiring images and videos from online platforms as well as capturing footage from multiple river locations across the state. Following acquisition, the data undergo a series of image-processing steps to extract frames and ensure quality standards suitable for training. This pre-processing phase is essential to reduce training time and enhance detection accuracy. It includes resizing images, normalizing pixel values, and converting annotations into machine-readable formats. To further improve robustness, data augmentation techniques, such as rotation, flipping, and scaling, are applied to artificially expand dataset diversity. Once pre-processing is complete, the dataset is divided into training and testing subsets. These subsets are then used to train and evaluate the YOLOv7-based model for the detection and classification of floating debris.
Fig. 5
Flowchart of the floating debris detection and counting.
3.2 Dataset Buildup and Processing
Data collection involved acquiring images of rivers containing floating debris under both normal and extreme conditions. These images were obtained from primary sources, including video recordings and photographs captured using surveillance cameras, as well as from publicly available internet sources. To ensure diversity and robustness, the dataset included images taken at various times of the day and under different weather conditions. To improve dataset quality, duplicate images, both within and across datasets, were filtered out using the difPy library (Duplicate Image Finder) developed by Landman (2024).
Additionally, images with minimal flood coverage (i.e., less than 5% of flood pixels) were also excluded from the dataset. Manually curated filtering was applied to remove mislabeled debris images and irrelevant images of natural water bodies not associated with urban waterways. While most images featured binary labels (flooded vs. non-flooded), some portions of the DeepFlood dataset contained multi-class annotations (e.g., humans, vehicles, buildings, etc.). For consistency and relevance to the study’s focus on urban flooding, these labels were converted to binary format as well. The inclusion of diverse datasets in this study was not intended for comparative analysis but to improve the generalizability of the model by ensuring variability in data sources and quality, thereby reducing potential biases during training and evaluation.
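The filtering rules above can be sketched with standard-library tools alone. Note that difPy also detects near-duplicate images; the hash-based helper below only removes byte-identical copies, and the file layout and function names are our assumptions:

```python
import hashlib
from pathlib import Path


def drop_exact_duplicates(image_dir: str) -> list:
    """Keep the first file for each unique content hash, mirroring the
    duplicate-removal step (byte-identical copies only)."""
    seen = set()
    kept = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(path)
    return kept


def flood_coverage_ok(flood_pixels: int, total_pixels: int,
                      threshold: float = 0.05) -> bool:
    """Apply the 5% minimum flood-pixel coverage rule from the text."""
    return total_pixels > 0 and flood_pixels / total_pixels >= threshold
```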
Following data collection, the next phase involves data preprocessing. This begins with the extraction of video frames and their conversion into still images. Because the collected dataset comprises images from multiple sources, the image dimensions vary significantly; all images were therefore resized to a uniform resolution of 640×640 pixels to ensure consistency and compatibility for model development, training, and validation. A custom Python script was developed to perform batch resizing of the images.
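The resizing script itself may simply stretch each image to 640×640; a common alternative preserves the aspect ratio by letterboxing (scale to fit, then pad). The pure-Python helper below, with names of our choosing, computes the geometry of that variant as an illustrative sketch:

```python
def letterbox_params(w: int, h: int, target: int = 640):
    """Return (new_w, new_h, pad_x, pad_y): the image is scaled so its
    longer side equals `target`, then padded symmetrically to a
    target x target canvas."""
    scale = target / max(w, h)
    new_w, new_h = round(w * scale), round(h * scale)
    pad_x = (target - new_w) // 2
    pad_y = (target - new_h) // 2
    return new_w, new_h, pad_x, pad_y


print(letterbox_params(1920, 1080))  # → (640, 360, 0, 140)
```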
Subsequently, data augmentation techniques were applied to enhance the diversity and robustness of the dataset. Data augmentation involves generating additional training samples by applying a range of transformations to the original images. These transformations include rotations, skewing, random zoom, flipping, and the addition of salt-and-pepper noise. The goal of these augmentations is to expose the proposed model to a wider range of visual events that it may experience in real-world circumstances. By introducing these variations, the model’s ability to generalize its comprehension of objects is enhanced, allowing it to better handle diverse environmental conditions and demonstrate improved resilience during inference. During augmentations, a random image is selected from the dataset and subjected to random combinations of the transformations, thereby expanding both the size and variability of the dataset.
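As one concrete example of the augmentations listed above, salt-and-pepper noise on a grayscale image can be sketched as follows (a minimal illustration; the paper does not specify its augmentation implementation, and the function name is ours):

```python
import random


def salt_and_pepper(pixels, amount=0.02, seed=None):
    """Randomly set roughly a fraction `amount` of grayscale pixels to
    0 (pepper) or 255 (salt). `pixels` is a 2-D list of ints; a new
    list is returned and the input is left untouched."""
    rng = random.Random(seed)
    out = [row[:] for row in pixels]
    h, w = len(out), len(out[0])
    for _ in range(int(amount * h * w)):
        y, x = rng.randrange(h), rng.randrange(w)
        out[y][x] = rng.choice((0, 255))
    return out
```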
Next, the image labeling process was conducted to assign specific labels to objects within each image. Objects visible in the images were annotated with bounding boxes, each defined by its x and y coordinates and an associated class number. This process is one of the most labor-intensive aspects of constructing an AI-based detection model. Labeling was performed using the LabelImg software, which facilitates manual annotation and automatically exports the data in a format suitable for training object detection models. The labeling classes are 0: Bottle, 1: Branch, 2: Can, 3: Cup, 4: Styrofoam, 5: Plastic Bag, 6: Clustered Trash, 7: Plastic Container, 8: Cardboard, 9: Canopy, 10: Table, 11: Chair, and 12: Big Umbrella. These categories were selected based on their potential hazards and prevalence in urban riverine environments. Although objects such as canopies, tables, chairs, and big umbrellas are not typically found in the river, they were included in the dataset because of their frequent presence along the riverbank, particularly in temporary business areas where small-scale vendors set up stalls during weekly morning and night markets in Malaysia. Each labeled image was accompanied by a corresponding .txt file containing the coordinates and class identifiers of all labeled floating debris, with one line per object in the format:
<object-class-id> <x> <y> <width> <height>
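Here x, y, width, and height follow the standard YOLO convention: box centre and size normalised to [0, 1] by the image dimensions. A small helper (function name ours) converting a pixel-space box into this format:

```python
def to_yolo_line(class_id, box, img_w, img_h):
    """Convert a pixel-space box (x_min, y_min, x_max, y_max) into a YOLO
    annotation line: class id plus centre/size normalised to [0, 1]."""
    x_min, y_min, x_max, y_max = box
    xc = (x_min + x_max) / 2 / img_w
    yc = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"


# A 160x80 box at the top-left of a 640x640 frame, class 0 (Bottle)
print(to_yolo_line(0, (0, 0, 160, 80), 640, 640))
# → 0 0.125000 0.062500 0.250000 0.125000
```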
Following the labeling process, the dataset comprised paired camera image files and their corresponding annotation files for floating debris. These datasets were subsequently subjected to a structured preprocessing workflow and partitioned into three distinct subsets: training, validation, and testing. A randomized allocation strategy was applied to divide the labeled data into 70% for training, 20% for validation, and 10% for testing. The training subset was utilized to optimize the model's parameters, while the validation subset was employed to monitor learning progress and guide hyperparameter tuning. The testing subset, held out during model development, was employed to objectively evaluate the model’s performance and generalization capability on unseen data.
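The 70/20/10 randomized allocation described above can be sketched as follows (the seed and function name are ours):

```python
import random


def split_dataset(items, seed=42):
    """Shuffle the labeled samples and split them into
    70% training / 20% validation / 10% testing subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(0.7 * n)
    n_val = int(0.2 * n)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])


train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # → 70 20 10
```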
3.3 Model Development
The system employed the YOLO object detection algorithm, a state-of-the-art model renowned for its real-time object detection capabilities. The implementation was carried out on a Linux server platform to ensure efficient access to computational resources. The training and evaluation processes were conducted using a curated dataset comprising images of floating debris on river surfaces. Comparative performance assessments were performed between YOLOv7 and YOLOv9. To establish the training environment, source files from both YOLOv7 and YOLOv9 were obtained from their respective official repositories and installed on a local high-performance machine. Separate virtual environments were configured for each version to ensure isolated and conflict-free development. This approach supports full local deployment, offering better control and data privacy than containerized cloud-based alternatives.
Both YOLO versions require a file that defines the dataset's structure, including class labels and file paths. This .yaml file is passed to the training script through the --data option. Once all necessary files have been created, training can begin. Executing training with the default parameter settings may lead to errors or inaccurate outcomes, so several crucial parameters must be adjusted to attain the intended result. The parameters used in this project cover both the training and formatting aspects and are outlined in Table 3.
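A minimal example of such a .yaml file might look like the following. The paths are hypothetical, while the class names follow the labeling scheme described above:

```yaml
# Hypothetical dataset definition passed to training via --data
train: ../debris_dataset/images/train
val: ../debris_dataset/images/val
test: ../debris_dataset/images/test

nc: 13  # number of classes
names: ['Bottle', 'Branch', 'Can', 'Cup', 'Styrofoam', 'Plastic Bag',
        'Clustered Trash', 'Plastic Container', 'Cardboard', 'Canopy',
        'Table', 'Chair', 'Big Umbrella']
```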
The detect.py module in the original YOLO has a built-in capability to detect objects and mark them with bounding boxes. By modifying the source code of detect.py, the program's capabilities can be extended to display additional data overlaid on the existing detection frames, beyond the limitations of traditional bounding boxes. The modified detect.py can be tailored to render additional information, such as object labels, counts, and relevant metadata, directly onto the visual output. This enhancement not only improves the clarity of the detection results but also enables a more thorough understanding of the identified items within the entire scene, increasing the effectiveness and user-friendliness of the YOLO-based detection system.
Table 3
Training Parameters

Parameter       Description
--img           Size of the images the model will be trained on; the default value is 416.
--batch-size    Batch size, which determines the speed of training; the default is 32.
--epochs        Number of training epochs; the default is 300.
--data          Path to the dataset .yaml file.
--weights       Initial weights from the official GitHub repository (yolo.pt).
In this project, an additional counting technique is implemented, and its results are displayed on screen alongside the detection procedure, making more useful data accessible for analysis. The detect.py code has been modified to create a dictionary that stores all of the detections identified in the current frame. Once this data is available, additional code computes the per-class counts and renders them on the current detection frame before moving on to the next frame. The modified detect.py script requires two inputs: (1) the trained custom weights, obtained from prior model training, and (2) the input video or image files for object detection.
Next, the program initiates the detection process by retrieving input from a video source, which may consist of either pre-recorded footage or a real-time streaming link. The video stream is processed frame by frame using the object detection algorithm, thereby converting the continuous feed into a sequence of individual images for analysis. For each frame, the detection module identifies floating debris items and records the number of objects detected per class. Before detecting the next frame, the program compares the current count of detected debris items per class against the maximum recorded counts from previous frames. If a higher count is identified, the program updates the count accordingly. If the count is lower or unchanged, it continues with the next frame. This iterative comparison ensures that the maximum number of floating debris objects per class is retained throughout the detection process.
Once all frames have been analyzed and no further data remains to be processed, the system executes a termination command to conclude the detection session. Notably, the original detect.py script in the YOLO framework is limited to basic object detection. In this study, the script was enhanced to incorporate object counting functionality. Algorithm 1 illustrates the pseudocode detailing the modified detection and counting process for floating debris in riverine environments.
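The running-maximum logic described above can be sketched as follows. This is a simplified illustration, not the actual detect.py modification; the class lists and the `update_max_counts` helper are hypothetical:

```python
from collections import Counter

def update_max_counts(max_counts, frame_detections):
    """Keep, per class, the highest count seen in any single frame.

    `frame_detections` lists the class names detected in the current
    frame; `max_counts` maps class name -> running maximum so far.
    """
    current = Counter(frame_detections)
    for cls, n in current.items():
        # Update only when this frame beats the previous record
        if n > max_counts.get(cls, 0):
            max_counts[cls] = n
    return max_counts

# Two hypothetical frames: counts update only when a frame sets a new peak
peaks = {}
update_max_counts(peaks, ["Bottle", "Bottle", "Can"])
update_max_counts(peaks, ["Bottle", "Styrofoam"])
print(peaks)  # {'Bottle': 2, 'Can': 1, 'Styrofoam': 1}
```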
During the preliminary test, this study conducted a comparative analysis between two datasets: one comprising 3,000 images (hereafter referred to as the 3K dataset) and another containing 15,000 images (referred to as the 15K dataset). The 15K dataset was primarily constructed through frame extraction from video sources, followed by extensive preprocessing techniques, including flipping/mirroring, rotation, image sharpening, the addition of Gaussian noise, and adjustments to contrast, brightness, colour saturation, and sharpness. Additional modifications, such as the insertion of black occlusion boxes, were also applied to simulate real-world occlusion scenarios. The dataset size plays a critical role in determining the overall performance of object detection models. Generally, an increase in dataset volume enhances model accuracy, as the network is exposed to a broader variety of feature representations. However, the scale of the dataset is often constrained by the available computational resources for model training. As such, striking an optimal balance between dataset size and hardware capacity is essential to ensure both effective learning and feasible processing times.
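A few of the augmentations listed above can be sketched with NumPy. This is an illustrative fragment, not the preprocessing pipeline actually used; note also that geometric transforms such as mirroring require the bounding-box x coordinates in the labels to be remapped (x → 1 − x):

```python
import numpy as np

def augment(img, rng):
    """Generate simple variants of an image array (H, W, 3, uint8):
    mirror, 180-degree rotation, Gaussian noise, and brightness shift."""
    out = {
        "mirror": img[:, ::-1],        # horizontal flip
        "rotate180": img[::-1, ::-1],  # rotation by 180 degrees
    }
    noise = rng.normal(0, 10, img.shape)  # Gaussian noise, sigma = 10
    out["noisy"] = np.clip(img + noise, 0, 255).astype(np.uint8)
    out["brighter"] = np.clip(img.astype(int) + 40, 0, 255).astype(np.uint8)
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
variants = augment(img, rng)
print(sorted(variants))  # ['brighter', 'mirror', 'noisy', 'rotate180']
```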
Algorithm 1: Function to Count the Detected Floating Debris and Display the Count

# Counting and Displaying the Counting
1   For each unique class in the detection list
2       Sum the number of detections whose class matches and store it in the variable n
3       Load an image
4       Define the initial display (text) for counting: position, font size, and font color
5       For class name and count in the counting list
6           Compare the class name and check whether n is greater than the count
7           If n is greater than the count, update the counting list
8           End
9       End
10      For class name and count in the counting list
11          Print and display the class name and its count
12      End
13  End
As illustrated in Fig. 6, the mAP graph demonstrates the performance comparison between the two datasets after 100 training epochs. The model trained on the 3K dataset achieved a mAP of 64% at an Intersection over Union (IoU) threshold of 0.5. In contrast, the model trained on the 15K dataset reached a substantially higher mAP of 90% under the same IoU threshold. This difference indicates that the smaller 3K dataset lacks sufficient variability and volume to produce a robust and accurate model. Therefore, the 15K dataset, which provides improved generalizability and superior performance, was selected for all subsequent training, validation, and testing phases of this study.
Fig. 6
Performance of the model for different dataset sizes.
4 Results and Discussion
4.1 System Performance
In assessing the model's performance for the floating debris detection system on the river surface, several common evaluation metrics for object detection tasks were employed. The measurements include precision, recall, F1-score, mean average precision (mAP), and the Precision-Recall curve graph (PR curve). The results from the validation process highlight the model’s ability to generalize across different contexts and object categories, ensuring robustness and reliability in real-world applications.
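For reference, the per-class metrics are computed from true-positive (TP), false-positive (FP), and false-negative (FN) counts as in this sketch; the counts shown are hypothetical:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute standard detection metrics for one class from
    true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0  # how many predictions were right
    recall = tp / (tp + fn) if tp + fn else 0.0     # how many objects were found
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)           # harmonic mean of the two
    return precision, recall, f1

# Hypothetical counts: 90 correct boxes, 10 spurious, 8 missed
p, r, f1 = precision_recall_f1(90, 10, 8)
print(round(p, 3), round(r, 3), round(f1, 3))
```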
The comparative analysis between YOLOv7 and YOLOv9 models trained on the same dataset reveals significant differences in performance metrics, including precision, recall, PR curve, and F1 score. The confusion matrices provide a granular view of each model's performance across different classes. YOLOv7’s confusion matrix shows fewer misclassifications, indicating a strong ability to correctly identify and classify objects. YOLOv9’s confusion matrix, while still performing well, indicates higher misclassifications in certain classes, suggesting areas for targeted improvements. YOLOv7, as expected, shows robust performance due to its established architecture and fine-tuning capabilities. The precision curve for YOLOv7 in Fig. 7 demonstrates high accuracy in predicting positive instances, with minimal false positives. This indicates its refined ability to discern between classes and accurately identify objects. Conversely, YOLOv9, being a more recent model, shows promising improvements but also exhibits areas that require further development. The precision curve for YOLOv9 in Fig. 8 shows a slight increase in false positives compared to YOLOv7.
Fig. 8
Precision curve of the floating debris detection system using YOLOv9
Table 4
Precision Evaluation

Labels                    Precision
                          YOLOv7    YOLOv9
0. Bottle                 0.919     0.931
1. Branch                 0.769     0.860
2. Can                    0.929     0.933
3. Cup                    0.983     0.989
4. Styrofoam              0.918     0.937
5. Plastic Bag            0.867     0.914
6. Clustered Trash        0.900     0.853
7. Plastic Container      0.920     0.693
8. Cardboard              0.909     0.871
9. Canopy                 0.987     0.908
10. Table                 0.867     0.820
11. Chair                 0.795     0.775
12. Big Umbrella          0.934     0.910
However, it also indicates the potential for higher precision with additional training and optimization. Based on the precision evaluation presented in Table 4, YOLOv9 demonstrates a slight improvement in precision for most categories, namely Bottle, Can, Cup, Styrofoam, and Plastic Bag, compared to YOLOv7, indicating that YOLOv9 is slightly better at identifying these objects without producing false positives. YOLOv9 also shows a notable improvement in precision for branches, indicating a better ability to differentiate them from other objects. However, it has significantly lower precision for plastic containers, suggesting a higher rate of false positives in this category. For clustered trash, YOLOv7 outperforms YOLOv9, indicating that it is more effective at accurately identifying clustered trash.
The recall curves for YOLOv7 and YOLOv9 shown in Fig. 9 and Fig. 10, respectively, further elucidate the differences. YOLOv7 maintains a consistently high recall, suggesting it effectively captures most of the true positive instances. YOLOv9, while also high, demonstrates variability that suggests sensitivity to certain conditions or classes that YOLOv7 handles more uniformly. The PR curves highlight the trade-offs between precision and recall for both models. YOLOv7 shows a balanced curve, maintaining high precision and recall across various thresholds. YOLOv9, however, shows a more pronounced curve, indicating that while it can achieve high precision, it may require more careful threshold tuning to balance recall effectively. F1 score curves consolidate these observations. YOLOv7 presents a stable F1 score, confirming its balanced approach to precision and recall. YOLOv9, while having a competitive F1 score, shows variability that underscores the need for further training iterations and potentially more diverse datasets to stabilize its performance. Overall, while YOLOv7 demonstrates consistent and reliable performance across various metrics, YOLOv9 shows significant potential, with some areas requiring further refinement. The transition from YOLOv7 to YOLOv9 could yield enhanced performance with continued development and optimization, leveraging YOLOv9’s advanced features and potential for higher precision.
The deployed system successfully identified and classified various types of trash in real-time, demonstrating high accuracy and reliability. The NVIDIA Jetson's performance enabled seamless processing of video feeds, ensuring timely detection and alerting mechanisms. The deployment highlighted the system's potential to aid in flood monitoring and management by providing valuable data on trash accumulation in real time. This information can be used by local authorities and environmental agencies to implement more effective waste management and flood prevention strategies.
Based on the recall evaluation presented in Table 5, YOLOv7 achieves higher recall than YOLOv9 for Bottle, Can, Plastic Bag, and Styrofoam, indicating that it is better at detecting these objects and minimizing false negatives. Both models perform poorly in detecting branches, with YOLOv9 showing a significantly lower recall than YOLOv7; both struggle with this class, but YOLOv9 more so. For Cup and Clustered Trash, recall is relatively high for both models, with YOLOv7 holding a slight edge, indicating effective detection of these objects. For Plastic Container, YOLOv9 has a notably higher recall than YOLOv7, suggesting it misses fewer plastic containers despite the lower precision noted earlier. For the remaining classes, YOLOv7 performs better in terms of recall, indicating fewer missed detections than YOLOv9.
Fig. 10
Recall Curve of the floating debris detection system using YOLOv9
Table 5
Recall Evaluation

Labels                    Recall
                          YOLOv7    YOLOv9
0. Bottle                 0.922     0.853
1. Branch                 0.455     0.273
2. Can                    0.871     0.686
3. Cup                    0.976     0.966
4. Styrofoam              0.913     0.892
5. Plastic Bag            0.898     0.835
6. Clustered Trash        0.898     0.884
7. Plastic Container      0.781     0.906
8. Cardboard              0.910     0.864
9. Canopy                 0.927     0.865
10. Table                 0.780     0.682
11. Chair                 0.989     0.818
12. Big Umbrella          0.955     0.891
4.2 mAP Evaluation
The mAP@0.50 metric evaluates a model's ability to locate objects with a moderate Intersection over Union (IoU) overlap of at least 0.50 (50%) with a ground truth object. Based on Table 6, the mAP@0.5 values indicate that YOLOv7 outperforms YOLOv9 across all classes. The high mAP values for YOLOv7 suggest better precision in detecting objects with at least 50% IoU overlap with ground truth objects. This indicates a more robust performance in object localization tasks at moderate precision levels.
The mAP@0.5:0.95 metric averages precision over IoU thresholds from 0.50 to 0.95 in steps of 0.05, so a detection must localize objects increasingly precisely to count as correct at the stricter thresholds. Under this more demanding criterion, YOLOv7 generally shows better performance than YOLOv9, as shown in Table 7. Notably, YOLOv7 excels in detecting branches, cans, and plastic bags with higher precision. The consistent performance of YOLOv7 across varying IoU thresholds highlights its superior adaptability and precision in object detection tasks, especially under stringent evaluation criteria.
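The IoU overlap underlying both mAP variants can be computed as in this minimal sketch, with boxes given as corner coordinates (purely illustrative):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction shifted 2 px from a 10x10 ground-truth box
print(iou((0, 0, 10, 10), (2, 0, 12, 10)))  # 0.666... : passes mAP@0.5, fails stricter thresholds
```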
Table 6
mAP@0.5 Evaluation

Labels                    mAP@0.5
                          YOLOv7    YOLOv9
0. Bottle                 0.967     0.934
1. Branch                 0.585     0.319
2. Can                    0.935     0.850
3. Cup                    0.995     0.991
4. Styrofoam              0.924     0.911
5. Plastic Bag            0.918     0.893
6. Clustered Trash        0.970     0.951
7. Plastic Container      0.935     0.916
8. Cardboard              0.922     0.900
9. Canopy                 0.955     0.938
10. Table                 0.864     0.804
11. Chair                 0.801     0.810
12. Big Umbrella          0.983     0.963
Table 7
mAP@0.5:0.95 Evaluation

Labels                    mAP@0.5:0.95
                          YOLOv7    YOLOv9
0. Bottle                 0.700     0.680
1. Branch                 0.319     0.207
2. Can                    0.745     0.679
3. Cup                    0.915     0.914
4. Styrofoam              0.712     0.704
5. Plastic Bag            0.710     0.677
6. Clustered Trash        0.637     0.614
7. Plastic Container      0.755     0.758
8. Cardboard              0.714     0.732
9. Canopy                 0.688     0.685
10. Table                 0.536     0.486
11. Chair                 0.556     0.568
12. Big Umbrella          0.723     0.705
4.3 Weather Effects
Weather conditions can significantly impact the performance of object detection models. In Malaysia, which experiences frequent rainfall, adverse weather introduces additional challenges such as reduced visibility and increased noise, which can complicate the detection process. This section compares the detection performance of YOLOv7 and YOLOv9 under various weather conditions, highlighting the strengths and weaknesses of each model. Under clear, sunny conditions, both YOLOv7 and YOLOv9 perform exceptionally well, maintaining high levels of accuracy. YOLOv7, with its well-established architecture, demonstrates slightly higher precision and recall, capturing fine details with minimal false positives. YOLOv9 also performs robustly, showcasing its advanced capabilities, although it exhibits a marginally higher rate of false positives than YOLOv7.
Fig. 11
Rainy Weather Comparison (a) YOLOv7 and (b) YOLOv9
Rain introduces motion blur and water droplets, which can obscure parts of the objects and make detection challenging. In these conditions, YOLOv9 outperforms YOLOv7. The advanced features of YOLOv9, such as its enhanced ability to handle noise and partial occlusions, contribute to better accuracy and fewer misclassifications. While YOLOv7 maintains respectable performance, it struggles slightly more with false negatives and missing smaller or partially obscured objects. Figure 11 illustrates a comparison of detection outputs from YOLOv7 and YOLOv9 in a rainy environment, highlighting YOLOv9’s improved ability to localize and classify objects despite visual obstructions.
5 Conclusions and Future Work
This study demonstrates the potential of artificial intelligence to enhance environmental monitoring and flood risk management through the automated detection and quantification of floating debris in rivers. By integrating YOLO-based object detection models into a real-time monitoring system, the framework addresses key limitations of conventional flood monitoring methods, which often rely on manual inspections and lack scalability. Field deployments in Shah Alam, Malaysia, confirmed the system’s ability to operate reliably in natural conditions, providing critical insights into debris dynamics and their implications for flood prevention and river management. Importantly, the approach contributes not only to early warning of flood hazards but also to broader environmental objectives, such as reducing pollution loads, supporting waste management strategies, and improving river health. The findings highlight the value of combining deep learning with environmental monitoring to strengthen urban resilience against climate-related hazards. Future work will focus on expanding the system to include water level estimation, integration with weather and hydrological data, and deployment across multiple river basins to support national and regional monitoring programs.
Acknowledgement
The authors would like to thank all research members, colleagues, and others who were involved in making this project a success.
Funding
The authors would like to acknowledge the support from the Ministry of Higher Education (MoHE) under grant number PRGS_9013-00043 “Upscaling and Field-Testing of Integrated Flood Monitoring System with Visual Intelligent Dashboard and Data Analytics.”
Data availability
The dataset is available upon request. Please note that the data and source codes are for academic use only.
Clinical trial number
Not applicable
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Author Contribution
S.S., A.H., and N.A. wrote the main draft of the manuscript text. A.H., L.M., A.Z., A.S., S.M.M., and R.V. contributed to the conception and design of the work and the overall methodology. A.H., N.A., and S.S. prepared the acquisition, analysis, and interpretation of data and all figures. L.M., A.Z., and S.M.M. provided funding acquisition. L.M., A.Z., A.S., S.M.M., and R.V. supervised the project, provided resources, and handled project administration. All authors reviewed the manuscript.
References
Baharum, M. S., Awang, R. A., & Baba, N. H. (2011, June). Flood monitoring system (MyFMS). In 2011 IEEE International Conference on System Engineering and Technology; pp. 204–208. IEEE DOI: https://doi.org/10.1109/ICSEngT.2011.5993451.
Chen, C., Yan, S., Yang, J., & Mei, J. (2023). An inland water detection method based on CYGNSS. Remote Sensing Letters, 15(1), pp. 35–43. https://doi.org/10.1080/2150704X.2023.229717.
Da Loong, J., Abdulla, R., Selvaperumal, S. K., & Rana, M. E. (2023, October). IoT-Based Flood Monitoring System Using Machine Learning Approach, 2023 4th International Conference on Data Analytics for Business and Industry (ICDABI); pp. 47–53. DOI: https://doi.org/10.1109/ICDABI60145.2023.10629349.
Faudzi, A. A. M., Raslan, M. M., & Alias, N. E. (2023, February). IoT based real-time monitoring system of rainfall and water level for flood prediction using LSTM Network. In IOP Conference Series: Earth and Environmental Science; Vol. 1143, No. 1, p. 012015. DOI: https://doi.org/10.1088/1755-1315/1143/1/012015.
Hamzah, S. A., Dalimin, M. N., Md Som, M., Zainal, M. S., Ramli, K. N., Yusop, A., … Mustapa, M. S. (2024). Flood level detection system using ultrasonic sensor and ESP32 camera: Preliminary results. Journal of Advanced Research in Applied Mechanics, 119(1), 162–173. DOI: https://doi.org/10.37934/aram.119.1.162173
Hashim, Y., Idzha, A. H. M., & Jabbar, W. A. (2018). The design and implementation of a wireless flood monitoring system. Journal of Telecommunication, Electronic and Computer Engineering (JTEC), 10(3 – 2), 7–11. Retrieved from https://jtec.utem.edu.my/jtec/article/view/4704
Hassan, H., Mazlan, M. I. Q., Ibrahim, T. N. T., & Kambas, M. F. (2020, September). IOT System: Water Level Monitoring for Flood Management. In IOP Conference Series: Materials Science and Engineering; Vol. 917, No. 1, p. 012037. IOP Publishing. DOI: https://doi.org/10.1088/1757-899X/917/1/012037.
He, X., Wang, J., Chen, C., & Yang, X. (2021, December). Detection of the floating objects on the water surface based on improved YOLOv5. 2021 IEEE 2nd International Conf. on Information Technology, Big Data and Artificial Intelligence (ICIBA); Vol. 2, pp. 772–777. DOI: https://doi.org/10.1109/ICIBA52610.2021.9688111.
Landman, E., “Duplicate Image Finder.” Accessed: Jul. 16, 2024. [Online]. Available: https://github.com/elisemercury/Duplicate-Image-Finder.
Lee, K.F, Ng, Z.N., Tan, K.B., Balachandran, R., Chong, A. S.I., & Chan, K.Y. (2024). Artificial Intelligence-Integrated Water Level Monitoring System for Flood Detection Enhancement. International Journal of Intelligent Systems and Applications in Engineering, 12(19s), 336–340. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/5071
Li, H., Yang, S., Liu, J., Yang, Y., Kadoch, M., & Liu, T. (2022). A Framework and Method for Surface Floating Object Detection Based on 6G Networks. Electronics, 11(18), pp. 2939. MDPI. DOI: https://doi.org/10.3390/electronics11182939.
Li, N., Huang, H., Wang, X., Yuan, B., Liu, Y., & Xu, S. (2022). Detection of floating garbage on water surface based on PC-Net. Sustainability, 14(18), 11729. DOI: https://doi.org/10.3390/su141811729
Li, W., Jiang, R., Wu, H., Xie, J., Zhao, Y., Song, Y., & Li, F. (2023). A system dynamics model of urban rainstorm and flood resilience to achieve the sustainable development goals. Sustainable Cities and Society, 96, 104631. DOI: https://doi.org/10.1016/j.scs.2023.104631
Lin, F., Hou, T., Jin, Q., & You, A. (2021). Improved YOLO Based Detection Algorithm for Floating Debris in Waterway. Entropy, 23(9), 1111. DOI: https://doi.org/10.3390/e23091111
Noar, N. A. Z. M., & Kamal, M. M. (2017, November). The development of smart flood monitoring system using ultrasonic sensor with Blynk applications. In 2017 IEEE 4th International Conference on Smart Instrumentation, Measurement and Application (ICSIMA) (pp. 1–6). IEEE. DOI: https://doi.org/10.1109/ICSIMA.2017.8312009.
Pagatpat, J. C., Arellano, A. C., & Gerasta, O. J. (2015, April). GSM & web-based flood monitoring system. In IOP Conference Series: Materials Science and Engineering (Vol. 79, No. 1, p. 012023). IOP Publishing. DOI: https://doi.org/10.1088/1757-899X/79/1/012023.
Piadeh, F., Behzadian, K., Chen, A. S., Campos, L. C., Rizzuto, J. P., & Kapelan, Z. (2023). Event-based decision support algorithm for real-time flood forecasting in urban drainage systems using machine learning modelling. Environmental Modelling & Software, 167, 105772. DOI: https://doi.org/10.1016/j.envsoft.2023.105772.
Rocamora, C., Puerto, H., Abadía, R., Brugarolas, M., Martínez-Carrasco, L., & Cordero, J. (2021). Floating debris in the low Segura river basin (Spain): avoiding litter through the irrigation network. Water, 13(8), 1074. DOI: https://doi.org/10.3390/w13081074
Saleh, A., Yuzir, A., & Abustan, I. (2020, June). Flood mapping using Sentinel-1 SAR Imagery: Case study of the November 2017 flood in Penang. In IOP Conference Series: Earth and Environmental Science (Vol. 479, No. 1, p. 012013). IOP Publishing. DOI: https://doi.org/10.1088/1755-1315/479/1/012013
Shatnawi, N. (2024). Mapping Floods during Cloudy Weather Using Radar Satellite Images. Jordan Journal of Civil Engineering, 18(1). DOI: https://doi.org/10.14525/JJCE.v18i1.03
Sohn, W., Brody, S. D., Kim, J. H., & Li, M. H. (2020). How effective are drainage systems in mitigating flood losses?. Cities, 107, 102917. DOI: https://doi.org/10.1016/j.cities.2020.102917
Sun, X., Deng, H., Liu, G., & Deng, X. (2019). Combination of spatial and frequency domains for floating object detection on complex water surfaces. Applied Sciences, 9(23), 5220. DOI: https://doi.org/10.3390/app9235220
United Nations, “The 17 Goals,” Sustainable Development. Accessed: Jul. 18, 2024. [Online]. Available: https://sdgs.un.org/goals
Van Emmerik, T., Mellink, Y., Hauk, R., Waldschläger, K., & Schreyers, L. (2022). Rivers as plastic reservoirs. Frontiers in Water, 3, 786936. DOI: https://doi.org/10.3389/frwa.2021.786936
Xu, S., Tang, H., Li, J., Wang, L., Zhang, X., & Gao, H. (2023). A YOLOW algorithm of water-crossing object detection. Applied Sciences, 13(15), 8890. DOI: https://doi.org/10.3390/app13158890
Zahir, S. B., Ehkan, P., Sabapathy, T., Jusoh, M., Osman, M. N., Yasin, M. N., … Jamaludin, R. (2019, December). Smart IoT flood monitoring system. In journal of physics: conference series (Vol. 1339, No. 1, p. 012043). IOP Publishing. DOI: https://doi.org/10.1088/1742-6596/1339/1/012043
Zaifudin, S. Z. S. S., Mahmud, W. M. H. W., Huong, A., Jumadi, N. A., Izaham, R. M. A. R., & Gan, H. S. (2025). Water Level and Flow Detection System: An IoT-Based Flood Monitoring Application. Journal of Advanced Research in Applied Mechanics, 127(1), 89–99. DOI: https://doi.org/10.37934/aram.127.1.8999
Zain, N. M., Elias, L. S., Paidi, Z., & Othman, M. (2020). Flood warning and monitoring system (FWMS) using GSM technology. Journal of Computing Research and Innovation, 5(1), 8–19. DOI: https://doi.org/10.24191/jcrinn.v5i1.158
Zakaria, M. I., Jabbar, W. A., & Sulaiman, N. (2023). Development of a smart sensing unit for LoRaWAN-based IoT flood monitoring and warning system in catchment areas. Internet of Things and Cyber-Physical Systems, 3, 249–261. DOI: https://doi.org/10.1016/j.iotcps.2023.04.005
Zhang, L., Wei, Y., Wang, H., Shao, Y., & Shen, J. (2021). Real-time detection of river surface floating object based on improved refinedet. IEEE Access, 9, pp. 81147–81160. DOI: https://doi.org/10.1109/ACCESS.2021.3085348
Zhang, L., Xie, Z., Xu, M., Zhang, Y., & Wang, G. (2023). EYOLOv3: An Efficient Real-Time Detection Model for Floating Object on River. Applied Sciences, 13(4), 2303. DOI: https://doi.org/10.3390/app13042303
Zhang, L., Zhang, Y., Zhang, Z., Shen, J., & Wang, H. (2019). Real-time water surface object detection based on improved faster R-CNN. Sensors, 19(16), 3523. DOI: https://doi.org/10.3390/s19163523
Zhou, Z., Sun, J., Yu, J., Liu, K., Duan, J., Chen, L., & Chen, C. P. (2021). An image-based benchmark dataset and a novel object detector for water surface object detection. Frontiers in Neurorobotics, 15, 723336. DOI: https://doi.org/10.3389/fnbot.2021.723336