
Research Article

AI-based automatic identification and processing techniques for agricultural safety information

Aiping Li1, Pei Wang1, Lin Shao1, Huiyun Liu2*

1Department of Art & Media, Hebei Vocational University of Technology and Engineering, Xingtai, China

2Department of Business Administration, Hebei Vocational University of Industry and Technology, Shijiazhuang, China

Abstract

In the context of globalization, agricultural safety is directly linked to food security and human health. The challenge facing modern agriculture is to monitor and process the vast volume of agricultural safety information generated in the digital era. This study aims to achieve the automatic identification and intelligent processing of agricultural safety information within a digital media environment by applying artificial intelligence (AI) technologies. The background of AI applications in agriculture is first explored, followed by an analysis of the urgent need for automated processing of agricultural safety information and an overview of current research in this field. It is shown that, despite progress, existing methods still lack in-depth feature extraction, and their data processing systems fall short of real-time operation. To address these limitations, a multi-level feature fusion method for agricultural product safety identification is proposed, alongside a decentralized artificial intelligence of things (AIoT) system for processing agricultural safety information. The multi-level feature fusion method extracts and integrates essential details on agricultural products across different dimensions and levels, thereby enhancing identification accuracy. Meanwhile, the decentralized AIoT system strengthens the efficiency and reliability of data processing, ensuring timely responses to agricultural safety information. Through deep neural networks, efficient recognition and classification of agricultural product images were achieved, enabling the automatic identification of agricultural safety information in a digital media environment. Utilizing deep learning algorithms, the system could learn the characteristics of different agricultural products and accurately identify potential safety issues, such as pests, diseases, or spoilage, providing timely monitoring and management for agricultural production. The innovation of this research lies in the comprehensive utilization of multi-level data features and advanced Internet of Things (IoT) technology, significantly improving the level of intelligence in agricultural safety supervision.

Key words: agricultural safety, artificial intelligence, data processing, intelligent supervision, internet of things, multi-level feature fusion

*Corresponding Author: Huiyun Liu, Department of Business Administration, Hebei Vocational University of Industry and Technology, Shijiazhuang 050000, China. Email: [email protected]

Received: 15 March 2024; Accepted: 17 June 2024; Published: 5 August 2024

DOI: 10.15586/qas.v16i3.1495

© 2024 Codon Publications
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license (http://creativecommons.org/licenses/by-nc-sa/4.0/).

Introduction

With the rapid advancement of AI technologies and the explosive growth in digital media information, the agricultural sector is witnessing a new wave of information transformation (Dewi et al., 2023; Li et al., 2024; Lin, 2021; Wang, 2017; Xu et al., 2017). Globally, agricultural safety remains a crucial issue directly affecting food security and human health (Jin & Xie, 2023; Murshed and Uddin, 2020; Safitri et al., 2022; Wang & Liang, 2023; Zhai et al., 2024). However, traditional methods of agricultural safety supervision often suffer from inefficiency and slow response, limiting the timeliness and accuracy of safety monitoring (Liu et al., 2023; Yi et al., 2023).

Thus, the automatic identification and processing of vast agricultural safety information using advanced AI technologies have become pressing challenges to address.

The significance of related research is evident (Pal et al., 2024; Raman et al., 2023). By applying AI to the automatic identification and processing of agricultural safety information in digital media, the efficiency of agricultural safety supervision can be significantly enhanced, and potential risks in agricultural production can also be prevented to some extent. This ensures consumer food safety and maintains public health (Agarwal et al., 2023; Gou et al., 2023; Kim and Kim, 2022; Lü et al., 2023; Mu et al., 2024; Sunardi et al., 2023; Syaharuddin et al., 2023; Wunderlich, 2021). Moreover, the application of this technology plays a crucial role in driving the modernization and intelligence level of the agricultural industry.

Existing research methods have made some progress in identifying and processing agricultural safety information but still exhibit deficiencies (Bondre & Patil, 2023; Devi et al., 2023). Many methods rely on single-level feature extraction, which fails to delve into the complex characteristics of agricultural products and thus limits recognition accuracy (Ge et al., 2024; Liu & Luo, 2022; Liu et al., 2022; Zhang & Peng, 2022). In addition, traditional data processing systems are typically built on centralized architectures that perform poorly under demands for high real-time capability and reliability, failing to meet the needs of modern agricultural safety information management. These issues severely restrict the effectiveness and foresight of agricultural safety information management and underscore an urgent need for novel methods and technologies.

The research methodology proposed in this study actively addresses these challenges. Two key research approaches are presented: (i) a multi-level feature fusion recognition network based on subregional discriminative saliency; by integrating both global and local features, this network enhances the capability to identify minute morphological differences in plant diseases and pests, particularly in recognizing key discriminative regions; and (ii) an agricultural safety information processing system built on a decentralized AIoT architecture. This system utilizes sensory devices to capture real-time changes in the agricultural environment, incorporates a backpropagation (BP) neural network for adaptive recognition, and selects optimal fog devices to reduce information transmission costs and enhance data processing efficiency. The successful implementation of this research not only fills a gap in existing technologies but also provides strong technical support for the digital transformation of the agricultural sector, offering significant theoretical value and broad application prospects.

Multi-Level Feature Fusion for Agricultural Product Safety Identification

Agricultural product safety identification technology finds application across various scenarios, such as intelligent monitoring and decision support systems. It is capable of real-time monitoring of crop growth environments, identifying pests and diseases, and providing farmers with precise pesticide usage recommendations to reduce the risk of excessive chemical application. Within the food supply chain management, this technology enables tracking products from field to table, ensuring agricultural produce’s traceability and controlled quality. Furthermore, in the import and export inspection and quarantine phases, using AI for rapid and accurate safety testing of agricultural products effectively prevents the market entry of substandard products, safeguarding public health and safety. At the consumer end, by scanning QR codes on products, consumers can use AI technology within mobile applications to instantaneously access safety information about the products, such as the types and quantities of pesticides used, thereby making more informed consumption choices.

To effectively address critical issues in agricultural product safety identification, this paper proposes a multi-level feature fusion identification network based on the discriminative saliency of subregions. Figure 1 illustrates the principle diagram of multi-level feature fusion in subregions. By integrating global and local features, this network model enhances the capability to recognize agricultural pests and diseases with minimal morphological differences, ensuring the granularity and accuracy of the identification process. Moreover, this technique strengthens the recognition of vital discriminative areas in agricultural products (such as damaged or spoiled parts), which are crucial for product safety.

Figure 1. Diagram of multi-level feature fusion principle in subregions.

Figure 2 presents a schematic diagram of the agricultural product safety identification network structure. Given that agricultural products typically exhibit different characteristics along specific axial directions, such as the longitudinal and transverse axes of fruits showing different cross-sections, the model initially divides the candidate region according to the target axial direction into L×V local subregions that align with the orientation of the candidate target area. Through this approach, the model can capture overall features and obtain details of local areas, which is crucial for identifying minor yet significant flaws or characteristics of agricultural products.

Figure 2. Structure of the agricultural product safety identification network.

Specifically, this study applies rotation and pooling operations to the region-of-interest alignment of the candidate target area of the whole agricultural product image. Assuming the center coordinates of the entire candidate region are represented by (a_z, b_z), the lengths of the major and minor axes by g and q, respectively, and the angle of the candidate region by ϕ, the center coordinates (a_{uk}, b_{uk}) of the subregion components indexed by (u, k) (u = 0, 1, ..., L−1; k = 0, 1, ..., V−1) are calculated using the following formula:

(a_{uk}, b_{uk})^T = [ (g·cos ϕ)/L, −(q·sin ϕ)/V ; (g·sin ϕ)/L, (q·cos ϕ)/V ] · (u − (L−1)/2, k − (V−1)/2)^T + (a_z, b_z)^T    (1)

Let the minor axis length of the subregion be represented by q_0, the major axis length by g_0, and the expansion coefficient of the subregion by β; the calculation formula is as follows:

q_0 = qβ/V,  g_0 = gβ/L    (2)
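
To make the geometry of Equations (1) and (2) concrete, the following minimal numpy sketch computes the subregion centers and axis lengths for an oriented candidate region. The rotation-and-scaling matrix form follows the definitions above; the exact sign convention of the rotation term and the function name subregion_geometry are illustrative assumptions rather than the authors' implementation.

import numpy as np

def subregion_geometry(az, bz, g, q, phi, L, V, beta):
    """Sketch of Eqs. (1)-(2): centers and axis lengths of the L x V
    oriented subregions of a candidate target region.
    (az, bz): center of the whole candidate region
    g, q    : major / minor axis lengths; phi: orientation angle
    beta    : expansion coefficient of each subregion
    """
    centers = np.zeros((L, V, 2))
    # rotation-and-scaling matrix mapping grid indices to image coordinates
    R = np.array([[ g * np.cos(phi) / L, -q * np.sin(phi) / V],
                  [ g * np.sin(phi) / L,  q * np.cos(phi) / V]])
    for u in range(L):
        for k in range(V):
            offset = np.array([u - (L - 1) / 2.0, k - (V - 1) / 2.0])
            centers[u, k] = R @ offset + np.array([az, bz])
    g0 = g * beta / L        # major axis of each subregion, Eq. (2)
    q0 = q * beta / V        # minor axis of each subregion, Eq. (2)
    return centers, g0, q0

# example: a 3 x 5 grid over a region centered at (120, 80)
centers, g0, q0 = subregion_geometry(120, 80, g=60, q=30, phi=np.pi / 6, L=3, V=5, beta=1.2)
print(centers.shape, g0, q0)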

To automatically identify and focus on subregions containing essential information or pest and disease characteristics without the need for additional label information, this paper utilizes the self-supervised mechanism of the mining network and guiding network from the Navigator-Teacher-Scrutinizer Network (NTS-Net) for subregion selection. The training of the mining network involves selecting subregions, either randomly or according to specific rules, and letting the network learn to predict the importance of these areas independently, a process that requires no manual intervention; the network autonomously discovers the most discriminative features. The guiding network adjusts subsequent processing steps based on the output of the mining network. A feedback mechanism is applied, enabling the mining and guiding networks to learn from each other: the initial predictions of the mining network guide the focus areas of the guiding network, while in the later fine-tuning stages, the guiding network in turn optimizes the mining network’s discrimination strategy.

To enhance the accuracy of the information-quantity scoring, this study defines the confidence score of a particular area as the degree of accuracy in agricultural product safety identification achieved by that area within the self-supervised mechanism of NTS-Net. Assuming the features of the subregions are represented by (D_1, D_2, ..., D_M), the output of the guiding network for target recognition of a particular subregion is denoted by S(D_u), the corresponding true target label by b, and the multi-class cross-entropy loss function for V categories by M_ZR, the confidence score of this subregion is denoted by Z_u. The expressions for M_ZR and Z_u are as follows:

Z_u = M_ZR(S(D_u), b)    (3)
M_ZR(o, b) = −Σ_{z=1}^{V} b_z · log(o_z)    (4)
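
The confidence scoring of Equations (3) and (4) can be illustrated with a short sketch, assuming the guiding network outputs a softmax probability vector S(D_u) for each subregion; under this reading, a lower cross-entropy value Z_u indicates that the subregion supports the correct class more strongly. The function names and example data are hypothetical.

import numpy as np

def cross_entropy(o, b_onehot, eps=1e-12):
    """Eq. (4): multi-class cross-entropy M_ZR(o, b) over V categories."""
    return -np.sum(b_onehot * np.log(o + eps))

def confidence_scores(subregion_probs, label, num_classes):
    """Eq. (3): Z_u = M_ZR(S(D_u), b) for every subregion.
    subregion_probs: (M, V) softmax outputs of the guiding network,
    one row per subregion feature D_u; label: true class index b."""
    b_onehot = np.eye(num_classes)[label]
    return np.array([cross_entropy(p, b_onehot) for p in subregion_probs])

# example: 4 subregions, 6 classes; a smaller Z_u marks a more reliable subregion
probs = np.random.dirichlet(np.ones(6), size=4)
print(confidence_scores(probs, label=2, num_classes=6))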

The study and application of adaptive feature fusion technology are crucial in the automatic identification of agricultural safety information. This study proposes an adaptive feature fusion method based on the discriminative saliency of subregions. This approach allows the model to adaptively adjust the feature weights according to the contribution level of each subregion to the final classification result. This means that areas with greater discriminative power are assigned higher weights, while less important areas receive lower weights, thus highlighting the most critical information during feature fusion.

Assuming the discriminative saliency of a subregion is represented by A, the information quantity of that subregion by U, the Sigmoid function by T, and the mean and standard deviation of the information quantities of all subregions (used to normalize U before applying the Sigmoid function) by ω and δ, respectively, the specific calculation formula is given as follows:

A = T((U − ω)/δ)    (5)

To ensure that the final feature vector not only encompasses the information extracted from each subregion but also reflects the importance of each subregion to the overall identification task, this paper determines feature weights based on the discriminative saliency of each subregion, followed by the concatenation of the weighted features along the channel dimension. That is, the discriminative saliency of all subregions selected through information-quantity ranking is calculated using the formula above and then weighted and fused with the overall features. Assuming the fused feature is denoted by D, the overall target-area feature by D_X, the features of the subregions by D_1 to D_J, and the weight adjustment factor by η, the weighted fusion formula is as follows:

D = [D_X; ηA_1·D_1; …; ηA_J·D_J]    (6)
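
A compact sketch of Equations (5) and (6), assuming the information-quantity scores U are already available from the guiding network: the scores are standardized, passed through the Sigmoid to obtain the discriminative saliency A, and the saliency-weighted subregion features are concatenated with the overall feature along the channel dimension. Names such as fuse_features and the example dimensions are illustrative.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminative_saliency(U):
    """Eq. (5): A = Sigmoid((U - mean) / std) per subregion, where U holds
    the information-quantity score of each selected subregion."""
    omega, delta = U.mean(), U.std() + 1e-12
    return sigmoid((U - omega) / delta)

def fuse_features(Dx, D_sub, U, eta=1.0):
    """Eq. (6): channel-wise concatenation of the overall feature D_X with
    the saliency-weighted subregion features eta * A_j * D_j."""
    A = discriminative_saliency(U)
    weighted = [eta * a * d for a, d in zip(A, D_sub)]
    return np.concatenate([Dx] + weighted, axis=0)

# example: a 256-d global feature plus three 128-d subregion features
Dx = np.random.randn(256)
D_sub = [np.random.randn(128) for _ in range(3)]
U = np.array([0.8, 2.1, 1.4])              # hypothetical information scores
print(fuse_features(Dx, D_sub, U).shape)   # (256 + 3*128,)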

Upon obtaining a feature vector infused with multi-level feature information, the model employs a fully connected network for the final classification. The role of the fully connected layer here is to map the complex feature vector to the target classification results. This step involves continuously optimizing the weights of the fully connected layer during training so that the network’s output classification results are as close to the actual labels as possible, achieving high accuracy in identifying agricultural safety information. Let the overall loss of the multi-level feature fusion identification network be denoted by M, the classification loss of the final fused feature by M_Z, the loss of the mining network under the guidance of the guiding network by M_S, the loss of recognition confidence of the guiding network by M_O, the classification loss of the overall features by M_X, and the weight hyperparameters by η_1, η_2, and η_3. The formula for the network’s overall loss function is given as follows:

M = M_Z + η_1·M_S + η_2·M_O + η_3·M_X    (7)

The constructed network employs a class balance loss in M_Z to mitigate the impact of class imbalance in the identification of agricultural product safety. Assuming the final classification output obtained from the fused feature is denoted by o_u^s, the class balance weights by Z_Y(u), the balancing hyperparameter related to reweighting by α, and the number of samples for the target category in the training set annotations by v_b^u, the formulas are as follows:

M_Z = −Σ_{u=1}^{V} Z_Y(u) · (1 − o_u^s)^ε · log(o_u^s)    (8)
Z_Y(u) = (1 − α)/(1 − α^{v_b^u})    (9)
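
The reweighting of Equation (9) and its use in the classification loss can be sketched as follows; the sketch applies the class-balance weight Z_Y(u) to a plain cross-entropy term on the true class, a simplified reading of Equation (8) without the modulation term, and includes Equation (7) as the weighted sum of the four loss items. All function names and the example sample counts are illustrative assumptions.

import numpy as np

def class_balance_weights(sample_counts, alpha):
    """Eq. (9): Z_Y(u) = (1 - alpha) / (1 - alpha ** v_b^u), the reweighting
    factor derived from the number of training samples per category."""
    counts = np.asarray(sample_counts, dtype=float)
    return (1.0 - alpha) / (1.0 - np.power(alpha, counts))

def class_balanced_ce(probs, label, weights, eps=1e-12):
    """Simplified reading of Eq. (8): class-balanced cross-entropy on the
    fused-feature classifier output o^s; only the true-class term is kept."""
    return -weights[label] * np.log(probs[label] + eps)

def total_loss(MZ, MS, MO, MX, eta1, eta2, eta3):
    """Eq. (7): M = M_Z + eta1*M_S + eta2*M_O + eta3*M_X."""
    return MZ + eta1 * MS + eta2 * MO + eta3 * MX

# example with 6 categories and a hypothetical sample distribution
w = class_balance_weights([500, 120, 80, 900, 60, 300], alpha=0.999)
probs = np.random.dirichlet(np.ones(6))
MZ = class_balanced_ce(probs, label=2, weights=w)
print(total_loss(MZ, MS=0.4, MO=0.3, MX=0.6, eta1=1.0, eta2=0.5, eta3=0.5))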

Construction of an Agricultural Safety Information Processing System Based on Decentralized AIoT

To tackle critical issues such as real-time performance, adaptability, and data processing efficiency in agricultural safety information processing, this study proposes establishing a decentralized AIoT system for agricultural safety information processing. In this system, sensing devices are directly mapped to input neurons, enabling real-time capture of changes in the agricultural environment and rapid response, thus ensuring real-time performance. Furthermore, through the learning capabilities of the BP neural network, the system can adaptively recognize safety information features across different agricultural scenarios, enhancing the accuracy of identification and demonstrating high adaptability. The system also selects the optimal fog devices as hidden layer nodes, reducing information transmission costs and improving data processing efficiency. Figure 3 displays the IoT network topology diagram for the agricultural safety information processing system.

Figure 3. IoT network topology diagram for agricultural safety information processing system.

Specifically, this paper selects a three-layer BP neural network as the core data processing model, comprising input, hidden, and output layers. The input layer receives raw data from IoT sensing devices, while the output layer conveys the results processed by the neural network to actuating devices. The hidden layer is crucial for complex feature extraction and learning, with the number and arrangement of its nodes significantly impacting network performance. The BP neural network optimizes internal weights by learning the mapping relationship between inputs and outputs, improving the accuracy of predictions and classifications. Furthermore, IoT devices that meet system requirements are selected and organized into a network. In choosing fog devices, their storage and computational resources are considered to ensure they meet data storage and computation needs. The IoT network’s structure is designed to ensure efficient data flow within the network and timely processing by fog devices. According to application needs, sensing devices in the IoT are selected as input nodes, directly influencing the neural network’s input layer. Actuating devices are confirmed as output nodes directly linked to the neural network’s output layer. Figure 4 shows the functional flowchart of actuating devices. Determining input and output nodes is based on the specific needs of agricultural safety information processing, such as pest and disease identification, temperature and humidity monitoring, etc. Finally, hidden nodes are determined with the primary goal of minimizing the overall operational cost of the system while also evaluating the performance of fog devices to select those that not only meet data processing requirements but also reduce overall system latency and energy consumption as hidden layer nodes. Selecting hidden layer nodes also considers network scalability and fault tolerance to ensure system robustness.
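
The following self-contained sketch illustrates the three-layer BP mapping described above, with input neurons standing in for sensing devices, hidden neurons for the selected fog devices, and output neurons for actuating devices. The network size, learning rate, and training loop are illustrative assumptions; a deployed system would train on logged sensor and actuator data rather than the toy target used here.

import numpy as np

class ThreeLayerBP:
    """Minimal sketch of the three-layer BP network: input neurons correspond
    to IoT sensing nodes, hidden neurons to selected fog devices, and output
    neurons to actuating devices. Shapes and the training loop are illustrative."""
    def __init__(self, n_sensors, n_fog, n_actuators, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.5, (n_sensors, n_fog))
        self.W2 = rng.normal(0, 0.5, (n_fog, n_actuators))
        self.lr = lr

    @staticmethod
    def _sig(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x):
        self.h = self._sig(x @ self.W1)      # fog-device (hidden) activations
        return self._sig(self.h @ self.W2)   # actuator commands in [0, 1]

    def train_step(self, x, y):
        out = self.forward(x)
        # back-propagate the squared error through both layers
        d_out = (out - y) * out * (1 - out)
        d_hid = (d_out @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= self.lr * np.outer(self.h, d_out)
        self.W1 -= self.lr * np.outer(x, d_hid)
        return float(np.mean((out - y) ** 2))

# example: 4 sensors (temperature, humidity, image score, gas), 3 fog devices,
# 2 actuators (irrigation pump, ventilation)
net = ThreeLayerBP(n_sensors=4, n_fog=3, n_actuators=2)
x = np.array([0.6, 0.3, 0.9, 0.1])
y = np.array([1.0, 0.0])
for _ in range(200):
    loss = net.train_step(x, y)
print(round(loss, 4), net.forward(x).round(2))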

Figure 4. Functional flowchart of the actuating device.

The system’s configuration of variables and parameters is as follows: The input set, denoted by T, represents the collection of all IoT sensing nodes capable of gathering agricultural environmental data, consisting of input nodes t. The hidden set, represented by G, characterizes all nodes within the neural network’s hidden layer. The output set, denoted by P, comprises the IoT actuating nodes that can receive the neural network’s output signal and respond accordingly; these actuating nodes can include water pumps (for irrigation), ventilation systems (for temperature and humidity regulation), spraying equipment (for pest and disease control), and similar devices. The set V comprises the IoT nodes associated with fog devices, including all fog device nodes functioning as part of the neural network’s hidden layer. These nodes are responsible for edge computing, processing and analyzing received data, and transmitting results to output nodes or other fog devices. They possess significant computational and storage capacities, reducing reliance on central cloud resources and lowering latency. Whether a node u is selected as hidden node k is indicated by a_{u,k}, where a value of 1 signifies selection and a value of 0 indicates non-selection. The transmission power O refers to the energy IoT nodes require during data transmission. This parameter affects the reliability and range of communication between nodes, and a balance between each node’s energy consumption and communication needs must be considered in system design. The transmission time s refers to the time required for information to be transmitted from one node to another. This parameter directly impacts the system’s response speed and real-time performance; in agricultural scenarios, timely information transmission is crucial for addressing rapidly changing environmental conditions. The system’s communication cost Z refers to the total cost required to maintain communication within the IoT system, including energy consumption, data transmission fees, and device maintenance. To ensure the system’s economic efficiency, efficient communication protocols and energy-saving node management strategies must be designed to control costs.

In the decentralized AIoT system for agricultural safety information processing, the objective function considering communication costs can be divided into three scenarios based on transmission time, transmission power, and both transmission time and power. Transmission time in agricultural safety information processing includes data collection, processing, and transmission delays.

Z = Σ_{k∈G} Σ_{u∈V} [ Σ_{t∈T} s_{t,u} + Σ_{p∈P} s_{u,p} ] · a_{u,k}    (10)

In IoT nodes, power consumption is related to the device’s energy efficiency, communication distance, and data volume. The objective function aims to minimize transmission power, extend the device’s operational lifespan, and reduce energy consumption.

Z = Σ_{k∈G} Σ_{u∈V} [ Σ_{t∈T} O_{t,u} + Σ_{p∈P} O_{u,p} ] · a_{u,k}    (11)

Communication cost can be viewed as a weighted combination of transmission time and transmission power. A balance between these two factors must be struck in actual system design. For instance, in some application scenarios, rapid data transmission may be prioritized, while energy conservation may be more critical in others. Assuming the weights of transmission power and transmission time are represented by β and α, respectively, the formula is as follows:

Z = β·Σ_{k∈G} Σ_{u∈V} [ Σ_{t∈T} O_{t,u} + Σ_{p∈P} O_{u,p} ]·a_{u,k} + α·Σ_{k∈G} Σ_{u∈V} [ Σ_{t∈T} s_{t,u} + Σ_{p∈P} s_{u,p} ]·a_{u,k} = Σ_{k∈G} Σ_{u∈V} [ β(Σ_{t∈T} O_{t,u} + Σ_{p∈P} O_{u,p}) + α(Σ_{t∈T} s_{t,u} + Σ_{p∈P} s_{u,p}) ]·a_{u,k}    (12)
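
A small numpy sketch of the cost model in Equations (10) to (12): for each candidate fog device u, the transmission times s and powers O to and from the sensing set T and actuating set P are summed, weighted by α and β, and accumulated over the assignment matrix a. The matrix shapes and example values are hypothetical.

import numpy as np

def communication_cost(s_tu, s_up, O_tu, O_up, a, alpha=0.5, beta=0.5):
    """Sketch of Eqs. (10)-(12). For every fog device u and hidden node k,
    the per-device cost sums transmission time (s) from sensing nodes t in T
    and to actuating nodes p in P, and likewise transmission power (O);
    a[u, k] = 1 if device u is assigned to hidden node k.
    beta = 1, alpha = 0 recovers Eq. (11); beta = 0, alpha = 1 recovers Eq. (10)."""
    time_per_dev = s_tu.sum(axis=0) + s_up.sum(axis=1)    # shape (|V|,)
    power_per_dev = O_tu.sum(axis=0) + O_up.sum(axis=1)   # shape (|V|,)
    per_dev = beta * power_per_dev + alpha * time_per_dev
    return float((per_dev[:, None] * a).sum())

# example: 3 sensing nodes, 4 candidate fog devices, 2 actuating nodes, 2 hidden nodes
rng = np.random.default_rng(1)
s_tu, s_up = rng.uniform(1, 5, (3, 4)), rng.uniform(1, 5, (4, 2))
O_tu, O_up = rng.uniform(0.1, 0.5, (3, 4)), rng.uniform(0.1, 0.5, (4, 2))
a = np.zeros((4, 2)); a[1, 0] = a[3, 1] = 1        # a candidate assignment
print(communication_cost(s_tu, s_up, O_tu, O_up, a))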

The system proposed in this study is a decentralized AI system that maps IoT nodes to the hidden layer neurons of a neural network one-to-one for the automatic identification and processing of agricultural safety information. To optimize the proposed system, the objective is to minimize costs while ensuring accurate information transmission and processing. Since each hidden neuron corresponds to a unique IoT node, this optimization problem is subject to constraints. These constraints not only limit the mapping method but may also include performance indicators of IoT nodes, such as processing capacity, memory capacity, energy consumption, etc. The expressions for the constraint conditions are as follows:

Σ_{u∈V} a_{u,k} = 1,  ∀k ∈ G    (13)
a_{u,k} ∈ {0, 1},  ∀u ∈ V, ∀k ∈ G    (14)

The optimization model expression constructed to minimize the total transmission time of the entire system is as follows:

min Σ_{k∈G} Σ_{u∈V} [ Σ_{t∈T} s_{t,u} + Σ_{p∈P} s_{u,p} ] · a_{u,k}    (15)
s.t.  Σ_{u∈V} a_{u,k} = 1,  ∀k ∈ G    (16)
a_{u,k} ∈ {0, 1},  ∀u ∈ V, ∀k ∈ G    (17)

The optimization model expression constructed to minimize the total energy consumption of the system is as follows:

min Σ_{k∈G} Σ_{u∈V} [ Σ_{t∈T} O_{t,u} + Σ_{p∈P} O_{u,p} ] · a_{u,k}    (18)
s.t.  Σ_{u∈V} a_{u,k} = 1,  ∀k ∈ G    (19)
a_{u,k} ∈ {0, 1},  ∀u ∈ V, ∀k ∈ G    (20)

An optimization model that considers the importance of both transmission time and power to find a balance between the two is defined as follows:

min Σ_{k∈G} Σ_{u∈V} [ β(Σ_{t∈T} O_{t,u} + Σ_{p∈P} O_{u,p}) + α(Σ_{t∈T} s_{t,u} + Σ_{p∈P} s_{u,p}) ] · a_{u,k}    (21)
s.t.  Σ_{u∈V} a_{u,k} = 1,  ∀k ∈ G    (22)
a_{u,k} ∈ {0, 1},  ∀u ∈ V, ∀k ∈ G    (23)

The optimization problem described above must account for the particular characteristics of agricultural safety information processing and ensure efficient, reliable data transmission and processing under limited resources. This optimization is not merely a matter of technical implementation; it also bears directly on the safety and efficiency of agricultural production.

The objective functions proposed in the system include the nonlinear relationship between transmission power and time, as well as potential nonlinear interactions between nodes, such as overlapping sensing ranges and signal attenuation. Additionally, the performance indicators of IoT nodes involved in the constraint conditions and the quality of service requirements they need to meet may also exhibit nonlinear characteristics. Considering these nonlinear features, the constraint conditions of the optimization model constructed in this study can be rewritten as follows:

a_{u,k} · (a_{u,k} − 1) = 0,  ∀u ∈ V, ∀k ∈ G    (24)
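
Because the per-device cost in Equations (15) to (23) does not vary with the hidden-node index k, the constrained selection can be sketched with a simple rule: rank the candidate fog devices by their weighted cost and assign one device to each hidden node, which satisfies Equations (13), (22), and (24) by construction. Spreading hidden nodes over several low-cost devices, as done below, is an illustrative choice motivated by the fault-tolerance consideration mentioned earlier rather than part of the formal model; the function name and example costs are assumptions.

import numpy as np

def assign_fog_devices(per_device_cost, n_hidden):
    """Sketch of the selection in Eqs. (21)-(24): choose, for every hidden
    node k, exactly one fog device u (sum_u a[u, k] = 1) with low weighted
    communication cost; a[u, k] in {0, 1} (Eq. 24) holds by construction."""
    n_dev = len(per_device_cost)
    a = np.zeros((n_dev, n_hidden), dtype=int)
    order = np.argsort(per_device_cost)
    for k in range(n_hidden):
        # spread hidden nodes over the cheapest devices for fault tolerance
        a[order[k % n_dev], k] = 1
    assert np.all(a.sum(axis=0) == 1)          # Eqs. (13)/(22)
    assert np.all(a * (a - 1) == 0)            # Eq. (24)
    return a

# example: 5 candidate fog devices, 3 hidden nodes
cost = np.array([4.2, 2.8, 3.5, 5.1, 2.9])     # beta*power + alpha*time per device
print(assign_fog_devices(cost, n_hidden=3))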

Experimental Results and Analysis

The sensing devices utilized in the experiments include various sensors capable of capturing multi-level data characteristics. These sensors span imaging, acoustic, and chemical-composition modalities, capturing multidimensional information pertinent to agricultural product safety. Image sensors were employed to capture the external characteristics of agricultural products, acoustic sensors to detect abnormal sounds or vibrations, and chemical sensors to measure chemical constituents, such as pH values or moisture content. Together, these devices provided diverse data inputs to the neural network.

The constructed neural network received multi-level data features from these sensing devices and processed them using a multi-level feature fusion approach. In identifying specific diseases, the neural network initially acquired images of the agricultural product’s appearance through image sensors and then analyzed detailed features within the images, such as color and texture. Concurrently, acoustic sensors provided supplementary information, such as specific sound patterns produced by diseases. Chemical sensors contributed data on the internal chemical composition of agricultural products, aiding the neural network in gaining a comprehensive understanding of the product’s condition. By considering these different levels of data features, the neural network could accurately identify specific diseases, such as scab or rot, and conduct the requisite safety assessments and interventions.

This study introduces a multi-level feature fusion method for agricultural product safety identification. Figure 5 displays the precision-recall (P-R) curves of agricultural product safety identification models trained with different loss functions. It is observed that when the overall loss function is employed for training, the model’s AP@50 improves by 0.4 percentage points compared to models trained with other loss functions. This indicates that the model using the overall loss function achieves more accurate detections at an Intersection over Union (IoU) threshold of 0.5. When the IoU threshold is set higher, the improvement in model performance becomes even more significant. This demonstrates that the model can precisely identify and locate targets, generating prediction boxes that align more closely with the target boxes. The increase in the IoU ratio between the model-generated prediction boxes and the actual target boxes provides more accurate candidate regions for subsequent fine-grained identification. The study shows that significant enhancements in the accuracy and localization precision of agricultural product safety identification models can be achieved by adopting a multi-level feature fusion strategy and considering an overall loss function that integrates different loss items, especially at higher IoU thresholds. This is important for the rapid and accurate detection of diseases and damage in agricultural products in practical applications.
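
For readers less familiar with the AP@50 metric discussed above, the sketch below computes the Intersection over Union between a predicted box and a ground-truth box; a detection counts toward AP@50 only when this ratio reaches 0.5. The box format and example coordinates are illustrative.

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2);
    AP@50 counts a detection as correct when IoU >= 0.5."""
    xa1, ya1, xa2, ya2 = box_a
    xb1, yb1, xb2, yb2 = box_b
    iw = max(0.0, min(xa2, xb2) - max(xa1, xb1))
    ih = max(0.0, min(ya2, yb2) - max(ya1, yb1))
    inter = iw * ih
    union = (xa2 - xa1) * (ya2 - ya1) + (xb2 - xb1) * (yb2 - yb1) - inter
    return inter / union if union > 0 else 0.0

# example: a predicted lesion box against the annotated ground-truth box
print(round(iou((10, 10, 60, 60), (20, 20, 70, 70)), 3))   # ~0.47 -> a miss at IoU 0.5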

Figure 5. P-R curves of agricultural product safety identification models trained with different loss functions.

According to the results shown in Table 1 for the agricultural product safety identification models, it is evident that in most cases the multi-level feature fusion method proposed in this study improves AP@50 performance compared with the model without the NTS-Net self-supervision mechanism. Specifically, black rot increased from 45.6 to 73.4%, an improvement of 27.8 percentage points, indicating a substantial enhancement. Stalk rot improved noticeably from 63.2 to 81.2%. Wilting increased from 63.8 to 78.5%, an improvement of 14.7 percentage points. Rotting improved from 91.2 to 95.3%, an increase of 4.1 percentage points. Scab jumped from 82.6 to 98.9%, an improvement of 16.3 percentage points. Tobacco mosaic virus increased from 83.7 to 92.6%, an improvement of 8.9 percentage points. The mean average precision (mAP) improved from 71.6 to 78.6%, an increase of 7 percentage points, indicating a significant improvement in overall average recognition accuracy across all categories. It can be concluded that the proposed multi-level feature fusion method achieved performance improvements in most categories, particularly for specific diseases such as black rot and scab, and the significant increase in overall mAP demonstrates its superiority over the model without the NTS-Net self-supervision mechanism.

Table 1. Impact on recognition results (AP@50, %) of agricultural product safety identification models.

Category | Without the NTS-Net self-supervision mechanism | The proposed model
Leaf blight | 61.2 | 61.4
Brown spot | 65.3 | 65.2
Cassava mosaic | 88.7 | 87.9
Scab | 82.6 | 98.9
Late blight | 91.5 | 91.2
Powdery mildew | 92.6 | 88.6
Tobacco mosaic virus | 83.7 | 92.6
Black rot | 45.6 | 73.4
Stalk rot | 63.2 | 81.2
Stalk rot | 64.8 | 71.4
Alternaria rot | 87.5 | 81.6
Insect pollution | 59.1 | 63.5
Molding | 82.1 | 82.7
Blighting | 74.6 | 72.6
Wilting | 63.8 | 78.5
Bruising | 74.9 | 77.6
Rotting | 91.2 | 95.3
mAP | 71.6 | 78.6

Table 2 illustrates the impact of varying subregion division quantities on the AP@50 metric for agricultural product safety identification models based on multi-level feature fusion. When the subregion division is set to 1×3, the model’s mAP is 73.5%. Increasing the subregion division quantity to 3×3 and 3×5 slightly elevates the mAP to 73.6 and 74.8%, respectively, with the 3×5 configuration showing the highest mAP. Further increasing the subregion division quantity to 5×5, 3×7, 5×7, and 7×7 leads to a decline in mAP to 73.5, 73.1, 72.9, and 71.2%, respectively. The quantity of training areas for the guiding network remains constant at 6 across all configurations, meaning the model learns features from 6 areas regardless of the subregion division. The quantity of subregion feature fusion remains at 1 in all configurations, indicating that regardless of the number of subregions divided, the final number of features fused is consistent. It is inferred that increasing the number of subregion divisions within a specific range enhances the model’s mAP according to the multi-level feature fusion method proposed in this paper. However, this trend reverses once the subregion quantity increases beyond a certain point. This is due to excessive subregion division leading to each subregion containing insufficient information or the feature fusion becoming overly complex, affecting model performance. The optimal subregion division configuration is 3×5, which presents the highest mAP (74.8%) in the table, suggesting that moderate subregion division benefits model performance. As the number of subregions increases, the mAP gradually decreases, indicating that excessive subregion division causes feature dispersion, leading to performance degradation.

Table 2. Impact of subregion quantity on recognition results (AP@50) of agricultural product safety identification models.

Number of divided subregions | Number of training regions of guiding network | Number of subregion feature fusion | mAP (%)
1×3 | 3 | 1 | 73.5
3×3 | 6 | 1 | 73.6
3×5 | 6 | 1 | 74.8
5×5 | 6 | 1 | 73.5
3×7 | 6 | 1 | 73.1
5×7 | 6 | 1 | 72.9
7×7 | 6 | 1 | 71.2

Figure 6 presents the detection results of different identification models across various categories of agricultural product safety recognition (all targets, disease type, pest type, and damage or spoilage). The results are expressed in percentages, reflecting the accuracy of each model in each category. The model proposed in this study achieves the highest accuracy in detecting all targets, reaching 71.5%, which is significantly superior to the other models, namely You Only Look Once (YOLO) at 36%, Faster Region-Convolutional Neural Network (R-CNN) at 55%, Mask R-CNN at 61.5%, and Single Shot MultiBox Detector (SSD) at 62.5%. In the disease category, the proposed model also performs best, with an accuracy of 80.6%, far exceeding the other models (YOLO at 43.2%, Faster R-CNN at 71.4%, Mask R-CNN at 75.2%, SSD at 65.9%). For the pest category, the accuracy of the proposed model is 72.9%, again the highest and better than that of the other models (YOLO at 33.9%, Faster R-CNN at 47.4%, Mask R-CNN at 59.7%, SSD at 61.8%). In recognizing damage or spoilage, the proposed model reaches 64.6%, remaining competitive with the SSD model (61.5%) and clearly better than YOLO (36.6%) and Faster R-CNN (60.3%); Mask R-CNN performs slightly lower in this category, at 57%. It can be concluded that the agricultural product safety identification method based on multi-level feature fusion proposed in this paper demonstrates high accuracy across all categories of agricultural product safety recognition, particularly in the disease and pest categories, where its accuracy is markedly higher than that of other commonly used identification models. The effectiveness of the proposed model is attributed to the multi-level feature fusion strategy, which better captures features at different scales and levels of abstraction, thereby enhancing the model’s ability to detect various safety issues in agricultural products.

Figure 6. Comparative results of different agricultural product safety detection items using various identification models (AP@50).

Figure 7 records the propagation rate of the fog devices at different numbers of data transmissions. Analyzing these data helps clarify how the performance of fog computing devices affects the efficiency of the entire agricultural safety information processing system. The propagation rate fluctuates between 7.1 and 8.1, indicating that the fog devices maintain a relatively stable transmission rate across different transmission counts. This stability is crucial for agricultural safety information processing systems, as consistent performance must be maintained while processing real-time data and issuing response commands. Furthermore, no markedly low propagation rates were observed, suggesting that the chosen fog devices possess sufficient storage and computational resources to meet the system’s basic requirements. In the IoT environment, selecting suitable fog devices is vital for reducing latency and improving the timeliness of data processing. A stable transmission rate also implies that the fog devices can handle data flows effectively, supporting the timeliness and accuracy of agricultural safety information processing. Further analysis reveals slight decreases in propagation rate at specific points (e.g., at 2, 5, 7, 9, and 15 data transmissions), likely caused by network congestion, fog devices reaching their processing capacity limits, or sudden increases in sensor data transmission. However, even at these points, the decrease is small and the system quickly recovers to higher propagation rates, demonstrating the system’s resilience and robustness. It can be concluded that the decentralized AIoT system for agricultural safety information processing proposed in this paper exhibits stability and robustness in fog device transmission rates, maintaining a relatively balanced propagation rate across different data transmission frequencies. This stability is vital to the real-time response and processing capabilities of agricultural safety monitoring systems and positively impacts the system’s reliability and ongoing operational cost control.

Figure 7. Transmission rate graph of the fog device.

The data from Figure 8 allows for comparing delays between fog devices and cloud storage at different time points. It is noted that at time points 1 and 2, the delay of fog devices is similar to or the same as that of the storage side, indicating that the performance difference between fog computing devices and cloud storage is not significant when processing low data volumes. Starting from time point 3, the delay of fog devices becomes lower than that of the storage side, and this difference increases over time. Particularly at time points 5 and 6, the delay at the storage side is approximately twice that of the fog devices. This difference demonstrates that fog devices provide faster data processing and response capabilities than traditional cloud storage when the data volume increases. This is due to fog computing processing data on the network’s edge, reducing the time required for data to be transmitted to the cloud, thereby reducing overall delay. It can be concluded that fog devices have lower delays than traditional cloud storage devices, enabling fog computing-based agricultural monitoring systems to process and respond to data quickly. This is crucial for agricultural safety applications that require real-time monitoring and control. As time progresses and data volume increases, the advantage of fog devices over cloud storage becomes more evident, emphasizing the efficiency of fog computing in processing large-scale distributed IoT data. The architecture proposed in this paper effectively utilizes the advantages of fog computing to reduce delay and enhance the agricultural safety information processing system’s real-time performance and response speed. This is extremely valuable for agricultural producers, as timely data analysis and decision support are vital to improving agricultural production efficiency and mitigating risks.

Figure 8. Delay comparison graph of the sensing device sending data to the fog device and the cloud.

In summary, in the experimental validation, this study investigated the precision and recall of agricultural product safety identification models by comparing the P-R curves under various loss functions, demonstrating the effectiveness of the proposed method. Further analyses were conducted on the impact of the number of subregional divisions on the identification model, and the superiority of multi-feature fusion methods over single-feature methods was assessed. Additionally, the significant advantages of fog computing in reducing latency were showcased. These findings are not only significant in the field of agricultural product safety but also broadly applicable to other domains. Through the application of multi-level feature fusion, the accuracy and reliability of complex problem recognition were increased, while the decentralized AIoT processing system offered an effective solution for the real-time processing of large-scale data.

Conclusion

This study has introduced a method for identifying agricultural product safety based on multi-level feature fusion. By integrating multiple levels of data features, including images, sounds, and chemical components, the method aims to enhance the accuracy of agricultural product safety identification. Furthermore, a system for processing agricultural safety information based on decentralized AIoT has been constructed. This system, leveraging fog computing and cloud computing, enhances data processing speed and reduces latency, ensuring the timely and effective handling of large-scale agricultural data.

Experimentally, the proposed identification method’s effectiveness was demonstrated by drawing P-R curves for agricultural product safety identification models trained with different loss functions and studying the models’ precision and recall rates. The impact of subregion division quantity on the agricultural product safety identification model was analyzed, revealing that a moderate subregion division enhances the accuracy of identification results, whereas excessive division degrades it. A comparison of the results of different identification models for various types of agricultural product safety detection items proved the superiority of the multi-feature fusion method over single-feature methods. The significant advantages of fog computing in reducing latency were showcased through the transmission rate graphs of fog devices and the delay comparison graphs of sensing devices sending data to fog devices and the cloud.

It can be concluded that the recognition method based on multi-level feature fusion proposed in this paper effectively improves the accuracy and reliability of agricultural product safety identification. The agricultural safety information processing system based on decentralized AIoT demonstrated its clear advantages in processing speed and latency reduction during experiments, ensuring real-time and efficient operation. Overall, through in-depth research and a series of experimental validations, this paper provides an efficient and reliable technological solution for the automatic identification and information processing of agricultural product safety. These research outcomes contribute positively to the development of smart agriculture and the enhancement of agricultural product quality and safety.

The impact of this study is multifaceted. Firstly, it has advanced the development of agricultural safety information management technologies, enhancing the efficiency and accuracy of safety inspections for agricultural products. Secondly, applying multi-feature fusion and decentralized processing techniques offers novel approaches and methods for addressing complex issues in other domains. Future research could further optimize multi-level feature fusion algorithms, explore more effective decentralized processing strategies, and apply these methodologies to a broader range of fields, thereby promoting the application and development of intelligent technologies across various industries.

Author Contributions

Conceptualization, A.L. and W.P.; methodology, A.L.; software, A.L.; validation, A.L., W.P., and L.S.; formal analysis, A.L.; investigation, A.L.; resources, A.L.; data curation, A.L.; writing—original draft preparation, A.L.; writing—review and editing, A.L.; visualization, A.L.; supervision, A.L.; project administration, L.H.; funding acquisition, A.L., L.S., W.P., and L.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

REFERENCES

Agarwal, R., Choudhury, T., Ahuja, N.J., & Sarkar, T. (2023). IndianFoodNet: Detecting Indian food items using deep learning. International Journal of Computational Methods and Experimental Measurements, 11(4), 221–232. 10.18280/ijcmem.110403

Bondre, S., & Patil, D. (2023). Recent advances in agricultural disease image recognition technologies: a review. Concurrency and Computation: Practice and Experience, 35(9), e7644. 10.1002/cpe.7644

Devi, S.K.C., Ashraf, M.S., Raghuveer, K., Prasad, T.G., & Sharma, S. (2023). Recognition of weed detection in smart agricultural farms using image processing with IoT. In 2023 7th International Conference on I-SMAC (IoT in Social, Mobile, Analytics, and Cloud)(I-SMAC). Kirtipur, Nepal, 94–99. 10.1109/I-SMAC58438.2023.10290187

Dewi, C., Christanto, H.J., & Dai, G.W. (2023). Automated identification of insect pests: a deep transfer learning approach using ResNet. Acadlore Transactions on AI and Machine Learning, 2(4), 194–203. 10.56578/ataiml020402

Ge, H., Ji, X., Jiang, Y., Wu, X., Li, L., Jia, Z., et al. (2024). Tri-band and high FOM THz metamaterial absorber for food/agricultural safety sensing applications. Optics Communications, 554, 130173. 10.1016/j.optcom.2023.130173

Gou, T., Pei, D.F., Liang, R.C., She, L., Yang, J., Ma, Q.L., et al. (2023). Analysis of nitrogen and phosphorus emissions from agricultural non-point sources based on improved output coefficient method. China Environmental Science, 43(12), 6539–6550.

Jin, H.S., & Xie, Y.N. (2023). A review of research on the impact of digitalization on agricultural supply chain security. Digitalization and Management Innovation, 367, 378–385. 10.3233/FAIA230038

Kim, S.S., & Kim, S. (2022). Impact and prospect of the fourth industrial revolution in food safety: mini-review. Food Science and Biotechnology, 31(4), 399–406. 10.1007/s10068-022-01047-6

Li, W.Q., Han, X.X., Lin, Z.B., & Rahman, A. (2024). Enhanced pest and disease detection in agriculture using deep learning-enabled drones. Acadlore Transactions on AI and Machine Learning, 3(1), 1–10. 10.56578/ataiml030101

Lin, J. (2021). Agricultural product quality and safety supervision mechanism based on information traceability system. Frontier Computing, 595–601. 10.1007/978-981-16-0115-6_66

Liu, L., & Luo, G. (2022). Quality inspection method of agricultural products based on image processing. Traitement du Signal, 39(6), 2033–2040. 10.18280/ts.390615

Liu, S., Lei, M., Xu, L., Li, J., Sun, C., & Yang, X. (2022). Development of reliable traceability system for agricultural products quality and safety based on blockchain. Transactions of the Chinese Society for Agricultural Machinery, 53(6), 327–337. 10.6041/j.issn.1000-1298.2022.06.035

Liu, Z., Yu, X., & Choi, A.Y. (2023). Data visualization for designing F2C IoT agricultural system. In Third International Conference on Artificial Intelligence, Virtual Reality, and Visualization (AIVRV 2023). (pp. 488–492). 10.1117/12.3011451

Lü, X.R., Lü, M.M., Yang, B.Y., Gu, Z.C., Liu, X.Q., Li, J.R., et al. (2023). Studies on screening of lactic acid bacteria with high production of AI-2 and its biological properties. Journal of Chinese Institute of Food Science and Technology, 23(7), 152–160. 10.16429/j.1009-7848.2023.07.016

Mu, W., Kleter, G.A., Bouzembrak, Y., Dupouy, E., Frewer, L.J., Radwan Al Natour, F.N., et al. (2024). Making food systems more resilient to food safety risks by including artificial intelligence, big data, and the internet of things in food safety early warning and emerging risk identification tools. Comprehensive Reviews in Food Science and Food Safety, 23(1), 1–18. 10.1111/1541-4337.13296

Murshed, R., & Uddin, M.R. (2020). Organic farming in Bangladesh: to pursue or not to pursue? An exploratory study based on consumer perception. Organic Farming, 6(1), 1–12. 10.12924/of2020.06010001

Pal, A., Leite, A.C., & From, P.J. (2024). A novel end-to-end vision-based architecture for agricultural human–robot collaboration in fruit picking operations. Robotics and Autonomous Systems, 172, 104567. 10.1016/j.robot.2023.104567

Raman, R., Muramulla, S.M., Ponugoti, S., Dhinakaran, K., Kuchipudi, R., & Robin, C.R. (2023). Penetration of artificial intelligence techniques to enhance the agricultural productivity and the method of farming; opportunities and challenges. In 2023 3rd International Conference on Innovative Practices in Technology and Management (ICIPTM), pp. 1–6. Uttar Pradesh, India. 10.1109/ICIPTM57143.2023.10118045

Safitri, K.I., Abdoellah, O.S., Gunawan, B., Parikesit, T., & Suparman, Y. (2022). Environmental certification schemes based on political ecology: case study on urban agricultural farmers in Bandung Metropolitan Area, Indonesia. Journal of Urban Development and Management, 1(1), 67–75. 10.56578/judm010108

Sunardi, S., Ghulam, I., Istiqomah, N., Fadilah, K., Safitri, K.I., & Abdoellah, O.S. (2023). Environmental sustainability and food safety of the practice of urban agriculture in Great Bandung. International Journal of Sustainable Development and Planning, 18(3), 737–743. 10.18280/ijsdp.180309

Syaharuddin, S., Fatmawati, F., & Suprajitno, H. (2023). Long-term forecasting of crop water requirement with BP-RVM algorithm for food security and harvest risk reduction. International Journal of Safety and Security Engineering, 13(3), 565–575. 10.18280/ijsse.130319

Wang, C.X., & Liang, Y.R. (2023). The supervision game of economically motivated adulteration behavior of fresh agricultural product suppliers. Journal of Industrial & Management Optimization, 19(7), 4893–4909. 10.3934/jimo.2022153

Wang, K. (2017). Design of agricultural product quality safety retrospective supervision system of Jiangsu province. Journal of Physics: Conference Series, 887, 12018. 10.1088/1742-6596/887/1/012018

Wunderlich, S.M. (2021). Food supply chain during a pandemic: changes in food production, food loss, and waste. International Journal of Environmental Impacts, 4(2), 101–112. 10.2495/EI-V4-N2-101-112

Xu, N., Xu, C., Li, Y.Y., & Chen, J.Y. (2017). Supervision service system of quality and safety of agricultural products on E-commerce: perspectives in terms of “internet+”. Boletin Tecnico/Technical Bulletin, 55(13), 259–265.

Yi, J., Wisuthiphaet, N., Raja, P., Nitin, N., & Earles, J.M. (2023). AI-enabled biosensing for rapid pathogen detection: from liquid food to agricultural water. Water Research, 242, 120258. 10.1016/j.watres.2023.120258

Zhai, X.J., Zheng, L., Ma, G.F., & Lin, H. (2024). Influencing factors of different development stages of green food industry: a system dynamic model. Frontiers in Environmental Science, 11, 1319687. 10.3389/fenvs.2023.1319687

Zhang, H., & Peng, Q. (2022). PSO and K-means-based semantic segmentation toward agricultural products. Future Generation Computer Systems, 126, 82–87. 10.1016/j.future.2021.06.059