Session CSIHTIS

CSIHTIS

Conference
8:30 AM — 10:00 AM HKT
Local
Dec 13 Tue, 7:30 PM — 9:00 PM EST

Semantic Image Synthesis via Location Aware Generative Adversarial Network

Jiawei Xu, Rui Liu, Jing Dong, Pengfei Yi, Wanshu Fan, Dongsheng Zhou

Semantic image synthesis aims to synthesize photorealistic images from given semantic segmentation masks. Most existing models use conditional batch normalization (CBN) to modulate normalized activations with spatially varying modulation parameters, which prevents semantic information from being washed out during normalization. However, the modulation parameters in CBN lack location constraints, so synthesized images lack structural information, and CBN is highly dependent on the batch size. To address these limitations, we propose location aware conditional group normalization (LACGN) and construct a location aware generative adversarial network (LAGAN) based on this method. LACGN learns spatial location aware information in a weakly supervised manner, relying on the current image synthesis process to guide transformations spatially. It allows the synthesized image to retain more structural information and detailed features. At the same time, group normalization (GN) replaces the traditional BN to eliminate the dependence on batch size. Extensive experiments show that LAGAN outperforms other methods.
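
As a rough illustration of the kind of layer the abstract describes, the sketch below implements a spatially adaptive conditional group normalization in PyTorch: a parameter-free GroupNorm whose scale and bias are predicted per pixel from the segmentation mask. All layer sizes and the mask-encoding branch are assumptions for illustration; this is not the authors' LACGN code.

```python
# Minimal sketch of a spatially adaptive conditional group normalization layer
# (illustrative only; not the authors' LACGN implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatiallyAdaptiveGroupNorm(nn.Module):
    def __init__(self, num_channels, num_groups=8, mask_channels=35, hidden=64):
        super().__init__()
        # Parameter-free GN: modulation is supplied by the mask branch below.
        self.gn = nn.GroupNorm(num_groups, num_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(mask_channels, hidden, kernel_size=3, padding=1), nn.ReLU())
        self.to_gamma = nn.Conv2d(hidden, num_channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(hidden, num_channels, kernel_size=3, padding=1)

    def forward(self, x, seg_mask):
        # Resize the one-hot segmentation mask to the feature resolution.
        seg = F.interpolate(seg_mask, size=x.shape[-2:], mode="nearest")
        h = self.shared(seg)
        gamma, beta = self.to_gamma(h), self.to_beta(h)
        # Per-pixel, per-channel modulation of the normalized activations.
        return self.gn(x) * (1 + gamma) + beta

x = torch.randn(2, 128, 32, 32)
mask = torch.randn(2, 35, 256, 256)      # e.g., 35 semantic classes (assumed)
print(SpatiallyAdaptiveGroupNorm(128)(x, mask).shape)
```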

Low-Complexity Code Clone Detection using Graph-based Neural Networks

Hu Liu, Hui Zhao, Changhao Han, Lu Hou

Code clone detection is of great significance for intellectual property protection and software maintenance. Deep learning has been applied in some research and achieved better performance than traditional methods. To adapt to more application scenarios and improve detection efficiency, this paper proposes a low-complexity code clone detection scheme based on a graph neural network. As input to the neural network, code features are represented by abstract syntax trees (ASTs) from which redundant edges are removed. This pruning avoids interference in the message passing of the network and reduces the size of the graph. Then, the graph pairs for code clone detection are fed into a message passing neural network (MPNN). In addition, a gated recurrent unit (GRU) is used to learn the information between graph pairs and avoid explicit graph mapping. After multiple iterations, an attention mechanism reads out the graph vectors, and cosine similarity on the graph vectors yields the code similarity. Experiments on two datasets show that the proposed clone detection scheme removes about 20% of the redundant edges and reduces model weights by 25% and multiply-accumulate operations (MACs) by 16%. Overall, the proposed method effectively reduces the training time of the graph neural network while delivering performance similar to the baseline network.
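
The final similarity step described above (attention read-out of node embeddings into a graph vector, then cosine similarity on the pair) can be sketched as follows; the embedding dimension, random inputs, and the 0.5 decision threshold are illustrative assumptions, not values from the paper.

```python
# Toy sketch of attention read-out plus cosine similarity for a graph pair.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_readout(node_embeddings, attn_vector):
    # node_embeddings: (num_nodes, dim); attn_vector: (dim,)
    scores = softmax(node_embeddings @ attn_vector)
    return scores @ node_embeddings          # weighted sum -> graph vector (dim,)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

rng = np.random.default_rng(0)
attn = rng.normal(size=16)
g1 = attention_readout(rng.normal(size=(12, 16)), attn)   # AST graph of fragment 1
g2 = attention_readout(rng.normal(size=(10, 16)), attn)   # AST graph of fragment 2
sim = cosine_similarity(g1, g2)
print("clone" if sim > 0.5 else "not a clone", sim)       # 0.5 is an assumed threshold
```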

Publishing Weighted Graph with Node Differential Privacy

Xuebin Ma, Ganghong Liu, Aixin Lin

At present, how to protect user privacy and security while publishing user data has become an increasingly important problem. Differential privacy in graph data publishing mainly follows two directions: one publishes statistical characteristics of the graph that satisfy differential privacy, and the other publishes a synthetic graph that satisfies differential privacy. This paper proposes a weighted graph publishing method based on node differential privacy. First, we propose a projection method that constrains node degrees and the number of triangles, reducing the added noise by lowering the sensitivity. Afterward, appropriate statistical characteristics of the weighted graph are selected to form node attributes, which serve as parameters for synthesizing the weighted graph. We then propose a graph publishing method based on node attributes and weights: it synthesizes an initial graph according to the degrees in the node attributes, and then adds or deletes edges of the initial graph according to the number of triangles in the node attributes to obtain the final synthetic graph. Finally, we evaluate the proposed weighted graph publishing method on three datasets. The results show that the proposed method satisfies node differential privacy while maintaining reasonable utility.
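
A minimal sketch of the noise-calibration idea behind such node-DP releases is shown below: after a projection that bounds the node degree by D, the degree sequence can be released with Laplace noise scaled to a projection-bounded sensitivity. The sensitivity bound of 2D used here is a commonly cited bound for degree sequences under node privacy and is an assumption, not necessarily the paper's exact calibration.

```python
# Sketch of the Laplace mechanism with projection-bounded sensitivity.
import numpy as np

def laplace_mechanism(values, sensitivity, epsilon, rng=None):
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return values + rng.laplace(loc=0.0, scale=scale, size=len(values))

degrees = np.array([3, 5, 2, 4, 1], dtype=float)   # degrees after a D-bounded projection
D, epsilon = 5, 1.0
# Assumed bound: removing one node deletes its own entry (<= D) and lowers
# at most D neighbours' entries by 1, so the L1 sensitivity is at most 2*D.
sensitivity = 2 * D
noisy_degrees = laplace_mechanism(degrees, sensitivity, epsilon)
print(noisy_degrees)
```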

SSA and BPNN based Efficient Situation Prediction Model for Cyber Security

Minglong Cheng, Guoqing Jia, Weidong Fang, Zhiwei Gao, Wuxiong Zhang

An effective situation prediction model for cyber security can reveal the activity of future malicious network events in advance, which plays a vital role in cyber security protection. However, traditional models cannot achieve sufficient accuracy when predicting cyber situations. To solve this problem, the initial location information of the sparrow population is optimized and a sparrow search algorithm based on the Tent map is proposed. Then, the BP neural network is optimized using the improved sparrow search algorithm. Finally, a situation prediction model based on the sparrow search algorithm and BP neural network, named T-SSA-BPNN, is proposed. The simulation results show that the convergence speed and global search ability of the prediction model are improved, and it can effectively predict the network security situation with high accuracy.
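
A small sketch of the Tent-map initialization step (the "T" in T-SSA-BPNN) is given below, assuming the standard tent map with parameter 2 is used to spread the initial sparrow positions over the search box; the seed value and bounds are illustrative, not the paper's settings.

```python
# Illustrative Tent-map-based population initialization for a sparrow search algorithm.
import numpy as np

def tent_map_sequence(length, x0=0.37, mu=2.0):
    """Generate a chaotic sequence in (0, 1) with the tent map."""
    seq = np.empty(length)
    x = x0
    for i in range(length):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        if not 0.0 < x < 1.0:      # guard against the degenerate fixed points 0 and 1
            x = 0.37
        seq[i] = x
    return seq

def init_population(pop_size, dim, lower, upper):
    chaos = tent_map_sequence(pop_size * dim).reshape(pop_size, dim)
    return lower + chaos * (upper - lower)     # map chaotic values into the search box

print(init_population(pop_size=5, dim=3, lower=-1.0, upper=1.0))
```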

IA-DD: An SDN Topological Poisoning Attack Defense Scheme Based on Blockchain

Bin Gu, Xingwei Wang, Kaiqi Yang, Yu Wang, and Qiang He

Software defined networking (SDN) has the advantages of centralized control, global visibility, and programmability, but these features also bring new security issues, such as the Topological Poisoning Attack (TPA), in which attackers compromise topology discovery services by stealing host locations or forging link information. Considering the three levels of identity, data packet, and path, this paper designs a chain-based authentication defense scheme. The scheme includes an authentication mechanism, a transaction information storage mechanism, a source IP authentication mechanism, and a smart contract notification mechanism. Received packets are authenticated by a digital signature algorithm, and trusted identity and location information is stored securely. At the same time, an improved block storage structure is designed to avoid data redundancy, and malicious information is handled by smart contract notification and flow rule installation. The experimental results show that the designed defense scheme can effectively defend against TPAs. Compared with the benchmark mechanism, deploying this scheme has less impact on controller performance and on the delay of topology discovery in SDN.

Low-power Robustness Learning Framework for Adversarial Attack on Edges

Bingbing Song, Haiyang Chen, Jiashun Suo, Wei Zhou

Recent works on adversarial attacks uncover the intrinsic vulnerability of neural networks, revealing a critical issue: neural networks are easily misled by adversarial attacks. With the development of edge computing, more and more real-time tasks are deployed on edge devices, and the safety of these neural network-based applications is threatened by adversarial attacks. Therefore, defense techniques against adversarial attacks have great practical value at the edge. In particular, such techniques must respect the deployment conditions on edge devices, such as low power and low time consumption. Unfortunately, very limited research so far considers the security problem under adversarial attack on edges. In this paper, we propose a low-power robust learning framework to deal with adversarial attacks on resource-constrained edge devices. In this framework, we give a rough categorization of approaches to defending against adversarial attacks and show how this edge device-based framework can be used to resist them. Furthermore, we propose a staged ensemble defense strategy in the framework, which achieves better defensive performance than a single defense algorithm. To verify our framework on a real application, we build a Drone Search and Rescue System (DSRS) to examine the performance of the proposed framework. The results indicate that our framework achieves outstanding performance in all aspects, including robustness, time, and power consumption. Multiple evaluations of the low-power robust learning framework also provide guidance for choosing the optimal security configuration in power-constrained, performance-critical environments.

Session Chair

Xiaoliang Wang, Hunan University of Science and Technology, China

Session NMIC

NMIC

Conference
8:30 AM — 10:00 AM HKT
Local
Dec 13 Tue, 7:30 PM — 9:00 PM EST

Interval Matching Algorithm for Task Scheduling with Time Varying Resource Constraints

Weiguan Li, Jialun Li, Yujie Long, Weigang Wu

The co-location of online services and offline tasks has become very popular in data centers, as it can largely improve resource utilization. Scheduling co-located offline tasks is challenging due to interference with online services. Existing co-location scheduling algorithms try to find the best combination of different workloads to avoid performance interference and maximize data center utilization, but few of them take time-varying resource constraints into account. We propose a heuristic algorithm, the interval matching scheduling algorithm, based on the idea that the time series of available resources and task schedules can be regarded as interval endpoints. The proposed scheduling algorithm makes decisions based on a scoring method that calculates the matching degree between tasks and the changing resource series. The experimental results show that the proposed algorithm consistently achieves better performance under different parameter settings.

Privacy protection scheme based on certificateless in VSNs environment

Yanfei Lu, Suzhen Cao, Yi Guo, Qizhi He, Zixuan Fang, Junjian Yan

In vehicular social networks (VSNs), cloud service providers can provide many convenient services for vehicles to ensure driving safety. However, the wireless communication between entities in VSNs is vulnerable to attacks, which can lead to vehicle privacy leakage. To solve this problem, a certificateless searchable encryption scheme with privacy-preserving features that can resist keyword guessing attacks is proposed for the VSNs application environment. The scheme combines proxy re-encryption technology, which enables vehicle users to obtain accurate request results without disclosing private information to cloud service providers, and preserves the privacy of vehicle identities and the confidentiality of transmitted data. In addition, the authorization process of the data service provider not only ensures the security of the data but also allows user authorization to be revoked. Based on the computational Diffie-Hellman problem and the discrete logarithm problem, the scheme is proved to resist internal and external keyword guessing attacks under the random oracle model, and the experimental results show that the scheme performs better in terms of computational and communication efficiency.

Measurement and Analysis: Does QUIC Outperform TCP?

Xiang Qin, Xiaochou Chen, Wenju Huang, Yi Xie, Yixi Zhang

Many web applications adopt the Transmission Control Protocol (TCP) as the underlying protocol, where congestion control (CC) plays a vital role in reliable transmission. However, some TCP mechanisms cannot cope with the requirements of new applications and ever-increasing network traffic. Therefore, Quick UDP Internet Connections (QUIC), a promising UDP-based alternative, has been proposed; it introduces new features to improve transmission performance and is compatible with existing CC algorithms. This paper conducts extensive experiments in both testbed and real-world environments to measure and compare QUIC and TCP regarding communication quality, compatibility, fairness, and user experience, while considering the impact of three typical CC algorithms: NewReno, Cubic, and BBR. QUIC outperforms TCP in most experiments for web browsing and online video, but its performance is susceptible to CC algorithms and network conditions. For example, with the Cubic algorithm, QUIC with the 0-RTT feature enabled can decrease webpage loading time by 37.11% compared with TCP. Using the BBR algorithm, both QUIC and TCP achieve high throughput, slight fluctuation, and few delayed events when playing online videos. TCP with BBR provides better fairness, while QUIC with BBR is more robust in networks with high latency or packet loss.

Binary Neural Network with P4 on Programmable Data Plane

Junming Luo, Waixi Liu, Miaoquan Tan, Haosen Chen

Deploying machine learning (ML) on the programmable data plane (PDP) has some unique advantages, such as quickly responding to network dynamics. However, compared to the demands of ML, the PDP has limited operations, computing, and memory resources. Thus, some works only deploy simple traditional ML approaches (e.g., decision trees, K-means) on the PDP, and their performance is not satisfactory. In this article, we propose P4-BNN (Binary Neural Network based on P4), which uses P4 to execute a binary neural network entirely on the PDP. P4-BNN addresses several challenges. First, in order to replace multiplication with shifts and simple integer arithmetic, P4-BNN proposes a tailor-made data structure. Second, we use an equivalent-replacement programming method to support the matrix operations required by ML. Third, we propose a normalization method on the PDP that does not require floating-point operations. Fourth, by using registers to store the model parameters, the weights of the P4-BNN model can be updated without interrupting the running P4 program. Finally, as two use cases, we deploy P4-BNN on a Netronome SmartNIC (Agilio CX 2x10GbE) for flow classification and anomaly detection. Compared to N3IC, decision tree, and K-means, the accuracy of P4-BNN improves by 1.7%, 3.4%, and 47.7%, respectively.
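
The arithmetic trick a binary neural network relies on, and which makes it a good fit for match-action hardware that lacks multipliers, can be sketched as follows: with weights and activations in {-1, +1}, a dot product reduces to XNOR plus popcount. This is a generic BNN illustration, not the P4-BNN data structure or P4 code.

```python
# Generic illustration of binary-network arithmetic: XNOR + popcount dot product.
import numpy as np

def binarize(x):
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bnn_dot_xnor(w_bits, a_bits, n):
    # w_bits, a_bits: integers whose n lowest bits encode +1 as 1 and -1 as 0.
    xnor = ~(w_bits ^ a_bits) & ((1 << n) - 1)
    popcount = bin(xnor).count("1")
    return 2 * popcount - n                  # equals the {-1, +1} dot product

rng = np.random.default_rng(1)
w, a = binarize(rng.normal(size=8)), binarize(rng.normal(size=8))
w_bits = int("".join("1" if v == 1 else "0" for v in w), 2)
a_bits = int("".join("1" if v == 1 else "0" for v in a), 2)
assert bnn_dot_xnor(w_bits, a_bits, 8) == int(np.dot(w, a))
print(bnn_dot_xnor(w_bits, a_bits, 8))
```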

Semi-Supervised Learning Based on Reference Model for Low-resource TTS

Xulong Zhang, Jianzong Wang, Ning Cheng, Jing Xiao

Most previous neural text-to-speech (TTS) methods are based on supervised learning, which means they depend on large training datasets and struggle to achieve comparable performance under low-resource conditions. To address this issue, we propose a semi-supervised learning method for neural TTS in which labeled target data is limited, which also alleviates the exposure bias problem of previous autoregressive models. Specifically, we pre-train a reference model based on FastSpeech2 on abundant source data and fine-tune it on a limited target dataset. Meanwhile, pseudo labels generated by the original reference model are used to further guide the fine-tuned model's training, providing a regularization effect and reducing overfitting during training on the limited target data. Experimental results show that our proposed semi-supervised learning scheme with limited target data significantly improves voice quality on test data, achieving naturalness and robustness in speech synthesis.

RTSS: Robust Tuple Space Search for Packet Classification

Jiayao Wang, Ziling Wei, Baosheng Wang, Shuhui Chen, and Jincheng Zhong

Packet classification plays an essential role in network functions. Traditional classification algorithms assume that all field values are available and valid. However, this premise is increasingly challenged as networks become more complex. Scenarios with missing fields pose great challenges to packet classifiers. Existing approaches can only enumerate all possible situations in such cases, increasing the workload exponentially. The RFC algorithm was shown to help with this issue in our previous work, but its space efficiency is poor. In this paper, we propose a novel classification scheme based on Tuple Space Search (TSS) to deal with missing fields. We redesign the hash calculation method and introduce a new data structure to recover field-missing packets. Experiments show that RTSS reduces memory consumption and construction time by several orders of magnitude. At the same time, RTSS delivers better classification performance than previous work while supporting fast updates.
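
For background, classic Tuple Space Search, which RTSS builds on, can be sketched as below: rules are grouped by their prefix-length tuple, each group is a hash table keyed on the masked header fields, and a lookup probes every tuple. The two-field IPv4 simplification is an assumption; RTSS's redesigned hashing for missing fields is not shown here.

```python
# Simplified two-field Tuple Space Search (background sketch only).
import ipaddress
from collections import defaultdict

def masked(addr, plen):
    return int(ipaddress.ip_address(addr)) >> (32 - plen) if plen else 0

class TupleSpace:
    def __init__(self):
        self.tuples = defaultdict(dict)     # (src_len, dst_len) -> {key: rule}

    def insert(self, src_pfx, dst_pfx, rule):
        (s, slen), (d, dlen) = src_pfx.split("/"), dst_pfx.split("/")
        slen, dlen = int(slen), int(dlen)
        key = (masked(s, slen), masked(d, dlen))
        self.tuples[(slen, dlen)][key] = rule

    def lookup(self, src, dst):
        matches = []
        for (slen, dlen), table in self.tuples.items():   # probe every tuple
            key = (masked(src, slen), masked(dst, dlen))
            if key in table:
                matches.append(table[key])
        return matches                       # priority resolution omitted

ts = TupleSpace()
ts.insert("10.0.0.0/8", "192.168.1.0/24", "rule-A")
ts.insert("10.1.0.0/16", "0.0.0.0/0", "rule-B")
print(ts.lookup("10.1.2.3", "192.168.1.7"))
```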

A Novel Reliability Evaluation Method Based on Improved Importance Algorithm for SCADA

Zhu Zhaoqian, Chen Yenan, Su Bo, Li Linsen

Cyber attacks have become a major factor affecting the security and reliability of power SCADA (supervisory control and data acquisition) systems in recent years. An effective SCADA reliability evaluation algorithm is urgently needed to predict potential risks. However, existing evaluation algorithms suffer from inefficient sampling and low accuracy of the evaluation indexes. In this paper, we propose an equal dispersion algorithm and an importance sampling algorithm and combine them into an improved importance algorithm. The experimental results show that the improved importance algorithm not only improves sampling efficiency but also improves the accuracy of the evaluation indexes. The evaluation indexes accurately quantify the impact of six widely used cyber attacks on power SCADA reliability.

Evolutionary Discrete Optimization Inspired by Zero-Sum Game Theory

Ruiran Yu, Haoliang Wen, Yuhan Xu

In a zero-sum game, two players compete against each other, and the gain of one player is the loss of the other. Generative adversarial networks (GANs) are models built on this idea. Evolutionary algorithms (EAs) are popular and highly robust methods for solving combinatorial optimization problems. However, in the middle stages of evolution, EAs usually suffer from a serious lack of population diversity, which often causes them to fall into local optima. This paper presents a cooperative evolutionary algorithm driven by policy-based GANs (PGAN-CEA) for solving traveling salesman problems (TSPs). PGAN-CEA adopts a policy-gradient method from reinforcement learning to train GANs to generate discrete data. First, GANs are used to construct an initial population. Then, a cooperative evolution strategy driven by GANs is used in the middle of the evolution. Further, a dual-population mechanism is utilized to assist the co-evolution of the dominant solutions generated by GANs and the solutions from the EA population. Test cases from TSPLIB and the Mona Lisa Problems are used to evaluate the proposed algorithm. Compared with other GAN-based algorithms, the proposed algorithm mitigates the problem of local convergence and achieves improvements on several performance indicators.

Research on data collection and energy supplement mechanism in WRSN based on UAV: a method to maximize energy supplement efficiency

Wen Xie, Xiangyu Bai, Yaru Ren

Energy has always been a key bottleneck restricting the large-scale deployment and long-term operation of wireless sensor networks (WSNs). Wireless rechargeable sensor networks (WRSNs) can effectively alleviate the energy-constrained problem of sensor nodes. However, due to the limited service capabilities of mobile charging equipment, efficiently replenishing energy and maintaining the long-term operation of the network remains very challenging. In this paper, we propose a UAV-based energy replenishment mechanism for WRSNs, which aims to maximize the replenished energy benefit of the network relative to the energy expended by UAVs. The paper first constructs the network model and defines the optimization problem of maximizing energy replenishment efficiency. Then, in order to improve the energy replenishment efficiency of the WRSN, the node clustering, anchor node selection, and flight path planning problems are studied, and a data collection and energy replenishment mechanism is proposed for these problems. The experimental results show that the proposed scheme can effectively improve the energy replenishment efficiency of the system, prolong the network lifetime, and balance the energy consumption of the network.

Session Chair

Lei Yang, South China University of Technology, China

Session AI2OT

AI2OT

Conference
10:30 AM — 12:00 PM HKT
Local
Dec 13 Tue, 9:30 PM — 11:00 PM EST

Image Classification of Alzheimer's Disease based on Residual Bilinear and Attentive Models

Xue Lin, Yushui Geng, Jing Zhao, Wenfeng Jiang, Zhen Yan

Due to the high noise and low resolution of medical images, it is difficult to extract local features, which affects the accuracy of image diagnosis and classification. To exploit the discriminative features of local image regions, we propose a network model that combines an improved residual bilinear structure with an attention mechanism. First, within the ResNeXt model, the original residual unit is split and convolved to extract multi-scale image features, and ResNeXt replaces the VGGNet backbone of the bilinear model. Then, channel nonlinear attention is used to obtain expressive features during feature extraction, and spatial attention is employed for weighted region selection to achieve BAP (Bilinear Attention Pooling) fusion. Finally, classification is performed with an SVM classifier, and our model is tested on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The results show that the model has better accuracy and robustness than other models in AD diagnosis and classification.

Analysing and Evaluating Complementarity of Multi-Modality Data Fusion in AD diagnosis

Zhaodong Chen, Fengtao Nan, Yun Yang, Jiayu Wang, Po Yang

The clinical progression of Alzheimer's disease (AD) cannot be accurately evaluated by single-modality data alone. Multi-modal data are effective for the diagnosis of AD, and clarifying the complementarity between modalities is crucial for assessing each stage of AD. Few studies have specifically explored the complementarity between different modalities, owing to the lack of fully aligned and paired multi-modal data and the limited sample size. However, collecting a full set of aligned and paired data is expensive or even impractical, and the limited number of samples poses a great challenge to model robustness. In this paper, different machine learning (ML) methods were used to explore data complementarity between T1-weighted magnetic resonance imaging (MRI), cerebrospinal fluid (CSF), and fluorodeoxyglucose positron emission tomography (FDG-PET) modalities. The multi-modal data of the Alzheimer's Disease Neuroimaging Initiative (ADNI) and self-extracted neuroimaging data were explored experimentally. Experiments show that there is obvious complementarity between MRI and CSF: by fusing MRI and CSF data, three binary classification tasks using multi-modal fusion data achieve varying degrees of improvement. At the same time, we explored the important features of the multi-modal fusion data through SHapley Additive exPlanations (SHAP) and found that most of the important features are supported by the relevant literature.

MetaSpeech: Speech Effects Switch Along with Environment for Metaverse

Xulong Zhang, Jianzong Wang, Ning Cheng, Jing Xiao

Metaverse expands the physical world to a new dimension, and the physical environment and Metaverse environment can be directly connected and entered. Voice is an indispensable communication medium in both the real world and the Metaverse, and fusing the voice with environment effects is important for user immersion in the Metaverse. In this paper, we propose a voice conversion based method, named MetaSpeech, for converting speech to a target environment effect. It introduces an environment effect module containing an effect extractor to extract the environment information and an effect encoder to encode the environment effect condition, in which a gradient reversal layer is used for adversarial training to keep the speech content and speaker information while disentangling the environmental effects. Experimental results on the public LJSpeech dataset with four environment effects show that the proposed model can complete the specified environment effect conversion and outperforms baseline methods from the voice conversion task.
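
The gradient reversal layer mentioned above is a standard adversarial-training trick; a minimal PyTorch sketch is shown below (a generic GRL, not the MetaSpeech implementation), where the gradient from an assumed environment-effect classifier head is flipped before it reaches the shared features.

```python
# Minimal gradient reversal layer (GRL) sketch in PyTorch.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)            # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Flip (and scale) the gradient flowing back into the encoder.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

feat = torch.randn(4, 8, requires_grad=True)                 # shared features
env_logits = torch.nn.Linear(8, 3)(grad_reverse(feat))       # hypothetical effect classifier
env_logits.sum().backward()
print(feat.grad[0])     # gradients arrive sign-flipped at the features
```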

Potential Game Based Connectivity Preservation for UAV-Assisted Public Safety Rescue

Jingjing Wang, Yanjing Sun, Bowen Wang, Toshimitsu Ushio

In public safety networks (PSNs), an important issue is how to recover reliable data transmission when some base stations (BSs) are damaged by natural disasters. An unmanned aerial vehicle (UAV) can serve as a temporary relay station to transmit data from ground users (GUs) to an undamaged BS. In this paper, we consider a swarm of UAVs and introduce the following three roles for its management: (1) relay UAVs (RUs) sacrifice their coverage capabilities to preserve network connectivity; (2) air BS UAVs (BUs) perform a covering task; (3) standby UAVs (SUs) remain inactive. We then formulate an optimal coverage problem in which we assign a role to each UAV to maximize the number of GUs that can transmit their data to the undamaged BS. First, we transform the problem into an exact potential game (EPG) whose utility function is designed based on the number of GUs served by each UAV. Next, we propose a learning algorithm to obtain an optimal role assignment and utilize the Fiedler eigenvalue, which represents the algebraic connectivity of the swarm's network topology, to update the strategy selection probabilities. Finally, simulations show that the proposed algorithm strikes a better balance between coverage and connectivity preservation than other benchmark algorithms.
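
The Fiedler eigenvalue used to track connectivity is simply the second-smallest eigenvalue of the graph Laplacian; a small NumPy sketch with an assumed toy 4-node topology is given below.

```python
# Fiedler eigenvalue (algebraic connectivity) of a graph, from its adjacency matrix.
import numpy as np

def fiedler_eigenvalue(adjacency):
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigenvalues = np.sort(np.linalg.eigvalsh(laplacian))
    return eigenvalues[1]              # second-smallest eigenvalue

# 4-node path graph: connected, so the value is positive.
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
print(fiedler_eigenvalue(path))        # ~0.586

# Removing an edge disconnects the graph and drives the value to ~0.
path[2, 3] = path[3, 2] = 0
print(fiedler_eigenvalue(path))        # ~0.0
```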

Three-dimensional Key Distribution Scheme in Wireless Sensor Networks

Wanqing Wu, Ziyang Zhang, Yahua Dong, Caixia Ma

One of the major security challenges faced by wireless sensor networks (WSNs) is establishing secure links for communication between neighboring sensor nodes. Finding a balance between connectivity, overhead, and resilience against node capture attacks is difficult due to the resource limits of sensor nodes. We propose a new three-dimensional key distribution scheme for wireless sensor networks based on polynomial and random key distribution schemes. In the proposed scheme, the key pool is divided into two sections: key pool 1 is generated by the polynomial pool, and key pool 2 is generated by key pool 1. A three-dimensional key distribution model is constructed using the key pool and the coefficients of the polynomials. It can enhance network resilience while maintaining good connectivity by dynamically adjusting the degree of the polynomials and the size of the polynomial pool. This paper analyzes the performance of the proposed scheme and compares it with other schemes. The results show that the proposed scheme has better local connectivity and resilience against node capture attacks than previous schemes.
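
Although the paper's exact construction is not reproduced here, the polynomial building block behind such schemes is typically a Blundo-style symmetric bivariate polynomial over a prime field; the toy sketch below shows how two nodes derive the same pairwise key from their shares. The prime size, degree, and coefficients are illustrative assumptions.

```python
# Toy Blundo-style symmetric bivariate polynomial for pairwise key establishment.
P = 2**13 - 1          # small prime modulus (toy size, not secure)

def make_symmetric_poly(coeffs):
    """coeffs[a][b] must equal coeffs[b][a]; f(x,y) = sum coeffs[a][b] x^a y^b mod P."""
    def f(x, y):
        return sum(coeffs[a][b] * pow(x, a, P) * pow(y, b, P)
                   for a in range(len(coeffs)) for b in range(len(coeffs))) % P
    return f

def node_share(f, node_id):
    """What a node stores: the univariate polynomial g(y) = f(node_id, y)."""
    return lambda y: f(node_id, y)

coeffs = [[7, 3, 11],
          [3, 5, 2],
          [11, 2, 9]]                 # symmetric matrix -> symmetric polynomial
f = make_symmetric_poly(coeffs)
g_alice, g_bob = node_share(f, 17), node_share(f, 42)
assert g_alice(42) == g_bob(17)       # both sides derive the same pairwise key
print(g_alice(42))
```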

Application identification under Multi-Service Integration Platform

Ziyang Wu, Yi Xie

The multi-service integration platform (MIP) is becoming a new way for mobile applications to provide services, such as Facebook's ChatBot and WeChat's applets. However, there are currently no dedicated means or filtering strategies to supervise the services running on various MIPs. Existing solutions for program detection and traffic analysis are not suitable for MIP scenarios, which creates favorable conditions for disseminating illegal content through MIPs. To address this issue, we propose a new approach to identify mobile applications running on MIP platforms. The proposed approach uses IP flows to reconstruct the data units of both the transport and application layers. In this way, we can capture the data transmission behavior of multiple protocol layers and obtain richer semantic features for application identification. Then, multi-kernel convolutional neural networks (CNNs) and long short-term memory (LSTM) neural networks are employed to extract and aggregate multi-scale features from the perspectives of both protocol layers and time series. Finally, the fused features generated by the models are used to identify the category of the pending applications with a classifier composed of a fully connected neural network. We validate the proposed approach on three real datasets. The experimental results show that the proposed approach outperforms most existing benchmark methods.

UAV Visual Navigation System based on Digital Twin

Jingsi Miao, Ping Zhang

In recent years, many scholars have carried out research on UAV digital twins from various aspects. However, the research is still at a preliminary stage, and problems remain, such as incomplete data and model fusion, poor transferability of algorithm policies, weak coupling between virtual and physical space, and a lack of extensibility to new application scenarios. To explore the application potential of digital twin technology in the UAV field, this paper introduces the digital twin into UAV monocular visual navigation and proposes a digital twin (DT)-based framework integrated with deep neural networks, which consists of a physical space, a virtual space, a twin data layer, and an application layer. Next, a multi-modal decision model in the application layer, consisting of a perception model and a control model with decoupling methods, is built to explore the global optimal solution and control the behavior of the UAV. Finally, the digital twin system and the decision model are verified in the virtual space and the physical space, respectively. The results show that the UAV visual navigation system based on the digital twin reduces the cost of application, algorithm development, and deployment, and improves the transferability of the navigation policy. Compared with the baselines, the proposed decision model has the best navigation performance in both virtual and physical space. Compared with the navigation policy without the decoupling method, the performance index is improved by about 8.6% in virtual space and 2.7 times in physical space.

Applications of Reinforcement Learning in Virtual Network Function Placement: A Survey

Cong Zhou, Baokang Zhao, Jing Tao, Baosheng Wang

In recent years, network function virtualization has attracted massive attention in academia and industry, and the virtual network function placement problem is one of its key challenges. Reinforcement learning has been widely applied in network control and decision making, as it can automatically learn the optimal policy from environment feedback. This paper presents a new survey of the virtual network function placement problem based on reinforcement learning. We describe in detail how reinforcement learning can be used to solve virtual network function placement in different scenarios, and then briefly outline prospects for further research.

Session Chair

Xuan Liu, Hunan University, China

Session ECAISS

ECAISS

Conference
10:30 AM — 12:00 PM HKT
Local
Dec 13 Tue, 9:30 PM — 11:00 PM EST

An Adaptive Data Rate-Based Task Offloading Scheme in Vehicular Networks

Chaofan Chen, Wendi Nie, Yaoxin Duan, Victor C.S. Lee, Kai Liu and Huamin Li

As an important application of the Internet of Things (IoT), the Internet of Vehicles (IoV) can provide various valuable services that may require computation-intensive tasks under strict time constraints. Most traditional vehicles cannot process all these computation-intensive tasks locally because of limited computing resources. Therefore, task offloading has been proposed, which allows vehicles to offload computation-intensive tasks to Mobile Edge Computing (MEC) servers. With the rise and development of intelligent vehicles, the concept of Vehicle as a Resource (VaaR) has been proposed as an important supplement to MEC, enabling intelligent vehicles to share computation resources with nearby vehicles. Most studies on VaaR assume that the data rate for offloading tasks from one vehicle to another is fixed. However, due to the high mobility of vehicles, the communication distance between vehicles may change over time, resulting in a changing data rate. Therefore, it is challenging to make offloading decisions (i.e., selecting proper vehicles as computation resource providers) while considering an adaptive data rate. In this paper, we study task offloading in vehicular networks under adaptive data rates. We propose an Adaptive Data Rate-based Offloading algorithm, named ADRO, which not only achieves minimum energy consumption while satisfying time constraints but also takes the adaptive data rate into consideration. Comprehensive experiments demonstrate the efficiency of the ADRO algorithm.

HCA Operator: A Hybrid Cloud Auto-scaling Tooling for Microservice Workloads

Yuyang Wang, Fan Zhang, Samee U. Khan

Elastic cloud platforms, e.g., Kubernetes, enable dynamically scaling computing resources in or out in accordance with workload fluctuations. As the cloud evolves to hybrid, where public and private clouds co-exist as the underlying substrate, autoscaling applications within a hybrid cloud is no longer straightforward. The difficulty lies in many aspects, e.g., global load balancing, hybrid-cloud monitoring and alerting, storage sharing and replication, and security and privacy. However, it pays off significantly if hybrid-cloud autoscaling is supported and boundless computing resources can be utilized per request. In this paper, we design the Hybrid Cloud Autoscaler Operator (HCA Operator), a customized Kubernetes Controller that leverages Kubernetes Custom Resources to auto-scale microservice applications across hybrid clouds. The HCA Operator load balances across hybrid clouds, monitors metrics, and autoscales to destination clusters in other clouds. We discuss the implementation details and perform experiments in a hybrid cloud environment. The experimental results demonstrate that, even when the workload changes quickly, our Operator can properly auto-scale the microservice applications across the hybrid cloud to meet the Service Level Agreement (SLA) requirements.

Multi-UAV Joint Observation, Communication, and Policy in MEC

Shuai Liu, Yuebin Bai

The use of multi-agent reinforcement learning methods (MARL) in mobile edge computing (MEC) environments enables multiple unmanned aerial vehicles (multi-UAV) to intelligently provide relay or computational offloading services to mission targets. A UAV's observation range and the communication methods between UAVs have a significant impact on the multi-UAV collaboration strategy. For this purpose, we study a dynamic control method for the multi-UAV observation range and the optimal inter-UAV communication method. Our approach is to design a multi-UAV joint observation, communication, policy, and service collaboration protocol and to study how to optimize this protocol. We propose an expert-guided deep reinforcement learning framework to optimize it, in which each UAV's optimal radar observation range and inter-UAV communication method are learned using an information-entropy value decomposition method. Through our observation and communication method, the UAVs are able to obtain the most valuable information. Experiments demonstrate that our method can improve MEC service coverage by 9.38%-21.88% compared to the classical MARL algorithm. Our method improves radar observation efficiency and communication efficiency by 3.05%-38.9% and 8.55%-22.03%, respectively. The results also show that this method improves multi-UAV energy utilization.

Federated Learning for Heterogeneous Mobile Edge Device: A Client Selection Game

Tongfei Liu, Hui Wang, Maode Ma

In the federated learning (FL) paradigm, edge devices use local datasets to participate in machine learning model training, and servers are responsible for aggregating and maintaining public models. FL can not only solve the bandwidth limitation problem of centralized training, but also protect data privacy. However, it is difficult for heterogeneous edge devices to obtain optimal learning performance due to limited computing and communication resources. Specifically, in each round of the FL global aggregation process, clients in a "strong group" have a greater chance to contribute their local training results, while clients in a "weak group" have a lower opportunity to participate, which negatively affects the final training result. In this paper, we consider a federated learning multi-client selection (FL-MCS) problem, which is NP-hard. To find the optimal solution, we model the FL global aggregation process for client participation as a potential game, in which each client selfishly decides whether to participate in the FL global aggregation process based on its efforts and rewards. Via the potential game, we prove that the competition among clients eventually reaches a stationary state, i.e., a Nash equilibrium. We also design a distributed heuristic FL multi-client selection algorithm to achieve the maximum reward for the client in a finite number of iterations. Extensive numerical experiments prove the effectiveness of the algorithm.

Learning-based Computation Offloading in LEO Satellite Networks

Juan Luo, Quanwei Fu, Fan Li, Ying Qiao, Ruoyu Xiao

Satellite networks can provide network coverage in remote areas without terrestrial infrastructure and offer ground users an offloading option. However, using satellite networks to provide computation offloading services requires consideration not only of the dynamics of the satellite system, but also of how ground users offload tasks and how the limited resources of the satellite are allocated. Therefore, in this paper, we propose a computation offloading algorithm based on the optimal allocation of satellite resources (CO-SROA) and formulate an objective function to minimize the delay and energy consumption of ground users processing computation tasks. The algorithm decomposes the optimization problem into two subproblems. One is the optimal allocation of satellite resources given the offloading decisions in a single time slot, which is solved with the Lagrange multiplier method. The other is the long-term user offloading decision problem, which is formulated as a Markov decision process and solved with a deep reinforcement learning (DRL) algorithm. Simulation results show that CO-SROA achieves better long-term returns in terms of delay and energy consumption.

The Short-Term Passenger Flow Prediction Method for Urban Rail Transit Based on CNN-LSTM with Attention Mechanism

Yang Liu, Chen Mu, Pingping Zhou

This paper studies the short-term passenger flow prediction of urban rail transit for optimally adjusting the real-time departure of rail trains. To address the problem that traditional deep learning models do not sufficiently consider spatial-temporal information, a short-term passenger flow prediction model for urban rail transit based on CNN-LSTM with an attention mechanism is proposed. Firstly, the stations are divided into seven categories according to the significant differences in daily passenger flow among urban rail stations, so as to further analyze the distribution patterns of daily inbound and outbound passenger flow for different categories of stations. Secondly, the short-sequence feature abstraction ability of the CNN is used to extract the spatial characteristics of historical passenger flow in each time period for the different categories of stations. Finally, the attention mechanism assigns different weights to the extracted feature information, and temporal characteristics are obtained by the LSTM from the comprehensive short-term sequence to realize the short-term passenger flow prediction of urban rail transit. Experiments show that the prediction model achieves encouraging prediction performance and accuracy.

Linguistic-Enhanced Transformer with CTC Embedding for Speech Recognition

Xulong Zhang, Jianzong Wang, Ning Cheng, Mengyuan Zhao, Zhiyong Zhang, Jing Xiao

The recent emergence of the joint CTC-Attention model shows significant improvement in automatic speech recognition (ASR). The improvement largely lies in the modeling of linguistic information by the decoder. The decoder, jointly optimized with an acoustic encoder, learns the language model from ground-truth sequences in an auto-regressive manner during training. However, the training corpus of the decoder is limited to the speech transcriptions, which is far less than the corpus needed to train an acceptable language model. This leads to poor robustness of the decoder. To alleviate this problem, we propose the linguistic-enhanced transformer, which introduces refined CTC information to the decoder during training so that the decoder can be more robust. Our experiments on the AISHELL-1 speech corpus show that the character error rate (CER) is relatively reduced by up to 7%. We also find that in the joint CTC-Attention ASR model, the decoder is more sensitive to linguistic information than to acoustic information.

Viewing Flowers at their Most Beautiful Moments: A Crowd Sensing Application

Weifeng Xiong, Fangwan Huang, Zhiyong Yu, Xianwei Guo, Binwei Lin, Qiquan Cai

To assist people's itinerary planning for viewing flowers, it is very meaningful to visualize the different stages of specific flowers with high spatio-temporal resolution. To achieve this goal, this paper realized a crowdsensing application called Hanami, which means "flower viewing". The implementation of this application contains three modules: data sensing, flower classification, and visualization. In particular, the flower classification module utilized a residual network to identify the types and stages of flowers from crowdsensed photos. For the visualization module, a bilayer clustering view method was designed to aggregate the points on the map, which can be further clustered by different features of flowers. Experimental evaluation showed that Hanami can help users view flowers at their most beautiful moments.

Lightweight YOLOV4 algorithm for underwater whale detection

Lili He, Defeng Du, Hongtao Bai, Kai Wang

At present, it is difficult to implement on-line detection on underwater equipment due to the large model size of biological detection algorithms. In this paper, a lightweight YOLOv4 whale detection algorithm suitable for embedded equipment is proposed. MobileNetv3 is used as the backbone network of YOLOv4 to reduce the network scale, and the neck and head networks are optimized with depthwise separable convolutions to achieve lightweight feature extraction. Experimental results on a whale dataset show that, compared with the original YOLOv4 algorithm, the number of network parameters is reduced by 87.2%, and the detection speed is improved by 1.65 times on GPU-only and 12.56 times on CPU-only platforms. The method presented in this paper can, in principle, enable on-line underwater whale detection on embedded devices.
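
The depthwise separable convolution used to lighten the neck and head can be sketched as below in PyTorch: a per-channel 3x3 convolution followed by a 1x1 pointwise convolution, which needs far fewer weights than a standard 3x3 convolution. Channel sizes are example values, not the paper's configuration.

```python
# Generic depthwise separable convolution block (illustration only).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU6(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(64, 128)
standard = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(block), "vs", count(standard))     # far fewer weights than a standard 3x3 conv
print(block(torch.randn(1, 64, 40, 40)).shape)
```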

Anti-jamming Channel Allocation in UAV-Enabled Edge Computing: A Stackelberg Game Approach

Yuan Xinwang, Xie Zhidong, Tan Xin

In edge computing, terminal devices can offload intensive computing tasks to edge servers to obtain high-performance computing services with low latency. Introducing unmanned aerial vehicles (UAVs) as relays can improve transmission efficiency. However, as the scale of UAV deployments continues to expand, resource management issues need to be solved urgently. In this paper, we study the anti-jamming channel assignment problem in the presence of multiple intelligent jammers for UAV-enabled edge computing scenarios. Each user is an offload-receive (or offload-relay-receive) pair for a computing task. Considering the mutual interference between users and the malicious jamming from smart jammers, we construct a multi-layer Stackelberg game model in which the jammer is the leader and the user is the follower. The users and jammers each consider their own utility, reach an equilibrium state through iterations, and then obtain the optimal channel access scheme. Finally, we analyze the convergence and superiority of the proposed algorithm through simulation and performance comparison.

Session Chair

Pengfei Wang, Dalian University of Technology, China

Session UEIoT

UEIoT

Conference
10:30 AM — 12:00 PM HKT
Local
Dec 13 Tue, 9:30 PM — 11:00 PM EST

Intelligent rush repair of unmanned distribution network based on deep reinforcement learning

Yue Zhao, Yang Chuan, Shi Pu, Xuwen Han, Shiyu Xia, Yanqi Xie

With the continuous expansion of China's power grid and the year-by-year increase in the number of power users, it is necessary to ensure a normal power supply for users. When a power failure occurs, it is particularly critical whether the emergency repair task can be completed quickly and scientifically. In this paper, an intelligent repair model for an "unmanned" distribution network based on deep reinforcement learning is proposed, which adopts speech recognition technology and a deep reinforcement learning algorithm to make the whole system unmanned. Users can transmit emergency repair information to the voice recognition module of the power supply emergency repair center by voice, SMS, or IMS, and the module extracts the repair location and the number of repair tasks. Then, the resource allocation module learns the emergency repair resource allocation strategy online, realizing intelligent control of emergency repair in the distribution network. To verify the proposed algorithm, it is compared with two typical allocation strategies under the same settings. The experimental results demonstrate that the method based on deep reinforcement learning performs better in terms of emergency repair delay and intelligent emergency repair of the power supply in distribution networks.

Energy Minimization for IRS-assisted UAV-empowered Wireless Communications

Yangzhe Liao, Jiaying Liu, Yi Han, Quan Yu, Qingsong Ai, Quan Liu, Xiaojun Zhai

Non-terrestrial wireless communications have evolved into a technology enabler for seamless connectivity and ubiquitous computing services in beyond fifth-generation (B5G) and sixth-generation (6G) networks, aiming to provision reliable and energy-efficient communications among aerial platforms and ground mobile users. This paper considers intelligent reflecting surface (IRS)-assisted unmanned aerial vehicle (UAV)-empowered wireless communication, which exploits both the high mobility of the UAV and the passive beamforming gain brought by the IRS. The energy minimization of a rotary-wing UAV is formulated by jointly considering numerous quality of service (QoS) constraints with intricately coupled variables. To tackle this challenging problem, a heuristic algorithm is proposed. First, we decouple it into several subproblems. We then investigate the offloading decisions of Internet of Things (IoT) devices via the proposed enhanced differential evolution algorithm, use a minorization-maximization algorithm (MMA) to optimize the IRS phase shift vector, and apply an ant colony optimization (ACO) algorithm to optimize the UAV flight route indicator matrix. Numerical results validate the effectiveness of the proposed algorithm and show that the proposed solution can remarkably decrease the UAV flight distance while improving network energy efficiency in comparison with numerous advanced algorithms.

Trajectory Planning Model for Vehicle Platoons at Off-ramp

Xinyu Chen, Chen Mu, Yu Kong

Traffic delay and congestion frequently occur in off-ramp areas, but few studies focus on generating microscopic trajectory plans for individual connected and automated vehicles (CAVs) to instruct their real-time acceleration/deceleration rates and lane-changing maneuvers at off-ramps. This paper proposes a trajectory planning model for vehicle platoons at off-ramps based on mixed-integer nonlinear programming (MINLP). The model can generate system-level optimal trajectories for CAVs to pass the off-ramp safely and efficiently. Aiming at optimizing overall traffic efficiency, we develop a series of constraints to model the behavior of vehicles running in the trajectory area. In particular, the impact of lane-changing is carefully modeled to further improve efficiency and safety by coordinating vehicles on the main road into several platoons and simultaneously generating trajectories for all vehicles in a platoon. The experiments show that the model can improve the traffic efficiency of pure CAV flow at the off-ramp while maintaining safety, and effectively relieves traffic congestion at the off-ramp.

Space-Air-Ground-Aqua Integrated Intelligent Network: Vision, and Potential Techniques

Jinhui Huang, Junsong Yin, Shuangshuang Wang

The space-air-ground-aqua integrated network will become the basic form of the next-generation network. Various technologies, including artificial intelligence, big data, cloud computing, and edge computing, will be deeply integrated into the network to form an integrated intelligent network of land, sea, air, and space. In this article, we present the vision for the development of the space-air-ground-aqua integrated intelligent network and describe its main features. We put forward a network architecture that integrates sub-networks of space, air, land, and sea while emphasizing network interconnection, resource sharing, cooperative control, and service reuse. We also discuss several promising technologies, including THz communication, free-space optical communication, software-defined networking, network function virtualization, edge intelligence, digital twins, physical-layer security, and blockchain.

Fast Detection of Multi-Direction Remote Sensing Ship Object Based on Scale Space Pyramid

Ziying Song, Li Wang, Guoxin Zhang, Caiyan Jia, Jiangfeng Bi, Haiyue Wei, Yongchao Xia, Chao Zhang, Lijun Zhao

Ships in remote sensing images are usually oriented in arbitrary directions, small in size, and densely arranged; as a result, existing object detection algorithms cannot detect ships quickly and accurately. To solve these problems, a lightweight object detection network for fast ship detection is proposed. The network is composed of a backbone network, a four-scale fusion network, and a rotation branch. First, a lightweight network unit, S-LeanNet, is designed and used to build a low-computation yet accurate backbone network. Then, a four-scale feature fusion module is designed to generate a four-scale feature pyramid, which contains more features such as ship shape and texture and is also conducive to detecting small ships. Finally, a novel rotation branch module is designed, using the balanced L1 loss function and R-NMS for post-processing, to realize precise positioning and regression of the rotated bounding box in one step. Experimental results on the DOTA remote sensing dataset show that, compared with the recent SCRDet detector, our method increases precision by 1.1% and runs 8 times faster, which meets the requirements of fast ship detection.

Fire Detection Scheme in Tunnels Based on Multi-source Information Fusion

Tianyu Zhang, Yi Liu, Weidong Fang, Gentuan Jia, Yunzhou Qiu

Multi-sensor information fusion is an effective method for fire detection. However, in underground road scenarios, due to the closed environment and dispersed sensor layout, common fire detection data fusion methods suffer from poor detection timeliness and low accuracy. Therefore, this paper proposes a new fire detection scheme combining a BP neural network with D-S evidence theory, and further puts forward an evidence correction method based on exponential entropy. We compare this method with common methods, and the experimental results show that the new method detects fires earliest in both open-fire and smoldering-fire scenarios on underground roads, improving the real-time performance and accuracy of fire detection.
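
The D-S fusion step can be illustrated with Dempster's combination rule over the frame {fire, no_fire}; the sketch below combines two sensors' basic probability assignments. The mass values are made-up examples, and the paper's exponential-entropy correction is not reproduced here.

```python
# Dempster's rule of combination over a two-hypothesis frame of discernment.
from itertools import product

FRAME = frozenset({"fire", "no_fire"})

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                     # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Evidence from a smoke sensor and a temperature sensor (illustrative masses).
m_smoke = {frozenset({"fire"}): 0.6, frozenset({"no_fire"}): 0.1, FRAME: 0.3}
m_temp  = {frozenset({"fire"}): 0.5, frozenset({"no_fire"}): 0.2, FRAME: 0.3}
fused = dempster_combine(m_smoke, m_temp)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```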

Improving Imbalanced Text Classification with Dynamic Curriculum Learning

Xulong Zhang, Jianzong Wang, Ning Cheng, Jing Xiao

Recent advances in pre-trained language models have improved performance on text classification tasks. However, little attention has been paid to the priority scheduling strategy for samples during training. Humans acquire knowledge gradually, from easy to complex concepts, and the difficulty of the same material can also vary significantly across learning stages. Inspired by these insights, we propose a novel self-paced dynamic curriculum learning (SPDCL) method for imbalanced text classification, which evaluates sample difficulty by both linguistic characteristics and model capacity. Meanwhile, rather than using static curriculum learning as in existing research, SPDCL can reorder and resample training data by a difficulty criterion with an adaptive easy-to-hard pace. Extensive experiments on several classification tasks show the effectiveness of the SPDCL strategy, especially on imbalanced datasets.

Intelligent optimization and allocation strategy of emergency repair resources based on big data

Jiangdong Liu, Yue Zhao, Bo Wang, Jie Gao, Li Xu and Ying Ma

An important task in managing production and power supply is emergency distribution network maintenance, and the effectiveness of emergency repair command can be increased in part by optimizing emergency repair resources. This article examines the power outage data of public transformers in the power user information collection system. The relationship between outage data and actual line faults is established using data mining techniques. The genuine fault detection model, which is built upon this comprehensive outage data of public transformers, can quickly and correctly identify the true fault, making it easy for maintenance staff to act promptly. At the same time, on the basis of analyzing the allocation level of emergency resources and estimating the working time of emergency repair teams, a general model is proposed to optimize the estimation of emergency working time. On this basis, emergency maintenance resources are allocated for repair, and the allocation of emergency repair resources is further optimized.

Session Chair

Ying Ma, Harbin Institute of Technology, China

Session Opening

Opening Ceremony

Conference
2:00 PM — 2:30 PM HKT
Local
Dec 14 Wed, 1:00 AM — 1:30 AM EST

Opening Ceremony

This talk does not have an abstract.

Session Chair

Weigang Wu, Sun Yat-sen University, China

Session Keynote-1

Keynote Speech 1: Understanding and Pushing the Sensing Limits of WiFi/4G/5G Signals

Conference
2:30 PM — 3:15 PM HKT
Local
Dec 14 Wed, 1:30 AM — 2:15 AM EST

Keynote Talk 1: Understanding and Pushing the Sensing Limits of WiFi/4G/5G Signals

Daqing Zhang

WiFi/4G/5G based wireless sensing has attracted a lot of attention from both academia and industry in the last decade. However, fundamental questions such as the sensing limit, sensing boundary, and sensing quality of WiFi/4G/5G signals have not been answered, leaving wireless sensing system design and deployment to trial and error. In this talk, I will first introduce the Fresnel zone model as a generic theoretical basis for device-free and contactless human sensing with WiFi/4G/5G signals. Then we propose to define and deploy the Sensing Signal to Noise Ratio (SSNR) as a new metric to reveal the sensing limit, sensing boundary, and sensing signal quality of WiFi/4G/5G-based human sensing systems. We further apply the SSNR metric to show how we can push the sensing range of a commodity WiFi-based human respiration monitoring system to more than 30 meters by exploiting the time, space, and frequency diversity of WiFi signals.

Session Chair

Weigang Wu, Sun Yat-sen University, China

Session Keynote-2

Keynote Speech 2: Toward Virtualized Edge Computing

Conference
3:30 PM — 4:15 PM HKT
Local
Dec 14 Wed, 2:30 AM — 3:15 AM EST

Keynote Talk 2: Toward Virtualized Edge Computing

Falko Dressler

We will discuss the challenges and opportunities of the connected cars vision in relation to the need for distributed data management solutions ranging from the vehicle to the mobile edge and to the data centers. Vehicular networking solutions have been investigated for more than a decade but recent standardization efforts just enable a broad use of this technology to build large scale Intelligent Transportation Systems (ITS). Modern 5G networks promise to provide all means for communication in this domain, particularly when integrating Mobile Edge Computing (MEC). However, it turns out that despite the many advantages, it is unlikely that such services will be provided with sufficient coverage. As a novel concept, vehicle micro clouds have been proposed that bridge the gap between fully distributed vehicular networks based on short range device to device communication and 5G-based infrastructure. Using selected application examples, we assess the advantages of such systems. We conclude the talk by shedding light on future virtual edge computing concepts that will enable edge computing even considering minimal deployment and coverage of 5G MEC.

Session Chair

Deze Zeng, China University of Geosciences, China

Session T1S1

Radio Networks

Conference
4:30 PM — 6:00 PM HKT
Local
Dec 14 Wed, 3:30 AM — 5:00 AM EST

Human Occlusion in Ultra-wideband Ranging: What Can the Radio Do for You?

Vu Anh Minh Le, Matteo Trobinger, Davide Vecchia, Gian Pietro Picco

0
Applications of ultra-wideband (UWB) for distance estimation (ranging) and localization often involve users wearing tags. Unfortunately, the human body causes significant signal attenuation, reducing ranging accuracy. This specific case of non-line-of-sight (NLOS) condition has received little attention in the literature. Further, state-of-the-art techniques tackling generic NLOS are often based on machine learning, limiting their exploitation on embedded devices. We pursue an alternative approach and show that the features offered by the UWB transceiver, largely neglected by the literature, can be directly exploited to reliably detect human occlusions and optimize ranging accordingly. We base our findings on an extensive experimental campaign exploring many radio, system, and deployment dimensions in two environments, resulting in practical guidelines immediately available to the designers of UWB-based systems.
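
The paper's premise is that diagnostics already exposed by the UWB transceiver can flag body occlusion without machine learning. A common proxy in the UWB literature, used here purely as an assumed illustration rather than the authors' exact rule, is the gap between the estimated total received power and the first-path power: under line of sight the first path carries most of the energy, while an occluded direct path makes the gap grow.

```python
def is_occluded(rx_power_dbm: float, first_path_power_dbm: float,
                threshold_db: float = 6.0) -> bool:
    """Heuristic LOS/NLOS flag from radio diagnostics (illustrative only).

    rx_power_dbm:         estimated total received signal power reported by the radio.
    first_path_power_dbm: estimated power of the first (direct) path.
    threshold_db:         empirical decision threshold; must be calibrated per deployment.
    """
    power_gap_db = rx_power_dbm - first_path_power_dbm
    return power_gap_db > threshold_db

# A 10 dB gap between total and first-path power suggests the direct path is
# attenuated, e.g. by a human body between the two devices.
print(is_occluded(rx_power_dbm=-78.0, first_path_power_dbm=-88.0))  # True
```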

Deep Reinforcement Learning Based Radio Resource Selection Approach for C-V2X Mode 4 in Cooperative Perception Scenario

Chenhua Wei, Xiaojun Tan, Hui Zhang

0
In recent years, vehicles have been equipped with multiple sensors to enable assisted driving and even autonomous driving. However, due to the physical characteristics of the sensors, there are numerous shortcomings in the perception of the surrounding environment by a single vehicle. The development of vehicle-to-everything technology enables vehicles to extend their sensing range or enhance the reliability of perception by exchanging sensor data via vehicle-to-vehicle communication, which is called cooperative perception. In cellular vehicle-to-everything Mode 4, vehicles use the sensing-based semi-persistent scheduling scheme to select radio resources autonomously before transmission. However, this scheme is hardly adaptable to the cooperative perception scenario due to the time-sensitivity of cooperative perception and the impact caused by the position of the perception information. In this paper, we model the cooperative perception scenario and the communication between vehicles, and then formulate the optimization objective considering the characteristics of cooperative perception. Finally, we propose a multi-agent deep reinforcement learning based resource selection algorithm to tackle this problem and demonstrate its effectiveness through simulations.
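
The abstract does not detail the learning architecture, so the fragment below only sketches the per-agent decision step of a generic DQN-style resource selector: each vehicle scores the candidate resources it has sensed and picks one epsilon-greedily. The state encoding, the Q-network, and the masking rule are placeholders, not the paper's design.

```python
import random
import numpy as np

def select_resource(q_values: np.ndarray, sensed_busy: np.ndarray,
                    epsilon: float = 0.1) -> int:
    """Pick a radio resource index for one agent (vehicle).

    q_values:    learned scores for each candidate resource (placeholder network output).
    sensed_busy: boolean mask of resources the SPS sensing window marked as occupied.
    epsilon:     exploration rate for epsilon-greedy selection.
    """
    candidates = np.flatnonzero(~sensed_busy)                 # resources sensed as free
    if candidates.size == 0:                                  # all busy: fall back to any resource
        candidates = np.arange(q_values.size)
    if random.random() < epsilon:                             # explore
        return int(random.choice(list(candidates)))
    return int(candidates[np.argmax(q_values[candidates])])   # exploit

q = np.random.randn(20)            # toy Q-values for 20 candidate resources
busy = np.random.rand(20) < 0.3    # toy sensing result
print(select_resource(q, busy))
```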

A Quality-Aware Rendezvous Framework for Cognitive Radio Networks

Hai Liu, Lu Yu, Chung Keung Poon, Zhiyong Lin, Yiu-Wing Leung, Xiaowen Chu

0
In cognitive radio networks, rendezvous is a fundamental operation by which cognitive users establish communication links. Most existing works were devoted to shortening the time-to-rendezvous (TTR) but paid little attention to the qualities of the channels on which rendezvous is achieved. In fact, channel qualities, such as resistance to primary users' activities, have a great effect on the rendezvous operation. If users achieve a rendezvous on a low-quality channel, the communication link is unstable and the communication performance is poor. In this case, re-rendezvous is required, which results in considerable communication overhead and a large latency. In this paper, we first show that actual TTRs of existing rendezvous solutions increase by 65.40-104.38% if channel qualities are not perfect. Then we propose a Quality-Aware Rendezvous Framework (QARF) that can be applied to any existing rendezvous algorithm to achieve rendezvous on high-quality channels. The basic idea of QARF is to expand the set of available channels by selectively duplicating high-quality channels. We prove that QARF can reduce the expected TTR of any rendezvous algorithm when the expanded ratio θ is smaller than the threshold (−3 + √(1 + 4(μ/σ)²))/2, where μ and σ are, respectively, the mean and the standard deviation of the channel qualities. We further prove that QARF can always reduce the expected TTR of the Random algorithm by a factor of 1 + (σ/μ)². Extensive experiments are conducted and the results show that QARF can significantly reduce the TTRs of existing rendezvous algorithms by 10.50-51.05% when channel qualities are taken into account.
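
To make the result for the Random algorithm concrete, the toy computation below compares the expected TTR on the original channel set with that on a set expanded by quality-proportional duplication; under that assumed duplication rule the reduction factor works out to 1 + (σ/μ)². The proportional rule is an illustration and not necessarily QARF's exact expansion strategy.

```python
import numpy as np

rng = np.random.default_rng(0)
qualities = rng.uniform(0.2, 1.0, size=30)      # toy channel qualities
n = qualities.size
mu, sigma = qualities.mean(), qualities.std()

def expected_ttr(weights: np.ndarray) -> float:
    """Expected slots until two users, each picking a channel i.i.d. with the
    given weights every slot, land on the same physical channel: 1 / sum(p_i^2)."""
    p = weights / weights.sum()
    return 1.0 / np.sum(p ** 2)

ttr_plain = expected_ttr(np.ones(n))            # uniform Random algorithm: n slots
ttr_expanded = expected_ttr(qualities)          # quality-proportional duplication
print(f"plain Random E[TTR]       = {ttr_plain:.2f}")
print(f"quality-weighted E[TTR]   = {ttr_expanded:.2f}")
print(f"measured reduction factor = {ttr_plain / ttr_expanded:.3f}")
print(f"1 + (sigma/mu)^2          = {1 + (sigma / mu) ** 2:.3f}")
```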

Rendezvous Delay-Aware Multi-Hop Routing Protocol for Cognitive Radio Networks

Zengqi Zhang, Sheng Sun, Min Liu, Zhongcheng Li, Qiuping Zhang

0
In cognitive radio networks (CRNs), due to the external interference from primary users, secondary users (SUs) cannot reserve a common control channel (CCC). Hence, it is essential to consider the impact of channel rendezvous on the end-to-end delay in multi-hop CRNs. For this reason, we propose a High Probabilistic Transmission Efficiency Multi-hop Routing (HPTEMR) protocol without utilizing a CCC. In HPTEMR, we design an efficient waiting channel hopping sequence to achieve fast channel rendezvous between neighborhood SUs. We then propose a novel link metric, i.e., transmission efficiency, which characterizes the transmission distance and channel-rendezvous delay. Based on the link metric, a sender SU transmits data packets to the receiver SU with the highest probability that data packets can be forwarded to the destination SU with the shortest end-to-end delay. Evaluation results verify the effectiveness of HPTEMR and show its superiority in end-to-end delay and ratio of effective packets.

Session Chair

Pietro Tedeschi, Technology Innovation Institute, UAE

Session T2S1

Federated Learning and Edge Computing

Conference
4:30 PM — 6:00 PM HKT
Local
Dec 14 Wed, 3:30 AM — 5:00 AM EST

Fine-grained Cloud Edge Collaborative Dynamic Task Scheduling Based on DNN Layer-Partitioning

Xilong Wang, Xin Li, Ning Wang and Xiaolin Qin

0
Edge computing provides an opportunity to improve the quality of service (QoS) of Artificial Intelligence (AI) apps in Internet of Things (IoT) scenarios. Deploying Deep Neural Network (DNN) models on edge nodes is an important way to improve the QoS of intelligent apps. However, the DNN execution time affects the QoS of apps significantly, and due to the limited and dynamic edge resources and sudden loads on edge nodes, it is hard to guarantee DNN execution efficiency. In this paper, we conduct a fine-grained decomposition of DNN tasks and propose a Cloud Edge Collaborative Dynamic Task Scheduling mechanism based on a DNN layer-partitioning technique. The approach realizes collaborative computing of DNN models between cloud and edge and improves the execution efficiency of DNN models, which guarantees the QoS of AI apps. Through simulation experiments, compared with the existing task scheduling mechanism and AI app deployment mode, we show that the proposed cloud edge collaborative dynamic task scheduling mechanism can effectively reduce the average service response time in the edge intelligent system, so as to improve the overall QoS of the apps in the system. Meanwhile, the task scheduling mechanism designed in this paper makes it possible for more complex intelligent models to run in a resource-constrained edge environment.
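
A hedged sketch of the kind of split-point decision involved in layer partitioning: given per-layer runtime estimates on the edge and in the cloud, plus the time to upload each candidate split's intermediate tensor over the current link, pick the split that minimizes end-to-end latency. The latency model, the exhaustive search, and the numbers are placeholders, not the paper's mechanism.

```python
def best_partition(edge_ms, cloud_ms, data_kb, uplink_kbps):
    """Pick the split index k minimizing end-to-end latency.

    Layers 0..k-1 run on the edge, layers k.. run in the cloud.
    edge_ms[i] / cloud_ms[i]: profiled runtime of layer i on each side (ms).
    data_kb[k]: size of the tensor uploaded when splitting at k
                (data_kb[0] is the raw input, data_kb[i] the output of layer i-1).
    uplink_kbps: current edge-to-cloud bandwidth.
    """
    best_k, best_latency = 0, float("inf")
    for k in range(len(edge_ms) + 1):
        upload_ms = data_kb[k] * 8.0 / uplink_kbps * 1000.0
        latency = sum(edge_ms[:k]) + upload_ms + sum(cloud_ms[k:])
        if latency < best_latency:
            best_k, best_latency = k, latency
    return best_k, best_latency

# Toy 4-layer model: the edge is slower per layer and early feature maps are large,
# so an intermediate split point wins over all-edge or all-cloud execution.
edge_ms = [40.0, 60.0, 90.0, 35.0]
cloud_ms = [2.0, 3.0, 4.0, 1.5]
data_kb = [600.0, 300.0, 80.0, 20.0, 4.0]   # raw input plus the four layer outputs
print(best_partition(edge_ms, cloud_ms, data_kb, uplink_kbps=20000.0))
```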

Edge-assisted Federated Learning in Vehicular Networks

G. La Bruna, C. Risma Carletti, R. Rusca, C. Casetti, C. F. Chiasserini, M. Giordanino and R. Tola

0
Given the plethora of sensors with which vehicles are equipped, today's automated vehicles already generate large amounts of data, and this is expected to increase in the case of autonomous vehicles, to enable data-driven solutions for vehicle control, safety and comfort, as well as to effectively implement convenience applications. It is expected that a crucial role in processing such data will be played by machine learning models, which, however, require substantial computing and energy resources for their training. In this paper, we address the use of cooperative learning solutions to train a Neural Network (NN) model while keeping data local to each vehicle involved in the training process. In particular, we focus on Federated Learning (FL) and explore how this cooperative learning scheme can be applied in an urban scenario where several cars, supported by a server located at the edge of the network, collaborate to train a NN model. To this end, we consider an LSTM model for trajectory prediction, a task that is an essential component of many safety and convenience vehicular applications, and investigate the performance of FL as the number of vehicles contributing to the learning process, and the data set they own, vary. To do so, we leverage realistic mobility traces of a large city and the FLOWER FL platform.
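
The training loop in studies like this is typically standard federated averaging; the snippet below shows the server-side aggregation step in framework-agnostic NumPy, under the assumption of plain FedAvg (the abstract does not specify the aggregation rule, and the FLOWER platform's API is deliberately not reproduced here), weighting each vehicle's LSTM parameters by the number of trajectory samples it contributed.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (one FedAvg round).

    client_weights: list (one entry per vehicle) of lists of numpy arrays with
                    identical shapes, e.g. the LSTM's parameter tensors.
    client_sizes:   number of local trajectory samples each vehicle trained on.
    """
    total = float(sum(client_sizes))
    averaged = []
    for p in range(len(client_weights[0])):
        acc = np.zeros_like(client_weights[0][p], dtype=np.float64)
        for weights, size in zip(client_weights, client_sizes):
            acc += weights[p] * (size / total)
        averaged.append(acc)
    return averaged

# Toy example: three vehicles, a "model" of two parameter tensors each.
clients = [[np.full((2, 2), v), np.full((3,), v)] for v in (1.0, 2.0, 4.0)]
print(fedavg(clients, client_sizes=[10, 20, 10])[0])   # weighted mean: 2.25 everywhere
```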

CFedPer: Clustered Federated Learning with Two-Stages Optimization for Personalization

Zhipeng Gao, Yan Yang, Chen Zhao, Zijia Mo

1
Federated learning (FL) is a privacy-preserving distributed learning paradigm in which clients cooperate with each other to train a global model. It is becoming progressively prevalent with the rapid development of edge devices. A critical challenge in federated learning is the data heterogeneity among clients, which prevents the global model generated by standard federated learning from being adapted to all clients. To tackle this problem, we propose CFedPer for personalized FL, which generates a personalized model for each cluster after clustering to address the deficiency of standard federated learning. Our algorithm is organized into two optimization phases. The pre-start phase clusters clients with our proposed similarity-based clustering model using a distribution vector and a similarity matrix. In the in-training phase, we split the neural network into a base layer and a personalization layer and propose a novel optimization objective with a regularization term for the personalization layer to achieve a balance between personalization and generalization, preventing over-personalization. Extensive experiments on various datasets and data distributions indicate that the performance of our algorithm is superior to the existing algorithms in terms of average local accuracy and variance among clients.
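
The in-training objective can be made concrete with a small sketch: each client's local loss adds a regularization term pulling the personalization layer toward its cluster's reference to avoid over-personalization. The quadratic penalty and the flattened-parameter view below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def local_loss(task_loss: float,
               personal_params: np.ndarray,
               cluster_reference: np.ndarray,
               lam: float = 0.1) -> float:
    """Client objective = task loss + regularizer on the personalization layer.

    personal_params:   flattened personalization-layer weights of this client.
    cluster_reference: the cluster-level reference for those weights.
    lam:               balances personalization against generalization.
    """
    reg = lam * float(np.sum((personal_params - cluster_reference) ** 2))
    return task_loss + reg

# A client whose personalization layer drifts far from the cluster reference
# pays a larger penalty, which discourages over-personalization.
ref = np.zeros(4)
print(local_loss(0.8, np.array([0.1, -0.2, 0.0, 0.1]), ref))
print(local_loss(0.8, np.array([2.0, -1.5, 1.0, 0.5]), ref))
```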

Shielding Federated Learning: Mitigating Byzantine Attacks with Less Constraints

Minghui Li, Wei Wan, Jianrong Lu, Shengshan Hu, Junyu Shi, Leo Yu Zhang, Man Zhou, and Yifeng Zheng

0
Federated learning is a newly emerging distributed learning framework that facilitates the collaborative training of a shared global model among distributed participants with their privacy preserved. However, federated learning systems are vulnerable to Byzantine attacks from malicious participants, who can upload carefully crafted local model updates to degrade the quality of the global model and even leave a backdoor. While this problem has received significant attention recently, current defensive schemes heavily rely on various assumptions, such as a fixed Byzantine model, availability of participants' local data, minority attackers, IID data distribution, etc. To relax those constraints, this paper presents Robust-FL, the first prediction-based Byzantine-robust federated learning scheme where none of these assumptions is leveraged. The core idea of Robust-FL is to exploit the historical global models to construct an estimator, based on which the local models are filtered through similarity detection. We then cluster local models to adaptively adjust the acceptable differences between the local models and the estimator such that Byzantine users can be identified. Extensive experiments over different datasets show that our approach achieves the following advantages simultaneously: (i) independence of participants' local data, (ii) tolerance of majority attackers, and (iii) generalization to variable Byzantine models.
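
A minimal sketch of the filtering idea described above: predict the next global model from the history of past global models, then keep only the local updates whose distance to that estimator stays within an adaptively chosen bound. The linear extrapolation and the median-based cut-off are simple stand-ins for the paper's estimator and clustering step.

```python
import numpy as np

def filter_local_models(local_models, global_history, slack: float = 2.0):
    """Keep local models that stay close to a prediction of the next global model.

    local_models:   flattened local model vectors uploaded this round.
    global_history: past flattened global models (at least two).
    slack:          multiplier on the median distance used as the acceptance bound.
    """
    # Linear extrapolation from the last two global models (illustrative estimator).
    estimator = 2.0 * global_history[-1] - global_history[-2]
    dists = np.array([np.linalg.norm(m - estimator) for m in local_models])
    bound = slack * np.median(dists)        # adaptive threshold (stand-in for clustering)
    return [m for m, d in zip(local_models, dists) if d <= bound]

rng = np.random.default_rng(1)
history = [np.zeros(5), np.full(5, 0.1)]
honest = [np.full(5, 0.2) + 0.01 * rng.standard_normal(5) for _ in range(6)]
byzantine = [np.full(5, 5.0) for _ in range(2)]   # crafted malicious updates
kept = filter_local_models(honest + byzantine, history)
print(len(kept))   # the two crafted updates are far from the estimator and get dropped
```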

Incremental Unsupervised Adversarial Domain Adaptation for Federated Learning in IoT Networks

Yan Huang, Mengxuan Du, Haifeng Zheng and Xinxin Feng

0
Federated learning, as an effective machine learning paradigm, can collaboratively train an efficient global model by exchanging network parameters between edge nodes and the cloud server without sacrificing data privacy. Unfortunately, the obtained global model cannot generalize to newly collected unlabeled data, since the unlabeled data collected by different edge devices are diverse. Furthermore, the distributions of the collected labeled data and unlabeled data also differ across edge devices. In this paper, we propose a method named Incremental Unsupervised Adversarial Domain Adaptation (IUADA) for federated learning, which aims to reduce the domain shift between the labeled data and unlabeled data on the edge nodes and enhance the performance of the personalized target domain models based on the local unlabeled data. Finally, we evaluate the performance of the proposed method using three real-world datasets. Extensive experimental results demonstrate that the proposed method effectively solves the problem of domain shift and achieves better performance on unlabeled data in federated learning.
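
The domain-shift reduction belongs to the family of adversarial domain adaptation. The fragment below shows the generic ingredient such methods share, a gradient-reversal step in which a shared feature extractor is trained to fool a domain discriminator; it is written with NumPy for a linear extractor and a logistic discriminator, and illustrates the principle only, not IUADA's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_feat = 8, 4
W = 0.1 * rng.standard_normal((d_feat, d_in))   # shared feature extractor
v, b = np.zeros(d_feat), 0.0                    # domain discriminator (logistic)
lr, lam = 0.05, 0.5                             # lam scales the reversed gradient

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(200):
    domain = step % 2                                            # 0 = labeled, 1 = unlabeled
    x = rng.standard_normal(d_in) + (2.0 if domain else 0.0)     # toy domain shift
    z = W @ x                                                    # shared features
    p = sigmoid(v @ z + b)                                       # predicted domain
    err = p - domain                                             # d(BCE)/d(logit)
    grad_z = err * v                      # gradient w.r.t. features, using the forward-pass v
    v -= lr * err * z                     # discriminator descends the domain loss
    b -= lr * err
    W += lr * lam * np.outer(grad_z, x)   # extractor ascends it (gradient reversal)
```

After enough steps the discriminator can no longer separate the two domains from the extracted features, which is the domain invariance that adversarial adaptation aims for.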

Session Chair

Anna Maria Vegni, Roma Tre University, Italy

Session T3S1

Privacy

Conference
4:30 PM — 6:00 PM HKT
Local
Dec 14 Wed, 3:30 AM — 5:00 AM EST

Approximate Shortest Distance Queries with Advanced Graph Analytics over Large-scale Encrypted Graphs

Yuchuan Luo, Dongsheng Wang, Shaojing Fu, Ming Xu, Yingwen Chen, Kai Huang

0
Understanding graph characteristics is of great importance for graph analytics. Among the many properties, shortest path distance is a fundamental and widely used one. With the advent of cloud computing, it is a natural choice for data owners to host their massive graphs on the cloud and outsource the shortest distance querying service to it. However, the new paradigm brings serious security concerns, as graph data and shortest distance queries may contain sensitive information of data owners and users. In this paper, we propose a novel scheme to support privacy-preserving approximate shortest distance queries with advanced graph analytics over large-scale encrypted graphs, which enables an untrusted cloud to answer shortest distance queries as well as advanced graph metrics (e.g., node centrality) without knowing the content of queries and the sensitive information of outsourced graphs. Compared with the state-of-the-art solutions, our design can support not only efficient and accurate shortest distance approximation, but also advanced graph analytics. We prove that our scheme is secure under the chosen-plaintext model. Experimental results over real-world datasets show that our scheme achieves high approximation accuracy with practical efficiency.
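
Approximate shortest-distance schemes of this kind often build on distance sketches computed from a set of landmark nodes: each vertex stores its distances to a few landmarks, and a query is answered as the minimum, over common landmarks, of the two stored distances summed. The plaintext sketch below illustrates only that approximation step; the paper's encryption layer and centrality analytics are not shown, and the landmark choice is an assumption.

```python
import heapq

def dijkstra(graph, source):
    """Standard Dijkstra over an adjacency dict {u: [(v, w), ...]}."""
    dist, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def build_sketches(graph, landmarks):
    """For every node, store its shortest distance to each landmark."""
    per_landmark = {l: dijkstra(graph, l) for l in landmarks}
    nodes = set(graph) | {v for nbrs in graph.values() for v, _ in nbrs}
    return {u: {l: per_landmark[l].get(u, float("inf")) for l in landmarks} for u in nodes}

def approx_distance(sketches, u, v):
    """Upper-bound estimate: route through the best common landmark."""
    return min(sketches[u][l] + sketches[v][l] for l in sketches[u])

# Toy undirected graph (edges listed in both directions).
g = {"a": [("b", 1), ("c", 4)], "b": [("a", 1), ("c", 1)],
     "c": [("a", 4), ("b", 1), ("d", 2)], "d": [("c", 2)]}
sk = build_sketches(g, landmarks=["b", "c"])
print(approx_distance(sk, "a", "d"))   # 4, which here matches the true shortest distance
```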

Tangless: Optimizing Cost and Transaction Rate in IOTA by Using Lyapunov Optimization Theory

Yinfeng Chen, Yu Guo, and Rongfang Bie

1
IOTA has emerged as a promising and feeless decentralized computing paradigm for developing blockchain-based Internet of Things (IoT) applications with high-performance transaction rates and incremental scalability. To support micropayments of IoT devices, IOTA has abandoned the original blockchain reward mechanism while IOTA nodes voluntarily contribute resources to maintain network stability. However, removing the mining rewards results in the resource cost of generating IOTA ledgers (known as the Tangle) being borne only by IOTA nodes. With the continuous expansion of the IOTA network, cost consumption is increasing. Thus, the inability to effectively reduce the cost of Tangle generation would lead to people being reluctant to dedicate resources to IOTA for maintaining the network robustness. In this paper, for the first time, we present a full-fledged transaction cost optimization scheme for IOTA, called Tangless, which can assist IOTA nodes in effectively reducing the Tangle generation cost while maintaining the strong robustness of the IOTA network. By using our proposed scheme, each IOTA node can effectively formulate the threshold of transaction approval rate in real time, maintaining the stability of the IOTA network with the optimal computational cost. We harness Lyapunov optimization theory to design a computational optimization algorithm for minimizing the total cost of nodes in IOTA. Then, we resort to large deviations theory to devise an optimized transaction rate control algorithm to further eliminate orphan Tangles that waste computational costs. Comprehensive theoretical analysis and simulation experiments confirm the effectiveness and practicability of our proposed scheme.
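
Controllers built on Lyapunov drift-plus-penalty optimization share a common shape, sketched below: keep a virtual queue of accumulated constraint violation and, in each slot, pick the control action minimizing V x (cost) + queue x (violation), where V trades cost against stability. The cost and violation models here are placeholders and far simpler than the paper's algorithms for IOTA nodes.

```python
def drift_plus_penalty_choice(queue, actions, cost, violation, V=2.0):
    """Pick the action minimizing V*cost(a) + queue*violation(a) for this slot.

    queue:     current virtual-queue backlog (accumulated constraint violation).
    actions:   candidate controls, e.g. candidate transaction-approval thresholds.
    cost:      per-slot resource cost of an action (placeholder model).
    violation: how far the action falls short of the stability constraint this slot.
    """
    return min(actions, key=lambda a: V * cost(a) + queue * violation(a))

def update_queue(queue, violation_value):
    """Standard virtual-queue update: Q <- max(Q + violation, 0)."""
    return max(queue + violation_value, 0.0)

# Toy usage: higher thresholds cost more computation but violate stability less.
thresholds = [0.2, 0.4, 0.6, 0.8]
cost = lambda a: a ** 2            # placeholder compute-cost model
violation = lambda a: 0.5 - a      # placeholder requirement (approval rate >= 0.5)
q = 0.0
for _ in range(10):
    a = drift_plus_penalty_choice(q, thresholds, cost, violation)
    q = update_queue(q, violation(a))
print(a, round(q, 2))   # the growing backlog steers later choices toward stability
```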

Cloud-assisted Road Condition Monitoring with Privacy Protection in VANETs

Lemei Da, Yujue Wang, Yong Ding, Bo Qin, Xiaochun Zhou, Hai Liang, Huiyong Wang

0
Vehicular ad hoc network (VANET) is one of the fastest developing technologies in intelligent transportation systems (ITS), which has made great contributions to alleviating traffic congestion and reducing traffic accidents. As it is deployed in an open environment, security and privacy are threatened to a certain extent. Moreover, there are huge data exchanges in high-traffic areas, which require the VANET system to improve computing efficiency while ensuring communication security. To solve the above issues, this paper proposes a cloud-assisted road condition monitoring (RCM) system. The trusted authority (TA) monitors the road conditions with the help of the cloud server. The vehicle collects the road condition information of the road section managed by the roadside unit (RU), and only vehicles authorized by the administrative roadside unit can successfully upload road condition reports to the cloud server. The cloud server divides the road condition reports into different equivalence classes, so that an emergency is reported to the TA when the number of reports in a class exceeds the threshold. Security analysis shows that the proposed RCM system can effectively protect the security and privacy of road condition reports in VANETs.
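
The thresholded reporting step described above is easy to picture with a tiny sketch: reports are grouped into equivalence classes and a class is escalated to the TA once it accumulates enough reports. Using the (road section, condition type) pair as the equivalence class and a fixed threshold are illustrative assumptions, not the paper's exact definitions.

```python
from collections import defaultdict

def emergencies(reports, threshold: int = 3):
    """Return the equivalence classes whose report count reaches the threshold.

    reports: iterable of (road_section_id, condition_type) pairs extracted from
             the uploaded road condition reports (illustrative class definition).
    """
    counts = defaultdict(int)
    for section, condition in reports:
        counts[(section, condition)] += 1
    return [cls for cls, n in counts.items() if n >= threshold]

reports = [("RU-12", "flooding")] * 4 + [("RU-07", "debris")] * 2
print(emergencies(reports))   # only the flooding class reaches the threshold
```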

Towards Event-driven Misbehavior Detection Mechanism in Social Internet of Vehicles

Chenchen Lv, Yue Cao, Lexi Xu, Shitao Zou, Yongdong Zhu and Zhili Sun

0
Due to inadequate management of Vehicular Ad hoc Networks (VANETs), malicious nodes could participate in communications and misbehave, e.g., dropping packets and spreading fake information. Therefore, it is essential to detect the misbehavior of internal attackers that causes network performance degradation (e.g., taking longer to receive messages or reaching destinations with detours). Apart from capturing the dynamic network topology of VANETs, the social relationship among nodes can also be applied as a relatively stable metric to qualify nodes. This paper proposes a misbehavior detection mechanism based on social relationships, from which nodes determine trust for the receiver or transmitter. Based on the proposed mechanism, road traffic control applications can avoid interference from malicious nodes. The construction of social relationships depends on the geographic information reflected by the movement of nodes, including contact frequency and trajectory similarity, since the geographic information can accurately indicate the relevance among nodes. In addition to the social relationship, the proposed mechanism also evaluates the data trust from time and spatial factors to reduce the interference of fake data. Finally, the proposed mechanism integrates data trust and social relationships to enable misbehavior detection decisions. Extensive simulation results show that the proposed mechanism achieves outstanding malicious node detection rates under various proportions of malicious nodes and movement patterns.
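
The trust computation combines the social relationship (contact frequency and trajectory similarity) with data trust; a minimal weighted-combination sketch is shown below. The particular similarity measure, the normalization, and the weights are assumptions for illustration, not the paper's exact formulas.

```python
import math

def trajectory_similarity(traj_a, traj_b):
    """Similarity in (0, 1] from the mean pointwise distance of two equal-length
    trajectories given as (x, y) samples (illustrative measure)."""
    mean_dist = sum(math.dist(p, q) for p, q in zip(traj_a, traj_b)) / len(traj_a)
    return 1.0 / (1.0 + mean_dist)

def node_trust(contact_freq, max_contact_freq, traj_sim, data_trust,
               w_social=0.6, w_data=0.4):
    """Combine social relationship and data trust into one score in [0, 1]."""
    social = 0.5 * (contact_freq / max_contact_freq) + 0.5 * traj_sim
    return w_social * social + w_data * data_trust

a = [(0, 0), (1, 1), (2, 2)]
b = [(0, 0.5), (1, 1.5), (2, 2.5)]
sim = trajectory_similarity(a, b)
print(round(node_trust(contact_freq=12, max_contact_freq=20,
                       traj_sim=sim, data_trust=0.7), 3))
```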

RDP-WGAN: Image Data Privacy Protection based on Rényi Differential Privacy

Xuebin Ma, Ren Yang and Maobo Zheng

0
In recent years, artificial intelligence technology based on image data has been widely used in various industries. Rational analysis and mining of image data can not only promote the development of the technology field but also become a new engine to drive economic development. However, the privacy leakage problem has become more and more serious. To solve the privacy leakage problem of image data, this paper proposes the RDP-WGAN privacy protection framework, which deploys Rényi differential privacy (RDP) protection techniques in the training process of generative adversarial networks to obtain a generative model with differential privacy. This generative model is used to generate an unlimited number of synthetic datasets to complete various data analysis tasks instead of sensitive datasets. Experimental results demonstrate that the RDP-WGAN privacy protection framework provides privacy protection for sensitive image datasets while ensuring the usefulness of the synthetic datasets.
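
Deploying differential privacy in GAN training typically means updating the discriminator with norm-clipped, noised gradients whose cumulative privacy loss an RDP accountant tracks. The NumPy fragment below sketches only that per-update sanitization step under this assumption; the clipping norm, noise multiplier, and the accounting itself are placeholders rather than the paper's configuration.

```python
import numpy as np

def sanitize_gradients(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1,
                       rng=np.random.default_rng(0)):
    """DP-SGD-style gradient sanitization for one discriminator update.

    per_sample_grads: array of shape (batch, n_params), one gradient per example.
    clip_norm:        L2 bound applied to each per-example gradient.
    noise_multiplier: Gaussian noise scale relative to clip_norm; in a full
                      implementation an RDP accountant turns this into a budget.
    """
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noisy_sum = clipped.sum(axis=0) + rng.normal(0.0, noise_multiplier * clip_norm,
                                                 size=per_sample_grads.shape[1])
    return noisy_sum / per_sample_grads.shape[0]   # noisy average gradient

grads = 3.0 * np.random.randn(32, 10)              # toy per-example gradients
print(np.linalg.norm(sanitize_gradients(grads)))
```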

Session Chair

Georgios Kavallieratos, Norwegian University of Science and Technology, Norway
