Technical Sessions

Session T1S2

Ride-sharing and IoT

Conference
11:00 AM — 12:30 PM HKT
Local
Dec 14 Wed, 10:00 PM — 11:30 PM EST

Sample-based Prophet for Online Ride-sharing with Fairness

Baoju Li, En Wang, Funing Yang, Yongjian Yang, Wenbin Liu, Zijie Tian, Junyu Liu and Wanbo Zheng

The prosperity of industrialization urges modern ride-sharing platforms to profit from the efficient management of their resources. Although ride-sharing allows costs to be shared and improves traffic efficiency by making better use of vehicle capacity, handling large volumes of online taxi orders remains an inevitable challenge in current transportation systems, where all drivers must make immediate and irrevocable decisions, in parallel, about whether to accept the current order. Furthermore, to achieve global fairness, it is critical for an algorithm to function from the moment the first order arrives, without any observation stage. In this paper, we formulate this online user selection problem as a prophet inequality for independent and identically distributed random variables drawn from an unknown distribution. We construct a sample set to avoid the observation stage in the online decision process. For the driver-centered ride-sharing scenario, we propose a route scheduling algorithm and a sample-driven algorithm with a guaranteed lower bound that concurrently guide taxi drivers in accepting orders while achieving global fairness. Finally, we conduct extensive evaluations on three real-world datasets. The results verify the effectiveness of the proposed algorithm in improving overall profit, increasing the number of accepted orders, and reducing vehicle idle time under valid ride-sharing constraints.
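The paper's specific algorithm is not reproduced in the abstract, but the core idea of a sample-based prophet inequality can be sketched in a few lines: use an offline sample set, rather than an observation stage, to fix an acceptance threshold before the first online order arrives. The function name and the max-of-samples threshold below are illustrative assumptions, not the authors' method.

```python
def sample_based_accept(samples, stream):
    """Hypothetical sketch: fix a threshold from offline samples, then
    accept the first online value that meets it (no observation stage).
    Accepting the last value is forced if nothing qualifies."""
    threshold = max(samples)  # one common choice; the paper's rule may differ
    for index, value in enumerate(stream):
        if value >= threshold:
            return index, value
    return len(stream) - 1, stream[-1]
```

With samples [3, 5] and the online stream [2, 4, 6, 1], the rule skips 2 and 4 and accepts 6 at position 2.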

Traffic Light Routing Based on Node State Awareness in Delay Tolerant Networks

Tong Wang, Jianqun Cui, Yanan Chang, Feng Huang, Yi Yang

Delay-Tolerant Networks (DTNs), a supplementary means of communication in extreme situations, have attracted wide attention from scholars. However, it is challenging to use DTNs efficiently because of their intermittent connectivity and high latency. In the design of a DTN routing scheme, the selection of relay nodes is of great significance for efficient communication. Existing research, however, has either considered only one node feature or simply fused node attributes without fully exploiting their potential correlations. If these problems are not solved effectively, the propagation of messages between nodes becomes blind, and a considerable amount of cache is occupied and wasted by invalid copies. To address these challenges, a novel routing scheme, Traffic Light Routing Based on Node State Awareness (TLRNSA), is proposed for efficient communication. Specifically, the node's own state, the environmental state, and the historical encounter state are synthesized, and the traffic value of a node is obtained through an adaptive weight adjustment mechanism. According to its traffic value, a node is assigned one of three traffic light states: red, green, or yellow. Different routing strategies are developed for the three states to enhance performance. Comprehensive experiments suggest that TLRNSA outperforms other state-of-the-art algorithms in delivery rate and latency. Compared with two classic algorithms and two optimized algorithms, the proposed method increases the delivery rate by 109.1%, 84.12%, 5.09%, and 1.09%, respectively, and reduces the delay by 32.16%, 36.46%, 32.77%, and 6.77%, respectively.

Optimized Sustainable Strategy in Aerial Terrestrial IoT Network

Tiantian Wang, Lei Liu, Tong Ding

Integrating sensors into terrestrial networks for environmental monitoring, especially in wild areas, has received increasing attention. However, the efficiency of deploying Internet-of-Things (IoT) systems in wild areas is limited by high energy consumption and inconvenient power supply. The combination of wireless sensors and cordless power transmission offers a promising solution to these problems: an Unmanned Aerial Vehicle (UAV) equipped with a cordless charger enables flexible power supply for terrestrial networks in the wild. To this end, optimizing the planning of UAV charging operations has become a pressing demand. In this paper, we investigate the cyclic charging of a UAV with limited battery capacity for wireless charging. To maximize the charging interval and minimize the power consumption of the UAV, we develop an optimized two-stage charging strategy based on deep reinforcement learning. The method optimizes the charging sequence of the sensors and then charges each sensor with sustainability in mind. Experimental results suggest that the developed method improves the charging interval.

Crowdsourcing Mobile Data for a Passive Indoor Positioning System - The MAA Case Study

Ran Guan and Robert Harle

Crowdsourcing radio signal fingerprints to build a radio map for an indoor positioning system is an emerging alternative to the conventional, labour-intensive manual survey. However, existing crowdsourced systems rely heavily on ground-truth location inputs or impose unrealistic constraints on contributors, deterring wider adoption. Our work exploits three generic constraints of mobile data to recover the locations of crowdsourced fingerprints and builds a completely passive indoor positioning system that assumes no manual intervention or unnatural constraints on contributors. The proposed system was evaluated in the Museum of Archaeology and Anthropology (MAA) with passively crowdsourced data contributed by actual visitors, who behaved naturally without catering to the crowdsourcing process. Results show that the proposed system achieves positioning accuracy comparable to a traditional manual-survey-based system with essentially no extra manual effort.

Session Chair

Jianguo Chen, Sun Yat-sen University, China

Session T2S2

Task Offloading

Conference
11:00 AM — 12:30 PM HKT
Local
Dec 14 Wed, 10:00 PM — 11:30 PM EST

Joint Task Partition and Computation Offloading for Latency-Sensitive Services in Mobile Edge Networks

Yujie Peng, Xiaoqin Song, Fang Liu, Guoliang Xing, and Tiecheng Song

With the development of the Internet of Things (IoT), wireless communication networks and Artificial Intelligence (AI), more and more real-time applications such as online games and autonomous driving have emerged. However, due to limited computing power and battery capacity, it has become increasingly difficult for local user devices to take on the full range of computing tasks under tight timing constraints. The emerging Mobile Edge Computing (MEC) technology is widely considered an important technology for achieving ultra-low latency. However, most existing work focuses on non-splittable computation tasks. In fact, data-partitioning-oriented applications can be split into multiple subtasks for parallel processing. In this paper, we study the partial computation offloading of multiple splittable tasks in MEC networks, focusing on minimizing the total user device latency in multi-MEC multi-user scenarios. Considering the dynamic partitioning of tasks, we adopt the barrel theory to construct a linear system of equations that yields the optimal solutions, and we propose a distributed computation offloading approach based on numerical methods. Simulation results show that the proposed algorithm reduces the average user device latency by 31% compared with the binary offloading method.

Enabling Heterogeneous Domain Adaptation in Multi-inhabitants Smart Home Activity Learning

Md Mahmudur Rahman, Mahta Mousavi, Peri Tarr and Mohammad Arif Ul Alam

Domain adaptation for sensor-based activity learning is of utmost importance in remote health monitoring research. However, many domain adaptation algorithms fail to adapt in the presence of target-domain heterogeneity (which is always present in reality), and the presence of multiple inhabitants dramatically hinders their generalizability, producing unsatisfactory results for semi-supervised and unseen activity learning tasks. We propose AEDA, a novel deep auto-encoder-based model that enables semi-supervised domain adaptation in the presence of target-domain heterogeneity and that can be incorporated into any homogeneous deep domain adaptation architecture to support heterogeneity in cross-domain activity learning. Experimental evaluation on 18 heterogeneous, multi-inhabitant use-cases across 8 domains, created from 2 publicly available human activity datasets (wearable and ambient smart home), shows that AEDA outperforms existing domain adaptation techniques (with maximum improvements of 12.8% for the ambient smart home and 8.9% for wearables) for both seen and unseen activity learning in a heterogeneous setting.

Priority-Aware Task Offloading and Resource Allocation in Vehicular Edge Computing Networks

Ye Wang, Yanheng Liu, Zemin Sun, Lingling Liu, Jiahui Li, and Geng Sun

In recent years, the dramatic increase in vehicles and the limited resources of vehicular edge computing (VEC) servers have made it challenging for vehicles to execute intensive and latency-sensitive tasks on their local CPUs. Mobile edge computing (MEC) is viewed as a promising paradigm that deploys cloud resources on roadside units (RSUs). However, compared to cloud servers, MEC servers have limited resources. Moreover, vehicular tasks with different priorities place different requirements on edge resources. In this work, we propose a priority-aware collaborative task offloading and resource allocation approach for vehicular edge computing networks (VECNs). Specifically, we propose a variant grey wolf optimizer (VGWO) algorithm for resource optimization and a dynamic task offloading strategy (DOS) algorithm for task offloading. Simulation results show that the proposed VGWO algorithm outperforms the basic swarm intelligence optimization algorithm, and that the collaborative offloading method effectively reduces task processing latency and energy consumption.

Task Offloading in Fog: A Matching-driven Multi-User Multi-Armed Bandit Approach

Qing Zhang, Mingjun Xiao and Yin Xu

Fog computing is a promising technology for handling computation-intensive, latency-sensitive, and energy-intensive tasks. This paper studies the task offloading problem for decentralized multi-user systems with unknown system-side information. On this basis, we design the distributed task offloading bandit (DTOB) algorithm to balance exploration and exploitation in task offloading for fog computing systems. Finally, the effectiveness of the offloading scheme based on bidirectional matching is verified by experiments.
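The abstract does not give the DTOB algorithm itself; as a rough illustration of how a multi-armed bandit balances exploration and exploitation when choosing a fog node, here is the standard UCB1 index rule. The node model and names are assumptions for illustration, not the paper's design.

```python
import math

def ucb1_select(counts, mean_rewards, t):
    """Pick a fog node by the UCB1 index: play each node once, then
    favor nodes with high observed reward plus an exploration bonus
    that shrinks as a node is tried more often."""
    for node, count in enumerate(counts):
        if count == 0:
            return node  # explore untried nodes first
    return max(range(len(counts)),
               key=lambda n: mean_rewards[n] + math.sqrt(2 * math.log(t) / counts[n]))
```

A node with few observations keeps a large bonus term, so the selector revisits it occasionally even when its observed reward is low.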

MACC: MEC-Assisted Collaborative Caching for Adaptive Bitrate Videos in Dense Cell Networks

Haojia He, Songtao Guo, Lu Yang and Ying Wang

Caching adaptive bitrate video at edge nodes (ENs) can provide multi-version video-on-demand (VoD) services to end users (EUs) with a better experience. However, due to the limited cache capacity of ENs, it is important to decide which video content, and which bitrate version of it, to cache in each EN. In this paper, we first propose a user request hit profit (RHP) model, and then, based on the RHP model, we envision a mobile edge computing (MEC)-assisted collaborative caching scheme (MACC). Specifically, we model the communication links between ENs and EUs as a bipartite graph to employ collaborative caching among ENs, and we consider the transcoding relationship between different versions to effectively utilize the processing capacity of ENs. Given the NP-completeness of the cache placement problem, we prove that it is a monotone submodular function maximization problem, and propose the proactive cache placement based on maximum RHP increment (PCP-MRI) algorithm and the reactive cache replacement based on maximum RHP increment (RCR-MRI) algorithm. Extensive simulation results show that, compared with existing methods, the proposed MACC achieves significant performance improvements in cache hit ratio, initial waiting delay, and backhaul traffic load.
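For a monotone submodular maximization problem like the cache placement described above, the standard greedy algorithm gives a (1 − 1/e) approximation under a cardinality constraint. The sketch below is that generic greedy, not the paper's PCP-MRI algorithm; the marginal-gain function is an assumed placeholder for an RHP increment.

```python
def greedy_cache_placement(candidates, marginal_gain, capacity):
    """Repeatedly add the video version with the largest marginal gain
    until the cache capacity (number of cached items) is reached or no
    candidate still improves the objective."""
    chosen = set()
    while len(chosen) < capacity:
        remaining = [c for c in candidates if c not in chosen]
        if not remaining:
            break
        best = max(remaining, key=lambda c: marginal_gain(chosen, c))
        if marginal_gain(chosen, best) <= 0:
            break  # no further improvement possible
        chosen.add(best)
    return chosen
```

With a simple coverage-style gain (number of newly served requests), the greedy picks the item covering the most uncovered requests at each step.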

Session Chair

Hao Wang, Dalian University of Technology, China

Session T3S2

Attack and Security

Conference
11:00 AM — 12:30 PM HKT
Local
Dec 14 Wed, 10:00 PM — 11:30 PM EST

Accelerating Adversarial Attack using Process-in-Memory Architecture

Shiyi Liu, Sathwika Bavikadi, Tanmoy Sen, Haiying Shen, Purab Ranjan Sutradhar and Amlan Ganguly

Recent research has demonstrated that machine learning algorithms are vulnerable to adversarial attacks, in which small but carefully crafted input perturbations can lead to algorithm failure. Certain adversarial attack algorithms have been shown to produce such perturbations, but these attack methods are inapplicable when the attack must be generated in near real time. A hardware accelerator, such as a Process-in-Memory (PIM) architecture, is a potential way to address this issue. The PIM architecture is regarded as a superior option for data-intensive applications, such as solving optimization problems and running Deep Neural Networks (DNNs), due to its capacity for ultra-low-latency parallel processing. However, implementing an adversarial attack algorithm directly on a PIM platform is inefficient due to the architecture's complexity and overhead costs. To address this issue, we utilize a novel PIM-based adversarial attack scheme that leverages Look-up-Table (LUT)-based processing. The proposed LUT-based PIM architecture can be dynamically programmed to execute the operations required by an adversarial attack algorithm. Our simulations reveal that the proposed method achieves ultra-low operating delay and high energy efficiency.

PhysioGait: Context-Aware Physiological Context Modeling for Person Re-identification Attack on Wearable Sensing

James Sullivan, Mohammad Arif Ul Alam

Person re-identification is a critical privacy breach in publicly shared healthcare data. We investigate the possibility of a new type of privacy threat on publicly shared, privacy-insensitive, large-scale wearable sensing data. In this paper, we investigate user-specific biometric signatures in terms of two contextual biometric traits: physiological context (photoplethysmography and electrodermal activity) and physical context (accelerometer). We propose PhysioGait, a context-aware physiological signal model consisting of a Multi-Modal Siamese Convolutional Neural Network (mmSNN) that learns spatial and temporal information individually and performs sensor fusion with a Siamese cost, with the objective of predicting a person's identity. We evaluated the PhysioGait attack model on four datasets (three collected in real time and one publicly available) and two combined datasets, achieving 89% to 93% accuracy in re-identifying persons.

Secure Deduplication Against Frequency Analysis Attacks

Hang Chen, Guanxiong Ha, Yuchen Chen, Haoyu Ma and Chunfu Jia

Message-Locked Encryption (MLE) is the most common approach used in encrypted deduplication systems. However, systems based on MLE are vulnerable to frequency analysis attacks, because MLE deterministically encrypts identical plaintexts into identical ciphertexts. The state-of-the-art defense scheme, named TED, lacks key verification and uses a single key server to record frequency information. Once the key server is compromised, TED becomes vulnerable to brute-force attacks. In addition, TED's key generation algorithm needs a more exquisite design to strengthen protection, and its security indicator is not comprehensive. We propose SDAF, which supports key verification and enhanced protection against frequency analysis attacks. Based on chameleon hashing, SDAF realizes key verification to prevent malicious key servers from generating fake encryption keys. To disturb the frequency information, SDAF introduces reservoir sampling to generate uniformly distributed encryption keys and uses multiple key servers, which interact with each other via multi-party PSI and rotate spontaneously to avoid a single point of failure. Moreover, a new indicator, kurtosis, is proposed to evaluate security against frequency analysis attacks. We implement a prototype of SDAF. Experiments on real-world datasets show that, compared with existing schemes, SDAF better resists frequency analysis attacks with lower time overheads.
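Excess kurtosis of the ciphertext-frequency distribution is one plausible way to quantify how "peaked", and thus how analyzable, deduplicated data is; a flatter, more uniform frequency histogram leaks less to a frequency analysis attacker. The sketch below shows how such an indicator could be computed and is an assumption for illustration, not SDAF's exact definition.

```python
from collections import Counter

def excess_kurtosis(values):
    """Population excess kurtosis: m4 / m2**2 - 3 (0 for a normal
    distribution; higher values mean a more peaked distribution)."""
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n
    m4 = sum((v - mean) ** 4 for v in values) / n
    return m4 / m2 ** 2 - 3

def frequency_kurtosis(ciphertexts):
    """Kurtosis of the chunk-frequency histogram of a ciphertext list."""
    return excess_kurtosis(list(Counter(ciphertexts).values()))
```

A deduplication scheme that spreads identical plaintexts across several keys pushes the frequency histogram toward uniform, lowering this indicator.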

Breaking Distributed Backdoor Defenses for Federated Learning in Non-IID Settings

Jijia Yang, Jiangang Shu and Xiaohua Jia

Federated learning (FL) is a privacy-preserving distributed machine learning architecture that addresses the problem of data silos. While FL is designed to protect data security, it still faces security challenges. Backdoor attacks are a potential threat in FL and aim to manipulate model performance on chosen backdoor tasks by injecting adversarial triggers. As a more insidious variant of backdoor attacks, distributed backdoor attacks decompose a global trigger into multiple local patterns and assign them to different attackers. In this paper, we study the entire training process of the current distributed backdoor attack (DBA) in depth and propose a cooperative DBA method for non-IID FL that breaks through existing defenses. To bypass cosine similarity detection, we design an update rotation and scaling technique, based on two independent trainings, that disguises malicious updates among benign updates. We conduct exhaustive experiments to evaluate the performance of the proposed method under state-of-the-art defenses. The experimental results show that it is much stealthier than the current DBA method while maintaining high backdoor attack intensity.

Session Chair

Aida Akbarzadeh, Norwegian University of Science and Technology, Norway

Session T1S3

Wireless Networks

Conference
2:00 PM — 3:30 PM HKT
Local
Dec 15 Thu, 1:00 AM — 2:30 AM EST

Crowdsourced Image Driven PM2.5 Estimation based on Hybrid 3-Channel Feature Map

Jiaxuan Wang, Muyan Yao, Ruipeng Gao and Dan Tao

Industrialization has resulted in relatively high airborne PM2.5 concentrations in most developing areas, causing severe consequences due to the pollutant's physico-chemical properties. In this paper, we propose a crowdsourced-image-driven PM2.5 concentration estimation approach based on a meteorologically enhanced 3-channel feature map. To realize this framework, we first use the dark channel prior and Rayleigh's law of atmospheric scattering to correct the proportion of the sky area in the crowdsourced images, and then construct hybrid 3-channel feature maps. We then use deep learning models to extract features from the hand-engineered feature maps and provide fine-grained PM2.5 concentration estimates. We also conduct a series of experiments to evaluate the performance of our model. Evaluation on a dataset collected at 8 sites over nearly 2 years demonstrates that our system achieves an MAE of 23.27 µg/m3, outperforming baseline solutions.

An Energy-Equilibrium Opportunity Network Routing Algorithm Based on Game Theory and Historical Similarity Rate

Gang Xu, Ming Song, HongZhi Fu, BaoQi Huang, FengQi Wei and QinFu Si

The energy consumed by a node in an opportunistic network to forward messages determines its service time and survival period. To address the problem of unbalanced energy consumption, this paper proposes an energy-equilibrium opportunistic network routing algorithm (EOGH) based on game theory and historical similarity. The algorithm selects a suitable relay node to forward a message based on the node's remaining energy and its encounter probability with the destination node. Simulation results show that EOGH achieves a high delivery rate with balanced energy consumption and prolongs the network lifetime.

Opportunistic Network Routing Strategy Based on Relay Node Collaboration

Gang Xu, Xiaoying Yang, Ruijie Hang, Baoqi Huang, Fengqi Wei and Qinfu Si

This paper proposes an Opportunistic Network Routing Strategy Based on Relay Node Collaboration (BRCR), which addresses the low effectiveness of data forwarding in existing algorithms that ignore the social nature of node movement. The paper introduces a hybrid data forwarding service collection consisting of a cluster of fixed relay nodes and mobile relay nodes, through which a relatively stable communication link is established between the source and target nodes. The experimental results show that BRCR improves the efficiency of data forwarding compared with classical routing algorithms.

Characterizing Energy Sources in Outdoor Wireless Sensor Networks

Robert Hartung, Jan Kaberich, Christian Bunzeck and Lars Wolf

A reliable energy supply is the foundation of the many Wireless Sensor Networks that cannot be permanently powered from mains power. Batteries have therefore been widely used for many years. However, due to their environmental impact and limited lifetime, costs are high and batteries should be used as efficiently as possible. Furthermore, predictability is challenging, and overprovisioning is common practice, which again increases costs. Hence, energy harvesting has become very popular: it can increase lifetime and reduce cost, but it comes with additional challenges. Outdoor Wireless Sensor Networks operate across varying temperature ranges, which has a noticeable impact on energy sources. In this paper, we characterize different types of energy sources with regard to their behavior under usage patterns and temperature changes.

Session Chair

Changlin Yang, Sun Yat-sen University, China

Session T2S3

Edge Computing

Conference
2:00 PM — 3:30 PM HKT
Local
Dec 15 Thu, 1:00 AM — 2:30 AM EST

Leakage Detection via Edge Processing in LoRaWAN-based Smart Water Distribution Networks

Domenico Garlisi, Gabriele Restuccia, Ilenia Tinnirello, Francesca Cuomo and Ioannis Chatzigiannakis

The optimization and digitalization of Water Distribution Networks (WDNs) are becoming key objectives in modern society. Indeed, WDNs are typically old, worn, and obsolete, and the inadequate condition of these infrastructures leads to significant water loss through leakages in pipes, junctions, and nodes. In Europe, the average share of lost water has been measured at about 26%. Leakage control in current WDNs is typically passive: leaks are repaired only when they become visible. Emerging Low Power Wide Area Network (LPWAN) technologies, and IoT technologies in particular, can help monitor water consumption and automatically detect leakages. In this context, LoRaWAN can be the right way to deploy a smart monitoring system for WDNs. Moreover, most current smart WDN solutions simply collect measurements from smart meters and send the data to cloud servers, where the intended analyses are executed in a centralized way. In this paper, we propose new solutions to improve monitoring, leak management, and prediction by exploiting edge processing capabilities inside LoRaWAN networks. Our approach is based on an IoT system of water sensors placed at junctions of the WDN, providing measurements corresponding to various smart meters in the network, together with Machine Learning (ML) algorithms that process the data directly at the edge to visualize and predict leakages. We present a numerical simulation tool for evaluating the suggested monitoring method. Based on our results, we examine whether network leaks can be identified at the edges without a complete or accurate overview of the measurements collected across the full WDN. System performance is also reported separately at the gateway level.

EdgeMan: Ensuring Real-Time Service for Containerized Edge Systems

Wenzhao Zhang, Wei Dong, Geng Ren, Yi Gao

Containers have emerged as a popular technology for edge computing platforms. Although there are various container orchestration frameworks, e.g., Kubernetes, that provide highly reliable services for cloud infrastructure, ensuring real-time service in containerized edge systems (CESs) remains a challenge. In this paper, we propose EDGEMAN, a holistic edge service management framework for CESs, which consists of (1) a model-assisted, event-driven, lightweight online scheduling algorithm that provides request-level execution plans, and (2) a bottleneck-metric-aware progressive resource allocation mechanism that improves resource efficiency. We build a testbed running three containerized services with different latency sensitivities for concrete evaluation, and we adopt real-world data traces from Alibaba and Twitter for large-scale emulations. Extensive experiments demonstrate that the deadline miss ratio of EDGEMAN is reduced by 85.9% on average compared with existing methods from both industry and academia.

LoRaDrone: Enabling Low-Power LoRa Data Transmission via a Mobile Approach

Ciyuan Chen, Junzhou Luo, Zhuqing Xu, Runqun Xiong, Zhimeng Yin, Jingkai Lin, Dian Shen

Low-Power Wide Area Networks (LPWANs) are widely used to connect large-scale Internet of Things (IoT) applications. Long Range (LoRa) is a promising LPWAN technology that is sensitive to energy consumption, since LoRa nodes are generally battery-powered and battery life determines the lifetime of the LoRa network. In practice, the battery life of LoRa nodes is short in many scenarios, because long transmission distances from the gateway lead to high energy consumption. Existing techniques for energy-efficient data transmission mainly focus on static gateways and consume substantial energy at remote nodes. In this paper, we propose to integrate LoRa with mobility to minimize node energy consumption by effectively shortening transmission distances, and we design the first mobile LoRa data transmission system, LoRaDrone, in which an unmanned aerial vehicle (UAV) gateway flies close to the nodes. Specifically, we present a low-power communication mechanism and a dynamic channel allocation policy to minimize the energy consumed in sensing and communicating with the UAV gateway, while accounting for LoRa's distinctive parallel reception and complex transmission collisions. An optimal speed scheduling strategy is then designed to ensure reliable data transmission and minimize the energy consumption of the UAV. Evaluations at various scales verify the effectiveness of LoRaDrone under different node distributions and UAV paths. Compared with the baselines, LoRaDrone reduces node energy consumption by up to 70.37% at 5,000 nodes.

Index Terms: LoRa, mobility, low-power, data transmission, scheduling

Online Service Provisioning and Updating in QoS-aware Mobile Edge Computing

Shuaibing Lu, Jie Wu, Pengfan Lu, Jiamei Shi, Ning Wang, Juan Fang

The vigorous development of IoT technology has spawned a series of applications that are delay-sensitive or resource-intensive. Mobile edge computing is an emerging paradigm that provides services to users between end devices and traditional cloud data centers. However, with continuously increasing demands, it is nontrivial to maintain a high quality of service (QoS) under the erratic activities of mobile users. In this paper, we investigate the service provisioning and updating problem in the multi-user scenario, improving the performance of services under long-term cost constraints. We first decouple the original long-term optimization problem into a per-slot deterministic one using Lyapunov optimization. Then, we propose two service updating decision strategies that account for the trajectory prediction conditions of users. Based on these, we design an online strategy that uses the committed horizon control method, looking forward over multiple slots of predictions. We theoretically prove the performance bound of our online strategy in terms of the trade-off between delay and cost. Extensive experiments demonstrate the superior performance of the proposed algorithm.

Index Terms: mobile edge computing, online service provisioning, mobility, quality of service (QoS)
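The Lyapunov decoupling mentioned above follows the standard drift-plus-penalty pattern: a virtual queue tracks accumulated cost overruns, and each slot greedily trades delay against queue-weighted cost. The sketch below shows only that generic pattern; the (delay, cost) option model and variable names are assumptions, not the paper's formulation.

```python
def per_slot_decision(options, Q, V):
    """Pick the (delay, cost) option minimizing V*delay + Q*cost.
    A large V favors low delay; a large cost backlog Q favors low cost."""
    return min(options, key=lambda opt: V * opt[0] + Q * opt[1])

def update_virtual_queue(Q, cost, budget):
    """Virtual queue for the long-term cost constraint: grows when the
    per-slot cost exceeds the per-slot budget, drains otherwise."""
    return max(Q + cost - budget, 0.0)
```

Keeping the virtual queue stable over time is what enforces the long-term cost constraint, while the per-slot minimization keeps each decision cheap to compute.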

Session Chair

Miao Hu, Sun Yat-sen University, China

Session T3S3

Blockchain

Conference
2:00 PM — 3:30 PM HKT
Local
Dec 15 Thu, 1:00 AM — 2:30 AM EST

Blockchain Based Secure Outsourcing Data Integrity Auditing for Internet of Things in Cloud-edge Environment

Yangfei Lin, Celimuge Wu, Yusheng Ji, Jie Li, Zhi Liu

The Internet of Things (IoT) enables devices to communicate, collect, and exchange data over the network. As the number of IoT devices keeps growing, the volume of data they produce increases exponentially. Given the limited computing and storage resources of IoT devices, storing data in the cloud for better services is inevitable. However, enabling users to inspect those data over the cloud effectively and efficiently is a critical open problem. Most public cloud integrity auditing schemes require the user to do a sheer amount of preprocessing work on local devices, which is unsuitable for IoT devices. Edge computing, which extends cloud computing, can provide computing capability for resource-constrained devices in close geographic proximity. In this paper, we design an auditing scheme based on secure computation outsourcing assisted by edge computing, in which the data preprocessing work can be offloaded to the edge server. The experiments show that our scheme reduces the computing load on the devices and improves the efficiency of task processing.

Index Terms: edge computing, Internet of Things (IoT), secure computation outsourcing, privacy-preserving, cloud data integrity auditing

Trusted-Committee-Based Secure and Scalable BFT Consensus for Consortium Blockchain

Liaoliao Feng, Yan Ding, Yusong Tan, Xiang Fu, Keming Wang, Junsheng Chang

Compared with public blockchains, a consortium blockchain is more secure and controllable when deployed in an enterprise scenario. Byzantine fault tolerance (BFT) consensus is widely applied in consortium blockchains. Although PBFT is the classic practical BFT consensus with message complexity O(n^2), it still faces security threats and has low consensus efficiency. To address these issues, we propose a secure and trusted BFT (S2BFT) consensus based on trusted committees. S2BFT generates a trusted anonymous number using a trusted execution environment (TEE) for each server node and selects committees by a pseudo-random algorithm. S2BFT can efficiently reach consensus through the committees with O(m·n) message complexity. In addition, a correctness analysis proves that S2BFT can resist more attacks than traditional BFT consensus and tolerate up to 1/2 Byzantine server nodes. Results further demonstrate the efficiency of the simulated S2BFT implementation.

Index Terms: Byzantine fault tolerance, anonymity, committee, trusted execution environment, consortium blockchain

An Efficient and Secure Node-sampling Consensus Mechanism for Blockchain Systems

Zhelin Liang, Hao Xu, Xiulong Liu, Shan Jiang, Keqiu Li

The consensus mechanism plays a pivotal role in guaranteeing the security and consistency of blockchain systems and substantially affects system performance. However, an increasing number of blockchain nodes degrades consensus performance dramatically because of the high communication complexity of traditional consensus mechanisms. In this paper, we propose NS-consensus, a secure node-sampling blockchain consensus mechanism that significantly reduces communication complexity. The key novelty lies in sampling the blockchain nodes so that the leader only needs to interact with the sampled nodes in each consensus epoch. However, NS-consensus poses two challenges: determining an optimal sample size and denying malicious proposals. To address these challenges, we determine the sample size under the constraints of a confidence level and a margin of error, enhancing communication efficiency without compromising system security. Furthermore, we design a mechanism that lets the leader interact with all blockchain nodes in the last consensus phase, ensuring the denial of malicious proposals. Extensive experimental results indicate that NS-consensus outperforms the state of the art with up to 175.1% higher system throughput and 79.9% lower time overhead in the sampling phases.

Index Terms: consensus, blockchain, sampling, security
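Determining a sample size from a confidence level and a margin of error, as described above, presumably follows the textbook formula n0 = z^2 p(1-p) / e^2 with a finite-population correction. The sketch below computes that standard quantity; it is an assumption about how such a bound could be derived, not a reproduction of the paper's method.

```python
import math

def sample_size(population, z=1.96, p=0.5, e=0.05):
    """Textbook sample size: z is the z-score for the confidence level
    (1.96 for 95%), p the assumed proportion (0.5 is worst case), and
    e the margin of error. The finite-population correction shrinks
    the sample when the population itself is small."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / population))
```

For example, at a 95% confidence level and a 5% margin of error, a leader in a 10,000-node network would interact with only 370 sampled nodes per epoch instead of all 10,000.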

An atomic member addition mechanism for permissioned blockchain based on autonomous rollback

Qihui Zhou, Xianglin Dang, Yazhe Wang, Zhen Xu, Penghui Lv

0
As a distributed ledger technology, permissioned blockchain usually adds new members through several steps carried out among distributed nodes. The addition cannot be considered successful until all of the steps are completed; in other words, these steps form an atomic operation. However, existing permissioned blockchains offer no solution for this atomic operation, leading to an inconsistent state when the addition of new members is only partially completed. To implement atomic member addition in permissioned blockchain, we propose a method targeting the atomic addition of new members based on distributed and autonomous rollback. After member addition starts, distributed nodes of existing members detect the new node and decide whether to roll back, instead of taking commands from a coordinator. After deciding to roll back, a new configuration block is added to undo the uncompleted member addition. For the new configuration block to pass the policy validation of the orderers, we set a rollback mode for the orderers. The evaluation results show that our method implements atomic member addition with little impact on performance. Index Terms: atomic, member addition, permissioned blockchain, rollback, Fabric

CDTP: A Copyright-preserving Decentralized Data Trading Platform Based on Blockchain

Heng Tian, Mingjun Xiao

0
Conventional centralized data trading systems confront the problem that the Trusted Third Party (TTP) may be dishonest, which harms the fairness and transparency of the system. Moreover, most data trading systems fail to distinguish between the copyright and the use-right in trading. To address these issues, we propose a novel blockchain-based, copyright-preserving data trading system called CDTP, which mainly comprises two blockchains and an agreement protocol. The copyright chain is designed for registering and trading copyrights stored in the form of atomic transactions; it adopts an auction-based Byzantine agreement, namely ABFT. The other is the use-right blockchain, which records use-right transactions and stores data, combined with IPFS-based storage. Moreover, we carry out experiments to simulate the performance of ABFT under attack. Index Terms: blockchain, agreement, copyright-preserving, data trading

Session Chair

Georgios Spathoulas, Norwegian University of Science and Technology, Norway

Session T1S4

Mobile and Fog Computing

Conference
4:00 PM — 5:30 PM HKT
Local
Dec 15 Thu, 3:00 AM — 4:30 AM EST

Proactive Handover Mechanism for Blockage Avoidance in Indoor VLC Networks

Anna Maria Vegni, Panagiotis Diamantoulakis

1
Handover management in Visible Light Communications (VLC) networks is an open issue due to the need to maintain alignment between a transmitting Light Emitting Diode (LED) and a receiving PhotoDetector (PD). Mobility strongly affects VLC connectivity links, as do occlusions that may obstruct Line-of-Sight (LoS) propagation. In this paper we present a proactive handover solution that is able to switch connectivity from a serving VLC "lighting" cell to a candidate one in case of blockage affecting the VLC link. Unlike conventional handover mechanisms, where a handover is triggered by quality-of-service metrics or localization information, our technique is dedicated to VLC networks, which can be affected by occlusions causing blockages. The proposed approach has been implemented in both hard and soft mode, and assessed in terms of achievable data rate and handover latency for a user walking in a given reference room at different speeds. Index Terms: Visible Light Communications, handover management, blockage, Line-of-Sight

Task Offloading for Post-disaster Rescue in Vehicular Fog Computing-assisted UAV Networks

Geng Sun, Long He, Zemin Sun, Jiayun Zhang, and Jiahui Li

0
Due to their more flexible mobility, better line-of-sight (LoS), and faster on-demand deployment, unmanned aerial vehicles (UAVs) play a unique role in assisting post-disaster rescues, which often require UAVs to perform computation-intensive missions. However, UAVs generally have limited computational capacity and battery storage, which makes it challenging to complete heavy computing tasks within a short period of time during complicated post-disaster recovery. To overcome this issue, we introduce a vehicular fog computing (VFC) system in which a UAV splits and assigns heavy tasks to ground vehicles. First, to evaluate the task-offloading performance of the VFC-assisted UAV network, task processing latency and energy consumption are incorporated into a system utility. Moreover, we propose a joint UAV and vehicular task assignment scheme (JUVTAS) to optimize network performance. Specifically, we propose a genetic algorithm-invasive weed optimization (GA-IWO) algorithm to obtain an approximately optimal task assignment strategy; it combines the global search ability of the genetic algorithm with the local search ability of invasive weed optimization to achieve better optimization performance. Simulation results show that the proposed JUVTAS effectively reduces the latency and energy consumption of task processing, and achieves superior performance compared to several conventional methods. Index Terms: UAV communication, fog computing, computation offloading, post-disaster rescue, genetic algorithm, invasive weed optimization

Anomaly Detection for Reoccurring Concept Drift in Smart Environments

Vincenzo Agate, Salvatore Drago, Pierluca Ferraro, Giuseppe Lo Re

0
Many crowdsensing applications today rely on learning algorithms applied to data streams to accurately classify information and events of interest in smart environments. Unfortunately, the statistical properties of the input data may change in unexpected ways; as a result, the definition of anomalous and normal data can vary over time, and machine learning models may need to be re-trained incrementally. This problem is known as concept drift, and it has often been ignored by anomaly detection systems, resulting in significant performance degradation. In addition, the statistical distribution of past data often tends to repeat itself, so old learning models could be reused, avoiding costly retraining phases on new data that waste computational and energy resources. In this paper, we propose a hybrid anomaly detection system for streaming data in smart environments that accounts for concept drift and minimizes the number of machine learning models that need to be retrained when shifts in the incoming data distribution are detected. The system is multi-tier and relies on two different concept drift detection modules and an ensemble of anomaly detection models. An extensive experimental evaluation has been carried out using two real datasets and a synthetic one; the results show the high performance achieved by the system on common metrics such as F1-score and accuracy. Index Terms: concept drift, smart city, online anomaly detection, unsupervised learning

A Novel Data Aggregation Scheme for Wireless Sensor Networks Based on Robust Chinese Remainder Theorem

Jinxin Zhang, Fuyou Miao

0
In wireless sensor networks (WSNs), a large number of sensor nodes are usually deployed in the monitoring area to improve sensing accuracy and coverage. This high density makes the data sensed by adjacent nodes the same or similar, causing considerable data redundancy and energy waste. In addition, reliability and non-plaintext transmission of the sensed data are major concerns in WSNs. In this paper, we propose a novel data aggregation scheme that simultaneously satisfies the requirements of energy efficiency, reliability, and non-plaintext transmission, and obtains an approximate measurement result when small measurement errors are allowed. The scheme employs the robust Chinese Remainder Theorem (RCRT) to compress the data as it is sensed, and no other assumptions are required. We further derive some analytical results and present simulations of our scheme. Finally, we compare the performance of typical data aggregation schemes with our RCRT-based scheme in experimental simulation. The results demonstrate that the proposed RCRT-based scheme performs better in energy saving. Index Terms: robust Chinese Remainder Theorem (RCRT), wireless sensor networks (WSNs), measurement errors, data aggregation, energy efficiency, reliability, non-plaintext transmission
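The robust CRT machinery itself is beyond the scope of an abstract, but the classic, error-free Chinese Remainder Theorem underlying it can be sketched as follows; the robust variant additionally tolerates small errors in the residues, and the function names here are illustrative rather than the paper's:

```python
from math import prod

def crt_reconstruct(residues, moduli):
    """Recover x (mod prod(moduli)) from the residues x mod m_i,
    assuming the moduli are pairwise coprime."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(a, -1, m) is the modular inverse (Python 3.8+)
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# A sensor reading of 23 compressed into small residues and reconstructed:
reading = 23
moduli = (3, 5, 7)
residues = [reading % m for m in moduli]   # [2, 3, 2]
print(crt_reconstruct(residues, moduli))   # 23
```

The compression intuition: each node transmits only a small residue rather than the full reading, and the sink reconstructs the value, which is one source of the energy saving; robustness to bounded measurement error is what RCRT adds on top.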

Session Chair

Dangyang Xiao, Sun Yat-sen University, China

Session T2S4

IoT

Conference
4:00 PM — 5:30 PM HKT
Local
Dec 15 Thu, 3:00 AM — 4:30 AM EST

T2C: A Multi-User System for Deploying DNNs in a Thing-to-Cloud Continuum

Chia-Ying Hsieh, Praveen Venkateswaran, Nalini Venkatasubramanian, and Cheng-Hsin Hsu

0
The importance of IoT analytics in smart deployments has resulted in an increased use of powerful Deep Neural Network (DNN) models to extract insights from the growing amount of IoT sensor data. Traditional approaches that entirely offload computation and model deployment to cloud servers have been shown to be inefficient due to network congestion and latency concerns. However, with the improved capabilities of IoT devices, it has now become possible to distribute and host DNNs across IoT devices, edge servers, and the cloud. In this paper, we propose a multi-user system, called T2C, to dynamically choose, deploy, monitor, and control DNN-driven IoT analytics in a thing-to-cloud continuum. T2C leverages strategies such as multi-task learning, hitchhiking, early exit, and dynamic reconfiguration to maximize the number of served user requests while simultaneously satisfying accuracy and latency requirements. We propose a suite of deployment planning and reconfiguration algorithms to dynamically deploy and migrate DNN layers between IoT devices, edge servers, and the cloud. We implement T2C in a prototype testbed and show that our system (i) achieves a 6.8X throughput boost compared to baseline algorithms in the planning phase, and (ii) improves the satisfied ratio by up to 35% in the operation and reconfiguration phase. Index Terms: multi-task learning, early exit, edge computing, distributed deep learning

Dynamic Vehicle Aware Task Offloading Based on Reinforcement Learning in a Vehicular Edge Computing Network

Lingling Wang, Xiumin Zhu, Nianxin Li, Yumei Li, Shuyue Ma, Linbo Zhai

0
The rapid development of edge computing has an impact on the Internet of Vehicles (IoV). However, the high-speed mobility of vehicles makes task offloading delay unstable and unreliable. Hence, this paper studies the task offloading problem to provide stable computing, communication, and storage services for user vehicles in vehicular networks. The offloading problem is formulated to minimize cost consumption under a maximum delay constraint by jointly considering the positions, speeds, and computation resources of vehicles. Due to the complexity of the problem, we propose the vehicle deep Q-network (V-DQN) algorithm. In V-DQN, we first propose a vehicle adaptive feedback (VAF) algorithm to obtain the priority with which service vehicles process tasks. Then, V-DQN builds on the result of VAF to realize the task offloading strategy. In particular, the interruption problem caused by vehicle movement is formulated as a return function that forms part of the evaluation of the task offloading strategy. The simulation results show that our proposed scheme significantly reduces cost consumption and improves Quality of Service (QoS). Index Terms: IoV, mobile edge computing, task offloading, dynamic, V-DQN

VSLink: A Fast and Pervasive Approach to Physical Cyber Space Interaction via Visual SLAM

Han Zhou, Jiaming Huang, Hongchang Fan, Geng Ren, Yi Gao,and Wei Dong

0
With the fast growth of the Internet of Things, people are now surrounded by plenty of devices, and human-device interaction technologies are evolving to make interacting with them efficient. Because existing methods (e.g., mobile apps) require users to remember the mapping between a real-world device and its digital counterpart, an important goal is to close this gap. In this paper, we propose VSLink, which offers human-device interaction in an Augmented-Reality-like manner and achieves fast object identification and pervasive interaction, fusing physical space and cyberspace. To improve processing speed and accuracy, VSLink adopts a two-step object identification method to locate interaction targets: visual SLAM and object detection neural networks detect stable and movable objects separately, and detection priors from SLAM are fed to the neural networks, enabling sparse-convolution-based inference acceleration. VSLink offers a platform where users can customize the interaction target, function, and interface. We evaluated VSLink in an environment containing multiple objects to interact with. The results show that it achieves a 33% network inference acceleration on state-of-the-art networks and enables object identification with 30 FPS video input. Index Terms: human-object interaction, augmented reality, visual SLAM, object detection

BACO: A Bi-Ant-Colony-Based Strategy for UAV Trajectory Planning Considering Obstacle

Zhiyang Liu, Ximin Yang, Wan Tang, Xiao Zhang, Zhen Yang

0
Trajectory planning for logistics delivery using an unmanned aerial vehicle (UAV) involves a typical traveling salesman problem (TSP), in which turning the UAV to avoid obstacles can cause significant energy consumption; obstacles in the airspace and the angle constraints of the UAV must also be considered. To address the low precision of UAV trajectory searches and the serious impact of flight angles on UAV energy consumption, we propose a UAV trajectory planning strategy called bi-ant-colony optimization (BACO). BACO consists of two phases: path planning and track planning. By applying the guidance-layer ant colony optimization (GuLACO) algorithm, the path planning phase eliminates the ant-colony deadlock problem that arises in multi-target-point environments and reopens the ant tabu table to search for a guidance path. The track planning phase then employs the general-layer ant colony optimization (GeLACO) algorithm to build the guidance path in segments. Furthermore, the precision of the UAV's flight heading is optimized by adapting the flight step, yielding fine-grained flight tracks that control the turning angle of the logistics UAV. Our simulation results show that, compared with the greedy algorithm and the classical ACO algorithm, UAV trajectory planning using BACO not only obtains shorter flight paths that take obstacle avoidance into account, but also reduces the energy consumption of the UAV by finely controlling the amplitudes of the flight angles, ensuring the safety and energy efficiency of the UAV in flight. Index Terms: trajectory planning, logistics and transportation, UAVs, ant colony optimization

Session Chair

Jingjing Li, South China Normal University, China

Session T3S4

Analysis and Detection

Conference
4:00 PM — 5:30 PM HKT
Local
Dec 15 Thu, 3:00 AM — 4:30 AM EST

Recognition of Abnormal Proxy Voice Traffic in 5G Environment Based on Deep Learning

Hongce Zhao, Shunliang Zhang, Xianjin Huang, Zhuang Qiao, Xiaohui Zhang, Guanglei Wu

0
With the commercial use of the fifth generation (5G) of mobile networks, the rapid popularization of mobile Over-The-Top (OTT) voice applications has brought high-quality voice communication to users. In the intelligent Internet of the 5G era, communication terminals are no longer limited to mobile phones, and the complex communication environment places higher requirements on the security of data transmission between terminals to prevent systems from being monitored or breached. At present, many OTT users use encrypted proxy technology to get around certain restrictions of network operators, prevent their private information from leaking, and ensure communication security. However, in some cases the encryption proxy may suffer configuration errors or malicious attacks that render the encryption ineffective, and the resulting abnormal proxy traffic may leak privacy when users use voice services. Little effort has been put into fingerprinting the effectiveness of encryption for proxy voice traffic in a 5G environment. To this end, we adopt the VGG deep learning method to identify proxy voice traffic, compare it with common deep learning methods, and study the impact of scarce abnormal traffic on model performance. Extensive experimental results show that the deep learning method we use can identify abnormal encrypted proxy voice traffic with accuracy up to 99.77%. Moreover, VGG outperforms other DL methods in identifying the encryption algorithms of normal encrypted proxy traffic. Index Terms: 5G OTT, proxy voice traffic, encryption validity, deep learning, proxy traffic identification

Web Attack Payload Identification and Interpretability Analysis Based on Graph Convolutional Network

Yijia Xu, Yong Fang, Zhonglin Liu

0
Web attack payload identification is a significant part of a Web defense system. Current approaches usually combine natural language processing and deep learning to automatically build a detection model that intercepts malicious payloads. However, these detection methods ignore the bidirectional association between fields and are prone to payload dilution for long strings. In addition, the weak interpretability of deep learning models makes it difficult for researchers to solve the problem of model pollution and to adjust the model according to its prediction logic. Therefore, this paper proposes a new Web attack payload identification method based on Graph Convolutional Networks (GCNs), which can effectively extract Web payload features and support model interpretability analysis. The core of this method is to transform the text feature problem into a graph feature extraction problem and to understand the structure and content of the Web payload from a graph perspective. The method embeds the nodes of the Web payload graph through a GCN, then converts the embedding vectors into a graph feature vector through a feature fusion method. A node ablation method is used to analyze the interpretability of malicious payloads and to calculate the predicted impact rate of nodes inside the graph structure. Experiments on the CSIC 2010 v2 HTTP dataset show that the proposed method identifies Web attack payloads with high accuracy, and that node embedding with the Relational Graph Convolutional Network (RGCN) is more suitable for identifying Web attack payloads than other GCN variants. The results also show that model interpretability analysis based on the Web payload graph is reasonable and can effectively help researchers adjust the model and prevent model pollution. Index Terms: Web attack, payload detection, graph embedding, interpretability analysis

A Mitmproxy-based Dynamic Vulnerability Detection System For Android Applications

Xinghang Lv, Tao Peng, Junwei Tang, Ruhan He, Xinrong Hu, Minghua Jiang, Zaihui Deng, Wenli Cao

0
During the process of pushing patch packets for Android application hotfixes, an attacker can hijack and tamper with the dex file because no digital signature is added, which leads to code injection with serious consequences. To address this problem, a dynamic vulnerability detection system based on mitmproxy is proposed. It first uses mitmproxy to capture all packets exchanged between the client and the server and locates the dex file, then injects test code into the dex and pushes it to the client for execution using a man-in-the-middle attack, and finally verifies through the application's log output whether a code injection vulnerability exists. Among 1000 applications from the application market, our system successfully detected 34 previously unknown applications vulnerable to dex injection, and the experimental results show that the system is effective at detecting real-world applications with vulnerabilities caused by hotfixes. Index Terms: application hotfix, vulnerability detection, code injection, mitmproxy, man-in-the-middle

Detection of DoH Tunnels with Dual-tier Classifier

Yuqi Qiu, Baiyang Li, Liang Jiao, Yujia Zhu, Qingyun Liu

0
DNS over HTTPS (DoH) has been deployed to provide confidentiality in the DNS resolution process. However, encryption is a double-edged sword: it provides security while increasing the risk of data tunneling attacks, and current approaches for plaintext DNS tunnel detection no longer apply. Due to the diversity of tunneling tool variants and the low proportion of tunneled traffic in real situations, detecting malicious behavior is becoming more and more challenging. In this paper, we propose a novel behavior-based model with a Dual-Tier tunnel Classifier (DTC) for tool-level DoH tunneling detection. The major advantage of DTC is that it can not only capture existing tunneling tools but also discover unknown ones in the wild. In particular, DTC accounts for data imbalance, which improves the robustness of the model in open environments. Our method has proven successful in both closed and open scenarios, achieving 99.99% accuracy in detecting known malicious DoH traffic, 96.93% accuracy on unknown traffic, and 95.31% accuracy in identifying malicious DoH tunneling tools. Index Terms: DoH tunnels, traffic detection, variational auto-encoder, machine learning

QP-LDP for better global model performance in federated learning

Qian Chen, Zheng Chai, Zilong Wang, Jiawei Chen, Haonan Yan, Xiaodong Lin

0
With the deployment of local differential privacy (LDP), federated learning (FL) has gained stronger privacy-preserving capability against inference-type attacks. However, existing LDP methods reduce global model performance. In this paper, we propose a QP-LDP algorithm for FL that obtains a better-performing global model without losing the privacy guarantees defined by the original LDP. Different from previous LDP methods for FL, QP-LDP improves global model performance by precisely disturbing only the non-common components of quantized local contributions. In addition, QP-LDP comprehensively protects two types of local contributions. Through security analysis, QP-LDP provides probabilistic indistinguishability of clients' private local contributions at the component level. More importantly, experiments show that with QP-LDP deployed, the global model outperforms that of the original LDP-based FL in terms of prediction accuracy and convergence rate. Index Terms: federated learning, privacy-preserving, local differential privacy, quantization, private set intersection
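As a hedged sketch of the general idea only (not the paper's exact QP-LDP mechanism), one can quantize a local update and add Laplace noise solely to the components outside an agreed common set, which is the kind of selective perturbation the abstract describes; all names, parameters, and the choice of the Laplace mechanism here are illustrative assumptions:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) via inverse-transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def qp_perturb(update, common_idx, epsilon, sensitivity=1.0, step=0.1):
    """Quantize every component; perturb only the non-common ones."""
    scale = sensitivity / epsilon           # Laplace scale for epsilon-LDP
    out = []
    for i, v in enumerate(update):
        q = round(v / step) * step          # uniform quantization to a grid
        if i not in common_idx:             # common components pass untouched
            q += laplace_noise(scale)
        out.append(q)
    return out
```

A quick sanity check on the design: with a very large epsilon the noise vanishes and only quantization remains, while small epsilon drowns the non-common components in noise; the common components are never perturbed, which is where the utility gain over uniform LDP would come from.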

Session Chair

Ahmed Amro, Norwegian University of Science and Technology, Norway
