In this article, we consider a decode-and-forward (DaF) wireless system in which an unmanned aerial vehicle (UAV) communicates with a ground control station (GCS) through an intelligent reflecting surface (IRS). Specifically, the UAV moves according to the three-dimensional (3D) random waypoint model at low altitude in a complex urban environment, while a stationary relay station (RS) decodes and forwards the UAV's signal to the GCS over an IRS-aided virtual line-of-sight (LoS) link. The highly dynamic and terrain-dependent UAV-to-RS channel follows the Beaulieu-Xie fading model, whereas the RS-to-IRS and IRS-to-GCS links enjoy clear LoS and thus follow the Rice fading model. We derive new closed-form expressions for the probability density functions (PDFs) and cumulative distribution functions (CDFs) of the considered links. Based on these statistical expressions, several performance metrics, including the outage probability, average bit error rate, and ergodic channel capacity, are then derived in closed form. Additionally, simple and accurate approximate expressions in the high signal-to-noise ratio regime are provided. The analytical results are validated through representative numerical examples and supported by Monte-Carlo simulation results.
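As a rough sketch of how the Monte-Carlo validation might proceed, the snippet below estimates the outage probability of a single Rician-faded link. This is a simplification: the paper's Beaulieu-Xie UAV-to-RS model and the IRS cascade are omitted, and the K-factor, SNR values, and function names are illustrative.

```python
import numpy as np

def rician_samples(k_factor, n, rng):
    """Draw |h|^2 channel-gain samples for Rician fading with the given
    K-factor, normalized to unit mean power: E[|h|^2] = 1."""
    s = np.sqrt(k_factor / (k_factor + 1.0))          # LoS amplitude
    sigma = np.sqrt(1.0 / (2.0 * (k_factor + 1.0)))   # per-dimension scatter std
    h = s + sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return np.abs(h) ** 2

def outage_probability(snr_db, snr_th_db, k_factor, n=200_000, seed=0):
    """Empirical P[SNR < threshold] for a single Rician link."""
    rng = np.random.default_rng(seed)
    gains = rician_samples(k_factor, n, rng)
    snr = 10 ** (snr_db / 10) * gains
    return float(np.mean(snr < 10 ** (snr_th_db / 10)))
```

As expected, the estimated outage probability falls as the average SNR grows, which is the qualitative behavior the closed-form and high-SNR approximate expressions capture.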
This paper investigates the resource allocation problem in non-orthogonal multiple-access (NOMA) cellular networks underlaid with OMA-based device-to-device (D2D) communication. This network architecture enjoys the intrinsic features of NOMA and D2D communications, namely spectral efficiency, massive connectivity, and low latency. Despite these indispensable features, the combination of NOMA and D2D communications exacerbates the resource allocation problem in cellular networks due to the tight coupling among their constraints and their conflict over access to shared resources. The aim of our work is to maximize the downlink network sum-rate while meeting the minimum rate requirements of the cellular tier and the underlay D2D communication, and incorporating interference management as well as other practical constraints. To this end, many-to-many matching and difference-of-convex programming are employed to develop a holistic sub-channel and power allocation algorithm. In addition to analyzing the properties of the proposed solution, its performance is benchmarked against an existing solution and the traditional OMA-based algorithm. The proposed solution demonstrates superiority in terms of network sum-rate, users' connectivity, minimum rate satisfaction, fairness, and interference management, while maintaining acceptable computational complexity.
The demand for high-capacity network services with stringent quality-of-service requirements is accelerating rapidly due to the exponential rise in the number of mobile-connected devices. This demand has motivated the use of heterogeneous network (HetNet) architectures. However, even though small-cell base stations have relatively low power consumption, the aggregate power consumption of a dense HetNet is significant. Moreover, due to high inter-cell interference and imbalanced loads in dense HetNets with conventional user association techniques, cell-edge users perceive markedly lower quality of service than their cell-center counterparts. Coordinated Multipoint (CoMP) association can augment the service perceived by cell-edge users by allowing a single user to be jointly served by two base stations. In this work, we propose a load balancing scheme for CoMP-enabled HetNets with hybrid energy supplies that jointly optimizes user latency and green energy utilization. The proposed scheme employs a fractional solution to the user association problem to decide CoMP transmission for cell-edge users, ultimately improving their data rates. Performance evaluations of the proposed scheme show a 79% reduction in latency and a 99% reduction in on-grid power consumption compared to conventional user association schemes that associate users based on the maximum received signal strength. Furthermore, the network sum-rate for cell-edge users improves by 24% compared to the traditional association scheme and by as much as 40% over other existing schemes.
Network softwarization has recently been enabled via the software-defined networking (SDN) paradigm, which separates the data plane from the control plane, allowing for flexible and centralized control of networks. This separation facilitates the implementation of machine learning techniques for network management and optimization. In this work, a machine learning-based multipath routing (MLMR) framework is proposed for software-defined networks with quality-of-service (QoS) constraints and flow rules space constraints. The QoS-aware multipath routing problem in SDN is modeled as a multicommodity network flow problem with side constraints, which is known to be NP-hard. The proposed framework utilizes network status estimates and their corresponding routing configurations, available at the network's central controller, to learn a mapping function between them. Once the mapping function is learned, it is applied to live inputs of network status and routing requests to predict multipath routing solutions in real time. Performance evaluations of the MLMR framework on real traces of network traffic verify its accuracy and resilience to noise in the training data. Furthermore, the MLMR framework demonstrates more than 98.99% improvement in computational efficiency.
Joint transmission coordinated multi-point (JT-CoMP) and non-orthogonal multiple access (NOMA) are key enabling technologies of 5G ubiquitous broadband infrastructures. Together, these technologies are expected to exploit multi-cell and non-orthogonal resource transmissions; thus, conventional resource allocation schemes that consider only one of them fail to efficiently exploit the resources of 5G networks. In this paper, we bridge this gap by proposing a practical and comprehensive joint sub-carrier assignment and power allocation scheme for network sum-rate maximization in JT-CoMP-enabled NOMA networks. We formulate the problem as a mixed-integer non-linear programming (MINLP) problem, which is NP-hard. The problem is decoupled into two sub-problems, where the sub-carrier assignment is modeled as a two-sided many-to-many matching game and the power allocation is formulated as a difference-of-convex (DC) programming problem. The matching algorithm is proved to converge to a two-sided exchange-stable matching. Furthermore, the solution computed by the proposed scheme is verified against a baseline solution computed by a commercial optimization package and is shown to achieve 91.38% of the baseline for JT-CoMP-NOMA networks. Simulation results illustrate that the proposed scheme enhances cell-edge users' achievable rates in JT-CoMP-NOMA networks over conventional NOMA.
Utilizing intelligence at the network edge, the edge-computing paradigm has emerged to provide time-sensitive computing services for the Internet of Things (IoT). In this paper, we investigate sustainable computation offloading in an edge-computing system that consists of energy harvesting-enabled mobile devices (MDs) and a dispatcher. The dispatcher collects computation tasks generated by IoT devices with limited computation power and offloads them to resourceful MDs in exchange for rewards. We propose an online Rewards-optimal Auction (RoA) to optimize the long-term sum of rewards for processing offloaded tasks, while adapting to the highly dynamic energy harvesting (EH) process and computation task arrivals. RoA is designed based on Lyapunov optimization and the Vickrey-Clarke-Groves auction, and its operation does not require a priori knowledge of the energy harvesting, task arrival, or wireless channel statistics. Our analytical results confirm the optimality of the task assignment. Furthermore, simulation results validate the analysis and verify the efficacy of the proposed RoA.
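The Vickrey (second-price) mechanism underlying the VCG component can be illustrated for a single offloaded task. This is only a toy sketch, not the paper's RoA (which additionally couples the auction with Lyapunov energy queues), and the bidder names and bid values are hypothetical.

```python
def vickrey_reverse_auction(bids):
    """Single-task reverse Vickrey auction: the lowest-cost bidder wins the
    task and is paid the second-lowest bid, which makes truthful cost
    reporting a dominant strategy for every mobile device."""
    ordered = sorted(bids.items(), key=lambda kv: kv[1])
    winner, _ = ordered[0]
    payment = ordered[1][1]  # second-lowest declared cost
    return winner, payment

# Hypothetical bids (declared processing costs) from three mobile devices.
winner, payment = vickrey_reverse_auction({"md1": 3.0, "md2": 5.0, "md3": 4.0})
```

Here `md1` wins and is paid 4.0, the runner-up's bid, so it has no incentive to misreport its true cost of 3.0.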
Heterogeneous networks (HetNets) have been widely accepted as a promising architecture to fulfill the ever-increasing demand for capacity expansion. However, the energy consumed by the dense underlay of micro base stations required to achieve this expansion exacerbates the energy inefficiency of cellular networks. Hybrid energy sources, i.e., the grid and green energy sources, can be used to meet the HetNets' excessive demand for energy. In such networks, traffic load balancing becomes crucial to balance the trade-off between green energy utilization and quality-of-service (QoS) provisioning. Leveraging software-defined radio access networks (SoftRAN) and considering the inaccuracy of vital network measurements, we develop an autonomous, robust, and resilient load balancing framework. The framework consists of two major modules: an H∞ regulator module, which guides the temporal utilization of green energy and the distribution of network loads among base stations (BSs) to achieve long-term average QoS provisioning; and a user association module, which optimizes user association and the corresponding traffic loads to minimize network traffic latency while respecting the loads proposed by the H∞ regulator. Extensive performance evaluations demonstrate the efficacy of the proposed framework in autonomously balancing the trade-off between green energy consumption and traffic latency. Furthermore, performance evaluations confirm the robustness of the proposed framework to estimation inaccuracy and its resilience to sudden changes in network parameters.
In this study, the problems of joint node selection, flow routing, and cell coverage optimisation in energy-constrained wireless sensor networks (WSNs) are considered. Due to the energy constraints on network nodes, maximising the network sum-rate under target network lifetime, flow routing, cell coverage, and minimum rate constraints is of paramount importance in WSNs. To this end, a mixed-integer non-linear programming problem is formulated, where the aim is to optimally select which network nodes act as sensors or relays while ensuring connectivity to the fusion centre, optimised network flows, and full network coverage. The formulated problem happens to be NP-hard (i.e. computationally prohibitive). In turn, a solution procedure based on branch and bound with the reformulation-linearisation technique (BB-RLT) is devised to provide an ε-optimal solution to the formulated problem. Simulation results are presented to validate the efficacy of the devised BB-RLT solution procedure. This work provides significant theoretical results on network sum-rate maximisation for WSNs under a variety of practical constraints.
In this paper, the problem of joint subcarrier assignment and global energy-efficient power allocation (J-SA-GEE-PA) for energy-harvesting (EH) two-tier downlink non-orthogonal multiple-access (NOMA) heterogeneous networks (HetNets) is considered. Particularly, the HetNet consists of a macro base-station (MBS) and a number of small base-stations (SBSs), which are solely powered via renewable-energy sources. The aim is to solve the J-SA-GEE-PA maximization problem subject to per-user quality-of-service (QoS) as well as other practical constraints. However, the formulated J-SA-GEE-PA problem happens to be non-convex and NP-hard, and thus computationally prohibitive. In turn, the J-SA-GEE-PA problem is split into two sub-problems: (1) subcarrier assignment via many-to-many matching, and (2) GEE-maximizing power allocation. In the first sub-problem, the subcarriers are assigned to users via the Gale-Shapley deferred acceptance mechanism. As for the second sub-problem, the GEE-PA problem is solved optimally via a low-complexity algorithm. A two-stage solution procedure is then devised to efficiently solve the J-SA-GEE-PA problem while ensuring stability. Simulation results validate the proposed solution procedure, which is shown to yield network global energy-efficiency comparable to that of the joint J-SA-GEE-PA scheme and superior to that of OFDMA, at lower computational complexity. The algorithmic designs presented in this work constitute a step towards filling the gap for computationally efficient and effective resource allocation solutions that guarantee a fully autonomous and grid-independent operation of EH two-tier downlink NOMA HetNets.
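The Gale-Shapley deferred-acceptance mechanism named above can be sketched in its textbook one-to-one form. The paper's subcarrier assignment uses a many-to-many variant; the user/subcarrier preference lists below are purely illustrative.

```python
def deferred_acceptance(prop_prefs, resp_prefs):
    """One-to-one Gale-Shapley deferred acceptance: proposers (e.g., users)
    propose down their preference lists; each responder (e.g., a subcarrier)
    tentatively holds the best proposal seen so far.
    Returns a stable matching as {responder: proposer}."""
    # Precompute each responder's ranking of proposers (lower index = preferred).
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in resp_prefs.items()}
    next_idx = {p: 0 for p in prop_prefs}    # next preference to propose to
    free = list(prop_prefs)                  # currently unmatched proposers
    match = {}                               # responder -> proposer
    while free:
        p = free.pop()
        r = prop_prefs[p][next_idx[p]]
        next_idx[p] += 1
        if r not in match:
            match[r] = p                     # responder was free: accept
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])            # responder trades up
            match[r] = p
        else:
            free.append(p)                   # rejected: propose again later
    return match
```

With illustrative preferences `{"u1": ["s1", "s2"], "u2": ["s1", "s2"]}` for users and `{"s1": ["u2", "u1"], "s2": ["u1", "u2"]}` for subcarriers, both users first propose to `s1`, which keeps its preferred user `u2`, and `u1` settles for `s2`; no user-subcarrier pair would rather deviate, which is the stability property the abstract refers to.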
Traffic offloading through heterogeneous small-cell networks (HSCNs) has been envisioned as a cost-efficient approach to accommodate the tremendous traffic growth in cellular networks. In this paper, we investigate energy-efficient dual-connectivity (DC) enabled traffic offloading through HSCNs, in which small cells are powered in a hybrid manner by both the conventional on-grid power supply and renewable energy harvested from the environment. To achieve flexible traffic offloading, the emerging DC capability in the 3GPP specifications allows each mobile user (MU) to simultaneously communicate with a macro cell and offload data through a small cell. Despite saving on-grid power, powering traffic offloading by energy harvesting (EH) might degrade quality of service (QoS), e.g., when the EH power supply fails to support the required offloading rate. Thus, to reap the benefits of the DC capability and the EH power supply, we propose a joint optimization of traffic scheduling and power allocation that minimizes the total on-grid power consumption of the macro and small cells while guaranteeing each served MU's traffic requirement. We start by studying the representative case of one small cell serving a group of MUs. Despite the non-convexity of the formulated joint optimization problem, we exploit its layered structure and propose an algorithm that efficiently computes the optimal offloading solution. We further study the scenario of multiple small cells and investigate how the small cells select different MUs to maximize the system-wide reward, which accounts for the revenue from offloading the MUs' traffic and the cost of the total on-grid power consumption of all cells. We also propose an efficient algorithm to find the optimal MU-selection solution. Numerical results validate our proposed algorithms and show the advantage of the proposed DC-enabled traffic offloading through EH-powered small cells.
The rapidly growing energy consumption of the Internet core network has become a major concern. In this respect, we proposed a distributed and load-adaptive energy saving router (ESR) mechanism to manage the energy consumption of green routers in our previous work. In this paper, we propose an analytical model to investigate the performance of ESR. The proposed model captures the distribution of the packet service time, the buffer size, and the packet loss probability. Our numerical results show that the ESR can save more than 40% of energy under low traffic load and more than 9% under high traffic load. In addition to evaluating the ESR performance in terms of the energy saving ratio, rerouting probability, and average delay, the model provides manufacturers and operators with guidelines for the deployment of the green Internet.
Software defined networking (SDN) is a promising networking paradigm for achieving programmability and centralized control in communication networks. These features simplify network management and enable innovation in network applications and services such as routing, virtual machine migration, load balancing, security, access control, and traffic engineering. The routing application can be optimized for power efficiency by routing flows and coalescing them such that the fewest links are activated at the lowest link rates. In practice, however, flow coalescing can overflow the flow tables, which are implemented in size-limited and power-hungry ternary content addressable memory (TCAM). In this paper, a set of practical constraints is imposed on the SDN routing problem, namely size-limited flow tables and discrete link rates, to ensure applicability in real networks. Because the problem is NP-hard and difficult to approximate, a low-complexity particle swarm optimization-based, power-efficient routing (PSOPR) heuristic is proposed. Performance evaluation results reveal that PSOPR achieves more than 90% of the optimal network power consumption while requiring only 0.0045% to 0.9% of the optimal computation time on real network topologies. In addition, PSOPR generates shorter routes than the optimal routes generated by CPLEX.
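A minimal particle swarm optimizer conveys the idea behind a PSO-based heuristic such as PSOPR. This sketch minimizes a generic objective; the paper's routing-specific encoding, constraints, and tuned parameters are not reproduced, and the inertia/acceleration weights below are common textbook choices.

```python
import random

def pso(objective, dim, n_particles=20, iters=100, bounds=(-10.0, 10.0), seed=1):
    """Minimal particle swarm optimization: each particle keeps a velocity and
    is attracted toward its personal best and the swarm's global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best so far
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In a routing heuristic, the continuous position would be decoded into a path/rate assignment and the objective would score network power plus constraint penalties; here a simple sphere function suffices to show the convergence behavior.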
Existing Ethernet networks are designed with high redundancy and over-dimensioning so they can provide reliable services during peak traffic demand. However, this has increased the total energy consumption and operational cost. In this paper, we propose an energy saving algorithm (ESA) to reduce the energy consumption of green routers by considering the buffer status and the traffic load. We develop a Network Simulator 2 (NS-2)-based simulation model for ESA to evaluate its performance on real traffic traces. Performance bounds of the proposed algorithm are derived, and numerical evaluations verify the accuracy of the simulation model against these bounds. Performance evaluations demonstrate that the proposed algorithm outperforms candidate algorithms, providing greater energy savings with acceptable packet delay and loss. We show that the introduced delay is bounded from above by a value slightly larger than half of the sleep timer. Furthermore, extensive and detailed performance comparisons provide insights into the different energy saving functions considered by the candidate algorithms.
The incorporation of Cognitive Radio (CR) and Energy Harvesting (EH) capabilities in wireless sensor networks enables spectrum- and energy-efficient heterogeneous cognitive radio sensor networks (HCRSNs). This new networking paradigm consists of EH-enabled spectrum sensors and battery-powered data sensors. Spectrum sensors cooperatively scan the licensed spectrum for available channels, while data sensors monitor an area of interest and transmit the sensed data to the sink over those channels. In this work, we propose a resource allocation solution for the HCRSN that achieves the sustainability of the spectrum sensors and conserves the energy of the data sensors. The proposed solution comprises two algorithms that operate in tandem: a spectrum sensor scheduling algorithm and a data sensor resource allocation algorithm. The spectrum sensor scheduling algorithm allocates channels to spectrum sensors such that the average detected available time of the channels is maximized, while accounting for the EH dynamics and protecting primary user (PU) transmissions. The data sensor resource allocation algorithm allocates transmission time, power, and channels such that the energy consumption of the data sensors is minimized. Extensive simulation results demonstrate that the energy consumption of the data sensors can be significantly reduced while maintaining the sustainability of the spectrum sensors.
The smart electricity grid introduces new opportunities for fine-grained consumption monitoring. Such functionality, however, requires the constant collection of electricity data that can be used to undermine consumer privacy. In this work, we address this problem by proposing two decentralized protocols to securely aggregate the measurements of n smart meters. The first protocol is very lightweight: it uses only symmetric cryptographic primitives and provides security against honest-but-curious adversaries. The second protocol is public-key based and considers the malicious adversarial model, in which malicious entities not only try to learn the private measurements of smart meters but also disrupt protocol execution. Neither protocol relies on centralized entities or trusted third parties. Additionally, we show that both are highly scalable, owing to the fact that every smart meter has to interact with only a few others, thus requiring only O(1) work and memory overhead. Finally, we implement a prototype based on our proposals and evaluate its performance in realistic deployment settings.
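The core idea of the symmetric-key protocol, aggregation under pairwise masks that cancel in the sum, can be sketched as follows. This is a simplification assuming honest-but-curious parties and pre-shared pairwise keys; the hash-based mask derivation, meter identifiers, and modulus are illustrative stand-ins, not the paper's exact construction.

```python
import hashlib

def mask(shared_key, modulus):
    """Derive a pseudorandom mask from a pairwise shared key
    (SHA-256 used here as a stand-in for a proper PRF)."""
    digest = hashlib.sha256(shared_key.encode()).digest()
    return int.from_bytes(digest, "big") % modulus

def masked_reading(meter_id, reading, pairwise_keys, modulus):
    """Blind one meter's reading: for each peer, add the pairwise mask on one
    side of the pair and subtract it on the other, so all masks cancel when
    the aggregator sums every meter's blinded value."""
    value = reading % modulus
    for other_id, key in pairwise_keys.items():
        m = mask(key, modulus)
        value = (value + m) % modulus if meter_id < other_id else (value - m) % modulus
    return value
```

Each individual blinded value looks uniformly random to the aggregator, yet the modular sum over all meters equals the true total consumption, which is exactly the property the decentralized aggregation relies on.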
The energy efficiency of wired networks has received considerable attention over the past decade due to its economic and environmental impacts. However, because of the vertical integration of the control and data planes in conventional networks, optimizing energy consumption in such networks is challenging. Software-defined networking (SDN) is an emerging networking paradigm that decouples the control plane from the data plane and introduces network programmability for the development of network applications. In this work, we propose an energy-aware integral flow-routing solution to improve the energy efficiency of the SDN routing application. We consider the discreteness of link rates and pose the routing problem as a Mixed Integer Linear Programming (MILP) problem, which is known to be NP-complete. The proposed solution is a heuristic implementation of the Benders decomposition method that routes additional single and multiple flows without re-solving the routing problem. Performance evaluations demonstrate that the proposed solution achieves close-to-optimal performance (within 3.27% error) compared to CPLEX on various topologies, with less than 0.056% of CPLEX's average computation time. Furthermore, our solution outperforms the shortest path algorithm by 24.12% to 54.35% in power savings.
Due to the limited battery power of sensor nodes and harsh deployment environments, achieving high energy efficiency and strong robustness in large-scale wireless sensor networks (LS-WSNs) is of fundamental importance and a great challenge. To this end, we propose two self-organizing schemes for LS-WSNs. The first is the energy-aware common neighbor (ECN) scheme, which considers neighborhood overlap in link establishment. The second is the energy-aware low potential-degree common neighbor (ELDCN) scheme, which takes both the neighborhood overlap in topology formation and the potential degrees of common neighbors into consideration. Both schemes generate clustering-based, scale-free-inspired LS-WSNs that are energy-efficient and robust. However, the ELDCN scheme shows higher energy efficiency and stronger robustness to node failures because it avoids establishing links to hub nodes with high potential connectivity. Analytical and simulation results demonstrate that our proposed schemes outperform existing scale-free evolution models in terms of energy efficiency and robustness.
In this paper, we study resource management and allocation for energy-harvesting cognitive radio sensor networks (EHCRSNs). In these networks, energy harvesting supplies a continual source of energy to facilitate the self-sustainability of the power-limited sensors, while cognitive radio enables access to the underutilized licensed spectrum to mitigate the spectrum-scarcity problem in the unlicensed band. We develop an aggregate network utility optimization framework for the design of an online energy management, spectrum management, and resource allocation algorithm based on Lyapunov optimization. The framework captures three stochastic processes: energy harvesting dynamics, inaccuracy of channel occupancy information, and channel fading; however, a priori knowledge of the statistics of any of these processes is not required. Based on the framework, we propose an online algorithm to achieve two major goals: first, balancing the sensors' energy consumption and energy harvesting while stabilizing their data and energy queues; second, optimizing the utilization of the licensed spectrum while maintaining a tolerable collision rate between the licensed subscriber and the unlicensed sensors. Performance analysis shows that the proposed algorithm achieves a close-to-optimal aggregate network utility while guaranteeing bounded data- and energy-queue occupancy. Extensive simulations verify the effectiveness of the proposed algorithm and the impact of various network parameters on its performance.
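The drift-plus-penalty rule at the heart of Lyapunov optimization can be sketched in stripped-down form. The queue dynamics are the standard ones; the action set, its cost/service fields, and the trade-off parameter V are illustrative and omit the paper's energy and collision constraints.

```python
def drift_plus_penalty_action(queue, actions, V):
    """Pick the action minimizing V*cost - queue*service, the per-slot
    drift-plus-penalty criterion: a large backlog tilts the choice toward
    actions that serve more data, while V weights the penalty (cost)."""
    return min(actions, key=lambda a: V * a["cost"] - queue * a["service"])

def queue_update(queue, arrival, service):
    """Standard queue dynamics: Q(t+1) = max(Q(t) - service, 0) + arrival."""
    return max(queue - service, 0.0) + arrival
```

With an empty queue the cheap low-service action wins; once the backlog grows, the same rule switches to the costlier high-service action, which is the mechanism that stabilizes the data and energy queues while keeping the long-run penalty near optimal.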
Although academic dishonesty has a long history in academia, its pervasiveness has recently reached an alarming level. Academic dishonesty not only undermines the purpose of education and the assessment process but also threatens the credibility of academic records. We propose a framework for analysing students' behaviour with respect to academic policies and honour codes. We draw an analogy between law enforcement and academic integrity enforcement and highlight their similarities and differences. The proposed framework captures the major determinants of academic dishonesty reported in the literature, namely detection probability, punishment severity, class average, and record of academic deviance. The framework models both students' development of non-academic skills to improve their grades and teaching assistants' development of detection skills, both of which affect the detection probability. Our analysis demonstrates that the optimality of escalating penalties is conditional on the offenders' and the academic policy enforcers' learning. Use-case scenarios are presented to facilitate the implementation of our results in classrooms.
The above paper (Awad, M. K. and Wong, K. T., "Recursive Least-Squares Source Tracking Using One Acoustic Vector Sensor," IEEE Transactions on Aerospace and Electronic Systems, 48, 4 (Oct. 2012), 3073-3083) was published with incomplete figures. The correct figures appear below.
An acoustic vector-sensor (a.k.a. vector-hydrophone) is composed of three acoustic velocity-sensors plus a pressure-sensor, all collocated in space. The velocity-sensors are identical but orthogonally oriented, each measuring a different Cartesian component of the three-dimensional particle-velocity field. This acoustic vector-sensor offers an azimuth-elevation response that is invariant with respect to the source's centre frequency or bandwidth. It is adopted here for recursive least-squares (RLS) adaptation to track a single mobile source in the absence of any multipath fading and any directional interference. A formula is derived to preset the RLS forgetting factor based on prior knowledge of only the incident signal power, the incident source's spatial random-walk variance, and the additive noise power. The work presented here further advances a multiple-forgetting-factor (MFF) version of the RLS adaptive tracking algorithm, which requires no prior knowledge of the aforementioned source or noise statistics. Monte Carlo simulations demonstrate the tracking performance and computational load of the proposed algorithms.
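A single RLS update with a forgetting factor, the building block that the preset-forgetting-factor formula tunes, can be sketched as follows. This is the generic system-identification form, not the vector-sensor-specific tracker, and the value of `lam` is illustrative.

```python
import numpy as np

def rls_step(theta, P, x, d, lam=0.98):
    """One recursive least-squares update with forgetting factor lam in (0, 1]:
    smaller lam discounts old data faster, which helps track a moving source
    at the cost of noisier estimates.
    theta: current parameter estimate (n, 1); P: inverse-correlation matrix (n, n);
    x: regressor (n,); d: desired response (scalar)."""
    x = x.reshape(-1, 1)
    k = P @ x / (lam + x.T @ P @ x)      # gain vector
    e = d - (x.T @ theta).item()         # a priori estimation error
    theta = theta + k * e                # parameter update
    P = (P - k @ x.T @ P) / lam          # inverse-correlation update
    return theta, P
```

Fed noiseless data from a fixed two-tap system, repeated updates drive `theta` to the true parameters; in the tracking setting of the paper, the forgetting factor trades this convergence accuracy against agility to the source's spatial random walk.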
Fair weights have been used to maintain fairness in recent resource allocation schemes. However, designing fair weights for multiservice wireless networks is not trivial because users' rate requirements are heterogeneous and their channel gains are variable. In this paper, we design fair weights for the opportunistic scheduling of heterogeneous traffic in orthogonal frequency division multiple access (OFDMA) networks. The fair weights determine each user's share of the rate for maintaining a utility notion of fairness. We then present a scheduling scheme that enforces users' long-term average transmission rates to be proportional to the fair weights. The proposed scheduler takes advantage of users' channel state information and the inherent flexibility of OFDMA resource allocation for efficient resource utilization. Furthermore, the fair weights allow the realization of different scheduling schemes that accommodate a variety of requirements in terms of heterogeneous traffic types and user mobility. Simulation-based performance analysis demonstrates the efficacy of the proposed solution.
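The scheduling principle, weighted proportional fairness driven by the ratio of instantaneous to long-term average rate, can be sketched per slot as follows. This is a simplification of the OFDMA per-subcarrier allocation; the user names, rates, and smoothing constant are illustrative.

```python
def pf_schedule(inst_rates, avg_rates, weights=None):
    """Weighted proportional-fair pick: serve the user maximizing
    w_u * r_u(t) / R_u(t), the instantaneous rate normalized by that user's
    long-term average. The weights w_u play the role of fair weights,
    steering each user's long-term rate share."""
    if weights is None:
        weights = {u: 1.0 for u in inst_rates}
    return max(inst_rates, key=lambda u: weights[u] * inst_rates[u] / avg_rates[u])

def update_avg(avg, served_user, inst_rates, alpha=0.01):
    """Exponentially smoothed average rates: only the served user accrues
    throughput this slot; everyone else's average decays."""
    return {u: (1 - alpha) * avg[u] + alpha * (inst_rates[u] if u == served_user else 0.0)
            for u in avg}
```

Note how a user with a modest instantaneous rate but a starved average wins the slot, which is what drives long-term rates toward proportionality with the fair weights.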
This paper presents a novel scheme for the allocation of subcarriers, rates, and power in orthogonal frequency-division multiple-access (OFDMA) networks. The scheme addresses two practical implementation issues of resource allocation in OFDMA networks: the inaccuracy of the channel-state information (CSI) available to the resource allocation unit (RAU) and the diversity of subscribers' quality-of-service (QoS) requirements. In addition to embedding the effect of CSI imperfection in the evaluation of the subscribers' expected rates, the resource-allocation problem is posed as a network utility maximization (NUM) problem that is solved by decomposing it into a hierarchy of subproblems. These subproblems coordinate their allocations to achieve a final allocation that satisfies the aggregate rate constraints imposed by the call-admission control (CAC) unit as well as OFDMA-related constraints. A complexity analysis shows that the proposed scheme is computationally efficient. In addition, performance evaluation findings support our theoretical claims: a substantial data-rate gain can be achieved by considering the CSI imperfection, and multiple service classes can be supported with QoS guarantees.
This paper presents a novel approach to investigating the ergodic mutual information of OFDMA selection-decode-and-forward (SDF) cooperative relay networks with imperfect channel state information (CSI). Relay stations are either dedicated or non-dedicated (i.e., subscriber stations assisting other subscriber stations). The CSI imperfection is modeled as an additive random variable with known statistics. Numerical evaluations and simulations demonstrate that accounting for the CSI imperfection, based on a priori knowledge of the estimation-error statistics, yields a substantial gain in ergodic mutual information, bringing channel-adaptive schemes closer to practical implementation.
A comprehensive and integrative overview (excluding ultrawideband measurements) is given of all the empirical data available in the open literature on various temporal properties of the indoor radiowave communication channel. The frequency range concerned spans 0.8-8 GHz. These data were originally presented in about 70 papers in various journals, at diverse conferences, and in different books. Overviewed herein are the multipaths' amplitude versus arrival delay, the probability of multipath arrival versus arrival delay, the multipath amplitude's temporal correlation, the power delay profile and the associated time-dispersion parameters (e.g., the RMS delay spread and the mean delay), the coherence bandwidth, and empirically "tuned" tapped-delay-line models. Supported by the present authors' new analysis, this paper discusses how these channel-fading metrics depend on the indoor radiowave propagation channel's various properties (e.g., the physical environment, the floor layout, the construction materials, and the furnishings' locations and electromagnetic properties) as well as the transmitted signal's carrier frequency, the transmitting antenna's location, the receiving antenna's location, and the receiver's detection amplitude threshold.