Communication networks, whether wired or wireless, have traditionally been assumed to be connected at least most of the time. However, emerging applications such as emergency response, special operations, smart environments, and VANETs, coupled with node heterogeneity and volatile links (e.g., due to wireless propagation phenomena and node mobility), will likely change the typical conditions under which networks operate. In such scenarios, networks may be mostly disconnected, i.e., most of the time, end-to-end paths connecting every node pair do not exist. To cope with frequent, long-lived disconnections, opportunistic routing techniques have been proposed in which, at every hop, a node decides whether to forward or to store-and-carry a message. Despite a growing number of such proposals, there is still little consensus on the most suitable routing algorithm(s) in this context. One reason is the large diversity of emerging wireless applications and networks exhibiting such "episodic" connectivity. These networks often have very different characteristics and requirements, making it very difficult, if not impossible, to design a routing solution that fits all. In this paper, we first break existing routing strategies up into a small number of common and tunable routing modules (e.g., message replication, coding, etc.), and then show how and when a given routing module should be used, depending on the set of network characteristics exhibited by the wireless application. We further attempt to create a taxonomy for intermittently connected networks: we identify generic network characteristics that are relevant to the routing process (e.g., network density, node heterogeneity, mobility patterns) and dissect different "challenged" wireless networks and applications based on these characteristics.
Our goal is to identify a set of useful design guidelines that enable one to choose an appropriate routing protocol for the application or network at hand. Finally, to demonstrate the utility of our approach, we take up several case studies of challenged wireless networks and validate some of our routing design principles through simulation.
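The replication module mentioned above can be illustrated with binary Spray-and-Wait, a well-known opportunistic routing scheme in which a node holding several copies of a message hands half of them to each relay it meets, and a node down to its last copy only delivers directly to the destination. The following is a minimal sketch of the per-encounter decision; the function and return-value names are illustrative, not taken from the paper.

```python
def spray_and_wait_step(copies, met_destination):
    """One encounter step of binary Spray-and-Wait (a replication module).

    copies: number of message copies this node currently holds.
    met_destination: True if the encountered node is the destination.
    Returns (action, copies_kept, copies_given).
    """
    if met_destination:
        # Direct delivery: hand everything to the destination.
        return ("deliver", 0, copies)
    if copies > 1:
        # Spray phase: give half of the remaining copies to the relay.
        give = copies // 2
        return ("replicate", copies - give, give)
    # Wait phase: store-and-carry the last copy until the destination appears.
    return ("carry", copies, 0)
```

Tuning the initial copy budget trades delivery delay against network overhead, which is exactly the kind of knob the modular view of routing exposes.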
Despite significant infrastructure improvements, cloud computing still faces numerous challenges in terms of load balancing. Several techniques have been applied in the literature to improve load balancing efficiency. Recent research has shown that load balancing techniques based on metaheuristics provide better solutions for proper scheduling and allocation of resources in the cloud. However, most of the existing approaches consider only a single or a few QoS metrics and ignore many important factors. The performance of these approaches can be further enhanced by merging them with machine learning techniques, combining the relative benefits of a load balancing algorithm backed by powerful machine learning models such as Support Vector Machines (SVMs). In the cloud, data exists in huge volume and variety and requires extensive computation for its accessibility, so performance efficiency is a major concern. To address such concerns, we propose a load balancing algorithm, namely Data Files Type Formatting (DFTF), that utilizes a modified version of Cat Swarm Optimization (CSO) along with SVM. First, the proposed system classifies data in the cloud from diverse sources into various types, such as text, images, video, and audio, using one-versus-many SVM classifiers. Then, the data is input to the modified CSO load balancing algorithm, which efficiently distributes the load on VMs. Simulation results showed improved performance over existing approaches in terms of throughput (7%), response time (8.2%), migration time (13%), energy consumption (8.5%), optimization time (9.7%), overhead time (6.2%), SLA violation (8.9%), and average execution time (9%). These results outperformed some of the existing baselines used in this research, such as CBSMKC, FSALB, PSO-BOOST, IACSO-SVM, CSO-DA, and GA-ACO.
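The two-stage pipeline described above (classify the data type, then place the load on VMs) can be sketched as follows. This is only an illustration under loud assumptions: the extension-based classifier stands in for the paper's SVM stage, and a greedy least-loaded placement stands in for the CSO search; all names and the per-type cost weights are hypothetical.

```python
def classify_file(name):
    """Stand-in for the SVM stage: map a file to a coarse content type.
    (A real DFTF system would classify content features with SVMs; here
    we key on the file extension purely for illustration.)"""
    ext = name.rsplit(".", 1)[-1].lower()
    types = {"txt": "text", "doc": "text", "jpg": "image", "png": "image",
             "mp4": "video", "mp3": "audio"}
    return types.get(ext, "text")

def assign_to_vms(tasks, vm_loads, cost):
    """Greedy least-loaded placement as a simplified stand-in for CSO:
    each task goes to the VM whose load after placement is smallest.

    tasks: list of (filename, size); cost: per-type weight (videos cost
    more to serve than text, etc.). Returns (placement, updated loads)."""
    placement = {}
    for name, size in tasks:
        kind = classify_file(name)
        weight = size * cost[kind]
        # Pick the VM that would be least loaded after taking this task.
        vm = min(range(len(vm_loads)), key=lambda i: vm_loads[i] + weight)
        vm_loads[vm] += weight
        placement[name] = vm
    return placement, vm_loads
```

A metaheuristic such as CSO improves on this greedy baseline by searching over whole placements at once rather than committing task by task.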
Smart city planning envisages an advanced, technology-based, independent, and autonomous environment enabled by optimal utilisation of resources to meet the short- and long-term needs of its citizens. Improving energy consumption in multi-tier 5G Heterogeneous Networks (HetNets) is therefore a preeminent area of research. This article focuses on energy consumption coupled with CO2 emissions in cellular networks in the context of smart cities. We use a Reinforcement Learning (RL) vertical traffic offloading algorithm to optimize energy consumption in Base Stations (BSs) and to reduce the carbon footprint by applying the widely accepted strategy of cell switching and traffic offloading. The algorithm relies on traffic load information from a macro cell and multiple small cells to determine the most energy-efficient offloading strategy while maintaining quality-of-service demands and satisfying users' applications. Spatio-temporal simulations are performed to determine cell switch on/off operations and the offloading strategy under varying traffic conditions in a control/data separated architecture. Simulation results show that the proposed scheme achieves a reasonable percentage reduction in energy consumption and CO2 emissions.
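The RL component of such a cell-switching controller often reduces to a tabular Q-learning update over discretised traffic-load states, with actions like "keep the small cell on" versus "switch it off and offload to the macro cell". A minimal sketch of one update step follows; the state labels, action encoding, and reward shaping are illustrative assumptions, not the paper's exact formulation.

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update for a cell on/off controller.

    q: dict mapping (state, action) -> value, actions 0 = keep on,
    1 = switch off and offload. Returns the updated Q-value."""
    best_next = max(q[(next_state, a)] for a in (0, 1))
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    return q[(state, action)]

def offload_reward(energy_saved, qos_violation):
    """Illustrative reward: energy saved by switching off, penalised
    heavily if the offloaded traffic violates QoS at the macro cell."""
    return energy_saved - 10.0 * qos_violation
```

Over many spatio-temporal episodes the table converges toward switching small cells off at low load and keeping them on when the macro cell could not absorb the traffic without a QoS penalty.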
Content-Centric Networking (CCN) is a novel architecture that is shifting host-centric communication to a content-centric infrastructure. In recent years, in-network caching in CCNs has received significant attention from the research community. To improve the cache hit ratio, most existing schemes store content at the maximum number of routers along its downloading path from the source. While this increases cache hits and reduces delay and server load, the unnecessary caching significantly increases network cost, bandwidth utilization, and storage consumption. To address these limitations, we propose an optimization-based in-network caching policy, named opt-Cache, which makes more efficient use of available cache resources in order to reduce overall network utilization with reduced latency. Unlike existing schemes that mostly focus on a single factor to improve cache performance, we optimize the caching process by simultaneously considering various factors, e.g., content popularity, bandwidth, and latency, under a given set of constraints, e.g., available cache space, content availability, and careful eviction of existing contents in the cache. Our scheme determines an optimized set of contents to be cached at each node towards the edge, based on content popularity and the content's distance from its source. Contents that are requested less frequently have their popularity decayed over time. The optimal placement of contents across the CCN routers allows an overall reduction in bandwidth and latency. Compared with existing schemes, the proposed scheme shows better performance in terms of bandwidth consumption and latency while using fewer network resources.
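The ingredients named above (popularity decay over time, a utility combining popularity with distance from the source, and selection under a cache-space constraint) can be sketched as below. This is a toy stand-in under stated assumptions: the exponential half-life, the popularity-times-hops score, and the greedy knapsack selection are illustrative choices, not the optimisation actually formulated in opt-Cache.

```python
def decayed_popularity(hits, elapsed, half_life=3600.0):
    """Exponentially decay request counts so rarely re-requested content
    loses popularity over time (illustrative half-life of one hour)."""
    return hits * 0.5 ** (elapsed / half_life)

def cache_score(popularity, hops_from_source):
    """Toy utility: popular content that is far from its source saves the
    most bandwidth when cached near the edge."""
    return popularity * hops_from_source

def select_contents(candidates, capacity):
    """Pick the highest-scoring contents that fit in the cache.

    candidates: list of (name, size, score). Greedy by score as a simple
    stand-in for the constrained optimisation."""
    chosen, used = [], 0
    for name, size, score in sorted(candidates, key=lambda c: -c[2]):
        if used + size <= capacity:
            chosen.append(name)
            used += size
    return chosen
```

The same score can drive eviction: when space runs out, the cached item with the lowest decayed score is the natural victim.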
This paper presents a run-time detection mechanism for access-driven cache-based Side-Channel Attacks (CSCAs) on Intel's x86 architecture. We demonstrate the detection capability and effectiveness of the proposed mechanism on Prime+Probe attacks. The mechanism comprises multiple machine learning models, which use real-time data from hardware performance counters (HPCs) for detection. Experiments are performed with two different implementations of the AES cryptosystem while under Prime+Probe attack. We provide results under stringent design constraints such as realistic system load conditions, real-time detection accuracy, speed, system-wide performance overhead, and the distribution of error (i.e., false positives and negatives) for the machine learning models used. Our results show a detection accuracy of >99% for the Prime+Probe attack, with a performance overhead of 3−4% at the highest detection speed, i.e., within 1−2% of the completion of the 4800 AES encryption rounds needed for a successful attack.
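The detection pipeline described above can be reduced to two steps: summarise a sliding window of HPC samples into features, then run a trained model over them. The sketch below uses a simple threshold rule as a stand-in for the paper's machine learning models, and the counter semantics (cache-miss samples per interval) are an assumption for illustration, not Intel's exact event names.

```python
def hpc_features(window):
    """Summarise a window of per-interval HPC samples (e.g. cache-miss
    counts) into the (mean, variance) features a detector would consume."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return mean, var

def detect(window, mean_thresh, var_thresh):
    """Toy stand-in for the trained ML model: flag a window whose miss
    statistics exceed thresholds learned offline. Prime+Probe inflates
    both the mean and the variance of the victim's cache misses, since
    the attacker repeatedly evicts the victim's working set."""
    mean, var = hpc_features(window)
    return mean > mean_thresh or var > var_thresh
```

Running such a check once every few sampling intervals is what keeps the overhead low while still catching the attack well before the full set of encryption rounds completes.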
1. Introduction

Image binarization, or thresholding, is an important tool in image processing and computer vision for extracting the object pixels in an image from the background pixels. Image binarization is central to many applications, including document image analysis (where printed characters, logos, graphical content, and musical scores are the objects of interest), map processing (where lines, legends, and characters need to be extracted), scene processing, quality inspection of materials, cell imaging, and the segmentation of various image modalities for nondestructive testing (NDT) applications (ultrasonic images, eddy current images, thermal images, X-ray computed tomography, laser scanning confocal microscopy, extraction of edge fields, and spatio-temporal segmentation of video images). A number of methods have already been proposed for image binarization but, unfortunately, most of them are specific to a few applications. A binarization (thresholding) method may therefore work well for one application while performing unsatisfactorily for another. A bi-level image is used as a pre-processing unit in several applications, since binary images decrease the computational load of the overall application; such applications include document analysis, optical character recognition (OCR) systems, scene matching, and quality inspection of materials. The binarization process computes a threshold value that differentiates object and background pixels. Under varying illumination and noise, binarization can become a challenging job. A number of factors complicate thresholding, including ambient illumination, variance of gray levels within the object and the background, inadequate contrast, and object shape and size non-commensurate with the scene.
A wrong choice of threshold value may misinterpret a background pixel and classify it as object, and vice versa, resulting in overall degradation of system performance. The determination of a threshold is itself application dependent, since a threshold that works for one application may not work for another. In document analysis, binarization is sensitive to noise, surrounding illumination, gray-level distribution, local shading effects, inadequate contrast, the presence of dense non-text components such as photographs, etc., while at the same time merges, fractures, and other deformations in character shapes affect the threshold value in OCR systems. On the other hand, simplification is needed to benefit the overall system's processing characteristics (computational load, algorithm complexity, real-time requirements, etc.). All these challenges and problems make binarization a difficult task. There are a number of important performance requirements that need to be considered when binarizing gray-level images. With reference to the proposed algorithm, the following sections briefly describe all the related concepts along with related work done in the past. The algorithm…
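As a concrete reference point for the global thresholding discussed above, Otsu's classic method picks the threshold that maximises the between-class variance of the gray-level histogram. The sketch below is a baseline implementation of that standard technique, shown for context only; it is not the algorithm proposed in this paper.

```python
def otsu_threshold(hist):
    """Otsu's global threshold on a grayscale histogram.

    hist: list where hist[i] is the number of pixels with gray level i.
    Returns the level t maximising between-class variance; pixels with
    level <= t go to one class (e.g. background), the rest to the other."""
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0          # background weight and gray-level sum so far
    best_t, best_var = 0, -1.0
    for t, h in enumerate(hist):
        w_b += h
        if w_b == 0:
            continue          # no background pixels yet
        w_f = total - w_b
        if w_f == 0:
            break             # no foreground pixels left
        sum_b += t * h
        m_b = sum_b / w_b                     # background mean
        m_f = (total_sum - sum_b) / w_f       # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a cleanly bimodal histogram this lands between the two modes; the failure cases motivating more specialised binarization methods are precisely the non-bimodal histograms produced by shading, noise, and low contrast.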