The Internet of Things (IoT) is an emerging paradigm characterized by heterogeneous technologies and smart, ubiquitous objects that are seamlessly connected to the Internet. These objects are often deployed in open environments to provide innovative services in various application domains such as smart cities, smart health, and smart communities. IoT devices produce massive amounts of confidential and security-sensitive data; securing these devices is therefore essential to the safety and effectiveness of the overall system. In this paper, a decentralized authentication and access control mechanism is proposed for lightweight IoT devices that is applicable to a large number of scenarios. The mechanism is based on fog computing and the concept of a public blockchain. The results gained from the experiments demonstrate a superior performance of the proposed mechanism when compared to a state-of-the-art blockchain-based authentication technique.
In recent years, the graph partitioning problem has gained importance as a mandatory preprocessing step for distributed graph processing on very large graphs. Existing graph partitioning algorithms minimize partitioning latency by assigning individual graph edges to partitions in a streaming manner, at the cost of reduced partitioning quality. However, we argue that the mere minimization of partitioning latency is not the optimal design choice in terms of minimizing total graph analysis latency, i.e., the sum of partitioning and processing latency. Instead, for complex and long-running graph processing algorithms that run on very large graphs, it is beneficial to invest more time into graph partitioning to reach a higher partitioning quality, which drastically reduces graph processing latency. In this paper, we propose ADWISE, a novel window-based streaming partitioning algorithm that increases the partitioning quality by always choosing the best edge from a set of edges for assignment to a partition. In doing so, ADWISE controls the partitioning latency by adapting the window size dynamically at run-time. Our evaluations show that ADWISE can reach the sweet spot between graph partitioning latency and graph processing latency, reducing the total latency of partitioning plus processing by 23 to 47 percent compared to the state of the art. [Fig. 1: Research gap: adaptive window-based streaming vertex-cut partitioning (single-edge vs. all-edge algorithms).] Vertex-cut partitioning is considered in this paper due to its superior partitioning properties on real-world graphs compared to edge-cut partitioning [4]. In vertex-cut partitioning, each vertex can reside on multiple partitions, i.e., can be replicated across the corresponding worker machines. However, a replicated vertex causes synchronization and communication overhead between the worker machines, inducing higher graph processing latency [2], [6], [7].
Hence, graph processing latency strongly correlates with partitioning quality, defined via the replication degree of vertices on the different worker machines. The problem of partitioning a graph optimally, i.e., with minimal vertex replication, is intractable for large graphs due to its NP-hardness [8]. In the literature, there are two basic approaches to practically address the partitioning problem: (i) single-edge streaming algorithms perform partitioning decisions on one edge at a time, minimizing the partitioning latency, or (ii) all-edge algorithms load the complete graph into memory and employ global placement heuristics to optimize the partitioning quality. Existing algorithms follow one of these two methods; Figure 1 illustrates the landscape of state-of-the-art vertex-cut partitioning algorithms. Modern graph processing systems use streaming partitioning when loading massive graphs due to its superior scalability and minimal runtime complexity [4], [9]. In this paper, we investigate whether it is always optimal to invest minimal partitioning latency as done by the established streaming partitioning algorithms. Clearly, there is a tradeoff between partitioning ...
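The window-based selection idea behind ADWISE can be illustrated with a minimal sketch. This is a toy example, not the actual ADWISE implementation: the scoring function (replica locality plus a load-balance penalty) and the fixed window size are assumptions made for illustration; ADWISE itself adapts the window size at run-time.

```python
from collections import defaultdict

def window_partition(edges, num_partitions, window_size):
    """Toy window-based streaming vertex-cut partitioner: buffer up to
    window_size edges and always assign the best (edge, partition) pair."""
    replicas = defaultdict(set)   # vertex -> partitions holding a replica
    load = [0] * num_partitions   # number of edges per partition
    assignment = {}               # edge -> chosen partition

    def score(edge, p):
        u, v = edge
        # Reward partitions that already replicate u or v (fewer new
        # replicas), penalize heavily loaded partitions (balance).
        locality = (p in replicas[u]) + (p in replicas[v])
        balance = -load[p] / (max(load) + 1)
        return locality + balance

    window, stream = [], iter(edges)
    while True:
        while len(window) < window_size:     # refill the edge window
            try:
                window.append(next(stream))
            except StopIteration:
                break
        if not window:
            break
        # Pick the globally best (edge, partition) pair within the window.
        edge, part = max(((e, p) for e in window
                          for p in range(num_partitions)),
                         key=lambda ep: score(*ep))
        window.remove(edge)
        u, v = edge
        replicas[u].add(part)
        replicas[v].add(part)
        load[part] += 1
        assignment[edge] = part
    return assignment, replicas
```

With window_size=1 this degenerates to single-edge streaming; with window_size equal to the graph size it approaches an all-edge heuristic, which is exactly the tradeoff the abstract describes.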
Distributed Complex Event Processing (DCEP) is a paradigm to infer the occurrence of complex situations in the surrounding world from basic events like sensor readings. In doing so, DCEP operators detect event patterns on their incoming event streams. To yield high operator throughput, data parallelization frameworks divide the incoming event streams of an operator into overlapping windows that are processed in parallel by a number of operator instances. In doing so, the basic assumption is that the different windows can be processed independently from each other. However, consumption policies enforce that events can only be part of one pattern instance; then, they are consumed, i.e., removed from further pattern detection. That implies that the constituent events of a pattern instance detected in one window are excluded from all other windows as well, which breaks the data parallelism between different windows. In this paper, we tackle this problem by means of speculation: Based on the likelihood of an event's consumption in a window, subsequent windows may speculatively suppress that event. We propose the SPECTRE framework for speculative processing of multiple dependent windows in parallel. Our evaluations show an up to linear scalability of SPECTRE with the number of CPU cores.
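The interplay of consumption policies and speculation can be sketched as follows. This is a deliberately simplified illustration, not SPECTRE's actual algorithm: the pattern detector (match the first adjacent A-then-B pair), the event encoding, and the speculate-then-validate structure are all assumptions chosen to keep the example small.

```python
def detect(window, consumed):
    """Toy pattern detector: return the ids of the first adjacent (A, B)
    pair whose events were not already consumed elsewhere."""
    events = [e for e in window if e["id"] not in consumed]
    for x, y in zip(events, events[1:]):
        if x["type"] == "A" and y["type"] == "B":
            return {x["id"], y["id"]}
    return set()

def speculative_process(windows):
    """Process dependent windows speculatively: first assume no event is
    consumed (this phase is embarrassingly parallel), then validate the
    consumption state sequentially and reprocess only on misspeculation."""
    speculative = [detect(w, consumed=set()) for w in windows]
    consumed, results, reprocessed = set(), [], 0
    for w, guess in zip(windows, speculative):
        if guess & consumed:              # speculation used a consumed event
            guess = detect(w, consumed)   # reprocess with the true state
            reprocessed += 1
        results.append(guess)
        consumed |= guess
    return results, reprocessed
```

The speculative phase restores the data parallelism between windows; only windows whose guess actually touched a consumed event pay the reprocessing cost, which mirrors the abstract's intuition that speculation pays off when consumption conflicts are rare.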
Current distributed publish/subscribe systems consider all participants to have similar QoS requirements and contribute equally to the system's resources. However, in many real-world applications, the message delay tolerance of individual participants may differ widely. Disseminating messages according to individual delay requirements not only allows for the satisfaction of user-specific needs, but also significantly improves the utilization of the resources that participants contribute to a publish/subscribe system. In this article, we propose a peer-to-peer-based approach to satisfy the individual delay requirements of subscribers in the presence of bandwidth constraints. Our approach allows subscribers to dynamically adjust the granularity of their subscriptions according to their bandwidth constraints and delay requirements. Subscribers maintain the overlay in a decentralized manner, exclusively establishing connections that satisfy their individual delay requirements, and that provide messages exactly meeting their subscription granularity. The evaluations show that for many practical workloads, the proposed publish/subscribe system can scale up to a large number of subscribers and performs robustly in a very dynamic setting. Copyright © 2011 John Wiley & Sons, Ltd.
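The decentralized connection choice described above can be sketched with a toy feasibility filter. This is only an illustration under assumed data shapes: the field names (delay, spare_bw) and the lowest-delay tie-breaking rule are hypothetical, not the paper's actual overlay maintenance protocol.

```python
def select_parent(candidates, max_delay, needed_bw):
    """Toy decentralized parent selection: keep only candidates whose
    accumulated delay meets this subscriber's requirement and that have
    spare bandwidth for the subscription, then prefer the lowest delay."""
    feasible = [c for c in candidates
                if c["delay"] <= max_delay and c["spare_bw"] >= needed_bw]
    return min(feasible, key=lambda c: c["delay"]) if feasible else None
```

Returning None models the case where a subscriber must coarsen or refine its subscription granularity (and hence its bandwidth demand) before it can join the overlay, which is the adaptation mechanism the abstract describes.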
With the increasing popularity of Software-Defined Networking (SDN), the Ternary Content-Addressable Memory (TCAM) of switches can be directly accessed by a publish/subscribe middleware to perform filtering operations at low latency. In this way, three important requirements for a publish/subscribe middleware can be fulfilled: bandwidth efficiency, line-rate performance, and low latency in forwarding messages between producers and consumers. Nevertheless, it is challenging to sustain line-rate performance in the presence of dynamically changing interests of producers and consumers. In this article, we realize a scalable, SDN-based publish/subscribe middleware, called PLEROMA, that performs efficient forwarding at line rate. Moreover, PLEROMA offers methods to efficiently reconfigure a deployed topology in the presence of dynamic subscriptions and advertisements. We evaluate the performance of both the data plane and the control plane of PLEROMA to support our claim. Furthermore, we evaluate and benchmark the performance of SDN-compliant hardware and software switches in the context of our middleware.
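The TCAM-style filtering that makes such forwarding possible can be sketched in a few lines. This is a minimal software model of ternary matching, not PLEROMA's actual rule encoding: the bit-string addresses, the rule format, and the default "drop" action are assumptions for the example.

```python
def matches(rule, addr):
    """Ternary match: rule is a string over {'0', '1', '*'};
    '*' matches either bit, as a TCAM mask bit would."""
    return all(r in ('*', a) for r, a in zip(rule, addr))

def tcam_lookup(rules, addr):
    """First-match lookup over rules in priority order, mimicking how a
    TCAM returns the highest-priority matching entry in one step."""
    for rule, action in rules:
        if matches(rule, addr):
            return action
    return "drop"
```

In hardware, all rules are compared in parallel, which is what enables filtering at line rate; reconfiguring interests then amounts to installing or removing such ternary rules, the operation PLEROMA's control plane optimizes.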
In this work, we introduce the concept of m-polynomial p-harmonic exponential type convex functions. In addition, we elaborate on the newly introduced idea with examples and some interesting algebraic properties. As a result, several new integral inequalities are established. Finally, we investigate some applications to means. The techniques and ideas of this work may motivate further research in different areas of science.