Resource Allocation in Stochastic Processing Networks

Author: Yuan Zhong (Ph.D.)
Publisher:
Total Pages: 193
Release: 2012
Genre:
ISBN:

This thesis addresses the design and analysis of resource allocation policies in large-scale stochastic systems, motivated by examples such as the Internet, cloud facilities, and wireless networks. A canonical framework for modeling many such systems is provided by "stochastic processing networks" (SPN) (Harrison [28, 29]). In this context, the key operational challenge is efficient and timely resource allocation. We consider two important classes of SPNs: switched networks and bandwidth-sharing networks. Switched networks are constrained queueing models that have been used successfully to describe the detailed packet-level dynamics in systems such as input-queued switches and wireless networks. Bandwidth-sharing networks have primarily been used to capture the long-term behavior of the flow-level dynamics in the Internet. In this thesis, we develop novel methods to analyze the performance of existing resource allocation policies, and we design new policies that achieve provably good performance.

First, we study performance properties of so-called Maximum-Weight-α (MW-α) policies in switched networks, and of α-fair policies in bandwidth-sharing networks, both of which are well-known families of resource allocation policies, parametrized by a parameter α > 0. We study both their transient properties and their steady-state behavior. In switched networks, under an MW-α policy with α ≥ 1, we obtain bounds on the maximum queue size over a given time horizon, by means of a maximal inequality derived from the standard Lyapunov drift condition. As a corollary, we establish the full state space collapse property when α > 1. In the steady-state regime, for any α ≥ 0, we obtain explicit exponential tail bounds on the queue sizes, by relying on a norm-like Lyapunov function, different from the standard Lyapunov function used in the literature. Methods and results are largely parallel for bandwidth-sharing networks. Under an α-fair policy with α ≥ 1, we obtain bounds on the maximum number of flows in the network over a given time horizon, and hence establish the full state space collapse property when α ≥ 1. In the steady-state regime, using again a norm-like Lyapunov function, we obtain explicit exponential tail bounds on the number of flows, for any α > 0. As a corollary, we establish the validity of the diffusion approximation developed by Kang et al. [32], in steady state, for the case α = 1.

Second, we consider the design of resource allocation policies in switched networks. At a high level, the central performance questions of interest are: what is the optimal scaling behavior of policies in large-scale systems, and how can we achieve it? More specifically, in the context of general switched networks, we provide a new class of online policies, inspired by the classical insensitivity theory for product-form queueing networks, which admits explicit performance bounds. These policies achieve optimal queue-size scaling, in the conventional heavy-traffic regime, for a class of switched networks, thus settling a conjecture (documented in [51]) on queue-size scaling in input-queued switches. In the particular context of input-queued switches, we consider the scaling behavior of queue sizes as a function of the port number n and the load factor ρ. In particular, we consider the special case of uniform arrival rates, and we focus on the regime where ρ = 1 - 1/f(n), with f(n) ≥ n. We provide a new class of policies under which the long-run average total queue size scales as O(n^1.5 f(n) log f(n)). As a corollary, when f(n) = n, the long-run average total queue size scales as O(n^2.5 log n). This is a substantial improvement upon prior works [44], [52], [48], [39], where the same quantity scales as O(n^3) (ignoring logarithmic dependence on n).
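As a concrete illustration of the MW-α rule, the following is a minimal Python simulation sketch of an n × n input-queued switch with uniform Bernoulli arrivals: in each time slot the switch serves the matching that maximizes the sum of queue sizes raised to the power α. The brute-force search over matchings and all parameter values are illustrative assumptions for small n; this is not the new policy class proposed in the thesis.

```python
import itertools
import random

def max_weight_alpha_schedule(q, alpha):
    """Return the matching (a permutation of outputs) maximizing
    sum_i q[i][perm[i]] ** alpha, i.e. the MW-alpha rule."""
    n = len(q)
    best_perm, best_weight = None, -1.0
    for perm in itertools.permutations(range(n)):  # brute force; fine for small n
        w = sum(q[i][perm[i]] ** alpha for i in range(n))
        if w > best_weight:
            best_weight, best_perm = w, perm
    return best_perm

def simulate(n=4, alpha=1.0, rho=0.9, horizon=10000, seed=0):
    """Simulate an n x n input-queued switch under MW-alpha with
    uniform Bernoulli arrivals of total load rho per port."""
    rng = random.Random(seed)
    p = rho / n                       # arrival probability per (input, output) pair per slot
    q = [[0] * n for _ in range(n)]   # virtual output queues q[i][j]
    cumulative = 0
    for _ in range(horizon):
        for i in range(n):            # arrivals
            for j in range(n):
                if rng.random() < p:
                    q[i][j] += 1
        perm = max_weight_alpha_schedule(q, alpha)
        for i in range(n):            # serve one packet on each matched pair
            if q[i][perm[i]] > 0:
                q[i][perm[i]] -= 1
        cumulative += sum(map(sum, q))
    return cumulative / horizon       # long-run average total queue size

if __name__ == "__main__":
    print(simulate(n=4, alpha=1.0, rho=0.9))
```

Setting α = 1 recovers the classical MaxWeight policy; larger values of α weight long queues more heavily in the matching decision.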

Stochastic Models for Resource Allocation in Large Distributed Systems

Author: Guilherme Thompson
Publisher:
Total Pages: 0
Release: 2017
Genre:
ISBN:

This PhD thesis investigates four problems in the context of large distributed systems. The work is motivated by questions arising from the expansion of cloud computing and related technologies, and it investigates the efficiency of different resource allocation algorithms in this framework. The methods used involve a mathematical analysis of several stochastic models associated with these networks.

Chapter 1 provides an introduction to the subject in general, as well as a presentation of the main mathematical tools used throughout the subsequent chapters. Chapter 2 presents a congestion control mechanism for Video on Demand services delivering files encoded in various resolutions. We propose a policy under which the server delivers the video only at the minimal bit rate when the occupancy rate of the server is above a certain threshold. The performance of the system under this policy is then evaluated based on both the rejection and degradation rates.

Chapters 3, 4 and 5 explore problems related to cooperation schemes between data centres on the edge of the network. In the first setting, we analyse a policy in the context of multi-resource cloud services. In the second case, requests that arrive at a congested data centre are forwarded to a neighbouring data centre with some given probability. In the third case, requests blocked at one data centre are systematically forwarded to another, where a trunk reservation policy is introduced such that a redirected request is accepted only if a certain minimum number of servers are free at that data centre.
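To illustrate the two admission rules sketched in this abstract (the occupancy-threshold degradation policy and the trunk-reservation rule), here is a minimal Python fragment. The class, function and parameter names (e.g., `reserve`) are hypothetical, and the thesis analyses full stochastic models rather than single decision functions.

```python
from dataclasses import dataclass

@dataclass
class DataCentre:
    capacity: int        # total number of servers
    busy: int = 0        # servers currently occupied

    def free(self) -> int:
        return self.capacity - self.busy

def vod_admission(occupancy: float, threshold: float) -> str:
    """Threshold policy, sketched: above the occupancy threshold, an
    accepted video is delivered only at the minimal bit rate."""
    if occupancy >= 1.0:
        return "reject"
    return "degraded" if occupancy > threshold else "full-rate"

def accept_redirected(dc: DataCentre, reserve: int) -> bool:
    """Trunk-reservation rule, sketched: a request redirected from a
    congested neighbour is accepted only if at least `reserve` servers
    are free, keeping headroom for the data centre's own arrivals."""
    return dc.free() >= reserve

# illustrative use
dc = DataCentre(capacity=100, busy=95)
print(vod_admission(occupancy=0.95, threshold=0.9))  # -> "degraded"
print(accept_redirected(dc, reserve=10))             # -> False
```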

INFORMS Conference Program

Author: Institute for Operations Research and the Management Sciences. National Meeting
Publisher:
Total Pages: 180
Release: 1998
Genre: Industrial management
ISBN:

Stochastic Resource Allocation Strategies With Uncertain Information In Sensor Networks

Author: Nan Hu
Publisher:
Total Pages:
Release: 2016
Genre:
ISBN:

Support for intelligent and autonomous resource management is one of the key factors in the success of modern sensor network systems. Limited resources, such as exhaustible battery life, moderate processing ability and finite bandwidth, restrict a system's ability to simultaneously accommodate all missions that are submitted by users. In order to achieve the optimal profit in such dynamic conditions, the value of each mission, quantified by its demand on resources and its achievable profit, needs to be properly evaluated in different situations.

In practice, uncertainties may exist throughout the entire execution of a mission and thus should not be ignored. For a single mission, because of uncertainties such as the unreliable wireless medium and the variable quality of sensor outputs, both the demands and the profits of the mission may not be deterministic and may be hard to predict precisely. Moreover, throughout the process of execution, each mission may experience multiple states, the transitions between which may be affected by different conditions. Even if the current state of a mission is identified, because multiple potential transitions may occur, each leading to different consequences, the subsequent state cannot be confirmed until the transition actually occurs. In systems with multiple missions, each with uncertainties, a more complicated circumstance arises, in which the strategy for resource allocation among missions needs to be adjusted adaptively and dynamically based on both the present status and the potential evolution of all missions.

In our research, we take into account several levels of uncertainty that may be faced when allocating limited resources in dynamic environments as described above, where the concept of missions that require resources can be matched to that in certain network applications. Our algorithms compute resource allocation solutions for the corresponding scenarios and always aim to achieve high profit, as well as other performance improvements (e.g., resource utilization rate, mission preemption rate, etc.).

Given a fixed set of missions, we consider both demands and profits as random variables, whose values follow certain distributions and may change over time. Since the profit is not constant, rather than achieving a specific maximized profit, our objective is to select the optimal set of missions so as to maximize a certain percentile of their combined profit, while constraining the probability of resource capacity violation within an acceptable threshold. Note that, in this scenario, the selection of missions is final and will not change after the decision has been made. Therefore, this static solution only fits applications with long-running missions.

For scenarios with both long-term and short-term missions, to increase the total achieved profit, instead of selecting a fixed mission set, we propose a dynamic strategy which tunes mission selections adaptively to the changing environment. We take a surveillance application as an example, where missions target specific sets of events, and both the demands and the profits of a mission depend on which event is actually occurring. On the one hand, resources should be focused on high-valued events with a high probability of occurring; on the other hand, resources should also be distributed to gain an understanding of the overall condition of the environment.
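The static selection problem described above can be illustrated with a brute-force Monte Carlo sketch in Python: score each candidate mission subset on sampled demands and profits, keep only subsets whose capacity-violation probability stays below the threshold, and pick the one with the best profit percentile. The Gaussian demand/profit distributions, parameter names, and sample sizes are assumptions for illustration, not the algorithm developed in the thesis.

```python
import itertools
import random

def percentile(values, p):
    """Nearest-rank p-th percentile (0 < p < 1) of a non-empty list."""
    s = sorted(values)
    return s[min(len(s) - 1, int(p * len(s)))]

def select_missions(missions, capacity, p=0.1, max_violation_prob=0.05,
                    n_samples=2000, seed=0):
    """Brute-force sketch of static chance-constrained selection:
    choose the subset maximizing the p-th percentile of combined profit,
    subject to P(total demand > capacity) <= max_violation_prob.
    `missions` is a list of (demand_sampler, profit_sampler) pairs."""
    rng = random.Random(seed)
    # pre-draw scenarios so every candidate subset is scored on the same samples
    draws = [[(d(rng), g(rng)) for d, g in missions] for _ in range(n_samples)]
    best_subset, best_score = (), float("-inf")
    for r in range(1, len(missions) + 1):
        for subset in itertools.combinations(range(len(missions)), r):
            profits, violations = [], 0
            for draw in draws:
                demand = sum(draw[m][0] for m in subset)
                profits.append(sum(draw[m][1] for m in subset))
                violations += demand > capacity
            if violations / n_samples <= max_violation_prob:
                score = percentile(profits, p)
                if score > best_score:
                    best_subset, best_score = subset, score
    return best_subset, best_score

# illustrative missions: (demand sampler, profit sampler), Gaussian by assumption
missions = [
    (lambda r: r.gauss(10, 2), lambda r: r.gauss(50, 10)),
    (lambda r: r.gauss(20, 5), lambda r: r.gauss(80, 30)),
    (lambda r: r.gauss(15, 3), lambda r: r.gauss(60, 15)),
]
print(select_missions(missions, capacity=40))
```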
We develop the Self-Adaptive Resource Allocation (SARA) algorithm, which models mission execution as a Markov process in which the states are determined by the combination of occurring events. In this case, resources need to be allocated before the events actually occur; otherwise, the mission will miss the event due to lack of support. Therefore, a prediction as to which events are about to occur is necessary, and when the prediction fails, in exchange for the loss of profit, the mistakenly allocated resources collect information that assists prediction in the future.

When the transitions between mission states can be controlled by taking certain maneuvers at the proper time, the probability that missions transition to lower-profit states may be decreased, and as a consequence a loss of profit may sometimes be avoided. We model this problem as a Semi-Markov Decision Process and propose the Action-Drive Operation Model With Evaluation of Risk and Executability (ADOM-ERE) to calculate optimal maneuvers. One challenge is that the state transitions can be affected not only by states and actions, but also by external risks and competition for resources. On the one hand, external risks (e.g., a DoS attack) may change the existing transition probabilities between states; on the other hand, taking actions to avoid lower-profit states may require special constrained resources. As a result, lower-profit missions may sometimes be unable to choose their optimal actions because of resource exhaustion. ADOM-ERE considers states, actions, risks and competition when searching for the optimal allocation solution, and applies to scenarios in which the resources for actions are managed either centrally or in a distributed way.

Numerical simulations are performed for all algorithms, and the results are compared with several competing works, showing that our solutions achieve higher profit in the corresponding settings.
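ADOM-ERE is formulated as a Semi-Markov Decision Process; as a loose, much simplified stand-in, the sketch below runs plain value iteration on a two-state mission MDP in which a costly "maneuver" action reduces the chance of dropping to a low-profit state. All states, actions, probabilities and rewards here are hypothetical, and sojourn times, external risks and resource competition are omitted.

```python
def value_iteration(states, actions, P, reward, gamma=0.95, tol=1e-6):
    """Plain value iteration on a small MDP (sojourn times omitted).
    P[s][a] is a dict {next_state: probability}; reward[s][a] is the
    expected profit of taking action a in mission state s."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                reward[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a].items())
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    # greedy policy with respect to the converged value function
    policy = {
        s: max(actions, key=lambda a: reward[s][a]
               + gamma * sum(p * V[s2] for s2, p in P[s][a].items()))
        for s in states
    }
    return V, policy

# toy mission with a high-profit state "track", a low-profit state "lost",
# and a costly "maneuver" action that lowers the chance of dropping to "lost"
states = ["track", "lost"]
actions = ["wait", "maneuver"]
P = {
    "track": {"wait": {"track": 0.7, "lost": 0.3},
              "maneuver": {"track": 0.95, "lost": 0.05}},
    "lost":  {"wait": {"lost": 1.0},
              "maneuver": {"track": 0.4, "lost": 0.6}},
}
reward = {"track": {"wait": 10.0, "maneuver": 7.0},   # maneuver consumes resources
          "lost":  {"wait": 0.0,  "maneuver": -2.0}}
print(value_iteration(states, actions, P, reward))
```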