WORLDCOMP'11: July 18-21, 2011 (22 joint conferences)

WORLDCOMP'11 Featured Tutorial: Prof. H. J. Siegel

Last modified 2011-06-18 12:57

Robust Resource Management for Parallel and Distributed Computing Systems
Prof. H. J. Siegel
Abell Endowed Chair Distinguished Professor of Electrical and Computer Engineering and Professor of Computer Science
Director, CSU Information Science and Technology Center (ISTeC)
Colorado State University, Fort Collins, Colorado, USA

Date: July 18, 2011
Time: 5:30pm - 9:00pm
Location: Ballroom 6


    What does "robust" mean? Often people state that their system software component, piece of hardware, application code, or technique is "robust," but never define what they mean by "robust." How does one define robustness in a given situation so it can be determined whether a system is indeed robust? Furthermore, how does one quantify robustness so that, if two people each claim to have a robust computing system, one can decide which is the more robust? These are the types of issues we address in this tutorial. We study robustness in the context of resource allocation in heterogeneous parallel and distributed computing systems, but the robustness concepts presented have broad applicability.

    In heterogeneous parallel and distributed computing environments, a collection of different machines is interconnected and provides a variety of computational capabilities. These capabilities can be used to execute a workload composed of different types of applications, each of which may consist of multiple tasks, where the tasks have diverse computational requirements. The execution time of a task on a machine is based on how the task’s computational requirements interact with the machine’s capabilities. Thus, a task’s execution time may vary from one machine to the next, and just because some machine A is faster than some machine B for task 1 does not mean it will be faster for task 2. Furthermore, there can be inter-task data dependencies. Tasks must share the computing and communication resources of the system.
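The machine-dependence of task execution times described above is commonly captured in an "estimated time to compute" (ETC) matrix, a standard abstraction in the heterogeneous-computing literature. A minimal sketch, with invented task names, machine names, and times purely for illustration:

```python
# A sketch of an ETC (estimated time to compute) matrix:
# etc[task][machine] is the estimated execution time of that task on that
# machine. Note that machine "A" is faster for task1 but slower for task2,
# as discussed above. All numbers here are illustrative assumptions.
etc = {
    "task1": {"A": 10.0, "B": 25.0},  # A beats B for task1
    "task2": {"A": 40.0, "B": 15.0},  # B beats A for task2
}

def best_machine(task):
    """Return the machine with the smallest estimated time for this task."""
    return min(etc[task], key=etc[task].get)
```

For example, `best_machine("task1")` yields `"A"` while `best_machine("task2")` yields `"B"`: neither machine dominates across the workload.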

    A critical research problem for heterogeneous parallel and distributed computing systems is how to assign tasks to machines, and schedule the order of their execution, to optimize some given system performance measure. However, assignment decisions and associated performance prediction are often based on estimated values of task and system parameters. The actual values of these parameters are uncertain, and may differ from the estimates. For example, the estimates may represent only average values, the models used to generate the estimates may have limited accuracy, or there may be changes in the environment. Because the actual values of these parameters are uncertain, the actual system performance may differ from the predicted performance. Thus, it is important to develop resource management strategies that strive to meet particular system performance requirements even when such uncertainties are present.
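As a concrete instance of the assignment problem, the Minimum Completion Time (MCT) heuristic is a simple baseline from the heterogeneous-computing literature (it is not one of the tutorial's own heuristics): each task, in arrival order, is mapped to the machine that would complete it earliest given current machine loads. A sketch, assuming the same ETC-matrix representation as above:

```python
# A sketch of the classic Minimum Completion Time (MCT) baseline heuristic.
# etc[task][machine] gives estimated execution times; machines is a list of
# machine names. Returns the task-to-machine mapping and the makespan.
def mct_schedule(etc, machines):
    ready = {m: 0.0 for m in machines}   # time at which each machine is free
    mapping = {}
    for task, times in etc.items():      # tasks considered in arrival order
        # pick the machine that finishes this task earliest
        m = min(machines, key=lambda mm: ready[mm] + times[mm])
        mapping[task] = m
        ready[m] += times[m]
    return mapping, max(ready.values())
```

Because MCT optimizes against the *estimated* values in `etc`, its predicted makespan can degrade when actual execution times deviate from the estimates, which is exactly the uncertainty the robustness models below address.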

    To address this problem, we have designed two models for deriving the degree of robustness of a resource allocation. One model is based on having deterministic estimates of the parameters whose exact values are uncertain. In this case, the degree of robustness of a resource allocation is quantified as the maximum amount of collective difference between actual and estimated values in these system parameters within which a user-specified level of system performance (QoS) can be guaranteed. The second model assumes that stochastic information is available about the values of these parameters whose actual values are uncertain. With this model, the degree of robustness is quantified as the probability that a user-specified level of system performance can be met.
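The two models can be caricatured in a few lines of code. The sketch below is a simplified stand-in, not the tutorial's exact formulation: the deterministic function uses per-machine slack against a makespan bound (where the tutorial's metric bounds *collective* deviation across parameters), and the stochastic function estimates by Monte Carlo the probability that a QoS bound on makespan is met:

```python
import random

def deterministic_robustness(finish_times, tau):
    """Simplified deterministic illustration: the smallest slack between a
    machine's predicted finish time and the makespan bound tau, i.e. the
    largest increase in any single machine's finish time that still keeps
    the makespan within tau. (A stand-in for the tutorial's collective-
    deviation metric.)"""
    return min(tau - f for f in finish_times)

def stochastic_robustness(sample_finish_times, tau, trials=10_000):
    """Stochastic illustration: estimate, by sampling the uncertain task
    execution times, the probability that the makespan meets the QoS
    bound tau."""
    hits = sum(1 for _ in range(trials) if max(sample_finish_times()) <= tau)
    return hits / trials
```

With predicted finish times of 80 and 95 against a bound of 100, the deterministic slack is 5; the stochastic metric instead reports a probability, e.g. roughly 0.5 for a finish time uniform on [80, 120] against the same bound.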

    Both robustness models, and the robustness metrics associated with each, will be presented. It will be shown how they can be used to evaluate and compare the robustness of different resource allocations. In addition, it will be demonstrated how these models can be incorporated into resource management heuristics that produce robust allocations to optimize some user-specified performance criterion. Robust resource allocation heuristics for a variety of environments will be discussed and compared. This will be done for both static heuristics, which are executed off-line for production environments, and dynamic heuristics, which are executed on-line for environments where tasks must be assigned resources as they arrive into the system.
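To make the static/dynamic distinction concrete: where the on-line MCT baseline above maps tasks one at a time as they arrive, a static heuristic sees the whole batch up front. The classic Min-min heuristic from this literature (again, a generic baseline rather than the tutorial's own method) repeatedly maps the unmapped task whose best-case completion time is smallest:

```python
# A sketch of the classic Min-min static (off-line, batch) heuristic.
# etc[task][machine] gives estimated execution times for the whole batch.
def min_min_schedule(etc, machines):
    ready = {m: 0.0 for m in machines}   # time at which each machine is free
    unmapped = set(etc)
    mapping = {}
    while unmapped:
        # for each unmapped task, find its minimum-completion-time machine
        best = {t: min(machines, key=lambda mm: ready[mm] + etc[t][mm])
                for t in unmapped}
        # map the task whose best completion time is smallest overall
        tt = min(unmapped, key=lambda t: ready[best[t]] + etc[t][best[t]])
        mapping[tt] = best[tt]
        ready[best[tt]] += etc[tt][best[tt]]
        unmapped.remove(tt)
    return mapping, max(ready.values())
```

Because it reconsiders all remaining tasks at every step, Min-min typically produces better mappings than one-pass on-line heuristics, at the cost of requiring the full workload in advance; robustness-aware variants of both styles are the subject of the tutorial.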

    The tutorial material is applicable to various types of heterogeneous computing and communication environments, including parallel, distributed, cluster, grid, Internet, cloud, embedded, multicore, content distribution networks, wireless networks, and sensor networks. Furthermore, the robustness models, concepts, and metrics presented are generally applicable to resource allocation problems throughout various fields, such as smart grids and search and rescue operations.


    This course will enable you to:

    • understand the problem of robust resource allocation in heterogeneous parallel and distributed computing systems
    • ask the “three robustness questions” that must be answered whenever anyone makes robustness claims
    • apply the appropriate model of robustness depending on the information available about the system uncertainties
    • develop and use robustness metrics to quantify the robustness of a particular resource allocation for a given computational environment
    • design resource allocation heuristics that incorporate robustness, for both static (off-line) and dynamic (on-line) environments
    • use the concepts of robustness for resource allocation in a variety of problem domains


    This course is intended for faculty, engineers, scientists, and graduate students who want to learn how to define, model, and quantify robustness when designing and using heterogeneous suites of computers (including clusters and clouds) to execute applications in a way that will optimize some performance criterion.


    H. J. Siegel is the George T. Abell Endowed Chair Distinguished Professor of Electrical and Computer Engineering at Colorado State University (CSU), where he is also a Professor of Computer Science. He is the Director of the CSU Information Science and Technology Center (ISTeC), a university-wide organization for enhancing CSU’s research, education, and outreach activities pertaining to the design and innovative application of computer, communication, and information systems. From 1976 to 2001, he was a professor in the School of Electrical and Computer Engineering at Purdue University. He received two B.S. degrees from the Massachusetts Institute of Technology (MIT), and the M.A., M.S.E., and Ph.D. degrees from Princeton University. He is a Fellow of the IEEE and a Fellow of the ACM. Prof. Siegel has co-authored over 380 published technical papers in the areas of parallel and distributed computing and communications. He was a Co-Editor-in-Chief of the Journal of Parallel and Distributed Computing, and was on the Editorial Boards of the IEEE Transactions on Parallel and Distributed Systems and the IEEE Transactions on Computers. For more information, please see:


    Professor H. J. Siegel
    Department of Electrical and Computer Engineering and Department of Computer Science
    Colorado State University
    Fort Collins, CO 80523-1373
    Office: (970) 491-7982
    Fax: (970) 491-2249

Academic Co-Sponsors
The Berkeley Initiative in Soft Computing (BISC)
University of California, Berkeley, USA

Biomedical Cybernetics Laboratory, HST of Harvard University
Massachusetts Institute of Technology (MIT)

Intelligent Data Exploration and Analysis Laboratory
University of Texas at Austin, Austin, Texas, USA

Collaboratory for Advanced Computing and Simulations (CACS)
University of Southern California, USA

Minnesota Supercomputing Institute
University of Minnesota, USA

Knowledge Management & Intelligent System Center (KMIS)
University of Siegen, Germany

UMIT, Institute of Bioinformatics and Translational Research, Austria

BioMedical Informatics & Bio-Imaging Laboratory
Georgia Institute of Technology and Emory University, Atlanta, Georgia, USA

Hawkeye Radiology Informatics, Department of Radiology, College of Medicine, University of Iowa, Iowa, USA

Supercomputer Software Department (SSD), Institute of Computational Mathematics & Mathematical Geophysics, Russian Academy of Sciences

SECLAB
University of Naples Federico II, University of Naples Parthenope, & Second University of Naples, Italy

Medical Image HPC & Informatics Lab (MiHi Lab)
University of Iowa, Iowa, USA

Intelligent Cyberspace Engineering Lab (ICEL), Texas A&M University (Commerce, Texas), USA

Model-Based Engineering Laboratory, University of North Dakota, North Dakota, USA

Corporate Sponsors

Intel Corporation

Altera Corporation

Pico Computing

High Performance Computing for Nanotechnology (HPCNano)

International Society of Intelligent Biological Medicine

World Academy of Biomedical Sciences and Technologies

The International Council on Medical and Care Compunetics

The UK Department for Business, Enterprise & Regulatory Reform

Scientific Technologies Corporation

HoIP - Health without Boundaries


Administered by UCMSS
Universal Conference Management Systems & Support
San Diego, California, USA
Contact: Kaveh Arbtan
