Article: Benchmarking and Comparison of the Task Graph Scheduling Algorithms

Title: Benchmarking and Comparison of the Task Graph Scheduling Algorithms
Authors: Kwok, YK; Ahmad, I
Keywords: Performance evaluation; benchmarks; multiprocessors; parallel processing; scheduling; task graphs; scalability
Issue Date: 1999
Publisher: Academic Press. The journal's web site is located at http://www.elsevier.com/locate/jpdc
Citation: Journal of Parallel and Distributed Computing, 1999, v. 59, n. 3, p. 381-422
Abstract: The problem of scheduling a parallel program represented by a weighted directed acyclic graph (DAG) to a set of homogeneous processors for minimizing the completion time of the program has been extensively studied. The NP-completeness of the problem has stimulated researchers to propose a myriad of heuristic algorithms. While most of these algorithms are reported to be efficient, it is not clear how they compare against each other. A meaningful performance evaluation and comparison of these algorithms is a complex task and it must take into account a number of issues. First, most scheduling algorithms are based upon diverse assumptions, making the performance comparison rather meaningless. Second, there does not exist a standard set of benchmarks to examine these algorithms. Third, most algorithms are evaluated using small problem sizes, and, therefore, their scalability is unknown. In this paper, we first provide a taxonomy for classifying various algorithms into distinct categories according to their assumptions and functionalities. We then propose a set of benchmarks that are based on diverse structures and are not biased toward a particular scheduling technique. We have implemented 15 scheduling algorithms and compared them on a common platform by using the proposed benchmarks, as well as by varying important problem parameters. We interpret the results based upon the design philosophies and principles behind these algorithms, drawing inferences why some algorithms perform better than others. We also propose a performance measure called scheduling scalability (SS) that captures the collective effectiveness of a scheduling algorithm in terms of its solution quality, the number of processors used, and the running time. © 1999 Academic Press.
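The core problem the abstract describes — scheduling a node- and edge-weighted DAG onto homogeneous processors to minimize completion time — is commonly attacked with list-scheduling heuristics. The sketch below is a generic b-level (bottom-level) list scheduler for illustration only; it is not one of the 15 algorithms the paper benchmarks, and the task names, weights, and priority rule are assumptions made up for this example.

```python
def b_level(tasks, succs, edge_w):
    """Bottom level of each task: the longest path (node plus edge weights)
    from that task to an exit node of the DAG."""
    memo = {}
    def bl(t):
        if t not in memo:
            memo[t] = tasks[t] + max(
                (edge_w[(t, s)] + bl(s) for s in succs.get(t, ())), default=0)
        return memo[t]
    for t in tasks:
        bl(t)
    return memo

def list_schedule(tasks, preds, succs, edge_w, n_procs):
    """Schedule tasks in descending b-level order (a valid topological order
    when all weights are positive).  Each task is placed on the processor
    that finishes it earliest; an edge's communication cost is paid only
    when a predecessor ran on a different processor."""
    bl = b_level(tasks, succs, edge_w)
    order = sorted(tasks, key=lambda t: -bl[t])
    proc_free = [0] * n_procs          # next free time on each processor
    placed = {}                        # task -> (processor, start, finish)
    for t in order:
        best = None
        for p in range(n_procs):
            # Task t is ready on p once every predecessor's result arrives.
            ready = max(
                (placed[u][2] + (edge_w[(u, t)] if placed[u][0] != p else 0)
                 for u in preds.get(t, ())), default=0)
            start = max(ready, proc_free[p])
            if best is None or start + tasks[t] < best[2]:
                best = (p, start, start + tasks[t])
        placed[t] = best
        proc_free[best[0]] = best[2]
    return placed

# Hypothetical diamond-shaped task graph: a -> {b, c} -> d.
tasks = {"a": 2, "b": 3, "c": 3, "d": 2}
succs = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
preds = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
edge_w = {("a", "b"): 1, ("a", "c"): 1, ("b", "d"): 1, ("c", "d"): 1}
sched = list_schedule(tasks, preds, succs, edge_w, 2)
makespan = max(finish for _, _, finish in sched.values())  # 8 on two processors
```

The trade-off this example exposes — shorter makespan (8 versus a sequential 10) at the cost of a second processor and extra scheduling work — is exactly what the paper's proposed scheduling scalability (SS) measure is meant to capture in a single number.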
Persistent Identifier: http://hdl.handle.net/10722/73624
ISSN: 0743-7315
2015 Impact Factor: 1.32
2015 SCImago Journal Rankings: 0.851
ISI Accession Number ID: WOS:000084113900003

 

DC Field | Value | Language
dc.contributor.author | Kwok, YK | en_HK
dc.contributor.author | Ahmad, I | en_HK
dc.date.accessioned | 2010-09-06T06:53:11Z | -
dc.date.available | 2010-09-06T06:53:11Z | -
dc.date.issued | 1999 | en_HK
dc.identifier.citation | Journal of Parallel and Distributed Computing, 1999, v. 59, n. 3, p. 381-422 | en_HK
dc.identifier.issn | 0743-7315 | en_HK
dc.identifier.uri | http://hdl.handle.net/10722/73624 | -
dc.language | eng | en_HK
dc.publisher | Academic Press. The Journal's web site is located at http://www.elsevier.com/locate/jpdc | en_HK
dc.relation.ispartof | Journal of Parallel and Distributed Computing | en_HK
dc.subject | Performance evaluation | en_HK
dc.subject | benchmarks | en_HK
dc.subject | multiprocessors | en_HK
dc.subject | parallel processing | en_HK
dc.subject | scheduling | en_HK
dc.subject | task graphs | en_HK
dc.subject | scalability | en_HK
dc.title | Benchmarking and Comparison of the Task Graph Scheduling Algorithms | en_HK
dc.type | Article | en_HK
dc.identifier.openurl | http://library.hku.hk:4550/resserv?sid=HKU:IR&issn=0743-7315&volume=59&issue=3&spage=381&epage=422&date=1999&atitle=Benchmarking+and+Comparison+of+the+Task+Graph+Scheduling+Algorithms | en_HK
dc.identifier.email | Kwok, YK:ykwok@eee.hku.hk | en_HK
dc.identifier.authority | Kwok, YK=rp00128 | en_HK
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1006/jpdc.1999.1578 | en_HK
dc.identifier.scopus | eid_2-s2.0-0001514167 | en_HK
dc.identifier.hkuros | 53830 | en_HK
dc.relation.references | http://www.scopus.com/mlt/select.url?eid=2-s2.0-0001514167&selection=ref&src=s&origin=recordpage | en_HK
dc.identifier.volume | 59 | en_HK
dc.identifier.issue | 3 | en_HK
dc.identifier.spage | 381 | en_HK
dc.identifier.epage | 422 | en_HK
dc.identifier.isi | WOS:000084113900003 | -
dc.publisher.place | United States | en_HK
dc.identifier.scopusauthorid | Kwok, YK=7101857718 | en_HK
dc.identifier.scopusauthorid | Ahmad, I=7201878459 | en_HK