Article: Efficient Scheduling of Arbitrary Task Graphs to Multiprocessors Using a Parallel Genetic Algorithm

Title: Efficient Scheduling of Arbitrary Task Graphs to Multiprocessors Using a Parallel Genetic Algorithm
Authors: Kwok, YK; Ahmad, I
Issue Date: 1997
Publisher: Academic Press. The journal's web site is located at http://www.elsevier.com/locate/jpdc
Citation: Journal of Parallel and Distributed Computing, 1997, v. 47 n. 1, p. 58-77
Abstract: Given a parallel program represented by a task graph, the objective of a scheduling algorithm is to minimize the overall execution time of the program by properly assigning the nodes of the graph to the processors. This multiprocessor scheduling problem is NP-complete even with simplifying assumptions and becomes more complex under relaxed assumptions such as arbitrary precedence constraints, and arbitrary task execution and communication times. The present literature on this topic is a large repertoire of heuristics that produce good solutions in a reasonable amount of time. These heuristics, however, have restricted applicability in a practical environment because they have a number of fundamental problems including high time complexity, lack of scalability, and no performance guarantee with respect to optimal solutions. Recently, genetic algorithms (GAs) have been widely reckoned as a useful vehicle for obtaining high quality or even optimal solutions for a broad range of combinatorial optimization problems. While a few GAs for scheduling have already been suggested, in this paper we propose a novel GA-based algorithm with an objective to simultaneously meet the goals of high performance, scalability, and fast running time. The proposed parallel genetic scheduling (PGS) algorithm itself is a parallel algorithm which generates high quality solutions in a short time. By encoding the scheduling list as a chromosome, the PGS algorithm can potentially generate an optimal scheduling list which in turn leads to an optimal schedule. The major strength of the PGS algorithm lies in its two efficient genetic operators: the order crossover and mutation. These operators effectively combine the building-blocks of good scheduling lists to construct better lists. The proposed algorithm is evaluated through a robust comparison with two heuristics best known in terms of performance and time complexity. It outperforms both heuristics while taking considerably less running time. When evaluated with random task graphs for which optimal solutions are known, the PGS algorithm generates optimal solutions for more than half of the test cases and close-to-optimal for the other half. © 1997 Academic Press.
Persistent Identifier: http://hdl.handle.net/10722/73640
ISSN: 0743-7315
2021 Impact Factor: 4.542
2020 SCImago Journal Rankings: 0.638
ISI Accession Number ID: WOS:000072288600005
References: http://www.scopus.com/mlt/select.url?eid=2-s2.0-0002086462&selection=ref&src=s&origin=recordpage
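
The abstract describes decoding a scheduling-list chromosome (a priority ordering of the task-graph nodes) into an actual schedule. As a rough illustration only, the Python sketch below decodes such a list under common list-scheduling assumptions: each task, taken in list order, is placed on the processor giving its earliest start time, and an edge's communication cost is charged only when the predecessor ran on a different processor. The function name, data layout, and tie-breaking rule are hypothetical and are not taken from the paper's PGS implementation.

```python
# Hypothetical sketch: decode a scheduling-list chromosome into a schedule.
# Assumptions (not from the paper): tasks are DAG nodes with execution times,
# edge communication costs apply only across processors, and each task in
# list order goes to the processor giving its earliest start time.

def decode_schedule(order, exec_time, preds, comm, num_procs):
    """order: scheduling list (a precedence-valid permutation of task ids).
    exec_time[t]: execution time of task t.
    preds[t]: list of predecessor task ids of t.
    comm[(u, t)]: communication cost on edge u -> t.
    Returns (makespan, assignment) with assignment[t] = (proc, start, finish)."""
    proc_ready = [0.0] * num_procs   # time at which each processor becomes free
    assignment = {}                  # task -> (processor, start, finish)

    for t in order:
        best = None
        for p in range(num_procs):
            # Data-ready time: predecessors' finish times plus communication
            # delay when a predecessor ran on a different processor.
            ready = 0.0
            for u in preds[t]:
                u_proc, _, u_finish = assignment[u]
                delay = 0.0 if u_proc == p else comm.get((u, t), 0.0)
                ready = max(ready, u_finish + delay)
            start = max(ready, proc_ready[p])
            if best is None or start < best[1]:
                best = (p, start)
        p, start = best
        finish = start + exec_time[t]
        proc_ready[p] = finish
        assignment[t] = (p, start, finish)

    makespan = max(f for _, _, f in assignment.values())
    return makespan, assignment
```

In a GA built on this encoding, the fitness of a chromosome would typically be the makespan returned by such a decoder.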


DC Field: Value (Language)
dc.contributor.author: Kwok, YK (en_HK)
dc.contributor.author: Ahmad, I (en_HK)
dc.date.accessioned: 2010-09-06T06:53:20Z
dc.date.available: 2010-09-06T06:53:20Z
dc.date.issued: 1997 (en_HK)
dc.identifier.citation: Journal of Parallel and Distributed Computing, 1997, v. 47 n. 1, p. 58-77 (en_HK)
dc.identifier.issn: 0743-7315 (en_HK)
dc.identifier.uri: http://hdl.handle.net/10722/73640
dc.description.abstract: Given a parallel program represented by a task graph, the objective of a scheduling algorithm is to minimize the overall execution time of the program by properly assigning the nodes of the graph to the processors. This multiprocessor scheduling problem is NP-complete even with simplifying assumptions and becomes more complex under relaxed assumptions such as arbitrary precedence constraints, and arbitrary task execution and communication times. The present literature on this topic is a large repertoire of heuristics that produce good solutions in a reasonable amount of time. These heuristics, however, have restricted applicability in a practical environment because they have a number of fundamental problems including high time complexity, lack of scalability, and no performance guarantee with respect to optimal solutions. Recently, genetic algorithms (GAs) have been widely reckoned as a useful vehicle for obtaining high quality or even optimal solutions for a broad range of combinatorial optimization problems. While a few GAs for scheduling have already been suggested, in this paper we propose a novel GA-based algorithm with an objective to simultaneously meet the goals of high performance, scalability, and fast running time. The proposed parallel genetic scheduling (PGS) algorithm itself is a parallel algorithm which generates high quality solutions in a short time. By encoding the scheduling list as a chromosome, the PGS algorithm can potentially generate an optimal scheduling list which in turn leads to an optimal schedule. The major strength of the PGS algorithm lies in its two efficient genetic operators: the order crossover and mutation. These operators effectively combine the building-blocks of good scheduling lists to construct better lists. The proposed algorithm is evaluated through a robust comparison with two heuristics best known in terms of performance and time complexity. It outperforms both heuristics while taking considerably less running time. When evaluated with random task graphs for which optimal solutions are known, the PGS algorithm generates optimal solutions for more than half of the test cases and close-to-optimal for the other half. © 1997 Academic Press. (en_HK)
dc.language: eng (en_HK)
dc.publisher: Academic Press. The journal's web site is located at http://www.elsevier.com/locate/jpdc (en_HK)
dc.relation.ispartof: Journal of Parallel and Distributed Computing (en_HK)
dc.title: Efficient Scheduling of Arbitrary Task Graphs to Multiprocessors Using a Parallel Genetic Algorithm (en_HK)
dc.type: Article (en_HK)
dc.identifier.openurl: http://library.hku.hk:4550/resserv?sid=HKU:IR&issn=0743-7315&volume=47&issue=1&spage=58&epage=77&date=1997&atitle=Efficient+Scheduling+of+Arbitrary+Task+Graphs+to+Multiprocessors+Using+A+Parallel+Genetic+Algorithm (en_HK)
dc.identifier.email: Kwok, YK:ykwok@eee.hku.hk (en_HK)
dc.identifier.authority: Kwok, YK=rp00128 (en_HK)
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1006/jpdc.1997.1395 (en_HK)
dc.identifier.scopus: eid_2-s2.0-0002086462 (en_HK)
dc.identifier.hkuros: 36986 (en_HK)
dc.relation.references: http://www.scopus.com/mlt/select.url?eid=2-s2.0-0002086462&selection=ref&src=s&origin=recordpage (en_HK)
dc.identifier.volume: 47 (en_HK)
dc.identifier.issue: 1 (en_HK)
dc.identifier.spage: 58 (en_HK)
dc.identifier.epage: 77 (en_HK)
dc.identifier.isi: WOS:000072288600005
dc.publisher.place: United States (en_HK)
dc.identifier.scopusauthorid: Kwok, YK=7101857718 (en_HK)
dc.identifier.scopusauthorid: Ahmad, I=7201878459 (en_HK)
dc.identifier.issnl: 0743-7315
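
The record credits much of the PGS algorithm's strength to its order crossover and mutation operators on scheduling-list chromosomes. The sketch below shows a textbook order crossover (OX) and a simple swap mutation on permutation chromosomes, purely to illustrate the operator family; the paper's actual operators, which also keep the scheduling list precedence-feasible, are not reproduced here, and all names and the mutation rate are assumptions.

```python
import random

# Hypothetical sketch of permutation operators in the spirit of the abstract:
# a standard order crossover (OX) and a simple swap mutation. The paper's
# operators additionally preserve precedence feasibility of the scheduling
# list, which this sketch does not attempt.

def order_crossover(parent_a, parent_b):
    """Copy a random slice from parent_a, then fill the remaining positions
    with the missing tasks in the order they appear in parent_b."""
    n = len(parent_a)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = parent_a[i:j + 1]
    taken = set(child[i:j + 1])
    fill = [t for t in parent_b if t not in taken]
    k = 0
    for pos in range(n):
        if child[pos] is None:
            child[pos] = fill[k]
            k += 1
    return child

def swap_mutation(chromosome, rate=0.05):
    """With probability `rate`, swap two randomly chosen positions."""
    chromosome = list(chromosome)
    if random.random() < rate:
        i, j = random.sample(range(len(chromosome)), 2)
        chromosome[i], chromosome[j] = chromosome[j], chromosome[i]
    return chromosome
```

Recombining slices of two good parent lists in this way is one concrete reading of how such operators combine the "building-blocks" of good scheduling lists mentioned in the abstract.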
