Links for fulltext (may require subscription):
- Publisher website (DOI): 10.1109/IWCMC61514.2024.10592370
- Scopus: eid_2-s2.0-85199992000
Citations:
- Scopus: 0
Appears in Collections: Conference Paper
Title | Mixture of Experts for Intelligent Networks: A Large Language Model-enabled Approach |
---|---|
Authors | Du, Hongyang; Liu, Guangyuan; Lin, Yijing; Niyato, Dusit; Kang, Jiawen; Xiong, Zehui; Kim, Dong In |
Keywords | Generative AI (GAI); large language model; mixture of experts; network optimization |
Issue Date | 2024 |
Citation | 20th International Wireless Communications and Mobile Computing Conference, IWCMC 2024, 2024, p. 531-536 |
Abstract | Optimizing various wireless user tasks poses a significant challenge for networking systems because of the expanding range of user requirements. Despite advancements in Deep Reinforcement Learning (DRL), the need for customized optimization tasks for individual users complicates developing and applying numerous DRL models, leading to substantial computation-resource and energy consumption and potentially inconsistent outcomes. To address this issue, we propose a novel approach utilizing a Mixture of Experts (MoE) framework, augmented with Large Language Models (LLMs), to analyze user objectives and constraints effectively, select specialized DRL experts, and weigh each decision from the participating experts. Specifically, we develop a gate network to oversee the expert models, allowing a collective of experts to tackle a wide array of new tasks. Furthermore, we innovatively substitute the traditional gate network with an LLM, leveraging its advanced reasoning capabilities to manage expert model selection for joint decisions. Our proposed method reduces the need to train new DRL models for each unique optimization problem, decreasing energy consumption and AI model implementation costs. The LLM-enabled MoE approach is validated through a general maze navigation task and a specific network service provider utility maximization task, demonstrating its effectiveness and practical applicability in optimizing complex networking systems. |
Persistent Identifier | http://hdl.handle.net/10722/353200 |
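The abstract describes a gate network that selects specialized DRL experts and weights each expert's decision to form a joint output. As a minimal illustrative sketch only (not the authors' implementation), the weighting-and-combination step of such an MoE gate can be expressed as a softmax over gate scores applied to the experts' outputs; the expert policies and gate scores below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical stand-ins for specialized DRL experts: each maps a
# 4-dim state to a 2-dim action-value vector via a fixed random matrix.
experts = [lambda s, w=rng.normal(size=(4, 2)): s @ w for _ in range(3)]

def moe_decision(state, gate_scores):
    """Combine expert outputs into one joint decision,
    weighted by softmax-normalized gate scores."""
    weights = softmax(gate_scores)              # one weight per expert
    outputs = [expert(state) for expert in experts]
    return sum(w * out for w, out in zip(weights, outputs))

state = rng.normal(size=4)
# In the paper's variant, an LLM would produce these scores by reasoning
# over the user's objective; here they are fixed for illustration.
joint = moe_decision(state, gate_scores=np.array([2.0, 0.5, -1.0]))
print(joint.shape)  # (2,)
```

In the traditional MoE setup described in the abstract, `gate_scores` would come from a trained gate network; the paper's contribution is replacing that gate with an LLM that performs the expert selection and weighting from the user's stated objectives and constraints.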
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Du, Hongyang | - |
dc.contributor.author | Liu, Guangyuan | - |
dc.contributor.author | Lin, Yijing | - |
dc.contributor.author | Niyato, Dusit | - |
dc.contributor.author | Kang, Jiawen | - |
dc.contributor.author | Xiong, Zehui | - |
dc.contributor.author | Kim, Dong In | - |
dc.date.accessioned | 2025-01-13T03:02:35Z | - |
dc.date.available | 2025-01-13T03:02:35Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | 20th International Wireless Communications and Mobile Computing Conference, IWCMC 2024, 2024, p. 531-536 | - |
dc.identifier.uri | http://hdl.handle.net/10722/353200 | - |
dc.description.abstract | Optimizing various wireless user tasks poses a significant challenge for networking systems because of the expanding range of user requirements. Despite advancements in Deep Reinforcement Learning (DRL), the need for customized optimization tasks for individual users complicates developing and applying numerous DRL models, leading to substantial computation-resource and energy consumption and potentially inconsistent outcomes. To address this issue, we propose a novel approach utilizing a Mixture of Experts (MoE) framework, augmented with Large Language Models (LLMs), to analyze user objectives and constraints effectively, select specialized DRL experts, and weigh each decision from the participating experts. Specifically, we develop a gate network to oversee the expert models, allowing a collective of experts to tackle a wide array of new tasks. Furthermore, we innovatively substitute the traditional gate network with an LLM, leveraging its advanced reasoning capabilities to manage expert model selection for joint decisions. Our proposed method reduces the need to train new DRL models for each unique optimization problem, decreasing energy consumption and AI model implementation costs. The LLM-enabled MoE approach is validated through a general maze navigation task and a specific network service provider utility maximization task, demonstrating its effectiveness and practical applicability in optimizing complex networking systems. | - |
dc.language | eng | - |
dc.relation.ispartof | 20th International Wireless Communications and Mobile Computing Conference, IWCMC 2024 | - |
dc.subject | Generative AI (GAI) | - |
dc.subject | large language model | - |
dc.subject | mixture of experts | - |
dc.subject | network optimization | - |
dc.title | Mixture of Experts for Intelligent Networks: A Large Language Model-enabled Approach | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/IWCMC61514.2024.10592370 | - |
dc.identifier.scopus | eid_2-s2.0-85199992000 | - |
dc.identifier.spage | 531 | - |
dc.identifier.epage | 536 | - |