Article: Large Sequence Model for MIMO Equalization in Fully Decoupled Radio Access Network

Title: Large Sequence Model for MIMO Equalization in Fully Decoupled Radio Access Network
Authors: Yu, Kai; Zhou, Haibo; Xu, Yunting; Liu, Zongxi; Du, Hongyang; Shen, Xuemin
Keywords: In-Context Learning; MIMO Equalization; Transformer
Issue Date: 1-Jan-2025
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Open Journal of the Communications Society, 2025, v. 6, p. 4491-4504
Abstract: Fully decoupled RAN (FD-RAN) aims to improve network performance by decoupling the hardware of base stations (BSs) and enabling flexible cooperation, making it a promising architecture for next-generation wireless networks. With the emergence of artificial intelligence, FD-RAN provides an opportunity to integrate physical-layer signal processing with neural network models. However, conventional deep learning-based multi-input multi-output (MIMO) equalization methods often rely on extensive offline training under fixed channel conditions, resulting in limited generalization to unseen wireless environments. Motivated by the strong generalization ability demonstrated by in-context learning (ICL) in natural language processing, we extend ICL to cooperative MIMO equalization in the FD-RAN framework. In this setup, geographical location information is incorporated as side information to enhance inference accuracy. Lightweight Transformer encoders are deployed at resource-constrained BSs to compress received signals, which are then forwarded to a central unit where a large decoder-only Transformer, adapted from GPT-2, performs equalization. The components are jointly trained to capture channel characteristics effectively. We further evaluate the generalization capability of large models by comparing the proposed ICL-based equalizer against meta-learning baselines. Experimental results show that our method achieves over 29% improvement in normalized mean square error under 8-bit and 12-bit fronthaul constraints compared to an unquantized LMMSE baseline. Moreover, as the pretraining dataset size increases, the ICL-based equalizer consistently outperforms meta-learning approaches, underscoring its scalability and potential for deployment in large-scale, data-driven wireless systems.
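For context, the abstract's reference baseline is the linear minimum mean square error (LMMSE) equalizer, and its reported metric is normalized mean square error (NMSE). A minimal sketch of both, under assumed illustrative parameters (4x8 MIMO, QPSK symbols, 20 dB SNR — none of these figures come from the paper itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 4 transmit antennas, 8 receive antennas.
nt, nr = 4, 8
snr_db = 20.0
sigma2 = 10 ** (-snr_db / 10)  # noise variance, assuming unit-power symbols

# Hypothetical Rayleigh-fading channel and unit-power QPSK symbols.
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
x = (rng.choice([-1, 1], nt) + 1j * rng.choice([-1, 1], nt)) / np.sqrt(2)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
y = H @ x + n  # received signal

# LMMSE estimate: x_hat = (H^H H + sigma^2 I)^(-1) H^H y
x_hat = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(nt), H.conj().T @ y)

# Normalized mean square error, the metric reported in the abstract.
nmse = np.linalg.norm(x_hat - x) ** 2 / np.linalg.norm(x) ** 2
print(f"NMSE: {nmse:.4f}")
```

The paper's ICL equalizer replaces this closed-form estimator with a Transformer that conditions on in-context pilot examples; the NMSE expression above is the yardstick against which the abstract's "over 29% improvement" is stated.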
Persistent Identifier: http://hdl.handle.net/10722/362051


DC Field: Value
dc.contributor.author: Yu, Kai
dc.contributor.author: Zhou, Haibo
dc.contributor.author: Xu, Yunting
dc.contributor.author: Liu, Zongxi
dc.contributor.author: Du, Hongyang
dc.contributor.author: Shen, Xuemin
dc.date.accessioned: 2025-09-19T00:31:18Z
dc.date.available: 2025-09-19T00:31:18Z
dc.date.issued: 2025-01-01
dc.identifier.citation: IEEE Open Journal of the Communications Society, 2025, v. 6, p. 4491-4504
dc.identifier.uri: http://hdl.handle.net/10722/362051
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Open Journal of the Communications Society
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: In-Context Learning
dc.subject: MIMO Equalization
dc.subject: Transformer
dc.title: Large Sequence Model for MIMO Equalization in Fully Decoupled Radio Access Network
dc.type: Article
dc.identifier.doi: 10.1109/OJCOMS.2025.3567495
dc.identifier.scopus: eid_2-s2.0-105004776158
dc.identifier.volume: 6
dc.identifier.spage: 4491
dc.identifier.epage: 4504
dc.identifier.eissn: 2644-125X
dc.identifier.issnl: 2644-125X
