Links for fulltext (may require subscription):
- Publisher Website (DOI): 10.1109/OJCOMS.2025.3567495
- Scopus: eid_2-s2.0-105004776158
Citations:
- Scopus: 0
Article: Large Sequence Model for MIMO Equalization in Fully Decoupled Radio Access Network
| Title | Large Sequence Model for MIMO Equalization in Fully Decoupled Radio Access Network |
|---|---|
| Authors | Yu, Kai; Zhou, Haibo; Xu, Yunting; Liu, Zongxi; Du, Hongyang; Shen, Xuemin |
| Keywords | In-Context Learning; MIMO Equalization; Transformer |
| Issue Date | 1-Jan-2025 |
| Publisher | Institute of Electrical and Electronics Engineers |
| Citation | IEEE Open Journal of the Communications Society, 2025, v. 6, p. 4491-4504 |
| Abstract | Fully decoupled RAN (FD-RAN) aims to improve network performance by decoupling the hardware of base stations (BSs) and enabling flexible cooperation, making it a promising architecture for next-generation wireless networks. With the emergence of artificial intelligence, FD-RAN provides an opportunity to integrate physical-layer signal processing with neural network models. However, conventional deep learning-based multi-input multi-output (MIMO) equalization methods often rely on extensive offline training under fixed channel conditions, resulting in limited generalization to unseen wireless environments. Motivated by the strong generalization ability demonstrated by in-context learning (ICL) in natural language processing, we extend ICL to cooperative MIMO equalization in the FD-RAN framework. In this setup, geographical location information is incorporated as side information to enhance inference accuracy. Lightweight Transformer encoders are deployed at resource-constrained BSs to compress received signals, which are then forwarded to a central unit where a large decoder-only Transformer, adapted from GPT-2, performs equalization. The components are jointly trained to capture channel characteristics effectively. We further evaluate the generalization capability of large models by comparing the proposed ICL-based equalizer against meta-learning baselines. Experimental results show that our method achieves over 29% improvement in normalized mean square error under 8-bit and 12-bit fronthaul constraints compared to an unquantized LMMSE baseline. Moreover, as the pretraining dataset size increases, the ICL-based equalizer consistently outperforms meta-learning approaches, underscoring its scalability and potential for deployment in large-scale, data-driven wireless systems. |
| Persistent Identifier | http://hdl.handle.net/10722/362051 |
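The abstract reports NMSE gains over an unquantized LMMSE baseline. As a point of reference only (a toy NumPy sketch, not the paper's experimental setup), the classical LMMSE MIMO equalizer and the NMSE metric look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4x4 MIMO system: y = H x + n, QPSK symbols, 20 dB SNR
n_tx, n_rx, n_symbols = 4, 4, 1000
noise_var = 10 ** (-20.0 / 10)  # linear noise variance at 20 dB SNR

H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
x = (rng.choice([-1, 1], (n_tx, n_symbols))
     + 1j * rng.choice([-1, 1], (n_tx, n_symbols))) / np.sqrt(2)
n = np.sqrt(noise_var / 2) * (rng.standard_normal((n_rx, n_symbols))
                              + 1j * rng.standard_normal((n_rx, n_symbols)))
y = H @ x + n

# LMMSE equalizer: x_hat = H^H (H H^H + sigma^2 I)^{-1} y
W = H.conj().T @ np.linalg.inv(H @ H.conj().T + noise_var * np.eye(n_rx))
x_hat = W @ y

# Normalized mean square error: ||x - x_hat||^2 / ||x||^2
nmse = np.sum(np.abs(x - x_hat) ** 2) / np.sum(np.abs(x) ** 2)
print(f"NMSE = {nmse:.4f} ({10 * np.log10(nmse):.2f} dB)")
```

This baseline assumes perfect channel knowledge at the receiver; the paper's contribution is an equalizer that instead infers the channel behavior from in-context examples.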
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Yu, Kai | - |
| dc.contributor.author | Zhou, Haibo | - |
| dc.contributor.author | Xu, Yunting | - |
| dc.contributor.author | Liu, Zongxi | - |
| dc.contributor.author | Du, Hongyang | - |
| dc.contributor.author | Shen, Xuemin | - |
| dc.date.accessioned | 2025-09-19T00:31:18Z | - |
| dc.date.available | 2025-09-19T00:31:18Z | - |
| dc.date.issued | 2025-01-01 | - |
| dc.identifier.citation | IEEE Open Journal of the Communications Society, 2025, v. 6, p. 4491-4504 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/362051 | - |
| dc.description.abstract | Fully decoupled RAN (FD-RAN) aims to improve network performance by decoupling the hardware of base stations (BSs) and enabling flexible cooperation, making it a promising architecture for next-generation wireless networks. With the emergence of artificial intelligence, FD-RAN provides an opportunity to integrate physical-layer signal processing with neural network models. However, conventional deep learning-based multi-input multi-output (MIMO) equalization methods often rely on extensive offline training under fixed channel conditions, resulting in limited generalization to unseen wireless environments. Motivated by the strong generalization ability demonstrated by in-context learning (ICL) in natural language processing, we extend ICL to cooperative MIMO equalization in the FD-RAN framework. In this setup, geographical location information is incorporated as side information to enhance inference accuracy. Lightweight Transformer encoders are deployed at resource-constrained BSs to compress received signals, which are then forwarded to a central unit where a large decoder-only Transformer, adapted from GPT-2, performs equalization. The components are jointly trained to capture channel characteristics effectively. We further evaluate the generalization capability of large models by comparing the proposed ICL-based equalizer against meta-learning baselines. Experimental results show that our method achieves over 29% improvement in normalized mean square error under 8-bit and 12-bit fronthaul constraints compared to an unquantized LMMSE baseline. Moreover, as the pretraining dataset size increases, the ICL-based equalizer consistently outperforms meta-learning approaches, underscoring its scalability and potential for deployment in large-scale, data-driven wireless systems. | - |
| dc.language | eng | - |
| dc.publisher | Institute of Electrical and Electronics Engineers | - |
| dc.relation.ispartof | IEEE Open Journal of the Communications Society | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | In-Context Learning | - |
| dc.subject | MIMO Equalization | - |
| dc.subject | Transformer | - |
| dc.title | Large Sequence Model for MIMO Equalization in Fully Decoupled Radio Access Network | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1109/OJCOMS.2025.3567495 | - |
| dc.identifier.scopus | eid_2-s2.0-105004776158 | - |
| dc.identifier.volume | 6 | - |
| dc.identifier.spage | 4491 | - |
| dc.identifier.epage | 4504 | - |
| dc.identifier.eissn | 2644-125X | - |
| dc.identifier.issnl | 2644-125X | - |
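The in-context-learning idea described in the abstract can be illustrated with a deliberately simplified stand-in: given a few demonstration pairs (x_k, y_k) from one unseen channel, equalize a new query observation. Here the "in-context" step is made explicit via least-squares channel estimation, a task the paper's decoder-only Transformer would have to perform implicitly from its prompt (toy dimensions and noise level are assumptions, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Unseen 4x2 MIMO channel; K = 32 demonstration pairs form the "context"
n_tx, n_rx, k_context = 2, 4, 32
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

X_ctx = (rng.choice([-1, 1], (n_tx, k_context))
         + 1j * rng.choice([-1, 1], (n_tx, k_context))) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal((n_rx, k_context))
                + 1j * rng.standard_normal((n_rx, k_context)))
Y_ctx = H @ X_ctx + noise

# Least-squares channel estimate from the context pairs alone
H_hat = Y_ctx @ np.linalg.pinv(X_ctx)

# Equalize a fresh query observation using only the in-context estimate
x_query = np.array([[1 + 1j], [-1 + 1j]]) / np.sqrt(2)
y_query = H @ x_query
x_hat = np.linalg.pinv(H_hat) @ y_query
print(np.round(x_hat * np.sqrt(2), 2))  # close to the transmitted QPSK symbols
```

No pretraining on this particular channel is needed: everything the equalizer knows about H comes from the K context pairs, which is the generalization property the abstract attributes to the ICL-based equalizer.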
