Links for fulltext (May Require Subscription):
- Publisher Website (DOI): 10.1109/BigData52589.2021.9672015
- Scopus: eid_2-s2.0-85125306677
Citations:
- Scopus: 0

Appears in Collections:
- Conference Paper: Transparent Network Memory Storage for Efficient Container Execution in Big Data Clouds
| Title | Transparent Network Memory Storage for Efficient Container Execution in Big Data Clouds |
|---|---|
| Authors | Bae, Juhyun; Liu, Ling; Chow, Ka Ho; Wu, Yanzhao; Su, Gong; Iyengar, Arun |
| Issue Date | 2021 |
| Citation | Proceedings - 2021 IEEE International Conference on Big Data, Big Data 2021, 2021, p. 76-85 |
| Abstract | This paper presents a transparent Container Network Memory storage device, coined CNetMem, which aims to address the open problem of unpredictable performance degradation of containers when the working set of an application no longer fits in container memory. First, CNetMem enables application tenants running in a container to park their working-set memory and files in faster network memory storage by organizing a group of remote memory nodes as remote memory donors. This allows CNetMem to take advantage of idle remote memory in a cluster before resorting to a slow local I/O subsystem such as local disk, without any modification to the host OS or applications. Second, CNetMem provides a hybrid batching technique to remove or alleviate bottlenecks on the performance-critical I/O path for remote memory reads and writes, with replication or disk backup for fault tolerance. Third, CNetMem introduces a rank-based node selection algorithm to find the optimal node for placing remote memory blocks across the cluster, which reduces the performance impact of remote memory eviction. Extensive experiments are conducted on three big data applications and four machine learning workloads. The results show that CNetMem achieves up to 172× throughput improvement over vanilla Linux and up to 5.9× completion-time improvement over existing approaches on big data and ML workloads. |
| Persistent Identifier | http://hdl.handle.net/10722/343520 |
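The abstract mentions a rank-based node selection algorithm for placing remote memory blocks, but this record does not specify it. Below is a minimal illustrative sketch, not the authors' algorithm: the per-node metrics (free memory, round-trip latency, eviction rate), the scoring weights, and all names are assumptions chosen only to show what ranking memory donors could look like.

```python
from dataclasses import dataclass

@dataclass
class MemoryDonor:
    """A remote node offering idle memory (all fields hypothetical)."""
    name: str
    free_mem_mb: int      # idle memory available for remote blocks
    rtt_us: float         # round-trip latency to this node
    eviction_rate: float  # fraction of donated blocks recently evicted

def rank_donors(donors, block_mb):
    """Rank candidate donors for placing a remote memory block.

    More free memory ranks higher; latency and eviction pressure
    rank lower. The weights below are illustrative assumptions.
    """
    eligible = [d for d in donors if d.free_mem_mb >= block_mb]

    def score(d):
        return (d.free_mem_mb / 1024.0) - 0.01 * d.rtt_us - 50.0 * d.eviction_rate

    return sorted(eligible, key=score, reverse=True)

donors = [
    MemoryDonor("node-a", free_mem_mb=8192, rtt_us=120.0, eviction_rate=0.05),
    MemoryDonor("node-b", free_mem_mb=2048, rtt_us=40.0, eviction_rate=0.00),
    MemoryDonor("node-c", free_mem_mb=512, rtt_us=30.0, eviction_rate=0.20),
]
ranked = rank_donors(donors, block_mb=1024)
print([d.name for d in ranked])  # node-c is too small to hold the block
```

A score of this shape lets the system prefer nodes that are both roomy and close, which is consistent with the abstract's stated goal of reducing the performance impact of remote memory eviction.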
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Bae, Juhyun | - |
| dc.contributor.author | Liu, Ling | - |
| dc.contributor.author | Chow, Ka Ho | - |
| dc.contributor.author | Wu, Yanzhao | - |
| dc.contributor.author | Su, Gong | - |
| dc.contributor.author | Iyengar, Arun | - |
| dc.date.accessioned | 2024-05-10T09:08:45Z | - |
| dc.date.available | 2024-05-10T09:08:45Z | - |
| dc.date.issued | 2021 | - |
| dc.identifier.citation | Proceedings - 2021 IEEE International Conference on Big Data, Big Data 2021, 2021, p. 76-85 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/343520 | - |
| dc.description.abstract | This paper presents a transparent Container Network Memory storage device, coined CNetMem, which aims to address the open problem of unpredictable performance degradation of containers when the working set of an application no longer fits in container memory. First, CNetMem enables application tenants running in a container to park their working-set memory and files in faster network memory storage by organizing a group of remote memory nodes as remote memory donors. This allows CNetMem to take advantage of idle remote memory in a cluster before resorting to a slow local I/O subsystem such as local disk, without any modification to the host OS or applications. Second, CNetMem provides a hybrid batching technique to remove or alleviate bottlenecks on the performance-critical I/O path for remote memory reads and writes, with replication or disk backup for fault tolerance. Third, CNetMem introduces a rank-based node selection algorithm to find the optimal node for placing remote memory blocks across the cluster, which reduces the performance impact of remote memory eviction. Extensive experiments are conducted on three big data applications and four machine learning workloads. The results show that CNetMem achieves up to 172× throughput improvement over vanilla Linux and up to 5.9× completion-time improvement over existing approaches on big data and ML workloads. | - |
| dc.language | eng | - |
| dc.relation.ispartof | Proceedings - 2021 IEEE International Conference on Big Data, Big Data 2021 | - |
| dc.title | Transparent Network Memory Storage for Efficient Container Execution in Big Data Clouds | - |
| dc.type | Conference_Paper | - |
| dc.description.nature | link_to_subscribed_fulltext | - |
| dc.identifier.doi | 10.1109/BigData52589.2021.9672015 | - |
| dc.identifier.scopus | eid_2-s2.0-85125306677 | - |
| dc.identifier.spage | 76 | - |
| dc.identifier.epage | 85 | - |