
A Case for Adaptive Resource Management in Alibaba Datacenter Using Neural Networks

Abstract

Both resource efficiency and application QoS have long been major concerns of datacenter operators, yet they remain difficult to reconcile. High resource utilization increases the risk of resource contention among co-located workloads, causing latency-critical (LC) applications to suffer unpredictable, and even unacceptable, performance. A large body of prior work has explored effective mechanisms to protect the QoS of LC applications while improving resource efficiency. In this paper, we propose MAGI, a resource management runtime that leverages neural networks to monitor and pinpoint the root cause of performance interference, and adjusts the resource shares of the offending applications to ensure the QoS of LC applications. MAGI is a practice in Alibaba datacenter that provides on-demand resource adjustment for applications using neural networks. The experimental results show that MAGI reduces the performance degradation of an LC application by up to 87.3% when it is co-located with antagonistic applications.
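The abstract only outlines MAGI's control cycle at a high level (monitor, pinpoint the interfering co-located job with a neural network, then adjust resource shares), so the following Python sketch is purely illustrative and is not the authors' implementation: read_perf_counters, InterferenceModel, throttle_cpu, the SLO check, and the step size are all hypothetical placeholders standing in for the monitoring, inference, and actuation stages described above.

    # Illustrative sketch only: every name below is a hypothetical placeholder,
    # not MAGI's real telemetry, model, or actuator.
    import time
    import numpy as np

    def read_perf_counters(container_id: str) -> np.ndarray:
        # Placeholder: would sample per-container metrics (e.g., CPI, cache
        # misses, memory bandwidth) from perf/cgroup telemetry each interval.
        raise NotImplementedError("wire up perf/cgroup telemetry here")

    class InterferenceModel:
        # Stand-in for a trained neural network: a tiny MLP forward pass that
        # scores how likely a co-located batch job is the source of interference.
        def __init__(self, w1, b1, w2, b2):
            self.w1, self.b1, self.w2, self.b2 = w1, b1, w2, b2

        def score(self, features: np.ndarray) -> float:
            hidden = np.maximum(0.0, features @ self.w1 + self.b1)        # ReLU layer
            return float(1.0 / (1.0 + np.exp(-(hidden @ self.w2 + self.b2))))  # sigmoid

    def throttle_cpu(container_id: str, quota_fraction: float) -> None:
        # Placeholder actuator: would shrink the container's CPU quota, e.g. by
        # writing cpu.cfs_quota_us / cpu.max in its cgroup.
        raise NotImplementedError("adjust the container's cgroup CPU share here")

    def control_loop(lc_app, batch_jobs, model, latency_slo, get_tail_latency,
                     period_s=1.0, step=0.1):
        """If the LC app violates its latency SLO, pinpoint the most suspicious
        co-located batch job and reduce its CPU share; otherwise do nothing."""
        while True:
            if get_tail_latency(lc_app) > latency_slo:
                scores = {job: model.score(read_perf_counters(job)) for job in batch_jobs}
                culprit = max(scores, key=scores.get)      # most likely antagonist
                throttle_cpu(culprit, quota_fraction=1.0 - step)
            time.sleep(period_s)

Under these assumptions the loop intervenes only when the LC application's tail latency exceeds its SLO, which mirrors the on-demand adjustment the abstract describes.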



Author information

Affiliations

  1. State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China

    Sa Wang, Tian-Ze Wu, Wen-Jie Li, Xu-Sheng Zhan & Yun-Gang Bao

  2. University of Chinese Academy of Sciences, Beijing, 100049, China

    Sa Wang, Tian-Ze Wu, Wen-Jie Li, Xu-Sheng Zhan & Yun-Gang Bao

  3. Peng Cheng Laboratory, Shenzhen, 518055, China

    Sa Wang & Yun-Gang Bao

  4. Alibaba Inc., Hangzhou, 311121, China

    Yan-Hai Zhu, Shan-Pei Chen & Hai-Yang Ding

  5. Department of Computer Science, Wayne State University, Detroit, MI, 48202, U.S.A.

    Wei-Song Shi

Authors
  1. Sa Wang
  2. Yan-Hai Zhu
  3. Shan-Pei Chen
  4. Tian-Ze Wu
  5. Wen-Jie Li
  6. Xu-Sheng Zhan
  7. Hai-Yang Ding
  8. Wei-Song Shi
  9. Yun-Gang Bao

Corresponding author

Correspondence to Yan-Hai Zhu.

Electronic supplementary material

ESM 1

(PDF 1107 kb)

About this article

Cite this article

Wang, S., Zhu, Y., Chen, S. et al. A Case for Adaptive Resource Management in Alibaba Datacenter Using Neural Networks. J. Comput. Sci. Technol. 35, 209–220 (2020). https://doi.org/10.1007/s11390-020-9732-x


Keywords

  • resource management
  • neural network
  • resource efficiency
  • tail latency