My Web Markups - richard yuwen
  • Allocations can't be over-committed
  • Non-root cgroups can distribute domain resources to their children only when they don't have any processes of their own
  • Only one process can be migrated on a single write(2) call
  • use cases where multiple cgroups write to a single inode simultaneously are not supported well
  • cgroup writeback is implemented on ext2, ext4, btrfs, f2fs, and xfs
  • per-cgroup dirty memory states
  • dirty memory ratio
  • how much the workload is being impacted due to lack of memory
  • memory.pressure
  • memory.stat
  • memory.events
  • Memory usage hard limit
  • Memory usage throttle limit
  • Best-effort memory protection
  • Protections can be hard guarantees or best effort soft boundaries
  • Memory is stateful and implements both limit and protection models
  • cgroup is a mechanism to organize processes hierarchically and distribute system resources along the hierarchy in a controlled and configurable manner
  • "min" and "max"
  • weight"
  • Limits can be over-committed
  • [0, max] and defaults to "max"
  • [1, 10000] with the default at 100
  • absolute resource guarantee
  • weight based resource distribution
  • The root cgroup should be exempt from resource control and thus shouldn't have resource control interface files
  • Consider cgroup namespaces as delegation boundaries
  • namespace root
  • all non-root "cgroup.subtree_control" files can only contain controllers which are enabled in the parent's "cgroup.subtree_control" file.
  • not subject to the no internal process constraint
  • threaded domain or thread root
  • The io controller, in conjunction with the memory controller, implements control of page cache writeback IOs
  • CPU
  • Memory
  • IO
  • PID
  • Cpuset
  • Device
  • RDMA
  • HugeTLB
  • Misc
  • A read-only flat-keyed file
  • allows limiting the HugeTLB usage per control group
  • controller limit during page fault
  • anon_thp
44 annotations
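The highlights above cover cgroup v2's resource distribution models (weights, limits, protections) and its core interface files. As a rough illustration only, here is a minimal Python sketch of how those knobs are typically exercised from userspace; the child group name "demo" and the values are made up, and it assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup with the cpu, memory, and io controllers available (root privileges required).

```python
# Hedged sketch: create a child cgroup and set weight/limit/protection knobs.
# Paths and values are illustrative; requires root and a cgroup v2 mount.
import os
from pathlib import Path

CGROOT = Path("/sys/fs/cgroup")   # cgroup v2 mount point (assumed)
child = CGROOT / "demo"           # hypothetical child cgroup

def write(f: Path, value: str) -> None:
    """Write a single value to a cgroup interface file."""
    f.write_text(value)

# Enable controllers for children in the parent's cgroup.subtree_control.
# (A non-root subtree_control may only contain controllers enabled in the parent.)
write(CGROOT / "cgroup.subtree_control", "+cpu +memory +io")

child.mkdir(exist_ok=True)

# Weight-based CPU distribution: range [1, 10000], default 100.
write(child / "cpu.weight", "200")

# Memory limits ([0, max], defaults to "max") and best-effort protection.
write(child / "memory.max", str(4 * 1024**3))   # hard limit: 4 GiB
write(child / "memory.high", str(3 * 1024**3))  # throttle limit: 3 GiB
write(child / "memory.low", str(1 * 1024**3))   # best-effort protection: 1 GiB

# Migrate the current process into the child cgroup
# (only one process can be migrated per write(2) to cgroup.procs).
write(child / "cgroup.procs", str(os.getpid()))
```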
  • a memory-intensive process
  • out-of-the-box improvement over the kernel OOM killer
  • The kernel OOM handler’s main job is to protect the kernel
  • oomd
  • rejects a few and continues to run
  • Load shedding
  • Load shedding is a technique to avoid overloading and crashing a system by temporarily rejecting new requests. The idea is that all loads will be better served if the system rejects a few and continues to run, instead of accepting all requests and crashing due to lack of resources. In a recent test, a team at Facebook that runs asynchronous jobs, called Async, used memory pressure as part of a load shedding strategy to reduce the frequency of OOMs.

The Async tier runs many short-lived jobs in parallel. Because there was previously no way of knowing how close the system was to invoking the OOM handler, Async hosts experienced excessive OOM kills. Using memory pressure as a proactive indicator of general memory health, Async servers can now estimate, before executing each job, whether the system is likely to have enough memory to run the job to completion. When memory pressure exceeds the specified threshold, the system ignores further requests until conditions stabilize. The chart shows how Async responds to changes in memory pressure: when memory.full (in orange) spikes, Async sheds jobs back to the Async dispatcher, shown by the blue async_execution_decision line. The results were significant: load shedding based on memory pressure decreased memory overflows in the Async tier and increased throughput by 25%. This enabled the Async team to replace larger servers with servers using less memory, while keeping OOMs under control.

oomd - memory pressure-based OOM: oomd is a new userspace tool similar to the kernel OOM handler, but one that uses memory pressure to provide greater control over when processes start getting killed, and which processes are selected. The kernel OOM handler's main job is to protect the kernel; it's not concerned with ensuring workload progress or health. Consequently, it's less than ideal in terms of when and how it operates: It starts killing processes only after failing at multiple attempts to allocate memory, i.e., after a problem is already underway. It selects processes to kill using primitive heuristics, typically killing whichever one frees the most memory. It can fail to start at all when the system is thrashing: memory utilization remains within normal limits, but workloads don't make progress, and the OOM killer never gets invoked to clean up the mess. Lacking knowledge of a process's context or purpose, the OOM killer can even kill vital system processes: when this happens, the system is lost, and the only solution is to reboot, losing whatever was running and taking tens of minutes to restore the host. Using memory pressure to monitor for memory shortages, oomd can deal more proactively and gracefully with increasing pressure by pausing some tasks to ride out the bump, or by performing a graceful app shutdown with a scheduled restart. In recent tests, oomd was an out-of-the-box improvement over the kernel OOM killer and is now deployed in production on a number of Facebook tiers.

Case study: oomd at Facebook: See how oomd was deployed in production at Facebook in this case study looking at Facebook's build system, one of the largest services running at Facebook.

oomd in the fbtax2 project: As discussed previously, the fbtax2 project team prioritized protection of the main workload by using memory.low to soft-guarantee memory to workload.slice, the main workload's cgroup. In this work-conserving model, processes in system.slice could use the memory when the main workload didn't need it.
There was a problem though: when a memory-intensive process in system.slice can no longer take memory due to the memory.low protection on workload.slice, the memory contention turns into IO pressure from page faults, which can compromise overall system performance. Because of limits set in system.slice's IO controller (which we'll look at in the next section of this case study), the increased IO pressure causes system.slice to be throttled. The kernel recognizes the slowdown is caused by lack of memory, and memory.pressure rises accordingly. oomd monitors the pressure, and once it exceeds the configured threshold, kills one of the processes—most likely the memory hog in system.slice—and resolves the situation before the excess memory pressure crashes the system.
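A minimal sketch of the load-shedding idea described above, assuming a PSI-style memory.pressure file; the threshold value, cgroup path, and toy job are hypothetical and purely illustrative.

```python
# Hedged sketch: reject new jobs while "full" memory pressure is above a
# threshold, in the spirit of the Async load-shedding strategy described above.
from pathlib import Path
import time

PRESSURE_FILE = Path("/sys/fs/cgroup/memory.pressure")  # assumed path
FULL_AVG10_THRESHOLD = 10.0                             # illustrative threshold

def full_avg10() -> float:
    """Parse the 'full avg10=...' value from a PSI memory.pressure file."""
    for line in PRESSURE_FILE.read_text().splitlines():
        if line.startswith("full"):
            fields = dict(kv.split("=") for kv in line.split()[1:])
            return float(fields["avg10"])
    return 0.0

def maybe_run(job) -> bool:
    """Run the job only if memory pressure looks healthy; otherwise shed it."""
    if full_avg10() > FULL_AVG10_THRESHOLD:
        return False          # shed: hand the job back to the dispatcher
    job()
    return True

if __name__ == "__main__":
    while not maybe_run(lambda: time.sleep(0.1)):  # toy job
        time.sleep(1)                              # wait for pressure to subside
```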
  • outweigh the overhead of occasional OOM events
  • demand exceeds the total memory available
  • Overcommitting on memory—promising more memory for processes than the total system memory—is a key technique for increasing memory utilization
10 annotations
  • io.latency
  • You protect workloads with io.latency by specifying a latency target (e.g., 20ms). If the protected workload experiences average completion latency longer than its latency target value, the controller throttles any peers that have a more relaxed latency target than the protected workload. The delta between the prioritized cgroup's target and the targets of other cgroups is used to determine how hard the other cgroups are throttled: if a cgroup with io.latency set to 20ms is prioritized, cgroups with latency targets <= 20ms will never be throttled, while a cgroup with 50ms will get throttled harder than a cgroup with a 30ms target.

Interface: The interface for io.latency is in a format similar to the other controllers: MAJOR:MINOR target=<target time in microseconds>. When io.latency is enabled, you'll see additional stats in io.stat: depth=<integer>—the current queue depth for the group; avg_lat=<time in microseconds>—the running average IO latency for this group, which provides a general idea of the overall latency you can expect for this workload on the specified disk. Note: all cgroup knobs can be configured through systemd. See the systemd.resource-control documentation for details.

Using io.latency: The limits are applied only at the peer level in the hierarchy. This means that in the diagram below, only groups A, B, and C will influence each other, and groups D and F will influence each other. Group G will influence nobody. Thus, a common way to configure this is to set io.latency in groups A, B, and C.

Configuration strategies: Generally you don't want to set a value lower than the latency your device supports. Experiment to find the value that works best for your workload: start at higher than the expected latency for your device, and watch the avg_lat value in io.stat for your workload group to get an idea of the latency during normal operation. Use this value as a basis for your real setting: try setting it, for example, around 20% higher than the value in io.stat. Experimentation is key here since avg_lat is a running average and subject to statistical anomalies. Setting too tight a control (i.e., too low a latency target) provides greater protection to a workload, but it can come at the expense of overall system IO overhead if other workloads get throttled prematurely. Another important factor is that hard disk IO latency can fluctuate greatly: if the latency target is too low, other workloads can get throttled due to normal latency fluctuations, again leading to sub-optimal IO control. Thus, in most cases you'll want to set the latency target higher than expected latency to avoid unnecessary throttling—the only question is by how much. Two general approaches have proven most effective:

Setting io.latency higher (20-25%) than the usual expected latency. This provides a tighter protection guarantee for the workload. However, the tighter control can sometimes mean the system pays more in terms of IO overhead, which leads to lower system-wide IO utilization. A setting like this can be effective for systems with SSDs.

Setting io.latency to several times higher than the usual expected latency, especially for hard disks. A hard disk's usual uncontended completion latencies are between 7 and 20ms, but when contention occurs, the completion latency balloons quickly, easily reaching 10 times normal. Because the latency is so volatile, workloads running on hard disks are usually not sensitive to small swings in completion latency; things break down only in extreme conditions when latency jumps several times higher (which isn't difficult to trigger). Effective protection can be achieved in cases like this by setting a relaxed target on the protected group (e.g., 50 or 75ms), and a higher setting for lower-priority groups (e.g., an additional 25ms over the higher-priority group). This way, the workload can have reasonable protection without significantly compromising hard disk utilization by triggering throttling when it's not necessary.

How throttling works: io.latency is work conserving: as long as everybody's meeting their latency target, the controller doesn't do anything. Once a group starts missing its target, it begins throttling any peer group that has a higher target than itself. This throttling takes two forms: Queue depth throttling—the number of outstanding IOs a group is allowed to have. The controller will clamp down relatively quickly, starting at no limit and going all the way down to 1 IO at a time. Artificial delay induction—there are certain types of IO that can't be throttled without possibly affecting higher-priority groups adversely, such as swapping and metadata IO. These types of IO are allowed to occur normally, but they are "charged" to the originating group. Once the victimized group starts meeting its latency target again, it will start unthrottling any peer groups that were throttled previously. If the victimized group simply stops doing IO, the global counter will unthrottle appropriately.

fbtax2 IO controller configuration: As discussed previously, the goal of the fbtax2 cgroup hierarchy was to protect workload.slice. In addition to the memory controller settings, the team found that IO protections were also necessary to make it all work. When memory pressure increases, it often translates into IO pressure. Memory pressure leads to page evictions: the higher the memory pressure, the more page evictions and re-faults, and therefore more IOs. It isn't hard to generate memory pressure high enough to saturate a disk with IOs, especially the rotating hard disks that were used on the machines in the fbtax2 project. To correct for this, the team used a strategy similar to strategy 2 described above: they prioritized workload.slice by setting its io.latency higher than expected, to 50ms. This provides more protection for workload.slice than for system.slice, whose io.latency is set to 75ms. When workload.slice has been delayed by lack of IO past its 50ms threshold, it gets IO priority: the kernel limits IO from system.slice and reallocates it to workload.slice so the main workload can keep running. hostcritical.slice was given a similar level of protection as workload.slice, since any problems there can also impact the main workload. In this case it used memory.min to guarantee it will have enough memory to keep running. Though they knew system.slice needed lower IO priority, the team determined the 75ms number through trial and error, modifying it repeatedly until they achieved the right balance between protecting the main workload and ensuring the stability of system.slice. In the final installment of this case study, we'll summarize the strategies used in the fbtax2 project, and look at some of the utilization gains that resulted in Facebook's server farms.
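As a concrete illustration of the fbtax2-style configuration above, here is a small Python sketch that writes io.latency targets for workload.slice and system.slice. The device major:minor (8:0) is an assumption, and the target is expressed in microseconds as in the interface description above; this is a sketch, not the project's actual tooling.

```python
# Hedged sketch: set per-cgroup io.latency targets as in the fbtax2 example
# (50ms for workload.slice, 75ms for system.slice). Device 8:0 is assumed.
from pathlib import Path

CGROOT = Path("/sys/fs/cgroup")
DEV = "8:0"  # MAJOR:MINOR of the disk being controlled (assumption)

def set_io_latency(slice_name: str, target_ms: int) -> None:
    """Write 'MAJOR:MINOR target=<microseconds>' into the slice's io.latency."""
    target_us = target_ms * 1000
    (CGROOT / slice_name / "io.latency").write_text(f"{DEV} target={target_us}")

set_io_latency("workload.slice", 50)   # protected main workload
set_io_latency("system.slice", 75)     # lower-priority system services

# Observe the resulting queue depth and running average latency in io.stat.
print((CGROOT / "workload.slice" / "io.stat").read_text())
```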
  • This is where you specify IO limits
  • IO
  • accounting of all IOs per-cgroup
  • IOPS
  • system has the flexibility to limit IO to low priority workloads
7 annotations
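The annotations above about per-cgroup IO accounting and limiting IO to low-priority workloads refer to absolute IO limits alongside io.latency. A rough Python sketch of writing such a limit through the io.max interface might look like the following; the device number, cgroup name, and limit values are assumptions for illustration.

```python
# Hedged sketch: cap read bandwidth and write IOPS for a low-priority cgroup
# using the io.max interface. Device number and limits are illustrative.
from pathlib import Path

CGROOT = Path("/sys/fs/cgroup")
DEV = "8:0"  # MAJOR:MINOR (assumption)

def set_io_max(cgrp: str, rbps: int, wiops: int) -> None:
    """Write 'MAJOR:MINOR rbps=<bytes/s> wiops=<ios/s>' into io.max."""
    (CGROOT / cgrp / "io.max").write_text(f"{DEV} rbps={rbps} wiops={wiops}")

# Limit a hypothetical background cgroup to 10 MiB/s reads and 300 write IOPS.
set_io_max("background.slice", 10 * 1024**2, 300)

# Per-cgroup IO accounting is visible in io.stat (rbytes, wbytes, rios, wios, ...).
print((CGROOT / "background.slice" / "io.stat").read_text())
```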
23 annotations
  • maintain a certain resource buffer
  • unified back-fill scheduling
  • provide guidance on resource usage, so that resource needs are met while preventing the waste caused by over-provisioning
  • assess and give early warning, so containers can be automatically scaled out or migrated early
  • complementary in time and space
  • application profiles can provide a basis for scheduling
  • application
  • profile
  • Applications use resources in fairly regular patterns. The conventional approach is to classify them by their resource-usage characteristics as compute-intensive, memory-intensive, storage-intensive, and so on. This simple approach cannot describe resource usage across the CPU/memory/storage/network dimensions and the time dimension. We use reinforcement machine-learning algorithms to extract resource-usage features from an application's historical data and then group applications into categories, forming application profiles. Application profiles can provide a basis for scheduling: based on the profiling results, applications are scheduled with affinity/anti-affinity so that containers of different classes are co-located and their resource demands complement each other in time and space without interfering with one another. For the multiple containers of a single application, the profile can also be used to assess container health and raise early warnings, so containers can be automatically scaled out or migrated before the business is affected. In addition, profiles provide guidance for an application's future resource usage, so that resource needs are met while preventing the waste caused by over-provisioning.

Serverless and delay tolerance: In earlier architectures, a large share of business applications were long-running services, characterized by the need to serve for long periods. In practice, many applications do not need to run all the time. Take an image-conversion application: product images uploaded by merchants need to be converted to multiple sizes, stamped with a watermark/logo, and uploaded to storage. The image-conversion application only needs to serve while users are uploading images and does not need to hold resources the rest of the time; it is also insensitive to latency and can tolerate delayed execution of up to tens of seconds. For such event-driven, delay-tolerant applications, we push them to move from long-running services to the serverless architecture provided by JDOS. The serverless architecture has played a huge role in turning long-running services into offline compute tasks. Serverless application tasks and big-data offline compute tasks are abstracted into unified batch jobs. When a batch job is submitted to Archimedes, it must come with a job description covering the task function, task type, resource description, and the job's delay-tolerance time; Archimedes then schedules and executes it. The delay-tolerance time is the longest execution delay the job can tolerate: the job need not run immediately after submission and can wait a certain time before it gets resources. This gives Archimedes important input for its scheduling plan, allowing it to plan its pipeline in advance.

Resource fragmentation and time-space reuse: Servers purchased in different batches have different resource ratios, and different applications request different ratios as well. Fit-based scheduling algorithms easily leave a server with its CPU quota fully allocated but tens of GB of memory still free, or with memory fully allocated and several CPU cores idle. We call this resource fragmentation. It occurs on almost every physical machine. Long-running services, especially user-facing ones, show clear daily peaks and troughs, and different long-running services consume resources differently, so uneven utilization across time and space is the norm in the cluster. Fragmentation and uneven temporal/spatial distribution waste enormous amounts of resources. We prefer long-running services to stay put and be migrated as rarely as possible. To deal with fragmentation and temporal/spatial imbalance, Archimedes uses batch jobs for unified back-fill scheduling, so fragmented resources are fully used and resources are reused across time and space. Archimedes not only schedules the current resources and tasks, but also combines application profiles with batch-job descriptions to plan task scheduling for a period into the future, so that business keeps running normally while resources are fully used, effectively preventing resource contention between batch jobs and long-running services. Archimedes always maintains a certain resource buffer to absorb the resource demand of sudden traffic spikes.

SLA: Both long-running services and batch jobs sign an SLA with Archimedes, which guarantees the service's or job's resource usage, availability, and so on. For long-running services in particular, Archimedes prioritizes their resource usage and availability. When resource contention is about to occur between batch jobs and long-running services, or between long-running services, Archimedes sorts them by the availability and priority in their SLAs and evicts or migrates tasks or services in that order, ensuring that high-priority long-running services get resources first, are not migrated unless necessary, and are not affected by contention from other tasks/services.

Cluster autonomy: JDOS provides automated cluster management; Archimedes turns the cluster from an automated into an autonomous management system. The community kube-controller offers a reference controller, but it has many drawbacks: it receives too little information, only node status from the apiserver, so it cannot accurately tell whether a node is offline, leading to misjudgments and frequent container migrations. We therefore extended the controller in Archimedes into a separate system, MAGI. MAGI has five nodes spread across different physical PODs in the data center. It is responsible for the cluster's autonomous decisions and uses a voting and consultation scheme to double-check decisions such as whether a node is offline or whether containers need to be migrated. Only after MAGI has tallied the votes and decided is a node actually taken offline or a container actually migrated.

Beyond scheduling: Archimedes is not just the scheduling system of JDOS; it is also a data-analysis platform for application resource usage, providing direct data support for project management, business, audit, procurement, and other departments. Most of a data center's power goes to cooling, and cooling mainly serves to keep CPUs cool. Based on application profiles and the scheduling plan, Archimedes adjusts server CPU frequencies accordingly to save energy. This feature has been deployed at scale in two core data centers and has cut power consumption by 17%. In 2018 we will further optimize the scheduling algorithms, refine application profiles, and improve scheduling accuracy, with more practice in consolidating computation, improving efficiency, and saving energy; we will also share more front-line production scheduling data and models with the industry.

About the author: Bao Yongcheng is technical director of the Infrastructure Platform department in JD.com's R&D organization. He joined JD's R&D organization in 2013 and is responsible for JD's container platform JDOS, leading the team to land JD's large-scale container strategy projects and effectively support daily business systems and peak promotion traffic. He currently focuses on the JDOS Archimedes strategic project and agile, intelligent data centers. Thanks to Cai Fangfang for planning this article.
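To make the batch-job description above concrete, here is a small Python sketch of what such a delay-tolerant job specification might look like when submitted to a scheduler like Archimedes; all field names and values are hypothetical, since the text does not give the actual API.

```python
# Hedged sketch: a delay-tolerant batch job description in the spirit of the
# Archimedes design above. Field names are hypothetical, not JDOS's real API.
from dataclasses import dataclass, field

@dataclass
class BatchJob:
    task_function: str                             # entry point of the job
    task_type: str                                 # e.g. "serverless" or "offline-compute"
    resources: dict = field(default_factory=dict)  # resource description
    delay_tolerance_s: int = 0                     # longest acceptable delay before execution

# An image-conversion job that can wait up to 60 seconds for resources,
# letting the scheduler back-fill it into fragmented or idle capacity.
job = BatchJob(
    task_function="convert_and_watermark",
    task_type="serverless",
    resources={"cpu": 2, "memory_gb": 4},
    delay_tolerance_s=60,
)
print(job)
```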
  • resource scheduling and eviction in JD.com's data centers
  • a huge gap between the resources business systems request and what they actually use
  • relying on adding machines to handle instantaneous peak traffic
  • average cluster resource utilization improved 3x
  • 30 million core-hours
14 annotations
27 annotations
47 annotations
  • scoring function
  • the interconnect topology of the GPU cards
  • load is measured only as the number of containers bound to a card
  • at Allocate time, query the information in Redis to check the load
  • an external database stores each node's GPU load information
  • NVML retrieves the real-time status of the GPU cards
  • dynamically decide which GPU IDs to mount based on the real GPU load at task-creation time
  • the mapping from virtual IDs to real IDs cannot be a simple linear mapping
  • roughly balanced GPU load
  • fabricate extra virtual GPU IDs to present to K8s
  • GPU affinity is not taken into account
  • provide a sharing mechanism to maximize resource usage
  • GPU memory allocation strategy
  • no GPU virtualization at all
  • Allocate is called when a container is created and returns the special configuration needed to use that resource on the host, such as environment variables; this information is handed to the Kubelet and passed to the container when it starts
  • vendor-domain/resource
  • register the resource with K8s
  • the Device-Plugin mechanism is essentially an RPC service
  • AMD GPUs, etc.
  • RDMA devices
  • NVIDIA_VISIBLE_DEVICES=0,1
  • an environment variable specifies which GPU devices will be mounted
  • multiple Nvidia-Docker containers can mount the same GPU
  • the Nvidia-Driver on the host side
  • interact through the interfaces exposed by libnvidia-container
  • Runc calls a hook named nvidia-container-runtime-hook
  • GPU support is placed in libnvidia-container, an OCI-compliant runtime library extension
  • Containerd wraps Runc and other functions such as lifecycle management, and runs on the host as a daemon
  • using Nvidia-Docker for AI system environments is already a very mainstream practice
  • the native Nvidia-Device-Plugin
  • the Device-Plugin mechanism by which K8s schedules GPUs as extended resources
  • mounting GPUs into containers
  • the general flow of GPU scheduling
  • channel affinity between GPU cards is not considered
  • each GPU can be used by at most one container at a time
  • the Device-Plugin approach adds devices beyond the default resources (CPU, memory, etc.)
  • using Kubernetes to manage Nvidia-Docker makes GPU task allocation simpler and more reasonable, and has become the approach of almost all mainstream AI compute platforms
  • managing and using GPUs at container granularity is much easier than doing so from the host's point of view
  • the runtime Nvidia wrote for Docker
  • utilization of Nvidia GPUs
40 annotations
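A minimal Python sketch of the idea in the annotations above: expose extra virtual GPU IDs to K8s, and at Allocate time map them to real GPU IDs by picking the least-loaded card. Here "load" is simply the number of containers bound to a card, and the in-memory load table stands in for the external store (e.g. Redis) mentioned above; this illustrates only the mapping logic, not a working device plugin.

```python
# Hedged sketch: map virtual GPU IDs to real GPU IDs by current load,
# mimicking the shared-GPU device-plugin idea described above.
# The load table stands in for the per-node external store (e.g. Redis).

REAL_GPUS = [0, 1, 2, 3]          # physical GPU ids on the node
SHARES_PER_GPU = 4                # how many virtual ids are faked per real GPU

# virtual ids advertised to the kubelet: 0..15 for 4 real GPUs
VIRTUAL_GPUS = list(range(len(REAL_GPUS) * SHARES_PER_GPU))

# containers currently bound to each real GPU (stand-in for Redis data)
gpu_load = {gpu: 0 for gpu in REAL_GPUS}

def allocate(requested_virtual_ids):
    """Pick the least-loaded real GPUs at Allocate time and return the
    NVIDIA_VISIBLE_DEVICES value to hand back to the kubelet/container."""
    chosen = []
    for _ in requested_virtual_ids:
        real = min(gpu_load, key=gpu_load.get)   # not a simple linear mapping
        gpu_load[real] += 1
        chosen.append(real)
    return {"NVIDIA_VISIBLE_DEVICES": ",".join(str(g) for g in chosen)}

# Example: a pod asks for two (virtual) GPUs; it may end up sharing real cards.
print(allocate([7, 12]))   # e.g. {'NVIDIA_VISIBLE_DEVICES': '0,1'}
print(gpu_load)
```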
  • CRI + containerd ShimV2 revolution
  • Container Runtime management engine
  • Sigma/Kubernetes
  • lower-layer Container Runtime
  • CRI + containerd shimv2
  • CRI is the first calling interface in Kubernetes to be divided into plug-ins
  • remove and decouple the complex features that are originally invasive to the main code from the core library one by one by dividing them into different interfaces and plug-ins
  • how to connect containerd to the kata container
  • implementation of Shimv2 API
  • kata-Containerd-Shimv2
  • container-shim-v2 in Sandbox
  • a containerd shim
  • specify a shim for each Pod
  • containerd shim for each container
  • make KataContainers follow containerd
  • standard interface between the CRI shim and the containerd runtime
  • Containerd ShimV2
  • CRI-O
  • reuse the existing CRI shims
  • What can a CRI shim do? It can translate CRI requests into Runtime APIs
  • CRI shim
  • Dockershim
  • maintenance
  • we do not want a project like Docker to have to know what a Pod is and expose the API of a Pod
  • Containerd-centric API
  • Container Runtime Interface
  • multi-tenant
  • security
  • Kernel version run by your container is completely different from that run by the Host machine
  • each Pod now has an Independent kernel
  • the more layers you build here, the worse your container performance is
  • SECCOMP
  • secure Container Runtime
  • we are concerned about security
  • each Pod like the KataContainer is a lightweight virtual machine with a complete Linux kernel
  • a compressed package of your program + data + all dependencies + all directory files
  • the Container Image
  • the Container Runtime
  • runC that helps you set up these namespaces and cgroups, and helps you chroot, building a container required by an application
  • binding operation
  • NodeName field of the Pod object
  • Pods are created, instead of containers
  • the designs of Kubernetes CRI and Containerd ShimV2
  • KataContainers
  • RuntimeClass
  • ShimV2
  • container runtime
  • CRI
  • design and implementation of key technical features
49 annotations
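To tie the RuntimeClass / ShimV2 annotations above together, here is a small sketch of the Kubernetes objects involved: a RuntimeClass whose handler points at a containerd shimv2-backed runtime (e.g. a Kata Containers handler) and a Pod that selects it via runtimeClassName. The handler name and image are assumptions, and the manifests are expressed as Python dicts only to keep all examples in one language.

```python
# Hedged sketch: select a shimv2-backed runtime (e.g. KataContainers) per Pod
# via RuntimeClass. Handler and image names are illustrative.
import json

runtime_class = {
    "apiVersion": "node.k8s.io/v1",
    "kind": "RuntimeClass",
    "metadata": {"name": "kata"},
    # "handler" names the runtime configured in containerd, which containerd
    # drives through its shimv2 interface (e.g. a kata container-shim-v2).
    "handler": "kata",
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "kata-demo"},
    "spec": {
        # Each Pod picks its shim (and thus its runtime) through RuntimeClass,
        # so a KataContainers Pod runs as a lightweight VM with its own kernel.
        "runtimeClassName": "kata",
        "containers": [{"name": "app", "image": "nginx"}],
    },
}

# Print as JSON; each object could be applied with `kubectl apply -f <file>`.
print(json.dumps(runtime_class, indent=2))
print(json.dumps(pod, indent=2))
```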