
Geode Network Configuration Best Practices

Introduction

Geode is a data management platform that provides real-time, consistent access to data-intensive applications throughout widely distributed cloud architectures.

Geode pools memory, CPU, network resources, and optionally disk storage across multiple processes to manage application objects and behavior. It uses dynamic replication and data partitioning techniques to implement high availability, improved performance, scalability, and fault tolerance. In addition to being a distributed data container, Geode is an in-memory data management system that provides reliable asynchronous event notifications and guaranteed message delivery.

Due to Geode’s distributed nature, network resources can have a significant impact on system performance and availability. Geode is designed to be fault tolerant and to handle network disruptions gracefully. However, proper network design and tuning are essential to achieving optimum performance and high availability with Geode.

Purpose

The purpose of this paper is to provide best practice recommendations for configuring the network resources in a Geode solution. The recommendations in this paper are not intended to provide a comprehensive, one-size-fits-all guide to network design and implementation. However, they should serve to provide a working foundation to help guide Geode implementations.

Scope


The topics in this section relate to the design and configuration of network components. The following topics are covered:

  • Network architecture goals
  • NIC selection and configuration
  • Switch configuration considerations
  • General network infrastructure considerations
  • TCP vs. UDP protocol considerations
  • Socket communications and socket buffer settings
  • TCP settings: congestion control, window scaling, etc.

Audience

This paper assumes a basic knowledge and understanding of Geode, virtualization concepts and networking. Its primary audience consists of:

  • Architects: who can use this paper to inform key decisions and design choices surrounding a Geode solution
  • System Engineers and Administrators: who can use this paper as a guide for system configuration

Geode: A Quick Review

Overview


A Geode distributed system is comprised of members distributed over a network to provide in-memory speed along with high availability, scalability, and fault tolerance. Each member consists of a Java virtual machine (JVM) that hosts data and/or compute logic and is connected to other Geode members over a network. Members hosting data maintain a cache consisting of one or more Regions that can be replicated or partitioned across the distributed system. Compute logic is deployed to members as needed by adding the appropriate Java JAR files to the member’s class path.
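As a concrete sketch, a small distributed system like the one described above can be stood up with gfsh, Geode's command-line shell. The member names, locator port, and region name below are illustrative, not values from this paper:

```shell
# Start a locator, two data-hosting members, and a partitioned Region.
# Each "start server" launches a JVM that joins via the locator.
gfsh -e "start locator --name=locator1 --port=10334" \
     -e "start server --name=server1 --locators=localhost[10334]" \
     -e "start server --name=server2 --locators=localhost[10334]" \
     -e "create region --name=exampleRegion --type=PARTITION"
```

With `--type=REPLICATE` instead of `PARTITION`, the Region's full data set would be copied to every hosting member rather than split across them.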

Companies using Geode have:

  • Reduced risk analysis time from 6 hours to 20 minutes, allowing for record profits in the flash crash of 2008 that other firms were not able to monetize.
  • Improved end-user response time from 3 seconds to 50 ms, worth 8 figures a year in new revenue from a project delivered in fewer than 6 months.
  • Tracked assets in real time to coordinate all the right persons and machinery into the right place at the right time to take advantage of immediate high-value opportunities.
  • Created end-user reservation systems that handle over a billion requests daily with no downtime.

Geode Communications

Geode members use a combination of TCP, UDP unicast, and UDP multicast to communicate with one another. Members maintain constant communication with the other members in order to distribute data and manage the distributed system.
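For illustration, the transport-related entries in a member's gemfire.properties might look like the following; the locator host and port are assumptions, not values from this paper:

```properties
# gemfire.properties -- a sketch; locator host/port are illustrative.
# mcast-port=0 disables UDP multicast discovery in favor of TCP-based
# locators; members still use TCP and UDP unicast for peer messaging.
locators=locator1[10334]
mcast-port=0
# 0 lets the member pick an ephemeral port for direct TCP connections.
tcp-port=0
```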

...

Latency concerns the various types of delays involved in moving data across a network. These delays include:

  • Propagation delay – This is a function of how far the data must travel across the network to reach its destination and the medium through which the signal passes. Delays range from nanoseconds to microseconds on a local area network (LAN), up to 0.25 seconds in satellite communication systems.
  • Transmission delay – The time needed to push all of the packet's bits onto the link, which is a function of the packet's length and the link's data rate. For example, transmitting a 10 Mb file over a 1 Mbps link takes 10 seconds, whereas the same file over a 100 Mbps link takes only 0.1 seconds.
  • Processing delay – The time spent processing packet headers, checking for bit-level errors, and determining the packet's destination. In high-speed routing environments, processing delay is typically minimal. However, for networks performing complex encryption or deep packet inspection, processing delays can be significant. In addition, routers performing NAT have higher-than-normal processing delays because they must examine, and modify, both inbound and outbound packets.
  • Queuing delay – The time packets spend in routing queues. As a practical matter of network design, some queuing delay will occur. Effective queue management techniques are critical to ensuring that higher-priority traffic gets the service it needs.
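The transmission-delay arithmetic above can be checked with a short script (plain POSIX shell and awk):

```shell
# Transmission delay = size of the data in bits / link rate in bits per second.
# Sizes are given in megabits and rates in Mbps, so the units cancel to seconds.
transmission_delay() {
  awk -v size="$1" -v rate="$2" 'BEGIN { printf "%g\n", size / rate }'
}

transmission_delay 10 1    # 10 Mb over a 1 Mbps link: 10 seconds
transmission_delay 10 100  # 10 Mb over a 100 Mbps link: 0.1 seconds
```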

Best Practices

It should be noted that latency, not bandwidth, is the most common performance bottleneck for network-dependent systems like websites. Therefore, one of the key design goals in architecting a Geode solution is to minimize network latency. Best practices for achieving this goal include:

  • Keep Geode members and clients on the same LAN – Keep all members of a Geode distributed system and their clients on the same LAN, and preferably on the same LAN segment. The goal is to place all Geode cluster members and clients in close proximity to each other on the network. This not only minimizes propagation delays, it also serves to minimize other delays resulting from routing and traffic management. Geode members are in constant communication, so even relatively small changes in network delays can multiply, impacting overall performance.
  • Use network traffic encryption prudently – Distributed systems like Geode generate high volumes of network traffic, including a fair amount of system management traffic. Encrypting network traffic between the members of a Geode cluster will add processing delays even when the traffic contains no sensitive data. As an alternative, consider encrypting only the sensitive data itself. Or, if it is necessary to restrict access to data on the wire between Geode members, consider placing the Geode members in a separate network security zone that cordons off the Geode cluster from other systems.
  • Use the fastest link possible – Although bandwidth alone does not determine throughput, all things being equal a higher speed link will transmit more data in the same amount of time than a slower one. Distributed systems like Geode move high volumes of traffic through the network and can benefit from having the highest speed link available. While some Geode customers with exacting performance requirements make use of InfiniBand network technology that is capable of link speeds up to 40Gbps, 10GbE is sufficient for most applications and is generally recommended for production and performance/system testing environments. For development environments and less critical applications, 1GbE is often sufficient.
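As a rough aid for the first recommendation, the following sketch compares network addresses to check whether two hosts sit on the same subnet; all addresses shown are illustrative:

```shell
# Two hosts share a LAN segment (subnet) when their addresses, bitwise-ANDed
# with the netmask, yield the same network address.
network_of() {
  # $1 = dotted-quad IPv4 address, $2 = dotted-quad netmask
  IFS=. read -r a1 a2 a3 a4 <<EOF
$1
EOF
  IFS=. read -r m1 m2 m3 m4 <<EOF
$2
EOF
  echo "$((a1 & m1)).$((a2 & m2)).$((a3 & m3)).$((a4 & m4))"
}

network_of 10.0.1.17 255.255.255.0   # -> 10.0.1.0
network_of 10.0.2.33 255.255.255.0   # -> 10.0.2.0 (a different segment)
```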

High Throughput

In addition to low latency, the network for a Geode system needs to provide high throughput. ISPs and the FCC often use the terms 'bandwidth' and 'speed' interchangeably, although they are not the same thing. In fact, bandwidth is only one of several factors that affect throughput. Therefore, it is more accurate to say

...

Setting | Recommended Value | Rationale
net.core.netdev_max_backlog | 30000 | Sets the maximum number of packets queued on the INPUT side when the interface receives packets faster than the kernel can process them. The recommended setting is for 10GbE links; for 1GbE links, use 8000.
net.core.wmem_max | 67108864 | Set the maximum to 16MB (16777216) for 1GbE links and 64MB (67108864) for 10GbE links.
net.core.rmem_max | 67108864 | Set the maximum to 16MB (16777216) for 1GbE links and 64MB (67108864) for 10GbE links.
net.ipv4.tcp_congestion_control | htcp | There appear to be bugs in both bic and cubic (the default) in Linux kernel versions up to 2.6.33. The kernel version for Red Hat 5.x is 2.6.18-x and for Red Hat 6.x is 2.6.32-x.
net.ipv4.tcp_congestion_window | 10 | This is the default for Linux operating systems based on Linux kernel 2.6.39 or later.
net.ipv4.tcp_fin_timeout | 10 | This setting determines the time that must elapse before TCP/IP can release a closed connection and reuse its resources. During this TIME_WAIT state, reopening the connection to the client costs less than establishing a new connection. By reducing the value of this entry, TCP/IP can release closed connections faster, making more resources available for new connections. The default value is 60. The recommended setting lowers it to 10. You can lower this even further, but if it is too low you may run into socket close errors in networks with lots of jitter.
net.ipv4.tcp_keepalive_intvl | 30 | Determines the wait time between isAlive interval probes. The default value is 75. The recommended value reduces this in keeping with the reduction of the overall keepalive time.
net.ipv4.tcp_keepalive_probes | 5 | How many keepalive probes to send out before the socket is timed out. The default value is 9. The recommended value reduces this to 5 so that retry attempts will take 2.5 minutes.
net.ipv4.tcp_keepalive_time | 600 | Sets the TCP socket timeout value to 10 minutes instead of the 2 hour default. With an idle socket, the system will wait tcp_keepalive_time seconds, and after that try tcp_keepalive_probes times to send a TCP KEEPALIVE in intervals of tcp_keepalive_intvl seconds. If the retry attempts fail, the socket times out.
net.ipv4.tcp_low_latency | 1 | Configures TCP for low latency, favoring low latency over throughput.
net.ipv4.tcp_max_orphans | 16384 | Limits the number of orphan sockets; each orphan can eat up to 16M (max wmem) of unswappable memory.
net.ipv4.tcp_max_tw_buckets | 1440000 | The maximal number of timewait sockets held by the system simultaneously. If this number is exceeded, the time-wait socket is immediately destroyed and a warning is printed. This limit exists to help prevent simple DoS attacks.
net.ipv4.tcp_no_metrics_save | 1 | Disables caching of TCP metrics on connection close.
net.ipv4.tcp_orphan_retries | 0 | Limits the number of orphan sockets; each orphan can eat up to 16M (max wmem) of unswappable memory.
net.ipv4.tcp_rfc1337 | 1 | Enables a fix for the time-wait assassination hazards in TCP described in RFC 1337.
net.ipv4.tcp_rmem | 10240 131072 33554432 | Setting is min/default/max. Recommend increasing the Linux autotuning TCP buffer limit to 32MB.
net.ipv4.tcp_wmem | 10240 131072 33554432 | Setting is min/default/max. Recommend increasing the Linux autotuning TCP buffer limit to 32MB.
net.ipv4.tcp_sack | 1 | Enables selective acknowledgment.
net.ipv4.tcp_slow_start_after_idle | 0 | By default, TCP starts with a single small segment, gradually increasing it by one each time. This results in unnecessary slowness that impacts the start of every request.
net.ipv4.tcp_syncookies | 0 | Many default Linux installations use SYN cookies to protect the system against malicious attacks that flood TCP SYN packets. The use of SYN cookies dramatically reduces network bandwidth, and can be triggered by a running Geode cluster. If your Geode cluster is otherwise protected against such attacks, disable SYN cookies to ensure that Geode network throughput is not affected. NOTE: if SYN floods are an issue and SYN cookies can't be disabled, try the following: net.ipv4.tcp_max_syn_backlog="16384", net.ipv4.tcp_synack_retries="1", net.ipv4.tcp_max_orphans="400000"
net.ipv4.tcp_timestamps | 1 | Enables timestamps as defined in RFC 1323.
net.ipv4.tcp_tw_recycle | 1 | Enables fast recycling of TIME_WAIT sockets. The default value is 0 (disabled). Should be used with caution with load balancers.
net.ipv4.tcp_tw_reuse | 1 | Allows reusing sockets in TIME_WAIT state for new connections when it is safe from the protocol viewpoint. The default value is 0 (disabled). It is generally a safer alternative to tcp_tw_recycle. The tcp_tw_reuse setting is particularly useful in environments where numerous short connections are opened and left in TIME_WAIT state, such as web servers and load balancers.
net.ipv4.tcp_window_scaling | 1 | Turns on window scaling, which can be an option to enlarge the transfer window.
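Rather than setting these values one at a time, they are typically collected in a sysctl configuration file so they survive reboots. The file path below and the subset of settings shown are illustrative:

```properties
# /etc/sysctl.d/99-geode.conf -- apply with: sysctl -p /etc/sysctl.d/99-geode.conf
net.core.netdev_max_backlog = 30000
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 10240 131072 33554432
net.ipv4.tcp_wmem = 10240 131072 33554432
net.ipv4.tcp_window_scaling = 1
```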

In addition, increasing the size of the transmit queue can also help improve TCP throughput. This can be accomplished by adding the following command to /etc/rc.local.

/sbin/ifconfig eth0 txqueuelen 10000

NOTE: substitute the appropriate adapter name for eth0 in the above example.
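On distributions where ifconfig is deprecated, the iproute2 equivalent can be used instead; eth0 again stands in for the actual adapter name:

```shell
# Set the transmit queue length, then verify it.
ip link set dev eth0 txqueuelen 10000
ip link show dev eth0
```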