A Case-Based Deep Dive into the Birth and Death of orphan sockets (Part 1)
Detailed Analysis
Three functions in the Linux kernel source (2.6.37 as the example) relate to orphaned sockets:
void tcp_close(struct sock *sk, long timeout)
static int tcp_out_of_resources(struct sock *sk, int do_reset)
static int tcp_orphan_retries(struct sock *sk, int alive) /* Calculate maximal number of retries on an orphaned socket. */
First, let's trace where orphaned sockets come from: the state transitions after an application calls close(), and why the orphan socket count increases.
A TCP connection matching any one of the following three cases does not produce an orphan socket after tcp_close() is called.
A socket in the LISTEN state: tcp_close() sets its state to TCP_CLOSE and cleans up both the half-open (SYN) queue and the fully established accept queue on that listener. (The accept-queue cleanup path does contain code that increments and immediately decrements the orphan count, so we can infer that draining the two queues under heavy connection churn does not leave orphan sockets behind.)
void tcp_close(struct sock *sk, long timeout)
{
    struct sk_buff *skb;
    int data_was_unread = 0;
    int state;

    lock_sock(sk);
    sk->sk_shutdown = SHUTDOWN_MASK;

    if (sk->sk_state == TCP_LISTEN) {
        tcp_set_state(sk, TCP_CLOSE);

        /* Special case. */
        inet_csk_listen_stop(sk);

        goto adjudge_to_death;
    }

    /* We need to flush the recv. buffs. We do this only on the
     * descriptor close, not protocol-sourced closes, because the
     * reader process may not have drained the data yet!
     */
    while ((skb = __skb_dequeue(&sk->sk_receive_queue)) != NULL) {
        u32 len = TCP_SKB_CB(skb)->end_seq - TCP_SKB_CB(skb)->seq -
                  tcp_hdr(skb)->fin;
        data_was_unread += len;
        __kfree_skb(skb);
    }

    sk_mem_reclaim(sk);

    /* If socket has been already reset (e.g. in tcp_reset()) - kill it. */
    if (sk->sk_state == TCP_CLOSE)
        goto adjudge_to_death;

    /* As outlined in RFC 2525, section 2.17, we send a RST here because
     * data was lost. To witness the awful effects of the old behavior of
     * always doing a FIN, run an older 2.1.x kernel or 2.0.x, start a bulk
     * GET in an FTP client, suspend the process, wait for the client to
     * advertise a zero window, then kill -9 the FTP client, wheee...
     * Note: timeout is always zero in such a case.
     */
    if (data_was_unread) {
        /* Unread data was tossed, zap the connection. */
        NET_INC_STATS_USER(sock_net(sk), LINUX_MIB_TCPABORTONCLOSE);
        tcp_set_state(sk, TCP_CLOSE);
        tcp_send_active_reset(sk, sk->sk_allocation);
When the TCP receive queue still holds data the application has not read, the connection state is set directly to TCP_CLOSE and an RST+ACK segment is sent to the peer;
    } else if (sock_flag(sk, SOCK_LINGER) && !sk->sk_lingertime) {
        /* Check zero linger _after_ checking for unread data. */
        sk->sk_prot->disconnect(sk, 0);
        NET_INC_STATS_USER(sock_net(sk), LINUX_MIB_TCPABORTONDATA);
When the connection being closed has SO_LINGER enabled with a linger time of 0, all segments in the send and receive queues are dropped and the state is set directly to TCP_CLOSE; if the connection is not in TCP_SYN_SENT, an RST is also sent to the peer. Note: this means that enabling SO_LINGER with a time of 0 lets close() tear the connection down with an RST, as the userspace sketch below illustrates.
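To make that concrete, here is a minimal userspace sketch (my own illustration, not from the kernel source; the address and port are placeholders): with l_onoff=1 and l_linger=0, the final close() aborts the connection with an RST instead of the normal FIN handshake, so no orphan socket lingers afterwards.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    /* Zero linger time: the later close() sends RST, not FIN. */
    struct linger lin = { .l_onoff = 1, .l_linger = 0 };

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(80);                        /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);  /* placeholder address */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    if (setsockopt(fd, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin)) < 0)
        perror("setsockopt(SO_LINGER)");

    close(fd);  /* connection is aborted with RST; no FIN_WAIT states follow */
    return 0;
}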
Now let's analyze the cases in which tcp_close() does produce an orphan socket:
    } else if (tcp_close_state(sk)) {
        /* We FIN if the application ate all the data before
         * zapping the connection.
         */

        /* RED-PEN. Formally speaking, we have broken TCP state
         * machine. State transitions:
         *
         *   TCP_ESTABLISHED -> TCP_FIN_WAIT1
         *   TCP_SYN_RECV    -> TCP_FIN_WAIT1 (forget it, it's impossible)
         *   TCP_CLOSE_WAIT  -> TCP_LAST_ACK
         *
         * are legal only when FIN has been sent (i.e. in window),
         * rather than queued out of window. Purists blame.
         *
         * F.e. "RFC state" is ESTABLISHED,
         * if Linux state is FIN-WAIT-1, but FIN is still not sent.
         *
         * The visible declinations are that sometimes
         * we enter time-wait state, when it is not required really
         * (harmless), do not send active resets, when they are
         * required by specs (TCP_ESTABLISHED, TCP_CLOSE_WAIT, when
         * they look as CLOSING or LAST_ACK for Linux)
         * Probably, I missed some more holelets.
         *                                              --ANK
         */
        tcp_send_fin(sk);
    }
The state transition inside tcp_close() is handled by tcp_close_state(sk), which looks the next state up in the new_state[] table: the array index is the current state and the array value is the new state (a sketch of the lookup logic follows the table below). This part of the transition is best read against the classic TCP state diagram; you will likely see that diagram in a new light after this section.
static const unsigned char new_state[16] = {
    /* current state:     new state:                      action: */
    /* (Invalid)       */ TCP_CLOSE,
    /* TCP_ESTABLISHED */ TCP_FIN_WAIT1 | TCP_ACTION_FIN,
    /* TCP_SYN_SENT    */ TCP_CLOSE,
    /* TCP_SYN_RECV    */ TCP_FIN_WAIT1 | TCP_ACTION_FIN,
    /* TCP_FIN_WAIT1   */ TCP_FIN_WAIT1,
    /* TCP_FIN_WAIT2   */ TCP_FIN_WAIT2,
    /* TCP_TIME_WAIT   */ TCP_CLOSE,
    /* TCP_CLOSE       */ TCP_CLOSE,
    /* TCP_CLOSE_WAIT  */ TCP_LAST_ACK | TCP_ACTION_FIN,
    /* TCP_LAST_ACK    */ TCP_LAST_ACK,
    /* TCP_LISTEN      */ TCP_CLOSE,
    /* TCP_CLOSING     */ TCP_CLOSING,
};
From the table values you can see that the post-transition state can only be one of five: TCP_CLOSE, TCP_FIN_WAIT1, TCP_FIN_WAIT2, TCP_LAST_ACK, or TCP_CLOSING, which agrees with the state-transition diagram.
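For reference, the lookup itself (lightly paraphrased from net/ipv4/tcp.c of this kernel era) is just a masked table read: the low bits select the new state, and the TCP_ACTION_FIN bit tells the caller whether a FIN must be sent.

static int tcp_close_state(struct sock *sk)
{
    int next = (int)new_state[sk->sk_state];
    int ns = next & TCP_STATE_MASK;     /* new state taken from the table */

    tcp_set_state(sk, ns);

    return next & TCP_ACTION_FIN;       /* non-zero: caller must send a FIN */
}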
From the code at and after the adjudge_to_death label in tcp_close(), you can see that of these five states all but TCP_CLOSE, i.e. TCP_FIN_WAIT1, TCP_FIN_WAIT2, TCP_LAST_ACK, and TCP_CLOSING, are counted as orphan sockets.
adjudge_to_death:
    state = sk->sk_state;   /* save the post-transition state */
    sock_hold(sk);
    sock_orphan(sk);        /* detaches the sock from its file structure;
                               from here on it can be considered an orphan socket */

    /* It is the last release_sock in its life. It will remove backlog. */
    release_sock(sk);

    /* Now socket is owned by kernel and we acquire BH lock
       to finish close. No need to check for user refs.
     */
    local_bh_disable();
    bh_lock_sock(sk);
    WARN_ON(sock_owned_by_user(sk));

    percpu_counter_inc(sk->sk_prot->orphan_count); /* increment the orphan count;
                                                      if the state becomes TCP_CLOSE
                                                      below, it is decremented again */
    ......

    if (sk->sk_state == TCP_CLOSE)  /* already in TCP_CLOSE */
        inet_csk_destroy_sock(sk);  /* decrements the orphan count */
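For completeness, here is a trimmed sketch of inet_csk_destroy_sock() (paraphrased from net/ipv4/inet_connection_sock.c of the same era, with some sanity checks omitted) showing where the counter goes back down:

void inet_csk_destroy_sock(struct sock *sk)
{
    WARN_ON(sk->sk_state != TCP_CLOSE);
    WARN_ON(!sock_flag(sk, SOCK_DEAD));

    sk->sk_prot->destroy(sk);
    sk_refcnt_debug_release(sk);

    percpu_counter_dec(sk->sk_prot->orphan_count); /* orphan count drops here */
    sock_put(sk);
}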
Now for where orphan sockets go, i.e. how the orphan count is decreased.
The first exit path is still inside tcp_close():
    if (sk->sk_state != TCP_CLOSE) {
        sk_mem_reclaim(sk);
        if (tcp_too_many_orphans(sk, 0)) {  /* over the orphan limit: send RST */
            if (net_ratelimit())
                printk(KERN_INFO "TCP: too many of orphaned "
                       "sockets\n");
            tcp_set_state(sk, TCP_CLOSE);          /* force TCP_CLOSE */
            tcp_send_active_reset(sk, GFP_ATOMIC); /* send the RST */
            NET_INC_STATS_BH(sock_net(sk),
                             LINUX_MIB_TCPABORTONMEMORY);
        }
    }
If a connection reaching this point is not yet in TCP_CLOSE, the kernel checks whether the system-wide orphan socket count has exceeded its limit; if so, the state is forced to TCP_CLOSE and an RST+ACK segment is sent to the peer.
The second exit path is tcp_out_of_resources(), which the retransmission timer tcp_write_timeout() and the zero-window probe timer tcp_probe_timer() call on expiry when the sock is marked SOCK_DEAD: it may send an RST to the peer, then calls tcp_done() to set TCP_CLOSE and releases the sock. The only place in the kernel that prints "Out of socket memory" is also in this function.
static int tcp_out_of_resources(struct sock *sk, int do_reset)
{
    struct tcp_sock *tp = tcp_sk(sk);
    int shift = 0;

    /* If peer does not open window for long time, or did not transmit
     * anything for long time, penalize it. */
    if ((s32)(tcp_time_stamp - tp->lsndtime) > 2*TCP_RTO_MAX || !do_reset)
        shift++;

    /* If some dubious ICMP arrived, penalize even more. */
    if (sk->sk_err_soft)
        shift++;

    if (tcp_too_many_orphans(sk, shift)) {
        if (net_ratelimit())
            printk(KERN_INFO "Out of socket memory\n");

        /* Catch exceptional cases, when connection requires reset.
         *      1. Last segment was sent recently. */
        if ((s32)(tcp_time_stamp - tp->lsndtime) <= TCP_TIMEWAIT_LEN ||
            /*  2. Window is closed. */
            (!tp->snd_wnd && !tp->packets_out))
            do_reset = 1;
        if (do_reset)
            tcp_send_active_reset(sk, GFP_ATOMIC);
        tcp_done(sk);
        NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPABORTONMEMORY);
        return 1;
    }
    return 0;
}
Also note from the code that the check against the system-wide orphan limit takes a shift offset as a parameter, with a value in [0, 2]. The comments explain the intent well:
/* If peer does not open window for long time, or did not transmit
 * anything for long time, penalize it. */
/* If some dubious ICMP arrived, penalize even more. */
In those scenarios the sock is penalized: the current orphan count is compared against the system limit at 2x or even 4x its real value. This can trigger the "Out of socket memory" message even though the actual count never exceeded the limit, i.e. a false alarm.
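A sketch of tcp_too_many_orphans() (paraphrased from include/net/tcp.h of the 2.6.37 tree; minor details vary across kernel versions) makes the left-shift penalty explicit:

static inline bool tcp_too_many_orphans(struct sock *sk, int shift)
{
    struct percpu_counter *ocp = sk->sk_prot->orphan_count;
    int orphans = percpu_counter_read_positive(ocp);

    /* shift == 1 compares 2x the count, shift == 2 compares 4x */
    if (orphans << shift > sysctl_tcp_max_orphans) {
        orphans = percpu_counter_sum_positive(ocp);
        if (orphans << shift > sysctl_tcp_max_orphans)
            return true;
    }

    /* the other trigger: TCP as a whole is over its memory limit */
    if (sk->sk_wmem_queued > SOCK_MIN_SNDBUF &&
        atomic_long_read(&tcp_memory_allocated) > sysctl_tcp_mem[2])
        return true;
    return false;
}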
The maximum number of retransmission retries for an orphaned sock is controlled by net.ipv4.tcp_orphan_retries. If this sysctl is set to 0, the kernel code resets it to 8, i.e. the sock may be reset only after more than 8 failed retransmissions. Thanks are due to a colleague for discovering this net.ipv4.tcp_orphan_retries=0 pitfall; I had always assumed that 0 meant no retransmissions at all.
/* Calculate maximal number of retries on an orphaned socket. */
static int tcp_orphan_retries(struct sock *sk, int alive)
{
    int retries = sysctl_tcp_orphan_retries; /* May be zero. */

    /* We know from an ICMP that something is wrong. */
    if (sk->sk_err_soft && !alive)
        retries = 0;

    /* However, if socket sent something recently, select some safe
     * number of retries. 8 corresponds to >100 seconds with minimal
     * RTO of 200msec. */
    if (retries == 0 && alive)
        retries = 8;
    return retries;
}
The last point to analyze: does the FIN_WAIT2 state count as an orphan socket?
In the TCP state machine handler tcp_rcv_state_process(), under case TCP_FIN_WAIT1, receiving the ACK moves the connection to FIN_WAIT2. A timeout is then set or computed for this state: if it is greater than 60 seconds (TCP_TIMEWAIT_LEN), the keepalive timer is armed, and a FIN_WAIT2 sock with such a timeout still counts as an orphan socket; if it is 60 seconds or less, tcp_time_wait() replaces the sock with a timewait structure, the FIN_WAIT2 connection is set to close, and the orphan count is decremented by 1.
    case TCP_FIN_WAIT1:
        ............
            tmo = tcp_fin_time(sk);
            if (tmo > TCP_TIMEWAIT_LEN) {
                inet_csk_reset_keepalive_timer(sk, tmo - TCP_TIMEWAIT_LEN);
            } else if (th->fin || sock_owned_by_user(sk)) {
                /* Bad case. We could lose such FIN otherwise.
                 * It is not a big problem, but it looks confusing
                 * and not so rare event. We still can lose it now,
                 * if it spins in bh_lock_sock(), but it is really
                 * marginal case.
                 */
                inet_csk_reset_keepalive_timer(sk, tmo);
            } else {
                tcp_time_wait(sk, TCP_FIN_WAIT2, tmo);
                goto discard;
            }
        }
    }
    break;
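The 60-second threshold above comes from tcp_fin_time(), which derives the FIN_WAIT2 timeout from TCP_LINGER2 / tcp_fin_timeout with a lower bound based on the RTO. A sketch, paraphrased from include/net/tcp.h of the same kernel era (the exact form may differ slightly by version):

static inline int tcp_fin_time(const struct sock *sk)
{
    /* per-socket TCP_LINGER2 if set, otherwise the tcp_fin_timeout sysctl */
    int fin_timeout = tcp_sk(sk)->linger2 ? : sysctl_tcp_fin_timeout;
    const int rto = inet_csk(sk)->icsk_rto;

    /* never shorter than 3.5 * RTO */
    if (fin_timeout < (rto << 2) - (rto >> 1))
        fin_timeout = (rto << 2) - (rto >> 1);

    return fin_timeout;
}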
Case Summary
When dmesg shows "Out of socket memory", the shortage can come from two situations:
Too many orphan sockets, which is common on heavily loaded servers.
The memory allocated to TCP really is too small, leading to genuine memory pressure.
For the first case, check whether there are too many orphan sockets with ss -s or cat /proc/net/sockstat.
For the second case, tune the TCP memory settings; this is analyzed and solved at length elsewhere online, so it is not covered here.
Finally, let's answer the questions that matter for production stability:
- In which TCP states does an orphan socket live?
Connections in TCP_FIN_WAIT1, TCP_LAST_ACK, and TCP_CLOSING are all counted as orphan sockets. A TCP_FIN_WAIT2 connection whose timeout (set via TCP_LINGER2 or sysctl_tcp_fin_timeout) is greater than 60 seconds is also counted as an orphan socket, while one with a timeout of 60 seconds or less is accounted as TIME_WAIT instead. The code shows that TCP_TIME_WAIT itself is not counted as an orphan socket, and that TCP_CLOSE_WAIT connections count toward neither orphan sockets nor TIME_WAIT.
- What causes so many orphan sockets to appear?
- What risks do too many orphan sockets pose in production?
Connections in TCP_FIN_WAIT1 and TCP_LAST_ACK are both waiting for the peer's ACK. For example, a client can deliberately half-close each connection with a FIN and never send the final ACK, leaving the server with large numbers of LAST_ACK connections that pin TCP resources. In such cases, tuning tcp_max_orphans and tcp_orphan_retries can limit a simple DDoS attack.
An open question
tcp_fin() contains a FIN_WAIT2 -> TIME_WAIT transition, yet all of tcp_fin()'s call sites sit in the ESTABLISHED-state processing path. Why, then, is that transition implemented in tcp_fin()? I have not figured this out; discussion is welcome.