// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * INET		An implementation of the TCP/IP protocol suite for the LINUX
 *		operating system. INET is implemented using the BSD Socket
 *		interface as the means of communication with the user level.
 *
 *		The Internet Protocol (IP) module.
 *
 * Authors:	Ross Biro
 *		Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
 *		Donald Becker, <becker@super.org>
 *		Alan Cox, <alan@lxorguk.ukuu.org.uk>
 *		Richard Underwood
 *		Stefan Becker, <stefanb@yello.ping.de>
 *		Jorge Cwik, <jorge@laser.satlink.net>
 *		Arnt Gulbrandsen, <agulbra@nvg.unit.no>
 *
 * Fixes:
 *		Alan Cox	:	Commented a couple of minor bits of surplus code
 *		Alan Cox	:	Undefining IP_FORWARD doesn't include the code
 *					(just stops a compiler warning).
 *		Alan Cox	:	Frames with >=MAX_ROUTE record routes, strict routes or loose routes
 *					are junked rather than corrupting things.
 *		Alan Cox	:	Frames to bad broadcast subnets are dumped
 *					We used to process them non broadcast and
 *					boy could that cause havoc.
 *		Alan Cox	:	ip_forward sets the free flag on the
 *					new frame it queues. Still crap because
 *					it copies the frame but at least it
 *					doesn't eat memory too.
 *		Alan Cox	:	Generic queue code and memory fixes.
 *		Fred Van Kempen	:	IP fragment support (borrowed from NET2E)
 *		Gerhard Koerting:	Forward fragmented frames correctly.
 *		Gerhard Koerting:	Fixes to my fix of the above 8-).
 *		Gerhard Koerting:	IP interface addressing fix.
 *		Linus Torvalds	:	More robustness checks
 *		Alan Cox	:	Even more checks: Still not as robust as it ought to be
 *		Alan Cox	:	Save IP header pointer for later
 *		Alan Cox	:	ip option setting
 *		Alan Cox	:	Use ip_tos/ip_ttl settings
 *		Alan Cox	:	Fragmentation bogosity removed
 *					(Thanks to Mark.Bush@prg.ox.ac.uk)
 *		Dmitry Gorodchanin :	Send of a raw packet crash fix.
 *		Alan Cox	:	Silly ip bug when an overlength
 *					fragment turns up. Now frees the
 *					queue.
 *		Linus Torvalds/ :	Memory leakage on fragmentation
 *		Alan Cox	:	handling.
 *		Gerhard Koerting:	Forwarding uses IP priority hints
 *		Teemu Rantanen	:	Fragment problems.
 *		Alan Cox	:	General cleanup, comments and reformat
 *		Alan Cox	:	SNMP statistics
 *		Alan Cox	:	BSD address rule semantics. Also see
 *					UDP as there is a nasty checksum issue
 *					if you do things the wrong way.
 *		Alan Cox	:	Always defrag, moved IP_FORWARD to the config.in file
 *		Alan Cox	:	IP options adjust sk->priority.
 *		Pedro Roque	:	Fix mtu/length error in ip_forward.
 *		Alan Cox	:	Avoid ip_chk_addr when possible.
 *	Richard Underwood	:	IP multicasting.
 *		Alan Cox	:	Cleaned up multicast handlers.
 *		Alan Cox	:	RAW sockets demultiplex in the BSD style.
 *		Gunther Mayer	:	Fix the SNMP reporting typo
 *		Alan Cox	:	Always in group 224.0.0.1
 *	Pauline Middelink	:	Fast ip_checksum update when forwarding
 *					Masquerading support.
 *		Alan Cox	:	Multicast loopback error for 224.0.0.1
 *		Alan Cox	:	IP_MULTICAST_LOOP option.
 *		Alan Cox	:	Use notifiers.
 *		Bjorn Ekwall	:	Removed ip_csum (from slhc.c too)
 *		Bjorn Ekwall	:	Moved ip_fast_csum to ip.h (inline!)
 *		Stefan Becker	:	Send out ICMP HOST REDIRECT
 *	Arnt Gulbrandsen	:	ip_build_xmit
 *		Alan Cox	:	Per socket routing cache
 *		Alan Cox	:	Fixed routing cache, added header cache.
 *		Alan Cox	:	Loopback didn't work right in original ip_build_xmit - fixed it.
 *		Alan Cox	:	Only send ICMP_REDIRECT if src/dest are the same net.
 *		Alan Cox	:	Incoming IP option handling.
 *		Alan Cox	:	Set saddr on raw output frames as per BSD.
 *		Alan Cox	:	Stopped broadcast source route explosions.
 *		Alan Cox	:	Can disable source routing
 *		Takeshi Sone	:	Masquerading didn't work.
 *	Dave Bonn,Alan Cox	:	Faster IP forwarding whenever possible.
 *		Alan Cox	:	Memory leaks, tramples, misc debugging.
 *		Alan Cox	:	Fixed multicast (by popular demand 8))
 *		Alan Cox	:	Fixed forwarding (by even more popular demand 8))
 *		Alan Cox	:	Fixed SNMP statistics [I think]
 *		Gerhard Koerting:	IP fragmentation forwarding fix
 *		Alan Cox	:	Device lock against page fault.
 *		Alan Cox	:	IP_HDRINCL facility.
 *	Werner Almesberger	:	Zero fragment bug
 *		Alan Cox	:	RAW IP frame length bug
 *		Alan Cox	:	Outgoing firewall on build_xmit
 *		A.N.Kuznetsov	:	IP_OPTIONS support throughout the kernel
 *		Alan Cox	:	Multicast routing hooks
 *		Jos Vos		:	Do accounting *before* call_in_firewall
 *	Willy Konynenberg	:	Transparent proxying support
 *
 * To Fix:
 *		IP fragmentation wants rewriting cleanly. The RFC815 algorithm is much more efficient
 *		and could be made very efficient with the addition of some virtual memory hacks to permit
 *		the allocation of a buffer that can then be 'grown' by twiddling page tables.
 *		Output fragmentation wants updating along with the buffer management to use a single
 *		interleaved copy algorithm so that fragmenting has a one copy overhead. Actual packet
 *		output should probably do its own fragmentation at the UDP/RAW layer. TCP shouldn't cause
 *		fragmentation anyway.
 */

#define pr_fmt(fmt) "IPv4: " fmt

#include <linux/module.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/errno.h>
#include <linux/slab.h>

#include <linux/net.h>
#include <linux/socket.h>
#include <linux/sockios.h>
#include <linux/in.h>
#include <linux/inet.h>
#include <linux/inetdevice.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/indirect_call_wrapper.h>

#include <net/snmp.h>
#include <net/ip.h>
#include <net/protocol.h>
#include <net/route.h>
#include <linux/skbuff.h>
#include <net/sock.h>
#include <net/arp.h>
#include <net/icmp.h>
#include <net/raw.h>
#include <net/checksum.h>
#include <net/inet_ecn.h>
#include <linux/netfilter_ipv4.h>
#include <net/xfrm.h>
#include <linux/mroute.h>
#include <linux/netlink.h>
#include <net/dst_metadata.h>
/*
 *	Process Router Attention IP option (RFC 2113)
 */
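/* Listeners subscribe to this chain with setsockopt(IPPROTO_IP,
 * IP_ROUTER_ALERT, ...) (see ip_ra_control()); every subscriber whose
 * protocol matches gets its own clone of the datagram. Returns true
 * when the packet was consumed here (delivered to a listener, or
 * swallowed for reassembly), false when normal input processing
 * should continue.
 */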
bool ip_call_ra_chain(struct sk_buff *skb)
{
	struct ip_ra_chain *ra;
	u8 protocol = ip_hdr(skb)->protocol;
	struct sock *last = NULL;
	struct net_device *dev = skb->dev;
	struct net *net = dev_net(dev);

	for (ra = rcu_dereference(net->ipv4.ra_chain); ra; ra = rcu_dereference(ra->next)) {
		struct sock *sk = ra->sk;

		/* If socket is bound to an interface, only report
		 * the packet if it came from that interface.
		 */
		if (sk && inet_sk(sk)->inet_num == protocol &&
		    (!sk->sk_bound_dev_if ||
		     sk->sk_bound_dev_if == dev->ifindex)) {
			if (ip_is_fragment(ip_hdr(skb))) {
				if (ip_defrag(net, skb, IP_DEFRAG_CALL_RA_CHAIN))
					return true;
			}
			if (last) {
				struct sk_buff *skb2 = skb_clone(skb, GFP_ATOMIC);
				if (skb2)
					raw_rcv(last, skb2);
			}
			last = sk;
		}
	}

	if (last) {
		raw_rcv(last, skb);
		return true;
	}
	return false;
}

INDIRECT_CALLABLE_DECLARE(int udp_rcv(struct sk_buff *));
INDIRECT_CALLABLE_DECLARE(int tcp_v4_rcv(struct sk_buff *));
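/* Deliver skb to the transport layer handler registered for @protocol.
 * Must be called with rcu_read_lock() held. Matching raw sockets get
 * their clones first via raw_local_deliver(). If the handler returns a
 * negative value -N, the packet is resubmitted as protocol N (the
 * resubmit loop below); this is how decapsulating handlers hand the
 * inner packet back for another pass.
 */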
void ip_protocol_deliver_rcu(struct net *net, struct sk_buff *skb, int protocol)
{
	const struct net_protocol *ipprot;
	int raw, ret;

resubmit:
	raw = raw_local_deliver(skb, protocol);

	ipprot = rcu_dereference(inet_protos[protocol]);
	if (ipprot) {
		if (!ipprot->no_policy) {
			if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) {
				kfree_skb(skb);
				return;
			}
			nf_reset_ct(skb);
		}
		ret = INDIRECT_CALL_2(ipprot->handler, tcp_v4_rcv, udp_rcv,
				      skb);
		if (ret < 0) {
			protocol = -ret;
			goto resubmit;
		}
		__IP_INC_STATS(net, IPSTATS_MIB_INDELIVERS);
	} else {
		if (!raw) {
			if (xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) {
				__IP_INC_STATS(net, IPSTATS_MIB_INUNKNOWNPROTOS);
				icmp_send(skb, ICMP_DEST_UNREACH,
					  ICMP_PROT_UNREACH, 0);
			}
			kfree_skb(skb);
		} else {
			__IP_INC_STATS(net, IPSTATS_MIB_INDELIVERS);
			consume_skb(skb);
		}
	}
}

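/* NF_INET_LOCAL_IN okfn: strip the IP header (options included) and
 * demultiplex to the transport protocol under RCU protection.
 */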
static int ip_local_deliver_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
{
	__skb_pull(skb, skb_network_header_len(skb));

	rcu_read_lock();
	ip_protocol_deliver_rcu(net, skb, ip_hdr(skb)->protocol);
	rcu_read_unlock();

	return 0;
}

/*
 *	Deliver IP Packets to the higher protocol layers.
 */
int ip_local_deliver(struct sk_buff *skb)
{
	/*
	 *	Reassemble IP fragments.
	 */
	struct net *net = dev_net(skb->dev);

	if (ip_is_fragment(ip_hdr(skb))) {
		if (ip_defrag(net, skb, IP_DEFRAG_LOCAL_DELIVER))
			return 0;
	}

	return NF_HOOK(NFPROTO_IPV4, NF_INET_LOCAL_IN,
		       net, NULL, skb, skb->dev, NULL,
		       ip_local_deliver_finish);
}

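/* Parse and act on the IP options carried by skb. Returns true if the
 * packet must be dropped: unparseable options, or a source-route
 * option arriving on an interface where source routing is disabled.
 * skb_cow() makes the header writable first, since option processing
 * may mangle it.
 */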
static inline bool ip_rcv_options(struct sk_buff *skb, struct net_device *dev)
{
	struct ip_options *opt;
	const struct iphdr *iph;

	/* This looks like overkill, because not all
	   IP options require packet mangling.
	   But it is the easiest for now, especially taking
	   into account that the combination of IP options
	   and a running sniffer is an extremely rare condition.
					      --ANK (980813)
	*/
	if (skb_cow(skb, skb_headroom(skb))) {
		__IP_INC_STATS(dev_net(dev), IPSTATS_MIB_INDISCARDS);
		goto drop;
	}

	iph = ip_hdr(skb);
	opt = &(IPCB(skb)->opt);
	opt->optlen = iph->ihl*4 - sizeof(struct iphdr);

	if (ip_options_compile(dev_net(dev), opt, skb)) {
		__IP_INC_STATS(dev_net(dev), IPSTATS_MIB_INHDRERRORS);
		goto drop;
	}

	if (unlikely(opt->srr)) {
		struct in_device *in_dev = __in_dev_get_rcu(dev);

		if (in_dev) {
			if (!IN_DEV_SOURCE_ROUTE(in_dev)) {
				if (IN_DEV_LOG_MARTIANS(in_dev))
					net_info_ratelimited("source route option %pI4 -> %pI4\n",
							     &iph->saddr,
							     &iph->daddr);
				goto drop;
			}
		}

		if (ip_options_rcv_srr(skb, dev))
			goto drop;
	}

	return false;
drop:
	return true;
}

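/* A dst hint from the previous packet in the receive list may be
 * reused only when this skb has no dst yet and shares both daddr and
 * tos with the hint packet, i.e. when a route lookup would use the
 * same key.
 */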
static bool ip_can_use_hint(const struct sk_buff *skb, const struct iphdr *iph,
			    const struct sk_buff *hint)
{
	return hint && !skb_dst(skb) && ip_hdr(hint)->daddr == iph->daddr &&
	       ip_hdr(hint)->tos == iph->tos;
}

INDIRECT_CALLABLE_DECLARE(int udp_v4_early_demux(struct sk_buff *));
INDIRECT_CALLABLE_DECLARE(int tcp_v4_early_demux(struct sk_buff *));
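/* Core of the receive-finish path: attach a route to the skb (via the
 * list hint, early demux, or a full route lookup), account classid and
 * multicast/broadcast statistics, and process any IP options. Returns
 * NET_RX_SUCCESS or NET_RX_DROP; on drop the skb has been freed.
 */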
static int ip_rcv_finish_core(struct net *net, struct sock *sk,
			      struct sk_buff *skb, struct net_device *dev,
			      const struct sk_buff *hint)
{
	const struct iphdr *iph = ip_hdr(skb);
	int (*edemux)(struct sk_buff *skb);
	struct rtable *rt;
	int err;

	if (ip_can_use_hint(skb, iph, hint)) {
		err = ip_route_use_hint(skb, iph->daddr, iph->saddr, iph->tos,
					dev, hint);
		if (unlikely(err))
			goto drop_error;
	}

	/* net.ipv4.ip_early_demux can flip at any time via sysctl;
	 * READ_ONCE() avoids a data race on the lockless read.
	 */
	if (READ_ONCE(net->ipv4.sysctl_ip_early_demux) &&
	    !skb_dst(skb) &&
	    !skb->sk &&
	    !ip_is_fragment(iph)) {
		const struct net_protocol *ipprot;
		int protocol = iph->protocol;

		ipprot = rcu_dereference(inet_protos[protocol]);
		if (ipprot && (edemux = READ_ONCE(ipprot->early_demux))) {
			err = INDIRECT_CALL_2(edemux, tcp_v4_early_demux,
					      udp_v4_early_demux, skb);
			if (unlikely(err))
				goto drop_error;
			/* must reload iph, skb->head might have changed */
			iph = ip_hdr(skb);
		}
	}

	/*
	 *	Initialise the virtual path cache for the packet. It describes
	 *	how the packet travels inside Linux networking.
	 */
	if (!skb_valid_dst(skb)) {
		err = ip_route_input_noref(skb, iph->daddr, iph->saddr,
					   iph->tos, dev);
		if (unlikely(err))
			goto drop_error;
	}

#ifdef CONFIG_IP_ROUTE_CLASSID
	if (unlikely(skb_dst(skb)->tclassid)) {
		struct ip_rt_acct *st = this_cpu_ptr(ip_rt_acct);
		u32 idx = skb_dst(skb)->tclassid;
		st[idx&0xFF].o_packets++;
		st[idx&0xFF].o_bytes += skb->len;
		st[(idx>>16)&0xFF].i_packets++;
		st[(idx>>16)&0xFF].i_bytes += skb->len;
	}
#endif

	if (iph->ihl > 5 && ip_rcv_options(skb, dev))
		goto drop;

	rt = skb_rtable(skb);
	if (rt->rt_type == RTN_MULTICAST) {
		__IP_UPD_PO_STATS(net, IPSTATS_MIB_INMCAST, skb->len);
	} else if (rt->rt_type == RTN_BROADCAST) {
		__IP_UPD_PO_STATS(net, IPSTATS_MIB_INBCAST, skb->len);
	} else if (skb->pkt_type == PACKET_BROADCAST ||
		   skb->pkt_type == PACKET_MULTICAST) {
		struct in_device *in_dev = __in_dev_get_rcu(dev);

		/* RFC 1122 3.3.6:
		 *
		 *   When a host sends a datagram to a link-layer broadcast
		 *   address, the IP destination address MUST be a legal IP
		 *   broadcast or IP multicast address.
		 *
		 *   A host SHOULD silently discard a datagram that is received
		 *   via a link-layer broadcast (see Section 2.4) but does not
		 *   specify an IP multicast or broadcast destination address.
		 *
		 * This doesn't explicitly say L2 *broadcast*, but broadcast is
		 * in a way a form of multicast and the most common use case for
		 * this is 802.11 protecting against cross-station spoofing (the
		 * so-called "hole-196" attack) so do it for both.
		 */
		if (in_dev &&
		    IN_DEV_ORCONF(in_dev, DROP_UNICAST_IN_L2_MULTICAST))
			goto drop;
	}

	return NET_RX_SUCCESS;

drop:
	kfree_skb(skb);
	return NET_RX_DROP;

drop_error:
	if (err == -EXDEV)
		__NET_INC_STATS(net, LINUX_MIB_IPRPFILTER);
	goto drop;
}

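/* NF_INET_PRE_ROUTING okfn for the single-skb path: let an L3 master
 * device claim the skb first, then run the core receive logic and hand
 * the packet to its route's input function via dst_input().
 */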
static int ip_rcv_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
{
	struct net_device *dev = skb->dev;
	int ret;

	/* if ingress device is enslaved to an L3 master device pass the
	 * skb to its handler for processing
	 */
	skb = l3mdev_ip_rcv(skb);
	if (!skb)
		return NET_RX_SUCCESS;

	ret = ip_rcv_finish_core(net, sk, skb, dev, NULL);
	if (ret != NET_RX_DROP)
		ret = dst_input(skb);
	return ret;
}

/*
 *	Main IP Receive routine.
 */
static struct sk_buff *ip_rcv_core(struct sk_buff *skb, struct net *net)
{
	const struct iphdr *iph;
	u32 len;

	/* When the interface is in promisc. mode, drop all the crap
	 * that it receives; do not try to analyse it.
	 */
	if (skb->pkt_type == PACKET_OTHERHOST)
		goto drop;

	__IP_UPD_PO_STATS(net, IPSTATS_MIB_IN, skb->len);

	skb = skb_share_check(skb, GFP_ATOMIC);
	if (!skb) {
		__IP_INC_STATS(net, IPSTATS_MIB_INDISCARDS);
		goto out;
	}

	if (!pskb_may_pull(skb, sizeof(struct iphdr)))
		goto inhdr_error;

	iph = ip_hdr(skb);

	/*
	 *	RFC1122: 3.2.1.2 MUST silently discard any IP frame that fails the checksum.
	 *
	 *	Is the datagram acceptable?
	 *
	 *	1.	Length at least the size of an ip header
	 *	2.	Version of 4
	 *	3.	Checksums correctly. [Speed optimisation for later, skip loopback checksums]
	 *	4.	Doesn't have a bogus length
	 */

	if (iph->ihl < 5 || iph->version != 4)
		goto inhdr_error;

	BUILD_BUG_ON(IPSTATS_MIB_ECT1PKTS != IPSTATS_MIB_NOECTPKTS + INET_ECN_ECT_1);
	BUILD_BUG_ON(IPSTATS_MIB_ECT0PKTS != IPSTATS_MIB_NOECTPKTS + INET_ECN_ECT_0);
	BUILD_BUG_ON(IPSTATS_MIB_CEPKTS != IPSTATS_MIB_NOECTPKTS + INET_ECN_CE);
	__IP_ADD_STATS(net,
		       IPSTATS_MIB_NOECTPKTS + (iph->tos & INET_ECN_MASK),
		       max_t(unsigned short, 1, skb_shinfo(skb)->gso_segs));

	if (!pskb_may_pull(skb, iph->ihl*4))
		goto inhdr_error;

	iph = ip_hdr(skb);

	if (unlikely(ip_fast_csum((u8 *)iph, iph->ihl)))
		goto csum_error;

	len = ntohs(iph->tot_len);
	if (skb->len < len) {
		__IP_INC_STATS(net, IPSTATS_MIB_INTRUNCATEDPKTS);
		goto drop;
	} else if (len < (iph->ihl*4))
		goto inhdr_error;

	/* Our transport medium may have padded the buffer out. Now we know it
	 * is IP we can trim to the true length of the frame.
	 * Note this now means skb->len holds ntohs(iph->tot_len).
	 */
	if (pskb_trim_rcsum(skb, len)) {
		__IP_INC_STATS(net, IPSTATS_MIB_INDISCARDS);
		goto drop;
	}

	iph = ip_hdr(skb);
	skb->transport_header = skb->network_header + iph->ihl*4;

	/* Remove any debris in the socket control block */
	memset(IPCB(skb), 0, sizeof(struct inet_skb_parm));
	IPCB(skb)->iif = skb->skb_iif;

	/* Must drop socket now because of tproxy. */
	if (!skb_sk_is_prefetched(skb))
		skb_orphan(skb);

	return skb;

csum_error:
	__IP_INC_STATS(net, IPSTATS_MIB_CSUMERRORS);
inhdr_error:
	__IP_INC_STATS(net, IPSTATS_MIB_INHDRERRORS);
drop:
	kfree_skb(skb);
out:
	return NULL;
}

/*
 *	IP receive entry point
 */
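/* ip_rcv() and ip_list_rcv() are hooked into the core network stack as
 * the ETH_P_IP packet_type handlers. As a rough sketch (the actual
 * registration lives in af_inet.c; shown here for context only):
 *
 *	static struct packet_type ip_packet_type __read_mostly = {
 *		.type		= cpu_to_be16(ETH_P_IP),
 *		.func		= ip_rcv,
 *		.list_func	= ip_list_rcv,
 *	};
 *	...
 *	dev_add_pack(&ip_packet_type);
 */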
int ip_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt,
	   struct net_device *orig_dev)
{
	struct net *net = dev_net(dev);

	skb = ip_rcv_core(skb, net);
	if (!skb)
		return NET_RX_DROP;

	return NF_HOOK(NFPROTO_IPV4, NF_INET_PRE_ROUTING,
		       net, NULL, skb, dev, NULL,
		       ip_rcv_finish);
}

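/* Hand every packet of a dst-homogeneous sublist to dst_input().
 * skb_list_del_init() detaches each skb first so the dst input handler
 * never sees a list-linked skb.
 */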
static void ip_sublist_rcv_finish(struct list_head *head)
{
	struct sk_buff *skb, *next;

	list_for_each_entry_safe(skb, next, head, list) {
		skb_list_del_init(skb);
		dst_input(skb);
	}
}

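/* Decide whether this skb's routing decision can serve as a hint for
 * the packets that follow it in the list. With custom FIB rules the
 * route may depend on header fields other than daddr/tos, and
 * broadcast routes are not safe to share either, so return NULL (no
 * hint) in those cases.
 */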
static struct sk_buff *ip_extract_route_hint(const struct net *net,
					     struct sk_buff *skb, int rt_type)
{
	if (fib4_has_custom_rules(net) || rt_type == RTN_BROADCAST)
		return NULL;

	return skb;
}

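/* Finish receive processing for a list of packets: route each one
 * (reusing the previous packet's dst as a hint where possible) and
 * batch consecutive packets that resolved to the same dst into
 * sublists, dispatching each sublist through dst_input() in one go.
 */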
static void ip_list_rcv_finish(struct net *net, struct sock *sk,
			       struct list_head *head)
{
	struct sk_buff *skb, *next, *hint = NULL;
	struct dst_entry *curr_dst = NULL;
	struct list_head sublist;

	INIT_LIST_HEAD(&sublist);
	list_for_each_entry_safe(skb, next, head, list) {
		struct net_device *dev = skb->dev;
		struct dst_entry *dst;

		skb_list_del_init(skb);
		/* if ingress device is enslaved to an L3 master device pass the
		 * skb to its handler for processing
		 */
		skb = l3mdev_ip_rcv(skb);
		if (!skb)
			continue;
		if (ip_rcv_finish_core(net, sk, skb, dev, hint) == NET_RX_DROP)
			continue;

		dst = skb_dst(skb);
		if (curr_dst != dst) {
			hint = ip_extract_route_hint(net, skb,
					       ((struct rtable *)dst)->rt_type);

			/* dispatch old sublist */
			if (!list_empty(&sublist))
				ip_sublist_rcv_finish(&sublist);
			/* start new sublist */
			INIT_LIST_HEAD(&sublist);
			curr_dst = dst;
		}
		list_add_tail(&skb->list, &sublist);
	}
	/* dispatch final sublist */
	ip_sublist_rcv_finish(&sublist);
}

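/* Run the netfilter PRE_ROUTING hook across a same-device sublist,
 * then finish receive processing for the packets that survive it.
 */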
static void ip_sublist_rcv(struct list_head *head, struct net_device *dev,
			   struct net *net)
{
	NF_HOOK_LIST(NFPROTO_IPV4, NF_INET_PRE_ROUTING, net, NULL,
		     head, dev, NULL, ip_rcv_finish);
	ip_list_rcv_finish(net, NULL, head);
}

/* Receive a list of IP packets */
void ip_list_rcv(struct list_head *head, struct packet_type *pt,
		 struct net_device *orig_dev)
{
	struct net_device *curr_dev = NULL;
	struct net *curr_net = NULL;
	struct sk_buff *skb, *next;
	struct list_head sublist;

	INIT_LIST_HEAD(&sublist);
	list_for_each_entry_safe(skb, next, head, list) {
		struct net_device *dev = skb->dev;
		struct net *net = dev_net(dev);

		skb_list_del_init(skb);
		skb = ip_rcv_core(skb, net);
		if (!skb)
			continue;

		if (curr_dev != dev || curr_net != net) {
			/* dispatch old sublist */
			if (!list_empty(&sublist))
				ip_sublist_rcv(&sublist, curr_dev, curr_net);
			/* start new sublist */
			INIT_LIST_HEAD(&sublist);
			curr_dev = dev;
			curr_net = net;
		}
		list_add_tail(&skb->list, &sublist);
	}
	/* dispatch final sublist */
	if (!list_empty(&sublist))
		ip_sublist_rcv(&sublist, curr_dev, curr_net);
}