Orange Pi5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards

3 Commits   0 Branches   0 Tags
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300    1) /************************************************************************
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300    2)  * s2io.c: A Linux PCI-X Ethernet driver for Neterion 10GbE Server NIC
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300    3)  * Copyright(c) 2002-2010 Exar Corp.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300    4)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300    5)  * This software may be used and distributed according to the terms of
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300    6)  * the GNU General Public License (GPL), incorporated herein by reference.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300    7)  * Drivers based on or derived from this code fall under the GPL and must
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300    8)  * retain the authorship, copyright and license notice.  This file is not
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300    9)  * a complete program and may only be used when the entire operating
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   10)  * system is licensed under the GPL.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   11)  * See the file COPYING in this distribution for more information.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   12)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   13)  * Credits:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   14)  * Jeff Garzik		: For pointing out the improper error condition
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   15)  *			  check in the s2io_xmit routine and also some
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   16)  *			  issues in the Tx watch dog function. Also for
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   17)  *			  patiently answering all those innumerable
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   18)  *			  questions regaring the 2.6 porting issues.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   19)  * Stephen Hemminger	: Providing proper 2.6 porting mechanism for some
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   20)  *			  macros available only in 2.6 Kernel.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   21)  * Francois Romieu	: For pointing out all code part that were
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   22)  *			  deprecated and also styling related comments.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   23)  * Grant Grundler	: For helping me get rid of some Architecture
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   24)  *			  dependent code.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   25)  * Christopher Hellwig	: Some more 2.6 specific issues in the driver.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   26)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   27)  * The module loadable parameters that are supported by the driver and a brief
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   28)  * explanation of all the variables.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   29)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   30)  * rx_ring_num : This can be used to program the number of receive rings used
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   31)  * in the driver.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   32)  * rx_ring_sz: This defines the number of receive blocks each ring can have.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   33)  *     This is also an array of size 8.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   34)  * rx_ring_mode: This defines the operation mode of all 8 rings. The valid
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   35)  *		values are 1, 2.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   36)  * tx_fifo_num: This defines the number of Tx FIFOs thats used int the driver.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   37)  * tx_fifo_len: This too is an array of 8. Each element defines the number of
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   38)  * Tx descriptors that can be associated with each corresponding FIFO.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   39)  * intr_type: This defines the type of interrupt. The values can be 0(INTA),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   40)  *     2(MSI_X). Default value is '2(MSI_X)'
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   41)  * lro_max_pkts: This parameter defines maximum number of packets can be
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   42)  *     aggregated as a single large packet
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   43)  * napi: This parameter used to enable/disable NAPI (polling Rx)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   44)  *     Possible values '1' for enable and '0' for disable. Default is '1'
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   45)  * vlan_tag_strip: This can be used to enable or disable vlan stripping.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   46)  *                 Possible values '1' for enable , '0' for disable.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   47)  *                 Default is '2' - which means disable in promisc mode
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   48)  *                 and enable in non-promiscuous mode.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   49)  * multiq: This parameter used to enable/disable MULTIQUEUE support.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   50)  *      Possible values '1' for enable and '0' for disable. Default is '0'
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   51)  ************************************************************************/
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   52) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   53) #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   54) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   55) #include <linux/module.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   56) #include <linux/types.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   57) #include <linux/errno.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   58) #include <linux/ioport.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   59) #include <linux/pci.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   60) #include <linux/dma-mapping.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   61) #include <linux/kernel.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   62) #include <linux/netdevice.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   63) #include <linux/etherdevice.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   64) #include <linux/mdio.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   65) #include <linux/skbuff.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   66) #include <linux/init.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   67) #include <linux/delay.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   68) #include <linux/stddef.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   69) #include <linux/ioctl.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   70) #include <linux/timex.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   71) #include <linux/ethtool.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   72) #include <linux/workqueue.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   73) #include <linux/if_vlan.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   74) #include <linux/ip.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   75) #include <linux/tcp.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   76) #include <linux/uaccess.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   77) #include <linux/io.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   78) #include <linux/io-64-nonatomic-lo-hi.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   79) #include <linux/slab.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   80) #include <linux/prefetch.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   81) #include <net/tcp.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   82) #include <net/checksum.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   83) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   84) #include <asm/div64.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   85) #include <asm/irq.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   86) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   87) /* local include */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   88) #include "s2io.h"
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   89) #include "s2io-regs.h"
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   90) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   91) #define DRV_VERSION "2.0.26.28"
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   92) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   93) /* S2io Driver name & version. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   94) static const char s2io_driver_name[] = "Neterion";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   95) static const char s2io_driver_version[] = DRV_VERSION;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   96) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   97) static const int rxd_size[2] = {32, 48};
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   98) static const int rxd_count[2] = {127, 85};
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300   99) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  100) static inline int RXD_IS_UP2DT(struct RxD_t *rxdp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  101) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  102) 	int ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  103) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  104) 	ret = ((!(rxdp->Control_1 & RXD_OWN_XENA)) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  105) 	       (GET_RXD_MARKER(rxdp->Control_2) != THE_RXD_MARK));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  106) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  107) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  108) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  109) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  110) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  111)  * Cards with following subsystem_id have a link state indication
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  112)  * problem, 600B, 600C, 600D, 640B, 640C and 640D.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  113)  * macro below identifies these cards given the subsystem_id.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  114)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  115) #define CARDS_WITH_FAULTY_LINK_INDICATORS(dev_type, subid)		\
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  116) 	(dev_type == XFRAME_I_DEVICE) ?					\
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  117) 	((((subid >= 0x600B) && (subid <= 0x600D)) ||			\
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  118) 	  ((subid >= 0x640B) && (subid <= 0x640D))) ? 1 : 0) : 0
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  119) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  120) #define LINK_IS_UP(val64) (!(val64 & (ADAPTER_STATUS_RMAC_REMOTE_FAULT | \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  121) 				      ADAPTER_STATUS_RMAC_LOCAL_FAULT)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  122) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  123) static inline int is_s2io_card_up(const struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  124) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  125) 	return test_bit(__S2IO_STATE_CARD_UP, &sp->state);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  126) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  127) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  128) /* Ethtool related variables and Macros. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  129) static const char s2io_gstrings[][ETH_GSTRING_LEN] = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  130) 	"Register test\t(offline)",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  131) 	"Eeprom test\t(offline)",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  132) 	"Link test\t(online)",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  133) 	"RLDRAM test\t(offline)",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  134) 	"BIST Test\t(offline)"
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  135) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  136) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  137) static const char ethtool_xena_stats_keys[][ETH_GSTRING_LEN] = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  138) 	{"tmac_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  139) 	{"tmac_data_octets"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  140) 	{"tmac_drop_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  141) 	{"tmac_mcst_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  142) 	{"tmac_bcst_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  143) 	{"tmac_pause_ctrl_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  144) 	{"tmac_ttl_octets"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  145) 	{"tmac_ucst_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  146) 	{"tmac_nucst_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  147) 	{"tmac_any_err_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  148) 	{"tmac_ttl_less_fb_octets"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  149) 	{"tmac_vld_ip_octets"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  150) 	{"tmac_vld_ip"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  151) 	{"tmac_drop_ip"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  152) 	{"tmac_icmp"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  153) 	{"tmac_rst_tcp"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  154) 	{"tmac_tcp"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  155) 	{"tmac_udp"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  156) 	{"rmac_vld_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  157) 	{"rmac_data_octets"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  158) 	{"rmac_fcs_err_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  159) 	{"rmac_drop_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  160) 	{"rmac_vld_mcst_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  161) 	{"rmac_vld_bcst_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  162) 	{"rmac_in_rng_len_err_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  163) 	{"rmac_out_rng_len_err_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  164) 	{"rmac_long_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  165) 	{"rmac_pause_ctrl_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  166) 	{"rmac_unsup_ctrl_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  167) 	{"rmac_ttl_octets"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  168) 	{"rmac_accepted_ucst_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  169) 	{"rmac_accepted_nucst_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  170) 	{"rmac_discarded_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  171) 	{"rmac_drop_events"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  172) 	{"rmac_ttl_less_fb_octets"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  173) 	{"rmac_ttl_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  174) 	{"rmac_usized_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  175) 	{"rmac_osized_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  176) 	{"rmac_frag_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  177) 	{"rmac_jabber_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  178) 	{"rmac_ttl_64_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  179) 	{"rmac_ttl_65_127_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  180) 	{"rmac_ttl_128_255_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  181) 	{"rmac_ttl_256_511_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  182) 	{"rmac_ttl_512_1023_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  183) 	{"rmac_ttl_1024_1518_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  184) 	{"rmac_ip"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  185) 	{"rmac_ip_octets"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  186) 	{"rmac_hdr_err_ip"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  187) 	{"rmac_drop_ip"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  188) 	{"rmac_icmp"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  189) 	{"rmac_tcp"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  190) 	{"rmac_udp"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  191) 	{"rmac_err_drp_udp"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  192) 	{"rmac_xgmii_err_sym"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  193) 	{"rmac_frms_q0"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  194) 	{"rmac_frms_q1"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  195) 	{"rmac_frms_q2"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  196) 	{"rmac_frms_q3"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  197) 	{"rmac_frms_q4"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  198) 	{"rmac_frms_q5"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  199) 	{"rmac_frms_q6"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  200) 	{"rmac_frms_q7"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  201) 	{"rmac_full_q0"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  202) 	{"rmac_full_q1"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  203) 	{"rmac_full_q2"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  204) 	{"rmac_full_q3"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  205) 	{"rmac_full_q4"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  206) 	{"rmac_full_q5"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  207) 	{"rmac_full_q6"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  208) 	{"rmac_full_q7"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  209) 	{"rmac_pause_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  210) 	{"rmac_xgmii_data_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  211) 	{"rmac_xgmii_ctrl_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  212) 	{"rmac_accepted_ip"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  213) 	{"rmac_err_tcp"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  214) 	{"rd_req_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  215) 	{"new_rd_req_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  216) 	{"new_rd_req_rtry_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  217) 	{"rd_rtry_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  218) 	{"wr_rtry_rd_ack_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  219) 	{"wr_req_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  220) 	{"new_wr_req_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  221) 	{"new_wr_req_rtry_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  222) 	{"wr_rtry_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  223) 	{"wr_disc_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  224) 	{"rd_rtry_wr_ack_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  225) 	{"txp_wr_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  226) 	{"txd_rd_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  227) 	{"txd_wr_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  228) 	{"rxd_rd_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  229) 	{"rxd_wr_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  230) 	{"txf_rd_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  231) 	{"rxf_wr_cnt"}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  232) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  233) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  234) static const char ethtool_enhanced_stats_keys[][ETH_GSTRING_LEN] = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  235) 	{"rmac_ttl_1519_4095_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  236) 	{"rmac_ttl_4096_8191_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  237) 	{"rmac_ttl_8192_max_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  238) 	{"rmac_ttl_gt_max_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  239) 	{"rmac_osized_alt_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  240) 	{"rmac_jabber_alt_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  241) 	{"rmac_gt_max_alt_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  242) 	{"rmac_vlan_frms"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  243) 	{"rmac_len_discard"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  244) 	{"rmac_fcs_discard"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  245) 	{"rmac_pf_discard"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  246) 	{"rmac_da_discard"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  247) 	{"rmac_red_discard"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  248) 	{"rmac_rts_discard"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  249) 	{"rmac_ingm_full_discard"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  250) 	{"link_fault_cnt"}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  251) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  252) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  253) static const char ethtool_driver_stats_keys[][ETH_GSTRING_LEN] = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  254) 	{"\n DRIVER STATISTICS"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  255) 	{"single_bit_ecc_errs"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  256) 	{"double_bit_ecc_errs"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  257) 	{"parity_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  258) 	{"serious_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  259) 	{"soft_reset_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  260) 	{"fifo_full_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  261) 	{"ring_0_full_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  262) 	{"ring_1_full_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  263) 	{"ring_2_full_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  264) 	{"ring_3_full_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  265) 	{"ring_4_full_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  266) 	{"ring_5_full_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  267) 	{"ring_6_full_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  268) 	{"ring_7_full_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  269) 	{"alarm_transceiver_temp_high"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  270) 	{"alarm_transceiver_temp_low"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  271) 	{"alarm_laser_bias_current_high"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  272) 	{"alarm_laser_bias_current_low"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  273) 	{"alarm_laser_output_power_high"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  274) 	{"alarm_laser_output_power_low"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  275) 	{"warn_transceiver_temp_high"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  276) 	{"warn_transceiver_temp_low"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  277) 	{"warn_laser_bias_current_high"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  278) 	{"warn_laser_bias_current_low"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  279) 	{"warn_laser_output_power_high"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  280) 	{"warn_laser_output_power_low"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  281) 	{"lro_aggregated_pkts"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  282) 	{"lro_flush_both_count"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  283) 	{"lro_out_of_sequence_pkts"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  284) 	{"lro_flush_due_to_max_pkts"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  285) 	{"lro_avg_aggr_pkts"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  286) 	{"mem_alloc_fail_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  287) 	{"pci_map_fail_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  288) 	{"watchdog_timer_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  289) 	{"mem_allocated"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  290) 	{"mem_freed"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  291) 	{"link_up_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  292) 	{"link_down_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  293) 	{"link_up_time"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  294) 	{"link_down_time"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  295) 	{"tx_tcode_buf_abort_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  296) 	{"tx_tcode_desc_abort_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  297) 	{"tx_tcode_parity_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  298) 	{"tx_tcode_link_loss_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  299) 	{"tx_tcode_list_proc_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  300) 	{"rx_tcode_parity_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  301) 	{"rx_tcode_abort_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  302) 	{"rx_tcode_parity_abort_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  303) 	{"rx_tcode_rda_fail_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  304) 	{"rx_tcode_unkn_prot_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  305) 	{"rx_tcode_fcs_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  306) 	{"rx_tcode_buf_size_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  307) 	{"rx_tcode_rxd_corrupt_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  308) 	{"rx_tcode_unkn_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  309) 	{"tda_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  310) 	{"pfc_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  311) 	{"pcc_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  312) 	{"tti_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  313) 	{"tpa_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  314) 	{"sm_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  315) 	{"lso_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  316) 	{"mac_tmac_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  317) 	{"mac_rmac_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  318) 	{"xgxs_txgxs_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  319) 	{"xgxs_rxgxs_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  320) 	{"rc_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  321) 	{"prc_pcix_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  322) 	{"rpa_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  323) 	{"rda_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  324) 	{"rti_err_cnt"},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  325) 	{"mc_err_cnt"}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  326) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  327) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  328) #define S2IO_XENA_STAT_LEN	ARRAY_SIZE(ethtool_xena_stats_keys)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  329) #define S2IO_ENHANCED_STAT_LEN	ARRAY_SIZE(ethtool_enhanced_stats_keys)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  330) #define S2IO_DRIVER_STAT_LEN	ARRAY_SIZE(ethtool_driver_stats_keys)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  331) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  332) #define XFRAME_I_STAT_LEN (S2IO_XENA_STAT_LEN + S2IO_DRIVER_STAT_LEN)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  333) #define XFRAME_II_STAT_LEN (XFRAME_I_STAT_LEN + S2IO_ENHANCED_STAT_LEN)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  334) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  335) #define XFRAME_I_STAT_STRINGS_LEN (XFRAME_I_STAT_LEN * ETH_GSTRING_LEN)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  336) #define XFRAME_II_STAT_STRINGS_LEN (XFRAME_II_STAT_LEN * ETH_GSTRING_LEN)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  337) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  338) #define S2IO_TEST_LEN	ARRAY_SIZE(s2io_gstrings)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  339) #define S2IO_STRINGS_LEN	(S2IO_TEST_LEN * ETH_GSTRING_LEN)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  340) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  341) /* copy mac addr to def_mac_addr array */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  342) static void do_s2io_copy_mac_addr(struct s2io_nic *sp, int offset, u64 mac_addr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  343) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  344) 	sp->def_mac_addr[offset].mac_addr[5] = (u8) (mac_addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  345) 	sp->def_mac_addr[offset].mac_addr[4] = (u8) (mac_addr >> 8);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  346) 	sp->def_mac_addr[offset].mac_addr[3] = (u8) (mac_addr >> 16);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  347) 	sp->def_mac_addr[offset].mac_addr[2] = (u8) (mac_addr >> 24);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  348) 	sp->def_mac_addr[offset].mac_addr[1] = (u8) (mac_addr >> 32);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  349) 	sp->def_mac_addr[offset].mac_addr[0] = (u8) (mac_addr >> 40);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  350) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  351) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  352) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  353)  * Constants to be programmed into the Xena's registers, to configure
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  354)  * the XAUI.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  355)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  356) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  357) #define	END_SIGN	0x0
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  358) static const u64 herc_act_dtx_cfg[] = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  359) 	/* Set address */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  360) 	0x8000051536750000ULL, 0x80000515367500E0ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  361) 	/* Write data */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  362) 	0x8000051536750004ULL, 0x80000515367500E4ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  363) 	/* Set address */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  364) 	0x80010515003F0000ULL, 0x80010515003F00E0ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  365) 	/* Write data */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  366) 	0x80010515003F0004ULL, 0x80010515003F00E4ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  367) 	/* Set address */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  368) 	0x801205150D440000ULL, 0x801205150D4400E0ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  369) 	/* Write data */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  370) 	0x801205150D440004ULL, 0x801205150D4400E4ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  371) 	/* Set address */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  372) 	0x80020515F2100000ULL, 0x80020515F21000E0ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  373) 	/* Write data */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  374) 	0x80020515F2100004ULL, 0x80020515F21000E4ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  375) 	/* Done */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  376) 	END_SIGN
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  377) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  378) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  379) static const u64 xena_dtx_cfg[] = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  380) 	/* Set address */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  381) 	0x8000051500000000ULL, 0x80000515000000E0ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  382) 	/* Write data */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  383) 	0x80000515D9350004ULL, 0x80000515D93500E4ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  384) 	/* Set address */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  385) 	0x8001051500000000ULL, 0x80010515000000E0ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  386) 	/* Write data */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  387) 	0x80010515001E0004ULL, 0x80010515001E00E4ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  388) 	/* Set address */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  389) 	0x8002051500000000ULL, 0x80020515000000E0ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  390) 	/* Write data */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  391) 	0x80020515F2100004ULL, 0x80020515F21000E4ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  392) 	END_SIGN
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  393) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  394) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  395) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  396)  * Constants for Fixing the MacAddress problem seen mostly on
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  397)  * Alpha machines.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  398)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  399) static const u64 fix_mac[] = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  400) 	0x0060000000000000ULL, 0x0060600000000000ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  401) 	0x0040600000000000ULL, 0x0000600000000000ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  402) 	0x0020600000000000ULL, 0x0060600000000000ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  403) 	0x0020600000000000ULL, 0x0060600000000000ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  404) 	0x0020600000000000ULL, 0x0060600000000000ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  405) 	0x0020600000000000ULL, 0x0060600000000000ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  406) 	0x0020600000000000ULL, 0x0060600000000000ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  407) 	0x0020600000000000ULL, 0x0060600000000000ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  408) 	0x0020600000000000ULL, 0x0060600000000000ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  409) 	0x0020600000000000ULL, 0x0060600000000000ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  410) 	0x0020600000000000ULL, 0x0060600000000000ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  411) 	0x0020600000000000ULL, 0x0060600000000000ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  412) 	0x0020600000000000ULL, 0x0000600000000000ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  413) 	0x0040600000000000ULL, 0x0060600000000000ULL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  414) 	END_SIGN
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  415) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  416) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  417) MODULE_LICENSE("GPL");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  418) MODULE_VERSION(DRV_VERSION);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  419) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  420) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  421) /* Module Loadable parameters. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  422) S2IO_PARM_INT(tx_fifo_num, FIFO_DEFAULT_NUM);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  423) S2IO_PARM_INT(rx_ring_num, 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  424) S2IO_PARM_INT(multiq, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  425) S2IO_PARM_INT(rx_ring_mode, 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  426) S2IO_PARM_INT(use_continuous_tx_intrs, 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  427) S2IO_PARM_INT(rmac_pause_time, 0x100);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  428) S2IO_PARM_INT(mc_pause_threshold_q0q3, 187);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  429) S2IO_PARM_INT(mc_pause_threshold_q4q7, 187);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  430) S2IO_PARM_INT(shared_splits, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  431) S2IO_PARM_INT(tmac_util_period, 5);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  432) S2IO_PARM_INT(rmac_util_period, 5);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  433) S2IO_PARM_INT(l3l4hdr_size, 128);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  434) /* 0 is no steering, 1 is Priority steering, 2 is Default steering */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  435) S2IO_PARM_INT(tx_steering_type, TX_DEFAULT_STEERING);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  436) /* Frequency of Rx desc syncs expressed as power of 2 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  437) S2IO_PARM_INT(rxsync_frequency, 3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  438) /* Interrupt type. Values can be 0(INTA), 2(MSI_X) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  439) S2IO_PARM_INT(intr_type, 2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  440) /* Large receive offload feature */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  441) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  442) /* Max pkts to be aggregated by LRO at one time. If not specified,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  443)  * aggregation happens until we hit max IP pkt size(64K)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  444)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  445) S2IO_PARM_INT(lro_max_pkts, 0xFFFF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  446) S2IO_PARM_INT(indicate_max_pkts, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  447) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  448) S2IO_PARM_INT(napi, 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  449) S2IO_PARM_INT(vlan_tag_strip, NO_STRIP_IN_PROMISC);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  450) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  451) static unsigned int tx_fifo_len[MAX_TX_FIFOS] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  452) {DEFAULT_FIFO_0_LEN, [1 ...(MAX_TX_FIFOS - 1)] = DEFAULT_FIFO_1_7_LEN};
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  453) static unsigned int rx_ring_sz[MAX_RX_RINGS] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  454) {[0 ...(MAX_RX_RINGS - 1)] = SMALL_BLK_CNT};
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  455) static unsigned int rts_frm_len[MAX_RX_RINGS] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  456) {[0 ...(MAX_RX_RINGS - 1)] = 0 };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  457) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  458) module_param_array(tx_fifo_len, uint, NULL, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  459) module_param_array(rx_ring_sz, uint, NULL, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  460) module_param_array(rts_frm_len, uint, NULL, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  461) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  462) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  463)  * S2IO device table.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  464)  * This table lists all the devices that this driver supports.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  465)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  466) static const struct pci_device_id s2io_tbl[] = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  467) 	{PCI_VENDOR_ID_S2IO, PCI_DEVICE_ID_S2IO_WIN,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  468) 	 PCI_ANY_ID, PCI_ANY_ID},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  469) 	{PCI_VENDOR_ID_S2IO, PCI_DEVICE_ID_S2IO_UNI,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  470) 	 PCI_ANY_ID, PCI_ANY_ID},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  471) 	{PCI_VENDOR_ID_S2IO, PCI_DEVICE_ID_HERC_WIN,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  472) 	 PCI_ANY_ID, PCI_ANY_ID},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  473) 	{PCI_VENDOR_ID_S2IO, PCI_DEVICE_ID_HERC_UNI,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  474) 	 PCI_ANY_ID, PCI_ANY_ID},
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  475) 	{0,}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  476) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  477) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  478) MODULE_DEVICE_TABLE(pci, s2io_tbl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  479) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  480) static const struct pci_error_handlers s2io_err_handler = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  481) 	.error_detected = s2io_io_error_detected,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  482) 	.slot_reset = s2io_io_slot_reset,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  483) 	.resume = s2io_io_resume,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  484) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  485) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  486) static struct pci_driver s2io_driver = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  487) 	.name = "S2IO",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  488) 	.id_table = s2io_tbl,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  489) 	.probe = s2io_init_nic,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  490) 	.remove = s2io_rem_nic,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  491) 	.err_handler = &s2io_err_handler,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  492) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  493) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  494) /* A simplifier macro used both by init and free shared_mem Fns(). */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  495) #define TXD_MEM_PAGE_CNT(len, per_each) DIV_ROUND_UP(len, per_each)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  496) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  497) /* netqueue manipulation helper functions */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  498) static inline void s2io_stop_all_tx_queue(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  499) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  500) 	if (!sp->config.multiq) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  501) 		int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  502) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  503) 		for (i = 0; i < sp->config.tx_fifo_num; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  504) 			sp->mac_control.fifos[i].queue_state = FIFO_QUEUE_STOP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  505) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  506) 	netif_tx_stop_all_queues(sp->dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  507) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  508) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  509) static inline void s2io_stop_tx_queue(struct s2io_nic *sp, int fifo_no)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  510) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  511) 	if (!sp->config.multiq)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  512) 		sp->mac_control.fifos[fifo_no].queue_state =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  513) 			FIFO_QUEUE_STOP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  514) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  515) 	netif_tx_stop_all_queues(sp->dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  516) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  517) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  518) static inline void s2io_start_all_tx_queue(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  519) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  520) 	if (!sp->config.multiq) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  521) 		int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  522) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  523) 		for (i = 0; i < sp->config.tx_fifo_num; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  524) 			sp->mac_control.fifos[i].queue_state = FIFO_QUEUE_START;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  525) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  526) 	netif_tx_start_all_queues(sp->dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  527) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  528) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  529) static inline void s2io_wake_all_tx_queue(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  530) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  531) 	if (!sp->config.multiq) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  532) 		int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  533) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  534) 		for (i = 0; i < sp->config.tx_fifo_num; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  535) 			sp->mac_control.fifos[i].queue_state = FIFO_QUEUE_START;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  536) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  537) 	netif_tx_wake_all_queues(sp->dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  538) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  539) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  540) static inline void s2io_wake_tx_queue(
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  541) 	struct fifo_info *fifo, int cnt, u8 multiq)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  542) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  543) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  544) 	if (multiq) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  545) 		if (cnt && __netif_subqueue_stopped(fifo->dev, fifo->fifo_no))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  546) 			netif_wake_subqueue(fifo->dev, fifo->fifo_no);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  547) 	} else if (cnt && (fifo->queue_state == FIFO_QUEUE_STOP)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  548) 		if (netif_queue_stopped(fifo->dev)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  549) 			fifo->queue_state = FIFO_QUEUE_START;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  550) 			netif_wake_queue(fifo->dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  551) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  552) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  553) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  554) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  555) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  556)  * init_shared_mem - Allocation and Initialization of Memory
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  557)  * @nic: Device private variable.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  558)  * Description: The function allocates all the memory areas shared
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  559)  * between the NIC and the driver. This includes Tx descriptors,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  560)  * Rx descriptors and the statistics block.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  561)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  562) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  563) static int init_shared_mem(struct s2io_nic *nic)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  564) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  565) 	u32 size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  566) 	void *tmp_v_addr, *tmp_v_addr_next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  567) 	dma_addr_t tmp_p_addr, tmp_p_addr_next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  568) 	struct RxD_block *pre_rxd_blk = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  569) 	int i, j, blk_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  570) 	int lst_size, lst_per_page;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  571) 	struct net_device *dev = nic->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  572) 	unsigned long tmp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  573) 	struct buffAdd *ba;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  574) 	struct config_param *config = &nic->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  575) 	struct mac_info *mac_control = &nic->mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  576) 	unsigned long long mem_allocated = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  577) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  578) 	/* Allocation and initialization of TXDLs in FIFOs */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  579) 	size = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  580) 	for (i = 0; i < config->tx_fifo_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  581) 		struct tx_fifo_config *tx_cfg = &config->tx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  582) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  583) 		size += tx_cfg->fifo_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  584) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  585) 	if (size > MAX_AVAILABLE_TXDS) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  586) 		DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  587) 			  "Too many TxDs requested: %d, max supported: %d\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  588) 			  size, MAX_AVAILABLE_TXDS);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  589) 		return -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  590) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  591) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  592) 	size = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  593) 	for (i = 0; i < config->tx_fifo_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  594) 		struct tx_fifo_config *tx_cfg = &config->tx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  595) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  596) 		size = tx_cfg->fifo_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  597) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  598) 		 * Legal values are from 2 to 8192
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  599) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  600) 		if (size < 2) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  601) 			DBG_PRINT(ERR_DBG, "Fifo %d: Invalid length (%d) - "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  602) 				  "Valid lengths are 2 through 8192\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  603) 				  i, size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  604) 			return -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  605) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  606) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  607) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  608) 	lst_size = (sizeof(struct TxD) * config->max_txds);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  609) 	lst_per_page = PAGE_SIZE / lst_size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  610) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  611) 	for (i = 0; i < config->tx_fifo_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  612) 		struct fifo_info *fifo = &mac_control->fifos[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  613) 		struct tx_fifo_config *tx_cfg = &config->tx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  614) 		int fifo_len = tx_cfg->fifo_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  615) 		int list_holder_size = fifo_len * sizeof(struct list_info_hold);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  616) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  617) 		fifo->list_info = kzalloc(list_holder_size, GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  618) 		if (!fifo->list_info) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  619) 			DBG_PRINT(INFO_DBG, "Malloc failed for list_info\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  620) 			return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  621) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  622) 		mem_allocated += list_holder_size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  623) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  624) 	for (i = 0; i < config->tx_fifo_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  625) 		int page_num = TXD_MEM_PAGE_CNT(config->tx_cfg[i].fifo_len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  626) 						lst_per_page);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  627) 		struct fifo_info *fifo = &mac_control->fifos[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  628) 		struct tx_fifo_config *tx_cfg = &config->tx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  629) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  630) 		fifo->tx_curr_put_info.offset = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  631) 		fifo->tx_curr_put_info.fifo_len = tx_cfg->fifo_len - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  632) 		fifo->tx_curr_get_info.offset = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  633) 		fifo->tx_curr_get_info.fifo_len = tx_cfg->fifo_len - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  634) 		fifo->fifo_no = i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  635) 		fifo->nic = nic;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  636) 		fifo->max_txds = MAX_SKB_FRAGS + 2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  637) 		fifo->dev = dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  638) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  639) 		for (j = 0; j < page_num; j++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  640) 			int k = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  641) 			dma_addr_t tmp_p;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  642) 			void *tmp_v;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  643) 			tmp_v = dma_alloc_coherent(&nic->pdev->dev, PAGE_SIZE,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  644) 						   &tmp_p, GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  645) 			if (!tmp_v) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  646) 				DBG_PRINT(INFO_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  647) 					  "dma_alloc_coherent failed for TxDL\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  648) 				return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  649) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  650) 			/* If we got a zero DMA address(can happen on
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  651) 			 * certain platforms like PPC), reallocate.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  652) 			 * Store virtual address of page we don't want,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  653) 			 * to be freed later.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  654) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  655) 			if (!tmp_p) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  656) 				mac_control->zerodma_virt_addr = tmp_v;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  657) 				DBG_PRINT(INIT_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  658) 					  "%s: Zero DMA address for TxDL. "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  659) 					  "Virtual address %p\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  660) 					  dev->name, tmp_v);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  661) 				tmp_v = dma_alloc_coherent(&nic->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  662) 							   PAGE_SIZE, &tmp_p,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  663) 							   GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  664) 				if (!tmp_v) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  665) 					DBG_PRINT(INFO_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  666) 						  "dma_alloc_coherent failed for TxDL\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  667) 					return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  668) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  669) 				mem_allocated += PAGE_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  670) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  671) 			while (k < lst_per_page) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  672) 				int l = (j * lst_per_page) + k;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  673) 				if (l == tx_cfg->fifo_len)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  674) 					break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  675) 				fifo->list_info[l].list_virt_addr =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  676) 					tmp_v + (k * lst_size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  677) 				fifo->list_info[l].list_phy_addr =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  678) 					tmp_p + (k * lst_size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  679) 				k++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  680) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  681) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  682) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  683) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  684) 	for (i = 0; i < config->tx_fifo_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  685) 		struct fifo_info *fifo = &mac_control->fifos[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  686) 		struct tx_fifo_config *tx_cfg = &config->tx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  687) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  688) 		size = tx_cfg->fifo_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  689) 		fifo->ufo_in_band_v = kcalloc(size, sizeof(u64), GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  690) 		if (!fifo->ufo_in_band_v)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  691) 			return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  692) 		mem_allocated += (size * sizeof(u64));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  693) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  694) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  695) 	/* Allocation and initialization of RXDs in Rings */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  696) 	size = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  697) 	for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  698) 		struct rx_ring_config *rx_cfg = &config->rx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  699) 		struct ring_info *ring = &mac_control->rings[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  700) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  701) 		if (rx_cfg->num_rxd % (rxd_count[nic->rxd_mode] + 1)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  702) 			DBG_PRINT(ERR_DBG, "%s: Ring%d RxD count is not a "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  703) 				  "multiple of RxDs per Block\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  704) 				  dev->name, i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  705) 			return FAILURE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  706) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  707) 		size += rx_cfg->num_rxd;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  708) 		ring->block_count = rx_cfg->num_rxd /
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  709) 			(rxd_count[nic->rxd_mode] + 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  710) 		ring->pkt_cnt = rx_cfg->num_rxd - ring->block_count;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  711) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  712) 	if (nic->rxd_mode == RXD_MODE_1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  713) 		size = (size * (sizeof(struct RxD1)));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  714) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  715) 		size = (size * (sizeof(struct RxD3)));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  716) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  717) 	for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  718) 		struct rx_ring_config *rx_cfg = &config->rx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  719) 		struct ring_info *ring = &mac_control->rings[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  720) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  721) 		ring->rx_curr_get_info.block_index = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  722) 		ring->rx_curr_get_info.offset = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  723) 		ring->rx_curr_get_info.ring_len = rx_cfg->num_rxd - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  724) 		ring->rx_curr_put_info.block_index = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  725) 		ring->rx_curr_put_info.offset = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  726) 		ring->rx_curr_put_info.ring_len = rx_cfg->num_rxd - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  727) 		ring->nic = nic;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  728) 		ring->ring_no = i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  729) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  730) 		blk_cnt = rx_cfg->num_rxd / (rxd_count[nic->rxd_mode] + 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  731) 		/*  Allocating all the Rx blocks */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  732) 		for (j = 0; j < blk_cnt; j++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  733) 			struct rx_block_info *rx_blocks;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  734) 			int l;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  735) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  736) 			rx_blocks = &ring->rx_blocks[j];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  737) 			size = SIZE_OF_BLOCK;	/* size is always page size */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  738) 			tmp_v_addr = dma_alloc_coherent(&nic->pdev->dev, size,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  739) 							&tmp_p_addr, GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  740) 			if (tmp_v_addr == NULL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  741) 				/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  742) 				 * In case of failure, free_shared_mem()
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  743) 				 * is called, which should free any
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  744) 				 * memory that was alloced till the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  745) 				 * failure happened.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  746) 				 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  747) 				rx_blocks->block_virt_addr = tmp_v_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  748) 				return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  749) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  750) 			mem_allocated += size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  751) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  752) 			size = sizeof(struct rxd_info) *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  753) 				rxd_count[nic->rxd_mode];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  754) 			rx_blocks->block_virt_addr = tmp_v_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  755) 			rx_blocks->block_dma_addr = tmp_p_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  756) 			rx_blocks->rxds = kmalloc(size,  GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  757) 			if (!rx_blocks->rxds)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  758) 				return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  759) 			mem_allocated += size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  760) 			for (l = 0; l < rxd_count[nic->rxd_mode]; l++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  761) 				rx_blocks->rxds[l].virt_addr =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  762) 					rx_blocks->block_virt_addr +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  763) 					(rxd_size[nic->rxd_mode] * l);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  764) 				rx_blocks->rxds[l].dma_addr =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  765) 					rx_blocks->block_dma_addr +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  766) 					(rxd_size[nic->rxd_mode] * l);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  767) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  768) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  769) 		/* Interlinking all Rx Blocks */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  770) 		for (j = 0; j < blk_cnt; j++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  771) 			int next = (j + 1) % blk_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  772) 			tmp_v_addr = ring->rx_blocks[j].block_virt_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  773) 			tmp_v_addr_next = ring->rx_blocks[next].block_virt_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  774) 			tmp_p_addr = ring->rx_blocks[j].block_dma_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  775) 			tmp_p_addr_next = ring->rx_blocks[next].block_dma_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  776) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  777) 			pre_rxd_blk = tmp_v_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  778) 			pre_rxd_blk->reserved_2_pNext_RxD_block =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  779) 				(unsigned long)tmp_v_addr_next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  780) 			pre_rxd_blk->pNext_RxD_Blk_physical =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  781) 				(u64)tmp_p_addr_next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  782) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  783) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  784) 	if (nic->rxd_mode == RXD_MODE_3B) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  785) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  786) 		 * Allocation of Storages for buffer addresses in 2BUFF mode
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  787) 		 * and the buffers as well.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  788) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  789) 		for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  790) 			struct rx_ring_config *rx_cfg = &config->rx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  791) 			struct ring_info *ring = &mac_control->rings[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  792) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  793) 			blk_cnt = rx_cfg->num_rxd /
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  794) 				(rxd_count[nic->rxd_mode] + 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  795) 			size = sizeof(struct buffAdd *) * blk_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  796) 			ring->ba = kmalloc(size, GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  797) 			if (!ring->ba)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  798) 				return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  799) 			mem_allocated += size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  800) 			for (j = 0; j < blk_cnt; j++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  801) 				int k = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  802) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  803) 				size = sizeof(struct buffAdd) *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  804) 					(rxd_count[nic->rxd_mode] + 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  805) 				ring->ba[j] = kmalloc(size, GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  806) 				if (!ring->ba[j])
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  807) 					return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  808) 				mem_allocated += size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  809) 				while (k != rxd_count[nic->rxd_mode]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  810) 					ba = &ring->ba[j][k];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  811) 					size = BUF0_LEN + ALIGN_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  812) 					ba->ba_0_org = kmalloc(size, GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  813) 					if (!ba->ba_0_org)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  814) 						return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  815) 					mem_allocated += size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  816) 					tmp = (unsigned long)ba->ba_0_org;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  817) 					tmp += ALIGN_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  818) 					tmp &= ~((unsigned long)ALIGN_SIZE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  819) 					ba->ba_0 = (void *)tmp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  820) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  821) 					size = BUF1_LEN + ALIGN_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  822) 					ba->ba_1_org = kmalloc(size, GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  823) 					if (!ba->ba_1_org)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  824) 						return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  825) 					mem_allocated += size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  826) 					tmp = (unsigned long)ba->ba_1_org;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  827) 					tmp += ALIGN_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  828) 					tmp &= ~((unsigned long)ALIGN_SIZE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  829) 					ba->ba_1 = (void *)tmp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  830) 					k++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  831) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  832) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  833) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  834) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  835) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  836) 	/* Allocation and initialization of Statistics block */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  837) 	size = sizeof(struct stat_block);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  838) 	mac_control->stats_mem =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  839) 		dma_alloc_coherent(&nic->pdev->dev, size,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  840) 				   &mac_control->stats_mem_phy, GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  841) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  842) 	if (!mac_control->stats_mem) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  843) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  844) 		 * In case of failure, free_shared_mem() is called, which
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  845) 		 * should free any memory that was alloced till the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  846) 		 * failure happened.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  847) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  848) 		return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  849) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  850) 	mem_allocated += size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  851) 	mac_control->stats_mem_sz = size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  852) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  853) 	tmp_v_addr = mac_control->stats_mem;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  854) 	mac_control->stats_info = tmp_v_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  855) 	memset(tmp_v_addr, 0, size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  856) 	DBG_PRINT(INIT_DBG, "%s: Ring Mem PHY: 0x%llx\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  857) 		dev_name(&nic->pdev->dev), (unsigned long long)tmp_p_addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  858) 	mac_control->stats_info->sw_stat.mem_allocated += mem_allocated;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  859) 	return SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  860) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  861) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  862) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  863)  * free_shared_mem - Free the allocated Memory
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  864)  * @nic:  Device private variable.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  865)  * Description: This function is to free all memory locations allocated by
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  866)  * the init_shared_mem() function and return it to the kernel.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  867)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  868) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  869) static void free_shared_mem(struct s2io_nic *nic)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  870) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  871) 	int i, j, blk_cnt, size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  872) 	void *tmp_v_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  873) 	dma_addr_t tmp_p_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  874) 	int lst_size, lst_per_page;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  875) 	struct net_device *dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  876) 	int page_num = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  877) 	struct config_param *config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  878) 	struct mac_info *mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  879) 	struct stat_block *stats;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  880) 	struct swStat *swstats;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  881) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  882) 	if (!nic)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  883) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  884) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  885) 	dev = nic->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  886) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  887) 	config = &nic->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  888) 	mac_control = &nic->mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  889) 	stats = mac_control->stats_info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  890) 	swstats = &stats->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  891) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  892) 	lst_size = sizeof(struct TxD) * config->max_txds;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  893) 	lst_per_page = PAGE_SIZE / lst_size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  894) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  895) 	for (i = 0; i < config->tx_fifo_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  896) 		struct fifo_info *fifo = &mac_control->fifos[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  897) 		struct tx_fifo_config *tx_cfg = &config->tx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  898) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  899) 		page_num = TXD_MEM_PAGE_CNT(tx_cfg->fifo_len, lst_per_page);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  900) 		for (j = 0; j < page_num; j++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  901) 			int mem_blks = (j * lst_per_page);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  902) 			struct list_info_hold *fli;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  903) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  904) 			if (!fifo->list_info)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  905) 				return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  906) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  907) 			fli = &fifo->list_info[mem_blks];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  908) 			if (!fli->list_virt_addr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  909) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  910) 			dma_free_coherent(&nic->pdev->dev, PAGE_SIZE,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  911) 					  fli->list_virt_addr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  912) 					  fli->list_phy_addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  913) 			swstats->mem_freed += PAGE_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  914) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  915) 		/* If we got a zero DMA address during allocation,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  916) 		 * free the page now
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  917) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  918) 		if (mac_control->zerodma_virt_addr) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  919) 			dma_free_coherent(&nic->pdev->dev, PAGE_SIZE,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  920) 					  mac_control->zerodma_virt_addr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  921) 					  (dma_addr_t)0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  922) 			DBG_PRINT(INIT_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  923) 				  "%s: Freeing TxDL with zero DMA address. "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  924) 				  "Virtual address %p\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  925) 				  dev->name, mac_control->zerodma_virt_addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  926) 			swstats->mem_freed += PAGE_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  927) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  928) 		kfree(fifo->list_info);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  929) 		swstats->mem_freed += tx_cfg->fifo_len *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  930) 			sizeof(struct list_info_hold);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  931) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  932) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  933) 	size = SIZE_OF_BLOCK;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  934) 	for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  935) 		struct ring_info *ring = &mac_control->rings[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  936) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  937) 		blk_cnt = ring->block_count;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  938) 		for (j = 0; j < blk_cnt; j++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  939) 			tmp_v_addr = ring->rx_blocks[j].block_virt_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  940) 			tmp_p_addr = ring->rx_blocks[j].block_dma_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  941) 			if (tmp_v_addr == NULL)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  942) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  943) 			dma_free_coherent(&nic->pdev->dev, size, tmp_v_addr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  944) 					  tmp_p_addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  945) 			swstats->mem_freed += size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  946) 			kfree(ring->rx_blocks[j].rxds);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  947) 			swstats->mem_freed += sizeof(struct rxd_info) *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  948) 				rxd_count[nic->rxd_mode];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  949) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  950) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  951) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  952) 	if (nic->rxd_mode == RXD_MODE_3B) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  953) 		/* Freeing buffer storage addresses in 2BUFF mode. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  954) 		for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  955) 			struct rx_ring_config *rx_cfg = &config->rx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  956) 			struct ring_info *ring = &mac_control->rings[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  957) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  958) 			blk_cnt = rx_cfg->num_rxd /
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  959) 				(rxd_count[nic->rxd_mode] + 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  960) 			for (j = 0; j < blk_cnt; j++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  961) 				int k = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  962) 				if (!ring->ba[j])
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  963) 					continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  964) 				while (k != rxd_count[nic->rxd_mode]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  965) 					struct buffAdd *ba = &ring->ba[j][k];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  966) 					kfree(ba->ba_0_org);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  967) 					swstats->mem_freed +=
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  968) 						BUF0_LEN + ALIGN_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  969) 					kfree(ba->ba_1_org);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  970) 					swstats->mem_freed +=
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  971) 						BUF1_LEN + ALIGN_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  972) 					k++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  973) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  974) 				kfree(ring->ba[j]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  975) 				swstats->mem_freed += sizeof(struct buffAdd) *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  976) 					(rxd_count[nic->rxd_mode] + 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  977) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  978) 			kfree(ring->ba);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  979) 			swstats->mem_freed += sizeof(struct buffAdd *) *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  980) 				blk_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  981) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  982) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  983) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  984) 	for (i = 0; i < nic->config.tx_fifo_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  985) 		struct fifo_info *fifo = &mac_control->fifos[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  986) 		struct tx_fifo_config *tx_cfg = &config->tx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  987) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  988) 		if (fifo->ufo_in_band_v) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  989) 			swstats->mem_freed += tx_cfg->fifo_len *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  990) 				sizeof(u64);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  991) 			kfree(fifo->ufo_in_band_v);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  992) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  993) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  994) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  995) 	if (mac_control->stats_mem) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  996) 		swstats->mem_freed += mac_control->stats_mem_sz;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  997) 		dma_free_coherent(&nic->pdev->dev, mac_control->stats_mem_sz,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  998) 				  mac_control->stats_mem,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  999) 				  mac_control->stats_mem_phy);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1000) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1001) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1002) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1003) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1004)  * s2io_verify_pci_mode -
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1005)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1006) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1007) static int s2io_verify_pci_mode(struct s2io_nic *nic)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1008) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1009) 	struct XENA_dev_config __iomem *bar0 = nic->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1010) 	register u64 val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1011) 	int     mode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1012) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1013) 	val64 = readq(&bar0->pci_mode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1014) 	mode = (u8)GET_PCI_MODE(val64);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1015) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1016) 	if (val64 & PCI_MODE_UNKNOWN_MODE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1017) 		return -1;      /* Unknown PCI mode */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1018) 	return mode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1019) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1020) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1021) #define NEC_VENID   0x1033
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1022) #define NEC_DEVID   0x0125
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1023) static int s2io_on_nec_bridge(struct pci_dev *s2io_pdev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1024) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1025) 	struct pci_dev *tdev = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1026) 	for_each_pci_dev(tdev) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1027) 		if (tdev->vendor == NEC_VENID && tdev->device == NEC_DEVID) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1028) 			if (tdev->bus == s2io_pdev->bus->parent) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1029) 				pci_dev_put(tdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1030) 				return 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1031) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1032) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1033) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1034) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1035) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1036) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1037) static int bus_speed[8] = {33, 133, 133, 200, 266, 133, 200, 266};
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1038) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1039)  * s2io_print_pci_mode -
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1040)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1041) static int s2io_print_pci_mode(struct s2io_nic *nic)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1042) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1043) 	struct XENA_dev_config __iomem *bar0 = nic->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1044) 	register u64 val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1045) 	int	mode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1046) 	struct config_param *config = &nic->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1047) 	const char *pcimode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1048) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1049) 	val64 = readq(&bar0->pci_mode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1050) 	mode = (u8)GET_PCI_MODE(val64);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1051) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1052) 	if (val64 & PCI_MODE_UNKNOWN_MODE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1053) 		return -1;	/* Unknown PCI mode */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1054) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1055) 	config->bus_speed = bus_speed[mode];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1056) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1057) 	if (s2io_on_nec_bridge(nic->pdev)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1058) 		DBG_PRINT(ERR_DBG, "%s: Device is on PCI-E bus\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1059) 			  nic->dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1060) 		return mode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1061) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1062) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1063) 	switch (mode) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1064) 	case PCI_MODE_PCI_33:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1065) 		pcimode = "33MHz PCI bus";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1066) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1067) 	case PCI_MODE_PCI_66:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1068) 		pcimode = "66MHz PCI bus";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1069) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1070) 	case PCI_MODE_PCIX_M1_66:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1071) 		pcimode = "66MHz PCIX(M1) bus";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1072) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1073) 	case PCI_MODE_PCIX_M1_100:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1074) 		pcimode = "100MHz PCIX(M1) bus";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1075) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1076) 	case PCI_MODE_PCIX_M1_133:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1077) 		pcimode = "133MHz PCIX(M1) bus";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1078) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1079) 	case PCI_MODE_PCIX_M2_66:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1080) 		pcimode = "133MHz PCIX(M2) bus";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1081) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1082) 	case PCI_MODE_PCIX_M2_100:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1083) 		pcimode = "200MHz PCIX(M2) bus";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1084) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1085) 	case PCI_MODE_PCIX_M2_133:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1086) 		pcimode = "266MHz PCIX(M2) bus";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1087) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1088) 	default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1089) 		pcimode = "unsupported bus!";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1090) 		mode = -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1091) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1092) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1093) 	DBG_PRINT(ERR_DBG, "%s: Device is on %d bit %s\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1094) 		  nic->dev->name, val64 & PCI_MODE_32_BITS ? 32 : 64, pcimode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1095) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1096) 	return mode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1097) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1098) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1099) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1100)  *  init_tti - Initialization transmit traffic interrupt scheme
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1101)  *  @nic: device private variable
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1102)  *  @link: link status (UP/DOWN) used to enable/disable continuous
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1103)  *  transmit interrupts
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1104)  *  Description: The function configures transmit traffic interrupts
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1105)  *  Return Value:  SUCCESS on success and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1106)  *  '-1' on failure
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1107)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1108) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1109) static int init_tti(struct s2io_nic *nic, int link)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1110) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1111) 	struct XENA_dev_config __iomem *bar0 = nic->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1112) 	register u64 val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1113) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1114) 	struct config_param *config = &nic->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1115) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1116) 	for (i = 0; i < config->tx_fifo_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1117) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1118) 		 * TTI Initialization. Default Tx timer gets us about
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1119) 		 * 250 interrupts per sec. Continuous interrupts are enabled
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1120) 		 * by default.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1121) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1122) 		if (nic->device_type == XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1123) 			int count = (nic->config.bus_speed * 125)/2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1124) 			val64 = TTI_DATA1_MEM_TX_TIMER_VAL(count);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1125) 		} else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1126) 			val64 = TTI_DATA1_MEM_TX_TIMER_VAL(0x2078);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1127) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1128) 		val64 |= TTI_DATA1_MEM_TX_URNG_A(0xA) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1129) 			TTI_DATA1_MEM_TX_URNG_B(0x10) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1130) 			TTI_DATA1_MEM_TX_URNG_C(0x30) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1131) 			TTI_DATA1_MEM_TX_TIMER_AC_EN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1132) 		if (i == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1133) 			if (use_continuous_tx_intrs && (link == LINK_UP))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1134) 				val64 |= TTI_DATA1_MEM_TX_TIMER_CI_EN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1135) 		writeq(val64, &bar0->tti_data1_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1136) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1137) 		if (nic->config.intr_type == MSI_X) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1138) 			val64 = TTI_DATA2_MEM_TX_UFC_A(0x10) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1139) 				TTI_DATA2_MEM_TX_UFC_B(0x100) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1140) 				TTI_DATA2_MEM_TX_UFC_C(0x200) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1141) 				TTI_DATA2_MEM_TX_UFC_D(0x300);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1142) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1143) 			if ((nic->config.tx_steering_type ==
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1144) 			     TX_DEFAULT_STEERING) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1145) 			    (config->tx_fifo_num > 1) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1146) 			    (i >= nic->udp_fifo_idx) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1147) 			    (i < (nic->udp_fifo_idx +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1148) 				  nic->total_udp_fifos)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1149) 				val64 = TTI_DATA2_MEM_TX_UFC_A(0x50) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1150) 					TTI_DATA2_MEM_TX_UFC_B(0x80) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1151) 					TTI_DATA2_MEM_TX_UFC_C(0x100) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1152) 					TTI_DATA2_MEM_TX_UFC_D(0x120);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1153) 			else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1154) 				val64 = TTI_DATA2_MEM_TX_UFC_A(0x10) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1155) 					TTI_DATA2_MEM_TX_UFC_B(0x20) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1156) 					TTI_DATA2_MEM_TX_UFC_C(0x40) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1157) 					TTI_DATA2_MEM_TX_UFC_D(0x80);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1158) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1159) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1160) 		writeq(val64, &bar0->tti_data2_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1161) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1162) 		val64 = TTI_CMD_MEM_WE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1163) 			TTI_CMD_MEM_STROBE_NEW_CMD |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1164) 			TTI_CMD_MEM_OFFSET(i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1165) 		writeq(val64, &bar0->tti_command_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1166) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1167) 		if (wait_for_cmd_complete(&bar0->tti_command_mem,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1168) 					  TTI_CMD_MEM_STROBE_NEW_CMD,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1169) 					  S2IO_BIT_RESET) != SUCCESS)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1170) 			return FAILURE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1171) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1172) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1173) 	return SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1174) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1175) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1176) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1177)  *  init_nic - Initialization of hardware
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1178)  *  @nic: device private variable
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1179)  *  Description: The function sequentially configures every block
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1180)  *  of the H/W from their reset values.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1181)  *  Return Value:  SUCCESS on success and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1182)  *  '-1' on failure (endian settings incorrect).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1183)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1184) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1185) static int init_nic(struct s2io_nic *nic)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1186) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1187) 	struct XENA_dev_config __iomem *bar0 = nic->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1188) 	struct net_device *dev = nic->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1189) 	register u64 val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1190) 	void __iomem *add;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1191) 	u32 time;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1192) 	int i, j;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1193) 	int dtx_cnt = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1194) 	unsigned long long mem_share;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1195) 	int mem_size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1196) 	struct config_param *config = &nic->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1197) 	struct mac_info *mac_control = &nic->mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1198) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1199) 	/* to set the swapper controle on the card */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1200) 	if (s2io_set_swapper(nic)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1201) 		DBG_PRINT(ERR_DBG, "ERROR: Setting Swapper failed\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1202) 		return -EIO;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1203) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1204) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1205) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1206) 	 * Herc requires EOI to be removed from reset before XGXS, so..
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1207) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1208) 	if (nic->device_type & XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1209) 		val64 = 0xA500000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1210) 		writeq(val64, &bar0->sw_reset);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1211) 		msleep(500);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1212) 		val64 = readq(&bar0->sw_reset);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1213) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1214) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1215) 	/* Remove XGXS from reset state */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1216) 	val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1217) 	writeq(val64, &bar0->sw_reset);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1218) 	msleep(500);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1219) 	val64 = readq(&bar0->sw_reset);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1220) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1221) 	/* Ensure that it's safe to access registers by checking
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1222) 	 * RIC_RUNNING bit is reset. Check is valid only for XframeII.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1223) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1224) 	if (nic->device_type == XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1225) 		for (i = 0; i < 50; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1226) 			val64 = readq(&bar0->adapter_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1227) 			if (!(val64 & ADAPTER_STATUS_RIC_RUNNING))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1228) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1229) 			msleep(10);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1230) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1231) 		if (i == 50)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1232) 			return -ENODEV;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1233) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1234) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1235) 	/*  Enable Receiving broadcasts */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1236) 	add = &bar0->mac_cfg;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1237) 	val64 = readq(&bar0->mac_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1238) 	val64 |= MAC_RMAC_BCAST_ENABLE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1239) 	writeq(RMAC_CFG_KEY(0x4C0D), &bar0->rmac_cfg_key);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1240) 	writel((u32)val64, add);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1241) 	writeq(RMAC_CFG_KEY(0x4C0D), &bar0->rmac_cfg_key);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1242) 	writel((u32) (val64 >> 32), (add + 4));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1243) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1244) 	/* Read registers in all blocks */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1245) 	val64 = readq(&bar0->mac_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1246) 	val64 = readq(&bar0->mc_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1247) 	val64 = readq(&bar0->xgxs_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1248) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1249) 	/*  Set MTU */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1250) 	val64 = dev->mtu;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1251) 	writeq(vBIT(val64, 2, 14), &bar0->rmac_max_pyld_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1252) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1253) 	if (nic->device_type & XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1254) 		while (herc_act_dtx_cfg[dtx_cnt] != END_SIGN) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1255) 			SPECIAL_REG_WRITE(herc_act_dtx_cfg[dtx_cnt],
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1256) 					  &bar0->dtx_control, UF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1257) 			if (dtx_cnt & 0x1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1258) 				msleep(1); /* Necessary!! */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1259) 			dtx_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1260) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1261) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1262) 		while (xena_dtx_cfg[dtx_cnt] != END_SIGN) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1263) 			SPECIAL_REG_WRITE(xena_dtx_cfg[dtx_cnt],
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1264) 					  &bar0->dtx_control, UF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1265) 			val64 = readq(&bar0->dtx_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1266) 			dtx_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1267) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1268) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1269) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1270) 	/*  Tx DMA Initialization */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1271) 	val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1272) 	writeq(val64, &bar0->tx_fifo_partition_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1273) 	writeq(val64, &bar0->tx_fifo_partition_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1274) 	writeq(val64, &bar0->tx_fifo_partition_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1275) 	writeq(val64, &bar0->tx_fifo_partition_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1276) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1277) 	for (i = 0, j = 0; i < config->tx_fifo_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1278) 		struct tx_fifo_config *tx_cfg = &config->tx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1279) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1280) 		val64 |= vBIT(tx_cfg->fifo_len - 1, ((j * 32) + 19), 13) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1281) 			vBIT(tx_cfg->fifo_priority, ((j * 32) + 5), 3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1282) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1283) 		if (i == (config->tx_fifo_num - 1)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1284) 			if (i % 2 == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1285) 				i++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1286) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1287) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1288) 		switch (i) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1289) 		case 1:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1290) 			writeq(val64, &bar0->tx_fifo_partition_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1291) 			val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1292) 			j = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1293) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1294) 		case 3:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1295) 			writeq(val64, &bar0->tx_fifo_partition_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1296) 			val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1297) 			j = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1298) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1299) 		case 5:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1300) 			writeq(val64, &bar0->tx_fifo_partition_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1301) 			val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1302) 			j = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1303) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1304) 		case 7:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1305) 			writeq(val64, &bar0->tx_fifo_partition_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1306) 			val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1307) 			j = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1308) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1309) 		default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1310) 			j++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1311) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1312) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1313) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1314) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1315) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1316) 	 * Disable 4 PCCs for Xena1, 2 and 3 as per H/W bug
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1317) 	 * SXE-008 TRANSMIT DMA ARBITRATION ISSUE.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1318) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1319) 	if ((nic->device_type == XFRAME_I_DEVICE) && (nic->pdev->revision < 4))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1320) 		writeq(PCC_ENABLE_FOUR, &bar0->pcc_enable);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1321) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1322) 	val64 = readq(&bar0->tx_fifo_partition_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1323) 	DBG_PRINT(INIT_DBG, "Fifo partition at: 0x%p is: 0x%llx\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1324) 		  &bar0->tx_fifo_partition_0, (unsigned long long)val64);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1325) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1326) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1327) 	 * Initialization of Tx_PA_CONFIG register to ignore packet
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1328) 	 * integrity checking.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1329) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1330) 	val64 = readq(&bar0->tx_pa_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1331) 	val64 |= TX_PA_CFG_IGNORE_FRM_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1332) 		TX_PA_CFG_IGNORE_SNAP_OUI |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1333) 		TX_PA_CFG_IGNORE_LLC_CTRL |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1334) 		TX_PA_CFG_IGNORE_L2_ERR;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1335) 	writeq(val64, &bar0->tx_pa_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1336) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1337) 	/* Rx DMA initialization. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1338) 	val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1339) 	for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1340) 		struct rx_ring_config *rx_cfg = &config->rx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1341) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1342) 		val64 |= vBIT(rx_cfg->ring_priority, (5 + (i * 8)), 3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1343) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1344) 	writeq(val64, &bar0->rx_queue_priority);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1345) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1346) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1347) 	 * Allocating equal share of memory to all the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1348) 	 * configured Rings.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1349) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1350) 	val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1351) 	if (nic->device_type & XFRAME_II_DEVICE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1352) 		mem_size = 32;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1353) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1354) 		mem_size = 64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1355) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1356) 	for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1357) 		switch (i) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1358) 		case 0:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1359) 			mem_share = (mem_size / config->rx_ring_num +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1360) 				     mem_size % config->rx_ring_num);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1361) 			val64 |= RX_QUEUE_CFG_Q0_SZ(mem_share);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1362) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1363) 		case 1:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1364) 			mem_share = (mem_size / config->rx_ring_num);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1365) 			val64 |= RX_QUEUE_CFG_Q1_SZ(mem_share);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1366) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1367) 		case 2:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1368) 			mem_share = (mem_size / config->rx_ring_num);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1369) 			val64 |= RX_QUEUE_CFG_Q2_SZ(mem_share);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1370) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1371) 		case 3:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1372) 			mem_share = (mem_size / config->rx_ring_num);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1373) 			val64 |= RX_QUEUE_CFG_Q3_SZ(mem_share);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1374) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1375) 		case 4:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1376) 			mem_share = (mem_size / config->rx_ring_num);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1377) 			val64 |= RX_QUEUE_CFG_Q4_SZ(mem_share);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1378) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1379) 		case 5:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1380) 			mem_share = (mem_size / config->rx_ring_num);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1381) 			val64 |= RX_QUEUE_CFG_Q5_SZ(mem_share);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1382) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1383) 		case 6:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1384) 			mem_share = (mem_size / config->rx_ring_num);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1385) 			val64 |= RX_QUEUE_CFG_Q6_SZ(mem_share);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1386) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1387) 		case 7:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1388) 			mem_share = (mem_size / config->rx_ring_num);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1389) 			val64 |= RX_QUEUE_CFG_Q7_SZ(mem_share);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1390) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1391) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1392) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1393) 	writeq(val64, &bar0->rx_queue_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1394) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1395) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1396) 	 * Filling Tx round robin registers
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1397) 	 * as per the number of FIFOs for equal scheduling priority
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1398) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1399) 	switch (config->tx_fifo_num) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1400) 	case 1:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1401) 		val64 = 0x0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1402) 		writeq(val64, &bar0->tx_w_round_robin_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1403) 		writeq(val64, &bar0->tx_w_round_robin_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1404) 		writeq(val64, &bar0->tx_w_round_robin_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1405) 		writeq(val64, &bar0->tx_w_round_robin_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1406) 		writeq(val64, &bar0->tx_w_round_robin_4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1407) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1408) 	case 2:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1409) 		val64 = 0x0001000100010001ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1410) 		writeq(val64, &bar0->tx_w_round_robin_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1411) 		writeq(val64, &bar0->tx_w_round_robin_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1412) 		writeq(val64, &bar0->tx_w_round_robin_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1413) 		writeq(val64, &bar0->tx_w_round_robin_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1414) 		val64 = 0x0001000100000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1415) 		writeq(val64, &bar0->tx_w_round_robin_4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1416) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1417) 	case 3:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1418) 		val64 = 0x0001020001020001ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1419) 		writeq(val64, &bar0->tx_w_round_robin_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1420) 		val64 = 0x0200010200010200ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1421) 		writeq(val64, &bar0->tx_w_round_robin_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1422) 		val64 = 0x0102000102000102ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1423) 		writeq(val64, &bar0->tx_w_round_robin_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1424) 		val64 = 0x0001020001020001ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1425) 		writeq(val64, &bar0->tx_w_round_robin_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1426) 		val64 = 0x0200010200000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1427) 		writeq(val64, &bar0->tx_w_round_robin_4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1428) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1429) 	case 4:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1430) 		val64 = 0x0001020300010203ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1431) 		writeq(val64, &bar0->tx_w_round_robin_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1432) 		writeq(val64, &bar0->tx_w_round_robin_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1433) 		writeq(val64, &bar0->tx_w_round_robin_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1434) 		writeq(val64, &bar0->tx_w_round_robin_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1435) 		val64 = 0x0001020300000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1436) 		writeq(val64, &bar0->tx_w_round_robin_4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1437) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1438) 	case 5:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1439) 		val64 = 0x0001020304000102ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1440) 		writeq(val64, &bar0->tx_w_round_robin_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1441) 		val64 = 0x0304000102030400ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1442) 		writeq(val64, &bar0->tx_w_round_robin_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1443) 		val64 = 0x0102030400010203ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1444) 		writeq(val64, &bar0->tx_w_round_robin_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1445) 		val64 = 0x0400010203040001ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1446) 		writeq(val64, &bar0->tx_w_round_robin_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1447) 		val64 = 0x0203040000000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1448) 		writeq(val64, &bar0->tx_w_round_robin_4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1449) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1450) 	case 6:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1451) 		val64 = 0x0001020304050001ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1452) 		writeq(val64, &bar0->tx_w_round_robin_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1453) 		val64 = 0x0203040500010203ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1454) 		writeq(val64, &bar0->tx_w_round_robin_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1455) 		val64 = 0x0405000102030405ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1456) 		writeq(val64, &bar0->tx_w_round_robin_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1457) 		val64 = 0x0001020304050001ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1458) 		writeq(val64, &bar0->tx_w_round_robin_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1459) 		val64 = 0x0203040500000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1460) 		writeq(val64, &bar0->tx_w_round_robin_4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1461) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1462) 	case 7:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1463) 		val64 = 0x0001020304050600ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1464) 		writeq(val64, &bar0->tx_w_round_robin_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1465) 		val64 = 0x0102030405060001ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1466) 		writeq(val64, &bar0->tx_w_round_robin_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1467) 		val64 = 0x0203040506000102ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1468) 		writeq(val64, &bar0->tx_w_round_robin_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1469) 		val64 = 0x0304050600010203ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1470) 		writeq(val64, &bar0->tx_w_round_robin_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1471) 		val64 = 0x0405060000000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1472) 		writeq(val64, &bar0->tx_w_round_robin_4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1473) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1474) 	case 8:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1475) 		val64 = 0x0001020304050607ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1476) 		writeq(val64, &bar0->tx_w_round_robin_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1477) 		writeq(val64, &bar0->tx_w_round_robin_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1478) 		writeq(val64, &bar0->tx_w_round_robin_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1479) 		writeq(val64, &bar0->tx_w_round_robin_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1480) 		val64 = 0x0001020300000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1481) 		writeq(val64, &bar0->tx_w_round_robin_4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1482) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1483) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1484) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1485) 	/* Enable all configured Tx FIFO partitions */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1486) 	val64 = readq(&bar0->tx_fifo_partition_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1487) 	val64 |= (TX_FIFO_PARTITION_EN);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1488) 	writeq(val64, &bar0->tx_fifo_partition_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1489) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1490) 	/* Filling the Rx round robin registers as per the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1491) 	 * number of Rings and steering based on QoS with
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1492) 	 * equal priority.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1493) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1494) 	switch (config->rx_ring_num) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1495) 	case 1:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1496) 		val64 = 0x0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1497) 		writeq(val64, &bar0->rx_w_round_robin_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1498) 		writeq(val64, &bar0->rx_w_round_robin_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1499) 		writeq(val64, &bar0->rx_w_round_robin_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1500) 		writeq(val64, &bar0->rx_w_round_robin_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1501) 		writeq(val64, &bar0->rx_w_round_robin_4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1502) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1503) 		val64 = 0x8080808080808080ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1504) 		writeq(val64, &bar0->rts_qos_steering);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1505) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1506) 	case 2:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1507) 		val64 = 0x0001000100010001ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1508) 		writeq(val64, &bar0->rx_w_round_robin_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1509) 		writeq(val64, &bar0->rx_w_round_robin_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1510) 		writeq(val64, &bar0->rx_w_round_robin_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1511) 		writeq(val64, &bar0->rx_w_round_robin_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1512) 		val64 = 0x0001000100000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1513) 		writeq(val64, &bar0->rx_w_round_robin_4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1514) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1515) 		val64 = 0x8080808040404040ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1516) 		writeq(val64, &bar0->rts_qos_steering);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1517) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1518) 	case 3:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1519) 		val64 = 0x0001020001020001ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1520) 		writeq(val64, &bar0->rx_w_round_robin_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1521) 		val64 = 0x0200010200010200ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1522) 		writeq(val64, &bar0->rx_w_round_robin_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1523) 		val64 = 0x0102000102000102ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1524) 		writeq(val64, &bar0->rx_w_round_robin_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1525) 		val64 = 0x0001020001020001ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1526) 		writeq(val64, &bar0->rx_w_round_robin_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1527) 		val64 = 0x0200010200000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1528) 		writeq(val64, &bar0->rx_w_round_robin_4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1529) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1530) 		val64 = 0x8080804040402020ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1531) 		writeq(val64, &bar0->rts_qos_steering);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1532) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1533) 	case 4:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1534) 		val64 = 0x0001020300010203ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1535) 		writeq(val64, &bar0->rx_w_round_robin_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1536) 		writeq(val64, &bar0->rx_w_round_robin_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1537) 		writeq(val64, &bar0->rx_w_round_robin_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1538) 		writeq(val64, &bar0->rx_w_round_robin_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1539) 		val64 = 0x0001020300000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1540) 		writeq(val64, &bar0->rx_w_round_robin_4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1541) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1542) 		val64 = 0x8080404020201010ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1543) 		writeq(val64, &bar0->rts_qos_steering);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1544) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1545) 	case 5:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1546) 		val64 = 0x0001020304000102ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1547) 		writeq(val64, &bar0->rx_w_round_robin_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1548) 		val64 = 0x0304000102030400ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1549) 		writeq(val64, &bar0->rx_w_round_robin_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1550) 		val64 = 0x0102030400010203ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1551) 		writeq(val64, &bar0->rx_w_round_robin_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1552) 		val64 = 0x0400010203040001ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1553) 		writeq(val64, &bar0->rx_w_round_robin_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1554) 		val64 = 0x0203040000000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1555) 		writeq(val64, &bar0->rx_w_round_robin_4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1556) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1557) 		val64 = 0x8080404020201008ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1558) 		writeq(val64, &bar0->rts_qos_steering);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1559) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1560) 	case 6:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1561) 		val64 = 0x0001020304050001ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1562) 		writeq(val64, &bar0->rx_w_round_robin_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1563) 		val64 = 0x0203040500010203ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1564) 		writeq(val64, &bar0->rx_w_round_robin_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1565) 		val64 = 0x0405000102030405ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1566) 		writeq(val64, &bar0->rx_w_round_robin_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1567) 		val64 = 0x0001020304050001ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1568) 		writeq(val64, &bar0->rx_w_round_robin_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1569) 		val64 = 0x0203040500000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1570) 		writeq(val64, &bar0->rx_w_round_robin_4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1571) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1572) 		val64 = 0x8080404020100804ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1573) 		writeq(val64, &bar0->rts_qos_steering);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1574) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1575) 	case 7:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1576) 		val64 = 0x0001020304050600ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1577) 		writeq(val64, &bar0->rx_w_round_robin_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1578) 		val64 = 0x0102030405060001ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1579) 		writeq(val64, &bar0->rx_w_round_robin_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1580) 		val64 = 0x0203040506000102ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1581) 		writeq(val64, &bar0->rx_w_round_robin_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1582) 		val64 = 0x0304050600010203ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1583) 		writeq(val64, &bar0->rx_w_round_robin_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1584) 		val64 = 0x0405060000000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1585) 		writeq(val64, &bar0->rx_w_round_robin_4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1586) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1587) 		val64 = 0x8080402010080402ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1588) 		writeq(val64, &bar0->rts_qos_steering);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1589) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1590) 	case 8:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1591) 		val64 = 0x0001020304050607ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1592) 		writeq(val64, &bar0->rx_w_round_robin_0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1593) 		writeq(val64, &bar0->rx_w_round_robin_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1594) 		writeq(val64, &bar0->rx_w_round_robin_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1595) 		writeq(val64, &bar0->rx_w_round_robin_3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1596) 		val64 = 0x0001020300000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1597) 		writeq(val64, &bar0->rx_w_round_robin_4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1598) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1599) 		val64 = 0x8040201008040201ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1600) 		writeq(val64, &bar0->rts_qos_steering);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1601) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1602) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1603) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1604) 	/* UDP Fix */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1605) 	val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1606) 	for (i = 0; i < 8; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1607) 		writeq(val64, &bar0->rts_frm_len_n[i]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1608) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1609) 	/* Set the default rts frame length for the rings configured */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1610) 	val64 = MAC_RTS_FRM_LEN_SET(dev->mtu+22);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1611) 	for (i = 0 ; i < config->rx_ring_num ; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1612) 		writeq(val64, &bar0->rts_frm_len_n[i]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1613) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1614) 	/* Set the frame length for the configured rings
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1615) 	 * desired by the user
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1616) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1617) 	for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1618) 		/* If rts_frm_len[i] == 0 then it is assumed that user not
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1619) 		 * specified frame length steering.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1620) 		 * If the user provides the frame length then program
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1621) 		 * the rts_frm_len register for those values or else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1622) 		 * leave it as it is.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1623) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1624) 		if (rts_frm_len[i] != 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1625) 			writeq(MAC_RTS_FRM_LEN_SET(rts_frm_len[i]),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1626) 			       &bar0->rts_frm_len_n[i]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1627) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1628) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1629) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1630) 	/* Disable differentiated services steering logic */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1631) 	for (i = 0; i < 64; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1632) 		if (rts_ds_steer(nic, i, 0) == FAILURE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1633) 			DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1634) 				  "%s: rts_ds_steer failed on codepoint %d\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1635) 				  dev->name, i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1636) 			return -ENODEV;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1637) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1638) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1639) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1640) 	/* Program statistics memory */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1641) 	writeq(mac_control->stats_mem_phy, &bar0->stat_addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1642) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1643) 	if (nic->device_type == XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1644) 		val64 = STAT_BC(0x320);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1645) 		writeq(val64, &bar0->stat_byte_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1646) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1647) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1648) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1649) 	 * Initializing the sampling rate for the device to calculate the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1650) 	 * bandwidth utilization.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1651) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1652) 	val64 = MAC_TX_LINK_UTIL_VAL(tmac_util_period) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1653) 		MAC_RX_LINK_UTIL_VAL(rmac_util_period);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1654) 	writeq(val64, &bar0->mac_link_util);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1655) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1656) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1657) 	 * Initializing the Transmit and Receive Traffic Interrupt
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1658) 	 * Scheme.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1659) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1660) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1661) 	/* Initialize TTI */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1662) 	if (SUCCESS != init_tti(nic, nic->last_link_state))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1663) 		return -ENODEV;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1664) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1665) 	/* RTI Initialization */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1666) 	if (nic->device_type == XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1667) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1668) 		 * Programmed to generate Apprx 500 Intrs per
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1669) 		 * second
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1670) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1671) 		int count = (nic->config.bus_speed * 125)/4;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1672) 		val64 = RTI_DATA1_MEM_RX_TIMER_VAL(count);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1673) 	} else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1674) 		val64 = RTI_DATA1_MEM_RX_TIMER_VAL(0xFFF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1675) 	val64 |= RTI_DATA1_MEM_RX_URNG_A(0xA) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1676) 		RTI_DATA1_MEM_RX_URNG_B(0x10) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1677) 		RTI_DATA1_MEM_RX_URNG_C(0x30) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1678) 		RTI_DATA1_MEM_RX_TIMER_AC_EN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1679) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1680) 	writeq(val64, &bar0->rti_data1_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1681) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1682) 	val64 = RTI_DATA2_MEM_RX_UFC_A(0x1) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1683) 		RTI_DATA2_MEM_RX_UFC_B(0x2) ;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1684) 	if (nic->config.intr_type == MSI_X)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1685) 		val64 |= (RTI_DATA2_MEM_RX_UFC_C(0x20) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1686) 			  RTI_DATA2_MEM_RX_UFC_D(0x40));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1687) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1688) 		val64 |= (RTI_DATA2_MEM_RX_UFC_C(0x40) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1689) 			  RTI_DATA2_MEM_RX_UFC_D(0x80));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1690) 	writeq(val64, &bar0->rti_data2_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1691) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1692) 	for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1693) 		val64 = RTI_CMD_MEM_WE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1694) 			RTI_CMD_MEM_STROBE_NEW_CMD |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1695) 			RTI_CMD_MEM_OFFSET(i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1696) 		writeq(val64, &bar0->rti_command_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1697) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1698) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1699) 		 * Once the operation completes, the Strobe bit of the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1700) 		 * command register will be reset. We poll for this
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1701) 		 * particular condition. We wait for a maximum of 500ms
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1702) 		 * for the operation to complete, if it's not complete
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1703) 		 * by then we return error.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1704) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1705) 		time = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1706) 		while (true) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1707) 			val64 = readq(&bar0->rti_command_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1708) 			if (!(val64 & RTI_CMD_MEM_STROBE_NEW_CMD))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1709) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1710) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1711) 			if (time > 10) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1712) 				DBG_PRINT(ERR_DBG, "%s: RTI init failed\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1713) 					  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1714) 				return -ENODEV;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1715) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1716) 			time++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1717) 			msleep(50);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1718) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1719) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1720) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1721) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1722) 	 * Initializing proper values as Pause threshold into all
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1723) 	 * the 8 Queues on Rx side.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1724) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1725) 	writeq(0xffbbffbbffbbffbbULL, &bar0->mc_pause_thresh_q0q3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1726) 	writeq(0xffbbffbbffbbffbbULL, &bar0->mc_pause_thresh_q4q7);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1727) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1728) 	/* Disable RMAC PAD STRIPPING */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1729) 	add = &bar0->mac_cfg;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1730) 	val64 = readq(&bar0->mac_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1731) 	val64 &= ~(MAC_CFG_RMAC_STRIP_PAD);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1732) 	writeq(RMAC_CFG_KEY(0x4C0D), &bar0->rmac_cfg_key);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1733) 	writel((u32) (val64), add);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1734) 	writeq(RMAC_CFG_KEY(0x4C0D), &bar0->rmac_cfg_key);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1735) 	writel((u32) (val64 >> 32), (add + 4));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1736) 	val64 = readq(&bar0->mac_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1737) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1738) 	/* Enable FCS stripping by adapter */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1739) 	add = &bar0->mac_cfg;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1740) 	val64 = readq(&bar0->mac_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1741) 	val64 |= MAC_CFG_RMAC_STRIP_FCS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1742) 	if (nic->device_type == XFRAME_II_DEVICE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1743) 		writeq(val64, &bar0->mac_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1744) 	else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1745) 		writeq(RMAC_CFG_KEY(0x4C0D), &bar0->rmac_cfg_key);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1746) 		writel((u32) (val64), add);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1747) 		writeq(RMAC_CFG_KEY(0x4C0D), &bar0->rmac_cfg_key);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1748) 		writel((u32) (val64 >> 32), (add + 4));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1749) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1750) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1751) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1752) 	 * Set the time value to be inserted in the pause frame
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1753) 	 * generated by xena.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1754) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1755) 	val64 = readq(&bar0->rmac_pause_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1756) 	val64 &= ~(RMAC_PAUSE_HG_PTIME(0xffff));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1757) 	val64 |= RMAC_PAUSE_HG_PTIME(nic->mac_control.rmac_pause_time);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1758) 	writeq(val64, &bar0->rmac_pause_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1759) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1760) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1761) 	 * Set the Threshold Limit for Generating the pause frame
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1762) 	 * If the amount of data in any Queue exceeds ratio of
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1763) 	 * (mac_control.mc_pause_threshold_q0q3 or q4q7)/256
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1764) 	 * pause frame is generated
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1765) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1766) 	val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1767) 	for (i = 0; i < 4; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1768) 		val64 |= (((u64)0xFF00 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1769) 			   nic->mac_control.mc_pause_threshold_q0q3)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1770) 			  << (i * 2 * 8));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1771) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1772) 	writeq(val64, &bar0->mc_pause_thresh_q0q3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1773) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1774) 	val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1775) 	for (i = 0; i < 4; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1776) 		val64 |= (((u64)0xFF00 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1777) 			   nic->mac_control.mc_pause_threshold_q4q7)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1778) 			  << (i * 2 * 8));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1779) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1780) 	writeq(val64, &bar0->mc_pause_thresh_q4q7);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1781) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1782) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1783) 	 * TxDMA will stop Read request if the number of read split has
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1784) 	 * exceeded the limit pointed by shared_splits
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1785) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1786) 	val64 = readq(&bar0->pic_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1787) 	val64 |= PIC_CNTL_SHARED_SPLITS(shared_splits);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1788) 	writeq(val64, &bar0->pic_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1789) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1790) 	if (nic->config.bus_speed == 266) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1791) 		writeq(TXREQTO_VAL(0x7f) | TXREQTO_EN, &bar0->txreqtimeout);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1792) 		writeq(0x0, &bar0->read_retry_delay);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1793) 		writeq(0x0, &bar0->write_retry_delay);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1794) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1795) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1796) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1797) 	 * Programming the Herc to split every write transaction
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1798) 	 * that does not start on an ADB to reduce disconnects.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1799) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1800) 	if (nic->device_type == XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1801) 		val64 = FAULT_BEHAVIOUR | EXT_REQ_EN |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1802) 			MISC_LINK_STABILITY_PRD(3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1803) 		writeq(val64, &bar0->misc_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1804) 		val64 = readq(&bar0->pic_control2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1805) 		val64 &= ~(s2BIT(13)|s2BIT(14)|s2BIT(15));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1806) 		writeq(val64, &bar0->pic_control2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1807) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1808) 	if (strstr(nic->product_name, "CX4")) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1809) 		val64 = TMAC_AVG_IPG(0x17);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1810) 		writeq(val64, &bar0->tmac_avg_ipg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1811) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1812) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1813) 	return SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1814) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1815) #define LINK_UP_DOWN_INTERRUPT		1
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1816) #define MAC_RMAC_ERR_TIMER		2
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1817) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1818) static int s2io_link_fault_indication(struct s2io_nic *nic)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1819) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1820) 	if (nic->device_type == XFRAME_II_DEVICE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1821) 		return LINK_UP_DOWN_INTERRUPT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1822) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1823) 		return MAC_RMAC_ERR_TIMER;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1824) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1825) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1826) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1827)  *  do_s2io_write_bits -  update alarm bits in alarm register
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1828)  *  @value: alarm bits
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1829)  *  @flag: interrupt status
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1830)  *  @addr: address value
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1831)  *  Description: update alarm bits in alarm register
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1832)  *  Return Value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1833)  *  NONE.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1834)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1835) static void do_s2io_write_bits(u64 value, int flag, void __iomem *addr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1836) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1837) 	u64 temp64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1838) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1839) 	temp64 = readq(addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1840) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1841) 	if (flag == ENABLE_INTRS)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1842) 		temp64 &= ~((u64)value);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1843) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1844) 		temp64 |= ((u64)value);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1845) 	writeq(temp64, addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1846) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1847) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1848) static void en_dis_err_alarms(struct s2io_nic *nic, u16 mask, int flag)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1849) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1850) 	struct XENA_dev_config __iomem *bar0 = nic->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1851) 	register u64 gen_int_mask = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1852) 	u64 interruptible;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1853) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1854) 	writeq(DISABLE_ALL_INTRS, &bar0->general_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1855) 	if (mask & TX_DMA_INTR) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1856) 		gen_int_mask |= TXDMA_INT_M;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1857) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1858) 		do_s2io_write_bits(TXDMA_TDA_INT | TXDMA_PFC_INT |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1859) 				   TXDMA_PCC_INT | TXDMA_TTI_INT |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1860) 				   TXDMA_LSO_INT | TXDMA_TPA_INT |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1861) 				   TXDMA_SM_INT, flag, &bar0->txdma_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1862) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1863) 		do_s2io_write_bits(PFC_ECC_DB_ERR | PFC_SM_ERR_ALARM |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1864) 				   PFC_MISC_0_ERR | PFC_MISC_1_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1865) 				   PFC_PCIX_ERR | PFC_ECC_SG_ERR, flag,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1866) 				   &bar0->pfc_err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1867) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1868) 		do_s2io_write_bits(TDA_Fn_ECC_DB_ERR | TDA_SM0_ERR_ALARM |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1869) 				   TDA_SM1_ERR_ALARM | TDA_Fn_ECC_SG_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1870) 				   TDA_PCIX_ERR, flag, &bar0->tda_err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1871) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1872) 		do_s2io_write_bits(PCC_FB_ECC_DB_ERR | PCC_TXB_ECC_DB_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1873) 				   PCC_SM_ERR_ALARM | PCC_WR_ERR_ALARM |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1874) 				   PCC_N_SERR | PCC_6_COF_OV_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1875) 				   PCC_7_COF_OV_ERR | PCC_6_LSO_OV_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1876) 				   PCC_7_LSO_OV_ERR | PCC_FB_ECC_SG_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1877) 				   PCC_TXB_ECC_SG_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1878) 				   flag, &bar0->pcc_err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1879) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1880) 		do_s2io_write_bits(TTI_SM_ERR_ALARM | TTI_ECC_SG_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1881) 				   TTI_ECC_DB_ERR, flag, &bar0->tti_err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1882) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1883) 		do_s2io_write_bits(LSO6_ABORT | LSO7_ABORT |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1884) 				   LSO6_SM_ERR_ALARM | LSO7_SM_ERR_ALARM |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1885) 				   LSO6_SEND_OFLOW | LSO7_SEND_OFLOW,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1886) 				   flag, &bar0->lso_err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1887) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1888) 		do_s2io_write_bits(TPA_SM_ERR_ALARM | TPA_TX_FRM_DROP,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1889) 				   flag, &bar0->tpa_err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1890) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1891) 		do_s2io_write_bits(SM_SM_ERR_ALARM, flag, &bar0->sm_err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1892) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1893) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1894) 	if (mask & TX_MAC_INTR) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1895) 		gen_int_mask |= TXMAC_INT_M;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1896) 		do_s2io_write_bits(MAC_INT_STATUS_TMAC_INT, flag,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1897) 				   &bar0->mac_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1898) 		do_s2io_write_bits(TMAC_TX_BUF_OVRN | TMAC_TX_SM_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1899) 				   TMAC_ECC_SG_ERR | TMAC_ECC_DB_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1900) 				   TMAC_DESC_ECC_SG_ERR | TMAC_DESC_ECC_DB_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1901) 				   flag, &bar0->mac_tmac_err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1902) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1903) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1904) 	if (mask & TX_XGXS_INTR) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1905) 		gen_int_mask |= TXXGXS_INT_M;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1906) 		do_s2io_write_bits(XGXS_INT_STATUS_TXGXS, flag,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1907) 				   &bar0->xgxs_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1908) 		do_s2io_write_bits(TXGXS_ESTORE_UFLOW | TXGXS_TX_SM_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1909) 				   TXGXS_ECC_SG_ERR | TXGXS_ECC_DB_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1910) 				   flag, &bar0->xgxs_txgxs_err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1911) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1912) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1913) 	if (mask & RX_DMA_INTR) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1914) 		gen_int_mask |= RXDMA_INT_M;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1915) 		do_s2io_write_bits(RXDMA_INT_RC_INT_M | RXDMA_INT_RPA_INT_M |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1916) 				   RXDMA_INT_RDA_INT_M | RXDMA_INT_RTI_INT_M,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1917) 				   flag, &bar0->rxdma_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1918) 		do_s2io_write_bits(RC_PRCn_ECC_DB_ERR | RC_FTC_ECC_DB_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1919) 				   RC_PRCn_SM_ERR_ALARM | RC_FTC_SM_ERR_ALARM |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1920) 				   RC_PRCn_ECC_SG_ERR | RC_FTC_ECC_SG_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1921) 				   RC_RDA_FAIL_WR_Rn, flag, &bar0->rc_err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1922) 		do_s2io_write_bits(PRC_PCI_AB_RD_Rn | PRC_PCI_AB_WR_Rn |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1923) 				   PRC_PCI_AB_F_WR_Rn | PRC_PCI_DP_RD_Rn |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1924) 				   PRC_PCI_DP_WR_Rn | PRC_PCI_DP_F_WR_Rn, flag,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1925) 				   &bar0->prc_pcix_err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1926) 		do_s2io_write_bits(RPA_SM_ERR_ALARM | RPA_CREDIT_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1927) 				   RPA_ECC_SG_ERR | RPA_ECC_DB_ERR, flag,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1928) 				   &bar0->rpa_err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1929) 		do_s2io_write_bits(RDA_RXDn_ECC_DB_ERR | RDA_FRM_ECC_DB_N_AERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1930) 				   RDA_SM1_ERR_ALARM | RDA_SM0_ERR_ALARM |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1931) 				   RDA_RXD_ECC_DB_SERR | RDA_RXDn_ECC_SG_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1932) 				   RDA_FRM_ECC_SG_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1933) 				   RDA_MISC_ERR|RDA_PCIX_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1934) 				   flag, &bar0->rda_err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1935) 		do_s2io_write_bits(RTI_SM_ERR_ALARM |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1936) 				   RTI_ECC_SG_ERR | RTI_ECC_DB_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1937) 				   flag, &bar0->rti_err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1938) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1939) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1940) 	if (mask & RX_MAC_INTR) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1941) 		gen_int_mask |= RXMAC_INT_M;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1942) 		do_s2io_write_bits(MAC_INT_STATUS_RMAC_INT, flag,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1943) 				   &bar0->mac_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1944) 		interruptible = (RMAC_RX_BUFF_OVRN | RMAC_RX_SM_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1945) 				 RMAC_UNUSED_INT | RMAC_SINGLE_ECC_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1946) 				 RMAC_DOUBLE_ECC_ERR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1947) 		if (s2io_link_fault_indication(nic) == MAC_RMAC_ERR_TIMER)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1948) 			interruptible |= RMAC_LINK_STATE_CHANGE_INT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1949) 		do_s2io_write_bits(interruptible,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1950) 				   flag, &bar0->mac_rmac_err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1951) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1952) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1953) 	if (mask & RX_XGXS_INTR) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1954) 		gen_int_mask |= RXXGXS_INT_M;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1955) 		do_s2io_write_bits(XGXS_INT_STATUS_RXGXS, flag,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1956) 				   &bar0->xgxs_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1957) 		do_s2io_write_bits(RXGXS_ESTORE_OFLOW | RXGXS_RX_SM_ERR, flag,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1958) 				   &bar0->xgxs_rxgxs_err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1959) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1960) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1961) 	if (mask & MC_INTR) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1962) 		gen_int_mask |= MC_INT_M;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1963) 		do_s2io_write_bits(MC_INT_MASK_MC_INT,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1964) 				   flag, &bar0->mc_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1965) 		do_s2io_write_bits(MC_ERR_REG_SM_ERR | MC_ERR_REG_ECC_ALL_SNG |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1966) 				   MC_ERR_REG_ECC_ALL_DBL | PLL_LOCK_N, flag,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1967) 				   &bar0->mc_err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1968) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1969) 	nic->general_int_mask = gen_int_mask;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1970) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1971) 	/* Remove this line when alarm interrupts are enabled */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1972) 	nic->general_int_mask = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1973) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1974) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1975) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1976)  *  en_dis_able_nic_intrs - Enable or Disable the interrupts
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1977)  *  @nic: device private variable,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1978)  *  @mask: A mask indicating which Intr block must be modified and,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1979)  *  @flag: A flag indicating whether to enable or disable the Intrs.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1980)  *  Description: This function will either disable or enable the interrupts
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1981)  *  depending on the flag argument. The mask argument can be used to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1982)  *  enable/disable any Intr block.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1983)  *  Return Value: NONE.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1984)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1985) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1986) static void en_dis_able_nic_intrs(struct s2io_nic *nic, u16 mask, int flag)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1987) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1988) 	struct XENA_dev_config __iomem *bar0 = nic->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1989) 	register u64 temp64 = 0, intr_mask = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1990) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1991) 	intr_mask = nic->general_int_mask;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1992) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1993) 	/*  Top level interrupt classification */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1994) 	/*  PIC Interrupts */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1995) 	if (mask & TX_PIC_INTR) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1996) 		/*  Enable PIC Intrs in the general intr mask register */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1997) 		intr_mask |= TXPIC_INT_M;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1998) 		if (flag == ENABLE_INTRS) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1999) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2000) 			 * If Hercules adapter enable GPIO otherwise
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2001) 			 * disable all PCIX, Flash, MDIO, IIC and GPIO
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2002) 			 * interrupts for now.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2003) 			 * TODO
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2004) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2005) 			if (s2io_link_fault_indication(nic) ==
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2006) 			    LINK_UP_DOWN_INTERRUPT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2007) 				do_s2io_write_bits(PIC_INT_GPIO, flag,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2008) 						   &bar0->pic_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2009) 				do_s2io_write_bits(GPIO_INT_MASK_LINK_UP, flag,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2010) 						   &bar0->gpio_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2011) 			} else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2012) 				writeq(DISABLE_ALL_INTRS, &bar0->pic_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2013) 		} else if (flag == DISABLE_INTRS) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2014) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2015) 			 * Disable PIC Intrs in the general
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2016) 			 * intr mask register
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2017) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2018) 			writeq(DISABLE_ALL_INTRS, &bar0->pic_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2019) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2020) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2021) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2022) 	/*  Tx traffic interrupts */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2023) 	if (mask & TX_TRAFFIC_INTR) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2024) 		intr_mask |= TXTRAFFIC_INT_M;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2025) 		if (flag == ENABLE_INTRS) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2026) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2027) 			 * Enable all the Tx side interrupts
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2028) 			 * writing 0 Enables all 64 TX interrupt levels
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2029) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2030) 			writeq(0x0, &bar0->tx_traffic_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2031) 		} else if (flag == DISABLE_INTRS) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2032) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2033) 			 * Disable Tx Traffic Intrs in the general intr mask
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2034) 			 * register.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2035) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2036) 			writeq(DISABLE_ALL_INTRS, &bar0->tx_traffic_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2037) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2038) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2039) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2040) 	/*  Rx traffic interrupts */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2041) 	if (mask & RX_TRAFFIC_INTR) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2042) 		intr_mask |= RXTRAFFIC_INT_M;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2043) 		if (flag == ENABLE_INTRS) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2044) 			/* writing 0 Enables all 8 RX interrupt levels */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2045) 			writeq(0x0, &bar0->rx_traffic_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2046) 		} else if (flag == DISABLE_INTRS) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2047) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2048) 			 * Disable Rx Traffic Intrs in the general intr mask
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2049) 			 * register.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2050) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2051) 			writeq(DISABLE_ALL_INTRS, &bar0->rx_traffic_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2052) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2053) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2054) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2055) 	temp64 = readq(&bar0->general_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2056) 	if (flag == ENABLE_INTRS)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2057) 		temp64 &= ~((u64)intr_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2058) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2059) 		temp64 = DISABLE_ALL_INTRS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2060) 	writeq(temp64, &bar0->general_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2061) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2062) 	nic->general_int_mask = readq(&bar0->general_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2063) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2064) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2065) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2066)  *  verify_pcc_quiescent- Checks for PCC quiescent state
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2067)  *  @sp : private member of the device structure, which is a pointer to the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2068)  *  s2io_nic structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2069)  *  @flag: boolean controlling function path
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2070)  *  Return: 1 If PCC is quiescence
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2071)  *          0 If PCC is not quiescence
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2072)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2073) static int verify_pcc_quiescent(struct s2io_nic *sp, int flag)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2074) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2075) 	int ret = 0, herc;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2076) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2077) 	u64 val64 = readq(&bar0->adapter_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2078) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2079) 	herc = (sp->device_type == XFRAME_II_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2080) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2081) 	if (flag == false) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2082) 		if ((!herc && (sp->pdev->revision >= 4)) || herc) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2083) 			if (!(val64 & ADAPTER_STATUS_RMAC_PCC_IDLE))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2084) 				ret = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2085) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2086) 			if (!(val64 & ADAPTER_STATUS_RMAC_PCC_FOUR_IDLE))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2087) 				ret = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2088) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2089) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2090) 		if ((!herc && (sp->pdev->revision >= 4)) || herc) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2091) 			if (((val64 & ADAPTER_STATUS_RMAC_PCC_IDLE) ==
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2092) 			     ADAPTER_STATUS_RMAC_PCC_IDLE))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2093) 				ret = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2094) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2095) 			if (((val64 & ADAPTER_STATUS_RMAC_PCC_FOUR_IDLE) ==
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2096) 			     ADAPTER_STATUS_RMAC_PCC_FOUR_IDLE))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2097) 				ret = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2098) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2099) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2100) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2101) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2102) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2103) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2104)  *  verify_xena_quiescence - Checks whether the H/W is ready
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2105)  *  @sp : private member of the device structure, which is a pointer to the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2106)  *  s2io_nic structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2107)  *  Description: Returns whether the H/W is ready to go or not. Depending
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2108)  *  on whether adapter enable bit was written or not the comparison
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2109)  *  differs and the calling function passes the input argument flag to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2110)  *  indicate this.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2111)  *  Return: 1 If xena is quiescence
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2112)  *          0 If Xena is not quiescence
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2113)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2114) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2115) static int verify_xena_quiescence(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2116) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2117) 	int  mode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2118) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2119) 	u64 val64 = readq(&bar0->adapter_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2120) 	mode = s2io_verify_pci_mode(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2121) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2122) 	if (!(val64 & ADAPTER_STATUS_TDMA_READY)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2123) 		DBG_PRINT(ERR_DBG, "TDMA is not ready!\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2124) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2125) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2126) 	if (!(val64 & ADAPTER_STATUS_RDMA_READY)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2127) 		DBG_PRINT(ERR_DBG, "RDMA is not ready!\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2128) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2129) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2130) 	if (!(val64 & ADAPTER_STATUS_PFC_READY)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2131) 		DBG_PRINT(ERR_DBG, "PFC is not ready!\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2132) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2133) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2134) 	if (!(val64 & ADAPTER_STATUS_TMAC_BUF_EMPTY)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2135) 		DBG_PRINT(ERR_DBG, "TMAC BUF is not empty!\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2136) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2137) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2138) 	if (!(val64 & ADAPTER_STATUS_PIC_QUIESCENT)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2139) 		DBG_PRINT(ERR_DBG, "PIC is not QUIESCENT!\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2140) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2141) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2142) 	if (!(val64 & ADAPTER_STATUS_MC_DRAM_READY)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2143) 		DBG_PRINT(ERR_DBG, "MC_DRAM is not ready!\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2144) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2145) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2146) 	if (!(val64 & ADAPTER_STATUS_MC_QUEUES_READY)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2147) 		DBG_PRINT(ERR_DBG, "MC_QUEUES is not ready!\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2148) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2149) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2150) 	if (!(val64 & ADAPTER_STATUS_M_PLL_LOCK)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2151) 		DBG_PRINT(ERR_DBG, "M_PLL is not locked!\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2152) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2153) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2154) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2155) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2156) 	 * In PCI 33 mode, the P_PLL is not used, and therefore,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2157) 	 * the the P_PLL_LOCK bit in the adapter_status register will
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2158) 	 * not be asserted.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2159) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2160) 	if (!(val64 & ADAPTER_STATUS_P_PLL_LOCK) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2161) 	    sp->device_type == XFRAME_II_DEVICE &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2162) 	    mode != PCI_MODE_PCI_33) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2163) 		DBG_PRINT(ERR_DBG, "P_PLL is not locked!\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2164) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2165) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2166) 	if (!((val64 & ADAPTER_STATUS_RC_PRC_QUIESCENT) ==
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2167) 	      ADAPTER_STATUS_RC_PRC_QUIESCENT)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2168) 		DBG_PRINT(ERR_DBG, "RC_PRC is not QUIESCENT!\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2169) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2170) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2171) 	return 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2172) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2173) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2174) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2175)  * fix_mac_address -  Fix for Mac addr problem on Alpha platforms
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2176)  * @sp: Pointer to device specifc structure
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2177)  * Description :
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2178)  * New procedure to clear mac address reading  problems on Alpha platforms
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2179)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2180)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2181) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2182) static void fix_mac_address(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2183) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2184) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2185) 	int i = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2186) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2187) 	while (fix_mac[i] != END_SIGN) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2188) 		writeq(fix_mac[i++], &bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2189) 		udelay(10);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2190) 		(void) readq(&bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2191) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2192) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2193) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2194) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2195)  *  start_nic - Turns the device on
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2196)  *  @nic : device private variable.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2197)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2198)  *  This function actually turns the device on. Before this  function is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2199)  *  called,all Registers are configured from their reset states
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2200)  *  and shared memory is allocated but the NIC is still quiescent. On
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2201)  *  calling this function, the device interrupts are cleared and the NIC is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2202)  *  literally switched on by writing into the adapter control register.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2203)  *  Return Value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2204)  *  SUCCESS on success and -1 on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2205)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2206) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2207) static int start_nic(struct s2io_nic *nic)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2208) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2209) 	struct XENA_dev_config __iomem *bar0 = nic->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2210) 	struct net_device *dev = nic->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2211) 	register u64 val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2212) 	u16 subid, i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2213) 	struct config_param *config = &nic->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2214) 	struct mac_info *mac_control = &nic->mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2215) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2216) 	/*  PRC Initialization and configuration */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2217) 	for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2218) 		struct ring_info *ring = &mac_control->rings[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2219) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2220) 		writeq((u64)ring->rx_blocks[0].block_dma_addr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2221) 		       &bar0->prc_rxd0_n[i]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2222) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2223) 		val64 = readq(&bar0->prc_ctrl_n[i]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2224) 		if (nic->rxd_mode == RXD_MODE_1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2225) 			val64 |= PRC_CTRL_RC_ENABLED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2226) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2227) 			val64 |= PRC_CTRL_RC_ENABLED | PRC_CTRL_RING_MODE_3;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2228) 		if (nic->device_type == XFRAME_II_DEVICE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2229) 			val64 |= PRC_CTRL_GROUP_READS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2230) 		val64 &= ~PRC_CTRL_RXD_BACKOFF_INTERVAL(0xFFFFFF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2231) 		val64 |= PRC_CTRL_RXD_BACKOFF_INTERVAL(0x1000);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2232) 		writeq(val64, &bar0->prc_ctrl_n[i]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2233) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2234) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2235) 	if (nic->rxd_mode == RXD_MODE_3B) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2236) 		/* Enabling 2 buffer mode by writing into Rx_pa_cfg reg. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2237) 		val64 = readq(&bar0->rx_pa_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2238) 		val64 |= RX_PA_CFG_IGNORE_L2_ERR;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2239) 		writeq(val64, &bar0->rx_pa_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2240) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2241) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2242) 	if (vlan_tag_strip == 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2243) 		val64 = readq(&bar0->rx_pa_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2244) 		val64 &= ~RX_PA_CFG_STRIP_VLAN_TAG;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2245) 		writeq(val64, &bar0->rx_pa_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2246) 		nic->vlan_strip_flag = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2247) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2248) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2249) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2250) 	 * Enabling MC-RLDRAM. After enabling the device, we timeout
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2251) 	 * for around 100ms, which is approximately the time required
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2252) 	 * for the device to be ready for operation.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2253) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2254) 	val64 = readq(&bar0->mc_rldram_mrs);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2255) 	val64 |= MC_RLDRAM_QUEUE_SIZE_ENABLE | MC_RLDRAM_MRS_ENABLE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2256) 	SPECIAL_REG_WRITE(val64, &bar0->mc_rldram_mrs, UF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2257) 	val64 = readq(&bar0->mc_rldram_mrs);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2258) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2259) 	msleep(100);	/* Delay by around 100 ms. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2260) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2261) 	/* Enabling ECC Protection. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2262) 	val64 = readq(&bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2263) 	val64 &= ~ADAPTER_ECC_EN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2264) 	writeq(val64, &bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2265) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2266) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2267) 	 * Verify if the device is ready to be enabled, if so enable
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2268) 	 * it.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2269) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2270) 	val64 = readq(&bar0->adapter_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2271) 	if (!verify_xena_quiescence(nic)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2272) 		DBG_PRINT(ERR_DBG, "%s: device is not ready, "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2273) 			  "Adapter status reads: 0x%llx\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2274) 			  dev->name, (unsigned long long)val64);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2275) 		return FAILURE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2276) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2277) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2278) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2279) 	 * With some switches, link might be already up at this point.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2280) 	 * Because of this weird behavior, when we enable laser,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2281) 	 * we may not get link. We need to handle this. We cannot
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2282) 	 * figure out which switch is misbehaving. So we are forced to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2283) 	 * make a global change.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2284) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2285) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2286) 	/* Enabling Laser. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2287) 	val64 = readq(&bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2288) 	val64 |= ADAPTER_EOI_TX_ON;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2289) 	writeq(val64, &bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2290) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2291) 	if (s2io_link_fault_indication(nic) == MAC_RMAC_ERR_TIMER) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2292) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2293) 		 * Dont see link state interrupts initially on some switches,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2294) 		 * so directly scheduling the link state task here.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2295) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2296) 		schedule_work(&nic->set_link_task);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2297) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2298) 	/* SXE-002: Initialize link and activity LED */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2299) 	subid = nic->pdev->subsystem_device;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2300) 	if (((subid & 0xFF) >= 0x07) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2301) 	    (nic->device_type == XFRAME_I_DEVICE)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2302) 		val64 = readq(&bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2303) 		val64 |= 0x0000800000000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2304) 		writeq(val64, &bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2305) 		val64 = 0x0411040400000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2306) 		writeq(val64, (void __iomem *)bar0 + 0x2700);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2307) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2308) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2309) 	return SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2310) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2311) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2312)  * s2io_txdl_getskb - Get the skb from txdl, unmap and return skb
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2313)  * @fifo_data: fifo data pointer
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2314)  * @txdlp: descriptor
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2315)  * @get_off: unused
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2316)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2317) static struct sk_buff *s2io_txdl_getskb(struct fifo_info *fifo_data,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2318) 					struct TxD *txdlp, int get_off)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2319) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2320) 	struct s2io_nic *nic = fifo_data->nic;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2321) 	struct sk_buff *skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2322) 	struct TxD *txds;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2323) 	u16 j, frg_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2324) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2325) 	txds = txdlp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2326) 	if (txds->Host_Control == (u64)(long)fifo_data->ufo_in_band_v) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2327) 		dma_unmap_single(&nic->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2328) 				 (dma_addr_t)txds->Buffer_Pointer,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2329) 				 sizeof(u64), DMA_TO_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2330) 		txds++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2331) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2332) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2333) 	skb = (struct sk_buff *)((unsigned long)txds->Host_Control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2334) 	if (!skb) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2335) 		memset(txdlp, 0, (sizeof(struct TxD) * fifo_data->max_txds));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2336) 		return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2337) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2338) 	dma_unmap_single(&nic->pdev->dev, (dma_addr_t)txds->Buffer_Pointer,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2339) 			 skb_headlen(skb), DMA_TO_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2340) 	frg_cnt = skb_shinfo(skb)->nr_frags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2341) 	if (frg_cnt) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2342) 		txds++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2343) 		for (j = 0; j < frg_cnt; j++, txds++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2344) 			const skb_frag_t *frag = &skb_shinfo(skb)->frags[j];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2345) 			if (!txds->Buffer_Pointer)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2346) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2347) 			dma_unmap_page(&nic->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2348) 				       (dma_addr_t)txds->Buffer_Pointer,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2349) 				       skb_frag_size(frag), DMA_TO_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2350) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2351) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2352) 	memset(txdlp, 0, (sizeof(struct TxD) * fifo_data->max_txds));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2353) 	return skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2354) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2355) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2356) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2357)  *  free_tx_buffers - Free all queued Tx buffers
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2358)  *  @nic : device private variable.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2359)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2360)  *  Free all queued Tx buffers.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2361)  *  Return Value: void
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2362)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2363) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2364) static void free_tx_buffers(struct s2io_nic *nic)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2365) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2366) 	struct net_device *dev = nic->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2367) 	struct sk_buff *skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2368) 	struct TxD *txdp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2369) 	int i, j;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2370) 	int cnt = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2371) 	struct config_param *config = &nic->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2372) 	struct mac_info *mac_control = &nic->mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2373) 	struct stat_block *stats = mac_control->stats_info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2374) 	struct swStat *swstats = &stats->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2375) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2376) 	for (i = 0; i < config->tx_fifo_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2377) 		struct tx_fifo_config *tx_cfg = &config->tx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2378) 		struct fifo_info *fifo = &mac_control->fifos[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2379) 		unsigned long flags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2380) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2381) 		spin_lock_irqsave(&fifo->tx_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2382) 		for (j = 0; j < tx_cfg->fifo_len; j++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2383) 			txdp = fifo->list_info[j].list_virt_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2384) 			skb = s2io_txdl_getskb(&mac_control->fifos[i], txdp, j);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2385) 			if (skb) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2386) 				swstats->mem_freed += skb->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2387) 				dev_kfree_skb(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2388) 				cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2389) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2390) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2391) 		DBG_PRINT(INTR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2392) 			  "%s: forcibly freeing %d skbs on FIFO%d\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2393) 			  dev->name, cnt, i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2394) 		fifo->tx_curr_get_info.offset = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2395) 		fifo->tx_curr_put_info.offset = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2396) 		spin_unlock_irqrestore(&fifo->tx_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2397) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2398) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2399) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2400) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2401)  *   stop_nic -  To stop the nic
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2402)  *   @nic : device private variable.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2403)  *   Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2404)  *   This function does exactly the opposite of what the start_nic()
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2405)  *   function does. This function is called to stop the device.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2406)  *   Return Value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2407)  *   void.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2408)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2409) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2410) static void stop_nic(struct s2io_nic *nic)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2411) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2412) 	struct XENA_dev_config __iomem *bar0 = nic->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2413) 	register u64 val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2414) 	u16 interruptible;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2415) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2416) 	/*  Disable all interrupts */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2417) 	en_dis_err_alarms(nic, ENA_ALL_INTRS, DISABLE_INTRS);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2418) 	interruptible = TX_TRAFFIC_INTR | RX_TRAFFIC_INTR;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2419) 	interruptible |= TX_PIC_INTR;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2420) 	en_dis_able_nic_intrs(nic, interruptible, DISABLE_INTRS);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2421) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2422) 	/* Clearing Adapter_En bit of ADAPTER_CONTROL Register */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2423) 	val64 = readq(&bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2424) 	val64 &= ~(ADAPTER_CNTL_EN);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2425) 	writeq(val64, &bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2426) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2427) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2428) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2429)  *  fill_rx_buffers - Allocates the Rx side skbs
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2430)  *  @nic : device private variable.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2431)  *  @ring: per ring structure
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2432)  *  @from_card_up: If this is true, we will map the buffer to get
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2433)  *     the dma address for buf0 and buf1 to give it to the card.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2434)  *     Else we will sync the already mapped buffer to give it to the card.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2435)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2436)  *  The function allocates Rx side skbs and puts the physical
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2437)  *  address of these buffers into the RxD buffer pointers, so that the NIC
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2438)  *  can DMA the received frame into these locations.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2439)  *  The NIC supports 3 receive modes, viz
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2440)  *  1. single buffer,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2441)  *  2. three buffer and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2442)  *  3. Five buffer modes.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2443)  *  Each mode defines how many fragments the received frame will be split
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2444)  *  up into by the NIC. The frame is split into L3 header, L4 Header,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2445)  *  L4 payload in three buffer mode and in 5 buffer mode, L4 payload itself
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2446)  *  is split into 3 fragments. As of now only single buffer mode is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2447)  *  supported.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2448)  *   Return Value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2449)  *  SUCCESS on success or an appropriate -ve value on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2450)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2451) static int fill_rx_buffers(struct s2io_nic *nic, struct ring_info *ring,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2452) 			   int from_card_up)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2453) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2454) 	struct sk_buff *skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2455) 	struct RxD_t *rxdp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2456) 	int off, size, block_no, block_no1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2457) 	u32 alloc_tab = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2458) 	u32 alloc_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2459) 	u64 tmp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2460) 	struct buffAdd *ba;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2461) 	struct RxD_t *first_rxdp = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2462) 	u64 Buffer0_ptr = 0, Buffer1_ptr = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2463) 	struct RxD1 *rxdp1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2464) 	struct RxD3 *rxdp3;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2465) 	struct swStat *swstats = &ring->nic->mac_control.stats_info->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2466) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2467) 	alloc_cnt = ring->pkt_cnt - ring->rx_bufs_left;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2468) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2469) 	block_no1 = ring->rx_curr_get_info.block_index;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2470) 	while (alloc_tab < alloc_cnt) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2471) 		block_no = ring->rx_curr_put_info.block_index;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2472) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2473) 		off = ring->rx_curr_put_info.offset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2474) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2475) 		rxdp = ring->rx_blocks[block_no].rxds[off].virt_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2476) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2477) 		if ((block_no == block_no1) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2478) 		    (off == ring->rx_curr_get_info.offset) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2479) 		    (rxdp->Host_Control)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2480) 			DBG_PRINT(INTR_DBG, "%s: Get and Put info equated\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2481) 				  ring->dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2482) 			goto end;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2483) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2484) 		if (off && (off == ring->rxd_count)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2485) 			ring->rx_curr_put_info.block_index++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2486) 			if (ring->rx_curr_put_info.block_index ==
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2487) 			    ring->block_count)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2488) 				ring->rx_curr_put_info.block_index = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2489) 			block_no = ring->rx_curr_put_info.block_index;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2490) 			off = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2491) 			ring->rx_curr_put_info.offset = off;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2492) 			rxdp = ring->rx_blocks[block_no].block_virt_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2493) 			DBG_PRINT(INTR_DBG, "%s: Next block at: %p\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2494) 				  ring->dev->name, rxdp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2495) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2496) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2497) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2498) 		if ((rxdp->Control_1 & RXD_OWN_XENA) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2499) 		    ((ring->rxd_mode == RXD_MODE_3B) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2500) 		     (rxdp->Control_2 & s2BIT(0)))) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2501) 			ring->rx_curr_put_info.offset = off;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2502) 			goto end;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2503) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2504) 		/* calculate size of skb based on ring mode */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2505) 		size = ring->mtu +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2506) 			HEADER_ETHERNET_II_802_3_SIZE +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2507) 			HEADER_802_2_SIZE + HEADER_SNAP_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2508) 		if (ring->rxd_mode == RXD_MODE_1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2509) 			size += NET_IP_ALIGN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2510) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2511) 			size = ring->mtu + ALIGN_SIZE + BUF0_LEN + 4;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2512) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2513) 		/* allocate skb */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2514) 		skb = netdev_alloc_skb(nic->dev, size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2515) 		if (!skb) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2516) 			DBG_PRINT(INFO_DBG, "%s: Could not allocate skb\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2517) 				  ring->dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2518) 			if (first_rxdp) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2519) 				dma_wmb();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2520) 				first_rxdp->Control_1 |= RXD_OWN_XENA;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2521) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2522) 			swstats->mem_alloc_fail_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2523) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2524) 			return -ENOMEM ;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2525) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2526) 		swstats->mem_allocated += skb->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2527) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2528) 		if (ring->rxd_mode == RXD_MODE_1) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2529) 			/* 1 buffer mode - normal operation mode */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2530) 			rxdp1 = (struct RxD1 *)rxdp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2531) 			memset(rxdp, 0, sizeof(struct RxD1));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2532) 			skb_reserve(skb, NET_IP_ALIGN);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2533) 			rxdp1->Buffer0_ptr =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2534) 				dma_map_single(&ring->pdev->dev, skb->data,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2535) 					       size - NET_IP_ALIGN,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2536) 					       DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2537) 			if (dma_mapping_error(&nic->pdev->dev, rxdp1->Buffer0_ptr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2538) 				goto pci_map_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2539) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2540) 			rxdp->Control_2 =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2541) 				SET_BUFFER0_SIZE_1(size - NET_IP_ALIGN);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2542) 			rxdp->Host_Control = (unsigned long)skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2543) 		} else if (ring->rxd_mode == RXD_MODE_3B) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2544) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2545) 			 * 2 buffer mode -
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2546) 			 * 2 buffer mode provides 128
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2547) 			 * byte aligned receive buffers.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2548) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2549) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2550) 			rxdp3 = (struct RxD3 *)rxdp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2551) 			/* save buffer pointers to avoid frequent dma mapping */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2552) 			Buffer0_ptr = rxdp3->Buffer0_ptr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2553) 			Buffer1_ptr = rxdp3->Buffer1_ptr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2554) 			memset(rxdp, 0, sizeof(struct RxD3));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2555) 			/* restore the buffer pointers for dma sync*/
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2556) 			rxdp3->Buffer0_ptr = Buffer0_ptr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2557) 			rxdp3->Buffer1_ptr = Buffer1_ptr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2558) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2559) 			ba = &ring->ba[block_no][off];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2560) 			skb_reserve(skb, BUF0_LEN);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2561) 			tmp = (u64)(unsigned long)skb->data;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2562) 			tmp += ALIGN_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2563) 			tmp &= ~ALIGN_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2564) 			skb->data = (void *) (unsigned long)tmp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2565) 			skb_reset_tail_pointer(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2566) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2567) 			if (from_card_up) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2568) 				rxdp3->Buffer0_ptr =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2569) 					dma_map_single(&ring->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2570) 						       ba->ba_0, BUF0_LEN,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2571) 						       DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2572) 				if (dma_mapping_error(&nic->pdev->dev, rxdp3->Buffer0_ptr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2573) 					goto pci_map_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2574) 			} else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2575) 				dma_sync_single_for_device(&ring->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2576) 							   (dma_addr_t)rxdp3->Buffer0_ptr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2577) 							   BUF0_LEN,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2578) 							   DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2579) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2580) 			rxdp->Control_2 = SET_BUFFER0_SIZE_3(BUF0_LEN);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2581) 			if (ring->rxd_mode == RXD_MODE_3B) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2582) 				/* Two buffer mode */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2583) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2584) 				/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2585) 				 * Buffer2 will have L3/L4 header plus
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2586) 				 * L4 payload
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2587) 				 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2588) 				rxdp3->Buffer2_ptr = dma_map_single(&ring->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2589) 								    skb->data,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2590) 								    ring->mtu + 4,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2591) 								    DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2592) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2593) 				if (dma_mapping_error(&nic->pdev->dev, rxdp3->Buffer2_ptr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2594) 					goto pci_map_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2595) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2596) 				if (from_card_up) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2597) 					rxdp3->Buffer1_ptr =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2598) 						dma_map_single(&ring->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2599) 							       ba->ba_1,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2600) 							       BUF1_LEN,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2601) 							       DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2602) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2603) 					if (dma_mapping_error(&nic->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2604) 							      rxdp3->Buffer1_ptr)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2605) 						dma_unmap_single(&ring->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2606) 								 (dma_addr_t)(unsigned long)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2607) 								 skb->data,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2608) 								 ring->mtu + 4,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2609) 								 DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2610) 						goto pci_map_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2611) 					}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2612) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2613) 				rxdp->Control_2 |= SET_BUFFER1_SIZE_3(1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2614) 				rxdp->Control_2 |= SET_BUFFER2_SIZE_3
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2615) 					(ring->mtu + 4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2616) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2617) 			rxdp->Control_2 |= s2BIT(0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2618) 			rxdp->Host_Control = (unsigned long) (skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2619) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2620) 		if (alloc_tab & ((1 << rxsync_frequency) - 1))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2621) 			rxdp->Control_1 |= RXD_OWN_XENA;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2622) 		off++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2623) 		if (off == (ring->rxd_count + 1))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2624) 			off = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2625) 		ring->rx_curr_put_info.offset = off;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2626) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2627) 		rxdp->Control_2 |= SET_RXD_MARKER;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2628) 		if (!(alloc_tab & ((1 << rxsync_frequency) - 1))) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2629) 			if (first_rxdp) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2630) 				dma_wmb();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2631) 				first_rxdp->Control_1 |= RXD_OWN_XENA;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2632) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2633) 			first_rxdp = rxdp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2634) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2635) 		ring->rx_bufs_left += 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2636) 		alloc_tab++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2637) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2638) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2639) end:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2640) 	/* Transfer ownership of first descriptor to adapter just before
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2641) 	 * exiting. Before that, use memory barrier so that ownership
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2642) 	 * and other fields are seen by adapter correctly.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2643) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2644) 	if (first_rxdp) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2645) 		dma_wmb();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2646) 		first_rxdp->Control_1 |= RXD_OWN_XENA;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2647) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2648) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2649) 	return SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2650) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2651) pci_map_failed:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2652) 	swstats->pci_map_fail_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2653) 	swstats->mem_freed += skb->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2654) 	dev_kfree_skb_irq(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2655) 	return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2656) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2657) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2658) static void free_rxd_blk(struct s2io_nic *sp, int ring_no, int blk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2659) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2660) 	struct net_device *dev = sp->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2661) 	int j;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2662) 	struct sk_buff *skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2663) 	struct RxD_t *rxdp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2664) 	struct RxD1 *rxdp1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2665) 	struct RxD3 *rxdp3;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2666) 	struct mac_info *mac_control = &sp->mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2667) 	struct stat_block *stats = mac_control->stats_info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2668) 	struct swStat *swstats = &stats->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2669) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2670) 	for (j = 0 ; j < rxd_count[sp->rxd_mode]; j++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2671) 		rxdp = mac_control->rings[ring_no].
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2672) 			rx_blocks[blk].rxds[j].virt_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2673) 		skb = (struct sk_buff *)((unsigned long)rxdp->Host_Control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2674) 		if (!skb)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2675) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2676) 		if (sp->rxd_mode == RXD_MODE_1) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2677) 			rxdp1 = (struct RxD1 *)rxdp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2678) 			dma_unmap_single(&sp->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2679) 					 (dma_addr_t)rxdp1->Buffer0_ptr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2680) 					 dev->mtu +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2681) 					 HEADER_ETHERNET_II_802_3_SIZE +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2682) 					 HEADER_802_2_SIZE + HEADER_SNAP_SIZE,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2683) 					 DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2684) 			memset(rxdp, 0, sizeof(struct RxD1));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2685) 		} else if (sp->rxd_mode == RXD_MODE_3B) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2686) 			rxdp3 = (struct RxD3 *)rxdp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2687) 			dma_unmap_single(&sp->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2688) 					 (dma_addr_t)rxdp3->Buffer0_ptr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2689) 					 BUF0_LEN, DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2690) 			dma_unmap_single(&sp->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2691) 					 (dma_addr_t)rxdp3->Buffer1_ptr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2692) 					 BUF1_LEN, DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2693) 			dma_unmap_single(&sp->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2694) 					 (dma_addr_t)rxdp3->Buffer2_ptr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2695) 					 dev->mtu + 4, DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2696) 			memset(rxdp, 0, sizeof(struct RxD3));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2697) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2698) 		swstats->mem_freed += skb->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2699) 		dev_kfree_skb(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2700) 		mac_control->rings[ring_no].rx_bufs_left -= 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2701) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2702) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2703) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2704) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2705)  *  free_rx_buffers - Frees all Rx buffers
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2706)  *  @sp: device private variable.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2707)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2708)  *  This function will free all Rx buffers allocated by host.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2709)  *  Return Value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2710)  *  NONE.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2711)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2712) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2713) static void free_rx_buffers(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2714) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2715) 	struct net_device *dev = sp->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2716) 	int i, blk = 0, buf_cnt = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2717) 	struct config_param *config = &sp->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2718) 	struct mac_info *mac_control = &sp->mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2719) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2720) 	for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2721) 		struct ring_info *ring = &mac_control->rings[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2722) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2723) 		for (blk = 0; blk < rx_ring_sz[i]; blk++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2724) 			free_rxd_blk(sp, i, blk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2725) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2726) 		ring->rx_curr_put_info.block_index = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2727) 		ring->rx_curr_get_info.block_index = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2728) 		ring->rx_curr_put_info.offset = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2729) 		ring->rx_curr_get_info.offset = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2730) 		ring->rx_bufs_left = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2731) 		DBG_PRINT(INIT_DBG, "%s: Freed 0x%x Rx Buffers on ring%d\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2732) 			  dev->name, buf_cnt, i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2733) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2734) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2735) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2736) static int s2io_chk_rx_buffers(struct s2io_nic *nic, struct ring_info *ring)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2737) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2738) 	if (fill_rx_buffers(nic, ring, 0) == -ENOMEM) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2739) 		DBG_PRINT(INFO_DBG, "%s: Out of memory in Rx Intr!!\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2740) 			  ring->dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2741) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2742) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2743) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2744) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2745) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2746)  * s2io_poll - Rx interrupt handler for NAPI support
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2747)  * @napi : pointer to the napi structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2748)  * @budget : The number of packets that were budgeted to be processed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2749)  * during  one pass through the 'Poll" function.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2750)  * Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2751)  * Comes into picture only if NAPI support has been incorporated. It does
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2752)  * the same thing that rx_intr_handler does, but not in a interrupt context
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2753)  * also It will process only a given number of packets.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2754)  * Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2755)  * 0 on success and 1 if there are No Rx packets to be processed.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2756)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2757) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2758) static int s2io_poll_msix(struct napi_struct *napi, int budget)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2759) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2760) 	struct ring_info *ring = container_of(napi, struct ring_info, napi);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2761) 	struct net_device *dev = ring->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2762) 	int pkts_processed = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2763) 	u8 __iomem *addr = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2764) 	u8 val8 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2765) 	struct s2io_nic *nic = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2766) 	struct XENA_dev_config __iomem *bar0 = nic->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2767) 	int budget_org = budget;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2768) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2769) 	if (unlikely(!is_s2io_card_up(nic)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2770) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2771) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2772) 	pkts_processed = rx_intr_handler(ring, budget);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2773) 	s2io_chk_rx_buffers(nic, ring);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2774) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2775) 	if (pkts_processed < budget_org) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2776) 		napi_complete_done(napi, pkts_processed);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2777) 		/*Re Enable MSI-Rx Vector*/
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2778) 		addr = (u8 __iomem *)&bar0->xmsi_mask_reg;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2779) 		addr += 7 - ring->ring_no;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2780) 		val8 = (ring->ring_no == 0) ? 0x3f : 0xbf;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2781) 		writeb(val8, addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2782) 		val8 = readb(addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2783) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2784) 	return pkts_processed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2785) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2786) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2787) static int s2io_poll_inta(struct napi_struct *napi, int budget)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2788) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2789) 	struct s2io_nic *nic = container_of(napi, struct s2io_nic, napi);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2790) 	int pkts_processed = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2791) 	int ring_pkts_processed, i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2792) 	struct XENA_dev_config __iomem *bar0 = nic->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2793) 	int budget_org = budget;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2794) 	struct config_param *config = &nic->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2795) 	struct mac_info *mac_control = &nic->mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2796) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2797) 	if (unlikely(!is_s2io_card_up(nic)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2798) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2799) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2800) 	for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2801) 		struct ring_info *ring = &mac_control->rings[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2802) 		ring_pkts_processed = rx_intr_handler(ring, budget);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2803) 		s2io_chk_rx_buffers(nic, ring);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2804) 		pkts_processed += ring_pkts_processed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2805) 		budget -= ring_pkts_processed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2806) 		if (budget <= 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2807) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2808) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2809) 	if (pkts_processed < budget_org) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2810) 		napi_complete_done(napi, pkts_processed);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2811) 		/* Re enable the Rx interrupts for the ring */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2812) 		writeq(0, &bar0->rx_traffic_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2813) 		readl(&bar0->rx_traffic_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2814) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2815) 	return pkts_processed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2816) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2817) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2818) #ifdef CONFIG_NET_POLL_CONTROLLER
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2819) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2820)  * s2io_netpoll - netpoll event handler entry point
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2821)  * @dev : pointer to the device structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2822)  * Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2823)  * 	This function will be called by upper layer to check for events on the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2824)  * interface in situations where interrupts are disabled. It is used for
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2825)  * specific in-kernel networking tasks, such as remote consoles and kernel
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2826)  * debugging over the network (example netdump in RedHat).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2827)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2828) static void s2io_netpoll(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2829) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2830) 	struct s2io_nic *nic = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2831) 	const int irq = nic->pdev->irq;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2832) 	struct XENA_dev_config __iomem *bar0 = nic->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2833) 	u64 val64 = 0xFFFFFFFFFFFFFFFFULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2834) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2835) 	struct config_param *config = &nic->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2836) 	struct mac_info *mac_control = &nic->mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2837) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2838) 	if (pci_channel_offline(nic->pdev))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2839) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2840) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2841) 	disable_irq(irq);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2842) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2843) 	writeq(val64, &bar0->rx_traffic_int);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2844) 	writeq(val64, &bar0->tx_traffic_int);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2845) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2846) 	/* we need to free up the transmitted skbufs or else netpoll will
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2847) 	 * run out of skbs and will fail and eventually netpoll application such
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2848) 	 * as netdump will fail.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2849) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2850) 	for (i = 0; i < config->tx_fifo_num; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2851) 		tx_intr_handler(&mac_control->fifos[i]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2852) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2853) 	/* check for received packet and indicate up to network */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2854) 	for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2855) 		struct ring_info *ring = &mac_control->rings[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2856) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2857) 		rx_intr_handler(ring, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2858) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2859) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2860) 	for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2861) 		struct ring_info *ring = &mac_control->rings[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2862) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2863) 		if (fill_rx_buffers(nic, ring, 0) == -ENOMEM) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2864) 			DBG_PRINT(INFO_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2865) 				  "%s: Out of memory in Rx Netpoll!!\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2866) 				  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2867) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2868) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2869) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2870) 	enable_irq(irq);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2871) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2872) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2873) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2874) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2875)  *  rx_intr_handler - Rx interrupt handler
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2876)  *  @ring_data: per ring structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2877)  *  @budget: budget for napi processing.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2878)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2879)  *  If the interrupt is because of a received frame or if the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2880)  *  receive ring contains fresh as yet un-processed frames,this function is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2881)  *  called. It picks out the RxD at which place the last Rx processing had
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2882)  *  stopped and sends the skb to the OSM's Rx handler and then increments
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2883)  *  the offset.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2884)  *  Return Value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2885)  *  No. of napi packets processed.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2886)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2887) static int rx_intr_handler(struct ring_info *ring_data, int budget)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2888) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2889) 	int get_block, put_block;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2890) 	struct rx_curr_get_info get_info, put_info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2891) 	struct RxD_t *rxdp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2892) 	struct sk_buff *skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2893) 	int pkt_cnt = 0, napi_pkts = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2894) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2895) 	struct RxD1 *rxdp1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2896) 	struct RxD3 *rxdp3;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2897) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2898) 	if (budget <= 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2899) 		return napi_pkts;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2900) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2901) 	get_info = ring_data->rx_curr_get_info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2902) 	get_block = get_info.block_index;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2903) 	memcpy(&put_info, &ring_data->rx_curr_put_info, sizeof(put_info));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2904) 	put_block = put_info.block_index;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2905) 	rxdp = ring_data->rx_blocks[get_block].rxds[get_info.offset].virt_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2906) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2907) 	while (RXD_IS_UP2DT(rxdp)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2908) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2909) 		 * If your are next to put index then it's
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2910) 		 * FIFO full condition
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2911) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2912) 		if ((get_block == put_block) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2913) 		    (get_info.offset + 1) == put_info.offset) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2914) 			DBG_PRINT(INTR_DBG, "%s: Ring Full\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2915) 				  ring_data->dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2916) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2917) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2918) 		skb = (struct sk_buff *)((unsigned long)rxdp->Host_Control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2919) 		if (skb == NULL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2920) 			DBG_PRINT(ERR_DBG, "%s: NULL skb in Rx Intr\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2921) 				  ring_data->dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2922) 			return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2923) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2924) 		if (ring_data->rxd_mode == RXD_MODE_1) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2925) 			rxdp1 = (struct RxD1 *)rxdp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2926) 			dma_unmap_single(&ring_data->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2927) 					 (dma_addr_t)rxdp1->Buffer0_ptr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2928) 					 ring_data->mtu +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2929) 					 HEADER_ETHERNET_II_802_3_SIZE +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2930) 					 HEADER_802_2_SIZE +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2931) 					 HEADER_SNAP_SIZE,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2932) 					 DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2933) 		} else if (ring_data->rxd_mode == RXD_MODE_3B) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2934) 			rxdp3 = (struct RxD3 *)rxdp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2935) 			dma_sync_single_for_cpu(&ring_data->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2936) 						(dma_addr_t)rxdp3->Buffer0_ptr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2937) 						BUF0_LEN, DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2938) 			dma_unmap_single(&ring_data->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2939) 					 (dma_addr_t)rxdp3->Buffer2_ptr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2940) 					 ring_data->mtu + 4, DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2941) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2942) 		prefetch(skb->data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2943) 		rx_osm_handler(ring_data, rxdp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2944) 		get_info.offset++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2945) 		ring_data->rx_curr_get_info.offset = get_info.offset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2946) 		rxdp = ring_data->rx_blocks[get_block].
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2947) 			rxds[get_info.offset].virt_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2948) 		if (get_info.offset == rxd_count[ring_data->rxd_mode]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2949) 			get_info.offset = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2950) 			ring_data->rx_curr_get_info.offset = get_info.offset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2951) 			get_block++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2952) 			if (get_block == ring_data->block_count)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2953) 				get_block = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2954) 			ring_data->rx_curr_get_info.block_index = get_block;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2955) 			rxdp = ring_data->rx_blocks[get_block].block_virt_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2956) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2957) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2958) 		if (ring_data->nic->config.napi) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2959) 			budget--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2960) 			napi_pkts++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2961) 			if (!budget)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2962) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2963) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2964) 		pkt_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2965) 		if ((indicate_max_pkts) && (pkt_cnt > indicate_max_pkts))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2966) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2967) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2968) 	if (ring_data->lro) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2969) 		/* Clear all LRO sessions before exiting */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2970) 		for (i = 0; i < MAX_LRO_SESSIONS; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2971) 			struct lro *lro = &ring_data->lro0_n[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2972) 			if (lro->in_use) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2973) 				update_L3L4_header(ring_data->nic, lro);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2974) 				queue_rx_frame(lro->parent, lro->vlan_tag);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2975) 				clear_lro_session(lro);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2976) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2977) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2978) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2979) 	return napi_pkts;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2980) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2981) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2982) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2983)  *  tx_intr_handler - Transmit interrupt handler
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2984)  *  @fifo_data : fifo data pointer
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2985)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2986)  *  If an interrupt was raised to indicate DMA complete of the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2987)  *  Tx packet, this function is called. It identifies the last TxD
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2988)  *  whose buffer was freed and frees all skbs whose data have already
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2989)  *  DMA'ed into the NICs internal memory.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2990)  *  Return Value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2991)  *  NONE
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2992)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2993) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2994) static void tx_intr_handler(struct fifo_info *fifo_data)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2995) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2996) 	struct s2io_nic *nic = fifo_data->nic;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2997) 	struct tx_curr_get_info get_info, put_info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2998) 	struct sk_buff *skb = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2999) 	struct TxD *txdlp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3000) 	int pkt_cnt = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3001) 	unsigned long flags = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3002) 	u8 err_mask;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3003) 	struct stat_block *stats = nic->mac_control.stats_info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3004) 	struct swStat *swstats = &stats->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3005) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3006) 	if (!spin_trylock_irqsave(&fifo_data->tx_lock, flags))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3007) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3008) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3009) 	get_info = fifo_data->tx_curr_get_info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3010) 	memcpy(&put_info, &fifo_data->tx_curr_put_info, sizeof(put_info));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3011) 	txdlp = fifo_data->list_info[get_info.offset].list_virt_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3012) 	while ((!(txdlp->Control_1 & TXD_LIST_OWN_XENA)) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3013) 	       (get_info.offset != put_info.offset) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3014) 	       (txdlp->Host_Control)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3015) 		/* Check for TxD errors */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3016) 		if (txdlp->Control_1 & TXD_T_CODE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3017) 			unsigned long long err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3018) 			err = txdlp->Control_1 & TXD_T_CODE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3019) 			if (err & 0x1) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3020) 				swstats->parity_err_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3021) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3022) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3023) 			/* update t_code statistics */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3024) 			err_mask = err >> 48;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3025) 			switch (err_mask) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3026) 			case 2:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3027) 				swstats->tx_buf_abort_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3028) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3029) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3030) 			case 3:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3031) 				swstats->tx_desc_abort_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3032) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3033) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3034) 			case 7:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3035) 				swstats->tx_parity_err_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3036) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3037) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3038) 			case 10:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3039) 				swstats->tx_link_loss_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3040) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3041) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3042) 			case 15:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3043) 				swstats->tx_list_proc_err_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3044) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3045) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3046) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3047) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3048) 		skb = s2io_txdl_getskb(fifo_data, txdlp, get_info.offset);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3049) 		if (skb == NULL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3050) 			spin_unlock_irqrestore(&fifo_data->tx_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3051) 			DBG_PRINT(ERR_DBG, "%s: NULL skb in Tx Free Intr\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3052) 				  __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3053) 			return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3054) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3055) 		pkt_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3056) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3057) 		/* Updating the statistics block */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3058) 		swstats->mem_freed += skb->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3059) 		dev_consume_skb_irq(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3060) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3061) 		get_info.offset++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3062) 		if (get_info.offset == get_info.fifo_len + 1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3063) 			get_info.offset = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3064) 		txdlp = fifo_data->list_info[get_info.offset].list_virt_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3065) 		fifo_data->tx_curr_get_info.offset = get_info.offset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3066) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3067) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3068) 	s2io_wake_tx_queue(fifo_data, pkt_cnt, nic->config.multiq);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3069) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3070) 	spin_unlock_irqrestore(&fifo_data->tx_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3071) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3072) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3073) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3074)  *  s2io_mdio_write - Function to write in to MDIO registers
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3075)  *  @mmd_type : MMD type value (PMA/PMD/WIS/PCS/PHYXS)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3076)  *  @addr     : address value
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3077)  *  @value    : data value
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3078)  *  @dev      : pointer to net_device structure
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3079)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3080)  *  This function is used to write values to the MDIO registers
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3081)  *  NONE
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3082)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3083) static void s2io_mdio_write(u32 mmd_type, u64 addr, u16 value,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3084) 			    struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3085) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3086) 	u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3087) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3088) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3089) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3090) 	/* address transaction */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3091) 	val64 = MDIO_MMD_INDX_ADDR(addr) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3092) 		MDIO_MMD_DEV_ADDR(mmd_type) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3093) 		MDIO_MMS_PRT_ADDR(0x0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3094) 	writeq(val64, &bar0->mdio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3095) 	val64 = val64 | MDIO_CTRL_START_TRANS(0xE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3096) 	writeq(val64, &bar0->mdio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3097) 	udelay(100);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3098) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3099) 	/* Data transaction */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3100) 	val64 = MDIO_MMD_INDX_ADDR(addr) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3101) 		MDIO_MMD_DEV_ADDR(mmd_type) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3102) 		MDIO_MMS_PRT_ADDR(0x0) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3103) 		MDIO_MDIO_DATA(value) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3104) 		MDIO_OP(MDIO_OP_WRITE_TRANS);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3105) 	writeq(val64, &bar0->mdio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3106) 	val64 = val64 | MDIO_CTRL_START_TRANS(0xE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3107) 	writeq(val64, &bar0->mdio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3108) 	udelay(100);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3109) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3110) 	val64 = MDIO_MMD_INDX_ADDR(addr) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3111) 		MDIO_MMD_DEV_ADDR(mmd_type) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3112) 		MDIO_MMS_PRT_ADDR(0x0) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3113) 		MDIO_OP(MDIO_OP_READ_TRANS);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3114) 	writeq(val64, &bar0->mdio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3115) 	val64 = val64 | MDIO_CTRL_START_TRANS(0xE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3116) 	writeq(val64, &bar0->mdio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3117) 	udelay(100);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3118) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3119) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3120) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3121)  *  s2io_mdio_read - Function to write in to MDIO registers
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3122)  *  @mmd_type : MMD type value (PMA/PMD/WIS/PCS/PHYXS)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3123)  *  @addr     : address value
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3124)  *  @dev      : pointer to net_device structure
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3125)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3126)  *  This function is used to read values to the MDIO registers
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3127)  *  NONE
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3128)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3129) static u64 s2io_mdio_read(u32 mmd_type, u64 addr, struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3130) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3131) 	u64 val64 = 0x0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3132) 	u64 rval64 = 0x0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3133) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3134) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3135) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3136) 	/* address transaction */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3137) 	val64 = val64 | (MDIO_MMD_INDX_ADDR(addr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3138) 			 | MDIO_MMD_DEV_ADDR(mmd_type)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3139) 			 | MDIO_MMS_PRT_ADDR(0x0));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3140) 	writeq(val64, &bar0->mdio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3141) 	val64 = val64 | MDIO_CTRL_START_TRANS(0xE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3142) 	writeq(val64, &bar0->mdio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3143) 	udelay(100);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3144) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3145) 	/* Data transaction */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3146) 	val64 = MDIO_MMD_INDX_ADDR(addr) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3147) 		MDIO_MMD_DEV_ADDR(mmd_type) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3148) 		MDIO_MMS_PRT_ADDR(0x0) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3149) 		MDIO_OP(MDIO_OP_READ_TRANS);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3150) 	writeq(val64, &bar0->mdio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3151) 	val64 = val64 | MDIO_CTRL_START_TRANS(0xE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3152) 	writeq(val64, &bar0->mdio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3153) 	udelay(100);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3154) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3155) 	/* Read the value from regs */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3156) 	rval64 = readq(&bar0->mdio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3157) 	rval64 = rval64 & 0xFFFF0000;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3158) 	rval64 = rval64 >> 16;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3159) 	return rval64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3160) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3161) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3162) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3163)  *  s2io_chk_xpak_counter - Function to check the status of the xpak counters
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3164)  *  @counter      : counter value to be updated
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3165)  *  @regs_stat    : registers status
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3166)  *  @index        : index
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3167)  *  @flag         : flag to indicate the status
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3168)  *  @type         : counter type
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3169)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3170)  *  This function is to check the status of the xpak counters value
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3171)  *  NONE
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3172)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3173) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3174) static void s2io_chk_xpak_counter(u64 *counter, u64 * regs_stat, u32 index,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3175) 				  u16 flag, u16 type)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3176) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3177) 	u64 mask = 0x3;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3178) 	u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3179) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3180) 	for (i = 0; i < index; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3181) 		mask = mask << 0x2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3182) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3183) 	if (flag > 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3184) 		*counter = *counter + 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3185) 		val64 = *regs_stat & mask;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3186) 		val64 = val64 >> (index * 0x2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3187) 		val64 = val64 + 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3188) 		if (val64 == 3) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3189) 			switch (type) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3190) 			case 1:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3191) 				DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3192) 					  "Take Xframe NIC out of service.\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3193) 				DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3194) "Excessive temperatures may result in premature transceiver failure.\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3195) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3196) 			case 2:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3197) 				DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3198) 					  "Take Xframe NIC out of service.\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3199) 				DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3200) "Excessive bias currents may indicate imminent laser diode failure.\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3201) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3202) 			case 3:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3203) 				DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3204) 					  "Take Xframe NIC out of service.\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3205) 				DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3206) "Excessive laser output power may saturate far-end receiver.\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3207) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3208) 			default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3209) 				DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3210) 					  "Incorrect XPAK Alarm type\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3211) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3212) 			val64 = 0x0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3213) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3214) 		val64 = val64 << (index * 0x2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3215) 		*regs_stat = (*regs_stat & (~mask)) | (val64);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3216) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3217) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3218) 		*regs_stat = *regs_stat & (~mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3219) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3220) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3221) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3222) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3223)  *  s2io_updt_xpak_counter - Function to update the xpak counters
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3224)  *  @dev         : pointer to net_device struct
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3225)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3226)  *  This function is to upate the status of the xpak counters value
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3227)  *  NONE
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3228)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3229) static void s2io_updt_xpak_counter(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3230) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3231) 	u16 flag  = 0x0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3232) 	u16 type  = 0x0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3233) 	u16 val16 = 0x0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3234) 	u64 val64 = 0x0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3235) 	u64 addr  = 0x0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3236) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3237) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3238) 	struct stat_block *stats = sp->mac_control.stats_info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3239) 	struct xpakStat *xstats = &stats->xpak_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3240) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3241) 	/* Check the communication with the MDIO slave */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3242) 	addr = MDIO_CTRL1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3243) 	val64 = 0x0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3244) 	val64 = s2io_mdio_read(MDIO_MMD_PMAPMD, addr, dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3245) 	if ((val64 == 0xFFFF) || (val64 == 0x0000)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3246) 		DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3247) 			  "ERR: MDIO slave access failed - Returned %llx\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3248) 			  (unsigned long long)val64);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3249) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3250) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3251) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3252) 	/* Check for the expected value of control reg 1 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3253) 	if (val64 != MDIO_CTRL1_SPEED10G) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3254) 		DBG_PRINT(ERR_DBG, "Incorrect value at PMA address 0x0000 - "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3255) 			  "Returned: %llx- Expected: 0x%x\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3256) 			  (unsigned long long)val64, MDIO_CTRL1_SPEED10G);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3257) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3258) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3259) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3260) 	/* Loading the DOM register to MDIO register */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3261) 	addr = 0xA100;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3262) 	s2io_mdio_write(MDIO_MMD_PMAPMD, addr, val16, dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3263) 	val64 = s2io_mdio_read(MDIO_MMD_PMAPMD, addr, dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3264) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3265) 	/* Reading the Alarm flags */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3266) 	addr = 0xA070;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3267) 	val64 = 0x0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3268) 	val64 = s2io_mdio_read(MDIO_MMD_PMAPMD, addr, dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3269) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3270) 	flag = CHECKBIT(val64, 0x7);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3271) 	type = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3272) 	s2io_chk_xpak_counter(&xstats->alarm_transceiver_temp_high,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3273) 			      &xstats->xpak_regs_stat,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3274) 			      0x0, flag, type);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3275) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3276) 	if (CHECKBIT(val64, 0x6))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3277) 		xstats->alarm_transceiver_temp_low++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3278) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3279) 	flag = CHECKBIT(val64, 0x3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3280) 	type = 2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3281) 	s2io_chk_xpak_counter(&xstats->alarm_laser_bias_current_high,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3282) 			      &xstats->xpak_regs_stat,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3283) 			      0x2, flag, type);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3284) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3285) 	if (CHECKBIT(val64, 0x2))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3286) 		xstats->alarm_laser_bias_current_low++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3287) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3288) 	flag = CHECKBIT(val64, 0x1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3289) 	type = 3;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3290) 	s2io_chk_xpak_counter(&xstats->alarm_laser_output_power_high,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3291) 			      &xstats->xpak_regs_stat,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3292) 			      0x4, flag, type);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3293) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3294) 	if (CHECKBIT(val64, 0x0))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3295) 		xstats->alarm_laser_output_power_low++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3296) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3297) 	/* Reading the Warning flags */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3298) 	addr = 0xA074;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3299) 	val64 = 0x0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3300) 	val64 = s2io_mdio_read(MDIO_MMD_PMAPMD, addr, dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3301) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3302) 	if (CHECKBIT(val64, 0x7))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3303) 		xstats->warn_transceiver_temp_high++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3304) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3305) 	if (CHECKBIT(val64, 0x6))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3306) 		xstats->warn_transceiver_temp_low++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3307) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3308) 	if (CHECKBIT(val64, 0x3))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3309) 		xstats->warn_laser_bias_current_high++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3310) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3311) 	if (CHECKBIT(val64, 0x2))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3312) 		xstats->warn_laser_bias_current_low++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3313) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3314) 	if (CHECKBIT(val64, 0x1))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3315) 		xstats->warn_laser_output_power_high++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3316) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3317) 	if (CHECKBIT(val64, 0x0))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3318) 		xstats->warn_laser_output_power_low++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3319) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3320) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3321) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3322)  *  wait_for_cmd_complete - waits for a command to complete.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3323)  *  @addr: address
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3324)  *  @busy_bit: bit to check for busy
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3325)  *  @bit_state: state to check
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3326)  *  Description: Function that waits for a command to Write into RMAC
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3327)  *  ADDR DATA registers to be completed and returns either success or
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3328)  *  error depending on whether the command was complete or not.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3329)  *  Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3330)  *   SUCCESS on success and FAILURE on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3331)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3332) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3333) static int wait_for_cmd_complete(void __iomem *addr, u64 busy_bit,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3334) 				 int bit_state)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3335) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3336) 	int ret = FAILURE, cnt = 0, delay = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3337) 	u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3338) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3339) 	if ((bit_state != S2IO_BIT_RESET) && (bit_state != S2IO_BIT_SET))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3340) 		return FAILURE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3341) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3342) 	do {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3343) 		val64 = readq(addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3344) 		if (bit_state == S2IO_BIT_RESET) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3345) 			if (!(val64 & busy_bit)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3346) 				ret = SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3347) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3348) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3349) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3350) 			if (val64 & busy_bit) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3351) 				ret = SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3352) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3353) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3354) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3355) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3356) 		if (in_interrupt())
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3357) 			mdelay(delay);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3358) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3359) 			msleep(delay);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3360) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3361) 		if (++cnt >= 10)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3362) 			delay = 50;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3363) 	} while (cnt < 20);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3364) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3365) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3366) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3367)  * check_pci_device_id - Checks if the device id is supported
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3368)  * @id : device id
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3369)  * Description: Function to check if the pci device id is supported by driver.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3370)  * Return value: Actual device id if supported else PCI_ANY_ID
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3371)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3372) static u16 check_pci_device_id(u16 id)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3373) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3374) 	switch (id) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3375) 	case PCI_DEVICE_ID_HERC_WIN:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3376) 	case PCI_DEVICE_ID_HERC_UNI:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3377) 		return XFRAME_II_DEVICE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3378) 	case PCI_DEVICE_ID_S2IO_UNI:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3379) 	case PCI_DEVICE_ID_S2IO_WIN:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3380) 		return XFRAME_I_DEVICE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3381) 	default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3382) 		return PCI_ANY_ID;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3383) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3384) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3385) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3386) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3387)  *  s2io_reset - Resets the card.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3388)  *  @sp : private member of the device structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3389)  *  Description: Function to Reset the card. This function then also
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3390)  *  restores the previously saved PCI configuration space registers as
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3391)  *  the card reset also resets the configuration space.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3392)  *  Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3393)  *  void.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3394)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3395) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3396) static void s2io_reset(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3397) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3398) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3399) 	u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3400) 	u16 subid, pci_cmd;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3401) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3402) 	u16 val16;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3403) 	unsigned long long up_cnt, down_cnt, up_time, down_time, reset_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3404) 	unsigned long long mem_alloc_cnt, mem_free_cnt, watchdog_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3405) 	struct stat_block *stats;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3406) 	struct swStat *swstats;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3407) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3408) 	DBG_PRINT(INIT_DBG, "%s: Resetting XFrame card %s\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3409) 		  __func__, pci_name(sp->pdev));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3410) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3411) 	/* Back up  the PCI-X CMD reg, dont want to lose MMRBC, OST settings */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3412) 	pci_read_config_word(sp->pdev, PCIX_COMMAND_REGISTER, &(pci_cmd));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3413) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3414) 	val64 = SW_RESET_ALL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3415) 	writeq(val64, &bar0->sw_reset);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3416) 	if (strstr(sp->product_name, "CX4"))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3417) 		msleep(750);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3418) 	msleep(250);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3419) 	for (i = 0; i < S2IO_MAX_PCI_CONFIG_SPACE_REINIT; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3420) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3421) 		/* Restore the PCI state saved during initialization. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3422) 		pci_restore_state(sp->pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3423) 		pci_save_state(sp->pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3424) 		pci_read_config_word(sp->pdev, 0x2, &val16);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3425) 		if (check_pci_device_id(val16) != (u16)PCI_ANY_ID)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3426) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3427) 		msleep(200);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3428) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3429) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3430) 	if (check_pci_device_id(val16) == (u16)PCI_ANY_ID)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3431) 		DBG_PRINT(ERR_DBG, "%s SW_Reset failed!\n", __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3432) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3433) 	pci_write_config_word(sp->pdev, PCIX_COMMAND_REGISTER, pci_cmd);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3434) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3435) 	s2io_init_pci(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3436) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3437) 	/* Set swapper to enable I/O register access */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3438) 	s2io_set_swapper(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3439) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3440) 	/* restore mac_addr entries */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3441) 	do_s2io_restore_unicast_mc(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3442) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3443) 	/* Restore the MSIX table entries from local variables */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3444) 	restore_xmsi_data(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3445) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3446) 	/* Clear certain PCI/PCI-X fields after reset */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3447) 	if (sp->device_type == XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3448) 		/* Clear "detected parity error" bit */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3449) 		pci_write_config_word(sp->pdev, PCI_STATUS, 0x8000);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3450) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3451) 		/* Clearing PCIX Ecc status register */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3452) 		pci_write_config_dword(sp->pdev, 0x68, 0x7C);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3453) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3454) 		/* Clearing PCI_STATUS error reflected here */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3455) 		writeq(s2BIT(62), &bar0->txpic_int_reg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3456) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3457) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3458) 	/* Reset device statistics maintained by OS */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3459) 	memset(&sp->stats, 0, sizeof(struct net_device_stats));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3460) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3461) 	stats = sp->mac_control.stats_info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3462) 	swstats = &stats->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3463) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3464) 	/* save link up/down time/cnt, reset/memory/watchdog cnt */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3465) 	up_cnt = swstats->link_up_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3466) 	down_cnt = swstats->link_down_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3467) 	up_time = swstats->link_up_time;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3468) 	down_time = swstats->link_down_time;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3469) 	reset_cnt = swstats->soft_reset_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3470) 	mem_alloc_cnt = swstats->mem_allocated;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3471) 	mem_free_cnt = swstats->mem_freed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3472) 	watchdog_cnt = swstats->watchdog_timer_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3473) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3474) 	memset(stats, 0, sizeof(struct stat_block));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3475) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3476) 	/* restore link up/down time/cnt, reset/memory/watchdog cnt */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3477) 	swstats->link_up_cnt = up_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3478) 	swstats->link_down_cnt = down_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3479) 	swstats->link_up_time = up_time;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3480) 	swstats->link_down_time = down_time;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3481) 	swstats->soft_reset_cnt = reset_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3482) 	swstats->mem_allocated = mem_alloc_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3483) 	swstats->mem_freed = mem_free_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3484) 	swstats->watchdog_timer_cnt = watchdog_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3485) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3486) 	/* SXE-002: Configure link and activity LED to turn it off */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3487) 	subid = sp->pdev->subsystem_device;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3488) 	if (((subid & 0xFF) >= 0x07) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3489) 	    (sp->device_type == XFRAME_I_DEVICE)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3490) 		val64 = readq(&bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3491) 		val64 |= 0x0000800000000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3492) 		writeq(val64, &bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3493) 		val64 = 0x0411040400000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3494) 		writeq(val64, (void __iomem *)bar0 + 0x2700);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3495) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3496) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3497) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3498) 	 * Clear spurious ECC interrupts that would have occurred on
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3499) 	 * XFRAME II cards after reset.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3500) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3501) 	if (sp->device_type == XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3502) 		val64 = readq(&bar0->pcc_err_reg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3503) 		writeq(val64, &bar0->pcc_err_reg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3504) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3505) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3506) 	sp->device_enabled_once = false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3507) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3508) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3509) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3510)  *  s2io_set_swapper - to set the swapper controle on the card
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3511)  *  @sp : private member of the device structure,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3512)  *  pointer to the s2io_nic structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3513)  *  Description: Function to set the swapper control on the card
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3514)  *  correctly depending on the 'endianness' of the system.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3515)  *  Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3516)  *  SUCCESS on success and FAILURE on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3517)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3518) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3519) static int s2io_set_swapper(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3520) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3521) 	struct net_device *dev = sp->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3522) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3523) 	u64 val64, valt, valr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3524) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3525) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3526) 	 * Set proper endian settings and verify the same by reading
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3527) 	 * the PIF Feed-back register.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3528) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3529) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3530) 	val64 = readq(&bar0->pif_rd_swapper_fb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3531) 	if (val64 != 0x0123456789ABCDEFULL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3532) 		int i = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3533) 		static const u64 value[] = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3534) 			0xC30000C3C30000C3ULL,	/* FE=1, SE=1 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3535) 			0x8100008181000081ULL,	/* FE=1, SE=0 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3536) 			0x4200004242000042ULL,	/* FE=0, SE=1 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3537) 			0			/* FE=0, SE=0 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3538) 		};
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3539) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3540) 		while (i < 4) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3541) 			writeq(value[i], &bar0->swapper_ctrl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3542) 			val64 = readq(&bar0->pif_rd_swapper_fb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3543) 			if (val64 == 0x0123456789ABCDEFULL)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3544) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3545) 			i++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3546) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3547) 		if (i == 4) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3548) 			DBG_PRINT(ERR_DBG, "%s: Endian settings are wrong, "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3549) 				  "feedback read %llx\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3550) 				  dev->name, (unsigned long long)val64);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3551) 			return FAILURE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3552) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3553) 		valr = value[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3554) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3555) 		valr = readq(&bar0->swapper_ctrl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3556) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3557) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3558) 	valt = 0x0123456789ABCDEFULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3559) 	writeq(valt, &bar0->xmsi_address);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3560) 	val64 = readq(&bar0->xmsi_address);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3561) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3562) 	if (val64 != valt) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3563) 		int i = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3564) 		static const u64 value[] = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3565) 			0x00C3C30000C3C300ULL,	/* FE=1, SE=1 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3566) 			0x0081810000818100ULL,	/* FE=1, SE=0 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3567) 			0x0042420000424200ULL,	/* FE=0, SE=1 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3568) 			0			/* FE=0, SE=0 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3569) 		};
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3570) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3571) 		while (i < 4) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3572) 			writeq((value[i] | valr), &bar0->swapper_ctrl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3573) 			writeq(valt, &bar0->xmsi_address);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3574) 			val64 = readq(&bar0->xmsi_address);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3575) 			if (val64 == valt)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3576) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3577) 			i++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3578) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3579) 		if (i == 4) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3580) 			unsigned long long x = val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3581) 			DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3582) 				  "Write failed, Xmsi_addr reads:0x%llx\n", x);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3583) 			return FAILURE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3584) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3585) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3586) 	val64 = readq(&bar0->swapper_ctrl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3587) 	val64 &= 0xFFFF000000000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3588) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3589) #ifdef __BIG_ENDIAN
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3590) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3591) 	 * The device by default set to a big endian format, so a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3592) 	 * big endian driver need not set anything.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3593) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3594) 	val64 |= (SWAPPER_CTRL_TXP_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3595) 		  SWAPPER_CTRL_TXP_SE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3596) 		  SWAPPER_CTRL_TXD_R_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3597) 		  SWAPPER_CTRL_TXD_W_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3598) 		  SWAPPER_CTRL_TXF_R_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3599) 		  SWAPPER_CTRL_RXD_R_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3600) 		  SWAPPER_CTRL_RXD_W_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3601) 		  SWAPPER_CTRL_RXF_W_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3602) 		  SWAPPER_CTRL_XMSI_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3603) 		  SWAPPER_CTRL_STATS_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3604) 		  SWAPPER_CTRL_STATS_SE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3605) 	if (sp->config.intr_type == INTA)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3606) 		val64 |= SWAPPER_CTRL_XMSI_SE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3607) 	writeq(val64, &bar0->swapper_ctrl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3608) #else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3609) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3610) 	 * Initially we enable all bits to make it accessible by the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3611) 	 * driver, then we selectively enable only those bits that
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3612) 	 * we want to set.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3613) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3614) 	val64 |= (SWAPPER_CTRL_TXP_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3615) 		  SWAPPER_CTRL_TXP_SE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3616) 		  SWAPPER_CTRL_TXD_R_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3617) 		  SWAPPER_CTRL_TXD_R_SE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3618) 		  SWAPPER_CTRL_TXD_W_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3619) 		  SWAPPER_CTRL_TXD_W_SE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3620) 		  SWAPPER_CTRL_TXF_R_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3621) 		  SWAPPER_CTRL_RXD_R_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3622) 		  SWAPPER_CTRL_RXD_R_SE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3623) 		  SWAPPER_CTRL_RXD_W_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3624) 		  SWAPPER_CTRL_RXD_W_SE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3625) 		  SWAPPER_CTRL_RXF_W_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3626) 		  SWAPPER_CTRL_XMSI_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3627) 		  SWAPPER_CTRL_STATS_FE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3628) 		  SWAPPER_CTRL_STATS_SE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3629) 	if (sp->config.intr_type == INTA)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3630) 		val64 |= SWAPPER_CTRL_XMSI_SE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3631) 	writeq(val64, &bar0->swapper_ctrl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3632) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3633) 	val64 = readq(&bar0->swapper_ctrl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3634) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3635) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3636) 	 * Verifying if endian settings are accurate by reading a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3637) 	 * feedback register.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3638) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3639) 	val64 = readq(&bar0->pif_rd_swapper_fb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3640) 	if (val64 != 0x0123456789ABCDEFULL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3641) 		/* Endian settings are incorrect, calls for another dekko. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3642) 		DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3643) 			  "%s: Endian settings are wrong, feedback read %llx\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3644) 			  dev->name, (unsigned long long)val64);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3645) 		return FAILURE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3646) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3647) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3648) 	return SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3649) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3650) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3651) static int wait_for_msix_trans(struct s2io_nic *nic, int i)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3652) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3653) 	struct XENA_dev_config __iomem *bar0 = nic->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3654) 	u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3655) 	int ret = 0, cnt = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3656) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3657) 	do {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3658) 		val64 = readq(&bar0->xmsi_access);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3659) 		if (!(val64 & s2BIT(15)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3660) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3661) 		mdelay(1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3662) 		cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3663) 	} while (cnt < 5);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3664) 	if (cnt == 5) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3665) 		DBG_PRINT(ERR_DBG, "XMSI # %d Access failed\n", i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3666) 		ret = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3667) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3668) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3669) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3670) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3671) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3672) static void restore_xmsi_data(struct s2io_nic *nic)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3673) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3674) 	struct XENA_dev_config __iomem *bar0 = nic->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3675) 	u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3676) 	int i, msix_index;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3677) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3678) 	if (nic->device_type == XFRAME_I_DEVICE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3679) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3680) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3681) 	for (i = 0; i < MAX_REQUESTED_MSI_X; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3682) 		msix_index = (i) ? ((i-1) * 8 + 1) : 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3683) 		writeq(nic->msix_info[i].addr, &bar0->xmsi_address);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3684) 		writeq(nic->msix_info[i].data, &bar0->xmsi_data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3685) 		val64 = (s2BIT(7) | s2BIT(15) | vBIT(msix_index, 26, 6));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3686) 		writeq(val64, &bar0->xmsi_access);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3687) 		if (wait_for_msix_trans(nic, msix_index))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3688) 			DBG_PRINT(ERR_DBG, "%s: index: %d failed\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3689) 				  __func__, msix_index);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3690) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3691) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3692) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3693) static void store_xmsi_data(struct s2io_nic *nic)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3694) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3695) 	struct XENA_dev_config __iomem *bar0 = nic->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3696) 	u64 val64, addr, data;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3697) 	int i, msix_index;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3698) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3699) 	if (nic->device_type == XFRAME_I_DEVICE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3700) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3701) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3702) 	/* Store and display */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3703) 	for (i = 0; i < MAX_REQUESTED_MSI_X; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3704) 		msix_index = (i) ? ((i-1) * 8 + 1) : 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3705) 		val64 = (s2BIT(15) | vBIT(msix_index, 26, 6));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3706) 		writeq(val64, &bar0->xmsi_access);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3707) 		if (wait_for_msix_trans(nic, msix_index)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3708) 			DBG_PRINT(ERR_DBG, "%s: index: %d failed\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3709) 				  __func__, msix_index);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3710) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3711) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3712) 		addr = readq(&bar0->xmsi_address);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3713) 		data = readq(&bar0->xmsi_data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3714) 		if (addr && data) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3715) 			nic->msix_info[i].addr = addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3716) 			nic->msix_info[i].data = data;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3717) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3718) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3719) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3720) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3721) static int s2io_enable_msi_x(struct s2io_nic *nic)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3722) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3723) 	struct XENA_dev_config __iomem *bar0 = nic->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3724) 	u64 rx_mat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3725) 	u16 msi_control; /* Temp variable */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3726) 	int ret, i, j, msix_indx = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3727) 	int size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3728) 	struct stat_block *stats = nic->mac_control.stats_info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3729) 	struct swStat *swstats = &stats->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3730) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3731) 	size = nic->num_entries * sizeof(struct msix_entry);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3732) 	nic->entries = kzalloc(size, GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3733) 	if (!nic->entries) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3734) 		DBG_PRINT(INFO_DBG, "%s: Memory allocation failed\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3735) 			  __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3736) 		swstats->mem_alloc_fail_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3737) 		return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3738) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3739) 	swstats->mem_allocated += size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3740) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3741) 	size = nic->num_entries * sizeof(struct s2io_msix_entry);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3742) 	nic->s2io_entries = kzalloc(size, GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3743) 	if (!nic->s2io_entries) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3744) 		DBG_PRINT(INFO_DBG, "%s: Memory allocation failed\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3745) 			  __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3746) 		swstats->mem_alloc_fail_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3747) 		kfree(nic->entries);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3748) 		swstats->mem_freed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3749) 			+= (nic->num_entries * sizeof(struct msix_entry));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3750) 		return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3751) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3752) 	swstats->mem_allocated += size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3753) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3754) 	nic->entries[0].entry = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3755) 	nic->s2io_entries[0].entry = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3756) 	nic->s2io_entries[0].in_use = MSIX_FLG;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3757) 	nic->s2io_entries[0].type = MSIX_ALARM_TYPE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3758) 	nic->s2io_entries[0].arg = &nic->mac_control.fifos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3759) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3760) 	for (i = 1; i < nic->num_entries; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3761) 		nic->entries[i].entry = ((i - 1) * 8) + 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3762) 		nic->s2io_entries[i].entry = ((i - 1) * 8) + 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3763) 		nic->s2io_entries[i].arg = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3764) 		nic->s2io_entries[i].in_use = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3765) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3766) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3767) 	rx_mat = readq(&bar0->rx_mat);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3768) 	for (j = 0; j < nic->config.rx_ring_num; j++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3769) 		rx_mat |= RX_MAT_SET(j, msix_indx);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3770) 		nic->s2io_entries[j+1].arg = &nic->mac_control.rings[j];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3771) 		nic->s2io_entries[j+1].type = MSIX_RING_TYPE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3772) 		nic->s2io_entries[j+1].in_use = MSIX_FLG;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3773) 		msix_indx += 8;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3774) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3775) 	writeq(rx_mat, &bar0->rx_mat);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3776) 	readq(&bar0->rx_mat);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3777) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3778) 	ret = pci_enable_msix_range(nic->pdev, nic->entries,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3779) 				    nic->num_entries, nic->num_entries);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3780) 	/* We fail init if error or we get less vectors than min required */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3781) 	if (ret < 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3782) 		DBG_PRINT(ERR_DBG, "Enabling MSI-X failed\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3783) 		kfree(nic->entries);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3784) 		swstats->mem_freed += nic->num_entries *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3785) 			sizeof(struct msix_entry);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3786) 		kfree(nic->s2io_entries);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3787) 		swstats->mem_freed += nic->num_entries *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3788) 			sizeof(struct s2io_msix_entry);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3789) 		nic->entries = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3790) 		nic->s2io_entries = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3791) 		return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3792) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3793) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3794) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3795) 	 * To enable MSI-X, MSI also needs to be enabled, due to a bug
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3796) 	 * in the herc NIC. (Temp change, needs to be removed later)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3797) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3798) 	pci_read_config_word(nic->pdev, 0x42, &msi_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3799) 	msi_control |= 0x1; /* Enable MSI */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3800) 	pci_write_config_word(nic->pdev, 0x42, msi_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3801) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3802) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3803) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3804) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3805) /* Handle software interrupt used during MSI(X) test */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3806) static irqreturn_t s2io_test_intr(int irq, void *dev_id)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3807) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3808) 	struct s2io_nic *sp = dev_id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3809) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3810) 	sp->msi_detected = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3811) 	wake_up(&sp->msi_wait);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3812) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3813) 	return IRQ_HANDLED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3814) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3815) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3816) /* Test interrupt path by forcing a a software IRQ */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3817) static int s2io_test_msi(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3818) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3819) 	struct pci_dev *pdev = sp->pdev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3820) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3821) 	int err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3822) 	u64 val64, saved64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3823) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3824) 	err = request_irq(sp->entries[1].vector, s2io_test_intr, 0,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3825) 			  sp->name, sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3826) 	if (err) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3827) 		DBG_PRINT(ERR_DBG, "%s: PCI %s: cannot assign irq %d\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3828) 			  sp->dev->name, pci_name(pdev), pdev->irq);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3829) 		return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3830) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3831) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3832) 	init_waitqueue_head(&sp->msi_wait);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3833) 	sp->msi_detected = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3834) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3835) 	saved64 = val64 = readq(&bar0->scheduled_int_ctrl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3836) 	val64 |= SCHED_INT_CTRL_ONE_SHOT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3837) 	val64 |= SCHED_INT_CTRL_TIMER_EN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3838) 	val64 |= SCHED_INT_CTRL_INT2MSI(1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3839) 	writeq(val64, &bar0->scheduled_int_ctrl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3840) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3841) 	wait_event_timeout(sp->msi_wait, sp->msi_detected, HZ/10);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3842) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3843) 	if (!sp->msi_detected) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3844) 		/* MSI(X) test failed, go back to INTx mode */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3845) 		DBG_PRINT(ERR_DBG, "%s: PCI %s: No interrupt was generated "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3846) 			  "using MSI(X) during test\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3847) 			  sp->dev->name, pci_name(pdev));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3848) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3849) 		err = -EOPNOTSUPP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3850) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3851) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3852) 	free_irq(sp->entries[1].vector, sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3853) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3854) 	writeq(saved64, &bar0->scheduled_int_ctrl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3855) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3856) 	return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3857) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3858) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3859) static void remove_msix_isr(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3860) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3861) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3862) 	u16 msi_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3863) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3864) 	for (i = 0; i < sp->num_entries; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3865) 		if (sp->s2io_entries[i].in_use == MSIX_REGISTERED_SUCCESS) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3866) 			int vector = sp->entries[i].vector;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3867) 			void *arg = sp->s2io_entries[i].arg;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3868) 			free_irq(vector, arg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3869) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3870) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3871) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3872) 	kfree(sp->entries);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3873) 	kfree(sp->s2io_entries);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3874) 	sp->entries = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3875) 	sp->s2io_entries = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3876) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3877) 	pci_read_config_word(sp->pdev, 0x42, &msi_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3878) 	msi_control &= 0xFFFE; /* Disable MSI */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3879) 	pci_write_config_word(sp->pdev, 0x42, msi_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3880) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3881) 	pci_disable_msix(sp->pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3882) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3883) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3884) static void remove_inta_isr(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3885) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3886) 	free_irq(sp->pdev->irq, sp->dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3887) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3888) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3889) /* ********************************************************* *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3890)  * Functions defined below concern the OS part of the driver *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3891)  * ********************************************************* */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3892) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3893) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3894)  *  s2io_open - open entry point of the driver
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3895)  *  @dev : pointer to the device structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3896)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3897)  *  This function is the open entry point of the driver. It mainly calls a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3898)  *  function to allocate Rx buffers and inserts them into the buffer
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3899)  *  descriptors and then enables the Rx part of the NIC.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3900)  *  Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3901)  *  0 on success and an appropriate (-)ve integer as defined in errno.h
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3902)  *   file on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3903)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3904) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3905) static int s2io_open(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3906) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3907) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3908) 	struct swStat *swstats = &sp->mac_control.stats_info->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3909) 	int err = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3910) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3911) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3912) 	 * Make sure you have link off by default every time
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3913) 	 * Nic is initialized
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3914) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3915) 	netif_carrier_off(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3916) 	sp->last_link_state = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3917) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3918) 	/* Initialize H/W and enable interrupts */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3919) 	err = s2io_card_up(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3920) 	if (err) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3921) 		DBG_PRINT(ERR_DBG, "%s: H/W initialization failed\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3922) 			  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3923) 		goto hw_init_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3924) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3925) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3926) 	if (do_s2io_prog_unicast(dev, dev->dev_addr) == FAILURE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3927) 		DBG_PRINT(ERR_DBG, "Set Mac Address Failed\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3928) 		s2io_card_down(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3929) 		err = -ENODEV;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3930) 		goto hw_init_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3931) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3932) 	s2io_start_all_tx_queue(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3933) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3934) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3935) hw_init_failed:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3936) 	if (sp->config.intr_type == MSI_X) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3937) 		if (sp->entries) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3938) 			kfree(sp->entries);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3939) 			swstats->mem_freed += sp->num_entries *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3940) 				sizeof(struct msix_entry);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3941) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3942) 		if (sp->s2io_entries) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3943) 			kfree(sp->s2io_entries);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3944) 			swstats->mem_freed += sp->num_entries *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3945) 				sizeof(struct s2io_msix_entry);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3946) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3947) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3948) 	return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3949) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3950) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3951) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3952)  *  s2io_close -close entry point of the driver
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3953)  *  @dev : device pointer.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3954)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3955)  *  This is the stop entry point of the driver. It needs to undo exactly
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3956)  *  whatever was done by the open entry point,thus it's usually referred to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3957)  *  as the close function.Among other things this function mainly stops the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3958)  *  Rx side of the NIC and frees all the Rx buffers in the Rx rings.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3959)  *  Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3960)  *  0 on success and an appropriate (-)ve integer as defined in errno.h
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3961)  *  file on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3962)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3963) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3964) static int s2io_close(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3965) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3966) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3967) 	struct config_param *config = &sp->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3968) 	u64 tmp64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3969) 	int offset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3970) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3971) 	/* Return if the device is already closed               *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3972) 	 *  Can happen when s2io_card_up failed in change_mtu    *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3973) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3974) 	if (!is_s2io_card_up(sp))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3975) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3976) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3977) 	s2io_stop_all_tx_queue(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3978) 	/* delete all populated mac entries */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3979) 	for (offset = 1; offset < config->max_mc_addr; offset++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3980) 		tmp64 = do_s2io_read_unicast_mc(sp, offset);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3981) 		if (tmp64 != S2IO_DISABLE_MAC_ENTRY)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3982) 			do_s2io_delete_unicast_mc(sp, tmp64);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3983) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3984) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3985) 	s2io_card_down(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3986) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3987) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3988) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3989) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3990) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3991)  *  s2io_xmit - Tx entry point of te driver
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3992)  *  @skb : the socket buffer containing the Tx data.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3993)  *  @dev : device pointer.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3994)  *  Description :
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3995)  *  This function is the Tx entry point of the driver. S2IO NIC supports
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3996)  *  certain protocol assist features on Tx side, namely  CSO, S/G, LSO.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3997)  *  NOTE: when device can't queue the pkt,just the trans_start variable will
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3998)  *  not be upadted.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3999)  *  Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4000)  *  0 on success & 1 on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4001)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4002) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4003) static netdev_tx_t s2io_xmit(struct sk_buff *skb, struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4004) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4005) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4006) 	u16 frg_cnt, frg_len, i, queue, queue_len, put_off, get_off;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4007) 	register u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4008) 	struct TxD *txdp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4009) 	struct TxFIFO_element __iomem *tx_fifo;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4010) 	unsigned long flags = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4011) 	u16 vlan_tag = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4012) 	struct fifo_info *fifo = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4013) 	int offload_type;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4014) 	int enable_per_list_interrupt = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4015) 	struct config_param *config = &sp->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4016) 	struct mac_info *mac_control = &sp->mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4017) 	struct stat_block *stats = mac_control->stats_info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4018) 	struct swStat *swstats = &stats->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4019) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4020) 	DBG_PRINT(TX_DBG, "%s: In Neterion Tx routine\n", dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4021) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4022) 	if (unlikely(skb->len <= 0)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4023) 		DBG_PRINT(TX_DBG, "%s: Buffer has no data..\n", dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4024) 		dev_kfree_skb_any(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4025) 		return NETDEV_TX_OK;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4026) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4027) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4028) 	if (!is_s2io_card_up(sp)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4029) 		DBG_PRINT(TX_DBG, "%s: Card going down for reset\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4030) 			  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4031) 		dev_kfree_skb_any(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4032) 		return NETDEV_TX_OK;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4033) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4034) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4035) 	queue = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4036) 	if (skb_vlan_tag_present(skb))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4037) 		vlan_tag = skb_vlan_tag_get(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4038) 	if (sp->config.tx_steering_type == TX_DEFAULT_STEERING) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4039) 		if (skb->protocol == htons(ETH_P_IP)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4040) 			struct iphdr *ip;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4041) 			struct tcphdr *th;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4042) 			ip = ip_hdr(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4043) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4044) 			if (!ip_is_fragment(ip)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4045) 				th = (struct tcphdr *)(((unsigned char *)ip) +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4046) 						       ip->ihl*4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4047) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4048) 				if (ip->protocol == IPPROTO_TCP) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4049) 					queue_len = sp->total_tcp_fifos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4050) 					queue = (ntohs(th->source) +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4051) 						 ntohs(th->dest)) &
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4052) 						sp->fifo_selector[queue_len - 1];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4053) 					if (queue >= queue_len)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4054) 						queue = queue_len - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4055) 				} else if (ip->protocol == IPPROTO_UDP) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4056) 					queue_len = sp->total_udp_fifos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4057) 					queue = (ntohs(th->source) +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4058) 						 ntohs(th->dest)) &
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4059) 						sp->fifo_selector[queue_len - 1];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4060) 					if (queue >= queue_len)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4061) 						queue = queue_len - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4062) 					queue += sp->udp_fifo_idx;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4063) 					if (skb->len > 1024)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4064) 						enable_per_list_interrupt = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4065) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4066) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4067) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4068) 	} else if (sp->config.tx_steering_type == TX_PRIORITY_STEERING)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4069) 		/* get fifo number based on skb->priority value */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4070) 		queue = config->fifo_mapping
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4071) 			[skb->priority & (MAX_TX_FIFOS - 1)];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4072) 	fifo = &mac_control->fifos[queue];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4073) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4074) 	spin_lock_irqsave(&fifo->tx_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4075) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4076) 	if (sp->config.multiq) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4077) 		if (__netif_subqueue_stopped(dev, fifo->fifo_no)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4078) 			spin_unlock_irqrestore(&fifo->tx_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4079) 			return NETDEV_TX_BUSY;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4080) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4081) 	} else if (unlikely(fifo->queue_state == FIFO_QUEUE_STOP)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4082) 		if (netif_queue_stopped(dev)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4083) 			spin_unlock_irqrestore(&fifo->tx_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4084) 			return NETDEV_TX_BUSY;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4085) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4086) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4087) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4088) 	put_off = (u16)fifo->tx_curr_put_info.offset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4089) 	get_off = (u16)fifo->tx_curr_get_info.offset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4090) 	txdp = fifo->list_info[put_off].list_virt_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4091) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4092) 	queue_len = fifo->tx_curr_put_info.fifo_len + 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4093) 	/* Avoid "put" pointer going beyond "get" pointer */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4094) 	if (txdp->Host_Control ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4095) 	    ((put_off+1) == queue_len ? 0 : (put_off+1)) == get_off) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4096) 		DBG_PRINT(TX_DBG, "Error in xmit, No free TXDs.\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4097) 		s2io_stop_tx_queue(sp, fifo->fifo_no);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4098) 		dev_kfree_skb_any(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4099) 		spin_unlock_irqrestore(&fifo->tx_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4100) 		return NETDEV_TX_OK;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4101) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4102) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4103) 	offload_type = s2io_offload_type(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4104) 	if (offload_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4105) 		txdp->Control_1 |= TXD_TCP_LSO_EN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4106) 		txdp->Control_1 |= TXD_TCP_LSO_MSS(s2io_tcp_mss(skb));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4107) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4108) 	if (skb->ip_summed == CHECKSUM_PARTIAL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4109) 		txdp->Control_2 |= (TXD_TX_CKO_IPV4_EN |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4110) 				    TXD_TX_CKO_TCP_EN |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4111) 				    TXD_TX_CKO_UDP_EN);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4112) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4113) 	txdp->Control_1 |= TXD_GATHER_CODE_FIRST;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4114) 	txdp->Control_1 |= TXD_LIST_OWN_XENA;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4115) 	txdp->Control_2 |= TXD_INT_NUMBER(fifo->fifo_no);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4116) 	if (enable_per_list_interrupt)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4117) 		if (put_off & (queue_len >> 5))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4118) 			txdp->Control_2 |= TXD_INT_TYPE_PER_LIST;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4119) 	if (vlan_tag) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4120) 		txdp->Control_2 |= TXD_VLAN_ENABLE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4121) 		txdp->Control_2 |= TXD_VLAN_TAG(vlan_tag);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4122) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4123) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4124) 	frg_len = skb_headlen(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4125) 	txdp->Buffer_Pointer = dma_map_single(&sp->pdev->dev, skb->data,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4126) 					      frg_len, DMA_TO_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4127) 	if (dma_mapping_error(&sp->pdev->dev, txdp->Buffer_Pointer))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4128) 		goto pci_map_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4129) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4130) 	txdp->Host_Control = (unsigned long)skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4131) 	txdp->Control_1 |= TXD_BUFFER0_SIZE(frg_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4132) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4133) 	frg_cnt = skb_shinfo(skb)->nr_frags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4134) 	/* For fragmented SKB. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4135) 	for (i = 0; i < frg_cnt; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4136) 		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4137) 		/* A '0' length fragment will be ignored */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4138) 		if (!skb_frag_size(frag))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4139) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4140) 		txdp++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4141) 		txdp->Buffer_Pointer = (u64)skb_frag_dma_map(&sp->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4142) 							     frag, 0,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4143) 							     skb_frag_size(frag),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4144) 							     DMA_TO_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4145) 		txdp->Control_1 = TXD_BUFFER0_SIZE(skb_frag_size(frag));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4146) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4147) 	txdp->Control_1 |= TXD_GATHER_CODE_LAST;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4148) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4149) 	tx_fifo = mac_control->tx_FIFO_start[queue];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4150) 	val64 = fifo->list_info[put_off].list_phy_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4151) 	writeq(val64, &tx_fifo->TxDL_Pointer);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4152) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4153) 	val64 = (TX_FIFO_LAST_TXD_NUM(frg_cnt) | TX_FIFO_FIRST_LIST |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4154) 		 TX_FIFO_LAST_LIST);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4155) 	if (offload_type)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4156) 		val64 |= TX_FIFO_SPECIAL_FUNC;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4157) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4158) 	writeq(val64, &tx_fifo->List_Control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4159) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4160) 	put_off++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4161) 	if (put_off == fifo->tx_curr_put_info.fifo_len + 1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4162) 		put_off = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4163) 	fifo->tx_curr_put_info.offset = put_off;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4164) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4165) 	/* Avoid "put" pointer going beyond "get" pointer */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4166) 	if (((put_off+1) == queue_len ? 0 : (put_off+1)) == get_off) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4167) 		swstats->fifo_full_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4168) 		DBG_PRINT(TX_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4169) 			  "No free TxDs for xmit, Put: 0x%x Get:0x%x\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4170) 			  put_off, get_off);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4171) 		s2io_stop_tx_queue(sp, fifo->fifo_no);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4172) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4173) 	swstats->mem_allocated += skb->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4174) 	spin_unlock_irqrestore(&fifo->tx_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4175) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4176) 	if (sp->config.intr_type == MSI_X)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4177) 		tx_intr_handler(fifo);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4178) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4179) 	return NETDEV_TX_OK;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4180) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4181) pci_map_failed:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4182) 	swstats->pci_map_fail_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4183) 	s2io_stop_tx_queue(sp, fifo->fifo_no);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4184) 	swstats->mem_freed += skb->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4185) 	dev_kfree_skb_any(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4186) 	spin_unlock_irqrestore(&fifo->tx_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4187) 	return NETDEV_TX_OK;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4188) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4189) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4190) static void
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4191) s2io_alarm_handle(struct timer_list *t)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4192) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4193) 	struct s2io_nic *sp = from_timer(sp, t, alarm_timer);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4194) 	struct net_device *dev = sp->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4195) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4196) 	s2io_handle_errors(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4197) 	mod_timer(&sp->alarm_timer, jiffies + HZ / 2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4198) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4199) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4200) static irqreturn_t s2io_msix_ring_handle(int irq, void *dev_id)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4201) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4202) 	struct ring_info *ring = (struct ring_info *)dev_id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4203) 	struct s2io_nic *sp = ring->nic;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4204) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4205) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4206) 	if (unlikely(!is_s2io_card_up(sp)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4207) 		return IRQ_HANDLED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4208) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4209) 	if (sp->config.napi) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4210) 		u8 __iomem *addr = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4211) 		u8 val8 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4212) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4213) 		addr = (u8 __iomem *)&bar0->xmsi_mask_reg;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4214) 		addr += (7 - ring->ring_no);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4215) 		val8 = (ring->ring_no == 0) ? 0x7f : 0xff;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4216) 		writeb(val8, addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4217) 		val8 = readb(addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4218) 		napi_schedule(&ring->napi);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4219) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4220) 		rx_intr_handler(ring, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4221) 		s2io_chk_rx_buffers(sp, ring);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4222) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4223) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4224) 	return IRQ_HANDLED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4225) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4226) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4227) static irqreturn_t s2io_msix_fifo_handle(int irq, void *dev_id)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4228) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4229) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4230) 	struct fifo_info *fifos = (struct fifo_info *)dev_id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4231) 	struct s2io_nic *sp = fifos->nic;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4232) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4233) 	struct config_param *config  = &sp->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4234) 	u64 reason;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4235) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4236) 	if (unlikely(!is_s2io_card_up(sp)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4237) 		return IRQ_NONE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4238) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4239) 	reason = readq(&bar0->general_int_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4240) 	if (unlikely(reason == S2IO_MINUS_ONE))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4241) 		/* Nothing much can be done. Get out */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4242) 		return IRQ_HANDLED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4243) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4244) 	if (reason & (GEN_INTR_TXPIC | GEN_INTR_TXTRAFFIC)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4245) 		writeq(S2IO_MINUS_ONE, &bar0->general_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4246) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4247) 		if (reason & GEN_INTR_TXPIC)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4248) 			s2io_txpic_intr_handle(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4249) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4250) 		if (reason & GEN_INTR_TXTRAFFIC)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4251) 			writeq(S2IO_MINUS_ONE, &bar0->tx_traffic_int);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4252) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4253) 		for (i = 0; i < config->tx_fifo_num; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4254) 			tx_intr_handler(&fifos[i]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4255) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4256) 		writeq(sp->general_int_mask, &bar0->general_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4257) 		readl(&bar0->general_int_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4258) 		return IRQ_HANDLED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4259) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4260) 	/* The interrupt was not raised by us */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4261) 	return IRQ_NONE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4262) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4263) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4264) static void s2io_txpic_intr_handle(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4265) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4266) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4267) 	u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4268) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4269) 	val64 = readq(&bar0->pic_int_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4270) 	if (val64 & PIC_INT_GPIO) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4271) 		val64 = readq(&bar0->gpio_int_reg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4272) 		if ((val64 & GPIO_INT_REG_LINK_DOWN) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4273) 		    (val64 & GPIO_INT_REG_LINK_UP)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4274) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4275) 			 * This is unstable state so clear both up/down
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4276) 			 * interrupt and adapter to re-evaluate the link state.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4277) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4278) 			val64 |= GPIO_INT_REG_LINK_DOWN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4279) 			val64 |= GPIO_INT_REG_LINK_UP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4280) 			writeq(val64, &bar0->gpio_int_reg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4281) 			val64 = readq(&bar0->gpio_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4282) 			val64 &= ~(GPIO_INT_MASK_LINK_UP |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4283) 				   GPIO_INT_MASK_LINK_DOWN);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4284) 			writeq(val64, &bar0->gpio_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4285) 		} else if (val64 & GPIO_INT_REG_LINK_UP) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4286) 			val64 = readq(&bar0->adapter_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4287) 			/* Enable Adapter */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4288) 			val64 = readq(&bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4289) 			val64 |= ADAPTER_CNTL_EN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4290) 			writeq(val64, &bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4291) 			val64 |= ADAPTER_LED_ON;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4292) 			writeq(val64, &bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4293) 			if (!sp->device_enabled_once)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4294) 				sp->device_enabled_once = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4295) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4296) 			s2io_link(sp, LINK_UP);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4297) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4298) 			 * unmask link down interrupt and mask link-up
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4299) 			 * intr
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4300) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4301) 			val64 = readq(&bar0->gpio_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4302) 			val64 &= ~GPIO_INT_MASK_LINK_DOWN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4303) 			val64 |= GPIO_INT_MASK_LINK_UP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4304) 			writeq(val64, &bar0->gpio_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4305) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4306) 		} else if (val64 & GPIO_INT_REG_LINK_DOWN) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4307) 			val64 = readq(&bar0->adapter_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4308) 			s2io_link(sp, LINK_DOWN);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4309) 			/* Link is down so unmaks link up interrupt */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4310) 			val64 = readq(&bar0->gpio_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4311) 			val64 &= ~GPIO_INT_MASK_LINK_UP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4312) 			val64 |= GPIO_INT_MASK_LINK_DOWN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4313) 			writeq(val64, &bar0->gpio_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4314) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4315) 			/* turn off LED */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4316) 			val64 = readq(&bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4317) 			val64 = val64 & (~ADAPTER_LED_ON);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4318) 			writeq(val64, &bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4319) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4320) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4321) 	val64 = readq(&bar0->gpio_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4322) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4323) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4324) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4325)  *  do_s2io_chk_alarm_bit - Check for alarm and incrment the counter
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4326)  *  @value: alarm bits
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4327)  *  @addr: address value
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4328)  *  @cnt: counter variable
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4329)  *  Description: Check for alarm and increment the counter
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4330)  *  Return Value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4331)  *  1 - if alarm bit set
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4332)  *  0 - if alarm bit is not set
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4333)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4334) static int do_s2io_chk_alarm_bit(u64 value, void __iomem *addr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4335) 				 unsigned long long *cnt)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4336) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4337) 	u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4338) 	val64 = readq(addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4339) 	if (val64 & value) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4340) 		writeq(val64, addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4341) 		(*cnt)++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4342) 		return 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4343) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4344) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4345) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4346) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4347) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4348) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4349)  *  s2io_handle_errors - Xframe error indication handler
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4350)  *  @dev_id: opaque handle to dev
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4351)  *  Description: Handle alarms such as loss of link, single or
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4352)  *  double ECC errors, critical and serious errors.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4353)  *  Return Value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4354)  *  NONE
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4355)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4356) static void s2io_handle_errors(void *dev_id)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4357) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4358) 	struct net_device *dev = (struct net_device *)dev_id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4359) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4360) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4361) 	u64 temp64 = 0, val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4362) 	int i = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4363) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4364) 	struct swStat *sw_stat = &sp->mac_control.stats_info->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4365) 	struct xpakStat *stats = &sp->mac_control.stats_info->xpak_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4366) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4367) 	if (!is_s2io_card_up(sp))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4368) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4369) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4370) 	if (pci_channel_offline(sp->pdev))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4371) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4372) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4373) 	memset(&sw_stat->ring_full_cnt, 0,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4374) 	       sizeof(sw_stat->ring_full_cnt));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4375) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4376) 	/* Handling the XPAK counters update */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4377) 	if (stats->xpak_timer_count < 72000) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4378) 		/* waiting for an hour */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4379) 		stats->xpak_timer_count++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4380) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4381) 		s2io_updt_xpak_counter(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4382) 		/* reset the count to zero */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4383) 		stats->xpak_timer_count = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4384) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4385) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4386) 	/* Handling link status change error Intr */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4387) 	if (s2io_link_fault_indication(sp) == MAC_RMAC_ERR_TIMER) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4388) 		val64 = readq(&bar0->mac_rmac_err_reg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4389) 		writeq(val64, &bar0->mac_rmac_err_reg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4390) 		if (val64 & RMAC_LINK_STATE_CHANGE_INT)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4391) 			schedule_work(&sp->set_link_task);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4392) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4393) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4394) 	/* In case of a serious error, the device will be Reset. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4395) 	if (do_s2io_chk_alarm_bit(SERR_SOURCE_ANY, &bar0->serr_source,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4396) 				  &sw_stat->serious_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4397) 		goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4398) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4399) 	/* Check for data parity error */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4400) 	if (do_s2io_chk_alarm_bit(GPIO_INT_REG_DP_ERR_INT, &bar0->gpio_int_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4401) 				  &sw_stat->parity_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4402) 		goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4403) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4404) 	/* Check for ring full counter */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4405) 	if (sp->device_type == XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4406) 		val64 = readq(&bar0->ring_bump_counter1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4407) 		for (i = 0; i < 4; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4408) 			temp64 = (val64 & vBIT(0xFFFF, (i*16), 16));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4409) 			temp64 >>= 64 - ((i+1)*16);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4410) 			sw_stat->ring_full_cnt[i] += temp64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4411) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4412) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4413) 		val64 = readq(&bar0->ring_bump_counter2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4414) 		for (i = 0; i < 4; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4415) 			temp64 = (val64 & vBIT(0xFFFF, (i*16), 16));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4416) 			temp64 >>= 64 - ((i+1)*16);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4417) 			sw_stat->ring_full_cnt[i+4] += temp64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4418) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4419) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4420) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4421) 	val64 = readq(&bar0->txdma_int_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4422) 	/*check for pfc_err*/
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4423) 	if (val64 & TXDMA_PFC_INT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4424) 		if (do_s2io_chk_alarm_bit(PFC_ECC_DB_ERR | PFC_SM_ERR_ALARM |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4425) 					  PFC_MISC_0_ERR | PFC_MISC_1_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4426) 					  PFC_PCIX_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4427) 					  &bar0->pfc_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4428) 					  &sw_stat->pfc_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4429) 			goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4430) 		do_s2io_chk_alarm_bit(PFC_ECC_SG_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4431) 				      &bar0->pfc_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4432) 				      &sw_stat->pfc_err_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4433) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4434) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4435) 	/*check for tda_err*/
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4436) 	if (val64 & TXDMA_TDA_INT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4437) 		if (do_s2io_chk_alarm_bit(TDA_Fn_ECC_DB_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4438) 					  TDA_SM0_ERR_ALARM |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4439) 					  TDA_SM1_ERR_ALARM,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4440) 					  &bar0->tda_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4441) 					  &sw_stat->tda_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4442) 			goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4443) 		do_s2io_chk_alarm_bit(TDA_Fn_ECC_SG_ERR | TDA_PCIX_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4444) 				      &bar0->tda_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4445) 				      &sw_stat->tda_err_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4446) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4447) 	/*check for pcc_err*/
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4448) 	if (val64 & TXDMA_PCC_INT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4449) 		if (do_s2io_chk_alarm_bit(PCC_SM_ERR_ALARM | PCC_WR_ERR_ALARM |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4450) 					  PCC_N_SERR | PCC_6_COF_OV_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4451) 					  PCC_7_COF_OV_ERR | PCC_6_LSO_OV_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4452) 					  PCC_7_LSO_OV_ERR | PCC_FB_ECC_DB_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4453) 					  PCC_TXB_ECC_DB_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4454) 					  &bar0->pcc_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4455) 					  &sw_stat->pcc_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4456) 			goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4457) 		do_s2io_chk_alarm_bit(PCC_FB_ECC_SG_ERR | PCC_TXB_ECC_SG_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4458) 				      &bar0->pcc_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4459) 				      &sw_stat->pcc_err_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4460) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4461) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4462) 	/*check for tti_err*/
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4463) 	if (val64 & TXDMA_TTI_INT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4464) 		if (do_s2io_chk_alarm_bit(TTI_SM_ERR_ALARM,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4465) 					  &bar0->tti_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4466) 					  &sw_stat->tti_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4467) 			goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4468) 		do_s2io_chk_alarm_bit(TTI_ECC_SG_ERR | TTI_ECC_DB_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4469) 				      &bar0->tti_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4470) 				      &sw_stat->tti_err_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4471) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4472) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4473) 	/*check for lso_err*/
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4474) 	if (val64 & TXDMA_LSO_INT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4475) 		if (do_s2io_chk_alarm_bit(LSO6_ABORT | LSO7_ABORT |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4476) 					  LSO6_SM_ERR_ALARM | LSO7_SM_ERR_ALARM,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4477) 					  &bar0->lso_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4478) 					  &sw_stat->lso_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4479) 			goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4480) 		do_s2io_chk_alarm_bit(LSO6_SEND_OFLOW | LSO7_SEND_OFLOW,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4481) 				      &bar0->lso_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4482) 				      &sw_stat->lso_err_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4483) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4484) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4485) 	/*check for tpa_err*/
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4486) 	if (val64 & TXDMA_TPA_INT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4487) 		if (do_s2io_chk_alarm_bit(TPA_SM_ERR_ALARM,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4488) 					  &bar0->tpa_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4489) 					  &sw_stat->tpa_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4490) 			goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4491) 		do_s2io_chk_alarm_bit(TPA_TX_FRM_DROP,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4492) 				      &bar0->tpa_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4493) 				      &sw_stat->tpa_err_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4494) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4495) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4496) 	/*check for sm_err*/
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4497) 	if (val64 & TXDMA_SM_INT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4498) 		if (do_s2io_chk_alarm_bit(SM_SM_ERR_ALARM,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4499) 					  &bar0->sm_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4500) 					  &sw_stat->sm_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4501) 			goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4502) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4503) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4504) 	val64 = readq(&bar0->mac_int_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4505) 	if (val64 & MAC_INT_STATUS_TMAC_INT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4506) 		if (do_s2io_chk_alarm_bit(TMAC_TX_BUF_OVRN | TMAC_TX_SM_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4507) 					  &bar0->mac_tmac_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4508) 					  &sw_stat->mac_tmac_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4509) 			goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4510) 		do_s2io_chk_alarm_bit(TMAC_ECC_SG_ERR | TMAC_ECC_DB_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4511) 				      TMAC_DESC_ECC_SG_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4512) 				      TMAC_DESC_ECC_DB_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4513) 				      &bar0->mac_tmac_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4514) 				      &sw_stat->mac_tmac_err_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4515) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4516) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4517) 	val64 = readq(&bar0->xgxs_int_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4518) 	if (val64 & XGXS_INT_STATUS_TXGXS) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4519) 		if (do_s2io_chk_alarm_bit(TXGXS_ESTORE_UFLOW | TXGXS_TX_SM_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4520) 					  &bar0->xgxs_txgxs_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4521) 					  &sw_stat->xgxs_txgxs_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4522) 			goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4523) 		do_s2io_chk_alarm_bit(TXGXS_ECC_SG_ERR | TXGXS_ECC_DB_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4524) 				      &bar0->xgxs_txgxs_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4525) 				      &sw_stat->xgxs_txgxs_err_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4526) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4527) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4528) 	val64 = readq(&bar0->rxdma_int_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4529) 	if (val64 & RXDMA_INT_RC_INT_M) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4530) 		if (do_s2io_chk_alarm_bit(RC_PRCn_ECC_DB_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4531) 					  RC_FTC_ECC_DB_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4532) 					  RC_PRCn_SM_ERR_ALARM |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4533) 					  RC_FTC_SM_ERR_ALARM,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4534) 					  &bar0->rc_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4535) 					  &sw_stat->rc_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4536) 			goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4537) 		do_s2io_chk_alarm_bit(RC_PRCn_ECC_SG_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4538) 				      RC_FTC_ECC_SG_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4539) 				      RC_RDA_FAIL_WR_Rn, &bar0->rc_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4540) 				      &sw_stat->rc_err_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4541) 		if (do_s2io_chk_alarm_bit(PRC_PCI_AB_RD_Rn |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4542) 					  PRC_PCI_AB_WR_Rn |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4543) 					  PRC_PCI_AB_F_WR_Rn,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4544) 					  &bar0->prc_pcix_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4545) 					  &sw_stat->prc_pcix_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4546) 			goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4547) 		do_s2io_chk_alarm_bit(PRC_PCI_DP_RD_Rn |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4548) 				      PRC_PCI_DP_WR_Rn |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4549) 				      PRC_PCI_DP_F_WR_Rn,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4550) 				      &bar0->prc_pcix_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4551) 				      &sw_stat->prc_pcix_err_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4552) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4553) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4554) 	if (val64 & RXDMA_INT_RPA_INT_M) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4555) 		if (do_s2io_chk_alarm_bit(RPA_SM_ERR_ALARM | RPA_CREDIT_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4556) 					  &bar0->rpa_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4557) 					  &sw_stat->rpa_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4558) 			goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4559) 		do_s2io_chk_alarm_bit(RPA_ECC_SG_ERR | RPA_ECC_DB_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4560) 				      &bar0->rpa_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4561) 				      &sw_stat->rpa_err_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4562) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4563) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4564) 	if (val64 & RXDMA_INT_RDA_INT_M) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4565) 		if (do_s2io_chk_alarm_bit(RDA_RXDn_ECC_DB_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4566) 					  RDA_FRM_ECC_DB_N_AERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4567) 					  RDA_SM1_ERR_ALARM |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4568) 					  RDA_SM0_ERR_ALARM |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4569) 					  RDA_RXD_ECC_DB_SERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4570) 					  &bar0->rda_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4571) 					  &sw_stat->rda_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4572) 			goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4573) 		do_s2io_chk_alarm_bit(RDA_RXDn_ECC_SG_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4574) 				      RDA_FRM_ECC_SG_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4575) 				      RDA_MISC_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4576) 				      RDA_PCIX_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4577) 				      &bar0->rda_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4578) 				      &sw_stat->rda_err_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4579) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4580) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4581) 	if (val64 & RXDMA_INT_RTI_INT_M) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4582) 		if (do_s2io_chk_alarm_bit(RTI_SM_ERR_ALARM,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4583) 					  &bar0->rti_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4584) 					  &sw_stat->rti_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4585) 			goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4586) 		do_s2io_chk_alarm_bit(RTI_ECC_SG_ERR | RTI_ECC_DB_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4587) 				      &bar0->rti_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4588) 				      &sw_stat->rti_err_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4589) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4590) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4591) 	val64 = readq(&bar0->mac_int_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4592) 	if (val64 & MAC_INT_STATUS_RMAC_INT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4593) 		if (do_s2io_chk_alarm_bit(RMAC_RX_BUFF_OVRN | RMAC_RX_SM_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4594) 					  &bar0->mac_rmac_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4595) 					  &sw_stat->mac_rmac_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4596) 			goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4597) 		do_s2io_chk_alarm_bit(RMAC_UNUSED_INT |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4598) 				      RMAC_SINGLE_ECC_ERR |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4599) 				      RMAC_DOUBLE_ECC_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4600) 				      &bar0->mac_rmac_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4601) 				      &sw_stat->mac_rmac_err_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4602) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4603) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4604) 	val64 = readq(&bar0->xgxs_int_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4605) 	if (val64 & XGXS_INT_STATUS_RXGXS) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4606) 		if (do_s2io_chk_alarm_bit(RXGXS_ESTORE_OFLOW | RXGXS_RX_SM_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4607) 					  &bar0->xgxs_rxgxs_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4608) 					  &sw_stat->xgxs_rxgxs_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4609) 			goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4610) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4611) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4612) 	val64 = readq(&bar0->mc_int_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4613) 	if (val64 & MC_INT_STATUS_MC_INT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4614) 		if (do_s2io_chk_alarm_bit(MC_ERR_REG_SM_ERR,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4615) 					  &bar0->mc_err_reg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4616) 					  &sw_stat->mc_err_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4617) 			goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4618) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4619) 		/* Handling Ecc errors */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4620) 		if (val64 & (MC_ERR_REG_ECC_ALL_SNG | MC_ERR_REG_ECC_ALL_DBL)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4621) 			writeq(val64, &bar0->mc_err_reg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4622) 			if (val64 & MC_ERR_REG_ECC_ALL_DBL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4623) 				sw_stat->double_ecc_errs++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4624) 				if (sp->device_type != XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4625) 					/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4626) 					 * Reset XframeI only if critical error
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4627) 					 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4628) 					if (val64 &
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4629) 					    (MC_ERR_REG_MIRI_ECC_DB_ERR_0 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4630) 					     MC_ERR_REG_MIRI_ECC_DB_ERR_1))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4631) 						goto reset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4632) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4633) 			} else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4634) 				sw_stat->single_ecc_errs++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4635) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4636) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4637) 	return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4638) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4639) reset:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4640) 	s2io_stop_all_tx_queue(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4641) 	schedule_work(&sp->rst_timer_task);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4642) 	sw_stat->soft_reset_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4643) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4644) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4645) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4646)  *  s2io_isr - ISR handler of the device .
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4647)  *  @irq: the irq of the device.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4648)  *  @dev_id: a void pointer to the dev structure of the NIC.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4649)  *  Description:  This function is the ISR handler of the device. It
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4650)  *  identifies the reason for the interrupt and calls the relevant
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4651)  *  service routines. As a contongency measure, this ISR allocates the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4652)  *  recv buffers, if their numbers are below the panic value which is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4653)  *  presently set to 25% of the original number of rcv buffers allocated.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4654)  *  Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4655)  *   IRQ_HANDLED: will be returned if IRQ was handled by this routine
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4656)  *   IRQ_NONE: will be returned if interrupt is not from our device
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4657)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4658) static irqreturn_t s2io_isr(int irq, void *dev_id)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4659) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4660) 	struct net_device *dev = (struct net_device *)dev_id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4661) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4662) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4663) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4664) 	u64 reason = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4665) 	struct mac_info *mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4666) 	struct config_param *config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4667) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4668) 	/* Pretend we handled any irq's from a disconnected card */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4669) 	if (pci_channel_offline(sp->pdev))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4670) 		return IRQ_NONE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4671) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4672) 	if (!is_s2io_card_up(sp))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4673) 		return IRQ_NONE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4674) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4675) 	config = &sp->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4676) 	mac_control = &sp->mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4677) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4678) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4679) 	 * Identify the cause for interrupt and call the appropriate
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4680) 	 * interrupt handler. Causes for the interrupt could be;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4681) 	 * 1. Rx of packet.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4682) 	 * 2. Tx complete.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4683) 	 * 3. Link down.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4684) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4685) 	reason = readq(&bar0->general_int_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4686) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4687) 	if (unlikely(reason == S2IO_MINUS_ONE))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4688) 		return IRQ_HANDLED;	/* Nothing much can be done. Get out */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4689) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4690) 	if (reason &
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4691) 	    (GEN_INTR_RXTRAFFIC | GEN_INTR_TXTRAFFIC | GEN_INTR_TXPIC)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4692) 		writeq(S2IO_MINUS_ONE, &bar0->general_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4693) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4694) 		if (config->napi) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4695) 			if (reason & GEN_INTR_RXTRAFFIC) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4696) 				napi_schedule(&sp->napi);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4697) 				writeq(S2IO_MINUS_ONE, &bar0->rx_traffic_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4698) 				writeq(S2IO_MINUS_ONE, &bar0->rx_traffic_int);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4699) 				readl(&bar0->rx_traffic_int);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4700) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4701) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4702) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4703) 			 * rx_traffic_int reg is an R1 register, writing all 1's
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4704) 			 * will ensure that the actual interrupt causing bit
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4705) 			 * get's cleared and hence a read can be avoided.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4706) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4707) 			if (reason & GEN_INTR_RXTRAFFIC)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4708) 				writeq(S2IO_MINUS_ONE, &bar0->rx_traffic_int);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4709) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4710) 			for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4711) 				struct ring_info *ring = &mac_control->rings[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4712) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4713) 				rx_intr_handler(ring, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4714) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4715) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4716) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4717) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4718) 		 * tx_traffic_int reg is an R1 register, writing all 1's
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4719) 		 * will ensure that the actual interrupt causing bit get's
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4720) 		 * cleared and hence a read can be avoided.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4721) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4722) 		if (reason & GEN_INTR_TXTRAFFIC)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4723) 			writeq(S2IO_MINUS_ONE, &bar0->tx_traffic_int);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4724) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4725) 		for (i = 0; i < config->tx_fifo_num; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4726) 			tx_intr_handler(&mac_control->fifos[i]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4727) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4728) 		if (reason & GEN_INTR_TXPIC)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4729) 			s2io_txpic_intr_handle(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4730) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4731) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4732) 		 * Reallocate the buffers from the interrupt handler itself.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4733) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4734) 		if (!config->napi) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4735) 			for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4736) 				struct ring_info *ring = &mac_control->rings[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4737) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4738) 				s2io_chk_rx_buffers(sp, ring);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4739) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4740) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4741) 		writeq(sp->general_int_mask, &bar0->general_int_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4742) 		readl(&bar0->general_int_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4743) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4744) 		return IRQ_HANDLED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4745) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4746) 	} else if (!reason) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4747) 		/* The interrupt was not raised by us */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4748) 		return IRQ_NONE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4749) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4750) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4751) 	return IRQ_HANDLED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4752) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4753) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4754) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4755)  * s2io_updt_stats -
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4756)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4757) static void s2io_updt_stats(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4758) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4759) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4760) 	u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4761) 	int cnt = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4762) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4763) 	if (is_s2io_card_up(sp)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4764) 		/* Apprx 30us on a 133 MHz bus */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4765) 		val64 = SET_UPDT_CLICKS(10) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4766) 			STAT_CFG_ONE_SHOT_EN | STAT_CFG_STAT_EN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4767) 		writeq(val64, &bar0->stat_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4768) 		do {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4769) 			udelay(100);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4770) 			val64 = readq(&bar0->stat_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4771) 			if (!(val64 & s2BIT(0)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4772) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4773) 			cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4774) 			if (cnt == 5)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4775) 				break; /* Updt failed */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4776) 		} while (1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4777) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4778) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4779) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4780) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4781)  *  s2io_get_stats - Updates the device statistics structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4782)  *  @dev : pointer to the device structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4783)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4784)  *  This function updates the device statistics structure in the s2io_nic
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4785)  *  structure and returns a pointer to the same.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4786)  *  Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4787)  *  pointer to the updated net_device_stats structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4788)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4789) static struct net_device_stats *s2io_get_stats(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4790) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4791) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4792) 	struct mac_info *mac_control = &sp->mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4793) 	struct stat_block *stats = mac_control->stats_info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4794) 	u64 delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4795) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4796) 	/* Configure Stats for immediate updt */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4797) 	s2io_updt_stats(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4798) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4799) 	/* A device reset will cause the on-adapter statistics to be zero'ed.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4800) 	 * This can be done while running by changing the MTU.  To prevent the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4801) 	 * system from having the stats zero'ed, the driver keeps a copy of the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4802) 	 * last update to the system (which is also zero'ed on reset).  This
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4803) 	 * enables the driver to accurately know the delta between the last
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4804) 	 * update and the current update.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4805) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4806) 	delta = ((u64) le32_to_cpu(stats->rmac_vld_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4807) 		le32_to_cpu(stats->rmac_vld_frms)) - sp->stats.rx_packets;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4808) 	sp->stats.rx_packets += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4809) 	dev->stats.rx_packets += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4810) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4811) 	delta = ((u64) le32_to_cpu(stats->tmac_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4812) 		le32_to_cpu(stats->tmac_frms)) - sp->stats.tx_packets;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4813) 	sp->stats.tx_packets += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4814) 	dev->stats.tx_packets += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4815) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4816) 	delta = ((u64) le32_to_cpu(stats->rmac_data_octets_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4817) 		le32_to_cpu(stats->rmac_data_octets)) - sp->stats.rx_bytes;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4818) 	sp->stats.rx_bytes += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4819) 	dev->stats.rx_bytes += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4820) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4821) 	delta = ((u64) le32_to_cpu(stats->tmac_data_octets_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4822) 		le32_to_cpu(stats->tmac_data_octets)) - sp->stats.tx_bytes;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4823) 	sp->stats.tx_bytes += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4824) 	dev->stats.tx_bytes += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4825) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4826) 	delta = le64_to_cpu(stats->rmac_drop_frms) - sp->stats.rx_errors;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4827) 	sp->stats.rx_errors += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4828) 	dev->stats.rx_errors += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4829) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4830) 	delta = ((u64) le32_to_cpu(stats->tmac_any_err_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4831) 		le32_to_cpu(stats->tmac_any_err_frms)) - sp->stats.tx_errors;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4832) 	sp->stats.tx_errors += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4833) 	dev->stats.tx_errors += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4834) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4835) 	delta = le64_to_cpu(stats->rmac_drop_frms) - sp->stats.rx_dropped;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4836) 	sp->stats.rx_dropped += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4837) 	dev->stats.rx_dropped += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4838) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4839) 	delta = le64_to_cpu(stats->tmac_drop_frms) - sp->stats.tx_dropped;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4840) 	sp->stats.tx_dropped += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4841) 	dev->stats.tx_dropped += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4842) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4843) 	/* The adapter MAC interprets pause frames as multicast packets, but
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4844) 	 * does not pass them up.  This erroneously increases the multicast
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4845) 	 * packet count and needs to be deducted when the multicast frame count
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4846) 	 * is queried.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4847) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4848) 	delta = (u64) le32_to_cpu(stats->rmac_vld_mcst_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4849) 		le32_to_cpu(stats->rmac_vld_mcst_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4850) 	delta -= le64_to_cpu(stats->rmac_pause_ctrl_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4851) 	delta -= sp->stats.multicast;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4852) 	sp->stats.multicast += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4853) 	dev->stats.multicast += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4854) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4855) 	delta = ((u64) le32_to_cpu(stats->rmac_usized_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4856) 		le32_to_cpu(stats->rmac_usized_frms)) +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4857) 		le64_to_cpu(stats->rmac_long_frms) - sp->stats.rx_length_errors;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4858) 	sp->stats.rx_length_errors += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4859) 	dev->stats.rx_length_errors += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4860) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4861) 	delta = le64_to_cpu(stats->rmac_fcs_err_frms) - sp->stats.rx_crc_errors;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4862) 	sp->stats.rx_crc_errors += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4863) 	dev->stats.rx_crc_errors += delta;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4864) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4865) 	return &dev->stats;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4866) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4867) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4868) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4869)  *  s2io_set_multicast - entry point for multicast address enable/disable.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4870)  *  @dev : pointer to the device structure
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4871)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4872)  *  This function is a driver entry point which gets called by the kernel
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4873)  *  whenever multicast addresses must be enabled/disabled. This also gets
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4874)  *  called to set/reset promiscuous mode. Depending on the deivce flag, we
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4875)  *  determine, if multicast address must be enabled or if promiscuous mode
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4876)  *  is to be disabled etc.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4877)  *  Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4878)  *  void.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4879)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4880) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4881) static void s2io_set_multicast(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4882) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4883) 	int i, j, prev_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4884) 	struct netdev_hw_addr *ha;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4885) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4886) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4887) 	u64 val64 = 0, multi_mac = 0x010203040506ULL, mask =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4888) 		0xfeffffffffffULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4889) 	u64 dis_addr = S2IO_DISABLE_MAC_ENTRY, mac_addr = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4890) 	void __iomem *add;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4891) 	struct config_param *config = &sp->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4892) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4893) 	if ((dev->flags & IFF_ALLMULTI) && (!sp->m_cast_flg)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4894) 		/*  Enable all Multicast addresses */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4895) 		writeq(RMAC_ADDR_DATA0_MEM_ADDR(multi_mac),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4896) 		       &bar0->rmac_addr_data0_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4897) 		writeq(RMAC_ADDR_DATA1_MEM_MASK(mask),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4898) 		       &bar0->rmac_addr_data1_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4899) 		val64 = RMAC_ADDR_CMD_MEM_WE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4900) 			RMAC_ADDR_CMD_MEM_STROBE_NEW_CMD |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4901) 			RMAC_ADDR_CMD_MEM_OFFSET(config->max_mc_addr - 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4902) 		writeq(val64, &bar0->rmac_addr_cmd_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4903) 		/* Wait till command completes */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4904) 		wait_for_cmd_complete(&bar0->rmac_addr_cmd_mem,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4905) 				      RMAC_ADDR_CMD_MEM_STROBE_CMD_EXECUTING,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4906) 				      S2IO_BIT_RESET);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4907) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4908) 		sp->m_cast_flg = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4909) 		sp->all_multi_pos = config->max_mc_addr - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4910) 	} else if ((dev->flags & IFF_ALLMULTI) && (sp->m_cast_flg)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4911) 		/*  Disable all Multicast addresses */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4912) 		writeq(RMAC_ADDR_DATA0_MEM_ADDR(dis_addr),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4913) 		       &bar0->rmac_addr_data0_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4914) 		writeq(RMAC_ADDR_DATA1_MEM_MASK(0x0),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4915) 		       &bar0->rmac_addr_data1_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4916) 		val64 = RMAC_ADDR_CMD_MEM_WE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4917) 			RMAC_ADDR_CMD_MEM_STROBE_NEW_CMD |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4918) 			RMAC_ADDR_CMD_MEM_OFFSET(sp->all_multi_pos);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4919) 		writeq(val64, &bar0->rmac_addr_cmd_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4920) 		/* Wait till command completes */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4921) 		wait_for_cmd_complete(&bar0->rmac_addr_cmd_mem,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4922) 				      RMAC_ADDR_CMD_MEM_STROBE_CMD_EXECUTING,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4923) 				      S2IO_BIT_RESET);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4924) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4925) 		sp->m_cast_flg = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4926) 		sp->all_multi_pos = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4927) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4928) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4929) 	if ((dev->flags & IFF_PROMISC) && (!sp->promisc_flg)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4930) 		/*  Put the NIC into promiscuous mode */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4931) 		add = &bar0->mac_cfg;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4932) 		val64 = readq(&bar0->mac_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4933) 		val64 |= MAC_CFG_RMAC_PROM_ENABLE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4934) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4935) 		writeq(RMAC_CFG_KEY(0x4C0D), &bar0->rmac_cfg_key);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4936) 		writel((u32)val64, add);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4937) 		writeq(RMAC_CFG_KEY(0x4C0D), &bar0->rmac_cfg_key);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4938) 		writel((u32) (val64 >> 32), (add + 4));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4939) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4940) 		if (vlan_tag_strip != 1) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4941) 			val64 = readq(&bar0->rx_pa_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4942) 			val64 &= ~RX_PA_CFG_STRIP_VLAN_TAG;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4943) 			writeq(val64, &bar0->rx_pa_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4944) 			sp->vlan_strip_flag = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4945) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4946) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4947) 		val64 = readq(&bar0->mac_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4948) 		sp->promisc_flg = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4949) 		DBG_PRINT(INFO_DBG, "%s: entered promiscuous mode\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4950) 			  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4951) 	} else if (!(dev->flags & IFF_PROMISC) && (sp->promisc_flg)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4952) 		/*  Remove the NIC from promiscuous mode */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4953) 		add = &bar0->mac_cfg;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4954) 		val64 = readq(&bar0->mac_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4955) 		val64 &= ~MAC_CFG_RMAC_PROM_ENABLE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4956) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4957) 		writeq(RMAC_CFG_KEY(0x4C0D), &bar0->rmac_cfg_key);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4958) 		writel((u32)val64, add);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4959) 		writeq(RMAC_CFG_KEY(0x4C0D), &bar0->rmac_cfg_key);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4960) 		writel((u32) (val64 >> 32), (add + 4));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4961) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4962) 		if (vlan_tag_strip != 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4963) 			val64 = readq(&bar0->rx_pa_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4964) 			val64 |= RX_PA_CFG_STRIP_VLAN_TAG;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4965) 			writeq(val64, &bar0->rx_pa_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4966) 			sp->vlan_strip_flag = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4967) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4968) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4969) 		val64 = readq(&bar0->mac_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4970) 		sp->promisc_flg = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4971) 		DBG_PRINT(INFO_DBG, "%s: left promiscuous mode\n", dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4972) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4973) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4974) 	/*  Update individual M_CAST address list */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4975) 	if ((!sp->m_cast_flg) && netdev_mc_count(dev)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4976) 		if (netdev_mc_count(dev) >
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4977) 		    (config->max_mc_addr - config->max_mac_addr)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4978) 			DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4979) 				  "%s: No more Rx filters can be added - "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4980) 				  "please enable ALL_MULTI instead\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4981) 				  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4982) 			return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4983) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4984) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4985) 		prev_cnt = sp->mc_addr_count;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4986) 		sp->mc_addr_count = netdev_mc_count(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4987) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4988) 		/* Clear out the previous list of Mc in the H/W. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4989) 		for (i = 0; i < prev_cnt; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4990) 			writeq(RMAC_ADDR_DATA0_MEM_ADDR(dis_addr),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4991) 			       &bar0->rmac_addr_data0_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4992) 			writeq(RMAC_ADDR_DATA1_MEM_MASK(0ULL),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4993) 			       &bar0->rmac_addr_data1_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4994) 			val64 = RMAC_ADDR_CMD_MEM_WE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4995) 				RMAC_ADDR_CMD_MEM_STROBE_NEW_CMD |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4996) 				RMAC_ADDR_CMD_MEM_OFFSET
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4997) 				(config->mc_start_offset + i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4998) 			writeq(val64, &bar0->rmac_addr_cmd_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 4999) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5000) 			/* Wait for command completes */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5001) 			if (wait_for_cmd_complete(&bar0->rmac_addr_cmd_mem,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5002) 						  RMAC_ADDR_CMD_MEM_STROBE_CMD_EXECUTING,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5003) 						  S2IO_BIT_RESET)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5004) 				DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5005) 					  "%s: Adding Multicasts failed\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5006) 					  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5007) 				return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5008) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5009) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5010) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5011) 		/* Create the new Rx filter list and update the same in H/W. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5012) 		i = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5013) 		netdev_for_each_mc_addr(ha, dev) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5014) 			mac_addr = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5015) 			for (j = 0; j < ETH_ALEN; j++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5016) 				mac_addr |= ha->addr[j];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5017) 				mac_addr <<= 8;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5018) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5019) 			mac_addr >>= 8;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5020) 			writeq(RMAC_ADDR_DATA0_MEM_ADDR(mac_addr),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5021) 			       &bar0->rmac_addr_data0_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5022) 			writeq(RMAC_ADDR_DATA1_MEM_MASK(0ULL),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5023) 			       &bar0->rmac_addr_data1_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5024) 			val64 = RMAC_ADDR_CMD_MEM_WE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5025) 				RMAC_ADDR_CMD_MEM_STROBE_NEW_CMD |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5026) 				RMAC_ADDR_CMD_MEM_OFFSET
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5027) 				(i + config->mc_start_offset);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5028) 			writeq(val64, &bar0->rmac_addr_cmd_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5029) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5030) 			/* Wait for command completes */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5031) 			if (wait_for_cmd_complete(&bar0->rmac_addr_cmd_mem,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5032) 						  RMAC_ADDR_CMD_MEM_STROBE_CMD_EXECUTING,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5033) 						  S2IO_BIT_RESET)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5034) 				DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5035) 					  "%s: Adding Multicasts failed\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5036) 					  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5037) 				return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5038) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5039) 			i++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5040) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5041) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5042) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5043) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5044) /* read from CAM unicast & multicast addresses and store it in
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5045)  * def_mac_addr structure
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5046)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5047) static void do_s2io_store_unicast_mc(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5048) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5049) 	int offset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5050) 	u64 mac_addr = 0x0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5051) 	struct config_param *config = &sp->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5052) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5053) 	/* store unicast & multicast mac addresses */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5054) 	for (offset = 0; offset < config->max_mc_addr; offset++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5055) 		mac_addr = do_s2io_read_unicast_mc(sp, offset);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5056) 		/* if read fails disable the entry */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5057) 		if (mac_addr == FAILURE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5058) 			mac_addr = S2IO_DISABLE_MAC_ENTRY;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5059) 		do_s2io_copy_mac_addr(sp, offset, mac_addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5060) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5061) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5062) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5063) /* restore unicast & multicast MAC to CAM from def_mac_addr structure */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5064) static void do_s2io_restore_unicast_mc(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5065) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5066) 	int offset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5067) 	struct config_param *config = &sp->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5068) 	/* restore unicast mac address */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5069) 	for (offset = 0; offset < config->max_mac_addr; offset++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5070) 		do_s2io_prog_unicast(sp->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5071) 				     sp->def_mac_addr[offset].mac_addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5072) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5073) 	/* restore multicast mac address */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5074) 	for (offset = config->mc_start_offset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5075) 	     offset < config->max_mc_addr; offset++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5076) 		do_s2io_add_mc(sp, sp->def_mac_addr[offset].mac_addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5077) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5078) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5079) /* add a multicast MAC address to CAM */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5080) static int do_s2io_add_mc(struct s2io_nic *sp, u8 *addr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5081) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5082) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5083) 	u64 mac_addr = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5084) 	struct config_param *config = &sp->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5085) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5086) 	for (i = 0; i < ETH_ALEN; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5087) 		mac_addr <<= 8;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5088) 		mac_addr |= addr[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5089) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5090) 	if ((0ULL == mac_addr) || (mac_addr == S2IO_DISABLE_MAC_ENTRY))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5091) 		return SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5092) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5093) 	/* check if the multicast mac already preset in CAM */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5094) 	for (i = config->mc_start_offset; i < config->max_mc_addr; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5095) 		u64 tmp64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5096) 		tmp64 = do_s2io_read_unicast_mc(sp, i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5097) 		if (tmp64 == S2IO_DISABLE_MAC_ENTRY) /* CAM entry is empty */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5098) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5099) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5100) 		if (tmp64 == mac_addr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5101) 			return SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5102) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5103) 	if (i == config->max_mc_addr) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5104) 		DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5105) 			  "CAM full no space left for multicast MAC\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5106) 		return FAILURE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5107) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5108) 	/* Update the internal structure with this new mac address */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5109) 	do_s2io_copy_mac_addr(sp, i, mac_addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5110) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5111) 	return do_s2io_add_mac(sp, mac_addr, i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5112) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5113) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5114) /* add MAC address to CAM */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5115) static int do_s2io_add_mac(struct s2io_nic *sp, u64 addr, int off)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5116) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5117) 	u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5118) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5119) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5120) 	writeq(RMAC_ADDR_DATA0_MEM_ADDR(addr),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5121) 	       &bar0->rmac_addr_data0_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5122) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5123) 	val64 =	RMAC_ADDR_CMD_MEM_WE | RMAC_ADDR_CMD_MEM_STROBE_NEW_CMD |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5124) 		RMAC_ADDR_CMD_MEM_OFFSET(off);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5125) 	writeq(val64, &bar0->rmac_addr_cmd_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5126) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5127) 	/* Wait till command completes */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5128) 	if (wait_for_cmd_complete(&bar0->rmac_addr_cmd_mem,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5129) 				  RMAC_ADDR_CMD_MEM_STROBE_CMD_EXECUTING,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5130) 				  S2IO_BIT_RESET)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5131) 		DBG_PRINT(INFO_DBG, "do_s2io_add_mac failed\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5132) 		return FAILURE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5133) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5134) 	return SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5135) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5136) /* deletes a specified unicast/multicast mac entry from CAM */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5137) static int do_s2io_delete_unicast_mc(struct s2io_nic *sp, u64 addr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5138) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5139) 	int offset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5140) 	u64 dis_addr = S2IO_DISABLE_MAC_ENTRY, tmp64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5141) 	struct config_param *config = &sp->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5142) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5143) 	for (offset = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5144) 	     offset < config->max_mc_addr; offset++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5145) 		tmp64 = do_s2io_read_unicast_mc(sp, offset);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5146) 		if (tmp64 == addr) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5147) 			/* disable the entry by writing  0xffffffffffffULL */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5148) 			if (do_s2io_add_mac(sp, dis_addr, offset) ==  FAILURE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5149) 				return FAILURE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5150) 			/* store the new mac list from CAM */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5151) 			do_s2io_store_unicast_mc(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5152) 			return SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5153) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5154) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5155) 	DBG_PRINT(ERR_DBG, "MAC address 0x%llx not found in CAM\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5156) 		  (unsigned long long)addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5157) 	return FAILURE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5158) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5159) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5160) /* read mac entries from CAM */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5161) static u64 do_s2io_read_unicast_mc(struct s2io_nic *sp, int offset)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5162) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5163) 	u64 tmp64, val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5164) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5165) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5166) 	/* read mac addr */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5167) 	val64 =	RMAC_ADDR_CMD_MEM_RD | RMAC_ADDR_CMD_MEM_STROBE_NEW_CMD |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5168) 		RMAC_ADDR_CMD_MEM_OFFSET(offset);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5169) 	writeq(val64, &bar0->rmac_addr_cmd_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5170) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5171) 	/* Wait till command completes */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5172) 	if (wait_for_cmd_complete(&bar0->rmac_addr_cmd_mem,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5173) 				  RMAC_ADDR_CMD_MEM_STROBE_CMD_EXECUTING,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5174) 				  S2IO_BIT_RESET)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5175) 		DBG_PRINT(INFO_DBG, "do_s2io_read_unicast_mc failed\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5176) 		return FAILURE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5177) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5178) 	tmp64 = readq(&bar0->rmac_addr_data0_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5179) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5180) 	return tmp64 >> 16;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5181) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5182) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5183) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5184)  * s2io_set_mac_addr - driver entry point
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5185)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5186) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5187) static int s2io_set_mac_addr(struct net_device *dev, void *p)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5188) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5189) 	struct sockaddr *addr = p;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5190) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5191) 	if (!is_valid_ether_addr(addr->sa_data))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5192) 		return -EADDRNOTAVAIL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5193) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5194) 	memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5195) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5196) 	/* store the MAC address in CAM */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5197) 	return do_s2io_prog_unicast(dev, dev->dev_addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5198) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5199) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5200)  *  do_s2io_prog_unicast - Programs the Xframe mac address
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5201)  *  @dev : pointer to the device structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5202)  *  @addr: a uchar pointer to the new mac address which is to be set.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5203)  *  Description : This procedure will program the Xframe to receive
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5204)  *  frames with new Mac Address
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5205)  *  Return value: SUCCESS on success and an appropriate (-)ve integer
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5206)  *  as defined in errno.h file on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5207)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5208) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5209) static int do_s2io_prog_unicast(struct net_device *dev, u8 *addr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5210) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5211) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5212) 	register u64 mac_addr = 0, perm_addr = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5213) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5214) 	u64 tmp64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5215) 	struct config_param *config = &sp->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5216) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5217) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5218) 	 * Set the new MAC address as the new unicast filter and reflect this
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5219) 	 * change on the device address registered with the OS. It will be
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5220) 	 * at offset 0.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5221) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5222) 	for (i = 0; i < ETH_ALEN; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5223) 		mac_addr <<= 8;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5224) 		mac_addr |= addr[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5225) 		perm_addr <<= 8;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5226) 		perm_addr |= sp->def_mac_addr[0].mac_addr[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5227) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5228) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5229) 	/* check if the dev_addr is different than perm_addr */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5230) 	if (mac_addr == perm_addr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5231) 		return SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5232) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5233) 	/* check if the mac already preset in CAM */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5234) 	for (i = 1; i < config->max_mac_addr; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5235) 		tmp64 = do_s2io_read_unicast_mc(sp, i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5236) 		if (tmp64 == S2IO_DISABLE_MAC_ENTRY) /* CAM entry is empty */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5237) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5238) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5239) 		if (tmp64 == mac_addr) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5240) 			DBG_PRINT(INFO_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5241) 				  "MAC addr:0x%llx already present in CAM\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5242) 				  (unsigned long long)mac_addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5243) 			return SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5244) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5245) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5246) 	if (i == config->max_mac_addr) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5247) 		DBG_PRINT(ERR_DBG, "CAM full no space left for Unicast MAC\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5248) 		return FAILURE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5249) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5250) 	/* Update the internal structure with this new mac address */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5251) 	do_s2io_copy_mac_addr(sp, i, mac_addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5252) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5253) 	return do_s2io_add_mac(sp, mac_addr, i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5254) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5255) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5256) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5257)  * s2io_ethtool_set_link_ksettings - Sets different link parameters.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5258)  * @dev : pointer to netdev
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5259)  * @cmd: pointer to the structure with parameters given by ethtool to set
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5260)  * link information.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5261)  * Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5262)  * The function sets different link parameters provided by the user onto
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5263)  * the NIC.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5264)  * Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5265)  * 0 on success.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5266)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5267) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5268) static int
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5269) s2io_ethtool_set_link_ksettings(struct net_device *dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5270) 				const struct ethtool_link_ksettings *cmd)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5271) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5272) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5273) 	if ((cmd->base.autoneg == AUTONEG_ENABLE) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5274) 	    (cmd->base.speed != SPEED_10000) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5275) 	    (cmd->base.duplex != DUPLEX_FULL))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5276) 		return -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5277) 	else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5278) 		s2io_close(sp->dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5279) 		s2io_open(sp->dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5280) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5281) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5282) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5283) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5284) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5285) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5286)  * s2io_ethtol_get_link_ksettings - Return link specific information.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5287)  * @dev: pointer to netdev
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5288)  * @cmd : pointer to the structure with parameters given by ethtool
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5289)  * to return link information.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5290)  * Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5291)  * Returns link specific information like speed, duplex etc.. to ethtool.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5292)  * Return value :
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5293)  * return 0 on success.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5294)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5295) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5296) static int
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5297) s2io_ethtool_get_link_ksettings(struct net_device *dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5298) 				struct ethtool_link_ksettings *cmd)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5299) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5300) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5301) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5302) 	ethtool_link_ksettings_zero_link_mode(cmd, supported);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5303) 	ethtool_link_ksettings_add_link_mode(cmd, supported, 10000baseT_Full);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5304) 	ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5305) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5306) 	ethtool_link_ksettings_zero_link_mode(cmd, advertising);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5307) 	ethtool_link_ksettings_add_link_mode(cmd, advertising, 10000baseT_Full);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5308) 	ethtool_link_ksettings_add_link_mode(cmd, advertising, FIBRE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5309) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5310) 	cmd->base.port = PORT_FIBRE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5311) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5312) 	if (netif_carrier_ok(sp->dev)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5313) 		cmd->base.speed = SPEED_10000;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5314) 		cmd->base.duplex = DUPLEX_FULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5315) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5316) 		cmd->base.speed = SPEED_UNKNOWN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5317) 		cmd->base.duplex = DUPLEX_UNKNOWN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5318) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5319) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5320) 	cmd->base.autoneg = AUTONEG_DISABLE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5321) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5322) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5323) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5324) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5325)  * s2io_ethtool_gdrvinfo - Returns driver specific information.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5326)  * @dev: pointer to netdev
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5327)  * @info : pointer to the structure with parameters given by ethtool to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5328)  * return driver information.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5329)  * Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5330)  * Returns driver specefic information like name, version etc.. to ethtool.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5331)  * Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5332)  *  void
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5333)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5334) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5335) static void s2io_ethtool_gdrvinfo(struct net_device *dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5336) 				  struct ethtool_drvinfo *info)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5337) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5338) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5339) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5340) 	strlcpy(info->driver, s2io_driver_name, sizeof(info->driver));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5341) 	strlcpy(info->version, s2io_driver_version, sizeof(info->version));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5342) 	strlcpy(info->bus_info, pci_name(sp->pdev), sizeof(info->bus_info));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5343) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5344) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5345) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5346)  *  s2io_ethtool_gregs - dumps the entire space of Xfame into the buffer.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5347)  *  @dev: pointer to netdev
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5348)  *  @regs : pointer to the structure with parameters given by ethtool for
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5349)  *          dumping the registers.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5350)  *  @space: The input argument into which all the registers are dumped.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5351)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5352)  *  Dumps the entire register space of xFrame NIC into the user given
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5353)  *  buffer area.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5354)  * Return value :
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5355)  * void .
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5356)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5357) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5358) static void s2io_ethtool_gregs(struct net_device *dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5359) 			       struct ethtool_regs *regs, void *space)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5360) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5361) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5362) 	u64 reg;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5363) 	u8 *reg_space = (u8 *)space;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5364) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5365) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5366) 	regs->len = XENA_REG_SPACE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5367) 	regs->version = sp->pdev->subsystem_device;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5368) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5369) 	for (i = 0; i < regs->len; i += 8) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5370) 		reg = readq(sp->bar0 + i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5371) 		memcpy((reg_space + i), &reg, 8);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5372) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5373) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5374) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5375) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5376)  *  s2io_set_led - control NIC led
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5377)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5378) static void s2io_set_led(struct s2io_nic *sp, bool on)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5379) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5380) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5381) 	u16 subid = sp->pdev->subsystem_device;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5382) 	u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5383) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5384) 	if ((sp->device_type == XFRAME_II_DEVICE) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5385) 	    ((subid & 0xFF) >= 0x07)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5386) 		val64 = readq(&bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5387) 		if (on)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5388) 			val64 |= GPIO_CTRL_GPIO_0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5389) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5390) 			val64 &= ~GPIO_CTRL_GPIO_0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5391) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5392) 		writeq(val64, &bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5393) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5394) 		val64 = readq(&bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5395) 		if (on)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5396) 			val64 |= ADAPTER_LED_ON;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5397) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5398) 			val64 &= ~ADAPTER_LED_ON;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5399) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5400) 		writeq(val64, &bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5401) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5402) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5403) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5404) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5405) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5406)  * s2io_ethtool_set_led - To physically identify the nic on the system.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5407)  * @dev : network device
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5408)  * @state: led setting
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5409)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5410)  * Description: Used to physically identify the NIC on the system.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5411)  * The Link LED will blink for a time specified by the user for
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5412)  * identification.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5413)  * NOTE: The Link has to be Up to be able to blink the LED. Hence
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5414)  * identification is possible only if it's link is up.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5415)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5416) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5417) static int s2io_ethtool_set_led(struct net_device *dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5418) 				enum ethtool_phys_id_state state)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5419) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5420) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5421) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5422) 	u16 subid = sp->pdev->subsystem_device;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5423) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5424) 	if ((sp->device_type == XFRAME_I_DEVICE) && ((subid & 0xFF) < 0x07)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5425) 		u64 val64 = readq(&bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5426) 		if (!(val64 & ADAPTER_CNTL_EN)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5427) 			pr_err("Adapter Link down, cannot blink LED\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5428) 			return -EAGAIN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5429) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5430) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5431) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5432) 	switch (state) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5433) 	case ETHTOOL_ID_ACTIVE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5434) 		sp->adapt_ctrl_org = readq(&bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5435) 		return 1;	/* cycle on/off once per second */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5436) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5437) 	case ETHTOOL_ID_ON:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5438) 		s2io_set_led(sp, true);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5439) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5440) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5441) 	case ETHTOOL_ID_OFF:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5442) 		s2io_set_led(sp, false);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5443) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5444) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5445) 	case ETHTOOL_ID_INACTIVE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5446) 		if (CARDS_WITH_FAULTY_LINK_INDICATORS(sp->device_type, subid))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5447) 			writeq(sp->adapt_ctrl_org, &bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5448) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5449) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5450) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5451) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5452) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5453) static void s2io_ethtool_gringparam(struct net_device *dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5454) 				    struct ethtool_ringparam *ering)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5455) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5456) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5457) 	int i, tx_desc_count = 0, rx_desc_count = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5458) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5459) 	if (sp->rxd_mode == RXD_MODE_1) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5460) 		ering->rx_max_pending = MAX_RX_DESC_1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5461) 		ering->rx_jumbo_max_pending = MAX_RX_DESC_1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5462) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5463) 		ering->rx_max_pending = MAX_RX_DESC_2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5464) 		ering->rx_jumbo_max_pending = MAX_RX_DESC_2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5465) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5466) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5467) 	ering->tx_max_pending = MAX_TX_DESC;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5468) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5469) 	for (i = 0; i < sp->config.rx_ring_num; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5470) 		rx_desc_count += sp->config.rx_cfg[i].num_rxd;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5471) 	ering->rx_pending = rx_desc_count;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5472) 	ering->rx_jumbo_pending = rx_desc_count;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5473) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5474) 	for (i = 0; i < sp->config.tx_fifo_num; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5475) 		tx_desc_count += sp->config.tx_cfg[i].fifo_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5476) 	ering->tx_pending = tx_desc_count;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5477) 	DBG_PRINT(INFO_DBG, "max txds: %d\n", sp->config.max_txds);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5478) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5479) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5480) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5481)  * s2io_ethtool_getpause_data -Pause frame frame generation and reception.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5482)  * @dev: pointer to netdev
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5483)  * @ep : pointer to the structure with pause parameters given by ethtool.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5484)  * Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5485)  * Returns the Pause frame generation and reception capability of the NIC.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5486)  * Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5487)  *  void
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5488)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5489) static void s2io_ethtool_getpause_data(struct net_device *dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5490) 				       struct ethtool_pauseparam *ep)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5491) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5492) 	u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5493) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5494) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5495) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5496) 	val64 = readq(&bar0->rmac_pause_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5497) 	if (val64 & RMAC_PAUSE_GEN_ENABLE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5498) 		ep->tx_pause = true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5499) 	if (val64 & RMAC_PAUSE_RX_ENABLE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5500) 		ep->rx_pause = true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5501) 	ep->autoneg = false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5502) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5503) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5504) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5505)  * s2io_ethtool_setpause_data -  set/reset pause frame generation.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5506)  * @dev: pointer to netdev
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5507)  * @ep : pointer to the structure with pause parameters given by ethtool.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5508)  * Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5509)  * It can be used to set or reset Pause frame generation or reception
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5510)  * support of the NIC.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5511)  * Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5512)  * int, returns 0 on Success
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5513)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5514) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5515) static int s2io_ethtool_setpause_data(struct net_device *dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5516) 				      struct ethtool_pauseparam *ep)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5517) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5518) 	u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5519) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5520) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5521) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5522) 	val64 = readq(&bar0->rmac_pause_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5523) 	if (ep->tx_pause)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5524) 		val64 |= RMAC_PAUSE_GEN_ENABLE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5525) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5526) 		val64 &= ~RMAC_PAUSE_GEN_ENABLE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5527) 	if (ep->rx_pause)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5528) 		val64 |= RMAC_PAUSE_RX_ENABLE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5529) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5530) 		val64 &= ~RMAC_PAUSE_RX_ENABLE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5531) 	writeq(val64, &bar0->rmac_pause_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5532) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5533) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5534) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5535) #define S2IO_DEV_ID		5
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5536) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5537)  * read_eeprom - reads 4 bytes of data from user given offset.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5538)  * @sp : private member of the device structure, which is a pointer to the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5539)  *      s2io_nic structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5540)  * @off : offset at which the data must be written
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5541)  * @data : Its an output parameter where the data read at the given
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5542)  *	offset is stored.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5543)  * Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5544)  * Will read 4 bytes of data from the user given offset and return the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5545)  * read data.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5546)  * NOTE: Will allow to read only part of the EEPROM visible through the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5547)  *   I2C bus.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5548)  * Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5549)  *  -1 on failure and 0 on success.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5550)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5551) static int read_eeprom(struct s2io_nic *sp, int off, u64 *data)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5552) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5553) 	int ret = -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5554) 	u32 exit_cnt = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5555) 	u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5556) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5557) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5558) 	if (sp->device_type == XFRAME_I_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5559) 		val64 = I2C_CONTROL_DEV_ID(S2IO_DEV_ID) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5560) 			I2C_CONTROL_ADDR(off) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5561) 			I2C_CONTROL_BYTE_CNT(0x3) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5562) 			I2C_CONTROL_READ |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5563) 			I2C_CONTROL_CNTL_START;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5564) 		SPECIAL_REG_WRITE(val64, &bar0->i2c_control, LF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5565) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5566) 		while (exit_cnt < 5) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5567) 			val64 = readq(&bar0->i2c_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5568) 			if (I2C_CONTROL_CNTL_END(val64)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5569) 				*data = I2C_CONTROL_GET_DATA(val64);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5570) 				ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5571) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5572) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5573) 			msleep(50);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5574) 			exit_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5575) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5576) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5577) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5578) 	if (sp->device_type == XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5579) 		val64 = SPI_CONTROL_KEY(0x9) | SPI_CONTROL_SEL1 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5580) 			SPI_CONTROL_BYTECNT(0x3) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5581) 			SPI_CONTROL_CMD(0x3) | SPI_CONTROL_ADDR(off);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5582) 		SPECIAL_REG_WRITE(val64, &bar0->spi_control, LF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5583) 		val64 |= SPI_CONTROL_REQ;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5584) 		SPECIAL_REG_WRITE(val64, &bar0->spi_control, LF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5585) 		while (exit_cnt < 5) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5586) 			val64 = readq(&bar0->spi_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5587) 			if (val64 & SPI_CONTROL_NACK) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5588) 				ret = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5589) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5590) 			} else if (val64 & SPI_CONTROL_DONE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5591) 				*data = readq(&bar0->spi_data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5592) 				*data &= 0xffffff;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5593) 				ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5594) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5595) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5596) 			msleep(50);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5597) 			exit_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5598) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5599) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5600) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5601) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5602) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5603) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5604)  *  write_eeprom - actually writes the relevant part of the data value.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5605)  *  @sp : private member of the device structure, which is a pointer to the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5606)  *       s2io_nic structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5607)  *  @off : offset at which the data must be written
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5608)  *  @data : The data that is to be written
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5609)  *  @cnt : Number of bytes of the data that are actually to be written into
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5610)  *  the Eeprom. (max of 3)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5611)  * Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5612)  *  Actually writes the relevant part of the data value into the Eeprom
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5613)  *  through the I2C bus.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5614)  * Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5615)  *  0 on success, -1 on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5616)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5617) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5618) static int write_eeprom(struct s2io_nic *sp, int off, u64 data, int cnt)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5619) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5620) 	int exit_cnt = 0, ret = -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5621) 	u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5622) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5623) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5624) 	if (sp->device_type == XFRAME_I_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5625) 		val64 = I2C_CONTROL_DEV_ID(S2IO_DEV_ID) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5626) 			I2C_CONTROL_ADDR(off) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5627) 			I2C_CONTROL_BYTE_CNT(cnt) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5628) 			I2C_CONTROL_SET_DATA((u32)data) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5629) 			I2C_CONTROL_CNTL_START;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5630) 		SPECIAL_REG_WRITE(val64, &bar0->i2c_control, LF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5631) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5632) 		while (exit_cnt < 5) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5633) 			val64 = readq(&bar0->i2c_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5634) 			if (I2C_CONTROL_CNTL_END(val64)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5635) 				if (!(val64 & I2C_CONTROL_NACK))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5636) 					ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5637) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5638) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5639) 			msleep(50);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5640) 			exit_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5641) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5642) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5643) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5644) 	if (sp->device_type == XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5645) 		int write_cnt = (cnt == 8) ? 0 : cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5646) 		writeq(SPI_DATA_WRITE(data, (cnt << 3)), &bar0->spi_data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5647) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5648) 		val64 = SPI_CONTROL_KEY(0x9) | SPI_CONTROL_SEL1 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5649) 			SPI_CONTROL_BYTECNT(write_cnt) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5650) 			SPI_CONTROL_CMD(0x2) | SPI_CONTROL_ADDR(off);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5651) 		SPECIAL_REG_WRITE(val64, &bar0->spi_control, LF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5652) 		val64 |= SPI_CONTROL_REQ;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5653) 		SPECIAL_REG_WRITE(val64, &bar0->spi_control, LF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5654) 		while (exit_cnt < 5) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5655) 			val64 = readq(&bar0->spi_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5656) 			if (val64 & SPI_CONTROL_NACK) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5657) 				ret = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5658) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5659) 			} else if (val64 & SPI_CONTROL_DONE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5660) 				ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5661) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5662) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5663) 			msleep(50);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5664) 			exit_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5665) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5666) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5667) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5668) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5669) static void s2io_vpd_read(struct s2io_nic *nic)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5670) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5671) 	u8 *vpd_data;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5672) 	u8 data;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5673) 	int i = 0, cnt, len, fail = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5674) 	int vpd_addr = 0x80;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5675) 	struct swStat *swstats = &nic->mac_control.stats_info->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5676) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5677) 	if (nic->device_type == XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5678) 		strcpy(nic->product_name, "Xframe II 10GbE network adapter");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5679) 		vpd_addr = 0x80;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5680) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5681) 		strcpy(nic->product_name, "Xframe I 10GbE network adapter");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5682) 		vpd_addr = 0x50;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5683) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5684) 	strcpy(nic->serial_num, "NOT AVAILABLE");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5685) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5686) 	vpd_data = kmalloc(256, GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5687) 	if (!vpd_data) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5688) 		swstats->mem_alloc_fail_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5689) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5690) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5691) 	swstats->mem_allocated += 256;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5692) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5693) 	for (i = 0; i < 256; i += 4) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5694) 		pci_write_config_byte(nic->pdev, (vpd_addr + 2), i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5695) 		pci_read_config_byte(nic->pdev,  (vpd_addr + 2), &data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5696) 		pci_write_config_byte(nic->pdev, (vpd_addr + 3), 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5697) 		for (cnt = 0; cnt < 5; cnt++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5698) 			msleep(2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5699) 			pci_read_config_byte(nic->pdev, (vpd_addr + 3), &data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5700) 			if (data == 0x80)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5701) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5702) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5703) 		if (cnt >= 5) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5704) 			DBG_PRINT(ERR_DBG, "Read of VPD data failed\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5705) 			fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5706) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5707) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5708) 		pci_read_config_dword(nic->pdev,  (vpd_addr + 4),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5709) 				      (u32 *)&vpd_data[i]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5710) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5711) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5712) 	if (!fail) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5713) 		/* read serial number of adapter */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5714) 		for (cnt = 0; cnt < 252; cnt++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5715) 			if ((vpd_data[cnt] == 'S') &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5716) 			    (vpd_data[cnt+1] == 'N')) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5717) 				len = vpd_data[cnt+2];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5718) 				if (len < min(VPD_STRING_LEN, 256-cnt-2)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5719) 					memcpy(nic->serial_num,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5720) 					       &vpd_data[cnt + 3],
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5721) 					       len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5722) 					memset(nic->serial_num+len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5723) 					       0,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5724) 					       VPD_STRING_LEN-len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5725) 					break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5726) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5727) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5728) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5729) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5730) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5731) 	if ((!fail) && (vpd_data[1] < VPD_STRING_LEN)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5732) 		len = vpd_data[1];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5733) 		memcpy(nic->product_name, &vpd_data[3], len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5734) 		nic->product_name[len] = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5735) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5736) 	kfree(vpd_data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5737) 	swstats->mem_freed += 256;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5738) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5739) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5740) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5741)  *  s2io_ethtool_geeprom  - reads the value stored in the Eeprom.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5742)  *  @dev: pointer to netdev
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5743)  *  @eeprom : pointer to the user level structure provided by ethtool,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5744)  *  containing all relevant information.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5745)  *  @data_buf : user defined value to be written into Eeprom.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5746)  *  Description: Reads the values stored in the Eeprom at given offset
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5747)  *  for a given length. Stores these values int the input argument data
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5748)  *  buffer 'data_buf' and returns these to the caller (ethtool.)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5749)  *  Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5750)  *  int  0 on success
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5751)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5752) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5753) static int s2io_ethtool_geeprom(struct net_device *dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5754) 				struct ethtool_eeprom *eeprom, u8 * data_buf)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5755) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5756) 	u32 i, valid;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5757) 	u64 data;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5758) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5759) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5760) 	eeprom->magic = sp->pdev->vendor | (sp->pdev->device << 16);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5761) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5762) 	if ((eeprom->offset + eeprom->len) > (XENA_EEPROM_SPACE))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5763) 		eeprom->len = XENA_EEPROM_SPACE - eeprom->offset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5764) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5765) 	for (i = 0; i < eeprom->len; i += 4) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5766) 		if (read_eeprom(sp, (eeprom->offset + i), &data)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5767) 			DBG_PRINT(ERR_DBG, "Read of EEPROM failed\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5768) 			return -EFAULT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5769) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5770) 		valid = INV(data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5771) 		memcpy((data_buf + i), &valid, 4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5772) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5773) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5774) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5775) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5776) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5777)  *  s2io_ethtool_seeprom - tries to write the user provided value in Eeprom
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5778)  *  @dev: pointer to netdev
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5779)  *  @eeprom : pointer to the user level structure provided by ethtool,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5780)  *  containing all relevant information.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5781)  *  @data_buf : user defined value to be written into Eeprom.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5782)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5783)  *  Tries to write the user provided value in the Eeprom, at the offset
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5784)  *  given by the user.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5785)  *  Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5786)  *  0 on success, -EFAULT on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5787)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5788) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5789) static int s2io_ethtool_seeprom(struct net_device *dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5790) 				struct ethtool_eeprom *eeprom,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5791) 				u8 *data_buf)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5792) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5793) 	int len = eeprom->len, cnt = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5794) 	u64 valid = 0, data;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5795) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5796) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5797) 	if (eeprom->magic != (sp->pdev->vendor | (sp->pdev->device << 16))) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5798) 		DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5799) 			  "ETHTOOL_WRITE_EEPROM Err: "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5800) 			  "Magic value is wrong, it is 0x%x should be 0x%x\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5801) 			  (sp->pdev->vendor | (sp->pdev->device << 16)),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5802) 			  eeprom->magic);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5803) 		return -EFAULT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5804) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5805) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5806) 	while (len) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5807) 		data = (u32)data_buf[cnt] & 0x000000FF;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5808) 		if (data)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5809) 			valid = (u32)(data << 24);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5810) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5811) 			valid = data;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5812) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5813) 		if (write_eeprom(sp, (eeprom->offset + cnt), valid, 0)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5814) 			DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5815) 				  "ETHTOOL_WRITE_EEPROM Err: "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5816) 				  "Cannot write into the specified offset\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5817) 			return -EFAULT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5818) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5819) 		cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5820) 		len--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5821) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5822) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5823) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5824) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5825) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5826) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5827)  * s2io_register_test - reads and writes into all clock domains.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5828)  * @sp : private member of the device structure, which is a pointer to the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5829)  * s2io_nic structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5830)  * @data : variable that returns the result of each of the test conducted b
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5831)  * by the driver.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5832)  * Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5833)  * Read and write into all clock domains. The NIC has 3 clock domains,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5834)  * see that registers in all the three regions are accessible.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5835)  * Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5836)  * 0 on success.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5837)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5838) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5839) static int s2io_register_test(struct s2io_nic *sp, uint64_t *data)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5840) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5841) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5842) 	u64 val64 = 0, exp_val;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5843) 	int fail = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5844) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5845) 	val64 = readq(&bar0->pif_rd_swapper_fb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5846) 	if (val64 != 0x123456789abcdefULL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5847) 		fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5848) 		DBG_PRINT(INFO_DBG, "Read Test level %d fails\n", 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5849) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5850) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5851) 	val64 = readq(&bar0->rmac_pause_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5852) 	if (val64 != 0xc000ffff00000000ULL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5853) 		fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5854) 		DBG_PRINT(INFO_DBG, "Read Test level %d fails\n", 2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5855) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5856) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5857) 	val64 = readq(&bar0->rx_queue_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5858) 	if (sp->device_type == XFRAME_II_DEVICE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5859) 		exp_val = 0x0404040404040404ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5860) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5861) 		exp_val = 0x0808080808080808ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5862) 	if (val64 != exp_val) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5863) 		fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5864) 		DBG_PRINT(INFO_DBG, "Read Test level %d fails\n", 3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5865) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5866) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5867) 	val64 = readq(&bar0->xgxs_efifo_cfg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5868) 	if (val64 != 0x000000001923141EULL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5869) 		fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5870) 		DBG_PRINT(INFO_DBG, "Read Test level %d fails\n", 4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5871) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5872) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5873) 	val64 = 0x5A5A5A5A5A5A5A5AULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5874) 	writeq(val64, &bar0->xmsi_data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5875) 	val64 = readq(&bar0->xmsi_data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5876) 	if (val64 != 0x5A5A5A5A5A5A5A5AULL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5877) 		fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5878) 		DBG_PRINT(ERR_DBG, "Write Test level %d fails\n", 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5879) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5880) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5881) 	val64 = 0xA5A5A5A5A5A5A5A5ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5882) 	writeq(val64, &bar0->xmsi_data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5883) 	val64 = readq(&bar0->xmsi_data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5884) 	if (val64 != 0xA5A5A5A5A5A5A5A5ULL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5885) 		fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5886) 		DBG_PRINT(ERR_DBG, "Write Test level %d fails\n", 2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5887) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5888) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5889) 	*data = fail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5890) 	return fail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5891) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5892) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5893) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5894)  * s2io_eeprom_test - to verify that EEprom in the xena can be programmed.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5895)  * @sp : private member of the device structure, which is a pointer to the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5896)  * s2io_nic structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5897)  * @data:variable that returns the result of each of the test conducted by
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5898)  * the driver.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5899)  * Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5900)  * Verify that EEPROM in the xena can be programmed using I2C_CONTROL
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5901)  * register.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5902)  * Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5903)  * 0 on success.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5904)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5905) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5906) static int s2io_eeprom_test(struct s2io_nic *sp, uint64_t *data)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5907) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5908) 	int fail = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5909) 	u64 ret_data, org_4F0, org_7F0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5910) 	u8 saved_4F0 = 0, saved_7F0 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5911) 	struct net_device *dev = sp->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5912) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5913) 	/* Test Write Error at offset 0 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5914) 	/* Note that SPI interface allows write access to all areas
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5915) 	 * of EEPROM. Hence doing all negative testing only for Xframe I.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5916) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5917) 	if (sp->device_type == XFRAME_I_DEVICE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5918) 		if (!write_eeprom(sp, 0, 0, 3))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5919) 			fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5920) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5921) 	/* Save current values at offsets 0x4F0 and 0x7F0 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5922) 	if (!read_eeprom(sp, 0x4F0, &org_4F0))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5923) 		saved_4F0 = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5924) 	if (!read_eeprom(sp, 0x7F0, &org_7F0))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5925) 		saved_7F0 = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5926) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5927) 	/* Test Write at offset 4f0 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5928) 	if (write_eeprom(sp, 0x4F0, 0x012345, 3))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5929) 		fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5930) 	if (read_eeprom(sp, 0x4F0, &ret_data))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5931) 		fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5932) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5933) 	if (ret_data != 0x012345) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5934) 		DBG_PRINT(ERR_DBG, "%s: eeprom test error at offset 0x4F0. "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5935) 			  "Data written %llx Data read %llx\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5936) 			  dev->name, (unsigned long long)0x12345,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5937) 			  (unsigned long long)ret_data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5938) 		fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5939) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5940) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5941) 	/* Reset the EEPROM data go FFFF */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5942) 	write_eeprom(sp, 0x4F0, 0xFFFFFF, 3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5943) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5944) 	/* Test Write Request Error at offset 0x7c */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5945) 	if (sp->device_type == XFRAME_I_DEVICE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5946) 		if (!write_eeprom(sp, 0x07C, 0, 3))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5947) 			fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5948) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5949) 	/* Test Write Request at offset 0x7f0 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5950) 	if (write_eeprom(sp, 0x7F0, 0x012345, 3))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5951) 		fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5952) 	if (read_eeprom(sp, 0x7F0, &ret_data))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5953) 		fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5954) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5955) 	if (ret_data != 0x012345) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5956) 		DBG_PRINT(ERR_DBG, "%s: eeprom test error at offset 0x7F0. "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5957) 			  "Data written %llx Data read %llx\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5958) 			  dev->name, (unsigned long long)0x12345,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5959) 			  (unsigned long long)ret_data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5960) 		fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5961) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5962) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5963) 	/* Reset the EEPROM data go FFFF */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5964) 	write_eeprom(sp, 0x7F0, 0xFFFFFF, 3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5965) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5966) 	if (sp->device_type == XFRAME_I_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5967) 		/* Test Write Error at offset 0x80 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5968) 		if (!write_eeprom(sp, 0x080, 0, 3))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5969) 			fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5970) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5971) 		/* Test Write Error at offset 0xfc */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5972) 		if (!write_eeprom(sp, 0x0FC, 0, 3))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5973) 			fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5974) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5975) 		/* Test Write Error at offset 0x100 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5976) 		if (!write_eeprom(sp, 0x100, 0, 3))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5977) 			fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5978) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5979) 		/* Test Write Error at offset 4ec */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5980) 		if (!write_eeprom(sp, 0x4EC, 0, 3))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5981) 			fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5982) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5983) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5984) 	/* Restore values at offsets 0x4F0 and 0x7F0 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5985) 	if (saved_4F0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5986) 		write_eeprom(sp, 0x4F0, org_4F0, 3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5987) 	if (saved_7F0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5988) 		write_eeprom(sp, 0x7F0, org_7F0, 3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5989) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5990) 	*data = fail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5991) 	return fail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5992) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5993) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5994) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5995)  * s2io_bist_test - invokes the MemBist test of the card .
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5996)  * @sp : private member of the device structure, which is a pointer to the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5997)  * s2io_nic structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5998)  * @data:variable that returns the result of each of the test conducted by
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 5999)  * the driver.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6000)  * Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6001)  * This invokes the MemBist test of the card. We give around
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6002)  * 2 secs time for the Test to complete. If it's still not complete
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6003)  * within this peiod, we consider that the test failed.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6004)  * Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6005)  * 0 on success and -1 on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6006)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6007) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6008) static int s2io_bist_test(struct s2io_nic *sp, uint64_t *data)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6009) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6010) 	u8 bist = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6011) 	int cnt = 0, ret = -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6012) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6013) 	pci_read_config_byte(sp->pdev, PCI_BIST, &bist);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6014) 	bist |= PCI_BIST_START;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6015) 	pci_write_config_word(sp->pdev, PCI_BIST, bist);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6016) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6017) 	while (cnt < 20) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6018) 		pci_read_config_byte(sp->pdev, PCI_BIST, &bist);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6019) 		if (!(bist & PCI_BIST_START)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6020) 			*data = (bist & PCI_BIST_CODE_MASK);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6021) 			ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6022) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6023) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6024) 		msleep(100);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6025) 		cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6026) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6027) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6028) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6029) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6030) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6031) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6032)  * s2io_link_test - verifies the link state of the nic
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6033)  * @sp: private member of the device structure, which is a pointer to the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6034)  * s2io_nic structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6035)  * @data: variable that returns the result of each of the test conducted by
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6036)  * the driver.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6037)  * Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6038)  * The function verifies the link state of the NIC and updates the input
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6039)  * argument 'data' appropriately.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6040)  * Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6041)  * 0 on success.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6042)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6043) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6044) static int s2io_link_test(struct s2io_nic *sp, uint64_t *data)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6045) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6046) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6047) 	u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6048) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6049) 	val64 = readq(&bar0->adapter_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6050) 	if (!(LINK_IS_UP(val64)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6051) 		*data = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6052) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6053) 		*data = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6054) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6055) 	return *data;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6056) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6057) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6058) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6059)  * s2io_rldram_test - offline test for access to the RldRam chip on the NIC
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6060)  * @sp: private member of the device structure, which is a pointer to the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6061)  * s2io_nic structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6062)  * @data: variable that returns the result of each of the test
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6063)  * conducted by the driver.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6064)  * Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6065)  *  This is one of the offline test that tests the read and write
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6066)  *  access to the RldRam chip on the NIC.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6067)  * Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6068)  *  0 on success.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6069)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6070) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6071) static int s2io_rldram_test(struct s2io_nic *sp, uint64_t *data)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6072) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6073) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6074) 	u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6075) 	int cnt, iteration = 0, test_fail = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6076) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6077) 	val64 = readq(&bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6078) 	val64 &= ~ADAPTER_ECC_EN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6079) 	writeq(val64, &bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6080) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6081) 	val64 = readq(&bar0->mc_rldram_test_ctrl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6082) 	val64 |= MC_RLDRAM_TEST_MODE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6083) 	SPECIAL_REG_WRITE(val64, &bar0->mc_rldram_test_ctrl, LF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6084) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6085) 	val64 = readq(&bar0->mc_rldram_mrs);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6086) 	val64 |= MC_RLDRAM_QUEUE_SIZE_ENABLE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6087) 	SPECIAL_REG_WRITE(val64, &bar0->mc_rldram_mrs, UF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6088) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6089) 	val64 |= MC_RLDRAM_MRS_ENABLE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6090) 	SPECIAL_REG_WRITE(val64, &bar0->mc_rldram_mrs, UF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6091) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6092) 	while (iteration < 2) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6093) 		val64 = 0x55555555aaaa0000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6094) 		if (iteration == 1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6095) 			val64 ^= 0xFFFFFFFFFFFF0000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6096) 		writeq(val64, &bar0->mc_rldram_test_d0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6097) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6098) 		val64 = 0xaaaa5a5555550000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6099) 		if (iteration == 1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6100) 			val64 ^= 0xFFFFFFFFFFFF0000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6101) 		writeq(val64, &bar0->mc_rldram_test_d1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6102) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6103) 		val64 = 0x55aaaaaaaa5a0000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6104) 		if (iteration == 1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6105) 			val64 ^= 0xFFFFFFFFFFFF0000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6106) 		writeq(val64, &bar0->mc_rldram_test_d2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6107) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6108) 		val64 = (u64) (0x0000003ffffe0100ULL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6109) 		writeq(val64, &bar0->mc_rldram_test_add);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6110) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6111) 		val64 = MC_RLDRAM_TEST_MODE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6112) 			MC_RLDRAM_TEST_WRITE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6113) 			MC_RLDRAM_TEST_GO;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6114) 		SPECIAL_REG_WRITE(val64, &bar0->mc_rldram_test_ctrl, LF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6115) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6116) 		for (cnt = 0; cnt < 5; cnt++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6117) 			val64 = readq(&bar0->mc_rldram_test_ctrl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6118) 			if (val64 & MC_RLDRAM_TEST_DONE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6119) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6120) 			msleep(200);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6121) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6122) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6123) 		if (cnt == 5)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6124) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6125) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6126) 		val64 = MC_RLDRAM_TEST_MODE | MC_RLDRAM_TEST_GO;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6127) 		SPECIAL_REG_WRITE(val64, &bar0->mc_rldram_test_ctrl, LF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6128) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6129) 		for (cnt = 0; cnt < 5; cnt++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6130) 			val64 = readq(&bar0->mc_rldram_test_ctrl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6131) 			if (val64 & MC_RLDRAM_TEST_DONE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6132) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6133) 			msleep(500);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6134) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6135) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6136) 		if (cnt == 5)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6137) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6138) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6139) 		val64 = readq(&bar0->mc_rldram_test_ctrl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6140) 		if (!(val64 & MC_RLDRAM_TEST_PASS))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6141) 			test_fail = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6142) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6143) 		iteration++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6144) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6145) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6146) 	*data = test_fail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6147) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6148) 	/* Bring the adapter out of test mode */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6149) 	SPECIAL_REG_WRITE(0, &bar0->mc_rldram_test_ctrl, LF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6150) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6151) 	return test_fail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6152) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6153) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6154) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6155)  *  s2io_ethtool_test - conducts 6 tsets to determine the health of card.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6156)  *  @dev: pointer to netdev
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6157)  *  @ethtest : pointer to a ethtool command specific structure that will be
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6158)  *  returned to the user.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6159)  *  @data : variable that returns the result of each of the test
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6160)  * conducted by the driver.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6161)  * Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6162)  *  This function conducts 6 tests ( 4 offline and 2 online) to determine
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6163)  *  the health of the card.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6164)  * Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6165)  *  void
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6166)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6167) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6168) static void s2io_ethtool_test(struct net_device *dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6169) 			      struct ethtool_test *ethtest,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6170) 			      uint64_t *data)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6171) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6172) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6173) 	int orig_state = netif_running(sp->dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6174) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6175) 	if (ethtest->flags == ETH_TEST_FL_OFFLINE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6176) 		/* Offline Tests. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6177) 		if (orig_state)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6178) 			s2io_close(sp->dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6179) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6180) 		if (s2io_register_test(sp, &data[0]))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6181) 			ethtest->flags |= ETH_TEST_FL_FAILED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6182) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6183) 		s2io_reset(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6184) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6185) 		if (s2io_rldram_test(sp, &data[3]))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6186) 			ethtest->flags |= ETH_TEST_FL_FAILED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6187) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6188) 		s2io_reset(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6189) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6190) 		if (s2io_eeprom_test(sp, &data[1]))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6191) 			ethtest->flags |= ETH_TEST_FL_FAILED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6192) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6193) 		if (s2io_bist_test(sp, &data[4]))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6194) 			ethtest->flags |= ETH_TEST_FL_FAILED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6195) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6196) 		if (orig_state)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6197) 			s2io_open(sp->dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6198) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6199) 		data[2] = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6200) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6201) 		/* Online Tests. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6202) 		if (!orig_state) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6203) 			DBG_PRINT(ERR_DBG, "%s: is not up, cannot run test\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6204) 				  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6205) 			data[0] = -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6206) 			data[1] = -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6207) 			data[2] = -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6208) 			data[3] = -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6209) 			data[4] = -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6210) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6211) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6212) 		if (s2io_link_test(sp, &data[2]))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6213) 			ethtest->flags |= ETH_TEST_FL_FAILED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6214) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6215) 		data[0] = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6216) 		data[1] = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6217) 		data[3] = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6218) 		data[4] = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6219) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6220) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6221) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6222) static void s2io_get_ethtool_stats(struct net_device *dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6223) 				   struct ethtool_stats *estats,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6224) 				   u64 *tmp_stats)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6225) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6226) 	int i = 0, k;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6227) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6228) 	struct stat_block *stats = sp->mac_control.stats_info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6229) 	struct swStat *swstats = &stats->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6230) 	struct xpakStat *xstats = &stats->xpak_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6231) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6232) 	s2io_updt_stats(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6233) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6234) 		(u64)le32_to_cpu(stats->tmac_frms_oflow) << 32  |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6235) 		le32_to_cpu(stats->tmac_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6236) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6237) 		(u64)le32_to_cpu(stats->tmac_data_octets_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6238) 		le32_to_cpu(stats->tmac_data_octets);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6239) 	tmp_stats[i++] = le64_to_cpu(stats->tmac_drop_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6240) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6241) 		(u64)le32_to_cpu(stats->tmac_mcst_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6242) 		le32_to_cpu(stats->tmac_mcst_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6243) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6244) 		(u64)le32_to_cpu(stats->tmac_bcst_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6245) 		le32_to_cpu(stats->tmac_bcst_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6246) 	tmp_stats[i++] = le64_to_cpu(stats->tmac_pause_ctrl_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6247) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6248) 		(u64)le32_to_cpu(stats->tmac_ttl_octets_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6249) 		le32_to_cpu(stats->tmac_ttl_octets);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6250) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6251) 		(u64)le32_to_cpu(stats->tmac_ucst_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6252) 		le32_to_cpu(stats->tmac_ucst_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6253) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6254) 		(u64)le32_to_cpu(stats->tmac_nucst_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6255) 		le32_to_cpu(stats->tmac_nucst_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6256) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6257) 		(u64)le32_to_cpu(stats->tmac_any_err_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6258) 		le32_to_cpu(stats->tmac_any_err_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6259) 	tmp_stats[i++] = le64_to_cpu(stats->tmac_ttl_less_fb_octets);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6260) 	tmp_stats[i++] = le64_to_cpu(stats->tmac_vld_ip_octets);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6261) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6262) 		(u64)le32_to_cpu(stats->tmac_vld_ip_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6263) 		le32_to_cpu(stats->tmac_vld_ip);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6264) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6265) 		(u64)le32_to_cpu(stats->tmac_drop_ip_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6266) 		le32_to_cpu(stats->tmac_drop_ip);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6267) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6268) 		(u64)le32_to_cpu(stats->tmac_icmp_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6269) 		le32_to_cpu(stats->tmac_icmp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6270) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6271) 		(u64)le32_to_cpu(stats->tmac_rst_tcp_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6272) 		le32_to_cpu(stats->tmac_rst_tcp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6273) 	tmp_stats[i++] = le64_to_cpu(stats->tmac_tcp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6274) 	tmp_stats[i++] = (u64)le32_to_cpu(stats->tmac_udp_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6275) 		le32_to_cpu(stats->tmac_udp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6276) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6277) 		(u64)le32_to_cpu(stats->rmac_vld_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6278) 		le32_to_cpu(stats->rmac_vld_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6279) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6280) 		(u64)le32_to_cpu(stats->rmac_data_octets_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6281) 		le32_to_cpu(stats->rmac_data_octets);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6282) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_fcs_err_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6283) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_drop_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6284) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6285) 		(u64)le32_to_cpu(stats->rmac_vld_mcst_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6286) 		le32_to_cpu(stats->rmac_vld_mcst_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6287) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6288) 		(u64)le32_to_cpu(stats->rmac_vld_bcst_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6289) 		le32_to_cpu(stats->rmac_vld_bcst_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6290) 	tmp_stats[i++] = le32_to_cpu(stats->rmac_in_rng_len_err_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6291) 	tmp_stats[i++] = le32_to_cpu(stats->rmac_out_rng_len_err_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6292) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_long_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6293) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_pause_ctrl_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6294) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_unsup_ctrl_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6295) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6296) 		(u64)le32_to_cpu(stats->rmac_ttl_octets_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6297) 		le32_to_cpu(stats->rmac_ttl_octets);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6298) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6299) 		(u64)le32_to_cpu(stats->rmac_accepted_ucst_frms_oflow) << 32
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6300) 		| le32_to_cpu(stats->rmac_accepted_ucst_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6301) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6302) 		(u64)le32_to_cpu(stats->rmac_accepted_nucst_frms_oflow)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6303) 		<< 32 | le32_to_cpu(stats->rmac_accepted_nucst_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6304) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6305) 		(u64)le32_to_cpu(stats->rmac_discarded_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6306) 		le32_to_cpu(stats->rmac_discarded_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6307) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6308) 		(u64)le32_to_cpu(stats->rmac_drop_events_oflow)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6309) 		<< 32 | le32_to_cpu(stats->rmac_drop_events);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6310) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_ttl_less_fb_octets);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6311) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_ttl_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6312) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6313) 		(u64)le32_to_cpu(stats->rmac_usized_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6314) 		le32_to_cpu(stats->rmac_usized_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6315) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6316) 		(u64)le32_to_cpu(stats->rmac_osized_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6317) 		le32_to_cpu(stats->rmac_osized_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6318) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6319) 		(u64)le32_to_cpu(stats->rmac_frag_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6320) 		le32_to_cpu(stats->rmac_frag_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6321) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6322) 		(u64)le32_to_cpu(stats->rmac_jabber_frms_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6323) 		le32_to_cpu(stats->rmac_jabber_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6324) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_ttl_64_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6325) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_ttl_65_127_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6326) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_ttl_128_255_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6327) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_ttl_256_511_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6328) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_ttl_512_1023_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6329) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_ttl_1024_1518_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6330) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6331) 		(u64)le32_to_cpu(stats->rmac_ip_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6332) 		le32_to_cpu(stats->rmac_ip);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6333) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_ip_octets);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6334) 	tmp_stats[i++] = le32_to_cpu(stats->rmac_hdr_err_ip);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6335) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6336) 		(u64)le32_to_cpu(stats->rmac_drop_ip_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6337) 		le32_to_cpu(stats->rmac_drop_ip);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6338) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6339) 		(u64)le32_to_cpu(stats->rmac_icmp_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6340) 		le32_to_cpu(stats->rmac_icmp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6341) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_tcp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6342) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6343) 		(u64)le32_to_cpu(stats->rmac_udp_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6344) 		le32_to_cpu(stats->rmac_udp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6345) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6346) 		(u64)le32_to_cpu(stats->rmac_err_drp_udp_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6347) 		le32_to_cpu(stats->rmac_err_drp_udp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6348) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_xgmii_err_sym);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6349) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_frms_q0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6350) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_frms_q1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6351) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_frms_q2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6352) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_frms_q3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6353) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_frms_q4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6354) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_frms_q5);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6355) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_frms_q6);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6356) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_frms_q7);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6357) 	tmp_stats[i++] = le16_to_cpu(stats->rmac_full_q0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6358) 	tmp_stats[i++] = le16_to_cpu(stats->rmac_full_q1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6359) 	tmp_stats[i++] = le16_to_cpu(stats->rmac_full_q2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6360) 	tmp_stats[i++] = le16_to_cpu(stats->rmac_full_q3);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6361) 	tmp_stats[i++] = le16_to_cpu(stats->rmac_full_q4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6362) 	tmp_stats[i++] = le16_to_cpu(stats->rmac_full_q5);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6363) 	tmp_stats[i++] = le16_to_cpu(stats->rmac_full_q6);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6364) 	tmp_stats[i++] = le16_to_cpu(stats->rmac_full_q7);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6365) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6366) 		(u64)le32_to_cpu(stats->rmac_pause_cnt_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6367) 		le32_to_cpu(stats->rmac_pause_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6368) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_xgmii_data_err_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6369) 	tmp_stats[i++] = le64_to_cpu(stats->rmac_xgmii_ctrl_err_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6370) 	tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6371) 		(u64)le32_to_cpu(stats->rmac_accepted_ip_oflow) << 32 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6372) 		le32_to_cpu(stats->rmac_accepted_ip);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6373) 	tmp_stats[i++] = le32_to_cpu(stats->rmac_err_tcp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6374) 	tmp_stats[i++] = le32_to_cpu(stats->rd_req_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6375) 	tmp_stats[i++] = le32_to_cpu(stats->new_rd_req_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6376) 	tmp_stats[i++] = le32_to_cpu(stats->new_rd_req_rtry_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6377) 	tmp_stats[i++] = le32_to_cpu(stats->rd_rtry_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6378) 	tmp_stats[i++] = le32_to_cpu(stats->wr_rtry_rd_ack_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6379) 	tmp_stats[i++] = le32_to_cpu(stats->wr_req_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6380) 	tmp_stats[i++] = le32_to_cpu(stats->new_wr_req_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6381) 	tmp_stats[i++] = le32_to_cpu(stats->new_wr_req_rtry_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6382) 	tmp_stats[i++] = le32_to_cpu(stats->wr_rtry_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6383) 	tmp_stats[i++] = le32_to_cpu(stats->wr_disc_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6384) 	tmp_stats[i++] = le32_to_cpu(stats->rd_rtry_wr_ack_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6385) 	tmp_stats[i++] = le32_to_cpu(stats->txp_wr_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6386) 	tmp_stats[i++] = le32_to_cpu(stats->txd_rd_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6387) 	tmp_stats[i++] = le32_to_cpu(stats->txd_wr_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6388) 	tmp_stats[i++] = le32_to_cpu(stats->rxd_rd_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6389) 	tmp_stats[i++] = le32_to_cpu(stats->rxd_wr_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6390) 	tmp_stats[i++] = le32_to_cpu(stats->txf_rd_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6391) 	tmp_stats[i++] = le32_to_cpu(stats->rxf_wr_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6392) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6393) 	/* Enhanced statistics exist only for Hercules */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6394) 	if (sp->device_type == XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6395) 		tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6396) 			le64_to_cpu(stats->rmac_ttl_1519_4095_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6397) 		tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6398) 			le64_to_cpu(stats->rmac_ttl_4096_8191_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6399) 		tmp_stats[i++] =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6400) 			le64_to_cpu(stats->rmac_ttl_8192_max_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6401) 		tmp_stats[i++] = le64_to_cpu(stats->rmac_ttl_gt_max_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6402) 		tmp_stats[i++] = le64_to_cpu(stats->rmac_osized_alt_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6403) 		tmp_stats[i++] = le64_to_cpu(stats->rmac_jabber_alt_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6404) 		tmp_stats[i++] = le64_to_cpu(stats->rmac_gt_max_alt_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6405) 		tmp_stats[i++] = le64_to_cpu(stats->rmac_vlan_frms);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6406) 		tmp_stats[i++] = le32_to_cpu(stats->rmac_len_discard);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6407) 		tmp_stats[i++] = le32_to_cpu(stats->rmac_fcs_discard);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6408) 		tmp_stats[i++] = le32_to_cpu(stats->rmac_pf_discard);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6409) 		tmp_stats[i++] = le32_to_cpu(stats->rmac_da_discard);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6410) 		tmp_stats[i++] = le32_to_cpu(stats->rmac_red_discard);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6411) 		tmp_stats[i++] = le32_to_cpu(stats->rmac_rts_discard);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6412) 		tmp_stats[i++] = le32_to_cpu(stats->rmac_ingm_full_discard);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6413) 		tmp_stats[i++] = le32_to_cpu(stats->link_fault_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6414) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6415) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6416) 	tmp_stats[i++] = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6417) 	tmp_stats[i++] = swstats->single_ecc_errs;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6418) 	tmp_stats[i++] = swstats->double_ecc_errs;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6419) 	tmp_stats[i++] = swstats->parity_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6420) 	tmp_stats[i++] = swstats->serious_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6421) 	tmp_stats[i++] = swstats->soft_reset_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6422) 	tmp_stats[i++] = swstats->fifo_full_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6423) 	for (k = 0; k < MAX_RX_RINGS; k++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6424) 		tmp_stats[i++] = swstats->ring_full_cnt[k];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6425) 	tmp_stats[i++] = xstats->alarm_transceiver_temp_high;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6426) 	tmp_stats[i++] = xstats->alarm_transceiver_temp_low;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6427) 	tmp_stats[i++] = xstats->alarm_laser_bias_current_high;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6428) 	tmp_stats[i++] = xstats->alarm_laser_bias_current_low;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6429) 	tmp_stats[i++] = xstats->alarm_laser_output_power_high;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6430) 	tmp_stats[i++] = xstats->alarm_laser_output_power_low;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6431) 	tmp_stats[i++] = xstats->warn_transceiver_temp_high;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6432) 	tmp_stats[i++] = xstats->warn_transceiver_temp_low;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6433) 	tmp_stats[i++] = xstats->warn_laser_bias_current_high;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6434) 	tmp_stats[i++] = xstats->warn_laser_bias_current_low;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6435) 	tmp_stats[i++] = xstats->warn_laser_output_power_high;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6436) 	tmp_stats[i++] = xstats->warn_laser_output_power_low;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6437) 	tmp_stats[i++] = swstats->clubbed_frms_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6438) 	tmp_stats[i++] = swstats->sending_both;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6439) 	tmp_stats[i++] = swstats->outof_sequence_pkts;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6440) 	tmp_stats[i++] = swstats->flush_max_pkts;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6441) 	if (swstats->num_aggregations) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6442) 		u64 tmp = swstats->sum_avg_pkts_aggregated;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6443) 		int count = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6444) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6445) 		 * Since 64-bit divide does not work on all platforms,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6446) 		 * do repeated subtraction.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6447) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6448) 		while (tmp >= swstats->num_aggregations) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6449) 			tmp -= swstats->num_aggregations;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6450) 			count++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6451) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6452) 		tmp_stats[i++] = count;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6453) 	} else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6454) 		tmp_stats[i++] = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6455) 	tmp_stats[i++] = swstats->mem_alloc_fail_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6456) 	tmp_stats[i++] = swstats->pci_map_fail_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6457) 	tmp_stats[i++] = swstats->watchdog_timer_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6458) 	tmp_stats[i++] = swstats->mem_allocated;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6459) 	tmp_stats[i++] = swstats->mem_freed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6460) 	tmp_stats[i++] = swstats->link_up_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6461) 	tmp_stats[i++] = swstats->link_down_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6462) 	tmp_stats[i++] = swstats->link_up_time;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6463) 	tmp_stats[i++] = swstats->link_down_time;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6464) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6465) 	tmp_stats[i++] = swstats->tx_buf_abort_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6466) 	tmp_stats[i++] = swstats->tx_desc_abort_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6467) 	tmp_stats[i++] = swstats->tx_parity_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6468) 	tmp_stats[i++] = swstats->tx_link_loss_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6469) 	tmp_stats[i++] = swstats->tx_list_proc_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6470) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6471) 	tmp_stats[i++] = swstats->rx_parity_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6472) 	tmp_stats[i++] = swstats->rx_abort_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6473) 	tmp_stats[i++] = swstats->rx_parity_abort_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6474) 	tmp_stats[i++] = swstats->rx_rda_fail_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6475) 	tmp_stats[i++] = swstats->rx_unkn_prot_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6476) 	tmp_stats[i++] = swstats->rx_fcs_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6477) 	tmp_stats[i++] = swstats->rx_buf_size_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6478) 	tmp_stats[i++] = swstats->rx_rxd_corrupt_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6479) 	tmp_stats[i++] = swstats->rx_unkn_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6480) 	tmp_stats[i++] = swstats->tda_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6481) 	tmp_stats[i++] = swstats->pfc_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6482) 	tmp_stats[i++] = swstats->pcc_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6483) 	tmp_stats[i++] = swstats->tti_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6484) 	tmp_stats[i++] = swstats->tpa_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6485) 	tmp_stats[i++] = swstats->sm_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6486) 	tmp_stats[i++] = swstats->lso_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6487) 	tmp_stats[i++] = swstats->mac_tmac_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6488) 	tmp_stats[i++] = swstats->mac_rmac_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6489) 	tmp_stats[i++] = swstats->xgxs_txgxs_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6490) 	tmp_stats[i++] = swstats->xgxs_rxgxs_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6491) 	tmp_stats[i++] = swstats->rc_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6492) 	tmp_stats[i++] = swstats->prc_pcix_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6493) 	tmp_stats[i++] = swstats->rpa_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6494) 	tmp_stats[i++] = swstats->rda_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6495) 	tmp_stats[i++] = swstats->rti_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6496) 	tmp_stats[i++] = swstats->mc_err_cnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6497) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6498) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6499) static int s2io_ethtool_get_regs_len(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6500) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6501) 	return XENA_REG_SPACE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6502) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6503) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6504) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6505) static int s2io_get_eeprom_len(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6506) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6507) 	return XENA_EEPROM_SPACE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6508) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6509) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6510) static int s2io_get_sset_count(struct net_device *dev, int sset)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6511) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6512) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6513) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6514) 	switch (sset) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6515) 	case ETH_SS_TEST:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6516) 		return S2IO_TEST_LEN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6517) 	case ETH_SS_STATS:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6518) 		switch (sp->device_type) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6519) 		case XFRAME_I_DEVICE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6520) 			return XFRAME_I_STAT_LEN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6521) 		case XFRAME_II_DEVICE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6522) 			return XFRAME_II_STAT_LEN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6523) 		default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6524) 			return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6525) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6526) 	default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6527) 		return -EOPNOTSUPP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6528) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6529) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6530) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6531) static void s2io_ethtool_get_strings(struct net_device *dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6532) 				     u32 stringset, u8 *data)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6533) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6534) 	int stat_size = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6535) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6536) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6537) 	switch (stringset) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6538) 	case ETH_SS_TEST:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6539) 		memcpy(data, s2io_gstrings, S2IO_STRINGS_LEN);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6540) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6541) 	case ETH_SS_STATS:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6542) 		stat_size = sizeof(ethtool_xena_stats_keys);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6543) 		memcpy(data, &ethtool_xena_stats_keys, stat_size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6544) 		if (sp->device_type == XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6545) 			memcpy(data + stat_size,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6546) 			       &ethtool_enhanced_stats_keys,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6547) 			       sizeof(ethtool_enhanced_stats_keys));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6548) 			stat_size += sizeof(ethtool_enhanced_stats_keys);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6549) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6550) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6551) 		memcpy(data + stat_size, &ethtool_driver_stats_keys,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6552) 		       sizeof(ethtool_driver_stats_keys));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6553) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6554) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6555) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6556) static int s2io_set_features(struct net_device *dev, netdev_features_t features)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6557) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6558) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6559) 	netdev_features_t changed = (features ^ dev->features) & NETIF_F_LRO;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6560) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6561) 	if (changed && netif_running(dev)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6562) 		int rc;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6563) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6564) 		s2io_stop_all_tx_queue(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6565) 		s2io_card_down(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6566) 		dev->features = features;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6567) 		rc = s2io_card_up(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6568) 		if (rc)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6569) 			s2io_reset(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6570) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6571) 			s2io_start_all_tx_queue(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6572) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6573) 		return rc ? rc : 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6574) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6575) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6576) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6577) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6578) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6579) static const struct ethtool_ops netdev_ethtool_ops = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6580) 	.get_drvinfo = s2io_ethtool_gdrvinfo,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6581) 	.get_regs_len = s2io_ethtool_get_regs_len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6582) 	.get_regs = s2io_ethtool_gregs,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6583) 	.get_link = ethtool_op_get_link,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6584) 	.get_eeprom_len = s2io_get_eeprom_len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6585) 	.get_eeprom = s2io_ethtool_geeprom,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6586) 	.set_eeprom = s2io_ethtool_seeprom,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6587) 	.get_ringparam = s2io_ethtool_gringparam,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6588) 	.get_pauseparam = s2io_ethtool_getpause_data,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6589) 	.set_pauseparam = s2io_ethtool_setpause_data,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6590) 	.self_test = s2io_ethtool_test,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6591) 	.get_strings = s2io_ethtool_get_strings,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6592) 	.set_phys_id = s2io_ethtool_set_led,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6593) 	.get_ethtool_stats = s2io_get_ethtool_stats,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6594) 	.get_sset_count = s2io_get_sset_count,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6595) 	.get_link_ksettings = s2io_ethtool_get_link_ksettings,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6596) 	.set_link_ksettings = s2io_ethtool_set_link_ksettings,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6597) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6598) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6599) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6600)  *  s2io_ioctl - Entry point for the Ioctl
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6601)  *  @dev :  Device pointer.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6602)  *  @rq :  An IOCTL specefic structure, that can contain a pointer to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6603)  *  a proprietary structure used to pass information to the driver.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6604)  *  @cmd :  This is used to distinguish between the different commands that
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6605)  *  can be passed to the IOCTL functions.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6606)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6607)  *  Currently there are no special functionality supported in IOCTL, hence
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6608)  *  function always return EOPNOTSUPPORTED
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6609)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6610) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6611) static int s2io_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6612) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6613) 	return -EOPNOTSUPP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6614) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6615) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6616) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6617)  *  s2io_change_mtu - entry point to change MTU size for the device.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6618)  *   @dev : device pointer.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6619)  *   @new_mtu : the new MTU size for the device.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6620)  *   Description: A driver entry point to change MTU size for the device.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6621)  *   Before changing the MTU the device must be stopped.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6622)  *  Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6623)  *   0 on success and an appropriate (-)ve integer as defined in errno.h
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6624)  *   file on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6625)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6626) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6627) static int s2io_change_mtu(struct net_device *dev, int new_mtu)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6628) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6629) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6630) 	int ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6631) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6632) 	dev->mtu = new_mtu;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6633) 	if (netif_running(dev)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6634) 		s2io_stop_all_tx_queue(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6635) 		s2io_card_down(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6636) 		ret = s2io_card_up(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6637) 		if (ret) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6638) 			DBG_PRINT(ERR_DBG, "%s: Device bring up failed\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6639) 				  __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6640) 			return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6641) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6642) 		s2io_wake_all_tx_queue(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6643) 	} else { /* Device is down */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6644) 		struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6645) 		u64 val64 = new_mtu;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6646) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6647) 		writeq(vBIT(val64, 2, 14), &bar0->rmac_max_pyld_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6648) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6649) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6650) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6651) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6652) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6653) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6654)  * s2io_set_link - Set the LInk status
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6655)  * @work: work struct containing a pointer to device private structue
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6656)  * Description: Sets the link status for the adapter
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6657)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6658) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6659) static void s2io_set_link(struct work_struct *work)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6660) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6661) 	struct s2io_nic *nic = container_of(work, struct s2io_nic,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6662) 					    set_link_task);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6663) 	struct net_device *dev = nic->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6664) 	struct XENA_dev_config __iomem *bar0 = nic->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6665) 	register u64 val64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6666) 	u16 subid;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6667) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6668) 	rtnl_lock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6669) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6670) 	if (!netif_running(dev))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6671) 		goto out_unlock;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6672) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6673) 	if (test_and_set_bit(__S2IO_STATE_LINK_TASK, &(nic->state))) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6674) 		/* The card is being reset, no point doing anything */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6675) 		goto out_unlock;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6676) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6677) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6678) 	subid = nic->pdev->subsystem_device;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6679) 	if (s2io_link_fault_indication(nic) == MAC_RMAC_ERR_TIMER) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6680) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6681) 		 * Allow a small delay for the NICs self initiated
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6682) 		 * cleanup to complete.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6683) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6684) 		msleep(100);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6685) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6686) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6687) 	val64 = readq(&bar0->adapter_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6688) 	if (LINK_IS_UP(val64)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6689) 		if (!(readq(&bar0->adapter_control) & ADAPTER_CNTL_EN)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6690) 			if (verify_xena_quiescence(nic)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6691) 				val64 = readq(&bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6692) 				val64 |= ADAPTER_CNTL_EN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6693) 				writeq(val64, &bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6694) 				if (CARDS_WITH_FAULTY_LINK_INDICATORS(
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6695) 					    nic->device_type, subid)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6696) 					val64 = readq(&bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6697) 					val64 |= GPIO_CTRL_GPIO_0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6698) 					writeq(val64, &bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6699) 					val64 = readq(&bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6700) 				} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6701) 					val64 |= ADAPTER_LED_ON;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6702) 					writeq(val64, &bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6703) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6704) 				nic->device_enabled_once = true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6705) 			} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6706) 				DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6707) 					  "%s: Error: device is not Quiescent\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6708) 					  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6709) 				s2io_stop_all_tx_queue(nic);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6710) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6711) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6712) 		val64 = readq(&bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6713) 		val64 |= ADAPTER_LED_ON;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6714) 		writeq(val64, &bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6715) 		s2io_link(nic, LINK_UP);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6716) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6717) 		if (CARDS_WITH_FAULTY_LINK_INDICATORS(nic->device_type,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6718) 						      subid)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6719) 			val64 = readq(&bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6720) 			val64 &= ~GPIO_CTRL_GPIO_0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6721) 			writeq(val64, &bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6722) 			val64 = readq(&bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6723) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6724) 		/* turn off LED */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6725) 		val64 = readq(&bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6726) 		val64 = val64 & (~ADAPTER_LED_ON);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6727) 		writeq(val64, &bar0->adapter_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6728) 		s2io_link(nic, LINK_DOWN);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6729) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6730) 	clear_bit(__S2IO_STATE_LINK_TASK, &(nic->state));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6731) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6732) out_unlock:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6733) 	rtnl_unlock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6734) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6735) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6736) static int set_rxd_buffer_pointer(struct s2io_nic *sp, struct RxD_t *rxdp,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6737) 				  struct buffAdd *ba,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6738) 				  struct sk_buff **skb, u64 *temp0, u64 *temp1,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6739) 				  u64 *temp2, int size)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6740) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6741) 	struct net_device *dev = sp->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6742) 	struct swStat *stats = &sp->mac_control.stats_info->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6743) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6744) 	if ((sp->rxd_mode == RXD_MODE_1) && (rxdp->Host_Control == 0)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6745) 		struct RxD1 *rxdp1 = (struct RxD1 *)rxdp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6746) 		/* allocate skb */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6747) 		if (*skb) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6748) 			DBG_PRINT(INFO_DBG, "SKB is not NULL\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6749) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6750) 			 * As Rx frame are not going to be processed,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6751) 			 * using same mapped address for the Rxd
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6752) 			 * buffer pointer
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6753) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6754) 			rxdp1->Buffer0_ptr = *temp0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6755) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6756) 			*skb = netdev_alloc_skb(dev, size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6757) 			if (!(*skb)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6758) 				DBG_PRINT(INFO_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6759) 					  "%s: Out of memory to allocate %s\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6760) 					  dev->name, "1 buf mode SKBs");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6761) 				stats->mem_alloc_fail_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6762) 				return -ENOMEM ;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6763) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6764) 			stats->mem_allocated += (*skb)->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6765) 			/* storing the mapped addr in a temp variable
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6766) 			 * such it will be used for next rxd whose
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6767) 			 * Host Control is NULL
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6768) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6769) 			rxdp1->Buffer0_ptr = *temp0 =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6770) 				dma_map_single(&sp->pdev->dev, (*skb)->data,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6771) 					       size - NET_IP_ALIGN,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6772) 					       DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6773) 			if (dma_mapping_error(&sp->pdev->dev, rxdp1->Buffer0_ptr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6774) 				goto memalloc_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6775) 			rxdp->Host_Control = (unsigned long) (*skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6776) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6777) 	} else if ((sp->rxd_mode == RXD_MODE_3B) && (rxdp->Host_Control == 0)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6778) 		struct RxD3 *rxdp3 = (struct RxD3 *)rxdp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6779) 		/* Two buffer Mode */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6780) 		if (*skb) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6781) 			rxdp3->Buffer2_ptr = *temp2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6782) 			rxdp3->Buffer0_ptr = *temp0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6783) 			rxdp3->Buffer1_ptr = *temp1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6784) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6785) 			*skb = netdev_alloc_skb(dev, size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6786) 			if (!(*skb)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6787) 				DBG_PRINT(INFO_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6788) 					  "%s: Out of memory to allocate %s\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6789) 					  dev->name,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6790) 					  "2 buf mode SKBs");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6791) 				stats->mem_alloc_fail_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6792) 				return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6793) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6794) 			stats->mem_allocated += (*skb)->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6795) 			rxdp3->Buffer2_ptr = *temp2 =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6796) 				dma_map_single(&sp->pdev->dev, (*skb)->data,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6797) 					       dev->mtu + 4, DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6798) 			if (dma_mapping_error(&sp->pdev->dev, rxdp3->Buffer2_ptr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6799) 				goto memalloc_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6800) 			rxdp3->Buffer0_ptr = *temp0 =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6801) 				dma_map_single(&sp->pdev->dev, ba->ba_0,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6802) 					       BUF0_LEN, DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6803) 			if (dma_mapping_error(&sp->pdev->dev, rxdp3->Buffer0_ptr)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6804) 				dma_unmap_single(&sp->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6805) 						 (dma_addr_t)rxdp3->Buffer2_ptr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6806) 						 dev->mtu + 4,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6807) 						 DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6808) 				goto memalloc_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6809) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6810) 			rxdp->Host_Control = (unsigned long) (*skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6811) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6812) 			/* Buffer-1 will be dummy buffer not used */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6813) 			rxdp3->Buffer1_ptr = *temp1 =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6814) 				dma_map_single(&sp->pdev->dev, ba->ba_1,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6815) 					       BUF1_LEN, DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6816) 			if (dma_mapping_error(&sp->pdev->dev, rxdp3->Buffer1_ptr)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6817) 				dma_unmap_single(&sp->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6818) 						 (dma_addr_t)rxdp3->Buffer0_ptr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6819) 						 BUF0_LEN, DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6820) 				dma_unmap_single(&sp->pdev->dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6821) 						 (dma_addr_t)rxdp3->Buffer2_ptr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6822) 						 dev->mtu + 4,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6823) 						 DMA_FROM_DEVICE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6824) 				goto memalloc_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6825) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6826) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6827) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6828) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6829) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6830) memalloc_failed:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6831) 	stats->pci_map_fail_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6832) 	stats->mem_freed += (*skb)->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6833) 	dev_kfree_skb(*skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6834) 	return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6835) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6836) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6837) static void set_rxd_buffer_size(struct s2io_nic *sp, struct RxD_t *rxdp,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6838) 				int size)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6839) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6840) 	struct net_device *dev = sp->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6841) 	if (sp->rxd_mode == RXD_MODE_1) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6842) 		rxdp->Control_2 = SET_BUFFER0_SIZE_1(size - NET_IP_ALIGN);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6843) 	} else if (sp->rxd_mode == RXD_MODE_3B) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6844) 		rxdp->Control_2 = SET_BUFFER0_SIZE_3(BUF0_LEN);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6845) 		rxdp->Control_2 |= SET_BUFFER1_SIZE_3(1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6846) 		rxdp->Control_2 |= SET_BUFFER2_SIZE_3(dev->mtu + 4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6847) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6848) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6849) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6850) static  int rxd_owner_bit_reset(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6851) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6852) 	int i, j, k, blk_cnt = 0, size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6853) 	struct config_param *config = &sp->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6854) 	struct mac_info *mac_control = &sp->mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6855) 	struct net_device *dev = sp->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6856) 	struct RxD_t *rxdp = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6857) 	struct sk_buff *skb = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6858) 	struct buffAdd *ba = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6859) 	u64 temp0_64 = 0, temp1_64 = 0, temp2_64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6860) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6861) 	/* Calculate the size based on ring mode */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6862) 	size = dev->mtu + HEADER_ETHERNET_II_802_3_SIZE +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6863) 		HEADER_802_2_SIZE + HEADER_SNAP_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6864) 	if (sp->rxd_mode == RXD_MODE_1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6865) 		size += NET_IP_ALIGN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6866) 	else if (sp->rxd_mode == RXD_MODE_3B)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6867) 		size = dev->mtu + ALIGN_SIZE + BUF0_LEN + 4;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6868) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6869) 	for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6870) 		struct rx_ring_config *rx_cfg = &config->rx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6871) 		struct ring_info *ring = &mac_control->rings[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6872) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6873) 		blk_cnt = rx_cfg->num_rxd / (rxd_count[sp->rxd_mode] + 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6874) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6875) 		for (j = 0; j < blk_cnt; j++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6876) 			for (k = 0; k < rxd_count[sp->rxd_mode]; k++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6877) 				rxdp = ring->rx_blocks[j].rxds[k].virt_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6878) 				if (sp->rxd_mode == RXD_MODE_3B)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6879) 					ba = &ring->ba[j][k];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6880) 				if (set_rxd_buffer_pointer(sp, rxdp, ba, &skb,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6881) 							   &temp0_64,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6882) 							   &temp1_64,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6883) 							   &temp2_64,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6884) 							   size) == -ENOMEM) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6885) 					return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6886) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6887) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6888) 				set_rxd_buffer_size(sp, rxdp, size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6889) 				dma_wmb();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6890) 				/* flip the Ownership bit to Hardware */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6891) 				rxdp->Control_1 |= RXD_OWN_XENA;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6892) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6893) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6894) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6895) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6896) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6897) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6898) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6899) static int s2io_add_isr(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6900) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6901) 	int ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6902) 	struct net_device *dev = sp->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6903) 	int err = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6904) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6905) 	if (sp->config.intr_type == MSI_X)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6906) 		ret = s2io_enable_msi_x(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6907) 	if (ret) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6908) 		DBG_PRINT(ERR_DBG, "%s: Defaulting to INTA\n", dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6909) 		sp->config.intr_type = INTA;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6910) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6911) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6912) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6913) 	 * Store the values of the MSIX table in
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6914) 	 * the struct s2io_nic structure
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6915) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6916) 	store_xmsi_data(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6917) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6918) 	/* After proper initialization of H/W, register ISR */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6919) 	if (sp->config.intr_type == MSI_X) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6920) 		int i, msix_rx_cnt = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6921) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6922) 		for (i = 0; i < sp->num_entries; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6923) 			if (sp->s2io_entries[i].in_use == MSIX_FLG) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6924) 				if (sp->s2io_entries[i].type ==
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6925) 				    MSIX_RING_TYPE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6926) 					snprintf(sp->desc[i],
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6927) 						sizeof(sp->desc[i]),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6928) 						"%s:MSI-X-%d-RX",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6929) 						dev->name, i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6930) 					err = request_irq(sp->entries[i].vector,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6931) 							  s2io_msix_ring_handle,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6932) 							  0,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6933) 							  sp->desc[i],
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6934) 							  sp->s2io_entries[i].arg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6935) 				} else if (sp->s2io_entries[i].type ==
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6936) 					   MSIX_ALARM_TYPE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6937) 					snprintf(sp->desc[i],
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6938) 						sizeof(sp->desc[i]),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6939) 						"%s:MSI-X-%d-TX",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6940) 						dev->name, i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6941) 					err = request_irq(sp->entries[i].vector,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6942) 							  s2io_msix_fifo_handle,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6943) 							  0,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6944) 							  sp->desc[i],
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6945) 							  sp->s2io_entries[i].arg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6946) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6947) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6948) 				/* if either data or addr is zero print it. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6949) 				if (!(sp->msix_info[i].addr &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6950) 				      sp->msix_info[i].data)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6951) 					DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6952) 						  "%s @Addr:0x%llx Data:0x%llx\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6953) 						  sp->desc[i],
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6954) 						  (unsigned long long)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6955) 						  sp->msix_info[i].addr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6956) 						  (unsigned long long)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6957) 						  ntohl(sp->msix_info[i].data));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6958) 				} else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6959) 					msix_rx_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6960) 				if (err) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6961) 					remove_msix_isr(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6962) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6963) 					DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6964) 						  "%s:MSI-X-%d registration "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6965) 						  "failed\n", dev->name, i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6966) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6967) 					DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6968) 						  "%s: Defaulting to INTA\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6969) 						  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6970) 					sp->config.intr_type = INTA;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6971) 					break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6972) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6973) 				sp->s2io_entries[i].in_use =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6974) 					MSIX_REGISTERED_SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6975) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6976) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6977) 		if (!err) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6978) 			pr_info("MSI-X-RX %d entries enabled\n", --msix_rx_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6979) 			DBG_PRINT(INFO_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6980) 				  "MSI-X-TX entries enabled through alarm vector\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6981) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6982) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6983) 	if (sp->config.intr_type == INTA) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6984) 		err = request_irq(sp->pdev->irq, s2io_isr, IRQF_SHARED,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6985) 				  sp->name, dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6986) 		if (err) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6987) 			DBG_PRINT(ERR_DBG, "%s: ISR registration failed\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6988) 				  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6989) 			return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6990) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6991) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6992) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6993) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6994) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6995) static void s2io_rem_isr(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6996) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6997) 	if (sp->config.intr_type == MSI_X)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6998) 		remove_msix_isr(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 6999) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7000) 		remove_inta_isr(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7001) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7002) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7003) static void do_s2io_card_down(struct s2io_nic *sp, int do_io)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7004) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7005) 	int cnt = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7006) 	struct XENA_dev_config __iomem *bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7007) 	register u64 val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7008) 	struct config_param *config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7009) 	config = &sp->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7010) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7011) 	if (!is_s2io_card_up(sp))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7012) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7013) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7014) 	del_timer_sync(&sp->alarm_timer);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7015) 	/* If s2io_set_link task is executing, wait till it completes. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7016) 	while (test_and_set_bit(__S2IO_STATE_LINK_TASK, &(sp->state)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7017) 		msleep(50);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7018) 	clear_bit(__S2IO_STATE_CARD_UP, &sp->state);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7019) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7020) 	/* Disable napi */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7021) 	if (sp->config.napi) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7022) 		int off = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7023) 		if (config->intr_type ==  MSI_X) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7024) 			for (; off < sp->config.rx_ring_num; off++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7025) 				napi_disable(&sp->mac_control.rings[off].napi);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7026) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7027) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7028) 			napi_disable(&sp->napi);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7029) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7030) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7031) 	/* disable Tx and Rx traffic on the NIC */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7032) 	if (do_io)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7033) 		stop_nic(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7034) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7035) 	s2io_rem_isr(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7036) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7037) 	/* stop the tx queue, indicate link down */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7038) 	s2io_link(sp, LINK_DOWN);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7039) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7040) 	/* Check if the device is Quiescent and then Reset the NIC */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7041) 	while (do_io) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7042) 		/* As per the HW requirement we need to replenish the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7043) 		 * receive buffer to avoid the ring bump. Since there is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7044) 		 * no intention of processing the Rx frame at this pointwe are
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7045) 		 * just setting the ownership bit of rxd in Each Rx
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7046) 		 * ring to HW and set the appropriate buffer size
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7047) 		 * based on the ring mode
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7048) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7049) 		rxd_owner_bit_reset(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7050) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7051) 		val64 = readq(&bar0->adapter_status);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7052) 		if (verify_xena_quiescence(sp)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7053) 			if (verify_pcc_quiescent(sp, sp->device_enabled_once))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7054) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7055) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7056) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7057) 		msleep(50);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7058) 		cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7059) 		if (cnt == 10) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7060) 			DBG_PRINT(ERR_DBG, "Device not Quiescent - "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7061) 				  "adapter status reads 0x%llx\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7062) 				  (unsigned long long)val64);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7063) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7064) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7065) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7066) 	if (do_io)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7067) 		s2io_reset(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7068) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7069) 	/* Free all Tx buffers */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7070) 	free_tx_buffers(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7071) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7072) 	/* Free all Rx buffers */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7073) 	free_rx_buffers(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7074) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7075) 	clear_bit(__S2IO_STATE_LINK_TASK, &(sp->state));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7076) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7077) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7078) static void s2io_card_down(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7079) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7080) 	do_s2io_card_down(sp, 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7081) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7082) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7083) static int s2io_card_up(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7084) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7085) 	int i, ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7086) 	struct config_param *config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7087) 	struct mac_info *mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7088) 	struct net_device *dev = sp->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7089) 	u16 interruptible;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7090) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7091) 	/* Initialize the H/W I/O registers */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7092) 	ret = init_nic(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7093) 	if (ret != 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7094) 		DBG_PRINT(ERR_DBG, "%s: H/W initialization failed\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7095) 			  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7096) 		if (ret != -EIO)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7097) 			s2io_reset(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7098) 		return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7099) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7100) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7101) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7102) 	 * Initializing the Rx buffers. For now we are considering only 1
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7103) 	 * Rx ring and initializing buffers into 30 Rx blocks
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7104) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7105) 	config = &sp->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7106) 	mac_control = &sp->mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7107) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7108) 	for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7109) 		struct ring_info *ring = &mac_control->rings[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7110) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7111) 		ring->mtu = dev->mtu;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7112) 		ring->lro = !!(dev->features & NETIF_F_LRO);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7113) 		ret = fill_rx_buffers(sp, ring, 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7114) 		if (ret) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7115) 			DBG_PRINT(ERR_DBG, "%s: Out of memory in Open\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7116) 				  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7117) 			s2io_reset(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7118) 			free_rx_buffers(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7119) 			return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7120) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7121) 		DBG_PRINT(INFO_DBG, "Buf in ring:%d is %d:\n", i,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7122) 			  ring->rx_bufs_left);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7123) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7124) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7125) 	/* Initialise napi */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7126) 	if (config->napi) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7127) 		if (config->intr_type ==  MSI_X) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7128) 			for (i = 0; i < sp->config.rx_ring_num; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7129) 				napi_enable(&sp->mac_control.rings[i].napi);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7130) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7131) 			napi_enable(&sp->napi);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7132) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7133) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7134) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7135) 	/* Maintain the state prior to the open */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7136) 	if (sp->promisc_flg)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7137) 		sp->promisc_flg = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7138) 	if (sp->m_cast_flg) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7139) 		sp->m_cast_flg = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7140) 		sp->all_multi_pos = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7141) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7142) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7143) 	/* Setting its receive mode */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7144) 	s2io_set_multicast(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7145) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7146) 	if (dev->features & NETIF_F_LRO) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7147) 		/* Initialize max aggregatable pkts per session based on MTU */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7148) 		sp->lro_max_aggr_per_sess = ((1<<16) - 1) / dev->mtu;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7149) 		/* Check if we can use (if specified) user provided value */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7150) 		if (lro_max_pkts < sp->lro_max_aggr_per_sess)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7151) 			sp->lro_max_aggr_per_sess = lro_max_pkts;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7152) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7153) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7154) 	/* Enable Rx Traffic and interrupts on the NIC */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7155) 	if (start_nic(sp)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7156) 		DBG_PRINT(ERR_DBG, "%s: Starting NIC failed\n", dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7157) 		s2io_reset(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7158) 		free_rx_buffers(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7159) 		return -ENODEV;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7160) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7161) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7162) 	/* Add interrupt service routine */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7163) 	if (s2io_add_isr(sp) != 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7164) 		if (sp->config.intr_type == MSI_X)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7165) 			s2io_rem_isr(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7166) 		s2io_reset(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7167) 		free_rx_buffers(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7168) 		return -ENODEV;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7169) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7170) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7171) 	timer_setup(&sp->alarm_timer, s2io_alarm_handle, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7172) 	mod_timer(&sp->alarm_timer, jiffies + HZ / 2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7173) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7174) 	set_bit(__S2IO_STATE_CARD_UP, &sp->state);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7175) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7176) 	/*  Enable select interrupts */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7177) 	en_dis_err_alarms(sp, ENA_ALL_INTRS, ENABLE_INTRS);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7178) 	if (sp->config.intr_type != INTA) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7179) 		interruptible = TX_TRAFFIC_INTR | TX_PIC_INTR;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7180) 		en_dis_able_nic_intrs(sp, interruptible, ENABLE_INTRS);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7181) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7182) 		interruptible = TX_TRAFFIC_INTR | RX_TRAFFIC_INTR;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7183) 		interruptible |= TX_PIC_INTR;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7184) 		en_dis_able_nic_intrs(sp, interruptible, ENABLE_INTRS);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7185) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7186) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7187) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7188) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7189) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7190) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7191)  * s2io_restart_nic - Resets the NIC.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7192)  * @work : work struct containing a pointer to the device private structure
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7193)  * Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7194)  * This function is scheduled to be run by the s2io_tx_watchdog
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7195)  * function after 0.5 secs to reset the NIC. The idea is to reduce
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7196)  * the run time of the watch dog routine which is run holding a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7197)  * spin lock.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7198)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7199) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7200) static void s2io_restart_nic(struct work_struct *work)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7201) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7202) 	struct s2io_nic *sp = container_of(work, struct s2io_nic, rst_timer_task);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7203) 	struct net_device *dev = sp->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7204) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7205) 	rtnl_lock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7206) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7207) 	if (!netif_running(dev))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7208) 		goto out_unlock;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7209) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7210) 	s2io_card_down(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7211) 	if (s2io_card_up(sp)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7212) 		DBG_PRINT(ERR_DBG, "%s: Device bring up failed\n", dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7213) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7214) 	s2io_wake_all_tx_queue(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7215) 	DBG_PRINT(ERR_DBG, "%s: was reset by Tx watchdog timer\n", dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7216) out_unlock:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7217) 	rtnl_unlock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7218) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7219) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7220) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7221)  *  s2io_tx_watchdog - Watchdog for transmit side.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7222)  *  @dev : Pointer to net device structure
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7223)  *  @txqueue: index of the hanging queue
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7224)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7225)  *  This function is triggered if the Tx Queue is stopped
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7226)  *  for a pre-defined amount of time when the Interface is still up.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7227)  *  If the Interface is jammed in such a situation, the hardware is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7228)  *  reset (by s2io_close) and restarted again (by s2io_open) to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7229)  *  overcome any problem that might have been caused in the hardware.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7230)  *  Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7231)  *  void
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7232)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7233) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7234) static void s2io_tx_watchdog(struct net_device *dev, unsigned int txqueue)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7235) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7236) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7237) 	struct swStat *swstats = &sp->mac_control.stats_info->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7238) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7239) 	if (netif_carrier_ok(dev)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7240) 		swstats->watchdog_timer_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7241) 		schedule_work(&sp->rst_timer_task);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7242) 		swstats->soft_reset_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7243) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7244) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7245) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7246) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7247)  *   rx_osm_handler - To perform some OS related operations on SKB.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7248)  *   @ring_data : the ring from which this RxD was extracted.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7249)  *   @rxdp: descriptor
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7250)  *   Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7251)  *   This function is called by the Rx interrupt serivce routine to perform
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7252)  *   some OS related operations on the SKB before passing it to the upper
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7253)  *   layers. It mainly checks if the checksum is OK, if so adds it to the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7254)  *   SKBs cksum variable, increments the Rx packet count and passes the SKB
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7255)  *   to the upper layer. If the checksum is wrong, it increments the Rx
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7256)  *   packet error count, frees the SKB and returns error.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7257)  *   Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7258)  *   SUCCESS on success and -1 on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7259)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7260) static int rx_osm_handler(struct ring_info *ring_data, struct RxD_t * rxdp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7261) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7262) 	struct s2io_nic *sp = ring_data->nic;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7263) 	struct net_device *dev = ring_data->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7264) 	struct sk_buff *skb = (struct sk_buff *)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7265) 		((unsigned long)rxdp->Host_Control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7266) 	int ring_no = ring_data->ring_no;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7267) 	u16 l3_csum, l4_csum;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7268) 	unsigned long long err = rxdp->Control_1 & RXD_T_CODE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7269) 	struct lro *lro;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7270) 	u8 err_mask;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7271) 	struct swStat *swstats = &sp->mac_control.stats_info->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7272) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7273) 	skb->dev = dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7274) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7275) 	if (err) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7276) 		/* Check for parity error */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7277) 		if (err & 0x1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7278) 			swstats->parity_err_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7279) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7280) 		err_mask = err >> 48;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7281) 		switch (err_mask) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7282) 		case 1:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7283) 			swstats->rx_parity_err_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7284) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7285) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7286) 		case 2:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7287) 			swstats->rx_abort_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7288) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7289) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7290) 		case 3:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7291) 			swstats->rx_parity_abort_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7292) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7293) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7294) 		case 4:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7295) 			swstats->rx_rda_fail_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7296) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7297) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7298) 		case 5:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7299) 			swstats->rx_unkn_prot_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7300) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7301) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7302) 		case 6:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7303) 			swstats->rx_fcs_err_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7304) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7305) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7306) 		case 7:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7307) 			swstats->rx_buf_size_err_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7308) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7309) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7310) 		case 8:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7311) 			swstats->rx_rxd_corrupt_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7312) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7313) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7314) 		case 15:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7315) 			swstats->rx_unkn_err_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7316) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7317) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7318) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7319) 		 * Drop the packet if bad transfer code. Exception being
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7320) 		 * 0x5, which could be due to unsupported IPv6 extension header.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7321) 		 * In this case, we let stack handle the packet.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7322) 		 * Note that in this case, since checksum will be incorrect,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7323) 		 * stack will validate the same.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7324) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7325) 		if (err_mask != 0x5) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7326) 			DBG_PRINT(ERR_DBG, "%s: Rx error Value: 0x%x\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7327) 				  dev->name, err_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7328) 			dev->stats.rx_crc_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7329) 			swstats->mem_freed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7330) 				+= skb->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7331) 			dev_kfree_skb(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7332) 			ring_data->rx_bufs_left -= 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7333) 			rxdp->Host_Control = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7334) 			return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7335) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7336) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7337) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7338) 	rxdp->Host_Control = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7339) 	if (sp->rxd_mode == RXD_MODE_1) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7340) 		int len = RXD_GET_BUFFER0_SIZE_1(rxdp->Control_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7341) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7342) 		skb_put(skb, len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7343) 	} else if (sp->rxd_mode == RXD_MODE_3B) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7344) 		int get_block = ring_data->rx_curr_get_info.block_index;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7345) 		int get_off = ring_data->rx_curr_get_info.offset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7346) 		int buf0_len = RXD_GET_BUFFER0_SIZE_3(rxdp->Control_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7347) 		int buf2_len = RXD_GET_BUFFER2_SIZE_3(rxdp->Control_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7348) 		unsigned char *buff = skb_push(skb, buf0_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7349) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7350) 		struct buffAdd *ba = &ring_data->ba[get_block][get_off];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7351) 		memcpy(buff, ba->ba_0, buf0_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7352) 		skb_put(skb, buf2_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7353) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7354) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7355) 	if ((rxdp->Control_1 & TCP_OR_UDP_FRAME) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7356) 	    ((!ring_data->lro) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7357) 	     (!(rxdp->Control_1 & RXD_FRAME_IP_FRAG))) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7358) 	    (dev->features & NETIF_F_RXCSUM)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7359) 		l3_csum = RXD_GET_L3_CKSUM(rxdp->Control_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7360) 		l4_csum = RXD_GET_L4_CKSUM(rxdp->Control_1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7361) 		if ((l3_csum == L3_CKSUM_OK) && (l4_csum == L4_CKSUM_OK)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7362) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7363) 			 * NIC verifies if the Checksum of the received
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7364) 			 * frame is Ok or not and accordingly returns
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7365) 			 * a flag in the RxD.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7366) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7367) 			skb->ip_summed = CHECKSUM_UNNECESSARY;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7368) 			if (ring_data->lro) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7369) 				u32 tcp_len = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7370) 				u8 *tcp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7371) 				int ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7372) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7373) 				ret = s2io_club_tcp_session(ring_data,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7374) 							    skb->data, &tcp,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7375) 							    &tcp_len, &lro,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7376) 							    rxdp, sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7377) 				switch (ret) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7378) 				case 3: /* Begin anew */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7379) 					lro->parent = skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7380) 					goto aggregate;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7381) 				case 1: /* Aggregate */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7382) 					lro_append_pkt(sp, lro, skb, tcp_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7383) 					goto aggregate;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7384) 				case 4: /* Flush session */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7385) 					lro_append_pkt(sp, lro, skb, tcp_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7386) 					queue_rx_frame(lro->parent,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7387) 						       lro->vlan_tag);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7388) 					clear_lro_session(lro);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7389) 					swstats->flush_max_pkts++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7390) 					goto aggregate;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7391) 				case 2: /* Flush both */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7392) 					lro->parent->data_len = lro->frags_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7393) 					swstats->sending_both++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7394) 					queue_rx_frame(lro->parent,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7395) 						       lro->vlan_tag);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7396) 					clear_lro_session(lro);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7397) 					goto send_up;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7398) 				case 0: /* sessions exceeded */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7399) 				case -1: /* non-TCP or not L2 aggregatable */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7400) 				case 5: /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7401) 					 * First pkt in session not
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7402) 					 * L3/L4 aggregatable
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7403) 					 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7404) 					break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7405) 				default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7406) 					DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7407) 						  "%s: Samadhana!!\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7408) 						  __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7409) 					BUG();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7410) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7411) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7412) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7413) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7414) 			 * Packet with erroneous checksum, let the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7415) 			 * upper layers deal with it.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7416) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7417) 			skb_checksum_none_assert(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7418) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7419) 	} else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7420) 		skb_checksum_none_assert(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7421) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7422) 	swstats->mem_freed += skb->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7423) send_up:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7424) 	skb_record_rx_queue(skb, ring_no);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7425) 	queue_rx_frame(skb, RXD_GET_VLAN_TAG(rxdp->Control_2));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7426) aggregate:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7427) 	sp->mac_control.rings[ring_no].rx_bufs_left -= 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7428) 	return SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7429) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7430) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7431) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7432)  *  s2io_link - stops/starts the Tx queue.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7433)  *  @sp : private member of the device structure, which is a pointer to the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7434)  *  s2io_nic structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7435)  *  @link : inidicates whether link is UP/DOWN.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7436)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7437)  *  This function stops/starts the Tx queue depending on whether the link
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7438)  *  status of the NIC is is down or up. This is called by the Alarm
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7439)  *  interrupt handler whenever a link change interrupt comes up.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7440)  *  Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7441)  *  void.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7442)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7443) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7444) static void s2io_link(struct s2io_nic *sp, int link)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7445) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7446) 	struct net_device *dev = sp->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7447) 	struct swStat *swstats = &sp->mac_control.stats_info->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7448) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7449) 	if (link != sp->last_link_state) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7450) 		init_tti(sp, link);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7451) 		if (link == LINK_DOWN) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7452) 			DBG_PRINT(ERR_DBG, "%s: Link down\n", dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7453) 			s2io_stop_all_tx_queue(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7454) 			netif_carrier_off(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7455) 			if (swstats->link_up_cnt)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7456) 				swstats->link_up_time =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7457) 					jiffies - sp->start_time;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7458) 			swstats->link_down_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7459) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7460) 			DBG_PRINT(ERR_DBG, "%s: Link Up\n", dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7461) 			if (swstats->link_down_cnt)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7462) 				swstats->link_down_time =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7463) 					jiffies - sp->start_time;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7464) 			swstats->link_up_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7465) 			netif_carrier_on(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7466) 			s2io_wake_all_tx_queue(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7467) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7468) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7469) 	sp->last_link_state = link;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7470) 	sp->start_time = jiffies;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7471) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7472) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7473) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7474)  *  s2io_init_pci -Initialization of PCI and PCI-X configuration registers .
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7475)  *  @sp : private member of the device structure, which is a pointer to the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7476)  *  s2io_nic structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7477)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7478)  *  This function initializes a few of the PCI and PCI-X configuration registers
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7479)  *  with recommended values.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7480)  *  Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7481)  *  void
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7482)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7483) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7484) static void s2io_init_pci(struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7485) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7486) 	u16 pci_cmd = 0, pcix_cmd = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7487) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7488) 	/* Enable Data Parity Error Recovery in PCI-X command register. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7489) 	pci_read_config_word(sp->pdev, PCIX_COMMAND_REGISTER,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7490) 			     &(pcix_cmd));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7491) 	pci_write_config_word(sp->pdev, PCIX_COMMAND_REGISTER,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7492) 			      (pcix_cmd | 1));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7493) 	pci_read_config_word(sp->pdev, PCIX_COMMAND_REGISTER,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7494) 			     &(pcix_cmd));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7495) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7496) 	/* Set the PErr Response bit in PCI command register. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7497) 	pci_read_config_word(sp->pdev, PCI_COMMAND, &pci_cmd);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7498) 	pci_write_config_word(sp->pdev, PCI_COMMAND,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7499) 			      (pci_cmd | PCI_COMMAND_PARITY));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7500) 	pci_read_config_word(sp->pdev, PCI_COMMAND, &pci_cmd);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7501) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7502) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7503) static int s2io_verify_parm(struct pci_dev *pdev, u8 *dev_intr_type,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7504) 			    u8 *dev_multiq)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7505) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7506) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7507) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7508) 	if ((tx_fifo_num > MAX_TX_FIFOS) || (tx_fifo_num < 1)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7509) 		DBG_PRINT(ERR_DBG, "Requested number of tx fifos "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7510) 			  "(%d) not supported\n", tx_fifo_num);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7511) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7512) 		if (tx_fifo_num < 1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7513) 			tx_fifo_num = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7514) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7515) 			tx_fifo_num = MAX_TX_FIFOS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7516) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7517) 		DBG_PRINT(ERR_DBG, "Default to %d tx fifos\n", tx_fifo_num);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7518) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7519) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7520) 	if (multiq)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7521) 		*dev_multiq = multiq;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7522) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7523) 	if (tx_steering_type && (1 == tx_fifo_num)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7524) 		if (tx_steering_type != TX_DEFAULT_STEERING)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7525) 			DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7526) 				  "Tx steering is not supported with "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7527) 				  "one fifo. Disabling Tx steering.\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7528) 		tx_steering_type = NO_STEERING;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7529) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7530) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7531) 	if ((tx_steering_type < NO_STEERING) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7532) 	    (tx_steering_type > TX_DEFAULT_STEERING)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7533) 		DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7534) 			  "Requested transmit steering not supported\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7535) 		DBG_PRINT(ERR_DBG, "Disabling transmit steering\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7536) 		tx_steering_type = NO_STEERING;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7537) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7538) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7539) 	if (rx_ring_num > MAX_RX_RINGS) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7540) 		DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7541) 			  "Requested number of rx rings not supported\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7542) 		DBG_PRINT(ERR_DBG, "Default to %d rx rings\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7543) 			  MAX_RX_RINGS);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7544) 		rx_ring_num = MAX_RX_RINGS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7545) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7546) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7547) 	if ((*dev_intr_type != INTA) && (*dev_intr_type != MSI_X)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7548) 		DBG_PRINT(ERR_DBG, "Wrong intr_type requested. "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7549) 			  "Defaulting to INTA\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7550) 		*dev_intr_type = INTA;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7551) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7552) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7553) 	if ((*dev_intr_type == MSI_X) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7554) 	    ((pdev->device != PCI_DEVICE_ID_HERC_WIN) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7555) 	     (pdev->device != PCI_DEVICE_ID_HERC_UNI))) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7556) 		DBG_PRINT(ERR_DBG, "Xframe I does not support MSI_X. "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7557) 			  "Defaulting to INTA\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7558) 		*dev_intr_type = INTA;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7559) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7560) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7561) 	if ((rx_ring_mode != 1) && (rx_ring_mode != 2)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7562) 		DBG_PRINT(ERR_DBG, "Requested ring mode not supported\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7563) 		DBG_PRINT(ERR_DBG, "Defaulting to 1-buffer mode\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7564) 		rx_ring_mode = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7565) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7566) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7567) 	for (i = 0; i < MAX_RX_RINGS; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7568) 		if (rx_ring_sz[i] > MAX_RX_BLOCKS_PER_RING) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7569) 			DBG_PRINT(ERR_DBG, "Requested rx ring size not "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7570) 				  "supported\nDefaulting to %d\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7571) 				  MAX_RX_BLOCKS_PER_RING);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7572) 			rx_ring_sz[i] = MAX_RX_BLOCKS_PER_RING;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7573) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7574) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7575) 	return SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7576) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7577) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7578) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7579)  * rts_ds_steer - Receive traffic steering based on IPv4 or IPv6 TOS or Traffic class respectively.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7580)  * @nic: device private variable
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7581)  * @ds_codepoint: data
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7582)  * @ring: ring index
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7583)  * Description: The function configures the receive steering to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7584)  * desired receive ring.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7585)  * Return Value:  SUCCESS on success and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7586)  * '-1' on failure (endian settings incorrect).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7587)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7588) static int rts_ds_steer(struct s2io_nic *nic, u8 ds_codepoint, u8 ring)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7589) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7590) 	struct XENA_dev_config __iomem *bar0 = nic->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7591) 	register u64 val64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7592) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7593) 	if (ds_codepoint > 63)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7594) 		return FAILURE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7595) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7596) 	val64 = RTS_DS_MEM_DATA(ring);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7597) 	writeq(val64, &bar0->rts_ds_mem_data);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7598) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7599) 	val64 = RTS_DS_MEM_CTRL_WE |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7600) 		RTS_DS_MEM_CTRL_STROBE_NEW_CMD |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7601) 		RTS_DS_MEM_CTRL_OFFSET(ds_codepoint);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7602) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7603) 	writeq(val64, &bar0->rts_ds_mem_ctrl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7604) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7605) 	return wait_for_cmd_complete(&bar0->rts_ds_mem_ctrl,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7606) 				     RTS_DS_MEM_CTRL_STROBE_CMD_BEING_EXECUTED,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7607) 				     S2IO_BIT_RESET);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7608) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7609) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7610) static const struct net_device_ops s2io_netdev_ops = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7611) 	.ndo_open	        = s2io_open,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7612) 	.ndo_stop	        = s2io_close,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7613) 	.ndo_get_stats	        = s2io_get_stats,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7614) 	.ndo_start_xmit    	= s2io_xmit,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7615) 	.ndo_validate_addr	= eth_validate_addr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7616) 	.ndo_set_rx_mode	= s2io_set_multicast,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7617) 	.ndo_do_ioctl	   	= s2io_ioctl,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7618) 	.ndo_set_mac_address    = s2io_set_mac_addr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7619) 	.ndo_change_mtu	   	= s2io_change_mtu,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7620) 	.ndo_set_features	= s2io_set_features,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7621) 	.ndo_tx_timeout	   	= s2io_tx_watchdog,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7622) #ifdef CONFIG_NET_POLL_CONTROLLER
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7623) 	.ndo_poll_controller    = s2io_netpoll,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7624) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7625) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7626) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7627) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7628)  *  s2io_init_nic - Initialization of the adapter .
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7629)  *  @pdev : structure containing the PCI related information of the device.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7630)  *  @pre: List of PCI devices supported by the driver listed in s2io_tbl.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7631)  *  Description:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7632)  *  The function initializes an adapter identified by the pci_dec structure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7633)  *  All OS related initialization including memory and device structure and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7634)  *  initlaization of the device private variable is done. Also the swapper
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7635)  *  control register is initialized to enable read and write into the I/O
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7636)  *  registers of the device.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7637)  *  Return value:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7638)  *  returns 0 on success and negative on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7639)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7640) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7641) static int
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7642) s2io_init_nic(struct pci_dev *pdev, const struct pci_device_id *pre)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7643) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7644) 	struct s2io_nic *sp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7645) 	struct net_device *dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7646) 	int i, j, ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7647) 	int dma_flag = false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7648) 	u32 mac_up, mac_down;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7649) 	u64 val64 = 0, tmp64 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7650) 	struct XENA_dev_config __iomem *bar0 = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7651) 	u16 subid;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7652) 	struct config_param *config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7653) 	struct mac_info *mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7654) 	int mode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7655) 	u8 dev_intr_type = intr_type;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7656) 	u8 dev_multiq = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7657) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7658) 	ret = s2io_verify_parm(pdev, &dev_intr_type, &dev_multiq);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7659) 	if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7660) 		return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7661) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7662) 	ret = pci_enable_device(pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7663) 	if (ret) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7664) 		DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7665) 			  "%s: pci_enable_device failed\n", __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7666) 		return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7667) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7668) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7669) 	if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64))) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7670) 		DBG_PRINT(INIT_DBG, "%s: Using 64bit DMA\n", __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7671) 		dma_flag = true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7672) 		if (dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7673) 			DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7674) 				  "Unable to obtain 64bit DMA for coherent allocations\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7675) 			pci_disable_device(pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7676) 			return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7677) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7678) 	} else if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7679) 		DBG_PRINT(INIT_DBG, "%s: Using 32bit DMA\n", __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7680) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7681) 		pci_disable_device(pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7682) 		return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7683) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7684) 	ret = pci_request_regions(pdev, s2io_driver_name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7685) 	if (ret) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7686) 		DBG_PRINT(ERR_DBG, "%s: Request Regions failed - %x\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7687) 			  __func__, ret);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7688) 		pci_disable_device(pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7689) 		return -ENODEV;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7690) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7691) 	if (dev_multiq)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7692) 		dev = alloc_etherdev_mq(sizeof(struct s2io_nic), tx_fifo_num);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7693) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7694) 		dev = alloc_etherdev(sizeof(struct s2io_nic));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7695) 	if (dev == NULL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7696) 		pci_disable_device(pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7697) 		pci_release_regions(pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7698) 		return -ENODEV;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7699) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7700) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7701) 	pci_set_master(pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7702) 	pci_set_drvdata(pdev, dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7703) 	SET_NETDEV_DEV(dev, &pdev->dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7704) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7705) 	/*  Private member variable initialized to s2io NIC structure */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7706) 	sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7707) 	sp->dev = dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7708) 	sp->pdev = pdev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7709) 	sp->high_dma_flag = dma_flag;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7710) 	sp->device_enabled_once = false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7711) 	if (rx_ring_mode == 1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7712) 		sp->rxd_mode = RXD_MODE_1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7713) 	if (rx_ring_mode == 2)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7714) 		sp->rxd_mode = RXD_MODE_3B;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7715) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7716) 	sp->config.intr_type = dev_intr_type;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7717) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7718) 	if ((pdev->device == PCI_DEVICE_ID_HERC_WIN) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7719) 	    (pdev->device == PCI_DEVICE_ID_HERC_UNI))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7720) 		sp->device_type = XFRAME_II_DEVICE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7721) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7722) 		sp->device_type = XFRAME_I_DEVICE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7723) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7724) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7725) 	/* Initialize some PCI/PCI-X fields of the NIC. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7726) 	s2io_init_pci(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7727) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7728) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7729) 	 * Setting the device configuration parameters.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7730) 	 * Most of these parameters can be specified by the user during
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7731) 	 * module insertion as they are module loadable parameters. If
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7732) 	 * these parameters are not not specified during load time, they
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7733) 	 * are initialized with default values.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7734) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7735) 	config = &sp->config;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7736) 	mac_control = &sp->mac_control;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7737) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7738) 	config->napi = napi;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7739) 	config->tx_steering_type = tx_steering_type;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7740) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7741) 	/* Tx side parameters. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7742) 	if (config->tx_steering_type == TX_PRIORITY_STEERING)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7743) 		config->tx_fifo_num = MAX_TX_FIFOS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7744) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7745) 		config->tx_fifo_num = tx_fifo_num;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7746) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7747) 	/* Initialize the fifos used for tx steering */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7748) 	if (config->tx_fifo_num < 5) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7749) 		if (config->tx_fifo_num  == 1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7750) 			sp->total_tcp_fifos = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7751) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7752) 			sp->total_tcp_fifos = config->tx_fifo_num - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7753) 		sp->udp_fifo_idx = config->tx_fifo_num - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7754) 		sp->total_udp_fifos = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7755) 		sp->other_fifo_idx = sp->total_tcp_fifos - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7756) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7757) 		sp->total_tcp_fifos = (tx_fifo_num - FIFO_UDP_MAX_NUM -
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7758) 				       FIFO_OTHER_MAX_NUM);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7759) 		sp->udp_fifo_idx = sp->total_tcp_fifos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7760) 		sp->total_udp_fifos = FIFO_UDP_MAX_NUM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7761) 		sp->other_fifo_idx = sp->udp_fifo_idx + FIFO_UDP_MAX_NUM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7762) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7763) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7764) 	config->multiq = dev_multiq;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7765) 	for (i = 0; i < config->tx_fifo_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7766) 		struct tx_fifo_config *tx_cfg = &config->tx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7767) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7768) 		tx_cfg->fifo_len = tx_fifo_len[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7769) 		tx_cfg->fifo_priority = i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7770) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7771) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7772) 	/* mapping the QoS priority to the configured fifos */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7773) 	for (i = 0; i < MAX_TX_FIFOS; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7774) 		config->fifo_mapping[i] = fifo_map[config->tx_fifo_num - 1][i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7775) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7776) 	/* map the hashing selector table to the configured fifos */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7777) 	for (i = 0; i < config->tx_fifo_num; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7778) 		sp->fifo_selector[i] = fifo_selector[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7779) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7780) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7781) 	config->tx_intr_type = TXD_INT_TYPE_UTILZ;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7782) 	for (i = 0; i < config->tx_fifo_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7783) 		struct tx_fifo_config *tx_cfg = &config->tx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7784) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7785) 		tx_cfg->f_no_snoop = (NO_SNOOP_TXD | NO_SNOOP_TXD_BUFFER);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7786) 		if (tx_cfg->fifo_len < 65) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7787) 			config->tx_intr_type = TXD_INT_TYPE_PER_LIST;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7788) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7789) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7790) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7791) 	/* + 2 because one Txd for skb->data and one Txd for UFO */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7792) 	config->max_txds = MAX_SKB_FRAGS + 2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7793) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7794) 	/* Rx side parameters. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7795) 	config->rx_ring_num = rx_ring_num;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7796) 	for (i = 0; i < config->rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7797) 		struct rx_ring_config *rx_cfg = &config->rx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7798) 		struct ring_info *ring = &mac_control->rings[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7799) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7800) 		rx_cfg->num_rxd = rx_ring_sz[i] * (rxd_count[sp->rxd_mode] + 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7801) 		rx_cfg->ring_priority = i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7802) 		ring->rx_bufs_left = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7803) 		ring->rxd_mode = sp->rxd_mode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7804) 		ring->rxd_count = rxd_count[sp->rxd_mode];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7805) 		ring->pdev = sp->pdev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7806) 		ring->dev = sp->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7807) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7808) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7809) 	for (i = 0; i < rx_ring_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7810) 		struct rx_ring_config *rx_cfg = &config->rx_cfg[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7811) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7812) 		rx_cfg->ring_org = RING_ORG_BUFF1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7813) 		rx_cfg->f_no_snoop = (NO_SNOOP_RXD | NO_SNOOP_RXD_BUFFER);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7814) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7815) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7816) 	/*  Setting Mac Control parameters */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7817) 	mac_control->rmac_pause_time = rmac_pause_time;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7818) 	mac_control->mc_pause_threshold_q0q3 = mc_pause_threshold_q0q3;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7819) 	mac_control->mc_pause_threshold_q4q7 = mc_pause_threshold_q4q7;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7820) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7821) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7822) 	/*  initialize the shared memory used by the NIC and the host */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7823) 	if (init_shared_mem(sp)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7824) 		DBG_PRINT(ERR_DBG, "%s: Memory allocation failed\n", dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7825) 		ret = -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7826) 		goto mem_alloc_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7827) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7828) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7829) 	sp->bar0 = pci_ioremap_bar(pdev, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7830) 	if (!sp->bar0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7831) 		DBG_PRINT(ERR_DBG, "%s: Neterion: cannot remap io mem1\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7832) 			  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7833) 		ret = -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7834) 		goto bar0_remap_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7835) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7836) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7837) 	sp->bar1 = pci_ioremap_bar(pdev, 2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7838) 	if (!sp->bar1) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7839) 		DBG_PRINT(ERR_DBG, "%s: Neterion: cannot remap io mem2\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7840) 			  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7841) 		ret = -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7842) 		goto bar1_remap_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7843) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7844) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7845) 	/* Initializing the BAR1 address as the start of the FIFO pointer. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7846) 	for (j = 0; j < MAX_TX_FIFOS; j++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7847) 		mac_control->tx_FIFO_start[j] = sp->bar1 + (j * 0x00020000);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7848) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7849) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7850) 	/*  Driver entry points */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7851) 	dev->netdev_ops = &s2io_netdev_ops;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7852) 	dev->ethtool_ops = &netdev_ethtool_ops;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7853) 	dev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7854) 		NETIF_F_TSO | NETIF_F_TSO6 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7855) 		NETIF_F_RXCSUM | NETIF_F_LRO;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7856) 	dev->features |= dev->hw_features |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7857) 		NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7858) 	if (sp->high_dma_flag == true)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7859) 		dev->features |= NETIF_F_HIGHDMA;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7860) 	dev->watchdog_timeo = WATCH_DOG_TIMEOUT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7861) 	INIT_WORK(&sp->rst_timer_task, s2io_restart_nic);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7862) 	INIT_WORK(&sp->set_link_task, s2io_set_link);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7863) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7864) 	pci_save_state(sp->pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7865) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7866) 	/* Setting swapper control on the NIC, for proper reset operation */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7867) 	if (s2io_set_swapper(sp)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7868) 		DBG_PRINT(ERR_DBG, "%s: swapper settings are wrong\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7869) 			  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7870) 		ret = -EAGAIN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7871) 		goto set_swap_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7872) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7873) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7874) 	/* Verify if the Herc works on the slot its placed into */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7875) 	if (sp->device_type & XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7876) 		mode = s2io_verify_pci_mode(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7877) 		if (mode < 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7878) 			DBG_PRINT(ERR_DBG, "%s: Unsupported PCI bus mode\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7879) 				  __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7880) 			ret = -EBADSLT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7881) 			goto set_swap_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7882) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7883) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7884) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7885) 	if (sp->config.intr_type == MSI_X) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7886) 		sp->num_entries = config->rx_ring_num + 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7887) 		ret = s2io_enable_msi_x(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7888) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7889) 		if (!ret) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7890) 			ret = s2io_test_msi(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7891) 			/* rollback MSI-X, will re-enable during add_isr() */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7892) 			remove_msix_isr(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7893) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7894) 		if (ret) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7895) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7896) 			DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7897) 				  "MSI-X requested but failed to enable\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7898) 			sp->config.intr_type = INTA;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7899) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7900) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7901) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7902) 	if (config->intr_type ==  MSI_X) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7903) 		for (i = 0; i < config->rx_ring_num ; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7904) 			struct ring_info *ring = &mac_control->rings[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7905) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7906) 			netif_napi_add(dev, &ring->napi, s2io_poll_msix, 64);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7907) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7908) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7909) 		netif_napi_add(dev, &sp->napi, s2io_poll_inta, 64);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7910) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7911) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7912) 	/* Not needed for Herc */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7913) 	if (sp->device_type & XFRAME_I_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7914) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7915) 		 * Fix for all "FFs" MAC address problems observed on
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7916) 		 * Alpha platforms
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7917) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7918) 		fix_mac_address(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7919) 		s2io_reset(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7920) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7921) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7922) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7923) 	 * MAC address initialization.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7924) 	 * For now only one mac address will be read and used.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7925) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7926) 	bar0 = sp->bar0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7927) 	val64 = RMAC_ADDR_CMD_MEM_RD | RMAC_ADDR_CMD_MEM_STROBE_NEW_CMD |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7928) 		RMAC_ADDR_CMD_MEM_OFFSET(0 + S2IO_MAC_ADDR_START_OFFSET);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7929) 	writeq(val64, &bar0->rmac_addr_cmd_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7930) 	wait_for_cmd_complete(&bar0->rmac_addr_cmd_mem,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7931) 			      RMAC_ADDR_CMD_MEM_STROBE_CMD_EXECUTING,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7932) 			      S2IO_BIT_RESET);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7933) 	tmp64 = readq(&bar0->rmac_addr_data0_mem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7934) 	mac_down = (u32)tmp64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7935) 	mac_up = (u32) (tmp64 >> 32);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7936) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7937) 	sp->def_mac_addr[0].mac_addr[3] = (u8) (mac_up);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7938) 	sp->def_mac_addr[0].mac_addr[2] = (u8) (mac_up >> 8);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7939) 	sp->def_mac_addr[0].mac_addr[1] = (u8) (mac_up >> 16);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7940) 	sp->def_mac_addr[0].mac_addr[0] = (u8) (mac_up >> 24);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7941) 	sp->def_mac_addr[0].mac_addr[5] = (u8) (mac_down >> 16);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7942) 	sp->def_mac_addr[0].mac_addr[4] = (u8) (mac_down >> 24);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7943) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7944) 	/*  Set the factory defined MAC address initially   */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7945) 	dev->addr_len = ETH_ALEN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7946) 	memcpy(dev->dev_addr, sp->def_mac_addr, ETH_ALEN);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7947) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7948) 	/* initialize number of multicast & unicast MAC entries variables */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7949) 	if (sp->device_type == XFRAME_I_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7950) 		config->max_mc_addr = S2IO_XENA_MAX_MC_ADDRESSES;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7951) 		config->max_mac_addr = S2IO_XENA_MAX_MAC_ADDRESSES;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7952) 		config->mc_start_offset = S2IO_XENA_MC_ADDR_START_OFFSET;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7953) 	} else if (sp->device_type == XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7954) 		config->max_mc_addr = S2IO_HERC_MAX_MC_ADDRESSES;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7955) 		config->max_mac_addr = S2IO_HERC_MAX_MAC_ADDRESSES;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7956) 		config->mc_start_offset = S2IO_HERC_MC_ADDR_START_OFFSET;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7957) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7958) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7959) 	/* MTU range: 46 - 9600 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7960) 	dev->min_mtu = MIN_MTU;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7961) 	dev->max_mtu = S2IO_JUMBO_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7962) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7963) 	/* store mac addresses from CAM to s2io_nic structure */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7964) 	do_s2io_store_unicast_mc(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7965) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7966) 	/* Configure MSIX vector for number of rings configured plus one */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7967) 	if ((sp->device_type == XFRAME_II_DEVICE) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7968) 	    (config->intr_type == MSI_X))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7969) 		sp->num_entries = config->rx_ring_num + 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7970) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7971) 	/* Store the values of the MSIX table in the s2io_nic structure */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7972) 	store_xmsi_data(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7973) 	/* reset Nic and bring it to known state */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7974) 	s2io_reset(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7975) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7976) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7977) 	 * Initialize link state flags
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7978) 	 * and the card state parameter
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7979) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7980) 	sp->state = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7981) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7982) 	/* Initialize spinlocks */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7983) 	for (i = 0; i < sp->config.tx_fifo_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7984) 		struct fifo_info *fifo = &mac_control->fifos[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7985) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7986) 		spin_lock_init(&fifo->tx_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7987) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7988) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7989) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7990) 	 * SXE-002: Configure link and activity LED to init state
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7991) 	 * on driver load.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7992) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7993) 	subid = sp->pdev->subsystem_device;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7994) 	if ((subid & 0xFF) >= 0x07) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7995) 		val64 = readq(&bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7996) 		val64 |= 0x0000800000000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7997) 		writeq(val64, &bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7998) 		val64 = 0x0411040400000000ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 7999) 		writeq(val64, (void __iomem *)bar0 + 0x2700);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8000) 		val64 = readq(&bar0->gpio_control);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8001) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8002) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8003) 	sp->rx_csum = 1;	/* Rx chksum verify enabled by default */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8004) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8005) 	if (register_netdev(dev)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8006) 		DBG_PRINT(ERR_DBG, "Device registration failed\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8007) 		ret = -ENODEV;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8008) 		goto register_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8009) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8010) 	s2io_vpd_read(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8011) 	DBG_PRINT(ERR_DBG, "Copyright(c) 2002-2010 Exar Corp.\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8012) 	DBG_PRINT(ERR_DBG, "%s: Neterion %s (rev %d)\n", dev->name,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8013) 		  sp->product_name, pdev->revision);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8014) 	DBG_PRINT(ERR_DBG, "%s: Driver version %s\n", dev->name,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8015) 		  s2io_driver_version);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8016) 	DBG_PRINT(ERR_DBG, "%s: MAC Address: %pM\n", dev->name, dev->dev_addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8017) 	DBG_PRINT(ERR_DBG, "Serial number: %s\n", sp->serial_num);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8018) 	if (sp->device_type & XFRAME_II_DEVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8019) 		mode = s2io_print_pci_mode(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8020) 		if (mode < 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8021) 			ret = -EBADSLT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8022) 			unregister_netdev(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8023) 			goto set_swap_failed;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8024) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8025) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8026) 	switch (sp->rxd_mode) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8027) 	case RXD_MODE_1:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8028) 		DBG_PRINT(ERR_DBG, "%s: 1-Buffer receive mode enabled\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8029) 			  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8030) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8031) 	case RXD_MODE_3B:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8032) 		DBG_PRINT(ERR_DBG, "%s: 2-Buffer receive mode enabled\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8033) 			  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8034) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8035) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8036) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8037) 	switch (sp->config.napi) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8038) 	case 0:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8039) 		DBG_PRINT(ERR_DBG, "%s: NAPI disabled\n", dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8040) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8041) 	case 1:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8042) 		DBG_PRINT(ERR_DBG, "%s: NAPI enabled\n", dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8043) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8044) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8045) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8046) 	DBG_PRINT(ERR_DBG, "%s: Using %d Tx fifo(s)\n", dev->name,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8047) 		  sp->config.tx_fifo_num);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8048) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8049) 	DBG_PRINT(ERR_DBG, "%s: Using %d Rx ring(s)\n", dev->name,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8050) 		  sp->config.rx_ring_num);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8051) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8052) 	switch (sp->config.intr_type) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8053) 	case INTA:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8054) 		DBG_PRINT(ERR_DBG, "%s: Interrupt type INTA\n", dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8055) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8056) 	case MSI_X:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8057) 		DBG_PRINT(ERR_DBG, "%s: Interrupt type MSI-X\n", dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8058) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8059) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8060) 	if (sp->config.multiq) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8061) 		for (i = 0; i < sp->config.tx_fifo_num; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8062) 			struct fifo_info *fifo = &mac_control->fifos[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8063) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8064) 			fifo->multiq = config->multiq;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8065) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8066) 		DBG_PRINT(ERR_DBG, "%s: Multiqueue support enabled\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8067) 			  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8068) 	} else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8069) 		DBG_PRINT(ERR_DBG, "%s: Multiqueue support disabled\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8070) 			  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8071) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8072) 	switch (sp->config.tx_steering_type) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8073) 	case NO_STEERING:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8074) 		DBG_PRINT(ERR_DBG, "%s: No steering enabled for transmit\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8075) 			  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8076) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8077) 	case TX_PRIORITY_STEERING:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8078) 		DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8079) 			  "%s: Priority steering enabled for transmit\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8080) 			  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8081) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8082) 	case TX_DEFAULT_STEERING:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8083) 		DBG_PRINT(ERR_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8084) 			  "%s: Default steering enabled for transmit\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8085) 			  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8086) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8087) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8088) 	DBG_PRINT(ERR_DBG, "%s: Large receive offload enabled\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8089) 		  dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8090) 	/* Initialize device name */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8091) 	snprintf(sp->name, sizeof(sp->name), "%s Neterion %s", dev->name,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8092) 		 sp->product_name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8093) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8094) 	if (vlan_tag_strip)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8095) 		sp->vlan_strip_flag = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8096) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8097) 		sp->vlan_strip_flag = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8098) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8099) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8100) 	 * Make Link state as off at this point, when the Link change
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8101) 	 * interrupt comes the state will be automatically changed to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8102) 	 * the right state.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8103) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8104) 	netif_carrier_off(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8105) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8106) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8107) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8108) register_failed:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8109) set_swap_failed:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8110) 	iounmap(sp->bar1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8111) bar1_remap_failed:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8112) 	iounmap(sp->bar0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8113) bar0_remap_failed:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8114) mem_alloc_failed:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8115) 	free_shared_mem(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8116) 	pci_disable_device(pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8117) 	pci_release_regions(pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8118) 	free_netdev(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8119) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8120) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8121) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8122) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8123) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8124)  * s2io_rem_nic - Free the PCI device
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8125)  * @pdev: structure containing the PCI related information of the device.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8126)  * Description: This function is called by the Pci subsystem to release a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8127)  * PCI device and free up all resource held up by the device. This could
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8128)  * be in response to a Hot plug event or when the driver is to be removed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8129)  * from memory.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8130)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8131) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8132) static void s2io_rem_nic(struct pci_dev *pdev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8133) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8134) 	struct net_device *dev = pci_get_drvdata(pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8135) 	struct s2io_nic *sp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8136) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8137) 	if (dev == NULL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8138) 		DBG_PRINT(ERR_DBG, "Driver Data is NULL!!\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8139) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8140) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8141) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8142) 	sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8143) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8144) 	cancel_work_sync(&sp->rst_timer_task);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8145) 	cancel_work_sync(&sp->set_link_task);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8146) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8147) 	unregister_netdev(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8148) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8149) 	free_shared_mem(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8150) 	iounmap(sp->bar0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8151) 	iounmap(sp->bar1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8152) 	pci_release_regions(pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8153) 	free_netdev(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8154) 	pci_disable_device(pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8155) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8156) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8157) module_pci_driver(s2io_driver);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8158) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8159) static int check_L2_lro_capable(u8 *buffer, struct iphdr **ip,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8160) 				struct tcphdr **tcp, struct RxD_t *rxdp,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8161) 				struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8162) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8163) 	int ip_off;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8164) 	u8 l2_type = (u8)((rxdp->Control_1 >> 37) & 0x7), ip_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8165) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8166) 	if (!(rxdp->Control_1 & RXD_FRAME_PROTO_TCP)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8167) 		DBG_PRINT(INIT_DBG,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8168) 			  "%s: Non-TCP frames not supported for LRO\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8169) 			  __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8170) 		return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8171) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8172) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8173) 	/* Checking for DIX type or DIX type with VLAN */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8174) 	if ((l2_type == 0) || (l2_type == 4)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8175) 		ip_off = HEADER_ETHERNET_II_802_3_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8176) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8177) 		 * If vlan stripping is disabled and the frame is VLAN tagged,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8178) 		 * shift the offset by the VLAN header size bytes.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8179) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8180) 		if ((!sp->vlan_strip_flag) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8181) 		    (rxdp->Control_1 & RXD_FRAME_VLAN_TAG))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8182) 			ip_off += HEADER_VLAN_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8183) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8184) 		/* LLC, SNAP etc are considered non-mergeable */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8185) 		return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8186) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8187) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8188) 	*ip = (struct iphdr *)(buffer + ip_off);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8189) 	ip_len = (u8)((*ip)->ihl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8190) 	ip_len <<= 2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8191) 	*tcp = (struct tcphdr *)((unsigned long)*ip + ip_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8192) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8193) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8194) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8195) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8196) static int check_for_socket_match(struct lro *lro, struct iphdr *ip,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8197) 				  struct tcphdr *tcp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8198) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8199) 	DBG_PRINT(INFO_DBG, "%s: Been here...\n", __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8200) 	if ((lro->iph->saddr != ip->saddr) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8201) 	    (lro->iph->daddr != ip->daddr) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8202) 	    (lro->tcph->source != tcp->source) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8203) 	    (lro->tcph->dest != tcp->dest))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8204) 		return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8205) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8206) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8207) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8208) static inline int get_l4_pyld_length(struct iphdr *ip, struct tcphdr *tcp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8209) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8210) 	return ntohs(ip->tot_len) - (ip->ihl << 2) - (tcp->doff << 2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8211) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8212) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8213) static void initiate_new_session(struct lro *lro, u8 *l2h,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8214) 				 struct iphdr *ip, struct tcphdr *tcp,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8215) 				 u32 tcp_pyld_len, u16 vlan_tag)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8216) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8217) 	DBG_PRINT(INFO_DBG, "%s: Been here...\n", __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8218) 	lro->l2h = l2h;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8219) 	lro->iph = ip;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8220) 	lro->tcph = tcp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8221) 	lro->tcp_next_seq = tcp_pyld_len + ntohl(tcp->seq);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8222) 	lro->tcp_ack = tcp->ack_seq;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8223) 	lro->sg_num = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8224) 	lro->total_len = ntohs(ip->tot_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8225) 	lro->frags_len = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8226) 	lro->vlan_tag = vlan_tag;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8227) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8228) 	 * Check if we saw TCP timestamp.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8229) 	 * Other consistency checks have already been done.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8230) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8231) 	if (tcp->doff == 8) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8232) 		__be32 *ptr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8233) 		ptr = (__be32 *)(tcp+1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8234) 		lro->saw_ts = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8235) 		lro->cur_tsval = ntohl(*(ptr+1));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8236) 		lro->cur_tsecr = *(ptr+2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8237) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8238) 	lro->in_use = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8239) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8240) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8241) static void update_L3L4_header(struct s2io_nic *sp, struct lro *lro)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8242) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8243) 	struct iphdr *ip = lro->iph;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8244) 	struct tcphdr *tcp = lro->tcph;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8245) 	struct swStat *swstats = &sp->mac_control.stats_info->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8246) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8247) 	DBG_PRINT(INFO_DBG, "%s: Been here...\n", __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8248) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8249) 	/* Update L3 header */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8250) 	csum_replace2(&ip->check, ip->tot_len, htons(lro->total_len));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8251) 	ip->tot_len = htons(lro->total_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8252) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8253) 	/* Update L4 header */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8254) 	tcp->ack_seq = lro->tcp_ack;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8255) 	tcp->window = lro->window;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8256) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8257) 	/* Update tsecr field if this session has timestamps enabled */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8258) 	if (lro->saw_ts) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8259) 		__be32 *ptr = (__be32 *)(tcp + 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8260) 		*(ptr+2) = lro->cur_tsecr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8261) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8262) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8263) 	/* Update counters required for calculation of
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8264) 	 * average no. of packets aggregated.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8265) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8266) 	swstats->sum_avg_pkts_aggregated += lro->sg_num;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8267) 	swstats->num_aggregations++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8268) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8269) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8270) static void aggregate_new_rx(struct lro *lro, struct iphdr *ip,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8271) 			     struct tcphdr *tcp, u32 l4_pyld)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8272) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8273) 	DBG_PRINT(INFO_DBG, "%s: Been here...\n", __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8274) 	lro->total_len += l4_pyld;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8275) 	lro->frags_len += l4_pyld;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8276) 	lro->tcp_next_seq += l4_pyld;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8277) 	lro->sg_num++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8278) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8279) 	/* Update ack seq no. and window ad(from this pkt) in LRO object */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8280) 	lro->tcp_ack = tcp->ack_seq;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8281) 	lro->window = tcp->window;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8282) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8283) 	if (lro->saw_ts) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8284) 		__be32 *ptr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8285) 		/* Update tsecr and tsval from this packet */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8286) 		ptr = (__be32 *)(tcp+1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8287) 		lro->cur_tsval = ntohl(*(ptr+1));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8288) 		lro->cur_tsecr = *(ptr + 2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8289) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8290) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8291) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8292) static int verify_l3_l4_lro_capable(struct lro *l_lro, struct iphdr *ip,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8293) 				    struct tcphdr *tcp, u32 tcp_pyld_len)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8294) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8295) 	u8 *ptr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8296) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8297) 	DBG_PRINT(INFO_DBG, "%s: Been here...\n", __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8298) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8299) 	if (!tcp_pyld_len) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8300) 		/* Runt frame or a pure ack */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8301) 		return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8302) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8303) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8304) 	if (ip->ihl != 5) /* IP has options */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8305) 		return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8306) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8307) 	/* If we see CE codepoint in IP header, packet is not mergeable */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8308) 	if (INET_ECN_is_ce(ipv4_get_dsfield(ip)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8309) 		return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8310) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8311) 	/* If we see ECE or CWR flags in TCP header, packet is not mergeable */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8312) 	if (tcp->urg || tcp->psh || tcp->rst ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8313) 	    tcp->syn || tcp->fin ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8314) 	    tcp->ece || tcp->cwr || !tcp->ack) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8315) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8316) 		 * Currently recognize only the ack control word and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8317) 		 * any other control field being set would result in
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8318) 		 * flushing the LRO session
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8319) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8320) 		return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8321) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8322) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8323) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8324) 	 * Allow only one TCP timestamp option. Don't aggregate if
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8325) 	 * any other options are detected.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8326) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8327) 	if (tcp->doff != 5 && tcp->doff != 8)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8328) 		return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8329) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8330) 	if (tcp->doff == 8) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8331) 		ptr = (u8 *)(tcp + 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8332) 		while (*ptr == TCPOPT_NOP)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8333) 			ptr++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8334) 		if (*ptr != TCPOPT_TIMESTAMP || *(ptr+1) != TCPOLEN_TIMESTAMP)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8335) 			return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8336) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8337) 		/* Ensure timestamp value increases monotonically */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8338) 		if (l_lro)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8339) 			if (l_lro->cur_tsval > ntohl(*((__be32 *)(ptr+2))))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8340) 				return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8341) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8342) 		/* timestamp echo reply should be non-zero */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8343) 		if (*((__be32 *)(ptr+6)) == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8344) 			return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8345) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8346) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8347) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8348) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8349) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8350) static int s2io_club_tcp_session(struct ring_info *ring_data, u8 *buffer,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8351) 				 u8 **tcp, u32 *tcp_len, struct lro **lro,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8352) 				 struct RxD_t *rxdp, struct s2io_nic *sp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8353) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8354) 	struct iphdr *ip;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8355) 	struct tcphdr *tcph;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8356) 	int ret = 0, i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8357) 	u16 vlan_tag = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8358) 	struct swStat *swstats = &sp->mac_control.stats_info->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8359) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8360) 	ret = check_L2_lro_capable(buffer, &ip, (struct tcphdr **)tcp,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8361) 				   rxdp, sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8362) 	if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8363) 		return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8364) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8365) 	DBG_PRINT(INFO_DBG, "IP Saddr: %x Daddr: %x\n", ip->saddr, ip->daddr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8366) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8367) 	vlan_tag = RXD_GET_VLAN_TAG(rxdp->Control_2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8368) 	tcph = (struct tcphdr *)*tcp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8369) 	*tcp_len = get_l4_pyld_length(ip, tcph);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8370) 	for (i = 0; i < MAX_LRO_SESSIONS; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8371) 		struct lro *l_lro = &ring_data->lro0_n[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8372) 		if (l_lro->in_use) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8373) 			if (check_for_socket_match(l_lro, ip, tcph))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8374) 				continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8375) 			/* Sock pair matched */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8376) 			*lro = l_lro;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8377) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8378) 			if ((*lro)->tcp_next_seq != ntohl(tcph->seq)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8379) 				DBG_PRINT(INFO_DBG, "%s: Out of sequence. "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8380) 					  "expected 0x%x, actual 0x%x\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8381) 					  __func__,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8382) 					  (*lro)->tcp_next_seq,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8383) 					  ntohl(tcph->seq));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8384) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8385) 				swstats->outof_sequence_pkts++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8386) 				ret = 2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8387) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8388) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8389) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8390) 			if (!verify_l3_l4_lro_capable(l_lro, ip, tcph,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8391) 						      *tcp_len))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8392) 				ret = 1; /* Aggregate */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8393) 			else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8394) 				ret = 2; /* Flush both */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8395) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8396) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8397) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8398) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8399) 	if (ret == 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8400) 		/* Before searching for available LRO objects,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8401) 		 * check if the pkt is L3/L4 aggregatable. If not
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8402) 		 * don't create new LRO session. Just send this
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8403) 		 * packet up.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8404) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8405) 		if (verify_l3_l4_lro_capable(NULL, ip, tcph, *tcp_len))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8406) 			return 5;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8407) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8408) 		for (i = 0; i < MAX_LRO_SESSIONS; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8409) 			struct lro *l_lro = &ring_data->lro0_n[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8410) 			if (!(l_lro->in_use)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8411) 				*lro = l_lro;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8412) 				ret = 3; /* Begin anew */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8413) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8414) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8415) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8416) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8417) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8418) 	if (ret == 0) { /* sessions exceeded */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8419) 		DBG_PRINT(INFO_DBG, "%s: All LRO sessions already in use\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8420) 			  __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8421) 		*lro = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8422) 		return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8423) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8424) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8425) 	switch (ret) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8426) 	case 3:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8427) 		initiate_new_session(*lro, buffer, ip, tcph, *tcp_len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8428) 				     vlan_tag);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8429) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8430) 	case 2:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8431) 		update_L3L4_header(sp, *lro);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8432) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8433) 	case 1:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8434) 		aggregate_new_rx(*lro, ip, tcph, *tcp_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8435) 		if ((*lro)->sg_num == sp->lro_max_aggr_per_sess) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8436) 			update_L3L4_header(sp, *lro);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8437) 			ret = 4; /* Flush the LRO */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8438) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8439) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8440) 	default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8441) 		DBG_PRINT(ERR_DBG, "%s: Don't know, can't say!!\n", __func__);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8442) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8443) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8444) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8445) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8446) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8447) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8448) static void clear_lro_session(struct lro *lro)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8449) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8450) 	static u16 lro_struct_size = sizeof(struct lro);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8451) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8452) 	memset(lro, 0, lro_struct_size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8453) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8454) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8455) static void queue_rx_frame(struct sk_buff *skb, u16 vlan_tag)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8456) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8457) 	struct net_device *dev = skb->dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8458) 	struct s2io_nic *sp = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8459) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8460) 	skb->protocol = eth_type_trans(skb, dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8461) 	if (vlan_tag && sp->vlan_strip_flag)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8462) 		__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8463) 	if (sp->config.napi)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8464) 		netif_receive_skb(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8465) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8466) 		netif_rx(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8467) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8468) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8469) static void lro_append_pkt(struct s2io_nic *sp, struct lro *lro,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8470) 			   struct sk_buff *skb, u32 tcp_len)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8471) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8472) 	struct sk_buff *first = lro->parent;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8473) 	struct swStat *swstats = &sp->mac_control.stats_info->sw_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8474) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8475) 	first->len += tcp_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8476) 	first->data_len = lro->frags_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8477) 	skb_pull(skb, (skb->len - tcp_len));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8478) 	if (skb_shinfo(first)->frag_list)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8479) 		lro->last_frag->next = skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8480) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8481) 		skb_shinfo(first)->frag_list = skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8482) 	first->truesize += skb->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8483) 	lro->last_frag = skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8484) 	swstats->clubbed_frms_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8485) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8486) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8487) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8488)  * s2io_io_error_detected - called when PCI error is detected
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8489)  * @pdev: Pointer to PCI device
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8490)  * @state: The current pci connection state
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8491)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8492)  * This function is called after a PCI bus error affecting
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8493)  * this device has been detected.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8494)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8495) static pci_ers_result_t s2io_io_error_detected(struct pci_dev *pdev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8496) 					       pci_channel_state_t state)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8497) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8498) 	struct net_device *netdev = pci_get_drvdata(pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8499) 	struct s2io_nic *sp = netdev_priv(netdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8500) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8501) 	netif_device_detach(netdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8502) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8503) 	if (state == pci_channel_io_perm_failure)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8504) 		return PCI_ERS_RESULT_DISCONNECT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8505) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8506) 	if (netif_running(netdev)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8507) 		/* Bring down the card, while avoiding PCI I/O */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8508) 		do_s2io_card_down(sp, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8509) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8510) 	pci_disable_device(pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8511) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8512) 	return PCI_ERS_RESULT_NEED_RESET;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8513) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8514) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8515) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8516)  * s2io_io_slot_reset - called after the pci bus has been reset.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8517)  * @pdev: Pointer to PCI device
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8518)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8519)  * Restart the card from scratch, as if from a cold-boot.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8520)  * At this point, the card has exprienced a hard reset,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8521)  * followed by fixups by BIOS, and has its config space
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8522)  * set up identically to what it was at cold boot.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8523)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8524) static pci_ers_result_t s2io_io_slot_reset(struct pci_dev *pdev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8525) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8526) 	struct net_device *netdev = pci_get_drvdata(pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8527) 	struct s2io_nic *sp = netdev_priv(netdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8528) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8529) 	if (pci_enable_device(pdev)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8530) 		pr_err("Cannot re-enable PCI device after reset.\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8531) 		return PCI_ERS_RESULT_DISCONNECT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8532) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8533) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8534) 	pci_set_master(pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8535) 	s2io_reset(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8536) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8537) 	return PCI_ERS_RESULT_RECOVERED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8538) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8539) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8540) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8541)  * s2io_io_resume - called when traffic can start flowing again.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8542)  * @pdev: Pointer to PCI device
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8543)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8544)  * This callback is called when the error recovery driver tells
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8545)  * us that its OK to resume normal operation.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8546)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8547) static void s2io_io_resume(struct pci_dev *pdev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8548) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8549) 	struct net_device *netdev = pci_get_drvdata(pdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8550) 	struct s2io_nic *sp = netdev_priv(netdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8551) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8552) 	if (netif_running(netdev)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8553) 		if (s2io_card_up(sp)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8554) 			pr_err("Can't bring device back up after reset.\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8555) 			return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8556) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8557) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8558) 		if (do_s2io_prog_unicast(netdev, netdev->dev_addr) == FAILURE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8559) 			s2io_card_down(sp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8560) 			pr_err("Can't restore mac addr after reset.\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8561) 			return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8562) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8563) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8564) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8565) 	netif_device_attach(netdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8566) 	netif_tx_wake_all_queues(netdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 8567) }