Orange Pi5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards

/* 8390.c: A general NS8390 ethernet driver core for linux. */
/*
	Written 1992-94 by Donald Becker.

	Copyright 1993 United States Government as represented by the
	Director, National Security Agency.

	This software may be used and distributed according to the terms
	of the GNU General Public License, incorporated herein by reference.

	The author may be reached as becker@scyld.com, or C/O
	Scyld Computing Corporation
	410 Severn Ave., Suite 210
	Annapolis MD 21403


  This is the chip-specific code for many 8390-based ethernet adaptors.
  This is not a complete driver, it must be combined with board-specific
  code such as ne.c, wd.c, 3c503.c, etc.

  Seeing how at least eight drivers use this code, (not counting the
  PCMCIA ones either) it is easy to break some card by what seems like
  a simple innocent change. Please contact me or Donald if you think
  you have found something that needs changing. -- PG

  Changelog:

  Paul Gortmaker	: remove set_bit lock, other cleanups.
  Paul Gortmaker	: add ei_get_8390_hdr() so we can pass skb's to
			  ei_block_input() for eth_io_copy_and_sum().
  Paul Gortmaker	: exchange static int ei_pingpong for a #define,
			  also add better Tx error handling.
  Paul Gortmaker	: rewrite Rx overrun handling as per NS specs.
  Alexey Kuznetsov	: use the 8390's six bit hash multicast filter.
  Paul Gortmaker	: tweak ANK's above multicast changes a bit.
  Paul Gortmaker	: update packet statistics for v2.1.x
  Alan Cox		: support arbitrary stupid port mappings on the
			  68K Macintosh. Support >16bit I/O spaces
  Paul Gortmaker	: add kmod support for auto-loading of the 8390
			  module by all drivers that require it.
  Alan Cox		: Spinlocking work, added 'BUG_83C690'
  Paul Gortmaker	: Separate out Tx timeout code from Tx path.
  Paul Gortmaker	: Remove old unused single Tx buffer code.
  Hayato Fujiwara	: Add m32r support.
  Paul Gortmaker	: use skb_padto() instead of stack scratch area

  Sources:
  The National Semiconductor LAN Databook, and the 3Com 3c503 databook.

  */

#include <linux/build_bug.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/jiffies.h>
#include <linux/fs.h>
#include <linux/types.h>
#include <linux/string.h>
#include <linux/bitops.h>
#include <linux/uaccess.h>
#include <linux/io.h>
#include <asm/irq.h>
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/fcntl.h>
#include <linux/in.h>
#include <linux/interrupt.h>
#include <linux/init.h>
#include <linux/crc32.h>

#include <linux/netdevice.h>
#include <linux/etherdevice.h>

#define NS8390_CORE
#include "8390.h"

#define BUG_83C690

/* These are the operational function interfaces to board-specific
   routines.
	void reset_8390(struct net_device *dev)
		Resets the board associated with DEV, including a hardware reset of
		the 8390.  This is only called when there is a transmit timeout, and
		it is always followed by 8390_init().
	void block_output(struct net_device *dev, int count, const unsigned char *buf,
			  int start_page)
		Write the COUNT bytes of BUF to the packet buffer at START_PAGE.  The
		"page" value uses the 8390's 256-byte pages.
	void get_8390_hdr(struct net_device *dev, struct e8390_hdr *hdr, int ring_page)
		Read the 4 byte, page aligned 8390 header. *If* there is a
		subsequent read, it will be of the rest of the packet.
	void block_input(struct net_device *dev, int count, struct sk_buff *skb, int ring_offset)
		Read COUNT bytes from the packet buffer into the skb data area. Start
		reading from RING_OFFSET, the address as the 8390 sees it.  This will always
		follow the read of the 8390 header.
*/
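As an aside for readers writing a board driver: the four hooks documented above amount to a small ops table that this core dispatches through. The following is a self-contained userspace sketch of that dispatch pattern, not kernel code; every name in it (`demo_*`, `fake_*`) is invented for illustration, and only the calling convention mirrors the hooks above.

```c
/* Userspace model of the 8390 board-specific ops table described above.
 * A board driver supplies these callbacks; the core calls them to move
 * bytes to and from the card's packet RAM.  All names here are
 * illustrative stand-ins, not the real kernel types. */
#include <assert.h>
#include <string.h>

struct demo_hdr {                        /* stands in for struct e8390_hdr */
	unsigned char status;
	unsigned char next;
	unsigned short count;
};

struct demo_dev;                         /* stands in for struct net_device */

struct demo_ops {
	void (*reset_8390)(struct demo_dev *dev);
	void (*block_output)(struct demo_dev *dev, int count,
			     const unsigned char *buf, int start_page);
	void (*get_8390_hdr)(struct demo_dev *dev, struct demo_hdr *hdr,
			     int ring_page);
	void (*block_input)(struct demo_dev *dev, int count,
			    unsigned char *buf, int ring_offset);
};

struct demo_dev {
	struct demo_ops *ops;
	unsigned char packet_ram[256 * 8];   /* 8 of the 8390's 256-byte pages */
	int was_reset;
};

static void fake_reset(struct demo_dev *dev)
{
	dev->was_reset = 1;
}

/* "page" addressing: START_PAGE selects a 256-byte page in packet RAM. */
static void fake_block_output(struct demo_dev *dev, int count,
			      const unsigned char *buf, int start_page)
{
	memcpy(dev->packet_ram + start_page * 256, buf, count);
}

/* The 4-byte header sits page-aligned at the start of the ring page. */
static void fake_get_hdr(struct demo_dev *dev, struct demo_hdr *hdr,
			 int ring_page)
{
	memcpy(hdr, dev->packet_ram + ring_page * 256, 4);
}

/* RING_OFFSET is a byte address as the 8390 sees it. */
static void fake_block_input(struct demo_dev *dev, int count,
			     unsigned char *buf, int ring_offset)
{
	memcpy(buf, dev->packet_ram + ring_offset, count);
}

struct demo_ops fake_ops = {
	fake_reset, fake_block_output, fake_get_hdr, fake_block_input,
};
```

In the real driver the equivalent pointers live in `struct ei_device` and are reached through the `ei_reset_8390`/`ei_block_output`/`ei_block_input`/`ei_get_8390_hdr` macros defined just below.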
#define ei_reset_8390 (ei_local->reset_8390)
#define ei_block_output (ei_local->block_output)
#define ei_block_input (ei_local->block_input)
#define ei_get_8390_hdr (ei_local->get_8390_hdr)

/* Index to functions. */
static void ei_tx_intr(struct net_device *dev);
static void ei_tx_err(struct net_device *dev);
static void ei_receive(struct net_device *dev);
static void ei_rx_overrun(struct net_device *dev);

/* Routines generic to NS8390-based boards. */
static void NS8390_trigger_send(struct net_device *dev, unsigned int length,
				int start_page);
static void do_set_multicast_list(struct net_device *dev);
static void __NS8390_init(struct net_device *dev, int startp);

static unsigned version_printed;
static int msg_enable;
static const int default_msg_level = (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_RX_ERR |
				      NETIF_MSG_TX_ERR);
module_param(msg_enable, int, 0444);
MODULE_PARM_DESC(msg_enable, "Debug message level (see linux/netdevice.h for bitmap)");

/*
 *	SMP and the 8390 setup.
 *
 *	The 8390 isn't exactly designed to be multithreaded on RX/TX. There is
 *	a page register that controls bank and packet buffer access. We guard
 *	this with ei_local->page_lock. Nobody should assume or set the page other
 *	than zero when the lock is not held. Lock holders must restore page 0
 *	before unlocking. Even pure readers must take the lock to protect in
 *	page 0.
 *
 *	To make life difficult the chip can also be very slow. We therefore can't
 *	just use spinlocks. For the longer lockups we disable the irq the device
 *	sits on and hold the lock. We must hold the lock because there is a dual
 *	processor case other than interrupts (get stats/set multicast list in
 *	parallel with each other and transmit).
 *
 *	Note: in theory we can just disable the irq on the card _but_ there is
 *	a latency on SMP irq delivery. So we can easily go "disable irq" "sync irqs"
 *	enter lock, take the queued irq. So we waddle instead of flying.
 *
 *	Finally by special arrangement for the purpose of being generally
 *	annoying the transmit function is called bh atomic. That places
 *	restrictions on the user context callers as disable_irq won't save
 *	them.
 *
 *	Additional explanation of problems with locking by Alan Cox:
 *
 *	"The author (me) didn't use spin_lock_irqsave because the slowness of the
 *	card means that approach caused horrible problems like losing serial data
 *	at 38400 baud on some chips. Remember many 8390 nics on PCI were ISA
 *	chips with FPGA front ends.
 *
 *	Ok the logic behind the 8390 is very simple:
 *
 *	Things to know
 *		- IRQ delivery is asynchronous to the PCI bus
 *		- Blocking the local CPU IRQ via spin locks was too slow
 *		- The chip has register windows needing locking work
 *
 *	So the path was once (I say once as people appear to have changed it
 *	in the mean time and it now looks rather bogus if the changes to use
 *	disable_irq_nosync_irqsave are disabling the local IRQ)
 *
 *
 *		Take the page lock
 *		Mask the IRQ on chip
 *		Disable the IRQ (but not mask locally- someone seems to have
 *			broken this with the lock validator stuff)
 *			[This must be _nosync as the page lock may otherwise
 *				deadlock us]
 *		Drop the page lock and turn IRQs back on
 *
 *		At this point an existing IRQ may still be running but we can't
 *		get a new one
 *
 *		Take the lock (so we know the IRQ has terminated) but don't mask
 *	the IRQs on the processor
 *		Set irqlock [for debug]
 *
 *		Transmit (slow as ****)
 *
 *		re-enable the IRQ
 *
 *
 *	We have to use disable_irq because otherwise you will get delayed
 *	interrupts on the APIC bus deadlocking the transmit path.
 *
 *	Quite hairy but the chip simply wasn't designed for SMP and you can't
 *	even ACK an interrupt without risking corrupting other parallel
 *	activities on the chip." [lkml, 25 Jul 2007]
 */
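The two-phase ordering Alan Cox describes above can be modeled outside the kernel. The sketch below is a hedged, single-threaded stand-in in plain C: the flags merely represent the real primitives (writes to EN0_IMR, the page spinlock, `disable_irq_nosync()`/`enable_irq()`), so it demonstrates only the ordering of operations, not the actual concurrency or hardware behavior.

```c
/* Userspace model of the transmit-path lock/IRQ ordering described in
 * the comment above.  Flags stand in for kernel primitives:
 *   page_lock    -> ei_local->page_lock
 *   chip_imr     -> the 8390's interrupt mask register (EN0_IMR)
 *   line_enabled -> disable_irq_nosync() / enable_irq() on the IRQ line
 *   irqlock      -> the driver's debug flag of the same name
 * Nothing here is kernel API. */
#include <assert.h>

struct model {
	int page_lock;
	int chip_imr;
	int line_enabled;
	int irqlock;
};

static void tx_path(struct model *m)
{
	/* Phase 1: own the register window just long enough to mask the
	 * chip, then disable the IRQ line without waiting (nosync), since
	 * a running handler may be spinning on the page lock we hold. */
	m->page_lock = 1;
	m->chip_imr = 0;
	m->line_enabled = 0;
	m->page_lock = 0;	/* a late, already-queued IRQ may still run */

	/* Phase 2: retaking the lock guarantees any in-flight handler has
	 * terminated; no new one can arrive with the line disabled. */
	m->page_lock = 1;
	m->irqlock = 1;
	/* ... slow transmit to the card happens here ... */
	m->irqlock = 0;
	m->chip_imr = 1;	/* unmask the chip again */
	m->page_lock = 0;
	m->line_enabled = 1;	/* finally re-enable the IRQ line */
}
```

The real sequence, with the genuine primitives, appears in `__ei_start_xmit()` further down in this file.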


/**
 * ei_open - Open/initialize the board.
 * @dev: network device to initialize
 *
 * This routine goes all-out, setting everything
 * up anew at each open, even though many of these registers should only
 * need to be set once at boot.
 */
static int __ei_open(struct net_device *dev)
{
	unsigned long flags;
	struct ei_device *ei_local = netdev_priv(dev);

	if (dev->watchdog_timeo <= 0)
		dev->watchdog_timeo = TX_TIMEOUT;

	/*
	 *	Grab the page lock so we own the register set, then call
	 *	the init function.
	 */

	spin_lock_irqsave(&ei_local->page_lock, flags);
	__NS8390_init(dev, 1);
	/* Set the flag before we drop the lock. That way the IRQ arrives
	   after it's set and we get no silly warnings */
	netif_start_queue(dev);
	spin_unlock_irqrestore(&ei_local->page_lock, flags);
	ei_local->irqlock = 0;
	return 0;
}

/**
 * ei_close - shut down network device
 * @dev: network device to close
 *
 * Opposite of ei_open(). Only used when "ifconfig <devname> down" is done.
 */
static int __ei_close(struct net_device *dev)
{
	struct ei_device *ei_local = netdev_priv(dev);
	unsigned long flags;

	/*
	 *	Hold the page lock during close
	 */

	spin_lock_irqsave(&ei_local->page_lock, flags);
	__NS8390_init(dev, 0);
	spin_unlock_irqrestore(&ei_local->page_lock, flags);
	netif_stop_queue(dev);
	return 0;
}

/**
 * ei_tx_timeout - handle transmit time out condition
 * @dev: network device which has apparently fallen asleep
 * @txqueue: index of the hung transmit queue (unused)
 *
 * Called by kernel when device never acknowledges a transmit has
 * completed (or failed) - i.e. never posted a Tx related interrupt.
 */

static void __ei_tx_timeout(struct net_device *dev, unsigned int txqueue)
{
	unsigned long e8390_base = dev->base_addr;
	struct ei_device *ei_local = netdev_priv(dev);
	int txsr, isr, tickssofar = jiffies - dev_trans_start(dev);
	unsigned long flags;

	dev->stats.tx_errors++;

	spin_lock_irqsave(&ei_local->page_lock, flags);
	txsr = ei_inb(e8390_base+EN0_TSR);
	isr = ei_inb(e8390_base+EN0_ISR);
	spin_unlock_irqrestore(&ei_local->page_lock, flags);

	netdev_dbg(dev, "Tx timed out, %s TSR=%#2x, ISR=%#2x, t=%d\n",
		   (txsr & ENTSR_ABT) ? "excess collisions." :
		   (isr) ? "lost interrupt?" : "cable problem?",
		   txsr, isr, tickssofar);

	if (!isr && !dev->stats.tx_packets) {
		/* The 8390 probably hasn't gotten on the cable yet. */
		ei_local->interface_num ^= 1;   /* Try a different xcvr.  */
	}

	/* Ugly but a reset can be slow, yet must be protected */

	disable_irq_nosync_lockdep(dev->irq);
	spin_lock(&ei_local->page_lock);

	/* Try to restart the card.  Perhaps the user has fixed something. */
	ei_reset_8390(dev);
	__NS8390_init(dev, 1);

	spin_unlock(&ei_local->page_lock);
	enable_irq_lockdep(dev->irq);
	netif_wake_queue(dev);
}

/**
 * ei_start_xmit - begin packet transmission
 * @skb: packet to be sent
 * @dev: network device to which packet is sent
 *
 * Sends a packet to an 8390 network device.
 */

static netdev_tx_t __ei_start_xmit(struct sk_buff *skb,
				   struct net_device *dev)
{
	unsigned long e8390_base = dev->base_addr;
	struct ei_device *ei_local = netdev_priv(dev);
	int send_length = skb->len, output_page;
	unsigned long flags;
	char buf[ETH_ZLEN];
	char *data = skb->data;

	if (skb->len < ETH_ZLEN) {
		memset(buf, 0, ETH_ZLEN);	/* more efficient than doing just the needed bits */
		memcpy(buf, data, skb->len);
		send_length = ETH_ZLEN;
		data = buf;
	}

	/* Mask interrupts from the ethercard.
	   SMP: We have to grab the lock here otherwise the IRQ handler
	   on another CPU can flip window and race the IRQ mask set. We end
	   up trashing the mcast filter not disabling irqs if we don't lock */

	spin_lock_irqsave(&ei_local->page_lock, flags);
	ei_outb_p(0x00, e8390_base + EN0_IMR);
	spin_unlock_irqrestore(&ei_local->page_lock, flags);

	/*
	 *	Slow phase with lock held.
	 */

	disable_irq_nosync_lockdep_irqsave(dev->irq, &flags);

	spin_lock(&ei_local->page_lock);

	ei_local->irqlock = 1;

	/*
	 * We have two Tx slots available for use. Find the first free
	 * slot, and then perform some sanity checks. With two Tx bufs,
	 * you get very close to transmitting back-to-back packets. With
	 * only one Tx buf, the transmitter sits idle while you reload the
	 * card, leaving a substantial gap between each transmitted packet.
	 */

	if (ei_local->tx1 == 0) {
		output_page = ei_local->tx_start_page;
		ei_local->tx1 = send_length;
		if ((netif_msg_tx_queued(ei_local)) &&
		    ei_local->tx2 > 0)
			netdev_dbg(dev,
				   "idle transmitter tx2=%d, lasttx=%d, txing=%d\n",
				   ei_local->tx2, ei_local->lasttx, ei_local->txing);
	} else if (ei_local->tx2 == 0) {
		output_page = ei_local->tx_start_page + TX_PAGES/2;
		ei_local->tx2 = send_length;
		if ((netif_msg_tx_queued(ei_local)) &&
		    ei_local->tx1 > 0)
			netdev_dbg(dev,
				   "idle transmitter, tx1=%d, lasttx=%d, txing=%d\n",
				   ei_local->tx1, ei_local->lasttx, ei_local->txing);
	} else {			/* We should never get here. */
		netif_dbg(ei_local, tx_err, dev,
			  "No Tx buffers free! tx1=%d tx2=%d last=%d\n",
			  ei_local->tx1, ei_local->tx2, ei_local->lasttx);
		ei_local->irqlock = 0;
		netif_stop_queue(dev);
		ei_outb_p(ENISR_ALL, e8390_base + EN0_IMR);
		spin_unlock(&ei_local->page_lock);
		enable_irq_lockdep_irqrestore(dev->irq, &flags);
		dev->stats.tx_errors++;
		return NETDEV_TX_BUSY;
	}

	/*
	 * Okay, now upload the packet and trigger a send if the transmitter
	 * isn't already sending. If it is busy, the interrupt handler will
	 * trigger the send later, upon receiving a Tx done interrupt.
	 */

	ei_block_output(dev, send_length, data, output_page);

	if (!ei_local->txing) {
		ei_local->txing = 1;
		NS8390_trigger_send(dev, send_length, output_page);
		if (output_page == ei_local->tx_start_page) {
			ei_local->tx1 = -1;
			ei_local->lasttx = -1;
		} else {
			ei_local->tx2 = -1;
			ei_local->lasttx = -2;
		}
	} else
		ei_local->txqueue++;

	if (ei_local->tx1 && ei_local->tx2)
		netif_stop_queue(dev);
	else
		netif_start_queue(dev);

	/* Turn 8390 interrupts back on. */
	ei_local->irqlock = 0;
	ei_outb_p(ENISR_ALL, e8390_base + EN0_IMR);

	spin_unlock(&ei_local->page_lock);
	enable_irq_lockdep_irqrestore(dev->irq, &flags);
	skb_tx_timestamp(skb);
	dev_consume_skb_any(skb);
	dev->stats.tx_bytes += send_length;

	return NETDEV_TX_OK;
}

^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  416) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  417)  * ei_interrupt - handle the interrupts from an 8390
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  418)  * @irq: interrupt number
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  419)  * @dev_id: a pointer to the net_device
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  420)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  421)  * Handle the ether interface interrupts. We pull packets from
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  422)  * the 8390 via the card specific functions and fire them at the networking
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  423)  * stack. We also handle transmit completions and wake the transmit path if
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  424)  * necessary. We also update the counters and do other housekeeping as
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  425)  * needed.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  426)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  427) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  428) static irqreturn_t __ei_interrupt(int irq, void *dev_id)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  429) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  430) 	struct net_device *dev = dev_id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  431) 	unsigned long e8390_base = dev->base_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  432) 	int interrupts, nr_serviced = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  433) 	struct ei_device *ei_local = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  434) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  435) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  436) 	 *	Protect the irq test too.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  437) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  438) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  439) 	spin_lock(&ei_local->page_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  440) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  441) 	if (ei_local->irqlock) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  442) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  443) 		 * This might just be an interrupt for a PCI device sharing
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  444) 		 * this line
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  445) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  446) 		netdev_err(dev, "Interrupted while interrupts are masked! isr=%#2x imr=%#2x\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  447) 			   ei_inb_p(e8390_base + EN0_ISR),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  448) 			   ei_inb_p(e8390_base + EN0_IMR));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  449) 		spin_unlock(&ei_local->page_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  450) 		return IRQ_NONE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  451) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  452) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  453) 	/* Change to page 0 and read the intr status reg. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  454) 	ei_outb_p(E8390_NODMA+E8390_PAGE0, e8390_base + E8390_CMD);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  455) 	netif_dbg(ei_local, intr, dev, "interrupt(isr=%#2.2x)\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  456) 		  ei_inb_p(e8390_base + EN0_ISR));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  457) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  458) 	/* !!Assumption!! -- we stay in page 0.	 Don't break this. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  459) 	while ((interrupts = ei_inb_p(e8390_base + EN0_ISR)) != 0 &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  460) 	       ++nr_serviced < MAX_SERVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  461) 		if (!netif_running(dev)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  462) 			netdev_warn(dev, "interrupt from stopped card\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  463) 			/* rmk - acknowledge the interrupts */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  464) 			ei_outb_p(interrupts, e8390_base + EN0_ISR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  465) 			interrupts = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  466) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  467) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  468) 		if (interrupts & ENISR_OVER)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  469) 			ei_rx_overrun(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  470) 		else if (interrupts & (ENISR_RX+ENISR_RX_ERR)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  471) 			/* Got a good (?) packet. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  472) 			ei_receive(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  473) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  474) 		/* Push the next to-transmit packet through. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  475) 		if (interrupts & ENISR_TX)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  476) 			ei_tx_intr(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  477) 		else if (interrupts & ENISR_TX_ERR)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  478) 			ei_tx_err(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  479) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  480) 		if (interrupts & ENISR_COUNTERS) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  481) 			dev->stats.rx_frame_errors += ei_inb_p(e8390_base + EN0_COUNTER0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  482) 			dev->stats.rx_crc_errors   += ei_inb_p(e8390_base + EN0_COUNTER1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  483) 			dev->stats.rx_missed_errors += ei_inb_p(e8390_base + EN0_COUNTER2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  484) 			ei_outb_p(ENISR_COUNTERS, e8390_base + EN0_ISR); /* Ack intr. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  485) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  486) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  487) 		/* Ignore any RDC interrupts that make it back to here. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  488) 		if (interrupts & ENISR_RDC)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  489) 			ei_outb_p(ENISR_RDC, e8390_base + EN0_ISR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  490) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  491) 		ei_outb_p(E8390_NODMA+E8390_PAGE0+E8390_START, e8390_base + E8390_CMD);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  492) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  493) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  494) 	if (interrupts && (netif_msg_intr(ei_local))) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  495) 		ei_outb_p(E8390_NODMA+E8390_PAGE0+E8390_START, e8390_base + E8390_CMD);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  496) 		if (nr_serviced >= MAX_SERVICE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  497) 			/* 0xFF is valid for a card removal */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  498) 			if (interrupts != 0xFF)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  499) 				netdev_warn(dev, "Too much work at interrupt, status %#2.2x\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  500) 					    interrupts);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  501) 			ei_outb_p(ENISR_ALL, e8390_base + EN0_ISR); /* Ack. most intrs. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  502) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  503) 			netdev_warn(dev, "unknown interrupt %#2x\n", interrupts);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  504) 			ei_outb_p(0xff, e8390_base + EN0_ISR); /* Ack. all intrs. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  505) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  506) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  507) 	spin_unlock(&ei_local->page_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  508) 	return IRQ_RETVAL(nr_serviced > 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  509) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  510) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  511) #ifdef CONFIG_NET_POLL_CONTROLLER
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  512) static void __ei_poll(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  513) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  514) 	disable_irq(dev->irq);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  515) 	__ei_interrupt(dev->irq, dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  516) 	enable_irq(dev->irq);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  517) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  518) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  519) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  520) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  521)  * ei_tx_err - handle transmitter error
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  522)  * @dev: network device which threw the exception
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  523)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  524)  * A transmitter error has happened. Most likely excess collisions (which
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  525)  * is a fairly normal condition). If the error is one where the Tx will
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  526)  * have been aborted, we try and send another one right away, instead of
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  527)  * letting the failed packet sit and collect dust in the Tx buffer. This
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  528)  * is a much better solution as it avoids kernel based Tx timeouts, and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  529)  * an unnecessary card reset.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  530)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  531)  * Called with lock held.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  532)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  533) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  534) static void ei_tx_err(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  535) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  536) 	unsigned long e8390_base = dev->base_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  537) 	/* ei_local is used on some platforms via the EI_SHIFT macro */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  538) 	struct ei_device *ei_local __maybe_unused = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  539) 	unsigned char txsr = ei_inb_p(e8390_base+EN0_TSR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  540) 	unsigned char tx_was_aborted = txsr & (ENTSR_ABT+ENTSR_FU);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  541) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  542) #ifdef VERBOSE_ERROR_DUMP
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  543) 	netdev_dbg(dev, "transmitter error (%#2x):", txsr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  544) 	if (txsr & ENTSR_ABT)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  545) 		pr_cont(" excess-collisions ");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  546) 	if (txsr & ENTSR_ND)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  547) 		pr_cont(" non-deferral ");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  548) 	if (txsr & ENTSR_CRS)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  549) 		pr_cont(" lost-carrier ");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  550) 	if (txsr & ENTSR_FU)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  551) 		pr_cont(" FIFO-underrun ");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  552) 	if (txsr & ENTSR_CDH)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  553) 		pr_cont(" lost-heartbeat ");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  554) 	pr_cont("\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  555) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  556) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  557) 	ei_outb_p(ENISR_TX_ERR, e8390_base + EN0_ISR); /* Ack intr. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  558) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  559) 	if (tx_was_aborted)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  560) 		ei_tx_intr(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  561) 	else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  562) 		dev->stats.tx_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  563) 		if (txsr & ENTSR_CRS)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  564) 			dev->stats.tx_carrier_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  565) 		if (txsr & ENTSR_CDH)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  566) 			dev->stats.tx_heartbeat_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  567) 		if (txsr & ENTSR_OWC)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  568) 			dev->stats.tx_window_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  569) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  570) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  571) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  572) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  573)  * ei_tx_intr - transmit interrupt handler
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  574)  * @dev: network device for which tx intr is handled
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  575)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  576)  * We have finished a transmit: check for errors and then trigger the next
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  577)  * packet to be sent. Called with lock held.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  578)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  579) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  580) static void ei_tx_intr(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  581) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  582) 	unsigned long e8390_base = dev->base_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  583) 	struct ei_device *ei_local = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  584) 	int status = ei_inb(e8390_base + EN0_TSR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  585) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  586) 	ei_outb_p(ENISR_TX, e8390_base + EN0_ISR); /* Ack intr. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  587) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  588) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  589) 	 * There are two Tx buffers, see which one finished, and trigger
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  590) 	 * the send of another one if it exists.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  591) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  592) 	ei_local->txqueue--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  593) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  594) 	if (ei_local->tx1 < 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  595) 		if (ei_local->lasttx != 1 && ei_local->lasttx != -1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  596) 			pr_err("%s: bogus last_tx_buffer %d, tx1=%d\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  597) 			       ei_local->name, ei_local->lasttx, ei_local->tx1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  598) 		ei_local->tx1 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  599) 		if (ei_local->tx2 > 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  600) 			ei_local->txing = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  601) 			NS8390_trigger_send(dev, ei_local->tx2, ei_local->tx_start_page + 6);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  602) 			netif_trans_update(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  603) 			ei_local->tx2 = -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  604) 			ei_local->lasttx = 2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  605) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  606) 			ei_local->lasttx = 20;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  607) 			ei_local->txing = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  608) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  609) 	} else if (ei_local->tx2 < 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  610) 		if (ei_local->lasttx != 2  &&  ei_local->lasttx != -2)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  611) 			pr_err("%s: bogus last_tx_buffer %d, tx2=%d\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  612) 			       ei_local->name, ei_local->lasttx, ei_local->tx2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  613) 		ei_local->tx2 = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  614) 		if (ei_local->tx1 > 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  615) 			ei_local->txing = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  616) 			NS8390_trigger_send(dev, ei_local->tx1, ei_local->tx_start_page);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  617) 			netif_trans_update(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  618) 			ei_local->tx1 = -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  619) 			ei_local->lasttx = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  620) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  621) 			ei_local->lasttx = 10;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  622) 			ei_local->txing = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  623) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  624) 	} /* else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  625) 		netdev_warn(dev, "unexpected TX-done interrupt, lasttx=%d\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  626) 			    ei_local->lasttx);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  627) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  628) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  629) 	/* Minimize Tx latency: update the statistics after we restart TXing. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  630) 	if (status & ENTSR_COL)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  631) 		dev->stats.collisions++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  632) 	if (status & ENTSR_PTX)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  633) 		dev->stats.tx_packets++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  634) 	else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  635) 		dev->stats.tx_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  636) 		if (status & ENTSR_ABT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  637) 			dev->stats.tx_aborted_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  638) 			dev->stats.collisions += 16;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  639) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  640) 		if (status & ENTSR_CRS)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  641) 			dev->stats.tx_carrier_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  642) 		if (status & ENTSR_FU)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  643) 			dev->stats.tx_fifo_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  644) 		if (status & ENTSR_CDH)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  645) 			dev->stats.tx_heartbeat_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  646) 		if (status & ENTSR_OWC)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  647) 			dev->stats.tx_window_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  648) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  649) 	netif_wake_queue(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  650) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  651) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  652) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  653)  * ei_receive - receive some packets
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  654)  * @dev: network device with which receive will be run
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  655)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  656)  * We have a good packet(s), get it/them out of the buffers.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  657)  * Called with lock held.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  658)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  659) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  660) static void ei_receive(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  661) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  662) 	unsigned long e8390_base = dev->base_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  663) 	struct ei_device *ei_local = netdev_priv(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  664) 	unsigned char rxing_page, this_frame, next_frame;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  665) 	unsigned short current_offset;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  666) 	int rx_pkt_count = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  667) 	struct e8390_pkt_hdr rx_frame;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  668) 	int num_rx_pages = ei_local->stop_page-ei_local->rx_start_page;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  669) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  670) 	while (++rx_pkt_count < 10) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  671) 		int pkt_len, pkt_stat;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  672) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  673) 		/* Get the rx page (incoming packet pointer). */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  674) 		ei_outb_p(E8390_NODMA+E8390_PAGE1, e8390_base + E8390_CMD);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  675) 		rxing_page = ei_inb_p(e8390_base + EN1_CURPAG);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  676) 		ei_outb_p(E8390_NODMA+E8390_PAGE0, e8390_base + E8390_CMD);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  677) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  678) 		/* Remove one frame from the ring.  Boundary is always a page behind. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  679) 		this_frame = ei_inb_p(e8390_base + EN0_BOUNDARY) + 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  680) 		if (this_frame >= ei_local->stop_page)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  681) 			this_frame = ei_local->rx_start_page;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  682) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  683) 		/* Someday we'll omit the previous, iff we never get this message.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  684) 		   (There is at least one clone claimed to have a problem.)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  685) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  686) 		   Keep quiet if it looks like a card removal. One problem here
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  687) 		   is that some clones crash in roughly the same way.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  688) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  689) 		if ((netif_msg_rx_status(ei_local)) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  690) 		    this_frame != ei_local->current_page &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  691) 		    (this_frame != 0x0 || rxing_page != 0xFF))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  692) 			netdev_err(dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  693) 				   "mismatched read page pointers %2x vs %2x\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  694) 				   this_frame, ei_local->current_page);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  695) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  696) 		if (this_frame == rxing_page)	/* Read all the frames? */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  697) 			break;				/* Done for now */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  698) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  699) 		current_offset = this_frame << 8;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  700) 		ei_get_8390_hdr(dev, &rx_frame, this_frame);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  701) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  702) 		pkt_len = rx_frame.count - sizeof(struct e8390_pkt_hdr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  703) 		pkt_stat = rx_frame.status;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  704) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  705) 		next_frame = this_frame + 1 + ((pkt_len+4)>>8);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  706) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  707) 		/* Check for bogosity warned by 3c503 book: the status byte is never
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  708) 		   written.  This happened a lot during testing! This code should be
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  709) 		   cleaned up someday. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  710) 		if (rx_frame.next != next_frame &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  711) 		    rx_frame.next != next_frame + 1 &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  712) 		    rx_frame.next != next_frame - num_rx_pages &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  713) 		    rx_frame.next != next_frame + 1 - num_rx_pages) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  714) 			ei_local->current_page = rxing_page;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  715) 			ei_outb(ei_local->current_page-1, e8390_base+EN0_BOUNDARY);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  716) 			dev->stats.rx_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  717) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  718) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  719) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  720) 		if (pkt_len < 60  ||  pkt_len > 1518) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  721) 			netif_dbg(ei_local, rx_status, dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  722) 				  "bogus packet size: %d, status=%#2x nxpg=%#2x\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  723) 				  rx_frame.count, rx_frame.status,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  724) 				  rx_frame.next);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  725) 			dev->stats.rx_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  726) 			dev->stats.rx_length_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  727) 		} else if ((pkt_stat & 0x0F) == ENRSR_RXOK) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  728) 			struct sk_buff *skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  729) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  730) 			skb = netdev_alloc_skb(dev, pkt_len + 2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  731) 			if (skb == NULL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  732) 				netif_err(ei_local, rx_err, dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  733) 					  "Couldn't allocate a sk_buff of size %d\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  734) 					  pkt_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  735) 				dev->stats.rx_dropped++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  736) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  737) 			} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  738) 				skb_reserve(skb, 2);	/* IP headers on 16 byte boundaries */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  739) 				skb_put(skb, pkt_len);	/* Make room */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  740) 				ei_block_input(dev, pkt_len, skb, current_offset + sizeof(rx_frame));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  741) 				skb->protocol = eth_type_trans(skb, dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  742) 				if (!skb_defer_rx_timestamp(skb))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  743) 					netif_rx(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  744) 				dev->stats.rx_packets++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  745) 				dev->stats.rx_bytes += pkt_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  746) 				if (pkt_stat & ENRSR_PHY)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  747) 					dev->stats.multicast++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  748) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  749) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  750) 			netif_err(ei_local, rx_err, dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  751) 				  "bogus packet: status=%#2x nxpg=%#2x size=%d\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  752) 				  rx_frame.status, rx_frame.next,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  753) 				  rx_frame.count);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  754) 			dev->stats.rx_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  755) 			/* NB: The NIC counts CRC, frame and missed errors. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  756) 			if (pkt_stat & ENRSR_FO)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  757) 				dev->stats.rx_fifo_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  758) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  759) 		next_frame = rx_frame.next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  760) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  761) 		/* This _should_ never happen: it's here for avoiding bad clones. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  762) 		if (next_frame >= ei_local->stop_page) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  763) 			netdev_notice(dev, "next frame inconsistency, %#2x\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  764) 				      next_frame);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  765) 			next_frame = ei_local->rx_start_page;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  766) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  767) 		ei_local->current_page = next_frame;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  768) 		ei_outb_p(next_frame-1, e8390_base+EN0_BOUNDARY);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  769) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  770) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  771) 	/* We used to also ack ENISR_OVER here, but that would sometimes mask
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  772) 	   a real overrun, leaving the 8390 in a stopped state with rec'vr off. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  773) 	ei_outb_p(ENISR_RX+ENISR_RX_ERR, e8390_base+EN0_ISR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  774) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  775) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  776) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  777)  * ei_rx_overrun - handle receiver overrun
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  778)  * @dev: network device which threw exception
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  779)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  780)  * We have a receiver overrun: we have to kick the 8390 to get it started
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  781)  * again. Problem is that you have to kick it exactly as NS prescribes in
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  782)  * the updated datasheets, or "the NIC may act in an unpredictable manner."
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  783)  * This includes causing "the NIC to defer indefinitely when it is stopped
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  784)  * on a busy network."  Ugh.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  785)  * Called with lock held. Don't call this with the interrupts off or your
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  786)  * computer will hate you - it takes 10ms or so.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  787)  */

static void ei_rx_overrun(struct net_device *dev)
{
	unsigned long e8390_base = dev->base_addr;
	unsigned char was_txing, must_resend = 0;
	/* ei_local is used on some platforms via the EI_SHIFT macro */
	struct ei_device *ei_local __maybe_unused = netdev_priv(dev);

	/*
	 * Record whether a Tx was in progress and then issue the
	 * stop command.
	 */
	was_txing = ei_inb_p(e8390_base+E8390_CMD) & E8390_TRANS;
	ei_outb_p(E8390_NODMA+E8390_PAGE0+E8390_STOP, e8390_base+E8390_CMD);

	netif_dbg(ei_local, rx_err, dev, "Receiver overrun\n");
	dev->stats.rx_over_errors++;

	/*
	 * Wait a full Tx time (1.2ms) + some guard time, NS says 1.6ms total.
	 * Early datasheets said to poll the reset bit, but now they say that
	 * it "is not a reliable indicator and subsequently should be ignored."
	 * We wait at least 10ms.
	 */

	mdelay(10);

	/*
	 * Reset RBCR[01] back to zero as per magic incantation.
	 */
	ei_outb_p(0x00, e8390_base+EN0_RCNTLO);
	ei_outb_p(0x00, e8390_base+EN0_RCNTHI);

	/*
	 * See if any Tx was interrupted or not. According to NS, this
	 * step is vital, and skipping it will cause no end of havoc.
	 */

	if (was_txing) {
		unsigned char tx_completed = ei_inb_p(e8390_base+EN0_ISR) & (ENISR_TX+ENISR_TX_ERR);
		if (!tx_completed)
			must_resend = 1;
	}

	/*
	 * Have to enter loopback mode and then restart the NIC before
	 * you are allowed to slurp packets up off the ring.
	 */
	ei_outb_p(E8390_TXOFF, e8390_base + EN0_TXCR);
	ei_outb_p(E8390_NODMA + E8390_PAGE0 + E8390_START, e8390_base + E8390_CMD);

	/*
	 * Clear the Rx ring of all the debris, and ack the interrupt.
	 */
	ei_receive(dev);
	ei_outb_p(ENISR_OVER, e8390_base+EN0_ISR);

	/*
	 * Leave loopback mode, and resend any packet that got stopped.
	 */
	ei_outb_p(E8390_TXCONFIG, e8390_base + EN0_TXCR);
	if (must_resend)
		ei_outb_p(E8390_NODMA + E8390_PAGE0 + E8390_START + E8390_TRANS, e8390_base + E8390_CMD);
}

/*
 *	Collect the stats. This is called unlocked and from several contexts.
 */

static struct net_device_stats *__ei_get_stats(struct net_device *dev)
{
	unsigned long ioaddr = dev->base_addr;
	struct ei_device *ei_local = netdev_priv(dev);
	unsigned long flags;

	/* If the card is stopped, just return the present stats. */
	if (!netif_running(dev))
		return &dev->stats;

	spin_lock_irqsave(&ei_local->page_lock, flags);
	/* Read the counter registers, assuming we are in page 0. */
	dev->stats.rx_frame_errors  += ei_inb_p(ioaddr + EN0_COUNTER0);
	dev->stats.rx_crc_errors    += ei_inb_p(ioaddr + EN0_COUNTER1);
	dev->stats.rx_missed_errors += ei_inb_p(ioaddr + EN0_COUNTER2);
	spin_unlock_irqrestore(&ei_local->page_lock, flags);

	return &dev->stats;
}

/*
 * Form the 64 bit 8390 multicast table from the linked list of addresses
 * associated with this dev structure.
 */

static inline void make_mc_bits(u8 *bits, struct net_device *dev)
{
	struct netdev_hw_addr *ha;

	netdev_for_each_mc_addr(ha, dev) {
		u32 crc = ether_crc(ETH_ALEN, ha->addr);
		/*
		 * The 8390 uses the 6 most significant bits of the
		 * CRC to index the multicast table.
		 */
		bits[crc>>29] |= (1<<((crc>>26)&7));
	}
}

/**
 * do_set_multicast_list - set/clear multicast filter
 * @dev: net device for which multicast filter is adjusted
 *
 *	Set or clear the multicast filter for this adaptor. May be called
 *	from a BH in 2.1.x. Must be called with lock held.
 */

static void do_set_multicast_list(struct net_device *dev)
{
	unsigned long e8390_base = dev->base_addr;
	int i;
	struct ei_device *ei_local = netdev_priv(dev);

	if (!(dev->flags&(IFF_PROMISC|IFF_ALLMULTI))) {
		memset(ei_local->mcfilter, 0, 8);
		if (!netdev_mc_empty(dev))
			make_mc_bits(ei_local->mcfilter, dev);
	} else
		memset(ei_local->mcfilter, 0xFF, 8);	/* mcast set to accept-all */

	/*
	 * DP8390 manuals don't specify any magic sequence for altering
	 * the multicast regs on an already running card. To be safe, we
	 * ensure multicast mode is off prior to loading up the new hash
	 * table. If this proves to be not enough, we can always resort
	 * to stopping the NIC, loading the table and then restarting.
	 *
	 * Bug Alert!  The MC regs on the SMC 83C690 (SMC Elite and SMC
	 * Elite16) appear to be write-only. The NS 8390 data sheet lists
	 * them as r/w so this is a bug.  The SMC 83C790 (SMC Ultra and
	 * Ultra32 EISA) appears to have this bug fixed.
	 */

	if (netif_running(dev))
		ei_outb_p(E8390_RXCONFIG, e8390_base + EN0_RXCR);
	ei_outb_p(E8390_NODMA + E8390_PAGE1, e8390_base + E8390_CMD);
	for (i = 0; i < 8; i++) {
		ei_outb_p(ei_local->mcfilter[i], e8390_base + EN1_MULT_SHIFT(i));
#ifndef BUG_83C690
		if (ei_inb_p(e8390_base + EN1_MULT_SHIFT(i)) != ei_local->mcfilter[i])
			netdev_err(dev, "Multicast filter read/write mismatch %d\n",
				   i);
#endif
	}
	ei_outb_p(E8390_NODMA + E8390_PAGE0, e8390_base + E8390_CMD);

	if (dev->flags&IFF_PROMISC)
		ei_outb_p(E8390_RXCONFIG | 0x18, e8390_base + EN0_RXCR);
	else if (dev->flags & IFF_ALLMULTI || !netdev_mc_empty(dev))
		ei_outb_p(E8390_RXCONFIG | 0x08, e8390_base + EN0_RXCR);
	else
		ei_outb_p(E8390_RXCONFIG, e8390_base + EN0_RXCR);
}

/*
 *	Called without lock held. This is invoked from user context and may
 *	run in parallel with just about everything else. It's also fairly
 *	quick and not called too often. Must protect against both bh and
 *	irq users.
 */

static void __ei_set_multicast_list(struct net_device *dev)
{
	unsigned long flags;
	struct ei_device *ei_local = netdev_priv(dev);

	spin_lock_irqsave(&ei_local->page_lock, flags);
	do_set_multicast_list(dev);
	spin_unlock_irqrestore(&ei_local->page_lock, flags);
}

/**
 * ethdev_setup - init rest of 8390 device struct
 * @dev: network device structure to init
 *
 * Initialize the rest of the 8390 device structure.  Do NOT __init
 * this, as it is used by 8390 based modular drivers too.
 */

static void ethdev_setup(struct net_device *dev)
{
	struct ei_device *ei_local = netdev_priv(dev);

	ether_setup(dev);

	spin_lock_init(&ei_local->page_lock);

	ei_local->msg_enable = netif_msg_init(msg_enable, default_msg_level);

	if (netif_msg_drv(ei_local) && (version_printed++ == 0))
		pr_info("%s", version);
}

/**
 * alloc_ei_netdev - alloc_etherdev counterpart for 8390
 * @size: extra bytes to allocate
 *
 * Allocate 8390-specific net_device.
 */
static struct net_device *____alloc_ei_netdev(int size)
{
	return alloc_netdev(sizeof(struct ei_device) + size, "eth%d",
			    NET_NAME_UNKNOWN, ethdev_setup);
}


/* This page of functions should be 8390 generic */
/* Follow National Semi's recommendations for initializing the "NIC". */

/**
 * NS8390_init - initialize 8390 hardware
 * @dev: network device to initialize
 * @startp: boolean.  non-zero value to initiate chip processing
 *
 *	Must be called with lock held.
 */

static void __NS8390_init(struct net_device *dev, int startp)
{
	unsigned long e8390_base = dev->base_addr;
	struct ei_device *ei_local = netdev_priv(dev);
	int i;
	int endcfg = ei_local->word16
	    ? (0x48 | ENDCFG_WTS | (ei_local->bigendian ? ENDCFG_BOS : 0))
	    : 0x48;

	BUILD_BUG_ON(sizeof(struct e8390_pkt_hdr) != 4);
	/* Follow National Semi's recommendations for initing the DP83902. */
	ei_outb_p(E8390_NODMA+E8390_PAGE0+E8390_STOP, e8390_base+E8390_CMD); /* 0x21 */
	ei_outb_p(endcfg, e8390_base + EN0_DCFG);	/* 0x48 or 0x49 */
	/* Clear the remote byte count registers. */
	ei_outb_p(0x00,  e8390_base + EN0_RCNTLO);
	ei_outb_p(0x00,  e8390_base + EN0_RCNTHI);
	/* Set to monitor and loopback mode -- this is vital! */
	ei_outb_p(E8390_RXOFF, e8390_base + EN0_RXCR); /* 0x20 */
	ei_outb_p(E8390_TXOFF, e8390_base + EN0_TXCR); /* 0x02 */
	/* Set the transmit page and receive ring. */
	ei_outb_p(ei_local->tx_start_page, e8390_base + EN0_TPSR);
	ei_local->tx1 = ei_local->tx2 = 0;
	ei_outb_p(ei_local->rx_start_page, e8390_base + EN0_STARTPG);
	ei_outb_p(ei_local->stop_page-1, e8390_base + EN0_BOUNDARY); /* 3c503 says 0x3f, NS says 0x26 */
	ei_local->current_page = ei_local->rx_start_page;	/* assert boundary+1 */
	ei_outb_p(ei_local->stop_page, e8390_base + EN0_STOPPG);
	/* Clear the pending interrupts and mask. */
	ei_outb_p(0xFF, e8390_base + EN0_ISR);
	ei_outb_p(0x00,  e8390_base + EN0_IMR);

	/* Copy the station address into the DS8390 registers. */

	ei_outb_p(E8390_NODMA + E8390_PAGE1 + E8390_STOP, e8390_base+E8390_CMD); /* 0x61 */
	for (i = 0; i < 6; i++) {
		ei_outb_p(dev->dev_addr[i], e8390_base + EN1_PHYS_SHIFT(i));
		if ((netif_msg_probe(ei_local)) &&
		    ei_inb_p(e8390_base + EN1_PHYS_SHIFT(i)) != dev->dev_addr[i])
			netdev_err(dev,
				   "Hw. address read/write mismatch %d\n", i);
	}

	ei_outb_p(ei_local->rx_start_page, e8390_base + EN1_CURPAG);
	ei_outb_p(E8390_NODMA+E8390_PAGE0+E8390_STOP, e8390_base+E8390_CMD);

	ei_local->tx1 = ei_local->tx2 = 0;
	ei_local->txing = 0;

	if (startp) {
		ei_outb_p(0xff,  e8390_base + EN0_ISR);
		ei_outb_p(ENISR_ALL,  e8390_base + EN0_IMR);
		ei_outb_p(E8390_NODMA+E8390_PAGE0+E8390_START, e8390_base+E8390_CMD);
		ei_outb_p(E8390_TXCONFIG, e8390_base + EN0_TXCR); /* xmit on */
		/* 3c503 TechMan says rxconfig only after the NIC is started. */
		ei_outb_p(E8390_RXCONFIG, e8390_base + EN0_RXCR); /* rx on */
		do_set_multicast_list(dev);	/* (re)load the mcast table */
	}
}

/* Trigger a transmit start, assuming the length is valid.
   Always called with the page lock held. */

static void NS8390_trigger_send(struct net_device *dev, unsigned int length,
								int start_page)
{
	unsigned long e8390_base = dev->base_addr;
	struct ei_device *ei_local __maybe_unused = netdev_priv(dev);

	ei_outb_p(E8390_NODMA+E8390_PAGE0, e8390_base+E8390_CMD);

	if (ei_inb_p(e8390_base + E8390_CMD) & E8390_TRANS) {
		netdev_warn(dev, "trigger_send() called with the transmitter busy\n");
		return;
	}
	ei_outb_p(length & 0xff, e8390_base + EN0_TCNTLO);
	ei_outb_p(length >> 8, e8390_base + EN0_TCNTHI);
	ei_outb_p(start_page, e8390_base + EN0_TPSR);
	ei_outb_p(E8390_NODMA+E8390_TRANS+E8390_START, e8390_base+E8390_CMD);
}