Orange Pi 5 kernel

Deprecated Linux kernel 5.10.110 for Orange Pi 5/5B/5+ boards

/* lance.c: An AMD LANCE/PCnet ethernet driver for Linux. */
/*
	Written/copyright 1993-1998 by Donald Becker.

	Copyright 1993 United States Government as represented by the
	Director, National Security Agency.
	This software may be used and distributed according to the terms
	of the GNU General Public License, incorporated herein by reference.

	This driver is for the Allied Telesis AT1500 and HP J2405A, and should work
	with most other LANCE-based bus-master (NE2100/NE2500) ethercards.

	The author may be reached as becker@scyld.com, or C/O
	Scyld Computing Corporation
	410 Severn Ave., Suite 210
	Annapolis MD 21403

	Andrey V. Savochkin:
	- alignment problem with 1.3.* kernel and some minor changes.
	Thomas Bogendoerfer (tsbogend@bigbug.franken.de):
	- added support for Linux/Alpha, but removed most of it, because
	  it worked only for the PCI chip.
	- added hook for the 32bit lance driver
	- added PCnetPCI II (79C970A) to chip table
	Paul Gortmaker (gpg109@rsphy1.anu.edu.au):
	- hopefully fix above so Linux/Alpha can use ISA cards too.
    8/20/96 Fixed 7990 autoIRQ failure and reversed unneeded alignment -djb
    v1.12 10/27/97 Module support -djb
    v1.14  2/3/98 Module support modified, made PCI support optional -djb
    v1.15 5/27/99 Fixed bug in the cleanup_module(). dev->priv was freed
                  before unregister_netdev() which caused NULL pointer
                  reference later in the chain (in rtnetlink_fill_ifinfo())
                  -- Mika Kuoppala <miku@iki.fi>

    Forward ported v1.14 to 2.1.129, merged the PCI and misc changes from
    the 2.1 version of the old driver - Alan Cox

    Get rid of check_region, check kmalloc return in lance_probe1
    Arnaldo Carvalho de Melo <acme@conectiva.com.br> - 11/01/2001

	Reworked detection, added support for Racal InterLan EtherBlaster cards
	Vesselin Kostadinov <vesok at yahoo dot com> - 22/4/2004
*/
static const char version[] = "lance.c:v1.16 2006/11/09 dplatt@3do.com, becker@cesdis.gsfc.nasa.gov\n";

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/ioport.h>
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/pci.h>
#include <linux/init.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/mm.h>
#include <linux/bitops.h>

#include <asm/io.h>
#include <asm/dma.h>

static unsigned int lance_portlist[] __initdata = { 0x300, 0x320, 0x340, 0x360, 0};
static int lance_probe1(struct net_device *dev, int ioaddr, int irq, int options);
static int __init do_lance_probe(struct net_device *dev);


static struct card {
	char id_offset14;
	char id_offset15;
} cards[] = {
	{	//"normal"
		.id_offset14 = 0x57,
		.id_offset15 = 0x57,
	},
	{	//NI6510EB
		.id_offset14 = 0x52,
		.id_offset15 = 0x44,
	},
	{	//Racal InterLan EtherBlaster
		.id_offset14 = 0x52,
		.id_offset15 = 0x49,
	},
};
#define NUM_CARDS 3
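/*
 * Note (illustrative): these two signature bytes sit at offsets 14 and 15
 * of the station-address PROM that do_lance_probe() reads below.  The
 * 0x57/0x57 pair is the ASCII "WW" check used by classic NE2100-style
 * boards; the other entries cover the NI6510EB and EtherBlaster variants.
 */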

#ifdef LANCE_DEBUG
static int lance_debug = LANCE_DEBUG;
#else
static int lance_debug = 1;
#endif

/*
				Theory of Operation

I. Board Compatibility

This device driver is designed for the AMD 79C960, the "PCnet-ISA
single-chip ethernet controller for ISA".  This chip is used in a wide
variety of boards from vendors such as Allied Telesis, HP, Kingston,
and Boca.  This driver is also intended to work with older AMD 7990
designs, such as the NE1500 and NE2100, and newer 79C961.  For convenience,
I use the name LANCE to refer to all of the AMD chips, even though it properly
refers only to the original 7990.

II. Board-specific settings

The driver is designed to work with boards that use the faster
bus-master mode, rather than the shared memory mode.  (Only older designs
have the on-board buffer memory needed to support the slower shared memory
mode.)

Most ISA boards have jumpered settings for the I/O base, IRQ line, and DMA
channel.  This driver probes the likely base addresses:
{0x300, 0x320, 0x340, 0x360}.
After the board is found it generates a DMA-timeout interrupt and uses
autoIRQ to find the IRQ line.  The DMA channel can be set with the low bits
of the otherwise-unused dev->mem_start value (aka PARAM1).  If unset it is
probed for by enabling each free DMA channel in turn and checking if
initialization succeeds.

The HP-J2405A board is an exception: with this board it is easy to read the
EEPROM-set values for the base, IRQ, and DMA.  (Of course you must already
_know_ the base address -- that field is for writing the EEPROM.)

III. Driver operation

IIIa. Ring buffers
The LANCE uses ring buffers of Tx and Rx descriptors.  Each entry describes
the base and length of the data buffer, along with status bits.  The number
of these buffers is set by LANCE_LOG_{RX,TX}_BUFFERS, which is log_2() of
the buffer count (rather than being directly the count) for implementation
ease.  The defaults below are 4 (Tx) and 4 (Rx), which leads to ring sizes
of 16 (Tx) and 16 (Rx).  Increasing the number of ring entries
needlessly uses extra space and reduces the chance that an upper layer will
be able to reorder queued Tx packets based on priority.  Decreasing the number
of entries makes it more difficult to achieve back-to-back packet transmission
and increases the chance that the Rx ring will overflow.  (Consider the worst
case of receiving back-to-back minimum-sized packets.)

The LANCE can "chain" both Rx and Tx buffers, but this driver
statically allocates full-sized (slightly oversized -- PKT_BUF_SZ) buffers to
avoid the administrative overhead.  For the Rx side this avoids dynamically
allocating full-sized buffers "just in case", at the expense of a
memory-to-memory data copy for each packet received.  For most systems this
is a good tradeoff: the Rx buffer will always be in low memory, the copy
is inexpensive, and it primes the cache for later packet processing.  For Tx
the buffers are only used when needed as low-memory bounce buffers.

IIIb. 16M memory limitations.
For the ISA bus-master mode all structures used directly by the LANCE --
the initialization block, Rx and Tx rings, and data buffers -- must be
accessible from the ISA bus, i.e. in the lower 16M of real memory.
This is a problem for current Linux kernels on >16M machines.  The network
devices are initialized after memory initialization, and the kernel doles out
memory from the top of memory downward.  The current solution is to have a
special network initialization routine that's called before memory
initialization; this will eventually be generalized for all network devices.
As mentioned before, low-memory "bounce buffers" are used when needed.

IIIc. Synchronization
The driver runs as two independent, single-threaded flows of control.  One
is the send-packet routine, which enforces single-threaded use by the
dev->tbusy flag.  The other thread is the interrupt handler, which is single
threaded by the hardware and other software.

The send-packet thread has partial control over the Tx ring and the
'dev->tbusy' flag.  It sets the tbusy flag whenever it is queuing a Tx packet.
If the next queue slot is empty, it clears the tbusy flag when finished;
otherwise it sets the 'lp->tx_full' flag.

The interrupt handler has exclusive control over the Rx ring and records stats
from the Tx ring.  (The Tx-done interrupt can't be selectively turned off, so
we can't avoid the interrupt overhead by having the Tx routine reap the Tx
stats.)  After reaping the stats, it marks the queue entry as empty by setting
the 'base' to zero.  Iff the 'lp->tx_full' flag is set, it clears both the
tx_full and tbusy flags.

*/

/* Set the number of Tx and Rx buffers, using Log_2(# buffers).
   Reasonable default values are 16 Tx buffers and 16 Rx buffers.
   That translates to 4 and 4 (16 == 2^4).
   This is a compile-time option for efficiency.
   */
#ifndef LANCE_LOG_TX_BUFFERS
#define LANCE_LOG_TX_BUFFERS 4
#define LANCE_LOG_RX_BUFFERS 4
#endif

#define TX_RING_SIZE			(1 << (LANCE_LOG_TX_BUFFERS))
#define TX_RING_MOD_MASK		(TX_RING_SIZE - 1)
#define TX_RING_LEN_BITS		((LANCE_LOG_TX_BUFFERS) << 29)

#define RX_RING_SIZE			(1 << (LANCE_LOG_RX_BUFFERS))
#define RX_RING_MOD_MASK		(RX_RING_SIZE - 1)
#define RX_RING_LEN_BITS		((LANCE_LOG_RX_BUFFERS) << 29)

#define PKT_BUF_SZ		1544
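
/*
 * Worked example (illustrative, using the defaults above): with
 * LANCE_LOG_TX_BUFFERS == 4, TX_RING_SIZE is 1 << 4 == 16 entries,
 * TX_RING_MOD_MASK is 0x0f for cheap ring-index wrapping, and
 * TX_RING_LEN_BITS is 4 << 29 == 0x80000000 -- the ring-length code the
 * chip expects in the top bits of the init-block ring pointers (see
 * lance_init_block and lance_probe1() below).  The RX_* macros work out
 * identically.
 */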

/* Offsets from base I/O address. */
#define LANCE_DATA 0x10
#define LANCE_ADDR 0x12
#define LANCE_RESET 0x14
#define LANCE_BUS_IF 0x16
#define LANCE_TOTAL_SIZE 0x18

#define TX_TIMEOUT	(HZ/5)
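/* HZ jiffies equal one second, so HZ/5 gives a 200 ms Tx watchdog. */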

/* The LANCE Rx and Tx ring descriptors. */
struct lance_rx_head {
	s32 base;
	s16 buf_length;			/* This length is 2s complement (negative)! */
	s16 msg_length;			/* This length is "normal". */
};

struct lance_tx_head {
	s32 base;
	s16 length;			/* Length is 2s complement (negative)! */
	s16 misc;
};
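
/*
 * Illustrative note on the negative lengths above: the chip takes buffer
 * sizes as two's-complement negatives, so a full PKT_BUF_SZ (1544-byte)
 * buffer is programmed as -1544, which is 0xf9f8 in the 16-bit field.
 */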

/* The LANCE initialization block, described in the databook. */
struct lance_init_block {
	u16 mode;		/* Pre-set mode (reg. 15) */
	u8  phys_addr[6];	/* Physical ethernet address */
	u32 filter[2];		/* Multicast filter (unused). */
	/* Receive and transmit ring base, along with extra bits. */
	u32 rx_ring;		/* Tx and Rx ring base pointers */
	u32 tx_ring;
};

struct lance_private {
	/* The Tx and Rx ring entries must be aligned on 8-byte boundaries. */
	struct lance_rx_head rx_ring[RX_RING_SIZE];
	struct lance_tx_head tx_ring[TX_RING_SIZE];
	struct lance_init_block	init_block;
	const char *name;
	/* The saved address of a sent-in-place packet/buffer, for skfree(). */
	struct sk_buff* tx_skbuff[TX_RING_SIZE];
	/* The addresses of receive-in-place skbuffs. */
	struct sk_buff* rx_skbuff[RX_RING_SIZE];
	unsigned long rx_buffs;		/* Address of Rx and Tx buffers. */
	/* Tx low-memory "bounce buffer" address. */
	char (*tx_bounce_buffs)[PKT_BUF_SZ];
	int cur_rx, cur_tx;		/* The next free ring entry */
	int dirty_rx, dirty_tx;		/* The ring entries to be free()ed. */
	int dma;
	unsigned char chip_version;	/* See lance_chip_type. */
	spinlock_t devlock;
};

#define LANCE_MUST_PAD          0x00000001
#define LANCE_ENABLE_AUTOSELECT 0x00000002
#define LANCE_MUST_REINIT_RING  0x00000004
#define LANCE_MUST_UNRESET      0x00000008
#define LANCE_HAS_MISSED_FRAME  0x00000010

/* A mapping from the chip ID number to the part number and features.
   These are from the datasheets -- in real life the '970 version
   reportedly has the same ID as the '965. */
static struct lance_chip_type {
	int id_number;
	const char *name;
	int flags;
} chip_table[] = {
	{0x0000, "LANCE 7990",			/* Ancient lance chip.  */
		LANCE_MUST_PAD + LANCE_MUST_UNRESET},
	{0x0003, "PCnet/ISA 79C960",		/* 79C960 PCnet/ISA.  */
		LANCE_ENABLE_AUTOSELECT + LANCE_MUST_REINIT_RING +
			LANCE_HAS_MISSED_FRAME},
	{0x2260, "PCnet/ISA+ 79C961",		/* 79C961 PCnet/ISA+, Plug-n-Play.  */
		LANCE_ENABLE_AUTOSELECT + LANCE_MUST_REINIT_RING +
			LANCE_HAS_MISSED_FRAME},
	{0x2420, "PCnet/PCI 79C970",		/* 79C970 or 79C974 PCnet-SCSI, PCI. */
		LANCE_ENABLE_AUTOSELECT + LANCE_MUST_REINIT_RING +
			LANCE_HAS_MISSED_FRAME},
	/* Bug: the PCnet/PCI actually uses the PCnet/VLB ID number, so just call
		it the PCnet32. */
	{0x2430, "PCnet32",			/* 79C965 PCnet for VL bus. */
		LANCE_ENABLE_AUTOSELECT + LANCE_MUST_REINIT_RING +
			LANCE_HAS_MISSED_FRAME},
	{0x2621, "PCnet/PCI-II 79C970A",	/* 79C970A PCnet/PCI II. */
		LANCE_ENABLE_AUTOSELECT + LANCE_MUST_REINIT_RING +
			LANCE_HAS_MISSED_FRAME},
	{0x0,	 "PCnet (unknown)",
		LANCE_ENABLE_AUTOSELECT + LANCE_MUST_REINIT_RING +
			LANCE_HAS_MISSED_FRAME},
};

enum {OLD_LANCE = 0, PCNET_ISA=1, PCNET_ISAP=2, PCNET_PCI=3, PCNET_VLB=4, PCNET_PCI_II=5, LANCE_UNKNOWN=6};


/* Non-zero if lance_probe1() needs to allocate low-memory bounce buffers.
   Assume yes until we know the memory size. */
static unsigned char lance_need_isa_bounce_buffers = 1;

static int lance_open(struct net_device *dev);
static void lance_init_ring(struct net_device *dev, gfp_t mode);
static netdev_tx_t lance_start_xmit(struct sk_buff *skb,
				    struct net_device *dev);
static int lance_rx(struct net_device *dev);
static irqreturn_t lance_interrupt(int irq, void *dev_id);
static int lance_close(struct net_device *dev);
static struct net_device_stats *lance_get_stats(struct net_device *dev);
static void set_multicast_list(struct net_device *dev);
static void lance_tx_timeout (struct net_device *dev, unsigned int txqueue);



#ifdef MODULE
#define MAX_CARDS		8	/* Max number of interfaces (cards) per module */

static struct net_device *dev_lance[MAX_CARDS];
static int io[MAX_CARDS];
static int dma[MAX_CARDS];
static int irq[MAX_CARDS];

module_param_hw_array(io, int, ioport, NULL, 0);
module_param_hw_array(dma, int, dma, NULL, 0);
module_param_hw_array(irq, int, irq, NULL, 0);
module_param(lance_debug, int, 0);
MODULE_PARM_DESC(io, "LANCE/PCnet I/O base address(es), required");
MODULE_PARM_DESC(dma, "LANCE/PCnet ISA DMA channel (ignored for some devices)");
MODULE_PARM_DESC(irq, "LANCE/PCnet IRQ number (ignored for some devices)");
MODULE_PARM_DESC(lance_debug, "LANCE/PCnet debug level (0-7)");
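
/*
 * Usage sketch (illustrative, not from the original source): a card
 * jumpered to I/O 0x300, IRQ 5, DMA 5 would be loaded as
 *
 *	modprobe lance io=0x300 irq=5 dma=5
 *
 * The array parameters take comma-separated values for several cards,
 * e.g. io=0x300,0x320.  Only io= is mandatory; init_module() below
 * refuses to autoprobe.
 */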

int __init init_module(void)
{
	struct net_device *dev;
	int this_dev, found = 0;

	for (this_dev = 0; this_dev < MAX_CARDS; this_dev++) {
		if (io[this_dev] == 0)  {
			if (this_dev != 0) /* only complain once */
				break;
			printk(KERN_NOTICE "lance.c: Module autoprobing not allowed. Append \"io=0xNNN\" value(s).\n");
			return -EPERM;
		}
		dev = alloc_etherdev(0);
		if (!dev)
			break;
		dev->irq = irq[this_dev];
		dev->base_addr = io[this_dev];
		dev->dma = dma[this_dev];
		if (do_lance_probe(dev) == 0) {
			dev_lance[found++] = dev;
			continue;
		}
		free_netdev(dev);
		break;
	}
	if (found != 0)
		return 0;
	return -ENXIO;
}

static void cleanup_card(struct net_device *dev)
{
	struct lance_private *lp = dev->ml_priv;
	if (dev->dma != 4)
		free_dma(dev->dma);
	release_region(dev->base_addr, LANCE_TOTAL_SIZE);
	kfree(lp->tx_bounce_buffs);
	kfree((void*)lp->rx_buffs);
	kfree(lp);
}

void __exit cleanup_module(void)
{
	int this_dev;

	for (this_dev = 0; this_dev < MAX_CARDS; this_dev++) {
		struct net_device *dev = dev_lance[this_dev];
		if (dev) {
			unregister_netdev(dev);
			cleanup_card(dev);
			free_netdev(dev);
		}
	}
}
#endif /* MODULE */
MODULE_LICENSE("GPL");


/* Starting in v2.1.*, the LANCE/PCnet probe is similar to the other
   board probes, now that kmalloc() can allocate ISA DMA-able regions.
   This also allows the LANCE driver to be used as a module.
   */
static int __init do_lance_probe(struct net_device *dev)
{
	unsigned int *port;
	int result;

	if (high_memory <= phys_to_virt(16*1024*1024))
		lance_need_isa_bounce_buffers = 0;

	for (port = lance_portlist; *port; port++) {
		int ioaddr = *port;
		struct resource *r = request_region(ioaddr, LANCE_TOTAL_SIZE,
							"lance-probe");

		if (r) {
			/* Detect the card with minimal I/O reads */
			char offset14 = inb(ioaddr + 14);
			int card;
			for (card = 0; card < NUM_CARDS; ++card)
				if (cards[card].id_offset14 == offset14)
					break;
			if (card < NUM_CARDS) {	/* yes, the first byte matches */
				char offset15 = inb(ioaddr + 15);
				for (card = 0; card < NUM_CARDS; ++card)
					if ((cards[card].id_offset14 == offset14) &&
						(cards[card].id_offset15 == offset15))
						break;
			}
			if (card < NUM_CARDS) {	/* Signature OK */
				result = lance_probe1(dev, ioaddr, 0, 0);
				if (!result) {
					struct lance_private *lp = dev->ml_priv;
					int ver = lp->chip_version;

					r->name = chip_table[ver].name;
					return 0;
				}
			}
			release_region(ioaddr, LANCE_TOTAL_SIZE);
		}
	}
	return -ENODEV;
}

#ifndef MODULE
struct net_device * __init lance_probe(int unit)
{
	struct net_device *dev = alloc_etherdev(0);
	int err;

	if (!dev)
		return ERR_PTR(-ENODEV);

	sprintf(dev->name, "eth%d", unit);
	netdev_boot_setup_check(dev);

	err = do_lance_probe(dev);
	if (err)
		goto out;
	return dev;
out:
	free_netdev(dev);
	return ERR_PTR(err);
}
#endif

static const struct net_device_ops lance_netdev_ops = {
	.ndo_open		= lance_open,
	.ndo_start_xmit		= lance_start_xmit,
	.ndo_stop		= lance_close,
	.ndo_get_stats		= lance_get_stats,
	.ndo_set_rx_mode	= set_multicast_list,
	.ndo_tx_timeout		= lance_tx_timeout,
	.ndo_set_mac_address	= eth_mac_addr,
	.ndo_validate_addr	= eth_validate_addr,
};

static int __init lance_probe1(struct net_device *dev, int ioaddr, int irq, int options)
{
	struct lance_private *lp;
	unsigned long dma_channels;	/* Mark spuriously-busy DMA channels */
	int i, reset_val, lance_version;
	const char *chipname;
	/* Flags for specific chips or boards. */
	unsigned char hpJ2405A = 0;	/* HP ISA adaptor */
	int hp_builtin = 0;		/* HP on-board ethernet. */
	static int did_version;		/* Already printed version info. */
	unsigned long flags;
	int err = -ENOMEM;
	void __iomem *bios;

	/* First we look for special cases.
	   Check for HP's on-board ethernet by looking for 'HP' in the BIOS.
	   There are two HP versions, check the BIOS for the configuration port.
	   This method provided by L. Julliard, Laurent_Julliard@grenoble.hp.com.
	   */
	bios = ioremap(0xf00f0, 0x14);
	if (!bios)
		return -ENOMEM;
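	/* (Note: 0x5048 below is "HP" read as a little-endian word:
	   'H' == 0x48 in the low byte, 'P' == 0x50 in the high byte.) */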
	if (readw(bios + 0x12) == 0x5048)  {
		static const short ioaddr_table[] = { 0x300, 0x320, 0x340, 0x360};
		int hp_port = (readl(bios + 1) & 1)  ? 0x499 : 0x99;
		/* We can have boards other than the built-in!  Verify this is on-board. */
		if ((inb(hp_port) & 0xc0) == 0x80 &&
		    ioaddr_table[inb(hp_port) & 3] == ioaddr)
			hp_builtin = hp_port;
	}
	iounmap(bios);
	/* We also recognize the HP Vectra on-board here, but check below. */
	hpJ2405A = (inb(ioaddr) == 0x08 && inb(ioaddr+1) == 0x00 &&
		    inb(ioaddr+2) == 0x09);

	/* Reset the LANCE. */
	reset_val = inw(ioaddr+LANCE_RESET); /* Reset the LANCE */

	/* The Un-Reset is only needed for the real NE2100, and will
	   confuse the HP board. */
	if (!hpJ2405A)
		outw(reset_val, ioaddr+LANCE_RESET);

	outw(0x0000, ioaddr+LANCE_ADDR); /* Switch to window 0 */
	if (inw(ioaddr+LANCE_DATA) != 0x0004)
		return -ENODEV;

	/* Get the version of the chip. */
	outw(88, ioaddr+LANCE_ADDR);
	if (inw(ioaddr+LANCE_ADDR) != 88) {
		lance_version = 0;
	} else {			/* Good, it's a newer chip. */
		int chip_version = inw(ioaddr+LANCE_DATA);
		outw(89, ioaddr+LANCE_ADDR);
		chip_version |= inw(ioaddr+LANCE_DATA) << 16;
		if (lance_debug > 2)
			printk("  LANCE chip version is %#x.\n", chip_version);
		if ((chip_version & 0xfff) != 0x003)
			return -ENODEV;
		chip_version = (chip_version >> 12) & 0xffff;
		for (lance_version = 1; chip_table[lance_version].id_number; lance_version++) {
			if (chip_table[lance_version].id_number == chip_version)
				break;
		}
	}
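	/* Worked example (illustrative): a 79C961 would return 0x2260003
	   from CSR88/89 -- the low 12 bits (0x003) are the manufacturer
	   code checked above, and (0x2260003 >> 12) & 0xffff == 0x2260,
	   which matches the "PCnet/ISA+ 79C961" chip_table entry. */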
	/* We can't allocate the private data from alloc_etherdev() because
	   it must be in an ISA DMA-able region. */
	chipname = chip_table[lance_version].name;
	printk("%s: %s at %#3x, ", dev->name, chipname, ioaddr);

	/* There is a 16 byte station address PROM at the base address.
	   The first six bytes are the station address. */
	for (i = 0; i < 6; i++)
		dev->dev_addr[i] = inb(ioaddr + i);
	printk("%pM", dev->dev_addr);

	dev->base_addr = ioaddr;
	/* Make certain the data structures used by the LANCE are aligned and DMAble. */

	lp = kzalloc(sizeof(*lp), GFP_DMA | GFP_KERNEL);
	if (!lp)
		return -ENOMEM;
	if (lance_debug > 6)
		printk(" (#0x%05lx)", (unsigned long)lp);
	dev->ml_priv = lp;
	lp->name = chipname;
	lp->rx_buffs = (unsigned long)kmalloc_array(RX_RING_SIZE, PKT_BUF_SZ,
						    GFP_DMA | GFP_KERNEL);
	if (!lp->rx_buffs)
		goto out_lp;
	if (lance_need_isa_bounce_buffers) {
		lp->tx_bounce_buffs = kmalloc_array(TX_RING_SIZE, PKT_BUF_SZ,
						    GFP_DMA | GFP_KERNEL);
		if (!lp->tx_bounce_buffs)
			goto out_rx;
	} else
		lp->tx_bounce_buffs = NULL;

	lp->chip_version = lance_version;
	spin_lock_init(&lp->devlock);

	lp->init_block.mode = 0x0003;		/* Disable Rx and Tx. */
	for (i = 0; i < 6; i++)
		lp->init_block.phys_addr[i] = dev->dev_addr[i];
	lp->init_block.filter[0] = 0x00000000;
	lp->init_block.filter[1] = 0x00000000;
	lp->init_block.rx_ring = ((u32)isa_virt_to_bus(lp->rx_ring) & 0xffffff) | RX_RING_LEN_BITS;
	lp->init_block.tx_ring = ((u32)isa_virt_to_bus(lp->tx_ring) & 0xffffff) | TX_RING_LEN_BITS;
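	/* Illustrative: if isa_virt_to_bus(lp->tx_ring) were 0x00123450, the
	   field above would hold 0x123450 | TX_RING_LEN_BITS == 0x80123450,
	   i.e. the 24-bit ISA bus address with the ring-length code packed
	   into the top bits. */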

	outw(0x0001, ioaddr+LANCE_ADDR);
	inw(ioaddr+LANCE_ADDR);
	outw((short) (u32) isa_virt_to_bus(&lp->init_block), ioaddr+LANCE_DATA);
	outw(0x0002, ioaddr+LANCE_ADDR);
	inw(ioaddr+LANCE_ADDR);
	outw(((u32)isa_virt_to_bus(&lp->init_block)) >> 16, ioaddr+LANCE_DATA);
	outw(0x0000, ioaddr+LANCE_ADDR);
	inw(ioaddr+LANCE_ADDR);

	if (irq) {			/* Set iff PCI card. */
		dev->dma = 4;		/* Native bus-master, no DMA channel needed. */
		dev->irq = irq;
	} else if (hp_builtin) {
		static const char dma_tbl[4] = {3, 5, 6, 0};
		static const char irq_tbl[4] = {3, 4, 5, 9};
		unsigned char port_val = inb(hp_builtin);
		dev->dma = dma_tbl[(port_val >> 4) & 3];
		dev->irq = irq_tbl[(port_val >> 2) & 3];
		printk(" HP Vectra IRQ %d DMA %d.\n", dev->irq, dev->dma);
	} else if (hpJ2405A) {
		static const char dma_tbl[4] = {3, 5, 6, 7};
		static const char irq_tbl[8] = {3, 4, 5, 9, 10, 11, 12, 15};
		short reset_val = inw(ioaddr+LANCE_RESET);
		dev->dma = dma_tbl[(reset_val >> 2) & 3];
		dev->irq = irq_tbl[(reset_val >> 4) & 7];
		printk(" HP J2405A IRQ %d DMA %d.\n", dev->irq, dev->dma);
	} else if (lance_version == PCNET_ISAP) {	/* The plug-n-play version. */
		short bus_info;
		outw(8, ioaddr+LANCE_ADDR);
		bus_info = inw(ioaddr+LANCE_BUS_IF);
		dev->dma = bus_info & 0x07;
		dev->irq = (bus_info >> 4) & 0x0F;
	} else {
		/* The DMA channel may be passed in PARAM1. */
		if (dev->mem_start & 0x07)
			dev->dma = dev->mem_start & 0x07;
	}

	if (dev->dma == 0) {
		/* Read the DMA channel status register, so that we can avoid
		   stuck DMA channels in the DMA detection below. */
		dma_channels = ((inb(DMA1_STAT_REG) >> 4) & 0x0f) |
			(inb(DMA2_STAT_REG) & 0xf0);
	}
	err = -ENODEV;
	if (dev->irq >= 2)
		printk(" assigned IRQ %d", dev->irq);
	else if (lance_version != 0)  {	/* 7990 boards need DMA detection first. */
		unsigned long irq_mask;

		/* To auto-IRQ we enable the initialization-done and DMA error
		   interrupts. For ISA boards we get a DMA error, but VLB and PCI
		   boards will work. */
		irq_mask = probe_irq_on();

		/* Trigger an initialization just for the interrupt. */
		outw(0x0041, ioaddr+LANCE_DATA);

		mdelay(20);
		dev->irq = probe_irq_off(irq_mask);
		if (dev->irq)
			printk(", probed IRQ %d", dev->irq);
		else {
			printk(", failed to detect IRQ line.\n");
			goto out_tx;
		}

		/* Check for the initialization done bit, 0x0100, which means
		   that we don't need a DMA channel. */
		if (inw(ioaddr+LANCE_DATA) & 0x0100)
			dev->dma = 4;
	}

	if (dev->dma == 4) {
		printk(", no DMA needed.\n");
	} else if (dev->dma) {
		if (request_dma(dev->dma, chipname)) {
			printk("DMA %d allocation failed.\n", dev->dma);
			goto out_tx;
		} else
			printk(", assigned DMA %d.\n", dev->dma);
	} else {			/* OK, we have to auto-DMA. */
		for (i = 0; i < 4; i++) {
			static const char dmas[] = { 5, 6, 7, 3 };
			int dma = dmas[i];
			int boguscnt;

			/* Don't enable a permanently busy DMA channel, or the machine
			   will hang. */
			if (test_bit(dma, &dma_channels))
				continue;
			outw(0x7f04, ioaddr+LANCE_DATA); /* Clear the memory error bits. */
			if (request_dma(dma, chipname))
				continue;

			flags = claim_dma_lock();
			set_dma_mode(dma, DMA_MODE_CASCADE);
			enable_dma(dma);
			release_dma_lock(flags);

			/* Trigger an initialization. */
			outw(0x0001, ioaddr+LANCE_DATA);
			for (boguscnt = 100; boguscnt > 0; --boguscnt)
				if (inw(ioaddr+LANCE_DATA) & 0x0900)
					break;
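			/* (The 0x0900 mask is CSR0's IDON (0x0100) and MERR
			   (0x0800) bits: the wait ends on init-done or a memory
			   error, and only IDON below means this DMA channel
			   actually worked.) */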
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  682) 			if (inw(ioaddr+LANCE_DATA) & 0x0100) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  683) 				dev->dma = dma;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  684) 				printk(", DMA %d.\n", dev->dma);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  685) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  686) 			} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  687) 				flags=claim_dma_lock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  688) 				disable_dma(dma);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  689) 				release_dma_lock(flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  690) 				free_dma(dma);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  691) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  692) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  693) 		if (i == 4) {			/* Failure: bail. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  694) 			printk("DMA detection failed.\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  695) 			goto out_tx;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  696) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  697) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  698) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  699) 	if (lance_version == 0 && dev->irq == 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  700) 		/* We may auto-IRQ now that we have a DMA channel. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  701) 		/* Trigger an initialization just for the interrupt. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  702) 		unsigned long irq_mask;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  703) 
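		/* 0x0041 = INIT | INEA in CSR0: start initialization with
		   interrupts enabled, so the IDON interrupt is raised inside the
		   probe_irq_on()/probe_irq_off() window. */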
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  704) 		irq_mask = probe_irq_on();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  705) 		outw(0x0041, ioaddr+LANCE_DATA);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  706) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  707) 		mdelay(40);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  708) 		dev->irq = probe_irq_off(irq_mask);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  709) 		if (dev->irq == 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  710) 			printk("  Failed to detect the 7990 IRQ line.\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  711) 			goto out_dma;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  712) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  713) 		printk("  Auto-IRQ detected IRQ%d.\n", dev->irq);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  714) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  715) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  716) 	if (chip_table[lp->chip_version].flags & LANCE_ENABLE_AUTOSELECT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  717) 		/* Turn on auto-select of media (10baseT or BNC) so that the user
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  718) 		   can watch the LEDs even if the board isn't opened. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  719) 		outw(0x0002, ioaddr+LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  720) 		/* Don't touch 10base2 power bit. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  721) 		outw(inw(ioaddr+LANCE_BUS_IF) | 0x0002, ioaddr+LANCE_BUS_IF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  722) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  723) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  724) 	if (lance_debug > 0  &&  did_version++ == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  725) 		printk(version);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  726) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  727) 	/* The LANCE-specific entries in the device structure. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  728) 	dev->netdev_ops = &lance_netdev_ops;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  729) 	dev->watchdog_timeo = TX_TIMEOUT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  730) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  731) 	err = register_netdev(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  732) 	if (err)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  733) 		goto out_dma;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  734) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  735) out_dma:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  736) 	if (dev->dma != 4)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  737) 		free_dma(dev->dma);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  738) out_tx:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  739) 	kfree(lp->tx_bounce_buffs);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  740) out_rx:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  741) 	kfree((void*)lp->rx_buffs);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  742) out_lp:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  743) 	kfree(lp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  744) 	return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  745) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  746) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  747) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  748) static int
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  749) lance_open(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  750) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  751) 	struct lance_private *lp = dev->ml_priv;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  752) 	int ioaddr = dev->base_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  753) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  754) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  755) 	if (dev->irq == 0 ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  756) 		request_irq(dev->irq, lance_interrupt, 0, dev->name, dev)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  757) 		return -EAGAIN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  758) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  759) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  760) 	/* We used to allocate DMA here, but that was silly.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  761) 	   DMA lines can't be shared!  We now permanently allocate them. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  762) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  763) 	/* Reset the LANCE */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  764) 	inw(ioaddr+LANCE_RESET);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  765) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  766) 	/* The DMA controller is used as a no-operation slave, "cascade mode". */
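	/* In cascade mode the 8237 DMA controller merely grants the bus for
	   this channel; the LANCE itself masters all the transfers. */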
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  767) 	if (dev->dma != 4) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  768) 		unsigned long flags = claim_dma_lock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  769) 		enable_dma(dev->dma);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  770) 		set_dma_mode(dev->dma, DMA_MODE_CASCADE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  771) 		release_dma_lock(flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  772) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  773) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  774) 	/* Un-Reset the LANCE, needed only for the NE2100. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  775) 	if (chip_table[lp->chip_version].flags & LANCE_MUST_UNRESET)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  776) 		outw(0, ioaddr+LANCE_RESET);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  777) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  778) 	if (chip_table[lp->chip_version].flags & LANCE_ENABLE_AUTOSELECT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  779) 		/* This is 79C960-specific: Turn on auto-select of media (AUI, BNC). */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  780) 		outw(0x0002, ioaddr+LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  781) 		/* Only touch autoselect bit. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  782) 		outw(inw(ioaddr+LANCE_BUS_IF) | 0x0002, ioaddr+LANCE_BUS_IF);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  783) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  784) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  785) 	if (lance_debug > 1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  786) 		printk("%s: lance_open() irq %d dma %d tx/rx rings %#x/%#x init %#x.\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  787) 			   dev->name, dev->irq, dev->dma,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  788) 		           (u32) isa_virt_to_bus(lp->tx_ring),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  789) 		           (u32) isa_virt_to_bus(lp->rx_ring),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  790) 			   (u32) isa_virt_to_bus(&lp->init_block));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  791) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  792) 	lance_init_ring(dev, GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  793) 	/* Re-initialize the LANCE, and start it when done. */
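	/* CSR1 and CSR2 take the low 16 and high 8 bits of the init block's
	   24-bit bus address; CSR4 = 0x0915 sets feature bits on the chips
	   that implement it (the driver has always written it
	   unconditionally); CSR0 = INIT (0x0001) then makes the chip fetch
	   the init block. */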
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  794) 	outw(0x0001, ioaddr+LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  795) 	outw((short) (u32) isa_virt_to_bus(&lp->init_block), ioaddr+LANCE_DATA);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  796) 	outw(0x0002, ioaddr+LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  797) 	outw(((u32)isa_virt_to_bus(&lp->init_block)) >> 16, ioaddr+LANCE_DATA);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  798) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  799) 	outw(0x0004, ioaddr+LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  800) 	outw(0x0915, ioaddr+LANCE_DATA);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  801) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  802) 	outw(0x0000, ioaddr+LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  803) 	outw(0x0001, ioaddr+LANCE_DATA);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  804) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  805) 	netif_start_queue (dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  806) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  807) 	i = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  808) 	while (i++ < 100)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  809) 		if (inw(ioaddr+LANCE_DATA) & 0x0100)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  810) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  811) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  812) 	 * We used to clear the InitDone bit, 0x0100, here but Mark Stockton
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  813) 	 * reports that doing so triggers a bug in the '974.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  814) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  815) 	outw(0x0042, ioaddr+LANCE_DATA);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  816) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  817) 	if (lance_debug > 2)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  818) 		printk("%s: LANCE open after %d ticks, init block %#x csr0 %4.4x.\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  819) 			   dev->name, i, (u32) isa_virt_to_bus(&lp->init_block), inw(ioaddr+LANCE_DATA));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  820) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  821) 	return 0;					/* Always succeed */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  822) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  823) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  824) /* The LANCE has been halted for one reason or another (busmaster memory
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  825)    arbitration error, Tx FIFO underflow, driver stopped it to reconfigure,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  826)    etc.).  Modern LANCE variants always reload their ring-buffer
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  827)    configuration when restarted, so we must reinitialize our ring
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  828)    context before restarting.  As part of this reinitialization,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  829)    find all packets still on the Tx ring and pretend that they had been
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  830)    sent (in effect, drop the packets on the floor) - the higher-level
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  831)    protocols will time out and retransmit.  It'd be better to shuffle
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  832)    these skbs to a temp list and then actually re-Tx them after
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  833)    restarting the chip, but I'm too lazy to do so right now.  dplatt@3do.com
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  834) */
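/*
 * A minimal, untested sketch of the "shuffle to a temp list and re-Tx"
 * idea above, using only fields already declared in this file:
 *
 *	struct sk_buff *deferred[TX_RING_SIZE];
 *	int i, n = 0;
 *
 *	for (i = lp->dirty_tx; i < lp->cur_tx; i++) {
 *		int entry = i & TX_RING_MOD_MASK;
 *		if (lp->tx_skbuff[entry]) {
 *			deferred[n++] = lp->tx_skbuff[entry];
 *			lp->tx_skbuff[entry] = NULL;
 *		}
 *	}
 *
 * Once the chip is running again (and the device lock is dropped), each
 * deferred[i] would be re-submitted instead of freed.
 */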
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  835) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  836) static void
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  837) lance_purge_ring(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  838) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  839) 	struct lance_private *lp = dev->ml_priv;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  840) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  841) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  842) 	/* Free all the skbuffs in the Rx and Tx queues. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  843) 	for (i = 0; i < RX_RING_SIZE; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  844) 		struct sk_buff *skb = lp->rx_skbuff[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  845) 		lp->rx_skbuff[i] = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  846) 		lp->rx_ring[i].base = 0;		/* Not owned by LANCE chip. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  847) 		if (skb)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  848) 			dev_kfree_skb_any(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  849) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  850) 	for (i = 0; i < TX_RING_SIZE; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  851) 		if (lp->tx_skbuff[i]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  852) 			dev_kfree_skb_any(lp->tx_skbuff[i]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  853) 			lp->tx_skbuff[i] = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  854) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  855) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  856) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  857) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  858) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  859) /* Initialize the LANCE Rx and Tx rings. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  860) static void
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  861) lance_init_ring(struct net_device *dev, gfp_t gfp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  862) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  863) 	struct lance_private *lp = dev->ml_priv;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  864) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  865) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  866) 	lp->cur_rx = lp->cur_tx = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  867) 	lp->dirty_rx = lp->dirty_tx = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  868) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  869) 	for (i = 0; i < RX_RING_SIZE; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  870) 		struct sk_buff *skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  871) 		void *rx_buff;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  872) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  873) 		skb = alloc_skb(PKT_BUF_SZ, GFP_DMA | gfp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  874) 		lp->rx_skbuff[i] = skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  875) 		if (skb)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  876) 			rx_buff = skb->data;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  877) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  878) 			rx_buff = kmalloc(PKT_BUF_SZ, GFP_DMA | gfp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  879) 		if (rx_buff == NULL)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  880) 			lp->rx_ring[i].base = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  881) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  882) 			lp->rx_ring[i].base = (u32)isa_virt_to_bus(rx_buff) | 0x80000000;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  883) 		lp->rx_ring[i].buf_length = -PKT_BUF_SZ;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  884) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  885) 	/* The Tx buffer address is filled in as needed, but we do need to clear
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  886) 	   the upper ownership bit. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  887) 	for (i = 0; i < TX_RING_SIZE; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  888) 		lp->tx_skbuff[i] = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  889) 		lp->tx_ring[i].base = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  890) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  891) 
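	/* Fill in the init block the chip fetches at INIT time: mode 0x0000
	   is normal operation, filter[] is the 64-bit multicast
	   logical-address hash (cleared here), and each ring word packs the
	   ring's 24-bit bus address with log2(ring length) in the top three
	   bits (the *_RING_LEN_BITS values). */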
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  892) 	lp->init_block.mode = 0x0000;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  893) 	for (i = 0; i < 6; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  894) 		lp->init_block.phys_addr[i] = dev->dev_addr[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  895) 	lp->init_block.filter[0] = 0x00000000;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  896) 	lp->init_block.filter[1] = 0x00000000;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  897) 	lp->init_block.rx_ring = ((u32)isa_virt_to_bus(lp->rx_ring) & 0xffffff) | RX_RING_LEN_BITS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  898) 	lp->init_block.tx_ring = ((u32)isa_virt_to_bus(lp->tx_ring) & 0xffffff) | TX_RING_LEN_BITS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  899) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  900) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  901) static void
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  902) lance_restart(struct net_device *dev, unsigned int csr0_bits, int must_reinit)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  903) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  904) 	struct lance_private *lp = dev->ml_priv;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  905) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  906) 	if (must_reinit ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  907) 		(chip_table[lp->chip_version].flags & LANCE_MUST_REINIT_RING)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  908) 		lance_purge_ring(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  909) 		lance_init_ring(dev, GFP_ATOMIC);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  910) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  911) 	outw(0x0000,    dev->base_addr + LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  912) 	outw(csr0_bits, dev->base_addr + LANCE_DATA);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  913) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  914) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  915) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  916) static void lance_tx_timeout (struct net_device *dev, unsigned int txqueue)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  917) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  918) 	struct lance_private *lp = (struct lance_private *) dev->ml_priv;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  919) 	int ioaddr = dev->base_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  920) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  921) 	outw (0, ioaddr + LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  922) 	printk ("%s: transmit timed out, status %4.4x, resetting.\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  923) 		dev->name, inw (ioaddr + LANCE_DATA));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  924) 	outw (0x0004, ioaddr + LANCE_DATA);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  925) 	dev->stats.tx_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  926) #ifndef final_version
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  927) 	if (lance_debug > 3) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  928) 		int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  929) 		printk (" Ring data dump: dirty_tx %d cur_tx %d%s cur_rx %d.",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  930) 		  lp->dirty_tx, lp->cur_tx, netif_queue_stopped(dev) ? " (full)" : "",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  931) 			lp->cur_rx);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  932) 		for (i = 0; i < RX_RING_SIZE; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  933) 			printk ("%s %08x %04x %04x", i & 0x3 ? "" : "\n ",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  934) 			 lp->rx_ring[i].base, -lp->rx_ring[i].buf_length,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  935) 				lp->rx_ring[i].msg_length);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  936) 		for (i = 0; i < TX_RING_SIZE; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  937) 			printk ("%s %08x %04x %04x", i & 0x3 ? "" : "\n ",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  938) 			     lp->tx_ring[i].base, -lp->tx_ring[i].length,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  939) 				lp->tx_ring[i].misc);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  940) 		printk ("\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  941) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  942) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  943) 	lance_restart (dev, 0x0043, 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  944) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  945) 	netif_trans_update(dev); /* prevent tx timeout */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  946) 	netif_wake_queue (dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  947) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  948) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  949) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  950) static netdev_tx_t lance_start_xmit(struct sk_buff *skb,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  951) 				    struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  952) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  953) 	struct lance_private *lp = dev->ml_priv;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  954) 	int ioaddr = dev->base_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  955) 	int entry;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  956) 	unsigned long flags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  957) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  958) 	spin_lock_irqsave(&lp->devlock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  959) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  960) 	if (lance_debug > 3) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  961) 		outw(0x0000, ioaddr+LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  962) 		printk("%s: lance_start_xmit() called, csr0 %4.4x.\n", dev->name,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  963) 			   inw(ioaddr+LANCE_DATA));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  964) 		outw(0x0000, ioaddr+LANCE_DATA);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  965) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  966) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  967) 	/* Fill in a Tx ring entry */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  968) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  969) 	/* Mask to ring buffer boundary. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  970) 	entry = lp->cur_tx & TX_RING_MOD_MASK;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  971) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  972) 	/* Caution: the write order is important here, set the base address
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  973) 	   with the "ownership" bits last. */
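	/* In tx_ring[].base the low 24 bits are the buffer's bus address;
	   OR-ing in 0x83000000 sets OWN (0x80000000) plus STP and ENP
	   (0x03000000), i.e. a complete single-buffer frame handed to the
	   chip in one 32-bit store. */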
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  974) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  975) 	/* The old LANCE chips don't automatically pad buffers to the minimum size. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  976) 	if (chip_table[lp->chip_version].flags & LANCE_MUST_PAD) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  977) 		if (skb->len < ETH_ZLEN) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  978) 			if (skb_padto(skb, ETH_ZLEN))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  979) 				goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  980) 			lp->tx_ring[entry].length = -ETH_ZLEN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  981) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  982) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  983) 			lp->tx_ring[entry].length = -skb->len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  984) 	} else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  985) 		lp->tx_ring[entry].length = -skb->len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  986) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  987) 	lp->tx_ring[entry].misc = 0x0000;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  988) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  989) 	dev->stats.tx_bytes += skb->len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  990) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  991) 	/* If any part of this buffer is >16M we must copy it to a low-memory
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  992) 	   buffer. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  993) 	if ((u32)isa_virt_to_bus(skb->data) + skb->len > 0x01000000) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  994) 		if (lance_debug > 5)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  995) 			printk("%s: bouncing a high-memory packet (%#x).\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  996) 				   dev->name, (u32)isa_virt_to_bus(skb->data));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  997) 		skb_copy_from_linear_data(skb, &lp->tx_bounce_buffs[entry], skb->len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  998) 		lp->tx_ring[entry].base =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  999) 			((u32)isa_virt_to_bus((lp->tx_bounce_buffs + entry)) & 0xffffff) | 0x83000000;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1000) 		dev_kfree_skb(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1001) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1002) 		lp->tx_skbuff[entry] = skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1003) 		lp->tx_ring[entry].base = ((u32)isa_virt_to_bus(skb->data) & 0xffffff) | 0x83000000;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1004) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1005) 	lp->cur_tx++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1006) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1007) 	/* Trigger an immediate send poll. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1008) 	outw(0x0000, ioaddr+LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1009) 	outw(0x0048, ioaddr+LANCE_DATA);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1010) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1011) 	if ((lp->cur_tx - lp->dirty_tx) >= TX_RING_SIZE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1012) 		netif_stop_queue(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1013) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1014) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1015) 	spin_unlock_irqrestore(&lp->devlock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1016) 	return NETDEV_TX_OK;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1017) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1018) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1019) /* The LANCE interrupt handler. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1020) static irqreturn_t lance_interrupt(int irq, void *dev_id)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1021) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1022) 	struct net_device *dev = dev_id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1023) 	struct lance_private *lp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1024) 	int csr0, ioaddr, boguscnt = 10;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1025) 	int must_restart;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1026) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1027) 	ioaddr = dev->base_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1028) 	lp = dev->ml_priv;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1029) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1030) 	spin_lock (&lp->devlock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1031) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1032) 	outw(0x00, dev->base_addr + LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1033) 	while ((csr0 = inw(dev->base_addr + LANCE_DATA)) & 0x8600 &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1034) 	       --boguscnt >= 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1035) 		/* Acknowledge all of the current interrupt sources ASAP. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1036) 		outw(csr0 & ~0x004f, dev->base_addr + LANCE_DATA);
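		/* The status bits in CSR0 are write-1-to-clear, so writing csr0
		   back acks exactly what we saw; ~0x004f masks out
		   INEA/TDMD/STOP/STRT/INIT so the ack cannot stop, restart, or
		   re-init the chip as a side effect. */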
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1037) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1038) 		must_restart = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1039) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1040) 		if (lance_debug > 5)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1041) 			printk("%s: interrupt  csr0=%#2.2x new csr=%#2.2x.\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1042) 				   dev->name, csr0, inw(dev->base_addr + LANCE_DATA));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1043) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1044) 		if (csr0 & 0x0400)			/* Rx interrupt */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1045) 			lance_rx(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1046) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1047) 		if (csr0 & 0x0200) {		/* Tx-done interrupt */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1048) 			int dirty_tx = lp->dirty_tx;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1049) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1050) 			while (dirty_tx < lp->cur_tx) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1051) 				int entry = dirty_tx & TX_RING_MOD_MASK;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1052) 				int status = lp->tx_ring[entry].base;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1053) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1054) 				if (status < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1055) 					break;			/* It still hasn't been Txed */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1056) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1057) 				lp->tx_ring[entry].base = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1058) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1059) 				if (status & 0x40000000) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1060) 					/* There was a major error; log it. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1061) 					int err_status = lp->tx_ring[entry].misc;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1062) 					dev->stats.tx_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1063) 					if (err_status & 0x0400)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1064) 						dev->stats.tx_aborted_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1065) 					if (err_status & 0x0800)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1066) 						dev->stats.tx_carrier_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1067) 					if (err_status & 0x1000)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1068) 						dev->stats.tx_window_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1069) 					if (err_status & 0x4000) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1070) 						/* Ackk!  On FIFO errors the Tx unit is turned off! */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1071) 						dev->stats.tx_fifo_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1072) 						/* Remove this verbosity later! */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1073) 						printk("%s: Tx FIFO error! Status %4.4x.\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1074) 							   dev->name, csr0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1075) 						/* Restart the chip. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1076) 						must_restart = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1077) 					}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1078) 				} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1079) 					if (status & 0x18000000)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1080) 						dev->stats.collisions++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1081) 					dev->stats.tx_packets++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1082) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1083) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1084) 				/* We must free the original skb if it's not a data-only copy
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1085) 				   in the bounce buffer. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1086) 				if (lp->tx_skbuff[entry]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1087) 					dev_consume_skb_irq(lp->tx_skbuff[entry]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1088) 					lp->tx_skbuff[entry] = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1089) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1090) 				dirty_tx++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1091) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1092) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1093) #ifndef final_version
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1094) 			if (lp->cur_tx - dirty_tx >= TX_RING_SIZE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1095) 				printk("out-of-sync dirty pointer, %d vs. %d, full=%s.\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1096) 					   dirty_tx, lp->cur_tx,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1097) 					   netif_queue_stopped(dev) ? "yes" : "no");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1098) 				dirty_tx += TX_RING_SIZE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1099) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1100) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1101) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1102) 			/* if the ring is no longer full, accept more packets */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1103) 			if (netif_queue_stopped(dev) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1104) 			    dirty_tx > lp->cur_tx - TX_RING_SIZE + 2)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1105) 				netif_wake_queue (dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1106) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1107) 			lp->dirty_tx = dirty_tx;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1108) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1109) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1110) 		/* Log misc errors. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1111) 		if (csr0 & 0x4000)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1112) 			dev->stats.tx_errors++; /* Tx babble. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1113) 		if (csr0 & 0x1000)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1114) 			dev->stats.rx_errors++; /* Missed a Rx frame. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1115) 		if (csr0 & 0x0800) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1116) 			printk("%s: Bus master arbitration failure, status %4.4x.\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1117) 				   dev->name, csr0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1118) 			/* Restart the chip. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1119) 			must_restart = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1120) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1121) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1122) 		if (must_restart) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1123) 			/* stop the chip to clear the error condition, then restart */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1124) 			outw(0x0000, dev->base_addr + LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1125) 			outw(0x0004, dev->base_addr + LANCE_DATA);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1126) 			lance_restart(dev, 0x0002, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1127) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1128) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1129) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1130) 	/* Clear any other interrupt, and set interrupt enable. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1131) 	outw(0x0000, dev->base_addr + LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1132) 	outw(0x7940, dev->base_addr + LANCE_DATA);
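	/* 0x7940 acks BABL|CERR|MISS|MERR|IDON (write-1-to-clear) and sets
	   INEA (0x0040) to re-enable interrupts on the way out. */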
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1133) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1134) 	if (lance_debug > 4)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1135) 		printk("%s: exiting interrupt, csr%d=%#4.4x.\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1136) 			   dev->name, inw(ioaddr + LANCE_ADDR),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1137) 			   inw(dev->base_addr + LANCE_DATA));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1138) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1139) 	spin_unlock (&lp->devlock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1140) 	return IRQ_HANDLED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1141) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1142) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1143) static int
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1144) lance_rx(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1145) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1146) 	struct lance_private *lp = dev->ml_priv;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1147) 	int entry = lp->cur_rx & RX_RING_MOD_MASK;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1148) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1149) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1150) 	/* If we own the next entry (the chip has cleared the OWN sign bit of .base), it's a new packet. Send it up. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1151) 	while (lp->rx_ring[entry].base >= 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1152) 		int status = lp->rx_ring[entry].base >> 24;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1153) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1154) 		if (status != 0x03) {			/* There was an error. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1155) 			/* There is a tricky error reported by John Murphy
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1156) 			   <murf@perftech.com> to Russ Nelson: even with full-sized
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1157) 			   buffers it's possible for a jabber packet to use two
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1158) 			   buffers, with only the last one correctly noting the error. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1159) 			if (status & 0x01)	/* Only count a general error at the */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1160) 				dev->stats.rx_errors++; /* end of a packet.*/
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1161) 			if (status & 0x20)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1162) 				dev->stats.rx_frame_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1163) 			if (status & 0x10)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1164) 				dev->stats.rx_over_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1165) 			if (status & 0x08)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1166) 				dev->stats.rx_crc_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1167) 			if (status & 0x04)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1168) 				dev->stats.rx_fifo_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1169) 			lp->rx_ring[entry].base &= 0x03ffffff;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1170) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1171) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1172) 		{
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1173) 			/* Malloc up new buffer, compatible with net3. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1174) 			short pkt_len = (lp->rx_ring[entry].msg_length & 0xfff) - 4;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1175) 			struct sk_buff *skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1176) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1177) 			if (pkt_len < 60)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1178) 			{
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1179) 				printk("%s: Runt packet!\n", dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1180) 				dev->stats.rx_errors++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1181) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1182) 			else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1183) 			{
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1184) 				skb = dev_alloc_skb(pkt_len+2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1185) 				if (skb == NULL)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1186) 				{
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1187) 					printk("%s: Memory squeeze, deferring packet.\n", dev->name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1188) 					for (i = 0; i < RX_RING_SIZE; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1189) 						if (lp->rx_ring[(entry+i) & RX_RING_MOD_MASK].base < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1190) 							break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1191) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1192) 					if (i > RX_RING_SIZE - 2)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1193) 					{
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1194) 						dev->stats.rx_dropped++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1195) 						lp->rx_ring[entry].base |= 0x80000000;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1196) 						lp->cur_rx++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1197) 					}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1198) 					break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1199) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1200) 				skb_reserve(skb, 2);	/* 16 byte align */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1201) 				skb_put(skb, pkt_len);	/* Make room */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1202) 				skb_copy_to_linear_data(skb,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1203) 					(unsigned char *)isa_bus_to_virt((lp->rx_ring[entry].base & 0x00ffffff)),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1204) 					pkt_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1205) 				skb->protocol = eth_type_trans(skb, dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1206) 				netif_rx(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1207) 				dev->stats.rx_packets++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1208) 				dev->stats.rx_bytes += pkt_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1209) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1210) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1211) 		/* The docs say that the buffer length isn't touched, but Andrew Boyd
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1212) 		   of QNX reports that some revs of the 79C965 clear it. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1213) 		lp->rx_ring[entry].buf_length = -PKT_BUF_SZ;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1214) 		lp->rx_ring[entry].base |= 0x80000000;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1215) 		entry = (++lp->cur_rx) & RX_RING_MOD_MASK;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1216) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1217) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1218) 	/* We should check that at least two ring entries are free. If not,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1219) 	   we should free one and mark stats->rx_dropped++. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1220) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1221) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1222) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1223) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1224) static int
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1225) lance_close(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1226) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1227) 	int ioaddr = dev->base_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1228) 	struct lance_private *lp = dev->ml_priv;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1229) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1230) 	netif_stop_queue (dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1231) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1232) 	if (chip_table[lp->chip_version].flags & LANCE_HAS_MISSED_FRAME) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1233) 		outw(112, ioaddr+LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1234) 		dev->stats.rx_missed_errors = inw(ioaddr+LANCE_DATA);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1235) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1236) 	outw(0, ioaddr+LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1237) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1238) 	if (lance_debug > 1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1239) 		printk("%s: Shutting down ethercard, status was %2.2x.\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1240) 			   dev->name, inw(ioaddr+LANCE_DATA));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1241) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1242) 	/* We stop the LANCE here -- it occasionally polls
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1243) 	   memory if we don't. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1244) 	outw(0x0004, ioaddr+LANCE_DATA);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1245) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1246) 	if (dev->dma != 4)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1247) 	{
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1248) 		unsigned long flags = claim_dma_lock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1249) 		disable_dma(dev->dma);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1250) 		release_dma_lock(flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1251) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1252) 	free_irq(dev->irq, dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1253) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1254) 	lance_purge_ring(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1255) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1256) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1257) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1258) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1259) static struct net_device_stats *lance_get_stats(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1260) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1261) 	struct lance_private *lp = dev->ml_priv;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1262) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1263) 	if (chip_table[lp->chip_version].flags & LANCE_HAS_MISSED_FRAME) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1264) 		short ioaddr = dev->base_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1265) 		short saved_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1266) 		unsigned long flags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1267) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1268) 		spin_lock_irqsave(&lp->devlock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1269) 		saved_addr = inw(ioaddr+LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1270) 		outw(112, ioaddr+LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1271) 		dev->stats.rx_missed_errors = inw(ioaddr+LANCE_DATA);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1272) 		outw(saved_addr, ioaddr+LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1273) 		spin_unlock_irqrestore(&lp->devlock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1274) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1275) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1276) 	return &dev->stats;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1277) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1278) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1279) /* Set or clear the multicast filter for this adaptor.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1280)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1281) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1282) static void set_multicast_list(struct net_device *dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1283) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1284) 	short ioaddr = dev->base_addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1285) 
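	/* CSR15 is the mode register (bit 15 = PROM, promiscuous mode) and
	   CSR8-11 hold the 64-bit logical-address filter; the drill below is
	   STOP the chip, rewrite those registers, then restart it. */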
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1286) 	outw(0, ioaddr+LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1287) 	outw(0x0004, ioaddr+LANCE_DATA); /* Temporarily stop the LANCE. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1288) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1289) 	if (dev->flags & IFF_PROMISC) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1290) 		outw(15, ioaddr+LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1291) 		outw(0x8000, ioaddr+LANCE_DATA); /* Set promiscuous mode */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1292) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1293) 		short multicast_table[4];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1294) 		int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1295) 		int num_addrs = netdev_mc_count(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1296) 		if (dev->flags & IFF_ALLMULTI)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1297) 			num_addrs = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1298) 		/* FIXME: We don't use the multicast table, but rely on upper-layer filtering. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1299) 		memset(multicast_table, (num_addrs == 0) ? 0 : -1, sizeof(multicast_table));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1300) 		for (i = 0; i < 4; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1301) 			outw(8 + i, ioaddr+LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1302) 			outw(multicast_table[i], ioaddr+LANCE_DATA);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1303) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1304) 		outw(15, ioaddr+LANCE_ADDR);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1305) 		outw(0x0000, ioaddr+LANCE_DATA); /* Unset promiscuous mode */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1306) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1307) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1308) 	lance_restart(dev, 0x0142, 0); /*  Resume normal operation */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1309) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1310) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1311)