/*
  Madge Horizon ATM Adapter driver.
  Copyright (C) 1995-1999  Madge Networks Ltd.

  This program is free software; you can redistribute it and/or modify
  it under the terms of the GNU General Public License as published by
  the Free Software Foundation; either version 2 of the License, or
  (at your option) any later version.

  This program is distributed in the hope that it will be useful,
  but WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
  GNU General Public License for more details.

  You should have received a copy of the GNU General Public License
  along with this program; if not, write to the Free Software
  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

  The GNU GPL is contained in /usr/doc/copyright/GPL on a Debian
  system and in the file COPYING in the Linux kernel source.
*/

/*
  IMPORTANT NOTE: Madge Networks no longer makes the adapters
  supported by this driver and makes no commitment to maintain it.
*/

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/errno.h>
#include <linux/atm.h>
#include <linux/atmdev.h>
#include <linux/sonet.h>
#include <linux/skbuff.h>
#include <linux/time.h>
#include <linux/delay.h>
#include <linux/uio.h>
#include <linux/init.h>
#include <linux/ioport.h>

#include <asm/system.h>
#include <asm/uaccess.h>
#include <asm/string.h>
#include <asm/byteorder.h>

#include "horizon.h"

#define maintainer_string "Giuliano Procida at Madge Networks <gprocida@madge.com>"
#define description_string "Madge ATM Horizon [Ultra] driver"
#define version_string "1.2"

static inline void __init show_version (void) {
  printk ("%s version %s\n", description_string, version_string);
}

/*

  Driver and documentation by:

  Chris Aston        Madge Networks
  Giuliano Procida   Madge Networks
  Simon Benham       Madge Networks
  Simon Johnson      Madge Networks
  Various Others     Madge Networks

  Some inspiration taken from other drivers by:

  Kari Mettinen      University of Helsinki
  Werner Almesberger EPFL LRC

  I Hardware, detection, initialisation and shutdown.

  This driver should handle all variants of the PCI Madge ATM adapters
  with the Horizon chipset. These are all PCI cards supporting PIO, BM
  DMA and a form of MMIO (registers only, not internal RAM).

  The driver is only known to work with SONET and UTP Horizon Ultra
  cards at 155Mb/s. However, code is in place to deal with both the
  original Horizon and 25Mb/s operation.

  There are two revisions of the Horizon ASIC: the original and the
  Ultra. Details of hardware bugs are in section III.

  The ASIC version can be distinguished by chip markings but is NOT
  indicated by the PCI revision (all adapters seem to have PCI rev 1).

  Horizon       => Collage  25 PCI Adapter (UTP and STP)
  Horizon Ultra => Collage 155 PCI Client (UTP or SONET)
  Ambassador x  => Collage 155 PCI Server (completely different)

  Horizon (25Mb/s) is fitted with UTP and STP connectors. It seems to
  have a Madge B154 plus glue logic serializer. I have also found a
  really ancient version of this with slightly different glue. It
  comes with the revision 0 (140-025-01) ASIC.

  Horizon Ultra (155Mb/s) is fitted with either a Pulse Medialink
  output (UTP) or an HP HFBR 5205 output (SONET). It has either
  Madge's SAMBA framer or a SUNI-lite device (early versions). It
  comes with the revision 1 (140-027-01) ASIC.

  All Horizon-based cards present with the same PCI Vendor and Device
  IDs. The standard Linux 2.2 PCI API is used to locate any cards and
  to enable bus-mastering (with appropriate latency).

  ATM_LAYER_STATUS in the control register distinguishes between the
  two possible physical layers (25 and 155). It is not clear whether
  the 155 cards can also operate at 25Mbps. We rely on the fact that a
  card operates at 155 if and only if it has the newer Horizon Ultra
  ASIC.

  For 155 cards the two possible framers are probed for and then set
  up.

  The card is reset and then put into a known state. The physical
  layer is configured for normal operation at the appropriate speed;
  in the case of the 155 cards, the framer is initialised with
  line-based timing; the internal RAM is zeroed and the allocation of
  buffers for RX and TX is made; the Burnt In Address is read and
  copied to the ATM ESI; various policy settings for RX (VPI bits,
  unknown VCs, oam cells) are made. Ideally all policy items should be
  configurable at module load (if not actually on-demand), however,
  only the vpi vs vci bit allocation can be specified at insmod.

  This is in response to module_cleanup. No VCs are in use and the card
  should be idle; it is reset.

  II Driver software (as it should be)

  0. Traffic Parameters

  The traffic classes (not an enumeration) are currently: ATM_NONE (no
  traffic), ATM_UBR, ATM_CBR, ATM_VBR and ATM_ABR, ATM_ANYCLASS
  (compatible with everything). Together with (perhaps only some of)
  the following items they make up the traffic specification.
    unsigned char traffic_class; traffic class (ATM_UBR, ...)
    int           max_pcr;       maximum PCR in cells per second
    int           pcr;           desired PCR in cells per second
    int           min_pcr;       minimum PCR in cells per second
    int           max_cdv;       maximum CDV in microseconds
    int           max_sdu;       maximum SDU in bytes

  Note that these denote bandwidth available not bandwidth used; the
  possibilities according to ATMF are:

  Real Time (cdv and max CDT given)

    CBR(pcr)             pcr bandwidth always available
    rtVBR(pcr,scr,mbs)   scr bandwidth always available, up to pcr at mbs too

  Non Real Time

    nrtVBR(pcr,scr,mbs)  scr bandwidth always available, up to pcr at mbs too

    ABR(mcr,pcr)         mcr bandwidth always available, up to pcr (depending) too

  mbs is max burst size (bucket)
  pcr and scr have associated cdvt values
  mcr is like scr but has no cdvt
  cdvt may differ at each hop

  Some of the above items are qos items (as opposed to traffic
  parameters). We have nothing to do with qos. All except ABR can have
  their traffic parameters converted to GCRA parameters. The GCRA may
  be implemented as a (real-number) leaky bucket. The GCRA can be used
  in complicated ways by switches and in simpler ways by end-stations.
  It can be used both to filter incoming cells and shape out-going
  cells.

  ATM Linux actually supports:

    ATM_NONE() (no traffic in this direction)
    ATM_UBR(max_frame_size)
    ATM_CBR(max/min_pcr, max_cdv, max_frame_size)

    0 or ATM_MAX_PCR are used to indicate maximum available PCR

  A traffic specification consists of the AAL type and separate
  traffic specifications for either direction. In ATM Linux it is:

    struct atm_trafprm txtp;
    struct atm_trafprm rxtp;

    ATM_NO_AAL  AAL not specified
    ATM_AAL0    "raw" ATM cells
    ATM_AAL34   AAL3/4 (data)
    ATM_SAAL    signaling AAL

  The Horizon has support for AAL frame types: 0, 3/4 and 5. However,
  it does not implement AAL 3/4 SAR and it has a different notion of
  "raw cell" to ATM Linux's (48 bytes vs. 52 bytes) so neither is
  supported by this driver.

  The Horizon has limited support for ABR (including UBR), VBR and
  CBR. Each TX channel has a bucket (containing up to 31 cell units)
  and two timers (PCR and SCR) associated with it that can be used to
  govern cell emissions and host notification (in the case of ABR this
  is presumably so that RM cells may be emitted at appropriate times).
  The timers may either be disabled or may be set to any of 240 values
  (determined by the clock crystal, a fixed (?) per-device divider, a
  configurable divider and a configurable timer preload value).

  At the moment only UBR and CBR are supported by the driver. VBR will
  be supported as soon as ATM for Linux supports it. ABR support is
  very unlikely as RM cell handling is completely up to the driver.

  1. TX (TX channel setup and TX transfer)

  The TX half of the driver owns the TX Horizon registers. The TX
  component in the IRQ handler is the BM completion handler. This can
  only be entered when tx_busy is true (enforced by hardware). The
  other TX component can only be entered when tx_busy is false
  (enforced by driver). So TX is single-threaded.

  Apart from a minor optimisation to not re-select the last channel,
  the TX send component works as follows:

  Atomic test and set tx_busy until we succeed; we should implement
  some sort of timeout so that tx_busy will never be stuck at true.

  If no TX channel is setup for this VC we wait for an idle one (if
  necessary) and set it up.

  At this point we have a TX channel ready for use. We wait for enough
  buffers to become available then start a TX transmit (set the TX
  descriptor, schedule transfer, exit).

  The IRQ component handles TX completion (stats, free buffer, tx_busy
  unset, exit). We also re-schedule further transfers for the same
  frame.

  TX setup in more detail:

  TX open is a nop, the relevant information is held in the hrz_vcc
  (vcc->dev_data) structure and is "cached" on the card.

  TX close gets the TX lock and clears the channel from the "cache".

  2. RX (Data Available and RX transfer)

  The RX half of the driver owns the RX registers. There are two RX
  components in the IRQ handler: the data available handler deals with
  fresh data that has arrived on the card, the BM completion handler
  is very similar to the TX completion handler. The data available
  handler grabs the rx_lock and it is only released once the data has
  been discarded or completely transferred to the host. The BM
  completion handler only runs when the lock is held; the data
  available handler is locked out over the same period.

  Data available on the card triggers an interrupt. If the data is not
  suitable for our existing RX channels or we cannot allocate a buffer
  it is flushed. Otherwise an RX receive is scheduled. Multiple RX
  transfers may be scheduled for the same frame.

  RX setup in more detail:

  III Hardware Bugs

  0. Byte vs Word addressing of adapter RAM.

  A design feature; see the .h file (especially the memory map).

  1. Bus Master Data Transfers (original Horizon only, fixed in Ultra)

  The host must not start a transmit direction transfer at a
  non-four-byte boundary in host memory. Instead the host should
  perform a byte, or a two byte, or one byte followed by two byte
  transfer in order to start the rest of the transfer on a four byte
  boundary.

  Simultaneous transmit and receive direction bus master transfers are
  not allowed.

  The simplest solution to these two is to always do PIO (never DMA)
  in the TX direction on the original Horizon. More complicated
  solutions are likely to hurt my brain.

  2. Loss of buffer on close VC

  When a VC is being closed, the buffer associated with it is not
  returned to the pool. The host must store the reference to this
  buffer and when opening a new VC then give it to that new VC.

  The host intervention currently consists of stacking such a buffer
  pointer at VC close and checking the stack at VC open.

  3. Failure to close a VC

  If a VC is currently receiving a frame then closing the VC may fail
  and the frame continues to be received.

  The solution is to make sure any received frames are flushed when
  ready. This is currently done just before the solution to 3.

  4. PCI bus (original Horizon only, fixed in Ultra)

  Reading from the data port prior to initialisation will hang the PCI
  bus. Just don't do that then! We don't.

  . Timer code may be broken.

  . Allow users to specify buffer allocation split for TX and RX.

  . Deal once and for all with buggy VC close.

  . Handle interrupted and/or non-blocking operations.

  . Change some macros to functions and move from .h to .c.

  . Try to limit the number of TX frames each VC may have queued, in
    order to reduce the chances of TX buffer exhaustion.

  . Implement VBR (bucket and timers not understood) and ABR (need to
    do RM cells manually); also no Linux support for either.

  . Implement QoS changes on open VCs (involves extracting parts of VC open
    and close into separate functions and using them to make changes).

*/
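/* A concrete illustration of the ATM Linux traffic specification
   described above (a sketch only, not used by the driver; the field
   names are those of struct atm_qos / struct atm_trafprm): an
   application asking for a roughly 2 Mb/s CBR AAL5 circuit might fill
   in something like the following before connecting. */
#if 0
static void example_fill_cbr_qos (struct atm_qos * qos) {
  memset (qos, 0, sizeof(*qos));
  qos->aal = ATM_AAL5;
  qos->txtp.traffic_class = ATM_CBR;
  qos->txtp.max_pcr = 5208;   // ~2 Mb/s of 48-byte cell payloads
  qos->txtp.max_sdu = 9180;   // classical IP over ATM default MTU
  qos->rxtp = qos->txtp;      // same parameters in the receive direction
}
#endif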
/********** globals **********/

static hrz_dev * hrz_devs = NULL;
static struct timer_list housekeeping;

static unsigned short debug = 0;
static unsigned short vpi_bits = 0;
static unsigned short max_tx_size = 9000;
static unsigned short max_rx_size = 9000;
static unsigned char pci_lat = 0;

/********** access functions **********/

/* Read / Write Horizon registers */
static inline void wr_regl (const hrz_dev * dev, unsigned char reg, u32 data) {
  outl (cpu_to_le32 (data), dev->iobase + reg);
}

static inline u32 rd_regl (const hrz_dev * dev, unsigned char reg) {
  return le32_to_cpu (inl (dev->iobase + reg));
}

static inline void wr_regw (const hrz_dev * dev, unsigned char reg, u16 data) {
  outw (cpu_to_le16 (data), dev->iobase + reg);
}

static inline u16 rd_regw (const hrz_dev * dev, unsigned char reg) {
  return le16_to_cpu (inw (dev->iobase + reg));
}

static inline void wrs_regb (const hrz_dev * dev, unsigned char reg, void * addr, u32 len) {
  outsb (dev->iobase + reg, addr, len);
}

static inline void rds_regb (const hrz_dev * dev, unsigned char reg, void * addr, u32 len) {
  insb (dev->iobase + reg, addr, len);
}

/* Read / Write to a given address in Horizon buffer memory.
   Interrupts must be disabled between the address register and data
   port accesses as these must form an atomic operation. */
static inline void wr_mem (const hrz_dev * dev, HDW * addr, u32 data) {
  // wr_regl (dev, MEM_WR_ADDR_REG_OFF, (u32) addr);
  wr_regl (dev, MEM_WR_ADDR_REG_OFF, (addr - (HDW *) 0) * sizeof(HDW));
  wr_regl (dev, MEMORY_PORT_OFF, data);
}

static inline u32 rd_mem (const hrz_dev * dev, HDW * addr) {
  // wr_regl (dev, MEM_RD_ADDR_REG_OFF, (u32) addr);
  wr_regl (dev, MEM_RD_ADDR_REG_OFF, (addr - (HDW *) 0) * sizeof(HDW));
  return rd_regl (dev, MEMORY_PORT_OFF);
}
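/* Callers normally take the device memory lock around these accesses,
   as the open/close paths below do; a sketch (illustrative only,
   assuming a hrz_dev * dev and a HDW * addr are in scope):

     unsigned long flags;
     u32 value;
     spin_lock_irqsave (&dev->mem_lock, flags);
     value = rd_mem (dev, addr);
     spin_unlock_irqrestore (&dev->mem_lock, flags);
*/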
static inline void wr_framer (const hrz_dev * dev, u32 addr, u32 data) {
  wr_regl (dev, MEM_WR_ADDR_REG_OFF, (u32) addr | 0x80000000);
  wr_regl (dev, MEMORY_PORT_OFF, data);
}

static inline u32 rd_framer (const hrz_dev * dev, u32 addr) {
  wr_regl (dev, MEM_RD_ADDR_REG_OFF, (u32) addr | 0x80000000);
  return rd_regl (dev, MEMORY_PORT_OFF);
}

/********** specialised access functions **********/

static inline void FLUSH_RX_CHANNEL (hrz_dev * dev, u16 channel) {
  wr_regw (dev, RX_CHANNEL_PORT_OFF, FLUSH_CHANNEL | channel);
  return;
}

static inline void WAIT_FLUSH_RX_COMPLETE (hrz_dev * dev) {
  while (rd_regw (dev, RX_CHANNEL_PORT_OFF) & FLUSH_CHANNEL)
    ;
  return;
}

static inline void SELECT_RX_CHANNEL (hrz_dev * dev, u16 channel) {
  wr_regw (dev, RX_CHANNEL_PORT_OFF, channel);
  return;
}

static inline void WAIT_UPDATE_COMPLETE (hrz_dev * dev) {
  while (rd_regw (dev, RX_CHANNEL_PORT_OFF) & RX_CHANNEL_UPDATE_IN_PROGRESS)
    ;
  return;
}

static inline void SELECT_TX_CHANNEL (hrz_dev * dev, u16 tx_channel) {
  wr_regl (dev, TX_CHANNEL_PORT_OFF, tx_channel);
  return;
}

/* Update or query one configuration parameter of a particular channel. */

static inline void update_tx_channel_config (hrz_dev * dev, short chan, u8 mode, u16 value) {
  wr_regw (dev, TX_CHANNEL_CONFIG_COMMAND_OFF,
	   chan * TX_CHANNEL_CONFIG_MULT | mode);
  wr_regw (dev, TX_CHANNEL_CONFIG_DATA_OFF, value);
  return;
}

static inline u16 query_tx_channel_config (hrz_dev * dev, short chan, u8 mode) {
  wr_regw (dev, TX_CHANNEL_CONFIG_COMMAND_OFF,
	   chan * TX_CHANNEL_CONFIG_MULT | mode);
  return rd_regw (dev, TX_CHANNEL_CONFIG_DATA_OFF);
}
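/* Illustrative usage (a sketch only): the TX channel setup code later
   in this file programs the per-channel rate with calls along these
   lines, where rate_bits is a hypothetical name for the timer bit
   pattern produced by the make_rate() routine below:

     update_tx_channel_config (dev, tx_channel, PCR_TIMER_ACCESS, rate_bits);
     rate_bits = query_tx_channel_config (dev, tx_channel, PCR_TIMER_ACCESS);
*/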
/********** dump functions **********/

static inline void dump_skb (char * prefix, unsigned int vc, struct sk_buff * skb) {
  unsigned int i;
  unsigned char * data = skb->data;
  PRINTDB (DBG_DATA, "%s(%u) ", prefix, vc);
  for (i = 0; i < skb->len && i < 256; i++)
    PRINTDM (DBG_DATA, "%02x ", data[i]);
  PRINTDE (DBG_DATA, "");
  return;
}

static inline void dump_regs (hrz_dev * dev) {
  PRINTD (DBG_REGS, "CONTROL 0: %#x", rd_regl (dev, CONTROL_0_REG));
  PRINTD (DBG_REGS, "RX CONFIG: %#x", rd_regw (dev, RX_CONFIG_OFF));
  PRINTD (DBG_REGS, "TX CONFIG: %#x", rd_regw (dev, TX_CONFIG_OFF));
  PRINTD (DBG_REGS, "TX STATUS: %#x", rd_regw (dev, TX_STATUS_OFF));
  PRINTD (DBG_REGS, "IRQ ENBLE: %#x", rd_regl (dev, INT_ENABLE_REG_OFF));
  PRINTD (DBG_REGS, "IRQ SORCE: %#x", rd_regl (dev, INT_SOURCE_REG_OFF));
  return;
}

static inline void dump_framer (hrz_dev * dev) {
  unsigned int i;
  PRINTDB (DBG_REGS, "framer registers:");
  for (i = 0; i < 0x10; ++i)
    PRINTDM (DBG_REGS, " %02x", rd_framer (dev, i));
  PRINTDE (DBG_REGS, "");
  return;
}

/********** VPI/VCI <-> (RX) channel conversions **********/

/* RX channels are 10 bit integers, these fns are quite paranoid */

static inline int channel_to_vpivci (const u16 channel, short * vpi, int * vci) {
  unsigned short vci_bits = 10 - vpi_bits;
  if ((channel & RX_CHANNEL_MASK) == channel) {
    // the low vci_bits bits hold the VCI, the remaining high bits the VPI
    *vci = channel & ~((~0) << vci_bits);
    *vpi = channel >> vci_bits;
    return channel ? 0 : -EINVAL;
  }
  return -EINVAL;
}

static inline int vpivci_to_channel (u16 * channel, const short vpi, const int vci) {
  unsigned short vci_bits = 10 - vpi_bits;
  if (0 <= vpi && vpi < 1<<vpi_bits && 0 <= vci && vci < 1<<vci_bits) {
    *channel = vpi<<vci_bits | vci;
    return *channel ? 0 : -EINVAL;
  }
  return -EINVAL;
}
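/* For example (illustrative only): with the default vpi_bits == 0 the
   channel number is simply the VCI (0..1023); with vpi_bits == 2,
   VPI/VCI 1.42 maps to channel (1 << 8) | 42 == 298 and back again. */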
/********** decode RX queue entries **********/

static inline u16 rx_q_entry_to_length (u32 x) {
  return x & RX_Q_ENTRY_LENGTH_MASK;
}

static inline u16 rx_q_entry_to_rx_channel (u32 x) {
  return (x>>RX_Q_ENTRY_CHANNEL_SHIFT) & RX_CHANNEL_MASK;
}

/* Cell Transmit Rate Values
 *
 * the cell transmit rate (cells per sec) can be set to a variety of
 * different values by specifying two parameters: a timer preload from
 * 1 to 16 (stored as 0 to 15) and a clock divider (2 to the power of
 * an exponent from 0 to 14; the special value 15 disables the timer).
 *
 * cellrate = baserate / (preload * 2^divider)
 *
 * The maximum cell rate that can be specified is therefore just the
 * base rate. Halving the preload is equivalent to adding 1 to the
 * divider and so values 1 to 8 of the preload are redundant except
 * in the case of a maximal divider (14).
 *
 * Given a desired cell rate, an algorithm to determine the preload
 * and divider is:
 *
 * a) x = baserate / cellrate, want p * 2^d = x (as far as possible)
 * b) if x > 16 * 2^14 then set p = 16, d = 14 (min rate), done
 *    if x <= 16 then set p = x, d = 0 (high rates), done
 * c) now have 16 < x <= 2^18, or 1 < x/16 <= 2^14 and we want to
 *    know n such that 2^(n-1) < x/16 <= 2^n, so slide a bit until
 *    we find the range (n will be between 1 and 14), set d = n
 * d) Also have 8 < x/2^n <= 16, so set p nearest x/2^n
 *
 * The algorithm used below is a minor variant of the above.
 *
 * The base rate is derived from the oscillator frequency (Hz) using a
 * fixed per-device divider:
 *
 * baserate = freq / 32 in the case of some Unknown Card
 * baserate = freq / 8  in the case of the Horizon 25
 * baserate = freq / 8  in the case of the Horizon Ultra 155
 *
 * The Horizon cards have oscillators and base rates as follows:
 *
 * Card               Oscillator  Base Rate
 * Unknown Card       33 MHz      1.03125 MHz (33 MHz = PCI freq)
 * Horizon 25         32 MHz      4 MHz
 * Horizon Ultra 155  40 MHz      5 MHz
 *
 * The following defines give the base rates in Hz. These were
 * previously a factor of 100 larger, no doubt someone was using
 */

#define BR_UKN 1031250l
#define BR_HRZ 4000000l
#define BR_ULT 5000000l
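/* The selection below is easier to follow with a naive reference in
   mind. This sketch (illustrative only, not compiled into the driver)
   simply tries all 240 preload/divider combinations and keeps the
   closest achievable rate; make_rate() below reaches an equivalent
   choice directly and with explicit control over rounding. */
#if 0
static int example_rate_search (unsigned long baserate, unsigned long target,
                                unsigned int * preload, unsigned int * divider) {
  unsigned int p, d;
  unsigned long best_err = ~0UL;
  if (!target)
    return -1;
  for (d = 0; d <= 14; ++d)
    for (p = 1; p <= 16; ++p) {
      unsigned long rate = baserate / (p << d);
      unsigned long err = rate > target ? rate - target : target - rate;
      if (err < best_err) {
        best_err = err;
        *preload = p;
        *divider = d;
      }
    }
  return 0;
}
#endif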
// p ranges from 1 to a power of 2

static int make_rate (const hrz_dev * dev, u32 c, rounding r,
		      u16 * bits, unsigned int * actual) {

  // note: rounding the rate down means rounding 'p' up

  const unsigned long br
=test_bit(ultra
, &dev
->flags
) ? BR_ULT
: BR_HRZ
; 610 // local fn to build the timer bits 611 inlineintset_cr(void) { 613 if(div
> CR_MAXD
|| (!pre
) || pre
>1<<CR_MAXPEXP
) { 614 PRINTD(DBG_QOS
,"set_cr internal failure: d=%u p=%u", 619 *bits
= (div
<<CLOCK_SELECT_SHIFT
) | (pre
-1); 621 *actual
= (br
+ (pre
<<div
) -1) / (pre
<<div
); 622 PRINTD(DBG_QOS
,"actual rate: %u", *actual
); 628 // br_exp and br_man are used to avoid overflowing (c*maxp*2^d) in 629 // the tests below. We could think harder about exact possibilities 632 unsigned long br_man
= br
; 633 unsigned int br_exp
=0; 635 PRINTD(DBG_QOS
|DBG_FLOW
,"make_rate b=%lu, c=%u, %s", br
, c
, 636 (r
== round_up
) ?"up": (r
== round_down
) ?"down":"nearest"); 640 PRINTD(DBG_QOS
|DBG_ERR
,"zero rate is not allowed!"); 644 while(br_exp
< CR_MAXPEXP
+ CR_MIND
&& (br_man
%2==0)) { 648 // (br >>br_exp) <<br_exp == br and 649 // br_exp <= CR_MAXPEXP+CR_MIND 651 if(br_man
<= (c
<< (CR_MAXPEXP
+CR_MIND
-br_exp
))) { 652 // Equivalent to: B <= (c << (MAXPEXP+MIND)) 653 // take care of rounding 656 pre
= (br
+(c
<<div
)-1)/(c
<<div
); 657 // but p must be non-zero 662 pre
= (br
+(c
<<div
)/2)/(c
<<div
); 663 // but p must be non-zero 669 // but p must be non-zero 674 PRINTD(DBG_QOS
,"A: p=%u, d=%u", pre
, div
); 678 // at this point we have 679 // d == MIND and (c << (MAXPEXP+MIND)) < B 680 while(div
< CR_MAXD
) { 682 if(br_man
<= (c
<< (CR_MAXPEXP
+div
-br_exp
))) { 683 // Equivalent to: B <= (c << (MAXPEXP+d)) 684 // c << (MAXPEXP+d-1) < B <= c << (MAXPEXP+d) 685 // 1 << (MAXPEXP-1) < B/2^d/c <= 1 << MAXPEXP 686 // MAXP/2 < B/c2^d <= MAXP 687 // take care of rounding 690 pre
= (br
+(c
<<div
)-1)/(c
<<div
); 693 pre
= (br
+(c
<<div
)/2)/(c
<<div
); 699 PRINTD(DBG_QOS
,"B: p=%u, d=%u", pre
, div
); 703 // at this point we have 704 // d == MAXD and (c << (MAXPEXP+MAXD)) < B 705 // but we cannot go any higher 706 // take care of rounding 717 PRINTD(DBG_QOS
,"C: p=%u, d=%u", pre
, div
); 721 static intmake_rate_with_tolerance(const hrz_dev
* dev
, u32 c
, rounding r
,unsigned int tol
, 722 u16
* bit_pattern
,unsigned int* actual
) { 723 unsigned int my_actual
; 725 PRINTD(DBG_QOS
|DBG_FLOW
,"make_rate_with_tolerance c=%u, %s, tol=%u", 726 c
, (r
== round_up
) ?"up": (r
== round_down
) ?"down":"nearest", tol
); 729 // actual rate is not returned 732 if(make_rate(dev
, c
, round_nearest
, bit_pattern
, actual
)) 733 // should never happen as round_nearest always succeeds 736 if(c
- tol
<= *actual
&& *actual
<= c
+ tol
) 740 // intolerant, try rounding instead 741 returnmake_rate(dev
, c
, r
, bit_pattern
, actual
); 744 /********** Listen on a VC **********/ 746 static inthrz_open_rx(hrz_dev
* dev
, u16 channel
) {
  // is there any guarantee that we don't get two simultaneous
  // identical calls of this function from different processes? yes

  u32 channel_type
;// u16? 753 u16 buf_ptr
= RX_CHANNEL_IDLE
; 755 rx_ch_desc
* rx_desc
= &memmap
->rx_descs
[channel
]; 757 PRINTD(DBG_FLOW
,"hrz_open_rx %x", channel
); 759 spin_lock_irqsave(&dev
->mem_lock
, flags
); 760 channel_type
=rd_mem(dev
, &rx_desc
->wr_buf_type
) & BUFFER_PTR_MASK
; 761 spin_unlock_irqrestore(&dev
->mem_lock
, flags
); 763 // very serious error, should never occur 764 if(channel_type
!= RX_CHANNEL_DISABLED
) { 765 PRINTD(DBG_ERR
|DBG_VCC
,"RX channel for VC already open"); 766 return-EBUSY
;// clean up? 769 // Give back spare buffer 770 if(dev
->noof_spare_buffers
) { 771 buf_ptr
= dev
->spare_buffers
[--dev
->noof_spare_buffers
]; 772 PRINTD(DBG_VCC
,"using a spare buffer: %u", buf_ptr
); 773 // should never occur 774 if(buf_ptr
== RX_CHANNEL_DISABLED
|| buf_ptr
== RX_CHANNEL_IDLE
) { 775 // but easy to recover from 776 PRINTD(DBG_ERR
|DBG_VCC
,"bad spare buffer pointer, using IDLE"); 777 buf_ptr
= RX_CHANNEL_IDLE
; 780 PRINTD(DBG_VCC
,"using IDLE buffer pointer"); 783 // Channel is currently disabled so change its status to idle 785 // do we really need to save the flags again? 786 spin_lock_irqsave(&dev
->mem_lock
, flags
); 788 wr_mem(dev
, &rx_desc
->wr_buf_type
, 789 buf_ptr
| CHANNEL_TYPE_AAL5
| FIRST_CELL_OF_AAL5_FRAME
); 790 if(buf_ptr
!= RX_CHANNEL_IDLE
) 791 wr_mem(dev
, &rx_desc
->rd_buf_type
, buf_ptr
); 793 spin_unlock_irqrestore(&dev
->mem_lock
, flags
); 795 // rxer->rate = make_rate (qos->peak_cells); 797 PRINTD(DBG_FLOW
,"hrz_open_rx ok"); 803 /********** change vc rate for a given vc **********/ 805 static voidhrz_change_vc_qos(ATM_RXER
* rxer
, MAAL_QOS
* qos
) { 806 rxer
->rate
=make_rate(qos
->peak_cells
); 810 /********** free an skb (as per ATM device driver documentation) **********/ 812 staticinlinevoidhrz_kfree_skb(struct sk_buff
* skb
) { 813 if(ATM_SKB(skb
)->vcc
->pop
) { 814 ATM_SKB(skb
)->vcc
->pop(ATM_SKB(skb
)->vcc
, skb
); 820 /********** cancel listen on a VC **********/ 822 static voidhrz_close_rx(hrz_dev
* dev
, u16 vc
) { 829 rx_ch_desc
* rx_desc
= &memmap
->rx_descs
[vc
]; 833 spin_lock_irqsave(&dev
->mem_lock
, flags
); 834 value
=rd_mem(dev
, &rx_desc
->wr_buf_type
) & BUFFER_PTR_MASK
; 835 spin_unlock_irqrestore(&dev
->mem_lock
, flags
); 837 if(value
== RX_CHANNEL_DISABLED
) { 838 // I suppose this could happen once we deal with _NONE traffic properly 839 PRINTD(DBG_VCC
,"closing VC: RX channel %u already disabled", vc
); 842 if(value
== RX_CHANNEL_IDLE
) 845 spin_lock_irqsave(&dev
->mem_lock
, flags
); 848 wr_mem(dev
, &rx_desc
->wr_buf_type
, RX_CHANNEL_DISABLED
); 850 if((rd_mem(dev
, &rx_desc
->wr_buf_type
) & BUFFER_PTR_MASK
) == RX_CHANNEL_DISABLED
) 857 spin_unlock_irqrestore(&dev
->mem_lock
, flags
); 861 WAIT_FLUSH_RX_COMPLETE(dev
);

  // XXX Is this all really necessary? We can rely on the rx_data_av
  // handler to discard frames that remain queued for delivery. If the
  // worry is that immediately reopening the channel (perhaps by a
  // different process) may cause some data to be mis-delivered then
  // there may still be a simpler solution (such as busy-waiting on
  // rx_busy once the channel is disabled or before a new one is
  // opened - does this leave any holes?). Arguably setting up and
  // tearing down the TX and RX halves of each virtual circuit could
  // most safely be done within ?x_busy protected regions.

  // OK, current changes are that Simon's marker is disabled and we DO
  // look for NULL rxer elsewhere. The code here seems to flush frames
  // and then remember the last dead cell belonging to the channel
  // just disabled - the cell gets relinked at the next vc_open.
  // However, when all VCs are closed or only a few opened there are a
  // handful of buffers that are unusable.

  // Does anyone feel like documenting spare_buffers properly?
  // Does anyone feel like fixing this in a nicer way?

  // Flush any data which is left in the channel

  // Change the rx channel port to something different to the RX
  // channel we are trying to close to force Horizon to flush the rx
  // channel read and write pointers.

  u16 other
= vc
^(RX_CHANS
/2); 891 SELECT_RX_CHANNEL(dev
, other
); 892 WAIT_UPDATE_COMPLETE(dev
); 894 r1
=rd_mem(dev
, &rx_desc
->rd_buf_type
); 896 // Select this RX channel. Flush doesn't seem to work unless we 897 // select an RX channel before hand 899 SELECT_RX_CHANNEL(dev
, vc
); 900 WAIT_UPDATE_COMPLETE(dev
); 902 // Attempt to flush a frame on this RX channel 904 FLUSH_RX_CHANNEL(dev
, vc
); 905 WAIT_FLUSH_RX_COMPLETE(dev
); 907 // Force Horizon to flush rx channel read and write pointers as before 909 SELECT_RX_CHANNEL(dev
, other
); 910 WAIT_UPDATE_COMPLETE(dev
); 912 r2
=rd_mem(dev
, &rx_desc
->rd_buf_type
); 914 PRINTD(DBG_VCC
|DBG_RX
,"r1 = %u, r2 = %u", r1
, r2
); 917 dev
->spare_buffers
[dev
->noof_spare_buffers
++] = (u16
)r1
; 924 rx_q_entry
* wr_ptr
= &memmap
->rx_q_entries
[rd_regw(dev
, RX_QUEUE_WR_PTR_OFF
)]; 925 rx_q_entry
* rd_ptr
= dev
->rx_q_entry
; 927 PRINTD(DBG_VCC
|DBG_RX
,"rd_ptr = %u, wr_ptr = %u", rd_ptr
, wr_ptr
); 929 while(rd_ptr
!= wr_ptr
) { 930 u32 x
=rd_mem(dev
, (HDW
*) rd_ptr
); 932 if(vc
==rx_q_entry_to_rx_channel(x
)) { 933 x
|= SIMONS_DODGEY_MARKER
; 935 PRINTD(DBG_RX
|DBG_VCC
|DBG_WARN
,"marking a frame as dodgey"); 937 wr_mem(dev
, (HDW
*) rd_ptr
, x
); 940 if(rd_ptr
== dev
->rx_q_wrap
) 941 rd_ptr
= dev
->rx_q_reset
; 948 spin_unlock_irqrestore(&dev
->mem_lock
, flags
);

/********** schedule RX transfers **********/

// Note on tail recursion: a GCC developer said that it is not likely
// to be fixed soon, so do not define TAILRECURSIONWORKS unless you
// are sure it does as you may otherwise overflow the kernel stack.

// giving this fn a return value would help GCC, allegedly

static void rx_schedule (hrz_dev
* dev
,int irq
) { 962 unsigned int rx_bytes
; 965 #ifndef TAILRECURSIONWORKS 970 // bytes waiting for RX transfer 971 rx_bytes
= dev
->rx_bytes
; 975 while(rd_regl(dev
, MASTER_RX_COUNT_REG_OFF
)) { 976 PRINTD(DBG_RX
|DBG_WARN
,"RX error: other PCI Bus Master RX still in progress!"); 977 if(++spin_count
>10) { 978 PRINTD(DBG_RX
|DBG_ERR
,"spun out waiting PCI Bus Master RX completion"); 979 wr_regl(dev
, MASTER_RX_COUNT_REG_OFF
,0); 980 clear_bit(rx_busy
, &dev
->flags
); 981 hrz_kfree_skb(dev
->rx_skb
); 987 // this code follows the TX code but (at the moment) there is only 988 // one region - the skb itself. I don't know if this will change, 989 // but it doesn't hurt to have the code here, disabled. 992 // start next transfer within same region 993 if(rx_bytes
<= MAX_PIO_COUNT
) { 994 PRINTD(DBG_RX
|DBG_BUS
,"(pio)"); 997 if(rx_bytes
<= MAX_TRANSFER_COUNT
) { 998 PRINTD(DBG_RX
|DBG_BUS
,"(simple or last multi)"); 1001 PRINTD(DBG_RX
|DBG_BUS
,"(continuing multi)"); 1002 dev
->rx_bytes
= rx_bytes
- MAX_TRANSFER_COUNT
; 1003 rx_bytes
= MAX_TRANSFER_COUNT
; 1006 // rx_bytes == 0 -- we're between regions 1007 // regions remaining to transfer 1009 unsigned int rx_regions
= dev
->rx_regions
; 1011 unsigned int rx_regions
=0; 1016 // start a new region 1017 dev
->rx_addr
= dev
->rx_iovec
->iov_base
; 1018 rx_bytes
= dev
->rx_iovec
->iov_len
; 1020 dev
->rx_regions
= rx_regions
-1; 1022 if(rx_bytes
<= MAX_PIO_COUNT
) { 1023 PRINTD(DBG_RX
|DBG_BUS
,"(pio)"); 1026 if(rx_bytes
<= MAX_TRANSFER_COUNT
) { 1027 PRINTD(DBG_RX
|DBG_BUS
,"(full region)"); 1030 PRINTD(DBG_RX
|DBG_BUS
,"(start multi region)"); 1031 dev
->rx_bytes
= rx_bytes
- MAX_TRANSFER_COUNT
; 1032 rx_bytes
= MAX_TRANSFER_COUNT
; 1037 // that's all folks - end of frame 1038 struct sk_buff
* skb
= dev
->rx_skb
; 1039 // dev->rx_iovec = 0; 1041 FLUSH_RX_CHANNEL(dev
, dev
->rx_channel
); 1043 dump_skb("<<<", dev
->rx_channel
, skb
); 1045 PRINTD(DBG_RX
|DBG_SKB
,"push %p %u", skb
->data
, skb
->len
); 1048 struct atm_vcc
* vcc
=ATM_SKB(skb
)->vcc
;

      // end of our responsibility
      vcc
->push(vcc
, skb
); 1058 // note: writing RX_COUNT clears any interrupt condition 1062 wr_regl(dev
, MASTER_RX_COUNT_REG_OFF
,0); 1063 rds_regb(dev
, DATA_PORT_OFF
, dev
->rx_addr
, rx_bytes
); 1065 wr_regl(dev
, MASTER_RX_ADDR_REG_OFF
,virt_to_bus(dev
->rx_addr
)); 1066 wr_regl(dev
, MASTER_RX_COUNT_REG_OFF
, rx_bytes
); 1068 dev
->rx_addr
+= rx_bytes
; 1071 wr_regl(dev
, MASTER_RX_COUNT_REG_OFF
,0); 1072 // allow another RX thread to start 1074 clear_bit(rx_busy
, &dev
->flags
); 1075 PRINTD(DBG_RX
,"cleared rx_busy for dev %p", dev
); 1078 #ifdef TAILRECURSIONWORKS 1079 // and we all bless optimised tail calls 1086 }while(pio_instead
); 1091 /********** handle RX bus master complete events **********/ 1093 staticinlinevoidrx_bus_master_complete_handler(hrz_dev
* dev
) { 1094 if(test_bit(rx_busy
, &dev
->flags
)) { 1097 PRINTD(DBG_RX
|DBG_ERR
,"unexpected RX bus master completion"); 1098 // clear interrupt condition on adapter 1099 wr_regl(dev
, MASTER_RX_COUNT_REG_OFF
,0); 1104 /********** (queue to) become the next TX thread **********/ 1106 staticinlineinttx_hold(hrz_dev
* dev
) { 1107 while(test_and_set_bit(tx_busy
, &dev
->flags
)) { 1108 PRINTD(DBG_TX
,"sleeping at tx lock %p %u", dev
, dev
->flags
); 1109 interruptible_sleep_on(&dev
->tx_queue
); 1110 PRINTD(DBG_TX
,"woken at tx lock %p %u", dev
, dev
->flags
); 1111 if(signal_pending(current
)) 1114 PRINTD(DBG_TX
,"set tx_busy for dev %p", dev
); 1118 /********** allow another TX thread to start **********/ 1120 staticinlinevoidtx_release(hrz_dev
* dev
) { 1121 clear_bit(tx_busy
, &dev
->flags
); 1122 PRINTD(DBG_TX
,"cleared tx_busy for dev %p", dev
); 1123 wake_up_interruptible(&dev
->tx_queue
); 1126 /********** schedule TX transfers **********/ 1128 static voidtx_schedule(hrz_dev
*const dev
,int irq
) { 1129 unsigned int tx_bytes
; 1134 #ifndef TAILRECURSIONWORKS 1138 // bytes in current region waiting for TX transfer 1139 tx_bytes
= dev
->tx_bytes
; 1143 while(rd_regl(dev
, MASTER_TX_COUNT_REG_OFF
)) { 1144 PRINTD(DBG_TX
|DBG_WARN
,"TX error: other PCI Bus Master TX still in progress!"); 1145 if(++spin_count
>10) { 1146 PRINTD(DBG_TX
|DBG_ERR
,"spun out waiting PCI Bus Master TX completion"); 1147 wr_regl(dev
, MASTER_TX_COUNT_REG_OFF
,0); 1149 hrz_kfree_skb(dev
->tx_skb
); 1156 // start next transfer within same region 1157 if(!test_bit(ultra
, &dev
->flags
) || tx_bytes
<= MAX_PIO_COUNT
) { 1158 PRINTD(DBG_TX
|DBG_BUS
,"(pio)"); 1161 if(tx_bytes
<= MAX_TRANSFER_COUNT
) { 1162 PRINTD(DBG_TX
|DBG_BUS
,"(simple or last multi)"); 1163 if(!dev
->tx_iovec
) { 1164 // end of last region 1169 PRINTD(DBG_TX
|DBG_BUS
,"(continuing multi)"); 1170 dev
->tx_bytes
= tx_bytes
- MAX_TRANSFER_COUNT
; 1171 tx_bytes
= MAX_TRANSFER_COUNT
; 1174 // tx_bytes == 0 -- we're between regions 1175 // regions remaining to transfer 1176 unsigned int tx_regions
= dev
->tx_regions
; 1179 // start a new region 1180 dev
->tx_addr
= dev
->tx_iovec
->iov_base
; 1181 tx_bytes
= dev
->tx_iovec
->iov_len
; 1183 dev
->tx_regions
= tx_regions
-1; 1185 if(!test_bit(ultra
, &dev
->flags
) || tx_bytes
<= MAX_PIO_COUNT
) { 1186 PRINTD(DBG_TX
|DBG_BUS
,"(pio)"); 1189 if(tx_bytes
<= MAX_TRANSFER_COUNT
) { 1190 PRINTD(DBG_TX
|DBG_BUS
,"(full region)"); 1193 PRINTD(DBG_TX
|DBG_BUS
,"(start multi region)"); 1194 dev
->tx_bytes
= tx_bytes
- MAX_TRANSFER_COUNT
; 1195 tx_bytes
= MAX_TRANSFER_COUNT
; 1199 // that's all folks - end of frame 1200 struct sk_buff
* skb
= dev
->tx_skb
; 1204 ATM_SKB(skb
)->vcc
->stats
->tx
++; 1211 // note: writing TX_COUNT clears any interrupt condition 1215 wr_regl(dev
, MASTER_TX_COUNT_REG_OFF
,0); 1216 wrs_regb(dev
, DATA_PORT_OFF
, dev
->tx_addr
, tx_bytes
); 1218 wr_regl(dev
, TX_DESCRIPTOR_PORT_OFF
,cpu_to_be32(dev
->tx_skb
->len
)); 1220 wr_regl(dev
, MASTER_TX_ADDR_REG_OFF
,virt_to_bus(dev
->tx_addr
)); 1222 wr_regl(dev
, TX_DESCRIPTOR_REG_OFF
,cpu_to_be32(dev
->tx_skb
->len
)); 1223 wr_regl(dev
, MASTER_TX_COUNT_REG_OFF
, 1225 ? tx_bytes
| MASTER_TX_AUTO_APPEND_DESC
1228 dev
->tx_addr
+= tx_bytes
; 1231 wr_regl(dev
, MASTER_TX_COUNT_REG_OFF
,0); 1236 #ifdef TAILRECURSIONWORKS 1237 // and we all bless optimised tail calls 1244 }while(pio_instead
); 1249 /********** handle TX bus master complete events **********/ 1251 staticinlinevoidtx_bus_master_complete_handler(hrz_dev
* dev
) { 1252 if(test_bit(tx_busy
, &dev
->flags
)) { 1255 PRINTD(DBG_TX
|DBG_ERR
,"unexpected TX bus master completion"); 1256 // clear interrupt condition on adapter 1257 wr_regl(dev
, MASTER_TX_COUNT_REG_OFF
,0); 1262 /********** move RX Q pointer to next item in circular buffer **********/ 1264 // called only from IRQ sub-handler 1265 staticinline u32
rx_queue_entry_next(hrz_dev
* dev
) { 1267 spin_lock(&dev
->mem_lock
); 1268 rx_queue_entry
=rd_mem(dev
, &dev
->rx_q_entry
->entry
); 1269 if(dev
->rx_q_entry
== dev
->rx_q_wrap
) 1270 dev
->rx_q_entry
= dev
->rx_q_reset
; 1273 wr_regw(dev
, RX_QUEUE_RD_PTR_OFF
, dev
->rx_q_entry
- dev
->rx_q_reset
); 1274 spin_unlock(&dev
->mem_lock
); 1275 return rx_queue_entry
; 1278 /********** handle RX disabled by device **********/ 1280 staticinlinevoidrx_disabled_handler(hrz_dev
* dev
) { 1281 wr_regw(dev
, RX_CONFIG_OFF
,rd_regw(dev
, RX_CONFIG_OFF
) | RX_ENABLE
); 1283 PRINTK(KERN_WARNING
,"RX was disabled!"); 1286 /********** handle RX data received by device **********/ 1288 // called from IRQ handler 1289 staticinlinevoidrx_data_av_handler(hrz_dev
* dev
) { 1291 u32 rx_queue_entry_flags
; 1295 PRINTD(DBG_FLOW
,"hrz_data_av_handler"); 1297 // try to grab rx lock (not possible during RX bus mastering) 1298 if(test_and_set_bit(rx_busy
, &dev
->flags
)) { 1299 PRINTD(DBG_RX
,"locked out of rx lock"); 1302 PRINTD(DBG_RX
,"set rx_busy for dev %p", dev
); 1303 // lock is cleared if we fail now, o/w after bus master completion 1305 YELLOW_LED_OFF(dev
); 1307 rx_queue_entry
=rx_queue_entry_next(dev
); 1309 rx_len
=rx_q_entry_to_length(rx_queue_entry
); 1310 rx_channel
=rx_q_entry_to_rx_channel(rx_queue_entry
); 1312 WAIT_FLUSH_RX_COMPLETE(dev
); 1314 SELECT_RX_CHANNEL(dev
, rx_channel
); 1316 PRINTD(DBG_RX
,"rx_queue_entry is: %#x", rx_queue_entry
); 1317 rx_queue_entry_flags
= rx_queue_entry
& (RX_CRC_32_OK
|RX_COMPLETE_FRAME
|SIMONS_DODGEY_MARKER
); 1320 // (at least) bus-mastering breaks if we try to handle a 1321 // zero-length frame, besides AAL5 does not support them 1322 PRINTK(KERN_ERR
,"zero-length frame!"); 1323 rx_queue_entry_flags
&= ~RX_COMPLETE_FRAME
; 1326 if(rx_queue_entry_flags
& SIMONS_DODGEY_MARKER
) { 1327 PRINTD(DBG_RX
|DBG_ERR
,"Simon's marker detected!"); 1329 if(rx_queue_entry_flags
== (RX_CRC_32_OK
| RX_COMPLETE_FRAME
)) { 1330 struct atm_vcc
* atm_vcc
; 1332 PRINTD(DBG_RX
,"got a frame on rx_channel %x len %u", rx_channel
, rx_len
); 1334 atm_vcc
= dev
->rxer
[rx_channel
]; 1335 // if no vcc is assigned to this channel, we should drop the frame 1336 // (is this what SIMONS etc. was trying to achieve?) 1340 if(atm_vcc
->qos
.rxtp
.traffic_class
!= ATM_NONE
) { 1342 if(rx_len
<= atm_vcc
->qos
.rxtp
.max_sdu
) { 1343 struct sk_buff
*skb
=atm_alloc_charge(atm_vcc
,rx_len
,GFP_ATOMIC
); 1345 // If everyone has to call atm_pdu2... why isn't it part of 1346 // atm_charge? B'cos some people already have skb->truesize! 1347 // WA: well. even if they think they do, they might not ... :-) 1350 // remember this so we can push it later 1352 // remember this so we can flush it later 1353 dev
->rx_channel
= rx_channel
; 1355 // prepare socket buffer 1356 skb_put(skb
, rx_len
); 1357 ATM_SKB(skb
)->vcc
= atm_vcc
; 1360 // dev->rx_regions = 0; 1361 // dev->rx_iovec = 0; 1362 dev
->rx_bytes
= rx_len
; 1363 dev
->rx_addr
= skb
->data
; 1364 PRINTD(DBG_RX
,"RX start simple transfer (addr %p, len %d)", 1372 PRINTD(DBG_INFO
,"failed to get skb"); 1376 PRINTK(KERN_INFO
,"frame received on TX-only VC %x", rx_channel
); 1377 // do we count this? 1381 PRINTK(KERN_WARNING
,"dropped over-size frame"); 1382 // do we count this? 1386 PRINTD(DBG_WARN
|DBG_VCC
|DBG_RX
,"no VCC for this frame (VC closed)"); 1387 // do we count this? 1391 // Wait update complete ? SPONG 1397 FLUSH_RX_CHANNEL(dev
,rx_channel
); 1398 clear_bit(rx_busy
, &dev
->flags
); 1403 /********** interrupt handler **********/ 1405 static voidinterrupt_handler(int irq
,void* dev_id
,struct pt_regs
* pt_regs
) { 1406 hrz_dev
* dev
= hrz_devs
; 1408 unsigned int irq_ok
; 1411 PRINTD(DBG_FLOW
,"interrupt_handler: %p", dev_id
); 1414 PRINTD(DBG_IRQ
|DBG_ERR
,"irq with NULL dev_id: %d", irq
); 1417 // Did one of our cards generate the interrupt? 1424 PRINTD(DBG_IRQ
,"irq not for me: %d", irq
); 1427 if(irq
!= dev
->irq
) { 1428 PRINTD(DBG_IRQ
|DBG_ERR
,"irq mismatch: %d", irq
); 1432 // definitely for us 1434 while((int_source
=rd_regl(dev
, INT_SOURCE_REG_OFF
) 1435 & INTERESTING_INTERRUPTS
)) { 1436 // In the interests of fairness, the (inline) handlers below are 1437 // called in sequence and without immediate return to the head of 1438 // the while loop. This is only of issue for slow hosts (or when 1439 // debugging messages are on). Really slow hosts may find a fast 1440 // sender keeps them permanently in the IRQ handler. :( 1442 // (only an issue for slow hosts) RX completion goes before 1443 // rx_data_av as the former implies rx_busy and so the latter 1444 // would just abort. If it reschedules another transfer 1445 // (continuing the same frame) then it will not clear rx_busy. 1447 // (only an issue for slow hosts) TX completion goes before RX 1448 // data available as it is a much shorter routine - there is the 1449 // chance that any further transfers it schedules will be complete 1450 // by the time of the return to the head of the while loop 1452 if(int_source
& RX_BUS_MASTER_COMPLETE
) { 1454 PRINTD(DBG_IRQ
|DBG_BUS
|DBG_RX
,"rx_bus_master_complete asserted"); 1455 rx_bus_master_complete_handler(dev
); 1457 if(int_source
& TX_BUS_MASTER_COMPLETE
) { 1459 PRINTD(DBG_IRQ
|DBG_BUS
|DBG_TX
,"tx_bus_master_complete asserted"); 1460 tx_bus_master_complete_handler(dev
); 1462 if(int_source
& RX_DATA_AV
) { 1464 PRINTD(DBG_IRQ
|DBG_RX
,"rx_data_av asserted"); 1465 rx_data_av_handler(dev
); 1469 PRINTD(DBG_IRQ
,"work done: %u", irq_ok
); 1471 PRINTD(DBG_IRQ
|DBG_WARN
,"spurious interrupt source: %#x", int_source
); 1474 PRINTD(DBG_IRQ
|DBG_FLOW
,"interrupt_handler done: %p", dev_id
); 1477 /********** housekeeping **********/ 1479 static voidset_timer(struct timer_list
* timer
,unsigned int delay
) { 1480 timer
->expires
= jiffies
+ delay
; 1485 static voiddo_housekeeping(unsigned long arg
) { 1486 // just stats at the moment 1487 hrz_dev
* dev
= hrz_devs
; 1489 // data is set to zero at module unload 1490 if(housekeeping
.data
) { 1492 // collect device-specific (not driver/atm-linux) stats here 1493 dev
->tx_cell_count
+=rd_regw(dev
, TX_CELL_COUNT_OFF
); 1494 dev
->rx_cell_count
+=rd_regw(dev
, RX_CELL_COUNT_OFF
); 1495 dev
->hec_error_count
+=rd_regw(dev
, HEC_ERROR_COUNT_OFF
); 1496 dev
->unassigned_cell_count
+=rd_regw(dev
, UNASSIGNED_CELL_COUNT_OFF
); 1499 set_timer(&housekeeping
, HZ
/10); 1504 /********** find an idle channel for TX and set it up **********/ 1506 // called with tx_busy set 1507 staticinlineshortsetup_idle_tx_channel(hrz_dev
* dev
, hrz_vcc
* vcc
) { 1508 unsigned short idle_channels
; 1509 short tx_channel
= -1; 1510 unsigned int spin_count
; 1511 PRINTD(DBG_FLOW
|DBG_TX
,"setup_idle_tx_channel %p", dev
); 1513 // better would be to fail immediately, the caller can then decide whether 1514 // to wait or drop (depending on whether this is UBR etc.) 1516 while(!(idle_channels
=rd_regw(dev
, TX_STATUS_OFF
) & IDLE_CHANNELS_MASK
)) { 1517 PRINTD(DBG_TX
|DBG_WARN
,"waiting for idle TX channel"); 1519 if(++spin_count
>100) { 1520 PRINTD(DBG_TX
|DBG_ERR
,"spun out waiting for idle TX channel"); 1525 // got an idle channel 1527 // tx_idle ensures we look for idle channels in RR order 1528 int chan
= dev
->tx_idle
; 1532 if(idle_channels
& (1<<chan
)) { 1537 if(chan
== TX_CHANS
) 1541 dev
->tx_idle
= chan
; 1544 // set up the channel we found 1546 // Initialise the cell header in the transmit channel descriptor 1547 // a.k.a. prepare the channel and remember that we have done so. 1549 tx_ch_desc
* tx_desc
= &memmap
->tx_descs
[tx_channel
]; 1552 u16 channel
= vcc
->channel
; 1554 unsigned long flags
; 1555 spin_lock_irqsave(&dev
->mem_lock
, flags
); 1557 // Update the transmit channel record. 1558 dev
->tx_channel_record
[tx_channel
] = channel
; 1561 update_tx_channel_config(dev
, tx_channel
, RATE_TYPE_ACCESS
, 1564 // Update the PCR counter preload value etc. 1565 update_tx_channel_config(dev
, tx_channel
, PCR_TIMER_ACCESS
, 1569 if(vcc
->tx_xbr_bits
== VBR_RATE_TYPE
) { 1571 update_tx_channel_config(dev
, tx_channel
, SCR_TIMER_ACCESS
, 1575 update_tx_channel_config(dev
, tx_channel
, BUCKET_CAPACITY_ACCESS
, 1576 vcc
->tx_bucket_bits
); 1579 update_tx_channel_config(dev
, tx_channel
, BUCKET_FULLNESS_ACCESS
, 1580 vcc
->tx_bucket_bits
); 1584 // Initialise the read and write buffer pointers 1585 rd_ptr
=rd_mem(dev
, &tx_desc
->rd_buf_type
) & BUFFER_PTR_MASK
; 1586 wr_ptr
=rd_mem(dev
, &tx_desc
->wr_buf_type
) & BUFFER_PTR_MASK
; 1588 // idle TX channels should have identical pointers 1589 if(rd_ptr
!= wr_ptr
) { 1590 PRINTD(DBG_TX
|DBG_ERR
,"TX buffer pointers are broken!"); 1591 // spin_unlock... return -E... 1592 // I wonder if gcc would get rid of one of the pointer aliases 1594 PRINTD(DBG_TX
,"TX buffer pointers are: rd %x, wr %x.", 1599 PRINTD(DBG_QOS
|DBG_TX
,"tx_channel: aal0"); 1600 rd_ptr
|= CHANNEL_TYPE_RAW_CELLS
; 1601 wr_ptr
|= CHANNEL_TYPE_RAW_CELLS
; 1604 PRINTD(DBG_QOS
|DBG_TX
,"tx_channel: aal34"); 1605 rd_ptr
|= CHANNEL_TYPE_AAL3_4
; 1606 wr_ptr
|= CHANNEL_TYPE_AAL3_4
; 1609 rd_ptr
|= CHANNEL_TYPE_AAL5
; 1610 wr_ptr
|= CHANNEL_TYPE_AAL5
; 1611 // Initialise the CRC 1612 wr_mem(dev
, &tx_desc
->partial_crc
, INITIAL_CRC
); 1616 wr_mem(dev
, &tx_desc
->rd_buf_type
, rd_ptr
); 1617 wr_mem(dev
, &tx_desc
->wr_buf_type
, wr_ptr
); 1619 // Write the Cell Header 1620 // Payload Type, CLP and GFC would go here if non-zero 1621 wr_mem(dev
, &tx_desc
->cell_header
, channel
); 1623 spin_unlock_irqrestore(&dev
->mem_lock
, flags
); 1629 /********** send a frame **********/ 1631 static inthrz_send(struct atm_vcc
* atm_vcc
,struct sk_buff
* skb
) { 1632 unsigned int spin_count
; 1634 hrz_dev
* dev
=HRZ_DEV(atm_vcc
->dev
); 1635 hrz_vcc
* vcc
=HRZ_VCC(atm_vcc
); 1636 u16 channel
= vcc
->channel
; 1638 u32 buffers_required
; 1640 /* signed for error return */ 1643 PRINTD(DBG_FLOW
|DBG_TX
,"hrz_send vc %x data %p len %u", 1644 channel
, skb
->data
, skb
->len
); 1646 dump_skb(">>>", channel
, skb
); 1648 if(atm_vcc
->qos
.txtp
.traffic_class
== ATM_NONE
) { 1649 PRINTK(KERN_ERR
,"attempt to send on RX-only VC %x", channel
); 1654 // don't understand this 1655 ATM_SKB(skb
)->vcc
= atm_vcc
; 1657 if(skb
->len
> atm_vcc
->qos
.txtp
.max_sdu
) { 1658 PRINTK(KERN_ERR
,"sk_buff length greater than agreed max_sdu, dropping..."); 1664 PRINTD(DBG_ERR
|DBG_TX
,"attempt to transmit on zero (rx_)channel"); 1670 // where would be a better place for this? housekeeping? 1672 pci_read_config_word(dev
->pci_dev
, PCI_STATUS
, &status
); 1673 if(status
& PCI_STATUS_REC_MASTER_ABORT
) { 1674 PRINTD(DBG_BUS
|DBG_ERR
,"Clearing PCI Master Abort (and cleaning up)"); 1675 status
&= ~PCI_STATUS_REC_MASTER_ABORT
; 1676 pci_write_config_word(dev
->pci_dev
, PCI_STATUS
, status
); 1677 if(test_bit(tx_busy
, &dev
->flags
)) { 1678 hrz_kfree_skb(dev
->tx_skb
); 1685 #ifdef DEBUG_HORIZON 1687 if(channel
==1023) { 1689 unsigned short d
=0; 1690 char* s
= skb
->data
; 1692 for(i
=0; i
<4; ++i
) { 1693 d
= (d
<<4) | ((*s
<='9') ? (*s
-'0') : (*s
-'a'+10)); 1696 PRINTK(KERN_INFO
,"debug bitmap is now %hx", debug
= d
); 1701 // wait until TX is free and grab lock 1705 // Wait for enough space to be available in transmit buffer memory. 1707 // should be number of cells needed + 2 (according to hardware docs) 1708 // = ((framelen+8)+47) / 48 + 2 1709 // = (framelen+7) / 48 + 3, hmm... faster to put addition inside XXX 1710 buffers_required
= (skb
->len
+(ATM_AAL5_TRAILER
-1)) / ATM_CELL_PAYLOAD
+3; 1712 // replace with timer and sleep, add dev->tx_buffers_queue (max 1 entry) 1714 while((free_buffers
=rd_regw(dev
, TX_FREE_BUFFER_COUNT_OFF
)) < buffers_required
) { 1715 PRINTD(DBG_TX
,"waiting for free TX buffers, got %d of %d", 1716 free_buffers
, buffers_required
); 1717 // what is the appropriate delay? implement a timeout? (depending on line speed?) 1719 // what happens if we kill (current_pid, SIGKILL) ? 1721 if(++spin_count
>1000) { 1722 PRINTD(DBG_TX
|DBG_ERR
,"spun out waiting for tx buffers, got %d of %d", 1723 free_buffers
, buffers_required
); 1729 // Select a channel to transmit the frame on. 1730 if(channel
== dev
->last_vc
) { 1731 PRINTD(DBG_TX
,"last vc hack: hit"); 1732 tx_channel
= dev
->tx_last
; 1734 PRINTD(DBG_TX
,"last vc hack: miss"); 1735 // Are we currently transmitting this VC on one of the channels? 1736 for(tx_channel
=0; tx_channel
< TX_CHANS
; ++tx_channel
) 1737 if(dev
->tx_channel_record
[tx_channel
] == channel
) { 1738 PRINTD(DBG_TX
,"vc already on channel: hit"); 1741 if(tx_channel
== TX_CHANS
) { 1742 PRINTD(DBG_TX
,"vc already on channel: miss"); 1743 // Find and set up an idle channel. 1744 tx_channel
=setup_idle_tx_channel(dev
, vcc
); 1746 PRINTD(DBG_TX
|DBG_ERR
,"failed to get channel"); 1752 PRINTD(DBG_TX
,"got channel"); 1753 SELECT_TX_CHANNEL(dev
, tx_channel
); 1755 dev
->last_vc
= channel
; 1756 dev
->tx_last
= tx_channel
; 1759 PRINTD(DBG_TX
,"using channel %u", tx_channel
); 1761 YELLOW_LED_OFF(dev
); 1763 // TX start transfer 1766 unsigned int tx_len
= skb
->len
; 1767 unsigned int tx_iovcnt
=ATM_SKB(skb
)->iovcnt
; 1768 // remember this so we can free it later 1772 // scatter gather transfer 1773 dev
->tx_regions
= tx_iovcnt
; 1774 dev
->tx_iovec
= (struct iovec
*) skb
->data
; 1776 PRINTD(DBG_TX
|DBG_BUS
,"TX start scatter-gather transfer (iovec %p, len %d)", 1782 dev
->tx_bytes
= tx_len
; 1783 dev
->tx_addr
= skb
->data
; 1784 PRINTD(DBG_TX
|DBG_BUS
,"TX start simple transfer (addr %p, len %d)", 1788 // and do the business 1796 /********** reset a card **********/ 1798 static void __init
hrz_reset(const hrz_dev
* dev
) { 1799 u32 control_0_reg
=rd_regl(dev
, CONTROL_0_REG
); 1801 // why not set RESET_HORIZON to one and wait for the card to 1802 // reassert that bit as zero? Like so: 1803 control_0_reg
= control_0_reg
& RESET_HORIZON
; 1804 wr_regl(dev
, CONTROL_0_REG
, control_0_reg
); 1805 while(control_0_reg
& RESET_HORIZON
) 1806 control_0_reg
=rd_regl(dev
, CONTROL_0_REG
); 1808 // old reset code retained: 1809 wr_regl(dev
, CONTROL_0_REG
, control_0_reg
| 1810 RESET_ATM
| RESET_RX
| RESET_TX
| RESET_HOST
); 1811 // just guessing here 1814 wr_regl(dev
, CONTROL_0_REG
, control_0_reg
); 1817 /********** read the burnt in address **********/ 1819 static u16 __init
read_bia(const hrz_dev
* dev
, u16 addr
) { 1821 u32 ctrl
=rd_regl(dev
, CONTROL_0_REG
); 1823 inlinevoidWRITE_IT_WAIT(void) { 1824 wr_regl(dev
, CONTROL_0_REG
, ctrl
); 1828 inlinevoidCLOCK_IT(void) { 1829 // DI must be valid around rising SK edge 1830 ctrl
&= ~SEEPROM_SK
; 1836 const unsigned int addr_bits
=6; 1837 const unsigned int data_bits
=16; 1843 ctrl
&= ~(SEEPROM_CS
| SEEPROM_SK
| SEEPROM_DI
); 1846 // wake Serial EEPROM and send 110 (READ) command 1847 ctrl
|= (SEEPROM_CS
| SEEPROM_DI
); 1853 ctrl
&= ~SEEPROM_DI
; 1856 for(i
=0; i
<addr_bits
; i
++) { 1857 if(addr
& (1<< (addr_bits
-1))) 1860 ctrl
&= ~SEEPROM_DI
; 1867 // we could check that we have DO = 0 here 1868 ctrl
&= ~SEEPROM_DI
; 1871 for(i
=0;i
<data_bits
;i
++) { 1876 if(rd_regl(dev
, CONTROL_0_REG
) & SEEPROM_DO
) 1877 res
|= (1<< (data_bits
-1)); 1880 ctrl
&= ~(SEEPROM_SK
| SEEPROM_CS
); 1886 /********** initialise a card **********/ 1888 static int __init
hrz_init(hrz_dev
* dev
) { 1902 ctrl
=rd_regl(dev
, CONTROL_0_REG
); 1903 PRINTD(DBG_INFO
,"ctrl0reg is %#x", ctrl
); 1904 onefivefive
= ctrl
& ATM_LAYER_STATUS
; 1907 printk(DEV_LABEL
": Horizon Ultra (at 155.52 MBps)"); 1909 printk(DEV_LABEL
": Horizon (at 25 MBps)"); 1912 // Reset the card to get everything in a known state 1917 // Clear all the buffer memory 1919 printk(" clearing memory"); 1921 for(mem
= (HDW
*) memmap
; mem
< (HDW
*) (memmap
+1); ++mem
)
    wr_mem (dev, mem, 0);

  printk (" tx channels");

  // All eight transmit channels are set up as AAL5 ABR channels with
  // a 16us cell spacing. Why?

  // Channel 0 gets the free buffer at 100h, channel 1 gets the free
  // buffer at 110h etc.

  for (chan
=0; chan
< TX_CHANS
; ++chan
) { 1933 tx_ch_desc
* tx_desc
= &memmap
->tx_descs
[chan
]; 1934 cell_buf
* buf
= &memmap
->inittxbufs
[chan
]; 1936 // initialise the read and write buffer pointers 1937 wr_mem(dev
, &tx_desc
->rd_buf_type
,BUF_PTR(buf
)); 1938 wr_mem(dev
, &tx_desc
->wr_buf_type
,BUF_PTR(buf
)); 1940 // set the status of the initial buffers to empty 1941 wr_mem(dev
, &buf
->next
, BUFF_STATUS_EMPTY
); 1944 // Use space bufn3 at the moment for tx buffers 1946 printk(" tx buffers"); 1948 tx_desc
= memmap
->bufn3
; 1950 wr_mem(dev
, &memmap
->txfreebufstart
.next
,BUF_PTR(tx_desc
) | BUFF_STATUS_EMPTY
); 1952 for(buff_count
=0; buff_count
< BUFN3_SIZE
-1; buff_count
++) { 1953 wr_mem(dev
, &tx_desc
->next
,BUF_PTR(tx_desc
+1) | BUFF_STATUS_EMPTY
); 1957 wr_mem(dev
, &tx_desc
->next
,BUF_PTR(&memmap
->txfreebufend
) | BUFF_STATUS_EMPTY
); 1959 // Initialise the transmit free buffer count 1960 wr_regw(dev
, TX_FREE_BUFFER_COUNT_OFF
, BUFN3_SIZE
); 1962 printk(" rx channels"); 1964 // Initialise all of the receive channels to be AAL5 disabled with 1965 // an interrupt threshold of 0 1967 for(chan
=0; chan
< RX_CHANS
; ++chan
) { 1968 rx_ch_desc
* rx_desc
= &memmap
->rx_descs
[chan
]; 1970 wr_mem(dev
, &rx_desc
->wr_buf_type
, CHANNEL_TYPE_AAL5
| RX_CHANNEL_DISABLED
); 1973 printk(" rx buffers"); 1975 // Use space bufn4 at the moment for rx buffers 1977 rx_desc
= memmap
->bufn4
; 1979 wr_mem(dev
, &memmap
->rxfreebufstart
.next
,BUF_PTR(rx_desc
) | BUFF_STATUS_EMPTY
); 1981 for(buff_count
=0; buff_count
< BUFN4_SIZE
-1; buff_count
++) { 1982 wr_mem(dev
, &rx_desc
->next
,BUF_PTR(rx_desc
+1) | BUFF_STATUS_EMPTY
); 1987 wr_mem(dev
, &rx_desc
->next
,BUF_PTR(&memmap
->rxfreebufend
) | BUFF_STATUS_EMPTY
); 1989 // Initialise the receive free buffer count 1990 wr_regw(dev
, RX_FREE_BUFFER_COUNT_OFF
, BUFN4_SIZE
);

  // Initialise Horizon's registers

  wr_regw (dev
, TX_CONFIG_OFF
, 1996 ABR_ROUND_ROBIN
| TX_NORMAL_OPERATION
| DRVR_DRVRBAR_ENABLE
); 1998 // RX config. Use 10-x VC bits, x VP bits, non user cells in channel 0. 1999 wr_regw(dev
, RX_CONFIG_OFF
, 2000 DISCARD_UNUSED_VPI_VCI_BITS_SET
| NON_USER_CELLS_IN_ONE_CHANNEL
| vpi_bits
); 2003 wr_regw(dev
, RX_LINE_CONFIG_OFF
, 2004 LOCK_DETECT_ENABLE
| FREQUENCY_DETECT_ENABLE
| GXTALOUT_SELECT_DIV4
); 2006 // Set the max AAL5 cell count to be just enough to contain the 2007 // largest AAL5 frame that the user wants to receive 2008 wr_regw(dev
, MAX_AAL5_CELL_COUNT_OFF
, 2009 (max_rx_size
+ ATM_AAL5_TRAILER
+ ATM_CELL_PAYLOAD
-1) / ATM_CELL_PAYLOAD
); 2012 wr_regw(dev
, RX_CONFIG_OFF
,rd_regw(dev
, RX_CONFIG_OFF
) | RX_ENABLE
); 2016 // Drive the OE of the LEDs then turn the green LED on 2017 ctrl
|= GREEN_LED_OE
| YELLOW_LED_OE
| GREEN_LED
| YELLOW_LED
; 2018 wr_regl(dev
, CONTROL_0_REG
, ctrl
); 2020 // Test for a 155-capable card 2023 // Select 155 mode... make this a choice (or: how do we detect 2024 // external line speed and switch?) 2025 ctrl
|= ATM_LAYER_SELECT
; 2026 wr_regl(dev
, CONTROL_0_REG
, ctrl
); 2028 // test SUNI-lite vs SAMBA 2030 // Register 0x00 in the SUNI will have some of bits 3-7 set, and 2031 // they will always be zero for the SAMBA. Ha! Bloody hardware 2032 // engineers. It'll never work. 2034 if(rd_framer(dev
,0) &0x00f0) { 2038 // Reset, just in case 2039 wr_framer(dev
,0x00,0x0080); 2040 wr_framer(dev
,0x00,0x0000); 2042 // Configure transmit FIFO 2043 wr_framer(dev
,0x63,rd_framer(dev
,0x63) |0x0002); 2045 // Set line timed mode 2046 wr_framer(dev
,0x05,rd_framer(dev
,0x05) |0x0001); 2051 // Reset, just in case 2052 wr_framer(dev
,0,rd_framer(dev
,0) |0x0001); 2053 wr_framer(dev
,0,rd_framer(dev
,0) &~0x0001); 2055 // Turn off diagnostic loopback and enable line-timed mode 2056 wr_framer(dev
,0,0x0002); 2058 // Turn on transmit outputs 2059 wr_framer(dev
,2,0x0B80); 2063 ctrl
&= ~ATM_LAYER_SELECT
; 2079 u8
* esi
= dev
->atm_dev
->esi
;

    // in the card I have, EEPROM
    // addresses 0, 1, 2 contain 0
    // addresses 5, 6 etc. contain ffff
    // NB: Madge prefix is 00 00 f6 (which is 00 00 6f in Ethernet bit order)
    // the read_bia routine gets the BIA in Ethernet bit order

    for (i
=0; i
< ESI_LEN
; ++i
) { 2089 b
=read_bia(dev
, i
/2+2); 2093 printk("%02x", esi
[i
]); 2097 // Enable RX_Q and ?X_COMPLETE interrupts only 2098 wr_regl(dev
, INT_ENABLE_REG_OFF
, INTERESTING_INTERRUPTS
); 2106 /********** check max_sdu **********/ 2108 static intcheck_max_sdu(hrz_aal aal
,struct atm_trafprm
* tp
,unsigned int max_frame_size
) { 2109 PRINTD(DBG_FLOW
|DBG_QOS
,"check_max_sdu"); 2113 if(!(tp
->max_sdu
)) { 2114 PRINTD(DBG_QOS
,"defaulting max_sdu"); 2115 tp
->max_sdu
= ATM_AAL0_SDU
; 2116 }else if(tp
->max_sdu
!= ATM_AAL0_SDU
) { 2117 PRINTD(DBG_QOS
|DBG_ERR
,"rejecting max_sdu"); 2122 if(tp
->max_sdu
==0|| tp
->max_sdu
> ATM_MAX_AAL34_PDU
) { 2123 PRINTD(DBG_QOS
,"%sing max_sdu", tp
->max_sdu
?"capp":"default"); 2124 tp
->max_sdu
= ATM_MAX_AAL34_PDU
; 2128 if(tp
->max_sdu
==0|| tp
->max_sdu
> max_frame_size
) { 2129 PRINTD(DBG_QOS
,"%sing max_sdu", tp
->max_sdu
?"capp":"default"); 2130 tp
->max_sdu
= max_frame_size
; 2137 /********** check pcr **********/ 2139 // something like this should be part of ATM Linux 2140 static intatm_pcr_check(struct atm_trafprm
/********** check pcr **********/

// something like this should be part of ATM Linux
static int atm_pcr_check (struct atm_trafprm * tp, unsigned int pcr) {
  // we are assuming non-UBR, and non-special values of pcr
  if (tp->min_pcr == ATM_MAX_PCR)
    PRINTD (DBG_QOS, "luser gave min_pcr = ATM_MAX_PCR");
  else if (tp->min_pcr < 0)
    PRINTD (DBG_QOS, "luser gave negative min_pcr");
  else if (tp->min_pcr && tp->min_pcr > pcr)
    PRINTD (DBG_QOS, "pcr less than min_pcr");
  else
    // !! max_pcr = UNSPEC (0) is equivalent to max_pcr = MAX (-1)
    // easier to #define ATM_MAX_PCR 0 and have all rates unsigned?
    // [this would get rid of next two conditionals]
    if ((0) && tp->max_pcr == ATM_MAX_PCR)
      PRINTD (DBG_QOS, "luser gave max_pcr = ATM_MAX_PCR");
    else if ((tp->max_pcr != ATM_MAX_PCR) && tp->max_pcr < 0)
      PRINTD (DBG_QOS, "luser gave negative max_pcr");
    else if (tp->max_pcr && tp->max_pcr != ATM_MAX_PCR && tp->max_pcr < pcr)
      PRINTD (DBG_QOS, "pcr greater than max_pcr");
    else {
      // each limit unspecified or not violated
      PRINTD (DBG_QOS, "xBR(pcr) OK");
      return 0;
    }
  PRINTD (DBG_QOS, "pcr=%u, tp: min_pcr=%d, pcr=%d, max_pcr=%d",
          pcr, tp->min_pcr, tp->pcr, tp->max_pcr);
  return -EINVAL;
}
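/* Sketch only: the comment in atm_pcr_check above suggests that if
 * ATM_MAX_PCR were defined as 0 and all rates kept unsigned, the two
 * max_pcr special cases would disappear.  A hypothetical check under
 * that convention might look like this (names and convention assumed,
 * not part of ATM Linux).
 */
#if 0
static int pcr_within_bounds (unsigned int min_pcr, unsigned int max_pcr,
                              unsigned int pcr) {
  if (min_pcr && pcr < min_pcr)
    return -EINVAL;     /* below the requested floor                   */
  if (max_pcr && pcr > max_pcr)
    return -EINVAL;     /* above the requested ceiling (0 == no limit) */
  return 0;
}
#endif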
/********** open VC **********/

static int hrz_open (struct atm_vcc * atm_vcc, short vpi, int vci) {
  int error;
  u16 channel;

  struct atm_qos * qos;
  struct atm_trafprm * txtp;
  struct atm_trafprm * rxtp;

  hrz_dev * dev = HRZ_DEV(atm_vcc->dev);
  hrz_vcc vcc;
  hrz_vcc * vccp; // allocated late
  PRINTD (DBG_FLOW|DBG_VCC, "hrz_open %x %x", vpi, vci);

#ifdef ATM_VPI_UNSPEC
  // UNSPEC is deprecated, remove this code eventually
  if (vpi == ATM_VPI_UNSPEC || vci == ATM_VCI_UNSPEC) {
    PRINTK (KERN_WARNING, "rejecting open with unspecified VPI/VCI (deprecated)");
    return -EINVAL;
  }
#endif

  // deal with possibly wildcarded VCs
  error = atm_find_ci (atm_vcc, &vpi, &vci);
  if (error) {
    PRINTD (DBG_WARN|DBG_VCC, "atm_find_ci failed!");
    return error;
  }
  PRINTD (DBG_VCC, "atm_find_ci gives %x %x", vpi, vci);

  error = vpivci_to_channel (&channel, vpi, vci);
  if (error) {
    PRINTD (DBG_WARN|DBG_VCC, "VPI/VCI out of range: %hd/%d", vpi, vci);
    return error;
  }

  vcc.channel = channel;
  // max speed for the moment

  qos = &atm_vcc->qos;

  // check AAL and remember it
  switch (qos->aal) {
    case ATM_AAL0:
      // we would if it were 48 bytes and not 52!
      PRINTD (DBG_QOS|DBG_VCC, "AAL0");
      vcc.aal = aal0;
      break;
    case ATM_AAL34:
      // we would if I knew how to do the SAR!
      PRINTD (DBG_QOS|DBG_VCC, "AAL3/4");
      vcc.aal = aal34;
      break;
    case ATM_AAL5:
      PRINTD (DBG_QOS|DBG_VCC, "AAL5");
      vcc.aal = aal5;
      break;
    default:
      PRINTD (DBG_QOS|DBG_VCC, "Bad AAL!");
      return -EINVAL;
  }

  // TX traffic parameters

  // there are two, interrelated problems here: 1. the reservation of
  // PCR is not a binary choice, we are given bounds and/or a
  // desirable value; 2. the device is only capable of certain values,
  // most of which are not integers. It is almost certainly acceptable
  // to be off by a maximum of 1 to 10 cps.

  // Pragmatic choice: always store an integral PCR as that which has
  // been allocated, even if we allocate a little (or a lot) less,
  // after rounding. The actual allocation depends on what we can
  // manage with our rate selection algorithm. The rate selection
  // algorithm is given an integral PCR and a tolerance and told
  // whether it should round the value up or down if the tolerance is
  // exceeded; it returns: a) the actual rate selected (rounded up to
  // the nearest integer), b) a bit pattern to feed to the timer
  // register, and c) a failure value if no applicable rate exists.

  // Part of the job is done by atm_pcr_goal which gives us a PCR
  // specification which says: EITHER grab the maximum available PCR
  // (and perhaps a lower bound which we mustn't pass), OR grab this
  // amount, rounding down if you have to (and perhaps a lower bound
  // which we mustn't pass), OR grab this amount, rounding up if you
  // have to (and perhaps an upper bound which we mustn't pass). If any
  // bounds ARE passed we fail. Note that rounding is only rounding to
  // match device limitations; we do not round down to satisfy
  // bandwidth availability even if this would not violate any given
  // limits.

  // Note: telephony = 64kb/s = 48 byte cell payload @ 500/3 cells/s
  // (say) so this is not even a binary fixpoint cell rate (but this
  // device can do it). To avoid this sort of hassle we use a
  // tolerance parameter (currently fixed at 10 cps).
,"TX:"); 2271 // set up defaults for no traffic 2273 // who knows what would actually happen if you try and send on this? 2274 vcc
.tx_xbr_bits
= IDLE_RATE_TYPE
; 2275 vcc
.tx_pcr_bits
= CLOCK_DISABLE
; 2277 vcc
.tx_scr_bits
= CLOCK_DISABLE
; 2278 vcc
.tx_bucket_bits
=0; 2281 if(txtp
->traffic_class
!= ATM_NONE
) { 2282 error
=check_max_sdu(vcc
.aal
, txtp
, max_tx_size
); 2284 PRINTD(DBG_QOS
,"TX max_sdu check failed"); 2288 switch(txtp
->traffic_class
) { 2290 // we take "the PCR" as a rate-cap 2293 make_rate(dev
,1<<30, round_nearest
, &vcc
.tx_pcr_bits
,0); 2294 vcc
.tx_xbr_bits
= ABR_RATE_TYPE
; 2299 // reserve min, allow up to max 2301 make_rate(dev
,1<<30, round_nearest
, &vcc
.tx_pcr_bits
,0); 2302 vcc
.tx_xbr_bits
= ABR_RATE_TYPE
; 2307 int pcr
=atm_pcr_goal(txtp
); 2310 // down vs. up, remaining bandwidth vs. unlimited bandwidth!! 2311 // should really have: once someone gets unlimited bandwidth 2312 // that no more non-UBR channels can be opened until the 2313 // unlimited one closes?? For the moment, round_down means 2314 // greedy people actually get something and not nothing 2316 // slight race (no locking) here so we may get -EAGAIN 2317 // later; the greedy bastards would deserve it :) 2318 PRINTD(DBG_QOS
,"snatching all remaining TX bandwidth"); 2319 pcr
= dev
->tx_avail
; 2326 error
=make_rate_with_tolerance(dev
, pcr
, r
,10, 2327 &vcc
.tx_pcr_bits
, &vcc
.tx_rate
); 2329 PRINTD(DBG_QOS
,"could not make rate from TX PCR"); 2332 // not really clear what further checking is needed 2333 error
=atm_pcr_check(txtp
, vcc
.tx_rate
); 2335 PRINTD(DBG_QOS
,"TX PCR failed consistency check"); 2338 vcc
.tx_xbr_bits
= CBR_RATE_TYPE
; 2343 int pcr
      case ATM_VBR: {
        int pcr = atm_pcr_goal (txtp);
        // int scr = atm_scr_goal (txtp);
        int scr = pcr/2; // just for fun
        unsigned int mbs = 60; // just for fun

        rounding pr;
        rounding sr;
        unsigned int bucket;

        // select PCR rounding as for CBR above
        if (!pcr) {
          pcr = dev->tx_avail;
          pr = round_down;
        } else if (pcr < 0) {
          pr = round_down;
          pcr = -pcr;
        } else {
          pr = round_up;
        }
        error = make_rate_with_tolerance (dev, pcr, pr, 10,
                                          &vcc.tx_pcr_bits, 0);

        // see comments for PCR with CBR above
        if (!scr) {
          // slight race (no locking) here so we may get -EAGAIN
          // later; the greedy bastards would deserve it :)
          PRINTD (DBG_QOS, "snatching all remaining TX bandwidth");
          scr = dev->tx_avail;
          sr = round_down;
        } else if (scr < 0) {
          sr = round_down;
          scr = -scr;
        } else {
          sr = round_up;
        }

        error = make_rate_with_tolerance (dev, scr, sr, 10,
                                          &vcc.tx_scr_bits, &vcc.tx_rate);
        if (error) {
          PRINTD (DBG_QOS, "could not make rate from TX SCR");
          return error;
        }

        // not really clear what further checking is needed
        // error = atm_scr_check (txtp, vcc.tx_rate);
        if (error) {
          PRINTD (DBG_QOS, "TX SCR failed consistency check");
          return error;
        }

        // bucket calculations (from a piece of paper...) cell bucket
        // capacity must be largest integer smaller than m(p-s)/p + 1
        // where m = max burst size, p = pcr, s = scr
        bucket = mbs*(pcr-scr)/pcr;
        if (bucket*pcr != mbs*(pcr-scr))
          bucket += 1;
        if (bucket > BUCKET_MAX_SIZE) {
          PRINTD (DBG_QOS, "shrinking bucket from %u to %u",
                  bucket, BUCKET_MAX_SIZE);
          bucket = BUCKET_MAX_SIZE;
        }
        vcc.tx_xbr_bits = VBR_RATE_TYPE;
        vcc.tx_bucket_bits = bucket;
        break;
      }
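      /* Worked example of the bucket formula above (illustrative only):
       * with mbs m = 60, pcr p = 700 and scr s = 300 cells/s,
       * m(p-s)/p = 34.28..., and the largest integer smaller than
       * m(p-s)/p + 1 is 35; the integer code above reaches the same
       * value by adding 1 whenever the division is inexact.
       */
#if 0
      {
        unsigned int m = 60, p = 700, s = 300;
        unsigned int b = m*(p-s)/p;          /* 34      */
        if (b*p != m*(p-s))
          b += 1;                            /* b == 35 */
      }
#endif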
,"unsupported TX traffic class"); 2410 // RX traffic parameters 2412 PRINTD(DBG_QOS
,"RX:"); 2416 // set up defaults for no traffic 2419 if(rxtp
->traffic_class
!= ATM_NONE
) { 2420 error
=check_max_sdu(vcc
.aal
, rxtp
, max_rx_size
); 2422 PRINTD(DBG_QOS
,"RX max_sdu check failed"); 2425 switch(rxtp
->traffic_class
) { 2438 int pcr
=atm_pcr_goal(rxtp
); 2440 // slight race (no locking) here so we may get -EAGAIN 2441 // later; the greedy bastards would deserve it :) 2442 PRINTD(DBG_QOS
,"snatching all remaining RX bandwidth"); 2443 pcr
= dev
->rx_avail
; 2448 // not really clear what further checking is needed 2449 error
=atm_pcr_check(rxtp
, vcc
.rx_rate
); 2451 PRINTD(DBG_QOS
,"RX PCR failed consistency check"); 2458 // int scr = atm_scr_goal (rxtp); 2459 int scr
=1<<16;// just for fun 2461 // slight race (no locking) here so we may get -EAGAIN 2462 // later; the greedy bastards would deserve it :) 2463 PRINTD(DBG_QOS
,"snatching all remaining RX bandwidth"); 2464 scr
= dev
->rx_avail
; 2469 // not really clear what further checking is needed 2470 // error = atm_scr_check (rxtp, vcc.rx_rate); 2472 PRINTD(DBG_QOS
,"RX SCR failed consistency check"); 2479 PRINTD(DBG_QOS
,"unsupported RX traffic class"); 2487 // late abort useful for diagnostics 2488 if(vcc
.aal
!= aal5
) { 2489 PRINTD(DBG_QOS
,"AAL not supported"); 2493 // get space for our vcc stuff and copy parameters into it 2494 vccp
  // get space for our vcc stuff and copy parameters into it
  vccp = kmalloc (sizeof(hrz_vcc), GFP_KERNEL);
  if (!vccp) {
    PRINTK (KERN_ERR, "out of memory!");
    return -ENOMEM;
  }

  // clear error and grab cell rate resource lock
  error = 0;
  spin_lock (&dev->rate_lock);

  if (vcc.tx_rate > dev->tx_avail) {
    PRINTD (DBG_QOS, "not enough TX PCR left");
    error = -EAGAIN;
  }

  if (vcc.rx_rate > dev->rx_avail) {
    PRINTD (DBG_QOS, "not enough RX PCR left");
    error = -EAGAIN;
  }

  if (!error) {
    // really consume cell rates
    dev->tx_avail -= vcc.tx_rate;
    dev->rx_avail -= vcc.rx_rate;
    PRINTD (DBG_QOS|DBG_VCC, "reserving %u TX PCR and %u RX PCR",
            vcc.tx_rate, vcc.rx_rate);
  }

  // release lock and exit on error
  spin_unlock (&dev->rate_lock);
  if (error) {
    PRINTD (DBG_QOS|DBG_VCC, "insufficient cell rate resources");
    kfree (vccp);
    return error;
  }

  // this is "immediately before allocating the connection identifier
  // in hardware" - so long as the next call does not fail :)
  atm_vcc->flags |= ATM_VF_ADDR;

  // any errors here are very serious and should never occur

  if (rxtp->traffic_class != ATM_NONE) {
    if (dev->rxer[channel]) {
      PRINTD (DBG_ERR|DBG_VCC, "VC already open for RX");
      error = -EBUSY;
    }
    if (!error)
      error = hrz_open_rx (dev, channel);
    if (error) {
      kfree (vccp);
      return error;
    }
    // this link allows RX frames through
    dev->rxer[channel] = atm_vcc;
  }

  // success, set elements of atm_vcc
  *vccp = vcc;
  atm_vcc->dev_data = (void *) vccp;

  // indicate readiness
  atm_vcc->flags |= ATM_VF_READY;

  return 0;
}
/********** close VC **********/

static void hrz_close (struct atm_vcc * atm_vcc) {
  hrz_dev * dev = HRZ_DEV(atm_vcc->dev);
  hrz_vcc * vcc = HRZ_VCC(atm_vcc);
  u16 channel = vcc->channel;
  PRINTD (DBG_VCC|DBG_FLOW, "hrz_close");

  // indicate unreadiness
  atm_vcc->flags &= ~ATM_VF_READY;

  if (atm_vcc->qos.txtp.traffic_class != ATM_NONE) {
    unsigned int i;

    // let any TX on this channel that has started complete
    // no restart, just keep trying

    // remove record of any tx_channel having been setup for this channel
    for (i = 0; i < TX_CHANS; ++i)
      if (dev->tx_channel_record[i] == channel) {
        dev->tx_channel_record[i] = -1;
        break;
      }
    if (dev->last_vc == channel)
      dev->last_vc = -1;
  }

  if (atm_vcc->qos.rxtp.traffic_class != ATM_NONE) {
    // disable RXing - it tries quite hard
    hrz_close_rx (dev, channel);
    // forget the vcc - no more skbs will be pushed
    if (atm_vcc != dev->rxer[channel])
      PRINTK (KERN_ERR, "%s atm_vcc=%p rxer[channel]=%p",
              "arghhh! we're going to die!",
              atm_vcc, dev->rxer[channel]);
    dev->rxer[channel] = 0;
  }

  // atomically release our rate reservation
  spin_lock (&dev->rate_lock);
  PRINTD (DBG_QOS|DBG_VCC, "releasing %u TX PCR and %u RX PCR",
          vcc->tx_rate, vcc->rx_rate);
  dev->tx_avail += vcc->tx_rate;
  dev->rx_avail += vcc->rx_rate;
  spin_unlock (&dev->rate_lock);

  // free our structure
  kfree (vcc);
  // say the VPI/VCI is free again
  atm_vcc->flags &= ~ATM_VF_ADDR;
}
static int hrz_getsockopt (struct atm_vcc * atm_vcc, int level, int optname,
                           void *optval, int optlen) {
  hrz_dev * dev = HRZ_DEV(atm_vcc->dev);
  PRINTD (DBG_FLOW|DBG_VCC, "hrz_getsockopt");
  // return the right thing for each supported level and option name
  return -ENOPROTOOPT;
}

static int hrz_setsockopt (struct atm_vcc * atm_vcc, int level, int optname,
                           void *optval, int optlen) {
  hrz_dev * dev = HRZ_DEV(atm_vcc->dev);
  PRINTD (DBG_FLOW|DBG_VCC, "hrz_setsockopt");
  return -ENOPROTOOPT;
}

static int hrz_sg_send (struct atm_vcc * atm_vcc,
                        unsigned long start,
                        unsigned long size) {
  if (atm_vcc->qos.aal == ATM_AAL5) {
    PRINTD (DBG_FLOW|DBG_VCC, "hrz_sg_send: yes");
    return 1;
  } else {
    PRINTD (DBG_FLOW|DBG_VCC, "hrz_sg_send: no");
    return 0;
  }
}
static int hrz_ioctl (struct atm_dev * atm_dev, unsigned int cmd, void *arg) {
  hrz_dev * dev = HRZ_DEV(atm_dev);
  PRINTD (DBG_FLOW, "hrz_ioctl");
  return -1;
}

unsigned char hrz_phy_get (struct atm_dev * atm_dev, unsigned long addr) {
  hrz_dev * dev = HRZ_DEV(atm_dev);
  PRINTD (DBG_FLOW, "hrz_phy_get");
  return 0;
}

static void hrz_phy_put (struct atm_dev * atm_dev, unsigned char value,
                         unsigned long addr) {
  hrz_dev * dev = HRZ_DEV(atm_dev);
  PRINTD (DBG_FLOW, "hrz_phy_put");
}

static int hrz_change_qos (struct atm_vcc * atm_vcc, struct atm_qos * qos, int flgs) {
  hrz_dev * dev = HRZ_DEV(atm_vcc->dev);
  PRINTD (DBG_FLOW, "hrz_change_qos");
  return -1;
}
/********** proc file contents **********/

static int hrz_proc_read (struct atm_dev * atm_dev, loff_t * pos, char * page) {
  hrz_dev * dev = HRZ_DEV(atm_dev);
  int left = *pos;
  PRINTD (DBG_FLOW, "hrz_proc_read");

  /* more diagnostics here? */

  if (!left--) {
    unsigned int count = sprintf (page, "vbr buckets:");
    unsigned int i;
    for (i = 0; i < TX_CHANS; ++i)
      count += sprintf (page+count, " %u/%u",
                        query_tx_channel_config (dev, i, BUCKET_FULLNESS_ACCESS),
                        query_tx_channel_config (dev, i, BUCKET_CAPACITY_ACCESS));
    count += sprintf (page+count, ".\n");
    return count;
  }

  if (!left--)
    return sprintf (page,
                    "cells: TX %lu, RX %lu, HEC errors %lu, unassigned %lu.\n",
                    dev->tx_cell_count, dev->rx_cell_count,
                    dev->hec_error_count, dev->unassigned_cell_count);

  if (!left--)
    return sprintf (page,
                    "free cell buffers: TX %hu, RX %hu+%hu.\n",
                    rd_regw (dev, TX_FREE_BUFFER_COUNT_OFF),
                    rd_regw (dev, RX_FREE_BUFFER_COUNT_OFF),
                    dev->noof_spare_buffers);

  if (!left--)
    return sprintf (page,
                    "cps remaining: TX %u, RX %u\n",
                    dev->tx_avail, dev->rx_avail);

  return 0;
}
static const struct atmdev_ops hrz_ops = {
  NULL,          // no hrz_dev_close
  hrz_open,
  hrz_close,
  NULL,          // no hrz_ioctl
  NULL,          // hrz_getsockopt,
  NULL,          // hrz_setsockopt,
  hrz_send,
  hrz_sg_send,
  NULL,          // no send_oam - not in fact used yet
  NULL,          // no hrz_phy_put - not needed in this driver
  NULL,          // no hrz_phy_get - not needed in this driver
  NULL,          // no feedback - feedback to the driver!
  NULL,          // no hrz_change_qos
  NULL,          // no free_rx_skb
  hrz_proc_read
};
static int __init hrz_probe (void) {
  struct pci_dev * pci_dev;
  int devs = 0;

  PRINTD (DBG_FLOW, "hrz_probe");

  pci_dev = NULL;
  while ((pci_dev = pci_find_device
          (PCI_VENDOR_ID_MADGE, PCI_DEVICE_ID_MADGE_HORIZON, pci_dev)
          )) {
    hrz_dev * dev;

    // adapter slot free, read resources from PCI configuration space
    u32 iobase = pci_dev->resource[0].start;
    u32 * membase = bus_to_virt (pci_dev->resource[1].start);
    u8 irq = pci_dev->irq;
    u8 lat;

    if (check_region (iobase, HRZ_IO_EXTENT)) {
      PRINTD (DBG_WARN, "IO range already in use");
      continue;
    }

    dev = kmalloc (sizeof(hrz_dev), GFP_KERNEL);
    if (!dev) {
      // perhaps we should be nice: deregister all adapters and abort?
      PRINTD (DBG_ERR, "out of memory");
      continue;
    }

    memset (dev, 0, sizeof(hrz_dev));

    // grab IRQ and install handler - move this someplace more sensible
    if (request_irq (irq, interrupt_handler,
                     SA_SHIRQ, /* irqflags guess */
                     DEV_LABEL, /* name guess */
                     dev)) {
      PRINTD (DBG_WARN, "request IRQ failed!");
      // free_irq is at "endif"
      kfree (dev);
      continue;
    }

    PRINTD (DBG_INFO, "found Madge ATM adapter (hrz) at: IO %x, IRQ %u, MEM %p",
            iobase, irq, membase);

    dev->atm_dev = atm_dev_register (DEV_LABEL, &hrz_ops, -1, 0);
    if (!(dev->atm_dev)) {
      PRINTD (DBG_ERR, "failed to register Madge ATM adapter");
    } else {
      unsigned int i;

      PRINTD (DBG_INFO, "registered Madge ATM adapter (no. %d) (%p) at %p",
              dev->atm_dev->number, dev, dev->atm_dev);
      dev->atm_dev->dev_data = (void *) dev;
      dev->pci_dev = pci_dev;

      /* XXX DEV_LABEL is a guess */
      request_region (iobase, HRZ_IO_EXTENT, DEV_LABEL);

      // enable bus master accesses
      pci_set_master (pci_dev);

      // frobnicate latency (upwards, usually)
      pci_read_config_byte (pci_dev, PCI_LATENCY_TIMER, &lat);
      if (pci_lat) {
        PRINTD (DBG_INFO, "%s PCI latency timer from %hu to %hu",
                "changing", lat, pci_lat);
        pci_write_config_byte (pci_dev, PCI_LATENCY_TIMER, pci_lat);
      } else if (lat < MIN_PCI_LATENCY) {
        PRINTK (KERN_INFO, "%s PCI latency timer from %hu to %hu",
                "increasing", lat, MIN_PCI_LATENCY);
        pci_write_config_byte (pci_dev, PCI_LATENCY_TIMER, MIN_PCI_LATENCY);
      }

      dev->iobase = iobase;
      dev->irq = irq;
      dev->membase = membase;

      dev->rx_q_entry = dev->rx_q_reset = &memmap->rx_q_entries[0];
      dev->rx_q_wrap = &memmap->rx_q_entries[RX_CHANS-1];

      // these next three are performance hacks
      dev->last_vc = -1;

      dev->tx_cell_count = 0;
      dev->rx_cell_count = 0;
      dev->hec_error_count = 0;
      dev->unassigned_cell_count = 0;

      dev->noof_spare_buffers = 0;

      for (i = 0; i < TX_CHANS; ++i)
        dev->tx_channel_record[i] = -1;

      // Allocate cell rates and remember ASIC version
      // Fibre: ATM_OC3_PCR = 155520000/8/270*260/53 - 29/53
      // Copper: (WRONG) we want 6 into the above, close to 25Mb/s
      // Copper: (plagiarise!) 25600000/8/270*260/53 - n/53

      if (rd_regl (dev, CONTROL_0_REG) & ATM_LAYER_STATUS) {
        // to be really pedantic, this should be ATM_OC3c_PCR
        dev->tx_avail = ATM_OC3_PCR;
        dev->rx_avail = ATM_OC3_PCR;
        set_bit (ultra, &dev->flags); // NOT "|= ultra" !
      } else {
        dev->tx_avail = ((25600000/8)*26)/(27*53);
        dev->rx_avail = ((25600000/8)*26)/(27*53);
        PRINTD (DBG_WARN, "Buggy ASIC: no TX bus-mastering.");
      }
      // rate changes spinlock
      spin_lock_init (&dev->rate_lock);

      // on-board memory access spinlock; we want atomic reads and
      // writes to adapter memory (handles IRQ and SMP)
      spin_lock_init (&dev->mem_lock);

#if LINUX_VERSION_CODE >= 0x20303
      init_waitqueue_head (&dev->tx_queue);
#endif

      // vpi in 0..4, vci in 6..10
      dev->atm_dev->ci_range.vpi_bits = vpi_bits;
      dev->atm_dev->ci_range.vci_bits = 10 - vpi_bits;

      // update count and linked list
      ++devs;
      dev->prev = hrz_devs;
      hrz_devs = dev;
      continue;

      /* not currently reached */
      atm_dev_deregister (dev->atm_dev);
    } /* atm_dev_register */

    free_irq (irq, dev);
    kfree (dev);
  } /* kmalloc and while */

  return devs;
}
static void __init hrz_check_args (void) {
#ifdef DEBUG_HORIZON
  PRINTK (KERN_NOTICE, "debug bitmap is %hx", debug &= DBG_MASK);
#else
  if (debug)
    PRINTK (KERN_NOTICE, "no debug support in this image");
  debug = 0;
#endif

  if (vpi_bits > HRZ_MAX_VPI)
    PRINTK (KERN_ERR, "vpi_bits has been limited to %hu",
            vpi_bits = HRZ_MAX_VPI);

  if (max_tx_size > TX_AAL5_LIMIT)
    PRINTK (KERN_NOTICE, "max_tx_size has been limited to %hu",
            max_tx_size = TX_AAL5_LIMIT);

  if (max_rx_size > RX_AAL5_LIMIT)
    PRINTK (KERN_NOTICE, "max_rx_size has been limited to %hu",
            max_rx_size = RX_AAL5_LIMIT);

  return;
}
MODULE_AUTHOR (maintainer_string);
MODULE_DESCRIPTION (description_string);
MODULE_PARM (debug, "h");
MODULE_PARM (vpi_bits, "h");
MODULE_PARM (max_tx_size, "h");
MODULE_PARM (max_rx_size, "h");
MODULE_PARM (pci_lat, "b");
MODULE_PARM_DESC (debug, "debug bitmap, see .h file");
MODULE_PARM_DESC (vpi_bits, "number of bits (0..4) to allocate to VPIs");
MODULE_PARM_DESC (max_tx_size, "maximum size of TX AAL5 frames");
MODULE_PARM_DESC (max_rx_size, "maximum size of RX AAL5 frames");
MODULE_PARM_DESC (pci_lat, "PCI latency in bus cycles");
/********** module entry **********/

int init_module (void) {
  int devs;

  // sanity check - cast is needed since printk does not support %Zu
  if (sizeof(struct MEMMAP) != 128*1024/4) {
    PRINTK (KERN_ERR, "Fix struct MEMMAP (is %lu fakewords).",
            (unsigned long) sizeof(struct MEMMAP));
    return -ENOMEM;
  }

  show_version();

  hrz_check_args();

  devs = hrz_probe();

  if (devs) {
    init_timer (&housekeeping);
    housekeeping.function = do_housekeeping;
    // non-zero data keeps the timer rearming; cleared by cleanup_module
    housekeeping.data = 1;
    set_timer (&housekeeping, 0);
  } else {
    PRINTK (KERN_ERR, "no (usable) adapters found");
  }

  return devs ? 0 : -ENODEV;
}
,"cleanup_module"); 3002 housekeeping
.data
=0; 3003 del_timer(&housekeeping
); 3007 hrz_devs
= dev
->prev
; 3009 PRINTD(DBG_INFO
,"closing %p (atm_dev = %p)", dev
, dev
->atm_dev
); 3011 atm_dev_deregister(dev
->atm_dev
); 3012 free_irq(dev
->irq
, dev
); 3013 release_region(dev
->iobase
, HRZ_IO_EXTENT
); 3022 /********** monolithic entry **********/ 3024 int __init
/********** monolithic entry **********/

int __init hrz_detect (void) {
  int devs;

  // sanity check - cast is needed since printk does not support %Zu
  if (sizeof(struct MEMMAP) != 128*1024/4) {
    PRINTK (KERN_ERR, "Fix struct MEMMAP (is %lu fakewords).",
            (unsigned long) sizeof(struct MEMMAP));
    return 0;
  }

  show_version();

  // what about command line arguments?

  devs = hrz_probe();

  if (devs) {
    init_timer (&housekeeping);
    housekeeping.function = do_housekeeping;
    // non-zero data keeps the timer rearming itself
    housekeeping.data = 1;
    set_timer (&housekeeping, 0);
  } else {
    PRINTK (KERN_ERR, "no (usable) adapters found");