Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
-without having to worry about caching effects.
+without having to worry about caching effects. (You may however need
+to make sure to flush the processor's write buffers before telling
+devices to read that memory.)
This routine allocates a region of <size> bytes of consistent memory.
It also returns a <dma_handle> which may be cast to an unsigned
could not be created and the driver should take appropriate action (e.g.
reduce current DMA mapping usage or delay and try again later).
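
As an illustration only (BUF_SIZE and the error handling label here are
made-up names), a driver might use the dma_* flavour roughly like this:

	dma_addr_t dma_handle;
	void *cpu_addr;

	cpu_addr = dma_alloc_coherent(dev, BUF_SIZE, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;	/* or reduce usage / retry later, as above */

	/* the CPU uses cpu_addr; the device is told about dma_handle */
	...
	dma_free_coherent(dev, BUF_SIZE, cpu_addr, dma_handle);
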
-int
-dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
- enum dma_data_direction direction)
-int
-pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
- int nents, int direction)
+ int
+ dma_map_sg(struct device *dev, struct scatterlist *sg,
+ int nents, enum dma_data_direction direction)
+ int
+ pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
+ int nents, int direction)
Maps a scatter gather list from the block layer.
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.
-void
-dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
- enum dma_data_direction direction)
-void
-pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
- int nents, int direction)
+With scatterlists, you use the resulting mapping like this:
+
+ int i, count = dma_map_sg(dev, sglist, nents, direction);
+ struct scatterlist *sg;
+
+ for (i = 0, sg = sglist; i < count; i++, sg++) {
+ hw_address[i] = sg_dma_address(sg);
+ hw_len[i] = sg_dma_len(sg);
+ }
+
+where nents is the number of entries in the sglist.
+
+The implementation is free to merge several consecutive sglist entries
+into one (e.g. with an IOMMU, or if several pages just happen to be
+physically contiguous) and returns the actual number of sg entries it
+mapped them to. On failure, 0 is returned.
+
+Then you should loop count times (note: this can be less than nents times)
+and use sg_dma_address() and sg_dma_len() macros where you previously
+accessed sg->address and sg->length as shown above.
+
+ void
+ dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+ int nhwentries, enum dma_data_direction direction)
+ void
+ pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
+ int nents, int direction)
unmap the previously mapped scatter/gather list. All the parameters
must be the same as those passed in to the scatter/gather mapping
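
For example, to unmap the scatterlist mapped in the example above:

	dma_unmap_sg(dev, sglist, nents, direction);

Note that you pass the original nents here, not the 'count' value that
dma_map_sg() returned.
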
something like __va(). [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]
-This rule also means that you may not use kernel image addresses
-(ie. items in the kernel's data/text/bss segment, or your driver's)
-nor may you use kernel stack addresses for DMA. Both of these items
-might be mapped somewhere entirely different than the rest of physical
-memory.
+This rule also means that you may use neither kernel image addresses
+(items in data/text/bss segments), nor module image addresses, nor
+stack addresses for DMA. These could all be mapped somewhere entirely
+different than the rest of physical memory. Even if those classes of
+memory could physically work with DMA, you'd need to ensure the I/O
+buffers were cacheline-aligned. Without that, you'd see cacheline
+sharing problems (data corruption) on CPUs with DMA-incoherent caches.
+(The CPU could write to one word, DMA would write to a different one
+in the same cache line, and one of them could be overwritten.)
Also, this means that you cannot take the return of a kmap()
call and DMA to/from that. This is similar to vmalloc().
in order to get correct behavior on all platforms.
+ Also, on some platforms your driver may need to flush CPU write
+ buffers in much the same way as it needs to flush write buffers
+ found in PCI bridges (such as by reading a register's value
+ after writing it).
+
- Streaming DMA mappings which are usually mapped for one DMA transfer,
unmapped right after it (unless you use pci_dma_sync_* below) and for which
hardware can optimize for sequential accesses.
Neither type of DMA mapping has alignment restrictions that come
from PCI, although some devices may have such restrictions.
+Also, systems with caches that aren't DMA-coherent will work better
+when the underlying buffers don't share cache lines with other data.
+
Using Consistent DMA mappings.
teletext adapters)
It currently supports the following devices:
- * Philips adapter
- * home brew teletext adapter
- * Velleman K8000 adapter
- * ELV adapter
- * Analog Devices evaluation boards (ADM1025, ADM1030, ADM1031, ADM1032)
- * Barco LPT->DVI (K5800236) adapter
+ * (type=0) Philips adapter
+ * (type=1) home brew teletext adapter
+ * (type=2) Velleman K8000 adapter
+ * (type=3) ELV adapter
+ * (type=4) Analog Devices ADM1032 evaluation board
+ * (type=5) Analog Devices evaluation boards: ADM1025, ADM1030, ADM1031
+ * (type=6) Barco LPT->DVI (K5800236) adapter
These devices use different pinout configurations, so you have to tell
the driver what you have, using the type module parameter. There is no
way to autodetect the devices. Support for different pinout configurations
can be easily added when needed.
+Earlier kernels defaulted to type=0 (Philips). But now, if the type
+parameter is missing, the driver will simply fail to initialize.
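+
+For example, with a Velleman K8000 adapter you would load the driver
+with something like "modprobe i2c-parport type=2", following the type
+list above.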
+
Building your own adapter
-------------------------
--- /dev/null
+
+This work on sync patches is based on initial patches from
+Krisztian <hidden@balabit.hu> and others, with additional patches
+from Jamal <hadi@cyberus.ca>.
+
+The end goal for syncing is to be able to insert attributes and
+generate events so that an SA can be safely moved from one machine
+to another for HA purposes.
+The idea is to synchronize the SA so that the takeover machine can
+process the SA as accurately as possible if it has access to it.
+
+We already have the ability to generate SA add/del/upd events.
+These patches add the ability to sync the lifetime byte counter (to
+ensure proper decay of SAs) and the replay counters (to avoid replay
+attacks), with as little loss as possible at failover time.
+This way a backup stays as closely up to date as an active member.
+
+Because the above items change with every packet the SA receives,
+a large number of events could be generated.
+For this reason, we also add a Nagle-like algorithm to restrict
+the events, i.e. we set thresholds to say "let me know if the replay
+sequence threshold is reached or 10 secs have passed".
+These thresholds are set system-wide via sysctls and can also be
+updated per SA.
+
+The identified items that need to be synchronized are:
+- the lifetime byte counter
+(note that the lifetime time limit is not important if you assume the
+failover machine is known ahead of time, since the decay of the time
+countdown is not driven by packet arrival)
+- the replay sequence number for both inbound and outbound
+
+1) Message Structure
+----------------------
+
+nlmsghdr:aevent_id:optional-TLVs.
+
+The netlink message types are:
+
+XFRM_MSG_NEWAE and XFRM_MSG_GETAE.
+
+An XFRM_MSG_GETAE message does not have TLVs.
+An XFRM_MSG_NEWAE message will have at least two TLVs (as
+discussed further below).
+
+aevent_id structure looks like:
+
+ struct xfrm_aevent_id {
+ struct xfrm_usersa_id sa_id;
+ __u32 flags;
+ };
+
+xfrm_usersa_id in this message layout identifies the SA.
+
+flags are used to indicate different things. The possible
+flags are:
+ XFRM_AE_RTHR=1, /* replay threshold*/
+ XFRM_AE_RVAL=2, /* replay value */
+ XFRM_AE_LVAL=4, /* lifetime value */
+ XFRM_AE_ETHR=8, /* expiry timer threshold */
+ XFRM_AE_CR=16, /* Event cause is replay update */
+ XFRM_AE_CE=32, /* Event cause is timer expiry */
+ XFRM_AE_CU=64, /* Event cause is policy update */
+
+How these flags are used depends on the direction of the
+message (kernel<->user) as well as the cause (config, query or event).
+This is described below for the different messages.
+
+The pid is set appropriately in netlink to indicate direction
+(0 when going to the kernel, and pid = process id of the event
+creator when going from kernel to user space).
+
+A program needs to subscribe to multicast group XFRMNLGRP_AEVENTS
+to get notified of these events.
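+
+A minimal sketch of such a subscription (error handling omitted; this
+uses the legacy nl_groups bitmask, which is fine for the low-numbered
+xfrm groups):
+
+	#include <sys/socket.h>
+	#include <linux/netlink.h>
+	#include <linux/xfrm.h>
+
+	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_XFRM);
+	struct sockaddr_nl addr = {
+		.nl_family = AF_NETLINK,
+		.nl_groups = 1 << (XFRMNLGRP_AEVENTS - 1),
+	};
+
+	bind(fd, (struct sockaddr *) &addr, sizeof(addr));
+	/* XFRM_MSG_NEWAE notifications can now be read from fd */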
+
+2) TLVS reflect the different parameters:
+-----------------------------------------
+
+a) byte value (XFRMA_LTIME_VAL)
+This TLV carries the running/current counter for byte lifetime since
+last event.
+
+b) replay value (XFRMA_REPLAY_VAL)
+This TLV carries the running/current counter for replay sequence since
+last event.
+
+c) replay threshold (XFRMA_REPLAY_THRESH)
+This TLV carries the threshold being used by the kernel to trigger events
+when the replay sequence is exceeded.
+
+d) expiry timer (XFRMA_ETIMER_THRESH)
+This is a timer value in milliseconds which is used as the Nagle
+value to rate limit the events.
+
+3) Default configurations for the parameters:
+----------------------------------------------
+
+By default these events should be turned off unless there is
+at least one listener registered to listen to the multicast
+group XFRMNLGRP_AEVENTS.
+
+Programs installing SAs will need to specify the two thresholds; however,
+in order not to change existing applications such as racoon,
+we also provide default threshold values for these parameters
+in case they are not specified.
+
+The two sysctl/proc entries are:
+a) /proc/sys/net/core/sysctl_xfrm_aevent_etime
+used to provide the default value for XFRMA_ETIMER_THRESH, in
+incremental units of time of 100ms. The default is 10 (1 second).
+
+b) /proc/sys/net/core/sysctl_xfrm_aevent_rseqth
+used to provide the default value for the XFRMA_REPLAY_THRESH parameter,
+as an incremental packet count. The default is two packets.
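+
+For example, to rate limit timer-driven events to roughly one every
+5 seconds by default, one could do something like
+"echo 50 > /proc/sys/net/core/sysctl_xfrm_aevent_etime" (50 units of 100ms).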
+
+4) Message types
+----------------
+
+a) XFRM_MSG_GETAE issued by user-->kernel.
+XFRM_MSG_GETAE does not carry any TLVs.
+The response is a XFRM_MSG_NEWAE which is formatted based on what
+XFRM_MSG_GETAE queried for.
+The response will always have XFRMA_LTIME_VAL and XFRMA_REPLAY_VAL TLVs.
+* if the XFRM_AE_RTHR flag is set, then XFRMA_REPLAY_THRESH is also retrieved
+* if the XFRM_AE_ETHR flag is set, then XFRMA_ETIMER_THRESH is also retrieved
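+
+A sketch of building such a query in user space (socket setup as in the
+subscription sketch above, plus the usual libc headers; 'dst', 'spi' and
+the address family are placeholders identifying the SA, and error
+handling is omitted):
+
+	struct {
+		struct nlmsghdr		n;
+		struct xfrm_aevent_id	id;
+	} req;
+	struct sockaddr_nl kernel = { .nl_family = AF_NETLINK };
+
+	memset(&req, 0, sizeof(req));
+	req.n.nlmsg_len   = NLMSG_LENGTH(sizeof(req.id));
+	req.n.nlmsg_type  = XFRM_MSG_GETAE;
+	req.n.nlmsg_flags = NLM_F_REQUEST;
+
+	req.id.sa_id.spi    = spi;		/* network byte order */
+	req.id.sa_id.family = AF_INET;
+	req.id.sa_id.proto  = IPPROTO_ESP;
+	memcpy(&req.id.sa_id.daddr, &dst, sizeof(dst));
+	/* ask for the two thresholds as well in the XFRM_MSG_NEWAE reply */
+	req.id.flags = XFRM_AE_RTHR | XFRM_AE_ETHR;
+
+	sendto(fd, &req, req.n.nlmsg_len, 0,
+	       (struct sockaddr *) &kernel, sizeof(kernel));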
+
+b) XFRM_MSG_NEWAE is issued either by user space to configure an SA
+or by the kernel to announce events or respond to an XFRM_MSG_GETAE.
+
+i) user --> kernel to configure a specific SA.
+Any of the values or threshold parameters can be updated by passing the
+appropriate TLV.
+A response is issued back to the sender in user space to indicate success
+or failure.
+On success, an event with XFRM_MSG_NEWAE is additionally issued to any
+listeners, as described in iii).
+
+ii) kernel->user direction as a response to XFRM_MSG_GETAE
+The response will always have XFRMA_LTIME_VAL and XFRMA_REPLAY_VAL TLVs.
+The threshold TLVs will be included if explicitly requested in
+the XFRM_MSG_GETAE message.
+
+iii) kernel->user to report an event if someone sets any values or
+thresholds for an SA using XFRM_MSG_NEWAE (as described in i) above).
+In such a case the XFRM_AE_CU flag is set to inform the user that
+the change happened as a result of an update.
+The message will always have XFRMA_LTIME_VAL and XFRMA_REPLAY_VAL TLVs.
+
+iv) kernel->user to report an event when the replay threshold is
+exceeded or a timeout occurs.
+In such a case either XFRM_AE_CR (replay threshold exceeded) or
+XFRM_AE_CE (timeout occurred) is set to inform the user what happened.
+Note the two flags are mutually exclusive.
+The message will always have XFRMA_LTIME_VAL and XFRMA_REPLAY_VAL TLVs.
+
+Exceptions to threshold settings
+--------------------------------
+
+If an SA is hit by traffic in bursts such that there is a period
+where the timer threshold expires with no packets seen, then a
+slightly odd behavior results:
+the first packet to arrive after a timer expiry will immediately
+trigger a timeout aevent; i.e. we don't wait for a timeout period or
+a packet threshold to be reached. This is done for simplicity and
+efficiency reasons.
+
+-JHS
L: lm-sensors@lm-sensors.org
S: Odd Fixes
-WAN ROUTER & SANGOMA WANPIPE DRIVERS & API (X.25, FRAME RELAY, PPP, CISCO HDLC)
-P: Nenad Corbic
-M: ncorbic@sangoma.com
-M: dm@sangoma.com
-W: http://www.sangoma.com
-S: Supported
-
WATCHDOG DEVICE DRIVERS
P: Wim Van Sebroeck
M: wim@iguana.be
obj-y := process.o semaphore.o signal.o entry.o traps.o irq.o \
ptrace.o time.o ioport.o ldt.o setup.o i8259.o sys_i386.o \
- pci-dma.o i386_ksyms.o i387.o dmi_scan.o bootflag.o \
+ pci-dma.o i386_ksyms.o i387.o bootflag.o \
quirks.o i8237.o topology.o alternative.o
obj-y += cpu/
.long sys_get_robust_list
.long sys_splice
.long sys_sync_file_range
+ .long sys_tee /* 315 */
case PCI_DEVICE_ID_VIA_82C596:
case PCI_DEVICE_ID_VIA_82C686:
case PCI_DEVICE_ID_VIA_8231:
+ case PCI_DEVICE_ID_VIA_8233A:
case PCI_DEVICE_ID_VIA_8235:
+ case PCI_DEVICE_ID_VIA_8237:
+ case PCI_DEVICE_ID_VIA_8237_SATA:
/* FIXME: add new ones for 8233/5 */
r->name = "VIA";
r->get = pirq_via_get;
obj-y := acpi.o entry.o efi.o efi_stub.o gate-data.o fsys.o ia64_ksyms.o irq.o irq_ia64.o \
irq_lsapic.o ivt.o machvec.o pal.o patch.o process.o perfmon.o ptrace.o sal.o \
salinfo.o semaphore.o setup.o signal.o sys_ia64.o time.o traps.o unaligned.o \
- unwind.o mca.o mca_asm.o topology.o dmi_scan.o
+ unwind.o mca.o mca_asm.o topology.o
obj-$(CONFIG_IA64_BRL_EMU) += brl_emu.o
obj-$(CONFIG_IA64_GENERIC) += acpi-ext.o
obj-$(CONFIG_KPROBES) += kprobes.o jprobes.o
obj-$(CONFIG_IA64_UNCACHED_ALLOCATOR) += uncached.o
mca_recovery-y += mca_drv.o mca_drv_asm.o
-dmi_scan-y += ../../i386/kernel/dmi_scan.o
# The gate DSO image is built using a special linker script.
targets += gate.so gate-syms.o
data8 sys_set_robust_list
data8 sys_get_robust_list
data8 sys_sync_file_range // 1300
+ data8 sys_tee
.org sys_call_table + 8*NR_syscalls // guard against failures to increase NR_syscalls
*/
static void
-ia64_wait_for_slaves(int monarch)
+ia64_wait_for_slaves(int monarch, const char *type)
{
int c, wait = 0, missing = 0;
for_each_online_cpu(c) {
}
if (!missing)
goto all_in;
- printk(KERN_INFO "OS MCA slave did not rendezvous on cpu");
+ printk(KERN_INFO "OS %s slave did not rendezvous on cpu", type);
for_each_online_cpu(c) {
if (c == monarch)
continue;
return;
all_in:
- printk(KERN_INFO "All OS MCA slaves have reached rendezvous\n");
+ printk(KERN_INFO "All OS %s slaves have reached rendezvous\n", type);
return;
}
if (notify_die(DIE_MCA_MONARCH_ENTER, "MCA", regs, (long)&nd, 0, 0)
== NOTIFY_STOP)
ia64_mca_spin(__FUNCTION__);
- ia64_wait_for_slaves(cpu);
+ ia64_wait_for_slaves(cpu, "MCA");
/* Wakeup all the processors which are spinning in the rendezvous loop.
* They will leave SAL, then spin in the OS with interrupts disabled
*/
printk("Delaying for 5 seconds...\n");
udelay(5*1000000);
- ia64_wait_for_slaves(cpu);
+ ia64_wait_for_slaves(cpu, "INIT");
/* If nobody intercepts DIE_INIT_MONARCH_PROCESS then we drop through
* to default_monarch_init_process() above and just print all the
* tasks.
}
#endif /* CONFIG_SMP */
+#ifdef CONFIG_VIRTUAL_MEM_MAP
+static inline int find_next_valid_pfn_for_pgdat(pg_data_t *pgdat, int i)
+{
+ unsigned long end_address, hole_next_pfn;
+ unsigned long stop_address;
+
+ end_address = (unsigned long) &vmem_map[pgdat->node_start_pfn + i];
+ end_address = PAGE_ALIGN(end_address);
+
+ stop_address = (unsigned long) &vmem_map[
+ pgdat->node_start_pfn + pgdat->node_spanned_pages];
+
+ do {
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *pte;
+
+ pgd = pgd_offset_k(end_address);
+ if (pgd_none(*pgd)) {
+ end_address += PGDIR_SIZE;
+ continue;
+ }
+
+ pud = pud_offset(pgd, end_address);
+ if (pud_none(*pud)) {
+ end_address += PUD_SIZE;
+ continue;
+ }
+
+ pmd = pmd_offset(pud, end_address);
+ if (pmd_none(*pmd)) {
+ end_address += PMD_SIZE;
+ continue;
+ }
+
+ pte = pte_offset_kernel(pmd, end_address);
+retry_pte:
+ if (pte_none(*pte)) {
+ end_address += PAGE_SIZE;
+ pte++;
+ if ((end_address < stop_address) &&
+ (end_address != ALIGN(end_address, 1UL << PMD_SHIFT)))
+ goto retry_pte;
+ continue;
+ }
+ /* Found next valid vmem_map page */
+ break;
+ } while (end_address < stop_address);
+
+ end_address = min(end_address, stop_address);
+ end_address = end_address - (unsigned long) vmem_map + sizeof(struct page) - 1;
+ hole_next_pfn = end_address / sizeof(struct page);
+ return hole_next_pfn - pgdat->node_start_pfn;
+}
+#else
+static inline int find_next_valid_pfn_for_pgdat(pg_data_t *pgdat, int i)
+{
+ return i + 1;
+}
+#endif
+
/**
* show_mem - give short summary of memory stats
*
struct page *page;
if (pfn_valid(pgdat->node_start_pfn + i))
page = pfn_to_page(pgdat->node_start_pfn + i);
- else
+ else {
+ i = find_next_valid_pfn_for_pgdat(pgdat, i) - 1;
continue;
+ }
if (PageReserved(page))
reserved++;
else if (PageSwapCache(page))
COMPAT_SYS(ppoll)
SYSCALL(unshare)
SYSCALL(splice)
+SYSCALL(tee)
/*265*/ .long sys_timer_delete, sys_timer_create, sys_nis_syscall, sys_io_setup, sys_io_destroy
/*270*/ .long sys_io_submit, sys_io_cancel, sys_io_getevents, sys_mq_open, sys_mq_unlink
/*275*/ .long sys_mq_timedsend, sys_mq_timedreceive, sys_mq_notify, sys_mq_getsetattr, sys_waitid
-/*280*/ .long sys_ni_syscall, sys_add_key, sys_request_key, sys_keyctl, sys_openat
+/*280*/ .long sys_tee, sys_add_key, sys_request_key, sys_keyctl, sys_openat
/*285*/ .long sys_mkdirat, sys_mknodat, sys_fchownat, sys_futimesat, sys_fstatat64
/*290*/ .long sys_unlinkat, sys_renameat, sys_linkat, sys_symlinkat, sys_readlinkat
/*295*/ .long sys_fchmodat, sys_faccessat, sys_pselect6, sys_ppoll, sys_unshare
SIGN3(sys32_ioprio_set, sys_ioprio_set, %o0, %o1, %o2)
SIGN2(sys32_splice, sys_splice, %o0, %o1)
SIGN2(sys32_sync_file_range, compat_sync_file_range, %o0, %o5)
+SIGN2(sys32_tee, sys_tee, %o0, %o1)
.globl sys32_mmap2
sys32_mmap2:
.word sys_timer_delete, compat_sys_timer_create, sys_ni_syscall, compat_sys_io_setup, sys_io_destroy
/*270*/ .word sys32_io_submit, sys_io_cancel, compat_sys_io_getevents, sys32_mq_open, sys_mq_unlink
.word compat_sys_mq_timedsend, compat_sys_mq_timedreceive, compat_sys_mq_notify, compat_sys_mq_getsetattr, compat_sys_waitid
-/*280*/ .word sys_ni_syscall, sys_add_key, sys_request_key, sys_keyctl, compat_sys_openat
+/*280*/ .word sys32_tee, sys_add_key, sys_request_key, sys_keyctl, compat_sys_openat
.word sys_mkdirat, sys_mknodat, sys_fchownat, compat_sys_futimesat, compat_sys_fstatat64
/*285*/ .word sys_unlinkat, sys_renameat, sys_linkat, sys_symlinkat, sys_readlinkat
.word sys_fchmodat, sys_faccessat, compat_sys_pselect6, compat_sys_ppoll, sys_unshare
.word sys_timer_delete, sys_timer_create, sys_ni_syscall, sys_io_setup, sys_io_destroy
/*270*/ .word sys_io_submit, sys_io_cancel, sys_io_getevents, sys_mq_open, sys_mq_unlink
.word sys_mq_timedsend, sys_mq_timedreceive, sys_mq_notify, sys_mq_getsetattr, sys_waitid
-/*280*/ .word sys_nis_syscall, sys_add_key, sys_request_key, sys_keyctl, sys_openat
+/*280*/ .word sys_tee, sys_add_key, sys_request_key, sys_keyctl, sys_openat
.word sys_mkdirat, sys_mknodat, sys_fchownat, sys_futimesat, sys_fstatat64
/*285*/ .word sys_unlinkat, sys_renameat, sys_linkat, sys_symlinkat, sys_readlinkat
.word sys_fchmodat, sys_faccessat, sys_pselect6, sys_ppoll, sys_unshare
ptrace.o time.o ioport.o ldt.o setup.o i8259.o sys_x86_64.o \
x8664_ksyms.o i387.o syscall.o vsyscall.o \
setup64.o bootflag.o e820.o reboot.o quirks.o i8237.o \
- dmi_scan.o pci-dma.o pci-nommu.o
+ pci-dma.o pci-nommu.o
obj-$(CONFIG_X86_MCE) += mce.o
obj-$(CONFIG_X86_MCE_INTEL) += mce_intel.o
quirks-y += ../../i386/kernel/quirks.o
i8237-y += ../../i386/kernel/i8237.o
msr-$(subst m,y,$(CONFIG_X86_MSR)) += ../../i386/kernel/msr.o
-dmi_scan-y += ../../i386/kernel/dmi_scan.o
-
return ret;
}
+/**
+ * sg_scsi_ioctl -- handle deprecated SCSI_IOCTL_SEND_COMMAND ioctl
+ * @file: file this ioctl operates on (optional)
+ * @q: request queue to send scsi commands down
+ * @disk: gendisk to operate on (optional)
+ * @sic: userspace structure describing the command to perform
+ *
+ * Send down the scsi command described by @sic to the device below
+ * the request queue @q. If @file is non-NULL it's used to perform
+ * fine-grained permission checks that allow users to send down
+ * non-destructive SCSI commands. If the caller has a struct gendisk
+ * available it should be passed in as @disk to allow the low level
+ * driver to use the information contained in it. A non-NULL @disk
+ * is only allowed if the caller knows that the low level driver doesn't
+ * need it (e.g. in the scsi subsystem).
+ *
+ * Notes:
+ * - This interface is deprecated - users should use the SG_IO
+ * interface instead, as this is a more flexible approach to
+ * performing SCSI commands on a device.
+ * - The SCSI command length is determined by examining the 1st byte
+ * of the given command. There is no way to override this.
+ * - Data transfers are limited to PAGE_SIZE
+ * - The length (x + y) must be at least OMAX_SB_LEN bytes long to
+ * accommodate the sense buffer when an error occurs.
+ * The sense buffer is truncated to OMAX_SB_LEN (16) bytes so that
+ * old code will not be surprised.
+ * - If a Unix error occurs (e.g. ENOMEM) then the user will receive
+ * a negative return and the Unix error code in 'errno'.
+ * If the SCSI command succeeds then 0 is returned.
+ * Positive numbers returned are the compacted SCSI error codes (4
+ * bytes in one int) where the lowest byte is the SCSI status.
+ */
#define OMAX_SB_LEN 16 /* For backward compatibility */
-
-static int sg_scsi_ioctl(struct file *file, request_queue_t *q,
- struct gendisk *bd_disk, Scsi_Ioctl_Command __user *sic)
+int sg_scsi_ioctl(struct file *file, struct request_queue *q,
+ struct gendisk *disk, struct scsi_ioctl_command __user *sic)
{
struct request *rq;
int err;
unsigned int in_len, out_len, bytes, opcode, cmdlen;
char *buffer = NULL, sense[SCSI_SENSE_BUFFERSIZE];
+ if (!sic)
+ return -EINVAL;
+
/*
	 * get in and out lengths, verify they don't exceed a page worth of data
*/
if (copy_from_user(rq->cmd, sic->data, cmdlen))
goto error;
- if (copy_from_user(buffer, sic->data + cmdlen, in_len))
+ if (in_len && copy_from_user(buffer, sic->data + cmdlen, in_len))
goto error;
err = verify_command(file, rq->cmd);
if (err)
goto error;
+	/* default; possibly overridden later */
+ rq->retries = 5;
+
switch (opcode) {
- case SEND_DIAGNOSTIC:
- case FORMAT_UNIT:
- rq->timeout = FORMAT_UNIT_TIMEOUT;
- break;
- case START_STOP:
- rq->timeout = START_STOP_TIMEOUT;
- break;
- case MOVE_MEDIUM:
- rq->timeout = MOVE_MEDIUM_TIMEOUT;
- break;
- case READ_ELEMENT_STATUS:
- rq->timeout = READ_ELEMENT_STATUS_TIMEOUT;
- break;
- case READ_DEFECT_DATA:
- rq->timeout = READ_DEFECT_DATA_TIMEOUT;
- break;
- default:
- rq->timeout = BLK_DEFAULT_TIMEOUT;
- break;
+ case SEND_DIAGNOSTIC:
+ case FORMAT_UNIT:
+ rq->timeout = FORMAT_UNIT_TIMEOUT;
+ rq->retries = 1;
+ break;
+ case START_STOP:
+ rq->timeout = START_STOP_TIMEOUT;
+ break;
+ case MOVE_MEDIUM:
+ rq->timeout = MOVE_MEDIUM_TIMEOUT;
+ break;
+ case READ_ELEMENT_STATUS:
+ rq->timeout = READ_ELEMENT_STATUS_TIMEOUT;
+ break;
+ case READ_DEFECT_DATA:
+ rq->timeout = READ_DEFECT_DATA_TIMEOUT;
+ rq->retries = 1;
+ break;
+ default:
+ rq->timeout = BLK_DEFAULT_TIMEOUT;
+ break;
+ }
+
+ if (bytes && blk_rq_map_kern(q, rq, buffer, bytes, __GFP_WAIT)) {
+ err = DRIVER_ERROR << 24;
+ goto out;
}
memset(sense, 0, sizeof(sense));
rq->sense = sense;
rq->sense_len = 0;
-
- rq->data = buffer;
- rq->data_len = bytes;
rq->flags |= REQ_BLOCK_PC;
- rq->retries = 0;
- blk_execute_rq(q, bd_disk, rq, 0);
+ blk_execute_rq(q, disk, rq, 0);
+
+out:
err = rq->errors & 0xff; /* only 8 bit SCSI status */
if (err) {
if (rq->sense_len && rq->sense) {
blk_put_request(rq);
return err;
}
-
+EXPORT_SYMBOL_GPL(sg_scsi_ioctl);
/* Send basic block requests */
static int __blk_send_generic(request_queue_t *q, struct gendisk *bd_disk, int cmd, int data)
up(&dev->sem);
if (dev->parent)
up(&dev->parent->sem);
+
+ if (err > 0) /* success */
+ err = count;
+ else if (err == 0) /* driver didn't accept device */
+ err = -ENODEV;
}
put_device(dev);
put_bus(bus);
kobject_uevent(&class_dev->kobj, KOBJ_ADD);
/* notify any interfaces this device is now here */
- if (parent_class) {
- down(&parent_class->sem);
- list_add_tail(&class_dev->node, &parent_class->children);
- list_for_each_entry(class_intf, &parent_class->interfaces, node)
- if (class_intf->add)
- class_intf->add(class_dev, class_intf);
- up(&parent_class->sem);
+ down(&parent_class->sem);
+ list_add_tail(&class_dev->node, &parent_class->children);
+ list_for_each_entry(class_intf, &parent_class->interfaces, node) {
+ if (class_intf->add)
+ class_intf->add(class_dev, class_intf);
}
+ up(&parent_class->sem);
register_done:
if (error) {
sysfs_remove_link(&dev->kobj, "driver");
klist_remove(&dev->knode_driver);
- if (dev->bus->remove)
+ if (dev->bus && dev->bus->remove)
dev->bus->remove(dev);
else if (drv->remove)
drv->remove(dev);
#include <linux/vt_kern.h>
#include <linux/device.h>
+#include <linux/kallsyms.h>
+#include <linux/pm.h>
#include "../base.h"
#include "power.h"
if (dev->bus && dev->bus->suspend && !dev->power.power_state.event) {
dev_dbg(dev, "suspending\n");
error = dev->bus->suspend(dev, state);
+ suspend_report_result(dev->bus->suspend, error);
}
up(&dev->sem);
return error;
EXPORT_SYMBOL_GPL(device_power_down);
+void __suspend_report_result(const char *function, void *fn, int ret)
+{
+ if (ret) {
+ printk(KERN_ERR "%s(): ", function);
+ print_fn_descriptor_symbol("%s() returns ", (unsigned long)fn);
+ printk("%d\n", ret);
+ }
+}
+EXPORT_SYMBOL_GPL(__suspend_report_result);
{.mask = 0x00000001, .type = 0}
};
+/* This function does the same thing as mask_memory() for this chipset... */
+static inline unsigned long efficeon_mask_memory(unsigned long addr)
+{
+ return addr | 0x00000001;
+}
+
static struct aper_size_info_lvl2 efficeon_generic_sizes[4] =
{
{256, 65536, 0},
last_page = NULL;
for (i = 0; i < count; i++) {
int index = pg_start + i;
- unsigned long insert = mem->memory[i];
+ unsigned long insert = efficeon_mask_memory(mem->memory[i]);
page = (unsigned int *) efficeon_private.l1_table[index >> 10];
printk(KERN_NOTICE "SAK: killed process %d"
" (%s): fd#%d opened to the tty\n",
p->pid, p->comm, i);
- send_sig(SIGKILL, p, 1);
+ force_sig(SIGKILL, p);
break;
}
}
#
# Makefile for the linux kernel.
#
-obj-$(CONFIG_EDD) += edd.o
+obj-$(CONFIG_DMI) += dmi_scan.o
+obj-$(CONFIG_EDD) += edd.o
obj-$(CONFIG_EFI_VARS) += efivars.o
obj-$(CONFIG_EFI_PCDP) += pcdp.o
obj-$(CONFIG_DELL_RBU) += dell_rbu.o
else
printk(KERN_ERR "dmi_string: out of memory.\n");
}
- }
+ }
return str;
}
{
u8 *buf, *data;
int i = 0;
-
+
buf = dmi_ioremap(base, len);
if (buf == NULL)
return -1;
data = buf;
/*
- * Stop when we see all the items the table claimed to have
- * OR we run off the end of the table (also happens)
- */
+ * Stop when we see all the items the table claimed to have
+ * OR we run off the end of the table (also happens)
+ */
while ((i < num) && (data - buf + sizeof(struct dmi_header)) <= len) {
struct dmi_header *dm = (struct dmi_header *)data;
/*
{
u8 sum = 0;
int a;
-
+
for (a = 0; a < 15; a++)
sum += buf[a];
bank. */
if (kind < 0) {
if (w83792d_read_value(client, W83792D_REG_CONFIG) & 0x80) {
- dev_warn(dev, "Detection failed at step 3\n");
+ dev_dbg(dev, "Detection failed at step 1\n");
goto ERROR1;
}
val1 = w83792d_read_value(client, W83792D_REG_BANK);
if (!(val1 & 0x07)) { /* is Bank0 */
if (((!(val1 & 0x80)) && (val2 != 0xa3)) ||
((val1 & 0x80) && (val2 != 0x5c))) {
+ dev_dbg(dev, "Detection failed at step 2\n");
goto ERROR1;
}
}
should match */
if (w83792d_read_value(client,
W83792D_REG_I2C_ADDR) != address) {
- dev_warn(dev, "Detection failed at step 5\n");
+ dev_dbg(dev, "Detection failed at step 3\n");
goto ERROR1;
}
}
This driver is a replacement for (and was inspired by) an older
driver named i2c-philips-par. The new driver supports more devices,
and makes it easier to add support for new devices.
-
+
+ An adapter type parameter is now mandatory. Please read the file
+ Documentation/i2c/busses/i2c-parport for details.
+
Another driver exists, named i2c-parport-light, which doesn't depend
on the parport driver. This is meant for embedded systems. Don't say
Y here if you intend to say Y or M there.
static int __init i2c_parport_init(void)
{
- if (type < 0 || type >= ARRAY_SIZE(adapter_parm)) {
+ if (type < 0) {
+ printk(KERN_WARNING "i2c-parport: adapter type unspecified\n");
+ return -ENODEV;
+ }
+
+ if (type >= ARRAY_SIZE(adapter_parm)) {
printk(KERN_WARNING "i2c-parport: invalid type (%d)\n", type);
- type = 0;
+ return -ENODEV;
}
if (base == 0) {
static int __init i2c_parport_init(void)
{
- if (type < 0 || type >= ARRAY_SIZE(adapter_parm)) {
+ if (type < 0) {
+ printk(KERN_WARNING "i2c-parport: adapter type unspecified\n");
+ return -ENODEV;
+ }
+
+ if (type >= ARRAY_SIZE(adapter_parm)) {
printk(KERN_WARNING "i2c-parport: invalid type (%d)\n", type);
- type = 0;
+ return -ENODEV;
}
return parport_register_driver(&i2c_parport_driver);
},
};
-static int type;
+static int type = -1;
module_param(type, int, 0);
MODULE_PARM_DESC(type,
"Type of adapter:\n"
#include <linux/init.h>
#include <asm/io.h>
-/*
- HISTORY:
- 2003-05-11 1.0.0 Updated from lm_sensors project for kernel 2.5
- (was i2c-sis645.c from lm_sensors 2.7.0)
-*/
-#define SIS96x_VERSION "1.0.0"
-
/* base address register in PCI config space */
#define SIS96x_BAR 0x04
static int __init i2c_sis96x_init(void)
{
- printk(KERN_INFO "i2c-sis96x version %s\n", SIS96x_VERSION);
return pci_register_driver(&sis96x_driver);
}
#include <linux/rtc.h>
#include <linux/bcd.h>
#include <linux/mutex.h>
+#include <linux/workqueue.h>
#define DS1374_REG_TOD0 0x00
#define DS1374_REG_TOD1 0x01
return t1;
}
-static void ds1374_set_tlet(ulong arg)
+static void ds1374_set_work(void *arg)
{
ulong t1, t2;
int limit = 10; /* arbitrary retry limit */
static ulong new_time;
-static DECLARE_TASKLET_DISABLED(ds1374_tasklet, ds1374_set_tlet,
- (ulong) & new_time);
+static struct workqueue_struct *ds1374_workqueue;
+
+static DECLARE_WORK(ds1374_work, ds1374_set_work, &new_time);
int ds1374_set_rtc_time(ulong nowtime)
{
new_time = nowtime;
if (in_interrupt())
- tasklet_schedule(&ds1374_tasklet);
+ queue_work(ds1374_workqueue, &ds1374_work);
else
- ds1374_set_tlet((ulong) & new_time);
+ ds1374_set_work(&new_time);
return 0;
}
client->adapter = adap;
client->driver = &ds1374_driver;
+ ds1374_workqueue = create_singlethread_workqueue("ds1374");
+
if ((rc = i2c_attach_client(client)) != 0) {
kfree(client);
return rc;
if ((rc = i2c_detach_client(client)) == 0) {
kfree(i2c_get_clientdata(client));
- tasklet_kill(&ds1374_tasklet);
+ destroy_workqueue(ds1374_workqueue);
}
return rc;
}
#include <linux/rtc.h>
#include <linux/bcd.h>
#include <linux/mutex.h>
+#include <linux/workqueue.h>
#include <asm/time.h>
#include <asm/rtc.h>
}
static void
-m41t00_set_tlet(ulong arg)
+m41t00_set(void *arg)
{
struct rtc_time tm;
ulong nowtime = *(ulong *)arg;
return;
}
-static ulong new_time;
-
-DECLARE_TASKLET_DISABLED(m41t00_tasklet, m41t00_set_tlet, (ulong)&new_time);
+static ulong new_time;
+static struct workqueue_struct *m41t00_wq;
+static DECLARE_WORK(m41t00_work, m41t00_set, &new_time);
int
m41t00_set_rtc_time(ulong nowtime)
new_time = nowtime;
if (in_interrupt())
- tasklet_schedule(&m41t00_tasklet);
+ queue_work(m41t00_wq, &m41t00_work);
else
- m41t00_set_tlet((ulong)&new_time);
+ m41t00_set(&new_time);
return 0;
}
return rc;
}
+ m41t00_wq = create_singlethread_workqueue("m41t00");
save_client = client;
return 0;
}
if ((rc = i2c_detach_client(client)) == 0) {
kfree(client);
- tasklet_kill(&m41t00_tasklet);
+ destroy_workqueue(m41t00_wq);
}
return rc;
}
kmalloc(sizeof *device->cache.pkey_cache *
(end_port(device) - start_port(device) + 1), GFP_KERNEL);
device->cache.gid_cache =
- kmalloc(sizeof *device->cache.pkey_cache *
+ kmalloc(sizeof *device->cache.gid_cache *
(end_port(device) - start_port(device) + 1), GFP_KERNEL);
if (!device->cache.pkey_cache || !device->cache.gid_cache) {
local = list_entry(mad_agent_priv->local_list.next,
struct ib_mad_local_private,
completion_list);
+ list_del(&local->completion_list);
spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
if (local->mad_priv) {
recv_mad_agent = local->recv_mad_agent;
&mad_send_wc);
spin_lock_irqsave(&mad_agent_priv->lock, flags);
- list_del(&local->completion_list);
atomic_dec(&mad_agent_priv->refcount);
if (!recv)
kmem_cache_free(ib_mad_cache, local->mad_priv);
#include <rdma/ib_verbs.h>
#include <rdma/ib_cache.h>
+int ib_rate_to_mult(enum ib_rate rate)
+{
+ switch (rate) {
+ case IB_RATE_2_5_GBPS: return 1;
+ case IB_RATE_5_GBPS: return 2;
+ case IB_RATE_10_GBPS: return 4;
+ case IB_RATE_20_GBPS: return 8;
+ case IB_RATE_30_GBPS: return 12;
+ case IB_RATE_40_GBPS: return 16;
+ case IB_RATE_60_GBPS: return 24;
+ case IB_RATE_80_GBPS: return 32;
+ case IB_RATE_120_GBPS: return 48;
+ default: return -1;
+ }
+}
+EXPORT_SYMBOL(ib_rate_to_mult);
+
+enum ib_rate mult_to_ib_rate(int mult)
+{
+ switch (mult) {
+ case 1: return IB_RATE_2_5_GBPS;
+ case 2: return IB_RATE_5_GBPS;
+ case 4: return IB_RATE_10_GBPS;
+ case 8: return IB_RATE_20_GBPS;
+ case 12: return IB_RATE_30_GBPS;
+ case 16: return IB_RATE_40_GBPS;
+ case 24: return IB_RATE_60_GBPS;
+ case 32: return IB_RATE_80_GBPS;
+ case 48: return IB_RATE_120_GBPS;
+ default: return IB_RATE_PORT_CURRENT;
+ }
+}
+EXPORT_SYMBOL(mult_to_ib_rate);
+
/* Protection domains */
struct ib_pd *ib_alloc_pd(struct ib_device *device)
("Tavor") and the MT25208 PCI Express HCA ("Arbel").
config INFINIBAND_MTHCA_DEBUG
- bool "Verbose debugging output"
+ bool "Verbose debugging output" if EMBEDDED
depends on INFINIBAND_MTHCA
- default n
+ default y
---help---
- This option causes the mthca driver produce a bunch of debug
- messages. Select this is you are developing the driver or
- trying to diagnose a problem.
+ This option causes debugging code to be compiled into the
+ mthca driver. The output can be turned on via the
+ debug_level module parameter (which can also be set after
+ the driver is loaded through sysfs).
-ifdef CONFIG_INFINIBAND_MTHCA_DEBUG
-EXTRA_CFLAGS += -DDEBUG
-endif
-
obj-$(CONFIG_INFINIBAND_MTHCA) += ib_mthca.o
ib_mthca-y := mthca_main.o mthca_cmd.o mthca_profile.o mthca_reset.o \
#include "mthca_dev.h"
+enum {
+ MTHCA_RATE_TAVOR_FULL = 0,
+ MTHCA_RATE_TAVOR_1X = 1,
+ MTHCA_RATE_TAVOR_4X = 2,
+ MTHCA_RATE_TAVOR_1X_DDR = 3
+};
+
+enum {
+ MTHCA_RATE_MEMFREE_FULL = 0,
+ MTHCA_RATE_MEMFREE_QUARTER = 1,
+ MTHCA_RATE_MEMFREE_EIGHTH = 2,
+ MTHCA_RATE_MEMFREE_HALF = 3
+};
+
struct mthca_av {
__be32 port_pd;
u8 reserved1;
__be32 dgid[4];
};
+static enum ib_rate memfree_rate_to_ib(u8 mthca_rate, u8 port_rate)
+{
+ switch (mthca_rate) {
+ case MTHCA_RATE_MEMFREE_EIGHTH:
+ return mult_to_ib_rate(port_rate >> 3);
+ case MTHCA_RATE_MEMFREE_QUARTER:
+ return mult_to_ib_rate(port_rate >> 2);
+ case MTHCA_RATE_MEMFREE_HALF:
+ return mult_to_ib_rate(port_rate >> 1);
+ case MTHCA_RATE_MEMFREE_FULL:
+ default:
+ return mult_to_ib_rate(port_rate);
+ }
+}
+
+static enum ib_rate tavor_rate_to_ib(u8 mthca_rate, u8 port_rate)
+{
+ switch (mthca_rate) {
+ case MTHCA_RATE_TAVOR_1X: return IB_RATE_2_5_GBPS;
+ case MTHCA_RATE_TAVOR_1X_DDR: return IB_RATE_5_GBPS;
+ case MTHCA_RATE_TAVOR_4X: return IB_RATE_10_GBPS;
+ default: return port_rate;
+ }
+}
+
+enum ib_rate mthca_rate_to_ib(struct mthca_dev *dev, u8 mthca_rate, u8 port)
+{
+ if (mthca_is_memfree(dev)) {
+ /* Handle old Arbel FW */
+ if (dev->limits.stat_rate_support == 0x3 && mthca_rate)
+ return IB_RATE_2_5_GBPS;
+
+ return memfree_rate_to_ib(mthca_rate, dev->rate[port - 1]);
+ } else
+ return tavor_rate_to_ib(mthca_rate, dev->rate[port - 1]);
+}
+
+static u8 ib_rate_to_memfree(u8 req_rate, u8 cur_rate)
+{
+ if (cur_rate <= req_rate)
+ return 0;
+
+ /*
+ * Inter-packet delay (IPD) to get from rate X down to a rate
+ * no more than Y is (X - 1) / Y.
+ */
+ switch ((cur_rate - 1) / req_rate) {
+ case 0: return MTHCA_RATE_MEMFREE_FULL;
+ case 1: return MTHCA_RATE_MEMFREE_HALF;
+ case 2: /* fall through */
+ case 3: return MTHCA_RATE_MEMFREE_QUARTER;
+ default: return MTHCA_RATE_MEMFREE_EIGHTH;
+ }
+}
+
+static u8 ib_rate_to_tavor(u8 static_rate)
+{
+ switch (static_rate) {
+ case IB_RATE_2_5_GBPS: return MTHCA_RATE_TAVOR_1X;
+ case IB_RATE_5_GBPS: return MTHCA_RATE_TAVOR_1X_DDR;
+ case IB_RATE_10_GBPS: return MTHCA_RATE_TAVOR_4X;
+ default: return MTHCA_RATE_TAVOR_FULL;
+ }
+}
+
+u8 mthca_get_rate(struct mthca_dev *dev, int static_rate, u8 port)
+{
+ u8 rate;
+
+ if (!static_rate || ib_rate_to_mult(static_rate) >= dev->rate[port - 1])
+ return 0;
+
+ if (mthca_is_memfree(dev))
+ rate = ib_rate_to_memfree(ib_rate_to_mult(static_rate),
+ dev->rate[port - 1]);
+ else
+ rate = ib_rate_to_tavor(static_rate);
+
+ if (!(dev->limits.stat_rate_support & (1 << rate)))
+ rate = 1;
+
+ return rate;
+}
+
int mthca_create_ah(struct mthca_dev *dev,
struct mthca_pd *pd,
struct ib_ah_attr *ah_attr,
av->g_slid = ah_attr->src_path_bits;
av->dlid = cpu_to_be16(ah_attr->dlid);
av->msg_sr = (3 << 4) | /* 2K message */
- ah_attr->static_rate;
+ mthca_get_rate(dev, ah_attr->static_rate, ah_attr->port_num);
av->sl_tclass_flowlabel = cpu_to_be32(ah_attr->sl << 28);
if (ah_attr->ah_flags & IB_AH_GRH) {
av->g_slid |= 0x80;
u32 *outbox;
u8 field;
u16 size;
+ u16 stat_rate;
int err;
#define QUERY_DEV_LIM_OUT_SIZE 0x100
#define QUERY_DEV_LIM_MTU_WIDTH_OFFSET 0x36
#define QUERY_DEV_LIM_VL_PORT_OFFSET 0x37
#define QUERY_DEV_LIM_MAX_GID_OFFSET 0x3b
+#define QUERY_DEV_LIM_RATE_SUPPORT_OFFSET 0x3c
#define QUERY_DEV_LIM_MAX_PKEY_OFFSET 0x3f
#define QUERY_DEV_LIM_FLAGS_OFFSET 0x44
#define QUERY_DEV_LIM_RSVD_UAR_OFFSET 0x48
dev_lim->num_ports = field & 0xf;
MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_GID_OFFSET);
dev_lim->max_gids = 1 << (field & 0xf);
+ MTHCA_GET(stat_rate, outbox, QUERY_DEV_LIM_RATE_SUPPORT_OFFSET);
+ dev_lim->stat_rate_support = stat_rate;
MTHCA_GET(field, outbox, QUERY_DEV_LIM_MAX_PKEY_OFFSET);
dev_lim->max_pkeys = 1 << (field & 0xf);
MTHCA_GET(dev_lim->flags, outbox, QUERY_DEV_LIM_FLAGS_OFFSET);
int max_vl;
int num_ports;
int max_gids;
+ u16 stat_rate_support;
int max_pkeys;
u32 flags;
int reserved_uars;
int reserved_qps;
int num_srqs;
int max_srq_wqes;
+ int max_srq_sge;
int reserved_srqs;
int num_eecs;
int reserved_eecs;
int reserved_pds;
u32 page_size_cap;
u32 flags;
+ u16 stat_rate_support;
u8 port_width_cap;
};
struct ib_mad_agent *send_agent[MTHCA_MAX_PORTS][2];
struct ib_ah *sm_ah[MTHCA_MAX_PORTS];
spinlock_t sm_lock;
+ u8 rate[MTHCA_MAX_PORTS];
};
-#define mthca_dbg(mdev, format, arg...) \
- dev_dbg(&mdev->pdev->dev, format, ## arg)
+#ifdef CONFIG_INFINIBAND_MTHCA_DEBUG
+extern int mthca_debug_level;
+
+#define mthca_dbg(mdev, format, arg...) \
+ do { \
+ if (mthca_debug_level) \
+ dev_printk(KERN_DEBUG, &mdev->pdev->dev, format, ## arg); \
+ } while (0)
+
+#else /* CONFIG_INFINIBAND_MTHCA_DEBUG */
+
+#define mthca_dbg(mdev, format, arg...) do { (void) mdev; } while (0)
+
+#endif /* CONFIG_INFINIBAND_MTHCA_DEBUG */
+
#define mthca_err(mdev, format, arg...) \
dev_err(&mdev->pdev->dev, format, ## arg)
#define mthca_info(mdev, format, arg...) \
int mthca_modify_srq(struct ib_srq *ibsrq, struct ib_srq_attr *attr,
enum ib_srq_attr_mask attr_mask);
int mthca_query_srq(struct ib_srq *srq, struct ib_srq_attr *srq_attr);
+int mthca_max_srq_sge(struct mthca_dev *dev);
void mthca_srq_event(struct mthca_dev *dev, u32 srqn,
enum ib_event_type event_type);
void mthca_free_srq_wqe(struct mthca_srq *srq, u32 wqe_addr);
struct ib_ud_header *header);
int mthca_ah_query(struct ib_ah *ibah, struct ib_ah_attr *attr);
int mthca_ah_grh_present(struct mthca_ah *ah);
+u8 mthca_get_rate(struct mthca_dev *dev, int static_rate, u8 port);
+enum ib_rate mthca_rate_to_ib(struct mthca_dev *dev, u8 mthca_rate, u8 port);
int mthca_multicast_attach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid);
int mthca_multicast_detach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid);
MTHCA_VENDOR_CLASS2 = 0xa
};
+int mthca_update_rate(struct mthca_dev *dev, u8 port_num)
+{
+ struct ib_port_attr *tprops = NULL;
+ int ret;
+
+ tprops = kmalloc(sizeof *tprops, GFP_KERNEL);
+ if (!tprops)
+ return -ENOMEM;
+
+ ret = ib_query_port(&dev->ib_dev, port_num, tprops);
+ if (ret) {
+ printk(KERN_WARNING "ib_query_port failed (%d) for %s port %d\n",
+ ret, dev->ib_dev.name, port_num);
+ goto out;
+ }
+
+ dev->rate[port_num - 1] = tprops->active_speed *
+ ib_width_enum_to_int(tprops->active_width);
+
+out:
+ kfree(tprops);
+ return ret;
+}
+
static void update_sm_ah(struct mthca_dev *dev,
u8 port_num, u16 lid, u8 sl)
{
mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE) &&
mad->mad_hdr.method == IB_MGMT_METHOD_SET) {
if (mad->mad_hdr.attr_id == IB_SMP_ATTR_PORT_INFO) {
+ mthca_update_rate(to_mdev(ibdev), port_num);
update_sm_ah(to_mdev(ibdev), port_num,
be16_to_cpup((__be16 *) (mad->data + 58)),
(*(u8 *) (mad->data + 76)) & 0xf);
{
struct ib_mad_agent *agent;
int p, q;
+ int ret;
spin_lock_init(&dev->sm_lock);
q ? IB_QPT_GSI : IB_QPT_SMI,
NULL, 0, send_handler,
NULL, NULL);
- if (IS_ERR(agent))
+ if (IS_ERR(agent)) {
+ ret = PTR_ERR(agent);
goto err;
+ }
dev->send_agent[p][q] = agent;
}
+
+ for (p = 1; p <= dev->limits.num_ports; ++p) {
+ ret = mthca_update_rate(dev, p);
+ if (ret) {
+ mthca_err(dev, "Failed to obtain port %d rate."
+ " aborting.\n", p);
+ goto err;
+ }
+ }
+
return 0;
err:
if (dev->send_agent[p][q])
ib_unregister_mad_agent(dev->send_agent[p][q]);
- return PTR_ERR(agent);
+ return ret;
}
void __devexit mthca_free_agents(struct mthca_dev *dev)
MODULE_LICENSE("Dual BSD/GPL");
MODULE_VERSION(DRV_VERSION);
+#ifdef CONFIG_INFINIBAND_MTHCA_DEBUG
+
+int mthca_debug_level = 0;
+module_param_named(debug_level, mthca_debug_level, int, 0644);
+MODULE_PARM_DESC(debug_level, "Enable debug tracing if > 0");
+
+#endif /* CONFIG_INFINIBAND_MTHCA_DEBUG */
+
#ifdef CONFIG_PCI_MSI
static int msi_x = 0;
#endif /* CONFIG_PCI_MSI */
+static int tune_pci = 0;
+module_param(tune_pci, int, 0444);
+MODULE_PARM_DESC(tune_pci, "increase PCI burst from the default set by BIOS if nonzero");
+
static const char mthca_version[] __devinitdata =
DRV_NAME ": Mellanox InfiniBand HCA driver v"
DRV_VERSION " (" DRV_RELDATE ")\n";
int cap;
u16 val;
+ if (!tune_pci)
+ return 0;
+
/* First try to max out Read Byte Count */
cap = pci_find_capability(mdev->pdev, PCI_CAP_ID_PCIX);
if (cap) {
mdev->limits.reserved_srqs = dev_lim->reserved_srqs;
mdev->limits.reserved_eecs = dev_lim->reserved_eecs;
mdev->limits.max_desc_sz = dev_lim->max_desc_sz;
+ mdev->limits.max_srq_sge = mthca_max_srq_sge(mdev);
/*
* Subtract 1 from the limit because we need to allocate a
* spare CQE so the HCA HW can tell the difference between an
mdev->limits.port_width_cap = dev_lim->max_port_width;
mdev->limits.page_size_cap = ~(u32) (dev_lim->min_page_sz - 1);
mdev->limits.flags = dev_lim->flags;
+ /*
+ * For old FW that doesn't return static rate support, use a
+ * value of 0x3 (only static rate values of 0 or 1 are handled),
+ * except on Sinai, where even old FW can handle static rate
+ * values of 2 and 3.
+ */
+ if (dev_lim->stat_rate_support)
+ mdev->limits.stat_rate_support = dev_lim->stat_rate_support;
+ else if (mdev->mthca_flags & MTHCA_FLAG_SINAI_OPT)
+ mdev->limits.stat_rate_support = 0xf;
+ else
+ mdev->limits.stat_rate_support = 0x3;
/* IB_DEVICE_RESIZE_MAX_WR not supported by driver.
May be doable since hardware supports it for SRQ.
props->max_res_rd_atom = props->max_qp_rd_atom * props->max_qp;
props->max_srq = mdev->limits.num_srqs - mdev->limits.reserved_srqs;
props->max_srq_wr = mdev->limits.max_srq_wqes;
- props->max_srq_sge = mdev->limits.max_sg;
+ props->max_srq_sge = mdev->limits.max_srq_sge;
props->local_ca_ack_delay = mdev->limits.local_ca_ack_delay;
props->atomic_cap = mdev->limits.flags & DEV_LIM_FLAG_ATOMIC ?
IB_ATOMIC_HCA : IB_ATOMIC_NONE;
atomic_t refcount;
u32 qpn;
int is_direct;
+ u8 port; /* for SQP and memfree use only */
+ u8 alt_port; /* for memfree use only */
u8 transport;
u8 state;
u8 atomic_rd_en;
struct mthca_sqp {
struct mthca_qp qp;
- int port;
int pkey_index;
u32 qkey;
u32 send_psn;
return;
}
+ if (event_type == IB_EVENT_PATH_MIG)
+ qp->port = qp->alt_port;
+
event.device = &dev->ib_dev;
event.event = event_type;
event.element.qp = &qp->ibqp;
{
memset(ib_ah_attr, 0, sizeof *path);
ib_ah_attr->port_num = (be32_to_cpu(path->port_pkey) >> 24) & 0x3;
+
+ if (ib_ah_attr->port_num == 0 || ib_ah_attr->port_num > dev->limits.num_ports)
+ return;
+
ib_ah_attr->dlid = be16_to_cpu(path->rlid);
ib_ah_attr->sl = be32_to_cpu(path->sl_tclass_flowlabel) >> 28;
ib_ah_attr->src_path_bits = path->g_mylmc & 0x7f;
- ib_ah_attr->static_rate = path->static_rate & 0x7;
+ ib_ah_attr->static_rate = mthca_rate_to_ib(dev,
+ path->static_rate & 0x7,
+ ib_ah_attr->port_num);
ib_ah_attr->ah_flags = (path->g_mylmc & (1 << 7)) ? IB_AH_GRH : 0;
if (ib_ah_attr->ah_flags) {
ib_ah_attr->grh.sgid_index = path->mgid_index & (dev->limits.gid_table_len - 1);
qp_attr->cap.max_recv_sge = qp->rq.max_gs;
qp_attr->cap.max_inline_data = qp->max_inline_data;
- to_ib_ah_attr(dev, &qp_attr->ah_attr, &context->pri_path);
- to_ib_ah_attr(dev, &qp_attr->alt_ah_attr, &context->alt_path);
+ if (qp->transport == RC || qp->transport == UC) {
+ to_ib_ah_attr(dev, &qp_attr->ah_attr, &context->pri_path);
+ to_ib_ah_attr(dev, &qp_attr->alt_ah_attr, &context->alt_path);
+ }
qp_attr->pkey_index = be32_to_cpu(context->pri_path.port_pkey) & 0x7f;
qp_attr->alt_pkey_index = be32_to_cpu(context->alt_path.port_pkey) & 0x7f;
}
static int mthca_path_set(struct mthca_dev *dev, struct ib_ah_attr *ah,
- struct mthca_qp_path *path)
+ struct mthca_qp_path *path, u8 port)
{
path->g_mylmc = ah->src_path_bits & 0x7f;
path->rlid = cpu_to_be16(ah->dlid);
- path->static_rate = !!ah->static_rate;
+ path->static_rate = mthca_get_rate(dev, ah->static_rate, port);
if (ah->ah_flags & IB_AH_GRH) {
if (ah->grh.sgid_index >= dev->limits.gid_table_len) {
if (qp->transport == MLX)
qp_context->pri_path.port_pkey |=
- cpu_to_be32(to_msqp(qp)->port << 24);
+ cpu_to_be32(qp->port << 24);
else {
if (attr_mask & IB_QP_PORT) {
qp_context->pri_path.port_pkey |=
}
if (attr_mask & IB_QP_AV) {
- if (mthca_path_set(dev, &attr->ah_attr, &qp_context->pri_path))
+ if (mthca_path_set(dev, &attr->ah_attr, &qp_context->pri_path,
+ attr_mask & IB_QP_PORT ? attr->port_num : qp->port))
return -EINVAL;
qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_PRIMARY_ADDR_PATH);
return -EINVAL;
}
- if (mthca_path_set(dev, &attr->alt_ah_attr, &qp_context->alt_path))
+ if (mthca_path_set(dev, &attr->alt_ah_attr, &qp_context->alt_path,
+ attr->alt_ah_attr.port_num))
return -EINVAL;
qp_context->alt_path.port_pkey |= cpu_to_be32(attr->alt_pkey_index |
qp->atomic_rd_en = attr->qp_access_flags;
if (attr_mask & IB_QP_MAX_DEST_RD_ATOMIC)
qp->resp_depth = attr->max_dest_rd_atomic;
+ if (attr_mask & IB_QP_PORT)
+ qp->port = attr->port_num;
+ if (attr_mask & IB_QP_ALT_PATH)
+ qp->alt_port = attr->alt_port_num;
if (is_sqp(dev, qp))
store_attrs(to_msqp(qp), attr, attr_mask);
if (is_qp0(dev, qp)) {
if (cur_state != IB_QPS_RTR &&
new_state == IB_QPS_RTR)
- init_port(dev, to_msqp(qp)->port);
+ init_port(dev, qp->port);
if (cur_state != IB_QPS_RESET &&
cur_state != IB_QPS_ERR &&
(new_state == IB_QPS_RESET ||
new_state == IB_QPS_ERR))
- mthca_CLOSE_IB(dev, to_msqp(qp)->port, &status);
+ mthca_CLOSE_IB(dev, qp->port, &status);
}
/*
if (qp->qpn == -1)
return -ENOMEM;
+ /* initialize port to zero for error-catching. */
+ qp->port = 0;
+
err = mthca_alloc_qp_common(dev, pd, send_cq, recv_cq,
send_policy, qp);
if (err) {
if (err)
goto err_out;
- sqp->port = port;
+ sqp->qp.port = port;
sqp->qp.qpn = mqpn;
sqp->qp.transport = MLX;
sqp->ud_header.lrh.source_lid = IB_LID_PERMISSIVE;
sqp->ud_header.bth.solicited_event = !!(wr->send_flags & IB_SEND_SOLICITED);
if (!sqp->qp.ibqp.qp_num)
- ib_get_cached_pkey(&dev->ib_dev, sqp->port,
+ ib_get_cached_pkey(&dev->ib_dev, sqp->qp.port,
sqp->pkey_index, &pkey);
else
- ib_get_cached_pkey(&dev->ib_dev, sqp->port,
+ ib_get_cached_pkey(&dev->ib_dev, sqp->qp.port,
wr->wr.ud.pkey_index, &pkey);
sqp->ud_header.bth.pkey = cpu_to_be16(pkey);
sqp->ud_header.bth.destination_qpn = cpu_to_be32(wr->wr.ud.remote_qpn);
/* Sanity check SRQ size before proceeding */
if (attr->max_wr > dev->limits.max_srq_wqes ||
- attr->max_sge > dev->limits.max_sg)
+ attr->max_sge > dev->limits.max_srq_sge)
return -EINVAL;
srq->max = attr->max_wr;
return err;
}
+int mthca_max_srq_sge(struct mthca_dev *dev)
+{
+ if (mthca_is_memfree(dev))
+ return dev->limits.max_sg;
+
+ /*
+ * SRQ allocations are based on powers of 2 for Tavor,
+ * (although they only need to be multiples of 16 bytes).
+ *
+ * Therefore, we need to base the max number of sg entries on
+ * the largest power of 2 descriptor size that is <= to the
+ * actual max WQE descriptor size, rather than return the
+ * max_sg value given by the firmware (which is based on WQE
+ * sizes as multiples of 16, not powers of 2).
+ *
+ * If SRQ implementation is changed for Tavor to be based on
+ * multiples of 16, the calculation below can be deleted and
+ * the FW max_sg value returned.
+ */
+ return min_t(int, dev->limits.max_sg,
+ ((1 << (fls(dev->limits.max_desc_sz) - 1)) -
+ sizeof (struct mthca_next_seg)) /
+ sizeof (struct mthca_data_seg));
+}
+
int __devinit mthca_init_srq_table(struct mthca_dev *dev)
{
int err;
group: <http://www.ietf.org/html.charters/ipoib-charter.html>.
config INFINIBAND_IPOIB_DEBUG
- bool "IP-over-InfiniBand debugging"
+ bool "IP-over-InfiniBand debugging" if EMBEDDED
depends on INFINIBAND_IPOIB
+ default y
---help---
This option causes debugging code to be compiled into the
IPoIB driver. The output can be turned on via the
IPOIB_RX_RING_SIZE = 128,
IPOIB_TX_RING_SIZE = 64,
+ IPOIB_MAX_QUEUE_SIZE = 8192,
+ IPOIB_MIN_QUEUE_SIZE = 2,
IPOIB_NUM_WC = 4,
INFINIBAND_ALEN, sizeof(void *));
}
+struct ipoib_neigh *ipoib_neigh_alloc(struct neighbour *neigh);
+void ipoib_neigh_free(struct ipoib_neigh *neigh);
+
extern struct workqueue_struct *ipoib_workqueue;
/* functions */
#define ipoib_warn(priv, format, arg...) \
ipoib_printk(KERN_WARNING, priv, format , ## arg)
+extern int ipoib_sendq_size;
+extern int ipoib_recvq_size;
#ifdef CONFIG_INFINIBAND_IPOIB_DEBUG
extern int ipoib_debug_level;
gid_buf, path.pathrec.dlid ? "yes" : "no");
if (path.pathrec.dlid) {
- rate = ib_sa_rate_enum_to_int(path.pathrec.rate) * 25;
+ rate = ib_rate_to_mult(path.pathrec.rate) * 25;
seq_printf(file,
" DLID: 0x%04x\n"
struct ipoib_dev_priv *priv = netdev_priv(dev);
int i;
- for (i = 0; i < IPOIB_RX_RING_SIZE; ++i) {
+ for (i = 0; i < ipoib_recvq_size; ++i) {
if (ipoib_alloc_rx_skb(dev, i)) {
ipoib_warn(priv, "failed to allocate receive buffer %d\n", i);
return -ENOMEM;
if (wr_id & IPOIB_OP_RECV) {
wr_id &= ~IPOIB_OP_RECV;
- if (wr_id < IPOIB_RX_RING_SIZE) {
+ if (wr_id < ipoib_recvq_size) {
struct sk_buff *skb = priv->rx_ring[wr_id].skb;
dma_addr_t addr = priv->rx_ring[wr_id].mapping;
struct ipoib_tx_buf *tx_req;
unsigned long flags;
- if (wr_id >= IPOIB_TX_RING_SIZE) {
+ if (wr_id >= ipoib_sendq_size) {
ipoib_warn(priv, "completion event with wrid %d (> %d)\n",
- wr_id, IPOIB_TX_RING_SIZE);
+ wr_id, ipoib_sendq_size);
return;
}
spin_lock_irqsave(&priv->tx_lock, flags);
++priv->tx_tail;
if (netif_queue_stopped(dev) &&
- priv->tx_head - priv->tx_tail <= IPOIB_TX_RING_SIZE / 2)
+ priv->tx_head - priv->tx_tail <= ipoib_sendq_size >> 1)
netif_wake_queue(dev);
spin_unlock_irqrestore(&priv->tx_lock, flags);
* means we have to make sure everything is properly recorded and
* our state is consistent before we call post_send().
*/
- tx_req = &priv->tx_ring[priv->tx_head & (IPOIB_TX_RING_SIZE - 1)];
+ tx_req = &priv->tx_ring[priv->tx_head & (ipoib_sendq_size - 1)];
tx_req->skb = skb;
addr = dma_map_single(priv->ca->dma_device, skb->data, skb->len,
DMA_TO_DEVICE);
pci_unmap_addr_set(tx_req, mapping, addr);
- if (unlikely(post_send(priv, priv->tx_head & (IPOIB_TX_RING_SIZE - 1),
+ if (unlikely(post_send(priv, priv->tx_head & (ipoib_sendq_size - 1),
address->ah, qpn, addr, skb->len))) {
ipoib_warn(priv, "post_send failed\n");
++priv->stats.tx_errors;
address->last_send = priv->tx_head;
++priv->tx_head;
- if (priv->tx_head - priv->tx_tail == IPOIB_TX_RING_SIZE) {
+ if (priv->tx_head - priv->tx_tail == ipoib_sendq_size) {
ipoib_dbg(priv, "TX ring full, stopping kernel net queue\n");
netif_stop_queue(dev);
}
int pending = 0;
int i;
- for (i = 0; i < IPOIB_RX_RING_SIZE; ++i)
+ for (i = 0; i < ipoib_recvq_size; ++i)
if (priv->rx_ring[i].skb)
++pending;
*/
while ((int) priv->tx_tail - (int) priv->tx_head < 0) {
tx_req = &priv->tx_ring[priv->tx_tail &
- (IPOIB_TX_RING_SIZE - 1)];
+ (ipoib_sendq_size - 1)];
dma_unmap_single(priv->ca->dma_device,
pci_unmap_addr(tx_req, mapping),
tx_req->skb->len,
++priv->tx_tail;
}
- for (i = 0; i < IPOIB_RX_RING_SIZE; ++i)
+ for (i = 0; i < ipoib_recvq_size; ++i)
if (priv->rx_ring[i].skb) {
dma_unmap_single(priv->ca->dma_device,
pci_unmap_addr(&priv->rx_ring[i],
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
+#include <linux/kernel.h>
#include <linux/if_arp.h> /* For ARPHRD_xxx */
MODULE_DESCRIPTION("IP-over-InfiniBand net driver");
MODULE_LICENSE("Dual BSD/GPL");
+int ipoib_sendq_size __read_mostly = IPOIB_TX_RING_SIZE;
+int ipoib_recvq_size __read_mostly = IPOIB_RX_RING_SIZE;
+
+module_param_named(send_queue_size, ipoib_sendq_size, int, 0444);
+MODULE_PARM_DESC(send_queue_size, "Number of descriptors in send queue");
+module_param_named(recv_queue_size, ipoib_recvq_size, int, 0444);
+MODULE_PARM_DESC(recv_queue_size, "Number of descriptors in receive queue");
+
#ifdef CONFIG_INFINIBAND_IPOIB_DEBUG
int ipoib_debug_level;
*/
if (neigh->ah)
ipoib_put_ah(neigh->ah);
- *to_ipoib_neigh(neigh->neighbour) = NULL;
- kfree(neigh);
+
+ ipoib_neigh_free(neigh);
}
spin_unlock_irqrestore(&priv->lock, flags);
struct ipoib_dev_priv *priv = netdev_priv(dev);
struct ipoib_path *path, *tp;
LIST_HEAD(remove_list);
- unsigned long flags;
- spin_lock_irqsave(&priv->lock, flags);
+ spin_lock_irq(&priv->lock);
list_splice(&priv->path_list, &remove_list);
INIT_LIST_HEAD(&priv->path_list);
list_for_each_entry(path, &remove_list, list)
rb_erase(&path->rb_node, &priv->path_tree);
- spin_unlock_irqrestore(&priv->lock, flags);
-
list_for_each_entry_safe(path, tp, &remove_list, list) {
if (path->query)
ib_sa_cancel_query(path->query_id, path->query);
+ spin_unlock_irq(&priv->lock);
wait_for_completion(&path->done);
path_free(dev, path);
+ spin_lock_irq(&priv->lock);
}
+ spin_unlock_irq(&priv->lock);
}
static void path_rec_completion(int status,
struct ib_ah_attr av = {
.dlid = be16_to_cpu(pathrec->dlid),
.sl = pathrec->sl,
- .port_num = priv->port
+ .port_num = priv->port,
+ .static_rate = pathrec->rate
};
- int path_rate = ib_sa_rate_enum_to_int(pathrec->rate);
-
- if (path_rate > 0 && priv->local_rate > path_rate)
- av.static_rate = (priv->local_rate - 1) / path_rate;
-
- ipoib_dbg(priv, "static_rate %d for local port %dX, path %dX\n",
- av.static_rate, priv->local_rate,
- ib_sa_rate_enum_to_int(pathrec->rate));
ah = ipoib_create_ah(dev, priv->pd, &av);
}
struct ipoib_path *path;
struct ipoib_neigh *neigh;
- neigh = kmalloc(sizeof *neigh, GFP_ATOMIC);
+ neigh = ipoib_neigh_alloc(skb->dst->neighbour);
if (!neigh) {
++priv->stats.tx_dropped;
dev_kfree_skb_any(skb);
}
skb_queue_head_init(&neigh->queue);
- neigh->neighbour = skb->dst->neighbour;
- *to_ipoib_neigh(skb->dst->neighbour) = neigh;
/*
* We can only be called from ipoib_start_xmit, so we're
path = path_rec_create(dev,
(union ib_gid *) (skb->dst->neighbour->ha + 4));
if (!path)
- goto err;
+ goto err_path;
__path_add(dev, path);
}
__skb_queue_tail(&neigh->queue, skb);
if (!path->query && path_rec_start(dev, path))
- goto err;
+ goto err_list;
}
spin_unlock(&priv->lock);
return;
-err:
- *to_ipoib_neigh(skb->dst->neighbour) = NULL;
+err_list:
list_del(&neigh->list);
- kfree(neigh);
+err_path:
+ ipoib_neigh_free(neigh);
++priv->stats.tx_dropped;
dev_kfree_skb_any(skb);
if (neigh->ah)
ah = neigh->ah;
list_del(&neigh->list);
- *to_ipoib_neigh(n) = NULL;
- kfree(neigh);
+ ipoib_neigh_free(neigh);
}
spin_unlock_irqrestore(&priv->lock, flags);
ipoib_put_ah(ah);
}
+struct ipoib_neigh *ipoib_neigh_alloc(struct neighbour *neighbour)
+{
+ struct ipoib_neigh *neigh;
+
+ neigh = kmalloc(sizeof *neigh, GFP_ATOMIC);
+ if (!neigh)
+ return NULL;
+
+ neigh->neighbour = neighbour;
+ *to_ipoib_neigh(neighbour) = neigh;
+
+ return neigh;
+}
+
+void ipoib_neigh_free(struct ipoib_neigh *neigh)
+{
+ *to_ipoib_neigh(neigh->neighbour) = NULL;
+ kfree(neigh);
+}
+
static int ipoib_neigh_setup_dev(struct net_device *dev, struct neigh_parms *parms)
{
parms->neigh_destructor = ipoib_neigh_destructor;
struct ipoib_dev_priv *priv = netdev_priv(dev);
/* Allocate RX/TX "rings" to hold queued skbs */
-
- priv->rx_ring = kzalloc(IPOIB_RX_RING_SIZE * sizeof (struct ipoib_rx_buf),
+ priv->rx_ring = kzalloc(ipoib_recvq_size * sizeof *priv->rx_ring,
GFP_KERNEL);
if (!priv->rx_ring) {
printk(KERN_WARNING "%s: failed to allocate RX ring (%d entries)\n",
- ca->name, IPOIB_RX_RING_SIZE);
+ ca->name, ipoib_recvq_size);
goto out;
}
- priv->tx_ring = kzalloc(IPOIB_TX_RING_SIZE * sizeof (struct ipoib_tx_buf),
+ priv->tx_ring = kzalloc(ipoib_sendq_size * sizeof *priv->tx_ring,
GFP_KERNEL);
if (!priv->tx_ring) {
printk(KERN_WARNING "%s: failed to allocate TX ring (%d entries)\n",
- ca->name, IPOIB_TX_RING_SIZE);
+ ca->name, ipoib_sendq_size);
goto out_rx_ring_cleanup;
}
dev->hard_header_len = IPOIB_ENCAP_LEN + INFINIBAND_ALEN;
dev->addr_len = INFINIBAND_ALEN;
dev->type = ARPHRD_INFINIBAND;
- dev->tx_queue_len = IPOIB_TX_RING_SIZE * 2;
+ dev->tx_queue_len = ipoib_sendq_size * 2;
dev->features = NETIF_F_VLAN_CHALLENGED | NETIF_F_LLTX;
/* MTU will be reset when mcast join happens */
{
int ret;
+ ipoib_recvq_size = roundup_pow_of_two(ipoib_recvq_size);
+ ipoib_recvq_size = min(ipoib_recvq_size, IPOIB_MAX_QUEUE_SIZE);
+ ipoib_recvq_size = max(ipoib_recvq_size, IPOIB_MIN_QUEUE_SIZE);
+
+ ipoib_sendq_size = roundup_pow_of_two(ipoib_sendq_size);
+ ipoib_sendq_size = min(ipoib_sendq_size, IPOIB_MAX_QUEUE_SIZE);
+ ipoib_sendq_size = max(ipoib_sendq_size, IPOIB_MIN_QUEUE_SIZE);
+
ret = ipoib_register_debugfs();
if (ret)
return ret;
*/
if (neigh->ah)
ipoib_put_ah(neigh->ah);
- *to_ipoib_neigh(neigh->neighbour) = NULL;
- kfree(neigh);
+ ipoib_neigh_free(neigh);
}
spin_unlock_irqrestore(&priv->lock, flags);
.port_num = priv->port,
.sl = mcast->mcmember.sl,
.ah_flags = IB_AH_GRH,
+ .static_rate = mcast->mcmember.rate,
.grh = {
.flow_label = be32_to_cpu(mcast->mcmember.flow_label),
.hop_limit = mcast->mcmember.hop_limit,
.traffic_class = mcast->mcmember.traffic_class
}
};
- int path_rate = ib_sa_rate_enum_to_int(mcast->mcmember.rate);
-
av.grh.dgid = mcast->mcmember.mgid;
- if (path_rate > 0 && priv->local_rate > path_rate)
- av.static_rate = (priv->local_rate - 1) / path_rate;
-
- ipoib_dbg_mcast(priv, "static_rate %d for local port %dX, mcmember %dX\n",
- av.static_rate, priv->local_rate,
- ib_sa_rate_enum_to_int(mcast->mcmember.rate));
-
ah = ipoib_create_ah(dev, priv->pd, &av);
if (!ah) {
ipoib_warn(priv, "ib_address_create failed\n");
return 0;
}
+static void wait_for_mcast_join(struct ipoib_dev_priv *priv,
+ struct ipoib_mcast *mcast)
+{
+ spin_lock_irq(&priv->lock);
+ if (mcast && mcast->query) {
+ ib_sa_cancel_query(mcast->query_id, mcast->query);
+ mcast->query = NULL;
+ spin_unlock_irq(&priv->lock);
+ ipoib_dbg_mcast(priv, "waiting for MGID " IPOIB_GID_FMT "\n",
+ IPOIB_GID_ARG(mcast->mcmember.mgid));
+ wait_for_completion(&mcast->done);
+ }
+ else
+ spin_unlock_irq(&priv->lock);
+}
+
int ipoib_mcast_stop_thread(struct net_device *dev, int flush)
{
struct ipoib_dev_priv *priv = netdev_priv(dev);
if (flush)
flush_workqueue(ipoib_workqueue);
- spin_lock_irq(&priv->lock);
- if (priv->broadcast && priv->broadcast->query) {
- ib_sa_cancel_query(priv->broadcast->query_id, priv->broadcast->query);
- priv->broadcast->query = NULL;
- spin_unlock_irq(&priv->lock);
- ipoib_dbg_mcast(priv, "waiting for bcast\n");
- wait_for_completion(&priv->broadcast->done);
- } else
- spin_unlock_irq(&priv->lock);
+ wait_for_mcast_join(priv, priv->broadcast);
- list_for_each_entry(mcast, &priv->multicast_list, list) {
- spin_lock_irq(&priv->lock);
- if (mcast->query) {
- ib_sa_cancel_query(mcast->query_id, mcast->query);
- mcast->query = NULL;
- spin_unlock_irq(&priv->lock);
- ipoib_dbg_mcast(priv, "waiting for MGID " IPOIB_GID_FMT "\n",
- IPOIB_GID_ARG(mcast->mcmember.mgid));
- wait_for_completion(&mcast->done);
- } else
- spin_unlock_irq(&priv->lock);
- }
+ list_for_each_entry(mcast, &priv->multicast_list, list)
+ wait_for_mcast_join(priv, mcast);
return 0;
}
if (skb->dst &&
skb->dst->neighbour &&
!*to_ipoib_neigh(skb->dst->neighbour)) {
- struct ipoib_neigh *neigh = kmalloc(sizeof *neigh, GFP_ATOMIC);
+ struct ipoib_neigh *neigh = ipoib_neigh_alloc(skb->dst->neighbour);
if (neigh) {
kref_get(&mcast->ah->ref);
neigh->ah = mcast->ah;
- neigh->neighbour = skb->dst->neighbour;
- *to_ipoib_neigh(skb->dst->neighbour) = neigh;
list_add_tail(&neigh->list, &mcast->neigh_list);
}
}
/* We have to cancel outside of the spinlock */
list_for_each_entry_safe(mcast, tmcast, &remove_list, list) {
+ wait_for_mcast_join(priv, mcast);
ipoib_mcast_leave(mcast->dev, mcast);
ipoib_mcast_free(mcast);
}
struct ipoib_dev_priv *priv = netdev_priv(dev);
struct ib_qp_init_attr init_attr = {
.cap = {
- .max_send_wr = IPOIB_TX_RING_SIZE,
- .max_recv_wr = IPOIB_RX_RING_SIZE,
+ .max_send_wr = ipoib_sendq_size,
+ .max_recv_wr = ipoib_recvq_size,
.max_send_sge = 1,
.max_recv_sge = 1
},
}
priv->cq = ib_create_cq(priv->ca, ipoib_ib_completion, NULL, dev,
- IPOIB_TX_RING_SIZE + IPOIB_RX_RING_SIZE + 1);
+ ipoib_sendq_size + ipoib_recvq_size + 1);
if (IS_ERR(priv->cq)) {
printk(KERN_WARNING "%s: failed to create CQ\n", ca->name);
goto out_free_pd;
p = match_strdup(args);
if (strlen(p) != 32) {
printk(KERN_WARNING PFX "bad dest GID parameter '%s'\n", p);
+ kfree(p);
goto out;
}
{
struct ippp_struct *is;
- if (lp->ppp_slot < 0 || lp->ppp_slot > ISDN_MAX_CHANNELS) {
+ if (lp->ppp_slot < 0 || lp->ppp_slot >= ISDN_MAX_CHANNELS) {
printk(KERN_ERR "%s: ppp_slot(%d) out of range\n",
__FUNCTION__, lp->ppp_slot);
return 0;
lp->netdev->pb->ref_ct--;
spin_unlock(&lp->netdev->pb->lock);
#endif /* CONFIG_ISDN_MPP */
- if (lp->ppp_slot < 0 || lp->ppp_slot > ISDN_MAX_CHANNELS) {
+ if (lp->ppp_slot < 0 || lp->ppp_slot >= ISDN_MAX_CHANNELS) {
printk(KERN_ERR "%s: ppp_slot(%d) now invalid\n",
__FUNCTION__, lp->ppp_slot);
return 0;
int slot;
struct ippp_struct *is;
- if (min < 0 || min > ISDN_MAX_CHANNELS)
+ if (min < 0 || min >= ISDN_MAX_CHANNELS)
return -ENODEV;
slot = isdn_ppp_get_slot();
if (lp->master) { // FIXME?
mlp = (isdn_net_local *) lp->master->priv;
slot = mlp->ppp_slot;
- if (slot < 0 || slot > ISDN_MAX_CHANNELS) {
+ if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
printk(KERN_ERR "isdn_ppp_push_higher: master->ppp_slot(%d)\n",
lp->ppp_slot);
goto drop_packet;
/* we have our lp locked from now on */
slot = lp->ppp_slot;
- if (slot < 0 || slot > ISDN_MAX_CHANNELS) {
+ if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
printk(KERN_ERR "isdn_ppp_xmit: lp->ppp_slot(%d)\n",
lp->ppp_slot);
kfree_skb(skb);
mp = net_dev->pb;
stats = &mp->stats;
slot = lp->ppp_slot;
- if (slot < 0 || slot > ISDN_MAX_CHANNELS) {
+ if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
printk(KERN_ERR "%s: lp->ppp_slot(%d)\n",
__FUNCTION__, lp->ppp_slot);
stats->frame_drops++;
is->last_link_seqno = minseq = newseq;
for (lpq = net_dev->queue;;) {
slot = lpq->ppp_slot;
- if (slot < 0 || slot > ISDN_MAX_CHANNELS) {
+ if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
printk(KERN_ERR "%s: lpq->ppp_slot(%d)\n",
__FUNCTION__, lpq->ppp_slot);
} else {
printk(KERN_DEBUG "Received CCP frame from peer slot(%d)\n",
lp->ppp_slot);
- if (lp->ppp_slot < 0 || lp->ppp_slot > ISDN_MAX_CHANNELS) {
+ if (lp->ppp_slot < 0 || lp->ppp_slot >= ISDN_MAX_CHANNELS) {
printk(KERN_ERR "%s: lp->ppp_slot(%d) out of range\n",
__FUNCTION__, lp->ppp_slot);
return;
if(lp->master) {
int slot = ((isdn_net_local *) (lp->master->priv))->ppp_slot;
- if (slot < 0 || slot > ISDN_MAX_CHANNELS) {
+ if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
printk(KERN_ERR "%s: slot(%d) out of range\n",
__FUNCTION__, slot);
return;
if (lp->master) {
slot = ((isdn_net_local *) (lp->master->priv))->ppp_slot;
- if (slot < 0 || slot > ISDN_MAX_CHANNELS) {
+ if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
printk(KERN_ERR "%s: slot(%d) out of range\n",
__FUNCTION__, slot);
return;
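
The ppp_slot / slot range checks above are tightened from '>' to '>=' because
these values index arrays sized ISDN_MAX_CHANNELS, so an index equal to
ISDN_MAX_CHANNELS is already one past the end. A minimal sketch of the
corrected bound check (ippp_table and its size are the ones used by isdn_ppp;
the surrounding error handling here is illustrative only):

	/* ippp_table has ISDN_MAX_CHANNELS entries, so the only
	 * valid indices are 0 .. ISDN_MAX_CHANNELS - 1.
	 */
	if (slot < 0 || slot >= ISDN_MAX_CHANNELS)
		return -ENODEV;		/* reject before indexing */
	is = ippp_table[slot];		/* now a safe array access */
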
{
atomic_inc(&md_event_count);
wake_up(&md_event_waiters);
+ sysfs_notify(&mddev->kobj, NULL, "sync_action");
}
EXPORT_SYMBOL_GPL(md_new_event);
static int
mptsas_slave_configure(struct scsi_device *sdev)
{
- sas_read_port_mode_page(sdev);
+ struct Scsi_Host *host = sdev->host;
+ MPT_SCSI_HOST *hd = (MPT_SCSI_HOST *)host->hostdata;
+
+ /*
+ * RAID volumes are placed beyond the last expected port.
+ * Don't send SAS mode pages in that case.
+ */
+ if (sdev->channel < hd->ioc->num_ports)
+ sas_read_port_mode_page(sdev);
return mptscsih_slave_configure(sdev);
}
*
* Copyright (C) 2002 David S. Miller (davem@redhat.com)
* Fixed by Pekka Pietikainen (pp@ee.oulu.fi)
+ * Copyright (C) 2006 Broadcom Corporation.
*
* Distribute under GPL.
*/
#define DRV_MODULE_NAME "b44"
#define PFX DRV_MODULE_NAME ": "
-#define DRV_MODULE_VERSION "0.97"
-#define DRV_MODULE_RELDATE "Nov 30, 2005"
+#define DRV_MODULE_VERSION "1.00"
+#define DRV_MODULE_RELDATE "Apr 7, 2006"
#define B44_DEF_MSG_ENABLE \
(NETIF_MSG_DRV | \
return readl(bp->regs + reg);
}
-static inline void bw32(const struct b44 *bp,
+static inline void bw32(const struct b44 *bp,
unsigned long reg, unsigned long val)
{
writel(val, bp->regs + reg);
val |= ((u32) data[4]) << 8;
val |= ((u32) data[5]) << 0;
bw32(bp, B44_CAM_DATA_LO, val);
- val = (CAM_DATA_HI_VALID |
+ val = (CAM_DATA_HI_VALID |
(((u32) data[0]) << 8) |
(((u32) data[1]) << 0));
bw32(bp, B44_CAM_DATA_HI, val);
bw32(bp, B44_CAM_CTRL, (CAM_CTRL_WRITE |
(index << CAM_CTRL_INDEX_SHIFT)));
- b44_wait_bit(bp, B44_CAM_CTRL, CAM_CTRL_BUSY, 100, 1);
+ b44_wait_bit(bp, B44_CAM_CTRL, CAM_CTRL_BUSY, 100, 1);
}
static inline void __b44_disable_ints(struct b44 *bp)
static void b44_set_flow_ctrl(struct b44 *bp, u32 local, u32 remote)
{
- u32 pause_enab = bp->flags & (B44_FLAG_TX_PAUSE |
- B44_FLAG_RX_PAUSE);
+ u32 pause_enab = 0;
- if (local & ADVERTISE_PAUSE_CAP) {
- if (local & ADVERTISE_PAUSE_ASYM) {
- if (remote & LPA_PAUSE_CAP)
- pause_enab |= (B44_FLAG_TX_PAUSE |
- B44_FLAG_RX_PAUSE);
- else if (remote & LPA_PAUSE_ASYM)
- pause_enab |= B44_FLAG_RX_PAUSE;
- } else {
- if (remote & LPA_PAUSE_CAP)
- pause_enab |= (B44_FLAG_TX_PAUSE |
- B44_FLAG_RX_PAUSE);
- }
- } else if (local & ADVERTISE_PAUSE_ASYM) {
- if ((remote & LPA_PAUSE_CAP) &&
- (remote & LPA_PAUSE_ASYM))
- pause_enab |= B44_FLAG_TX_PAUSE;
+ /* The driver supports only rx pause by default because
+ the b44 mac tx pause mechanism generates excessive
+ pause frames.
+ Use ethtool to turn on b44 tx pause if necessary.
+ */
+ if ((local & ADVERTISE_PAUSE_CAP) &&
+ (local & ADVERTISE_PAUSE_ASYM)){
+ if ((remote & LPA_PAUSE_ASYM) &&
+ !(remote & LPA_PAUSE_CAP))
+ pause_enab |= B44_FLAG_RX_PAUSE;
}
__b44_set_flow_ctrl(bp, pause_enab);
spin_unlock_irq(&bp->lock);
b44_enable_ints(bp);
-
+
return 0;
}
bw32(bp, B44_DMARX_ADDR, bp->rx_ring_dma + bp->dma_offset);
bw32(bp, B44_DMARX_PTR, bp->rx_pending);
- bp->rx_prod = bp->rx_pending;
+ bp->rx_prod = bp->rx_pending;
bw32(bp, B44_MIB_CTRL, MIB_CTRL_CLR_ON_READ);
val |= RXCONFIG_ALLMULTI;
else
i = __b44_load_mcast(bp, dev);
-
+
for (; i < 64; i++) {
- __b44_cam_write(bp, zero, i);
+ __b44_cam_write(bp, zero, i);
}
bw32(bp, B44_RXCONFIG, val);
val = br32(bp, B44_CAM_CTRL);
spin_unlock_irq(&bp->lock);
b44_enable_ints(bp);
-
+
return 0;
}
spin_unlock_irq(&bp->lock);
b44_enable_ints(bp);
-
+
return 0;
}
bp->core_unit = ssb_core_unit(bp);
bp->dma_offset = SB_PCI_DMA;
- /* XXX - really required?
+ /* XXX - really required?
bp->flags |= B44_FLAG_BUGGY_TXPTR;
*/
out:
"aborting.\n");
goto err_out_free_res;
}
-
+
err = pci_set_consistent_dma_mask(pdev, (u64) B44_DMA_MASK);
if (err) {
printk(KERN_ERR PFX "No usable DMA configuration, "
pci_save_state(bp->pdev);
- /* Chip reset provides power to the b44 MAC & PCI cores, which
+ /* Chip reset provides power to the b44 MAC & PCI cores, which
* is necessary for MAC register access.
- */
+ */
b44_chip_reset(bp);
printk(KERN_INFO "%s: Broadcom 4400 10/100BaseT Ethernet ", dev->name);
del_timer_sync(&bp->timer);
- spin_lock_irq(&bp->lock);
+ spin_lock_irq(&bp->lock);
b44_halt(bp);
- netif_carrier_off(bp->dev);
+ netif_carrier_off(bp->dev);
netif_device_detach(bp->dev);
b44_free_rings(bp);
/* Time in jiffies before concluding the transmitter is hung. */
#define TX_TIMEOUT (5*HZ)
-static char version[] __devinitdata =
+static const char version[] __devinitdata =
"Broadcom NetXtreme II Gigabit Ethernet Driver " DRV_MODULE_NAME " v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n";
MODULE_AUTHOR("Michael Chan <mchan@broadcom.com>");
+++ /dev/null
-/* $Linux: hydra.h,v 1.0 1994/10/26 02:03:47 cgd Exp $ */
-
-/*
- * Copyright (c) 1994 Timo Rossi
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * 3. All advertising materials mentioning features or use of this software
- * must display the following acknowledgement:
- * This product includes software developed by Timo Rossi
- * 4. The name of the author may not be used to endorse or promote products
- * derived from this software without specific prior written permission
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
- * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
- * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
- * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
- * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
- * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
- * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-/*
- * The Hydra Systems card uses the National Semiconductor
- * 8390 NIC (Network Interface Controller) chip, located
- * at card base address + 0xffe1. NIC registers are accessible
- * only at odd byte addresses, so the register offsets must
- * be multiplied by two.
- *
- * Card address PROM is located at card base + 0xffc0 (even byte addresses)
- *
- * RAM starts at the card base address, and is 16K or 64K.
- * The current Amiga NetBSD hydra driver is hardwired for 16K.
- * It seems that the RAM should be accessed as words or longwords only.
- *
- */
-
-/* adapted for Linux by Topi Kanerva 03/29/95
- with original author's permission */
-
-#define HYDRA_NIC_BASE 0xffe1
-
-/* Page0 registers */
-
-#define NIC_CR 0 /* Command register */
-#define NIC_PSTART (1*2) /* Page start (write) */
-#define NIC_PSTOP (2*2) /* Page stop (write) */
-#define NIC_BNDRY (3*2) /* Boundary pointer */
-#define NIC_TSR (4*2) /* Transmit status (read) */
-#define NIC_TPSR (4*2) /* Transmit page start (write) */
-#define NIC_NCR (5*2) /* Number of collisions, read */
-#define NIC_TBCR0 (5*2) /* Transmit byte count low (write) */
-#define NIC_FIFO (6*2) /* FIFO reg. (read) */
-#define NIC_TBCR1 (6*2) /* Transmit byte count high (write) */
-#define NIC_ISR (7*2) /* Interrupt status register */
-#define NIC_RBCR0 (0xa*2) /* Remote byte count low (write) */
-#define NIC_RBCR1 (0xb*2) /* Remote byte count high (write) */
-#define NIC_RSR (0xc*2) /* Receive status (read) */
-#define NIC_RCR (0xc*2) /* Receive config (write) */
-#define NIC_CNTR0 (0xd*2) /* Frame alignment error count (read) */
-#define NIC_TCR (0xd*2) /* Transmit config (write) */
-#define NIC_CNTR1 (0xe*2) /* CRC error counter (read) */
-#define NIC_DCR (0xe*2) /* Data config (write) */
-#define NIC_CNTR2 (0xf*2) /* missed packet counter (read) */
-#define NIC_IMR (0xf*2) /* Interrupt mask reg. (write) */
-
-/* Page1 registers */
-
-#define NIC_PAR0 (1*2) /* Physical address */
-#define NIC_PAR1 (2*2)
-#define NIC_PAR2 (3*2)
-#define NIC_PAR3 (4*2)
-#define NIC_PAR4 (5*2)
-#define NIC_PAR5 (6*2)
-#define NIC_CURR (7*2) /* Current RX ring-buffer page */
-#define NIC_MAR0 (8*2) /* Multicast address */
-#define NIC_MAR1 (9*2)
-#define NIC_MAR2 (0xa*2)
-#define NIC_MAR3 (0xb*2)
-#define NIC_MAR4 (0xc*2)
-#define NIC_MAR5 (0xd*2)
-#define NIC_MAR6 (0xe*2)
-#define NIC_MAR7 (0xf*2)
-
-/* Command register definitions */
-
-#define CR_STOP 0x01 /* Stop -- software reset command */
-#define CR_START 0x02 /* Start */
-#define CR_TXP 0x04 /* Transmit packet */
-
-#define CR_RD0 0x08 /* Remote DMA cmd */
-#define CR_RD1 0x10
-#define CR_RD2 0x20
-
-#define CR_NODMA CR_RD2
-
-#define CR_PS0 0x40 /* Page select */
-#define CR_PS1 0x80
-
-#define CR_PAGE0 0
-#define CR_PAGE1 CR_PS0
-#define CR_PAGE2 CR_PS1
-
-/* Interrupt status reg. definitions */
-
-#define ISR_PRX 0x01 /* Packet received without errors */
-#define ISR_PTX 0x02 /* Packet transmitted without errors */
-#define ISR_RXE 0x04 /* Receive error */
-#define ISR_TXE 0x08 /* Transmit error */
-#define ISR_OVW 0x10 /* Ring buffer overrun */
-#define ISR_CNT 0x20 /* Counter overflow */
-#define ISR_RDC 0x40 /* Remote DMA compile */
-#define ISR_RST 0x80 /* Reset status */
-
-/* Data config reg. definitions */
-
-#define DCR_WTS 0x01 /* Word transfer select */
-#define DCR_BOS 0x02 /* Byte order select */
-#define DCR_LAS 0x04 /* Long address select */
-#define DCR_LS 0x08 /* Loopback select */
-#define DCR_AR 0x10 /* Auto-init remote */
-#define DCR_FT0 0x20 /* FIFO threshold select */
-#define DCR_FT1 0x40
-
-/* Transmit config reg. definitions */
-
-#define TCR_CRC 0x01 /* Inhibit CRC */
-#define TCR_LB0 0x02 /* Loopback control */
-#define TCR_LB1 0x04
-#define TCR_ATD 0x08 /* Auto transmit disable */
-#define TCR_OFST 0x10 /* Collision offset enable */
-
-/* Transmit status reg. definitions */
-
-#define TSR_PTX 0x01 /* Packet transmitted */
-#define TSR_COL 0x04 /* Transmit collided */
-#define TSR_ABT 0x08 /* Transmit aborted */
-#define TSR_CRS 0x10 /* Carrier sense lost */
-#define TSR_FU 0x20 /* FIFO underrun */
-#define TSR_CDH 0x40 /* CD Heartbeat */
-#define TSR_OWC 0x80 /* Out of Window Collision */
-
-/* Receiver config register definitions */
-
-#define RCR_SEP 0x01 /* Save errored packets */
-#define RCR_AR 0x02 /* Accept runt packets */
-#define RCR_AB 0x04 /* Accept broadcast */
-#define RCR_AM 0x08 /* Accept multicast */
-#define RCR_PRO 0x10 /* Promiscuous mode */
-#define RCR_MON 0x20 /* Monitor mode */
-
-/* Receiver status register definitions */
-
-#define RSR_PRX 0x01 /* Packet received without error */
-#define RSR_CRC 0x02 /* CRC error */
-#define RSR_FAE 0x04 /* Frame alignment error */
-#define RSR_FO 0x08 /* FIFO overrun */
-#define RSR_MPA 0x10 /* Missed packet */
-#define RSR_PHY 0x20 /* Physical address */
-#define RSR_DIS 0x40 /* Received disabled */
-#define RSR_DFR 0x80 /* Deferring (jabber) */
-
-/* Hydra System card address PROM offset */
-
-#define HYDRA_ADDRPROM 0xffc0
-
-
self->needspatch = (ret < 0);
if (ret < 0) {
printk("patch_device failed\n");
- goto err_out_4;
+ goto err_out_5;
}
/* replace IrDA class descriptor with what patched device is now reporting */
irda_desc = irda_usb_find_class_desc (self->usbintf);
if (irda_desc == NULL) {
ret = -ENODEV;
- goto err_out_4;
+ goto err_out_5;
}
if (self->irda_desc)
kfree (self->irda_desc);
return 0;
+err_out_5:
+ unregister_netdev(self->netdev);
err_out_4:
kfree(self->speed_buff);
err_out_3:
#ifdef CONFIG_PCI
static int __init preconfigure_smsc_chip(struct smsc_ircc_subsystem_configuration *conf);
static int __init preconfigure_through_82801(struct pci_dev *dev, struct smsc_ircc_subsystem_configuration *conf);
+static void __init preconfigure_ali_port(struct pci_dev *dev,
+ unsigned short port);
static int __init preconfigure_through_ali(struct pci_dev *dev, struct smsc_ircc_subsystem_configuration *conf);
static int __init smsc_ircc_preconfigure_subsystems(unsigned short ircc_cfg,
unsigned short ircc_fir,
* pre-configuration not properly done by the BIOS (especially laptops)
* This code is based in part on smcinit.c, tosh1800-smcinit.c
* and tosh2450-smcinit.c. The table lists the device entries
- * for ISA bridges with an LPC (Local Peripheral Configurator)
- * that are in turn used to configure the SMSC device with default
- * SIR and FIR I/O ports, DMA and IRQ.
+ * for ISA bridges with an LPC (Low Pin Count) controller which
+ * handles the communication with the SMSC device. After the LPC
+ * controller is initialized through PCI, the SMSC device is initialized
+ * through a dedicated port in the ISA port-mapped I/O area; this latter
+ * area is used to configure the SMSC device with default
+ * SIR and FIR I/O ports, DMA and IRQ. Different vendors have
+ * used different sets of parameters and different control port
+ * addresses, making a subsystem device table necessary.
*/
#ifdef CONFIG_PCI
#define PCIID_VENDOR_INTEL 0x8086
.device = 0x24cc,
.subvendor = 0x103c,
.subdevice = 0x088c,
- .sir_io = 0x02f8, /* Quite certain these are the same for nc8000 as for nc6000 */
+ /* Quite certain these are the same for nc8000 as for nc6000 */
+ .sir_io = 0x02f8,
.fir_io = 0x0130,
- .fir_irq = 0x09,
+ .fir_irq = 0x05,
.fir_dma = 0x03,
.cfg_base = 0x004e,
.preconfigure = preconfigure_through_82801,
.subdevice = 0x0890,
.sir_io = 0x02f8,
.fir_io = 0x0130,
- .fir_irq = 0x09,
+ .fir_irq = 0x05,
.fir_dma = 0x03,
.cfg_base = 0x004e,
.preconfigure = preconfigure_through_82801,
.name = "HP nc6000",
},
{
- .vendor = PCIID_VENDOR_INTEL, /* Intel 82801DB/DBL (ICH4/ICH4-L) LPC Interface Bridge */
+ /* Intel 82801DB/DBL (ICH4/ICH4-L) LPC Interface Bridge */
+ .vendor = PCIID_VENDOR_INTEL,
.device = 0x24c0,
.subvendor = 0x1179,
- .subdevice = 0xffff, /* 0xffff is "any", Not sure, 0x0001 or 0x0002 */
+ .subdevice = 0xffff, /* 0xffff is "any" */
.sir_io = 0x03f8,
.fir_io = 0x0130,
.fir_irq = 0x07,
.fir_dma = 0x01,
.cfg_base = 0x002e,
.preconfigure = preconfigure_through_82801,
- .name = "Toshiba Satellite 2450",
+ .name = "Toshiba laptop with Intel 82801DB/DBL LPC bridge",
},
{
.vendor = PCIID_VENDOR_INTEL, /* Intel 82801CAM ISA bridge */
- .device = 0x248c, /* Some use 24cc? */
+ .device = 0x248c,
.subvendor = 0x1179,
- .subdevice = 0xffff, /* 0xffff is "any", Not sure, 0x0001 or 0x0002 */
+ .subdevice = 0xffff, /* 0xffff is "any" */
.sir_io = 0x03f8,
.fir_io = 0x0130,
.fir_irq = 0x03,
.fir_dma = 0x03,
.cfg_base = 0x002e,
.preconfigure = preconfigure_through_82801,
- .name = "Toshiba Satellite 5100/5200, Tecra 9100",
+ .name = "Toshiba laptop with Intel 82801CAM ISA bridge",
},
{
- .vendor = PCIID_VENDOR_ALI, /* ALi M1533/M1535 PCI to ISA Bridge [Aladdin IV/V/V+] */
+ /* 82801DBM (ICH4-M) LPC Interface Bridge */
+ .vendor = PCIID_VENDOR_INTEL,
+ .device = 0x24cc,
+ .subvendor = 0x1179,
+ .subdevice = 0xffff, /* 0xffff is "any" */
+ .sir_io = 0x03f8,
+ .fir_io = 0x0130,
+ .fir_irq = 0x03,
+ .fir_dma = 0x03,
+ .cfg_base = 0x002e,
+ .preconfigure = preconfigure_through_82801,
+ .name = "Toshiba laptop with Intel 8281DBM LPC bridge",
+ },
+ {
+ /* ALi M1533/M1535 PCI to ISA Bridge [Aladdin IV/V/V+] */
+ .vendor = PCIID_VENDOR_ALI,
.device = 0x1533,
.subvendor = 0x1179,
- .subdevice = 0xffff, /* 0xffff is "any", Not sure, 0x0001 or 0x0002 */
+ .subdevice = 0xffff, /* 0xffff is "any" */
.sir_io = 0x02e8,
.fir_io = 0x02f8,
.fir_irq = 0x07,
.fir_dma = 0x03,
.cfg_base = 0x002e,
.preconfigure = preconfigure_through_ali,
- .name = "Toshiba Satellite 1800",
+ .name = "Toshiba laptop with ALi ISA bridge",
},
{ } // Terminator
};
/*
- * This sets up the basic SMSC parameters (FIR port, SIR port, FIR DMA, FIR IRQ)
+ * This sets up the basic SMSC parameters
+ * (FIR port, SIR port, FIR DMA, FIR IRQ)
* through the chip configuration port.
*/
-static int __init preconfigure_smsc_chip(struct smsc_ircc_subsystem_configuration *conf)
+static int __init preconfigure_smsc_chip(struct
+ smsc_ircc_subsystem_configuration
+ *conf)
{
unsigned short iobase = conf->cfg_base;
unsigned char tmpbyte;
outb(LPC47N227_CFGACCESSKEY, iobase); // enter configuration state
outb(SMSCSIOFLAT_DEVICEID_REG, iobase); // set for device ID
tmpbyte = inb(iobase +1); // Read device ID
- IRDA_DEBUG(0, "Detected Chip id: 0x%02x, setting up registers...\n",tmpbyte);
+ IRDA_DEBUG(0,
+ "Detected Chip id: 0x%02x, setting up registers...\n",
+ tmpbyte);
/* Disable UART1 and set up SIR I/O port */
outb(0x24, iobase); // select CR24 - UART1 base addr
tmpbyte = inb(iobase + 1);
if (tmpbyte != (conf->sir_io >> 2) ) {
IRDA_WARNING("ERROR: could not configure SIR ioport.\n");
+ IRDA_WARNING("Try to supply ircc_cfg argument.\n");
return -ENXIO;
}
outb(SMSCSIOFLAT_UARTMODE0C_REG, iobase); // CR0C - UART mode
tmpbyte = inb(iobase + 1);
- tmpbyte &= ~SMSCSIOFLAT_UART2MODE_MASK | SMSCSIOFLAT_UART2MODE_VAL_IRDA;
+ tmpbyte &= ~SMSCSIOFLAT_UART2MODE_MASK |
+ SMSCSIOFLAT_UART2MODE_VAL_IRDA;
outb(tmpbyte, iobase + 1); // enable IrDA (HPSIR) mode, high speed
outb(LPC47N227_APMBOOTDRIVE_REG, iobase); // CR07 - Auto Pwr Mgt/boot drive sel
return 0;
}
-/* 82801CAM registers */
+/* 82801CAM generic registers */
#define VID 0x00
#define DID 0x02
-#define PIRQA_ROUT 0x60
+#define PIRQ_A_D_ROUT 0x60
+#define SIRQ_CNTL 0x64
+#define PIRQ_E_H_ROUT 0x68
#define PCI_DMA_C 0x90
+/* LPC-specific registers */
#define COM_DEC 0xe0
+#define GEN1_DEC 0xe4
#define LPC_EN 0xe6
#define GEN2_DEC 0xec
/*
- * Sets up the I/O range using the 82801CAM ISA bridge, 82801DBM LPC bridge or
- * Intel 82801DB/DBL (ICH4/ICH4-L) LPC Interface Bridge. They all work the same way!
+ * Sets up the I/O range using the 82801CAM ISA bridge, 82801DBM LPC bridge
+ * or Intel 82801DB/DBL (ICH4/ICH4-L) LPC Interface Bridge.
+ * They all work the same way!
*/
static int __init preconfigure_through_82801(struct pci_dev *dev,
- struct smsc_ircc_subsystem_configuration *conf)
+ struct
+ smsc_ircc_subsystem_configuration
+ *conf)
{
unsigned short tmpword;
- int ret;
+ unsigned char tmpbyte;
- IRDA_MESSAGE("Setting up the SMSC device via the 82801 controller.\n");
- pci_write_config_byte(dev, COM_DEC, 0x10);
+ IRDA_MESSAGE("Setting up Intel 82801 controller and SMSC device\n");
+ /*
+ * Select the range for the COMA COM port (SIR)
+ * Register COM_DEC:
+ * Bit 7: reserved
+ * Bit 6-4, COMB decode range
+ * Bit 3: reserved
+ * Bit 2-0, COMA decode range
+ *
+ * Decode ranges:
+ * 000 = 0x3f8-0x3ff (COM1)
+ * 001 = 0x2f8-0x2ff (COM2)
+ * 010 = 0x220-0x227
+ * 011 = 0x228-0x22f
+ * 100 = 0x238-0x23f
+ * 101 = 0x2e8-0x2ef (COM4)
+ * 110 = 0x338-0x33f
+ * 111 = 0x3e8-0x3ef (COM3)
+ */
+ pci_read_config_byte(dev, COM_DEC, &tmpbyte);
+ tmpbyte &= 0xf8; /* mask COMA bits */
+ switch(conf->sir_io) {
+ case 0x3f8:
+ tmpbyte |= 0x00;
+ break;
+ case 0x2f8:
+ tmpbyte |= 0x01;
+ break;
+ case 0x220:
+ tmpbyte |= 0x02;
+ break;
+ case 0x228:
+ tmpbyte |= 0x03;
+ break;
+ case 0x238:
+ tmpbyte |= 0x04;
+ break;
+ case 0x2e8:
+ tmpbyte |= 0x05;
+ break;
+ case 0x338:
+ tmpbyte |= 0x06;
+ break;
+ case 0x3e8:
+ tmpbyte |= 0x07;
+ break;
+ default:
+ tmpbyte |= 0x01; /* COM2 default */
+ }
+ IRDA_DEBUG(1, "COM_DEC (write): 0x%02x\n", tmpbyte);
+ pci_write_config_byte(dev, COM_DEC, tmpbyte);
- /* Enable LPC */
- pci_read_config_word(dev, LPC_EN, &tmpword); /* LPC_EN register */
- tmpword &= 0xfffd; /* mask bit 1 */
- tmpword |= 0x0001; /* set bit 0 : COMA addr range enable */
+ /* Enable Low Pin Count interface */
+ pci_read_config_word(dev, LPC_EN, &tmpword);
+ /* These bits seem to be set up at all times;
+ * just make sure they are set properly.
+ */
+ switch(conf->cfg_base) {
+ case 0x04e:
+ tmpword |= 0x2000;
+ break;
+ case 0x02e:
+ tmpword |= 0x1000;
+ break;
+ case 0x062:
+ tmpword |= 0x0800;
+ break;
+ case 0x060:
+ tmpword |= 0x0400;
+ break;
+ default:
+ IRDA_WARNING("Uncommon I/O base address: 0x%04x\n",
+ conf->cfg_base);
+ break;
+ }
+ tmpword &= 0xfffd; /* disable LPC COMB */
+ tmpword |= 0x0001; /* set bit 0 : enable LPC COMA addr range (GEN2) */
+ IRDA_DEBUG(1, "LPC_EN (write): 0x%04x\n", tmpword);
pci_write_config_word(dev, LPC_EN, tmpword);
- /* Setup DMA */
- pci_write_config_word(dev, PCI_DMA_C, 0xc0c0); /* LPC I/F DMA on, channel 3 -- rtm (?? PCI DMA ?) */
- pci_write_config_word(dev, GEN2_DEC, 0x131); /* LPC I/F 2nd decode range */
+ /*
+ * Configure LPC DMA channel
+ * PCI_DMA_C bits:
+ * Bit 15-14: DMA channel 7 select
+ * Bit 13-12: DMA channel 6 select
+ * Bit 11-10: DMA channel 5 select
+ * Bit 9-8: Reserved
+ * Bit 7-6: DMA channel 3 select
+ * Bit 5-4: DMA channel 2 select
+ * Bit 3-2: DMA channel 1 select
+ * Bit 1-0: DMA channel 0 select
+ * 00 = Reserved value
+ * 01 = PC/PCI DMA
+ * 10 = Reserved value
+ * 11 = LPC I/F DMA
+ */
+ pci_read_config_word(dev, PCI_DMA_C, &tmpword);
+ switch(conf->fir_dma) {
+ case 0x07:
+ tmpword |= 0xc000;
+ break;
+ case 0x06:
+ tmpword |= 0x3000;
+ break;
+ case 0x05:
+ tmpword |= 0x0c00;
+ break;
+ case 0x03:
+ tmpword |= 0x00c0;
+ break;
+ case 0x02:
+ tmpword |= 0x0030;
+ break;
+ case 0x01:
+ tmpword |= 0x000c;
+ break;
+ case 0x00:
+ tmpword |= 0x0003;
+ break;
+ default:
+ break; /* do not change settings */
+ }
+ IRDA_DEBUG(1, "PCI_DMA_C (write): 0x%04x\n", tmpword);
+ pci_write_config_word(dev, PCI_DMA_C, tmpword);
+
+ /*
+ * GEN2_DEC bits:
+ * Bit 15-4: Generic I/O range
+ * Bit 3-1: reserved (read as 0)
+ * Bit 0: enable GEN2 range on LPC I/F
+ */
+ tmpword = conf->fir_io & 0xfff8;
+ tmpword |= 0x0001;
+ IRDA_DEBUG(1, "GEN2_DEC (write): 0x%04x\n", tmpword);
+ pci_write_config_word(dev, GEN2_DEC, tmpword);
/* Pre-configure chip */
- ret = preconfigure_smsc_chip(conf);
+ return preconfigure_smsc_chip(conf);
+}
- /* Disable LPC */
- pci_read_config_word(dev, LPC_EN, &tmpword); /* LPC_EN register */
- tmpword &= 0xfffc; /* mask bit 1 and bit 0, COMA addr range disable */
- pci_write_config_word(dev, LPC_EN, tmpword);
- return ret;
+/*
+ * Pre-configure a certain port on the ALi 1533 bridge.
+ * This is based on reverse-engineering since ALi does not
+ * provide any data sheet for the 1533 chip.
+ */
+static void __init preconfigure_ali_port(struct pci_dev *dev,
+ unsigned short port)
+{
+ unsigned char reg;
+ /* These bits obviously control the different ports */
+ unsigned char mask;
+ unsigned char tmpbyte;
+
+ switch(port) {
+ case 0x0130:
+ case 0x0178:
+ reg = 0xb0;
+ mask = 0x80;
+ break;
+ case 0x03f8:
+ reg = 0xb4;
+ mask = 0x80;
+ break;
+ case 0x02f8:
+ reg = 0xb4;
+ mask = 0x30;
+ break;
+ case 0x02e8:
+ reg = 0xb4;
+ mask = 0x08;
+ break;
+ default:
+ IRDA_ERROR("Failed to configure unsupported port on ALi 1533 bridge: 0x%04x\n", port);
+ return;
+ }
+
+ pci_read_config_byte(dev, reg, &tmpbyte);
+ /* Turn on the right bits */
+ tmpbyte |= mask;
+ pci_write_config_byte(dev, reg, tmpbyte);
+ IRDA_MESSAGE("Activated ALi 1533 ISA bridge port 0x%04x.\n", port);
+ return;
}
static int __init preconfigure_through_ali(struct pci_dev *dev,
- struct smsc_ircc_subsystem_configuration *conf)
+ struct
+ smsc_ircc_subsystem_configuration
+ *conf)
{
- /* TODO: put in ALi 1533 configuration here. */
- IRDA_MESSAGE("SORRY: %s has an unsupported bridge controller (ALi): not pre-configured.\n", conf->name);
- return -ENODEV;
+ /* Configure the two ports on the ALi 1533 */
+ preconfigure_ali_port(dev, conf->sir_io);
+ preconfigure_ali_port(dev, conf->fir_io);
+
+ /* Pre-configure chip */
+ return preconfigure_smsc_chip(conf);
}
static int __init smsc_ircc_preconfigure_subsystems(unsigned short ircc_cfg,
struct smsc_ircc_subsystem_configuration *conf;
/*
- * Cache the subsystem vendor/device: some manufacturers fail to set
- * this for all components, so we save it in case there is just
- * 0x0000 0x0000 on the device we want to check.
+ * Cache the subsystem vendor/device:
+ * some manufacturers fail to set this for all components,
+ * so we save it in case there is just 0x0000 0x0000 on the
+ * device we want to check.
*/
if (dev->subsystem_vendor != 0x0000U) {
ss_vendor = dev->subsystem_vendor;
for( ; conf->subvendor; conf++) {
if(conf->vendor == dev->vendor &&
conf->device == dev->device &&
- conf->subvendor == ss_vendor && /* Sometimes these are cached values */
- (conf->subdevice == ss_device || conf->subdevice == 0xffff)) {
- struct smsc_ircc_subsystem_configuration tmpconf;
+ conf->subvendor == ss_vendor &&
+ /* Sometimes these are cached values */
+ (conf->subdevice == ss_device ||
+ conf->subdevice == 0xffff)) {
+ struct smsc_ircc_subsystem_configuration
+ tmpconf;
- memcpy(&tmpconf, conf, sizeof(struct smsc_ircc_subsystem_configuration));
+ memcpy(&tmpconf, conf,
+ sizeof(struct smsc_ircc_subsystem_configuration));
- /* Override the default values with anything passed in as parameter */
+ /*
+ * Override the default values with anything
+ * passed in as parameter
+ */
if (ircc_cfg != 0)
tmpconf.cfg_base = ircc_cfg;
if (ircc_fir != 0)
if((err = pci_enable_device(pdev)))
return err;
- if(!(err = pci_set_dma_mask(pdev, DMA_64BIT_MASK))) {
+ if(!(err = pci_set_dma_mask(pdev, DMA_64BIT_MASK)) &&
+ !(err = pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK))) {
pci_using_dac = 1;
} else {
- if((err = pci_set_dma_mask(pdev, DMA_32BIT_MASK))) {
+ if((err = pci_set_dma_mask(pdev, DMA_32BIT_MASK)) ||
+ (err = pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK))) {
IXGB_ERR("No usable DMA configuration, aborting\n");
- return err;
+ goto err_dma_mask;
}
pci_using_dac = 0;
}
if((err = pci_request_regions(pdev, ixgb_driver_name)))
- return err;
+ goto err_request_regions;
pci_set_master(pdev);
free_netdev(netdev);
err_alloc_etherdev:
pci_release_regions(pdev);
+err_request_regions:
+err_dma_mask:
+ pci_disable_device(pdev);
return err;
}
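
The new err_request_regions and err_dma_mask labels above give ixgb_probe a
single unwind path, so pci_disable_device() now runs for every failure after
pci_enable_device() has succeeded. A minimal sketch of this goto-unwind
pattern, with illustrative label and variable names (only pci_enable_device(),
pci_request_regions() and pci_disable_device() are assumed, all standard PCI
core calls):

	err = pci_enable_device(pdev);
	if (err)
		return err;			/* nothing to undo yet */

	err = pci_request_regions(pdev, drv_name);
	if (err)
		goto err_disable;		/* undo pci_enable_device() */

	/* ... later failures also branch to labels that fall
	 * through to err_disable ...
	 */
	return 0;

err_disable:
	pci_disable_device(pdev);
	return err;
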
{
struct mv643xx_private *mp = netdev_priv(dev);
- netif_device_detach(dev);
+ if (!netif_running(dev))
+ return;
+
+ netif_stop_queue(dev);
+
eth_port_reset(mp->port_num);
eth_port_start(dev);
- netif_device_attach(dev);
+
+ if (mp->tx_ring_size - mp->tx_desc_count >= MAX_DESCS_PER_SKB)
+ netif_wake_queue(dev);
}
/**
#else
if (eth_int_cause & ETH_INT_CAUSE_RX)
mv643xx_eth_receive_queue(dev, INT_MAX);
+#endif
if (eth_int_cause_ext & ETH_INT_CAUSE_TX)
mv643xx_eth_free_completed_tx_descs(dev);
-#endif
/*
* If no real interrupt occured, exit.
BUG_ON(netif_queue_stopped(dev));
BUG_ON(skb == NULL);
- BUG_ON(mp->tx_ring_size - mp->tx_desc_count < MAX_DESCS_PER_SKB);
+
+ if (mp->tx_ring_size - mp->tx_desc_count < MAX_DESCS_PER_SKB) {
+ printk(KERN_ERR "%s: transmit with queue full\n", dev->name);
+ netif_stop_queue(dev);
+ return 1;
+ }
if (has_tiny_unaligned_frags(skb)) {
if ((skb_linearize(skb, GFP_ATOMIC) != 0)) {
#define NATSEMI_RX_LIMIT 2046 /* maximum supported by hardware */
/* These identify the driver base version and may not be removed. */
-static char version[] __devinitdata =
+static const char version[] __devinitdata =
KERN_INFO DRV_NAME " dp8381x driver, version "
DRV_VERSION ", " DRV_RELDATE "\n"
KERN_INFO " originally by Donald Becker <becker@scyld.com>\n"
static void ei_rx_overrun(struct net_device *dev)
{
- axnet_dev_t *info = (axnet_dev_t *)dev;
+ axnet_dev_t *info = PRIV(dev);
long e8390_base = dev->base_addr;
unsigned char was_txing, must_resend = 0;
struct ei_device *ei_local = (struct ei_device *) netdev_priv(dev);
{ PCI_DEVICE(PCI_VENDOR_ID_SYSKONNECT, PCI_DEVICE_ID_SYSKONNECT_GE) },
{ PCI_DEVICE(PCI_VENDOR_ID_SYSKONNECT, PCI_DEVICE_ID_SYSKONNECT_YU) },
{ PCI_DEVICE(PCI_VENDOR_ID_DLINK, PCI_DEVICE_ID_DLINK_DGE510T), },
+ { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4b00) },
+ { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4b01) },
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4320) },
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x5005) }, /* Belkin */
{ PCI_DEVICE(PCI_VENDOR_ID_CNET, PCI_DEVICE_ID_CNET_GIGACARD) },
static const struct pci_device_id sky2_id_table[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_SYSKONNECT, 0x9000) },
{ PCI_DEVICE(PCI_VENDOR_ID_SYSKONNECT, 0x9E00) },
- { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4b00) },
- { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4b01) },
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4340) },
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4341) },
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4342) },
reg = gma_read16(hw, port, GM_PHY_ADDR);
gma_write16(hw, port, GM_PHY_ADDR, reg | GM_PAR_MIB_CLR);
- for (i = 0; i < GM_MIB_CNT_SIZE; i++)
- gma_read16(hw, port, GM_MIB_CNT_BASE + 8 * i);
+ for (i = GM_MIB_CNT_BASE; i <= GM_MIB_CNT_END; i += 4)
+ gma_read16(hw, port, i);
gma_write16(hw, port, GM_PHY_ADDR, reg);
/* transmit control */
GM_PHY_ADDR = 0x0088, /* 16 bit r/w GPHY Address Register */
/* MIB Counters */
GM_MIB_CNT_BASE = 0x0100, /* Base Address of MIB Counters */
- GM_MIB_CNT_SIZE = 256,
+ GM_MIB_CNT_END = 0x025C, /* Last MIB counter */
};
#define SMC_insb(a, r, p, l) readsb((a) + (r), p, (l))
#define SMC_outsb(a, r, p, l) writesb((a) + (r), p, (l))
+#elif defined(CONFIG_MACH_LOGICPD_PXA270)
+
+#define SMC_CAN_USE_8BIT 0
+#define SMC_CAN_USE_16BIT 1
+#define SMC_CAN_USE_32BIT 0
+#define SMC_IO_SHIFT 0
+#define SMC_NOWAIT 1
+#define SMC_USE_PXA_DMA 1
+
+#define SMC_inb(a, r) readb((a) + (r))
+#define SMC_inw(a, r) readw((a) + (r))
+#define SMC_inl(a, r) readl((a) + (r))
+#define SMC_outb(v, a, r) writeb(v, (a) + (r))
+#define SMC_outw(v, a, r) writew(v, (a) + (r))
+#define SMC_outl(v, a, r) writel(v, (a) + (r))
+#define SMC_insw(a, r, p, l) readsw((a) + (r), p, l)
+#define SMC_outsw(a, r, p, l) writesw((a) + (r), p, l)
+
#elif defined(CONFIG_ARCH_INNOKOM) || \
defined(CONFIG_MACH_MAINSTONE) || \
defined(CONFIG_ARCH_PXA_IDP) || \
/* These identify the driver base version and may not be removed. */
-static char version[] __devinitdata =
+static const char version[] __devinitdata =
KERN_INFO "starfire.c:v1.03 7/26/2000 Written by Donald Becker <becker@scyld.com>\n"
KERN_INFO " (unofficial 2.2/2.4 kernel port, version " DRV_VERSION ", " DRV_RELDATE ")\n";
#include "typhoon.h"
#include "typhoon-firmware.h"
-static char version[] __devinitdata =
+static const char version[] __devinitdata =
"typhoon.c: version " DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n";
MODULE_AUTHOR("David Dillow <dave@thedillows.org>");
struct sk_buff *tx_skbuff[TX_RING_SIZE];
dma_addr_t tx_skbuff_dma[TX_RING_SIZE];
- /* Tx bounce buffers */
+ /* Tx bounce buffers (Rhine-I only) */
unsigned char *tx_buf[TX_RING_SIZE];
unsigned char *tx_bufs;
dma_addr_t tx_bufs_dma;
rp->tx_ring[i].desc_length = cpu_to_le32(TXDESC);
next += sizeof(struct tx_desc);
rp->tx_ring[i].next_desc = cpu_to_le32(next);
- rp->tx_buf[i] = &rp->tx_bufs[i * PKT_BUF_SZ];
+ if (rp->quirks & rqRhineI)
+ rp->tx_buf[i] = &rp->tx_bufs[i * PKT_BUF_SZ];
}
rp->tx_ring[i-1].next_desc = cpu_to_le32(rp->tx_ring_dma);
}
/* Called after status of force_media possibly changed */
-void rhine_set_carrier(struct mii_if_info *mii)
+static void rhine_set_carrier(struct mii_if_info *mii)
{
if (mii->force_media) {
/* autoneg is off: Link is always assumed to be up */
If unsure, say N.
-config VENDOR_SANGOMA
- tristate "Sangoma WANPIPE(tm) multiprotocol cards"
- depends on WAN_ROUTER_DRIVERS && WAN_ROUTER && (PCI || ISA) && BROKEN
- ---help---
- Driver for S514-PCI/ISA Synchronous Data Link Adapters (SDLA).
-
- WANPIPE from Sangoma Technologies Inc. <http://www.sangoma.com/>
- is a family of intelligent multiprotocol WAN adapters with data
- transfer rates up to 4Mbps. Cards support:
-
- - X.25, Frame Relay, PPP, Cisco HDLC protocols.
-
- - API for protocols like HDLC (LAPB), HDLC Streaming, X.25,
- Frame Relay and BiSync.
-
- - Ethernet Bridging over Frame Relay protocol.
-
- - MULTILINK PPP
-
- - Async PPP (Modem Dialup)
-
- The next questions will ask you about the protocols you want
- the driver to support.
-
- If you have one or more of these cards, say M to this option;
- and read <file:Documentation/networking/wan-router.txt>.
-
- To compile this driver as a module, choose M here: the
- module will be called wanpipe.
-
-config WANPIPE_CHDLC
- bool "WANPIPE Cisco HDLC support"
- depends on VENDOR_SANGOMA
- ---help---
- Connect a WANPIPE card to a leased line using the Cisco HDLC.
-
- - Supports Dual Port Cisco HDLC on the S514-PCI/S508-ISA cards
- which allows user to build applications using the HDLC streaming API.
-
- - CHDLC Streaming MULTILINK PPP that can bind multiple WANPIPE T1
- cards into a single logical channel.
-
- Say Y and the Cisco HDLC support, HDLC streaming API and
- MULTILINK PPP will be included in the driver.
-
-config WANPIPE_FR
- bool "WANPIPE Frame Relay support"
- depends on VENDOR_SANGOMA
- help
- Connect a WANPIPE card to a Frame Relay network, or use Frame Relay
- API to develop custom applications.
-
- Contains the Ethernet Bridging over Frame Relay feature, where
- a WANPIPE frame relay link can be directly connected to the Linux
- kernel bridge. The Frame Relay option is supported on S514-PCI
- and S508-ISA cards.
-
- Say Y and the Frame Relay support will be included in the driver.
-
-config WANPIPE_X25
- bool "WANPIPE X.25 support"
- depends on VENDOR_SANGOMA
- help
- Connect a WANPIPE card to an X.25 network.
-
- Includes the X.25 API support for custom applications over the
- X.25 protocol. The X.25 option is supported on S514-PCI and
- S508-ISA cards.
-
- Say Y and the X.25 support will be included in the driver.
-
-config WANPIPE_PPP
- bool "WANPIPE PPP support"
- depends on VENDOR_SANGOMA
- help
- Connect a WANPIPE card to a leased line using Point-to-Point
- Protocol (PPP).
-
- The PPP option is supported on S514-PCI/S508-ISA cards.
-
- Say Y and the PPP support will be included in the driver.
-
-config WANPIPE_MULTPPP
- bool "WANPIPE Multi-Port PPP support"
- depends on VENDOR_SANGOMA
- help
- Connect a WANPIPE card to a leased line using Point-to-Point
- Protocol (PPP).
-
- Uses in-kernel SyncPPP protocol over the Sangoma HDLC Streaming
- adapter. In this case each Sangoma adapter port can support an
- independent PPP connection. For example, a single Quad-Port PCI
- adapter can support up to four independent PPP links. The PPP
- option is supported on S514-PCI/S508-ISA cards.
-
- Say Y and the Multi-Port PPP support will be included in the driver.
-
config CYCLADES_SYNC
tristate "Cyclom 2X(tm) cards (EXPERIMENTAL)"
depends on WAN_ROUTER_DRIVERS && (PCI || ISA)
# Rewritten to use lists instead of if-statements.
#
-wanpipe-y := sdlamain.o sdla_ft1.o
-wanpipe-$(CONFIG_WANPIPE_X25) += sdla_x25.o
-wanpipe-$(CONFIG_WANPIPE_FR) += sdla_fr.o
-wanpipe-$(CONFIG_WANPIPE_CHDLC) += sdla_chdlc.o
-wanpipe-$(CONFIG_WANPIPE_PPP) += sdla_ppp.o
-wanpipe-$(CONFIG_WANPIPE_MULTPPP) += wanpipe_multppp.o
-wanpipe-objs := $(wanpipe-y)
-
cyclomx-y := cycx_main.o
cyclomx-$(CONFIG_CYCLOMX_X25) += cycx_x25.o
cyclomx-objs := $(cyclomx-y)
obj-$(CONFIG_DLCI) += dlci.o
obj-$(CONFIG_SDLA) += sdla.o
-ifeq ($(CONFIG_WANPIPE_MULTPPP),y)
- obj-$(CONFIG_VENDOR_SANGOMA) += sdladrv.o wanpipe.o syncppp.o
-else
- obj-$(CONFIG_VENDOR_SANGOMA) += sdladrv.o wanpipe.o
-endif
obj-$(CONFIG_CYCLADES_SYNC) += cycx_drv.o cyclomx.o
obj-$(CONFIG_LAPBETHER) += lapbether.o
obj-$(CONFIG_SBNI) += sbni.o
static int pci_clock_freq = 33000000;
#define CLOCK_BASE pci_clock_freq
-#define PCI_VENDOR_ID_GORAMO 0x10B5 /* uses PLX:9050 ID - this card */
-#define PCI_DEVICE_ID_PCI200SYN 0x9050 /* doesn't have its own ID */
-
-
/*
* PLX PCI9052 local configuration and shared runtime registers.
* This structure can be used to access 9052 registers (memory mapped).
int i;
card_t *card = pci_get_drvdata(pdev);
- for(i = 0; i < 2; i++)
+ for (i = 0; i < 2; i++)
if (card->ports[i].card) {
struct net_device *dev = port_to_dev(&card->ports[i]);
unregister_hdlc_device(dev);
" %u RX packets rings\n", ramsize / 1024, ramphys,
pdev->irq, card->tx_ring_buffers, card->rx_ring_buffers);
+ if (pdev->subsystem_device == PCI_DEVICE_ID_PLX_9050) {
+ printk(KERN_ERR "Detected PCI200SYN card with old "
+ "configuration data.\n");
+ printk(KERN_ERR "See <http://www.kernel.org/pub/"
+ "linux/utils/net/hdlc/pci200syn/> for update.\n");
+ printk(KERN_ERR "The card will stop working with"
+ " future versions of Linux if not updated.\n");
+ }
+
if (card->tx_ring_buffers < 1) {
printk(KERN_ERR "pci200syn: RAM test failed\n");
pci200_pci_remove_one(pdev);
writew(readw(p) | 0x0040, p);
/* Allocate IRQ */
- if(request_irq(pdev->irq, sca_intr, SA_SHIRQ, devname, card)) {
+ if (request_irq(pdev->irq, sca_intr, SA_SHIRQ, devname, card)) {
printk(KERN_WARNING "pci200syn: could not allocate IRQ%d.\n",
pdev->irq);
pci200_pci_remove_one(pdev);
sca_init(card, 0);
- for(i = 0; i < 2; i++) {
+ for (i = 0; i < 2; i++) {
port_t *port = &card->ports[i];
struct net_device *dev = port_to_dev(port);
hdlc_device *hdlc = dev_to_hdlc(dev);
hdlc->xmit = sca_xmit;
port->settings.clock_type = CLOCK_EXT;
port->card = card;
- if(register_hdlc_device(dev)) {
+ if (register_hdlc_device(dev)) {
printk(KERN_ERR "pci200syn: unable to register hdlc "
"device\n");
port->card = NULL;
static struct pci_device_id pci200_pci_tbl[] __devinitdata = {
- { PCI_VENDOR_ID_GORAMO, PCI_DEVICE_ID_PCI200SYN, PCI_ANY_ID,
- PCI_ANY_ID, 0, 0, 0 },
+ { PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_9050, PCI_VENDOR_ID_PLX,
+ PCI_DEVICE_ID_PLX_9050, 0, 0, 0 },
+ { PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_9050, PCI_VENDOR_ID_PLX,
+ PCI_DEVICE_ID_PLX_PCI200SYN, 0, 0, 0 },
{ 0, }
};
+++ /dev/null
-/*****************************************************************************
-* sdla_chdlc.c WANPIPE(tm) Multiprotocol WAN Link Driver. Cisco HDLC module.
-*
-* Authors: Nenad Corbic <ncorbic@sangoma.com>
-* Gideon Hack
-*
-* Copyright: (c) 1995-2001 Sangoma Technologies Inc.
-*
-* This program is free software; you can redistribute it and/or
-* modify it under the terms of the GNU General Public License
-* as published by the Free Software Foundation; either version
-* 2 of the License, or (at your option) any later version.
-* ============================================================================
-* Feb 28, 2001 Nenad Corbic Updated if_tx_timeout() routine for
-* 2.4.X kernels.
-* Jan 25, 2001 Nenad Corbic Added a TTY Sync serial driver over the
-* HDLC streaming protocol
-* Added a TTY Async serial driver over the
-* Async protocol.
-* Dec 15, 2000 Nenad Corbic Updated for 2.4.X Kernel support
-* Nov 13, 2000 Nenad Corbic Added true interface type encoding option.
-* Tcpdump doesn't support CHDLC inteface
-* types, to fix this "true type" option will set
-* the interface type to RAW IP mode.
-* Nov 07, 2000 Nenad Corbic Added security features for UDP debugging:
-* Deny all and specify allowed requests.
-* Jun 20, 2000 Nenad Corbic Fixed the API IP ERROR bug. Caused by the
-* latest update.
-* May 09, 2000 Nenad Corbic Option to bring down an interface
-* upon disconnect.
-* Mar 23, 2000 Nenad Corbic Improved task queue, bh handling.
-* Mar 16, 2000 Nenad Corbic Fixed the SLARP Dynamic IP addressing.
-* Mar 06, 2000 Nenad Corbic Bug Fix: corrupted mbox recovery.
-* Feb 10, 2000 Gideon Hack Added ASYNC support.
-* Feb 09, 2000 Nenad Corbic Fixed two shutdown bugs in update() and
-* if_stats() functions.
-* Jan 24, 2000 Nenad Corbic Fixed a startup wanpipe state racing,
-* condition between if_open and isr.
-* Jan 10, 2000 Nenad Corbic Added new socket API support.
-* Dev 15, 1999 Nenad Corbic Fixed up header files for 2.0.X kernels
-* Nov 20, 1999 Nenad Corbic Fixed zero length API bug.
-* Sep 30, 1999 Nenad Corbic Fixed dynamic IP and route setup.
-* Sep 23, 1999 Nenad Corbic Added SMP support, fixed tracing
-* Sep 13, 1999 Nenad Corbic Split up Port 0 and 1 into separate devices.
-* Jun 02, 1999 Gideon Hack Added support for the S514 adapter.
-* Oct 30, 1998 Jaspreet Singh Added Support for CHDLC API (HDLC STREAMING).
-* Oct 28, 1998 Jaspreet Singh Added Support for Dual Port CHDLC.
-* Aug 07, 1998 David Fong Initial version.
-*****************************************************************************/
-
-#include <linux/module.h>
-#include <linux/kernel.h> /* printk(), and other useful stuff */
-#include <linux/stddef.h> /* offsetof(), etc. */
-#include <linux/errno.h> /* return codes */
-#include <linux/string.h> /* inline memset(), etc. */
-#include <linux/slab.h> /* kmalloc(), kfree() */
-#include <linux/wanrouter.h> /* WAN router definitions */
-#include <linux/wanpipe.h> /* WANPIPE common user API definitions */
-#include <linux/if_arp.h> /* ARPHRD_* defines */
-
-
-#include <asm/uaccess.h>
-#include <linux/inetdevice.h>
-#include <linux/netdevice.h>
-
-#include <linux/in.h> /* sockaddr_in */
-#include <linux/inet.h>
-#include <linux/if.h>
-#include <asm/byteorder.h> /* htons(), etc. */
-#include <linux/sdlapci.h>
-#include <asm/io.h>
-
-#include <linux/sdla_chdlc.h> /* CHDLC firmware API definitions */
-#include <linux/sdla_asy.h> /* CHDLC (async) API definitions */
-
-#include <linux/if_wanpipe_common.h> /* Socket Driver common area */
-#include <linux/if_wanpipe.h>
-
-/* TTY Includes */
-#include <linux/tty.h>
-#include <linux/tty_flip.h>
-#include <linux/serial.h>
-
-
-/****** Defines & Macros ****************************************************/
-
-/* reasons for enabling the timer interrupt on the adapter */
-#define TMR_INT_ENABLED_UDP 0x01
-#define TMR_INT_ENABLED_UPDATE 0x02
-#define TMR_INT_ENABLED_CONFIG 0x10
-
-#define MAX_IP_ERRORS 10
-
-#define TTY_CHDLC_MAX_MTU 2000
-#define CHDLC_DFLT_DATA_LEN 1500 /* default MTU */
-#define CHDLC_HDR_LEN 1
-
-#define CHDLC_API 0x01
-
-#define PORT(x) (x == 0 ? "PRIMARY" : "SECONDARY" )
-#define MAX_BH_BUFF 10
-
-//#define PRINT_DEBUG
-#ifdef PRINT_DEBUG
-#define dbg_printk(format, a...) printk(format, ## a)
-#else
-#define dbg_printk(format, a...)
-#endif
-
-/******Data Structures*****************************************************/
-
-/* This structure is placed in the private data area of the device structure.
- * The card structure used to occupy the private area but now the following
- * structure will incorporate the card structure along with CHDLC specific data
- */
-
-typedef struct chdlc_private_area
-{
- wanpipe_common_t common;
- sdla_t *card;
- int TracingEnabled; /* For enabling Tracing */
- unsigned long curr_trace_addr; /* Used for Tracing */
- unsigned long start_trace_addr;
- unsigned long end_trace_addr;
- unsigned long base_addr_trace_buffer;
- unsigned long end_addr_trace_buffer;
- unsigned short number_trace_elements;
- unsigned available_buffer_space;
- unsigned long router_start_time;
- unsigned char route_status;
- unsigned char route_removed;
- unsigned long tick_counter; /* For 5s timeout counter */
- unsigned long router_up_time;
- u32 IP_address; /* IP addressing */
- u32 IP_netmask;
- u32 ip_local;
- u32 ip_remote;
- u32 ip_local_tmp;
- u32 ip_remote_tmp;
- u8 ip_error;
- u8 config_chdlc;
- u8 config_chdlc_timeout;
- unsigned char mc; /* Mulitcast support on/off */
- unsigned short udp_pkt_lgth; /* udp packet processing */
- char udp_pkt_src;
- char udp_pkt_data[MAX_LGTH_UDP_MGNT_PKT];
- unsigned short timer_int_enabled;
- char update_comms_stats; /* updating comms stats */
-
- bh_data_t *bh_head; /* Circular buffer for chdlc_bh */
- unsigned long tq_working;
- volatile int bh_write;
- volatile int bh_read;
- atomic_t bh_buff_used;
-
- unsigned char interface_down;
-
- /* Polling work queue entry. Each interface
- * has its own work queue entry, which is used
- * to defer events from the interrupt */
- struct work_struct poll_work;
- struct timer_list poll_delay_timer;
-
- u8 gateway;
- u8 true_if_encoding;
- //FIXME: add driver stats as per frame relay!
-
-} chdlc_private_area_t;
-
-/* Route Status options */
-#define NO_ROUTE 0x00
-#define ADD_ROUTE 0x01
-#define ROUTE_ADDED 0x02
-#define REMOVE_ROUTE 0x03
-
-
-/* variable for keeping track of enabling/disabling FT1 monitor status */
-static int rCount = 0;
-
-/* variable for tracking how many interfaces to open for WANPIPE on the
- two ports */
-
-extern void disable_irq(unsigned int);
-extern void enable_irq(unsigned int);
-
-/****** Function Prototypes *************************************************/
-/* WAN link driver entry points. These are called by the WAN router module. */
-static int update(struct wan_device* wandev);
-static int new_if(struct wan_device* wandev, struct net_device* dev,
- wanif_conf_t* conf);
-
-/* Network device interface */
-static int if_init(struct net_device* dev);
-static int if_open(struct net_device* dev);
-static int if_close(struct net_device* dev);
-static int if_header(struct sk_buff* skb, struct net_device* dev,
- unsigned short type, void* daddr, void* saddr,
- unsigned len);
-
-static int if_rebuild_hdr (struct sk_buff *skb);
-static struct net_device_stats* if_stats(struct net_device* dev);
-
-static int if_send(struct sk_buff* skb, struct net_device* dev);
-
-/* CHDLC Firmware interface functions */
-static int chdlc_configure (sdla_t* card, void* data);
-static int chdlc_comm_enable (sdla_t* card);
-static int chdlc_read_version (sdla_t* card, char* str);
-static int chdlc_set_intr_mode (sdla_t* card, unsigned mode);
-static int chdlc_send (sdla_t* card, void* data, unsigned len);
-static int chdlc_read_comm_err_stats (sdla_t* card);
-static int chdlc_read_op_stats (sdla_t* card);
-static int chdlc_error (sdla_t *card, int err, CHDLC_MAILBOX_STRUCT *mb);
-
-
-static int chdlc_disable_comm_shutdown (sdla_t *card);
-static void if_tx_timeout(struct net_device *dev);
-
-/* Miscellaneous CHDLC Functions */
-static int set_chdlc_config (sdla_t* card);
-static void init_chdlc_tx_rx_buff( sdla_t* card);
-static int process_chdlc_exception(sdla_t *card);
-static int process_global_exception(sdla_t *card);
-static int update_comms_stats(sdla_t* card,
- chdlc_private_area_t* chdlc_priv_area);
-static int configure_ip (sdla_t* card);
-static int unconfigure_ip (sdla_t* card);
-static void process_route(sdla_t *card);
-static void port_set_state (sdla_t *card, int);
-static int config_chdlc (sdla_t *card);
-static void disable_comm (sdla_t *card);
-
-static void trigger_chdlc_poll(struct net_device *dev);
-static void chdlc_poll(struct net_device *dev);
-static void chdlc_poll_delay (unsigned long dev_ptr);
-
-
-/* Miscellaneous asynchronous interface Functions */
-static int set_asy_config (sdla_t* card);
-static int asy_comm_enable (sdla_t* card);
-
-/* Interrupt handlers */
-static void wpc_isr (sdla_t* card);
-static void rx_intr (sdla_t* card);
-static void timer_intr(sdla_t *);
-
-/* Bottom half handlers */
-static void chdlc_work(struct net_device *dev);
-static int chdlc_work_cleanup(struct net_device *dev);
-static int bh_enqueue(struct net_device *dev, struct sk_buff *skb);
-
-/* Miscellaneous functions */
-static int chk_bcast_mcast_addr(sdla_t* card, struct net_device* dev,
- struct sk_buff *skb);
-static int reply_udp( unsigned char *data, unsigned int mbox_len );
-static int intr_test( sdla_t* card);
-static int udp_pkt_type( struct sk_buff *skb , sdla_t* card);
-static int store_udp_mgmt_pkt(char udp_pkt_src, sdla_t* card,
- struct sk_buff *skb, struct net_device* dev,
- chdlc_private_area_t* chdlc_priv_area);
-static int process_udp_mgmt_pkt(sdla_t* card, struct net_device* dev,
- chdlc_private_area_t* chdlc_priv_area);
-static unsigned short calc_checksum (char *, int);
-static void s508_lock (sdla_t *card, unsigned long *smp_flags);
-static void s508_unlock (sdla_t *card, unsigned long *smp_flags);
-
-
-static int Intr_test_counter;
-
-/* TTY Global Definitions */
-
-#define NR_PORTS 4
-#define WAN_TTY_MAJOR 226
-#define WAN_TTY_MINOR 0
-
-#define WAN_CARD(port) (tty_card_map[port])
-#define MIN_PORT 0
-#define MAX_PORT NR_PORTS-1
-
-#define CRC_LENGTH 2
-
-static int wanpipe_tty_init(sdla_t *card);
-static void wanpipe_tty_receive(sdla_t *, unsigned, unsigned int);
-static void wanpipe_tty_trigger_poll(sdla_t *card);
-
-static struct tty_driver serial_driver;
-static int tty_init_cnt=0;
-
-static struct serial_state rs_table[NR_PORTS];
-
-static char tty_driver_mode=WANOPT_TTY_SYNC;
-
-static char *opt_decode[] = {"NONE","CRTSCTS","XONXOFF-RX",
- "CRTSCTS XONXOFF-RX","XONXOFF-TX",
- "CRTSCTS XONXOFF-TX","CRTSCTS XONXOFF"};
-static char *p_decode[] = {"NONE","ODD","EVEN"};
-
-static void* tty_card_map[NR_PORTS] = {NULL,NULL,NULL,NULL};
-
-
-/****** Public Functions ****************************************************/
-
-/*============================================================================
- * Cisco HDLC protocol initialization routine.
- *
- * This routine is called by the main WANPIPE module during setup. At this
- * point adapter is completely initialized and firmware is running.
- * o read firmware version (to make sure it's alive)
- * o configure adapter
- * o initialize protocol-specific fields of the adapter data space.
- *
- * Return: 0 o.k.
- * < 0 failure.
- */
-int wpc_init (sdla_t* card, wandev_conf_t* conf)
-{
- unsigned char port_num;
- int err;
- unsigned long max_permitted_baud = 0;
- SHARED_MEMORY_INFO_STRUCT *flags;
-
- union
- {
- char str[80];
- } u;
- volatile CHDLC_MAILBOX_STRUCT* mb;
- CHDLC_MAILBOX_STRUCT* mb1;
- unsigned long timeout;
-
- /* Verify configuration ID */
- if (conf->config_id != WANCONFIG_CHDLC) {
- printk(KERN_INFO "%s: invalid configuration ID %u!\n",
- card->devname, conf->config_id);
- return -EINVAL;
- }
-
- /* Find out which Port to use */
- if ((conf->comm_port == WANOPT_PRI) || (conf->comm_port == WANOPT_SEC)){
- if (card->next){
-
- if (conf->comm_port != card->next->u.c.comm_port){
- card->u.c.comm_port = conf->comm_port;
- }else{
- printk(KERN_INFO "%s: ERROR - %s port used!\n",
- card->wandev.name, PORT(conf->comm_port));
- return -EINVAL;
- }
- }else{
- card->u.c.comm_port = conf->comm_port;
- }
- }else{
- printk(KERN_INFO "%s: ERROR - Invalid Port Selected!\n",
- card->wandev.name);
- return -EINVAL;
- }
-
-
- /* Initialize protocol-specific fields */
- if(card->hw.type != SDLA_S514){
-
- if (card->u.c.comm_port == WANOPT_PRI){
- card->mbox = (void *) card->hw.dpmbase;
- }else{
- card->mbox = (void *) card->hw.dpmbase +
- SEC_BASE_ADDR_MB_STRUCT - PRI_BASE_ADDR_MB_STRUCT;
- }
- }else{
- /* for an S514 adapter, set a pointer to the actual mailbox in the */
- /* allocated virtual memory area */
- if (card->u.c.comm_port == WANOPT_PRI){
- card->mbox = (void *) card->hw.dpmbase + PRI_BASE_ADDR_MB_STRUCT;
- }else{
- card->mbox = (void *) card->hw.dpmbase + SEC_BASE_ADDR_MB_STRUCT;
- }
- }
-
- mb = mb1 = card->mbox;
-
- if (!card->configured){
-
- /* The board will place an 'I' in the return code to indicate that it is
- ready to accept commands. We expect this to be completed in less
- than 1 second. */
-
- timeout = jiffies;
- while (mb->return_code != 'I') /* Wait 1s for board to initialize */
- if ((jiffies - timeout) > 1*HZ) break;
-
- if (mb->return_code != 'I') {
- printk(KERN_INFO
- "%s: Initialization not completed by adapter\n",
- card->devname);
- printk(KERN_INFO "Please contact Sangoma representative.\n");
- return -EIO;
- }
- }
-
- /* Read firmware version. Note that when the adapter initializes, it
- * clears the mailbox, so it may appear that the first command was
- * executed successfully when in fact it was merely erased. To work
- * around this, we execute the first command twice.
- */
-
- if (chdlc_read_version(card, u.str))
- return -EIO;
-
- printk(KERN_INFO "%s: Running Cisco HDLC firmware v%s\n",
- card->devname, u.str);
-
- card->isr = &wpc_isr;
- card->poll = NULL;
- card->exec = NULL;
- card->wandev.update = &update;
- card->wandev.new_if = &new_if;
- card->wandev.del_if = NULL;
- card->wandev.udp_port = conf->udp_port;
- card->disable_comm = &disable_comm;
- card->wandev.new_if_cnt = 0;
-
- /* reset the number of times the 'update()' proc has been called */
- card->u.c.update_call_count = 0;
-
- card->wandev.ttl = conf->ttl;
- card->wandev.interface = conf->interface;
-
- if ((card->u.c.comm_port == WANOPT_SEC && conf->interface == WANOPT_V35)&&
- card->hw.type != SDLA_S514){
- printk(KERN_INFO "%s: ERROR - V35 Interface not supported on S508 %s port \n",
- card->devname, PORT(card->u.c.comm_port));
- return -EIO;
- }
-
- card->wandev.clocking = conf->clocking;
-
- port_num = card->u.c.comm_port;
-
- /* in API mode, we can configure for "receive only" buffering */
- if(card->hw.type == SDLA_S514) {
- card->u.c.receive_only = conf->receive_only;
- if(conf->receive_only) {
- printk(KERN_INFO
- "%s: Configured for 'receive only' mode\n",
- card->devname);
- }
- }
-
- /* Setup Port Bps */
-
- if(card->wandev.clocking) {
- if((port_num == WANOPT_PRI) || card->u.c.receive_only) {
- /* For Primary Port 0 */
- max_permitted_baud =
- (card->hw.type == SDLA_S514) ?
- PRI_MAX_BAUD_RATE_S514 :
- PRI_MAX_BAUD_RATE_S508;
-
- }else if(port_num == WANOPT_SEC) {
- /* For Secondary Port 1 */
- max_permitted_baud =
- (card->hw.type == SDLA_S514) ?
- SEC_MAX_BAUD_RATE_S514 :
- SEC_MAX_BAUD_RATE_S508;
- }
-
- if(conf->bps > max_permitted_baud) {
- conf->bps = max_permitted_baud;
- printk(KERN_INFO "%s: Baud too high!\n",
- card->wandev.name);
- printk(KERN_INFO "%s: Baud rate set to %lu bps\n",
- card->wandev.name, max_permitted_baud);
- }
- card->wandev.bps = conf->bps;
- }else{
- card->wandev.bps = 0;
- }
-
- /* Setup the Port MTU */
- if((port_num == WANOPT_PRI) || card->u.c.receive_only) {
-
- /* For Primary Port 0 */
- card->wandev.mtu =
- (conf->mtu >= MIN_LGTH_CHDLC_DATA_CFG) ?
- min_t(unsigned int, conf->mtu, PRI_MAX_NO_DATA_BYTES_IN_FRAME) :
- CHDLC_DFLT_DATA_LEN;
- } else if(port_num == WANOPT_SEC) {
- /* For Secondary Port 1 */
- card->wandev.mtu =
- (conf->mtu >= MIN_LGTH_CHDLC_DATA_CFG) ?
- min_t(unsigned int, conf->mtu, SEC_MAX_NO_DATA_BYTES_IN_FRAME) :
- CHDLC_DFLT_DATA_LEN;
- }
-
- /* Set up the interrupt status area */
- /* Read the CHDLC Configuration and obtain:
- *	Ptr to shared memory info struct
- * Use this pointer to calculate the value of card->u.c.flags!
- */
- mb1->buffer_length = 0;
- mb1->command = READ_CHDLC_CONFIGURATION;
- err = sdla_exec(mb1) ? mb1->return_code : CMD_TIMEOUT;
- if(err != COMMAND_OK) {
- if(card->hw.type != SDLA_S514)
- enable_irq(card->hw.irq);
-
- chdlc_error(card, err, mb1);
- return -EIO;
- }
-
- if(card->hw.type == SDLA_S514){
- card->u.c.flags = (void *)(card->hw.dpmbase +
- (((CHDLC_CONFIGURATION_STRUCT *)mb1->data)->
- ptr_shared_mem_info_struct));
- }else{
- card->u.c.flags = (void *)(card->hw.dpmbase +
- (((CHDLC_CONFIGURATION_STRUCT *)mb1->data)->
- ptr_shared_mem_info_struct % SDLA_WINDOWSIZE));
- }
-
- flags = card->u.c.flags;
-
- /* This is for the ports link state */
- card->wandev.state = WAN_DUALPORT;
- card->u.c.state = WAN_DISCONNECTED;
-
-
- if (!card->wandev.piggyback){
- int err;
-
- /* Perform interrupt testing */
- err = intr_test(card);
-
- if(err || (Intr_test_counter < MAX_INTR_TEST_COUNTER)) {
- printk(KERN_INFO "%s: Interrupt test failed (%i)\n",
- card->devname, Intr_test_counter);
- printk(KERN_INFO "%s: Please choose another interrupt\n",
- card->devname);
- return -EIO;
- }
-
- printk(KERN_INFO "%s: Interrupt test passed (%i)\n",
- card->devname, Intr_test_counter);
- card->configured = 1;
- }
-
- if ((card->tty_opt=conf->tty) == WANOPT_YES){
- int err;
- card->tty_minor = conf->tty_minor;
-
- /* On ASYNC connections internal clocking
- * is mandatory */
- if ((card->u.c.async_mode = conf->tty_mode)){
- card->wandev.clocking = 1;
- }
- err=wanpipe_tty_init(card);
- if (err){
- return err;
- }
- }else{
-
-
- if (chdlc_set_intr_mode(card, APP_INT_ON_TIMER)){
- printk (KERN_INFO "%s: "
- "Failed to set interrupt triggers!\n",
- card->devname);
- return -EIO;
- }
-
- /* Mask the Timer interrupt */
- flags->interrupt_info_struct.interrupt_permission &=
- ~APP_INT_ON_TIMER;
- }
-
- /* If we are using CHDLC in backup mode, this flag will
- * indicate not to look for IP addresses in config_chdlc()*/
- card->u.c.backup = conf->backup;
-
- printk(KERN_INFO "\n");
-
- return 0;
-}
-
-/******* WAN Device Driver Entry Points *************************************/
-
-/*============================================================================
- * Update device status & statistics
- * This procedure is called when updating the PROC file system and returns
- * various communications statistics. These statistics are accumulated from 3
- * different locations:
- * 1) The 'if_stats' recorded for the device.
- * 2) Communication error statistics on the adapter.
- * 3) CHDLC operational statistics on the adapter.
- * The board level statistics are read during a timer interrupt. Note that we
- * read the error and operational statistics during consecutive timer ticks so
- * as to minimize the time that we are inside the interrupt handler.
- *
- */
-static int update(struct wan_device* wandev)
-{
- sdla_t* card = wandev->private;
- struct net_device* dev;
- volatile chdlc_private_area_t* chdlc_priv_area;
- SHARED_MEMORY_INFO_STRUCT *flags;
- unsigned long timeout;
-
- /* sanity checks */
- if((wandev == NULL) || (wandev->private == NULL))
- return -EFAULT;
-
- if(wandev->state == WAN_UNCONFIGURED)
- return -ENODEV;
-
- /* more sanity checks */
- if(!card->u.c.flags)
- return -ENODEV;
-
- if(test_bit(PERI_CRIT, (void*)&card->wandev.critical))
- return -EAGAIN;
-
- if((dev=card->wandev.dev) == NULL)
- return -ENODEV;
-
- if((chdlc_priv_area=dev->priv) == NULL)
- return -ENODEV;
-
- flags = card->u.c.flags;
- if(chdlc_priv_area->update_comms_stats){
- return -EAGAIN;
- }
-
- /* we will need 2 timer interrupts to complete the */
- /* reading of the statistics */
- chdlc_priv_area->update_comms_stats = 2;
- flags->interrupt_info_struct.interrupt_permission |= APP_INT_ON_TIMER;
- chdlc_priv_area->timer_int_enabled = TMR_INT_ENABLED_UPDATE;
-
- /* wait a maximum of 1 second for the statistics to be updated */
- timeout = jiffies;
- for(;;) {
- if(chdlc_priv_area->update_comms_stats == 0)
- break;
- if ((jiffies - timeout) > (1 * HZ)){
- chdlc_priv_area->update_comms_stats = 0;
- chdlc_priv_area->timer_int_enabled &=
- ~TMR_INT_ENABLED_UPDATE;
- return -EAGAIN;
- }
- }
-
- return 0;
-}
-
-
-/*============================================================================
- * Create new logical channel.
- * This routine is called by the router when ROUTER_IFNEW IOCTL is being
- * handled.
- * o parse media- and hardware-specific configuration
- * o make sure that a new channel can be created
- * o allocate resources, if necessary
- * o prepare network device structure for registration.
- *
- * Return: 0 o.k.
- * < 0 failure (channel will not be created)
- */
-static int new_if(struct wan_device* wandev, struct net_device* dev,
- wanif_conf_t* conf)
-{
- sdla_t* card = wandev->private;
- chdlc_private_area_t* chdlc_priv_area;
-
-
- printk(KERN_INFO "%s: Configuring Interface: %s\n",
- card->devname, conf->name);
-
- if ((conf->name[0] == '\0') || (strlen(conf->name) > WAN_IFNAME_SZ)) {
- printk(KERN_INFO "%s: Invalid interface name!\n",
- card->devname);
- return -EINVAL;
- }
-
- /* allocate and initialize private data */
- chdlc_priv_area = kmalloc(sizeof(chdlc_private_area_t), GFP_KERNEL);
-
- if(chdlc_priv_area == NULL)
- return -ENOMEM;
-
- memset(chdlc_priv_area, 0, sizeof(chdlc_private_area_t));
-
- chdlc_priv_area->card = card;
- chdlc_priv_area->common.sk = NULL;
- chdlc_priv_area->common.func = NULL;
-
- /* initialize data */
- strcpy(card->u.c.if_name, conf->name);
-
- if(card->wandev.new_if_cnt > 0) {
- kfree(chdlc_priv_area);
- return -EEXIST;
- }
-
- card->wandev.new_if_cnt++;
-
- chdlc_priv_area->TracingEnabled = 0;
- chdlc_priv_area->route_status = NO_ROUTE;
- chdlc_priv_area->route_removed = 0;
-
- card->u.c.async_mode = conf->async_mode;
-
- /* setup for asynchronous mode */
- if(conf->async_mode) {
- printk(KERN_INFO "%s: Configuring for asynchronous mode\n",
- wandev->name);
-
- if(card->u.c.comm_port == WANOPT_PRI) {
- printk(KERN_INFO
- "%s:Asynchronous mode on secondary port only\n",
- wandev->name);
- kfree(chdlc_priv_area);
- return -EINVAL;
- }
-
- if(strcmp(conf->usedby, "WANPIPE") == 0) {
- printk(KERN_INFO
- "%s: Running in WANIPE Async Mode\n", wandev->name);
- card->u.c.usedby = WANPIPE;
- }else{
- card->u.c.usedby = API;
- }
-
- if(!card->wandev.clocking) {
- printk(KERN_INFO
- "%s: Asynch. clocking must be 'Internal'\n",
- wandev->name);
- kfree(chdlc_priv_area);
- return -EINVAL;
- }
-
- if((card->wandev.bps < MIN_ASY_BAUD_RATE) ||
- (card->wandev.bps > MAX_ASY_BAUD_RATE)) {
- printk(KERN_INFO "%s: Selected baud rate is invalid.\n",
- wandev->name);
- printk(KERN_INFO "Must be between %u and %u bps.\n",
- MIN_ASY_BAUD_RATE, MAX_ASY_BAUD_RATE);
- kfree(chdlc_priv_area);
- return -EINVAL;
- }
-
- card->u.c.api_options = 0;
- if (conf->asy_data_trans == WANOPT_YES) {
- card->u.c.api_options |= ASY_RX_DATA_TRANSPARENT;
- }
-
- card->u.c.protocol_options = 0;
- if (conf->rts_hs_for_receive == WANOPT_YES) {
- card->u.c.protocol_options |= ASY_RTS_HS_FOR_RX;
- }
- if (conf->xon_xoff_hs_for_receive == WANOPT_YES) {
- card->u.c.protocol_options |= ASY_XON_XOFF_HS_FOR_RX;
- }
- if (conf->xon_xoff_hs_for_transmit == WANOPT_YES) {
- card->u.c.protocol_options |= ASY_XON_XOFF_HS_FOR_TX;
- }
- if (conf->dcd_hs_for_transmit == WANOPT_YES) {
- card->u.c.protocol_options |= ASY_DCD_HS_FOR_TX;
- }
- if (conf->cts_hs_for_transmit == WANOPT_YES) {
- card->u.c.protocol_options |= ASY_CTS_HS_FOR_TX;
- }
-
- card->u.c.tx_bits_per_char = conf->tx_bits_per_char;
- card->u.c.rx_bits_per_char = conf->rx_bits_per_char;
- card->u.c.stop_bits = conf->stop_bits;
- card->u.c.parity = conf->parity;
- card->u.c.break_timer = conf->break_timer;
- card->u.c.inter_char_timer = conf->inter_char_timer;
- card->u.c.rx_complete_length = conf->rx_complete_length;
- card->u.c.xon_char = conf->xon_char;
-
- } else { /* setup for synchronous mode */
-
- card->u.c.protocol_options = 0;
- if (conf->ignore_dcd == WANOPT_YES){
- card->u.c.protocol_options |= IGNORE_DCD_FOR_LINK_STAT;
- }
- if (conf->ignore_cts == WANOPT_YES){
- card->u.c.protocol_options |= IGNORE_CTS_FOR_LINK_STAT;
- }
-
- if (conf->ignore_keepalive == WANOPT_YES) {
- card->u.c.protocol_options |=
- IGNORE_KPALV_FOR_LINK_STAT;
- card->u.c.kpalv_tx = MIN_Tx_KPALV_TIMER;
- card->u.c.kpalv_rx = MIN_Rx_KPALV_TIMER;
- card->u.c.kpalv_err = MIN_KPALV_ERR_TOL;
-
- } else { /* Do not ignore keepalives */
- card->u.c.kpalv_tx =
- ((conf->keepalive_tx_tmr - MIN_Tx_KPALV_TIMER)
- >= 0) ?
- min_t(unsigned int, conf->keepalive_tx_tmr,MAX_Tx_KPALV_TIMER) :
- DEFAULT_Tx_KPALV_TIMER;
-
- card->u.c.kpalv_rx =
- ((conf->keepalive_rx_tmr - MIN_Rx_KPALV_TIMER)
- >= 0) ?
- min_t(unsigned int, conf->keepalive_rx_tmr,MAX_Rx_KPALV_TIMER) :
- DEFAULT_Rx_KPALV_TIMER;
-
- card->u.c.kpalv_err =
- ((conf->keepalive_err_margin-MIN_KPALV_ERR_TOL)
- >= 0) ?
- min_t(unsigned int, conf->keepalive_err_margin,
- MAX_KPALV_ERR_TOL) :
- DEFAULT_KPALV_ERR_TOL;
- }
-
- /* Setup slarp timer to control delay between slarps */
- card->u.c.slarp_timer =
- ((conf->slarp_timer - MIN_SLARP_REQ_TIMER) >= 0) ?
- min_t(unsigned int, conf->slarp_timer, MAX_SLARP_REQ_TIMER) :
- DEFAULT_SLARP_REQ_TIMER;
-
- if (conf->hdlc_streaming == WANOPT_YES) {
- printk(KERN_INFO "%s: Enabling HDLC STREAMING Mode\n",
- wandev->name);
- card->u.c.protocol_options = HDLC_STREAMING_MODE;
- }
-
- if ((chdlc_priv_area->true_if_encoding = conf->true_if_encoding) == WANOPT_YES){
- printk(KERN_INFO
- "%s: Enabling, true interface type encoding.\n",
- card->devname);
- }
-
- /* Setup wanpipe as a router (WANPIPE) or as an API */
- if( strcmp(conf->usedby, "WANPIPE") == 0) {
-
- printk(KERN_INFO "%s: Running in WANPIPE mode!\n",
- wandev->name);
- card->u.c.usedby = WANPIPE;
-
- /* Option to bring down the interface when
- * the link goes down */
- if (conf->if_down){
- set_bit(DYN_OPT_ON,&chdlc_priv_area->interface_down);
- printk(KERN_INFO
- "%s: Dynamic interface configuration enabled\n",
- card->devname);
- }
-
- } else if( strcmp(conf->usedby, "API") == 0) {
- card->u.c.usedby = API;
- printk(KERN_INFO "%s: Running in API mode !\n",
- wandev->name);
- }
- }
-
- /* Tells us that if this interface is a
- * gateway or not */
- if ((chdlc_priv_area->gateway = conf->gateway) == WANOPT_YES){
- printk(KERN_INFO "%s: Interface %s is set as a gateway.\n",
- card->devname,card->u.c.if_name);
- }
-
- /* Get Multicast Information */
- chdlc_priv_area->mc = conf->mc;
-
- /* prepare network device data space for registration */
- strcpy(dev->name,card->u.c.if_name);
-
- dev->init = &if_init;
- dev->priv = chdlc_priv_area;
-
- /* Initialize the polling work routine */
- INIT_WORK(&chdlc_priv_area->poll_work, (void*)(void*)chdlc_poll, dev);
-
- /* Initialize the polling delay timer */
- init_timer(&chdlc_priv_area->poll_delay_timer);
- chdlc_priv_area->poll_delay_timer.data = (unsigned long)dev;
- chdlc_priv_area->poll_delay_timer.function = chdlc_poll_delay;
-
- printk(KERN_INFO "\n");
-
- return 0;
-}
-
-
-/****** Network Device Interface ********************************************/
-
-/*============================================================================
- * Initialize Linux network interface.
- *
- * This routine is called only once for each interface, during Linux network
- * interface registration. Returning anything but zero will fail interface
- * registration.
- */
-static int if_init(struct net_device* dev)
-{
- chdlc_private_area_t* chdlc_priv_area = dev->priv;
- sdla_t* card = chdlc_priv_area->card;
- struct wan_device* wandev = &card->wandev;
-
- /* Initialize device driver entry points */
- dev->open = &if_open;
- dev->stop = &if_close;
- dev->hard_header = &if_header;
- dev->rebuild_header = &if_rebuild_hdr;
- dev->hard_start_xmit = &if_send;
- dev->get_stats = &if_stats;
- dev->tx_timeout = &if_tx_timeout;
- dev->watchdog_timeo = TX_TIMEOUT;
-
- /* Initialize media-specific parameters */
- dev->flags |= IFF_POINTOPOINT;
- dev->flags |= IFF_NOARP;
-
- /* Enable Multicasting if user selected */
- if (chdlc_priv_area->mc == WANOPT_YES){
- dev->flags |= IFF_MULTICAST;
- }
-
- if (chdlc_priv_area->true_if_encoding){
- dev->type = ARPHRD_HDLC; /* This breaks the tcpdump */
- }else{
- dev->type = ARPHRD_PPP;
- }
-
- dev->mtu = card->wandev.mtu;
- /* for API usage, add the API header size to the requested MTU size */
- if(card->u.c.usedby == API) {
- dev->mtu += sizeof(api_tx_hdr_t);
- }
-
- dev->hard_header_len = CHDLC_HDR_LEN;
-
- /* Initialize hardware parameters */
- dev->irq = wandev->irq;
- dev->dma = wandev->dma;
- dev->base_addr = wandev->ioport;
- dev->mem_start = wandev->maddr;
- dev->mem_end = wandev->maddr + wandev->msize - 1;
-
- /* Set transmit buffer queue length
- * If set too low, packets will not be retransmitted
- * by the stack.
- */
- dev->tx_queue_len = 100;
- SET_MODULE_OWNER(dev);
-
- return 0;
-}
-
-/*============================================================================
- * Open network interface.
- * o enable communications and interrupts.
- * o prevent module from unloading by incrementing use count
- *
- * Return 0 if O.k. or errno.
- */
-static int if_open(struct net_device* dev)
-{
- chdlc_private_area_t* chdlc_priv_area = dev->priv;
- sdla_t* card = chdlc_priv_area->card;
- struct timeval tv;
- int err = 0;
-
- /* Only one open per interface is allowed */
-
- if (netif_running(dev))
- return -EBUSY;
-
- /* Initialize the work queue entry */
- chdlc_priv_area->tq_working=0;
-
- INIT_WORK(&chdlc_priv_area->common.wanpipe_work,
- (void *)(void *)chdlc_work, dev);
-
- /* Allocate and initialize BH circular buffer */
- /* Add 1 to MAX_BH_BUFF so we don't have to test against (MAX_BH_BUFF-1) */
- chdlc_priv_area->bh_head = kmalloc((sizeof(bh_data_t)*(MAX_BH_BUFF+1)),GFP_ATOMIC);
- memset(chdlc_priv_area->bh_head,0,(sizeof(bh_data_t)*(MAX_BH_BUFF+1)));
- atomic_set(&chdlc_priv_area->bh_buff_used, 0);
-
- do_gettimeofday(&tv);
- chdlc_priv_area->router_start_time = tv.tv_sec;
-
- netif_start_queue(dev);
-
- wanpipe_open(card);
-
- /* TTY is configured during wanpipe_set_termios
- * call, not here */
- if (card->tty_opt)
- return err;
-
- set_bit(0,&chdlc_priv_area->config_chdlc);
- chdlc_priv_area->config_chdlc_timeout=jiffies;
-
- /* Start the CHDLC configuration after 1sec delay.
- * This will give the interface initialization time
- * to finish its configuration */
- mod_timer(&chdlc_priv_area->poll_delay_timer, jiffies + HZ);
- return err;
-}
-
-/*============================================================================
- * Close network interface.
- * o if this is the last close, then disable communications and interrupts.
- * o reset flags.
- */
-static int if_close(struct net_device* dev)
-{
- chdlc_private_area_t* chdlc_priv_area = dev->priv;
- sdla_t* card = chdlc_priv_area->card;
-
- if (chdlc_priv_area->bh_head){
- int i;
- struct sk_buff *skb;
-
- for (i=0; i<(MAX_BH_BUFF+1); i++){
- skb = ((bh_data_t *)&chdlc_priv_area->bh_head[i])->skb;
- if (skb != NULL){
- dev_kfree_skb_any(skb);
- }
- }
- kfree(chdlc_priv_area->bh_head);
- chdlc_priv_area->bh_head=NULL;
- }
-
- netif_stop_queue(dev);
- wanpipe_close(card);
- del_timer(&chdlc_priv_area->poll_delay_timer);
- return 0;
-}
-
-static void disable_comm (sdla_t *card)
-{
- SHARED_MEMORY_INFO_STRUCT *flags = card->u.c.flags;
-
- if (card->u.c.comm_enabled){
- chdlc_disable_comm_shutdown (card);
- }else{
- flags->interrupt_info_struct.interrupt_permission = 0;
- }
-
- if (!tty_init_cnt)
- return;
-
- if (card->tty_opt){
- struct serial_state * state;
- if (!(--tty_init_cnt)){
- int e1;
- serial_driver.refcount=0;
-
- if ((e1 = tty_unregister_driver(&serial_driver)))
- printk("SERIAL: failed to unregister serial driver (%d)\n",
- e1);
- printk(KERN_INFO "%s: Unregistering TTY Driver, Major %i\n",
- card->devname,WAN_TTY_MAJOR);
- }
- card->tty=NULL;
- tty_card_map[card->tty_minor]=NULL;
- state = &rs_table[card->tty_minor];
- memset(state, 0, sizeof(*state));
- }
- return;
-}
-
-
-/*============================================================================
- * Build media header.
- *
- * The trick here is to put packet type (Ethertype) into 'protocol' field of
- * the socket buffer, so that we don't forget it. If packet type is not
- * supported, set skb->protocol to 0 and discard packet later.
- *
- * Return: media header length.
- */
-static int if_header(struct sk_buff* skb, struct net_device* dev,
- unsigned short type, void* daddr, void* saddr,
- unsigned len)
-{
- skb->protocol = htons(type);
-
- return CHDLC_HDR_LEN;
-}
-
-
-/*============================================================================
- * Handle transmit timeout event from netif watchdog
- */
-static void if_tx_timeout(struct net_device *dev)
-{
- chdlc_private_area_t* chan = dev->priv;
- sdla_t *card = chan->card;
-
- /* If our device stays busy for at least 5 seconds then we will
- * kick start the device by waking up the netif queue. We expect
- * that our device never stays busy more than 5 seconds. So this
- * is only used as a last resort.
- */
-
- ++card->wandev.stats.collisions;
-
- printk (KERN_INFO "%s: Transmit timed out on %s\n", card->devname,dev->name);
- netif_wake_queue (dev);
-}
-
-
-
-/*============================================================================
- * Re-build media header.
- *
- * Return: 1 physical address resolved.
- * 0 physical address not resolved
- */
-static int if_rebuild_hdr (struct sk_buff *skb)
-{
- return 1;
-}
-
-
-/*============================================================================
- * Send a packet on a network interface.
- * o set tbusy flag (marks start of the transmission) to block a timer-based
- * transmit from overlapping.
- * o check link state. If link is not up, then drop the packet.
- * o execute adapter send command.
- * o free socket buffer
- *
- * Return: 0 complete (socket buffer must be freed)
- * non-0 packet may be re-transmitted (tbusy must be set)
- *
- * Notes:
- * 1. This routine is called either by the protocol stack or by the "net
- * bottom half" (with interrupts enabled).
- * 2. Setting tbusy flag will inhibit further transmit requests from the
- * protocol stack and can be used for flow control with protocol layer.
- */
-static int if_send(struct sk_buff* skb, struct net_device* dev)
-{
- chdlc_private_area_t *chdlc_priv_area = dev->priv;
- sdla_t *card = chdlc_priv_area->card;
- SHARED_MEMORY_INFO_STRUCT *flags = card->u.c.flags;
- INTERRUPT_INFORMATION_STRUCT *chdlc_int = &flags->interrupt_info_struct;
- int udp_type = 0;
- unsigned long smp_flags;
- int err=0;
-
- netif_stop_queue(dev);
-
- if (skb == NULL){
- /* If we get here, some higher layer thinks we've missed a
- * tx-done interrupt.
- */
- printk(KERN_INFO "%s: interface %s got kicked!\n",
- card->devname, dev->name);
-
- netif_wake_queue(dev);
- return 0;
- }
-
- if (ntohs(skb->protocol) != htons(PVC_PROT)){
-
- /* check the udp packet type */
-
- udp_type = udp_pkt_type(skb, card);
-
- if (udp_type == UDP_CPIPE_TYPE){
- if(store_udp_mgmt_pkt(UDP_PKT_FRM_STACK, card, skb, dev,
- chdlc_priv_area)){
- chdlc_int->interrupt_permission |=
- APP_INT_ON_TIMER;
- }
- netif_start_queue(dev);
- return 0;
- }
-
- /* check to see if the source IP address is a broadcast or */
- /* multicast IP address */
- if(chk_bcast_mcast_addr(card, dev, skb)){
- ++card->wandev.stats.tx_dropped;
- dev_kfree_skb_any(skb);
- netif_start_queue(dev);
- return 0;
- }
- }
-
- /* Lock the 508 Card: SMP is supported */
- if(card->hw.type != SDLA_S514){
- s508_lock(card,&smp_flags);
- }
-
- if(test_and_set_bit(SEND_CRIT, (void*)&card->wandev.critical)) {
-
- printk(KERN_INFO "%s: Critical in if_send: %lx\n",
- card->wandev.name,card->wandev.critical);
- ++card->wandev.stats.tx_dropped;
- netif_start_queue(dev);
- goto if_send_exit_crit;
- }
-
- if(card->u.c.state != WAN_CONNECTED){
- ++card->wandev.stats.tx_dropped;
- netif_start_queue(dev);
-
- }else if(!skb->protocol){
- ++card->wandev.stats.tx_errors;
- netif_start_queue(dev);
-
- }else {
- void* data = skb->data;
- unsigned len = skb->len;
- unsigned char attr;
-
- /* If it's an API packet pull off the API
- * header. Also check that the packet size
- * is larger than the API header
- */
- if (card->u.c.usedby == API){
- api_tx_hdr_t* api_tx_hdr;
-
- /* discard the frame if we are configured for */
- /* 'receive only' mode or if there is no data */
- if (card->u.c.receive_only ||
- (len <= sizeof(api_tx_hdr_t))) {
-
- ++card->wandev.stats.tx_dropped;
- netif_start_queue(dev);
- goto if_send_exit_crit;
- }
-
- api_tx_hdr = (api_tx_hdr_t *)data;
- attr = api_tx_hdr->attr;
- data += sizeof(api_tx_hdr_t);
- len -= sizeof(api_tx_hdr_t);
- }
-
- if(chdlc_send(card, data, len)) {
- netif_stop_queue(dev);
- }else{
- ++card->wandev.stats.tx_packets;
- card->wandev.stats.tx_bytes += len;
-
- netif_start_queue(dev);
-
- dev->trans_start = jiffies;
- }
- }
-
-if_send_exit_crit:
-
- if (!(err=netif_queue_stopped(dev))) {
- dev_kfree_skb_any(skb);
- }else{
- chdlc_priv_area->tick_counter = jiffies;
- chdlc_int->interrupt_permission |= APP_INT_ON_TX_FRAME;
- }
-
- clear_bit(SEND_CRIT, (void*)&card->wandev.critical);
- if(card->hw.type != SDLA_S514){
- s508_unlock(card,&smp_flags);
- }
-
- return err;
-}
-
-
-/*============================================================================
- * Check to see if the packet to be transmitted contains a broadcast or
- * multicast source IP address.
- */
-
-static int chk_bcast_mcast_addr(sdla_t *card, struct net_device* dev,
- struct sk_buff *skb)
-{
- u32 src_ip_addr;
- u32 broadcast_ip_addr = 0;
- struct in_device *in_dev;
-
- /* read the IP source address from the outgoing packet */
- src_ip_addr = *(u32 *)(skb->data + 12);
-
- /* read the IP broadcast address for the device */
- in_dev = dev->ip_ptr;
- if(in_dev != NULL) {
- struct in_ifaddr *ifa= in_dev->ifa_list;
- if(ifa != NULL)
- broadcast_ip_addr = ifa->ifa_broadcast;
- else
- return 0;
- }
-
- /* check if the IP Source Address is a Broadcast address */
- if((dev->flags & IFF_BROADCAST) && (src_ip_addr == broadcast_ip_addr)) {
- printk(KERN_INFO "%s: Broadcast Source Address silently discarded\n",
- card->devname);
- return 1;
- }
-
- /* check if the IP Source Address is a Multicast address */
- if((ntohl(src_ip_addr) >= 0xE0000001) &&
- (ntohl(src_ip_addr) <= 0xFFFFFFFE)) {
- printk(KERN_INFO "%s: Multicast Source Address silently discarded\n",
- card->devname);
- return 1;
- }
-
- return 0;
-}
-
-
-/*============================================================================
- * Reply to UDP Management system.
- * Return length of reply.
- */
-static int reply_udp( unsigned char *data, unsigned int mbox_len )
-{
-
- unsigned short len, udp_length, temp, ip_length;
- unsigned long ip_temp;
- int even_bound = 0;
- chdlc_udp_pkt_t *c_udp_pkt = (chdlc_udp_pkt_t *)data;
-
- /* Set length of packet */
- len = sizeof(ip_pkt_t)+
- sizeof(udp_pkt_t)+
- sizeof(wp_mgmt_t)+
- sizeof(cblock_t)+
- sizeof(trace_info_t)+
- mbox_len;
-
- /* fill in UDP reply */
- c_udp_pkt->wp_mgmt.request_reply = UDPMGMT_REPLY;
-
- /* fill in UDP length */
- udp_length = sizeof(udp_pkt_t)+
- sizeof(wp_mgmt_t)+
- sizeof(cblock_t)+
- sizeof(trace_info_t)+
- mbox_len;
-
- /* put it on an even boundary */
- if ( udp_length & 0x0001 ) {
- udp_length += 1;
- len += 1;
- even_bound = 1;
- }
-
- temp = (udp_length<<8)|(udp_length>>8);
- c_udp_pkt->udp_pkt.udp_length = temp;
-
- /* swap UDP ports */
- temp = c_udp_pkt->udp_pkt.udp_src_port;
- c_udp_pkt->udp_pkt.udp_src_port =
- c_udp_pkt->udp_pkt.udp_dst_port;
- c_udp_pkt->udp_pkt.udp_dst_port = temp;
-
- /* add UDP pseudo header */
- temp = 0x1100;
- *((unsigned short *)(c_udp_pkt->data+mbox_len+even_bound)) = temp;
- temp = (udp_length<<8)|(udp_length>>8);
- *((unsigned short *)(c_udp_pkt->data+mbox_len+even_bound+2)) = temp;
-
-
- /* calculate UDP checksum */
- c_udp_pkt->udp_pkt.udp_checksum = 0;
- c_udp_pkt->udp_pkt.udp_checksum = calc_checksum(&data[UDP_OFFSET],udp_length+UDP_OFFSET);
-
- /* fill in IP length */
- ip_length = len;
- temp = (ip_length<<8)|(ip_length>>8);
- c_udp_pkt->ip_pkt.total_length = temp;
-
- /* swap IP addresses */
- ip_temp = c_udp_pkt->ip_pkt.ip_src_address;
- c_udp_pkt->ip_pkt.ip_src_address = c_udp_pkt->ip_pkt.ip_dst_address;
- c_udp_pkt->ip_pkt.ip_dst_address = ip_temp;
-
- /* fill in IP checksum */
- c_udp_pkt->ip_pkt.hdr_checksum = 0;
- c_udp_pkt->ip_pkt.hdr_checksum = calc_checksum(data,sizeof(ip_pkt_t));
-
- return len;
-
-} /* reply_udp */
-
-unsigned short calc_checksum (char *data, int len)
-{
- unsigned short temp;
- unsigned long sum=0;
- int i;
-
- for( i = 0; i <len; i+=2 ) {
- memcpy(&temp,&data[i],2);
- sum += (unsigned long)temp;
- }
-
- while (sum >> 16 ) {
- sum = (sum & 0xffffUL) + (sum >> 16);
- }
-
- temp = (unsigned short)sum;
- temp = ~temp;
-
- if( temp == 0 )
- temp = 0xffff;
-
- return temp;
-}
-
-
-/*============================================================================
- * Get ethernet-style interface statistics.
- * Return a pointer to struct enet_statistics.
- */
-static struct net_device_stats* if_stats(struct net_device* dev)
-{
- sdla_t *my_card;
- chdlc_private_area_t* chdlc_priv_area;
-
- if ((chdlc_priv_area=dev->priv) == NULL)
- return NULL;
-
- my_card = chdlc_priv_area->card;
- return &my_card->wandev.stats;
-}
-
-
-/****** Cisco HDLC Firmware Interface Functions *******************************/
-
-/*============================================================================
- * Read firmware code version.
- * Put code version as ASCII string in str.
- */
-static int chdlc_read_version (sdla_t* card, char* str)
-{
- CHDLC_MAILBOX_STRUCT* mb = card->mbox;
- int len;
- char err;
- mb->buffer_length = 0;
- mb->command = READ_CHDLC_CODE_VERSION;
- err = sdla_exec(mb) ? mb->return_code : CMD_TIMEOUT;
-
- if(err != COMMAND_OK) {
- chdlc_error(card,err,mb);
- }
- else if (str) { /* is not null */
- len = mb->buffer_length;
- memcpy(str, mb->data, len);
- str[len] = '\0';
- }
- return (err);
-}
-
-/*-----------------------------------------------------------------------------
- * Configure CHDLC firmware.
- */
-static int chdlc_configure (sdla_t* card, void* data)
-{
- int err;
- CHDLC_MAILBOX_STRUCT *mailbox = card->mbox;
- int data_length = sizeof(CHDLC_CONFIGURATION_STRUCT);
-
- mailbox->buffer_length = data_length;
- memcpy(mailbox->data, data, data_length);
- mailbox->command = SET_CHDLC_CONFIGURATION;
- err = sdla_exec(mailbox) ? mailbox->return_code : CMD_TIMEOUT;
-
- if (err != COMMAND_OK) chdlc_error (card, err, mailbox);
-
- return err;
-}
-
-
-/*============================================================================
- * Set interrupt mode -- HDLC Version.
- */
-
-static int chdlc_set_intr_mode (sdla_t* card, unsigned mode)
-{
- CHDLC_MAILBOX_STRUCT* mb = card->mbox;
- CHDLC_INT_TRIGGERS_STRUCT* int_data =
- (CHDLC_INT_TRIGGERS_STRUCT *)mb->data;
- int err;
-
- int_data->CHDLC_interrupt_triggers = mode;
- int_data->IRQ = card->hw.irq;
- int_data->interrupt_timer = 1;
-
- mb->buffer_length = sizeof(CHDLC_INT_TRIGGERS_STRUCT);
- mb->command = SET_CHDLC_INTERRUPT_TRIGGERS;
- err = sdla_exec(mb) ? mb->return_code : CMD_TIMEOUT;
- if (err != COMMAND_OK)
- chdlc_error (card, err, mb);
- return err;
-}
-
-
-/*===========================================================
- * chdlc_disable_comm_shutdown
- *
- * Shutdown() disables the communications. We must
- * have a separate function, because we must not
- * call the chdlc_error() handler since the private
- * area has already been replaced. */
-
-static int chdlc_disable_comm_shutdown (sdla_t *card)
-{
- CHDLC_MAILBOX_STRUCT* mb = card->mbox;
- CHDLC_INT_TRIGGERS_STRUCT* int_data =
- (CHDLC_INT_TRIGGERS_STRUCT *)mb->data;
- int err;
-
- /* Disable Interrupts */
- int_data->CHDLC_interrupt_triggers = 0;
- int_data->IRQ = card->hw.irq;
- int_data->interrupt_timer = 1;
-
- mb->buffer_length = sizeof(CHDLC_INT_TRIGGERS_STRUCT);
- mb->command = SET_CHDLC_INTERRUPT_TRIGGERS;
- err = sdla_exec(mb) ? mb->return_code : CMD_TIMEOUT;
-
- /* Disable Communications */
-
- if (card->u.c.async_mode) {
- mb->command = DISABLE_ASY_COMMUNICATIONS;
- }else{
- mb->command = DISABLE_CHDLC_COMMUNICATIONS;
- }
-
- mb->buffer_length = 0;
- err = sdla_exec(mb) ? mb->return_code : CMD_TIMEOUT;
-
- card->u.c.comm_enabled = 0;
-
- return 0;
-}
-
-/*============================================================================
- * Enable communications.
- */
-
-static int chdlc_comm_enable (sdla_t* card)
-{
- int err;
- CHDLC_MAILBOX_STRUCT* mb = card->mbox;
-
- mb->buffer_length = 0;
- mb->command = ENABLE_CHDLC_COMMUNICATIONS;
- err = sdla_exec(mb) ? mb->return_code : CMD_TIMEOUT;
- if (err != COMMAND_OK)
- chdlc_error(card, err, mb);
- else