* 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/libata-dev:
[PATCH] pata-qdi: fix le32 in data_xfer
[libata] sata_promise: add PCI ID
[PATCH] libata: return sense data in HDIO_DRIVE_CMD ioctl
[PATCH] libata: Don't believe bogus claims in the older PIO mode register
- info, mount options and specifications for the Ext2 filesystem.
ext3.txt
- info, mount options and specifications for the Ext3 filesystem.
+ext4.txt
+ - info, mount options and specifications for the Ext4 filesystem.
files.txt
- info on file management in the Linux kernel.
fuse.txt
--- /dev/null
+
+Ext4 Filesystem
+===============
+
+This is a development version of the ext4 filesystem, an evolution of
+the ext3 filesystem which incorporates scalability and reliability
+enhancements for supporting large filesystems (64 bit) in keeping with
+increasing disk capacities and state-of-the-art feature requirements.
+
+Mailing list: linux-ext4@vger.kernel.org
+
+
+1. Quick usage instructions:
+============================
+
+ - Grab updated e2fsprogs from
+ ftp://ftp.kernel.org/pub/linux/kernel/people/tytso/e2fsprogs-interim/
+ This is a patchset on top of e2fsprogs-1.39, which can be found at
+ ftp://ftp.kernel.org/pub/linux/kernel/people/tytso/e2fsprogs/
+
+ - It's still mke2fs -j /dev/hda1
+
+ - mount /dev/hda1 /wherever -t ext4dev
+
+ - To enable extents,
+
+ mount /dev/hda1 /wherever -t ext4dev -o extents
+
+ - The filesystem is compatible with the ext3 driver until you add a file
+    which has extents (i.e. `mount -o extents', then create a file).
+
+  NOTE: The "extents" mount flag is temporary. It will soon go away and
+  extents will be enabled by the "-O extents" feature flag to mke2fs or
+  tune2fs.
+
+ - When comparing performance with other filesystems, remember that
+ ext3/4 by default offers higher data integrity guarantees than most. So
+ when comparing with a metadata-only journalling filesystem, use `mount -o
+ data=writeback'. And you might as well use `mount -o nobh' too along
+ with it. Making the journal larger than the mke2fs default often helps
+ performance with metadata-intensive workloads.
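+
+    For example, such a comparison mount (using the same example device
+    as above) might be:
+
+	mount /dev/hda1 /wherever -t ext4dev -o data=writeback,nobh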
+
+2. Features
+===========
+
+2.1 Currently available
+
+* ability to use filesystems > 16TB
+* extent format reduces metadata overhead (RAM, IO for access, transactions)
+* extent format more robust in face of on-disk corruption due to magics and
+  internal redundancy in tree
+
+2.2 Previously available, soon to be enabled by default by "mkfs.ext4":
+
+* dir_index and resize inode will be on by default
+* large inodes will be used by default for fast EAs, nsec timestamps, etc
+
+2.3 Candidate features for future inclusion
+
+There are several under discussion; whether they all make it in is
+partly a function of how much time everyone has to work on them:
+
+* improved file allocation (multi-block alloc, delayed alloc; basically done)
+* fix 32000 subdirectory limit (patch exists, needs some e2fsck work)
+* nsec timestamps for mtime, atime, ctime, create time (patch exists,
+ needs some e2fsck work)
+* inode version field on disk (NFSv4, Lustre; prototype exists)
+* reduced mke2fs/e2fsck time via uninitialized groups (prototype exists)
+* journal checksumming for robustness, performance (prototype exists)
+* persistent file preallocation (e.g. for streaming media, databases)
+
+Features like metadata checksumming have been discussed and planned for
+a bit, but no patches exist yet, so I'm not sure they're in the near-term
+roadmap.
+
+The big performance win will come with mballoc and delalloc. CFS has
+been using mballoc for a few years already with Lustre, and IBM + Bull
+did a lot of benchmarking on it. The reason it isn't in the first set of
+patches is partly a manageability issue, and partly because it doesn't
+directly affect the on-disk format (outside of much better allocation)
+so it isn't critical to get into the first round of changes. I believe
+Alex is working on a new set of patches right now.
+
+3. Options
+==========
+
+When mounting an ext4 filesystem, the following options are accepted:
+(*) == default
+
+extents ext4 will use extents to address file data. The
+ file system will no longer be mountable by ext3.
+
+journal=update Update the ext4 file system's journal to the current
+ format.
+
+journal=inum When a journal already exists, this option is ignored.
+ Otherwise, it specifies the number of the inode which
+ will represent the ext4 file system's journal file.
+
+journal_dev=devnum When the external journal device's major/minor numbers
+ have changed, this option allows the user to specify
+ the new journal location. The journal device is
+ identified through its new major/minor numbers encoded
+ in devnum.
+
+noload Don't load the journal on mounting.
+
+data=journal All data are committed into the journal prior to being
+ written into the main file system.
+
+data=ordered (*) All data are forced directly out to the main file
+ system prior to its metadata being committed to the
+ journal.
+
+data=writeback Data ordering is not preserved, data may be written
+ into the main file system after its metadata has been
+ committed to the journal.
+
+commit=nrsec (*) Ext4 can be told to sync all its data and metadata
+ every 'nrsec' seconds. The default value is 5 seconds.
+ This means that if you lose your power, you will lose
+ as much as the latest 5 seconds of work (your
+ filesystem will not be damaged though, thanks to the
+ journaling). This default value (or any low value)
+ will hurt performance, but it's good for data-safety.
+ Setting it to 0 will have the same effect as leaving
+ it at the default (5 seconds).
+ Setting it to very large values will improve
+ performance.
+
+barrier=1 This enables/disables barriers. barrier=0 disables
+		them, barrier=1 enables them.
+
+orlov (*) This enables the new Orlov block allocator. It is
+ enabled by default.
+
+oldalloc This disables the Orlov block allocator and enables
+ the old block allocator. Orlov should have better
+		performance - we'd like to hear about it if that's
+		not the case for you.
+
+user_xattr Enables Extended User Attributes. Additionally, you
+ need to have extended attribute support enabled in the
+ kernel configuration (CONFIG_EXT4_FS_XATTR). See the
+ attr(5) manual page and http://acl.bestbits.at/ to
+ learn more about extended attributes.
+
+nouser_xattr Disables Extended User Attributes.
+
+acl Enables POSIX Access Control Lists support.
+ Additionally, you need to have ACL support enabled in
+ the kernel configuration (CONFIG_EXT4_FS_POSIX_ACL).
+ See the acl(5) manual page and http://acl.bestbits.at/
+ for more information.
+
+noacl This option disables POSIX Access Control List
+ support.
+
+reservation
+
+noreservation
+
+bsddf (*) Make 'df' act like BSD.
+minixdf Make 'df' act like Minix.
+
+check=none Don't do extra checking of bitmaps on mount.
+nocheck
+
+debug Extra debugging information is sent to syslog.
+
+errors=remount-ro(*) Remount the filesystem read-only on an error.
+errors=continue Keep going on a filesystem error.
+errors=panic Panic and halt the machine if an error occurs.
+
+grpid Give objects the same group ID as their creator.
+bsdgroups
+
+nogrpid (*) New objects have the group ID of their creator.
+sysvgroups
+
+resgid=n The group ID which may use the reserved blocks.
+
+resuid=n The user ID which may use the reserved blocks.
+
+sb=n Use alternate superblock at this location.
+
+quota
+noquota
+grpquota
+usrquota
+
+bh (*) ext4 associates buffer heads to data pages to
+nobh (a) cache disk block mapping information
+ (b) link pages into transaction to provide
+ ordering guarantees.
+ "bh" option forces use of buffer heads.
+ "nobh" option tries to avoid associating buffer
+ heads (supported only for "writeback" mode).
+
+
+Data Mode
+---------
+There are 3 different data modes:
+
+* writeback mode
+In data=writeback mode, ext4 does not journal data at all. This mode provides
+a level of journaling similar to that of XFS, JFS, and ReiserFS in their
+default mode - metadata journaling. A crash+recovery can cause incorrect data
+to appear in files which were written shortly before the crash. This mode will
+typically provide the best ext4 performance.
+
+* ordered mode
+In data=ordered mode, ext4 only officially journals metadata, but it logically
+groups metadata and data blocks into a single unit called a transaction. When
+it's time to write the new metadata out to disk, the associated data blocks
+are written first. In general, this mode is slightly slower than
+writeback but significantly faster than journal mode.
+
+* journal mode
+data=journal mode provides full data and metadata journaling. All new data is
+written to the journal first, and then to its final location.
+In the event of a crash, the journal can be replayed, bringing both data and
+metadata into a consistent state. This mode is the slowest except when data
+needs to be read from and written to disk at the same time, where it
+outperforms all other modes.
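+
+For example, to request a specific data mode at mount time (reusing the
+example device from the quick usage instructions above):
+
+	mount /dev/hda1 /wherever -t ext4dev -o data=journal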
+
+References
+==========
+
+kernel source: <file:fs/ext4/>
+ <file:fs/jbd2/>
+
+programs: http://e2fsprogs.sourceforge.net/
+ http://ext2resize.sourceforge.net
+
+useful links: http://fedoraproject.org/wiki/ext3-devel
+ http://www.bullopensource.org/ext4/
softirq-read respectively, and the character displayed in each
indicates:
- '.' acquired while irqs enabled
+ '.' acquired while irqs disabled
'+' acquired in irq context
- '-' acquired in process context with irqs disabled
- '?' read-acquired both with irqs enabled and in irq context
+ '-' acquired with irqs enabled
+ '?' read acquired in irq context with irqs enabled.
Unused mutexes cannot be part of the cause of an error.
core_pattern:
core_pattern is used to specify a core dumpfile pattern name.
-. max length 64 characters; default value is "core"
+. max length 128 characters; default value is "core"
. core_pattern is used as a pattern template for the output filename;
certain string patterns (beginning with '%') are substituted with
their actual values.
%h hostname
%e executable filename
	%<OTHER>	both the '%' and the unknown character are dropped
+. If the first character of the pattern is a '|', the kernel will treat
+ the rest of the pattern as a command to run. The core dump will be
+ written to the standard input of that program instead of to a file.
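+
+  For example (the collector program path is purely illustrative):
+
+	echo 'core.%e.%h' > /proc/sys/kernel/core_pattern
+	echo '|/usr/local/bin/core-collector' > /proc/sys/kernel/core_pattern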
==============================================================
# vmlinux image - including updated kernel symbols
vmlinux: $(vmlinux-lds) $(vmlinux-init) $(vmlinux-main) $(kallsyms.o) FORCE
+ifdef CONFIG_HEADERS_CHECK
+ $(Q)$(MAKE) headers_check
+endif
$(call if_changed_rule,vmlinux__)
$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modpost $@
$(Q)rm -f .old_version
*/
#include <linux/module.h>
-#include <linux/string.h>
-#include <linux/user.h>
-#include <linux/elfcore.h>
-#include <linux/socket.h>
-#include <linux/syscalls.h>
-#include <linux/in.h>
-#include <linux/in6.h>
-#include <linux/pci.h>
-#include <linux/screen_info.h>
-#include <linux/tty.h>
-#include <linux/mm.h>
-#include <linux/delay.h>
-#include <linux/dma-mapping.h>
-
-#include <asm/io.h>
#include <asm/console.h>
-#include <asm/hwrpb.h>
#include <asm/uaccess.h>
-#include <asm/processor.h>
#include <asm/checksum.h>
-#include <linux/interrupt.h>
#include <asm/fpu.h>
-#include <asm/irq.h>
#include <asm/machvec.h>
-#include <asm/pgalloc.h>
-#include <asm/semaphore.h>
-#include <asm/tlbflush.h>
-#include <asm/cacheflush.h>
-#include <asm/vga.h>
#include <asm/unistd.h>
-extern struct hwrpb_struct *hwrpb;
-extern spinlock_t rtc_lock;
-
/* these are C runtime functions with special calling conventions: */
extern void __divl (void);
extern void __reml (void);
extern void __remqu (void);
EXPORT_SYMBOL(alpha_mv);
-EXPORT_SYMBOL(screen_info);
-EXPORT_SYMBOL(perf_irq);
EXPORT_SYMBOL(callback_getenv);
EXPORT_SYMBOL(callback_setenv);
EXPORT_SYMBOL(callback_save_env);
-#ifdef CONFIG_ALPHA_GENERIC
-EXPORT_SYMBOL(alpha_using_srm);
-#endif /* CONFIG_ALPHA_GENERIC */
/* platform dependent support */
EXPORT_SYMBOL(strcat);
EXPORT_SYMBOL(copy_page);
EXPORT_SYMBOL(clear_page);
-EXPORT_SYMBOL(__direct_map_base);
-EXPORT_SYMBOL(__direct_map_size);
-
-#ifdef CONFIG_PCI
-EXPORT_SYMBOL(pci_alloc_consistent);
-EXPORT_SYMBOL(pci_free_consistent);
-EXPORT_SYMBOL(pci_map_single);
-EXPORT_SYMBOL(pci_map_page);
-EXPORT_SYMBOL(pci_unmap_single);
-EXPORT_SYMBOL(pci_unmap_page);
-EXPORT_SYMBOL(pci_map_sg);
-EXPORT_SYMBOL(pci_unmap_sg);
-EXPORT_SYMBOL(pci_dma_supported);
-EXPORT_SYMBOL(pci_dac_dma_supported);
-EXPORT_SYMBOL(pci_dac_page_to_dma);
-EXPORT_SYMBOL(pci_dac_dma_to_page);
-EXPORT_SYMBOL(pci_dac_dma_to_offset);
-EXPORT_SYMBOL(alpha_gendev_to_pci);
-#endif
-EXPORT_SYMBOL(dma_set_mask);
-
-EXPORT_SYMBOL(dump_thread);
-EXPORT_SYMBOL(dump_elf_thread);
-EXPORT_SYMBOL(dump_elf_task);
-EXPORT_SYMBOL(dump_elf_task_fp);
-EXPORT_SYMBOL(hwrpb);
-EXPORT_SYMBOL(start_thread);
EXPORT_SYMBOL(alpha_read_fp_reg);
EXPORT_SYMBOL(alpha_read_fp_reg_s);
EXPORT_SYMBOL(alpha_write_fp_reg);
EXPORT_SYMBOL(alpha_write_fp_reg_s);
-/* In-kernel system calls. */
+/* entry.S */
EXPORT_SYMBOL(kernel_thread);
-EXPORT_SYMBOL(sys_dup);
-EXPORT_SYMBOL(sys_exit);
-EXPORT_SYMBOL(sys_write);
-EXPORT_SYMBOL(sys_lseek);
EXPORT_SYMBOL(kernel_execve);
-EXPORT_SYMBOL(sys_setsid);
-EXPORT_SYMBOL(sys_wait4);
/* Networking helper routines. */
EXPORT_SYMBOL(csum_tcpudp_magic);
EXPORT_SYMBOL(alpha_fp_emul);
#endif
-#ifdef CONFIG_ALPHA_BROKEN_IRQ_MASK
-EXPORT_SYMBOL(__min_ipl);
-#endif
-
/*
* The following are specially called from the uaccess assembly stubs.
*/
*/
#ifdef CONFIG_SMP
-EXPORT_SYMBOL(flush_tlb_mm);
-EXPORT_SYMBOL(flush_tlb_range);
-EXPORT_SYMBOL(flush_tlb_page);
-EXPORT_SYMBOL(smp_imb);
-EXPORT_SYMBOL(cpu_data);
-EXPORT_SYMBOL(smp_num_cpus);
-EXPORT_SYMBOL(smp_call_function);
-EXPORT_SYMBOL(smp_call_function_on_cpu);
EXPORT_SYMBOL(_atomic_dec_and_lock);
#endif /* CONFIG_SMP */
-/*
- * NUMA specific symbols
- */
-#ifdef CONFIG_DISCONTIGMEM
-EXPORT_SYMBOL(node_data);
-#endif /* CONFIG_DISCONTIGMEM */
-
-EXPORT_SYMBOL(rtc_lock);
-
/*
* The following are special because they're not called
* explicitly (the C compiler or assembler generates them in
EXPORT_SYMBOL(memcpy);
EXPORT_SYMBOL(memset);
EXPORT_SYMBOL(memchr);
-
-#ifdef CONFIG_ALPHA_IRONGATE
-EXPORT_SYMBOL(irongate_ioremap);
-EXPORT_SYMBOL(irongate_iounmap);
-#endif
#endif
return (void __iomem *)vaddr;
}
+EXPORT_SYMBOL(irongate_ioremap);
void
irongate_iounmap(volatile void __iomem *xaddr)
if (addr)
return vfree((void *)(PAGE_MASK & addr));
}
+EXPORT_SYMBOL(irongate_iounmap);
#include <linux/sched.h>
#include <linux/irq.h>
#include <linux/kernel_stat.h>
+#include <linux/module.h>
#include <asm/machvec.h>
#include <asm/dma.h>
/* Hack minimum IPL during interrupt processing for broken hardware. */
#ifdef CONFIG_ALPHA_BROKEN_IRQ_MASK
int __min_ipl;
+EXPORT_SYMBOL(__min_ipl);
#endif
/*
}
void (*perf_irq)(unsigned long, struct pt_regs *) = dummy_perf;
+EXPORT_SYMBOL(perf_irq);
/*
* The main interrupt entry point.
return 0;
}
+EXPORT_SYMBOL(dma_set_mask);
void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen)
{
dac_allowed = pdev ? pci_dac_dma_supported(pdev, pdev->dma_mask) : 0;
return pci_map_single_1(pdev, cpu_addr, size, dac_allowed);
}
+EXPORT_SYMBOL(pci_map_single);
dma_addr_t
pci_map_page(struct pci_dev *pdev, struct page *page, unsigned long offset,
return pci_map_single_1(pdev, (char *)page_address(page) + offset,
size, dac_allowed);
}
+EXPORT_SYMBOL(pci_map_page);
/* Unmap a single streaming mode DMA translation. The DMA_ADDR and
SIZE must match what was provided for in a previous pci_map_single
DBGA2("pci_unmap_single: sg [%lx,%lx] np %ld from %p\n",
dma_addr, size, npages, __builtin_return_address(0));
}
+EXPORT_SYMBOL(pci_unmap_single);
void
pci_unmap_page(struct pci_dev *pdev, dma_addr_t dma_addr,
{
pci_unmap_single(pdev, dma_addr, size, direction);
}
+EXPORT_SYMBOL(pci_unmap_page);
/* Allocate and map kernel buffer using consistent mode DMA for PCI
device. Returns non-NULL cpu-view pointer to the buffer if
return cpu_addr;
}
+EXPORT_SYMBOL(pci_alloc_consistent);
/* Free and unmap a consistent DMA buffer. CPU_ADDR and DMA_ADDR must
be values that were returned from pci_alloc_consistent. SIZE must
DBGA2("pci_free_consistent: [%x,%lx] from %p\n",
dma_addr, size, __builtin_return_address(0));
}
-
+EXPORT_SYMBOL(pci_free_consistent);
/* Classify the elements of the scatterlist. Write dma_address
of each element with:
pci_unmap_sg(pdev, start, out - start, direction);
return 0;
}
+EXPORT_SYMBOL(pci_map_sg);
/* Unmap a set of streaming mode DMA translations. Again, cpu read
rules concerning calls here are the same as for pci_unmap_single()
DBGA("pci_unmap_sg: %ld entries\n", nents - (end - sg));
}
+EXPORT_SYMBOL(pci_unmap_sg);
/* Return whether the given PCI device DMA address mask can be
return 0;
}
+EXPORT_SYMBOL(pci_dma_supported);
\f
/*
return ok;
}
+EXPORT_SYMBOL(pci_dac_dma_supported);
dma64_addr_t
pci_dac_page_to_dma(struct pci_dev *pdev, struct page *page,
+ __pa(page_address(page))
+ (dma64_addr_t) offset);
}
+EXPORT_SYMBOL(pci_dac_page_to_dma);
struct page *
pci_dac_dma_to_page(struct pci_dev *pdev, dma64_addr_t dma_addr)
unsigned long paddr = (dma_addr & PAGE_MASK) - alpha_mv.pci_dac_offset;
return virt_to_page(__va(paddr));
}
+EXPORT_SYMBOL(pci_dac_dma_to_page);
unsigned long
pci_dac_dma_to_offset(struct pci_dev *pdev, dma64_addr_t dma_addr)
{
return (dma_addr & ~PAGE_MASK);
}
-
+EXPORT_SYMBOL(pci_dac_dma_to_offset);
/* Helper for generic DMA-mapping functions. */
/* This assumes ISA bus master with dma_mask 0xffffff. */
return NULL;
}
+EXPORT_SYMBOL(alpha_gendev_to_pci);
int
dma_set_mask(struct device *dev, u64 mask)
return 0;
}
+EXPORT_SYMBOL(dma_set_mask);
regs->ps = 8;
wrusp(sp);
}
+EXPORT_SYMBOL(start_thread);
/*
* Free current thread data structures etc..
dump->regs[EF_A2] = pt->r18;
memcpy((char *)dump->regs + EF_SIZE, sw->fp, 32 * 8);
}
+EXPORT_SYMBOL(dump_thread);
/*
* Fill in the user structure for a ELF core dump.
useful value of the thread's UNIQUE field. */
dest[32] = ti->pcb.unique;
}
+EXPORT_SYMBOL(dump_elf_thread);
int
dump_elf_task(elf_greg_t *dest, struct task_struct *task)
dump_elf_thread(dest, task_pt_regs(task), task_thread_info(task));
return 1;
}
+EXPORT_SYMBOL(dump_elf_task);
int
dump_elf_task_fp(elf_fpreg_t *dest, struct task_struct *task)
memcpy(dest, sw->fp, 32 * 8);
return 1;
}
+EXPORT_SYMBOL(dump_elf_task_fp);
/*
* sys_execve() executes a new program.
struct hwrpb_struct *hwrpb;
+EXPORT_SYMBOL(hwrpb);
unsigned long srm_hae;
int alpha_l1i_cacheshape;
#ifdef CONFIG_ALPHA_GENERIC
struct alpha_machine_vector alpha_mv;
int alpha_using_srm;
+EXPORT_SYMBOL(alpha_using_srm);
#endif
static struct alpha_machine_vector *get_sysvec(unsigned long, unsigned long,
.orig_video_points = 16
};
+EXPORT_SYMBOL(screen_info);
+
/*
* The direct map I/O window, if any. This should be the same
* for all busses, since it's used by virt_to_bus.
unsigned long __direct_map_base;
unsigned long __direct_map_size;
+EXPORT_SYMBOL(__direct_map_base);
+EXPORT_SYMBOL(__direct_map_size);
/*
* Declare all of the machine vectors.
/* A collection of per-processor data. */
struct cpuinfo_alpha cpu_data[NR_CPUS];
+EXPORT_SYMBOL(cpu_data);
/* A collection of single bit ipi messages. */
static struct {
int smp_num_probed; /* Internal processor count */
int smp_num_cpus = 1; /* Number that came online. */
+EXPORT_SYMBOL(smp_num_cpus);
extern void calibrate_delay(void);
return 0;
}
+EXPORT_SYMBOL(smp_call_function_on_cpu);
int
smp_call_function (void (*func) (void *info), void *info, int retry, int wait)
return smp_call_function_on_cpu (func, info, retry, wait,
cpu_online_map);
}
+EXPORT_SYMBOL(smp_call_function);
static void
ipi_imb(void *ignored)
if (on_each_cpu(ipi_imb, NULL, 1, 1))
printk(KERN_CRIT "smp_imb: timed out\n");
}
+EXPORT_SYMBOL(smp_imb);
static void
ipi_flush_tlb_all(void *ignored)
preempt_enable();
}
+EXPORT_SYMBOL(flush_tlb_mm);
struct flush_tlb_page_struct {
struct vm_area_struct *vma;
preempt_enable();
}
+EXPORT_SYMBOL(flush_tlb_page);
void
flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
/* On the Alpha we always flush the whole user tlb. */
flush_tlb_mm(vma->vm_mm);
}
+EXPORT_SYMBOL(flush_tlb_range);
static void
ipi_flush_icache_page(void *x)
static int set_rtc_mmss(unsigned long);
DEFINE_SPINLOCK(rtc_lock);
+EXPORT_SYMBOL(rtc_lock);
#define TICK_SIZE (tick_nsec / 1000)
#include <linux/swap.h>
#include <linux/initrd.h>
#include <linux/pfn.h>
+#include <linux/module.h>
#include <asm/hwrpb.h>
#include <asm/pgalloc.h>
pg_data_t node_data[MAX_NUMNODES];
bootmem_data_t node_bdata[MAX_NUMNODES];
+EXPORT_SYMBOL(node_data);
#undef DEBUG_DISCONTIG
#ifdef DEBUG_DISCONTIG
EXPORT_SYMBOL(_find_first_bit_be);
EXPORT_SYMBOL(_find_next_bit_be);
#endif
-
- /* syscalls */
-EXPORT_SYMBOL(sys_write);
-EXPORT_SYMBOL(sys_lseek);
-EXPORT_SYMBOL(sys_exit);
-EXPORT_SYMBOL(sys_wait4);
.length = SZ_4K,
.type = MT_DEVICE
}, {
- .virtual = VERSATILE_PCI_VIRT_BASE,
+ .virtual = (unsigned long)VERSATILE_PCI_VIRT_BASE,
.pfn = __phys_to_pfn(VERSATILE_PCI_BASE),
.length = VERSATILE_PCI_BASE_SIZE,
.type = MT_DEVICE
}, {
- .virtual = VERSATILE_PCI_CFG_VIRT_BASE,
+ .virtual = (unsigned long)VERSATILE_PCI_CFG_VIRT_BASE,
.pfn = __phys_to_pfn(VERSATILE_PCI_CFG_BASE),
.length = VERSATILE_PCI_CFG_BASE_SIZE,
.type = MT_DEVICE
* Cfg 42000000 - 42FFFFFF PCI config
*
*/
-#define SYS_PCICTL IO_ADDRESS(VERSATILE_SYS_PCICTL)
-#define PCI_IMAP0 IO_ADDRESS(VERSATILE_PCI_CORE_BASE+0x0)
-#define PCI_IMAP1 IO_ADDRESS(VERSATILE_PCI_CORE_BASE+0x4)
-#define PCI_IMAP2 IO_ADDRESS(VERSATILE_PCI_CORE_BASE+0x8)
-#define PCI_SMAP0 IO_ADDRESS(VERSATILE_PCI_CORE_BASE+0x10)
-#define PCI_SMAP1 IO_ADDRESS(VERSATILE_PCI_CORE_BASE+0x14)
-#define PCI_SMAP2 IO_ADDRESS(VERSATILE_PCI_CORE_BASE+0x18)
-#define PCI_SELFID IO_ADDRESS(VERSATILE_PCI_CORE_BASE+0xc)
+#define __IO_ADDRESS(n) ((void __iomem *)(unsigned long)IO_ADDRESS(n))
+#define SYS_PCICTL __IO_ADDRESS(VERSATILE_SYS_PCICTL)
+#define PCI_IMAP0 __IO_ADDRESS(VERSATILE_PCI_CORE_BASE+0x0)
+#define PCI_IMAP1 __IO_ADDRESS(VERSATILE_PCI_CORE_BASE+0x4)
+#define PCI_IMAP2 __IO_ADDRESS(VERSATILE_PCI_CORE_BASE+0x8)
+#define PCI_SMAP0 __IO_ADDRESS(VERSATILE_PCI_CORE_BASE+0x10)
+#define PCI_SMAP1 __IO_ADDRESS(VERSATILE_PCI_CORE_BASE+0x14)
+#define PCI_SMAP2 __IO_ADDRESS(VERSATILE_PCI_CORE_BASE+0x18)
+#define PCI_SELFID __IO_ADDRESS(VERSATILE_PCI_CORE_BASE+0xc)
#define DEVICE_ID_OFFSET 0x00
#define CSR_OFFSET 0x04
__setup("pci_slot_ignore=", versatile_pci_slot_ignore);
-static unsigned long __pci_addr(struct pci_bus *bus,
+static void __iomem *__pci_addr(struct pci_bus *bus,
unsigned int devfn, int offset)
{
unsigned int busnr = bus->number;
if (devfn > 255)
BUG();
- return (VERSATILE_PCI_CFG_VIRT_BASE | (busnr << 16) |
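+	/* the config window encodes the bus number in bits 23:16, the
+	 * device (slot) in bits 15:11, the function in bits 10:8 and the
+	 * register offset in bits 7:0 */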
+ return VERSATILE_PCI_CFG_VIRT_BASE + ((busnr << 16) |
(PCI_SLOT(devfn) << 11) | (PCI_FUNC(devfn) << 8) | offset);
}
static int versatile_read_config(struct pci_bus *bus, unsigned int devfn, int where,
int size, u32 *val)
{
- unsigned long addr = __pci_addr(bus, devfn, where);
+ void __iomem *addr = __pci_addr(bus, devfn, where & ~3);
u32 v;
int slot = PCI_SLOT(devfn);
break;
case 2:
- v = __raw_readl(addr & ~3);
- if (addr & 2) v >>= 16;
+ v = __raw_readl(addr);
+ if (where & 2) v >>= 16;
v &= 0xffff;
break;
default:
- addr &= ~3;
v = __raw_readl(addr);
break;
}
static int versatile_write_config(struct pci_bus *bus, unsigned int devfn, int where,
int size, u32 val)
{
- unsigned long addr = __pci_addr(bus, devfn, where);
+ void __iomem *addr = __pci_addr(bus, devfn, where);
int slot = PCI_SLOT(devfn);
if (pci_slot_ignore & (1 << slot)) {
printk("PCI core found (slot %d)\n",myslot);
__raw_writel(myslot, PCI_SELFID);
- local_pci_cfg_base = (void *) VERSATILE_PCI_CFG_VIRT_BASE + (myslot << 11);
+ local_pci_cfg_base = VERSATILE_PCI_CFG_VIRT_BASE + (myslot << 11);
val = __raw_readl(local_pci_cfg_base + CSR_OFFSET);
val |= PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER | PCI_COMMAND_INVALIDATE;
info.si_signo = SIGFPE;
info.si_code = sicode;
- info.si_addr = (void *)(instruction_pointer(regs) - 4);
+ info.si_addr = (void __user *)(instruction_pointer(regs) - 4);
/*
* This is the same as NWFPE, because it's not clear what
EXPORT_SYMBOL(elf_platform);
EXPORT_SYMBOL(elf_hwcap);
- /* syscalls */
-EXPORT_SYMBOL(sys_write);
-EXPORT_SYMBOL(sys_read);
-EXPORT_SYMBOL(sys_lseek);
-EXPORT_SYMBOL(sys_open);
-EXPORT_SYMBOL(sys_exit);
-EXPORT_SYMBOL(sys_wait4);
-
#ifdef CONFIG_PREEMPT
EXPORT_SYMBOL(kernel_flag);
#endif
*
* In UP mode, it is invoked from the (global) timer_interrupt.
*/
-static void local_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+static void local_timer_interrupt(int irq, void *dev_id)
{
if (current->pid)
- profile_tick(CPU_PROFILING, regs);
- update_process_times(user_mode(regs));
+ profile_tick(CPU_PROFILING);
+ update_process_times(user_mode(get_irq_regs()));
}
static irqreturn_t
-timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+timer_interrupt(int irq, void *dev_id)
{
unsigned int count;
*
* SMP is not supported yet.
*/
- local_timer_interrupt(irq, dev_id, regs);
+ local_timer_interrupt(irq, dev_id);
return IRQ_HANDLED;
}
.set_type = eim_set_irq_type,
};
-static void demux_eim_irq(unsigned int irq, struct irq_desc *desc,
- struct pt_regs *regs)
+static void demux_eim_irq(unsigned int irq, struct irq_desc *desc)
{
struct at32_sm *sm = desc->handler_data;
struct irq_desc *ext_desc;
ext_irq = i + sm->eim_first_irq;
ext_desc = irq_desc + ext_irq;
- ext_desc->handle_irq(ext_irq, ext_desc, regs);
+ ext_desc->handle_irq(ext_irq, ext_desc);
}
spin_unlock(&sm->lock);
asmlinkage void do_IRQ(int level, struct pt_regs *regs)
{
struct irq_desc *desc;
+ struct pt_regs *old_regs;
unsigned int irq;
unsigned long status_reg;
local_irq_disable();
+ old_regs = set_irq_regs(regs);
+
irq_enter();
irq = intc_readl(&intc0, INTCAUSE0 - 4 * level);
desc = irq_desc + irq;
- desc->handle_irq(irq, desc, regs);
+ desc->handle_irq(irq, desc);
/*
* Clear all interrupt level masks so that we may handle
sysreg_write(SR, status_reg);
irq_exit();
+
+ set_irq_regs(old_regs);
}
void __init init_IRQ(void)
config M386
bool "386"
+ depends on !UML
---help---
This is the processor type of your CPU. This information is used for
optimizing purposes. In order to compile a kernel that can run on
config X86_USE_3DNOW
bool
- depends on MCYRIXIII || MK7 || MGEODE_LX
+ depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
default y
config X86_OOSTORE
}
#endif
-static struct hw_interrupt_type ht_irq_chip = {
+static struct irq_chip ht_irq_chip = {
.name = "PCI-HT",
.mask = mask_ht_irq,
.unmask = unmask_ht_irq,
static int mc_sysdev_add(struct sys_device *sys_dev)
{
- int cpu = sys_dev->id;
+ int err, cpu = sys_dev->id;
struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
if (!cpu_online(cpu))
return 0;
+
pr_debug("Microcode:CPU %d added\n", cpu);
memset(uci, 0, sizeof(*uci));
- sysfs_create_group(&sys_dev->kobj, &mc_attr_group);
+
+ err = sysfs_create_group(&sys_dev->kobj, &mc_attr_group);
+ if (err)
+ return err;
microcode_init_cpu(cpu);
return 0;
void __init zone_sizes_init(void)
{
+ unsigned long max_zone_pfns[MAX_NR_ZONES];
+ memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
+ max_zone_pfns[ZONE_DMA] =
+ virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT;
+ max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
#ifdef CONFIG_HIGHMEM
- unsigned long max_zone_pfns[MAX_NR_ZONES] = {
- virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT,
- max_low_pfn,
- highend_pfn};
+ max_zone_pfns[ZONE_HIGHMEM] = highend_pfn;
add_active_range(0, 0, highend_pfn);
#else
- unsigned long max_zone_pfns[MAX_NR_ZONES] = {
- virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT,
- max_low_pfn};
add_active_range(0, 0, max_low_pfn);
#endif
.long sys_vmsplice
.long sys_move_pages
.long sys_getcpu
+ .long sys_epoll_pwait
EXPORT_SYMBOL(__clear_user);
/**
- * strlen_user: - Get the size of a string in user space.
+ * strnlen_user: - Get the size of a string in user space.
* @s: The string to measure.
* @n: The maximum valid length
*
void __init zone_sizes_init(void)
{
int nid;
- unsigned long max_zone_pfns[MAX_NR_ZONES] = {
- virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT,
- max_low_pfn,
- highend_pfn
- };
+ unsigned long max_zone_pfns[MAX_NR_ZONES];
+ memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
+ max_zone_pfns[ZONE_DMA] =
+ virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT;
+ max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+ max_zone_pfns[ZONE_HIGHMEM] = highend_pfn;
/* If SRAT has not registered memory, register it now */
if (find_max_pfn_with_active_regions() == 0) {
efi_memmap_walk(count_pages, &num_physpages);
max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS) >> PAGE_SHIFT;
+ memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
max_zone_pfns[ZONE_DMA] = max_dma;
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
max_pfn = mem_data[node].max_pfn;
}
+ memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
max_zone_pfns[ZONE_DMA] = max_dma;
max_zone_pfns[ZONE_NORMAL] = max_pfn;
free_area_init_nodes(max_zone_pfns);
}
struct seq_operations cpuinfo_op = {
- start: c_start,
- next: c_next,
- stop: c_stop,
- show: show_cpuinfo,
+ .start = c_start,
+ .next = c_next,
+ .stop = c_stop,
+ .show = show_cpuinfo,
};
#endif /* CONFIG_PROC_FS */
/* INT0 : LAN controller (RTL8019AS) */
irq_desc[M32R_IRQ_INT0].status = IRQ_DISABLED;
irq_desc[M32R_IRQ_INT0].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_INT0].action = 0;
+ irq_desc[M32R_IRQ_INT0].action = NULL;
irq_desc[M32R_IRQ_INT0].depth = 1;
icu_data[M32R_IRQ_INT0].icucr = M32R_ICUCR_IEN|M32R_ICUCR_ISMOD10;
disable_mappi_irq(M32R_IRQ_INT0);
/* MFT2 : system timer */
irq_desc[M32R_IRQ_MFT2].status = IRQ_DISABLED;
irq_desc[M32R_IRQ_MFT2].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_MFT2].action = 0;
+ irq_desc[M32R_IRQ_MFT2].action = NULL;
irq_desc[M32R_IRQ_MFT2].depth = 1;
icu_data[M32R_IRQ_MFT2].icucr = M32R_ICUCR_IEN;
disable_mappi_irq(M32R_IRQ_MFT2);
/* SIO0_R : uart receive data */
irq_desc[M32R_IRQ_SIO0_R].status = IRQ_DISABLED;
irq_desc[M32R_IRQ_SIO0_R].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_SIO0_R].action = 0;
+ irq_desc[M32R_IRQ_SIO0_R].action = NULL;
irq_desc[M32R_IRQ_SIO0_R].depth = 1;
icu_data[M32R_IRQ_SIO0_R].icucr = 0;
disable_mappi_irq(M32R_IRQ_SIO0_R);
/* SIO0_S : uart send data */
irq_desc[M32R_IRQ_SIO0_S].status = IRQ_DISABLED;
irq_desc[M32R_IRQ_SIO0_S].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_SIO0_S].action = 0;
+ irq_desc[M32R_IRQ_SIO0_S].action = NULL;
irq_desc[M32R_IRQ_SIO0_S].depth = 1;
icu_data[M32R_IRQ_SIO0_S].icucr = 0;
disable_mappi_irq(M32R_IRQ_SIO0_S);
/* SIO1_R : uart receive data */
irq_desc[M32R_IRQ_SIO1_R].status = IRQ_DISABLED;
irq_desc[M32R_IRQ_SIO1_R].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_SIO1_R].action = 0;
+ irq_desc[M32R_IRQ_SIO1_R].action = NULL;
irq_desc[M32R_IRQ_SIO1_R].depth = 1;
icu_data[M32R_IRQ_SIO1_R].icucr = 0;
disable_mappi_irq(M32R_IRQ_SIO1_R);
/* SIO1_S : uart send data */
irq_desc[M32R_IRQ_SIO1_S].status = IRQ_DISABLED;
irq_desc[M32R_IRQ_SIO1_S].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_SIO1_S].action = 0;
+ irq_desc[M32R_IRQ_SIO1_S].action = NULL;
irq_desc[M32R_IRQ_SIO1_S].depth = 1;
icu_data[M32R_IRQ_SIO1_S].icucr = 0;
disable_mappi_irq(M32R_IRQ_SIO1_S);
/* INT1 : pccard0 interrupt */
irq_desc[M32R_IRQ_INT1].status = IRQ_DISABLED;
irq_desc[M32R_IRQ_INT1].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_INT1].action = 0;
+ irq_desc[M32R_IRQ_INT1].action = NULL;
irq_desc[M32R_IRQ_INT1].depth = 1;
icu_data[M32R_IRQ_INT1].icucr = M32R_ICUCR_IEN | M32R_ICUCR_ISMOD00;
disable_mappi_irq(M32R_IRQ_INT1);
/* INT2 : pccard1 interrupt */
irq_desc[M32R_IRQ_INT2].status = IRQ_DISABLED;
irq_desc[M32R_IRQ_INT2].chip = &mappi_irq_type;
- irq_desc[M32R_IRQ_INT2].action = 0;
+ irq_desc[M32R_IRQ_INT2].action = NULL;
irq_desc[M32R_IRQ_INT2].depth = 1;
icu_data[M32R_IRQ_INT2].icucr = M32R_ICUCR_IEN | M32R_ICUCR_ISMOD00;
disable_mappi_irq(M32R_IRQ_INT2);
int do_signal(struct pt_regs *, sigset_t *);
asmlinkage int
-sys_rt_sigsuspend(sigset_t *unewset, size_t sigsetsize,
+sys_rt_sigsuspend(sigset_t __user *unewset, size_t sigsetsize,
unsigned long r2, unsigned long r3, unsigned long r4,
unsigned long r5, unsigned long r6, struct pt_regs *regs)
{
struct rt_sigframe
{
int sig;
- struct siginfo *pinfo;
- void *puc;
+ struct siginfo __user *pinfo;
+ void __user *puc;
struct siginfo info;
struct ucontext uc;
// struct _fpstate fpstate;
local_irq_save(flags);
__flush_tlb_all();
local_irq_restore(flags);
- smp_call_function(flush_tlb_all_ipi, 0, 1, 1);
+ smp_call_function(flush_tlb_all_ipi, NULL, 1, 1);
preempt_enable();
}
/*
* sys_tas() - test-and-set
*/
-asmlinkage int sys_tas(int *addr)
+asmlinkage int sys_tas(int __user *addr)
{
int oldval;
error = do_pipe(fd);
if (!error) {
- if (copy_to_user((void *)r0, (void *)fd, 2*sizeof(int)))
+ if (copy_to_user((void __user *)r0, fd, 2*sizeof(int)))
error = -EFAULT;
}
return error;
}
}
-asmlinkage int sys_uname(struct old_utsname * name)
+asmlinkage int sys_uname(struct old_utsname __user * name)
{
int err;
if (!name)
#define DO_ERROR(trapnr, signr, str, name) \
asmlinkage void do_##name(struct pt_regs * regs, long error_code) \
{ \
- do_trap(trapnr, signr, 0, regs, error_code, NULL); \
+ do_trap(trapnr, signr, NULL, regs, error_code, NULL); \
}
#define DO_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr) \
#include <linux/module.h>
-#include <linux/linkage.h>
-#include <linux/sched.h>
-#include <linux/mm.h>
-#include <linux/user.h>
-#include <linux/elfcore.h>
-#include <linux/in6.h>
-#include <linux/interrupt.h>
-
-#include <asm/setup.h>
-#include <asm/machdep.h>
-#include <asm/pgalloc.h>
-#include <asm/irq.h>
-#include <asm/io.h>
#include <asm/semaphore.h>
-#include <asm/checksum.h>
asmlinkage long long __ashldi3 (long long, int);
asmlinkage long long __ashrdi3 (long long, int);
asmlinkage long long __lshrdi3 (long long, int);
asmlinkage long long __muldi3 (long long, long long);
-extern char m68k_debug_device[];
-
-/* platform dependent support */
-
-EXPORT_SYMBOL(m68k_machtype);
-EXPORT_SYMBOL(m68k_cputype);
-EXPORT_SYMBOL(m68k_is040or060);
-EXPORT_SYMBOL(m68k_realnum_memory);
-EXPORT_SYMBOL(m68k_memory);
-#ifndef CONFIG_SUN3
-EXPORT_SYMBOL(cache_push);
-EXPORT_SYMBOL(cache_clear);
-#ifndef CONFIG_SINGLE_MEMORY_CHUNK
-EXPORT_SYMBOL(mm_vtop);
-EXPORT_SYMBOL(mm_ptov);
-EXPORT_SYMBOL(mm_end_of_chunk);
-#else
-EXPORT_SYMBOL(m68k_memoffset);
-#endif /* !CONFIG_SINGLE_MEMORY_CHUNK */
-EXPORT_SYMBOL(__ioremap);
-EXPORT_SYMBOL(iounmap);
-EXPORT_SYMBOL(kernel_set_cachemode);
-#endif /* !CONFIG_SUN3 */
-EXPORT_SYMBOL(m68k_debug_device);
-EXPORT_SYMBOL(mach_hwclk);
-EXPORT_SYMBOL(mach_get_ss);
-EXPORT_SYMBOL(mach_get_rtc_pll);
-EXPORT_SYMBOL(mach_set_rtc_pll);
-#ifdef CONFIG_INPUT_M68K_BEEP_MODULE
-EXPORT_SYMBOL(mach_beep);
-#endif
-EXPORT_SYMBOL(dump_fpu);
-EXPORT_SYMBOL(dump_thread);
-EXPORT_SYMBOL(kernel_thread);
-#ifdef CONFIG_VME
-EXPORT_SYMBOL(vme_brdtype);
-#endif
/* The following are special because they're not called
explicitly (the C compiler generates them). Fortunately,
set_fs (fs);
return pid;
}
+EXPORT_SYMBOL(kernel_thread);
void flush_thread(void)
{
{
unsigned long clone_flags;
unsigned long newsp;
- int *parent_tidptr, *child_tidptr;
+ int __user *parent_tidptr, *child_tidptr;
/* syscall2 puts clone_flags in d1 and usp in d2 */
clone_flags = regs->d1;
newsp = regs->d2;
- parent_tidptr = (int *)regs->d3;
- child_tidptr = (int *)regs->d4;
+ parent_tidptr = (int __user *)regs->d3;
+ child_tidptr = (int __user *)regs->d4;
if (!newsp)
newsp = rdusp();
return do_fork(clone_flags, newsp, regs, 0,
: "memory");
return 1;
}
+EXPORT_SYMBOL(dump_fpu);
/*
* fill in the user structure for a core dump..
/* dump floating point stuff */
dump->u_fpvalid = dump_fpu (regs, &dump->m68kfp);
}
+EXPORT_SYMBOL(dump_thread);
/*
* sys_execve() executes a new program.
*/
-asmlinkage int sys_execve(char *name, char **argv, char **envp)
+asmlinkage int sys_execve(char __user *name, char __user * __user *argv, char __user * __user *envp)
{
int error;
char * filename;
unsigned long m68k_machtype;
unsigned long m68k_cputype;
+EXPORT_SYMBOL(m68k_machtype);
+EXPORT_SYMBOL(m68k_cputype);
unsigned long m68k_fputype;
unsigned long m68k_mmutype;
#ifdef CONFIG_VME
unsigned long vme_brdtype;
+EXPORT_SYMBOL(vme_brdtype);
#endif
int m68k_is040or060;
+EXPORT_SYMBOL(m68k_is040or060);
extern int end;
extern unsigned long availmem;
int m68k_num_memory;
int m68k_realnum_memory;
+EXPORT_SYMBOL(m68k_realnum_memory);
+#ifdef CONFIG_SINGLE_MEMORY_CHUNK
unsigned long m68k_memoffset;
+EXPORT_SYMBOL(m68k_memoffset);
+#endif
struct mem_info m68k_memory[NUM_MEMINFO];
+EXPORT_SYMBOL(m68k_memory);
static struct mem_info m68k_ramdisk;
static char m68k_command_line[CL_SIZE];
char m68k_debug_device[6] = "";
+EXPORT_SYMBOL(m68k_debug_device);
void (*mach_sched_init) (irq_handler_t handler) __initdata = NULL;
/* machine dependent irq functions */
/* machine dependent timer functions */
unsigned long (*mach_gettimeoffset) (void);
int (*mach_hwclk) (int, struct rtc_time*);
+EXPORT_SYMBOL(mach_hwclk);
int (*mach_set_clock_mmss) (unsigned long);
unsigned int (*mach_get_ss)(void);
int (*mach_get_rtc_pll)(struct rtc_pll_info *);
int (*mach_set_rtc_pll)(struct rtc_pll_info *);
+EXPORT_SYMBOL(mach_get_ss);
+EXPORT_SYMBOL(mach_get_rtc_pll);
+EXPORT_SYMBOL(mach_set_rtc_pll);
void (*mach_reset)( void );
void (*mach_halt)( void );
void (*mach_power_off)( void );
#endif
#if defined(CONFIG_INPUT_M68K_BEEP) || defined(CONFIG_INPUT_M68K_BEEP_MODULE)
void (*mach_beep)(unsigned int, unsigned int);
+EXPORT_SYMBOL(mach_beep);
#endif
#if defined(CONFIG_ISA) && defined(MULTI_ISA)
int isa_type;
switch (wbs & WBSIZ_040) {
case BA_SIZE_BYTE:
- res = put_user(wbd & 0xff, (char *)wba);
+ res = put_user(wbd & 0xff, (char __user *)wba);
break;
case BA_SIZE_WORD:
- res = put_user(wbd & 0xffff, (short *)wba);
+ res = put_user(wbd & 0xffff, (short __user *)wba);
break;
case BA_SIZE_LONG:
- res = put_user(wbd, (int *)wba);
+ res = put_user(wbd, (int __user *)wba);
break;
}
* used by other architectures /Roman Zippel
*/
+#include <linux/module.h>
#include <linux/mm.h>
#include <linux/kernel.h>
#include <linux/string.h>
return (void __iomem *)retaddr;
}
+EXPORT_SYMBOL(__ioremap);
/*
* Unmap a ioremap()ed region again
free_io_area((__force void *)addr);
#endif
}
+EXPORT_SYMBOL(iounmap);
/*
* __iounmap unmaps nearly everything, so be careful
flush_tlb_all();
}
+EXPORT_SYMBOL(kernel_set_cachemode);
* Copyright (C) 1995 Hamish Macdonald
*/
+#include <linux/module.h>
#include <linux/mm.h>
#include <linux/kernel.h>
#include <linux/string.h>
return -1;
}
-#endif
+EXPORT_SYMBOL(mm_vtop);
-#ifndef CONFIG_SINGLE_MEMORY_CHUNK
unsigned long mm_ptov (unsigned long paddr)
{
int i = 0;
#endif
return -1;
}
+EXPORT_SYMBOL(mm_ptov);
#endif
/* invalidate page in both caches */
mach_l2_flush(0);
#endif
}
+EXPORT_SYMBOL(cache_clear); /* probably can be unexported */
/*
mach_l2_flush(1);
#endif
}
+EXPORT_SYMBOL(cache_push); /* probably can be unexported */
#ifndef CONFIG_SINGLE_MEMORY_CHUNK
int mm_end_of_chunk (unsigned long addr, int len)
return 1;
return 0;
}
+EXPORT_SYMBOL(mm_end_of_chunk);
#endif
* for more details.
*/
+#include <linux/module.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/mm.h>
}
}
-void *sun3_ioremap(unsigned long phys, unsigned long size,
+void __iomem *sun3_ioremap(unsigned long phys, unsigned long size,
unsigned long type)
{
struct vm_struct *area;
virt += seg_pages * PAGE_SIZE;
}
- return (void *)ret;
+ return (void __iomem *)ret;
}
-void *__ioremap(unsigned long phys, unsigned long size, int cache)
+void __iomem *__ioremap(unsigned long phys, unsigned long size, int cache)
{
return sun3_ioremap(phys, size, SUN3_PAGE_TYPE_IO);
}
+EXPORT_SYMBOL(__ioremap);
-void iounmap(void *addr)
+void iounmap(void __iomem *addr)
{
vfree((void *)(PAGE_MASK & (unsigned long)addr));
}
+EXPORT_SYMBOL(iounmap);
/* sun3_map_test(addr, val) -- Reads a byte from addr, storing to val,
* trapping the potential read fault. Returns 0 if the access faulted,
# Makefile for Linux arch/m68k/sun3 source directory
#
-obj-y := sun3_ksyms.o sun3ints.o sun3dvma.o sbus.o idprom.o
+obj-y := sun3ints.o sun3dvma.o sbus.o idprom.o
obj-$(CONFIG_SUN3) += config.o mmu_emu.o leds.o dvma.o intersil.o
* Sun3/3x models added by David Monro (davidm@psrg.cs.usyd.edu.au)
*/
+#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/init.h>
#include <asm/machines.h> /* Fun with Sun released architectures. */
struct idprom *idprom;
+EXPORT_SYMBOL(idprom);
+
static struct idprom idprom_buffer;
/* Here is the master table of Sun machines which use some implementation
+++ /dev/null
-#include <linux/module.h>
-#include <linux/types.h>
-#include <asm/dvma.h>
-#include <asm/idprom.h>
-
-/*
- * Add things here when you find the need for it.
- */
-EXPORT_SYMBOL(dvma_map_align);
-EXPORT_SYMBOL(dvma_unmap);
-EXPORT_SYMBOL(dvma_malloc_align);
-EXPORT_SYMBOL(dvma_free);
-EXPORT_SYMBOL(idprom);
* Contains common routines for sun3/sun3x DVMA management.
*/
+#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/list.h>
BUG();
return 0;
}
+EXPORT_SYMBOL(dvma_map_align);
void dvma_unmap(void *baddr)
{
return;
}
-
+EXPORT_SYMBOL(dvma_unmap);
void *dvma_malloc_align(unsigned long len, unsigned long align)
{
return (void *)vaddr;
}
+EXPORT_SYMBOL(dvma_malloc_align);
void dvma_free(void *vaddr)
{
return;
}
+EXPORT_SYMBOL(dvma_free);
EXPORT_SYMBOL(memcpy_fromio);
EXPORT_SYMBOL(memset_io);
-#include <asm/unistd.h>
-EXPORT_SYMBOL(sys_lseek);
-EXPORT_SYMBOL(sys_write);
-
#include <asm/semaphore.h>
EXPORT_SYMBOL(__up);
EXPORT_SYMBOL(__down_interruptible);
top_of_ram, total_ram);
printk(KERN_DEBUG "Memory hole size: %ldMB\n",
(top_of_ram - total_ram) >> 20);
+ memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
#ifdef CONFIG_HIGHMEM
- max_zone_pfns[0] = total_lowmem >> PAGE_SHIFT;
- max_zone_pfns[1] = top_of_ram >> PAGE_SHIFT;
+ max_zone_pfns[ZONE_DMA] = total_lowmem >> PAGE_SHIFT;
+ max_zone_pfns[ZONE_HIGHMEM] = top_of_ram >> PAGE_SHIFT;
#else
- max_zone_pfns[0] = top_of_ram >> PAGE_SHIFT;
+ max_zone_pfns[ZONE_DMA] = top_of_ram >> PAGE_SHIFT;
#endif
free_area_init_nodes(max_zone_pfns);
}
void __init paging_init(void)
{
- unsigned long max_zone_pfns[MAX_NR_ZONES] = {
- lmb_end_of_DRAM() >> PAGE_SHIFT
- };
+ unsigned long max_zone_pfns[MAX_NR_ZONES];
+ memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
+ max_zone_pfns[ZONE_DMA] = lmb_end_of_DRAM() >> PAGE_SHIFT;
free_area_init_nodes(max_zone_pfns);
}
end_pfn = start_pfn + (total_memory >> PAGE_SHIFT);
add_active_range(0, start_pfn, end_pfn);
+ memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
#ifdef CONFIG_HIGHMEM
- max_zone_pfns[0] = total_lowmem >> PAGE_SHIFT;
- max_zone_pfns[1] = total_memory >> PAGE_SHIFT;
+ max_zone_pfns[ZONE_DMA] = total_lowmem >> PAGE_SHIFT;
+ max_zone_pfns[ZONE_HIGHMEM] = total_memory >> PAGE_SHIFT;
#else
- max_zone_pfns[0] = total_memory >> PAGE_SHIFT;
+ max_zone_pfns[ZONE_DMA] = total_memory >> PAGE_SHIFT;
#endif /* CONFIG_HIGHMEM */
free_area_init_nodes(max_zone_pfns);
}
EXPORT_SYMBOL(console_mode);
EXPORT_SYMBOL(console_devno);
EXPORT_SYMBOL(console_irq);
-EXPORT_SYMBOL(sys_wait4);
config PCMCIA
bool
+# Yet to do!
+config TRACE_IRQFLAGS_SUPPORT
+ bool
+ default n
+
+config LOCKDEP_SUPPORT
+ bool
+ default y
+
+config STACKTRACE_SUPPORT
+ bool
+ default y
+
config GENERIC_CALIBRATE_DELAY
bool
default y
menu "UML-specific options"
config MODE_TT
- bool "Tracing thread support"
+ bool "Tracing thread support (DEPRECATED)"
default n
help
This option controls whether tracing thread support is compiled
- into UML. This option is largely obsolete, given that skas0 provides
+ into UML. This option is largely obsolete, given that skas0 provides
skas security and performance without needing to patch the host.
- It is safe to say 'N' here.
+ It is safe to say 'N' here; saying 'Y' may cause additional problems
+ with the resulting binary even if you run UML in SKAS mode, and running
+ in TT mode is strongly *NOT RECOMMENDED*.
config STATIC_LINK
bool "Force a static link"
for use in a chroot jail. So, if you intend to run UML inside a
chroot, and you disable CONFIG_MODE_TT, you probably want to say Y
here.
+	  Additionally, this option enables using higher memory spaces (up to
+	  2.75G) for UML - disabling CONFIG_MODE_TT and enabling this option
+	  gives the best results.
config KERNEL_HALF_GIGS
int "Kernel address space size (in .5G units)"
default y
help
This option controls whether skas (separate kernel address space)
- support is compiled in. If you have applied the skas patch to the
- host, then you certainly want to say Y here (and consider saying N
- to CONFIG_MODE_TT). Otherwise, it is safe to say Y. Disabling this
- option will shrink the UML binary slightly.
+	  support is compiled in.
+
+	  Unless you have specific needs to use TT mode (which applies almost
+	  exclusively to developers), you should say Y here.
+
+	  SKAS mode will make use of the SKAS3 patch if it is applied on the
+	  host (and your UML will then run in SKAS3 mode), but if no SKAS patch
+	  is applied on the host it will run in SKAS0 mode, which is still
+	  faster than TT mode.
source "arch/um/Kconfig.arch"
source "mm/Kconfig"
bool
default y
-config HOST_2G_2G
- bool "2G/2G host address space split"
- default n
- help
- This is needed when the host on which you run has a 2G/2G memory
- split, instead of the customary 3G/1G.
-
- Note that to enable such a host
- configuration, which makes sense only in some cases, you need special
- host patches.
-
- So, if you do not know what to do here, say 'N'.
+choice
+ prompt "Host memory split"
+ default HOST_VMSPLIT_3G
+ ---help---
+	  This is needed when the host kernel on which you run has a non-default
+	  (like 2G/2G) memory split, instead of the customary 3G/1G. If you did
+	  not recompile your own kernel but use the default distro's one, you
+	  can safely accept the "Default split" option.
+
+	  It can be enabled on recent (>=2.6.16-rc2) vanilla kernels via
+	  CONFIG_VMSPLIT_*, or on previous kernels with special patches (the
+	  -ck patchset by Con Kolivas, or others) - the option names closely
+	  match the host CONFIG_VMSPLIT_* ones.
+
+	  A lower setting (where 1G/3G is lowest and 3G/1G is highest) will
+	  tolerate more "normal" host kernels, but a higher setting will be
+	  stricter.
+
+	  So, if you do not know what to do here, say 'Default split'.
+
+ config HOST_VMSPLIT_3G
+ bool "Default split (3G/1G user/kernel host split)"
+ config HOST_VMSPLIT_3G_OPT
+ bool "3G/1G user/kernel host split (for full 1G low memory)"
+ config HOST_VMSPLIT_2G
+ bool "2G/2G user/kernel host split"
+ config HOST_VMSPLIT_1G
+ bool "1G/3G user/kernel host split"
+endchoice
config TOP_ADDR
- hex
- default 0xc0000000 if !HOST_2G_2G
- default 0x80000000 if HOST_2G_2G
+ hex
+ default 0xB0000000 if HOST_VMSPLIT_3G_OPT
+ default 0x78000000 if HOST_VMSPLIT_2G
+ default 0x40000000 if HOST_VMSPLIT_1G
+ default 0xC0000000
config 3_LEVEL_PGTABLES
bool "Three-level pagetables (EXPERIMENTAL)"
# Copyright 2003 - 2004 Pathscale, Inc
# Released under the GPL
-core-y += arch/um/sys-x86_64/
+core-y += arch/um/sys-x86_64/ arch/x86_64/crypto/
START := 0x60000000
-_extra_flags_ = -fno-builtin -m64 -mcmodel=kernel
+_extra_flags_ = -fno-builtin -m64
#We #undef __x86_64__ for kernelspace, not for userspace where
#it's needed for headers to work!
/* for use by sys-$SUBARCH/kernel-offsets.c */
+DEFINE(KERNEL_MADV_REMOVE, MADV_REMOVE);
+#ifdef CONFIG_MODE_TT
+OFFSET(HOST_TASK_EXTERN_PID, task_struct, thread.mode.tt.extern_pid);
+#endif
+
OFFSET(HOST_TASK_REGS, task_struct, thread.regs);
OFFSET(HOST_TASK_PID, task_struct, pid);
+
DEFINE(UM_KERN_PAGE_SIZE, PAGE_SIZE);
DEFINE(UM_NSEC_PER_SEC, NSEC_PER_SEC);
+
DEFINE_STR(UM_KERN_EMERG, KERN_EMERG);
DEFINE_STR(UM_KERN_ALERT, KERN_ALERT);
DEFINE_STR(UM_KERN_CRIT, KERN_CRIT);
DEFINE_STR(UM_KERN_NOTICE, KERN_NOTICE);
DEFINE_STR(UM_KERN_INFO, KERN_INFO);
DEFINE_STR(UM_KERN_DEBUG, KERN_DEBUG);
+
DEFINE(UM_ELF_CLASS, ELF_CLASS);
DEFINE(UM_ELFCLASS32, ELFCLASS32);
DEFINE(UM_ELFCLASS64, ELFCLASS64);
+
+/* For crypto assembler code. */
+DEFINE(crypto_tfm_ctx_offset, offsetof(struct crypto_tfm, __crt_ctx));
} while(0)
#define UML_SETJMP(buf) ({ \
- int n, enable; \
+ int n; \
+ volatile int enable; \
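+	/* 'enable' is read back after a longjmp() lands here; volatile	\
+	   keeps the compiler from caching it in a register that the	\
+	   jump would clobber. */					\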
enable = get_signals(); \
n = setjmp(*buf); \
if(n != 0) \
#ifdef UML_CONFIG_MODE_TT
extern void init_new_thread_stack(void *sig_stack, void (*usr1_handler)(int));
+extern void stop(void);
#endif
extern void init_new_thread_signals(void);
extern int run_kernel_thread(int (*fn)(void *), void *arg, void **jmp_ptr);
#include <linux/stddef.h>
#include <linux/sched.h>
#include <linux/elf.h>
+#include <linux/crypto.h>
#include <asm/mman.h>
#define DEFINE(sym, val) \
void foo(void)
{
OFFSET(HOST_TASK_DEBUGREGS, task_struct, thread.arch.debugregs);
- DEFINE(KERNEL_MADV_REMOVE, MADV_REMOVE);
-#ifdef CONFIG_MODE_TT
- OFFSET(HOST_TASK_EXTERN_PID, task_struct, thread.mode.tt.extern_pid);
-#endif
#include <common-offsets.h>
}
#include <linux/sched.h>
#include <linux/time.h>
#include <linux/elf.h>
+#include <linux/crypto.h>
#include <asm/page.h>
#include <asm/mman.h>
void foo(void)
{
- DEFINE(KERNEL_MADV_REMOVE, MADV_REMOVE);
-#ifdef CONFIG_MODE_TT
- OFFSET(HOST_TASK_EXTERN_PID, task_struct, thread.mode.tt.extern_pid);
-#endif
#include <common-offsets.h>
}
#endif
*pte = mk_pte(virt_to_page(kernel), __pgprot(_PAGE_PRESENT));
- /* This is wrong for the code page, but it doesn't matter since the
- * stub is mapped by hand with the correct permissions.
- */
- *pte = pte_mkwrite(*pte);
+ *pte = pte_mkread(*pte);
return(0);
out_pmd:
* Licensed under the GPL
*/
-#include <setjmp.h>
#include <string.h>
#include "user_util.h"
#include "uml_uaccess.h"
#include "task.h"
#include "kern_util.h"
#include "os.h"
+#include "longjmp.h"
int __do_copy_from_user(void *to, const void *from, int n,
void **fault_addr, void **fault_catcher)
struct tt_regs save = TASK_REGS(get_current())->tt;
int ret;
unsigned long *faddrp = (unsigned long *)fault_addr;
- sigjmp_buf jbuf;
+ jmp_buf jbuf;
*fault_catcher = &jbuf;
- if(sigsetjmp(jbuf, 1) == 0)
+ if(UML_SETJMP(&jbuf) == 0)
ret = strlen(str) + 1;
else ret = *faddrp - (unsigned long) str;
#include <errno.h>
#include <stdarg.h>
#include <stdlib.h>
-#include <setjmp.h>
#include <sys/time.h>
#include <sys/ptrace.h>
#include <linux/ptrace.h>
struct utsname host;
uname(&host);
-#if defined(UML_CONFIG_UML_X86) && !defined(UML_CONFIG_64BIT)
+#ifdef UML_CONFIG_UML_X86
+# ifndef UML_CONFIG_64BIT
if (!strcmp(host.machine, "x86_64")) {
strcpy(machine_out, "i686");
return;
}
+# else
+ if (!strcmp(host.machine, "i686")) {
+ strcpy(machine_out, "x86_64");
+ return;
+ }
+# endif
#endif
strcpy(machine_out, host.machine);
}
/*XXX: we need them because they would be exported by x86_64 */
EXPORT_SYMBOL(__memcpy);
-
-/* Networking helper routines. */
-EXPORT_SYMBOL(ip_compute_csum);
}
#endif
-static struct hw_interrupt_type ht_irq_chip = {
+static struct irq_chip ht_irq_chip = {
.name = "PCI-HT",
.mask = mask_ht_irq,
.unmask = unmask_ht_irq,
#ifndef CONFIG_NUMA
void __init paging_init(void)
{
- unsigned long max_zone_pfns[MAX_NR_ZONES] = {MAX_DMA_PFN,
- MAX_DMA32_PFN,
- end_pfn};
+ unsigned long max_zone_pfns[MAX_NR_ZONES];
+ memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
+ max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN;
+ max_zone_pfns[ZONE_DMA32] = MAX_DMA32_PFN;
+ max_zone_pfns[ZONE_NORMAL] = end_pfn;
+
memory_present(0, 0, end_pfn);
sparse_init();
free_area_init_nodes(max_zone_pfns);
void __init paging_init(void)
{
int i;
- unsigned long max_zone_pfns[MAX_NR_ZONES] = { MAX_DMA_PFN,
- MAX_DMA32_PFN,
- end_pfn};
+ unsigned long max_zone_pfns[MAX_NR_ZONES];
+ memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
+ max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN;
+ max_zone_pfns[ZONE_DMA32] = MAX_DMA32_PFN;
+ max_zone_pfns[ZONE_NORMAL] = end_pfn;
arch_sparse_init();
static inline void DAC960_P_To_PD_TranslateDeviceState(void *DeviceState)
{
memcpy(DeviceState + 2, DeviceState + 3, 1);
- memcpy(DeviceState + 4, DeviceState + 5, 2);
- memcpy(DeviceState + 6, DeviceState + 8, 4);
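+	/* the source and destination ranges overlap here, so memmove() is
+	 * required; memcpy() on overlapping buffers is undefined */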
+ memmove(DeviceState + 4, DeviceState + 5, 2);
+ memmove(DeviceState + 6, DeviceState + 8, 4);
}
static inline
return get_disk(unit[drive].gendisk);
}
-int __init amiga_floppy_init(void)
+static int __init amiga_floppy_init(void)
{
int i, ret;
+ if (!MACH_IS_AMIGA)
+ return -ENXIO;
+
if (!AMIGAHW_PRESENT(AMI_FLOPPY))
return -ENXIO;
return ret;
}
+module_init(amiga_floppy_init);
#ifdef MODULE
-int init_module(void)
-{
- if (!MACH_IS_AMIGA)
- return -ENXIO;
- return amiga_floppy_init();
-}
-
#if 0 /* not safe to unload */
void cleanup_module(void)
{
#include <linux/blkdev.h>
#include <linux/blkpg.h>
#include <linux/delay.h>
+#include <linux/io.h>
#include <asm/system.h>
-#include <asm/io.h>
#include <asm/uaccess.h>
#include <asm/dma.h>
static struct request_queue *z2_queue;
-int __init
+static int __init
z2_init(void)
{
int ret;
return ret;
}
-#if defined(MODULE)
-
-MODULE_LICENSE("GPL");
-
-int
-init_module( void )
-{
- int error;
-
- error = z2_init();
- if ( error == 0 )
- {
- printk( KERN_INFO DEVICE_NAME ": loaded as module\n" );
- }
-
- return error;
-}
-
-void
-cleanup_module( void )
+static void __exit z2_exit(void)
{
int i, j;
blk_unregister_region(MKDEV(Z2RAM_MAJOR, 0), 256);
return;
}
-#endif
+
+module_init(z2_init);
+module_exit(z2_exit);
+MODULE_LICENSE("GPL");
// applications that one cannot break out of.
//******************************************************************************
static int
-i2Output(i2ChanStrPtr pCh, const char *pSource, int count, int user )
+i2Output(i2ChanStrPtr pCh, const char *pSource, int count)
{
i2eBordStrPtr pB;
unsigned char *pInsert;
int bailout = 10;
- ip2trace (CHANN, ITRC_OUTPUT, ITRC_ENTER, 2, count, user );
+ ip2trace (CHANN, ITRC_OUTPUT, ITRC_ENTER, 2, count, 0 );
// Ensure channel structure seems real
if ( !i2Validate ( pCh ) )
DATA_COUNT_OF(pInsert) = amountToMove;
// Move the data
- if ( user ) {
- rc = copy_from_user((char*)(DATA_OF(pInsert)), pSource,
- amountToMove );
- } else {
- memcpy( (char*)(DATA_OF(pInsert)), pSource, amountToMove );
- }
+ memcpy( (char*)(DATA_OF(pInsert)), pSource, amountToMove );
// Adjust pointers and indices
pSource += amountToMove;
pCh->Obuf_char_count += amountToMove;
static int i2GetStatus(i2ChanStrPtr, int);
static int i2Input(i2ChanStrPtr);
static int i2InputFlush(i2ChanStrPtr);
-static int i2Output(i2ChanStrPtr, const char *, int, int);
+static int i2Output(i2ChanStrPtr, const char *, int);
static int i2OutputFree(i2ChanStrPtr);
static int i2ServiceBoard(i2eBordStrPtr);
static void i2DrainOutput(i2ChanStrPtr, int);
/* This is the actual move bit. Make sure it does what we need!!!!! */
WRITE_LOCK_IRQSAVE(&pCh->Pbuf_spinlock,flags);
- bytesSent = i2Output( pCh, pData, count, 0 );
+ bytesSent = i2Output( pCh, pData, count);
WRITE_UNLOCK_IRQRESTORE(&pCh->Pbuf_spinlock,flags);
ip2trace (CHANN, ITRC_WRITE, ITRC_RETURN, 1, bytesSent );
//
// We may need to restart i2Output if it does not fullfill this request
//
- strip = i2Output( pCh, pCh->Pbuf, pCh->Pbuf_stuff, 0 );
+ strip = i2Output( pCh, pCh->Pbuf, pCh->Pbuf_stuff);
if ( strip != pCh->Pbuf_stuff ) {
memmove( pCh->Pbuf, &pCh->Pbuf[strip], pCh->Pbuf_stuff - strip );
}
(long long) bmc->guid[8]);
}
-static void
-cleanup_bmc_device(struct kref *ref)
+static void remove_files(struct bmc_device *bmc)
{
- struct bmc_device *bmc;
-
- bmc = container_of(ref, struct bmc_device, refcount);
-
device_remove_file(&bmc->dev->dev,
&bmc->device_id_attr);
device_remove_file(&bmc->dev->dev,
&bmc->manufacturer_id_attr);
device_remove_file(&bmc->dev->dev,
&bmc->product_id_attr);
+
if (bmc->id.aux_firmware_revision_set)
device_remove_file(&bmc->dev->dev,
&bmc->aux_firmware_rev_attr);
if (bmc->guid_set)
device_remove_file(&bmc->dev->dev,
&bmc->guid_attr);
+}
+
+static void
+cleanup_bmc_device(struct kref *ref)
+{
+ struct bmc_device *bmc;
+
+ bmc = container_of(ref, struct bmc_device, refcount);
+
+ remove_files(bmc);
platform_device_unregister(bmc->dev);
kfree(bmc);
}
mutex_unlock(&ipmidriver_mutex);
}
+static int create_files(struct bmc_device *bmc)
+{
+ int err;
+
+ err = device_create_file(&bmc->dev->dev,
+ &bmc->device_id_attr);
+ if (err) goto out;
+ err = device_create_file(&bmc->dev->dev,
+ &bmc->provides_dev_sdrs_attr);
+ if (err) goto out_devid;
+ err = device_create_file(&bmc->dev->dev,
+ &bmc->revision_attr);
+ if (err) goto out_sdrs;
+ err = device_create_file(&bmc->dev->dev,
+ &bmc->firmware_rev_attr);
+ if (err) goto out_rev;
+ err = device_create_file(&bmc->dev->dev,
+ &bmc->version_attr);
+ if (err) goto out_firm;
+ err = device_create_file(&bmc->dev->dev,
+ &bmc->add_dev_support_attr);
+ if (err) goto out_version;
+ err = device_create_file(&bmc->dev->dev,
+ &bmc->manufacturer_id_attr);
+ if (err) goto out_add_dev;
+ err = device_create_file(&bmc->dev->dev,
+ &bmc->product_id_attr);
+ if (err) goto out_manu;
+ if (bmc->id.aux_firmware_revision_set) {
+ err = device_create_file(&bmc->dev->dev,
+ &bmc->aux_firmware_rev_attr);
+ if (err) goto out_prod_id;
+ }
+ if (bmc->guid_set) {
+ err = device_create_file(&bmc->dev->dev,
+ &bmc->guid_attr);
+ if (err) goto out_aux_firm;
+ }
+
+ return 0;
+
+out_aux_firm:
+ if (bmc->id.aux_firmware_revision_set)
+ device_remove_file(&bmc->dev->dev,
+ &bmc->aux_firmware_rev_attr);
+out_prod_id:
+ device_remove_file(&bmc->dev->dev,
+ &bmc->product_id_attr);
+out_manu:
+ device_remove_file(&bmc->dev->dev,
+ &bmc->manufacturer_id_attr);
+out_add_dev:
+ device_remove_file(&bmc->dev->dev,
+ &bmc->add_dev_support_attr);
+out_version:
+ device_remove_file(&bmc->dev->dev,
+ &bmc->version_attr);
+out_firm:
+ device_remove_file(&bmc->dev->dev,
+ &bmc->firmware_rev_attr);
+out_rev:
+ device_remove_file(&bmc->dev->dev,
+ &bmc->revision_attr);
+out_sdrs:
+ device_remove_file(&bmc->dev->dev,
+ &bmc->provides_dev_sdrs_attr);
+out_devid:
+ device_remove_file(&bmc->dev->dev,
+ &bmc->device_id_attr);
+out:
+ return err;
+}
+
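The create_files()/remove_files() pair above is the standard kernel unwind idiom: every device_create_file() failure jumps to a label that removes, in reverse order, exactly the attributes that were already created. The same shape recurs in the eisa, dell_rbu and mca hunks further down. A minimal standalone sketch of the idiom, using hypothetical dev_attr_foo/dev_attr_bar attributes rather than the ipmi ones:

#include <linux/device.h>

/* Sketch only: dev_attr_foo and dev_attr_bar are assumed to have been
 * declared elsewhere with DEVICE_ATTR(). */
static int register_two_attrs(struct device *dev)
{
	int err;

	err = device_create_file(dev, &dev_attr_foo);
	if (err)
		goto out;
	err = device_create_file(dev, &dev_attr_bar);
	if (err)
		goto out_foo;
	return 0;

out_foo:
	/* undo only the steps that succeeded, newest first */
	device_remove_file(dev, &dev_attr_foo);
out:
	return err;
}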
static int ipmi_bmc_register(ipmi_smi_t intf)
{
int rv;
bmc->provides_dev_sdrs_attr.attr.mode = S_IRUGO;
bmc->provides_dev_sdrs_attr.show = provides_dev_sdrs_show;
-
bmc->revision_attr.attr.name = "revision";
bmc->revision_attr.attr.owner = THIS_MODULE;
bmc->revision_attr.attr.mode = S_IRUGO;
bmc->aux_firmware_rev_attr.attr.mode = S_IRUGO;
bmc->aux_firmware_rev_attr.show = aux_firmware_rev_show;
- device_create_file(&bmc->dev->dev,
- &bmc->device_id_attr);
- device_create_file(&bmc->dev->dev,
- &bmc->provides_dev_sdrs_attr);
- device_create_file(&bmc->dev->dev,
- &bmc->revision_attr);
- device_create_file(&bmc->dev->dev,
- &bmc->firmware_rev_attr);
- device_create_file(&bmc->dev->dev,
- &bmc->version_attr);
- device_create_file(&bmc->dev->dev,
- &bmc->add_dev_support_attr);
- device_create_file(&bmc->dev->dev,
- &bmc->manufacturer_id_attr);
- device_create_file(&bmc->dev->dev,
- &bmc->product_id_attr);
- if (bmc->id.aux_firmware_revision_set)
- device_create_file(&bmc->dev->dev,
- &bmc->aux_firmware_rev_attr);
- if (bmc->guid_set)
- device_create_file(&bmc->dev->dev,
- &bmc->guid_attr);
+ rv = create_files(bmc);
+ if (rv) {
+ mutex_lock(&ipmidriver_mutex);
+ platform_device_unregister(bmc->dev);
+ mutex_unlock(&ipmidriver_mutex);
+
+ return rv;
+ }
printk(KERN_INFO
"ipmi: Found new BMC (man_id: 0x%6.6x, "
spin_unlock(&driver_lock);
- sysfs_create_group(&dev->kobj, chip->vendor.attr_group);
+ if (sysfs_create_group(&dev->kobj, chip->vendor.attr_group)) {
+ list_del(&chip->list);
+ put_device(dev);
+ clear_bit(chip->dev_num, dev_mask);
+ kfree(chip);
+ kfree(devname);
+ return NULL;
+ }
chip->bios_dir = tpm_bios_log_setup(devname);
unsigned long base;
struct tpm_chip *chip;
- driver_register(&atml_drv);
+ rc = driver_register(&atml_drv);
+ if (rc)
+ return rc;
if ((iobase = atmel_get_base_addr(&base, &region_size)) == NULL) {
rc = -ENODEV;
(atmel_request_region
(tpm_atmel.base, region_size, "tpm_atmel0") == NULL) ? 0 : 1;
-
- if (IS_ERR
- (pdev =
- platform_device_register_simple("tpm_atmel", -1, NULL, 0))) {
+ pdev = platform_device_register_simple("tpm_atmel", -1, NULL, 0);
+ if (IS_ERR(pdev)) {
rc = PTR_ERR(pdev);
goto err_rel_reg;
}
static int __init init_nsc(void)
{
int rc = 0;
- int lo, hi;
+ int lo, hi, err;
int nscAddrBase = TPM_ADDR;
struct tpm_chip *chip;
unsigned long base;
return -ENODEV;
}
- driver_register(&nsc_drv);
+ err = driver_register(&nsc_drv);
+ if (err)
+ return err;
hi = tpm_read_index(nscAddrBase, TPM_NSC_BASE0_HI);
lo = tpm_read_index(nscAddrBase, TPM_NSC_BASE0_LO);
static int __init eisa_register_device (struct eisa_device *edev)
{
- if (device_register (&edev->dev))
- return -1;
+ int rc = device_register (&edev->dev);
+ if (rc)
+ return rc;
- device_create_file (&edev->dev, &dev_attr_signature);
- device_create_file (&edev->dev, &dev_attr_enabled);
- device_create_file (&edev->dev, &dev_attr_modalias);
+ rc = device_create_file (&edev->dev, &dev_attr_signature);
+ if (rc) goto err_devreg;
+ rc = device_create_file (&edev->dev, &dev_attr_enabled);
+ if (rc) goto err_sig;
+ rc = device_create_file (&edev->dev, &dev_attr_modalias);
+ if (rc) goto err_enab;
return 0;
+
+err_enab:
+ device_remove_file (&edev->dev, &dev_attr_enabled);
+err_sig:
+ device_remove_file (&edev->dev, &dev_attr_signature);
+err_devreg:
+ device_unregister(&edev->dev);
+ return rc;
}
static int __init eisa_request_resources (struct eisa_root_device *root,
if ((rc = create_packet(temp, packet_length)))
return rc;
- pr_debug("%p:%lu\n", temp, (end - temp));
+ pr_debug("%p:%td\n", temp, (end - temp));
temp += packet_length;
}
return -EIO;
}
- sysfs_create_bin_file(&rbu_device->dev.kobj, &rbu_data_attr);
- sysfs_create_bin_file(&rbu_device->dev.kobj, &rbu_image_type_attr);
- sysfs_create_bin_file(&rbu_device->dev.kobj,
+ rc = sysfs_create_bin_file(&rbu_device->dev.kobj, &rbu_data_attr);
+ if (rc)
+ goto out_devreg;
+ rc = sysfs_create_bin_file(&rbu_device->dev.kobj, &rbu_image_type_attr);
+ if (rc)
+ goto out_data;
+ rc = sysfs_create_bin_file(&rbu_device->dev.kobj,
&rbu_packet_size_attr);
+ if (rc)
+ goto out_imtype;
rbu_data.entry_created = 0;
- return rc;
+ return 0;
+out_imtype:
+ sysfs_remove_bin_file(&rbu_device->dev.kobj, &rbu_image_type_attr);
+out_data:
+ sysfs_remove_bin_file(&rbu_device->dev.kobj, &rbu_data_attr);
+out_devreg:
+ platform_device_unregister(rbu_device);
+ return rc;
}
static __exit void dcdrbu_exit(void)
kobject_set_name(&new_efivar->kobj, "%s", short_name);
kobj_set_kset_s(new_efivar, vars_subsys);
- kobject_register(&new_efivar->kobj);
+ i = kobject_register(&new_efivar->kobj);
+ if (i) {
+ kfree(short_name);
+ kfree(new_efivar);
+ return 1;
+ }
kfree(short_name);
short_name = NULL;
if (dev->vendor == PCI_VENDOR_ID_JMICRON && PCI_FUNC(dev->devfn) != 1)
goto out;
- pci_read_config_word(dev, PCI_COMMAND, &command);
- if (!(command & PCI_COMMAND_IO)) {
- printk(KERN_INFO "Skipping disabled %s IDE controller.\n", d->name);
- goto out;
+ if (dev->vendor != PCI_VENDOR_ID_JMICRON) {
+ pci_read_config_word(dev, PCI_COMMAND, &command);
+ if (!(command & PCI_COMMAND_IO)) {
+ printk(KERN_INFO "Skipping disabled %s IDE controller.\n", d->name);
+ goto out;
+ }
}
ret = ide_setup_pci_device(dev, d);
out:
* with this program; if not, write to the Free Software Foundation, Inc.,
* 59 Temple Place Suite 330, Boston, MA 02111-1307, USA.
*/
-#include <asm/io.h>
+#include <linux/io.h>
#include <linux/dmi.h>
#include <linux/init.h>
#include <linux/input.h>
dev->read_queue = frame->next;
spin_unlock_irqrestore(&dev->lock, flags);
+ msg = 0;
SET_MSG_CPU(msg, 0);
SET_MSG_PROC(msg, 0);
SET_MSG_CMD(msg, frame->skb->data[2]);
* Confirm the I/O Address with a test
*/
if(io[b] == 0) {
- pr_debug("I/O Address 0x%x is in use.\n");
+ pr_debug("I/O Address invalid.\n");
continue;
}
outb(0x18, io[b] + 0x400 * EXP_PAGE0);
if(inb(io[b] + 0x400 * EXP_PAGE0) != 0x18) {
- pr_debug("I/O Base 0x%x fails test\n");
+ pr_debug("I/O Base 0x%x fails test\n",
+ io[b] + 0x400 * EXP_PAGE0);
continue;
}
}
outb(0xFF, io[b] + RESET_OFFSET);
msleep_interruptible(10000);
}
- pr_debug("RAM Base for board %d is 0x%x, %s probe\n", b, ram[b],
- ram[b] == 0 ? "will" : "won't");
+ pr_debug("RAM Base for board %d is 0x%lx, %s probe\n", b,
+ ram[b], ram[b] == 0 ? "will" : "won't");
if(ram[b]) {
/*
* board model
*/
if(request_region(ram[b], SRAM_PAGESIZE, "sc test")) {
- pr_debug("request_region for RAM base 0x%x succeeded\n", ram[b]);
+ pr_debug("request_region for RAM base 0x%lx succeeded\n", ram[b]);
model = identify_board(ram[b], io[b]);
release_region(ram[b], SRAM_PAGESIZE);
}
* Nope, there was no place in RAM for the
* board, or it couldn't be identified
*/
- pr_debug("Failed to find an adapter at 0x%x\n", ram[b]);
+ pr_debug("Failed to find an adapter at 0x%lx\n", ram[b]);
continue;
}
HWConfig_pl hwci;
int x;
- pr_debug("Attempting to identify adapter @ 0x%x io 0x%x\n",
+ pr_debug("Attempting to identify adapter @ 0x%lx io 0x%x\n",
rambase, iobase);
/*
outb(PRI_BASEPG_VAL, pgport);
msleep_interruptible(1000);
sig = readl(rambase + SIG_OFFSET);
- pr_debug("Looking for a signature, got 0x%x\n", sig);
+ pr_debug("Looking for a signature, got 0x%lx\n", sig);
if(sig == SIGNATURE)
return PRI_BOARD;
outb(BRI_BASEPG_VAL, pgport);
msleep_interruptible(1000);
sig = readl(rambase + SIG_OFFSET);
- pr_debug("Looking for a signature, got 0x%x\n", sig);
+ pr_debug("Looking for a signature, got 0x%lx\n", sig);
if(sig == SIGNATURE)
return BRI_BOARD;
* Try to spot a card
*/
sig = readl(rambase + SIG_OFFSET);
- pr_debug("Looking for a signature, got 0x%x\n", sig);
+ pr_debug("Looking for a signature, got 0x%lx\n", sig);
if(sig != SIGNATURE)
return -1;
memcpy_fromio(&rcvmsg, &(dpm->rsp_queue[dpm->rsp_tail]), MSG_LEN);
pr_debug("Got HWConfig response, status = 0x%x\n", rcvmsg.rsp_status);
memcpy(&hwci, &(rcvmsg.msg_data.HWCresponse), sizeof(HWConfig_pl));
- pr_debug("Hardware Config: Interface: %s, RAM Size: %d, Serial: %s\n"
+ pr_debug("Hardware Config: Interface: %s, RAM Size: %ld, Serial: %s\n"
" Part: %s, Rev: %s\n",
hwci.st_u_sense ? "S/T" : "U", hwci.ram_size,
hwci.serial_no, hwci.part_no, hwci.rev_no);
return -ENODEV;
}
- pr_debug("%s: sndpkt: frst = 0x%x nxt = %d f = %d n = %d\n",
+ pr_debug("%s: sndpkt: frst = 0x%lx nxt = %d f = %d n = %d\n",
sc_adapter[card]->devicename,
sc_adapter[card]->channel[channel].first_sendbuf,
sc_adapter[card]->channel[channel].next_sendbuf,
ReqLnkWrite.buff_offset = sc_adapter[card]->channel[channel].next_sendbuf *
BUFFER_SIZE + sc_adapter[card]->channel[channel].first_sendbuf;
ReqLnkWrite.msg_len = data->len; /* sk_buff size */
- pr_debug("%s: writing %d bytes to buffer offset 0x%x\n",
+ pr_debug("%s: writing %d bytes to buffer offset 0x%lx\n",
sc_adapter[card]->devicename,
ReqLnkWrite.msg_len, ReqLnkWrite.buff_offset);
memcpy_toshmem(card, (char *)ReqLnkWrite.buff_offset, data->data, ReqLnkWrite.msg_len);
/*
* sendmessage
*/
- pr_debug("%s: sndpkt size=%d, buf_offset=0x%x buf_indx=%d\n",
+ pr_debug("%s: sndpkt size=%d, buf_offset=0x%lx buf_indx=%d\n",
sc_adapter[card]->devicename,
ReqLnkWrite.msg_len, ReqLnkWrite.buff_offset,
sc_adapter[card]->channel[channel].next_sendbuf);
return;
}
skb_put(skb, rcvmsg->msg_data.response.msg_len);
- pr_debug("%s: getting data from offset: 0x%x\n",
+ pr_debug("%s: getting data from offset: 0x%lx\n",
sc_adapter[card]->devicename,
rcvmsg->msg_data.response.buff_offset);
memcpy_fromshmem(card,
/* memset_shmem(card, rcvmsg->msg_data.response.buff_offset, 0, BUFFER_SIZE); */
newll.buff_offset = rcvmsg->msg_data.response.buff_offset;
newll.msg_len = BUFFER_SIZE;
- pr_debug("%s: recycled buffer at offset 0x%x size %d\n",
+ pr_debug("%s: recycled buffer at offset 0x%lx size %d\n",
sc_adapter[card]->devicename,
newll.buff_offset, newll.msg_len);
sendmessage(card, CEPID, ceReqTypeLnk, ceReqClass1, ceReqLnkRead,
sc_adapter[card]->channel[c-1].num_sendbufs = nBuffers / 2;
sc_adapter[card]->channel[c-1].free_sendbufs = nBuffers / 2;
sc_adapter[card]->channel[c-1].next_sendbuf = 0;
- pr_debug("%s: send buffer setup complete: first=0x%x n=%d f=%d, nxt=%d\n",
+ pr_debug("%s: send buffer setup complete: first=0x%lx n=%d f=%d, nxt=%d\n",
sc_adapter[card]->devicename,
sc_adapter[card]->channel[c-1].first_sendbuf,
sc_adapter[card]->channel[c-1].num_sendbufs,
((sc_adapter[card]->channel[c-1].first_sendbuf +
(nBuffers / 2) * buffer_size) + (buffer_size * i));
RcvBuffOffset.msg_len = buffer_size;
- pr_debug("%s: adding RcvBuffer #%d offset=0x%x sz=%d bufsz:%d\n",
+ pr_debug("%s: adding RcvBuffer #%d offset=0x%lx sz=%d bufsz:%d\n",
sc_adapter[card]->devicename,
i + 1, RcvBuffOffset.buff_offset,
RcvBuffOffset.msg_len,buffer_size);
spin_unlock_irqrestore(&sc_adapter[card]->lock, flags);
pr_debug("%s: set page to %#x\n",sc_adapter[card]->devicename,
((sc_adapter[card]->shmem_magic + ch * SRAM_PAGESIZE)>>14)|0x80);
- pr_debug("%s: copying %d bytes from %#x to %#x\n",
+ pr_debug("%s: copying %d bytes from %#lx to %#lx\n",
sc_adapter[card]->devicename, n,
(unsigned long) src,
sc_adapter[card]->rambase + ((unsigned long) dest % 0x4000));
int __init mca_register_device(int bus, struct mca_device *mca_dev)
{
struct mca_bus *mca_bus = mca_root_busses[bus];
+ int rc;
mca_dev->dev.parent = &mca_bus->dev;
mca_dev->dev.bus = &mca_bus_type;
mca_dev->dev.dma_mask = &mca_dev->dma_mask;
mca_dev->dev.coherent_dma_mask = mca_dev->dma_mask;
- if (device_register(&mca_dev->dev))
- return 0;
+ rc = device_register(&mca_dev->dev);
+ if (rc)
+ goto err_out;
- device_create_file(&mca_dev->dev, &dev_attr_id);
- device_create_file(&mca_dev->dev, &dev_attr_pos);
+ rc = device_create_file(&mca_dev->dev, &dev_attr_id);
+ if (rc) goto err_out_devreg;
+ rc = device_create_file(&mca_dev->dev, &dev_attr_pos);
+ if (rc) goto err_out_id;
return 1;
+
+err_out_id:
+ device_remove_file(&mca_dev->dev, &dev_attr_id);
+err_out_devreg:
+ device_unregister(&mca_dev->dev);
+err_out:
+ return 0;
}
/* */
return NULL;
}
- mca_bus = kmalloc(sizeof(struct mca_bus), GFP_KERNEL);
+ mca_bus = kzalloc(sizeof(struct mca_bus), GFP_KERNEL);
if (!mca_bus)
return NULL;
- memset(mca_bus, 0, sizeof(struct mca_bus));
+
sprintf(mca_bus->dev.bus_id,"mca%d",bus);
sprintf(mca_bus->name,"Host %s MCA Bridge", bus ? "Secondary" : "Primary");
- device_register(&mca_bus->dev);
+ if (device_register(&mca_bus->dev)) {
+ kfree(mca_bus);
+ return NULL;
+ }
mca_root_busses[bus] = mca_bus;
int err;
sector_t start;
- BUG_ON(sizeof(bitmap_super_t) != 256);
+ BUILD_BUG_ON(sizeof(bitmap_super_t) != 256);
if (!file && !mddev->bitmap_offset) /* bitmap disabled, nothing to do */
return 0;
__b44_set_mac_addr(bp);
- if (dev->flags & IFF_ALLMULTI)
+ if ((dev->flags & IFF_ALLMULTI) ||
+ (dev->mc_count > B44_MCAST_TABLE_SIZE))
val |= RXCONFIG_ALLMULTI;
else
i = __b44_load_mcast(bp, dev);
- for (; i < 64; i++) {
+ for (; i < 64; i++)
__b44_cam_write(bp, zero, i);
- }
+
bw32(bp, B44_RXCONFIG, val);
val = br32(bp, B44_CAM_CTRL);
bw32(bp, B44_CAM_CTRL, val | CAM_CTRL_ENABLE);
u16 *ptr = (u16 *) data;
for (i = 0; i < 128; i += 2)
- ptr[i / 2] = readw(bp->regs + 4096 + i);
+ ptr[i / 2] = cpu_to_le16(readw(bp->regs + 4096 + i));
return 0;
}
* write lock to protect from other code that also
* sets the promiscuity.
*/
- write_lock(&bond->curr_slave_lock);
+ write_lock_bh(&bond->curr_slave_lock);
if (bond_info->primary_is_promisc &&
(++bond_info->rlb_promisc_timeout_counter >= RLB_PROMISC_TIMEOUT)) {
bond_info->primary_is_promisc = 0;
}
- write_unlock(&bond->curr_slave_lock);
+ write_unlock_bh(&bond->curr_slave_lock);
if (bond_info->rlb_rebalance) {
bond_info->rlb_rebalance = 0;
#include <asm/io.h>
#define DRV_NAME "ehea"
-#define DRV_VERSION "EHEA_0028"
+#define DRV_VERSION "EHEA_0034"
#define EHEA_MSG_DEFAULT (NETIF_MSG_LINK | NETIF_MSG_TIMER \
| NETIF_MSG_RX_ERR | NETIF_MSG_TX_ERR)
#define EHEA_MAX_ENTRIES_SQ 32767
#define EHEA_MIN_ENTRIES_QP 127
+#define EHEA_SMALL_QUEUES
#define EHEA_NUM_TX_QP 1
#ifdef EHEA_SMALL_QUEUES
#define EHEA_DEF_ENTRIES_RQ2 1023
#define EHEA_DEF_ENTRIES_RQ3 1023
#else
-#define EHEA_MAX_CQE_COUNT 32000
-#define EHEA_DEF_ENTRIES_SQ 16000
-#define EHEA_DEF_ENTRIES_RQ1 32080
-#define EHEA_DEF_ENTRIES_RQ2 4020
-#define EHEA_DEF_ENTRIES_RQ3 4020
+#define EHEA_MAX_CQE_COUNT 4080
+#define EHEA_DEF_ENTRIES_SQ 4080
+#define EHEA_DEF_ENTRIES_RQ1 8160
+#define EHEA_DEF_ENTRIES_RQ2 2040
+#define EHEA_DEF_ENTRIES_RQ3 2040
#endif
#define EHEA_MAX_ENTRIES_EQ 20
if (EHEA_BMASK_GET(NEQE_PORT_UP, eqe)) {
if (!netif_carrier_ok(port->netdev)) {
ret = ehea_sense_port_attr(
- adapter->port[portnum]);
+ port);
if (ret) {
ehea_error("failed resensing port "
"attributes");
netif_stop_queue(port->netdev);
break;
default:
- ehea_error("unknown event code %x", ec);
+ ehea_error("unknown event code %x, eqe=0x%lX", ec, eqe);
break;
}
}
if (netif_msg_tx_queued(port)) {
ehea_info("post swqe on QP %d", pr->qp->init_attr.qp_nr);
- ehea_dump(swqe, sizeof(*swqe), "swqe");
+ ehea_dump(swqe, 512, "swqe");
}
ehea_post_swqe(pr->qp, swqe);
#define H_ALL_RES_TYPE_MR 5
#define H_ALL_RES_TYPE_MW 6
-static long ehea_hcall_9arg_9ret(unsigned long opcode,
- unsigned long arg1, unsigned long arg2,
- unsigned long arg3, unsigned long arg4,
- unsigned long arg5, unsigned long arg6,
- unsigned long arg7, unsigned long arg8,
- unsigned long arg9, unsigned long *out1,
- unsigned long *out2,unsigned long *out3,
- unsigned long *out4,unsigned long *out5,
- unsigned long *out6,unsigned long *out7,
- unsigned long *out8,unsigned long *out9)
+static long ehea_plpar_hcall_norets(unsigned long opcode,
+ unsigned long arg1,
+ unsigned long arg2,
+ unsigned long arg3,
+ unsigned long arg4,
+ unsigned long arg5,
+ unsigned long arg6,
+ unsigned long arg7)
{
- long hret;
+ long ret;
int i, sleep_msecs;
for (i = 0; i < 5; i++) {
- hret = plpar_hcall_9arg_9ret(opcode,arg1, arg2, arg3, arg4,
- arg5, arg6, arg7, arg8, arg9, out1,
- out2, out3, out4, out5, out6, out7,
- out8, out9);
- if (H_IS_LONG_BUSY(hret)) {
- sleep_msecs = get_longbusy_msecs(hret);
+ ret = plpar_hcall_norets(opcode, arg1, arg2, arg3, arg4,
+ arg5, arg6, arg7);
+
+ if (H_IS_LONG_BUSY(ret)) {
+ sleep_msecs = get_longbusy_msecs(ret);
msleep_interruptible(sleep_msecs);
continue;
}
- if (hret < H_SUCCESS)
- ehea_error("op=%lx hret=%lx "
- "i1=%lx i2=%lx i3=%lx i4=%lx i5=%lx i6=%lx "
- "i7=%lx i8=%lx i9=%lx "
- "o1=%lx o2=%lx o3=%lx o4=%lx o5=%lx o6=%lx "
- "o7=%lx o8=%lx o9=%lx",
- opcode, hret, arg1, arg2, arg3, arg4, arg5,
- arg6, arg7, arg8, arg9, *out1, *out2, *out3,
- *out4, *out5, *out6, *out7, *out8, *out9);
- return hret;
+ if (ret < H_SUCCESS)
+ ehea_error("opcode=%lx ret=%lx"
+ " arg1=%lx arg2=%lx arg3=%lx arg4=%lx"
+ " arg5=%lx arg6=%lx arg7=%lx ",
+ opcode, ret,
+ arg1, arg2, arg3, arg4, arg5,
+ arg6, arg7);
+
+ return ret;
}
+
return H_BUSY;
}
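Both wrappers here implement the same long-busy protocol: when the hypervisor answers with a LONG_BUSY return code, the caller is expected to sleep for the interval encoded in that code and retry, and the driver gives up as H_BUSY after five rounds. Reduced to a sketch (the callback type is hypothetical; H_IS_LONG_BUSY(), get_longbusy_msecs(), msleep_interruptible() and H_BUSY are the real helpers visible in the hunk above):

static long hcall_with_retry(long (*do_hcall)(void *ctx), void *ctx)
{
	long ret;
	int i;

	for (i = 0; i < 5; i++) {
		ret = do_hcall(ctx);
		if (!H_IS_LONG_BUSY(ret))
			return ret;	/* success or a hard error */
		/* the return code encodes how long to back off */
		msleep_interruptible(get_longbusy_msecs(ret));
	}
	return H_BUSY;
}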
-u64 ehea_h_query_ehea_qp(const u64 adapter_handle, const u8 qp_category,
- const u64 qp_handle, const u64 sel_mask, void *cb_addr)
+static long ehea_plpar_hcall9(unsigned long opcode,
+ unsigned long *outs, /* array of 9 outputs */
+ unsigned long arg1,
+ unsigned long arg2,
+ unsigned long arg3,
+ unsigned long arg4,
+ unsigned long arg5,
+ unsigned long arg6,
+ unsigned long arg7,
+ unsigned long arg8,
+ unsigned long arg9)
{
- u64 dummy;
+ long ret;
+ int i, sleep_msecs;
- if ((((u64)cb_addr) & (PAGE_SIZE - 1)) != 0) {
- ehea_error("not on pageboundary");
- return H_PARAMETER;
+ for (i = 0; i < 5; i++) {
+ ret = plpar_hcall9(opcode, outs,
+ arg1, arg2, arg3, arg4, arg5,
+ arg6, arg7, arg8, arg9);
+
+ if (H_IS_LONG_BUSY(ret)) {
+ sleep_msecs = get_longbusy_msecs(ret);
+ msleep_interruptible(sleep_msecs);
+ continue;
+ }
+
+ if (ret < H_SUCCESS)
+ ehea_error("opcode=%lx ret=%lx"
+ " arg1=%lx arg2=%lx arg3=%lx arg4=%lx"
+ " arg5=%lx arg6=%lx arg7=%lx arg8=%lx"
+ " arg9=%lx"
+ " out1=%lx out2=%lx out3=%lx out4=%lx"
+ " out5=%lx out6=%lx out7=%lx out8=%lx"
+ " out9=%lx",
+ opcode, ret,
+ arg1, arg2, arg3, arg4, arg5,
+ arg6, arg7, arg8, arg9,
+ outs[0], outs[1], outs[2], outs[3],
+ outs[4], outs[5], outs[6], outs[7],
+ outs[8]);
+
+ return ret;
}
- return ehea_hcall_9arg_9ret(H_QUERY_HEA_QP,
- adapter_handle, /* R4 */
- qp_category, /* R5 */
- qp_handle, /* R6 */
- sel_mask, /* R7 */
- virt_to_abs(cb_addr), /* R8 */
- 0, 0, 0, 0, /* R9-R12 */
- &dummy, /* R4 */
- &dummy, /* R5 */
- &dummy, /* R6 */
- &dummy, /* R7 */
- &dummy, /* R8 */
- &dummy, /* R9 */
- &dummy, /* R10 */
- &dummy, /* R11 */
- &dummy); /* R12 */
+ return H_BUSY;
+}
+
+u64 ehea_h_query_ehea_qp(const u64 adapter_handle, const u8 qp_category,
+ const u64 qp_handle, const u64 sel_mask, void *cb_addr)
+{
+ return ehea_plpar_hcall_norets(H_QUERY_HEA_QP,
+ adapter_handle, /* R4 */
+ qp_category, /* R5 */
+ qp_handle, /* R6 */
+ sel_mask, /* R7 */
+ virt_to_abs(cb_addr), /* R8 */
+ 0, 0);
}
/* input param R5 */
u64 *qp_handle, struct h_epas *h_epas)
{
u64 hret;
+ u64 outs[PLPAR_HCALL9_BUFSIZE];
u64 allocate_controls =
EHEA_BMASK_SET(H_ALL_RES_QP_EQPO, init_attr->low_lat_rq1 ? 1 : 0)
EHEA_BMASK_SET(H_ALL_RES_QP_TH_RQ2, init_attr->rq2_threshold)
| EHEA_BMASK_SET(H_ALL_RES_QP_TH_RQ3, init_attr->rq3_threshold);
- u64 r5_out = 0;
- u64 r6_out = 0;
- u64 r7_out = 0;
- u64 r8_out = 0;
- u64 r9_out = 0;
- u64 g_la_user_out = 0;
- u64 r11_out = 0;
- u64 r12_out = 0;
-
- hret = ehea_hcall_9arg_9ret(H_ALLOC_HEA_RESOURCE,
- adapter_handle, /* R4 */
- allocate_controls, /* R5 */
- init_attr->send_cq_handle, /* R6 */
- init_attr->recv_cq_handle, /* R7 */
- init_attr->aff_eq_handle, /* R8 */
- r9_reg, /* R9 */
- max_r10_reg, /* R10 */
- r11_in, /* R11 */
- threshold, /* R12 */
- qp_handle, /* R4 */
- &r5_out, /* R5 */
- &r6_out, /* R6 */
- &r7_out, /* R7 */
- &r8_out, /* R8 */
- &r9_out, /* R9 */
- &g_la_user_out, /* R10 */
- &r11_out, /* R11 */
- &r12_out); /* R12 */
-
- init_attr->qp_nr = (u32)r5_out;
+ hret = ehea_plpar_hcall9(H_ALLOC_HEA_RESOURCE,
+ outs,
+ adapter_handle, /* R4 */
+ allocate_controls, /* R5 */
+ init_attr->send_cq_handle, /* R6 */
+ init_attr->recv_cq_handle, /* R7 */
+ init_attr->aff_eq_handle, /* R8 */
+ r9_reg, /* R9 */
+ max_r10_reg, /* R10 */
+ r11_in, /* R11 */
+ threshold); /* R12 */
+
+ *qp_handle = outs[0];
+ init_attr->qp_nr = (u32)outs[1];
init_attr->act_nr_send_wqes =
- (u16)EHEA_BMASK_GET(H_ALL_RES_QP_ACT_SWQE, r6_out);
+ (u16)EHEA_BMASK_GET(H_ALL_RES_QP_ACT_SWQE, outs[2]);
init_attr->act_nr_rwqes_rq1 =
- (u16)EHEA_BMASK_GET(H_ALL_RES_QP_ACT_R1WQE, r6_out);
+ (u16)EHEA_BMASK_GET(H_ALL_RES_QP_ACT_R1WQE, outs[2]);
init_attr->act_nr_rwqes_rq2 =
- (u16)EHEA_BMASK_GET(H_ALL_RES_QP_ACT_R2WQE, r6_out);
+ (u16)EHEA_BMASK_GET(H_ALL_RES_QP_ACT_R2WQE, outs[2]);
init_attr->act_nr_rwqes_rq3 =
- (u16)EHEA_BMASK_GET(H_ALL_RES_QP_ACT_R3WQE, r6_out);
+ (u16)EHEA_BMASK_GET(H_ALL_RES_QP_ACT_R3WQE, outs[2]);
init_attr->act_wqe_size_enc_sq = init_attr->wqe_size_enc_sq;
init_attr->act_wqe_size_enc_rq1 = init_attr->wqe_size_enc_rq1;
init_attr->act_wqe_size_enc_rq3 = init_attr->wqe_size_enc_rq3;
init_attr->nr_sq_pages =
- (u32)EHEA_BMASK_GET(H_ALL_RES_QP_SIZE_SQ, r8_out);
+ (u32)EHEA_BMASK_GET(H_ALL_RES_QP_SIZE_SQ, outs[4]);
init_attr->nr_rq1_pages =
- (u32)EHEA_BMASK_GET(H_ALL_RES_QP_SIZE_RQ1, r8_out);
+ (u32)EHEA_BMASK_GET(H_ALL_RES_QP_SIZE_RQ1, outs[4]);
init_attr->nr_rq2_pages =
- (u32)EHEA_BMASK_GET(H_ALL_RES_QP_SIZE_RQ2, r9_out);
+ (u32)EHEA_BMASK_GET(H_ALL_RES_QP_SIZE_RQ2, outs[5]);
init_attr->nr_rq3_pages =
- (u32)EHEA_BMASK_GET(H_ALL_RES_QP_SIZE_RQ3, r9_out);
+ (u32)EHEA_BMASK_GET(H_ALL_RES_QP_SIZE_RQ3, outs[5]);
init_attr->liobn_sq =
- (u32)EHEA_BMASK_GET(H_ALL_RES_QP_LIOBN_SQ, r11_out);
+ (u32)EHEA_BMASK_GET(H_ALL_RES_QP_LIOBN_SQ, outs[7]);
init_attr->liobn_rq1 =
- (u32)EHEA_BMASK_GET(H_ALL_RES_QP_LIOBN_RQ1, r11_out);
+ (u32)EHEA_BMASK_GET(H_ALL_RES_QP_LIOBN_RQ1, outs[7]);
init_attr->liobn_rq2 =
- (u32)EHEA_BMASK_GET(H_ALL_RES_QP_LIOBN_RQ2, r12_out);
+ (u32)EHEA_BMASK_GET(H_ALL_RES_QP_LIOBN_RQ2, outs[8]);
init_attr->liobn_rq3 =
- (u32)EHEA_BMASK_GET(H_ALL_RES_QP_LIOBN_RQ3, r12_out);
+ (u32)EHEA_BMASK_GET(H_ALL_RES_QP_LIOBN_RQ3, outs[8]);
if (!hret)
- hcp_epas_ctor(h_epas, g_la_user_out, g_la_user_out);
+ hcp_epas_ctor(h_epas, outs[6], outs[6]);
return hret;
}
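The index arithmetic above relies on how plpar_hcall9() lays out its results: outs[0] through outs[8] hold hypervisor return registers R4 through R12 in order. That is why the fields the old code read from r6_out now come from outs[2], the R8 page counts from outs[4], and the R11/R12 LIOBNs from outs[7] and outs[8]. As a sketch of the mapping (not driver code):

/* outs[i] filled by plpar_hcall9() corresponds to register R(4 + i),
 * so a value documented as living in Rn is read from outs[n - 4]. */
static inline unsigned long hcall9_out_reg(unsigned long *outs, int reg)
{
	return outs[reg - 4];	/* valid for reg in 4..12 */
}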
struct ehea_cq_attr *cq_attr,
u64 *cq_handle, struct h_epas *epas)
{
- u64 hret, dummy, act_nr_of_cqes_out, act_pages_out;
- u64 g_la_privileged_out, g_la_user_out;
-
- hret = ehea_hcall_9arg_9ret(H_ALLOC_HEA_RESOURCE,
- adapter_handle, /* R4 */
- H_ALL_RES_TYPE_CQ, /* R5 */
- cq_attr->eq_handle, /* R6 */
- cq_attr->cq_token, /* R7 */
- cq_attr->max_nr_of_cqes, /* R8 */
- 0, 0, 0, 0, /* R9-R12 */
- cq_handle, /* R4 */
- &dummy, /* R5 */
- &dummy, /* R6 */
- &act_nr_of_cqes_out, /* R7 */
- &act_pages_out, /* R8 */
- &g_la_privileged_out, /* R9 */
- &g_la_user_out, /* R10 */
- &dummy, /* R11 */
- &dummy); /* R12 */
-
- cq_attr->act_nr_of_cqes = act_nr_of_cqes_out;
- cq_attr->nr_pages = act_pages_out;
+ u64 hret;
+ u64 outs[PLPAR_HCALL9_BUFSIZE];
+
+ hret = ehea_plpar_hcall9(H_ALLOC_HEA_RESOURCE,
+ outs,
+ adapter_handle, /* R4 */
+ H_ALL_RES_TYPE_CQ, /* R5 */
+ cq_attr->eq_handle, /* R6 */
+ cq_attr->cq_token, /* R7 */
+ cq_attr->max_nr_of_cqes, /* R8 */
+ 0, 0, 0, 0); /* R9-R12 */
+
+ *cq_handle = outs[0];
+ cq_attr->act_nr_of_cqes = outs[3];
+ cq_attr->nr_pages = outs[4];
if (!hret)
- hcp_epas_ctor(epas, g_la_privileged_out, g_la_user_out);
+ hcp_epas_ctor(epas, outs[5], outs[6]);
return hret;
}
u64 ehea_h_alloc_resource_eq(const u64 adapter_handle,
struct ehea_eq_attr *eq_attr, u64 *eq_handle)
{
- u64 hret, dummy, eq_liobn, allocate_controls;
- u64 ist1_out, ist2_out, ist3_out, ist4_out;
- u64 act_nr_of_eqes_out, act_pages_out;
+ u64 hret, allocate_controls;
+ u64 outs[PLPAR_HCALL9_BUFSIZE];
/* resource type */
allocate_controls =
| EHEA_BMASK_SET(H_ALL_RES_EQ_INH_EQE_GEN, !eq_attr->eqe_gen)
| EHEA_BMASK_SET(H_ALL_RES_EQ_NON_NEQ_ISN, 1);
- hret = ehea_hcall_9arg_9ret(H_ALLOC_HEA_RESOURCE,
- adapter_handle, /* R4 */
- allocate_controls, /* R5 */
- eq_attr->max_nr_of_eqes, /* R6 */
- 0, 0, 0, 0, 0, 0, /* R7-R10 */
- eq_handle, /* R4 */
- &dummy, /* R5 */
- &eq_liobn, /* R6 */
- &act_nr_of_eqes_out, /* R7 */
- &act_pages_out, /* R8 */
- &ist1_out, /* R9 */
- &ist2_out, /* R10 */
- &ist3_out, /* R11 */
- &ist4_out); /* R12 */
-
- eq_attr->act_nr_of_eqes = act_nr_of_eqes_out;
- eq_attr->nr_pages = act_pages_out;
- eq_attr->ist1 = ist1_out;
- eq_attr->ist2 = ist2_out;
- eq_attr->ist3 = ist3_out;
- eq_attr->ist4 = ist4_out;
+ hret = ehea_plpar_hcall9(H_ALLOC_HEA_RESOURCE,
+ outs,
+ adapter_handle, /* R4 */
+ allocate_controls, /* R5 */
+ eq_attr->max_nr_of_eqes, /* R6 */
+ 0, 0, 0, 0, 0, 0); /* R7-R12 */
+
+ *eq_handle = outs[0];
+ eq_attr->act_nr_of_eqes = outs[3];
+ eq_attr->nr_pages = outs[4];
+ eq_attr->ist1 = outs[5];
+ eq_attr->ist2 = outs[6];
+ eq_attr->ist3 = outs[7];
+ eq_attr->ist4 = outs[8];
return hret;
}
void *cb_addr, u64 *inv_attr_id, u64 *proc_mask,
u16 *out_swr, u16 *out_rwr)
{
- u64 hret, dummy, act_out_swr, act_out_rwr;
-
- if ((((u64)cb_addr) & (PAGE_SIZE - 1)) != 0) {
- ehea_error("not on page boundary");
- return H_PARAMETER;
- }
-
- hret = ehea_hcall_9arg_9ret(H_MODIFY_HEA_QP,
- adapter_handle, /* R4 */
- (u64) cat, /* R5 */
- qp_handle, /* R6 */
- sel_mask, /* R7 */
- virt_to_abs(cb_addr), /* R8 */
- 0, 0, 0, 0, /* R9-R12 */
- inv_attr_id, /* R4 */
- &dummy, /* R5 */
- &dummy, /* R6 */
- &act_out_swr, /* R7 */
- &act_out_rwr, /* R8 */
- proc_mask, /* R9 */
- &dummy, /* R10 */
- &dummy, /* R11 */
- &dummy); /* R12 */
- *out_swr = act_out_swr;
- *out_rwr = act_out_rwr;
+ u64 hret;
+ u64 outs[PLPAR_HCALL9_BUFSIZE];
+
+ hret = ehea_plpar_hcall9(H_MODIFY_HEA_QP,
+ outs,
+ adapter_handle, /* R4 */
+ (u64) cat, /* R5 */
+ qp_handle, /* R6 */
+ sel_mask, /* R7 */
+ virt_to_abs(cb_addr), /* R8 */
+ 0, 0, 0, 0); /* R9-R12 */
+
+ *inv_attr_id = outs[0];
+ *out_swr = outs[3];
+ *out_rwr = outs[4];
+ *proc_mask = outs[5];
return hret;
}
const u8 queue_type, const u64 resource_handle,
const u64 log_pageaddr, u64 count)
{
- u64 dummy, reg_control;
+ u64 reg_control;
reg_control = EHEA_BMASK_SET(H_REG_RPAGE_PAGE_SIZE, pagesize)
| EHEA_BMASK_SET(H_REG_RPAGE_QT, queue_type);
- return ehea_hcall_9arg_9ret(H_REGISTER_HEA_RPAGES,
- adapter_handle, /* R4 */
- reg_control, /* R5 */
- resource_handle, /* R6 */
- log_pageaddr, /* R7 */
- count, /* R8 */
- 0, 0, 0, 0, /* R9-R12 */
- &dummy, /* R4 */
- &dummy, /* R5 */
- &dummy, /* R6 */
- &dummy, /* R7 */
- &dummy, /* R8 */
- &dummy, /* R9 */
- &dummy, /* R10 */
- &dummy, /* R11 */
- &dummy); /* R12 */
+ return ehea_plpar_hcall_norets(H_REGISTER_HEA_RPAGES,
+ adapter_handle, /* R4 */
+ reg_control, /* R5 */
+ resource_handle, /* R6 */
+ log_pageaddr, /* R7 */
+ count, /* R8 */
+ 0, 0); /* R9-R10 */
}
u64 ehea_h_register_smr(const u64 adapter_handle, const u64 orig_mr_handle,
const u64 vaddr_in, const u32 access_ctrl, const u32 pd,
struct ehea_mr *mr)
{
- u64 hret, dummy, lkey_out;
-
- hret = ehea_hcall_9arg_9ret(H_REGISTER_SMR,
- adapter_handle , /* R4 */
- orig_mr_handle, /* R5 */
- vaddr_in, /* R6 */
- (((u64)access_ctrl) << 32ULL), /* R7 */
- pd, /* R8 */
- 0, 0, 0, 0, /* R9-R12 */
- &mr->handle, /* R4 */
- &dummy, /* R5 */
- &lkey_out, /* R6 */
- &dummy, /* R7 */
- &dummy, /* R8 */
- &dummy, /* R9 */
- &dummy, /* R10 */
- &dummy, /* R11 */
- &dummy); /* R12 */
- mr->lkey = (u32)lkey_out;
+ u64 hret;
+ u64 outs[PLPAR_HCALL9_BUFSIZE];
+
+ hret = ehea_plpar_hcall9(H_REGISTER_SMR,
+ outs,
+ adapter_handle, /* R4 */
+ orig_mr_handle, /* R5 */
+ vaddr_in, /* R6 */
+ (((u64)access_ctrl) << 32ULL), /* R7 */
+ pd, /* R8 */
+ 0, 0, 0, 0); /* R9-R12 */
+
+ mr->handle = outs[0];
+ mr->lkey = (u32)outs[2];
return hret;
}
u64 ehea_h_disable_and_get_hea(const u64 adapter_handle, const u64 qp_handle)
{
- u64 hret, dummy, ladr_next_sq_wqe_out;
- u64 ladr_next_rq1_wqe_out, ladr_next_rq2_wqe_out, ladr_next_rq3_wqe_out;
-
- hret = ehea_hcall_9arg_9ret(H_DISABLE_AND_GET_HEA,
- adapter_handle, /* R4 */
- H_DISABLE_GET_EHEA_WQE_P, /* R5 */
- qp_handle, /* R6 */
- 0, 0, 0, 0, 0, 0, /* R7-R12 */
- &ladr_next_sq_wqe_out, /* R4 */
- &ladr_next_rq1_wqe_out, /* R5 */
- &ladr_next_rq2_wqe_out, /* R6 */
- &ladr_next_rq3_wqe_out, /* R7 */
- &dummy, /* R8 */
- &dummy, /* R9 */
- &dummy, /* R10 */
- &dummy, /* R11 */
- &dummy); /* R12 */
- return hret;
+ u64 outs[PLPAR_HCALL9_BUFSIZE];
+
+ return ehea_plpar_hcall9(H_DISABLE_AND_GET_HEA,
+ outs,
+ adapter_handle, /* R4 */
+ H_DISABLE_GET_EHEA_WQE_P, /* R5 */
+ qp_handle, /* R6 */
+ 0, 0, 0, 0, 0, 0); /* R7-R12 */
}
u64 ehea_h_free_resource(const u64 adapter_handle, const u64 res_handle)
{
- u64 dummy;
-
- return ehea_hcall_9arg_9ret(H_FREE_RESOURCE,
- adapter_handle, /* R4 */
- res_handle, /* R5 */
- 0, 0, 0, 0, 0, 0, 0, /* R6-R12 */
- &dummy, /* R4 */
- &dummy, /* R5 */
- &dummy, /* R6 */
- &dummy, /* R7 */
- &dummy, /* R8 */
- &dummy, /* R9 */
- &dummy, /* R10 */
- &dummy, /* R11 */
- &dummy); /* R12 */
+ return ehea_plpar_hcall_norets(H_FREE_RESOURCE,
+ adapter_handle, /* R4 */
+ res_handle, /* R5 */
+ 0, 0, 0, 0, 0); /* R6-R10 */
}
u64 ehea_h_alloc_resource_mr(const u64 adapter_handle, const u64 vaddr,
const u64 length, const u32 access_ctrl,
const u32 pd, u64 *mr_handle, u32 *lkey)
{
- u64 hret, dummy, lkey_out;
-
- hret = ehea_hcall_9arg_9ret(H_ALLOC_HEA_RESOURCE,
- adapter_handle, /* R4 */
- 5, /* R5 */
- vaddr, /* R6 */
- length, /* R7 */
- (((u64) access_ctrl) << 32ULL),/* R8 */
- pd, /* R9 */
- 0, 0, 0, /* R10-R12 */
- mr_handle, /* R4 */
- &dummy, /* R5 */
- &lkey_out, /* R6 */
- &dummy, /* R7 */
- &dummy, /* R8 */
- &dummy, /* R9 */
- &dummy, /* R10 */
- &dummy, /* R11 */
- &dummy); /* R12 */
- *lkey = (u32) lkey_out;
-
+ u64 hret;
+ u64 outs[PLPAR_HCALL9_BUFSIZE];
+
+ hret = ehea_plpar_hcall9(H_ALLOC_HEA_RESOURCE,
+ outs,
+ adapter_handle, /* R4 */
+ 5, /* R5 */
+ vaddr, /* R6 */
+ length, /* R7 */
+ (((u64) access_ctrl) << 32ULL), /* R8 */
+ pd, /* R9 */
+ 0, 0, 0); /* R10-R12 */
+
+ *mr_handle = outs[0];
+ *lkey = (u32)outs[2];
return hret;
}
u64 ehea_h_query_ehea(const u64 adapter_handle, void *cb_addr)
{
- u64 hret, dummy, cb_logaddr;
+ u64 hret, cb_logaddr;
cb_logaddr = virt_to_abs(cb_addr);
- hret = ehea_hcall_9arg_9ret(H_QUERY_HEA,
- adapter_handle, /* R4 */
- cb_logaddr, /* R5 */
- 0, 0, 0, 0, 0, 0, 0, /* R6-R12 */
- &dummy, /* R4 */
- &dummy, /* R5 */
- &dummy, /* R6 */
- &dummy, /* R7 */
- &dummy, /* R8 */
- &dummy, /* R9 */
- &dummy, /* R10 */
- &dummy, /* R11 */
- &dummy); /* R12 */
+ hret = ehea_plpar_hcall_norets(H_QUERY_HEA,
+ adapter_handle, /* R4 */
+ cb_logaddr, /* R5 */
+ 0, 0, 0, 0, 0); /* R6-R10 */
#ifdef DEBUG
ehea_dmp(cb_addr, sizeof(struct hcp_query_ehea), "hcp_query_ehea");
#endif
const u8 cb_cat, const u64 select_mask,
void *cb_addr)
{
- u64 port_info, dummy;
+ u64 port_info;
u64 cb_logaddr = virt_to_abs(cb_addr);
u64 arr_index = 0;
port_info = EHEA_BMASK_SET(H_MEHEAPORT_CAT, cb_cat)
| EHEA_BMASK_SET(H_MEHEAPORT_PN, port_num);
- return ehea_hcall_9arg_9ret(H_QUERY_HEA_PORT,
- adapter_handle, /* R4 */
- port_info, /* R5 */
- select_mask, /* R6 */
- arr_index, /* R7 */
- cb_logaddr, /* R8 */
- 0, 0, 0, 0, /* R9-R12 */
- &dummy, /* R4 */
- &dummy, /* R5 */
- &dummy, /* R6 */
- &dummy, /* R7 */
- &dummy, /* R8 */
- &dummy, /* R9 */
- &dummy, /* R10 */
- &dummy, /* R11 */
- &dummy); /* R12 */
+ return ehea_plpar_hcall_norets(H_QUERY_HEA_PORT,
+ adapter_handle, /* R4 */
+ port_info, /* R5 */
+ select_mask, /* R6 */
+ arr_index, /* R7 */
+ cb_logaddr, /* R8 */
+ 0, 0); /* R9-R10 */
}
u64 ehea_h_modify_ehea_port(const u64 adapter_handle, const u16 port_num,
const u8 cb_cat, const u64 select_mask,
void *cb_addr)
{
- u64 port_info, dummy, inv_attr_ident, proc_mask;
+ u64 outs[PLPAR_HCALL9_BUFSIZE];
+ u64 port_info;
u64 arr_index = 0;
u64 cb_logaddr = virt_to_abs(cb_addr);
#ifdef DEBUG
ehea_dump(cb_addr, sizeof(struct hcp_ehea_port_cb0), "Before HCALL");
#endif
- return ehea_hcall_9arg_9ret(H_MODIFY_HEA_PORT,
- adapter_handle, /* R4 */
- port_info, /* R5 */
- select_mask, /* R6 */
- arr_index, /* R7 */
- cb_logaddr, /* R8 */
- 0, 0, 0, 0, /* R9-R12 */
- &inv_attr_ident, /* R4 */
- &proc_mask, /* R5 */
- &dummy, /* R6 */
- &dummy, /* R7 */
- &dummy, /* R8 */
- &dummy, /* R9 */
- &dummy, /* R10 */
- &dummy, /* R11 */
- &dummy); /* R12 */
+ return ehea_plpar_hcall9(H_MODIFY_HEA_PORT,
+ outs,
+ adapter_handle, /* R4 */
+ port_info, /* R5 */
+ select_mask, /* R6 */
+ arr_index, /* R7 */
+ cb_logaddr, /* R8 */
+ 0, 0, 0, 0); /* R9-R12 */
}
u64 ehea_h_reg_dereg_bcmc(const u64 adapter_handle, const u16 port_num,
const u8 reg_type, const u64 mc_mac_addr,
const u16 vlan_id, const u32 hcall_id)
{
- u64 r5_port_num, r6_reg_type, r7_mc_mac_addr, r8_vlan_id, dummy;
+ u64 r5_port_num, r6_reg_type, r7_mc_mac_addr, r8_vlan_id;
u64 mac_addr = mc_mac_addr >> 16;
r5_port_num = EHEA_BMASK_SET(H_REGBCMC_PN, port_num);
r7_mc_mac_addr = EHEA_BMASK_SET(H_REGBCMC_MACADDR, mac_addr);
r8_vlan_id = EHEA_BMASK_SET(H_REGBCMC_VLANID, vlan_id);
- return ehea_hcall_9arg_9ret(hcall_id,
- adapter_handle, /* R4 */
- r5_port_num, /* R5 */
- r6_reg_type, /* R6 */
- r7_mc_mac_addr, /* R7 */
- r8_vlan_id, /* R8 */
- 0, 0, 0, 0, /* R9-R12 */
- &dummy, /* R4 */
- &dummy, /* R5 */
- &dummy, /* R6 */
- &dummy, /* R7 */
- &dummy, /* R8 */
- &dummy, /* R9 */
- &dummy, /* R10 */
- &dummy, /* R11 */
- &dummy); /* R12 */
+ return ehea_plpar_hcall_norets(hcall_id,
+ adapter_handle, /* R4 */
+ r5_port_num, /* R5 */
+ r6_reg_type, /* R6 */
+ r7_mc_mac_addr, /* R7 */
+ r8_vlan_id, /* R8 */
+ 0, 0); /* R9-R10 */
}
u64 ehea_h_reset_events(const u64 adapter_handle, const u64 neq_handle,
const u64 event_mask)
{
- u64 dummy;
-
- return ehea_hcall_9arg_9ret(H_RESET_EVENTS,
- adapter_handle, /* R4 */
- neq_handle, /* R5 */
- event_mask, /* R6 */
- 0, 0, 0, 0, 0, 0, /* R7-R12 */
- &dummy, /* R4 */
- &dummy, /* R5 */
- &dummy, /* R6 */
- &dummy, /* R7 */
- &dummy, /* R8 */
- &dummy, /* R9 */
- &dummy, /* R10 */
- &dummy, /* R11 */
- &dummy); /* R12 */
+ return ehea_plpar_hcall_norets(H_RESET_EVENTS,
+ adapter_handle, /* R4 */
+ neq_handle, /* R5 */
+ event_mask, /* R6 */
+ 0, 0, 0, 0); /* R7-R10 */
}
#include <linux/skbuff.h>
#include <linux/bitops.h>
#include <linux/jiffies.h>
+#include <linux/io.h>
#include <asm/system.h>
-#include <asm/io.h>
#include <asm/dma.h>
u8 __iomem *base = get_hwbase(dev);
u32 events;
int i;
+ unsigned long flags;
dprintk(KERN_DEBUG "%s: nv_nic_irq_tx\n", dev->name);
if (!(events & np->irqmask))
break;
- spin_lock_irq(&np->lock);
+ spin_lock_irqsave(&np->lock, flags);
nv_tx_done(dev);
- spin_unlock_irq(&np->lock);
+ spin_unlock_irqrestore(&np->lock, flags);
if (events & (NVREG_IRQ_TX_ERR)) {
dprintk(KERN_DEBUG "%s: received irq with events 0x%x. Probably TX fail.\n",
dev->name, events);
}
if (i > max_interrupt_work) {
- spin_lock_irq(&np->lock);
+ spin_lock_irqsave(&np->lock, flags);
/* disable interrupts on the nic */
writel(NVREG_IRQ_TX_ALL, base + NvRegIrqMask);
pci_push(base);
mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
}
printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_tx.\n", dev->name, i);
- spin_unlock_irq(&np->lock);
+ spin_unlock_irqrestore(&np->lock, flags);
break;
}
u8 __iomem *base = get_hwbase(dev);
u32 events;
int i;
+ unsigned long flags;
dprintk(KERN_DEBUG "%s: nv_nic_irq_rx\n", dev->name);
nv_rx_process(dev, dev->weight);
if (nv_alloc_rx(dev)) {
- spin_lock_irq(&np->lock);
+ spin_lock_irqsave(&np->lock, flags);
if (!np->in_shutdown)
mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
- spin_unlock_irq(&np->lock);
+ spin_unlock_irqrestore(&np->lock, flags);
}
if (i > max_interrupt_work) {
- spin_lock_irq(&np->lock);
+ spin_lock_irqsave(&np->lock, flags);
/* disable interrupts on the nic */
writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask);
pci_push(base);
mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
}
printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_rx.\n", dev->name, i);
- spin_unlock_irq(&np->lock);
+ spin_unlock_irqrestore(&np->lock, flags);
break;
}
}
u8 __iomem *base = get_hwbase(dev);
u32 events;
int i;
+ unsigned long flags;
dprintk(KERN_DEBUG "%s: nv_nic_irq_other\n", dev->name);
break;
if (events & NVREG_IRQ_LINK) {
- spin_lock_irq(&np->lock);
+ spin_lock_irqsave(&np->lock, flags);
nv_link_irq(dev);
- spin_unlock_irq(&np->lock);
+ spin_unlock_irqrestore(&np->lock, flags);
}
if (np->need_linktimer && time_after(jiffies, np->link_timeout)) {
- spin_lock_irq(&np->lock);
+ spin_lock_irqsave(&np->lock, flags);
nv_linkchange(dev);
- spin_unlock_irq(&np->lock);
+ spin_unlock_irqrestore(&np->lock, flags);
np->link_timeout = jiffies + LINK_TIMEOUT;
}
if (events & (NVREG_IRQ_UNKNOWN)) {
dev->name, events);
}
if (i > max_interrupt_work) {
- spin_lock_irq(&np->lock);
+ spin_lock_irqsave(&np->lock, flags);
/* disable interrupts on the nic */
writel(NVREG_IRQ_OTHER, base + NvRegIrqMask);
pci_push(base);
mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
}
printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_other.\n", dev->name, i);
- spin_unlock_irq(&np->lock);
+ spin_unlock_irqrestore(&np->lock, flags);
break;
}
}
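The forcedeth hunks above convert the per-vector interrupt handlers from spin_lock_irq() to spin_lock_irqsave(). spin_lock_irq() unconditionally re-enables local interrupts on unlock, which is unsafe if the code can also run with interrupts already disabled (these handlers are reachable from non-hard-IRQ paths as well); the save/restore variant preserves whatever IRQ state the caller had. The idiom in isolation, as a sketch:

#include <linux/spinlock.h>

static void touch_shared_state(spinlock_t *lock)
{
	unsigned long flags;

	/* safe regardless of the caller's IRQ state */
	spin_lock_irqsave(lock, flags);
	/* ... modify state shared with other interrupt paths ... */
	spin_unlock_irqrestore(lock, flags);
}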
free_index = pool->consumer_index++ % pool->size;
+ pool->consumer_index %= pool->size;
index = pool->free_map[free_index];
ibmveth_assert(index != IBM_VETH_INVALID_MAP);
if(lpar_rc != H_SUCCESS) {
pool->free_map[free_index] = index;
pool->skbuff[index] = NULL;
- pool->consumer_index--;
+ if (pool->consumer_index == 0)
+ pool->consumer_index = pool->size - 1;
+ else
+ pool->consumer_index--;
dma_unmap_single(&adapter->vdev->dev,
pool->dma_addr[index], pool->buff_size,
DMA_FROM_DEVICE);
DMA_FROM_DEVICE);
free_index = adapter->rx_buff_pool[pool].producer_index++ % adapter->rx_buff_pool[pool].size;
+ adapter->rx_buff_pool[pool].producer_index %= adapter->rx_buff_pool[pool].size;
adapter->rx_buff_pool[pool].free_map[free_index] = index;
mb();
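The two ibmveth hunks above keep consumer_index and producer_index inside [0, pool->size) instead of letting the counters grow without bound; with a pool size that is not a power of two, an unbounded u32 counter yields wrong slot numbers once it wraps. The re-normalizing idiom as a sketch (hypothetical ring type, not the driver's structs):

struct ring {
	unsigned int index;	/* kept in [0, size) */
	unsigned int size;	/* need not be a power of two */
};

static unsigned int ring_next_slot(struct ring *r)
{
	unsigned int slot = r->index++ % r->size;

	r->index %= r->size;	/* re-normalize so the counter never wraps */
	return slot;
}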
&adapter->rx_buff_pool[i]);
}
+static int ibmveth_register_logical_lan(struct ibmveth_adapter *adapter,
+ union ibmveth_buf_desc rxq_desc, u64 mac_address)
+{
+ int rc, try_again = 1;
+
+ /* After a kexec the adapter will still be open, so our attempt to
+ * open it will fail. So if we get a failure we free the adapter and
+ * try again, but only once. */
+retry:
+ rc = h_register_logical_lan(adapter->vdev->unit_address,
+ adapter->buffer_list_dma, rxq_desc.desc,
+ adapter->filter_list_dma, mac_address);
+
+ if (rc != H_SUCCESS && try_again) {
+ do {
+ rc = h_free_logical_lan(adapter->vdev->unit_address);
+ } while (H_IS_LONG_BUSY(rc) || (rc == H_BUSY));
+
+ try_again = 0;
+ goto retry;
+ }
+
+ return rc;
+}
+
static int ibmveth_open(struct net_device *netdev)
{
struct ibmveth_adapter *adapter = netdev->priv;
ibmveth_debug_printk("filter list @ 0x%p\n", adapter->filter_list_addr);
ibmveth_debug_printk("receive q @ 0x%p\n", adapter->rx_queue.queue_addr);
+ h_vio_signal(adapter->vdev->unit_address, VIO_IRQ_DISABLE);
- lpar_rc = h_register_logical_lan(adapter->vdev->unit_address,
- adapter->buffer_list_dma,
- rxq_desc.desc,
- adapter->filter_list_dma,
- mac_address);
+ lpar_rc = ibmveth_register_logical_lan(adapter, rxq_desc, mac_address);
if(lpar_rc != H_SUCCESS) {
ibmveth_error_printk("h_register_logical_lan failed with %ld\n", lpar_rc);
return -EINVAL;
}
+#ifdef CONFIG_NET_POLL_CONTROLLER
+static void ibmveth_poll_controller(struct net_device *dev)
+{
+ ibmveth_replenish_task(dev->priv);
+ ibmveth_interrupt(dev->irq, dev);
+}
+#endif
+
static int __devinit ibmveth_probe(struct vio_dev *dev, const struct vio_device_id *id)
{
int rc, i;
netdev->ethtool_ops = &netdev_ethtool_ops;
netdev->change_mtu = ibmveth_change_mtu;
SET_NETDEV_DEV(netdev, &dev->dev);
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ netdev->poll_controller = ibmveth_poll_controller;
+#endif
netdev->features |= NETIF_F_LLTX;
spin_lock_init(&adapter->stats_lock);
{
struct proc_dir_entry *entry;
if (ibmveth_proc_dir) {
- entry = create_proc_entry(adapter->netdev->name, S_IFREG, ibmveth_proc_dir);
+ char u_addr[10];
+ sprintf(u_addr, "%x", adapter->vdev->unit_address);
+ entry = create_proc_entry(u_addr, S_IFREG, ibmveth_proc_dir);
if (!entry) {
ibmveth_error_printk("Cannot create adapter proc entry");
} else {
static void ibmveth_proc_unregister_adapter(struct ibmveth_adapter *adapter)
{
if (ibmveth_proc_dir) {
- remove_proc_entry(adapter->netdev->name, ibmveth_proc_dir);
+ char u_addr[10];
+ sprintf(u_addr, "%x", adapter->vdev->unit_address);
+ remove_proc_entry(u_addr, ibmveth_proc_dir);
}
}
for (offset = ETH_MIB_BAD_OCTETS_RECEIVED;
offset <= ETH_MIB_FRAMES_1024_TO_MAX_OCTETS;
offset += 4)
- *(u32 *)((char *)p + offset) = read_mib(mp, offset);
+ *(u32 *)((char *)p + offset) += read_mib(mp, offset);
p->good_octets_sent += read_mib(mp, ETH_MIB_GOOD_OCTETS_SENT_LOW);
p->good_octets_sent +=
for (offset = ETH_MIB_GOOD_FRAMES_SENT;
offset <= ETH_MIB_LATE_COLLISION;
offset += 4)
- *(u32 *)((char *)p + offset) = read_mib(mp, offset);
+ *(u32 *)((char *)p + offset) += read_mib(mp, offset);
}
/*
#include "skge.h"
#define DRV_NAME "skge"
-#define DRV_VERSION "1.8"
+#define DRV_VERSION "1.9"
#define PFX DRV_NAME " "
#define DEFAULT_TX_RING_SIZE 128
else if (hw->chip_id == CHIP_ID_YUKON)
supported &= ~SUPPORTED_1000baseT_Half;
} else
- supported = SUPPORTED_1000baseT_Full | SUPPORTED_FIBRE
- | SUPPORTED_Autoneg;
+ supported = SUPPORTED_1000baseT_Full | SUPPORTED_1000baseT_Half
+ | SUPPORTED_FIBRE | SUPPORTED_Autoneg;
return supported;
}
{
struct skge_port *skge = netdev_priv(dev);
- ecmd->tx_pause = (skge->flow_control == FLOW_MODE_LOC_SEND)
- || (skge->flow_control == FLOW_MODE_SYMMETRIC);
- ecmd->rx_pause = (skge->flow_control == FLOW_MODE_REM_SEND)
- || (skge->flow_control == FLOW_MODE_SYMMETRIC);
+ ecmd->rx_pause = (skge->flow_control == FLOW_MODE_SYMMETRIC)
+ || (skge->flow_control == FLOW_MODE_SYM_OR_REM);
+ ecmd->tx_pause = ecmd->rx_pause || (skge->flow_control == FLOW_MODE_LOC_SEND);
- ecmd->autoneg = skge->autoneg;
+ ecmd->autoneg = ecmd->rx_pause || ecmd->tx_pause;
}
static int skge_set_pauseparam(struct net_device *dev,
struct ethtool_pauseparam *ecmd)
{
struct skge_port *skge = netdev_priv(dev);
+ struct ethtool_pauseparam old;
- skge->autoneg = ecmd->autoneg;
- if (ecmd->rx_pause && ecmd->tx_pause)
- skge->flow_control = FLOW_MODE_SYMMETRIC;
- else if (ecmd->rx_pause && !ecmd->tx_pause)
- skge->flow_control = FLOW_MODE_REM_SEND;
- else if (!ecmd->rx_pause && ecmd->tx_pause)
- skge->flow_control = FLOW_MODE_LOC_SEND;
- else
- skge->flow_control = FLOW_MODE_NONE;
+ skge_get_pauseparam(dev, &old);
+
+ if (ecmd->autoneg != old.autoneg)
+ skge->flow_control = ecmd->autoneg ? FLOW_MODE_NONE : FLOW_MODE_SYMMETRIC;
+ else {
+ if (ecmd->rx_pause && ecmd->tx_pause)
+ skge->flow_control = FLOW_MODE_SYMMETRIC;
+ else if (ecmd->rx_pause && !ecmd->tx_pause)
+ skge->flow_control = FLOW_MODE_SYM_OR_REM;
+ else if (!ecmd->rx_pause && ecmd->tx_pause)
+ skge->flow_control = FLOW_MODE_LOC_SEND;
+ else
+ skge->flow_control = FLOW_MODE_NONE;
+ }
if (netif_running(dev))
skge_phy_reset(skge);
+
return 0;
}
return 0;
}
+static const char *skge_pause(enum pause_status status)
+{
+ switch(status) {
+ case FLOW_STAT_NONE:
+ return "none";
+ case FLOW_STAT_REM_SEND:
+ return "rx only";
+ case FLOW_STAT_LOC_SEND:
+ return "tx only";
+ case FLOW_STAT_SYMMETRIC: /* Both stations may send PAUSE */
+ return "both";
+ default:
+ return "indeterminate";
+ }
+}
+
+
static void skge_link_up(struct skge_port *skge)
{
skge_write8(skge->hw, SK_REG(skge->port, LNK_LED_REG),
netif_carrier_on(skge->netdev);
netif_wake_queue(skge->netdev);
- if (netif_msg_link(skge))
+ if (netif_msg_link(skge)) {
printk(KERN_INFO PFX
"%s: Link is up at %d Mbps, %s duplex, flow control %s\n",
skge->netdev->name, skge->speed,
skge->duplex == DUPLEX_FULL ? "full" : "half",
- (skge->flow_control == FLOW_MODE_NONE) ? "none" :
- (skge->flow_control == FLOW_MODE_LOC_SEND) ? "tx only" :
- (skge->flow_control == FLOW_MODE_REM_SEND) ? "rx only" :
- (skge->flow_control == FLOW_MODE_SYMMETRIC) ? "tx and rx" :
- "unknown");
+ skge_pause(skge->flow_status));
+ }
}
static void skge_link_down(struct skge_port *skge)
printk(KERN_INFO PFX "%s: Link is down.\n", skge->netdev->name);
}
+
+static void xm_link_down(struct skge_hw *hw, int port)
+{
+ struct net_device *dev = hw->dev[port];
+ struct skge_port *skge = netdev_priv(dev);
+ u16 cmd, msk;
+
+ if (hw->phy_type == SK_PHY_XMAC) {
+ msk = xm_read16(hw, port, XM_IMSK);
+ msk |= XM_IS_INP_ASS | XM_IS_LIPA_RC | XM_IS_RX_PAGE | XM_IS_AND;
+ xm_write16(hw, port, XM_IMSK, msk);
+ }
+
+ cmd = xm_read16(hw, port, XM_MMU_CMD);
+ cmd &= ~(XM_MMU_ENA_RX | XM_MMU_ENA_TX);
+ xm_write16(hw, port, XM_MMU_CMD, cmd);
+ /* dummy read to ensure writing */
+ (void) xm_read16(hw, port, XM_MMU_CMD);
+
+ if (netif_carrier_ok(dev))
+ skge_link_down(skge);
+}
+
static int __xm_phy_read(struct skge_hw *hw, int port, u16 reg, u16 *val)
{
int i;
[FLOW_MODE_NONE] = 0,
[FLOW_MODE_LOC_SEND] = PHY_AN_PAUSE_ASYM,
[FLOW_MODE_SYMMETRIC] = PHY_AN_PAUSE_CAP,
- [FLOW_MODE_REM_SEND] = PHY_AN_PAUSE_CAP | PHY_AN_PAUSE_ASYM,
+ [FLOW_MODE_SYM_OR_REM] = PHY_AN_PAUSE_CAP | PHY_AN_PAUSE_ASYM,
+};
+
+/* special defines for FIBER (88E1011S only) */
+static const u16 fiber_pause_map[] = {
+ [FLOW_MODE_NONE] = PHY_X_P_NO_PAUSE,
+ [FLOW_MODE_LOC_SEND] = PHY_X_P_ASYM_MD,
+ [FLOW_MODE_SYMMETRIC] = PHY_X_P_SYM_MD,
+ [FLOW_MODE_SYM_OR_REM] = PHY_X_P_BOTH_MD,
};
status = xm_phy_read(hw, port, PHY_BCOM_STAT);
if ((status & PHY_ST_LSYNC) == 0) {
- u16 cmd = xm_read16(hw, port, XM_MMU_CMD);
- cmd &= ~(XM_MMU_ENA_RX | XM_MMU_ENA_TX);
- xm_write16(hw, port, XM_MMU_CMD, cmd);
- /* dummy read to ensure writing */
- (void) xm_read16(hw, port, XM_MMU_CMD);
-
- if (netif_carrier_ok(dev))
- skge_link_down(skge);
+ xm_link_down(hw, port);
return;
}
return;
}
-
/* We are using IEEE 802.3z/D5.0 Table 37-4 */
switch (aux & PHY_B_AS_PAUSE_MSK) {
case PHY_B_AS_PAUSE_MSK:
- skge->flow_control = FLOW_MODE_SYMMETRIC;
+ skge->flow_status = FLOW_STAT_SYMMETRIC;
break;
case PHY_B_AS_PRR:
- skge->flow_control = FLOW_MODE_REM_SEND;
+ skge->flow_status = FLOW_STAT_REM_SEND;
break;
case PHY_B_AS_PRT:
- skge->flow_control = FLOW_MODE_LOC_SEND;
+ skge->flow_status = FLOW_STAT_LOC_SEND;
break;
default:
- skge->flow_control = FLOW_MODE_NONE;
+ skge->flow_status = FLOW_STAT_NONE;
}
skge->speed = SPEED_1000;
}
if (skge->advertising & ADVERTISED_1000baseT_Full)
ctrl |= PHY_X_AN_FD;
- switch(skge->flow_control) {
- case FLOW_MODE_NONE:
- ctrl |= PHY_X_P_NO_PAUSE;
- break;
- case FLOW_MODE_LOC_SEND:
- ctrl |= PHY_X_P_ASYM_MD;
- break;
- case FLOW_MODE_SYMMETRIC:
- ctrl |= PHY_X_P_BOTH_MD;
- break;
- }
+ ctrl |= fiber_pause_map[skge->flow_control];
xm_phy_write(hw, port, PHY_XMAC_AUNE_ADV, ctrl);
status = xm_phy_read(hw, port, PHY_XMAC_STAT);
if ((status & PHY_ST_LSYNC) == 0) {
- u16 cmd = xm_read16(hw, port, XM_MMU_CMD);
- cmd &= ~(XM_MMU_ENA_RX | XM_MMU_ENA_TX);
- xm_write16(hw, port, XM_MMU_CMD, cmd);
- /* dummy read to ensure writing */
- (void) xm_read16(hw, port, XM_MMU_CMD);
-
- if (netif_carrier_ok(dev))
- skge_link_down(skge);
+ xm_link_down(hw, port);
return;
}
}
/* We are using IEEE 802.3z/D5.0 Table 37-4 */
- if (lpa & PHY_X_P_SYM_MD)
- skge->flow_control = FLOW_MODE_SYMMETRIC;
- else if ((lpa & PHY_X_RS_PAUSE) == PHY_X_P_ASYM_MD)
- skge->flow_control = FLOW_MODE_REM_SEND;
- else if ((lpa & PHY_X_RS_PAUSE) == PHY_X_P_BOTH_MD)
- skge->flow_control = FLOW_MODE_LOC_SEND;
+ if ((skge->flow_control == FLOW_MODE_SYMMETRIC ||
+ skge->flow_control == FLOW_MODE_SYM_OR_REM) &&
+ (lpa & PHY_X_P_SYM_MD))
+ skge->flow_status = FLOW_STAT_SYMMETRIC;
+ else if (skge->flow_control == FLOW_MODE_SYM_OR_REM &&
+ (lpa & PHY_X_RS_PAUSE) == PHY_X_P_ASYM_MD)
+ /* Enable PAUSE receive, disable PAUSE transmit */
+ skge->flow_status = FLOW_STAT_REM_SEND;
+ else if (skge->flow_control == FLOW_MODE_LOC_SEND &&
+ (lpa & PHY_X_RS_PAUSE) == PHY_X_P_BOTH_MD)
+ /* Disable PAUSE receive, enable PAUSE transmit */
+ skge->flow_status = FLOW_STAT_LOC_SEND;
else
- skge->flow_control = FLOW_MODE_NONE;
-
+ skge->flow_status = FLOW_STAT_NONE;
skge->speed = SPEED_1000;
}
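The fiber autonegotiation result is now resolved against what was locally requested, not just against the link partner's bits: a station that never advertised symmetric pause can no longer end up in FLOW_STAT_SYMMETRIC. The decision table above, restated as a sketch (assuming the enums and PHY_X_* constants from skge.h):

static enum pause_status resolve_fiber_pause(enum pause_control req, u16 lpa)
{
	if ((req == FLOW_MODE_SYMMETRIC || req == FLOW_MODE_SYM_OR_REM) &&
	    (lpa & PHY_X_P_SYM_MD))
		return FLOW_STAT_SYMMETRIC;
	if (req == FLOW_MODE_SYM_OR_REM &&
	    (lpa & PHY_X_RS_PAUSE) == PHY_X_P_ASYM_MD)
		return FLOW_STAT_REM_SEND;	/* rx only */
	if (req == FLOW_MODE_LOC_SEND &&
	    (lpa & PHY_X_RS_PAUSE) == PHY_X_P_BOTH_MD)
		return FLOW_STAT_LOC_SEND;	/* tx only */
	return FLOW_STAT_NONE;
}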
printk(KERN_DEBUG PFX "%s: mac interrupt status 0x%x\n",
skge->netdev->name, status);
+ if (hw->phy_type == SK_PHY_XMAC &&
+ (status & (XM_IS_INP_ASS | XM_IS_LIPA_RC)))
+ xm_link_down(hw, port);
+
if (status & XM_IS_TXF_UR) {
xm_write32(hw, port, XM_MODE, XM_MD_FTF);
++skge->net_stats.tx_fifo_errors;
{
struct skge_hw *hw = skge->hw;
int port = skge->port;
- u16 cmd;
+ u16 cmd, msk;
u32 mode;
cmd = xm_read16(hw, port, XM_MMU_CMD);
* enabling pause frame reception is required for 1000BT
* because the XMAC is not reset if the link is going down
*/
- if (skge->flow_control == FLOW_MODE_NONE ||
- skge->flow_control == FLOW_MODE_LOC_SEND)
+ if (skge->flow_status == FLOW_STAT_NONE ||
+ skge->flow_status == FLOW_STAT_LOC_SEND)
/* Disable Pause Frame Reception */
cmd |= XM_MMU_IGN_PF;
else
xm_write16(hw, port, XM_MMU_CMD, cmd);
mode = xm_read32(hw, port, XM_MODE);
- if (skge->flow_control == FLOW_MODE_SYMMETRIC ||
- skge->flow_control == FLOW_MODE_LOC_SEND) {
+ if (skge->flow_status == FLOW_STAT_SYMMETRIC ||
+ skge->flow_status == FLOW_STAT_LOC_SEND) {
/*
* Configure Pause Frame Generation
* Use internal and external Pause Frame Generation.
}
xm_write32(hw, port, XM_MODE, mode);
- xm_write16(hw, port, XM_IMSK, XM_DEF_MSK);
+ msk = XM_DEF_MSK;
+ if (hw->phy_type != SK_PHY_XMAC)
+ msk |= XM_IS_INP_ASS; /* disable GP0 interrupt bit */
+
+ xm_write16(hw, port, XM_IMSK, msk);
xm_read16(hw, port, XM_ISRC);
/* get MMU Command Reg. */
adv |= PHY_M_AN_10_FD;
if (skge->advertising & ADVERTISED_10baseT_Half)
adv |= PHY_M_AN_10_HD;
- } else /* special defines for FIBER (88E1011S only) */
- adv |= PHY_M_AN_1000X_AHD | PHY_M_AN_1000X_AFD;
- /* Set Flow-control capabilities */
- adv |= phy_pause_map[skge->flow_control];
+ /* Set Flow-control capabilities */
+ adv |= phy_pause_map[skge->flow_control];
+ } else {
+ if (skge->advertising & ADVERTISED_1000baseT_Full)
+ adv |= PHY_M_AN_1000X_AFD;
+ if (skge->advertising & ADVERTISED_1000baseT_Half)
+ adv |= PHY_M_AN_1000X_AHD;
+
+ adv |= fiber_pause_map[skge->flow_control];
+ }
/* Restart Auto-negotiation */
ctrl |= PHY_CT_ANE | PHY_CT_RE_CFG;
case FLOW_MODE_LOC_SEND:
/* disable Rx flow-control */
reg |= GM_GPCR_FC_RX_DIS | GM_GPCR_AU_FCT_DIS;
+ break;
+ case FLOW_MODE_SYMMETRIC:
+ case FLOW_MODE_SYM_OR_REM:
+ /* enable Tx & Rx flow-control */
+ break;
}
gma_write16(hw, port, GM_GP_CTRL, reg);
ctrl &= ~(GM_GPCR_RX_ENA | GM_GPCR_TX_ENA);
gma_write16(hw, port, GM_GP_CTRL, ctrl);
- if (skge->flow_control == FLOW_MODE_REM_SEND) {
+ if (skge->flow_status == FLOW_STAT_REM_SEND) {
+ ctrl = gm_phy_read(hw, port, PHY_MARV_AUNE_ADV);
+ ctrl |= PHY_M_AN_ASP;
/* restore Asymmetric Pause bit */
- gm_phy_write(hw, port, PHY_MARV_AUNE_ADV,
- gm_phy_read(hw, port,
- PHY_MARV_AUNE_ADV)
- | PHY_M_AN_ASP);
-
+ gm_phy_write(hw, port, PHY_MARV_AUNE_ADV, ctrl);
}
yukon_reset(hw, port);
/* We are using IEEE 802.3z/D5.0 Table 37-4 */
switch (phystat & PHY_M_PS_PAUSE_MSK) {
case PHY_M_PS_PAUSE_MSK:
- skge->flow_control = FLOW_MODE_SYMMETRIC;
+ skge->flow_status = FLOW_STAT_SYMMETRIC;
break;
case PHY_M_PS_RX_P_EN:
- skge->flow_control = FLOW_MODE_REM_SEND;
+ skge->flow_status = FLOW_STAT_REM_SEND;
break;
case PHY_M_PS_TX_P_EN:
- skge->flow_control = FLOW_MODE_LOC_SEND;
+ skge->flow_status = FLOW_STAT_LOC_SEND;
break;
default:
- skge->flow_control = FLOW_MODE_NONE;
+ skge->flow_status = FLOW_STAT_NONE;
}
- if (skge->flow_control == FLOW_MODE_NONE ||
+ if (skge->flow_status == FLOW_STAT_NONE ||
(skge->speed < SPEED_1000 && skge->duplex == DUPLEX_HALF))
skge_write8(hw, SK_REG(port, GMAC_CTRL), GMC_PAUSE_OFF);
else
/* Auto speed and flow control */
skge->autoneg = AUTONEG_ENABLE;
- skge->flow_control = FLOW_MODE_SYMMETRIC;
+ skge->flow_control = FLOW_MODE_SYM_OR_REM;
skge->duplex = -1;
skge->speed = -1;
skge->advertising = skge_supported_modes(hw);
XM_IS_RX_COMP = 1<<0, /* Bit 0: Frame Rx Complete */
};
-#define XM_DEF_MSK (~(XM_IS_RXC_OV | XM_IS_TXC_OV | XM_IS_RXF_OV | XM_IS_TXF_UR))
+#define XM_DEF_MSK (~(XM_IS_INP_ASS | XM_IS_LIPA_RC | \
+ XM_IS_RXF_OV | XM_IS_TXF_UR))
/* XM_HW_CFG 16 bit r/w Hardware Config Register */
struct mutex phy_mutex;
};
-enum {
- FLOW_MODE_NONE = 0, /* No Flow-Control */
- FLOW_MODE_LOC_SEND = 1, /* Local station sends PAUSE */
- FLOW_MODE_REM_SEND = 2, /* Symmetric or just remote */
+enum pause_control {
+ FLOW_MODE_NONE = 1, /* No Flow-Control */
+ FLOW_MODE_LOC_SEND = 2, /* Local station sends PAUSE */
FLOW_MODE_SYMMETRIC = 3, /* Both stations may send PAUSE */
+ FLOW_MODE_SYM_OR_REM = 4, /* Both stations may send PAUSE or
+ * just the remote station may send PAUSE
+ */
+};
+
+enum pause_status {
+ FLOW_STAT_INDETERMINATED = 0, /* indeterminate */
+ FLOW_STAT_NONE, /* No Flow Control */
+ FLOW_STAT_REM_SEND, /* Remote Station sends PAUSE */
+ FLOW_STAT_LOC_SEND, /* Local station sends PAUSE */
+ FLOW_STAT_SYMMETRIC, /* Both stations may send PAUSE */
};
+
struct skge_port {
u32 msg_enable;
struct skge_hw *hw;
struct net_device_stats net_stats;
struct work_struct link_thread;
+ enum pause_control flow_control;
+ enum pause_status flow_status;
u8 rx_csum;
u8 blink_on;
- u8 flow_control;
u8 wol;
u8 autoneg; /* AUTONEG_ENABLE, AUTONEG_DISABLE */
u8 duplex; /* DUPLEX_HALF, DUPLEX_FULL */
sky2_write16(hw, SK_REG(port, TX_GMF_CTRL_T), GMF_OPER_ON);
if (hw->chip_id == CHIP_ID_YUKON_EC_U) {
- sky2_write8(hw, SK_REG(port, RX_GMF_LP_THR), 768/8);
+ sky2_write8(hw, SK_REG(port, RX_GMF_LP_THR), 512/8);
sky2_write8(hw, SK_REG(port, RX_GMF_UP_THR), 1024/8);
if (hw->dev[port]->mtu > ETH_DATA_LEN) {
/* set Tx GMAC FIFO Almost Empty Threshold */
pci_dma_sync_single_for_device(sky2->hw->pdev, re->data_addr,
length, PCI_DMA_FROMDEVICE);
re->skb->ip_summed = CHECKSUM_NONE;
- __skb_put(skb, length);
+ skb_put(skb, length);
}
return skb;
}
if (skb_shinfo(skb)->nr_frags)
skb_put_frags(skb, hdr_space, length);
else
- skb_put(skb, hdr_space);
+ skb_put(skb, length);
return skb;
}
/* PCI-Express uncorrectable Error occurred */
u32 pex_err;
- pex_err = sky2_pci_read32(hw,
- hw->err_cap + PCI_ERR_UNCOR_STATUS);
+ pex_err = sky2_pci_read32(hw, PEX_UNC_ERR_STAT);
if (net_ratelimit())
printk(KERN_ERR PFX "%s: pci express error (0x%x)\n",
/* clear the interrupt */
sky2_write32(hw, B2_TST_CTRL1, TST_CFG_WRITE_ON);
- sky2_pci_write32(hw,
- hw->err_cap + PCI_ERR_UNCOR_STATUS,
- 0xffffffffUL);
+ sky2_pci_write32(hw, PEX_UNC_ERR_STAT,
+ 0xffffffffUL);
sky2_write32(hw, B2_TST_CTRL1, TST_CFG_WRITE_OFF);
-
- /* In case of fatal error mask off to keep from getting stuck */
- if (pex_err & (PCI_ERR_UNC_POISON_TLP | PCI_ERR_UNC_FCP
- | PCI_ERR_UNC_DLP)) {
+ if (pex_err & PEX_FATAL_ERRORS) {
u32 hwmsk = sky2_read32(hw, B0_HWE_IMSK);
hwmsk &= ~Y2_IS_PCI_EXP;
sky2_write32(hw, B0_HWE_IMSK, hwmsk);
}
-
}
if (status & Y2_HWE_L1_MASK)
u16 status;
u8 t8;
int i;
- u32 msk;
sky2_write8(hw, B0_CTST, CS_RST_CLR);
sky2_write8(hw, B0_CTST, CS_MRST_CLR);
/* clear any PEX errors */
- if (pci_find_capability(hw->pdev, PCI_CAP_ID_EXP)) {
- hw->err_cap = pci_find_ext_capability(hw->pdev, PCI_EXT_CAP_ID_ERR);
- if (hw->err_cap)
- sky2_pci_write32(hw,
- hw->err_cap + PCI_ERR_UNCOR_STATUS,
- 0xffffffffUL);
- }
+ if (pci_find_capability(hw->pdev, PCI_CAP_ID_EXP))
+ sky2_pci_write32(hw, PEX_UNC_ERR_STAT, 0xffffffffUL);
+
hw->pmd_type = sky2_read8(hw, B2_PMD_TYP);
hw->ports = 1;
sky2_write8(hw, RAM_BUFFER(i, B3_RI_RTO_XS2), SK_RI_TO_53);
}
- msk = Y2_HWE_ALL_MASK;
- if (!hw->err_cap)
- msk &= ~Y2_IS_PCI_EXP;
- sky2_write32(hw, B0_HWE_IMSK, msk);
+ sky2_write32(hw, B0_HWE_IMSK, Y2_HWE_ALL_MASK);
for (i = 0; i < hw->ports; i++)
sky2_gmac_reset(hw, i);
#define ETH_JUMBO_MTU 9000 /* Maximum MTU supported */
-/* PCI device specific config registers */
+/* PCI config registers */
enum {
PCI_DEV_REG1 = 0x40,
PCI_DEV_REG2 = 0x44,
+ PCI_DEV_STATUS = 0x7c,
PCI_DEV_REG3 = 0x80,
PCI_DEV_REG4 = 0x84,
PCI_DEV_REG5 = 0x88,
};
+enum {
+ PEX_DEV_CAP = 0xe4,
+ PEX_DEV_CTRL = 0xe8,
+ PEX_DEV_STA = 0xea,
+ PEX_LNK_STAT = 0xf2,
+ PEX_UNC_ERR_STAT = 0x104,
+};
+
/* Yukon-2 */
enum pci_dev_reg_1 {
PCI_Y2_PIG_ENA = 1<<31, /* Enable Plug-in-Go (YUKON-2) */
PCI_STATUS_REC_MASTER_ABORT | \
PCI_STATUS_REC_TARGET_ABORT | \
PCI_STATUS_PARITY)
+
+enum pex_dev_ctrl {
+ PEX_DC_MAX_RRS_MSK = 7<<12, /* Bit 14..12: Max. Read Request Size */
+	PEX_DC_EN_NO_SNOOP = 1<<11, /* Enable No Snoop */
+	PEX_DC_EN_AUX_POW = 1<<10, /* Enable AUX Power */
+ PEX_DC_EN_PHANTOM = 1<<9, /* Enable Phantom Functions */
+ PEX_DC_EN_EXT_TAG = 1<<8, /* Enable Extended Tag Field */
+ PEX_DC_MAX_PLS_MSK = 7<<5, /* Bit 7.. 5: Max. Payload Size Mask */
+ PEX_DC_EN_REL_ORD = 1<<4, /* Enable Relaxed Ordering */
+ PEX_DC_EN_UNS_RQ_RP = 1<<3, /* Enable Unsupported Request Reporting */
+ PEX_DC_EN_FAT_ER_RP = 1<<2, /* Enable Fatal Error Reporting */
+ PEX_DC_EN_NFA_ER_RP = 1<<1, /* Enable Non-Fatal Error Reporting */
+ PEX_DC_EN_COR_ER_RP = 1<<0, /* Enable Correctable Error Reporting */
+};
+#define PEX_DC_MAX_RD_RQ_SIZE(x) (((x)<<12) & PEX_DC_MAX_RRS_MSK)
+
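A hedged usage sketch for the mask and macro above (function name hypothetical, assuming the driver's 16-bit config-space accessors): encoding n selects a Maximum Read Request Size of 128 << n bytes, so 4 requests 2048-byte reads:

/* Sketch: clamp Max Read Request Size to 2048 bytes (encoding 4). */
static void sky2_clamp_read_request(struct sky2_hw *hw)
{
	u16 ctrl = sky2_pci_read16(hw, PEX_DEV_CTRL);

	ctrl &= ~PEX_DC_MAX_RRS_MSK;
	ctrl |= PEX_DC_MAX_RD_RQ_SIZE(4);	/* 128 << 4 == 2048 bytes */
	sky2_pci_write16(hw, PEX_DEV_CTRL, ctrl);
}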
+/* PEX_UNC_ERR_STAT PEX Uncorrectable Errors Status Register (Yukon-2) */
+enum pex_err {
+ PEX_UNSUP_REQ = 1<<20, /* Unsupported Request Error */
+
+ PEX_MALFOR_TLP = 1<<18, /* Malformed TLP */
+
+ PEX_UNEXP_COMP = 1<<16, /* Unexpected Completion */
+
+ PEX_COMP_TO = 1<<14, /* Completion Timeout */
+ PEX_FLOW_CTRL_P = 1<<13, /* Flow Control Protocol Error */
+ PEX_POIS_TLP = 1<<12, /* Poisoned TLP */
+
+ PEX_DATA_LINK_P = 1<<4, /* Data Link Protocol Error */
+ PEX_FATAL_ERRORS= (PEX_MALFOR_TLP | PEX_FLOW_CTRL_P | PEX_DATA_LINK_P),
+};
+
enum csr_regs {
B0_RAP = 0x0000,
B0_CTST = 0x0004,
struct net_device *dev[2];
int pm_cap;
- int err_cap;
u8 chip_id;
u8 chip_rev;
u8 pmd_type;
#define SMC_IRQ_FLAGS (0)
+#elif defined(CONFIG_ARCH_VERSATILE)
+
+#define SMC_CAN_USE_8BIT 1
+#define SMC_CAN_USE_16BIT 1
+#define SMC_CAN_USE_32BIT 1
+#define SMC_NOWAIT 1
+
+#define SMC_inb(a, r) readb((a) + (r))
+#define SMC_inw(a, r) readw((a) + (r))
+#define SMC_inl(a, r) readl((a) + (r))
+#define SMC_outb(v, a, r) writeb(v, (a) + (r))
+#define SMC_outw(v, a, r) writew(v, (a) + (r))
+#define SMC_outl(v, a, r) writel(v, (a) + (r))
+#define SMC_insl(a, r, p, l) readsl((a) + (r), p, l)
+#define SMC_outsl(a, r, p, l) writesl((a) + (r), p, l)
+
+#define SMC_IRQ_FLAGS (0)
+
#else
#define SMC_CAN_USE_8BIT 1
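The new CONFIG_ARCH_VERSATILE branch above adds the platform purely through the accessor macros; the driver body never calls readw()/writew() directly. A sketch of the pattern (function name and register offset illustrative):

/* Sketch: all register access funnels through the SMC_* macros, so a
 * new platform only has to define them. */
static u16 smc_read_bank_select(void __iomem *ioaddr)
{
	return SMC_inw(ioaddr, 14);	/* bank-select register offset */
}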
"<Jens.Osterkamp@de.ibm.com>");
MODULE_DESCRIPTION("Spider Southbridge Gigabit Ethernet driver");
MODULE_LICENSE("GPL");
+MODULE_VERSION(VERSION);
static int rx_descriptors = SPIDER_NET_RX_DESCRIPTORS_DEFAULT;
static int tx_descriptors = SPIDER_NET_TX_DESCRIPTORS_DEFAULT;
-module_param(rx_descriptors, int, 0644);
-module_param(tx_descriptors, int, 0644);
+module_param(rx_descriptors, int, 0444);
+module_param(tx_descriptors, int, 0444);
MODULE_PARM_DESC(rx_descriptors, "number of descriptors used " \
"in rx chains");
spider_net_init_chain(struct spider_net_card *card,
struct spider_net_descr_chain *chain,
struct spider_net_descr *start_descr,
- int direction, int no)
+ int no)
{
int i;
struct spider_net_descr *descr;
buf = pci_map_single(card->pdev, descr,
SPIDER_NET_DESCR_SIZE,
- direction);
+ PCI_DMA_BIDIRECTIONAL);
if (pci_dma_mapping_error(buf))
goto iommu_error;
(descr-1)->next = start_descr;
start_descr->prev = descr-1;
- descr = start_descr;
- if (direction == PCI_DMA_FROMDEVICE)
- for (i=0; i < no; i++, descr++)
- descr->next_descr_addr = descr->next->bus_addr;
-
spin_lock_init(&chain->lock);
chain->head = start_descr;
chain->tail = start_descr;
if (descr->bus_addr)
pci_unmap_single(card->pdev, descr->bus_addr,
SPIDER_NET_DESCR_SIZE,
- direction);
+ PCI_DMA_BIDIRECTIONAL);
return -ENOMEM;
}
struct spider_net_descr *descr;
descr = card->rx_chain.head;
- while (descr->next != card->rx_chain.head) {
+ do {
if (descr->skb) {
dev_kfree_skb(descr->skb);
pci_unmap_single(card->pdev, descr->buf_addr,
SPIDER_NET_MAX_FRAME,
- PCI_DMA_FROMDEVICE);
+ PCI_DMA_BIDIRECTIONAL);
}
descr = descr->next;
- }
+ } while (descr != card->rx_chain.head);
}
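The loop rewrite above fixes an off-by-one: the old while loop stopped as soon as descr->next pointed back at the head, so the final descriptor in the circular rx chain was never unmapped or had its skb freed. A minimal standalone sketch of walking a circular chain exactly once:

/* Sketch: visit every node of a circular singly linked list once. */
struct node { struct node *next; };

static void visit_all(struct node *head, void (*fn)(struct node *))
{
	struct node *n = head;

	do {
		fn(n);
		n = n->next;
	} while (n != head);
}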
/**
spider_net_prepare_tx_descr(struct spider_net_card *card,
struct sk_buff *skb)
{
- struct spider_net_descr *descr = card->tx_chain.head;
+ struct spider_net_descr *descr;
dma_addr_t buf;
+ unsigned long flags;
+ int length;
- buf = pci_map_single(card->pdev, skb->data, skb->len, PCI_DMA_TODEVICE);
+ length = skb->len;
+ if (length < ETH_ZLEN) {
+ if (skb_pad(skb, ETH_ZLEN-length))
+ return 0;
+ length = ETH_ZLEN;
+ }
+
+ buf = pci_map_single(card->pdev, skb->data, length, PCI_DMA_TODEVICE);
if (pci_dma_mapping_error(buf)) {
if (netif_msg_tx_err(card) && net_ratelimit())
pr_err("could not iommu-map packet (%p, %i). "
- "Dropping packet\n", skb->data, skb->len);
+ "Dropping packet\n", skb->data, length);
card->spider_stats.tx_iommu_map_error++;
return -ENOMEM;
}
+ spin_lock_irqsave(&card->tx_chain.lock, flags);
+ descr = card->tx_chain.head;
+ card->tx_chain.head = descr->next;
+
descr->buf_addr = buf;
- descr->buf_size = skb->len;
+ descr->buf_size = length;
descr->next_descr_addr = 0;
descr->skb = skb;
descr->data_status = 0;
descr->dmac_cmd_status =
SPIDER_NET_DESCR_CARDOWNED | SPIDER_NET_DMAC_NOCS;
+ spin_unlock_irqrestore(&card->tx_chain.lock, flags);
+
if (skb->protocol == htons(ETH_P_IP))
switch (skb->nh.iph->protocol) {
case IPPROTO_TCP:
break;
}
+	/* Chain the bus address so that the DMA engine finds this descriptor. */
descr->prev->next_descr_addr = descr->bus_addr;
+ card->netdev->trans_start = jiffies; /* set netdev watchdog timer */
return 0;
}
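One subtlety in the padding block above: in this kernel, skb_pad() returns 0 on success and frees the skb on failure, so the early `return 0' quietly drops the runt frame without leaking or double-freeing it. The idiom in isolation (helper name hypothetical):

/* Sketch: pad runt frames to the 60-byte Ethernet minimum before DMA
 * mapping; on failure skb_pad() has already freed the skb. */
static int pad_runt_frame(struct sk_buff *skb)
{
	if (skb->len < ETH_ZLEN)
		return skb_pad(skb, ETH_ZLEN - skb->len);
	return 0;
}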
-/**
- * spider_net_release_tx_descr - processes a used tx descriptor
- * @card: card structure
- * @descr: descriptor to release
- *
- * releases a used tx descriptor (unmapping, freeing of skb)
- */
-static inline void
-spider_net_release_tx_descr(struct spider_net_card *card)
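+/**
+ * spider_net_set_low_watermark - refresh the TX low-watermark descriptor
+ * @card: card structure
+ *
+ * Walks the in-flight part of the TX ring, marks the descriptor about
+ * three-quarters of the way in with the TXDESFLG interrupt flag and
+ * clears the flag on the previously marked descriptor.  Returns a
+ * rough count of the descriptors walked.
+ */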
+static int
+spider_net_set_low_watermark(struct spider_net_card *card)
{
+ unsigned long flags;
+ int status;
+	int cnt = 0;
+ int i;
struct spider_net_descr *descr = card->tx_chain.tail;
- struct sk_buff *skb;
- card->tx_chain.tail = card->tx_chain.tail->next;
- descr->dmac_cmd_status |= SPIDER_NET_DESCR_NOT_IN_USE;
+	/* Measure the length of the queue.  The measurement does not
+	 * need to be precise, so no lock is taken. */
+ while (descr != card->tx_chain.head) {
+ status = descr->dmac_cmd_status & SPIDER_NET_DESCR_NOT_IN_USE;
+ if (status == SPIDER_NET_DESCR_NOT_IN_USE)
+ break;
+ descr = descr->next;
+ cnt++;
+ }
- /* unmap the skb */
- skb = descr->skb;
- pci_unmap_single(card->pdev, descr->buf_addr, skb->len,
- PCI_DMA_TODEVICE);
- dev_kfree_skb_any(skb);
+ /* If TX queue is short, don't even bother with interrupts */
+ if (cnt < card->num_tx_desc/4)
+ return cnt;
+
+	/* Set the low watermark 3/4 of the way into the queue. */
+ descr = card->tx_chain.tail;
+ cnt = (cnt*3)/4;
+	for (i = 0; i < cnt; i++)
+ descr = descr->next;
+
+ /* Set the new watermark, clear the old watermark */
+ spin_lock_irqsave(&card->tx_chain.lock, flags);
+ descr->dmac_cmd_status |= SPIDER_NET_DESCR_TXDESFLG;
+ if (card->low_watermark && card->low_watermark != descr)
+		card->low_watermark->dmac_cmd_status &= ~SPIDER_NET_DESCR_TXDESFLG;
+ card->low_watermark = descr;
+ spin_unlock_irqrestore(&card->tx_chain.lock, flags);
+ return cnt;
}
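A hedged sketch of how the watermark plugs into the transmit path (function name and error handling hypothetical; the real call site is outside this hunk). Only the TXDESFLG-marked descriptor raises a TX-completion interrupt, so completions are reaped in batches instead of once per packet:

/* Sketch: queue a frame, then re-arm the batched completion interrupt. */
static int xmit_sketch(struct sk_buff *skb, struct net_device *netdev)
{
	struct spider_net_card *card = netdev_priv(netdev);

	if (spider_net_prepare_tx_descr(card, skb) != 0)
		return NETDEV_TX_BUSY;

	spider_net_set_low_watermark(card);
	return NETDEV_TX_OK;
}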
/**
spider_net_release_tx_chain(struct spider_net_card *card, int brutal)
{
struct spider_net_descr_chain *chain = &card->tx_chain;
+ struct spider_net_descr *descr;
+ struct sk_buff *skb;
+ u32 buf_addr;
+ unsigned long flags;
int status;