--- /dev/null
+
+-------
+PHY Abstraction Layer
+(Updated 2005-07-21)
+
+Purpose
+
+ Most network devices consist of a set of registers which provide an interface
+ to a MAC layer, which communicates with the physical connection through a
+ PHY. The PHY concerns itself with negotiating link parameters with the link
+ partner on the other side of the network connection (typically, an ethernet
+ cable), and provides a register interface to allow drivers to determine what
+ settings were chosen, and to configure what settings are allowed.
+
+ While these devices are distinct from the network devices, and conform to a
+ standard layout for the registers, it has been common practice to integrate
+ the PHY management code with the network driver. This has resulted in large
+ amounts of redundant code. Also, on embedded systems with multiple (and
+ sometimes quite different) ethernet controllers connected to the same
+ management bus, it is difficult to ensure safe use of the bus.
+
+ Since the PHYs are devices, and the management busses through which they are
+ accessed are, in fact, busses, the PHY Abstraction Layer treats them as such.
+ In doing so, it has these goals:
+
+ 1) Increase code-reuse
+ 2) Increase overall code-maintainability
+ 3) Speed development time for new network drivers, and for new systems
+
+ Basically, this layer is meant to provide an interface to PHY devices which
+ allows network driver writers to write as little code as possible, while
+ still providing a full feature set.
+
+The MDIO bus
+
+ Most network devices are connected to a PHY by means of a management bus.
+ Different devices use different busses (though some share common interfaces).
+ In order to take advantage of the PAL, each bus interface needs to be
+ registered as a distinct device.
+
+ 1) Read and write functions must be implemented. Their prototypes are:
+
+ int write(struct mii_bus *bus, int mii_id, int regnum, u16 value);
+ int read(struct mii_bus *bus, int mii_id, int regnum);
+
+ mii_id is the address on the bus for the PHY, and regnum is the register
+ number. These functions are guaranteed not to be called from interrupt
+ context, so it is safe for them to block while waiting for an interrupt to
+ signal that the operation is complete.
+
+ 2) A reset function is necessary. This is used to return the bus to an
+ initialized state.
+
+ 3) A probe function is needed. This function should set up anything the bus
+ driver needs, set up the mii_bus structure, and register with the PAL using
+ mdiobus_register. Similarly, there's a remove function to undo all of
+ that (use mdiobus_unregister).
+
+ 4) Like any driver, the device_driver structure must be configured, and init
+ and exit functions are used to register the driver.
+
+ 5) The bus must also be declared somewhere as a device, and registered.
+
+ As an example for how one driver implemented an mdio bus driver, see
+ drivers/net/gianfar_mii.c and arch/ppc/syslib/mpc85xx_devices.c
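+
+ For orientation only, here is a minimal sketch of such a bus driver. The
+ mymdio_* names, the register-access helpers, and the priv structure are
+ hypothetical stand-ins; only the mii_bus fields and mdiobus_register come
+ from the PAL:
+
+   static int mymdio_read(struct mii_bus *bus, int mii_id, int regnum)
+   {
+           struct mymdio_priv *priv = bus->priv;
+
+           /* Blocking is fine here: we are never called from
+            * interrupt context, so we may sleep until the
+            * hardware signals completion. */
+           return mymdio_do_read(priv, mii_id, regnum);
+   }
+
+   static int mymdio_write(struct mii_bus *bus, int mii_id, int regnum,
+                           u16 value)
+   {
+           struct mymdio_priv *priv = bus->priv;
+
+           return mymdio_do_write(priv, mii_id, regnum, value);
+   }
+
+   static int mymdio_probe(struct device *dev)
+   {
+           struct mii_bus *bus = &mymdio_bus;
+
+           /* Fill in the mii_bus structure, then hand it to the PAL */
+           bus->name = "mymdio";
+           bus->read = &mymdio_read;
+           bus->write = &mymdio_write;
+           bus->reset = &mymdio_reset;
+           bus->priv = &mymdio_priv_data;
+
+           return mdiobus_register(bus);
+   }
+
+ The matching remove function would call mdiobus_unregister(bus) and free
+ whatever probe allocated.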
+
+Connecting to a PHY
+
+ Sometime during startup, the network driver needs to establish a connection
+ between the PHY device and the network device. By this time, the PHY's bus
+ and its drivers must have been loaded, so that the PHY is ready for the
+ connection. At this point, there are several ways to connect to the PHY:
+
+ 1) The PAL handles everything, and only calls the network driver when
+ the link state changes, so it can react.
+
+ 2) The PAL handles everything except interrupts (usually because the
+ controller has the interrupt registers).
+
+ 3) The PAL handles everything, but checks in with the driver every second,
+ allowing the network driver to react first to any changes before the PAL
+ does.
+
+ 4) The PAL serves only as a library of functions, with the network device
+ manually calling functions to update status and configure the PHY.
+
+
+Letting the PHY Abstraction Layer do Everything
+
+ If you choose option 1 (the hope is that every driver can, though the PAL
+ remains useful to drivers that can't), connecting to the PHY is simple:
+
+ First, you need a function to react to changes in the link state. This
+ function follows this protocol:
+
+ static void adjust_link(struct net_device *dev);
+
+ Next, you need to know the device name of the PHY connected to this device.
+ The name will look something like "phy0:0", where the first number is the
+ bus id, and the second is the PHY's address on that bus.
+
+ Now, to connect, just call this function:
+
+ phydev = phy_connect(dev, phy_name, &adjust_link, flags);
+
+ phydev is a pointer to the phy_device structure which represents the PHY. If
+ phy_connect is successful, it will return the pointer. dev, here, is the
+ pointer to your net_device. Once done, this function will have started the
+ PHY's software state machine, and registered for the PHY's interrupt, if it
+ has one. The phydev structure will be populated with information about the
+ current state, though the PHY will not yet be truly operational at this
+ point.
+
+ flags is a u32 which can optionally contain phy-specific flags.
+ This is useful if the system has put hardware restrictions on
+ the PHY/controller, of which the PHY needs to be aware.
+
+ Now just make sure that phydev->supported and phydev->advertising have any
+ values pruned from them which don't make sense for your controller (a 10/100
+ controller may be connected to a gigabit capable PHY, so you would need to
+ mask off SUPPORTED_1000baseT*). See include/linux/ethtool.h for definitions
+ for these bitfields. Note that you should not SET any bits, or the PHY may
+ get put into an unsupported state.
+
+ Lastly, once the controller is ready to handle network traffic, you call
+ phy_start(phydev). This tells the PAL that you are ready, and configures the
+ PHY to connect to the network. If you want to handle your own interrupts,
+ just set phydev->irq to PHY_IGNORE_INTERRUPT before you call phy_start.
+ Similarly, if you don't want to use interrupts, set phydev->irq to PHY_POLL.
+
+ When you want to disconnect from the network (even if just briefly), you call
+ phy_stop(phydev).
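+
+ Putting this section together, a driver's open routine might look roughly
+ like the sketch below. The names mydrv_open, adjust_link, and "phy0:0" are
+ placeholders; phy_connect, phy_start, and the supported/advertising fields
+ are the PAL interfaces described above:
+
+   static void adjust_link(struct net_device *dev)
+   {
+           /* React here to changes in the PHY's link, speed,
+            * and duplex, using the phydev saved by the driver */
+   }
+
+   static int mydrv_open(struct net_device *dev)
+   {
+           struct phy_device *phydev;
+
+           phydev = phy_connect(dev, "phy0:0", &adjust_link, 0);
+           if (IS_ERR(phydev))
+                   return PTR_ERR(phydev);
+
+           /* A 10/100 controller: prune the gigabit modes */
+           phydev->supported &= ~(SUPPORTED_1000baseT_Half |
+                                  SUPPORTED_1000baseT_Full);
+           phydev->advertising = phydev->supported;
+
+           phy_start(phydev);
+           return 0;
+   }
+
+ The matching stop path would call phy_stop(phydev) and, on full teardown,
+ phy_disconnect(phydev).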
+
+Keeping Close Tabs on the PAL
+
+ It is possible that the PAL's built-in state machine needs a little help to
+ keep your network device and the PHY properly in sync. If so, you can
+ register a helper function when connecting to the PHY, which will be called
+ every second before the state machine reacts to any changes. To do this, you
+ need to manually call phy_attach() and phy_prepare_link(), and then call
+ phy_start_machine() with the second argument set to point to your special
+ handler.
+
+ Currently there are no examples of how to use this functionality, and testing
+ on it has been limited because the author does not have any drivers which use
+ it (they all use option 1). So Caveat Emptor.
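+
+ In the absence of an in-tree example, an untested sketch of the sequence
+ follows; it assumes phy_prepare_link() takes your adjust_link handler, and
+ my_fixup is the hypothetical per-second helper:
+
+   static void my_fixup(struct net_device *dev)
+   {
+           /* Called once a second, before the PAL's state
+            * machine reacts to any changes */
+   }
+
+   phydev = phy_attach(dev, "phy0:0", 0);
+   if (IS_ERR(phydev))
+           return PTR_ERR(phydev);
+
+   phy_prepare_link(phydev, &adjust_link);
+   phy_start_machine(phydev, &my_fixup);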
+
+Doing it all yourself
+
+ There's a remote chance that the PAL's built-in state machine cannot track
+ the complex interactions between the PHY and your network device. If this is
+ so, you can simply call phy_attach(), and not call phy_start_machine or
+ phy_prepare_link(). This will mean that phydev->state is entirely yours to
+ handle (phy_start and phy_stop toggle between some of the states, so you
+ might need to avoid them).
+
+ An effort has been made to make sure that useful functionality can be
+ accessed without the state-machine running, and most of these functions are
+ descended from functions which did not interact with a complex state-machine.
+ However, again, no effort has been made so far to test running without the
+ state machine, so, again, caveat emptor.
+
+ Here is a brief rundown of the functions:
+
+ int phy_read(struct phy_device *phydev, u16 regnum);
+ int phy_write(struct phy_device *phydev, u16 regnum, u16 val);
+
+ Simple read/write primitives. They invoke the bus's read/write function
+ pointers.
+
+ void phy_print_status(struct phy_device *phydev);
+
+ A convenience function to print out the PHY status neatly.
+
+ int phy_clear_interrupt(struct phy_device *phydev);
+ int phy_config_interrupt(struct phy_device *phydev, u32 interrupts);
+
+ Clear the PHY's interrupt, and configure which ones are allowed,
+ respectively. Currently only supports all on, or all off.
+
+ int phy_enable_interrupts(struct phy_device *phydev);
+ int phy_disable_interrupts(struct phy_device *phydev);
+
+ Functions which enable/disable PHY interrupts, clearing them
+ before and after, respectively.
+
+ int phy_start_interrupts(struct phy_device *phydev);
+ int phy_stop_interrupts(struct phy_device *phydev);
+
+ Requests the IRQ for the PHY interrupts, then enables them for
+ start, or disables then frees them for stop.
+
+ struct phy_device * phy_attach(struct net_device *dev, const char *phy_id,
+ u32 flags);
+
+ Attaches a network device to a particular PHY, binding the PHY to a generic
+ driver if none was found during bus initialization. Passes in
+ any phy-specific flags as needed.
+
+ int phy_start_aneg(struct phy_device *phydev);
+
+ Using variables inside the phydev structure, either configures advertising
+ and resets autonegotiation, or disables autonegotiation, and configures
+ forced settings.
+
+ static inline int phy_read_status(struct phy_device *phydev);
+
+ Fills the phydev structure with up-to-date information about the current
+ settings in the PHY.
+
+ void phy_sanitize_settings(struct phy_device *phydev);
+
+ Resolves differences between currently desired settings, and
+ supported settings for the given PHY device. Does not make
+ the changes in the hardware, though.
+
+ int phy_ethtool_sset(struct phy_device *phydev, struct ethtool_cmd *cmd);
+ int phy_ethtool_gset(struct phy_device *phydev, struct ethtool_cmd *cmd);
+
+ Ethtool convenience functions.
+
+ int phy_mii_ioctl(struct phy_device *phydev,
+ struct mii_ioctl_data *mii_data, int cmd);
+
+ The MII ioctl. Note that this function will completely screw up the state
+ machine if you write registers like BMCR, BMSR, ADVERTISE, etc. Best to
+ use this only to write registers which are not standard, and don't set off
+ a renegotiation.
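+
+ As an illustration of this library-only style, forcing 100/Full without the
+ state machine might look like the following untested sketch (the field
+ names are those of struct phy_device; the constants come from the ethtool
+ definitions):
+
+   phydev = phy_attach(dev, "phy0:0", 0);
+
+   phydev->autoneg = AUTONEG_DISABLE;
+   phydev->speed = SPEED_100;
+   phydev->duplex = DUPLEX_FULL;
+
+   phy_sanitize_settings(phydev);
+   phy_start_aneg(phydev);     /* applies the forced settings */
+
+   phy_read_status(phydev);    /* later: refresh link/speed/duplex */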
+
+
+PHY Device Drivers
+
+ With the PHY Abstraction Layer, adding support for new PHYs is
+ quite easy. In some cases, no work is required at all! However,
+ many PHYs require a little hand-holding to get up-and-running.
+
+Generic PHY driver
+
+ If the desired PHY doesn't have any errata, quirks, or special
+ features you want to support, then it may be best to not add
+ support, and let the PHY Abstraction Layer's Generic PHY Driver
+ do all of the work.
+
+Writing a PHY driver
+
+ If you do need to write a PHY driver, the first thing to do is
+ make sure it can be matched with an appropriate PHY device.
+ This is done during bus initialization by reading the device's
+ UID (stored in registers 2 and 3), ANDing it with each driver's
+ phy_id_mask, and comparing the result against that driver's
+ phy_id field. The driver also needs a name. Here's an example:
+
+ static struct phy_driver dm9161_driver = {
+ .phy_id = 0x0181b880,
+ .name = "Davicom DM9161E",
+ .phy_id_mask = 0x0ffffff0,
+ ...
+ };
+
+ Next, you need to specify what features (speed, duplex, autoneg,
+ etc) your PHY device and driver support. Most PHYs support
+ PHY_BASIC_FEATURES, but you can look in include/linux/mii.h for other
+ features.
+
+ Each driver consists of a number of function pointers:
+
+ config_init: configures PHY into a sane state after a reset.
+ For instance, a Davicom PHY requires descrambling disabled.
+ probe: Does any setup needed by the driver
+ suspend/resume: power management
+ config_aneg: Changes the speed/duplex/negotiation settings
+ read_status: Reads the current speed/duplex/negotiation settings
+ ack_interrupt: Clear a pending interrupt
+ config_intr: Enable or disable interrupts
+ remove: Does any driver take-down
+
+ Of these, only config_aneg and read_status are required to be
+ assigned by the driver code. The rest are optional. Also, it is
+ preferred to use the generic phy driver's versions of these two
+ functions if at all possible: genphy_read_status and
+ genphy_config_aneg. If this is not possible, it is likely that
+ you only need to perform some actions before and after invoking
+ these functions, and so your functions will wrap the generic
+ ones.
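+
+ For instance, a hypothetical driver that only needs to poke one vendor
+ register before renegotiating could wrap genphy_config_aneg like this (the
+ IDs, the register address 0x12, and its value are made up; the structure
+ layout follows the Davicom example above):
+
+   static int mydrv_config_aneg(struct phy_device *phydev)
+   {
+           /* Hypothetical vendor quirk before renegotiation */
+           phy_write(phydev, 0x12, 0x0001);
+
+           return genphy_config_aneg(phydev);
+   }
+
+   static struct phy_driver mydrv_driver = {
+           .phy_id       = 0x01234560,
+           .phy_id_mask  = 0x0ffffff0,
+           .name         = "Hypothetical PHY",
+           .features     = PHY_BASIC_FEATURES,
+           .config_aneg  = &mydrv_config_aneg,
+           .read_status  = &genphy_read_status,
+           .driver       = { .owner = THIS_MODULE },
+   };
+
+ The driver would then be registered from its module init function with
+ phy_driver_register(&mydrv_driver), and removed with
+ phy_driver_unregister(&mydrv_driver).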
+
+ Feel free to look at the Marvell, Cicada, and Davicom drivers in
+ drivers/net/phy/ for examples (the lxt and qsemi drivers have
+ not been tested as of this writing).
W: http://www.simtec.co.uk/products/EB2410ITX/
S: Supported
+SIS 190 ETHERNET DRIVER
+P: Francois Romieu
+M: romieu@fr.zoreil.com
+L: netdev@vger.kernel.org
+S: Maintained
+
SIS 5513 IDE CONTROLLER DRIVER
P: Lionel Bouton
M: Lionel.Bouton@inet6.fr
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 13
-EXTRAVERSION =-rc7
-NAME=Woozy Numbat
+EXTRAVERSION =
+NAME=Affluent Albatross
# *DOCUMENTATION*
# To see a list of typical targets execute "make help"
if (ka->sa.sa_flags & SA_RESETHAND)
ka->sa.sa_handler = SIG_DFL;
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
}
static inline void
and the Battery Powered Linux mini-HOWTO, available from
<http://www.tldp.org/docs.html#howto>.
- Note that, even if you say N here, Linux on the x86 architecture
- will issue the hlt instruction if nothing is to be done, thereby
- sending the processor to sleep and saving power.
-
config APM
tristate "Advanced Power Management Emulation"
depends on PM
battery status information, and user-space programs will receive
notification of APM "events" (e.g. battery status change).
- If you select "Y" here, you can disable actual use of the APM
- BIOS by passing the "apm=off" option to the kernel at boot time.
-
- Note that the APM support is almost completely disabled for
- machines with more than one CPU.
-
In order to use APM, you will need supporting software. For location
and more information, read <file:Documentation/pm.txt> and the
Battery Powered Linux mini-HOWTO, available from
manpage ("man 8 hdparm") for that), and it doesn't turn off
VESA-compliant "green" monitors.
- This driver does not support the TI 4000M TravelMate and the ACER
- 486/DX4/75 because they don't have compliant BIOSes. Many "green"
- desktop machines also don't have compliant BIOSes, and this driver
- may cause those machines to panic during the boot phase.
-
Generally, if you don't have a battery in your machine, there isn't
much point in using this driver and you should say N. If you get
random kernel OOPSes or reboots that don't seem to be related to
anything, try disabling/enabling this option (or disabling/enabling
APM in your BIOS).
- Some other things you should try when experiencing seemingly random,
- "weird" problems:
-
- 1) make sure that you have enough swap space and that it is
- enabled.
- 2) pass the "no-hlt" option to the kernel
- 3) switch on floating point emulation in the kernel and pass
- the "no387" option to the kernel
- 4) pass the "floppy=nodma" option to the kernel
- 5) pass the "mem=4M" option to the kernel (thereby disabling
- all but the first 4 MB of RAM)
- 6) make sure that the CPU is not over clocked.
- 7) read the sig11 FAQ at <http://www.bitwizard.nl/sig11/>
- 8) disable the cache from your BIOS settings
- 9) install a fan for the video card or exchange video RAM
- 10) install a better fan for the CPU
- 11) exchange RAM chips
- 12) exchange the motherboard.
-
- To compile this driver as a module, choose M here: the
- module will be called apm.
-
endmenu
source "net/Kconfig"
source "drivers/misc/Kconfig"
+source "drivers/mfd/Kconfig"
+
source "drivers/media/Kconfig"
source "drivers/video/Kconfig"
config ICST525
bool
+config ARM_GIC
+ bool
+
config ICST307
bool
obj-y += rtctime.o
obj-$(CONFIG_ARM_AMBA) += amba.o
+obj-$(CONFIG_ARM_GIC) += gic.o
obj-$(CONFIG_ICST525) += icst525.o
obj-$(CONFIG_ICST307) += icst307.o
obj-$(CONFIG_SA1111) += sa1111.o
--- /dev/null
+/*
+ * linux/arch/arm/common/gic.c
+ *
+ * Copyright (C) 2002 ARM Limited, All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Interrupt architecture for the GIC:
+ *
+ * o There is one Interrupt Distributor, which receives interrupts
+ * from system devices and sends them to the Interrupt Controllers.
+ *
+ * o There is one CPU Interface per CPU, which sends interrupts sent
+ * by the Distributor, and interrupts generated locally, to the
+ * associated CPU.
+ *
+ * Note that IRQs 0-31 are special - they are local to each CPU.
+ * As such, the enable set/clear, pending set/clear and active bit
+ * registers are banked per-cpu for these sources.
+ */
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/smp.h>
+
+#include <asm/irq.h>
+#include <asm/io.h>
+#include <asm/mach/irq.h>
+#include <asm/hardware/gic.h>
+
+static void __iomem *gic_dist_base;
+static void __iomem *gic_cpu_base;
+
+/*
+ * Routines to acknowledge, disable and enable interrupts
+ *
+ * Linux assumes that when we're done with an interrupt we need to
+ * unmask it, in the same way we need to unmask an interrupt when
+ * we first enable it.
+ *
+ * The GIC has a separate notion of "end of interrupt" to re-enable
+ * an interrupt after handling, in order to support hardware
+ * prioritisation.
+ *
+ * We can make the GIC behave in the way that Linux expects by making
+ * our "acknowledge" routine disable the interrupt, then mark it as
+ * complete.
+ */
+static void gic_ack_irq(unsigned int irq)
+{
+ u32 mask = 1 << (irq % 32);
+ writel(mask, gic_dist_base + GIC_DIST_ENABLE_CLEAR + (irq / 32) * 4);
+ writel(irq, gic_cpu_base + GIC_CPU_EOI);
+}
+
+static void gic_mask_irq(unsigned int irq)
+{
+ u32 mask = 1 << (irq % 32);
+ writel(mask, gic_dist_base + GIC_DIST_ENABLE_CLEAR + (irq / 32) * 4);
+}
+
+static void gic_unmask_irq(unsigned int irq)
+{
+ u32 mask = 1 << (irq % 32);
+ writel(mask, gic_dist_base + GIC_DIST_ENABLE_SET + (irq / 32) * 4);
+}
+
+static void gic_set_cpu(struct irqdesc *desc, unsigned int irq, unsigned int cpu)
+{
+ void __iomem *reg = gic_dist_base + GIC_DIST_TARGET + (irq & ~3);
+ unsigned int shift = (irq % 4) * 8;
+ u32 val;
+
+ val = readl(reg) & ~(0xff << shift);
+ val |= 1 << (cpu + shift);
+ writel(val, reg);
+}
+
+static struct irqchip gic_chip = {
+ .ack = gic_ack_irq,
+ .mask = gic_mask_irq,
+ .unmask = gic_unmask_irq,
+#ifdef CONFIG_SMP
+ .set_cpu = gic_set_cpu,
+#endif
+};
+
+void __init gic_dist_init(void __iomem *base)
+{
+ unsigned int max_irq, i;
+ u32 cpumask = 1 << smp_processor_id();
+
+ cpumask |= cpumask << 8;
+ cpumask |= cpumask << 16;
+
+ gic_dist_base = base;
+
+ writel(0, base + GIC_DIST_CTRL);
+
+ /*
+ * Find out how many interrupts are supported.
+ */
+ max_irq = readl(base + GIC_DIST_CTR) & 0x1f;
+ max_irq = (max_irq + 1) * 32;
+
+ /*
+ * The GIC only supports up to 1020 interrupt sources.
+ * Limit this to either the architected maximum, or the
+ * platform maximum.
+ */
+ if (max_irq > max(1020, NR_IRQS))
+ max_irq = max(1020, NR_IRQS);
+
+ /*
+ * Set all global interrupts to be level triggered, active low.
+ */
+ for (i = 32; i < max_irq; i += 16)
+ writel(0, base + GIC_DIST_CONFIG + i * 4 / 16);
+
+ /*
+ * Set all global interrupts to this CPU only.
+ */
+ for (i = 32; i < max_irq; i += 4)
+ writel(cpumask, base + GIC_DIST_TARGET + i * 4 / 4);
+
+ /*
+ * Set priority on all interrupts.
+ */
+ for (i = 0; i < max_irq; i += 4)
+ writel(0xa0a0a0a0, base + GIC_DIST_PRI + i * 4 / 4);
+
+ /*
+ * Disable all interrupts.
+ */
+ for (i = 0; i < max_irq; i += 32)
+ writel(0xffffffff, base + GIC_DIST_ENABLE_CLEAR + i * 4 / 32);
+
+ /*
+ * Setup the Linux IRQ subsystem.
+ */
+ for (i = 29; i < max_irq; i++) {
+ set_irq_chip(i, &gic_chip);
+ set_irq_handler(i, do_level_IRQ);
+ set_irq_flags(i, IRQF_VALID | IRQF_PROBE);
+ }
+
+ writel(1, base + GIC_DIST_CTRL);
+}
+
+void __cpuinit gic_cpu_init(void __iomem *base)
+{
+ gic_cpu_base = base;
+ writel(0xf0, base + GIC_CPU_PRIMASK);
+ writel(1, base + GIC_CPU_CTRL);
+}
+
+#ifdef CONFIG_SMP
+void gic_raise_softirq(cpumask_t cpumask, unsigned int irq)
+{
+ unsigned long map = *cpus_addr(cpumask);
+
+ writel(map << 16 | irq, gic_dist_base + GIC_DIST_SOFTINT);
+}
+#endif
/*
* Block the signal if we were unsuccessful.
*/
- if (ret != 0 || !(ka->sa.sa_flags & SA_NODEFER)) {
+ if (ret != 0) {
spin_lock_irq(&tsk->sighand->siglock);
sigorsets(&tsk->blocked, &tsk->blocked,
&ka->sa.sa_mask);
- sigaddset(&tsk->blocked, sig);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
+ sigaddset(&tsk->blocked, sig);
recalc_sigpending();
spin_unlock_irq(&tsk->sighand->siglock);
}
static struct resource coyote_flash_resource = {
.start = COYOTE_FLASH_BASE,
- .end = COYOTE_FLASH_BASE + COYOTE_FLASH_SIZE,
+ .end = COYOTE_FLASH_BASE + COYOTE_FLASH_SIZE - 1,
.flags = IORESOURCE_MEM,
};
static struct resource gtwx5715_flash_resource = {
.start = GTWX5715_FLASH_BASE,
- .end = GTWX5715_FLASH_BASE + GTWX5715_FLASH_SIZE,
+ .end = GTWX5715_FLASH_BASE + GTWX5715_FLASH_SIZE - 1,
.flags = IORESOURCE_MEM,
};
static struct resource ixdp425_flash_resource = {
.start = IXDP425_FLASH_BASE,
- .end = IXDP425_FLASH_BASE + IXDP425_FLASH_SIZE,
+ .end = IXDP425_FLASH_BASE + IXDP425_FLASH_SIZE - 1,
.flags = IORESOURCE_MEM,
};
#include <asm/mach/map.h>
#include <asm/mach/serial_sa1100.h>
#include <asm/arch/assabet.h>
+#include <asm/arch/mcp.h>
#include "generic.h"
.set_speed = assabet_irda_set_speed,
};
+static struct mcp_plat_data assabet_mcp_data = {
+ .mccr0 = MCCR0_ADM,
+ .sclk_rate = 11981000,
+};
+
static void __init assabet_init(void)
{
/*
sa11x0_set_flash_data(&assabet_flash_data, assabet_flash_resources,
ARRAY_SIZE(assabet_flash_resources));
sa11x0_set_irda_data(&assabet_irda_data);
+ sa11x0_set_mcp_data(&assabet_mcp_data);
}
/*
#include <asm/mach/serial_sa1100.h>
#include <asm/arch/cerf.h>
+#include <asm/arch/mcp.h>
#include "generic.h"
static struct resource cerfuart2_resources[] = {
GPDR |= CERF_GPIO_CF_RESET;
}
+static struct mcp_plat_data cerf_mcp_data = {
+ .mccr0 = MCCR0_ADM,
+ .sclk_rate = 11981000,
+};
+
static void __init cerf_init(void)
{
platform_add_devices(cerf_devices, ARRAY_SIZE(cerf_devices));
sa11x0_set_flash_data(&cerf_flash_data, &cerf_flash_resource, 1);
+ sa11x0_set_mcp_data(&cerf_mcp_data);
}
MACHINE_START(CERF, "Intrinsyc CerfBoard/CerfCube")
.resource = sa11x0mcp_resources,
};
+void sa11x0_set_mcp_data(struct mcp_plat_data *data)
+{
+ sa11x0mcp_device.dev.platform_data = data;
+}
+
static struct resource sa11x0ssp_resources[] = {
[0] = {
.start = 0x80070000,
extern void sa11x0_set_flash_data(struct flash_platform_data *flash,
struct resource *res, int nr);
+struct sa11x0_ssp_plat_ops;
+extern void sa11x0_set_ssp_data(struct sa11x0_ssp_plat_ops *ops);
+
struct irda_platform_data;
void sa11x0_set_irda_data(struct irda_platform_data *irda);
#include <asm/mach/arch.h>
#include <asm/mach/map.h>
#include <asm/mach/serial_sa1100.h>
+#include <asm/arch/mcp.h>
#include "generic.h"
#warning "include/asm/arch-sa1100/ide.h needs fixing for lart"
+static struct mcp_plat_data lart_mcp_data = {
+ .mccr0 = MCCR0_ADM,
+ .sclk_rate = 11981000,
+};
+
+static void __init lart_init(void)
+{
+ sa11x0_set_mcp_data(&lart_mcp_data);
+}
+
static struct map_desc lart_io_desc[] __initdata = {
/* virtual physical length type */
{ 0xe8000000, 0x00000000, 0x00400000, MT_DEVICE }, /* main flash memory */
.boot_params = 0xc0000100,
.map_io = lart_map_io,
.init_irq = sa1100_init_irq,
+ .init_machine = lart_init,
.timer = &sa1100_timer,
MACHINE_END
#include <asm/mach/flash.h>
#include <asm/mach/map.h>
#include <asm/mach/serial_sa1100.h>
+#include <asm/arch/mcp.h>
#include <asm/arch/shannon.h>
#include "generic.h"
.flags = IORESOURCE_MEM,
};
+static struct mcp_plat_data shannon_mcp_data = {
+ .mccr0 = MCCR0_ADM,
+ .sclk_rate = 11981000,
+};
+
static void __init shannon_init(void)
{
sa11x0_set_flash_data(&shannon_flash_data, &shannon_flash_resource, 1);
+ sa11x0_set_mcp_data(&shannon_mcp_data);
}
static void __init shannon_map_io(void)
#include <asm/mach/flash.h>
#include <asm/mach/map.h>
#include <asm/mach/serial_sa1100.h>
+#include <asm/arch/mcp.h>
#include <asm/arch/simpad.h>
#include <linux/serial_core.h>
}
};
+static struct mcp_plat_data simpad_mcp_data = {
+ .mccr0 = MCCR0_ADM,
+ .sclk_rate = 11981000,
+};
+
static void __init simpad_map_io(void)
sa11x0_set_flash_data(&simpad_flash_data, simpad_flash_resources,
ARRAY_SIZE(simpad_flash_resources));
+ sa11x0_set_mcp_data(&simpad_mcp_data);
}
static void simpad_power_off(void)
if (ka->sa.sa_flags & SA_ONESHOT)
ka->sa.sa_handler = SIG_DFL;
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&tsk->sighand->siglock);
- sigorsets(&tsk->blocked, &tsk->blocked,
- &ka->sa.sa_mask);
+ spin_lock_irq(&tsk->sighand->siglock);
+ sigorsets(&tsk->blocked, &tsk->blocked,
+ &ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&tsk->blocked, sig);
- recalc_sigpending();
- spin_unlock_irq(&tsk->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&tsk->sighand->siglock);
return;
}
if (ka->sa.sa_flags & SA_ONESHOT)
ka->sa.sa_handler = SIG_DFL;
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
}
/*
if (ka->sa.sa_flags & SA_ONESHOT)
ka->sa.sa_handler = SIG_DFL;
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
}
/*
else
setup_frame(sig, ka, oldset, regs);
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked, sig);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
} /* end handle_signal() */
/*****************************************************************************/
else
setup_frame(sig, ka, oldset, regs);
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
}
/*
else
ret = setup_frame(sig, ka, oldset, regs);
- if (ret && !(ka->sa.sa_flags & SA_NODEFER)) {
+ if (ret) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
- sigaddset(&current->blocked,sig);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
+ sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
if (!setup_frame(sig, ka, info, oldset, scr))
return 0;
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- {
- sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
- sigaddset(&current->blocked, sig);
- recalc_sigpending();
- }
- spin_unlock_irq(&current->sighand->siglock);
- }
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
+ sigaddset(&current->blocked, sig);
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
return 1;
}
res->start = region->start + offset;
res->end = region->end + offset;
}
+EXPORT_SYMBOL(pcibios_bus_to_resource);
static int __devinit is_valid_resource(struct pci_dev *dev, int idx)
{
/* Set up the stack frame */
setup_rt_frame(sig, ka, info, oldset, regs);
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
}
/*
if (ka->sa.sa_flags & SA_ONESHOT)
ka->sa.sa_handler = SIG_DFL;
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
}
/*
else
setup_irix_frame(ka, regs, sig, oldset);
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
}
asmlinkage int do_irix_signal(sigset_t *oldset, struct pt_regs *regs)
setup_frame(ka, regs, sig, oldset);
#endif
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
}
extern int do_signal32(sigset_t *oldset, struct pt_regs *regs);
else
setup_frame(ka, regs, sig, oldset);
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
}
int do_signal32(sigset_t *oldset, struct pt_regs *regs)
if (!setup_rt_frame(sig, ka, info, oldset, regs, in_syscall))
return 0;
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
return 1;
}
else
handle_signal(signr, &ka, &info, oldset, regs, newsp);
- if (!(ka.sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka.sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka.sa.sa_mask);
+ if (!(ka.sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked, signr);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
return 1;
}
ppc_md.find_end_of_memory = m8xx_find_end_of_memory;
ppc_md.setup_io_mappings = m8xx_map_io;
-#if defined(CONFIG_BLK_DEV_IDE) || defined(CONFIG_BLK_DEV_IDE_MODULE)
+#if defined(CONFIG_BLK_DEV_MPC8xx_IDE)
m8xx_ide_init();
#endif
}
local_irq_disable();
while (1) ;
}
+/* Used by the G5 thermal driver */
+EXPORT_SYMBOL_GPL(machine_power_off);
void machine_halt(void)
{
/* Set up Signal Frame */
ret = setup_rt_frame(sig, ka, info, oldset, regs);
- if (ret && !(ka->sa.sa_flags & SA_NODEFER)) {
+ if (ret) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
- sigaddset(&current->blocked,sig);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
+ sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
else
ret = handle_signal32(signr, &ka, &info, oldset, regs, newsp);
- if (ret && !(ka.sa.sa_flags & SA_NODEFER)) {
+ if (ret) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked, &current->blocked,
&ka.sa.sa_mask);
- sigaddset(&current->blocked, signr);
+ if (!(ka.sa.sa_flags & SA_NODEFER))
+ sigaddset(&current->blocked, signr);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
else
setup_frame32(sig, ka, oldset, regs);
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
}
else
setup_frame(sig, ka, oldset, regs);
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
}
/*
if (ka->sa.sa_flags & SA_ONESHOT)
ka->sa.sa_handler = SIG_DFL;
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
}
/*
else
setup_frame(sig, ka, oldset, regs);
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
}
/*
else
setup_frame(&ka->sa, regs, signr, oldset, info);
}
- if (!(ka->sa.sa_flags & SA_NOMASK)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NOMASK))
sigaddset(&current->blocked, signr);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
}
static inline void syscall_restart(unsigned long orig_i0, struct pt_regs *regs,
pbm->parent->resource_adjust(pdev, res, root);
}
+EXPORT_SYMBOL(pcibios_bus_to_resource);
char * __init pcibios_setup(char *str)
{
{
setup_rt_frame(ka, regs, signr, oldset,
(ka->sa.sa_flags & SA_SIGINFO) ? info : NULL);
- if (!(ka->sa.sa_flags & SA_NOMASK)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NOMASK))
sigaddset(&current->blocked,signr);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
}
static inline void syscall_restart(unsigned long orig_i0, struct pt_regs *regs,
else
setup_frame32(&ka->sa, regs, signr, oldset, info);
}
- if (!(ka->sa.sa_flags & SA_NOMASK)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NOMASK))
sigaddset(&current->blocked,signr);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
}
static inline void syscall_restart32(unsigned long orig_i0, struct pt_regs *regs,
*
*/
-#include <linux/types.h>
-#include <linux/kdev_t.h>
-#include <linux/time.h>
-#include <linux/devfs_fs_kernel.h>
+#include <linux/init.h>
#include <linux/module.h>
#include <linux/mm.h>
-#include <linux/slab.h>
-#include <linux/init.h>
-#include <linux/smp_lock.h>
#include <linux/miscdevice.h>
#include <asm/uaccess.h>
-#include <asm/irq.h>
-#include <asm/pgtable.h>
#include "mem_user.h"
#include "user_util.h"
static char *v_buf = NULL;
static ssize_t
-mmapper_read(struct file *file, char *buf, size_t count, loff_t *ppos)
+mmapper_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
{
- if(*ppos > mmapper_size)
- return -EINVAL;
-
- if(count + *ppos > mmapper_size)
- count = count + *ppos - mmapper_size;
-
- if(count < 0)
- return -EINVAL;
-
- copy_to_user(buf,&v_buf[*ppos],count);
-
- return count;
+ return simple_read_from_buffer(buf, count, ppos, v_buf, mmapper_size);
}
static ssize_t
-mmapper_write(struct file *file, const char *buf, size_t count, loff_t *ppos)
+mmapper_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos)
{
- if(*ppos > mmapper_size)
+ if (*ppos > mmapper_size)
return -EINVAL;
- if(count + *ppos > mmapper_size)
- count = count + *ppos - mmapper_size;
-
- if(count < 0)
- return -EINVAL;
+ if (count > mmapper_size - *ppos)
+ count = mmapper_size - *ppos;
- copy_from_user(&v_buf[*ppos],buf,count);
+ if (copy_from_user(&v_buf[*ppos], buf, count))
+ return -EFAULT;
return count;
}
int ret = -EINVAL;
int size;
- lock_kernel();
if (vma->vm_pgoff != 0)
goto out;
goto out;
ret = 0;
out:
- unlock_kernel();
return ret;
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
force_sigsegv(signr, current);
- }
- else if(!(ka->sa.sa_flags & SA_NODEFER)){
+ } else {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked, &current->blocked,
&ka->sa.sa_mask);
- sigaddset(&current->blocked, signr);
+ if(!(ka->sa.sa_flags & SA_NODEFER))
+ sigaddset(&current->blocked, signr);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
else
setup_frame(sig, ka, oldset, regs);
- if (!(ka->sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
}
/*
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.13-rc3
-# Fri Jul 22 16:47:31 2005
+# Linux kernel version: 2.6.13-rc6-git3
+# Fri Aug 12 16:40:34 2005
#
CONFIG_X86_64=y
CONFIG_64BIT=y
# Network testing
#
# CONFIG_NET_PKTGEN is not set
-CONFIG_NETPOLL=y
-# CONFIG_NETPOLL_RX is not set
-# CONFIG_NETPOLL_TRAP is not set
-CONFIG_NET_POLL_CONTROLLER=y
# CONFIG_HAMRADIO is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
CONFIG_SCSI_SATA=y
+# CONFIG_SCSI_SATA_AHCI is not set
# CONFIG_SCSI_SATA_SVW is not set
CONFIG_SCSI_ATA_PIIX=y
# CONFIG_SCSI_SATA_NV is not set
# CONFIG_SCSI_QLA2300 is not set
# CONFIG_SCSI_QLA2322 is not set
# CONFIG_SCSI_QLA6312 is not set
+# CONFIG_SCSI_QLA24XX is not set
# CONFIG_SCSI_LPFC is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_DC390T is not set
#
# Fusion MPT device support
#
-# CONFIG_FUSION is not set
-# CONFIG_FUSION_SPI is not set
+CONFIG_FUSION=y
+CONFIG_FUSION_SPI=y
# CONFIG_FUSION_FC is not set
+CONFIG_FUSION_MAX_SGE=128
+# CONFIG_FUSION_CTL is not set
#
# IEEE 1394 (FireWire) support
# CONFIG_ACENIC is not set
# CONFIG_DL2K is not set
CONFIG_E1000=y
+# CONFIG_E1000_NAPI is not set
# CONFIG_NS83820 is not set
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
# CONFIG_NET_FC is not set
# CONFIG_SHAPER is not set
CONFIG_NETCONSOLE=y
+CONFIG_NETPOLL=y
+# CONFIG_NETPOLL_RX is not set
+# CONFIG_NETPOLL_TRAP is not set
+CONFIG_NET_POLL_CONTROLLER=y
#
# ISDN subsystem
}
/*
+ * Compute how much memory is missing in a range.
+ * Unlike the other functions in this file the arguments are in page numbers.
+ */
+unsigned long __init
+e820_hole_size(unsigned long start_pfn, unsigned long end_pfn)
+{
+ unsigned long ram = 0;
+ unsigned long start = start_pfn << PAGE_SHIFT;
+ unsigned long end = end_pfn << PAGE_SHIFT;
+ int i;
+ for (i = 0; i < e820.nr_map; i++) {
+ struct e820entry *ei = &e820.map[i];
+ unsigned long last, addr;
+
+ if (ei->type != E820_RAM ||
+ ei->addr+ei->size <= start ||
+ ei->addr >= end)
+ continue;
+
+ addr = round_up(ei->addr, PAGE_SIZE);
+ if (addr < start)
+ addr = start;
+
+ last = round_down(ei->addr + ei->size, PAGE_SIZE);
+ if (last >= end)
+ last = end;
+
+ if (last > addr)
+ ram += last - addr;
+ }
+ return ((end - start) - ram) >> PAGE_SHIFT;
+}
+
+/*
* Mark e820 reserved areas as busy for the resource manager.
*/
void __init e820_reserve_resources(void)
#endif
ret = setup_rt_frame(sig, ka, info, oldset, regs);
- if (ret && !(ka->sa.sa_flags & SA_NODEFER)) {
+ if (ret) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
- sigaddset(&current->blocked,sig);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
+ sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
void __init paging_init(void)
{
{
- unsigned long zones_size[MAX_NR_ZONES] = {0, 0, 0};
+ unsigned long zones_size[MAX_NR_ZONES];
+ unsigned long holes[MAX_NR_ZONES];
unsigned int max_dma;
+ memset(zones_size, 0, sizeof(zones_size));
+ memset(holes, 0, sizeof(holes));
+
max_dma = virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT;
- if (end_pfn < max_dma)
+ if (end_pfn < max_dma) {
zones_size[ZONE_DMA] = end_pfn;
- else {
+ holes[ZONE_DMA] = e820_hole_size(0, end_pfn);
+ } else {
zones_size[ZONE_DMA] = max_dma;
+ holes[ZONE_DMA] = e820_hole_size(0, max_dma);
zones_size[ZONE_NORMAL] = end_pfn - max_dma;
+ holes[ZONE_NORMAL] = e820_hole_size(max_dma, end_pfn);
}
- free_area_init(zones_size);
+ free_area_init_node(0, NODE_DATA(0), zones_size,
+ __pa(PAGE_OFFSET) >> PAGE_SHIFT, holes);
}
return;
}
{
unsigned long start_pfn, end_pfn;
unsigned long zones[MAX_NR_ZONES];
+ unsigned long holes[MAX_NR_ZONES];
unsigned long dma_end_pfn;
memset(zones, 0, sizeof(unsigned long) * MAX_NR_ZONES);
+ memset(holes, 0, sizeof(unsigned long) * MAX_NR_ZONES);
start_pfn = node_start_pfn(nodeid);
end_pfn = node_end_pfn(nodeid);
dma_end_pfn = __pa(MAX_DMA_ADDRESS) >> PAGE_SHIFT;
if (start_pfn < dma_end_pfn) {
zones[ZONE_DMA] = dma_end_pfn - start_pfn;
+ holes[ZONE_DMA] = e820_hole_size(start_pfn, dma_end_pfn);
zones[ZONE_NORMAL] = end_pfn - dma_end_pfn;
+ holes[ZONE_NORMAL] = e820_hole_size(dma_end_pfn, end_pfn);
+
} else {
zones[ZONE_NORMAL] = end_pfn - start_pfn;
+ holes[ZONE_NORMAL] = e820_hole_size(start_pfn, end_pfn);
}
free_area_init_node(nodeid, NODE_DATA(nodeid), zones,
- start_pfn, NULL);
+ start_pfn, holes);
}
void __init numa_init_array(void)
if (ka.sa.sa_flags & SA_ONESHOT)
ka.sa.sa_handler = SIG_DFL;
- if (!(ka.sa.sa_flags & SA_NODEFER)) {
- spin_lock_irq(&current->sighand->siglock);
- sigorsets(&current->blocked, &current->blocked, &ka.sa.sa_mask);
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked, &current->blocked, &ka.sa.sa_mask);
+ if (!(ka.sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked, signr);
- recalc_sigpending();
- spin_unlock_irq(&current->sighand->siglock);
- }
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
return 1;
}
source "drivers/misc/Kconfig"
+source "drivers/mfd/Kconfig"
+
source "drivers/media/Kconfig"
source "drivers/video/Kconfig"
obj-$(CONFIG_SERIO) += input/serio/
obj-y += serial/
obj-$(CONFIG_PARPORT) += parport/
-obj-y += base/ block/ misc/ net/ media/
+obj-y += base/ block/ misc/ mfd/ net/ media/
obj-$(CONFIG_NUBUS) += nubus/
obj-$(CONFIG_ATM) += atm/
obj-$(CONFIG_PPC_PMAC) += macintosh/
static int acpi_shutdown(struct sys_device *x)
{
- return acpi_sleep_prepare(ACPI_STATE_S5);
+ if (system_state == SYSTEM_POWER_OFF) {
+ /* Prepare if we are going to power off the system */
+ return acpi_sleep_prepare(ACPI_STATE_S5);
+ }
+ return 0;
}
static struct sysdev_class acpi_sysclass = {
/*
* disable queueing at the driver/hardware level
*/
-static int cfq_max_depth = 1;
+static int cfq_max_depth = 2;
/*
* for the hash of cfqq inside the cfqd
return crq2;
if (crq2 == NULL)
return crq1;
- if (cfq_crq_requeued(crq1))
+
+ if (cfq_crq_requeued(crq1) && !cfq_crq_requeued(crq2))
return crq1;
- if (cfq_crq_requeued(crq2))
+ else if (cfq_crq_requeued(crq2) && !cfq_crq_requeued(crq1))
+ return crq2;
+
+ if (cfq_crq_is_sync(crq1) && !cfq_crq_is_sync(crq2))
+ return crq1;
+ else if (cfq_crq_is_sync(crq2) && !cfq_crq_is_sync(crq1))
return crq2;
s1 = crq1->request->sector;
cfq_crq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
struct cfq_rq *crq)
{
- const int sync = cfq_crq_is_sync(crq);
+ struct cfq_io_context *cic;
cfqq->next_crq = cfq_choose_req(cfqd, cfqq->next_crq, crq);
- if (sync) {
- struct cfq_io_context *cic = crq->io_context;
+ /*
+ * we never wait for an async request and we don't allow preemption
+ * of an async request. so just return early
+ */
+ if (!cfq_crq_is_sync(crq))
+ return;
- cfq_update_io_thinktime(cfqd, cic);
- cfq_update_idle_window(cfqd, cfqq, cic);
+ cic = crq->io_context;
- cic->last_queue = jiffies;
- }
+ cfq_update_io_thinktime(cfqd, cic);
+ cfq_update_idle_window(cfqd, cfqq, cic);
+
+ cic->last_queue = jiffies;
if (cfqq == cfqd->active_queue) {
/*
int ret = 0;
acquire_console_sem();
- if (tty->count == 1) {
+ if (tty->driver_data == NULL) {
ret = vc_allocate(currcons);
if (ret == 0) {
struct vc_data *vc = vc_cons[currcons].d;
int adm1026_detach_client(struct i2c_client *client)
{
i2c_detach_client(client);
- kfree(client);
+ kfree(i2c_get_clientdata(client));
return 0;
}
/* Error out and cleanup code */
exitfree:
- kfree(new_client);
+ kfree(data);
exit:
return err;
}
return 0;
exit_free:
- kfree(new_client);
+ kfree(data);
exit:
return err;
}
if ((ret = i2c_detach_client(client)) != 0) {
return ret;
}
- kfree(client);
+ kfree(i2c_get_clientdata(client));
return 0;
}
return 0;
exit_free:
- kfree(new_client);
+ kfree(data);
exit:
return err;
}
"experience to the module author.\n");
/* Supported value: 2 (clears the status) */
- fscpos_write_value(client, FSCPOS_REG_TEMP_STATE[nr], 2);
+ fscpos_write_value(client, FSCPOS_REG_TEMP_STATE[nr - 1], 2);
return count;
}
return 0;
error_free:
- kfree(new_client);
+ kfree(data);
error_release:
release_region(addr, SMSC_EXTENT);
return err;
return 0;
error_free:
- kfree(new_client);
+ kfree(data);
error_release:
release_region(address, SMSC_EXTENT);
return err;
int num_ports, i;
spin_lock_init(&ohci->phy_reg_lock);
- spin_lock_init(&ohci->event_lock);
/* Put some defaults to these undefined bus options */
buf = reg_read(ohci, OHCI1394_BusOptions);
/* We hopefully don't have to pre-allocate IT DMA like we did
* for IR DMA above. Allocate it on-demand and mark inactive. */
ohci->it_legacy_context.ohci = NULL;
+ spin_lock_init(&ohci->event_lock);
+ /*
+ * interrupts are disabled, all right, but... due to SA_SHIRQ we
+ * might get called anyway. We'll see no event, of course, but
+ * we need to get to that "no event", so enough should be initialized
+ * by that point.
+ */
if (request_irq(dev->irq, ohci_irq_handler, SA_SHIRQ,
OHCI1394_DRIVER_NAME, ohci))
FAIL(-ENOMEM, "Failed to allocate shared interrupt %d", dev->irq);
-EXTRA_CFLAGS += -Idrivers/infiniband/include
-
obj-$(CONFIG_INFINIBAND) += ib_core.o ib_mad.o ib_sa.o \
ib_cm.o ib_umad.o ib_ucm.o
obj-$(CONFIG_INFINIBAND_USER_VERBS) += ib_uverbs.o
/*
- * Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved.
- * Copyright (c) 2004 Infinicon Corporation. All rights reserved.
- * Copyright (c) 2004 Intel Corporation. All rights reserved.
- * Copyright (c) 2004 Topspin Corporation. All rights reserved.
- * Copyright (c) 2004 Voltaire Corporation. All rights reserved.
+ * Copyright (c) 2004, 2005 Mellanox Technologies Ltd. All rights reserved.
+ * Copyright (c) 2004, 2005 Infinicon Corporation. All rights reserved.
+ * Copyright (c) 2004, 2005 Intel Corporation. All rights reserved.
+ * Copyright (c) 2004, 2005 Topspin Corporation. All rights reserved.
+ * Copyright (c) 2004, 2005 Voltaire Corporation. All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
#include <asm/bug.h>
-#include <ib_smi.h>
+#include <rdma/ib_smi.h>
#include "smi.h"
#include "agent_priv.h"
/*
- * Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved.
- * Copyright (c) 2004 Infinicon Corporation. All rights reserved.
- * Copyright (c) 2004 Intel Corporation. All rights reserved.
- * Copyright (c) 2004 Topspin Corporation. All rights reserved.
- * Copyright (c) 2004 Voltaire Corporation. All rights reserved.
+ * Copyright (c) 2004, 2005 Mellanox Technologies Ltd. All rights reserved.
+ * Copyright (c) 2004, 2005 Infinicon Corporation. All rights reserved.
+ * Copyright (c) 2004, 2005 Intel Corporation. All rights reserved.
+ * Copyright (c) 2004, 2005 Topspin Corporation. All rights reserved.
+ * Copyright (c) 2004, 2005 Voltaire Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
+ * Copyright (c) 2005 Intel Corporation. All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
+ * Copyright (c) 2005 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* $Id: cache.c 1349 2004-12-16 21:09:43Z roland $
*/
-#include <linux/version.h>
#include <linux/module.h>
#include <linux/errno.h>
#include <linux/slab.h>
-#include <ib_cache.h>
+#include <rdma/ib_cache.h>
#include "core_priv.h"
#include <linux/spinlock.h>
#include <linux/workqueue.h>
-#include <ib_cache.h>
-#include <ib_cm.h>
+#include <rdma/ib_cache.h>
+#include <rdma/ib_cm.h>
#include "cm_msgs.h"
MODULE_AUTHOR("Sean Hefty");
struct cm_device {
struct list_head list;
struct ib_device *device;
- u64 ca_guid;
+ __be64 ca_guid;
struct cm_port port[0];
};
struct list_head list;
struct cm_port *port;
struct ib_mad_recv_wc *mad_recv_wc; /* Received MADs */
- u32 local_id; /* Established / timewait */
- u32 remote_id;
+ __be32 local_id; /* Established / timewait */
+ __be32 remote_id;
struct ib_cm_event cm_event;
struct ib_sa_path_rec path[0];
};
struct cm_work work; /* Must be first. */
struct rb_node remote_qp_node;
struct rb_node remote_id_node;
- u64 remote_ca_guid;
- u32 remote_qpn;
+ __be64 remote_ca_guid;
+ __be32 remote_qpn;
u8 inserted_remote_qp;
u8 inserted_remote_id;
};
struct cm_av alt_av;
void *private_data;
- u64 tid;
- u32 local_qpn;
- u32 remote_qpn;
- u32 sq_psn;
- u32 rq_psn;
+ __be64 tid;
+ __be32 local_qpn;
+ __be32 remote_qpn;
+ __be32 sq_psn;
+ __be32 rq_psn;
int timeout_ms;
enum ib_mtu path_mtu;
u8 private_data_len;
u16 dlid, u8 sl, u16 src_path_bits)
{
memset(ah_attr, 0, sizeof ah_attr);
- ah_attr->dlid = be16_to_cpu(dlid);
+ ah_attr->dlid = dlid;
ah_attr->sl = sl;
ah_attr->src_path_bits = src_path_bits;
ah_attr->port_num = port_num;
{
av->port = port;
av->pkey_index = wc->pkey_index;
- cm_set_ah_attr(&av->ah_attr, port->port_num, cpu_to_be16(wc->slid),
+ cm_set_ah_attr(&av->ah_attr, port->port_num, wc->slid,
wc->sl, wc->dlid_path_bits);
}
return ret;
av->port = port;
- cm_set_ah_attr(&av->ah_attr, av->port->port_num, path->dlid,
- path->sl, path->slid & 0x7F);
+ cm_set_ah_attr(&av->ah_attr, av->port->port_num,
+ be16_to_cpu(path->dlid), path->sl,
+ be16_to_cpu(path->slid) & 0x7F);
av->packet_life_time = path->packet_life_time;
return 0;
}
do {
spin_lock_irqsave(&cm.lock, flags);
ret = idr_get_new_above(&cm.local_id_table, cm_id_priv, 1,
- (int *) &cm_id_priv->id.local_id);
+ (__force int *) &cm_id_priv->id.local_id);
spin_unlock_irqrestore(&cm.lock, flags);
} while( (ret == -EAGAIN) && idr_pre_get(&cm.local_id_table, GFP_KERNEL) );
return ret;
}
-static void cm_free_id(u32 local_id)
+static void cm_free_id(__be32 local_id)
{
unsigned long flags;
spin_lock_irqsave(&cm.lock, flags);
- idr_remove(&cm.local_id_table, (int) local_id);
+ idr_remove(&cm.local_id_table, (__force int) local_id);
spin_unlock_irqrestore(&cm.lock, flags);
}
-static struct cm_id_private * cm_get_id(u32 local_id, u32 remote_id)
+static struct cm_id_private * cm_get_id(__be32 local_id, __be32 remote_id)
{
struct cm_id_private *cm_id_priv;
- cm_id_priv = idr_find(&cm.local_id_table, (int) local_id);
+ cm_id_priv = idr_find(&cm.local_id_table, (__force int) local_id);
if (cm_id_priv) {
if (cm_id_priv->id.remote_id == remote_id)
atomic_inc(&cm_id_priv->refcount);
return cm_id_priv;
}
-static struct cm_id_private * cm_acquire_id(u32 local_id, u32 remote_id)
+static struct cm_id_private * cm_acquire_id(__be32 local_id, __be32 remote_id)
{
struct cm_id_private *cm_id_priv;
unsigned long flags;
struct rb_node **link = &cm.listen_service_table.rb_node;
struct rb_node *parent = NULL;
struct cm_id_private *cur_cm_id_priv;
- u64 service_id = cm_id_priv->id.service_id;
- u64 service_mask = cm_id_priv->id.service_mask;
+ __be64 service_id = cm_id_priv->id.service_id;
+ __be64 service_mask = cm_id_priv->id.service_mask;
while (*link) {
parent = *link;
return NULL;
}
-static struct cm_id_private * cm_find_listen(u64 service_id)
+static struct cm_id_private * cm_find_listen(__be64 service_id)
{
struct rb_node *node = cm.listen_service_table.rb_node;
struct cm_id_private *cm_id_priv;
struct rb_node **link = &cm.remote_id_table.rb_node;
struct rb_node *parent = NULL;
struct cm_timewait_info *cur_timewait_info;
- u64 remote_ca_guid = timewait_info->remote_ca_guid;
- u32 remote_id = timewait_info->work.remote_id;
+ __be64 remote_ca_guid = timewait_info->remote_ca_guid;
+ __be32 remote_id = timewait_info->work.remote_id;
while (*link) {
parent = *link;
return NULL;
}
-static struct cm_timewait_info * cm_find_remote_id(u64 remote_ca_guid,
- u32 remote_id)
+static struct cm_timewait_info * cm_find_remote_id(__be64 remote_ca_guid,
+ __be32 remote_id)
{
struct rb_node *node = cm.remote_id_table.rb_node;
struct cm_timewait_info *timewait_info;
struct rb_node **link = &cm.remote_qp_table.rb_node;
struct rb_node *parent = NULL;
struct cm_timewait_info *cur_timewait_info;
- u64 remote_ca_guid = timewait_info->remote_ca_guid;
- u32 remote_qpn = timewait_info->remote_qpn;
+ __be64 remote_ca_guid = timewait_info->remote_ca_guid;
+ __be32 remote_qpn = timewait_info->remote_qpn;
while (*link) {
parent = *link;
struct rb_node *parent = NULL;
struct cm_id_private *cur_cm_id_priv;
union ib_gid *port_gid = &cm_id_priv->av.dgid;
- u32 remote_id = cm_id_priv->id.remote_id;
+ __be32 remote_id = cm_id_priv->id.remote_id;
while (*link) {
parent = *link;
spin_unlock_irqrestore(&cm.lock, flags);
}
-static struct cm_timewait_info * cm_create_timewait_info(u32 local_id)
+static struct cm_timewait_info * cm_create_timewait_info(__be32 local_id)
{
struct cm_timewait_info *timewait_info;
EXPORT_SYMBOL(ib_destroy_cm_id);
int ib_cm_listen(struct ib_cm_id *cm_id,
- u64 service_id,
- u64 service_mask)
+ __be64 service_id,
+ __be64 service_mask)
{
struct cm_id_private *cm_id_priv, *cur_cm_id_priv;
unsigned long flags;
int ret = 0;
- service_mask = service_mask ? service_mask : ~0ULL;
+ service_mask = service_mask ? service_mask :
+ __constant_cpu_to_be64(~0ULL);
service_id &= service_mask;
if ((service_id & IB_SERVICE_ID_AGN_MASK) == IB_CM_ASSIGN_SERVICE_ID &&
(service_id != IB_CM_ASSIGN_SERVICE_ID))
spin_lock_irqsave(&cm.lock, flags);
if (service_id == IB_CM_ASSIGN_SERVICE_ID) {
- cm_id->service_id = __cpu_to_be64(cm.listen_service_id++);
- cm_id->service_mask = ~0ULL;
+ cm_id->service_id = cpu_to_be64(cm.listen_service_id++);
+ cm_id->service_mask = __constant_cpu_to_be64(~0ULL);
} else {
cm_id->service_id = service_id;
cm_id->service_mask = service_mask;
}
EXPORT_SYMBOL(ib_cm_listen);
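+/*
+ * Build the 64-bit MAD transaction ID: the agent's hi_tid occupies the
+ * upper 32 bits; the connection's local ID, with the message sequence
+ * type folded into bits 30-31, occupies the lower 32 bits.
+ */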
-static u64 cm_form_tid(struct cm_id_private *cm_id_priv,
- enum cm_msg_sequence msg_seq)
+static __be64 cm_form_tid(struct cm_id_private *cm_id_priv,
+ enum cm_msg_sequence msg_seq)
{
u64 hi_tid, low_tid;
hi_tid = ((u64) cm_id_priv->av.port->mad_agent->hi_tid) << 32;
- low_tid = (u64) (cm_id_priv->id.local_id | (msg_seq << 30));
+ low_tid = (u64) ((__force u32)cm_id_priv->id.local_id |
+ (msg_seq << 30));
return cpu_to_be64(hi_tid | low_tid);
}
static void cm_format_mad_hdr(struct ib_mad_hdr *hdr,
- enum cm_msg_attr_id attr_id, u64 tid)
+ __be16 attr_id, __be64 tid)
{
hdr->base_version = IB_MGMT_BASE_VERSION;
hdr->mgmt_class = IB_MGMT_CLASS_CM;
goto error1;
}
cm_id->service_id = param->service_id;
- cm_id->service_mask = ~0ULL;
+ cm_id->service_mask = __constant_cpu_to_be64(~0ULL);
cm_id_priv->timeout_ms = cm_convert_to_ms(
param->primary_path->packet_life_time) * 2 +
cm_convert_to_ms(
rej_msg->remote_comm_id = rcv_msg->local_comm_id;
rej_msg->local_comm_id = rcv_msg->remote_comm_id;
cm_rej_set_msg_rejected(rej_msg, msg_rejected);
- rej_msg->reason = reason;
+ rej_msg->reason = cpu_to_be16(reason);
if (ari && ari_length) {
cm_rej_set_reject_info_len(rej_msg, ari_length);
return ret;
}
-static inline int cm_is_active_peer(u64 local_ca_guid, u64 remote_ca_guid,
- u32 local_qpn, u32 remote_qpn)
+static inline int cm_is_active_peer(__be64 local_ca_guid, __be64 remote_ca_guid,
+ __be32 local_qpn, __be32 remote_qpn)
{
return (be64_to_cpu(local_ca_guid) > be64_to_cpu(remote_ca_guid) ||
((local_ca_guid == remote_ca_guid) &&
break;
}
- rej_msg->reason = reason;
+ rej_msg->reason = cpu_to_be16(reason);
if (ari && ari_length) {
cm_rej_set_reject_info_len(rej_msg, ari_length);
memcpy(rej_msg->ari, ari, ari_length);
cm_id_priv->id.cm_handler = listen_cm_id_priv->id.cm_handler;
cm_id_priv->id.context = listen_cm_id_priv->id.context;
cm_id_priv->id.service_id = req_msg->service_id;
- cm_id_priv->id.service_mask = ~0ULL;
+ cm_id_priv->id.service_mask = __constant_cpu_to_be64(~0ULL);
cm_format_paths_from_req(req_msg, &work->path[0], &work->path[1]);
ret = cm_init_av_by_path(&work->path[0], &cm_id_priv->av);
param = &work->cm_event.param.rej_rcvd;
param->ari = rej_msg->ari;
param->ari_length = cm_rej_get_reject_info_len(rej_msg);
- param->reason = rej_msg->reason;
+ param->reason = __be16_to_cpu(rej_msg->reason);
work->cm_event.private_data = &rej_msg->private_data;
}
struct cm_timewait_info *timewait_info;
struct cm_id_private *cm_id_priv;
unsigned long flags;
- u32 remote_id;
+ __be32 remote_id;
remote_id = rej_msg->local_comm_id;
- if (rej_msg->reason == IB_CM_REJ_TIMEOUT) {
+ if (__be16_to_cpu(rej_msg->reason) == IB_CM_REJ_TIMEOUT) {
spin_lock_irqsave(&cm.lock, flags);
- timewait_info = cm_find_remote_id( *((u64 *) rej_msg->ari),
+ timewait_info = cm_find_remote_id( *((__be64 *) rej_msg->ari),
remote_id);
if (!timewait_info) {
spin_unlock_irqrestore(&cm.lock, flags);
return NULL;
}
cm_id_priv = idr_find(&cm.local_id_table,
- (int) timewait_info->work.local_id);
+ (__force int) timewait_info->work.local_id);
if (cm_id_priv) {
if (cm_id_priv->id.remote_id == remote_id)
atomic_inc(&cm_id_priv->refcount);
/* fall through */
case IB_CM_REQ_RCVD:
case IB_CM_MRA_REQ_SENT:
- if (rej_msg->reason == IB_CM_REJ_STALE_CONN)
+ if (__be16_to_cpu(rej_msg->reason) == IB_CM_REJ_STALE_CONN)
cm_enter_timewait(cm_id_priv);
else
cm_reset_to_idle(cm_id_priv);
cm_format_mad_hdr(&sidr_req_msg->hdr, CM_SIDR_REQ_ATTR_ID,
cm_form_tid(cm_id_priv, CM_MSG_SEQUENCE_SIDR));
sidr_req_msg->request_id = cm_id_priv->id.local_id;
- sidr_req_msg->pkey = param->pkey;
+ sidr_req_msg->pkey = cpu_to_be16(param->pkey);
sidr_req_msg->service_id = param->service_id;
if (param->private_data && param->private_data_len)
goto out;
cm_id->service_id = param->service_id;
- cm_id->service_mask = ~0ULL;
+ cm_id->service_mask = __constant_cpu_to_be64(~0ULL);
cm_id_priv->timeout_ms = param->timeout_ms;
cm_id_priv->max_cm_retries = param->max_cm_retries;
ret = cm_alloc_msg(cm_id_priv, &msg);
sidr_req_msg = (struct cm_sidr_req_msg *)
work->mad_recv_wc->recv_buf.mad;
param = &work->cm_event.param.sidr_req_rcvd;
- param->pkey = sidr_req_msg->pkey;
+ param->pkey = __be16_to_cpu(sidr_req_msg->pkey);
param->listen_id = listen_id;
param->device = work->port->mad_agent->device;
param->port = work->port->port_num;
sidr_req_msg = (struct cm_sidr_req_msg *)
work->mad_recv_wc->recv_buf.mad;
wc = work->mad_recv_wc->wc;
- cm_id_priv->av.dgid.global.subnet_prefix = wc->slid;
+ cm_id_priv->av.dgid.global.subnet_prefix = cpu_to_be64(wc->slid);
cm_id_priv->av.dgid.global.interface_id = 0;
cm_init_av_for_response(work->port, work->mad_recv_wc->wc,
&cm_id_priv->av);
cm_id_priv->id.cm_handler = cur_cm_id_priv->id.cm_handler;
cm_id_priv->id.context = cur_cm_id_priv->id.context;
cm_id_priv->id.service_id = sidr_req_msg->service_id;
- cm_id_priv->id.service_mask = ~0ULL;
+ cm_id_priv->id.service_mask = __constant_cpu_to_be64(~0ULL);
cm_format_sidr_req_event(work, &cur_cm_id_priv->id);
cm_process_work(cm_id_priv, work);
}
EXPORT_SYMBOL(ib_cm_init_qp_attr);
-static u64 cm_get_ca_guid(struct ib_device *device)
+static __be64 cm_get_ca_guid(struct ib_device *device)
{
struct ib_device_attr *device_attr;
- u64 guid;
+ __be64 guid;
int ret;
device_attr = kmalloc(sizeof *device_attr, GFP_KERNEL);
#if !defined(CM_MSGS_H)
#define CM_MSGS_H
-#include <ib_mad.h>
+#include <rdma/ib_mad.h>
/*
* Parameters to routines below should be in network-byte order, and values
 * are returned in network-byte order.
 */
#define IB_CM_CLASS_VERSION 2 /* IB specification 1.2 */
-enum cm_msg_attr_id {
- CM_REQ_ATTR_ID = __constant_htons(0x0010),
- CM_MRA_ATTR_ID = __constant_htons(0x0011),
- CM_REJ_ATTR_ID = __constant_htons(0x0012),
- CM_REP_ATTR_ID = __constant_htons(0x0013),
- CM_RTU_ATTR_ID = __constant_htons(0x0014),
- CM_DREQ_ATTR_ID = __constant_htons(0x0015),
- CM_DREP_ATTR_ID = __constant_htons(0x0016),
- CM_SIDR_REQ_ATTR_ID = __constant_htons(0x0017),
- CM_SIDR_REP_ATTR_ID = __constant_htons(0x0018),
- CM_LAP_ATTR_ID = __constant_htons(0x0019),
- CM_APR_ATTR_ID = __constant_htons(0x001A)
-};
+#define CM_REQ_ATTR_ID __constant_htons(0x0010)
+#define CM_MRA_ATTR_ID __constant_htons(0x0011)
+#define CM_REJ_ATTR_ID __constant_htons(0x0012)
+#define CM_REP_ATTR_ID __constant_htons(0x0013)
+#define CM_RTU_ATTR_ID __constant_htons(0x0014)
+#define CM_DREQ_ATTR_ID __constant_htons(0x0015)
+#define CM_DREP_ATTR_ID __constant_htons(0x0016)
+#define CM_SIDR_REQ_ATTR_ID __constant_htons(0x0017)
+#define CM_SIDR_REP_ATTR_ID __constant_htons(0x0018)
+#define CM_LAP_ATTR_ID __constant_htons(0x0019)
+#define CM_APR_ATTR_ID __constant_htons(0x001A)
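+/*
+ * The attribute IDs above are defines rather than an enum so that the
+ * values keep their __be16 type under sparse; enum constants are plain
+ * integers and would lose the byte-order annotation.
+ */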
enum cm_msg_sequence {
CM_MSG_SEQUENCE_REQ,
struct cm_req_msg {
struct ib_mad_hdr hdr;
- u32 local_comm_id;
- u32 rsvd4;
- u64 service_id;
- u64 local_ca_guid;
- u32 rsvd24;
- u32 local_qkey;
+ __be32 local_comm_id;
+ __be32 rsvd4;
+ __be64 service_id;
+ __be64 local_ca_guid;
+ __be32 rsvd24;
+ __be32 local_qkey;
/* local QPN:24, responder resources:8 */
- u32 offset32;
+ __be32 offset32;
/* local EECN:24, initiator depth:8 */
- u32 offset36;
+ __be32 offset36;
/*
* remote EECN:24, remote CM response timeout:5,
* transport service type:2, end-to-end flow control:1
*/
- u32 offset40;
+ __be32 offset40;
/* starting PSN:24, local CM response timeout:5, retry count:3 */
- u32 offset44;
- u16 pkey;
+ __be32 offset44;
+ __be16 pkey;
/* path MTU:4, RDC exists:1, RNR retry count:3. */
u8 offset50;
/* max CM Retries:4, SRQ:1, rsvd:3 */
u8 offset51;
- u16 primary_local_lid;
- u16 primary_remote_lid;
+ __be16 primary_local_lid;
+ __be16 primary_remote_lid;
union ib_gid primary_local_gid;
union ib_gid primary_remote_gid;
/* flow label:20, rsvd:6, packet rate:6 */
- u32 primary_offset88;
+ __be32 primary_offset88;
u8 primary_traffic_class;
u8 primary_hop_limit;
/* SL:4, subnet local:1, rsvd:3 */
/* local ACK timeout:5, rsvd:3 */
u8 primary_offset95;
- u16 alt_local_lid;
- u16 alt_remote_lid;
+ __be16 alt_local_lid;
+ __be16 alt_remote_lid;
union ib_gid alt_local_gid;
union ib_gid alt_remote_gid;
/* flow label:20, rsvd:6, packet rate:6 */
- u32 alt_offset132;
+ __be32 alt_offset132;
u8 alt_traffic_class;
u8 alt_hop_limit;
/* SL:4, subnet local:1, rsvd:3 */
} __attribute__ ((packed));
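+/*
+ * The accessors below keep every wire field big-endian: each getter
+ * converts to host order, shifts and masks per the bit layout noted on
+ * the field, and converts back, so callers only ever see __be values.
+ */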
-static inline u32 cm_req_get_local_qpn(struct cm_req_msg *req_msg)
+static inline __be32 cm_req_get_local_qpn(struct cm_req_msg *req_msg)
{
return cpu_to_be32(be32_to_cpu(req_msg->offset32) >> 8);
}
-static inline void cm_req_set_local_qpn(struct cm_req_msg *req_msg, u32 qpn)
+static inline void cm_req_set_local_qpn(struct cm_req_msg *req_msg, __be32 qpn)
{
req_msg->offset32 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
(be32_to_cpu(req_msg->offset32) &
0x000000FF));
}
-static inline u32 cm_req_get_starting_psn(struct cm_req_msg *req_msg)
+static inline __be32 cm_req_get_starting_psn(struct cm_req_msg *req_msg)
{
return cpu_to_be32(be32_to_cpu(req_msg->offset44) >> 8);
}
static inline void cm_req_set_starting_psn(struct cm_req_msg *req_msg,
- u32 starting_psn)
+ __be32 starting_psn)
{
req_msg->offset44 = cpu_to_be32((be32_to_cpu(starting_psn) << 8) |
(be32_to_cpu(req_msg->offset44) & 0x000000FF));
((srq & 0x1) << 3));
}
-static inline u32 cm_req_get_primary_flow_label(struct cm_req_msg *req_msg)
+static inline __be32 cm_req_get_primary_flow_label(struct cm_req_msg *req_msg)
{
- return cpu_to_be32((be32_to_cpu(req_msg->primary_offset88) >> 12));
+ return cpu_to_be32(be32_to_cpu(req_msg->primary_offset88) >> 12);
}
static inline void cm_req_set_primary_flow_label(struct cm_req_msg *req_msg,
- u32 flow_label)
+ __be32 flow_label)
{
req_msg->primary_offset88 = cpu_to_be32(
(be32_to_cpu(req_msg->primary_offset88) &
(local_ack_timeout << 3));
}
-static inline u32 cm_req_get_alt_flow_label(struct cm_req_msg *req_msg)
+static inline __be32 cm_req_get_alt_flow_label(struct cm_req_msg *req_msg)
{
- return cpu_to_be32((be32_to_cpu(req_msg->alt_offset132) >> 12));
+ return cpu_to_be32(be32_to_cpu(req_msg->alt_offset132) >> 12);
}
static inline void cm_req_set_alt_flow_label(struct cm_req_msg *req_msg,
- u32 flow_label)
+ __be32 flow_label)
{
req_msg->alt_offset132 = cpu_to_be32(
(be32_to_cpu(req_msg->alt_offset132) &
struct cm_mra_msg {
struct ib_mad_hdr hdr;
- u32 local_comm_id;
- u32 remote_comm_id;
+ __be32 local_comm_id;
+ __be32 remote_comm_id;
/* message MRAed:2, rsvd:6 */
u8 offset8;
/* service timeout:5, rsvd:3 */
struct cm_rej_msg {
struct ib_mad_hdr hdr;
- u32 local_comm_id;
- u32 remote_comm_id;
+ __be32 local_comm_id;
+ __be32 remote_comm_id;
/* message REJected:2, rsvd:6 */
u8 offset8;
/* reject info length:7, rsvd:1. */
u8 offset9;
- u16 reason;
+ __be16 reason;
u8 ari[IB_CM_REJ_ARI_LENGTH];
u8 private_data[IB_CM_REJ_PRIVATE_DATA_SIZE];
struct cm_rep_msg {
struct ib_mad_hdr hdr;
- u32 local_comm_id;
- u32 remote_comm_id;
- u32 local_qkey;
+ __be32 local_comm_id;
+ __be32 remote_comm_id;
+ __be32 local_qkey;
/* local QPN:24, rsvd:8 */
- u32 offset12;
+ __be32 offset12;
/* local EECN:24, rsvd:8 */
- u32 offset16;
+ __be32 offset16;
/* starting PSN:24 rsvd:8 */
- u32 offset20;
+ __be32 offset20;
u8 resp_resources;
u8 initiator_depth;
/* target ACK delay:5, failover accepted:2, end-to-end flow control:1 */
u8 offset26;
/* RNR retry count:3, SRQ:1, rsvd:5 */
u8 offset27;
- u64 local_ca_guid;
+ __be64 local_ca_guid;
u8 private_data[IB_CM_REP_PRIVATE_DATA_SIZE];
} __attribute__ ((packed));
-static inline u32 cm_rep_get_local_qpn(struct cm_rep_msg *rep_msg)
+static inline __be32 cm_rep_get_local_qpn(struct cm_rep_msg *rep_msg)
{
return cpu_to_be32(be32_to_cpu(rep_msg->offset12) >> 8);
}
-static inline void cm_rep_set_local_qpn(struct cm_rep_msg *rep_msg, u32 qpn)
+static inline void cm_rep_set_local_qpn(struct cm_rep_msg *rep_msg, __be32 qpn)
{
rep_msg->offset12 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
(be32_to_cpu(rep_msg->offset12) & 0x000000FF));
}
-static inline u32 cm_rep_get_starting_psn(struct cm_rep_msg *rep_msg)
+static inline __be32 cm_rep_get_starting_psn(struct cm_rep_msg *rep_msg)
{
return cpu_to_be32(be32_to_cpu(rep_msg->offset20) >> 8);
}
static inline void cm_rep_set_starting_psn(struct cm_rep_msg *rep_msg,
- u32 starting_psn)
+ __be32 starting_psn)
{
rep_msg->offset20 = cpu_to_be32((be32_to_cpu(starting_psn) << 8) |
(be32_to_cpu(rep_msg->offset20) & 0x000000FF));
struct cm_rtu_msg {
struct ib_mad_hdr hdr;
- u32 local_comm_id;
- u32 remote_comm_id;
+ __be32 local_comm_id;
+ __be32 remote_comm_id;
u8 private_data[IB_CM_RTU_PRIVATE_DATA_SIZE];
struct cm_dreq_msg {
struct ib_mad_hdr hdr;
- u32 local_comm_id;
- u32 remote_comm_id;
+ __be32 local_comm_id;
+ __be32 remote_comm_id;
/* remote QPN/EECN:24, rsvd:8 */
- u32 offset8;
+ __be32 offset8;
u8 private_data[IB_CM_DREQ_PRIVATE_DATA_SIZE];
} __attribute__ ((packed));
-static inline u32 cm_dreq_get_remote_qpn(struct cm_dreq_msg *dreq_msg)
+static inline __be32 cm_dreq_get_remote_qpn(struct cm_dreq_msg *dreq_msg)
{
return cpu_to_be32(be32_to_cpu(dreq_msg->offset8) >> 8);
}
-static inline void cm_dreq_set_remote_qpn(struct cm_dreq_msg *dreq_msg, u32 qpn)
+static inline void cm_dreq_set_remote_qpn(struct cm_dreq_msg *dreq_msg, __be32 qpn)
{
dreq_msg->offset8 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
(be32_to_cpu(dreq_msg->offset8) & 0x000000FF));
struct cm_drep_msg {
struct ib_mad_hdr hdr;
- u32 local_comm_id;
- u32 remote_comm_id;
+ __be32 local_comm_id;
+ __be32 remote_comm_id;
u8 private_data[IB_CM_DREP_PRIVATE_DATA_SIZE];
struct cm_lap_msg {
struct ib_mad_hdr hdr;
- u32 local_comm_id;
- u32 remote_comm_id;
+ __be32 local_comm_id;
+ __be32 remote_comm_id;
- u32 rsvd8;
+ __be32 rsvd8;
/* remote QPN/EECN:24, remote CM response timeout:5, rsvd:3 */
- u32 offset12;
- u32 rsvd16;
+ __be32 offset12;
+ __be32 rsvd16;
- u16 alt_local_lid;
- u16 alt_remote_lid;
+ __be16 alt_local_lid;
+ __be16 alt_remote_lid;
union ib_gid alt_local_gid;
union ib_gid alt_remote_gid;
/* flow label:20, rsvd:4, traffic class:8 */
- u32 offset56;
+ __be32 offset56;
u8 alt_hop_limit;
/* rsvd:2, packet rate:6 */
- uint8_t offset61;
+ u8 offset61;
/* SL:4, subnet local:1, rsvd:3 */
- uint8_t offset62;
+ u8 offset62;
/* local ACK timeout:5, rsvd:3 */
- uint8_t offset63;
+ u8 offset63;
u8 private_data[IB_CM_LAP_PRIVATE_DATA_SIZE];
} __attribute__ ((packed));
-static inline u32 cm_lap_get_remote_qpn(struct cm_lap_msg *lap_msg)
+static inline __be32 cm_lap_get_remote_qpn(struct cm_lap_msg *lap_msg)
{
return cpu_to_be32(be32_to_cpu(lap_msg->offset12) >> 8);
}
-static inline void cm_lap_set_remote_qpn(struct cm_lap_msg *lap_msg, u32 qpn)
+static inline void cm_lap_set_remote_qpn(struct cm_lap_msg *lap_msg, __be32 qpn)
{
lap_msg->offset12 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
(be32_to_cpu(lap_msg->offset12) &
0x000000FF));
}
-static inline u32 cm_lap_get_flow_label(struct cm_lap_msg *lap_msg)
+static inline __be32 cm_lap_get_flow_label(struct cm_lap_msg *lap_msg)
{
- return be32_to_cpu(lap_msg->offset56) >> 12;
+ return cpu_to_be32(be32_to_cpu(lap_msg->offset56) >> 12);
}
static inline void cm_lap_set_flow_label(struct cm_lap_msg *lap_msg,
- u32 flow_label)
+ __be32 flow_label)
{
- lap_msg->offset56 = cpu_to_be32((flow_label << 12) |
- (be32_to_cpu(lap_msg->offset56) &
- 0x00000FFF));
+ lap_msg->offset56 = cpu_to_be32(
+ (be32_to_cpu(lap_msg->offset56) & 0x00000FFF) |
+ (be32_to_cpu(flow_label) << 12));
}
static inline u8 cm_lap_get_traffic_class(struct cm_lap_msg *lap_msg)
struct cm_apr_msg {
struct ib_mad_hdr hdr;
- u32 local_comm_id;
- u32 remote_comm_id;
+ __be32 local_comm_id;
+ __be32 remote_comm_id;
u8 info_length;
u8 ap_status;
struct cm_sidr_req_msg {
struct ib_mad_hdr hdr;
- u32 request_id;
- u16 pkey;
- u16 rsvd;
- u64 service_id;
+ __be32 request_id;
+ __be16 pkey;
+ __be16 rsvd;
+ __be64 service_id;
u8 private_data[IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE];
} __attribute__ ((packed));
struct cm_sidr_rep_msg {
struct ib_mad_hdr hdr;
- u32 request_id;
+ __be32 request_id;
u8 status;
u8 info_length;
- u16 rsvd;
+ __be16 rsvd;
/* QPN:24, rsvd:8 */
- u32 offset8;
- u64 service_id;
- u32 qkey;
+ __be32 offset8;
+ __be64 service_id;
+ __be32 qkey;
u8 info[IB_CM_SIDR_REP_INFO_LENGTH];
u8 private_data[IB_CM_SIDR_REP_PRIVATE_DATA_SIZE];
} __attribute__ ((packed));
-static inline u32 cm_sidr_rep_get_qpn(struct cm_sidr_rep_msg *sidr_rep_msg)
+static inline __be32 cm_sidr_rep_get_qpn(struct cm_sidr_rep_msg *sidr_rep_msg)
{
return cpu_to_be32(be32_to_cpu(sidr_rep_msg->offset8) >> 8);
}
static inline void cm_sidr_rep_set_qpn(struct cm_sidr_rep_msg *sidr_rep_msg,
- u32 qpn)
+ __be32 qpn)
{
sidr_rep_msg->offset8 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
(be32_to_cpu(sidr_rep_msg->offset8) &
#include <linux/list.h>
#include <linux/spinlock.h>
-#include <ib_verbs.h>
+#include <rdma/ib_verbs.h>
int ib_device_register_sysfs(struct ib_device *device);
void ib_device_unregister_sysfs(struct ib_device *device);
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
#include <linux/jhash.h>
#include <linux/kthread.h>
-#include <ib_fmr_pool.h>
+#include <rdma/ib_fmr_pool.h>
#include "core_priv.h"
{
struct ib_pool_fmr *fmr;
struct ib_pool_fmr *tmp;
+ LIST_HEAD(fmr_list);
int i;
kthread_stop(pool->thread);
i = 0;
list_for_each_entry_safe(fmr, tmp, &pool->free_list, list) {
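+ /* An FMR that is still mapped must be unmapped before it can be freed. */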
+ if (fmr->remap_count) {
+ INIT_LIST_HEAD(&fmr_list);
+ list_add_tail(&fmr->fmr->list, &fmr_list);
+ ib_unmap_fmr(&fmr_list);
+ }
ib_dealloc_fmr(fmr->fmr);
list_del(&fmr->list);
kfree(fmr);
goto out;
}
- build_smp_wc(send_wr->wr_id, smp->dr_slid, send_wr->wr.ud.pkey_index,
+ build_smp_wc(send_wr->wr_id, be16_to_cpu(smp->dr_slid),
+ send_wr->wr.ud.pkey_index,
send_wr->wr.ud.port_num, &mad_wc);
/* No GRH for DR SMP */
}
struct ib_mad_send_wr_private*
-ib_find_send_mad(struct ib_mad_agent_private *mad_agent_priv, u64 tid)
+ib_find_send_mad(struct ib_mad_agent_private *mad_agent_priv, __be64 tid)
{
struct ib_mad_send_wr_private *mad_send_wr;
struct ib_mad_send_wr_private *mad_send_wr;
struct ib_mad_send_wc mad_send_wc;
unsigned long flags;
- u64 tid;
+ __be64 tid;
INIT_LIST_HEAD(&mad_recv_wc->rmpp_list);
list_add(&mad_recv_wc->recv_buf.list, &mad_recv_wc->rmpp_list);
* Defined behavior is to complete response
* before request
*/
- build_smp_wc(local->wr_id, IB_LID_PERMISSIVE,
+ build_smp_wc(local->wr_id,
+ be16_to_cpu(IB_LID_PERMISSIVE),
0 /* pkey index */,
recv_mad_agent->agent.port_num, &wc);
spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
}
-static void ib_mad_thread_completion_handler(struct ib_cq *cq)
+static void ib_mad_thread_completion_handler(struct ib_cq *cq, void *arg)
{
struct ib_mad_port_private *port_priv = cq->cq_context;
cq_size = (IB_MAD_QP_SEND_SIZE + IB_MAD_QP_RECV_SIZE) * 2;
port_priv->cq = ib_create_cq(port_priv->device,
- (ib_comp_handler)
- ib_mad_thread_completion_handler,
+ ib_mad_thread_completion_handler,
NULL, port_priv, cq_size);
if (IS_ERR(port_priv->cq)) {
printk(KERN_ERR PFX "Couldn't create ib_mad CQ\n");
#include <linux/pci.h>
#include <linux/kthread.h>
#include <linux/workqueue.h>
-#include <ib_mad.h>
-#include <ib_smi.h>
+#include <rdma/ib_mad.h>
+#include <rdma/ib_smi.h>
#define PFX "ib_mad: "
struct ib_send_wr send_wr;
struct ib_sge sg_list[IB_MAD_SEND_REQ_MAX_SG];
u64 wr_id; /* client WR ID */
- u64 tid;
+ __be64 tid;
unsigned long timeout;
int retries;
int retry;
struct ib_send_wr send_wr;
struct ib_sge sg_list[IB_MAD_SEND_REQ_MAX_SG];
u64 wr_id; /* client WR ID */
- u64 tid;
+ __be64 tid;
};
struct ib_mad_mgmt_method_table {
int ib_send_mad(struct ib_mad_send_wr_private *mad_send_wr);
struct ib_mad_send_wr_private *
-ib_find_send_mad(struct ib_mad_agent_private *mad_agent_priv, u64 tid);
+ib_find_send_mad(struct ib_mad_agent_private *mad_agent_priv, __be64 tid);
void ib_mad_complete_send_wr(struct ib_mad_send_wr_private *mad_send_wr,
struct ib_mad_send_wc *mad_send_wc);
int seg_num;
int newwin;
- u64 tid;
+ __be64 tid;
u32 src_qp;
u16 slid;
u8 mgmt_class;
}
}
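+/*
+ * Return the offset of the RMPP payload within a MAD; it depends on the
+ * management class, since SA and vendor range 2 MADs carry extra headers.
+ */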
+static int data_offset(u8 mgmt_class)
+{
+ if (mgmt_class == IB_MGMT_CLASS_SUBN_ADM)
+ return offsetof(struct ib_sa_mad, data);
+ else if ((mgmt_class >= IB_MGMT_CLASS_VENDOR_RANGE2_START) &&
+ (mgmt_class <= IB_MGMT_CLASS_VENDOR_RANGE2_END))
+ return offsetof(struct ib_vendor_mad, data);
+ else
+ return offsetof(struct ib_rmpp_mad, data);
+}
+
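+/*
+ * Build an RMPP ACK for a received data segment: copy the request's MAD
+ * header, toggle the response bit, and report the last segment received
+ * along with the current receive window.
+ */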
+static void format_ack(struct ib_rmpp_mad *ack,
+ struct ib_rmpp_mad *data,
+ struct mad_rmpp_recv *rmpp_recv)
+{
+ unsigned long flags;
+
+ memcpy(&ack->mad_hdr, &data->mad_hdr,
+ data_offset(data->mad_hdr.mgmt_class));
+
+ ack->mad_hdr.method ^= IB_MGMT_METHOD_RESP;
+ ack->rmpp_hdr.rmpp_type = IB_MGMT_RMPP_TYPE_ACK;
+ ib_set_rmpp_flags(&ack->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE);
+
+ spin_lock_irqsave(&rmpp_recv->lock, flags);
+ rmpp_recv->last_ack = rmpp_recv->seg_num;
+ ack->rmpp_hdr.seg_num = cpu_to_be32(rmpp_recv->seg_num);
+ ack->rmpp_hdr.paylen_newwin = cpu_to_be32(rmpp_recv->newwin);
+ spin_unlock_irqrestore(&rmpp_recv->lock, flags);
+}
+
+static void ack_recv(struct mad_rmpp_recv *rmpp_recv,
+ struct ib_mad_recv_wc *recv_wc)
+{
+ struct ib_mad_send_buf *msg;
+ struct ib_send_wr *bad_send_wr;
+ int hdr_len, ret;
+
+ hdr_len = sizeof(struct ib_mad_hdr) + sizeof(struct ib_rmpp_hdr);
+ msg = ib_create_send_mad(&rmpp_recv->agent->agent, recv_wc->wc->src_qp,
+ recv_wc->wc->pkey_index, rmpp_recv->ah, 1,
+ hdr_len, sizeof(struct ib_rmpp_mad) - hdr_len,
+ GFP_KERNEL);
+	if (IS_ERR(msg))
+		return;
+
+ format_ack((struct ib_rmpp_mad *) msg->mad,
+ (struct ib_rmpp_mad *) recv_wc->recv_buf.mad, rmpp_recv);
+ ret = ib_post_send_mad(&rmpp_recv->agent->agent, &msg->send_wr,
+ &bad_send_wr);
+ if (ret)
+ ib_free_send_mad(msg);
+}
+
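+/*
+ * Allocate a single-segment send buffer for an RMPP response, using an
+ * address handle created from the incoming work completion.
+ */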
+static int alloc_response_msg(struct ib_mad_agent *agent,
+ struct ib_mad_recv_wc *recv_wc,
+ struct ib_mad_send_buf **msg)
+{
+ struct ib_mad_send_buf *m;
+ struct ib_ah *ah;
+ int hdr_len;
+
+ ah = ib_create_ah_from_wc(agent->qp->pd, recv_wc->wc,
+ recv_wc->recv_buf.grh, agent->port_num);
+ if (IS_ERR(ah))
+ return PTR_ERR(ah);
+
+ hdr_len = sizeof(struct ib_mad_hdr) + sizeof(struct ib_rmpp_hdr);
+ m = ib_create_send_mad(agent, recv_wc->wc->src_qp,
+ recv_wc->wc->pkey_index, ah, 1, hdr_len,
+ sizeof(struct ib_rmpp_mad) - hdr_len,
+ GFP_KERNEL);
+ if (IS_ERR(m)) {
+ ib_destroy_ah(ah);
+ return PTR_ERR(m);
+ }
+ *msg = m;
+ return 0;
+}
+
+static void free_msg(struct ib_mad_send_buf *msg)
+{
+ ib_destroy_ah(msg->send_wr.wr.ud.ah);
+ ib_free_send_mad(msg);
+}
+
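+/*
+ * Reject a received MAD by sending an RMPP ABORT with the given status
+ * back to its sender.
+ */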
+static void nack_recv(struct ib_mad_agent_private *agent,
+ struct ib_mad_recv_wc *recv_wc, u8 rmpp_status)
+{
+ struct ib_mad_send_buf *msg;
+ struct ib_rmpp_mad *rmpp_mad;
+ struct ib_send_wr *bad_send_wr;
+ int ret;
+
+ ret = alloc_response_msg(&agent->agent, recv_wc, &msg);
+ if (ret)
+ return;
+
+ rmpp_mad = (struct ib_rmpp_mad *) msg->mad;
+ memcpy(rmpp_mad, recv_wc->recv_buf.mad,
+ data_offset(recv_wc->recv_buf.mad->mad_hdr.mgmt_class));
+
+ rmpp_mad->mad_hdr.method ^= IB_MGMT_METHOD_RESP;
+ rmpp_mad->rmpp_hdr.rmpp_version = IB_MGMT_RMPP_VERSION;
+ rmpp_mad->rmpp_hdr.rmpp_type = IB_MGMT_RMPP_TYPE_ABORT;
+ ib_set_rmpp_flags(&rmpp_mad->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE);
+ rmpp_mad->rmpp_hdr.rmpp_status = rmpp_status;
+ rmpp_mad->rmpp_hdr.seg_num = 0;
+ rmpp_mad->rmpp_hdr.paylen_newwin = 0;
+
+ ret = ib_post_send_mad(&agent->agent, &msg->send_wr, &bad_send_wr);
+ if (ret)
+ free_msg(msg);
+}
+
static void recv_timeout_handler(void *data)
{
struct mad_rmpp_recv *rmpp_recv = data;
list_del(&rmpp_recv->list);
spin_unlock_irqrestore(&rmpp_recv->agent->lock, flags);
- /* TODO: send abort. */
rmpp_wc = rmpp_recv->rmpp_wc;
+ nack_recv(rmpp_recv->agent, rmpp_wc, IB_MGMT_RMPP_STATUS_T2L);
destroy_rmpp_recv(rmpp_recv);
ib_free_recv_mad(rmpp_wc);
}
return cur_rmpp_recv;
}
-static int data_offset(u8 mgmt_class)
-{
- if (mgmt_class == IB_MGMT_CLASS_SUBN_ADM)
- return offsetof(struct ib_sa_mad, data);
- else if ((mgmt_class >= IB_MGMT_CLASS_VENDOR_RANGE2_START) &&
- (mgmt_class <= IB_MGMT_CLASS_VENDOR_RANGE2_END))
- return offsetof(struct ib_vendor_mad, data);
- else
- return offsetof(struct ib_rmpp_mad, data);
-}
-
-static void format_ack(struct ib_rmpp_mad *ack,
- struct ib_rmpp_mad *data,
- struct mad_rmpp_recv *rmpp_recv)
-{
- unsigned long flags;
-
- memcpy(&ack->mad_hdr, &data->mad_hdr,
- data_offset(data->mad_hdr.mgmt_class));
-
- ack->mad_hdr.method ^= IB_MGMT_METHOD_RESP;
- ack->rmpp_hdr.rmpp_type = IB_MGMT_RMPP_TYPE_ACK;
- ib_set_rmpp_flags(&ack->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE);
-
- spin_lock_irqsave(&rmpp_recv->lock, flags);
- rmpp_recv->last_ack = rmpp_recv->seg_num;
- ack->rmpp_hdr.seg_num = cpu_to_be32(rmpp_recv->seg_num);
- ack->rmpp_hdr.paylen_newwin = cpu_to_be32(rmpp_recv->newwin);
- spin_unlock_irqrestore(&rmpp_recv->lock, flags);
-}
-
-static void ack_recv(struct mad_rmpp_recv *rmpp_recv,
- struct ib_mad_recv_wc *recv_wc)
-{
- struct ib_mad_send_buf *msg;
- struct ib_send_wr *bad_send_wr;
- int hdr_len, ret;
-
- hdr_len = sizeof(struct ib_mad_hdr) + sizeof(struct ib_rmpp_hdr);
- msg = ib_create_send_mad(&rmpp_recv->agent->agent, recv_wc->wc->src_qp,
- recv_wc->wc->pkey_index, rmpp_recv->ah, 1,
- hdr_len, sizeof(struct ib_rmpp_mad) - hdr_len,
- GFP_KERNEL);
- if (!msg)
- return;
-
- format_ack((struct ib_rmpp_mad *) msg->mad,
- (struct ib_rmpp_mad *) recv_wc->recv_buf.mad, rmpp_recv);
- ret = ib_post_send_mad(&rmpp_recv->agent->agent, &msg->send_wr,
- &bad_send_wr);
- if (ret)
- ib_free_send_mad(msg);
-}
-
static inline int get_last_flag(struct ib_mad_recv_buf *seg)
{
struct ib_rmpp_mad *rmpp_mad;
return ib_send_mad(mad_send_wr);
}
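+/*
+ * Terminate an outstanding RMPP send: mark it done and complete it to
+ * the client with IB_WC_REM_ABORT_ERR, passing the RMPP status back in
+ * vendor_err.
+ */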
+static void abort_send(struct ib_mad_agent_private *agent, __be64 tid,
+ u8 rmpp_status)
+{
+ struct ib_mad_send_wr_private *mad_send_wr;
+ struct ib_mad_send_wc wc;
+ unsigned long flags;
+
+ spin_lock_irqsave(&agent->lock, flags);
+ mad_send_wr = ib_find_send_mad(agent, tid);
+ if (!mad_send_wr)
+ goto out; /* Unmatched send */
+
+ if ((mad_send_wr->last_ack == mad_send_wr->total_seg) ||
+ (!mad_send_wr->timeout) || (mad_send_wr->status != IB_WC_SUCCESS))
+ goto out; /* Send is already done */
+
+ ib_mark_mad_done(mad_send_wr);
+ spin_unlock_irqrestore(&agent->lock, flags);
+
+ wc.status = IB_WC_REM_ABORT_ERR;
+ wc.vendor_err = rmpp_status;
+ wc.wr_id = mad_send_wr->wr_id;
+ ib_mad_complete_send_wr(mad_send_wr, &wc);
+ return;
+out:
+ spin_unlock_irqrestore(&agent->lock, flags);
+}
+
static void process_rmpp_ack(struct ib_mad_agent_private *agent,
struct ib_mad_recv_wc *mad_recv_wc)
{
int seg_num, newwin, ret;
rmpp_mad = (struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad;
- if (rmpp_mad->rmpp_hdr.rmpp_status)
+ if (rmpp_mad->rmpp_hdr.rmpp_status) {
+ abort_send(agent, rmpp_mad->mad_hdr.tid,
+ IB_MGMT_RMPP_STATUS_BAD_STATUS);
+ nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS);
return;
+ }
seg_num = be32_to_cpu(rmpp_mad->rmpp_hdr.seg_num);
newwin = be32_to_cpu(rmpp_mad->rmpp_hdr.paylen_newwin);
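+	/* The window must reach at least the segment being ACKed (W2S). */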
+ if (newwin < seg_num) {
+ abort_send(agent, rmpp_mad->mad_hdr.tid,
+ IB_MGMT_RMPP_STATUS_W2S);
+ nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_W2S);
+ return;
+ }
spin_lock_irqsave(&agent->lock, flags);
mad_send_wr = ib_find_send_mad(agent, rmpp_mad->mad_hdr.tid);
(!mad_send_wr->timeout) || (mad_send_wr->status != IB_WC_SUCCESS))
goto out; /* Send is already done */
- if (seg_num > mad_send_wr->total_seg)
- goto out; /* Bad ACK */
+ if (seg_num > mad_send_wr->total_seg || seg_num > mad_send_wr->newwin) {
+ spin_unlock_irqrestore(&agent->lock, flags);
+ abort_send(agent, rmpp_mad->mad_hdr.tid,
+ IB_MGMT_RMPP_STATUS_S2B);
+ nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_S2B);
+ return;
+ }
if (newwin < mad_send_wr->newwin || seg_num < mad_send_wr->last_ack)
goto out; /* Old ACK */
spin_unlock_irqrestore(&agent->lock, flags);
}
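+/*
+ * Process an RMPP data segment: validate the RMPP status and the FIRST
+ * flag against the segment number, then hand off to start_rmpp() for a
+ * new transfer or continue_rmpp() for one already in progress.
+ */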
+static struct ib_mad_recv_wc *
+process_rmpp_data(struct ib_mad_agent_private *agent,
+ struct ib_mad_recv_wc *mad_recv_wc)
+{
+ struct ib_rmpp_hdr *rmpp_hdr;
+ u8 rmpp_status;
+
+ rmpp_hdr = &((struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad)->rmpp_hdr;
+
+ if (rmpp_hdr->rmpp_status) {
+ rmpp_status = IB_MGMT_RMPP_STATUS_BAD_STATUS;
+ goto bad;
+ }
+
+ if (rmpp_hdr->seg_num == __constant_htonl(1)) {
+ if (!(ib_get_rmpp_flags(rmpp_hdr) & IB_MGMT_RMPP_FLAG_FIRST)) {
+ rmpp_status = IB_MGMT_RMPP_STATUS_BAD_SEG;
+ goto bad;
+ }
+ return start_rmpp(agent, mad_recv_wc);
+ } else {
+ if (ib_get_rmpp_flags(rmpp_hdr) & IB_MGMT_RMPP_FLAG_FIRST) {
+ rmpp_status = IB_MGMT_RMPP_STATUS_BAD_SEG;
+ goto bad;
+ }
+ return continue_rmpp(agent, mad_recv_wc);
+ }
+bad:
+ nack_recv(agent, mad_recv_wc, rmpp_status);
+ ib_free_recv_mad(mad_recv_wc);
+ return NULL;
+}
+
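+/*
+ * A STOP is only valid with status RESX, and an ABORT status must fall
+ * within the abort range; anything else aborts the matching send with
+ * BAD_STATUS and NAKs the sender.
+ */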
+static void process_rmpp_stop(struct ib_mad_agent_private *agent,
+ struct ib_mad_recv_wc *mad_recv_wc)
+{
+ struct ib_rmpp_mad *rmpp_mad;
+
+ rmpp_mad = (struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad;
+
+ if (rmpp_mad->rmpp_hdr.rmpp_status != IB_MGMT_RMPP_STATUS_RESX) {
+ abort_send(agent, rmpp_mad->mad_hdr.tid,
+ IB_MGMT_RMPP_STATUS_BAD_STATUS);
+ nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS);
+ } else
+ abort_send(agent, rmpp_mad->mad_hdr.tid,
+ rmpp_mad->rmpp_hdr.rmpp_status);
+}
+
+static void process_rmpp_abort(struct ib_mad_agent_private *agent,
+ struct ib_mad_recv_wc *mad_recv_wc)
+{
+ struct ib_rmpp_mad *rmpp_mad;
+
+ rmpp_mad = (struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad;
+
+ if (rmpp_mad->rmpp_hdr.rmpp_status < IB_MGMT_RMPP_STATUS_ABORT_MIN ||
+ rmpp_mad->rmpp_hdr.rmpp_status > IB_MGMT_RMPP_STATUS_ABORT_MAX) {
+ abort_send(agent, rmpp_mad->mad_hdr.tid,
+ IB_MGMT_RMPP_STATUS_BAD_STATUS);
+ nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS);
+ } else
+ abort_send(agent, rmpp_mad->mad_hdr.tid,
+ rmpp_mad->rmpp_hdr.rmpp_status);
+}
+
struct ib_mad_recv_wc *
ib_process_rmpp_recv_wc(struct ib_mad_agent_private *agent,
struct ib_mad_recv_wc *mad_recv_wc)
if (!(rmpp_mad->rmpp_hdr.rmpp_rtime_flags & IB_MGMT_RMPP_FLAG_ACTIVE))
return mad_recv_wc;
- if (rmpp_mad->rmpp_hdr.rmpp_version != IB_MGMT_RMPP_VERSION)
+ if (rmpp_mad->rmpp_hdr.rmpp_version != IB_MGMT_RMPP_VERSION) {
+ abort_send(agent, rmpp_mad->mad_hdr.tid,
+ IB_MGMT_RMPP_STATUS_UNV);
+ nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_UNV);
goto out;
+ }
switch (rmpp_mad->rmpp_hdr.rmpp_type) {
case IB_MGMT_RMPP_TYPE_DATA:
- if (rmpp_mad->rmpp_hdr.seg_num == __constant_htonl(1))
- return start_rmpp(agent, mad_recv_wc);
- else
- return continue_rmpp(agent, mad_recv_wc);
+ return process_rmpp_data(agent, mad_recv_wc);
case IB_MGMT_RMPP_TYPE_ACK:
process_rmpp_ack(agent, mad_recv_wc);
break;
case IB_MGMT_RMPP_TYPE_STOP:
+ process_rmpp_stop(agent, mad_recv_wc);
+ break;
case IB_MGMT_RMPP_TYPE_ABORT:
- /* TODO: process_rmpp_nack(agent, mad_recv_wc); */
+ process_rmpp_abort(agent, mad_recv_wc);
break;
default:
+ abort_send(agent, rmpp_mad->mad_hdr.tid,
+ IB_MGMT_RMPP_STATUS_BADT);
+ nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BADT);
break;
}
out:
if (rmpp_mad->rmpp_hdr.rmpp_type != IB_MGMT_RMPP_TYPE_DATA) {
msg = (struct ib_mad_send_buf *) (unsigned long)
mad_send_wc->wr_id;
- ib_free_send_mad(msg);
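+		/* ACKs borrow the receiver's address handle; STOP and ABORT
+		 * replies own theirs, so free_msg() must destroy it too. */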
+ if (rmpp_mad->rmpp_hdr.rmpp_type == IB_MGMT_RMPP_TYPE_ACK)
+ ib_free_send_mad(msg);
+ else
+ free_msg(msg);
return IB_RMPP_RESULT_INTERNAL; /* ACK, STOP, or ABORT */
}
/*
* Copyright (c) 2004 Topspin Corporation. All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* $Id: packer.c 1349 2004-12-16 21:09:43Z roland $
*/
-#include <ib_pack.h>
+#include <rdma/ib_pack.h>
static u64 value_read(int offset, int size, void *structure)
{
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
- * Copyright (c) 2005 Voltaire, Inc. All rights reserved.
+ * Copyright (c) 2005 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
#include <linux/kref.h>
#include <linux/idr.h>
-#include <ib_pack.h>
-#include <ib_sa.h>
+#include <rdma/ib_pack.h>
+#include <rdma/ib_sa.h>
MODULE_AUTHOR("Roland Dreier");
MODULE_DESCRIPTION("InfiniBand subnet administration query support");
/*
- * Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved.
- * Copyright (c) 2004 Infinicon Corporation. All rights reserved.
- * Copyright (c) 2004 Intel Corporation. All rights reserved.
- * Copyright (c) 2004 Topspin Corporation. All rights reserved.
- * Copyright (c) 2004 Voltaire Corporation. All rights reserved.
+ * Copyright (c) 2004, 2005 Mellanox Technologies Ltd. All rights reserved.
+ * Copyright (c) 2004, 2005 Infinicon Corporation. All rights reserved.
+ * Copyright (c) 2004, 2005 Intel Corporation. All rights reserved.
+ * Copyright (c) 2004, 2005 Topspin Corporation. All rights reserved.
+ * Copyright (c) 2004, 2005 Voltaire Corporation. All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* $Id: smi.c 1389 2004-12-27 22:56:47Z roland $
*/
-#include <ib_smi.h>
+#include <rdma/ib_smi.h>
#include "smi.h"
/*
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ * Copyright (c) 2005 Mellanox Technologies Ltd. All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
#include "core_priv.h"
-#include <ib_mad.h>
+#include <rdma/ib_mad.h>
struct ib_port {
struct kobject kobj;
return ret;
return sprintf(buf, "%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x\n",
- be16_to_cpu(((u16 *) gid.raw)[0]),
- be16_to_cpu(((u16 *) gid.raw)[1]),
- be16_to_cpu(((u16 *) gid.raw)[2]),
- be16_to_cpu(((u16 *) gid.raw)[3]),
- be16_to_cpu(((u16 *) gid.raw)[4]),
- be16_to_cpu(((u16 *) gid.raw)[5]),
- be16_to_cpu(((u16 *) gid.raw)[6]),
- be16_to_cpu(((u16 *) gid.raw)[7]));
+ be16_to_cpu(((__be16 *) gid.raw)[0]),
+ be16_to_cpu(((__be16 *) gid.raw)[1]),
+ be16_to_cpu(((__be16 *) gid.raw)[2]),
+ be16_to_cpu(((__be16 *) gid.raw)[3]),
+ be16_to_cpu(((__be16 *) gid.raw)[4]),
+ be16_to_cpu(((__be16 *) gid.raw)[5]),
+ be16_to_cpu(((__be16 *) gid.raw)[6]),
+ be16_to_cpu(((__be16 *) gid.raw)[7]));
}
static ssize_t show_port_pkey(struct ib_port *p, struct port_attribute *attr,
break;
case 16:
ret = sprintf(buf, "%u\n",
- be16_to_cpup((u16 *)(out_mad->data + 40 + offset / 8)));
+ be16_to_cpup((__be16 *)(out_mad->data + 40 + offset / 8)));
break;
case 32:
ret = sprintf(buf, "%u\n",
- be32_to_cpup((u32 *)(out_mad->data + 40 + offset / 8)));
+ be32_to_cpup((__be32 *)(out_mad->data + 40 + offset / 8)));
break;
default:
ret = 0;
return ret;
return sprintf(buf, "%04x:%04x:%04x:%04x\n",
- be16_to_cpu(((u16 *) &attr.sys_image_guid)[0]),
- be16_to_cpu(((u16 *) &attr.sys_image_guid)[1]),
- be16_to_cpu(((u16 *) &attr.sys_image_guid)[2]),
- be16_to_cpu(((u16 *) &attr.sys_image_guid)[3]));
+ be16_to_cpu(((__be16 *) &attr.sys_image_guid)[0]),
+ be16_to_cpu(((__be16 *) &attr.sys_image_guid)[1]),
+ be16_to_cpu(((__be16 *) &attr.sys_image_guid)[2]),
+ be16_to_cpu(((__be16 *) &attr.sys_image_guid)[3]));
}
static ssize_t show_node_guid(struct class_device *cdev, char *buf)
return ret;
return sprintf(buf, "%04x:%04x:%04x:%04x\n",
- be16_to_cpu(((u16 *) &attr.node_guid)[0]),
- be16_to_cpu(((u16 *) &attr.node_guid)[1]),
- be16_to_cpu(((u16 *) &attr.node_guid)[2]),
- be16_to_cpu(((u16 *) &attr.node_guid)[3]));
+ be16_to_cpu(((__be16 *) &attr.node_guid)[0]),
+ be16_to_cpu(((__be16 *) &attr.node_guid)[1]),
+ be16_to_cpu(((__be16 *) &attr.node_guid)[2]),
+ be16_to_cpu(((__be16 *) &attr.node_guid)[3]));
}
static CLASS_DEVICE_ATTR(node_type, S_IRUGO, show_node_type, NULL);
/*
* Copyright (c) 2005 Topspin Communications. All rights reserved.
+ * Copyright (c) 2005 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
static struct idr ctx_id_table;
static int ctx_id_rover = 0;
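+/*
+ * Look up a context by ID and take a reference, verifying that it
+ * belongs to the calling file so one process cannot use another's IDs.
+ */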
-static struct ib_ucm_context *ib_ucm_ctx_get(int id)
+static struct ib_ucm_context *ib_ucm_ctx_get(struct ib_ucm_file *file, int id)
{
struct ib_ucm_context *ctx;
down(&ctx_id_mutex);
ctx = idr_find(&ctx_id_table, id);
- if (ctx)
- ctx->ref++;
+ if (!ctx)
+ ctx = ERR_PTR(-ENOENT);
+ else if (ctx->file != file)
+ ctx = ERR_PTR(-EINVAL);
+ else
+ atomic_inc(&ctx->ref);
up(&ctx_id_mutex);
return ctx;
static void ib_ucm_ctx_put(struct ib_ucm_context *ctx)
{
+ if (atomic_dec_and_test(&ctx->ref))
+ wake_up(&ctx->wait);
+}
+
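+/*
+ * Remove the context's ID so no new references can be taken, wait for
+ * outstanding references to drop, then destroy the cm_id and free any
+ * events still queued for userspace.
+ */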
+static ssize_t ib_ucm_destroy_ctx(struct ib_ucm_file *file, int id)
+{
+ struct ib_ucm_context *ctx;
struct ib_ucm_event *uevent;
down(&ctx_id_mutex);
-
- ctx->ref--;
- if (!ctx->ref)
+ ctx = idr_find(&ctx_id_table, id);
+ if (!ctx)
+ ctx = ERR_PTR(-ENOENT);
+ else if (ctx->file != file)
+ ctx = ERR_PTR(-EINVAL);
+ else
idr_remove(&ctx_id_table, ctx->id);
-
up(&ctx_id_mutex);
- if (ctx->ref)
- return;
+ if (IS_ERR(ctx))
+ return PTR_ERR(ctx);
- down(&ctx->file->mutex);
+ atomic_dec(&ctx->ref);
+ wait_event(ctx->wait, !atomic_read(&ctx->ref));
+
+ /* No new events will be generated after destroying the cm_id. */
+ if (!IS_ERR(ctx->cm_id))
+ ib_destroy_cm_id(ctx->cm_id);
+ /* Cleanup events not yet reported to the user. */
+ down(&file->mutex);
list_del(&ctx->file_list);
while (!list_empty(&ctx->events)) {
kfree(uevent);
}
+ up(&file->mutex);
- up(&ctx->file->mutex);
-
- ucm_dbg("Destroyed CM ID <%d>\n", ctx->id);
-
- ib_destroy_cm_id(ctx->cm_id);
kfree(ctx);
+ return 0;
}
static struct ib_ucm_context *ib_ucm_ctx_alloc(struct ib_ucm_file *file)
if (!ctx)
return NULL;
- ctx->ref = 1; /* user reference */
+ atomic_set(&ctx->ref, 1);
+ init_waitqueue_head(&ctx->wait);
ctx->file = file;
INIT_LIST_HEAD(&ctx->events);
- init_MUTEX(&ctx->mutex);
list_add_tail(&ctx->file_list, &file->ctxs);
if (!kpath || !upath)
return;
- memcpy(upath->dgid, kpath->dgid.raw, sizeof(union ib_gid));
- memcpy(upath->sgid, kpath->sgid.raw, sizeof(union ib_gid));
+	memcpy(upath->dgid, kpath->dgid.raw, sizeof upath->dgid);
+	memcpy(upath->sgid, kpath->sgid.raw, sizeof upath->sgid);
upath->dlid = kpath->dlid;
upath->slid = kpath->slid;
kpath->packet_life_time_selector;
}
-static void ib_ucm_event_req_get(struct ib_ucm_req_event_resp *ureq,
+static void ib_ucm_event_req_get(struct ib_ucm_context *ctx,
+ struct ib_ucm_req_event_resp *ureq,
struct ib_cm_req_event_param *kreq)
{
- ureq->listen_id = (long)kreq->listen_id->context;
+ ureq->listen_id = ctx->id;
ureq->remote_ca_guid = kreq->remote_ca_guid;
ureq->remote_qkey = kreq->remote_qkey;
urep->srq = krep->srq;
}
-static void ib_ucm_event_rej_get(struct ib_ucm_rej_event_resp *urej,
- struct ib_cm_rej_event_param *krej)
-{
- urej->reason = krej->reason;
-}
-
-static void ib_ucm_event_mra_get(struct ib_ucm_mra_event_resp *umra,
- struct ib_cm_mra_event_param *kmra)
-{
- umra->timeout = kmra->service_timeout;
-}
-
-static void ib_ucm_event_lap_get(struct ib_ucm_lap_event_resp *ulap,
- struct ib_cm_lap_event_param *klap)
-{
- ib_ucm_event_path_get(&ulap->path, klap->alternate_path);
-}
-
-static void ib_ucm_event_apr_get(struct ib_ucm_apr_event_resp *uapr,
- struct ib_cm_apr_event_param *kapr)
-{
- uapr->status = kapr->ap_status;
-}
-
-static void ib_ucm_event_sidr_req_get(struct ib_ucm_sidr_req_event_resp *ureq,
+static void ib_ucm_event_sidr_req_get(struct ib_ucm_context *ctx,
+ struct ib_ucm_sidr_req_event_resp *ureq,
struct ib_cm_sidr_req_event_param *kreq)
{
- ureq->listen_id = (long)kreq->listen_id->context;
+ ureq->listen_id = ctx->id;
ureq->pkey = kreq->pkey;
}
urep->qpn = krep->qpn;
};
-static int ib_ucm_event_process(struct ib_cm_event *evt,
+static int ib_ucm_event_process(struct ib_ucm_context *ctx,
+ struct ib_cm_event *evt,
struct ib_ucm_event *uvt)
{
void *info = NULL;
- int result;
switch (evt->event) {
case IB_CM_REQ_RECEIVED:
- ib_ucm_event_req_get(&uvt->resp.u.req_resp,
+ ib_ucm_event_req_get(ctx, &uvt->resp.u.req_resp,
&evt->param.req_rcvd);
uvt->data_len = IB_CM_REQ_PRIVATE_DATA_SIZE;
- uvt->resp.present |= (evt->param.req_rcvd.primary_path ?
- IB_UCM_PRES_PRIMARY : 0);
+ uvt->resp.present = IB_UCM_PRES_PRIMARY;
uvt->resp.present |= (evt->param.req_rcvd.alternate_path ?
IB_UCM_PRES_ALTERNATE : 0);
break;
ib_ucm_event_rep_get(&uvt->resp.u.rep_resp,
&evt->param.rep_rcvd);
uvt->data_len = IB_CM_REP_PRIVATE_DATA_SIZE;
-
break;
case IB_CM_RTU_RECEIVED:
uvt->data_len = IB_CM_RTU_PRIVATE_DATA_SIZE;
uvt->resp.u.send_status = evt->param.send_status;
-
break;
case IB_CM_DREQ_RECEIVED:
uvt->data_len = IB_CM_DREQ_PRIVATE_DATA_SIZE;
uvt->resp.u.send_status = evt->param.send_status;
-
break;
case IB_CM_DREP_RECEIVED:
uvt->data_len = IB_CM_DREP_PRIVATE_DATA_SIZE;
uvt->resp.u.send_status = evt->param.send_status;
-
break;
case IB_CM_MRA_RECEIVED:
- ib_ucm_event_mra_get(&uvt->resp.u.mra_resp,
- &evt->param.mra_rcvd);
+ uvt->resp.u.mra_resp.timeout =
+ evt->param.mra_rcvd.service_timeout;
uvt->data_len = IB_CM_MRA_PRIVATE_DATA_SIZE;
-
break;
case IB_CM_REJ_RECEIVED:
- ib_ucm_event_rej_get(&uvt->resp.u.rej_resp,
- &evt->param.rej_rcvd);
+ uvt->resp.u.rej_resp.reason = evt->param.rej_rcvd.reason;
uvt->data_len = IB_CM_REJ_PRIVATE_DATA_SIZE;
uvt->info_len = evt->param.rej_rcvd.ari_length;
info = evt->param.rej_rcvd.ari;
-
break;
case IB_CM_LAP_RECEIVED:
- ib_ucm_event_lap_get(&uvt->resp.u.lap_resp,
- &evt->param.lap_rcvd);
+ ib_ucm_event_path_get(&uvt->resp.u.lap_resp.path,
+ evt->param.lap_rcvd.alternate_path);
uvt->data_len = IB_CM_LAP_PRIVATE_DATA_SIZE;
- uvt->resp.present |= (evt->param.lap_rcvd.alternate_path ?
- IB_UCM_PRES_ALTERNATE : 0);
+ uvt->resp.present = IB_UCM_PRES_ALTERNATE;
break;
case IB_CM_APR_RECEIVED:
- ib_ucm_event_apr_get(&uvt->resp.u.apr_resp,
- &evt->param.apr_rcvd);
+ uvt->resp.u.apr_resp.status = evt->param.apr_rcvd.ap_status;
uvt->data_len = IB_CM_APR_PRIVATE_DATA_SIZE;
uvt->info_len = evt->param.apr_rcvd.info_len;
info = evt->param.apr_rcvd.apr_info;
-
break;
case IB_CM_SIDR_REQ_RECEIVED:
- ib_ucm_event_sidr_req_get(&uvt->resp.u.sidr_req_resp,
+ ib_ucm_event_sidr_req_get(ctx, &uvt->resp.u.sidr_req_resp,
&evt->param.sidr_req_rcvd);
uvt->data_len = IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE;
-
break;
case IB_CM_SIDR_REP_RECEIVED:
ib_ucm_event_sidr_rep_get(&uvt->resp.u.sidr_rep_resp,
uvt->data_len = IB_CM_SIDR_REP_PRIVATE_DATA_SIZE;
uvt->info_len = evt->param.sidr_rep_rcvd.info_len;
info = evt->param.sidr_rep_rcvd.info;
-
break;
default:
uvt->resp.u.send_status = evt->param.send_status;
-
break;
}
- if (uvt->data_len && evt->private_data) {
-
+ if (uvt->data_len) {
uvt->data = kmalloc(uvt->data_len, GFP_KERNEL);
- if (!uvt->data) {
- result = -ENOMEM;
- goto error;
- }
+ if (!uvt->data)
+ goto err1;
memcpy(uvt->data, evt->private_data, uvt->data_len);
uvt->resp.present |= IB_UCM_PRES_DATA;
}
- if (uvt->info_len && info) {
-
+ if (uvt->info_len) {
uvt->info = kmalloc(uvt->info_len, GFP_KERNEL);
- if (!uvt->info) {
- result = -ENOMEM;
- goto error;
- }
+ if (!uvt->info)
+ goto err2;
memcpy(uvt->info, info, uvt->info_len);
uvt->resp.present |= IB_UCM_PRES_INFO;
}
-
return 0;
-error:
- kfree(uvt->info);
+
+err2:
kfree(uvt->data);
- return result;
+err1:
+ return -ENOMEM;
}
static int ib_ucm_event_handler(struct ib_cm_id *cm_id,
struct ib_ucm_context *ctx;
int result = 0;
int id;
- /*
- * lookup correct context based on event type.
- */
- switch (event->event) {
- case IB_CM_REQ_RECEIVED:
- id = (long)event->param.req_rcvd.listen_id->context;
- break;
- case IB_CM_SIDR_REQ_RECEIVED:
- id = (long)event->param.sidr_req_rcvd.listen_id->context;
- break;
- default:
- id = (long)cm_id->context;
- break;
- }
- ucm_dbg("Event. CM ID <%d> event <%d>\n", id, event->event);
-
- ctx = ib_ucm_ctx_get(id);
- if (!ctx)
- return -ENOENT;
+ ctx = cm_id->context;
if (event->event == IB_CM_REQ_RECEIVED ||
event->event == IB_CM_SIDR_REQ_RECEIVED)
id = IB_UCM_CM_ID_INVALID;
+ else
+ id = ctx->id;
uevent = kmalloc(sizeof(*uevent), GFP_KERNEL);
- if (!uevent) {
- result = -ENOMEM;
- goto done;
- }
+ if (!uevent)
+ goto err1;
memset(uevent, 0, sizeof(*uevent));
-
uevent->resp.id = id;
uevent->resp.event = event->event;
- result = ib_ucm_event_process(event, uevent);
+ result = ib_ucm_event_process(ctx, event, uevent);
if (result)
- goto done;
+ goto err2;
uevent->ctx = ctx;
- uevent->cm_id = ((event->event == IB_CM_REQ_RECEIVED ||
- event->event == IB_CM_SIDR_REQ_RECEIVED ) ?
- cm_id : NULL);
+ uevent->cm_id = (id == IB_UCM_CM_ID_INVALID) ? cm_id : NULL;
down(&ctx->file->mutex);
-
list_add_tail(&uevent->file_list, &ctx->file->events);
list_add_tail(&uevent->ctx_list, &ctx->events);
-
wake_up_interruptible(&ctx->file->poll_wait);
-
up(&ctx->file->mutex);
-done:
- ctx->error = result;
- ib_ucm_ctx_put(ctx); /* func reference */
- return result;
+ return 0;
+
+err2:
+ kfree(uevent);
+err1:
+	/* A nonzero return tells the ib_cm to destroy a new cm_id that
+	 * was never reported to userspace. */
+ return (id == IB_UCM_CM_ID_INVALID);
}
static ssize_t ib_ucm_event(struct ib_ucm_file *file,
goto done;
}
- ctx->cm_id = uevent->cm_id;
- ctx->cm_id->cm_handler = ib_ucm_event_handler;
- ctx->cm_id->context = (void *)(unsigned long)ctx->id;
+ ctx->cm_id = uevent->cm_id;
+ ctx->cm_id->context = ctx;
uevent->resp.id = ctx->id;
if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
return -EFAULT;
+ down(&file->mutex);
ctx = ib_ucm_ctx_alloc(file);
+ up(&file->mutex);
if (!ctx)
return -ENOMEM;
- ctx->cm_id = ib_create_cm_id(ib_ucm_event_handler,
- (void *)(unsigned long)ctx->id);
- if (!ctx->cm_id) {
- result = -ENOMEM;
- goto err_cm;
+ ctx->cm_id = ib_create_cm_id(ib_ucm_event_handler, ctx);
+ if (IS_ERR(ctx->cm_id)) {
+ result = PTR_ERR(ctx->cm_id);
+ goto err;
}
resp.id = ctx->id;
if (copy_to_user((void __user *)(unsigned long)cmd.response,
&resp, sizeof(resp))) {
result = -EFAULT;
- goto err_ret;
+ goto err;
}
return 0;
-err_ret:
- ib_destroy_cm_id(ctx->cm_id);
-err_cm:
- ib_ucm_ctx_put(ctx); /* user reference */
+err:
+ ib_ucm_destroy_ctx(file, ctx->id);
return result;
}
int in_len, int out_len)
{
struct ib_ucm_destroy_id cmd;
- struct ib_ucm_context *ctx;
if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
return -EFAULT;
- ctx = ib_ucm_ctx_get(cmd.id);
- if (!ctx)
- return -ENOENT;
-
- ib_ucm_ctx_put(ctx); /* user reference */
- ib_ucm_ctx_put(ctx); /* func reference */
-
- return 0;
+ return ib_ucm_destroy_ctx(file, cmd.id);
}
static ssize_t ib_ucm_attr_id(struct ib_ucm_file *file,
if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
return -EFAULT;
- ctx = ib_ucm_ctx_get(cmd.id);
- if (!ctx)
- return -ENOENT;
-
- down(&ctx->file->mutex);
- if (ctx->file != file) {
- result = -EINVAL;
- goto done;
- }
+ ctx = ib_ucm_ctx_get(file, cmd.id);
+ if (IS_ERR(ctx))
+ return PTR_ERR(ctx);
resp.service_id = ctx->cm_id->service_id;
resp.service_mask = ctx->cm_id->service_mask;
&resp, sizeof(resp)))
result = -EFAULT;
-done:
- up(&ctx->file->mutex);
- ib_ucm_ctx_put(ctx); /* func reference */
+ ib_ucm_ctx_put(ctx);
return result;
}
if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
return -EFAULT;
- ctx = ib_ucm_ctx_get(cmd.id);
- if (!ctx)
- return -ENOENT;
+ ctx = ib_ucm_ctx_get(file, cmd.id);
+ if (IS_ERR(ctx))
+ return PTR_ERR(ctx);
- down(&ctx->file->mutex);
- if (ctx->file != file)
- result = -EINVAL;
- else
- result = ib_cm_listen(ctx->cm_id, cmd.service_id,
- cmd.service_mask);
-
- up(&ctx->file->mutex);
- ib_ucm_ctx_put(ctx); /* func reference */
+ result = ib_cm_listen(ctx->cm_id, cmd.service_id, cmd.service_mask);
+ ib_ucm_ctx_put(ctx);
return result;
}
if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
return -EFAULT;
- ctx = ib_ucm_ctx_get(cmd.id);
- if (!ctx)
- return -ENOENT;
-
- down(&ctx->file->mutex);
- if (ctx->file != file)
- result = -EINVAL;
- else
- result = ib_cm_establish(ctx->cm_id);
+ ctx = ib_ucm_ctx_get(file, cmd.id);
+ if (IS_ERR(ctx))
+ return PTR_ERR(ctx);
- up(&ctx->file->mutex);
- ib_ucm_ctx_put(ctx); /* func reference */
+ result = ib_cm_establish(ctx->cm_id);
+ ib_ucm_ctx_put(ctx);
return result;
}
return -EFAULT;
}
- memcpy(sa_path->dgid.raw, ucm_path.dgid, sizeof(union ib_gid));
- memcpy(sa_path->sgid.raw, ucm_path.sgid, sizeof(union ib_gid));
+ memcpy(sa_path->dgid.raw, ucm_path.dgid, sizeof sa_path->dgid);
+ memcpy(sa_path->sgid.raw, ucm_path.sgid, sizeof sa_path->sgid);
sa_path->dlid = ucm_path.dlid;
sa_path->slid = ucm_path.slid;
param.max_cm_retries = cmd.max_cm_retries;
param.srq = cmd.srq;
- ctx = ib_ucm_ctx_get(cmd.id);
- if (!ctx) {
- result = -ENOENT;
- goto done;
- }
-
- down(&ctx->file->mutex);
- if (ctx->file != file)
- result = -EINVAL;
- else
+ ctx = ib_ucm_ctx_get(file, cmd.id);
+ if (!IS_ERR(ctx)) {
		result = ib_send_cm_req(ctx->cm_id, &param);
+ ib_ucm_ctx_put(ctx);
+ } else
+ result = PTR_ERR(ctx);
- up(&ctx->file->mutex);
- ib_ucm_ctx_put(ctx); /* func reference */
done:
kfree(param.private_data);
kfree(param.primary_path);
kfree(param.alternate_path);
-
return result;
}
param.rnr_retry_count = cmd.rnr_retry_count;
param.srq = cmd.srq;
- ctx = ib_ucm_ctx_get(cmd.id);
- if (!ctx) {
- result = -ENOENT;
- goto done;
- }
-
- down(&ctx->file->mutex);
- if (ctx->file != file)
- result = -EINVAL;
- else
+ ctx = ib_ucm_ctx_get(file, cmd.id);
+ if (!IS_ERR(ctx)) {
		result = ib_send_cm_rep(ctx->cm_id, &param);
+ ib_ucm_ctx_put(ctx);
+ } else
+ result = PTR_ERR(ctx);
- up(&ctx->file->mutex);
- ib_ucm_ctx_put(ctx); /* func reference */
-done:
kfree(param.private_data);
-
return result;
}
if (result)
return result;
- ctx = ib_ucm_ctx_get(cmd.id);
- if (!ctx) {
- result = -ENOENT;
- goto done;
- }
-
- down(&ctx->file->mutex);
- if (ctx->file != file)
- result = -EINVAL;
- else
+ ctx = ib_ucm_ctx_get(file, cmd.id);
+ if (!IS_ERR(ctx)) {
result = func(ctx->cm_id, private_data, cmd.len);
+ ib_ucm_ctx_put(ctx);
+ } else
+ result = PTR_ERR(ctx);
- up(&ctx->file->mutex);
- ib_ucm_ctx_put(ctx); /* func reference */
-done:
kfree(private_data);
-
return result;
}
if (result)
goto done;
- ctx = ib_ucm_ctx_get(cmd.id);
- if (!ctx) {
- result = -ENOENT;
- goto done;
- }
-
- down(&ctx->file->mutex);
- if (ctx->file != file)
- result = -EINVAL;
- else
- result = func(ctx->cm_id, cmd.status,
- info, cmd.info_len,
+ ctx = ib_ucm_ctx_get(file, cmd.id);
+ if (!IS_ERR(ctx)) {
+ result = func(ctx->cm_id, cmd.status, info, cmd.info_len,
data, cmd.data_len);
+ ib_ucm_ctx_put(ctx);
+ } else
+ result = PTR_ERR(ctx);
- up(&ctx->file->mutex);
- ib_ucm_ctx_put(ctx); /* func reference */
done:
kfree(data);
kfree(info);
-
return result;
}
if (result)
return result;
- ctx = ib_ucm_ctx_get(cmd.id);
- if (!ctx) {
- result = -ENOENT;
- goto done;
- }
+ ctx = ib_ucm_ctx_get(file, cmd.id);
+ if (!IS_ERR(ctx)) {
+ result = ib_send_cm_mra(ctx->cm_id, cmd.timeout, data, cmd.len);
+ ib_ucm_ctx_put(ctx);
+ } else
+ result = PTR_ERR(ctx);
- down(&ctx->file->mutex);
- if (ctx->file != file)
- result = -EINVAL;
- else
- result = ib_send_cm_mra(ctx->cm_id, cmd.timeout,
- data, cmd.len);
-
- up(&ctx->file->mutex);
- ib_ucm_ctx_put(ctx); /* func reference */
-done:
kfree(data);
-
return result;
}
if (result)
goto done;
- ctx = ib_ucm_ctx_get(cmd.id);
- if (!ctx) {
- result = -ENOENT;
- goto done;
- }
-
- down(&ctx->file->mutex);
- if (ctx->file != file)
- result = -EINVAL;
- else
+ ctx = ib_ucm_ctx_get(file, cmd.id);
+ if (!IS_ERR(ctx)) {
result = ib_send_cm_lap(ctx->cm_id, path, data, cmd.len);
+ ib_ucm_ctx_put(ctx);
+ } else
+ result = PTR_ERR(ctx);
- up(&ctx->file->mutex);
- ib_ucm_ctx_put(ctx); /* func reference */
done:
kfree(data);
kfree(path);
-
return result;
}
param.max_cm_retries = cmd.max_cm_retries;
param.pkey = cmd.pkey;
- ctx = ib_ucm_ctx_get(cmd.id);
- if (!ctx) {
- result = -ENOENT;
- goto done;
- }
-
- down(&ctx->file->mutex);
- if (ctx->file != file)
- result = -EINVAL;
- else
+ ctx = ib_ucm_ctx_get(file, cmd.id);
+ if (!IS_ERR(ctx)) {
		result = ib_send_cm_sidr_req(ctx->cm_id, &param);
+ ib_ucm_ctx_put(ctx);
+ } else
+ result = PTR_ERR(ctx);
- up(&ctx->file->mutex);
- ib_ucm_ctx_put(ctx); /* func reference */
done:
kfree(param.private_data);
kfree(param.path);
-
return result;
}
if (result)
goto done;
- param.qp_num = cmd.qpn;
- param.qkey = cmd.qkey;
- param.status = cmd.status;
- param.info_length = cmd.info_len;
- param.private_data_len = cmd.data_len;
-
- ctx = ib_ucm_ctx_get(cmd.id);
- if (!ctx) {
- result = -ENOENT;
- goto done;
- }
+ param.qp_num = cmd.qpn;
+ param.qkey = cmd.qkey;
+ param.status = cmd.status;
+ param.info_length = cmd.info_len;
+ param.private_data_len = cmd.data_len;
- down(&ctx->file->mutex);
- if (ctx->file != file)
- result = -EINVAL;
- else
+ ctx = ib_ucm_ctx_get(file, cmd.id);
+ if (!IS_ERR(ctx)) {
		result = ib_send_cm_sidr_rep(ctx->cm_id, &param);
+ ib_ucm_ctx_put(ctx);
+ } else
+ result = PTR_ERR(ctx);
- up(&ctx->file->mutex);
- ib_ucm_ctx_put(ctx); /* func reference */
done:
kfree(param.private_data);
kfree(param.info);
-
return result;
}
struct ib_ucm_context *ctx;
down(&file->mutex);
-
while (!list_empty(&file->ctxs)) {
ctx = list_entry(file->ctxs.next,
struct ib_ucm_context, file_list);
- up(&ctx->file->mutex);
- ib_ucm_ctx_put(ctx); /* user reference */
+ up(&file->mutex);
+ ib_ucm_destroy_ctx(file, ctx->id);
down(&file->mutex);
}
-
up(&file->mutex);
-
kfree(file);
-
- ucm_dbg("Deleted struct\n");
return 0;
}
#include <linux/cdev.h>
#include <linux/idr.h>
-#include <ib_cm.h>
-#include <ib_user_cm.h>
+#include <rdma/ib_cm.h>
+#include <rdma/ib_user_cm.h>
#define IB_UCM_CM_ID_INVALID 0xffffffff
struct ib_ucm_file {
struct semaphore mutex;
struct file *filp;
- /*
- * list of pending events
- */
+
struct list_head ctxs; /* list of active connections */
struct list_head events; /* list of pending events */
wait_queue_head_t poll_wait;
struct ib_ucm_context {
int id;
- int ref;
- int error;
+ wait_queue_head_t wait;
+ atomic_t ref;
struct ib_ucm_file *file;
struct ib_cm_id *cm_id;
- struct semaphore mutex;
struct list_head events; /* list of pending events. */
struct list_head file_list; /* member in file ctx list */
/*
* Copyright (c) 2004 Topspin Corporation. All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
#include <linux/errno.h>
-#include <ib_pack.h>
+#include <rdma/ib_pack.h>
#define STRUCT_FIELD(header, field) \
.struct_offset_bytes = offsetof(struct ib_unpacked_ ## header, field), \
struct ib_ud_header *header)
{
int header_len;
+ u16 packet_length;
memset(header, 0, sizeof *header);
header->lrh.link_version = 0;
header->lrh.link_next_header =
grh_present ? IB_LNH_IBA_GLOBAL : IB_LNH_IBA_LOCAL;
- header->lrh.packet_length = (IB_LRH_BYTES +
+ packet_length = (IB_LRH_BYTES +
IB_BTH_BYTES +
IB_DETH_BYTES +
payload_bytes +
header->grh_present = grh_present;
if (grh_present) {
- header->lrh.packet_length += IB_GRH_BYTES / 4;
-
+ packet_length += IB_GRH_BYTES / 4;
header->grh.ip_version = 6;
header->grh.payload_length =
cpu_to_be16((IB_BTH_BYTES +
header->grh.next_header = 0x1b;
}
- cpu_to_be16s(&header->lrh.packet_length);
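+	/* packet_length was built up in host order; do a single swap here. */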
+ header->lrh.packet_length = cpu_to_be16(packet_length);
if (header->immediate_present)
header->bth.opcode = IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE;
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
- * Copyright (c) 2005 Voltaire, Inc. All rights reserved.
+ * Copyright (c) 2005 Voltaire, Inc. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
#include <asm/uaccess.h>
#include <asm/semaphore.h>
-#include <ib_mad.h>
-#include <ib_user_mad.h>
+#include <rdma/ib_mad.h>
+#include <rdma/ib_user_mad.h>
MODULE_AUTHOR("Roland Dreier");
MODULE_DESCRIPTION("InfiniBand userspace MAD packet access");
struct ib_send_wr *bad_wr;
struct ib_rmpp_mad *rmpp_mad;
u8 method;
- u64 *tid;
+ __be64 *tid;
int ret, length, hdr_len, data_len, rmpp_hdr_size;
int rmpp_active = 0;
if (packet->mad.hdr.grh_present) {
ah_attr.ah_flags = IB_AH_GRH;
memcpy(ah_attr.grh.dgid.raw, packet->mad.hdr.gid, 16);
- ah_attr.grh.flow_label = packet->mad.hdr.flow_label;
+ ah_attr.grh.flow_label = be32_to_cpu(packet->mad.hdr.flow_label);
ah_attr.grh.hop_limit = packet->mad.hdr.hop_limit;
ah_attr.grh.traffic_class = packet->mad.hdr.traffic_class;
}
/*
* Copyright (c) 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
+ * Copyright (c) 2005 Mellanox Technologies. All rights reserved.
+ * Copyright (c) 2005 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
#include <linux/kref.h>
#include <linux/idr.h>
-#include <ib_verbs.h>
-#include <ib_user_verbs.h>
+#include <rdma/ib_verbs.h>
+#include <rdma/ib_user_verbs.h>
struct ib_uverbs_device {
int devnum;
extern struct idr ib_uverbs_ah_idr;
extern struct idr ib_uverbs_cq_idr;
extern struct idr ib_uverbs_qp_idr;
+extern struct idr ib_uverbs_srq_idr;
void ib_uverbs_comp_handler(struct ib_cq *cq, void *cq_context);
void ib_uverbs_cq_event_handler(struct ib_event *event, void *context_ptr);
void ib_uverbs_qp_event_handler(struct ib_event *event, void *context_ptr);
+void ib_uverbs_srq_event_handler(struct ib_event *event, void *context_ptr);
int ib_umem_get(struct ib_device *dev, struct ib_umem *mem,
void *addr, size_t size, int write);
IB_UVERBS_DECLARE_CMD(destroy_qp);
IB_UVERBS_DECLARE_CMD(attach_mcast);
IB_UVERBS_DECLARE_CMD(detach_mcast);
+IB_UVERBS_DECLARE_CMD(create_srq);
+IB_UVERBS_DECLARE_CMD(modify_srq);
+IB_UVERBS_DECLARE_CMD(destroy_srq);
#endif /* UVERBS_H */
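
(Aside: the IB_UVERBS_DECLARE_CMD() uses above line up with the handler
definitions later in this series, e.g. ib_uverbs_create_srq() below.
Assuming the macro simply stamps out that shared prototype, it expands
along these lines:

	#define IB_UVERBS_DECLARE_CMD(name)					\
		ssize_t ib_uverbs_##name(struct ib_uverbs_file *file,		\
					 const char __user *buf,		\
					 int in_len, int out_len)

so adding a command is a matter of declaring it here, defining the handler,
and wiring it into the dispatch table.)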
struct ib_uobject *uobj;
struct ib_pd *pd;
struct ib_cq *scq, *rcq;
+ struct ib_srq *srq;
struct ib_qp *qp;
struct ib_qp_init_attr attr;
int ret;
pd = idr_find(&ib_uverbs_pd_idr, cmd.pd_handle);
scq = idr_find(&ib_uverbs_cq_idr, cmd.send_cq_handle);
rcq = idr_find(&ib_uverbs_cq_idr, cmd.recv_cq_handle);
+ srq = cmd.is_srq ? idr_find(&ib_uverbs_srq_idr, cmd.srq_handle) : NULL;
if (!pd || pd->uobject->context != file->ucontext ||
!scq || scq->uobject->context != file->ucontext ||
- !rcq || rcq->uobject->context != file->ucontext) {
+ !rcq || rcq->uobject->context != file->ucontext ||
+ (cmd.is_srq && (!srq || srq->uobject->context != file->ucontext))) {
ret = -EINVAL;
goto err_up;
}
attr.qp_context = file;
attr.send_cq = scq;
attr.recv_cq = rcq;
- attr.srq = NULL;
+ attr.srq = srq;
attr.sq_sig_type = cmd.sq_sig_all ? IB_SIGNAL_ALL_WR : IB_SIGNAL_REQ_WR;
attr.qp_type = cmd.qp_type;
return ret ? ret : in_len;
}
+
+ssize_t ib_uverbs_create_srq(struct ib_uverbs_file *file,
+ const char __user *buf, int in_len,
+ int out_len)
+{
+ struct ib_uverbs_create_srq cmd;
+ struct ib_uverbs_create_srq_resp resp;
+ struct ib_udata udata;
+ struct ib_uobject *uobj;
+ struct ib_pd *pd;
+ struct ib_srq *srq;
+ struct ib_srq_init_attr attr;
+ int ret;
+
+ if (out_len < sizeof resp)
+ return -ENOSPC;
+
+ if (copy_from_user(&cmd, buf, sizeof cmd))
+ return -EFAULT;
+
+ INIT_UDATA(&udata, buf + sizeof cmd,
+ (unsigned long) cmd.response + sizeof resp,
+ in_len - sizeof cmd, out_len - sizeof resp);
+
+ uobj = kmalloc(sizeof *uobj, GFP_KERNEL);
+ if (!uobj)
+ return -ENOMEM;
+
+ down(&ib_uverbs_idr_mutex);
+
+ pd = idr_find(&ib_uverbs_pd_idr, cmd.pd_handle);
+
+ if (!pd || pd->uobject->context != file->ucontext) {
+ ret = -EINVAL;
+ goto err_up;
+ }
+
+ attr.event_handler = ib_uverbs_srq_event_handler;
+ attr.srq_context = file;
+ attr.attr.max_wr = cmd.max_wr;
+ attr.attr.max_sge = cmd.max_sge;
+ attr.attr.srq_limit = cmd.srq_limit;
+
+ uobj->user_handle = cmd.user_handle;
+ uobj->context = file->ucontext;
+
+ srq = pd->device->create_srq(pd, &attr, &udata);
+ if (IS_ERR(srq)) {
+ ret = PTR_ERR(srq);
+ goto err_up;
+ }
+
+ srq->device = pd->device;
+ srq->pd = pd;
+ srq->uobject = uobj;
+ srq->event_handler = attr.event_handler;
+ srq->srq_context = attr.srq_context;
+ atomic_inc(&pd->usecnt);
+ atomic_set(&srq->usecnt, 0);
+
+ memset(&resp, 0, sizeof resp);
+
+retry:
+ if (!idr_pre_get(&ib_uverbs_srq_idr, GFP_KERNEL)) {
+ ret = -ENOMEM;
+ goto err_destroy;
+ }
+
+ ret = idr_get_new(&ib_uverbs_srq_idr, srq, &uobj->id);
+
+ if (ret == -EAGAIN)
+ goto retry;
+ if (ret)
+ goto err_destroy;
+
+ resp.srq_handle = uobj->id;
+
+ spin_lock_irq(&file->ucontext->lock);
+ list_add_tail(&uobj->list, &file->ucontext->srq_list);
+ spin_unlock_irq(&file->ucontext->lock);
+
+ if (copy_to_user((void __user *) (unsigned long) cmd.response,
+ &resp, sizeof resp)) {
+ ret = -EFAULT;
+ goto err_list;
+ }
+
+ up(&ib_uverbs_idr_mutex);
+
+ return in_len;
+
+err_list:
+	/* Also undo the idr_get_new() above, or the stale handle leaks. */
+	idr_remove(&ib_uverbs_srq_idr, uobj->id);
+
+	spin_lock_irq(&file->ucontext->lock);
+	list_del(&uobj->list);
+	spin_unlock_irq(&file->ucontext->lock);
+
+err_destroy:
+ ib_destroy_srq(srq);
+
+err_up:
+ up(&ib_uverbs_idr_mutex);
+
+ kfree(uobj);
+ return ret;
+}
+
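
(Aside: the retry loop in ib_uverbs_create_srq() is the standard idiom for
the two-step idr API of this era; a minimal standalone sketch, assuming the
caller already holds whatever lock serializes access to the idr:

	static int example_idr_alloc(struct idr *idr, void *obj, int *id)
	{
		int ret;

		do {
			/* Preallocate idr layers; may sleep with GFP_KERNEL. */
			if (!idr_pre_get(idr, GFP_KERNEL))
				return -ENOMEM;
			/*
			 * -EAGAIN means the preallocation was not enough;
			 * top it up and try again.
			 */
			ret = idr_get_new(idr, obj, id);
		} while (ret == -EAGAIN);

		return ret;
	}

The handler above open-codes the same loop to fit its goto-based error
unwinding.)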
+ssize_t ib_uverbs_modify_srq(struct ib_uverbs_file *file,
+ const char __user *buf, int in_len,
+ int out_len)
+{
+ struct ib_uverbs_modify_srq cmd;
+ struct ib_srq *srq;
+ struct ib_srq_attr attr;
+ int ret;
+
+ if (copy_from_user(&cmd, buf, sizeof cmd))
+ return -EFAULT;
+
+ down(&ib_uverbs_idr_mutex);
+
+ srq = idr_find(&ib_uverbs_srq_idr, cmd.srq_handle);
+ if (!srq || srq->uobject->context != file->ucontext) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ attr.max_wr = cmd.max_wr;
+ attr.max_sge = cmd.max_sge;
+ attr.srq_limit = cmd.srq_limit;
+
+ ret = ib_modify_srq(srq, &attr, cmd.attr_mask);
+
+out:
+ up(&ib_uverbs_idr_mutex);
+
+ return ret ? ret : in_len;
+}
+
+ssize_t ib_uverbs_destroy_srq(struct ib_uverbs_file *file,
+ const char __user *buf, int in_len,
+ int out_len)
+{
+ struct ib_uverbs_destroy_srq cmd;
+ struct ib_srq *srq;
+ struct ib_uobject *uobj;
+ int ret = -EINVAL;
+
+ if (copy_from_user(&cmd, buf, sizeof cmd))
+ return -EFAULT;
+
+ down(&ib_uverbs_idr_mutex);
+
+ srq = idr_find(&ib_uverbs_srq_idr, cmd.srq_handle);
+ if (!srq || srq->uobject->context != file->ucontext)
+ goto out;
+
+ uobj = srq->uobject;
+
+ ret = ib_destroy_srq(srq);
+ if (ret)
+ goto out;
+
+ idr_remove(&ib_uverbs_srq_idr, cmd.srq_handle);
+
+ spin_lock_irq(&file->ucontext->lock);
+ list_del(&uobj->list);
+ spin_unlock_irq(&file->ucontext->lock);
+
+ kfree(uobj);
+
+out:
+ up(&ib_uverbs_idr_mutex);
+
+ return ret ? ret : in_len;
+}
/*
* Copyright (c) 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
+ * Copyright (c) 2005 Mellanox Technologies. All rights reserved.
+ * Copyright (c) 2005 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
DEFINE_IDR(ib_uverbs_ah_idr);
DEFINE_IDR(ib_uverbs_cq_idr);
DEFINE_IDR(ib_uverbs_qp_idr);
+DEFINE_IDR(ib_uverbs_srq_idr);
static spinlock_t map_lock;
static DECLARE_BITMAP(dev_map, IB_UVERBS_MAX_DEVICES);
[IB_USER_VERBS_CMD_DESTROY_QP] = ib_uverbs_destroy_qp,
[IB_USER_VERBS_CMD_ATTACH_MCAST] = ib_uverbs_attach_mcast,
[IB_USER_VERBS_CMD_DETACH_MCAST] = ib_uverbs_detach_mcast,
+ [IB_USER_VERBS_CMD_CREATE_SRQ] = ib_uverbs_create_srq,
+ [IB_USER_VERBS_CMD_MODIFY_SRQ] = ib_uverbs_modify_srq,
+ [IB_USER_VERBS_CMD_DESTROY_SRQ] = ib_uverbs_destroy_srq,
};
static struct vfsmount *uverbs_event_mnt;
kfree(uobj);
}
- /* XXX Free SRQs */
+ list_for_each_entry_safe(uobj, tmp, &context->srq_list, list) {
+ struct ib_srq *srq = idr_find(&ib_uverbs_srq_idr, uobj->id);
+ idr_remove(&ib_uverbs_srq_idr, uobj->id);
+ ib_destroy_srq(srq);
+ list_del(&uobj->list);
+ kfree(uobj);
+ }
+
/* XXX Free MWs */
list_for_each_entry_safe(uobj, tmp, &context->mr_list, list) {
struct ib_mr *mr = idr_find(&ib_uverbs_mr_idr, uobj->id);
+ struct ib_device *mrdev = mr->device;
struct ib_umem_object *memobj;
idr_remove(&ib_uverbs_mr_idr, uobj->id);
ib_dereg_mr(mr);
memobj = container_of(uobj, struct ib_umem_object, uobject);
- ib_umem_release_on_close(mr->device, &memobj->umem);
+ ib_umem_release_on_close(mrdev, &memobj->umem);
list_del(&uobj->list);
kfree(memobj);
event->event);
}
+void ib_uverbs_srq_event_handler(struct ib_event *event, void *context_ptr)
+{
+ ib_uverbs_async_handler(context_ptr,
+ event->element.srq->uobject->user_handle,
+ event->event);
+}
+
static void ib_uverbs_event_handler(struct ib_event_handler *handler,
struct ib_event *event)
{
/*
* Copyright (c) 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
+ * Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* Copyright (c) 2004 Intel Corporation. All rights reserved.
* Copyright (c) 2004 Topspin Corporation. All rights reserved.
* Copyright (c) 2004 Voltaire Corporation. All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
*
* This software is available to you under a choice of one of two
#include <linux/errno.h>
#include <linux/err.h>
-#include <ib_verbs.h>
-#include <ib_cache.h>
+#include <rdma/ib_verbs.h>
+#include <rdma/ib_cache.h>
/* Protection domains */
}
EXPORT_SYMBOL(ib_destroy_ah);
+/* Shared receive queues */
+
+struct ib_srq *ib_create_srq(struct ib_pd *pd,
+ struct ib_srq_init_attr *srq_init_attr)
+{
+ struct ib_srq *srq;
+
+ if (!pd->device->create_srq)
+ return ERR_PTR(-ENOSYS);
+
+ srq = pd->device->create_srq(pd, srq_init_attr, NULL);
+
+ if (!IS_ERR(srq)) {
+ srq->device = pd->device;
+ srq->pd = pd;
+ srq->uobject = NULL;
+ srq->event_handler = srq_init_attr->event_handler;
+ srq->srq_context = srq_init_attr->srq_context;
+ atomic_inc(&pd->usecnt);
+ atomic_set(&srq->usecnt, 0);
+ }
+
+ return srq;
+}
+EXPORT_SYMBOL(ib_create_srq);
+
+int ib_modify_srq(struct ib_srq *srq,
+ struct ib_srq_attr *srq_attr,
+ enum ib_srq_attr_mask srq_attr_mask)
+{
+ return srq->device->modify_srq(srq, srq_attr, srq_attr_mask);
+}
+EXPORT_SYMBOL(ib_modify_srq);
+
+int ib_query_srq(struct ib_srq *srq,
+ struct ib_srq_attr *srq_attr)
+{
+ return srq->device->query_srq ?
+ srq->device->query_srq(srq, srq_attr) : -ENOSYS;
+}
+EXPORT_SYMBOL(ib_query_srq);
+
+int ib_destroy_srq(struct ib_srq *srq)
+{
+ struct ib_pd *pd;
+ int ret;
+
+ if (atomic_read(&srq->usecnt))
+ return -EBUSY;
+
+ pd = srq->pd;
+
+ ret = srq->device->destroy_srq(srq);
+ if (!ret)
+ atomic_dec(&pd->usecnt);
+
+ return ret;
+}
+EXPORT_SYMBOL(ib_destroy_srq);
+
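(Aside: a minimal sketch of how an in-kernel consumer might drive the new
SRQ verbs. The callback and queue sizes are illustrative, and IB_SRQ_LIMIT
is assumed to be the attr_mask bit that selects srq_limit:

	static void example_srq_event(struct ib_event *event, void *ctx)
	{
		/* e.g. replenish receive buffers on the limit event */
	}

	static struct ib_srq *example_setup_srq(struct ib_pd *pd)
	{
		struct ib_srq_init_attr init = {
			.event_handler	= example_srq_event,
			.srq_context	= NULL,
			.attr = {
				.max_wr	   = 128,	/* shared receive depth */
				.max_sge   = 1,
				.srq_limit = 0,		/* limit event disarmed */
			},
		};
		struct ib_srq_attr attr = { .srq_limit = 16 };
		struct ib_srq *srq;

		srq = ib_create_srq(pd, &init);
		if (IS_ERR(srq))
			return srq;

		/* Arm the low-watermark event at 16 outstanding WRs. */
		if (ib_modify_srq(srq, &attr, IB_SRQ_LIMIT)) {
			ib_destroy_srq(srq);
			return ERR_PTR(-EINVAL);
		}

		return srq;
	}

Note that ib_destroy_srq() fails with -EBUSY while any QP still references
the SRQ, per the usecnt check above.)
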
/* Queue pairs */
struct ib_qp *ib_create_qp(struct ib_pd *pd,
-EXTRA_CFLAGS += -Idrivers/infiniband/include
-
ifdef CONFIG_INFINIBAND_MTHCA_DEBUG
EXTRA_CFLAGS += -DDEBUG
endif
ib_mthca-y := mthca_main.o mthca_cmd.o mthca_profile.o mthca_reset.o \
mthca_allocator.o mthca_eq.o mthca_pd.o mthca_cq.o \
mthca_mr.o mthca_qp.o mthca_av.o mthca_mcg.o mthca_mad.o \
- mthca_provider.o mthca_memfree.o mthca_uar.o
+ mthca_provider.o mthca_memfree.o mthca_uar.o mthca_srq.o
kfree(array->page_list);
}
+
+/*
+ * Handling for queue buffers -- we allocate a bunch of memory and
+ * register it in a memory region at HCA virtual address 0. If the
+ * requested size is > max_direct, we split the allocation into
+ * multiple pages, so we don't require too much contiguous memory.
+ */
+
+int mthca_buf_alloc(struct mthca_dev *dev, int size, int max_direct,
+ union mthca_buf *buf, int *is_direct, struct mthca_pd *pd,
+ int hca_write, struct mthca_mr *mr)
+{
+ int err = -ENOMEM;
+ int npages, shift;
+ u64 *dma_list = NULL;
+ dma_addr_t t;
+ int i;
+
+ if (size <= max_direct) {
+ *is_direct = 1;
+ npages = 1;
+ shift = get_order(size) + PAGE_SHIFT;
+
+ buf->direct.buf = dma_alloc_coherent(&dev->pdev->dev,
+ size, &t, GFP_KERNEL);
+ if (!buf->direct.buf)
+ return -ENOMEM;
+
+ pci_unmap_addr_set(&buf->direct, mapping, t);
+
+ memset(buf->direct.buf, 0, size);
+
+ while (t & ((1 << shift) - 1)) {
+ --shift;
+ npages *= 2;
+ }
+
+ dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL);
+ if (!dma_list)
+ goto err_free;
+
+ for (i = 0; i < npages; ++i)
+ dma_list[i] = t + i * (1 << shift);
+ } else {
+ *is_direct = 0;
+ npages = (size + PAGE_SIZE - 1) / PAGE_SIZE;
+ shift = PAGE_SHIFT;
+
+ dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL);
+ if (!dma_list)
+ return -ENOMEM;
+
+ buf->page_list = kmalloc(npages * sizeof *buf->page_list,
+ GFP_KERNEL);
+ if (!buf->page_list)
+ goto err_out;
+
+ for (i = 0; i < npages; ++i)
+ buf->page_list[i].buf = NULL;
+
+ for (i = 0; i < npages; ++i) {
+ buf->page_list[i].buf =
+ dma_alloc_coherent(&dev->pdev->dev, PAGE_SIZE,
+ &t, GFP_KERNEL);
+ if (!buf->page_list[i].buf)
+ goto err_free;
+
+ dma_list[i] = t;
+ pci_unmap_addr_set(&buf->page_list[i], mapping, t);
+
+ memset(buf->page_list[i].buf, 0, PAGE_SIZE);
+ }
+ }
+
+ err = mthca_mr_alloc_phys(dev, pd->pd_num,
+ dma_list, shift, npages,
+ 0, size,
+ MTHCA_MPT_FLAG_LOCAL_READ |
+ (hca_write ? MTHCA_MPT_FLAG_LOCAL_WRITE : 0),
+ mr);
+ if (err)
+ goto err_free;
+
+ kfree(dma_list);
+
+ return 0;
+
+err_free:
+ mthca_buf_free(dev, size, buf, *is_direct, NULL);
+
+err_out:
+ kfree(dma_list);
+
+ return err;
+}
+
+void mthca_buf_free(struct mthca_dev *dev, int size, union mthca_buf *buf,
+ int is_direct, struct mthca_mr *mr)
+{
+ int i;
+
+ if (mr)
+ mthca_free_mr(dev, mr);
+
+ if (is_direct)
+ dma_free_coherent(&dev->pdev->dev, size, buf->direct.buf,
+ pci_unmap_addr(&buf->direct, mapping));
+	else {
+		/*
+		 * mthca_buf_alloc()'s error path can hand us a partially
+		 * populated page list, so skip entries never allocated.
+		 */
+		for (i = 0; i < (size + PAGE_SIZE - 1) / PAGE_SIZE; ++i)
+			if (buf->page_list[i].buf)
+				dma_free_coherent(&dev->pdev->dev, PAGE_SIZE,
+						  buf->page_list[i].buf,
+						  pci_unmap_addr(&buf->page_list[i],
+								 mapping));
+		kfree(buf->page_list);
+	}
+}
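
(Aside: a sketch of the calling convention for the two helpers above; the
wrapper function and the choice of max_direct are illustrative:

	static int example_alloc_queue(struct mthca_dev *dev, struct mthca_pd *pd,
				       int size, struct mthca_mr *mr)
	{
		union mthca_buf buf;
		int is_direct;
		int err;

		/* One contiguous buffer if it fits in a page, else a page list. */
		err = mthca_buf_alloc(dev, size, PAGE_SIZE, &buf, &is_direct,
				      pd, 1 /* HCA may write */, mr);
		if (err)
			return err;

		/* ... hand the buffer (registered via *mr) to the HCA ... */

		mthca_buf_free(dev, size, &buf, is_direct, mr);
		return 0;
	}

Either way the memory is registered at HCA virtual address 0, so the
hardware sees one contiguous queue regardless of how it was allocated.)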
#include <linux/init.h>
-#include <ib_verbs.h>
-#include <ib_cache.h>
+#include <rdma/ib_verbs.h>
+#include <rdma/ib_cache.h>
#include "mthca_dev.h"
struct mthca_av {
- u32 port_pd;
- u8 reserved1;
- u8 g_slid;
- u16 dlid;
- u8 reserved2;
- u8 gid_index;
- u8 msg_sr;
- u8 hop_limit;
- u32 sl_tclass_flowlabel;
- u32 dgid[4];
+ __be32 port_pd;
+ u8 reserved1;
+ u8 g_slid;
+ __be16 dlid;
+ u8 reserved2;
+ u8 gid_index;
+ u8 msg_sr;
+ u8 hop_limit;
+ __be32 sl_tclass_flowlabel;
+ __be32 dgid[4];
};
int mthca_create_ah(struct mthca_dev *dev,
av, (unsigned long) ah->avdma);
for (j = 0; j < 8; ++j)
printk(KERN_DEBUG " [%2x] %08x\n",
- j * 4, be32_to_cpu(((u32 *) av)[j]));
+ j * 4, be32_to_cpu(((__be32 *) av)[j]));
}
if (ah->type == MTHCA_AH_ON_HCA) {
header->lrh.service_level = be32_to_cpu(ah->av->sl_tclass_flowlabel) >> 28;
header->lrh.destination_lid = ah->av->dlid;
- header->lrh.source_lid = ah->av->g_slid & 0x7f;
+ header->lrh.source_lid = cpu_to_be16(ah->av->g_slid & 0x7f);
if (ah->av->g_slid & 0x80) {
header->grh_present = 1;
header->grh.traffic_class =
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ * Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
#include <linux/pci.h>
#include <linux/errno.h>
#include <asm/io.h>
-#include <ib_mad.h>
+#include <rdma/ib_mad.h>
#include "mthca_dev.h"
#include "mthca_config_reg.h"
CMD_SW2HW_SRQ = 0x35,
CMD_HW2SW_SRQ = 0x36,
CMD_QUERY_SRQ = 0x37,
+ CMD_ARM_SRQ = 0x40,
/* QP/EE commands */
CMD_RST2INIT_QPEE = 0x19,
* (and some architectures such as ia64 implement memcpy_toio
* in terms of writeb).
*/
- __raw_writel(cpu_to_be32(in_param >> 32), dev->hcr + 0 * 4);
- __raw_writel(cpu_to_be32(in_param & 0xfffffffful), dev->hcr + 1 * 4);
- __raw_writel(cpu_to_be32(in_modifier), dev->hcr + 2 * 4);
- __raw_writel(cpu_to_be32(out_param >> 32), dev->hcr + 3 * 4);
- __raw_writel(cpu_to_be32(out_param & 0xfffffffful), dev->hcr + 4 * 4);
- __raw_writel(cpu_to_be32(token << 16), dev->hcr + 5 * 4);
+ __raw_writel((__force u32) cpu_to_be32(in_param >> 32), dev->hcr + 0 * 4);
+ __raw_writel((__force u32) cpu_to_be32(in_param & 0xfffffffful), dev->hcr + 1 * 4);
+ __raw_writel((__force u32) cpu_to_be32(in_modifier), dev->hcr + 2 * 4);
+ __raw_writel((__force u32) cpu_to_be32(out_param >> 32), dev->hcr + 3 * 4);
+ __raw_writel((__force u32) cpu_to_be32(out_param & 0xfffffffful), dev->hcr + 4 * 4);
+ __raw_writel((__force u32) cpu_to_be32(token << 16), dev->hcr + 5 * 4);
/* __raw_writel may not order writes. */
wmb();
- __raw_writel(cpu_to_be32((1 << HCR_GO_BIT) |
- (event ? (1 << HCA_E_BIT) : 0) |
- (op_modifier << HCR_OPMOD_SHIFT) |
- op), dev->hcr + 6 * 4);
+ __raw_writel((__force u32) cpu_to_be32((1 << HCR_GO_BIT) |
+ (event ? (1 << HCA_E_BIT) : 0) |
+ (op_modifier << HCR_OPMOD_SHIFT) |
+ op), dev->hcr + 6 * 4);
out:
up(&dev->cmd.hcr_sem);
goto out;
}
- if (out_is_imm) {
- memcpy_fromio(out_param, dev->hcr + HCR_OUT_PARAM_OFFSET, sizeof (u64));
- be64_to_cpus(out_param);
- }
+ if (out_is_imm)
+ *out_param =
+ (u64) be32_to_cpu((__force __be32)
+ __raw_readl(dev->hcr + HCR_OUT_PARAM_OFFSET)) << 32 |
+ (u64) be32_to_cpu((__force __be32)
+ __raw_readl(dev->hcr + HCR_OUT_PARAM_OFFSET + 4));
- *status = be32_to_cpu(__raw_readl(dev->hcr + HCR_STATUS_OFFSET)) >> 24;
+ *status = be32_to_cpu((__force __be32) __raw_readl(dev->hcr + HCR_STATUS_OFFSET)) >> 24;
out:
up(&dev->cmd.poll_sem);
mthca_dbg(dev, "Max QPs: %d, reserved QPs: %d, entry size: %d\n",
dev_lim->max_qps, dev_lim->reserved_qps, dev_lim->qpc_entry_sz);
+ mthca_dbg(dev, "Max SRQs: %d, reserved SRQs: %d, entry size: %d\n",
+ dev_lim->max_srqs, dev_lim->reserved_srqs, dev_lim->srq_entry_sz);
mthca_dbg(dev, "Max CQs: %d, reserved CQs: %d, entry size: %d\n",
dev_lim->max_cqs, dev_lim->reserved_cqs, dev_lim->cqc_entry_sz);
mthca_dbg(dev, "Max EQs: %d, reserved EQs: %d, entry size: %d\n",
return err;
}
+static void get_board_id(void *vsd, char *board_id)
+{
+ int i;
+
+#define VSD_OFFSET_SIG1 0x00
+#define VSD_OFFSET_SIG2 0xde
+#define VSD_OFFSET_MLX_BOARD_ID 0xd0
+#define VSD_OFFSET_TS_BOARD_ID 0x20
+
+#define VSD_SIGNATURE_TOPSPIN 0x5ad
+
+ memset(board_id, 0, MTHCA_BOARD_ID_LEN);
+
+ if (be16_to_cpup(vsd + VSD_OFFSET_SIG1) == VSD_SIGNATURE_TOPSPIN &&
+ be16_to_cpup(vsd + VSD_OFFSET_SIG2) == VSD_SIGNATURE_TOPSPIN) {
+ strlcpy(board_id, vsd + VSD_OFFSET_TS_BOARD_ID, MTHCA_BOARD_ID_LEN);
+ } else {
+ /*
+ * The board ID is a string but the firmware byte
+ * swaps each 4-byte word before passing it back to
+ * us. Therefore we need to swab it before printing.
+ */
+ for (i = 0; i < 4; ++i)
+ ((u32 *) board_id)[i] =
+ swab32(*(u32 *) (vsd + VSD_OFFSET_MLX_BOARD_ID + i * 4));
+ }
+}
+
int mthca_QUERY_ADAPTER(struct mthca_dev *dev,
struct mthca_adapter *adapter, u8 *status)
{
#define QUERY_ADAPTER_DEVICE_ID_OFFSET 0x04
#define QUERY_ADAPTER_REVISION_ID_OFFSET 0x08
#define QUERY_ADAPTER_INTA_PIN_OFFSET 0x10
+#define QUERY_ADAPTER_VSD_OFFSET 0x20
mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL);
if (IS_ERR(mailbox))
MTHCA_GET(adapter->revision_id, outbox, QUERY_ADAPTER_REVISION_ID_OFFSET);
MTHCA_GET(adapter->inta_pin, outbox, QUERY_ADAPTER_INTA_PIN_OFFSET);
+ get_board_id(outbox + QUERY_ADAPTER_VSD_OFFSET / 4,
+ adapter->board_id);
+
out:
mthca_free_mailbox(dev, mailbox);
return err;
u8 *status)
{
struct mthca_mailbox *mailbox;
- u32 *inbox;
+ __be32 *inbox;
int err;
#define INIT_HCA_IN_SIZE 0x200
#define INIT_IB_FLAG_SIG (1 << 18)
#define INIT_IB_FLAG_NG (1 << 17)
#define INIT_IB_FLAG_G0 (1 << 16)
-#define INIT_IB_FLAG_1X (1 << 8)
-#define INIT_IB_FLAG_4X (1 << 9)
-#define INIT_IB_FLAG_12X (1 << 11)
#define INIT_IB_VL_SHIFT 4
+#define INIT_IB_PORT_WIDTH_SHIFT 8
#define INIT_IB_MTU_SHIFT 12
#define INIT_IB_MAX_GID_OFFSET 0x06
#define INIT_IB_MAX_PKEY_OFFSET 0x0a
memset(inbox, 0, INIT_IB_IN_SIZE);
flags = 0;
- flags |= param->enable_1x ? INIT_IB_FLAG_1X : 0;
- flags |= param->enable_4x ? INIT_IB_FLAG_4X : 0;
flags |= param->set_guid0 ? INIT_IB_FLAG_G0 : 0;
flags |= param->set_node_guid ? INIT_IB_FLAG_NG : 0;
flags |= param->set_si_guid ? INIT_IB_FLAG_SIG : 0;
flags |= param->vl_cap << INIT_IB_VL_SHIFT;
+ flags |= param->port_width << INIT_IB_PORT_WIDTH_SHIFT;
flags |= param->mtu_cap << INIT_IB_MTU_SHIFT;
MTHCA_PUT(inbox, flags, INIT_IB_FLAGS_OFFSET);
int mthca_MAP_ICM_page(struct mthca_dev *dev, u64 dma_addr, u64 virt, u8 *status)
{
struct mthca_mailbox *mailbox;
- u64 *inbox;
+ __be64 *inbox;
int err;
mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL);
CMD_TIME_CLASS_A, status);
}
+int mthca_SW2HW_SRQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
+ int srq_num, u8 *status)
+{
+ return mthca_cmd(dev, mailbox->dma, srq_num, 0, CMD_SW2HW_SRQ,
+ CMD_TIME_CLASS_A, status);
+}
+
+int mthca_HW2SW_SRQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
+ int srq_num, u8 *status)
+{
+ return mthca_cmd_box(dev, 0, mailbox->dma, srq_num, 0,
+ CMD_HW2SW_SRQ,
+ CMD_TIME_CLASS_A, status);
+}
+
+int mthca_ARM_SRQ(struct mthca_dev *dev, int srq_num, int limit, u8 *status)
+{
+ return mthca_cmd(dev, limit, srq_num, 0, CMD_ARM_SRQ,
+ CMD_TIME_CLASS_B, status);
+}
+
int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num,
int is_ee, struct mthca_mailbox *mailbox, u32 optmask,
u8 *status)
if (i % 8 == 0)
printk(" [%02x] ", i * 4);
printk(" %08x",
- be32_to_cpu(((u32 *) mailbox->buf)[i + 2]));
+ be32_to_cpu(((__be32 *) mailbox->buf)[i + 2]));
if ((i + 1) % 8 == 0)
printk("\n");
}
if (i % 8 == 0)
printk("[%02x] ", i * 4);
printk(" %08x",
- be32_to_cpu(((u32 *) mailbox->buf)[i + 2]));
+ be32_to_cpu(((__be32 *) mailbox->buf)[i + 2]));
if ((i + 1) % 8 == 0)
printk("\n");
}
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
+ * Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
#ifndef MTHCA_CMD_H
#define MTHCA_CMD_H
-#include <ib_verbs.h>
+#include <rdma/ib_verbs.h>
#define MTHCA_MAILBOX_SIZE 4096
};
struct mthca_adapter {
- u32 vendor_id;
- u32 device_id;
- u32 revision_id;
- u8 inta_pin;
+ u32 vendor_id;
+ u32 device_id;
+ u32 revision_id;
+ char board_id[MTHCA_BOARD_ID_LEN];
+ u8 inta_pin;
};
struct mthca_init_hca_param {
};
struct mthca_init_ib_param {
- int enable_1x;
- int enable_4x;
+ int port_width;
int vl_cap;
int mtu_cap;
u16 gid_cap;
int cq_num, u8 *status);
int mthca_HW2SW_CQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
int cq_num, u8 *status);
+int mthca_SW2HW_SRQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
+ int srq_num, u8 *status);
+int mthca_HW2SW_SRQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
+ int srq_num, u8 *status);
+int mthca_ARM_SRQ(struct mthca_dev *dev, int srq_num, int limit, u8 *status);
int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num,
int is_ee, struct mthca_mailbox *mailbox, u32 optmask,
u8 *status);
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
+ * Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2005 Cisco Systems, Inc. All rights reserved.
+ * Copyright (c) 2005 Mellanox Technologies. All rights reserved.
+ * Copyright (c) 2004 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
#include <linux/init.h>
#include <linux/hardirq.h>